Have you ever stopped to think about the limitations of technology? Are there any? With new developments popping up almost every single day, it seems the answer is “no.” The youth of today are demanding more – more convenience, more adaptability, and more ease of use. And behind these advances, you can no longer ignore the role of artificial intelligence in making them possible. In fact, AI has a role in all aspects of life, from the home to shopping to the workplace.
Think of the most exciting and talked-about sectors that the top tech companies are focusing on—and what comes to mind are smart homes, augmented reality, virtual reality and even smart cars. But guess what is common between all of these diverse tech dimensions? The answer is simple—Artificial Intelligence, also known as AI. By the year 2020, AI will create more jobs than it takes away, according to estimates by research firm Gartner: 2.3 million jobs created, against 1.8 million eliminated. AI will eliminate low-skill, low-level positions, while creating more highly skilled, management and even entry-level jobs. “Unfortunately, most calamitous warnings of job losses confuse AI with automation, which overshadows the greatest AI benefit — AI augmentation — a combination of human and artificial intelligence, where both complement each other,” says Svetlana Sicular, research vice president, Gartner. That is the story of AI on a larger level.
What is Artificial Intelligence?
Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
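The “intelligent agent” definition above can be sketched as a simple perceive-and-act loop. The thermostat-style agent below is purely illustrative; it is a toy stand-in, not any particular real system.

```python
# A minimal sketch of an "intelligent agent": perceive the environment,
# then take the action most likely to advance the goal.
def agent_step(perceived_temp, target_temp):
    """Choose the action that moves the room toward target_temp."""
    if perceived_temp < target_temp:
        return "heat"
    if perceived_temp > target_temp:
        return "cool"
    return "idle"

# Run the perceive-act loop against a toy environment.
temp, target = 15.0, 20.0
actions = []
for _ in range(10):
    action = agent_step(temp, target)   # perceive, then decide
    actions.append(action)
    if action == "heat":
        temp += 1.0                     # acting changes the environment
    elif action == "cool":
        temp -= 1.0

print(actions[0], actions[-1], temp)    # first action, last action, final temp
```

The point of the sketch is the feedback loop itself: the agent's actions change the environment, which changes what it perceives next.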
The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, “AI is whatever hasn’t been done yet.” For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology. Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, and military simulations.
Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an “AI winter”), followed by new approaches, success and renewed funding. For most of its history, AI research has been divided into subfields that often fail to communicate with each other. These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”), the use of particular tools (“logic” or artificial neural networks), or deep philosophical differences. Subfields have also been based on social factors (particular institutions or the work of particular researchers).
The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects. General intelligence is among the field’s long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy and many others.
The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”. This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data and theoretical understanding, and they have become an essential part of the technology industry, helping to solve many challenging problems in computer science.
AI in every sector, every day
In everyday life, the smartphone you use is your gateway to the world of AI. Almost all the popular apps you use rely on AI to make everything work better. Productivity apps, photo editing apps, your favourite social networks or even a simple translator app—everything is using AI. Take the example of Microsoft, which has baked some AI goodness into the Office 365 productivity suite for Windows and macOS. If you are using Microsoft Word, for instance, you’ll be able to translate specific parts of text, or the entire document, across over 60 languages, using neural machine translation to understand sentence context. Incidentally, the same neural machine translation is used in the Microsoft Translator app, which can now do translations while offline—that is, without an internet connection.
Under the leadership of Sundar Pichai, Google has taken an AI-first approach—something that is very visible in Google products. Gmail and Drive are getting new AI-driven features, the latest being Gmail’s Smart Compose feature and an AI-powered grammar checker integrated into Docs. Photos is perhaps Google’s most AI-intensive app that you may be using often, and it uses very powerful algorithms to do everything from suggesting image edits to making search better.
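Features like Smart Compose rest on language modelling: predict the word most likely to come next, given what you have typed so far. Google’s production system uses neural networks, but the core idea can be shown with a toy bigram model; the training text below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word most often follows each word in a
# corpus, then offer that word as a suggestion. Real systems use neural
# networks over far larger corpora; this is only a sketch of the idea.
corpus = "thank you for your email thank you for the update see you soon"

follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def suggest(prev_word):
    """Suggest the most likely next word, or None if the word is unseen."""
    if prev_word not in follows:
        return None
    return follows[prev_word].most_common(1)[0][0]

print(suggest("thank"))  # -> you
print(suggest("you"))    # -> for
```

A real compose assistant conditions on the whole sentence (and even the email being replied to), but the prediction-from-context principle is the same.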
Recently, Korean electronics conglomerate LG launched an array of smart TVs that “listen, think and answer” your commands. ThinQ AI – an artificial intelligence platform pioneered by the company in 2017 – has been integrated into its sophisticated OLED (organic light emitting diode) TVs, lauded for their perfect black colour and slim structure.
LG’s 2018 series of TVs runs on the manufacturer’s new ThinQ AI software paired with a powerful Alpha 9 Intelligent Processor, which offers a myriad of image-enhancement capabilities. Quad-step noise reduction, for instance, scans multiple frames within a scene to identify and correct graininess in images. The processor then applies a frequency-based sharpness enhancement to bring out vivid images after the noise reduction. These TVs are also engineered to enhance the contrast ratio between on-screen subjects and their backgrounds; this is the object depth enhancer feature of the intelligent processor.
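Multi-frame noise reduction of the kind described works because noise varies from frame to frame while the scene does not, so averaging frames cancels the noise. The toy grayscale example below illustrates the principle only; it is not LG's algorithm, and the alternating +3/-3 noise is a deterministic stand-in for random sensor noise.

```python
# Toy temporal noise reduction: average several captures of the same static
# scene. The per-frame noise alternates +3/-3 (a stand-in for random sensor
# noise), so averaging an even number of frames recovers the scene exactly.
scene = [10.0, 50.0, 90.0]  # "true" pixel values of one scanline

frames = [[p + (3.0 if i % 2 == 0 else -3.0) for p in scene]
          for i in range(4)]

# Per-pixel mean across the four frames.
denoised = [sum(f[j] for f in frames) / len(frames)
            for j in range(len(scene))]

print(denoised)  # -> [10.0, 50.0, 90.0]: the noise has averaged out
```

With genuinely random noise the cancellation is statistical rather than exact, which is why real processors combine several such steps.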
The AI element of the OLED TVs comes to the fore with their voice interactivity. Everything from information on on-screen content to the weather can be accessed by voice command through the TV’s remote control. The TVs are further compatible with voice assistants such as Google Assistant and Amazon Alexa. Adding to the arsenal of top-of-the-range features, LG’s 2018 OLED sets are fitted with Dolby Atmos sound systems for the best in sound quality. The TVs are also built on a minimalist layout, dubbed the Unitas design, that emphasizes harmony, balance and wholeness.
If there is one company that has brought AI to the mainstream and created a completely new consumer electronics category with it, it is Amazon. The Echo line-up of speakers finds its roots in the Alexa smart assistant, which is artificially intelligent and relies on conversational interactions. The popularity of the Alexa speakers, and the fact that users seemed genuinely interested in the concept of smart speakers, is pretty much the reason why Google, Microsoft and Apple had to jump in too.
Streaming service Netflix relies heavily on AI for the finer things, such as personalizing the Netflix homepage for each user and even altering the artwork for movies and TV shows, with complex algorithms. In March this year, Netflix deployed an AI tool developed in-house, called Dynamic Optimizer, which ensures the best possible visual quality while streaming by altering compression rates based on the on-screen elements, all while using the lowest bandwidth possible. “What we’ve done is invest in the codecs, the video encoders, so that at half a megabit, you get incredible picture quality on a 4- and 5-inch screen. Now, we’re down in some cases to 300 kilobits and we’re hoping someday to be able to get to 200 kilobits for an amazing picture. So we’re getting more and more efficient at using operators’ bandwidth,” is how Reed Hastings, CEO, Netflix, described Dynamic Optimizer.
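The idea behind content-adaptive encoding like Dynamic Optimizer can be sketched simply: estimate how visually complex each scene is, then give complex scenes more bits and simple scenes fewer, instead of one fixed bitrate for the whole title. The complexity scores and bitrate ladder below are invented for illustration and bear no relation to Netflix's actual encoder settings.

```python
# Toy content-adaptive bitrate selection: simple scenes get low bitrates,
# complex scenes get more. Scores and the bitrate ladder are illustrative.
def pick_bitrate(complexity):
    """Map a 0..1 scene-complexity score to a target bitrate in kbit/s."""
    ladder = [(0.3, 200), (0.6, 300), (0.9, 500)]
    for threshold, kbps in ladder:
        if complexity < threshold:
            return kbps
    return 800  # fall-through for the most complex scenes

scenes = [0.1, 0.5, 0.95]             # e.g. flat cartoon, dialogue, action
plan = [pick_bitrate(c) for c in scenes]
print(plan)                           # -> [200, 300, 800]

avg = sum(plan) / len(plan)
print(round(avg))                     # average bandwidth spent -> 433
```

The pay-off is in the average: bits saved on easy scenes can be spent where the eye would notice compression artefacts, so quality rises without raising total bandwidth.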
The year did not start well for popular social network Facebook. The Cambridge Analytica data breach revelation. Data privacy issues. The spread of unchecked fake news. Hate posts. Allegations of Russian meddling in the US elections. Lesser mortals would probably have wilted under the pressure, but not Facebook. Mark Zuckerberg put on a brave face when facing a barrage of questions at the Senate hearing earlier this summer, and Facebook kept adding and updating data privacy measures quite literally on a daily basis to give users more transparency and control over their data. And there is a promise that the future will be better. That said, it is not just the humans at Facebook who are trying. The company is banking on artificial intelligence (AI) to solve most, if not all, of these problems. That is perhaps why Zuckerberg referred to AI more than 30 times during the marathon congressional sessions. “AI is the best tool we have to keep our community safe at scale,” said Mike Schroepfer, Chief Technology Officer, Facebook, at the company’s F8 conference in May this year. That was a sign of things to come.
In July, Facebook got AI experts from top universities on board for the Facebook AI Research (FAIR) group. These include Jessica Hodgins, a professor of robotics and computer science at Carnegie Mellon University; Abhinav Gupta, associate professor of robotics at Carnegie Mellon University; the University of Washington’s Luke Zettlemoyer; Andrea Vedaldi, an associate professor of engineering science at the University of Oxford; and UC Berkeley’s Jitendra Malik. Facebook said the new experts will bring collective expertise in robotics, natural language processing and computer vision. Earlier this week, Facebook also said that it will add thousands of human moderators and advanced artificial intelligence systems to weed out fake accounts and foreign propaganda campaigns, a reaction to the Russian interference in the 2016 US Presidential elections. With a base of over 2 billion active users, it would be impossible for even the largest team comprising only humans to monitor, identify and flag every post, image or comment shared on the platform. And this is without even counting the 813 million users on Instagram, the 1.5 billion WhatsApp users and the 1.3 billion people communicating via Facebook Messenger.
In April, the European Commission announced that it would set aside €1.5 billion for AI research funding until the year 2020. “Just as the steam engine and electricity did in the past, AI is transforming our world. It presents new challenges that Europe should meet together in order for AI to succeed and work for everyone,” says Andrus Ansip, Vice-President for the Digital Single Market, European Commission, in an official statement. Before that, France had already announced a €1.5 billion plan for AI research and innovation in the country.
Apple is betting heavily on its artificially intelligent virtual assistant, Siri, to make the experience with the upcoming iOS 12 even more personalized. Siri Suggestions will offer contextual prompts for tasks it feels you may need to do, based on your calendar or location, for instance. Then there are Siri Shortcuts, which will trigger an app or a task based on what you say to Siri. And this is just the start. In July, Apple announced that John Giannandrea, Google’s former head of search and AI, will be the “chief of machine learning and AI strategy” at Apple. He will lead the evolution of Siri and of the machine learning framework for apps, known as Core ML.
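The Shortcuts idea of binding a spoken phrase to an action can be sketched as a small registry that maps phrases to callbacks. The phrases and actions below are invented for illustration; this is not Apple's API, just the shape of the concept.

```python
# Toy shortcut registry: bind a custom spoken phrase to an action, in the
# spirit of the Siri Shortcuts idea described above. All names invented.
shortcuts = {}

def register(phrase, action):
    """Associate a spoken phrase (case-insensitive) with a callable."""
    shortcuts[phrase.lower()] = action

def run(phrase):
    """Look up the phrase and execute its action, if one is registered."""
    action = shortcuts.get(phrase.lower())
    return action() if action else "Sorry, no shortcut for that."

register("heading home", lambda: "Started navigation home")
register("good night", lambda: "Lights off, alarm set for 7:00")

print(run("Heading home"))   # -> Started navigation home
print(run("play jazz"))      # -> Sorry, no shortcut for that.
```

The real feature adds speech recognition in front of the lookup and hands the action to an app, but the phrase-to-action binding is the essence.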
As things stand, image recognition is one of the critical pillars of AI. Thus far, image recognition has largely been done using a human-supervised process, which teaches algorithms through labelled data sets. Facebook confirms that while such a method has worked so far, with as many as 50 million images used to teach AI, it really isn’t workable if there is a need to scale up. The solution, as it turns out, lies with AI itself. Researchers are now utilizing as many as 3.5 billion user-shared images on Instagram, along with 17,000 hashtags that serve as labels. In many ways, Instagram’s data bank is larger than what any rival tech company may have access to. Facebook also uses a Microsoft-developed, cloud-based service called PhotoDNA to detect posts with child pornography, for instance.
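Using hashtags in place of hand-drawn labels is a form of weak supervision: map noisy user tags onto training labels and discard the ambiguous cases. The tag-to-label mapping and the posts below are invented for illustration and are not Facebook's actual pipeline.

```python
# Toy weak supervision: turn noisy user hashtags into training labels
# instead of hand-labelling every image. Mapping and data are illustrative.
TAG_TO_LABEL = {
    "#dog": "dog", "#puppy": "dog", "#doggo": "dog",
    "#cat": "cat", "#kitten": "cat",
}

posts = [
    {"id": 1, "tags": ["#puppy", "#cute"]},
    {"id": 2, "tags": ["#kitten"]},
    {"id": 3, "tags": ["#sunset"]},        # no usable tag: skip it
    {"id": 4, "tags": ["#dog", "#cat"]},   # conflicting tags: skip it
]

def weak_label(tags):
    """Return a label only when the recognised tags agree unambiguously."""
    labels = {TAG_TO_LABEL[t] for t in tags if t in TAG_TO_LABEL}
    return labels.pop() if len(labels) == 1 else None

training_set = [(p["id"], weak_label(p["tags"])) for p in posts
                if weak_label(p["tags"]) is not None]
print(training_set)  # -> [(1, 'dog'), (2, 'cat')]
```

The labels are noisier than human annotation, but at billions of images the sheer volume compensates, which is the trade-off the Instagram work exploits.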
Google Photos is one of the most used apps on smartphones, particularly Android phones. The updated Google Photos app that you can now download to your phone relies on AI to offer suggestions to fix or edit your photos. Photos will now also detect faces in the pictures that you click, and will suggest contacts you may want to share them with. Google has trained these algorithms on millions of photographs.