Artificial intelligence (AI) is everywhere, and if you haven’t yet got an AI-powered smartphone, you probably soon will. Is it all just marketing hype, or is AI in a smartphone – and particularly in its camera – something we should all aspire to have? With the term AI increasingly being used not only for smartphones but for all kinds of cameras, it pays to know what AI is actually doing for your photos.
What is AI?
AI is a branch of computer science that explores whether we can teach a computer to think or, at least, learn. It’s generally split into subsets of technology that try to emulate what humans do, such as speech recognition, voice-to-text dictation, image recognition and face scanning, computer vision, and machine learning. What’s it got to do with cameras? Computational photography and time-saving photo editing, that’s what. And voice activation.
AI in Cameras
Instead of simply being coded to do a particular job, an AI-powered device is programmed to learn from and adapt to user behaviour and patterns. AI cameras, then, are simply cameras that use AI programs to handle images and video intelligently, and computational photography is usually at the core of an AI-powered camera. It draws on the same subsets of AI technology that try to mimic what humans do: image and face recognition, computer vision, and machine learning.
That’s all good to know, but what’s the big deal with AI cameras? Well, these advanced cameras save time by smartly performing the requisite image processing and enhancement in real time – work that would otherwise mean hours of toiling over the image in Photoshop or Lightroom, commercial-grade image editing software.
If you are an iPhone X owner, you are probably using its face unlock feature, which is itself an AI program. It’s no longer just expensive iPhones, either: even cheaper Android smartphones now come with face unlocking. The feature analyses the face of the user and remembers it, and it even learns about changes, so if you completely shave off a long beard or go for a bald summer look after years of dreadlocks, it will still recognise you and unlock your phone. In fact, face recognition is fast becoming the de facto authentication method for biometric applications. Assisted by depth-sensing hardware, the level of security it provides has met expectations even in high-security settings and applications, such as banking. The development of secure runtime environments (programs and libraries) has built trust in the technology at the user level, so people now happily accept it in smartphones, and many companies are working to bring it to other domains, including cars, homes, and surveillance.
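The adaptive side of face unlock can be sketched in a few lines. This is a hypothetical illustration, not Apple’s or any vendor’s actual pipeline: it assumes the phone reduces each face capture to a numeric embedding vector, compares it with the enrolled template using cosine similarity, and nudges the template towards every successful match so gradual changes in appearance are absorbed. The threshold and update weight are invented values.

```python
import math

THRESHOLD = 0.90      # minimum similarity to unlock (illustrative value)
UPDATE_WEIGHT = 0.1   # how quickly the stored template adapts

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def try_unlock(template, new_embedding):
    """Compare a freshly captured embedding against the enrolled template.

    On a match, the template is nudged towards the new capture, so gradual
    changes (a growing beard, a new haircut) are learned over time.
    """
    score = cosine_similarity(template, new_embedding)
    if score >= THRESHOLD:
        # running average: the template slowly tracks the owner's appearance
        template = [(1 - UPDATE_WEIGHT) * t + UPDATE_WEIGHT * n
                    for t, n in zip(template, new_embedding)]
        return True, template
    return False, template
```

A capture close to the template unlocks the phone and updates the template; a stranger’s embedding fails and leaves it untouched.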
The ability for a computer to understand human speech is a form of AI, and it’s been creeping onto cameras for the last few years.
Smartphones have been offering Google Now and Siri for a few years, while Alexa is entering homes via Amazon Echo speakers. Action cameras have jumped on that bandwagon in recent years, with GoPro action cameras and even dash cams able to respond when you utter simple phrases such as ‘start video’ or ‘take photo’.
It all makes sense, especially for action cameras, where hands-free operation makes them much easier to use – but is it really AI? Technically, it is, but until recently voice-activated gadgets were simply referred to as ‘smart’. Some now allow you to say quite specific things such as ‘take slow-motion video’ or ‘take low-light photo’, but an AI camera needs to do a little more than that to be worthy of the name.
AI is about new kinds of software, initially to make up for smartphones’ lack of zoom lenses. “Software is becoming more and more important for smartphones because they have a physical lack of optics, so we’ve seen the rise of computational photography that tries to replicate an optical zoom,” says Arun Gill, Senior Market Analyst at Futuresource Consulting. “Top-end smartphones are increasingly featuring dual-lens cameras, but the Google Pixel 2 uses a single camera lens with computational photography to replicate an optical zoom and add various effects.”
Google is also using AI on its new Google Clips wearable camera, which captures and keeps only particularly memorable moments. It uses an algorithm that understands the basics of photography, so it doesn’t waste time processing images that would never make the final cut of a highlights reel. For example, it auto-deletes photographs with a finger in the frame and out-of-focus images, and favours those that follow the general rule-of-thirds approach to framing a photo.
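Google hasn’t published Clips’ algorithm, but the two checks described above – rejecting out-of-focus frames and favouring rule-of-thirds framing – can be sketched with simple stand-ins. Here sharpness is approximated by gradient energy in a greyscale pixel grid, and framing by the subject’s distance to the nearest thirds intersection; the function names and thresholds are invented for illustration.

```python
import math

def sharpness(gray):
    """Very rough focus measure: mean squared horizontal gradient.
    Out-of-focus images have weak gradients, so a low score suggests blur."""
    total, count = 0.0, 0
    for row in gray:
        for a, b in zip(row, row[1:]):
            total += (b - a) ** 2
            count += 1
    return total / count

def thirds_score(subject_xy, width, height):
    """Distance from the subject to the nearest rule-of-thirds
    intersection, normalised so 1.0 = perfectly placed, 0.0 = far off."""
    x, y = subject_xy
    points = [(width * i / 3, height * j / 3) for i in (1, 2) for j in (1, 2)]
    nearest = min(math.hypot(x - px, y - py) for px, py in points)
    return max(0.0, 1.0 - nearest / math.hypot(width, height))

def keep_shot(gray, subject_xy, width, height,
              min_sharpness=10.0, min_thirds=0.7):
    """Auto-cull: keep only shots that are in focus and reasonably framed."""
    return (sharpness(gray) >= min_sharpness
            and thirds_score(subject_xy, width, height) >= min_thirds)
```

A sharp frame with the subject sitting on a thirds intersection passes; a flat, blurry one is culled before any heavier processing happens.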
What is computational photography?
Computational photography is a digital image processing technique that uses algorithms to replace optical processes, and it seeks to improve image quality by using machine vision to identify the content of an image. “It’s about taking studio effects that you achieve with Lightroom and Photoshop and making them accessible to people at the click of a button,” says Simon Fitzpatrick, Senior Director, Product Management at FotoNation, which provides much of the computational technology to camera brands. “So you’re able to smooth the skin and get rid of blemishes, but not just by blurring it – you also get texture.” In the past, the technology behind ‘smooth skin’ and ‘beauty’ modes has essentially been about blurring the image to hide imperfections. “Now it’s about creating looks that are believable, and AI plays a key role in that,” says Fitzpatrick. “For example, we use AI to train algorithms about the features of people’s faces.”
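The ‘smoothing without just blurring’ idea Fitzpatrick describes is often implemented as frequency separation: blur away the blemishes, then add back a fraction of the fine detail so texture survives. Below is a minimal one-dimensional sketch of that idea with invented parameter values – real pipelines work on 2-D images with far more sophisticated, AI-trained filters.

```python
def box_blur(values, radius=1):
    """Simple moving-average blur over a 1-D strip of pixel values."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def smooth_skin(values, texture=0.4):
    """Frequency separation: smooth the low frequencies (blemishes,
    uneven tone) but add back a fraction of the fine detail so the
    result keeps believable texture rather than looking blurred."""
    low = box_blur(values, radius=1)
    return [l + texture * (v - l) for v, l in zip(values, low)]
```

With `texture=0.0` this collapses to a plain blur – the old ‘beauty mode’ – while higher values retain progressively more of the original grain.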
LG’s V30S ThinQ phone allows the user to select a professional image in its Graphy app and apply the same white balance, shutter speed, aperture and ISO. LG has also just announced Vision AI, an image recognition engine that uses a neural network trained on 100 million images and recommends how to set the camera. It even detects reflections in the picture, the angle of the shot, and the amount of available light.
Depth sensors and blurry backgrounds
In recent years we’ve seen many dual-lens phone cameras produce aesthetically pleasing images with a blurry background around the main subject. People (and, therefore, Instagram) love blurry backgrounds, but instead of using a dual-lens camera or picking up a DSLR and manually manipulating depth of field, AI can now do it for you.
Commonly called the ‘bokeh’ effect (from the Japanese for ‘blur’), machine learning identifies the subject and blurs the rest of the image. “We can now simulate bokeh using AI-based algorithms that segment people from foreground and background, so that we can create an effect that begins to look very much like a portrait taken in a studio,” says Fitzpatrick. The latest smartphones allow you to do this for photos taken with either the rear or the front (selfie) camera.
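Once the segmentation mask is available, the compositing step itself is simple: keep masked (person) pixels sharp and substitute blurred pixels everywhere else. Here is a toy sketch on a tiny greyscale grid, with a plain mean blur standing in for the lens-like blur a real portrait mode would apply; the hard part in practice is producing the mask, which is what the AI does.

```python
def blur2d(img):
    """3x3 mean blur - a crude stand-in for the heavy lens-style blur
    a real bokeh pipeline would apply to the background."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def fake_bokeh(img, mask):
    """Keep masked (person) pixels sharp, replace the rest with the
    blurred version - the essence of a software 'portrait mode'."""
    blurred = blur2d(img)
    return [[img[y][x] if mask[y][x] else blurred[y][x]
             for x in range(len(img[0]))]
            for y in range(len(img))]
```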
“People refer to it as bokeh, but you don’t get the true blur you get with a DSLR where you can change the depth; with a phone, you can only blur the background,” says Gill. “But a small and growing number of photographers are really impressed with it and are using an iPhone X for everyday capture, and only when they’re on professional jobs will they get out their DSLR.”
What about DSLRs?
Automatic red-eye removal has been in DSLR cameras for years, as has face detection and, lately, even smile detection, whereby a selfie is taken automatically when the subject cracks a grin. All of that is AI. Will the likes of Nikon and Canon ever adopt more advanced AI for their flagship DSLRs? After all, it took many years for WiFi and Bluetooth to appear on DSLRs.
While we wait, a Kickstarter-funded ‘smart camera assistant’ accessory called Arsenal wants to fill the gap. “Arsenal is an accessory that allows the wireless control of an interchangeable-lens camera (eg a DSLR) from a mobile device, with machine learning algorithms used to take the perfect shot,” says Gill. “What it’s doing is comparing the current scene with thousands of past images, using image recognition to recognise a specific subject and applying the correct settings, such as a fast shutter speed if it recognises wildlife.”
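The last step Gill describes – recognising a subject and applying the correct settings – amounts to a learned mapping from scene class to exposure parameters. A deliberately simplified sketch follows, with invented values; Arsenal’s real system derives its settings from thousands of sample images rather than a fixed lookup table.

```python
# Hypothetical mapping from a recognised subject to exposure settings.
# The values here are illustrative, not Arsenal's actual output.
SETTINGS_BY_SUBJECT = {
    "wildlife":  {"shutter": "1/1000", "aperture": "f/5.6", "iso": 800},
    "landscape": {"shutter": "1/60",   "aperture": "f/11",  "iso": 100},
    "portrait":  {"shutter": "1/250",  "aperture": "f/2.8", "iso": 200},
}

def suggest_settings(subject, fallback="landscape"):
    """Return camera settings for the subject the classifier reported,
    falling back to a safe default for unrecognised scenes."""
    return SETTINGS_BY_SUBJECT.get(subject, SETTINGS_BY_SUBJECT[fallback])
```

So when the image classifier reports ‘wildlife’, the assistant dials in a fast shutter speed, exactly as Gill describes.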
Who is AI photography for?
Everyone. For starters, it’s about democratising photography. “In the past photography was the domain of those with the expertise of using a DSLR to create different types of images, and what AI has started to do is to make the effects and capabilities of more advanced photography available to more people,” says Fitzpatrick.
So does this mean Adobe’s Photoshop and Lightroom will soon be defunct? Absolutely not; AI is a complementary technology, and it’s already making photo editing much more automated. One of FotoNation’s partners is Athentech, whose ‘Perfectly Clear’ AI-based technology carries out automatic batch corrections that mimic the human eye. Available as a plugin for Lightroom, it’s specifically aimed at reducing the time photographers spend sitting in front of computers manually editing. “Professional photographers make money when they’re out taking photos, not when they’re processing images,” says Fitzpatrick. “AI makes professional-looking creative effects more accessible to smartphone users, and it helps professional photographers maximise their ability to make a living.”
AI is quickly becoming an overused term in the world of photography. Right now it largely applies to smartphone cameras, but the incredible algorithms and sheer level of automation the technology allows will soon prove irresistible to most of us. It may not be time to chuck out the DSLR quite yet, but AI seems set to change how we take photos. Not only that, but it could soon take charge of editing and curating our existing photo libraries too. It may be over-hyped, and often just shorthand for the latest, greatest advanced software, but AI is going to do something incredible for photographers: it’s going to free up more of your time so you can take more, and better, photographs.
AI-Powered Surveillance Camera for Gun Detection
I’ve talked a lot about AI-powered smartphone cameras, but security cameras can benefit from AI too. A company called Athena Security has developed AI for cameras that spots guns and notifies the authorities. With the US witnessing a rising number of shootings – more than 200 mass shootings have taken place across the country so far in 2018 – this technology could help the police stop a gunman before an attack is launched.
Athena’s AI for security cameras can recognise a wide range of guns and other deadly weapons that an attacker might carry concealed and later use to harm others. Whenever the AI-enabled camera spots a gun in its field of view, it sends a notification to the business owner or a nearby law enforcement office. As the system is cloud-based, it also streams footage of the event to a computer database, and that footage can be monitored via an app, taking crime prevention mobile. The user can connect the camera to other security systems, such as doors and elevators: if a gunman tries to enter premises where guns are prohibited, for example, the entrance door can be locked automatically. The system has already been implemented at Archbishop Wood High School in Warminster, Pennsylvania, and the company believes it is 99% accurate and effective.