The Rise of the Machines and AI in Photography. Part 1
“In a properly automated and educated world, then,
machines may prove to be the true humanizing influence.
It may be that machines will do the work that makes life possible and that
human beings will do all the other things that make life pleasant and worthwhile.”
| Isaac Asimov, Robot Visions
In recent times, we have been witnessing a digital revolution of the arts, and of photography in particular. Not only is machine learning and artificial intelligence (AI) penetrating technology in general, AI-created art is also taking the world by storm. In this two-part article, I’ll first briefly define, characterize and discuss machines, machine learning and AI from a technological point of view, giving examples of how this technology already influences the world of photography and what may come in the future. In the second part of the article, I’ll reflect on the impact machines have on our understanding of art from a more philosophical point of view, discussing the disciplines of AI art, computational photography and computational creativity. Based on these recent streams, I’ll question our self-conception as photographers and artists. In the end, I would like to motivate us all to reflect on our inner, human values: to use existing tools in support of our conceptions, but to let our heart, soul and empathy always guide our creative endeavors.
A Brief History of the Machines
The history of machines arguably goes back to ancient Greek mythology and the stories of Talos, a giant made of bronze who guarded the island of Crete. According to the lore, Talos was an automaton, created to follow a predetermined sequence of operations or to react to predetermined instructions. In Talos’ case, tradition has it that he threw stones at the ships of unwanted visitors.
The early development of what we call computers goes back to the early 19th century, when the British mathematician Charles Babbage designed a programmable mechanical calculating machine called the Analytical Engine. This machine is regarded as the precursor of the modern computer.
The first digital computer is often traced back to Dr. John V. Atanasoff and Clifford Berry, whose Atanasoff-Berry Computer (ABC) is widely regarded as the first electronic digital computer. Almost in parallel, John W. Mauchly and J. Presper Eckert at the University of Pennsylvania developed the Electronic Numerical Integrator and Computer (ENIAC), which many regard as the first substantial general-purpose computer. Computers of that generation could only perform a single task and had no operating system. ENIAC is said to have weighed more than 30 tons, and legend has it that when it was turned on for the first time, lights dimmed in sections of Philadelphia.
Against the backdrop of these early calculating machines, modern computers are able to follow programs: generalized sets of operations. The term “artificial intelligence” (AI) was coined in the 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence. John McCarthy, then a professor of mathematics at Dartmouth College, decided to organize an expert group on modern computer developments to clarify and develop ideas about thinking machines. As he didn’t want to use narrow terms such as automata theory, but also wanted to avoid Norbert Wiener’s field of cybernetics with its focus on analog feedback, he came up with the term artificial intelligence. In those days, AI was related to complex calculations in the fields of problem solving (i.e. the application of computational algorithms), abstraction (i.e. the development of general rules and concepts based on exemplary classifications), and creativity (i.e. the creation of something new based on something we already know). With major technological improvements on the one hand, and a deeper understanding of the human brain and mind on the other, the original concepts of AI have changed.
Machines, Machine Learning and AI: What Are They and How Do They Work?
The objective of AI is to build machines capable of simulating human cognition, rationality and learning. As such, AI imitates human decision-making processes and human behavior. But it does even more: AI is not only mimicking human behavior, it is the intelligence shown by machines or computers. Meanwhile, artificial intelligence has become an interdisciplinary scientific field in its own right, at the crossroads of computer science, software engineering, operations research, mathematics, psychology, sociology and economics. In these disciplines, researchers study intelligent agents that are capable of perceiving their environment, reacting to it and making decisions that optimize the probability of achieving their objectives. At the core of AI lie the objectives of problem-solving and learning.
How does AI work? Human programmers predefine sets of rules, algorithms and models that analyze an environment with the objective of recognizing patterns and making classifications based on them. Today, this is heavily used for product recommendations on Amazon (those who bought this also bought that), on Netflix (those who watched this movie also watched that movie) and on Spotify (those who listened to this song also listened to that song), but also on Uber or Google Maps, for example to learn about frequent routes on a city map. Over time, AI systems learn from past classifications and from the way the environment reacts to them, for example whether we accepted a suggestion, actually listened to the recommended song and liked it. Through machine learning algorithms, AI thus learns the way we make decisions based on past classifications. Today, AI applications are used in many areas, such as computer vision (i.e. the ability to interpret an image, for example in face recognition), speech recognition (i.e. converting speech into text) or natural language processing (i.e. the ability to interact with a computer in natural language).
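To make the recommendation examples concrete, here is a minimal sketch in Python of the co-occurrence idea behind “those who listened to this song also listened to that song”. The listening histories are invented toy data, and real services use far more sophisticated models than this:

```python
from collections import Counter
from itertools import combinations

# Hypothetical listening histories: one set of songs per user.
histories = [
    {"song_a", "song_b", "song_c"},
    {"song_a", "song_b"},
    {"song_b", "song_c"},
    {"song_a", "song_c"},
]

# Count how often each pair of songs occurs in the same history.
pair_counts = Counter()
for history in histories:
    for pair in combinations(sorted(history), 2):
        pair_counts[pair] += 1

def recommend(song, top_n=2):
    """Suggest the songs most often heard together with the given song."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == song:
            scores[b] = count
        elif b == song:
            scores[a] = count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("song_a"))  # e.g. ['song_b', 'song_c']
```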
Comparing AI with machine learning: AI denotes the broad scientific discipline that applies machine learning, among other techniques, to operationalize smart objectives and solve problems. Machine learning, in turn, is an application of AI which assumes that machines are capable of learning from different environments, past classifications and past decisions. Following this line of argument, machines start with simple sets of rules and algorithms but get better over time. In fact, there are at least two distinct ways of learning. In so-called supervised learning, machines are trained on huge datasets, for example rated songs, recommended books, labelled images or watched movies, where humans have labelled the training data manually and thereby support the machines’ learning process. In reinforcement learning, machines are exposed to environments that provide feedback in the form of rewards or punishments, and the machines learn from that feedback. Learning then means that machines become capable of making accurate guesses about things they have never seen before; for example, a machine identifies a sky as a sky and replaces it with another sky image. This learning consumes a lot of time and computing power. However, once a system is well trained, it can run on simple end-user devices such as mobile phones. In sum, the intelligence behind an AI system lies not only in its predefined rules and algorithms, but also in the quality of the input data it is trained on.
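As a minimal illustration of supervised learning, the following Python sketch trains a tiny classifier on human-labelled colour samples and then lets it guess about colours it has never seen. The “sky”/“not sky” samples are invented toy data; real sky replacement relies on deep networks trained on millions of labelled images:

```python
# Requires scikit-learn (pip install scikit-learn).
from sklearn.tree import DecisionTreeClassifier

# Training data: (R, G, B) pixel colours, labelled manually by a human.
X_train = [
    [135, 206, 235],  # light blue       -> sky
    [100, 149, 237],  # cornflower blue  -> sky
    [200, 220, 255],  # pale blue        -> sky
    [34, 139, 34],    # forest green     -> not sky
    [139, 69, 19],    # brown            -> not sky
    [105, 105, 105],  # grey rock        -> not sky
]
y_train = ["sky", "sky", "sky", "not sky", "not sky", "not sky"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# The trained model makes guesses about colours it has never seen before.
print(model.predict([[120, 180, 250]]))  # likely ['sky']
print(model.predict([[60, 90, 40]]))     # likely ['not sky']
```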
Machines and AI in Photography
Today, more and more software companies offer technology based on new developments in AI. Some of these companies have even named their products after this new technology, such as Topaz Labs with Sharpen AI, Denoise AI or Gigapixel AI, or Skylum with Luminar AI. Other companies in the photography sector, such as Adobe, Capture One or DxO, have been using intelligent algorithms for quite some time without necessarily branding them with the word AI. Today, however, the abbreviation AI has become the new marketing tool. Its promise is to save time while creating high-quality photographs. And there is no reason to believe that this technological progress and the immersion of AI will slow down. Where can we see the impact of AI on photography already today, and how might it develop in the future?
The machine impact on photography is not new; we have all been using “intelligence” in one way or another for quite a while. Here are a few examples of how we already rely on intelligent programming and algorithms:
In-camera automation and intelligence:
- When RAW information is stored as JPEG with standardized, lossy compression algorithms, this intelligence allows us to trade storage size against image quality (a minimal code sketch of this trade-off follows this list).
- Many functions in the camera are or can be automated in an intelligent way, such as autofocus, auto white balance, program mode, eye tracking and focus blending.
- These days we see more and more automatic RAW enhancements within the camera that brighten areas that are too dark or darken ones that are too bright, correct for blur and add micro-contrast, or, as in the latest Nikon Z lens series, apply in-lens distortion correction.
- In mobile phones this trend is already well established. Since the introduction of portrait mode with the iPhone 7 Plus, the foreground is automatically recognized and kept sharp, while the background is blurred. The newest models even add depth control to adjust the strength of the background blur.
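As referenced in the first item of the list above, here is a minimal Python sketch of the storage/quality trade-off behind JPEG compression, using the Pillow library. The file name photo.tif is a placeholder for any uncompressed source image; camera firmware of course performs this step in dedicated hardware:

```python
import os
from PIL import Image  # Pillow (pip install Pillow)

# Placeholder path: any uncompressed source image.
image = Image.open("photo.tif").convert("RGB")

# Saving at decreasing quality settings trades detail for file size.
for quality in (95, 75, 50, 25):
    out = f"photo_q{quality}.jpg"
    image.save(out, "JPEG", quality=quality)
    size_kb = os.path.getsize(out) / 1024
    print(f"quality={quality}: {size_kb:.0f} kB")
```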
In-software AI applications:
- Photo management and image classification are heavily influenced by AI and machine learning. In these areas, faces or objects (e.g. sky, sun, tree) can be identified and classified automatically. Based on these classifications, the software proactively suggests folder structures and image tags.
- In traditional photo editing software, “auto” buttons analyze image quality and make proactive suggestions for automatic improvement.
- When it comes to photo enhancement, a large number of software products on the market already help to remove noise, sharpen or upscale an image, replace the sky, or automatically identify unwanted objects so that they can be content-aware filled (a minimal inpainting sketch follows this list). Other software addresses specific interests, such as finding lines of light to micro-sharpen and dodge, or enhancing eyes in portraits.
- Artificial intelligence is further used for image interpretation, art history, conservation and preservation. For example, famous drawings by Picasso and Monet have been analyzed to learn more about the artists’ drawing techniques. These insights also help to restore old drawings.
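As referenced in the list above, here is a minimal Python sketch of content-aware filling using the classical inpainting algorithm shipped with OpenCV. The file name and the masked rectangle are placeholders chosen for illustration; commercial tools combine automatic object detection with far more advanced generative models:

```python
import cv2  # OpenCV (pip install opencv-python)
import numpy as np

# Placeholder path: the image containing an unwanted object.
image = cv2.imread("photo.jpg")

# A white region on the mask marks the area to be filled.
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.rectangle(mask, (100, 100), (180, 160), 255, -1)  # -1 = filled rectangle

# Replace the masked area with plausible content from its surroundings
# (radius 3, Telea's fast marching method).
result = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("photo_filled.jpg", result)
```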
Summary
We currently live in a time that is heavily influenced by learning technology. It even goes so far that technology can override and improve the very code it was originally built on. Most advancements in photo technology have so far come on the software side rather than in cameras and lenses. In the future, however, AI and machine learning will arguably have a much bigger impact on how images are captured in the first place. We will soon see stronger personalization and contextualization of camera equipment that proactively learns from our preferences and habits. Automatic speech recognition and voice support may change the way we work with camera equipment, increasing the accuracy of our work.
The field of computational photography is on the rise, and the newest hardware will integrate not only traditional CPUs into the camera, but also image signal processors and neural processing units (NPUs) that help the computer in the camera learn and support the photographer’s image capturing process. This kind of hardware will be necessary for on-device machine learning and efficient processing.
In parallel, software will be more and more pervaded by learning technology for every kind of adjustment in any artistic field. The software will actively make suggestions and calculate suitable values for the improvement of each image.
We will not be able to stop the speed or the intensity of this development, but we can decide where we want to position ourselves as artists in this world. And we should do so today. We should position ourselves through our artistic statement: how we use AI in our creative process, how we base our decisions on AI, and how we perceive the use of intelligence in the arts. There is no right and no wrong; everything is possible in the arts. However, we should not allow technology to conquer our minds without a conscious human decision grounded in our human values. See also my recent article on Finding Meaning in Photography.
“Machine men, with machine minds and machine hearts! You are not machines, you are not cattle, you are men! You have the love of humanity in your hearts. You don’t hate: only the unloved hate, the unloved and the unnatural. Soldiers, don’t fight for slavery, fight for liberty! You the people have the power, the power to create machines, the power to create happiness! You the people have the power to make this life free and beautiful, to make this life a wonderful adventure! Then, in the name of democracy, let us use that power. Let us all unite! Let us fight for a new world, a decent world . . .”
| Charles Chaplin, The Great Dictator
References
N.N.: Artificial Intelligence (AI), Definition, Techopedia, https://www.techopedia.com/definition/190/artificial-intelligence-ai, March 2020.
Marnie Benney and Pete Kistler: Creative Tools to Generate AI Art, AIArtists.org, https://aiartists.org/ai-generated-art-tools.
Max Bense Lab, University of Stuttgart, https://monoskop.org/Computer_art.
Gabe Cohn: AI Art at Christie’s Sells for $432,500, The New York Times, https://www.nytimes.com/2018/10/25/arts/design/ai-art-sold-christies.html, 2018.
John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon: A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, 1955 (version of 9 April 2006, archived from the original on 2007-08-26).
James Moor: The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years, AI Magazine, Vol. 27, No. 4, pp. 87-91, 2006.
Pamela McCorduck: Machines Who Think, Second Edition, A.K. Peters, Ltd., 2004.
Iyad Rahwan et al.: Machine Behaviour, Nature 568, pp. 477-486, 2019, https://www.nature.com/articles/s41586-019-1138-y.
Beverly Steitz: A Brief Computer History, http://people.bu.edu/baws/brief%20computer%20history.html, 2006.