From algorithm-generated art to deep fakes, the world of photography isn't what it used to be. Modern machine learning techniques stretch the definitions of possible and real. Before you get ideas of Matrix-esque effects generated on the fly that humans can't tell from reality, we need to ground the AI advancements we're talking about. Also, keep in mind that we're built to reject uncanny-valley images.
In tests where doctored and undoctored photos were shown to subjects, 60% could spot the fake. That’s better than chance, but not by much.
Advancements in digital imagery and computational power continue to push us into some crazy places. If you're a big fan of the zoom-and-enhance cliché of espionage thrillers, the future is the place for you.
AI Advancements Now
Definition time. When we talk about AI photo technology, we mean computer-assisted image development. The programs in place today are more machine learning than true AI.
Machine learning, of the type used here and in self-driving cars, utilizes carefully labeled data. This data is gathered and labeled by humans. The computer becomes a vast library of image choices and then makes decisions based on an algorithm.
The algorithm is further trained and refined through iterations of pass/fail gating by human checkers. There are multiple ways to train an algorithm; below, we go through two used in image enlargement today.
The resulting software does nothing to images that a human with a photo-editing program couldn't do. The difference is in the speed and accuracy with which the artificial intelligence produces images.
The most common approach to single-image super-resolution (SISR) relies on the computer not only applying algorithms to an image but knowing why it made its choices. This technique is called the Super-Resolution Convolutional Neural Network (SRCNN).
A low-resolution image is basically a sketch or a child’s crayon coloring of an image. There are large blotches of color that don’t quite fill in the lines. You will also find mistakes and bleed from other colors.
Through deep learning such as SRCNN, the computer colors in the missing bits. That part's easy enough. The complicated part is knowing which parts of an image are bleed-over and mistakes.
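The two-stage idea can be sketched in a few lines. This is a toy illustration, not the real SRCNN: stage one enlarges the image naively (the "crayon coloring"), and a simple neighborhood average stands in for the learned convolutional filters that would refine it far more selectively.

```python
# Toy sketch of the SRCNN pipeline: naive upscale, then refinement.
# Images are lists of lists of grayscale values (0-255). The real
# network replaces refine() with trained convolutional filters.

def upscale_nearest(img, factor):
    """Stage 1: enlarge by repeating pixels (blocky, like a sketch)."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

def refine(img):
    """Stage 2 stand-in: average each pixel with its neighbors,
    a crude version of what a learned filter does selectively."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

low_res = [[0, 255], [255, 0]]
high_res = refine(upscale_nearest(low_res, 2))
```

The hard part, as described below, is that a blind average smooths errors and detail alike; the trained filters must learn which is which.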
Teaching the computer to recognize and address errors and artifacts, instead of just filling in the colors, requires discernment. Like any other artist, it needs a lot of training to understand the difference between good and good enough.
To this end, data engineers group similar images and run them through a program, manually explaining to the program which is superior and why. The algorithm then learns a series of precedents to compare a new image against, so it understands what it's seeing.
Essentially, the computer learns what to color in and what to color out the same way you would, through practice.
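A stripped-down sketch of that pass/fail loop, with hypothetical names throughout: a human labels which of two images is superior, and the program nudges a single weight until its ranking agrees. Real systems adjust millions of weights, not one sharpness multiplier.

```python
# Toy pass/fail training: human-labeled comparisons steer a weight.

def train(pairs, rounds=50, lr=0.1):
    """pairs: (score_a, score_b, human_prefers_a) tuples."""
    weight = 0.0
    for _ in range(rounds):
        for a, b, prefers_a in pairs:
            predicted_a = weight * a > weight * b
            if predicted_a != prefers_a:
                # fail: the human disagreed, so correct the weight
                weight += lr * ((a - b) if prefers_a else (b - a))
    return weight

# Humans consistently prefer the sharper image in each pair...
labeled = [(0.9, 0.2, True), (0.1, 0.8, False), (0.7, 0.4, True)]
w = train(labeled)
# ...so the learned weight becomes positive: sharper ranks higher.
```

Practice, in other words: the program only changes when a human checker marks its judgment wrong.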
A competing model for SISR is the Generative Adversarial Network (GAN). Instead of learning what something is, GANs learn what something is not. This method, built around a critic loss function, works great for super-resolution but not for repair.
Basically, GANs are great at coloring and bad at drawing. They upscale an image quickly but also upscale the errors and can’t color correct for obvious defects.
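The "critic" idea reduces to a small formula. Below is the standard binary cross-entropy form of a discriminator loss; the `d_real` and `d_fake` values are hypothetical scores in (0, 1) that a real critic network would output for a real and a generated image.

```python
import math

def critic_loss(d_real, d_fake):
    """Low when the critic says 'real' (near 1) for real images
    and 'fake' (near 0) for generated ones."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

confident = critic_loss(d_real=0.95, d_fake=0.05)  # critic doing well
confused = critic_loss(d_real=0.5, d_fake=0.5)     # can't tell them apart
```

The generator improves by learning to push the critic toward the "confused" end, which is why GANs get so good at producing convincing texture, and why nothing in the setup teaches them to fix a defect they can convincingly reproduce.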
This next application gets into the job-stealing territory that plagues all uses of automation. While algorithms attempting to write books and screenplays fail at nuance badly enough to be funny, image algorithms are on better footing.
A user requests an image featuring multiple objects and the computer organizes them according to a developed aesthetic. The result looks better than a person jumbling random clip art into a collage, but the idea is the same.
The AI uses what it knows of composition, tone, color and more to layer the selected objects into the frame.
While the image appears real, it is an assortment of pixels, not a recreation of an actual scene. The tone comes across as overly ideal. The result lacks wabi-sabi, the charm of natural imperfection, and that keeps the image from being truly appealing. The computer's too-perfect composition comes across as too polished to be real.
To accomplish its goal, enhancement software needs to understand what it's seeing in order to reproduce a better image. This is not so different from the composites a computer creates for facial recognition or deep fakes.
The AI makes guesses at the lighting, color, tone, and objects. The resulting image is a painting of sorts, but a painting based on real data and presented as a whole composed of parts.
Mobile phone cameras and digital SLRs alike use AI to take photos of greater quality with less equipment.
Apple's dual-lens camera uses an algorithm to select for faces with one camera and background elements with another. The resulting image creates a sense of depth associated with binocular vision by adjusting the light and focus between the various foreground and background elements.
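The core move can be sketched simply. This toy "portrait mode" keeps one capture sharp wherever a subject mask says "face" and fills the rest from a pre-blurred background capture; real pipelines estimate a full per-pixel depth map from the two lenses, so the binary mask here is a stand-in.

```python
# Toy portrait-mode blend: sharp pixels on the subject, blurred elsewhere.

def portrait_blend(sharp, blurred, mask):
    """Per pixel: take the sharp capture where mask is 1, blurred where 0."""
    return [[s if m else b
             for s, b, m in zip(srow, brow, mrow)]
            for srow, brow, mrow in zip(sharp, blurred, mask)]

sharp = [[200, 200], [200, 200]]    # in-focus capture
blurred = [[90, 90], [90, 90]]      # defocused background capture
mask = [[1, 0], [0, 0]]             # subject occupies the top-left pixel

result = portrait_blend(sharp, blurred, mask)
```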
The difference between what the digital camera AI does and what a photographer does is about speed. The combination of aperture (f-stop) and exposure time all comes from the program. A filter or selected pre-set gives the user a touch of control, but the AI still does the work.
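The trade-off the program automates is captured by the standard exposure value formula, EV = log2(N²/t), where N is the f-number and t the shutter time in seconds: settings with equal EV admit equal light, and the camera picks among them in milliseconds.

```python
import math

def exposure_value(f_number, shutter_seconds):
    """EV = log2(N^2 / t); equal EV means equal light reaches the sensor."""
    return math.log2(f_number ** 2 / shutter_seconds)

# f/2.8 at 1/100 s and f/4 at 1/50 s admit almost exactly the same light
# (f/4 is one stop narrower, 1/50 s is one stop longer):
ev_a = exposure_value(2.8, 1 / 100)
ev_b = exposure_value(4.0, 1 / 50)
```

Halving the shutter speed at the same f-stop drops EV by exactly one stop, which is the arithmetic a photographer once did by feel.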
Programs such as Maya, Photoshop, and Lightroom use complex AI to retouch images quickly. (Learn more about the detailed reviews of these photo editing programs on SoftwareHow.)
Light sampling and gradient setting once did little more than saturate and desaturate an image. The newest versions of these programs recolor the image across the gradient, creating a nuanced retouched image in seconds.
Selecting individual elements was once a tedious process of zooming in to isolate pixels to cut and move. Now the program understands where the boundaries of an element are and can select it even as the color shifts.
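A simplified version of that boundary-following selection is a tolerance-based flood fill: starting from a clicked pixel, the selection grows across neighbors whose color stays within a tolerance and stops where the color shifts sharply. Modern tools use far smarter models, but the "follow similar pixels to the edge" idea is the same.

```python
# Toy edge-aware selection on a grayscale image (lists of 0-255 values).

def select_element(img, start, tolerance=30):
    h, w = len(img), len(img[0])
    base = img[start[0]][start[1]]
    selected, frontier = set(), [start]
    while frontier:
        y, x = frontier.pop()
        if (y, x) in selected or not (0 <= y < h and 0 <= x < w):
            continue
        if abs(img[y][x] - base) > tolerance:
            continue  # color shifted too far: we hit the element's edge
        selected.add((y, x))
        frontier += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return selected

# A bright element (values near 220) on a dark background (near 40):
img = [[40, 40, 40, 40],
       [40, 210, 225, 40],
       [40, 230, 215, 40],
       [40, 40, 40, 40]]
picked = select_element(img, (1, 1))  # selects only the bright 2x2 block
```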
More is expected from image alteration. Consumers crave more realistic images of less realistic things. Even real images need to showcase expected balances of light and depth to be considered worthwhile.
Nothing should be blurry, and discoloration from competing light sources is out of the question. The world of photography demands a perfection not found with the human eye.
As much as things have changed due to AI advancements so far, they will change more in the future. For photographers, this will mean more adjustment and an emphasis on core skills.
If you work with digital photography or photo editing, you need to stay up to date on this emerging technology. Contact us for more information about how we can help you train and refine your skills.