We are in the early days of the successful marketing of “Generative AI,” the idea that a computer can be usefully creative. But is it any good?
These models are trained on massive datasets. In the most basic form, images paired with text descriptions teach the machine to identify objects. But just as much of the human mind remains a mystery to us, we rarely understand how the computer is actually identifying the objects in an image. My favorite example, so far, involves fish.
In my images, AI performs the following:
- “Subtle” facial retouching, using the Google Pixel’s in-camera smoothing or Adobe’s Neural Engine.
- Background removal, using iOS Photos’ Lift Subject or Photoshop’s AI Object Selection, whichever works best for the image. Then, if necessary, I apply some minimal masking: the more traditional, non-destructive technique for cleaning up the small bits the computer gets wrong.
- Background replacement, using the Generative Fill feature in the Adobe Photoshop beta. Or sometimes I’ll composite the subject into a real photo.
The goal is not to trick viewers, but to understand how people might respond to these images. Are they aesthetically pleasing? Can they still bring joy?