AI-generated images are increasingly pervasive in contemporary visual culture, accompanied by whispers and positive vibes on the street, and by wild swings in the share prices of infrastructure and software vendors as investors attempt to read the tea leaves of the generative AI market. There is money in image-generating models. It is akin to a gold rush.
If these cultural technologies are to have long-term value, then they require reasonably high-quality knowledge to keep working. AI models are thus built on the backs of outsourced human labor: people toiling away, providing mountains of training examples for AI systems that corporations can use to make billions of dollars. By themselves the models do not solve the garbage-in, garbage-out problem posed by the ever-increasing disinformation, hallucination and fakery on the internet. So how do the companies get the right data amid all this tainted data? How is that data protected in the applications built on top of the AI platforms? How will they address safeguards around facial recognition? Are safety culture and processes taking a back seat to shiny products?
The tectonic cultural plates are indeed shifting. Using Claude on the iPhone, for example, shows that modern chatbots allow users to interact with computers through natural conversation, and the newly arrived, flirtatious and coquettish GPT-4o accepts visual, audio and text input and generates output in any of those modes from a user's prompt or request. Creating images through innovative AI generators, such as OpenAI's art generator DALL-E 3, or Midjourney and Stable Diffusion, has become a new form of image production.
The user supplies a text prompt and the model turns it into a matching image, synthesising a new image from patterns learned from vast numbers of already existing ones. That training dataset is essentially a big scrape of the internet, and the current legal situation around intellectual property is incredibly murky, given that the AI companies are using copyrighted images to train their models without asking for consent or offering compensation.
By the looks of it, AI-generated images will have a big, immediate impact on stock photography, perhaps to the extent of replacing Shutterstock-style photography altogether. Adobe's pitch for its Firefly AI generator is to 'skip the photoshoot' rather than to enhance the photoshoot. The political economy here is simple: Firefly will be used to benefit its makers at the expense of others, and Adobe gets the money. At the moment I see the array of new generative 'AI' tools for modifying my photographic images as offering solutions to post-processing problems that I don't have.