This post explains the cybernetic concepts behind Panic, an interactive artwork at the Australian Cybernetic exhibition in Nov/Dec 2022.

That exhibition is now over; thanks very much to all who came along. If you missed out on seeing it, there are exciting plans in the works to re-install Panic. Nothing’s confirmed just yet, but stay tuned ☺

Time and society provide us with images of the world; Panic takes the image and changes it, a bit, or a lot.

Panic is a new interactive artwork created by my colleague Adrian Schmidt and me, which premiered at the Australian Cybernetic: a point through time exhibition at the ANU School of Cybernetics in December 2022. This new exhibition revisits Jasia Reichardt’s Cybernetic Serendipity exhibition and introduces new reflections on contemporary approaches to cybernetics underway at the ANU.

Panic works by taking the viewer’s text input (typed on a conventional computer keyboard) and running it through three AI models, one after the other:

- a text-to-text model (Prompt Parrot): it takes a short sentence as input and adds extra words (adjectives like “beautiful” or “high quality”) to the prompt
- a text-to-image model (Stable Diffusion): it takes the Prompt Parrot-generated text as input and generates an image “matching” that description as output
- an image-to-text model, which takes the Stable Diffusion-generated image as input and produces a text “caption” which describes the image as output

This text caption output can then be fed back into the Prompt Parrot as input, and this process repeats indefinitely.

[Two diagrams appeared here: one of the Panic loop, and a similar one with example inputs & outputs to make things a bit more concrete.]

If you’re interested in more detail about the tech stack, see the FAQ at the end.

What’s interesting about Panic isn’t necessarily what it is, but what it shows us about AI models, humans, feedback loops, and steering complex systems of people and technology.

# Even AI images and text are made by humans

You’ve probably heard about recent advances in AI models which can draw pictures. For example, you can tell the computer to draw “a picture of an astronaut riding a horse” and you’ll get something which looks like a picture of an astronaut riding a horse.

This image/text synthesis thing isn’t new; Michael Noll has the receipts from the 1960s:

> Digital computers are now being used to produce musical sounds and to create artistic visual images. … The artist or computer interacts directly with the …
>
> A. Michael Noll, The Digital Computer as a Creative Medium

However, it is fair to say that we’re now at a stage where these AI-generated text and images can reliably pass as “human-made”. Even to those (like me) who pay close attention to this stuff, the gains in the last couple of years have been remarkable.

If you play with Panic for a while you’ll probably notice that the images produced by the Stable Diffusion AI model (which show up on the TV screens) are at times confronting, beautiful, surprising, weird, or a bit “off”, and sometimes all of the above. They’re certainly (in general) interesting to humans; they’re legible to us within the broad category of “aesthetic images”. And we shouldn’t be surprised at that, because the billions of images in the training data were made by humans. As my colleague Ellen Broad says, this stuff was “made by humans”.

These models are created by feeding in huge amounts of human-created data, from which the model learns to generate (statistically) similar outputs. That’s true of the text models as well: both the Prompt Parrot and the image captioning model. The Prompt Parrot model is a particularly interesting case: it was trained on human-written prompts, and that information was then used to create a new AI model which will “write” text prompts of its own. The AI-generated text outputs therefore include spelling mistakes and other hallmarks of human-written text, because that’s what they’re made of. As you watch Panic progress through its various AI-generated text and image outputs, you’ll no doubt see these traces of humanity popping up from time to time.
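The text → image → caption feedback loop described above can be sketched in a few lines of Python. This is a minimal illustration of the loop's *shape* only: the three functions below are hypothetical stand-ins, not the real Prompt Parrot, Stable Diffusion, or captioning models.

```python
def prompt_parrot(text: str) -> str:
    """Stand-in text-to-text model: embellish the prompt with extra adjectives."""
    return f"{text}, beautiful, high quality"


def text_to_image(prompt: str) -> str:
    """Stand-in text-to-image model: returns a token representing an image."""
    return f"<image of: {prompt}>"


def image_to_caption(image: str) -> str:
    """Stand-in image-to-text model: describe the image in words."""
    return image.removeprefix("<image of: ").removesuffix(">")


def panic_loop(seed_text: str, steps: int) -> list[str]:
    """Run the text -> image -> caption loop, recording each caption."""
    captions = []
    text = seed_text
    for _ in range(steps):
        prompt = prompt_parrot(text)    # 1. embellish the text prompt
        image = text_to_image(prompt)   # 2. generate an image from the prompt
        text = image_to_caption(image)  # 3. caption the image...
        captions.append(text)           # ...and feed the caption back in
    return captions


print(panic_loop("an astronaut riding a horse", 2))
```

Note how each pass re-embellishes the previous caption, so small artefacts of each model accumulate around the loop; in the real installation this drift is much of what makes the outputs interesting.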