Studio Update Ten
August 2, 2024 8:59 PM
Chris Adler

Live Streams

Drew Nikonowicz has officially wrapped up his time in the AiR Program, and we are excited to feature him next week on our live Twitch stream for a discussion of his work! This will take place Thursday, August 8th at 3pm PT.

The following week, we are excited to have all current AiR artists live on Twitch for a round-table discussion of their experience on Friday, August 16th at 3pm PT! This will be the first time all of the AiR artists have been in collective conversation - not to be missed!

Inés Kivimäki

Inés has made remarkably short work of building out her Figma board and has even produced around 2 minutes of sample footage from the new film she is making. You can check it out by clicking through to the Figma board below (click on Inés' name in the lower left corner) and pressing play!

On the board you'll notice hundreds of images, each with annotations and citations, laid out in the associative style of a mind map that visually captures the semiotic relationships in her film. Our conversations about her work have ranged from linguistic theory to search engine indexing to the exploration of AI bias, all of which are deeply embedded in her work.

Using objects from the Baba Yaga folk tale as a jumping-off point, Inés has built a pipeline that runs roughly as follows (a code sketch of the loop follows the list):

1) Search these object phrases in various languages, ranging from French to Russian to Finnish, each of which returns a different visual bias in the search results. This step alone is about as clear a representation as I've ever seen of the Sapir-Whorf Hypothesis.
2) Transmute the found images into text using img2txt machine vision.
3) Have that text read aloud by a deepfake trained on her own voice.
4) Feed that audio to a transcription AI bot, which converts it back into written prompts, introducing more errors.
5) Use the resulting texts as prompts to generate new images.
6) Edit these images together as scenes in the film, with the deepfake audio description of the original object synced as the voiceover.
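
For the technically curious, here is a minimal sketch of that loop in Python. The four helper functions are hypothetical stand-ins for whatever img2txt, voice-cloning, transcription, and image-generation tools Inés actually uses; the point is that every hop is lossy.

```python
# A minimal sketch of the "telephone" loop described above. Each helper is a
# hypothetical stand-in, not a real API; every pass introduces more slippage.
from pathlib import Path

def image_to_caption(image: Path) -> str:
    """Hypothetical img2txt step: machine vision describes the image."""
    ...

def synthesize_voice(text: str) -> Path:
    """Hypothetical TTS step: a deepfake of the artist's voice reads the text."""
    ...

def transcribe_audio(audio: Path) -> str:
    """Hypothetical ASR step: a transcription bot writes the speech back down."""
    ...

def generate_image(prompt: str) -> Path:
    """Hypothetical diffusion step: the drifted text becomes a new image."""
    ...

def telephone(seed_image: Path, hops: int) -> list[Path]:
    """Run the loop; each hop moves the content further from the start."""
    frames = [seed_image]
    for _ in range(hops):
        caption = image_to_caption(frames[-1])   # image -> text
        audio = synthesize_voice(caption)        # text -> speech
        prompt = transcribe_audio(audio)         # speech -> (errorful) text
        frames.append(generate_image(prompt))    # text -> new image
    return frames
```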

This entire collection of gestures is designed to sequentially introduce what I'll call "slippage" into the work. Each new step moves the content further away from the starting point, in a game of telephone between AI agents. In the film so far, the descriptions of the images start off very direct - what you see is what you hear, and it's clear that they match. As the film goes on, the vocal descriptions begin to drift, revealing the slippage between the object and its description and driving a mental wedge between the viewer and the content they are witnessing. It slowly pushes the viewer's mind into an interpretive space, forcing a mental image into competition with what is being seen on screen. This produces what I like to term a "thirdness" born of the interaction between two adjacent elements: in this case, audio and image.

As the content drifts, the form of the audio also changes in unexpected ways. Notice that the text at the beginning of the film adheres to a mechanical description of what's on screen. As the film moves along, the voice becomes more casual, with the initial descriptive form repeatedly broken by more conversational phrases that add context and also begin to distract.

“A wooden comb with a long handle, a white cloth on a white surface, a white egg on a white surface….This object is a gold ring… The image shows a simple polished gold ring” 

Slowly, expanding thoughts begin to creep in: why is the AI acknowledging that this is an image of an object? The formal context that typically recedes into the background is all of a sudden brought into sharp focus - a focal shift that happens again and again. This simple evolution of speech patterns subtly leads the viewer in broadening concentric rings, directing our awareness from content to context, to format, and eventually to language itself as the audio diverges more drastically (see second clip below).

Like an exponential math problem that begins with very simple terms - a single digit raised to its own power, perhaps: 3^3 is just 27, but 27^27 already runs to 39 digits - with each utterance the field expands rapidly to fill the space of all possibilities, like a black hole gathering conceptual mass.

In the second, shorter sample (~30 seconds; click through on the board below), language really starts to break down. The machine vision begins to read the watermark present in one of the images, and the affectation given to the voice makes it sound as if it's attempting to learn how to speak a new language - which in fact it is.

This intra-active loop between human and machine systems breaks apart the space of communication into its phonetic and mimetic elements, revealing to us the building blocks of our new reality, blocks that are constantly being reconstructed through this recursive process. By doing so, Inés has now begun to probe the iconographic apparatus that is AI, and we're here for it.

Drew Nikonowicz

Drew wrapped up his residency this week in a flurry of activity that left his Figma board in a linear, if spiraled, state. As an encapsulation of the past four weeks, he chose to link each step he took with a sequential arrow, describing the evolution of his experimentation with these AI tools.

During his residency he produced several different subsets of photography-style images, mostly focused on a LoRA he trained on a diptych of images extracted from a new, unpublished body of work called Line in the Sand. The two images are shot from the interior and exterior of a window and produce a dark/light duality highlighting the primary compositional shape of the window arch. His recent work focused on trying to reproduce this window arch in various photographic scenarios. The pursuit of this simple task proved difficult and limited his output, but it did lead to some interesting results.
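
For readers curious what training on a diptych even looks like, below is a minimal sketch of attaching LoRA adapters to a Stable Diffusion UNet with diffusers and peft. This is not Drew's actual setup - the base model, rank, and target modules here are assumptions - just the general shape of low-rank fine-tuning on a tiny image set.

```python
import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig

# Load a base model (assumed here; Drew's actual base model isn't specified).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Freeze the base weights; only the low-rank adapter will be trained.
pipe.unet.requires_grad_(False)

lora_config = LoraConfig(
    r=8,            # low rank: a two-image dataset can't support many parameters
    lora_alpha=8,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # UNet attention projections
)
pipe.unet.add_adapter(lora_config)

# A training loop over the captioned diptych would go here: encode each image
# to latents, add noise, and optimize the adapter to predict that noise.
```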

I wouldn't say that his project produced a "finished work" - but that's OK! He met with his share of failures along the way, which captures the basic relationship we all have to emerging technologies, really. The point of this residency is to provide a detour from an artist's typical approach via new AI tools, and to afford that artist the space and time to explore. So Drew's self-dubbed "failure" is, ironically, exactly what we were going for. He says it best: "I kind of love a failure. So I'm not mad about it."

I think the AI exploration of his photographic language was a worthwhile journey, and there was no better photographer than Drew to explore this space. The mundane nature of the images he began with (and tends to create - no shade!) and their uncanny relationship to reality was always going to be very tricky for an AI to grasp. His newest body of work in particular has no single subject matter to focus on; rather, it's a cosmos of different subjects ranging from people to places to things.

For me, this highlights a barrier that many current artists (read: capital-C Contemporary) face when working with these tools: the tools are inherently "dumb". The linguistic turn that happened in philosophy, and therefore contemporary art, in the '60s and '70s set up a certain relationship among the Artist, the Viewer, and the Work, wherein the work is no longer a direct channel of representation between Artist and Viewer, but rather an iconographic exploration of ideas that each party may or may not be familiar with. Something AI struggles with - and, dare I say, something not represented at all in its underlying architecture - is this "thirdness" of grasping an intellectual context. It has no capacity to generate or interpret subtext or associations that are not explicitly linked to the next predictive token. In other words, it is completely blind to meaning (or at least appears to be).

Take the image above as an illustration of my point. The image shows two arms pulling each other's armpit hair - or actually, one pulling, the other pressing. This, for me, has a very obvious resonance with the diptych of the window shot from inside and outside. But what possible concrete linkage could there be for an AI to discern, other than the presence of two arms and two windows? That shred of information doesn't begin to access the associative space of meaning these two images open up in the mind of a human viewer. There are themes of intimacy, isolation, the self in relationship to another, the self as other, binary vs. non-binary frameworks (which is itself ironically binary lol), and so on - hundreds of underlying threads that connect and build out the space of potential meaning generated by these two works.

A group of humans could discuss these themes for hours, each person relating and filtering ideas back through their personal libraries of memory and experience. This is the core "move" of Contemporary Art, found nowhere in the space of AI. This is not a criticism of the tools themselves, but rather of the ways in which they are regarded in our collective cultural conversation. It points to a basic misunderstanding of what AI is and is not capable of. Criticizing AI for being incapable of deep intellectual work like this would be like scoffing at Photoshop for not understanding Kant (and I'm not even sure how many humans do). What this also means, though, is that the fear of AI replacing humans' intellectual work is largely misplaced. And note that I said intellectual work, not intellectual property. There's some Marxian wedge there. Perhaps someone should write a paper about the difference between the two.

So what does it mean to make art that does not fit into a single style, concept, or character framework? How do you get images that are truly outside The Box when the training categories are themselves The Box? The strife between AI and the art world may stem from this misunderstanding of what art is versus how AI is being trained to represent it.

Machine learning, by its very nature, learns through brute-force repetition: predicting the next step in a given sequence, whether that's a language token or a denoising step in the latent space of a diffusion model. This sort of linear prediction has very little to do with how most artworks relate to one another, much less how they relate to the viewer and the outside world they may or may not portray. Conceptual art therefore embodies a type of ephemeral relationship that is very hard to capture in the training of a single model. The true intellectual labor of an artwork lies outside of this space.
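
To make that claim concrete, here is a toy illustration (not any particular system's code) of the objective described above: the model is graded only on guessing the next symbol, and nothing in the loss term refers to meaning.

```python
import torch
import torch.nn.functional as F

def next_token_loss(model: torch.nn.Module, tokens: torch.Tensor) -> torch.Tensor:
    """tokens: (batch, seq) of integer ids. The model is only ever rewarded
    for predicting token t+1 from tokens 1..t; no term here encodes meaning."""
    logits = model(tokens[:, :-1])               # predictions for each next position
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),     # (batch * seq, vocab)
        tokens[:, 1:].reshape(-1),               # ground truth is just the next symbol
    )
```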

In the true style of a spiral, Drew's meandering relationship to this new medium is not finished. And I quote: "It's gonna go on to this shelf of things that I do... when it's 2 in the morning, and then it becomes 5am and I don't know where the last three hours went."

To pound a nail into the coffin of difference, here is how ChatGPT-4o summarized the transcripts of our conversations over the course of Drew's residency, when given explicit instructions to "embed the writing with conceptual implications of AI in the context of contemporary art concerns from the last few decades."

“Drew shared, "I did a lot of AI work this week... to no avail. But it's okay, it was a journey." His work focused on reinforcing shapes within AI-generated images, yet he discovered that simple descriptive prompts like "perfectly round" could yield similar results. This revelation led him to re-evaluate the complexity of shapes he aimed to achieve.

Despite numerous attempts with color variations and inverse shapes, Drew found that the photographic elements in his "inside, outside" project played a crucial role in the system's ability to identify and replicate shapes. This insight highlighted the importance of edges and defined forms in AI's recognition process.

Drew's experimentation wasn't without its frustrations. He encountered issues with the system producing noise and latching onto colors rather than shapes. However, he appreciated the abstract results and saw potential in further exploring these "failures." He noted, "I kind of love a failure. So I'm not mad about it."

As Drew continued his exploration, he was fascinated by the consistent and captivating results of certain prompts, even if the exact reasons behind their success remained elusive. His iterative process of trial and error led to moments of clarity and unexpected beauty, reinforcing the importance of low-level experimentation.

Drew's residency may be over, but the insights gained and the unfinished explorations will linger in his creative process. As he prepares to move forward, he remains open to revisiting these ideas, possibly in late-night bursts of inspiration. "It's gonna go on to this shelf of things that I do... when it's 2 in the morning," he mused.

Ultimately, Drew's time with us underscored the value of embracing the unknown and the unexpected in the creative journey. His work serves as a reminder that sometimes, the most profound discoveries emerge from what initially seems like a dead end.”

I love ChatGPT. I leave any resulting value judgment to you, dear reader.

Anne Horel


Recently we ran into some issues with the production partner we were planning to release the plushies with. It turns out that Anne's designs are far too intricate for the type of work they do (no real surprise there, honestly): the designs would have been forced to take the shape of prints on fabric dolls instead of being actually embroidered and sewn together from strips of fabric. After evaluating this, we collectively decided to pivot the project to a set of limited-edition AR-enabled T-shirts that will be connected to an exhibition she is organizing on Sculpture in the Expanded Digital Space.

For this exhibition, she will be using several of the designs she created in DALL-E 2 and is currently in talks with a foundry in France to produce some of these as bronzes. We are both extremely excited about this development, and, not to jinx it, but I personally believe this would be an incredible meeting of material and design. Even if it can't be executed during this short window of time, it would be incredible for her to pull this off at some point!

Initially, she was thinking of displaying the masks as 3D-printed objects, made of plastic resin of course. But as we discussed this, we realized that the resin print could actually act as the substrate for a lost-wax cast, an approach that yields unique metal sculptures. The process coats an original object in wax and then in a high-resolution sand slurry that fills every crevice of the coated object. Next, it is put into an extremely hot oven, where the sand slurry hardens, the object inside is burned out, and the wax completely melts away, leaving a mold. A metal such as bronze or aluminum can then be poured into the mold and left to cool, and the sand slurry is finally broken off of the new sculpture. This is a way to get a one-to-one index - a perfect reproduction of an object that is both a copy and completely unique, since both the mold and the original object are destroyed in the production of the sculpture.

I find this strategy beautiful from a material perspective - the bronzes this process yields are incredibly detailed - and also from a conceptual one, which, after the above two sections, I'm sure you could have guessed.

Here are a few of the examples that she is looking to produce!

Finally, she will release full-color versions of these sculptures (since, after all, the bronzes will have a high-polish monochrome gold finish!) on t-shirts available to buy in our on-site store. These t-shirts will have a scannable QR code on them that activates an AR mask overlay on the wearer's head. You can scan these codes using Instagram, Snapchat, or any other AR-enabled application! Here's an example of what these will look like.
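
As a side note on the mechanics: generating the printable code itself is the easy part. Here is a minimal sketch using the Python qrcode package; the URL is a hypothetical placeholder for whatever AR effect link (Instagram, Snapchat, etc.) each design ends up pointing to.

```python
import qrcode

# Hypothetical placeholder for the AR effect link tied to one mask design.
AR_EFFECT_URL = "https://example.com/ar/mask-01"

img = qrcode.make(AR_EFFECT_URL)  # encode the link as a QR matrix image
img.save("mask-01-qr.png")        # print-ready graphic for the shirt
```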

Jason Bailer Losh

Jason is now solidly into the rendering phase of creating his new paintings. He has generated images, created new collaged compositions, and is now in the process of completing the underpaintings and rendering out the final works. Here is an update of where Jason's first painting stands:

digital composition made from 100% AI generated elements

underdrawing

a painting emerges

As you can see in the image above, Jason is currently painting with a 12x12” version of this composition clipped to the top of his painting. In order to get it even this large, he used Topaz AI to upscale his original compositions, which are pieced together from images that are ~1MP each.

We've discussed zooming in even further using these AI up-resing tools. This small move is actually a massive benefit for painters working from small original source photographs. I remember making paintings in college, attempting to zoom in as far as I could on a pixelated image and then making up the rest of the details myself, which often resulted in muddy compositions. Using these new AI tools, a painter can keep zooming in on each section of a small image with seemingly endless detail, leading to greater complexity in each quadrant, if that is what the painter desires. What a simple yet revolutionary new way to “see” a painting as it’s being completed!
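
For reference, here is a minimal sketch of this kind of AI upscaling using the open Stable Diffusion x4 upscaler via diffusers. Jason's actual tool is Topaz, which isn't scriptable like this; the filenames and prompt below are assumptions.

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

# Work on a small crop of the ~1MP source; the upscaler is memory-hungry.
low_res = Image.open("composition_crop.png").convert("RGB")
hi_res = pipe(prompt="detailed painting reference", image=low_res).images[0]
hi_res.save("composition_crop_4x.png")  # 4x the resolution; zoom, crop, repeat
```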

Jason has also been traveling, so we've had to postpone our meeting this week - more on the development of these works in the next update!

Keion Kopper

Keion has officially started his residency and is off to the races training his new fossil-themed model. He comes into the program with the core mission of using AI as an intermediary step in making paintings. Like Jason Bailer Losh, he has been using AI tools to inform his compositional decisions in the creation of these works. So far we've had mainly technical conversations about how to train models and how to use our tools on Civitai, so I look forward to getting into the weeds on his output in the coming weeks. He has an extensive archive of over 250 reference images that he's collected to train on. He will use all of these to train a central model, potentially also breaking them into stylistic categories to train different sub-models (a sketch of that split follows below).
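
On the sub-model idea, here is a minimal sketch of how an archive might be partitioned into stylistic buckets before training. The folder layout, file naming, and category tags are all hypothetical, not Keion's actual taxonomy.

```python
from pathlib import Path
import shutil

ARCHIVE = Path("keion_reference_archive")        # the ~250 collected images
CATEGORIES = ["ammonite", "trilobite", "plant"]  # hypothetical style buckets

for img in ARCHIVE.glob("*.jpg"):
    for tag in CATEGORIES:
        if tag in img.stem:                       # naive filename-based tagging
            dest = Path("submodel_datasets") / tag
            dest.mkdir(parents=True, exist_ok=True)
            shutil.copy(img, dest / img.name)     # one dataset per sub-model
```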

This being his first week, Keion has only a few images on his Figma board, but I look forward to seeing the iterative process of working with our tools in the coming weeks and months!
