I ran across several articles in Harper's this week that really piqued my interest. The one I'll share here is called "The Gods of Logic," by Benjamin Labatut. https://harpers.org/archive/2024/07/the-gods-of-logic-benjamin-labatut-ai/
The piece explores the philosophical and ethical implications of AI through both a historical and fictional lens. Drawing inspiration from Frank Herbert's "Dune" series, the article highlights an imagined catastrophic war against sentient technology and the resulting societal shift toward distrusting and limiting AI. Labatut contrasts this with real-world advances in AI, tracing the evolution from George Boole's foundational work in logic to Geoffrey Hinton's development of neural networks. The most fascinating part I lifted from it, however, is pure nonfiction: Geoffrey Hinton (often called the godfather of AI) is the great-great-grandson of George Boole (whose logic underpins modern computing). Still cleaning the dirt off of my chin.
While it may not be a techno-optimist view, Labatut's nuanced and personal take on the AI safety debate shines a headlamp into the apparent darkness without making any sensational proclamations about the end of the world. I find that refreshing. It also features AI-generated imagery by conceptual artist and renowned horology geezer Phillip Toledano, which is how I found the article in the first place (I follow him for the geezer part). After reading this piece I reached out to Phil, and I'll be having a conversation with him this Friday. Look out for snippets of that convo to appear in the CWA~AI series… now on YouTube!
The AiR Program has a new YouTube channel! We will be releasing all conversations in the CWA~AI series there. For now, we're just releasing short clips of roughly two minutes or less, nuggets selected from our wide-ranging conversations with artists. But in the future, you can expect some longer-form content as well. Check it out here:
A sample snippet of a conversation with Ira Greenberg:
We also established a new account on X where I'll be tweeting, retweeting, sharing some of the aforementioned clips, and generally engaging in the intellectual degeneracy that happens over there. Give us a follow, if you would.
On Wednesday we featured Ira on a special AiR Twitch stream, where he gave a presentation about his work and its evolution from painting and drawing to coding to AI. As you'd expect from a decorated pedagogue, the lecture was most excellent! It can now be found on our main Civitai YouTube channel (cross-posting to our new AiR channel isn't supported, unfortunately). I highly recommend checking it out for more context about his work, but also for insight into the technological evolution that has enabled the past 20 years or so of generative art-making.
For those of you who prefer a text summary to the hour-long video: Ira's approach to generative art-making is deeply rooted in his extensive background in traditional art and his embrace of contemporary technologies. Educated at Cornell and Penn, Ira's early training was in rigorous observational drawing and painting, focusing on mastering the depiction of figures and still life through careful observation. This foundation instilled in him a meticulous attention to detail and a deep understanding of visual composition. Over time, Ira transitioned to abstraction, finding liberation in moving away from representational motifs. This shift allowed him to explore the expressive potential of his work without being confined by traditional rules.
Ira's introduction to computational methods and code significantly impacted his artistic journey, leading him to question and deconstruct his rigorous art education. The advent of AI presented a pivotal moment for Ira, bridging the gap between computation and painting. He began integrating AI into his process, using it as a collaborative tool to generate insights and enhance his work. Highlighting the process he is currently using in his AiR project, Ira feeds his initial sketches and drawings into AI models, which provide subtle shifts and new perspectives that he incorporates back into his art. This iterative process enables him to produce works that surpass the capabilities of both his manual skills and the AI alone.
Functionally speaking, Ira's engagement with AI involves a delicate balance between controlling the creative process and allowing for serendipitous discoveries. He uses tools like Stable Diffusion and Midjourney to iteratively refine his images, adjusting parameters to achieve desired effects while remaining open to unexpected outcomes. Ira describes his method as akin to working with an atelier of students, as Tiepolo or da Vinci did, where the AI acts as an assistant revealing hidden elements and enhancing the overall composition. His work, whether in traditional or digital mediums, reflects a continuous dialogue between his training, his abstract coding explorations, and the generative capabilities of AI. This blend of techniques positions Ira at the intersection of historical art practices and cutting-edge technology, creating a dynamic and evolving body of work.
Recently, Ira has been documenting his AiR project on his Figma board: a new drawing made using these hybrid iterative techniques. There he shares the step-by-step process in short video clips, worth a watch! Head over to his studio to take a look. This charcoal drawing began with a gestural field of marks, out of which emerge shapes, patterns, and figures. He then complicates this manual process of emergence by feeding his drawing into a Stable Diffusion workflow using very low denoise settings. He works off the insights, mutations, and added definition these generations return, but never in a 1:1 way, instead leaning on these new images for conceptual inspiration.
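For anyone curious what a "very low denoise" pass looks like in practice, here's a minimal img2img sketch using the Hugging Face diffusers library. This is not Ira's actual setup; the base model, prompt, file names, and strength value are illustrative assumptions. The principle is simply that a low strength keeps the output close to the source drawing while still returning small mutations to riff on.

```python
# Minimal sketch: low-denoise img2img over a scanned drawing (assumed settings).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical scan of the charcoal drawing
drawing = Image.open("charcoal_scan.png").convert("RGB")

result = pipe(
    prompt="charcoal drawing, gestural marks, emergent figures",  # illustrative prompt
    image=drawing,
    strength=0.25,              # low "denoise": output stays close to the source
    guidance_scale=7.0,
    num_inference_steps=30,
).images[0]

result.save("variation_01.png")  # used as reference material, not a 1:1 replacement
```

At a strength of 0.2-0.3 the model mostly redescribes what is already on the page, which is roughly the kind of subtle shift described above; pushing strength toward 1.0 would hand the composition over to the model entirely.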
Jason has been training up a storm using our on-site trainer. He has also been working (fighting?) with a local instance of A1111, which has proven to be a fairly steep learning curve for a first-time user. He hadn't engaged with local tooling at all prior to this residency, but he now has a decent handle on A1111 and is using the LoRAs he's trained on Civitai to output some solid source material for his paintings. He even mentioned feeling ready to start putting some paint down on canvas - an exciting step!
The data he's training on comes from a continuously growing archive of found images drawn from sources like Architectural Digest, online archives, and photographs he purchases on eBay and the like. These images are interior shots of 60s/70s living rooms entirely devoid of people. The lack of human presence is an important touchstone in his work, imbuing the rooms with the spectre of presence-through-absence. These emptied containers for living raise questions about their potential occupants, their personal lives, and collective histories. Generously open-ended, they become tapestries for associative meaning, to which viewers can bring their own personal histories and associations.
He leans towards a snapshot aesthetic, hoping to capture a latent feeling of human presence - a recency of occupation, as if the spaces were immediately vacated for some unknown reason. Occupied until they weren't.
We often see this in archaeological sites, where researchers struggle to piece together the reason for a site's urgent abandonment. What made them leave? Was it an attack from a neighboring tribe? A fire? Some other natural disaster? Or perhaps a spiritual nomadism that demanded leaving lived spaces behind quickly, and just so. As temporarily empowered cultural anthropologists, when standing in front of Jason's works we get to excavate ourselves for these feelings.
In training, Jason has been uploading and tagging these images meticulously. His strategy involves the use of diverse images, including variations in tone (sepia, black and white), to see how the AI interprets different inputs. He's been primarily outputting 512x512 images and using these low-resolution outputs as a sketching tool. To him, the uncanny hallucinations actually make incredibly compelling compositional foundations for the collaged style of his paintings. He's aiming to incorporate 2-4 final images into a large painting. Here are a few of his favorites so far.
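To make the "low-resolution sketching" step concrete, here's a rough sketch in Python with diffusers of the kind of generation his workflow implies: a trained LoRA loaded over a base model, outputting a small batch of 512x512 images to pick compositions from. The base model, LoRA filename, and prompts are hypothetical stand-ins, not Jason's actual files or settings (he works in A1111 rather than code).

```python
# Rough sketch: generate 512x512 "compositional sketches" with a trained LoRA (assumed names/settings).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical LoRA file trained on the interior-photo dataset
pipe.load_lora_weights("./loras/interiors_60s70s.safetensors")

images = pipe(
    prompt="empty 1970s living room, sepia snapshot, recently vacated",  # illustrative prompt
    negative_prompt="people, person, figure",
    width=512,
    height=512,
    num_images_per_prompt=4,    # small batch of low-res sketches to choose from
).images

for i, img in enumerate(images):
    img.save(f"sketch_{i:02d}.png")
```

Keeping the resolution at 512x512 is what produces the slightly uncanny, loosely resolved results he treats as sketches rather than finished images.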
Alex had a very productive, albeit short, run in the AiR program and has now completed his film, which recently screened at the AI Film Festival in Amsterdam! Well, the film was actually screened as a work in progress, alongside a short lecture about the wider ethnographic project, since just before the festival he received feedback from Kalasha tribal members requesting a few changes. So I'll only share a shorter sample clip here. The final, longer cut will run about 90 seconds.
Alex decided to use a natural setting for the background of the video. This scene is generated from a photograph of one of the three ancestral valleys occupied by the Kalasha. The feeling was that bringing the dance out into a natural setting amplified the ceremony, which has also been slowed down to maximize focus on the performance itself and allow viewers to fully experience the movements of fabrics and textures in the final film.
It's worth touching on some of the tribal feedback provided by the Kalasha, who pointed out a very obvious blunder in the final output. The AI model had inadvertently applied a bindi, or "Hindu red dot," to the foreheads of the dancers, which is not at all part of their cultural traditions. Whether this error was due to the dataset used to train the base model or to user prompting errors (perhaps a combination of both) is still being sorted out. As this project is experimental in nature, we naturally expected there to be some mistakes, and the feedback about the Kalasha's representation in this work is invaluable. It highlights two very important things: 1) AI workflows, if left unchecked by humans, have the capacity to produce culturally inaccurate representations. This is a well-known issue: the models commonly used in English-speaking regions reflect Western cultural datasets and do not cater to cultural nuance outside that experience. And 2) establishing a cultural feedback loop is a mandatory step in any ethnographic work.
We are happy to support this experiment in cultural ethnography, and look forward to seeing future boundary-bending projects from Madhatter Foundation in this space!