With summer winding down, we're looking forward to a brand-new cohort of artists starting this coming week (September 3rd). Four new artists will be joining us, with a fifth following closely behind, working on projects ranging from interactive AI video projections to a frame-by-frame reinterpretation of 2001: A Space Odyssey, to a large-scale colored pencil drawing that maps the psychogeographic makeup of our core site.
The recording of our first AiR Artist Round Table, featuring all five current artists in residence, is now live on our YouTube channel - check it out below. We plan on doing this once per quarter, along with individual artist features once a month. I'm excited to share more of what's going on in the program in a conversational format that is open to audience engagement. Leave a comment if you have any feedback.
Big update in the world of AR/XR: this week Meta officially decided to shut down its augmented reality platform Metaspark, and with it to rid Instagram of all the AR filters that artists have built on the platform over the past few years. This set off a massive shockwave in the AR/XR community, of which Anne is an originating pillar: she has been involved since the early days, and her AR filters have been used millions of times on Instagram. This move throws into sharp focus the transient nature of digital art and labor, and the platform risk that comes with the tyranny of GAFAM.
She was, of course, planning to use Metaspark to develop the AR-enabled t-shirts that were to be released through our main site as part of her residency project. It's sad to see this technology killed off by Meta so casually, especially in the context of the pending TikTok ban, which coincidentally (?) is slated to happen five days after the deprecation of Metaspark in January (both also curiously close to US Presidential Inauguration Day). The pending doom of these two major AR platforms upends nearly a decade of progress in the field. In the context of the AI gold rush, AR just doesn't seem as sexy, an economic pathos that seems to be playing out in the lifecycles of these platforms. To me this opens up a massive opportunity in a decreasingly niche market that I still view as inevitable, since potential mixed reality use cases are far too vast to ignore. Hopefully some Buffettesque developer will see this as an opportunity to get greedy in the face of fear and bring more variety to the market. Of course, we still have Snapchat's AR ecosystem, but without other mainstream alternatives, there may not be enough competition to incentivize further development and adoption of this tech in the near term.
This also brings to light questions that Anne is deeply engaged with about the manifestation of self through technology. She wrote her Master's dissertation on this theme and is currently considering pursuing a PhD around it. I'd go as far as saying that her whole practice is centered on these questions. What is digital selfhood? What is memory but a materialized data set? What is AI if not a collective consciousness? Is our digital footprint an imprint of the soul? What happens when we die - a final ascension to the cloud?
Immortality is Google-search defined as “the indefinite continuation of a person's existence, even after death,” with existence defined in one case as “the state of being known or recognized.” By these definitions, I believe we are already immortal, if only at a relatively low cache limit compared with our embodied selves. But with existence becoming less and less embodied by the day, the only question left may be: to what degree do we choose to immortalize ourselves? Is the only power we have left simply to unplug? Is that even a choice anymore?
Anne suggested that we turn our weekly studio visits into a podcast. With borderline existential thoughts like this, I’m not sure how many listeners we would have :). Tune in next week for the latest installment of Deep Dark Thoughts by Anne and Chris.
Ira has been winding down his residency by releasing a series of new animated works he calls Reticula. In a recent studio visit we discussed the level of detail in these works and the potential to upres them, but also, more interestingly, the potential of stitching them together into a tapestry-style masterwork: a cinematic quilt of mechanical curiosities; a rough sketch of the idea follows below. For me, this could highlight the generative context that the work draws its motion from: the bits and bytes that go into the reconstitution of each still image would be represented by each work as a unit constituting a larger whole. Visually speaking, I can't wait to see these tests, and I feel this will be a nice resting place for Ira's residency given his recent turn toward motion-only digital works.
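For the technically curious, here's a minimal sketch of that quilting idea, assuming one representative still per animation; the folder name, grid size, and tile dimensions are all hypothetical, and the real stitching would of course happen in video, not stills:

```python
from pathlib import Path

from PIL import Image  # pip install Pillow

# Hypothetical inputs: one still image exported per Reticula animation.
TILE_DIR = Path("reticula_stills")
COLS, ROWS = 4, 4          # a 4x4 quilt of 16 works
TILE_W, TILE_H = 512, 512  # pixel size of each tile

def stitch_quilt(tile_dir: Path) -> Image.Image:
    """Paste each still into its grid cell, left to right, top to bottom."""
    quilt = Image.new("RGB", (COLS * TILE_W, ROWS * TILE_H))
    stills = sorted(tile_dir.glob("*.png"))[: COLS * ROWS]
    for i, still_path in enumerate(stills):
        tile = Image.open(still_path).resize((TILE_W, TILE_H))
        quilt.paste(tile, ((i % COLS) * TILE_W, (i // COLS) * TILE_H))
    return quilt

if __name__ == "__main__":
    stitch_quilt(TILE_DIR).save("reticula_quilt.png")
```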
He also successfully completed a painting he's been working on during the residency. I posted in-progress images of it before, but here is the final work. This process was a conceptual meditation on the idea of embodied diffusion, recasting the work of a painter (especially an abstract one) as a human algorithm of sorts, interpreting the visual world through his own trained context. To make this painting, Ira didn't use any source material, but rather painted “automatically,” reacting to the brush strokes as he made them. I think this links directly into the discussion above of an embodied digital spirituality (if that's what it is?), and as a meditative practice Ira has perhaps here made visible a section of his hyper-diffused soul. Maybe? It's also just a nice painting.
Another potential outcome of this program is that Ira now has an interest in writing a book on ComfyUI! It would take a shape similar to his very successful book on Processing. I'd like to congratulate him on completing the residency and wish him success in penning the first book on this emergent tooling environment. We will certainly provide an update if and when the book is released!
Jason is also wrapping up his residency, and while his painting is not yet complete, the AI work that inspired it is what his residency was really intended to accomplish, and that part is now officially done. From this perspective, the actual oil painting phase is somewhat akin to the GPU rendering of an image or video after designing the full workflow. So, to this I say: off to the render farm! We will definitely post the final painting in the gallery when complete!
Thinking back on his time in the residency, I view his Figma board as an excellent example of how to display the process of training a LoRA. He very generously included his full data sets, along with the captioning that went into the training of each of the two LoRAs he created for use in his paintings. This generosity of data sharing is what the program is all about, and I thank and commend him on his open-source approach to this work.
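For readers who haven't prepared a LoRA data set before, here's a minimal sketch of the convention Jason's board documents: each training image paired with a same-named .txt file holding its caption (a layout most LoRA training tools expect). The folder name and file extension are hypothetical, not Jason's actual setup:

```python
from pathlib import Path

# Hypothetical data set folder: image.png sits next to image.txt,
# where the .txt file contains that image's training caption.
DATASET_DIR = Path("lora_dataset")

def load_captioned_pairs(dataset_dir: Path) -> list[tuple[Path, str]]:
    """Collect (image, caption) pairs, skipping images without captions."""
    pairs = []
    for image_path in sorted(dataset_dir.glob("*.png")):
        caption_path = image_path.with_suffix(".txt")
        if caption_path.exists():
            pairs.append((image_path, caption_path.read_text().strip()))
    return pairs

if __name__ == "__main__":
    for image_path, caption in load_captioned_pairs(DATASET_DIR):
        print(f"{image_path.name}: {caption}")
```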
Here is a final in-progress view of his painting. He's actually not too far off from completing it, with the brown underpainted passages still needing to be rendered more fully in white. Aside from the balance of the overall composition (which is very hard to achieve with this many components and this much depth), some of the most compelling moments for me are the adjacent textures of objects such as the hovering clay-like balls, the glossy Cheshire Cat sticker, and the tousled shag carpet, all informed by the varying AI outputs he collaged into this work. I love this painting and can't wait to see it complete!
I had the great pleasure this week of meeting Andy Bennett, curator of a group exhibition that Inés is taking part in at The Fulcrum Press, a gallery and book shop in Los Angeles. Here are a few install shots from that show. The documentation of the show is not yet live on the gallery website, but I can provide more information upon request. The other artists included in this show are Rahel Levine, Nathan Gulick, and Jessica Wilson.
The works on view are about voice actor Susan Bennett, who provided the recordings used to train the American voice of Siri. The crystal and its laser etchings use the aesthetics of corporate achievement to represent memory and data, and to examine the role of identity ownership in the technological age (there is a common theme here!). Some of the engraved words on these trophy-cum-sculptures are specific lines read by Bennett that were used to train Siri. The fact that Bennett was unaware her voice would be used in the creation of a global tech product like Siri encapsulates much of the intellectual debate around ethical data use in AI today. From the exhibition text:
The portraits that Kivimaki integrates into these works are generated by text-to-image AI, prompted by Bennett's description of Siri, who, when asked, described her avatar in the following way: "I'm six feet tall, I have long brown hair, I can sing, I can dance, I can do it all."
So, is Siri actually Bennett? I don't think anyone would argue that they are one and the same. But what about when AI agents proliferate and we can simply call up a friend's agent to make plans while their owner is busy? What then will be the experiential difference between Siri and Bennett? Could her friends tell the difference? Will we soon each be subject to our own personal Turing tests? As our world inches closer to going full Matrix, these questions become at once more pressing and more confounding.
Keion recently met with our internal ML specialist, Jimmy! After several attempts to train models on his own, with some all-important failures along the way (we learn, we grow!), he began segmenting his data more carefully, with the intent of training several different models on smaller, more cohesive data sets. We also dove into one of the models already available on the site, which yielded some interesting results, but still not exactly what he has in mind. I look forward to seeing the results of his latest training foray, and finally some outputs he is happy with!
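To give a flavor of what that segmentation step can look like in practice, here's a minimal sketch that sorts a captioned image set into smaller, more cohesive subsets, one per planned model. The folder names and theme keywords are hypothetical, not Keion's actual categories:

```python
import shutil
from pathlib import Path

# Hypothetical source folder of images with matching .txt caption files.
SOURCE_DIR = Path("raw_dataset")
# Hypothetical themes, one per smaller, more cohesive model to train.
THEMES = ["portrait", "landscape", "texture"]

def segment_dataset(source_dir: Path, themes: list[str]) -> None:
    """Copy each image and its caption into every subset whose theme keyword appears in the caption."""
    for image_path in sorted(source_dir.glob("*.png")):
        caption_path = image_path.with_suffix(".txt")
        if not caption_path.exists():
            continue  # uncaptioned images are left out of every subset
        caption = caption_path.read_text().lower()
        for theme in themes:
            if theme in caption:
                subset_dir = source_dir.parent / f"subset_{theme}"
                subset_dir.mkdir(exist_ok=True)
                shutil.copy(image_path, subset_dir / image_path.name)
                shutil.copy(caption_path, subset_dir / caption_path.name)

if __name__ == "__main__":
    segment_dataset(SOURCE_DIR, THEMES)
```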