Studio Update One
March 29, 2024 8:09 AM
Chris Adler

In what was initially forecast as a period of slow acclimation where artists would get their feet wet in the new digital studio and flesh out next steps to execute on their proposal, the first two weeks of the new Civitai AiR program have been explosive. In case you missed it, here is the list of artists we selected and links to their websites:

Noah Miller

Arvid Tappert

Domenec Miralles

Shinji Murakami (April)

If you want to follow along, each artist’s name below links to their digital Studio on air.civitai.com - go and have a look for yourself at their works in progress. These are real-time visual studios that reveal the artist’s process: a collection of their thoughts, inspirations, sketches, tests, storyboards, recipes, workflows, etc. And since these are live, you might even happen across an artist actively working in their studio!

A few updates from the studio:

Arvid Tappert

Arvid has been working on an adaptation of his ODD BIRDS project called ODD BIRD BEHAVIORS.

These works are a collection of vignettes centered around various ODD BIRDS characters. They are contemplative, mundane, and funny short films that display Arvid’s virtuosic command of the Krita AI Diffusion animation suite to produce his Wes Anderson inspired aesthetic.


While each short will be a standalone work, we’ve also been discussing them as potentially modular, or randomly woven together into an ever-changing short film. We discussed a collection of poems by Anne Carson called FLOAT. The poems in it are each individually bound and can be shuffled and read in any random order. This stochastic function gives the work a beautiful, almost prophetic impact. So there’s some potential for the longer-form film to be somewhat interactive.

Workflow

As a traditional animator working for Swedish Public TV, Arvid approaches AI as a tool in the animator’s toolbox. Much of his work involves training custom LoRAs for texturing and matting his 3D animations.

From Arvid:

“Much of AI-prompted video lacks the thrill of creation and control over the output. The magic happens when human creativity and traditional methods intersect with AI tools, developing your own AI models and workflows based on your unique artistic vision.” 


While he’s experimenting with several different methods, the broad strokes of his workflow are as follows:

  1. Render 3D textures in Blender

  2. Train AI models in kohya_ss

  3. Add the models to the Krita AI app

  4. Draw and create AI claymation output from the Krita diffusion engine
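As a rough illustration of step 2, here is a hypothetical sketch of how a LoRA training run might be assembled for kohya_ss’s sd-scripts. The paths, base model, and hyperparameters are placeholders for illustration only, not Arvid’s actual settings.

```python
# Hypothetical sketch: assembling a kohya_ss (sd-scripts) LoRA training
# command for a clay-texture dataset rendered out of Blender.
# All paths and hyperparameters below are illustrative placeholders.

def build_lora_train_cmd(base_model: str, data_dir: str, out_dir: str,
                         name: str, steps: int = 1500) -> list[str]:
    """Return an argv list for sd-scripts' train_network.py."""
    return [
        "accelerate", "launch", "train_network.py",
        f"--pretrained_model_name_or_path={base_model}",
        f"--train_data_dir={data_dir}",
        f"--output_dir={out_dir}",
        f"--output_name={name}",
        "--network_module=networks.lora",  # train a LoRA, not a full fine-tune
        "--network_dim=16",                # LoRA rank; small ranks suit texture styles
        "--resolution=512,512",
        "--learning_rate=1e-4",
        f"--max_train_steps={steps}",
    ]

cmd = build_lora_train_cmd(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    "./datasets/clay_renders",         # Blender renders + captions
    "./output/loras", "clay_texture",
)
print(" ".join(cmd))
```

The resulting `.safetensors` LoRA would then be loaded into Krita AI for the live-painting step.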

Here is a video showing one of his workflows: a custom-trained clay LoRA applied to live AI painting in Krita Diffusion, a method called “AI Shader”.

Clay texture LoRA

ODD BIRDS LoRA + live draw AI Shader method

Stay tuned - we may make a more in-depth workshop/tutorial available from Arvid soon. Let us know if you’d be interested in something like this!

Domenec Miralles

In our first formal studio visit, Domenec surprised me with an auxiliary project that he’d started previously but hadn’t had the resources to continue developing. The project was way outside of the purview of his original proposal. But, since the core intent of the AiR program is to cultivate compelling artistic output, full stop, I’m quite excited that his exploratory approach ended up revealing this gem of a work.

Made in React, Infinite Mesh is a full-fledged interactive AI tool that creates original animated short films based on literary works - novels, at the moment. Named after the beta text used during its design - Infinite Jest, by David Foster Wallace - Infinite Mesh blends multiple AI tools, including LLMs and Stable Diffusion, into a single workflow capable of outputting fully formed short films. I was so taken by his reveal of this latent project that I encouraged Domenec to focus on it entirely, rather than his original short film idea.
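To give a feel for what such a text-to-film pipeline involves, here is a minimal, hypothetical Python skeleton of the novel-to-storyboard stage. The stage functions are stubs standing in for the real LLM and Stable Diffusion calls; none of these names come from Domenec’s actual codebase.

```python
# Minimal sketch of a novel-to-film pipeline in the spirit of Infinite
# Mesh, as described above. The stage functions are stubs; the real tool
# would wire an LLM, Stable Diffusion, and a TTS/video step behind them.

from dataclasses import dataclass

@dataclass
class Scene:
    summary: str       # LLM-condensed beat from the source text
    image_prompt: str  # Stable Diffusion prompt derived from the summary

def split_into_beats(text: str, n_scenes: int) -> list[str]:
    """Stub: chunk the source text into n roughly equal beats."""
    words = text.split()
    size = max(1, len(words) // n_scenes)
    chunks = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    return chunks[:n_scenes]

def beat_to_scene(beat: str) -> Scene:
    """Stub: the real pipeline asks an LLM to summarize and prompt-ify."""
    summary = beat[:80]
    return Scene(summary=summary, image_prompt=f"cinematic still, {summary}")

def storyboard(text: str, n_scenes: int = 6) -> list[Scene]:
    """Turn raw text into an ordered list of scenes to render."""
    return [beat_to_scene(b) for b in split_into_beats(text, n_scenes)]

scenes = storyboard("A screaming comes across the sky. " * 40, n_scenes=6)
print(len(scenes), scenes[0].image_prompt[:40])
```

Each `Scene` would then fan out to image generation, animation, and narration before being stitched into the final film.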

Here’s a 1:30 short film generated from Gravity’s Rainbow by Thomas Pynchon.

The goal is to make this tool into a standalone site that can be used by anyone. Domenec’s final “exhibition” will actually be launching this working product out into the world. 

Over a couple studio visits we’ve defined a series of broad phases for pushing this project forward.

Phase 1: Make It Better

Phase 2: UI Updates

Phase 3: Sustainability

Some of the improvements he is focusing on include:

  • Upgraded vocals (elevenlabs?)
  • Multiple voices to better capture dialogue
  • User uploaded texts
  • Sliders to change both input length (under the covers) and output length (final video)
  • 16:9 aspect ratio
  • Storage of all unique generations in a feed below
  • Ability to share to social (storage of previous generations)
  • Ability to download
  • An actual Infinite Mesh, where the previously generated stories are all strung together to make a longer film, or where a single GPU is dedicated to continuously pre-caching an entire novel end to end, and you can just step into the flowing stream wherever you'd like.

Noah Miller

Noah comes from a strong traditional filmmaking background, and brings a very broad skillset to the AiR program. His film project, Zero, initially a 5 minute short, has evolved into a 15-20 minute film! 

Over the past two weeks, Noah has demonstrated significant progress on his quickly evolving project, showcasing a deep dive into conceptual development, technical experimentation, and narrative exploration.

You’ll have to check out his Studio for the full picture (the layout of which is beautiful in and of itself) - he’s already managed to deliver an entire 4 act storyboard, production roadmap, character psychology visualization, an 18 page script (available to read in the studio), a few WIP test clips, and an inspiration section filled with music and films that are currently informing his creative process.

Zero title image

In Noah’s words Zero is 

“a sci fi short in the tradition of Solaris, Lost, Annihilation, and many others.

The goal is to create an animated film utilizing AI models to create vid2vid animations that replicate a traditional animation pipeline but allow for the fidelity of emotional human performances. Utilizing LoRAs for characters/costumes, and IPAdapters for heavy style transfer.”

His focus on filming real actors responding in real space, then transforming that footage with AI - a sort of live-capture animation transfer - is designed to give the resulting animation life beyond what traditional “booth acting” can impart. You get real, physical stage dynamics as opposed to isolated performances.

A few of his storyboards:

Act 1 conceptual storyboard

Act 3 conceptual storyboard + psychological mapping of Tara

An AI generated storyboard sketch.

Since his workflow relies heavily on video input, we are currently looking more into the capabilities of high-res inputs into ComfyUI. Typically, his process has been to edit in SD or 720p, then upres using Topaz or the like on output. This works well for simpler animations, but for a more complex animation with critical inpainting and detailed wide shots, it would give him more control if ComfyUI could accept higher-res footage, ideally 4K.
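The arithmetic behind that edit-low, upscale-later approach is simple. Here is an illustrative sketch; the 720p working cap and the snap-to-multiples-of-8 constraint (typical of latent diffusion models) are assumptions about the general technique, not Noah’s exact settings.

```python
# Illustrative resolution math for an edit-low, upscale-later vid2vid
# workflow: pick a working resolution for the diffusion pass (latent
# models generally want dimensions divisible by 8), then note the
# upscale factor a tool like Topaz must recover on output.

def working_resolution(src_w: int, src_h: int, max_h: int = 720) -> tuple[int, int, float]:
    """Downscale src to at most max_h tall, snapped to multiples of 8.

    Returns (work_w, work_h, upscale_factor_back_to_source_height).
    """
    scale = min(1.0, max_h / src_h)
    work_w = int(src_w * scale) // 8 * 8
    work_h = int(src_h * scale) // 8 * 8
    return work_w, work_h, src_h / work_h

# A 4K UHD source edited at ~720p for the diffusion pass:
w, h, factor = working_resolution(3840, 2160, max_h=720)
print(w, h, round(factor, 2))  # → 1280 720 3.0
```

A 3x upscale is a lot of detail to ask an upscaler to reinvent, which is why accepting 4K directly in ComfyUI would matter for inpainting and wide shots.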

Since this new world of complex workflows is so vast, if anybody out there knows of a solid way to do high res workflows in Comfy or otherwise, please reach out. I will be focusing on connecting him with other known creators in the space to target an optimal workflow for his project, and am very excited to see where we land in a couple weeks!

Please reach out with any questions or collaborations. Stay tuned for more updates soon.

©Civitai 2024