As the Civitai Artist in Residence (AiR) program moves into its fourth week, the studios are abuzz with innovation. Each artist is pushing the boundaries of their own practice, exploring uncharted territory within their respective works. Let’s delve deeper into the developments of these past two weeks.
Noah has shown extraordinary narrative and technical range, rapidly producing a new version of his script alongside a complete, advanced storyboard. Using a ComfyUI workflow to assist in the storyboarding process, Noah has achieved consistent character representation, producing a series of black-and-white sketches that lay the foundation for the next stage of his project. Take a look at some of the storyboard, along with a video of his process, below!
Once the script is locked, he will use these to produce a short animatic, essentially a video sketch of the final film. Think of how a painter uses an underpainting to explore the composition of the finished work. The animatic will let him "shoot" every scene before he goes into production, making sure he gets the coverage he needs when cameras turn on.
This film utilizes an aspect of nearly every AI workflow available today, from still-image generation, to animation, to video-to-video workflows, you name it. Building on his sizable following in the AI field, we increasingly have the feeling that this film could serve as a north star for any filmmakers working in the space over the coming months and years. In fact, we are in strategic discussions with several film distributors about making it just that.
Let’s not forget, this short film is still aiming at a 12-14 minute run time. Traditionally, films like this have taken years to produce. And although his workflow will save an immense amount of production time, it still presents technical challenges, particularly when it comes to the lengthy render times. A film like this could take a month or more of pure render time on an A100.
So, we have been scoping out a fix for this. We are planning to use our custom orchestration system to route each individual scene of his film to a separate node in a decentralized network of GPUs. Instead of rendering his film sequentially on one commercial machine, this would allow us to render all 25 scenes in parallel, and on consumer GPUs to boot. This approach promises to reduce what would have been a month-long rendering process to a single day, revolutionizing the p2p compute AI workflow, and enabling Noah to realize his vision for a 12+ minute film without compromising on quality or scale.
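To make the idea concrete, here is a minimal sketch of that scene-level fan-out. The `render_scene` call and the node names are placeholders standing in for the real orchestration system, which submits each scene's workflow to a remote GPU and waits for the frames to come back:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

SCENES = [f"scene_{i:02d}" for i in range(1, 26)]  # the film's 25 scenes

def render_scene(scene_id: str, node: str) -> str:
    # Placeholder for the real remote render call: in practice this
    # would dispatch the scene's workflow to a GPU node and block
    # until the finished frames are returned.
    return f"{scene_id} rendered on {node}"

def render_film(scenes, nodes):
    # Fan every scene out to its own node so the whole film renders
    # in parallel instead of sequentially on a single machine.
    results = {}
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        futures = {
            pool.submit(render_scene, scene, nodes[i % len(nodes)]): scene
            for i, scene in enumerate(scenes)
        }
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results

nodes = [f"gpu-node-{i}" for i in range(25)]
done = render_film(SCENES, nodes)
print(len(done))  # 25
```

With one node per scene, total wall-clock time collapses to roughly the duration of the slowest single scene, which is where the month-to-a-day estimate comes from.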
Arvid has continued shattering the boundaries of digital animation with the introduction of his 'Erase AI Workflow.' This pioneering method, aimed at simulating the naturalistic nuances of organic growth in real time, stands to revolutionize the animator’s toolkit by offering a speed 20 times greater than traditional techniques. And not only is it fast, it can be breathtakingly beautiful.
This advancement is not merely a technical leap; it opens a cornucopia of possibilities for narrative and aesthetic expression in film, advertising, social media, music videos, or any space that leverages digital animation. We’ve already been in talks with several strategic partners, any one of which would be a huge win for the new AiR program. While details remain under wraps for now, these alliances are poised to leverage the groundbreaking techniques fostered by our AiR program for broader creative and commercial applications. Arvid’s work is truly a testament to the transformative potential of AI in enhancing and expanding the creative toolkit of artists and animators alike.
Of course, this workflow was the product of working towards his ODD BIRDS project, and he’s been continually expanding this universe of characters into different worlds. My favorite so far is the ODD BIRD KINGDOM setting. Here are some stills below.
Stay tuned for more updates about ODD BIRDS and potential partnerships coming soon!
Domenec’s Infinite Mesh is nearing the end of its development, with just a couple of weeks remaining before it's ready to be shared with the world. A cornerstone of this final development phase has been the training of a custom LoRA, utilizing Civitai’s on-site trainer. This LoRA was trained on an SDXL base model, and has refined the tool’s story creation capabilities. Check it out here: Forgepop.
If you haven’t seen the site yet - which is the form the final project will take - screenshot below. To establish his vision of an alternate world archive, every story ever generated on the site is saved and accessible. We decided to add black arrows on either side of the main window so you can scroll through other people's completed stories, giving form to this core concept of Infinite Mesh. It also cuts down on a user's time to interact. Instead of waiting for your own story to render (which we've been working on cutting down by routing his jobs through our Civitai enterprise API!) you can click through already existing ones to see new stories instantly.
HINT: this site is live somewhere right now. If you can find it, you can use it while it's still in development. No promises it won't break! Also visible is the text box where you can insert text of any length - really, try an entire novel, that's what it's made for.
Also, scrolling down you now see a generous layout of the panels that compose the short film. Seeing all the matching panels together like this is somewhat of a mouth-watering experience (for me?). There’s something quite beautiful about Forgepop and the image sets it produces, so seeing all of these images laid out in a comic-style arrangement almost makes it feel handmade. This also makes apparent the character and style consistency achieved through his GPT-Vision recognition step, which happens between each image. This basically functions as a stage director, making sure each new prompt is based on the visual interpretation of the previous frame. This is in and of itself an innovative use of the tool, even if it's just a piece of a more complex workflow.
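The stage-director loop described above can be sketched as follows. The `describe` and `generate` callables are stand-ins for the real GPT-Vision recognition step and the image model; the prompt wording and helper names are ours, not Forgepop's actual internals:

```python
def build_panel_prompt(story_beat: str, prev_description: str) -> str:
    # Fold a visual description of the previous frame into each new
    # prompt so character and style details carry forward.
    return (f"{story_beat}. Keep continuity with the previous panel: "
            f"{prev_description}")

def render_story(beats, describe, generate):
    # describe() stands in for the GPT-Vision recognition step and
    # generate() for the image model; both are placeholders here.
    panels = []
    prev_desc = "opening panel: establish the characters and style"
    for beat in beats:
        prompt = build_panel_prompt(beat, prev_desc)
        image = generate(prompt)
        prev_desc = describe(image)  # the "stage director" step
        panels.append(image)
    return panels

# Stub models to show the loop's shape; the real pipeline would call
# a vision model and an image generator instead.
panels = render_story(
    ["The hero enters the forest", "A storm gathers overhead"],
    describe=lambda img: f"description of {img}",
    generate=lambda prompt: f"image<{prompt}>",
)
```

The key property is that each generation is conditioned on a description of what was actually rendered last, not just on the script, which is what keeps characters and style from drifting between panels.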
Another new feature also allows users to download their generated stories, perfect for sharing their AI-crafted narratives across social media platforms or simply saving them to their local device. While the project already boasts the ability to transform any text into a short comic-style film, further refinement is focused on fine-tuning the AI’s interpretation of movement, actions, and objects to enrich the storytelling.
Finally, we are looking into adding the Gutenberg Project’s open source library as an expansive resource of texts, allowing users to pull in any public domain book automatically. On top of the preselected canon of works already available in the tool, this would invite users to reinterpret stories from a wealth of literary classics (think Pride and Prejudice in an alternate dimension).
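If that integration lands, pulling in a public domain book could look something like the sketch below. The URL pattern and the `*** START OF` / `*** END OF` markers reflect how Project Gutenberg's plain-text files are commonly laid out (Pride and Prejudice is ebook 1342), though not every ebook follows the same filename convention and the helper names here are our own:

```python
def gutenberg_text_url(ebook_id: int) -> str:
    # Common pattern for Gutenberg's plain-text UTF-8 files;
    # some older ebooks use different filenames.
    return f"https://www.gutenberg.org/files/{ebook_id}/{ebook_id}-0.txt"

def strip_gutenberg_boilerplate(raw: str) -> str:
    # Gutenberg files wrap the actual book in "*** START OF ..." and
    # "*** END OF ..." marker lines; keep only the text between them.
    start = raw.find("*** START OF")
    end = raw.find("*** END OF")
    if start != -1:
        start = raw.index("\n", start) + 1  # skip past the marker line
    else:
        start = 0
    if end == -1:
        end = len(raw)
    return raw[start:end].strip()

sample = (
    "Produced by volunteers\n"
    "*** START OF THE PROJECT GUTENBERG EBOOK PRIDE AND PREJUDICE ***\n"
    "It is a truth universally acknowledged...\n"
    "*** END OF THE PROJECT GUTENBERG EBOOK PRIDE AND PREJUDICE ***\n"
)
body = strip_gutenberg_boilerplate(sample)
```

Stripping the boilerplate matters because the tool would otherwise try to illustrate Gutenberg's license text alongside the story itself.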
Shinji is the next artist set to join the Civitai AiR program! I wanted to highlight his upcoming exhibition, titled 2600, at NowHere Gallery in New York City. It runs from April 4th - May 5th, with an opening reception on Thursday, April 4th from 6-8pm. If you plan to attend, please RSVP on their website. He will be showing a collection of paintings and custom-developed video games, all based on the Atari 2600. We look forward to welcoming Shinji to the program in mid-April!
Well, that’s Studio Update Two done. The progress made in just two weeks underscores the dynamic potential of AI in the creative process - the excitement of speed and distance! Stay tuned for more updates as our artists continue to chart new territories.