A Frame-by-Frame Look at How Generative AI Supercharges Creativity

AI, AI & Emerging Technology Consulting, Digital transformation, Experience · 6 min read

Written by
Labs.Monks

A landscape animated mountainside with mist and fog

By now, you’ve seen it all over social media: uncanny images painted by artificial intelligence. Fun to play with thanks to its accessibility, generative AI has exploded in popularity online. But it’s also raised questions about the nature of human creativity: what is the value of artistry and craft if anyone can generate images in a few seconds?

The impressive output of generative AI has led some to voice concerns about whether their livelihoods are in jeopardy. Creativity, after all, has long been considered a strictly human skill.

But creatives aren’t about to lose their jobs to robot overlords who can spin strings of text into pixelated gold. On the contrary, these tools—which rely on human input and some level of artistic aptitude to really shine—are unlocking creative potential and helping people bring their concepts to life in new ways. This outlook prompted the Labs.Monks, our research and development team, to explore how generative AI can uplevel the work of our teams and our clients.

“We’ve been playing with this technology for a while, and after it began to trend, we’ve been getting more and more questions about it,” says Geert Eichhorn, Innovation Director and Head of Labs. For instance: a lot can be said about the future of content creation aided by AI, but how could today’s tools integrate into a present-day production pipeline? 

Looking for an answer, the Labs.Monks collaborated with animators and illustrators on our team to develop a prototype production workflow that blends traditional animation methods with cutting-edge AI technology. The result is an animated film trailer made in a fraction of the time and with a fraction of the resources that a typical frame-by-frame animation of its length would require.

Learn to live with the algorithm.

Ancestor Saga is a 2D-animated side project focused on a central question: what if people in the Viking Age realized they were living in a simulation? After learning that their purpose in life is to entertain the gods, will they accept their new reality, or put an end to the world by bringing about Ragnarök?

The theme might feel familiar to anyone trying to make sense of the increasingly algorithmic world we’ve suddenly found ourselves in. “We wanted to tell a story that could integrate with the tech we’re using: virtual worlds and virtual people,” says Samuel Snider-Held, Creative Technologist. Associate Creative Director Joan Llabata takes this thought further, citing some of the challenges faced when humans and AI don’t quite connect. “There’s some space where we need to find the best way to communicate with the machine effectively,” he says.

When using generative AI, a bespoke approach is best.

That challenge of getting humans and AI to play nice demonstrates the need for a team like the Labs.Monks to experiment with the tools that are available. While off-the-shelf tools are great for empowering individual creators, integrating them into team pipelines requires a more custom solution.

AI is designed to do specific tasks very, very well. Projects that involve multiple capabilities and phases therefore call for a workflow that can integrate a variety of generative AI models, each fulfilling a different goal along the way. With an animation project, this means plugging AI into creative concepting, storyboarding, sound and, of course, animating the visuals.

In our case, says Snider-Held, “We wanted to explore how AI could allow us to do the work we really want to do, even if the time or the budget isn’t there.” He found that while our animation team loves classic, frame-by-frame animation, the method is often overlooked because it is slower to produce and less cost-efficient than other ways of animating. 

Now the team had a clear goal: orchestrate an AI-based workflow that could output a frame-by-frame animation in record time, without compromising quality. They took inspiration from rotoscoping, a method used by animators like Ralph Bakshi, in which an artist traces images over existing footage. This task of translating an existing recording from one style to another was ideal for image-to-image generative AI. In addition, the team used AI technology to develop background designs and generate the animation’s voiceover.
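
To make that image-to-image “rotoscoping” step concrete, here is a minimal sketch using Stable Diffusion’s image-to-image mode via the Hugging Face diffusers library. The checkpoint, prompt and strength values are illustrative assumptions, not the team’s actual settings.

```python
# A hedged sketch of AI-assisted rotoscoping: restyle one frame of footage
# with Stable Diffusion image-to-image. All specifics here are assumed.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# One frame of source footage: a virtual production render or live-action still.
frame = Image.open("frame_0001.png").convert("RGB").resize((512, 512))

# "Trace over" the frame: a moderate strength keeps the pose and composition
# of the source while repainting it in the prompted style.
result = pipe(
    prompt="hand-drawn 2D animation still, Viking warrior, flat colors",
    image=frame,
    strength=0.5,        # lower = closer to the footage, higher = freer repaint
    guidance_scale=7.5,  # how strongly the prompt steers the repaint
).images[0]
result.save("frame_0001_stylized.png")
```

Run over a clip frame by frame, this is the style translation the paragraph describes; keeping the strength modest is what preserves the original footage’s acting.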

Generative AI isn’t a radical departure from tradition.

The team began by recording a 3D character model in a virtual setting, capturing a variety of poses for an illustrator to trace over. These visuals were then used to train the AI model on how to draw the character in different movements. “If you draw about five frames, you have enough to teach a neural network how to paint the others,” says Snider-Held, noting that it’s important to select frames that are different from one another so the AI can pick up on various forms, shapes and poses.
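
Snider-Held’s tip about picking frames that differ from one another can be sketched in code. Below is a hedged stand-in: greedy farthest-point selection on raw pixel distance, which surfaces a handful of maximally dissimilar frames for the illustrator to trace. The team’s actual selection criteria aren’t documented here.

```python
# A simple stand-in for choosing diverse training frames: greedily pick the
# frame farthest (in mean squared pixel distance) from everything chosen so far.
from pathlib import Path

import numpy as np
from PIL import Image

def load_frames(folder: str, size=(128, 128)) -> np.ndarray:
    """Load all PNG frames in a folder as small grayscale arrays."""
    paths = sorted(Path(folder).glob("*.png"))
    return np.stack([
        np.asarray(Image.open(p).convert("L").resize(size), dtype=np.float32)
        for p in paths
    ])

def pick_diverse_frames(frames: np.ndarray, k: int = 5) -> list[int]:
    """Return indices of k frames that are maximally far apart in pixel space."""
    chosen = [0]  # seed with the first frame
    while len(chosen) < k:
        # For every frame, the distance to its nearest already-chosen frame...
        dists = np.min(
            [np.mean((frames - frames[c]) ** 2, axis=(1, 2)) for c in chosen],
            axis=0,
        )
        # ...then take the frame that is farthest from all of them.
        chosen.append(int(np.argmax(dists)))
    return sorted(chosen)

frames = load_frames("clip01_frames")
print("Frames to hand-illustrate:", pick_diverse_frames(frames, k=5))
```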

In addition to rotoscoping virtual production, the team also experimented with live-action stock footage. Being able to use two different types of visual source material baked extra flexibility into the process; teams could mix and match the methods according to their specific needs or abilities. Fantastical creatures might be captured more easily in virtual production, while a team less practiced at animating lifelike movement might prefer using live-action film as a base. “You get better acting from footage versus a 3D model, but the visual output is ultimately the same,” says Snider-Held.

Much like how that process emulated classic rotoscoping by hand, other ways of integrating AI followed a traditional animation process, albeit with some additional steps here and there. For example, the storyboarding phase is important for visualizing which types of shots or animations are needed for a specific sequence. Alongside those decisions, the team also planned which kind of AI would be best suited to generate each shot.

Using Stable Diffusion—a kind of generative AI that translates a text prompt into an image—allowed the team to create a large volume of backgrounds that they could swap in and out to test how they looked. “You can explore a lot in this phase,” says Snider-Held.

As for developing backgrounds in particular, “It’s like describing the shot you want to a director of photography,” says Llabata. He was able to test hundreds of different environments, camera angles, artistic styles, lighting and more, all with relative ease.
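
As a rough illustration of that exploration loop, the sketch below asks a text-to-image Stable Diffusion pipeline for several candidate backgrounds from a single prompt; the model ID, prompt wording and settings are assumptions made for the example.

```python
# A minimal sketch of batch background exploration with text-to-image
# Stable Diffusion via diffusers. Everything specific here is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Describe the shot as you would to a director of photography, then iterate
# on wording, camera angle and lighting between batches.
prompt = ("a lone house amid misty mountains and fields, 2D animation "
          "background, wide shot, dawn light")

images = pipe(prompt, num_images_per_prompt=4, guidance_scale=7.5).images
for i, img in enumerate(images):
    img.save(f"background_v{i}.png")
```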

a grid of landscapes of a house amid mountains and fields

Unlock efficiencies and long-term gains.

The findings above hit on perhaps the biggest gain that a generative AI-powered workflow can provide: greater flexibility throughout the life of a creative project. Being able to generate 60 frames in one minute—rather than one frame in 60 minutes—makes it incredibly easy to pivot or change things up in the blink of an eye.

Monk Thoughts: “It’s a producer’s dream to be able to create so many assets so flexibly. It redefines linearity in the pipeline because you can always go back and change things,” says Joan Llabata.

It doesn’t require a sophisticated hardware setup either, further making content creation accessible to teams of all sizes. “You don’t need a giant server or cloud computing,” says Eichhorn. “A reasonably good gaming PC can churn out assets like backdrops quickly.” Still, more complex uses of AI like rotoscoping may require more power.

The flexibility unlocked by integrating generative AI into a team’s pipeline continues to pay dividends beyond the life of a single project. “If you have a project whose scope is really big, the effort and money you spent in that R&D is compounded in value over time,” says Snider-Held, noting that whether a brand wants to make 10 animations or 30, the steps to lay down an AI-powered foundation will be roughly the same.

Experiment to find an approach that suits your needs.

Tools like Stable Diffusion aren’t meant to replace those in the creative field. “An AI will not achieve anything by itself,” says Llabata. Instead, these products will give teams the ability to chase more ambitious projects with fewer constraints on time and budget. Just consider how closely the creation of the Ancestor Saga trailer follows a traditional animation process, only with more efficiencies baked in.

Such flexibility afforded by generative AI can go well beyond traditional animation.

Monk Thoughts: “The merging of data and creativity is something we’re always exploring at Media.Monks, and this technology is going to supercharge that. Imagine using data that we already use for media campaigns to generate hyper-personalized images,” says Geert Eichhorn.

Whatever your use case for generative AI, understand that while building tools from scratch can be challenging, the result is extremely powerful. “Our approach is that if an off-the-shelf tool is mature enough, use it. If not, create it yourself,” says Snider-Held. In addition to ensuring a tool is calibrated for their specific needs, teams that go the bespoke route will be better poised to future-proof their pipelines as the technology continues to evolve at a rapid pace.

So, think you’re ready to explore what generative AI means for your field? Learn more about the ins and outs of the technology in the latest Labs Report exploring the rapid evolution of digital creation.
