Six Days. Six Tools. One Killer AI Launch.
Inside Deel’s AI-Generated Launch Video — and What It Teaches Us About the Future of Creative Work
Deel, the global HR and payroll platform, launched its new product, Deel AI Workforce, last month, adding agent capabilities to its existing stack. Launching agents is almost a table-stakes requirement this year. But the innovative twist? The launch video itself was AI-generated. The result was clean, entertaining, and fast-paced, and it looked surprisingly “real” overall, with a few fun fantasy moments.
I first saw the announcement from VP of Marketing Leslie Lee, who then pointed me to a follow-up from Alan Roll, Deel’s Head of Brand and Creative. Alan shared a detailed breakdown on LinkedIn of how the video came together. This wasn’t a quick gimmick from amateurs; it was a sophisticated creative team using AI at its highest level. It’s another reminder that the most impactful uses of AI aren’t shortcuts, but experts pushing the tools to their limits.
Watch the video and read the behind-the-scenes below:
But first, a quick ask: I’m doing a survey on the most common marketing & sales AI use cases to help leaders understand where peers are focusing. Please take the survey and I’ll report back here.
Now on to the video:
You can read Alan’s whole LinkedIn post here, or I’ve extracted the text below:
How 6 creatives used 6 AI tools to create 1 launch video in 6 days
There was a lot of buzz when we launched our Deel AI Workforce video last week, and a few questions about how we did it.
TLDR:
The process we creatives have been using for decades is still the same process... kinda. With AI in the loop, some steps are exponentially faster than before, and some can happen at the same time. But with this speed comes a level of unpredictability.
A combination of tools (Whisk, Flow, Visual Electric, Eleven Labs, ChatGPT, Gemini) gave us the best way to mirror our creative process. Don’t try to do it all in one tool.
AI feature access varies due to local AI laws and restrictions. What my team in the US and LATAM could do was different from those in the UK. For global teams, it may require folks to get on a video call and tag-team different parts of the script.
Here’s the play-by-play of how it came to be:
Step 1: Develop concepts & scripts
This was done the old-fashioned way, and we gave ourselves a one-day brainstorm limit. We generated a few clips with Veo 3 to validate the concept.
We developed a VO script and shooting script with scene breakdowns, including camera directions and character actions.
Step 2: Art Direction, Location Scouting, & Casting - where we combined creatives and AI, and the magic really happened
We input a combination of mood boards and prompts into Google Whisk. This replicated the Art Direction process, defining our “locations” and “props.”
We did the same with our “cast” and their “wardrobe.” We also defined the mood with prompt directions for lighting and camera. The outputs of this phase were both images and “aggregated” prompts that could be copied and pasted into Flow.
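To make “aggregated prompts” concrete, here is a rough sketch of how location, cast, wardrobe, lighting, and camera notes might be folded into a single prompt ready to paste into Flow. Every detail below is invented for illustration and is not pulled from Deel’s actual boards or prompts:

```python
# A rough illustration of an "aggregated" prompt: art-direction fragments
# (location, cast, wardrobe, lighting, camera, action) folded into one text
# prompt that can be pasted into a video tool like Flow. All details are
# invented placeholders, not Deel's real art direction.
scene = {
    "location": "sunlit open-plan office with floor-to-ceiling windows",
    "cast": "a payroll manager in her 30s at a standing desk",
    "wardrobe": "casual blazer over a brand-green t-shirt",
    "lighting": "soft morning light, warm tones, gentle lens flare",
    "camera": "slow dolly-in, 35mm lens, shallow depth of field",
    "action": "she glances up as a small avatar appears over her head",
}

# Join the fragments into one prompt string.
aggregated_prompt = ", ".join(f"{key}: {value}" for key, value in scene.items())
print(aggregated_prompt)
```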
We developed the avatars that appeared over our cast’s heads using simple sketches crafted in Figma that were then brought into Visual Electric.
The avatar characters forced us to go back to our “casting” process to make some adjustments. It’s a great example: what would have cost a fortune in reshoots was handled in a few hours and some additional Google credits.
Step 3: Production & Post at the same time
We generated an AI voiceover leveraging Eleven Labs and dropped it into our edit timeline in Adobe Premiere.
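For teams that would rather script this step than use the web app, ElevenLabs also exposes a text-to-speech REST API. A minimal sketch follows; the voice ID, model ID, and script line are placeholders, not anything from Deel’s project:

```python
# Minimal sketch: generate a voiceover clip via the ElevenLabs text-to-speech
# REST API and save it as an MP3 that can be dropped onto an edit timeline
# (e.g. in Premiere). API key, voice ID, and script text are placeholders.
import requests

ELEVENLABS_API_KEY = "YOUR_API_KEY"   # from your ElevenLabs account settings
VOICE_ID = "YOUR_VOICE_ID"            # placeholder: any voice in your library

vo_script = "Meet Deel AI Workforce."  # example line only, not the real script

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={
        "xi-api-key": ELEVENLABS_API_KEY,
        "Content-Type": "application/json",
        "Accept": "audio/mpeg",
    },
    json={
        "text": vo_script,
        "model_id": "eleven_multilingual_v2",  # assumption: any current TTS model works here
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
    timeout=120,
)
response.raise_for_status()

# The response body is the rendered audio; write it to disk for the edit.
with open("vo_take_01.mp3", "wb") as f:
    f.write(response.content)
```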
The shooting script guided our shot generation. Using the images and prompts generated in Whisk, we turned to Google Flow, running Veo 3, where we added camera direction and character performance to the prompts. This was a process of trial and error, some of it quite comical. We evolved the edit shot by shot, generating new clips as we went. (Pro tip: having an expert DP and an editor on the team really helped us nail this part of the process.)
We went back to basics for final finishing. Editing was still manual, as was compositing, and the final audio mix.
It takes a village! Huge thanks to Malena Romano, Jean-Baptiste Di Marco, Dylan Groat, William Simmons, Dimitri W., and Grace Stuart
Impressive! Kudos to the team!
Have your visual teams been creating video using AI? Would love to hear more.
Carilu Dietrich is a former CMO, most notably the head of marketing who took Atlassian public. She currently advises CEOs and CMOs of high-growth tech companies. Carilu helps leaders operationalize the chaos of scale, see around corners, and improve marketing and company performance.