Baby Designers

Project
When two AI baby designers start a podcast, the workflow tells a bigger story.
Objective
To explore a fully integrated AI workflow capable of producing consistent, character-based audio-visual content while testing the reliability and efficiency of emerging creative tools.
Role
I led the entire process, from concept and narrative development to character creation, scripting, animation, audio design and final post-production, orchestrating a seamless multi-platform workflow.
Baby Designers is a playful yet rigorous exploration of an end-to-end AI production pipeline. The project tested how far AI tools can be combined to create repeatable, character-driven content to a professional standard.
Creative workflow design
The project began with world-building, character definition and scripting inside ChatGPT, using the GPT-4o image model to generate cohesive visual identities and a narrative foundation. This early stage focused on establishing tone, behaviours and a storytelling structure that would translate across every downstream output. Once defined, the characters were brought to life through ElevenLabs, where expressive, natural voices added emotional nuance and consistency. This step validated the potential of AI voice synthesis to support episodic content that feels authored and intentional.
With the script and dialogue established, Hedra Character 3 was used to animate the static character images into performance-led video assets. This involved testing prompt structures, refining facial expressiveness and ensuring the subtle lip sync aligned with the recorded voices. To build a richer sensory experience, background music was generated with Riffusion, allowing the soundtrack to be uniquely tailored to each scene. This stage proved the value of treating AI tools as modular components within a single creative ecosystem.
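The modular, hand-off-driven shape of this workflow can be sketched in code. The following is purely illustrative: every function is a stub standing in for one tool in the chain (no real ChatGPT, ElevenLabs, Hedra or Riffusion APIs are called), and the names and fields are assumptions chosen to show how each stage's output feeds the next.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """A generated artefact passed between pipeline stages."""
    kind: str      # e.g. "script", "voice", "video", "music"
    source: str    # illustrative label for the tool that produced it
    payload: str   # stand-in for the actual file or data

# Each stage is a hypothetical stub modelling one hand-off in the
# workflow; none of these invoke real tool APIs.

def write_script(premise: str) -> Asset:
    return Asset("script", "ChatGPT", f"script for: {premise}")

def synthesise_voice(script: Asset) -> Asset:
    return Asset("voice", "ElevenLabs", f"voiced {script.payload}")

def animate(voice: Asset) -> Asset:
    return Asset("video", "Hedra Character 3", f"lip-synced {voice.payload}")

def score(video: Asset) -> Asset:
    return Asset("music", "Riffusion", f"soundtrack for {video.payload}")

def produce_episode(premise: str) -> list[Asset]:
    """Run the full chain; each asset feeds the next stage."""
    script = write_script(premise)
    voice = synthesise_voice(script)
    video = animate(voice)
    music = score(video)
    return [script, voice, video, music]

assets = produce_episode("two AI baby designers start a podcast")
print([a.source for a in assets])
# ['ChatGPT', 'ElevenLabs', 'Hedra Character 3', 'Riffusion']
```

Because each stage only consumes the previous stage's output, any single tool can be swapped out without disturbing the rest of the chain, which is the point of treating the tools as modular components.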
Production integration
The final phase focused on stitching the entire workflow into a cohesive piece inside DaVinci Resolve. Here, pacing, timing and narrative clarity were refined to ensure the story remained engaging without losing the charm of the experimental concept. The edit became the point of convergence where all AI-generated assets were assessed for consistency and refined for quality control.
Throughout post-production I evaluated friction points in the workflow, identifying where human direction added value and where automation increased speed. The result was a blueprint for how similar character-based content could be produced at scale, with strategic checkpoints to maintain creative integrity. This iterative process highlighted how AI tools, when coordinated effectively, can streamline production without sacrificing narrative depth.