Vegetables

Project
Vegetables: an AI-generated Halloween B-movie experiment in character and narrative cohesion.
Objective
The goal was to explore how far AI tools could be pushed to maintain believable character identity, wardrobe continuity and narrative flow across an entire concept trailer, while experimenting with the idea of AI-generated characters behaving like real actors reflecting on their own roles.
Role
I led the full creative process, from concept and narrative development to production, art direction, editing and delivery. I designed and tested the end-to-end workflow, managed all image and video generation, crafted the motion graphics and SFX, and directed all voice, audio and post-production.

Vegetables was an experimental Halloween-themed B-movie created to test character consistency and multi-context storytelling using a mix of emerging AI platforms. The project presented a playful, self-aware narrative in which AI-generated cast members discussed the film they had just starred in, blurring the line between fiction and fabrication.

Designing narrative cohesion

The project began with a design-thinking approach to narrative structure. I defined each character as a repeatable asset, mapping personality traits, wardrobe variations and contextual behaviours so they could exist both within the trailer’s story and in the meta-style interview moments outside it. This dual-world structure required a strategy that treated the AI characters as performers with persistent identities, enabling continuity across shifting scenes.
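
As a rough illustration of the “character as repeatable asset” idea, the sketch below shows one hypothetical way such a character sheet could be held as structured data and composed into prompts per context. The CharacterAsset class, the pumpkin_lead example and every trait string are invented for illustration; they are not the actual project assets or any tool’s API.

```python
# Illustrative sketch only: a hypothetical "character sheet" kept outside any
# single AI tool, so every prompt is assembled from the same persistent traits.
from dataclasses import dataclass, field


@dataclass
class CharacterAsset:
    name: str
    core_traits: list[str]                                     # identity features that never change
    wardrobe: dict[str, str] = field(default_factory=dict)     # context -> outfit description
    behaviours: dict[str, str] = field(default_factory=dict)   # context -> performance note

    def prompt(self, context: str, scene: str) -> str:
        """Compose a generation prompt for one context, e.g. 'trailer' or 'interview'."""
        parts = [self.name, *self.core_traits,
                 self.wardrobe.get(context, ""), self.behaviours.get(context, ""), scene]
        return ", ".join(p for p in parts if p)


# Example: the same character rendered inside the film and in the meta-style interview.
pumpkin_lead = CharacterAsset(
    name="weathered pumpkin-headed farmhand",
    core_traits=["gaunt build", "crooked carved grin", "1970s B-movie film grain"],
    wardrobe={"trailer": "mud-stained denim overalls",
              "interview": "same overalls, cleaned up, studio lighting"},
    behaviours={"trailer": "lurching, menacing",
                "interview": "relaxed, wryly self-aware"},
)

print(pumpkin_lead.prompt("interview", "seated against a black backdrop, press-junket framing"))
```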

To build the visual language, I established a cohesive aesthetic by generating master images in Midjourney and using them as anchors for every downstream variation. Weavy acted as a production hub for shot lists, storyboards and workflow branching, allowing me to track identity consistency as scenes evolved. This created a foundation that supported quick iteration without sacrificing the recognisability of the cast.

Engineering character consistency

Achieving stable character continuity across multiple tools meant designing a pipeline that controlled variability while exploiting each platform’s strengths. Nano Banana and Freepik provided motion and scene alternates, while Kling 2.5 introduced dynamic sequences that still held to the established design principles. Every new output was cross-checked against the visual anchor library to avoid drift in features or styling.
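
A minimal sketch of how such an anchor cross-check could be automated, assuming anchor renders and new outputs sit in local folders. The folder paths and the distance threshold are hypothetical, and perceptual hashing (via the Pillow and imagehash libraries) is used as a crude stand-in; in the project the check was editorial rather than scripted, and an embedding-based comparison would catch subtler drift.

```python
# Sketch: flag renders that drift too far from the master-image anchor library.
from pathlib import Path

import imagehash
from PIL import Image

ANCHOR_DIR = Path("anchors/pumpkin_lead")   # hypothetical master-image library
NEW_DIR = Path("renders/scene_04")          # hypothetical new outputs to vet
MAX_DISTANCE = 18                           # Hamming-distance threshold, tuned by eye

# Perceptual hashes of the anchor images establish the "allowed" look.
anchor_hashes = [imagehash.phash(Image.open(p)) for p in sorted(ANCHOR_DIR.glob("*.png"))]

for render in sorted(NEW_DIR.glob("*.png")):
    new_hash = imagehash.phash(Image.open(render))
    closest = min(new_hash - a for a in anchor_hashes)   # distance to the nearest anchor
    flag = "OK   " if closest <= MAX_DISTANCE else "DRIFT"
    print(f"{flag} {render.name} (distance {closest})")
```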

Audio and performance refinement added another layer of complexity. Using VEO 3.1 Fast, HeyGen and ElevenLabs, I developed distinct voices and ensured emotional alignment with each character’s role. Maintaining believable lip sync over longer sequences required careful timing adjustments in Premiere Pro and supplemental frame correction in Photoshop. These refinements were essential for grounding the playful concept in a level of polish that made the characters feel unexpectedly real.

Vegetables demonstrated how a multi-platform AI workflow can maintain narrative and character integrity across complex creative outputs. The project validated new methods for achieving style cohesion, identity consistency and audio-visual alignment, while showing how experimental storytelling can push AI tooling into more cinematic territory.

Hello Person Ltd
Strategy - Creative - Design - AI