One still from Midjourney, next to the Runway clip that brings it to life. The idea is created once — and then moves.
All four pairs edited together — stills, clips, sound. Voices via ElevenLabs, cut & mix in Adobe Premiere.
As of August 2025




Every clip here is the end of a long chain, not the output of a single prompt. The starting point is always the universe.
I develop the universe with an LLM: lore, characters, atmosphere, rules. The LLM is co-author, not ghostwriter — I sketch, it pushes back, I sharpen.
Prompt into Midjourney, result back to the LLM, tell it what to change, new prompt out. Again and again, until the look is right — palette, light, atmosphere, style.
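The loop above can be sketched as a small control flow. This is a minimal illustration, not the actual tooling: `ask_llm` stands in for a chat-model call and `render` for a manual Midjourney run, both stubbed here so the loop is runnable; the critique notes are invented examples.

```python
def ask_llm(prompt: str, critique: str) -> str:
    """Stub: an LLM revises the image prompt based on a critique note."""
    return f"{prompt}, {critique}" if critique else prompt

def render(prompt: str) -> str:
    """Stub: stands in for a Midjourney generation."""
    return f"image<{prompt}>"

def looks_right(image: str) -> bool:
    """Stub: the human judgment call on palette, light, atmosphere, style."""
    return "muted palette" in image and "volumetric light" in image

def refine(prompt: str, max_rounds: int = 10) -> str:
    """Iterate prompt -> image -> critique until the look is dialled in."""
    critiques = ["muted palette", "volumetric light"]  # invented example notes
    for round_no in range(max_rounds):
        image = render(prompt)
        if looks_right(image):
            return prompt
        # Result goes back to the LLM with a note on what to change:
        prompt = ask_llm(prompt, critiques[round_no % len(critiques)])
    return prompt

final = refine("lone lighthouse keeper, stormy coast")
```

The point of the sketch is the shape of the loop, not the stubs: the exit condition is a human judgment, and every pass routes the result back through the LLM rather than editing the prompt by hand.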
With the dialled-in look as a reference, I generate a data pool: many variants of the same scene, multiple characters in the same style. Consistency through reference mode, not through luck.
Still into Runway, motion prompt on top — camera movement, subtle body motion, ambient detail. Iterated here too, until the movement complements the frame instead of betraying it.
All clips pulled into Premiere: cut, voices from ElevenLabs, music, a bit of color grading. That's what turns individual frames into a piece.
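The voice step is the one with a public API behind it. A hedged sketch of building (not sending) a request against the ElevenLabs text-to-speech endpoint; the `VOICE_ID`, `API_KEY`, and model name below are placeholders, not the voices actually used here.

```python
import json
import urllib.request

VOICE_ID = "your-voice-id"   # placeholder, not a real voice
API_KEY = "your-api-key"     # placeholder credential

payload = {
    "text": "Line of dialogue for one of the characters.",
    "model_id": "eleven_multilingual_v2",
}

# Builds the request object only; urllib.request.urlopen(req) would
# return audio bytes to drop into the Premiere timeline.
req = urllib.request.Request(
    url=f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    data=json.dumps(payload).encode("utf-8"),
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)
```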