Case study
Why we built our own LoRA software
We translated technical tooling into language filmmakers understand and wrapped it in familiar production structures.
This case study is still evolving as we gather notes from more workshops.
Filmmakers deserve workflow-first training tools
Existing LoRA builders assume a researcher audience: folder structures are technical, terminology is developer-centric, and training settings offer no context for creative decisions. Filmmaking teams were left to dig through documentation, guess parameters, retrain, and lose track of which experiment produced which result.
How we fixed it
We relabeled the experience around concepts filmmakers already use: Character, Look, and Scene context.
- Character Library instead of raw checkpoints
- Dataset Manager instead of folders full of JSON
- Training Experiments, Version History, and Director Notes instead of cryptic logs
Filmmakers can now iterate on identity models the way they iterate on costume tests: versions evolve with intent instead of piling up in random folders.
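The relabeling above amounts to a thin metadata layer over standard LoRA artifacts. Here is a minimal sketch of that idea; the names (`Character`, `TrainingExperiment`, `record_version`) and file paths are hypothetical illustrations, not the product's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingExperiment:
    """One training run, described in a filmmaker's terms."""
    label: str                # creative intent, e.g. "softer key light test"
    checkpoint_path: str      # the raw LoRA file this run produced
    director_notes: str = ""  # free-form creative context, not a log dump

@dataclass
class Character:
    """A Character Library entry: a versioned identity model, not a bare file."""
    name: str
    versions: list = field(default_factory=list)

    def record_version(self, experiment: TrainingExperiment) -> int:
        """Append a new version and return its 1-based version number."""
        self.versions.append(experiment)
        return len(self.versions)

    def latest(self) -> TrainingExperiment:
        """The most recent version, like the current costume fitting."""
        return self.versions[-1]

# Iterating an identity model like a costume test (illustrative data):
hero = Character(name="Mara")
hero.record_version(TrainingExperiment(
    label="baseline", checkpoint_path="mara_v1.safetensors"))
v = hero.record_version(TrainingExperiment(
    label="rain-soaked look", checkpoint_path="mara_v2.safetensors",
    director_notes="keep the scar visible under wet hair"))
```

The point of the sketch is the framing: each checkpoint is reached through a named character and an intentional version history, so an experiment is never just an anonymous file in a folder.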