Technical Report
Creative Control Requires Compute Control
Portable AI Infrastructure for Filmmaking: A Remote Production Case Study Using Nomad
Abstract
Generative AI tools are entering filmmaking workflows across concept development, storyboarding, visual experimentation, and audio prototyping. Most of these systems depend on centralized cloud infrastructure, which introduces privacy concerns, workflow dependence, and limits on rapid creative iteration.
This paper presents a case study of a remote production weekend in which a portable local compute system was used to support AI-assisted trailer development outside a traditional studio or VFX environment. The goal was not to run a formal research study, but to document whether local high-performance compute could realistically support creative work in a mobile production context.
The case study focused on three infrastructure requirements:
- Privacy: Data control on-location
- Portability: Mobility between production sites
- Performance: Local iteration at speed
Together, these requirements shaped whether AI could function as a practical filmmaking tool rather than a remote service. This paper documents the production context, workflow stack, operational observations, limitations, and broader implications for AI-assisted filmmaking.
1. Introduction
Artificial intelligence is beginning to influence multiple stages of filmmaking. Generative models can support concept art, storyboarding, set design visualization, character exploration, visual effects experimentation, and sound prototyping.
Most generative AI systems, however, are designed around centralized infrastructure. They assume stable internet access, fixed workstations, and cloud-based processing. Film production rarely works that way.
Filmmakers move between temporary offices, remote locations, travel environments, and mobile production setups. Creative decisions often happen in places where connectivity is inconsistent or where uploading sensitive materials to external services is undesirable. This creates a direct mismatch between AI infrastructure assumptions and production reality.
2. The Infrastructure Problem in Creative AI
Most generative AI platforms operate through centralized services. While these systems can be powerful, they introduce structural limitations for creative production. Three problems repeatedly emerge:
Data Privacy
Filmmaking involves proprietary assets such as scripts, treatments, visual references, footage, and custom datasets. Uploading these materials to external platforms creates privacy and ownership concerns.
Workflow Dependence
Cloud systems place creative work at the mercy of external infrastructure. Queues, rate limits, outages, and moderation layers can interrupt production timelines and reduce iteration speed.
Insufficient Local Performance
Generative image, video, and audio workflows require substantial compute power. Without strong GPU performance, local AI becomes too slow to support active creative decision-making.
"Creative control requires compute control."
3. Core Infrastructure Requirements
This case study evaluates three requirements for AI-assisted filmmaking. These requirements determine whether generative AI can function inside a real production workflow.
3.1 Privacy
Local compute allows filmmakers to run generative tools without transmitting working files to outside systems. This aligns with long-standing film industry practice, where editorial systems, media storage, and post workflows are often tightly controlled.
3.2 Portability
Production is mobile by nature. Traditional high-performance workstations are usually difficult to move. A portable compute system changes that equation by allowing AI workflows to travel with the production rather than forcing the production to work from a fixed machine.
3.3 Performance
Privacy and portability are irrelevant if the machine cannot generate useful outputs fast enough to support iteration. Portable AI only becomes practical when local performance is strong enough to support repeated experimentation in real time.
4. Case Study Overview
This case study documents a four-day remote production weekend in which a small team used a portable AI workstation to develop a roughly 90-second trailer while testing the operational viability of local AI workflows in a mobile production environment.
All AI generations were executed locally on the portable system. Internet access was used when necessary to download models, troubleshoot workflows, and reference tutorials, but the generation pipeline itself ran offline.
5. Workflow Stack
The local workflow used during the case study combined several AI systems:
- ComfyUI for image and video workflows
- LM Studio for local language and vision model use
- LoRA-style identity workflow preparation for character consistency experiments
- ACE Step 1.5 for music generation setup
- Adobe Premiere Pro for editorial assembly
6. Production Execution
Travel and Setup
The portable compute system was transported during commercial air travel along with other production equipment. After arrival, it was moved by car to the production site and set up in approximately five minutes. The location operated off-grid on solar power.
Pre-Production and Shot Development
A local language model in LM Studio was used to break the treatment into scenes and suggest coverage. Those shot descriptions were translated into prompts for image generation. The team iterated through multiple variations to refine composition, tone, and clarity without queue delays or API limits.
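As an illustration, the shot-breakdown step can be scripted rather than done in the chat window, since LM Studio serves an OpenAI-compatible API on localhost. The sketch below is illustrative, not the production setup used on the weekend: it assumes LM Studio's default port (1234), and the system prompt, model name, and function names are hypothetical.

```python
import json
import urllib.request

# LM Studio's OpenAI-compatible local endpoint (default port 1234).
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_breakdown_request(treatment, model="local-model"):
    """Build a chat-completion payload asking the local model to split a
    treatment into numbered scenes with suggested coverage."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": ("Break this treatment into numbered scenes, each with "
                         "a one-line description and suggested coverage "
                         "(wide / medium / close-up).")},
            {"role": "user", "content": treatment},
        ],
        "temperature": 0.7,
    }

def request_breakdown(treatment):
    """POST the payload to the local server and return the model's reply."""
    data = json.dumps(build_breakdown_request(treatment)).encode("utf-8")
    req = urllib.request.Request(
        LM_STUDIO_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the server runs locally, the treatment text never leaves the machine, which is the privacy property the case study depends on.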
Animatic Construction
Still images were generated for shots in the sequence, assembled on a timeline, and evaluated for pacing and narrative clarity. This stage clarified a key creative dynamic: batch generation and side-by-side review worked better than evaluating one render at a time.
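The timeline pass above reduces to simple arithmetic: place each still back-to-back, note its in/out points, and check the running cut against the roughly 90-second target. A minimal sketch (shot names and durations are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Shot:
    name: str
    duration: float  # seconds the still holds on the timeline

def layout_animatic(shots, target_length=90.0):
    """Place stills back-to-back; return (name, in, out) tuples plus how far
    the cut runs over (+) or under (-) the target trailer length."""
    timeline, t = [], 0.0
    for shot in shots:
        timeline.append((shot.name, round(t, 2), round(t + shot.duration, 2)))
        t += shot.duration
    return timeline, round(t - target_length, 2)
```

A quick pass like this makes the pacing question concrete before any footage exists: swapping shot durations and re-running it is the numeric equivalent of the side-by-side review the team preferred.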
Character Consistency Testing
The team prepared LoRA-style identity workflow assets before production and used image-based consistency experiments. Filmclusive also built software support intended to make this process easier.
Video and Audio Experiments
Selected shots were pushed into video generation using text-to-video and image-to-video workflows in ComfyUI. A parallel audio setup was also prepared using ACE Step 1.5 locally for early music experimentation.
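For reference, ComfyUI also exposes a local HTTP API, so batches of video jobs can be queued by script instead of clicked through one at a time. The sketch below assumes ComfyUI's default local endpoint (port 8188) and an already-exported API-format workflow graph; the function names are illustrative.

```python
import json
import urllib.request
import uuid

# ComfyUI's default local API endpoint for queuing jobs.
COMFY_URL = "http://127.0.0.1:8188/prompt"

def wrap_workflow(workflow, client_id=None):
    """Wrap an API-format workflow graph in the envelope ComfyUI expects."""
    return {"prompt": workflow, "client_id": client_id or uuid.uuid4().hex}

def queue_job(workflow):
    """POST the job to the local ComfyUI server; the response includes the
    prompt_id used to poll the /history endpoint for finished outputs."""
    data = json.dumps(wrap_workflow(workflow)).encode("utf-8")
    req = urllib.request.Request(
        COMFY_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Queuing several image-to-video variations this way matches the batch-and-compare pattern noted during animatic construction.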
7. Operational Findings
Several findings emerged from the case study:
- Portability in Practice: The system successfully moved through commercial travel without requiring a specialized post environment.
- Local Iteration Speed: GPU acceleration allowed repeated generation cycles without queue delays.
- Offline Generation: Core generation steps ran locally, reducing internet dependency.
- Creative Bottlenecks: Challenges were primarily human (alignment with director vision) rather than computational.
8. Cloud Versus Local Comparison
Local compute gives filmmakers more control over timing, privacy, and experimentation. Cloud tools introduce dependencies that are often poorly matched to fast-moving production work.
9. Conclusion
This case study documented a remote production weekend in which a portable local AI workstation was used to develop a roughly 90-second trailer and validate a mobile generative workflow.
Nomad showed that privacy, portability, and performance can coexist in a single portable system. More broadly, the weekend showed that AI-assisted filmmaking becomes more practical when compute moves with the production rather than staying locked to centralized infrastructure.
Creative control depends on compute control.
Built for the Field
To see what portable high-performance compute can be in a Pelican case, visit nomadplatforms.com.
If you want portable compute and on-set production tools, check out Nomad's "Odyssey1", the standard for AI-native production infrastructure.
Explore Odyssey1