What do InterPositive and Continuum mean for the Industry?
Ben Affleck's InterPositive and Natasha Lyonne's Asteria Continuum mean a lot to this industry. Explore how Filmclusive is bringing studio-level AI infrastructure to independent filmmakers.

Ben Affleck’s InterPositive and Natasha Lyonne’s Asteria Continuum mean a lot to this industry, and mostly this: the industry is putting serious money into AI. But what does that mean for the regular filmmaker? Here’s what we think it means, and how Filmclusive is doing what the big players do, but for the independents who don’t have studio money.
AI, Privacy, and the Future of Filmmaking
Most people using AI right now are not really thinking about where it runs, what it needs, or what happens to the work once they upload it.
They open a browser, type a prompt, drag in an image, hit generate, and wait for something to come back.
That is the part everybody sees.
What most people do not see is the machinery behind it.
And that matters, especially if you are a filmmaker, writer, producer, editor, or creative team working on something unreleased.
Because the second you start using AI for real work, you are no longer just playing with a cool app. You are making decisions about privacy, infrastructure, authorship, speed, cost, and control.
That sounds technical. It is. But it is also creative.
This is the part a lot of people in film are still missing.
The conversation around AI keeps getting flattened into two bad takes. One side treats it like magic. The other side treats it like the death of craft. Neither one is serious enough.
The real question is much simpler.
How do you use these tools in a way that is actually useful, protects your work, and does not make you worse at your craft?
That is what this piece is about.
First, what AI actually is
AI is not magic. It is software running trained models.
Those models take something in and return something out.
That input might be text. It might be an image. It might be a video clip. It might be a voice sample. It might be a script, treatment, deck, storyboard, or transcript.
The output could be writing, summaries, images, video, voice, analysis, organization, or transformation.
That is all most people ever see.
But underneath that experience, models need hardware to run.
What AI needs to run
AI models do not run on vibes. They run on compute.
That means they need a processor, memory, storage, power, and, for many serious AI tasks, a GPU.
A GPU is a graphics processing unit. Most people know it from gaming or graphics. But in AI, it is one of the most important pieces of hardware because it does the heavy lifting much faster than a normal processor can.
And the kind of task you are doing changes the amount of power you need.
Text tasks can often run on smaller systems. Image generation needs much more. Video generation needs even more.
This is where a lot of creative people get tripped up.
They think, “my laptop runs Premiere, Photoshop, and Lightroom just fine, so it should be able to run AI too.”
Not necessarily.
Traditional creative software and generative AI are different beasts.
What infrastructure means, in normal language
Infrastructure is the system behind the system.
It is the machinery behind the experience.
It includes the computers running the models, the GPUs doing the heavy lifting, the servers storing files, the networking that moves data around, the security rules around that data, and the workflow tools that connect everything together.
When people use AI casually, they usually only see the front end: the prompt box, the upload button, the result.
Infrastructure is everything behind that screen.
If you use cloud AI, the infrastructure belongs to someone else. If you use local AI, the infrastructure belongs to you or to hardware you directly control.
That is why this matters. Infrastructure determines privacy, speed, cost structure, reliability, and control.
Why most people are using cloud AI without thinking about it
Most people are not running AI on their own machines. They are using cloud AI.
That means the model is running on someone else’s hardware, in someone else’s data center, on someone else’s GPUs, with someone else managing the storage, orchestration, and delivery.
That is why it feels easy. You are not buying the machine. You are borrowing access to it.
And to be fair, that is often the right starting point.
Cloud AI is easier to test, easier to access, usually faster, and often capable of running much larger models than an ordinary personal computer can handle.
But convenience always has a tradeoff.
What happens when you use a cloud AI service
When you use a cloud AI service, your material goes somewhere.
That does not automatically mean it is being misused. But it does mean it is leaving your direct control.
Your prompt has to be processed. Your upload has to be stored or temporarily handled. Your result has to be generated and returned.
Depending on the service, that can involve temporary retention, internal logging, monitoring, account-level storage, or system handling that exists simply to make the product function.
If you are messing around with something low-stakes, maybe that is fine.
If you are working on unreleased IP, private concepts, client materials, pitch decks, internal notes, actor references, or anything sensitive, that matters a lot more.
This is where privacy-first thinking starts. Not with paranoia. With basic workflow literacy.
What local AI means
Local AI means the model runs on your own machine or on hardware you directly control.
Instead of uploading your work into a browser-based tool, you run the model yourself.
That might mean a model running on your laptop, a workstation in your office, a local server, a portable high-performance compute box, or a privately controlled on-prem setup.
The appeal is simple. Control.
You know where the work is happening. You know what machine is being used. You are not relying on a public web interface to process sensitive material.
That is why local AI matters to people who care about privacy.
But this is where people start lying to themselves. Local AI is possible. Local AI done well is harder.
Why local AI is harder than people think
This is the part people either ignore or romanticize.
Running AI locally is possible. Running it well is expensive, technical, and constrained by hardware.
For text models, you can run smaller LLMs locally with tools like LM Studio and similar apps. That can be genuinely useful for drafting, summarizing, organizing notes, or private experimentation.
But there are tradeoffs.
Smaller local models are usually less capable than the biggest cloud models. They may be slower. They may struggle with more complex reasoning. They often have tighter practical limits around context, speed, and quality.
One important idea here is context window. A context window is how much information the model can hold in view at one time while it responds. Think of it like a production table.
A small table can only hold a few papers. A bigger table can hold the script, the notes, the treatment, the character breakdown, the deck, and the schedule all at once.
A larger context window lets the model work across more material in one pass. That matters for long scripts, transcripts, research packets, and multi-part creative work.
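The production-table analogy can be made concrete with a small sketch. This is illustrative only: real models count tokens with their own tokenizers, and the four-characters-per-token rule of thumb used here is a rough assumption, not an exact measure.

```python
# Sketch: how much of a production packet fits on the "table" (context window)?
# Assumes roughly 4 characters per token, a common rule of thumb.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def fit_to_context(documents: list[tuple[str, str]], context_window: int) -> list[str]:
    """Add (name, text) documents in order until the table is full.

    Returns the names of the documents that fit in one pass.
    """
    used = 0
    fits = []
    for name, text in documents:
        cost = estimate_tokens(text)
        if used + cost > context_window:
            break  # the table is full; the rest needs another pass
        used += cost
        fits.append(name)
    return fits

docs = [
    ("script", "x" * 400_000),    # ~100k tokens: a feature script plus notes
    ("treatment", "x" * 40_000),  # ~10k tokens
    ("shot_list", "x" * 20_000),  # ~5k tokens
]

# A small local model (say, an 8k-token window) holds almost nothing in one
# pass; a large cloud model (say, 128k tokens) can hold the whole packet.
print(fit_to_context(docs, 8_000))
print(fit_to_context(docs, 128_000))
```

The point of the sketch is the tradeoff itself: the small table forces you to break the work into pieces, while the big table lets the model see the script, treatment, and shot list together.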
But more capability usually means more compute.
And once you move from text to image or video, consumer hardware hits the wall fast. For example, a 2021 MacBook Pro with an M1 chip handles Premiere Pro, Lightroom, and Photoshop just fine. But when running a normal image model locally, it can take hours just to reach twenty percent progress.
That is the difference. A machine can be perfectly good for normal creative software and still be completely outmatched by generative AI workloads. That does not mean local AI is fake. It means the hardware requirement is real.
The part everybody keeps getting wrong about craft
One of the laziest arguments about AI in film is that any use of it means abandoning craft.
Usually that comes from people who have not spent enough time inside the actual workflow. AI can absolutely be used badly. A slop workflow produces slop results. That part is true.
But that is not an argument against the tool. It is an argument against weak taste, weak standards, and weak process.
A paintbrush in the hands of a five-year-old and a master painter is still the same tool. The difference is judgment. The same thing is true here. People who treat AI like a slot machine will get slot-machine work. People who treat it like part of a disciplined creative pipeline can move faster without throwing away authorship.
That distinction matters.
There is a huge difference between using AI to replace thinking and using AI to accelerate execution.
There is also a huge difference between using AI to generate noise and using AI to support a real filmmaking workflow.
If a tool reads your script aloud so you can listen in the car, that is not replacing creativity. It is just a stronger version of text-to-speech. The screenplay still had to be written. The characters still had to be built. The scenes still had to be structured. The voice still had to come from somewhere.
If a tool helps turn an existing script into a first-pass storyboard or shot list, that is not directing the movie for you. It is helping you move from blank page to planning faster.
If AI helps summarize notes, organize revisions, tag scenes by location, or surface continuity issues, it is not replacing the writer, director, producer, or coordinator. It is reducing the time wasted on repetitive labor around the work.
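Tagging scenes by location is a good example of how mechanical that repetitive labor is. A minimal sketch, assuming standard screenplay sluglines in the common "INT./EXT. LOCATION - TIME" format (real scripts vary, so this is a simplification):

```python
import re
from collections import defaultdict

# Minimal scene-by-location tagger for standard screenplay sluglines.
# Assumes the common "INT./EXT. LOCATION - TIME" heading format.
SLUGLINE = re.compile(r"^(INT\./EXT\.|INT\.|EXT\.)\s+(.+?)(?:\s+-\s+(.+))?$")

def tag_scenes_by_location(script_text: str) -> dict[str, list[int]]:
    """Return {location: [scene numbers]} parsed from slugline headings."""
    locations = defaultdict(list)
    scene_number = 0
    for line in script_text.splitlines():
        match = SLUGLINE.match(line.strip())
        if match:
            scene_number += 1
            locations[match.group(2)].append(scene_number)
    return dict(locations)

script = """\
INT. DINER - NIGHT
Two characters argue over coffee.

EXT. PARKING LOT - NIGHT
One of them storms out.

INT. DINER - LATER
The other stays behind.
"""

print(tag_scenes_by_location(script))
# → {'DINER': [1, 3], 'PARKING LOT': [2]}
```

Grouping scenes 1 and 3 under DINER is exactly the kind of sorting a coordinator does by hand. Automating it changes nothing about the scenes themselves; it just hands the schedule back faster.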
That is the part critics often miss.
The real problem is not AI. The real problem is money.
A lot of people are blaming AI for what is happening in film. That is too easy.
AI is not the main reason the industry is hurting. Money is. Time is. Risk concentration is. Development drag is. Shrinking margins are. Fewer greenlights are. Higher production costs are.
AI does raise real questions about ethics, labor, training data, and authorship. Those are not fake concerns. They should be taken seriously.
But pretending AI is the only villain is lazy thinking.
If a tool can cut wasted time without cutting out the artist, that matters. Because in film, time is money. And if you cut time, you can often cut budget. And if you cut budget, more projects become possible.
That does not excuse bad use. But it does explain why people are turning to the tools.
Why filmmakers may still want local AI
Privacy is one reason. Control is another. Predictability is another. Cost clarity is another.
When you rely entirely on cloud tools, you are paying for access, credits, queue time, subscriptions, vendor rules, and generations that do not always work. You also usually only see the successful outputs people post online. You do not see the failed generations, the ugly ones, or the money burned on experiments that went nowhere.
With local infrastructure, the economics change. You invest in the machine. Then you use it as much as your workflow requires. That does not make it free. It makes it legible.
For some filmmakers, producers, and studios, that matters. A privacy-first local setup starts to look less like a hobby device and more like production equipment. The same way a camera package, lighting package, edit station, or storage array is a real investment, compute becomes part of the production stack.
That is the mindset shift a lot of creative people still resist. They still think of AI as an app. It is not just an app. It is infrastructure.
What AI is actually useful for in creative work
This is where the conversation needs more discipline. The point is not that AI should replace filmmaking. The point is that AI can give independent creators new ways to prototype, visualize, test, and develop work that would otherwise remain impossible or unaffordable.
Useful use cases include private script analysis, storyboarding, mood and concept visualization, look development, previz experimentation, reference generation, internal pitch support, offline creative iteration, and rough proof-of-concept development.
For some creators, AI works less like finished cinema and more like a bridge between writing and visualization. That is the better analogy. Not “this replaces a full crew.” More like “this gives smaller teams access to a layer of experimentation that used to require much bigger departments.”
There is another side of this that gets ignored too. Not every valuable AI tool needs to generate video.
Filmclusive, for example, uses AI to build workflow tools for the industry such as storyboards and shot lists. Those tools do not always require AI in the way people imagine when they hear the word. Sometimes AI is supporting structure and speed rather than replacing the creative decision-maker.
There are also tools that use AI to read text to you. That does not remove any of the creativity. It only reads what is already on the page, which means you can drive and listen to a script instead of carving out separate time to read. That is not inherently evil. It is basically an improved version of old text-to-speech, just with better voice quality.
The honest question is not “is AI pure?” The better question is “does this tool reduce time without destroying authorship?” Sometimes the answer is yes.
My own example
I am a screenwriter.
Before AI, I had a ten-step first-draft process that took me about ten weeks. That was not because I lacked discipline. That was just the process. Writing, reworking, checking structure, testing scenes, getting pages into shape, reading aloud, reorganizing, and pushing through the dead zones of the draft. That is what writing looks like.
Now I can move through that same first-draft process on my phone in about thirteen hours.
That does not mean AI wrote the screenplay. It means the speed of execution changed.
The thinking still came from me. The structure still came from me. The scenes still came from me. The taste still came from me. The standards still came from me.
And after that draft exists, the actual creative process still continues. I still send the script to my manager. My manager gives notes. I still have to interpret those notes and implement them. Then friends give notes. Then teachers give notes. Then I rewrite again. If the script survives all of that, it survives because of judgment, rewriting, restraint, and taste.
By that point, saying AI wrote it is like saying Final Draft wrote it because I used screenwriting software instead of a typewriter. That is nonsense. AI can compress part of the path. It cannot replace authorship unless the author gives up authorship.
What the old workflow looked like, and what the new one can look like
The traditional workflow is still familiar. You write a script. You revise it. You get notes. You revise again. You make decks. You build references. You storyboard. You shot list. You schedule. You budget. You prep. You shoot. You edit. You add sound, color, VFX, graphics, and finishing.
None of that disappears. What changes is the time between stages and the amount of labor needed to move material from one format into another.
Old workflow: idea to script, script to notes, notes to revision, script to storyboards, storyboards to shot list, shot list to schedule, schedule to production plan, production to edit, edit to finishing. At every step, people are translating work manually.
A more AI-assisted workflow can look like this: idea to structured outline faster, outline to first-draft pages faster, script to read-aloud instantly, script to shot-list support faster, shot list to rough storyboard support faster, storyboard to animatic, animatic to generated temp music, animatic to generated video tests, notes to revision map faster, production coordinating support, automated paperwork such as time cards and mileage logs, transcripts to summaries faster, meetings to action items faster, edits to searchable review faster.
The craft layers remain. The translation layers get compressed. That is the real shift. AI is often less about replacing the core work and more about shrinking the dead time between creative decisions.
A practical ladder for creatives who care about privacy
Most creatives do not need to go from zero to building a private AI studio overnight. That is fantasy thinking. The smarter path is staged.
Step 1: Understand what AI actually is
AI is software running on compute. It is not magic. It needs hardware, memory, storage, and power. The more complex the task, especially image or video generation, the more serious the compute requirement becomes.
Step 2: Understand what infrastructure means
Infrastructure is the system behind the system. It is the machinery behind the experience. That includes the computers running the models, the GPUs doing the heavy lifting, the servers storing files, the networking that moves data, the security rules around that data, and the workflow tools connecting everything together. When people use AI casually, they usually only see the front end. Infrastructure is everything behind it.
Step 3: Start with low-risk use cases
Do not begin by uploading your most sensitive work into random tools. Start with non-sensitive use cases such as brainstorming, public-domain references, formatting experiments, generic summaries, and workflow tests using non-confidential material. Learn the behavior of the tools before trusting them with real IP.
Step 4: Separate AI for creation from AI for acceleration
Some AI tools generate images, video, or text. Others speed up the work around the work. That second category matters just as much. Script read-aloud tools, transcription, coverage support, shot-list helpers, storyboard organization, tagging, search, summaries, continuity support, and meeting-note cleanup do not inherently remove creativity. Often they just reduce friction.
Step 5: Learn cloud first, but use it carefully
Most people begin in the cloud because it is easy. That is fine. But understand the trade. You get convenience. You lose some control. Cloud AI is often the right place to learn the shape of a workflow before spending money on hardware. But it should not automatically become the home for sensitive scripts, private pitch decks, unreleased references, internal development notes, or client-confidential material.
Step 6: Move sensitive work local when privacy matters
If privacy matters, move the workflow closer to your own machine. That could mean a local language model for script analysis, a workstation for image work, an on-prem setup for internal studio use, or a dedicated box or rented private compute unit for heavier tasks. Local does not mean easy. It means controlled.
Step 7: Treat compute like production equipment
This is the mindset shift most filmmakers still have not made. People will spend real money on a camera package, lens set, DIT cart, edit station, storage array, and color workflow. But when the conversation turns to AI hardware, they act like it is fantasy gear. It is not fantasy gear if your workflow depends on it. If you want private, serious, repeatable AI workflows, compute becomes part of the production stack.
Step 8: Build a hybrid model, not a purity test
Do not think like an extremist. The mature workflow is often hybrid. Use cloud when the risk is low and convenience matters. Use local when privacy, predictability, or control matters. Use AI where it reduces waste. Do not use AI where it weakens the work. That is the adult version.
How the timeline is changing
Phase 1: AI as novelty. The public mostly used AI for gimmicks, tests, and viral outputs. A lot of this was surface-level and easy to dismiss.
Phase 2: AI as creative experiment. Writers, designers, editors, and filmmakers started testing it for ideation, references, and proof-of-concept work. Quality was inconsistent, but the time savings became obvious.
Phase 3: AI as workflow infrastructure. This is where things are moving now. The value is less about one flashy generation and more about how AI supports planning, organization, revisions, post-production, and operational speed.
Phase 4: AI as standard production layer. This is the next shift. Not every project will use AI video generation. But more productions will use AI for planning, script workflows, search, organization, continuity support, asset prep, post-production assistance, and internal tooling. That layer will likely become normal even when fully generated outputs remain controversial.
Ben Affleck, InterPositive, and the part indie filmmakers should actually learn from
Ben Affleck is a useful example here, but only if we read it correctly. The lazy takeaway is this: Ben Affleck built an AI company, Netflix bought it, so this is a rich-person game. That is the wrong lesson.
The more useful lesson is that a serious filmmaker saw where the workflow was heading, started building around it early, and focused on infrastructure and process instead of just flashy outputs.
InterPositive’s reported approach depended on training models on a production’s own dailies to help with things like relighting, reframing, background replacement, continuity fixes, and editorial refinements. That matters because it shows what serious adoption actually looks like.
It does not start with “type a prompt and make a movie.” It starts with “what part of the workflow are we trying to improve, and what infrastructure do we need to improve it?”
And there is another part that matters. In the marketing and demo material around the company, they were still using tools like ComfyUI, which are publicly available today. So the secret was not that they found some magical hidden tool nobody else can access. The real value was workflow design, production understanding, model-training strategy, and the GPU power required to make those workflows practical at scale.
That is the actual lesson. The value was not just in using AI tools. The value was in knowing how to structure them around real film problems.
That is the indie lesson too. No, most indie filmmakers are not building Netflix-scale infrastructure. No, most indie filmmakers are not hiring a research team tomorrow. No, most indie filmmakers are not spending hundreds of millions of dollars. But they do not need to. What an indie filmmaker can learn from this is simpler: Start with the workflow, not the fantasy.
Ask: Where am I losing time? Where am I burning money? What steps in my process are repetitive, mechanical, or bottlenecked? What can be sped up without weakening authorship? Then build from there. Maybe that means using AI to read scripts aloud while driving. Maybe it means using AI to organize boards, references, and shot lists. Maybe it means using local models for private script work. Maybe it means training a look or reference pipeline on your own materials. Maybe it means building a workstation instead of endlessly paying for cloud generations. That is the indie version of the same mindset. The studio version is bigger. The principle is the same.
Ben Affleck’s company was not proof that AI belongs only to big studios. It was proof that filmmakers who understand workflow, training, and compute will shape the next generation of production faster than people arguing from the sidelines. And yes, Netflix buying InterPositive mostly means Netflix will get internal access to that team and those tools. But the larger logic is already visible to everyone else: project-specific models, production-aware workflows, and infrastructure strong enough to support them. That is how indie filmmakers should think about this through the whole process. Not just, can AI generate a clip. But, how can AI support development, prep, visualization, post, and planning in a way that protects my authorship, saves time, and keeps costs under control. That is the smarter question.
The real bridge between film craft and AI
I am not going to let YouTubers who know a little tech and are experimenting with these tools outrun filmmakers who actually understand story. Screenwriters are not becoming less important because of AI. If anything, they become more valuable. A script is still a script. Structure is still structure. Taste is still taste. The people making flashy short clips are not automatically the people who know how to tell a great ninety-minute story or build a meaningful television drama. That is why filmmakers should not sit this out.
If you understand story, performance, pacing, editing, character, and emotional truth, you already have the part that matters most. The missing piece is learning the workflow. Those who use AI as a slop machine will keep making slop. The same way a five-year-old uses a paintbrush differently than a master painter, the tool itself does not decide the outcome. Be the master painter.
Final thought
AI is not automatically private. It is not automatically unsafe either. What matters is where the model runs, what you upload, who controls the system, and how intentional your workflow is. For creative teams, the conversation cannot stay at the level of which tool looks cool. It has to move to a more professional question: What infrastructure are we trusting with our work? If the goal is speed and convenience, cloud tools may be enough. If the goal is privacy, control, and long-term production independence, local workflows start to matter.
The future does not belong to the loudest AI hype people or the loudest anti-AI people. It belongs to the filmmakers who understand the workflow, protect the work, and know when a tool is serving the craft rather than replacing it. That is the version worth learning. If you want to learn how to work this way, follow Filmclusive. We are going to keep teaching more of the tools, the workflows, and the infrastructure behind them.
Be the master painter.
Learn how to build workflows that protect your work and accelerate your craft.
Explore Training