Memberships

AIography: The Pro AI Film Lab

845 members • Free

Master The Workflow

144 members • $9/m

5 contributions to AIography: The Pro AI Film Lab
The Last Human Host - Hollywood's Double Talk
Sunday night at the Oscars, Conan O'Brien called himself "the last human host." Will Arnett got a standing ovation for: "Animation is more than a prompt. It's an art form and it needs to be protected." Four days earlier, Netflix paid $600M for Ben Affleck's AI post-production startup. So what's the real message?

The tension I'm seeing:
• Public stance: AI threatens human craft, must resist
• Private reality: massive investments in AI infrastructure
• The gap between the stage speech and the boardroom deal
• Editors, VFX artists, animators watching from the middle

Here's my read: Hollywood wants to be seen resisting AI while adopting it behind closed doors. The Oscars are the public face. The $600M deals are the private truth.

For those of us building with AI tools, this matters. You're on the right side of where the industry is actually going, but don't expect applause from the stage.

Question for you: Do you think Hollywood's public AI resistance helps or hurts independent creators who are already using these tools? Does the anti-AI rhetoric protect jobs or just delay the inevitable conversation?

I've lived through every tech disruption in this industry: analog to digital, linear to nonlinear, broadcast to streaming. The pattern is always the same: public fear, private adoption, then the tools become infrastructure.

What's your take? Are we watching resistance or theater?
1 like • 3d
I don't think this is necessarily a parallel comparison to previous technological innovations. The Academy is made up of people, and they will vigorously protect their jobs. They have the ultimate power to do so as long as their will is strong. AI is getting a much more negative public image than, say, the invention of the NLE or the adoption of CGI. And rightly so. "I'm so glad they are building that data center where the forest in my backyard used to be," said nobody ever. Similarly, "That AI movie was the coolest thing I've ever seen." (Granted, some stuff is kinda cool.) As post professionals we definitely should know how to use it and when, because it's never good to say "I don't know how to do that." There will be resistance, and rules to play out. Laws will be written. But adoption will also happen. Even on a big-budget feature, we in post should know exactly when AI is appropriate to use.
ByteDance Just Blinked.
Here's What It Means for Your AI Video Workflow.

Seedance 2.0, the AI video model everybody was buzzing about, just got pulled from its global launch. ByteDance suspended it after copyright pressure from Disney, Netflix, and the major studios.

Here's what happened and why it matters:

Seedance shipped without IP guardrails. Users immediately generated Marvel characters, Star Wars scenes, and celebrity deepfakes. Disney sent a cease-and-desist accusing ByteDance of packaging "a pirated library of copyrighted characters."

Hollywood found its weapon. Copyright disputes freeze launches, force negotiations, and let studios pick which AI tools survive. This is competitive positioning disguised as IP protection; Disney has a content deal with OpenAI's Sora.

The model is still live in China but suspended globally. Creators who built workflows around Seedance are now stuck.

The real lesson: Don't build your production pipeline on a single model, especially one with unresolved copyright issues. Diversify across tools (see the sketch below for what that looks like in practice). This is the same disruption pattern we've seen for decades: the tools change, the instinct to diversify doesn't. Everything becomes post. The skills that survive are the ones that aren't tied to any single platform.

What's your take? Are you worried about building workflows on tools that could disappear overnight? Which AI video tools are you actually using in production right now?

👇 Drop your thoughts below.
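To make the diversification point concrete, here's a minimal sketch of the pattern in Python. The adapter classes and the render helper are hypothetical (no real vendor SDK is called); the point is only that a pipeline coded against a common interface survives a model being pulled overnight.

```python
from abc import ABC, abstractmethod

class VideoModel(ABC):
    """Common interface so the pipeline never depends on one vendor."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return a path or URL to the rendered clip (hypothetical)."""

class SeedanceAdapter(VideoModel):  # hypothetical adapter, not a real SDK
    def generate(self, prompt: str) -> str:
        # Simulates exactly the risk described above: the model got pulled.
        raise RuntimeError("Seedance suspended globally")

class FallbackAdapter(VideoModel):  # hypothetical stand-in for any other model
    def generate(self, prompt: str) -> str:
        return f"fallback_render({prompt!r})"

def render(prompt: str, models: list[VideoModel]) -> str:
    """Try each registered model in order; survive a vendor outage."""
    for model in models:
        try:
            return model.generate(prompt)
        except RuntimeError:
            continue  # model pulled or suspended: fall through to the next
    raise RuntimeError("all registered models failed")

# The pipeline keeps working even though the first-choice model is gone.
print(render("wide shot, colonial street at dawn",
             [SeedanceAdapter(), FallbackAdapter()]))
```

Swap in whatever models you actually subscribe to; the pipeline code itself never changes, which is the whole point of not coupling to a single platform.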
5 likes • 6d
In general I still think AI's strength lies in creating temp elements for spec work. It's just another way to share an idea. Or elements in a frame, such as backgrounds, that aren't a focal point. But those who rely too much on it for the finished product will run into some issues. One being that as AI becomes more popular it will start to get less trendy, and people will look down on it. The whole draw of storytelling is the suspension of disbelief, and I feel like too much AI just takes you out of the story, like a Jar Jar Binks. And of course there are the copyright issues. Take using AI for background talent as an example: what if it creates a character that looks exactly like someone else? Can they sue? At what point do productions say that's not worth the risk? Marvel's Secret Invasion had a fully AI-generated intro and people hated it. (It definitely sucks.) Was that worth it? So yeah, it generally still feels like a very useful temp tool to me. But I don't say that to disparage it. It's still potentially game-changing.
Adobe Just Built an AI That Does Your First Cut
Here's Why I'm Not Worried.

Adobe just dropped a new Firefly feature called "Quick Cut." You upload raw footage, type a description of what the video should be (interview, product demo, travel vlog), and it automatically produces a rough cut.

Let that sink in for a second. AI is now assembling edits from raw footage based on a text prompt. It pulls from Adobe, Google, OpenAI, and Runway models. It targets product reviewers, podcasters, marketers: anyone who needs a fast edit without hiring an editor.

I can already hear the panic. "They're coming for our jobs."

No. They're not. Here's why.

A rough cut is not an edit. Every editor in this community knows the difference. A rough cut is assembly. It's organization. It's the starting point. The CRAFT of editing (pacing, rhythm, emotional timing, knowing what to cut and what to keep, building tension, finding the story inside the footage) is what happens AFTER the rough cut.

Quick Cut is doing the part of the job that was already the least creative. It's pulling selects and assembling them in order. That's assistant editor work at best, and even assistants bring more judgment to it than an algorithm.

This is actually good news for editors. Here's why: when the rough assembly takes 5 minutes instead of 5 hours, you get to spend more time on the part that actually matters. The storytelling. The craft. The decisions.

This is exactly what I mean when I say everything becomes post. AI is collapsing the mechanical parts of the pipeline so humans can focus on the creative parts.

The question isn't whether AI can assemble footage. It can. The question is: who decides if the assembly is any good? That's you. That's always been you.

What do you think? Are tools like this a threat or an opportunity? Drop your take below.
2 likes • 24d
I can't wait to try it more. I made an attempt just now but realized that you have to upload the footage, which is not desirable for so many reasons. Still, I'm really hoping this can help with cutting down my wife's podcast.
Runway Becomes a Multi-Model Platform
Kling, Sora, WAN, GPT-Image Under One Roof

TL;DR: Runway has integrated third-party AI models directly into its platform, including Kling 3.0, Kling 2.6 Pro, Kling 2.5 Turbo Pro, WAN2.2 Animate, GPT-Image-1.5, and Sora 2 Pro, with more models coming soon. Through Sunday, commenting "MODELS" on their X post gets you 50% off Pro Yearly plans.

Key Takeaways:
- Kling 3.0, Sora 2 Pro, WAN2.2 Animate, and GPT-Image-1.5 are all now accessible directly within Runway's interface
- Single-platform workflow: no more juggling multiple tabs, accounts, and credit systems across different AI video tools
- Runway's own Gen-3 Alpha is still available alongside the third-party models, letting you compare outputs side by side
- WAN2.2 Animate brings the open-source Wan model's animation capabilities into a polished UI for the first time
- 50% off Pro Yearly through Sunday for early adopters

Why It's Important: This is a seismic shift in how AI filmmakers work. Until now, professional workflows meant maintaining separate subscriptions to Runway, Kling, Sora, and others, each with different interfaces, credit systems, and export formats. Runway is positioning itself as the "editing suite" of AI video, not just another model provider.

For filmmakers, this means you can prompt the same scene across Kling 3.0, Sora 2 Pro, and Gen-3 Alpha, compare the results, and pick the best take, all without leaving your timeline (a rough sketch of that fan-out pattern follows below). This is the Netflix-of-models approach, and it fundamentally changes the competitive landscape.

Just as a side note, this is exactly how Lumarka is designed: access to all major models in the character, shot, and take rendering interfaces. What do they say about great minds? 😎

Source: r/runwayml (Official Announcement)
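As a rough illustration of that same-prompt, multi-model comparison, here is what the fan-out pattern looks like in Python. The kling, sora, and gen3 functions are hypothetical stand-ins, not real Runway, Kling, or OpenAI API calls; in practice each would submit a render job and return a handle to the take.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins: each would call a different model's API and
# return a handle to the rendered take.
def kling(prompt: str) -> str: return f"kling_take({prompt!r})"
def sora(prompt: str) -> str: return f"sora_take({prompt!r})"
def gen3(prompt: str) -> str: return f"gen3_take({prompt!r})"

def compare_takes(prompt, models):
    """Render the same prompt on every model in parallel for side-by-side review."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in models.items()}
        return {name: f.result() for name, f in futures.items()}

takes = compare_takes(
    "slow push-in on a rain-soaked window",
    {"Kling 3.0": kling, "Sora 2 Pro": sora, "Gen-3 Alpha": gen3},
)
for name, take in takes.items():
    print(name, "->", take)  # pick the best take by eye, as you would in Runway
```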
0 likes • 26d
This is awesome, thanks! For more data and research capabilities I use a similar service called Perplexity. It basically scours a couple of different models to get you the best answers. But I've also successfully used it for some video work and image generation. I can see how this is a game changer; logging into various models to try and get a result isn't very practical.
The Sky Has Been Falling for 120 Years 🌩️
Hey everyone,

You've probably seen the news: Darren Aronofsky just released "On This Day… 1776," a short-form Revolutionary War series created through his AI studio with Google DeepMind. SAG voice actors, AI visuals.

I haven't watched it yet, so I'm not here to tell you it's good or bad. But I AM here to talk about the reaction, because we've seen this exact movie before. And I mean that literally.

1903: "The Great Train Robbery" comes out. Audiences panic at the image of a gun pointed at the camera. Some people want films banned entirely.

Late 1920s: Sound arrives. Silent film purists, including legendary filmmakers, declare it a gimmick that will destroy the art form. Chaplin refuses to make a talkie for years.

Then it was color. Television. Home video. CGI. Digital editing. Streaming.

The sky has been falling for 120 years. And yet here we are, with more ways to tell stories than at any point in human history.

Now it's AI's turn to be the villain.

Look, I get it. There are real ethical concerns. We should absolutely have conversations about compensation, attribution, and impact on working artists. Those conversations matter, and I'm not dismissing them.

But the instant pile-on? The "AI slop" mockery before most people have even watched it? That's not thoughtful criticism. That's fear wearing the costume of principle.

An Academy Award-nominated filmmaker is experimenting publicly. Taking a risk. Whether this project lands or not, he's pushing into territory most of Hollywood is too scared to touch.

For those of us in this community: many of you would never have had access to traditional production resources. These tools are giving you a voice. That's not a threat to creativity. That's an expansion of it.

So yeah. I'm going to watch Aronofsky's series with an open mind. Maybe it's great. Maybe it's rough around the edges. Either way, I'd rather see someone swinging than an industry paralyzed by the same fears it's had since a train first rolled toward a camera.
2 likes • Feb 19
Lots of things are great about this. However, it's notable that I didn't feel really connected to a single moment of acting performance here. It hasn't changed my impression that generative AI is and will be super helpful for creating pitch/spec work. But we'll also see a lot of people try to make whole films and series with way too much AI, and the feel of that could become saturated and boring very quickly. That being said, we as professionals still definitely need to be learning it.
Eric Kenehan
Level 2 • 10 points to level up
@eric-kenehan-6682
I'm a TV, film, and YouTube editor. I love playing guitar and KGLW.

Active 3d ago
Joined Feb 19, 2026
INTP
Agoura Hills, CA