
Memberships

2DAnimation101

690 members • $9/month

102 contributions to 2DAnimation101
Testing the AI Process
Motions from Mixamo, applied to an iClone dummy and exported as an mp4 for motion reference. On the Galaxy platform, I used the AI video generator with Kling Motion Control to create the character movement, then the Galaxy AI lipsync generator with the Sync Lipsync option and a separate character audio mp3 file. Two of the three times it worked well; one time it didn't. The section with the bulls chasing is still an animatic. I didn't want to try AI on that until I got the character motion working. I couldn't stop the AI from creating mouth movement along with the character motion, but when I lip-synced, the audio file/AI overrode the original unwanted mouth/lip movement.
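None of the Galaxy or Kling steps are scriptable from what the post describes, but one piece is plain tooling: getting the character audio as a standalone mp3 for the lipsync step. A minimal sketch, assuming ffmpeg is installed and on PATH; the filenames are placeholders, not the author's actual files:

```python
# Pull a standalone mp3 out of a recorded take, for use as the separate
# character audio file a lipsync tool expects.
# Assumes ffmpeg is installed and on PATH; filenames are placeholders.
import subprocess

def extract_mp3(src: str, dst: str) -> None:
    """Drop the video stream (-vn) and encode the audio as VBR mp3."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vn",
         "-codec:a", "libmp3lame", "-q:a", "2", dst],
        check=True,
    )

extract_mp3("voice_take.mp4", "character_voice.mp3")
```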
Testing the AI Process
1 like • 4d
I really like the concept of mixing Mixamo, iClone 8, and Kling! To be continued...
Results of the Test
@Thomas Jennings @Arlene Dilworth @Ernesto Guerrero @Simona Adelina @Helene J @G. Vern Morris @Nancy Moon

CONCLUSION 1: Kling Motion Control 3.0 can't animate if your video reference is an animation of rigged characters (like those from Cartoon Animator). It has to be either live footage or 3D (like iClone).

CONCLUSION 2: Kling Motion Control 3.0 struggles to render the character as traditional animation; it still kept it 3D. I am attaching both videos here. When I used only Kling 3.0 (not Motion Control) to animate an image, adding the text "Make this a 2D traditional animation animated at 12 fps", it did a better job.

Also, Kling Motion Control 3.0 can't follow Cartoon Animator motions. I assume it is because 2D motion from rigged characters could be too abstract for Kling; it kept crashing and not doing the job. We kept burning credits testing multiple approaches for 1.5+ hours, and got nothing. When we used 3D motion from iClone (thanks @Helene J for providing this simple clip), it worked on the first try.

For this, I followed 5 steps (step 2 is sketched in code below):
✅ 1. Get a motion video
✅ 2. Render the first frame of the motion video as an image
✅ 3. Get the character you need
✅ 4. Create the first frame with your character and your background in the position of the first frame
✅ 5. Use Kling Motion Control 3.0 to have the image move the same way as in the video
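Step 2 is ordinary tooling rather than anything Kling-specific. A minimal sketch of grabbing the first frame with OpenCV, assuming `opencv-python` is installed; the filenames are placeholders:

```python
# Save the first frame of a motion-reference video as a still image,
# to use as the layout reference for Kling Motion Control.
# Assumes `pip install opencv-python`; filenames are placeholders.
import cv2

cap = cv2.VideoCapture("motion_reference.mp4")
ok, frame = cap.read()  # read() returns (success_flag, frame_as_ndarray)
cap.release()

if not ok:
    raise RuntimeError("Could not read a frame from the video")
cv2.imwrite("first_frame.png", frame)
```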
Results of the Test
1 like • 11d
@Mark Diaz That’s damn true!
Here is the information that I can provide regarding my video project.
I hope this gives you enough info about my video project to understand the process I was using and what I was attempting to achieve, even though the weather got in the way and wrecked the first floor of the church to the tune of at least two-thirds of a million to over a million dollars in damage. That is where all of my equipment is located, and it will take several months to return to some kind of normal life as we once knew it. Anyway, I plan on attending the last few Zoom sessions to the end... My video was a first-generation cell phone copy, and that is all I have of the project; the storms hit with 5 1/2 to 7 inches of rain in 1 1/2 hours last Friday evening. Thanks again, Mark, for all of your hard work!
1 like • 12d
What bad luck! I hope you’ll be able to pick your project back up again very soon.
Getting hung up on the details (again) 😉
Well, yes and no. I've been a bit busy recently, but I restarted the storyboard for the fourth time last week and then ran into the next problem: getting the positioning of things really nailed down. Nano Banana 2 previously did this quite well, but I don't know if it got updated; despite my best efforts, it would not position the character(s) partway down the corridor.

Clearly I'm leaning towards being a bit of a control freak, but I set about building a workflow based on an "inpainting" approach, using Flux 2 Klein to position things where I ORDER, ahem, tell it to position things. This, as with everything in this space, turned out to be more complicated under the hood than initially imagined. I won't bore you too much with the details, but it seems AI diffusion models don't really understand space and layout like we do, so asking one to scale the characters to fit x or to size y isn't really understood. Gemini educated me on this one; it's the same way they don't understand left and right reliably.

Cutting a very long story short, the workflow I wanted was to be able to draw a mask on a background where I wanted a character (or characters) placed, and have it put my characters there, scaled to fit the mask without chopping off feet or heads (there was so much chopping off of feet and heads that I had to build extra constraints into the workflow, but I can do this fairly repeatably now, finally!). This is pretty much prompt-free for integrating a pre-posed character into an existing background; the geometric part of the idea is sketched below. I've not tried taking characters from a character sheet straight into a position in a scene yet; that might work, but it isn't as important, I don't think. 🤔

Anyway, still at the "building tools" stage! I'm hoping to get some storyboard stuff done this weekend. Here's a before/during/after.
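The non-AI half of that mask-placement idea is plain geometry: fit the character's bounding box inside the mask's bounding box, preserving aspect ratio, before handing anything to the diffusion model. A minimal sketch with Pillow; the filenames and the helper are placeholders of mine, not the author's actual workflow nodes:

```python
# Scale a pre-posed character (RGBA cutout) to fit inside a mask's bounding
# box on a background, preserving aspect ratio so feet and heads survive.
# Assumes `pip install Pillow`; filenames are placeholders.
from PIL import Image

def place_in_mask(background_path, character_path, mask_path, out_path):
    bg = Image.open(background_path).convert("RGBA")
    char = Image.open(character_path).convert("RGBA")
    mask = Image.open(mask_path).convert("L")

    box = mask.getbbox()  # (left, upper, right, lower) of non-zero pixels
    if box is None:
        raise ValueError("Mask is empty")
    left, top, right, bottom = box
    box_w, box_h = right - left, bottom - top

    # Uniform scale so the whole character fits inside the mask box.
    scale = min(box_w / char.width, box_h / char.height)
    new_size = (max(1, int(char.width * scale)),
                max(1, int(char.height * scale)))
    char = char.resize(new_size, Image.LANCZOS)

    # Center horizontally in the box; align the feet to its bottom edge.
    x = left + (box_w - char.width) // 2
    y = bottom - char.height
    bg.alpha_composite(char, (x, y))
    bg.save(out_path)

place_in_mask("corridor.png", "character.png", "mask.png", "composite.png")
```

From there, the composite can go to the inpainting model to blend lighting and edges, with the hard placement already decided deterministically.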
Getting hung up on the details (again) 😉
1 like • 12d
I used a masking setup for my Escher clip, and it works great with the Adobe Creative Cloud tools.
Made a full teaser for my first AI feature film — based on my grandfather's real story (Gulag survivor, 1948)
I've been working for months on a teaser for my debut AI-generated feature film, and I'd love feedback from this community. The film is called "Brotherhood in the Bitter Cold", inspired by the true story of my grandfather, a Transylvanian Hungarian who survived a Soviet labor camp and walked home from Siberia in 1948. The project is part AI experiment, part memoir, part love letter to a man who rarely spoke about what he lived through.

WORKFLOW:
- Script and storyboard: written by me, based on my novel of the same name (not yet published)
- Character references, cloth references, environment generation: Nano Banana (Gemini), with family photo references for facial consistency
- Video generation: Seedance, with detailed per-shot prompts
- Narration: ElevenLabs v3 with intention-based tags for an elderly voice
- Music: Suno

TECHNICAL NOTES:
- Maintained character consistency across shots using reference sheets generated in Nano Banana, then reinforced via Img2Vid in Seedance
- Developed custom prompt structures for every generation using Claude Desktop with dedicated Skills (one Skill per tool: Seedance director, Nano Banana reference-sheet builder, ElevenLabs voice director, etc.)
- Built custom character identity docs for each of the main characters to keep visual continuity across 40+ generations (a rough sketch of what such a doc can look like follows this post)
- Aspect ratio: 21:9

CHALLENGES:
- Facial consistency across long sequences remains the hardest problem
- Text generation (carved into wood, etc.) still fails reliably
- Period-correct wardrobe required heavy negative prompting (Seedance defaults wanted to add German/Alsatian half-timbering to Eastern European scenes)
- Seedance still denies a lot of prompts, especially images with faces; Higgsfield Cinema Studio 3.5 solves this quite well as an alternative
- Cost is significant: ~$300+ for 3 minutes of final output (teaser + prologue combined), roughly $100 per finished minute, which is steep for an independent creator. Much of that is re-generations and failed prompts; the "visible cost" is only part of the total spend.
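The post doesn't show what a character identity doc actually contains, so the following is only a guess at its shape: a minimal Python sketch where every field name and example value is hypothetical, illustrating the general idea of prepending a fixed identity block to each per-shot prompt for continuity:

```python
# Hypothetical sketch of a "character identity doc" used to keep visual
# continuity across many generations: the same identity block is prepended
# to every per-shot prompt. Field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class CharacterIdentity:
    name: str
    age: str
    face: str       # stable facial description, anchored to reference sheets
    wardrobe: str   # period-correct clothing, repeated verbatim in each shot
    negatives: str  # traits the model keeps adding and must be suppressed

    def prompt_block(self) -> str:
        return (
            f"{self.name}, {self.age}. Face: {self.face}. "
            f"Wearing: {self.wardrobe}. Avoid: {self.negatives}."
        )

grandfather = CharacterIdentity(
    name="Andras",
    age="late 20s, gaunt",
    face="deep-set eyes, hollow cheeks, short dark hair (per reference sheet)",
    wardrobe="1948 Eastern European laborer's coat, worn boots",
    negatives="modern clothing, half-timbered buildings, clean studio look",
)

shot = grandfather.prompt_block() + " Walking a snowed-in rail line, 21:9."
print(shot)
```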
2 likes • 12d
It really makes you want to watch the movie!
Michel Diamantis
@michel-diamantis-1877
Retired math teacher

Active 4d ago
Joined Dec 1, 2025