LTX 2.3 Motion Control workflow
New version of the LTX motion control using LTX 2.3. In general I'd say it's 10-15% better, though that's hard to quantify. It's harder to run locally because the model is bigger. Make sure your ComfyUI and NVIDIA drivers are updated to use the latest optimizations for Klein and LTX. I mostly removed the nodes that were causing issues for a lot of people. Try it out and let me know.

LTX (goes in checkpoints): https://huggingface.co/Lightricks/LTX-2.3-fp8/blob/main/ltx-2.3-22b-distilled-fp8.safetensors
Text encoder: https://huggingface.co/Comfy-Org/ltx-2/blob/main/split_files/text_encoders/gemma_3_12B_it_fp4_mixed.safetensors
Control LoRA: https://huggingface.co/Lightricks/LTX-2.3-22b-IC-LoRA-Union-Control/blob/main/ltx-2.3-22b-ic-lora-union-control-ref0.5.safetensors
Klein KV: https://huggingface.co/black-forest-labs/FLUX.2-klein-9b-kv-fp8/blob/main/flux-2-klein-9b-kv-fp8.safetensors
Klein no KV: https://huggingface.co/black-forest-labs/FLUX.2-klein-9b-fp8/blob/main/flux-2-klein-9b-fp8.safetensors
If no access: https://modelscope.cn/models/black-forest-labs/FLUX.2-klein-9b-fp8/file/view/master/flux-2-klein-9b-fp8.safetensors?status=2
Text encoders: https://huggingface.co/Comfy-Org/flux2-klein-9B/resolve/main/split_files/text_encoders/qwen_3_8b_fp8mixed.safetensors
AI influencer dataset AIO Revamped
Using this workflow, you should be able to get a good dataset to train your LoRA on; I have many videos on how to do this locally and using RunPod. All the prompts can be changed to generate the type of images you want, just make sure you have one prompt per line in the CR Prompt List node.

Model: https://huggingface.co/black-forest-labs/FLUX.2-klein-9b-fp8/blob/main/flux-2-klein-9b-fp8.safetensors
Text encoder: https://huggingface.co/Comfy-Org/flux2-klein-9B/resolve/main/split_files/text_encoders/qwen_3_8b_fp8mixed.safetensors
VAE: https://huggingface.co/Comfy-Org/flux2-klein-9B/resolve/main/split_files/vae/flux2-vae.safetensors
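The "one prompt per line" convention just means the prompt-list node splits the text box on newlines, one image batch per line. A minimal illustration of that convention (the example prompts are placeholders, not from the workflow):

```python
# Multi-line prompt box, one prompt per line (placeholder prompts).
prompts_text = """\
portrait photo of the character, studio lighting
full body shot, casual outfit, city street
close-up, golden hour light"""

# Split on newlines, ignoring blank lines -- one entry per generation.
prompts = [line.strip() for line in prompts_text.splitlines() if line.strip()]
print(len(prompts))  # 3
```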
ICY ANIMATE WORKFLOW
Load your image and video, and run the Klein swap. It might take a couple of tries to get a good first frame; nothing I can do about that (model skill issue). If you do not want to use Klein swap, connect the Load Image directly to Set_Referenceimage.

When you're happy with your first frame, lock the seed, activate the preprocessing group, and run the workflow. The higher the resolution, the higher the processing time.

You will need to do some basic math to determine how many frames you do per sampler; the amount you can do per sampler depends on your available VRAM. The more you have, the more frames you can do at once. For example, on an RTX PRO 6000 you can do as many frames as you want at once. I recommend starting at medium res (3) with 81-frame chunks (5 seconds) per sampler and going up or down from there depending on your hardware and chosen resolution.

You will need access to SAM3 and Klein for all of this to work. The guide in the blue node isn't fully updated.

Klein (ask for access)
Diffusion_models: https://huggingface.co/black-forest-labs/FLUX.2-klein-9b-fp8/blob/main/flux-2-klein-9b-fp8.safetensors
text_encoders: https://huggingface.co/Comfy-Org/flux2-klein-9B/resolve/main/split_files/text_encoders/qwen_3_8b_fp8mixed.safetensors
vae: https://huggingface.co/Comfy-Org/flux2-klein-9B/resolve/main/split_files/vae/flux2-vae.safetensors

Wan Animate
Diffusion_models: https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/Wan22Animate/Wan2_2-Animate-14B_fp8_scaled_e4m3fn_KJ_v2.safetensors
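The "basic math" above boils down to splitting the total frame count into sampler chunks of your chosen size. A quick sketch (assuming roughly 16 fps output, which is why 81 frames comes out to about 5 seconds; the clip length in the example is illustrative):

```python
import math

def plan_chunks(total_frames: int, frames_per_chunk: int = 81) -> list[int]:
    """Split a video into sampler chunks of at most frames_per_chunk frames."""
    if total_frames <= 0:
        return []
    n_chunks = math.ceil(total_frames / frames_per_chunk)
    # All chunks full except possibly the last, which takes the remainder.
    chunks = [frames_per_chunk] * (n_chunks - 1)
    chunks.append(total_frames - frames_per_chunk * (n_chunks - 1))
    return chunks

# Example: a 20-second clip at ~16 fps -> 320 frames
print(plan_chunks(320))       # [81, 81, 81, 77]
print(len(plan_chunks(320)))  # 4 sampler passes
```

If a run goes out of memory, lower `frames_per_chunk` (more, smaller passes); on a large-VRAM card you can raise it until the whole clip fits in one pass.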
High quality Lora Based Faceswap
A WAN 2.2 low-noise character LoRA is required for this to work, but I think it's one of the better solutions out there right now, and I quickly go over the whole process in the video. This is mostly a faceswap pipeline: hair color, body, and clothes will come from the original video, so select your reference videos with that in mind. We use WAN 2.2 T2V, not I2V; make sure you get that right. Workflow in the zip. My character is on the left in the samples. The video for this will be coming today or tomorrow.

WAN 2.2 T2V: https://huggingface.co/icekiub/WAN-2.2-T2V-FP8-NON-SCALED/resolve/main/WAN2.2t2vLOWNOISEFP8.safetensors
4-step LoRA: https://huggingface.co/lightx2v/Wan2.2-Lightning/resolve/main/Wan2.2-T2V-A14B-4steps-lora-250928/low_noise_model.safetensors
Text encoder: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors
VAE: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors
Captioning workflow for datasets
Just set the path to your dataset folder and your trigger word, then queue up the workflow. Super easy! Set the image load cap to 0 to process everything.
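Under the hood, a captioning pass like this amounts to walking the dataset folder and writing one caption .txt next to each image. A minimal stand-alone sketch of that loop (`caption_image` is a hypothetical placeholder for whatever captioning model the workflow runs; the trigger-word prefix and the "load cap 0 means everything" convention follow the description above):

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def caption_image(path: Path) -> str:
    # Placeholder: in the real workflow a vision-language model writes this.
    return "a photo of the subject"

def caption_dataset(folder: str, trigger_word: str, load_cap: int = 0) -> int:
    """Write '<stem>.txt' next to each image; load_cap 0 = process everything."""
    images = sorted(p for p in Path(folder).iterdir()
                    if p.suffix.lower() in IMAGE_EXTS)
    if load_cap > 0:
        images = images[:load_cap]
    for img in images:
        caption = f"{trigger_word}, {caption_image(img)}"
        img.with_suffix(".txt").write_text(caption, encoding="utf-8")
    return len(images)
```

Trainers that expect per-image caption files (the common LoRA layout) pick these .txt files up by matching filenames.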
Powered by skool.com/ai-tech-and-comfyui-workflows-2226
ComfyUI workflows and AI influencer Hub | Subs get exclusive workflows, nodes, and educational content. Appreciate the support <3