Higgsfield just released their MCP integration — you can now generate cinematic images and videos directly inside Claude conversations
What it does:
- Connect Higgsfield to Claude, OpenClaw, Hermes, and NemoClaw
- Generate stills from 16+ models, including Soul, Nano Banana Pro, Flux, and Seedream
- Create video from text or images using Seedance, Kling, Veo, and Minimax Hailuo
- Train a Soul Character once and reuse it for character consistency
- Access your full Higgsfield generation history inside Claude
The biggest win is the workflow: describe what you want in Claude, and it picks the best model, sets the parameters, and delivers the result. No tab switching, no copying prompts, no re-uploading reference images.
Setup takes under a minute:
Settings → Add → Connect → sign in to Higgsfield. No API keys to manage.
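For anyone who prefers registering MCP connectors by hand rather than through the UI, the entry in a Claude Desktop config file would look roughly like this — a sketch only; the server name and URL below are placeholders, not Higgsfield's actual endpoint, so check their docs for the real value:

```json
{
  "mcpServers": {
    "higgsfield": {
      "url": "https://mcp.example.com/"
    }
  }
}
```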
Has anyone tested it yet? Curious what the first generations look like for people on different use cases.