Over the past week we ran an interesting experiment: we built a consistent AI UGC creator.

First we trained a custom model in ComfyUI over about four days so we could generate stable images of the same persona across different scenes and angles. The goal was not just good images, but identity consistency.

Once the character was stable, we moved into motion. Using Kling and motion control through Higgsfield, we generated short UGC-style videos that feel native to platforms like Instagram, TikTok, and Facebook. After several iterations, the same character could appear across multiple pieces of content while staying visually consistent.

That opens an interesting possibility: instead of producing one video at a time, brands can potentially build a repeatable UGC content engine around a recognizable persona.

Now we are testing the next stage: publishing the content and measuring how audiences react compared to traditional UGC. Still early, but it feels like this could change how brands scale short-form content.

Curious what other founders think.