
Owned by Joel

UNITEE

1 member • Free

Memberships

Lead Generation Insiders 🧲

1.7k members • $1,497

AI AUTOMATION INSIDERS

3k members • Free

Midjourney Experience Academy

79 members • $47/m

6 contributions to Midjourney Experience Academy
The Future Just Landed
Midjourney is now part of Meta.ai. Expect to see more art from customers. Check out this article from Images Magazine on how it works and what to expect: https://www.images-magazine.com/DE/February_2026/60/
1 like • 2d
The software is called “Midjourney” for a reason. To me, this implies it was the developers' intention to get you midway; it takes a talented artist to finalize the imagery 99% of the time. A client is going to press a button once and accept it as-is because they don't know any better.
0 likes • 2d
@Marshall Atkinson Oh, ok, you mean Meta's version :) I was referring to Midjourney in general... and yes, 1000%. Just like a pro can typically navigate tools like Canva, because "that eye" helps them make better, more intentional choices.
Jumping Marlin
We create a ton of fishy graphics since we are by the beach. I can't show the final designs until they go to production, but I can show one of the generations that were fairly accurate. Prompt: an artistic illustration of a large award winning Blue Marlin jumping out of the deep blue sea, large splash of water and surf around the Blue Marlin, dynamic curving pose, airbrush and watercolor illustration style
1 like • 14d
Very cool. Might I ask what decoration method (DTF, SP, dye sub, etc.)? And what will y'all do to finalize it (halftone treatment, etc.)?
0 likes • 6d
@Jon Anderson Nice workflow, seems very built out. Appreciate you sharing!
mere mortal models
It's been 3 yrs since I experimented with generating more average-looking models in MJ. Has there been any progress in training the model to accomplish this from the jump, or is there a better tool for building lifestyle photo assets out of the box that don't scream "Greek gods and goddesses with perfect skin tone, hair, and tattoos posing behind a generic weathered brick wall"? Workarounds: 1. My first inclination is to shoot some parts practically, generate the expensive components, then composite it all together in post. 2. Perhaps plugging in omni-refs of real-life references could be a workaround for the model's biases.
0 likes • 6d
@Marshall Atkinson Thank you. Those specific prompts get it closer, and clothing them certainly helps, but v7 still doesn't appear to make them average enough to not pass for a clothing ad at Target (less face symmetry, average build, etc.). I get it; MJ's initial training was probably professional photography and stock photography. I'm going to do some experimenting with omni-ref + a "bad coloring" Photoshop color treatment and see if that'll help.
Daily experiment
purple, gold, green colored funky abstract painting of mardi gras beads hanging from inside a streetcar in new orleans --chaos 50 --ar 3:2 --stylize 550 --weird 13
Columbus
I created something I am quite proud of today: a cinematic video of the Nina, the Pinta and the Santa Maria, all in Midjourney. Usually I start in Midjourney, take it into Photoshop, and fix it there. Then it's over to Higgsfield to use Nano, Kling, or the new cinematic video tab. But today, this came together all in Midjourney. Whooda thunk it?

The final image prompt: "Wide cinematic low angle shot of the Nina, the Pinta and the Santa Maria at sea with very rough water and large breaking waves, The sails are fully open with a red Spanish cross centered on them, The sky is blue with clouds and dramatic morning light, Rule of thirds, shot with a ARRI Alexa Mini, Epic Cinema." --chaos 10 --ar 7:3 --profile dpi2j1v --stylize 150

I give @Marshall Atkinson a lot of the credit, as I used a bunch of things I learned over the last few months, especially not being stingy with the iterations. I also used the edit tool in MJ instead of Photoshop (with Nano). While the ships looked great, I wanted them smaller, but every time, MJ would stick another ship into the space where I just wanted open ocean. It took a bit of futzing, but I finally got my starting and ending images to create a video from.

Next, I went right over to Higgsfield and proceeded to get weird results. I know what I did wrong: I created a smaller version of the original image, and when I asked Kling and Nano to make a video, they just scaled the image up, which made the whole video look really unnatural. I had to stop at this point as work intruded.

Cut to later this evening, when I had to fill up with gas. I pulled out my iPhone and decided to generate a video from my image with MJ while I was waiting. I'll be damned if it didn't turn out pretty good AND better than my go-tos for video generation. It's a little fast, but I can slow it down in Premiere.
Next steps: take the last frame and add a bunch of Spanish sailors driving golf balls off the side of the ship, a close-up of Christopher Columbus with a driver and a telescope, and some other shots. Then stitch them all together to make a commercial I couldn't possibly have made 30 years ago, when I foolishly pitched it to my boss.
3 likes • 16d
@James Ballard Amazing results! Making just the water move realistically 10 yrs ago would've meant licensing a big-budget film plugin such as RealFlow plus 3D software such as Maya, then weeks of running physics simulations, rendering, testing, and repeating, plus a farm of computers to get it done efficiently. To your point on the golf ball scene... artistic and creative ideas are now boundless once the technical hurdles are removed. Generative AI is essentially one's own personal technical software engineer.
Joel Hebert
Level 2 • 7 points to level up
@joel-hebert-2558
Designer by day, father by night

Active 1d ago
Joined Apr 1, 2025