🦾 Free AI Tools & Workflow Tutorial (30 Min Video)
Here is a free, full step-by-step 30-minute tutorial on how I made the AI cinematic trailer for @IntoAuris that I released last week using Midjourney, Runway, ElevenLabs, and Topaz.
Many people DMed me asking how I made it, so to pay it forward and help others feeling overwhelmed by generative AI, I decided to write up this long tweet and create the attached video tutorial.
I will provide links to every account, resource, and AI tool that I cover in the replies below.
Bookmark this bad Larry and grab a cup of coffee, 'cause we're going deep!
Here's the breakdown of what I will cover:
1. Midjourney AI Image Tips/Tricks
2. Midjourney AI Cinematic Shots
3. Topaz AI Image/Video Quality Enhancement
4. Runway AI Image to Video Tips/Tricks
5. Eleven Labs AI Text to Speech for Voiceovers
1. Midjourney - Finding Your Style (@midjourney)
When I first started trying to get the story, characters, and environment from my head to image form, I had no idea what to use for prompts in AI tools.
I used the 5 methods below to find the best prompt keywords that produced image styles I loved and then saved them to use over and over when making characters.
A. The "/describe" Feature In Midjourney
I reverse-engineered my favorite art from childhood games like Diablo, Warcraft, Starcraft, etc. using the "/describe" feature in Midjourney.
Drag a picture into Midjourney and use "/describe", and it will give you 4 prompts that it thinks would best recreate that image.
Take the artists, art styles, games, movies, etc. you love, and use "/describe" to get the ball rolling and find prompt keywords you can use.
B. Midjourney Gallery
I spent a lot of time browsing the Midjourney creator gallery for things I liked, and then took splinters of those prompts and applied them to my own characters.
Create an account, log in, and save/copy-paste prompts that fit your style.
C. Youtube/Twitter Creators
There are SO many good content creators giving great info away for free.
In the first few months, I easily spent 300 hours looking over video and tweet content from @mreflow @iamneubert @LinusEkenstam @HBCoop_ @chaseleantj @ciguleva @saana_ai and so many more.
Follow a bunch of great artists on X and YouTube, and use the "list" feature in both to curate a dope feed of new ideas, prompts, styles, and updates for AI tools.
D. ChatGPT
Ask ChatGPT to give you text-to-image prompts using "in the style of..." and add your favorite stories, artists, or brands.
I primed ChatGPT by telling it that it was a "world-renowned sci-fi fantasy filmmaker," asked it to create a shot list for me, and then asked it for prompt ideas in the style of an epic live-action sci-fi fantasy movie.
E. Study Film/Photography Terms
I talked with friends in the photography and film industries to learn about cameras, lenses, lighting, etc., which was huge for @midjourney image creation.
Once I found the camera, style, lens, and shot list that resonated, I saved them all and kept reusing them. It dramatically improved the speed and quality of the outputs.
2. Midjourney - Creating Cinematic AI Shots
To save time and money in generative AI tools, you have to create prompting templates or formulas that you can quickly reuse when you get new ideas.
I quickly found a prompting formula that gave me a lot of consistency:
"a [emotional tone keyword] cinematic very [type of shot] of [subject/actor] wearing [clothing/armor] [posture/position] in a [setting/environment/location] with a special emphasis on [key scene characteristics], depth of focus, [camera type], [lens type], award-winning photograph, [genre] --ar [desired ratio]"
Sometimes I would use additional modifiers like:
- --c (chaos) to give Midjourney more creative freedom
- --iw (image weight) to weight a past reference image if I was building on a base image
- --s (stylize) to allow more or less stylization
- "Zoom Out" to outpaint more of the scene
- "Pan" to extend the canvas in a specific direction
- "Vary (Region)" to inpaint or replace items
Example for a shot from the trailer:
"a breathtaking cinematic very close-up shot of an emotional male hero warrior knight in neon blue cybernetic glowing armor standing on a battlefield with a special emphasis on his highly detailed facial features, blood on face, rain droplets in foreground, depth of focus, Fujifilm camera, EF 80mm f/1.8 lens, award-winning photography, sci-fi fantasy --ar 16:9"
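The reusable formula above can be sketched as a small Python helper that fills in the blanks for each new scene. This is just an illustration for organizing your own prompt library (the field names are mine, not a Midjourney API); the output is a plain prompt string you paste into Discord.

```python
# Minimal sketch: the cinematic prompt formula as a reusable builder.
# All parameter names are illustrative, not part of any official tool.

def build_prompt(tone, shot, subject, clothing, posture, setting,
                 emphasis, camera, lens, genre, ar="16:9"):
    """Fill the reusable cinematic prompt formula with scene details."""
    return (
        f"a {tone} cinematic very {shot} of {subject} wearing {clothing} "
        f"{posture} in a {setting} with a special emphasis on {emphasis}, "
        f"depth of focus, {camera}, {lens}, award-winning photograph, "
        f"{genre} --ar {ar}"
    )

prompt = build_prompt(
    tone="breathtaking",
    shot="close-up shot",
    subject="an emotional male hero warrior knight",
    clothing="neon blue cybernetic glowing armor",
    posture="standing",
    setting="battlefield",
    emphasis="his highly detailed facial features",
    camera="Fujifilm camera",
    lens="EF 80mm f/1.8 lens",
    genre="sci-fi fantasy",
)
print(prompt)
```

Once the formula is locked in, new shots are just new argument values, which is exactly what makes the template fast to reuse.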
3. Topaz Image/Video - Quality Enhancement (@topazlabs)
Although it costs money (free options do exist, like BigJPG), I quickly found that a higher-quality image input to image-to-video platforms like Runway made a huge difference in the cinematic video output.
So, once I found shots I liked and storyboarded them in @figma, I ran them all through Topaz AI Image enhancement.
Then, after I found a shot I liked from Runway, I would run it back through Topaz Video to enhance the quality to 4K and interpolate missing frames.
This was one of the most labor-intensive parts, but I think it made the final trailer WAY more polished and emotionally resonant/believable.
If you can afford it and really want to create eye-popping visuals, this might be worth the investment.
4. Runway Gen2/AfterEffects/Motionleap Image to Video Tips/Tricks
One thing I found is that in the current state of cinematic AI, there is a huge tug-of-war between quality and cinematic control.
The 3 main image-to-video workflows I used were:
A. Midjourney, Topaz Image, Runway, Topaz Video (@midjourney @topazlabs @runwayml)
B. Midjourney, Photoshop Layer Split, LeiaPix Depth Map, After Effects (@Photoshop @AdobeAE)
C. Midjourney, MotionLeap (@MotionleapApp)
What I mean is that with some text-to-image-to-video pipelines (like Midjourney to Runway Gen2), you can get insane cinematic control and lifelike camera movement, but you often trade away quality.
In other image-to-video pipelines (like Midjourney to After Effects or MotionLeap), you can get insane quality preservation but limited overall cinematic control and lifelike camera movement.
Based on the scene and your goal, I think learning each pipeline and picking what fits your goal is key for AI cinematic creators.
5. ElevenLabs Text to Speech for Voiceover
Personally, with my style and cinematic goal, an epic music soundtrack and epic voiceovers really make or break a cinematic trailer's emotional impact.
So, I wanted to make sure that the music absolutely slapped, and I scoured @epidemicsound for copyright-free music first. This was a trick shared by @iamneubert with his trailer 'Genesis'.
I listened to so many tracks, over and over, picturing shots in my mind. I then made those shots with the tricks above in Midjourney, storyboarded them in Figma, and then started the image-to-video pipeline process.
Second, the narrator's voice in the trailer is a main character and the dialogue is super important in the larger story arc.
So I spent a lot of time thinking about his voice: quality, tone, and language. I used @elevenlabsio to custom-create a voice that felt as powerful and epic as the character is in the story.
I suggest typing out the whole voiceover (with ChatGPT or yourself in a notepad app), then putting in each sentence line by line and generating them individually.
That way, you have more creative control over the pace, tone, etc. of the voiceover. Also, it's much easier to edit and change the pieces in the editing process.
I ended up with 10 separate smaller audio clips that I pitched and slowed down to really hit the mark.
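The line-by-line approach above can be sketched in a few lines of Python: split the full script into sentences so each one becomes its own small clip you can regenerate, pitch, and slow down independently. The splitting is plain Python; the actual TTS generation would still happen in the ElevenLabs UI or API, and the script lines here are invented examples, not the trailer's real dialogue.

```python
import re

def split_script(script: str) -> list[str]:
    """Split a voiceover script into individual sentences,
    one per planned audio clip."""
    sentences = re.split(r"(?<=[.!?])\s+", script.strip())
    return [s for s in sentences if s]

# Invented example lines, standing in for the real trailer script.
script = (
    "The world of Auris is fading. "
    "One warrior still stands. "
    "Will you answer the call?"
)

for i, line in enumerate(split_script(script), start=1):
    # Each numbered line becomes one small audio clip you can
    # re-generate or swap out independently in the edit.
    print(f"{i:02d}: {line}")
```

Numbering the clips also keeps the audio files in story order when you drop them into your editing timeline.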
Summary:
So, that's all I got! In summary, I suggest:
1. Brainstorm trailer story outline and visual shots
2. Start with backing music, and imagine your shots
3. Make basic shots in Midjourney, and drag/drop them around in Figma or Canva for a storyboard
4. Use one of the 3 image-to-video pipelines above (or others) to find the balance between quality and cinematic control
5. If able, use quality enhancement on the front and back end of images and videos
6. Go bit by bit with voiceover and speech
7. Use others for creative inspiration paired with your own ideas!
Hope this helps! If you made it this far, you're a legend. Feel free to retweet this and spread the love.
#IntoAuris #CinematicCrafting #AIInsights @IntoAuris