AI Experiments


 

Tron fights for the users.

I made this AI short as an ode to my childhood and as an experiment to see if I could pull it off. Set in the '80s, it follows a little boy stepping up to a classic TRON arcade game, dropping in a quarter, and getting zapped into the grid. I used Midjourney, Google's Veo 2, Pika, Udio for music, and Premiere to pull it all together.

 

A.I. holiday short.

I wrote a short for the holidays and tried to bring it to life. To make it happen, I used several different AI video applications, including Krea.ai, Kling.ai, and Runway. Funnily enough, the hardest part wasn't what I expected at all: it was the rolling ornament. I never did get it right. AI music in Udio.

 

An A.I. Coke Christmas.

Jason Zada and Coca-Cola recently used AI to reimagine some classic Christmas spots, and they did it in only three weeks. Inspired, I challenged myself to create something with Coke. This took me about four hours of trial and error, with plenty of errors still visible. My workflow: ChatGPT for refining prompts, Midjourney for stills, a little Photoshop for touch-ups, animation in Runway, and editing in Premiere. I borrowed the audio from the actual Coke spot to layer under the animations I generated, which helped a lot. Hopefully, Mr. Zada doesn't mind too much. Here's a link to the BTS for the amazing Coke work he did.

 

My buddy is an actor.

So, I put him in a bunch of cinematic test scenes. I really wanted to test out the consistent-character feature in the Hailuo model on Krea.ai, and I ended up with some great shots. These were all generated from text prompts and a single still image of his face. AI music from Udio.

 

A.I.ndiana Jones.

I challenged myself to create a trailer-style comic book animation using a classic hero as my subject. My workflow: ChatGPT for refining prompts, Midjourney for visuals, a little Photoshop for touch-ups, animation in Runway, AI-generated music in Udio, and video editing in Premiere.

 

Doritos A.I. crashin’ the bowl.

PepsiCo brought back the Crash the Super Bowl contest this year, so for this experiment I downloaded their assets: logo, bag image, music, and end tag. I can't enter (I almost won in 2012), and they don't allow AI anyway, but I wanted to see what I could create. My workflow: ChatGPT for refining prompts, Midjourney for visuals, Photoshop for touch-ups, animation in Runway, and editing in Premiere. The voice was generated in Runway.

 

Animated explainer.

My friend Greg asked me to try to make an animated explainer-style video for his business in a Pixar-like style. I used Midjourney to make the stills, then generated videos from those in Runway and Krea. VO from ElevenLabs.