@TheoreticallyMedia Yes, I am actually beginning to get a little excited, especially because Stability AI's video is open source and will remain so indefinitely? If that could run on my own HP Workstation, then that of course would be great. At 9:34, this one is particularly compelling because it not only rotates but adds blinking eyes.......and quite convincingly for a beta model. 👍 So, is it with Stable Video that in your previous video you got the motion on the cheese pirate ship? I think you got a really good effect with that static image turned into motion. I love pirate ships. For the replicate-a-famous-scene-with-animals contest you ran last year, the theme of my submission was a raccoon pirate standing in a cove with a pirate ship in the background. I had a lot of fun seeing what MJ can do with the character and ship, and it did well. Your cheese pirate ship is awesome, I'm kinda loving that image .....static or motion. I must try prompting for something similar! I did buy a subscription to ClipDrop last year and probably will renew again. You get access to so many tools for a very reasonable price. Plus, I actually love the simplicity of how we access style and image dimension options for SDXL. Makes experimenting very easy. Also, the images are generated super fast. If they'd add some version of Stable Video to the ClipDrop platform without raising the subscription cost too much, that would be sick! Bottom line with AI video is: the length of the final videos is painfully short. The trajectory for advancements to the technology seems decent, but being that I'm a year and a half off of 60, I wonder if they can get the technology to a point where one can create a full-length movie of high quality before I end up in a nursing home. 🤔
Thank you so much! Very much appreciate the Super Thanks as well! Ultimately super glad to hear you’re as excited about the future of these tools as I am!
I’ll say, MJ’s challenge is to keep it from getting cluttered. No question the creep of obscure commands is present- but I’ll say the website is doing a pretty good job of keeping it clean. I’m sure they’ll figure out a solution to style and character ref at some point, but typing a dash dash into the website along with copy pasting did feel a little…messy? Again, totally realize this is an alpha.
Welcome aboard!! Glad to have you here, and really happy to hear all that! I’m always open to suggestions and feedback for the channel, so please don’t hesitate to comment or reach out! Looking forward to hanging out!
@@TheoreticallyMedia I've recently had a couple of online presentations with VES Vancouver and Foundry. During these sessions, I mentioned your YouTube and Discord channels to artists interested in venturing into the world of AI/ML. Hopefully, you'll see an increase in viewership or new subscribers from VFX folks.
Thanks Nanci!! This one was a lot of fun! I'm glad Midjourney is pushing out some big updates-- doing an MJ video is always "home" for me, so I always have a little extra fun with them!
Can't wait for high quality consistent characters. I'll be making my own manga for a year or so until I can go full anime show when the AI video is ready. So excited!!!
So, it is-- although, there are some rumors that the version on Stability AI's website has a little secret sauce. I'm keeping an eye on it and I'll let you know if the gang finds a smoking gun.
Thank you Tim! 💯 MJ says here's something cool. It's another fantastic slot machine. It just takes a while to figure it out, along with all the other parameters, and say goodbye to your fast hours. And, work life in general! It's a great feature, but my goodness, it still needs polishing as far as ease of use. Fun as heck, and can't wait to see how they make it more user-friendly. And yes, the results are often dulled down. Appreciate the heads up for SD's video. 👍 Have a great weekend.
Oh agreed there-- to be fair, they have said this is an Alpha, but to your point on ease of use? Yeah, that was a tough pill to swallow. There's this nice and fairly elegant website, and I'm still dash dash'ing and adding long URLs into a prompt box. I know they'll figure something out for it-- but, I guess getting your hands dirty is the price we pay to play with the new toys!
I really think between LLMs and AI image generation we will see AI VR Holodeck interactive worlds within a few years. Characters, environments, stories, speech, and background audio. Content creators will generate characters, objects, and world/universe artifacts that become IP, much like Han Solo or The Enterprise are in movies and shows, and people will be able to interact with their favorite fictional worlds as players within the script.
Sref and the upcoming Cref are real game changers for Midjourney! I think a lot of people are mistaking Style for Scene, and I’m really excited for when that hits. Being able to generate a location and really explore? That’s going to be wild!
I've generated over 10,000 images on Midjourney and still don't have access to the site. Weird. I really enjoy your channel. You manage to keep us all informed of the huge amount of daily changes to these things without getting boring and repetitive.
Thanks for the clarity. I read David Holz's description on Discord of how to prompt this new V6 style tuner and it totally lacked clarity. Then you demonstrate it and... voilà! Magic in a beer mug. TY! 🍻
Haha, that’s sort of what I see as my job. It wasn’t for MJ, but some other AI tool where I got a comment that was like “they didn’t even have a manual, so you wrote it for them” Haha, one day someone will hire me to do this!
Same! It took me a few minutes to wrap my head around it (like, isn't this just image referencing?), but once you get the concept down-- it gets pretty wild. Going to start experimenting with srefs as srefs this weekend. It's going to be a rabbit hole for sure!
@@TheoreticallyMedia Just signed up for Stable Video. Runway doesn't seem to be doing anything on warping and morphing, and your video makes Stable Video look cool. Fingers crossed.
The --sref feature has been built really well, and seeing it come out of the gate looking this good only adds excitement for its future. Also, I feel they're getting much better in quality each time they release something new. This suggests that when they release consistent characters, it will be a game changer. I don't feel anyone has gotten that locked in yet, and when you do get good results, it's usually hit or miss, or takes a ton of steps.
Agreed! I’ve got a few tools on the list for next week that do consistent characters, but the input is all still a little wonky. I think, mid March we’ll have full (and easy) characters everywhere. Once we have that, plus consistent scenes/locations? I mean- we’re in warp speed!
@@TheoreticallyMedia I agree with the mid-March prediction. So many tools out there have specific strengths that can be used with others. When --cref hits, you could do a video where you use different companies/tools to get results for different styles. It's like being an AI producer, compiling the needs/wants to get a specific vision across. There's a very defined workflow in combining all of these to get crazy results.
I gotta admit, it was a BLAST playing around with it this afternoon. Some of the “broken” outputs when I pushed things too far were hilarious as well. I love it when MJ makes me laugh!
It does indeed look cool, but a major shortcoming I've noticed is that when you use the --sref parameter, your actual prompt is largely ignored. The reference image has too much influence on the final result; it copies more than just the style.
I know they’re still dialing it in, but that might be an interesting idea, if you could neg weight the reference image. One idea I want to play with is to sref an image you’ve already sref’d. I think prompts are working (see the Astronaut in the coffee shop in the video), it just isn’t working well! At least not yet!
Ha! Well, a picture is worth a thousand words! Might be interesting to try and use a style reference, then run it through /describe, then reprompt with that to see what you get! (I have the feeling it won’t be the same!)
Very soon. I think I'm doing a look at some tools for this next week-- so it is right on the horizon. MJ will roll out --cref (character ref) soon, and I'm REALLY anxious to see that. How much generation control are we going to have? Can we change Character Styles? Can we change Character Outfits? So much to ponder! Can't wait to see it!
📝 Summary of Key Points: 📌 The first update is about a new feature called style references on the Midjourney platform. Users can create new styles by using image URLs and prompts. The feature is still in the alpha phase and does not support consistent characters. 🧐 The second update is about Stable Video, a platform by Stability AI. It is now in beta and allows users to manipulate the camera in various ways, such as locking, shaking, tilting, orbiting, panning, and zooming. Stable Video is currently free during the beta period. 💡 Additional Insights and Observations: 💬 The "style references" feature on the Midjourney platform allows users to generate new styles using image URLs and prompts. 💬 Stable Video by Stability AI is a platform that offers options to manipulate the camera in various ways. 📊 No specific data or statistics were mentioned in the video. 🌐 The video does not reference any external sources. 📣 Concluding Remarks: The video highlights two updates in the creative AI space. The first update introduces the "style references" feature on the Midjourney platform, allowing users to create new styles using image URLs and prompts. The second update is about Stable Video, a platform that offers camera manipulation options. Both updates show promising advancements in creative AI, and the speaker expresses excitement for further developments in the future. Generated using TalkBud
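For anyone wondering what the style reference prompting described above looks like in Discord, a minimal sketch of the prompt shape is below. This is an assumption based on the feature as discussed in the thread: the image URL is a placeholder, and --sw (style weight) is an optional parameter for tuning how strongly the reference influences the output.

```
/imagine prompt: an astronaut reading in a coffee shop --sref https://example.com/style-image.png --sw 100 --v 6
```

Multiple reference URLs can reportedly be listed after --sref, in which case Midjourney blends the styles together.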
In some ways, dipshizzes like me have KINDA been doing that already, plus they have the blend function in Discord -- but using it on the web app is going to be better.
Totally! It took a bit of playing, but at first I felt it was a combo image ref/blend, but as I spent time with it, I realized there was a lot more going on. A really interesting thought I didn’t get to was the idea of taking an output and then redoing the process. I think there’s a lot to uncover here!
Well, it IS- I think it’s what Leonardo and PixVerse are using, but the Stability version is waitlisted. I do mention that at the top of the section. Sign up!
Hello Tim, can I have some advice on creating consistent tattoo designs? I tried following your advice but it's not really working. I uploaded a tattoo dragon or tiger in a specific colorful style, but I can't get good results with other animals like a lion, or even another dragon. Do you have any advice or something I can do? Thanks, I like your videos and how you are
So, it’s a bit subtle, and something that took me a minute to wrap my head around as well. I hit on it in the Lara Croft example (but maybe it wasn’t fully clear): as a style ref, we get Lara Croft in that image style. But as an image reference, Lara turns into an Asian woman, because MJ is reading the whole of the image (including subject) as one “look.” Does that make sense?
I never like to pick sides with the tech, considering that every step each of them makes brings us closer to something amazing-- BUT, you gotta give it up for Stability, who kicked the whole thing off!
MJ? I mean, I don’t know their books, and I certainly understand needing to turn a profit… But yeah, I don’t know what their margins are. I’ll say, much like the “Apple Tax” there is a “Midjourney Tax”
Wow...spent more time retrieving your 'free files' than needed. I was going to make a solid donation, but the frustration level you set up getting your files was worth zero donation. Dude, you gotta make it more efficient and easy for people to get your files. Trying to trick them by sending them all over the place to click multiple links, and then spamming their email with your garbage, is utter garbage, man. Do better. Holy cow, what a nightmare. Who the hell uses Google Drive to distribute files commercially now?
I don’t know about all that. Gumroad is a pretty common platform for independent distribution. And I don’t send out any emails. I haven’t launched a newsletter or anything like that. So you shouldn’t be getting anything from me.