Here’s what I want: 1) Stable Model 2) Stable Environment 3) Creative Camera Work. If I can simply create characters and insert them into an environment, without either of them morphing into an acid trip, I’ll pay. As of now, getting usable clips is not only time-consuming, with too much trial-and-error prompting, it gets expensive. Whoever can accomplish this first is going to do very well. I hope it happens soon.
it’s cool to have the runway gen3 extensions… the problem is they’re using GEN 2 to do the extensions, not GEN 3, so that’s why the extensions look so weird and low quality… they really need to use GEN 3 to do the extensions.
The Adobe CEOs are extremely greedy, unlike anything seen in any other company. Additionally, it’s worth noting that ON1 is set to launch their new ON1 RAW photo software, which will exclusively feature LOCALLY generated images ... NOT ONLINE.
As soon as Adobe releases a 4th version of the firefly model, we'll have a robust image-to-video pipeline without subscribing to many different services.
@@AINIMANIA-3D Yes, but that's the case with every plug-n-play solution. If you want privacy and uncensored generations, you gotta go with Flux and ComfyUI
@@KevinSanMateo-p1l Do you have a list of AI video generators that are uncensored? Ideogram is the only one I know of. Minimax appears to do Will Smith, Darth Vader and Mario, but I don't know if it would do Trump shooting a gun, for example (like in the Dor Brothers' videos).
@@KevinSanMateo-p1l That's such a "I don't know how to f-ing read" comment, so, f-ing READ. The commenter said that when Adobe releases Firefly 4, they will be able to use ONE subscription service (Creative Cloud) rather than NEEDING multiple. Read, for God's sake.
After watching the freaky extend video feature it made me wonder if this is the real skynet. Instead of an apocalypse and nukes, which we've seen coming, skynet is going to create seriously disturbing videos that drive us into insanity.
You know what? You make a great point, because what could end up happening is people start making AI videos that look like a real terrorist threat to try to cause a war, and now this is going to make it harder for governments to verify videos. Oh jeez, this is going to cause a hot mess of new fraud.
MiniMax is the most impressive model out. It does great with expressing prompted emotions but I have to say that Kling’s pro version has been capable of that too. Great video, as always :)
100%, my Friend, I've been using MiniMax every day for over a week, and it by far is the best at animating humans as well as other things like birds flying and animals running.
@@curiousrefuge I tried MiniMax after I watched your excellent video, and MiniMax is great, but without the ability to use picture-to-video it's useless for me, because you basically cannot produce anything with one consistent character (human). Which is the best option for picture-to-video? I've personally never seen videos as realistic as MiniMax's, but if you have to produce many videos with one character doing different things, which AI tool do you prefer? Thank you once again for your YouTube channel :)
As someone working on an XR concept in California I've been following the legislation you mentioned. The lead on the language in that bill is the "Center for AI Safety" which is basically a non-profit consultancy. Not all that enthusiastic that they are leading the charge here in CA.
Dude if you dragged the ankle point to the bear's toes I can imagine how precise you were with the rest of them. No wonder the bear animation looks wonky
I think a better comparison would be to have them all start with the same picture. Even just taking the first frame from Firefly would have been a good starting point to compare
@@terryd8692 I mean, there will always be a bias, as in Adobe will simply choose their best example, but to give it a bit of a fight at least start from the same premises
Feels like a free ad for Adobe. If you're going to run comparisons with Runway, at least show us the prompt so we can make our own assessments. The "trust me bro" approach makes people wonder what you're hiding and who is paying you to hide it.
how tf do they get their movies to look so high resolution in those films shown at the end? I know there are ways to "cheat" by adding fine grain and filters, but the resolution overall looks much better than what Runway gives out, even with good prompting and high-resolution input images. Especially "Seeing Is Believing", it looks amazing; the shot with the Asian woman is great!
Thank you so much! Technically, Runway’s resolution is slightly higher (1280x768) than MiniMax's (1280x720), but I agree: the pixel density in MiniMax feels smoother. Especially with cinematic outputs, MiniMax has great consistency, and though not technically "sharp" or "high-res," it feels more balanced, kind of like a Blu-ray downsized to DVD that still retains its perceived sharpness. For "Seeing Is Believing," I didn’t use Topaz or any other AI video upscaler. Instead, I just put all the 720p clips into a 5K Final Cut Pro project, which just "zooms" them out without additional upscaling or pixel interpolation. Then, as you mentioned, color grading and adding fine grain help give the shots that "hi-res" look, even though they technically aren’t. :) You can watch the final 4K version of "Seeing Is Believing" here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ghnk0rf5qPU.html
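For anyone curious about the raw numbers behind this comparison, here's a minimal sketch. Note the assumption (not stated above) that "5K" means a 5120x2880 timeline:

```python
# Quick numbers check for the resolutions discussed above.
# Assumption: the "5K" Final Cut Pro project is 5120x2880.

def pixel_count(width, height):
    """Total pixels in a frame."""
    return width * height

runway = pixel_count(1280, 768)    # Runway Gen-3 output
minimax = pixel_count(1280, 720)   # MiniMax output

# Linear scale factor for a 1280-wide clip to fill a 5120-wide timeline
scale = 5120 / 1280

print(runway, minimax, scale)  # 983040 921600 4.0
```

So the two outputs differ by only about 6% in pixel count, which is why perceived sharpness comes down more to encoding and grading than raw resolution.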
@@particlepanic Thank you for taking the time to answer in such detail, this is great input. I appreciate it! At first I didn’t realize you were the creator answering, haha. I’m looking forward to your future projects, keep it up :)))
11:47 Why not do an end frame when testing the camera movement though? I would want as much control as possible, so I would definitely do an end frame. I’m curious to see what the results look like when you have both a start frame and an end frame, and you change the camera movement at the same time.
After LITERALLY stealing thousands and thousands of photos, images, and video clips from their clients via their cloud service, of course they can generate great AI videos.
In the Runway vs. Adobe comparisons, Adobe's actually seem just as janky tbh, and I wouldn't use either in real-world applications. 1. Look at the reindeer's back leg as it turns to face the camera. 2. A drone flying through lava... sure, that's totally a thing drones can do. 3. The puppets: sure, whatever, both are cursed. 4. Look at the ripples in the sand change over time.
Wait, which program generated the montage that's playing while you're talking about legislation (19:48, 20:26, 20:37, 20:45, etc.)? Those are some of the best I've ever seen.
...and yesterday came the announcement that Runway Gen-3 can now do video-to-video... things move faster than the news. Btw, thanks for the Meshy reminder... have to check it out right away. 🙃
There are several videos on RU-vid of people flying drones through lava. It's not copying anyone's work. It's using it as a reference, just like every other generation. CTFD.
Thank you very much! I have been researching this space, looking at smaller vendors. I would have ignored Adobe assuming a heavy handed "solution", but this actually looks worth paying for. (Adobe stock at the next dip?)
I'd love to see Adobe release this stuff, but my fear is that they begin charging extra for generations. Wouldn't surprise me either as they are pretty greedy with their stock footage after you're paying big money for the suite.
All this talk of film and footage, but you never showed a single clip of either?! It's all digital video; ain't no film or feet involved! Digital "video," not film, is measured in time, not feet. It's my pet peeve, and everybody gets it dead wrong it seems; drives me half nuts! I grew up in the age of film and I made the switch to digital and got it right. So how is it that you kids who never touched a piece of film in your entire lives keep talking about "film" and "filming" like you even have a clue what the stuff is, let alone where to get it? Cheers 🍻
I am not convinced by a comparison of a few random generations from models that have been trained on different data sets and for different ranges of topics. It's a bit like taking a Formula One car and a golf cart and comparing their off-road capability.
The AI video sector is getting hot AF. I'm using like 10 different video generator websites in my workflow to make videos. It's honestly getting out of hand. I also think the California pushback on AI is due to the fact that Hollywood is there, and they don't like the idea of the common man competing for their market share.
Do you know which video generators are uncensored (violence, guns, gore, horror, blood, celebrity and politician likenesses, etc.)? If one uploads an image of Trump, for example, to Gen3 Runway will it animate it?
@@High-Tech-Geek I’ve made some Trump image to video with kling. I think if you just upload the picture and call it “Fat orange idiot…” instead of “Donald Trump” then it won’t ID it.
Strange to compare against an Adobe AI video, which is just their demo example, when you didn't try the same thing yourself in Runway, which you actually can test on your own. We know demos are cherry-picked and not representative of what you get when you try an AI video tool yourself. (Sorry for my English, it's not my first language.)
Thank you so much for highlighting "Seeing Is Believing" as one of your AI Films of the Week! For everyone who wants to see the full "Cinematic Turing Test" demo in 4K: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ghnk0rf5qPU.html
That lava shot is actually an FPV drone pilot's footage. I remember the clip from his YouTube vlog where he flew his FPV drone into a volcano and the lava destroyed his propellers! I wonder if he submitted his clip for AI training, or did Adobe just snatch up content creators' clips the same way Udio does with their audio generations?
I don't know why everyone goes crazy about it already; it's still in the baby stages. Videos last like 5 seconds at best and don't even have audio. You can't possibly make a movie or TV show. You're better off imagining something; at least then you can think of the outcomes.
Just like they still haven't done 2K video with Gen-3 either. Tell them to get on with it or they will lose out big, as they are starting to do with Kling, who are taking a lot of their business because they have both.
With Kling's update, Kling is now better than Runway in every way. Gen-3 is losing on every level: too stupid to do negative prompts and 2K. It's become a joke.
Wait until it comes out first. This is advertising... it doesn't always do what it's claimed to. I've still never had any decent results with Firefly for images.
We've already got AI models with large followings online, so insistence on caring whether the popularity of a persona was fabricated/augmented by Hollywood with real people, or whether AI does it all, is already starting to wane. Thank goodness that bill means the top actors can still rake in their millions. Just ponder how much money is spent on getting a brand-name actor and how that money could have been used to make the film better in other ways. I mean really, take a look at your latest Marvel blockbuster: would using real actors make any real difference, other than some expectation thing? It's only an expectation because marketing has made sure it is. On the plus side, those with true acting talent who don't have the 'look' aesthetic of the moment will have more opportunity to get work, coupled with some pretty visual avatar representation to lust over, so yay? The same could be said for the glut of YT (TikTok, whatever) influencers: the talking-head model is going to be the first to go AI, so hoping to make some money on the platform is not a good future job prospect. heh
This technology needs mandatory watermarking, or a ban on photorealism. It is going to make video evidence inadmissible in court, even when it is authentic, when there is no way of knowing for sure if it is real or not.
Thanks for the content!! All we need is the right advice on how to invest in crypto and we'll be set for life. I made over a million dollars trading in the crypto market this year, regardless of market conditions 😊.
"The future of healthcare is here! With AI-powered avatars, we'll have access to expert-level knowledge in real-time. No more waiting for doctors or searching for answers online. These avatars will be trained on vast amounts of medical data, enabling them to reason and respond like a PhD-level expert. By mid-2025, avatars will transform healthcare delivery, providing personalized medical information, emotional support, and even helping with diagnosis and treatment plans. Get ready for faster patient understanding, increased accessibility, and reduced healthcare costs! What do you think about this revolution in healthcare? Share your thoughts!"
The (very, *_very_* young) woman in the coffee shop does not actually seem to be speaking real words; it looks more like she's clacking her teeth together at times. There are some really amazing horror-oriented AI "reels" on Facebook that might give H.P. Lovecraft chills. One especially has multiple scenes that weave together a sci-fi mini-story to great effect. I'm looking for an image-to-3D-model conversion that can work from clean drawings. BTW: I think your "walking bear" animation failed because of the raised arm position of the base model. Finally, although having nothing to do with film directly, a retopology tool for organic and hard-surface 3D models would seem like a highly useful (and non-controversial) use of AI; no sane person enjoys *_that_* tedious process.