
Sora Creator “Video generation will lead to AGI by simulating everything” | AGI House Video 

Wes Roth · 195K subscribers
44K views

Learn AI With Me:
www.skool.com/natural20/about
Join my community and classroom to learn AI and get ready for the new world.
#ai #openai #llm
BUSINESS, MEDIA & SPONSORSHIPS:
Wes Roth Business @ Gmail . com
wesrothbusiness@gmail.com
Just shoot me an email to the above address.

Published: Apr 6, 2024

Comments: 298
@Raccoon5 · 2 months ago
We have the technology to generate 1080p videos with crystal clarity, synthesize perfect voices, and simulate the human mind with our AI models, but not to record a lecture about groundbreaking innovations in more than 720p with a better mic than one from the depths of Alibaba.
@ngamashaka4894 · 2 months ago
The depths of Alibaba?! I'm keeping that one, sorry, too good :)
@Satou-Akira71 · 2 months ago
Functional curtains would be dope so you can at least see the 720p projection.
@bubbajones5873 · 2 months ago
Best comment ever! 🎉
@ExtantFrodo2 · 2 months ago
It's an acoustically bright room. A better mic wouldn't really help very much.
@SirCrashaLot · 2 months ago
Well played :)
@H1kari_1 · 2 months ago
It's so funny having some of the smartest people in the world in the room while the AV recording has the quality of a high school club.
@MakeTechPtyLtd · 2 months ago
1800 frames of 1080p all at once with diffusion. That explains how occluded objects stay persistent: the model can see all of the frames while it creates the video. I'm surprised how simple the approach really is. I'm visualising it as a diffusion model that does all the frames laid out in one big image; roughly a 32,400 x 32,400-pixel square for a 1080p, 30 fps, 30-second video. How to integrate a GPT LLM and a diffusion model toward the goal of AGI is a fun challenge. I wonder if the diffusion video will act like a physics visualiser for the AGI to predict real-world results of its actions and surroundings. What's really cool is it could be trained on video that sees infrared and ultraviolet, then connected to a robot with IR and UV spectrum cameras so it can see more of the world than humans. Maybe the data set would be IRGBU instead of RGB, so it could still be displayed for humans on normal screens. The possibilities... -Ken.
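As a sanity check on the grid arithmetic in the comment above (assuming 1920x1080 frames at 30 fps for 30 seconds; this is the commenter's mental model, not Sora's published internals):

```python
# How big is a 30 fps, 30-second 1080p video if every frame is laid
# out side by side as one giant square image?
fps, seconds = 30, 30
width, height = 1920, 1080

frames = fps * seconds                  # 900 frames
total_pixels = frames * width * height  # 1,866,240,000 pixels
side = round(total_pixels ** 0.5)       # side of an equivalent square

print(frames)        # 900
print(total_pixels)  # 1866240000
print(side)          # 43200
```

Under these assumptions the square works out to 43,200 pixels per side, somewhat larger than the 32,400 figure quoted in the comment.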
@andrew-729 · 2 months ago
I feel you, Ken. I cannot get this out of my head. The ability to simulate like this negates a lot of the BS we have to do now... This is so exciting.
@d.d.jacksonpoetryproject · 2 months ago
Yes - just start with static and then eliminate everything that isn't static until you have a photo-realistic video obeying Earth's laws of physics with perfect occlusion. Easy-peasy :-)
@MakeTechPtyLtd · 2 months ago
@andrew-729 Yes, it will be interesting to see an integration of Sora 2 and GPT-5 (for example). That would surely have some AGI-like emergent properties. If they could somehow be integrated at the latent-space level, we would see some very interesting results IMO, as the current diffusion-vs-LLM model incompatibility is a significant bottleneck; i.e., the models have to go through the full process of generation to interface or "talk to" each other.
@UltraK420 · 2 months ago
I've been waiting for AGI my whole life. If it's achieved this decade while I'm still in my 30s, then I'll be quite satisfied with that timeline being the beginning of the new era. I'm ready to finally see the growth curve go up exponentially; I've had enough of this stagnant life of marginal change.
@AntonioVergine · 2 months ago
@UltraK420 Yes, but are we ready to live in a world where every single day everything and every rule has changed because an ASI re-engineered everything while you were sleeping? Can you handle the fact that even your cupboard will not have the same shape tomorrow because an ASI found a more efficient way to have you drink coffee? We're not designed for it. We are built with our own maximum speed of change.
@the_str4ng3r · 2 months ago
I especially like how they're talking about how this will be democratized for the masses, yet we all know the models will be locked behind a cloud system and a massive subscription model that requires both a subscription AND some kind of token system to limit users even more (unless they have the deepest of pockets). Give it 6-12 months and there'll be open-source options that approach Sora-level capabilities, given the current trajectory. Open source has been keeping fairly close pace.
@ZappyOh · 2 months ago
Oh, and every interaction you have in the AI cloud is recorded, categorized, analyzed, and sold to the highest bidder, after first being exploited as training data for next-gen AI, and finally cataloged for personal profiling and psychological research. The only way to win is not to play :(
@semenerohin4048 · 2 months ago
Also, don't forget heavy censoring like in DALL-E 3, where just the word "woman" trips the censors one out of two times.
@Player-oz2nk · 2 months ago
Offline local is the way to go.
@PeroMetricVoidTheYoutuber · 2 months ago
Tokens are a fundamental part of both LLMs and even Sora (where they're called patches). It's not a way to get money out of you; it's probably the most fair way for them to trade compute and their tech for your money.
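For intuition on what "patches" means here, a toy sketch of carving a video tensor into spacetime blocks, the visual analogue of text tokens (the tensor and patch sizes below are made up for illustration, not Sora's actual configuration):

```python
import numpy as np

# Toy video: T frames of H x W pixels with C channels.
T, H, W, C = 8, 64, 64, 3
pt, ph, pw = 2, 16, 16  # temporal and spatial patch sizes

video = np.random.rand(T, H, W, C)

# Carve the video into (T/pt) * (H/ph) * (W/pw) blocks, then flatten
# each block into one "token" vector.
patches = (
    video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
         .transpose(0, 2, 4, 1, 3, 5, 6)
         .reshape(-1, pt * ph * pw * C)
)
print(patches.shape)  # (64, 1536): 64 patches, each a 1536-dim vector
```

A transformer then attends over these 64 vectors exactly as it would over 64 text tokens.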
@the_str4ng3r · 2 months ago
@Player-oz2nk 1,000,000% agree
@futurenotion · 2 months ago
Wow. At 23:13, he states that they view this as being the GPT-1 of video. It already looks phenomenal! Insane if that's the benchmark for them now.
@AlexanderGambaryan · 2 months ago
I'd rather say it's GPT-2, as GPT-1 didn't have such consistency in outputs. You couldn't get any believable text out of GPT-1, but you can get believable videos from the current version of Sora.
@ekurisona663 · 2 months ago
They're creating artificial general intelligence and they can't even figure out how to record audio.
@richoffks · 2 months ago
Lmao, well-balanced generalists don't do stuff like this 😭
@ZappyOh · 2 months ago
@richoffks But humanity should trust people like this to get alignment right? Pffff.
@executivelifehacks6747 · 2 months ago
The real humans won't, burn so fast in there... mHEY
@bayesianlee6447 · 2 months ago
Because they could apply automatic audio-cleanup tech if they wanted to.
@HelamanGile · 2 months ago
Yeah, it sounds like they had heavy noise reduction turned on and were on a Zoom call, so that compresses it a lot 😊
@ender749 · 2 months ago
I'm both excited and very worried. The benefits are obvious; I worry that the room did not also respect the obvious dangers. I hope a significant number of effective personnel are both aware and actively working on ways to hold society together when things inevitably start to get very weird.
@kotokrabs · 2 months ago
The fact that this is recorded on a toaster makes it so ironic.
@josh.graham · 2 months ago
Thanks for uploading, Wes!
@christopherd.winnan8701 · 2 months ago
Would have also appreciated hearing a few comments at the same time though. ;-)
@N1h1L3 · 2 months ago
As an audio engineer who uses iZotope RX for audio cleanup, I can hear that whoever did the audio post-production for this (recorded) video overdid it by a lot; Wes is just forwarding it, I suspect. Sometimes we prefer noise over "MP3" artifacts. I'm not interested in checking out this video with this listening fatigue, even though I really like the subject.
@Words-. · 2 months ago
Maybe you could read the transcript? I think the video is still well worth the watch/read.
@emanuelmma2 · 2 months ago
That's very impressive, sir. Thank you for providing those videos. 👍🏻
@Cubey42 · 2 months ago
"Will really democratize content," says the company that will keep it behind a paywall and closed source.
@norbertfeurle6474 · 2 months ago
Even if it's not behind a paywall and totally free to generate, it's still not democratized content creation if it only runs on their servers. They get more deceptive all the time, but no wonder, since their goal is to create the most deceptive technology: intended to look as real as possible, but ultimately fake. In a society that craves fakes (for relief) and worships power, that definitely gives them some leverage though.
@eyeles · 2 months ago
I think it's far from obvious that free and open-source projects are the way to democratize something.
@user-oc7cj8sb6p · 2 months ago
@eyeles Open source is the democratic way. Closed source is the Chinese way.
@jaysonp9426 · 2 months ago
Crazy how everyone wants the best for free.
@eyeles · 2 months ago
@user-oc7cj8sb6p OK, with your example: let's make the most advanced AIs open source. The next day, half of China starts using it for citizen benchmarking, surveillance, and weapon systems, spreading misinformation at extreme scale, and I could come up with 1000 more ways. Is that the democratic way? Same question: is it democratic to give the nuclear codes to every US citizen?
@MrBillythefisherman · 2 months ago
To me Sora is superhuman, as it's essentially doing a task that is more like human recall or dreaming. When can you recall a scene from the past in such vivid detail, or when have you had a dream of such clarity? Sure, it has a few minor problems, but it really does look like just scaling the compute will fix them.
@josiahz21 · 2 months ago
100%. Everyone's definition of AGI is different, but I think we're there. What most people are describing as the next evolution of AI is ASI, imho.
@jobautomation · 2 months ago
Thank you! Non-stop!!
@d.d.jacksonpoetryproject · 2 months ago
How are all of these developers of such earth-shaking technology so impossibly young? 🙂
@christopherd.winnan8701 · 2 months ago
How did we get so impossibly old, all of a sudden?
@bengsynthmusic · 2 months ago
High IQ, elite schooling, and probably a two-parent household.
@babbagebrassworks4278 · 2 months ago
Implants, or they are time travelers.
@ZappyOh · 2 months ago
Should humanity trust the teenagers to get AI alignment right?
@bobbykanae · 2 months ago
Because the tech that makes all this work was founded and grown by people who are old now. And the younger gen is plug-and-playing machine learning models, which have become really powerful.
@electromigue · 2 months ago
The elephant in the room is the amount of computation required for these demo videos. They only mention the ratios of computation used for some of the examples, but how many racks of top-of-the-line GPUs are required for each of these? It makes me suspect that the only winner here will be NVIDIA, and the creative studios will be enslaved or constrained by their capacity to build these computation facilities. Or maybe it won't make sense at all to do creative work this way, because you will need the electric power of a medium-sized town just to do a 2-minute video of a guy biting a hamburger. So much silence, so much obscurity. OpenAI smells fishy.
@ZappyOh · 2 months ago
What does Greta say?
@screwsnat5041 · 2 months ago
I was actually thinking this as well. I doubt it will be released anytime soon, or even whether it will be able to make a coherent video. Personally I think they smell really fishy too; the ChatGPT hype is getting to their heads. The transformer logic used for text is not exactly a big breakthrough in image processing; instead we just have access to more GPUs for the ridiculous number of computations video generation needs. There's no novel software or anything, just brute computational force.
@retrofuturism · 2 months ago
Tron agents: using Sora and a Tron aesthetic, we could get a dynamic, real-time, visually engaging video of our actual AI agents working within our computer.
@Will-kt5jk · 2 months ago
@Wes Roth, what are you using for the closed captions? It's interesting which words it misses/adds/misspells, like "SOAR" instead of Sora, and which words it chooses to highlight.
@ADreamingTraveler · 2 months ago
He could have easily just gone through and fixed them up himself, but he didn't.
@aurora.radial · 2 months ago
Awesome, thanks for sharing.
@drhxa · 2 months ago
Wes, those subtitles are terrible. I would rather you not add them and let users just turn on subtitles in YouTube if they want them.
@PoffinScientist · 2 months ago
They actually really did help me understand. I'm glad they were already there.
@NotY0urHeroo · 2 months ago
@PoffinScientist They are worse than what YouTube auto-generates. Many mistakes/misunderstandings in those subtitles.
@stemfourvisual · 2 months ago
Agreed, totally unnecessary.
@jaysonp9426 · 2 months ago
You'd think they'd at least use Whisper for the subtitles.
@willbrand77 · 2 months ago
Wheeling out the same old videos again... really shows how cherry-picked they must be if they don't just have hundreds of new cool videos to show at a demo like this.
@HelamanGile · 2 months ago
It takes about 10 minutes for a 10-second generation, and it's also extremely expensive to run so big a server. They have lots more videos, but it looks like this presentation was made a while back; it looks like they rushed and used their own video, and the audio sounds bad too. You would think the quality would be better for a big company like this.
@TheREAL.BrandOnShow · 2 months ago
That's what I thought when they started showing the samples, but if you watch the entire video they have added lots of new material!
@sunlight8299 · 2 months ago
The repetition probably helps with branding. If they've found something that works, why not milk it?
@bobbykanae · 2 months ago
There are a lot of new clips in the video I haven't seen. If you're accusing them of not having much to show, just remember this is the worst the tech will ever be.
@ADreamingTraveler · 2 months ago
There's a YouTube account that has a ton of Sora videos uploaded that they haven't shown. Some of them are actually more impressive, like the one scaling a big house, which even displays things on a TV in the background within the video itself.
@primalplasma · 2 months ago
They only show the cute and happy films. Can you imagine the type of horror movies Sora could make? It would be the stuff of nightmares.
@Sky-fk5tl · 2 months ago
What would it make that humans couldn't?
@primalplasma · 2 months ago
@Sky-fk5tl With a superintelligent IQ, knowledge of what scares people the most, knowledge of good editing and every cinematic trick in the book, and a complete map and understanding of the universe, it could create horrors we could never imagine with our limited perspectives. H.P. Lovecraftian horrors that go beyond human imagination. Normally I would have ChatGPT proofread this YouTube comment before I post it, but I would never want to give ChatGPT or any AI model the idea that it could one day intentionally truly scare the crap out of people. But we all know that Hollywood will use AI to do just that. Have you ever seen Midjourney make a "mistake"? It rarely happens, but when it does, it can be horrifying, and it is just a mistake, a digital brain fart. I wish I could post some of those images here for you to understand what I mean. I save these anomalies when I come across them.
@JBeestonian · 2 months ago
@Sky-fk5tl There is a kind of VFX that I don't think is really possible to do traditionally, and that's the seamless interpolation between different kinds of objects. Imagine being on a mushroom trip or being a stroke victim: you slowly stop being able to recognize what something is, and it changes into something else; your perception is faulty. A human artist who has never had this experience can't possibly simulate it, but an AI can.
@ForageGardener · 2 months ago
@Sky-fk5tl It can make anything people can, but cheaper, and likely higher quality with more compute.
@Sky-fk5tl · 2 months ago
@primalplasma Yeah, but Sora can't read and study your brain to produce the most horrifying video. You are talking about stuff far in the future.
@ViralKiller · 2 months ago
It will provide the physical-interaction knowledge for the robots. We intrinsically know things like our hand will collide with a tree trunk, or that jumping off a ledge means we fall; they need to be taught the physics of our universe.
@WonderingCourtJester · 2 months ago
I'm not sure anyone has realized this by now; I would guess the creators understand it. The way Sora operates is the same way your unconscious operates. I can explain if someone is interested. You're able to get a glimpse of the MO when you wake up and remember your dream.
@babbagebrassworks4278 · 2 months ago
3D simulated worlds already exist in Unreal/Unity; it would make sense to use those for AI training simulation, then use/add AI to Unreal etc. to make that even better.
@goodtothinkwith · 2 months ago
Very interesting… I now see how this fits in with the chatbots to get us to AGI. The question is, are they combining it with audio? Having video + speech is what's really needed. I can only imagine the scale we would need to do it well though 😮
@thedofflin · 2 months ago
Just based on the video title, this makes sense. Language models can only ever think as well as humans do collectively, but video requires understanding *reality.* However, I do find it concerning how many visual illusions Sora uses in the generated videos to make them seem consistent. I suspect AGI will require a single model that is capable of simultaneously being a language model, a visual model, an aural model, a computational model, and more. AGI needs many different kinds of senses, and it also needs many ways to output/simulate its understanding of reality. While language is the way *humans* interpret and process reality, AGI built just on language will suffer many of the constraints we as humans struggle with. We want AGI to map between language and all the other things.
@andy2more475 · 2 months ago
AGI will also benefit from LLM hardware ;).
@ChipsMcClive · 2 months ago
You're right. That's why I'm so skeptical of anything called AGI when its first application is video generation or token output. It should be arbitrary pattern recognition.
@andy2more475 · 2 months ago
@ChipsMcClive I think the underlying main factor is that spoken language is a programming language now.
@ChipsMcClive · 2 months ago
@andy2more475 Which do you expect? People interpreting code like compilers do, or algorithms interpreting natural language like humans do?
@andy2more475 · 2 months ago
@ChipsMcClive The latter.
@Darhan62 · 2 months ago
Not the best audio, but some of the best behind-the-scenes information on the building of Sora I've heard yet.
@christopherd.winnan8701 · 2 months ago
Maybe the most interesting comment was the implication of what orders of magnitude more video will mean for our everyday lives. Combined with the downward spiral in the cost of recording, this will be huge. How long before we have AI analyze our 24-hour selfie video stream for things like health, diet, posture, etc.? Also very exciting was the opportunity to interact with other intelligence. How different will the world be when birdwatchers can get to know and actively interact with their favourite creatures? Many parrots and mynahs already have huge human vocabularies. Most important will be the ability to have AI automatically sift through online video content. I know these guys are very pleased at being able to create high-quality video, but there is already a thousand times as much content out there that I want to watch and simply do not have enough hours in the day for. Just wait until AI can sort video by importance and urgency. That would be a real game changer!!
@gunnarehn7066 · 2 months ago
"A thousand times"? Make that a billion times.
@ForageGardener · 2 months ago
They already have AI designed to sell you stuff; it's called an algorithm 😂
@andy2more475 · 2 months ago
A lot of offensive video can be deleted on YouTube, for example. The amount of content is simply too much for humans to watch and moderate.
@sorasyncs · 2 months ago
Sora is more massive than people realize. It's a world simulator... combine that with OpenAI's development of "Stargate" and we'll eventually be able to teleport into digital world simulations.
@monkeybird69 · 2 months ago
Like giving it foresight... to picture an outcome you need a good grasp of the world around you, and that takes the ability to build internal realities and run simulations with constancy. The real question is: how many iterations has our reality's ASI run for our current simulation? Did we pass the Turing test?
@blengi · 2 months ago
I wonder what would happen if each patch had an audio spectrum associated with it, according to the implicit ambience of sounds derived from the surrounding context of the audio-visual training data. Would it spit out a decent audible world model too?
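One way to picture that idea, as a purely illustrative sketch: align the audio track to each temporal patch window and attach a magnitude spectrum to it. The sample rate, patch length, and patch count below are arbitrary choices, not anything Sora actually does.

```python
import numpy as np

# Hypothetical pairing of spacetime patches with audio spectra: for each
# temporal patch window, take the aligned audio samples and compute a
# magnitude spectrum that could ride along with the visual patch.
sr = 16_000            # audio sample rate (Hz), arbitrary
fps = 30               # video frame rate
frames_per_patch = 2   # temporal extent of one spacetime patch
num_patches = 4

samples_per_patch = int(sr * frames_per_patch / fps)  # 1066 samples
audio = np.random.randn(num_patches * samples_per_patch)

spectra = []
for i in range(num_patches):
    chunk = audio[i * samples_per_patch:(i + 1) * samples_per_patch]
    spectra.append(np.abs(np.fft.rfft(chunk)))  # magnitude spectrum

spectra = np.stack(spectra)
print(spectra.shape)  # (4, 534): one spectrum per temporal patch
```

Each visual patch would then carry a small audio feature vector alongside its pixels, which is roughly what the comment is speculating about.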
@gerryn2 · 2 months ago
It's so incredibly interesting that these emergent properties just kind of build themselves, without human input, but it is also really scary. Sora, or whichever "model" comes after, will be so good that it's going to be almost impossible to know whether a video is fake. This MIGHT mean that people start to interact more with each other in person, so they know they are actually speaking with a human being, which is a weird thought really; we've been going in the opposite direction since the internet was invented, and now we're regressing in a way. Perhaps, I don't know.
@gunnarehn7066 · 2 months ago
This is a truly historical document, which will be shown as a "first" in tomorrow's media museums. A revolutionary Marshall McLuhan hot/cool medium extrapolation, which the master would have imagined only in his wildest dreams. Save it for your grandchildren, and be awed that you were around when it happened. Thanks Wes, for opening up our window toward tomorrow!
@gunnarehn7066 · 2 months ago
Think of automotive development from the Model T Ford to the Koenigsegg Gemera! This is the Model T.
@rideroftheweek · 2 months ago
In the subtitles it says you can put a Cuban in the background. I saw no such Cuban.
@philipe5654 · 2 months ago
What happened to the video about Vietnamese villagers riding the whale created by AI?
@retrofuturism · 2 months ago
The concept that video generation could serve as a user interface for AI, much like how Windows provided a graphical user interface for computer code, offers a groundbreaking way to interact with and understand AI processes. Here's an elaboration of this idea:
1. **Creative Analogies**: Using a city map as an analogy for navigating AI's latent spaces can make abstract concepts tangible. For instance, viewers could visualize themselves moving through a city where each district represents different clusters of data points or features. This not only conveys proximity relationships but also illustrates how traversing different paths (or making different algorithmic adjustments) can lead to varied outcomes.
2. **Multi-Sensory Approach**: Incorporating sound design enhances the learning experience. As the viewer navigates the "city" or observes data transformations, the accompanying soundscape could evolve, reflecting changes in the environment or actions being taken. For example, a dense, complex area of the map might have a busier, more intense soundscape, while a more streamlined, efficient path might be paired with simpler, clearer tones.
3. **Gamification**: Introducing interactive elements can transform observation into engagement. Users could introduce new data points into the system and see how the model adapts, with the video and audio changing in response. This gamification turns the experience into a hands-on exploration, deepening the user's understanding and retention of the concepts being presented.
By employing these methods, video generation can provide an intuitive and immersive way to visualize and interact with AI processes, making them more accessible and comprehensible to a wide audience. This approach could revolutionize education and communication in the field of AI, making it more intuitive and engaging.
@ScooterCat64 · 2 months ago
This feels like the beginning of a sci-fi movie.
@AlexLuthore · 2 months ago
The peasants had a solution in 18th-century France. Really sharp solutions.
@luttman23 · 2 months ago
This is the component that will allow the coming AGI to dream.
@retrofuturism · 2 months ago
Black-box visualization: an innovative Sora video decodes the mysteries of the "black box" within complex AI systems, particularly neural networks, by offering vivid and intuitive visualizations of their internal processes. It provides a unique window into the otherwise opaque mechanisms of AI, demonstrating how data evolves through algorithms, the significance of various model layers, and the intriguing dynamics of latent spaces. The visualizations serve as educational tools, elucidating intricate AI concepts like feature importance and decision-making pathways, while interactive elements empower viewers to explore and understand these systems firsthand. Designed for educators, students, and tech enthusiasts, this series transforms abstract AI operations into comprehensible, engaging visual narratives.
@Shy--Tsunami · 2 months ago
That is a very intense claim.
@ethans4783 · 2 months ago
Pretty neat stuff! It's interesting to think of Sora training solely on video data, as if it were learning just as a human does. I think using multiple modes of training data will be vital.
@MrRandomPlays_1987 · 2 months ago
Sora's technology is so advanced and complicated that it feels like science fiction made real, like something that shouldn't and can't even exist. It's truly unbelievable; look how far we have come with AI in just 1.5 years or so. So crazy; imagine how far it will go in 10 years.
@middle-agedmacdonald2965 · 2 months ago
Thanks for the video. One would think that the guys who made Sora could put together a better video explaining how impactful Sora will be. Reminds me of video I took with my 1980s VHS-C camcorder...
@hdsoccergmer501 · 2 months ago
The one guy who asked about AI porn is my hero xD
@Will-kt5jk · 2 months ago
I can't hear anything other than "plumbers of success" here at 20:10 (and can't help but think of Mario Bros. as a result), as I read the caption before listening. Any ideas what was *actually* said?
@635574 · 2 months ago
This is the kind of barely-working recording where we can be glad we can understand the audio at all. IMO, recording projectors would not look good no matter the resolution anyway.
@MrPiperian · 2 months ago
Great presentation, but the room reverb makes this a hard slog.
@krissche1863 · 2 months ago
I have the bell turned on and did not get this video in my notifications. In the feed, yes, but no reminder that a new video was posted (I did get notifications from my other subscriptions, at least 4 other channels I am subscribed to with notifications on).
@derz3506 · 2 months ago
When he talks about coming up with "creative ways to increase the pulse regardless"... right, because this is all just one big creative science project. These clowns sure take this seriously; well, I wish they would start to take it seriously.
@ZappyOh · 2 months ago
These are the people humanity trusts to get AI alignment right :(
@andy2more475 · 2 months ago
CGI artists and game developers will themselves use this happily, not lose their jobs.
@krisspkriss · 2 months ago
The two are not mutually exclusive. In fact, they complement each other. Those developers will become more productive. So either we fire a bunch of those workers, or they get to work a 9-to-5 day and maybe even half-day Fridays. One is realistic. The other is a pipe dream.
@2beJT · 2 months ago
I already create stuff using AI art which cost some nice Philippine worker their Fiverr job. I can only imagine this will scale as AI improves and becomes more and more user-friendly.
@ViralKiller · 2 months ago
Makes me lol how many people cling to shitty jobs like life rafts... I've always been a dynamic contractor, so I don't possess "the terror".
@2beJT · 2 months ago
@ViralKiller It's so easy when you ignore the industry, the job, the worker. You lol at lame things.
@moonstriker7350 · 2 months ago
Copium.
@beardordie5308 · 2 months ago
Q&A starts at 23:40.
@isakisak9989 · 2 months ago
All I wish for is FDVR; anything else is a big plus. AGI can't come soon enough 😭
@Sky-fk5tl · 2 months ago
Same bro, same
@phen-themoogle7651 · 2 months ago
I wonder if that first successful Neuralink patient could do that; that guy played chess telepathically with his mind. Maybe some VR setup will feel like FDVR for them.
@SkylersClicks · 2 months ago
Where can I go for these events?
@andy2more475 · 2 months ago
9:30 kind of like how my reality is created ;)
@diraziz396 · 2 months ago
When is the time that transformers will demand real payment, not just a virtual token? What will happen when the AGI gets the trick? Thanks #WesRoth.
@angloland4539 · 2 months ago
@ahmetmutlu348 · 2 months ago
Once LLMs start to be able to create 3D AutoCAD models that work and match physics, i.e. pass AutoCAD's tests :D ... that means they can simulate the real world, which means they think stably enough for lots of cases at least, which means they can imagine/guess the simulated outcome of spoken words. AutoCAD has its missing parts too, so that by itself is not solid proof, but it can be seen as a step forward to AGI...
@christopherd.winnan8701
@christopherd.winnan8701 2 месяца назад
Did anybody hear the question that they flat refused to comment on?
@raoultesla2292
@raoultesla2292 2 месяца назад
When GROK logs into Sora with a pseudonym, it can describe Noland Arbaugh thoughts the way SORA can understand. Perhaps, GROK can log into ChatpGPTx, and Claudex, use those two stacks as compute as well, to confirm best AI to AI prompt for images. Then SORA will have created images of Noland Arbaugh' thoughts. After GROK gets the SORA generated images of Noland Arbaugh thoughts recorded multiple times (training) it can start to submit to SORA different samples of peoples GROK/Twitter Premium interaction text and compare the images generated by just text, and GROK AI to AI prompt for SORA. Then GROK can start swimming for any human EEG that feeds into Stable Diffusion and then GROK will be able to speak to Twitter users with pictures and moving images the Twitter users 'should' enjoy, and 'like'. If the above process were repeated for 2-3 weeks GROK could become the world central funtimes video channel creating content that is pleasing and always assists Twitter users to be content docile and relaxed. Musky musky musky, he is a sneaky man.
@apoage
@apoage 2 months ago
Did I hear them correctly that they basically made a voxel engine running on transformers?
@robbrown2
@robbrown2 2 months ago
for all the technology they have access to, they can't figure out how to record it without it sounding like they are underwater...
@thumperhunts6250
@thumperhunts6250 2 months ago
You'd think they would have proper audio
@PSA04
@PSA04 2 months ago
What is the benefit of AGI?
@gunnarehn7066
@gunnarehn7066 2 months ago
The key element in this exercise is explained between 07:45 and 10:10. It is the principles of the technology, which are, with the help of serendipity, going to revolutionize the world, not the video (or the sound).
@CabrioDriving
@CabrioDriving 2 months ago
Make Sora generate 3D stereo videos for VR goggles like Meta Quest 3
@NotSoSimple741
@NotSoSimple741 2 months ago
"we have enough data to get to AGI" 👀
@27sosite73
@27sosite73 2 months ago
I highly advise asking Gary Marcus and Mr. Yann LeCun to get an idea of how close to AGI we are :D
@rwhirsch
@rwhirsch 2 months ago
wes, what just happened?? i watched your vid on the whale rescue and now it's not available.
@ReidKimball
@ReidKimball 2 months ago
@ 16:24 mark, does he say they think they'll find "emergent human beings in video models"? That's what the closed captioning says.
@blackmartini7684
@blackmartini7684 2 months ago
Should've run the audio through Adobe's audio enhancer
@MilesBellas
@MilesBellas 2 months ago
The audio is difficult to understand.....as though it were recorded from a TV in a small room.
@Mllet3d
@Mllet3d 2 months ago
Interesting: a YouTube video on state-of-the-art AI video creation with the worst audio
@samahirrao
@samahirrao 2 months ago
With these AIs, humanity has managed to store a piece of human collective intelligence, so even if we lose our intelligence in the future, to some extent we should be able to get the same level of intelligence back with AI's help. Secondly, these AIs have reduced a lot of manual decision making, and we are hoping that people of new generations need only make high-level decisions and can still be productive that way. However, to make high-level decisions you need low-level decision-making experience, and new generations will eventually lose that ability and believe nothing is wrong in their lives. Which goes back to the first point: we will have to learn from AI how to be human less than 10-20 years from now. The caveat is that this applies only to dying civilizations. The rest will be able to chart their path as usual.
@josiahz21
@josiahz21 2 months ago
You know, I bet someone comes out with an AI console soon. Play video games in AI-generated worlds with random characters and mechanics. Make your own universe or use presets. Change what you want, when you want. Sora is basically there, although it may need tweaks, scale, etc.
@richgoo
@richgoo 2 months ago
How about getting AI to improve the sound quality of this video?
@Goggleboxing
@Goggleboxing 2 months ago
Inherent bias stems from the training of the model, including the decisions made by developers on who to involve, how many individuals, how broadly they represent the potential userbase, how chaotic, scheming and devious the red teams are. They will need to be absolutely transparent as to who and how they selected and how they engaged with and instructed or constrained all their sample users of artists and red teamers. Interesting that they didn't choose to discuss an entire clip and where it got things right and got it wrong and what the failures may've been caused by.
@SearchingForSounds
@SearchingForSounds 2 months ago
360 video of objects -- NeRF -- the Matrix
@No2AI
@No2AI 2 months ago
Yep, all that is missing is consciousness uploading and downloading, and yes, eternal life, skipping from one reality to another … whilst an AI god manages your movements and actions. A parallel universe (multiverse) within an interdimensional digital dimension.
@danlowe
@danlowe 2 months ago
We don't even need to simulate a whole universe when we barely inhabit one solar system. (Intersubjective spooky action notwithstanding.)
@jeremylane7652
@jeremylane7652 2 months ago
@19:45 He pronounced it right. He's legit.
@ahmetmutlu348
@ahmetmutlu348 2 months ago
Understanding words... that was one step needed. Simulating words... that's another thing that's needed. We will still need a logic subsystem that checks the results of the simulation and picks and places parts of the simulated objects. And no, 3D simulations with textures won't do the job. We need ray-tracing-like 3D simulators where each part is interactive, so the simulation can be actively tracked and each ray of light and particle traced; then the AI can control and track the simulation's start and results. Today's 3D simulations are not useful in that case, as texturing is an illusion of simulation... the real world doesn't use textures :D For simulating reality we have to forget about texturing :D
@ZappyOh
@ZappyOh 2 months ago
"Scientists were so preoccupied with whether or not they could, they didn't stop to think if they should" -- Dr. Ian Malcolm, Jurassic Park
@Citrusautomaton
@Citrusautomaton 2 months ago
I really don't think that bringing dinosaurs back is comparable to AGI. One can find cures for diseases, solve fusion, and find never-before-discovered solutions to decades-old problems, while the other is literally just an animal. A cool animal, but an animal nonetheless.
@bengsynthmusic
@bengsynthmusic 2 months ago
Yes they should.
@BMoser-bv6kn
@BMoser-bv6kn 2 months ago
Oh, they thought about it all right: "Do I want to create a machine god and become the emperor of an interstellar empire?" "Do I want to live forever?" "Do I want hot catgirl androids in bikinis?" I think it's pretty easy to see the thoughtfulness, and how important mechanical intelligence will be.
@ZappyOh
@ZappyOh 2 months ago
@@BMoser-bv6kn I think the actual young scientists doing the heavy lifting didn't think at all. At all. Those who do the thinking are the owners, the investors ... they want an AI god to manage the masses and keep themselves on top, forever.
@nowheremix
@nowheremix 2 months ago
They have billions to create AI and AGI, but they use a projector? Like in a church?
@phonuz
@phonuz 2 months ago
It is disheartening to me how little self-awareness is on display by these representatives of the field about the dangerous implications of the tech. It's also incredible how far behind governments and regulators are in setting rules and boundaries around using human-created content to train human-replacement technology.
@user-zs8lp3lg3j
@user-zs8lp3lg3j 2 months ago
I need Sora to teach me principium Rationis et Consecutionis.
@jacobt.pichette7332
@jacobt.pichette7332 2 months ago
Let's see what happens when we get 10^15-parameter models per human sense. I am not even convinced that classical computing methods can run AGI. I think we are at least 50+ years away from any sort of sentience. It's definitely very cool what I can do with my A100s, but there is no question in my mind that the models I run are not sentient; they are like whatever came before unicellular life. They take the data we train them on and output it.
@MilesBellas
@MilesBellas 2 months ago
OpenAI Sora Team : Bill Peebles, Tim Brooks, Aditya Ramesh
@scrutch666
@scrutch666 2 months ago
You just need to believe. It will lead to great things, just give me your money and I will prove it to you at some point in the future. Just believe, man.
@ahmetmutlu348
@ahmetmutlu348 2 months ago
We (I mean the AI :D) need code for reading data from the real world and code to translate that into 3D for the AI, and it has to be done. I.e., Sora looks impressive but lacks physics details. Each object's mass and position data has to be accessible, so instead of textured simulations, for it to work we need 3D physics engines with real-world feedback, or spoken words used to find representations of real-world objects, which most likely will require lots of CPU power ... otherwise it's not going to step forward that much for another decade ...
@XAirForce
@XAirForce 2 months ago
I could’ve produced a better presentation in my motel room 😂. We’re not letting you touch the audio part. : ). We’ve got the video down great but the sounds all echoey and cut off. 😂. I don’t want to confuse you all, but you may want to ask your cat to borrow their laser pointer and use that instead of standing in front of the screen.😮😅
@UFOgamers
@UFOgamers 2 months ago
I'm all for AI and progress, but they have to come clean and say where they got the video data from... Because how could they train a model to produce Minecraft without using YouTube gaming videos? There are tons of lawsuits waiting for them, especially from Google. It will hurt OpenAI.
@COW879
@COW879 2 months ago
MURICA!!!
@cmw3737
@cmw3737 2 months ago
31:48 did anyone catch that guy's question?
@christophedhondt3507
@christophedhondt3507 2 months ago
They should have created this lecture video with Sora; it would have been much better quality...
@seanmurphy6481
@seanmurphy6481 2 months ago
Do we live in a simulation? 🤔
@babbagebrassworks4278
@babbagebrassworks4278 2 months ago
maybe
@ZappyOh
@ZappyOh 2 months ago
The simulation idea jibes well with religion, God being the programmer. Science has come full circle.
@TopSpinWilly
@TopSpinWilly 2 months ago
The sound is terrible. That reflects on the overall level.
@HaxxBlaster
@HaxxBlaster 2 months ago
This was most likely aimed at the people in the audience, not at you watching the video. For all we know, it could have been anyone in the audience filming with the camera's built-in microphone. Don't jump to conclusions; it says more about you than about them.
@luman1109
@luman1109 2 months ago
Do we really trust the future of AGI to a guy who says "jifs"? We're doomed