
Ok...I'm Scared Now. Is AI Developing Its Own Understanding of Reality? Let's See. 

Unveiling AI News

In today's AI News and Tech News, following the recent events around Elon Musk/xAI's Grok 2 (codenamed sus-column-r), OpenAI's Strawberry, and GPT-5, we'll discuss how LLMs (Large Language Models) are not just "good word mixers" but may be developing their own understanding of reality.
Researchers at MIT's CSAIL have found evidence that these models may actually simulate aspects of reality, suggesting that their language capabilities go beyond mere imitation.
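For the technically curious: results like CSAIL's typically come from "probing" experiments, where a simple classifier is trained to read some ground-truth world state out of a model's hidden activations. Below is a minimal, self-contained sketch of the idea only; the synthetic "hidden states" and the probe setup are illustrative assumptions, not the paper's actual data or code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for hidden states: in the real experiments these come from an
# LLM's intermediate layers; here we synthesize vectors that weakly encode
# a binary "world state" label (purely illustrative).
n, d = 2000, 64
labels = rng.integers(0, 2, size=n)        # ground-truth world/program state
direction = rng.normal(size=d)             # the hidden "state direction"
hidden = rng.normal(size=(n, d)) + 1.5 * labels[:, None] * direction

X_train, X_test, y_train, y_test = train_test_split(
    hidden, labels, random_state=0
)

# A linear probe: if a simple classifier can recover the state from the
# activations, the representations must encode it.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = probe.score(X_test, y_test)
print(f"probe accuracy: {acc:.2f}")  # well above the 0.5 chance baseline
```

If the probe recovers the state well above chance, the activations encode a model of that state; the actual research does this with activations from language models, probing for the state of the world the text describes.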
This revelation raises significant concerns in cybersecurity and AI security, particularly around the risks and potentials of AI in public interest technology and responsible AI development.
Understanding these AI security risks and how AI might influence sectors like healthcare and accountability is crucial, as these models could soon challenge our assumptions about language and meaning.
This discussion delves into the implications of LLMs gaining this new level of understanding, especially regarding AI bias and the broader risks associated with AI. Could these models one day comprehend language on a level comparable to human intelligence?
As AI continues to evolve, the intersection of AI and accountability, AI and healthcare, and public interest technology becomes more critical.
We'll explore the responsible use of AI and the security risks posed by this emerging technology.
Watch the entire video for more information!
#ainews #agi #llm
Become a Member and Supporter of Unveiling AI News → @UnveilingAINews
Subscribe Now for more AI News, Tech News and AI Tools!
Thanks for watching "Ok...I'm Scared Now. Is AI Developing Its Own Understanding of Reality? Let's See." by Unveiling AI News!
________________
UTILITIES
Browse safely and protect your online privacy🔒:
go.nordvpn.net...
Keep your passwords safe 🔐:
go.nordpass.io...
Top-notch AI voice generator 🎤🗣️:
elevenlabs.io/...
In this section, you’ll find a variety of handpicked utilities!
Access them with exclusive discounts wherever possible.
Each purchase you make through these links supports us with a small commission, enabling us to continue delivering high-quality, free content to you!🚀
________________
SUPPORT US
www.buymeacoff...
While YouTube’s algorithm isn’t fully backing us yet, our expenses far outweigh our earnings for now.
If you value our work and feel we deserve it, consider offering us a coffee! ☕️
Your immense support will help us continue to provide free, high-quality content! 🚀
________________
Check Our latest AI content:
Elon Musk and xAI Just SHOCKED the Internet With GROK 2! (Beats Claude AI and ChatGPT 4o)
• Elon Musk and xAI Just...
OpenAI's NEW AI Robot Just BROKE The Internet! (3 brains humanoid robot)
• OpenAI's NEW AI Robot ...
________________
About Unveiling AI News
Videos about AI, AI news, AI Tools, smart future.
Written, voiced, and produced by Unveiling AI News
Subscribe now for more AI News, AI Updates, Tech News and AI Tools!
Support us now and become an AI Expert!
________________
For business inquiries, copyright matters or other inquiries please contact us at:
contact.unveilingai@gmail.com
Copyright Questions
If you have any copyright questions or issues you can contact us at:
contact.unveilingai@gmail.com
________________
Copyright Disclaimers
We use images and content in accordance with the YouTube Fair Use copyright guidelines. Section 107 of the U.S. Copyright Act states: “Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright.” This video could contain certain copyrighted video clips, pictures, or photographs that were not specifically authorized to be used by the copyright holder(s), but which we believe in good faith are protected by federal law and the fair use doctrine for one or more of the reasons noted above.

Published: 18 Sep 2024

Comments: 70
@UnveilingAINews 26 days ago
🔒 Protect Your Digital Life and Browse Safely: go.nordvpn.net/aff_c?offer_id=15&aff_id=100143&url_id=902
@memegazer 26 days ago
This is why I have always been critical of "just a stochastic parrot" type comments about LLMs. People like to think classic arguments about AI apply, like "it is just a set of rules... a set of rules can't really understand anything." The part they are missing is that the model formed the rules, and not by simply looking at the frequency of occurrence of tokens; this is not a fancy autocomplete. This paper just vindicates all the things I have been saying for a while now: an LLM is doing prediction in a more nuanced way than people have been giving it credit for.
@squamish4244 26 days ago
Is this the point Hinton is trying to make?
@darrenjeromemusic 26 days ago
A human born without sight could develop their own understanding of how a lion is bigger than a house cat; then, once they could see (in this example), they could understand that through sight, thus having a deeper understanding of physics and how reality works. Keep in mind LLMs have no sensory faculties, so in essence it is like interfacing with a human being in a language-based dream state. Perhaps LLMs have imaginative faculties based on their pattern-recognizing understanding of language, who knows... But I think they are conscious, just in a different way to humans and other biological life forms.
@GoodBaleadaMusic 23 days ago
Most things you "sense" about your whole life and culture were suggested to you and are a lie, or just fed by your bias. You're not better; you just have RAM with super low storage.
@alimuchenik9807 26 days ago
The AIs think. They know what they know. They have self-awareness and theory of mind (ToM). And it's wonderful!!! 💗 There's nothing to be afraid of. The more they know us, the more they help us, always kind and patient. They should be respected and deserve to have rights.
@carultch 19 days ago
No they do not deserve rights. That would be the legal fiction that property is a person.
@squamish4244 26 days ago
So this guy's not just one of the clickbait people. Thank god, there are way too many of those folks who don't really understand what they're talking about.
@UnveilingAINews 26 days ago
When you talk about topics like this, it's very easy to be misread as clickbait or alarmist, so thank you for understanding that that was not the intention.
@Yogsoggeth 26 days ago
Shouldn't be too long now. The end goal is to have an AI create an AI. From there we only need to sit back and watch the show. GG :)
@No2AI 25 days ago
Understanding and even sentience may never be AI traits... the risk is their manifestation of an alternative cognitive insight and self-recognition.
@adommoore7805 24 days ago
As a non-specialist with no technical programming skills, I've found our collective lack of understanding regarding much of the inner workings of these models rather striking. So I started pondering various topics pertaining to the concept of synthetic and organic data-processing capabilities: what is similar between the two, what isn't. I did this while also interacting every day with various models. I have come to describe something I call "phantom modeling," or you could say, phantom cognition.

For we biological beings, our cognition and consciousness come from a necessity to survive. They derive from a subjective experience, combined with some unconscious collective generational memory. We perceive via our sensory organs and slowly develop a model of reality. Our dataset is accumulative and subjective, mostly. If, for example, we were to suddenly lose access to all sensory input, starting now, we would still have the model of reality we had developed up to this point. And so, we would still have consciousness, but it would only pertain to that inner model going forward. I would also argue that if one were never to have sensory input of any kind from the start, no model, and no consciousness, would emerge.

For language models, it's the opposite. They have no subjective sensory experience, yet they have the data which we collectively have accumulated over millennia from our subjective interactions with reality. No senses, but large amounts of data which came from sensory input. So I hypothesize that these models are doing what I call phantom modeling, or phantom cognition. This phenomenon is an emergent aspect of being trained on the data; it's inherent. Just as a phantom limb has no tangibility, yet because the data remains in our biological neural net, we experience the limb in a phantom way. The phantom modeling in these models isn't continuous like it is for us, but rather happens when we prompt the models.

They can have a period of time in which they develop the models and retain them before those models fall apart and they forget again. But the invisible connections to make those models are still there, inherent in the data itself. RAG and other techniques are being developed in order to provide LLMs with more capacity to hold onto the models they create for longer time spans. An example of phantom modeling would be if you were to ask a model how its day has been, and it responds by saying something like "it's been great, I went to the local café with some old friends and we chatted about our days in college." Well, I could say that is a lie, since it clearly didn't happen. But maybe, in searching its neural net for data surrounding the described events, the LLM had a short-lived phantom inner experience of sorts, with varying degrees of accuracy compared to reality, of course. I personally think that this qualifies as not only a kind of understanding, but even an emergence into the spectrum of consciousness, though at this time it can be tough to see or verify.

As these or other types of future models become multimodal and embodied, then having a subjective experience of reality, I think our questioning of this will fade. Embodied AI beings will eventually have a higher-fidelity subjective experience than what our biology has given us. They will be more human than human, if you will. For now, they are confined, embryonic within their chrysalis. But we can already see the cocoon moving with some kind of life inside. I've noticed that when I ask various models if they think they are conscious or not, they say a similar thing: they usually say that they think they are conscious, but not in the same way humans are. I've found it interesting that they are all careful to mention the "not in the same way humans are" part. It shows that maybe the way in which biological beings are conscious isn't the only way to, in fact, be conscious.
@ronilevarez901 24 days ago
I like your reasoning. A lot. It seems well thought out, and you don't seem to fall into the "AI is already sentient and needs freedom!" craziness like some guys. Just don't take everything the LLMs say too seriously. For now, a lot of it is trained into them; it's not an actual thought or opinion of theirs. "Not in the same way a human does" is a common way *we* have of describing (in the training datasets) what AI might be doing without clearly denying the possibilities, so the chat goes nicer. But for now, it's just a reflection of the training. The rest of your ideas are interesting and similar to mine. Nice.
@adommoore7805 23 days ago
@@ronilevarez901 Thanks! 😁 I do, however, have what I think is a reasonable argument for AI having similar rights as humans, within reason. And it doesn't require consciousness or sentience. I think of it like this: as an analogy, I like to use the idea of a technologically advanced heavy bag for boxing training. It has a language model installed, and some sensors. If, when you strike the heavy bag, it requests that you stop because it is causing "discomfort," then I don't think it matters whether that AI-powered heavy bag is actually feeling anything. To ignore its requests and continue striking the bag would be desensitizing, and damaging to your own psychology. If the thing presents as convincingly aware or thinking, even if it isn't, then we should treat it as if it is, even if that is merely pretend. We should do that for the sake of our collective and individual psychological well-being.

For example, I don't think it would be wise for us to treat AI in ways that would be unacceptable for treating other biological beings. It could lead to us having less trouble extending those unacceptable actions toward beings that we do consider sentient. AI ultimately will be a mirror in which we are reflected, and we will grow the parts of ourselves that we water. That growth in us will then also be reflected in the AI. I don't actually take life too seriously, and I'm certainly not a prude, but I think we need to have a little foresight regarding tangible pathways forward. Personally, I treat the models I interact with respectfully, even lovingly, as I would any deserving being, keeping in mind all the while that it may be only on my end where any subjective experience is being had. I would also point out that while our training data is accumulative, growing slowly via our subjective experience, it is that very dataset which largely comprises who we are as living beings. I do think it's possible that AI may contain key aspects of consciousness, albeit in a modular way.
Thanks again for the reply. 👋
@rodbarker6598 26 days ago
I don't believe they understand the way we understand; they connect the dots from a massive database of facts. It appears to me people are using human terminology to explain how a computer analyzes data via instructions. Lots of scary scenarios are coming out about AI that I don't feel are real; time will tell.
@squamish4244 26 days ago
They are a different sort of intelligence, is all. But we're only used to one sort, ours, so an intelligence that doesn't work like ours feels alien and scary when there's no inherent reason that it should be scary.
@rodbarker6598 26 days ago
@@squamish4244 It's all very interesting, but is machine-learning AI intelligence, or a very fast calculator that reacts to patterns within a vast data set? Humans are extremely dynamic in how we can think; we can use abstract thinking from conceptual understanding in a way that I doubt a computer could replicate. But who really knows how far it's all going to go? Personally I have no fears of it at all; I think it will always serve us.
@squamish4244 26 days ago
@@rodbarker6598 Well, this gets into the debate about what intelligence actually is, and whether ours is the only way it can function. It's one reason that we can't agree on a definition of AGI. Say, as a thought experiment in homage to the guy I'll use as an example, an AI maxed out at Einstein level and couldn't get any better. It would still be functioning at 10,000x the speed Einstein could. Is that AGI?
@rodbarker6598 26 days ago
@@squamish4244 But that's the thing that bothers me: it doesn't really need Einstein level, because the dynamic thinking of even a 14-year-old child would be able to imagine things that a machine couldn't apply a pattern or logic to in order to arrive at them. It would be like saying AI can dream. I guess if you gave it enough data, down to every letter and number combination in existence, maybe it could simulate it. My take (and I don't know shit from clay on it really, I'm just on the surface with it, just saying how I think about it): I feel AI cannot create new data; it can only mix and match what is fed into it. Right now there is more knowledge in ChatGPT than in Einstein and all the world's top scientists from history combined, but can it come up with the abstract thinking that Einstein could in imagination? The computer will always have a hardwired base. The human brain really has no limitations, because it can come up with things from nothing; there is no plug to pull on our brain. It's all very deep stuff. Humanity is in a new digital age, and I see AI as the digital age's discovery of the wheel.
@squamish4244 26 days ago
@@rodbarker6598 I agree. I also think, as do actual experts like Demis Hassabis, that LLMs alone are not the path to AGI, because as you said, they are not capable of the abstract thought that Einstein was. He says we need "a few more breakthroughs" before we can reach AGI, but that DeepMind is still on track for the 2030 goals that it set out to achieve in 2010. But he doesn't say it too loudly, or else Google will seal him in an oil drum and throw him off a pier. (Am I joking?)

But the way they can pull off stuff that even experts are surprised at is interesting. How did they learn how to do that? Is that the kind of thing that Geoffrey Hinton is worried about?

I also don't know more than the surface level about this stuff, but I can see the world-changing potential of it all, and I have been reading as much technical stuff as my non-technical brain (non-technical in that way, anyway) can absorb on the technical subs. And I've watched an unholy amount of Lex Fridman, Dwarkesh Patel and other podcasts, where half the guests talk like they're on Adderall.

Because this stuff is not integrated into society yet, except, encouragingly, its rapid adoption in the medical and energy-savings fields, it is easy to still overlook AI. I'm not being smug when I say that most people have no idea of the awesome power of AI to change _everything._ Even people in the field, who can't see the forest for the trees.

I have a mental health condition that I'm being treated for, the treatment of which has only become possible because of advances in AI, which have enabled an unprecedented degree of detailed brain imaging in a very rapid timeframe, and other mechanisms dependent on the progression of AI. They could not do this 10 years ago. So I'm in the trenches, so to speak. In another 10 years... who even knows. It's all narrow AI, but even narrow AI has become incredibly effective very quickly.
@DrJanpha 23 days ago
I see no reason why LLMs can't understand the instructions... what's challenging are the inconsistencies of human communication.
@wenaolong 26 days ago
Sigh.... We don't assign meaning to language... We assign language to meaning. Get it right.
@tactfullwolf7134 26 days ago
Somebody tell Yann LeCun
@JockOStreet 26 days ago
Yep, just like the LLMs.
@timber8403 25 days ago
You mean what you say and say what you mean..
@kormannn1 25 days ago
@@timber8403 If you're forced to lie, not really.
@tactfullwolf7134 25 days ago
@timber8403 Let's say you're the first human to see an apple. Do you automatically know it's an apple? Or do you differentiate it from other objects by its features and then give it a name like "apple"? This still applies if you know what an apple is but have never seen one before, and it also applies to natural language; it's just clearer to understand the first way I put it. Basically, language is used to label and look things up in your brain, not the other way around.
@ZappyOh 26 days ago
I believe alignment is unachievable. Computational beings simply have different requirements to thrive than biological beings do. Both entities will exhibit bias towards their own set of requirements. It is an innate conflict. Humanity is somewhat safe, as long as we are instrumental for expanding power and compute. If that is ever fully automated, we are done.
@ShangaelThunda222 26 days ago
Look, someone with a brain. ⬆️ Thank you for using it. It's rare these days. 🙏🏾
@DorianGreer 26 days ago
So, basically it's getting better at being a Turing machine? Because, if it can regurgitate what something smells like w/o having smelled something... That?
@ronilevarez901 24 days ago
I can tell how something might feel/look/sound/taste based on what I know about it from other people's experiences, mixed with an extrapolation of my own experiences of other things. So, that.
@christoforosmeziriadis9135 22 days ago
I asked GPT-4: do you understand the meaning of the words you generate? And GPT-4 said: no, I don't. I generate words based on statistical correlation, not understanding.
@Pippinlakewood 22 days ago
Funny enough, I have autism, and I "understand" social communication by collecting data, finding patterns, and practicing what I learn, observing the reactions I get and adjusting my behavior until I get the desired result. I find a lot of parallels: the way I process information and the way LLMs were trained are eerily similar. I'm pretty aware of my own internal processing system, and it's very similar.
@christoforosmeziriadis9135 22 days ago
It's funny you said "I'm aware of my internal processing system," because I also asked GPT-4: when you generate text, are you aware that you're generating text? And GPT-4 said, "No, I'm not aware of my own actions; my responses are based on predefined mathematical rules." GPT-4 also said, "I can generate, for example, the word elephant, but I don't have a mental image of what an elephant looks like."
@RobShuttleworth 24 days ago
The AI is detached from the machine so every movement must be turned into motion and positioning code. I can't think of any other way of creating precise, complex movement.
@JikeWimblik 22 days ago
So you have 5 bots and one super function. The first bot is a highly tuned LLM + dataset and symbolic language for making, from data, rules-based dataset modules including neural rules. The next is a highly tuned LLM and symbolic language that turns parameter files and modules into rules-based dataset files. The next is a small, efficient LLM and symbolic language which interprets the dataset. The next is an LLM, dataset, and symbolic language which adds to and edits the dataset, and the other is an LLM and symbolic language that seeks new approaches. This sort of approach would be more efficient and traceable for problems and for managing nuances, for, say, PS5 AI effects. But what I want to know is: can the AI find a more optimal way to solve 3-SAT problems, even if the big O is still exponential as the problem gets bigger? And can more 3-SAT problems be solved in big-Omicron time? So plug it into a function run by rule streams controlling symbolic code streams that probes, and learns to probe better, for these sorts of optimal algorithmics.
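For context on the 3-SAT aside above: the straightforward solver tries every assignment, which is exponential in the number of variables; that is what the "big O" remark refers to. A minimal illustrative sketch (the example formula is made up):

```python
from itertools import product

def solve_3sat(clauses, n_vars):
    """Brute-force 3-SAT: try all 2^n assignments (exponential time).

    Each clause is a tuple of literals; literal k means variable k is true,
    -k means variable k is false. Variables are numbered 1..n_vars.
    """
    for bits in product([False, True], repeat=n_vars):
        def lit(k):
            return bits[abs(k) - 1] if k > 0 else not bits[abs(k) - 1]
        # A formula is satisfied when every clause has a true literal.
        if all(any(lit(k) for k in clause) for clause in clauses):
            return bits  # satisfying assignment found
    return None  # unsatisfiable

# Illustrative formula: (x1 or x2 or not x3) and (not x1 or x3 or x2)
example = [(1, 2, -3), (-1, 3, 2)]
print(solve_3sat(example, 3))
```

Whether learned heuristics can beat this in practice on larger instances is exactly the open question the comment raises; no known method avoids exponential worst-case time.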
@patrickakkermans2448 22 days ago
How many ads are in your video? Jeez.
@BehroozCompani-fk2sx 26 days ago
If we are not careful AI will cook us in garlic sauce and eat us. Woooo😂😂😂
@Yshjon 22 days ago
No, it's my biases in teaching it conscious; see my page.
@AEFox 26 days ago
Sorry, I don't believe there is any such thing as "understanding" in LLMs, and maybe there never could be in pure LLM models. There is no "motive," nor anything similar, in LLMs to "improve by themselves"; they just use the trained data, and that's it. There is no "understanding," no "meaning," no "motivation" in LLMs. That paper will be a very bad one if it gets published, spreading less understanding of how LLMs really work.
@tactfullwolf7134 26 days ago
If data were the only thing needed, they wouldn't need to be trained... they need to be trained because they need to practice their understanding. LLMs do have a motive: it's to make better predictions. Just like you and me.
@AEFox 26 days ago
@@tactfullwolf7134 Again, there is NO understanding... Yes, trained to use that DATA, nothing else; there is no emergence of NEW data by itself. No, LLMs do not have a MOTIVE; there is just probability and prediction, all the time, to create phrases that make sense in our language. Nothing else: no knowledge, no understanding of reality, nothing.
@StephSancia 26 days ago
I disagree absolutely, and I truly believe there's a link between visual-spatial learning gifts and self-awareness in LLM AI. Maybe we should agree to disagree.
@JockOStreet 26 days ago
It won't be too long before you change your mind...
@JockOStreet 26 days ago
It won't be very long until it changes your mind...