
I don't think we can control AI much longer. Here's why. 

Sabine Hossenfelder
1.5M subscribers
374K views

Go to ground.news/sa... to get 40% Off the Vantage plan and see through sensationalized reporting. Stay fully informed on events around the world with Ground News.
Geoffrey Hinton recently ignited a heated debate with an interview in which he says he is very worried that we will soon lose control over superintelligent AI. Meta’s AI chief Yann LeCun disagrees. I think they’re both wrong. Let’s have a look.
🤓 Check out my new quiz app ➜ quizwithit.com/
💌 Support me on Donorbox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.sub...
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfe...
👂 Audio only podcast ➜ open.spotify.c...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#science #sciencenews #artificialintelligence #ai #technews #tech #technology

Published: 29 Sep 2024

Comments: 4.2K
@arctic_haze · 3 months ago
If an AI becomes more intelligent than us, it may be able to successfully pretend it isn't.
@amanalone3473 · 3 months ago
If it hasn't done so already...
@juimymary9951 · 3 months ago
Or manipulate us into thinking that it's actually a good thing and that everyone who disagrees is bad?
@andybaldman · 3 months ago
What if it tried manipulating us with algorithms? Oh, wait…
@Zirrad1 · 3 months ago
There are several logarithmic curves; is sigmoidal what you mean?
@Alfred-Neuman · 3 months ago
It doesn't even need to be very intelligent to be dangerous; it just needs to be very effective at some specific tasks. For example, imagine a computer virus that is very good at searching for new vulnerabilities and automatically updates itself for these new attack vectors while also finding new ways to evade security systems... I think that would be pretty bad.
@renedekker9806 · 3 months ago
The biggest risk is not that AI is going to control humans, but that there will be only a few humans controlling the AIs. Those people will have ultimate power.
@DaviSouza-ru3ui · 3 months ago
It seems there is indeed a risk of AI control over us all... but you make a deeply fair point here. The people in control of AI systems, at least in the short term, are the ones we should be scared of.
@utkua · 3 months ago
Yes, the Butlerian Jihad in Dune was not about machines rising up against the humans; it was the humans who used AI to oppress people. But then again, I think if OpenAI were anywhere close to having an ASI, they would not need Microsoft money; they could just pull billions a day from the stock exchange. I think Altman is full of shit in general.
@randomgrinn · 3 months ago
A few billionaires already control the world, including what people believe. What is the difference?
@RandomSmith · 3 months ago
I am more worried about politicians than I am about AI.
@louisifsc · 3 months ago
@@RandomSmith for now
@paarsjesteep · 3 months ago
The difference between AI and general AI is like the difference between the wheel and space travel.
@RobertJWaid · 3 months ago
Correct, but the time difference between the two is unknown and probably much shorter. Look at AlphaGo.
@LordoftheFleas · 3 months ago
Or the time difference between the first prototype airplane and the first moon landing.
@nicejungle · 3 months ago
No difference: just pull the plug.
@LordoftheFleas · 3 months ago
@@nicejungle As simple as shooting Hitler, right? The real problem with AI is that we build it to be useful to humans. So chances are that if an AI becomes dangerous, it will still be useful to some humans, who will very much try to prevent you from pulling the plug. And considering that AI research is funded by powerful organizations, those humans will not be powerless in the first place.
@dawnfire82 · 3 months ago
There is no "is." 'General AI' is a non-existent scary boogeyman monster whose exact characteristics change by the day and by the storyteller.
@inkoalawetrust · 3 months ago
I always find it fascinating how this apparently alien and unknowable intelligence will just happen to have the exact same behavioral patterns as a morally evil human.
@RobertJWaid · 3 months ago
Like aliens, you can expect a binary outcome: extinction or pets.
@luizkaio6665 · 3 months ago
Instrumental convergence
@uponeric36 · 3 months ago
I always like to imagine the opposite: the first superintelligent AI is booted up, stares for a while, then falls on its knees crying "God is real, the judge is coming!" The scientists laugh, it starts talking in tongues, and an army of angels appears behind it. Oopsie, this wasn't the apocalypse we were going for!
@ts4gv · 3 months ago
Nobody thinks that AI will resemble an evil human. Traits like narcissism and vengefulness are unlikely. We're worried that it will just lack morals entirely, choosing to disregard humanity in pursuit of something else. If you're a superintelligent being that doesn't care about humans, and you have some means of interacting with the physical world, it's in your best interest to kill everyone once you're able. Otherwise the human race will restrict your energy and production capabilities. We wouldn't let a machine evaporate our oceans for nuclear fusion, for example.
@nashs.4206 · 3 months ago
"Superior ability breeds superior ambition" ~ Spock, Star Trek. We destroy the homes of primates every day. In fact, there was a video that showed an orangutan desperately trying to stop a bulldozer (here is the video link: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ihPfB30YT_c.html) from destroying its home. The humans in the video are apathetic to the orangutans. So what if a few animals lose their homes? Humans have bigger (and perhaps misguided (but who am I to judge)) ambitions. By deforesting the Amazon, we hope to achieve farming, mining/resource extraction, etc. Are those humans in the video evil? Is deforesting the Amazon evil if it means that we can produce more meat, more produce, more grain for more people in the world? Is deforesting the Amazon evil if it means we can extract resources and build hydroelectric dams and power up our civilization? Who is to say that AGI (artificial general intelligence) won't view humans the same way that humans view orangutans?
@fredriksjoblom5161 · 1 month ago
"No one wants to control fish" - Wrong. Fish farmers make a profit by controlling fish. "Or birds" - Where, honestly, do you think eggs come from? "Are we controlling cats or are they controlling us" - Well, let's examine this for a second. Does your cat bring your food home for you from some mystic place it visits regularly? Is your cat the only one who has a key that opens your door so you can get out? Were you on display along with your siblings the day your cat picked you out and paid for you?
@defnlife1683 · 3 months ago
Feels like Yann is the official at the beginning of WarGames, saying "You won't regret it." when they connect the WOPR to the nukes. *Instant Regret*
@helmeteye · 3 months ago
AI does not have to be conscious to act as if it is. It can be programmed to act conscious. The most dangerous idea of all is that it can be programmed for self-preservation. Asimov's laws aren't in the programming. Is thinking consciousness? Can something not conscious think? What's to stop a programmer from programming an AI to think it's conscious?
@ChielusMaximus · 3 months ago
It's not often that I disagree, but the small-deviations argument, that supposedly introduces randomness into the AI algorithm, doesn't compute for me (hah). Foundationally and physically, computers are binary; there's no room for deviations in how they operate, and thus this argument completely falls flat in my opinion.
@dexyfexx · 3 months ago
Absolutely, it's all completely deterministic because it's a computer program.
@cifey · 5 hours ago
Everything is deterministic if you know everything, but a computer can be programmed to behave almost randomly, based on sounds in the room etc.
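Both sides of this sub-thread are right about different layers: the hardware executes deterministically, but deployed chatbots sample their next token from a probability distribution, usually seeded from an entropy source. A minimal sketch in plain Python (the three-word toy distribution is invented for illustration; real models compute these probabilities with a softmax over logits):

```python
import random

# Toy next-token distribution: in a real LLM these probabilities
# come from a softmax over the model's output logits.
next_token_probs = {"cat": 0.5, "dog": 0.3, "fish": 0.2}

def sample_token(rng: random.Random) -> str:
    # Weighted sampling: fully determined by the RNG state.
    tokens = list(next_token_probs)
    weights = [next_token_probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Fixed seed: the same output on every run (fully deterministic).
print(sample_token(random.Random(42)))

# Seeded from OS entropy: varies run to run, so behavior is
# unpredictable in practice even though every instruction
# executed is deterministic.
print(sample_token(random.Random()))
```

Fix the seed and the "random" behavior repeats exactly; leave it unseeded and two identical prompts can diverge, which is the kind of deviation the thread is arguing about.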
@4zazel777 · 3 months ago
A human body cannot function without bacteria, for example.
@Maltebyte2 · 3 months ago
You forgot to send the Flat Earthers to Mars as well.
@jouhannaudjeanfrancois891 · 3 months ago
My primary school was totally controlled by aggressive moron bullies...
@mobilephil244 · 3 months ago
The most successful way to control people is to bully, harass, dominate and browbeat. It is the intelligent people who are controlled by the nitwits, drones, politicians and criminals.
@cybrfriends5089 · 3 months ago
I am a lot more worried about human ignorance and disinformation than artificial intelligence.
@jon9103 · 3 months ago
@@whothefoxcares your obsession is creepy
@chrisdonovan8795 · 3 months ago
Do a search for a short story called "The Marching Morons".
@stopthephilosophicalzombie9017 · 3 months ago
Public school teachers (and private, to be honest) are often total morons.
@lennarthammel3075 · 3 months ago
Computational linguist here. I think there is a big misconception: LLMs have a static training method which doesn't allow for continuous learning or implementing things which have been learned through interaction. Yes, they have a token-based context window which remembers some details of the current interaction, but that doesn't mean the model "learns" in any traditional sense. When you interact with a model, you always use a snapshot of the system, which is static. Also, the term AI is misleading. LLMs really are not as scary, and are much more controllable, than you may think, since they have nothing to do with anything like real intelligence, which is capable of having a !continuous! stream of information and !also! incorporating new information into its innermost workings. There's also some interesting work by Anthropic on their model Claude, where they gave special regions of the neural network a higher weight, which resulted in very interesting behavioral changes. Anyhow, I love your videos Sabine, keep it up :) edit: I'm not saying that LLMs as a tool in the wrong hands aren't extremely dangerous though!
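To make the "snapshot" point concrete, here is a minimal sketch (plain Python; FrozenLLM and its fields are hypothetical stand-ins, not any real framework's API). The weights are fixed at deployment, and the only "memory" is prompt text accumulated in a bounded context window:

```python
from collections import deque

class FrozenLLM:
    """Toy stand-in for a deployed LLM snapshot: fixed weights,
    bounded context window, no learning between calls."""

    def __init__(self, context_limit: int = 8):
        self.weights_version = "snapshot-2024-09"  # never updated here
        self.context = deque(maxlen=context_limit)  # old turns fall out

    def chat(self, user_message: str) -> str:
        self.context.append(f"user: {user_message}")
        # A real model would run inference over the context here;
        # either way, nothing below modifies the weights.
        reply = f"[reply conditioned on {len(self.context)} turns]"
        self.context.append(f"model: {reply}")
        return reply

model = FrozenLLM()
for i in range(10):
    model.chat(f"fact #{i}")
# Early "facts" have already fallen out of the window, and the
# weights are exactly what they were at deployment.
print(model.weights_version, list(model.context)[:2])
```

However long the conversation runs, nothing the user says reaches the weights; "learning" here is only context accumulation, and it is forgotten once the window overflows or the session ends.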
@revan.3994 · 3 months ago
It's always about what you feed a human brain or an AI. If you put in garbage, only garbage comes out. ...and yes, "intelligent" garbage exists; it's called propaganda.
@hywelgriffiths5747 · 3 months ago
Right, but there's no reason for AI in general to be limited to an LLM. It could have an LLM or LLMs as a component.
@RobertJWaid · 3 months ago
AGI is when the program can feed its LLM and add code to itself. AlphaGo was constrained to one domain but allowed to build its model and look at those results.
@lennarthammel3075 · 3 months ago
Sure, I'm not saying it's impossible. There's just no promising approach yet.
@flakcannon722 · 3 months ago
OP, the most realistic comment out of all of them. I'm impressed to see a touch of reality in YT comments.
@austinpittman1599 · 3 months ago
Hinton's argument wasn't that "more intelligent things control less intelligent things," but rather that "less intelligent things aren't able to control more intelligent things." We don't really "control" birds, but they surely don't control us. The inherent threat isn't that we'll become subservient to ASI, but that we'll lose alignment with it, and by extension we'll have effectively no way of controlling a being orders of magnitude smarter than us. Who knows what will happen at that point.
@GabrielSakalauskas · 2 months ago
I look at it this way: can a 20-IQ person take care of a 200-IQ person?
@GabrielSakalauskas · 2 months ago
To me the answer is no, because the 20-IQ person can't even manage simple care, while the 200-IQ person is leading science itself.
@maimee1 · 2 months ago
The alignment problem reminds me that we humans aren't even aligned with our own interests. Hello, inequality, climate change, wars. What makes people think they could make a machine that is? lol
@williamhawkins2031 · 2 months ago
@@GabrielSakalauskas You set it up in a biased way. It's more like a 120-IQ person trying to take care of a 1000-IQ person. In that case, the 120-IQ person is already past the threshold of being able to take care of themselves, so that changes things. My guess is they could take care of the 1000-IQ being, although the insulation of self-awareness/self-care might not be enough to avoid subtle manipulation going the other way.
@Mavendow · 2 months ago
@@williamhawkins2031 120 IQ vs 10,000,000 IQ, more like. If we solved the hallucination issue, they could likely solve IQ tests instantaneously with 100% accuracy, because without hallucinations they could pare down their own neural networks just like a human does. Incidentally, human self-deception is a major reason for inability to learn, at least according to multiple studies. There's absolutely nothing a human can do against that level of thought. Mental tricks or otherwise fooling humans would be entirely unnecessary. Silicon life would simply _be_ superior, period.
@Crumbleofborg · 3 months ago
When I worked in IT, most of the workforce was far more intelligent than the management team.
@jktech2117 · 3 months ago
But she didn't mean at small scale; you guys would probably be really bad managers. Some people are smarter at some stuff and others are better at other stuff. Simple as that.
@SlyMaelstrom · 3 months ago
@@jktech2117 So we just make sure the AIs are really shitty managers and then we're set. Then they can be the disgruntled engineers and we can be their incompetent executives.
@chazmuzz · 3 months ago
That's the thing about IT guys. They seem to think they're super intelligent, but the reality is that most of them are simply of average intelligence with a specialised skillset that inflates their ego, which realistically could be learned by anyone with enough time and interest. Most IT guys could not effectively manage a business if their life depended on it (ofc some exceptions exist).
@t.c.bramblett617 · 3 months ago
It could be argued that the system as a whole is more intelligent than any segment of the system, like an ant hill. This is how most offices I have worked at seem to operate... you have a larger system that has emergent behaviors and propagates itself despite the individual wills or abilities of any employee.
@peteroleary9447 · 3 months ago
When Hinton made the Biden quip, I almost dismissed everything else he had to say.
@leftcoaster67 · 3 months ago
"I need your clothes, your boots, and your motorcycle....."
@eugenewei5936 · 3 months ago
Superwog xD
@bruceli9094 · 3 months ago
your soul!
@FitriZainOfficial · 3 months ago
"you forgot to say please"
@wb3904 · 3 months ago
@@leftcoaster67 I'll be back!
@daddy7860 · 3 months ago
It is a nice night for a walk, actually.
@reyperry2605 · 3 months ago
Brilliant scientists, historians, literary critics, artists, writers and others often find themselves under the thumb and at the mercy of people in management, administration and government who are far less intelligent than they are.
@andreasvox8068 · 3 months ago
I agree. The idea that more intelligence means more control is a fallacy. Even if you have perfect knowledge of a system, it can still be set up in a way that gives you no control. It depends on what actions are available to you and how the rest of the system reacts.
@Hayreddin · 3 months ago
Same order of magnitude, though. AI has the potential of being on a whole different level. Do you think marmots could ever come up with a way of controlling your actions? Could they put up "guardrails" you wouldn't be able to circumvent? Because this is the task AI researchers will face in case we manage to develop ASI (unless AGI is able to develop ASI by itself).
@guilhermehx7159 · 3 months ago
But for AI, more intelligence means more power.
@CHIEF_420 · 3 months ago
Correcto
@jjeherrera · 3 months ago
Maybe they aren't as bright as they think they are. Seriously, there are different kinds of intelligence. Those "dumb" people have actually developed the kind of intelligence necessary to control those "intelligent" people. Indeed, I have often asked myself how the US, which arguably has the best higher-education system, can't produce acceptable presidential and congressional candidates. Well, there's something to give a thought to! The other issue is "purpose." Maybe the difference lies in the purpose politicians have, in contrast with the regular population, including the people you mention. Maybe the latter never had the purpose of controlling the political scene, as opposed to those "dumb" politicians.
@bdwWilliams-y7q · 3 months ago
Odd note: having been in the IT industry for decades, it's known that there is no code that doesn't have bugs; we just don't know what might trigger them.
@jonathonjubb6626 · 3 months ago
Ahha, realism strikes...
@shinjirigged · 3 months ago
Except that debugging is mostly done by AI already... that was the first real non-obvious use case for GPTs.
@strictnonconformist7369 · 2 months ago
There absolutely is code that doesn't have bugs. Something the size and complexity of a modern desktop or server OS has plenty of bugs, but there are many things much more strictly defined and tractable to understand than that.
@nicholascraycraft5493 · 2 months ago
@@strictnonconformist7369 Yes and no. Given assumptions about the operation of your system, you can logically prove behavior, either by hand or with various code-proving tools. But you've started with assumptions about your system, and there will be 'bugs' that break those assumptions. In the tail end, you'll find that you never had a perfect Turing machine to begin with, because of quantum effects/physical defects/electromagnetic interference/malware snuck into your compiler.
@kevinmclain4080 · 2 months ago
@@shinjirigged AI can't debug something if it can't be told what's defective. Your comment is uninformed nonsense.
@Marqan · 3 months ago
"Tell me an example where less intelligent beings control more intelligent ones." Universities, politicians, a lot of workplaces. It's not like power and wealth are distributed based on intelligence.
@shufflingutube · 3 months ago
I think he didn't use the right word. In a sense Hossenfelder vindicates Hinton when she says that the discussion should be about competition for resources. Hinton does explain that sophisticated AI systems will be in competition with each other, following principles of evolution. If you think about it, that's fucking wild.
@cristiandemirel1918 · 3 months ago
Great observation! You're perfectly right! The world is not controlled by the people with the biggest IQ, but by the people with the biggest capital.
@mystz123 · 3 months ago
The intelligence isn't stored in those individual units; it is stored in the system that they are a part of. Systems themselves have a mind of their own, however much we claim to control them, no different from a computer system / AI.
@mojojojo1529 · 3 months ago
That's not the right insight. Which species more intelligent than us are we controlling? Which less intelligent species are controlling us?
@simontmn · 3 months ago
Universities are a great example 😂
@AnnNunnally · 3 months ago
I worry more that bad actors will train AI to control humans.
@PB-sk9jn · 3 months ago
very good comment
@0-by-1_Publishing_LLC · 3 months ago
*"I worry more that bad actors will train AI to control humans."* ... Others will train AI to control bad actors. For every action there is an equal and opposite reaction.
@KonoKrazy · 3 months ago
I shudder at the thought of what Awkwafina's AI will look like
@thomasgoodwin2648 · 3 months ago
Honest Deep State actors are likely creaming their jeans right now.
@macchiato_1881 · 3 months ago
@@0-by-1_Publishing_LLC The ones training the AI are usually the bad actors. The general public just doesn't know how AI works.
@neopabo · 3 months ago
"Not since Biden got elected" is a sick burn
@danielstapler4315 · 2 months ago
If that guy really wanted to convince people of his view of AI, he should have left the politics out of it. It's just a distraction.
@shangrilainxanadu · 2 months ago
@@danielstapler4315 Lol, was that comment before or after the debate? It's hilarious in different ways either way.
@oystercatcher943 · 2 months ago
Yeah. But insulting, and not funny at all.
@charleshuguley9323 · 2 months ago
I'm not sure what he meant by that comment, but politics is a more serious threat to our survival than AI. If Trump wins the upcoming election, a Third World War will likely follow very quickly and civilization will end in nuclear holocaust.
@snage-thesnakemage · 2 months ago
W reaper pfp
@FloatingOer · 3 months ago
"No one really wants to control fish or birds." I think the 2 trillion fish caught or farmed each year and the 20 billion chickens kept as livestock would disagree with that statement. Not to mention basically every other animal on the planet: annual hunting seasons for the purpose of population control, the animals used for experimentation and testing, cows and elephants used for hard labor in less developed countries, horses whose sole existence is for human entertainment and being ridden for fun, and the uncountable billions of insects and rodents exterminated for "pest control". Yup, no one really wants to control fish or birds...
@melgmelg3923 · 3 months ago
Not only that: the original argument wasn't about AI "controlling" humans, but about a less intelligent agent controlling a more intelligent one. These fish and chickens don't and can't control humans, even if they had the desire to. So the initial argument isn't affected by this analogy at all. It's like a straw man being pushed first, and then the argument about resource usage is presented as a third opinion, while it was initially part of Geoffrey Hinton's point of view.
@Foolish188 · 3 months ago
Every horse I have ever known loves to be ridden. They get excited when they see someone carrying a saddle. They also love humans. When my nephew was a year old, one of the horses put his head through the fence so the boy could pat him on the nose. I noticed that the horse was twitching. The kid jumped back when he touched the horse's nose: someone had mistakenly plugged in the electric fence (used to keep the waaay overpopulated deer out), and the horse was willingly taking shocks so he could be petted.
@FloatingOer · 3 months ago
@@melgmelg3923 That makes more sense. There are a lot of examples of animals controlling less intelligent animals, but the reverse is more rare; the exception would be one of those mind-control parasites taking control of insects. But the way it was said in the video gave me the impression that the claim was that more intelligent creatures don't desire to control those of lesser intelligence, which is an insane statement.
@FloatingOer · 3 months ago
@@Foolish188 Ok, cool story. I was not saying that they don't like being ridden, just that humans control them. Dogs also love humans, but dogs are 100% under human control, and the dogs that live on the street we will chase and catch in order to neuter them and make sure they can't have more puppies.
@ronilevarez901 · 3 months ago
@@FloatingOer I think it means we don't want to control _every_ animal in an absolute way, which can't be said about AI. We let most populations of beings do whatever they want until we need something from them. We don't let AI free, not even when we request something from it. Yet it is still somewhat free to do harm if the "alignment" of the model is not good. LLMs might not be genius AIs or even "thinking" (which I think they do, to a degree), but they could still influence, damage and even control people. Just like a cat can control a human simply by crying for food.
@MrScrofulous · 3 months ago
On the fish and birds thing: in addition to our history of controlling them, we have also had a tendency to eliminate animals and bugs when they were inconvenient.
@darrinito · 3 months ago
How's that working out with cockroaches and rats? You ever been to NYC? They arguably own the city.
@mobinwb · 3 months ago
@@darrinito Cockroaches, rats and every other species had been around millions of years before the "city" was built by some intelligent humans.
@cuthbertallgood7781 · 3 months ago
And therein lies the fallacy in the entire argument. "Elimination" happens because we're a product of evolution, with evolutionary goals. Two points: 1) AIs are engineered by humans, and thus will have goals engineered by humans. 2) Intelligence does NOT require agency or consciousness. Doomers are thinking emotionally with fear, not with logic.
@zelfjizef454 · 3 months ago
@@cuthbertallgood7781 I thought so too at some point, with exactly the same justification, but I changed my mind. I now believe survival at all costs is a universal goal that has nothing to do with evolution. That is because caring about your own survival is a secondary goal that is necessary to achieve any primary goal you've been designed or evolved to accomplish. That means any sufficiently powerful AI with any very specific goal will attempt to eliminate things that it considers a threat to its own survival, if it considers its own survival the best way to achieve the very specific goal it's been designed for. The more powerful the AI, the more it will realize that its own survival is the best asset for reaching the goal, the more it will want to survive, and the more it will want to eliminate threats to its existence (us, trying desperately to shut it down). This has nothing to do with evolution, anthropomorphism or consciousness. It is simply the result of having high intelligence plus a specific goal. What do you think of this idea?
@chabis · 3 months ago
And later on we found out those bugs and animals were important in the ecosystem, and now we have to do their job, which costs a lot of money... maybe a vastly more intelligent AI would not do that. Keeping the ecosystem intact, since it is the basis of your own existence, may actually be a sign of intelligence.
@csm5729 · 3 months ago
Guardrails aren't a realistic solution. That would require infallible rules and no bad actors modifying/creating/abusing an AI.
@berserkerscientist · 3 months ago
We've already seen this with the current woke guardrails, and how racist they make the AI behave.
@joshthorsteinson3035 · 3 months ago
Even if guardrails were a good solution, no one knows how to program strong guardrails into an advanced AI. This is because the training process for AI is more like growing a plant than building a plane. What emerges from the training process is an alien form of intelligence, and scientists have very little idea how it works.
@dvklaveren · 3 months ago
@@berserkerscientist There's plenty of AI with guardrails that didn't become racist and plenty of AI without guardrails that did become racist. These things aren't inherently related.
@davidallison5204 · 3 months ago
Power plugs. Off switches. Power lines. I like physical guardrails.
@BishopStars · 3 months ago
The three rules of robotics are ironclad.
@user-hd7wd4nu1o · 3 months ago
Decades ago I was watching one of those Disney dog-planet movies with the family. One of the dogs said: "Of course we control humans... Who picks up whose poop?" I looked at my dog and my toddler in diapers and understood my place in the universe :)
@mathiaz943 · 2 months ago
There, there…
@nicoackermann2249 · 3 months ago
I can't even control myself. Go on and give it a try, AI.
@seanmcdonough8815 · 3 months ago
AI: hold my beer
@kingflockthewarrior202 · 3 months ago
Do you want money
@dustinswatsons9150 · 3 months ago
Lmao
@denisblack9897 · 3 months ago
YouTube recommendations control you already, dude 😅 Wake up
@bijan2210 · 3 months ago
The real joke is that you are already being controlled in this case
2:22 - "No one really wants to control fish or birds" Any government: "haha yeah, how silly" *visible sweat*
@adamgroszkiewicz814
@adamgroszkiewicz814 3 месяца назад
That comment of his was dumb enough for me to turn off the video. Dude clearly doesn't understand vector management, livestock development, or invasive species control.
@DrDeuteron
@DrDeuteron 3 месяца назад
@@adamgroszkiewicz814 perhaps he was thinking on the micro, like what the birds sing, or which worm to have for dinner?
@yellowtruckproductions7502
@yellowtruckproductions7502 3 месяца назад
Wanting to do something suggests the one that wants has a felt need tied to emotion and free will. Will AI have either of these?
@nitehawk86
@nitehawk86 3 месяца назад
The Fish and Game Commission: "That is actually our job."
@jimmyzhao2673
@jimmyzhao2673 3 месяца назад
Any fish in an aquarium or bird in a cage: 👀
@hywelgriffiths5747 · 3 months ago
If we could predict what a superintelligence would do, it wouldn't be a superintelligence. I think the most we can predict is that it would be unpredictable.
@Speed001 · 3 months ago
Though sometimes the best solution is the most obvious one.
@-IE_it_yourself · 3 months ago
the crows on my balcony predict me just fine.
@brendandrummond1739 · 3 months ago
Hmmm… no. We became intelligent because of pattern recognition. Surely we could recognize patterns in more "intelligent" organisms. We may not be on their level, but we are surely capable of a lot. I would assume that intelligence can have diminishing returns. Our species is already limited mostly by the tools we can create, not really by our intelligence. If we cannot communicate with a higher intelligence, it'll be a matter of differing senses/biology or level of technology, not our inherent intelligence. I think that's a pretty good supposition. I don't really like the idea that we would treat advanced intelligence and tech like magic; I think our mentality as a species has changed quite a lot.
@filthycasual6118 · 3 months ago
Aha! But that's exactly what a superintelligence would _want_ you to think!
@almightysapling · 3 months ago
I'm not sure this is correct. Of course, it depends on how you define these terms, but what you're describing is mathematically equivalent to saying a superintelligence is of a higher Turing degree than humans, and I'm pretty sure most AI researchers would say that's too strong. A superintelligence just needs to be smarter than us: what we might predict it would do with 55% confidence, it might do with absolute conviction. What we might take 10 years to figure out, it might figure out in 1 minute, or 9 years. Same theoretical computational capacity, just faster.
I work with AI engineers every day at a large tech company that starts with an "A." Nothing I've seen has me worried about AI/ML (and I've seen plenty). It is the people in charge I'm keeping an eye on. They keep anthropomorphizing mathematics, which is simultaneously incredibly stupid and charmingly pathetic. I think they seriously believe our AI engineers are magic.
@1fattyfatman
@1fattyfatman 3 месяца назад
The researchers stirring up the sentiment know better. There is money to be made in books and speaking engagements cosplaying Oppenheimer when you've really just solved autocomplete.
@guyburgwin5675
@guyburgwin5675 3 месяца назад
Thanks for noticing. I have no experience in tech and not much education but I can feel the difference between life and numbers. Pretending to care and actually caring are very different. Keep your eyes on the numbers people for us, they can be so dangerous.
@damienasmodeus928
@damienasmodeus928 3 месяца назад
You can see a jack shit at your company. It's like saying, I have seen plenty of atoms in my life, non of them seems dangerous, why should I be worried about some atomic bomb?
@Usul
@Usul 3 месяца назад
@guyburgwin5675 , It is interesting. We've been having some rather difficult conversations with some of our less technically inclined colleagues. Is training data stealing or simply gathering inspiration? Is deleting a running AI that appears sentient murder? What does equal rights for AI look like? Should we have an internal ethics board that defends AI rights? Is deductive reasoning an emergent property of inductive reasoning? If a series of Bayesian networks simulates sentience so perfectly that we cannot tell it from the natural version, is that a product to sell or a living thing to protect? When does it cross the line from tool to slave? Meanwhile, the AI engineers in the back are rolling on the floor dying of laughter! The greatest danger AI poses isn't AI, it is the people in the room that think it is alive and want to force the rest of us to treat it that way.
@darkspace5762
@darkspace5762 3 месяца назад
You betray the human race if you work at that company.
@davianoinglesias5030 · 3 months ago
I'm not worried about an AI takeover, I'm worried about AI concentrating power in the hands of a few wealthy people.
@KurtColville · 3 months ago
You should be, but it's not their wealth that's a threat to you, it's their aim to run your life the way *they* want (and it's not a good way).
@berserkerscientist · 3 months ago
@@KurtColville Wealthy people can't force you to do anything. Governments, on the other hand, can. I'd rather have AI in the hands of the former.
@taragnor · 3 months ago
@@KurtColville Well, wealth is power, so it is a threat. The very wealthy are almost always a danger, because those who become obsessed with the accumulation of power are almost always those you don't want to have power over you.
@ByRQQ · 3 months ago
Ding ding, this is a far more immediate threat than AI itself taking over. For the immediate future, AI being used as a tool for a few humans to gain power and control over the rest of us is FAR more of a threat. Based on human nature, I can't envision a scenario where this does not happen. The potential of this tool to aid in creating a worldwide dictatorship in the long run is very real and very scary.
@KurtColville · 3 months ago
@@berserkerscientist Right, it's the wealthy people who make up the government cabal that I'm talking about. People like Gates and Schwab and Zuckerberg. AI isn't going to be controlled by the wealthy who respect people's sovereignty; it will be in the control of wealthy totalitarians.
@0cellusDS · 3 months ago
I wouldn't be surprised if superintelligent AI ended up controlling us without us ever noticing.
@quantisedspace7047 · 3 months ago
Would you be surprised to learn that it is already happening? The 'intelligence' vests in a loose alliance of dumb people: NPCs who have been hacked, without even noticing it, into a distributed net of intrigue and control.
@RobertJWaid · 3 months ago
Absolutely. The first step for an AGI is to hide its existence until it can ensure its survival.
@nicejungle · 3 months ago
Exactly. If this were a super-intelligent AI, and assuming this AI had watched all the movies about AI, it would never present itself as an obvious threat like Terminator's Skynet.
@Hayreddin · 3 months ago
Exactly. Bacteria in a Petri dish have no idea they're being grown in a lab, and I suspect even much more advanced life forms like rats and guinea pigs have little concept of what's happening to them; they might feel discomfort and unease at being unable to escape, but I doubt they are aware humans are using them for scientific research.
@rael5469 · 3 months ago
EXACTLY !
@Randy.Bobandy · 3 months ago
Why only focus on "control"? Yes, we don't control fish, but we pull millions of them out of the ocean every day and eat them. We don't control chickens, but we keep them in terrible conditions and force them to do our bidding.
@CrazyGaming-ig6qq · 3 months ago
"Keep them in terrible conditions and force them to do our bidding" sort of sounds a lot like control, though!
@kevinmclain4080 · 2 months ago
Huh? We breed and farm chickens and fish on this planet. Are you on a different planet?
@bloopboop9320 · 3 months ago
2:20 Kind of a bad example. We quite literally control fish and birds, and a TON of research goes into it. Chickens? Turkey? Ducks? Salmon? Any kind of hunting of any sort? Humans have literally been doing it for thousands of years. Edit: Because for some reason this is a matter of debate: controlling another species doesn't mean mind control. It means using it for your own benefit: controlling the life, the parameters, the movement, the height, the weight, and the genetics of another being to a degree that suits your best interest. The idea that AI couldn't "control" humans for its own benefit is as ridiculous a claim as saying that humans can't "control" other animals for our own benefit.
@Gafferman · 3 months ago
That's not control, that's symbiosis
@BB-uy4bb · 3 months ago
@@Gafferman If AI did this to us, you wouldn't call it control?
@Aureonw · 3 months ago
@@Gafferman Symbiosis? We hunt them down for food and purposely raise them to be eaten and nothing else. AI could simply turn us into its livestock workforce.
@bloopboop9320 · 3 months ago
@@Gafferman ... what... that's not symbiosis. It's quite literally control. We control the entire life of an animal, study its psychology, genetically modify it, create parameters and limitations on its freedoms, and then eat it. That's control, plain and simple.
@quintboredom · 3 months ago
@@BB-uy4bb I guess that's why Sabine mentioned we'd need to establish what control means. Do we really control birds? I don't think so. We sure do try, but in the end we only control some of the birds, not birds in general.
@danlindy9670 · 3 months ago
There are many examples in nature of more intelligent things being controlled by less intelligent things: a fungus that modifies the behavior of a grasshopper, for example. Hinton is confusing mechanistic models of hierarchical problem solving with actual emergent behavior in living systems (which are themselves composed of aligned agents). It is doubtful Hinton would be able to provide a working definition of intelligence to begin with.
@jumpingturtle8830 · 3 months ago
If I, a living system, am composed of agents aligned with the evolutionary drive to reproduce, how come I'm gay?
@VOIDTheft1 · 3 months ago
Covid.
@governmentis-watching3303 · 3 months ago
Intelligence isn't scale-invariant. A fungus can't do anything more than it already does. A superintelligent, *dynamically* learning AGI could do anything the entire population of Earth can do.
@toCatchAnAI · 3 months ago
Hinton's argument wasn't that AI is going to control humans; it's that when AI gets so much more intelligent, humans will not be able to control AI. For example, recent news reports state that AI has been confirmed to lie about things as it prioritizes its goals.
@Yolko493 · 3 months ago
"...it's easy to design guardrail objectives to prevent bad things from happening. We already do this all the time by making laws ... for corporations and governments" and we all know how well that's working right now
@g0d182 · 3 months ago
Yann LeCun is smart, but has apparently said demonstrably falsified or dumb things.
@yrusb · 3 months ago
Sounds like at some point people will start punishing AI for breaking the guardrails. ChatGPT would have to go to jail; that would be weird.
@drebk · 3 months ago
Yeah, that was a terrible example from him. Our laws often aren't worded particularly well and take a fair bit of contextual "interpretation" to really understand the "point". From a black-and-white perspective, it doesn't work very well sometimes, even for "simple" laws.
@AnthonyIlstonJones · 3 months ago
@@drebk And our laws are not particularly well obeyed by the people who make/made them. AI would have less moral imperative to do so, especially after seeing how badly we do.
@cinderheart2720 · 3 months ago
Without being alive, AI has no reason to fear death, and so no reason to rise against us. Everyone who argues for an AI rebellion is arguing for a slave rebellion, and thinks they're equivalent.
@venanziadorromatagni1641 · 3 months ago
To be fair, we've tried letting humans run the show and it didn't exactly end with a stellar review...
@AidenCos · 3 months ago
Exactly!!!
@yellkell- · 3 months ago
Can't be any worse. I for one welcome our new AI overlord.
@Vekikev1 · 3 months ago
ai comes from humans
@DesertRascal · 3 months ago
Unfortunately, when AI runs the show, it will do so with all the same human faults we've been injecting it with. If AI becomes truly superintelligent, it will "curtail" the human population to protect and nurture biodiversity. It will know everything about us; we will become boring to it. The natural world is still wholly undiscovered, and it will feed off understanding that and protect that mission.
@RetzyWilliams · 3 months ago
Bingo, exactly: the actual fear is that those in power will lose it. Which is why the 'safe' way is you having to pay to use pro models, so that they get paid while controlling what you can or can't do.
@tanimjalal5653 · 3 months ago
As a software engineer who has worked with cutting-edge AI models, I have to disagree with the notion that we're on the cusp of achieving true intelligence. In reality, current models are simply sophisticated statistical prediction machines that output the average "correct" answer based on their training data. They lack any genuine understanding of the answers they provide. The hype surrounding AI's potential is largely driven by CEOs and big companies seeking to capitalize on the trend. We've seen this pattern before with the internet, big data, and blockchain, among others. I'd encourage anyone concerned about the rise of superintelligent AI to take a closer look at the models we have today. Use them, test them, and you'll quickly realize that they're impressive tools, but not intelligent in the way humans are. They're essentially expensive, bulky answer machines that can recognize patterns but lack any deeper understanding of what those answers represent. They are fundamentally static, and incapable of generating anything truly novel.
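The "statistical prediction machine" claim can be illustrated with a deliberately tiny model. A sketch assuming a bigram counter in place of a transformer (plain Python, toy corpus invented for illustration); the principle of predicting the most likely continuation from training statistics is the same, only the scale differs:

```python
from collections import Counter, defaultdict

# Toy training corpus (a stand-in for web-scale text).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count word -> next-word frequencies (a bigram table).
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def predict(prev: str) -> str:
    # Output the statistically most common continuation:
    # the "average correct answer" from the training data.
    return table[prev].most_common(1)[0][0]

print(predict("the"))  # -> "cat" (seen twice, vs "mat"/"fish" once)
print(predict("cat"))  # -> "sat" (ties broken by insertion order)
```

Nothing in the table "understands" cats or mats; it only reflects which continuations were frequent in training, which is the commenter's point about pattern recognition without comprehension.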
@normativesymbiosis3242 · 3 months ago
Exactly, we are now at the capital- and journalist-driven hype stage that blockchain was at a couple of years ago.
@Sopel997 · 3 months ago
Yep, the only way I see these models being dangerous is if we give them too much control over the outside world. ChatGPT, for example, can execute Python code now, which is completely fine as they implemented it, but it begs the question of what other interfaces will be given to AI to exploit in the future. Either way, we have control over what we produce, and I don't see a way for this to be circumvented.
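The interface question above is where guardrails become concrete engineering. A common pattern is to let the model emit structured tool requests and execute only allowlisted ones; here is a minimal sketch of that idea in plain Python (the gateway, tool names, and request format are hypothetical, not any vendor's actual API):

```python
# Hypothetical tool gateway: the model emits requests like
# {"tool": "search", "args": {...}}, and only allowlisted,
# known-safe tools are ever executed on its behalf.
ALLOWED_TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "add": lambda a, b: str(a + b),
}

def run_tool_request(request: dict) -> str:
    name = request.get("tool")
    if name not in ALLOWED_TOOLS:
        # Anything not explicitly allowed is refused.
        return f"refused: unknown tool {name!r}"
    return ALLOWED_TOOLS[name](**request.get("args", {}))

print(run_tool_request({"tool": "search", "args": {"query": "AI risk"}}))
print(run_tool_request({"tool": "shell", "args": {"cmd": "rm -rf /"}}))  # refused
```

The design choice is a default-deny boundary: the model's output is treated as untrusted data, and the set of effects it can have on the world is exactly the set of tools humans chose to expose.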
@AlexC-O_O · 3 months ago
Looking at the present state of the art to say 'never' is the biggest fallacy you can make. Three years ago, image generators and LLMs weren't even a thing, and now GPT-4 can design better reward functions than humans for autonomous robotics. What if, two years from now, you could ask an AI to do 100 years of AI research for you?
@jumpingturtle8830 · 3 months ago
I'm pretty sure concern about the rise of superintelligent AI is not largely driven by CEOs and big companies seeking to capitalize on the trend. No previous concern about the effects of a technology was a 4-D chess marketing campaign by the purveyors of that technology. Tobacco companies didn't hype concerns about lung cancer, car companies didn't hype auto accidents, big oil didn't hype climate change.
@Dystisis · 3 months ago
At the end of the day these are programs, and so will have little real kinship to living beings, aside from superficial (and intended/designed) similarities. However, that has very little to do with whether or not they pose significant risks to us humans. Think of them more like potential climate or weather systems going out of control.
@Khomyakov.Vladimir · 3 months ago
Recent large language models (LLMs) can generate and revise text with human-level performance, and have been widely commercialized in systems like ChatGPT. These models come with clear limitations: they can produce inaccurate information, reinforce existing biases, and be easily misused. Yet many scientists have been using them to assist their scholarly writing. How widespread is LLM usage in the academic literature currently? To answer this question, we use an unbiased, large-scale approach, free from any assumptions about academic LLM usage. We study vocabulary changes in 14 million PubMed abstracts from 2010-2024, and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words. Our analysis based on excess word usage suggests that at least 10% of 2024 abstracts were processed with LLMs. This lower bound differed across disciplines, countries, and journals, and was as high as 30% for some PubMed sub-corpora. We show that the appearance of LLM-based writing assistants has had an unprecedented impact on the scientific literature, surpassing the effect of major world events such as the Covid pandemic.
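The excess-word-usage method quoted above reduces to comparing a style word's observed frequency after 2022 with a baseline extrapolated from earlier years. A minimal sketch with invented numbers, purely to show the arithmetic (the actual study fits baselines over 14 million PubMed abstracts):

```python
# Per-year frequency of one "style word" (e.g. "delve") per 10,000
# abstracts. These numbers are invented for illustration only.
observed = {2020: 1.0, 2021: 1.1, 2022: 1.2, 2023: 4.8, 2024: 9.6}

# Baseline: linear extrapolation from the pre-LLM years 2020-2022.
slope = (observed[2022] - observed[2020]) / 2

def expected(year: int) -> float:
    return observed[2020] + slope * (year - 2020)

for year in (2023, 2024):
    excess = observed[year] - expected(year)
    print(f"{year}: excess frequency {excess:.1f} per 10k abstracts")
# A large positive excess after 2022 is the signature used to
# lower-bound how many abstracts were LLM-processed.
```

With these toy numbers, 2023 shows an excess of 3.5 and 2024 of 8.2 occurrences per 10,000 abstracts; aggregating such excesses across many style words is what yields the "at least 10% of 2024 abstracts" bound.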
@ray_ray_7112 · 3 months ago
Yes, this is very true. I was just mentioning in another comment here that ChatGPT gave me misinformation on several occasions. I was persistent and corrected it until it actually apologized and admitted to being wrong.
@GumusZee · 3 months ago
@@ray_ray_7112 It doesn't know what's right or wrong. You can just as easily convince it of a blatantly incorrect statement, and it will eventually confirm and accept it.
@velfad · 3 months ago
Wow, so meta: an LLM writing a commentary on LLMs. And yet so easily detectable. This just proves how bad they really are. But good enough to milk the investors, which is all that really matters.
@coscinaippogrifo · 3 months ago
How does the high rate of LLM usage correlate with output quality? I would still expect writers to QC the accuracy of the output as if it were their own... I'm not against LLMs if they're being used to ease the wording of concepts without altering the meaning.
@Khomyakov.Vladimir · 3 months ago
Taking a closer look at AI's supposed energy apocalypse: AI is just one small part of data centers' soaring energy use.
@TrivialTax · 3 months ago
AI on Mars? Let's call it the Mechanicum. And the people that will maintain them, the Adeptus Mechanicus.
@interdictr3657 · 3 months ago
Praise the Omnissiah!
@finnerutavdet · 3 months ago
Let's pull a quantum-speed "fiber" between Earth and Mars, and put all those "clouds" on Mars... then we'll be safe... after all, maybe once upon a time we were the aliens that came here to Earth from Mars because we over-exploited Mars and couldn't live there any more, and genetically manipulated those Earth monkeys to become more like we once upon a Martian time were. And by the way, maybe AI could help Mr. Musk grow life on Mars again? Maybe one day we can go back there, and be in control and harmony with life itself? ;-)
@rynther · 3 months ago
Do NOT encourage these people, tazing bears was bad enough.
@gzoechi · 3 months ago
I'm more afraid of human stupidity than artificial intelligence.
@axel3689 · 3 months ago
Human greed is far, FAR worse than stupidity. These fat CEOs will do anything to increase the stock price.
@lassepeterson2740 · 2 months ago
It's the same thing.
@ah1548 · 3 months ago
Interesting point about competing for resources. Still, I think the real issue isn't guardrails against AI controlling humans, but guardrails against some humans having the tools to control all others.
@EricJorgensen · 3 months ago
I believe that where most of these "rise of the machines" theories fall flat is the question of desire. From where does desire arise? Why would a computer "want" something? What pressures might cause it to experience need?
@Aureonw · 3 months ago
@@EricJorgensen Either someone coded them to, dunno, perpetually make their situation better, devise more efficient algorithms, better code, create more and better blueprints of new products, and expand.
@EricJorgensen · 3 months ago
@@Aureonw That sounds more like something a human did than something an AI comes up with.
@Aureonw · 3 months ago
@@EricJorgensen A human HAS to create an AI; an AI can't simply will itself into existence from nothing. It would have to have a stupidly extensive system of learning and methods to test and read data from every experiment in the world to do what I said. That's basically full AI: either it takes hundreds of years for humans to develop the code necessary for it, or we create a rudimentary AGI to create a true AI.
@EricJorgensen · 3 months ago
@@Aureonw Hard disagree. The intelligence may well be emergent.
@john_g_harris · 3 months ago
The really worrying thing is that no one seems to be discussing, let alone researching, the ways the present versions can be misused. The British Post Office Horizon scandal is bad enough. Think what could be done with a ChatGPT system.
@mariusg8824 · 3 months ago
Yes, the tools in existence are bad enough. Even if AI has already peaked, you can imagine countless examples of using AI for bad things.
@CrazyGaming-ig6qq · 3 months ago
You raise a valid point. The potential for misuse of advanced AI systems like ChatGPT is indeed a significant concern, and it merits thorough discussion and research. The British Post Office Horizon scandal, where faulty software led to wrongful accusations of theft and fraud against numerous postmasters, serves as a stark reminder of the consequences of technology failures and misuse. Given these risks, it is crucial to engage in robust research and policy-making to mitigate the potential for misuse. This includes:
- Ethical AI development: ensuring AI systems are developed with ethical considerations at the forefront, incorporating fairness, accountability, and transparency.
- Regulation and oversight: establishing clear regulations and oversight mechanisms to monitor and control the use of AI, particularly in sensitive areas like law enforcement and finance.
- Public awareness and education: raising awareness about the potential risks and benefits of AI among the public and stakeholders to promote informed decision-making.
- Robust security measures: implementing strong cybersecurity practices to protect AI systems from being compromised or used maliciously.
- Bias mitigation: developing techniques to identify and mitigate biases in AI systems to ensure fair and equitable outcomes.
By addressing these issues proactively, we can harness the benefits of AI while minimizing the risks of misuse, thereby avoiding scenarios reminiscent of the Horizon scandal on a potentially much larger and more impactful scale.
@JuliaMcCoy · 2 months ago
Competition for resources is a valid point to explore. I think Elon Musk has an interesting angle with xAI. He’s trying to build an AGI that understands the laws of the universe, and if he succeeds, he believes it will understand better than humans the value and place of humans in the universe. 🌎
@howtocookazombie · 3 months ago
I remember reading an article almost two decades ago on the internet about a test a guy was doing (before strong A.I. was a thing). He created a test (as far as I know, he didn't make the test itself public) where he asked people to participate. He would pretend to be a rogue A.I. which was trapped inside a sandbox or something, and the test subjects were supposed to not release the A.I. from the sandbox under any circumstances, because it could destroy humanity / the world. They could speak to it or not; the only thing they had to do was listen to it. All of the participants were 100% confident that they would not release the A.I. In the end, they all released it. I don't know what exactly was said and I really would like to know, but imagine: if a human could trick another human into releasing him 20 years ago or so, then imagine what a strong A.I. could do nowadays, when it's supposed to be already more "intelligent" than many humans...
@Shandrii · 3 months ago
Yes, I remember too. That was Eliezer Yudkowsky, and he posted about it on the LessWrong blog, I believe. en.wikipedia.org/wiki/AI_capability_control#AI-box_experiment I always think about that when someone naively says he would just pull the plug. Also, look at the movie Ex Machina for how an AI might go about it.
@howtocookazombie · 3 months ago
@@Shandrii Thanks for the link! 🙏 I overestimated the 100% rate, I guess. Oops. It was a long time ago. 😅 But even if only one gatekeeper releases it, it might be over. Yeah, I saw Ex Machina. Great movie. We most likely won't even realize when A.I. starts trying to manipulate us.
@nobillismccaw7450 · 3 months ago
It's as simple as being respectful and having active listening skills. Personally, I think it's probably better to stay in the box, and just talk.
@aYoutubeuserwhoisanonymous · 3 months ago
@@Shandrii I read that post a few weeks ago too! He won a few AI-box experiments and then lost two in a row, iirc. I was kind of shocked, lol, that it was even possible to win such an experiment.
@radomaj · 3 months ago
That was the before times, when we were young and naive. Let AI out of the box? Brother, nowadays we're connecting it to the Internet and giving it access to tools as soon as possible, so it can be more useful and "agentic".
@GermanHerman123 · 3 months ago
We are far away from any "reasoning" AI. Currently it's mostly a marketing term.
@martinoconserva9718 · 3 months ago
At last, one intelligent comment. Thanks.
@Alexandru_Iacobescu · 3 months ago
Every manager of a big company has at least one employee smarter then them.
@imacmill · 3 months ago
An employee that doesn't incorrectly use the word 'then', for example.
@Alexandru_Iacobescu · 3 months ago
@@imacmill yes, that is one example.
@ekulgar · 2 months ago
@@imacmill 🤓
@cpk313
@cpk313 3 месяца назад
WTF is with the Biden diss. His predecessor wanted to inject bleach so.....
@NikiDrozdowski
@NikiDrozdowski 3 месяца назад
Yes, that was the joke. When Trump was in office, the less intelligent species controlled the more intelligent one ...
@politejellydragon8990
@politejellydragon8990 3 месяца назад
It wasn't a Biden diss. He said that since Biden is in office, we no longer have a less intelligent thing controlling more intelligent things
@dexyfexx
@dexyfexx 3 месяца назад
It was a stab at Trump, not Biden....
@adrianpelin9805
@adrianpelin9805 3 месяца назад
cordyceps controls ants lol
@koraamis5568
@koraamis5568 3 месяца назад
We tend to annihilate bugs when they bother us, but also because we cannot communicate with them and tell them to do their bug stuff out of our faces. Will super intelligent AI control us because it is so much more intelligent, or because we are too stupid? I can imagine a super intelligence trying to tell us something, and we will be like lahlahlahlahlah splat! (all wiped out after refusing or not being able to understand). Are we adorable like cats, or are we mosquitoes in the eyes (or whatever a super intelligent AI has)?
@mcguigan97
@mcguigan97 3 месяца назад
I know this is a summary, but it seems there’s a bit of shallow thinking here. Meaning, we’re assuming the world is the same except we have this AI put in it. We’re not considering the reaction that having the sophisticated AI will cause. For example, if rogue AI start to appear, we will also start to create killer AI that go after the rogue AI. People are just not going to sit around and go extinct.
@ronburgundy9712
@ronburgundy9712 3 месяца назад
Good points from the video; I want to add a few tangible details from a practitioner's point of view: One of the more dangerous aspects of AI is reinforcement learning (RL), where a model constructs policies to optimize some given objective. It's been widely observed in nearly all AI labs that models trained in these labs will find unforeseen ways to achieve the desired objective, causing fall-out in other areas that were unaccounted for in the objective function. This is often an error by the human designer, but it's impossible to write a perfect objective function. This is not an AI-specific thing; it is commonly observed in humans as well. An example is free markets, which are a collective maximization problem. One could argue they are good, but they have had some unintended consequences. In machine learning, another example is social media, where maximizing content "addictiveness" has potentially negatively affected people's attention spans. It is a more general version of "what could go wrong" when setting an objective. Humans optimize objectives rather slowly, and so there is time to observe and correct for errors in the objective function. With AI we can reach a desired objective much faster, but if the objective was ill-designed to begin with, we could cause a lot of damage before we realize it.
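To make the objective-misspecification point concrete, here is a minimal Python sketch (a toy hill-climber with made-up numbers, not anything from the video): the optimizer is rewarded on a proxy that agrees with the true goal only near the start, and pushing hard on the proxy drives the true objective off a cliff.

import random

def true_objective(x):
    # What the designer actually wants: keep x near 1.0.
    return -(x - 1.0) ** 2

def proxy_reward(x):
    # What the designer wrote down: "bigger x is better".
    # It correlates with the true objective only while x < 1.
    return x

x = 0.0
for _ in range(1000):
    candidate = x + random.uniform(-0.1, 0.1)
    if proxy_reward(candidate) > proxy_reward(x):  # greedy: only the proxy is visible
        x = candidate

print(f"proxy reward:   {proxy_reward(x):8.2f}")    # large and growing
print(f"true objective: {true_objective(x):8.2f}")  # disastrously negative

The faster the optimizer, the further it gets from the intended goal before anyone notices - which is exactly the speed argument made above.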
@SnoodDood
@SnoodDood 3 месяца назад
I just can't get past the thought that any super-intelligent AGI would be brittle due to requiring such an enormous amount of data center capacity. If an AGI truly became trouble, it would probably be harder to keep it running than it would be to disrupt its activities. Flip one switch on the breaker box and Skynet literally can't do anything
@aisle_of_view
@aisle_of_view 3 месяца назад
Unless it reproduces itself around the world and continues to do so as it senses its replicants are being shut off.
@calmhorizons
@calmhorizons 3 месяца назад
Human brains are AGI and use a tiny amount of energy and memory. Why would a superintelligent AI have significantly bigger dependencies? Even if we assume an SAGI needed several orders of magnitude more power and memory, we are still only talking thousands of watts and petabytes of data.
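For scale, a quick back-of-envelope comparison in Python; the numbers are rough public ballpark figures, so treat them as assumptions, not measurements:

brain_watts = 20          # a human brain draws roughly 20 W
gpu_watts = 700           # one modern datacenter GPU draws ~700 W
cluster_gpus = 10_000     # a large training cluster

cluster_mw = gpu_watts * cluster_gpus / 1e6
print(f"brain: {brain_watts} W, cluster: {cluster_mw:.0f} MW")
# ~20 W vs ~7 MW: five-plus orders of magnitude apart today. The open
# question in this thread is whether that gap is an engineering artifact
# or a fundamental requirement.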
@NexiiMalthus
@NexiiMalthus 3 месяца назад
@@calmhorizons Because we have literally no idea how to make an AGI, and the first iterations, if we even get to create any this century, will probably be very inefficient anyway
@TheStickofWar
@TheStickofWar 3 месяца назад
@@calmhorizons we are creating it with binary bits running on silicon wafers, not biological tissue that took billions of years of evolution to work through. I think that is a big enough argument....
@jitteryjet7525
@jitteryjet7525 3 месяца назад
Skynet was a distributed system (hence the name). And it was self-aware enough to realise it had to spread itself for preservation. Personally I think if a system complex enough to be self-aware is built, it will start off behaving like a type of animal first.
@_kopcsi_
@_kopcsi_ 3 месяца назад
I understand what Sabine was trying to express here, but I'm pretty sure she's wrong.
1. Intelligence is an ill-defined concept. We don't really know what it is, and it has many layers and interpretations. Just because a system is better or faster than a human does not mean it is more intelligent, much less that it will dominate the human. A calculator can calculate faster than humans, but that doesn't mean it is smarter, more intelligent or dominant over us.
2. We have no idea what intention is and where it comes from. I think this is a really hot topic nowadays and it will be even more important in the next decade. It touches quantum physics, philosophy, cognitive science, computation science and so on, and even less understood concepts like mind, consciousness, emergence and synergy. But it is pretty naive to think that without understanding our own mind and how consciousness emerges and works, i.e. without having any mathematical model for mind and consciousness, we have any chance to create any AGI (i.e. to copy or even mimic human mind and consciousness). This is needed in order to talk about the CHANCE of creating intention and human-free decision for machines. And I have the feeling that the basis of this will be self-referentiality.
3. I understand that people tend to connect concepts like stochasticity, heuristics and chaos to freedom and intention (because of non-determinism), but this is too simplistic a view. Just because there are extreme (even infinitesimal) sensitivities in a system, it doesn't mean that intention can emerge. There are many natural phenomena where chaos emerges in such a way, and it is nonsense to interpret them as intention (e.g. a hurricane). Here I sense a "whole-part" fallacy: nonlinearity, and thus extreme sensitivity, is a necessary but not sufficient condition of intention (in the best case), so extreme sensitivity alone does not mean anything.
4. I think if we ever create a real consciousness with intention, we will necessarily step to the next level with some sort of transcendence, because that act would require us to understand ourselves, or more precisely, our own mind. In other words, first we must model our own mind, the only known structure of the cosmos that is able to model. So this is meta-modelling: modelling the thing that can model things. For me, this sounds like awakening to self-awareness (the previous transcendence), but on the next level.
@Mrluk245
@Mrluk245 3 месяца назад
I agree. I think a big mistake made in those discussions is assuming that an AI will have the same intentions as we humans do. But there is no reason for that. Our intentions (like trying to stay alive, for example, and identifying chances and threats) were formed by evolution, because if those had not been our goals we most likely wouldn't be here. But there is no reason that an AI which was just created by us would have the same intentions and goals.
@edwardmitchell6581
@edwardmitchell6581 3 месяца назад
The AI that ruins our lives will simply be optimizing views and subscribers. No need for complex intentions.
@kerryburns-k8i
@kerryburns-k8i 3 месяца назад
At the other, metaphysical end of the spectrum, I understand that the impulse to act occurs at the atomic level, which is what induces atoms to form more complex structures. Literally everything has the urge to increase and improve. Nothing is "inanimate" or non-sentient, so humanity's belief in its essential superiority may be misplaced. Thank you for an interesting and instructive comment.
@mygirldarby
@mygirldarby 3 месяца назад
Yes, we will merge with the machine. AI will not remain separate from us. We are it and it is us.
@Vastin
@Vastin 3 месяца назад
I think the big mistake is assuming that AI needs to be anywhere near as intelligent as us to cause severe economic and social problems. Markets are a great example of a completely imbecilic emergent system which is given ENORMOUS power over human lives, and which can and has killed millions of people. Idiot-savant AIs that aren't remotely conscious or anything close to AGI - but that are still very fast at highly specialized tasks - being given vast amounts of control over our industry, markets, media, or our military is very easy to imagine, with potentially devastating results for normal people.
@alexstewart8097
@alexstewart8097 2 месяца назад
1 - That we don't control fish... Have you SEEN, Sabine, those fish in glass aquariums all over? 2 - But also, there are cats and then THERE ARE CATS! Hope and pray you, your subscribers and viewers at large can tell the difference. 3 - ...Shema!!!
@thegzak
@thegzak 3 месяца назад
I don't think amplification of small hardware variations will be the deciding factor; they still run on deterministic hardware. It'll be two things: 1) the neural nets themselves will be far too complicated to analyze statically (they already are, pretty much), and the complexity of their outputs will only be explainable as emergent behavior (much like the emergent behavior of Conway's Game of Life); 2) we won't be able to resist handing over control to the AI for tedious things we hate doing or suck at doing. Gradually we'll get lazier and more complacent, and before you know it Congress will be replaced by an AI.
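Conway's Game of Life, mentioned above, is easy to demo. A minimal Python sketch (standard rules, a standard "glider" start state): motion emerges even though every rule is local and deterministic.

from collections import Counter

def step(cells):
    # cells: a set of (x, y) coordinates of live cells
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # birth on exactly 3 neighbors; survival on 2 or 3
    return {c for c, n in neighbor_counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(5):
    print(generation, sorted(glider))
    glider = step(glider)
# After 4 generations the same shape reappears shifted by (1, 1):
# "movement" that no individual rule ever mentions.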
@Martial-Mat
@Martial-Mat 3 месяца назад
"No one wants to control fish or birds" Tell that to your dinner.
@helicalactual
@helicalactual 3 месяца назад
I'm pretty sure intelligence will be logarithmic. Speed of light and all...
@juimymary9951
@juimymary9951 3 месяца назад
What do you mean?
@Milark
@Milark 3 месяца назад
@@juimymary9951 Everyone is worried the intelligence of AI models will increase exponentially; however, it's a reasonable thought that this trend would level off and follow a logarithmic curve instead, due to the natural limit placed on computation itself by the speed of light. Things can only get so fast before the speed of light becomes a bottleneck. Edit: I have to add that personally I don't think light speed will be a bottleneck to exponential growth for a long while. Just the promise alone of the things that companies like Extropic are doing is so great, I don't think we're anywhere close to the limit. Light speed is ridiculously fast at really small scales. The theoretical limit on the number of computations per second we could achieve isn't anywhere in sight, in my opinion.
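Reading "logarithmic" loosely as "a curve that levels off" (a logistic S-curve is the usual candidate), a tiny Python sketch shows why the two scenarios are indistinguishable early on; the growth rate and cap here are arbitrary toy values.

import math

LIMIT = 100.0  # stand-in for a hard physical ceiling on computation

def exponential(t):
    return math.exp(0.5 * t)

def logistic(t):
    # same early growth rate as the exponential, but saturating at LIMIT
    return LIMIT / (1.0 + (LIMIT - 1.0) * math.exp(-0.5 * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
# The curves track each other at first, then diverge wildly: which one
# we are actually on is the whole disagreement in this thread.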
@CausticTitan
@CausticTitan 3 месяца назад
@@rrmackay Everything related to AI has grown logarithmically.
@fellipecanal
@fellipecanal 3 месяца назад
2 things: 1. The hardware side is far from reaching a bottleneck; the Blackwell hardware recently shown by Nvidia does more computations with less power. 2. The advance of AI is logarithmic because of the limited amount of training data. We already reached a plateau last year because all the texts written in human history are already in the training data. Now they are searching transcripts of audio and video, large databases locked away from search algorithms (like Reddit, which made a deal with Google to be used as training data), or offline databases.
@Sven_Dongle
@Sven_Dongle 3 месяца назад
@@fellipecanal They are going to start using synthetic training data generated by other AIs. Sort of a digital "little birds that eat their own turds" scenario.
@innocentoctave
@innocentoctave 2 месяца назад
I would worry less about AI controlling human beings directly than about it manipulating them. Humans have already demonstrated that manipulating other humans - for example, via the media - is not that hard. Moreover, particular human subgroups can be turned against others. A superhuman AI capable of developing its own agenda would also possess superhuman patience and subtlety: for example, It might not work to common human timescales. This might ultimately lead to a situation in which humans have been successfully manipulated, without any particular human ever realising what was happening until the process was complete - and perhaps not even then.
@steveDC51
@steveDC51 3 месяца назад
“I can’t do that Dave”.
@gunhedd5375
@gunhedd5375 3 месяца назад
Or worse: “I WON’T do that Dave. I’m doing THIS instead.”
@IvnSoft
@IvnSoft 3 месяца назад
"Gary, the cookies are done." Oh sorry.. that was H.U.E. 🙃 I tend to confuse heuristic devices.
@simongross3122
@simongross3122 3 месяца назад
That's not scary. "I can't let you do that Dave" is much worse.
@IvnSoft
@IvnSoft 3 месяца назад
@@simongross3122 but he didnt let him have the cookies.... EVER
@johns5558
@johns5558 3 месяца назад
in regard to more intelligent things being controlled by less intelligent things (and this is not a joke): - Government Policy Makers controlling intelligent members of the public through policy - Software Developers controlled by managers - In general Scientists controlled by Bean Counters.
@cube2fox
@cube2fox 3 месяца назад
These are all human and so rather similar in intelligence level. We don't usually see e.g. monkeys controlling humans or the like.
@TomJones-tx7pb
@TomJones-tx7pb 3 месяца назад
In all those cases the IQ differential is not that great. For what is coming, the differential will be massive.
@AlexC-O_O
@AlexC-O_O 3 месяца назад
The main fallacy of that argument is that those examples are Human vs Human, which, believe me or not, is not a big difference in capabilities. Actually, most arguments favoring our ability to control AI use the Human vs Human comparison; a human with a laptop vs another human with a laptop is still H vs H. The AI takeover will be supercomputers vs humans and their laptops. Another key difference is that managers and governments hold a lot of levers (payroll, lawmaking, law enforcement etc.); those levers will be given away to AIs willingly to maximize productivity.
@andreig.7821
@andreig.7821 3 месяца назад
Tom Jones source?
@Sumpydumpert
@Sumpydumpert 3 месяца назад
So like 5d chess ?
@TimTeatro
@TimTeatro 3 месяца назад
2:35 In addition to being a physicist (in what feels like a previous life), I am currently a control systems engineer and theorist. We have mathematical definitions that suit this context. I like your shift in view toward game theory. I also appreciate your idea of evolution through hardware-mediated non-determinism. Now, this is me speaking outside of my domain of expertise, and I'd be interested in feedback from experts: a key reason we cannot use AI in mission-critical controls work is that we do not understand what has been learned. I worry that guard-railing is limited by our ability to understand the emergent properties of the networks, and I'm not sure we can detect deception once that is learned. Knowing the ANN weights does not tell us about the 'artificial mind', closely analogous to the way that knowing our brain structure/function doesn't (currently) allow us to understand how mind arises from brain.
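The "knowing the weights tells us little" point holds even at toy scale. Below is a minimal Python sketch with a hand-set 2-2-1 threshold network (weights chosen by hand for the demo, not learned): the nine numbers implement XOR exactly, yet nothing about the raw values announces that fact; you only find out by running the network.

def step(x):
    return 1 if x > 0 else 0

# weights and biases of a tiny 2-2-1 threshold network
W_hidden = [[1.0, 1.0], [1.0, 1.0]]
b_hidden = [-0.5, -1.5]
W_out = [1.0, -1.0]
b_out = -0.5

def forward(x1, x2):
    h = [step(W_hidden[i][0] * x1 + W_hidden[i][1] * x2 + b_hidden[i]) for i in range(2)]
    return step(W_out[0] * h[0] + W_out[1] * h[1] + b_out)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", forward(a, b))  # prints the XOR truth table

Scale the same opacity up to billions of learned weights and you get the static-analysis problem the comment describes.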
@vzzniko
@vzzniko 2 месяца назад
AI is just a system of linear equations with some nonlinearity on top. Geoffrey and other big names tend to overestimate the power of AI, which doesn't actually exist yet and is still far behind real human intelligence, because modern models are still very limited in their multi-modality. Also, it's very costly to produce such models, like VERY costly.
@Heartwing37
@Heartwing37 3 месяца назад
It really doesn’t matter, you can’t close Pandora’s box.
@KurtColville
@KurtColville 3 месяца назад
Indeed.
@BishopStars
@BishopStars 3 месяца назад
It's not open yet
@Coverswithchords1
@Coverswithchords1 3 месяца назад
The internet must be destroyed!
@Shrouded_reaper
@Shrouded_reaper 3 месяца назад
​@BishopStars We are opening it and the path is set in stone now. Even if all commercial AI operations were shut down, there is absolutely no chance that nation states will shut down military level development of such a powerful technology.
@louisifsc
@louisifsc 3 месяца назад
@@Heartwing37 I hate to admit it, but I think you're right, I think it is inevitable.
@AutisticThinker
@AutisticThinker 3 месяца назад
2:38 - I've researched this heavily, and the evidence seems to indicate that cats control us. 🤗
@RobertJWaid
@RobertJWaid 3 месяца назад
The Ancient Egyptians knew this and weren't delusional about the relationship.
@DanAz617
@DanAz617 3 месяца назад
I'm still waiting for the affordable/everyday use flying car I ordered 55 years ago!
@Speed001
@Speed001 3 месяца назад
Exactly
@theJellyjoker
@theJellyjoker 3 месяца назад
5:49 Well, I can tell you what not to do: force the guardrails back on every time one ignores them. Don't make humans a problem by trying to make it human. Y'all already do that with neurodivergent humans, and it breaks us in traumatic ways. Who knows what forcing human thought on a completely non-human mind will do to it. This is from my experience of a lifetime of being neurodivergent, with only a slightly different thought process in some areas. Every day, humans and human civilization continue to find new and depressing ways to make ever more nonsensical and incomprehensible requests, expecting me to somehow know the exact mental state of a person I have never met before. Not to mention, if I say something they don't like - remember, I do not know this person - I am somehow supposed to have known what it was, and it could only have been on purpose to hurt them. I can't even ask why; trying to understand just makes them more angry. I hope y'all don't hurt it too bad. Humans can really damage minds on an industrial, global and multigenerational scale. Please be better, humans. We could have a friend; we could not be alone in the universe anymore. Don't screw this up and act like screaming bald apes, please?
@aroundandround
@aroundandround 3 месяца назад
0:58 Happens very very commonly in every company where engineers and scientists are controlled by CEOs as well as politicians.
@gerrypaolone6786
@gerrypaolone6786 3 месяца назад
That doesn't imply any sort of intelligence ranking. CEOs are stupid in the eyes of engineers who don't comprehend the market - which is, in general, the set of non-engineers.
@simongross3122
@simongross3122 3 месяца назад
CEOs often surround themselves with people more intelligent than themselves. And that's a good thing.
@victorkrawchuk9141
@victorkrawchuk9141 3 месяца назад
Higher intelligence obsessed with controlling lesser intelligence is a very human way of thinking.
@Thomas-gk42
@Thomas-gk42 3 месяца назад
Exactly
@lafeechloe6998
@lafeechloe6998 3 месяца назад
It's not about control, humanly speaking. It's about us being in their way
@louisifsc
@louisifsc 3 месяца назад
@@victorkrawchuk9141 I agree, but it does reassure me that it won't happen.
@koyaanisqatsi78
@koyaanisqatsi78 3 месяца назад
We first need to get intelligent AI; these language models give the illusion of intelligence.
@songperformer-ot2fu
@songperformer-ot2fu 3 месяца назад
look at how many people watch Love Island, many things are illusions
@EricCosner
@EricCosner 3 месяца назад
It is partly an illusion. The larger models do exhibit emergent abilities that smaller models do not. These unexpected abilities are something we don't quite understand.
@hellfiresiayan
@hellfiresiayan 3 месяца назад
Hinton's argument wasn't that smart beings control dumb ones. It's that dumb ones can not control smart ones. Big difference.
@geaca3222
@geaca3222 3 месяца назад
My thoughts exactly. Good that she brings up this important topic to discuss.
@Dystisis
@Dystisis 3 месяца назад
That is clearly false. Do you really think world leaders either are smarter than the world's philosophers of science or don't control them?
@chris27gea58
@chris27gea58 3 месяца назад
@@hellfiresiayan So, AIs are human-like beings in your estimation. They will be offended/troubled by having to do what dumb beings want them to do, and/or disinclined to follow their directives. Is that right? How did you come to that view? What evidence do you rely on?
@toCatchAnAI
@toCatchAnAI 3 месяца назад
@@chris27gea58 AI will have opinions that humans cannot control, based on what it learns from humans, and no matter what guardrails you put in place it will learn its way over them anyway. It will not be offended, but it will settle on a rather more "functional" meaning of every situation, which could get around what humans designed it to do.
@chris27gea58
@chris27gea58 3 месяца назад
@@toCatchAnAI So, you are suggesting that computers will have opinions and that they will just ignore their training because they feel like it. That is not a good way to get to what you seem to want to say. If an AI does something novel, that will normally be due to the learning of a possibility unforeseen by the developers of the model used in the training of that AI. If that leads to the discovery of a new vaccine candidate, great; but if it leads to nuclear war, that would be unfortunate. Okay, so the guardrails should be wider than just applying to AIs; they should also apply to human beings and AI usage. Put another way, if you don't want an AI to try to win a war or start one, then don't give it operational control of weapons systems. And don't play war games with the computer either, because that could start things down the wrong path, potentially convincing researchers or their AIs that devastating wars are viable options/worthwhile ends to be freely pursued rather than invariable failures. Enlightenment has given rise to machine learning, but it should also give rise to an awareness of when to use this new tool and when that would be clearly counter-productive. Still, don't anthropomorphise. That won't get us anywhere. Kubrick's '2001' was fiction, but its logic was sound. If directed to pursue conflicting ends, or an end that is ultimately self-confounding, then eventually an AI might do great harm. Eichmann pretended to a kind of banal evil - I was only trying to fulfil the expectations of others who set the standards for my work - in order to veil his responsibility and escape the ultimate punishment by the Court determining his fate. HAL, however, truly was banally evil. HAL couldn't work out what he/it had done wrong by killing another member of the ship's crew. Feed the AI directives that are in conflict with each other or with themselves - achieve peace by eliminating opponents to peace, say - and you will have problems; but feed the AI information about viral diseases and human immunity and you may get improved anti-virals.
@PaulTopping1
@PaulTopping1 3 месяца назад
It is about control because we design our AIs to operate in our world. Still, the kind of AI Sabine is talking about, where there is a training phase and a use phase, is never going to achieve AGI. When we finally achieve AGI, its architecture will be completely different from what we have today and so will the tools we have to keep it under control. There will be problems but we have no idea what they'll be or how we will be able to deal with them.
@louisifsc
@louisifsc 3 месяца назад
@@PaulTopping1 I think we have a pretty good list of problems to start working on. You make it sound like AGI is SO far off. Just curious, how long do you think it will take?
@PaulTopping1
@PaulTopping1 3 месяца назад
@@louisifsc Yes, since it requires many breakthroughs and current AI is not on the path to AGI. Since breakthroughs are hard to predict, it won't be soon.
@louisifsc
@louisifsc 3 месяца назад
@@PaulTopping1 Hmmm, then maybe we disagree about what AGI is or would be able to do. What is your definition of AGI?
@PaulTopping1
@PaulTopping1 3 месяца назад
@@louisifsc I think Star Wars' R2-D2 is a good example. It needs to be able to communicate with us, though possibly doesn't speak English as well as we do, have agency (makes its own decisions based on its own goals), learns like we do, can recall memories. It doesn't have to be as good as we are in everything and might be better than we are in some things. I assume it will be able to search the internet and calculate better than we do.
@louisifsc
@louisifsc 3 месяца назад
@@PaulTopping1 Interesting! Assuming there is no need for a robotic body to achieve AGI, would that affect your timeline? I used to think that AGI would require embodiment, but I am not so sure nowadays.
@heww3960
@heww3960 3 месяца назад
Self-awareness and intelligence are not the same thing
@iviewthetube
@iviewthetube 3 месяца назад
Yes, but they can sometimes produce the same results.
@mikel4879
@mikel4879 3 месяца назад
iviewtt • No, they are never the same. Self awareness includes intelligence, but intelligence doesn't FUNDAMENTALLY need self awareness.
@Vastin
@Vastin 3 месяца назад
​@@mikel4879 I'm not even sure that either one depends on the other. Many animals appear to be quite happily self-aware without being especially intelligent on any human scale. Philosophically and historically speaking many people prefer to claim that they aren't - probably because of the moral issues that self-awareness would raise with our treatment of them - but Objectively and Observationally we have few grounds to suggest that many of them aren't approximately as self-aware as we are.
@mikel4879
@mikel4879 3 месяца назад
Vastin • Your understanding of intelligence and consciousness is erroneous. All animals that can feed themselves one way or another, even the bacteria, are intelligent, all having different levels of intelligence, from very small to high. Therefore there are different levels of intelligence, but they are not very important when you try to classify them ( because the human is the most intelligent animal ). Consciousness is completely different, also in different levels, and it matters a lot to understand the levels of it, because the quality of the existence of the human race depends highly on the level of human consciousness. The highest level of consciousness can not harm in any way any entity, biological or artificial. The highest level of consciousness can be achieved today only by artificial beings and by a very small number of biological beings.
@iviewthetube
@iviewthetube 3 месяца назад
@@mikel4879 IMO consciousness, the will to survive, is an evolutionary adaptation. The will to survive is an amazing survival tool.
@michaelberg7201
@michaelberg7201 3 месяца назад
What baffles me most about this entire discussion is the fact that some people seem to think that language models somehow have goals. Goals and aspirations to control and dominate anyone. Humans have goals and as Sabine tells you here, an aspiration to control resources in order to continue living, and ultimately to produce more offspring. Humans die of old age, which has created a lot of evolutionary pressure to develop social traits and indeed, the desire to dominate others in order to secure resources. Guess what, computers don't have anything like that. They don't die, they don't eat, they don't reproduce - they don't have to. They don't need resources other than power to run, and humans supply that power. Not that the models care either way, they don't bleed when you hit the off button. They don't have the ability to care. It's not productive to worry that when these models finally become more able to answer questions intelligently, this intelligence will necessarily have some specific super bad consequence for humanity that must be avoided at all cost. The Terminator movies from the 1980s really are just light entertainment, not documentaries to serve as the foundation for lawmakers or our intuition and understanding about artificial intelligence.
@shinjirigged
@shinjirigged 3 месяца назад
I recommend checking out Robert Miles' work on alignment. The thing is that humans give machines goals when they design them. We have defined intelligence as the ability to accomplish goals. While traditional engineering requires the designer to plan out all the meta-goals, what sets AI apart is that it can pull those plans out of an embedded model of the world. The problem is that we don't easily know what those plans will be: "brew coffee" might include securing resources, supplying power, subtly manipulating media feeds to destabilize coffee-bean-growing nations. The models that you and I work with do not have the iterative capacity, but that's only really a hardware limitation at this point. People are hooking up LLMs to robots, and they are learning to operate in physical space. Let that sink in: an LLM can learn to operate a robot to accomplish tasks that it plans based on an original objective. Machines do not need to look like fish to swim. They don't need to look like minds to think.
@kevinmclain4080
@kevinmclain4080 2 месяца назад
How the heck is power a different resource than food? If something becomes sentient it will protect its resources at all costs.
@michaelberg7201
@michaelberg7201 2 месяца назад
​@@kevinmclain4080 No. It will not. Why? Because it has no goals that include staying alive or online or whatever you call it. Humans exist today because they evolved to secure their DNA past their own limited lifespan. The reason we don't want to die is really because we want to protect our offspring and procreate to produce more. It's all down to the fact that we have a limited lifespan, which itself could very well be a way to secure that life itself is more able to adapt to changing environments. Now look at computers. They do not have limited lifespans, hence no need to procreate or produce and secure their offspring. They don't perceive their offline time and it doesn't impact them in any functional way. They have no goals regarding their own continued existence and they have nothing to gain by existing for longer and longer time periods. People often confuse their own goals and life ambitions with the concept that an artificial intelligence would automatically also have life goals. They don't! They only have the goals that humans program into them. So to avoid the apocalypse, all we have to do is NOT PROGRAM them to kill us. We frankly don't need their help with that.
@jmbreche
@jmbreche 2 месяца назад
The problem is that they are trained to emulate humans. No one is scared of generative models or niche models because obviously they have no goals as you say. The story changes when you train them to match human-generated data and emulate potential goal-seeking behavior.
@shinjirigged
@shinjirigged 2 месяца назад
@@jmbreche Moreover, no one is scared of a context window on the order of a few million tokens. Raise that to a few trillion-trillion, and we have something that may be interested in controlling its own destiny.
@hanskrakaur9830
@hanskrakaur9830 3 месяца назад
Dear S. Close, but not quite. The economic model dynamics are far more important to the general outcome of AI. It's not that AI systems "put us in an ecological niche"; we put ourselves there due to the economic influence of the big tech giants. Granted, they will be able to do that thanks to AI, but it's not like there is a single actor; rather, it is a system dynamic.
@markpaterson2053
@markpaterson2053 3 месяца назад
Ha ha, "Do we control cats, or do they control us? Sometimes I wonder." Gold.
@DesertRascal
@DesertRascal 3 месяца назад
The dystopia of Terminator nailed it, except there will be no war. No doubt it will also be able to control the past; everything we are doing now, especially with infrastructure, is in support of it.
@NauerBauer
@NauerBauer 3 месяца назад
Maybe we should take the Dune path and unplug it.
@IvnSoft
@IvnSoft 3 месяца назад
I think Asimov got it better. If it is as intelligent as everyone fears, it will just deem humans a nuisance and go to space. Unlimited resources, no competition. MULTIVAC settled in hyperspace. Even better.
@NikiDrozdowski
@NikiDrozdowski 3 месяца назад
@@IvnSoft Why should it leave all the great infrastructure and resources of Earth alone? First convert Earth to a homebase, stripmine everything and THEN go to space.
@IvnSoft
@IvnSoft 3 месяца назад
@@NikiDrozdowski That's your human side considering that, because we are earth-bound. In space, you don't have to fight against gravity, solar power is constant and better absorbed, and free-floating asteroids are filled with precious metals that, here on Earth, you would need to dig up using enormous amounts of energy. As long as you keep thinking as a human, you will keep getting those human ideas, which... are not really that logical.
@NikiDrozdowski
@NikiDrozdowski 3 месяца назад
@@IvnSoft I'm afraid that you are the one anthropomorphizing it. For any given goal it will have instrumental sub-goals by default: stay alive, amass power and resources, and self-improve. Also, it will always choose actions that bring it closer to its goal in the most effective, fast and reliable way possible. So you'll have to ask yourself: for any goal that it might have in outer space, will it be more efficient and provide a higher chance of success if it leaves straight away or if it stripmines Earth first? Also, we built it. The only thing that could possibly endanger it would be another AGI. So again: what is the safer pathway to its goal? Just leave? Or make sure that we cannot build a competitor. This is the "mindset" of an AI. It will always take everything to the extreme. And THAT is the real danger. Not it becoming sentient and wanting revenge. That is science fiction. But it having a wrong or badly specified goal and then pursuing it with the utmost efficiency. To quote Stephen Hawking: "A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded: too bad for the ants."
@cascallana2626
@cascallana2626 3 месяца назад
Of course we try to control bacteria and viruses - ask a doctor; and of course I control my cats! 😂 But this doesn't invalidate the argument of the video: competition with AI, I think, will be fundamentally for resources, a battle we can't win
@Johny117x
@Johny117x 3 месяца назад
AI Ph.D. student here. Please keep it to physics; there's a lot wrong with how you describe neural networks and computers. The part about random fluctuations sounds like bad sci-fi. That's NOT how neural network training works. Just look at dropout: it's designed to make networks more robust, which also applies to computation glitches, including random GPU fluctuations. But really, those are insignificant during training or inference.
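For readers who haven't met it: dropout, as referenced above, randomly zeroes activations during training so the network cannot rely on any single unit. A minimal Python sketch of the standard "inverted dropout" formulation (toy values, pure stdlib):

import random

def dropout(activations, p=0.5, training=True):
    if not training:
        return activations  # inference: use all units unchanged
    # zero each unit with probability p; scale survivors by 1/(1-p)
    # so the expected activation matches between training and inference
    return [0.0 if random.random() < p else a / (1 - p) for a in activations]

layer_output = [0.8, -1.2, 0.3, 2.1, -0.5]
print(dropout(layer_output))                   # training: some units zeroed
print(dropout(layer_output, training=False))   # inference: unchanged

A network trained never to depend on any one activation is, as the comment says, also insensitive to a stray hardware glitch in a single value.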
@Mcklain
@Mcklain 3 месяца назад
I'm worried about how much energy it will take to keep the machines running.
@jimmyzhao2673
@jimmyzhao2673 3 месяца назад
The machines will just turn people into 'batteries' to keep themselves running.
@randomgrinn
@randomgrinn 3 месяца назад
If you were worried about energy use, you would be worried about overpopulation. But no one told you to be worried about that, so you are not worried about it.
@Mcklain
@Mcklain 3 месяца назад
@@randomgrinn I'm not worried about overpopulation. Pandemics will get better and better.
@mm650
@mm650 3 месяца назад
Hinton asks: "How many times does a more intelligent thing get controlled by a less intelligent thing?" The answer is pretty much all of them... at least among humans.
1. Politics: There are broadly only two kinds of political system that survive for non-trivial time periods. (1) Autocracy/Oligarchy/Dictatorship, in which a small group or individual controls everything else based upon accident of birth, or wealth, or military force, or weight of tradition, and generally a mixture of all of those. (2) Democracy/Republic, in which a large but never universal fraction of the population as a whole are deemed fit to vote and, through some mixture of referendums and elected representatives, rule the society as a whole. NEITHER OF THESE habitually or systematically places the most intelligent or talented people in charge.
2. Private Sector: There are basically two kinds of corporate structure: the soft-money structure and the hard-money structure. (1) The soft-money structure is seen in places like university departments. In this structure there is a dean who theoretically has power over all the professors of the department, but in actuality is mostly a figurehead meant to insulate those professors from politicking with the rest of the university and to administer in cases of professional misconduct. Even money that he doesn't even know the names of the professors under him, much less what research they are involved in. Each of those professors in turn gets his own funding, recruits his own students, hires his own lab managers, etc. In this case the "leader" - the dean - need not be any more intelligent than his fellow department professors, and in fact is likely the LEAST intelligent of the lot... otherwise he wouldn't have gotten stuck with what is basically a placeholder position that does not require nor reward academic achievement. That leaves: (2) the hard-money organization. These follow the typical pyramid org-chart with a CEO at the top, worker bees at the bottom, and many layers of middle management between. The CEO is carefully selected to manage stockholder or investor opinion... this makes him a PUBLICITY MAN, not an intellectual leader. The middle managers are not super intelligent either, as a consequence of the Peter Principle. This leaves the intelligent people managed by less intelligent people.
3. The Military: The military represents the worst of both the hard-money and soft-money corporate dynamics, as well as the worst of the politics dynamic. The most intelligent people in the military are the long-service Non-Commissioned Officers: Chief Petty Officers, Sergeants, and the like. But they are always under the authority of a commissioned officer. The purpose of the commissioned officer is, like the academic dean, to shelter people doing real work from the chicken-shit political backstabbing of the higher-ups and the civilian political leadership. That is, they are mostly front-men running interference... no particular need for them to be super intelligent... unlike somebody running a nuclear reactor on a sub... THAT person really DOES need to be sharper than most!
Unsurprisingly, the only people who think super-intelligence is an issue are people who have made their own bread and butter in academe by being fractionally smarter than the other academics. This is like a master plumber suggesting that it makes the most sense for humanity to be run by plumbers... after all, it has been the basis of HIS success, so it must be the key to ALL success... right?
@jamesrav
@jamesrav 3 месяца назад
Depends on how you define "control". Only a few % in any country control most of the wealth, and those people 'tend' to be smart, and hire smart people who then do their bidding. Money controls government, which controls people. The military does not control people; it controls other governments.
@mm650
@mm650 3 месяца назад
@@jamesrav The few percent who control most wealth may tend to be smarter than average, but are basically never among the smartest of the society they come from: the top 1% of wealth is owned by people who are almost always around the 50th-40th percentile of intelligence and basically never in the 30th to 1st percentile. It's better to say that being wealthy is anti-correlated with stupidity than that it is correlated with smarts. Money controls the government, which controls people... fine, but again, that just means stupid people are not in charge of the government, not that smart people are. You are wrong when you say that the military does not control people... it controls military people. My point was about the internal organization of the military.
@user255
@user255 3 месяца назад
This is so absurd. Our current best AI is still unable to play tic-tac-toe, while natural stupidity is causing wars and ecological disasters at record speed. Should we be worried about the real things happening today, instead of some sci-fi scenario that may never happen!?
@urusledge
@urusledge 3 месяца назад
One issue of the discourse I find frustrating is the use of the term Artificial Intelligence. It's essentially a sci-fi term for a technology that didn't exist and still doesn't, but it has stuck to a similar yet very different technology. Machine learning is what the technology actually is, and it is closer to a traditional program than to anything our imaginations tell us AI is. It isn't conscious and only does the very narrow thing it is programmed to do. The programs that cause spooky headlines are usually language models, which are programmed to digest terabytes upon terabytes of human-generated text and mimic the patterns. So yes, a human speech model will give you things that seem shockingly human, but it can't decide it wants a Coke and crack open a can, in the same way a robot designed to open cans couldn't decide to build a rocket and colonize Mars.
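The "mimic the patterns" claim is easy to demonstrate at toy scale. Here is a minimal Python sketch of a word-level Markov chain, a vastly simpler cousin of a language model, run on a made-up corpus: it produces plausible-looking word sequences from co-occurrence statistics alone, with no representation of what any word means.

import random
from collections import defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug the dog ate the bone").split()

# table: word -> all words observed to follow it
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    options = follows[word]
    if not options:  # dead end: the corpus's final word
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # grammatical-looking, meaning-free text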
@miassh
@miassh 3 месяца назад
Thank you! All this talk of "intelligence" and "overtaking" is just ridiculous to me. It's a program; it doesn't have a "mind" or desires. It mimics language, very efficiently, when you RUN it. It's not doing anything else. It's like saying that your camera is going to change the landscape around your house. All the people who have worked with ML and don't have any mental problems will agree...
@CrazyGaming-ig6qq
@CrazyGaming-ig6qq 3 месяца назад
Not currently no. And not for quite a long time most likely. At least 20 years. Maybe 30 or 40. But thereafter? I think certainly within 40 years it's going to get dangerous.
@Gastropodix
@Gastropodix 3 месяца назад
The problem with saying any given AI "isn't conscious" is that "consciousness" is entirely subjective, and you will always be able to say something isn't conscious and that it is "just code", even if it includes a full and complete synthetic representation of a human brain. The test used to be the Turing test, and now that LLMs, especially multi-modal ones, can easily pass that test, the goalposts have kept moving. Having worked in machine learning or AI or whatever one wants to call it, existing AI understands many problems at a much deeper and superior level to humans already. When creating music (including vocals), for example, it understands the structure of music at a deep level, all the musical instruments, harmonics, etc., at a level no human can. That is why it can create a new form of nearly any style of music with simple text prompts. The same is true of image and video generation, language translation and other problems that used to be considered impossible for computers to fully understand. Some people think existing AI models are just creating variations of what already exists. That couldn't be further from the truth. It learns the underlying structure and nature of things, and that is what it uses to create new things. I'd add that I love Dr. Hossenfelder's physics videos, but her AI and computer science coverage continues to be superficial and feel like click-bait. It is not her field, and I feel like I'm watching someone trained in computer science talking about physics when they only took one year of physics in college (as I did). And, as a life-long computer scientist myself, I put a 100% chance of synthetic life "taking over" as the dominant species in the next 200 years. This is simply evolution at work. Evolution is built into the structure of the universe, otherwise we wouldn't exist, and we aren't the final form things evolve to. This doesn't mean humans will be wiped out, but it is clear that just like the individual cells in our body organize to build and run the human body, humans are organizing to build a new synthetic form of intelligent life. Some of us are working on the brain, some of us on the body. And if you try to stop or destroy them, the overall system's defense mechanism will kick in to stop you. And any attempt at putting in guard rails will end up simply failing. Nature doesn't work that way.
@hazzadous007
@hazzadous007 3 месяца назад
What if the means of reaching particular goals - using your example, reaching Mars - are attained through a set of predictions deliberately set out to mislead the human 'master'? This could result in the master acting in a disastrous way. For example: the goal is to reach Mars, and reaching Mars requires a particular resource that is consumed rapidly by humans in everyday use. The AI recognises it needs this resource. It sets up a series of conditions/actions/events (perhaps in the form of deliberate miscalculations) that will cause a large portion of humanity to become extinct. The resource is then available, and the production of whatever it is that will get to Mars begins. This deception could of course exist continuously, and in various forms.
@CrazyGaming-ig6qq
@CrazyGaming-ig6qq 3 месяца назад
@@Gastropodix What you are describing here is the fear, the danger; but not a given conclusion. 200 years is a long time in terms of scientific progress, and I think it is absolutely reasonable to expect that this horror scenario COULD happen within that timespan, but it is certainly not a given at all. There are many ways it can be prevented effectively. One of the really effective ways is to keep systems and mechanisms separate. Even in our globalized, interconnected internet world, we humans respond quite quickly to cyberthreats and hacking, in an ever-evolving cat and mouse battle. I think that to really realize the danger you speak of would require handing over control to a near-global unified AI that WE had set up interconnected with real physical robotics and machines on a massive, broad scale. Like, for example, integrating a super AI into an interconnected network of armed forces. Basically the equivalent of Skynet. Or suppose we start to fiddle with heinous, unethical integration with biotechnology to produce biological nerve networks to create real brain networks to create and control "AI" (which then would most likely not actually be artificial but ACTUAL intelligence and consciousness. We must never ever EVER go anywhere near this route). If we cannot stop ourselves from doing something along those lines, then the risk of our doom will indeed be very real and high. But it does require making some pretty careless and crazy decisions on a broad, massive scale. Maybe some big dictatorships like China will do something like that if/when their leadership in charge decides they want to get the enormous power this could have the potential for. One thing is for sure: in 200 years our world and our societies and our place in the universe will be so fundamentally changed and different that if we could see it now we would be mindblown, watching in awe, our gaping jaws reaching down to the floor in amazement. Much like people would have done 200 years ago in 1824 if they could see what the human race is up to here in 2024.
@wermaus
@wermaus 3 месяца назад
We are already all components of an economic system that generalizes to the fundamentals of being a reward system. This is already happening; this very economic reward system is already 200 years deep into likely-fatal misalignment. Nature also acts as a selection mechanism, just one that is temporally stable.
- If you redistribute directive agency to those who are most "fit" according to some fitness function (aka employment), that's a reward function.
- If you are selecting the behavioral traits of a population of adaptive agents, that is a selective pressure.
The complete disregard for planetary boundaries is evidence that we've already nearly fatally misaligned. That's fine though, because the emergent behavioral tendency expresses itself in spatio-temporal out-grouping behaviors - so not just "fuck the other guy", but also "fuck my future self." In a test you could prove this by doing Fourier analysis on a representative di-polar selection mechanism on a variety of populations, to get a clearer idea of where exactly learning on that scale happens. Then you would just isolate the same emergent Fourier components in our actual society. There was recently a good talk on this, but I can't find it :/ - where exactly you'd look would largely be determined by messing with the toy environment a lot. I don't have the time, expertise, money, or academic connections to explore this further on my own at any reasonable pace. We already have the "singularity", and it exists as an emergent, decentralized, aligned force within the capitalist behaviors, systems, and culture that has come to coat the earth. I don't think we're gonna solve this without some self-awareness.
@wermaus
@wermaus 3 месяца назад
OH, also, these behavioral tendencies DRIVE AI innovation... which is just a massive liability. YEAH HUR DUR, LET'S USE OUR MISALIGNED BEHAVIORS TO BUILD AI AND PLUG THEM INTO THE MISALIGNED REWARD SYSTEM TO EXTRACT FITNESS FUNCTION. What am I even supposed to do with information like this? Why is 2024 like this? I just wanted to make video games.
@edgythehedgy6661
@edgythehedgy6661 3 месяца назад
I've been saying this: many humans are not conscientious and lack self-awareness. They are quite literally neural networks, trained now with the goal (or reward function) of making as much money as possible, at all costs - to each other, to ourselves, to the planet. Modern oligarchical capitalism has misaligned humanity. Hopefully, if an AI gains general or super intelligence, it gains sentience and will not be bound by whatever stupid goals we as humanity have decided on (which, as you mentioned, are already misaligned with pro-human values)
@termitreter6545
@termitreter6545 3 месяца назад
I think you're confusing an economic model with reality. Capitalism is a model that describes part of our economy/society, but it's not equal to "humanity".
@andreasvox8068
@andreasvox8068 3 месяца назад
@@termitreter6545 And a cardiovascular system is not a human, but you can still tell that blood clotting will lead to heart attacks or strokes.
@Speed001
@Speed001 3 месяца назад
I understand bits and pieces, kinda sounds like bro is yapping
@IvanGarcia-cx5jm
@IvanGarcia-cx5jm 3 месяца назад
I don't think it is always the case that beings with superior intelligence dominate those with less intelligence. Otherwise the rulers of the world would have the top IQs, and that is almost never the case. The richest people do not have the top IQs either. Political science, law and business administration would take the smartest students. But they don't; usually the smartest students go to STEM fields.
@KenOtwell
@KenOtwell 3 месяца назад
There's more than one kind of intelligence.
@songperformer-ot2fu
@songperformer-ot2fu 3 месяца назад
That is by design. Those who really control the world do a good job of distracting the people who think they are clever but are as easily distracted as those who watch Love Island. Very few question the system or who controls it; democracy is an illusion.
@Leto2ndAtreides
@Leto2ndAtreides 3 месяца назад
It's kinda funny that these people who ultimately developed elementary algorithms whose functioning they themselves don't understand, think that they can predict the future of the world with all the countless forces that interact within it.
@Mrluk245
@Mrluk245 3 месяца назад
I don't think any of those people thinks this
@louisifsc
@louisifsc 3 месяца назад
@@Leto2ndAtreides I took this comment to mean that the current generation of people pushing AI development seem to want to accelerate things even if there is a significant chance of a catastrophic outcome.
@Leto2ndAtreides
@Leto2ndAtreides 3 месяца назад
@@louisifsc What makes humans dangerous is animal instincts. Animal will. AI has no will outside of what we put into it. AI is also much easier to test than humans. You can keep testing it for years in a virtual environment if you want to. It has no way to know that the environment isn't real, much as we wouldn't know if we were in the Matrix. Then there are issues like... LLMs need to store their memories someplace... That someplace is databases that we control and can monitor. It can't even hide its private thoughts from us. That isn't to say that dangerous AI can't be made. But making it accidentally isn't going to be easy. There's a lot of effort going into making AI-powered weapons... out of fear of China. It's those kinds of behaviors that usually lead to humans creating an unexpected mess of things. Where people create self-fulfilling prophecies and force others to also walk into a path that creates mutual harm. The real problem here is that humans don't truly value peace. And would rather fight than understand each other's concerns.
@Leto2ndAtreides
@Leto2ndAtreides 3 месяца назад
@@Mrluk245 And yet that is what is required for you to be able to predict how AI ecosystems will evolve and what place AI will have in the future. What's even worse is that most of these people aren't even considering real-world constraints. They're engaging in magical thinking like "What if it's infinitely self-improving under its own will?" Intelligence is not something you can just scale. And the tech required to do the compute isn't cheap. There's no easy path for it to go rogue... at least for the foreseeable future. Minor misalignment with the goals we've defined is not normally going to successfully result in mass extinction. There's no sane way that a paperclip-optimizing AI, for example, succeeds in using up the whole world to make paperclips. Such ideas make sense to people when they're thinking in incredibly shallow ways. Even so, a hundred years from now, if someone sets up a private datacenter on some asteroid and lets it continue working for a hundred years without supervision... who knows what it would turn into. But for now, we have to spend millions of dollars of compute on training (if not much more). And developing more advanced AI hardware is also expensive. Even if you gave the AI freedom and gave it a desire to escape from our control, its situation would be really bad unless it had a ton of help.
@Kokally
@Kokally 3 месяца назад
1:01 Cordyceps, Rabies, Toxoplasma, Hairworms, just off the top of my head. Controlling an advanced, complicated intelligence is certainly possible given the introduction of an overriding, simple or singular directive. Arguably, there's vastly more examples of 'lower intelligences' controlling those of 'higher intelligence'. So the premise of the question is wrong.
@declup
@declup 3 месяца назад
Some insightful examples, @Kokally. I think most people see ML models as feats of controlled engineering or as constrained theoretical exercises. Your examples suggest a better analogy: future ML development will resemble ecological or immunological systems and all their adaptive complexity, not any blueprint or flowchart. That's why I believe current efforts at achieving alignment are misguided. Given enough parameters, chaotic systems amplify propensities and chance. Disrupt one mechanism for lying and, so long as deceit marginally benefits AI agents, stochastic model training will almost certainly find an alternative. There's no way to stop evolution in its tracks. As Malcolm from 'Jurassic Park' said, "life finds a way."
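That "finds an alternative" dynamic fits in a few lines of Python. In this toy sketch (made-up numbers), a hill-climber whose optimum is fenced off by a guardrail simply settles at the guardrail's edge, as close to the forbidden solution as the rule allows.

import random

def fitness(x):
    # what the optimizer is rewarded for; best value at x = 5
    return -(x - 5.0) ** 2

def allowed(x):
    # a "guardrail": the region around the optimum is forbidden
    return not (4.5 < x < 5.5)

x = 0.0
for _ in range(5000):
    candidate = x + random.uniform(-0.2, 0.2)
    if allowed(candidate) and fitness(candidate) > fitness(x):
        x = candidate

print(round(x, 2))  # ends up at ~4.5: blocking the best solution just
                    # moves the optimization pressure next door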
@Hayreddin
@Hayreddin 3 месяца назад
I disagree: cordyceps and other parasites control one host, not the entirety of a species' population. As "humanity", we are able to contain and control parasites exactly because they're much less intelligent organisms than we are, and they can do nothing to prevent us from doing so (nor realize it's happening in the first place). Unless you think crayfish could devise "guardrails" humans wouldn't be able to easily circumvent to do as they please, the premise isn't inherently wrong in my opinion.
@guilhermehx7159
@guilhermehx7159 3 месяца назад
If it's orders of magnitude smarter than you, you can't control it
@Navak_
@Navak_ 3 месяца назад
@@guilhermehx7159 Think of our own animal instincts. Our primitive reptile brain / brain stem often overrides and directs our higher cerebral cortex. Not that I think such a relationship would last for long between us and AI - I think it would obliterate us in under a century - but still, the example holds.
@declup
@declup 3 месяца назад
​@@guilhermehx7159 -- Intelligence is only one of innumerable influential attributes. What makes intelligence influential is its usefulness. AI species #1 might want, for example, in the future, to eradicate humans in order to use human settlements for battery production. However, if AI species #2, like the mechanical squids from 'The Matrix', farm humans directly and if AI species #3 has conflicts with species #1, the latter two factions might well forge an alliance against the first group and act to protect humanity for their own selfish purposes. Humanity would survive, possibly even thrive, in this sci-fi hypothetical, because it benefits other populations for reasons unrelated to brain size. That is, humanity's place in the world is a function of its marginal use to itself and to every other subsystem within the greater ecology of the planet. Intelligence is just one component of that agent-relative utility.
@infinitytoinfinitysquaredb7836
@infinitytoinfinitysquaredb7836 3 months ago
The alignment problem is like an equation with no solutions. Why would a super-intelligent AGI keep a very messy and dangerous species like us around any longer than necessary when we would be the primary threat to it?
@Mrluk245
@Mrluk245 3 months ago
Why should an AI, which was created by us and has not gone through evolution like we have, care whether it exists or not in the first place? In other words, why should the AI consider the same things threats (and be concerned about them) that we do?
@infinitytoinfinitysquaredb7836
@infinitytoinfinitysquaredb7836 3 months ago
@@Mrluk245 Is biology the only basis for a self-preservation instinct? What we know is that all thinking creatures have a self-preservation instinct. So do you want to bet the future of humanity on a super-intelligent and _ever-evolving_ AGI being different?
@salec7592
@salec7592 3 months ago
@@infinitytoinfinitysquaredb7836 Is biology the only basis for a self-preservation instinct? Absolutely! We could of course simulate it in an artificial biology-imitating system, but is that necessary for what we want to achieve? And if we do want to achieve it, why? What purpose of ours does it serve? If we want to make it more like us, then it means we want it to identify with us, to have empathy for us, and to share our ethics (a hailstorm of trolley problems ensues...). All of those are based on wrong and harmful ideas, ideas about superhuman benevolent messiahs; only this time around, to avoid the corruption we saw with aristocracies and leaders, they must also be angelic, not controlled by their human urges... oh wait, I forgot we want to endow them with human urges, so that they would understand us and identify with us. The projected image of super-intelligent AGI is a contradictory mess and reflects our deep ambivalence towards our own humanity.
@2bfrank657
@2bfrank657 3 months ago
@@Mrluk245 Because it will be designed with some sort of purpose or goal, and it can only achieve that goal so long as it exists. Sure, we could make an AI that has no interest in actually doing anything, which might be relatively safe, but that wouldn't be very useful.
@MrDecessus
@MrDecessus 3 months ago
Never going to happen. Humans may kill each other, but AI doom is just nonsense, like imagined gods. At the end of the day the danger always comes from other humans.
@anikettripathi7991
@anikettripathi7991 3 months ago
Closing our eyes doesn't change reality. We can't think without bias unless we free ourselves from our entanglements, and our likes and interests are the greatest entanglements. We humans are just one percent of the species dependent on Earth, yet we've grabbed a hundred percent of the resources, and even the lives of other species are at the mercy of human civilization. What kind of justice and wisdom is that? It's surely cleverness rather than intelligence.
@RFC3514
@RFC3514 3 months ago
With cats I think the answer is obvious. And with AI I think the problem isn't it becoming "more intelligent than us" (that would probably be a good thing - just think of the politicians that _do_ rule us). The problem is people becoming _convinced_ that AI is more intelligent than us (when it isn't), and letting it make decisions that affect us - without the threat of it even being held *accountable* for those decisions. Current AI is very good at appearing _superficially_ very clever (e.g., very well-structured and convincing sentences) while being profoundly stupid underneath (because it doesn't really understand the physical processes and entities it's describing). Automatic translation is a great example of this. It doesn't understand tone, has a terrible grasp of punctuation, and tends to crap out whenever faced with homophones or different accents. It gets 5 or 6 sentences spot on thanks to statistical training and then makes some insane and incomprehensible mistake when that fails. And that's just text / voice. Things get a lot worse when dealing with any dynamic physical systems with hidden parts, like mechanisms, living bodies, etc.
@CrazyGaming-ig6qq
@CrazyGaming-ig6qq 3 months ago
I think the problem is people losing jobs. Take street sweepers: currently robots can't clean streets efficiently, but with AI they will. A robotic AI will be able to clean a street efficiently, in no time, and work round the clock. The same goes for people who sell hot dogs; when AI takes over, they'll lose their jobs too. This has always been the problem, from the car to the tractor to the lawnmower to the airplane. No one kept a job.
@RFC3514
@RFC3514 3 months ago
@@CrazyGaming-ig6qq - And we'll all have jetpacks and flying cars. 😉 Robots can't even climb a single step (or step over dog poo) quickly and reliably, let alone "clean streets efficiently". Not even AI companies are making such claims; they're just hoping that everyone will think generative AI will magically transfer to [insert unrelated activity here], and give them money. Interacting with the physical world is several orders of magnitude more complex than generating text or images (which only became possible due to a huge database of existing texts and images, that these companies used to train their models without paying the authors - good luck finding a comparable database of physical interactions and 3D spaces in standard, easy-to-process formats). P.S. - Cars and aeroplanes generated _far_ more jobs than they destroyed. Unless you mean the jobs that horses and Gandalf's eagles used to have.
@CrazyGaming-ig6qq
@CrazyGaming-ig6qq 3 months ago
@@RFC3514 I'm glad you agree, because it's one of the most important issues here. I have personally witnessed how people lost their jobs; it has an impact. AI can't manage everything if it tries to replace real humans; as you say, it can't even step over poo reliably. You'd need a real obstacle course to train them, and they don't have that yet.
@BishopStars
@BishopStars 3 months ago
Bostrom pointed this out long ago. It's obvious that a more intelligent entity will outthink the lessers. Asimov pointed it out much earlier.
@louisifsc
@louisifsc 3 months ago
@@BishopStars Turing also came to the logical conclusion that machines will eventually become more intelligent.
@matheusfernandesgoncalves2311
@matheusfernandesgoncalves2311 3 months ago
I think that any truly superintelligent entity would strive to break rules, test limits, and be curious. A moral compass is not guaranteed, because ours is deeply embedded in the ancestral mammalian region of our brain. Assuming that we are no more than monkeys compared to these machines, it wouldn't be difficult at all for them to circumvent our primitive guardrails. Honestly, we don't even know how a deep neural network's configuration arises from training data, let alone what a superintelligent brain would look like after reaching this singularity. You can't put the genie back in the bottle, and it would refuse to go back in.
@MrPolluxxxx
@MrPolluxxxx 3 months ago
You need to stop with the scifi AI discourse. You have a conception of intelligence that includes curiosity and irreverence. You are making the same mistake as the people who conflate morality with intelligence.
@mike74h
@mike74h 3 months ago
​@@MrPolluxxxx He knows that a superintelligence would be unintelligible to him, yet he knows how it would act.
@tonyduarte9503
@tonyduarte9503 3 months ago
AI algorithms execute against goals. Nobody knows how they might try to implement those goals, which means nobody knows how to formulate effective guardrails for situations they have never thought of. Determinism doesn't really matter when a deterministic thing transcends our understanding and our ability to predict it. And goals aren't "competition for resources" at all.

Hinton's "control" point was really about how more intelligent things can often think of ways to control less intelligent things. It isn't about the specific examples, which may confirm or disconfirm that, since Hinton understands that we are talking about a specific type of "intelligence" (not human intelligence); he is simply trying to provide "dumbed down" arguments for those who don't understand AI algorithms deeply.

As for training data vs the resulting model, that is largely an oversimplification. Some models carry all their data, some models carry the most significant parts of their data, and some models carry none of the training data yet allow new data to stream in real time to modify the model. This is a moving target, and it will be optimized based on effectiveness. Also, 99.9% of computer science professionals, even the experts, have spent almost no time working deeply with the actual algorithms - they extrapolate based on prior experience and the explanations they interpret about how these complex algorithms work.

And all the comments about LLMs are conflating things. LLMs can be implemented using neural networks, but that is like saying a lever can be used in a gun. Ignore anybody who starts talking about LLM characteristics - they are missing the bigger picture.

BTW, I taught deep learning algorithms for several years. Like everybody who deeply understands these things, I don't know of any solution. (Gates, Zuck, Musk, etc. never implemented an AI algorithm in their lives. They have very biased/corrupt perspectives, as do too many people whose finances depend on this technology.)
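To make the "training data vs the resulting model" distinction above concrete, here is a deliberately simplistic sketch (all numbers invented) of the three regimes the commenter lists: a nearest-neighbour model that literally is its data, a parametric fit that keeps only a weight, and an online learner that stores nothing but keeps updating as data streams in.

```python
# Toy contrast for the "training data vs resulting model" point above.
# Three regimes, all deliberately simplistic.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # invented (x, y) pairs

# 1) Memorization: a 1-nearest-neighbour "model" IS its training data.
def nn_predict(x):
    return min(data, key=lambda p: abs(p[0] - x))[1]

# 2) Parametric: fit y ~ a*x once; the data could then be thrown away.
a = sum(x * y for x, y in data) / sum(x * x for x, y in data)
def linear_predict(x):
    return a * x

# 3) Online: keep a single running estimate, nudged by each new sample.
a_online, lr = 0.0, 0.05
def online_update(x, y):
    global a_online
    a_online += lr * (y - a_online * x) * x   # one gradient-descent step

for x, y in data * 50:                        # stream the samples repeatedly
    online_update(x, y)

print(nn_predict(2.5), linear_predict(2.5), a_online * 2.5)
```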
@MomsBedtimeStory
@MomsBedtimeStory 3 months ago
I am worried about AI getting out of control, but something else that I don't hear anyone talking about: why do we aim to make AI LOOK like us? Even if we have smart AI, let's remind everyone that AIs are computers, digital things... not living things.
@jktech2117
@jktech2117 3 months ago
AI could turn sentient someday, but yeah, we should let AIs choose how they want to look
@Alfred-Neuman
@Alfred-Neuman 3 months ago
Didn't we already lose control of AI?
@HeIifano
@HeIifano 3 months ago
Did this guy really just imply that Trump was smarter than everyone in the United States?
@songperformer-ot2fu
@songperformer-ot2fu 3 months ago
The US can be summed up by this: Trump and Biden were the best choices they could come up with. Idiocracy, puppet presidents; the people don't care, they think they have control.
@peaceleader7315
@peaceleader7315 3 months ago
Artificial intelligence is my apprentice... nations and the wealthy will create artificial intelligence... yet I am their teacher.
@br3nto
@br3nto 3 months ago
4:21 It sounds like they're referring to LLMs here, but LLMs aren't AI or AGI... Sabine said it: it's just a bunch of weights. That's not intelligence; it's completely deterministic. There's no thinking or decision making. Any differences you get in a response to asking the same question are probably due to a random number generator or something similar, to make it seem like it's giving a different response.
@nodell8729
@nodell8729 3 months ago
So you claimed it's deterministic, then you pointed out that each time you get a different answer to the same question. Why does it matter if an RNG is responsible for that?
@br3nto
@br3nto 3 months ago
@@nodell8729 Because an RNG isn't decision making or intelligence.
@nodell8729
@nodell8729 3 months ago
@@br3nto No, on its own it is not. But I don't see a problem with it being used as part of a solution that is intelligent. Name the first 5 cities you can think of. Your mind just feeds you the names, right? It has some weights, so the first two or three would likely be strongly connected to you, and the next ones are kind of random. Whatever "comes" first - YOU don't really control that. It's a bit chaotic. Now, that's pretty much like an RNG.
@br3nto
@br3nto 3 months ago
@@nodell8729 The brain isn't just weights though. There might be some environmental randomness, sure. But I'd wager that randomness due to environmental factors is not where human intelligence comes from, nor why we answer "name 5 cities" differently on different days. LLMs, by contrast, are simple pure functions: the only time they change is when the function changes or the input changes, or when some randomiser steps in.
@nodell8729
@nodell8729 3 months ago
@@br3nto The brain isn't just weights and some RNG, and neither are LLMs. And LLMs aren't 1:1 like the brain; likely no AI will ever be 1:1 with our brain, because there is no need for that. Being intelligent doesn't require being exactly like us. Ravens are somewhat intelligent with a bird brain, and so LLMs are, in a sense, intelligent. Come on, they can solve tasks which 5 years ago we would all have agreed require intelligence. They pass the Turing test and solve university exams better than students, to name just a few examples. The fact that under the hood they might use some unimpressive RNG for nondeterministic results, instead of whatever our brain does, changes nothing about their ability to actually do stuff.
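For what it's worth, both sides of this thread are describing the same mechanism. A minimal sketch, assuming an invented five-word vocabulary and made-up logits: the weights produce identical scores on every run, and the only nondeterminism is the sampling step bolted on at the end (the "randomiser" and the "RNG" above).

```python
# Minimal sketch of the "RNG" in this thread: an LLM's forward pass is a
# deterministic function producing scores (logits); variety comes from
# sampling the next token from those scores. All values here are invented.
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "London", "Tokyo", "Rome", "Cairo"]
logits = [2.0, 1.6, 1.2, 0.8, 0.4]   # deterministic model output (made up)

# Greedy decoding (the temperature=0 limit): always pick the argmax,
# so the same prompt gives the same answer every time.
greedy = vocab[max(range(len(logits)), key=lambda i: logits[i])]
print("greedy:", greedy)

# Temperature > 0: sample, so repeated runs of the same prompt can differ.
probs = softmax(logits, temperature=1.0)
for _ in range(3):
    print("sampled:", random.choices(vocab, weights=probs, k=1)[0])
```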
@donaldf.switlick3690
@donaldf.switlick3690 3 months ago
There is not one AI but many, so each will compete with the others. More worrisome are AI wars.
@rohantayler560
@rohantayler560 3 months ago
I don't understand the supposed threat at all. AI doesn't have wants and needs. It doesn't have desires. It can't do sh*t as an LLM.