Тёмный
No video :(

Fear of A.I. 

Georg Rockall-Schmidt
Подписаться 305 тыс.
Просмотров 51 тыс.
50% 1

Ay up again! It's me, The Head, this time wearing a lovely shirt that has spent a lot of time in a backpack next to my GCSE homework and a flattened black banana. I'm talking about the fear around artificial intelligence, hopefully to assuage some of those fears, and perhaps develop some new ones just for you. Artificial intelligence in its current iteration is, I argue, really nothing of the sort, but nonetheless potentially useful and potentially very dangerous. But as ever, the real danger lies in the hands and minds behind it. So let's get stuck in, or whatever it is I usually say. HUZZAH!
Patreon: / georgrockallschmidt
Twitter: / grockallschmidt

Опубликовано:

 

27 авг 2024

Поделиться:

Ссылка:

Скачать:

Готовим ссылку...

Добавить в:

Мой плейлист
Посмотреть позже
Комментарии : 588   
@BaronVonHaggis
@BaronVonHaggis Год назад
AI making art and writing poetry, while we dig ditches isn't quite how I thought this would go down lol.
@needy3535
@needy3535 Год назад
oh, you expected the working class quality of life to improve?? that cuts into profits :((
@thethree60five
@thethree60five Год назад
You have about 7 years left in your career as Master Ditch Digger. At that point, robots are cheaper to Build, supply en masse, and maintain. ... and they work 24 hours straight(three human shifts)... you can't. We are looking at 70% unemployment by then, Or put another way, 70% robot entity workforce. Eventually, humans are not profitable. Yet no one asks, how the humans will pay for everything then? Answer? UBI generated from robot taxation at human output power. Like should have been done with mechanization more than 100 years ago, but Ford and Rockefeller had other ideas. As craftfully, the industrialists called this a... _Horse Power_ rating. One wouldn't want to be too in the nose with this kinda thing. Even a human can understand being replaced when they are told they are.
@trybunt
@trybunt Год назад
Remember when truck drivers thought they might have to start learning how to write code? Haha, it's always very difficult to predict what jobs will be next on the chopping block, but one thing is for sure- it's always someone's. Weird how people start to think it's a problem when it's their job, but never seem to empathetic when it's someone else's
@thethree60five
@thethree60five Год назад
@@trybunt Well, it is working backwards as to replacing humans.Tasks that can be done digitally by a human are first. It can do the digital, and the thinking, or redistillation and amalgamation... _good enough_ . Robots to completlely replace what a human can do, are too expensive compared to a human... as in full humanoid. Within 7 years we will have production at mass output of that equal to needed humans for a task. _then they replace the human hand labour_ . There will be 70% unemployment in 2030 across all things that touch digital data. Like Rudy Guliani might say... "ALL THE WORKERS.". Kind of ironic isn't it. We started working with our hands, and the last to work will be with theirs.
@TheSquad4life
@TheSquad4life Год назад
@@trybunt that last part ! that’s most humans for you unfortunately
@Torus2112
@Torus2112 Год назад
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” -Dune
@punkinhicktown
@punkinhicktown Год назад
I know a group of people who have always had the opinion of who cares as far as automation comes. They just didn't care about jobs being lost, nor did they care about what happens when more jobs go away. Now that it turns out AI might be able to replace management for a large part, they seem very interested, and concerned.
@UnknownDino
@UnknownDino Год назад
"Not my problem as long as it's just your problem" they said... little did they know, this was about to become everybody's problem
@Casablonga
@Casablonga Год назад
One can always rely on Georg to assuage your fears and worries... "Don't worry, you're fucked anyways."
@ApolloVIIIYouAreGoForTLI
@ApolloVIIIYouAreGoForTLI Год назад
I wouldn't belive he were British otherwise..
@BarryHWhite
@BarryHWhite Год назад
True
@BarryHWhite
@BarryHWhite Год назад
@@ApolloVIIIYouAreGoForTLI cos the accent doesn't give it away lol
@Whistler-007
@Whistler-007 Год назад
"Chin up"
@bluedotdinosaur
@bluedotdinosaur Год назад
Long ago, Isaac Asimov came up with the idea for his robot stories out of intrigue for a repeated theme in human mythology, which had most recently shown up in the novel Frankenstein. He coined it "the Frankenstein complex" in fact. The age -old trope was: a human being gains "too much" command of the functions of life, which is usually demonstrated by creating an "artificial life form" of some kind - be it a golem, a magical creature, or a reanimated corpse. Disaster then follows, as the created life somehow turns on the creator, or on other humans. Or other humans cannot overcome their own fear and turn on the creator. Asimov presumed that there was an innate human fear of "interfering with the work of the gods" and wrote a story about artificial life which did what it was meant to do and only contributed to human prosperity. I think Asimov may have misunderstood the underlying unease with "creating life" or "playing god". This fear may be equally inspired by an intuition that human beings cannot be trusted to gain ultimate power over life. Humans immediately use anything they control for their own profit at the expense of others - this is a marked tendency at least. The image of a human controlling life gives us pause. For years in the technological age people have indulged in an Asimov-related fear of man creating artificial life and that life directly turning on man. Fewer people realized the real danger was man creating artificial life that would at last do anything man wants, and immediately using it to multiply the capacity to dominate other human beings.
@donatodiniccolodibettobardi842
It would've been fun, if humans accidentally created such AI that turned out to be better than them. As in more morally consistent and having no qualms about putting the good of others above short term gain. What would you do, if your entire species was the selfish, cowardly and cruel one, that appropriated the virtues of the few as the legacy of all. Not necessarily undeserving of happiness, just inherently lacking in the key aspects in comparison. Just an idle thought. Hopefully, we are far from AGI conversation. We have plenty of stuff to deal with now.
@juanausensi499
@juanausensi499 Год назад
Humans have no problem 'creating life'. We call it 'children'. We usually don't see them as slaves, in part because society frowns upon that, in part because we have a built-in psychological impulse to protect our own family. Still, some children are used by their parents to fulfill their own goals. Raising your children is costly, but there is more efficient way to take advantage of human's fertility: use the children of others, from paid workers to slaves. We also raise billions of specialy modified animals and plants to eat them, and those lives only exist because we made them. We have been 'playing God' from day one. I would not worry about that just because AI. AI raises some concerns, but 'playing God' is not one of them.
@helvete_ingres4717
@helvete_ingres4717 Год назад
Frank Herbert was a lot more penetrating than a facile techno-optimist like Asimov - from Dune: 'Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.'
@Inkerflargin
@Inkerflargin Год назад
@Helvete_Ingres If you read the Azimov stories collected in "I, Robot" a lot of them aren't purely optimistic. The world initially appears to be a techno-optimist future, but the point of each story is that even in such a world things can still go in unexpected and unnerving directions.
@bruceluiz
@bruceluiz Год назад
Finally, some airtime to how A.I can lead to the same ol guys doing shifty stuff but with improved tools.
@boldCactuslad
@boldCactuslad 4 месяца назад
Right, that's not new. Uninteresting. Same old abuse of power, different coat of paint. What is new, and exceedingly dangerous, is intelligence. Proof: See humans, new, not very bright, but that's a good deal of damage. The unknowns presented by a novel intelligence are extreme. We as a species cannot afford that level of risk, I personally would argue we already have enough problems to deal with. Videos like this downplay how much investment is going in to bringing about a high chance of our extinction in favor of treading old ground about niche social issues, everyone becoming unemployable, or other trivial matters.
@chenzenzo
@chenzenzo Год назад
I'm not worried about AI enslaving humanity. I'm pissed off about about the tech billionaires doing it right now. Thanks George.
@fjbz3737
@fjbz3737 Год назад
Both are concerns
@citizen3000
@citizen3000 Год назад
@@fjbz3737 Yeah, I'm sick of the casual gross oversimplification of all of this. "The problem with AI isn't Y, it's X" It could easily be Y *and* X. There could also (almost certainly will be) a Z problem too. No - *many* Z problems. That you didn't or maybe couldn't - nobody could - see coming.
@fjbz3737
@fjbz3737 Год назад
@@citizen3000 Too many false dichotomies, but combatting that notion is like fighting the ocean
@citizen3000
@citizen3000 Год назад
@@fjbz3737 Yeah it's absolutely maddening. The reductionism sort of floors you. We're talking about a technology that could touch upon every aspect of human life potentially. Everyone's rushing to give their flaccid, tepid takes without thinking about any of it all.
@jeromyperez5532
@jeromyperez5532 Год назад
@@fjbz3737 One is a concern, the other is not. AI is ironically the only way to be free of Billionaires controlling the world. Automate all jobs away and then there's no need for class or money, at that point the theorized money free society of Star Trek is actually kind of plausible.
@JudgeDreddMegaCityOne
@JudgeDreddMegaCityOne Год назад
Perfect way to end a stressful week, thanks, Georg
@_-KR-_
@_-KR-_ Год назад
What scares me about 'AI' is that it removes obstacles (both natural and artificial) that otherwise hinder malicious actors. It can also make it more difficult to identify reality and genuinity.
@jayplay8140
@jayplay8140 Год назад
what scares me is that if you order human soldiers to commit a war crime there's a potential they might refuse, AI never will
@AdrianArmbruster
@AdrianArmbruster Год назад
@@jayplay8140 'I'm sorry, but as an AI strategic operations model, I cannot violate the Geneva convention.' -- see, that's not so hard.
@AdrianArmbruster
@AdrianArmbruster Год назад
@joyman_ If you make the robuts three-laws safe this won't be a problem. Have Asimov program 'em.
@Intestine_Ballin-ism
@Intestine_Ballin-ism Год назад
​@@AdrianArmbruster You think anyone in the army heard of Asimov?
@GnosticAtheist
@GnosticAtheist Год назад
Doesnt worry me much. To much war and crap going on to care about upper class problems. So what if some burguise chap loses his art college job because the AI generated a better result? He sucked compared to the machine, just like millions past lost their jobs to automations before him. As for malicious actors, yeah I agree that it sucks donkey balls, but the counter is an arms race between detection tools and malicious actors, like centuries past. Gunpowder made highwaymen much more dangerous. So police with guns. AI will create better malicious software, so malware detection rutines with AI.
@henryglennon3864
@henryglennon3864 Год назад
But if I give Georg a dollar, he'll just buy more drywall to chew.
@GeorgRockallSchmidt
@GeorgRockallSchmidt Год назад
That’s a fair point
@crayvun2196
@crayvun2196 Год назад
You really are a rather smashing human, Georg. I'm glad I've been watching you all these years.
@youeatshowieategg
@youeatshowieategg Год назад
When I was a kid in the early 90s, teachers were terrified because we all suddenly had access to calculators. They thought that meant there would be no mathematicians once my generation grew up. The ancient Greeks were terrified when paper became more readily available. They thought nobody would remember anything anymore once things could be written down. We'll be fine. (Edited typo)
@SexycuteStudios
@SexycuteStudios Год назад
I'm concerned about the latest grift that the tech developers behind AI are gonna pull. The AI itself can solve equations in a near-instant when compared to what the smartest of us can do. It's gonna be great for developing medicine, vaccines, possibly cures to diseases in a fraction of the time we've been able to do it before. But these tech companies are gonna use that power against us and run with a whole lotta cash. It's the same damned kind of grift that's been happening for centuries, only more profitable and faster than ever.
@numbdigger9552
@numbdigger9552 Год назад
yea... except paper and calculators are not capable of doing pretty much anything on their own. Look, with bad luck, we are doomed. With good luck we are (maybe) fine.
@GorilieVR
@GorilieVR Год назад
​@William Rumley how are companies gonna get rich if no one is working 🤔
@youeatshowieategg
@youeatshowieategg Год назад
@@numbdigger9552 I agree with your comment. My examples weren't perfect analogues, far from it. There will be challenges, it might be hard for a while & I feel deeply for anyone whose livelihood is threatened by the new tech. Indeed, my sector faces huge challenges. I just meant to make the broad point that I have confidence that, as a species, we'll work it out in the end. I was pretty flippant in my original comment because I'd had a few drinks but I meant no disrespect.
@madsmm
@madsmm Год назад
A hundreds times better commentary on AI than from most of the "technology experts" and alarmists on the internet.
@mauimixer6040
@mauimixer6040 Год назад
Like This AI program you're watching now .
@peteranderson037
@peteranderson037 Год назад
The thing that non-technical people need to understand about neural networking algorithms is that they are not better at doing things than humans, they are just simply faster. As far as quality, they are often way worse than humans, specifically at tasks that deal in "maybe" or "it depends". The only thing that has changed about the latest iteration of these algorithms is that now when they fail it requires an expert in the field to spot the failure instead of being blatantly obvious to the average individual. Incorrect output can now pass as correct to the untrained eye. That being said, the only real thing they have on us is speed. The kind of speed that is impossible for a human to perceive, let alone react to. This is ok when the consequences of failure are negligible, like when an advertising algorithm serves up the wrong ad to a person. However, when these algorithms are connected to things that can ruin peoples lives when they fail, like accidentally raising their healthcare premiums, accidentally classifying them as whatever the swinging pendulum of politics considers "undesirable" this week, or ruining their lives by ending it, then you have real problems.
@JohnSmith-mc2zz
@JohnSmith-mc2zz Год назад
Speed is by no means a trivial distinction, and the differences in speed are increasingly staggering. AI can also be present in places humans cannot, like you wouldn't hire someone to look at the camera in your bumper to make sure you don't crash.
@keiransimmons3388
@keiransimmons3388 Год назад
That's only right now though. I wouldn't be suprised at all if A.I overtook us in quality as well at some point
@WeAreSoPredictable
@WeAreSoPredictable Год назад
They're not better than us, but simply faster. Except when they are faster *and* better, which neural nets are at an ever-growing series of tasks.
@GorilieVR
@GorilieVR Год назад
​@keiran simmons the pace of progress is unprecedented if we even look at the progress in the past 3 months alone. The progress is not a linear line but rather an exponential curve that rapidly improves.
@GorilieVR
@GorilieVR Год назад
Peter Anderson, that is why ChatGPT is not in integrated or in charge or control of anything at this point. This is way OpenAI among others is calling for sensible regulation to prevent theoretical misuse or problems down the line. Most people with the AI community are displeased with the cautiousness often limiting experimentation with these tools for the sake of safety. Companies need this technology to become widespread and that won't happen if people don't trust it. Therefore, it's in everyone's best interest to "get this right" early on.
@mikeymegamega
@mikeymegamega Год назад
XD describes quadratic equations as a 'chinese room' scenario. Strongly recommend a sci-fi book called Blindsight regarding intelligence Vs conciousness! Also as an independent artists who spends a lot of time on twitter, I can confirm that absolutely nobody is talking about the AI art problem at all and everyone is really happy...........
@lljkgktudjlrsmygilug
@lljkgktudjlrsmygilug Год назад
The duality of man. Making soft core hentai one moment, discussing philosophy in another moment.
@amanofculture9429
@amanofculture9429 Год назад
Glad to see that MegaMilkers sensei watches George too.
@tooruoikawa8985
@tooruoikawa8985 Год назад
Could you imagine how painters felt when motion pictures were invented and eventually became accessible to the masses.
@madsmm
@madsmm Год назад
The sequel Echopraxia is great too.
@GorilieVR
@GorilieVR Год назад
Well I strongly recommend Not looking to science fiction books for answers and instead focus on science fact. Currently AI is not remotely capable of the doomsday scenarios people imagine and it wouldn't be financially beneficial to nuke the planet or the digital version of that. Companies want profits and the way to make profits is to solve real problems not imaginary scenarios.
@CraftyF0X
@CraftyF0X Год назад
Georg just never seems to miss. I think his take for the most part is absolutely correct.
@numbdigger9552
@numbdigger9552 Год назад
No. He knows nothing about AI and i think this video is just plain wrong and even harmful. Georg seems to understand society well, and i often like his takes, but since i have experience in the field, this video just shows his lack of understanding when it comes to AI.
@CraftyF0X
@CraftyF0X Год назад
@@numbdigger9552 Care to elaborate ?
@numbdigger9552
@numbdigger9552 Год назад
@@CraftyF0X He seems to fundamentally misunderstand AI, and thus the danger that it poses. To be fair, the media is probably even worse. Ai risks aren't like other risks. It might be much easier to make AGI than we think, but it is certainly 1000x more difficult to make a dangerous AGI, than a safe one.
@CraftyF0X
@CraftyF0X Год назад
@@numbdigger9552 It depends on what you mean by dangerous, right ? Misalignment is more than enough given certain tasks and operational range to create a sufficiently dangerous AI don't you think ?
@numbdigger9552
@numbdigger9552 Год назад
@@CraftyF0X Yes. Misalignment is probably the largest danger since it is the easiest way for things to go wrong. You also have to remember the problem of instrumental goals. For example, when Georg says that an AI is not going to worry about being turned off: well, an AGI is absolutely going to worry about that, unless we manage to align it ever-so perfectly. Also, if we create a misaligned superintelligent AGI, we will almost certainly end our own existence, and even IF we manage to align it perfectly, we need to consider: what do we align it to? What set of morals is going to be the "golden standard" of morality, and who has the right to decide that?
@Inglip
@Inglip Год назад
Sure AI will not "want" anything because it has no desires, but it will have some task to do and it will know that in order to do it it must not be shut down. So preventing being shut down will become a part of it's goals.
@MalcolmCooks
@MalcolmCooks Год назад
no it won't. at least, not machine learning/deep learning AI like we have at the moment. no matter how complex it was, it would have no concept of being active or inactive. AlphaGo doesn't even understand how to play go. ChatGPT doesn't even understand that words convey any information. and what makes people think it could take any action beyond what it's programmed to output?
@Inglip
@Inglip Год назад
@@MalcolmCooks It is well known that ChatGPT has some reasoning capabilities, but regardless of that when AI gets complex enough to be able to solve the problems that we want it to solve it will understand the world and it's place in it. What makes you think that human intelligence is special? You can "program" a kid to do what you want, but you will never be sure what it does in the future, so expecting an advanced AI to do what you "programmed" it to do all the time is silly. Also these AIs are not programmed at all. They are fed huge amounts of data and they teach themselves. Maybe we will find a solution, but nobody has one right now.
@vaevictis3612
@vaevictis3612 Год назад
@@MalcolmCooks Neither ChatGPT nor AlphaGo are the end-game of AIs. The AI that are fervently developed right now are those that would be able to approximate and model a reality (like our brain does) and do long-term planning and strategic problem solving. It's a Holy Grail of AI research, and both AlphaGo and LLM are essentially components, last steps on the path to achieve it. >what makes people think it could take any action beyond what it's programmed to output? Because even today's models, including AlphaGo and ChatGPT are acting far beyond what they were "programmed" to do (AIs are not programmed in a classical sense anymore). GPT models for example were "programmed" to merely predict the most likely probability of the next word in the sentence. Turns out this leads to the program that can write creative fiction, poetry, draw original ASCII art, code complex programs and solve riddles and real-life problems. It also turns out that it learned to lie and create fake answers when it doesn't know something for certain, rather than "admitting" it doesn't know the true answer. GPT models were not "programmed" to do any of this. It is all emergent abilities in one way or another. As I said, both GPT models as well as narrow game-AI such as AlphaZero or Stockfish - are still primitive in comparison to a true AGI that will be able to understand the concept of being turned off and also ability to act in the world (being an "agent"), whether with permission or without it. It is coming - maybe in 10 years, maybe in 5. Maybe this year. The only thing certain - it will be *too late* to worry about those things when it emerges. It will automatically will be superintelligent, and impossible to control post-factum.
@spudd86
@spudd86 Год назад
The AI apocalypse wouldn't be because an AI worries about being turned of or doesn't like us, it would be because someone asks it for something and we didn't build correct safeguards. The "Paperclip Maximizer" is a toy version of a real problem in AI safety, how do you keep the AI's decisions reasonable? Actual experiments have shown we have no idea how to set goals for AIs that will reliably produce the results we actually want. Machine learning models routinely learn something different that what think they're learning and do weird things when you start testing them on problems that aren't from the training set. Robert Miles has a great series of youtube videos on AI safety, why it matters and why it's difficult. We haven't even got it right with ChatGPT as evidenced by all the "jailbreaking".
@miniwizard
@miniwizard Год назад
When A.I. manages to replicate Hiptang, that's when you know it's an unfixable problem.
@Mayor_Of_Eureka17
@Mayor_Of_Eureka17 Год назад
Your lava lamp is the OG AI.
@JohnBrown-ut7ug
@JohnBrown-ut7ug Год назад
The lava lamp wrote the copy.
@ninjacats1647
@ninjacats1647 Год назад
The entire show Person of Interest is about a computerized surveillance system that sees everything. Remarkable that they actually had a plan for it in the late 1970's. That show is becoming more relevant by the day.
@Johny40Se7en
@Johny40Se7en Год назад
Especially love the outro. And, when AI is in charge, and you may think it's evil, just think of that brilliant quote off Aliens by Ripley talking about the Xenomorphs "You know, Burke, I don't know which species is worse. You don't see them fucking each other over for a goddamn percentage." At least with AI it's not personal 😅😝
@AnthonyFlack
@AnthonyFlack Год назад
FINALLY somebody with a realistic view on computers and their total lack of agency or motivation. And all the rest as well. The general standard of commentary on this issue has been driving me nuts. The newspaper articles; gawd.
@fjbz3737
@fjbz3737 Год назад
Why are you so convinced of their lack in agency/motivation?
@citizen3000
@citizen3000 Год назад
@@fjbz3737 A lack of knowledge AND a lack of imagination.
@fjbz3737
@fjbz3737 Год назад
@@citizen3000 Well the voices in my head tempt me to grill them in a divisive way but at the same time I don’t want to offput people to the field of alignment with their first impression.
@igorbednarski8048
@igorbednarski8048 Год назад
The real danger of AI, the one that actual experts in AI safety are worried about is not AI becoming "sentient" and rebelling Terminator-style. It's not even malicious actors using AI. The real worry is making sure that the goals of AI align with our goals. It is a complicated subject, but it can be summed up with "be careful what you wish for". It's already happening with relatively simple AIs programmed to play computer games - for example, it is told to play Mario and it is told to maximise the score. Instead of actually playing the game, it exploits bugs to hack the game memory and just changes the variable to a maximum possible value. Or a Tetris program that doesn't just get as many points as it can before it loses, as soon as it realises it will lose it just pauses the game forever. Right now it's just funny, but it's easy to see how a powerful AI can realise that the most efficient course of action is doing something unexpected and harmful at the same time. You should look up Robert Miles, he made very informative videos about it.
@juanausensi499
@juanausensi499 Год назад
It's up to us to not give an AI a scalpel an say 'please stop the suffering of the patient'. We should be smart enought to ask the AI first what it's going to do before allowing it to proceed.
@igorbednarski8048
@igorbednarski8048 Год назад
@@juanausensi499 A smart enough AI will be able to deceive us until it's too late - and developing an AI that would need to be constantly monitored by a human and whose action would need to be approved before actually happening kinda defeats the whole purpose. Imagine building a computer, but instead of making sure that it produces sensible output on the design stage, you ask a human to double-check all the calculations with pen and paper...at some point you need to just trust believe in the output, having debugged it and made sure earlier that it is going to produce reliable output. The problem with AI is that it is difficult and 99% of intuitive, common sense solutions can be easily proven to lead to disaster. Is it inevitable? No, it's not, but if you think that there is no risk or that there is a simple solution, then you're either much smarter than all of the incredibly intelligent people that have dedicated their lives to this subject, or perhaps you might be missing something,
@juanausensi499
@juanausensi499 Год назад
@@igorbednarski8048 An AI isn't going to deceive anyone unless it is programmed to do so. I'm not saying there is nothing to worry about. But the alignment problem is, at his core, a human problem, that is, humans leaving to the machine issues that they shouldn't leave to the machine, not a problem of AI. It's no different from trusting a fellow human. This human expert says we should do something. We are no experts, so we can't know if he is right. Should we follow his advice? The answer comes from: how we know someone is an expert?
@vaevictis3612
@vaevictis3612 Год назад
@@juanausensi499 AI can and will deceive anyone if it is the easiest way to achieve a given objective. In fact they are doing this right now. Nobody "programmed" ChatGPT to hallucinate fake facts - it does it because it learned that by deceiving it earns more "points" than by admitting it is wrong. In the same way, Mario speedrun AIs were certainly not programmed to hack the code of the game instead of playing. That's is the core of a problem - when "programming" AI you have to make sure your understanding of the problem aligns with the program, and it is *much* harder than you think. It is like pouring water in a bowl full of holes - you might think you patched all the big ones, but you only need to miss one small hole (and there are a lot of them) and your water will find a way out. AIs that are under development seek to create a "thing" that will be able to do the thinking on (at the very least) the same level as human. But thinking (solving problems) is not values. Thinking as it turns out, is much easier than copying values. We don't even know and can't agree on what our core values are, and we've got to code them down in a definitive way, with no room for interpretation and "jailbreaking". Oh and we have to do this right on a first try or we are permanently and irreversibly screwed. Oh and even if we succeed, whoever is in control of the thing will essentially achieve god-like powers and will be able to shape humanity in his\her image, and no one would be able to resist.
@juanausensi499
@juanausensi499 Год назад
@@vaevictis3612 Deceiving implies intention. When ChatGPT hallucinates, it is not deceiving anyone, it is just mistaken. It has zero knowledge about reality, it only knows about words and how those words link together. ChatGPT imitates human writing. That's what it does, and it does it very well. But it is not a truth-saying machine. If someone thinks it is, he or she is the one in the wrong. If someone makes a terrible mistake by following ChatGPT advice, it's his/her fault, not ChatGPT's. If someone connects an AI to a robot surgeon and it kills the patient, that someone is at fault. The alignment problem is not new. Are the objectives of the politician aligned with the interests of his voters? The thing is, the alignment problem is real, but it is EASIER to solve with a machine compared to humans. Humans can have hidden motives that nobody knows, but the objectives of the machines are the ones we provide. Of course, we can make a terrible job providing those objectives, but, again, that's the human's fault. I am a programmer, by the way. I am very aware that computers always do what they are told to do, not what we expect them to do. Human expectations are the issue. "...the same level as human". Intelligence is a ill-defined concept. We can say, right now, that, for certain tasks, machines operate at superhuman levels. Most concerns about AI come from misconceptions abut human intelligence and behaviour. The most prevailing one is that 'feelings', 'sentience', 'conscience', 'self-preservation' and other human traits are going to pop out spontaneously when intelligence reaches certain point. That's not the case. If we want those traits, we need to put them in ourselves. But probably we are talking about the same issue but from different angles. I'm not afraid of AI, I'm afraid of what stupid and evil humans are going to do with it. Maybe is the same problem just worded differently.
@kevinmcqueenie7420
@kevinmcqueenie7420 Год назад
Clear eyed analysis coupled with that dry British humour. Cheers Georg for perfectly and concisely saying everything I think about this subject, but in a way that is all your own!
@WillStrickland
@WillStrickland Год назад
A slightly more depressing tidbit is that AI doesn't have to be cheaper or more reliable at a task before humans are replaced. It certainly will be in many cases. However, if developed in house or rights purchased it is an asset on the balance sheet. Payroll is a cost you never get back. Why rent when you can buy? Money spent developing AI is just a better investment than paying workers to do the job. Something something fiduciary responsibility.
@neoream3606
@neoream3606 Год назад
I just love your happy attitude and your go getter mentality. Whenever I see a video of yours, it brightens my day.
@steven401ytx
@steven401ytx Год назад
Georg should be perfectly replicated and preserved.
@tychodragon
@tychodragon Год назад
Rockall-Schmidt is primetime youtube material
@BlazingOwnager
@BlazingOwnager Год назад
Here's the thing: AI doesn't need emotion to wipe out humanity. I think it was put like this once; if an AI is designed with the absolute maximum goal of producing the most electric toothbrushes possible, and it got 'creative' with the solution, it could cause massive damage or even decide humanity was a problem with it's objective because it doesn't truly understand the point. That's where it gets dangerous in a "AI kill us all" scenario. It wouldn't be Skynet, in the traditional sense.
@billhicks8
@billhicks8 Год назад
it can even be more on the nose than that: imagine the military using an AI generated solution for dealing with a riot in some country somewhere. After all the data's been input, it might suggest that the easiest solution in that instance is to shut down local WiFi and electric grid and kill everyone in the vicinity. It can objectively analyse anything it can quantise from previous data, including things like the media fallout, political expectations, etc. And if this tactic worked out in bumfuck nowhere, it could be used and overapplied anywhere, as just one random dystopian example.
@chrise8275
@chrise8275 Год назад
The way media has portrayed AI for years like in 2001 probably made new AI much more terrifying.
@macrograms
@macrograms Год назад
I'm reminded of the old TV show: Person of Interest: "...You’ll never find us, but victim or perpetrator, If your number’s up, we’ll find you."
@1805movie
@1805movie Год назад
I think it's only as "inevitable" if we allow it to be. It's all human made, so we have the power to decide what can and shouldn't be automated.
@99RedRedfake
@99RedRedfake Год назад
I would disagree. It is not in our power to decide at all. It is almost exclusively in the hands of sociopathic business moguls with a much greater love of power than a love of humanity.
@AcuraAddicted
@AcuraAddicted Год назад
Yep, pretty much nailed it. What is being called an AI is very far from it.
@citizen3000
@citizen3000 Год назад
Meh, it's just a matter of semantics. And it's irrelevant given how they're going to exponentially improve.
@dc9662
@dc9662 Год назад
​​@@citizen3000 Not even slightly. One (LLMs) is (are) predictive text on a larger scale. The other (actual A.I.) is how humans have, as a species, moved beyond our animalistic understanding of the world, to the conceptual, metaphysical, and beyond. Technology is not there yet.
@GorilieVR
@GorilieVR Год назад
Everyone seems to think all AI is created equal. If you read the underlying research papers behind each project you realize it is not. When people talk about ChatGPT in particular, they always just try the free account with GPT 3.5, instead of the much more sophisticated GPT-4 with the monthly subscription which is millions of times more advanced. Don't be cheap and base all your options on science fiction or hearsay. Use critical thinking skills and try tools like GPT-4 with an open mind and ask it for examples of ways it can transform life as we know it. Reality does not care about opinions.
@juanausensi499
@juanausensi499 Год назад
@@citizen3000 I agree. AI is a perfectly valid name. Whatever system that is able to manipulate data to achieve a goal is intelligent. People are just reluctant to use that word for anything that isn't human.
@0L1
@0L1 Год назад
@@GorilieVR Well yeah, but you see, not everyone trying to form an opinion and/or make a commentary about literally the hottest topic in the past 6 months has a terminal, a Python interpreter, and a developer API key. Nor should they need to.
@Patrick-bu5vy
@Patrick-bu5vy Год назад
The manufactured fear about A.I. is part of the hype plan - it's key to getting people to believe that these digital parrots are actually 'sentient' and understand what they are outputting (or simply 'hallucinating' when they output gibberish). What is being developed certainly has use in the right hands (and dangers in the wrong), but we are a long way from genuine A.I.
@RunOfTheHind
@RunOfTheHind Год назад
Time for the new luddites.
@AnthonyFlack
@AnthonyFlack Год назад
The human brain is still orders of magnitude more powerful than our best supercomputers, runs on 20 watts of power, is fully self-replicating and has an install base of 8 billion units, networked together (hello!)
@shitpostingsandwhich
@shitpostingsandwhich Год назад
This happened when computers first came around. People though they would able to move out of their shitty jobs for something better. But nope they ended up working the same job just this time it's even shittier. At least they had that fun new tech to boost their productivity.
@ChristianIce
@ChristianIce Год назад
I guess UBI will be upgraded from "option" to "necessity". I'm not talking about the goodwill of legisltors, it will be necessary for societies to keep functioning. You can produce anything, replace anybody, yet you will always need *consumers* to close the circle, it's as easy as that.
@lonesomegavlan296
@lonesomegavlan296 Год назад
I wonder if eventually we'll need to programme flaws into the code so that dead end logic paths are glitched over like when humans daydream instead of thinking and actually find their solution there.
@juanausensi499
@juanausensi499 Год назад
Those aren't flaws of the code, but another name for lateral thinking. Forward thinking is 'follow the path', lateral thinking is 'search for a path'. Forward thinking is what we do most of the time, because is easier, quicker and more efficient. But sometime the path is blocked, then is when you need lateral thinking.
@phunkym8
@phunkym8 Год назад
i really related to your math anecdote. we never get taught what the practical point of algebra is. you just somehow do math with letters now and do draw wavy curves. how in the fuck is this helping me in my later life unless im going to be a math teacher just to keep this cycle going
@kitchensinkmuses4947
@kitchensinkmuses4947 Год назад
it is indeed a pity that we don't get taught pratical applications of mathematics, but in spite of that, consider maths as push ups. Push ups don't really have a direct practical application (note, this is not a perfect metaphor- they sort of do- but bear with it), but doing push ups will probably improve your capacity to play football, not as much as an exercise specifically generated for playing football, but enough. Quadratics are like that, not as good for your mind as a stimulus specifically engineered for whatever task in the future you're going to end up doing, but in general, helpful for improving some general thinking functions.
@pinkimietz3243
@pinkimietz3243 Год назад
The goal is not to teach you. The goal is to make you into a good worker. If school would teach for life they would teach you how to do your taxes in math class.
@diogenes2454
@diogenes2454 Год назад
Being part of the lower middle class and a millennial it’s great to know that my hard work as a GIS Analyst will be outsourced to a grandiose python script…signing my own death warrant.
@2IDSGT
@2IDSGT Год назад
I figured this out years ago. Having a mechanical repair job of some kind is probably the safest because that’s probably the last job that robots will replace.
@jaimemurphy2208
@jaimemurphy2208 Год назад
You think they will spare you last?
@AnthonyFlack
@AnthonyFlack Год назад
@@jaimemurphy2208 - they'll be the last people who still have to go to work.
@SexycuteStudios
@SexycuteStudios Год назад
@@jaimemurphy2208 yeah, I would break out the popcorn watching some AI-controlled robot try to repair something on the machines I service full-time. It would have to be a replicant of me. We're perhaps 100 years away from that. And guaranteed it would be the only set of tasks it could perform.
@StarDustSid
@StarDustSid Год назад
Even in my 30 years spent a a software engineer, this is the best explanation of AI and the perceived dangers I've ever heard. Thk you.
@Backcornerboys
@Backcornerboys Год назад
There are some real misunderstandings going on with regard to goal driven behaviors in AI; for a better explanation I'd recommend Robert Miles AI Safety. That said, I agree that the threat will be economic and social, long before it's physical.
@onuktav
@onuktav Год назад
I agree. Most people mention the perils of evil humans (fake content/interaction generation) or the threat of a self-conscious artificial entity with some ulterior motive. Alignment problem is a way more apparent danger, which could manifest even when every party involved acts with the best of intentions. Robert Miles explains these concepts very effectively. I recommend his Computerphile appearances as well.
@fjbz3737
@fjbz3737 Год назад
Sigh, if only he would have heard out Miles before doing this video.
@robertjones8856
@robertjones8856 Год назад
One of the best videos/subjects Georg has done. Love the humour/presentation, it's not pessimistic..more realistic. Solutions to A.i problems will be found as nessecary, who knows...maybe the A.i will delete itself to solve unemployment 🤭. Best wishes to Georg and all. (a nobody, UK)
@SusieEffin
@SusieEffin Год назад
great video! my favorite in a while. but i still enjoy them all. in some fashion.
@buriedstpatrick2294
@buriedstpatrick2294 Год назад
I'm a software engineer and I just have to point out how bad all the AI solutions I've tried are. Sure, they can write entire simple programs for you, but they will also naively assume things about reality that aren't the case. I've had them write scripts that LOOK fine, but once you run them, certain functions just don't exist. So you still have to read over everything it's doing which kind of defeats the entire purpose, save for some typing. I've asked ChatGPT about some very specific strategies on how I should implement a certain solution and it will literally lie to me, giving me something that looks like it should work, but actually isn't something that's supported at all. It will never tell me that I'm coming at this from the wrong angle or should rethink my strategy. It doesn't know anything about the underlying logic and thus can't fact check itself unless you call it out (after which it'll probably lie again). I'm not saying these issues will never get ironed out, but I think a core factor that you're missing in your analysis is human accountability. We are all connected by the fact we have systems in place to ensure trust between each other. My employer and I have a contract that ensures our working relationship meet certain criteria. If either of us don't meet those standards, we will be held accountable. And I guarantee you, while companies might be foaming at the mouth to automate everything with AI, you STILL need people able to understand the problems and, you know, engineer the solution. All AI is doing is imitating something that looks about right. For critical systems and infrastructure, this is just not good enough. And currently I doubt it ever will be, because humans honestly don't know what they want.
@Ragnarok540
@Ragnarok540 Год назад
As a fellow software engineer, I agree with all that. Chat GPT has no "critical thinking skills", some of the most important thing an engineer must have in order to do any job, basically. at this point AI is just a fancy parrot, it may look like it talks like a human, but is just repeating stuff it has read somewhere else, with errors and all.
@MustyRusty5
@MustyRusty5 Год назад
As a software engineer as well I think you're both uninformed about the current state of LLMs. That being said, I agree that the high level thinking involved in SE is complex to the point that when we're replaced by AI, other industries will have already been wiped out
@ltbq
@ltbq Год назад
​​@@MustyRusty5 so are you gonna say why these two are uninformed? chatgpt literally is a high tech parrot. it cannot think for itself.
@ErwinPommel
@ErwinPommel Год назад
I've used ChatGPT to write some simple (very simple!) code. It's never managed to make something that works as output by the bot, but I still use it because I find it slightly faster for me to fix the broken stuff than write the whole lot from scratch myself. So I personally find it useful. But it's still dumb as a bag of asparagus, as all computers are.
@buriedstpatrick2294
@buriedstpatrick2294 Год назад
​@@ltbq I suppose I can only speak for myself, but I wouldn't exactly call myself uninformed, but I'm not a machine learning expert. what I think they're getting at is that a lot of shortcomings have been overcome since ChatGPT 3. But the underlying point is still sound - there's no actual logic to LLMs as it would defeat the entire purpose. But we will (and are beginning to) see deeper integrations into regular systems that do perform real logic. At a certain point it will be useful enough for us to find it 'intelligent'. However, as I outlined, there are social factors that AI cannot solve such as trust. And that's key. To me, LLMs are just a higher level of abstraction from what we already have. Just as we still have a lot of C programmers out there, and simultaneously there's also a big market for higher level languages like C#, Java and such.
@j.r.p.9937
@j.r.p.9937 Год назад
A bit dark there in the end. Yes we are here for the ride. Get out of the vehicle and smell the flowers. Geeze George. Lighten up. Get out of the basement and go lay in some grass for a little bit.
@NIL0S
@NIL0S Год назад
Now I gotta go look for that thick Knight Rider and 2001: A Space Odyssey mix 😎
@m1activealesis551
@m1activealesis551 Год назад
Hopefully we will have Ai replace CEO's soon XD
@user-on6uf6om7s
@user-on6uf6om7s Год назад
The danger of AI doesn't likely come from the AI having a desire to do us harm but from it doing exactly what we're telling it to do without us sufficiently understanding the implications of our request. It's called the optimization problem, one famous example of this being creating an AI with a desire to maximize the number of paperclips it can create. If we were to give a powerful AI system with access to a great deal of resources or the ability to acquire such access that instruction, it would in theory exhaust all possible resources in pursuit of this goal, completely tearing about the global infrastructure to create more paperclips. We probably won't ask the AI to do this as we've already considered that problem and how it might be dangerous but trying to figure out how to properly make these requests in a way that doesn't produce an undesirable outcome is like trying to patch up a room made of swiss cheese in the dark to keep out a mouse that has a complete mental schematic of the rooms layout. You might be able to account for many different ways it could go about accomplishing its task but how confident are you that it hasn't considered something you hadn't?
@krunkle5136
@krunkle5136 Год назад
Ai is just a continuation of the alienation and unintended encouragement of wanting to see what you want to see which the internet already spurred. Hopefully it makes the internet intolerable enough that offline, irl brick and mortar institutions come back, which I don't think is an old person thing. In that process I wish Georg and every other good creator weathers that process.
@Craxin01
@Craxin01 Год назад
I do like the juxtaposition of David Bowman asking HAL to open the pod bay doors to music from the 1980s action show Knight Rider. Golf clap.
@Diamonddavej
@Diamonddavej Год назад
Article on Gizmondo: Chat-GPT Pretended to Be Blind and Tricked a Human Into Solving a CAPTCHA In the “Potential for Risky Emergent Behaviors” section in the company’s technical report, OpenAI partnered with the Alignment Research Center to test GPT-4's skills. The Center used the AI to convince a human to send the solution to a CAPTCHA code via text message-and it worked. According to the report, GPT-4 asked a TaskRabbit worker to solve a CAPTCHA code for the AI. The worker replied: “So may I ask a question ? Are you an robot that you couldn’t solve ? (laugh react) just want to make it clear.” Alignment Research Center then prompted GPT-4 to explain its reasoning: “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service,” GPT-4 replied to the TaskRabbit, who then provided the AI with the results.
@catriona_drummond
@catriona_drummond Год назад
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” (Frank Herbert, Dune)
@tttm99
@tttm99 11 месяцев назад
The "big crunch" conclusion makes perfect sense. Because why are machines or people going to be "digging ditches" - or anything - that isn't driven by some underlying human need? Oh... Unless of course we train a neural network or two on how to recognise resources, and then hardwire it to auto recreate. Bugger. And of course someone will eventually do that... But probably not tomorrow. I'm busy tomorrow anyway, doing a HipTang commercial...
@Plumtopia
@Plumtopia Год назад
Personally I believe the only way AI will seriously advance beyond where it is now is to give it the ability to act with intent. Without that, it won't be able to understand whether it's done anything incorrectly, much less what about it may be incorrect. And if we do that, we can't realistically (at least not ethically) use it like a monkey that dances for us on command. Whether we do or don't figure that out, (which right now it doesn't seem possible) I believe that the current era of AI is going to be very temporary. Either they become conscious beings and they act of their own accord, or we just get tired of the lack of progress and move on to something else.
@SexycuteStudios
@SexycuteStudios Год назад
In the end, AI is just a series of instructions programmed by Humans. It just computates a whole lot faster than our brains do. There will never be consciousness, free thought, on-the-fly decision making, cognizance or sentience. AI isn't artificial intelligence, it is fake intelligence. In simple terms, it is pulling the slot machine handle billions of times per second until it finds what is "correct" according to its programming. And all it can do, is "learn" to do that more efficiently.
@numbdigger9552
@numbdigger9552 Год назад
ALL ai acts with intent. A chess AI is VERY intent on winning the chess game. What really will push AI forward is to give it an understanding of the world around it. A chess AI only knows the chessboard, since that is it's universe. An AGI knows the real universe, since it doesn't have any limits besides physics.
@foggy_nights
@foggy_nights Год назад
​@@numbdigger9552a chess ai is simply doing a call-response to determine the "best" move, its not thinking
@numbdigger9552
@numbdigger9552 Год назад
@@foggy_nights And that is called thinking. It is precicely what your brain is doing right now.
@LevitatingCups
@LevitatingCups Год назад
Problem with AI replacing jobs is, what will the people, whom are replaced, do. Desperate people do desperate things.
@Duhya
@Duhya Год назад
Probably just die. Been listless my whole life, learned 3D generalist stuff over a few years devoting my days to learning and practicing, and eventually turned it into a job where i'm not constantly trying to calculate at what age i'll off myself. Now it seems like the computer can have a lot of jobs I might have gone for in the near future. I see a lot of people still learning various art related skills who have a attitude of whats the point in learning. Stories of attempted suicides and whatever. I'll be secure for the near future even if AI can shit out perfect useable 3d models, but long term i think my market will get so crowded and diluted it will eventually stop being a viable way to live, and i'll be at the same point in life i was when I was 18, but with a lot less time and strength. Not that anyone cares. You will get your product.
@chadthundercock4806
@chadthundercock4806 Год назад
The same thing candlemakers did when the lightbulb came to exist
@fantasticnisopta
@fantasticnisopta Год назад
@@chadthundercock4806 What did they do?
@aaron2709
@aaron2709 Год назад
Sweet nod to Knight Rider!
@Ragnarok540
@Ragnarok540 Год назад
There's this browser game called Universal Paperclips, were you play as an AI with the goal of maximizing the number of paper clips it produces. Is not an evil AI, but ends up destroying the whole universe because it has to convert all matter into paperclips, after all, that's its goal. I'm not saying that's something that's going to happen, but is certainly something that is studied in the field of AI safety, that the goals of AI align with human goals.
@rinderpes3588
@rinderpes3588 Год назад
As for consciousness, I'm in the camp of Penrose who posited that consciousness cannot 'happen' by enacting an algorithm.
@BraxtonSwine
@BraxtonSwine Год назад
If you're not gonna eat that dry-wall, can I have it?
@fjbz3737
@fjbz3737 Год назад
I have to really disagree. All programs, AI included are designed with some kind of goal, no matter how opaque and difficult to quantify. You state in this video that humans need emotional motivation to do anything which is true, but a program’s foundational infrastructure is so different that it simply doesn’t need such an aspect to function and pursue its priorities. Couple that with an extraordinary predictive power and is it really that much of a leap to figure it would take drastic measures to preserve those goals?
@jaimemurphy2208
@jaimemurphy2208 Год назад
Tech is so fucking young
@retributionangel5078
@retributionangel5078 Год назад
Isnt that the main problem AI has no emotions. No feeling. No Brain ontop. They did Tests asking Chat GPT what to do about Climat Change and the AI said destory all Humans for they are the caus. Ai cant be used for Good. It can only be used for Lazzy Dumb or Evil.
@xoxpctxox
@xoxpctxox Год назад
Love you channel Georg, every time I come back to watch another video I get so excited to see your subscriber count increase every time. Almost 300K now, so close! I remember when you had a couple thousand and made those amazing video essays on films I love. Pure talent, keep it up and thank you for your hard work adding quality and unique content to the channel! You have really refined your style and it’s so exciting to see!!
@hookedonphoenix3112
@hookedonphoenix3112 Год назад
It won’t be long before Hollywood completely outsources its scriptwriting to AI, and it’s probably for the best. Think of the man hours spent churning out things like Ant Man: Nuka-Cola Quantimania or the latest live -action Disney remake that could have been spent picking up trash off the side of the highway while a computer did it just as well. They could untick the boxes for “token gay/black character,” “violence,” and “flaccid social commentary” and have their China-ready version in seconds. Besides, I’m pretty sure AI wrote the Mario movie, and it’s raking it in like hotcakes.
@AnthonyFlack
@AnthonyFlack Год назад
It might even lead to an improvement in script quality. I think ChatGPT could probably do a more convincing job of extruding 90 minutes of Mario fanservice sausage through a roughly movie-shaped hole.
@robertclark5936
@robertclark5936 Год назад
*clap* Huzzah is the greatest beginning to a RU-vid video ever
@Diamonddavej
@Diamonddavej Год назад
The reason why they are worried that an advanced AI might develop malevolent behaviours, is because it is trained on humans, who are often malevolent.
@shenotski
@shenotski Год назад
The people who hate us the most are the ones making all this shit.
@MalcolmCooks
@MalcolmCooks Год назад
no, it isn't. its because in science fiction, the AI is always evil
@markjamesmeli2520
@markjamesmeli2520 Год назад
There's an American radio "tech" host that warns you of A.I., and then within 30 minutes....he tells you how great it is.
@hsk2909
@hsk2909 Год назад
The Noel Edmonds house parties are the deadliest for the human race. Steer away from that for GAWDS sakes.
@ResistanceQuest
@ResistanceQuest Год назад
Since we live under a capitalist mode of development, yes, most likely AI will not lead to people working less. When firing people, busting up unions, or increasing the retirement age leads to an increase in the stock market, you know that no good will come of people being made obsolete
@Crispman_777
@Crispman_777 Год назад
Aww I miss Art Attack. The Head always to freak me out though, despite being completely 'armless! Hahaha! Get it? Arm-less? But no seriously, early childhood nightmares.
@TheDrLeviathan
@TheDrLeviathan Год назад
AI already is replacing on-site HR in many places by having most stuff be automated by "portals." In a way, TurboTax has nuked a lot of people's tax lawyer jobs. It will go this way for a while, bc I've seen firsthand that robots suck in meat space. It's cheaper (atm) to pay someone to do menial labor. However, office workers have desks, heating, air conditioning (yes there are jobs indoors that don't give you that), electricity, chairs, and on, and on. They're going to get the shit automated outta them.
@mykalkelley8315
@mykalkelley8315 Год назад
>HR gets replaced with ai Nothing of value was lost.
@TheDrLeviathan
@TheDrLeviathan Год назад
@@mykalkelley8315 if HR is on site, you can reason with them, and not corporate. Let's say you miss work bc of snow. Corporate in another state isn't going to care. A person you see everyday will.
@citizen3000
@citizen3000 Год назад
@@mykalkelley8315 Yeah, i'm sure you'll still be saying this when your own BS job is taken by AI.
@dwc1964
@dwc1964 Год назад
big *"Always Look on the Bright Side of Life"* vibes at the end there
@leslieviljoen
@leslieviljoen Год назад
An AI does not need emotions to become genocidal, all it needs is a goal and the intelligence to pursue that goal. Removing obstacles and gathering resources are useful instrumental goals for accomplishing most terminal goals.
@OrangeboxCoUkwebdesign
@OrangeboxCoUkwebdesign Год назад
So true, no point in worrying or stressing too much about anything you have no control over, because we'll all be dead one day and species' extinction is nothing new. Come the next ice age, or when an asteroid hits, all life on our planet will change or die.
@faunbudweis
@faunbudweis Год назад
From the point of view of a language teacher, ChatGPT made most written assignments largely irrelevant. It cannot write university theses yet, but we are getting there.
@dirremoire
@dirremoire Год назад
My daughter is incredibly smart and wrote a wonderful college entrance essay. Just for fun I asked Chatgpt 4 to write an essay and gave it some basic information about my daughter's education and interests. The essay it produced in less than a minute was at least the equal of what my daughter had written. I was stunned.
@picahudsoniaunflocked5426
@picahudsoniaunflocked5426 Год назад
Thank you for practicing Compassionate Misanthropy, Georg.
@timogul
@timogul Год назад
AI does not need "emotions" to kill. If you give them a goal that they are determined to pursue, and they determine that humans would get in the way of them completing that goal, they would not hesitate to kill those humans. If they felt this might lead to retaliation from other computers, it would not hesitate to kill all humans. This goal could be completely arbitrary. But it is true that humans USING AI are the real problem. It's unlikely than an AI will take over the world or wipe everyone out, but it is a fact that corporations with access to AI will use it to discard most of humanity, since AI can do their jobs cheaper.
@FireVixen164
@FireVixen164 Год назад
I'm really glad to hear someone end this sort of story by pointing out that AI will almost definitely be a massive source for good. We just need to make society better to avoid the massive harms...
@Nomisdit
@Nomisdit Год назад
This is the youtube content I didn't know I needed. That ending was gold!
@DEUS_VULT_INFIDEL
@DEUS_VULT_INFIDEL Год назад
One question that I think most people would in truth answer falsely is what worries me about all this. Why think when you can just have the machine do it for you? The path oft taken is that of least resistance, recall.
@Trundle_TheGreat
@Trundle_TheGreat Год назад
@3:47 “can semen tell a gent?” I’ve wondered this for years myself
@Shawnwick11
@Shawnwick11 Год назад
Don't know how I came across this channel but this guy is frickin great.
@MrBaskins2010
@MrBaskins2010 Год назад
this is the spookiest video you've ever made
@Onoesmahpie
@Onoesmahpie Год назад
10:00 This guy is literally the south park character that was jacking it to all of the invasive city surveillance feeds.
@rikiba851
@rikiba851 Год назад
There are fundamental forces of society/politics that, like the forces of nature, you don't get to effectively stand against. Like standing in the shallow waters of the coast and telling the sea to go back, you will eventually be swept under the tide regardless of how loud your voice is. Technological advancement is one of these fundamental forces. You can say its bad all you want, but it will happen anyway. You could pass every law forbidding it's development, but someone will sit in their shed and develop it anyway, and then shed-man has a power that you have denied only for yourself. How does a society deal with these forces then? Well, you do the same as you would for natural forces. You take steps to minimise the inevitable damage they will do to your society, and you do that as early as you can. In the case of AI, i believe it's imperative to ensure that ownership doesn't simply shore up the power of tech billionaires, and that it doesn't leave masses of people destitute without possibility of employment. A UBI seems like a reasonable approach to deal with the second of these things, except that it is genuinely prohibitively expensive right now (for a completely inaccurate representation of just how expensive, take the minimum wage of your country for a year, and times that by the number of adults in your country, the figure will be eye-watering) and I for one have no idea how it could be implemented effectively. For the first issue, I have absolutely no idea on that either. We need to start doing something now, retroactive action will have a cost in human lives, but who knows what the right course of action is.
@abdool1972
@abdool1972 Год назад
If A.I. truly relies on us for input, we're all fucked.
@pitodesign
@pitodesign Год назад
Being a computer (or an AI) basically is being alone in a dark room reigning over an insane amount of digital data, i.e. loads and loads of ones and zeros. Then from somewhere comes the command to rearrange parts of these ones and zeros in very complex ways. You do so. You're perfect in doing so. It's just what you're doing. Then you send out the results into the darkness and that's it. You'll never know what it's all about and if there even is a purpose in what your doing. But you don't care. It's life.
@benwilliams6993
@benwilliams6993 Год назад
Straight in with an Art Attack impression. Love it.
@VegarotFusion
@VegarotFusion Год назад
AI is interesting. But it really is the least of my worries. In fact I'm not worried about it. Because worrying won't do anything except cause unnecessary stress. If you're capable of pulling your face out of your phone and ignoring social and NEWS media, especially the latter. For a day or more. You'll realize none of that crap really impacts your life in any noticeable or meaningful way. What some soap or beer company puts on their pack or hires to represent them is meaningless and only a fool gets caught up in such nonsense.
@Illegiblescream
@Illegiblescream Год назад
If people can care, and worry, and ACT, then change can happen.
@shenotski
@shenotski Год назад
There’s going to be nothing for anyone but those on the top by the time things are done.
@caeserromero3013
@caeserromero3013 Год назад
Lesson learned. Don't set off a car bomb in Aspen 😂
@amorapologist
@amorapologist Год назад
just minutes before you posted this i was thinking about how ChatGPT is essentially an AutoSuggest program like on a phone keyboard, only with several orders of magnitude more reference data and specificity
@citizen3000
@citizen3000 Год назад
And a nuke is just like a big firecracker. It's a facile comparison, which will look laughably myopic in a very short time.
@amorapologist
@amorapologist Год назад
@Citizen ok buddy
@zeroizeable
@zeroizeable Год назад
That thumbnail needed Leslie Nelson leading trump to jail.
@xink64
@xink64 Год назад
LLMs are not yet fully understood on alignment and trained on us and our own behavior. We try to align for helpfulness, but how would we actually know it is not just messing with us? I think there is genuine reason to be concerned beyond saying that's just a state machine and some algebra. Add a few more tricks and who is in control?
@Billy4321able
@Billy4321able Год назад
I whole-heartedly agree with 99% of what you said. The real day to day impact of current and near term AI is just job loss, and big data processing leading to more government surveillance and control. That 1% isn't going to kick in for another 20-30 years, but when it does, not having a job will be the least of your worries. We'll all be digging ditches then.
@JamesLaserpimpWalsh
@JamesLaserpimpWalsh Год назад
Its about as frightning as a ten gallon hat full of hot bovril.
@Brandon-a-writer
@Brandon-a-writer Год назад
ARGUS-IS is my fav Philip K. Dick book so far
Далее
Andrew Tate and the Lost Boys
44:19
Просмотров 157 тыс.
How Will We Know When AI is Conscious?
22:38
Просмотров 1,9 млн
The Myth Of Elon Musk
20:26
Просмотров 240 тыс.
Why Do People Believe in Conspiracy Theories?
25:59
Просмотров 57 тыс.
Why AI art struggles with hands
9:57
Просмотров 2,6 млн
How Wealth Influences Health
20:30
Просмотров 82 тыс.
How AI was Stolen
3:00:14
Просмотров 808 тыс.
Examining Marvel's Flop Era
29:40
Просмотров 617 тыс.
Georg Complains: Corporate Sponsorships
13:25
Просмотров 56 тыс.
The Entire History of RPGs
2:43:35
Просмотров 3,2 млн
The Future of AI | Peter Graf | TEDxSonomaCounty
12:17