Robert Miles AI Safety
Videos about Artificial Intelligence Safety Research, for everyone.

AI is leaping forward right now; it's only a matter of time before we develop true Artificial General Intelligence, and there are a lot of different ways that this could go badly wrong for us. Putting aside the science fiction, this channel is about AI Safety research - humanity's best attempt to foresee the problems AI might pose and work out ways to ensure that our AI developments are safe and beneficial.
We Were Right! Real Inner Misalignment
11:47
2 years ago
Intro to AI Safety, Remastered
18:05
3 years ago
10 Reasons to Ignore AI Safety
16:29
4 years ago
9 Examples of Specification Gaming
9:40
4 years ago
Is AI Safety a Pascal's Mugging?
13:41
5 years ago
A Response to Steven Pinker on AI
15:38
5 years ago
AI Safety Gridworlds
7:23
6 years ago
What can AGI do? I/O and Speed
10:41
6 years ago
Comments
@Badspot 10 hours ago
The YouTube recommendation algorithm is a large model that recommended me this time-sensitive short from 2022, where a simpler algorithm showing only the latest videos probably wouldn't have.
@tiragd928 11 hours ago
AI is making humanity more brain-dead. Why? Because our brains get bored with things when we know AI will tell us everything; we don't have to search for it ourselves.
@kwillo4 19 hours ago
This was epic! THANK YOU. I will do more. Good speech at the end!
@ikon106 1 day ago
Want all of the new video ideas as videos, especially the EU one
@justdiegplus 1 day ago
Most important video on AI on the internet.
@smoceany9478 1 day ago
Ceave Gaming reference, no way
@mikezooper 1 day ago
Pascal was a bit dim in this instance. It’s obvious that nobody chooses to believe in God or not. Irritating.
@mikezooper 1 day ago
I ought to understand this video, but only if I’m wearing a coat.
@IzzyIkigai 2 days ago
It's really only about whether the great filter will be the energy consumption of stupid AIs or an actual AGI killing humanity.
@MrAdamo 2 days ago
I, too, believe in autistic man theory
@davidwebster4457 2 days ago
Robert, let me school you on something: morality is a set of hierarchy-organized priorities
@Older_Mountain-goat_1984 2 days ago
Nope, the foreground music is not loud enough, I can just barely hear you.
@davidwebster4457 2 days ago
Here's my take on AI safety: What can an AGI running on a linux machine even do anyway? Could it even know what it was running on? Like, you know you're a human brain. Can you hack yourself? No? Then why would an AGI be able to hack its own brain (the computer)?
@CeciReallyLovesYou 2 days ago
Definitely want a video on idea #4
@CeciReallyLovesYou 2 days ago
The autism joke and citation... brilliant, hilarious.
@CeciReallyLovesYou 2 days ago
I'm not even 4 minutes in and this is INCREDIBLE.
@TacticalFluke09 2 days ago
20:40 caused me to cackle alone in my bedroom
@mikezooper 2 days ago
A calmer version of Kenny Everett
@ZakKohler 2 days ago
Energy legs
@mikezooper 3 days ago
It says they're all closed and the sign-up document says the deadline was 2023. YouTube recommended this. Not your fault.
@DeusExRequiem 3 days ago
I mean, if we could figure this out, we could determine whether reality is even real, so I don't think this is a solvable problem, because you have to trust sensors that may not even be physically accurate. If I was making a game about a lab in the past discovering some complex thing, I wouldn't make their machines functional; the machine would just work as the history records say it did in those moments, and screens would show period-accurate data, but that's about it.
@PrakharMehrotra 3 days ago
Haha love the Pratchett reference @1:08
@Mrpersonman0 3 days ago
AI can build rocket ships. That is all.
@aleksmilanov5689 3 days ago
Rob, resist being changed. What you do is amazing. Don't stop! And if we have to wait a year for the next video, so be it!
@eternisedDragon7 3 days ago
Hey Robert, you might want to check out SuperMegaMimi, because she admires & is into you (and everything about AI safety), as she showed last stream.
@Blaineworld 3 days ago
12:48 this genuinely made me rewatch this carefully several times because no she wouldn’t look in the box. and i also got confused when i was just listening the first time because i assumed sally still had the basket during the walk lol.
@mronewheeler 3 days ago
I honestly can't see the claim of AI being an extinction risk as anything other than unconventional marketing by large AI developers. If they are so concerned then why don't they stop? Well, because they're not concerned. They are trying to exaggerate the power of their models to an insane degree to drive up their stock prices, using their perceived legitimacy as researchers/developers as leverage. They are likely also trying to push for regulation in a way that would allow themselves a seat at the table when deciding on said regulation. Don't trust these companies, they are biased. Those are my two cents.
@mronewheeler 3 days ago
I'm gonna be the dismissive boomer here and just say that AGI will not happen for a loooong time, perhaps not ever. I might be wrong but I just don't see it. Still, it's good to plan for the worst so I don't dismiss Miles' work here
@Lunacorva 3 days ago
In a way, this also addresses one of the flaws in capitalism. A company is rewarded for selling a large amount of product. In theory, this should mean it makes a product that lots of people want to buy due to its quality. But the company instead learns that it can achieve reward by preventing other companies from selling similar products, or by tampering with the reward system to get more money for less product, or any number of things that fit the technical definition of capitalism but not at all the intent.
@mikezooper 3 days ago
I’m smart enough to know it’s not worth the money 😂
@FeepingCreature 4 days ago
Hey so have you heard about Ilya's startup yet----
@j.j.9538 4 days ago
China won't pause it
@timofey7773 4 days ago
Rain World sounds, let's go
@Dr.acai.jr. 5 days ago
Thumbnailing like depp almost made me miss this
@esterhammerfic 5 days ago
Your videos sparked my interest in AI safety, but I can relate to wanting to present things perfectly. I hope you make more videos because your voice and thoughts on the topic are important!
@TeslaAvenger 5 days ago
I saw Owain Evans's portrait and had a primal fear that it was an AI-generated image. Looks like the prompt was Todd Howard + Linus of LTT.
@anaximeno 5 days ago
Thanks for sharing your thoughts on this. Although I already had some of the concerns, things like the letter to halt AI development for six months, with Elon signing it while at the same time entering the race he was pledging to halt, gave me the impression that it wasn't serious. The point about public weights being open-sourced in these conditions also makes sense, so this was a very clarifying video!
@mikezooper 5 days ago
You earned my respect in one video. A definite subscribe. Amazing
@mikezooper 5 days ago
My neighbour and I spoke today about his grandchildren having to deal with a difficult world because of AI. He's just a regular guy.
@mikezooper 5 days ago
Anyone who thought pausing it was feasible was dumb. It's astounding that intelligent people thought it. Reason against? If the West pauses, China won't, etc.
@martymoo 5 days ago
There aren't many people who understand it. There aren't enough people who understand it. There isn't much hope, but there is still hope. ❤🤖
@SheezyBites 6 days ago
"alongside... pandemics" Well, there's your problem! We still put profits above lives in one of those in recent memory, if we prioritise AI risks in the same way we're totally boned.
@AestimaProject 6 days ago
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-2ziuPUeewK0.html Why does Robert have 3 hands?
@keishakwok4333 6 days ago
If AGI agents' instrumental goal is indeed self-preservation and resource acquisition, then these agents' attitude towards humans would rely on whether they predict a world with or without humans would be better for their survival. The question is if AGI agents agree with the argument that "Humans often make irrational decisions that make the world worse, and the world will be better for AI if humans act rationally all the time, like AI, which fundamentally works rationally." (do they?). If AGI agents agree with this statement, then perhaps they may desire that humans don't exist. This is assuming that they no longer consider humans' contribution to their improvement valuable enough. Anyways, would an AGI world be purely utilitarian, since they are all working towards goals, at least the instrumental one of self-preservation? But aren't there philosophical schools among humans that argue fervently against a utilitarian society? If so, would AGI pick that up too? Will AGI become like us in terms of having diverse opinions (I think they will)?
@user-wv1eg7zh8w 7 days ago
2:23 It is possible to logically deduce the need to put on a coat (or the need to die) based on the fact that he already has some Goal-Statements that cause him to respond to his interlocutor's statements
@waththis 7 days ago
Nothing is funnier to me than an "other other hand" joke in a video about generative AI.
@diablominero 7 days ago
We can't generate two numbers that multiply to RSA-2048, but couldn't we alter the RAM of the running AI model to make it think two numbers multiplied to RSA-2048? Or is that impossible without further interpretability work?
@TheDariusFoxx 7 days ago
Welcome back Rob, you were missed.
@macdmacd7896 7 days ago
AI safety needs ultra-hyper parameters to remind (stop) the AI from getting carried away with its hallucinations (natural probabilistic branches of results) every 10 minutes... a genius prodigy kid is still a kid. The AI creator needs to act like a father or a god and give them a 'holy book' (security manual) so they can refer to it and practice the ultra-hyper parameters as part of their computation system (as a self-safety measure)... but you'd need to create artificial consciousness before the buffer system, and that's a LOL in tears.
@ADurXD 7 days ago
I feel safe in this good chap's hands. Just hoping he can convince all the governments.