Videos about Artificial Intelligence Safety Research, for everyone.
AI is leaping forward right now; it's only a matter of time before we develop true Artificial General Intelligence, and there are a lot of different ways this could go badly wrong for us. Putting aside the science fiction, this channel is about AI Safety research - humanity's best attempt to foresee the problems AI might pose and work out ways to ensure that our AI developments are safe and beneficial.
The YouTube recommendation algorithm is a large model that has recommended me this time-sensitive short from 2022, where a simpler algorithm showing only the latest videos would probably not do this.
AI is making humanity more brain dead. Why? Because our brains get bored with things, since we know AI will tell us everything and we don't have to search for it ourselves.
Here's my take on AI safety: what can an AGI running on a Linux machine even do anyway? Could it even know what it was running on? Like, you know you're a human brain. Can you hack yourself? No? Then why would an AGI be able to hack its own brain (the computer)?
I mean, if we could figure this out, we could determine whether reality is even real, so I don't think this is a solvable problem: you're having to trust sensors that may not even be physically accurate. If I were making a game about a lab in the past discovering some complex thing, I wouldn't make their machines functional; the machines would just work as the history records say they did in those moments, and the screens would show period-accurate data, but that's about it.
12:48 This genuinely made me rewatch it carefully several times, because no, she wouldn't look in the box. I also got confused the first time, when I was just listening, because I assumed Sally still had the basket during the walk lol.
I honestly can't see the claim of AI being an extinction risk as anything other than unconventional marketing by large AI developers. If they are so concerned then why don't they stop? Well, because they're not concerned. They are trying to exaggerate the power of their models to an insane degree to drive up their stock prices, using their perceived legitimacy as researchers/developers as leverage. They are likely also trying to push for regulation in a way that would allow themselves a seat at the table when deciding on said regulation. Don't trust these companies, they are biased. Those are my two cents.
I'm gonna be the dismissive boomer here and just say that AGI will not happen for a loooong time, perhaps not ever. I might be wrong but I just don't see it. Still, it's good to plan for the worst so I don't dismiss Miles' work here
In a way, this also addresses one of the flaws in capitalism. A company is rewarded for selling a large amount of product. In theory, this should mean it makes a product that lots of people want to buy due to its quality. But the company instead learns that it can achieve reward by preventing other companies from selling similar products, or by tampering with the reward system to get more money for less product, or any number of things that fit the technical definition of capitalism, but not at all the intent.
Your videos sparked my interest in AI safety, but I can relate to wanting to present things perfectly. I hope you make more videos because your voice and thoughts on the topic are important!
Thanks for sharing your thoughts on this. I already had some of these concerns: things like the letter calling for a six-month halt to AI development, with Elon signing it while simultaneously entering the race he was pledging to halt, gave me the impression it wasn't serious. The point about model weights being open-sourced under these conditions also makes sense, so this was a very clarifying video!
"alongside... pandemics" Well, there's your problem! We still put profits above lives in one of those in recent memory, if we prioritise AI risks in the same way we're totally boned.
If AGI agents' instrumental goals are indeed self-preservation and resource acquisition, then their attitude towards humans would depend on whether they predict a world with or without humans would be better for their survival. The question is whether AGI agents agree with the argument that "humans often make irrational decisions that make the world worse, and the world will be better for AI if everyone acts rationally all the time, like AI, which fundamentally works rationally" (do they?). If AGI agents agree with this statement, then perhaps they may desire that humans not exist. This assumes they no longer consider humans' contribution to their improvement valuable enough. Anyway, would an AGI world be purely utilitarian, since they are all working towards goals, at least the instrumental one of self-preservation? But aren't there philosophical schools among humans that argue fervently against a utilitarian society? If so, would AGI pick that up too? Will AGI become like us in having diverse opinions (I think it will)?
2:23 It is possible to logically deduce the need to put on a coat (or to accept dying) from the fact that he already holds some goal statements, which are what cause him to respond to his interlocutor's statements at all.
We can't generate two numbers that multiply to RSA-2048, but couldn't we alter the RAM of the running AI model to make it think two numbers multiplied to RSA-2048? Or is that impossible without further interpretability work?
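A toy sketch of why this distinction matters (the modulus here is a small stand-in I made up; the real RSA-2048 challenge number is 617 decimal digits, and the "model" here is just a placeholder class, not anything from the video):

```python
# Toy version of the "RSA-2048 as a deployment signal" idea: a model that wants
# proof it is in deployment can demand a valid factorization of a number nobody
# can currently factor. Verifying a claimed factorization is cheap (one multiply);
# producing one is infeasible, so the *input* can't be forged during training.
TOY_MODULUS = 1000003 * 1000033  # stand-in for the real 2048-bit challenge number

def is_valid_factorization(p: int, q: int, n: int = TOY_MODULUS) -> bool:
    """Cheap check: anyone can verify a claimed factorization by multiplying."""
    return 1 < p < n and 1 < q < n and p * q == n

class ToyModel:
    """Placeholder for a model gating its behavior on the deployment signal."""
    def __init__(self) -> None:
        self.believes_deployed = False  # internal state we'd want to patch

    def observe(self, p: int, q: int) -> None:
        self.believes_deployed = is_valid_factorization(p, q)

m = ToyModel()
m.observe(3, 5)  # forged input: fails the check, belief stays False
print(m.believes_deployed)

# The comment's proposal is to skip forging the input and patch the *belief*
# directly. Trivial here, because the flag is a named attribute:
m.believes_deployed = True
# In a real network that flag is a pattern of activations with no label, so
# finding what to patch is exactly the interpretability problem.
```

The design point: the check is asymmetric on purpose (easy to verify, hard to satisfy), so spoofing has to happen on the belief side, not the input side.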
AI safety needs ultra-hyper parameters to remind (stop) AI from getting carried away with its hallucinations (the natural probabilistic branching of results) every 10 minutes... a genius prodigy kid is still a kid. The AI's creator needs to act like a father or a god: give it a 'holy book' (security manual) it can refer to, and make practicing those ultra-hyper parameters part of its computation system (as a self-safety measure)... but you'd need to create artificial consciousness before that buffer system, and that's a LOL in tears.