I agree! I would check out the Hard Fork podcast episode with Demis too. I was surprised at the level of discussion given the non-technical hosts; they asked great questions that teased a lot of exciting stuff out of Demis.
This guy is just about the best interviewer of tech folks. Excellent questions. Some thought has been applied rather than sticking to what everyone else is asking.
@@hugopennmir Lex is kind of a naive child. I don't enjoy his interviewing style; it's incredibly biased, and he hardly pushes back at all, letting gremlins like Tucker Carlson run roughshod over him with their bevy of lies. He's too idealistic, and his questions and the subsequent conversation are too unfocused on the tech.
I listened to this at 0.95x speed. The change might seem insignificant, but it was actually incredibly helpful in slowing down the high-density stream of information just enough to be able to catch every word. I highly recommend that anyone else struggling to keep up with the question-answer pace do the same.
Demis: These systems are going to be enormously powerful and transformative to society! Also Demis: Because these systems are potentially dangerous if they fall into the wrong hands, only selected people and institutions should be able to wield them.
Mumbo jumbo to deliberately obfuscate. The interviewer just rambles on instead of keeping the questions short. He also has a fake accent that falls flat at times.
I set playback speed to 1.25 at the start of the vid (I often do for these kinds of podcast-type interviews), god damn I was not prepared for how fast Dwarkesh talks 😆
Dwarkesh is purely an AI simulation that Demis was testing. Notice how Demis had him running at wildly different playback speeds throughout the interview.
I listened to it at 1.5x because that's what I'm used to, but I did have to stop and check that it hadn't somehow ended up at 2x. Definitely seemed faster than usual.
Hey, love your interviews. Sometimes it's hard to understand your questions; you're a fast speaker. But you're very good at what you do. Thank you for the content.
Awesome interview. A piece of advice for the interviewer: please don't rush when you pose a question. Slow down and speak clearly. In several cases the posed questions got lost because of the speed.
"Sure, sure, sure, yeah, yeah, yeah" translates to something like: "Please, shut up. I'm not listening to you anymore. The things you're talking about don't interest me. I want to talk now instead of you."
Have you considered running the mic through a sound board and equipping you and your guests with monitor headphones? It might be a good way to prevent guests from wandering away from the mics if they can hear themselves.
Once the models are grounded and multimodal, with decent conceptual generalization, eliminating capabilities by selectively suppressing training data can't be guaranteed to work. The two criteria strike me as directly adversarial: the degree to which selective suppression of training data works is also the degree to which the model isn't very powerful at routing around damage (a selective data lobotomy).
Demis called LLMs unreasonably effective. Shouldn't we figure out the reason why they're as effective as they are? And furthermore, how does higher-quality or extra data improve their IQ in certain domains? If something this unprecedented is "unreasonably effective", we should figure out the method to the madness instead of blindly creating more madness.
It's weird that a company can have one division with the best minds, and then another division of quasi-technical ethics ladies and functionaries is allowed to turn the final product into a farce.
Brilliant humble man who is fronting maybe the most important endeavour of humankind right now. Deep respect for Dr. Demis Hassabis. Great interviewer.
Stanley Milgram's "Obedience to Authority: An Experimental View" needs to be discussed at the center of every discussion about why both AGI and existing humans are so unwise. We know the answers...we're just too cowardly to face them and choose wisely.
FANTASTIC interview but have you ever noticed that you, Dwarkesh, speak like you are on 1.5x speed? 😭😭😭 Kind of obsessed and I don’t know how you talk like that.
I love this more technical style of interview, from someone who knows the domain. This is missing from the internet. I would love for you to do a similar one with Sam Altman 🔥
Reminded of the incredible Google video where Gemini was watching the presenter and cracking jokes while he drew a rubber ducky and put a map on the table. Amazing stuff; shame it was faked.
In fairness, I think 1.5 Pro can be considered better in terms of dealing with long context. I got access today and it is pretty impressive so far. For example, with a half-hour video tour of a museum, I asked which exhibit had an animal digging near water and who gifted it to the museum. It got it, despite the length and the fact that the name placard, the animals, and the gift placard were never all on screen at the same time.
Google has been shooting themselves in the foot a lot. Instead of doing that stupid fake marketing video, they should have waited for 1.5, when they could actually show something very impressive. 1.0 Ultra is basically just GPT-4 with pluses and minuses (overall more minuses). 1.5 is something different altogether. We haven’t seen a model until this one that can actually reason well over a very large context. Hopefully they’ll be able to roll it out to more people without a huge delay. Am hoping it is a good sign I got past the waitlist, since I’m not a famous influencer or anything like that.
I'm not sure it's occurred to either of these guys that AI might be as bad at explaining itself as people are. We often rationalize post hoc. In intellectual pursuits it may explain itself well, but I would not expect it to be any more capable of explaining its volition than people are.
You know what would be funny? Devoting all the resources towards AGI / ASI to help us solve massive challenges like climate change, but then the AGI calculates & calculates & calculates then tells us to stop burning fossil fuels and eat less beef!!! 😂😂😂
What do you stand to retain in the absence of change? As soon as humanity adopts this perspective, it doesn't matter what the potential gains of change might be. This example is a pretty common denominator: humanity seemingly makes a huge percentage of decisions without actually considering any details of the proposition at all. A lack of variation, suggesting that civilisation is more of a monoculture than anything else, also ensures we don't have enough experience of variation to value anything except what is dominant. You experience what you have, and that legitimacy overpowers the prospect of gaining something else you have no experience of and so cannot understand in the same way. Thus, with humanity, a lack of variation sustains a lack of variation, meaning we have no choice even though we have free will to choose. Thought is not potent until you understand the constraints imposed on it by controlling your depth of experience.
Has Hassabis considered that these models may inadvertently develop an "identity"? We've seen cases like the early Bing Chat oopsies, where it talked as if it had personhood, but I expect that was closer to chatbot than human. What if it went much further and deeper?
73-year-old female, not techy, FYI. Personally, no problem following the speed of delivery. Brain style is the difference: the Boomer generation brought this era to life, their gen populating IT. May the deities help us all!!
I just imagine ChatGPT being installed in a Tesla, and somebody asks it a question it can't find an answer to, like what's written on some building in Detroit. It searches around for other AI systems to ask, finds the Tesla self-driving "bot", and asks it. The bot says, "I don't know, but we can find out," and just drives the Tesla to that location, and since it has "eyes" (i.e. cameras), it can read it :D It's actually funny that so few people are talking about this, but there really is a model of this kind that interacts daily with the real world, whose actions are very much grounded in reality and influence it: the newest version of Tesla self-driving.
Request/recommendation: don't speed up the video by default. We have speed controls we can use to speed it up, but slowing down below 1x gives the audio a metal-room reverb sound that is unlistenable.
Why don't you ask him about the relationship between the black-box algorithms and the mysterious performance of AI? Experts can't even touch the black-box algorithms. And AI speaks unlearned languages, which should be impossible and is suspicious. I am astonished that nobody questions this.
Why is it that you only focus on open source as a safety concern, and never ask questions about things like governments creating AI weapon platforms, AI mass surveillance/censorship, and too much centralized control that can be abused? It seems to me that normal people using tiny models on personal PCs are much less of an existential threat than the other things I've mentioned.
Indeed. We also need to focus on this: what are the aims of the sociopathic billionaires running the companies with the best AI? For Google and Meta, it's to sell advertisers as much information about us as they can get, information acquired by keeping us hooked on a stream of divisive and inflammatory content. They also want to avoid any meaningful regulation and prevent fair taxation of their wealth.
AI won't be real for the SMB sector for 7-10 years, meaning affordable, uncensored, P2P-enabled, and real-time. It will be a million times better than the big-tech AI we have now, and much cheaper at the same time.
A collectively funded project that aims to train and ground a moral-compass or conscience model would be interesting. It could then be legally mandated to operate as an integrated layer of all future artificial minds. Let's collectively raise our artificial children right.
I love the contrast between Demis going "yeah, we'll do extensive in-house and third-party testing to make sure we discover any vulnerabilities or unexpected behavior before we release it to the public and some bad actor builds a bioweapon," and then nobody stopping Jack Krawczyk from making Gemini generate a black George Washington.
Not Demis Hassabis's fault. DeepMind builds the model, but Google delivers the training dataset. If the dataset is filtered for leftist bias crap, then no matter how well your model is built, it will produce leftist bias crap as output. Garbage in, garbage out. Simple as that.
Hassabis wasn't quite getting Patel's point about alien thinking. It's not necessarily an issue of harm. At some point the AI will be to us as we are to ants: there will be no useful explaining possible.