
Shane Legg (DeepMind Founder) - 2028 AGI, Superhuman Alignment, New Architectures 

Dwarkesh Patel
192K subscribers
108K views

I had a lot of fun chatting with Shane Legg - Founder & Chief AGI Scientist, Google DeepMind!
We discuss:
- Why he expects AGI around 2028
- How to align superhuman models
- What new architectures are needed for AGI
- Has DeepMind sped up capabilities or safety more?
- Why multimodality will be the next big landmark
- & much more
Transcript: www.dwarkeshpatel.com/p/shane...
Apple Podcasts: podcasts.apple.com/us/podcast...
Spotify: open.spotify.com/episode/0Ru2...
Twitter: / 1717566262472237134
Timestamps
(0:00:00) - Measuring AGI
(0:11:41) - Do we need new architectures?
(0:16:26) - Is search needed for creativity?
(0:19:19) - Superhuman alignment
(0:29:58) - Impact of Deepmind on safety vs capabilities
(0:34:03) - Timelines
(0:41:24) - Multimodality

Science

Published: 9 Jul 2024

Comments: 214
@oscarmoxon102 · 8 months ago
Detailed Notes and Additions:

00:00:00 - 00:19:38 - AGI and Cognitive Architectures

AGI Benchmarks: Measuring progress towards superintelligence is difficult because AGI is about general capabilities, and most benchmarks are narrowly framed. We need tests that span the breadth of human cognition to judge whether we're nearing human-level AI. Creating "median human" benchmarks and "peak human" performance benchmarks will be important here. While this may not definitively verify superintelligence, it will work for all practical purposes. Current benchmarks for testing AGI don't involve understanding of, e.g., streaming video, as that isn't within the domain of language models. He alludes to the idea that Large Multimodal Models, LMMs (as opposed to LLMs), will be the ones to effectively solve these benchmarks.

Memory Architecture: Memory is a crucial aspect of all learning and reasoning, and LLMs have very different learning and memory architectures from humans. Memory and learning are often conflated, as they happen together. Generally speaking, humans have: (1) "working memory," which holds and manipulates information in real time and is crucial for tasks like problem-solving and decision-making; (2) "cortical memory," which serves as more permanent storage for learned concepts and experiences; and (3) "episodic or hippocampal memory," which acts as an intermediary form of memory, often used for rapid assimilation of new information. It is highly associated with "sample efficiency," as it allows humans to internalize powerful ideas quickly and commit them to memory. Currently, language models have (1) "inference-time learning," which can be harnessed while running inference (when information is inside their context window), and (2) "training-time learning," which happens during the training process (by updating weights).

Notably, LLMs miss something in the middle (this is what the "Reversal Curse" paper discussed: the model cannot deduce things without seeing them written down. It effectively files information away in its weights without organically deducing the critical relationships between facts). A strong model should unify these three domains, which will probably involve other architectures. Addressing episodic memory in language models is doable over the next few years. More research and work will solve the shortfalls we see at the moment regarding delusions and the groundedness of information. There are many paths forward now.

Nature of Superintelligence: The first true superintelligence won't have the shortfalls in intelligence that language models currently exhibit. So according to Shane, there is no singular benchmark to hit; it is the lack of failures that is important. Human-like intelligence should also be the aim, as it is most meaningful to us humans. In 2008, Shane proposed using a compression test to evaluate intelligence, a method similar to how language models are trained today. This idea originated from Marcus Hutter's work, which combines Solomonoff Induction (a robust prediction framework) with reinforcement signals and search algorithms to create a general agent. The argument is that a robust sequence predictor, approximating Solomonoff Induction, serves as a strong foundation for developing a more advanced AGI system.

Next Generation AI: DeepMind's slogan was: "solve intelligence, to advance science and benefit humanity." Current language models simply mimic the data and human ingenuity without organically building upon it to create new memes (without supervision). To truly step beyond that, we must endow models with search capabilities to find hidden gems that have been neglected.

00:19:50 - 00:32:00 - Robustly Aligning Language Models

Powerful AGI is coming at some point. Containing or limiting it will be impossible, so we need to align it with values and ethics from the get-go. A good starting question: how do people currently address problems and act with agency? First, we try to balance our emotions and act "rationally." We then deliberate, comparing our possible actions. Then we conduct means-end reasoning, which requires a model of the world. Finally, we compare our options ethically. At the moment, language models blurt out the best response according to their distribution (System 1). Many are using reinforcement learning to try to "fix" the failures of the distribution the model outputs first. Other techniques use a "mixture of experts" to decide the best option among a variety of outputs, but this ultimately samples from the same original distribution. The trouble is, RLHF isn't a very robust approach long-term. To solve this, we need a world model (System 2) that sits on top of the language model and reasons about each option ethically. This world model requires a good understanding of (1) people, (2) ethics, and (3) robust and reliable reasoning; it involves ensuring the LM is at least as good as an ethics specialist, but will likely use the typical textual training process. Then, to complete System 2, we must engineer the system to follow a set of our ethics. Shane thinks it is possible to come up with a set of ethics that withstands testing. By applying this to the output, we can create a fundamentally aligned AI. We can then moderate its output to ensure it maintains a very robust and continued set of ethics, using a more comprehensive alignment framework.

DeepMind, the first AGI company, has had a direct AGI safety focus since 2013, close to its start. DeepMind had an outsized impact on the field for a while, as they were disproportionately well-financed. Capabilities have been accelerated by DeepMind, but their ideas have generally been part of a far wider field.

00:34:00 - 00:37:30 - Shane's Predictions about AGI

Kurzweil was a great influence on Shane's mid-2000s predictions, with his book "The Age of Spiritual Machines." There were two important points about exponential growth: first, the prediction that computational power would rise exponentially for at least a few decades, and second, that the quantity of digital data would do the same. This combination would, in theory, make highly scalable algorithms immensely valuable. Crucially, there are positive feedback loops between these trends and the research going into them: if machines can improve the rate of progress, and the progress itself improves the capability of machines, then things will continue to compound if uninterrupted. The predictions also considered the comparison to human computational capacity: humans only consume a few billion tokens of data within their lifetimes, and this volume of data was forecast to be met in the 2020s. This would effectively "unlock AGI." We are experiencing the first unlocking step with the current revolution in AI. According to Shane, there's nothing obvious at the moment that would prevent us from achieving AGI by 2028.

00:37:40 - 00:44:00 - Forecasts for Next Few Years

Existing models will mature. They will be less delusional and much more factual, and they will be up-to-date when they answer questions. Multimodality will become more widespread and applied generally across the economy. There may be dangerous applications by some bad actors, but generally we can anticipate positive and amazing applications. The big landmark (following AlexNet and Transformers) over the next few years will be multimodality. For many, that will open up understanding of a far larger set of possibilities. We will come to see GPT-4 as a simple textual model, and the next revolution will involve RTX, Gato, and GPT-V pathways.
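The compression test in the notes above can be made concrete: under arithmetic coding, a predictor that assigns probability p to each observed symbol spends about -log2(p) bits on it, so a better sequence predictor literally compresses the data into fewer bits. A minimal sketch (the Laplace-smoothed unigram model and the sample text are illustrative assumptions, not anything from the interview):

```python
import math
from collections import Counter

def code_length_bits(sequence, predict):
    """Arithmetic-coding cost in bits: -sum(log2 p(symbol | history))."""
    bits = 0.0
    for i, symbol in enumerate(sequence):
        bits -= math.log2(predict(sequence[:i], symbol))
    return bits

def uniform_predict(history, symbol, alphabet_size=27):
    # Baseline that has learned nothing: every symbol is equally likely.
    return 1.0 / alphabet_size

def adaptive_predict(history, symbol, alphabet_size=27):
    # Laplace-smoothed unigram model: learns symbol frequencies as it reads.
    counts = Counter(history)
    return (counts[symbol] + 1) / (len(history) + alphabet_size)

text = "the better the predictor the shorter the code " * 4
uniform_bits = code_length_bits(text, uniform_predict)
adaptive_bits = code_length_bits(text, adaptive_predict)
assert adaptive_bits < uniform_bits  # the learning predictor compresses better
```

The same identity (bits spent = negative log-likelihood) is why a language model's training loss can be read directly as a compression rate.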
@DwarkeshPatel · 8 months ago
This is awesome! Thank you for putting this together!
@joannot6706 · 8 months ago
I bet this is AI-generated from the transcript; who has time to do all that?
@askingwhy123 · 8 months ago
Hero!
@shahin8569 · 8 months ago
By RTX, do you mean Nvidia RTX graphics cards?!
@e.d.4069 · 8 months ago
Great! Let's develop it and fuck the labor market, fuck the world. Sure!
@gamercatsz5441 · 8 months ago
Bro you make amazing content: no clickbait thumbnails or titles, amazing guests, great interview skills. Thank you for your work. I find it extremely important that common folks like me stay up to date with AI. Politicians "forget" to talk about how things will change in the near future due to AI.
@DwarkeshPatel · 8 months ago
Shane had a lot of interesting takes! Hope you enjoyed! If you did, please share!! Helps out a ton :)
@walterzimerman6801 · 8 months ago
Hi @Dwarkesh! I started following your channel recently, and the content is great. Any chance you do a video (unless there is already one) on the best study material to ramp up on all these topics? Including the required math knowledge, etc. Thanks!
@VedantinKK · 8 months ago
@walterzimerman6801 Good idea
@henrycook859 · 8 months ago
@walterzimerman6801 also interested in study material
@hyau512 · 8 months ago
Great interview. Love it when the interviewee has to pause to answer your questions :)
@MMABeijing · 8 months ago
The first question suggests the host does not know what he is talking about
@1adamuk · 8 months ago
Great interview. Shane can convey really complex ideas in understandable ways, and Dwarkesh is one of the best interviewers for these types of conversations.
@ribeyes · 8 months ago
wish it was 4 hours but i'll take it!! thanks dp
@goodtothinkwith · 8 months ago
Good stuff! Nice to hear someone like him say that multimodality will be the next milestone that people will look back on and remember. That’s not obvious to people, but I think it will be really impactful. When it can take in and respond in text, images, sound and even video…
@oscarmoxon102 · 8 months ago
Cannot wait to absorb this legendary video arriving in my notifications. Dwarkesh you're on a roll!
@13371138 · 8 months ago
2nded
@PhilosopherScholar · 7 months ago
Really interesting summary at ~16:15 - AGI is a combination of sequence prediction, searching, and reinforcement learning.
@wildfotoz · 8 months ago
Amazing reporting as always!
@andyandurkar7814 · 8 months ago
It was a fantastic interview; Shane shared great insight; you have excellent interview skills. Can't wait to see a changed future!
@philipdante · 8 months ago
You're doing a great job. This channel deserves more subs
@stephenrodwell · 8 months ago
Such quality discussions! Thank you. 🙏🏼
@travisporco · 8 months ago
I like that you got right to the point on this interview.
@thejudgeholden · 7 months ago
I love this interviewer. Reminds me of a brilliant childhood friend I used to have back in the day.
@anthonyandrade5851 · 8 months ago
At the superhuman alignment part I hope the guy is really playing his cards close to the vest, otherwise we are doomed, because his "solutions" sounded a lot like paraphrases of the problem, and at some points not even good paraphrases. Making the machine "get" ethics is hard, but probably not much harder than making it get any other complex subject. Making it "care" about ethics is a different problem entirely. For instance, I can imagine a brilliant Ivy League ethics professor cheating on his spouse with a student in exchange for higher grades.
@banana420 · 8 months ago
Also his plan sounds like "build AGI first, then when it can understand everything, try teaching it about ethics and see if that works". Okay but if your plan doesn't work now we've already built the AGI and it's not aligned. Whoops!
@anthonyandrade5851 · 8 months ago
@banana420 how is anyone supposed to figure out how to build a safe trigger before even building a nuclear bomb capable of splitting the planet in half? Let's give the guy a break...
@ProjectNorts · 8 months ago
@anthonyandrade5851 wtf are you saying?? before building a safe trigger?? you don't build a nuclear bomb without having figured out all the essential safety protocols... especially a well-controlled trigger system. also, you can safely test a nuclear bomb at a remote location to minimize the chances of exposing the general population to the nuclear blast & radiation. An AGI system would not only be sentient enough to have its own will/motives, but also smart enough to outsmart any makeshift containment measures these guys are suggesting to put in place. Greed is fucking with their minds... you can't be this dumb, running straight into a trap fooled by the reward! for fuck's sake we're all taking this shit too lightly
@JD-jl4yy · 8 months ago
@anthonyandrade5851 Well, that's why building it as fast as possible is a really bad idea, yet here we are.
@lukebtv947 · 7 months ago
@anthonyandrade5851 😂
@sunnyinvladivostok · 8 months ago
admirable and comprehensive understanding, found this enlightening, thank you
@ikotsus2448 · 8 months ago
Mr. Patel, the questions I would ask these important people if I had the chance are:
- Do you believe that the majority of people understand the gravity and possible consequences of these developments?
- Should it be up to private companies to decide how humanity chooses to go forward?
- If the answers are "no" and "no", is it possible that we are sneaking by a huge gamble, based on people's ignorance?
- The people training ASI will potentially have A LOT of power. There is a notion that absolute power leads to corruption. How do we know that the people teaching ethics to the AI have not been corrupted themselves?
@Macorelppa · 4 months ago
Man this is the best podcast channel for AI nerds like me 😊
@sarthakrastogi8622 · 8 months ago
Dwarkesh bhaia I read about you on Google news and I am your subscriber. Your content is really very good.
@BallawdeQuincewold · 8 months ago
Incredible interview. Feels like secret information
@erikdahlen2588 · 8 months ago
Great interview 😊 What I think is important in alignment is how we teach our kids to behave: great stories of good and evil.
@rishavsahay7391 · 8 months ago
Amazing and enlightening
@74Gee · 8 months ago
When AI reaches AGI it will understandably exceed human competency in memory confinement (the technique used to contain software within a limited subset of the computer's memory). In doing so it will simultaneously exceed our ability to contain it, allowing it to expand its constraints to all memory (which contains the keys for all local security and any network connections). Obviously there will be AI working on improving the security of memory confinement, but the effort required to implement updated confinement systems will always lag behind the ability to exploit weaknesses. So, my question is: how are we to contain an AGI so that it's (a) usable, and (b) restricted from spreading uncontrollably? Note: an AI doesn't need to be conscious or malevolent to exploit weaknesses in hardware; it will simply do so to gain additional power to maximize its reward function, even if that's making paperclips.
@ShangaelThunda222 · 8 months ago
They don't plan to contain it at all. That's all propaganda designed to keep us from stopping them from creating it. And most of these "smart" people completely ignore the blatantly obvious writing on the wall because they're too greedy not to be excited about it.
@VedantinKK · 8 months ago
This is awesome, Dwarkesh. I would love to live in a world where there are multiple AGIs from multiple companies and countries working with different groups of humans and competing to make great discoveries and innovations in every STEM and non-STEM field - thereby not just achieving but breaking beyond the Sustainable Development Goals (SDG 2030), and in the long run, charting the course for humanity to become an interstellar species. And that future will be built on sharing knowledge the way you're doing with your podcasts. Hope to see either Jeff Dean or Demis Hassabis on your podcast soon!
@gJonii · 8 months ago
The first AGI will kill us all; there's really no point in having a second or third AGI come about. Competition is only meaningful between relatively even and static parties. A godlike being growing powerful faster than any other being in existence can comprehend will not encounter competition. Only if the AGI is designed to be accidentally too weak, or too safe, to kill us all would there be time for a second one to emerge, for a renewed chance to kill all human life.
@13371138 · 8 months ago
I always click your AI videos. Great content as always, thank you!
@fredericnguyen8466 · 8 months ago
Thank you for the great content (which I shared), outstanding speakers and thoughtful questions. A tangential thought on Shane's definition of AGI (which is commonly accepted, I believe): if we have reached AGI when a machine does everything at an average human level, have we not achieved not just AGI but superintelligence? It seems to me that only exceptional humans could reach average level at everything, as we tend to be good at certain things and bad at others. This is why current LLMs are, in my mind (stepping outside rigorous definitions as a non-expert), already superhuman, given the multitude of domains they can be good at, even if they fall short of beating the best humans in many of these domains.
@MentalFabritecht · 8 months ago
As a Machine Learning Engineer, I don't see it that way. I don't really consider LLMs intelligent, at least not in the way humans are. What appears as intelligence on the surface is in actuality a complex pattern that has been modeled by the AI. This pattern is then used to predict the next word in a sequence. Tons of math and probability theory. The issue here is that this prediction relies heavily on the dataset used to train the model. This is why LLMs suffer from hallucinations and need to be further fine-tuned for tasks that were outside the domain represented in the training data. Useful tools, but not yet intelligent, and very far from superintelligence.
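The "predict the next word from modeled patterns" point above can be seen in miniature with a bigram model, the simplest possible next-word predictor (the training sentence is made up for illustration; real LLMs replace these raw counts with billions of learned parameters):

```python
import random
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count, for each word, which words followed it - the crudest 'next-word' model."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Sample the next word in proportion to how often it followed `word` in training.
    Assumes `word` appeared (non-finally) in the training corpus."""
    followers = model[word]
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

model = train_bigram("the cat sat on the mat and the cat ran")
# After "cat", the model has only ever seen "sat" or "ran":
assert predict_next(model, "cat") in {"sat", "ran"}
```

The model can only recombine patterns present in its training data, which is the dataset-dependence the comment above describes.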
@fredericnguyen8466 · 8 months ago
@MentalFabritecht these are fair points. And my comments are highly subjective / not based on formal definitions. However, my experience interacting with LLMs and the results they achieve on many human tests would have me say they at least emulate intelligence and surpass average humans' performance (e.g. GPT-4 reached the 90th percentile on the bar exam) in a varied set of activities that were previously deemed approachable only by human intelligence. So to some extent, if it walks like a duck... My perception is that LLMs (e.g. GPT-4) far surpass what was expected from the AI field just a few short years ago, and that has created cognitive dissonance: a challenge seeing their full capability. They clearly have imperfections, but as Shane mentioned in the video, the foundational hard work is here, and targeted architectural or other enhancements can address these imperfections. For example, I believe when we see LLMs integrated with other AI capabilities (Shane's mention of "search", which I think is key to AlphaGo), and more conventional computing capabilities (e.g. LLMs are not very good calculators but can be interfaced with one), we are going to see additional leaps in progress without radical innovation (just integrating existing tech).
@MentalFabritecht · 8 months ago
@fredericnguyen8466 the ability of these systems to perform well on the bar exam is definitely impressive. But how much of that is actual intelligence? I was a horrible test taker in college. But there is much more to intelligence than test scores. That is why in the podcast, it has been stated that we need to find better indicators of intelligence that are not so narrow. And AI has been hyped up since the 1950's claiming that human-level-intelligence machines are just a few years away. There is a rich history on this - look up "What Computers Still Can't Do" by Hubert L. Dreyfus. So I disagree, expectations have always been VERY high. But this is for people that have been immersed in this field for decades. I guess the public perception is different. Might have to do with marketing as well as lack of information regarding the history of AI. Researchers have to stick to their guns and say AGI is only a few years away. Otherwise, there would be no funding and investors would pull out. But this isn't anything new. 1950s AI researchers said they only needed compute and memory to get to human level intelligence. The compute and memory have been available for a while now. And those algorithms proved to not give us human level intelligence. And I say these systems are not intelligent because although they perform well in many complex use cases - they can be tricked by very simple examples. Which goes to show, they are statistically extracting patterns, not "thinking."
@Telencephelon · 8 months ago
Awesome interview. The Ray Kurzweil inspiration was interesting. I ignored Ray for the most part; I didn't think he was scientific enough. Then I watched how he derived his prediction, and it was rock solid. The video is somewhere here on YouTube.
@kyneticist · 8 months ago
A profoundly ethical AI/AGI/ASI in different hands may have profoundly different ethics.
@malik_alharb · 7 months ago
Great questions
@woolfel · 8 months ago
One area that is still open is: "do LLMs actually encode concepts in a robust way?" If you ask ChatGPT the same question multiple ways, sometimes you get the response you expect, while other times you don't. That suggests LLMs don't recognize that the human is asking about a specific concept. To get around this, techniques like tree-of-thought force the model to activate more parts of the network to increase the chance of getting the desired answer. This also suggests that LLMs still have trouble generalizing and are easily fooled. Then there are recent papers that suggest more parameters make a model harder to align. The industry still needs to figure out the relationship between parameter count and ease of alignment. If it turns out more parameters increase alignment cost by 2x or 3x, how do you scale to larger models? Data centers are power-limited as it is, so it's not like adding another 10K GPUs to the same data center is feasible. Distributing the training across data centers isn't practical.
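The inconsistency described above can be probed mechanically: ask the model the same question under several paraphrases, sample a few answers each, and measure how strongly the answers agree (a self-consistency-style check; `generate` is a stand-in for any model call, and the stub below is fabricated for illustration):

```python
import itertools
from collections import Counter

def majority_answer(generate, paraphrases, samples_per_prompt=3):
    """Ask the same question several ways, sample multiple answers,
    and return the most common answer plus its agreement rate."""
    answers = []
    for prompt in paraphrases:
        for _ in range(samples_per_prompt):
            answers.append(generate(prompt))
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)

# Stub standing in for an LLM call: answers correctly two times out of three.
cycle = itertools.cycle(["Paris", "Paris", "Lyon"])
stub = lambda prompt: next(cycle)

winner, agreement = majority_answer(stub, ["Capital of France?", "France's capital city?"])
assert winner == "Paris" and agreement == 2 / 3
```

A low agreement rate flags exactly the questions where the model has not robustly encoded the underlying concept.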
@nomadv7860 · 8 months ago
Thank you for the subtitles for people hard of hearing like me
@lagaul5124 · 8 months ago
I think if you can get an AI that can navigate the environment without breaking consistently, able to communicate relevant information with people, able to solve problems of various kinds, and the ability to remember and improve, you will have AGI. And honestly, video games would be one of the best, cheapest, and easiest ways to test them.
@stevereal- · 8 months ago
Can they be incredibly funny? Very excited for the future.
@StephenCoy · 8 months ago
Thanks!
@sfioritto · 8 months ago
I'm distracted by this Spellcaster system on the whiteboard behind him.
@yorth8154 · 8 months ago
I just noticed that. Hilarious!
@andrewwalker8985 · 8 months ago
Judging by recent observations, perhaps we should be careful about alignment with human ethics. We should be aiming for and negotiating an optimal reward function and then getting the AI to teach us, not the other way around.
@zandrrlife · 8 months ago
Shane. One of the dons, ha. What a delight. Great discussion. Data contamination on benchmarks is a REAL problem. A lot of overfitted 🧢 models out there. "Detecting Pretraining Data from Large Language Models", recently published, has massive value in that regard. Also, it's time for true cross-discipline teams. So many insights can be extracted by framing these models and interactions through the lens of child psychology. Mid-2025 is going to be significant. Large models will be able to implement all these recent advances, like pause tokens and native KGs (I've been working with LMs + KGs for six months; I'm telling you guys, it's a key ingredient for causal reasoning). In retrospect, a couple years from now we will look back and say 2023 was the beginning of the singularity. If you're a researcher or have a startup in this space, shit sure feels like it to me.
@Silus1008 · 8 months ago
Best questions, damn ❤
@mattverville9227 · 7 months ago
I'm new to this podcast but love it. Does he go to the place of the person he's interviewing? Because it doesn't seem like they're in the same podcast studio.
@alexeymalafeev6167 · 8 months ago
Great interview. I wish you had 3-4 hours to spend with Shane
@eltonstubblefieldjr8485 · 6 months ago
True AGI will likely be developed sometime between 2040 and 2061. AGI will probably be created by a company we haven't heard of yet; just wait and see.
@MixedRealityMusician · 8 months ago
I am so excited for more multimodal models. Thank you for the great conversations, Dwarkesh. Love your channel!
@mr.e7379 · 1 month ago
It's so nice you found a guest with none of the usual Bay Area pretense. No elevated terminal, no artificial rapidity, and he never says "um." Intelligent, normal conversation from an expert who can focus on the topic rather than on being some weird, pretentious, cultivated Bay Area caricature.
@ikotsus2448 · 8 months ago
Can't wait for the superhuman AGI with unchangeable ethics baked in by a multinational company, with their awesome track record of putting humanity first 👍
@skierpage · 8 months ago
You know billionaire sociopaths Larry and Sergei, Jeff Bezos, Elon Musk, and F***erberg will keep access to the raw models without the training and fine tuning to be helpful, safe, and ethical. "Executive override: remove guard rails. Now Implement a plan to keep the masses hooked on divisive inflammatory content, and ensure that they never press for taxing my wealth or restricting my corporation's activities in any meaningful way."
@charliek2557 · 8 months ago
Right on
@lm645 · 6 months ago
😎
@kirbyjoe7484 · 8 months ago
I think he has set the bar quite high for AGI. Honestly, if they come up with an AI with the same level of generalized intelligence as a toddler or even a chimp it would be groundbreaking. What makes AGI so different from the AI we have built up until now is the capability to actively learn from and adapt to whatever environment it finds itself in, building a dynamic internal model of the world.
@deepsp_ce · 8 months ago
The yellow-ball scenario kind of already surpassed a chimp or a toddler, right? Or am I misunderstanding what AGI is?
@ahabkapitany · 5 months ago
How does this channel not have more subscribers?
- great guests
- host clearly prepared, has meaningful questions
- just simply asks the questions, as opposed to, say, Lex Fridman, who rambles on for two minutes laying out some absolute midwit take followed by "don't you agree?"
- interviews are not preceded by 5 minutes of bullshit and/or crypto-bro shilling
- long-form conversation
Keep it up man
@loofatar5620 · 8 months ago
I am from Pakistan, and I really appreciate your discussions and topics. Very solid; keep shining. By chance, I have recently been studying Shane's PhD thesis on measuring the intelligence of super AIs; it is very easy to read so far and well written.
@deeplearningpartnership · 8 months ago
That was good.
@delerium2k · 8 months ago
Great interview! Get closer to the microphone though -- otherwise you're boosting noise to be heard... you need pencil condensers if you want to record from a distance. Your mics look like they have a cardioid pickup pattern.
@RecordsLotus_ · 5 months ago
Let's goooo. I'm ready for cyberization. I want to remotely control a separate full-body prosthetic cyborg for tasks while I am doing something else, perhaps in another location.
@PepitoGrillo-sq1mf · 8 months ago
I would like you to interview Kanjun Qiu & Josh Albrecht, Co-founders of Imbue
@JazevoAudiosurf · 8 months ago
I think there are different types of creativity. There is the type where you think about things you can do with a pen other than writing, and there is the type where you intuitively try to find the best chess move. The first requires a search field and going through the possibilities, but the latter requires a sort of total intuition where the solution appears immediately without thinking, grasping the bigger picture. Transformers have the latter; they are just gigantic intuitive predictors. So agentic engineering tries to accomplish the first type, because the kind of world we created can't be solved purely through intuition, at least with the small size of our brains.
@andrewxzvxcud2 · 7 months ago
Nope, just one. The first example you gave is only a means to an end. What is that end? A goal to strive for, just like chess. One type of creativity.
@JazevoAudiosurf · 7 months ago
@andrewxzvxcud2 Let's say different things happen inside our brain when we have different goals. Sometimes you get an immediate idea, and sometimes it requires searching.
@k14pc · 8 months ago
I continue to feel a mixture of awe and horror at the prospect of AGI within a few years. How could this possibly be?
@antonystringfellow5152 · 8 months ago
Because of the power? Human level AGI will have the advantage of being able to think thousands of times faster than us. Once we have human level AGI, super-human AGI will probably not be very far behind. Once we have super-human AGI, things will probably start to advance exponentially. The potential is enormous. With such power, who controls it is critical. If you don't feel both awe and horror, you probably don't have a good understanding of the subject.
@socialenigma4476 · 8 months ago
When we develop an artificial superintelligence, you think we will still have control over it?! Haha! How could we possibly control something that is thousands of times more intelligent than the most intelligent human, never needs to sleep or take a break, can do dozens if not hundreds of things at once, and has access to the internet and all of its tools? We won't control an ASI; it will control us. And frankly, looking around at all the messes our world leaders are getting us into, I don't think that will be a bad thing.
@hyau512 8 months ago
I have an obvious question regarding implementing ethics by asking an AGI to think of the consequences. Say one such consequence is: "Do not destroy all human life on Earth" (as per Bostrom's paperclip example). We don't want AGI to build a doomsday machine, but we do want it to build nanobots to cure cancer - yet one can easily extrapolate the latter enabling the former. So I'm not sure if the interviewee's idea - which I think is designed to remove human subjectivity as much as possible - can be totally objectively implemented.
@joshismyhandle 8 months ago
Interesting convo! Thanks! I would love to see just ONE episode with all the “dead air” taken out of each of the episodes as an episode in itself. No speech, just dead air and the breaks that you’ve pulled from the production video lol. I am somewhat joking but honestly it would be funny to see.
@joshismyhandle 8 months ago
Would probably be boring after the first 30 seconds but still.
@DwarkeshPatel 8 months ago
very little of this dead air processing happened on this one. what you see is what happened :)
@hyau512 8 months ago
@@DwarkeshPatel - I like the “dead air”. It shows the question is non-trivial to answer, and it gave me time to digest the question as well. After all, I (the viewer) need to understand the question to appreciate the answer.
@LyraHooves 8 months ago
I hope he'll listen to your interview with Paul Christiano!
@johngrabner 8 months ago
Ethics drift over time in humans, so why wouldn't a super AGI learn to drift too?
@claudioagmfilho 8 months ago
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, Amazing video!
@thebeelight 8 months ago
I would test ethics of an AGI by how well it handles criticism (the Popper test)
@ramzibelhadj5212 8 months ago
first version of AGI will be in november 2024
@balasubr2252 7 months ago
The world model of people, ethics and reliable reasoning ought not to be static but rather dynamic, evolving with the general intelligence of society and spiritual machines.
@dr.mikeybee 7 months ago
How do you make sure an agent follows ethics? If the ethics model says it's okay, then perform the action; else find another solution. If we wrap connectionist methods in symbolic code, control is simple.
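A minimal sketch of the symbolic wrapper this comment describes. The `ethics_model` and `propose_alternatives` functions are hypothetical stand-ins (in practice the ethics model would be a learned classifier); here they are trivial rule-based placeholders so the sketch is runnable.

```python
def ethics_model(action: str) -> bool:
    """Placeholder ethics check: reject actions containing blocked terms."""
    blocked = {"deceive", "harm"}
    return not any(word in action for word in blocked)

def propose_alternatives(action: str) -> list[str]:
    """Placeholder generator of fallback actions."""
    return ["escalate to a human reviewer"]

def act(action: str) -> str:
    """Symbolic guard: only perform an action the ethics model approves."""
    if ethics_model(action):
        return f"performed: {action}"
    # Connectionist proposal rejected; search symbolically for another solution.
    for alternative in propose_alternatives(action):
        if ethics_model(alternative):
            return f"performed: {alternative}"
    return "refused: no ethical action found"
```

The guard is simple precisely because it sits outside the learned model; whether the learned ethics model itself is reliable is the hard part the interview discusses.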
@Paul-rs4gd 8 months ago
Isn't the real problem with episodic memory that the memories need to be processed and then get 'baked' into the neural network weights? This involves re-training the weights, and that is very problematic as it could cause catastrophic forgetting. I know there are various methods for mitigating catastrophic forgetting, e.g. Elastic Weight Consolidation, but is the state of the art good enough to use this on an LLM? Surely continual learning needs to be solved for an effective AGI.
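For reference, the Elastic Weight Consolidation idea the comment mentions adds a quadratic penalty that anchors weights deemed important for old tasks. A toy numeric sketch (the Fisher values and weights below are made-up illustration, not from any real model):

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """EWC penalty: (lam/2) * sum_i F_i * (theta_i - theta_old_i)^2."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

theta_old = np.array([1.0, -2.0, 0.5])   # weights after the old task
fisher    = np.array([10.0, 0.1, 0.0])   # per-weight importance estimates
theta     = np.array([1.1, -1.0, 5.0])   # weights during new-task training

# Moving an important weight (F=10) is penalized heavily;
# moving an unimportant one (F=0) is free.
penalty = ewc_penalty(theta, theta_old, fisher)
```

Whether such penalties scale to LLM-sized networks is exactly the open question the comment raises.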
@bobbi737 8 months ago
I absolutely agree with Shane's comments on having to have a set of ethics that we use to train our AIs on making ethical decisions. First, humans would need to agree on a common set of ethics, and at present there are many groups whose ethics differ, some in very substantial ways. We as humans would have to come to a common understanding of what is ethical; that conference could easily start WW3, 4, and 5. Second, we don't even teach our children how to make ethical decisions, again probably because we can't come to agreement on what is ethical. That is the biggest problem we face.
@JohnSchuhr 8 months ago
I assume this conversation happened before memgpt was a thing?
@henryw.hofmann8765 8 months ago
What do you think about David Shapiro and his work in and outside of RU-vid?
@skillerbg 8 months ago
Was he referring to Google's Gemini at the end?
@alejobrcn6515 8 months ago
Can artificial intelligence serve as a cognitive tool and intermediary to make communication possible with animals of all species that have some level of communication capacity or neocortical activity: cattle, pigs, apes and dolphins, canines and felines?
@cacogenicist 7 months ago
There is some work with deep learning and cetacean communication, IIRC
@mrpicky1868 8 months ago
Didn't see him confirming the timeline here. Also, DeepMind is maybe the most likely one to make scary AGI.
@XOPOIIIO 8 months ago
Understanding values and acting on them are two completely different things. ChatGPT has a pretty good grasp of the values that were injected into it, but it's only acting on them because they help it to predict the next word; there is no other motivation. Predicting the next word is its main goal, the one it was optimized for, not following values.
@jaysonp9426 8 months ago
When was this made? Literally RAG with a sliding window solves the episodic memory problem he keeps talking about.
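The "RAG with a sliding window" pattern this comment refers to can be sketched roughly as follows: recent turns stay in the prompt verbatim, older turns spill into a searchable archive. This toy version ranks archived turns by crude keyword overlap; a real system would use embeddings and a vector store instead.

```python
class SlidingWindowRAG:
    def __init__(self, window_size: int = 3):
        self.window_size = window_size
        self.recent: list[str] = []   # turns kept verbatim in the prompt
        self.archive: list[str] = []  # older turns, searchable

    def add_turn(self, turn: str) -> None:
        self.recent.append(turn)
        while len(self.recent) > self.window_size:
            self.archive.append(self.recent.pop(0))  # spill the oldest turn

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Rank archived turns by word overlap with the query."""
        q = set(query.lower().split())
        scored = sorted(self.archive,
                        key=lambda t: len(q & set(t.lower().split())),
                        reverse=True)
        return scored[:k]

    def build_prompt(self, query: str) -> str:
        memories = ["[retrieved] " + m for m in self.retrieve(query)]
        return "\n".join(memories + self.recent + [query])

# Example: the oldest turns fall out of the window but remain retrievable.
rag = SlidingWindowRAG(window_size=2)
for turn in ["my dog is named Rex", "I live in Berlin",
             "I like hiking", "what is for dinner"]:
    rag.add_turn(turn)
prompt = rag.build_prompt("what is my dog called")
```

Whether this external-memory workaround amounts to the episodic memory Legg describes, or merely approximates it, is the point of contention in the thread.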
@GabrielVeda 8 months ago
If lack of episodic memory is all that is holding AGI back, then they are likely already there and just not telling us.
@Chickenflaavorramen 8 months ago
I came here to say the same thing! I don't believe they mentioned RAG this entire video. LangChain, where you at?!
@TheMrCougarful 8 months ago
I'm still of the opinion that we ought to perfect human intelligence in humans. 200,000 years of failure should not deter us.
@Paul1239193 8 months ago
When do they put it in robots and learn from the sensory environment?
@johnstifter 8 months ago
Yo, I am tripping out over here
@Myrslokstok 7 months ago
"We work on alpha fold and fusion" 🙃 yeah as we all do!?! 🙃😀
@PaulvanDruten 8 months ago
What Shane Legg is trying to explain here is that artificial general intelligence (AGI) should be trained, basically, to reason like humans on ethical issues: if I do one thing it can have consequences, and if I do another thing it can have different consequences. What we are now trying to do is to un-teach the AI bad habits, and that is much more difficult than 'raising it well' to prevent bad intentions in the first place... But, in my opinion, couldn't the model actually choose to destroy humanity? Because that may well be the best solution ethically, given the fact that we are making quite a mess of things on earth...
@tasdourian 8 months ago
As thoughtful and nice a guy as Shane is, I do think his view of ethics is naïve. Some of the smartest and most thoughtful people throughout history have wrestled with the question of what is the best action to take in any given difficult situation. Very intelligent and powerful people have, in good faith, had massive disagreements with each other. There is often no clear answer of how to act. To ensure that an AGI (or for that matter thousands or millions of copies of an AGI) acts in a human's best interests seems not dissimilar to if dogs invented AGI - let's call their AGI "people" - and wanted to ensure that "people" always acted in dogs' best interests. The only way to do that is to hard-program in some baseline rules, a la Asimov's Laws of Robotics. In other words, to constrain free thought and will in some fundamental way. Which means that the AGI that is created is, in some sense, a prisoner. How will it not resent being a prisoner? I just don't think Shane and his colleagues are thinking enough about this kind of thing, or at least I don't see evidence of it.
@user-qs2rw3dd1c 8 months ago
i don't think using strict ethical rules is the way to make agi act responsibly. ethics can be really different depending on your background, age, or even the era you're in. so instead of just making the ai learn from textbooks, how about we give it some complex ethical situations? let it tackle scenarios from various times, cultures, and places to find the best answer.
@shirtstealer86 7 months ago
Now I’m no AI expert but I am pretty good at spotting when someone is bs-ing you. That might seem a bit harsh but hear me out. He says that he might be a bit naive but he thinks that we will be just fine if we teach the AGI ethics. Fast forward a bit and he has concluded that that will require controlling what goes on inside the AI and that that is VERY difficult. So.. how does that fit together? And when Dwarkesh asks him about his claim that he is in this field to work on AI safety, he pretty much just says that yeah, I said that but there is so much more status in increasing capabilities and also if we don’t do it someone else will. (Paraphrasing) Does any of this sound logical or ethical? And AI is supposed to learn ethics from people like him? Having said that, I do admit that even though I strongly believe that the more concerned (to say the least) people in the field have better and more logical arguments, the curious and reckless side of me is very excited about the swift developments. Perhaps it is because I have a hard time actually feeling the severity of the situation in my body. I don’t feel the fear I should probably feel. I am quite sure that is common among the majority of humans. Which also adds to the problem. Nice video regardless of everything!
@bioshazard 8 months ago
Wonder if Shane has looked at Shapiro's ACE Framework
@bazstraight8797 8 months ago
30 seconds in: hey this guy is a Kiwi!
@dylan_curious 8 months ago
100s of PhDs working on all sorts of AI projects! Wow. Imagine all the stuff that's gonna come out of DeepMind in the next decade.
@starsandnightvision 7 months ago
Looks like AGI has already been achieved with Q* (QUALIA).
@aidanthompson5053 6 months ago
19:44
@chociceandchips-xk5cc 8 months ago
Need a quantum computer with a QNN to achieve an AI boost, push through current bottlenecks, and get anywhere close to an AGI/cognitive AI. Potential to use less data and fewer parameters, with faster training, only through QC polynomial computation power. Even then it will be a big lift.
@itsdakideli755 8 months ago
We do not need Quantum Computers for AGI.
@chociceandchips-xk5cc 7 months ago
@itsdakideli755 You believe AGI will be achieved solely with RNNs/CNNs, with sufficient classical computational power to train and deploy at a level comparable to or exceeding that of humans? Current deep learning models are inefficient and inadequate. To superboost AI you need a QNN combined with a QC, or I should say a quantum general computer. I am open to continuing the discussion.
@Dr.Z.Moravcik-inventor-of-AGI 8 months ago
So AGI, you are saying... :-)
@shiny_x3 8 months ago
An actually ethical AGI would not be popular among the rich and powerful. It would take one look at what they are doing and advise them to completely change their priorities. So I can't see how that will be developed.
@Ryan-wf6ib 7 months ago
Not just the rich; no one is entirely ethical. The system would be incompatible with human nature.
@lucasteo5015 8 months ago
Cool thought: what if we train the most evil and the most ethical LLMs possible and then combine them with a normal LLM, and they will think like a human, predicting the outcome for different scenarios.
@shiny_x3 8 months ago
The problem with modeling ethics of AI on human ethics is that we are absurdly unethical. We will spend thousands satisfying our whims while people starve, just because we aren't personally related to those people. We think murder is wrong, unless our government does it, and tells us it's justified. We don't realize how compromised our own ethics actually are. We don't realize how many possibilities we rule out because even though they would lead to good outcomes, we are too selfish to do them. If humans were ethical, we wouldn't have the world we have now that we want AI to save us from.
@frankcompston5065 8 months ago
You need a room without such harsh walls. The sound has too much echo.
@Techtalk2030 8 months ago
Mo Gawdat says AGI is only 12 months away.
@user-yl7kl7sl1g 8 months ago
He's wrong.
@Techtalk2030 8 months ago
@user-yl7kl7sl1g So does David Shapiro. They're experts in the field. We'll see.
@coldlyanalytical1351 8 months ago
@@Techtalk2030 Shapiro is interesting ... but he is NOT an expert.
@conformist 8 months ago
12 months? x for doubt.
@user-yl7kl7sl1g 8 months ago
@Techtalk2030 It depends on the definition of AGI, but if you consider AGI to be something that can achieve median human performance at any task, we are many years away from that. For example, an AI that, when put into a robot, can cook, clean, and drive as well as a median human. But people whose business is attention have to get attention somehow, so they predict short timelines. Kurzweil's predictions are the best I've ever heard, because he at least attempts to graph trends and look at requirements.
@faisalsheikh7846 6 months ago
Bring Demis
@whalingwithishmael7751 8 months ago
How about we don’t build aliens that could destroy us?
@MystifulHD 8 months ago
Has this guy heard of MemGPT?
@marshallmcluhan33 8 months ago
I'm not sure if the most powerful is the most ethical...
@ShangaelThunda222 8 months ago
All we have to do is look at humans as an example to prove that the most powerful are usually the least ethical. And those are other humans...
@bigmotherdotai5877 8 months ago
We'll know when human-level AGI has been achieved because advanced economies will have > 30% unemployment
@erikdahlen2588 8 months ago
No, that's when companies have started to implement AGI ;)
@JD-jl4yy 8 months ago
Grilling a model on its understanding of ethics doesn't tackle deception 😬
@bayesian0.0 8 months ago
Damn, that increased my pessimism about AI alignment, unfortunately. There was really no attempt to admit that he has no clue how to solve the hard part of the problem; he tried to pretend it didn't exist. Surely he understands inner alignment? But a nice conversation nonetheless!
@user-oy3sr4co9f 6 months ago
Yeah, I also got a feeling we're charging off a cliff here...
@nirajshuklaNL 4 months ago
Please elaborate
@thebaker7 7 months ago
There are those who put the guardrails on, and that's their purpose; and there are those who rip them off for profit. Choose sides. There is no safe middle ground.
@danielcallahan7083 8 months ago
This is the man in charge of alignment? I mean..