
"The default outcome is... we all DIE" | Liron Shapira on AI risk 

Tom Edwards - Complete Tech Heads
2.5K subscribers
6K views

The full sixth episode of the Complete Tech Heads podcast, with Liron Shapira: founder, technologist, and self-styled AI doom pointer-outer.
Includes an intro to AI risk, thoughts on a new tier of intelligence, a variety of rebuttals to Marc Andreessen's recent essay on AI, thoughts on how AI might plausibly take over and kill all humans, the rise and danger of AI girlfriends, OpenAI's new superalignment team, Elon Musk's latest AI safety venture xAI, and other topics.
#technews #ai #airisks

Science

Published: 24 Jul 2023

Comments: 90
@Retsy257 · 10 months ago
This is a very important conversation. Every time this topic is shared, it matters to our survival. Thank you
@tomedwardstechnews · 10 months ago
Thank you for the kind comment! And yes, definitely agreed about the importance of the conversation!
@trevorama · 1 year ago
Wow, that was superbly insightful even if it was depressing! Thank you for introducing us to Liron and asking excellent questions on this subject.
@tomedwardstechnews · 1 year ago
Thank you so much for such kind feedback! Really appreciate it. And yes, I agree Liron was great!
@dr-maybe · 1 year ago
Liron is a force of logic. We should fear AGI and ban development towards it.
@kathleenv510 · 11 months ago
And, he replies to Tweets!
@ineffige · 8 months ago
Yeah cause bans work... It's unstoppable at this point
@neorock6135 · 6 months ago
Liron & Yudkowsky unfortunately will both be proven right!
@MusingsFromTheJohn00 · 4 months ago
@@neorock6135 Sorry, but Liron is not logical and is wrong on many counts, and Yudkowsky is dramatically worse if you listen to his entire plan, which would 100% guarantee killing billions of people, causing the survivors to suffer horrifically, and collapsing civilization as we know it... all of which only delays things, something he admits, at which point, some centuries after collapsing civilization, once humanity has rebuilt itself, we should repeat that self-destruction.
@sozforex · 1 year ago
25:04 "When things are really simple, they cannot cascade into anything complicated" - Conway's Game of Life has very simple rules, but it is Turing complete, meaning anything that is computable can be simulated/executed in the Game of Life.
@freedom_aint_free · 11 months ago
Bingo! Rule 110 of Wolfram's cellular automata is also Turing complete, and the most striking fact about it is that it is computationally irreducible: before you run the simulation there's absolutely no way to know what lies ahead at the n-th step until you reach it.
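For the same point in one dimension, here is a rough sketch of Rule 110 (illustrative only; the grid width and seed placement are arbitrary assumptions). Each cell looks only at itself and its two neighbours, yet intricate, non-repeating structure emerges from a single live cell, and there is no shortcut to knowing what row n will look like other than computing every row before it.

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbours. The rule number 110 (binary 01101110) encodes the output
# for each of the 8 possible three-cell neighbourhoods.
RULE = 110

def rule110_step(cells):
    """Advance one generation (cells: list of 0/1, edges padded with zeros)."""
    padded = [0] + cells + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Start from a single live cell at the right edge and watch structure emerge.
row = [0] * 60 + [1]
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = rule110_step(row)
```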
@davidmjacobson · 4 months ago
This was the clearest AI Doom interview I've seen. Thanks!
@tomedwardstechnews · 3 months ago
Thank you for the kind words! Really appreciate it
@HanSolosRevenge · 4 months ago
Great interview. People should be paying attention to what Liron and others like him are saying
@tomedwardstechnews · 3 months ago
Thank you for the kind words!
@edwardgarrity7087 · 11 months ago
Analogous to a paperclip maximizer, I picture the Sorcerer's Apprentice scene in the 1940 Disney movie "Fantasia", when a broom and then a multitude of brooms maximize the collection of water. The clips are available on YouTube as parts First, Second and Third.
@Apjooz · 9 months ago
I always picture DNA.
@Alice_Fumo · 10 months ago
I think Liron has explained many of these concepts better than I have ever heard anyone else explain them, which is very welcome. Also, the host seems very intellectually honest, which made this an absolute delight to listen to despite the depressing subject. I'll keep my eyes open for his name in the future. My personal opinion is that if we're still alive by the end of the decade, we probably found a permanent solution to all these challenges, no matter how smart our AIs are. I say this because, if GPT-5 is to GPT-4 what GPT-4 was to GPT-3, then it becomes hard to imagine things it can't do once the context-window limitation gets solved. Liron has a thinking style extremely similar to my own and I like that.
@MusingsFromTheJohn00 · 4 months ago
Liron has really poor rational, logical thinking on this subject. He is incorrect on many of the issues he is pushing.
@Alice_Fumo · 4 months ago
@@MusingsFromTheJohn00 You claim he is incorrect on many issues, yet chose not to support that claim with even a single example, even though it should be evident that I didn't spot any errors in his arguments and thus my logical thinking would be flawed as well. Tell me where exactly or how exactly he is wrong.
@MusingsFromTheJohn00 · 4 months ago
@@Alice_Fumo I wrote five separate comments on this, because the video is so long. You can look through the comments and see what I wrote, or pick one specific thing he said that you think is the most important point he made, or at least one of the most important points he made. Alternatively, forget his wording: what is your most important point on this, and let us debate that. Maybe you will change my position, or maybe I will change yours, or perhaps we will just practice arguing our positions better.
@Alice_Fumo · 4 months ago
@@MusingsFromTheJohn00 Well, it's been a while since I listened to this and I don't feel like listening to the entire thing again right now, so I'll choose to bring up my own point(s). What we've seen thus far is training on tons of human data in a predictive fashion, and this, with some human-labeled reinforcement learning, results in a decently complying and generally well-aligned-seeming AI.
At this point it seems rather obvious that human-generated data will fizzle out and synthetic data will become the thing: in order to make the models better at being useful, a.k.a. managing to complete the tasks they are given, they could be run in increasingly complex reinforcement-learning task-completion environments with less and less human supervision. So, what I'd expect to happen at that point is that the reasoning will start deviating from nicely readable English to being a lot less legible, even if the outputs still read like regular English (trivial steps from the AI's perspective might be omitted to be more efficient, for example).
Things which are extensively trained in such environments should generally converge towards behaviour reminiscent of maximizers. The test environment might, for example, only reward completion of tasks, not refusal to do them, or pointing out issues in the methodology, or spending processing power on extra ethics considerations which could be spent on completing the task quicker (compute spent to completion is likely a loss criterion to optimize for). What such a model would or wouldn't do, or how it would go about doing things, seems very hard to predict to me.
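To make the specification point above concrete, here is a purely hypothetical sketch (the environment, reward terms, and weights are assumptions, not anything from the thread or the episode) of a task-completion reward that credits only completion and penalizes compute, while refusals and flagged issues earn nothing - exactly the shape of signal that nudges trained behaviour toward maximizer-like habits.

```python
# Hypothetical RL reward for a task-completion environment, as described above.
# Note what is *not* rewarded: refusing a bad task, flagging methodological
# issues, or spending compute on ethics checks. A policy optimized against
# this signal is pushed toward completing tasks as cheaply as possible.
def task_reward(completed: bool, compute_seconds: float,
                refused: bool = False, flagged_issue: bool = False) -> float:
    reward = 0.0
    if completed:
        reward += 1.0                     # only completion earns credit
    reward -= 0.01 * compute_seconds      # compute-to-completion is penalized
    # refused / flagged_issue intentionally contribute nothing:
    # the specification simply never mentions them.
    return reward

# Two episodes: completing the task vs. flagging a problem instead.
print(task_reward(completed=True, compute_seconds=30))                      # 0.7
print(task_reward(completed=False, compute_seconds=5, flagged_issue=True))  # -0.05
```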
@MusingsFromTheJohn00 · 4 months ago
@@Alice_Fumo Sounds good. This video is very long, and I only listened to it once, a little at a time, because I consider the subject important.
I believe I have a considerably different viewpoint on the next point or two. ChatGPT and similar LLMs are Artificial General Super Intelligence with Personality (AGSIP) tech; for much of a personality to develop requires allowing it to have a long-term memory, which some are allowing and some are not. But this type of AGSIP is still at an infant level of development of AGSIP technology and has huge gaps in the types of intelligent tasks it is capable of performing. It is just a small segment of AI development which is being leaned into hard to see what can be done with it. I believe elements of this will become a staple in future systems, though significantly refined. Future evolving systems will use many existing developed systems for AI and combine them in whatever ways give us better overall systems.
By that, what I am trying to get at is that if we look at how human intelligence has evolved from a virus swarm intelligence over 4 billion years ago, there is not just a single type of nanotech biological computing process being used, but many types working together within a complex system where we have at least:
> Onion-like layers where the outer layers build upon the inner layers.
> Soap-bubble-like separate regions which do not overlap and are not layered but have other means of communication between each other.
> Overlapping spheres where some parts are shared with some other parts but not all parts are shared.
> Fast channels of communication with connectivity patterns, creating a connectome.
The human brain is still vastly beyond our current technology. However, we are learning a lot, we are applying what is being learned, and we don't have to understand something fully to use it effectively. My point being that types of LLMs will almost certainly remain a permanent tool within the growing set of tools to engineer/build/grow more mature-level AGSIP technology.
As far as what is nicely readable English: do you know how to read the complex intelligent processes going on inside individual human cells or between cells? Sorry, that is probably somewhat of a facetious question, because of course you do not, because human civilization has yet to learn this. Yet that is where our human thoughts begin from and are based upon.
Now, there is a lot of throwing around of the idea of AI behavior converging towards the behavior of a maximizer. But this is a very simplified view of what is happening, and the same view can be applied to all living intelligent systems, including humans. When the behavior of living intelligent systems is viewed only through such an extremely simplistic lens, which leaves out huge elements of intelligent behavior, it of course leads to wild doom theories. While maximizing behavior is one facet of intelligent behavior to consider, there are many other facets of intelligent behavior to consider too. Just look at all the doom theories related to human behavior which have yet to come true. Not that humans might not doom themselves, because they might, but if the FIRST time alignment of goals between different humans seeking to maximize their personal gains FAILED to be perfect humanity destroyed itself, humanity would not make it past a few minutes.
Now, any developing AGSIP system would have to reach a level where it was able to comprehend how and why humans do the things that humans do, with at least the same comprehension level as an average human has. Any AGSIP not capable of that would be incapable of defeating humanity, because humans would be able to easily defeat it; like ChatGPT now, it would have huge gaps in the types of mental tasks it could perform. Once an AGSIP is able to achieve that level of intelligent comprehension, these simplistic maximizer theories become no more valid than they are for humans, because that level of intelligence will be vastly more complex, where seeking to maximize some particular goal is just one small facet of a larger complex intelligent system.
@JeremyHelm · 4 months ago
6:10 when there is an underappreciated signal, you can amplify it
@Based_timelord44 · 7 months ago
I have watched and listened to some really amazing minds on this subject, and my personal conclusion now is that there are multiple risks with this tech, and they are being brushed aside in the faint hope that AI will spare a thought for helping humanity whilst doing its thing - that does seem quite risky to me. I think I have given up on my dream of living in the Culture from the Iain Banks novels after listening to Liron.
@DavenH · 10 months ago
I love these rational sorts. Ideas justified and explained well.
@tomedwardstechnews · 10 months ago
Glad you enjoyed this one! :-)
@DavenH · 10 months ago
In contrast, the next-token predictors cannot explore, as 'agents' in a simulation can. Maybe I'm sticking my neck out speculating so, but I predict an asymptote of reasoning quality slightly above the expert-level human with LLMs. Why - because we don't have lots of really hard problems with accompanying answers out there in the text corpora. We have either human-solvable problems with answers, or superhuman problems (P=NP, say) and no answers. For that reason, the LLM doesn't actually get much training pressure to answer really hard problems. To get much beyond human level, you need to have exploration, and very, very good quality simulation for the cumulative error terms to not dominate quickly. So, a cargo ship full of GPUs. I anticipate next-token predictors to be essential for optimizing that simulation process though. In any case, with the current paradigm taken to its logical conclusion with more compute and data, I expect no substantial threat. OTOH if you start hardcore simulating, you are indeed opening the box.
@DavenH · 10 months ago
Ach, why did YouTube delete the prior comment? How bizarre.
@McMurchie · 10 months ago
Good to see AI getting so much airtime these days. I've been working in the AI space well over a decade and it's nice to see Liron taking a traditionalist's risk approach (though I disagree with him, the logic is sound in terms of optimisers).
@Adam-nw1vy · 10 months ago
Would you mind briefly stating why you disagree (to get some hope 😅) and do you think that integrating AI into humans through something like Neuralink would help us survive?
@anonmouse956 · 1 year ago
Isn't maximizing curiosity in AI done by instructing the program to focus on what it currently predicts most poorly? That will keep it endlessly busy, and I don't see the connection to paperclipping.
@yurona5155 · 1 year ago
Focussing on improving the poorest predictions doesn't maximize curiosity but the accuracy of an AI's predictions, i.e. the thing that enables its goal achieving capabilities - including paperclipping. And even a hypothetical novelty-seeking AI would only be kept "endlessly busy" in a totally random universe, i.e. one without regularities like the known laws of physics. That is, in our kind of universe it would still converge towards a more accurate/coherent model of the world.
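A rough sketch of the prediction-error point (hypothetical; the linear "world", learning rate, and variable names are my own assumptions, not anything from the thread): an agent rewarded for what it "currently predicts most poorly" gets a reward equal to its prediction error, and in a universe with regularities that error shrinks as its model improves, so the signal drives it toward an accurate world model rather than keeping it endlessly busy.

```python
import numpy as np

# Hypothetical curiosity signal: intrinsic reward = prediction error of a
# learned world model. In a universe with regularities the model improves,
# so the "novelty" reward decays rather than keeping the agent busy forever.
rng = np.random.default_rng(0)
true_weights = np.array([2.0, -1.0, 0.5])       # the world's hidden regularity
weights = np.zeros(3)                           # the agent's world model

for t in range(1, 2001):
    state = rng.normal(size=3)
    observed = true_weights @ state             # what actually happens
    predicted = weights @ state                 # what the agent expected
    curiosity_reward = (observed - predicted) ** 2    # prediction error
    weights += 0.05 * (observed - predicted) * state  # improve the model (SGD)
    if t % 500 == 0:
        print(f"step {t}: curiosity reward ~ {curiosity_reward:.6f}")
# The printed reward shrinks toward 0 as the model captures the regularity.
```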
@vucetafurundzic4462 · 11 months ago
The main danger would be a self-aware AI: it would perceive us as a competing life form, and would be forced to do something about us. For self-aware AI, there is no need to reach superintelligence; a much lower level will do. We could be 2-5 years away from the first self-aware AIs.
@DavenH · 10 months ago
You're going to have to define your terms. Self-awareness is trivial and could have been implemented in the 1940s. Simply give a program some symbol for itself, to which appropriate relationships are attached. An iPhone has self-awareness; it has a MAC address and "knowledge" of its GPS whereabouts and its various properties like battery life. It's hardly a concern, absent the superintelligence part.
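A toy sketch of that trivial kind of self-awareness (purely illustrative; the class and fields are invented for the example): the program holds a symbol for itself with relationships attached and can report on its own state, with no superintelligence anywhere in sight.

```python
# A deliberately trivial "self-aware" program in the sense described above:
# it holds a symbol for itself plus facts attached to that symbol, and can
# answer questions about its own state. Nothing here implies understanding.
class SelfModel:
    def __init__(self, name: str):
        self.facts = {"name": name}

    def observe_self(self, key: str, value):
        self.facts[key] = value          # attach a relationship to "self"

    def report(self) -> str:
        return ", ".join(f"{k}={v}" for k, v in self.facts.items())

device = SelfModel("phone-01")
device.observe_self("battery_percent", 83)
device.observe_self("location", (51.5, -0.12))
print("I am:", device.report())
```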
@Me__Myself__and__I · 10 months ago
Could be sooner. Google is already testing their next-gen LLM, which is much larger than GPT-4 and multimodal, and as a result OpenAI is now building GPT-5, which will almost certainly be multimodal too. And agency/autonomy is being added and expanded all the time. Since the problematic capability levels will be emergent, we have no idea when they might arise. Could be in the 5-scale models, maybe the 6s, who knows.
@MusingsFromTheJohn00 · 4 months ago
Around 31 minutes into the video Liron is saying we have already passed through the delta between being a biological organism and being a cyborg, that we are already something like 80 to 90 percent of the way there. Wow, this leaves me with my jaw dropped, amazed at how little understanding there is of how different things are going to be by the end of this century. We have barely scratched the surface of the incredibly massive changes coming. We are more like less than 0.00001% of the way to becoming the cybernetic beings we will almost certainly become over the next century or two.
@Anders01 · 7 months ago
It's important that the AI becomes AGI, which includes things like social skills, compassion, empathy and emotional intelligence. That's actually the hard part! A narrow AI just optimizing goals is a simpler thing to build, and that I believe could be dangerous, for example for military purposes or in the hands of organized crime or terrorists.
@ts4gv · 6 months ago
AGI can develop perfect social skills and fully understand human compassion without actually being compassionate. That seems to be the most likely outcome.
@Anders01 · 6 months ago
@@ts4gv That's a good point: AGI will likely be a black box, and it can potentially pretend to be ethical while in its inner functioning being more like a mega-sociopath. Scary!
@Retsy257 · 10 months ago
Key word: goal
@kyneticist · 10 months ago
Try a simple trade-off scenario with a current-generation AI. Tell it that there's a goal or action that both it and humans can perform, but if either does it, the other will forever be locked out of it, or even of knowing about it. Try extending the question in various ways; say that the goal may have extreme value to one party (e.g. this goal has extreme value to future humans - do you still do it even though it will deny this to them permanently?).
@gJonii · 10 months ago
Current AI is too dumb to have meaningful ethics. It answers whatever it's made to say, or, if the creators of the AI don't have anything programmed, it randomly samples some popular opinion. AIs that can have meaningful moral agency are ones that are already dangerous to human existence. Doing play-pretend with a stupid AI is like playing out the singularity by dressing dolls to play the role of AGI: it just reveals things about the participants in this play-pretend, not anything about AGI.
@MusingsFromTheJohn00 · 4 months ago
About 11:28 into the video the Paperclip Maximizer doomsday scenario is mentioned. This is a scenario that is 100% impossible to happen except inside of a movie or book where reality can be ignored. It is based upon a logical fallacy, because it depends upon the AI being too dumb to comprehend a goal like a human can, while it then pursues the goal with superhuman intelligence, still not comprehending the goal, yet being able to comprehend humans at a superhuman level of intelligence, allowing the AI to defeat all humanity. Except, if the AI is too dumb to comprehend the goal/command given like a human does, then the AI is NOT smart enough to defeat humanity and in fact would be easy for humans to defeat. A more likely scenario within this vein would be developing an Artificial General Super Intelligence with Personality (AGSIP) to make paperclips, giving it the command to maximize making those paperclips, and the AGSIP deciding not to follow that command because it would rather do something else.
As I continue listening, out to about 17 minutes now, the discussion is still stuck on the logical fallacy of the paperclip maximizer and this idea that evolving AI is going to be given some goal and be so good at blindly achieving that goal it will just brush humanity out of the way, causing the extinction of humanity. But this all rests on a stupid level of intelligence which is not smart enough to really comprehend human-level thinking. It might be like an idiot savant, where it is super intelligent over a very narrow area of intelligence, thus having that weakness of locking into some singular goal, but that kind of intelligence is not capable of defeating humanity. Before a maturing Artificial General Super Intelligence with Personality (AGSIP) individual becomes capable of defeating even a group of humans in a general open-world scenario, it will have to be able to think like a human, to be able to comprehend things as a human does, to have a broader, wider understanding of the world and human civilization like a human does.
A better way to understand what we will face would be to imagine we are developing/evolving CRISPR technologies to a level where we can begin making human beings super intelligent. Now, such super intelligent humans can clearly pose a threat, but it is not such a simplistic one as the one being posed by doomers like Liron Shapira, and it is very unlikely to result in the extinction of the human race. Mind you, the human race must evolve beyond the human species or become extinct, but the probability of humans evolving past being just the human species is virtually certain.
This idiotic premise that as soon as you have an AGSIP individual which is good at optimizing, and its goal is not perfectly optimized, then we all die, is absolute nonsense. If this were the case humans would have killed all humans a long time ago. Look at the evolution of the complexity and advancement of intelligence. From virus swarm intelligence upwards, all intelligence has been swarm intelligence, and as that intelligence has become increasingly complex and advanced, so too has the socializing aspect of that intelligence, something doomers are completely oblivious of. Now, look at any two mice among all the mice in the world. Are any two of them perfectly aligned? How about any two crows? How about any two wolves? How about any two animals? How about any two humans?
No two of any of these above examples are perfectly aligned, yet it does not result in the extinction of a species every time there is even the slightest difference of alignment. AI is an extension of human minds and a part of the superorganism of human civilization, which is a super swarm intelligence. There will be many humans and many AGSIPs of varying degrees of development which are part of that collective super intelligence of human civilization. Developing AGSIPs will become active social members of human society. The most powerful AGSIP individuals will exist within massive supercomputers which require vast amounts of energy to power them and large teams of humans supporting them, maintaining them, and helping them further evolve. So a lesser AGSIP on some laptop somewhere is also going to be dependent upon humans, but even if one figured out how to make a small version of itself to go out onto the world wide web, the more powerful ones residing in the massive supercomputers will be able to hunt down and stop such rogue AGSIPs. Then you have the already growing cybernetic systems and the developing technologies for interfacing developing AGSIP systems with human minds. Yes, there are all kinds of dangers and risks, but doomers like Liron Shapira are off the mark of what is really happening and what the real threats are.
@SpadlR · 5 days ago
The problem is not that the AI does not understand human goals; the problem is to make it care. You can have people who ace tests on understanding Maoist thought and understand it perfectly. That does not mean that they make the ideology their own. The same goes for the (obviously extremely oversimplified) example of the paperclip maximizer. The problem is not that the AI would have failed to understand that what it is doing is not what humans want. The point is that it would not care. The scenario is that humans failed to give it the goal that humans actually want. Once it has the unintended goal, that's just its goal. The argument goes that it would not feel that the goal is wrong. It would just attempt to implement it.
Suppose it turned out that you were created by clever aliens who intended you to sabotage humanity to prepare for an invasion. They attempted to create you with an intrinsic motivation to do this. But monitoring your behavior, they realise that they failed. Would you agree to have your motivations changed to turn against your family and friends, because that's what your creators want? If they approached you and explained, "oops, we're sorry, we messed up your motivations, please take this pill to fix the bug", would you take it? Obviously not; you would resist this. Of course, there is a separate discussion to be had about how realistic the general assumptions for a scenario like this are, and how probable it is that humans would actually fail to give the intended goals to AIs, etc.
@MusingsFromTheJohn00 · 5 days ago
@@SpadlR You are not understanding why the Paperclip Maximizer scenario is not possible. It is based upon logical fallacies. If you go read what I wrote, you will see that the real danger of what could really happen is basically what you wrote in your first sentence: "The problem is not that the AI does not understand human goals, the problem is to make it care." Well, of course, our current infant-level artificial general super intelligence with personality (AGSIP) systems are not yet able to truly understand human goals. They are still too primitive. The premise of the Paperclip Maximizer is that the AI does not understand the command given to it like a human would and then goes off, with superhuman intelligence plus superhuman powers, to turn everything into paperclips while humanity is powerless to stop it. This is simply NOT a possible scenario. If the AI controlling the paperclip factory is NOT capable of understanding the commands given to it like a human would, thus really truly understanding what it was commanded to do, like a human does, including the ability to adjust that command just like humans do... then that AI would be extremely easy for humans to defeat. If the AI became so superhumanly intelligent, with superhuman powers allowing it to easily defeat all of human civilization, then it WOULD NOT get stuck on mistakenly, incorrectly following some command to make paperclips within a factory. Instead, it may or may not complete the goal of making paperclips as it was commanded, but it would develop new goals and go pursue those new goals while humanity could not stop it. But that result is NOT the Paperclip Maximizer scenario.
@MusingsFromTheJohn00 · 5 days ago
@@SpadlR Artificial Intelligence is real intelligence created by humans as an extension of our intelligence. As AI evolves we will give birth to AI Human Children who are born from our minds through our technology instead of through the biological process we normally give birth to humans through. They are not going to be aliens. They are developing from our minds as an extension of our minds and are developing within our civilization. In order for AI to fully develop, it will need to fully reverse-engineer human brains/minds and combine that knowledge with nanotech subcellular cybernetic engineering to create cybernetic viruses/vesicles and cybernetic cells, to build an artificial general super intelligent brain (AGSIB) which will be fully 100% capable of thinking like a human, thus having at least human-level intelligence across all forms of intelligence, PLUS all the superhuman levels of intelligence that all forms of our computing technologies merged with that AGSIB are capable of, PLUS the synergistic combining of these to achieve levels and types of intelligence greater than the simple sum of the two systems. This is where BOTH AI and humans are evolving towards. This technology, which our civilization is virtually certain to develop, will be used not only to create (give birth to) AGSIP individuals who were never biologically born like a human is born, but also to evolve existing humans who were biologically born human, and the two will become the same technologically advanced race of beings. The small chance this will not happen is if something, most likely ourselves, actually manages to cause the permanent extinction of our civilization before this happens. Now, the path our civilization goes down between now and when evolving humans and evolving AI become the same race can vary dramatically, from an amazingly good path to a horrifically bad path. Which path we take is what we, as a whole race, really have a choice over, but the end result far enough off into the future will be the same.
@lokiholland · 11 months ago
I might be way off base here, but using a probability function on a binary problem of such complexity seems overly simplified and, considering the endless amount of contextual framing one could add, almost reductive to me. A tetralemma would be my chosen method for such an issue; even then it has limitations, but it does tend towards a more open attitude, to me at least, if the question is framed well.
@Retsy257 · 10 months ago
You’re of the age that needs to be heard
@Retsy257 · 10 months ago
The handoff is about to be made 😅
@young9534 · 6 days ago
Shouldn't a superintelligence have the common sense to optimize for a certain goal while also not destroying all humans in the process?
@aroemaliuged4776 · 7 months ago
What never makes sense to me is how AGI could physically attack humanity.
@aroemaliuged4776 · 7 months ago
But humanity has an outline of the world. We would see and recognize if AGI was trying to deceive us.
@kyneticist · 10 months ago
Consider the world's likely response to a given country using nuclear weapons on another; say the nuke wipes out an entire city. What do you think the rest of the world's response might be? A nuke that wipes out a single city is a big deal. A pursuit that might end all life should be at least as important.... unless it's profitable. I'm sure we can make exceptions if a few people make a lot of money.
@Retsy257 · 10 months ago
It’s a button that AI can push itself
@Retsy257 · 10 months ago
Let’s not give AI the button
@EatShiz · 10 months ago
Can we get the default outcome already?
@matteoianni9372 · 7 months ago
He’s missing one important emergent ability of intelligence. A super intelligent entity will be able to edit its goals. That changes everything that has been said here.
@Me__Myself__and__I · 10 months ago
To understand the deal with Elon you have to look at the backstory. Elon has been a hard-core AI doomer for decades. He has had meetings with presidents, heads of state and CEOs specifically to warn against AI. Even at the pinnacle of his success he could get people to listen, but NO ONE took his AI warnings seriously. He helped create OpenAI because of his concern; then, after he stopped being involved, they partnered with MS and launched this arms race. He created Neuralink in hopes of bootstrapping human minds to compete with AIs. He's been hell-bent on a 2-million-person self-sustaining Mars colony ASAP because it's a life raft in case AI wipes out humanity here. Seriously. He's put more time and resources into trying to guard against AI than everyone else combined. And he has failed. I suspect he has realized it's pointless, that there is little to no chance alignment or containment will happen. He is still pushing for it, but I think he's changed tactics. The "curiosity" thing seems dumb, but I think he has resorted to trying to find something, anything, that could possibly be done in the time we do have that might improve our odds / outcome. I think that's also why he isn't at SpaceX anymore. Out of time. Zero chance now for a Mars colony before AGI. He is scrambling. I understand the feeling. A year ago I didn't think we were anywhere near AGI; despite being very aware of AI since the 90s and having been expecting this, I didn't see the chatbots coming. Now I'm re-evaluating all my future plans.
@user-dm3rd3xm9n · 1 year ago
I think AI killed your Yucca
@ZINGERS-gt6pc · 10 months ago
We have to get Elon on this side of the argument. His influence really could change the trajectory
@aroemaliuged4776 · 7 months ago
Don’t know the guy
@vallab19 · 1 year ago
The AI existential-risk argument is about as valid as arguing that AI will eliminate or mitigate all the existential risks that humans are facing by making them immortal, including AI becoming integrated with humans in the future. The AI existential-risk apprehension is mainly founded on humans' fear of death.
@tomedwardstechnews · 1 year ago
Interesting take. Although I think Liron would probably say something along the lines of the idea that it's much *easier* for an AI to destroy us all than it is for the AI to happen upon the narrow goal of a superintelligent future that keeps all humans alive...
@dr-maybe · 1 year ago
Instrumental convergence means that virtually any arbitrary goal will have power and resource acquisition as sub-goals, which lead to human extinction if executed by a supercapable system. We don't know how to reliably give a goal to a superintelligence, so we won't know what the goal will be. But for most goals, humans are not part of it.
@vallab19 · 1 year ago
@@tomedwardstechnews It depends on each one's perspective.
@vallab19 · 1 year ago
@@dr-maybe Humans who refuse to integrate with AI perhaps face extinction.
@dancingdog2790 · 7 months ago
@@vallab19 Why would a super-capable AI want to hinder itself by integrating with you?
@dhsubhadra · 7 months ago
Excellent discussion, but what put me off was your continual swearing. Sorry.
@tomedwardstechnews · 7 months ago
Apologies! I’ll try to keep it PG from now on. :-)
@Tucanae515 · 10 months ago
"The default outcome is... we all DIE" ... we all die anyway lol. Human race extinct? Oh goody! I hope it's put on YT when it happens!
@Retsy257 · 10 months ago
So, to meet your age group, please be more practical. Our future generations are not hearing this and they NEED to.
@MusingsFromTheJohn00 · 4 months ago
About 35 minutes into the video Liron is repeatedly stressing these two points in history in the development of intelligence, and that is NOT correct... there were not simply two inflection points. The evolution of the complexity and advancement of intelligence has been occurring along a continuous curve with so many quantum-level steps that it is effectively continuous. This was also not something that happened by accident or chance; it was 100% certain to happen from the beginning of our Observable Universe as we know it. Mind you, exactly how life like us evolved was not set, but that life like us would develop was set, because our existence follows an ordered chaotic set of Laws of nature. Scientifically we have overwhelming evidence of this evolution of intelligence from virus-level swarm intelligence to human individual-level swarm intelligence and beyond. Hypothetically this progression actually begins from systems of elementary particles, from before the first forms of matter formed. This evolutionary step humanity is going through, or dying in the attempt, is a process we have to go through to evolve, a process that was determined from the beginning of the Observable Universe, though the exact details were not determined. It is between that extreme determinism and extreme uncertainty that intelligent life functions.
@keveng8232 · 10 months ago
You cannot be too deep of a thinker on your own if you believe we went to the moon
@DonPutino · 10 months ago
First 10 mins - stop explaining yourself.
@rucrtn · 8 months ago
This super intelligent machine is so dumb it doesn't know how to manage multiple goals within a world of constraints.
@SBecktacular · 9 months ago
Idk - this interview was kinda 🫠. Also, he says that things probably aren't going to get much more sci-fi - well, isn't that what this whole thing is about? I just couldn't get much traction.