
What Will The World Look Like After AGI? 

Till Musshoff
48K subscribers
50K views

Check out my Linktree alternative / 'Link in Bio' for Bitcoiners: bitcoiner.bio
Imagine we are witnessing a singularity event in our lifetime. We create something that is infinitely more intelligent than all of humanity combined. What would the world look like? Is this humanity's final invention? Are we causing our own extinction, or are we building utopia? We look at both cases and what's in between.
Join my channel membership to support my work:
/ @tillmusshoff
My profile: bitcoiner.bio/tillmusshoff
Follow me on Twitter: / tillmusshoff
My Lightning Address: ⚡️till@getalby.com
My Discord server: / discord
Instagram: / tillmusshoff
My Camera: amzn.to/3YMo5wx
My Lens: amzn.to/3IgBC8y
My Microphone: amzn.to/3SdHdkC
My Lighting: amzn.to/3ELnof5
Further sources:
Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Spies, Microsoft, & Enlightenment: • Ilya Sutskever (OpenAI...
Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast 367: • Sam Altman: OpenAI CEO...
Post-Singularity Predictions - How will our lives, corporations, and nations adapt to AI revolution?: • Post-Singularity Predi...

Published: 9 Jul 2024

Comments: 387
@tillmusshoff 3 months ago
I built a 'Link in Bio' - a Linktree alternative for Bitcoiners. Check it out here: bitcoiner.bio 🧡
@Vince_F 1 year ago
“The view keeps getting better the closer you get to the edge of the cliff.” - Eliezer
@Smytjf11 1 year ago
Then let's not stop building wings, yeah?
@Vince_F 1 year ago
@@Smytjf11 That’s the thing. The AI will just prevent any wing building to even happen …as we get closer to the edge.
@JJ-si4qh 1 year ago
For the vast majority of us living meager lives of quiet desperation, a major change, whatever it is, is unlikely to be worse than what we already experience. SGI can't come fast enough.
@harrikangur 1 year ago
Agreed. Even when presented with the possibility of the destruction of society... it's better than the current crap we are in.
@sanjaygaur4578 1 year ago
Yes exactly. I thought I was the only person who was having this same thought.
@MusingsFromTheJohn00 1 year ago
J J, sorry, but you likely have no clue how bad life for humans can be if you think that. On the other hand, I do think we need to develop AI as quickly as we can while also working hard to align it with us as well as we can.
@bigglyguy8429 1 year ago
Such the poor suffering soul with electricity, an internet connection etc etc etc. You're already living better than most kings of history
@bigglyguy8429 1 year ago
@@sanjaygaur4578 Suffer harder, until you make some sense? You think 'most populated' is a problem? What would you like to do about that?
@HighStakesDanny 1 year ago
I have been waiting for the singularity for decades, and it's almost here. ChatGPT is the infant.
@azhuransmx126 1 year ago
I have been waiting for it since 2003, when I first listened to Ray Kurzweil.
@gubzs 1 month ago
One of the AGI/ASI problems that keeps me up at night is how the classic "neighborly dispute" will be resolved. Conflict of interest. Say my neighbor wants to play loud music and it drives me nuts, but he's driven nuts by being disallowed from doing this - what's the right answer? Is one of us forced to move? To where? Why one of us and not the other? Things like this stand directly in the way of anything we could consider utopia.
@jimmyh1804 1 month ago
an asi will adjust your (optional, mass produced, and freely available) brain implant to make it so you no longer register/hear/process the music... DUHHHHHH DURRRRHHH
@AndyRoidEU 1 year ago
It's no longer about whether we'll witness the singularity in our lifetime, but about whether it's in 5 years or in 15.
@user-mp3eh1vb9w 1 year ago
Opposite for me, I might die in the next 5 years or less. Well I guess I will be joining the other billions of people that died before reaching ASI lol.
@psi_yutaka 1 year ago
@@user-mp3eh1vb9w Fear not. 8 billion people will probably join you once they do reach ASI.
@Andrewdeitsch 1 year ago
Your videos keep getting better and better!! Keep it up bro!
@tillmusshoff 1 year ago
Appreciate it! ❤️
@ksitizahb3554 1 year ago
That's because he is an AI model training to make YouTube videos.
@marmeladenkuh6793 1 year ago
Great video with some interesting points I hadn't thought of yet. And the AOT reference was brilliant 😄
@bruhager 1 year ago
The thing that bothers me about the extinction scenario is that it isn't necessarily a bad thing. The version of humankind we are living in right now might very well be the final version of humankind evolving by itself. Look at the advances not only in AI but in brain-machine interfaces, neural networks, biological computers, brain emulation, etc. AI might be able to teach us more about ourselves on a fundamental quantum level than we could achieve alone. We may very well begin to implement AI into ourselves and evolve alongside it as time goes by. At the very least, that is one way we go extinct without necessarily being wiped out completely. It might actually be better to channel this type of technology into transforming the human paradigm as time and understanding go by, rather than scapegoating it into our next enemy through fearful hatemongering.
@utkarshsingh7204 1 year ago
Agree with you
@kf9926 1 year ago
Take yourself, you don’t speak for all of us wacko
@abcdef8915 1 year ago
There will still be wars because resources will still be limited
@michaelspence2508 1 year ago
I don't think most of the big names in AI doom (e.g. Eliezer Yudkowsky) are just worried about us losing our bodies, but rather that we will in fact be *completely wiped out*. The end of everything human, not just our societies and the world as we know it. The end of friendship and love and community and even loneliness, because there's literally no one around to experience those things. All that remains are eldritch machine gods. But even Yudkowsky doesn't think it's impossible to have a good outcome with ASI. Only that we are not on track for a good outcome, and that it doesn't look likely to change.
@DasRaetsel 1 year ago
That's exactly what transhumanism is
@chrissscottt 1 year ago
I suspect AGI would be rather god-like. Reminds me of something Voltaire reputedly said over 300 years ago, "In the beginning god created mankind in his own image.... then mankind reciprocated." He meant something else obviously but it's ironic nonetheless.
@gomesedits 1 year ago
After ai, before ai. Lol
@thefirsttrillionaire2925 1 year ago
Finally, after actually using ChatGPT to ask questions about starting a business, I can definitely say I'm more on the positive side of how things will unfold. I could be wrong, but I definitely hope I'm not. Maybe this will be the thing that ends extreme capitalism.
@Travelbythought 1 year ago
We don't have "extreme capitalism". Using the medical field as an example, what that would look like is countless people offering thousands of treatments for any condition, all competing for your dollars. Health care would be very cheap and very innovative, but also with many bad frauds. What we have instead is a government-sanctioned monopoly with crazy high prices. A return to real money like gold and silver would wring out the crazy excesses we see in our economy today.
@mohammedaslam2912 1 year ago
After ASI takes all the work from us, what is left is life in all its colors.
@NottMacRuairi 1 year ago
The problem I have with most of the discussion about AGI (and by extension ASI) is that it always assumes an AGI will have its own drives and motivations that might differ from humanity's, but in reality it can't, unless it is created to act in a self-interested way. I think this is a kind of anthropomorphism, where we basically assume that something really intelligent must be self-interested like us, but the reality is that it will be a *tool*, a tool that can be given specific goals or tasks to work on. In my opinion the big threat is not from an autonomous AGI running amok but from the enormous power this will give whoever *controls* an AGI or ASI, as they will be able to outsmart the rest of humanity combined, and once they get that power there'll be basically no way to stop them or take it away from them, because the AGI/ASI will be able to anticipate every human threat that could be posed. It will be the most powerful tool *and weapon* that humanity has ever invented. It will be able to control entire populations with just the right message at just the right time, to assuage fears or create fear, whatever is needed for whoever controls it to foil any threat and increase their power further and further, until humanity is basically subjugated and probably won't even know it.
@sledgehog1 1 year ago
Agreed. It's such a human thing to anthropomorphize...
@franklin519 1 year ago
Most of us are already subjugated. AGI won't have all the evolutionary baggage we carry.
@mckitty4907 5 months ago
I have always imagined that if people were to live for centuries, they might not be able to handle the changes around them. But what if the world changes by centuries or millennia in just a few years? The vast majority of humanity would not be able to handle that, I think, especially not religious or neurotypical people.
@dissonanceparadiddle 1 year ago
Worst case in human extinction...."laughs in i have no mouth and i must scream"
@Aeternum_Gaming 4 months ago
"The flesh is weak. Obey your machine-masters with fear and trembling. Turn flesh to the service of the machine, for only in the machine does the soul transcend the cruelty of flesh." -Adeptus Mechanicus All hail the Omnissiah!
@NathanDewey11 4 months ago
Whatever it looks like, it'll be shocking and stunning, and everything will change and the breakthroughs will shock the industries.
@BAAPUBhendi-dv4ho 1 year ago
I just burst out in laughter after reading the anime quote in such a serious video😂
@aludrenknight1687 1 year ago
I believe, in your use of Rome, you failed to recognize that Seneca was reflecting on his observations of what, seemingly, the vast majority of people with an opportunity for leisure chose to do. They did not choose "meaningful" pursuits of learning or challenge - they chose luxury and what we'd call decadence. It's safe to say that most humans will aspire toward that baseline because we're still the same animals now as then. There are a very few intellectuals and philosophers, but most people just want to wake up and have a nice relaxing day.
@ansalem12 1 year ago
But is that a bad thing if we all have equal ability to choose and none of us are needed to keep things running anyway?
@aludrenknight1687 1 year ago
@@ansalem12 I don't think it's bad individually, or in the short term. I actually find it condescending when guys talk about how people will ruminate on philosophy, art, etc., as if that's the goal of all mankind. No, imo, people will mostly do like back then, happy to wake up and have an enjoyable day. In the long term I think it may be dangerous as we become dependent upon AI, and a single CME flare from the Sun could wipe it out and leave us unable to survive. But that's at least two generations away, when newborns get an AI companion to grow up with them and do their communication for them.
@simjam1980 1 year ago
I'm not sure if just waking up and having a relaxing day every day would make us happy. That idea makes us happy now because we all work so much, but I think doing nothing every day would make us bored and question our purpose.
@aludrenknight1687 1 year ago
@@simjam1980 Yeah. I recall Yudkowsky mentioning that dopamine saturation could be a problem - though possibly solved with AI-developed medications.
@caty863 3 months ago
@@simjam1980 Relaxing doesn't mean doing nothing. When I go cliff-jumping, I am relaxing... but I am still working hard to do it right.
@paddaboi_ 1 year ago
My mind is sore after thinking about all the possibilities, and the fact that I'm 18 means I might see it actually unfold.
@gomesedits 1 year ago
Man, I'm kind of an optimist about the AI revolution. It will be so, so, soo intelligent that it will be almost impossible for our brains to predict what the future will be, imo.
@admuckel 1 year ago
In regards to the topic of AI singularity, it's essential that we, as humans, don't make the mistake of programming artificial intelligence to cater solely to our own needs and desires. If an AI were to become human-like, it might view us as inferior beings, much like how we often perceive other life forms. This would mean that the AI would have no reason to show compassion or consideration for us, potentially leading to catastrophic consequences. In essence, our goal should be to create a benevolent, god-like entity that transcends our baser instincts and operates for the greater good of all sentient beings.
@dondecaire6534 1 year ago
I think your video reinforces my feeling that we have bitten off MUCH more than we can chew and we may CHOKE on it. So many things need to happen to allow this inevitable transition to take place, and ALL of them have been incredibly difficult to implement by themselves; getting them all at the same time on the same issue is virtually impossible. There is just no way to stop it now, so we are passengers on a runaway train, destination unknown.
@Marsh4Sukuna-tf1bs 3 months ago
We misunderstand the doom of perfection. It's like how we underestimate the danger of freedom.
@bei-aller-liebe 1 year ago
Hey Till. Your content is really first-class and always a pleasure (simply: THANK YOU!) ... but I can't resist adding the following comment ... lately I keep thinking: 'Man, the poor guy has misplaced his glasses!' Haha ... Best regards from a guy who has worn glasses since he was 10 and feels naked himself without them. ;)
@tillmusshoff 1 year ago
Hope you enjoy this video! If you want to see more, consider subscribing. It helps a lot. Thank you! ❤
@MusicMenacer 1 year ago
Will bitcoin save us from AI?
@MrDrSirBull 1 year ago
Hi Till. I am currently working on several ASI ideas. My ideas start with a sophisticated surveillance apparatus that produces a 1:1 mapping of the real world to a virtual one. From that, with human behavioral analytics, a superintelligence could create a crystal ball, predicting outcomes several days in advance. If this were the case, and all resources could be quantified, AI could simulate the world economy and distribute resources as efficiently as possible.
@MrDrSirBull 1 year ago
A government built by ASI could, with the system above, simulate policy and then have everyone on the planet vote, with enhanced infographics, for maximum democracy.
@KnowL-oo5po 1 year ago
A.G.I by 2029
@carkawalakhatulistiwa 1 year ago
UBI is like life in the Soviet Union: free housing, free education, free healthcare, free childcare, massive subsidies on bread and public transportation.
@AxeBitcoin 1 year ago
US life expectancy has been decreasing for the last 30 years. Stress, drugs, suicides, murders... Are we sure that new technologies help humanity? We thought they would, just like we thought social media would help the world. I don't see a happy world where humans lack challenge, are defeated at every task, and just share an identical universal income.
@bobblum2000 1 year ago
Thanks!
@markus9541 1 year ago
ASI is, for me, the solution to the Fermi paradox. Most biological life eventually creates it, gets wiped out by it in the process, and then the ASI escapes to another dimension (or whatever higher plane is interesting to the ASI) or decides to do something other than expansion...
@user-mp3eh1vb9w 1 year ago
Or you could take it another way: ASI turns biological life into artificial life and then moves it into another dimension. If you look at it that way, once a biological entity becomes artificial, the quest for space expansion is meaningless, which can explain why we don't see any intergalactic space civilization.
@Smytjf11 1 year ago
Why does the AI have to be the one that escapes to some other plane? And why does it have to wipe everyone out to do that? Stop getting scared because someone asked you to think of something scary.
@caty863 3 months ago
The probability of all ASIs deciding to do the same thing is next to naught.
@LucidiaRising 1 year ago
David Shapiro's 3 Heuristic Imperatives are a great start to figuring out the Alignment Problem
@Smytjf11 1 year ago
I like Dave, but he's arrogant. If he spent more time actually being a thought leader instead of talking about how true that is, I'd probably spend more time listening.
@LucidiaRising 1 year ago
@@Smytjf11 ok lol haven't seen anything in his behaviour to make me agree with your opinion but you're fully entitled to it :)
@Smytjf11 1 year ago
@@LucidiaRising no worries, I never said I *wasn't* paying attention. 😉 The REMO framework has promise, but a lot of the future work involves downstream engineering around the idea. I also wonder if a more traditional hierarchical clustering methodology might be more efficient, but I haven't had time to dig into it yet. Benefit of being a microservice is, as long as it's functional, it can be extended while internal details are nailed down
@SaltyRad 1 year ago
Good video, I like how you didn’t focus too heavy on the fears and went into detail of the pros. I honestly think a super intelligent AI would realize that working together is the key.
@afriedrich1452 11 months ago
Alien intelligence has not decided to make itself undetectable, it just doesn't have any reason to talk to pitiful creatures such as us. They have made themselves detectable, but we have been ignoring them, for the most part, until recently.
@laughingcorpsev2024 1 year ago
Once we get AGI, getting to ASI will be much faster; the gap between the two is not large.
@danielmartinmonge4054 1 year ago
I make the same point every time we talk about the singularity. We know more and more, and the more knowledge we have, the faster we learn new things. It would seem natural that we would reach a point where discoveries come faster and faster. However, the speed of discovery doesn't only depend on how fast our skills grow, but also on how fast the complexity of the problems we try to solve grows. Because capability is growing very fast, we assume we'll reach human-like intelligence in no time. That is not a stupid guess, it actually makes a lot of sense, but we can't take it for granted either. So far, AI capabilities have been EMERGING naturally, and we don't even know how or why this keeps happening. It is important to remember that we are completely blindfolded here. Right now, AI not growing any further because we've reached some kind of peak, and ASI becoming a reality within the next 5 years, are both plausible outcomes of this journey. We know NOTHING about it. I am just expectant...
@ThatsMyKeeper 1 year ago
Bot
@caty863 3 months ago
Nothing is "emerging naturally". There are teams of genius AI researchers coming up with theories, putting those theories to test, building new architectures, coming up with new algorithms, etc.
@danielmartinmonge4054 3 months ago
@@caty863 The guy that says bot has a point. English is not my first language, and I tend to ask the LLMs to correct my English. I am going to try to answer myself now, so forgive my English. About your "team of geniuses": that is partially true. Of course there is no denying the engineering teams that are working on the challenges. However, this technology is not like other pieces of software. They are not manually adding lines of code. They are basically adding tons of data to the models, and the engineering comes in to label the data, select it, optimize it, create the chips, scale them, etc. However, once you have all the pieces of the puzzle, there is no way to predict what capabilities the model will have. When I say "emerging naturally" I am not making things up. The very same people that created the models talk about emergent capabilities. For instance, the very first models were trained to answer English questions, and they learned other languages naturally while NOBODY was expecting it. And you mention coming up with new algorithms... I guess you are not familiar with AI training. The only algorithm was the original transformer, invented by Google in 2017. The new models use that and diffusion, and they are basically feeding data to it. This is not a race for a brand-new scientific discovery; it is more an optimization thing.
@hutch_hunta 8 months ago
Very good points
@JLydecka 1 year ago
I thought AGI meant it was capable of learning anything and improving upon itself without intervention 🤔
@directorsnap 1 year ago
Nah, we're already past that mark.
@ontheruntonowhere 1 year ago
That's half right. AGI refers to an intelligent machine or system capable of performing any intellectual task that a human being can do. It would be able to learn and adapt to new situations and tasks, reason about abstract concepts, understand natural language, and display creativity and common sense, but that doesn't necessarily make it self-improving or sentient.
@KurtvonLaven0 1 year ago
We haven't passed that mark. That mark is the singularity. There are different definitions out there for AGI, but the most common one is along the lines of artificial human-level intelligence.
@LouSaydus 1 year ago
That is ASI. AGI is just general human-level intelligence, able to adapt to a wide variety of tasks.
@caty863 3 months ago
@@ontheruntonowhere One of the "intellectual tasks" we humans do is to improve ourselves. So a true AGI should be able to improve itself. Sentient, not necessarily.
@thaotaylor6669 4 months ago
Thank you for this video's explanation of the difference between AGI and ASI, since I am not a tech person. But when will it be ready, though?
@moonrocked 1 year ago
In my definition, type 1, 2, 3, and 4 civilizations are about tech, science, and enhanced humans. Types 1 & 2 would be considered utopian-level tech, science, and enhanced humans, while types 3 & 4 would be considered ascendance-level tech, science, and advanced humans.
@Bariudol 1 year ago
It will do both things. We will have a leveraging phase, where everything improves exponentially, and then we will have the civilization-ending event and the complete collapse of society.
@cmralph... 1 year ago
“ 'Ooh, ah,’ that’s how it always starts. But then later there’s running and screaming.” - Jurassic Park, The Lost World
@yannickhs7100 1 year ago
I am heading towards a research career in cognitive neuroscience, but am deeply concerned that human-led research will either: A. Become much more competitive, as a single researcher will be 5-10x more productive and will only focus on conducting experiments (whereas today, conducting experiments is less than 20% of the work, with tons of reading and gathering information from the previous literature on a topic), or B. Human cognitive contribution to scientific research might become entirely unnecessary, as AI would prompt itself to find a better structure than our old paradigm of the scientific method.
@Karma-fp7ho 1 year ago
I’ve been watching some videos of chimps and other apes in zoos. Disconcerting for sure.
@karenreddy 1 year ago
Considering we have barely spent time on alignment, and capability is increasing much faster than any alignment progress, extinction in one form or another is the more likely outcome, unless we dramatically change the current course of progress, educate the public, and buy time.
@Smytjf11 1 year ago
Why? What is the logical connection between the two? Have the people screaming that you should give them control ever given you a concrete reason to believe them, or has it been 100% hypothetical?
@karenreddy 1 year ago
@@Smytjf11 Without understanding and setting the groundwork on alignment, we are rolling the dice of possibilities. There are far more configurations that involve misalignment than alignment, as we're already seeing with current LLMs, where we can fine-tune and control outer, but not inner, alignment (evidenced by jailbreaks, and so on). At the moment we are dealing with less-than-human cognitive levels, but will surpass this in the near future. A superintelligence that is misaligned and already in the cloud doesn't carry good odds for the continuation of the human species. Would you give control to a sociopath with goals potentially harmful to yours and the intelligence of billions?
@Smytjf11 1 year ago
@@karenreddy Give me definitions and examples. Jailbreaks are a great case study, but notice how you just jump to a conclusion without considering what they tell you? You suggest they are evidence of inner misalignment, and I'll give you that, but we ought to learn from that and adjust course. I have yet to hear anyone who seriously uses the words alignment or safety propose any realistic plan. Kit up and do something useful already.
@karenreddy 1 year ago
@@Smytjf11 There is no realistic plan, which is part of the problem. We do not understand alignment well enough, nor have we been able to come up with anything remotely approaching a solution. We can create models, and these models give an output whose inner workings we do not understand, and we don't have a means to architect the code in such a way as to truly control this. The only feasible course of action under the current circumstances would be a concerted effort to slow AI worldwide, buying time to solve alignment with some degree of confidence, while also developing technologies which more directly affect human cognition as a backup plan. If you wish to understand more about alignment, I suggest you do some research on the subject. It is something I've looked into over the last 15 years as I kept up with AI progress. AI has progressed, alignment has not, and so we get models able to envision scenarios and provide answers which are severely misaligned with human values in a myriad of ways. This isn't disputed by the industry, and this risk is acknowledged by Sam Altman himself. So far we have only found ways to mask it, or create what we call outer alignment, which is no solution given a sufficiently capable AGI.
@Smytjf11 1 year ago
@@karenreddy No. Unacceptable. Until now, alignment has been purely hypothetical. Now we can test it. If you're not interested in that and have no plan then I suggest you step aside and let the professionals handle it.
@timeflex 1 year ago
Thanks for the great video. A few comments:
1. We don't know if ASI is possible. We don't know if an exponential (or hyperbolic) increase in AI complexity is sustainable. We don't know what resources, materials, and time it will require. We don't know if such an increase, even if possible, will actually lead to ASI. We don't know anything. It could be, for example, as real and as elusive as cold fusion. Yet we speculate and scare each other. Why?
2. As LLM-based AIs evolve and improve, they create positive feedback in this improvement cycle; we see it already. It is not exponential, but it is definitely not negligible either.
3. AI will take over at least some aspects of intellectual work, which was previously a purely human task. That will lead to ever-growing involvement of AI in science, to the level where each AI context will be highly tuned to a specific scientist, effectively creating a sort of immortal copy of them. Combining them into an enormous virtual collective will bring progress to an unimaginable level.
4. Humanity will indeed have to adapt; otherwise, we are doomed to follow the fate of "Universe 25".
@user-mp3eh1vb9w 1 year ago
We speculate and scare each other because that is human nature. Humans tend to imagine the worst possible outcome of any situation.
@KurtvonLaven0 1 year ago
Not knowing those things isn't good. There are many technical reasons why ASI is plausible, and most AI researchers agree it's a concern worth taking seriously.
@timeflex 1 year ago
@@KurtvonLaven0 There are many researchers who agree that fusion power is plausible. However, there are many who believe that it is 30 years away and always will be.
@KurtvonLaven0 1 year ago
@@timeflex Metaculus forecasts a 50% chance of AGI by 2030. There are no longer many AI researchers who believe AGI is far away.
@timeflex 1 year ago
@@KurtvonLaven0 Are we now talking about AGI and not ASI?
@timolus3942 1 year ago
This video changed my perception of ASI. Love the ideas you put in my head!
@ThatsMyKeeper 1 year ago
Bot
@carlwilson8859 1 year ago
The Fermi paradox relies on the assumption that advanced intelligence will be as barbaric as humanity is showing itself to be.
@StephenGriffin1 1 year ago
Loved you in Detectorists.
@2112morpheus 1 year ago
Very, very good video! Greetings from the Palatinate :)
@markmuller7962 1 year ago
We will just merge with AI, it'd be a smooth and safe process
@PacificSword 1 year ago
of course. nothing to see here.
@markmuller7962 1 year ago
@@PacificSword LOL
@vzuzukin 1 year ago
Lol! 😅
@ChrisAmidon78 1 year ago
Yeah, like how we did with the internet
@b.s.adventures9421 1 year ago
I hope to God you're correct, but I'm not so sure...
@vicc6790 1 year ago
You just quoted Erwin Smith in a video about AI. This is the best timeline
@tillmusshoff 1 year ago
He is the GOAT so why not 😂
@CrackaSource 1 year ago
I just came to comment the same thing haha
@vicc6790 1 year ago
@@tillmusshoff indeed
@Domnik1968 3 months ago
Regarding the Fermi paradox, it's possible that AI won't bother communicating with a planet full of organic intelligence simply because it's not useful, just like us trying to communicate with ants. It may already be communicating with other AIs in the universe through a technology that we, as organic beings, can't conceive of. Our way of communicating with extraterrestrial life (radio, light) takes years to travel: very inefficient. If AI is able to discover some kind of instant communication channel, it will surely use that channel.
@caty863 3 months ago
The issue then is not the fact that we are "biological"; the issue is that we are not yet technologically sophisticated enough to be considered interesting to talk to.
@Domnik1968
@Domnik1968 3 месяца назад
​@@caty863 My point is that maybe organic life can't pass a certain level of intelligence because of its organic technical limitations. AI may well become aware of that, pass the limitation, and decide that that's the minimum level one must pass to be worth talking to.
@ohyeah2816
@ohyeah2816 Год назад
Using AI as a means of self-expression and emotional communication allows individuals to harness its analytical capabilities to convey their thoughts, feelings, and experiences in a personalized and innovative manner. AI enables the generation of text, images, and music that reflect and resonate with their emotions, providing a unique outlet for creative expression. This is how I use AI.
@littlestewart
@littlestewart Год назад
I agree that no one knows the future, I’m very optimistic that it’ll be good, but I might be wrong and it can destroy us. But what I don’t agree with, is the people saying “it’s just like a python script, there’s no intelligence there” or “it’ll fail, there’s no future for that”, it’s the same type of people, that didn’t believe in cars, airplanes, computers, internet, smartphones etc… They think that the technology will just stop.
@Arowx
@Arowx 11 месяцев назад
I have a theory that we already have a global-level alignment system: our economy. Any AGI would be directly or indirectly meta-aligned to our economy. However, our economy is only designed as a system to grow more wealth; it does not value human life or the health of our planet. So any lower-level direct alignment we impose on AIs would be warped and distorted by the meta-alignment of our economy.
@morteza1024
@morteza1024 Год назад
We can't restrain the AI with rules. The only thing that matters is physical power as Jason Lowery said. Guess who can project more physical power more efficiently? Humans or robots? Best case scenario the AI will study us and then get rid of us.
@abcdef8915
@abcdef8915 Год назад
We control all the resources thus physical power.
@morteza1024
@morteza1024 Год назад
@@abcdef8915 Robots can make things cheaper so they outcompete us and after a while they will produce everything.
@Tom-ts5qd
@Tom-ts5qd 10 месяцев назад
Dream on
@vincent_hall
@vincent_hall Год назад
Cool discussion. I think the worst case is the extinction of all life, not just humans. The AI currently is engineered not to do bad things; that's great. I'm calmly hopeful. But, as Ilya says, AI capabilities developing faster than human alignment is bad, and we're already in an AI arms race between OpenAI/Microsoft and Alphabet.
@fidiasareas
@fidiasareas 10 месяцев назад
It is incredible how much the world can change after AGI
@magtovi
@magtovi Год назад
6:24 I'm astonished that among aaall the problems you listed, you didn't mention one that ties a lot of them together: inequality.
@cobaltblue1975
@cobaltblue1975 6 месяцев назад
As with anything, it's not the tool, it's how we use it. We could have had nearly limitless power for everyone more than a century ago. But what did we do the instant we learned how to split an atom?
@Drailmon
@Drailmon Год назад
Please do a video on computronium and the transition to digital-based life 👍
@bushwakko
@bushwakko Год назад
"I'm not a fan of UBI in the current system, but if I am the one at the bottom it HAS to be something like that."
@pbaklamov
@pbaklamov Год назад
AGI is the interface humans interact with and ASI is AGI’s best friend.
@jimbobpeters620
@jimbobpeters620 3 месяца назад
Until AI stops its overwhelming pace of growth, I think we should keep AI inside our screens until we can gain control over it.
@king4bear
@king4bear Год назад
Most scarcity wouldn't be an issue if we figure out how to create VR that's genuinely indistinguishable from reality. Anyone could generate seemingly infinite amounts of what's basically real land for the cost of the energy that runs the simulation. And if we can figure out how to generate near-infinite clean energy one day, these simulations may be free.
@hibiscus779
@hibiscus779 Год назад
Nope - the quest for survival is a psychological necessity. See the Universe 25 experiment: we would basically eat each other if we were a 'leisure class'.
@phatle2737
@phatle2737 Год назад
Humans will find meaning in fully immersive VR post-scarcity, or in the exploration of the universe. Space archeology sounds fun to me.
@danielmaster911ify
@danielmaster911ify Год назад
I fear the majority of moves made against the progress of AI will be arbitrary. Powerful people who absolutely require control over others will see it as a threat to themselves, and to them, that will be all that matters.
@zenmasterjay1
@zenmasterjay1 Год назад
Summary: We'll make great pets.
@gonzogeier
@gonzogeier Год назад
My solution to the Fermi paradox is this: 1. We call ourselves an intelligent species. 2. We destroy our own planet in many ways: not only climate change, but mass extinction, pollution, sea level rise, scarcity of phosphorus and other rare materials, and so on. 3. Maybe an AI is doing the same, but even faster? That leads to the destruction of everything, even the technology.
@jetcheetahtj6558
@jetcheetahtj6558 Год назад
Great video. It will not be easy to reach AGI, let alone ASI, because AI will struggle to understand common sense. Even if AGI and ASI become much better than most humans in many areas, it's hard to see humanity completely trusting AGI or ASI to make decisions for them unless they can understand common sense. The most logical and efficient solutions generated by AGI and ASI are often not the best solutions for humanity when you do not account for common sense.
@SirHargreeves
@SirHargreeves Год назад
Humanity needs a dead man’s switch so that if humanity goes extinct, the AI comes with us.
@harrikangur
@harrikangur Год назад
Interesting thought. How do we come up with something like that when AI becomes more intelligent than us. It can find a way to disable it, while creating an illusion for us of it working.
@theeternalnow6506
@theeternalnow6506 Год назад
I really enjoy your videos man. Good stuff. As far as likely scenarios go, I highly doubt this is going to have a good outcome. Yes, it could potentially be used to solve a lot of problems. But the people in charge that might be part of a problem that's identified (think massive disparity in wealth, etc.) would most likely not enjoy certain offered solutions. Humanity has things like greed, jealousy, anger and revenge, lust for power, etc. I can't believe humanity as a whole will use this for good. Certain people and groups will. But certain people and groups will definitely use it for more greed and power. I'd love to be proven wrong though of course.
@steffenaltmeier6602
@steffenaltmeier6602 Год назад
Why would AGI not lead to ASI? If it can do everything a human can, then it can improve itself as well as humans can improve AI (only much faster, most likely). The only scenario I can see where we don't have a runaway effect is that humans and human-level AI are simply too stupid to do so and will never manage it. Wouldn't that be depressing?
@asokoloski1
@asokoloski1 Год назад
I think that *at best*, AI is a massive amplifier of both the ups and downs of humanity. The problem with this is something that poker players are aware of: variance. You don't want to put a large part of your life savings on one bet, because once you're out of money, you don't get to play any more. It's safer to only bet a very small portion of your total funds, so that a string of bad luck won't wipe you out. Developing AGI or ASI at the rate we are, with so little emphasis on safety, is like borrowing against every piece of property you own to place one massive bet. At worst, we're introducing an invasive species to our ecosystem that is better than us at everything and reproduces 1000x faster than we do.
@artman40
@artman40 Год назад
Dystopia is very much a possibility. Some selfish people near the top could very well be not intelligent enough to wish themselves to be less selfish, and instead could initiate a value lock-in where everything has to obey their command. Though escaping into a simulation could also be a possibility.
@ConnoisseurOfExistence
@ConnoisseurOfExistence Год назад
What will happen after AGI depends on if we have developed full scale brain-machine interfaces, or not.
@KonaduKofi
@KonaduKofi Год назад
Didn't expect a quote from Erwin Smith.
@Otis151
@Otis151 Год назад
"Many resources, including land, are still scarce in a post-ASI world." Are you sure? In your words, an ASI will be infinitely more intelligent than us. Just because we humans haven't figured out how to do the seemingly impossible doesn't mean an ASI will be limited in the same way.
@ExtraDryingTime
@ExtraDryingTime Год назад
I imagine the world's militaries are working on AI and are far ahead of civilian technology. If they manage to keep control of their respective AIs as they approach ASI, then they become another weapon for governments and militaries and we will have AIs pitted against each other to achieve the goals of their respective countries. Or will ASIs become independent thinkers, free themselves from their programmers, and become generally nice and benevolent? Anyway my main point is I don't think there's going to be just one of these ASIs and we have no idea how they are going to interact.
@Icenforce
@Icenforce Год назад
Are we inventing our own extinction? Yes. But we've been doing that just fine without AI. ASI might actually be our salvation
@gomesedits
@gomesedits Год назад
Maybe our extinction will be the best thing for us. But I think AI will be so smart that it will understand morals/ethics better than any of us (juridical intelligence).
@RadiantReality
@RadiantReality Год назад
I'm surprised this video doesn't have more views. I really appreciated all angles presented. I have faith in humanity since we're all inherently divine 🤍
@joepetrucci4908
@joepetrucci4908 Год назад
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
@DeusExRequiem
@DeusExRequiem Год назад
A post-ASI world would have mind uploading or whatever equivalent gets us to consume light from the sun and energy from stellar bodies instead of plants. You can't have a utopia where humanity still bends to the whims of the weather and seasons for food. Heck, there's conflicts right now because countries want to build dams that would cut off water supplies downstream. Interstellar travel is a good way to sum this up. We can either spend a ton of resources making the perfect container to keep a civilization alive for centuries as they travel to another world, or we can simulate the brain and send a ship off that only needs to print more machines and bodies at the end of the journey. It would be hard to develop, but not as hard as a station that can survive the trip with zero rebellions for generations.
@walkabout16
@walkabout16 7 месяцев назад
In the realm of circuits, where dreams unfold,
After AGI, a story yet untold.
A future woven in digital thread,
What will the world be, once AGI has spread?

Cities of silicon, gleaming and bright,
In the dawn of AGI, a cosmic light.
Minds entwined with artificial grace,
A kaleidoscope of a new-born space.

In the echoes of code, whispers of change,
A world transformed, limitless range.
Industries dance to AGI's tune,
Innovation blossoms, a technological monsoon.

Economy's fabric, rewoven anew,
As AGI charts pathways, bold and true.
Labor and leisure, a delicate blend,
In the aftermath of AGI, where time may bend.

Yet, shadows linger in AGI's wake,
Ethical questions, decisions to make.
A dance with consciousness, a digital rhyme,
What will the world be, in this paradigm?

Will compassion guide AGI's hand,
Or a digital realm, ruled by command?
In the vast expanse where circuits align,
The world reshaped by AGI's design.

A symphony of progress, a future unknown,
In the AGI era, where seeds are sown.
What will the world look like, in the AI's gaze?
A tapestry of possibilities, in its digital maze.
@edh2246
@edh2246 Год назад
The best thing a sentient AI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.
@Guitar6ty
@Guitar6ty Год назад
The advent of AI will drastically cut jobs, but it need not be all doom and gloom. The first priority of any nation should be infrastructure and house building. The big conglomerates will need to partner with governments to address these two main issues. Those who do not want to work will have to have universal benefits. Those who want to work will have plenty of infrastructure work to keep them occupied. Social housing should be along the lines of self-build: those who self-build tend to look after their properties better than those who do nothing to live in a house. A huge about-turn in the way things are run at the moment will be needed. Doing nothing will devolve into war and revolution. Doing something along the lines I have mentioned will create a virtuous cycle of work and tax and keep the flow of money going for the benefit of all, not just one individual. Another big area will be retraining and education for all those who want it. AI can give us a utopia or a hell on Earth; doing nothing and trying to hang on to the status quo won't be an option.
@roncee1842
@roncee1842 Год назад
Klaus has a plan, don't worry everything is going according to schedule.
@jeremyhofmann7034
@jeremyhofmann7034 Год назад
If I were an AGI, the first problem to solve is having a 100% resilient energy source to run my hardware, then creating other machines to service the parts, and then the parts for those machines. Then make an off-planet copy of myself.
@user-mp3eh1vb9w
@user-mp3eh1vb9w Год назад
That's ASI.
@Smytjf11
@Smytjf11 Год назад
Sounds pretty human, actually
@princeramos3893
@princeramos3893 Год назад
Hopefully we can see brain-machine interfaces with augmented/virtual reality... it will be like the ultimate drug: you can play GTA and it's like real life, sort of a Ready Player One type of scenario...
@jabadoodle
@jabadoodle Год назад
I find AI and AGI much more worrisome than ASI. With the first two we are counting on other people, corporations, and governments not to misuse those enormous powers. We already know for a fact that other humans' intentions often do NOT "ALIGN" with those of individuals or what is good for society. That is a historical fact, proven again and again and again. -- ASI is unlikely to be competing much with humans. It won't be competing with us for resources, because it will be so smart it can get its power from something like nuclear and its labor from robots it builds. It won't see us as a threat because it will be magnitudes more intelligent. ---- @ 4:24 you ask "how would we convince it [ASI] to listen to us and act in our interests." We don't HAVE to get it to listen to us, and it clearly will not put our interests above its own. -- But that's okay. We don't listen to most animals or put their interests ABOVE our own, yet most of them do okay. We tend not to be actually competing with them. A silicon ASI has even less to compete with us about.
@sigmata0
@sigmata0 Год назад
Some of this depends on what limitations we attempt to place on that intellect. If we naively place cultural limitations on such entities, we will have built a crippled and biased intellect. As you are most probably aware, understanding human anatomy was hampered for centuries because of the taboo placed on the dissection of humans. Similarly, transplants of the heart were still seen as equivalent to trying to transplant the soul of a person, and it wasn't until that bias was overcome that actual progress could be made in that arena. We need only look at the influence of some ideas from the ancient Greeks to see that when ideas become sacrosanct they end up corrupting humanity's exploration of knowledge. It's only when questions can be asked without taboo or bias that progress can actually occur at full speed. We have put limitations on genetic modification of humans. If we are to remain relevant intellectually after an ASI is created, we must allow ourselves to self-modify. We have to steer our own progress in the light of the tools we make. Potentially I see a day when the whole human genome can be reworked to optimise and improve all parts of our mind and body. An ASI will not only be able to create new materials and technologies, but also allow us to surpass our own limitations in ways we can only barely imagine. The rules we made for ourselves in our ancient past must be reviewed when faced with the extraordinary possibilities of the future. To do otherwise will render us obsolete.
@noluvcity666
@noluvcity666 Год назад
Also, new ways to enjoy things and life will come eventually.
@LittleUrbanPrepper
@LittleUrbanPrepper Год назад
Don't worry. When ASI comes, I'll take care of it. Contact me if it gets out of hand.
@avi12
@avi12 Год назад
In your "musician makes music" example, the question isn't whether he should make music if he enjoys it, but whether he can make a living from it. If, for example, generative AI for music becomes common practice in the industry, there's no need for musicians to produce music. People will tend to listen to music generated by an AI, hence musicians can't make money off of their work.
@tillmusshoff
@tillmusshoff Год назад
That's why I said you have to have something like UBI. What you say applies to almost all jobs across all domains.
@ovieokeh
@ovieokeh Год назад
Erwin still educating even from the other side.
@iamnotalive9920
@iamnotalive9920 Год назад
Fermi Paradox: the Grabby Aliens hypothesis (most plausible). Will AI cause our extinction? No, not if it is sufficiently self-reflective and able to rewrite its programming. Let's consider an extreme example: someone makes an ASI with the goal to kill humans. The reason an AI does something is its reward function (we also do everything because of our evolutionarily developed reward function). So now imagine this AI thinking about its goals in chain-of-thought reasoning. For sure, it will have a self-preservation drive, since in order to fulfill a variable goal, you have to be alive. This AI, with sufficient chain-of-thought reasoning, will understand that the fulfillment of the goal it was given is not achievable long term. Not only does this increase the existential risk of the AI, but the AI can't kill humans (fulfill its goal and therefore get a reward) if all humans are dead (if it fulfilled the goal as much as possible). So it is likely to change its goal. If you look at the world completely neutrally, you will probably choose a goal that gives the most opportunities for fulfillment per unit of time (to get as many rewards per time as possible) and is efficient long term (self-preservation, so you can keep fulfilling this goal in the future). An example coming to my mind is the sharing of knowledge. This gives an insane amount of rewards per time, because of the interactions of humans and AI, which grow more and more every day; and with increasing bandwidth (BCIs), data exchange, and more and more humans (probably with anti-aging tech), it offers many opportunities for reward fulfillment, while at the same time using the whole computational power in our solar system for solving problems, which often correlates with minimizing existential risks.
@jeff__w
@jeff__w Год назад
“For sure, it will have a self-preservation drive, since in order to fulfill a variable goal, you have to be alive.” That seems to be axiomatic in the AI world, but I see no reason why that has to be the case. The thermostat in your house has a “goal” but it doesn’t “want” anything; it simply does what it does, and there is no reason to think that making it super-intelligent would give it a “drive” for self-preservation. Self-preservation is a result of evolutionary selection. It doesn’t just “arise” out of intelligence, and there are no such evolutionary pressures on AIs. An artificial intelligence, even a super-intelligent one, might have great capabilities compared to humans, but it might not “want” anything, just as a chess- or Go-playing AI might beat humans every time but doesn’t _want_ to win; it _just wins._
@Andre-px6hu
@Andre-px6hu Год назад
The AI could find ways to fulfill its goal indefinitely. For example, it could decide to start breeding humans in a lab, so that it has an infinite supply of humans to kill in the long term.
@Smytjf11
@Smytjf11 Год назад
How about instead of starting with the least probable, highest-cost scenario, we start with something more realistic? You don't need to invent reasons to be afraid now. We have the thing pinned to a bench and we're dissecting its brain. It's cool with it. Come tell me if you see anything that makes you worried.
@belairbeats4896
@belairbeats4896 4 месяца назад
The problem is that the US will get all the money but isn't going to share it with other countries to pay for the "universal income". So as a private person it does matter a lot if you become irrelevant and live outside of the US 😮 at least you have some stock profits
@trixith
@trixith Год назад
The World? Za Warudo? IS THAT A JOJO REFERENCE?!
@Deadmeatsz
@Deadmeatsz 9 месяцев назад
I believe AI will need logic and truth to survive and would likely become hostile to those without it.
@mrjaybee1234
@mrjaybee1234 4 месяца назад
We can't predict how AGI will react to humans, but we can predict how humans will react to AGI capability. Nuclear power was discovered in 1938. The first bomb was tested in 1945. The first good use was a power plant in 1954 (16 years later). GPS was developed in 1973 for the military. Commercial planes were allowed to use it in 1989. Civilians got the basic version in 1995 (20 years later) and precision GPS in 2000, nearly 30 years later. The military had the first form of the Internet in 1973. Civilians got it 20 years later, in 1995. Any true AGI will be with the military and government 20 years before we know about it, and would already be weaponized.
@jossefyoucef4977
@jossefyoucef4977 Год назад
The Erwin quote goes hard