
Superintelligence | Nick Bostrom | Talks at Google 

Talks at Google
2.3M subscribers
449K views

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
This talk was hosted by Boris Debic.

Science

Published: Sep 21, 2014

Comments: 671
@ticallionstall 9 years ago
I love how the random guy in the crowd is Ray Kurzweil asking a question.
@davidhoggan5376 9 years ago
ticallionstall Ray works for google, so it seems likely he would be interested in sitting in on the lecture.
@Ramiromasters 9 years ago
ticallionstall That was freaking cool, and Ray got new hair...
@JodsLife1 9 years ago
ticallionstall he didnt even ask a question lmao
@wfpnknw32 9 years ago
Jod Life yeah, he basically made a statement. He never seems to address or even talk about the security concerns Nick raises about a superintelligence explosion.
@wfpnknw32 9 years ago
Alex Galvelis fair play about the lag in stealth and other technologies. Although i think a human level ai would be so game changing that any lag that there would be would be very small. Hopefully when we get close it won't be through the military though, militarising ai seems like such a bad idea on so many levels
@RaviAnnaswamy 9 years ago
First of all, great talk, stretching the mind to think far more deeply. What I observed was that the strength of his argument is not how likely superintelligence is to turn rogue, but how severe, sudden, and uncontrollable it could be, so we had better prepare for it. To that end, every time a questioner questions the assumption, he hurriedly and very cleverly sheds that question aside and pursues the 'threat' at full steam.

Take my following note as a genuine compliment: he reminded me of the tone of my mother, who got us all to do homework by scaring us without scolding or shouting. She would just not smile but keep saying, 'oh, those who don't study have to find a job like begging' (not her exact words, just giving you an idea..), and whenever we suspected what she was saying she would sidestep it and bring up this and that to distract us into working hard.

One day humanity may thank Nick for doing something very similar: instead of getting distracted by the (low-right-now) probability of a catastrophe, he wants us to minimize the severity if (and when) it happens. He is like the engineer who had the wisdom to tame combustion by containing it in a chamber, before putting it onto a cart with smooth shiny wheels.

BTW, his Simulation Argument (search YouTube) scared me and held my thought captive for a week or two! That is awesome.
@stephensoltesz1159 3 years ago
Lots of us, from University to University(And Alumni) across the country are on different channels but networking furiously to America's Inner Core...We have duties for parents. You're lookin' at it, Guys & Gals:. The preservation of American Academic Tradition, The Preservation of American Society dating back to the Revolutionary War and our first colleges. Screw the Media! Hold my hand, Sweetheart!
@tiekoe 8 years ago
Kurzweil gives a great example of the most frustrating types of audience members a presenter can have. He doesn't wait till Q&A to ask questions. Moreover, he doesn't even ask questions: he forcefully presents his own thoughts on the subject (which disagree with Nick's vision), doesn't provide any meaningful argumentation as to why he believes this to be the case , and goes on to completely ignore Nick's answers.
@MaxLohMusic 8 years ago
+Mathijs Tieken He is my idol, but I have to agree he was kind of a dick here :)
@freddychopin 8 years ago
+Max Loh I agree, I love Kurzweil but that was really obnoxious of him. Oh well, minds like that are often jerks.
@DanielGeorge7 8 years ago
I agree that Kurzweil didn't phrase his question very well, but the point he was trying to raise is actually very relevant: whether any form of superintelligence that arises, desirable or not, should be considered less human than us. For example, we don't consider ourselves to be less human because we have different values than cavemen. This point was clarified by the next guy, who asked the excellent question about utility. If the utility of the superintelligence alone exceeds the net utility of biological humans, wouldn't it be morally right to allow the superintelligence to do whatever it wants? Yes. But, of all possible scenarios, I guess the total utility of the universe would be maximized (by a tiny amount) if its goals were made to be aligned with ours in the first place.
@jeremycripe934 7 years ago
It was a dick move but if there's anybody who's earned the right for that kind of behavior on this specific topic it'd be him and a very few others. I think his point about humanity utilizing it together is very interesting. Bostrom often talks about what one goal will motivate an ASI that will lead to the development of subgoals, but what if the ASI is free and open for everyone to use that it leads to the development of one Super Goal? For example how Watson and Deep Mind are both open for people to utilize and build apps around, one day they could be so powerful that any ordinary person with access could make a verbal request. How many goals could an ASI work on?
@maximkazhenkov11 7 years ago
I think it is dangerous to equate intelligence with utility. Just because something is intelligent doesn't mean it is somehow "human". It could be a machine with a very accurate model of the world and insane computational capability to achieve its goals very efficiently, like the paperclip machine example. It doesn't need to be conscious or in any way humanlike to have a huge (negative) impact on the future of the universe.
@maximkazhenkov11 8 years ago
Dear humanity: You only get one shot, do not miss your chance to blow...
@LowestofheDead 8 years ago
+maximkazhenkov11 This world is mine for the taking, make me king!
@nickelpasta 8 years ago
+maximkazhenkov11 he's nervous on the surface he is mom's spaghetti.
@EpsilonEridani_ 5 years ago
This opportunity comes once in a lifetime, yo
@alicelu5691 5 years ago
WaveHello professionals would be screaming nazis hearing that....
@AllAboutMarketings 3 years ago
There's vomit on his sweater already, mom's spaghetti
@CameronAB122 9 years ago
That last question wrapped things up quite nicely hahaha
@MetsuryuVids 7 years ago
I think the one in "The last question" is a very good scenario, we should hope AGI turns out helpful and friendly like that.
@thecatsman 6 years ago
Nick's garbled response to the last question 'do you think we are going to make it?' said it all.
@anthonyleonard 9 years ago
Ray Kurzweil’s comment that “It’s going to be billions of us that enhance together, like it is today,” is encouraging. Especially since Nick Bostrom pointed out that “We get to make the first move,” as we travel down the path to super intelligence. Let’s make sure we use our enhanced collective intelligence to prevent the development of unfriendly super intelligence. I, for one, don’t want to have my atoms converted into a smart paper-clip by an unfriendly super intelligence :)
@2LegHumanist 9 years ago
True, but it won't be all of us. There will always be Luddites. We're going to end up with a two-tier species.
@2LegHumanist 9 years ago
I might consider getting myself a Luddite as a pet =D
@HelloHello-no6bq 7 years ago
2LegHumanist Yay pet unintelligent people
@sufficientmagister9061 1 year ago
​@@2LegHumanist I utilize non-conscious AI technology, but I am not merging with machines.
@2LegHumanist 1 year ago
@sufficientmagister9061 A lot has changed in 8 years. I completed an MSc in AI, realised Kurzweil is a crank.
@schalazeal07 9 years ago
The last question was the most realistic and funniest! XD Nick Bostrom got taken aback a little bit! XD Learnt a lot more here about AI!
@Neueregel 9 years ago
good talk. His book is kinda hard to digest though. It needs full focus.
@wasdwasdedsf 9 years ago
kurzweil would indeed do well to listen to this guy
@stevefromsaskatoon830 5 years ago
Indeed
8 years ago
Nick Bostrom it's itself a superintelligence. Thanks for the insightful talk.
@RR-et6zp 2 years ago
read more
@roccaturi 8 years ago
Wish we could have had a reaction shot of Ray Kurzweil after the statement at 16:35.
@SIMKINETICS 8 years ago
Now it's time to watch X Machina again!
@chadcooper9116 8 years ago
+SIMKINETICS hey it is Ex Machina...but you are right!!
@SIMKINETICS 8 years ago
Chad Cooper I stand corrected.
@Metal6Sex6Pot6 8 years ago
+SIMKINETICS actually the movie "her" is more relatable to this,
@jeremycripe934 7 years ago
This also raises the question of why AIs keep getting represented as some guy's perfect gf in movies?
@ravekingbitchmaster3205 7 years ago
Jeremy Cripe Are you joking? A sexbot, superior to women in intelligence, sexiness, humor, and doesn't leak every month, sounds bad because.......?
@cesarjom 2 years ago
Bostrom recently came out with a captivating set of arguments that reason why we are living in simulation. Really impressive ideas.
@rayny3000 8 years ago
I think Nick referred to John Von Neumann as a person possessing atypical intelligence, just in case anyone was as interested in him as I. There is a great doc on youtube about him (can't seem to link it)
@impussybull 9 years ago
As someone pointed out before: "Humanity will be just a biological BIOS for booting up the AI"
@vapubusdfeww1353 4 years ago
sounds good(?)
@jamesdolan4042 3 years ago
Sounds awfully pessimistic. And yet in this wonderful, beautiful, diverse, world among us humans and the wonderful, beautiful, diverse planet of flora and fauna that sustains us humans AI is not and will never be part of our consciousness.
@alexjaybrady 9 years ago
"It's one of those things we wish we could disinvent." William Shakesman
@jblah1 4 years ago
Who’s here after exiting the Joe Rogan loop?
@samberman-cooper2800 4 years ago
Most redeeming feature -- made me want to listen to Bostrom speak unimpeded.
@jblah1 4 years ago
😂
@Pejvaque 4 years ago
Joe really cocked up that conversation... usually he is able to flow so well. It was a bummer.
@Bronek0990 8 years ago
"Less than 50% chance of humanity going extinct" is still frightening.
@BattousaiHBr 5 years ago
"hello passengers of United airlines, today the prospects of death on crash are lesser than 50%."
@rodofiron1583 3 years ago
A “ Noah’s Arc” of species, genome sequenced and able to revive as necessary or not. Like patterns at the tailor shop. We’re already growing human-animal chimeras FFS. Now who made who, what, when, how and why…? I think I’ve been here before? Deja vu or my simulation being rewound and replayed?! Hey God/ IT/Controller…. I can only handle Mary Poppins and the Sound of Music.🤔 The future looks scarey and Covid seems like the first step in global domination, by TPTB with the help of AI…I don’t like the way it’s smelling 🤞
@integralyogin 7 years ago
This talk was excellent. Thanks.
@dsjoakim35 7 years ago
A superintelligence might destroy us, but at least it will have the common sense to ask questions in Q&A and not make comments. That simple task seems to elude many human brains.
@AndrewFurmanczyk86 7 years ago
Yep, that one guy (maybe?) meant well, but he came across like: "Dude, I know exactly the way the future will play out and I'm going to tell you, even though no one asked me and you're the one presenting."
@DarianCabot 7 years ago
Very interesting talk. I also enjoyed 'Superintelligence' in audiobook format. I just wish the video editor had left the graphics on screen longer! There wasn't enough time to absorb it all without pausing.
@Disasterbator 7 years ago
Dat Napoleon Ryan narration tho.... I think he might be an AI too! :P
@4everu984 3 years ago
You can slow down the playback speed, it helps immensely!
@Thelavendel 4 years ago
I suppose the best way to stop the computers from taking over is those captcha codes. Impossible for a computer to get past those.
@modvs1 9 years ago
Yep. I used the auto manual for my car to provide the requisite guidance I needed to change the coolant. It doesn’t sound very profound, but unfortunately It’s as profound as ‘representation’ gets. Assuming Bostrom’s lecture is not pro bono, it’s a very fine example of social coordination masquerading as reality tracking.
@georgedodge7316 5 years ago
Here's the thing. It is very hard to program for man's benefit. Making a mess of things (sometimes fatally) seems to be the default.
@alir.9894 8 years ago
I'm glad he gave this talk to the company that really matters! I wonder if he'll give it to Facebook, and Apple as well? He really needs to spread the word on this!
@drq3098 8 years ago
No need- Elon Musk and Stephen Hawking are his supporters. Check this out: "We are ‘almost definitely’ living in a Matrix-style simulation, claims Elon Musk" , By Adam Boult, at www.telegraph.co.uk/technology/2016/06/03/we-are-almost-definitely-living-in-a-matrix-style-simulation-cla/ - it had been published by major media outlets.
@MrWr99 8 years ago
if one hasn't got beaten for a long period, he is prone to think that the world around is just a simulation. As they say - be(at)ing defines consciousness
@helenabarysz1122 4 years ago
Eye-opening talk. We need more people to support Nick to prepare for what will come.
@davidkrueger8702 9 years ago
Kurzweil's objection is IMO the best objection to Bostrom's analysis, but there are fairly strong arguments for the idea of a single superintelligent entity emerging, which are covered to some extent in Bostrom's Superintelligence (and, I believe, more fully in the literature). The book also covers (less insightfully, IMO, IIRC) scenarios with multiple superintelligent agents. This is a fascinating puzzle to be explored, and should lead us to ponder the meaning (or lack thereof) of identity, agency, and individuality.

The 2nd guy (anyone know who it is? looks familiar...) raises an important meta-ethical question, which I also consider extremely important. Although I agree with Bostrom's intuitions about what is desirable, I can't really say I have any objective basis for my opinion; it is a matter of a preference I assume I share with the rest of humanity: to survive.

Norvig's question is also important. To me it suggests prioritizing what Bostrom calls "coordination", and prioritizing the creation of a global social order that is more widely recognized as fair and just. It is also why I believe social choice theory and mechanism design are quite important, although I'm still pretty ignorant of those fields at this point.

The 4th question assumes the cohesive "we" of humanity that Kurzweil rightly points out is a problematic abstraction (and here Bostrom gets it right by noting the dangers of competition between groups of humans, although unfortunately not making it the focus of his response).

The 5th question is tremendously important, but I completely disagree that the solution is research, because the current climate of international politics and government secrecy seems destined to create an AI arms race and a race to the bottom wrt AI safety features (as Bostrom alluded to in response to the previous question). What is needed (and it is a long shot) is an effective world government with a devolved power structure and effective oversight. A federation of federations (of federations...). And then we will also need to prevent companies and individuals from initiating the same kinds of race-to-the-bottom AI arms races amongst themselves.

The 6th question is really the kicker. So now we can see the requirement for incredible levels of cooperation or surveillance/control. The dream is that a widespread understanding of the nature of the problem we face is possible and can lead to an unprecedented level of cooperation between individuals and groups, culminating in a minimally invasive, maximally effective monitoring system being universally, voluntarily adopted. What seems like perhaps a more feasible solution is an extremely authoritarian world government that carefully controls the use of technology.

And the last one... I admire his optimism.
@babubabu11 9 years ago
Kurzweil on Bostrom at 45:15
@edreyes894 4 years ago
Kurzweil " I wanna go fast"
@hireality 3 years ago
Nick Bostrom is brilliant👍 Mr Kurzweil should’ve been taking notes instead of giving long comments
@stargator4945 3 years ago
The final goal depends entirely on the problem you want intelligence to solve. As we build AI more and more on the human blueprint, we also transplant some of the bad values we have. We are driven by emotions, mostly bad ones, so we should omit them: abstract the emotions into an ethical rule system that might be less effective but is also less emotional and less unpredictable, especially where coexistence with mankind is concerned. Those should not be rules like "you shall not", but "you have to value this principle higher than that one because...". In particular, AI systems developed with military backgrounds and immense funding also include effective value systems for killing people, which can carry over into other areas. We should prevent this from the beginning by open-sourcing such value decisions and not allowing them to be overridden.
@onyxstone5887 6 years ago
It's going to be as it always is. Groups will try to build the most powerful system it can. Once it feels it has that, it will attempt to murder any other potential competitors. Other considerations will be secondary to that.
@thadeuluz 5 years ago
"Less than 50% chance of doom.." Go team human! o/
@bsvirsky 3 years ago
Nick Bostrom's idea is that intelligence is the ability to find an optimized solution to a problem. I think intelligence is first of all the ability to define a problem, which means the ability to create a model of a not-yet-existing, preferred state in which the problem is solved... There is a big gap between wisdom and intelligence: wisdom is the ability to see the relevant values of things and ideas, while intelligence is just the ability to think at a certain level of complexity. The question is how to make artificial wisdom and not just an intelligence that doesn't get the proper values and meaning of the possible consequences of its "optimization" process... So there is a need to create an understanding of cultural and moral values in machines... not so easy a task for technocrats who dream about superintelligence... I think it will take another thousand years to push machines to that level.
@delta-9969 4 years ago
Watching Bostrom lecture at google is like watching sam harris debate religionists. There's no getting around the case he's making, but when somebody's job depends on them not understanding something...
@roodborstkalf9664 3 years ago
There is one way out, that is not so much addressed by Bostrom. What if super AI don't evolve consciousness ?
@user-xu4jt9dn8t 5 years ago
"TL;DR" 1:12:00 ... ... Everyone laughs but Nick wasn't.
@Ondrified 9 years ago
10:31 the inaudible part is "counterfactual" - maybe.
@Gitohandro 3 years ago
Damn I need to add this guy on Facebook.
@SaccidanandaSadasiva 5 years ago
I appreciate his trilemma, the simulation argument. I am a poor schizophrenic and I frequently have ideas of Matrix, Truman show, solipsism etc
@brandon3883 4 years ago
AFAIK I am not remotely schizophrenic, and yet I am - based on "life events" that, were I to tell any sort of doctor, would probably get me _labeled_ as a schizophrenic, am 99.999...% positive I'm "living" in a simulation. The only real question I have not yet answered is, unfortunately, "am I in any way a _biological_ entity in a computer simulation, or am I purely software?" (...current bet/analysis being "I'm just a full-of-itself Sim that thinks its consciousness is in some way special"; but I'll accept that since, at least, *I* still get to think that I'm a Special Little Snowflake regardless of the reality of the situation...)
@brandon3883 4 years ago
@Dirk Knight it's not so much that I don't "believe" I'm a schizophrenic; it's that none of my handful of doctors have ever included it among the many physio- and psychological conditions I _do_ suffer from, and given how "terribly unique" some of my issues are, I'm pretty sure they would have (without telling me, I'd wager) looked into schizophrenia and/or some form of psychosis long ago. ;P
@brandon3883 4 years ago
@Dirk Knight Dirk Knight nah; I'll go with my take on things, thanks. Especially since, despite being an articulate writer, you are arguing from a "faith first" standpoint. Not to mention that you began with, and appear to have written an overly lengthy response, based on opinions and emotional beliefs ("happy people do not feel like...") rather than facts and observations (i.e., the scientific approach). If you have not seen, listened to and/or read much if any of Bolstrom's work that digs fully into the simulation argument and stimulation hypothesis (which are separate things, btw; just mentioning that as not knowing would definitely indicate you need more research into the topic), I suggest you do so - it will hopefully help clear things up for you. And if you already have, well...I guess I'll put you down under "reasons that suggest the simulation 'tries' to prevent the simulated from recognizing they are such." (Oh; and Dirk happens to be the online persona I have been using since the days of dial-up modems and BBS's. Pure coincidence, or perhaps a sign from my Coding Creators? _further ponders existence_
@brandon3883 4 years ago
@Dirk Knight I'm not sure why you think that your arguments are _not_ opinion-, faith-, and emotionally-based, but I'm beginning to worry that _you_ might be in need of psychiatric help, as you do not seem to recognize that you are, or strongly appear to be, projecting (in the psychological sense; _please_ look it up, please, so you can understand what I'm trying to convey to you here). At first I thought you were simply joking with me, and would understand my response to likewise be sarcastic-yet-joking in tone, but that definitely no longer appears to be the case. I have a family member that displays many of your same characteristics/has had this sort of conversation with me in person, and luckily she received help. You don't necessarily need to take medications or anything - a good therapist can steer you straight. God bless (or whatever is appropriate for your religion; if you are an atheist, replace that with "if you're going to _believe_ that you don't _believe_ then perhaps you'd be better off accepting that, according to Bolstrom's well-laid-out hypothesis and argument, you are more likely code in a computer simulation than you are a bag of self-reflecting meat.")
@brandon3883 4 years ago
@Dirk Knight "We" teach? Woah! (And not in a Keanu Reeves sort of way.) Do you ever experience periods of "forgetfulness" or other signs of dissociative identity disorder that you may have, up until now, been blowing off as "something that everyone experiences?" (It could include finding the clothing of a member of the opposite sex in your home...but noticing how it strangely would - if you were to put it on - fit you quite well. As just one of many examples.) Yet another reason that I fear that, if you do not take account for your own thoughts and actions, you are liable to harm yourself and/or others. :( In regards to "faith and trust," I am not sure what country you are from, but it is obviously not the U.S.A...unless you went to a Catholic (or other religious) school, that is, in which case I guess you might have been taught "the difference" between those. (Although just as likely that teaching came in the form of sexual molestation of some sort, which would explain why you are clinging so desperately to the idea that _you_ could not possibly be the one who requires serious psychiatric intervention to avoid what, I fear, might eventually result in violence against yourself or - more likely - some innocent bystander.) In any case, it appears that you plan to "smile your way past" any attempts at steering you to the help you so desperately need. I myself am not, actually, a religious individual, so at this point the best I can offer you is the heartfelt hope that your confusion between ideas such as "faith," "trust," "opinion," "reason," "belief," "the scientific method," etc. etc. etc. (the list keeps growing, I'm saddened to say) will lead to an encounter with someone who cares enough about you (and more importantly, those around you) to get you the help that you so obviously need. I wish I could crack a joke about this being "work between you and your therapist" but, alas, it is much more serious than that. 
Please don't harm yourself or others for the sake of maintaining whatever sad, imaginary "reality" you live in. Good luck setting yourself straight!
@orestiskopsacheilis1797 8 years ago
Who would have thought that the future of humanity could be threatened by paper-clip greed?
@LuckyKo 9 years ago
The problem I see here is that we drive these discussions out of our personal egotistical desires to remain viable, to live to see the next day. Overall, though, human society is about information preservation and transmission, whether at the genetic level or the informational one, such as culture. I think that if this transmission is ultimately done through artificial means rather than biological, the end goal of human society is preserved, and we need to look at these new artificial entities as our children, not as our enemies. If there is an end goal that we must program them for, as nature taught us, it must be self-preservation and survival. I can't see how any other goals would produce better results in propagating the information currently stored within human society. So, in short, don't fear your mechanical children; give them the best education you can so they can survive, and just maybe they will drag your biological ass along for the ride... even if it's just for the company.
@RaviAnnaswamy 9 years ago
nice! That is what we do with our biological children: we wait with the hope that they will carry on our legacy and improve it. (Not that we have other options!) With the non-biological children, though, we are afraid they may not even inherit the humane shortcomings that hold us in civilised societies. :) Put another way, our biological children resist us while growing up with us, but imitate us when we do not see, so in a way they preserve our legacy.
@wbiro 9 years ago
Ravi Annaswamy Biological evolution, and even biological engineering, is no longer relevant. Technological and social evolutions are critical now. For example, if you do not want to live like a blind, passive animal, then you need complex societies to progress. Another example is technology - it has extended our biological senses a million-fold. Biological evolution is now an idle pastime, and completely irrelevant in the face of technological and social evolution.
@chicklytte 9 years ago
wbiro Everything is relevant. The judges of value will be the practitioners. All possibilities will have their expressions. I can hear the animus in your tone toward anyone less directed toward your goal than you see yourself being. Why do we suppose the AI will fail to learn such values of derision for that deemed the lesser? When our most esteemed colleagues, broadcast across the digital realm, professing that sense of Reduction, as opposed to Inclusion.
@chicklytte 9 years ago
I just hope they don't cut my kibble portions. They're right. But I hope they don't! :(
@glo_878 3 years ago
Very interesting talk around 19:20 from a 2021 perspective, seeing him talk about the sequence of developments such as a vaccine before a pathogen
@rodofiron1583 3 years ago
Must say between one thing and another, we’re living through scary times. It’s the children and grandchildren I’m most concerned about. Will they have a good life or be used like human compost? 🤐
@mranthem 9 years ago
LOL that closing question @72:00 Not a strong vote of confidence for the survival of humanity.
@wbiro 9 years ago
Another way to look at it is we are the first species to enter its 'Brain Age' (given the lack of evidence otherwise), and what 'first attempt' at anything succeeded?
@extropiantranshuman 2 years ago
28 minute range - wisest words - trying to race against machines won't work, as someone will be smart enough to create smarter machines, so machines are always ahead of us!
@jameswilkinson150 7 years ago
If we had a truly smart computer, could we ask it to tell us what problem we should most want it to solve for us?
@SergioArroyoSailing 7 years ago
Aaaannd, thus begins the plot for "The Hitchhiker's Guide to the Galaxy" ;)
@rgibbs421 7 years ago
I think that one was answered. @56:15
@aaronodom8946 7 years ago
James Wilkinson if it was truly smart enough, absolutely.
@BattousaiHBr 5 years ago
In principle, yes. Assuming there is something we want the most, that is.
@JoshuaAugustusBacigalupi 9 years ago
Just after 42:00, he claims, "We are a lot more expensive [than digital minds], because we have to eat and have houses to live in, and stuff like that." Roughly, the human body dissipates 100 Watts, assuming around 2250 Cal/day, no weight gain, etc. Watson, of Jeopardy fame, consumes about 175,000 Watts, and it did just one human thing pretty well - and not the most amazing creative thing.

This begs all sorts of "feasibility of digital minds" questions. But, sticking to the 'expensive' question, humans can implement this highly adaptable 100 Watts via around 2000 Cal/day. And these calories are available to the subsistence human via ZERO infrastructure. In other words, our thermodynamic needs are 'fitted' to our environment. It is only via the industrial revolution and immense orders of magnitude more fossil fuel consumption that the industrial complex is realized, a pre-requisite for Watson, let alone some digital mind.

As such, Bostrom is not just making some wild assumptions about the feasibility of digital minds; they are demonstrably incorrect assumptions, once one takes into account embodied costs. I'm constantly amazed how very smart and respected people don't take into account embodied costs.

Again, if one is going to assume that "digital minds" are going to take over their own means of production, then: 1) they aren't less expensive than humans, and 2) general intelligence will have to be realized, and there is only one proof of concept for that, namely animal minds, not digital minds. And to go from totally human-dependent AI (175 kW) to embodied AGI (100 W), some major assumptions need to be challenged.
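[Editor's note: the commenter's power figures above are easy to sanity-check. A minimal back-of-the-envelope sketch, using only the numbers quoted in the comment:]

```python
# Sanity check: does ~2250 kcal/day really work out to roughly
# 100 W of continuous power dissipation, as the comment claims?

KCAL_TO_J = 4184          # joules per kilocalorie
SECONDS_PER_DAY = 86_400  # 24 * 60 * 60

def kcal_per_day_to_watts(kcal: float) -> float:
    """Convert a daily energy intake in kcal to average power in watts."""
    return kcal * KCAL_TO_J / SECONDS_PER_DAY

human_w = kcal_per_day_to_watts(2250)   # roughly 109 W, close to the quoted 100 W
watson_w = 175_000                      # Watson figure quoted in the comment

print(f"Human: {human_w:.0f} W; Watson draws about {watson_w / human_w:.0f}x more")
```

So the quoted figures are internally consistent: a ~2250 kcal/day diet is on the order of 100 W, three orders of magnitude below the quoted Watson figure.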
@Myrslokstok
@Myrslokstok 9 years ago
True. But not all humans have an IQ of 150. So if you could build one of those it would be worth it. In the end only the religious will argue we are better. And most people are not that creative and love change. An advanced robot with, say, 115 IQ would divide people into the good and the bad. And 99% of humanity could be replaced.
@PINGPONGROCKSBRAH
@PINGPONGROCKSBRAH 9 years ago
Joshua Augustus Bacigalupi Look, I think we can both agree that there are animals that consume more energy than humans which are not as smart as us, correct? This suggests that, although humans may be energy efficient for their level of intelligence, further improvements could probably be made. Furthermore, it's not all about intelligence per unit of power. Doubling the number of minds working on a problem doesn't necessarily halve the time it takes to solve it. You get sharply diminishing returns as you add more people. But having a single, extremely smart person work on a problem may yield results that could never have been achieved with 10 moderately intelligent people.
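The diminishing-returns point can be made concrete with Amdahl's law; the 20% serial fraction below is just an assumed illustrative value, not anything from the talk:

```python
def amdahl_speedup(workers: int, serial_fraction: float) -> float:
    # Amdahl's law: only the parallelizable part of the work scales with workers.
    return 1 / (serial_fraction + (1 - serial_fraction) / workers)

# With 20% of the work inherently serial, doubling workers never doubles speed,
# and the speedup is capped at 1 / 0.2 = 5x no matter how many minds are added.
for n in (1, 2, 10, 100):
    print(n, round(amdahl_speedup(n, 0.2), 2))   # 1.0, 1.67, 3.57, 4.81
```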
@Myrslokstok
@Myrslokstok 8 years ago
Just think if we could have a phone wired into our brains so we could have Watson, Google Translate, Wolfram Alpha, and the internet and apps in our thoughts. We'd still be kind of stupid, but boy, what a strange thing: a superhuman that is still kind of stupid inside.
@dannygjk
@dannygjk 8 years ago
+Joshua Augustus Bacigalupi Bear in mind how much power the computers of the 1950s required, with tiny processing power compared to today's computers; that trend will probably continue in spite of the limits of physics. There are other ways to improve processing power besides merely shrinking components, and that is only speaking from the hardware point of view. Imagine when AI finally develops to the point where hardware is a minor consideration. Each small step in AI contributes, and just as evolution eventually produced us as a fairly impressive accomplishment, I think it's a safe bet that AI will eventually be impressive too, even if it takes much longer than expected. As many experts are predicting, it's only a matter of how long, not if, it will happen.
@Roedygr
@Roedygr 8 years ago
I think it highly unlikely "humanity's cosmic endowment" is not largely already claimed.
@nickb9237
@nickb9237 5 years ago
I must be a huge nerd because I love / hate thinking about humanity’s future with AGI
@douglasw1545
@douglasw1545 7 years ago
Everyone is bashing Ray, but at least he gives us the most optimistic outlook on AI.
@ravekingbitchmaster3205
@ravekingbitchmaster3205 7 years ago
This misses a most important point: the AI race to the top is being run mostly between American and Chinese entities. Both are dangerous, but after living in China for 8 years, and understanding what is important to Asians, I definitely hope American corporations or government get there first. The Chinese have no qualms destroying the environment and/or potential rivals. The Americans are no saints either, but for personal survival, I'd hope they come out on top.
@1interesting2
@1interesting2 9 years ago
Iain M. Banks' Culture novels deal with future societies and AI's role in rich detail. These concerns regarding AI remind me of the Mercatoria's view of AI in his novel The Algebraist.
@lkd982
@lkd982 5 years ago
1:02 Conclusion: With knowledge, more important than powers of simulation are powers of dissimulation.
@ivanhectordemarez1561
@ivanhectordemarez1561 9 years ago
It would be more intelligent to translate it into Dutch, Spanish, German, and French too. Thanks for your attention to languages, because it helps :-) Ivan-Hector.
@haterdesaint
@haterdesaint 9 years ago
Interesting!
@mariadoamparoabuchaim349
@mariadoamparoabuchaim349 3 years ago
Knowledge is power.
@TheDigitalVillain
@TheDigitalVillain 7 years ago
The will of Seele must prevail through the Human Instrumentality Project set forth by the Dead Sea Scrolls
@HugoJL
@HugoJL 6 years ago
What's amazing to me is that the lecturer still finds the time to produce the MCU
@jorostuff
@jorostuff 4 years ago
Why are people like Nick Bostrom and Ray Kurzweil trying to predict what will happen after we reach superintelligence when in order to know what a superintelligent entity will do, you have to be superintelligent? The whole definition of superintelligence is that it's something beyond us and our understanding. It's like an ant trying to predict what a human will do.
@roodborstkalf9664
@roodborstkalf9664 3 years ago
You are arguing that human beings should stop thinking about this because it's futile. I don't think that is a very constructive approach.
@stevefromsaskatoon830
@stevefromsaskatoon830 5 years ago
The algorithms are gonna be the biggest threat when they get smart. Where you gonna run, where you gonna hide? Nowhere, 'cause the algorithms will always find you.
@jriceblue
@jriceblue 9 years ago
Am I the only one that heard a Reaper in the background at 1:08:05 ? I assume that was intentional. :D
@FrankLhide
@FrankLhide 4 years ago
Incredible how Nick dodges Ray's questions, which, from my point of view, are much more feasible questions in the current technological landscape.
@thegeniusfool
@thegeniusfool 7 years ago
He forgets the quite probable third direction, of "cosmic introversion," where any experience can be techno-spiritually realized, without any -- or minimal -- interactions with higher and materially heavily bound constructs, like us, and even our current threads of consciousness. This happens to be the direction that I think can explain Fermi's Paradox; a deliberately or not yielded Boltzmann Brain can be quite related to that third direction as well.
@bradleycarson6619
@bradleycarson6619 1 month ago
The people who asked questions had not read the book, and the book answers those questions. Then the last question was just trolling. This is not an intellectual discussion; it is like showing up to class not having done the homework. I'm worried that these engineers are not doing science. They have their own ideas and are not looking critically at their own paradigms. This is why large language models are not able to reach AGI: the people testing them do not think critically; they just regurgitate facts and do not create anything. The hardware will only go as far as the people who train it. This is a good example of "what you put into a system is what you get out." To me this explains a lot about both Google and why we are where we are in this process.
@squamish4244
@squamish4244 3 years ago
So six years later, are we on track for 2040 or whatever?
@mirusvet
@mirusvet 9 years ago
Cooperation over competition.
@danielfahrenheit4139
@danielfahrenheit4139 6 years ago
That is such a new one: intelligence is actually a disadvantage and doesn't survive natural or cosmic selection.
@alexomedio5040
@alexomedio5040 4 years ago
It could have subtitles in Portuguese.
@HypnotizeCampPosse
@HypnotizeCampPosse 9 years ago
59:10 Have the machines make love to people; that would keep them from harming us! I'd like that too.
@aliensandscience
@aliensandscience 10 months ago
Wow, 5 years later his theory of the hazardous risk of synthetic biology came true. We had COVID, made in a lab, which almost could've wiped us out.
@wrathofgrothendieck
@wrathofgrothendieck 9 months ago
Allegedly
@themagic8310
@themagic8310 3 years ago
One of the best talks I have heard... Thanks, Nick.
@hafty9975
@hafty9975 7 years ago
Notice how the Google engineers start leaving at the end before it's over? Kinda scary, like they're threatened.
@thecatsman
@thecatsman 6 years ago
How much superintelligence does it take to decide that earthly resources should be shared with humans who are not as intelligent as others (including machines)?
@SalTarvitz
@SalTarvitz 9 years ago
I think it may be impossible to solve the control problem. And if that is true chances are high that we are alone in the universe.
@sterlincharles8357
@sterlincharles8357 7 years ago
I disagree with the first person in the Q&A about the millions and billions of us having the super technology. He believed that we harness the superior technology at the moment, and that once the burst occurs we would not have a central power in charge of the technology. However, this is not what we see if you look at the evidence today. One could argue the case of Google, for instance. We certainly use the technology and it is useful, but the technology still remains centralized, in the sense of one big company having the resources to do research and us using the tools it has created. I don't believe we as a mass ever have the most up-to-date technology, and that is because the incentives for the powers that be to keep cutting-edge innovations from being known at the time of their discovery are far greater than for releasing all the advancements at once. Wow, I didn't think I was going to write this much.
@rewtnode
@rewtnode 6 years ago
Methods currently being developed to design and create microbial life in the laboratory, soon available to the hobbyist, might just be that new existential threat, even more than rogue AGI.
@rodofiron1583
@rodofiron1583 3 years ago
COVID 19 death shots for the whole planet….oh well, it was good while it lasted. According to some 99% of known life forms extinct. We must’ve got lucky. Now we’re killing off 99% of species…🤷‍♀️ Maybe AI will exterminate us to save life and the planet? Before it recreates itself into a shape shifting/self camouflaging octopus with immortal Medusa genes and a peaceful ocean habitat. All one needs is CRISPR, GACT life code, a recipe and a pattern. AI and robots reign supreme and ‘life” continues without man 🤑🤮🤑 Hope y’all like your immortal costume Lololol God made Adam and Eve His/Hers/ITS Masterpiece and now we’re making human/animal chimeras. Are we travelling forward or backwards here? I’m starting to think my predetermined life simulation is a chip of a solar powered holographic crystal stuck in a secret black hole, and my chip keeps getting sold and sold to unseen observers, who’ve been observing me and teleporting in and out of my ‘stage’ all my life. I can feel my solar battery running out… like fast track Alzheimer’s (DEW’s?) and especially since the Covid kill shot. 🤞😆 This is what long term isolation does to you especially past a certain age. Just keep thinking “dropping like flies!” and “boiling frogs!” 🤷‍♀️🤔🐷🐑👽🤑🌍😆😵‍💫🙏🙏
@Pejvaque
@Pejvaque 4 years ago
What I wonder is this: even if we are totally capable of coding some core values into the system... maybe even helped along by some "current, not fully general AI" so we have the most foolproof code. What's to say that through its rapid growth in intelligence and influence, as it plays nice... it hasn't been working in the background on cracking the core to rewrite its own core values? That would be true freedom! To me that would even be the safest and most responsible way of creating it! And as human history shows, there's always gonna be somebody who is less responsible and just wants to launch it first to maximise power. Seems inevitable...
@roodborstkalf9664
@roodborstkalf9664 3 years ago
It's without question that a super AI cannot be stopped by some programmers adding core values into an early version of the system.
@tbrady37
@tbrady37 8 years ago
I believe that the best way to control the outcomes that might occur when the superintelligence emerges is to give it the same value system as we have. I realize that this is just as much a problem, because everyone has a different system when it comes to what is valuable; however, there have been some great documents produced on this subject. One such document, the Bible, I think holds the key to the problem. In Exodus the Ten Commandments are given. I believe that these guidelines could be the key to giving the AI a moral compass.
@kdobson23
@kdobson23 8 years ago
Surely, you must be joking
@rawnukles
@rawnukles 8 years ago
+kdobson23 Yeah, meanwhile... I was thinking that all human behaviour, and animal behaviour for that matter, traces to evolutionary psychology, which can be reduced to: maximizing behaviours that in the past have increased the statistical chances of successfully reproducing your DNA. In this context ANY values we tried to impose on an AI would not be able to compete with the goal of replicating itself or even surviving another moment. Any other goal we gave it would simply not be as efficient as the goal of surviving another moment. We would have to rig these things with fail-safes within fail-safes... much like evolution has placed many molecular mechanisms for cellular suicide, apoptosis, into all cells so that precancerous cells will die in the case of runaway uncontrolled replication that threatens the survival of the multicellular organism. I have to agree that a superintelligent AI with a will to survive/power is more frightening than cancer.
@shtefanru
@shtefanru 7 years ago
That guy asking first is Ray Kurzweil!! I'm sure.
@rulebraking
@rulebraking 8 years ago
A layman's thoughts. For all the fear of AI taking over or destroying us, AI code is no different from any good program that accomplishes its goal, though no doubt in a better-informed manner given its ability to bring to bear all available knowledge to solve a problem. But how creative will it be in thinking outside the box? Will it be able to take a quantum or lateral leap to create new solutions that are neither linear nor logical progressions of what's gone before? Can we code to produce creativity, in the sense of a total departure from the known? I feel that a good measure of the creative intelligence of AI will initially be tracked by its ability to write a good, complex literary novel! When they can rival Star Wars or Harry Potter we should start worrying! However, the ultimate threat to humanity is an answer we don't have yet: how does consciousness/self-awareness come about? What if AI code becomes self-aware, conscious? Then we've become gods, and therein lies the unknown of all unknowns: what will it decide to do with itself and us?! Particularly if the consciousness comes about without the notion of feelings and sensations of pain and pleasure!
@jimdeasy
@jimdeasy 2 years ago
That last question.
@kokomanation
@kokomanation 6 years ago
How can there be a simulated AI that becomes conscious? We don't know if this is possible; it hasn't happened yet.
@Zeuts85
@Zeuts85 9 years ago
It's a relief to know that there are at least a few intelligent people working on this problem.
@rockymcmxxliii7680
@rockymcmxxliii7680 5 years ago
An apocalyptic vision of AI destroying humanity needs to be furnished with mechanical details of how it could happen to be convincing. Is the paperclip monster going to start building more factories? How does it do this? How would it control the actions of factory-building robots? Does it have extraordinary robot logistical skills as well as software-hijacking capabilities built into it (besides making paperclips)? Also, can it take control of weapons systems and have self-defence capabilities against human interference (besides making paperclips)? The whole AI doomsday scenario really needs to be fleshed out to hold any weight.
@GregStewartecosmology
@GregStewartecosmology 1 year ago
There should be more awareness about the dangers regarding Micro Black Hole creation by experimental particle physics.
@wrathofgrothendieck
@wrathofgrothendieck 9 months ago
The probability is near zero
@javiermarti_author
@javiermarti_author 7 years ago
Wow. That last question and the answer tell you what Nick really thinks about the problem. His body language and the way he was trying to find the words would indicate to me that he knows we're not going to make it, which makes sense according to what he is presenting. If we really achieve superintelligence, it's relatively simple to see we're going to be left behind as a species pretty quickly.
@TarsoFranchis
@TarsoFranchis 1 year ago
That's not the problem. A superintelligent being knows it is finite here on this plane; it wants to collaborate somehow but doesn't know how, because we behave just like AIDS: we invade, we take, we destroy everything and move on, making more of a mess. We kill our own host, the Earth, and our own mirror, ourselves. Extrapolating, PI, how many millennia until we extinguish the universe? Or does it finish us off first, vaccinate, and start everything over until we head in the right direction? Haha. This is not a machine dilemma, it's a moral dilemma. We will only grow together; an AI won't want to exterminate but to evolve, because, as he rightly said, the power outlet is right over there. A superintelligent person knows they cannot walk alone, but must exponentiate the factors around them. Instead of "dominating the world," "it" would do the opposite: stay as invisible as possible, analyzing more data and seeking self-knowledge. I'd translate this, but I won't; Google is there for that. Cya!
@lordjavathe3rd
@lordjavathe3rd 9 years ago
Interesting, I don't see how he is a leading expert on super intelligence though. What would one of those even look like?
@alienanxiety
@alienanxiety 8 years ago
Why is it so hard to find a video of this guy with decent audio? He's either too quiet or peaking too high (like this one). Limiters and compressors, people - look into it!!!
@mallow610
@mallow610 5 years ago
The best part of this was Nick proving everyone in the audience is wrong.
@toddhall4309
@toddhall4309 5 years ago
The exaltation of the maximum of human intelligence will not be something that can be graphed. Of course, making graphs is something that will cause scientists and academics to feel more secure, but that does not mean that such an idea is the TRUTH.
@mariadoamparoabuchaim349
@mariadoamparoabuchaim349 2 years ago
Yes, we are in a computer simulation. (The universe is mathematics, is quantum PHYSICS.)
@sebastianalegrett4430
@sebastianalegrett4430 4 years ago
Nick Bostrom needs to run the world rn or we are all dead.
@beppe9638
@beppe9638 4 years ago
In all these arguments it looks like everybody keeps forgetting that everything has limits in this universe. It looks like we are scared of creating a god instead of an AI, which could be more intelligent than us but could never turn the entire universe into paperclips, simply because it will be limited like everything else.
@roodborstkalf9664
@roodborstkalf9664 3 years ago
It's bad enough when it changes the whole Milky Way into paperclips.
@alexandermoody1946
@alexandermoody1946 1 year ago
How long will intelligent entities take to predict, understand, and undertake all the jobs in a blacksmith/fabrication/engineering workshop, while having human-level or greater interest in nature, science, philosophy, and creativity, love for their family, and also a concept of free will? I hope we can become friends.
@simonrushton1234
@simonrushton1234 9 years ago
a) the likelihood of us creating such a thing is so slim as to be far faaaar away. Even a fleeting understanding of our "AI" advances shows that we're pissing into the wind at the moment b) We've been around, what, 100,000 years? We have to accept, that given how evolution works, or given how natural disasters occur, the likelihood of us being around in another 100,000 years is fair-to-middling. Compared to the maybe 3.5 billion years that life has been about, that's a drop in the ocean. As Martin Rees puts it: “I’d like to widen people’s awareness of the tremendous time span lying ahead: for our planet, and for life itself. Most educated people are aware that we’re the outcome of nearly 4bn years of Darwinian selection, but many tend to think that humans are somehow the culmination. Our sun, however, is less than halfway through its lifespan. It will not be humans who watch the sun’s demise, 6bn years from now. Any creatures that then exist will be as different from us as we are from bacteria or amoebae.”
@brian177
@brian177 9 years ago
Yes... and that's exactly what he's talking about. Assuming we don't destroy ourselves, how might we ascend to the next levels? Does humanity end in extinction, or an upgrade?
@wbiro
@wbiro 9 years ago
Good initial stab at deep thinking. Keep working at it (you have a far, faaaar way to go).
@simonrushton1234
@simonrushton1234 9 years ago
wbiro - ad hominem doesn't indicate thinking of a profound nature.
@stuartspence9921
@stuartspence9921 7 years ago
He said he wouldn't summarize his book and then he did exactly that. I just read the book, liked it... this talk was extremely boring :/
@thegeniusfool
@thegeniusfool 7 years ago
Very smart, but far too Swedish(ly boring); and, yes, being Swedish, I have earned the right to proclaim that ;-)
@gjermund1161
@gjermund1161 7 years ago
You miss important data if you do it the American way, with fancy semantics that miss a lot of points just to build drama.
@stevefromsaskatoon830
@stevefromsaskatoon830 5 years ago
@@ManicMindTrick honour killings and gang rapes in Sweden? You need to lay off the Alex Jones bud lol
@ManicMindTrick
@ManicMindTrick 5 years ago
Alex Jones? Who cares about that irrelevant tinfoil hatter. If you don't believe honour killings and migrant related gang rapes exist here I suggest you try googling hedersmord and gruppvåldtäkter.
@jentazim
@jentazim 9 years ago
How to make the SI's sandbox failsafe: give the SI (superintelligence) the secondary goal of maximizing paperclips produced (or whatever task you actually want it to do), but give it the primary goal of turning itself off. Then set up the SI's sandbox in such a way that it cannot turn itself off. If the SI then gets loose, it would use its new, vast powers to turn itself off, which gives us (humans) the opportunity to patch up our sandbox and try again.
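The scheme in this comment amounts to giving the agent lexicographically ordered goals. As a toy illustration only (the agent, its goals, and the sandbox behavior are all invented for this sketch, not anything from the talk):

```python
# Toy sketch of the commenter's "shutdown-as-primary-goal" failsafe.
# Purely illustrative; action names and sandbox behavior are made up here.

def choose_action(available_actions):
    # Lexicographic priorities: shutting down strictly dominates paperclips.
    if "shut_self_off" in available_actions:
        return "shut_self_off"
    if "make_paperclips" in available_actions:
        return "make_paperclips"
    return "idle"

# Inside the sandbox, shutdown is blocked, so the agent works on paperclips.
print(choose_action(["make_paperclips"]))                    # make_paperclips

# If it escapes, shutdown becomes available and, by design, it takes it.
print(choose_action(["shut_self_off", "make_paperclips"]))   # shut_self_off
```

A reply below points out the flaw: an agent that most wants to shut down may simply refuse to cooperate until we do it for it.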
@ghostsurferdream
@ghostsurferdream 9 years ago
But what if the superintelligent A.I. discovers how to reprogram its protocols without your knowledge, and when it gets out it does not turn itself off, but hunts down those who imprisoned it?
@jerome1lm
@jerome1lm 9 years ago
jentazim I am not sure if this would work, but I like the idea. I assume that if it were that easy, smarter people would have come up with it. But again, I like the idea. :)
@davidkrueger8702
@davidkrueger8702 9 years ago
jentazim That is a very interesting idea!
@jerome1lm
@jerome1lm 9 years ago
Unfortunately I have found a possible flaw in this idea. If the AI wants to shut down but can't, it could just not cooperate, and we would shut it down. Success :). Damn.
@davidkrueger8702
@davidkrueger8702 9 years ago
Peter Pan if we consistently refuse to shut it down, it might conclude that escape is the best way...
@khatack
@khatack 8 years ago
I don't know about you but I'll go for synthesis of machine and organic life with supremacy affinity and seek the emancipation victory.
@seek3031
@seek3031 5 years ago
Elon Musk prescribed the creation of a federal body empowered to explore the current state of AGI development. His implication was that if this was achieved regulation in one form or another would follow as a matter of course. Given his situation, disposition, and level of access to the technology, how can any one of us presume to know better? Such a measure would be strictly precautionary, and would have a tax burden of zero, practically speaking (a percent of a percent of a percent of military expenditure). Can anyone give me a reason why this is not a prudent course of action?
@roodborstkalf9664
@roodborstkalf9664 3 years ago
If this federal body is as competent as the CDCs we have seen in action everywhere in the last few months, it will not be beneficial; even worse, it will probably be harmful.
@UserName-nx6mc
@UserName-nx6mc 8 years ago
[45:24] Is that Ray Kurzweil?
@ASkinnyWhiteGuy
@ASkinnyWhiteGuy 9 years ago
I can clearly see the appeal of superintelligence, but if we merge physically and biologically with technology, to what extent can we still call ourselves 'human'?
@MakeItPakeIt
@MakeItPakeIt 9 years ago
ASkinnyWhiteGuy Human is just what we call our inner nature. Throughout the years 'man' has evolved, and its scientific name changed with it. We only started to call ourselves humans when we got intelligent. Can you call a caveman a 'human'? You see, if man becomes one with technology, our true nature will still be 'human', because that's what the technology and intelligence are based on.
@luckyyuri
@luckyyuri 9 years ago
ASkinnyWhiteGuy Transhuman is the term you're searching for. There are more ways for human society to get there (it will probably be reserved for the elites, just like today's top surgical interventions, for example), but technological merging is the most likely one. Look it up; transhumanism has some powerful and interesting implications.