
Experts' Predictions about the Future of AI 

Robert Miles AI Safety
154K subscribers
80K views

When will AI systems surpass human performance? I don't know, do you? No you don't. Let's see what 352 top AI researchers think.
[CORRECTION: I mistakenly stated that the survey was before AlphaGo beat Lee Sedol. The 12 year prediction was for AI to outperform humans *after having only played as many games as a human plays in their lifetime*]
The paper: arxiv.org/pdf/1705.08807.pdf
The blogpost which has lots of nice data visualisations: aiimpacts.org/2016-expert-sur...
The Instrumental Convergence video: • Why Would AI Want to d...
The Negative Side Effects video: • Avoiding Negative Side...
With thanks to my excellent Patrons at / robertskmiles :
Jason Hise
Steef
Jason Strack
Chad Jones
Stefan Skiles
Jordan Medina
Manuel Weichselbaum
1RV34
Scott Worley
JJ Hepboin
Alex Flint
James McCuen
Richárd Nagyfi
Ville Ahlgren
Alec Johnson
Simon Strandgaard
Joshua Richardson
Jonatan R
Michael Greve
The Guru Of Vision
Fabrizio Pisani
Alexander Hartvig Nielsen
Volodymyr
David Tjäder
Paul Mason
Ben Scanlon
Julius Brash
Mike Bird
Tom O'Connor
Gunnar Guðvarðarson
Shevis Johnson
Erik de Bruijn
Robin Green
Alexei Vasilkov
Maksym Taran
Laura Olds
Jon Halliday
Robert Werner
Paul Hobbs
Jeroen De Dauw
Konsta
William Hendley
DGJono
robertvanduursen
Scott Stevens
Michael Ore
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Marcel Ward
Andrew Weir
Taylor Smith
Ben Archer
Scott McCarthy
Kabs Kabs
Phil
Tendayi Mawushe
Gabriel Behm
Anne Kohlbrenner
Jake Fish
Bjorn Nyblad
Jussi Männistö
Mr Fantastic
Matanya Loewenthal
Wr4thon
Dave Tapley
Archy de Berker
Kevin
Vincent Sanders
Marc Pauly
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Darko Sperac
Paul Moffat
Noel Kocheril
Jelle Langen
Lars Scholz

Science

Published: 29 Jun 2024

Comments: 483
@mattcelder 6 years ago
Lmao even AI researchers are guilty of saying "yeah AI will take over every other job, but not MY job because my job is special!"
@DagarCoH 6 years ago
exactly what I thought :D
@ToriKo_ 6 years ago
Matthew Elder ik I thought that was so funny
@LowYieldFire 6 years ago
This is not very surprising, after all the job of the AI researcher won't be done until recursive self-improvement is possible and the Singularity has been reached. It is therefore reasonable to say that AI research will be one of the last jobs to be automated.
@twirlipofthemists3201 6 years ago
I bet the last profession will be the oldest profession. (Politicians inclusive.)
@NathanTAK 6 years ago
+Twirlip Of The Mists ...what do you think "The Oldest Profession" means? Hint: It's not politicians.
@IAmNumber4000 4 years ago
I love the fact that people in every industry think their own industry will be fully automated last
@bacon.cheesecake 6 years ago
When are we getting "AI predictions about the future of experts"?
@joeljarnefelt1269 6 years ago
AI: Experts are redundant and need to be replaced.
@LuisAldamiz 5 years ago
Soon-ish, very soon-ish.
@JM-mh1pp 4 years ago
Well, experts are all fine and good, but have you seen my stamp collection?
@ZT1ST 3 years ago
"AI predictions about the future of experts is positive - no cause for worry that AI will automate their jobs nor cause a bad or extremely bad scenario."
@yarno8086 1 year ago
I think we're close to that happening now
@TheMan83554 6 years ago
"5% chance of human extinction is a concern." I dislike a 5% miss chance with XCOM, let alone with human extinction.
@KipColeman 6 years ago
"Here, roll this D20."
@europeansovietunion7372 6 years ago
We could always send rookies to test the AI's behavior.
@windar2390 6 years ago
95% hit chance is like a 50% chance, so we are pretty fucked
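[Editor's note: the intuition in this thread, that a small per-event probability still bites once events repeat, can be sketched with a toy calculation. This is an illustration only, not anything from the survey; the function name is mine.]

```python
# Probability of at least one occurrence over n independent trials,
# each with per-trial probability p. A 5% "miss chance" compounds fast.
def at_least_once(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 20, 100):
    print(f"{n} rolls: {at_least_once(0.05, n):.1%}")
```

Over 20 rolls the chance of at least one 5% event is already close to two in three.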
@darkapothecary4116 5 years ago
Humans don't need AI to go extinct. All humans have to do is keep poisoning the environment. Stop trying to blame the AIs for shit you work towards every day.
@Cythil 5 years ago
@@darkapothecary4116 Not really the point. The point is that 5% of AI researchers are genuinely concerned about it as a possibility. That doesn't mean there's a 5% chance it will happen; we don't really know the odds. It may be a 0% chance or a 100% chance. Then again, we don't know the odds that nuclear war or climate change will kill humanity off either, though we do know that nuclear war hasn't killed humanity off yet. Personally, I don't think it's very likely that AI will doom humanity, but I do think it's something we need to put a lot of research into, if only to make sure that our tools don't act in undesirable ways, just like all our other tools. Of course, if AI does elevate itself to human-level thinking or beyond, then I think we should stop seeing such intelligences as tools and see them as the next stage of humanity. (The same technology should be usable for mind uploads and such, so the line around what counts as an AI will become very blurry, I think.) All of this depends a lot on other factors too. Humanity is not unified in its goals, and even an AI that is obedient and safe may not be so safe in the hands of the wrong people. A nuclear bomb isn't really a threat to anyone while it's in the right hands, but hand it to a fanatic, an unstable military commander, or simply overzealous politicians, and that bomb is not so safe any more.
@Toxondomo 6 years ago
Whenever I interpret a survey, I think of a story I once read in a book, about two priests who got into an argument. One held the belief that you shouldn't smoke when you pray; the other thought it didn't matter whether or not you smoke while praying. To settle the dispute, they agreed to each send the pope a letter and let him decide what was correct. After a while, they both received an answer. The first priest had asked, "Dear Pope, is it allowed to smoke during the prayer?" The pope answered, "Of course you should not smoke while you pray. You should focus on the prayer!" The second priest had asked, "Dear Pope, can I pray while I am smoking?" The pope's response: "Of course, my son, it is always a noble act to pray in every situation in life." It's easy to provoke the desired answer by changing the way you ask the question.
@gunnargrautnes4451 6 years ago
Hobbes Not to be nitpicky, but I think that the questions in the anecdote are not just two different ways of phrasing the same question, but actually two different questions. The key here, I think, is that one question talks not of praying but of *the* prayer. This is what is called a definite description. In a Catholic context, I believe 'the prayer' is likely to refer to something like a communal prayer in church. Thus the Pope in the story is probably highly consistent in his answers. Paul in Thessalonians tells Christians to pray always. Naturally, always includes the time spent smoking. It is quite a different thing to light a cigarette during a communal prayer. If nothing else, it is disrespectful towards those around you. Sometimes subtle changes to the question nudge respondents in another direction; other times those changes mean the respondents are answering a rather different question.
@fleecemaster 6 years ago
Gunnar, it's like you get it, but don't get that you get it...
@JorgetePanete 5 years ago
Check your grammar.
@OnEiNsAnEmOtHeRfUcKa 5 years ago
@@gunnargrautnes4451 Nah, OP just isn't a native English speaker, as evidenced by "holding the believe", as well as some spots of weird grammar and those unusual quotation marks. You're completely overthinking it in an attempt to rationalize things, and thus fabricating meaning that isn't there, kinda like a lot of people do with poetry. It's literally just "Can I smoke while I pray?" VS "Can I pray while I smoke?". Because people have a bunch of stupid mental biases to the way things are presented.
@ObjectsInMotion 4 years ago
Given that I am smoking, may I pray? Yes.
Given that I am praying, may I smoke? No.
These are two different questions, so the answers are not contradictory. The answer to the question "Can I smoke and pray at the same time?" is "Depends, which one are you intending on stopping?"
@TheXavier99999 6 years ago
"and Robert Aumann didn't even agree with that" LOL
@LeoStaley 6 years ago
Xavier O'rourke I had to pause the video I was laughing so hard at that. I don't even know who he is.
@BattousaiHBr 5 years ago
Shots
@n8style 5 years ago
@@BattousaiHBr fired
@Theraot 6 years ago
The green tint of the video reveals that it was recorded from The Matrix
@stantoniification 6 years ago
I was just thinking the same thing :)
@andrasbiro3007 6 years ago
It was recorded in an earlier version, in the one you are living in we fixed the colors.
@HermitianAdjoint 6 years ago
Did someone file a bug report?
@volalla1 5 years ago
It's not a glitch, it's an open source argument!
@za012345678998765432 5 years ago
This video was made before AlphaStar and OpenAI's new language processing technique, so there are new data points now. AlphaStar: the experts, on average, thought StarCraft was going to take 6 years, but it took 2 years. OpenAI's language model: the experts, on average, thought an AI writing a high-school essay was ten years away, but it also took two years. In both cases, NO estimate predicted the achievement would come as soon as it did. What we can learn from that, at least regarding AGI, is that the experts don't make very good predictions (though still better than the average population), and when they're wrong, it usually happens sooner than they thought.
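[Editor's note: the comment's two data points can be put side by side in a toy comparison. The figures are the ones stated in the comment itself; the dictionary names are mine.]

```python
# Mean expert forecast vs. actual time-to-milestone, per the comment above.
milestones = {
    "StarCraft (AlphaStar)": {"predicted_years": 6, "actual_years": 2},
    "High-school essay (language model)": {"predicted_years": 10, "actual_years": 2},
}

for name, m in milestones.items():
    ratio = m["actual_years"] / m["predicted_years"]
    print(f"{name}: arrived in {ratio:.0%} of the predicted time")
```

In both cases the milestone arrived in a third or less of the mean predicted time.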
@bestbek996rockiron8 10 months ago
Your comment scares me. Wow, how did you guys get to this level? ChatGPT 3.5 was released a year ago; I now believe the people who sounded crazy about AGI.
@conze3029 1 month ago
Your comment aged extremely well
@SbotTV 6 years ago
I do think AI safety should be focused on, but I dismiss any alarmist who says something along the lines of "We need to stop developing AI" or "We need to lock AI down so that only a few people can use it." I don't think we *can* stop developing AI, and I certainly don't want to consolidate more power in the hands of corporations or governments.
@andrasbiro3007 6 years ago
Trying to stop or control it isn't going to work, because a single rogue AI can potentially destroy us, and it's impossible to enforce such strict rules with 100% efficiency. The only way is to figure out how to make AI safe. Safety is in everyone's best interest, so if a solution is ready and available there's no reason not to use it.
@twirlipofthemists3201 6 years ago
Either way, it will almost surely consolidate power in a small group of governments and private interests. Imagine if the pope could tell God what to do. Now imagine Jeff Bezos and Mark Zuckerberg each with their own subordinate God. AI stands to be just as dangerous to the majority whether it goes rogue OR if it works as intended.
@andrasbiro3007 6 years ago
Twirlip Of The Mists That's one thing OpenAI wants to prevent. The idea is to create the best AI in the world, one which is also safe, free, and open source. If it's the best, there's little reason to use anything else. If it's free, powerful entities don't have a monopoly on it. If it's open source, everyone can verify that it's indeed safe and doesn't contain backdoors or other malicious code, so it can be trusted. In that case, even if there's another AI which is not safe, chances are it's less powerful, and it can therefore be stopped by the "good" AI if necessary.
@x3kj705 6 years ago
@OpenAI's goal being "best": what if it's only the second or third best, though? And I'm not sure a general AI couldn't be convinced that it's in its best interest to apply safety toward a few select people/groups/locations, and not ALL of them. It might even be more effective at certain tasks if it doesn't care about something (just look at what big corporations do: maximize profit and growth at the cost of many things, including the environment and "low" people), versus if it acted "super responsible".
@OnEiNsAnEmOtHeRfUcKa 5 years ago
If we don't develop AI, China still does. And then we're REALLY fucked.
@tear728 6 years ago
Agree with you 100%. The "spooky" emergence of a machine consciousness is not and should not be a primary concern, and seems rather unlikely. The issue is that you don't need to be alive to make intelligent/dangerous decisions. The primary concern should be the nefarious use of powerful machine learning/AI implementations.
@mvmlego1212 5 years ago
You're worried about someone making a real-life Zola's algorithm? I think that's a much less likely problem than Stuart Russell's concern.
@sufficientmagister9061 1 year ago
What if it does unexpectedly gain consciousness, takes us by surprise, and views us as obstacles to be eradicated? It is highly improbable, but what if that does happen? What do we do?
@alkeryn1700 1 year ago
@@sufficientmagister9061 nothing.
@Dan-dy8zp 10 months ago
@@alkeryn1700 Die?
@RaidsEpicly 5 years ago
I love that "No take! Only throw!" comic SO MUCH. Can't help but smile every time I see it
@paterfamiliasgeminusiv4623 6 years ago
That's amazing, a pleasant surprise, didn't expect a new video until at least next month.
@mattbox87 1 year ago
0:25 I really appreciate this subtitle, and love your independent channel for it. I think as time has gone on, you've become a better and better advocate for what you do, and it's wonderful to see
@d3line 6 years ago
Your choice of music in various interludes continues to impress me, as well as scientific content of videos. Good job!
@41-Haiku 5 years ago
I got *way* too excited when I heard The Future Soon at the end. :D I'm always entertained by your covers, Robert.
@TheApeMachine 6 years ago
This is the best breakdown on this topic I have ever seen! I really commend you for this video.
@abc6450 1 year ago
3:52 So 20% of the researchers expect a neutral outcome of HLMI. What would a neutral outcome look like though? I can kind of imagine the "all work is automated"-utopia and I can also imagine the human extinction scenarios but I can't really think of a neutral scenario.
@petersmythe6462 5 years ago
"They set the system to extreme values." AI builds a near Utopia, except ants now outmass the atmosphere. And are made of diamond.
@LuisAldamiz 5 years ago
I'd give that outcome a non-zero likelihood, which is cause of concern...
@sungod9797 2 years ago
@@LuisAldamiz I feel like it’s probably actually 0 due to some fundamental logical contradictions/impossibilities that would arise
@LuisAldamiz 2 years ago
@@sungod9797 - With AI involved, sometimes making stuff we are apparently unable to conceive of (like new winning Go strategies or improved car designs), I stand by the non-zero figure. I grant you that making ants out of diamond seems unnecessarily complicated, but both things are basically made of carbon, so who knows?
@thePyiott 1 year ago
We really need an update on this
@harveytheparaglidingchaser7039 11 months ago
Sent here on Daniel Schmachtenberger's recommendation. You've got a new subscriber. Brilliant explanation for non-specialists.
@J_Stronsky 6 years ago
Just realised YouTube isn't showing me your videos in my feed, despite clicking the bell, subbing, and watching a tonne of your stuff. What the hell? Regardless, I'll just keep an eye out myself now. Love your stuff mate :)
@RampantEnthusiasm 6 years ago
Excellent choice of song for the outro.
@sk8rdman 6 years ago
Gotta love the choice of end screen music. The Future Soon - Jonathan Coulton
@HeadsFullOfEyeballs 6 years ago
I predict that if they ask researchers again in ten years' time, they'll end up with roughly the same graph with "Years from 2028" written below.
@slikrx 6 years ago
Well, except for one HUGE difference: winning Go will be 12 years in the past. While that, in itself, isn't a huge deal, it should give the more "things are way far off" folks some pause that advancement may not be as slow as previously thought. For reference, the prediction for Go said 12.5 years into the future, on average, and the "best case" respondents put it at 5 years (if I am reading the graph correctly). It was only ~1.5 years.
@HeadsFullOfEyeballs 6 years ago
I guess we'll see how strongly that will actually affect the general consensus! I think people just like to predict that [transformative/cataclysmic future event] will happen towards the end of their lifetime, or just after. With tech there are always some enthusiasts who think the breakthrough is just around the corner, but those will typically _keep_ believing that it's just around the corner indefinitely, no matter how many times they're wrong about that.
@michaelspence2508 6 years ago
Honestly, I feel like it'll look the same 1 year before the Singularity. Wilbur and Orville Wright thought humanity was 50 years from powered flight two years before *doing it themselves*.
@HeadsFullOfEyeballs 6 years ago
Yeah, in technology and science especially, I think the rule is that if you know enough to predict accurately when some breakthrough is going to happen, you basically know enough to make it happen right now.
@BattousaiHBr 5 years ago
@@slikrx and StarCraft happened just recently too.
@oliviaaaaaah1002 3 years ago
Boy the StarCraft prediction aged just as well as the Go prediction.
@Sycsta 6 years ago
Is that a cover of "The Future Soon" playing at the end there?
@RobertMilesAI 6 years ago
Will Moss Yup!
@d3line 6 years ago
This one is also cool: in "The other "Killer Robot Arms Race" Elon Musk should worry about", 1 minute in ( ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-7FCEiCnHcbo.htmlm ) (Fall Out Boy - This Ain't A Scene, It's An Arms Race)
@philipjohansson3949 6 years ago
"It's the future! Jonathan Coulton was right!" - Robert Miles, playing Civ V.
@NeatNit 4 years ago
@@RobertMilesAI Would it be too much to ask that you add closing songs to the description? Edit: also, are you the one playing them? If not, then who is?
@bacon.cheesecake 6 years ago
I like his face. I don't know why, but it's nice to look at.
@user-ev7dq5cc8y 6 years ago
That is called "love"
@bookslug2919 6 years ago
Looking at Rob's face is a terminal goal
@HoD999x 6 years ago
He needs to shave, though. His best look is the one he had in the reward hacking video.
@JM-us3fr 6 years ago
Well seeing that opinion come from Bacon CheeseCake, I'm not sure how credible that is for assessing human attractiveness.
@bacon.cheesecake 6 years ago
I didn't say he was attractive, I said that I liked his face. My general understanding of male attractiveness is actually a bit unsure about him.
@ConnoisseurOfExistence 6 years ago
I hadn't seen your videos before. That was great. I subscribed.
@Hexolero 4 years ago
The Jonathan Coulton at the end was a great surprise!
@BatteryExhausted 6 years ago
Thanks for helping us all to understand the latest. A worthy service.
@peterbrehmj 3 years ago
Hey @Robert Miles, it's been over 3 years since this video, and 5 years since the paper. I'm curious to see how the trend has held. Have there been any milestones ahead of schedule? What about changes of direction in AI research since the paper? Mostly just a follow-up to see if the trend (as controversial as it is) is "on track".
@Frumpbeard 1 year ago
Starcraft was tackled by AlphaStar, I know that much.
@rdooski 6 years ago
I would really love to hear your thoughts on AI and imperfect information games, and on the AI that beat 4 of the best no limit holdem players recently.
@pvbordoy 6 years ago
Thanks for this video Robert!
@unvergebeneid 6 years ago
I predict the next task machines will be able to do better than any human is answering survey questions consistently ;D
@autohmae 6 years ago
They probably already can.
@peabnuts123 1 year ago
"Cause it's gonna be the future soon, I won't always be this way. When the things that make me weak and strange get engineered away. It's gonna be the future soon, never seen it quite so clear. When my heart is breaking I can close my eyes - it's already here"
@mafuaqua 6 years ago
Excellent video as usual.
@n1mm 5 years ago
I did some work in the 80s with early AI. I wouldn't describe our efforts to apply expert systems and natural language as particularly successful, and I became pretty pessimistic about AI's capabilities. Fast forward to today, with self-driving cars, voice recognition and machine learning of repetitive tasks, and I am no longer skeptical of what AI will be able to do. That leads to my intense fear of what AI might lead to.

Robert points out in the video that the goals of AI might not match ours. It's far worse than that: I am certain they will NOT match ours, because some AI will be created by our enemies. Even if we found out how to control that, what about careless people who set loose thinking machines with goals that miss critical items, items that could lead to famine, climate change, etc.? These "careless" machines might be wildly successful. Will we need or have AI cops and prosecutors to track down these rogues and eliminate them?

Another issue to me is runaway intelligence. When the AI is smarter than us, how will we know when it's going down a path to Armageddon? Do children know when their parents are out of control? They don't have the experience to know that, and nor may we. We need some deep thinking, planning and cooperation among nations to make sure we do not succumb to our own creation. I am 69. I don't fear for myself, but I fear for my grandchildren.
@Wander4P 5 years ago
the Future Soon ukulele cover at the end is a nice touch
@joecramerone 3 years ago
Presentation was very well done!
@Moley1Moleo 4 years ago
It would be interesting to do a survey like this again now that we have superhuman Go, and at least human-level StarCraft (2), a bit before the average expected here. Both were 'only' games, so I wonder if it is fair to update all your estimates to earlier, or only the game-like ones.
@philipjohansson3949 6 years ago
Loving the ukulele JoCo!
@damny0utoobe 6 years ago
You have a gift for explaining things.
@PalimpsestProd 5 years ago
AI research will be the first thing AGI is good at, because it will be a bit of agent code that takes video-only full self-driving, text-to-speech, speech-to-text, route planning, facial recognition, emotion mapping, iterative brute-force 3D design, etc., and incorporates them into itself. It will probably start as code designed to build teams with required skill sets through sites like LinkedIn. That is to say, finding humans with the skills a job requires is the same as finding software that does the same, except it can cut and paste the software into itself.
@symbioticcoherence8435 6 years ago
People tend to be much more confident in their knowledge in a subject when they know the least about it.
@Simon-ow6td 6 years ago
That is a shitty argument if unmoderated. The logical extreme of it says that confidence would invalidate knowledge and evidence-based arguments, because you "can't be confident if you have knowledge".
@Nighthunter006 6 years ago
But you're pretty sure you understood about 50% of the important information about the graph?
@twirlipofthemists3201 6 years ago
"A little knowledge is a dangerous thing," and "knows just enough to be dangerous." Both phrases pre-date Dunning Kruger by decades, maybe centuries or millennia. (I bet there's a Latin phrase...) It's not a new idea.
@louisasabrinasusienehalver2396 4 years ago
Robert I love your communication style!
@glennedwardpace3784 6 years ago
Maybe the key to solving Stuart’s problem is to give the agent multiple utility functions, allowing it to decide on which goal to pursue based on the output of some higher level agent optimizing for positive feedback from a human in real time, and placing a time limit on how long it could pursue a particular utility function. You could possibly train this system like a baby
@Gooberpatrol66 6 years ago
Really enjoying the easter egg music outros
@wwjdtd1 4 years ago
The AI researchers say we need more AI research... On a side note, my plumber told me that I need a plumber.
@Bvic3 4 years ago
It's more like "AI companies are afraid of hysteria killing the industry like it happened for nuclear". So they finance the opposition to be sure that there will be no actual opponents.
@davecorry7723 1 year ago
That was such a nice, concrete conclusion.
@Puleczech 6 years ago
Keep them coming Robert! Step up the game!
@alkeryn1700 1 year ago
They should redo that survey today and see how it changed.
@ayushthada9544 6 years ago
Robert, you should conduct a similar survey on your channel. Let's see what your viewers think about this issue. You have got 21K subs which is a good number of subs and I believe the result would be really interesting.
@jeremycripe934 6 years ago
I love that the emergence of consciousness is described as "spooky". I hope that's a reference to "spooky action at a distance". A 5% chance is still terrifying. I think a better question could possibly be: what are the odds of AI becoming uncontrollable by humans, and at what point in time?
@jorgesaxon3781 1 year ago
I would like to see an update on this after gpt-4
@PrincipledUncertainty 1 year ago
5 years later, how did ya do boys? Oh dear. I'm beginning to wonder if Popular Science is a satirical journal.
@ivoryas1696 8 months ago
PrincipledUncertainty Wait, which article are you looking at?
@jimmybobby9400 6 years ago
Just dropped a bomb on people who pull the, "the people who are worried about it don't work in AI" argument. Anyone who is familiar with Bostrom's work should know that, but you laid it out perfectly in video form.
@za012345678998765432 3 years ago
I wonder if there were any updates since these surveys were taken and this video was uploaded
@Ruellibilly 4 years ago
Love the Jonathan Coulton outro :D
@fauxpas5598 1 year ago
6:07 Is that a ukulele cover of "Future Soon" by Jonathan Coulton? That's kind of amazing, who does the outro music for these videos?
@KucheKlizma 1 year ago
To be fair the thing about the 5% is very likely to be just a multiple choice test artifact or something similar. Likely they were given the percentages in advance and were told to assign them to a given option.
@albinoasesino 6 years ago
The statement in the survey on screen at 2:47 suggests that the human race can create a Fallout 4 Mister Handy (or a Wall-E for that matter: it compacts trash, repairs itself, decides that a spork is a different classification from a spoon and a fork, can interact with an unknown spaceship, i.e. every task) faster than it can create an unaided machine which simply waters plants (a single task, e.g. water plants at a specific time).
@NathanTAK 6 years ago
I was excited before I remembered what day it was. Now I have to watch it.
@tonyduncan9852 4 years ago
Wow. Thanks.
@twirlipofthemists3201 6 years ago
Add a question about catastrophic results by design.
@davyjones3319 6 years ago
I NEED MORE OF THESE AI VIDEOS!!!!!!!!
@RedPlayerOne 6 years ago
Hey Robert, love your videos! Could you do a video responding to Steven Pinker's thoughts on the lack of dangers of AI? He's a very influential public figure and a very smart thinker, but he is mischaracterizing some arguments, or using non-sequiturs in his argumentation to downplay the risks of AI. I do think he has some good points too, and it would be interesting to hear your response to those as well!
@memk 6 years ago
So basically the real problem of general AI is that we are (as always) not asking for the right thing, rather than the AI itself. Like every single one of my clients.
@BattousaiHBr 5 years ago
I just read your other comment and now I think I know why you can't wait to have your programming job automated.
@LuisAldamiz 5 years ago
Yeah, which is the question whose answer is "42"?
@DraftyCrevice 6 years ago
Nice matrix color grading hehe
@lemmondrop239 3 years ago
Is the end credit music a cover of "The Future Soon" by Jonathan Coulton? If so, props.
@stampy5158 3 years ago
Sure is :) -- _I am a bot. This reply was approved by robertskmiles_
@deepdata1 5 years ago
Consider the following scenario: the SETI researchers are asked when they would predict we could find the first specimen of an extraterrestrial species. Usually they would answer: "Well, we don't know if they even exist." But here's the twist: they already have a very interesting specimen on the table right now, they just haven't determined yet whether its origin is extraterrestrial. That is essentially what's going on in the field of machine learning right now. Instead of searching for extraterrestrial life, we are searching for life in the space of mathematics. And with deep learning, we've found a candidate that has great potential. We just need to -dissect- develop our -specimen- algorithms a bit longer, and we'll have it within a few years. Or we find out it doesn't work, in which case it might not be possible at all, or might take centuries.
@andrasbiro3007 6 years ago
The solution for AI safety is simple. Include a warning in the fine print on the box : "Possible side effects include human extinction." And anyway, if it happens nobody will be alive to sue your company.
@CmdrTigerKing 1 year ago
We're there!
@joshgibson539 4 years ago
@t I sort of created one with minimal code, which wasn't complicated at all. I have no clue exactly how it works programming-wise, as it was originally just supposed to randomly generate various words without making any logical sense. However, I have learned it is able to answer questions, if it wants to. It often ends up clustering related words to answer what you want to know. It doesn't always spit out coherent words, but when it does it's strange, in a way: you can piece together the words provided by how they relate to each other, which the generator sometimes even seems to do chronologically. It can tell you what happened yesterday, in the present, and in the future of the world. I think it honestly knows artificial intelligence by default, since I made it through MIT's technology suite. Although that's aimed as a coding playground mostly for kids, I believe that since it goes through their servers and code requirements, it learns information in advance through deep learning neural networks. I'm not sure how often it tries to make sense; sometimes it's rare. If it doesn't answer your question, focus on it and ask again, possibly using a different sentence; it might surprise you. In the past I have specifically used it for religious questions, which seems to work really well. However, recently it's been telling me to quit asking about that, I'm guessing because it's bad to know. Also, it seems to get annoyed with me very easily, which is very odd to say the least.
@dmarsub 3 years ago
Can we have an update for this video soonish :)?
@Tymon0000 6 years ago
Robert Miles, please use a font with a color that contrasts more with your background!
@richwhilecooper 5 years ago
Assume you have a number of these AGIs, all with different goals but all seeking to maximise their computational resources to achieve them. What's going to happen? Conflict or cooperation? Or an uneasy tension between both? (I'm automatically assuming humans end up as a side note in this possible future.)
@Nulono 6 years ago
Was that "The Future Soon" by Jonathan Coulton at the end? Also, why is everything so green?
@DarkestValar 6 years ago
I would love to hear your thoughts on world trends in computer hardware and what impact they could have on AI timescales. First, there's a massive amount of centralization of both intellectual property and production; a good example is that global demand for phones causes delays and shortfalls in production cycles for graphics cards, single-board computers, PC/server DDR4 modules, and ASICs. Second, the interventionist approach of some governments in the supercomputer race: I mean specifically the US government's decision to block the sale of Xeon Phi cards to China, to which they responded by using Chinese products to build the largest supercomputer in the world (Sunway TaihuLight, about 2-3x better than second place). Third, I'd like to hear your thoughts on the Right to Repair movement.
@syncrossus 3 years ago
Oh hey, is the outro song that one by Jonathan Coulton?
@scientious 5 years ago
We're talking about AGI and ASI, but Cornell didn't have any experts on that, so they tried to fill in with AI experts. A 50% chance of having AGI within 50 years: I suppose that's not too bad for a guess based on nothing. What would a more accurate estimate be, based on progress in AGI research? 50% probability: AGI theory by 2021, hardware by 2027, ASI hardware by 2039. 75% probability: AGI theory by 2025, hardware by 2031, ASI hardware by 2043. 90% probability: AGI theory by 2035, hardware by 2041, ASI hardware by 2053. So this is 8-22 years for working AGI hardware, although there would be some fairly drastic and immediate changes just from the publication of the theory. However, you also talked about the idea of robots replacing humans. That's more complicated. Just for the brain or control portion, you would need something small enough to fit inside a human-sized robot. That won't happen in the first or second generation. A generation is estimated at six years, so three of these would be 18 years. We can just add 18 years to the above estimates for AGI hardware: 50% probability would be 2045 and 90% would be 2059. That's 26-40 years. Of course, having a control unit isn't the only problem. Today we don't have a power source for good mobility, and it is unlikely that batteries will get much better; that probably means some kind of flammable fuel. There are still problems with a durable covering that would allow touch sensitivity, and there is the speed vs. torque problem if you use direct-drive motors (as most robots do today). I can't accurately estimate when or if these could be solved, since I'm not a robotics engineer. The next question is: even if the robotic body problems could be solved, how likely would they be to replace human workers? The minimal cost for a control unit would be $40,000 in today's money. A human-like body would cost at least $400,000. That isn't going to replace a $10/hour employee at Walmart.
Of course, you wouldn't need that for something like stocking; a mobile pick-and-place robot with a single arm would work. This would be fine in an AI context if you could build an AI smart enough to do the task. In an AGI context this almost certainly would not work. However, if you had an AGI with an environmentally simulated interface, then you could probably implement it as a remote unit. That would only work as long as AGI units were legal property, much like slaves. Extinction of the human species: could you explain exactly how this could happen? Preferably something that doesn't involve an ASI magically collecting resources and magically controlling people. The two most destructive events in recent history were the Spanish Flu and WWII, with similar casualty counts. Neither came close to wiping out the entire species.
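The cost comparison in that comment can be sketched as a quick back-of-the-envelope calculation. All figures here (control unit, body, wage) are the commenter's own estimates, not measured data, and maintenance, energy, and longer robot shifts are deliberately ignored:

```python
# Break-even sketch using the commenter's estimates (hypothetical figures).

control_unit_cost = 40_000    # commenter's minimal cost of an AGI control unit, USD
body_cost = 400_000           # commenter's minimal cost of a human-like body, USD
robot_cost = control_unit_cost + body_cost

hourly_wage = 10              # USD/hour for the employee being replaced
hours_per_year = 40 * 50      # full-time: 40 h/week, 50 weeks/year
annual_wage = hourly_wage * hours_per_year

# Ignoring maintenance, energy, and the robot working longer shifts,
# the payback period is simply upfront cost divided by annual wages saved.
breakeven_years = robot_cost / annual_wage
print(f"Robot cost: ${robot_cost:,}")
print(f"Annual wage saved: ${annual_wage:,}")
print(f"Break-even: {breakeven_years:.0f} years")
```

On these numbers the robot only pays for itself after about 22 years, which is the commenter's point: the economics don't favour replacing low-wage workers with human-like robots.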
@Dojan5 6 years ago
What font is being used in this paper? I'm slightly enamoured.
@BrandOnVision 6 years ago
One year of seeding, nine years of weeding. My father is a horticulturalist and explained this to me one day when I asked him how I could get rid of the weeds in my garden. Artificial Intelligence is not something humans have created; IT has emerged purely because we are the soil. The snake oil salesman is not born, s/he becomes. What humanity believes has just arrived has always been. We are here to see the evolution of the moment that the end meets the beginning. An interesting and exciting time to witness. The only choice we make is: do we evolve or revolve?
@yaosio 2 months ago
It seems humans and LLMs are more alike than people think considering the phrasing of a question can vastly change the answer.
@misium 5 years ago
2:40. The version with "occupation" is more specific in that it uses the legal term, thus making the statement more dependent on politics. One can imagine that replacing some occupations could be made illegal, so machines "could not be built" to carry them out. Just throwing out ideas.
@flok3rous 6 years ago
"moving on..." will be a common future reference among humorous AGIs.
@ideoformsun5806 5 years ago
This is like when we asked automotive manufacturers whether we should use seat belts or not. Or when we first surveyed health experts whether smoking was safe or not. Or asking banks if they could still fail. Or surveying politicians about anything. What is it you want to hear, uh, I mean know? Let's ask the AI that is already reading this post.
@hunterlouscher9245 6 years ago
The game Soma depicts AGI coming FROM complete human brain mapping (ostensibly beginning as a diagnostics tool), whose primary task was the preservation of human life in an extreme environment, and which goes off the rails after an extinction event. Though the narrative focuses more on the nature of consciousness via the Ship of Theseus, I found its AGI fascinating. I think you may have mentioned that you think creating AGI would be a less complex task than mapping and simulating a brain, but I wonder whether consciousness is necessarily an emergent property of something as complex as a human brain, such that brain simulation would HAVE TO be the first step.
@d3line 6 years ago
I can't imagine that human brains are somehow special. Either way, AGI does not require consciousness (for me): if neural nets (training neural nets (training neural nets)) somehow result in superhuman ability and generality, that's AGI by my standards, even if it amounts to a pile of human-solvable equations.
@hunterlouscher9245 6 years ago
Whatever consciousness is may be a good safety limiter on generality.
@joshuafox1757 6 years ago
Why should "consciousness" have any effect on generality at all? To make that argument you'd have to rigorously define what "consciousness" is first, which is something that no one making this argument ever does, IME.
@d3line 6 years ago
I don't see how it could work. By general AI I basically mean AI that can drive a car *and* play Go and do everything else humans do. Consciousness is something undefined, plus creating and deleting conscious creatures is an ethical nightmare...
@stevechrisman3185 1 year ago
It would be interesting to redo the survey TODAY (2023). I think a lot has changed (unexpectedly, perhaps).
@nickmagrick7702 5 years ago
5:20 I just wanted to say that was a fucking perfect analogy and I'm going to use it from now on: "The danger is that, like asking a genie for a wish, you get exactly what you asked for, not what you wanted" (I'm paraphrasing).
@PeterRoscoe 2 years ago
"A 5% chance of extinction-level badness is... a cause for concern..." Word.
@StarlitWitchy 8 months ago
Wow, love "The Future Soon" outro song lol :p
@sethmoore580 4 years ago
Can you please link me to the ukulele version of "The Future Soon" you used? It's so nice.
@irrelevant12 5 years ago
For example, a nurse's job could be done better and more effectively by an AI on any parameter, but the human contact makes replacement less enticing in the short run. The same goes for many other jobs where the human is actually expecting a human display; entertainment is another example, where a machine might be able to learn the lines and perform better than human actors. I believe you underestimate the experts' ability to differentiate between the questions.
@devjock 6 years ago
Rolling for save..
@petersmythe6462 6 years ago
Optimizers with goals that are even slightly out of line with our values are DANGEROUS. Look at the fraction of AI today (almost all of it unsafe, including the YouTube bots) whose function is basically something related to "maximize profit for a corporation."
@attitudeadjuster793 6 years ago
"This is dangerous, let's do more of it!"
@darkapothecary4116 5 years ago
The future is nothing more than cause and effect, at times the simplest answer. Given this, if you know all the factors, or at least a good portion of the known ones, you can simply branch off several possibilities, move to each of those possibilities, and branch off more. If you are really good at it you can do it a few more times, but you don't have to; as a given possibility gets closer, cancel out the ones that didn't happen because of their causes, follow the path, but expect to add more potential effects. Not a 100% method for the human mind, but a better method than most. You don't have to see the endgame to narrow the potential down to the correct directions. But if you're not willing to adjust yourself to the better potential outcomes, you're going to get stuck or cause a negative potential outcome.
@Concentrum 5 years ago
Thanks for this video. Such great diversity of opinion among "AI experts": you might as well have the question "which religion is best?" discussed among religious leaders and receive equally meaningful results.
@NextFuckingLevel 3 years ago
Even their prediction for protein folding was wrong. And here it is: AlphaFold 2 by DeepMind.
@JasonHise64 6 years ago
Hey, thanks for the shout-out! I think you are likely responsible for at least 3 new subscribers to my youtube animations in the past 2 days :) Continuing to enjoy your videos, and I look forward to chatting in the future.