
224. Superintelligence Assumptions 

THUNK
34K subscribers
2.1K views

Many of those who are 100% convinced by Nick Bostrom’s arguments about the potential for a runaway superintelligence tend to share certain beliefs about the world - is it possible that his case hits much differently depending on one’s starting assumptions?
- Links for the Curious -
Superintelligence: Paths, Dangers, Strategies (Bostrom, 2014) - dorshon.com/wp-content/upload...
Superintelligence: The Idea That Eats Smart People (Maciej Cegłowski, 2016) - idlewords.com/talks/superinte...
Dual use of artificial-intelligence-powered drug discovery (Urbina et al, 2022) - www.nature.com/articles/s4225...
A Collection of Definitions of Intelligence (Legg & Hutter, 2006) - www.vetta.org/documents/A-Coll...
How to Think Rationally about World Problems (Stanovich, 2018) - keithstanovich.com/Site/Resear...
MIRI 990 Tax Documentation, 2020 - intelligence.org/wp-content/u...
Should We Fear Artificial Intelligence? (StarTalk, 2015) - • Should We Fear Artific...
On NYT Magazine on AI: Resist the Urge to be Impressed (Bender, 2022) - / on-nyt-magazine-on-ai-...
IBM's AI Ethics, focusing on social & practical problems with AI systems - www.ibm.com/cloud/learn/ai-et...
Artifice and Intelligence (Tucker, 2022) - techpolicy.press/artifice-and...

Published: May 14, 2022

Comments: 35
@TheGemsbok 2 years ago
Coincidentally, this is quite similar to how I feel about Bostrom's simulation argument---that the alleged 'inevitability' of hyper-complex simulated universes and minds which would quickly outnumber the real universe and minds . . . boldly assumes the _feasibility_ of hyper-complex simulated universes and minds. He treats us to this elegant statement that skips past the assumption: "later generations [. . .] computers would be so powerful, they could run a great many such simulations." The only assumption he acknowledges at the outset is that "enormous amounts of computing power will be available in the future." Why we are to assent that enormous amounts of computing power would necessarily entail _sufficient_ amounts of computing power for the simulation task he describes (let alone "a great many" simulation tasks of that nature) is left mysterious.
@ReynaSingh 2 years ago
I was meaning to watch this video and then it showed up again 😅
@THUNKShow 2 years ago
So nice I uploaded twice. 😉
@iDubito 2 years ago
That's the second time in a day I've seen a comment from you Reyna. The first under a Charles Taylor video on "The Inner Self", now here. Guess I'd better check out your channel!
@adelhishem1 2 years ago
Great as always.
@THUNKShow 2 years ago
Thanks!
@christianlight8511 2 years ago
I haven't thought about AI in a while, but this does remind me of a story. When I was in New Orleans, my economics professor would hold seminars every other Friday during the school year that were open to the public. We would discuss a book or article and then have dinner at a nearby restaurant. I would usually ride with my professor to the restaurant afterwards, but one time his car was full, so I ended up getting a ride with a character of a physics professor from Tulane who had recently started attending the seminar. On the way there he began talking to me about how AI will eventually become very advanced and will try to wipe out humanity. He said he was certain this would happen. In addition, he said he could also mathematically prove the existence of God. I listened with an open mind but didn't really know what to think of any of it. After all, I'm just a humble philosopher and economist, and this older gentleman certainly knows more about physics than I ever will. I looked him up when I got home, and it turns out he's a very controversial physicist. His name is Frank Tipler. I don't have much to respond to his comments, but meeting and talking with him was an experience I'll never forget.
@THUNKShow 2 years ago
Sounds like one hell of a trip. 🙃
@christianlight8511 2 years ago
@THUNKShow When you hang around in "intellectual circles" you're going to meet some characters 🤣. But that's also what makes it fun and interesting.
@ChrisDRimmer 1 year ago
I feel like I'm a little late to the party on this one, but it feels worth saying anyway. When Bostrom (or more importantly, the AI alignment doomsdayers, among whom I count myself) talks about intelligence, he means little more than the ability to select actions that lead to the outcomes we want. It's not like an IQ test or the ability to adapt or whatever; it's the general fuzzy set of things like world modelling (building up an idea in your head of what the world is like), coming up with actions you could take, and accurately predicting the results of those actions. The basic idea is that all the goofy stupid AIs we might make along the way aren't a guaranteed-apocalypse-by-default like some eventual superintelligence will be (unless we figure out alignment first, ofc). So for the most part we basically say humanity will probably survive whatever else happens in between, and the next extinction-level event we seem likely to face (in the sense of being a thing that seems highly likely, as opposed to just being a thing that could possibly happen) is a misaligned superintelligence. If we get a misaligned not-so-superintelligence along the way, congrats to us, we might well survive it, but we'll be dumb and build something more dangerous afterward, and THAT is the entity we want to make sure has the same goals we do. Which seems hard, given we don't even know what our goals are ourselves 😅
@ChrisDRimmer 1 year ago
Forgot to say by the way, I bumped into your channel a few months ago when you released a video on the ultimatum game and have been a big fan ever since, tho I admit I’m lazy and haven’t got through anywhere near all of your historical videos yet!
@Xob_Driesestig 2 years ago
One other assumption that bothers me is that preferences are assumed to be transitive. The evidence seems to suggest there are animals with intransitive preferences (e.g. Sphex wasps), we know there are computer programs with intransitive "preferences" (they get stuck in a loop), and I sometimes seem to have intransitive preferences myself, and I'm not the only one (e.g. I like film 1 more than film 2, which I like more than film 3, which I like more than film 1). Why would we assume that AGI will have transitive preferences? Even if that's true by default, couldn't we program intransitive preferences in as a safety precaution? (This comment is a reupload from the unlisted video)
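The "stuck in a loop" claim above can be sketched in a few lines of Python (a hypothetical illustration; the preference pairs and function names are invented to match the commenter's film example): an agent that always moves to a strictly preferred option never settles on a "best" choice when its preferences form a cycle.

```python
# Pairs (a, b) meaning "a is strictly preferred to b" — intentionally cyclic,
# mirroring the comment: film1 > film2 > film3 > film1.
PREFERS = {("film1", "film2"), ("film2", "film3"), ("film3", "film1")}

def prefers(a, b):
    """True if option a is strictly preferred to option b."""
    return (a, b) in PREFERS

def best_option(options):
    """Greedily move to any strictly preferred option.

    Returns (best, path) if a stable best option exists, or (None, path)
    if the agent revisits a state — i.e. the preferences are intransitive.
    """
    current = options[0]
    seen = [current]
    while True:
        for candidate in options:
            if prefers(candidate, current):
                current = candidate
                break
        else:
            return current, seen  # nothing beats current: a stable best exists
        if current in seen:       # revisited a state: we are cycling forever
            return None, seen
        seen.append(current)

result, path = best_option(["film1", "film2", "film3"])
# result is None: the cycle film1 -> film3 -> film2 -> film1 has no maximum.
```

With a transitive preference set the same search terminates at a genuine maximum; the cycle detection only fires because the relation above has no maximal element.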
@alanjenkins1508 1 year ago
Never blame the tool. Always blame the person using it.
@threethrushes 2 years ago
Bold of you to assume that dogs aren't planning the absolute annihilation of their bipedal masters.
@jameslabs1 2 years ago
All of my silliness will go on until… it stops. Responsible
@THUNKShow 2 years ago
Truth.
@slriegc 2 years ago
Hey! I've been watching (and enjoying!) your videos for a few years now and am wondering if you have any sort of profile or public list of books on Goodreads (or some other way of tracking books you've read / liked). I feel like there's a pretty high chance I'd be into many of them :)
@THUNKShow 2 years ago
Oh dear, I don't have anything of the sort, I'm afraid... I joked the last time I did a live stream that I should just write down a list of everything people asked about that I hadn't read yet... 😅 FWIW there's very little that I read these days that I don't work into episodes somehow. I guess the last thing I haven't explicitly covered was probably "The Dispossessed" by Ursula Le Guin.
@redswap 2 years ago
So yeah your channel is pretty awesome btw. Did these supersmart theists make super cool general AIs?
@THUNKShow 2 years ago
Not yet! ;) And thank you very much!
@eunomiac 2 years ago
We really are doomed aren't we
@bthomson 2 years ago
Maybe not?
@eunomiac 2 years ago
@@bthomson Well, phew, that takes a load off ;)
@TheGemsbok 2 years ago
Nice double pun to open this one.
@THUNKShow 2 years ago
🍅🍅
@examinatorant4522 2 years ago
I'm not a philosopher or a super intelligent person (some might say not that intelligent), but I would suggest that humans are relatively good at a lot of things, and restraining ourselves from anthropomorphizing everything is not one of them. Inherent in what I understand of his thesis is that any and all movements are either advances or regressions based on our current C21st perspective. A very basic understanding of evolution since primordial life is that survival is conditional on environment and circumstances (i.e., notwithstanding a restarting of the clock by a "natural" disaster... a few mega-volcanoes or a god-awful large space rock). The point of life is itself: whatever the conditions, life tends to reconfigure itself to reproduce and prosper, and if a version doesn't, it disappears. There is NO GUARANTEE that future evolutionary movement would be an advancement (from our C21st perspective) or a regression, much less what our C23rd version would judge it to be. Ergo it stands to logic that similar factors may/would apply to an AI. It may conclude its best option is to simply stop and destroy itself as the logical best solution (there is no guarantee that a superintelligent AI would have our biological drive to replace us, or have self-consciousness and therefore emotions, or a drive to continue to evolve). Then, back on the conditions for existence, what isn't considered is the limitation of physics and the availability of resources to create its next iteration, and then there's the inevitable failure of constituent parts. In short, this is a pointless mental exercise, as is worrying about the practically unknowable future. I'm more worried about the almost unfathomable level of human beings' internecine capacity for self-destruction.
@bthomson 2 years ago
Me too! But I am also amazed at our kindness, generosity, adaptability, inventiveness, creativity, tenacity, and willingness to learn!
@monsieurLDN 1 year ago
I think your interpretation of AI evolution is also anthropocentric(?)
@anakimluke 2 years ago
🐕👋👋❤
@THUNKShow 2 years ago
🍎🐶🪐💖
@Macieks300 2 years ago
Why the reupload?
@JE-ee7cd 2 years ago
Better audio?
@THUNKShow 2 years ago
The audio on the first upload was like garbage, if you took that garbage & threw it down a stairwell with a bunch of hand tools & then recorded that racket & mangled it in Audacity until it was maximally unlistenable & then made some parts too quiet to hear anyways. 😣
@spilex5421 1 year ago
that bostrom guy seems wack man