Tim Urban on AI Relationships, Wokeness and Humans Living to 150 

The Logan Bartlett Show
3K views · Published 15 Oct 2024

Comments: 8
@silly_man · 1 year ago
Love how Tim’s talking about the apocalypse at the end and Logan says “thanks for doing this, it was fun”
@JoakimVesterlund · 4 months ago
I am so glad Tim has started to check the time tables for the vegan train❤
@human_shaped · 1 year ago
Exponentials don't have inflection points. It all depends on your axis range. That's one of the deceptive things about them that makes us not understand them intuitively. It's just an optical illusion, but really it's a fairly continuous process that has always been happening.
@loganbartlett4307 · 1 year ago
Fair point, good call out.
@MetsuryuVids · 1 year ago
About AI extinction: he says that "selfishly", we should proceed with AGI as fast as possible. I get the selfish point of view, and I actually share it: I want to live, and since I'm going to die anyway, AI at least gives me a chance not to. I'm not even considering other people here; I'm thinking selfishly, about myself.

But even selfishly, rushing AI is a terrible choice. Framing it as "potentially lose 40 years, but potentially gain immortality" is too simplistic. The chance that we lose 40 years, or the rest of our lives, seems much higher. Alignment seems very hard, and we certainly haven't solved it yet. Instrumental convergence seems almost inevitable; I say almost because I don't think it's impossible to solve, but I see no solution currently.

So, even from a selfish point of view: yes, let's develop AGI, but maybe not rush it? Maybe take some time to improve our chances of building a good AGI? We don't want to die, but if we rush it, we probably will; if we go about it carefully, we increase our chances at immortality. Why throw it all away? Sure, we could die in an accident tomorrow, but right now the risk of death from misaligned AGI seems much higher than that. If that risk ever drops below the risk of death from a random accident, I'd be happy for the world to proceed with building AGI, but as things stand, I'd rather wait and focus more on AI alignment research.
@loganbartlett4307 · 1 year ago
I agree with your opinion on this, and I think Tim would as well. Rushing AI feels like a terrible outcome, and I think he was being a little intentionally flip/nihilistic about it.
@sulljoh1 · 1 year ago
It's hard to quantify these risks, so terms like "much higher" feel subjective.
@sulljoh1 · 1 year ago
To be fair to cancel culture: if you truly believe that your opponents are harmful, then you are often not trying to argue about the truth of ideas. You're trying to defeat the harmful results of bad ideas.

Pretend it's the 1960s and a thoughtful but highly racist guy owns a restaurant in your town. You might organize a boycott to financially harm his business because of his racist views. He could complain that he's being silenced, or say there's a chilling effect preventing him and his friends from openly saying what they secretly believe. He may resent not being able to discuss the idea that black people are inferior to white people.

Maybe the racist owner believes he is just following the evidence in an open-minded way; plenty of pseudoscience was being published at the time that seemed to scientifically demonstrate a lot of dubious racial claims. But you could hardly blame the boycotters in this case for "cancelling" him, even bankrupting his business, if he doesn't change his views.