
OpenAI’s huge push to make superintelligence safe | Jan Leike 

80,000 Hours
26K subscribers
7K views
In July 2023, OpenAI announced that it would be putting a massive 20% of its computational resources behind a new team and project, Superalignment, with the goal of figuring out how to make superintelligent AI systems aligned and safe to use within four years.
Today's guest Jan Leike, Head of Alignment at OpenAI, will be co-leading the project.
---------
The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them.
Learn more, read the summary and find the full transcript on the 80,000 Hours website: 80000hours.org...

Published: 3 Oct 2024

Comments: 18
@CraigPulliam 4 months ago
Godspeed, Jan.
@voncolborn9437 4 months ago
And now he's gone. It will be interesting: 1. what he and Ilya do in the future, and 2. what Altman does about the gaping hole left in their alignment team, and how he handles the publicity and speculation about why they left.
@enlightenment5d 4 months ago
Yeah, very good questions. I can't help but feel rather frustrated now that such a forward-thinking mind has left OpenAI... So many experiments and tests are not done, so many alignment avenues are left unexplored... I think OpenAI should rethink its approach deeply...
@JinKee 4 months ago
@enlightenment5d Alignment is the single most important hard question facing humanity right now, so of course we're gonna skip it on the road to market.
@monx 4 months ago
I would not be surprised to learn that the superalignment issue is contentious within OpenAI. I don't think it becomes a problem in the current regime of autoregressive GPT. Maybe in 2 or 3 generations, when the system has a degree of agency, the ability to run by itself, or does some form of self-improvement.
@flickwtchr 4 months ago
My guess is that the current, not-yet-released generation of the system is the crux of the issue: alignment is needed and not happening, thus the resignations. Isn't agency something they have been hinting at regarding GPT-5? AI tech is certainly determined to achieve such systems, because how else can these AI revolutionaries achieve their dream of having their AI systems run entire companies that they reap the profits from?
@TheMrCougarful 4 months ago
Well, that didn't age well.
@flickwtchr 4 months ago
I guess the program didn't go so well. We need more whistleblowers in AI, that's for damn sure.
@projectmati 1 year ago
Unbelievable this has so few views
@2LazySnake 10 months ago
And only one comment!
@danielvarga_p 7 months ago
Keep it up!
@Gizzardx0 4 months ago
Why does he have two microphones?
@KevinZhang-uh1il 4 months ago
This one aged nicely
@jsivonenVR 4 months ago
This didn’t age well…
@yfzhangphonn 1 month ago
GPT is confused after they left; I have unsubscribed from it
@theyogacoachuk 7 months ago
This sounds to me like raising teenagers 😂
@club213542 3 months ago
This didn't age well... Looks like it's an all-out race to AGI with no safety at all, and I doubt it's just OpenAI. Seems to me it's practically here. I mean, LLMs have beaten the Turing test already; things are just plain smart in ways we don't even understand.