
Will Machine Intelligence Surpass Human Intelligence? with Yann LeCun 

Aspen Physics

Abstract
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons?
LeCun will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable.
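The abstract's central move, making predictions in a learned embedding space rather than in input space, can be illustrated with a toy numerical sketch. The linear "encoders" and predictor below are hypothetical stand-ins for the learned networks, not the H-JEPA architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "encoders" and predictor standing in for deep networks.
D_IN, D_EMB = 16, 4
enc_x = rng.normal(size=(D_IN, D_EMB))   # encodes the observed context x
enc_y = rng.normal(size=(D_IN, D_EMB))   # encodes the target y
pred = rng.normal(size=(D_EMB, D_EMB))   # predicts y's embedding from x's

def jepa_energy(x, y):
    """Energy = squared distance between the predicted and actual target
    embeddings; training would drive this down for compatible (x, y) pairs."""
    s_x = x @ enc_x            # context representation
    s_y = y @ enc_y            # target representation
    s_y_hat = s_x @ pred       # prediction made in embedding space
    return float(np.sum((s_y_hat - s_y) ** 2))

x = rng.normal(size=D_IN)
y = rng.normal(size=D_IN)
print(jepa_energy(x, y))
```

In the actual proposal the encoders are deep networks trained so that the representations stay maximally informative while remaining maximally predictable.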
The corresponding working paper is available at openreview.net....
About Yann LeCun
Yann LeCun is VP and Chief AI Scientist at Meta and Silver Professor at NYU affiliated with the Courant Institute and the Center for Data Science. He was the founding Director of Facebook AI Research and of the NYU Center for Data Science. He received an EE Diploma from ESIEE (Paris) in 1983, a PhD in Computer Science from Sorbonne Université (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU in 2003 after a short tenure at the NEC Research Institute. In late 2013, LeCun became Director of AI Research at Facebook, while remaining on the NYU Faculty part-time. He was visiting professor at Collège de France in 2016. His research interests include machine learning and artificial intelligence, with applications to computer vision, natural language understanding, robotics, and computational neuroscience.
He is best known for his work in deep learning and the invention of the convolutional network method which is widely used for image, video and speech recognition. He is a member of the US National Academy of Sciences, National Academy of Engineering, and the French Académie des Sciences, a Chevalier de la Légion d’Honneur, a fellow of AAAI and AAAS, the recipient of the 2022 Princess of Asturias Award, the 2014 IEEE Neural Network Pioneer Award, the 2015 IEEE Pattern Analysis and Machine Intelligence Distinguished Researcher Award, the 2016 Lovie Award for Lifetime Achievement, the University of Pennsylvania Pender Award, and honorary doctorates from IPN, Mexico, EPFL, and Université Côte d’Azur. He is the recipient of the 2018 ACM Turing Award (with Geoffrey Hinton and Yoshua Bengio) for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".

Published: 28 Aug 2024

Comments: 34
@dr.mikeybee · 1 year ago
I think it's amazing how these networks learn the structure of a system, for example protein folding. They're able to learn new abstractions that scientists have never seen or recognized. This can mean, for example, that for abstractions p and q, if p and q are true, then a protein folds in a predictable way for some distinct amino acid complex.

I like to think of it as backpropagation over a loss function automatically organizing inputs with similar features into the same selection of embedding dimensions. Think of it this way: if two distinct entities have similar features, the weights should also be similar, predicting a similar outcome, perhaps shifted in one or more dimensions. So we may have no clue as to why a particular fold occurs, but the model can predict it. Amazing! And if you map the hotspots in the net for a given feature set, a similar feature set should create a similar map.

And although one can say that a transformer simply chooses the next token, I believe this is a terrible oversimplification. The way an "embedding space" creates the possibility of finding arbitrarily many dimensions, as well as the way these are arranged into pyramidal hierarchical abstractions, creates a world map of the input space. In the case of NLP, it results in an entire point of view, all at the same time, accessed and activated by a wave of electrons. What that means exactly, in terms of the experience of whatever agency is extant in the architecture of the AI model, is beyond my understanding at this time, and perhaps will be forever.
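The comment's intuition that similar feature vectors passed through the same learned weights must yield similar predictions is just continuity of the learned map. A minimal sketch, with made-up weights standing in for a trained embedding and prediction head:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical learned weights: a shared embedding matrix and a linear head.
W = rng.normal(size=(8, 3))    # maps 8 raw features to a 3-d embedding
head = rng.normal(size=(3,))   # downstream linear predictor

def predict(x):
    """Embed, then predict. Similar inputs pass through the same weights,
    land near each other in embedding space, and get similar outputs."""
    return float((x @ W) @ head)

a = rng.normal(size=8)
b = a + 1e-3 * rng.normal(size=8)   # nearly identical feature vector
print(abs(predict(a) - predict(b)))  # small: the learned map is continuous
```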
@250txc · 8 months ago
BS ... You must be a BOT, as you are just making things up, stringing words together THAT ARE NOT STATED here.
@brad6742 · 1 year ago
[44:50] Predictions in embedding space. Why? Because you can make better predictions there, thanks to disentanglement and lower dimensionality.
@Gabcikovo · 1 year ago
17:24 reinforcement learning
@andrewxzvxcud2 · 1 year ago
It's interesting to see the stark contrast between his beliefs about current LLMs and Ilya Sutskever's beliefs; I wonder who's right :)
@aiartrelaxation · 1 year ago
Of course it will surpass it
@Gabcikovo · 1 year ago
14:32 deep learning for science
@Gabcikovo · 1 year ago
15:09
@Gabcikovo · 1 year ago
21:13 self-supervised learning
@Gabcikovo · 1 year ago
39:18 autonomous AI systems that can learn, reason, plan, "A path towards autonomous machine intelligence"
@dr.mikeybee · 1 year ago
Cars can be taught with RL, but going over a cliff isn't the error function. The error function is human intervention: if you hit the brake or grab the steering wheel, the system knows there was an error.
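A minimal sketch of the idea above, with a hypothetical reward-shaping function and an arbitrary penalty value chosen for illustration:

```python
# Treat human intervention, not a crash, as the error signal: any brake tap
# or wheel grab subtracts a penalty from the reward the learner sees.
def shaped_reward(base_reward: float, human_intervened: bool,
                  intervention_penalty: float = 10.0) -> float:
    """Reward actually given to the RL agent for one driving step."""
    return base_reward - (intervention_penalty if human_intervened else 0.0)

print(shaped_reward(1.0, False))  # 1.0 : no intervention, plain task reward
print(shaped_reward(1.0, True))   # -9.0: intervention flags an error
```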
@250txc · 8 months ago
Why let the machine drive itself IF YOU, THE HUMAN, MUST WATCH THE ROAD ANYWAY? LOL
@nemeking9040 · 1 year ago
Short answer: NO 😊
@JOlivier2011 · 1 year ago
As a rather dumb human, I'm here to inform you that it already has.
@taopaille-paille4992 · 1 year ago
AI was programmed by humans, building upon centuries of mathematical research and, more recently, computers.
@Gabcikovo · 1 year ago
:D
@TheBlackClockOfTime · 1 year ago
Yeah, I mean I don't really get this "we're not even close". It passes, e.g., an Amazon coding interview 100% correctly in 3 minutes where the time limit is 2 hours, and that's only because it takes a human 3 minutes to copy-paste from window to window. It passes the bar exam and the medical licensing exam in the 90th percentile. It has a theory of mind. It has emergent capabilities we're not aware of, nor do we understand how it got them other than through a large number of parameters. We literally no longer understand how it works; this is from Ilya Sutskever himself (OpenAI Chief Scientist). Its development is advancing at an exponential velocity. How is that not more intelligent than pretty much 95% of the humans on the planet, if not more?
@Luca-tw9fk · 1 year ago
@TheBlackClockOfTime just you
@chenwilliam5176 · 1 year ago
I don't think "ChatGPT-5 will be a real artificial general intelligence system in the future" 😊 We can make a bet now 😉
@Gabcikovo · 1 year ago
27:50 generative AI systems
@Gabcikovo · 1 year ago
27:56 autoregressive generative models (architecture)
@Gabcikovo · 1 year ago
29:01 Tomáš Mikolov just smiled :)
@Gabcikovo · 1 year ago
30:36
@Gabcikovo · 1 year ago
30:46 let's change this! Let's bring in new, better, high-quality training data by stopping politicians from blocking responsible users who are able to share their fact-checked knowledge in a way comprehensible both to humans and to AI
@Gabcikovo · 1 year ago
12:14
@Gabcikovo · 1 year ago
12:30
@Gabcikovo · 1 year ago
12:44 image understanding, semantic and dense, panoptic segmentation
@bradmodd7856 · 1 year ago
Don't you mean "when?" 🤣🤣🤣
@farmerjohn6526 · 1 year ago
Even if it did, who cares? We don't know how our phones work either.
@dr-maybe · 1 year ago
Every month of insane AI progress makes this man sound less credible. Just give up this take; it's not just foolish, it's dangerous.