
GEOMETRIC DEEP LEARNING BLUEPRINT 

Machine Learning Street Talk
122K subscribers
162K views

Patreon: / mlst
Discord: / discord
"Symmetry, as wide or narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty, and perfection." and that was a quote from Hermann Weyl, a German mathematician who was born in the late 19th century.
The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact tractable given enough computational horsepower. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning and second, learning by local gradient-descent type methods, typically implemented as backpropagation.
While learning generic functions in high dimensions is a cursed estimation problem, many tasks are not uniform and have strong repeating patterns as a result of the low-dimensionality and structure of the physical world.
Geometric Deep Learning unifies a broad class of ML problems from the perspectives of symmetry and invariance. These principles not only underlie the breakthrough performance of convolutional neural networks and the recent success of graph neural networks but also provide a principled way to construct new types of problem-specific inductive biases.
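As a minimal illustration of the symmetry principle behind CNNs (our own sketch, not from the book, assuming a circular 1-D domain so that translation is an exact symmetry): a convolution commutes with translations of its input, so shifting the signal shifts the feature map instead of scrambling it.

import numpy as np

def conv1d(x, w):
    # Circular convolution: the domain wraps around, so translation is an exact symmetry.
    n = len(x)
    return np.array([sum(w[j] * x[(i + j) % n] for j in range(len(w))) for i in range(n)])

rng = np.random.default_rng(0)
x, w = rng.normal(size=16), rng.normal(size=3)
shift = 5
# Translation equivariance: shift-then-convolve equals convolve-then-shift.
assert np.allclose(conv1d(np.roll(x, shift), w), np.roll(conv1d(x, w), shift))

The blueprint repeats this recipe on other domains: pick the symmetry group of the data (translations on grids, permutations on graphs and sets, gauge transformations on manifolds) and build layers that are equivariant to it.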
This week we spoke with Professor Michael Bronstein (Head of Graph ML at Twitter), Dr. Petar Veličković (Senior Research Scientist at DeepMind), Dr. Taco Cohen, and Prof. Joan Bruna about their new proto-book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges.
Enjoy the show!
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
arxiv.org/abs/2104.13478
[00:00:00] Tim Intro
[00:01:55] Fabian Fuchs article
[00:04:05] High dimensional learning and curse
[00:05:33] Inductive priors
[00:07:55] The proto book
[00:09:37] The domains of geometric deep learning
[00:10:03] Symmetries
[00:12:03] The blueprint
[00:13:30] NNs don't deal with network structure (TEDx)
[00:14:26] Penrose - standing edition
[00:15:29] Past decade revolution (ICLR)
[00:16:34] Talking about the blueprint
[00:17:11] Interpolated nature of DL / intelligence
[00:21:29] Going back to Euclid
[00:22:42] Erlangen program
[00:24:56] “How is geometric deep learning going to have an impact”
[00:26:36] Introduce Michael and Petar
[00:28:35] Petar Intro
[00:32:52] Algorithmic reasoning
[00:36:16] Thinking fast and slow (Petar)
[00:38:12] Taco Intro
[00:46:52] Deep learning is the craze now (Petar)
[00:48:38] On convolutions (Taco)
[00:53:17] Joan Bruna's voyage into geometric deep learning
[00:56:51] What is your most passionately held belief about machine learning? (Bronstein)
[00:57:57] Is the function approximation theorem still useful? (Bruna)
[01:11:52] Could an NN learn a sorting algorithm efficiently (Bruna)
[01:17:08] Curse of dimensionality / manifold hypothesis (Bronstein)
[01:25:17] Will we ever understand approximation of deep neural networks (Bruna)
[01:29:01] Can NNs extrapolate outside of the training data? (Bruna)
[01:31:21] What areas of math are needed for geometric deep learning? (Bruna)
[01:32:18] Graphs are really useful for representing most natural data (Petar)
[01:35:09] What was your biggest aha moment early (Bronstein)
[01:39:04] What gets you most excited? (Bronstein)
[01:39:46] Main show kick off + Conservation laws
[01:49:10] Graphs are king
[01:52:44] Vector spaces vs discrete
[02:00:08] Does language have a geometry? Which domains can geometry not be applied? +Category theory
[02:04:21] Abstract categories in language from graph learning
[02:07:10] Reasoning and extrapolation in knowledge graphs
[02:15:36] Transformers are graph neural networks?
[02:21:31] Tim never liked positional embeddings
[02:24:13] Is the case for invariance overblown? Could they actually be harmful?
[02:31:24] Why is geometry a good prior?
[02:34:28] Augmentations vs architecture and on learning approximate invariance
[02:37:04] Data augmentation vs symmetries (Taco)
[02:40:37] Could symmetries be harmful (Taco)
[02:47:43] Discovering group structure (from Yannic)
[02:49:36] Are fractals a good analogy for physical reality?
[02:52:50] Is physical reality high dimensional or not?
[02:54:30] Heuristics which deal with permutation blowups in GNNs
[02:59:46] Practical blueprint of building a geometric network architecture
[03:01:50] Symmetry discovering procedures
[03:04:05] How could real world data scientists benefit from geometric DL?
[03:07:17] Most important problem to solve in message passing in GNNs
[03:09:09] Better RL sample efficiency as a result of geometric DL (XLVIN paper)
[03:14:02] Geometric DL helping latent graph learning
[03:17:07] On intelligence
[03:23:52] Convolutions on irregular objects (Taco)

Published: 13 May 2024

Comments: 149
@hannesstark5024 • 2 years ago
I cannot believe it! I am so happy that you got the whole GDL crew on ML Street Talk. This episode is great; thanks for your awesome content!
@TheDRAGONFLITE • 2 years ago
The amount of detail here is astounding
@TheReferrer72 • 2 years ago
Bloody hell, your introductions are brilliant; they are mini documentaries.
@billykotsos4642 • 2 years ago
This video really needs 100k views. Everyone working in RL research needs to view this
@Hexanitrobenzene • 1 year ago
This episode is beyond good... Finally, some visionaries turning ML from its alchemy stage into proper science!
@AICoffeeBreak • 2 years ago
This has taken epic proportions, wow! 💪
@michaelwangCH • 2 years ago
During my studies in statistics, ML, and DL I never understood the connections between different NN architectures; they simply popped up without any history or proofs: shut up, learn, and memorize. My professors in ML therefore did not earn my respect, because they did not really understand what they were teaching. I would like to thank you for bringing fundamental understanding to the zoo of DNN models; I had lost my intuition about them for a long time. Again, thank you for the clarity.
@pauloabelha • 2 years ago
One of the best YouTube videos I've ever watched. Videos, not only ML videos.
@nonchai • 1 year ago
Agreed
@therealjewbagel • 2 years ago
So glad to see more attention here on geometric deep learning. Thanks for sharing your chats with these fantastic thinkers!
@pretzelboi64 • 1 year ago
Honestly, this channel has such a great format. It's the perfect mix between a podcast and a documentary. It reminds me a lot of Sixty Symbols or Numberphile but for ML instead of physics and maths
@jonnysolaris • 10 days ago
Tim, your channel is the best of its kind. Kudos man, much love 🤘🏻
@paulnelson4821 • 2 months ago
At 2:10:43 the speaker describes how the methods used to analyze an N-based model break down at 2N. The theory of complex dynamic systems includes the concepts of "self-similarity independent of scale", fractals to relate scaled features, and non-integer dimensionality. Something worth considering…
@mailoisback • 2 years ago
Wow, three and a half hours of amazing content. Thank you so much for making it. It's like a documentary.
@welcomeaioverlords • 2 years ago
I love Taco's point about the community placing too heavy a cognitive reliance on benchmark datasets and ground-truth labels. I learned this lesson to my bones recently when I developed a "best ever" model in an industrial setting, but all of the metrics showed it to be WORSE, because the model was actually better than the ground-truth data and, e.g., true positives were counted as false positives. When your data/labels are biased by the data collection process, you can come up with all sorts of counter-intuitive conclusions if you don't keep those biases in mind during interpretation.
@dinoscheidt • 2 years ago
Prof. Bronstein's mic drop at 26:03
@M0481 • 7 months ago
I couldn't agree more, this is why in an industrial setting I always take new SOTA approaches with a grain of salt. It is not to say that the results are not convincing, but there's danger in having benchmarks that have been static for years. Benchmarks have generally been collected on a single temporal (and, at times, spatial) scale using a uniform collection method. What I found is that once you start using data that has been evolving over years and years, it becomes non-trivial to deal with the non-uniformity of the data or biases that have been induced throughout the years. I remember finding this "great" pattern in our data and only later finding out that this was simply due to one of the systems assigning a certain property to data points if the system was running out of memory #rip.
@bryanbosire • 2 years ago
Epic...3 hours of pure bliss
@ydas9125 • 1 year ago
Remarkable channel. Inspiring, challenging, eye-opening... I have been following it for months and I love it. Thank you
@janosneumann1987 • 2 years ago
Excellent! Thank you for all your hard work putting this together; I've been waiting for this to drop for a long time.
@scarletrazor1102 • 2 years ago
Great episode! Learned a lot, really appreciate all the work that makes these possible. The videos are incredibly well done! Great guest speakers too of course.
@jonathanbethune9075 • 1 year ago
Now, getting close to the end of your video, I realize that the "architecture" is the grand unified theory. Got excited, jumped the gun...
@PlancksOcean • 2 years ago
Excellent episode 👌👌 The geometric DL viewpoint is truly fascinating. Beyond the obvious repercussions for the ML/DL community, I hope it will have an impact on the theoretical statistics community's research interests as well. I'm eager to see what lies ahead 😉
@teegnas • 2 years ago
The way this video builds up over the fundamentals totally blew my mind.
@flooreijkelboom1693 • 2 years ago
This is great! Thank you for making this episode :)
@oncedidactic • 2 years ago
Again, thanks so much to the MLST team and you Tim for bringing this to us! Nothing like it, truly special stuff! Will be very intrigued to hear the next few months of conversations while y’all chew on Hawkins vs graphs. 😆
@vaibhavnayel • 2 years ago
Yesss normalize awkward people in cinematic shots! I love it
@Addoagrucu • 2 years ago
I've never known anyone more prepared than Tim.
@MachineLearningStreetTalk • 2 years ago
😎😃
@billykotsos4642 • 2 years ago
May this podcast last for another 10 years !
@MMUnubi • 10 months ago
why not forever?
@tornados2111 • 2 years ago
You really need a Patreon. I would love to give you money to help make more of these documentary-style parts. Excellent podcast
@MachineLearningStreetTalk • 2 years ago
Thanks! I don't really want the audience to pay for the content; most of them are poor PhD students! Also, even very large channels make nothing on Patreon, so it's pretty pointless to add one; monetization is off too. Perhaps one day we will get a viable sponsorship offer. Right now I would rather keep the channel pure, i.e. almost any amount of sponsorship money would pale into insignificance next to the effort we put into making the content, and it just seems out of place.
@federicorios1140 • 2 years ago
@@MachineLearningStreetTalk I understand what you're saying but I still feel like I owe it to you to tell you that I'd definitely sign up for your Patreon if you ever decided to make one
@catythatzall4now • 2 years ago
Why not just privately message the person responsible and make a massive donation, privately, if that's the way you know to encourage someone's work? All grateful, helpful comments are welcome. This is on YouTube; I hope it continues to be free for me and students forever
@AICoffeeBreak • 1 year ago
💰
@Coolguydudeness1234 • 2 years ago
thanks so much for making this! amazing video
@Srdchinna • 1 year ago
I am enjoying it thoroughly. It's fascinating to see different perspectives from all the GDL experts!
@Mutual_Information • 2 years ago
This is the longest YouTube video I've ever watched.
@ukulele2345 • 2 years ago
I really enjoyed your introduction. This is starting to be my favorite science podcast besides Lex Fridman's!
@scottmiller2591 • 2 years ago
Heard rumors about this MLST - and missed it when it came out. Looking forward to the talk!
@PhucLe-qs7nx • 2 years ago
Love this topic, looking forward to hearing this. I think this episode is great and lands at the right technical level: not as low-level as "in this paper we did this augmentation and it works well", but not as philosophically useless as the "knowledge is universal..." argument.
@livinlavidaluke • 2 years ago
A fantastically detailed video, I'm starting a PhD project on this topic now so this is perfect to watch! Thanks.
@good6894 • 2 years ago
A lot of this is quite over my head. BUT I'm using OpenAI's playground to have the concepts I don't understand explained to me, and it works extremely well. Ideas like gauge symmetry and such now make sense to me, on a very broad level of course, but still. An AI explaining to me the concepts that went into its own creation! Truly amazing!
@tinkeringengr • 1 year ago
Paradigm shift
@crimythebold • 2 years ago
Excellent content again. Damn, another book to read!
@MachineLearningStreetTalk • 2 years ago
Thanks Eric!
@thierryderrmann1170 • 2 years ago
What a great episode, thank you guys so much for this!
@MachineLearningStreetTalk • 2 years ago
Our sincere thanks to these 4 brilliant researchers:
Professor Michael Bronstein www.imperial.ac.uk/people/m.bronstein twitter.com/mmbronstein
Dr. Petar Veličković twitter.com/PetarV_93 petar-v.com/
Dr. Taco Cohen twitter.com/TacoCohen tacocohen.wordpress.com/
Prof. Joan Bruna twitter.com/joanbruna cims.nyu.edu/~bruna/

References:
ICLR 2021 Keynote - "Geometric Deep Learning: The Erlangen Programme of ML" ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-w6Pw4MOzMuo.html (note: we used some clips from this; the graphics designer was Jakub Kuba Makowski www.linkedin.com/in/jakub-kuba-makowski-19b17143/)
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges [Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković] geometricdeeplearning.com/ arxiv.org/abs/2104.13478
Review: Deep Learning on Sets fabianfuchsml.github.io/learningonsets/ arxiv.org/pdf/2107.01959.pdf
AMMI Course "Geometric Deep Learning" (12 lectures) ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-PtA0lg_e5nA.html
Beyond the Patterns 28 - Petar Veličković - Geometric Deep Learning ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-9cxhvQK9ALQ.html
Neural Algorithmic Reasoning [Petar Veličković, Charles Blundell] arxiv.org/abs/2105.02761
Equivariant Convolutional Networks [Taco Cohen] pure.uva.nl/ws/files/60770359/Thesis.pdf
Solving Mixed Integer Programs Using Neural Networks arxiv.org/abs/2012.13349
Project CETI: meaningful communication with another species audaciousproject.org/ideas/2020/project-ceti
Discovering Symbolic Models from Deep Learning with Inductive Biases [Cranmer] arxiv.org/pdf/2006.11287.pdf
node2vec arxiv.org/pdf/1607.00653.pdf
DeepWalk arxiv.org/pdf/1403.6652.pdf
AMMI Course "Geometric Deep Learning" - Lecture 12 (Applications & Conclusions) - Michael Bronstein ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-caQV-Vb9TBw.html
Graph Attention Networks [Petar Veličković] arxiv.org/pdf/1710.10903.pdf
A Generalization of Transformer Networks to Graphs arxiv.org/pdf/2012.09699.pdf
The Hardware Lottery [Sara Hooker] arxiv.org/pdf/2009.06489.pdf
Developments in Fractal Geometry [Barnsley] link.springer.com/content/pdf/10.1007/s13373-013-0041-3.pdf
Super-Resolution from a Single Image [fractals] www.wisdom.weizmann.ac.il/~vision/single_image_SR/files/single_image_SR.pdf
XLVIN: eXecuted Latent Value Iteration Nets arxiv.org/pdf/2010.13146.pdf
Analogy as the Core of Cognition [Hofstadter] worrydream.com/refs/Hofstadter%20-%20Analogy%20as%20the%20Core%20of%20Cognition.pdf
The CLRS Algorithmic Reasoning Benchmark github.com/deepmind/clrs
What Can Neural Networks Reason About? [Xu] openreview.net/forum?id=rJxbJeHFPS
Graph Representation Learning [Hamilton] www.cs.mcgill.ca/~wlh/grl_book/files/GRL_Book.pdf
@ekkapricious • 1 year ago
Great content! It may be better to put the references in the description so they don't get pushed too far down in the comment section. I initially thought there were no references and just happened to scroll down the list of comments.
@hendrixgryspeerdt2085 • 1 month ago
pin this comment
@abby5493 • 2 years ago
The most epic video you have ever made!
@castorpolux9862 • 2 years ago
OMG!! What a great generalization effort, and applause for the meta-analogy with geometry's Erlangen program... how many problems are in some way really just the same problem!! Congratulations and thank you.
@duskomanojlovic8507 • 1 year ago
Thanks for the podcasts. They help me stay in touch with ML, AI, and neuroscience alongside the other material I've found. Before this new hype around ML and transformers, I didn't know I would fall in love with neuroscience
@Kartik_C • 2 years ago
This is amazing! Thanks!
@love12xfuture • 1 year ago
It's my first time here, and your show already wows me!!! Thank you!!
@francescserratosa3284 • 11 months ago
I enjoyed watching it very much!!! Thanks. 🙂
@tinkeringengr • 2 years ago
Nice gem of a channel.
@RM-bs2td • 2 years ago
Amazing stuff. Thank you for the timestamps; that way I can listen to just the questions relevant to my domain :=)
@Rems766 • 11 months ago
One year on, still one of my favorite episodes, along with the Chomsky one
@fast_harmonic_psychedelic • 2 years ago
I love the rotational intro sequence around Michael, in the tradition of a science documentary
@swayson5208 • 2 years ago
@lucca1820 • 2 years ago
thank you for this
@denismoiseenko9100 • 1 year ago
Absolutely amazing episode! When you eavesdrop on a topical conversation of such detail, you are almost bound to pick up some wisdom and sync with the speakers at least for some time (provided you have some background)
@sonneryhugo4361 • 2 years ago
Best episode so far :) !!!!
@dr.mikeybee • 1 year ago
I really enjoyed this one the second time around. One thing: the transformer is mostly a pyramidal graph network, just FYI; it's a fully-connected feed-forward NN. A pyramidal graph is a type of graph with a hierarchical structure where nodes are organized into layers, and the nodes in each layer are connected to all nodes in the layer above. Pyramidal graphs have been used in various applications such as object detection, EEG classification, and spatial significance exploration
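Picking up on that point, here is a minimal sketch of our own (not from the episode, toy weights only) showing why self-attention is often described as message passing on a fully connected graph: every token-node aggregates messages from every other node, and without positional encodings the layer is permutation equivariant.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (n, d) token features. The attention matrix acts as a soft adjacency
    # matrix on the complete graph over the n tokens.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (n, n) learned edge weights
    return A @ V                                 # aggregate messages from all nodes

rng = np.random.default_rng(0)
n, d, dh = 5, 8, 4
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, dh)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)

# Permutation equivariance: reordering the tokens just reorders the outputs.
perm = rng.permutation(n)
assert np.allclose(self_attention(X[perm], Wq, Wk, Wv), out[perm])

This is the same observation raised at 2:15:36 ("Transformers are graph neural networks?").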
@otmaneelbourki3663 • 2 years ago
Hello sir, can you group all these talks into a single YouTube playlist, please? Thank you for the tremendous effort you are putting in
@paxdriver • 2 years ago
Love the format! Personally I'd prefer more content over more production value. If you can do both, though, that would be phenomenal lol
@amitkumarsingh406 • 2 years ago
hey Street Talk crew. Watching out for the next one ✌️
@MachineLearningStreetTalk • 2 years ago
Something very cool on its way ;)
@billykotsos4642 • 2 years ago
Best episode ever
@radicalrodriguez5912 • 2 years ago
Graphcore should hire Tim
@Murphyalex • 2 years ago
I can never keep up with these long episodes. It means I miss an awful lot of MLST if the guest or topic title doesn't really jump out at me. Once in a blue moon the stars align: the YT notification pops up and I can envisage fitting in an episode over the next couple of days in chunks. The promise of the Netflix-style Part 1 got me intrigued, and it's one of those stars-aligned moments today (probably just because it's Sunday, though). I think it's time to crack open the notepad and follow along. I can tell it's going to be good. Most of your stuff is just top content for us researchers in the ML domain (whether partially or fully). I really would consider making smaller videos that are less intimidating length-wise: I can easily ignore this entire episode, but I would be much more likely to watch a relevantly-titled section I found interesting that was around the 20-40 minute mark, and thereby find myself enjoying more of your content.
@valeknappich6387 • 2 years ago
That's a fair point. I sometimes have a similar issue: even if the title attracts my attention, I won't come to finish episodes, just because they're so long. Also, given the often challenging content, it is hard to just resume where you stopped. I really appreciate MLST's effort to make the content easier to absorb. Perhaps shorter videos that summarize a certain topic using scenes from different episodes would help in that respect. On the other hand, I assume this is a lot of work.
@oncedidactic • 2 years ago
I hear where you all are coming from, but I want to voice that the beauty of these conversations is their fullness, and distillation or bite-sizing seems a big ask given the discussion is already at the edge of current understanding. I love the long form. If it could also be snackable, that'd be overpowered 😅
@XOPOIIIO • 2 years ago
Wow, this chair looks comfortable.
@connorshorten6311 • 2 years ago
Amazing!
@Epistemophilos • 1 year ago
Fantastic, thank you. (Minor improvement suggestion - might just be me, but I would prefer if the music was not constantly playing.)
@DeepFindr • 2 years ago
Love it
@roomo7time • 2 years ago
It would be extremely helpful for PhD students, who are very busy, if you could make a highlight video containing only the most essential academic discussions from this video.
@jonathanbethune9075 • 1 year ago
That was really interesting.
@FredPauling • 1 year ago
The content was great! The editing was a bit disjointed. I love long-form videos like this, but a touch more planning might have made for a smoother experience.
@sotlanakazerov4755 • 11 months ago
I am so happy!!!!!!!!!!! Thank you so much
@Daniel-Six • 2 months ago
Okay Tim... I have a question worthy of your intellect and imagination. Granting arguendo the proposition that we operate within a carefully designed simulation, is it conceivable that the manifold hypothesis illustrates an intentionally implemented efficiency in our computational regime? Otherwise put, is the low dimensionality of crucial correlations and symmetries in source data an artificially induced property of our technical methodology in some sense? If proper pedagogy compels the sim architects to deploy logic that is never compromised by later learning--but the sim requires _some_ mechanism for invisibly adjusting the computational reach of our algorithms--then would it not be logical to establish that control inside the _dimensionality_ of our data?
@soumyasarkar4100 • 2 years ago
Wow... some episode!
@KaliferDeil • 2 years ago
A thing to keep in mind is that you don't want to use neural networks for things that other computational methods do much better, like sorting, computing, mapping, and information lookup. A good example is Apple's Siri.
@alexmorehead6723 • 2 years ago
Yes!
@petergoodall6258 • 2 years ago
Is there a theory for ‘transfer learning’ as a method for discovering a basis of symmetries for a domain?
@jonathanbethune9075 • 1 year ago
A grand unified theory is needed.
@jonathanbethune9075 • 1 year ago
I still think that, coming from an understanding of a thing, it is much easier to determine the way to think about it. BIOS, which are rules... I'm getting interested in computer science.
@juan-fernandogomez-molina645 • 2 years ago
A good attempt at the geometrization of machine learning, extending the Erlangen program and using group theory to find networks that capture symmetries. But, unfortunately, although valid for artificial CNNs, this only holds for simple neuron equations, which capture the signal processing of real biological neurons only very superficially.
@vladomie • 2 years ago
It seems clear how to identify symmetries in problems of computer vision. However, in games such as chess, Go, etc., can the individual symmetries of space, time, power, and material be extracted from a trained model? Can a superhuman AI reveal additional symmetries which lead us to "ah-ha!" moments not experienced by any human in history?
@drhilm • 2 years ago
I wonder: are Hawkins' "reference frames" analogs of the symmetry-group ideas? That is, do brain cortical columns encode priors and symmetries using reference frames, which leads to robust generalization and reasoning? Is generalization basically grid-cell mapping of the internal knowledge graph? I mean, aren't these ideas very similar?
@oncedidactic • 2 years ago
Musing about this here too. I think Hawkins' model is one implementation of some kind of good-trade-off, general-purpose leverage of symmetry. Sort of along the fuzzy lines of analogy vs graphs/categories, as was asked. At the end of the day you can never depend on rigid symmetry, because you'll find a novelty that breaks it, so whatever AGI model we build must bend to that.
@citizizen • 2 years ago
If you attack a certain kind of complexity, you need not do it at the same level; i.e., there are cheap ways of working through certain problems. Example: after you have collected all the related phenomena, choose how to work with those first, and only then apply all the necessary facts. First the easy way, and when you are done, you apply 'all'. No need to do overly expensive computations, to some degree.
@mgostIH • 2 years ago
At 1:04:00 Joan Bruna starts talking about "scale separation", giving the example of classifying images via convolutions whose locality bias is a necessity alongside symmetry, and adding that there is experimental evidence that this holds. But doesn't the recent progress in transformer-based image models (like ViT and newer ones) and MLP-Mixer suggest that it probably doesn't matter, or that it might even be constraining? After all, these models ignore translational symmetry and even remove the assumption of permutation invariance in transformers by using positional encodings.
@adikamath9740 • 2 years ago
But when you divide images into patches, doesn't it fix the locality invariant part?
@mgostIH • 2 years ago
@@adikamath9740 Only if you explicitly encode it in the positional embedding. In the usual setting you are just telling it that the image patch at position 1 is different from the one at position 2; the transformer makes no assumptions about attending to closer patches rather than farther ones. If you mean that the patch embedding itself is learning locality, I'd argue that something like Image GPT doesn't even divide the image into patches but just works pixel by pixel, while ViT seems to show better performance as the patches become smaller and smaller, suggesting that the ideal size would be a single pixel (though of course self-attention on a million tokens would be prohibitively expensive)
@dr.mikeybee • 2 years ago
I may not be understanding your question, so please excuse me if I'm being simplistic. I think the takeaway here is that if you know the symmetries, you can enhance training performance. It's as simple as that. You can avoid redundancies by constraining the input and output spaces. If the function is invariant and equivariant, you only need to worry about stationarity and locality; all the rest of the spaces are scaled transforms. I might be getting this wrong as I'm just learning this, but I think I may be properly addressing your concern.
@mgostIH • 2 years ago
@@dr.mikeybee I agree that geometric deep learning is the right approach when you know the exact symmetries of your problem, like encoding permutation invariance in GNNs. However, the claims he makes at that point of the video were:
- CNNs are good at image tasks because they encode translational invariance *and* have a locality prior thanks to convolutions
- Making networks learn invariances is far too expensive (he gives the example of N! permutations being prohibitive for data augmentation)
On the first point, CNNs are a very good approach in low data/compute regimes, and they were the first successful models at their own tasks, but recent progress is showing that networks which completely avoid both translation invariance and locality biases still perform extremely well at image recognition at larger scales, even beating CNNs in some cases (like CLIP and DALL-E showing ViT is better for their task compared to ResNets).
On the second point, I think that's just not true: the augmentations provided to the inputs don't have to cover all possible actions of the group considered; ViT and MLP-Mixer show robustness to translation even though the number of all possible image translations is far higher than what's obtained via augmented datasets.
I like geometric DL approaches, as I think newer classes of networks are always worth checking out, but I don't really buy the argument that they are necessary for the future; it seems that more compute and larger scales ultimately end up beating the inductive priors researchers want to encode, since the latter often risk removing information from the input. What I found more interesting was the perspective of *hinting* at symmetries rather than forcing them on the input, letting the network discover approximate group actions instead, as this maps better to the concept in Bayesian inference of not making your prior density exactly 0 in regions you aren't absolutely certain can't contain your parameters.
@dr.mikeybee • 2 years ago
@@mgostIH I agree that machines are almost always better at search than we humans, but computational constraints often force us to "stick our noses in." Moreover, I would say this will always be the case: we will always model the world at some small scale relative to the actual world. Nevertheless, if we know the data resides on a sphere, why check the whole three-dimensional space? Or if the world is a meta-graph as Stephen Wolfram suggests, etc. I think the argument here is that we need to find the right methods for these searches, beginning with finding the right signal representation. Right now I don't understand this much at all; it just seems like where all this research is heading. The point is that, as with early geometry, we have not yet generalized the postulates (priors), and this is an attempt to do that. My intuition tells me it's correct on the whole, but for smaller problems what you say about CNNs, for example, is true too. There are many approaches to these problems, but there may be only one optimal approach. So, like using the Ptolemaic system for geo-navigation, CNNs may be perfect for small problems, but I would want something more restrictive for large problems, just as I would want the Copernican system for navigating the solar system.
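To make the trade-off in this thread concrete, a self-contained toy sketch (ours, not from the episode, with hypothetical tied query/key projections for brevity): the moment per-position embeddings are added, the permutation symmetry of attention is broken, which is exactly what lets the model use order information.

import numpy as np

def attn(X, W):
    # Toy single-head self-attention with tied query/key projections.
    S = (X @ W) @ (X @ W).T / np.sqrt(W.shape[1])
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    return A @ X

rng = np.random.default_rng(1)
n, d = 5, 8
X, W = rng.normal(size=(n, d)), rng.normal(size=(d, 4))
pos = rng.normal(size=(n, d))        # one embedding per position
perm = np.roll(np.arange(n), 1)      # a non-trivial permutation

# Without positions, permuting tokens permutes outputs (equivariance holds):
print(np.allclose(attn(X[perm], W), attn(X, W)[perm]))              # True
# With positions added, the positions stay put and the symmetry is broken:
print(np.allclose(attn(X[perm] + pos, W), attn(X + pos, W)[perm]))  # False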
@victorlacerda8015 • 1 year ago
Does anybody have a clue as to where to find the result stated at 1:59:55?
@Artula55 • 2 years ago
At 5:55 you have written 'the WORD is full of simulac..'; don't you mean the "worLd"? BTW, love it so far, thank you so much for your hard work!
@TimScarfe • 2 years ago
Whoops!!
@sruturaj10 • 2 years ago
Finally!!
@Hukkinen • 2 years ago
1:59:35 Is the growth really EXPONENTIAL in a hyperbolic space ALSO? Or did Bronstein misspeak? I'm no expert here, just trying to understand. EDIT: No! Of course: if the embedding space grows exponentially, it also better captures whatever is represented.
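For context, a standard fact from hyperbolic geometry (not from the episode): in n-dimensional hyperbolic space the volume of a geodesic ball of radius r grows exponentially,

V(r) = \omega_{n-1} \int_0^r \sinh^{n-1}(t) \, dt \sim C e^{(n-1) r} \quad \text{as } r \to \infty,

whereas a Euclidean ball grows only polynomially, V(r) \propto r^n. The number of nodes in a tree also grows exponentially with depth, which is why hierarchies embed into hyperbolic space with low distortion while Euclidean embeddings cannot keep up.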
@citizizen • 2 years ago
Hi, I learned that I can use the depth of my brain itself. Perhaps there are people who can use this idea. As far as I know, this hasn't been done. I myself try to realize it whilst typing, which might be too unclear a way of getting inside. This is THE legacy ever, namely our brains.
@arjundubhashi1 • 2 years ago
Can someone ELI5 this? I understand basic ML, but this was Greek and Latin to me.
@quebono100 • 2 years ago
0:17 Cool, Tim became Morpheus :) Is Yannic Neo?
@tinyentropy • 1 year ago
Where do I find the learning courses that you mentioned? :)
@MachineLearningStreetTalk • 1 year ago
In the comments we have references: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-caQV-Vb9TBw.html
@tinyentropy • 1 year ago
@@MachineLearningStreetTalk Thanks a lot! For everything :€)
@jonathanbethune9075 • 1 year ago
Teaching a computer to recognize natural laws as it computes, and to extrapolate relevance.
@rpranaviitk • 1 year ago
2:19:17 and the next 15 seconds contain all the dreaded words of my undergraduate education in a single sentence. Anyone dare to explain what it means in ELI5 fashion?
@Hexanitrobenzene • 1 year ago
2:03:07 The sound is temporarily missing.
@user-gc6my9jg2c • 2 years ago
The sound effects are distracting. Thanks for the info.
@fast_harmonic_psychedelic • 2 years ago
Wonder why the sound went out at a certain part? Could've been just my speakers... But I'm still going to do some deep lip-reading operations, because if it's that secretive then I definitely need to know it even more lol
@TimScarfe • 2 years ago
I hope it's just your speakers!
@fast_harmonic_psychedelic • 2 years ago
@@TimScarfe I think it was. My speakers must have just decided the phrase "only interpolate but not extrapolate" was a noise signal and subsequently went into standby mode. LOL, just kidding
@subtlethingsinlife • 2 years ago
Was the lecture in English? I doubt it; I didn't understand one little bit. Please, can you elaborate on the topics you have been speaking on?
@fredxu9826 • 2 years ago
Clicked so fast!
@yuwang600 • 1 year ago
Symmetry is all you need
@barzinlotfabadi • 14 days ago
Still trying to figure out how this guy stays so buff despite being five magnitudes more nerd than me
@jonathanbethune9075 • 1 year ago
Cellular automata. Stephen Wolfram, I think, is working on that.
@dulcamara2851 • 1 year ago
The only deep question about AI is one never raised: whether the Galilean metaphor, that the universe is a book written in mathematics, is still relevant. I think it is not; the metaphor is dead. Then the question is whether AI has the potential to mutate beyond the Galilean metaphor; like the scholasticism that preceded it, the worldview embedded in the metaphor has exhausted itself. Nothing in AI supports the metaphor; if anything, it negates it. Which is why AI's potential to inaugurate a new era is really the only interesting philosophical question to ponder.
@sugamtyagi9144 • 2 years ago
Waiting for your new videos.
@MachineLearningStreetTalk • 2 years ago
We have a treat for you; we are working hard to release it as soon as possible
@oncedidactic • 2 years ago
Upboat for chair