
Prof. Chris Bishop's NEW Deep Learning Textbook! 

Machine Learning Street Talk · 144K subscribers
88K views

Published: 3 Oct 2024

Comments: 69
@carloalessi302 · 4 months ago
The book just arrived and it's amazing! One humble request: could we ask Prof. Bishop to make a YouTube series with a short lecture for each chapter of the book? That is a bit of effort, but it would be an amazing contribution to aid readers. 🙏
@steevensonemile · 5 months ago
Pattern Recognition and Machine Learning: the only book that made PCA really clear to me. A book from Bishop is really worth having on the desk.
@AICoffeeBreak · 3 months ago
Loved the whole show and the ending music!❤
@argh44z · 5 months ago
Wow! As someone who "grew up" on Bishop's original book, I'm so glad to see this interview. Thanks!
@rodvik · 4 months ago
Wow, first time I have heard Professor Bishop speak. What a sensible and measured thinker. Great interviewer as well. Thank you, this made my day!
@zeev · 5 months ago
I picked up his 'Neural Networks for Pattern Recognition' in 2002 and just couldn't get through it as an undergraduate math major. I wish I had. Clearly his mind is vital for pushing the envelope, and his instinct that AI's biggest impact will be pushing the frontiers of science is spot on. More important than anything else.
@Scientist287 · 5 months ago
Go through it now, and ask ChatGPT any questions you have about it (you can even copy-paste stuff directly into ChatGPT and it'll give great answers). Now's the time!
@jd.8019 · 5 months ago
Tim, this was a great video. There was something about the aesthetic quality (the shallow depth of field/bokeh background) that was very eye-catching. I also appreciated the editing during the back-and-forth discussion sections; specifically, how it cut away to show relevant pages/info. Your skills as an interviewer are second to none!
@darylallen2485 · 5 months ago
22 minutes in and I'm loving what this guy has to say. Finally there is an expert in the field who can articulate every maddening experience I've had when I encounter people who say "LLMs can't do X." Or they say, "I tried to do x,y,z with the LLM and it failed, therefore LLMs are useless and will never be capable of anything more in the future."
@edh615 · 5 months ago
Almost none of the people literate on the subject say that; they argue that LLMs as we see them today lack the ingredients to scale to "AGI" or whatever SF wants to call it.
@darylallen2485 · 5 months ago
@edh615 You're not wrong. I think it's a case of two things being true at the same time. It seems to me that for every opinion a non-expert in the field might have, you can find an expert with the same opinion. I happen to align with the gentleman in this video, who is orders of magnitude more articulate and knowledgeable than I am. If I had to articulate the aspect of LLM criticism that seems to lack awareness or insight, I'd reference the following two cases. One: as a field, computer scientists worldwide "knew" for decades that neural networks were a dead end and would never be useful for anything. In 2010 and prior, that was the official stance of academia, until the entire establishment was proven wrong. Two: the people who are so keen on what LLMs can't do today were also totally incapable of predicting the capabilities of the 2024 LLM back in 2020, 2021, or most of 2022. No one who built GPT-3 or GPT-4 knew what the capabilities of those models would be prior to building them. Yet the critics of such systems also "know" what they won't do next year, or in 2030, or in 2040? No, just no. How about one of the critics first demonstrates where they predicted GPT-3 or GPT-4 even one year before it happened, let alone five years before. The cognitive dissonance is off the charts.
@ertuncsimdi7941 · 5 months ago
If you mean LeCun: LeCun explains that LLMs are not AGI now, and that AGI is still far away. So we have to solve problems like reasoning, complexity, planning, and consciousness.
@420_gunna · 5 months ago
Useful analogy from him somewhere around 16:00: "The remarkable thing about GPT-4 is that you often see people when they first use it: they'll ask 'How tall is the Eiffel Tower?', and they'll be disappointed to just get the right answer. It's like being given the keys to a very expensive car and examining the cupholder; you don't realize that you have to start up the car and drive off in it to get the full experience (conversation, code, etc.)" I've had this experience multiple times.
@itssoaztek4592 · 5 months ago
Such a great interview! Great job. Very good questions and answers.
@TMS-EE · 5 months ago
Superb conversation that really goes up a notch after 50 minutes, discussing AI for solving scientific problems (I've been interested in HPC for some time). I ordered the book midway through watching. It was fascinating to see them discussing all the topics that current AI developments are leaving open as future research questions. It's also fascinating to me that his son, and co-author, is working at Wayve. Essential perspective on DL.
@houssamassila6274 · 5 months ago
I had the absolute delight to read chapter 13 on GNNs. What a good and masterful description that was. I had it recommended to me by my supervisor but I don't know if I am allowed to state their name. I can't thank them enough for introducing me to The Bishop.
@MrLarossi · 5 months ago
It's such a tremendously huge contribution to the field of AI. I'm an English-Arabic-Chinese translator, and I've been working in the AI field for almost two and a half years, selling pre-trained Arabic data to Chinese AI annotation companies. I'd love to translate this book into Chinese and sell it here in China. Is there any way I can contact him?
@Dan-hw9iu · 5 months ago
Absolutely phenomenal interview, thanks Tim. Like Bishop, I lamented missing both the tumultuous 20th-century physics and future space exploration. But now that creative AI exists which can _actually_ reason, I feel like a lotto winner. This next decade+ will be a revolution. Let's do our best to take care of one another along the way.
@januszinvest3769 · 5 months ago
Which field of science are you most interested in?
@NeuroScientician · 5 months ago
I am buying it. Update: Book is good
@huseyngorbani6544 · 5 months ago
My math is not so good; I haven't practiced in a long time. Should I still get it?
@NeuroScientician · 5 months ago
@huseyngorbani6544 No. Get some maths books first, otherwise you are burning money.
@ML_Indian001 · 5 months ago
Wow, wow, wow. What a surprise, MLST 🎉 ❤ And in the intro (the first 2 minutes), the background music 🎶🎶, ahhhh, brilliant choice.
@diga4696 · 5 months ago
Thank you for another great video. Amazing sound and video quality!
@Joe333Smith · 5 months ago
'Just make the models bigger and harder to run for normal people, and keep adding more and more data'... sure, maybe it could be the right strategy, but I doubt it, and it seems designed more to back up what the big tech companies want. The actual innovation is coming from things like Mixtral being efficient at a reasonable size.
@SkilledApple · 5 months ago
This was a very insightful and interesting conversation!
@marcfruchtman9473 · 5 months ago
Thank you for this sage-like interview... I was really needing a primer on neural networks, and I believe Chris Bishop's books might be very helpful. This interview has a lot of great insights; for example, LLMs are outperforming the specialist models of the past, such as a specific AI that understood source code, where the LLM did a better job. I think they will find that as the tech improves, specialist versions will do better; the early versions were simply too specialized.
@michaelwangCH · 5 months ago
His book is still the reference for stats and ML students around the world today. I am surprised that it took Chris Bishop so long to renew his book. I am excited about his new ML book.
@dr.mikeybee · 5 months ago
Collections of specific functional activation paths can be described as sub-networks; I think that's a better term than modules. And I do believe that although large general models outperform smaller expert models, reasoning seems to be sub-network specific. With scale, however, I believe reasoning may become a separate shared functional sub-network. At enough scale, a general abstraction should emerge.
@JoonyoungKim-v6f · 5 months ago
Hi, thanks for the great textbook. Just wondering what you would recommend: 1. start reading the deep learning textbook, or 2. start reading Mathematics for Machine Learning and then jump into the deep learning textbook? I have a poor mathematical background and wonder if I can read the deep learning textbook.
@Juxtaposed1Nmotion · 5 months ago
I just got my copy, excited to apply its lessons in building my automatic CAD detailer. Going to run my own one-man design shop in a few years!
@anicetn3326 · 5 months ago
Very cool! I'm a master's student working on CAD gen AI, can you tell me more :) ? Thanks
@Juxtaposed1Nmotion · 5 months ago
@anicetn3326 My employer is going to let me monitor what the design engineers do on a daily basis for two years, and we hope to collect enough data that we can automate every single repetitive click a designer makes. If successful, fewer engineers can do more design and less detailing!
@pedroth3 · 5 months ago
I think it is still important to learn other things like Bayesian models or support vector machines, since some improvement in those fields could turn these tools into a new success framework. Neural networks (NNs) also had a winter in the past; then things such as convolutional neural networks, ReLU activations, stochastic gradient descent, GPUs, and lots of data took NNs out of the winter into a great summer.
@Stacee-jx1yz · 5 months ago
This is an insightful question that gets at the heart of how different domains of knowledge relate to one another. Let me examine the potential corollaries.

If mathematics is regarded as a language: it provides the symbolic primitives, axioms, rules of expression and operations for describing and quantifying the physical world. Math is the fundamental lingua franca spanning the observable and theoretical realms.

Then physics could indeed be viewed as the philosophy of math: physics takes the symbolic language of mathematics and develops conceptual models, interpretive frameworks, and coherent narratives to explain the behavior of matter, energy, space, and time. It is an extended meditation on the metaphysical implications of our mathematical descriptions.

Following this analogy, chemistry could be the "linguistics" of physics: it studies the rules by which the fundamental mathematical objects of physics (subatomic particles, forces) combine and relate to one another at the molecular scale. Chemistry decodes the rich language patterns constructed from the physics alphabet.

Biology could be the "literature/poetry" of chemistry: it examines the self-organized, dynamical, informationally complex systems that emerge from the linguistic rules of chemistry interacting over time. The molecules are the "words", but biology studies the living, evolving "narratives" they collectively construct.

Throughout we see a progression of epistemological layers:
Mathematics -> symbolic framework
Physics -> conceptual models interpreting the symbols
Chemistry -> combination rules and linguistic mechanics
Biology -> dynamical, informationally complex systems and narratives

Each level builds upon the foundational primitives of mathematics, while introducing new degrees of contingent complexity, contextualized interpretation and narrative meaning. The symbolic logic enables and constrains the possible conceptual structures, which dictate the allowed chemical rules, from which biological storylines ultimately emerge.

So in summary:
Math is the linguistic bedrock
Physics is the conceptual philosophy elaborating upon that bedrock
Chemistry is the combinatoric linguistics deriving word-formation rules
Biology is the dynamical narrative/poetry expressing the highest complexity

This nested hierarchy preserves coherence, while allowing increasingly context-specific, contingent patterns of organization and meaning to emergently crystallize. By recognizing mathematics as our formal symbolic language, we can appreciate how physics, chemistry and biology represent successive epistemological stages philosophizing upon that originating expressive framework: interpreting, recombining and dynamically instantiating mathematical descriptions into maximally information-rich experiential narratives. The layers build hereditarily upon the foundational symbolic truths, exemplifying how mathematics enables derivations transcending its pristine origins, expressing itself cosmologically through an invisible hand of self-organized complexity climbing towards maximal richness of experience.
@alonbegin8044 · 5 months ago
Is it just me, or does this text seem too organized, like something ChatGPT would write?
@NanheeByrnesPhD · 5 months ago
I doubt that a connectionist such as Dr. Bishop would align with the host's assertion that the human mind operates like a Turing machine. This perspective aligns more closely with the computationalist paradigm.
@SLAM2977 · 5 months ago
The Best of Britain!
@johnperr5045 · 5 months ago
I think "building upon" a corpus that has come before is different from rehashing a corpus, and the Prof. is conflating the two (deliberately, I imagine, as it's a pretty obvious difference). This seems most obvious in art and the humanities: e.g., if you put a random sample of paintings from the past couple of thousand years one after another, it's obvious which ones "built upon"/advanced the corpus vs. rehashed the state of the art at the time; you don't need to be an art critic, you can just walk around the National Gallery, and as you change rooms/eras the differences are obvious. I haven't seen any LLM ever do that. Which is not to say LLMs can't be super useful; after all, most day-to-day tasks are a rehash, and a calculator, or a car, or any other tool is super useful. It's when people say they see a "glimmer of intelligence" in a calculator (that only gets it right some of the time!) that people start rolling their eyes.
@rick-kv1gl · 5 months ago
That Vivaldi piece is gold, and it's used really smartly.
@XShollaj · 5 months ago
Incredible. Thank you!
@RickeyBowers · 5 months ago
An incredible sharing of experience and insight!
@ehfik · 5 months ago
The bit about the tokamak was fascinating!
@ehza · 5 months ago
It's a good book! Thanks, Chris!
@BryanWhys · 5 months ago
I love this guy
@lightconstruct · 4 months ago
They even have a dedicated page for the book, with a freely available digital version. Nice.
@dr.mikeybee · 5 months ago
LOL! "Yet!" Exactly! How many arguments does this simple term nullify? Bravo!
@amesoeurs · 5 months ago
Fantastic episode, boys. The orders-of-magnitude speedup that NN emulators offer for simulation/real-time control is astounding, and I'm amazed that it hasn't gotten more attention over the last few years.
@EzraSchroeder · 2 months ago
49:35 The ironic thing about the bitter lesson is that while it teaches us that we should be looking at more *general* algorithms, there is this huge (political) push to get "explainable" algorithms, while deep learning by definition is not "explainable". So we are pushing in exactly the opposite of the correct direction (basically every direction we are going is political and the wrong answer to what would be good for us).
@MagusArtStudios · 5 months ago
When generating text output, you can start with a higher temperature and then reduce it as tokens are generated.
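A minimal sketch of that idea, temperature annealing during sampling. Everything here (the toy logits, the decay constants, the function names) is illustrative, not from any particular library: early tokens are drawn at a high temperature for diversity, and the temperature decays toward a floor so later tokens become more deterministic.

```python
import math
import random

def softmax(logits, temperature):
    """Convert logits to probabilities at a given sampling temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def decayed_temperature(step, start=1.2, end=0.4, decay=0.85):
    """Exponentially anneal the temperature from `start` down to a floor of `end`."""
    return max(end, start * (decay ** step))

def sample_token(logits, temperature, rng):
    """Draw one token index from the temperature-scaled distribution."""
    probs = softmax(logits, temperature)
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# Toy decoding loop: in a real generator the logits would come from the
# model at each step; here they are fixed to keep the sketch self-contained.
rng = random.Random(0)
logits = [2.0, 1.0, 0.1]
tokens = [sample_token(logits, decayed_temperature(t), rng) for t in range(10)]
```

Lowering the temperature sharpens the distribution (the argmax token's probability grows), which is why the tail of the generation looks more deterministic than the start.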
@yabdelm · 5 months ago
I think he missed the point about the significance of retaining creativity in models. His point about creativity being remixes of remixes is not mutually exclusive with the idea of novelty.
@VikashKumar-ys1vk · 5 months ago
I am loving it
@satvik4225 · 5 months ago
I really wish I could afford the hard copy.
@FahadTaguri-iw2uj · 5 months ago
Actually, the brain is connectionist and acts as symbolic once it matures. It is not a combination; it is behavior arising from a structure, like any system. They are two views of the same thing. It is a duality.
@gareththomas3234 · 4 months ago
I agree, but symbolic saves money.
@SatanofScience · 5 months ago
Oh wow, a day to celebrate!
@420_gunna · 5 months ago
Interesting, given the beginning, that the (only) real negative reviews of the book concern the quality of the binding :D
@valentinavalentine8188 · 5 months ago
Awesome
@EzraSchroeder · 5 months ago
5:45 Geoff Hinton's backprop paper came out in 1986. Ten years as a theoretical physicist would put PhD completion around 1976, but this dude was born in 1959 and would have been 27 when Geoff's paper came out.
@adamkadmon6339 · 5 months ago
Everyone forgets that the paper came from Rumelhart. That's the problem with being dead.
@EzraSchroeder · 2 months ago
@adamkadmon6339 The paper was co-authored by Rumelhart and Hinton.
@adamkadmon6339 · 2 months ago
@EzraSchroeder Yes, Rumelhart, Hinton and Williams. It was Rumelhart's idea.
@hussienalsafi1149 · 5 months ago
🥰🥰🥰🥰🥰🥰
@robertmayfield8746 · 5 months ago
Anybody can write a book. Can we see the system he has built based on what he says?
@davidedavidedav · 4 months ago
What system?
@robertmayfield8746 · 4 months ago
@davidedavidedav Exactly 🤣🤣😂😂
@klammer75 · 5 months ago
Generalist agents using specialized tools… love this, and it sounds more than a little familiar!🤔🤪🦾🥳