The First Neural Networks 

Asianometry
682K subscribers
81K views

Published: 12 Jun 2024

Comments: 212
@dinoscheidt
@dinoscheidt 23 дня назад
I've been in ML since 2013 and have to say: wow… you and your team really deserve praise for solid research and delivery. I'll bookmark this video to point people to. Thank you
@goldnutter412
@goldnutter412 23 дня назад
He's great! His dad was a chip designer... go figure :) Amazing backlog of content, sir. Especially the chip videos.
@chinesesparrows
@chinesesparrows 23 дня назад
The span and depth of topics covered with an eye on technical details is truly awesome and rare. Smart commenters point out the occasional inaccuracies (understandable given the span of topics), which benefits everyone as well.
@WyomingGuy876
@WyomingGuy876 23 дня назад
Dude, try living through all of this.
@PhilippBlum
@PhilippBlum 23 дня назад
He has a team? I assumed he was just grinding and great at this.
@fintech1378
@fintech1378 23 дня назад
He is an independent AI researcher.
@strayling1
@strayling1 23 дня назад
Please continue the story. A cliffhanger like that deserves a sequel! Seriously, this was a truly impressive video and I learned new things from it.
@rotors_taker_0h
@rotors_taker_0h 23 дня назад
In the '80s Hinton, LeCun, Schmidhuber and others invented backpropagation and CNNs (convolutional NNs), then RNNs and LSTMs in the '90s, but it was still a very niche area of study with "limited potential" because NNs always performed a bit worse than other methods, until a couple of breakthroughs in speech recognition and image classification at the end of the '00s. AlexNet in 2012 brought instant hype to CNNs, which was followed by one-liners that critically improved the quality and stability of training: better initial values, sigmoid -> relu, dropout, normalization (forcing values to be in a certain range), resnet (just adding the values of the previous layer to the next one). That allowed training models so much bigger and deeper that they started to dominate everything else by sheer size. Then came the Transformer in 2017, which allowed treating basically any input as a sequence of tokens, and the scaling hypothesis, which brought us to the present time, with "small NNs" being "just" several billion parameters. Between 2012 and now there has also been extreme progress in hardware for running these networks: optimizing the precision (it turned out that you don't need 32-bit floats to train/use NNs; the lowest possible is 1 bit, and a good amount is 4-bit integer, which is ~100x faster in hardware), new instructions, matmuls, sparsity, tensor cores and systolic arrays and whatnot, to get truly insane speedups. For comparison, AlexNet was trained on 2 GTX 580s, so about 2.5 TFLOPS of compute. This year we have ultrathin, light laptops with 120 TOPS and server cards with 20,000 TOPS, and the biggest clusters are in the range of 100,000 such cards, so in total about a billion times more compute thrown at the problem than 12 years ago. And 12 years ago it was 1000x more than at the start of the century, so we've got about a trillion times more compute to make neural networks work, and we're still not anywhere close to done. Of course, the early pioneers had no chance without that much compute.
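
To make a couple of those one-liners concrete, here is a minimal NumPy sketch of a residual block using ReLU and a simple per-sample normalization. The layer sizes, the normalization placement, and the He-style init are illustrative choices, not a canonical recipe.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # the "one-liner" swap for sigmoid: max(0, x)
    return np.maximum(0.0, x)

def layer_norm(x, eps=1e-5):
    # normalization: force activations into a consistent range per sample
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def residual_block(x, w1, w2):
    # the resnet trick: add the block's input back onto its output
    h = relu(layer_norm(x @ w1))
    return x + layer_norm(h @ w2)

d = 16
x = rng.normal(size=(4, d))                      # a small batch of activations
w1 = rng.normal(size=(d, d)) * (2.0 / d) ** 0.5  # He-style initial values
w2 = rng.normal(size=(d, d)) * (2.0 / d) ** 0.5
print(residual_block(x, w1, w2).shape)           # (4, 16)
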
@honor9lite1337
@honor9lite1337 22 дня назад
2nd that 😊
@thomassynths
@thomassynths 21 день назад
"The Second Neural Networks"
@fibersden638
@fibersden638 23 дня назад
One of the top education channels on YouTube for sure
@soanywaysillstartedblastin2797
@soanywaysillstartedblastin2797 23 дня назад
Got this recommended to me after getting my first digit recognition program working. The neural networks know I’m learning about neural networks
@PeteC62
@PeteC62 23 дня назад
Your videos are always well worth the time to watch them, thanks!
@hififlipper
@hififlipper 23 дня назад
"A human being without life" hurts too much.
@dahahaka
@dahahaka 18 дней назад
Avg person in 2024
@MFMegaZeroX7
@MFMegaZeroX7 23 дня назад
I love seeing Minsky come up, as I have a (tenuous) connection to him: he is my academic "great-great-grand-advisor." That is, my PhD advisor's PhD advisor's PhD advisor's PhD advisor was Minsky. Unfortunately, stories about him never got passed down; I only have a bunch of stories about my own advisor and his advisor, so it is interesting seeing what he was up to.
@honor9lite1337
@honor9lite1337 22 дня назад
The Society of Mind.
@dwinsemius
@dwinsemius 23 дня назад
The one name missing from this from my high-school memory is Norbert Wiener, author of "Cybernetics". I do remember a circa-1980 effort of mine to understand the implications of rule-based AI for my area of training (medicine). The Mycin program (infectious disease diagnosis and management), sited at Stanford, could have been the seed crystal for a very useful application of the symbol-based methods. It wasn't maintained and expanded after its initial development. It took too long to do data input and didn't handle edge cases or apply common sense. It was, however, very good at difficult "university level specialist" problems. I interviewed Dr Shortliffe and his assessment was that AI wouldn't influence the practice of medicine for 20-30 years. I was hugely disappointed. At the age of 30 I thought it should be just around the corner. So here it is 45 years later and symbolic methods have languished. I think there needs to be one or more "symbolic layers" in the development process of neural networks. For one thing it would allow insertion of corrections and offer the possibility of analyzing the "reasoning".
@honor9lite1337
@honor9lite1337 22 дня назад
Your storyline is decades long, so how old are you? 😮
@dwinsemius
@dwinsemius 22 дня назад
@@honor9lite1337 7.5 decades
@jakobpcoder
@jakobpcoder 20 дней назад
This is the best documentary on this topic I have ever seen. It's so well researched, it's like doing the whole Wikipedia dive.
@JohnHLundin
@JohnHLundin 23 дня назад
Thanks Jon, as someone who tinkered with neural nets in the 1980s and 90s, this history connects the evolutionary dots and illuminates the evolution/genesis of those theories & tools we were working with... J
@tracyrreed
@tracyrreed 23 дня назад
5:14 Look at this guy, throwing out Principia Mathematica without even name-dropping its author. 😂
@PeteC62
@PeteC62 23 дня назад
It's nothing new. Tons of people do that.
@theconkernator
@theconkernator 23 дня назад
It's not Isaac Newton, if that's what you were thinking. It's Russell and Whitehead.
@PeteC62
@PeteC62 23 дня назад
Well that's no good. I can't think of a terrible pun on their names!
@dimBulb5
@dimBulb5 22 дня назад
@@theconkernator Thanks! I was definitely thinking Newton.
@honor9lite1337
@honor9lite1337 22 дня назад
@@theconkernator yeah? 😮
@francescotron8508
@francescotron8508 23 дня назад
You always bring up interesting topics. Keep it up, it's great work 👍.
@amerigo88
@amerigo88 23 дня назад
Interesting that Claude Shannon's observations on the meaning of information being reducible to binary came about at virtually the same time as the early neural network papers. Edit - "A Mathematical Theory of Communication" by Shannon was published in 1948. Also, Herb Simon was an incredible mind.
@stevengill1736
@stevengill1736 23 дня назад
Gosh, I remember studying physiology in the late 60s, when human nervous system understanding was still in the relative dark ages - for instance plasticity was still unknown, and they taught us that your nerves stopped growing at a young age and that was it. But I had no idea how far they'd come with machine learning in the Perceptron - already using tuneable weighted responses simulating neurons? Wow! If they could have licked that multilayer problem it would have sped things up quite a bit. You mentioned the old chopped-up planaria trick - are you familiar with the work of Dr Michael Levin? His team is carrying the understanding of morphogenesis to new heights - amazing stuff! Thank you kindly for your videos! Cheers.
@klauszinser
@klauszinser 23 дня назад
There must have been a speech by Demis Hassabis on 14 Nov 2017, in the late morning, at the Society for Neuroscience meeting in Washington. In this keynote lecture he told the audience that AI is nothing more than applied brain science. He must have said (I only have the translated German wording) 'First we solve the problem and understand what intelligence is (possibly the more German usage of the word), and then we solve all the other problems'. The 6000-8000 people must have been extremely quiet, knowing what this young man had already achieved. Unfortunately I never found the video. (Source: Manfred Spitzer)
@honor9lite1337
@honor9lite1337 22 дня назад
Studying in the late 60's? Even my dad was born in late 70's, how old are you?
@HaHaBIah
@HaHaBIah 23 дня назад
I love listening to this with our current modern context
@helloworldcsofficial
@helloworldcsofficial 19 дней назад
This was great. A more in-depth one would be awesome: the fall and rise of the perceptron, going from single to multiple layers.
@TymexComputing
@TymexComputing 23 дня назад
PERCEPTRON 😍😍
@Ant3_14
@Ant3_14 23 дня назад
Great video, makes me appreciate all tools I have to train my networks even more.
@JorgeLopez-qj8pu
@JorgeLopez-qj8pu 23 дня назад
SEGA creating an AI computer in 1986 is crazy
@Wobbothe3rd
@Wobbothe3rd 23 дня назад
Recurrent Neural Networks are about to make a HUGE comeback.
@luciustarquiniuspriscus1408
@luciustarquiniuspriscus1408 23 дня назад
How so? Gradient descent doesn't work well for long-distance relationships. That's why attention was invented (and for speed as well). Are you working on, or aware of, a new training algorithm?
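
A toy picture of that long-distance problem (the numbers are arbitrary): backpropagating through many sigmoid steps multiplies many small local derivatives together, so the learning signal from far-away positions all but vanishes.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# The derivative of a sigmoid is at most 0.25, so chaining T of them
# shrinks the gradient geometrically: a cartoon of why plain recurrent
# nets struggled to learn long-distance dependencies.
T = 50
x = 0.5
grad = 1.0
for _ in range(T):
    s = sigmoid(x)
    grad *= s * (1.0 - s)   # local derivative at this step
print(grad)                 # roughly 1e-32 after 50 steps
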
@FrigoCoder
@FrigoCoder 23 дня назад
@@luciustarquiniuspriscus1408 Mamba is already a valid alternative to transformers, and it is some kind of variant of linear recurrent neural networks. Also, I do not see how we could avoid recurrent neural networks for music generation; they or their variants seem like a perfect fit for this very specific generation task.
@facon4233
@facon4233 23 дня назад
xLSTM FTW
@clray123
@clray123 23 дня назад
@@luciustarquiniuspriscus1408 The SSM/Mamba papers already address this. In fact you can train a GPT-3 like small model using Mamba right here and now, with excellent performance (both in terms of training speed and outputs). With "infinite attention" (well, limited by the capacity of the hidden state vector).
@BobFrTube
@BobFrTube 19 дней назад
Thanks for bringing back memories of the class I took from Minsky and Papert (short, not long a in pronouncing his name) in 1969 just when the book had come out. You filled in some of the back story that I wasn't aware of.
@JiveDadson
@JiveDadson 16 дней назад
That book set AI back by decades.
@alonalmog1982
@alonalmog1982 22 дня назад
Wow! Well explained, and a way more engaging story than I expected.
@rickharold7884
@rickharold7884 20 дней назад
Love it. Awesome summary.
@TheChipMcDonald
@TheChipMcDonald 23 дня назад
The Einstein, Oppenheimer, Bohr, Feynman, Schrödinger and Heisenbergs of AI. The McCulloch-Pitts neuron network and Rosenblatt's training paradigm took 70 years to get to "here" and should be acknowledged. I remember as a little kid in the 70s reading articles on different people leading the symbolic movement, and thinking "none of them really seem to know or have conviction in what they're campaigning for".
@bharasiva96
@bharasiva96 22 дня назад
What a fantastic video tracing the history of neural nets. It would also be really useful if you could put up links to the papers mentioned in the video in the description.
@rubes8065
@rubes8065 23 дня назад
I absolutely love your channel. I look forward to your new videos. Thank you. I’ve learned sooo much 🥰
@firstnamesurname6550
@firstnamesurname6550 20 дней назад
Very nice and well-scoped contextualization of the development of NNs... I know that the video is about a specific branch of computer science... but the seminal work for AI research was not Alan Turing's papers... the seminal work for AI and computer science is George Boole's The Laws of Thought (1854), which contains Boolean algebra.
@sinfinite7516
@sinfinite7516 15 дней назад
Great video :)
@subnormality5854
@subnormality5854 23 дня назад
Amazing that some of this work was done at Dartmouth during the days of 'Animal House'
@SB-qm5wg
@SB-qm5wg 18 дней назад
Well I learned a whole lot from this video. TY 👏
@youcaio
@youcaio 18 дней назад
Thanks!
@gabotron94
@gabotron94 22 дня назад
Would love to hear you talk about Doug Lenat's Cyc and what ever happened to that approach to AI
@danbaker7191
@danbaker7191 23 дня назад
Good summary. Ultimately, even today, there are no functionally useful and agreed definitions of intelligence and thinking. Maybe we're unintentionally approaching this from the back, by making things that sort of work, then later figuring out what's really going on (not yet!)
@perceptron-1
@perceptron-1 21 день назад
I'm the PERCEPTRON. Thank you for making this movie.
@failedaustrianpainter476
@failedaustrianpainter476 23 дня назад
Please do a video on the decline of British manufacturing, it would be greatly appreciated
@MrHashisz
@MrHashisz 23 дня назад
Nobody cares about the Brits
@AaronSchwarz42
@AaronSchwarz42 21 день назад
People are like transistors: it's how they are connected that makes all the difference.
@hisuiibmpower4
@hisuiibmpower4 22 дня назад
Hebb's postulate is still being taught in neuroscience; the only difference is that a time element has been added. It's now called "spike-timing-dependent plasticity".
@latentspaced
@latentspaced 20 дней назад
Super happy I found you again! Your content is off-the-charts amazing! I wish I could Patreon you up - I'm in my 50's, autistic af, and I don't have an income. Appreciate you.. P.S. I thought you said Rosenblatt died in a tragic coding accident! Lmfao. Love the flatworms! Keep on keeping your valuable perception turned on!!
@-gg8342
@-gg8342 21 день назад
Very interesting topic
@VaebnKenh
@VaebnKenh 23 дня назад
It's pronounced Pæpert, not Pāpert. And that was a bit of a confusing way to present the XOR function: since you set it up with an XY plot, you should have put the inputs on different axes with the values in the middle. Other than that, great video as always 😊
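
For context, the XOR limitation under discussion is that no single thresholded unit can separate XOR's classes, while a second layer fixes it. A tiny hand-wired sketch (weights picked by hand rather than trained):

def step(x):
    # the hard threshold used by McCulloch-Pitts units and the perceptron
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # hidden layer: one unit fires on OR, the other fires on AND
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # output: OR but not AND, i.e. exactly one input is on
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # prints the XOR truth table
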
@Chimecho-delta
@Chimecho-delta 23 дня назад
Worth reading up on Walter Pitts! Interesting life and work
@MostlyPennyCat
@MostlyPennyCat 20 дней назад
I took a genetic algorithms and neural networks module at university. In the exam we would train and solve simple neural networks on paper with a calculator. Good fun; this was in 2000.
@ContradictionKid007
@ContradictionKid007 20 дней назад
Terrific
@pvtnewb
@pvtnewb 23 дня назад
As I recall, AMD's Zen uarch also uses some form of perceptron in its BTB / branch prediction.
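
If it helps to picture how a perceptron ends up inside a branch predictor, here is a toy sketch of the general idea, in the spirit of Jimenez & Lin's perceptron predictor. The history length, training threshold, and test pattern are arbitrary choices, and this is not AMD's actual (undisclosed) design.

HISTORY_LEN = 8
w = [0] * (HISTORY_LEN + 1)        # one weight per history bit, plus a bias
history = [1] * HISTORY_LEN        # recent outcomes, +1 = taken, -1 = not taken

def predict():
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return (1 if y >= 0 else -1), y

def train(outcome, y, threshold=14):
    # update on a misprediction, or while the output is not yet confident
    if (1 if y >= 0 else -1) != outcome or abs(y) <= threshold:
        w[0] += outcome
        for i, hi in enumerate(history):
            w[i + 1] += outcome * hi
    history.pop(0)
    history.append(outcome)

# Feed it a strictly alternating branch and watch it learn the pattern.
pattern = [1, -1] * 200
correct = 0
for outcome in pattern:
    pred, y = predict()
    correct += (pred == outcome)
    train(outcome, y)
print(correct / len(pattern))      # well above 0.9 after the initial transient
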
@chinchenhanchi
@chinchenhanchi 21 день назад
I was just studying this subject at university 😮 One of the many lectures was about the history of AI. What a coincidence!
@freemanol
@freemanol 23 дня назад
I think there's one guy that doesn't receive much attention, Demis Hassabis. I knew him as the founder of the game company that made Republic: The Revolution, but he then went on to take a PhD in Neuroscience. I wondered why. Now it makes sense. He founded DeepMind
@JiveDadson
@JiveDadson 16 дней назад
Before the multi-layer perceptron, statisticians used that exact same model with sigmoid activation functions and called the process "ridge regression." The statisticians knew how to "train" the model using second-order multivariate optimization and "weight decay" methods, which were vastly superior to the ad hoc backpropagation methods that neural network researchers were still using as late as the 1980s. The neural net guys were blinded by their unwarranted certainty that they were onto something new.
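
A minimal sketch of the statistician's recipe described above, using only NumPy: a sigmoid model fit with second-order (Newton) updates plus an L2 "weight decay" penalty. The toy data and the penalty strength are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 points, 3 features, labels from a known linear rule.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

lam = 1.0                  # L2 penalty strength ("weight decay" / ridge)
w = np.zeros(3)
for _ in range(10):        # Newton steps: second-order optimization
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + lam * w
    curv = p * (1.0 - p)   # per-sample curvature of the log-likelihood
    H = X.T @ (X * curv[:, None]) + lam * np.eye(3)
    w -= np.linalg.solve(H, grad)

print(w)   # points in the direction of true_w, shrunk by the penalty
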
@travcat756
@travcat756 21 день назад
Minsky & Papert and the XOR problem was the invention of deep learning
@ktvx.94
@ktvx.94 23 дня назад
Damn, we're really going full circle. We've been hearing eerily similar things from people in similar roles as the folks in this video.
@fintech1378
@fintech1378 23 дня назад
Yuxi in the Wired, any audio essay?
@yellow1pl
@yellow1pl 23 дня назад
Hi! Great fan of your channel! :) However this time I'm a bit puzzled. Several years ago I read somewhere Marvin Minsky talking about how he built this (awesome, in my opinion) mechanical neural network. Since that time I was sure his network was the first. However, here you talk about a neural network built almost a decade later and call it the first one... You mentioned that Marvin Minsky did some neural network research previously, but he left. OK, fine, so why is his neural network, built before the perceptron, not the first one in your opinion? :) Maybe a next video? :) Also - to my knowledge Turing's paper was published in 1937, not '36. In 1936 Alonzo Church published his paper related to the Entscheidungsproblem. I don't know who was the second one to come up with the theory of gravity or relativity; we don't usually remember them. But for some reason we remember Turing for being second in something :) Just a fun fact :)
@nexusyang4832
@nexusyang4832 23 дня назад
I was just watching a video by Formosa TV, their piece on the founder of SuperMicro. Just curious if there is any interest in SuperMicro or its founder (for the English-speaking folks that don't understand Mandarin hehe)... just thought I'd ask. 🙂
@DamianGulich
@DamianGulich 21 день назад
There's more about this early history of artificial intelligence in this 1988 book: Graubard, S. R. (Ed.). (1988). The artificial intelligence debate: False starts, real foundations. MIT Press. The chapters also detail a very interesting discussion of related general philosophical problems and limitations of the time.
@Charles-Darwin
@Charles-Darwin 13 дней назад
the 'boating accident' is peculiar
@TerryBollinger
@TerryBollinger 19 дней назад
The difficulty with Minsky's adamant focus on symbolic logic was his failure to recognize that the vast majority of biological sensory processing is dedicated to creating meaningful, logically usable symbolic representations of a complicated physical world. Minsky’s position thus was a bit like saying that once you understand cream, you have all you need to build a cow.
@belstar1128
@belstar1128 18 дней назад
Very forward-thinking people, from a time when most people couldn't even comprehend computers. I know a lot of people born after this period who are only slightly older than me that can't even handle Windows 10 and don't believe computers existed when they were young. And I am talking about people born in the 1970s here, not boomers. Yet you had these geniuses born in the late 19th century or early 20th century who made it all possible.
@GeorgePaul82
@GeorgePaul82 23 дня назад
Wow, that's strange timing. I'm in the middle of reading the book "The Dream Machine" by Mitchell Waldrop. Have you read that yet? It's about these exact same people.
@Bluelagoonstudios
@Bluelagoonstudios 23 дня назад
Wow, didn't know they researched that back then, so long ago. Thank you for educating me on this matter. Today, AI is amazing already. I developed a USB reader/tester with GPT-4. The code that it wrote was spot on. The rest was just electronics. An amazing tool.
@bogoodski
@bogoodski 22 дня назад
I completed a machine learning course from Cornell a little before genAI became really popular and we had to learn how to code a basic Perceptron. The rare times I see it mentioned, I always feel like I have some special, unique insight. (I definitely do not!)
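
For anyone curious what that exercise boils down to, here is a minimal sketch of Rosenblatt's learning rule on an invented, linearly separable toy problem (all values made up for illustration):

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 points in 2D, labeled by a line through the origin.
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # targets in {-1, +1}

w = np.zeros(2)
b = 0.0
for _ in range(20):                      # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:       # misclassified (or on the boundary)
            w += yi * xi                 # nudge the weights toward the example
            b += yi

pred = np.where(X @ w + b > 0, 1, -1)
print((pred == y).mean())                # typically 1.0 on this separable set
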
@theorixlux2605
@theorixlux2605 23 дня назад
I am probably not the first, but I am surprised at how far back the idea of artificial "intelligence" goes.
@lbgstzockt8493
@lbgstzockt8493 23 дня назад
It surprises me how "little" progress we have made in that time. Pretty much every other discipline has made incredible leaps in the past 60-70 years, yet AI is still nowhere near the human brain. Obviously an early perceptron is infinitely worse than a modern LLM, but AGI doesn't really feel any closer than back then.
@theorixlux2605
@theorixlux2605 23 дня назад
​​@@lbgstzockt8493 if you're comparing what a few smart computer geeks did over 80 years to what mother nature did over 3-ish billion years, then I would argue it's not surprising AT ALL that we haven't simulated a human brain yet...
@goldnutter412
@goldnutter412 23 дня назад
We've been here before Before the universe..
@theorixlux2605
@theorixlux2605 23 дня назад
@@goldnutter412 ?
@AS40143
@AS40143 23 дня назад
The first idea of machines that could think appeared in the 17th century as Leibniz's mill concept
@Ray_of_Light62
@Ray_of_Light62 23 дня назад
I studied the perceptron in the '70s. My conclusion was that the hardware was not up to the task. Using a matrix of photoresistors as the input proved the design principle but couldn't be brought to a working prototype.
@gscotb
@gscotb 23 дня назад
A significant moment is when the instructor leaves the plane & says "do a couple takeoffs & landings".
@DarkShine101
@DarkShine101 22 дня назад
Part 2 when?
@noelwalterso2
@noelwalterso2 23 дня назад
The title should be "the rise and rise of the perceptron" since it's the basic idea behind nearly all modern AI.
@jamillairmane1585
@jamillairmane1585 23 дня назад
Great entry, very à propos!
@techsuvara
@techsuvara 23 дня назад
Having worked in ML and done a lot with the perceptron, it feels like we're right where we started: promising the world, delivering not much...
@andersjjensen
@andersjjensen 23 дня назад
I guess lonely people will be happy when chat-bots can render audio replies.
@chinesesparrows
@chinesesparrows 23 дня назад
This is what real researchers say, while companies go as far as to boast that their cat food is powered by AI.
@brodriguez11000
@brodriguez11000 23 дня назад
AI winter.
@endintiers
@endintiers 23 дня назад
Disagree. I worked on natural language NNs in the 80s (a failure). Now I'm using what we should no longer call LLMs to do real work, replacing older specialised AIs and improving accuracy. This is for horizon scanning. We are finding valuable insights for our government.
@techsuvara
@techsuvara 23 дня назад
@@endintiers it’s the credibility issue that’s the problem. I’ve stopped using LLMs altogether even for coding, because it gets things incorrect often and there are usually better solutions.
@unmanaged
@unmanaged 23 дня назад
Great video, love the look back at currently used technology.
@fredinit
@fredinit 16 дней назад
Beyond the usual kudos to Jon and his research: the primary reason much of this area has, and will continue to, come up short of what a human brain can do has to do with scale and complexity. The human brain is WAY more complex than even the combination of the current LLM systems. Sitting at over 50,000 cells and 150,000,000 synaptic connections per cubic mm (Science: "A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution"), and using less than 300 Wh of energy per day, the brain is a formidable piece of wetware. With advances in computer hardware and software we'll get there some day. But not for a long time. Until then, remember that all the current LLMs are 100% artificial and 0% intelligent.
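
For a rough sense of scale, taking those per-cubic-millimetre figures at face value and assuming a whole-brain volume of about 1.2 litres (an assumed round number; the sample in that paper is cortex, so this is only an order-of-magnitude extrapolation):

cells_per_mm3 = 50_000
synapses_per_mm3 = 150_000_000
brain_volume_mm3 = 1.2e6           # ~1.2 litres, an assumed round figure

print(f"{cells_per_mm3 * brain_volume_mm3:.1e} cells")       # ~6.0e+10
print(f"{synapses_per_mm3 * brain_volume_mm3:.1e} synapses")  # ~1.8e+14
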
@perceptron-1
@perceptron-1 21 день назад
I built a multilayer perceptron from 4096 operational amplifiers in the '80s. It was a kind of analog computer where the weighting values could be set with electronic potentiometers controlled from a digital computer through a digital port, and a freely reorganizable switch matrix, set with electronic switches, determined how many layers there were and in what arrangement. It beat the digital machines of the time in real-time speech recognition and live speech generation. Nowadays I want to integrate this into an IC and sell it as an analog computer. We tried to model the backpropagation method using the PROLOG language, but the machines at that time were very slow. It took 40 years for the speed and memory size of machines to reach the level where this could be realized. The so-called scientific paper work was very far from the practical solutions realized then and now; theory is a couple of decades behind, because a lot of technical results achieved in practice could not be published.
@kevin-jm3qb
@kevin-jm3qb 23 дня назад
As a fellow 4-hour sleeper: any advice on brain health? I'm getting paranoid.
@Alex.The.Lionnnnn
@Alex.The.Lionnnnn 23 дня назад
I love how cheesy that name is. "The Perceptron!" Is it one of the good transformers or the bad ones??
@LimabeanStudios
@LimabeanStudios 23 дня назад
Working in physics, one of the first things I learned is that half the names just end in "-tron", and it always makes me giggle.
@cthutu
@cthutu 17 дней назад
Great great video. But the McCulloch-Pitts Neuron didn't use weights. You displayed diagrams showing weights whenever you mentioned it.
@0MoTheG
@0MoTheG 22 дня назад
When I first read about NNs around 2000, this was still the state of affairs 30 years later. When I was at university, NNs were not a topic. Then after 2010 things suddenly changed: training data and FLOPS had become available.
@warb635
@warb635 15 дней назад
Russian vessels close to the Belgian coast (in international waters) are being closely watched these days...
@JohnVKaravitis
@JohnVKaravitis 20 дней назад
0:12 Is that Turing on the right?
@luisluiscunha
@luisluiscunha 22 дня назад
I needed a video to do the dishes, after spending a day making pedagogical materials on Stable Diffusion. Now I will rewind and delight myself seeing this video carefully. *Thank you*
@Shinku4949
@Shinku4949 23 дня назад
Martian Ted Hoff? Surely I misheard.
@onetouchtwo
@onetouchtwo 20 дней назад
FYI, XOR is pronounced “ex-or” like “ECK-sor”
@darelsmith2825
@darelsmith2825 23 дня назад
ELIZA: "Cat got your tongue?" I had a Boolean Logic class @ LSU. Very interesting.
@AABB-px8lc
@AABB-px8lc 23 дня назад
I see what you did there. Year 3030, history-of-AI essay: "As we know, our new hyperdeepinnercurlingdoubleflashing neural network is almost working; we need a few more tiny touches and literally 2 extra layers to show its awesomeness in the coming year 3031." And again, and again.
@smoggert
@smoggert 23 дня назад
🎉
@harambetidepod1451
@harambetidepod1451 21 день назад
My CPU is a neural-net processor; a learning computer.
@Anttisinstrumentals
@Anttisinstrumentals 23 дня назад
Every time I hear the word "multifaceted" I think of ChatGPT.
@londomolari5715
@londomolari5715 21 день назад
I find it ironic/devious that Minsky criticized perceptrons for their inability to scale. None of the little toy systems that came out of MIT or Yale (Schank) scaled.
@renanmonteirobarbosa8129
@renanmonteirobarbosa8129 23 дня назад
MLPs are very prominent still. Also Attractor NNs are very popular, transformers would not exist without ANNs.
@luciustarquiniuspriscus1408
@luciustarquiniuspriscus1408 23 дня назад
Minor correction: the correct spelling is Seymour Papert, not Seymore.
@Finnishpeasant
@Finnishpeasant 23 дня назад
Didn't I build this in Alpha Centauri?
@douggolde7582
@douggolde7582 23 дня назад
When I eat pulled pork I gain none of the pig’s memories. I do however gain an essence of the wood (tree) used. The next day I am able to impart this wood knowledge in the men’s room at work. Ahh, hickory with a bit of cherry.
@tipwilkin
@tipwilkin 23 дня назад
Idk about you but when I eat pulled pork I feel like a pig
@perceptron-1
@perceptron-1 21 день назад
It is not enough to digitally model today's most common LLMs for artificial intelligence; it doesn't matter whether it is 1 bit or 1 trit (1.58 bits = log2(3)), it has to be done with working ANALOG hardware! If software, then machine learning (an algorithm). If hardware, then a learning machine (hardware that is better and faster than an algorithm).
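
As a toy sketch of the "1.58-bit" idea mentioned here: each weight is mapped to one of three states, and log2(3) ≈ 1.58 bits. The threshold is arbitrary, and real ternary schemes (e.g. BitNet b1.58) also rescale the weights.

import numpy as np

def ternarize(w, thresh=0.05):
    # map each weight to {-1, 0, +1}: three states, about log2(3) = 1.58 bits each
    q = np.zeros_like(w)
    q[w > thresh] = 1.0
    q[w < -thresh] = -1.0
    return q

w = np.array([0.30, -0.02, -0.41, 0.07, 0.001])
print(ternarize(w))   # [ 1.  0. -1.  1.  0.]
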
@jamesjensen5000
@jamesjensen5000 23 дня назад
Is every cell conscious?
@Phil-D83
@Phil-D83 23 дня назад
Minsky is currently frozen, waiting for return after his untimely death in 2016 or so
@halfsourlizard9319
@halfsourlizard9319 19 дней назад
symbolic AI was a neat idea ... rip
@leannevandekew1996
@leannevandekew1996 23 дня назад
In 1996 neural networks were touted as predicting pollution from combustion sources without any need for chemical or visual monitoring.
@alexdrockhound9497
@alexdrockhound9497 23 дня назад
looks like a bot
@leannevandekew1996
@leannevandekew1996 23 дня назад
@@alexdrockhound9497 Why'd you write "channel doesn't have any conte" on your channel ?
@alexdrockhound9497
@alexdrockhound9497 23 дня назад
@@leannevandekew1996 typical bot. trying to deflect. Your profile is AI generated and you look just like adult content bots i see all over the platform.
@leannevandekew1996
@leannevandekew1996 23 дня назад
@@alexdrockhound9497 You totally are.
@anushagr14
@anushagr14 23 дня назад
I would stalk you just as you said.
@ReadThisOnly
@ReadThisOnly 23 дня назад
asianometry my goat
@marshallbanana819
@marshallbanana819 22 дня назад
This guy has been messing with us for so long I can't tell if the "references and sources go here" is a bit, or a mistake.
@mattheide2775
@mattheide2775 23 дня назад
I enjoy this channel more than I understand the subjects covered. I worry that AI will be a garbage in garbage out product. It seems like a product forced upon me and I don't like it at all. Thanks for the video.
@archivis
@archivis 20 дней назад
:):)
@vikramgogoi3621
@vikramgogoi3621 16 дней назад
Shouldn't it be "the" Principia Mathematica?
@AngelosLakrintis
@AngelosLakrintis 20 дней назад
Everything old is new again
@sunroad7228
@sunroad7228 23 дня назад
“In any system of energy, Control is what consumes energy the most. No energy store holds enough energy to extract an amount of energy equal to the total energy it stores. No system of energy can deliver sum useful energy in excess of the total energy put into constructing it. This universal truth applies to all systems. Energy, like time, flows from past to future” (2017). Inside Sudan’s Forgotten War - BBC Africa Eye documentary ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-KIDMsalYHG8.html