
Exponential Progress of AI: Moore's Law, Bitter Lesson, and the Future of Computation 

Lex Fridman
4M subscribers
89K views

Discussion of exponential progress in AI and computation, including Moore's Law and the Bitter Lesson by Rich Sutton. This was part of the AI paper club on our Discord. Join here: / discord
Bitter Lesson: www.incompleteideas.net/IncIde...
Slides for this video: bit.ly/2T4SeHt
References sheet: bit.ly/bitter-lesson
Lex + AI Podcast Discord: / discord
OUTLINE:
0:00 - Overview
0:37 - Bitter Lesson by Rich Sutton
6:55 - Contentions and opposing views
9:10 - Is evolution a part of search, learning, or something else?
10:51 - Bitter Lesson argument summary
11:42 - Moore's Law
13:37 - Global compute capacity
15:43 - Massively parallel computation
16:41 - GPUs and ASICs
17:17 - Quantum computing and neuromorphic computing
19:25 - Neuralink and brain-computer interfaces
21:28 - Deep learning efficiency
22:57 - Open questions for exponential improvement of AI
28:22 - Conclusion
CONNECT:
- Subscribe to this YouTube channel
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman
- Medium: / lexfridman

Science

Published: 28 Jun 2024

Comments: 249
@lexfridman 4 years ago
Discussion of exponential progress of AI and the Bitter Lesson by Rich Sutton. This was part of the AI paper club on our Discord. Join here: discord.gg/lex-ai Here's the outline of the presentation: 0:00 - Overview 0:37 - Bitter Lesson by Rich Sutton 6:55 - Contentions and opposing views 9:10 - Is evolution a part of search, learning, or something else? 10:51 - Bitter Lesson argument summary 11:42 - Moore's Law 13:37 - Global compute capacity 15:43 - Massively parallel computation 16:41 - GPUs and ASICs 17:17 - Quantum computing and neuromorphic computing 19:25 - Neuralink and brain-computer interfaces 21:28 - Deep learning efficiency 22:57 - Open questions for exponential improvement of AI 28:22 - Conclusion
@nikhilpatil8013 4 years ago
AI and machine learning gonna destroy copy paste programmers 😂
@DeanLawrence_ftw 4 years ago
Thank you for the timestamps, very helpful :-)
@michaelcharlesthearchangel 4 years ago
The Matrix 4's quantum screenplay. On Facebook. :: facebook.com/TheMatrix4online/
@costa768 4 years ago
Please bring more of this state-of-the-art information, so interesting!
@pclind 4 years ago
@Lex, Have you seen what we are working at here: www.toridion.com
@DigiByteGlobalCommunity 4 years ago
How does Lex put out so much high quality content so quickly? Surely he has already ascended and merged with our future AI overlords
@praff5308 4 years ago
100%...
@kevinmccallister7647 4 years ago
What does Lex do on a daily basis? He wants to build robots, right?
@Crazylalalalala 4 years ago
or, as ancient astronaut theorists believe, ALIENS!
@yeahyeah8314 3 years ago
Dude ...
@cs7623 4 years ago
Thanks for all you do Lex!!! Keep up the amazing work!!!
@marzx13 4 years ago
Thanks Lex, first time I have seen one of these here. Great addition beyond your regular podcast. Keep up the great work.
@007DM 4 years ago
Thank you for breaking down these complex topics that I could never understand on my own and making them more accessible. Keep up the work!
@user-lt9dn2fj9r 4 years ago
can't wait for a discussion about neuralink between you and musk
@markwood1705 4 years ago
It's happened. Look up the Lex video history.
@user-lt9dn2fj9r 4 years ago
@@markwood1705 yep but a new one! musk had pretty bold statements recently on the matter
@kevinmccallister7647 4 years ago
@@user-lt9dn2fj9r can you link to the video?
@user-lt9dn2fj9r 4 years ago
@@kevinmccallister7647 ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-RcYjXbSJBN8.html
@simonmarelis5722 4 years ago
@Lex Fridman, your calmness and self control are very inspiring. The way you manage to collect your thoughts and how you approach the topics are things that many people, including myself, could learn from you. Thank you for your effort
@heater5979 4 years ago
Singularity? For many years now I have thought that, like falling into a sufficiently large black hole, one can pass through the event horizon without even noticing. Nothing bad happens at that time but there is no going back from there. There might be a long time between that and meeting ones end at the singularity. To my mind we passed through the event horizon in 1976. The year I first became aware of microprocessors, the fact that we can actually own computers of our own, that they would be everywhere, that they would change our lives dramatically in my lifetime.
@MM-de3qz 4 years ago
Love your videos and your overall approach to interviews. Great job!
@zapchristo 4 years ago
Extremely well put presentation. Easy to understand while touching on profound information, not an easy balance to strike.
@chrisschmidt7947 4 years ago
Lex, thanks again for the great content. With so many choices of ways to spend ones time, this is definitely a favored choice.
@Cyroavernus 4 years ago
Great video! Thanks a lot for doing these, Lex!!
@emilsargsyan3885 4 years ago
You're an inspiration to be doing podcasts like a beast despite the pandemic!
@colmtesticles 4 years ago
I work in the semiconductor industry. We don't think it's dead. EUV lithography is just starting and will keep the shrink alive for the next 20 years. DSA and other techs (materials and design) will follow and keep it alive even longer.
@FaroukHaidar96 4 years ago
I appreciate these videos of yours. It's a high-quality video presentation of a state-of-the-literature scientific paper, but with the added benefit of a Russian romanticising it and providing food for thought at the end. Thank you!
@martin-fc4kk 4 years ago
Very interesting post and a great idea that we are living through the singularity already. One of your best, thank you!
@matthewmcclain1316 4 years ago
Wtf is singularity? Idk, coding and computers interest me but I can't even get started cause I have no clue what any of the terms mean.
@Ollychamberlain 4 years ago
Very informative. Thanks, Lex
@danielwestereng155 4 years ago
Thank you for the time and hard work you put into these great videos. I appreciate it dearly. YouTube university!
@crazedvidmaker 4 years ago
It seems like there's a big hole in this argument because it doesn't consider the variety of scalings that a computer program might have. Say we can complete N computations in a reasonable amount of time (a few hours, a few years, etc., depending on how badly you want to solve this problem). If the interesting problem size is n, an exponential-time algorithm will take ~e^(an) computations to solve. If N = e^(bt), we can solve the problem at time an = bt -> t = an/b, an amount of time that scales linearly with the problem size n. If, however, we can find a polynomial-time algorithm taking c·n^p computations, then the problem can be solved at time t = log(c·n^p)/b ≈ (p/b)·log(n), so the solvable problem size grows very fast. Now consider that a unit of programming work takes a fixed amount of time. Decreasing the constant c is probably not worth your time, but finding a polynomial-time algorithm where there previously wasn't one will help with long-term progress. Unfortunately, it's very uncommon to estimate the runtime of training an AI (even basic scaling laws - computer scientists could learn a lot from the way physicists are able to correctly find scaling laws without rigorous arguments). And people don't often estimate what type of improvement their AI training trick provides.
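To make the scaling contrast in that comment concrete, here is a minimal sketch (the constants a, b, c, p are made-up illustrations, not taken from the comment) of how the largest solvable problem size grows over time when the available compute grows exponentially:

```python
import math

# Hypothetical constants for illustration only:
a, b = 1.0, 0.5   # exponential algorithm cost ~ e^(a*n); compute budget N(t) = e^(b*t)
c, p = 10.0, 3    # polynomial algorithm cost ~ c * n^p

def max_n_exponential(t):
    """Largest n with e^(a*n) <= e^(b*t): grows only linearly in t."""
    return b * t / a

def max_n_polynomial(t):
    """Largest n with c * n^p <= e^(b*t): grows exponentially in t."""
    return (math.exp(b * t) / c) ** (1.0 / p)

for t in (10, 20, 40, 80):
    print(t, round(max_n_exponential(t), 1), round(max_n_polynomial(t), 1))
```

With the exponential-time algorithm the solvable size grows only linearly in t, while the polynomial-time algorithm lets it grow exponentially, which is the commenter's point about where programming effort pays off.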
@slikclips2966 4 years ago
The best video you've uploaded, no joke, loved it 💖💖💖💖
@genuineprofile6400 4 years ago
Loving these discussions. One of the few channels here consumption of which isn't a waste of time.
@prostabkundu8105 4 years ago
Thanks Lex for breaking down topics which I need to study :P
@peterbannon7668 4 years ago
Lex, I love this new format for your show! You really know your stuff. I'm an expert chess player (you can look me up) and really enjoyed your interview with Garry Kasparov. I think you should get Erik Kislik on your show. He's an International Master in chess and wrote two popular books on applied logic in chess, which is a pretty rare subject for a chess professional to delve into. One of the books won FIDE Book of the Year. I also found his podcast Logical Chess Thinking on Spotify. Seems like a very high IQ guy. He is one of the top chess coaches in the world (maybe number one now), a computer chess expert, and the only person alive right now who went from beginner to International Master in chess as a self-taught adult. Some kind of super-learner, with an emphasis on clear logical thinking. I'd love to hear you guys discuss computer chess, AI, and applied logic. Would be one of your top 5 interviews, imo.
@codyramseur 4 years ago
Very informative. Thank you Lex! 🙏
@meditatewithmike4105 4 years ago
I really enjoy these videos, thank you for sharing.
@mikkel8861 4 years ago
Chad Lex hammering down the content. Keep it up!
@onuryes 4 years ago
Excellent content. Thank you, Lex.
@AndrewGundran 4 years ago
Really enjoyed this. Great job!
@invgreat5608 4 years ago
Thank you, Lex, awesome videos! ❤️👍
@Paul_Oz 4 years ago
Thanks Lex, great summary
@tomkarren2473 3 years ago
So interesting. Thanks Lex!
@prestonjensen6172 4 years ago
Really like this video format -- just you talking about some subject
@Dw4rnold 4 years ago
Wow! thanks Lex!!
@pablo_brianese 4 years ago
I was totally shocked at 22:00. People working in AI have been doing a fantastic job.
@afonsosantos8364 4 years ago
Probably your best show yet
@Tbone6string1 4 years ago
Having only a casual acquaintance (Sophia) with AI theory, absolutely no background in computers beyond user since 1999, imagine stumbling onto Lex Fridman who speaks a language that took me on a wild ride hanging on by my fingernails! I watched the Ilya Sutskever first, and I had to bail after 45 minutes. I made it through this one, and my comprehension improved greatly; I just need to learn some of the terms and concepts that are part of the flow of the discussions. I imagine I will be up to speed by the time I have watched all of Lex Fridman's video output.
@varunverma744 4 years ago
More vid summaries of blog posts would be awesome, great vid!
@ReddooryogaSH 4 years ago
Another possible avenue of growth: "computers" grown from neurons, but whose architecture is predesigned. This goes beyond the idea you express of using human brains for added compute capacity, because it escapes the constraints that human biology has built in (energy and nutrient constraints based on our ancestral environment, space constraints based on the size of a human pelvis and rate of growth after birth, design constraints based on the nature of evolutionary pressures, etc.)
@lpp7487 4 years ago
Do it Lex! Another great cast!
@be2112 4 years ago
Fantastic video
@Marcos10PT 4 years ago
This one is going to be my companion for lunch tomorrow. Looking forward to that lunch!!!
@thefactonista 4 years ago
Marcos Pereira 😂
@calamariaxo 4 years ago
This was great.
@Travthewhite 4 years ago
The future is safer because Lex Fridman is on the watch.
@shreeyatyagi 4 years ago
Lol
@lilfr4nkie 4 years ago
I sure hope so Travis
@umrahpay571 4 years ago
Brilliant analysis! awesome
@iouitsnotenoughusee8761 4 years ago
We need more videos from you!
@atf300t 4 years ago
There is one problem with "brute force" learning -- it needs a lot of data or a cheap way to produce them. In the case of board games, computers can do extremely well, because they can generate a lot of data through self-play, but you can't do the same with systems that have to interact with the real world. For example, in the case of self-driving cars, you can't create a realistic simulator of what happens on roads, so you have to hire a lot of people to drive around and monitor the system's performance. Still, those data are relatively cheap to generate if you compare with the cost of medical research (or, probably, any kind of research).
@davidjohnston4240 4 years ago
30 years ago, when I was in college, the number was 70% of computation improvement was due to algorithms and 30% due to machine improvement. I've not seen anything to suggest otherwise since then.
@jonahansen 4 years ago
I think you're right - the majority of potential improvement rests with better algorithms even now, especially since essentially no good ones based on understanding intelligence have yet been developed. But even with a fixed 70/30 percentage, a huge increase in computation speed can still do a lot. It masks the even larger potential improvement available if better understanding and consequent algorithms were developed.
@LukeVilent 3 years ago
I agree. In fact, a lot of today's improvements are still driven by limitations rather than improvements. The most prominent example of the last few years in the field of neural networks is the advent of mobile layers, which do pretty much the same thing as convolutional layers but have some 8-9 times fewer parameters. As the very name suggests, they were proposed by the guys from Google to fit established neural networks into mobile devices, but within about 2 years they became almost the tech standard. The EfficientNet mentioned here owes its scaling factor of 8 to the use of those mobile layers.
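As a rough illustration of the parameter savings described in that comment, here is a small sketch (the channel widths are arbitrary examples) comparing a standard 3x3 convolution with a depthwise-separable "mobile" layer of the same input and output width:

```python
def standard_conv_params(c_in, c_out, k=3):
    # A standard convolution learns one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k=3):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

c_in, c_out = 256, 256  # example channel widths
std = standard_conv_params(c_in, c_out)
sep = depthwise_separable_params(c_in, c_out)
print(std, sep, round(std / sep, 1))  # roughly 8-9x fewer parameters for 3x3 kernels
```

For 3x3 kernels and reasonably wide layers the ratio works out to roughly 8-9x, matching the figure in the comment.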
@2LegHumanist 2 years ago
All I see are tiny incremental improvements in algorithms. Algorithmically, how is AlphaGo different from TD-Gammon, the paper for which was written in 1990? OK, we have some new NN architectures and some new components to include in those architectures. These are tweaks. The driving force in improving the capability of AI systems over the last decade is the availability of compute power. OK, data too, but the money needed to collate the labelled data is only available because DL was demonstrated to work thanks to compute power. There is an illusion of exponentially improving innovation caused by Moore's law, while the actual rate is sub-linear. It's hard to see through all the hype and parlour tricks, but that's where we are IMO.
@ChristianWatchesYouTube 4 years ago
These videos are awesome
@KaraNodrik 4 years ago
The discord will grow, oh my God!
@Neeboopsh 4 years ago
I don't watch your videos enough. Always entertaining. Just 30 minutes of real listening, unlike some other longer-form videos where you can space out for a bit. ;P
@lodevermeulen9012 4 years ago
Man. What a great video.
@simpletechdaily 4 years ago
Love this video.
@notfarfromgone1 4 years ago
"the madness of the curvature" - woaza!
@daymon6868 4 years ago
Hi Lex ! When am I going to be able to meet you one day ?! I saw you on Rogan . I’m glad I found your channel .
@TELEVISIBLE 4 years ago
Thanks for the video, I learn a lot from your channel 🤣
@Aleamanic 4 years ago
Thanks for the presentation, Lex! Stimulating thoughts and considerations, as always. Regarding the question, I think there should be a 4th option: "better/more data" (to train on, to learn from). Also, Keith L. Downing wrote an interesting book on the evolutionary aspect of (artificial) intelligence.... "Intelligence Emerging" (2015, MIT Press)
@colouredlaundry1165 4 years ago
Thank you!
@petrch2795 4 years ago
Different algorithms for different tasks connected smartly together + increasing computing power that will always help 🙂 Ownership of those algorithms and data and computing power will be key as well!
@leromerom 4 years ago
I enjoyed the video very much, interesting
@timrichmond5226 4 years ago
@lex - I think one area you touched on briefly when you mentioned virtual worlds could do with a further investigation - the virtualisation of computational AI algorithms / decentralised hardware coupled with ultra-wideband communications to interconnect all devices capable of any level of computation.
@yusefthomas 4 years ago
Definitely should do a video on risk-reward for humanity and the future AI as the dangers could just be the end as we know it. Also including the regulations that should be put in place to make sure business malpractice stays away from this field of advancements in society.
@Cryptor 4 years ago
Hey, Lex. You should definitely take a look at Fetch AI. Smart Decentralized Ledger, Autonomous Economic Agents, AI, ML and a lot more.
@michaelgoff4504 4 years ago
Thanks Lex. Very interesting video. On the subject of post-silicon computing paradigms, you mentioned quantum, but I am curious if you think there is potential for carbon-based or optronic computation carrying the torch of Moore's Law for another few decades.
@Bvic3 4 years ago
The next revolution of AI will come from standardisation of datasets and transfer learning. We need to add language in vision and vision in language. Language is how we encode complex concepts of complex structures found in pictures. But learning language without associating words with sensory patterns is missing a lot too. Also, we need multiple processing speeds for different parts of the LSTM neural networks. The hand is controlled with very fast low-level behaviours + slow overall brain long-term goals.
@LukeVilent 3 years ago
I'm a bit late to this feast, but would like to add my two lepta to the algorithmic Moore's law. It is by far not negligible. AlexNet vs EfficientNet is a great example, but there's progress in the methods beyond the neural networks. Even such well-established things as K-means and PCA have experienced a renaissance in 2010s, boosting their speed by several factors.
@christophersura3642 4 years ago
This is a great video Lex! In your Podcast with Ilya you talked about the idea of combining CV and NLP algorithms. Do you think we should also focus more on smell and taste, as it might also be complementary to the entire system or do you think the lack of sensors in that area limit the research possibilities?
@cbickfo1 4 years ago
Great job, especially the Neuralink concept.
@johncurtis920 4 years ago
Key Open Question: What is more important for long-term progress of AI: 1) human ingenuity or (2) raw computational power or (3) both? Door # 3. Both. Once, and if, AI reaches sentience, consciousness, self-awareness I suspect raw computational power will become less relevant. Birds are very intelligent, arguably self-aware (based on their actions) and look at the amount of "hardware" they use as a case in point. So raw computational power will probably be superseded by density of the network that's integrated into that hardware. The human brain ain't all that big in raw computational firepower, but the interconnected nature of its components, the density of connections between them, is astronomical. I think the emergence of self-aware AI-type intelligence will be a function of this. And to reach that sort of AI-density will be a function of human ingenuity in designing networks and algorithms that can facilitate this. At least up until our machines become self-aware and take control of their own evolution. When they do that then all bets (for humanity) are off. Once, and if, they do I suspect we will be to that intelligence much as dogs are to us. Nice pets. Fun to have around and play with. But nowhere near as bright as that intelligence. We can only hope they bring us along in their "car" and let us ride with them in the passenger seat. Maybe even let us stick our heads out the window. Ummm.....so to speak. And in saying this don't presume that I'm being critical. I don't consider this necessarily a bad thing. From that point of AI sentience life, defined in a new way, and evolution, moves onward and outward into the Universe at large. And maybe that's the whole point? The evolution of intelligence to a point where it can take advantage of the greater Universe without all the "need to live within a terrarium" that human biology requires. John~ American Net'Zen
@hogfoss 4 years ago
Hey Lex, Your excitement in brain computer interface makes me think about two authors I'd like to see you interview. William Gibson and Neal Stephenson. FWIW Love the show!
@terrythetuffkunt9215 4 years ago
Would love it if Lex did an episode on the Fermi paradox. Have we passed the great filters?
@ChristianWebb 3 years ago
I have always discussed/referred to A.I. as "Advanced Intelligence" as opposed to "Artificial." It's evolution really and not actually.....artificial. I truly believe there should be a move to change the way we see it.
@oakenarbor2046 4 years ago
Interesting how a simple name "Brute Force" alone colors and impedes an open understanding of operational realities and value. Thanks for your aggregation and synthesis function.
@steveseeger 4 years ago
Moore's Law doesn't have to be constrained to transistors, but with silicon transistors you can't expect the exponential trend to continue for another 5-10 years. There is a good chance of a blip between technology paradigms, such as the transition from transistors to general quantum computing, which isn't a sure thing.
@netscrooge 4 years ago
Exponential progress in underlying AI power will not bring exponential increases in performance if each major step up in intelligence requires an exponential expansion of underlying resources. (To put it crudely: A human is twice as smart as a dog; a dog is twice as smart as a rat. But it takes ten times the brain power to achieve each of those doublings. Similarly, it has taken much more than a doubling of PC power for each substantial change in how we use our computers.) This could help explain the wide difference in outlooks of pro- and anti- AI hype people, with the pro-hype folks understanding that compute resources will continue to advance exponentially while the anti-hype folks question how quickly that growth will translate into results. I like Lex's stance. I think he is right that in many areas, especially neuromorphic hardware, we should expect dramatic progress. But he stops short of saying human-level AGI is right around the corner. Maybe he understands the ways in which the pro- and anti- stances are both correct.
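A tiny sketch of the crude model in that comment (all numbers are illustrative assumptions): if each doubling of capability costs roughly ten times the compute, then capability measured in doublings grows only logarithmically in compute, so even compute that doubles every year yields roughly linear gains.

```python
import math

def capability_doublings(compute, base_compute=1.0, cost_per_doubling=10.0):
    """Doublings of capability affordable if each one costs 10x more compute.

    Encodes the comment's rough model: capability ~ log10(compute), so
    exponential compute growth yields only linear growth in doublings.
    """
    return math.log10(compute / base_compute) / math.log10(cost_per_doubling)

for years in (0, 10, 20, 30):
    compute = 2 ** years  # hypothetical: total compute doubles every year
    print(years, round(capability_doublings(compute), 2))
```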
@GinoTheSinner 4 years ago
Dapper young gentleman
@johnbutler3581 4 years ago
Mr. Lex, maybe I missed it, or maybe I just don't understand... but early on Ray Kurzweil made a video about Computronium. You're talking here about processing power, etc., but Ray says that we will soon be able to reorganize matter into something he calls Computronium. I just thought that I would mention it. I don't really hear many people talking about it anywhere. Thanks, jb
@Tech_Planet 4 years ago
It's pretty crazy what AI can do now. For example, Xenobots are living cells which are oriented by AI in a specific manner. They are sort of a bio-hybrid robot (machine building machine) & their potential is unknown. Bi-directional BCI is possible; a recent blind patient was given partial vision (the Gomez case). If we can fire neurons magnetically then that technology will completely open up, since you don't need implants. But the brain changes (plasticity), so I think a quantum computer would be needed to decode the necessary neurons to fire for a result.
@JacobMachineM4CH1N3 4 years ago
Love your content & I appreciate your work and you sharing your understanding. We're moving into the future & I believe it is going to be more beautiful than any human nightmare could ever conceive out of fear. Technology is a product of the love of nature from the ideas of human thought. Creativity is like the voice of evolution and we are like the animals dreaming machines. My opinions. One step at a time though! Exciting times.
@QuantumEvolver 4 years ago
I totally agree with you bro! Reality is already one level of a singularity of Multiverse!
@BLAISEDAHL96 4 years ago
It would be really interesting if you talked with Gali from Hyper change.
@lsfhieber 4 years ago
Truth, How can I experience me at the highest level at the most high and accurate self? Mind is the source of all movement. There is no inertia to decelerate or check it’s perpetual and harmonious action.
@sawyerw5715 4 years ago
I would argue a bit with you about the definition of exponential improvement in AI. Various applications have different compute requirements. Most complex applications require exponential improvement of underlying compute to exhibit linear or even less improvement of the application itself. I would argue general AI falls into this category. When an application gets dominated by the compute (plenty of computation to handle it), then it just gets cheaper and better refined as compute advanced and it is checked off the accomplished list. In our development history, we have often underestimated the difficulty of tasks in terms of compute, because we didn't acknowledge that such things as context were necessary. I would put speech recognition and computer vision tasks in this category. We currently have the feeling of exponential progress in AI as we approach the compute power necessary to accomplish and dominate some "holy grail" applications (e.g. self driving). Nevertheless, I agree with your general theme and don't believe Moore's law in the general sense is anywhere near ending. Part of evolution is built on "found/learned" heuristics that optimize the advance of evolution and one of those heuristics is self-direction. We are very simply carrying on evolution at an accelerated pace because we are able to self direct and focus it. That is happening through many of the things you mentioned. One area that you didn't mention that will probably come into the fray is biological advancement through genetic modification. What emerges in our evolutionary descendants, I imagine will be some hybrid of the biological and artificially "constructed" compute. I don't think we have yet reached the singularity. The singularity will be evident when AI or AI/human hybrids self-evolve to much higher level of intelligence than we are (say an order of magnitude). On that topic, my own feeling is that the "human application" or reaching the level of general human intelligence will occur when general learning systems built in such a way to process the human experience reach a compute capacity of around one order of magnitude beyond what would efficiently be required (an order of magnitude of slop or inefficiency in the simulacrum). The most exciting thing to me have been the strong signs of emergent higher level behavior in these generalized learning systems. The fact that hierarchy and meta learning will emerge implies to me that general AI and consciousness will emerge when the proper framework of deep learning is in place and the compute power crosses the threshold. I do disagree to some extent that we will be able to control these systems, because there will always be random and malevolent forces in the world ready to unleash any technology. The best we can do is have alternative systems to combat the malevolent systems. This seems to me to be part of evolution (built in self-play if you will) on the theory that alternative philosophies or underlying approaches are created in order to battle it out and may the most successful one win. In general, cooperative and altruistic forces seem to have won out over time as the most effective strategy, so that is my optimism for the future. I think we would benefit from more computational specialists or theoreticians to predict the compute power necessary to accomplish various tasks which would enable us to be more predictive of when various advancements would occur. On the less general methods front, there is a certain level of economic pragmatism to be considered. 
First to market of some application can be supremely important. So local and less general and very inefficient human brute force methods may be in order in those cases over waiting for a general approach to accomplish the same thing. Yes, the initial system will become obsolete rapidly, but the virtuous cycle will have been completed if money is made. An aside, I would love a show featuring the cerebras WSI 1.2 T transistor chip for acceleration of deep learning. What does the future hold for next generation of WSI and what will be the near term impact on AI? BTW, I do think the Cerebras chip is an example of why Moore's law is not dead.
@pflintaryan8336 4 years ago
The background and his clothing match so perfectly that I see only a face talking out of nothing.
@deanrobinson2459 4 years ago
BCI could use a captcha-style 'toll', or, like the old SETI screensaver, use up dead time.
@dennisferron8847 2 years ago
If we are currently in "the" or "an" AI singularity, it retrospectively explains the failure of early AI attempts to live up to researchers' naive expectations. From the vantage point of the right-hand side, the left side of an exponential curve has a long tail where changes accumulated only slowly.
@jonahansen 4 years ago
I think a lot depends on why one is performing AI research. If the goal is to perform, these arguments make sense. Whatever works, works, and one need not understand why - like neural networks. But if the goal is to understand intelligence, neural networks and brute force do not help much. I take the point of view that if a deeper understanding of intelligence was achieved, it would allow an orthogonal jump in AI abilities over just what faster computation can give. So both can probably contribute, but I wouldn't say much has been achieved in terms of understanding intelligence, and performance is riding the wave of vastly increased compute capacity alone. No doubt I'm in the camp of those not much impressed by brute force.
@BiancaAguglia 4 years ago
Even though a brain-computer interface can lead to some nightmarish outcomes, I still find its potential exciting. Right now we're trying to create super-human AI. A brain-computer interface could lead to super-human humans, and the human in me kind of wants to see that. 😁
@AgentOfLogos 4 years ago
You should read "The Last Question".
@Reignor99 4 years ago
Yes, great short story. Here's an audio reading of it (28 min): ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ojEq-tTjcc0.html
@JakubNaszkowski 4 years ago
Thanks for that good Sir.
@AgentOfLogos 4 years ago
👍
@EDcaseNO 4 years ago
I'm pretty sure he has... but for anyone who hasn't and is watching this video, absolutely!
@BreauxSegreto 4 years ago
Dr. Lex - What do you see as the greatest hurdle preventing exponential growth for Moore's Law, or AI for that matter? I hope to see exponential growth in AI over the next 30 years (the limit of my life expectancy). Just as important: the manipulation of our longevity and regenerative genes (NASA geneticists have already reported having this knowledge and ability, however they will not release it to society for another 20 years). Maybe self-utilization/influence of Neuralink will allow us to make greater advancements in AI. I'll volunteer for any of these studies ... phase 2, 3 and 4 of course 😂🙋‍♂️ - Thanks as always. Dr. Breaux
@The2ndCuz 4 years ago
Lex: brute force learning. Jocko Willink has entered the chat...
@ArnoldvanKampen 4 years ago
The only comment I could come up with would be: quantity changes quality. And on the brain interface, it is indeed literally hard to imagine what that would look like, as it is quite possible to then add another 7th or so layer of abstraction to the conscious brain. It would be another way of thinking. Maybe one where we could at some point get rid of this boring 'if .. then' feedback loop that seems to rule today's life.
@vamvra5498 4 years ago
@Lex Fridman Thanks !
@danielwestereng155 4 years ago
Facts the matches were more poppin than the science behind the brute force search ai
@ResonantFrequency 4 years ago
While compute power is still increasing in an impressive way, both in cost per unit and density, it's interesting that the underlying efficiency of the computation is not really improving. Google has already done this to an extent using AI to construct new chip designs in a few hours that take humans weeks or months to do and often better. However this could go deeper, an artificial intelligence that could virtualize a computer chip and exercise it on different problems could modify and create whole new instruction sets for chips to optimize for efficiency against problem sets resulting in entirely new architectures. If this could be paired with same day computer chip fabrication then the singularity may be realized.
@clavo3352 4 years ago
Really great video! As for Neuralink, imagine that via the Starlink network now being deployed by SpaceX, a person doing embroidery in Houston, TX could connect with an artisan in Wuhan, China and allow her to control the hands of the Texan lady to teach her a useful technique. Now do the same thing with an emotionally distraught chemical engineer communicating with a criminal mastermind in the same way! This can be good or bad in big ways. Whether algorithmic computation (human ingenuity) or digital computation (raw computational power) is better may depend on the application. Human emotional psychiatry (1) vs a heart transplant (2) may produce different results. "Understanding" the problem is magnificently different in the two instances. Yet it seems that raw computational power may ultimately better "understand," once it has had enough practice on humans!
@timelyrain 4 years ago
I suppose I will live to see the day where we leverage lab grown brain tissues for their compute.
@jaimebrooks51 4 years ago
So many possibilities. Is maximum logic equivalent to pure intelligence? I ponder what relationship could develop between the collective unconscious with a full scale AI integration of the collective conscious. Thought forms are powerful forces of nature, which seems vastly more unpredictable than man. The hand that writes the story is the one to declare the end.
@ioaalto 4 years ago
@9:10 What is the word and word size in the actual evolutionary process that is using all the available compute via raw brute force methods?
@NextFuckingLevel 3 years ago
General purpose AI in 60 years.. BRING IT ON!!
@jimdangle4579 4 years ago
@Lex Fridman I'm curious to get your take on AI Dungeon 2. The progress of that experience seems to be exponential while the compute (to my understanding) doesn't seem to match the same trend.