
"We Really Don't Know How to Compute!" - Gerald Sussman (2011) 

Strange Loop Conference
82K subscribers
81K views

Though we have been building and programming computing machines for about 60 years and have learned a great deal about composition and abstraction, we have just begun to scratch the surface.
A mammalian neuron takes about ten milliseconds to respond to a stimulus. A driver can respond to a visual stimulus and decide on an action, such as making a turn, in a few hundred milliseconds. So the computational depth of this behavior is only a few tens of steps. We don't know how to make such a machine, and we wouldn't know how to program it.
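A back-of-the-envelope version of that depth bound, using the round numbers above (a sketch, not from the talk):

```python
# Serial depth is bounded by total response time / per-neuron latency.
neuron_ms = 10      # one neuron's response time
reaction_ms = 300   # "a few hundred milliseconds" for the driver
print(reaction_ms // neuron_ms)   # ~30 sequential steps
```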
The human genome -- the information required to build a human from a single, undifferentiated eukaryotic cell -- is about 1GB. The instructions to build a mammal are written in very dense code, and the program is extremely flexible. Only small patches to the human genome are required to build a cow or a dog rather than a human. Bigger patches result in a frog or a snake. We don't have any idea how to make a description of such a complex machine that is both dense and flexible.
New design principles and new linguistic support are needed. I will address this issue and show some ideas that can perhaps get us to the next phase of engineering design.
Gerald Sussman
Massachusetts Institute of Technology
Gerald Jay Sussman is the Panasonic Professor of Electrical Engineering at the Massachusetts Institute of Technology. He received the S.B. and the Ph.D. degrees in mathematics from MIT in 1968 and 1973, respectively. Sussman is a coauthor (with Hal Abelson and Julie Sussman) of the MIT computer science textbook "Structure and Interpretation of Computer Programs". Sussman's contributions to Artificial Intelligence include problem solving by debugging almost-right plans, propagation of constraints applied to electrical circuit analysis and synthesis, dependency-based explanation and dependency-based backtracking, and various language structures for expressing problem-solving strategies. Sussman and his former student, Guy L. Steele Jr., invented the Scheme programming language in 1975.

Science

Published: 15 Mar 2021

Comments: 71
@raphaeld9270 (1 year ago)
11:15 "Nothing brings fear to my heart more than a floating point number" - Gerald Sussman, 2011 :D
@Ceelvain (1 year ago)
If you have a few days on your hands, try reading "What Every Computer Scientist Should Know About Floating-Point Arithmetic". (Several versions are online; none of them is free of typos, so when you read something surprising, triple-check.) It enlightened me soooo much. I got a glimpse of what it takes to understand a program that uses floating point, and why it's so easy to get it wrong. It's unsettling. As the name suggests, I think every computer scientist should read it.
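A minimal taste of the surprises that paper catalogs, assuming nothing beyond stock float64 behavior:

```python
# Two classic float64 gotchas: decimal fractions that have no exact
# binary representation, and integer precision running out at 2**53.
print(0.1 + 0.2 == 0.3)                   # False: both terms are rounded
print(0.1 + 0.2)                          # 0.30000000000000004
print(float(2**53) + 1 == float(2**53))   # True: the +1 is lost to rounding
```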
@totheknee (1 year ago)
What happens when people like Sussman are gone? How do we get this knowledge back? He knows all the stuff from the 60s and 70s, and I graduated with a physics degree knowing about 20% of what he spent his life figuring out.
@pleggli (1 year ago)
Having had a period of my life where I had a huge interest in programming languages in general, the history of programming languages, and programming language design, I would say that we do have a lot of knowledge recorded: in papers, in books, and in the actual programming languages that exist and have existed.
@LinusBerglund (1 year ago)
Isn't the main thing that all of them had a firm understanding of math or CS? There are people doing really friggin cool things today, and in 15 years we will see their ideas in the languages we use. Like delimited continuations. Which languages have them? OCaml, some Schemes, GHC Haskell? Scala has the weird shift/reset primitives, but that counts. Delimited continuations are over 30 years old, yet they are obviously a primitive everyone should have. Anyway, people will find ways to express complex things, and then abstractions will allow us mortals to use those cool things, adapted to certain domains, to simplify our programming lives. But I might be too optimistic.
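For anyone who hasn't met delimited continuations: in Scheme, (reset (+ 1 (shift k (k (k 2))))) evaluates to 4. A hand-unrolled sketch of why, in Python (which has no shift/reset of its own):

```python
# reset delimits a context; shift reifies that context as a function k.
# Here the delimited context surrounding the shift is "add 1".
k = lambda v: 1 + v   # the captured continuation, written out by hand
print(k(k(2)))        # 4: shift's body applies the captured context twice
```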
@NickRoman (1 year ago)
I have thought about that a number of times. There is perspective we will lose when all of the people who were there at the beginning are gone. The Windows operating system used to have a better user interface than it does now, and I and people my age still remember that. Knowing that it was better in the past, there's a chance people will come to their senses and bring it back, or improve on it in a way that recognizes it. If that doesn't happen soon, that understanding might be gone forever. Or, two hundred years from now, someone will invent something that was common ten years ago and think it is a modern marvel, when really people were just so distracted by other things that they forgot about it, like how you can hold the middle mouse button and scroll by dragging, to give a concrete example.
@jamesschinner5388 (1 year ago)
Joe Armstrong, for example.
@artemhevorhian1785 (1 year ago)
We build on the shoulders of Giants.
@psybertao (1 year ago)
Having glanced at functional programming and Lisp, I had never found a reason or motivation to invest time in learning them. Until now!
@immabreakaleg (3 years ago)
40:34 fits well within a "strange loop" conference. twist ourselves we do and must indeed
@ssw4m (1 year ago)
Wow. This is a phenomenal virtuoso performance, showing extremely expressive programming systems that most of us can only dream about, as we struggle with horrific legacy code in our day jobs!
@lordlucan529 (1 year ago)
Indeed. Sadly it went right over the head of half of the commenters here.
@monchytales6857 (1 year ago)
*spends three weeks trying to push a single 5-LOC change to production, dealing with paperwork and management* *comes home and writes a 2,400-LOC 6502 emulator in one day to relax*
@ecosta (1 year ago)
It's amazing how math can be described using any language in any medium and everything still works as expected.
@jonseltzer321 (3 years ago)
Should be required viewing for any software company.
@InfiniteQuest86 (1 year ago)
What's really amazing about this is that he's effectively describing a sheaf-theoretic approach before that was even popular.
@explicitlynotboundby (1 year ago)
Please say more about this!
@LowestofheDead (1 year ago)
The Wikipedia article is only written for people who already know what sheaves are... could anyone explain?
@firefly618 (1 year ago)
@LowestofheDead This and the other lectures in the same series are not perfect, but they explain a lot: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-90MbHphnPUU.html I can see what they meant by "sheaf-theoretic approach," because the set of constraints of a given problem forms a topological space (you can take intersections and unions of constraints, basically equivalent to AND and OR in logic), and the degrees of knowledge you have about your problem (the intervals of approximation in the video) maybe form a sheaf over that space. Or in any case they have interesting algebraic properties that can be exploited.
@5pp000 (1 year ago)
Second time watching this. Great talk! I disagree with the title, though. It's not _computing_ we're bad at; it's _reasoning_. Computing is planned; reasoning is unplanned.
@BryonLape (1 year ago)
Considering his "memory is free" comments, it is interesting that my current job is optimizing a code base to reduce the client program's footprint, use less memory, reduce network packet size and latency, and use multithreading to reduce user wait time for computations.
@hijarian (1 year ago)
He made a remark that, surely, there are applications which still require peak performance. A bit later he describes a more specific goal: reach low latency, no matter the means. If your app stays under the 100 ms bar of human perception all the time, for all users, you don't really need any further performance increases. That's the idea.
@drewwiens9506 (1 year ago)
Even so, I don't think you are doing it to nearly the extent that they used to in the 60s, 70s, and 80s, which was his point. As just one example, Float64 used to be thought of as extremely expensive because it used 8 entire bytes to represent one value (the horror!), and now in JavaScript you can't even represent numbers any other way; every number in JavaScript is a Float64, because who cares about saving a few bytes per value.
@Pedritox0953 (1 year ago)
Great lecture!
@higienes.a.8538 (1 year ago)
Really nice and helpful... Thanks!
@AI-xi4jk (3 years ago)
Great words on not being religious about a particular paradigm, starting at 39:00.
@alexm4161 (1 day ago)
I wish this video had citations. Does anyone know what paper he alluded to at 43:05, re: cells merging information monotonically? By Radul?
@rfvtgbzhn (8 months ago)
1:32 I think the main difference between a genome and a computer program is that the genome doesn't really determine everything a cell can do; a lot of it only works through interaction between what the cell is "told" by its genome and the environment. You can see evidence for this in fingerprints, for example, which differ even between identical twins. Computer programs can also have emergent complexity, but only through data interacting with other data, like in a cellular automaton. Cells also have physical and chemical interactions with the environment outside of the body.
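A minimal example of that "data interacting with data" kind of emergence, an elementary cellular automaton (Rule 110), offered as a sketch:

```python
# Each cell's next state depends only on its three-cell neighborhood,
# yet a single live cell grows complex global structure.
rule = 110
row = [0] * 30 + [1] + [0] * 30
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    padded = [0] + row + [0]
    row = [(rule >> (4 * padded[i] + 2 * padded[i + 1] + padded[i + 2])) & 1
           for i in range(len(row))]
```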
@nenharma82 (7 months ago)
Yes, a genome doesn’t do any computing at all. It’s a descriptive language.
@totheknee (1 year ago)
24:31 - It's funny, he's a futurist by looking 100 million years into the past.
@Ceelvain (1 year ago)
This talk is actually less about programming than it is about automated deduction systems. Which is exactly what I'm interested in, because it's what modern machine learning is exceedingly bad at. ML works solely by approximation. Not that it's bad to get an answer of 99 meters instead of 100 m for the height of a building. But it's very bad for the system to be able to mix a bit of the current weather into the barometer reading. ML models are (usually) continuous in *every* dimension; most don't allow for discontinuity, which is unfortunately the basis of symbolic reasoning.
@Golnarth (1 year ago)
This man is literally describing the theoretical basis for modern Explainable AI, so I'm not sure what you're referring to.
@EzequielBirman77 (3 years ago)
Is this the same version served by InfoQ or is there some remastering/cleaning process in the mix?
@StrangeLoopConf (3 years ago)
It’s a re-edit
@NostraDavid2 (1 year ago)
Anyone got an article for that 3-elbowed salamander?
@atikzimmerman (1 year ago)
No, but the work of Michael Levin on regeneration seems relevant here.
@random-characters4162 (1 year ago)
@atikzimmerman Wow, cool! Thanks for the reference. It is so good sometimes to open the comment section.
@holykoolala (1 year ago)
Love this talk 🎉 Saw this computer news today and it reminded me of it (24:49): "What has to happen for mixing and matching different companies' chiplets into the same package to become a reality? Naffziger: First of all, we need an industry standard on the interface. UCIe, a chiplet interconnect standard introduced in 2022, is an important first step. I think we'll see a gradual move towards this model because it really is going to be essential to deliver the next level of performance per watt and performance per dollar. Then, you will be able to put together a system-on-chip that is market or customer specific."
@petevenuti7355 (1 year ago)
I just so want to ask him how he would program a computer to calculate irreducible ternary operations... I bet he would even have an answer!
@overlisted (1 year ago)
"But in the future it's gonna be the case that computers are so cheap and so easy to make that you can have them in the size of a grain of sand, complete with a megabyte of RAM. You're gonna buy them by the bushel. You could pour them into your concrete-and you buy your concrete by the megaFLOP-and then you have a wall that's smart. So long as you can just get some power to them, and they can do something, that's gonna happen."
@BryonLape (1 year ago)
Everyone remembers Dijkstra. Few remember Mills.
@sergesolkatt (1 year ago)
@nolan412 (1 year ago)
🤔 drop the stopwatch or the barometer?
@Evan490BC (1 year ago)
If you drop the stopwatch how are you going to measure time? Using the barometer?
@user-tf4mg2yi6e (1 year ago)
Drop both. Simultaneously. If you hear one large bump instead of two distinct ones, then your assumptions are correct. Otherwise, back off and add air resistance to your model, or pump the air out to get a vacuum instead.
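For reference, the drop-and-time arithmetic behind the barometer gag, with a made-up stopwatch reading:

```python
import math

g = 9.81    # m/s^2; air resistance ignored, per the caveat above
t = 4.5     # hypothetical seconds from barometer release to the bump
h = 0.5 * g * t ** 2
print(round(h, 1))                      # ~99.3 m tall building
print(round(math.sqrt(2 * h / g), 2))   # inverting recovers t = 4.5 s
```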
@unduloid (1 year ago)
But ... it's more fun to compute!
@hikarihitomi7706 (1 year ago)
Given two things: A) I'm on a 12-year-old machine that can't run anything programmed in the past 5 years, and B) I had faster webpage loading times in the days of 56k modems than I have today with 4G LTE. Both of these facts tell me that SPEED AND MEMORY ARE NOT FREE! If you can't run your program on a decade-old machine, you're doing it wrong.
@lordlucan529 (1 year ago)
I think you're missing the point. He's demoing a language that would have originally run on the old computer he shows, so now he has millions of times more speed and memory available to explore different ways to solve problems, with the primary constraint on coding no longer being performance but flexibility, redundancy, reliability, etc. This isn't the same as what is going on in the web and, presumably, the Electron apps you might be referring to, which are just plain inefficient and consume all available memory and CPU for zero return. He could easily have run those Lisp programs interactively on a machine from the 1980s, and he demoed them in this video on a machine that is now over a decade old!
@ecosta (1 year ago)
They are free from a developer's perspective. One could allocate 16 GB to solve a problem that could be solved in 64 KB. The problem you are facing is that you want to consume products created by entities that don't care what speed/memory cost their consumers.
@danielsmith5626 (1 month ago)
Imagine how crazy effective these abstractions would be running on a TPU...
@schrodingerssloth438 (1 year ago)
Excellent ideas. Biological 'code' is very, very dense by the looks of it. It's around 3.1 billion base pairs for DNA, which is where he got his gigabyte. But ACGT is base 4, so it's more like 9,000,000 terabytes of binary code? That is very complex code for growing elbows and everything else in the expected places. You can maybe cut that down by ignoring some non-coding DNA, but you can also consider those sections like compiler tuning parameters affecting the transcription rates of neighboring coding sequences and the folding stickiness of the chromosomes. ...time to practice some Lisp.
@schrodingerssloth438 (1 year ago)
It's pretty fun thinking about biology like a computer. It contains its own compiler code to make ribosomes. It converts DNA to RNA like source code to an AST in a compiler. Each cell is its own processor, with state memory from chemical signals and damage. Proteins are small programs vying for processing time in a crowded cell before they are killed. Each cell floods the system with easily lost messages in the form of chemical signals. A ton of parallel I/O processing of all of those signals, over noisy networks. Trashing a whole processor if it gets a virus, before it can send too many bad virus packets out to the system... Not sure it's a useful model to work from, though. Self-destructing Pi Zeros whenever they detect an errant process would be pricey.
@wumi2419 (1 year ago)
You've made a mistake converting sizes between bases: it's ×2, not ^2 (one base-4 symbol is two base-2 symbols), so a gigabyte is about right. 3.1 billion pairs, each of which is one base-4 symbol, so two bits each -> 6.2 billion bits.
@Verrisin (1 year ago)
17:59 - YES! That's what I missed at uni. Math is so _unclear_ compared to code. They even put parts of it in sentences around the formulas. An unintelligible mess.
@rfvtgbzhn (8 months ago)
It depends on what notation you use; math can be very rigorous and clear. Even the sentences can have a clearly defined meaning.
@jakedoom8807 (1 year ago)
Nobody gonna bring up the fact that he's talking about making a Pi supercluster (around 6:00) out of 50k computers for a million bucks? My math has been known to be wrong, but which board is available at ~$20 a pop?
@ReaperUnreal (1 year ago)
The RockPi S has a quad-core processor, so you'd only need 12,500 of those, and they start at ~$10 USD. So you could get that done for well under $1M.
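Both commenters' arithmetic, side by side:

```python
print(50_000 * 20)      # 1,000,000: 50k boards at $20 each is the $1M figure
boards = 50_000 // 4    # 12,500 quad-core boards for 50k cores
print(boards * 10)      # 125,000: well under $1M at ~$10 per board
```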
@thelawgameplaywithcommenta2654
The premise of free processing and memory is wrong on its face. There is no company and no government that doesn't have processing and memory as a consideration. Has this man done anything of note in the private sector? Try thinking this way in game creation. And processing cost is always about the cost of implementation. If you had infinite resources you could just run an equation forever, but of course you would be long dead along with everyone else.
@wumi2419 (1 year ago)
Your premise falls flat for almost all modern websites. For a private developer it's cheaper to make users spend more time than it is to spend more on developers. Bigger projects can consider system requirements; most of them do not. Just throw it on the web and be gone.
@sedevacantist1 (1 year ago)
This all assumes intelligence is mechanical. It assumes programming can go beyond a simple mechanical function. What if the brain doesn't think? What if the mind operates the brain like an engineer controls a machine? I am hearing in this lecture a comparison of a neuron to a circuit or sensor, but what can an arrangement of circuits do? They just do what they do because someone designed them to do what we know how to do. I really don't know what this technology can expand into.
@ssw4m (1 year ago)
Intelligence definitely can be mechanical; we already have mechanical AI systems that exceed human capabilities at many intelligent tasks, and there is no apparent limit to it. But I feel that life is not merely mechanical, or perhaps there is some transcendence from the mechanical to the living.
@sedevacantist1 (1 year ago)
@ssw4m I guess it all depends on our definition of intelligence. By your definition, a hinge on a door is intelligent. For me, it would be the ability to solve problems. I would say a computer doesn't solve problems; it just follows a program, and the computer is unaware of what it is doing. The problem is actually solved by the programmer. Every time the door is opened, the hinge doesn't perform an act of intelligence, does it? Every time a computer runs an algorithm, it is not an intelligent act; it is no more intelligent than what a hinge does. It only did what it was programmed to do.
@solonyetski (1 year ago)
@ssw4m Many intelligent tasks like what?
@LuaanTi (1 year ago)
@sedevacantist1 Sure. But people who follow algorithms might have something to say about you telling them they're not intelligent :) You could also say that the ability to change approaches is a sign of intelligence. Which is obviously true enough, but again, humans routinely _don't_ do that. You could say that the ability to look at the same data and produce different results (i.e. creativity) is a sign of intelligence, but then again, if a program does that, you're going to complain it's buggy. Funnily enough, again, just like with humans. Understanding how neural networks work is a great insight into how our brains work. Even extremely simplistic models of neurons already display the same features we observe in animals. Look at how data is stored in neural networks and you'll see where intelligence comes from. It took a lot of evolution to make intelligence work remotely reliably; again, humans are stupid more often than not, and even when they stumble upon something truly smart, they are very likely not to notice, or to be ridiculed for it. Our standards for software are a lot higher than evolution's :) Every time you read data from a _real_ neural network, you also modify it. We specifically disable that function in our models, because it's inconvenient. What reason do you have to believe anything about this is _not_ mechanical?
@ssw4m (1 year ago)
@solonyetski There are many well-known examples: playing chess and other games, solving protein folding, finding new methods of matrix multiplication, generating coherent written content based on wide knowledge (much faster, and better, than most humans can), generating artistic images (much faster, and better, than most humans can). I think AGI is not very far away at all, and we already have all the pieces, more or less.
@SynchronizedRandomness (1 year ago)
Isn't what he describes (a database of locally consistent computational worldviews which allows global inconsistency) essentially Douglas Lenat's Cyc project? (en.wikipedia.org/wiki/Cyc)