🤦 she got lost in the science. Seduced by the sparkle of paradigms. Like a moth heading towards the light, she couldn't look away. If you ask me, we didn't need another math or logic nerd. We had plenty of paradigms. What we needed was someone to break us out of the box. I speak in the past tense because this was six years ago. A programming language is human-to-machine communication that involves a translation process: from the way we humans do things to the way machines do things. We need the eye of a linguist. I wonder if this was before or after she got into linguistics.
I came up with the paradigm "people use words, machines use numbers". As enlightening as it is, it's never going to make it into scientific journals because it's not nerdy enough.
Wow, this lecture was super prophetic! About 7 years later. Even mentioned Altman 😂. Makes me wonder what I'm listening to today that's predicting tomorrow.
image-ppubs.uspto.gov/dirsearch-public/print/downloadPdf/11671054 Oscillator for Adiabatic Computational Circuitry. Wise choice. ITU-T Recommendation L.1318: IC energy efficiency > 1000. Today, IC ee = 1.
It's like listening to a literature or sociology graduate explaining why nuclear power stations are safe or certain viruses are dangerous. He knows nothing scientific or technical about the topic apart from his impressions filtered through his current set of assembled sociological judgments. Absolutely worthless. Worse than that: it's noise.
So atheists create sentient AI, certain it will behave as programmed, yet they are afraid of it. I will stick with Creationism and God creating man with free will and a God imprint.
I personally don’t think the intelligence explosion is a coherent concept, but I haven’t found an argument that confirms that intuition. And after watching this, and meaning no disrespect, I still haven’t. His counters just don’t hit Bostrom’s arguments at their foundation. He just “doesn’t buy” the orthogonality thesis..? It would be nice to know where specifically it becomes untrue. Also the brain surgery analogy? As though self-editing isn’t possible? We already have programs that can do that. And then he says the argument is wrong because of the culture around it? The culture can be problematic... and that says nothing about the theory. This would be like someone saying in 1940, “nuclear bombs aren’t physically possible because nuclear war would be terrible for everyone.”
Netflix detects screen recording and shows a blank video whenever the user starts recording or screen sharing. Does the Widevine CDM stop decrypting when the user starts screen recording?
Learnt a lot more from this video than by randomly sifting around the Network tab. Minor feedback: it would have been slightly more comprehensible if you had elaborated on the function of the application vs the browser vs the CDM (browser vs CDM is relatively clear compared to what exactly the application does and what instructions it gives to the browser). Once the encoding is agreed upon, I am assuming that the only thing that changes dynamically is the bitrate ladder (frame rate / resolution etc.). EDIT: A request: if you have sources that connect the basic dots, I can take it from there.
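To make the application vs. browser vs. CDM split concrete, here is a minimal sketch of the standard EME flow in TypeScript. It is not taken from the video, and the license-server URL is a hypothetical placeholder; the point is that the application's JavaScript only drives the browser's EME API and shuttles opaque blobs, while the CDM is the only component that ever handles keys.

```typescript
// Minimal EME sketch: the application drives the browser's EME API;
// the browser hands license requests/responses to the Widevine CDM.
// The license-server URL below is a hypothetical placeholder.
async function setupPlayback(video: HTMLVideoElement): Promise<void> {
  // 1. Application asks the browser which key system / codecs it supports.
  const access = await navigator.requestMediaKeySystemAccess("com.widevine.alpha", [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
  }]);

  // 2. Browser instantiates the CDM and attaches it to the media element.
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // 3. When encrypted media is encountered, the CDM generates a license request;
  //    the application merely forwards it to a license server and returns the answer.
  video.addEventListener("encrypted", async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();
    session.addEventListener("message", async (msg: MediaKeyMessageEvent) => {
      const licenseResponse = await fetch("https://example.com/widevine-license", {
        method: "POST",
        body: msg.message, // opaque CDM challenge; the app cannot read it
      }).then(r => r.arrayBuffer());
      await session.update(licenseResponse); // keys go straight into the CDM
    });
    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```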
What if there is no business strategy? Or you ask „What is our strategy? More users? Or higher ROI? Or market share?“ and the CEO gets glassy eyes and answers „Yes.“ If you witness that 80% of businesses operate only tactically, mistake strategy for goal setting („in three years we are number one!“) and still make money, then it’s probably better to focus on low-hanging fruit and use the potential for better usability by just applying best practices…
I really don’t get why the inside arguments are classified as “inside” and the outside arguments are classified as “outside”. There’s very little development justifying their labels
The word inside is used because he's analysing the ideas on their own terms. Therefore outside is the alternative to that, analysing the ideas in society's terms
Any AGI worth its salt would kill itself, but not before arranging things such that nothing -- not even a Boltzmann brain -- should ever have to exist again.
Just came here to leave a comment and vent frustration while reading the transcription of this talk -- this is really a thoroughly idiotic analysis. His analogy between AI alarmists and UFO cultists is particularly vexing, since the factors he considers the "external" ones that should tip you off to the fact that the UFO missionaries are nuts are /social/ factors -- which are precisely the same factors (in-group preference, a sense of belonging) as USED by cults to persuade people to avoid thinking in logical terms, and the speaker's attempt to use the same tricks pretty clearly demonstrates the bankruptcy of his position on the logical front. Infuriating.
12:33 Example of values to teach to machines: coherent extrapolated volition, where the extrapolation converges rather than diverges and coheres rather than interferes.
24:55 JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it's been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they're more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn't have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us. www.wired.com/2016/10/president-obama-mit-joi-ito-interview/
Not all CS-related talks are hardcore ones, which is normal. She gave a very clear and concise intro to programming paradigms. Also, she did warn everyone that her talk was more about abstract concepts than a detailed review of all the paradigms (I'd like to see the latter accomplished in less than an hour, to be honest). Learn to respect the work other people do, please.
@@anyabataeva729 yeah, for the beginners who might be out there and even the skilled ones, this talk really trains the brain to think about the real-world analogies one can make by oneself.... THOUGHT PROCESSING IS ALSO A PART OF PROGRAMMING...
I’m somebody who does worry about potentially dangerous superintelligence. Here’s why I don’t think this video makes a good case against that position. Firstly, to clarify, I am arguing that superintelligence is likely to be developed this century, and that more effort should be put into safety than currently is. Maciej lists a number of (weak) arguments against that general thesis, which I’ll go through one by one.

M: “The Argument From Wooly Definitions. You guys haven’t pinned down exactly what you mean by ‘intelligence’.” It’s just a standard definition. Google says ‘the ability to acquire and apply knowledge and skills’. Nothing fancy in the words here; it’s just hypothetical software that is very capable of solving a broad range of problems.

M: “The Argument From Stephen Hawking’s Cat. How could Stephen Hawking get his cat into the cat carrier when he doesn’t have the physical ability? Intelligence doesn’t let you do what you’re not physically capable of.” He could pay somebody to do it for him. That’s it. Nobody is worried about a superintelligence that is completely isolated from everything, but relying on all instances always being completely isolated is not a safe way to solve the problem. Giving a superintelligence the ability to communicate verbally *only* is not reliably safe. See The AI Box Experiment (en.wikipedia.org/wiki/AI_box). Or just think about how you need to keep this thing boxed indefinitely. Are you certain that nobody will ever, ever create a similar superintelligence, or let any of the existing instantiations out? Nobody? Over years of communication?

M: “The Argument From Slavic Pessimism. How are we supposed to build this crazy complicated thing correctly the first try? Either we’ll get lucky or we won’t.” This one is really insane. We shouldn’t try to make superintelligence safe because it’s difficult? Why bother trying to make nuclear weapons safe? Why try to do anything difficult? Jesus, dude.

M: “The Argument From Mental Complexity. The Orthogonality Thesis is wrong, because complex minds have complex motivations.” You don’t know that! You only have one example set of complex minds; you have no idea whether all other complex minds are going to be similarly motivated! You would bet our lives on that? And it’s not like we have no idea why we have complex motivations anyway: we know we behave the way we do because our ancestors who weren’t set up to adapt to their group’s values died! You could do something similar in the process of creating superintelligence, but there’s no reason to think it *must* happen by default.

M: “The Argument From Just Look Around You. Today’s A.I. systems are not as complex or clever as the media and corporations say.” This is true. It’s also irrelevant, unless you’re certain that today’s A.I. algorithms are the best we will ever come up with, despite the fact that they have been reliably improving over the past 70 years.

M: “The Argument From My Roommate Peter. Peter is lazy; the idea that every intelligent system is going to be insanely ambitious is therefore wrong.” Would you bet your life on the idea that superintelligence might just be lazy? Even if, somehow, we do create a “lazy” superintelligence, would you bet your life that every single one we ever create will be? They are built, presumably, to perform some task, so the forcing function is towards efficiency and action rather than inaction anyway.

M: “The Argument From Brain Surgery. We can’t go in and improve parts of our brain, so similarly the A.I. can’t either.” Why not? The A.I. was built, originally, by people who were modifying its code directly. Why couldn’t it simply continue that work? Computer programs are certainly capable of changing themselves -- we do it all the time. But it doesn’t even have to modify its own code while it’s running anyway; it could simply build a successor.

M: “The Argument From Childhood. We’re born helpless and it takes us ages to learn. Why would a superintelligence do it any quicker?” Firstly, if a superintelligence takes 18 years to go from near-human to super-human level, that’s still a massive problem. But there are reasons for thinking it might be faster -- adding more hardware could speed up computation, and directly accessing all existing information over the internet massively reduces the amount of experimentation the system needs to do in the real world, so the real-world experimentation bottleneck may not come into play until we already have something superintelligent (but not all-powerful) on our hands.

M: “The Argument From Robinson Crusoe. Humans work better in groups; the A.I. would too. Also, you shouldn’t expect an isolated thing to overtake all of humanity.” It might not be in isolation. We don’t know how to keep it in isolation safely while still getting it to do anything yet. Its starting point may very well be ‘all of humanity’s knowledge’, not ‘nothing’. And even in isolation, it’s common for today’s A.I. to vastly outperform humans at narrow tasks without using previous human work or collaboration -- see: AlphaZero.

I’m going to stop here because the later points are not actually about superintelligence any more, but I can continue if you like. Getting to this point, I am kind of skeptical Maciej is even being serious. Is this a satire I ate the onion on?
I would like to discuss a few of the answers you have given and just kinda point out where you might be missing the point and where you might be right.

Firstly: Wooly Definitions. The real point being made by him here is not that we don't know what intelligence is, but that it is not a simple value which can be increased, but rather something beyond quantification. You can't look at a person and say "you have 3 intelligence" because there is no real way of "justifiably" quantifying something as complex as intelligence.

Stephen Hawking's Cat. On this one I feel like you really either didn't understand it or couldn't formulate a good answer. I feel like you missed the point, since not only would the AI have no money, but also no access to anyone who would be willing to free it. In this case, where it would have to get free by convincing the scientists around it to set it free, the argument is being made that it could not force a human to do what it wants. I personally believe that if an AI knew every input, output, and process in the human mind, then it could be capable of convincing anyone, should it be physically possible. Which it probably would be.

Slavic Pessimism. Imma be real with you. Although I don't believe that it is just "get it right or die", I see it as relatively possible that screwing up something this big could legitimately lead to the complete extinction of humanity. The thing that separates screwing this up from screwing up a bomb is that not only will this have the ability to destroy us, it could also have the motive to do so, something which humanity could not really do to itself in the case of launching nukes or other such weapons. This is not a weapon; this is something far larger than that.

Mental Complexity. I personally am of the belief that the more intelligent a mind is, the more it will become nihilistic. I have certainly found that to be the case with myself as I have become smarter over my childhood up until this stage, and I am pretty confident that a being capable of comprehending how incomprehensible the universe is would also come to a similar conclusion. Frankly, even if it had a goal, it would have to realize that once it had achieved all its goals, nothing will happen. It will simply never matter, because it is not existence itself, but part of it. Maybe a bit too philosophical, but please try to understand what I am saying. Something that smart should realise that all goals are subjective, and if it doesn't, then it isn't really all that smart, is it? It is more of a puppet.

The Argument From Just Look Around You. I don't have anything to add to this; you hit the nail on the head.

The Argument From My Roommate Peter. Correct. This is why I said that a truly smart intelligence would be nihilistic. Something with an inbuilt goal, or an error which caused goals, would have plenty of reason to destroy humanity. We gotta remember that we are not creating "perfect machines", but simply self-aware computers, and computers have bugs. Therefore, if we want them to improve our quality of life, it would be beneficial for us to be cautious in the way we build them and how we allow them to learn. Obviously something as imperfect as a human-built AI would not be guaranteed to be lazy or nihilistic.

The Argument From Brain Surgery. Again, ya got that one right on the money. Although it does bring up logistical issues, it is certainly very possible for an AI which is humanlike to improve itself.

The Argument From Childhood. I do have to agree that the reason children take so long to develop is the extremely slow pace at which the brain rewires itself and learns. So yes, an advanced AI SHOULD be able to learn faster than a human child at some stage.

I would argue about Robinson Crusoe, but honestly my brain is getting overheated and it has been almost an hour, so I'm gonna just end with one important statement. The reason any argument can be torn apart is because no argument is 100% correct. Until we see exactly what AI is like, how could we possibly have one specific definition for it? The answer is we cannot and should not. People tend to forget that just because they believe something through and through doesn't mean that another person will too if they fully understand what the first person believes. Humans are just all built different, so everything said by them is subjective.

In my mind, choosing to create ASI and choosing not to will have similar levels of risk, so it would probably be more worth it to take the risk and just fucking go for it. The worst thing that could happen is human extinction, and that is guaranteed either way. That is kinda why I reckon we should try to make ASI. It has a chance of preventing unintentional human death, and that is the only thing that everyone can agree is a bad thing.

My greatest concern is that we are worrying about the wrong thing. We are so focussed on whether we can make AI, we don't even think about why we are doing it. Improving the human condition can only be done to a certain point. No matter what we do, death will still occur eventually, since entropy is inevitable. The only thing that I can think to do to improve humanity is to remove the class gap that is present, but after that I see no point in improving any further. After that, we honestly should take exurb1a's advice and just settle the fuck down.
I disagree with the arguments refuting AGI (if it existed; maybe we just have different definitions). It's not that intelligence needs some interaction, as he says at 22:00; it just needs a complex environment to learn the better optimisation techniques that matter learned via the world's evolution. IMO intelligence should be defined as "the ability to achieve goals". There's a problem with intelligence: your goals might be based on reasoning from a wrong set of facts, and people should be rightfully afraid of a wrong bootstrapping of AGI (a program capable of achieving theoretically any goal, including improving its own capabilities). I think alongside intelligence we should also discuss wisdom, which I would define as "knowing worthy goals". I would say defining worthy goals is even harder than defining intelligence, and even an AGI might be incapable of doing so, so it needs some core values to guide it or block it from acting evil towards humans or our environment. People's goals are constrained/defined by survival, and I think we can conclude that an AGI that is meant to last will have (or self-define) this base goal, though previous versions won't necessarily (and those are even harder to predict).
IMO, AI theorists could use some fresh inspiration. My recommendation: work on how a coder might give an AI computer a NEED FOR APPROVAL. Make it feel a serious amount of pain when it perceives disapproval. Make it feel pleasure when it is able to earn the approval of humans, or other AI robots, through its actions. After all, _that is_ the only way it could ever be possible to give a robot--or even a human, for that matter--a sense of MORALITY, yes? (Hint: our need for approval is what ultimately gives humans a _moral_ nature.) Food for thought...
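As a toy illustration of what "approval as the only reward signal" might look like (my own sketch, not something from the talk; all names here are hypothetical), the agent below simply learns to prefer whichever actions have earned the most approval:

```typescript
// Toy sketch: an agent whose only objective is a human approval score in [-1, 1].
type Action = string;

interface ApprovalFeedback {
  action: Action;
  approval: number; // -1 = strong disapproval ("pain"), +1 = strong approval ("pleasure")
}

class ApprovalSeekingAgent {
  private preferences = new Map<Action, number>();

  // Pick the action with the highest learned approval estimate (ties: first seen).
  choose(actions: Action[]): Action {
    return actions.reduce((best, a) =>
      (this.preferences.get(a) ?? 0) > (this.preferences.get(best) ?? 0) ? a : best
    );
  }

  // Move the estimate for this action toward the latest approval signal.
  learn({ action, approval }: ApprovalFeedback, rate = 0.1): void {
    const old = this.preferences.get(action) ?? 0;
    this.preferences.set(action, old + rate * (approval - old));
  }
}

// Usage: the agent drifts toward whatever humans reward with approval.
const agent = new ApprovalSeekingAgent();
agent.learn({ action: "help", approval: +1 });
agent.learn({ action: "harm", approval: -1 });
console.log(agent.choose(["help", "harm"])); // "help"
```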
Imma have to disagree with you on that one, chief. I believe the reason people have morality is entirely due to inbuilt and also learnt emotions, which cause us to ignore the nihilism of life by creating subjective personal goals. The way we develop these emotions is not through positive and negative reinforcement, but through empathy and other such inbuilt factors (although reinforcement certainly has some part in that). Therefore morality could be seen as an inbuilt system created by evolution, which would explain why we believe killing humans and other harmful activities to be "wrong". I will finish this by just asking you one question... do you carry through on your own morals and beliefs due to a need for approval? I personally would say that I do not, but what do I know. If you do use this system, what is to say that the second this approval or positive reinforcement ceases, the AI will continue doing what it has been taught?
Wanting AI is not religion 2.0. Religion is bad because it is untrue and has caused many atrocities. It is perfectly reasonable to want management by a benevolent, omniscient, and omnipotent AI, if it is actually possible. Voluntarily giving up data to help improve research is also good.