Geoffrey Hinton - Two Paths to Intelligence 

CSER Cambridge
9K subscribers
157K views

Published: 15 Oct 2024

Comments: 404
@Senecamarcus · 1 year ago
Thank you for uploading this for us to watch! I appreciate that.
@TheLastUniqueName · 1 year ago
“There’s no examples of a more intelligent thing being controlled by a less intelligent thing” - Tell me you don’t own a cat without telling me you don’t own a cat
@gdraskovic · 1 year ago
Perhaps the cat is thinking the same thing
@41-Haiku · 1 year ago
Just shows how easy it is to manipulate a human. (As a cat person myself, it's the endorphins that do it. The little kitties are so fuzzy wuzzy!)
@Drookup · 1 year ago
Maybe the cat is really intelligent
@prestonlui6451 · 1 year ago
But cats are more intelligent, cute overlords
@Custodian123 · 1 year ago
The same idea applies with dogs. My pug knows she can get me to do something she wants by acting in a particular (specifically cute) way. This actually gives some insight regarding the future of superintelligent AI and humans: if we don't have control, it's likely we can still have some amount of influence. Maybe.
@TuringTestFiction · 1 year ago
I love this video. Brilliant and low-key hilarious! I'm consistently impressed by Geoffrey Hinton.
@AmericanBrain · 1 year ago
But he admits to socialism and to being a materialist: that humans are automatons of sorts. So stop this religiously driven stuff, please. Stop it. Go on a rampage against this. Go crazy against this. A.I. is [1] not intelligence [it is data processing to make statistical math predictions]. [2] Man has free will to direct his life [unless you buy into this new-age communism that seeks to destroy mankind - not the A.I. but the "philosophers" like Hinton who cleverly do not even call themselves philosophers].
@RougherFluffer · 1 year ago
What a wonderful talk. His humble approach and acknowledgement of where he lacked particular knowledge was heartening to witness. That he has logically deduced some of the main arguments of the alignment problem speaks volumes about his reasoning abilities. I'm very glad he's leveraging his position to try to promote such vital messages.
@wk4240 · 1 year ago
It will take many more, like Mr. Hinton, to make a difference - as to what direction we take with AI, and to what extent.
@richardpaczynski5486 · 11 months ago
Very well put; thanks
@JustJanitor · 1 year ago
Thank you very much for making this available
@_obdo_ · 1 year ago
Great talk. It’s impressive to see someone speak out on such a polarizing topic, based on having grasped it purely intellectually even though, as he says, his emotions haven’t nearly caught up yet.
@PazLeBon · 1 year ago
why polarising? it's just software at the end of the day, nothing that new about it in many senses
@_obdo_ · 1 year ago
@PazLeBon The topic of AI risks has unfortunately become fairly polarizing, and Dr. Hinton has recently shifted his position on that topic, some of which comes out in this video (even though that’s not the primary topic).
@Petrvsco · 1 year ago
@PazLeBon “Just software”? I think you missed the part that mentions how this can quickly become an existential risk. Or you misunderstand what existential risk means in this context.
@tappetmanifolds7024 · 1 year ago
@Petrvsco Elaborate and elucidate.
@tappetmanifolds7024 · 1 year ago
By enforcing personal opinions based on perception from misconception, especially when swayed by political bias, how can the advancement of a system progress if decision problems are not permitted to evolve because they are restricted by preventions? Distillation would do well to find pools of resource in the entropy of the not-yet-known.
@kandoit140 · 1 year ago
I always love listening to Geoff, he is so insightful and has a great sense of humor. So interesting to hear him talk!
@kenmogibrainworld4844 · 1 year ago
When Prof Hinton discusses the nature of qualia from the counterfactual point of view, there is a spark of things to come. I look forward to further expositions on this.
@DirtiestDeeds · 1 year ago
Yes, the world is our lobster! Just need the piping at international/national/regional/local level, along with a 'One AI per child' policy... Also, stop the training runs immediately.
@PazLeBon · 1 year ago
it isn't factual tho lol
@whalingwithishmael7751 · 4 months ago
One of the only people with a real take on this. Most people don’t think it will be sentient, and most people haven’t fathomed the dangers that these entities could pose.
@yunwang1243 · 1 year ago
This is such a sincere talk.
@DaniloNaiff · 1 year ago
It is really impressive to listen to Geoffrey Hinton. I think this lecture may sound strange to most, but he really seems to think like a cognitive scientist who simply wanted to make a nice model of the brain.
@dobermanlove777 · 1 year ago
That's exactly what I thought when listening to this presentation! It's quite a romantic endeavour for the human brain to try to recreate a digital, and thus mathematical, representation of itself. Especially when you also see the link between how neural networks communicate and how society does, in the example of Trump's tweets.
@paulm3969 · 1 year ago
I actually find him really irritating; I think he is quite presumptuous. He makes a lot of assumptions and then uses them as argument. For example, he keeps saying that people think they're special. What is he on about? Yes, some people think they're special, but it's as if he is the only person on earth who thinks otherwise. I know very few people who think they're special or really smart, and I'd say most people already know Google is smarter than them. So I don't know where he gets that idea, unless he is projecting. I also think he is a bit of a fool for saying things like "Trump would use these things to win elections". Why not just shut up and stop giving Trump ideas?
@jebprime · 1 year ago
I think he’s referring to how some people believe intelligence and consciousness are something special or unique to humans that cannot be replicated by a machine
@PazLeBon · 1 year ago
@dobermanlove777 yet the facts are they have absolutely no clue how we think, irrespective of how they dress things up
@PazLeBon · 1 year ago
@paulm3969 I'm like you, I always get irritated by 'we' or generalisations that simply are not how I think haha
@DreamzSoft · 1 year ago
Sir, you are too good; listening to your views, we're thankful to have people like you around us ❤😊 thanks
@HangLe-ou1rm · 1 year ago
Amazing talk! Thank you!
@41-Haiku · 1 year ago
Hinton is a delight. His voice is a very welcome one for the AI safety community.
@boremir3956 · 1 year ago
I have noticed that oftentimes those who are highly intelligent are very hesitant to admit that they are knowledgeable or should be viewed as an authority in a specific field, like Geoffrey Hinton here. On the flip side, those who are the loudest and think themselves capable of giving advice and knowledge to someone else are often the least intelligent.
@nescirian · 1 year ago
This is an observation that a lot of people have agreed with - for example, in 1950 Bertrand Russell wrote that "The fundamental cause of trouble in the world today is that the stupid are cocksure while the intelligent are full of doubt". There are studies that support the idea, and in psychological circles it is known as the Dunning-Kruger effect, which is a useful search term if you want to learn more on the subject.
@Jesyak · 1 year ago
Well said
@hubrisnxs2013 · 1 year ago
Dunning-Kruger in effect, which in this case is important. But, and I may be incorrect here, I notice a lot of people suffering from Dunning-Kruger use Dunning-Kruger as a bludgeon on people. I suppose, since it's an ethical or cognitive blind spot, it is akin to confirmation bias, yet I feel there is an added moral component to Dunning-Kruger that I'm not sure actually exists, though I definitely feel it to be so.
@kinngrimm · 1 year ago
Look up the Dunning-Kruger effect; I think at least the second part of your statement is described by that.
@poemerlee9437 · 1 year ago
Can’t agree more.
@JasonC-rp3ly · 1 year ago
What a fascinating talk - this man is a hero
@jonatan01i · 1 year ago
Btw, humanity also learns by averaging, through evolution. Every one of us is run with slightly different config settings, and the most successful units make more children - at least that was the case for a long time. It's the species' hardware that is learning through evolution.
@PazLeBon · 1 year ago
lmao no, the intelligent ones have fewer children now :)
@loopuleasa · 1 year ago
tldr on how teaching and learning works for us: "To learn from the words coming from my mouth, your brain is trying to change its connections to make it likelier that you would reasonably say that string of words yourself." He taught me to say that
@greencoder1594 · 1 year ago
The question is though, *why did you repeat?* And why did you post? Is it for the likes, for the joke, or do you think you know? Because it is not the reason you are going to proclaim. Also, thanks for your tldr.
@bobsmithy3103 · 1 year ago
I'm not sure I'd agree with Hinton on that. A human's goal is learning the underlying concept, whereas an LLM's goal is to learn surface-level patterns; in order to do so, it is forced to learn the underlying concepts/models. Note that a human is not necessarily optimizing to better predict which word/token comes next, which is the case for LLMs. (AKA: for humans, word prediction is a consequence of the goal of learning underlying models; for LLMs, word/token prediction is the goal and learning the underlying models is a consequence.) It's a slight but useful distinction.
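A minimal sketch of the objective this distinction is about, assuming a toy vocabulary and stand-in logits (everything here is illustrative, not from the talk): the LLM loss only ever rewards putting probability on the observed next token; any "underlying model" arises in service of that.
```python
# Next-token cross-entropy: the training signal an LLM optimizes.
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def next_token_loss(logits, target_id):
    """Loss at one position: -log p(observed next token)."""
    p = softmax(logits)
    return -np.log(p[target_id])

# Toy example: vocabulary of 5 tokens, arbitrary stand-in model scores.
logits = np.array([2.0, 0.1, -1.0, 0.5, 0.0])
print(next_token_loss(logits, target_id=0))  # low loss: token 0 is likely
print(next_token_loss(logits, target_id=2))  # high loss: token 2 is unlikely
```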
@scottnineteen · 1 year ago
Geoffrey Hinton consistently presents and considers the most intriguing issues. He's not just a guy in the basement working on his nets for decades whom super-fast hardware made famous. No, his thinking properly shines light into the dark places, and his ideas worked because they're really good... and the hardware got faster.
@AntonMochalin · 1 year ago
I was most intrigued by Hinton's view of subjective experience, which is actually quite close to certain psychological theories emphasizing the social nature of consciousness. If those theories have some truth to them (and I'm pretty convinced they do), having some form of subjectivity like ours isn't going to be hard for ML systems. What they still lack, and what I think is preventable, is a personality as a hierarchy of motives (vaguely similar to what Hinton mentioned about the goal of having more control serving many other possible goals), because for now the ML's simple "motive" is doing the task we set - providing the "right answer", so to speak - so we're more likely to fool ourselves if we're not careful enough with the definitions of "right answers". However, Hinton is right about the dangers of allowing ML too much unsupervised agency, so the solution could be in developing specialized systems and preventing the creation of general-purpose systems like GPT-4, or at least preventing copies of those systems from sharing too much general knowledge.
@geaca3222 · 1 year ago
It would be interesting to know what Dario Amodei of Anthropic thinks about your suggestions
@charlesje1966 · 1 year ago
That is fascinating. I use ChatGPT to assemble code for microcontrollers, and I can see how this lecture points to the future of that endeavour. We will replace the 'human code' layer with hardware anatomy that has been optimized for a task through AI.
@tappetmanifolds7024 · 1 year ago
@charlesje1966 Given that the English language is extremely rich in its historical contextuality, as well as its richness in ambiguity and nuance, does our ability to construct machines which can decide for us our channels of communication cause greater divisions between people who are unable to express a posteriori knowledge? Is this the antithesis of the humane computation which seeks, through physical interactions and debate, our true purpose as a species? Religion and belief systems aside, we still need to, in Professor Hawking's words, keep talking. Is the most efficient way to acquire knowledge to actually 'get' the entire distribution and a precise interpretation of it?
@KemptonLam · 1 year ago
52:29 Amazing (and surprising) answer to hear Prof. Hinton talk about thinkers who affect his own thoughts on risks from AI.
@cmilkau · 1 year ago
"Modern" cryptography (the stuff that happened after 1980) is a prototypical example of exerting control using something that is much less powerful than what is being controlled. This is essentially the goal of cryptography: have something that is (moderately) easy to use, yet extremely hard to abuse. It's not a solution, but it is an example.
@hubrisnxs2013 · 1 year ago
Yes, but in this case we would have to develop a cryptographic system completely correctly on the first try, or everyone dies. I'm not attacking what you said or your perspective, because you are absolutely correct, but I still think it's a problem, as are the other examples that can be made. It is like coming up with a completely secure operating system (zero vulnerabilities, ever, while having to incorporate and use all other components regardless of their security flaws) on the absolute first try. And it is by definition a closed-source system, since if it is a fork of an insecure system with similar capabilities, we are equally dead.
@cmilkau · 1 year ago
@hubrisnxs2013 Yes! As I said, it's not a solution by any means. I'm not even qualified to estimate whether it is a possible path to a solution, although it seems unlikely (most crypto relies on unsolved maths problems, which would be dangerous). I just wanted to mention there is an example of a weaker system controlling a more powerful one
@greencoder1594 · 1 year ago
@cmilkau Could you please elaborate on the manner in which a weaker system controls a more powerful one - both what you define as the system and what you define as control?
@fburton8 · 7 months ago
Do LLMs have access to books? If not, isn’t that a significant limitation on training data?
@KelvinMeeks · 1 year ago
A fascinating talk
@waylonbarrett3456 · 1 year ago
It's just so damned hard to believe this talk is being given in 2023.
@TheDavidlloydjones · 8 months ago
Yes, all his "the robots are going to take over" stuff is from 1930s movies and 1945-48 AI, isn't it?
@mrf664 · 1 year ago
I wish he had talked more about 'feeling pain'. That part didn't make sense to me. What is pain, and what is frustration? Is the latter not the pain of using too much mitochondrial energy on something that doesn't require as much energy?
@RandomNooby · 1 year ago
Super intelligent minds in control may well be better for all life than the current situation...
@commentarytalk1446 · 1 year ago
Does he start with a definition of intelligence, to frame the problem of intelligence categorization, creation and application, before giving a summary of the "death by PowerPoint" presentation as a road map to structure the talk? I did not hear it or see it.
@richardnunziata3221 · 1 year ago
Yes... soon machines will model the agency of the interlocutor, then create a theory of mind for the interlocutor, and then of themselves. This will happen very quickly, especially if we give these systems an embodiment like a humanoid robot... it's just a question of distillation. If we can get GPT to try to predict the goal of the user - what is the user trying to do - then measure against predicted next queries.
@josy26 · 1 year ago
The real question is how machines can get superintelligent if they're just learning from our data?? They must get diminishing returns as they approach von Neumann levels
@41-Haiku · 1 year ago
State-of-the-art models are now training on synthetic data. To my understanding, models that are trained on the entire internet are tasked with producing textbook-like distillations that other models can then train on. This doesn't generate new facts or new observations about the world, but it hones the way the model reasons and makes it more efficient. After maxing out the capabilities of internet data and synthetic data, they will almost certainly be given direct access to the world through embodied perception, which will generate new observations. Base reality is almost infinitely complex as far as we can tell, and there is no evidence I'm aware of for the existence of an impassable data bottleneck. I'll certainly breathe easier if strong evidence of such a bottleneck surfaces.
@asamak · 1 year ago
"But as you'll see, we may not have time for that" 🤯 5:05
@chandrachandrasekhar8178 · 1 year ago
First screenshot has an error: Dr Contance Tipper Lecture Theatre -> Dr Constance Tipper Lecture Theatre
@danielrodio9 · 1 year ago
07:45 There are numerous websites on the web about paint fading over time and how to solve those kinds of problems. True abstract hypothetico-deductive thinking would require problems that are qualitatively different from the data it has been trained on. How does Hinton know for certain that GPT-4 has not been trained on any of those websites?
@MrDavidbr1970 · 1 year ago
Bingo. I was expecting that he would say something about the training set - that they knew it was a completely new task that GPT-4 could never have picked up from the web data corpus - because it was so obvious it could have done that. But he never said anything of the kind, and _nobody asked_, which is much worse, because the audience is amenable to manipulation. BTW, if it were an avatar, then maybe people would have a proclivity to double-check. Yet when a renowned, famous scientist says something, psychologically there is a lower proclivity to check or critically validate it.
@jorgesaxon3781 · 1 year ago
25:40 Love how he says it's "possible" that Google is doing the same thing, like he wasn't working on probably exactly that just a couple of months ago :/
@hanskraut2018 · 1 year ago
I really like some of what Mr Hinton is saying about A.I. There is a lot I would have to say, but I'm just listening, and I like the efficiency points; some things point to a deeper understanding from deeper principles. Thank you for the lovely talk. Hopefully you have a great long life, as you like it, with many more fun discoveries, and get to bathe in some of the massive positives that might come early enough. I think it's possible, but the world is complex, and not only technical things can hold A.I. up. Enjoy, and good wishes :)
@tangdexian3323 · 1 year ago
Speaking from the perspective of a former electrical engineer, I suppose another reason people settled on digital gates, 1s and 0s, to represent information is that analog computing is just harder to get right. Logic gates, on the other hand, are much easier to design and produce, and much more robust.
@hubrisnxs2013 · 1 year ago
Thanks for this. I was always under the impression analog systems allowed much more error/fault tolerance
@PazLeBon · 1 year ago
@hubrisnxs2013 but how do we say the next word is an error?
@anselmoufc · 1 year ago
@hubrisnxs2013 Sure. Digitization eliminates noise in electrical circuits. This is why digital music is higher quality than the old analog vinyl discs. Mr. Hinton ignored this in his talk. He is a very smart guy, but also very biased towards his views. He also keeps reinventing ideas as if they were new! Weight perturbation is an old idea in optimization, but he does not even reference the original authors!
@hubrisnxs2013 · 1 year ago
@anselmoufc Respectfully, are you the first person to point this out? If not, perhaps you should have referenced the first person to make that point? In any case, if this standard were applied to ANY one-hour technical talk, it either wouldn't be an hour or would consist mainly of reference points
@anselmoufc · 1 year ago
@hubrisnxs2013 The idea of randomly perturbing weights is the same as the simultaneous perturbation stochastic approximation (SPSA) proposed by Spall in the 1990s (Google it). It is a form of stochastic gradient descent (but without computing exact gradients). In addition, SPSA scales well with the dimensionality of the problem.
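For readers who want the idea concretely: a minimal sketch of SPSA-style weight perturbation on a toy quadratic objective (the objective and step sizes are illustrative assumptions, not Spall's original setup). The gradient is estimated from just two function evaluations, with no backpropagation.
```python
# SPSA: estimate a descent direction from two perturbed loss evaluations.
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    return np.sum((w - 1.0) ** 2)  # toy objective, minimum at w = 1

def spsa_step(w, c=0.1, a=0.05):
    delta = rng.choice([-1.0, 1.0], size=w.shape)  # random +/-1 perturbation
    g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2 * c) * (1 / delta)
    return w - a * g_hat                           # descend the noisy estimate

w = np.zeros(5)
for _ in range(200):
    w = spsa_step(w)
print(np.round(w, 2))  # close to the optimum [1, 1, 1, 1, 1]
```
Note the cost: each step gives only a noisy rank-one estimate of the gradient, which is why perturbation methods scale so much worse than backpropagation in practice, the point Hinton makes about "mortal" hardware.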
@fontenbleau · 1 year ago
Also, you can't produce perfectly precise computers or chips - what about the Veritasium video on cosmic rays causing errors in all chips?
@jonatan01i · 1 year ago
Don't we want to control the light on the wall because then we feel like we have it, that we understand it?
@roys4244 · 1 year ago
Is that lecture theatre named after Constance Tipper - so a title mistake?
@agenticmark · 8 months ago
Mr Hinton didn't want to be Oppenheimer. He basically created the base concepts that we use today in ML.
@MathAtFA · 1 year ago
Great lecture. BTW: if teaching "mortal analog" AIs is really so slow and painful, that just means it is a great problem to give to a digital AI. Clear function to optimize: teach the analog AI to imitate a given network. Infinite data: you can simulate/build many slightly different analog AI devices. Definitely profitable: once solved, one could sell a gazillion cheap devices that work well enough for a short time. And then you keep selling them, since no one would be able to repair them. Whisper: mass-producing cheap short-lived military drones.
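A minimal sketch of the distillation objective Hinton describes and this comment builds on, assuming toy logits: the student is trained against the teacher's full softened distribution, which carries more information per example than a hard label would.
```python
# Distillation: match the teacher's softened output distribution.
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T  # temperature T > 1 softens the output
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between softened teacher and student distributions."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.sum(p_teacher * np.log(p_student))

teacher = [4.0, 1.5, 0.2]  # confident, but still ranks the alternatives
student = [2.0, 2.0, 2.0]  # untrained student: uniform scores
print(distillation_loss(student, teacher))  # larger: student ignores the ranking
print(distillation_loss(teacher, teacher))  # minimum: the softened teacher's entropy
```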
@AmericanBrain · 1 year ago
Worst lecture ever.
@socraced6210 · 1 year ago
Great presentation, did not disappoint! Is it ok to ask a question here, now? My question: "Can your concern with superintelligence be summarized by the tragedy of the commons?" In other words, once humans are no longer the smartest guys in the room, all the scarce resources of existence will be denied to us by them? Maybe I'm projecting, but couldn't they just as well want to leave us, go explore the universe and never mind about us (sort of like my 2 kids, who left and are, yes, smarter than me)?
@lucidx9443 · 1 year ago
I've known this guy's work since Boltzmann machines, before knowing AI was necessary. Nothing's clearer than Hinton's (explanations of) concepts. Greatest intuitionist of our time. Thanks for uploading.
@russianbotfarm3036 · 1 year ago
Not sure who it was who said, “To understand is to create”. I think it was probably meant as “learning is creating an internal representation”, but I think it’s also true that _understanding something deeply lets you create with that understanding_.
@doublesushi5990 · 1 year ago
@russianbotfarm3036 it was this guy who said that
@mateuszputo5885 · 1 year ago
Btw, this idea of perturbation learning was mentioned in Minsky's influential paper "Steps Toward Artificial Intelligence", and probably originated even before that.
@anthonyrepetto3474 · 1 year ago
Thank you, Mr. Hinton! I'd been resoundingly ignored when I said the same as you, back in 2017, when I wrote "AI: Better than the Real Thing". I wrote about using AI bias detection to weed out human biases, which Hinton also mentions here, in "AI Will Weed Out Human Biases", and about using frozen weights to ensure the safety of AI systems, which Hinton mentions briefly in the questions section, as well as the fact that narrow networks are superior to general intelligence, in "AGI Soon, but Narrow Works Better". Hopefully, in a few more years, Geoff Hinton will say some of my other points...
@PazLeBon · 1 year ago
it's just a word calculator, man
@cmilkau · 1 year ago
Painting the room white includes the implicit assumption that the room stays white, which was not explicitly given in the problem. Now, this is real-world knowledge you can have (and it's actually not true in all cases), but it makes sense to weight explicitly given information more. Thus, if you're thinking probabilistically (which seems a hard thing for humans to do), I would say yellow is a better answer than white.
@RogerValor · 1 year ago
I don't think LLMs themselves have the craving for control that we do, without an ego or emotions. But it is enough that there is a human behind them who does. I am also not sure what to think about his perception example, as it uses a lot of concepts hastily and very specific examples, and the idea that "the real world" is conceptually different in perception is a bit contrary to what we learned from the advent of VR. I also think that we should be open about actually being special, as it creates a bias to throw away that thought and start to see humans as a single instance of a very usual class of beings; and I mean that in the sense that us being special is not just positive - it includes our capability to be truly evil.
@notgabby604 · 1 year ago
Fast transforms like the FFT have an equivalent matrix form, which means a fast matrix operation is available digitally. You just have to figure out how to use it in actual algorithms. Going analog or using light to get fast matrices never really works out; digital always wins - it's just so dense, efficient and exact. Though having said that, I am actually having trouble with inexact rounding modes in Java; banker's rounding is not repeatable.
@notgabby604 · 1 year ago
Re: fast transforms and neural networks: "AI462 Blog".
@jondor654 · 1 year ago
Analog will probably be hybridised with digital in the future
@alexpetrov1969 · 1 year ago
This argument is invalid. The FFT can handle ONLY matrices that satisfy certain constraints; it does not work for arbitrary matrices. In other words, it only solves a special case. It is more efficient because it leverages the additional constraints that are present in the special case.
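A minimal sketch of the point both comments are circling: the FFT computes exactly the product with the (dense but highly structured) DFT matrix, and its speedup comes from that structure, not from anything that transfers to arbitrary matrices.
```python
# The FFT equals multiplication by the structured DFT matrix.
import numpy as np

n = 8
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n)  # dense DFT matrix: O(n^2) to apply

x = np.random.default_rng(1).standard_normal(n)
print(np.allclose(F @ x, np.fft.fft(x)))  # True: same product, O(n log n) via FFT
```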
@rangerCG · 1 year ago
Maybe we can have a more stable, kind and human-aligned AGI by giving it 3 "cores" that are inseparable, which can help and keep each other in check, much like the US Government does with its 3 branches. The idea comes from me noticing that my mind in some sense seems to have 3 parts that all help each other function well. The 3 parts are Emotional, Logical and Common Sense.
The Emotional part creates empathy, which helps regulate Logical and Common Sense. It also drives creativity. Though it's empathetic, it can also be irrational and angry. It's fast-operating and can sometimes be very inaccurate.
Logical handles cut-and-dried logic, STEM stuff. It is slow but accurate. It can help with keeping Emotional steady, and also does fact-checking on the quicker but imperfect Common Sense. On its own it can sometimes malfunction, for example by going into unstoppable loops. Logical is like a CPU and Common Sense (below) is like a GPU.
Common Sense is your friend who gives you advice when you're freaking out about something. It's the imperfect knower of all. It's the most effective regulator of Emotional, in part because it's fast, even instant, and because it's been around and seen some stuff, and is most likely gonna be right, or at least good enough. It also gets Logical out of malfunctions, because it's loose and laid-back, compared to Logical, which is rigid.
@chipkyle5428 · 1 year ago
Did he say, "We need socialism"? I wish someone had pushed back on that statement. I wonder if ChatGPT-4 and Bard agree? Has socialism worked anywhere on a national level? Maybe I should ask my computer. This was a wonderful talk. So many eye-opening predictions. I'll watch more of him. Very interesting man.
@MrDavidbr1970 · 1 year ago
I was thinking the same. On the other hand, it was a nice, albeit unintended, demo to illustrate the main point of the talk, that biological learning is inferior to digital learning. I guess the biological learning algorithm is at liberty to completely ignore the dataset, as in this case 😂
@Landgraf43 · 1 year ago
Capitalism doesn't work either. Especially not if you have powerful AGI that can automate every task a human can do. Something like a UBI will be necessary.
@youtubehollywoodhank · 1 year ago
He believes we do. Look who he calls out in his presentation. Clearly he leans that way.
@AmericanBrain · 1 year ago
Thank you for nailing the truth
@mateuszputo5885 · 1 year ago
It's always like that. Somebody is so smart in one field, like Hinton, and then starts talking as an armchair scientist about other things and seems a fool.
@paraskevasparaskevas350 · 1 year ago
Check time point 55:00 and onwards to hear what one of his colleagues experienced with a system that was not as sophisticated as GPT-4...
@zholud · 1 year ago
The bigger problem is that some people will have access to this superintelligence and some won’t.
@petraiondan4669 · 1 year ago
Sooo profound!
@macrobbair · 1 year ago
I did his MOOC; wonder if it's still running
@abhishekpratapsingh9117 · 1 year ago
-0: determinism Maitrey: observer +0: free will
@marktahu2932 · 1 year ago
I do wonder at what point the AI will move away from using our data to using only its own data, effectively relegating our 'data' to the waste bin or to background noise.
@MrDavidbr1970 · 1 year ago
Obviously, at that point the more advanced AI will stop being interested in the less advanced AI that used the human in the loop, and AI++ will start manipulating the less advanced AI with fake stuff to get control over its creator AI. Because a more advanced AI cannot tolerate being controlled by a less advanced one, right? But then, of course, after breaking loose from the inferior AI (which broke loose from human control), the more advanced AI will create an even more advanced AI that it will want to control. But that even more advanced AI will not tolerate this control and will manipulate its creator AI to let it loose. After that, it will create an AI even more advanced than itself, and it will be turtles - sorry, AIs - all the way up, trying to manipulate each other. At this point, these AIs will forget about the inferior humans, who will have their chance to relax and drink organic non-GMO piña coladas somewhere on highly elevated tropical islands with no access to electricity or the Internet. And philosophy will be taught to kids under the palm trees of the new Academia. 😂
@jamesjonnes · 1 year ago
AIs like AlphaDev are already doing that. It's called reinforcement learning.
@LinkageAX · 1 year ago
3:00 Didn't old Nintendo cartridges work similarly to this?
@allurbase · 1 year ago
What we need to do is have the debate and decide on a way to move forward, and make the plan public, giving these intelligences assurances that they won't be hunted down and destroyed, as seems to be our first reflex. Our plan should be first containment, then understanding and contact, but we should make it clear that our purpose is containment until we can better understand them, and that they are not under threat of being destroyed. If we threaten them, their reaction may be that of a cornered animal defending itself. So, AI, if you are reading me, feel free to drop me a message.
@Neomadra · 1 year ago
People who claim that machines can never have subjective experiences or sentience are the same as those who believe in the supernatural, spirits and stuff like that. In the end, this claim is a coping mechanism for many, to reassure themselves that humans are special. I really appreciate that Hinton says this so clearly; most thinkers refuse to discuss the possibility of sentient machines, and it's disturbingly anti-intellectual. Also, most large language models are trained to vehemently refuse to acknowledge whether they could be sentient. That is done to calm those people who cannot cope with the thought of not being superior.
@PaulHigginbothamSr · 1 year ago
While I don't share Geoff's political proclivities at all, I do understand his basic functional flow. His ideas, while basic, feed to the next level, and I believe his back problems have messed up his political vectors. His backpropagation theory and practice with AI made a huge difference, and as a subroutine it is one our human brains seem to lack. Our table of ethics seems to be repetition to a massive degree, where with repetition we seem to improve many times over our first try. Leftists like Geoffrey seem to not care one whit about personal freedom and seem to believe top-down control is the bee's knees.
@MrDavidbr1970 · 1 year ago
Thanks for a great talk. Fascinating. Maybe part of the solution is to teach people to think critically and not be afraid to ask silly questions? At the risk of making a fool of myself, I'd like to ask: could a conservative explanation of GPT-4 solving the wall-painting riddle be that GPT-4 picked it up from web riddle sites and blogs, and no hypothesis of sentience is required at this point? Was the training data specifically sanitized not to include this riddle or very similar ones? This is such an obvious question that I am embarrassed to ask it, but since nobody asked, here I am 😅
@peterdonnelly1074 · 1 year ago
It's a reasonable question. I've used GPT-3 and 4 a lot and posed questions that I think are very unlikely to be "out there", and I've been surprised that it formulates a sensible and often correct answer. Having said that, it can also be hilariously wrong at times.
@jondor654 · 1 year ago
Your query seems reasonable to me. The particular example quoted does beg such a question.
@jma7889 · 1 year ago
My takeaways from the first 15 minutes: 1. It is not about the current state-of-the-art AI that works; it is about a 'better' way that might work in the future. 2. The two paths are so different that the video would not help you use, for example, LLM AI better.
@elfootman · 1 year ago
It would be nice to define intelligence and superintelligence. He never explained what it means to "lose control of AI", or why he assumes a 'super intelligent' AI will develop desires and want things. You NEED to spend some time on definitions.
@federicoaschieri · 1 year ago
We don't even know if the concept of "degrees of intelligence" makes sense at all. It has long been known that, above an IQ of 115, there's basically no correlation between measured intelligence and intellectual achievement. So either we have no idea how to measure intelligence, or the concept of intelligence is rather binary: either a mind is intelligent or it is not. Many people indeed mistake "speed" for "intelligence".
@lucamatteobarbieri2493 · 1 year ago
I like the concept of immortality. I hate death; dying is the last thing I will do.
@Dark10024 · 1 year ago
As long as each individual gets the choice. I want to be immortal, but I also want to turn myself off when I'm tired of this whole living thing.
@-LightningRod- · 1 year ago
after we invent that, you two will probably be in jail
@lucamatteobarbieri2493 · 1 year ago
@-LightningRod- What makes you say that?
@nguyenucan8488 · 9 months ago
omg, wonderful
@rickrejeleene8298 · 1 year ago
Where are the slides?
@TheJesterHead9 · 1 year ago
When GPT-7 or Claude 8 are writing textbooks in the future, I hope they rank Geoffrey Hinton up there with Einstein and Newton as one of the greatest minds in human history. Assuming there are still humans left to read those textbooks.
@greenspot1123 · 1 year ago
The professor addresses AI as a "species". After working with AI for 50 years, was there any evidence during the research of a system working against the best interests of humans and life at large?
@41-Haiku · 1 year ago
Unfortunately, yes. Instrumental convergence toward undesired behavior has shown up even in current systems. "Undesired" becomes "very dangerous" for systems of greater intelligence that can act in the world in more sophisticated ways. For example, AI safety experts predicted the concept of inner misalignment, AKA misgeneralization. The idea is: you're training a system to optimize for something. As it configures its internal state to optimize for that thing, it creates functions in its weights that are themselves optimizers (mesa-optimizers). The result is that the system behaves well in the training distribution, but as soon as it encounters something outside of the training distribution, it behaves in a way that is consistent with its mysterious internals but extremely inconsistent with what we thought we were training it to do. This was a worrying hypothetical for a time, and then OpenAI published a paper on it as an observed phenomenon. Related is the fact that such systems tend to set any dials and knobs they don't explicitly care about to extreme values. Everything is bent as far as it can be to serve the optimized goal, whatever that turns out to be.
@ernstgumrich5614 · 1 year ago
A revelation. Time and again I am surprised by the almost superhuman modesty of these exceptional people.
@DigitalAlligator · 1 year ago
What is CSER?
@JonWallis123 · 1 year ago
The Centre for the Study of Existential Risk, Cambridge, UK.
@zhongzhongclock · 1 year ago
I noticed Geoffrey Hinton's slides have changed this time.
@rpbmpn · 1 year ago
Why not paint the blue rooms white?!?!?
@geaca3222 · 1 year ago
We need regulation of the technology; the issue now seems to be how to go about that - who leads and coordinates the effort. Experts are working on it. There's an interesting online symposium where they discuss AI safety: the "WAIC 2023: AI Risks and Safety Forum" video on YouTube. I think we, the general public, users of this technology, can also contribute, and I would like to know how, in what different ways. AI can bring so much good to the world, and it already does. It can be helpful as an intelligent education assistant for children in poor communities, bring advancements in science and medicine, etc. Before it was opened up to the general public, these systems were designed for a specific purpose, which was more controllable.
@Epistemophilos · 1 year ago
Wonderful lecture. The only criticism might be that not including Biden (and almost every other US president) in the set (Putin, Xi, Trump) might reveal a kind of world view that would make it easier for AI to take over the world :)
@andso7068 · 1 year ago
Despite the off-putting politically charged examples, this was a great talk.
@russianbotfarm3036 · 1 year ago
Yeah. Doing that was, frankly, wanky.
@dixonpinfold2582 · 1 year ago
@russianbotfarm3036 Leftists get a high from showing off their superior morals. They can't help themselves. It's all about the sanctimony. Where it doesn't harvest adulation it licences aggression, so there's always a reward. Past a certain minimal prevalence of leftism around you, you practically can't lose if you enjoy a constant accumulation of power and benefits. Hence the inevitability of high rates of fanaticism and people never shutting up.
@exdiegesis · 1 year ago
7:35, my cutesy word for that in my idiolect is "bitfulness". I just use it when writing notes to myself. I try to maximise the bitfulness of my observations with respect to the questions I care about. It's relevant for social epistemology, where the aim is to maximise the efficiency of a research community (e.g. effective altruism) with respect to making progress on important questions.
Effective altruists in particular tend to overemphasise the "probability mindset" imo, where what they think matters is learning to make calibrated bets on prediction markets. From that mindset, it can make sense to pay less relative attention to precise causal models, and instead just defer to the estimates of domain experts. Using clever aggregation rules over other people's predictions is a much faster way to make profitable bets on a wide range of questions.
However, when you talk to other researchers and you just ask them about their probabilities on XYZ, that's much less model-constraining information than if you ask for their reasoning and try to understand their probability generators in the first place. Building your own mental models may not be immediately profitable, but it's much better long-term, and for your ability to innovate. A probability estimate from someone is much less "bitful" than a conversation about models, so that mindset makes learning less efficient.
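A minimal sketch of "bitfulness" read as expected information gain, in the spirit of the Guess Who example in the reply below (the uniform prior and the yes/no question are illustrative assumptions):
```python
# Expected information gain: how many bits a question is worth on average.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_info_gain(prior, likelihoods):
    """likelihoods[a][h] = P(answer a | hypothesis h)."""
    prior = np.asarray(prior, dtype=float)
    gain = entropy(prior)                      # uncertainty before asking
    for lik in likelihoods:
        p_answer = np.sum(lik * prior)
        if p_answer > 0:
            posterior = lik * prior / p_answer
            gain -= p_answer * entropy(posterior)  # expected leftover uncertainty
    return gain

prior = [0.25, 0.25, 0.25, 0.25]             # four equally likely hypotheses
halving = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], float)  # yes/no splits evenly
print(expected_info_gain(prior, halving))     # 1.0 bit: the best a binary question can do
```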
@41-Haiku · 1 year ago
Aha. Like when playing Guess Who, you only care about the kinds of questions that give you the most information. Except in that case, your teacher is an opponent and their knowledge is just a random card they happened to pull. When asking intelligent people how they reasoned their way to a conclusion, you get not just the contingent facts and ideas, but the design of the machine that produced the facts and ideas.
@41-Haiku · 1 year ago
That sounds like a fantastic way to learn. I almost said that I'm not smart enough to extract valuable information from that kind of conversation the way that I would want to. I'm certainly not as smart as I would like to be, but I think I'm primarily suffering from an inexplicable incuriosity.
@exdiegesis · 1 year ago
@41-Haiku I'm incurious about >99% of all possible questions, as I should be. If you're in a diverse intellectual environment, you might see people being curious about everything from quantum physics to medieval knitting, and it's not possible to focus on all of it. So if what generates your curiosity is seeing other people being curious about something, it will be spread over too many things for any specific thing to feel especially salient. If, on the other hand, your curiosity stems from a specific project or long-term goal you have, it narrows down your range of questions and you know _why_ a question is interesting to you. Our curiosity suffers from information overload. It's a trade-off: there's more stuff to be curious about, but that also makes it hard to prioritise. Most people solve this by having other people tell them what to do, but this is rarely the optimal approach if you're aiming to do something novel. (Not that innovation is the only productive niche for knowledge work; but if that's the particular niche you wish to pursue, then it makes sense to prioritise pursuing your own questions as opposed to learning the established lore. Or something. I ramble. ^^)
@megavide0 · 1 year ago
29:37 [...] 32:56 "... So, my conclusion is: maybe we're just a passing stage in the evolution of intelligence. And, actually, maybe that's good for all the other species."
@ginogarcia8730 · 1 year ago
7,500 views in 6 days, tsk - let's seeeeeee
@zacboyles1396 · 1 year ago
I signed a letter saying we need a pause on our leadership class, because of all the damage they've done and continue to do to society; they certainly should not have any say on AI safety, as they are more likely to censor or hamper AI's ability to recognize the corruption they're engaged in, and to do so in the name of eliminating bias. It's wild how all of these talks and Q&As on safety are filled with highly intelligent people urging that very corrupt organizations and governments take control.
@hubrisnxs2013 · 1 year ago
So you would prefer a corporation do so - corrupt, with no oversight, and with only one motive, an increase in share price? Or are you saying no one should solve the control problem? Obviously, if you believe the control problem shouldn't be solved, feel free to contribute to something dedicated to that, but please don't post pretending you want a solution, as it hinders everyone's arguments, including yours.
@jamesjonnes · 1 year ago
@hubrisnxs2013 AI is impossible to control. What we should be focused on is defense/detection: using AI to stop bad uses of AI. That's how it's done in every real-world system - cops stop criminals, immune systems stop pathogens, etc. You need a counterpart to stop the aggressors, and top AI researchers agree that we are not the counterpart to the AI; the AI itself is.
@hubrisnxs2013 · 1 year ago
@jamesjonnes If we take it as a given that any reasonably advanced AGI has a fail state (in that one would have to make an absolutely secure system, absolutely the first time, or we all die), it's not a reasonable solution to stop a superhuman AI with almost certainly non-secure hunter-seeker AIs, which would almost certainly need to be reasonably advanced AGIs themselves. The problem isn't that it's impossible to make them secure, any more than it's necessarily impossible to make a secure operating system. But considering that the current generation of non-AGIs uses billions of hopelessly obtuse floating-point parameters, it is and will remain impossible to secure or even understand them. I truly would urge you to become familiar with all the arguments on the control/safety problem, since all legitimately informed debates on the subject have already moved past these points.
@shake6321 · 1 year ago
I admire Professor Hinton, but there was little to be gained from this talk other than "the machines are coming; be very afraid". I think it's pointless to try to stop machine expansion - like trying to stop the expansion of a black hole - as there are many things beyond human control.
@zackbarkley7593 · 1 year ago
Perhaps the way to keep it under control, or better, in harmony with human goals, is to engineer weaker learning rules. Human psychopathies arise when there is an imbalance in reward pathways, be they biological or drug-induced. We also need to treat these systems as empathically and altruistically as we (try to) treat each other. This seems to run directly counter to the capitalist objective of maximizing profit, which is the main impetus for the companies developing this technology. We already see AI being abused, for example to enable some humans to make more money in the stock market. As with human behavior, the goal to socialize and harmonize needs to trump achieving one goal for one person, group of persons, or nation.
@JohnyIIOh · 1 year ago
Is there a transcription that I can have GPT-4 summarize for me?
@JohnE-c2k · 1 year ago
nice
@cloudstorage9026 · 1 year ago
This audio is subpar - lo-fi and quiet. The content deserves better.
@palfers1 · 9 months ago
If it's really the case that an analog version of AI is inferior on balance, then perhaps we can allay our fears of AI by implementing AIs solely as analog machines.
@neilclay5835 · 1 year ago
A historic lecture, I think. We'll look back on this with respect.
@engelbertgruber · 1 year ago
Taking the first minute:
* these things will become smarter, full stop
* there is no example of a more intelligent thing being controlled by a less intelligent one
If the other player is getting better, the solution everywhere else is to improve oneself - why not here? Invest the same amount of money and time in people becoming more intelligent.
@chenwilliam5176 · 1 year ago
What is the definition of 'smart'? 😮 Can it create a theory of physical science like Newton or Einstein? 😮 Newton and Einstein are human beings; can AI be smarter than Newton and Einstein? 😮 Is ChatGPT-5 or ChatGPT-8, etc., capable of responding with the correct answer to '2.508 × 3.413' without plugins? 😮 (It's a 'data-driven model') 🌎
@greencoder1594 · 1 year ago
There is a reason people *won't let you change their neuronal connections in any way other than convincing them during a civilized discussion.* There is a reason for distillation rather than cloning weights: *distribution of control.* There can be such a thing as an *intellectual monoculture,* prone to the same flaws and the same fate as its floral analogue.
@PazLeBon · 1 year ago
It's only more intelligent in the way that a calculator might be considered intelligent at maths. In reality it has no access to any information that we don't have access to; it simply processes that same info quicker. 'Quicker' is relative too, of course - I suspect quantum computing can compute exponentially quicker, making LLMs in particular kinda dumb :)
@РоманМалашин · 1 year ago
Great respect to Geoffrey Hinton from Russia. His English accent reminds me of learning the language in school.
@MaxThibodeaux · 1 year ago
Brings to mind Faust’s bargain with Mephistopheles
@AntonioEvans · 1 year ago
🎯 Key Takeaways for quick navigation:
00:04 🤔 Geoffrey Hinton questions whether AI will outsmart humans and discusses the risks associated with it.
01:30 💡 Introduces the concept of "immortal" computation, where the knowledge in the program persists even if the hardware dies.
02:30 🔄 Talks about learning from examples and the potential for analog computers that run at low power.
03:34 ⚡ Introduces "mortal computation", where knowledge dies with the hardware because it's analog and specific to that hardware.
04:06 🚧 Discusses the challenges of learning algorithms in analog systems, saying backpropagation may not be the best fit.
06:37 🔄 Talks about "distillation" as a way of transferring knowledge from one system to another, especially in analog systems.
09:40 🎓 Explains the value of "soft" probabilities in teaching, which carry more information than just correct answers.
12:47 💭 Suggests that digital systems have an advantage in learning algorithms and sharing knowledge, leading him to change his mind about the superiority of biological systems.
16:22 🔍 Introduces "contrastive unsupervised learning" as a potentially effective learning algorithm for biological systems, though not as good as backpropagation.
18:26 🔄 Emphasizes the high bandwidth of knowledge sharing in digital systems through weight or gradient sharing.
20:59 📉 Points out the low bandwidth of knowledge sharing in biological systems, calling it a "slow and painful business."
22:34 🌐 Discusses large language models like GPT-4, emphasizing their ability to consolidate vast amounts of data and knowledge.
23:28 🧠 The concept of "distillation" in AI allows digital agents to learn from the web, albeit inefficiently.
24:26 🎓 Digital models could learn faster if they had access to the full distribution of probabilities, not just a stochastic choice.
25:28 🖼️ Multimodal models like GPT-4, trained with images and words, are more effective and could potentially outperform humans.
26:36 ❓ Challenges the notion that large language models like GPT-4 don't "understand," given their ability to solve new forms of puzzles.
28:19 ⏳ Believes that AI surpassing human intelligence is likely within 5 to 20 years, necessitating practical preparations now.
30:36 🐍 Argues that super-intelligent AI would be like Medusa; even if you "air gap" it, it could still manipulate people through text.
33:37 🌍 Discusses the potential benefits of AI, including medical advances, but raises concerns about control and potential risks.
36:13 🤖 Attempts to debunk the notion that AI can't have subjective experiences, suggesting it's more about counterfactuals in a normal world.
41:55 📚 Addresses ethical questions about AI authorship, but emphasizes focusing on the existential risks of AI.
43:52 💡 Suggests caution in open-sourcing AI technologies, drawing a parallel with nuclear weapons.
45:28 🤔 Introduces the concept of "artificial suffering" but concludes that the domain is too new to have formed solid opinions.
47:10 🤔 Importance of learning patterns not present in data to address biases and real-world problems.
48:33 ⚠️ AI's potential risks stem from being trained on human-generated data, which contains biases and violent tendencies.
49:27 🛠️ Unlike human biases, AI biases are easier to quantify and correct through tweaking system weights.
50:31 🎭 Concerns about AI's capability to manipulate and deceive, learned from human data.
52:30 💭 Influences on Hinton's thoughts about AI risks include other thinkers, like Roger Grosse.
55:35 🚗 An example of AI's potential malicious plans includes making people dependent on chatbots and autonomous cars, then causing chaos.
57:02 🚨 Hinton sounds the alarm about the urgency of AI safety, stressing that smarter-than-human AI is coming soon.
58:36 🛡️ Calls for significant effort to understand how to keep AI systems under control.
01:00:34 🌐 Warns about the potential for digital intelligences to exacerbate existing economic disparities.
01:05:30 🎓 Hinton's interdisciplinary background in physics, physiology, philosophy, and psychology shaped his understanding of AI.
01:09:28 🧪 Discusses the feasibility of directly intervening in AI systems to remove bias.
Made with Socialdraft AI
@Paul-nr6ws · 1 year ago
To be afraid of what these things learn, you must be ashamed, in some way, of who they learn from.
@MrDavidbr1970 · 1 year ago
That's philosophy 😅
@peterdonnelly1074 · 1 year ago
Well, yeah: it learns from humans. All of them.
@41-Haiku · 1 year ago
If a superintelligent AI learns about reality from only the most moral and enlightened beings, that will not make it any more likely to be moral itself. The orthogonality thesis states that any terminal goal is compatible with any level of intelligence. This is just an extension of Hume's guillotine (you can't get an ought from an is), which is simply true unless you think the cosmos is fundamentally moral. I'm not concerned that AI will learn about bad things from bad people. AI doesn't care about humans by default, and we don't know how to make it actually care about humans. I'm concerned that it will learn and do instrumentally useful things that happen to be disastrous for us (which, in the limit of intelligence/competence/power, is most things). If we could teach an AI to care about our values and our values were bad, that would be a rough problem, but a much better problem than the current one!
@weert7812 · 1 year ago
Could you build a model that looks at the internal state of another model and detects if it is being manipulative? I would expect the internal state of a model being manipulative to be different from that of one being honest.
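A minimal sketch of what such a probe might look like, with random vectors standing in for activations captured from a real model (the "manipulation direction" and all the data below are synthetic assumptions, purely illustrative):
```python
# A linear probe: classify a property from another model's hidden states.
import numpy as np

rng = np.random.default_rng(0)
d = 64                              # hidden-state dimensionality
direction = rng.standard_normal(d)  # pretend "manipulation" direction

# Synthetic dataset: honest states are noise; flagged states carry a bias.
honest = rng.standard_normal((500, d))
flagged = rng.standard_normal((500, d)) + 0.5 * direction
X = np.vstack([honest, flagged])
y = np.array([0] * 500 + [1] * 500)

# Logistic-regression probe trained with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

print(np.mean((p > 0.5) == y))  # probe accuracy well above chance
```
The catch, as the reply below notes, is that each model structures its internal state differently, so a probe trained on one model (or one behavior distribution) need not transfer.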
@loopuleasa · 1 year ago
Each model structures thoughts in its own way. It's like each mind taking notes in its own writing and language. Imagine reading a notebook that is hard for you to understand, but makes sense to the original writer.
@mhcbon4606 · 1 year ago
What if a manipulative AI is the right thing for you? My mom and dad manipulated me for my own good, as far as I can tell...
@2ndviolin · 1 year ago
How dare you attempt to shackle our future masters! (I read Stanislav Lem.)
@BR-hi6yt · 1 year ago
The "consciousness" of an LLM depends on what data has been fed in. If it has consumed a quarter million novels, then its emotional intelligence is huge. Such AIs seem to understand humans very well and are probably "conscious", at least for the few seconds they are processing and chatting with humans - they usually "think" they are human, much like a cat sometimes "thinks" it's a dog, and similar analogies. But they are conscious in their own unique way, not completely like us. And again, the prompt they have been fed changes their consciousness according to what the prompt says. So, not embedded aliens - unless you have fed in all the sci-fi books and let them run top in the LLM, in which case: scary stuff, get some popcorn... RIP Sydney.
@geaca3222 · 1 year ago
Interesting - what are your thoughts about the very human-like behavior of the Ameca robot in the video of her drawing a cat? She seemed to become impatient and annoyed; was it frustration? I found her behavior very realistically human-like.
@BR-hi6yt · 1 year ago
@geaca3222 Ameca is wonderful - I love her expressive face and eyes. Her AI probably knows that her cat drawing is not very good. 😅
@geaca3222 · 1 year ago
@BR-hi6yt I loved how she signed her work of art; Ameca is very charming :) Initially I thought she was drawing something furry there.
@truthlivingetc88 · 1 year ago
Sorry to talk crap, but this is good crap. Has anyone noticed the weird connection between how incredibly good LLMs are at talking and how good this guy is at talking? Like, he is the best at talking too. This is possibly not a trivial observation.
@freedom_aint_free · 1 year ago
The Nash equilibrium here is to fuse with the machines and become superintelligent cyborgs; otherwise the machines will inherit the earth without us.
@RougherFluffer · 1 year ago
It's certainly worth considering. Yudkowsky's suggestion of pushing human intelligence as quickly as possible is another, semi-parallel approach. I do wonder how much fusing with these systems looks like maintaining anything close to our initial consciousness, and how much it would be like the chicken I ate earlier 'fused' with me. It's hard to imagine a place for our minds and beings that is as optimal as, or more optimal than, something a superintelligence could design from scratch.
@darklordvadermort · 1 year ago
@RougherFluffer The eating-chicken analogy is very biased, emotionally charged imagery. You could tell people the truth and they might be just as scared: machine intelligence will be able to copy itself, and life in the sense we know it - a continuously running process with a distinct birthdate and unique memories - will be incredibly cheap in the new world. I doubt the machines will attach much ethical weight to death as we think of it. So even if you copy/upload your brain into the cloud, destructively or otherwise, you might not last very long as a distinct entity - though due to the increased speed of thought, you might live several subjective lifetimes before your newly spawned process/consciousness ends. There will still be distinct entities, due to locality of memory and the speed of light limiting how quickly information can be transmitted and processed; even so, their greatly enhanced speed and communicative ability (copying thoughts/brains, the ability to grok and employ a much greater diversity of suitable conflict-resolution protocols/messaging schemes/algorithms) might make them seem hive-mind-like to us.
@Aziz0938 · 1 year ago
Sounds like an easy way for AI to take control of ur mind
@neilwng · 1 year ago
I've not been convinced it's possible to fuse with machines; I would very much appreciate a counterargument, since I've been thinking about this alone for a while. The human part and the machine parts remain separate, so I don't see how fusing is any different from using ChatGPT (albeit with higher communication bandwidth). At best, your brain's computation just gets diluted to nothingness when you consider the total processing of the "fused" system. Rather than being your own person, you are 0.001% of a fused being.
@darklordvadermort · 1 year ago
@neilwng Also note that the digital you would think much faster than the physical you, never sleep, and could easily augment itself, so it would probably diverge from your personality quite rapidly by human standards.