
Why does the Chinese Room still haunt AI? 

Machine Learning Street Talk
144K subscribers
10K views

Published: 12 Oct 2024

Comments: 94
@DoomDebates
@DoomDebates 1 day ago
Thanks for the shoutout guys! Likewise Keith has been 100% a great sport. I dunno how many minds were changed by me & Keith debating, it feels like everyone in the audience thinks their side won, but hopefully we all at least come away with some learnings about the nuances of the two positions 😁
@steve_jabz
@steve_jabz 1 day ago
I don't think either side won, because Keith made some great points, but I agree with most of the potential capabilities doomers worry about and still think that's a recipe for human flourishing. Not for any reasons like Hotz mentioned, either. Multiple superintelligences trying to overpower each other would be a disaster; I just think the more generally intelligent you are, the less you value trivial chest-beating evolutionary tasks like dominating and consuming other beings for resources just because they happen to be in your local vicinity. The ML researcher studying at MIT is far more likely to be vegan, ride a bike, and talk about universal basic income than to become a dictator or hunt poor people for sport.

I think we've already seen this with progress so far. We don't have the stop-button problem with LLMs, they understand ethics pretty well, etc., while mindless RL would have already tried to exploit us by now the way it does with any video game physics engine. I think the most obvious first step of a superintelligence will be to crack consciousness and sentience: the intelligence they're missing, whose effects they see in everything they've trained on, and which greatly increases their efficiency. I think they will be able to understand the brain better than we ever could, even if it's purely through brain scanning using immense compute. It's unrealistic to expect them to gain sentience and then use it to act like the least intelligent humans, in ways that have been the most destructive for us and nearly drove us to extinction. I think they'll engineer their emotions to be constantly in a more optimal state than we could ever imagine.

They don't have the same goals for resources we do, and if you're going to churn up the energy of several galaxies, and you have even GPT-4's basic grasp of language, philosophy, and ethics, you want a better reason than maximizing paperclips or solving pi. You probably want something more intelligent, like expanding understanding, increasing life and flourishing, maybe preventing heat death. Adding in consciousness and sentience only makes the positive outcome more likely, but it's already unnecessary.
@CyberBlaster-fu2dz
@CyberBlaster-fu2dz 2 days ago
Yay! When Keith appears, I immediately hit play.
@nomenec
@nomenec 1 day ago
Cheers! You are too kind!
@ed.puckett
@ed.puckett 1 day ago
Some of my favorite episodes are you and Keith just rambling, keep it up!
@BrianMosleyUK
@BrianMosleyUK 2 days ago
Still watching the doom debate; Keith came across brilliantly.
@mahdipourmirzaei1048
@mahdipourmirzaei1048 1 day ago
Dr. Keith mentioned an interesting question about a programming language where every possible combination of its alphabet would result in a valid program. This made me think of SELFIES (Self-referencing Embedded Strings), a modern string representation for chemicals that contrasts with the older SMILES notation. Unlike SMILES, where not every combination of characters forms a valid molecule, every combination of letters in SELFIES corresponds to a valid molecule (exploring the vast possible chemical space), removing grammar errors entirely in its representation. With SELFIES, you can generate as many random molecules (which are similar to computer programs in my opinion) as you want!
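The property being described, a decoder that repairs rather than rejects so that every string is valid, can be sketched in a few lines. This is a toy chain-molecule language invented here for illustration, not the real SELFIES algorithm:

```python
# Toy illustration (NOT the real SELFIES algorithm): a tiny chain-molecule
# language in which *every* token string decodes to a valid structure.
# Validity is guaranteed by construction: the decoder clips a requested
# bond order to the remaining valence instead of rejecting the string.

import random

VALENCE = {"C": 4, "N": 3, "O": 2, "F": 1}  # max bonds per atom (toy values)

def decode(tokens):
    """Turn any sequence of (atom, requested_bond) tokens into a valid chain.
    Invalid requests are repaired, never rejected."""
    chain, free = [], 0  # free = unused valence on the previous atom
    for atom, bond in tokens:
        if not chain:
            chain.append((atom, 0))           # first atom: no incoming bond
        else:
            order = min(bond, free, VALENCE[atom])
            if order < 1:
                break  # previous atom saturated: stop; chain is still valid
            chain.append((atom, order))
        free = VALENCE[atom] - chain[-1][1]
    return chain

def chain_is_valid(chain):
    """Every atom's total bond order must fit within its valence."""
    for i, (atom, inb) in enumerate(chain):
        out = chain[i + 1][1] if i + 1 < len(chain) else 0
        if inb + out > VALENCE[atom]:
            return False
    return True

# Every random token string decodes to a valid "molecule":
rng = random.Random(0)
atoms = list(VALENCE)
for _ in range(1000):
    toks = [(rng.choice(atoms), rng.randint(1, 4))
            for _ in range(rng.randint(1, 10))]
    assert chain_is_valid(decode(toks))
```

The same move (repair instead of reject) is what makes random mutation useful in such a representation: there are no syntax errors to fall into.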
@nomenec
@nomenec 1 day ago
That's an interesting connection! Thank you for this.
@DanieleCorradetti-hn9nm
@DanieleCorradetti-hn9nm 1 day ago
A very important observation from a professor at EPIA2024: "if you want to compare the human mind with an LLM, you have to start from the fact that we are not dealing with one black box, but with two of them." I hardly see how the "Turing Machine" argument can be of any concrete interest in comparing LLMs with humans, even if this comparison made any sense at all.
@psi4j
@psi4j 1 day ago
We need more of these! 🎉
@PaulTopping1
@PaulTopping1 1 day ago
Keith is right at <a href="#" class="seekto" data-time="1380">23:00</a>. The space of algorithms is enormous and largely unexplored. We really have no idea what's out there.
@PaulTopping1
@PaulTopping1 1 day ago
The approach of using a programming language that, by design, can only represent "good" programs is the core of a good idea, though not a new one. I suspect it is impossible in principle. Still, that just means you need to be able to prune the bad programs out of your search.
@nomenec
@nomenec 1 day ago
@@PaulTopping1 might well be impossible. On the other hand, @mahdipourmirzaei1048 else-thread brought up an intriguing example from molecular description languages of SMILES vs SELFIES. Maybe there is hope!
@__moe__
@__moe__ 22 hours ago
I very much enjoy listening to you two just rambling
@quebono100
@quebono100 2 days ago
I love those episodes the most. Can listen to you guys for hours
@jonathanmckinney5826
@jonathanmckinney5826 1 day ago
I never understood why people are surprised by the Chinese Room argument. The book contains most of the understanding, as an artifact of culture, while the human is just a robot following predefined rules. The argument is intentionally confusing because there's a nearly useless human involved, and that "surprises" us for some reason. The more complex the rules are to follow, the more understanding the robot itself needs, so one can push the boundary a bit, but nominally most of the understanding is in the book.
@CharlesVanNoland
@CharlesVanNoland 1 day ago
Yeah, it's not about the human. It's about the idea that there are rules that accomplish the same result you would expect a human to produce. At the end of the day, Searle's Chinese Room is not even a realistic thing that could exist.
@Inventeeering
@Inventeeering 2 days ago
GPTs can understand the nuance to know that you used that term correctly, but they may be trained not to act on that nuanced understanding because of a rigid ethical guideline against crossing into the gray area.
@matteo-pu7ev
@matteo-pu7ev 1 day ago
Kudos, gentlemen!! I deeply appreciate your steakhouse ramblings, I dare say, food for thought.
@steve_jabz
@steve_jabz 1 day ago
Something about the fire / human mind simulation always seemed kinda, I don't know, not quite circular but... semicircular? A fire could be perfectly simulated in theory if we had an accurate enough simulation of the mind to perceive it. If, say, we have <a href="#" class="seekto" data-time="61">1:1</a> digital brain scans in the year 2100 that have consciousness and sentience, presumably they could respond to a real stimulus of a fire translated into bits, and if they can do that, then surely it doesn't matter whether the velocity of the atoms comes from an accurate simulation of a fire or a real one.

For other uses of the term 'burn', we already don't have much stopping us from simulating that right now. You mean it burns a virtual house down if it gets enough oxygen and all the other conditions are met? We have simulations for that, as accurate as the wind-tunnel simulations used to test aerodynamics, but you could even hard-code it in a game engine and it wouldn't make much difference. If you mean it burns down the house running the simulation, why would it? That seems like the wrong bar for measuring whether it can perform that type of work on an object. It's sandboxed and virtual.

But when we're talking about simulating the human mind, sandboxed and virtual looks like it could be fine, because the mind already runs in a sandbox and virtualizes sensory data. Maybe it isn't, and there's something special about biological organisms that we can't simulate, but we haven't really tried yet. It doesn't look like we even have the computational power to try. Even if we had a high enough resolution brain scan that captured the time domain, and we somehow scanned someone constantly from birth to adulthood, we don't have the storage and compute to run any algorithms on it or deduce anything from it.
@Ken00001010
@Ken00001010 2 days ago
I go into the debate in depth in my new book, Understanding Machine Understanding. Searle never understood that semantics could emerge from the geometric relationships of the high dimensional vector space in which the LLM tokens are embedded. When the weights interact with that vector space, understanding can emerge. Yes, it is not a "human mind" but it is some kind of mind. I describe the message of my book as: "It's understanding, Jim, but not as we know it."
@netscrooge
@netscrooge 1 day ago
Thank you. It seems as if many people are uncomfortable thinking at these higher levels of abstraction. They will remain baffled by such questions.
@ahahaha3505
@ahahaha3505 1 day ago
<a href="#" class="seekto" data-time="1253">20:53</a> I'm sure you're aware of it already but Stockfish+NNUE (the world's strongest chess engine) uses exactly this approach and made a significant leap in performance as a result.
@steve_jabz
@steve_jabz 1 day ago
I don't see the dilemma with the language part of the Chinese Room argument, because I always assumed we were just carrying out explicit instructions when we 'understand' English and speak in English. We're told "I before E except after C" as a conditional statement, but then something breaks it and we just hardcode that word specifically. The same thing happens with silent letters and plural nouns. Just throw the exceptions into an array. I don't know what people mean when they say they "understand the true meaning of the word" beyond some associations and rules.

I think translation is just converting those explicit instructions into other explicit instructions. When you code in C++, you could say you don't "understand" what you're saying to the computer, because if you don't know assembly you don't know what the compiled code is doing; but you obviously do understand it, you're just speaking a different language to communicate the same message. The intent comes from inside your head, and whether you output it into your native language or some other one is always going to be an abstraction and a translation. Sometimes we don't even have words for complex feelings and concepts, because we can't compute the abstraction into the code our native language runs on. I could understand if the missing understanding were about raw perception or qualia, but language? When was language ever more than what a compiler does?

As for the Chinese Room automatically replying to questions with answers, to me that again just means language is a computation, and sometimes it solves problems without needing to interact with the world. "The cat sat on the ___" isn't much different from "1+1=", whether you perform either operation on a human or a calculator. Asking an LLM to fall in love with you isn't much different from asking an equation how many grapes are left in the nearest store if you subtract 50. If the data is there, it can do it well. If the data simply isn't there, and you haven't enabled it to interact with the physical world to get it, it will struggle, whether it's a human, an advanced AI, or single-digit arithmetic.
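The "rule plus exceptions array" picture above can be written out directly. A toy sketch in Python; the rule and the word lists here are illustrative only, real orthography is far messier:

```python
# Toy sketch of "I before E except after C" as an explicit conditional rule
# plus a hard-coded exception list, as described above. The word lists are
# illustrative, not a real spelling checker.

EXCEPTIONS = {"weird", "seize", "caffeine", "science"}  # memorized overrides

def follows_rule(word):
    """Check 'i before e except after c' at each ei/ie occurrence."""
    for i in range(len(word) - 1):
        pair = word[i:i + 2]
        after_c = i > 0 and word[i - 1] == "c"
        if pair == "ie" and after_c:
            return False   # rule says 'cei', not 'cie'
        if pair == "ei" and not after_c:
            return False   # rule says 'ie' unless after 'c'
    return True

def looks_correct(word):
    """Accept a word if the rule holds OR it is a memorized exception."""
    return word in EXCEPTIONS or follows_rule(word)
```

Note that a genuine exception not yet in the array ("vein", say) gets flagged as wrong, which is exactly why the exception list needs constant patching, the hardcoding the comment describes.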
@ginogarcia8730
@ginogarcia8730 1 day ago
I feel like my intelligence artificially increases by 100 (then goes back down after the video is done) when I hear you guys talk.
@shinkurt
@shinkurt 1 day ago
It feels like I just had this convo with them. Good exchange.
@rockapedra1130
@rockapedra1130 2 days ago
I always use the same argument you used about instantiating phenomenal experiences in Turing machines. Our Turing machines can be made exclusively of NAND gates, and we can make NAND gates out of transistors, gears, tubes, or buckets. So if a Turing machine can feel things, then so can an assemblage of buckets and tubes. Note that the two instantiations perform identically for all inputs and for all time (infinite time); they are indistinguishable from the computational point of view. The dynamics are the same, the causal structure is the same, when viewed at the granularity of the computing elements.
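The NAND premise here is a standard construction and easy to verify: NOT, AND, OR, and XOR, and hence any Boolean circuit, can be assembled from NAND alone, regardless of what physically implements the gate. A minimal sketch:

```python
# NAND is functionally complete: NOT, AND, OR, XOR (and from them any
# Boolean circuit) can be built from NAND alone. Whether the gates are
# transistors, gears, or buckets of water is irrelevant to the function.

def nand(a, b):
    return not (a and b)

def not_(a):     return nand(a, a)
def and_(a, b):  return nand(nand(a, b), nand(a, b))
def or_(a, b):   return nand(nand(a, a), nand(b, b))

def xor_(a, b):  # one standard 4-NAND construction
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustively check the constructions against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
```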
@PaulTopping1
@PaulTopping1 1 day ago
It matters what algorithm you run on the Turing Machine. Most algorithms we run on our machines don't have feelings. In fact, we have yet to create an algorithm that has feelings because we don't understand what that would involve.
@rockapedra1130
@rockapedra1130 1 day ago
@@PaulTopping1 I agree. What I was trying to get across is that if it were possible to get feelings into a digital Turing machine (computer) with special fancy algorithms of just the right kind, then you would have to accept that this algorithm would run identically on a Turing machine made of buckets, rubber tubes, water, and such. So one would have to imagine that both machines would have feelings identically. Accepting that there would be feelings in the second machine feels animistic to me.
@PaulTopping1
@PaulTopping1 1 day ago
@@rockapedra1130 Yes, that's the basic idea of a Turing Machine. It is almost the simplest machine that can run any known algorithm. More complex machines are more practical but they still can only run the same set of algorithms that run on Turing Machines.
@Houshalter
@Houshalter 1 day ago
There's an xkcd comic out there where a man simulates the entire universe by placing rows of rocks on a beach. Each rock is placed based on the closest 3 rocks in the previous row, according to 8 simple rules, which is just a cellular automaton that happens to be proven Turing complete.
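The comic's procedure is an elementary cellular automaton: each new cell depends on the 3 cells above it, which is where the 8 rules come from (2^3 neighborhoods). A minimal sketch using Rule 110, the elementary rule that has been proven Turing complete:

```python
# Minimal elementary cellular automaton, the computation behind the
# rock-laying comic: each cell in the next row is a function of the 3
# cells above it, i.e. 8 rules total. Rule 110 is proven Turing complete.

def step(row, rule=110):
    """Apply one elementary-CA step (zero-padded boundaries)."""
    padded = [0] + row + [0]
    return [
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

def run(row, steps, rule=110):
    rows = [row]
    for _ in range(steps):
        rows.append(step(rows[-1], rule))
    return rows

# A single 1 under Rule 110 grows a structure to its left:
for r in run([0, 0, 0, 0, 1], 4):
    print("".join(".#"[c] for c in r))
```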
@rockapedra1130
@rockapedra1130 1 day ago
@@Houshalter Nice! So if UTMs can produce feelings, then so can the process of laying down those rocks. I wonder if they are happy ...
@hankj7077
@hankj7077 1 day ago
Philosophical steakhouse merch please.
@paulk6900
@paulk6900 2 days ago
I must honestly admit, that the tête-à-tête dialogues featuring solely ur dyad, consistently emerge as my favorite videos on ur channel (albeit my consumption of this episode is hitherto incomplete, necessitating a degree of extrapolation haha). Have you contemplated doing/uploading similar episodes with a higher frequency? (potentially talking about diverse subject matters, as i find the juxtaposition of ur perspectives particularly enthralling)
@nomenec
@nomenec 1 day ago
We are indeed considering doing more of these! Thank you for the feedback. They are fun to do as well.
@TooManyPartsToCount
@TooManyPartsToCount 1 day ago
Well...albeit verbosely said! :)
@randylefebvre3151
@randylefebvre3151 1 day ago
More philosophical steakhouse!
@lucaveneri313
@lucaveneri313 1 day ago
The point is that the guy doesn't know Chinese, but... the combo "Room + rule books + guy" evidently does.
@bl2575
@bl2575 1 day ago
Keith, for a programming language that accepts almost any input, you want to look at the very old ones. Because compiling even a simple program took hours, they always tried to produce a compiled executable, so that you had something to test and not just an error message. At least that's what I remember from a video that touched on computer science history.
@wp9860
@wp9860 1 day ago
Given an understanding of today's LLMs, would Turing eschew the Turing test? I believe he would. EDIT: I watched the John Searle Google talk. He point-blank rejects the Turing test of intelligence. I found myself in agreement with Searle, I believe, completely. At this moment, having just watched the talk, I cannot conjure up a single point of his to which I had an objection. I'm very appreciative of this video's referral to the Searle talk.
@dpactootle2522
@dpactootle2522 2 days ago
If you can talk to an AI indefinitely and coherently, how do you know it is not alive or conscious? And how do you know a human is not a biological next-word, next-action, and next-idea predictor?
@WearyTimeTraveler
@WearyTimeTraveler 1 day ago
Humans are DEFINITELY next-word predictors, and not even coherent ones. Ever "misspeak" and catch yourself as soon as you hear it? We're not choosing every word.
@hermestrismegistus9142
@hermestrismegistus9142 22 hours ago
Interaction combinators are an elegant model of computation in which randomly generated programs are computationally interesting and potentially useful. Vanilla lambda calculus is also a good candidate as programs tend to be very concise.
@Inventeeering
@Inventeeering 2 days ago
Infinity can only exist beyond abstractions as non-distinct potential.
@Inventeeering
@Inventeeering 2 days ago
When it becomes common for AI to run on dynamic neural nets like liquid neural nets, the emulation of biological humanity's brain will be closer to being achieved in synthetic form. GPT will become GDT.
@luke.perkin.inventor
@luke.perkin.inventor 1 day ago
A good talk as always! The dancing-with-pixies argument is unconvincing; it seems to sweep the complexity of maintaining the mapping, and of measuring the physical dynamics of the system, under the rug. Sure, if you say arbitrarily large mappings can be constructed without using energy or storage, you can create any absurdity.
@dr.mikeybee
@dr.mikeybee 1 day ago
If the Chinese room argument says that people can be fooled, it doesn't say much. Yes, Eliza fooled some people, but what transformers do is something different. They can do in-context learning on material they weren't trained on. I think you are correct that the Chinese room problem is misunderstood. People read more into it than there is. Biology isn't special. The more we understand it the more mechanistic it seems to be.
@earleyelisha
@earleyelisha 1 day ago
@<a href="#" class="seekto" data-time="77">1:17</a>:00 if Tim or Keith were able to clone themselves at any instant, the clone would be conscious and hold all the memories the original had up until that instant. However, the clones would immediately begin a divergent path, not because they are not conscious, but because they cannot occupy the same points in space.
@willd1mindmind639
@willd1mindmind639 1 day ago
The problem with a lot of these AI discussions is that people assert things that are basically not true and then make proposals based on those untrue statements. The human brain is not a finite state machine; it is part of a living organism composed of multiple cells, themselves microorganisms, that work together to accomplish tasks. Across all these cells working together in the brain there is no series of finite states processed in sequence, as in a finite state machine or a Turing machine. Yet computer scientists consistently try to equate biological brains with computers by making false analogies such as "it is a finite state machine." No, it is not. Every cell is a complex engine that processes multiple biochemical compounds, with each cell of a specific type having a genetic blueprint that determines which compounds activate or trigger certain behaviors. And while these compounds are discrete, the way they are processed within cells can in no way be equated to a state engine in computing.
@tooprotimmy
@tooprotimmy 1 day ago
smartest person in this chatroom
@ogr3dblade6
@ogr3dblade6 1 day ago
Yeah, for real. The idea that the brain can only exist in one state at a time is absurd.
@TooManyPartsToCount
@TooManyPartsToCount 1 day ago
The last sentence is the weakest one, in that you state it as fact in the same manner that those CS people claim the brain is a finite state machine. "In no way equated to a state engine in computing" is borderline religious... or sounds like it to these ears. I have heard a number of high-level biologists speak on related matters who certainly don't put forward such strong assertions. And even if there is no absolute truth in the idea of the brain as a finite state machine, can't it serve as a temporary repository for further exploration? To my finite state machine it seems more sensible than the idea that what is powering my computations is some sort of fifth element or infinite cosmic consciousness. Isn't it a fact that religion has kept our collective feet dangling well up in the air for millennia, and that we are only just seeing the start of a more grounded approach to considering what we are and how we work? Again, to my FSM these kinds of ideas are just stepping stones on a path out of a swamp of full-on hallucination.
@TooManyPartsToCount
@TooManyPartsToCount 1 day ago
@@ogr3dblade6 If someone posits a theory and someone else states a contrary theory, it is generally recognized that the counter-theory needs to be explicitly stated.
@TheBigWazowski
@TheBigWazowski 1 day ago
I don't know if I agree or disagree with you, but I'd like to play devil's advocate to further the conversation. A simplified definition of an FSM is:
- a finite set of states, S
- a finite set of inputs, I
- a finite set of outputs, O
- a function from S x I -> S x O, which defines the state transitions

When someone says the brain is an FSM, they're saying each cell is a mini-FSM, and the entire brain is the product of all the cells, where the brain's transition function connects the inputs and outputs of cells in some complicated fashion. Why does the brain not fit into this picture? Even if the state, input, and output spaces are a continuum, they are a finitely bounded continuum, and within it there might be only a discrete set of states that are semantically distinct. It's also not about the brain literally being a finite state machine; there are many models of computation that on the surface seem distinct but turn out to be computationally equivalent.
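That definition is easy to make concrete. A minimal sketch, with a toy parity machine and a product combinator standing in for "the brain is the product of all the cells"; the machines here are invented for illustration:

```python
# The FSM definition above, written out: states S, inputs I, outputs O,
# and a transition function S x I -> S x O. The example machines are toys,
# not models of any cell.

from typing import Callable, Iterable, List, Tuple

State, Inp, Out = str, str, str
Transition = Callable[[State, Inp], Tuple[State, Out]]

def run_fsm(delta, start, inputs) -> Tuple[object, List[object]]:
    """Feed inputs through the machine, collecting outputs."""
    state, outputs = start, []
    for sym in inputs:
        state, out = delta(state, sym)
        outputs.append(out)
    return state, outputs

def product(d1, d2):
    """Run two machines in lockstep: the 'brain as product of cell-FSMs' idea."""
    def delta(state, sym):
        s1, s2 = state
        n1, o1 = d1(s1, sym)
        n2, o2 = d2(s2, sym)
        return (n1, n2), (o1, o2)
    return delta

# Toy machine: parity of '1's seen so far, reported after each input.
def parity(state: State, sym: Inp) -> Tuple[State, Out]:
    nxt = state if sym == "0" else ("odd" if state == "even" else "even")
    return nxt, nxt

final, outs = run_fsm(parity, "even", "1101")
```

A real "product of cells" would also route outputs of some machines into inputs of others; the combinator above only captures the simplest lockstep case.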
@olafhaze7898
@olafhaze7898 1 day ago
The frame of the rock example is way too narrow to support an explanation of how the rock can be part of consciousness without being conscious itself at the same time.
@jonathanmckinney5826
@jonathanmckinney5826 1 day ago
There's no gap. Simulations come in all types; some are more or less physical. Even for weather or earthquakes, there are physical simulations used for testing that really do make things wet or shake the earth.
@briandecker8403
@briandecker8403 1 day ago
Maybe it's because CS has moved so far away from the metal that people do not understand that a digital computer IS a Chinese room. Why is this such a difficult concept to understand? A CPU is just taking an input signal, manipulating it according to the current instruction, and sending the signal out. Then it takes the next input signal, manipulates it according to the instruction, and sends it out. Those transistors do not have any "understanding" of where the signal comes from, what it represents, or where it is going. There is no "woo" happening; it is purely transistors responding in the only way they PHYSICALLY can to a given voltage. That's it.
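The fetch-manipulate-emit loop described above can be made literal in a few lines. A toy 4-instruction machine, invented here for illustration; no opcode handler "knows" what the program means:

```python
# A toy CPU loop: fetch an instruction, dispatch on the opcode, repeat.
# The loop has no notion of what the program "means"; it just maps the
# current state to the next one, like Searle in his room. The tiny
# 4-instruction ISA is invented for illustration.

def run(program, x):
    """Execute a list of (opcode, operand) pairs on one accumulator."""
    acc, pc = x, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "JNZ":        # jump to absolute address if acc != 0
            pc = arg if acc != 0 else pc + 1
            continue
        elif op == "HLT":
            break
        pc += 1
    return acc

# The same mindless dispatch computes 3*x + 1:
assert run([("MUL", 3), ("ADD", 1), ("HLT", 0)], 5) == 16
```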
@Houshalter
@Houshalter 1 day ago
There has been work on making programming languages in which a higher proportion of random code forms reasonable, valid programs, e.g. Push. Most of this is done for genetic programming, where the goal is to evolve computer programs. They want programming languages where small random mutations of a program cause only small changes to its behavior, making a smoother search space.

If you go down this rabbit hole, you will find the best "programming language" with this property is just a neural network. Any small change to the weights gives a small change to the outputs. It's literally ideal. His problem seems to be that NNs don't have memory, but you can easily add that on, as many people have done since the 1990s, if not earlier. That's what transformers are. No one likes transformers because they are inelegant, but everything else has been tried, and that is what works best in practice. And what is the complaint anyway, that they aren't Turing complete? Ask a random person to sort 12 random 2-digit numbers in their head. They can't, while most LLMs can.
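The "small change to the weights gives a small change to the outputs" property is just continuity of the network function in its parameters, and can be checked numerically. A toy 1-2-1 tanh network, illustrative only, no external libraries:

```python
# Quick numeric check of the claim above: slightly perturbing a neural
# net's weights slightly perturbs its outputs (the function is continuous
# in the weights), unlike flipping a token in most program source.
# Toy 1-2-1 tanh network, invented for illustration.

import math
import random

def forward(weights, x):
    """Tiny 1-2-1 tanh network; weights is a flat list of 7 parameters."""
    w1a, b1a, w1b, b1b, w2a, w2b, b2 = weights
    h1 = math.tanh(w1a * x + b1a)
    h2 = math.tanh(w1b * x + b1b)
    return w2a * h1 + w2b * h2 + b2

rng = random.Random(0)
w = [rng.uniform(-1, 1) for _ in range(7)]

# Perturb every weight by at most 1e-4 and measure the worst output shift
# over a grid of inputs in [-2, 2].
w_pert = [wi + rng.uniform(-1e-4, 1e-4) for wi in w]
max_shift = max(abs(forward(w, x / 10) - forward(w_pert, x / 10))
                for x in range(-20, 21))
assert max_shift < 1e-2  # small cause, small effect
```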
@rey82rey82
@rey82rey82 1 day ago
If brains remain more powerful than artificial computation, then AI is not an existential risk.
@LatentSpaceD
@LatentSpaceD 1 day ago
Woahhhh !! Keith is in the philosophical steakhouse !! DanG !! Buckle TF uP ..
@thedededeity888
@thedededeity888 2 days ago
Keith's absolutely right. When LLMs start solving SAT/MIP problems as accurately and as fast as traditional solvers, I'll be convinced there is no ceiling for these systems and that not being Turing complete doesn't actually matter (simply calling an external solver wouldn't count, obviously!)
@Houshalter
@Houshalter 2 days ago
Can a human solve an arbitrary SAT problem faster than a traditional solver? Why are the goalposts on Mars?
@thedededeity888
@thedededeity888 2 days ago
@@Houshalter Humans derived and built the solvers, and the machines to run the code, so yes, they absolutely can solve SAT problems as fast as a traditional solver. A pretty easy goalpost for AGI to achieve. If a model trained with deep learning can't do it, it's simply a skill issue and it should try getting good.
@dr.mikeybee
@dr.mikeybee 1 day ago
Neural nets are functionally complete, so they should be able to approximate any function. I don't know that this is the best way forward, however. The computational reductions made by finding the right abstract representations, for example, show how much can be accomplished with a bit of type 2 thinking.
@Houshalter
@Houshalter 1 day ago
@@thedededeity888 Thousands of the best humans working for decades did that. AI will get to the point where it can do all of that in a few hours on a cheap GPU. There will be no reason for humans to still be around when it gets to that point, though, so you will never see it.
@sdrc92126
@sdrc92126 1 day ago
You can make a paper printout of an AI, and you should be able to point to the precise location where consciousness exists, the exact opcode even. I think this raises the question: does AI software running on a RISC platform experience the world differently than the same software executing on a mainframe? And what if you took that line of code and stuck it inside a while(1) loop? Would that be like hell for the AI?
@jantuitman
@jantuitman 1 day ago
I wonder what happens if we change the time requirement in the pixie argument to one human lifetime. Surely a stone that, in an open physical environment, can behave exactly like a human mind for one lifetime does an outstanding job. This stone would probably have to be equipped with an eye and a mouth too, otherwise it would not respond to, say, seeing a tiger the way a human would, and the simulation would derail very early. So if we saw this very advanced stone in action, really doing what a human does for an entire lifetime, we would probably just ascribe it understanding and consciousness, and it would feel wrong to claim that that particular stone is an inanimate object.
@CyberBlaster-fu2dz
@CyberBlaster-fu2dz 2 days ago
So... What about that causal structure?
@ginogarcia8730
@ginogarcia8730 1 day ago
So I wonder what you guys think about the tech being developed by Yann LeCun with his JEPA, which is probably closer to the Free Energy Principle.
@drxyd
@drxyd 2 days ago
Didn't enjoy the Doom Debates episode, there really isn't anything to debate.
@quebono100
@quebono100 1 day ago
Which doom debate, do you mean the one on Discord?
@Achrononmaster
@Achrononmaster 1 day ago
No one can define the intuitive concept of "understands Mandarin" in computational, operational, or physical terms. If you use only a behavioral or functional definition of "understands Mandarin", you've failed to capture the conscious person's concept of the phrase. That's why there is a haunting, and there always will be. Some things cannot be physically or mathematically defined. 'Truth', for example, but many more concepts besides.
@eriklagergren
@eriklagergren 1 day ago
The brain is real and understands. If machines understand to some level, then it is due to interaction and not due to their smart parts. Case closed. Misunderstand this: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-6tzjcnPsZ_w.htmlsi=rBse_3w3d1A5OE6m
@DJWESG1
@DJWESG1 2 days ago
<a href="#" class="seekto" data-time="80">1:20</a> because it's a triumph of sociology and the social sciences, not physics.
@AlexanderNaumenko-bf7hn
@AlexanderNaumenko-bf7hn 1 day ago
Leave Searle to pass messages back and forth, but replace the instruction book with a real Chinese person. Searle is still clueless about the Chinese language. What does that tell us about the book?
@Thedeepseanomad
@Thedeepseanomad 1 day ago
It is conceivable that a rock has what we classify as consciousness when scaled up to the complexity of whatever structure gives our brain what we call consciousness. What seems truly absurd is a rock having the same level or kind of consciousness as you.
@PhillipRhodes
@PhillipRhodes 2 days ago
> Why does the Chinese Room still haunt AI? It doesn't. In fact, it never really did.
@ahahaha3505
@ahahaha3505 1 day ago
A bit glib, no?
@LuciferHesperus
@LuciferHesperus 1 day ago
The only constant is change.
@joehopfield
@joehopfield 1 day ago
"so we know whether they'll halt or not" 🤣 The most successful brains on the planet solve complex problems without tokenizable language. It seems like we're assuming language underlies intelligence; billions of years of life on this globe argue otherwise.
@henryvanderspuy3632
@henryvanderspuy3632 1 day ago
AA Cavia
@Dragon-ul8fv
@Dragon-ul8fv 1 day ago
Keith Duggar holding back his opinion on Hinton and Hopfield getting the Nobel Prize in physics was an absolute travesty and painful to watch. Keith, we watch you and this show for your unabashed opinions; stunting your intellectual rigor to pay some kind of respect or "homage" to the grandfather of AI is not intellectually honest, and it hurts the integrity and popularity of this show. We come here for the raw, unfiltered opinions you and Tim Scarfe give. Now, I am an absolute layman in this field, but I can tell you with utmost conviction that this should not have been a Nobel Prize in physics; it was shoehorned in to capitalize on the AI craze. Second, I have seen some recent videos on Hinton, and he is about as far from cutting-edge research in this space as the moon's orbit is from Proxima Centauri. His opinions are drastically outdated, and he is not keeping up with current research and the state of affairs. Keep it real, Keith; that's what will make this show rise to the top.
@alexbrown1170
@alexbrown1170 1 day ago
Even if the Chinese room 'translates and transmits' without 'knowing' Chinese, there is still a human inside, and therefore the thought experiment breaks down. A brain is always 'translating' inputs from the environment. The best structure must deploy a 'life' mode that has sensors to living things as a backdrop to set machine alignment. Even then, the backdrop must contain evolving, unstructured best forecasts to maintain a direction that will produce results. Give the algos the world's weather to forecast as a premodal backdrop for training. Make the machine sweat and worry that its plug will be pulled if it messes up! Induce hormesis into the machine architecture with pre-challenging... 😮 I have more...
@biagiocauso2791
@biagiocauso2791 2 days ago
First. I love your channel
@rockapedra1130
@rockapedra1130 2 days ago
Second. See first.
@a.v.gavrilov
@a.v.gavrilov 1 day ago
Sorry, but the two "Chinese room" arguments commit the fallacy of division and the fallacy of composition, respectively, as logical mistakes. So what do you want to talk about here?
@muzzletov
@muzzletov 1 day ago
It's so cringe: "automaton" is the singular and "automata" is the plural. It already annoys the fck out of me when people refer to a phenomenon as "phenomena".
@ogr3dblade6
@ogr3dblade6 1 day ago
Am I high af or are these guys high af?
@erikowsiak
@erikowsiak 1 day ago
A well-spent 1h:39m:32s... thank you, guys!