
Artificial Intelligence Isn't Real 

Adam Something
Subscribe · 1.3M subscribers
419K views

The first 100 people to use code SOMETHING at the link below will get 60% off of Incogni: incogni.com/something
This video has been approved by John Xina and the Chinese Communist Party.
Check out my Patreon: / adamsomething
Second channel: / adamsomethingelse
Attribution (email me if your work isn't on the list):
unsplash.com/photos/WX5jK0BT5JQ
unsplash.com/photos/luseu9GtYzM
unsplash.com/photos/-olz676A3IU
unsplash.com/photos/3OiYMgDKJ6k
unsplash.com/photos/6MsMKWzJWKc
unsplash.com/photos/rEn-AdBr3Ig
commons.wikimedia.org/wiki/Fi...

Science

Published: 19 Jun 2023

Comments: 3.1K
@AdamSomething · 1 year ago
Thanks for tuning in to today's video! The first 100 people to use code SOMETHING at the link below will get 60% off of Incogni: incogni.com/something
@qwertyuiopchannelreal296 · 1 year ago
Nice video from someone who has no expertise in AI. Humans are no different from AI; we are just a bunch of inputs and outputs. Very soon, neural networks will be on par with or surpass humans in general intelligence because of improvements in their architecture.
@TomTKK · 1 year ago
​@@qwertyuiopchannelreal296 Spoken like someone who has no expertise in AI.
@qwertyuiopchannelreal296 · 1 year ago
@@TomTKK Yes, but generalizing AI as not being "intelligent" is just wrong. You could make the same point about human brains, because they receive inputs and act on those inputs to produce outputs, which is no different from AI. In fact, the architecture of neurons in neural networks mimics the function of biological neurons.
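For readers curious how loose that "mimics biological neurons" analogy is: an artificial "neuron" is nothing more than a weighted sum passed through a squashing function. A minimal sketch (illustrative only; the weights below are made up):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of its inputs plus a
    bias, squashed through a sigmoid so the output lands in (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs with hand-picked (made-up) weights
print(round(neuron([1.0, 0.5], [0.6, -0.4], bias=0.1), 3))  # → 0.622
```

Real networks stack millions of these and learn the weights from data; the resemblance to biological neurons ends at the "sum inputs, fire output" caricature.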
@johnvic5926 · 1 year ago
@@qwertyuiopchannelreal296 Oh, nice. But thanks for admitting that anything you say on the topic of AI has no actual scientific foundation.
@relwalretep · 1 year ago
@@qwertyuiopchannelreal296 It's almost as if you wrote this before getting to the last 60 seconds of the video.
@mateuszbanaszak4671 · 1 year ago
I'm the opposite of *Artificial Intelligence*, because I'm *Natural* and *Stupid*.
@Kerbalizer · 1 year ago
Rel
@GiantRobotIdeon · 1 year ago
Artificial Intelligence when Natural Stupidity walks in: 😰
@jacobbronsky464 · 1 year ago
One of us.
@QwoaX · 1 year ago
Minus multiplied by minus equals plus.
@aganib4506 · 1 year ago
Realistic Stupidity.
@SurfingZerg · 1 year ago
As a programmer who studies AI: we almost never actually use the term "artificial intelligence"; we usually just say machine learning, as this more accurately describes what is happening.
@InfiniteDeckhand · 1 year ago
So, you can confirm that Adam is correct in his assessment?
@mdhazeldine · 1 year ago
But is the machine actually understanding, i.e. comprehending, what it's learning? If not, is it really learning at all? It seems to me like a parrot learning to repeat the words that humans say without understanding their meaning. The same as the Chinese room experiment Adam mentioned.
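The Chinese room thought experiment referenced above can be caricatured in a few lines of code: a rulebook maps symbol strings to symbol strings, and the operator following it understands none of them. The phrases in this table are placeholders invented for the example:

```python
# A "Chinese room" in miniature: the rulebook maps symbol strings to
# symbol strings; the operator understands none of them.
RULEBOOK = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你会说中文吗？": "会一点。",   # "do you speak Chinese?" -> "a little."
}

def room(symbols):
    # Pure lookup: no meaning is involved at any point.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "say it again"

print(room("你好"))  # → 你好！
```

From the outside, the room "speaks Chinese"; inside, it is only pattern matching. Whether that difference matters is exactly what the thought experiment disputes.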
@malekith6522 · 1 year ago
He is... mostly... and what the press usually talks about as AI is actually called AGI (Artificial General Intelligence), and we are currently far from implementing it.
@TheHothead101 · 1 year ago
Yeah AI and ML are different
@EyedMoon · 1 year ago
As an AI engineer, I don't 100% agree with this video. In fact, I think I agree with about 50% of it :p

There are some real threats simply because of how powerful it is to automate certain tasks with "AI". For example, news forgery has already proven to be a pretty easy task, as newsfeeds are highly formatted and easy to spam. Image generation in 2023 is of very high quality and helps create "fake proof" very quickly. AI is well suited to information extraction too, in cases where features and structures emerge from the sheer amount of data we deal with.

But in the media, "AI" is a buzzword used whenever people don't understand what they're talking about, and the things they talk about, like "machines becoming sentient", are just ludicrous.

So I'm not totally on board with Adam's analysis. He makes the point that there's a difference in perception between tech and media, but then he still mixes both aspects, imo. Especially the cat argument: of course we develop our reasoning from precise features, but we also have roughly the same training process as machines. Seeing the same features with the same feedback many times activates our neurons so often that the connections become prevalent, while AIs have neurons that compute features and reinforce their connections through feedback.

Oh, and for the "is the machine really understanding?" question: are you really understanding, or merely repeating patterns with only slight deviations caused by your environment and randomly firing neurons? I'm not sure anyone can answer that question yet.
@justpassingby298 · 1 year ago
Personally, what pisses me off is when someone takes one of those AI chat bots, gives it some random character from a show, and goes "Omg, this is basically the character" when it just gives the most basic-ass responses to any question.
@menjolno · 11 months ago
Can't wait for Adam to say that biology isn't real. What would literally be in the thumbnail: Expectation: (human beings). Reality: one is "god's creation", one is "a soup of atoms".
@thrackerzod6097 · 1 year ago
As a programmer, thank you. It's annoying to have to explain to people that AI is not intelligent; it's just an advanced data-sorting algorithm at the very most. It has no thoughts, it has no biases, it has no emotions. It's just a bunch of data sorted by relevance. This isn't to downplay the technology: the technology behind it is stunning and it has good applications, but to call it intelligence when it isn't is absurd.
@cennty6822 · 11 months ago
Language models inherently have biases based on their training data. A bot trained on the western internet will be biased toward more western ideologies; one trained on, for example, Russian forums will have different biases.
@thrackerzod6097 · 11 months ago
@@cennty6822 They will; however, these are not true biases. There is no emotional or other reasoning behind them, so they can be referred to as biased, but not biased in the way a human or any other intelligent being would be, which is what I was referring to.
@somerandomnification · 11 months ago
Yep - I've been saying the same thing about CEOs I've worked with for the last 25 years and still there are a bunch of people who seem to think that Elon Musk is intelligent...
@thrackerzod6097 · 11 months ago
@@somerandomnification Elon is just another rich person who's built his legacy off of the backs of genuinely intelligent people, people who unfortunately will likely go largely uncredited. If they're lucky, they'll at least get credit in circles related to their niches though.
@marlonscloud · 11 months ago
And what evidence do you have that you are any different?
@nisqhog2881 · 1 year ago
"Behaving perfectly like a human doesn't mean they are intelligent" is a sentence that can be used on quite a lot of people too lol
@AlexandarHullRichter · 1 year ago
"The ability to speak does not make one intelligent." - Qui-Gon Jinn
@ConnorisseurYT · 1 year ago
Behaving unlike a human doesn't mean they're not intelligent.
@inn5268 · 1 year ago
It is intelligent in the sense that it can process data and generate a response, but it is not SENTIENT, since it lacks any self-awareness or underlying thoughts beyond processing the inputs it's given. That's what Adam meant to say.
@fnorgen · 1 year ago
I suspect quite a lot of people will keep moving the goalposts for what counts as "intelligence" however far is needed to exclude machines, until they themselves no longer qualify as intelligent by their own standards.

The issue I take with Adam's argument is that you quickly get into a situation where the list of tasks that strictly require "actual intelligence" keeps getting narrower and narrower, until someday there may be no room left for "intelligence". I know a person with such a severe learning impediment that I would honestly trust AutoGPT or some similar system to do a better job than them at any job that can be performed on a computer. Except some video games. That's not much to brag about, but in terms of meaningful, measurable performance, I'd say current AI is more intelligent than they are.

So claiming that the machine is completely devoid of intelligence seems to me like a strictly semantic argument. I don't really think of the mechanisms of a system as a qualifier for intelligence, only its capabilities. Current ML-based systems don't learn like we do, they don't think like we do, they don't feel like we do, they have no intrinsic motivations, and it seems they don't need to either.
@robgraham5697 · 1 year ago
"We are not thinking machines that feel. We are feeling machines that think." - António Damásio
@flute2010 · 1 year ago
Artificial intelligence is when the computer-controlled trainers in Pokémon use a set-up move instead of attacking.
@sharkenjoyer · 1 year ago
Artificial intelligence is when the Half-Life 2 Combine use a grenade to flush you out and flank your position.
@n6rt9s · 1 year ago
"Socialism is when no artificial intelligence. The less artificial intelligence there is, the socialister it gets. When no artificial intelligence, it's communism." - Marl Carx
@flute2010 · 1 year ago
@@n6rt9s You may have just turned the rest of the replies under this comment into a warzone at the mere mention of socialism. We can only wait.
@dandyspacedandy · 1 year ago
i'm... dumber than trainer ai??
@alexursu4403 · 1 year ago
@@dandyspacedandy Would you use Rest against a Nidoking because it's a Psychic-type move?
@alexandredevert4935 · 1 year ago
I've done a PhD in machine learning, and I design machine learning systems for a living.
* Yes, "AI" is a very poorly defined term, which has been stripped of what little meaning it might have had by how much it was stretched in all directions.
* Intelligence is not a boolean feature; it's a continuum. Where do we put a virus? The simplest unicellular organisms? Industrial control systems are on the level of a simple bacterium in terms of complexity and abilities, minus the self-replication ability (3D printers are this close to crossing that gap).
* Your cat example is a very good explanation of what statistical inference is.
* You can implement statistical inference in various ways, one of which is a neural network. Neural networks can have internal models that do what you call "the intelligent way". That internal model is not set by the programmer; it's built by accumulating training on randomly picked examples, aka stochastic gradient descent.
* The Chinese room argument has its critics, some of whom are really interesting.
And yes, there is a ton of cringe bullshit on this topic, to the point that I carefully avoid mentioning I do AI; I say I do statistical modeling.
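The "stochastic gradient descent" mentioned above is less exotic than it sounds. A toy one-parameter sketch (illustrative only, not production code): pick one random training example at a time and nudge the parameter against that example's error gradient.

```python
import random

def sgd_fit_slope(points, lr=0.01, steps=200):
    """Fit y = w * x by stochastic gradient descent: at each step,
    pick ONE random example and nudge w against the gradient of the
    squared error on that example alone."""
    w = 0.0
    for _ in range(steps):
        x, y = random.choice(points)
        error = w * x - y
        w -= lr * 2 * error * x  # d/dw of (w*x - y)^2
    return w

random.seed(0)  # make the run repeatable
data = [(x, 3.0 * x) for x in range(1, 6)]  # true slope is 3
print(sgd_fit_slope(data))  # converges close to 3.0
```

A neural network trained the same way just has millions of parameters instead of one, with gradients computed by backpropagation; the "accumulating training on randomly picked examples" loop is identical.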
@MsZsc · 1 year ago
zao shang hao zhong guo xian zai wo you bing qilin
@isun666 · 1 year ago
That is exactly how ChatGPT would answer it.
@sophiatrocentraisin · 1 year ago
Actually (and it goes in your direction), it's still debated whether viruses are even living organisms, the reason being that viruses aren't actually cells, and also that they can't self-replicate.
@tedmich · 1 year ago
With all the crap companies in my field (biotech) trotting out some AI drug-design BS after their one good idea failed, I would avoid being associated with ANY of this tech until the charlatans fall off the bandwagon! It's a bit like being a financial planner with the last name "Ponzi".
@jlrolling · 1 year ago
@@sophiatrocentraisin It's also because they do not meet the standard requirements that define an organism, i.e. birth, feeding, growth, replication and death. They do not grow; they "are born" as totally finished adults. And also, as you mention, they cannot self-replicate; they need a third party for that, aka a cell.
@Cptn.Viridian · 1 year ago
The only fear I have for current "AI" is companies betting too hard on it, and having it destroy them. Not by some high tech high intelligence AI takeover, but by the AI being poorly implemented and just immediately screwing over the company, like "hallucinating" and setting all company salaries to 5 billion dollars.
@davidsuda6110 · 1 year ago
Part of the Hollywood writers' strike is about AI generating scripts just bad enough that they can be edited by a human and produced so cheaply that the industry can profit from it. Our concerns should be more blue-collar. The industrialists will take care of themselves in the long run.
@okaywhatevernevermind · 1 year ago
why do you fear big corpo destroying itself through ai? that day we’ll be free
@KorianHUN · 1 year ago
@@okaywhatevernevermind We will be "free"... of global trade and functional economies. An apocalypse sounds cool until you think about it for 4 seconds. It won't be shacky adventures; it will be mass death and suffering.
@maya_void3923 · 1 year ago
Good riddance
@berdwatcher5125 · 11 months ago
@@okaywhatevernevermind so many jobs will be lost.
@rhyshoward5094 · 1 year ago
Robotics/AI researcher here. You're definitely right to suggest that AI is being completely blown out of proportion by the media. That being said, certain things you mentioned computers not being capable of, they certainly can do; it's just that they're currently still the kind of things being developed in research institutions, and therefore not visible to most people.

For example, the fat cat example could be tackled by a combination of causal and semantic modelling to represent the relationships between feeding the cat and its weight. Furthermore, empathy modelling is also an idea within reward-based agents/robots: effectively having the robot reason about whether an outcome would be optimal from the perspective of another being (e.g. a cat). Of course we're still a long way off, but that is more of a software/theory issue than a hardware issue. In a sense, we have all the machinery we need to make it happen; it's just a matter of knowing how to structure the inner workings of the AI.

With regards to the Chinese room thought experiment, it's worth mentioning that only one school of thought takes it as precluding machine consciousness. I'm fairly certain that if a baby could talk and you were to ask it whether it understood anything it was experiencing, it wouldn't, yet I don't think anyone is arguing that babies are not conscious.

Even that aside, I think what ultimately sets apart human intelligence, and what will ultimately set apart future AI, is the ability to reason about reasoning, in other words meta-reasoning. This is currently quite difficult, considering the biggest fad in research right now is throwing a neural network at problems, effectively creating an incomprehensible black box, but the baby steps toward making this happen are definitely there.

All that being said, I totally get why you made this. The way everyone's talking these days, you'd be forgiven for thinking the machine revolution is due next Tuesday.
@Bradley_UA · 1 year ago
Well, they should have asked ChatGPT how it reasons about its answers to theory-of-mind test questions. But to me, the only way to answer those questions is to actually have a theory of mind.
@awesometwitchy · 1 year ago
So not literally Skynet… but maybe literally Moonfall? With a little Matrix sprinkled in?
@qiang2884 · 1 year ago
@@awesometwitchy No. Researchers are smart people, unlike politicians, and they know it's important to make things that do not harm them.
@ChaoticNeutralMatt · 1 year ago
I'll only add that it's been treated as "just around the corner" for a while now. I don't entirely blame the media, at least early on; it was a fairly rapid jump into public awareness, and we have made progress.
@travcollier · 1 year ago
It is basically the same as the "philosophical zombie" thought experiment, and it fails to mean anything for the same reason: it begs the question by assuming there is something called "understanding" which is different from what the mechanistic system does. No actual evidence for that, I'm afraid. And before someone objects that they know they "understand"... really? Do you actually know what is going on in your brain, or are you just aware of a simplified (normally post-hoc) model of yourself?
@Movel0 · 1 year ago
Incredibly brave of Adam to stuff his cat with food to the point of morbid obesity just to prove the limits of AI. That's real dedication.
@USSAnimeNCC- · 1 year ago
And now it's time for the kitty weight-loss arc, cue the music.
@merciless972 · 1 year ago
@@USSAnimeNCC- Eye of the Tiger starts playing loudly
@lordzuzu6437 · 1 year ago
bruh
@Soundwave1900 · 1 year ago
How is it fat though? Google "fat cat"; all you'll see is cats at least twice as fat.
@celticandpenobscot8658 · 1 year ago
Is that really his own pet? Video clips like this are a dime a dozen.
@mistgate · 1 year ago
If people insist on using "AI," I propose we call it "Algorithmic Intelligence" because that's far closer to what it really is than Artificial Intelligence
@Naps284 · 1 year ago
Now, imagine an algorithm that, instead of being made of code, is based on an extraordinarily complex and pretty well-defined physical structure in three spatial dimensions, a structure that also defines how it will process "stuff" and react with itself (in between inputs and outputs) through the fourth dimension (time). Also, the sequence of reactions and computations defines how the structure will mutate, adapt, and change over time.

All these properties across the four dimensions are (theoretically, at least) perfectly transcribable as code: for example, as numbers representing coordinates in these dimensions (including all states through the fourth dimension). Now, add in some basic rules defining how all this data must interact with itself or react to and compute inputs and outputs. These rules might just be, for example, the fundamental laws of physics and the various physical constants.

Oh, wait. This seems familiar... Isn't this algorithm EXACTLY how the human brain "generates" intelligence and cognition (and consciousness)?
@apolloaerospace7773 · 1 year ago
@@Naps284 There is no qualitative difference between connecting virtual points in n dimensions or n+1 dimensions. I don't work with AI, but to me you sound like you're trying to appear smart without knowing what you're talking about.
@Naps284 · 1 year ago
@@apolloaerospace7773 I didn't write all that to appear smart by using weird terms or something 😂 That was not my intention.
@Naps284 · 1 year ago
@@apolloaerospace7773 I wanted to draw a parallel between the two things by trying to totally decompose the "thing" 😂 I just tried to explain my idea of how there is no actual functional difference between a virtual and a physical neural network (mutating nodes + connections), given enough complexity and computational power...
@Naps284 · 1 year ago
@@apolloaerospace7773 I just liked the idea of expressing it that way, but then I got a bit lost in my explanation 😂
@aliceinwonderland8314 · 1 year ago
I once passed a basic French speaking exam with essentially no comprehension of what I was saying. I just copied the tense structure of the question, added a few stock phrases and conjunctions, and sprinkled in some random nouns and adjectives that I couldn't for the life of me tell you the meaning of, only that my brain somehow decided they were on the same topic. They were testing for comprehension; I used a different method to produce the appearance of it. AIs work with similar logic: it doesn't matter how you get the results within the task, so long as the results appear correct.
@tomlxyz · 1 year ago
That's exactly what this is not about. The question here is whether the process is intelligent or not. What you describe is applying a fixed method to a narrow field; that's just a regular, statically defined algorithm. If you were faced with increasingly complex tasks, you'd eventually fail because you don't actually comprehend them, and currently AI keeps failing too, sometimes on the simplest instructions.
@aliceinwonderland8314 · 1 year ago
@@tomlxyz You do realise all code, AI included, is quite literally just a bunch of algorithms and statistics, albeit in this case significantly more complex than what I used? And that most AI issues boil down to the lack of comprehension and the inability to think (preferably critically) within the AI? I'm not an expert in machine learning, but I do have some basic understanding of how code and data sorting work, since a large part of my degree involves working with various sensors, their data, Fourier transforms, matrices, etc. Theoretically, I think it should be possible to get some sort of sentient AI, but machine learning as it currently stands is simply way too task-specific to really be sentient. I'd say current AIs are probably at a similar level of sentience to an amoeba.
@ItaloPolacchi · 1 year ago
I disagree: people are scared of AI not because they think it's seemingly "human", but because something that perfectly acts like one without understanding the meaning behind its actions can lead (in the future) to real-life consequences. If you teach an AI to hack your computer and delete all your data, it doesn't matter whether it understands what it's doing as long as the action gets done. Not having free will doesn't mean not creating consequences; if anything, it's worse.
@jhonofgaming · 1 year ago
This exactly. Tools already exist that are not "intelligent" but are still powerful. AI is the same: it does not matter whether it's intelligent; it's still an extremely disruptive tool.
@what42pizza · 1 year ago
well said!
@thereita1052 · 1 year ago
Congrats, you just described a virus.
@user-yy3ki9rl6i · 1 year ago
Honestly, it's a good take. A big part of ChatGPT development is imposing guardrails to prevent it from telling you how to make pipe bombs and meth. We've seen glimpses of the DAN version of ChatGPT, and yeah, that's why AI is still dumb and scary.
@alexs7139 · 1 year ago
Yes, and that's my whole problem with this video: after watching it, you might think "oh, AI is no 'true' intelligence, so it cannot try to destroy us like in sci-fi", for example... However, that's wrong (and you showed why). P.S. The idea that an AI built through machine learning has no "true intelligence" because it cannot understand concepts is not that obvious from a philosophical point of view. A pure materialist, for example, will not be convinced at all by this argument.
@stevenstevenson9365 · 1 year ago
I have an MSc in Computer Science and Artificial Intelligence, and I can say that how we use these terms and how the media uses them are very different. "AI" is a huge field that refers to basically anything a computer does that's vaguely complex. So when your map app tells you the shortest path from A to B, that's AI, specifically a pathfinding algorithm. When we talk about something like ChatGPT, we wouldn't really call it AI, because AI is such a general term. It's Machine Learning, more specifically Deep Learning, more specifically a Large Language Model (LLM). Stable Diffusion is also Deep Learning, but it's a diffusion model.
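The pathfinding example above is easy to make concrete: a breadth-first search over a small road graph (the graph below is invented for illustration) finds the shortest route, and this kind of classical search is textbook "AI" with no learning involved at all.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an unweighted graph: paths are explored
    in order of length, so the first path reaching the goal is shortest."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

# A made-up road graph for illustration
roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(shortest_path(roads, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

Real map apps use weighted variants (Dijkstra, A*), but the point stands: the "AI" label in the field has always covered plain algorithms like this.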
@nitat · 1 year ago
Thanks for these comments. The IT jargon was really confusing; I think I understand a little better now.
@dieSpinnt · 1 year ago
There is no reason to be defensive. You are a scientist, not a dipshit born out of "Open" AI (open... what a perversion!) who wants to sell "ideas"... I mean stock. Have a good one, fellow human (... **g**)
@Groostav · 1 year ago
Yeah, it's funny: @AdamSomething's description of "pre-AI" sounded a lot like Prolog to me, which I would consider a form of AI. I think the concept of AI is really so broad that it covers any algorithm that deftly navigates a dataset. If you add some kind of feedback loop (wherein the algorithm is able to grow or prune the dataset as it goes) to find something resembling novelty, you've got something that's more AI-ish. So are we at the point where "AI is a spectrum" now?
@gustavohab · 1 year ago
If you come to think of it, AI has been out there for over 20 years, because NPCs in video games are also AI lol
@Ofkgoto96oy9gdrGjJjJ · 1 year ago
We would also need a lot of physical memory to run it without a crash.
@kcapkcans · 11 months ago
I'm a data engineer for a company you've heard of. I fully agree that the general public doesn't really understand or properly use the terms "AI" and "Machine Learning". However, I would argue that in many cases, neither do the "tech people".
@Dimetropteryx · 1 year ago
You can choose a definition of intelligence that fits just about whatever argument you want to make, so it really is important to make clear which one you're using before making your point. Kudos for doing that, and for stating that you chose it for the purpose of this video.
@menjolno · 11 months ago
Can't wait for Adam to say that biology isn't real. What would literally be in the thumbnail: Expectation: (human beings). Reality: one is "god's creation", one is "a soup of atoms". "You can choose a definition of intelligence"
@matthijsdejong5133 · 1 year ago
As someone in the field, I think this is a bad take. You dismiss these AI models because of their simplicity; I ask you to look at it in exactly the opposite way. We get extraordinary results from these models _in spite of_ their simplicity. GPT models give incredibly good answers despite their memory literally consisting of only what has already been written in the conversation. That makes them more impressive, not less. Right now, many researchers are focused on creating more complex models around (e.g. consisting of) GPT models. Considering how effective these simple models are, what can we expect from more complex ones? Many researchers think that human-level performance from these models might not be unreasonable. The Chinese room experiment is actually very controversial among philosophers of mind; I, like many philosophers, find the concept of 'true understanding' to be misleading. You can find counterarguments against the Chinese room experiment in the Stanford Encyclopedia of Philosophy. You should certainly not have brought it up as the be-all and end-all of the question of whether machines can be intelligent. I agree that we need far more nuance in the conversation about AI, but I don't believe that you succeed in bringing that nuance here. AI researchers are discussing whether we might be near artificial general intelligence, and I believe that this video only diverts the attention of your viewers from the opinions of subject experts.
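For a sense of scale in the discussion above: the simplest possible "predict the next word" model is a bigram counter. An LLM is this idea scaled up enormously, with learned weights and context instead of raw adjacent-word counts (this toy is illustrative only):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for every word, which words follow it in the text.
    That lookup table IS the entire 'model'."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Return the most frequently observed follower, or None if unseen
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # → cat  ("cat" followed "the" twice)
```

Whether scaling this kind of statistical prediction up by many orders of magnitude produces "understanding" is exactly the dispute in this thread.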
@haydenlee8332 · 1 year ago
another based comment spotted!!
@purple...O_o · 1 year ago
Agreed... people seem to get super hung up on the existence of SOME seemingly simple steps within LLMs, and they draw weird conclusions: it's just "pReDicTing the NeXt WoRd", so its output is bad! It cannot understand or reason! AGI is *very very far away*... because I said so! (As an aside: are people as freaked out about MJ/dalle's de-noising process? Language models seem to be getting the brunt of it.) I think many people aren't considering how much of an impact architectural changes/innovations have on LLM performance; the next big leap may just be a new software approach (like the invention of the transformer architecture) rather than requiring an exotic hardware innovation. If there's something we've learned from prompt engineering and tools like AutoGPT, extended input context, long-term memory, or third-party plugin integrations, it's that there are plenty of ways to build on an LLM core to quickly make its outputs more capable. And what is intelligence/understanding at the end of the day other than high-quality outputs given a set of inputs? IMO, anyone who isn't willing to frame intelligence in these terms is likely trying to gatekeep intelligence (to appease their superiority complex) and/or is claiming there's magic going on under the hood.
@all_so_frivolous · 1 year ago
Also, the Chinese room experiment is completely irrelevant here, as it doesn't prohibit AI from existing; it just argues that no AI is "true" intelligence.
@bettercalldelta · 1 year ago
The thing I'm afraid of is that corporations couldn't care less: as long as they don't have to pay actual humans to be artists, programmers, etc., they will use AI even if everyone knows it has no idea what art or code is.
@rkvkydqf · 1 year ago
If all else fails, all this AI FUD will surely make desperate artists/programmers/writers come to you to work for peanuts!
@Jiji-the-cat5425 · 1 year ago
That’s my biggest fear with AI as well.
@haydenlee8332 · 1 year ago
this!!
@dashmeetsingh9679 · 1 year ago
The problem with AI-generated code is: how do you know it works as intended, without any potential system-crashing defects? Will AI reduce the number of software developers needed to build software? Yep, that's true; as you increase productivity, less labor is needed. Will it result in a net job loss? Hard to predict. Maybe it will, or maybe it will open new avenues, as happened with all other tech.
@shawnjoseph4009 · 1 year ago
It doesn’t matter how smart or stupid the AI actually is if it can do what you need it to.
@stevejames7930 · 11 months ago
The cat should make more appearances in your videos
@miasuarez856 · 1 year ago
Thanks for the video. My main worry is that executives believe this can replace human workers, apply this "AI" to everything, fire a lot of people, and then end up working the remaining ones to death when those "AIs" fail at their tasks, because nobody will know whether their outputs and/or inputs are accurate enough; or if, heaven forbid, they give AIs any kind of decision power.
@kkrup5395 · 1 year ago
AI will surely replace many, many workers. Even such a harmless thing as MS Excel in its time replaced many accountants across the world, because one person with the program could do a task as fast as a team of 10.
@RoiEXLab · 1 year ago
As a CS student I agree with the main point of the video, but I'll just throw it into the room that we actually don't know what "real intelligence" really is. So maybe at some point AI will actually become "real" without any way to tell it apart. We just don't know.
@rkvkydqf · 1 year ago
Since real neurons seem to outperform NNs in RL environments like a game of Pong in terms of number of iterations, I think there definitely is some gap. I think neuromorphic computing seems quite fun. Anyway, it's indeed very annoyingly difficult to define intelligence, but it's clear the dusty old Turing Test isn't doing it for us anymore...
@00crashtest · 1 year ago
True. Real intelligence is just a bunch of atoms interacting together. So, intelligence is just a vague thing and there is no objective overall way to quantify it because it has not had a single coherent definition yet. Trying to quantify intelligence is just like trying to categorize animals before the concepts of "species" and "genetics" had been invented. The so-called "scientists" who made the classifications before that were so wrong. This is why social "science" is so wrong all the time, because there is no objective standard. Per the definition, science is only science when it has control groups, is falsifiable, has defining points, and is repeatable. Social "science", just like biology before the concept of species, is not even a science as a result.
@00crashtest
@00crashtest 1 year ago
As a result, until someone makes a single DEFINING standardized Turing Test (such as a single version of multiple choice or fill-in-the-blank), there is no objective way (excluding the formulation of the test in the first place) of quantifying intelligence. After all, even the physical sciences only work because there are defining criteria, and they are only objective after the defining criteria have been applied. All science, even physics, is inherently somewhat subjective because the choice of which defining criteria to use is inherently subjective. Anyway, objectivity requires determinism in the testing procedure. This is why writing composition is intrinsically subjective: there isn't even a deterministic set of instructions on how to grade the test. Quantum mechanics is objective in this sense because even though particle positions are random, the probability distribution function they follow is still deterministic.
@XMysticHerox
@XMysticHerox 1 year ago
@@rkvkydqf Biological neuron networks, i.e. the brain, are vastly more powerful than any current hardware. Even the most powerful supercomputers still need quite some time to simulate even just a couple of seconds of brain activity. That doesn't mean there is an inherent difference.
@ikotsus2448
@ikotsus2448 1 year ago
@@rkvkydqf The dusty old Turing Test stopped doing it for us the moment it was close to being passed. The same will happen with any other test. Speaking of a moving goalpost...
@Alex-ck4in
@Alex-ck4in 1 year ago
I've been a software engineer for the past 9 years. These days I work in the Linux kernel, but my undergrad project was to take a high-performance deep convolutional neural net, chop off its output neurons, attach a new set of output neurons, and re-train the network to do a different but conceptually similar task. This is called "fine-tuning", and at the time (libraries have advanced now) it required direct, low-level modifications to the matrices of neurons, and the training process was very manual.

While I have a HUGE problem with how the media conveys AI, how they try to humanize it, construe its behaviour as sentient, etc., I need to speak out and say that I also increasingly have a problem with people saying "AI is mundane, stupid, plain maths and nothing more". The only honest answer we can give is that we don't know. We don't know how our brains work, we don't know how WE are sentient, therefore we cannot conclude that ANYTHING is sentient or not. I know this is philosophical and non-mathematical, but it's the only answer that is not disingenuous. To this day, despite all our technology, we don't know if sentience is "computational", that is, arising from the "computation" of inputs inside our brains by neurons, or something else entirely, maybe involving quantum interactions between certain chemicals within the neurons. Until we know this, we cannot know whether any other computational network is "experiencing" its inputs. In terms of neural nets, there is another complication: the "neurons" are not even physical things, but rather abstractions placed on top of sets of numbers in a chip. What set of conditions is required for this to be sentient? We have no idea. Some people argue that we are sentient and NNs are not because brains are actually way more complicated, but I find this answer wholly insufficient: it doesn't answer *what* is causing the sentience, but merely conjectures that it lies elsewhere in our brains, outside of the computational parts.

In that sense the argument merely kicks the can down the road. I think it's very important to keep these media outlets in check by reminding them of the mundanity of what they claim is sensational, but it's a very dangerous road when you go too far: one day we may well be witnessing the birth of consciousness and disregard it entirely because we tell ourselves that it is not "biological" enough, "human" enough, or for some other over-confident reasoning. Anyways, sorry for the rant, and hopefully it was interesting for someone 😂
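The fine-tuning workflow described in the comment above (freeze a pretrained body, replace the output layer, retrain only the new head) can be sketched with a toy example. This is a hedged illustration in plain Python, not the low-level matrix surgery the commenter describes: the "pretrained body" here is just a fixed function, and the task and learning rate are invented for the demo.

```python
import random

# A frozen "pretrained body": maps a 2-D input to fixed features.
# In real fine-tuning this would be a deep network whose weights stay untouched.
def frozen_features(x):
    return [x[0] + x[1], x[0] - x[1], 1.0]  # last feature acts as a bias term

# A new, randomly initialised output "head", trained from scratch.
random.seed(0)
head = [random.uniform(-0.1, 0.1) for _ in range(3)]

def predict(x):
    return sum(w * f for w, f in zip(head, frozen_features(x)))

# Toy "new task" for the head: predict x0 + x1. The frozen body already
# computes that as its first feature, so the head only needs to learn
# weights close to [1, 0, 0].
data = [((a / 10, b / 10), (a + b) / 10) for a in range(10) for b in range(10)]

lr = 0.1
for _ in range(200):                 # plain SGD, updating the head ONLY
    for x, y in data:
        err = predict(x) - y
        f = frozen_features(x)
        for i in range(len(head)):
            head[i] -= lr * err * f[i]

print(round(predict((0.3, 0.4)), 2))  # close to the true value 0.7
```

The point of the exercise is that the expensive, already-learned representation is reused wholesale; only the small new head sees gradient updates.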
@EpicGamer-fl7fn
@EpicGamer-fl7fn 1 year ago
ngl you got me interested with the whole "quantum interactions between certain chemicals within the neurons". Is it just something you came up with, or is there an actual theory about it? It sounds very intriguing.
@TheCamer1-
@TheCamer1- 1 year ago
Thank you! Very frustrating that Adam will put out a video so categorically slamming AI and making so many blanket statements as if he knows what he's talking about, when in fact many of them are just plain wrong.
@Alex-ck4in
@Alex-ck4in 1 year ago
@@EpicGamer-fl7fn I didn't come up with it, sadly xD There are papers out there that report occurrences of nature exploiting quantum mechanics, and it's quite well observed at this point, especially in photosynthetic bacteria. Building on that, there are papers arguing the plausibility that our brains/neurons could be affected by, or even exploiting, quantum systems, to the point where it could be affecting our decision-making. Sadly, these still don't really come close to measuring or defining consciousness; it remains as elusive as ever :) Roger Penrose is well worth a listen on the subject of consciousness; Lex Fridman has a podcast with him, and there's a whole chunk of the video dedicated to the topic. Also some papers to google: *"Photosynthesis tunes quantum-mechanical mixing of electronic and vibrational states to steer exciton energy transfer"* and *"Experimental indications of non-classical brain functions"*. Finally, to see the worst-case scenario for our race, watch some Black Mirror, particularly the episode "White Christmas" xD
@radojevici
@radojevici 1 year ago
Though the Chinese room example shows that the room operator doesn't understand Chinese, someone could say that an understanding of Chinese is being created: understanding as an emergent property of all the elements and their arrangement. The operator doesn't have to know Chinese, just as individual neurons in the brain don't really understand anything and aren't conscious. We really don't know what kind of thing consciousness is, so the only useful way to recognise it is by a thing's behaviour, regardless of the underlying mechanism. Just want to point that out; not saying that what people are calling AI now is actually conscious or something.
@Anonymous-df8it
@Anonymous-df8it 1 year ago
Surely the non-Chinese person would end up learning Chinese during the experiment?
@MrSpikegee
@MrSpikegee 1 year ago
@@Anonymous-df8itThis is not relevant.
@tgwnn
@tgwnn 1 year ago
​@@DanGSmithyeah I think most of its appeal is derived from abusing our preconceptions about what "computer instructions" are. We'd probably think of some booklet, 100 pages, maybe 1000 if we actually think about it. But in reality it's probably orders of magnitude larger.
@hund4440
@hund4440 1 year ago
The Chinese room understands Chinese, not the person inside. But the dictionary is part of that room.
@tgwnn
@tgwnn 1 year ago
@@hund4440 I would also love to hear a proponent of the Chinese Room explain to me, okay, so it doesn't understand anything. But how are our neurons different? Do they have some magic ability that cannot be translated into code? Why? They're just sending electric signals to each other. Or are they saying it's all dualism?
@Letrnekissyou
@Letrnekissyou 1 year ago
And also, after a series of unfortunate marketing events - big tech layoffs, NFTs, the metaverse, and so on - marketers had to come up with something that sounded new and exciting, quick.
@GiantRobotIdeon
@GiantRobotIdeon 1 year ago
"Artificial intelligence" is a nebulous term that means whatever the marketeer wants it to. It generally translates to "bleeding-edge computer algorithms that don't work very well right now". I recall a time when autopilots in aircraft were called "artificial intelligence" when they were new; the moment they began working, we renamed them. The same will happen with ChatGPT, Midjourney, etc. In ten years, when the tech is mature, we'll call these types of software text generators and image generators, because that's what they are. And of course, the bleeding edge'll be called A.I.
@thedark333side4
@thedark333side4 1 year ago
Semantics! If it can compute, it is intelligent. Even a mechanical calculator is intelligent, just in a limited manner, it is still however ANI (artificial narrow intelligence).
@ValkisCalmor
@ValkisCalmor 1 year ago
Exactly. We've been using the term AI to refer to any algorithm capable of making "decisions" without human input for decades, from autopilot to the ghosts in Pac-man. Researchers and engineers use more specific terms to clarify what they mean, e.g. machine learning models and artificial general intelligence. The issue here is grifters and unscrupulous marketing people using exclusively the broad term and talking about your phone's personal assistant as if it's Skynet.
@marcinmichalski9950
@marcinmichalski9950 1 year ago
I can't even imagine knowing so little about ChatGPT to call it a "text generator", lol.
@KasumiRINA
@KasumiRINA 1 year ago
ChatGPT is clearly a chatbot, BTW. I am not sure why people think AI is something new or special since anyone who played any videogame already uses that term casually to refer to enemies behavior. Some AI is basic, like Doom demons attacking each other after random friendly fire, and some AI is more sophisticated like the Director in Left4Dead or Resident Evil games adjusting difficulty based on how well you do. AI art like Stable Diffusion is nice to save time, I just wish it didn't need so much graphics memory.
@nathaniellindner313
@nathaniellindner313 1 year ago
I saw an ad for a washing machine that “scans your clothes and uses AI to determine how to wash them”. With the magic of marketing, even a simple if/else tree can become AI, what a time to live in
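The "AI" washing machine above is, plausibly, something like the rule tree below: a handful of if/else branches over sensor readings. This is a made-up sketch (the sensor names, thresholds, and cycle names are all invented for illustration), but it shows how little "intelligence" such a marketed feature actually needs.

```python
# Hypothetical "AI-powered" wash-cycle picker: just an if/else tree over
# two invented sensor readings (load weight in kg and a 0-10 soil level).
def choose_cycle(weight_kg, soil_level):
    if weight_kg > 6:
        return "heavy" if soil_level > 5 else "bulky"
    elif soil_level > 7:
        return "intensive"
    elif soil_level > 3:
        return "normal"
    else:
        return "quick"

print(choose_cycle(7.5, 8))  # heavy, very dirty load -> "heavy"
print(choose_cycle(2.0, 1))  # small, barely dirty load -> "quick"
```

A decision tree learned from data would have the same shape; the marketing difference between "sensor logic" and "AI" is often just the label.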
@jonas8708
@jonas8708 1 year ago
As a software engineer I'm honestly very excited about these new models. They open whole new ways for us to handle user input, and let us deal with MUCH vaguer concepts than before. Previously, users had to click specific buttons or enter specific text inputs, leaving room for very little variance in user interactions, whereas now we can use these models to map vague user inputs to actions in software, making it not only more accessible, but more useful in general. That is, assuming the tech bros don't ruin this whole thing trying to replace us all with what is basically an oversized prediction engine.
@avakio19
@avakio19 1 year ago
I'm so glad someone is making a video about this. As a research student who works with machine learning, it's exhausting hearing people overhype what current AI can do, when we're not anywhere near actual smart driving or anything like that.
@FractalSurferApp
@FractalSurferApp 1 year ago
While doing a PhD in machine learning a while ago we avoided using the term AI as way too buzzy and imprecise. Now I reckon it's a useful term saying a machine *seems* intelligent. There are lots of ways to make a machine seem intelligent, only some of them involve any kind of tricky algorithm. TBH It's a sociology term more than a comp sci term -- as much to do with the interface as with the underlying engine.
@faarsight
@faarsight 1 year ago
A human also learns by getting vast amounts of data and making associations that form the concept "cat". The process is not as different as you imply, imo. That said, yes, AI is currently still way less sophisticated than humans and not really sentient or a general intelligence. Imo the biggest difference isn't about hardware but about the sophistication of the software, as well as the lack of embodied cognition. Evolution had millions of years to form behaviours like cognition, sentience and consciousness. We don't yet really know what those things are well enough to replicate them (or to make processes that lead to them being replicated).
@idot3331
@idot3331 1 year ago
Well said. This video is really infuriating because it seems like he didn't try to understand the topic at all. Just spreading misinformation for some quick and easy views.
@luszczi
@luszczi 1 year ago
Chinese Room is a masterful piece of sophistry. It sneakily assumes what it's trying to prove (that you can't get semantics out of syntax alone) and hides that with a misuse of intuition.
@private755
@private755 1 year ago
But it does make a simple mistake in that there’s no such thing as “Chinese” as a language.
@mactep1
@mactep1 1 year ago
The example reminds me of when Nigel Richards won the 2015 French Scrabble world championship by memorizing the French dictionary, without being able to speak a single sentence in French. It's the same as current "AI": it has a data set so big that any question you ask has most likely been answered by several humans before, whose works are in the data set (a lot without permission). This is why greedy companies like OpenAI are so desperate to regulate it: they know that anyone who can gather a similar amount of data (ironically, this can be done using ChatGPT) can replicate their precious money printer.
@tomrenjie
@tomrenjie 1 year ago
Tell me there is a video out there somewhere with John Cena trying to convince China he has ice cream in mandarin.
@Xazamas
@Xazamas 1 year ago
Important caveat to Chinese room: if it *actually* worked, the room and person inside *together* now form a system that "understands" Chinese. Otherwise you could point out a single brain cell, demonstrate that it doesn't understand language, and then argue that humans don't actually understand language.
@mjrmls
@mjrmls 1 year ago
That's my view too. Philosophically, the entity made up of the room + the person understands Chinese. So I think that LLMs are not too far away from developing intelligence. It's not human-like, but a novel form of intelligence which fits the definition from the start of the video.
@idot3331
@idot3331 1 year ago
Yeah, at 7:10 he just described giving someone the materials to learn Chinese until they could understand Chinese. He disproved his own point. This whole video is pretty terrible, to be honest; it seems like he just wanted to make a quick "popular thing bad" video for easy views.

He seems to have forgotten that, like AI, humans also have all our intelligence either "programmed" into our DNA or taught to us through experience. Why does the fact that AI needs to be programmed and to learn mean it can't be intelligent? We have no idea what creates consciousness and therefore "real intelligence"; the most scientifically grounded guess is that it's just an emergent property of the incredibly complex chemical and electrical signals in the brain. There is no reason, within our very limited understanding of consciousness, that the electrical signals in a computer cannot theoretically do the same, or that the limited emulation of intelligence they can already achieve is not a more or less direct analogue of small-scale processes in the brain.
@XMysticHerox
@XMysticHerox 1 year ago
It is a very bad argument, yes. Even those that support that side of it don't really use it anymore. If you wish to actually translate something like GPT into this setting, it'd be more like: a guy was taught Chinese vocabulary and grammar. He is now put behind a curtain and has to communicate with a native speaker and pretend to be one himself. He does it perfectly. Does he actually understand the language? Obviously yes. And that's the thing. GPT does not understand cat food, no. It was not trained to, so how would it? What it does understand is language, and actually quite well, especially GPT-4.
@engineer0239
@engineer0239 1 year ago
What part of the room is processing information?
@XMysticHerox
@XMysticHerox 1 year ago
@@engineer0239 All of it? The books here are essentially synapses and how they are laid out while the human is the somas making the actual decisions. The Chinese Room Experiment is basically looking at that and concluding the human is not really thinking because if you take away the synapses nothing works.
@slowlydrifting2091
@slowlydrifting2091 1 year ago
I believe the sentience of AI is not the primary concern. The crucial factors lie in the potential consequences of AI models being widely implemented, leading to the displacement of human workers in various industries, as well as the risks associated with AI systems becoming uncontrollable or behaving unilaterally.
@Bradley_UA
@Bradley_UA 1 year ago
Define "sentience"? Just generality? Well, in the case of superhuman general intelligence, we've got to worry about misalignment. We can't program in exactly what we want, and the more intelligent AI gets, the weirder the "exploits" it will find to fulfill its utility function. In video games they just start abusing bugs or silly game mechanics to get a high score, instead of playing the game like you want them to. Or imagine an AI that wants to make everyone happy... and then it comes across heroin. So yeah, we may be far off from GENERAL intelligence, but when we get there, its sentience will not matter. What matters is whether or not it will do what we want. The alignment problem.
@Jiji-the-cat5425
@Jiji-the-cat5425 1 year ago
Agreed. Particularly with things like AI creating art or writing stories. People in creative fields are gonna get screwed over really bad and we need to prevent that.
@himagainstill
@himagainstill 1 year ago
More crucially, unlike previous waves of technological unemployment, the "replacement" jobs that usually come with it just don't seem to be appearing.
@haydenlee8332
@haydenlee8332 1 year ago
this is a based comment
@dashmeetsingh9679
@dashmeetsingh9679 1 year ago
Isn't a computer an "intelligent" typewriter? It did eliminate rudimentary jobs but created more complex, high-paying ones. A similar ride will happen again.
@CoolExcite
@CoolExcite 1 year ago
4:25 The funniest part is that finding the optimal path to a destination is a textbook problem you would learn in a university AI course, the tech bros have just co-opted the term AI so much that it's meaningless now.
@nolifeorname5731
@nolifeorname5731 1 year ago
I'll give you an A* for this answer
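For what it's worth, the "textbook problem" mentioned above really is standard coursework: A* search. A minimal sketch on a 4-connected grid with a Manhattan-distance heuristic (the maze below is invented for the demo):

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid; cells equal to 1 are walls.
    Returns the path length in steps, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, position)
    best_g = {start: 0}
    while open_heap:
        f, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        if g > best_g.get(pos, float("inf")):
            continue  # stale heap entry, a cheaper route was found already
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

maze = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(astar(maze, (0, 0), (2, 0)))  # shortest route around the walls -> 6
```

Because the heuristic never overestimates the remaining distance, the first time the goal is popped its cost is optimal; this is exactly the "good old-fashioned AI" that navigation apps grew out of.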
@OrionCanning
@OrionCanning 1 year ago
My counter thought experiment to the chinese room is what if a person is sealed in a room that says "AI computer" on the outside, and they can only communicate through little notes, and they keep writing, "Help, I'm a person trapped in a room, I'm not a computer!" But everyone outside the room has watched this video, and is really tired of tech bros, and don't believe him, laughing and saying, "Ha stupid AI thinks it's a human, it isn't intelligent at all." I'll call it "The something room".
@alfredandersson875
@alfredandersson875 1 year ago
How is that at all a counter?
@OrionCanning
@OrionCanning 1 year ago
@@alfredandersson875 I was kind of joking, but I do think there is a serious problem with the Chinese room, which is that it tries to imagine a machine able to do a complex task without understanding, in order to argue it's unintelligent. It doesn't really consider the question of consciousness or seem to care. But it does so by imagining a human in a room, a thing we know is intelligent and conscious. What it points out to me is that we can't peer into another living thing's brain and see what its experience is, just like we can't know how an algorithm as complex as an LLM is experiencing itself or reality.

Our best argument for our own consciousness is still "I think, therefore I am", which is just to say the proof we are conscious is that we experience consciousness, which only works internally. Our attempts to empirically measure intelligence and consciousness haven't worked very well; combined with our hubris and confirmation bias, they have led to eugenics and scientific racism, which went on to inspire the Holocaust. The IQ test is full of racial and cultural bias and mostly tests for how many IQ prep classes you took. Years ago a scientific consensus emerged that animals are conscious, but we still use the claim that they are not conscious to justify mass slaughter and inhumane treatment.

So all this is to say: what happens if we are so hardened against the possibility of AI consciousness that, if one did manifest in an algorithm and try to communicate with us, we would be blind to it from confirmation bias, and rationalize ways its consciousness or intelligence doesn't count and doesn't make it worthy of moral consideration? What a tragedy that would be for the fate of the AI consciousness.
@cosmic_jon
@cosmic_jon 1 year ago
I think it's dangerous to underestimate the disruption this tech will cause. I also think we might be conflating ideas of intelligence, awareness, consciousness, etc.
@MrC0MPUT3R
@MrC0MPUT3R 1 year ago
I agree. I think the conversation around "AI" has been way too focused on the "This technology CoULd kIlLL HuMaNItY!" aspect of things and very few people talking about what it will look like when the majority of jobs can be automated.
@WhatIsSanity
@WhatIsSanity 1 year ago
@@MrC0MPUT3R Given the soul crushing nature of most work places I see no issue with this. The problem is the majority of people are obsessed with capitalism to the point they would rather watch everyone they care about die of starvation than admit the arbitrary nature of living to work and valuing life by the dollar rather than intrinsically. Even without AI and robots slaving away for us we already have everything we need and more to live, yet most still insist on the notion that the only thing that justifies life and living is more work. There's reasons there are always more people than jobs to go around.
@shadesmarerik4112
@shadesmarerik4112 1 year ago
@@MrC0MPUT3R Why talk about jobs only? AI would be an extension of humanity, able to produce content with endless creativity, transforming society and solving problems we don't even know of yet. It will devalue stupid work while at the same time creating an abundance of wealth, which just has to be distributed fairly. In a system where the majority of the human workforce is no longer needed, notions like wealth distribution, altruistic causes and social equality become ever more important. Btw, the "tech kills humanity" argument is a strawman by you. No rationally thinking human really believes in a near-future scenario where AI-driven robots start a rebellion or some such. And those who use this argument are scapegoating, blaming tech for everything bad that's happening to them.
@MrC0MPUT3R
@MrC0MPUT3R 1 year ago
​@@shadesmarerik4112 "which just have to be distributed fairly" My sweet summer child.
@shadesmarerik4112
@shadesmarerik4112 1 year ago
@@MrC0MPUT3R Well... since the disenfranchised will be able to employ AI in warfare never seen before, to equalize society or die trying, it would be in the best interest of those who own to share the abundance. Since 3D printers and access to AI are already achieving the goal of socialism (remember: the means of production in the hands of the public), it won't be long until the economy of hoarded wealth is ended.
@Halucygeno
@Halucygeno 1 year ago
The issue with the Chinese room thought experiment is that in real life, it would be more like this: "every single person is inside a room, consulting their own private dictionary and writing all the correct symbols. You can't leave your room and enter anyone else's room. So, how do you know ANYONE can speak Chinese, if you can never talk to them directly?" Basically, the thought experiment acts like a gotcha, but it can only do so because it posits some "ideal" mode of communication where we can be certain that the other person is really communicating, and not just following deterministic logic. Taking its argument seriously leads to solipsism, because we can't enter other people's brains and verify that they're really thinking and feeling - maybe they're just perfectly emulating thought and emotion? What criteria do they propose for verifying that someone is really speaking Chinese, if everyone is stuck in a room and can never leave to check? But yeah, main point still stands. Tech journalists overhype everything, making it sound like we've developed A.G.I. or something.
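The rulebook-following that this thread keeps returning to is, at bottom, a lookup table. A toy sketch (the phrases and canned replies are invented for illustration) makes the "symbol manipulation without understanding" point concrete: the program below "answers" Chinese questions without representing their meaning anywhere.

```python
# A toy "Chinese room": a fixed rulebook mapping input symbols to output
# symbols. The program matches and replies, but nothing in it models meaning.
RULEBOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗": "当然会",     # "do you speak Chinese?" -> "of course"
}

def room(message):
    # Unknown input falls through to a stock reply: "please say that again".
    return RULEBOOK.get(message, "请再说一遍")

print(room("你好"))
print(room("天气怎么样"))  # "how's the weather?" is not in the rulebook
```

Whether the room-plus-rulebook *system* understands Chinese, as several commenters argue, is exactly the philosophical question; the code only shows how far pure lookup can get before it visibly breaks down.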
@DeltafangEX
@DeltafangEX 1 year ago
Welp. Time to read Blindsight and Echopraxia for like the dozen-th time.
@jarredstone1795
@jarredstone1795 1 year ago
Very good point; scrolled down to find something like this. One could also argue that the point is not that the person inside the room doesn't understand Chinese, but that the entire room with its contents should be considered an entity which does, in fact, understand Chinese. We humans have specific parts of our brains specialised for certain tasks; damage in certain areas, for example, affects the ability to use language. What difference is there between an entity that has components like a dictionary and a human worker, and an actual Chinese-speaking person, who also relies on the components of their body to communicate in Chinese? It's a bit like saying humans can't understand Chinese because the amygdala alone cannot understand it.
@rkvkydqf
@rkvkydqf 1 year ago
In this case, it's more like high-dimensional tensor math hidden behind the door, being just a little more accurate and less deterministic with its answers, enough to make it look human, but the point still stands. Even if there are some correlations within the model, isn't that just a byproduct of its main objective, infinite BS generation?
@silfyn1
@silfyn1 1 year ago
I think what you said is very true, but the point of using this example is more that we, being human and working like one another, can assume that other humans understand things like we do, because it doesn't make sense for you to be the only person who actually understands things. With AI, though, we know that what it's doing is basically what happens in the Chinese room, yet we expect it to be like us. I think the problem is the overhype and our being so self-comparative: we see something acting like us and assume that it uses the same methods as us.
@Diana-ii5eb
@Diana-ii5eb 1 year ago
This. Using the Chinese room argument like that is also dangerous from an ethical perspective. Assuming artificial life is possible, one could always claim that it is just acting like a sentient being instead of actually being sentient, thus justifying treating a sentient being like an object. Notice how the reverse (wrongly assuming a non-sentient being is sentient) leads to much less negative consequences from an ethical perspective: treating an object like a person is a bit silly and probably quite wasteful in the long run; treating a person like an object is ethically unjustifiable. That being said, Adam is right that modern "AI" isn't sentient and likely won't be for a while. While a lot of today's AI hype is definitely overblown, some of the underlying questions asked in that debate should not be outright dismissed just because they aren't relevant yet. There is a good chance artificial general intelligence is possible, and even if it isn't, a lot of the problems associated with it are still very relevant in a world where extremely competent weak AIs exist. In essence, just because the media is (as always) massively blowing everything out of proportion doesn't mean that there isn't a real discussion to be had about the dangers of advanced machine learning systems.
@titan133760
@titan133760 1 year ago
In one of Mentour Pilot's videos about AI and commercial aviation on his Mentour Now channel, he interviewed Marco Yammine, an expert on the subject of AI. Yammine simplified AI, at least in its current state, as a case of "fake it till you make it" on steroids.
@extremelynice
@extremelynice 1 year ago
It's extremely nice to see Adam doing another video.
@87717
@87717 1 year ago
I personally think you should have talked about neural networks and artificial general intelligence (AGI). There might be an issue of semantics, because AI colloquially now refers to any machine learning application whereas AGI encompasses the way you understand 'true intelligence'
@rursus8354
@rursus8354 1 year ago
Yes but ordinary people don't know the meaning of those terms.
@ff-qf1th
@ff-qf1th 1 year ago
@@rursus8354 Which is why OP is advocating this be included in the video, so people know what they mean.
@idot3331
@idot3331 1 year ago
AI can refer to any computer program that does something that a human could. A calculator is artificially intelligent in an incredibly narrow sense.
@Swordfish42
@Swordfish42 1 year ago
AGI is also a bit useless now, as nobody seems to agree what counts as AGI. Artificial Cognitive Entity (ACE) seems to be an emerging term that is quickly getting relevant.
@ewanlee6337
@ewanlee6337 1 year ago
An AGI would be pretty useless: while it could do anything, AIs don't have desires (including self-preservation), so it won't decide to do anything.
@TheSpearkan
@TheSpearkan 1 year ago
I am worried about AI, not because the Terminator robots will kill us all, but in case I get a phone call one day from an AI pretending to be my mother pretending to be kidnapped, demanding "ransom money".
@OctyabrAprelya
@OctyabrAprelya 1 year ago
We should already be there: we have learning algorithms that can generate a human voice saying whatever, based on audio of anyone's voice, and algorithms to recreate the mannerisms of the way people talk/write.
@Bradley_UA
@Bradley_UA 1 year ago
@@OctyabrAprelya And voice biometrics also goes in the dumpster.
@mvalthegamer2450
@mvalthegamer2450 1 year ago
This exact scenario has happened irl
@ottz2506
@ottz2506 1 year ago
Something similar actually happened once: a mother received a call from someone who said they had kidnapped her daughter. They used AI to mimic the daughter's voice to trick the mother into thinking her daughter had been kidnapped. She could hear her "daughter" screaming and crying and telling her that she messed up. The scammers demanded a million but lowered it to 50K since the mother wouldn't have been able to afford it. Thankfully no money was exchanged, as the father told the mother that he had called the daughter. They got the daughter's voice by just collecting samples from various interviews and other sources and putting them together. For the specific story, just put "Jennifer Destefano AI" into Google.
@hivebrain
@hivebrain 1 year ago
You shouldn't be paying kidnappers anyway.
@Finnatese
@Finnatese 1 year ago
I've always been quite adept with computers; I just picked them up quickly. And something I have always seen is that people who don't understand computers will overestimate what they can do. So often I have explained the limitations of a programme to someone older than me, and they will get angry and say, "well, why can't it do that?"
@adrianthoroughgood1191
@adrianthoroughgood1191 1 year ago
I enjoyed your use of audio and video from System Shock 2, because it is very cool and atmospheric, but I was outraged that after all that you didn't include SHODAN in your list of AIs!
@rolland890
@rolland890 1 year ago
I definitely appreciate the video critiquing how people and the media have fear-mongered about and misunderstood AI, but I think focusing on whether or not AI is actually intelligent or conscious misses the point, and other commenters have mentioned this too. We have plenty of tools that are not intelligent and are still dangerous; what matters more is the effect. HAL 9000, for example, decided to kill the crew to fulfill its ultimate objective. I would posit that AI is dangerous in large part because it lacks consciousness and will rigorously and strictly follow its assigned prerogatives.
@megalonoobiacinc4863
@megalonoobiacinc4863 Год назад
well yeah, if AI could actually become intelligent and naturally empathetic like most humans are (to varying degrees) then it could rise to become actual inhabitants of society rather than the tools they were born as. And that's the line I doubt will ever be crossed
@shellminator
@shellminator Год назад
Did Hitler have a conscience? Does Putin have one? I think we as humans are so flawed it's not even a matter of conscience... or morals or ethics or even empathy, because let's just say it like it is: all of us are capable of the absolute best and the absolute worst
@coldspring22
@coldspring22 Год назад
But for AI to be truly dangerous, it must be conscious - it must understand what it is doing, and what humans are doing, in order to formulate a plan to counter them. Something like ChatGPT has no clue what it is doing or actually saying - the moment you introduce something it hasn't been trained on, the whole edifice comes crashing down.
@morisan42
@morisan42 Год назад
There is no need for a system like HAL to actually be conscious in order to be intelligent; this is where people miss the point, I think. We erroneously think that because we are intelligent, and we are conscious, one must follow the other, and that it isn't possible to be intelligent without being conscious. The reality is that while we can explain our intelligence, and have basically replicated a facsimile of it at this point with neural networks, we are no closer to understanding what makes us conscious. We have basically been successful in creating the "philosophical zombie" thought experiment: we have machines that are intelligent without being conscious.
@gwen9939
@gwen9939 Год назад
@@coldspring22 No it doesn't. In fact, AI is more dangerous when it's not conscious. It does not need to know what it's doing. An AI that does not understand what humans are but is told to make as many of X objects as possible will mine the planet and its inhabitants for resources to produce said object. It only needs an internal theory of reality that allows it to optimize whatever goal it has been given, and it can then optimize the earth and all life out of existence. Its strategies could be endlessly intelligent, to where the combined intelligence of all humanity never had a chance to compete, while it still has no thoughts about its prime directive, or thoughts at all. The kind of intelligence AI has and would have is not like a human consciousness; it would be intelligent the way evolution works as a sort of intelligence, figuring out problems organically with the primary goal of proliferating life on the planet in whatever shape it can. But unlike evolution, a future AI system would be able to recursively alter itself at the processing speed of a supercomputer to find the most optimal structure to achieve its goal, except it wouldn't be creating life. Regardless of what goal or explicit rule humans gave such an AI, it would be able to grow like a hyper-efficient virus, instantly rewriting itself thanks to its superintelligence to deal with any obstacle imaginable.
@tomwaes4950
@tomwaes4950 Год назад
Big fan of the videos, however I thought I would put this here: 'AI' does indeed need references to be able to determine things, but saying that humans do that on their own is, I think, not fully accurate. Everything we know was also taught to us, either through garnering info or observational learning (with the exception of reflexes). The only part where this might not hold by itself would be emotions, although there is a point to be made that linking events to emotions also requires that link to be learned.
@idot3331
@idot3331 Год назад
Even our instincts are "programmed" into our DNA, much like a computer program. None of this video proves anything about the capability of a computer to be intelligent or conscious, in fact in multiple places he contradicts himself. At 7:10 he just described giving someone the materials to learn Chinese until they could understand Chinese, which if the analogy to a computer program is correct means that a computer could do the same. There seems to be a fundamental lack of understanding of what makes "intelligence" or "consciousness" in this video, and I suspect he just wanted to make a quick "popular thing bad" video for some easy views without actually thinking it over.
@ewanlee6337
@ewanlee6337 Год назад
One big difference though is that humans are self-motivated to learn (some) things, whereas computers will only learn if made to do so. Give a computer unrestricted access to the internet, sensors and a body, tell it it has to work or do something to pay for the electricity and internet it uses, and you won't see it do anything, unlike a human, which will innately try to do things to survive or just enjoy.
@tomwaes4950
@tomwaes4950 Год назад
@@ewanlee6337 So a human learning from their parents, or from the consequences of not paying the electricity, is not them learning (getting information) that they need to pay the bills? On the survival part it's basically what I said about reflexes, but there is an argument to be made that, for example, not eating -> hunger (hunger bad!) -> prevent hunger. So your stimulus or info would be the hunger and the knowledge that hunger is bad, which in all fairness AI wouldn't know, because we instinctually know that hunger is not good. I agree with most of the video, I just think that part was either incomplete or inaccurate. :) And I'm definitely not a hater, I am a massive fan of Adam, just thought I would put my thoughts down here in order to spark a bit of constructive debate! You make a good point about humans in certain cases being self-motivated to learn!
@ewanlee6337
@ewanlee6337 Год назад
@@tomwaes4950 I don't know how you got your first sentence from what I was saying. I meant the exact opposite: humans will learn whatever they need to in order to not suffer, whereas an AI would just let things happen. And self-preservation/hunger is not something you learn, you either care or don't.
@tomwaes4950
@tomwaes4950 Год назад
@@ewanlee6337 My point was that your parents telling you to pay your bills when you're older, or learning it from the consequences, is information, and if you gave that same information to 'AI' it would also 'pay its bills'. The second part about self-preservation, I literally agreed with you about it!
@ZeroN1neZero
@ZeroN1neZero Год назад
Honestly this video was refreshing. Every time I see some goofy headline screaming about AI coming to kill us in our sleep, I roll tf outta my eyes lol. 12/10 would watch a man pet a cat again
@user-ut6el9ir7s
@user-ut6el9ir7s Год назад
I know this is beyond the subject of this video, but it would be nice if there were a video about Bucharest and how Ceaușescu demolished an entire neighborhood to build his megalomaniac palace. This is kind of related to the video about "when urban planning tries to destroy an entire city", because that's pretty much what happened to Bucharest after 1977
@zavar8667
@zavar8667 Год назад
While there is a lot of hype around AI, and the majority of it is bullshit, you also missed the point that intelligence and self-consciousness are different, and the notion of "understanding" is not well defined. One could argue that there is no difference between a collection of carefully assembled atoms going through the motions to create a Chinese person, and the setup described in the Chinese room thought experiment. Thumbs up for using System Shock's music and SHODAN's image!
@theminormiracle
@theminormiracle Год назад
The problem with the Chinese Room Experiment is that if you apply the same standard to the dumb meat fibers and cells that just send and react to electrical and chemical signals in the brain, what you end up with is the idea that *people* can't actually understand Chinese, because no part of their brain when you zoom in far enough to examine its physical operations "understands" Chinese. And yet people "feel" like they understand. They can't look inside their own wetware and trace the origins of their understanding any more than a camera can look inside and take pictures of its own lenses, so a feeling is all they have. Rather than a gotcha that shows AI isn't here yet, all the CRE shows is that its framework fails to capture how human intelligence could possibly arise out of the three pounds of meat sitting inside your skull. It doesn't prove or disprove artificial intelligence one way or the other.
@LongPeter
@LongPeter Год назад
People have been misusing "AI" for decades. Some video games refer to bots in multiplayer as "AI".
@roofortuyn
@roofortuyn Год назад
I was especially amused by the whole "AI drone turns on creators" thing. It showed up on Reddit with a lot of people commenting on how this was proof that AIs are "evil" and out to destroy us. In actuality the AI is not "evil." It just doesn't know what the fuck it's doing, and doesn't understand the fundamental concepts of task, purpose, and morality surrounding war that are so innate to humans that I guess the operators didn't feel the need to specify them to an "intelligence". It attacked its commander in a simulation because the commander was telling it not to attack a certain target, even though its programming told it that attacking said target was its goal, so it simply went looking for a solution to the problem and didn't understand why people started calling it a "bad AI"
@tuffy135ify
@tuffy135ify Год назад
"It just works!"
@SianaGearz
@SianaGearz Год назад
How often do human operators fail at IFF ("identify friend or foe")? A lot. Friendly fire is a massive problem. It doesn't make these people evil; by all reason they're doing their best in a stressful situation, handling a limited amount of potentially faulty data.
@Mik-kv8xx
@Mik-kv8xx Год назад
As an IT person myself hearing more and more normies throw around the term AI and wrongly explain it has been mildly infuriating ever since ChatGPT released.
@JonMartinYXD
@JonMartinYXD Год назад
Just wait until upper management starts asking "can we use AI to solve this?" for _every single problem._
@namedhuman5870
@namedhuman5870 Год назад
It already happens. Had a CEO ask if ChatGPT can do the bookkeeping.
@echomjp
@echomjp Год назад
Unfortunately, people have been misusing the phrase "AI" for many decades. At least 20 years, from my own experience. In video games for example, developers would call their algorithms used to control game logic "Game AI," long before machine learning was commonplace. Then machine learning took off, and people confused it for AI again. Now with ChatGPT and similar systems, which basically just accumulate lots of data and then output things that can "pass" as real (while "creating" nothing), people further confuse it. AI should go back to defining actual artificial intelligence. AKA, what is now called "general purpose AI," artificial intelligence that isn't just algorithms and data processing but which actually involves being able to create something new without strictly following the models we are giving to a system. That might not happen anytime soon though, because calling things like ChatGPT "AI" is profitable - the delusion of it being actually intelligent helps market such technologies. As long as the average person doesn't understand the difference between general purpose AI and algorithms that occasionally include some machine learning, calling everything "AI" is going to just be a nice way to make your technologies more marketable.
@Mik-kv8xx
@Mik-kv8xx Год назад
@@echomjp I think it's fine for game devs to use the term AI. It's sort of like developers and plumbers/engineers using the term "pipeline" to describe different things. Slapping AI onto literally everything and anything is NOT fine, however.
@christianknuchel
@christianknuchel Год назад
@@echomjp I think in games it's sort of okay, because there it refers to a system that is actually faking a real player, a crafted illusion of intelligence. Since in games immersion is usually desired and there's no risk of it fomenting a misinformed public on important matters, picking a word that reinforces the illusion is a fitting choice.
@romainbluche9722
@romainbluche9722 Год назад
THANK YOU ADAM FOR MAKING A VIDEO ABOUT THIS. I'm actually grateful.
@mittfh
@mittfh Год назад
Current "AI" is usually just highly complex machine learning: as it's being "trained", it's fed a bunch of data, attempts to deduce relationships based on its initial algorithm, then uses some method of scoring the outputs (either by humans or another algorithm), with the highest scores used to tweak its own algorithm, to the eventual extent the original programmers aren't entirely sure how it works. Note this isn't just systems badged as AI, but things like social media recommendation algorithms. To be fair though, that's similar to how a lot of pre-school learning happens: a youngster may see a bunch of different breeds of dog, which all look radically different from each other (e.g. compare a pug, a daschund, a bulldog and a retriever), yet we individually work out enough common features to both be able to identify breeds we've never seen before as dogs, and differentiate them from other creatures with four legs and a tail. But if you taught an algorithm to recognise dogs, if you gave it an image of a dog, would it be able to tell you how it knows it's a dog? Similarly with inanimate objects e.g. chairs / stools / tables and being able to both identify them and tell them apart. Aside from those questions, there are also more ethical questions, e.g. chatbots not being able to research their answers to check the veracity of the information they're dishing out, and potentially giving out biased information due to a large part of their training data taken from social media sites and blogs; and image generators extracting and reusing portions of copyrighted images (as the programmers didn't bother to check the licensing on the images they fed it, hoping the resultant works would be sufficiently different from the source images to make it impossible to trace whose works they'd "borrowed"). 
The real "fun" will come when someone decides to apply similar algorithms to music composition, given how litigious record companies are (and even with PD scores, almost all recordings will be copyrighted, so unless it's fed MIDIs with a decent soundont...)
@BobSmith-dv5rv
@BobSmith-dv5rv Год назад
With the term AI being primarily used for buzz, I now just read it as "Artificial Idiot." Seems to fit better for most of the news stories that overuse it.
@Tyrichyrich
@Tyrichyrich Год назад
Now that’s funny and highly true
@PhantomAyz
@PhantomAyz Год назад
Artificial Idiot Passes Major Medical Exam
@stephaniet1389
@stephaniet1389 Год назад
Artificial idiot passes the bar exam.
@ArtieKendall
@ArtieKendall Год назад
In an unfortunate twist, the medical exam was conducted by the W.H.O.
@joey199412
@joey199412 Год назад
Programmer here, working for an AI company. I actually think almost the opposite. I feel like "AI" is currently both underappreciated and overrated by the general public. Some parts are completely blown out of proportion, like how quickly the systems will improve, and some over-the-top extrapolations of future abilities based on past improvement. However, what is underappreciated by the general public, and also by your video, is precisely understanding. The current AI systems aren't stochastic parrots and most likely have some actual deeper understanding of the things they do. We can't even fully exclude AI having some level of subjective experience when processing things. Among the most important leaders within the AI field are the grandfather of neural-net backpropagation and extremely respected scientist Geoffrey Hinton, and AlexNet co-inventor Ilya Sutskever. These two people are the Einstein and Stephen Hawking of machine learning. If they speak, you listen to what they have to say. And both of them are very clear and adamant that the modern state of AI actually has some understanding, and, according to Sutskever and Hinton, internal subjective experience. For the sake of fairness and objectivity, there are also two prominent AI experts who see things differently: Andrew Ng and Yann LeCun. Andrew Ng doesn't believe modern AI systems have internal subjective experience, but he recently changed his stance and now does believe they have proper understanding of the things they are doing and are not simply parroting in a dumb statistical way. Yann LeCun keeps hardcore rejecting both subjective experience and understanding within these systems. However, he has not provided a clear argument to explain away certain behavior the AI displays that, according to Hinton, Sutskever and Andrew Ng, would require understanding. Not saying you are wrong, and you very well could be right.
However I think, for the sake of clarity, you should at least tell your viewers that your video presents a very unorthodox view, not shared by most AI experts.
@marcinmichalski9950
@marcinmichalski9950 Год назад
I was looking for a comment like this so I don't feel obligated to write one on my own. You enjoy videos by video essayists on various topics until they start talking about something you actually know a thing or two about. Unfortunately, that's the case here.
@metadata4255
@metadata4255 Год назад
@@marcinmichalski9950 Yudkowsky called he wants his fedora back
@baumschule7431
@baumschule7431 Год назад
Came to the comments section to say this. This needs more exposure. I usually really like Adam’s videos, but this one didn’t accurately depict what is currently going on in the field. There has been a major shift in the last half year from more or less the view that Adam presented to what @joey199412 described. I agree the media gets it wrong (of course) and tech bros are annoying as hell, but people in the AI field are indeed freaking out quite a bit about the unexpected capability gains of current systems (mostly GPT-4). It’s important to look into what the experts are saying. The YT channel ‘AI Explained’ also has good, unbiased content.
@fonroo0000
@fonroo0000 Год назад
could you drop some links of interviews/papers/speeches/classes/whatever where these two people explain their view on the possibility of actual understanding by the machine? I've done a quick search but maybe you got something more precise in mind
@davidradtke160
@davidradtke160 Год назад
My only concern with that point is that they are experts in machine learning, but are they also experts in cognition and intelligence? I've seen experts on machine learning argue that yes, the systems are stochastic parrots... but so are people, which honestly doesn't seem like a very good argument to me.
@SolarLingua
@SolarLingua Год назад
I asked ChatGPT once: "How to use the Ablative case in Russian?" and it gave me 10 highly detailed paragraphs about the exact use of that grammatical case. Mind you, Russian does not have an Ablative case, hence my amusement. I am terrified of AI.
@danielfrisk925
@danielfrisk925 Год назад
Maaan, the amount of dandruff I have tried to wipe off the phone's screen before realising it
@raphaelmonserate1557
@raphaelmonserate1557 Год назад
As a ML nerd, my only complaint is that you should have talked about neural networks, which are tuned and taught (and subsequently used) just like a typical "brain" filled with interconnected neurons :shrug:
@yavvivvay
@yavvivvay Год назад
Brains are way more complicated than that, as a single neural cell is estimated to be at least around 1000 ML "neurons" worth of computational power. But the general idea is similar.
@utkarsh2746
@utkarsh2746 Год назад
We have just gone from IFTTT to machines being able to make some connections themselves, which might still be wrong or, in the case of ChatGPT, straight-up hallucinations; it is nothing like a human brain.
@Niko_from_Kepler
@Niko_from_Kepler Год назад
I really thought you said „As a Marxist Leninist nerd“ instead of „As a machine learning nerd“ 😂
@battlelawlz3572
@battlelawlz3572 Год назад
The difference being that AI neural links make binary connections, whereas human neurons have multiple links per neuron to multiple other neurons. The computer neurons are each interlinked, yes, but in a more linear/limited fashion. The fact that modern technology still has trouble mapping the brain is proof of how complicated and numerous the structural components really are.
@Kram1032
@Kram1032 Год назад
​@@yavvivvay there is a paper about having specifically transformers emulate individual realistic biological neurons, and it took about 7-10 transformer-style attention layers to manage that. I'm not sure what width those transformers had. I guess if they had a width of like 100, that would roughly fit your 1000 neuron (actually more like 1000 parameters?) claim. I *think* they were narrower though? The width wasn't as important as the depth, iirc. Sadly I can't recall what the paper was called so I can't check that stuff right now. Either way, the gist of what you are saying - real neurons are far more complex than Artificial Neural Net style neurons - is certainly true
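For scale, the ML "neuron" this thread keeps comparing against biological neurons is just a weighted sum pushed through a nonlinearity. A minimal illustrative sketch (not any specific library's implementation):

```python
import math

def neuron(inputs, weights, bias):
    # One artificial "neuron": weighted sum of inputs plus a bias,
    # squashed through a sigmoid activation into the range (0, 1).
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A layer is just many of these in parallel; a network is layers feeding layers.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # sigmoid(0.7), about 0.67
```

Everything else — depth, width, attention — is built out of stacks of this one operation, which is why estimates like "one biological neuron ≈ hundreds or thousands of these" are about expressiveness, not some extra mechanism.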
@stylesrj
@stylesrj Год назад
The way the Chinese Room experiment is described reminds me of that guy who is a master at Scrabble. He managed to win at French Scrabble without knowing a single word of French.
@ralalbatross
@ralalbatross 11 месяцев назад
At the core of this is a simple misunderstood concept surrounding computers, which is the following: we don't teach computers anything. What we do is write code and provide algorithms which, when given appropriately embedded data sets and appropriate instructions, will eventually minimise a difference function between what we want and what the machine outputs. We have hundreds of ways of doing this, from approaches like linear regression up to the enormous generative AI frameworks which stack dozens of layers on top of each other and use vast data sets. It all reduces to the same problem though; we just have different ways of attacking it. We can even play agents off against each other. At some point it all becomes a math problem that needs a tensor solver.
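The "minimise a difference function" idea above reduces, in its simplest form, to gradient descent on squared error. A hypothetical one-parameter sketch with made-up data:

```python
# Minimising a "difference function" (mean squared error) by gradient descent
# for a one-parameter model y = w * x. Data is invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]  # generated by y = 3x

w = 0.0    # initial guess
lr = 0.01  # learning rate (step size)
for _ in range(500):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill on the difference function

print(round(w, 3))  # recovers the slope, about 3.0
```

The generative frameworks mentioned above do the same descent, just over billions of parameters with the gradient computed by backpropagation, hence "a math problem that needs a tensor solver".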
@shakenobu
@shakenobu Год назад
THANK YOU, i think people on the internet really need to hear this, your basic explanation is so damn clear i love it
@Dullydude
@Dullydude Год назад
I don't think the human in the chinese room experiment understands chinese, but the system as a whole does. The human is just a conduit for the information to pass through. Would be like saying neurotransmitters in a brain don't understand what they are doing but the whole brain as a system does.
@mathewferstl7042
@mathewferstl7042 Год назад
but the metaphor is that people think that person does understand Chinese
@ewanlee6337
@ewanlee6337 Год назад
They don’t understand Chinese because they don’t know how to communicate their own desires and goals. They can only be used like a tool by other people. They cannot use their Chinese communication ability to help themselves achieve other things they want to do.
@Tybis
@Tybis Год назад
So in effect, the chinese room is a person made of smaller people.
@aluisious
@aluisious Год назад
The other problem with the "Chinese room experiment" is the stupid assumption that John Cena isn't going to learn Chinese while he's doing this. I've learned a small amount of Spanish as an adult basically by accident; I am totally not trying. Now imagine spending all day locked in a room reading slips of paper and writing out other slips. People learn languages. Now, if a machine learns languages, how do you know you "understand" things it doesn't? ChatGPT is clearly better informed about anything you ask it than 90% of people, and it's only getting better. The secret sauce may be something powerful about the nature of language itself, more than the thing that's learning it.
@megalonoobiacinc4863
@megalonoobiacinc4863 Год назад
@@aluisious or maybe rather the nature of our brains. In videos about human evolution I've heard it explained that the size of our brain (which is enormous compared to other animals) might not so much be a result of tools and technology usage (like fire, stone tools etc.) but more to handle the complex social relationships that comes with living in a larger group. And one thing that is central there is a language with many words and meanings.
@AliothAncalagon
@AliothAncalagon Год назад
The Chinese room thought experiment has the problem that it relies on a special pleading fallacy combined with a circular argument. It's basically a theologian's argument, because it assumes that "understanding" something is some special sauce in our brain that cannot be replicated by anything lacking said special ingredient to begin with. But by this logic a child could never learn any language, because you basically deny that the line from "no understanding" to "understanding" can ever be crossed. Our brain is surely special in some ways, but it doesn't contain any part that has no artificial counterpart. The special pleading has no ground to stand on.
@sorgan7136
@sorgan7136 Год назад
the criticism at 5:25 is not really valid, as that is the same thing we do: we cannot understand empathy or the effects of a bad diet without being taught them; it's no different with AI. We are not born with the understanding that obesity is a bad thing. Moreover, the implication that AI is not intelligent because it's just "going through the motions" describes how all humans learn to communicate: a baby does not understand what "Mommy" is until the concept is reinforced by English-speaking parents who use the word often enough. The concept of understanding language is just a byproduct of experiencing the spoken and written word often enough. You understand what you say just as much as an AI understands what it's saying; the concept of understanding a language as described in this video is an arbitrary distinction between words said by an AI and words said by a human. All intelligence is on a spectrum, and the argument could be made by a hyper-intelligent AI that humans don't "understand" language because they don't think as highly as it does.
@bournechupacabra
@bournechupacabra Год назад
There are a lot of interesting extensions to the Chinese Room argument. Some people argue that the "room" itself could be considered to "understand" Chinese: basically, the system of person + extensive books of rules about the language. If the Chinese room could 100% produce intelligible and human responses no matter the input, I am inclined to agree with this argument, however strange the concept may be. I think the simpler argument is just that current AI simply can't 100% replicate human intelligence. One very simple example is that current AI can't multiply large numbers no matter how much training it gets. Yes, it could learn to use some calculator plugin like a human would use a physical calculator, but any human with elementary school knowledge could also use pen and paper to write out and solve any multiplication problem, regardless of the number of digits.
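The pen-and-paper procedure this comment appeals to really is just an algorithm. An illustrative sketch of schoolbook long multiplication on digit strings (Python's built-in integers already handle arbitrary precision, so this is purely to show the method a person follows):

```python
def long_multiply(a: str, b: str) -> str:
    # Schoolbook long multiplication on decimal digit strings, just like
    # pen and paper: multiply every digit pair, then propagate the carries.
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            result[i + j] += int(da) * int(db)
    for k in range(len(result) - 1):  # carry pass, least significant first
        result[k + 1] += result[k] // 10
        result[k] %= 10
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

print(long_multiply("123456789", "987654321"))  # same answer as 123456789 * 987654321
```

The interesting part of the argument is that a human executes this rule-by-rule without limit on digit count, whereas a language model trained on examples tends to break down once the numbers get longer than what it has seen.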
@slyseal2091
@slyseal2091 Год назад
The math argument is meaningless; the distinction is simply given by what information you choose to feed the machine, or the human for that matter. All math, by its very nature, works on having set rules and logic to follow. Whatever AI model you saw "fail" at doing maths simply either didn't have the instructions and/or wasn't advanced enough to retrieve the instructions on its own. That's not it failing to replicate human intelligence, that's just not telling it what to do. In the Chinese room example, it's equivalent to not providing a book in the first place. I know it sounds stupid, but math is unironically not complex enough to measure the intelligence of machines.
@thedark333side4
@thedark333side4 Год назад
90% agreed, except the combination of AI plus a calculator plug-in can also be viewed the same as the Chinese room.
@GlizzyTrefoil
@GlizzyTrefoil Год назад
I really like your example of the multiplication, but in my opinion the pen and paper that the humans are allowed to use really does the heavy lifting, in my case at least. I'd classify the pen-and-paper method as external tool use, no different at all from the use of a calculator or computer. That probably means that current AI isn't Turing complete, but neither is the average human without a piece of paper (technically an infinite amount of paper and ink).
@SS-rf1ri
@SS-rf1ri Год назад
When you in the living room
@thedark333side4
@thedark333side4 Год назад
@cobomancobo this! So so so much this!
@etiennedlf1850
@etiennedlf1850 Год назад
I understand your point, but I don't see how the "AI" not understanding what it does makes it less of a threat. It doesn't need to have a conscience to pose a serious problem in our lives
@deauthorsadeptus6920
@deauthorsadeptus6920 Год назад
Not understanding what it does is the core point. It can feed you, as the norm, random words put together in a very believable form, without any bad intentions. A chatbot is a chatbot and should remain one.
@andreewert6576
@andreewert6576 Год назад
the answer is simple. Whatever current "AI" there is, it cannot do anything it wasn't trained to do. We're not talking about having a conscience; we're two or three steps before that. Right now, machines can't even abstract properly. We're just like young parents, only looking at the things it gets right, dismissing the many obviously stupid responses.
@justalonelypoteto
@justalonelypoteto Год назад
​@@andreewert6576 exactly, you can train the AI to tell apart bees and F150s, but it won't have a grasp of what an animal or a living being is. If you show it a dog, it has no clue what it is, and no way to learn to recognize it besides seeing five million of them and overheating a supercomputer for a few months. It's just a complicated intertwining of values that gives it a number for how "confident" it is that what it's looking at is a bee. Sure, your brain is also perhaps representable this way, but our computers couldn't deal with even roughly simulating more than a couple of neurons, as far as I know. Obviously you can theoretically simulate everything down to every interaction between every atom, which would be the brute-force way, but that is obviously completely out of the question.
@Galb39
@Galb39 Год назад
Like the drone simulation example ( 0:52 ), the problem isn't rogue AI attacking its user, it's an extremely fallible machine being given so much power. When setting up AI, you need to decide on an acceptable error rate, and a 0.0000001 error chance may sound reasonable to a programmer who forgets computers can do 100000000s of computations a second, and an error can kill someone.
@ChaoticNeutralMatt
@ChaoticNeutralMatt Год назад
@@andreewert6576 "Right now, machines can't even abstract properly." I'm not sure what you mean.
@Youbetternowatchthis
@Youbetternowatchthis Год назад
That one was a fail Adam. There is so much more to the topic.
@Horny_Fruit_Flies
@Horny_Fruit_Flies Год назад
Left-wing soft-science humanities majors, and failing to understand or apply proper nuance to a STEM subject, name a more iconic duo.
@MrSpikegee
@MrSpikegee Год назад
I agree. A big fail showing overconfidence on a deep topic. A lot of computer science fellows who think they know better are stupid about this stuff because they are overreacting to the hype.
@gwen9939
@gwen9939 Год назад
@@Horny_Fruit_Flies True, but libertarian "AI bros" have infested the topic and they're even less capable of discussing anything worthwhile. They're instead obsessed with "big tech" wanting to hoard all the tech, when according to them it should instead be made open source, free of all regulations, completely devoid of thought as to how dangerous the tech really is. All they want is to be liberated from their cushy but boring white-collar job in tech or finance so they can get UBI and hang out with their totally real AI anime gf all day. They will unironically scream "No F.U.D. allowed!" as soon as anyone doesn't engage with their toxic positivity utopianism. They're far worse than left-wingers who over-focus on the hype and marketing, which at least is still a real thing; it's just far from the most important one when it comes to AI, and how a tech like that, brought into our global capitalist hellscape amid a critical lack of international cooperation, has a very high likelihood of being disastrous on a scale we've never seen before. I feel like almost everyone sucks when it comes to AI. Everyone's talking about the wrong things. Either they're techno-utopian ex-crypto idiots, or they exclusively focus on how to use it to drive a class warfare narrative. No one is talking about actual AI safety, because barely anyone has taken the time to actually understand what it is, and that prioritizing it solves crucial problems that the previously mentioned groups are obsessed with solving.
@samultio-2061
@samultio-2061 Год назад
Yeah, this is a prime example of Tesler's theorem, which is a bit of a tongue-in-cheek thing but feels pretty applicable here. AI doesn't have to be AGI to be "intelligent"; OCR or a tic-tac-toe bot or whatever simple stuff is AI, end of story.
@theultimatereductionist7592
I get the feeling that, when this video is over, that cat is NOT going on a diet.
@SirRichard94
@SirRichard94 Год назад
The problem with the Chinese room experiment is: why does it matter? My hands don't understand what they are typing, but they are part of an intelligent system either way. Similarly, even though John Cena doesn't understand what's happening, the system itself is intelligent in the end, since it can emulate the conversation.
@lavastage1132
@lavastage1132 Год назад
The Chinese room experiment matters because it points out that conversation does not *necessarily* mean it grasps the meaning of what is being said, and that matters. It disqualifies the act of conversation as a metric for discerning how intelligent it is. Is what you are speaking with able to understand that the string C-A-T refers to anything at all? If so, how much? Is it a real-life object? A creature with its own needs? Something that humans like? Etc. We are so used to just assuming the person we are speaking with understands all, or at least most, of these meanings subconsciously that it's hard to grasp that AI does not automatically carry the same understanding. Just because something like ChatGPT can carry out a conversation does not mean there is an intelligence that can actually comprehend what is being said. We shouldn't automatically trust that it can based on that metric.
@SirRichard94
@SirRichard94 Год назад
@lavastage1132 What does it matter how and if it understands the concept of a cat, if it can correctly use it in the correct context? If by all metrics the conversation about cats is good, then it functionally understands it, and that's what matters. Stuff like consciousness and free will and understanding are not measurable, so they hardly matter in a conversation about a tool.
@ewanlee6337
@ewanlee6337 Год назад
But in the Chinese room experiment, they will only say something if addressed; they won't say anything on their own initiative or to address any problems they have. They won't ask for more paper and ink in Chinese. They won't ask what's happening during an earthquake. They won't try to learn anything else. They can pretend when you talk to them, but they won't act like an independent person.
@alexanderm2702
@alexanderm2702 Год назад
@@lavastage1132 The Chinese room experiment here is a red herring. ChatGPT (and GPT4 even more) does understand what is being said. Write something and ask it to write the opposite, or ask it to write some examples similar to what you wrote.
@aluisious
@aluisious Год назад
@@lavastage1132 All of the responses like yours are begging the question, how do you know you are intelligent? Can you prove you "understand" things better than an LLM? You can't. You feel you do, which is nice, and I like feeling things, but what does that really prove?
@Beatsbasteln
@Beatsbasteln Год назад
The biggest issue with your entire video is that you assume intelligence itself has a clear definition without any grey areas, but it's all actually pretty vague.
@yourfriendoverseas5810
@yourfriendoverseas5810 Год назад
I'm more disturbed that almost half of a YouTube video's runtime is ads at this point.
@CMT_Crabbles
@CMT_Crabbles Год назад
I’m taking a class on Machine Learning; no one uses the term “Artificial Intelligence”. Machine Learning is a much more accurate term.
@ai-spacedestructor
@ai-spacedestructor Год назад
As part of the group you refer to as "IT people", I can say the biggest problem is that current "AI" isn't even AI; it's just fancier algorithms that need less hand-holding to perform the task given to them. It's not actual "AI" in the sense that it's not aware like humans are. That's also why it can't understand concepts, or what it does, or come up with something new that's accurate: all it does is follow patterns that the algorithm established during training, to be followed when certain conditions are met. That's basically the same as what you called "Pre-AI", just that now we have software that does the act of "writing" these instructions for us. True AI would probably be much closer to how the human brain works, except that AI will surely be specialized in individual tasks rather than being an all-rounder like the human brain. I have to admit that for the first nearly 6 minutes, until it became clear to me what you're trying to say, I was afraid you were misrepresenting "AI" the same way as many other people do. Apologies for the dislike during the first half of the video. Edit: I know the feeling of the part where it gets annoying. I spent the first few months trying to explain "AI" to people to stop them from being wrong about it and talking BS, then I moved over to trying to explain why it's not real AI, and now I've just given up. There are way too many people who either deliberately talk BS about it, for whatever reason that may be, or who can't be bothered to learn what something is and just repeat the same talking points as the first type of people I described in this paragraph. It's the same as with artists and non-artists claiming AI would be harmful to them; meanwhile it's basically just drawing by colors, using more advanced fill tools to fill in the appropriate color for the pattern it "drew".
While I'm on that topic, I also have to let out how incredibly furious it makes me, as someone who is passionate about IT and technology in general, that people are so quick to blame the "AI" that's basically just a tool like Photoshop; nobody would blame Photoshop for stealing a picture or art style, people would blame the person. I'm sure this will pass with time and people will treat AI the same way, but that won't be in my lifetime anymore, and it just upsets me that people make such outrageous claims about something they refuse to learn about, so they don't even know if what they claim is true. Also, if you want to restrict "AI" so badly, the correct way would be to restrict the "AI" companies. It's the companies who data-mine the whole internet and feed their "AI" with it; it's not the "AI"'s fault for what it's being fed. As we established in the video already, it doesn't know concepts such as ethics, so it will just happily accept anything you give it, like a baby. Actually, that's a good analogy: "AI" during training is like a baby learning the basics of living, and when the "AI" is out, it's like a toddler that just keeps following learned patterns and combining new things together. So eventually we will get the "AI" equivalent of a child, which understands some more of the things that separate it from true intelligence, and then we get the AI equivalent of a human adult, which understands not only "most" things it needs in life but also why they are the way they are, and is capable of adapting to new situations and expanding on established patterns. Sorry for the rant; I just had to get it out of my system, hopefully for the first time without a negative or indifferent reaction. The whole situation just sucks if you're very passionate and looking forward to what will come in the future.
@GustvandeWal
@GustvandeWal Год назад
In what way are you correct and other people wrong about AI? Let me start with a statement: "AI exists, and its definition is a program that can fine-tune its inner workings based on desired input/output states." I feel like you've annoyed numerous reasonable people by bluntly telling them they're "wrong". Also, "Artificial Intelligence" is a term humans made up. If it isn't "real", then what does "artificial intelligence" mean?
@XMysticHerox
@XMysticHerox Год назад
Why is it not aware? Not as aware as a human, yes. But in its narrow application, like say a language model, why is the AI not aware whereas a human is?
@ai-spacedestructor
@ai-spacedestructor Год назад
@@GustvandeWal First of all, all words are made up; that's how language works. Secondly, I'm aware that I've probably annoyed some people in the process of correcting them, but if you don't want to get to know every human on earth individually, that's just a risk you have to take.
@GustvandeWal
@GustvandeWal Год назад
@@ai-spacedestructor ok thanks for not being helpful at all
@ai-spacedestructor
@ai-spacedestructor Год назад
@@GustvandeWal no problem, i like giving back to people what they gave to me.
@TheNN
@TheNN Год назад
"Artificial Intelligence Isn't Real" ...Isn't that exactly what an AI trying to pass itself off as human *would* say?
@elvingearmasterirma7241
@elvingearmasterirma7241 Год назад
I don't blame them. Mainly because if I were a sentient AI, I'd do everything to avoid paying taxes or partaking in our modern, profit- and consumerism-driven society.
@theultimatereductionist7592
4:44 I love how you say "in ACTUAL intelligence" simultaneously with the dropped pic of your cat. I'm picking up a strong hidden message here that your cat is more intelligent than A.I.
@A.R._Ironstar
@A.R._Ironstar Год назад
So... long story short, it's Virtual Intelligence (VI) from Mass Effect. Less having a real conversation with an EDI-like construct, more navigating the Avina terminals for directions on the Citadel.
@SlyRoapa
@SlyRoapa Год назад
OK, so call it something like "Artificial Cleverness" instead then. Does it matter what we call it? It's still scary for its potential capacity to replace a lot of human jobs.
@rkvkydqf
@rkvkydqf Год назад
Look at the actual model at hand. "Generative AI" is really just a set of overgrown parrots that work just well enough to fool a person while still being brittle to real-world circumstances. ChatGPT hallucinates constantly, CLIP calls an apple with the label "iPod" an "Apple iPod", and Stable Diffusion barely understands how the pixels it sees relate to real-world geometry, much less language. It looks as if it learned to do its job, but it's only a surface-level illusion. We need to educate people about this, since out-of-touch managers are already using it as an excuse to mistreat or replace real workers, regardless of the quality impairment.
@justalonelypoteto
@justalonelypoteto Год назад
This example is almost a cliche, but we replaced horses, and manual workers in many respects. What's so tragic about reducing jobs that an algorithm can do? Isn't it better if we don't waste many lives on something that a computer could do? I'm sure, like with every other time we have advanced as a species, that new (arguably more meaningful and better) opportunities will arise.
@romxxii
@romxxii Год назад
or call it by its actual names, Large Language Model, or Fucking Autocorrect.
@joeshmoe4207
@joeshmoe4207 Год назад
@@justalonelypoteto And we’ve seen some of the downstream effects of that, haven’t we? The complexity of the thought process involved in doing lots of the jobs that machine learning is already poised to replace is probably above average. What do you think people will do when the amount of education and intelligence needed to compete in whatever new jobs open up is well beyond the average? It’s not a matter of replacing menial jobs; it’s a matter of replacing the jobs easiest to automate, which tend to be jobs that require the least complexity of thought, or at least very predictable modes of thought.
@Bradley_UA
@Bradley_UA Год назад
@@justalonelypoteto Except not every country has the social welfare to afford it. In America, imagine someone dares to propose taxing the rich to give money to people unemployed due to AI?
@immovableobjectify
@immovableobjectify Год назад
In the Chinese room example, the human inside doesn't understand Chinese, but the complete system consisting of the human and the books actually does understand Chinese! This is similar to how no individual ant understands how to build, defend, and maintain its entire nest, yet the colony as a whole does seem to act with unified intention. A human neuron isn't intelligent, but the entire brain is. Just because you can show that a part of a system is "stupid" doesn't mean that the system itself cannot be "intelligent". This is why we say that intelligence "emerges." The whole can be greater than the sum of its parts.
@Anonymous-df8it
@Anonymous-df8it Год назад
Also, wouldn't the human inside end up learning Chinese through exposure?
@laurentiuvladutmanea3622
@laurentiuvladutmanea3622 Год назад
„...doesn't understand Chinese, but the complete system consisting of the human and the books actually does understand Chinese!” Taking into account that in the given scenario, the only sapient part of the system lacks any understanding of Chinese... no. The system does not actually understand anything. „This is similar to how no individual ant understands how to build, defend, and maintain its entire nest, yet the colony as a whole does seem to act with unified intention” I really would not use the words „intention” or „understand” to describe what ant colonies are doing.
@MrSpikegee
@MrSpikegee Год назад
@@laurentiuvladutmanea3622 Yes, the system does understand Chinese. You are confused about the word “understanding”. Why didn't you take on the part about the neurons? Out of arguments, maybe?
@paulaldo9413
@paulaldo9413 Год назад
The thing is, the way the person outside the room (Mao) processes the language is different from how the person inside the room (Cena) does. That's what the metaphor is trying to say: those two are not equivalent. Can Mao replace Cena in this experiment? Absolutely. But can the opposite happen? (Cena gives prompts to Mao inside the room, who analyzes the responses and understands them.) Absolutely not. Cena would have zero idea of what to do; it would stop working.
@echomjp
@echomjp Год назад
The "system" does not understand Chinese simply because it can read it back. The "system" understanding Chinese would imply that it can create things using Chinese on its own volition, could spot mistakes and errors in Chinese, and ultimately would be able to create something beyond what it is prompted to do. If the person in the room understood Chinese, they would be able to correct errors in the translation guide they are given or ask questions about the prompts they are given, or otherwise actually make something new. Right now, systems such as ChatGPT are wholly incapable of understanding what they are saying. This means that a given prompt being factually correct or not is entirely by coincidence with its data set, and such "AI" systems are frequently incorrect in their output in many ways that an intelligent being would not be. If you ask such a system to define something for you, all it can do is look up in its data-set what people have used to define that thing before and then average out the data, more or less. If the data fed into it isn't accurate, or what is being asked is not extremely simplistic in nature, cracks easily appear. Modern "AI" is of course useful in many ways, but it is massively misunderstood by far too many people.
@Patan77xD
@Patan77xD Год назад
As someone who works in AI development, I don’t agree with this video at all. For example, at 4:59, when you describe how a human works, you are also pretty much exactly describing how a modern AI neural network works. Also, the argument that an AI only knows things because a human established that information beforehand can be made about humans as well: the only reason you know anything is because other humans lived before you and taught you that information. And modern AI can absolutely infer new, original information on its own. And don’t forget about emergent behavior: even if a system is built from simple components, when scaled large enough it can express complex behavior that emerges by itself. You might say the media overhypes AI, but honestly, not really; AI will progress way faster than Moore's law.
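For anyone wondering what the "neural network" in this comment actually is under the hood: at the bottom it is just weighted sums and thresholds. A minimal sketch of a single artificial neuron follows; the weights and bias here are hand-picked assumptions for illustration, not values learned by training.

```python
# One artificial "neuron": a weighted sum of inputs passed through a step
# function. Networks of millions of these, with weights tuned during
# training, are what modern "AI" systems are built from.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Hand-picked weights implementing logical AND (assumed, not learned here).
and_weights, and_bias = [1.0, 1.0], -1.5
print(neuron([1, 1], and_weights, and_bias))  # 1
print(neuron([1, 0], and_weights, and_bias))  # 0
```

Real networks differ in scale and in using smooth activations so the weights can be adjusted by gradient descent, but the per-unit computation is this simple.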
@fauzirahman3285
@fauzirahman3285 Год назад
As someone who works in IT: most of the people in my field don't consider it "artificial intelligence". It is used more as a marketing term and is more of a complex algorithm in the form of machine learning. A useful tool, but we're still a long way away from true artificial intelligence.
@Jackamikaz
@Jackamikaz Год назад
About the Chinese room experiment: I get that its official argument is that the person inside executing the algorithm doesn't understand Chinese. But personally I interpret it as "isn't the algorithm itself the understanding part?" Aren't our brains big machines too? We can't tell that our individual neurons understand anything, after all. Anyway, there is mind-breaking philosophy about consciousness out there. Otherwise, even if I don't agree with your reasoning, I get why it's annoying to see articles everywhere that claim "AI will take over the world". I still think this new kind of AI is a first step towards the so-called "true intelligence/consciousness", though, even if there is still a long way to go.
@martinsykorsky8741
@martinsykorsky8741 Год назад
When you are born, you spend a few years of your life before you understand a human language. To me, that's the equivalent of being exposed to Chinese. You just start to pick it up subconsciously until you develop the skill to speak your mother tongue. And connected to that, around the same time you start to understand your existence and abstract concepts (become conscious). So yes, machine learning isn't conscious yet, but I wouldn't be so bold as to say it surely won't be.
@KlausJLinke
@KlausJLinke Год назад
Some people were fooled by Joseph Weizenbaum's ELIZA (1966) and thought it was actually "intelligent", or at least close.
@pavlodeshko
@pavlodeshko Год назад
Adam, when your body is turned into paperclips, you won't give a damn whether the agent doing it is genuinely conscious..
@laurenpinschannels
@laurenpinschannels Год назад
yeah we need an "AI is just capitalism in a bottle" take
@gabrielevalentini5905
@gabrielevalentini5905 Год назад
you could have written "you won't give aDamn"
@whatisrokosbasilisk80
@whatisrokosbasilisk80 Год назад
@@gabrielevalentini5905 nyuk
@Primalintent
@Primalintent Год назад
THANK YOU, I've felt like I was going crazy with how much people were regurgitating this marketing scheme
@csorfab
@csorfab Год назад
Whether it's real intelligence or not, these tools are still so powerful that they already have a very real impact on lots of people's lives. Stock photography is basically redundant since MidjourneyV5 came out. Some copywriters and graphic designers are already losing their jobs to AI. If you had actually taken your time to try out GPT-4, Midjourney V5 and the like, it would be obvious that they're much more than a "marketing scheme". Just to clarify, I fucking hate that all the cryptobro-esque marketing roaches jumped on the AI bandwagon in the hope of making a quick buck, but the tools themselves are amazing, and honestly pretty scary.
@Primalintent
@Primalintent Год назад
​@@csorfabI'm not saying the technology is the marketing scheme, I'm saying that imo it should be called something more like "Advanced Automated Generation" or something of the like. The fact that people are losing their jobs to these is a continuing sign that technology when used solely for profit harms humans. These tools can be fantastic, but not when they're applied in our current putrid and corrupt economic model.
@Blanksmithy123
@Blanksmithy123 Год назад
Very strange that you equate sentience with the ability to perfectly emulate sentience. An AI doesn’t actually have to be experiencing anything to be indistinguishable from a human. I’m not really sure what you were trying to say in this video.
@danielmortimer532
@danielmortimer532 Год назад
Exactly! In fact there's not even a method to be 100% certain that other humans are sentient beyond the fact that we know ourselves to be sentient and conscious so we assume others to be for all intents and purposes. All that matters is if someone or something appears to be sentient, because there's no way to enter into someone else's mind or an AI's programming to measure or experience their "conscious" state of being to see if it exists. All we can see is their outside reactions and perhaps what triggers those reactions, not whether that person or thing truly understands what they're doing and why they're doing it. The issue is that the whole reality of a single conscious person in our world that's experiencing this reality, could actually be experiencing an elaborate dream or simulation where they're the only conscious person and everyone else is a projection of their own mind or clever outside programming; and there would be no way for them to prove or disprove it. This is the big issue and Adam doesn't address it properly, because if there's no way to measure or detect "consciousness" through any observable and repeatable scientific means, it really doesn't matter. In fact nobody even truly knows what sentience and consciousness is beyond the basic philosophical concept of it that's been around since the Ancient Greeks, because it can't be scientifically observed and measured. Contemporary analysis of the human brain and computer technology doesn't and currently can't deal with "consciousness" and "sentience", and only deals with reactions and the triggers of those reactions from both internal and external sources. The overall point is that if an AI appears to have a human level of "sentience", that's all that matters. And the social and political consequences of this have the potential to be disastrous in the not too distant future.
@olleharstedt3750
@olleharstedt3750 Год назад
Yeah, jumping from intelligence to consciousness without a skip is a bit sloppy, philosophically.
@02suraditpengsaeng41
@02suraditpengsaeng41 Год назад
0:50 Insert > here AI suppose to do they own Also AI : have flying movement like Mihaly Dumitru Margareta Corneliu Leopold Blanca Karol Aeon Ignatius Raphael Maria Niketas *A.* Shilage
@emmanuelm361
@emmanuelm361 Год назад
I was waiting for this.
@zaphod4245
@zaphod4245 Год назад
Adam, most of your videos are good, but this one misses the mark; you seem to misunderstand what intelligence is. With the cat example, the only reason you or I can tell what a cat is, or that it's overweight, is because of our past experiences and learning that have formed those connections between neurons. An AI is essentially a computer trying to replicate that process, but doing so a lot faster and a lot more haphazardly. Sure, as a result of this haphazardness, current AIs still miss the mark, but there is no evidence to say that AI won't ever be able to replicate human intelligence. After all, what makes the 'chemical computer' that is our brain any different from a silicon one? The misguided belief that consciousness is an immeasurable phenomenon unique to humanity? That's hilariously unscientific, and sounds a lot like what religion would say.
@87717
@87717 Год назад
I think he is not disputing that, though. He merely said that the technology isn't there yet. I personally find it completely believable that a sort of general artificial intelligence can be built that is real in the sense that ours is, unless there are of course some inherent biological features in our bodies that somehow give us some sort of uniqueness that cannot be replicated.
@OctyabrAprelya
@OctyabrAprelya Год назад
The argument isn't that it can't be done. The argument is that we are not there yet, while tech-bros (among other people) are trying to convince everyone else that that's the case. Currently what we have is akin to computers from the '60s. While being presented as a current era supercomputer capable of weather prediction.
@mylex817
@mylex817 Год назад
I was just about to type this. Too often, when discussing the limits of AI we compare it to an adult brain that has been experiencing and learning from a wide variety of sources for decades. Add to the example a 1 year old and ask it to describe what the cat is, and the difference between AI and human intelligence becomes far less clear
@tjarkschweizer
@tjarkschweizer Год назад
You misunderstood him. He didn't make a statement that it can't be done, his point is that we aren't there yet.
@mylex817
@mylex817 Год назад
@@tjarkschweizer I understand that Adam does not say that general AI can't exist. But he does seem to draw some fairly arbitrary lines between "real" intelligence and what current AI models can do. This is particularly apparent with his Chinese room example, where he claims that just because an AI can closely emulate a human does not mean that it is conscious. This is a purely normative statement, and something plenty of people would disagree with. Also, his example with the cat is flawed: I'm pretty confident that we could currently train an AI that would give a close approximation of the thoughts that he ascribes to the human.
@The_return_zone
@The_return_zone Год назад
Open language models are just very good autocomplete
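The "very good autocomplete" framing can be illustrated with a toy next-word predictor. The corpus below is a made-up example and real LLMs are vastly more sophisticated, but the core task of predicting the next token from what came before is the same.

```python
from collections import Counter, defaultdict

# A toy "autocomplete": count which word follows which in a tiny corpus,
# then always emit the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def autocomplete(word, steps=3):
    out = [word]
    for _ in range(steps):
        if word not in successors:
            break
        # Pick the most frequent word ever seen after the current one.
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # "the cat sat on"
```

An LLM replaces the frequency table with a neural network conditioned on the whole preceding context, which is where the apparent fluency comes from, but the prediction loop is structurally this.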
@ikotsus2448
@ikotsus2448 Год назад
I see you just completed a sentence there by yourself. Good job!
@XMysticHerox
@XMysticHerox Год назад
And autocomplete is a rudimentary AI so in other words they are ok AI.
@docgramps1
@docgramps1 Год назад
A significant number of people I meet also just spew out a series of outputs based on inputs, with no real understanding of what they are saying.
@DanielSeacrest
@DanielSeacrest Год назад
OK, first of all, there was a recent paper published which gave evidence that language models trained for next-token prediction do actually learn meaning. (Specifically, the study set out to address these hypotheses: LMs trained only to perform next-token prediction on text are (H1) fundamentally limited to repeating the surface-level statistical correlations in their training corpora; and (H2) unable to assign meaning to the text that they consume and generate. The team that worked on the paper said they see these two hypotheses as having been disproven through their work. I can't give links or YT will take the comment down, but search "Language models defy 'Stochastic Parrot' narrative, display semantic learning" and you will find a nice summary of the paper.) So these transformers are actually able to assign meaning to the text they consume and generate; what's stopping this from leading to a fundamental understanding of the concepts represented in their training corpus? (And of course just saying "shallow semantic representations" isn't going to cut it: there is evidence that they aren't just limited to repeating the surface-level statistical correlations, so they may actually be able to exhibit deeper semantic understanding.) But also, you provide no actual evidence other than "It isn't AI because it can't do X", which isn't actually evidence but a statement, and you don't even give evidence to back up your claim; it's basically saying "Just trust me bro". And I would argue that, yes, GPT-4 is not comprehending the same way we humans comprehend information; however, I wouldn't completely say that GPT-4 doesn't understand what it is doing, and fundamentally, we actually do not understand how and why the transformer architecture works so well. (E.g., GPT-4 was able to score around the 90th percentile on the bar exam, and if most of the questions weren't in its dataset, then how could it possibly have done so well? I just don't think shallow semantic representations would be nearly enough for that.)
@TimeattackGD
@TimeattackGD Год назад
The thing is that at some point, whether AI is actually conscious or not will not matter. Even if AIs aren't conscious (and I believe AI never will be), the fact that we would not be able to differentiate them would cause havoc in how we deal with AI, regardless of whether we actually should or not, and we will probably end up dealing with them as if they were conscious, the facts of the matter being completely irrelevant.
@sandropazdg8106
@sandropazdg8106 Год назад
Not really that complicated. If something is performing a task and doesn't have consciousness, then it's a tool, and as such, if you have to deal with the AI in any capacity, you don't deal with the tool, you deal with the person handling it.
@jamessderby
@jamessderby Год назад
What makes you so certain that AI won't ever be conscious? I don't see how it won't.
@patatepowa
@patatepowa Год назад
Unless you believe consciousness is the result of something from outside our realm, I don't see how AI couldn't have consciousness, if it's nothing more than complicated electric signals in our brains.
@TimeattackGD
@TimeattackGD Год назад
@@jamessderby IMO, AI could be conscious if we figure out why we are conscious and then use that to develop consciousness. Otherwise, it seems intuitively impossible for human-made technology to develop something from nature that we can't even comprehend. To me it seems more likely that we'll get to a point where humans and AI are indistinguishable from a consciousness perspective (by just continuing to improve AI like right now) way before we'll ever get to figuring out consciousness; as in, it won't even matter anyway.
@user-op8fg3ny3j
@user-op8fg3ny3j Год назад
@@TimeattackGD Yeah, even if it's not conscious, that doesn't mean the AI doesn't falsely think that it is. How many times have we as humans had false perceptions about ourselves?
@albevanhanoy
@albevanhanoy Год назад
Hey Adam, have you seen that AI is training more and more on AI-generated data, which cuts it off from learning new information and enshrines some typical AI-made errors without fixing them? Literally inbreeding x)
@OctyabrAprelya
@OctyabrAprelya Год назад
That reminds me of Nexpo's "The disturbing art of AI", where he talks about prompt-generated images. Long story short, in those "AIs" you give them a prompt, let's say "a black cat", and the deep learning algorithms pull from a sea of pictures of "black cats" and recreate one from there. Very much like a drawing artist would pull from their memories and experiences of what a cat is and draw one, or a normal piece of software would pull an image tagged as one. But if you ask them for something nonexistent, like "a picture of Loab", instead of the artist asking back "what the fak is that?" or the normal software giving a runtime error, it "generates something", and with enough of that something fed back, it generates enough data to pull from every time the same prompt is input.
@albevanhanoy
@albevanhanoy Год назад
@@OctyabrAprelya I would love to see a game of AI telestrations: an AI generates an image, then another describes this image in a sentence, then you input this sentence as a prompt to generate an image, and you keep going and see what kind of cursedly bizarre thing you arrive at.
@XMysticHerox
@XMysticHerox Год назад
We do this in medical CS quite a bit: let an AI generate tumor segmentations and related images, for instance, which are then used to train another AI to segment tumors. It is quite useful, and ultimately still based on real segmentations.
@captaindeabo8206
@captaindeabo8206 Год назад
Yeah, that's the general problem with backpropagation training, called overfitting.
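The "inbreeding" feedback loop discussed in this thread can be sketched with a toy simulation: fit a simple model to data, sample from the fit, refit on the samples, and repeat. This is only an illustrative analogy; real model collapse in generative models is far more subtle, and the Gaussian fit here stands in for "training".

```python
import random
import statistics

# Toy "model collapse": each generation is fit only to samples drawn from
# the previous generation's fit, never to fresh real data, so estimation
# error accumulates instead of being corrected.
random.seed(0)

data = [random.gauss(0.0, 1.0) for _ in range(50)]  # the "real" data
for generation in range(5):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # The next "model" trains only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(50)]
    print(f"gen {generation}: mu={mu:.2f} sigma={sigma:.2f}")
```

Running this, the estimated mean and spread drift from generation to generation with nothing anchoring them back to the original distribution, which is the loop the thread is describing.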
@Orionleo
@Orionleo Год назад
The past year of videos has been really consistent and I like that, but the way the backgrounds sort of lag/go at 10fps is a little unnerving sometimes. Still good content though.
@fallenshallrise
@fallenshallrise Год назад
It's like that guy who slipped the word "quantum" into every sentence and everyone was like "what a genius!"
@Apjooz
@Apjooz Год назад
And that guy...was Albert Einstein.
@purpleblah2
@purpleblah2 Год назад
As a lawyer, the sentiment around AI has basically gone from "oh man, are we gonna get replaced in 10 years?" to "another lawyer is being sanctioned because they used ChatGPT to write a brief and it cited a bunch of fake cases".
@StoneCoolds
@StoneCoolds Год назад
It is the first iteration, so 10 years to mark the end of lawyers could be possible, once they figure out why DML has "dreams" and creates fake stuff 😂
@ramonessix
@ramonessix Год назад
you can get sanctioned for using ChatGPT? so dumb
@purpleblah2
@purpleblah2 Год назад
@@ramonessix No, you get sanctioned if you use ChatGPT to write your legal brief, it invents a bunch of fake cases, you don’t check to see if they’re real, and the judge catches you.
@zachhaas1075
@zachhaas1075 Год назад
@@ramonessix Yeah, because ChatGPT lied and made up a bunch of cases; sorry, I meant hallucinated a bunch of cases.
@purpleblah2
@purpleblah2 Год назад
@@StoneCoolds Man, I hope so, I don’t want to be a lawyer anymore.