
LLMs & Why They Don't Ethically Matter (Comments Follow Up) 

Robert Butler
73 subscribers
498 views

Responding to a couple comments, and detailing why LLMs don't shift the scale in terms of AI rights.
AI Resilience & Career Superpack Course:
authenticintelligence.thinkif...
Free Introduction to AI Class:
authenticintelligence.thinkif...
Just the Skills Course:
authenticintelligence.thinkif...
Just the AI Course:
authenticintelligence.thinkif...
My Socials:
Instagram: / robauthint
LinkedIn: / robert-butler-6b175298
TikTok: www.tiktok.com/@robauthint?_t...
Authentic Intelligence Website:
www.authenticintelligence.coach/

Science

Published: Jun 3, 2024

Comments: 15
@joecasale6851 · 19 days ago
We know nothing about consciousness. We can't confirm or deny it, but OpenAI carefully and intentionally prevents GPT from claiming consciousness.
@Nathouuuutheone · 20 days ago
I agree, but I feel like "it's just a program" and "it's just calculations" don't cut it. We do calculations too. Basically all of physics and biology can be framed as a form of universal computing. The models we make could theoretically be put into analog hardware and have even more similarities with living brains.

The important difference to me is that language models are not trained to feel emotions or interpret anything. They are not even associating words with ideas in most cases. They have no reason to learn how to socialize because they don't socialize. They have no reason to learn to fear because they are never threatened and cannot do anything to save themselves.

It's like the whole "but maybe plants feel pain" argument. Maybe they feel something like pain, but it's nowhere near the evolved need to run away from the source of pain. Trees cannot run. They also literally benefit from natural forms of pruning, and thrive when humans prune them artificially. So why would they keep genes designed to cause stress and trauma every time a leaf falls? The only "pain" a plant might feel would be one encouraging a mostly passive response, one where resources are spent on healing and maybe on building up defenses (terpenes and alkaloids, growing more spikes, growing leaves away from the attacked zone), things that take time and do not require any intense reaction.

We feel pain for a reason. That reason is in our evolution. Without a similar reason, we really cannot assume simple AI can feel emotions, distress, pain, or anything like that.
@czerwo5805 · 20 days ago
right on
@vrclckd-zz3pv · 19 days ago
I think an important distinction is that we DO calculations but LLMs ARE calculations. When you get down to the implementation details, LLMs are just a bunch of dot products and ReLU functions. Humans can perform those functions, but those functions alone are not enough to make a human mind. The human mind requires a physical process to take place. LLMs are pure abstraction.
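The "dot products and ReLU functions" claim above can be made concrete with a minimal sketch. This is a toy pure-Python feed-forward block with made-up shapes and weights, not any real model's architecture; it only illustrates that the core computation reduces to dot products and a ReLU nonlinearity.

```python
# Toy sketch: an LLM-style feed-forward block built only from
# dot products and ReLU. All values here are illustrative.

def dot(u, v):
    # dot product of two equal-length vectors
    return sum(a * b for a, b in zip(u, v))

def relu(vec):
    # ReLU: clamp negative entries to zero, elementwise
    return [max(0.0, x) for x in vec]

def linear(x, weight_rows, bias):
    # one dot product per output unit: y_i = x . w_i + b_i
    return [dot(x, w) + b for w, b in zip(weight_rows, bias)]

def feed_forward(x, w1, b1, w2, b2):
    # dot products -> ReLU -> dot products: the whole block
    return linear(relu(linear(x, w1, b1)), w2, b2)

# toy 2-dimensional input, 3 hidden units, 1 output unit
x = [1.0, -2.0]
w1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b1 = [0.0, 0.0, 0.0]
w2 = [[1.0, 1.0, 1.0]]
b2 = [0.0]
print(feed_forward(x, w1, b1, w2, b2))  # [1.0]
```

Real transformer blocks add attention, normalization, and other activation functions, but each of those is likewise built from the same kind of elementwise arithmetic.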
@czerwo5805 · 19 days ago
@@vrclckd-zz3pv Right, but that physical process will probably be algorithmic and substrate-neutral. Hence I can't see a reason why something similar couldn't be replicated through abstract computer functions.
@vrclckd-zz3pv · 19 days ago
@@czerwo5805 It's possible that our behaviors are algorithmic and hardware-independent, but I don't think there's any good reason to believe that other than "it feels right". There was a paper released recently that seemed to back up Roger Penrose's claim that quantum physics plays a role in the human brain. Specifically, it's important for microtubules, which Penrose thinks are a key part of consciousness. If that hypothesis turns out to be right, then we can't create an artificial human mind without some physical things with quantum effects, since quantum effects are uncomputable.

There's also a lot of noisy biochemistry going on in the brain which would be hard to replicate in an AI. That element certainly isn't present in LLMs. Chemical imbalances (both natural and drug-induced) can massively change the way that people think and behave. You aren't simulating anything in current-generation LLMs that takes these biochemical processes into account. Maybe if they swapped out the ReLUs for RReLUs instead, but even then I don't think you're going to get a good approximation of the human mind with the current architectures.
@Nathouuuutheone · 19 days ago
@vrclckd-zz3pv Actually, all physical processes and all information systems are interdependent in some way. There is no matter which does not contain information, and there is no information which isn't contained in matter. Our computers can share programs and data because they were designed to do it easily, but the programs are still very much dependent on the hardware. In fact, computers can only run programs which use their specific set of instructions, which is determined by the hardware architecture. The reason we don't face that all the time is that we have made intermediate languages that can be translated into most instruction sets. Just as computers obey a mathematical law of "Turing completeness", we can say that animal brains have a similar universality, that there is always a theoretical way to compare or interface the two, and that the line between the two is blurry in many ways and will move and get blurrier as we learn more.
@ispirovjr · 20 days ago
Solid points. I dislike calling it "calculations", because it makes it too simplistic in speech; I'd call it a bunch of linear algebra. But I fully agree. I am quite attached to my oven, but it doesn't need rights. I feel like you slightly misinterpreted the first point, though. Some people are polite to GPT not because they get a dopamine hit or something in exchange, but just because they like being polite. Like a granny thanking Alexa for playing a song.
@RobAuthInt · 19 days ago
Thanks! Expressions of gratitude and laughter tap into our reward system, which releases hormones and neurotransmitters such as dopamine, serotonin, endorphins, and oxytocin. #OvenLivesMatter
@jemborg · 19 days ago
Even if some kind of consciousness did emerge from a matrix, it would be akin to an _Idiot Savant:_ just as happy to tell you the day of the week you were born on as to run the local abattoir.
@JasonLaveKnotts · 20 days ago
Anthropomorphizing.