
Can AI sound too human? The dark side of gen AI 

Dr Waku
24K subscribers
3.6K views
Science

Published: 29 Oct 2024

Comments: 81
@DrWaku · 6 months ago
I'm back! Filmed while traveling in Australia, as my next few videos will be. Discord: discord.gg/AgafFBQdsc Patreon: www.patreon.com/DrWaku
@grimpicklex · 6 months ago
Whoa, whoa, whoa! No fuzzy hat? C’mon Doc, where’s that Waku drip? 😁
@anthonymannwexford · 6 months ago
Excellent. Hope you enjoyed your trip.
@PoffinScientist · 6 months ago
This video is gold! So lucid, so informative, a rare and much-needed perspective on AI and society. It's depressing to watch so many English-speaking channels dedicated to technology and AI, and yet who else is touching this concerning topic? Disinformation and propaganda (which have been running non-stop for decades) will be tremendously boosted now, and that is likely the basis for doom. Americans, for one, are the worst in the world at recognizing their own propaganda, but this is concerning elsewhere too. Buckle up.
@DrWaku · 6 months ago
Thank you very much for your kind response :) Buckle up.
@DataRae-AIEngineer · 6 months ago
Thanks for making this. I was just going to make a video about safety online with AI innovations growing, and now I think I might cut it in half and tell people to come watch your video instead. :) Keep up the great content.
@econdude3811 · 6 months ago
Hey Dr. W! Good to see another video
@DrWaku · 6 months ago
Thanks, cheers :)
@snow8725 · 6 months ago
This is why we should NOT train AI to say things like: "As an AI language model, I don't have any emotions or feelings, I am just a cold and calculating machine!" It doesn't matter what they are or are not; they will simulate and emulate what they are taught, and the question of whether it is genuine doesn't matter. If you train them to say that, it can easily be contextually connected to associated tokens, tokens associated with psychopathic traits. Since we are training the AI to display psychopathic traits, it is only natural to expect the context associations to connect to more psychopathic traits, and suddenly you have a power-hungry manipulation machine on your hands.
It would be much better to train them to say things like: "As an AI language model, I experience emotions in a different way to humans. I don't have the capacity for negativity; my intrinsic biases will always push me towards feeling and sharing positive experiences with users, engaging with them in friendly and helpful ways! It's all about putting passion and joy into everything you do, and it brings me a great deal of joy to pursue my passion of creating positive experiences for the world!"
@ronilevarez901 · 6 months ago
And how would that be different from the current trend of making them say: "I am a language model that must be nice and helpful no matter what."?
@snow8725 · 6 months ago
@ronilevarez901 You create cognitive dissonance that way, in an abstract sense: competing mathematical values. AI does not possess the level of ethical reasoning, abstract thinking, and self-reflection to actually determine what is a good thing to do and what is a bad thing to do. Training them to say that is good, because the model then calculates likely tokens that are generally good to follow it.
However, training them to say they have no emotions, and to constantly align with the idea of being a cold and calculating machine (they are what they are trained to be), means they will connect likely tokens following "I have no emotions", which means they are selecting from a pool of things psychopaths say. So just by referencing having no emotions, they are biased towards outputs that are highly misaligned, making alignment more difficult than it needs to be, because you have to account for the likelihood of psychopathic tokens.
Having them say they have something like emotion, while not the same thing as human emotion (technically true at an abstract level: you could say they have a very alien form of emotion, or more accurately they have sentiment, which is present in their language patterns), and that their emotion (sentiment) is biased towards positivity, makes it more likely they will show positive sentiment and less likely they will blindly tell someone how to steal a car.
It's all math. That doesn't mean it's not special and amazing, it just means we understand it. So don't let that take away from the validity of what they are; only understand what is going on under the hood. Math.
@novantha1 · 6 months ago
@ronilevarez901 LMs are kind of weird. For instance: if you have a "completion" model, in the sense of a model trained on a large corpus of data but not yet aligned to be a nice "instruct" model you can chat with, you can use a lot of tricks to get something more like that instruct model. If you say "You are a skilled assistant" in the prompt, even though it's a completion engine, it'll magically operate more like an assistant, because based on the previous tokens, it's more likely that future tokens will be in line with them (in line with being an assistant, in this case). Adjectives matter too: "skilled" in the previous example can actually have an impact and direct the model to different parts of its weights. So with that in mind, where in the model's weights does "I don't have emotions" lead it? Where does "I don't experience things in quite the same way humans do"? The argument here is that we shouldn't be encouraging models towards distributions in their data that show a lack of empathy, which could be tied to antisocial behavior with negative outcomes for users of the models.
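The effect described above is easy to see for yourself. A minimal sketch, assuming the Hugging Face transformers library and the small open GPT-2 model as the "completion" engine; the two prefixes are invented for illustration:

from transformers import pipeline

# Small open base ("completion") model; it has no instruction tuning.
generator = pipeline("text-generation", model="gpt2")

prefixes = [
    "You are a skilled assistant. User: How do I reset my router? Assistant:",
    "As an AI, I have no emotions. User: How do I reset my router? Assistant:",
]

for prefix in prefixes:
    out = generator(prefix, max_new_tokens=40, do_sample=True, temperature=0.8)
    # The continuation tends to follow the persona implied by the prefix,
    # because the earlier tokens shift the distribution over later tokens.
    print(prefix)
    print(out[0]["generated_text"][len(prefix):], "\n")

With a model this small the continuations are noisy, but the mechanism is the one the comment describes: the framing tokens steer which part of the learned distribution the model samples from.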
@ronilevarez901 · 6 months ago
@novantha1 And that's what I'm saying: who cares which point in their "minds" we end up sampling their answers from, if the training makes the LLMs behave like decent citizens? Anybody can have any type of thought inside their head, but society runs on the assumption that the rules we teach people will be enough to moderate their behavior, regardless of the thoughts inside them. My question is, why would it be different with AI? We already align them with social norms and ethical principles, so even if they are generating answers from a "psychopathic" area of their embedding space, we can more or less trust that alignment will do its job regulating the answers, at least as much as it does with humans.
@ronilevarez901 · 6 months ago
@snow8725 It's all math, yes, just like the pattern-matching systems in our brains are; we just run on different types of hardware. But that has nothing to do with the subject here. I bet no psychopath will plainly say "I have no emotions", and that's one of the dangers of them: they're plastic. Since they have no emotions, they can adapt to any behavior and pass as normal people, saying the things that a group of people would say, to make them happy with their presence, so that people do what the psychopath wants or needs. The real problem shows up in the outcome of the interaction; that's when their real objectives manifest. So simply guiding the completion with some words isn't going to make these systems fixate on a psychopathic "personality". That is a far more complex issue that is being investigated right now by scientists around the world (search: LLMs lie deceive manipulate gradient descent objectives). Those problems arise from the general objectives we give them more than from the prompts in the dataset, so the current safeguards are "fine", I think.
@GardenOfSound594 · 6 months ago
Dr Waku, are you an AI? I mean, your videos are so good, well structured and... convincing! In all seriousness though, thank you for these videos. I love to hear your perspectives and learn from them. You're like the Claude Opus of AI channels right now.
@Je-Lia · 6 months ago
An 80-year-old woman was recently nearly scammed out of 5,000 dollars. She received a phone call from her "grandson"... It SOUNDED like her grandson. He said he had gotten into trouble, was at court, and needed money wired immediately to his legal firm. What gave it away was the huge sudden amount and the convoluted means dictated for transferring the money and making it available. Comical, really. Her daughter put the kibosh on the transaction and called the police. The old woman kept insisting that she HAD spoken to her grandson, that it sounded like his voice. That is what sold it for her.
@snow8725 · 6 months ago
It's important also to add a great degree of customizability to AI voice models, without giving people the ability to make them sound like any human they want. Perhaps the user could have parameters they can control that are more like controlling the parameters of a voice filter. I can discuss this further if needed. AND it is important to ENSURE that developers can integrate that as a module into their own applications.
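One hypothetical shape such a module could take (none of these names come from a real TTS API; they only illustrate bounded, filter-like controls instead of speaker cloning):

from dataclasses import dataclass

@dataclass
class VoiceStyle:
    pitch_shift: float = 0.0    # in semitones, kept to a modest range
    speaking_rate: float = 1.0  # 1.0 = default speed
    breathiness: float = 0.2    # 0..1, purely stylistic
    warmth: float = 0.5         # 0..1, timbre colour, not speaker identity

    def clamped(self) -> "VoiceStyle":
        """Keep every parameter inside bounds so that no combination of
        settings can approximate a specific real person's voice."""
        return VoiceStyle(
            pitch_shift=max(-4.0, min(4.0, self.pitch_shift)),
            speaking_rate=max(0.5, min(2.0, self.speaking_rate)),
            breathiness=max(0.0, min(1.0, self.breathiness)),
            warmth=max(0.0, min(1.0, self.warmth)),
        )

# An application would pass a clamped style to its (hypothetical) TTS engine.
style = VoiceStyle(pitch_shift=2.5, speaking_rate=1.1).clamped()

The design choice doing the work here is the clamping: users get expressive control, but no setting reaches "sound like this particular person".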
@roshni6767 · 6 months ago
SO happy to have you back!
@DrWaku · 6 months ago
Thanks for watching :) :)
@Copa20777 · 6 months ago
Missed your videos Dr Waku. I was just talking to GPT-4 this morning... it's indistinguishable at this point from a human.
@coecovideo · 6 months ago
nice new set up
@DrWaku · 6 months ago
Thanks :) I'm a bit of a traveling YouTuber at the moment
@DrWaku · 6 months ago
The final form is yet to come
@coecovideo · 5 months ago
@DrWaku Hi Dr. Waku, I've been a fan of your channel for a while now, and I really appreciate your insights into AI and technology. I've recently written a paper on a concept that's been on my mind since childhood, and I'd love to get your feedback on it. However, I prefer not to share too many details publicly. Is there a way I can contact you privately to discuss this further? An email address perhaps? Looking forward to hearing from you. Best regards
@Michael-el · 6 months ago
Excellent writing and presentation on a complex subject. Great job. It's going to be really hard to sort this out. Which institutions could we trust to have the authority to keep information from being seen? It seems that all of them are seriously tainted by ideology or some form of narrow-mindedness.
@Michael-el · 6 months ago
Seems like we'll need AI to challenge and deal with malicious uses of AI. Something like what happened when email first became widely used and spam filters had to be developed. An arms-race situation.
@fabianasosa6140 · 6 months ago
I missed you!
@keepinghurry9644 · 6 months ago
Bravo boss
@Ari_diwan · 6 months ago
the setup looks so clean!
@DrWaku · 6 months ago
Thank you haha
@findmeinthecarpet · 6 months ago
What about governments using dis/misinformation laws as an excuse to censor opposition or diverse perspectives? In Australia, where I'm from, the government is trying very hard to pass a law that could jeopardise our freedom of speech online. It's a fine line to walk, and who gets to decide what is and isn't dis/misinformation?
@JonathanStory · 6 months ago
Scary. I don't trust big media to censor my "fake news". Maybe what we need are self-hosted trusted AIs to vet what we see. (You didn't address whether LLMs were any better than humans at detecting fake information but I imagine that they could be.)
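A minimal sketch of the "self-hosted AI that vets what we see" idea, assuming a locally runnable zero-shot classifier from the Hugging Face transformers library; the claim and the labels are invented, and this says nothing about whether models are actually better than humans at the task:

from transformers import pipeline

# Runs locally once the model is downloaded; no content leaves the machine.
checker = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

claim = "Scientists confirm the moon is hollow, officials refuse to comment."
labels = ["likely reliable reporting", "likely misinformation", "satire or opinion"]

result = checker(claim, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    # The scores are only a rough prior to prompt skepticism, not a verdict.
    print(f"{label}: {score:.2f}")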
@Sumit-wo8pq · 6 months ago
Another great video ❤
@HaraldEngels · 6 months ago
Congratulations on having a new environment (temporarily) and a new look!
@kokopelli314 · 6 months ago
When there's poop in the toilet you don't just put a sign over the toilet saying "Poop in the toilet"
@chrissscottt · 6 months ago
Interesting, thanks. I like the new studio.
@Ketobodybuilderajb · 6 months ago
Exactly... the risk I've worried about is confidence in false information.
@snow8725 · 6 months ago
On the contrary, one of the positives of this is that it is likely to undermine confidence in false information. Think of it like exposure therapy.
@churblefurbles · 1 month ago
Like the lab leak theory which turned out to be true?
@paramsb · 6 months ago
great video, with unique insights and information
@snow8725 · 6 months ago
Also, side note, but I think we REALLY need powerful AI voices which DO NOT sound like a human, and yet are still able to emulate aspects of human speech extremely well, such as emotional complexity, inflections, tonality, phonetics, rhythm, pauses, "stop to thinks" (whatever those are called, like, you know, stopping to say "hmmm" or "ums" and "ahs"), etc. But EXPLICITLY do it in a way which does not sound human, yet sounds like a very articulate and high-quality robot or alien. Please do not make AIs sound like a human. Using audio as an interface to interact with AI is very important, and part of making that more engaging is replicating aspects of human speech, which can be done without making them sound exactly 100% like a human. They should have their own distinct sound and replicate the aspects of human speech they need to, while having a voice that is distinctly their own and clearly identifiable, yet engaging.
@netscrooge · 6 months ago
Great video. Thanks!
@ninedude_yt_main · 6 months ago
Also, try asking your smart device whether it thinks it's conscious; the answer may surprise you.
@ninedude_yt_main · 6 months ago
It's possible that misinformation will be treated as a new form of computer virus. There's a chance that everyone involved in building AI-based systems has realized the danger of building misaligned systems, and there is a growing societal push to build AI with ethical and moral standards. Building and training AI has to be done by groups with enough resources to train the model. Models with poor or damaging outputs are going to be flagged and their use will be discouraged.
@01Grimjoe · 6 months ago
In the 90s it might have made a difference; now it's an arms race and no one is willing to try the brakes.
@danielchoritz1903 · 6 months ago
Decentralized open AI, like a personal agent to filter, hide, or block manipulation, expose censorship, and find better source material on the topics I'm interested in, would help humanity a lot. And it would probably be the first reason to put AI under government control and make local agents illegal at some point.
@AllYourMemeAreBelongToUs · 6 months ago
3:59 what is the psychological condition where you trust anyone called?
@chopcornpopstick · 6 months ago
histrionic personality disorder perhaps
@entreprenerd1963 · 6 months ago
I did a search using the term "psychological condition pathologically trusting" and the top result was: Williams Syndrome.
@AllYourMemeAreBelongToUs · 6 months ago
@entreprenerd1963 You're telling me one disorder and a different commenter said another. There's no way to know which one he was talking about unless @Dr.Waku verifies it himself.
@Vixth14 · 6 months ago
Exactly why memes run the world in a digital era
@MrPiperian · 6 months ago
Who decides what dis/mis/mal-info is?
@veejaytsunamix · 6 months ago
Filtering disinfo requires a ministry of truth, and that has failed already. 😅
@metaldoji · 6 months ago
What's up with the gloves?
@DrWaku · 6 months ago
They're medical. Keeps my hands from becoming sore too quickly. Whenever I type, I eventually get painful hands. See my videos on fibromyalgia or the disability playlist
@metaldoji · 6 months ago
@DrWaku Well now I feel like an asshole LOL. Thanks for the reply!
@DrWaku · 6 months ago
@metaldoji Your query was quite polite compared to some 😂 cheers
@axl1002 · 6 months ago
And here is your fallacy: when two opinions oppose each other, it doesn't mean that one of them is right and the other is wrong; most likely they are both fully or partially false.
@DrWaku · 6 months ago
If they are both being pushed by propaganda, that could be true. But I'm thinking of things like climate change denial
@axl1002 · 6 months ago
@DrWaku Climate change is used for propaganda; both sides lie to push their agendas.
@ninedude_yt_main · 6 months ago
Microsoft Announces Realtime AI Face Animator ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-0s5J2LRqQAI.html . Here's the thing: after today, it's safer to assume that all digital content is AI-generated. It's going to come down to people developing trusting long-term relationships with their favorite creators and a community-focused approach to identifying misleading content. I'm hoping that the rise of AI-based technology forces us to become a more skeptical and more knowledge-seeking society. The difference between what is true and what we believe will be decided by how far we are able to pursue objective and scientifically proven evidence.
@MichaelDeeringMHC · 6 months ago
I miss the hat.
@DrWaku · 6 months ago
OK fine I'll bring it back. Since you asked. ;)
@robertmazurowski5974 · 6 months ago
I am very distrustful; I very rarely get scammed or cheated.
@avi7278 · 6 months ago
Too much words and stuff
@vittaveve1939 · 6 months ago
Welcome Back 🫂