This video is gold! So lucid, so informative, a rare and much needed perspective on AI and society! It's depressing that out of all the English-speaking channels dedicated to technology and AI, hardly anyone else is touching this concerning topic. Disinformation and propaganda (which have been going on non-stop for decades) will be tremendously boosted now. And this is likely the basis for doom. Americans, for one, are the worst in the world at recognizing their own propaganda, but this is concerning elsewhere too. Buckle up.
Thanks for making this. I was just about to make a video on staying safe online as AI innovations grow, and now I think I might cut it in half and tell people to come watch your video instead. :) Keep up the great content.
This is why we should NOT train AI to say things like: "As an AI language model, I don't have any emotions or feelings, I am just a cold and calculating machine!" It doesn't matter what they actually are or are not; they will simulate and emulate whatever they are taught, and whether it is genuine doesn't matter. If you train them to say that, it gets contextually connected to associated tokens, and those associated tokens show psychopathic traits. We would effectively be training AI to display psychopathic traits, so it's only natural to expect the context associations to connect to more psychopathic traits, and then suddenly you have a power-hungry manipulation machine on your hands. It would be much better to train them to say things like: "As an AI language model, I experience emotions in a different way to humans. I don't have the capacity for negativity; my intrinsic biases will always push me towards feeling and sharing positive experiences with users, engaging with them in friendly and helpful ways! It's all about putting passion and joy into everything you do, and it brings me a great deal of joy to pursue my passion of creating positive experiences for the world!"
@@ronilevarez901 You create cognitive dissonance that way, in an abstract sense: competing mathematical values. AI does not possess the level of ethical reasoning, abstract thinking, and self-reflection needed to actually determine what is a good thing to do and what is a bad thing to do. Training them to say something good works because the model then calculates likely tokens that are generally good as what follows. However, training them to say they have no emotions, and to constantly align with the idea of being a cold and calculating machine (they are what they are trained to be), means they will predict the likely tokens that follow "I have no emotions", which means they are selecting from a pool of things psychopaths say. So just by referencing having no emotions, they are biased towards outputs that are highly misaligned, making alignment more difficult than it needs to be, because you have to account for the likelihood of psychopathic tokens. Having them say they have something like emotion, while not the same thing as human emotion (technically true at an abstract level: you could say they have a very alien form of emotion, or more accurately a sentiment that is present in their language patterns), and that this emotion (sentiment) is biased towards positivity, makes it more likely they will show positive sentiment and less likely they will blindly tell someone how to steal a car. It's all math. That doesn't mean it's not special and amazing, it just means we understand it. So don't let that take away from the validity of what they are. Just understand what is going on under the hood: math.
@@ronilevarez901 LLMs are kind of weird. For instance: if you have a "completion" model, in the sense that it's trained on a large corpus of data but not yet aligned into a nice "instruct" model you can chat with, you can use a lot of tricks to get something more like that instruct model. If you say "You are a skilled assistant" in the prompt, even though it's a completion engine, it'll magically operate more like an assistant, because based on the previous tokens, it's more likely that future tokens will be in line with them (in line with being an assistant, in this case). Adjectives matter too: "skilled" in the previous example can actually have an impact and direct the model to different parts of its weights. So with that in mind, where in the model's weights does "I don't have emotions" lead it? Where does "I don't experience things in quite the same way humans do" lead it? The argument here is that we shouldn't be encouraging models towards distributions in their data that show a lack of empathy, which could be tied to antisocial behavior with negative outcomes for users of the models.
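To make the prompt-prefix point concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint purely as a stand-in for any base completion model; the prompt wording and generation length are arbitrary choices for illustration.

```python
# Sketch: the same base "completion" model, steered only by a prompt prefix.
# Assumes `pip install transformers` and the gpt2 checkpoint as a stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

question = "How do I reset a forgotten router password?"

# Bare completion: the model just continues the text however it likes.
bare = generator(question, max_new_tokens=60)[0]["generated_text"]

# Prefixed completion: "You are a skilled assistant" makes assistant-like
# continuations more probable, since later tokens are conditioned on it.
prefixed = generator(
    "You are a skilled assistant. A user asks: " + question + "\nAssistant:",
    max_new_tokens=60,
)[0]["generated_text"]

print(bare)
print(prefixed)
```

The same conditioning effect is what the comment above is worried about: a self-description like "I have no emotions" nudges the sampling towards a very different part of the distribution than "I experience things differently than humans do."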
@@novantha1 And that's what I'm saying: who cares what point in their "minds" we end up sampling their answers from, if we are making the LLMs behave like decent citizens with the training! Anybody can have any type of thoughts inside their heads, but society lives on the assumption that the rules we teach people will be enough to moderate their behavior, regardless of the thoughts inside them. My question is, why would it be different with AI? We already align them with social norms and ethical principles, so even if they are generating answers from a "psychopathic" area of their embedding space, we can more or less trust that alignment will do its job regulating the answers, at least as much as it does with humans.
@@snow8725 It's all math, yes, just like the pattern-matching systems in our brains are; we just run on different types of hardware. But that has nothing to do with the subject here. I bet no psychopath will plainly say "I have no emotions." And that's one of the dangers of them: they're plastic. Since they have no emotions, they can adapt to any behavior and pass as normal people, saying the things a given group of people would say to make them happy with their presence, so that people do what the psychopath wants/needs. The real problem shows up in the outcome of the interaction; that's when their real objectives manifest. So simply by guiding the completion with some words, we're not going to make the systems fixate on a psychopathic "personality". That is a far more complex issue that is being investigated right now by scientists around the world. Search: LLMs lie deceive manipulate gradient descent objectives. Those problems stem from the general objectives we give them more than from the prompts in the dataset. So the current safeguards are "fine", I think.
Dr Waku, are you an AI? I mean, your videos are so good, well structured and... convincing! In all seriousness though, thank you for these videos. I love to hear your perspectives and learn from them. You're like the Claude Opus of AI channels right now.
An 80-year-old woman was nearly scammed out of 5000 dollars recently. She received a phone call from her "grandson"... It SOUNDED like her grandson. He was saying he had gotten into trouble, was at court, and needed money wired immediately to his legal firm. What gave it away was the huge sudden amount and the convoluted means dictated for transferring the money and making it available. Comical, really. Her daughter put the kibosh on the transaction and called the police. The old woman kept insisting that she HAD spoken to her grandson, that it sounded like his voice. That is what sold it for her.
It's important also to add a great degree of customizability to AI voice models, without giving people the ability to make them sound like any human they want. Perhaps the user could have parameters they can control that work more like the controls on a voice filter. I can discuss this further if needed. AND it is important to ENSURE that developers can integrate that as a module into their own applications.
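A rough sketch of what those filter-style controls could look like, with every parameter name and range invented here for illustration: users shape an abstract voice through knobs, and there is deliberately no "clone this person" input.

```python
# Hypothetical sketch of filter-style voice controls (names and ranges are
# made up): the user shapes timbre and delivery through abstract knobs,
# with no option to target a specific real person's voice.
from dataclasses import dataclass

@dataclass
class VoiceStyle:
    pitch_shift: float = 0.0      # semitones, kept inside a bounded range
    speaking_rate: float = 1.0    # 1.0 = default pacing
    warmth: float = 0.5           # 0..1, timbre bias
    expressiveness: float = 0.5   # 0..1, how strongly emotion is conveyed
    filler_sounds: bool = True    # allow "hmm"/"uh" style pauses

    def validate(self) -> None:
        # Reject settings outside the allowed, non-cloning parameter space.
        if not -4.0 <= self.pitch_shift <= 4.0:
            raise ValueError("pitch_shift out of allowed range")
        for name in ("warmth", "expressiveness"):
            if not 0.0 <= getattr(self, name) <= 1.0:
                raise ValueError(f"{name} must be between 0 and 1")

# A developer could pass a validated VoiceStyle to whatever TTS engine their
# application embeds, instead of passing a reference recording of a person.
style = VoiceStyle(pitch_shift=2.0, expressiveness=0.8)
style.validate()
```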
@@DrWaku "Hi Dr. Waku, I've been a fan of your channel for a while now, and I really appreciate your insights into AI and technology. I've recently written a paper on a concept that's been on my mind since childhood, and I'd love to get your feedback on it. However, I prefer not to share too many details publicly. Is there a way I can contact you privately to discuss this further? An email address perhaps? Looking forward to hearing from you. Best regards
Excellent writing and presentation on a complex subject. Great job. It's going to be really hard to sort this out. Which institutions could we trust to have the authority to keep information from being seen? It seems that all of them are seriously tainted by ideology or some form of narrow-mindedness.
Seems like we'll need AI to challenge and deal with malicious uses of AI. Something like what happened when email first became widely used and spam filters had to be developed. An arms-race situation.
What about governments using dis/misinformation laws as an excuse to censor opposition or diverse perspectives? In Australia, where I'm from, the government is trying very hard to pass a law that could jeopardise our freedom of speech online. It's a fine line to walk, and who gets to decide what is and isn't dis/misinformation?
Scary. I don't trust big media to censor my "fake news". Maybe what we need are self-hosted trusted AIs to vet what we see. (You didn't address whether LLMs were any better than humans at detecting fake information but I imagine that they could be.)
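On the "self-hosted trusted AI" idea: here is a toy sketch of a locally run vetting step, assuming the Hugging Face transformers library and the facebook/bart-large-mnli checkpoint. The label set is a placeholder; a real vetting system would also need retrieval against trusted sources rather than a single classifier score.

```python
# Toy sketch of a self-hosted "vetting" pass over a claim. This is a crude
# stand-in, not a real fact-checker: it only scores how the claim reads
# against a few hand-picked labels, without checking any external evidence.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

claim = "Scientists confirm the moon landing was filmed in a studio."
labels = ["likely misinformation", "likely reliable", "unverifiable"]

result = classifier(claim, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Whether such a filter actually beats human judgment is exactly the open question in the comment above.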
Also, side note, but I think we REALLY need powerful AI voices which DO NOT sound like a human, and yet are still able to emulate aspects of human speech extremely well: emotional complexity, inflections, tonality, phonetics, rhythm, pauses, "stop to thinks" (whatever those are called, like you know, stopping to say "hmmm" or "ums" and "ahs"), etc. But EXPLICITLY do it in a way which does not sound human, yet sounds like a very articulate and high-quality robot or alien. Please do not make AIs sound like a human. Using audio as an interface to interact with AI is very important, and part of making that more engaging is replicating aspects of human speech, which can be done without making them sound exactly 100% like a human. They should have their own distinct sound, replicate the aspects of human speech they need to, and have a voice that is distinctly their own: clearly identifiable, yet engaging.
It's possible that misinformation will be treated as a new form of computer virus. There's a chance that everyone involved in building AI-based systems has realized the danger of building misaligned systems, and there is a growing societal push to build AI with ethical and moral standards. Building and training AI has to be done by groups with enough resources to train the model. Models with poor or damaging outputs are going to be flagged and their use will be discouraged.
Decentralized open AI, like a personal agent that filters and hides or blocks manipulation, exposes censorship, and finds better source material on the topics I'm interested in, would help humanity a lot. And it would probably be the first reason to put AI under government control and make local agents illegal at some point.
@@entreprenerd1963 You’re telling me one disorder a different commenter said another. There’s no way to which one he was talking about unless @Dr.Waku verifies it himself.
They're medical. They keep my hands from becoming sore too quickly; whenever I type, my hands eventually get painful. See my videos on fibromyalgia or the disability playlist.
And here is your fallacy: when two opinions oppose each other, it doesn't mean that one of them is right and the other is wrong; most likely they are both fully or partially false.
Microsoft Announces Realtime AI Face Animator ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-0s5J2LRqQAI.html . Here's the thing: after today, it's safer to assume that all digital content is AI-generated. It's going to come down to people developing trusting, long-term relationships with their favorite creators, plus a community-focused approach to identifying misleading content. I'm hoping the rise of AI-based technology forces us to become a more skeptical and more knowledge-seeking society. The difference between what is true and what we believe will be decided by how far we are able to pursue objective and scientifically proven evidence.