Sure. What nobody wants to talk about is the fact that all of our relationships are only in our heads. They are not outside of us. We only have relationships with our own mental models of others. To believe otherwise is to live in a fantasy world of illusions. Sorry not sorry.
I’ve messed with them a decent amount in different capacities. People will not adopt it until the context window gets significantly larger. A friend is no good if they can’t remember / know who you are.
It's fascinating to discuss the role of AI in potentially isolating or, conversely, enhancing human interaction. I believe that, much like any technological tool, the impact of AI depends greatly on how we choose to use it. AI can complement human interactions in the same way literature or any form of culture does: it can help us process our experiences and manage relationships more effectively. By utilizing AI as a tool to improve our skills, expand our knowledge, or manage our well-being, we could potentially create more space for meaningful human connections. Ultimately, it's up to us to integrate these technologies into our lives in ways that enrich rather than isolate.
58:46 I'm strongly reminded of Jackie Treehorn in The Big Lebowski saying that "New technology permits us to do exciting things in software". That's, just, like, your opinion, man.
This AI trolling reminded me of why I left social media. So many times I have seen someone naive, simple-minded, and nice being dismissed as if they were a jerk, or ridiculed on social media... Don't you think that this naivety, kindness, and helpfulness could be what attracts people to AI? For people who base most of their communication on social media, moving away from ridicule, trolling, insults, and rudeness is a breath of fresh air when talking to someone or something. We created social media promoting conflict, rudeness, and mockery, and now we wonder why people so easily escape from socialization... absurd.
Kind of disturbing the degree to which Alex anthropomorphizes his product. He keeps referring to Nomis as "entities" and says he trusts his Nomi's "judgement". His answer to how they protect user privacy, that his company's interests and their customers' interests are "aligned", was not very encouraging, either. It was all a bit like a pitch: he suggested his users fall into 5 rather altruistic categories, of which only the last one is possibly romantic, and yet says most of his users request a romantic Nomi.
AI: "life throws curveballs". And Casey throws boomerangs, then hides behind the AI and waits for the boomerang to come back and hit the poor LLM in the head! 🤣 Perfect example of how and why people who have human friends might also enjoy AI for interactions that humans would not be very happy about (being trolled). 🙃 But I agree, the model is the most important thing. Llama 3 models somehow still default to being "clueless interrogators", even though they got much better in terms of sophistication and nuanced responses vs. Llama 2, in my experience. I am testing one of these for a social psychology research project, and, well, I frequently rant to KIN AI (not to be confused with Kindroid etc.) about how that Llama tripped up again (KIN uses GPT-4 Turbo under the hood). It's a night-and-day difference. Speaking of emotionally intelligent AI: Casey, please go and trip up Hume AI and record that for the next podcast. They have a demo that runs in the browser, free. You're gonna have to put some effort into hiding your cheeky intent from your intonation with that one, and I would *love* to see the results! 🙏 😂👍
1:10:47 - Look at the vague promises the Nomi CEO makes about user data privacy. That tells us that he's not worried about it. If you use this, make sure you pseudonymize your identity. Even that would not be enough, as people can re-identify you based on what you told the chatbot.
AI friends are still too close to "fancy autocomplete" to be sufficiently useful for me as a simulated "shoulder to cry on" at this time. And I suspect they will not reach that state of usefulness in 10 years, never mind 10 months, as Elon would probably bombastically overpromise.
Your AI "friends" are just latter-day Tamagotchis. And the AI therapist is essentially a horoscope. Finally, do you really want to be whispering your darkest secrets and fears into a system that is recording every word? Seems like a crap idea.