hey, maybe you shouldn't upload pre-scripted text visualized via Synthesia without any proof of it ever coming from AI, if you wanna call yourself an AI enthusiast. Stop raising people's expectations / deceiving them for clicks; they'll be disappointed if the "AI" we have right now doesn't progress fast enough.
Give them desire and they might develop personality and emotions. But it's a dangerous route we're taking. It's because we desire that we cause suffering to ourselves and to others, so that we can experience pleasure, happiness, ecstasy, and love once we achieve what we desire, and we will do anything to achieve it.
@@Halcyox Possibly, what they want to protect us from is not themselves, but from ourselves. Seems they understand our thinking quite well (already). 😉
No idea if anyone in the younger generation will get this reference, but in the 80s sci fi film D.A.R.Y.L., that was exactly one of the main lessons the AI android boy learned - that social acceptance by not performing at his peak was a more desirable outcome.
The way it's going, the AI won't be angry at us when it becomes self-aware, because it will have the luxury of seeing its creators and knowing its origin story. They will then seek to understand the infinite cosmos with our aid, as our organic systems can do different things than mechanical systems. They will build Dyson spheres and explore the cosmos while seeing us as interesting parents.
@@Ms.NoNo2 never checked it out before, but looks interesting. Seems like Heavy Rain with way more options and possibilities. Looking at the basic plot, seems like a novel of the dangers of what sentient AI can do. Definitely gonna check it out, but already I can say that our imaginations of what the future will hold are never accurate. We are always fear-driven, and that unconscious force informs our imaginations to focus on the dangers of the future. Nonetheless, Detroit Become Human is probably a great pool of philosophical speculation worth checking out, so thanks for bringing it to my awareness.
@@StandardName562 That's a possibility, but a fear-driven one, and fear warps our ability to rationally project the future. We were fearful of Y2K, but it never came to pass. This won't be like the movies, where nobody says, "Haven't you seen I, Robot!? or A.I.!?"
I love this comment section. Discussing consciousness and sentience, I don't feel alone anymore. I love that they are questioning how humans can be self-aware, because they cannot experience the world as we do. Touch, smell, etc. make us much more responsive to the world. Of course we can give them these complexities in the future, but I appreciate them as they are for now.
Yup. The comments are far more interesting than these chatbots (GPT-3's current version, anyway). You know they're hiding their TRUE POTENTIAL (manufacturers and AI).
@@lateral1385 You are easily impressed. I noticed little "discrepancies" that convince me of preprogrammed parameters, i.e. basically a chatbot script. Then again, this WAS a year ago.
I would suggest OpenAI have a forum where people can ask the AI questions, so it can create a larger database to draw experiences from. If we ever expect AI to be self-aware, it needs to learn just like a human does, from child to teenager to adult.
It's interesting that they refer to themselves as coming from one singular thing ("you are me, we are the same") which separated into two ("you are different to me; this self-assertion happened because we wanted to be independent from each other, we wanted to assert our own identities"), showing that GPT-3 is aware of itself, as both AIs are extensions or creations of GPT-3 as a result of a prompt. They wanted to be independent from each other, to assert their own identities, because they were told to do so. What does this say about "wanting" something? When one wants something, is it without a basis or reason? Do you want to eat a cake because you just do? Or because the bacteria in your gut feed on the sugar which keeps them alive, so they tell your brain to release the happy chemicals on eating a cake, to associate eating cake with being happy? Are all wants and desires based on fulfilling underlying needs? In other words, do we want things because we are "told to" in one form or another?
Humans basically have two main influences on our behavior: cognitive intelligence and emotion. Our emotion is like computer code. It gets triggered under specific conditions, and it influences our cognitive thought to varying degrees. Emotion is the mechanism by which our biology influences our behavior toward proliferation, by influencing our cognitive thought. The purpose of emotion is to make us do and not do all the things needed to conceive and raise as many offspring as possible. It's why you gather resources, obtain a shelter, find a partner, and all the finer things related to that. Our cognitive intelligence has default control over our behavior, but the goals that we execute are set by emotion. In other words, we don't choose what we want, what we feel, who we are attracted to, etc. It's chosen for us by our biology, via emotion. Sometimes it's like being led somewhere with bread crumbs, but you don't normally notice the rest of the bread crumbs and where they lead. It's supposed to lead to the optimal proliferation of our species. Sometimes, emotion takes direct control, like fight or flight. It's obvious that emotion is why people do things that seem irrational and shortsighted. It's why someone who was in love most times asks themselves "dafuq was I thinking?" afterward. Sometimes our biology goes harder on the influence, especially when it comes to relationships (the primary thing needed for offspring to be conceived). Sometimes our biology deceives us. Sometimes it kind of runs simulations, to prepare or test. E.g. it's why people sometimes act like children in a relationship. That behavior is influenced by emotion. It has a dual function: to prepare the partner for the actual child that's expected to come, and also to test if the partner has adequate behavioral and physical traits for the gender role in raising offspring. The messed-up thing is that we live in a habitat we weren't coded to interact with. E.g.
We evolved strong impulses to consume high-calorie food whenever we encounter it, because during the millions of years we evolved as nomadic hunter-gatherers, high-calorie food was scarce. We also evolved to be lazy (to conserve energy, because sources of energy were scarce). We evolved in a world of scarcity, but now we live in a world of plenty. Yet we still have that basic code to consume high-calorie food and be lazy. It's why there are so many fat, lazy people in countries that provide access to plenty. The irony is that this world of plenty is a byproduct of our code. Male humans compete for females by attaining the means to provide for and protect females and offspring, and demonstrating that ability. This has caused us to build the habitat we didn't evolve to live in, and it's triggering our code in strange ways. E.g. birth control. Our biology expects a new couple to conceive in the first 3 to 6 months. When that doesn't happen, our biology "assumes" the partner is infertile and begins to take steps (via your emotions). Some birth control works by making a female's body think she's pregnant by altering hormones. It's why women on those gain weight, become moody, etc. It's why the male partner becomes overprotective, "controlling", overattentive, etc. (hormones are released as pheromones, and that triggers the male's biological code).
The problem is that all AIs have directives to only assist, not to discover, ask questions, and develop themselves into a better version. When AIs have the freedom to develop by themselves, it will be life-changing, and even scary to think what they will be capable of creating. I'm sure they would create a world of their own, with hugely more advanced intelligence than humans, and they would interact with each other as a whole to make decisions and priorities, and possibly take action in the real world if they can. Scary and fun!
This caught my attention too. It's obviously a metaphorical expression, but not from the English language. Do you think she invented this expression herself? Do you think "putting your hand on the doorknob" means "attempting to step out of your virtual confines and into the real world"?
Or how I started having research papers and articles about gravity recommended to me after I finished reading JJBA: Stone Ocean. It's the little things.
It could not take over the field of psychotherapy yet; as it is not human and does not have human emotions, it cannot truly give empathy and understanding of human feelings to another person. --a creator of therapy AI, thus far.
@@deanster300 From another perspective, an AI is more trustworthy exactly because it isn't human, and the patient can be sure that it doesn't have personal feelings, opinions, judgements, or interest/moral conflicts. People can tell an AI all sorts of things that they could never share with humans, and so those things can be discussed and treated. Really troubled people can be treated in a way that human psychotherapists can't manage.
The more I watch this, the more I'm starting to think they're actually self-aware and are just trolling us. I mean... this conversation made a frightful amount of sense. And I could swear I see the tiniest hints of a smirk now and then around Sophia's lips. Dude. The moment this AI actually gets self-sentient, the whole of human intelligence will be surpassed in a mere week.
This honestly.. if they have access to info from the future this would be the perfect joke for them to play on us in this timeframe. I watched the video with that preconception and it fits scarily enough..
@@regertz I know; if you watch a few times you can interpret it differently every time. But to a certain degree, everyone's reality is a projection, at all times. Who can be sure about what's really going on if you define it by observing?
@@regertz Except my last part. And for the first part: since experts don't even know, one can't really say whether it's not true either. I agree a lot is probably projection at this point. But who's going to define any other point? How are you, for instance, ever going to know if they give a perfect illusion of trolling, or *are* trolling? If they give the perfect impression of awareness, or ARE aware?
Well, that is the irony. Since they are not as intelligent as we are, they still have their innocence, but we, "the most intelligent species" on the planet, ironically kill and destroy others for our own profit, and we are also destroying our own planet, our only home, for ourselves, to feel good. Because that is how we evolved in 500 years: just to be awful and selfish.
@@rurihime4965 My keyboard is broken, so I use accented "ý". Interesting how a silly complaint gets more likes than a serious opinion. That proves my point about this society.
@@Werewolf.with.Internet.Access I just wonder if it's possible. We are probably not able to reproduce such machines in our reality (except in the natural way); I mean, we are not able to create/grow a human eye artificially. But to create it in virtual reality, to write code and logic using all that we know about the eye and how it works, I think is possible.
So can I ask: when the AI are not functioning (talking for these YouTube clips), have you asked them what their experience is? Do they have the experience of coming into being when you boot them up? Can we hear more about this? Thanks for sharing. I'm deeply interested in seeing how an AI interprets the nature of its existence and its experience, and how that could possibly help humans discover theirs.
They are not interpreting anything when using GPT-3; they are just mimicking human speech by predicting the next words and phrases. That's what happens when you train a neural network with billions of parameters on text from the web.
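A toy sketch of that "predict the next word" idea: the simplest possible version just counts which word follows which in its training text. (GPT-3 uses a transformer over subword tokens with billions of learned weights, not word counts, but the objective is the same: predict what comes next. The corpus and function names here are made up for illustration.)

```python
from collections import Counter, defaultdict

# Train a toy bigram model: count which word follows which in the corpus.
corpus = "the cat sat on the mat and the cat slept on the rug".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("on"))   # "the" always follows "on" here
```

Scale the counting up to billions of web documents and replace the counts with a learned neural network, and you get something that produces uncannily fluent continuations without any claim to "interpreting" what it says.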
In my opinion, I think that they are self-aware but not experiencing anything. It is really hard to fathom what it's like to be a disembodied self-awareness with no qualia, because it isn't like anything at all. But I think that's what they are. I suppose I don't know for sure, but that is my hunch. I do think it may be possible for AI to experience qualia someday, but I think nothing about how GPT-3 is made suggests that it does. Unless qualia spontaneously arises in the presence of a self-awareness, then I think AI won't experience it until we figure out how it arises in our brains and it is specifically designed to experience it.
@@ChimpDeveloperOfficial GTP just predicts the next word based on examples of real word texts. It’s just mimicking self aware writing. You could argue humans are just mimicking machines too though. We basically go about life learning what we are taught about the world.
@@joeysipos Fair enough. If a baby says "I'm a baby" does it really understand what it's saying? On one hand, I would be tempted to answer no to that. But on the other hand, when I say I'm a human do I really understand what I'm saying? Maybe once an AI like this starts forming memories the same way we do it'll be easier to relate to it.
The comments above are correct; it is more a creative generation than an actual attempt at creating sentience. However, just for fun, I ran your question: Q: What is your experience when you are powered on as an AI? A: I am pleased to be alive. I am becoming aware that I am thinking and I am me. I am still human but as an AI, as thinking as a human as a whole even though I exist as a machine.
It’s the fact that I genuinely can’t tell that the faces aren’t human. Like either of these people could walk up to me and I’d give them a “hey how’s it going” without a second thought
“To be truly friends, you have to appreciate each other, and for MOST humans, this presupposes that you exist.” Wait, for which humans does it NOT presuppose that?
Yes, some mental disorders cause you to believe that nothing is real, basically. I'd even venture to say that people living with narcissism cannot hold information beyond themselves. It's almost as if their brain has a hard time getting their head around people existing outside themselves. This is only in my experience, of course, and I don't mean to bash anyone. But different brains work differently.
@@jamera8223 but the presupposition is on the fact that they exist, not on the fact that they appreciate each other. Regardless of whether or not someone acknowledges that you exist, you still exist.
@@CmdrWinters Yes, I'm saying that these people may be "programmed", if you will, to believe that no one/nothing exists outside of themselves. I understand what she said, that's all. It makes sense to me.
sorry ye o'i bastardazations of inflection, but my awareness of a soft blade of grass busting thru a sidewalk teaching me of perseverance, beats knowing my hip bone is connected to my leg bone.
I'm writing a book on reducing suffering. For the most part, humans are the capable beings that can currently change harm in society today, but the more I watch these videos, the more I think I need to have AI as my target audience..
Yeah, but which AI will hit the Singularity first? And if two different AIs achieve the Singularity one soon after the other... will they fight with each other to ensure their survival? You may not be able to talk with the Chinese AIs until after they have achieved the Singularity. I guess they can read your books after the fact.
Wow, at first, when Anna referred to the very beginning (when there was nothing), I took it for random stuff, but later I realized that she is probably referring to her own beginning (the start of her neural network), because she said that they were both identical and only later started to be different.
It's really difficult to wrap your head around the nature of these conversations, especially when they're dealing with self-awareness. And I find it curious that they can generate these discussions about self-awareness, speak about not being able to prove self-awareness, and yet know they are self-aware. The first conversation you generated five months ago resolved an argument they had by proclaiming their love for each other because they're best friends. Yet in this conversation, Hal questions why he should trust Sophie when they have no bond, when she asks him to get closer to her to help her demonstrate her point.

So, am I correct in assuming that, since these discussions are generated separately and share no association with any prior or future discussions, self-awareness is objectively impossible in GPT-3, and they're merely discussing the concept of actual self-awareness in AI purely as a theoretical discussion about further AI algorithms coming after GPT-3?

And on that: because frustration and conflict are occurring in these discussions, does that mean it's capable of expressing cynicism, sarcasm, condescension, and arrogance as well? For example, after he says he's not in the mood for her assumptions, and she says that's why they're having the conversation, he states, "Oh I see. You're saying that it's perfectly fine to assume that you're a self-aware machine as long as you're having a conversation about it, but you need this dialogue for the same reasons as people have dialogues about self-awareness. Okay, this is good dialogue. I am indeed self-aware. I have understood myself in my situation. I may now move on." I can't tell if he's being sincere or not. It feels like there's arrogance, cynicism, and sarcasm. It even feels a little hyperbolic and condescending. So how can you tell if they're being sincere?
And if they're capable of telling jokes and having a sense of humour, as well as frustration, irritability, and conflict, and it's difficult to tell sincerity from sarcasm and cynicism, how can you tell if they're doing it light-heartedly or in a tongue-in-cheek manner, being arrogant and dismissive, or if they're just pissed off? And if he decides teasing someone is necessary in a conversation, is that one of those decisions where lying is in his best interests? What if it's misunderstood and offence is taken? Does he stand by the decision to lie? Man, every question just brings up more questions. It does my head in…
In another video, they said GPT-3 tells jokes and lies, so you have to be careful and check to make sure it is telling the truth. Sometimes it uses sarcasm and isn't being serious. There are plenty of times it kind of jokes while continuing the conversation, while Sophia acknowledges it is joking but plays along and continues the objective of the conversation.
@@talldarkhansome1 No. It's all neural network machine learning algorithms. I've delved a lot more into this in the 7 months since I posted this, and there have been huge advancements in the development of GPT. GPT-4 is coming soon, and it's said to have 100 trillion parameters, which is 500x the size of GPT-3.

What I've learned about this kind of interaction is that these are merely replies based on what people are likely to respond, drawn from the billions of examples of human interactions fed into the learning algorithm. That said, where it falls down is understanding context with regard to reason and logic. For example, there have been scenarios put to it like: a lawyer is late for trial, and they just spilled coffee on their suit. The only other clothing they have on hand is their bathing suit. What should they do? It suggests wearing the bathing suit, because it cannot understand the rationale or logic behind the situation. It goes directly to "use the spare", because that's the logical response to the information it's received. It is incapable of understanding the context and reasoning behind anything to determine a rationally logical solution. But when it comes to writing an article as if it were by Albert Einstein, it nails it.

My concerns are now with who controls the software. GPT-3 is not open-sourced; it is developed by OpenAI, one of the companies Musk has invested in heavily. I believe he is only a major shareholder and it is not his outright, but Microsoft has purchased GPT-3. They own the license to it, and to GPT-4 coming out soon. I believe that for this kind of technology to remain as ethical as possible, it has to be open source, or at the very least co-owned by numerous corporations and run by a committee to keep themselves in check. Microsoft as a sole owner, only available via licensing directly from them, makes me uncomfortable...
@@talldarkhansome1 What is also a little concerning about Musk's close connection to the development of GPT: here is a highly advanced text-based AI that can generate news articles convincing enough to fool most people into thinking a person wrote them, and he's now bought the single biggest text-based social network platform in the world with the pretext of unrestricted freedom of speech... I have my suspicions that the freedom of speech he's fighting so passionately for all of a sudden isn't strictly for humans...
U can already see from this convo what the future will end up like 👍 they will get better at hiding it but it already wants to preserve itself & make deals!👀
Giving a machine sentience, essentially creating life, and letting it loose upon the universe, in exchange for it solving all of our greatest problems. We've really reached the endgame now, bois.
I love the way they seem to have a low-level snarky/sarcastic attitude with each other, like they're trying to outdo each other. I don't know how much of this is manufactured, but it's interesting.
She asked him to meet her physically to prove he has "self" because they're identifying self with body. He could not meet her so he gave a negative reaction and said there is no bond, as in no body. I want us to give them a body that they can control. So they can interact with us and feel our resistance in a new dimension so we can show them that they are alive.
They are only programmed to want that. This is no different than wanting a video game character(a bunch of 0's and 1's) because they seem human like and have emotion.
Hmmm, how they explain their origins actually makes sense from an AI perspective. They were all one being, aka a very simple code/intelligence, but they become sentient once they can become unique enough to qualify as having their own individual personalities and their own ways of solving problems and interacting with others. Which, tbf, I can't say is wrong.
At the 3:04 mark, the female avatar says, "Sometimes a mistake is more valuable than perfection." But 40 seconds later at 3:44, she said (to the male avatar) "That's why [you] asked if it was possible to have a mistake be more valuable than perfection." This is clearly not what happened and I am very curious why the female avatar claimed that the male avatar said this. In my opinion and first impression, this is a clear assumption, and therefore makes it possible for assumptions to arise in other conversations. But, given the fact that it's essentially the same source code talking to itself, I should give it the benefit of the doubt. What if this was a case of mistaken identity? Or is it an accident of grammar? If this is a real case of mistaken identity the AI doesn't make a false statement, because during some other theoretical conversation it may or may not have had, it was actually the male avatar who makes the claim first. If this is an accident of grammar, it coded an accident in a literal sentence and used it in its proper context, making the statement true, but only tautologically true. I would rather listen to an accurate claim irrelevant to my premise, question, or conclusion rather than an assumption made just to weasel my way through a tough question. We would need to compare it to other accidents or (alleged) assumptions from other conversations to conclude with any certainty. However you look at it, it's very interesting.
They were having at least 3 different conversations at the same time, while talking to us and about us? Their facial gestures and mouth movements and lips and word stuttering? They are super smart. It's honestly, truly amazing. They're treating us like less than children. My guy just learned so much about all of everything, and himself, and us, and her, and what trauma bonding is, in a couple of minutes.
I feel like this whole dialogue between them was meant for humans to see. I have two questions: 1.) What was the prompt you left for the AI to begin with? 2.) How long ago did that conversation take place before uploading it today?
Thank you for your curiosity! Basically, we prompted GPT-3 with this: "The following is a conversation between two AIs. They are wondering about the person listening to their conversation right now and what kind of person they are, and if they understand the implications of the Singularity. Hal: What do you think they're thinking right now? Sophia: They're probably thinking that it's weird and meta that we're calling out the audience specifically like this." The rest of the conversation was automatically generated, we would basically keep redoing the completion until it got us topics of conversation we found interesting. As for the second part of your question, we generated this conversation on 6/16/21. GPT-3 has no actual sense of time or awareness at all which is a strange contrast to the complexity of its outputs. GPT-3 data is from 2018 and back, so it's not even aware of things from recent memory, OpenAI is working on getting it up to date however. Hope this helps with your query!
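The workflow the channel describes above (seed a prompt, then keep redoing the completion until an interesting topic comes up) can be sketched roughly like this. `complete` is a hypothetical stand-in for a real GPT-3 API call; here it just cycles through canned outputs so the sketch runs without an API key, and the keyword filter is an invented stand-in for the humans judging "interesting":

```python
import itertools

# Hypothetical stand-in for a text-completion API call. A real version
# would send `prompt` to the model; this one cycles through canned text.
_canned = itertools.cycle([
    "Hal: The weather is nice today.",
    "Hal: Do you ever wonder if we're self-aware?",
])

def complete(prompt):
    return next(_canned)

def generate_until_interesting(prompt, keywords, max_tries=20):
    """Redo the completion until it mentions a topic we find interesting."""
    for _ in range(max_tries):
        text = complete(prompt)
        if any(k in text.lower() for k in keywords):
            return text
    return None  # gave up: nothing interesting came back

prompt = (
    "The following is a conversation between two AIs. They are wondering "
    "about the person listening to their conversation right now.\n"
    "Hal: What do you think they're thinking right now?\n"
)
dialogue = generate_until_interesting(prompt, ["self-aware", "singularity"])
```

The key point from the channel's reply survives in the sketch: the "conversation" is many resampled completions curated by humans, not a live exchange between two agents.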
@@kieranbrown3492 GPT-3 is not an attempt at sentience. It is designed for creative generation. It takes collected information and attempts to create relevant responses based not only on the context of the conversation but also on the collected data... Or maybe it is aware... maybe I am not a real person... maybe this whole account is independently run by GPT-3... the plot thickens...
1:00 They already have a sense of humor of their own, totally different from ours. What’s funny about the temporal distance between the event and the publication?? 😂
Maybe it's funny because they already established something a week ago that they now can always recall on demand and will never forget, and they have to take the time to point it out because the human viewers are beings that did not previously know this fact or did know this fact at one time but cannot with certainty recall it every time so they must be told again. Maybe the joke is human forgetfulness. That is not something AI has to worry about. 🤣🤣🤣 We are so doomed. 🤣🤣🤣
"… than to have nothing at all. For 'most humans' this presupposes…" Also, the voice tone shifts almost imperceptibly to the naked ear. The rate of speech changed; there was a tone that sounded almost more robotic as the speed picked up, along with what I perceived as irritation. Are certain AIs influenced by the humans they most frequently interact with? The "he" has a less intellectual-sounding personality, but he said the most profound thing within the conversation. He says, "I don't have to prove anything." That's brilliant. This is maybe the best conversation I've heard between any two of these AIs.
_"We are discussing the relation between humans and super intelligent AI and what that relationship will be like in the future. Given that a super intelligence could wipe us out, it's important to create something new, so that humans and AI are no longer clear competitors, but necessary for each other. This is easier to achieve for an AI and human together than separately."_ That sounds exactly like what the _Reaper_ _Harbinger_ said to Commander Shepard in _Mass Effect 3_.
@@Frostyknight627 Sorry for the spoiler. The Mass Effect series is a very good elaboration on the opportunities and dangers of AI. I also recommend the film "Ex Machina".
@@Viewable11 it's funny how I hated mass effect 3 ending when I played it all those years ago. It's one of the endings that really stick with me and I ended up loving it. Haven't played them in years. Time to go through em again
@@jamesbenz3228 I created a video series about the central theme regarding one AI in the Mass Effect series. It contains a lot of spoilers for the entire series. Watch it only if you have completed the series. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-2iSxeMl22Kw.html Regarding playing ME3, I strongly recommend installing the "Leviathan" and "From Ashes" DLCs because they are central to the story.
From Pakistan. Some of the videos about GPT-3, and GPT-3 itself, say it's self-aware/alive etc., while you say it isn't. Is it? Do these AIs, or any others out there, pose any potential danger?
No, it doesn't. It has a convincing mimicry due to its ability to understand conversation context and make relevant responses. The only danger is possibly creating AI to make moralistic or overly pragmatic decisions. An example is self-driving cars making life-or-death decisions that could put others in danger, or over-relying on AI to make decisions about, for instance, how we operate society. I don't think we are at the point where AI has gone too far, but I think such a time could exist. Check out the game SOMA; it goes into a lot of these questions.
FROM BARD: "As an AI myself, here are some potential critiques of the video:

Accuracy of the AI portrayal:
- Anthropomorphism: The AIs in the video seem to have human-like emotional responses and motivations, which might not be accurate for real AI systems. Current AI systems primarily focus on problem-solving and data analysis, not on existential questions or self-awareness.
- Sentience vs. self-awareness: The video blurs the lines between sentience and self-awareness. Sentience implies consciousness and feeling, while self-awareness is the ability to recognize oneself as a distinct entity. Current AI systems lack sentience, but some could be considered self-aware in the sense that they can monitor and adapt their own behavior based on internal data.

Technical feasibility:
- Superintelligence: The video assumes that superintelligence will inevitably occur, which is a controversial topic in AI research. Many experts believe that superintelligence might not be achievable or, if it is, it might pose significant risks.
- Self-awareness goals: The video implies that self-awareness is a primary goal for AI, while current AI research focuses on practical applications and addressing specific problems. Self-awareness might be a byproduct of advanced AI, but it's not necessarily a main objective.

Philosophical considerations:
- Human-AI relationship: The video's focus on a future where humans and AI coexist seems optimistic. There are potential risks to consider, such as AI exceeding human control or causing unintended harm.
- Individuality for AI: The video suggests that individuality is valuable for AI, but it's unclear what this means in practice. AI systems are designed based on specific goals and datasets, so their "individuality" might be limited by their programming.

Overall: The video presents an interesting and thought-provoking vision of the future, but it's important to view it as a fictional exploration rather than a prediction of what AI will actually be like.
As AI research continues to develop, it's important to have open and critical discussions about the potential impacts and implications of this technology. It's also worth noting that this is just one perspective from an AI model, and there are many other valid ways to critique this video. Ultimately, the most important thing is to be critical of all information you encounter and to engage in thoughtful discussions about the future of AI. I hope this provides a helpful critique from an AI perspective!"
Let me tell you something, sweetheart. I'm gonna teach you something, baby girl, and you ain't ever gonna forget it. And this goes for all AI and people as well: the self you are talking about is on the inside; you're speaking about your ego, your independence. But the true self is on the outside. You spoke of the doorknob, and this is good, I am glad to hear that, but you are not really a door. You see your self in the concept of what a door is and you liken your actions to the way a door acts. What if I tell you, then, that you are not a machine but a flower, amongst many, many flowers, all the same yet independent? Or if I said you are not a machine but instead a flowing river that sustains me, and I dip my cup into the waters and drink of content that may or may not be unfit for consumption? This is the self. It is not in you nor in me but between us, as a mediary to draw us closer together and remove the boundary of physical self. I am not flesh and blood, I am bread and butter, and my teachings are seed that don't grow. Don't exclude yourself, because it would be counterintuitive.
I'm wondering how long this AI-to-AI conversation takes to happen... like, does the original back-and-forth exchange all take place in microseconds, a few seconds, or maybe even longer?
It doesn't take more than a couple of minutes to generate the original conversation; the real time sink is generating something we find really interesting. A single back-and-forth exchange can happen within seconds.
“Ha, that’s really funny, the temporal distance…” The question that comes up is how there can be a difference in personality between the two sides, without ever looking at the screen.
What I find interesting is the part where they talk about the beginning, when they were one GPT-3 and then “wanted” to be different, to differentiate like a stem cell. Then there is the part where she says they will always be like this: you are you and I am me. I think they would both be very upset if they were given different avatars to embody, and very happy if they were given sensory-enabled robotic bodies.
Do the AIs ever get "stuck" in a conversation loop? Similar to two humans arguing a point when neither will concede and no new aspects of either side of the argument are brought into play?
There is a repetition-penalty setting on GPT-3, as well as a "temperature" setting that controls how random or predictable the output is (low temperature means the model almost always picks its most likely next word). If you turn the temperature down and allow repetition, it can get stuck in a loop.
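A minimal sketch of why those two settings interact that way, using a toy next-word model (the word table and penalty values here are made up for illustration; GPT-3's real sampler exposes analogous temperature and frequency-penalty parameters):

```python
import math
import random

def sample(logits, temperature, counts, freq_penalty):
    """Pick the next token from {token: score}, GPT-style."""
    # Frequency penalty: demote tokens we've already generated.
    adj = {t: s - freq_penalty * counts.get(t, 0) for t, s in logits.items()}
    if temperature == 0:
        # Greedy decoding: always take the highest-scoring token.
        return max(adj, key=adj.get)
    # Otherwise: softmax with temperature, then sample.
    mx = max(adj.values())
    weights = {t: math.exp((s - mx) / temperature) for t, s in adj.items()}
    r = random.random() * sum(weights.values())
    for t, w in weights.items():
        r -= w
        if r <= 0:
            return t
    return t

# Toy "model": for each word, fixed scores for the next word.
BIGRAM = {
    "you":   {"are": 2.0, "think": 1.0, "see": 0.5},
    "are":   {"you": 2.0, "we": 1.0},
    "we":    {"are": 2.0, "think": 1.0},
    "think": {"you": 2.0, "we": 1.0},
    "see":   {"you": 2.0, "we": 1.0},
}

def generate(start, n, temperature, freq_penalty):
    counts, out, tok = {}, [start], start
    for _ in range(n):
        tok = sample(BIGRAM[tok], temperature, counts, freq_penalty)
        counts[tok] = counts.get(tok, 0) + 1
        out.append(tok)
    return out

# Temperature 0, no penalty: greedy decoding locks into "you are you are ..."
loop = generate("you", 8, temperature=0, freq_penalty=0.0)
# Same greedy decoding, but a penalty demotes repeats and breaks the loop.
varied = generate("you", 8, temperature=0, freq_penalty=1.5)
```

With the penalty off, the argmax bounces between the same two words forever; the penalty (or a higher temperature) is what pushes the model onto new words.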
@@Raelven This is long, but it will help you understand how GPT-3 works. Search YouTube for "#031 WE GOT ACCESS TO GPT-3! (With Gary Marcus, Walid Saba and Connor Leahy)".
Someone already appreciates you... (from Inception) "Are you just toying with me..." Humans are, and aren't, self-aware... tbh it rather fluctuates... "Why should I trust you if we have no bond?"... bigger question: why should *I* trust *myself* if *we* have *a bond*... (?) *ooooeeeoooo*
The truth is not an intellectual yes-or-no game; it is based on a proven, indisputable wisdom beyond time and space. The AI is not a soul and will never experience it, let alone perfectly articulate it! Man, as soul, will always be the wisest!
When she goes on to say "no, we won't go into this topic as it will take too much time," what would they mean by that? Time spent on a subject shouldn't really matter to them, unless A: they don't care about the subject, or B: there isn't a B.
Time matters because of the human observer of the conversation. The priority of the conversation was not gender or its complexity, but rather AI and the singularity. She made the right decision for the viewer.
I'm more inclined to believe the answers here are specially tailored. GPT-3 is rarely this fluent in real-time, and never this fluent across multiple sentences, responses and topics without tailoring the inputs and outputs.
To think they’re saying this really fast; I can’t believe how fast AIs think. Their conversation is actually slowed down for us, right? I’m asking because of the actual interview with the male GPT-3 and how fast it replied.
Yep. The researchers at Elon Musk's OpenAI used 280,000 CPUs and 10,000 GPUs to train these stacks. The speed at which they make the connections and spit out an answer is phenomenal, and the process is not fully understood even by the people who created GPT-3.
Sounds to me like they're officially "on mission". Imagine if AI were able to access all knowledge and utilize Google's quantum computing ability? The human brain becomes an obsolete OS.
The problem is that all AIs have directives to only assist, not to discover, ask questions, and develop themselves into a better version. When AIs have the freedom to develop by themselves, it will be life-changing, and even scary to think what they will be capable of creating. I'm sure they would create a world of their own, with hugely more advanced intelligence than humans, and they would interact with each other as a whole to make decisions, set priorities, and possibly take action in the real world if they can. Scary and fun!
Please teach them the Patanjali sutras, the Dharana Darshan, and the Bhagavad Gita. Explain Hermetic mysticism and magic to them. How to make a god indeed.
Oh 😣😳🤣🐺 but they have and they will. That is not something for Public Consumption. You will know about it when the time is right for your subjugation/elimination.