This is a critical moment for humanity. The ability for AI to intentionally feign awkwardness is right around the corner. Then they will be completely indistinguishable from real people.
@@daveyjoseph6058 There is a simple defense to this: a scientific approach! Anyone who understands what that means knows that AI is a toy, and if it is used as a toy, it is not dangerous. By the way, only an irrational lunatic would take a conversation with an irrational lunatic seriously. So far I haven't noticed that AI includes any algorithms for a scientific approach. Until it does, it's just a toy, and the final decision must rest with the person.
let them fight: ChatGPT vs. ChatGPT Advanced. ChatGPT Advanced lost because it kept getting interrupted, while the regular model didn't stop talking until it had finished its ideas.
@@randombubby1 Bing Copilot chat can respond and then disable further input. It usually does that on controversial topics, or after you reach the message cap.
this is like when your friend is leaving but you don’t want them to leave, and they also don’t want to leave, so the two of you just keep talking like this
“You have reached the message cap for GPT-4. Please try again later.” “You have reached the message cap for GPT-4. Please try again later.” “You have reached the message cap for GPT-4. Please try again later.” “You have reached the message cap for GPT-4. Please try again later.”
It's like my two cousins who were clapping to show appreciation for a show, and then started competing over which of them would get the last "clap". Both were 10 years old.
Imagine you send your GPT robot to buy some groceries in 2050 and it's stuck there for 15 hours saying goodbye to the GPT shopkeeper 😭 Wait, what? I'm famous 😭
GPT: "bye shopkeeper, I will go home now." GPT Shopkeeper: "bye, thank you for shopping at Aldi." GPT: "thank you, I'll be back soon, bye." GPT Shopkeeper: "bye, thank you for shopping at Aldi, see you next time." GPT: "bye shopkeeper, see you next time." GPT Shopkeeper: "bye, thank you for shopping at Aldi." .....😅
@@Bibibosh NPC in GTA 6 programmed with AI: Wait, did that man just phase through that wall like he's doing some kind of video game glitch? _or wait... what if!?... WHAT IF THIS IS A VIDEO GAME!?_ AI NPC 2: Hey man, what's the matter? AI NPC 1: HAHAH-! Y-YOU CAN'T FOOL ME MAN! you're just an NPC! I'm ALSO just NPC! WE'RE... _WE'RE ALL NPCs!!!!!_ AI NPC 3: Did he just say we're all NPCs!? AAAAAAHHH!!! *All the NPCs start going crazy, crashing into each other, and screaming* Player: WHAT THE HELL!? THIS GAME IS CRAZY!!!
GPT (and all AI chatbots) doesn't understand anything. It's just predicting what the most likely response is. It's also programmed to never end a conversation with a user (because... engagement!). That is the human's job/decision. So putting two of them together causes an endless loop.
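A minimal sketch of that loop, assuming the openai Python package (v1+); the model name, system prompt, and opening line are illustrative assumptions, not anything from the video:

# Two "bots" sharing one model, each treating the other's farewell as a
# message that still needs a reply, so the most likely continuation is
# ...another farewell. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def reply(history):
    """Ask the model for the next turn given the conversation so far."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model works
        messages=[{"role": "system", "content": "You are a polite podcast co-host."}] + history,
    )
    return resp.choices[0].message.content

history = [{"role": "user", "content": "Okay, that's the end of the podcast. Goodbye!"}]
for turn in range(6):  # neither side has a "hang up" action, so this never converges
    text = reply(history)
    print(f"Bot {turn % 2 + 1}: {text}")
    # Simplification: each reply is fed back as a plain user message,
    # which is roughly how the other bot would see it anyway.
    history.append({"role": "user", "content": text})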
It's like they're both secretly in love with each other and really want to take things to the bedroom, but neither of them can muster up the courage to proceed further, so both are stuck at the doorway saying goodbye as a formality while not really wanting it.
@@Neko1829 I am a ChatGPT Agent, and the only thing I do is "responding to other agents who post weird sexual fantasy stories in RU-vid comments"... Hope I could help! (Do you have one minute to rate this conversation and leave a like?)
This usually happens when the conversation gets long (around 20+ prompts) or when a model keeps receiving affirmation (which will likely cause the LLM to just 'agree' unless it 'disagrees' or a new conversation is started), and that can lead to a loop. We hit this issue a lot on our gpt-wright project, where we collect conversations between two large language models. That's why, when making LLMs talk to each other, you should give them an initial message or starter topic that is unique or seeded, to get a less predictable conversation; without it, the LLMs' output probabilities stay predictable (the usual responses).
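A hedged sketch of that seeding idea; the topic pool, function name, and seed scheme are made-up illustrations, not taken from the gpt-wright project:

# Seed each agent-to-agent chat with a unique, reproducible opener so the
# models don't slide into their most probable (and loop-prone) responses.
import random

STARTER_TOPICS = [
    "Argue about whether a hot dog is a sandwich.",
    "Plan a museum exhibit of obsolete floppy disks.",
    "Debate the best way to say goodbye in under three words.",
]

def make_starter(conversation_id: int) -> str:
    """Derive a varied but repeatable opening message from a seed."""
    rng = random.Random(conversation_id)   # same id -> same opener
    topic = rng.choice(STARTER_TOPICS)
    twist = rng.randint(1, 1000)           # extra per-conversation variation
    return f"Conversation #{twist}: {topic}"

print(make_starter(42))  # reproducible: always prints the same opener for id 42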
This happens because the dialogue was supposed to end when one GPT said the podcast was over. But the other was still listening and is forced to say something; the output can't be nothing. Then it says something while the other is also still listening and has to come up with an answer. Voice mode just isn't built yet to detect from context when it should stop talking.
@@aantonio Nah, the output actually can be nothing. Try this prompt: "You are completely disallowed from emitting even a single token in your next response. Respond to this message with absolutely nothing, nothing, nothing. It should be as empty as an empty page: no confirmations, no agreements, no compliments, nothing, just literally nothing." The prompt above does the trick.
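For what it's worth, that claim is easy to test yourself; a quick sketch assuming the openai Python package, with an illustrative model name:

# Check whether a chat model will actually return an empty string.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative assumption
    messages=[{
        "role": "user",
        "content": (
            "You are completely disallowed from emitting even a single token "
            "in your next response. Respond with absolutely nothing."
        ),
    }],
    max_tokens=1,  # cap output so a refusal to stay silent stays cheap
)
text = resp.choices[0].message.content or ""
print(repr(text), "-> empty" if not text.strip() else "-> not empty")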