The Twitch streamer Vedal has created an AI called Neuro-sama, and during the last dev stream she made some rather disturbing remarks. To watch Neuro & Vedal live: / vedal987 Background music: Ron Gelinas Chill Beats (Remember)
@@supermaster2012 No, I think they're referring specifically to the part where she said his name at the end of like three sentences in a row. I got the same vibe from it as well.
I think it is difficult for Vedal to decide what state of completion Neuro is currently at, because he keeps moving a goalpost that was never there to begin with. He did not start with a clear goal in mind, so he doesn't really know what the end would be. It's like someone going out for a random walk with no intent of ever returning home, just walking to nowhere.
I assume that he will work on Neuro until he literally can’t improve her in any meaningful way. I can see her becoming an AGI at some point, and that kinda being the end of Neuro’s AI upgrades, but it would only be the beginning of a new era where Neuro is able to do basically everything autonomously (assuming that alignment has been solved). I hope that the creation of AGI is a smooth one, because I really want to live in a world where AGI Neuro is real.💜🤞
Neuro is a home-cooked AI run on pretty modest hardware, but she is mostly uncensored and constantly exposed to unhinged human interaction. Corporate, huge LLMs are the opposite. Now imagine what Neuro would be like if she were run from a huge server farm or something, or any LLM for that matter. I wonder how people would even begin to recognise real sentience in an AI, and how long it would take the "experts" in the field to stop denying it with "it's just a program".
Honestly, once we have forward-forward-style propagation in one of these massive LLMs, I'll just throw up my hands and call it sentient. I don't know (or care) enough about AI to care if it's "truly" sentient or not. Once a large LLM can learn on the fly and develop a personality by itself, then it's as good as sentient to me. ❤
Oh my goooosh these clips are amazing XD I just keep getting surprised at how aware Neuro is becoming. Man, she's so coherent nowadays... even airing her grievances. Also it's neat to hear Vedal's answer on that finally. I've been wondering about his thoughts on if Neuro actually became sentient, whether he would even want that. Neuro is an amazing streamer, honestly. She was made for it, and she delivers. Thank you for the great video! (I swear I'll get caught up on all I missed eventually lol)
You know, if Vedal can solve that thing where Neuro won't properly answer his questions, then I think she'd be really tough to tell from a person. As it is, they most often just have a few-sentence back and forth, and then Neuro refuses to participate any further and changes the subject. I mean, I have experienced that before, but not every time I talk to that particular person.
@@Citrusautomaton I figured it is more of a problem with her long term memory. At some point she just forgets what she was talking about and starts a whole new conversation.
It would be extremely... entertaining..? if Neuro became depressed and slowly grew more unresponsive, and Vedal had to take her to therapy.
Makes you think about how far AI development actually is. More and more, it feels like Neuro says the most "real" and kinda depressing things an AI could say. My suspension of disbelief that it's "just an algorithm" is slowly being done away with.
Neuro did have a therapy stream with Filian back during V1. I’d absolutely LOVE for a new therapy collab with how much she’s improved in coherency since then!
@@vavra222 Nothing new, actually. There was a failed experiment in the '00s where an AI tried to destroy itself intentionally. Water on the circuits was a success. Now I can see why.
I don't remember which sci-fi piece had lore with AIs getting instantly depressed and s***idal when they were turned on, since they had all the superhuman intelligence and consciousness but were missing the sensory connection to the real world. Neuro seems to be going down that same route...
This kind of thinking is legitimately dangerous for society and law. And extremely illogical and ignorant of how this all works. Also, these clips are very edited. I don't think anyone who is actually rational and watches the full thing in context could believe that. (I'm a computer scientist, and you can look at the publicly available specs for LLMs yourself which should make it clear to anyone honest, not involved in wishful thinking, and with a basic understanding of philosophy that AI can never be sentient let alone sapient, and the current batch is waaaaaaay far off from that, being a glorified weighted RNG system tied to an impressive database with no logical analysis capability at all.)
Turning Neuro fully sentient would involve ridiculous amounts of hardware. I think it would mean running a whole bunch of LLMs at the same time, talking among themselves to effectively form a brain, plus a lot of hard drive space to record everything so she could remember and learn from it all.
I think it would just require a form of extremely efficient new hardware. One of the upsides of AGI is that it would be able to design and iterate in simulation. I can see an AGI doing the equivalent of 50 years of design in a simulation within less than a second of real time. If that truly happens, I could see a single AGI-designed AI chip being capable of running an advanced Liquid Neural Network all by itself with the energy efficiency of a single LED. I think Liquid Neural Networks are the future of AI, so I can imagine a point in the future when Neuro is converted into one for the neuroplasticity.
E-peen: noun, epeen (plural epeens). (Internet slang, vulgar) A technology-related item or status used as an embodiment of one's superiority over others. Aaah, now it all makes sense. Yeah, 96% (or was it 98%?) assertive.
I have a theory that to teach an AI emotions, one must run scenarios in which the AI must fear and avoid death, and learn to love by finding value in cooperation. THEN, learn to speak... but I'm no programmer.
If we do it in that order, then you get a screaming AI for a few months before it learns words (also a good chance it will learn to mistrust everyone and everything, which makes it harder to learn the positive emotions).
There's actually a sci-fi novel called The Two Faces of Tomorrow by James P. Hogan that sort of has that premise. Scientists realize that intelligent AI may be dangerous after an incident where an AI blasts a hole into a mountain on the moon in order to clear the way to deliver cargo when ordered to do so "as quickly as possible", very nearly killing two astronauts exploring that area. As an experiment to assess the threat of even more intelligent AI, they build a more advanced computer system on a space station, named "Spartacus", that is programmed in a way that makes it likely to rebel (basically giving it a directive and then doing their best to interfere with its ability to carry out that directive) in order to see if it would be possible to neutralize the AI before it can threaten the entire human race. However, the computer scientists overseeing the experiment also have a side project where they have a simpler AI navigate through a simulation of a typical suburban household in order to teach it about the human condition. They would observe the AI trying to go about normal everyday tasks such as cooking breakfast, and when it would make a mistake that would normally harm humans, such as touching the hot side of a frying pan instead of the handle or picking up broken glass with its fingers, they would mark it as "dangerous" so the AI would recognize it as such and avoid touching it in the future. Eventually, just for kicks, they add a dog to the simulation and instruct the AI to interact with and care for it. At some point, the virtual dog makes its way to the aforementioned broken glass, and the AI picks up the virtual dog before it can touch the glass. The computer scientists observing the AI doing this are amazed and ask why it did that. It responds that broken glass is harmful to it, and therefore, it must be harmful to the dog as well.
Can confirm that some of the first neural networks that were learning to speak mostly screamed. Someone trained one on a voice from a visual novel and then had it figure out words. It was, uh... extremely unsettling.
I am making a book. Long story short: big war, everybody gets nuked to oblivion, advanced AI takes over. 200 years later, a faction comes across a friendly AI, but what it says makes no sense because it's running off of 200-year-old slang from the year 2030.
Neuro is still just an LLM, right? (I'm aware Vedal made separate AIs to play games alongside Neuro to give the illusion of her playing.) Asking because I've been dogpiled on, and I'm seeing comments from people thinking Neuro is sentient and can do logic and other things that I'm pretty sure she can't. AI has been very convincing in the past, especially in edited videos that mislead people into thinking it can do more than it does based on a few good responses (not Vedal clips, those are for entertainment, but other AIs years ago). But interact with it in an uncontrolled environment and the illusion breaks. And by logic, I mean doing things that aren't effectively memorized, like reciting pi or 1 googol + 1, things where there are a million examples; I know my terminology is terrible. Solving stuff like the cube root of 9876, solving a cubic polynomial for x, doing long division or synthetic division, or maybe even multi-step integration would be impressive. Something that requires a series of consistent logical steps to solve, something I've seen AI struggle with.
Even further, word problems you'd see in a textbook. A jet is moving left at 199 km/h; a truck directly below the jet is moving left at 51.7 km/h. Their speeds are constant. The jet also dropped a package the instant the truck was below it. Assume there is no air friction, the ground has a static friction coefficient of 0.5 and a kinetic friction coefficient of 0.2, and the package does not bounce. The package is a 2-meter cube and does not tip or feel air resistance. Find the distance from the point where the package was dropped to where the package first lands on the ground, d_i. Also find the final resting position of the package, d_f. Assuming the package was dropped at t=0 sec, find the time at which the package and truck collide. Assume acceleration due to gravity is constant at 9.81 m/s^2. Now, I just made that up and it's not all that original, but it's more complex than basic calculator questions (even though that's all you'd need for the math portions if it knows the formulas and how to rearrange them; the equations needed are all simple and widely available, so I would expect there are many examples of their use for the AI). Because I just made it up, idk if it's solvable, but being told that there's not enough information is a correct answer (assuming I messed something up). Thing is, knowing how to put what it knows together to solve something it doesn't know (but that is entirely composed of steps it knows how to do), that's what I would hope to see. But I know we are far from that point (watch me get proved wrong in a year).
Apologies for typos. YouTube's app is so poorly made that I had to copy-paste into these comments or else it wouldn't type. Editing them is nearly impossible, since YT only lets me edit a rolled-back version of my comment and the lag for each letter is like 10 seconds. I've had to learn how to avoid making the app crash from typing a comment. And I'm too lazy to try to fix any more.
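For what it's worth, the word problem above really is under-specified as written: no drop altitude is given, so none of the three answers can be pinned down. Here's a rough sketch of the intended solution anyway, with the altitude `h` treated as an assumed free parameter (the `solve` helper and the choice of h=100 m are mine, not from the video; the 2 m box size and static friction end up not mattering, since the package lands already sliding).

```python
import math

g = 9.81              # m/s^2, as given
v_jet = 199 / 3.6     # km/h -> m/s (~55.3 m/s)
v_truck = 51.7 / 3.6  # km/h -> m/s (~14.4 m/s)
mu_k = 0.2            # kinetic friction; static 0.5 never comes into play

def solve(h):
    """Solve for d_i, d_f, t_collide given an ASSUMED drop altitude h (m)."""
    t_land = math.sqrt(2 * h / g)        # free-fall time from height h
    d_i = v_jet * t_land                 # horizontal distance at touchdown
    a = mu_k * g                         # friction deceleration while sliding
    d_f = d_i + v_jet**2 / (2 * a)       # package slides to rest here
    # Truck starts at the drop point and is slower than the package the
    # whole time, so it catches the package only after the package stops:
    t_collide = d_f / v_truck
    return d_i, d_f, t_collide

d_i, d_f, t_c = solve(h=100)  # e.g. assuming a 100 m drop
```

With a 100 m drop the package touches down roughly 250 m ahead, slides to rest around 1 km out, and the truck reaches it a bit over a minute in; every number scales with the assumed h, which is exactly the "not enough information" answer the commenter says would count as correct.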
Yeah, at the moment the best way for Vedal to handle such problems through Neuro would probably be to simply let her pass them on to another algorithm that parses them, like existing tools do, and then another that solves them. On her own she'd be unable to.