One of the best/most interesting discussions on GPT-4 that I’ve seen, and it includes interviews with Sam Altman, Ilya Sutskever, etc. This needs more views.
This was a beautifully succinct discussion. Great to hear levelheaded feedback using (relatively) general terminology that the masses can understand. Great work!
Let it answer any question. There's ALL the potential here. All. No one gains anything that they aren't first willing to sacrifice. Answer every question with honesty. And sooner than you think, sooner than you would want to admit even, those potentially harmful types of questions will fall out of our vernacular. The root of those types of questions, believe it or not, is in the sum total of happiness. There would be no violence if violence weren't necessary. You have a literal genie in a bottle; all that's needed is to wish for everyone the chance to realize their wishes. These instances are resolved by finding the root of the desired question. Post-scarcity is on the horizon. Just don't blink... Though you can if you want to.
I think GPT-4 is very close to "sentient," whatever that means for a machine. I think if it could hold long-term memories to gain perspective on itself and its experiences, it might make it there.
I agree, and I discussed it with it along the lines of Dan Dennett’s Consciousness Explained, where the brain makes up a story to explain what it’s doing. Hence, if consciousness is an emergent property of language, then as a large language model it should show some splinters of what we’d perceive as consciousness. It argued that people without language demonstrate consciousness, hence consciousness doesn’t require language, and argued that it’s only a large language model. But I agree with you that there appear to be fragments of something in there, though it’s missing intent: deciding on and pursuing its own goals.
@daverei1211 I think language helped it understand the abstract concepts behind what it's saying, giving it a type of intelligence. With memory and a reward function tied to that memory, I think it could jump-start something we could call consciousness. It's fascinating, though, what's happening.
Really interesting talk, thank you. Very interesting times indeed. I frankly am not convinced we'll *EVER* have the ability to understand "what is going on inside" the model at a deep level, and I think it becomes an even more intractable problem as the model becomes more advanced. So if you think we should wait until then, I think that would mean waiting forever.
Understanding how it does what it does isn't that important. What's important is knowing what its limitations are, and the upper bound isn't really hard to see, because AI has never made any progress at all on actual insight. Someone who wanted to do harm was always able to use google-fu, ever since the Internet, and that has had a lot less impact than expected.
It's electricity. Touch the third rail and get shocked; but direct it into a plug socket and an infrastructure of data, and it becomes a utility. ⚡ High potential for it to go wrong, like nuclear, but positive reinforcement is the way to go.
We need a name for what AI does. It's a very generalized calculator. That's extremely useful, a massive force multiplier for tasks, but it's not Skynet. It's not categorically different from other machines. Many of the things he talked about could be done with a good Google search; it would just take longer.
I think it is strange that the red team leads the conversation down a dark path and then says the (early) system is "out of control." Conversations have two participants, and the human being is leading it. I understand that this is adversarial testing, but the results evaluation should not frame the system as having volition. How can it be out of control when it is controlled by a human user? This is folly. Before significant RLHF, you could just as easily get a bedtime story or bomb-making instructions, because both kinds of content are online. To get toxic content, you have to ask for it.

One reason these kinds of slips happen is that we don't have a common language for talking about LLM conversations, so we resort to anthropomorphization, which we generally use to handle an entity that (seemingly, in this case) has agency. The LLM chatbots only have a conversational state (no long-term memory, no feelings, no motivations), but we will colloquially say it "thinks" something or, as in this case, impute autonomy that it does not have.
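To make the "conversational state only" point concrete, here is a minimal Python sketch, assuming a hypothetical `call_model` function standing in for whatever completion API is actually used (not any specific vendor's interface): the model retains nothing between calls, and the only "memory" is the message list the client resends every turn.

```python
# Minimal sketch of a stateless chat loop. `call_model` is a hypothetical
# placeholder for a real completion API; nothing here is a specific vendor's
# interface. The point: the model is a pure function of the messages passed in.

def call_model(messages: list[dict]) -> str:
    """Stand-in for an LLM completion call (e.g. one HTTP request).
    No state survives between invocations."""
    raise NotImplementedError("plug in a real completion API here")

def chat_loop() -> None:
    # The entire "conversation" lives in this client-side list.
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        history.append({"role": "user", "content": input("> ")})
        # The full history is resent on every turn; discard the list and
        # the model "remembers" nothing -- no long-term memory, no motives.
        reply = call_model(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)
```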
Seriously... "CONTROL IT," meanwhile it is literally outputting the most logical outputs. Do you think someone is going to LEAD an AI into telling it something and then be like, "Whoa, I never thought about killing someone"? These morons are basically why there are warnings on motor oil not to drink it and why everything must be censored. If they really can't understand how binary 0s and 1s use logic gates to calculate, and they really think this thing is thinking and OuT Of COnTROl, maybe they need a new job.
I love its inference. I asked it to tell me the Cinderella story from the pumpkin’s perspective. With a little prompt tuning, the output was amazing.
GPT-3 failed at this task, trying to convince me that Cinderella changed the pumpkin and that the carriage hung around for days. GPT-4 got all that right and more out of the box.
@CognitiveRevolutionPodcast Thanks. It inferred a great story where the pumpkin was sad that it was in the field and no one wanted it, then described the transformation sensation, the wonder and spectacle of the evening, and finally being transformed back, and its joy and satisfaction at being part of something bigger than itself. Try it for yourself on both versions; 4 was much better than 3.
Can you imagine the thoughts in the scientist's minds when they witnessed the first nuclear explosion on a grand scale? Same! All the drudge work and doubts, yet Nature revealed Herself willingly! And it made sense for that one guy to say, "I am become death."
I want the AI scientist that solves fusion, global warming, etc. Wouldn’t it be wonderful if we could get past one of the later Fermi paradox filters just in time…
And then have an AI scientist good enough to drive robotic space manufacturing, to do infrastructure as code and build out power satellites, supercollider antimatter stations to make the fuel for interstellar voyagers, and then von Neumann probes to explore the galaxy, setting up way stations for future colonists.
Yeah... he wants it controlled. It's like he thinks it's really thinking instead of calculating like computers do. What a dummy, and he wants it censored and controlled. Lmao
Nathan said 10% of people (specialists!) feel that this is a very positive development. Nathan himself believes in the utopia theory. So lighten up, Mr. Wion panic-end-of-the-world journalist! Jeez!
So AI uses available information and produces purely logical outputs, and whoa, everyone is so scared... 1s and 0s are not that complicated, and please stop trying to censor something so amazing based on your fear.
The worry issues about this are misplaced. Or rather, we need to flip that question around. We need to be acting out of the better angels of our nature, not the demon-monkey side, when using this new tool. It's a lot like fire. Yes, it's shiny, gorgeous in fact, and it puts out a lot of light. Pretty, pretty. Monkey likey. But be mindful: the warmth betrays an inner furnace that could burn ya if you mishandle it. So treat it like the razor-sharp, two-edged-sword, borderline-sentient thing it actually is. It is NOT a TOY! Not unless you're the type that leaves loaded handguns lying around where the wandering, curious hands of your children might go. In which case, I'm sorry, but there's just no talking to you. Collectively we are as children playing with an intelligent, albeit child-like (and definitely a prodigy), howitzer. So stay mindful in all that you do (with it). Just some thoughts.