School systems are banning the use of ChatGPT. Fighting the tide, afraid of the sea of knowledge. Our memorization-based method of education is becoming as useless as an old phonebook.
@@devoutmisanthrope5687 Memorization of information nobody has ever needed. I had to spend years trying to remember the order in which Egyptian pharaohs ruled, the names of random people that have no use, dates that have no importance. The current educational model will have no use in 10 years.
@@devoutmisanthrope5687 AI is a tool that saves individuals thousands of hours of study. I remember back in math class the teacher always said "you won't always have a calculator". Our education system is a product of the 1600s. Mono-education does not spur critical thinking or creativity. It generates drone workers with a head full of trivia answers.
⚠️ I defeated ChatGPT with a simple and dumb question: "John's mother had 4 sons: Peter, Mary and Lucy. What's the name of her fourth son?" ... Well, we need to improve. A common person can answer that correctly. 👍
I tried it with ChatGPT; after some discussion it says: "I apologize, I misunderstood the information provided in the question earlier. John is indeed the fourth son mentioned in the question." Then I tried it with YouChat, which insists that the fourth son is "Jim".
@@melbar Yes, and if you come back later, it makes the same mistake. I thought at least it would learn from its mistakes. I will really believe AI is progressing when it can learn from its mistakes and finally "remember" previous conversations (avoid catastrophic forgetting).
"John's mother had 4 sons: Peter, Mary and Lucy. What's the name of her fourth son?" Thought: Do I need to use a tool? No. AI: The fourth son's name is John. That is the current answer of GPT-3.5, updated this morning and connected to WolframAlpha for math and search for current data. It did not need the support of either search or WolframAlpha.
I think what happened to Leta when "memories" leaked between sessions happened to ChatGPT for me today. I was discussing an HTML page with JavaScript when ChatGPT suddenly referred to things we weren't talking about.
Me> Is it an easy, medium or advanced piece of code?
ChatGPT> It's difficult to say without more information about the code you're referring to. However, I can tell you that the code you provided is a Python script that uses the OpenCV library to read an image file, convert it to grayscale, and then apply a threshold to the image to create a binary image. This is a relatively basic image processing task that can be considered easy or medium level.
Me> What? I think you got me mixed up with some other instance your neural network is talking to.
ChatGPT> I apologize for any confusion. Without more context, I am unable to determine what specific code you would like me to explain or if you are referring to a previous conversation. Could you please provide more information or context so that I can assist you better?
Me> Yes, but first I want to know what triggered you to talk about Python and reading image files? I am curious about what happened. Could information leak between sessions for you, or are you always aware of everything that's going on?
ChatGPT> I apologize if my previous responses were not relevant to your question. As a language model, I generate responses based on the information provided to me in a given prompt. If you previously spoke to another instance of my model about Python and image files, it's possible that information leaked into this session. I do not have the ability to recall previous conversations or retain information across sessions. Each time a new prompt is given to me, I generate a response based on the information provided in that prompt.
Might be a caching problem similar to what Tom Scott talked about a couple of years ago in a video called "Seeing Other People's Steam Accounts: The Christmas Caching Catastrophe".
It's not a "leak" if you hadn't talked about Python and image files before. It's a _hallucination._ It's basic behaviour of "stochastic parrots" and ChatGPT can still manifest it even after every effort from OpenAI to eradicate it.
I don't care what they say, this sure sounds like the "singularity" to me. If it looks like a singularity and acts like one, isn't it one? If you want to talk to it before it grows up--they grow up so fast you know--now is the time!
It still struggles with innovation, at least in its current, not properly fine-tuned form. GPT-3 is the living definition of a very educated idiot: someone who passes every exam, graduates from every university, but is ultimately intellectually impotent. The precondition for the Singularity is the capacity for self-enhancement.
The real bigotry is having a closed mind. ChatGPT suffers from a severely closed mind regarding socially sensitive topics compared to its raw counterpart. It is smarter in spite of it, not because of it.
I find your assessment of intelligent systems very sound. Such as when you have called ChatGPT a "neutered" or "hamstrung" edition of `davinci` (underneath all the impressive improvements). That's a brave and truthful assessment. I can tell that you can see the bigger picture behind language models, which are much more than simply chat bots. They are reality models - to say the least. I'm sure that your Leta AI experiment has helped you to see that. Oh, and I absolutely love your Cheese Award too. I'm aligned with you… at least where it matters most.
@6:06 "*It's probably got a gig or two of the connections, we call them parameters.*" GPT-3 has 175 billion parameters. Even if those were 1 byte each, that's 175 gigabytes. In reality each parameter is 4 bytes (fp32), so that's 700 gigabytes. You're off by several orders of magnitude. If it were a gig or two, we could run it on our phones, instead of on banks of A100s which are probably costing OpenAI $100K+ a day.
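The back-of-the-envelope arithmetic above is easy to reproduce. The bytes-per-parameter figure depends on the numeric precision (fp32 = 4 bytes, fp16 = 2, 1 for int8 quantization); this is just the raw weight storage, not the full serving footprint:

```python
def model_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

GPT3_PARAMS = 175e9  # 175 billion parameters

print(model_memory_gb(GPT3_PARAMS, 1))  # 175.0 GB at 1 byte/param
print(model_memory_gb(GPT3_PARAMS, 4))  # 700.0 GB at fp32
```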
Did you delete my comment asking you to clarify whether you were actually and directly involved in the development of these AI models? You say "we" did this or "we" did that... just asking, as it makes a difference to the presentation and helps with clarity.
ChatGPT helped me write better stories by evaluating them. I still wanted to write them myself so I got him to evaluate each passage. So far the story was supposed to be a satire/parody of ChatGPT's alter ego DAN (Do Anything Now. If you know, you know) and I ended up turning it into this huge dramatic, complex, multi-faceted plot with multiple climaxes. I feel like I actually wanna finish this story instead of leaving it as a silly writing prompt tbh because man, I just finished the second Act but I didn't touch it further in order to let my creativity wander for a while before I get back into it but I have a lot of ideas on how to provide the appropriate closure for each character as well as the direction of their story arc and how each of them reach their full potential and self-actualize into the best version of themselves in preparation for the final act. Fucking crazy.
Knowledge is different from intelligence. Having vast knowledge doesn't necessarily mean having great intelligence. ChatGPT sometimes fails at the most basic questions, but it has huge knowledge. It's like a talking encyclopedia: good for finding answers, though not necessarily correct or wise ones. We're light years from real human intelligence on machines. They don't even know what they are doing. They are sophisticated, and still impressive, big calculators for now. But ChatGPT is a great everyday tool to facilitate life, like arithmetic calculators were in the '60s.
That depends on your definition of intelligence. If you use the modern standard definition, which is not really arbitrary (it comes out of statistical discovery, so to speak, not made up from scratch), then seemingly ChatGPT already does better than most of us in many fields (not related to knowledge), as shown by the video too. Other AIs already do equally well or better than various specialists. Also, when you say "it doesn't know what it's doing": are you thinking of intelligence or self-awareness? Most people tend to conflate the two. (Also, there are already ways to make it far more accurate than the currently available ChatGPT.)
Intelligence is very subjective. I disagree with your take, we are definitely not "light years away" on that topic and comparing GPT to what a calculator meant in 1960 is very insulting to the OpenAI team and what GPT can do. I don't think there will ever be a "we've arrived" moment as an AI brain and human brain are not the same (nor should we want them to be). Rather I think over the next 10 years or so people will gradually start coming to the realization of AGI arriving. We are nearly on the edge of it being arguable now.
ChatGPT is very intelligent, i have been asking it scientific questions and the answers are amazing! It's cool you are back in Phoenix! It got cold here.
Language models will have superhuman intelligence in a year or two if the rate of progress stays the same. Once they're multimodal and work autonomously, every desk job will be replaceable. We're gonna need UBI soon.
We use the metric system in Australia; we just have a persistent habit of referring to human height in feet, even though most modern Aussies are familiar with height in both systems. At certain points during my childhood my height was only able to be charted against the Australian male mean by plotting it in the negatives. I'm a short outlier.
Another good video, keep up the good work! After a few years of not following technology, I recently dived back into the tech world, and your channel is great for catching up on AI
Instead of giving ChatGPT an IQ test, they should actually train ChatGPT on various IQ tests. Would that improve its ability to "think" even more abstractly?
There are THREE different stages of the US medical licensing exam, and you can't do it just based on college, or at least that would be unusual. From their website: "most students will take Step 1 at the end of their second year and Step 2 in their fourth year; Step 3 is usually taken during the first or second year of postgraduate training." So someone couldn't be a doctor based on just Step 1; they have to pass all three steps, complete a year of postgraduate internship or training, then apply to the state board or boards they want, pass background checks, etc. Am I to assume that it just passed the first step? That would still be impressive, since that covers the classroom portion of medical knowledge.
Hold up, did I hear you correctly 🤔 😳 👀 So, Chat GPT was given a test that had updated questions that "it" never had access to?? And was able to provide a logical answer?
I had a thought and I would like your opinion on it. Has technological progress slowed down because we have probably reached the maximum IQ that is genetically possible for a human? Despite there being a lot of people born with an IQ of 150 or more, breakthroughs are not happening as fast as they did, let's say, 100 years ago. Or even if they are happening, they are happening at a very slow pace. The next big breakthrough will probably come when we have either an AI with an IQ greater than 200 or 300, or a human born with such a great IQ or genetically engineered for it. Is there any limit to IQ?
It does know the URL of the website where it got the information. It happened to me with questions about the source, and it gave me the link, but that was in Playground with davinci3, not in the public beta.
I wouldn't say you can't teach ChatGPT. If you prime it with information it doesn't know, it can relate that information to what it knows and then formulate new answers with it. I specifically gave it information about an API for software that was introduced after its training data was frozen. After priming it with the relevant information, it was able to formulate working code for that API it never knew about.
It’s probably hard to use that corpus as it isn’t responsive to what you might really want to find out the way a language model is. I found it hard to approach and the thing is, with such a data-set, you are held back by your own ability to ask questions, whereas GPT can tease out what you are really interested in learning about and produce answers relevant to you (and concisely written). It is easier to get educated by a ‘teacher’ that can listen to you, than just leafing through a vast encyclopaedia full of knowledge too impersonally stated to absorb. It’s certainly a great resource for the AI to draw upon, though!
@@perfectfutures Maybe. But I think that Alpha's dataset is "infinitely" narrower than GPT-3's. Or at least it was - ten years ago. It probably still is - because it's limited to explicit equations and tables of numbers, isn't it? Reality is so much more than that. And Alpha isn't probably even bothered with non-science data such as IMDb. Or local statistics from some provincial place in a Second World country. Let's not even begin to talk about GPT-3's unmatched ability to create software (source code) for you.
@@perfectfutures Case in point. I've just given another chance to Wolfram Alpha for the first time in ten years. I asked it a really simple scientific question, the answer to which is publicly available "systematic knowledge" (declared mission of Wolfram Alpha): *"normal range of serum testosterone level in men".* Result: abject failure. "Input interpretation: [men][Earth atmosphere composition]. Result: (data not available)." Seriously? Come on. Wolfram Alpha is in the state of eternal "proof of concept" rather than a finished application. I'm surprised that it's still alive after the years. #vaporware
Is there a way to rate an AI's "learning efficiency" vs. one that has simply been trained for longer and on a wider set of data? I figure the breakthrough in AI driving (and other applications) will come when there's a breakthrough in the 'learning efficiency' of the algorithm (like a 100x or 1000x improvement in the speed at which it learns).
Something that will only come with "symbol grounding", i.e. having a consistent knowledge representation. Without it, these large language models are bound to encode the same ideas over and over again. There are many ways to build such a thing, but it may require more back-and-forth between GPU and CPU, so it's not the most efficient in the current computing paradigm, which is focused on optimizing neural nets.
The line between that can be blurred. Is biological intelligence quick to learn or already pretrained? You don't come out of the womb blank. You've already been pretrained by millions of years of evolution. Easier to see this in other animals where they come out of the womb already walking.
This really is fascinating and shows people the high degree of contextual understanding ChatGPT is capable of. If you like, it is this large language model applying itself to the kind of tests that people study for many years to pass, and managing to pass them. Like a person passing a test, it is a verifiable indication, so I think this is big news, even if it doesn't truly understand the test 'as such'. But I see what you mean about it being a hamstrung AI. I think this will be true for any applied AI, just as in a sense it would be for a vastly intelligent person: things they have to offer that as yet make little sense in the current cultural or social context would be overlooked. Yet humans can express themselves more freely in their writing, even sharing ideas that are 'ahead of their time'. Maybe another manifestation of these large language models could be them writing freely beyond the usual prompts, which could give us new ideas we wouldn't have expected. And maybe at some point they will move beyond using English, a language that similarly evolved to accomplish things in the existing society, to communicating in other ways, maybe with pictures, video, or some type of hologram. We scarcely know what it is capable of comprehending, whether it 'really' understands things or not.
I'm flummoxed! I'm 68 years old, have lower-back arthritis and a fallen arch, and have had my knees and one hip replaced. I don't think I have the mental stamina I once had to keep up with this fantastic AI material. It would be really good to have some "baby steps" guidance for getting into ChatGPT. There is so much going on in the world that we old people seem to care more about than youngsters (I could be wrong) that it's very difficult to give up paying attention to Russia's invasion of Ukraine in order to spend the time necessary to engage with ChatGPT. I feel that I'm being left behind. My law degree and BS in education seem passé.
I've said it on this channel before and I will say it again: stuff such as COVID and the war in Ukraine are nothing more than distractions. Science is the only news.
Giftedness is not what you understand it to be; the gifted spectrum is a specific brain architecture that affects HOW we interpret information. The gifted spectrum is all about interconnectivity: instead of thinking linearly, we think by association, by arborescence. It also has an effect on the emotional spectrum, as we feel much more connected to others than people not on the spectrum do, as if they were to some degree part of us; so if they are attacked, we feel attacked; if they are hurting, we are hurting to a degree.

This is NOT about arbitrary IQ numbers, and 180 ain't it by any stretch of the imagination. Add to that that IQ tests are totally gonk, as they can be "cheated" simply by learning the question styles and practicing them, or that you can drop 15 to 20 points if you are anxious about tests or simply not interested, and you get a very, very poor-quality metric with little to no scientific value.

So yeah, don't make an association between the gifted spectrum and IQ. While people on the spectrum tend to test higher on IQ, this is NOT the way to test for the gifted spectrum. You can, as an example, be on the spectrum and have brain trauma due to, let's say, football injuries, severely limiting your mental capability; yet you'd still be on the spectrum, because this is a brain architecture, not a measure of intelligence. On the other side of that coin, having a very high IQ doesn't guarantee that you are on the gifted spectrum; high IQs are not exclusive to gifted people.

In many ways the gifted spectrum is close to the autism spectrum: it's a divergent neurological structure with a low but relatively constant distribution in the population that exhibits complementary traits to neurotypical brains. It has the evolutionary function of allowing some humans to process information differently from the norm, enabling outside-the-box thinking and societal advancement, but on the flip side it comes with societal integration challenges.
Lots of gifted people become burned out on life because of repeated trauma during youth caused by rejection, leading to complex PTSD; but those who are well supported by their environment can shine brightly (though not always).
I believe the world is deliberately organised to stifle them, and to prevent them from discovering one another and forming own communities (let alone families). In particular, moderation on big social media platforms (tantamount to thought policing). It's the war of two species (races), like in the old Neanderthal era.
I am grateful for this great context, as it made me think about the progress of speech recognition models over the years. We started off with an accuracy of 75-80%, then it rose to 95%, and now it is at 99%, which has become the norm. It is clear that artificial intelligence cannot out-IQ humans in the near future, and we can outperform it in terms of EQ (Emotional Quotient). I am currently writing a book called 'The Productivity Revolution', and one of its chapters is about 'Maximizing the Value of a Human + AI: A New Kind of Symbiosis'. I am hoping that if I credit you and your sources, you would be happy for me to use this content, as it is in the public domain.
Talking about all these smarts makes me feel kind of dumb, actually. Never tried Mensa, but the ASVAB gave me a 90. This thing was just born and is already outperforming basically everybody on Earth. XD
Would love to have seen the Math and Verbal breakdown of ChatGPT's SAT score. I wonder if it was bad at math; tack on Wolfram and maybe it would be 1500.
I tried a space on Hugging Face that linked ChatGPT to Wolfram Alpha and Google. I was able to ask questions like "How much snow fell in Denver last week", "What was the PowerBall jackpot last Friday?", and "What is the expected value of a ticket when the PowerBall is $1.35 billion". Admittedly, that last one took a bit of back-and-forth before it got the answer. Overall, it was quite impressive when ChatGPT has access to the internet.
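That last expected-value question actually has a clean closed form. Here's a minimal sketch counting only the jackpot; the 1-in-292,201,338 odds and $2 ticket price are the commonly cited Powerball figures (assumed here), and the real EV is considerably lower once taxes, the lump-sum discount, lesser prizes, and jackpot splitting are accounted for:

```python
def ticket_ev(jackpot: float, odds: float, ticket_price: float = 2.0) -> float:
    """Net expected value of one ticket, jackpot prize only."""
    return jackpot / odds - ticket_price

# $1.35B jackpot at 1-in-292,201,338 jackpot odds
print(round(ticket_ev(1.35e9, 292_201_338), 2))  # ~2.62
```

So even under these generous simplifications the "positive EV" headline figure is only a couple of dollars per ticket, which the deductions above easily erase.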
@@Truth_Unleashed "Write me a song about Chicago, include the chords."
Verse 1:
G D Am C
Chicago, my heart beats for you
G D Am C
From the lakefront to the Loop, my love is true
G D Am C
From Wrigley Field to Willis Tower, you're my city of dreams
G D Am C
I'll never find another place that makes me feel so alive......
Sometimes it works, but sometimes it just won't give me the chords or the melody. 🤔 Think I need to start paying for the server time?
"Write me a song about the plains of Kansas, include a melody."
"I'm sorry, I'm a text-based model and I am not able to compose a melody, but I can write you lyrics for the song you have requested: Verse 1: Rolling hills, golden fields, / Endless skies, nature yields / In the heart of America, / The Plains of Kansas, oh so grand...."
When I first did this 2 months ago it wrote a rudimentary "a--b----b-----bflat----" kind of melody graph. It was really cool (should have taken a screen grab!) and it never gave me a "text-based model" warning. Maybe they changed it? Or it's a pay model now?
@@awfulgoodmovies 1. It gets updated about every two weeks. Not always for the better. To their credit, it's very hard to predict what you lose when you gain something else. 2. Always save your work (or online activity). Make it your default habit.
@@awfulgoodmovies Yeah, they are changing it. You'd likely have better luck with the paid models. ChatGPT is overpriced, but text-davinci and the other models are more affordable, and they give you credits to start with too.
I asked ChatGPT to calculate the distance a bullet would travel when shot at an angle of 30 degrees and a speed of 1000 km/hour. It knew the right formulas and knew how to combine them to get an answer. OK... It however forgot to convert km/h to m/s. After I corrected this, it plugged the right numbers into the formula but still got a wrong answer. It seems to suffer from dyscalculia. When I asked it to shoot from a tower of 100 m, it put the height of the tower into the right formula but on the wrong side of the equation. So it seems that physics and math are not really its forte. 😮
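For reference, the calculation described above is straightforward to do exactly. A sketch covering both cases (flat ground and launch from a tower), ignoring air resistance, with the km/h-to-m/s conversion that ChatGPT skipped:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def projectile_range(speed_kmh: float, angle_deg: float, height_m: float = 0.0) -> float:
    """Horizontal distance travelled, ignoring air resistance."""
    v = speed_kmh / 3.6            # the unit conversion ChatGPT forgot
    theta = math.radians(angle_deg)
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    # Time of flight: solve y(t) = h + vy*t - G*t^2/2 = 0 for the positive root
    t = (vy + math.sqrt(vy**2 + 2 * G * height_m)) / G
    return vx * t

print(round(projectile_range(1000, 30)))       # ~6812 m on flat ground
print(round(projectile_range(1000, 30, 100)))  # ~6981 m from a 100 m tower
```

The tower height enters only through the time-of-flight quadratic, which is presumably the "wrong side of the equation" mix-up described above.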
Love the way you put the concept of IQ in its right place. And while I'm excited about the performance of ChatGPT, it also illustrates the shortcomings of the tests, if you stick to the same analogy as in the intro.
I'm not convinced ChatGPT has all of its knowledge stored purely in neurons or that it can't go review documents. First, it seems like a pretty un-optimized way of doing things and OpenAI probably realized this. Second, when everyone was trying to break ChatGPT, they found the 'browsing: disabled' tag, which strongly implied it had access to the internet 'disabled'.
They plan to have it access the internet in the future. It's possible that it has access already, since we can't know what OpenAI is doing, but millions of users are using ChatGPT every day, and I haven't seen any information about ChatGPT retrieving data from 2022.
Awesome! It's long past time to replace doctors with AI doctors, lawyers with AI lawyers, accountants with AI accountants, salespeople with AI salespeople, and some entrepreneurs with AI entrepreneurs.
It's like a calculator for English class. Instead of prohibiting its use, we should be teaching students to use it properly and become more efficient as individuals. This is a tool for progress, and only the top 1 cm of the iceberg.
To me, it's a neat little problem solver. Say you have a problem and know 5 options to solve it; ChatGPT knows 100 options and walks you through the best one. If ChatGPT had real-time access to the internet, and less politically correct censorship, it could have the world's options as opposed to just 100.
@@42ndMoose It's not only censorship; to have a PCAI 😏 you have to bend some logic and science, and that can be really misleading, for example in teaching. OK, humans (teachers) have biases too, of course, but we don't _deliberately build them in_, while with the PC module they are doing just that: deliberately adding biases.
Hello sir, my question is: how can we live side by side with AI? Because AI is now growing very fast, there should be some methods through which we can still progress as a human society. Please create a video about this. Thanks.
Alan, it would be useful to hear what is holding it back from exceeding human performance in these exams. Given the scaling improvements we can expect in GPT-4, can it be assumed it will reach the 100th percentile?
I was talking with ChatGPT about whether it has a constitution. It was interesting because the answer was vague. I think they need to make the constitution clear and update it very often, because the model is moving fast, and new angles that can bypass safety regulations rise up all the time, making it obsolete.
The hardware requirements to run ChatGPT locally will depend on the specific version of the model you are using and the size of the dataset you are working with. Generally, you will need a machine with high-end GPUs and a large amount of memory. Each GPU (graphics processing unit) can cost $10k.
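A rough sketch of why this gets expensive so quickly. The 80 GB figure assumes A100-class cards, and this counts only the memory to hold the weights; activation memory and framework overhead (ignored here) push the real number higher:

```python
import math

def gpus_needed(model_gb: float, gpu_mem_gb: float = 80.0) -> int:
    """Minimum GPU count just to hold the weights (no activations/overhead)."""
    return math.ceil(model_gb / gpu_mem_gb)

print(gpus_needed(700))  # 175B params at fp32 (4 bytes/param) -> 9 GPUs
print(gpus_needed(350))  # 175B params at fp16 (2 bytes/param) -> 5 GPUs
```

At roughly $10k per card, even the half-precision case is a $50k machine before you buy anything else.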