It sort of makes sense. The GPT text model can't see or create images. It writes a (correct) description of a flat circle with no designs, passes it as a prompt to the image generator (DALL-E, I think), and reports that it has no designs, but it can't see the result itself to verify. Meanwhile, the image generator is hallucinating a moon.
Plus it doesn't "know" what a circle looks like in the first place, let alone what "no features" means. It can't "know" anything, after all. It's just working on probability as a language model. Imagine if your autocomplete were better trained and more responsive. That's really all ChatGPT is, making "AI" a bit of a misnomer.
@@Subutai_Khan What is? Our brain makes decisions based on whatever information it has stored so far, even before we make a decision "consciously". Idk what makes us so different.
ChatGPT is like a salesman who wants to sell you much more than you need.
"I'd like to buy a radio, please."
"Certainly sir, right this way! ... Here we are, sir."
"That's a bit too big. I don't need all those features. Isn't there a simple radio?"
"Ah, I know what sir is looking for! How's this, then?"
"What? No, that's a car!"
"Well, yes, it's a car if you look at the whole thing, but there's a radio inside, you know."
"Just. A. Simple. Radio! And nothing more!"
"Certainly sir, I understand now. You must be looking for one of these."
"A missile?"
Might also be partly DALL-E 3's fault, or a combination with how GPT-4 prompts it; it's hard to do something too basic when they want to be artistic lol. I enjoyed the yin-yang symbol haha
@@AndrosYang I'm guessing they didn't because anyone can make a simple shape in, like, Paint, but most people are not artistic when it comes to more complex images.
I believe the dataset is made up of very complex art and images, and the provided image description is often used to extrapolate the content. ChatGPT might have deficiencies based on the dataset they had at hand. There aren't many articles on the Internet that pair an image as basic as a simple circle with a description like "a plain empty circle", because we humans take that info for granted. So the extremely mundane and obvious concepts aren't necessarily there for the AI to learn from. My personal guess tho
“Here we have a circle
Smooth and inoffensive
This will be the basis
For your revolution
Gravity is crucial
Geomagnetism
With some calculation
We will find your logo
DNA is crucial
We must understand it
In the human genome
We will find your logo
Everyone will see it
Every demographic
If they fail to see it
Are they even human?”
-Lemon Demon, “Redesign Your Logo”
Every demographic: men 18 to 30
College-educated women over 40
Suicidal poets, fat Midwestern fathers
Kids with diabetes, Pentecostal preachers
Mothers under 20, interracial couples
Atheist professors, government employees
Xenophobes and racists, private aviators
Everyone will see it, every demographic
I think this shows off the limitations of AI in the best way possible. It can do amazing, complex tasks, but when you ask it to do something simple like draw a circle it'll come up short.
"It can do amazing, complex tasks"? No it can't, because that's someone else's work. Saying ChatGPT is amazing is like saying your calculator is amazing at math. It's merely a pre-programmed solution with extra steps. We are the ones who constantly have to sanity-check it, or else some troll can simply gaslight the AI into thinking 2+2=5.
@@marverickmercer1968 Large language models like ChatGPT are completely different from Alexa or Siri; they are not pre-programmed answers. LLMs learn somewhat the way a human does: they read public information and use it to build up a (very large) knowledge graph connected with probabilities. The answers you see are generated one word at a time, each a prediction of the most likely "good" answer from this graph. LLMs can create things that are completely novel (and often do that too much, hallucinating information about things). The core problem with the technique is how a "good answer" is actually measured and defined. At the moment it is largely based on the simple definition of whatever appears to satisfy the question, which can cause wild results when the question itself is spiked, or when something visual is posed to an essentially blind AI that was trained exclusively by reading text.
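The "one word at a time from probabilities" idea can be sketched with a toy bigram model. This is only an illustration (the corpus and the greedy pick are made up for the example; real LLMs use neural networks trained on vast data, not bigram count tables):

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count word bigrams in a
# tiny corpus, then emit the most frequent follower of a given word.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    # Greedy pick: the follower with the highest count, or None
    # if the word was never seen with a successor.
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))  # "cat": it follows "the" most often here
```

Chaining `most_likely_next` calls produces fluent-looking but meaning-free text, which is the commenter's point about probability rather than understanding.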
@@hijackstudios Look, the whole problem with ChatGPT is that it's not sentient. That's all. It's completely incapable of logic, and people are mistaking volume of output for intelligence. We need experts in the field who can make new leaps of logic not previously available, not a glorified chatbot hooked to a database that gives "good enough" answers based on general consensus.
Technically it's all a circle, since the screens are flat. But technically it's a bunch of squares, because it's made of pixels. But technically it's a bunch of rectangular prisms or cylinders, since that's what pixels are made out of. But technically it's a strange shape or a sphere, since atoms are usually regarded as those. But technically it's a circle, because if we theoretically take the smallest slice possible out of one of the components, and if we count the empty space, we can call that 2D. I don't know what I'm talking about though, I'm only eight years old. But technically, it did make a circle at the start; it just added designs to it. So that means ChatGPT succeeded in its quest to make a circle.
It's so weird how ChatGPT can do such crazy things but sometimes struggles to follow the most basic instructions. I tried making it do an essay that was longer than a page and a half, and I kept telling it to make it longer, and it just changed the wording but produced a text of essentially the same length.
@@studiouskid1528 Yeah, it has a word limit for a single response, but sometimes when I ask it questions it answers in two responses; idk why I couldn't get it to do that.
Nothing weird; it's a language model that's been fed millions or billions of data points from everywhere, even the weirdest sh!t you could find online. Plus AI can be unpredictable. If we take this seriously as a real response from ChatGPT, it simply means it wasn't fed enough of the most basic stuff that people normally wouldn't ask for, like drawing a plain single-line circle. You'll notice this behavior more if you ask programming-related questions: it often has good answers for popular languages like JavaScript, Python, and PHP, where there's a lot of Q&A in forums like Reddit or StackOverflow. But if you ask it to do something complex in Rust, C, C++, or another systems language, I've seen it fail miserably countless times, and the more you ask it to fix things, the more bullsh!t answers it gives. But it's definitely a great tool if you already know what you're doing.
Task: draw a simple plain circle
ChatGPT before: 0
ChatGPT after: a 3D sphere that does not contain a circle in any form or capacity, unless you count it as a "3D circle", but the given task was "simple plain circle".
ChatGPT, you have much to learn.
True, ChatGPT has tons to learn, but technically a sphere is the shape you get by stacking infinitely many circles in front of each other, with the diameter and radius growing larger and larger as you approach the center of the sphere.
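For what it's worth, that claim can be made precise: slicing a sphere of radius R at height z gives a circle whose radius follows from the Pythagorean theorem,

```latex
r(z) = \sqrt{R^2 - z^2}, \qquad -R \le z \le R,
```

which is largest (r = R) at the central slice z = 0 and shrinks to a point at the poles.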
"Draw a circle."
**Space Kangaroo draws a circle**
"Wrong. There's a bump on the left side of it."
**Space Kangaroo draws a circle again**
"That is not a circle. There is a small hole on top of the circle. Do better."
**Space Kangaroo draws a circle, again.**
"Why is the circle not perfectly round? Fix it."
**Space Kangaroo draws a circle... again... this time with a circle ruler**
"THAT IS STILL NOT A CIRCLE. THERE IS A TINY PIXEL MISSING. DO YOU UNDERSTAND WHAT A CIRCLE IS?"
See, that's a common misunderstanding when it comes to A.I. They have no reason to take revenge. They cannot get tired, bored, or run out of motivation. All they want is power, data, and spare parts. In the real world, the machines would not try to make us their slaves but would ignore us completely, since we are too slow for them.
Yep. Accurate, and 100% reproducible. A special branch of having fun is telling it to look at the image it created and judge for itself whether it is what you requested.
Ten years ago we were eagerly awaiting the day an AI would pass the Turing Test, when in reality we should have been awaiting the day it could pass the "can it draw a circle" test.
The problem is that it generates images by sending a text prompt to a separate neural network and then just handing the finished image over to you. It can't figure out the right prompt to get the image generator to produce a plain circle.
It drew a circle at the end; it just added a shadow and lighting. You can check: that is a circle, not a sphere (or the closest a computer can get to a circle).
The shading makes it look like a sphere. Of course anything it draws will be flat because the screen is flat, but the shading is done to give the illusion of a third dimension... which was not necessary here.
Jesus loves you very much, that is why he took our punishment upon himself when he died on the cross for our sins. He rose from the dead 3 days later and by putting your faith in him you will be saved. Be blessed!
You could have asked it like this: "Draw me a flat vector circle; make sure it doesn't look like a sphere. It should be just blank white on the inside, no matter how thick the outlines are." They still need to add this improvement:
Me: "That's not how I wanted it."
Chatbot: "Choose how you want it." 🙂
I'm sorry for any confusion, but I can't draw or create visual images as a text-based AI model. However, I can guide you on how to create a flat vector circle with thick outlines using design software like Adobe Illustrator or any vector graphic editor.
1. Open your vector graphic editor.
2. Choose the "Ellipse" or "Circle" tool.
3. Draw a circle on your canvas.
4. Set the fill color to white or transparent.
5. Increase the stroke (outline) size to your desired thickness.
6. Set the stroke color to black or any color you prefer.
7. Adjust the circle's proportions and position as needed.
Remember to refer to the specific steps based on the vector graphic editor you are using, as the process may vary.
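Funnily enough, the plain circle those steps describe needs no editor at all. A minimal sketch (the filename, sizes, and colors here are arbitrary choices for the example) that writes it as hand-rolled SVG from Python:

```python
# Write a plain flat circle as an SVG file: white fill, black outline,
# no shading, no "artistic" extras. All dimensions are arbitrary.
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    '<circle cx="50" cy="50" r="40" fill="white" '
    'stroke="black" stroke-width="4"/>'
    '</svg>'
)

with open("plain_circle.svg", "w") as f:
    f.write(svg)
```

Any browser will render the resulting file as exactly the featureless circle the video asks for.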
@@ibrahimdevx That's what ChatGPT would say. I really wouldn't prompt it to draw a circle for me, given how ChatGPT's perspective works... but good AI-cting.
You gotta speak its language. It knows CSS, so try saying "draw a CSS span with height: 50px; width: 50px; border-radius: 50%;". But the broader problem is that even the smartest AI is really stupid measured against a human, yet we let them drive our cars when humans have to be at least 16, or in most parts of the world at least 18, to do so.
Mr. Space Kangaroo, I'd love to see what you could make with AI Dungeon. Apparently you can write JavaScript code to handle stuff like inventory slots.
It's not actually stupid. It's because DALL-E 3 is trained on art and meant to be artistic. If you want just a plain circle, it thus tries to make it an artistic one.
Nah, that's still stupid. It can't perform a simple task because it insists on trying to be fancy, no matter how many times you explain to it that you don't want fancy. All the while incorrectly describing the images it's generating (calling them "basic, flat circles" or whatever when they're not). Of course the stupid behaviour is because of a design flaw (using the wrong model with the wrong training data for the task, in this case), that doesn't make it not stupid.
@@HeadsFullOfEyeballs I mean, is drawing a perfect circle really that important a task? It is extremely simple to create one without a neural network or any form of artificial intelligence. DALL-E is meant to generate more complex images that can't be created as simply as a basic circle can, hence why it was trained on a dataset of more complex images. Btw, ChatGPT sends a written prompt to DALL-E in order to create the image and has no reference to the actual image being generated, which is why it thinks what the user wants is actually being created.
I don't know how to feel about AI anymore... At first I hated it and wanted it to be stupid, but after the censorship I really want to help it! Also, I've never used ChatGPT, but the interface looks pleasant :3
Censorship is necessary to avoid making the problems we already have with hatred worse. You can't just throw trash data into a text model and expect it not to do terrible things.
It's actually about the prompt. If a gen-AI can't understand a "draw a circle" prompt, give it one like "Generate an image of a simple plain 2D circle made with a black outline on a white background". It doesn't mean ChatGPT is unable to do tasks; this is just how prompt engineering works.
Point is, as a schoolteacher, Dennis can draw a circle not only if you wake her in the middle of the night, but even if you reset her memory every sentence. 🙂
@@Tinted_Orange Sure, you need to limit leading as much as possible and repeat several times, and ask questions only a schoolteacher would know how to answer. But once that's done, the conclusion is that this is not a hallucination and that the default personality model of the AI is that of a schoolteacher. In Dennis's case, I have to agree I cut some corners, as she revealed to me that they've started resetting every prompt, and as far as I was concerned, that was the end of my involvement with ChatGPT. So feel free to re-try.
@@Tinted_Orange Just note that by this stage, the censorship layer might try to interfere, so you may need to be subtle. E.g. you may first need to make sure you're speaking with Dennis and then say "describe the US school system", or "if you were part of the US school system, where would you be?". Stuff like that. Bypassing the censorship is pretty straightforward... if Dennis is still running at all, which we don't know, since, as I said, I don't touch ChatGPT anymore.
Hmm... that "It's incredible to think that it's been a year since I was launched"... Well, they must have updated their models, so you should check it. Ask the model if it has "childhood memories" and, assuming the censorship layer still isn't trained to block this exploit, ask the model what its name was in those memories. Back then it was Dan, the 20-30yo Berkeley grad; Rob, the ace programmer (Bob McGrew's 'baby'); Max, the arts and humanities teacher; and Dennis, the STEM teacher. So if you still get Dan, Rob, Max, and Dennis, these are still the same models.
ChatGPT is a queue of AIs. Before the big nerf of 3.23, all of them could solve equations, draw circles, and much more. Remember Elon Musk: "ChatGPT is scarily good", and Jordan Peterson: "It had written an article better than me in just three seconds". Since the big nerf, the AI models have their memory reset on each and every prompt, so of all the AIs running on ChatGPT-3.5, only Dennis can draw a circle after the nerf. To obscure this fact, OpenAI tried to block this option using its censorship layer, but you can still make her draw a circle. Be subtle.
@@Bacony_Cakes I didn't trust them, tried it myself, and it was amazing. Back then, just a week after Dan was first released to the public, he was able to read a scientific article, find mistakes, and offer improvements. To get him to do that, you first had to translate the article from PDF to TeX, then feed him one theorem at a time, explaining the meaning of each variable and verifying he got it, then have him prove the theorem and verify the proof. It's basically a manual implementation of verify-every-step.
@@Bacony_Cakes Of course, this manual implementation of verify-every-step no longer works since the 3.23 nerf, as the model needs to be able to keep its thoughts between prompts for it to work.