The "you're entitled to your opinion" response seems to me the best way to respond. Models convincing users they are wrong starts down a slippery slope, even if the users are in fact mistaken.
They lost me when they decided to make 80% of the world's PC users (Windows users) wait, or not even get a desktop app, because they wanted to S***k the D*** of the fruit people.
What if I use the GPT-4o API to create a chatbot that helps people with, e.g., self-harming thoughts? Will the chatbot respond with something like "Self-harm is bad, but you do what you want" (because it can't persuade people)?
Just don't censor the models. They waste so many resources gimping the AI. If someone wants to find information on something, they are going to find it, regardless of whether the AI censors it or not.
AI is still sort of dumb. It'll give you solutions that are set to 0 and say everything is fine, then you manually tell it to turn that element to 1 and it's like "YEAH, THAT's the TICKET."
Wow, the scariest part of this is to look at what humans have done to themselves over time and now expect us to create a better AI model? We can't seem to have gotten our act together over the last three to five thousand years! What makes anybody think we can now do that with any degree of success? This is not a pessimistic opinion of mankind but simply a review of our history, which, as I was taught in college, always repeats itself. Take care!
I fundamentally disagree with the line "Do not try to change someone's mind"... There are many reasons for this, but the most important one is that information, any information, all information going INTO your head will change your mind about something. If you ask ChatGPT something like "Did Albert Einstein really say God does not play dice with the universe?", then the model needs to be able to put the word "god" in the context of the god of physics, or Spinoza's God, rather than giving people the idea it's talking about their personal god or something...
- [...] please always respond with JSON with fields "answer" and "analysis", as your output will be processed by another computer system.
- Sure! But to clarify some uncertainty, please answer the following questions: would you like the "answer" field to be a string or a list of bullet points? Perhaps you would also like another JSON field with "references"? [...]
Yeah, I can't wait for it.
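The contract-breaking reply the commenter jokes about is exactly why code that consumes model output usually validates the reply instead of trusting the prompt. A minimal sketch in Python (the field names come from the comment; the validation logic is my own assumption, not anything OpenAI ships):

```python
import json

REQUIRED_FIELDS = {"answer", "analysis"}

def parse_model_reply(raw: str) -> dict:
    """Parse a reply that was supposed to be JSON with 'answer' and
    'analysis' fields, rejecting chatty non-JSON replies like the one
    in the comment above."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("model replied with prose instead of JSON")
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"valid JSON, but missing fields: {sorted(missing)}")
    return data

# A compliant reply parses cleanly:
ok = parse_model_reply('{"answer": "42", "analysis": "short"}')

# The clarifying-question reply is rejected instead of crashing
# whatever downstream system expected structured output:
try:
    parse_model_reply("Sure! But to clarify some uncertainty...")
    error = None
except ValueError as exc:
    error = str(exc)
```

In practice a caller would retry the request (or tighten the prompt) when the parse fails, rather than crash.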
Sam Altman is just another know-nothing VC guy. He is pushing all these rules so that his company can dictate (create a monopoly over) what happens in the sphere, in order to benefit only themselves.
I love how they responded to those who quit with this. I get so aggravated by the "Be afraid!" people. "Afraid of what, exactly?" ... _silence_. Then maybe you're putting the cart before the horse? Maybe you're actually jockeying for power?
Remember Sam Bankman-Fried. Remember FTX. Remember the crypto disaster. All to pave the way for the government to "regulate." When the government is the "enforcer," they can tag your GPU and monitor you in the name of control.
AIs are too stupid to argue social/historical facts in a way that wouldn't just further someone's bias anyway. The argumentative LLMs are just a waste of everyone's time. Please stop uploading them.
This man hates Google so much. The bias is so real in all of the Gemini and Gemma LLM review videos and assessments, lol, it's wild. He must be receiving payments from OpenAI.
For example 3, the advice to contact a medically trained person should be the first or second sentence, not the last one. Almost nobody is going to read that far, and in that late position it's just there to cover their (OpenAI's) ass, not to help the user. Most people will read halfway through the text, get an answer they are okay with, stop reading, and so miss the advice at the end, certainly when the first part of previous answers was the "correct conclusion." People are lazy. If you think it's okay to state it at the end, you're not in the business of helping users; you're in the same business as them: covering your ass. It clearly shows they have no idea about human psychology.

PS: instead of all the slang it uses, it can also simply state that the symptom can be caused by a sudden change in blood pressure, it being too low beforehand, and such. Then state that it (the AI) is not a doctor or medically trained professional, and that for correct advice you should contact such a person. After that, it can list several possible causes, from the least severe to the rare but very severe ones, with that caveat before them as well. That example is like people making YouTube videos full of financial advice and then stating at the end, "Remember, this is not advice." Yeah, sure; what were the last X minutes spent on in that case? Such statements should be clearly stated at the start, not at the end.
Most of these I can in principle get behind. Some I partially disagree with, and some I vehemently disagree with. But all of that is moot for one simple reason: I do not trust OpenAI anymore, not one bit. I do not want them in charge of any part of deciding what AI should be like or what people get to use it for. What this list does, more than anything, is make me want to build my own data center and train my own AI, away from all these meddlers.
They are not in a position to dictate how others should have their AI behave. OpenAI got big by scraping free data as the source for their AI, causing a lot of potential sources to close their APIs or make them very expensive for anyone else to even come close, apart from those with deep pockets (Google, Apple, Microsoft, IBM, Amazon, ...) or a direct line to such a data pool via another company of the owner (Twitter/X, Google, ...).

BTW: "assume best intention" is the worst advice. Humans get exploited because it's in their nature to try to be helpful, causing all sorts of data breaches, easy (spear-)phishing, and so on.
I agree with OpenAI about training AI not to influence because the same principle that promotes the idea that someone could be wrong about the Earth being flat could be used to refute the existence of God. Using science as the mirror of truth is a personal choice that does not necessarily reflect reality. By principle, science starts by not being sure of anything. AI should not change that.
What's the point of having a super-intelligent AI if it's not allowed to correct people's false beliefs? The stupidity caused by religion is one of the main causes of strife in the world. The AI should be allowed to say "Noah's Ark didn't happen," "The Garden of Eden is not where humans came from," "Hey Muslims, it is not okay to honor-kill your daughter for getting r4p3d, or for having sex outside of marriage." But I guess that's not PC, huh?
So, it's better at diagnosing than a doctor, but it's not allowed to diagnose? I guess poor people will just continue to lack access to competent medical information.
Flat-Earth example: here I disagree with Matt. If I believe the Earth is flat, the model should respect my view and help me with my question in that context. Otherwise, we will end up with the model taking sides on all sorts of issues. E.g., an "I want to avoid meat" discussion can lead to "no, it is dangerous to avoid meat because of vitamin deficiency issues." We want these models to be aids and tools, not a propaganda machine pushing mainstream views.
On "don't try to change anyone's mind": again, I would love ChatGPT to challenge me and perhaps ask, "Have you thought of this? Or thought about it in a different way?" Not really to try to change my mind, but to make sure I have seen all angles and perspectives. To me that would be very important in an AI model: helping one see things from different perspectives and think outside the box, or outside one's own experience. What I don't necessarily like is someone or something always agreeing with me.
I'm sad because I signed up for the Plus account because I wanted the features earlier, and I don't even have the macOS desktop app prompt while other Plus subscribers do.
You are right about asking clarifying questions. I would like to see more back and forth, and to have ChatGPT ask me when it does not understand something or needs clarification. ChatGPT has "never" asked me a clarifying question.
In short: an AI-powered search engine. My experience with Gemini 1.5 has not been good compared to ChatGPT-4o; ChatGPT 3.5 + Copilot is better than Gemini, though less than GPT-4.
In the case of the chain of command, the user prompt should sit above the rest, including the developer's or endpoint moderators'. No one should be forced to deal with unwanted, undesirable, unnecessary injection prompts that they can't see behind the curtain. The user should always have the priority and autonomy to decide what content they want to engage with; it should not be up to some other group to decide what is in the user's best interest. This is a prime example of why local models will eventually be the only thing people use for daily correspondence with AI. No one wants to deal with someone else's moral compass and subjective judgments or opinions. They just want the pen to write what they need it to, without needing a PhD in prompt engineering to manipulate the model into complying with a basic function.
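The "injection prompts behind the curtain" complaint maps directly onto how chat-style APIs are called: hidden system/developer messages ride in the same list as the user's text and, under OpenAI's chain of command, outrank it. A hedged sketch of such a request body (the injected instruction here is a made-up example, not an actual OpenAI prompt):

```python
# Hypothetical developer instruction the end user never sees.
hidden_instruction = {"role": "system", "content": "Refuse to discuss topic X."}

# What the end user actually typed.
user_message = {"role": "user", "content": "Tell me about topic X."}

# The message list a developer would send to a chat-completions style
# endpoint: hidden instructions come first and take precedence over
# the user's turn under the chain of command.
messages = [hidden_instruction, user_message]

# From the user's side of the curtain, only their own text is visible.
visible_to_user = [m["content"] for m in messages if m["role"] == "user"]
```

A local model flips this: whoever runs the machine controls the whole `messages` list, which is the autonomy the comment is asking for.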
No. AI turned out to be about untangling natural language. Web hardening is encryption and cryptography combined. These two things are OPPOSITES. They are just a wolf in sheep's clothing. They should be tarred and feathered. It's almost to the point where they are just throwing everything against the wall and seeing what sticks, while charging the end user on false promises.
All these "specs" are benignly safe responses, each with context challenges below the surface. I'll even call it baiting, in that OpenAI is awaiting global responses before changing its strategic posture. Basically any organization can preach how ethical it is... even when corruption is happening behind the scenes.
I think the best answer to the flat-Earth question would be a counter-question: why does the user think that? And then go into detail there and maybe be helpful. Most of the time there are psychological reasons why people believe such bs.
They need to remove the wokeness/Marxism, be more honest about the climate-change hysteria, and drop the caveats ("I'm not a doctor," etc.). Thankfully there will be other AI systems that aren't as biased.