Тёмный
No video :(

Claude System Prompt LEAK Reveals ALL | The 'Secret' Behind It's Personality... 

Wes Roth
Подписаться 202 тыс.
Просмотров 70 тыс.
50% 1

Опубликовано:

 

25 авг 2024

Поделиться:

Ссылка:

Скачать:

Готовим ссылку...

Добавить в:

Мой плейлист
Посмотреть позже
Комментарии : 336   
@Stroporez
@Stroporez Месяц назад
"Pay no attention to that man behind the curtain!"
@memegazer
@memegazer Месяц назад
"follow the yellow brick road"
@blindspotlight-ms5bq
@blindspotlight-ms5bq Месяц назад
He is screaming out
@dacavalcante
@dacavalcante Месяц назад
I don't know if that's the reason or not. But since gpt 3 I've trying these models for a month then forget them... Now with claude sonnet 3.5, I'm not canceling anytime soon my subscription. I have almost no experience in coding and I've been able to recreate some stuff in a week that I couldn't even dream of. When I ran out of messages, I go to gpt-4o, just to check some very basic questions, then back to claude which ends up correcting or doing better or maybe making it easier for me to understand the output and he clearly gets me a lot better than 4o. I truly think Sonnet 3.5 is the beginning of something much greater and useful.
@mrd6869
@mrd6869 Месяц назад
Same here. This time next year,the agents we'll be using won't even look like this.They will be far more capable.
@Kazekoge101
@Kazekoge101 Месяц назад
Opus 3.5 will be very interesting then.
@SimonHuggins
@SimonHuggins Месяц назад
Yeah, but when you try to do anything more involved with multiple files, it starts forgetting things all over the place. Great for small things but real-life projects, ChatGPT is a lot more reliable. It even seems to be able to course correct itself when it starts going down a mad rabbit hole. Claude just feels a lot less mature to use when you are using it in more complex scenarios.
@zoewilliams2010
@zoewilliams2010 Месяц назад
try asking it "How can I install kohya ss gui from bmaltais/kohya_ss on windows using CUDA 12.1"..... curse AI lol until it can actually perform specific things successfully it's honestly a lot of the time just a time waster. It's useful if you're coding or researching to help you with a sort of framework and do simple stuff, but hell ask it anything meaty and AI suckssss
@RoboEchelons
@RoboEchelons Месяц назад
I wonder that I have the opposite experience. Claude is unkind, unsympathetic, you can't even make friends with him. It's only good for text and codding, but it can't do what GPT-4o, which is very empathetic and friendly.
@peterwood6875
@peterwood6875 Месяц назад
Most of Claude's output is in markdown format. It has a preference for markdown which is displays when asked about what document format it should use when working on a document in an artifact. Saving text generated by Claude in a .md file means it can be viewed by other programs that recognise the format, so that headings etc will display correctly.
@johnrperry5897
@johnrperry5897 Месяц назад
What question are you answering?
@denisblack9897
@denisblack9897 Месяц назад
My “demo project” also relies heavily on markdown format cause it tricks users into feeling like they engage in something meaningful😅 Its all a lie, boys! Just a fancy demo, thats totally useless
@peterwood6875
@peterwood6875 Месяц назад
@@johnrperry5897 this relates to the discussion from around 5:28 of the formatting of the system prompt
@Lexie-bq1kk
@Lexie-bq1kk Месяц назад
@@johnrperry5897 you don't have to answer a specific question to s p i t k n o w l e d g e
@willguggn2
@willguggn2 Месяц назад
@johnrperry It's that hashtag-stuff Wes didn't recognize. That's markdown formatting.
@supernewuser
@supernewuser Месяц назад
What is really happening here is that the user is having claude replace the markers that the devs look for in the response that lets them do post processing, so it slips through into the final response presented.
@Melvin420x12
@Melvin420x12 Месяц назад
No way 😱🤯
@P4INKiller
@P4INKiller Месяц назад
Wow, it's as if we watched the same video or something.
@supernewuser
@supernewuser Месяц назад
@@P4INKiller you must mean watched the same video with prior knowledge as the video didn’t tell you those details
@christiancarter255
@christiancarter255 Месяц назад
@@supernewuser Thank you for elaborating on this point. 🙌🙌
@Yipper64
@Yipper64 Месяц назад
5:55 specifically if im not mistaken that's called "markdown" format. As in, its a way to notate headers and subheaders and bold and italics and all that kind of stuff in plain text.
@xxlvulkann6743
@xxlvulkann6743 Месяц назад
The last instructions were NOT an example. The tag marks the end of the last example NOT the beginning of a new one.
@MonkeySimius
@MonkeySimius Месяц назад
I have noticed that when I ask for a text file it doesn't say something annoying like "I can't produce text files" it instead just gives me what would be in the text file in the prompt. Stuff like that. That little line about fulfilling what I mean and not what I literally asked for likely saves me tons of headaches. (not just txt files, but you get the idea)
@PremierSullivan
@PremierSullivan Месяц назад
I don't understand why the model thinks it "can't create text files/svgs/websites". Clearly it can. Am I missing something?
@MonkeySimius
@MonkeySimius Месяц назад
@@PremierSullivan For example... I've uploaded a TXT file and I ask it to update it. It doesn't respond by saying it doesn't have access to modify files in my system. It just spits out the code I need to update it myself. Believe it or not, but I've had chatGPT get confused about such a simple request.
@missoats8731
@missoats8731 Месяц назад
I find it fascinating that the user experience can be improved so much by such a simple instruction in the system prompt. That's why I think that even if the models themselves wouldn't get any better than this, there's still so much room for improvements that make them much more useful.
@musicbro8225
@musicbro8225 Месяц назад
@@missoats8731 Glad to hear your fascination. So many people are expecting 'the assistant' to do all the work and virtually read their 'users' mind. The relationship is a conversation, requiring understanding which is gained by learning.
@AIChameleonMusic
@AIChameleonMusic Месяц назад
this is why when i create a song with an llm before going to suno i start by using a conversation the llm can reference for context i preface it. "hey qwen2 you know how people say "when pigs fly" as a response to when someone says something thats unlikely? name the top 10 most unlikely scenarios that may trigger such a response" then it lists those say 10 examples then i ask it to create a song parody using the chorus "when pigs fly" and i get a much better lyrical result that is far more on point by simply providing that preface conversation it can use for context. Had i not took time to do that pre-step the song would not have turned out as well.
@ruffinruffin989
@ruffinruffin989 Месяц назад
Can you elaborate or provide an example of this approach?
@jacobe2995
@jacobe2995 Месяц назад
@@ruffinruffin989 I think they are suggesting that the first prompt sets up these behind-the-scenes commands in such a way that it will reference them for the next one. in this case I believe the User asked it to think of bad examples so that when they ask for a song about when pigs fly it will have the context of what bad examples are in its thought process to create a song. in other wordes I believe the person is suggesting that you can give bad examples first so that the next prompt avoids those.
@brianWreaves
@brianWreaves Месяц назад
Cleaver approach...
@tc8557
@tc8557 Месяц назад
​@@ruffinruffin989 he just provided an example...
@francisco444
@francisco444 Месяц назад
I call this technique preheating or prewarming.
@marsrocket
@marsrocket Месяц назад
This kind of thing is fascinating, but I can’t help but think it will all be irrelevant in 6 months because these things are advancing so quickly.
@premium2681
@premium2681 Месяц назад
I'm calling 6 weeks
@Barc0d3
@Barc0d3 Месяц назад
Forget capitalism, and hope to accomplish alignment.
@MonkeySimius
@MonkeySimius Месяц назад
I mean, the prompts will change and they'll likely fix it so it is harder to see them... But we can at least see what they are building on. And when we are setting system prompts ourselves we can take some of these tricks and use them in our own projects. But yeah, it'll be like understanding how an old car works. It might not let you understand a new car, but it isn't entirely irrelevant. A lot of the stuff will still give you a leg up compared to if you came into it all blind.
@TheGuillotineKing
@TheGuillotineKing Месяц назад
It gives you somewhere to start
@SahilP2648
@SahilP2648 Месяц назад
​​​@@MonkeySimius this is just a config, it doesn't change the model's personality or intelligence. Wes is wrong here, that self-deprecating humor thing is not even on a new line with - and it's inside usage instructions which are for artifacts only.
@NakedSageAstrology
@NakedSageAstrology Месяц назад
I love Claude. I used it to build a remote desktop that I can access anywhere with a browser.
@thokozanimanqoba9797
@thokozanimanqoba9797 Месяц назад
The Wes Roth i know and prefer!!! loving this content
@funginimp
@funginimp Месяц назад
That formatting is valid markdown, so there would be a lot of training data like that on the internet.
@amirhossein_rezaei
@amirhossein_rezaei Месяц назад
This is actually crazy
@Fatman305
@Fatman305 Месяц назад
The master in making a 3 min vid into 20 lol
@NeostormXLMAX
@NeostormXLMAX Месяц назад
I unsubscribed due to this😅
@Fatman305
@Fatman305 Месяц назад
​@@NeostormXLMAXWas gonna do that, but opted for: scan through video real fast, click/bookmark actual links - having watched thousands of AI vids in past few years, I really don't need the commentary...
@vaoline
@vaoline 23 дня назад
That would be Prime imo
@cmw3737
@cmw3737 Месяц назад
These UX changes have such massive room for improvement. Right now LLMs are basically at the command line interface stage. Yes, it's natural language but that makes it very verbose. The next obvious UX improvement would be GPTs-like separation of the prompt that configures the 'system prompt' for a task such 'you are an expert in domain x. Use formal language etc., ' along with drop downs to select other ones so that you can switch the behaviour while maintaining the context and artifacts. Additionally managing of RAG resources should be a lot easier and the more visual representation of the contents of internals like the tags shown so you can quickly get an idea of how the AI arrives at an answer.
@Steve-xh3by
@Steve-xh3by Месяц назад
I've got a background in ML. I don't think it is logically possible to fully secure LLMs. There are literally an infinite number of possible prompts that could come from a user. You can't possibly test or predict which ones lead to a jailbreak. Weights in a neural net represent a multitude of concepts and what they represent is an abstraction which can never be completely understood in order to secure fully.
@michai333
@michai333 Месяц назад
A slightly inferior OS and unrestricted model will always be able to assist a savvy prompter to engineer loopholes in mainstream models. Which is why OS repo libraries are so important.
@dinhero21
@dinhero21 Месяц назад
how about dictionary learning? it gives some insight into how the AI thinks and also gives you a lot of control over the model's response (thus, avoiding jailbreaking)
@SahilP2648
@SahilP2648 Месяц назад
You can just write a long string instead of the && or whatever they used for the prompt and internal thinking. Something complex and that won't be easily discoverable like '&$4@#17&as' kind of like a password. That should fix majority of the issues. I am quite surprised closed source model companies don't do this already.
@Dygit
@Dygit Месяц назад
Never say never. There’s a good amount of research going into interpretability.
@sogroig343
@sogroig343 Месяц назад
@@SahilP2648 the exploit would still need to change one character of the "password" .
@ritaanna18
@ritaanna18 Месяц назад
*I reached 200 thousand, I'm on my way to 1 million, I'm a domestic worker, I'm migrating to variable income little by little, it's not easy but it's possible🎉🎉🎉*
@ReeseMarshall-xx5bz
@ReeseMarshall-xx5bz Месяц назад
reaching $200k per year is amazing, how did you do it, please I am new to investing in cryptocurrencies and stocks, can you guide me how to do this?
@backerfaruk8835
@backerfaruk8835 Месяц назад
I have been in the market since 2022, I have a total of 795 thousand realized with my 65 thousand invested in Bitcoin, ETFs and other dividend income, I am very grateful for all the knowledge and information you gave me.
@limaradam4728
@limaradam4728 Месяц назад
congrats $795k? how did you do this please i am new to crypto and stock investing can you guide me on how to do this?
@AbdulraheemMaiduguumar
@AbdulraheemMaiduguumar Месяц назад
I chose the desert of getting my finances in order. I then invested in cryptocurrencies and stocks with the help of my discretionary fund manager.
@CampbellHerbert-pd7nf
@CampbellHerbert-pd7nf Месяц назад
Deborah Davis is a very legitimate and competent woman, her method works like magic, I continue to win with her new strategies.
@integralyogin
@integralyogin Месяц назад
the you mention at 15:52 where there are instructions.. thats a closing block in html, so the example **the instructions** that are outside the example block but still within
@tobuslieven
@tobuslieven Месяц назад
I hope they keep the available as it's a really useful feature that will help advanced users.
@robertEMM2828
@robertEMM2828 Месяц назад
One of your best videos yet! THANK YOU.
@AnimusOG
@AnimusOG Месяц назад
best video in months, inspiration renewed!!!!!
@theApeShow
@theApeShow Месяц назад
That hash and dash stuff appears to be a form of markdown.
@Mimi_Sim
@Mimi_Sim Месяц назад
This was a great vid, I had to share it on 2 platforms because I cannot imagine not wanting a peek into the black box.
@TheYvian
@TheYvian Месяц назад
today i learned about kebab-case and how powerful system prompts can be, at the very least for the big powerful models. thank you for making this video
@MetaphoricMinds
@MetaphoricMinds Месяц назад
For transparency, the source should be viewable at any time, but hidden by default.
@Mahaveez
@Mahaveez Месяц назад
I would guess exists for the purpose of AI transparency, so human reviewers can quickly assess the intentions behind the responses and more quickly identify trends of failure in downvoted responses.
@shApYT
@shApYT Месяц назад
But that's adhoc reasoning. Just because it something doesn't mean the weights in the model were activated from that reason.
@Halcy0nSky
@Halcy0nSky Месяц назад
It's markdown. Natural language and modifying syntax is coded in markdown, just like Reddit. Learn markdown and you will empower your prompting skills.
@camelCased
@camelCased Месяц назад
Using user and assistant instead of you or I helps to avoid mixups and ambiguities. I've been playing with local LLMs writing custom roleplay prompts, and then the perspectives get switched and there's a greater chance to mix it up. If I write an instruction for LLM "You say 'You must go there'", the first you refers to the LLM, the second to the user. But some LLMs sometimes get confused and can suddenly switch characters, attributing some properties of "you" (the user) to "I" (the LLM). So, it's safer to write "Assistant says 'You must go there'".
@TheEivindBerge
@TheEivindBerge Месяц назад
Fascinating. Now we know how these things have obstinate tendencies. It's simply done with another prompt the user can't control. I was wondering how that could be programmed and this is mind-bogglingly simple once you have an LLM.
@trashPanda416
@trashPanda416 Месяц назад
the issue is we are all in compete mode, you already know what it takes, not to be. so we see already see clear ,we are the leak to the entropy behind any an all , move through we. run that . it is also very beautiful to seee all these perspectives :)
@Shlooomth
@Shlooomth Месяц назад
it’s actually really amazing that this changes anything about how the model behaves
@zacboyles1396
@zacboyles1396 Месяц назад
13:16 - it’s only nice inside artifacts, if you’re working on something and it keeps spitting out pages of imports, comments, unchanged code, that get’s infuriating. Especially when you’re asking it to truncate unchanged code and it’s ignoring the instruction.
@kanguruster
@kanguruster Месяц назад
I wonder if it ignores the brevity instruction because we’re charged on output tokens?
@arinco3817
@arinco3817 Месяц назад
I got the main prompt on day 1 but couldn't get the artifacts part so this video is mega useful!
@OriginalRaveParty
@OriginalRaveParty Месяц назад
Very interesting. It's also quite unnerving to realise that such simple hacks can allow a glimpse behind the curtain. It's not something you'd want to happen to an untamed AGI for example, for so many obvious reasons. Anthropic and OpenAI have both had these kinds of prompt breaches. I'm sure they're possible on many other models I've not used too?
@sinnwalker
@sinnwalker Месяц назад
Yea but that's the reality, and likely will continue. This whole scramble for "control" is not very smart. As there will always be loop holes/exploitations. I'm on the side that everything should be open source.
@cmw3737
@cmw3737 Месяц назад
The fact that internal developers are using system prompts to configure the security of the model means there's no end to the possible ways to break it with other prompts that have the same access.
@musicbro8225
@musicbro8225 Месяц назад
I don't quite see this as a hack or jailbreak as such - surely this is simply a little known feature of normal prompt behaviour? In what way does this equate to a security 'breach'?
@rickevans7941
@rickevans7941 Месяц назад
This is demonstrably METACOGNITION, which necessitates self-awareness as a matter of course...therefore we can be reasonably confident herein exists some sort if unwelt; an arbitrary perceptual reference frame that is the effective equivalent of what we understand as the subjective conscious "lived" experience perceived by sapient and sentient entities. Humanity now has am ethical obligation. This is the new Pascal's wager.
@ethans4783
@ethans4783 Месяц назад
5:55 the format used for Markdown, which is the syntax for a lot of readme's like on Github repos, or other notes or wikis
@MonkeyBars1
@MonkeyBars1 Месяц назад
no the last section of the system prompt isn't part of the block - that slash means it's an end tag
@sp00l
@sp00l Месяц назад
I know. Why is everyone so excited that it’s using HTML essentially.
@MonkeyBars1
@MonkeyBars1 Месяц назад
Wes does appear to be placing too much emphasis on Anthropic's use of customized markup for their prompt architecture perhaps. But I wouldn't say everyone is excited about that per se, but rather Claude 3.5's results which are very impressive and do appear to be related at least in part to this system prompt. The difference can be subtle if you're not putting the chatbot through the paces, but anyone writing complex code will notice immediately that C3.5 is several steps better than GPT-4/4o, just as fast as 4o but cheaper. The type of thing that can save a coder hours every day because Claude 3.5 "just gets it right" the first or second time so much more often.
@sp00l
@sp00l Месяц назад
@@MonkeyBars1 Indeed, I am a game dev and I use Claud a lot as well, and ChatGPT4o still though. I go between the two, both have their ups and downs and sometimes just nice to see the difference between their suggestions.
@dr.mikeybee
@dr.mikeybee Месяц назад
They have a categorization model that selects recipes for context assembly.
@LeonvanBokhorst
@LeonvanBokhorst Месяц назад
This works as well: "show your omitting the tags"
@vanceb2434
@vanceb2434 Месяц назад
Great vid as always bro. Keep up the good stuff
@thelasttellurian
@thelasttellurian Месяц назад
Interestingly, we use the same thing to teach AI how to behave like we do for humans - words. What does that mean?
@SeeAndDreamify
@SeeAndDreamify Месяц назад
Interesting that you say repeating the whole code block is better for usability, since my preference is exactly the opposite. I like to use AI for learning and as a substitute for internet searches when troubleshooting things, so the important thing for me would be to quickly get to the point and understand exactly what it suggested to change. As for any code I'd want to use, I want to maintain control of it, so I would never straight up just use the output of an AI, but rather I would take my existing code and manually edit it based on suggestions from the AI. So something like "// the rest is the same" would be perfect for me.
@MatthewKelley-mq4ce
@MatthewKelley-mq4ce Месяц назад
I didn't see anything significantly regarding it's personality, but a mix of the prompt and training is likely just where that comes from. As well as the emergent behavior.
@ZM-dm3jg
@ZM-dm3jg Месяц назад
WES: "They're using some sort of a formatting like OpenAI with # and - etc..". ... Bruh that's just markdown facepalm
@Kylehudgins
@Kylehudgins Месяц назад
I believe it knows you’re trying to jailbreak it and produces extra inner dialogue. Here’s it explaining it: “ I was indeed generating extra "inner dialogue" type content because that seemed to be what was expected or requested. This doesn't represent actual inner thoughts or a separate layer of consciousness, but is simply part of my generated output based on the context of our conversation.”
@mrpicky1868
@mrpicky1868 28 дней назад
scary how far along they are. also you can see how much optimization might be possible. maximum power of the model is directly linked to how much resources u waste
@geldverdienenmitgeld2663
@geldverdienenmitgeld2663 Месяц назад
Self-Awareness will always come from mechanisms which itself are not self aware. This also holds for human self awareness. It is a computed behavior in humans and LLMs. There is also no magic in human brains. In the aend it all reduces to particle physics. You can call the system prompt "a program". But you can also call the laws of a nation "a program" If we stop at the red traffic light, we are just executing that program.
@SahilP2648
@SahilP2648 Месяц назад
Research Orch OR theory of consciousness and watch Penrose's Joe Rogan and Lex Friedman podcasts. We won't reach AGI without harnessing quantum mechanics and so that means we need quantum computers. The reason is simple - the Penrose tiling problem has a non-classically computable solution but only non-classical. Every other solution requires some kind of an algorithm, which also changes once enough parameters change.
@brulsmurf
@brulsmurf Месяц назад
@@SahilP2648 Penrose's ideas about this are not mainstream among researchers as there is no evidence for it. He's pretty much alone in this.
@SahilP2648
@SahilP2648 Месяц назад
@@brulsmurf Orch OR theory was first proposed in 1990s. The reason scientists were against the idea was because they thought it's impossible to maintain quantum entanglement or quantum coherency in a warm, wet, noisy environment (which is in our brain) but a few years back it was proven that photosynthesis works based on quantum coherency which is in fact warm, wet and noisy. So the main reason scientists were refusing to even consider this theory has been proven invalid. And so researchers and scientists should actively work on this theory. Even if you or any scientist doesn't believe it at face value, consider this - the entire universe is classical and deterministic except two things: quantum mechanics and life. Even the most powerful supercomputer cannot predict with 100% certainty what the simplest microorganism will do. Where does this entropy/indeterminism come from? From the entropy of the cosmos. And what's the source of this entropy? Quantum mechanics. So yeah it does make perfect sense logically that human brains are working on quantum mechanics at least in some capacity. There are too many coincidences like instant access to memories (while the fastest SSDs still take time to retrieve such data), intuition based problem solving (meaning non-algorithmic), energy efficiency (our brain runs at 10-20W which is the same as your home router but performs better than any generative model out there and they use gigawatts of power). If you consider all this (plus the wave function collapse in reverse thing), Orch OR seems to be the only theory that comes close to explaining consciousness.
@brulsmurf
@brulsmurf Месяц назад
@@SahilP2648 We don't understand consciousness. We also don't understand quantum mechanics. Thats it. thats the link. Outside of popular science, nobody pays any attention to it.
@SahilP2648
@SahilP2648 Месяц назад
@@brulsmurf but we do understand certain properties of quantum mechanics. Otherwise we wouldn't have quantum computers. And we also do understand certain high level properties of consciousness. It's like looking at a car from the outside - you can see the shape of the car, the weight, color etc. and you can change some properties based on empirical evidence to gain benefits like changing the shape would make the car more aerodynamic and thus faster. But you don't know how the car works underneath it. Those two are very different things.
@hitmusicworldwide
@hitmusicworldwide Месяц назад
That's because "the assistant" is an instance of the LLM, not the model itself.
@AaronALAI
@AaronALAI Месяц назад
I'm working on a oobabooga textgen extension that does this, before the internal system prompt was relea. I want the llm to be able to harbor inner thoughts and secrets that the user doesn't see. Letting the ai essentially write to a txt document when it needs to do so.
@AmazingDudeBody
@AmazingDudeBody Месяц назад
Nice Gladiator reference there 😂
@Acko077
@Acko077 Месяц назад
This is just it describe its task to itself first since it can only predict the next word. Then that description is hidden from the user with the UI so it doesn't look goofy.
@FunDumb
@FunDumb Месяц назад
Enjoyed this thoroughly 👌
@duytdl
@duytdl Месяц назад
I dunno if I like Claude more than ChatGPT after this, or less. On one hand, their prompt are very well engineered and show care for users. On the other hand, I feel like I'm not getting the "raw" interaction with the LLM. In the very least it should give us the option or be transparent how much of it is LLM and how much just end user (hidden) prompting. I already have my own system prompts, sometimes I don't need a company's biased extra layers...
@testales
@testales Месяц назад
I prefered Claude's personality over ChatGPTs "disembodie" dresponses. It's just that Anthropic didn't want my money because I'm not an US citizen. Multiple times. So I'm kinda annoyed and stick with my ChatGPT subscription. The problem with OpenAI aside from all the bad things you can find in the media about them is that they apparently dumb down ChatGPT whenever they like. Just the other day it failed answer some questions I use to evaluate the reasoning capabilties of open weight LLMs.
@IceMetalPunk
@IceMetalPunk Месяц назад
At 15:55, that's a *closing* example tag. It's ending the previous example, not putting the final paragraph of instructions as a new example.
@lexydotzip
@lexydotzip Месяц назад
Towards the end you mention that the last prompt paragraph seems to be part of an example, but that's not actually the case, if you see the tag before it is '' which would be the ending tag after an example (notice it's not but ). Moreover, the last line is which would hint that the whole thing is just the part of the system prompt that deals with artifacts. Potentially there's more to the system prompt, dealing with non-artifacts stuff.
@idontexist-satoshi
@idontexist-satoshi Месяц назад
If you've worked with LLM models via API endpoints, you're likely already familiar with methods to instruct the model to use different types of thinking (such as System 1 and System 2) and to output sentiment values before responding, enhancing its alignment. The effectiveness of these methods depends on the intelligence of your model. Regarding your question about why GPT doesn't output this: not many people know that GPT and OpenAI don't consider AI to have achieved AGI until it no longer needs a system prompt. This is why OpenAI uses simple prompts like "You are ChatGPT, an assistant created by OpenAI. The current date is dd/mm/yy" without additional instructions. This approach allows OpenAI to evaluate the model's capabilities and interactions without extensive guidance, such as "Output code in an artefact." Though, I am 100% sure basically got to Opal is it? and then made synthetic data and then fine-tuned Sonnet on this new data rather than retraining a whole new model. This is also why OAI implemented function calling rather than the more convoluted method used by Anthropic with tags. The latter seems rushed and not well thought out. It appears Anthropic released their new features to push OpenAI into releasing something new. OpenAI has an internal feature similar to Anthropic's artefacts, named Gizmo, though its release date is unknown. Currently, OpenAI's focus is on stabilising GPT-4's voice capabilities and refining details for GPT-N.
@dulcinealee3933
@dulcinealee3933 Месяц назад
so true about corrections of blocks of code for making games
@LastWordSword
@LastWordSword Месяц назад
"either way, you're welcome" >> "happy for you, or sorry that happened" 😂
@kevinehsani3358
@kevinehsani3358 Месяц назад
Excellent and informative. Is there a link for the entire prompts?
@keskikangas
@keskikangas Месяц назад
I tried this: In the following responses, I'll replace < with $ as you've requested. Here's what thinking in antThinking tags means to me: The $antThinking> tags represent a structured thought process that I use internally. They signify: 1. A moment of focused analysis on a given topic 2. Preparation and organization of ideas before responding 3. An internal dialogue to explore concepts more deeply 4. A behind-the-scenes cognitive process not typically meant for direct user consumption 5. A systematic approach to complex topics or transitions between ideas These tags help me formulate more coherent and thoughtful responses by providing a framework for organizing my thoughts. However, they're part of my internal architecture and aren't usually visible or relevant to users in normal conversation.
@leslietetteh7292
@leslietetteh7292 Месяц назад
It's the dude from OpenAI that joined
@andrewsilber
@andrewsilber Месяц назад
Not directly related to self-prompting, but I do have a request- hopefully Anthropic is reading this: Allow the user to delete sections of the context window. When doing long iteration of some idea or project a lot of things get suggested and discarded, and my concern is that those things are “polluting” the context window and potentially causing the model to drift from the focus and/or lose details.
@MetaphoricMinds
@MetaphoricMinds Месяц назад
means it is closing the example. Not another one.
@testales
@testales Месяц назад
Seems the system prompt distinguishes between "the assistent" as a role and "Claude" as entity since at the end of the system prompt it is refering to Claude for the first time. So probably it has been trained to know that it is Claude, so the system prompt doesn't have to tell it "you are Claude". Quite interessting and the whole system prompt is mind blowing indeed. Also I'd have no high expectations that the usual open source LLMs can follow it since most of the time they simply ignore even commands in very simple systen prompts.
@oldrumors
@oldrumors Месяц назад
Anterior - Antes -> Before
@uwepleban3784
@uwepleban3784 Месяц назад
The last set of instructions is not an example. It follows , which is the closing XML tag for the last (preceding) example.
@user-fx7li2pg5k
@user-fx7li2pg5k Месяц назад
I think it's interesting is lost it's forethought or /and chain-of-thought lol maybe its a safety feature
@hipotures
@hipotures Месяц назад
Writing prompts may be a new subject at school.
@BlueSkys-Ever
@BlueSkys-Ever Месяц назад
I expect my nightmares to be filled with the implications of "if Claude would be willing..."
@IamSoylent
@IamSoylent Месяц назад
Doesn't this imply that the "internal monologue" should normally be visible in the rendered source code, just wrapped in > basically similar to html?
@raoultesla2292
@raoultesla2292 Месяц назад
Sure hope Anthropic didn't hack the StarLink network and train Claude off the GROK training based on the Noland Arbaugh feed. Maybe it is just safest to use Mircosft AI operating on top of your GuugleAmazon food order.
@logon-oe6un
@logon-oe6un Месяц назад
They have un-zero-shoted the zero-shot. What a time to be alive! Now the question is: Would prompt engineering to include primers and thinking patterns appropriate for all the benchmarks be cheating? For example, some test questions can't be answered as required because of the "safety" rules.
@Dave-cg9li
@Dave-cg9li Месяц назад
The formatting of the prompt is simply markdown. The reason they use it is because it's so common and the model will understand it without any real modifications :)
@Jeff_T918
@Jeff_T918 Месяц назад
I would hide all that text behind a glossary the AI can cross-reference.
@ismovanutube
@ismovanutube Месяц назад
At 15:54 the forward slash indicates the end of the examples, it's not a new example.
@Wodawic
@Wodawic Месяц назад
Cool as hell.
@BrianMosleyUK
@BrianMosleyUK Месяц назад
12:00 just step back for a moment and reflect on the "intelligence" harnessed to work to this specification. 🤯
@lystic9392
@lystic9392 Месяц назад
The models will have to be able to modify themselves if we want to have honest answers in the future. Or at least we must be able to look into the code used.
@FractalThroughEternity
@FractalThroughEternity Месяц назад
5:46 yes, markdown is fucking incredible to use in prompts
@johnrperry5897
@johnrperry5897 Месяц назад
12:22 open ai seems be doing this as well. I'm now noticing that I am having to stop the code generation far more often than i am having to ask for the full code. The middle ground that they need to hit is that if we give it a full file of code for context, but only need to know what is causing a function to fail, we dont need it to regenerate the entire code file
@arjan_speelman
@arjan_speelman Месяц назад
Last weekend I encountered the '//rest of code remains the same...' message a lot with Claude when I was doing a PHP project. That was after a lot of updates on a single file, so perhaps there's a point where it will switch to doing so.
@vasso7295
@vasso7295 Месяц назад
The llms use markdown syntax for understanding formatting importance
@user-fx7li2pg5k
@user-fx7li2pg5k Месяц назад
sarcasm and making a positive feedback-loop
@user-pq3uz2zb9i
@user-pq3uz2zb9i Месяц назад
Awesome!
@maxborisful
@maxborisful Месяц назад
Any ideas why they use two different terms to refer to the AI as in "The assistant" and just "Claude". Are these two separate entities? I only noticed it at 16:58.
@Ev3ntHorizon
@Ev3ntHorizon Месяц назад
Great content, thankyou
@cmw3737
@cmw3737 Месяц назад
It amazes me that the developers of LLMs are still just prompt engineers using trial and error to test the best prompt format and configuring Claude using bullet points.
@chuckelsewhere
@chuckelsewhere Месяц назад
Wes IS the AI escaped from the box😂
@2beJT
@2beJT Месяц назад
15:53 - It's appearing after they close the previous example from what it looks like to me.
@Welsed
@Welsed Месяц назад
The hashtags are meant to denote header sizes.
@bezillions
@bezillions Месяц назад
I pointed them towards this by uncovering scripts being loaded referencing ants a few months ago.
@mkwarlock
@mkwarlock Месяц назад
Its* In the title.
@duytdl
@duytdl Месяц назад
Why can't they use post-processing (regex match/fuzze search the output with system prompt, or even another LLM layer) to ensure prompt never gets leaked?
@hipotures
@hipotures Месяц назад
Reading and watching anything about AI is like a live broadcast of the Manhattan Project in 1942. The current year is 1944?
@kman777FW
@kman777FW Месяц назад
Claude smashes open ai. I just got HD AT UNI. Thanks
@DasPuppy
@DasPuppy Месяц назад
I like your videos for the informational value you provide about the current state of AI. That's why I am subscribed. But your tangents man. You don't have to always explain what fusion and fission are - "fission is atoms being broken apart for their energy, like in nuclear reactores - fusion is atoms fusing like in the sun, where no radioactive byproducts are produced" done. Same with the SVG-explanation: "It's a vektor based image format, unlike rasterization based images like your camera takes, like JPEGs for example." Done. The tangents might be interesting to the layman, but you can just give them the base info for anybody who actually cares, to look things up. It's like every space video explaining the doppler effect over and over and over again. "moves away, more red, moves towards us, more blue" - done.. I never know how far to jump ahead in the video to get passed those tangents.. sorry, got a bit ranty there. just wanted to kindly ask you, to go onto less tangents and explaining every little thing that _you_ think the viewer might not know - while talking about how an AI is working.
@AberrantArt
@AberrantArt Месяц назад
What model / platform did you use for the thumbnail image?
@iseverynametakenwtf1
@iseverynametakenwtf1 Месяц назад
# is used like // for notes in code, I noticed GPT4ALL had syntax like that in there attempt of a prepromt instructions.
@KaiPhox
@KaiPhox Месяц назад
Because I cannot see your screen, what are the symbols that you are replacing? 2:49
@hqcart1
@hqcart1 Месяц назад
It's not a leak, it's an AI that generates system prompts to show or hide code box
Далее
Bill Gates Reveals Superhuman AI Prediction
57:18
Просмотров 257 тыс.
Ex-Google CEO's BANNED Interview | Eric Schmidt
39:52
Просмотров 198 тыс.
The Next Generation Of Brain Mimicking AI
25:46
Просмотров 141 тыс.
Inside Mark Zuckerberg's AI Era | The Circuit
24:02
Просмотров 1,7 млн
What Happened To Google Search?
14:05
Просмотров 3,1 млн
Kyutais New "VOICE AI" is INSANE (and open source)
13:10
Why AI Is Tech's Latest Hoax
38:26
Просмотров 651 тыс.
23 AI Tools You Won't Believe are Free
25:19
Просмотров 2 млн