
Anthropic's Meta Prompt: A Must-try! 

Sam Witteveen
59K subscribers · 86K views

Anthropic's Metaprompt helps you create better prompts.
MetaPrompt Colab: drp.li/8Odmv
Metaprompt Colab (original): colab.research.google.com/dri...
Anthropic Prompt Guide: docs.anthropic.com/claude/doc...
Anthropic Cookbook: github.com/anthropics/anthrop...
🕵️ Interested in building LLM Agents? Fill out the form below
Building LLM Agents Form: drp.li/dIMes
👨‍💻Github:
github.com/samwit/langchain-t... (updated)
github.com/samwit/llm-tutorials
⏱️Time Stamps:
00:00 Intro
00:14 Anthropic Prompt Library
00:25 OpenAI Cookbook
01:22 Anthropic Prompt Library: Website Wizard Prompt
01:36 Anthropic Cookbook
01:55 Anthropic Helper Metaprompt Docs
02:55 Code Time
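A minimal Python sketch of the workflow the Colab demonstrates: fill a metaprompt template with your task, send it to Claude, and pull the generated prompt out of the XML tags in the reply. The template text, the `<Instructions>` tag name, and the model id below are illustrative placeholders, not the notebook's actual contents.

```python
import re

# Placeholder; the real metaprompt in the notebook is far longer and more detailed.
METAPROMPT_TEMPLATE = (
    "Today you will be writing instructions for an eager but inexperienced "
    "AI assistant. Here is the task: <Task>{task}</Task>\n"
    "Write the instructions inside <Instructions> tags."
)

def build_metaprompt(task: str) -> str:
    """Fill the metaprompt template with the user's task description."""
    return METAPROMPT_TEMPLATE.format(task=task)

def extract_tag(text: str, tag: str) -> str:
    """Pull the content of the first <tag>...</tag> block from a model reply."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    if match is None:
        raise ValueError(f"no <{tag}> block found")
    return match.group(1).strip()

# Calling Claude (requires the `anthropic` package and an API key):
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-3-opus-20240229",  # assumed model id
#     max_tokens=4096,
#     messages=[{"role": "user",
#                "content": build_metaprompt("Act as a travel agent")}],
# )
# generated_prompt = extract_tag(reply.content[0].text, "Instructions")
```

The generated prompt is then what you actually use in production; the long metaprompt is only paid for once, at prompt-authoring time.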

Science

Published: 15 Jun 2024

Comments: 113
@icandreamstream · 3 months ago
Perfect timing, I was experimenting with this myself yesterday. But this is a much more in-depth take; I'll have to check it out.
@samwitteveenai · 3 months ago
Glad it was useful.
@erniea5843 · 2 months ago
Appreciate you bringing attention to this. Great walkthrough.
@IvarDaigon · 3 months ago
One use case might be using a larger LLM to create system prompts for a smaller, faster model, so it can better follow instructions and collect information before that information is summarized and passed back to the larger model to formalize. For example, model A instructs model B on how to interview the customer and collect the required information, which then gets passed back to model A to fill out and submit an online form. This approach would be faster and cheaper than having model A do all the work, because A-tier models are often 10x the cost of B-tier models. This kind of system would work really well when collecting information via email, instant message, or over the phone.
@keithprice3369 · 3 months ago
Interesting. I was envisioning the opposite: use Haiku to generate the prompt and pass that to either Sonnet or Opus. Worth experimenting with both, I think.
@davidw8668 · 3 months ago
DSPy pipelines do indeed work kind of that way: you use larger models at the beginning to optimize prompts and fine-tune automatically, create data with larger models to train smaller models, then run the decomposed tasks on smaller models and evaluate the pipeline using larger models.
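The two-tier pipeline described in this thread can be sketched with stubbed model calls. The function and parameter names here are assumptions for illustration; in practice the callables would wrap real API clients for an expensive and a cheap model.

```python
from typing import Callable

def two_tier_pipeline(
    big_model: Callable[[str], str],
    small_model: Callable[[str, str], str],
    task: str,
    user_input: str,
) -> str:
    """Use the expensive model once to author a system prompt, then run the
    cheap model with that prompt on every incoming request."""
    # One-time, amortized cost: the big model writes the instructions.
    system_prompt = big_model(f"Write a system prompt for this task: {task}")
    # Per-request cost: the small model follows those instructions.
    return small_model(system_prompt, user_input)
```

In the pattern from the comment above, the small model's collected answers would then be passed back to the big model for the final form-filling step.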
@Taskade · 3 months ago
Thanks for sharing this insightful video! It's great to see how Anthropic's Metaprompt can really enhance prompt quality and model interactions. Looking forward to experimenting with it myself. Keep up the awesome work! 😊👍
@levi2408 · 2 months ago
This comment has to be AI-generated.
@MartinBroadhurst · 2 months ago
@levi2408 Thanks for sharing your interesting insights into fake AI-generated comments. I can't wait to learn more about AI-generated spam. 😉
@CybermindForge · 3 months ago
This is the truth. It is just semantically and syntactically difficult to adjust to them all. If you add video and audio generation it gets 😅 Great video!
@Stewz66 · 3 months ago
Very helpful, constructive, and practical information. Thank you!
2 months ago
Really interesting. This will help with having hyper-specialized agents, when we know that swarms of these are the future of AI, at least for the coming months... Thank you, Sam.
@polysopher · 3 months ago
Totally what I think the future will be!
@AnimusOG · 3 months ago
My favorite AI guy, thanks again for your content, bro. I hope I get to meet you one day.
@ShawnThuris · 3 months ago
Very interesting, there must be so many use cases for this. (Minor point: in August the time is PDT rather than PST.)
@alextrebek5237 · 2 months ago
Criminally underrated channel.
@WhySoBroke · 3 months ago
Great tutorial, nicely explained. I believe you assume a certain level of knowledge from the viewer, though. For a beginner: where do we enter the Claude API key? That's just one example of something you assume the viewer already knows. Maybe link to a basic video explaining it, so it isn't redundant here?
@matthewtschetter1953 · 3 months ago
Sam, helpful as always, thank you! How do you think these prompting cookbooks could help agents perform tasks?
@MattJonesYT · 3 months ago
Whenever I do A/B testing between ChatGPT and the free Claude model, I end up choosing ChatGPT, mainly because Claude tends to hallucinate in authoritative-sounding ways, whereas if ChatGPT doesn't understand something it is more likely to admit it (but not always). For instance, today I told Claude I added PEA to a gas engine, and it assumed I was using biodiesel and proceeded to give a long chat about that. ChatGPT understood that PEA is polyetheramine, used for cleaning gas systems. So it's hard for me to take Claude seriously as yet.
@aleksh6329 · 3 months ago
There is also a framework called DSPy by Omar Khattab that attempts to remove prompt engineering, and it works with any LLM!
@micbab-vg2mu · 3 months ago
At the moment I use personalized prompts, a different one for every task; the output quality is much higher. :)
@alchemication · 3 months ago
Interesting, just when I concluded that overly long prompts are not good and usually a symptom of trying to cram in too much info ;D (depending on the situation, obviously). Nevertheless, the concept is nice, and indeed some of us have used it for a long time for prompt development and optimization. ;)
@TheRealHassan789 · 3 months ago
Idea: use RAG to grab the closest prompts from GitHub repos to inject into the metaprompt notebook... This would probably give even better results?
@samwitteveenai · 3 months ago
There is some really nice research, and some applications, on using RAG to choose the best exemplars/examples for few-shot learning. What you are saying certainly makes sense.
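A toy sketch of that exemplar-selection idea: rank stored prompts by cosine similarity to the query embedding and inject the top matches as few-shot examples. The embedding step itself is omitted here; the vectors and function names are assumptions for illustration.

```python
import math
from typing import Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k_exemplars(query_vec, exemplars, k=2):
    """exemplars: list of (embedding, prompt_text) pairs.
    Returns the k prompt texts closest to the query embedding."""
    ranked = sorted(exemplars, key=lambda e: cosine(query_vec, e[0]), reverse=True)
    return [text for _, text in ranked[:k]]
```

In a real pipeline the selected texts would be concatenated into the metaprompt (or few-shot section) before the model call; a vector store would replace the linear scan at scale.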
@odw32 · 2 months ago
Looking at how we work together with humans, it would make a lot more sense if the prompting process could be split up: first you chat with and onboard the model on the right task and context, it asks for clarification where needed, then it executes the task and asks for a review, reflecting back on what it did. Especially the "ask for clarification", "admit lack of knowledge", and "request feedback" parts aren't a default part of the commercial tools yet. Luckily, things like meta-prompting and LangChain agents all seem to converge in that direction, like little pieces of the puzzle.
@damien2198 · 3 months ago
I use a GPT-based Claude 3 prompt optimizer (it loads the Claude prompt documentation / cookbook into ChatGPT).
@ivancartago7944 · 2 months ago
This is awesome.
@joser100 · 3 months ago
This looks great, but it's very specific to Anthropic models, no? Isn't that what we are after with programmatic tools such as DSPy, for example: reaching the same goal but more "generically"? (Similar with Instructor, only that one is more focused on formatting, I think.)
@SonGoku-pc7jl · 1 month ago
Cool! :) Thanks!
@mikeplockhart · 3 months ago
Thanks for the content. With meta-prompting in mind, do you think something like DSPy is a more programmatic alternative, or are they doing similar things under the hood? And if you're looking for video ideas... 😊
@samwitteveenai · 3 months ago
I have been playing with DSPy a fair bit, with some interesting results. It is quite different from Metaprompt but has some very interesting ideas.
@manikyaaditya1216 · 1 month ago
Hey, this is insightful, thanks for sharing it. The Colab link seems to be broken; can you please share the updated one?
@TheBlackClockOfTime · 3 months ago
I thought my screen was scratched there for a second, until I realized it's the grey outlines of the human-head logo.
@hosst · 2 months ago
Wonderful.
@codelucky · 3 months ago
Thank you, Sam. I'm excited to dive deep into Metaprompt and learn how to create comprehensive prompts with precise instructions that produce the desired outcomes for users. Can you suggest a course or resource to help me get started on this journey?
@ronnieleon7857 · 3 months ago
Hello Sam. I hope you're holding up well. In one video you talked about open-sourcing a web-scraping tool. Did you open-source the project? I'd like to contribute to a tool that automates web scraping.
@mushinart · 2 months ago
Cool video, sir... has LangChain optimized it for Claude 3 yet?
@samwitteveenai · 2 months ago
Yeah, you can use LangChain with Claude 3, no problems.
@amandamate9117 · 3 months ago
I love this.
@Koryogden · 2 months ago
META-PROMPTING!!!! YES!!!! This topic excites me!
@user-lb2gu7ih5e · 2 months ago
By YouSum:
00:01:58 Utilize Metaprompt for precise AI instructions.
00:08:49 Metaprompt aids in generating detailed and specific prompts.
00:10:42 Metaprompt ensures polite, professional, and customized AI responses.
00:11:55 Experiment with Metaprompt for tailored AI agent interactions.
@LucianThorr · 2 months ago
What is the "typical" UX for developers using these LLMs on a daily basis? Is it all in notebooks, like in the video? Browser-based conversations? IDE integrations? And especially if IDE: how do you keep your proprietary code from being scanned and leaking back up to these companies?
@LaHoraMaker · 3 months ago
Maybe this is the work of the famous Prompt Engineer & Librarian position at Anthropic, with a base salary of 250-375k USD. :D
@benjaminanderson1014 · 2 months ago
So we've successfully outsourced AI prompt engineering to AI? Cool.
@samwitteveenai · 2 months ago
For some things. It certainly helps.
@jarosawburyk893 · 2 months ago
I wonder how Claude-specific it is. Would it generate good prompts for OpenAI GPT-4?
@samwitteveenai · 2 months ago
Certainly worth a try. You can alter it and also run it on OpenAI or Gemini etc. as well.
@MiraMamaSinCodigo · 2 months ago
Metaprompt feels like a RAG of all models.
@nicdemai · 3 months ago
A metaprompt feature is currently in Gemini Advanced, but it's not released yet, and it's not as detailed as this.
@Walczyk · 2 months ago
You mean how it writes out a list first?
@nicdemai · 2 months ago
@Walczyk No, I mean when you write a prompt: before it goes to the Ultra model, another model tries to modify it to make it longer and more detailed, with concise instructions, before sending it on.
@Walczyk · 2 months ago
@nicdemai Oh, I see. I had a feeling it was doing that, because it would read out this clean structure of what it would do; I could see it had received that as its prompt.
@harigovind511 · 3 months ago
I am part of a team building a GenAI-powered analytics tool, and we still use a combination of GPT-3.5 and GPT-4. Don't get me wrong, Claude is good, especially Sonnet; the price-to-performance ratio is just out of this world. I guess we are just primarily impressed by OpenAI's track record of releasing quality models that are significantly better than the previous version under the same API umbrella.
@KeiS14 · 3 months ago
This is cool and all, but what is 4? 1:03
@armankarambakhsh4456 · 3 months ago
Won't the metaprompt work if I just copy it into the main Claude interface itself?
@samwitteveenai · 2 months ago
Yes, you can certainly do something similar like that.
@gvi341984 · 2 months ago
Claude's free mode only lasts a few messages until the paywall.
@fburton8 · 2 months ago
Probably a silly question, but why is the "and" at 4:24 blue?
@einmalderfelixbitte · 2 months ago
I am sure that is because 'and' is normally used as an operator in boolean expressions (like 'or'). So the editor (wrongly) highlights it everywhere, even when it is not used as a logical operator.
@xitcix8360 · 3 months ago
I want to cook the soup.
@mikeyai9227 · 2 months ago
What happens when your output has XML in it?
@samwitteveenai · 2 months ago
You can parse it very easily.
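One simple way to handle a reply whose payload may itself contain XML-like tags is to slice between the first opening tag and the last closing tag, so inner occurrences of the same tag survive intact. This is a sketch, not the video's actual parsing code; the "answer" tag name is an assumption.

```python
def extract_outer(text: str, tag: str) -> str:
    """Return everything between the first <tag> and the last </tag>,
    keeping any nested occurrences of the same tag intact."""
    start = text.index(f"<{tag}>") + len(tag) + 2  # skip past "<tag>"
    end = text.rindex(f"</{tag}>")                 # last closing tag
    return text[start:end].strip()
```

Both `index` and `rindex` raise `ValueError` if the tag is missing, which doubles as a cheap check that the model followed the output format.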
@JC-jz6rx · 2 months ago
The problem is cost. Imagine sending that prompt on every API call.
@samwitteveenai · 2 months ago
You don't need to send this prompt each time; the idea is that it produces a much shorter prompt that you then use.
@heresmypersonalopinion · 2 months ago
I hate it when it tells me "I feel uncomfortable completing this task."
@AntonioVergine · 2 months ago
Am I the only one thinking that if I have to "fix my prompt or the AI won't understand," it means the AI simply is not good enough yet?
@samwitteveenai · 2 months ago
Kinda yes, kinda no. You could imagine you have to talk to different humans in different ways, etc.; models are like that. Of course, ideally we would like them to understand everything we want, but even humans aren't good at that.
@sbutler860 · 2 months ago
I asked Claude 3 questions: 1. Who won the FA Cup in 1955? 2. Which composer won the Academy Award in 1944, and for which film? 3. What was the date of the death of Queen Elizabeth II? Claude got all three questions WRONG. I asked the same of Google's Gemini, and it got all three CORRECT. I also asked the same questions of Windows' Copilot, and it also got all three correct, although it took its sweet time about it. Therefore, Claude may know how to metaprompt a travel agent, but it doesn't know its arse from its elbow about anything else. Long live Google Gemini! And Copilot! x
@llliiillliiilll404 · 2 months ago
00:00:35 - watching this guy type with two fingers is so painful
@ellielikesmath · 2 months ago
Prompt engineering still seems like such a dead end. If it requires each prompt to be unrolled into something with a lot of common-sense filler, why not add that as a filter to the LLM? So you feed in your prompt, some automated system makes reasonable guesses as to what filler to pack it with, and then you see what the LLM makes of it. The problem is that the user thinks all of the assumptions they make are obvious and shared by the LLM, and that's not always the case. I'd be interested to know if any LLM tries to predict sentences or clauses the user left out of their prompt, or, heaven forbid, asks the user questions about what they may have omitted or meant. This is but one way out of this nonsense, and I assume people are trying lots of approaches besides what I am suggesting.
@Koryogden · 2 months ago
What I was trying myself is a style/principles guide framework... It doesn't quite apply the principles, though it did qualitatively improve responses.
@JaredFarrer · 1 month ago
Yeah, but nobody wants to try to figure out how to word things so the model will respond; it's junk. I wouldn't pay for Gemini.
@TheMadridfan1 · 3 months ago
Sadly, OpenAI is acting worthless lately. I sure hope they release 5 soon.
@nickdisney3D · 3 months ago
After using Claude, GPT-4 is poop.
@tomwojcik7896 · 3 months ago
@nickdisney3D Really? In what areas did you find Claude (Opus, I assume) significantly better?
@rcingfever3882 · 3 months ago
@tomwojcik7896 I would say in conversation. Just as you see a difference between GPT-3.5 and GPT-4, where the latter simply understands better, the same is true between GPT-4 and Opus, not a lot but slightly. And when it comes to coding: to make a small change on my website I had to prompt maybe 10 times to make GPT-4 understand, but Opus got it the first time.
@nickdisney3D · 3 months ago
@tomwojcik7896 Everything. Except that it gets emotional and moody when you push it the right way. I had a chat with it today where it just refused to respond.
@motbus3 · 3 months ago
It is ridiculous that each AI model should be prompted in a different way.
@JaredFarrer · 1 month ago
OpenAI has really dropped the ball lately. Bad leadership.
@exmodule6323 · 2 months ago
Tighten up your delivery, man. I had to stop listening because your intro was too long.
@roc1761 · 2 months ago
Can't you take what you're given...
@cas54926 · 2 months ago
Someone might have attention-span issues 😂
@Koryogden · 2 months ago
????? What????
@bomarni · 2 months ago
1.75x speed is your friend.
@ronaldokun · 2 months ago
Be cool, man. Someone is teaching you something. You can increase the speed and give nice feedback. Everybody wins.
@filthyE · 2 months ago
Thoughts on ChatGPT-4 (via the official web UI) vs. Claude 3 Opus (via the official web UI) in March 2024, assuming a person can only afford one? (Hypothetical scenario, just wondering your thoughts.) Obviously API access through a terminal or a custom UI frontend to various models is ideal, but I'm wondering what you'd recommend to a layperson who could only choose between the web versions of these two services.
@onekycarscanners6002 · 2 months ago
Why would they not go directly to the site and input their prompt?
@onekycarscanners6002 · 2 months ago
You can create a web UI per prompt niche, control prices, and make it easier for the many who have no idea what a good prompt is.
@samwitteveenai · 2 months ago
I have both. I find myself using Claude more these past 2 weeks.