
Master Claude 3 Haiku - The Crash Course! 

Sam Witteveen
60K subscribers
19K views

Haiku Colab: drp.li/hCUVX - CODE NOTEBOOK
Prompt library: docs.anthropic.com/claude/pro...
Cook Book: github.com/anthropics/anthrop...
🕵️ Interested in building LLM Agents? Fill out the form below
Building LLM Agents Form: drp.li/dIMes
👨‍💻Github:
github.com/samwit/langchain-t... (updated)
github.com/samwit/llm-tutorials
⏱️Time Stamps:
00:00 Intro
00:11 Anthropic Blog: Claude Family of Models
00:38 Haiku Pricing Comparisons
02:28 LMSYS Chatbot Arena Leaderboard
03:33 Anthropic Prompt Engineering
04:46 Code Time
05:53 Coding: Basics with Text
07:21 Coding: Getting JSON
09:42 Coding: Exemplars
14:24 Coding: Multimodal-Images
15:39 Coding: Multimodal-With URL
16:43 Coding: Multimodal-Transcribing Handwriting
17:36 Coding: Multimodal-Counting Objects
19:51 Coding: Multimodal-OCR on the Organizational Chart
20:43 Coding: Multimodal-Profit and Loss Statement
21:55 Coding: Simple Examples with LangChain
22:41 Teaser: CrewAI+Claude 3 Haiku

Science

Published: Jun 25, 2024

Comments: 53
@robxmccarthy · 3 months ago
Haiku is the biggest release since GPT4 from a cost/performance perspective. Glad you got to dig into it, always enjoy your videos.
@samwitteveenai · 3 months ago
Thanks! I agree it really seems to open up a lot of opportunities.
@davidtindell950 · 1 month ago
Great overview and multimodal examples from the Anthropic Claude Cookbook using Haiku! I ran some of the examples multiple times with variations and the cost so far is less than US$1.00. We should definitely consider Haiku for personal and business apps where the trade-off between quality and cost must be balanced: e.g. summarizing a large volume of papers and documents, and creating and maintaining a large database of vector embeddings to support document Q&A.
@pokerandphilosophy8328 · 3 months ago
My main use of Claude 3 currently is to have extended philosophical discussions with it, discuss texts and have it help me rewrite my own papers and drafts. I often begin the conversation with Opus to maximise quality. But when the context gets longer, I sometimes switch to Sonnet or Haiku. Haiku very often surprises me with how smart it is. When its responses are informed by the longer context, including the prior responses from Opus, this serves as something similar to a many-shot prompting method with explicit examples and it boosts Haiku's intelligence. Furthermore, Haiku's slightly more unfocused or meandering intellect enables it to make relevant connections between various parts of the conversation that Opus often misses due to its more focussed attention to user instructions and strict adherence to prompt. As a result of that, Haiku's responses sometimes are more intelligent, insightful and (broad) context sensitive even if it is slightly more prone to error than its bigger siblings.
@nas8318 · 3 months ago
Its meandering intellect may be due to a higher temperature setting. You may want to look into that.
@pokerandphilosophy8328 · 3 months ago
@nas8318 I'm interfacing with all three Claude 3 models through the Anthropic Workbench with the temperature set to zero. So it's really something else that is at play.
@joflo5950 · 3 months ago
Thanks for the video! I would really love to see the follow-up video on function calling you mentioned.
@paulmiller591 · 3 months ago
Thanks, Sam. I had played with Haiku previously but have not done this optimised prompting. Jumping onto doing this now. Cheers.
@amandamate9117 · 2 months ago
Can't wait for the CrewAI + Haiku video! It would be nice to have a super-agent that uses Opus and small agents that only use Haiku.
@walterpark8824 · 3 months ago
What a great model for local use. Thanks for showing it so clearly.
@ehza · 3 months ago
Thank you for this. Quite helpful to me!
@samwitteveenai · 3 months ago
Glad it was helpful!
@AdamTwardoch · 2 months ago
It seems to me that Haiku is a distilled / sparsified / quantized Opus. When it works, it gives results that are quite similar to Opus, while Sonnet gives very different results, so it looks like it was trained independently. This is great: I often prep few-shot examples with Opus and then hand them over to Haiku for scale.
@aa-xn5hc · 2 months ago
Looking forward to your next video on CrewAI and Haiku
@alchemication · 3 months ago
So far I haven't been able to get anywhere with Haiku for any production-quality use case, but the idea of using many examples sounds promising. Will test it out. Thanks for the inspiration to try again 😊
@vivekpadman5248 · 3 months ago
Thanks for this
@kenchang3456 · 3 months ago
Cheaper works for me as I'm in the learning/experimenting stage. Looking forward to your Claude 3-based function calling video. Thanks for sharing.
@samwitteveenai · 3 months ago
Thanks Ken
@jayhu6075 · 2 months ago
What an amazing explanation of how to work with vision, XML, and other features in Haiku. Hopefully more in the future about the agents you mentioned with CrewAI. Many thanks.
@brandonwinston · 3 months ago
One of the big challenges I'm having is plugging Haiku into all the places OpenAI APIs are accepted.
@samwitteveenai · 2 months ago
Check out LiteLLM; you can use it as a proxy that takes OpenAI inputs and reroutes them.
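For anyone who wants to try that route, here is a minimal sketch of the LiteLLM drop-in approach. It assumes the litellm package is installed and ANTHROPIC_API_KEY is set; the exact model string may vary by litellm version.

```python
# Minimal sketch: LiteLLM as an OpenAI-compatible shim in front of Haiku.
# Assumes `pip install litellm` and a valid ANTHROPIC_API_KEY in the environment.
from litellm import completion

response = completion(
    model="claude-3-haiku-20240307",  # routed to Anthropic by LiteLLM
    messages=[{"role": "user", "content": "Write a haiku about cheap tokens."}],
    max_tokens=128,
)

# The response mirrors the OpenAI schema, so existing OpenAI-style code keeps working.
print(response.choices[0].message.content)
```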
@xemy1010 · 3 months ago
Haiku might be the perfect model to label / caption an image dataset at scale using natural language. DALL-E 3's paper makes it clear that generating detailed natural language captions for each image was a big part of the magic behind its ability to understand and follow prompts so well at inference. SD3 only used a 50:50 mix of CogVLM-generated captions and captions from the original images. I think a Haiku-captioned training dataset would be a big step up for training these models.
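As a rough illustration of that idea, here is a sketch of captioning an image with Haiku via the Anthropic Messages API; the prompt wording, file name, and JPEG media type are assumptions for the example.

```python
# Sketch: caption one image with Haiku via the Anthropic Messages API.
# Assumes `pip install anthropic`, ANTHROPIC_API_KEY set, and JPEG inputs.
import base64
import anthropic

client = anthropic.Anthropic()

def caption_image(path: str) -> str:
    # Encode the image and ask Haiku for a detailed natural-language caption.
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    msg = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/jpeg", "data": data}},
                {"type": "text",
                 "text": "Describe this image in one detailed paragraph, suitable as a training caption."},
            ],
        }],
    )
    return msg.content[0].text

print(caption_image("example.jpg"))  # loop this over a dataset directory to caption at scale
```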
@micbab-vg2mu · 3 months ago
great video - thank you
@silvacarl · 2 months ago
I look forward to every one of these videos. Can you do more LangChain or RAG examples with open-source LLMs?
@JD-hk7iw · 3 months ago
I had written off Haiku after testing my use case with it using the same prompt I use for Opus/GPT-4. Totally unusable. After watching this, I revised the wording and format of the system prompt and added three examples. Well, I'll be damned. Touché Haiku, touché. Not as nuanced and focused as Opus/GPT-4, but definitely serviceable. The combination of the 200K context window and the pricing really is what makes this model special. Thanks for the informative video showing the proper way to leverage Haiku.
@samwitteveenai · 3 months ago
This is awesome to hear! I have found that since the input tokens are so cheap, I have been using 20 examples for some things and getting really good results in changing style and tone too.
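A minimal sketch of that many-exemplar pattern with the Anthropic Python SDK is below; the classification task and the example pairs are made up purely to show the shape of the call.

```python
# Sketch: many-shot prompting Haiku by packing exemplar user/assistant turns
# into the messages list. Assumes `pip install anthropic` and ANTHROPIC_API_KEY.
import anthropic

client = anthropic.Anthropic()

# Hypothetical exemplars showing the exact output format we want back.
EXAMPLES = [
    {"role": "user", "content": "Review: 'Battery died after two days.'"},
    {"role": "assistant", "content": '{"sentiment": "negative", "topic": "battery"}'},
    {"role": "user", "content": "Review: 'Screen is gorgeous and setup was instant.'"},
    {"role": "assistant", "content": '{"sentiment": "positive", "topic": "display"}'},
]

def classify(review: str) -> str:
    msg = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=100,
        system="Classify product reviews. Reply with JSON only, matching the examples.",
        messages=EXAMPLES + [{"role": "user", "content": f"Review: '{review}'"}],
    )
    return msg.content[0].text

print(classify("Arrived scratched, but support replaced it quickly."))
```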
@EyadAiman · 2 months ago
Impressive tutorial as always, Sam. I suggest you make a tutorial on how to build a RAG system with agents using Claude 3 Haiku.
@EmadElazhary-tt8tl · 2 months ago
Thanks for the video! Please, please: function calling using Haiku in LangChain.
@carterjames199 · 3 months ago
Please go over function calling ASAP, really looking forward to it. From my tests Haiku is amazing with a few examples, but it still has some issues when I go upwards of 4 functions that can be called.
@UTubeGuyJK · 3 months ago
I hadn't heard of the XML tag prompting with Claude before.
@lancemarchetti8673 · 2 months ago
DBRX just launched their new model on Hugging Face.
@sauravmohanty3946 · 3 months ago
Can you share the link to the notebook you explained in the video?
@samwitteveenai · 3 months ago
It's the Colab in the video description.
@ShoomonPerry · 2 months ago
I want to use Claude Haiku to process large documents, but I end up running out of output tokens. Is there a simple hack (in LangChain?) to get a multipart response?
@samwitteveenai · 2 months ago
The problem is Claude will only output 4k tokens (pretty sure, from memory). This is where getting a system to do multiple calls can be really useful. In LangChain you can do it with MapReduce, but it can be a bit hit or miss. Another way is to write your own splitting and prompting and run it in parallel. Let me try to think of a good use case I can show and I will try to make a video about it. It is certainly an issue.
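A minimal sketch of the do-it-yourself splitting approach mentioned above, assuming the Anthropic Python SDK and a naive fixed-size character split (chunk size and prompts are illustrative only):

```python
# Sketch: map-reduce over a long document with Haiku, splitting by characters
# and summarizing chunks in parallel, then merging the partial summaries.
from concurrent.futures import ThreadPoolExecutor
import anthropic

client = anthropic.Anthropic()

def summarize(text: str) -> str:
    msg = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=1024,  # stay well under the ~4k output cap
        messages=[{"role": "user",
                   "content": f"Summarize the following text:\n\n{text}"}],
    )
    return msg.content[0].text

def summarize_long_document(text: str, chunk_chars: int = 20_000) -> str:
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(summarize, chunks))   # map step
    return summarize("\n\n".join(partials))            # reduce step
```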
@ShoomonPerry · 2 months ago
@samwitteveenai Thanks Sam. What if you chained the queries, passing the response of the first query into the context window of the second, and telling the LLM to pick up where it left off?
@jdallain · 3 months ago
I'm not sure if it's possible, but please use Haiku in the more advanced CrewAI video that you mentioned making.
@jdallain · 3 months ago
Commented before watching until the end 😊
@CookerSingh · 3 months ago
I think there is no function calling feature, and for now it can only be used in wrapper-based applications.
@carterjames199 · 3 months ago
There is function calling
@carterjames199 · 3 months ago
It's just not as mature as OpenAI's function calling.
@CookerSingh · 2 months ago
@carterjames199 Is there any way I can add function calling, or use available proxies out there for all LLMs?
@samwitteveenai · 2 months ago
The function calling is in a different format than OpenAI's; they use XML, which can be nested. For proxies etc. check out LiteLLM, but I don't think that will convert function calls yet.
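To make the XML idea concrete, here is a rough sketch of prompt-based tool calling with Haiku. The tag names are an ad-hoc convention invented for this example, not Anthropic's official schema; see their cookbook for the canonical format.

```python
# Sketch: XML-style tool prompting with Haiku. The <tool>/<function_call> tags
# below are an illustrative convention, not Anthropic's exact schema.
import re
import anthropic

client = anthropic.Anthropic()

SYSTEM = """You may call a tool by replying ONLY with:
<function_call><name>TOOL_NAME</name><arg name="ARG">VALUE</arg></function_call>

Available tools:
<tool>
  <name>get_weather</name>
  <description>Get the current weather for a city.</description>
  <parameter name="city">Name of the city</parameter>
</tool>"""

msg = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=300,
    system=SYSTEM,
    messages=[{"role": "user", "content": "What's the weather in Singapore?"}],
)
reply = msg.content[0].text

# Naive parse of the XML-ish call; a real app should use a proper XML parser.
name = re.search(r"<name>(.*?)</name>", reply)
if name:
    print("Model wants to call:", name.group(1))
```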
@RaitisPetrovs-nb9kz · 3 months ago
Out of curiosity I tested all 3 models; they got the dogs correct on the first shot without prompting.
@samwitteveenai · 3 months ago
😀 That's interesting. Maybe their example was planted to make it look wrong the first time? For me it came back with 8 when I tried their fancy prompt, but very quickly changed back to 9 with a few small changes.
@clray123 · 2 months ago
The "fancy prompting" messing around with the count of dogs output is actually a glaring example of why all these models are crap. Lack of trustworthiness, making big mistakes on trivial tasks AND what's more, those mistakes depend on minute details of how you arrange the input! It reminds me of a fine tuned model claiming at the same time that it loves and hates tomatoes! Or claiming that its favorite animal is a tomato, unless, of course, you ask beforehand whether a tomato is an animal. This is simply ridiculous and inconsistencies of this sort highlight the simple fact that today even the most sophisticated of these models are still imitators of intelligence, rather than intelligence. Building castles on sand.
@andrada25m46 · 2 months ago
Personally I haven't had issues with Haiku; it's much better than GPT-3.5, you just have to prompt it well.
@Quin.Bioinformatics · 2 months ago
Google Bard is trash; why was it rated so highly? It sucks at generative coding and variant annotation.
@snuwan · 2 months ago
Claude 3 is actually better.