
InternLM - A Strong Agentic Model? 

Sam Witteveen
63K subscribers
13K views

In this video I look at InternLM, an LLM which focuses on math, reasoning and function calling (a minimal loading sketch follows the links below).
Colab: drp.li/mxJrX
Github: github.com/InternLM/InternLM
LM Deploy: github.com/InternLM/InternLM/...
HF: huggingface.co/internlm/inter...
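
Below is a minimal sketch of loading the chat model with Hugging Face transformers, roughly what the Colab walks through; the repo id internlm/internlm2_5-7b-chat, the GPU settings and the chat-template call are assumptions rather than a copy of the notebook's exact code.

# Minimal sketch (assumed checkpoint and settings, not the notebook's exact code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm/internlm2_5-7b-chat"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit a single Colab GPU
    trust_remote_code=True,      # InternLM ships custom modelling code
    device_map="auto",           # requires accelerate in the environment
).eval()

messages = [{"role": "user", "content": "What is 17 * 24? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and print only the model's reply
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
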
🕵️ Interested in building LLM Agents? Fill out the form below
Building LLM Agents Form: drp.li/dIMes
👨‍💻Github:
github.com/samwit/langchain-t... (updated)
github.com/samwit/llm-tutorials
⏱️Time Stamps:
00:00 Intro
01:33 Hugging Face Leaderboard
01:57 InternLM Github
03:02 InternLM: LMDeploy
04:29 InternLM: Lagent
06:36 InternLM Paper
08:29 InternLM Hugging Face Models and Datasets
08:39 InternLM on Ollama
08:54 Code Time
09:15 InternLM Hugging Face Implementation (Colab)
13:12 InternLM Chat Format
13:39 InternLM Function Calling
15:01 InternLM Running Locally through Ollama
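
For the local run in the last chapter, here is a minimal sketch using the Ollama Python client; the model tag internlm2 is an assumption (check the Ollama library for the tag actually used), and the video demonstrates the CLI rather than this exact client code.

# Minimal sketch: chat with a locally pulled InternLM model through Ollama.
# Assumptions: `pip install ollama`, the Ollama server is running, and the model
# has been pulled under a tag like "internlm2" (e.g. `ollama pull internlm2`).
import ollama

response = ollama.chat(
    model="internlm2",  # assumed tag
    messages=[{"role": "user", "content": "In one sentence, what is function calling?"}],
)
print(response["message"]["content"])
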

Science

Published: 17 Jul 2024

Comments: 26
@keithmatthews2707 10 days ago
Very useful content, thank you Sam for your valuable insights into these topic areas.
@toadlguy 12 days ago
Thank you, Sam, for once again highlighting the most interesting new models/techniques in this fascinating field. I note InternLM 2.5 explicitly notes that it "supports gathering information from over 100 websites" with an implementation using Lagent. I'm sure a LangChain implementation could be easily created as well. Actually, fine-tuning models with sources for information not in the model (like current weather or news), with function calling and JSON support, and using LangChain for finer control would be a great method for using smaller local models. (I feel more comfortable using LangChain than a model-specific framework, if possible.) I would love to see other models add this approach. I wonder how much of this is done in pretraining vs the base model. (Guess I'll have to look at the paper 😉).
@LaHoraMaker 12 days ago
LMDeploy is quite an interesting framework for deploying and quantizing most of the Chinese models. It also works fairly well in Kaggle, given that it also supports older GPUs.
@mickelodiansurname9578 12 days ago
That's a nice SMALL model for function calling alright... appreciate you bringing it to my attention.
@waneyvin 12 days ago
Great job mate! This is a bit like glm4, though I'm not sure about the benchmark comparison. Both are designed to be agentic, and could be trained with agentic instructions.
@omarelfaqir3627 11 days ago
Hello Sam, thanks for bringing this wonderful model to our attention. There is just a confusion in the video between commercial usage and a commercial licence: commercial usage is allowed without submitting any form, but under the open-source licence you might need to open source any derivative work (i.e. any fine-tuning you make, for example). If you want to make non-open-source stuff with it (why would you 😊?) you will need to submit the form to obtain a commercial licence, which allows you to do that. It is quite a classic business model in open-source software.
@SonGoku-pc7jl 12 days ago
Thanks! In Spanish it's only so-so, but it's good that everything keeps evolving :)
@tlfmcooper 12 days ago
Thanks
@kenchang3456 12 days ago
Kind of interesting: if one of the stronger points of InternLM 2.5 is being able to support agents, I wonder what part of the training data makes it more capable of supporting agents if function calling data only accounts for 16%. Thanks for the video, I'll have to find a way to make time to try it out.
@jon_flop_boat 6 days ago
It’s my understanding that, instead of focusing on incorporating information into the model, the creators focused hard on pretraining on reasoning and research. If the model is particularly good at these things, it can just Google the relevant information and synthesize it in real time, hence the name: InternLM. It doesn’t know anything, but it can look stuff up!
@choiswimmer 12 days ago
Nice
@ManjaroBlack 12 days ago
I couldn’t get InternLM to work well with RAG or any embedding. It gives ok answers to simple prompting.
@aa-xn5hc 10 days ago
Please try Lagent with 2.5
@lapozzunk 12 days ago
If each model gets a higher rating than its predecessors, when will we reach 100? Also, if I don't watch such videos, will this happen later?
@attilavass6935 11 days ago
Am I the only one who misses a memory module in Lagent? I'm gonna test this ASAP though.
@WillJohnston-wg9ew 12 days ago
What is the agentic aspect? Maybe I don't understand something or missed something?
@Schaelpy 10 days ago
He talks about it at 4:45
@wickjohn3854 12 days ago
ask him what happened in 1989 LOL
@Dom-zy1qy 6 days ago
The ultimate benchmark for Chinese models. I wonder if they've actually been tuned to avoid discussing things like that. Would prob get them defunded by the govt.
@TheGuillotineKing 12 days ago
Fun fact: these Chinese models are banned in the USA and can't be used for a commercial product.
@ringpolitiet 11 days ago
Quite an enigma how you combine an interest in rather techy stuff like tool-calling LLMs with a straight-off-the-turnip-truck view of other things that seem as easy or easier to get informed about.
@dinoscheidt 10 days ago
Fun fact: a source helps. @TheGuillotineKing seems cognitively challenged at telling apart the current talks to maybe restrict the EXPORT of OSS models vs the other way around.
@TheGuillotineKing 10 days ago
@@dinoscheidt Fun Fact your mother swallowed a gallon of 🥜🥜🥜🥜🥜🐿️🐿️🐿️ juice and that's how she had you
@toadlguy 9 days ago
@@dinoscheidt Well, he is right that they can’t be used for commercial projects due to the license. 😉