
Open Interpreter Secrets Revealed: Setup Guide & Examples + O1 Light 

AppyDave
3.2K views

Published: 28 Aug 2024

Comments: 19
@zazerb750
@zazerb750 1 month ago
Hey mate, thanks a ton for the video. Was exactly what I was looking for, a succinct run through of realistic use cases.
@AppyDave
@AppyDave 1 month ago
Thanks for the comment, and glad it helped
@qiujin4548
@qiujin4548 4 months ago
thanks for testing this out for us Dave! very inspiring
@AppyDave
@AppyDave 4 months ago
I'm glad you found it inspiring! I think this will be a very interesting technology to follow.
@xspydazx
@xspydazx 3 months ago
It's actually using a Jupyter notebook in the background to execute code: each piece of generated code is run in a notebook cell via the jupyter_client library. So to create your own function-calling script, you can use Instructor to produce the correct JSON for the call to the Jupyter client, i.e. the template the call has to follow. Your prompt also needs to contain the instructions to follow: return the code in segments; examine and solve any errors returned before resubmitting the task on the next turn; create any function it needs to perform a task, given the available programming languages; and gather information about the OS so that bash can be used where possible. It should default to scripting first until it has learned enough about the system to use bash, and confirm every task until it is fully error free, so that tasks can be auto-run. The prompt used is quite in-depth.

Perhaps even always produce a "summarizer response": collect all past messages, summarize them, and use that context to respond. The context history should be summarized and its size tracked, because once it falls outside the sliding context window it has to be concatenated and summarized, and stacked in the past history if necessary, so even older history can be re-summarized and the model gets a high-level briefing of the past. In other words, optimize the history by shuffling the important context to the front messages and relegating the irrelevant.

Once you have the prompt set, all you need to do is intercept the completion messages and queries and return the responses, verbose or not. Everything is stored in JSON message format anyway, so it's easy to update the model during training with these messages. You can discuss information with the model first, with truthful and factual data, and it will be stored in its conversations; then when you upload those along with the relevant document data for training, you will have truly taught the model new knowledge, as well as had a good discussion it can refer to later.
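(A minimal sketch of that execution path, using the jupyter_client library the comment mentions. This is an illustration only, not Open Interpreter's actual code; the kernel name and the message handling shown here are assumptions.)

# Rough sketch: execute a model-generated code string in a Jupyter kernel
# via jupyter_client, the library the comment above says is used under the hood.
from jupyter_client.manager import KernelManager

km = KernelManager(kernel_name="python3")  # assumes a local python3 kernel is installed
km.start_kernel()
kc = km.client()
kc.start_channels()
kc.wait_for_ready()

kc.execute("print(2 + 2)")  # the generated code would be passed in here

# Read messages published by the kernel until it reports it is idle again.
while True:
    msg = kc.get_iopub_msg(timeout=10)
    msg_type = msg["header"]["msg_type"]
    if msg_type == "stream":                 # normal stdout/stderr output
        print(msg["content"]["text"], end="")
    elif msg_type == "error":                # traceback that could be fed back to the model
        print("\n".join(msg["content"]["traceback"]))
    elif msg_type == "status" and msg["content"]["execution_state"] == "idle":
        break

kc.stop_channels()
km.shutdown_kernel()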
@AppyDave
@AppyDave 23 days ago
cool
@xspydazx
@xspydazx 23 days ago
@@AppyDave I've since updated this method, as you only need two functions. Method 1: one function to execute code on the system, and one to save/load a file in your tools folder. That way the model can generate any function for itself (the best way), add it to its tool collection, and then execute the code. It's a beautiful approach: I used a stop-and-think-then-act technique, so when it needs to act it either executes one of its tools or creates a new one. This method really does work well, since it can generate bash code, Python scripts, etc. You can also just create tools yourself and drop them into the tools folder, and it will load them into its toolbox. Method 2: similar to the above, but the model can only execute a tool. You supply the tool names in your code and it reads the docstring of each function and adds it to its toolbox. (I found the model does not like Pydantic; SLOW!) And if you need to search a website etc., you don't need an API, you need an LLM!
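(A minimal sketch of "Method 2" above: scanning the docstrings of the functions in a tools module and exposing them to the model as a toolbox. The module name my_tools and the toolbox format are hypothetical, purely for illustration.)

# Sketch: build a toolbox from the docstrings of functions in a tools module,
# so the model can pick a tool by name instead of generating arbitrary code.
import importlib
import inspect

def build_toolbox(module_name: str) -> dict:
    """Map each public function in the module to its signature and docstring."""
    module = importlib.import_module(module_name)
    toolbox = {}
    for name, fn in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue
        toolbox[name] = {
            "callable": fn,
            "signature": str(inspect.signature(fn)),
            "description": inspect.getdoc(fn) or "",
        }
    return toolbox

def call_tool(toolbox: dict, name: str, **kwargs):
    """Execute the tool the model selected by name."""
    return toolbox[name]["callable"](**kwargs)

# Example (hypothetical module and tool):
#   toolbox = build_toolbox("my_tools")
#   result = call_tool(toolbox, "read_file", path="notes.txt")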
@AppyDave
@AppyDave 23 days ago
@@xspydazx Wow, you've really delved deep into this! It sounds like you've found some effective methods for using Open Interpreter. Thanks for sharing your insights!
@xspydazx
@xspydazx 23 days ago
@@AppyDave Yes, it was quite hard to figure the whole thing out, since the main thing was just one function! The rest was logistical, in fact (like why it did not work with MY model).
@Mephmt
@Mephmt 4 months ago
I'd be interested in seeing how you integrated Siri with OI. I'm using Android and Windows machines and I have the inference, STT, and TTS all set up, but I haven't gotten the transcribed text into open-interpreter itself from the client phone. I'd love to know how you managed that using Siri.
@AppyDave
@AppyDave 4 months ago
Siri is just part of the operating system and provides a voice typing capability for any application with an input field. On Android, Google Assistant should allow voice typing as well, but I'm not sure.
@Mephmt
@Mephmt 4 months ago
@@AppyDave Right, but how are you getting the resultant text to Open Interpreter from your phone? HTTP request or web socket?
@zaubermanninc4390
@zaubermanninc4390 4 months ago
my system32 folder got deleted only from watching this. typing from the void
@zaubermanninc4390
@zaubermanninc4390 4 months ago
@@tldr-ai I don't get what you mean
@ps3301
@ps3301 4 months ago
If you use it with GPT-4, it will bankrupt you!! But open-source LLM alternatives are pretty useless.
@AppyDave
@AppyDave 4 months ago
Tell me about it, GPT-4 is so expensive in these use cases
@zaubermanninc4390
@zaubermanninc4390 4 months ago
wait til llama3 400B
@LE1TERL
@LE1TERL 1 month ago
GPT-3.5 Turbo performs almost the same for me at a much smaller cost
@AppyDave
@AppyDave 1 month ago
@@LE1TERL good point