Introducing Gemini API Code Interpreter (Execution) 

Prompt Engineering
168K subscribers
8K views

Published: Aug 23, 2024

Comments: 22
@gemini_537 · A month ago
This feature is super useful ❤ Great job, Google 👍💯
@engineerprompt · A month ago
Love the user name :)
@unclecode · A month ago
@engineerprompt Interesting review of Gemini API's code execution. I see this as integrating algorithmic thinking into LLMs. Unlike function calls, this feature allows the model to execute code at an algorithmic level as part of its reasoning process - essentially bringing a mini Turing machine into play. It's fascinating because it addresses a fundamental limitation in LLMs. Instead of relying on increased model size or fine-tuning, it provides a mechanism for true computational reasoning. For instance, counting the 'r's in "strawberry" becomes an algorithmic task rather than a pattern-recognition one.

I hope it remains focused on this level of algorithmic execution. Tasks like database updates or API calls should stay in the realm of function calls. This separation maintains the purity of the "algo-execution" concept, enhancing the model's core reasoning capabilities without overreaching into application-specific processes. This approach effectively bridges the gap between language processing and computational thinking in LLMs, opening up new possibilities for more accurate and logically sound responses. Btw, I posted a detailed review of this on my X account and shared your video over there.
@engineerprompt · A month ago
I agree, it could really address some of the issues we had with LLMs and can really improve their reasoning capabilities. I like the algorithm-execution term that you came up with :)
@unclecode · A month ago
@engineerprompt We need to make this "algorithm-execution" distinction clear. It serves its purpose well and also better defines what function calling is for. Really enjoyed this, it's the kind of thing I missed seeing more of from Google. Excited to experiment with it. Looking forward to your context caching video!
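The "algorithm-execution" idea discussed in this thread can be sketched with the `google-generativeai` Python SDK, which exposes the feature via `tools="code_execution"`. The model name and prompt below are illustrative, and the no-key fallback simply runs the same deterministic computation the model would write for itself:

```python
# Sketch: treating "count the 'r's in strawberry" as an algorithmic task
# rather than a pattern-recognition one, per the comment above.
import os

# The SDK may not be installed; the API call is only attempted when both
# the package and an API key are available.
try:
    import google.generativeai as genai
    HAVE_SDK = bool(os.environ.get("GOOGLE_API_KEY"))
except ImportError:
    HAVE_SDK = False

def count_letter(text: str, letter: str) -> int:
    """The deterministic computation the model writes and runs for itself."""
    return text.count(letter)

if HAVE_SDK:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    # tools="code_execution" lets the model write and run Python
    # as part of generating its answer.
    model = genai.GenerativeModel("gemini-1.5-flash", tools="code_execution")
    resp = model.generate_content(
        'Write and run Python code to count the letter "r" in "strawberry".'
    )
    print(resp.text)
else:
    print(count_letter("strawberry", "r"))
```

With code execution enabled, the response interleaves the generated Python, its captured output, and a natural-language summary, so the count comes from actually running the code rather than from token statistics.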
@IdPreferNot1 · 20 days ago
love the detailed run through, thx.
@rousabout7578 · A month ago
Very Useful! TY
@IncomeBoost42 · A month ago
Should be easy for OpenAI to implement this. I used their custom GPTs a while back when they came out, and they would run the user's Python code/scripts in an isolated environment, then analyse the result and follow any instructions you specifically set. Sounds like they already have all the ingredients to do this on the fly.
@engineerprompt · A month ago
Yes, I agree. The main thing is to scale it to the API.
@karthikb.s.k.4486 · A month ago
Nice tutorial, thank you for sharing. May I know what configuration of Mac you are using, so I can start working on LLMs? Please let me know.
@engineerprompt · A month ago
Mine is an M2 MacBook Pro with 96GB of RAM. You can even work with the 32GB RAM version if you are not going to run 70B+ models.
@carlkim2577 · A month ago
I'm not seeing this in Vertex Studio. Are some extra steps needed to activate this function? Or is the update rolling out slowly? Tks!
@engineerprompt · A month ago
I haven't tested it on Vertex, but here is what I am seeing in the docs: cloud.google.com/vertex-ai/generative-ai/docs/context-cache/context-cache-overview
@freedom_aint_free · A month ago
In Google's AI Studio, even with "code execution" selected, it's not running the code, nor producing any output other than text explaining the code...
@nickiascerinschi206 · A month ago
Just get OpenAI
@FranzAllanSee · A month ago
That’s not my experience. I gave it a unit test and asked it to implement the class to make the test pass and it was able to do so
@koen.mortier_fitchen · A month ago
Gemini is like a cheap coder on Fiverr. Sure I can. You pay. It can’t.
@engineerprompt · A month ago
It's free :)
@FranzAllanSee · A month ago
Gemini as chat sucks. I don't think it was ever meant for zero-shot. I tried it with code execution and it did great. At least I don't have to hand-roll my own agent.
@anubisai · A month ago
It's shyte at code.
@anubisai · A month ago
It-ter-ate.... not aye-ter-ate. It. It-ter-ation not aye-ter-ation