NLP & AI database integration | Get Insights from database using NLP | Chat with database | AI | NLP 

BI Insights Inc
Published: 6 Oct 2024

Comments: 12
@GordonShamway1984, 3 months ago
Wonderful as always, and just in time. I was going to build a similar use case next week that auto-generates database docs for business users. This comes in handy 🎉 Thank you again and again.
@BiInsightsInc, 3 months ago
Glad it was helpful! Happy coding.
@michaelaustin1638, 2 months ago
Awesome video! How did you get the various categories when creating a model?
@BiInsightsInc, 2 months ago
Thanks. Those are defaults in OpenWebUI. You can select the relevant categories for a custom model.
@mohdmuqtadar8538, 3 months ago
Great video. What if the response from the database exhausts the model's context window?
@BiInsightsInc, 3 months ago
Thanks. If you are hitting the model's maximum context length, you can try the following: 1. Choose a different LLM that supports a larger context window. 2. Brute force: chunk the document and extract content from each chunk. 3. RAG: chunk the document and only extract content from the subset of chunks that look "relevant". Here is an example of these approaches from LangChain: js.langchain.com/v0.1/docs/use_cases/extraction/how_to/handle_long_text/
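The brute-force option (2) from the reply above can be sketched in plain Python. The function name, chunk size, and overlap values below are illustrative assumptions, not from the video; in practice you would measure chunk size in tokens using the model's tokenizer rather than characters.

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split a long string into overlapping chunks so each one fits
    inside the model's context window. Sizes are in characters here
    for simplicity; a real pipeline would count tokens instead."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step forward by less than chunk_size so consecutive chunks
        # share some context at their boundaries.
        start += chunk_size - overlap
    return chunks

# Each chunk would then be sent to the LLM separately and the partial
# extractions merged -- the "brute force" strategy from the reply.
pieces = chunk_text("large query result " * 200, chunk_size=500, overlap=50)
print(f"{len(pieces)} chunks, largest {max(len(p) for p in pieces)} chars")
```

The RAG option (3) differs only in that you would embed the chunks, retrieve the few most relevant to the question, and send just those to the model.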
@mahraneabid, 2 months ago
When it said "would you like me to break down the sales by product" and you responded with "yes", will it perform the action it mentioned, or not?
@BiInsightsInc, 1 month ago
It may work if the SQL model is able to generate SQL for the question. You can try it and let us know if this extended option works.
@mahraneabid, 2 months ago
Hi sir, the edited model can't be seen by Ollama. When I call `ollama list` in CMD, it displays only llama3.1. Why?
@BiInsightsInc, 1 month ago
If you do not see the custom model in your Ollama ecosystem, check the model file to make sure it's correct. Here is an example of a custom model file from OpenWebUI: openwebui.com/m/darkstorm2150/Data-Scientist:latest
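For reference, a minimal Ollama Modelfile looks like this. The base model, system prompt, and parameter value are illustrative assumptions, not taken from the video:

```
# Modelfile -- contents below are a minimal illustrative sketch
FROM llama3.1
SYSTEM "You are a data analyst. Answer questions about the connected database."
PARAMETER num_ctx 4096
```

After saving it, register the custom model with `ollama create my-analyst -f Modelfile` and it should then appear in the output of `ollama list`. A model created only inside OpenWebUI's UI will not show up there, since OpenWebUI stores those presets itself rather than registering them with Ollama.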
@krishnarajuyoutube, 2 months ago
Can we run Llama 3 locally on any simple VPS server, or do we need GPUs?
@BiInsightsInc, 2 months ago
Hi, you'd need a GPU to run an LLM. By the way, VPS servers can have GPUs.