This is beautiful. I'm working with 5 classmates (electromechanical and software engineering college) on a project: we developed a tiny robot able to chat with patients as a co-therapist, using a Raspberry Pi and an LLM. But the hallucinations are way too dangerous here, so I suggested to my team that we start implementing RAG. Generating and "validating" the psychology database is really, really time-consuming; it's hard, tricky, and it takes a long time to get good-quality examples, but we are pretty sure it's going to be 100% worth it. I just learned about Ollama and I would love to try out Verba in our prototype, so people in need can start getting help and we can immediately start collecting data from the final deployed model. I would love to collaborate with you guys; I'm such an enthusiast of open-source communities and companies, and I loved the concept you evoke so much.
Why is the layout of the version installed via pip not the same as your demo? Also, how can we use PDF files without an API key from Unstructured? I believe this is still a showstopper for most of us.
Is there an API I can use to upload data, rather than uploading it in the admin UI? And is there also a way to access chat through an API, so that I can use the chat inside any website or app?
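For context, here is a minimal sketch of what calling such an API from your own app might look like. The endpoint paths (`/api/import`, `/api/query`) and field names are placeholders, not confirmed Verba routes; check the project's backend for the real ones:

```python
import json

# Placeholder endpoint paths -- NOT confirmed Verba routes.
BASE_URL = "http://localhost:8000"
IMPORT_ENDPOINT = "/api/import"
QUERY_ENDPOINT = "/api/query"

def build_import_payload(filename: str, text: str) -> dict:
    """Package a document for upload as a JSON body (hypothetical schema)."""
    return {"fileName": filename, "content": text}

def build_query_payload(question: str) -> dict:
    """Package a chat question as a JSON body (hypothetical schema)."""
    return {"query": question}

if __name__ == "__main__":
    payload = build_import_payload("notes.md", "# My notes\nSome content.")
    print(json.dumps(payload))
    # Actually sending it would then be something like:
    #   requests.post(BASE_URL + IMPORT_ENDPOINT, json=payload)
    #   requests.post(BASE_URL + QUERY_ENDPOINT, json=build_query_payload("What are my notes about?"))
```

The same pattern would let you wire the chat into any website or automation: your backend posts the user's question to the RAG service and relays the answer.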
Wow, Victoria is a natural in front of the camera: perfect presentation, and body language just right for the level of presentation required. It almost looks like she went to school for acting, NLP, and micro-expression presentation for marketing.
Is it recommended to break your markdown blogs into separate files rather than ingesting one big file? I tried one big file and didn't get accurate results.
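One common workaround is to pre-chunk the big file yourself before ingesting, so each chunk is a coherent section rather than an arbitrary slice. A minimal sketch of splitting a markdown document at its headings (this is just an illustration, not Verba's own chunker):

```python
import re

def split_markdown_by_heading(text: str, level: int = 2) -> list[str]:
    """Split one big markdown document into sections at headings of the
    given level (e.g. '## ' for level 2), keeping each heading with its body."""
    # Zero-width lookahead so the heading line stays attached to its chunk.
    pattern = re.compile(rf"^(?={'#' * level} )", flags=re.MULTILINE)
    chunks = [chunk.strip() for chunk in pattern.split(text)]
    return [c for c in chunks if c]

doc = "intro text\n## First\nbody one\n## Second\nbody two\n"
print(split_markdown_by_heading(doc))
# -> ['intro text', '## First\nbody one', '## Second\nbody two']
```

You could then save each section as its own file (or feed the chunks directly), which tends to give the retriever more focused, on-topic passages to match against.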
This is exactly what I have been looking for. However, I installed it and none of the variables seem to populate the application. At least, none are showing.
Quick question: does Verba support Windows or WSL for setup and use? Also, what exactly is the process? Does it simply work like a RAG app off the shelf after setup, or do we need to have a Weaviate DB running on the side as well?
Weaviate Embedded isn't currently supported on Windows, but we're working on it! On other platforms, Weaviate Embedded is set up automatically and locally in the background when installing Verba, but you also have other deployment options such as Docker or a free sandbox cluster hosted on our cloud platform (console.weaviate.cloud/)
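For the Docker route, a minimal sketch of running a standalone Weaviate instance that Verba can point at. The image name and environment variables are standard Weaviate settings, but check the docs for the version you need:

```shell
# Run a local, anonymous-access Weaviate instance on port 8080.
# Pin a concrete version tag in real use instead of 'latest'.
docker run -d --name weaviate \
  -p 8080:8080 \
  -e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true \
  -e PERSISTENCE_DATA_PATH=/var/lib/weaviate \
  semitechnologies/weaviate:latest
```

On Windows this sidesteps the Embedded limitation entirely: Verba talks to the container over HTTP instead of spawning an embedded instance.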
Using a GPU is highly recommended, but the setup isn't clear. I am running Verba in WSL on Windows, but since it only uses the CPU it is quite slow. How can I get my GPU to help?
CUDA / NVIDIA: check what you have and make sure it's compatible with your Python version and packages. It is a big job for Windows users; it took me ages to activate CUDA, though not with this program.
Can this be connected to via an API for external apps? E.g., automatically emailing facts about ingested data to someone interested in (and able to receive) email responses.
Nice system, and great that it works with Ollama. Like everyone who isn't OpenAI, we want RAG to work... but it just doesn't. I've come to the conclusion that it basically can't: embedding databases just don't represent information in a connected-enough way for a natural-language query to succeed in 99% of use cases. And in the 1% where RAG does work, keywords seem to work just as well. The sooner funded companies like Weaviate accept that current RAG doesn't work, the better chance we have of doing the hard work of building a system that can. You're probably going to have to train self-contained embeddings against a more general model, LoRA-style, to have any hope of teasing out the actual relationship "activations" that give a natural-language query against unstructured data a chance.
Like the tech, but the health use case is really clunky and overly simple. Medical data is messy, and you would obviously be asking patients most of these questions, not an LLM!