For experimenting I would recommend using no database at all. You can simply use cosine similarity (e.g. from torch.nn.functional) or quickly implement it yourself, and you are nearly done. Just use argsort to get the best matches. It's like five lines of code or so. For easy store/load you can use pickle to serialize/deserialize the object that holds the embeddings. It is fast on CPU too, but of course you can run it on GPU without any bigger changes. No services required.
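Something like this is what I mean; a minimal sketch, assuming the embeddings already live in a tensor (names like `corpus` and `query_vec` are mine, not from the video):

```python
import pickle
import torch
import torch.nn.functional as F

corpus = torch.randn(1000, 384)   # stand-in for your stored embeddings
query_vec = torch.randn(384)      # stand-in for the query embedding

# Cosine similarity of the query against every stored embedding
scores = F.cosine_similarity(query_vec.unsqueeze(0), corpus, dim=1)

# Indices of the top 5 matches, best first
top_k = torch.argsort(scores, descending=True)[:5]
print(top_k, scores[top_k])

# Store/load the embeddings with pickle
with open("embeddings.pkl", "wb") as f:
    pickle.dump(corpus, f)
with open("embeddings.pkl", "rb") as f:
    corpus = pickle.load(f)
```

Move the tensors to the GPU (e.g. `corpus.cuda()`) and the same code runs unchanged.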
It would be cool to make an AI website scraper that strips away all the JavaScript bloat from a webpage and converts it into a lightweight, basic HTML page while preserving functionality. It would be great as a proxy service to make modern web pages load fast on slow phones with poor data connections. The modern web is way too bloated. I sometimes manually archive a page by deleting all the JavaScript in Notepad++ and modifying the image embed links to point to locally saved .png files. That takes a long time, but I can reduce a 5 MB page down to 200 kB and save that. It would be nice to have a smart automated tool that does it in seconds.
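The mechanical part of what I do by hand is easy to script; a rough sketch with requests and BeautifulSoup (the URL and file names are placeholders, and the hard "preserving functionality" part is exactly what this doesn't do):

```python
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com").text
soup = BeautifulSoup(html, "html.parser")

# Remove every script tag (inline and external)
for script in soup.find_all("script"):
    script.decompose()

# Rewrite image links to point at locally saved files
# (assumes the images were downloaded separately)
for i, img in enumerate(soup.find_all("img")):
    img["src"] = f"images/{i}.png"

with open("lightweight.html", "w", encoding="utf-8") as f:
    f.write(str(soup))
```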
I don't know if my question is stupid, but can we take snapshots of a website and use OCR and LLMs to scrape the useful info, instead of sending requests to that website? It would look more human and also use fewer requests.
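Something like this is what I'm imagining; just a sketch assuming Playwright for the snapshot and pytesseract for the OCR (note the browser still sends one request to render the page, so this reduces requests rather than eliminating them):

```python
import pytesseract
from PIL import Image
from playwright.sync_api import sync_playwright

# Render the page once and save a full-page screenshot
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")
    page.screenshot(path="snapshot.png", full_page=True)
    browser.close()

# OCR the screenshot; the result can then be handed to an LLM
text = pytesseract.image_to_string(Image.open("snapshot.png"))
print(text)
```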
Hey y'all, in case you didn't get good full-text search results like me, the CEO of Supabase (Paul Copplestone) sent me this to use instead: supabase.com/docs/guides/database/extensions/pgroonga
This video was really helpful for people like me looking for web scraping tools. Though I wonder if Jina AI is really free. Is there any challenge in using it for a larger number of links? Does it have a rate limit when hitting URLs with its prefix? Any clarification on this is appreciated. :)
Great video!! Thanks for sharing the code! One question though: inside the A-tier code, in the "print_ai_answer" function, you wrote:

    for like in extracted_personality["likes"]:
        text_to_embed = f"The user likes {like}"
        current_embeddings = embedding_client.embed_query(text_to_embed)
        dislike_with_metadata = {
            "id": str(uuid.uuid4()),
            "values": current_embeddings,
            "metadata": {"type": "likes", "content": like}
        }
        embeddings.append(dislike_with_metadata)

Was it not supposed to be something like "likes_with_metadata = {...}" and then "embeddings.append(likes_with_metadata)"? I guess reusing "dislike_with_metadata" makes no difference to the code's functionality, but it was a bit confusing to understand for a moment. Thanks!
Great video, thanks. Is there a way to provide our own scraped data (so we can make sure we use a good stealth scraper and get all the content), and then have the LLM analyse it like this?
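Something roughly like this is what I have in mind; a sketch assuming the OpenAI Python SDK, where `scraped_text` is whatever my own stealth scraper returned and the prompt is made up for illustration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Content fetched separately by your own scraper
scraped_text = open("page.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract the key facts from the page as bullet points."},
        {"role": "user", "content": scraped_text},
    ],
)
print(response.choices[0].message.content)
```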
Jina is almost perfect.. too bad it's not smart enough to scrape content from "accordions", where you first have to click to make the content visible. I feel a smart AI scraper should be able to grab that text and determine, based on the CSS class, that it's probably valuable text that's just hidden at the time.
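One workaround: if the accordion content is server-rendered and only hidden by CSS, parsing the raw HTML gets it anyway. The selectors below are pure guesses, which is exactly why a scraper that judges by CSS class would help:

```python
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/faq").text
soup = BeautifulSoup(html, "html.parser")

# Guess at common accordion/collapse markers; real pages vary widely
for panel in soup.select(".accordion, .collapse, [aria-hidden]"):
    text = panel.get_text(" ", strip=True)
    if text:
        print(text)
```

This only works when the text is already in the page source; content loaded on click still needs a real browser (e.g. Playwright) to trigger the toggles first.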
How is your final implementation? I'm really curious about it, because it makes sense to have this abstracted, but there are some differences between them that can make this process tricky.