Neo4j is the Graph Database & Analytics leader. This channel features videos by our Developer Relations, Engineering, and Product teams about best practices for using Neo4j. Learn more at neo4j.com/. If you have technical questions or want to build a local community, join community.neo4j.com
This is bullshit. LlamaIndex has much better support for knowledge graphs, with property graphs and its index, and full control over graph creation. LangChain is far behind. This app doesn't work and is a waste of time!
Great video and great app! I have a question on how the LLM is used to generate responses based on the knowledge graph. From my understanding, whether I'm using the LangChain or LlamaIndex Cypher chatbot, the chatbot basically receives the prompt along with the list of all existing nodes and relationships, and from that it generates a Cypher query to retrieve the relevant context from the knowledge graph and then formulate the response. But the list of all nodes and relationships gets longer with every new document added to the graph, so since the LLM is limited by its context window, isn't this approach unscalable? If so, do you guys have any solution to this problem? Thanks in advance.
PS: This situation is probably only relevant when an LLM is used to extract entities and relationships without restrictions on node/relationship types, so you could end up with a large number of different nodes and relationships. But sometimes you can't know in advance what types of entities/relationships you want to extract from a set of documents, so you let the LLM extract anything it can. Hmm, how do you deal with the limited context window problem?
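The pattern this comment describes — serializing the whole graph schema into the prompt before asking for a Cypher query — can be sketched roughly like this. This is a minimal illustration in plain Python, not LangChain's or LlamaIndex's actual implementation; the label/type names and the prompt wording are made up:

```python
# Rough sketch of the schema-in-prompt pattern the comment describes.
# Node labels and relationship types get serialized into every prompt, so the
# prompt grows as unrestricted extraction adds new types per document.

def serialize_schema(node_labels, rel_types):
    """Flatten the graph schema into text prepended to the LLM prompt."""
    return (
        "Node labels: " + ", ".join(sorted(node_labels)) + "\n"
        "Relationship types: " + ", ".join(sorted(rel_types))
    )

def build_prompt(question, node_labels, rel_types):
    return (
        serialize_schema(node_labels, rel_types)
        + "\nGenerate a Cypher query answering: " + question
    )

# With a tiny schema the prompt is short...
labels, rels = {"Person"}, {"KNOWS"}
small = build_prompt("Who knows Alice?", labels, rels)

# ...but after many documents of unrestricted extraction, the schema
# portion dominates the context window.
labels |= {f"Label{i}" for i in range(500)}
rels |= {f"REL_{i}" for i in range(500)}
large = build_prompt("Who knows Alice?", labels, rels)

print(len(small), len(large))
```

This makes the commenter's concern concrete: the question text is constant, but the schema preamble grows with the number of distinct labels and relationship types, which is exactly why sub-schema selection (discussed further down this thread) becomes necessary.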
Great numbers. But this feels like a sales pitch rather than a tech talk. It's unclear to me how fanout is fixed by Neo4j. How does it impact read time, since you'll need to construct the feed by walking the graph, right? Are 125-48-3 the numbers of nodes per cluster or the total number of nodes? If the former, how many clusters did you need? If the latter, you really only need 3 Neo4j instances in total, not even standby servers? Can you shed some light on those things, or point me to a follow-up video?
Hi, thanks for the video. What if we previously exported the output of "MATCH (n) RETURN n" into a CSV file, and now we want to load that file into another database? Is there any way for Neo4j to recognize the node labels and properties automatically?
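Part of why that round-trip is hard is that a raw CSV dump of `MATCH (n) RETURN n` flattens each node's labels and typed properties into one string. A minimal sketch of exporting them explicitly so an importer can rebuild them — the column layout here is an assumption for illustration, not a Neo4j convention (in practice APOC's export procedures handle this):

```python
import csv, io, json

# Sketch: export each node as (labels, properties-as-JSON) instead of a raw
# string, so an importer can recreate labels and typed properties.
nodes = [
    {"labels": ["Person"], "props": {"name": "Alice", "age": 33}},
    {"labels": ["Person", "Admin"], "props": {"name": "Bob"}},
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["labels", "props"])
for n in nodes:
    writer.writerow([";".join(n["labels"]), json.dumps(n["props"])])

# Import side: parse the CSV back into label lists and typed properties,
# then emit one parameterized Cypher CREATE per row.
buf.seek(0)
statements = []
for row in csv.DictReader(buf):
    labels = row["labels"].split(";")
    props = json.loads(row["props"])
    statements.append(("CREATE (n:" + ":".join(labels) + " $props)", props))

print(statements[0])
```

Storing properties as JSON also preserves their types (note `age` comes back as an integer, not the string a plain CSV cell would give you).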
I wanted to use the public app. When trying to connect, I get the following error: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j configuration
I have a question: I already have Ollama installed on my bare-metal Linux box. Will Docker install another instance and duplicate the models, or will it fetch them from the existing bare-metal installation?
Hello guys, does anyone know if it's possible to keep the layout of the scene frozen when expanding/deleting a node, so that the nodes don't get reordered automatically? Kind regards, Mario
Thanks for sharing, awesome effort! Tinkering a lot with how to wrap my large datasets with KGs, so food for thought... Is it possible to tell what was causing that latency to answer the questions? Was it only GPT-4? Also, would be great to see the token usage straight away, show some prompts, and discuss the learnings by model (what worked well and where) - this would be super interesting. Cheers!
How does this compare to what MS is doing in the GraphRAG paper? It seems like MS was really jazzed about using the LLM (GPT-4), in some way that wasn't clear, to create the relationships, the edges/weights, etc., and this was the secret sauce. I assume this is different from what Neo4j is doing, but could you expand?
This was a bit before GraphRAG became a term - check out the latest episodes of GoingMeta ( neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/ ), where we go into GraphRAG a bit more as well
For a beginner, the talk is mostly in tech terminology - I could not really follow. A demo of what this can do would be far more useful for me and my team. Is there anywhere I can see a demo that might be useful? 1:03:18
Would love to see more on how this can be integrated with generative UI: connecting the knowledge graph to different visuals, analytics workflows, and generative UI workflows; using the knowledge graph as a way to connect information from the internal data environment to the outside world; and also running analyses, saving the metric cards, visualizing, and recalling them.
Awesome stuff, team! Very cool. I'm running into an issue getting the repo working: the frontend build can't find the @neo4j-nvl/core npm package - is it a private package? Thanks!
I am a microservices developer working in Java. I want to learn Neo4j, and I have found the videos on your channel very good, so I tried looking through the playlists but couldn't figure out a track to follow. Can you please share the names of a few playlists, and their order, for a newbie Neo4j enthusiast?
I'm keen to figure out how to enable a use case where multiple users are entering data and uploading docs, auto-generating nodes/edges/vectors, and then -- this is important -- only being able to query/access their own nodes/edges/vectors. In my use case, it's essential that nobody has access to anyone else's information. And it's important that the method for this kind of access control follows good auth security practice (no passing uids in the body of an API call). Thoughts? (Note -- currently using Firebase for auth in my app.)
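One common way to approach this use case is property-based scoping: every query is filtered by a uid taken from a server-side-verified token, never from the request body. This is a hedged sketch of the query-building side only (the `User`/`OWNS` modeling is an assumption; Neo4j Enterprise's role- and label-based access control is a stronger, database-enforced alternative, and with Firebase the uid would come from the Admin SDK's `verify_id_token` on the server):

```python
# Sketch: per-user data isolation via a uid filter on every query.
# The uid must come from a server-side-verified auth token, never from the
# request body, so clients cannot impersonate each other.

def user_scoped_query(uid: str):
    """Build a parameterized Cypher query restricted to one user's subgraph."""
    query = (
        "MATCH (u:User {uid: $uid})-[:OWNS]->(n) "
        "RETURN n"
    )
    # Parameters, not string interpolation, so a uid can't inject Cypher.
    return query, {"uid": uid}

query, params = user_scoped_query("firebase-uid-123")
print(query)
print(params)
```

The key design point is that the uid never appears inside the query text itself; it travels as a driver parameter, which both prevents injection and lets the database cache the query plan.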
Trying to link my basic knowledge from ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ftlZ0oeXYRE.html with this video. I'm trying to build my first RAG with Neo4j; I hope to do it this weekend. The main goal of companies right now is to get better answers from a RAG without hallucinations. I hope we achieve that with Neo4j. UPDATE: Already playing with the workspace. It would be nice to have a RAG like in the video, with a chat UI integrated into a webpage or similar, from beginning to end: creating the database, feeding it, building the graph, and finally having the chat model query the graph database.
Hi @tecnopadre, thanks for testing it out already! You do have the full flow from beginning to end within this application. I presented the chatbot on a different screen, but you have the same chatbot integration in the app: on the main screen, click the bottom-right button "Q&A Chat" and that will open the RAG chatbot, plugged into the KG you just created; it will be able to answer your questions about the data you just loaded.
How would you handle it if the schema is too big for the LLM context? Do I always need an embedding model to find the starting point, or do you have other ideas on how to create a subgraph (sub-schema)?
Some LLMs accept a large number of tokens. If it still doesn't fit, you can create a chain. The first step can execute this prompt: 'Given this schema, which nodes and relationships are related to this query? {user_query} Answer in JSON and consider the relationship direction.' You can also provide an example of the JSON you'd like to receive as the sub-schema.
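The two-step idea above can be sketched like this. The LLM call is mocked (`fake_llm` is a stand-in you would replace with a real model call), and the schema shape and names are made-up examples:

```python
import json

# Step 1: ask the LLM which parts of a too-large schema matter for the query.
# Step 2: pass only that sub-schema on to the Cypher-generation step.

FULL_SCHEMA = {
    "nodes": ["Person", "Movie", "Company", "City"],
    "relationships": [
        {"start": "Person", "type": "ACTED_IN", "end": "Movie"},
        {"start": "Person", "type": "WORKS_AT", "end": "Company"},
        {"start": "Company", "type": "LOCATED_IN", "end": "City"},
    ],
}

SUBSCHEMA_PROMPT = (
    "Given this schema, which nodes and relationships are related to this "
    "query? {user_query} Answer in JSON and consider the relationship direction."
)

def fake_llm(prompt: str) -> str:
    # Stand-in for the first chain step's real model call.
    return json.dumps({"nodes": ["Person", "Movie"], "relationships": ["ACTED_IN"]})

def sub_schema(user_query: str) -> dict:
    answer = json.loads(fake_llm(SUBSCHEMA_PROMPT.format(user_query=user_query)))
    keep_nodes = set(answer["nodes"])
    keep_rels = set(answer["relationships"])
    return {
        "nodes": [n for n in FULL_SCHEMA["nodes"] if n in keep_nodes],
        "relationships": [
            r for r in FULL_SCHEMA["relationships"] if r["type"] in keep_rels
        ],
    }

small = sub_schema("Which actors appeared in The Matrix?")
print(small)
```

Only the filtered sub-schema then needs to fit into the context window of the second, Cypher-generating step, regardless of how large the full graph schema has grown.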
Strange - I can reach the repo here: github.com/neo4j-labs/llm-graph-builder If you want to try the app right away, go to: llm-graph-builder.neo4jlabs.com/
@@thanartchamnanyantarakij9950 Hi, yes, but you would have to implement the integration with the LLM you want to use. In the application we demoed and linked above, we currently have Diffbot, OpenAI (GPT-3.5 and GPT-4), and Gemini (1.0 Pro and 1.5 Pro). You can check out how we've done the implementation, since everything is open source, and take inspiration from it for the integration with the LLM you want to use.
I have been waiting for a tool like this for a while. I wish you would emphasize the process of ingesting structured data from public databases, especially ingesting triples. There's plenty of data offered by the European Union that is not very accessible. Also, having a way to ingest unstructured data with predefined constraints using ontologies would be particularly useful. I have no idea how to implement that, though.
If you already created the instance, you need to click on the little hat icon at the top right, and there you can see user guides and some other datasets. Unfortunately, the feature with a starting dataset was removed. You can, however, migrate your Sandbox (sandbox.neo4j.com/) over to Aura with the new Push to Aura feature.