@@LiamOttley I'm not a crazy advanced developer. Most of my coding experience is in JavaScript, and I'm learning a lot from GPT-4 while coding real-world projects. Right now I'm trying to convert your HormoziGPT to JavaScript, but I have no Python experience; GPT is assisting me with this. If you end up making a code walkthrough, it may be easier for me to translate between the languages.
Liam, many thanks for all your effort to help us create an AI-supported commerce business. I am an Algebra teacher with a Linear Algebra background, which allowed me last Nov 30 to immediately understand how ChatGPT 3.5 works. You are far above my ability, so I listen to your tutorial sessions two or more times to best understand how to make a Math tutor site for failing Algebra 1 students in CA (one which allows any K-12 student "non-paid" weekly quizzes or unit tests). I brought the failure rate down from over 40% to just 5 fails in my 121-student Sept-to-June cohort after I found out I could retest in the 3rd of eight four-week terms. My goal is to first help one CA Algebra 1 class have zero Fs, then a whole 8th-grade school have zero Fs, and then help as many Los Angeles 8th-grade classes have zero Fs as possible.
Definitely want that part 2. Great video, but I would especially love to see the part where you send the embeddings to Pinecone. Having a start-to-finish, step-by-step walkthrough of this project would be extremely valuable.
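In case it helps anyone else before a part 2 appears, here is a rough sketch of what the embed-and-upsert step could look like (assuming the OpenAI `text-embedding-ada-002` model and a pinecone-client v2-style API; the index name is just a placeholder):

```python
# Sketch of the embed-and-upsert step. make_vectors() is pure Python;
# embed_and_upsert() needs API keys and network access, so it is only
# defined here, not called.

def make_vectors(chunks, embeddings):
    """Pair each text chunk with its embedding in Pinecone's upsert
    format: (id, vector, metadata)."""
    return [
        (f"chunk-{i}", emb, {"text": text})
        for i, (text, emb) in enumerate(zip(chunks, embeddings))
    ]

def embed_and_upsert(chunks, index_name="hormozi-gpt"):
    """Embed chunks with OpenAI, then upsert into an existing Pinecone
    index. Assumes the openai and pinecone packages are installed and
    that pinecone.init() has already been called with your key."""
    import openai
    import pinecone
    resp = openai.Embedding.create(input=chunks,
                                   model="text-embedding-ada-002")
    embeddings = [d["embedding"] for d in resp["data"]]
    pinecone.Index(index_name).upsert(vectors=make_vectors(chunks, embeddings))
```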
Wow, what an amazing video! 🎉 I'm absolutely blown away by the creativity and effort put into this project. It's clear that you've got some serious skills, and I just wanted to take a moment to congratulate you on your outstanding work. 👏 I couldn't help but notice how well everything came together in this video, and it got me thinking: it would be absolutely fantastic if you could create a follow-up video where you explain the code behind this project. It would be incredibly insightful for aspiring programmers like myself to see the inner workings and learn from your expertise. 💡 Sending you a big shoutout all the way from Colombia! 🇨🇴 Your content has reached far and wide, and I just wanted to express my admiration and gratitude for the inspiration you provide. Keep up the incredible work, and know that you have fans cheering you on from all corners of the globe. 🌍💙
Dude!!! I've been waiting for this for a long time. Thank you! I'd really appreciate seeing the detailed process of building this out for my own use cases.
This is so awesome, yes please do an in-depth follow-up! I'm a MERN stack guy, but I'm going to use GPT-4 to help me understand your Python repo more 😂. This is super interesting.
Again, thanks for your time and effort; can't wait to try this out. The system message is a lot of fun to play with, and now with a vector database on top, I can't wait!
Mate, this application has the highest job-killer potential I've seen so far. I can think of at least 5-10 business use cases. Thank you for the video and the clear explanation.
I feel like this is huge. Figuring out how to aggregate different kinds of content, then chunk it up and store it in a vector database, is the future. I think the real question for a guy like you is: what is the big move that makes you rich? I have always gotten stuck providing services for people, but the real money is in products. I really hope you develop a product that makes you super rich. You are really awesome.
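The chunking step mentioned here can be sketched in a few lines of plain Python; this is a minimal fixed-size splitter with overlap (the sizes are arbitrary placeholders, and real pipelines often split on sentence or paragraph boundaries instead):

```python
def chunk_text(text, chunk_size=1000, overlap=200):
    """Split text into overlapping windows so context isn't cut
    mid-thought at chunk boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each chunk is then embedded and stored alongside its text as metadata, so a match at query time can be handed straight back to the model.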
Hi Sage, thanks for your kind words! I think this tech could really supercharge the individual with some world class mentors in the near future. Companies like wisdomai.com/ are already productizing things like this. I think there is a lot of money to be made now in productizing some kind of productivity boost that companies can apply to their systems. Better lead gen, better outreach, better follow ups etc even replacing entire members of staff. Just my two cents!
Hello! I am an AI developer and consultant in South Korea. I watched several videos on your channel and found it interesting that you are very much the role model I want to become!! (Development, consulting, entrepreneurship, and, if you become more famous, an AI-development YouTuber.) I would be grateful if you could post more good videos in the future. You have inspired me a lot!!! I am your fan from today!
Nice one, clear explanations. Best would be to show how a total beginner like me can set this up fully :) Or an online app where the user just has to enter their own API keys and a list of website links to get started; that would be great (Google Colab?).
😮😮😮... Half the stuff here just ran through my mind but the possibility of "cloning" Leila is all the motivation I need to learn about how to do it. Thank you so much for doing this 🎉. I love to see infinite game players crush it 💪🏽😈
Great video! Please do make the breakdown on setting up the Pinecone database. If you could then add the link to that video here, it would be just amazing!
Absolutely love all your videos. I have never coded before, but I am able to build apps because of you. Thanks for sharing. Please, please do a follow-up video; I would love to see how far this can go.
Nice video! Here are my thoughts. You did a great job showing the process to create this product, but I feel like the responses are generic and very similar to what ChatGPT would produce. I know this is only V1 and it needs to be fine-tuned. I would like more strategic, step-by-step advice, just like the advice he gives in the videos and podcast. I'm sure this will come with time, because I believe it can be extremely beneficial for beginners. Besides that, great work on the video, Liam. Smashed it.
Thanks mate 💪🏼 Responses sound like ChatGPT because they are being generated by the ChatGPT API, more prompt engineering can fix this. Haven't had time to test it enough to get it where I'm happy with it!
This was good! I got lost when you got to the data pipeline, to be honest. But I understand the concept that's been covered in other tutorials; I think you just did it differently, using Jupyter, Python, and VS Code.
Great work, great video, thanks for sharing! Keen to learn how you did the data pipeline as well, would be great if you do another video on that and share the data pipeline code as well. Cheers!
I was looking at your code and would like to know if, for the second part of this video, you could create an option to store embeddings locally and use semantic search from LangChain instead of 'pinecone_endpoint', since the latter is paid. As a beginner, I want to build locally and expand afterward. Does that make sense to you?
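For anyone else wanting the local route: for a small corpus you don't strictly need LangChain or Pinecone at all. Here is a sketch of in-memory semantic search with plain cosine similarity (you would supply real embedding vectors from whatever model you choose; the tiny vectors below are illustrative only):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query_emb, store, top_k=3):
    """store: list of (text, embedding) pairs.
    Returns the top_k texts closest to the query embedding."""
    ranked = sorted(store, key=lambda item: cosine(query_emb, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]
```

Embeddings can be pickled or saved to a JSON file between runs; the paid vector DB only becomes necessary once the corpus outgrows what a single machine can scan quickly.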
Hi, great video again. One question: is it possible to use a free embedding model for the vector database and for searching the question, and then, once we have the chunks, use OpenAI for the answer, for example? What about the quality of a free model used only to retrieve chunks? I've never seen any paper on that.
Hey Liam, thank you for all the information you are providing. 🙏🏼 It would be super useful if you could do a Pinecone/Botpress/Stack AI integration for when we are dealing with large amounts of data as a knowledge base and we can't just upload the data to Stack AI cause it would take forever to upload. Cheers!
Checked my OpenAI usage, and it looks like embeddings and Whisper for this project only cost $20 or so. The main sting is the Pinecone DB cost: $200 or so on my usage there, but my other projects could be lumped into that.
Tried making a PDF-skimming bot, and my biggest gripe is that OpenAI isn't really using the information to communicate so much as just reassembling parts of what it has access to. It'd be nice if it could reflect and report, much like GPT.
Try asking the contextual questions in relation to customer experiences. That's where most of the nuance would be; businesses usually run on how far the customer is willing to interact. I'm doubtful an LLM can grasp anything past a binary mode of happy and sad, but let's try that out, I could be wrong. The last thing you want is for the GPT-trained model's attempts at being upbeat to make people more upset. It definitely gets more personalised; I wonder how that would be solved, more algorithms I suppose. This was wonderfully delightful, thank you for your time, newly subbed ❤.
Cody AI already has an interface for this application that makes it very easy. I need the YouTube transcription to text, and I don't know how or why I should chunk it, since Cody can handle vast amounts of data. But if you can showcase how to get the data pipeline set up, this application can be run without coding.
Hello Liam, great video! May I ask if this code includes memory of past conversations, pulled together with the new prompt context and the Pinecone database context? So the final prompt would be: prompt context + Pinecone database context + past conversation memory context? Thank you!
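In case it helps, combining the three is usually just prompt assembly before the API call. A sketch (roles follow the ChatGPT API's message format; the variable names here are made up for illustration):

```python
def build_messages(system_prompt, retrieved_chunks, history, user_question):
    """Merge the persona prompt, retrieved vector-DB context, and past
    conversation turns into one ChatGPT-style messages list."""
    context = "\n\n".join(retrieved_chunks)
    messages = [{"role": "system",
                 "content": f"{system_prompt}\n\nRelevant context:\n{context}"}]
    # history is a list of prior turns, e.g.
    # [{"role": "user", ...}, {"role": "assistant", ...}]
    messages += history
    messages.append({"role": "user", "content": user_question})
    return messages
```

In practice the history has to be truncated or summarised once it approaches the model's token limit, since the retrieved chunks already eat into the window.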
Dude, do this with 2 more business podcasts, with a vector database for each of them. Run the result of each through GPT-4 and then merge those responses into a single answer.
Liam - any chance you could share how the telegram chatbots work where you can chat with famous people using voice? Like the Steve Jobs, CarynAI bots in the news.
A few questions: since the context would be really big when using a podcast series or books, and the tokens OpenAI can receive from the user (excluding OpenAI's trained context) are really limited (well, they increased a lot with the latest update, but you need the pro version), how do you handle a question on that big a context? Do you execute multiple prompts on multiple context fragments and then try to consolidate the data into a summary, or what?
Hahah damn, what a good recco by YT; I was looking into how to build something like this. My only gripe is that these all seem to do the same sort of thing: they only use specific portions of the text to generate a response. What if you want to use the entire body so it can understand all the context and the full picture? Maybe the difference in output would be minimal?
Token window is too small; I couldn't use the entire body of his podcasts. You could use recursive summarization potentially, but I still prefer a system like this where the prompt is not choked up to the brim with tokens. The point is that only a few snippets of all his podcast content are relevant to a given query, and this system lets you retrieve them!
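Roughly what recursive summarization looks like, as a sketch with an injectable `summarize` function (a stub stands in here for the real LLM call, which would be a chat completion asking for a summary of the joined text):

```python
def recursive_summarize(chunks, summarize, group_size=3):
    """Collapse a long list of chunks into a single summary by
    repeatedly summarizing groups of chunks until one remains."""
    while len(chunks) > 1:
        chunks = [summarize(" ".join(chunks[i:i + group_size]))
                  for i in range(0, len(chunks), group_size)]
    return chunks[0]
```

The trade-off is exactly the one described above: every recursion level costs tokens and loses detail, whereas retrieval only pays for the few snippets that are actually relevant.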
@@LiamOttley I figured; it'd probably be more wasteful in terms of tokens than actually helpful in terms of providing an accurate answer to whatever question is asked.
How do you know it won't pull the audio transcription of the other guests? How does your repo know that the text is from your speaker and not the guest?
I only downloaded solo Alex podcasts, which are like 80% of them; I ignored the guest appearances he does. You can use tools like Descript to strip out text from certain speakers, though.
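If your transcription tool outputs speaker-labelled segments (diarization), the filtering can also be done in code. A sketch, assuming segments shaped like `{"speaker": ..., "text": ...}` (the exact schema depends on the tool you use):

```python
def keep_speaker(segments, speaker):
    """Keep only transcript text attributed to the given speaker,
    joined back into one string."""
    return " ".join(s["text"] for s in segments if s["speaker"] == speaker)
```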
I'm getting roughly a similar effect by using llama-index embeddings. Not sure what the benefits of using Pinecone over llama-index are, though. I have been noticing some hallucinations in the outputs when I query the data, though; not sure why that is. Where are you storing your data files? Is it all local?
Llama index and other quick index tools are good for testing but lack the full control needed to create something tailored to your needs and that can run at scale. I can edit every step of the way with this kind of system which is why I prefer to use it normally. My data is all in a Pinecone vector DB hosted on their servers
I would like to start an e-commerce business with your company and your guidance; besides that, I would like to start a digital marketing business with your guidance and support. Thank you, sir.