Тёмный
Stable Discussion
Stable Discussion
Stable Discussion
Подписаться
AI Still Can't Code Alone
16:14
Месяц назад
The AI Note Taking Powerhouse - Obsidian
15:17
3 месяца назад
Avoiding HypeGPT: Navigating The AI LLM Hype
10:44
5 месяцев назад
Will AI Agents Take Jobs from Programmers?
9:43
6 месяцев назад
Fine Tuning ChatGPT is a Waste of Your Time
9:40
6 месяцев назад
Should Devs Worry About OpenAI?
5:32
6 месяцев назад
Will Junior Devs Survive AI and ChatGPT?
11:27
6 месяцев назад
Learning AI as a JavaScript Developer
9:54
7 месяцев назад
The OpenAI API is better than ChatGPT
8:36
7 месяцев назад
Don't just use ChatGPT with your PDFs
8:17
7 месяцев назад
What is AI: Beyond ChatGPT
4:19
10 месяцев назад
Комментарии
@EvansOasis
@EvansOasis 8 дней назад
Very well done explaining the uses of this tool! I believe future version of it will change how we go about everything, and a big step will be the ability to visualize and interact with these connections. I wanted to let you (and whoever sees this) know that I've collaborated with Brian (Creator of Smart Connections) and released the official plugin companion: "Smart Connections Visualizer". For this first version, it's an obsidian graph view that shows relevant connections to your current note. I'll still be adding much much more to it, also with an ability to customize how you see things like no other! Give my channel a peak if you want to find out more If it's not too much, could you pin this comment to let people know about this tool to enhance their experience with SC?
@Saintel
@Saintel 22 дня назад
If you use ChatGPT does that not just make your Obsidian notes no longer private?
@ramakrishnaprasadvemana7833
@ramakrishnaprasadvemana7833 23 дня назад
This is good Working tutorial will help viewers to understand more deeply nuances and start applying the concepts learnt. And some of them will come back and enrich every one with their experience
@yanrongliao846
@yanrongliao846 29 дней назад
Hello, I don't know the principle of the pipeline, I wonder how can I establish a didicated law's gpt, I just upload some pdf, and each is at about 10Mb,but I failed to let gpt answer my quetion even if the pdf are so precise.
@AdnanAli
@AdnanAli Месяц назад
Congratulations. There are tools like Langsmith now that can be used to show the chain used by the GPT.
@StableDiscussion
@StableDiscussion Месяц назад
Thanks! And thank you for watching! I like Langsmith but if you’re on an old version of Langchain I think it won’t be compatible with the current method usage as the API has changed a lot. Especially with old workarounds from 7 months ago. Couldn’t even get the old app building so it was unlikely I’d be able to add tooling on top. But thanks for the recommendation!
@DevulNahar
@DevulNahar Месяц назад
This is pretty cool. Can you give a tutrial of PDF ingestion pipeline?
@StableDiscussion
@StableDiscussion Месяц назад
Thanks for watching! Hoping to dive into more detailed work at a future milestone. Will update here when that comes!
@MichealScott24
@MichealScott24 Месяц назад
@petportal_ai
@petportal_ai Месяц назад
Congratulations on the 1k subscriber milestone! Well deserved with such great content. Turning my notifications on!
@TechAtScale
@TechAtScale Месяц назад
Have you taken a look at Amazon Q with dev mode in an IDE yet?
@MichealScott24
@MichealScott24 Месяц назад
🫡❤
@marketfarm
@marketfarm Месяц назад
"hallucinates a guess". I like that. 😆
@Maisonier
@Maisonier Месяц назад
There is anyway to use the GPU for this?
@101RealTalker
@101RealTalker 2 месяца назад
My vault is over 3million words across 2k+ files, all geared towards one project. I was excited to use the Co-Pilot plugin because it advertises "vault mode", but was quickly disappointed when its default reference amount was only 3 notes/files at a time (lol), and can only stretch to 10 but gives a warning that it will prolly screw up the responses. I want to communicate with my vault as a whole for perfect macro context, but it seems my case use is still not possible with current Ai? SmartConnections doesn't seem to be any better, or am I mistaken?
@StableDiscussion
@StableDiscussion Месяц назад
Sweet! That’s a good size! Smart Connections is similar but breaks files into blocks. You’ll pull several related references from within the context of some files which performs better but may take things out of context. Seems like it would perform better This problem is actually generally not so much about what AI is capable of today, it’s that general (solve all) solutions often don’t perfectly fit the problem space or some specific domain. Early days, and everyone is still figuring it out. Some new models can handle a lot of context but no infinite search over your notes yet but one solution could exist that might make you not care about it much
@CitizenWarwick
@CitizenWarwick 2 месяца назад
We had a well crafted GPT4 prompt with many tests covering our desired outputs. We took gpt35 and fine tuned it and now it's performing the same. Worked well for our use case!
@YanMaosmart
@YanMaosmart 24 дня назад
Can you share how many datasets have you used to finetune? Used arounds 200 examples but finetuned model still not work quite well
@CitizenWarwick
@CitizenWarwick 24 дня назад
@@YanMaosmart around 600 though I guess success depends on expected output, we output JSON and our prompt is conversational
@NLPprompter
@NLPprompter 2 месяца назад
dude try obsidian copilot can use ollama, with that can use dolphin mistral = it's uncensored AI so there is no guardrail, this way we can be anything creative with texts.
@StableDiscussion
@StableDiscussion 2 месяца назад
Thanks for the suggestion! I like the look of that but don’t see a lot of activity on the GitHub. Still seems nice to be able to bring in other models, especially local models Dolphin has been pretty fun to play with too
@NDnf84
@NDnf84 2 месяца назад
Almost none of this is AI.
@StableDiscussion
@StableDiscussion 2 месяца назад
Right?! We make our own stuff here with very little generation of content. Thanks for noticing ❤️
@AnimusOG
@AnimusOG 2 месяца назад
Well done my man. Keep it up. Your content is valuable because your explainations are excellent!
@StableDiscussion
@StableDiscussion 2 месяца назад
Much appreciated!
@yongyu2032
@yongyu2032 3 месяца назад
Hi! Good video! I am wondering if you are using any of the openai service essentially you are not running the model locally right? Is there a local embedding and llm available for this, such as llama2 or mistral. Thanks! Awesome video.
@StableDiscussion
@StableDiscussion 3 месяца назад
Thanks! Glad you enjoyed it! Local embedding are definitely available. Local models currently don’t seem supported but you can proxy requests in these plugins and that could make something like that work too
@marketfarm
@marketfarm 3 месяца назад
Yes! I’m ready to upload my decades of original notes and content into Obsidian and instruct my LLM to cull through them to create new material based on my own source material. Come on, pick my brain.
@needsmoreghosts
@needsmoreghosts 3 месяца назад
Aye, very insightful, and was something I also picked up from using Stable Diffusion a lot. One thing that's helped a lot with prompting ChatGPT is the '-' dash seperator. It's a good way to make sure words are often not linked together as tokens, as 'Space Dash Space' is often just 1 seperate token in and of itself. If there are two seemingly odd words that I want to seperate in a prompt, it seems to give decent indication. I would say with json formatting, this can actually be pretty decent, as it's a clean well understood set of characters with clear meanings, [1,2,3] or whatever value is probably easily interpritable.
@needsmoreghosts
@needsmoreghosts 3 месяца назад
Oh this is just what I was looking for! A really great breakdown of this, thank you bud. Didn't even realise that something like Obsidian existed... Here I am doing api calls through vscode like a pleb, definitely time to move!
@StableDiscussion
@StableDiscussion 3 месяца назад
Glad to hear! Check back in when you've had a chance to try it out!
@rfilms9310
@rfilms9310 3 месяца назад
"you to curate data to feed AI"
@korbendallasmultipass1524
@korbendallasmultipass1524 3 месяца назад
I would say you are actually looking for Embeddings. You can set up a database with Embeddings based on our specific data which will be checked for similarities. The matches would then be used to create the context for the completions api. Fine tuning is more to modify the way how it answers. This was my understanding.
@MrAhsan99
@MrAhsan99 3 месяца назад
thanks for the insight
@nyxthel
@nyxthel 3 месяца назад
Solid work! Thanks!
@kingturtle6742
@kingturtle6742 3 месяца назад
Can the content for training be collected from ChatGPT-4? For example, after chatting with ChatGPT-4, can the desired content be filtered and integrated into ChatGPT-3.5 for fine-tuning? Is this approach feasible and effective? Are there any considerations to keep in mind?
@dawoodnaderi
@dawoodnaderi Месяц назад
all you need for fine-tuning is samples of "very" desirable outcome/response. that's it. doesn't matter where you get it from.
@Charles-Darwin
@Charles-Darwin 3 месяца назад
This struck me too, it seems not many quite noticed the gravity of the massive context. Not sure if you saw it, but there's a paper on arxiv with the title "World Model on Million-Length Video And Language With RingAttention" published a day or two before Sora and google's gemma 1.5 were announced. They show results with near perfect retrieval.
@chrismann1916
@chrismann1916 3 месяца назад
Question - as it relates to the needle in the haystack performance the research you quote is very recent work but unfortunately this damn space moves so freaking fast. Or is this even a question or maybe a statement. From what I understand in the Google blog post Gemini uses a new Mixture-of-Experts (MoE) architecture, which apparently delivers much improved processing/understanding of long context. They are quoting crazy needle in the haystack results. At the same time, GPT4 has a hard time processing moderately complex prompts (well within the token range in which it preforms well at in the needle test) so as a builder with road-rash I have a raised eyebrow! What am I missing?! :-)
@StableDiscussion
@StableDiscussion 2 месяца назад
You’re quite right this space moves fast! There are a lot of things we’re finding now we have access to Gemini. Specifically, the way it handles controllability and context is very interesting. I think the way these AI are measured is going to continue to evolve. The needle in a haystack is a great test but we’re going to have to see what the cost of that optimization is going to have.
@christinawhisler
@christinawhisler 3 месяца назад
Is it a waste of time for novelist too?
@MaxA-wd3qo
@MaxA-wd3qo 3 месяца назад
why, why so tiny amount of subscribers. Very much needed approach to problems, to tell 'wait a minute... here are the stones on the road"
@chrismann1916
@chrismann1916 3 месяца назад
Brother, one of the best breakdowns of this topic I've found yet.
@DigitalLibrarian
@DigitalLibrarian 3 месяца назад
The kung fu panda movies are pretty good.
@anonymeforliberty4387
@anonymeforliberty4387 3 месяца назад
i didnt get your point. You showed us a diagram with two system prompts, one before and one after the user prompts to control them. But in your code example, we didn't see that in action.
@tecnopadre
@tecnopadre 4 месяца назад
Sorry but why then it's a waste of time? It wasn't clear or finally mention as far as I've listened
@Hex0dus
@Hex0dus 4 месяца назад
I really like the video and your style so count me in as a subscriber. But showcasing Deno inside of Jupyter notebook and trying it out for myself was a bug disappointment as Deno can't track the values between cells and therefore there is no autocomplete or intellisense at all. I see in your video that it's the same in your installation.
@StableDiscussion
@StableDiscussion 4 месяца назад
Thanks for the kind words! It’s definitely a mixed bag. The ability to have a consistent running environment in which to execute scripts makes iteration on AI solutions much more efficient. I found that often times I would be re-executing costly generations because I needed to return to a desired state to test some aspect of my solution. Notebooks are better at this task. But the integration still has much to be desired and I totally think the lack of type consistency, autocomplete, and other core dev features makes it difficult to appreciate or enjoy
@Bboreal88
@Bboreal88 4 месяца назад
Hi, new sub here. Do you have any video teaching how to build an small language model or using RAG to get a very simple conversation project demo going?
@StableDiscussion
@StableDiscussion 4 месяца назад
Thanks for the subscription! We probably won’t be doing a video creating a small language model but we may look at adding RAG videos in the future. If you’re looking for something simple, you may refer to the code I referenced in the video. That’s a good starter for RAG concepts and you can look to optimize and break down later
@markburton5318
@markburton5318 4 месяца назад
Very timely. I was just experimenting earlier with generating RFP responses based on past winning bids and corporate knowledge bases. The RAG is returning irrelevant docs even when there are very relevant docs. On other use cases, RAG seemed to work fairly well. I think it is to do with obscure concepts. In the RFP questions throwing off the vector distances. The buyers are after all trying to separate the good from the best. I will try some of these options as quick experiments but I’m going to have to dive deep into the vector distances, etc, and properly diagnose, build solid evaluation criteria, unit tests and monitoring KPIs for this stage of the pipeline. And try fine tuning for this use case.
@StableDiscussion
@StableDiscussion 4 месяца назад
Sounds like it! Glad you found value in the overview! I’d imagine RFP generation wholesale is a very tricky space due to hallucinations. RAG will definitely help you in some cases but I’d imagine it would have a hard time keeping track of a firm’s competencies in relation to winning bid competencies. I’d look to reframe towards overall strategic direction over diving into the details
@user-bd8jb7ln5g
@user-bd8jb7ln5g 5 месяцев назад
Tinkering builds skills. Most people search for knowledge but its skills that they want, and skills will never be learned by watching an endless stream of social media, tutorials, lectures. It's doing vs knowing. Skills = applied knowledge = ability to do/create something
@user-bd8jb7ln5g
@user-bd8jb7ln5g 5 месяцев назад
AI hype is big business. After watching a lot of AI dedicated channels for almost a year, I have learned to mainly view them with distrust. They hype everything AI to death, never honestly discussing limitations or problems, nor the solutions to the problems. Sources of good information are not mentioned, moreover they rarely have insight into what works and doesn't. Just superficial regurgitation of press announcements. This isn't true of everyone, but many.
@bigbadallybaby
@bigbadallybaby 5 месяцев назад
Using chat gpt 4 its a odd mix of mid blowing capabilities, nuance, depth, subtlety to its answers but then often just really dumb predictive text where it has zero understanding
@NDnf84
@NDnf84 2 месяца назад
It's not capable of truly 'understanding' anything.
@MrAhsan99
@MrAhsan99 5 месяцев назад
you are absolutely right. Thanks for the advice
@larsfaye292
@larsfaye292 5 месяцев назад
This channel is gold. So practical, professional and balanced. Great job and great advice. Now I need to figure out what to build....
@StableDiscussion
@StableDiscussion 5 месяцев назад
Thanks for the support 🎉 Wishing you a successful brainstorm!
@petportal_ai
@petportal_ai 5 месяцев назад
Great video. Love the perspective!
@StableDiscussion
@StableDiscussion 5 месяцев назад
Glad you enjoyed it! Thanks!
@stephaniezeng7887
@stephaniezeng7887 5 месяцев назад
Great work! Excellent user experience :) It makes the AI image-generation process less daunting. One of the biggest challenges I encountered generating images is to generate multiple images with a consistent character. I usually ask the AI to provide the seed ID of a preferred image and generate a new one with the same seed ID. Sometimes it works, sometimes it doesn't. It would be cool to see if the Stable Canvas can solve a similar problem.
@larsfaye292
@larsfaye292 5 месяцев назад
Will you be able to feed the images back into the prompts to encourage a specific visual or artistry style?
@StableDiscussion
@StableDiscussion 5 месяцев назад
That's a really interesting idea @larsfaye292 We have a set of features in mind related to using prior images but are still exploring how best to incorporate them. Some analysis of the images that are generated to pull out styles could be a cool approach and it sounds plausible!
@DJPapzin
@DJPapzin 5 месяцев назад
🎯 Key Takeaways for quick navigation: 00:00 AI *agents aren't replacing developers.* 01:29 AI *lacks real-world ambiguity.* 03:03 Development *involves people.* 04:36 AI *faces challenges in adaptation.* 08:15 Technology *shifts, but job needs persist.* Made with HARPA AI
@BernardMcCarty
@BernardMcCarty 5 месяцев назад
Thank you. Your clear explanation of RAG was very useful 👍
@protovici1476
@protovici1476 5 месяцев назад
This video and opinion is fairly incorrect in regards to fine-tuning. Especially, fine-tuning can be utilized in any deep learning hyperparameters (i.e. GenAI, Discriminative AI, BERT, NLP) with any data set. Self supervision, supervised, to reinforcement learning just to name a few use cases of algorithms to solve a problem. RU-vids fine-tuning in their algorithm made me stumble upon this video. Highly recommended re-evaluation of this video to save folks from misunderstanding.
@LucienHughes
@LucienHughes 5 месяцев назад
You're right that the API is much more flexible, but bear in mind that ChatGPT is both cheaper (not charging per-token), and much easier to use multi-modally. Also, you can get better control of the context by knowing when and when not to start a new chat session.
@gopinathl6166
@gopinathl6166 5 месяцев назад
I would like to get your advice for creating conversational chatbot. Do RAG or Finetune be suitable because we have a CourtLAW based dataset that contains 1000's of PDF which is unstructured dataset of paragraphs?
@zainulkhan8381
@zainulkhan8381 2 месяца назад
Hello I am also trying to feed pdf data as an input to openai its unstructured set of data and ai is not able to process it correctly when I ask it to list transactions in pdf that it generated garbage values and not the actual values that are in pdf I am tired of giving prompts so I am looking forward to fine tune now
@zainulkhan8381
@zainulkhan8381 2 месяца назад
Did you achieved the results of the operations you were doing on your pdfs