🚀 There is so much more to explore in ML than just LLMs like ChatGPT. Feel free to grab my FREE cheat sheet of different ML domains and open challenges: borismeinardus.substack.com/p/a-list-of-different-ml-domains
I'm trying to register using your link above, but it's asking for an organization email? When I use my personal email address, there's no new-registration option??
Thank you!! Well, with prompt engineering you don't actually train the model, but it is related to the SFT stage: you can improve response quality by phrasing the prompt so that it aligns with the formatting the model saw during SFT :) I hope this makes sense haha
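To make the point above concrete, here is a minimal sketch of what "aligning with the SFT formatting" means: instruction-tuned models are fine-tuned on conversations wrapped in a fixed chat template, so prompts that follow the same template tend to get better responses. The `<|system|>`/`<|user|>`/`<|assistant|>` markers below are purely illustrative, not any specific model's actual format.

```python
def format_chat_prompt(system: str, user: str) -> str:
    """Wrap a request in a hypothetical SFT-style chat template.

    The marker tokens are an assumption for illustration; real models
    each define their own template.
    """
    return (
        f"<|system|>\n{system}\n"
        f"<|user|>\n{user}\n"
        f"<|assistant|>\n"  # the model continues generating from here
    )

prompt = format_chat_prompt(
    system="You are a helpful assistant.",
    user="Summarize attention in one sentence.",
)
print(prompt)
```

In practice, libraries such as Hugging Face Transformers expose the correct template for each model, so you rarely need to hand-write these markers yourself.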
📝 Summary of Key Points:
📌 The video explains the working mechanism of ChatGPT, focusing on how it understands text through an attention mechanism that identifies relationships between words to predict the next word accurately.
🧐 ChatGPT is trained with methods like masked language modeling and next-token prediction to learn language, grammar, and world knowledge. It undergoes pre-training on vast amounts of data to generate text that makes sense.
🚀 Supervised fine-tuning and reinforcement learning from human feedback are crucial steps in training ChatGPT to provide responses aligned with desired outcomes and preferences, enhancing its ability to generate suitable text.
💡 Additional Insights and Observations:
💬 The attention mechanism in ChatGPT helps it comprehend context and relationships between words, improving its predictive capabilities.
📊 GPT models are trained on massive datasets to learn language, grammar, and world knowledge, enabling them to generate coherent text.
🌐 Techniques like retrieval-augmented generation (RAG) allow for customizing GPT models without extensive retraining, making them versatile for various tasks.
📣 Concluding Remarks: The video delves into the intricate workings of ChatGPT, highlighting its training methods, mechanisms for understanding text, and techniques for customization. Understanding these processes provides insights into how AI language models like GPT function and adapt to different contexts effectively.
Generated using TalkBud
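The attention mechanism mentioned in the summary can be sketched in a few lines: each word scores its relationship to every other word, and a softmax normalizes those scores into weights. The tiny 2-D "embeddings" below are made up purely for illustration; real models use learned, high-dimensional vectors and separate query/key/value projections.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    """Scaled dot-product scores of one query vector against all keys."""
    d = len(query)
    scores = [
        sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
        for key in keys
    ]
    return softmax(scores)

# Made-up 2-D embeddings for the words in "the cat sat" (assumption)
embeddings = {"the": [0.1, 0.0], "cat": [0.9, 0.3], "sat": [0.4, 0.8]}

# How strongly "sat" attends to each word, itself included
weights = attention_weights(embeddings["sat"], list(embeddings.values()))
print(weights)  # three positive weights that sum to 1
```

The resulting weights are then used to mix the other words' representations, which is how the model builds up context before predicting the next token.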
The GTC sessions are all pretty badly timed for me, but I'm going to wake up extra early for a few. I had considered a full-day workshop, but those would really be full-night events for me lol. Thanks for sharing that!
Simple (and probably stupid) question: does a domain-specific problem, say understanding ocean data or medical images (here I'm using the word "understanding" in a very loose way), need specific pre-training on the domain data? Or can one just take an LLM pre-trained on the whole internet corpus and then fine-tune it?
I hope you can make a video on implementing a simple paper. I just saw your project video but have no idea how to actually do it. I'll be honest, I need some hand-holding at the start 😅
Bro, I want to fine-tune a model for a translation task. However, I encountered a 'CUDA out of memory' error. Now I plan to rent a GPU via an AWS EC2 instance. How is payment processed in AWS? They asked for card details when I signed up. Do they automatically charge the card?
Sora is a crazy good model. Not perfect, of course, but really, really good. It's so much fun to think about how people will use it in the future and what effect it (among all of AI) will have on the world. It's the speed of progress that makes this work exciting: just think of the Will Smith eating spaghetti video generated about 11 months ago. Let's see where we are in one more year! Some might argue it is not the most revolutionary when it comes to the technical details, but I think that's not why Sora is so amazing and relevant. It shows where the world is heading, and at what insane speed we are moving. Those of us interested in this technology are aware of all this, but the "normal" population still isn't, and most definitely will need to adapt. That's why ChatGPT was also so important: it brought an immense amount of attention to the world of AI.
Hey! I'm not 100% sure what you mean by software, but if you're referring to models that you want to train, you can look at some of these options:
- using smaller models
- quantizing the model, i.e. changing the datatype to a smaller one like float16 or float8
- using parameter-efficient fine-tuning techniques like LoRA
- using a smaller batch size
- using lower-resolution images (if using images/videos)
I hope this somewhat helps :)
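A quick back-of-the-envelope sketch of why the quantization point above helps with memory: the space needed just to store the weights scales with bytes per value, so halving the datatype width halves the weight memory. The 7B parameter count below is an assumed example, and this ignores activations, gradients, and optimizer state, which also take significant memory during training.

```python
# Bytes per value for common datatypes (float8 per the comment above)
BYTES_PER_DTYPE = {"float32": 4, "float16": 2, "float8": 1}

def param_memory_gib(num_params: int, dtype: str) -> float:
    """GiB needed just to store the model weights in the given datatype."""
    return num_params * BYTES_PER_DTYPE[dtype] / 1024**3

n = 7_000_000_000  # e.g. a 7B-parameter model (assumed example)
for dtype in BYTES_PER_DTYPE:
    print(f"{dtype}: {param_memory_gib(n, dtype):.1f} GiB")
```

Combined with LoRA (which trains only small adapter matrices instead of full gradients for every weight) and a smaller batch size, this is often the difference between fitting on one GPU and hitting 'CUDA out of memory'.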