Neptune is the MLOps stack component for experiment tracking. It offers a single place to track, compare, store, and collaborate on experiments and models.
With Neptune, Data Scientists can develop production-ready models faster, and ML Engineers can access model artifacts instantly in order to deploy them to production.
I don't see foundation models replacing Deep Learning Engineers (often carrying the title of "Data Scientist") anytime soon. Foundation models won't replace every single use case of Deep Learning or ML where training models is still needed. Using an LLM is often overkill where BERT would suffice, as is using GPT-4V for object detection / segmentation tasks. To be cost-effective, models still need to be fine-tuned, data cleaned and selected, etc. An ML Engineer won't be able to do that as well as a Deep Learning Engineer.
The interesting part is that as we go from classic coding to ML and now to LLMs, the main characteristic is an increasing ability to deal with real-world, organic situations. This puts tremendous pressure on the control and test sides, because they want well-defined and controlled boundaries on the system. So the power and flexibility of ML and LLMs are also their curse when it comes to Ops.

As LLMs interact using natural language, it is almost as if we had to put the team inside the version control system. Maybe we should think of LLMs as members of the team: we don't version people, we train and certify them. Another way of thinking would be to do pair programming with the models: one LLM develops, another creates test cases, both in competition.
Are these tools enough for MLOps: GitHub, Maven, Jenkins, Docker, Kubernetes? But I also want to automate: 1. EDA and visualization, 2. Data preprocessing, 3. Website development in ML.
To be honest, to say that LLMs solve problems better than NLP shows me that you didn't dig into the subject deeply enough. They can and should be combined very well, plus there's the question of costs. Some tasks are done much better by classical NLP models. For instance, try to run entity extraction using an LLM on a huge data corpus. Good luck with that :)
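The cost point above can be made concrete with some back-of-envelope arithmetic. This is a minimal sketch with illustrative, assumed numbers (corpus size, tokens per document, and API price are hypothetical, not measured benchmarks):

```python
# Back-of-envelope cost of running entity extraction over a large corpus
# through a paid LLM API. All numbers are illustrative assumptions.

def llm_api_cost(num_docs: int, tokens_per_doc: int,
                 usd_per_million_tokens: float) -> float:
    """Cost in USD of sending every document through the API once."""
    total_tokens = num_docs * tokens_per_doc
    return total_tokens / 1_000_000 * usd_per_million_tokens

# Hypothetical corpus: 10 million documents, ~500 tokens each,
# at an assumed price of $1 per million input tokens (varies by provider).
cost = llm_api_cost(num_docs=10_000_000, tokens_per_doc=500,
                    usd_per_million_tokens=1.0)
print(f"One LLM pass over the corpus: ~${cost:,.0f}")
```

At these assumed numbers a single pass costs thousands of dollars, and every re-run costs the same again, whereas a local classical NER pipeline only costs compute time once it is set up. The exact crossover point depends entirely on the prices and corpus you plug in.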
Will age like milk. Look at NLP competitions: LLMs are outperforming everything else. Inference costs are going down really quickly, e.g., with Groq's LPUs. There is also rapid adoption by open-source libraries like spaCy, meaning development cost is going down as well. To top that, on none of these fronts are LLMs showing any signs of slowing down.
@@nvsurf Ok, now can you please give me a link to an NLP / text analysis tool you developed, so we could relate your statement to a real practical use case? LLMs work really great for text analysis when you prompt them via OpenAI or ChatGPT, but when you try to scale it, you'll quickly run into trouble and won't get results as good as you can get with the more traditional tools.
Reconciling the "opportunity cost" view (that might be nice today, but unless it's going to be essential tomorrow, let's sit on the fence and not fix it if it's not broken, hey) vs. the "social impact" view (hey, everyone else is doing it, so why don't we) is always tough when you're reporting to "Chiefs" (who were never "Braves")... it's just a diseconomy of scale of big businesses.
I wish we could leave "thumbs up" all over the videos, but I hope YouTube records the moment we click the button and translates it into some useful insight for the authors. This is one of the videos that'll have me watching it again for note-taking alone. Thanks for sharing!🤩