Thanks Mike, I wish this tutorial had been around when I first started trying to find my way around Vertex AI. Whilst I enjoy finding things out by trial and error, the older I get the more I value my time, and having someone show me the first steps and explain what to watch out for is invaluable.
Don't get disheartened by the traffic you're getting or the class imbalance of views vs. likes 😅, trust me, many don't even know this playlist exists, plus many don't even know what MLOps is. I've been looking for something like this for the last 2 months and dunno why the YouTube algo didn't put this up for me before. Would suggest adding hashtags maybe. Cheers 🤘👊🙌
When modifying the version data using the suggestions (which describe false positives, false negatives, and true positives) located under "labels" in Model Registry > Evaluate > (selected label, i.e. '0' or '1'), where are the changes reflected (if anywhere at all)? I don't see any changes made to the dataset the version was trained on after making the changes. How can you use these changes and continue modifying the dataset alongside the changes made from the suggestions?
Great stuff! I enjoyed the excellent explanations and following along with the cloned GitHub repo. I have a few questions. 1) Since I changed a few lines in the notebooks, what happens if I commit the changes: is git going to try to update your repo? 2) I would like to see the underlying model architecture: how can I see this? Perhaps the answers are coming in the following videos.
There are already a few videos in this series that incorporate some basic pipelines. The repository linked in the description also has more examples of advanced pipelines. Let me know if there is any specific part of pipelines you would like to see featured!
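In the meantime, here is a very rough sketch of what a minimal Vertex AI Pipelines run looks like with the KFP SDK; the component, project, and bucket values below are placeholders, not code from the repo:

```python
from kfp import dsl, compiler
from google.cloud import aiplatform

@dsl.component
def say_hello(name: str) -> str:
    # A lightweight Python component that just returns a greeting.
    return f"Hello, {name}"

@dsl.pipeline(name="hello-pipeline")
def hello_pipeline(name: str = "Vertex AI"):
    say_hello(name=name)

# Compile the pipeline definition, then submit it as a Vertex AI Pipelines run.
compiler.Compiler().compile(hello_pipeline, "hello_pipeline.yaml")

aiplatform.init(project="my-project", location="us-central1")  # placeholder project/region
job = aiplatform.PipelineJob(
    display_name="hello-pipeline",
    template_path="hello_pipeline.yaml",
    pipeline_root="gs://my-bucket/pipeline_root",  # placeholder bucket
)
job.run()
```

The real pipelines in the repo chain together data prep, training, and deployment steps in the same way, just with more components.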
Great tutorial. Thanks for sharing. I have a question about how to create a batch prediction for images. In your case you used tabular data (where you used BigQuery), but when I'm trying to make a batch prediction for images, it is asking me to provide a JSONL format file. I'm a little confused about providing the JSONL file. Thank you.
Hi Farukh, I look forward to expanding these demos to also show text- and image-based ML. Until then I will try my best to answer you here in the YouTube comments. For an AutoML Vision model through the console (similar to this video), the workflow asks for a name, the model, a source path, and a destination path. The paths point to Google Cloud Storage buckets. The source is a .jsonl file in the storage bucket that has one line per image you want to predict. Each of these lines looks like {"content":"gs://path to image", "mimeType":"image/jpeg"} . Here is a link that may help: cloud.google.com/vertex-ai/docs/predictions/batch-predictions#batch_request_input
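If it helps, here is a small sketch of generating that JSONL file in Python before uploading it to your bucket; the bucket paths and file name below are just placeholders:

```python
import json

# Placeholder list of image URIs already uploaded to Cloud Storage.
image_uris = [
    "gs://my-bucket/images/img_001.jpg",
    "gs://my-bucket/images/img_002.jpg",
]

# One JSON object per line: this is the JSONL input for the image batch prediction job.
with open("batch_input.jsonl", "w") as f:
    for uri in image_uris:
        f.write(json.dumps({"content": uri, "mimeType": "image/jpeg"}) + "\n")
```

Upload batch_input.jsonl to your bucket and point the batch prediction source path at it.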
Thank you, I just wanted to check: after deploying the endpoint in Vertex, there is no need to go through extra steps to get a REST API endpoint, right? For AWS SageMaker, there is an extra API Gateway with Lambda step.
I may have an astonishing project that does the impossible... I need an ML setup to take it to the next level. It took me forever to code this one-of-a-kind tool, but I really need an ML model to analyze it. I have not been able to get AI to work on it properly, probably because no one has EVER seen a data set like this. I really need someone to help with this. Full NDA.
Thank you for the question. I have this notebook in the accompanying GitHub repo that shows a method for extracting the model type and hyperparameters from Cloud Logging: github.com/statmike/vertex-ai-mlops/blob/main/02%20-%20Vertex%20AI%20AutoML/02Tools%20-%20AutoML%20Cloud%20Logging.ipynb
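As a very rough sketch, the idea is to query Cloud Logging with the Python client for the training job's log entries; the filter string below is only a placeholder, the notebook above shows the actual query used to pull the model type and hyperparameters:

```python
from google.cloud import logging

client = logging.Client(project="my-project")  # placeholder project

# Placeholder filter: see the linked notebook for the exact filter that matches
# the AutoML training logs containing model type and hyperparameter details.
log_filter = 'resource.type="cloudml_job"'

for i, entry in enumerate(client.list_entries(filter_=log_filter, order_by=logging.DESCENDING)):
    print(entry.payload)
    if i >= 9:  # just peek at the 10 most recent entries
        break
```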
It is also now possible to have more control over AutoML by using AutoML Workflows. I will add this in my next update pass for the AutoML series. A good documentation page to read about this is: cloud.google.com/vertex-ai/docs/tabular-data/tabular-workflows/e2e-automl
Hello Priya, you can skip the deployment to an endpoint and do the batch predictions. The endpoint is not required for batch predictions, which run as a separate job. Hope this helps!
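For reference, a batch prediction job can be started straight from the model with the Vertex AI Python SDK; this is a minimal sketch and the project, model ID, and BigQuery tables below are placeholders:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholder project/region

# Placeholder model resource name copied from the Vertex AI Model Registry.
model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

# The batch prediction runs as its own job; no endpoint deployment is involved.
batch_job = model.batch_predict(
    job_display_name="fraud-batch-predictions",
    bigquery_source="bq://my-project.my_dataset.input_table",
    bigquery_destination_prefix="bq://my-project.my_dataset",
    instances_format="bigquery",
    predictions_format="bigquery",
)
batch_job.wait()
print(batch_job.state)
```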
Hi Tariq, any time you train a model, custom or with the AutoML service, you are taking a snapshot in time of the inputs = the training data. If you preprocess those inputs, then those preprocessing steps need to be replicated when using the resulting model for predictions. If you choose a method with built-in preprocessing, AutoML or a custom model with built-in preprocessing, then the preprocessing steps become part of the model and are replicated automatically during prediction. A good guide for reviewing the automatic preprocessing done by Vertex AI AutoML can be found at this link: cloud.google.com/vertex-ai/docs/datasets/data-types-tabular
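As one illustration of "built-in preprocessing" for a custom model (a generic sketch, not what AutoML does internally), a TensorFlow Keras model can carry its own feature scaling by including a Normalization layer adapted to the training data; the toy numbers below are made up:

```python
import tensorflow as tf

# Toy numeric features (placeholder values): e.g. age and income.
x_train = tf.constant([[25.0, 50000.0], [40.0, 82000.0], [31.0, 61000.0]])
y_train = tf.constant([[0.0], [1.0], [0.0]])

# The Normalization layer learns the feature means/variances from the training
# data and is saved inside the model, so the scaling travels with the model.
norm = tf.keras.layers.Normalization()
norm.adapt(x_train)

model = tf.keras.Sequential([
    norm,
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)

# Raw, unscaled inputs can be sent at prediction time; the model normalizes internally.
print(model.predict(tf.constant([[35.0, 70000.0]])))
```

Because the layer is part of the saved model, the same scaling is applied automatically when the model is used for online or batch predictions.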