Just wanted to say thank you for the DVC tooling and for the quality video content coming out on this channel. I recently got back into an ML personal project and wanted to focus on making it easy to run and iterate on, so I can maximize what little sporadic time I can dedicate to developing it. Looking around for tooling solutions and finding DVC was a real boon, as it cleanly and thoughtfully addressed a lot of my pain points, concerns, and planned features (for instance, I was already building a DAG data-dependency execution pipeline myself). Thanks again to the team for the time and care put into this software, patreon'd.
2:57 Wanted to add for viewers/users that when you run this dvc run command, make sure you LEAVE NO SPACES BETWEEN THE PARAMETERS in the -p flag, or else dvc will throw an error. So double check you're typing -p prepare.seed,prepare.split (no spaces between the two arguments).
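To illustrate, here is a sketch of the full command from the tutorial (stage name and paths follow the video's project layout):

```shell
# Correct: the parameter names are comma-separated with NO spaces
dvc run -n prepare \
        -p prepare.seed,prepare.split \
        -d src/prepare.py -d data/data.xml \
        -o data/prepared \
        python src/prepare.py data/data.xml

# Incorrect: the space after the comma makes dvc parse
# "prepare.split" as a separate (unrecognized) argument
# dvc run -n prepare -p prepare.seed, prepare.split ...
```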
Got this "ERROR: failed to reproduce 'dvc.yaml': Parameters 'train.n_estimators' are missing from 'params.yaml'." when running dvc repro. It seems params.yaml has train.n_est rather than train.n_estimators. Changing the dvc.yaml file resolves this.
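In other words, the parameter names listed under a stage in dvc.yaml have to match the keys in params.yaml exactly. A sketch of the fix, assuming the stage is named train:

```yaml
# dvc.yaml (train stage) - parameter names must match params.yaml keys
train:
  cmd: python src/train.py
  params:
    - train.n_est   # was train.n_estimators; params.yaml uses n_est
```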
I am getting this error:
Traceback (most recent call last):
  File "src/featurization.py", line 59, in
    train_words = np.array(df_train.text.str.lower().values.astype("U"))
numpy.core._exceptions.MemoryError: Unable to allocate 1.85 GiB for an array with shape (20017,) and data type
It says in the description that the command dvc run has now been replaced with 'dvc stage add', but as far as I can see, stage add does not actually run the new pipeline stage. Would 'dvc exp run -n' work, or is the current procedure 'dvc stage add -n' followed by 'dvc exp run'?
Thank you very much for the tutorials! Very inspiring and helpful :) I have a question about hyperparameter tuning with DVC (and maybe CML). Is it possible? I have a DVC pipeline and I want to test a bunch of parameters (I am also using params.yaml). For example, a learning rate of [0.1, 0.01, 0.001, 0.0001]. It looks like I need to manually create a new branch and run 'dvc repro' for each learning rate, and then somehow compare these branches? Is there a more efficient way?
Hi Elena, good question- you can use DVC for hyperparameter tuning. Right now, you'd want one commit per experiment (they don't have to be different branches to do `dvc metrics diff`- you can also compare several commits on one branch!). However, we're working on a new feature for doing lightweight experiments without committing each time. Check out the feature in progress here: github.com/iterative/dvc/wiki/Experiments
@@dvcorg8370 Hi Elle, that was again a very nice tutorial, but this is exactly the question I had as well - when building models, it's unlikely that we'd want to commit each time (I'd typically run some form of hyperparameter optimization using an external tool, rather than tuning by hand). So the lightweight experiments without individual commits sounds super useful!
@@prraoable Yep, this makes perfect sense. Right now, you could do one commit per automatic hyperparameter search. In a sense, the parameters of your search (say, grid size & density, or priors for a Bayesian tool) would be the experiment. But ultimately, a lighter way of exploring parameter space is needed! The model we're planning to reach is: explore locally, commit your favorite(s) at the end of the day.
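For readers finding this thread later: the experiments feature discussed above has since shipped as dvc exp. A sketch of the learning-rate sweep from the original question, assuming a params.yaml key named train.lr:

```shell
# Queue one experiment per learning rate - no commit per run needed
for lr in 0.1 0.01 0.001 0.0001; do
  dvc exp run --queue --set-param train.lr=$lr
done

# Execute all queued experiments, then compare them in one table
dvc exp run --run-all
dvc exp show
```

Favorites can then be promoted to a commit with dvc exp apply, matching the "explore locally, commit at the end of the day" model described above.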
@davidaliaga4708 You are correct sir! We love an astute viewer! ❤️ dvc run was deprecated and replaced with dvc stage add, which sets up your stages with their dependencies and outputs. You can find the documentation here: dvc.org/doc/start/data-management/data-pipelines Once your pipeline is set up, you can run dvc repro to run only the stages that have changed!
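For example, the prepare stage from the video would now be defined and executed in two steps (stage name and paths follow the tutorial's project layout):

```shell
# Define the stage - this replaces the old dvc run, but does NOT execute it
dvc stage add -n prepare \
              -p prepare.seed,prepare.split \
              -d src/prepare.py -d data/data.xml \
              -o data/prepared \
              python src/prepare.py data/data.xml

# Execute the pipeline; only stages whose deps or params changed are re-run
dvc repro
```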
Hi @dormarcovitch8801! dvc repro will pull the data as defined in the prepare stage if it has not been pulled before or if it has changed. As you make changes to the following stages, if nothing changes with the data in the prepare stage, dvc repro will skip that portion of your pipeline.
Dear all, I would like to know how I can pass the correct Conda Python environment to the dvc repro command. Like most Python users, I have several different Python environments on my machine, all managed by Conda. For this project, I need DVC to run its pipeline (defined in the dvc.yaml file) under one specific environment. How can I do that?
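One way to do this, assuming the environment is named ml-env (a hypothetical name) and has dvc installed in it:

```shell
# Option 1: run the pipeline inside a specific conda env without activating it
conda run -n ml-env dvc repro

# Option 2: activate the env first, then run as usual
conda activate ml-env
dvc repro
```

Since dvc.yaml stage commands just invoke python from the current PATH, whichever environment dvc repro runs in is the one your stages use.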
When I run the prepare command I got an error like this: ERROR: failed to run: prepare.dvc -d src/prepare.py -d data/data.xml -o data/prepared python src/prepare.py data/data.xml, exited with 127. Exit code 127 means the command was not found, i.e. the python interpreter wasn't on the PATH. I found a solution: you need to activate conda first with the command "conda activate".