Thank you, thank you. This information is skipped over in most machine learning courses, and no one will teach you this. In practice, a lot of data is temporal in nature, while all along you only learned how to classify cats and dogs and regress house prices.
I have a question. I have time series data for a market spanning 2012 to 2022, and I need to forecast the number of customers that visit the store. But from 2020 to 2022, because of COVID-19, the number of customers dropped a lot. In this case, if I use the last 30% of the data (2019 to 2022) for testing, the model never sees any COVID-affected data during training (all of it is used for testing). Doesn't that make the forecast MAPE very high? What should I do in this case? (Sorry for my poor English.)
Very informative and intriguing talk. I've been using SARIMAX and things like fbprophet for time series forecasting. I have a question about the value of the ML approach. Considering there is a host of things you need to account for when modeling a time series problem as an ML problem, is it actually significantly better than traditional algorithms? Is this production-grade stuff, or is it in the early experimental stages? I must admit the ML approach sounds way more interesting than what I've been doing for the past few years.
We basically use XGBoost and LightGBM for forecasting, or even linear regression, so these models are fit for production. ML models have the advantage that they let you enrich the features extracted from the time series with features from external sources; hence, they are in general more versatile than classical forecasting models like ARIMA, which make many assumptions about the data and do not incorporate features very well.
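A minimal sketch of what "forecasting as tabular regression" looks like, on hypothetical data — `LinearRegression` stands in for XGBoost/LightGBM, which expose the same fit/predict interface:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical series: a trend plus noise.
rng = np.random.default_rng(0)
y = pd.Series(np.arange(100, dtype=float) + rng.normal(0, 1, size=100))

# Tabularize: past values of the target become the feature columns.
df = pd.DataFrame({f"lag_{k}": y.shift(k) for k in (1, 2, 3)})
df["target"] = y
df = df.dropna()  # the first rows have no complete lag history

X, t = df.drop(columns="target"), df["target"]
model = LinearRegression().fit(X, t)
one_step = model.predict(X.tail(1))[0]
```

Extra columns (price, promotions, weather forecasts, ...) would simply be added alongside the lag columns in `X` — that is the flexibility the ARIMA-style models lack.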
Great talk! How would you account for availability in your model? For example, let's say a SKU was out of stock for a portion of the training period. This could result in the sales lag feature being low for the out-of-stock SKU and high for substitute SKUs that were in stock.
If you are imputing the training-set mean in place of a missing data point, does that mean the imputed point doesn't change your model estimation anyway, since the fitted model passes through the mean of the variables? I don't think it is information leakage in that case; it's just saying "ignore this data point".
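For what it's worth, the leak-free pattern is to fit the imputation statistic on the training split only and reuse that value at test time — a toy sketch with made-up numbers:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan, 4.0, 5.0, np.nan, 7.0, 8.0])
train, test = s.iloc[:5], s.iloc[5:]

# Compute the mean from the training portion only...
train_mean = train.mean()  # (1 + 2 + 4 + 5) / 4 = 3.0

# ...and use that same value to fill gaps in both splits.
train_filled = train.fillna(train_mean)
test_filled = test.fillna(train_mean)
```

Whether the imputed point is effectively "ignored" by the fit depends on the model: a plain linear regression is pulled toward the mean anyway, but tree models, or any model with additional features in that row, can still be affected by the filled-in value.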
Awesome lecture! I just have one question. At 32:38, Kishan mentions that the time indexes for different groups can be different, which is fine. But the original consolidated data (all groups included) has continuous timestamps, whereas when we consider different groups, there may be gaps in the timestamps. Would you still consider them time series? Will the rest of the process work normally under these circumstances?
*Abstract*
This talk explores how to adapt machine learning models for time series forecasting by transforming time series data into tabular datasets with features and target variables. Kishan Manani discusses the advantages of using machine learning for forecasting, including its ability to handle complex data structures and incorporate exogenous variables. He then dives into the specifics of feature engineering for time series, covering topics like lag features, window features, and static features. The talk emphasizes the importance of avoiding data leakage and highlights the differences between machine learning workflows for classification/regression and forecasting tasks. Finally, Manani introduces useful libraries like Darts and sktime that facilitate time series forecasting with tabular data and provides practical examples.

*Summary*

*Why use machine learning for forecasting? (**1:25**)*
- Machine learning models can learn across many related time series.
- They can effectively incorporate exogenous variables.
- They offer access to techniques like sample weights and custom loss functions.

*Don't neglect simple baselines though! (**3:45**)*
- Simple statistical models can be surprisingly effective.
- Ensure the uplift from machine learning justifies the added complexity.

*Forecasting with machine learning (**4:15**)*
- Convert time series data into a table with features and a target variable.
- Use past values of the target variable as features, ensuring no data leakage from the future.
- Include features with known past and future values (e.g., marketing spend).
- Handle features with only past values (e.g., weather) by using alternative forecasts or lagged versions.
- Consider static features (metadata) to capture differences between groups of time series.

*Multi-step forecasting (**8:07**)*
- Direct forecasting: Train separate models for each forecast step.
- Recursive forecasting: Train a one-step-ahead model and use it repeatedly, plugging forecasts back into the target series.

*Cross-validation: Tabular vs. time series (**11:32**)*
- Randomly splitting data is inappropriate for time series due to temporal dependence.
- Split data by time, replicating the forecasting process for accurate performance evaluation.

*Machine learning workflow (**13:00**)*
- The time series forecasting workflow differs significantly from classification/regression tasks.
- Feature engineering and prediction-time handling vary depending on the multi-step forecasting approach.

*Feature engineering for time series forecasting (**14:47**)*
- Lag features: Use past values of target and features, including seasonal lags.
- Window features: Compute summary statistics (e.g., mean, standard deviation) over past windows.
- Nested window features: Capture differences across various time scales.
- Static features: Encode categorical metadata using target encoding, being mindful of potential target leakage.

*Overview of some useful libraries (**27:01**)*
- tsfresh: Creates numerous time series features from a data frame.
- Darts and sktime: Facilitate forecasting with tabular data and offer functionality like recursive forecasting and time series cross-validation.

*Forecasting with tabular data using Darts (**28:04**)*
- The example demonstrates forecasting with lag features and future known features on single and multiple time series.

Disclaimer: I used Gemini 1.5 Pro to summarize the YouTube transcript.
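The "split by time" point from the summary is essentially what scikit-learn's `TimeSeriesSplit` does — each fold trains only on observations that precede the test window (toy index data for illustration):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(24).reshape(-1, 1)  # 24 hypothetical time steps

splits = list(TimeSeriesSplit(n_splits=3).split(X))
for train_idx, test_idx in splits:
    # Unlike a random split, every training index precedes every test index.
    assert train_idx.max() < test_idx.min()
```

Darts and sktime ship their own backtesting utilities that play the same role while also re-running the multi-step forecasting procedure inside each fold.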
Hi, does anyone know how to implement the recursive forecasting that he did in Darts using sktime? I couldn't really find an intuitive explanation online.
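As far as I know, the sktime route is `make_reduction(regressor, window_length=..., strategy="recursive")` from `sktime.forecasting.compose`. For the intuition, the loop it wraps is easy to hand-roll — a sketch on a hypothetical linear-trend series, using plain scikit-learn so the mechanics are visible:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical series with a pure linear trend, so forecasts are easy to check.
y = list(np.arange(50, dtype=float))
n_lags, horizon = 3, 5

# 1. Fit a one-step-ahead model on sliding windows of lag features.
X = [y[i - n_lags:i] for i in range(n_lags, len(y))]
t = y[n_lags:]
model = LinearRegression().fit(X, t)

# 2. Forecast recursively: append each prediction to the history
#    and build the next lag window from it.
history = list(y)
forecasts = []
for _ in range(horizon):
    step = model.predict([history[-n_lags:]])[0]
    forecasts.append(step)
    history.append(step)
```

The key point is step 2: beyond one step ahead, the lag features are filled with the model's own earlier predictions rather than observed values, which is exactly what Darts does when you call `predict` with a horizon longer than one on a lag-based model.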
Has anybody compared model results using the same dataset and the same parameters in sktime and Darts? For example, the ARIMA model from both packages. I've tried it, and the two models gave different MAPE results. I hope I have made a mistake in my code.