For multi-step prediction, can we set the units in the last dense layer to the number of time steps? For example, based on the frequency of your original time series, if the individual observations are 1 hour or 1 day apart, could you set the units in the dense layer to 24 or 10 for a 24-hours-ahead or a 10-days-ahead prediction?
The number of units in the last dense layer does not need to be tied directly to the number of time steps you want to predict. The dense layer typically represents the dimensionality of the output space, which is independent of the number of time steps (but yes, you can still do that). There are two approaches you can take here:

1. Direct multi-step prediction: predict all future time steps at once using a single output layer. For a 24-hours-ahead prediction the output layer would have 24 units; for a 10-days-ahead prediction it would have 10 units.
2. Auto-regressive (recursive) prediction: predict one time step at a time and feed the predicted value back into the model to predict the next time step.
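As a minimal sketch of the direct multi-step option (the layer sizes and window length here are illustrative assumptions, not from the video): a recurrent encoder reads a window of past observations, and a final dense layer with 24 units emits all 24 future hourly steps at once.

```python
import tensorflow as tf

# Hypothetical setup: 48 past hourly observations, 1 feature per step.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 1)),  # (time steps, features)
    tf.keras.layers.SimpleRNN(32),         # recurrent encoder
    tf.keras.layers.Dense(24),             # 24 units -> 24 future steps at once
])
model.compile(optimizer="adam", loss="mse")
```

For the recursive approach you would instead keep `Dense(1)` and call the model in a loop, appending each prediction to the input window.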
If you take 10 points and predict 1, then the next time you take 10, doesn't that y value from the first iteration now become a point in the new X set? Or am I missing something?
Not certain if I'm understanding your question correctly but each cell in the network gives two numbers, the output and the new hidden state. I believe in your example the output is 1, but that is not what is passed to the next cell. The hidden state (which is something other than 1) *is* passed to the next cell. So the output 1 is never used as an input to a new cell.
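To make the output-vs-hidden-state distinction concrete, here is a toy single-cell RNN step in plain NumPy (the weights and sizes are made up for illustration): at each step the cell emits an output, but only the hidden state `h` carries over to the next step.

```python
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(1, 4))   # input -> hidden weights
W_hh = rng.normal(size=(4, 4))   # hidden -> hidden weights
W_hy = rng.normal(size=(4, 1))   # hidden -> output weights

h = np.zeros(4)                  # initial hidden state
for x_t in [0.5, 0.1, -0.3]:     # a toy input sequence
    h = np.tanh(x_t * W_xh[0] + h @ W_hh)  # new hidden state
    y_t = h @ W_hy               # this step's output
    # y_t is the prediction at this step; it is NOT fed to the next step.
    # Only h is carried forward into the next iteration.
```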
@flips4life892 I am talking about the X/Y split. If you have a sequence of data, you take 10 points as X and the 11th as y, say y1. Then the next 10 as X. Doesn't that last point y1 now become included in your next 10 X values?

```python
x = RNN_Input[0:10]
y = RNN_Input[10]
print(x)
print(f"y = {y}")

x1 = RNN_Input[1:11]
y1 = RNN_Input[11]
print(x1)
print(f"y1 = {y1}")
```
In case you get an error in the "Cleaning the Data" cell, the code below worked for me:

```python
import yfinance as yfin

gold = yfin.download('GC=F', '2015-12-20', interval='1d')
```
Depending on which formula you used to scale your data, you'd have to reverse the mathematical transformation. Here is some code that might help:

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.inverse_transform(....)
```
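One caveat worth spelling out: `inverse_transform` only works on a scaler that has already been fitted, and it expects 2-D input with the same number of columns it was fit on. A runnable sketch with made-up prices:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy data standing in for the gold prices (illustrative values only).
prices = np.array([[1800.0], [1825.0], [1790.0], [1810.0]])

scaler = StandardScaler()
scaled = scaler.fit_transform(prices)        # forward: (x - mean) / std
restored = scaler.inverse_transform(scaled)  # reverse: x * std + mean
```

If you scaled your targets with the same scaler, pass your model's predictions (reshaped to 2-D) through `inverse_transform` the same way to get them back in price units.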
You are doing great, but your model does not really perform. A simple moving average will give the results you are getting with the RNN. So what's the sole purpose?
Haha, yeah. The use case was just an example; the end goal was to explain the intricacies behind the idea and usage of an RNN. RNNs, however, are somewhat of a base architecture, and you will see parts of an RNN used in larger, more complex ML models that can perhaps perform better with alternative data.
I unfortunately am not on a Mac M1 -- but you can attempt to follow along in a Google Colab notebook; it should help with standardization across operating systems.