
PyTorch Time Sequence Prediction With LSTM - Forecasting Tutorial 

Patrick Loeber
273K subscribers
52K views

Published: 6 Oct 2024

Comments: 59
@patloeber · 3 years ago
Finally a new PyTorch tutorial. I hope you enjoy it :) Also, check out Tabnine, the FREE AI-powered code completion tool I used in this video: www.tabnine.com/?.com&PythonEngineer (This is a sponsored link. You will not have any additional costs; instead you will support me and my project. Thank you so much for the support! 🙏)
@iEdp526_01 · 3 years ago
Thank you for making this, I've been struggling with this stuff on and off for months. These videos on PyTorch made things click, I really appreciate you taking the time to make them. They've helped me immensely.
@anaximeno · 3 years ago
Thank you, this video helped me to understand how to use LSTM in PyTorch.
@patloeber · 3 years ago
glad it was helpful!
@amiralioghli8622 · 10 months ago
Hi, thank you for sharing your valuable information through this channel. I am one of the new followers of time series. If possible, could you create a series on how to implement Transformers on time series data, covering both univariate and multivariate approaches? Focusing on operations like forecasting, classification, or anomaly detection; just one of these would be greatly appreciated. There are no videos available on YouTube that have implemented this before. It would be extremely helpful for students and new researchers in the field of time series.
@yunlongsong7618 · 2 years ago
this is an amazing tutorial. thanks a lot for putting the effort. great job.
@CodeWithTomi · 3 years ago
Great! Another PyTorch tutorial.
@patloeber · 3 years ago
Hope you like it!
@LanTranLe-sk9cn · 2 years ago
Thank you so much. I found it very helpful!
@ansumandas5749 · 3 years ago
Please make a video on batch size, sequence length and input size, and how they are actually fed to the model
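[Editor's note: for readers with the same question, here is a minimal sketch of the shapes PyTorch's `nn.LSTM` expects; all the concrete numbers below are made up for illustration.]

```python
import torch
import torch.nn as nn

batch_size, seq_len, input_size, hidden_size = 4, 10, 3, 8

# batch_first=True means the input is laid out as (batch, seq, feature)
lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

x = torch.randn(batch_size, seq_len, input_size)
out, (h_n, c_n) = lstm(x)

print(out.shape)  # (4, 10, 8): one hidden state per time step
print(h_n.shape)  # (1, 4, 8): final hidden state, one per layer
```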
@patloeber · 3 years ago
thanks for the idea!
@fadoobaba · 5 months ago
many thanks!
@sciencei7saan459 · 1 year ago
Thanks, great job.
@saurrav3801 · 3 years ago
Good to see you again bro 🥺🔥
@patloeber · 3 years ago
Yeah :)
@cwumin2105 · 3 years ago
Hi Python Engineer, may I know what to do if we want to predict multiple steps instead of one step ahead? Hopefully you can show an example. Thanks
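[Editor's note: one common approach is to feed each prediction back in as the next input, autoregressively. A hedged sketch with a hypothetical one-step model (the class name, hidden size, and shapes here are illustrative, not the video's exact code):]

```python
import torch
import torch.nn as nn

class OneStep(nn.Module):
    """Hypothetical one-step predictor: scalar in, scalar out per time step."""
    def __init__(self, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, n_future=0):
        # x: (batch, seq_len) -- one scalar per time step
        h = torch.zeros(x.size(0), self.hidden)
        c = torch.zeros(x.size(0), self.hidden)
        outputs = []
        for step in x.split(1, dim=1):      # consume the known sequence
            h, c = self.cell(step, (h, c))
            pred = self.out(h)
            outputs.append(pred)
        for _ in range(n_future):           # then feed predictions back in
            h, c = self.cell(pred, (h, c))
            pred = self.out(h)
            outputs.append(pred)
        return torch.cat(outputs, dim=1)

model = OneStep()
y = model(torch.randn(2, 5), n_future=3)   # 5 known steps + 3 future steps
print(y.shape)  # (2, 8)
```

The further out you predict this way, the more errors compound, since each future step is conditioned on earlier predictions rather than real data.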
@scottk5083 · 3 years ago
Amazing content! Although, a quick question: I noticed you called 'self.hidden' at 29:48. However, I didn't see a corresponding parameter for self.hidden, i.e. self.n_hidden has n_hidden parameters, while I can't see the number of parameters for self.hidden
@patrickningi4259 · 3 years ago
great content as always
@patloeber · 3 years ago
Glad you enjoyed it
@JonasBalandraux · 3 months ago
Is it better for prediction performance to pass the output of one LSTM to the next or to pass the previous hidden state (as done in the video)? I've seen both methods used and don't know which is better. Do you have any advice on when to use each approach?
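[Editor's note: for what it's worth, PyTorch's built-in stacking (`num_layers=2`) does the former; each layer consumes the full output sequence of the layer below. A quick sketch of that variant, without claiming it outperforms the hidden-state wiring used in the video:]

```python
import torch
import torch.nn as nn

# A 2-layer stacked LSTM: layer 2 reads layer 1's output sequence.
stacked = nn.LSTM(input_size=1, hidden_size=8, num_layers=2, batch_first=True)

x = torch.randn(4, 10, 1)
out, (h_n, c_n) = stacked(x)

print(out.shape)  # (4, 10, 8): outputs of the top layer only
print(h_n.shape)  # (2, 4, 8): final hidden state of each of the 2 layers
```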
@GayalKuruppu · 5 months ago
Would be great if you could just import torch first 👀
@saadouch · 2 years ago
thanks boss!
@PWK95 · 3 years ago
This is amazing! How do you always know exactly what I need and make a tutorial about it? Any chance you could make a tutorial about how to make an estimator that can give out the width of the given sine function and the x-shift of the 3 sine functions relative to each other? That would quite literally save my life. I know it should be possible to do with a similar method employed in the video, but I just can't do it...
@maxmohamed9878 · 3 years ago
Well explained. Please keep doing the great work that you are doing
@patloeber · 3 years ago
Thanks a lot!
@greatsaid5271 · 3 years ago
your videos are amazing, thanks a lot 🙌
@patloeber · 3 years ago
Glad you like them!
@SP-db6sh · 3 years ago
Please post a video on a quick-start guide to torchmeta!
@grdev3066 · 3 years ago
Hi, great video! Just want to ask, why do we have 2 LSTM cells and not a single one? And not sure if I get it... in the forward() func we split samples by dim=1 to feed a sequence of elements, right? So if target_inputs has, say, 1000 elements (columns in this case), it means that our LSTM knows what happened 1000 points back and "uses" all of them to make the very next prediction? Thanks!
@rpcruz · 1 year ago
It could be a single LSTM cell. He just wanted to make it deeper. He splits the tensor along the sequence axis to process one time step per loop iteration.
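[Editor's note: to illustrate what that split looks like; the shapes below are illustrative.]

```python
import torch

x = torch.randn(3, 5)       # 3 samples, sequence length 5
steps = x.split(1, dim=1)   # tuple of 5 tensors, one per time step

print(len(steps))           # 5
print(steps[0].shape)       # (3, 1): every sample's first time step
```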
@DanielWeikert · 2 years ago
Can you do a video using Transformers for time series? Have not found anything useful on yt so far. Thanks and br
@oscar_lares · 2 years ago
Thanks for this video. Such a great help, and it cleared up some confusion. One question I had was: for the training, why are you only using the values from y and not the x?
@hannahw115 · 2 years ago
Don't you "destroy" some of the knowledge learned during training by initialising the hidden and cell state as zeros in each forward pass? Or is this a better approach than initialising the states once in the beginning? Maybe you could elaborate on that? :)
@MrEmbrance · 3 years ago
you explained nothing
@conscofd3534 · 2 years ago
I understand that your videos are "code along" style ones, BUT for the implementation there is too much of the HOW and, sadly, nothing of the WHY.
@wollmonsterchen · 1 year ago
Thanks for the helpful video. Is the code on GitHub? I didn't find it, and it would be very helpful to play around a little bit
@teetanrobotics5363 · 3 years ago
please put it in a playlist
@aimenmalik8929 · 2 years ago
hey!! may I know why we give x as input (x.split) and not y.split? Because our sine wave is basically in variable y.
@anilkumar-yd1rd · 3 years ago
Can you please guide me at a high level on how to implement the same for an MLP using PyTorch Lightning?
@regularviewer1682 · 1 year ago
Hey! I was wondering why are there multiple colors at the end when at the start there was only 1 sine wave? I'm confused where all the additional red and green lines came from. Thanks :)
@doctormaddix2143 · 1 year ago
I am trying to run this code on my GPU. It should work, but it doesn't. device = torch.device("cuda" if torch.cuda.is_available() else "cpu") returns 'cuda', so my GPU is being detected. I also copied the training and test inputs and targets to the GPU with .to(device), as well as the model (model = LSTMPredictor(hiddenstates).to(device)). But I still get the error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat2 in method wrapper_CUDA_mm). It occurs in the optimizer step (optimizer.step(closure)). What do you think?
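[Editor's note: the usual culprit with that exact error is hidden/cell states created inside forward() with a plain torch.zeros(...), which land on the CPU even when the model and inputs are on the GPU. A sketch of the fix, assuming the forward() initializes its states that way; class and attribute names here are illustrative:]

```python
import torch
import torch.nn as nn

class Predictor(nn.Module):
    def __init__(self, n_hidden=51):
        super().__init__()
        self.n_hidden = n_hidden
        self.lstm = nn.LSTMCell(1, n_hidden)
        self.linear = nn.Linear(n_hidden, 1)

    def forward(self, x):
        # Create the states on the same device (and dtype) as the input,
        # instead of a bare torch.zeros(...), which defaults to the CPU.
        h_t = torch.zeros(x.size(0), self.n_hidden,
                          device=x.device, dtype=x.dtype)
        c_t = torch.zeros_like(h_t)
        outputs = []
        for input_t in x.split(1, dim=1):
            h_t, c_t = self.lstm(input_t, (h_t, c_t))
            outputs.append(self.linear(h_t))
        return torch.cat(outputs, dim=1)

model = Predictor()
y = model(torch.randn(2, 4))  # runs on CPU; the same code now works on CUDA
print(y.shape)  # (2, 4)
```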
@rohanmenon9160 · 1 year ago
I don't know, I am a beginner and I use Jupyter notebooks. I copied the code perfectly (it runs with no errors) but I did not get any predictions or loss? Any idea what the cause might be?
@ahmadalghooneh2105 · 2 years ago
shouldn't it be h_t2 and c_t2 for self.lstm2?
@messedmushroom · 2 years ago
Would we not want to initialize the hidden state and cell state outside of the forward, so they capture long term features? Since they are in forward, aren't we removing all notions of long-term connectivity as they get cleared on every forward call?
@rpcruz · 1 year ago
Usually, you only want the LSTM to keep the memory during the sequence. For example, if I have an LSTM that recognizes activity in videos, then I want it to keep the memory while processing the frames of one video, but then I want it to forget it for the next video.
@pietheijn-vo1gt · 2 years ago
Hi. What method are you using to predict future samples? As I understand it, there are multiple methods
@amirsoltanpoor7421 · 2 years ago
When I run this I get the error: 'Tensor' object has no attribute 'append'
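[Editor's note: that error usually means the outputs variable was reassigned to a tensor before .append() is called; Python lists have .append(), tensors don't. Keep it a plain list while accumulating and concatenate once at the end:]

```python
import torch

preds = [torch.randn(2, 1) for _ in range(5)]  # stand-ins for per-step outputs

outputs = []                  # must stay a plain list while accumulating
for p in preds:
    outputs.append(p)         # list.append works; tensor.append does not exist
outputs = torch.cat(outputs, dim=1)  # convert to a tensor only once, at the end

print(outputs.shape)  # (2, 5)
```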
@DanielWeikert · 3 years ago
can you dive deeper into the various PyTorch package functions in a future video? e.g. detach vs. item, .Tensor vs. .tensor, when to use the LongTensor datatype, ...? Thanks and best
@patloeber · 3 years ago
Thanks for the suggestions! Will think about this :)
@GoForwardPs34 · 1 year ago
where is the code?
@yabindong1754 · 3 years ago
Why predict the sequence one by one? Can you treat each sequence as a feature and predict them at the same time?
@Johncowk · 3 years ago
LSTMs are recurrent networks: you need the result of the previous iteration to get the next. That's the way they work, and also one of their main weaknesses.
@gopikrishnan5206 · 3 years ago
Can you do a tutorial on Python data analysis and visualization covering the NumPy, pandas and matplotlib libraries?
@Omkar-ey3ls · 3 years ago
why did we call super() inside the LSTM predictor class? Is there any reason for this?
@patloeber · 3 years ago
Yes, we have to do this to initialize the superclass correctly (this is a basic thing to do in object-oriented programming in Python)
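[Editor's note: a minimal illustration of why the super().__init__() call matters for nn.Module subclasses: it sets up the machinery that registers submodules and parameters, and PyTorch refuses to assign a submodule before it has run. Class names here are illustrative:]

```python
import torch.nn as nn

class Good(nn.Module):
    def __init__(self):
        super().__init__()            # lets nn.Module set up its registries
        self.linear = nn.Linear(2, 1)

class Bad(nn.Module):
    def __init__(self):
        # no super().__init__() here
        self.linear = nn.Linear(2, 1)  # raises AttributeError: cannot assign
                                       # module before Module.__init__() call

Good()  # fine
try:
    Bad()
except AttributeError as e:
    print("fails without super().__init__():", e)
```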
@ludmiladeute6353 · 3 years ago
I tried to run this code and it's not working. Where can I find the file?
@patloeber · 3 years ago
You need one of the latest PyTorch versions for this. The link to the repo is in the video description
@sreesankar07 · 3 years ago
hello, I am a beginner Python programmer. Can you please make a video on DSA?