Aside: Conv1D for Embedding Timeseries for Forecasting with Transformers 

Let's Learn Transformers Together

EDIT: As an additional note, Conv1D layers are good for sequence analysis in general. I had never thought of them as an "embedding" layer, but from this perspective it feels very natural.
----
The purpose of this video is to highlight something that I learned after reading comments on my last video: Conv1D embedding is possibly a preferable option to Linear embedding for timeseries because it can leverage neighboring data points in the embedding process.
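For readers who want to see the idea concretely, here is a minimal PyTorch sketch (not taken from the video; the shapes, d_model=64, and kernel_size=3 are arbitrary choices for illustration) of the two embedding options for a univariate series:

```python
import torch
import torch.nn as nn

# Illustrative shapes only: embed a univariate series of shape
# (batch, seq_len, 1) into (batch, seq_len, d_model).
batch, seq_len, d_model = 32, 96, 64
x = torch.randn(batch, seq_len, 1)

# Option 1: pointwise Linear embedding -- each time step is embedded independently.
linear_embed = nn.Linear(1, d_model)
tokens_linear = linear_embed(x)                                  # (batch, seq_len, d_model)

# Option 2: Conv1D embedding -- each "token" also sees its neighbors.
# kernel_size=3 with padding=1 keeps the sequence length unchanged.
conv_embed = nn.Conv1d(in_channels=1, out_channels=d_model,
                       kernel_size=3, padding=1)
tokens_conv = conv_embed(x.transpose(1, 2)).transpose(1, 2)      # (batch, seq_len, d_model)

print(tokens_linear.shape, tokens_conv.shape)  # both torch.Size([32, 96, 64])
```

With padding set to kernel_size // 2, the Conv1D version still produces one embedding per time step, just like the Linear version, but each embedding mixes in information from neighboring points.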
Previous Video:
• A Very Simple Transfor...
Music Credit:
Please, Don’t Forget Me by | e s c p | www.escp.space
escp-music.bandcamp.com

Published: 19 Jun 2024
Comments: 12
@karta282950 · 14 days ago
Thank you!
@harshjoshi_0506 · 1 month ago
Hey, great content. Please keep educating!
@lets_learn_transformers · 1 month ago
Thank you!
@alihajikaram8004 · 7 days ago
Please make more videos on this paper, and also on transformers for time series.
@lets_learn_transformers · 5 hours ago
Thank you @alihajikaram8004! I am in the process of studying some applications to protein/molecule data; however, I'd like to explore some more advanced approaches for timeseries soon!
@jeanlannes4522 · 1 month ago
Thank you for the mention and for the clear video! I still have questions (I am running experiments on them) regarding the optimal size of tokens (pointwise vs. sub-sequence-wise). Also, what to do when you have multiple features / a multivariate time series?
@lets_learn_transformers · 1 month ago
Thanks @jeanlannes! This is very interesting. Thank you again for teaching me about this. I'd love to hear how your experiments turn out!
@naifaladwani9181 · 1 month ago
Great content. Any intention to illustrate a multivariate time series model? I am doing experiments on this, using each time step (of x features) as a ‘token’ and embedding it using a Linear layer (x, embed_size). I am wondering if there are better ideas for this.
@lets_learn_transformers · 26 days ago
Thanks @naifaladwani9181! I do not have plans to illustrate a multivariate time series, as I plan on shifting topics for a few videos. However, you could also use the Conv1D layer in this case: if you replace the first argument of nn.Conv1d (in_channels) with the size of the data at each time step, the output dimensions should be the same (I will have to double check this).
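A quick sketch of the idea in the reply above (my own illustration, assuming PyTorch; the batch size, sequence length, feature count, and embed size are made-up values): with in_channels set to the number of features per time step, the Conv1D embedding produces the same (batch, seq_len, embed_size) output as the per-time-step Linear layer.

```python
import torch
import torch.nn as nn

# Illustrative shapes: a multivariate series with n_features per time step.
batch, seq_len, n_features, embed_size = 32, 96, 8, 64
x = torch.randn(batch, seq_len, n_features)

# Per-time-step Linear embedding, as described in the question above.
linear_embed = nn.Linear(n_features, embed_size)
tokens_linear = linear_embed(x)                                  # (32, 96, 64)

# Conv1D embedding with in_channels = n_features; kernel_size=3 and
# padding=1 preserve the sequence length.
conv_embed = nn.Conv1d(in_channels=n_features, out_channels=embed_size,
                       kernel_size=3, padding=1)
tokens_conv = conv_embed(x.transpose(1, 2)).transpose(1, 2)      # (32, 96, 64)

assert tokens_linear.shape == tokens_conv.shape
```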
@rdavidrd · 27 days ago
Does using Conv1D to generate input embeddings improve your output predictions?
@lets_learn_transformers · 26 days ago
Hi @rdavidrd, I did not observe an improvement in the limited testing I did. However, the problems used here are very basic and I did not do any rigorous tuning to improve the models. I left results out of this video for this reason - because I didn't want to make any statements on Conv1D being better without specific results. My intuition is that Conv1D is an improvement, but I believe this is problem-specific and would require some experimentation. Sorry for a bit of a non-answer, but I hope this helps!
@rdavidrd · 26 days ago
@lets_learn_transformers No need to apologize; your response is informative and highlights important considerations for others exploring similar methods. Thanks for your input! Maybe using LSTMs instead of Conv1D (or using both) could be an avenue worth exploring.