
Machine Learning Trading | Trading with Deep Reinforcement Learning | Dr Thomas Starke 

QuantInsti Quantitative Learning
28K subscribers
41K views

📢 [FREE WEBINAR] Common Mistakes Made by Traders: bit.ly/4f7D6BN
📌 Tuesday, July 23, 2024 at 9:30 AM ET | 7:00 PM IST | 9:30 PM SGT
-----------------------
Dr. Thomas Starke speaks on Machine Learning Trading with Deep Reinforcement Learning (DRL). In this captivating video, join Dr. Thomas Starke as he unravels the fascinating world of deep reinforcement learning (DRL) and its application to trading. Witness how DRL, the groundbreaking technology that conquered the world's hardest board game, Go, can revolutionize the way we approach the financial markets.
********
Ready to embark on an exciting journey? Gain a comprehensive understanding of machine learning trading, refine your skills in implementing ML algorithms, and unlock the power of algo trading to achieve success. Discover what EPAT holds for you with our counselors 👉bit.ly/41otjRc
********
Discover the fundamental elements of reinforcement learning, delve into the concept of Markov Decision Process, and explore how these concepts can be harnessed to optimize trading strategies. Dr. Starke explains the challenges faced in trading and unveils how "gamification" can be leveraged to train robust trading systems.
Throughout the video, you'll gain insights into essential topics such as retroactive labeling, the utilization of the Bellman Equation, and the implementation of deep reinforcement learning. Understand how to train the system effectively, design a suitable reward function, and determine the most relevant features for optimal performance.
Witness the testing phase of reinforcement learning algorithms and explore the considerations for selecting the right neural network for trading applications. Dr. Starke presents testing results, highlighting the potential and challenges of employing deep reinforcement learning in the trading domain.
▶️ Trading in the Age of AI: How to Stay Ahead!: • Trading in the Age of ...
▶️ ChatGPT and Algo Trading: Exploring Opportunities & Challenges: • ChatGPT and Algo Tradi...
▶️ Machine Learning For Trading Tutorials: • Applications of Machin...
▶️ ChatGPT and Machine Learning in Trading: • ChatGPT and Machine Le...
FREE Ebooks:
🎁Machine Learning In Trading | Step By Step Implementation of ML Models - bit.ly/4bGQF9e
🎁Algorithmic Trading | A Rough & Ready Guide - bit.ly/3UGgTms
🎁Python Basics | With Illustrations From The Financial Markets - bit.ly/49BP1UQ
🔔 Subscribe to our channel for more Algorithmic Trading tutorials and tips!
👍 Like this video and share it with your fellow traders.
💬 Drop your questions and comments below. We'd love to hear from you!
-----------------------------------------
About Speaker:
Dr Thomas Starke (CEO, AAAQuants)
Dr Starke has a PhD in Physics and is the CEO of AAAQuants, one of the leading prop-trading firms in Australia, where he leads the quant-trading team. He has also held a senior research fellowship at Oxford University.
Dr Starke has previously worked at the proprietary trading firm Vivienne Court, and at Memjet Australia, the world leader in high-speed printing. He has led strategic research projects for Rolls-Royce Plc (UK) and is also the co-founder of the microchip design company pSemi.
-----------------------------------------
Chapters:
00:00 - Dr Starke Introduction
01:34 - What is Reinforcement Learning?
05:38 - Markov Decision Process
08:30 - Application to Trading
11:46 - The Problem
16:21 - Retroactive Labelling
18:24 - How to use Bellman Equation
25:16 - Deep Reinforcement Learning
27:52 - Implementation
31:52 - What is Gamification
33:00 - How to train the System?
35:47 - Reward Function design
41:11 - What features to use?
44:59 - Testing the Reinforcement Learning
47:46 - Which Neural Network should I use?
49:28 - Testing Results
54:03 - Challenges
55:54 - Full Simulation
56:41 - Lessons Learned
59:15 - Conclusion
01:00:16 - Q&A
#machinelearningtrading #deeplearning #MachineLearningTutorial

Published: 21 Jul 2024

Comments: 34
@quantinsti · 1 year ago
Begin your Algorithmic Trading journey with the most comprehensive quant trading curriculum with industry experts: the "Executive Programme in Algorithmic Trading (EPAT)". Register now - bit.ly/41mrPqu
FREE Ebooks:
🎁Machine Learning In Trading | Step By Step Implementation of ML Models - bit.ly/3SXt7Ww
🎁Algorithmic Trading | A Rough & Ready Guide - bit.ly/48n9b3E
🎁Python Basics | With Illustrations From The Financial Markets - bit.ly/48hqCCW
@daveb4446 · 1 year ago
This guy has a great voice. Makes learning so calm.
@nooral-sayed8726 · 2 years ago
Very well explained. Thank you!
@2255. · 3 months ago
very insightful thank you Dr
@quantinsti · 3 months ago
🚨📢 *Implementation of Machine Learning in Momentum Trading | FREE Webinar*
Tuesday, April 16, 2024 - 9:30 AM ET | 7:00 PM IST | 9:30 PM SGT
Register now! - bit.ly/49E3AqB
@ChrisHalden007 · 1 year ago
Great video. Thanks
@quantinsti · 1 year ago
Thanks for your comment. We're glad that you liked it. 🙂 Do subscribe to our channel so you always get notified when our latest video goes live!
@cryptolicious3738 · 2 years ago
Awesome video!
@quantinsti · 2 years ago
Thank you for the appreciation.
@honghaiz · 7 months ago
Thanks
@VP_SOTWMC · 3 years ago
Where can we find the reinforcement learning coding video that Dr Thomas Starke mentioned would be in the QuantInsti lecture?
@quantinsti · 3 years ago
Hi! Thanks for your comment. Dr Starke covers the complete programming part in his lecture in EPAT (the Executive Programme in Algorithmic Trading). If you wish to know more about EPAT, kindly visit www.quantinsti.com/, or feel free to connect with us here: bit.ly/2WoWILi
@ashishbhong5901 · 2 years ago
Do we need to normalize stock price data before feeding it to the reinforcement learning model, and if so, how do we implement that?
@quantinsti · 2 years ago
Hello Ashish. Yes, the data should be normalised before passing it as input. You can subtract the mean of the data from each data point and divide the result by the standard deviation to normalise it. Hope this answers your query. 😊
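To make the reply above concrete, here is a minimal Python sketch of z-score normalisation; the column name "close" and the 50-bar rolling window are illustrative assumptions, not part of the original answer:

```python
import pandas as pd

# Z-score normalisation: subtract the mean and divide by the standard deviation.
# The column name "close" and the 50-bar window are illustrative assumptions.

def zscore(series: pd.Series) -> pd.Series:
    """Normalise a whole series at once (fine for offline experiments)."""
    return (series - series.mean()) / series.std()

def rolling_zscore(series: pd.Series, window: int = 50) -> pd.Series:
    """Rolling version that avoids look-ahead bias when feeding an RL agent."""
    mean = series.rolling(window).mean()
    std = series.rolling(window).std()
    return (series - mean) / std

# Example usage with a hypothetical price DataFrame:
# df["close_norm"] = rolling_zscore(df["close"])
```

The rolling version is the one to prefer inside a backtest, since normalising with the full-series mean and standard deviation peeks at future data.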
@shyamsuthar3736 · 2 years ago
Can we recognize chart patterns through RL?
@quantinsti · 2 years ago
Hello Shyam. Stock patterns can be classified using neural networks, and the neural networks can be combined with a reinforcement learning framework. Please refer to the following resources:
1. Stock Pattern Classification from Charts using Deep Learning Algorithms (www.researchgate.net/publication/346543292_Stock_Pattern_Classification_from_Charts_using_Deep_Learning_Algorithms)
2. Reinforcement Learning with Neural Network (www.baeldung.com/cs/reinforcement-learning-neural-network)
Thank you
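As a rough illustration of combining the two ideas, here is a sketch (not taken from the video or the linked papers) in which a small neural network classifies price windows and its output probability is used as an extra feature for an RL agent; the synthetic prices and the crude "net move up/down" labels are assumptions for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Classify fixed-length price windows with a neural network, then feed the
# predicted class probability to an RL agent as an extra state feature.
# Toy random-walk prices and "net move up/down" labels are assumptions,
# not a real chart-pattern taxonomy.

rng = np.random.default_rng(1)
window = 20
prices = 100 + np.cumsum(rng.normal(0, 1, 2000))      # toy price series

X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
y = (X[:, -1] > X[:, 0]).astype(int)                  # crude "pattern" label

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X[:1500], y[:1500])

# Probability of the "up" pattern, usable as a feature in the RL state vector:
pattern_feature = clf.predict_proba(X[1500:])[:, 1]
print(pattern_feature[:5])
```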
@giovannisantostasi9615 · 1 year ago
Overfitting is the spectre of algotrading. It is so difficult to avoid or compensate for, and, as explained in the video, the NN will learn noise (because noise is dominant in market data) and have no idea of the real patterns in the data. If there is any pattern, any memory, it is so weak that I don't think NNs are the way to go at all unless some very clever and radical idea (of how to filter out the noise) is applied. I'm not aware of anything like that in the literature so far.
@quantinsti · 1 year ago
Hello Santostasi.
1) About overfitting: Overfitting is indeed a big issue whenever you optimize the parameters of an ML model. However, there is an increasing consensus that Random Forest algorithms circumvent this problem better than other models, so that model is worth trying.
2) About the NN model learning from noise: Actually, that happens with any model. Models are applied and try to find a pattern through the noisy data. Instead of OHLC data, you can use tick bars (which have better statistical properties), volume bars, etc. The signal-to-noise ratio in financial markets is really small, so it's not simple to extract a signal; every researcher/trader will face this with any model. This video focuses on explaining the model and its application to data. Solutions to issues like a low signal-to-noise ratio or finding patterns with a different model can be found in other videos, or in a future video we'd be more than happy to create.
3) Finding a pattern and memory loss: Price memory is indeed lost in returns. You can apply an ARFIMA model to the price series and take its residuals. The ARFIMA model is applied as (0, d, 0), where d is your control variable: you optimize d so that the ARFIMA residuals become stationary (for example, so that the unit-root test p-value falls below the 5% level). These residuals can be used as a prediction feature instead of simple returns; ARFIMA residuals have the property of being stationary while also preserving the price memory.
We hope this helps 🤝
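For readers who want to experiment with point 3, here is a hedged sketch of the idea: ARFIMA(0, d, 0) residuals correspond to fractionally differencing the (log-)price series, and the smallest d that passes an ADF stationarity test can be chosen. The truncated-weight implementation and the 5% p-value cutoff are assumptions for illustration, not QuantInsti's exact procedure:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# ARFIMA(0, d, 0) residuals amount to applying the fractional-difference
# operator (1 - L)^d to the series. Search for the smallest d whose output
# passes an ADF stationarity test (p-value < 5%). Truncated weights are an
# assumption for illustration.

def frac_diff_weights(d: float, n: int) -> np.ndarray:
    """First n coefficients of the binomial expansion of (1 - L)^d."""
    w = [1.0]
    for k in range(1, n):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(series: pd.Series, d: float, window: int = 100) -> pd.Series:
    w = frac_diff_weights(d, window)[::-1]            # oldest weight first
    x = series.to_numpy(dtype=float)
    out = [np.dot(w, x[i - window + 1:i + 1]) for i in range(window - 1, len(x))]
    return pd.Series(out, index=series.index[window - 1:])

def smallest_stationary_d(series: pd.Series, ds=np.arange(0.0, 1.01, 0.1)):
    """Return the smallest d (and its ADF p-value) giving a stationary series."""
    for d in ds:
        p_value = adfuller(frac_diff(series, d))[1]
        if p_value < 0.05:
            return d, p_value
    return 1.0, None

# Example with a hypothetical price DataFrame:
# d_opt, p = smallest_stationary_d(np.log(df["close"]))
```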
@alrey72 · 2 years ago
What about using a fixed stop loss and take profit, or implementing a trailing stop loss?
@quantinsti · 2 years ago
Hi Al, thanks for your comment. You can add stop-loss and take-profit to the model by defining your rewards and penalties accordingly: if a stop-loss is hit, penalise the model, and if a take-profit is hit, reward it. We tried this approach, but the results were sub-optimal; you can still experiment with it and form your own opinion. You can also try using multiple RL learners and assign them different weights based on a factor of your choice. If you are still worried about large losses, you can always manually intervene or hard-code a stop-loss and take-profit in your strategy, although this approach is not recommended. We hope this helps.
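A minimal sketch of the reward shaping described in this reply; the stop-loss/take-profit thresholds and the penalty/bonus sizes are arbitrary assumptions for illustration, not a recommended configuration:

```python
# Reward-shaping sketch: the base reward is the position's mark-to-market PnL
# for the step; hitting the stop-loss adds a penalty and hitting the
# take-profit adds a bonus. All thresholds and magnitudes are assumptions.

def step_reward(position: int, price_change: float, unrealised_pnl: float,
                stop_loss: float = -0.02, take_profit: float = 0.04,
                sl_penalty: float = -1.0, tp_bonus: float = 1.0) -> float:
    reward = position * price_change          # per-step PnL of the held position
    if unrealised_pnl <= stop_loss:
        reward += sl_penalty                  # penalise the agent when SL triggers
    elif unrealised_pnl >= take_profit:
        reward += tp_bonus                    # reward the agent when TP is reached
    return reward

# Example: long position, +0.5% move this step, +4.2% unrealised on the trade
print(step_reward(position=1, price_change=0.005, unrealised_pnl=0.042))
```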
@alrey72 · 2 years ago
@@quantinsti Thanks :)
@quantinsti · 2 years ago
@@alrey72 We're happy to help! Thanks, and stay safe! :)
@forheuristiclifeksh7836 · 2 months ago
1:00
@MrClimateCriminal · 8 months ago
Where can we find a code example of what he describes in this video?
@quantinsti · 8 months ago
Hello! You can check out our course 'Deep Reinforcement Learning in Trading' (quantra.quantinsti.com/course/deep-reinforcement-learning-trading). The course teaches you to apply reinforcement learning to create, backtest, paper trade, and live trade a strategy using two deep learning neural networks and replay memory.
@honghaiz · 7 months ago
I am interested in this too
@quantinsti · 7 months ago
Great to hear you're interested! 🌟 We have more insightful videos on our channel that you might enjoy. Dive in and discover more about Machine Learning and its applications in Trading in these playlists - bit.ly/3RoPp1t Happy Learning😊
@giovannisantostasi9615 · 1 year ago
This could be a nice course in RL, but it is USELESS for trading. The example given at the end is code that can understand sine waves. OK, nice (I can do that without using AI). By the time he adds noise, the algo is already bad, but the fundamental point is that markets have no SINE WAVES or any other kind of reliable periodicity. If it were that easy, one could use Fourier analysis and pick dominant frequencies in a moving window without using AI. So his work may have interesting applications in other fields, but not in trading.
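For reference, a minimal sketch of the moving-window Fourier approach the comment alludes to, using a noisy sine as a stand-in for price data (purely an assumption for illustration):

```python
import numpy as np

# Estimate the dominant cycle length in a rolling window of prices.
# The noisy-sine toy series and the window length are assumptions.

rng = np.random.default_rng(2)
t = np.arange(2000)
prices = np.sin(2 * np.pi * t / 50) + 0.5 * rng.normal(size=t.size)   # true period: 50 bars

window = 256
dominant_periods = []
for start in range(0, len(prices) - window, window):
    segment = prices[start:start + window]
    segment = segment - segment.mean()                 # remove the zero-frequency component
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(window)                    # cycles per bar
    k = spectrum[1:].argmax() + 1                      # skip the DC bin
    dominant_periods.append(round(1.0 / freqs[k], 1))

print(dominant_periods)   # values should cluster near the true 50-bar period
```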
@quantinsti · 1 year ago
Hello Santostasi. Your point of view is valid, but as we said, we wanted to focus on explaining the model and its application. In the Quantra course, you will find an answer about using the model with real-world data. We hope this helps!
@giovannisantostasi9615 · 1 year ago
Interesting but useless for real trading. There are no patterns, no sine waves, in real market data. Can you make an AI system that deals with real markets?
@quantinsti · 1 year ago
Hello Santostasi. As we said previously, the video's purpose was to explain the model and its application to a series. You can try using the model on real-world data, or, if you'd like more help from our end, you can join the following course: quantra.quantinsti.com/course/deep-reinforcement-learning-trading. There you will find the model applied to real-world data!
@juanodonnell · 2 years ago
no learning at all
@quantinsti · 2 years ago
We are sad to hear that. We have many learning resources, tutorials, projects and more on our website in the form of blogs that we believe you'll find useful. You can also search the blog according to your interests in this field. Here's the link FYI - bit.ly/QIblogs
@sELFhATINGiNDIAN · 5 months ago
He can barely string words together; this guy isn't a financial engineer - so many mistakes.