Reinforcement Learning for Trading Tutorial | $GME RL Python Trading 

Nicholas Renotte
280K subscribers
130K views

Published: 21 Sep 2024

Comments: 323
@bennorquay9353 · 1 year ago
Fantastic tutorial! Some of the libs are a bit old now. I got it working on Lambda Stack with the following changes:
1. Use the latest tensorflow-gpu and tensorflow
2. Change "stable-baselines" to "stable-baselines3"
3. Change "MlpLstmPolicy" to "MlpPolicy"
Cheers
@wesseldenadel2695 · 1 year ago
thnx!
@quimqu · 1 year ago
Thanks!!!!!
@Meditator80 · 1 year ago
Thanks !!!
@HailegabrielDegefa · 4 months ago
What about stocks-v0? I can't get it to work; it says gym doesn't know it.
@whitewhite3829 · 3 months ago
@@HailegabrielDegefa You can pip install gymnasium instead of gym and import it with "import gymnasium as gym".
@pedrostark5287 · 3 years ago
Please keep making videos! You're a real treasure explaining RL so well! I'm just learning it at school and you really just helped me understand a lot of it. Thank you!
@NicholasRenotte · 3 years ago
Thanks so much @Pedro!
@JarlBulgruf · 2 years ago
Just so everyone knows, you need to add df.sort_index() so the data isn't reversed. The model is training and predicting on reversed data. Gym-anytrading does not automatically sort by the date index.
@NicholasRenotte · 2 years ago
Yah! Good pick up, fixed it in the second part to this tutorial!
@olliairola6066 · 2 years ago
Battled with this for like two days until I noticed, came to write this same comment here!
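The fix described above can be sketched like this (the data values and dates are illustrative; the point is that gym-anytrading walks the DataFrame positionally, so a date-descending CSV feeds the model reversed prices unless you sort first):

```python
import pandas as pd

# Illustrative frame in the date-descending order many CSV exports use
df = pd.DataFrame(
    {'Close': [180.0, 190.0, 200.0]},
    index=pd.to_datetime(['2021-01-03', '2021-01-02', '2021-01-01']),
)

df = df.sort_index()  # ascending by date: oldest row first
print(df['Close'].tolist())  # -> [200.0, 190.0, 180.0]
```

After the sort, row 0 is the oldest observation, which is what the environment assumes.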
@Nerdherfer · 3 years ago
I trimmed the data down to just the closing prices, and the algorithm is training a lot better. I highly recommend others do that as well.
@NicholasRenotte · 3 years ago
Ha, awesome. How much better are we talking here?
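Trimming down to closing prices, as suggested above, is a one-liner (column names assumed to match the OHLCV CSV used in the video; note that gym-anytrading's default stock env builds its signal features from 'Close' anyway, so trimming mainly matters if you feed the frame into a custom environment):

```python
import pandas as pd

# Illustrative OHLCV frame
df = pd.DataFrame({
    'Open':   [10.0, 11.0, 12.0],
    'High':   [11.0, 12.0, 13.0],
    'Low':    [ 9.0, 10.0, 11.0],
    'Close':  [10.5, 11.5, 12.5],
    'Volume': [100, 200, 300],
})

df_trimmed = df[['Close']].copy()  # keep only the closing prices
print(list(df_trimmed.columns))    # -> ['Close']
```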
@victorthecat6306 · 2 years ago
Am trying to make a project like this one and have almost no experience in it, but the learning curve just got easier thanks to you
@urban48 · 3 years ago
would love to see more in-depth video, specially about custom signal features
@NicholasRenotte · 3 years ago
Done, will add it to the list. I enjoyed getting back into the finance side of things. What do you think about extending out the trading RL series a little more?
@NicholasRenotte · 3 years ago
Check it out, custom indicators included: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q-Uw9gC3D4o.html
@vincentroye · 3 years ago
@@NicholasRenotte that would be great!
@NicholasRenotte · 3 years ago
@@vincentroye the extended series? or the indicators vid?
@vincentroye · 3 years ago
@@NicholasRenotte the extended series :)
@vincentroye · 3 years ago
Extremely interesting and easy to understand! I'd like to learn more about the pros and cons of other gym trading environments. Thanks for all the time you spend producing these tutorials. You're helping a lot of people like me.
@NicholasRenotte · 3 years ago
Definitely @Vincent. Glad you enjoyed the video. Will find some other trading envs that we can take a look at, I had another one on my list already that I've tested.
@markusbuchholz3518 · 3 years ago
Nicholas, your work is impressive and the community is growing. Perfect. The community can always expect a useful set of instructions and information about AI, and now about how to model and teach RL agents. RL is really awesome but rather abstract, so it requires a lot of studying. Your effort in promoting this branch of AI is noticeable. Thanks also for the stable-baselines tips. Have a nice day.
@markusbuchholz3518 · 3 years ago
@@blockchainbacker.4740 Hello Nicholas, I do not have any problem with investments since I invest only in education. You provide one of the best YT content. Thanks
@NicholasRenotte · 3 years ago
Heya @Markus, that was someone impersonating me. They were blocked from the channel, click the user name in case it looks weird next time! But as always, thank you soooo much for checking out the videos. The new RL trading one is out as well: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q-Uw9gC3D4o.html
@markusbuchholz3518 · 3 years ago
@@NicholasRenotte I can't really understand how someone can be "impolite" generally. Your YT channel is awesome and the mini RL series I'm following is very impressive. You show very innovative ways RL can be deployed. Good luck with your innovative way of thinking.
@NicholasRenotte · 3 years ago
Thanks sooo much@@markusbuchholz3518 means a lot to me!!
@aminberjaouitahmaz4121 · 2 years ago
Can't express how much I enjoy your videos. Amazing job making the projects as practical as possible and looking forward to more ML videos!
@hammadali504 · 2 years ago
Would love to see more videos like these in depth. And a video on stock price predictions using neural networks
@edzme · 3 years ago
your RL stuff is fantastic thanks for doing it!! 💎 🙌 bro. Also yes please do more RL trading stuff! Different action spaces & custom environments!
@NicholasRenotte · 3 years ago
You got it, I think I'm going to bump it up on the list as well @Ed!
@weigthcut · 3 years ago
Hii! New Fan here :) Would love to see that, too! Maybe with a hold position... or trading fees. Just as an idea! Keep up the great work man
@NicholasRenotte · 3 years ago
@@weigthcut you got it! Will likely be doing a mega series on ML for finance/trading!
@eb-worx · 4 months ago
Can you do an update on this? TensorFlow 1.15.0 is not available anymore, and it seems they changed so much that I just cannot get this to work with TensorFlow 2.
@yamani3882 · 1 year ago
How do I force it not to make short trades, though? I only want to train it to make long trades. Also, I would want it to sell before it tries to buy again (buy, then sell). Is this stuff configurable with Gym-Anytrading?
@funnyredditdaily777 · 6 months ago
Have you made a profitable bot that just trades forever?
@maikgreubel2685 · 1 year ago
Very easy to understand, you're very talented in teaching and explaining. Thank you for your effort. I don't know whether this was already answered by someone; if so, please ignore this or use it for your own implementations (no warranty). Here is my implementation of a callback that stops training when the explained variance falls within a given threshold:

    import numpy as np
    from typing import Any, Dict
    from stable_baselines.common.callbacks import BaseCallback
    from stable_baselines.common.math_util import explained_variance

    class ExplainedVarianceCallback(BaseCallback):
        def __init__(self, minThreshold=0.95, maxThreshold=1.15, verbose=0):
            super(ExplainedVarianceCallback, self).__init__(verbose)
            self.minThreshold = minThreshold
            self.maxThreshold = maxThreshold
            self.values = []
            self.rewards = []
            self.window_size = 5

        def on_training_start(self, locals_: Dict[str, Any], globals_: Dict[str, Any]) -> None:
            super(ExplainedVarianceCallback, self).on_training_start(locals_, globals_)
            self.window_size = locals_['self'].env.get_attr("window_size")[0]

        def on_step(self) -> bool:
            self.values = self.values[1 if len(self.values) > self.window_size else 0:] + list(self.locals['values'])
            self.rewards = self.rewards[1 if len(self.rewards) > self.window_size else 0:] + list(self.locals['rewards'])
            ev = explained_variance(np.array(self.values), np.array(self.rewards))
            if self.minThreshold < ev < self.maxThreshold:
                return False  # returning False stops training
            return True

To use this, pass an instance to model.learn(). You may want to specify custom thresholds via the minThreshold and maxThreshold init arguments. It's not very beautiful, I'm still a newbie to Python :) HTH
@andrewkaplanc · 2 years ago
Super helpful! Thank you for explaining everything in easy to understand terms
@BB-ko3fh · 3 years ago
would be cool if you could throw some basic technical indicators like RSI and OBV into the df to see if it helps the agent. Also, would appreciate it if you could add callbacks to stop training at the optimum level. Keep up the good work and looking forward to the next vid of this series :)
@NicholasRenotte · 3 years ago
Thanks so much @BB, let me know what you think, have RSI, OBV and SMA covered: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q-Uw9gC3D4o.html
@borisljevar3126 · 1 year ago
Thank you for such clear, hands-on tutorials on reinforcement learning. I have a couple of questions, though. I've learned elsewhere that an RL agent requires a trade-off between exploration and exploitation. I didn't see this specifically mentioned in this video. Is there a reason for that? Perhaps it's not advisable to use any exploration/exploitation trade-offs in trading algorithms, or maybe this specific RL model doesn't support it. I would appreciate it if you could help me understand these considerations. Additionally, I would love to see an example of an RL agent being trained with new data while in operation. I believe the official terminology is "online learning" or "continual learning." Please consider making a video that covers that topic as well.
@chriss3154 · 2 years ago
If you get a strange "gym.logger" error, try installing gym==0.12.5 👍 Please, Mr. Renotte, think about future-proofing dependencies. In any case, great video 👍
@NicholasRenotte · 2 years ago
Yeah ik ik, almost impossible with how fast this stuff is moving though atm!
@shouvikdey7078 · 3 years ago
Hey man your videos are great. Please continue with reinforcement learning so that we can learn to apply RL algorithms. I have been searching for Application of RL algorithms but it is hard to come by and finally got your videos. Really great videos you make. Keep going please.
@NicholasRenotte · 3 years ago
Thanks @shouvik, got plenty coming. Not stopping anytime soon!!
@izzrose1997 · 3 years ago
would really love to see a more in depth video, heck would love to see more video from you. I am learning a lot so thank you! new subscribers and been binge watching your stuff. good works
@NicholasRenotte · 3 years ago
Thanks soo much @izzrose1997, anything you'd like to see?
@homieinthesky8919 · 1 year ago
Just change the activation function in the A2C model to ReLU and you will get somewhat better results.
@sankalpchenna6046 · 2 years ago
While importing A2C I got this error: "ModuleNotFoundError: No module named 'tensorflow.contrib'". How do I resolve it?
@mgfg22 · 1 year ago
any news?
@EkShunya · 2 years ago
You are an angel. Thanks a million for all you are doing by making knowledge free and easy to use.
@user-or7ji5hv8y · 3 years ago
yes, being able to include financial features would be interesting
@NicholasRenotte · 3 years ago
Already done @C, check it out: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q-Uw9gC3D4o.html
@939470 · 3 years ago
excellent tutorial and easy to understand. thx! would like to see video on using the callback for model.learn
@NicholasRenotte · 3 years ago
You got it @Wong Johnny, got a bunch of stuff planned for our ML trader series!
@techieguy4973 · 1 year ago
Can we use the stocks environment on cryptocurrencies? Or is there a different environment for that?
@esdrassanchezsanchez7702 · 2 years ago
Could you please explain in depth the best way to get the explained_variance near 1? And maybe also talk about overfitting of the model and how it performs using a CV time split. Thank you Nicholas, really good content!
@bhesht · 1 month ago
Excellent explanation!
@roshinroy5129 · 1 year ago
I am getting an error: AttributeError: module 'stable_baselines.logger' has no attribute 'get_level'
@mihirtrivedi4096 · 3 years ago
Is there any method by which we can plot rewards vs. episodes? Your videos are really helpful in learning reinforcement learning. Thank you.
@udik1 · 1 year ago
You can add a callback to the learn method, where you can collect any custom item you want to plot.
@abdulrehmanraja9311 · 1 year ago
from stable_baselines import A2C is not working. It gives a DLL error. Can someone help in sorting that?
@liupeng88 · 3 years ago
Excellent video! I'm wondering if you could add video on portfolio optimization using RL in the future? This would be more related to real-world trading environment where one needs to determine ratio allocation among multiple assets...
@NicholasRenotte · 3 years ago
Definitely, I've thought about doing this with discrete optimization. Possibly looking at something like leveraging the efficient frontier.
@prathameshdinkar2966 · 2 years ago
Thanks for the video, nicely explained and very interesting! Do you have any video or information about adding the sentiment to this reinforcement learning model?
@applepie7282 · 3 years ago
RL is hopeless in trading, guys. At least for me. I've been working on it for weeks with stable baselines and my custom trading environment: the step system, rewarding strategy, observation approach... these are very challenging topics. The results are so unstable and far from classic algo-trading profits. Just find some good old indicator and optimize the parameters with Bayesian optimization. The scikit API is easy and effective. At least you will get some meaningful results... RL is not ready for the market. At least for me :) And thanks for the video. I will check that anytrading environment, without hope :)
@NicholasRenotte · 3 years ago
I think it's still a while away. TBH I'm going to do a ton more research into it over the year as I've read some promising papers. Always room for improvement though!
@Throwingness · 3 years ago
I haven't used an agent like this, but I wonder how you know whether the model has overfit the data when you go forward? I also wonder how a bot like this is supposed to handle anomalous events.
@NicholasRenotte · 3 years ago
I think it might have been tbh @Varuna, we're going to be handling it properly when we build up our bot for the ml trader series with better backtesting!
@futurestadawul · 3 years ago
Man your videos are awesome. We need more about adding other features (or creating them).. Thank you
@TheJourneyToMyFreedom · 3 years ago
excellent tutorial! keep up your good work! i read below that you are planning to add the callback to the code! looking heavily forward to that!
@NicholasRenotte · 3 years ago
Thanks so much @Peter, definitely I've got it planned in the next couple of weeks! Code is done, just need to record!
@TheJourneyToMyFreedom · 3 years ago
@@NicholasRenotte Love to hear that! BTW just saw I was at the same company as you ;). Pinged you at some social media, if you are interested in connecting.
@NicholasRenotte · 3 years ago
@@TheJourneyToMyFreedom definitely, always intested in connecting!
@TheJourneyToMyFreedom · 3 years ago
@@NicholasRenotte You already accepted ;)
@NicholasRenotte · 3 years ago
@@TheJourneyToMyFreedom perfect!! Will jump onto LinkedIn!
@vivekkhanna9486 · 3 years ago
Hi Nicholas, Excellent explanations, really commendable, can you please have one video with using callback, looking forward for some more knowledge sessions from you!
@NicholasRenotte · 3 years ago
Yup, got it planned and coming @Vivek!
@simonlecocq189 · 2 years ago
Hello Nicholas! First of all, thank you for your videos, they are really interesting! Can you explain a little bit more the meaning of the explained variance and the value_loss? It would help to understand why they need to be close to 1 and 0 respectively. Do you also have an automatic way to select the best moment to stop the fitting? Thank you in advance.
@NicholasRenotte · 2 years ago
Check this out: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-eBCU-tqLGfQ.html
@patrickm7760 · 3 years ago
Great video, highly educational. My guess is that the granularity was not sufficient as things can change wildly during the day and between market open. Minute granularity in 5 minute trading blocks may have worked better
@NicholasRenotte · 3 years ago
Solid suggestion!
@LucaSpinello · 3 years ago
That's great! Please do more of these
@NicholasRenotte · 3 years ago
You got it, got a bigger series starting up! Saving them all here: ru-vid.com/group/PLgNJO2hghbmjMiRgCrpTbZzq0o_hINadE
@Throwingness · 3 years ago
Outstanding content because you explain everything but do it quickly and clearly.
@NicholasRenotte · 3 years ago
Thanks so much @Varuna, glad you liked it!
@vinayakrajurs · 3 years ago
Great lecture, it really helped me a lot. Can you please explain what the different states in the environment are, and what the Y axis is in the final performance plot? Thank you so much again for the wonderful tutorial.
@krishnamore2281 · 3 years ago
You make understanding so simple . Love your work thank you for making such videos .
@NicholasRenotte · 3 years ago
So glad you enjoyed it @Krishna!
@_hydro_2112 · 3 years ago
Great, thanks a lot for all the work you put into this tutorial!
@HarshRathore-h4m · 1 year ago
Love the way you explain! it would be great if you can add a few technical indicators and market sentiment as features.
@loganathansiva7063 · 2 years ago
I am totally won over by the simple and elegant style with which you explain the algorithm. Superb. Hats off to your videos. I'm using your code to train myself in trading. Could you please explain how to introduce any data and either buy, sell, or hold, with a few examples using stocks like Apple, Google, MSN, Tesla etc., and share the code in Google Colab? Is that possible? Anyway, congrats on your great endeavor of introducing the stock market to the public. No one shares such things, because money matters. I am impressed. Thanks a lot Nicholas. May god bless you.
@NicholasRenotte · 2 years ago
Working on a new series atm, found a bunch of bugs in my old stuff but will definitely share!
@OmarAlolayan · 3 years ago
Thanks Nicholas, great video as usual. Do you plan to make a tutorial for multi-agent RL and multi-agent environments?
@NicholasRenotte · 3 years ago
It's on the list, a little far out, but it's definitely planned @Omar.
@stockinvestor1 · 3 years ago
Awesome work, this would be a good series for you build out with other features and all the things you're more than happy to expand on :) . Also looked at other videos you have made, very clear and great videos!
@NicholasRenotte · 3 years ago
Thanks so much @stockinvestor1, I'm definitely going to do more in this space. So far three more tutorials planned for RL trading but open to any additional ideas!
@shreyasom9751 · 1 year ago
Hey, tensorflow-gpu has been discontinued. Can anyone tell me whether this will run fine with CUDA or with some version of PyTorch instead?
@FuZZbaLLbee · 3 years ago
Haha would be amazed if it could predict a short squeeze 😋 Black swans will always be an issue for AI based trading, or human trading for that matter.
@NicholasRenotte · 3 years ago
Ikr @FuZZbaLLbee, I think I’d just be creating RL agents 24/7 chasing alpha if there was the chance of predicting them! Was genuinely curious how a quick RL agent was going to perform in this case. Definitely will do a little work on exploring integration with external data sources e.g. forecasting reddit sentiment and bringing it as a leading signal.
@SuperReddevil23 · 1 year ago
Hi Nicholas. Love your videos. I probably learn more watching your youtube videos than my data science classes back in college. I just wanted to know if we can add plain vanilla indicators such as 200 EMA, MACD, RSI as input vectors to train our reinforcement learning algorithm. These indicators can improve the performance of our algorithm.
@darryldlima4766 · 1 year ago
Has anyone else noticed that when he runs evaluations, he gets a different total profit. See 34:53 and 35:55 when he reruns the evaluation. The frame_bounds are the same (90,110) for both evals. When evaluating the same trained model on the same range of data, shouldn't one expect the same total profit?
@mychannel-lx9jd · 2 years ago
Please make this video step by step; beginners cannot understand what you are doing. Also, I tried everything and cannot get past step 0!!
@NicholasRenotte · 2 years ago
Checked out the RL course?
@wasgeht2409 · 1 year ago
Shit, you are amazing bro!!!! Could you make more videos with ML projects and Streamlit? :) Best regards from Germany
@abdelmalekdjamaa7691 · 3 years ago
As always, your RL stuff is fascinating and easy to grasp!
@NicholasRenotte · 3 years ago
Thanks so much @Bite sized!
@krvignesh6323 · 3 years ago
Great content Nicholas! I have a small doubt. While the DataFrame has different prices (Open, High, Low and Close), which price does the model use? How does it decide which one to use?
@NicholasRenotte · 2 years ago
I think I used open, I should've used close 😅 you can change which you pass though.
@lamprosganias2031 · 11 months ago
Hey Nicholas, it was an interesting video! Do you have any video on creating our own environment or something to suggest?
@felixcui727 · 1 year ago
Thanks Nicholas! This is amazing. Is it possible or reasonable to train the bot using all stocks' data rather than the GME price data itself?
@isaacgroen3692 · 3 years ago
df = pd.read_csv('data/bitcoindata.csv', index_col = 'Dates', parse_dates = True)
@ButchCassidyAndSundanceKid · 3 years ago
Very useful stuff. Thanks Nicholas.
@NicholasRenotte · 3 years ago
Doing some RL trading now @Adam?!
@ButchCassidyAndSundanceKid · 3 years ago
@@NicholasRenotte I'm just learning. I was on an expensive algotrading course, but the guy didn't explain as well as you do. Since you're an IBM guy, I was wondering if you could give us a course on Qiskit and Quantum Machine Learning please. Thank you.
@NicholasRenotte · 3 years ago
@@ButchCassidyAndSundanceKid oooooh now you're talking my language! Been keeping that one a secret but I've definitely got something coming in that space 😉
@ButchCassidyAndSundanceKid · 3 years ago
@@NicholasRenotte My understanding of Qiskit and Quantum Machine Learning is still very limited, all these Hadamard Gates, Shor's Algorithm concepts are very abstract and not easy to grapple with. And without a full understanding, one cannot proceed further to Quantum Machine Learning. I have just finished watching your Reinforcement Learning series, it taught me a different way of writing RL code (I used to use tf.agent which is very clunky and difficult to understand). Thank you Nicholas. Keep up with the momentum. Look forward to more of your tutorials.
@周暘恩 · 8 months ago
I just wonder if the "Estimation Part" is correct. You use:

    env = gym.make(...)
    obs = env.reset()

but did you load the previously trained model? When using this code, `env` will be replaced with a new one, right? Also, you interrupted the training part with a KeyboardInterrupt; will the model keep the progress it had made up to that point?
@Imakeweirdstuff · 2 years ago
Very nice video! I am interested in the callback functionality to stop the model at a certain threshold of accuracy. Thanks!
@artem3766 · 3 years ago
Awesome! You explain very well!! Do you have a tutorial where you add TA (RSI, MACD, BB etc.) to the chart to see if the agent performs better than if it traded on TA alone?
@NicholasRenotte · 3 years ago
Sure do, check this out: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q-Uw9gC3D4o.html
@rejithreghunathan3500 · 3 years ago
This is a great video. Good work @Nicholas! Is it possible to save the developed model as a pickle file? E.g. if we need to deploy the model in production.
@NicholasRenotte · 3 years ago
Normally you just save the trained model weights as an h5 and reload them into Keras! I show how to do it with stable baselines in the RL Course on the channel.
@rajeshp2408 · 3 years ago
Hi Nicholas, great video. Can you explain a callback function so the model can stop when it reaches the expected accuracy?
@NicholasRenotte · 2 years ago
Yup, I show it in part 5: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Mut_u40Sqz4.html
@rajeshp2408 · 2 years ago
@@NicholasRenotte Thanks so much
@aleksandrz6412 · 2 years ago
Found a MISTAKE! Just take a calculator and check the last chart with buy and sell signals. Assume, each time we are buying or shorting 1 share. Then, by the last trade our approximate balance is -15, which means there was a loss while model is wrongly calculating it as a 5% gain...
@ramisania84 · 2 years ago
@Nicholas Renotte, thank you, the video is very good. But may I ask a question? I'm still confused by part 1: determining the window_size of 5 and the frame_bound of (10, 100). Will the frame bound display the best 90 days of data, or what? Thanks for the explanation.
@btkb1427 · 3 years ago
Great video mate, would love to see the in-depth video, but this does look like a method that is far too easy to overfit with. What do you think? Thanks again for the vid!
@NicholasRenotte · 3 years ago
Yep, agreed! It probably didn't help that I picked a stock that had wild variance. Having early stopping will help, the model trained so fast it was hard to stop when it reached an appropriate level of EV. I'm going to pump out some more detailed videos on how to get something that is actually a little more stable working plus custom indicators, integration, etc. Thoughts?
@btkb1427 · 3 years ago
@@NicholasRenotte That would be awesome! I agree with the custom indicators and integration :) those videos would be really helpful, some like a moving average would be a great intro I think, do you know if you can do multiasset trading using the RL environments that you're using?
@NicholasRenotte · 3 years ago
​@@btkb1427 got it, I'll add an SMA or something along those lines! I think you could have a multi-asset model but from my (very brief) research it sounds like individual asset models are more the norm. You would probably run individual asset models with an overarching funds management model is my best guess!
@mushfiratmohaimin5892 · 2 years ago
I want to pass in a few more data to the model along with the price, such as the market cap, the change in market cap, the difference in 'high' price and 'low' price, maybe something more to experiment with. How do I do this? Thanks in advance
@tofudeliverygt86 · 2 years ago
Note: Stable Baselines only works with TensorFlow 1. I'm going through tutorials on reinforcement learning and it's hard to find something relatively current and easy to follow.
@NicholasRenotte · 2 years ago
Take a look at the latest Mario tutorial, that uses Stable baselines 3 @Michal!
@highlander3666 · 1 year ago
Would love to see a video on callbacks that stop training when StopTrainingOnRewardThreshold is met.
@user-or7ji5hv8y · 3 years ago
I would be in favor of a video explaining the resulting metrics like explained variance, etc.
@NicholasRenotte · 3 years ago
Definitely, I talked a little about it inside of this ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Mut_u40Sqz4.html but agreed, I think we still need a bit more on it!
@domisawesomee2192 · 2 years ago
Hi Nicholas, should we perform a train test split on the df and, if so, do stationary variables and feature standardization of the variables help with improving the model?
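On the train/test question: with gym-anytrading a common approach is simply to give the training and evaluation environments non-overlapping `frame_bound` windows over the same DataFrame. A sketch of the index arithmetic (row count and split ratio are illustrative):

```python
window_size = 5
n_rows = 300                 # length of the price DataFrame
split = int(n_rows * 0.8)    # chronological split, no shuffling

# frame_bound[0] must be >= window_size so the first
# observation has a full lookback window behind it.
train_bound = (window_size, split)
test_bound = (split, n_rows)

print(train_bound, test_bound)  # -> (5, 240) (240, 300)
```

Each bound is then passed as `frame_bound=` when constructing the respective env, keeping the evaluation data strictly after the training data in time.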
@alice20001 · 10 months ago
How would I go about saving/loading models? Let's say I want to implement a callback that saves the model with the highest explained variance and then load it and serve it new data as it comes in through the terminal? Is there a way to do this without using stable-baselines, and use keras instead?
@weijunpoh93 · 2 years ago
I was wondering why we aren't splitting the data into train and test sets prior to training the model in reinforcement learning?
@OZtwo · 3 years ago
As I am new to all this and still trying to get my head around deep learning, I have been looking at maybe going with reinforcement learning. The only question I have: it seems we only use RL to train a given model and then use that in 'production'. What I'm looking for is something that stays in learning mode, so it can adapt as the environment around it changes. Is RL the best way to go for this?
@NicholasRenotte · 3 years ago
Heya @Jeff, you're going down the right path. From a deployment standpoint I've only shown interaction. In production-like projects what you'll actually do is have continuous retraining and deployment. The process is the same, but the cycle changes. So for example what you would do is:
1. Train your model
2. Deploy it to production
3. Overnight or at a set schedule, bring in new data
4. Load your existing model and retrain it on the new data
5. Redeploy the model
@OZtwo · 3 years ago
@@NicholasRenotte What I want to try and do really is to give the robot reward areas and from there have it train itself on what to do next. This way the robot would evolve more or less. The only items I will need to look into is how to recharge it. But at the end give it a camera, 4 sensors (cliff sensors), a charger and a pickup device like the Vector Robot has. From there it learns to move and explore and what not all the while knowing only that it's doing the right thing when getting a reward. BUT yes, before I get to this level I need to first go back to your very first video and start at the very beginning. :) The only issue I am having now is I can't get my Jetson Nano (b01) Jetpack 4.5 fully installed and working. :(
@jokehuang1611 · 2 years ago
I tried PPO (with action masking) to learn from OHLCV data, but the model would not converge.
@theodorska4930 · 3 years ago
I am trying to do RL in energy engineering, and this tutorial is the closest I have gotten because it uses recorded data. Any recommendations or code examples that tackle optimal solutions for energy storage, microgrids, refrigeration, etc.?
@NicholasRenotte · 3 years ago
Ooooh, I'm actually looking at energy and resource opt at work rn. I don't have anything quite yet but this might help in terms of setting up the custom environment that will likely be needed: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-bD6V3rcr_54.html
@giacomoferrari5624 · 3 years ago
great content and nice explaination. Love the vids
@NicholasRenotte
@NicholasRenotte 3 years ago
Thanks so much @Giacomo! Glad you're enjoying them, anything else you'd like to see?
@giacomoferrari5624
@giacomoferrari5624 3 years ago
@@NicholasRenotte It would be interesting to do a principal component analysis on a huge database of indicators to see which are least correlated and bring the most variance to the data. That way you know that you are using the right indicators (there are so many out there). Otherwise, use forecasting models as indicators on which to base your trading strategy. For example, a logistic regression saying whether the price will go up or down, or an ARIMA checking the direction it forecasts the price will move. Once you have several confirmation indicators, enter the trade and then add proper risk management to the system. No idea, I am still trying to figure all of this out. I'm coming from TradingView's Pine Script, which is nowhere near what you can do in Python. I do have a bit of knowledge of data science, but between studying it and actually applying it, there is lots of practice to be done.
@giacomoferrari5624
@giacomoferrari5624 3 years ago
@@NicholasRenotte For sure though, I think the whole crypto space is much more interesting as there is plenty of data to choose from, especially from the blockchain itself, not only price and volume data. Just some food for thought; maybe that gives you some ideas for next videos.
@GeorgiosKourogiorgas
@GeorgiosKourogiorgas 3 years ago
The data is sorted by date descending. Neither in your code nor in gym_anytrading is the data sorted "properly". Do you do it on purpose?
@NicholasRenotte
@NicholasRenotte 3 years ago
Nope, that's a screw up. Sorted in the follow up tutorial: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q-Uw9gC3D4o.html
@kiranvenkatgali9503
@kiranvenkatgali9503 6 months ago
Hi, I am getting "ValueError: too many values to unpack (expected 4)" near n_state, reward, done, info = env.step(action) under Build Environment. Can anyone help me with this?
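This error usually means a newer Gym/Gymnasium version is installed: since gym 0.26, `step()` returns five values instead of four. A minimal sketch of the fix, using a `DummyEnv` stand-in for the gym-anytrading environment:

```python
# Newer Gym/Gymnasium versions return five values from step(), which breaks
# the old 4-value unpacking and raises "too many values to unpack".
# DummyEnv is a stand-in for the real trading environment.

class DummyEnv:
    def step(self, action):
        obs, reward, terminated, truncated, info = [0.0], 1.0, False, False, {}
        return obs, reward, terminated, truncated, info

env = DummyEnv()

# Old API (gym < 0.26): n_state, reward, done, info = env.step(action)
# New API: unpack five values and combine the two done flags yourself.
n_state, reward, terminated, truncated, info = env.step(0)
done = terminated or truncated
```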
@user-or7ji5hv8y
@user-or7ji5hv8y 3 years ago
I guess we are simply running many episodes over the same dataset as specified by frame_bound?
@NicholasRenotte
@NicholasRenotte 3 years ago
Correct!
@fabiankuhne5026
@fabiankuhne5026 a year ago
Unfortunately, at the end of part three I get the following error: AttributeError: 'gym_logger' has no attribute 'MIN_LEVEL'. I tried the solutions on Stack Overflow without success. Do you have any idea?
@hederahelix622
@hederahelix622 3 years ago
Hi Nicholas, I'd like to see a video on making a callback function if you have time. Thanks very much for your work, this is very informative
@NicholasRenotte
@NicholasRenotte 3 years ago
Heya! Check out the reinforcement learning course :) all is revealed!
@hederahelix622
@hederahelix622 3 years ago
@@NicholasRenotte Nice one. Thanks
@rverm1000
@rverm1000 a year ago
Can you add some trading rules to your Python code? For example: buy if the moving average is going up, sell a short if the moving average is going down; at least constrain when it buys or shorts to the most favourable conditions. Still, it's a good video.
@henkhbit5748
@henkhbit5748 3 years ago
+++ As always, very informative and clearly explained. Will you get a better result if you stop training at 0.95 explained variance?
@NicholasRenotte
@NicholasRenotte 3 years ago
Ideally yes @Henk, I think having an improved loss metric will help as well. Got a few more vids to do in the CV space then will head back to RL soon!
@MuhammadAli-mi5gg
@MuhammadAli-mi5gg 3 years ago
Hi Nicholas, thanks for the great video. I want your advice: I want to make a crypto-prediction model and use it for real crypto trading. Should I go with traditional LSTM/RNN approaches, or do I have to go with RL? Thanks for any help.
@NicholasRenotte
@NicholasRenotte 3 years ago
There's no real hard-and-fast rule on this one; I've been seeing a lot of people look into using recurrent policies with RL lately, however!
@MuhammadAli-mi5gg
@MuhammadAli-mi5gg 3 years ago
@@NicholasRenotte Ok, thanks a lot.
@farhatsam8529
@farhatsam8529 3 years ago
Nice video, thanks. Choosing GME is a really bad idea; this stock is a pump-and-dump security, so it's basically a random market.
@NicholasRenotte
@NicholasRenotte 3 years ago
Thanks so much @farhat. Yah, it's pretty easy to swap it out for alternate securities but I don't think you'd actually try to HFT something that's undergoing as wild price action as GME right now. Got some more videos planned for the RL trading series, anything you'd like to see?
@farhatsam8529
@farhatsam8529 3 years ago
@@NicholasRenotte Thank you Nicholas, I really like your content. I have some feedback: my first thought about ML, and specifically RL, was that we just need to provide it the data and the goal and it will try to find the best model without much knowledge on our side, but now I see that we need to tweak it to find the best results. Continuing with RL applied to trading, is there any way we can ingest additional data like some indicators (RSI, MA...)? Q: Do you use an NN in your environment here?
@NicholasRenotte
@NicholasRenotte 3 years ago
@@farhatsam8529 right on time, check this out: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-q-Uw9gC3D4o.html yep we do use an NN here, it's natively integrated into the MlpLstmPolicy.
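For ingesting indicators, a common gym-anytrading pattern (the one shown in the follow-up video) is to subclass `StocksEnv` and override `_process_data` so that `signal_features` includes extra columns. The indicator math itself can be sketched in plain Python; the SMA and simplified RSI below are illustrative implementations, not the library's own:

```python
# Plain-Python sketch of indicator features (SMA and a simplified RSI) that
# could be fed into the environment as extra signal columns alongside price.

def sma(prices, window):
    """Simple moving average; None until enough history exists."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def rsi(prices, window=14):
    """Basic RSI from average gains/losses; None until enough history."""
    out = [None] * len(prices)
    for i in range(window, len(prices)):
        deltas = [prices[j] - prices[j - 1] for j in range(i - window + 1, i + 1)]
        gains = sum(d for d in deltas if d > 0) / window
        losses = -sum(d for d in deltas if d < 0) / window
        if losses == 0:
            out[i] = 100.0
        else:
            rs = gains / losses
            out[i] = 100 - 100 / (1 + rs)
    return out

prices = [10, 11, 12, 11, 13, 14, 13, 15]
# One (price, sma, rsi) feature row per time step.
features = list(zip(prices, sma(prices, 3), rsi(prices, 3)))
```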
@rajd9219
@rajd9219 2 years ago
Please provide detailed steps to set this up. I am unable to run this as I cannot install TensorFlow 1.15.
@Muthamizhchelvan-V
@Muthamizhchelvan-V 3 years ago
Is it possible to train the agent with multiple companies' data?
@NicholasRenotte
@NicholasRenotte 3 years ago
Yup!
@Muthamizhchelvan-V
@Muthamizhchelvan-V 3 years ago
@@NicholasRenotte Can you create one more video on this, if possible?
@danielsilva3383
@danielsilva3383 3 years ago
Hi, great video 😉! Do you run this in the cloud? Which computer do you have? Do you need an NVIDIA GPU for ML? I have a Lenovo laptop with an AMD Ryzen 5; can I run most models? Thanks for your help
@NicholasRenotte
@NicholasRenotte 3 years ago
Thanks so much @Daniel. This is running on my local machine. It's using a 2070 Super and a Ryzen 7 3700X. The Ryzen 5 should be fine to run most models, just make sure if you're getting a GPU it's an NVIDIA one as it natively works with Tensorflow!
@danielsilva3383
@danielsilva3383 3 years ago
@@NicholasRenotte Thank you 👍. Do you know of any laptops you'd recommend that contain good NVIDIA GPUs?
@NicholasRenotte
@NicholasRenotte 3 years ago
@@danielsilva3383 anytime! TBH, I'm not too up to speed on Laptops but I've found I'm able to handle most CV and RL tasks with a 2070 Super. If you plan on getting into hardcore NLP you might need something beefier.
@danielsilva3383
@danielsilva3383 3 years ago
@@NicholasRenotte Thanks I will check it! 👍
@joaobentes8391
@joaobentes8391 3 years ago
Great job!! Love your videos! Keep up the good work :D. I didn't know anything about Python code and RL, and thanks to your awesome videos I'm learning fast.
@NicholasRenotte
@NicholasRenotte 3 years ago
Yesss! Awesome to hear João, what are you currently working on?
@joaobentes8391
@joaobentes8391 3 years ago
I'm planning a deep RL agent to play slither.io :))
@NicholasRenotte
@NicholasRenotte 3 years ago
@@joaobentes8391 niceeee, I saw an example of that a couple of nights ago using Gym Universe. Sounds awesome!! Would love to see a snippet once you've got it trained.
@anshuman284
@anshuman284 3 years ago
Thank you for creating good content. This is really helpful!
@mitsostim07
@mitsostim07 3 years ago
So if I wanted to predict future actions and profits, can I use "frame_bound=(len(df)-5,len(df)), window_size=5" for the next 5 days?
@NicholasRenotte
@NicholasRenotte 3 years ago
You would call it repeatedly using a sliding window; you would also need forecasted prices if you wanted future actions.
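The sliding-window idea can be sketched as follows. This is a toy illustration: `predict()` here is a hypothetical stand-in for `model.predict()` on a trained stable-baselines policy, not the real API.

```python
# Sketch of the sliding-window idea: step a fixed-size window forward over the
# price series and query the (trained) policy on each window.

def predict(window):
    # Toy stand-in policy: "buy" (1) if the window closed higher than it
    # opened, otherwise "sell" (0). A real policy would be model.predict().
    return 1 if window[-1] > window[0] else 0

prices = [10, 11, 9, 8, 9, 7, 10, 12]
window_size = 4
actions = []
for end in range(window_size, len(prices) + 1):
    window = prices[end - window_size:end]
    actions.append(predict(window))
```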
@mitsostim07
@mitsostim07 3 years ago
@@NicholasRenotte Thank you for the info!
@qiguosun129
@qiguosun129 a year ago
Good lecture. However, as a financial researcher I suggest that readers should not expect robust returns from simple RL models.
@qiguosun129
@qiguosun129 a year ago
In addition, gym_anytrading is not compatible with stable_baselines3; I recommend readers follow the custom-environment steps in the stable_baselines3 docs to rebuild the environment.
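A minimal skeleton of such a custom environment, following the five-value Gymnasium `step()` API that stable_baselines3 expects. In real code this would subclass `gymnasium.Env` and declare `action_space` / `observation_space` with `gymnasium.spaces` (e.g. `Discrete(2)` and a `Box`); those library-specific parts are omitted here so the sketch stays self-contained, and the reward scheme is purely illustrative.

```python
# Minimal custom trading-environment skeleton following the Gymnasium API.
# In practice: subclass gymnasium.Env and define action_space/observation_space.

class SimpleTradingEnv:
    def __init__(self, prices, window_size=3):
        self.prices = prices
        self.window_size = window_size
        self.t = window_size

    def reset(self, seed=None):
        self.t = self.window_size
        obs = self.prices[self.t - self.window_size:self.t]
        return obs, {}  # Gymnasium reset() returns (observation, info)

    def step(self, action):
        # action: 0 = sell/short, 1 = buy/long. Reward is the signed price
        # move over the last step (illustrative, not gym-anytrading's scheme).
        price_change = self.prices[self.t] - self.prices[self.t - 1]
        reward = price_change if action == 1 else -price_change
        self.t += 1
        terminated = self.t >= len(self.prices)
        obs = self.prices[self.t - self.window_size:self.t]
        return obs, reward, terminated, False, {}

env = SimpleTradingEnv([10, 11, 12, 11, 13, 14])
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(1)
```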
@kapilgaur6142
@kapilgaur6142 a year ago
Thanks for such great content! I am very interested in RL and robotics, and I have been working to build a robo dog which uses RL to learn to walk and can recognize faces and understand gestures and voice commands. Could you please do this kind of video? :)
@namangoyal8477
@namangoyal8477 2 years ago
Kindly make a video on custom indicators, and create a trading bot video specifically for the technical and financial analysis used in daily trading of any cryptocurrency.
@somedude152
@somedude152 a year ago
Hello, I am trying to train an agent on a custom environment that I made and I was wondering if there is any way to increase how often the performance metrics like explained_variance and value_loss pop up during the training process.