
Two Reasons Why Making AI for Day Trading is Hard 

eminshall · 6K subscribers
2.3K views

Published: 18 Sep 2024

Comments: 22
@MrHardzio4Fun · 3 months ago
Hints:
1. Big dataset.
2. Custom data preprocessing, standardized with a lookback window: close, change, d_hl (no technical indicators derived from OHLCV).
3. A custom environment that you fully understand (account balance / position value plus the data from step 2 as the observation/state space).
4. A custom model architecture that you choose.
5. A custom training loop that spawns at a random place in your dataset for x steps.
6. Periodic testing on unseen data until profitable.
7. Long training.
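A minimal sketch of what such a setup could look like, assuming a gymnasium-style interface; the feature layout, reward, and action encoding below are illustrative assumptions, not the commenter's actual code:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class RandomSpawnTradingEnv(gym.Env):
    """Observation = account state + lookback window of standardized features;
    each reset() spawns at a random index in the dataset."""

    def __init__(self, features: np.ndarray, prices: np.ndarray,
                 lookback: int = 32, episode_len: int = 256):
        super().__init__()
        self.features = features      # shape (T, 3): close, change, d_hl (standardized)
        self.prices = prices          # raw close prices, shape (T,)
        self.lookback = lookback
        self.episode_len = episode_len
        self.action_space = spaces.Discrete(3)       # 0 = flat, 1 = long, 2 = short
        obs_dim = 2 + lookback * features.shape[1]   # balance, position + window
        self.observation_space = spaces.Box(-np.inf, np.inf, (obs_dim,), np.float32)

    def _obs(self):
        window = self.features[self.t - self.lookback:self.t].ravel()
        account = np.array([self.balance, self.position], dtype=np.float32)
        return np.concatenate([account, window]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # spawn at a random place in the dataset for `episode_len` steps
        self.t = int(self.np_random.integers(self.lookback,
                                             len(self.prices) - self.episode_len))
        self.t_end = self.t + self.episode_len
        self.balance, self.position = 1.0, 0.0
        return self._obs(), {}

    def step(self, action):
        self.position = {0: 0.0, 1: 1.0, 2: -1.0}[int(action)]
        ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        reward = self.position * ret                 # P&L of the held position
        self.balance *= 1.0 + reward
        self.t += 1
        terminated = self.t >= self.t_end
        return self._obs(), float(reward), terminated, False, {}
```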
@WealthGame_ · 3 months ago
Very informative video; everyone loves your insights. Reinforcement learning might be better used not to predict the asset price directly, but to predict the chance of a reversal or a trend continuation from statistical parameters.
@colebrzezinski4059 · 3 months ago
Really good talk. I love the video you did on picking the right data features; I had been thinking similar things about features for trading. I also like the remark you make on state spaces, a really good point to consider for deep learning environments.
@plopplippityplopyo · 3 months ago
Thank you for sharing your thoughts on this. I have personally been working on this problem space for the past year, and it sounds like the issues I'm encountering are all too common 🙂
@angeldiaz4624 · 5 days ago
You don’t need many indicators. It’s really a whole new way of trading.
@veniciussoaresdasilva6614 · 3 months ago
I agree with you. I have worked on RL for trading for at least a year and the performance is poor; it only produces some results in training, but on live data it's a different story. All the models I build make around 60% to 80% profit, but once they move to live data, forget it.
@MasamuneX · 3 months ago
I've had some good trades using a random forest classifier that just predicts whether the change from today's close to tomorrow's close will be positive or negative. It's only 62% accurate, but if you're willing to give up some recall and only make a guess one time in a thousand, you can boost accuracy by betting only when the model is very certain of the outcome. The downside is that it has no perspective on the size of the change, so in backtesting I check some volatility conditions to make sure the stock is actually moving and the model isn't just getting a bunch of 0.01% moves right.
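A minimal sketch of that precision-over-recall filter, assuming scikit-learn and a pre-built feature matrix; the probability threshold and hyperparameters are illustrative assumptions, not the commenter's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_and_filter(X_train, y_train, X_test, threshold=0.90):
    """y is 1 if tomorrow's close > today's close, else 0."""
    clf = RandomForestClassifier(n_estimators=500, min_samples_leaf=20,
                                 random_state=0)
    clf.fit(X_train, y_train)
    proba_up = clf.predict_proba(X_test)[:, 1]
    # Trade only when the model is very confident either way;
    # everything else is "no trade" (recall sacrificed for precision).
    go_long = proba_up >= threshold
    go_short = proba_up <= 1.0 - threshold
    return go_long, go_short
```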
@user-ne4qf2ej9q · 26 days ago
Why would you need to develop new strategies? You can use ones that have worked for decades, such as ORB on 5-minute charts, or reversals, double bottoms and double tops. You just need to teach your agent to find the right conditions for them, with stop losses and profit targets. Also, trading differs over the day: in the morning, from 9:30 until 11:00, the market is very volatile, which is usually good for scalping and for ORB and VWAP setups; after 11:00 it's much slower and better for trend trading.
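A minimal sketch of an opening-range-breakout (ORB) signal on 5-minute bars, assuming a pandas DataFrame for a single session with high/low/close columns; the 15-minute range and the raw breakout rule are illustrative choices, not the commenter's setup:

```python
import pandas as pd

def orb_signals(bars: pd.DataFrame,
                range_start="09:30", range_end="09:45") -> pd.DataFrame:
    # Opening range = high/low of the first bars of the session.
    opening = bars.between_time(range_start, range_end, inclusive="left")
    or_high, or_low = opening["high"].max(), opening["low"].min()

    # After the range is set, flag closes that break out of it.
    after = bars[bars.index > opening.index[-1]]
    signals = pd.DataFrame(index=after.index)
    signals["long"] = after["close"] > or_high     # breakout above the range
    signals["short"] = after["close"] < or_low     # breakdown below the range
    return signals
```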
@angeldiaz4624 · 5 days ago
I think I found the perfect trading strategy. It’s for swing trading, not day trading. Would we be able to work on something together?
@ashtonsilver6518 · 2 months ago
I achieved an annual return of 0.268% testing my quant script on the Dow 30 tickers, outperforming it by around 4%, since the index comes in at about 0.226% over the same data and timeframe. However, I didn't save the run! I'm struggling to recreate the state conditions and code variables I chose for that run. Would you want to work together on this?
@tildarusso · 3 months ago
My understanding is that in a typical RL setting your actions are the only dynamics that change the environment/system, but that is clearly not the case for stock trading, where the state is only partially observed and is also influenced by many, many other agents (humans or algorithms). Hence the essential MDP assumption is not satisfied, or put another way, the data distribution shifts constantly, so the model's performance can't be ideal.
@martin777xyz · 3 months ago
Your actions don't affect the environment (the share price); they affect the reward (your holding × the share price). Essentially, a reinforcement learning system would learn (on balance) the best actions to take at given time steps in the environment. It's important that the training data is sufficiently varied.
@tildarusso · 3 months ago
@martin777xyz In that case the environment is deterministic, but the stock market should be part of the state/observation, as a sort of random noise, since it isn't affected by my actions at all?! This is not any typical RL setup; it gives me a headache...
@MasamuneX · 2 months ago
@tildarusso If your trading account is under a million, you don't need to worry about your actions moving the price meaningfully unless you are trading extremely small-cap stocks with low volume and low liquidity.
@meowtownhill · 1 month ago
@MasamuneX Veteran traders know this to be true. Backtesting or paper trading doesn't include how your participation affects the dynamics of the market, so the model is missing vital data. When it goes to live trading, the environment suddenly changes.
@MasamuneX · 1 month ago
@meowtownhill lmao, yeah; if your account is under a million and you're trading a stock with a market cap under $10B, then assuming you use a maximum of 5% of your account per position, $50k isn't going to move the price up or down by more than about 0.1%, even if you just send a market order and don't structure your orders to slowly soak up the shares.
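A minimal sketch of the point made in this thread: for a small account the price path can be treated as exogenous, so the action changes only the position and the reward, never the prices themselves. The function and variable names below are illustrative assumptions:

```python
import numpy as np

def step(position: float, t: int, prices: np.ndarray, equity: float):
    """position in {-1.0, 0.0, +1.0}; prices is the exogenous close series."""
    price_return = prices[t + 1] / prices[t] - 1.0   # unaffected by the action
    reward = position * price_return                 # the action only scales P&L
    equity *= 1.0 + reward
    # the "environment" transition is just advancing along the fixed price path
    return t + 1, equity, reward
```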
@vehctor8893 · 3 months ago
The typical approach of having an episode consist of multiple trades run sequentially through the entire training set is not good, based on my experimentation. I've been building out a way to train RL models where each trade is a single episode, i.e. flat -> open long -> close long, or flat -> open short -> close short, would each be one episode. This certainly seems to yield more robust models in my testing.
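A minimal sketch of that one-trade-per-episode idea: the episode starts flat at a random index and terminates as soon as the opened position is closed (or a step budget runs out). The policy call and action encoding are hypothetical, not the commenter's code:

```python
import numpy as np

HOLD, OPEN_LONG, OPEN_SHORT, CLOSE = 0, 1, 2, 3

def run_single_trade_episode(policy, prices, rng, max_steps=500):
    t = int(rng.integers(0, len(prices) - max_steps - 1))
    entry_t, direction = None, 0        # 0 = flat, +1 = long, -1 = short
    for _ in range(max_steps):
        action = policy(t, direction)   # hypothetical policy call
        if direction == 0 and action in (OPEN_LONG, OPEN_SHORT):
            entry_t = t
            direction = +1 if action == OPEN_LONG else -1
        elif direction != 0 and action == CLOSE:
            # terminal reward: P&L of the round trip ends the episode
            return direction * (prices[t] / prices[entry_t] - 1.0)
        t += 1
    # step budget exhausted: mark any open position to market
    return 0.0 if direction == 0 else direction * (prices[t] / prices[entry_t] - 1.0)
```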
@jasonreviews · 3 months ago
It's unpredictable. Just buy the NANC or Cruz ETF? You will make money through government insider trading. I give up; you can't predict the laws...