Hi, just wanted to say thank you so much. I found this video really helpful when searching for 'conditional probability'. A happy accident that led to a great channel for quant finance and physical science research. Cheers! 🥂
Thanks for the video! Just wanted to ask whether, in your opinion, it makes sense to use this to model implied volatilities, or whether it might be better to use a multivariate OU process (on log(x)). Thanks!
Thanks for the amazing video. Do you have an idea how to calculate the composite likelihood with the pairwise method over all constituents, meaning over all neighbouring pairs of assets, to estimate the two DCC parameters alpha and beta? I'd like to understand how to calculate it on a big data set like you did with the complete S&P 500. Pakel et al. (2020), "Fitting Vast Dimensional Time-Varying Covariance Models", suggest doing that with a pairwise likelihood. Do you have an idea how to implement it in your code?
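For anyone attempting this, here is a minimal sketch of the neighbouring-pairs idea from Pakel et al. (2020). It assumes the video's loglike_norm_dcc_copula function accepts any T x N array of uniforms together with an (alpha, beta) vector; composite_loglike and udata are names introduced here for illustration:

```python
# Hypothetical helper; assumes loglike_norm_dcc_copula from the video
# returns the negative log-likelihood for any T x N slice of uniforms.
def composite_loglike(theta, udata):
    T, N = udata.shape
    # Sum the bivariate negative log-likelihoods over neighbouring
    # column pairs (0,1), (1,2), ..., (N-2, N-1)
    return sum(loglike_norm_dcc_copula(theta, udata[:, i:i + 2])
               for i in range(N - 1))

# Then minimise exactly as in the video, swapping in composite_loglike:
# opt_out = minimize(composite_loglike, np.array([0.01, 0.95]),
#                    args=(udata,), bounds=bnds, constraints=cons)
```

Because each term only ever involves a 2 x 2 correlation matrix, this scales to hundreds of assets where the full likelihood becomes impractical.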
Why, when computing the GARCH volatility, do you multiply the conditional vol by 1600? In this part of the code: 'garch_vol_df = pd.concat([pd.DataFrame(model_parameters[x].conditional_volatility/100)*1600 for x in model_parameters], axis=1)'
Hey, very helpful video. Can you help me with the code for the DCC-MIDAS-X model, which is somewhat similar to this one? I am unable to implement that model. Please help.
Thanks, very helpful video. Could you please suggest how an exogenous variable X can be incorporated into the DCC-GARCH model to see the impact of X on the correlation of different assets?
Glad you liked the video. Does the exogenous variable itself have GARCH characteristics? You can't just throw anything in there, as it won't converge. Just add it as another asset and see what it looks like. No problem in doing that. Remember we are looking at the correlation of VOLATILITY, not returns.
@@dirtyquant Thanks for replying. X has GARCH characteristics. I want to check whether variable X impacts the dynamic conditional correlation of two assets, i.e. whether it has a positive or negative effect.
It won't have any effect on the other two variables. But you can look at the time series of the DCC, which I find useful. Let me know how you get on. You can post your findings on dirtyquant.com
Hi! First of all, many thanks for your video. You clarified a lot of stuff that is a necessity for my thesis. I do run into an issue, though. I want to use the skewed Student's t distribution for my DCC-GARCH model. I have adjusted dist to 'skewt' in the function garch_t_to_u, and I have added eta and lambda as parameters. Then, udata = t.cdf(std_res, lambda). However, opt_out.success now returns False and I do not get any estimates. Could you please let me know how I can adjust the code to use the skewed t distribution? If I have made mistakes in the adjustments described above, please let me know. Many thanks in advance!
Hi, it's a really helpful video. Really appreciated :)) I have walked through your code and just have a question: if I want to get the DCC correlation of the stocks, do I just extract the Rt value? Thanks in advance. 😁
@@dirtyquant Hi Dirtyquant, thanks so much for getting back so quickly! Is there any way that I can modify the code to get the return correlation? I tried inputting returns instead of udata_list, but the optimisation does not succeed.
The model is (likely) not going to converge if you use raw returns, as they are far too unstable. You need to convert the returns to a uniform distribution using the appropriate CDF for that distribution. Even then, I don't think it will work: the autocorrelation of returns decays far too quickly, so it will be incredibly noisy. Please try it and report back. :-)
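For anyone wondering what that conversion looks like in practice, here is a minimal sketch of the PIT (probability integral transform) step, assuming the arch package and a pandas Series of percentage returns; it mirrors the garch_t_to_u function mentioned elsewhere in this thread:

```python
from arch import arch_model
from scipy.stats import t

def garch_t_to_u(returns):
    # Fit a GARCH(1,1) with Student-t innovations, then map the
    # standardized residuals to uniforms via the fitted t CDF
    res = arch_model(returns, dist='t').fit(disp='off')
    nu = res.params['nu']                             # estimated degrees of freedom
    std_res = res.resid / res.conditional_volatility  # standardized residuals
    return t.cdf(std_res, nu)
```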
I understand the basics, and could use this to interpret past data. But I am struggling conceptually with applying DCC to a simulation/forecast model. Do you have any recommended follow-up reading?
I am planning on doing a video on it, but look at my response here: dirtyquant.com/t/video-cholesky-decomposition-take-your-backtesting-to-the-next-level/127/4
There is no reason why you have to use GARCH. If you omit that step and just feed in returns, it should work. I think the issue is that returns are a lot more unstable compared to vol.
Thanks a lot for your videos! Is there a way to change your code to estimate the DCC using a GARCH with a normal innovation density instead of a t-distribution? I am really struggling to understand which parameters I should extract from the GARCH results to then get the uniform distribution.
Very easy. Instead of udata = t.cdf(std_res, nu), just use the normal CDF. And instead of arch_model(rets[x], dist = 't'), just change that to 'normal'. Done!
@@dirtyquant Thanks a lot for your kind reply, you are really making my last Master's assignment a lot easier! Could you also tell me which parameters I should put into the norm CDF? I mean, how do I get the mean and std from the fitted GARCH model? (Amazing channel, really.)
@@dirtyquant I see! So I just need to get the standardized residuals out of it, right? Any idea how to do that? I tried to get the same object (std_res) from the GARCH fitted with the normal distribution, but the object does not exist.
Because everything is in a function, std_res is a local variable, so it won't appear in your list of variables. I am travelling but will give you a working example later. Try to solve it yourself first, you will learn a lot.
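In the meantime, a minimal sketch of the normal-innovation variant, assuming the arch package and the rets DataFrame from the video (the column name 'SPY' is a placeholder). The key point, answering the earlier question, is that the standardized residuals are already approximately zero-mean and unit-variance, so norm.cdf needs no extra location/scale parameters:

```python
from arch import arch_model
from scipy.stats import norm

def garch_norm_to_u(returns):
    # Same pipeline as garch_t_to_u, but with normal innovations;
    # std_res is (approximately) standard normal by construction,
    # so the default norm.cdf(loc=0, scale=1) is exactly what we want
    res = arch_model(returns, dist='normal').fit(disp='off')
    std_res = res.resid / res.conditional_volatility
    return norm.cdf(std_res)

# udata = garch_norm_to_u(rets['SPY'])  # 'SPY' is a placeholder column name
```

Returning the uniforms from the function (or returning res as well) is how you get the values out of the local scope.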
It was really quite complicated for me to calculate DCC and CCC (Constant Conditional Correlation) in Excel. I had to use an LGARCH MLE (leveraged model) with American indexes: Nasdaq, S&P 500 and Dow Jones. I assumed it would be very difficult to calculate in Python, and I heard that R makes it very easy to estimate. I didn't know that DCC could be calculated in Python too. What about CCC? Your code looks complicated. Can you share it? Regards.
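For reference, CCC (Bollerslev's constant conditional correlation model) drops the DCC recursion entirely: the correlation matrix is just the sample correlation of the GARCH standardized residuals. A minimal sketch, where std_res_df is a hypothetical T x N DataFrame collecting the per-asset standardized residuals:

```python
import numpy as np

# std_res_df: hypothetical DataFrame of GARCH standardized residuals,
# one column per index (Nasdaq, S&P 500, Dow Jones)
R_ccc = std_res_df.corr().values   # constant conditional correlation matrix

def ccc_covariance(sigma_t, R):
    # H_t = D_t R D_t, where D_t = diag of the univariate GARCH vols at t
    D = np.diag(sigma_t)
    return D @ R @ D
```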
Thanks. Very helpful! I learned a lot from your code. I want to know how to compute standard errors so I can make inferences about the DCC estimates. Could you add how to compute standard errors to your Python code, please?
Hey, glad you liked the video and code. The errors around the estimates really depend on your optimiser, which will give you estimates around the parameters. If you use MCMC, you can just look at the chain and infer the errors from there.
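If you are using the scipy route from the video, one common approach (a sketch, not the video's own code) is to invert the numerical Hessian of the negative log-likelihood at the optimum. This assumes the numdifftools package, the fitted opt_out object from the video, and that loglike_norm_dcc_copula returns the negative log-likelihood given a (theta, udata) pair:

```python
import numpy as np
import numdifftools as nd

# Hessian of the negative log-likelihood, evaluated at the ML estimates
hess = nd.Hessian(lambda theta: loglike_norm_dcc_copula(theta, udata))(opt_out.x)

# The inverse Hessian approximates the covariance matrix of the estimates
se = np.sqrt(np.diag(np.linalg.inv(hess)))
print(dict(zip(['alpha', 'beta'], se)))
```

Note this is only reliable when the optimum is interior to the a + b < 1 constraint; near the boundary the usual asymptotics break down.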
Hey Rocky, if you look at page 135 (106 in the top right) of this doc, you will see the log-likelihood function for the normal copula: ses.library.usyd.edu.au/bitstream/handle/2123/14728/2016_Christian_Contino_Thesis.pdf?sequence=2&isAllowed=y
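For readers who don't want to dig through the thesis, here is a minimal sketch of that normal-copula log-likelihood with the DCC recursion baked in. It assumes udata is a T x N numpy array of the uniforms produced by the PIT step above, and it returns the negative log-likelihood so it can be passed straight to scipy's minimize; this is a sketch of the idea, not necessarily line-for-line what the video's function does:

```python
import numpy as np
from scipy.stats import norm

def loglike_norm_dcc_copula(theta, udata):
    a, b = theta
    z = norm.ppf(udata)                   # map uniforms back to standard normals
    T, N = z.shape
    Q_bar = np.corrcoef(z, rowvar=False)  # unconditional correlation target
    Q = Q_bar.copy()
    llf = 0.0
    for i in range(T):
        if i > 0:
            # DCC recursion: Q_t = (1-a-b)*Q_bar + a*z_{t-1}z_{t-1}' + b*Q_{t-1}
            Q = (1 - a - b) * Q_bar + a * np.outer(z[i - 1], z[i - 1]) + b * Q
        d = np.sqrt(np.diag(Q))
        R = Q / np.outer(d, d)            # rescale Q_t into a correlation matrix R_t
        llf += -0.5 * (np.log(np.linalg.det(R))
                       + z[i] @ (np.linalg.inv(R) - np.eye(N)) @ z[i])
    return -llf                           # negative llf, ready for minimize()
```

The per-step inverse and determinant are fine for a handful of assets; for something S&P 500-sized you would want the pairwise composite-likelihood trick discussed further up the thread.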
@@dirtyquant Thank you for sharing your work, I really appreciate it. I do have a question: if I wanted to change from multivariate Gaussian-distributed errors to multivariate t-distributed errors, how would I change the code?
I will do a video at some point, but look at this discussion, as I link to my paper on how to do it: dirtyquant.com/t/video-cholesky-decomposition-take-your-backtesting-to-the-next-level/127/3
@@dirtyquant There is no tutorial on DECO anywhere... Would you be willing to help me? stats.stackexchange.com/questions/534509/compute-conditional-equicorrelation-matrix-deco-garch-from-dcc-garch-estimatio
I've managed to get as far as %time opt_out = minimize(loglike_norm_dcc_copula, np.array([0.01, 0.95]), args = (udata_list,), bounds=bnds, constraints=cons), but I keep getting --> ValueError: not enough values to unpack (expected 2, got 1). Need some help!!
Interesting video... thanks for sharing... loved it, although I'm not a fan of the background music. It would be 10x better without it! P.S. I consume such YouTube videos at 1.5x or 2x, and it's very annoying if there is background music :) Good luck... looking forward to more interesting stuff.