mandyxmg / mcxs23-BVARs

Research Project BVARs
MIT License

Model Extension #2

Closed mandyxmg closed 1 year ago

mandyxmg commented 1 year ago

Hi @donotdespair, as we discussed, I'm interested in the dummy-variable prior and the extension concerning the error term. I'm thinking of doing the former, or both, depending on my capabilities. Could you please send me some information about them? Thanks.

Mandy

donotdespair commented 1 year ago

Hi @mandyxmg

Thank you for opening this issue! Let's talk about it!

  1. The first go-to resource would be the article Woźniak (2016), Bayesian Vector Autoregressions, provided in the lecture materials. Section 6 explicitly discusses the setup and the estimation. Having studied this part, you will know how to implement the prior and estimate a model with it.
  2. Next, you need a research paper that explains how to generate the artificial data that implement the prior (a minimal sketch of stacking such dummy observations follows below this list). Section 6, mentioned above, points to two papers by Chris Sims that scrutinise such priors. Another option is a recent article: Giannone, D., Lenza, M. & Primiceri, G. E. (2019), Priors for the Long Run, Journal of the American Statistical Association, 114:526, 565-580, DOI: 10.1080/01621459.2018.1483826
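
For concreteness, here is a minimal sketch of how such dummy observations can be stacked, written in Python with numpy (the construction carries over to whatever language your project uses). The function name `soc_dummies` and the hyperparameters `lam` and `tau` are my illustrative naming, not notation from the papers, and I assume each row of X is ordered as [y_{t-1}', ..., y_{t-p}', 1]:

```python
import numpy as np

def soc_dummies(Y0, p, lam=1.0, tau=1.0):
    """Sum-of-coefficients and dummy-initial-observation rows, Sims-style.

    Y0  : (k, N) initial observations used to form per-variable means
    p   : number of lags in the VAR
    lam : sum-of-coefficients tightness (smaller = tighter prior)
    tau : dummy-initial-observation tightness
    """
    N    = Y0.shape[1]
    ybar = Y0.mean(axis=0)                      # per-variable sample means

    # sum-of-coefficients: one dummy row per variable
    Y_soc = np.diag(ybar) / lam                 # (N, N)
    X_soc = np.hstack([np.tile(Y_soc, (1, p)),  # same block in every lag position
                       np.zeros((N, 1))])       # these rows carry no constant

    # dummy initial observation: one extra row that also disciplines the constant
    Y_dio = ybar[None, :] / tau                 # (1, N)
    X_dio = np.hstack([np.tile(Y_dio, (1, p)), np.ones((1, 1)) / tau])

    return np.vstack([Y_soc, Y_dio]), np.vstack([X_soc, X_dio])
```

The augmented sample is then Y* = [Y+; Y] and X* = [X+; X], and the usual normal-inverse-Wishart posterior formulas are applied to the augmented data unchanged.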

Please let me know when you need more input from me, and remember to tag me in such posts. Thanks, @donotdespair

mandyxmg commented 1 year ago

Thanks @donotdespair, will have a read :)

mandyxmg commented 1 year ago

Hi @donotdespair,

Thanks for sharing the information about the dummy prior. I understand the basic idea of the dummy prior and the setup from your paper and the many other papers I have found and read; the pain point is that I'm really struggling to understand how to generate the artificial Y-plus and X-plus. The 2019 paper doesn't make sense to me. Sorry, I'm not saying it isn't good, just that I'm not able to follow it. The notation it uses and the transformations it introduces are not the same as what we learnt in this subject. I'm not very good with matrices; studying this subject has made me a bit better, but as I'm still not at a proficient level, I may need some material that makes this explicitly clear. The 2019 paper doesn't (I guess it doesn't have to, as its audience is researchers), which makes it hard, almost impossible, for me to understand. And when I don't understand the theory, I can't present it in code :( I'm so sorry...

I'm quite worried about the coming assignment. Do you have any other sources that simplify things and are written for students? Otherwise I guess I'll have to give up this extension, which I don't want to do, but I may have no choice. If there isn't much else, maybe I could try the extension on the error term instead? Would it be easier to understand? Do you have any suggestions?

Many thanks,

Mandy

donotdespair commented 1 year ago

Hi @mandyxmg

Yes, I realise that the 2019 paper is hard. That's why I first pointed you to the two papers by Chris Sims referenced in Section 6 of my paper; the exposition there is much easier.

donotdespair commented 1 year ago

Hi @mandyxmg

Please have a look at the appendix by Bauwens, Lubrano, and Richard (1999) that I mentioned during the meeting. There, on page 307, you will find the definition and derivation of the matricvariate t-distribution coherent with our derivations in the notes (the link has been updated on Canvas). The necessary changes are that in our derivation q = 1 (so the IW distribution becomes IG2), and that their P^{-1} (P is the precision matrix) is our covariance matrix Sigma.
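
If it helps to see the q = 1 remark in code, here is a minimal sketch (Python with numpy/scipy; `draw_matricvariate_t` is my own naming, not the course code) of drawing from the matricvariate t by composition: Sigma comes from the inverse Wishart first, then the coefficients from a matric normal given Sigma, so marginally the coefficients are matricvariate t; with a single column the IW step collapses to the inverse-gamma (IG2) case:

```python
import numpy as np
from scipy.stats import invwishart

def draw_matricvariate_t(M, V, S, nu, rng):
    """One draw of A by composition: Sigma ~ IW(S, nu), then
    A | Sigma ~ MN(M, V, Sigma) with row covariance V (K x K)
    and column covariance Sigma (N x N)."""
    K, N  = M.shape
    Sigma = np.atleast_2d(invwishart.rvs(df=nu, scale=S, random_state=rng))
    Z     = rng.standard_normal((K, N))
    return M + np.linalg.cholesky(V) @ Z @ np.linalg.cholesky(Sigma).T

rng = np.random.default_rng(0)
K, N = 5, 1                        # N = 1 mirrors the q = 1 case in the notes
A = draw_matricvariate_t(np.zeros((K, N)), np.eye(K), 3 * np.eye(N), nu=5, rng=rng)
```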

Greetings, @donotdespair

mandyxmg commented 1 year ago

Hi @donotdespair, sorry for the late reply. I'll have a look, thanks for sharing :)

mandyxmg commented 1 year ago

Hi @donotdespair, regarding providing feedback on a peer's empirical example: I don't see a peer assigned to me. Are you still working on this? Thanks.

donotdespair commented 1 year ago

Hi @mandyxmg

You are right! I ticked the Canvas option for the automatic assignment, but it did not run. I don't quite understand Canvas at times. But now you are assigned! Thanks for the remark!

mandyxmg commented 1 year ago

Hi @donotdespair, I'm looking at incorporating heteroskedasticity in the model, and I roughly understand how the function works. The problem now is: how can we get the column-specific residuals? Thanks.

mandyxmg commented 1 year ago

@donotdespair, would the residuals be of dimension T x N? If that's the case, how should I understand diag(sigma squared) being a T-vector on the main diagonal? Do we have N separate T x T diagonal matrices here? I don't think so, but I got a bit confused. I'd appreciate a clarification.
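
Not an official answer, but a shape sketch might help, in Python with numpy on synthetic arrays; the common-volatility structure below is one standard setup and may differ from the course functions. The residual matrix is T x N with one column per equation, and under a common volatility path the same T-vector sits on every equation's main diagonal, only scaled by an equation-specific variance:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 200, 3, 7                        # illustrative dimensions only
Y = rng.standard_normal((T, N))
X = rng.standard_normal((T, K))
A = rng.standard_normal((K, N))

E = Y - X @ A                              # residuals: T x N, column n = equation n

s2 = np.exp(0.1 * rng.standard_normal(T))  # placeholder volatility path, length T
D  = np.diag(s2)                           # the T x T matrix with diag(sigma^2)

sig = E.var(axis=0)                        # equation-specific scales, length N
# Column n of E then has covariance sig[n] * D: one shared T-vector on the
# main diagonal, so there is a single diagonal pattern, not N unrelated ones.
Cov = [sig[n] * D for n in range(N)]
```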

donotdespair commented 1 year ago

Hi @mandyxmg

I'm on it! I will communicate in a little while :)

T

donotdespair commented 1 year ago

Hi @mandyxmg

I have just sent an announcement about the new functions provided. Please let me know if you require any further support.

Cheers, T

mandyxmg commented 1 year ago

Hi @donotdespair, thanks for notifying me. I'll have a look :)

mandyxmg commented 1 year ago

Hi @donotdespair, thanks for sharing the code. I have tried to incorporate the function with my dataset, and it produces some results. A quick question here: the attached screenshot shows that I repeatedly draw 5000 samples. The draws at S = 1, 100, and 1000 look very different from each other, and those three batches differ from the draws at S = 2000 and 5000. Can I understand this as burn-in over the first many draws, with the sampler eventually converging to its stationary state? Thanks.

[Screenshot 2023-06-04: repeated posterior draws at S = 1, 100, 1000, 2000, 5000]
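
A trace plot with a running mean is the usual quick check for this: early draws drifting away from the rest indicate burn-in to discard. A minimal sketch in Python with numpy/matplotlib on a synthetic chain (your own draws would replace `draws`):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
S   = 5000

# synthetic chain: a transient start drifting towards the stationary level
draws  = np.concatenate([np.linspace(3.0, 1.0, 500), np.full(S - 500, 1.0)])
draws += 0.1 * rng.standard_normal(S)

plt.plot(draws, lw=0.5, label="draws")                     # trace plot
plt.plot(np.cumsum(draws) / np.arange(1, S + 1), lw=1.5, label="running mean")
plt.axvline(1000, ls="--", label="candidate burn-in cut")
plt.xlabel("draw S")
plt.legend()
plt.show()

kept = draws[1000:]    # drop the burn-in, keep the rest for inference
```
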
mandyxmg commented 1 year ago

@donotdespair, I'll email my code to you, so it might be easier to spot any issues. Thanks :)