Hi @ThomasKronhol
BINGO on both points!
Please remember to tag me below whenever you want a reply from me.
Cheers,
@donotdespair
Hi again, @donotdespair
I think I might want to start out by just having a model identified using a Cholesky decomposition (not taking long-run restrictions into account) - or are those indeed required?
If I want to do something other than dummy priors, what suggestions would you have in terms of the hyperparameters or similar that I could investigate?
// Thomas
Hi @ThomasKronhol
Happy to help one point at a time!
OK, you can begin by coding the model with Cholesky decomposition as a simple model. But I would insist that you later on code the RRWZ2010 algorithm. One of the requirements for this assignment was to work with Uhlig's algorithm for sign restrictions or Waggoner & Zha (2003) for the model with exclusion restrictions. I would be happy to replace WZ2003 with the RRWZ2010 algorithm, but just a triangular system estimated by applying a Cholesky decomposition is too little for this assignment.
So, please develop the RRWZ2010 algorithm subsequently and apply at least one long-run restriction to implement it in your empirical example.
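Just to fix ideas, the Cholesky part is straightforward. Here is a minimal sketch with placeholder numbers (my notation, not the lecture code) of how the impact matrix comes out of a reduced-form covariance estimate:

Sigma <- matrix(c(1.0, 0.3,
                  0.3, 0.5), 2, 2)   # a placeholder reduced-form error covariance
L <- t(chol(Sigma))                  # lower triangular, so Sigma = L %*% t(L)
B0_inv <- L                          # impact multipliers of the structural shocks
B0 <- solve(B0_inv)                  # structural matrix, lower triangular by construction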
Hey @ThomasKronhol
If it's about the dummy observation prior...
Please let me know, when you would need more input from me. Please, remember to tag me in such a post. Thanks, @donotdespair
@donotdespair
Would I also need to implement long run restrictions if I do not want to work with dummy priors?
Additionally, are we allowed to use your package?
// Thomas
@ThomasKronhol Cool! No worries! These are two separate things.
Does this make sense? T
@donotdespair Can I drop by your office tomorrow? // Thomas
Hi @ThomasKronhol Can we keep arrangements regarding meetings to emails? This gets published online in the end. Folks can even search this content :) Thanks. I have sent you the email.
Hi @ThomasKronhol
You asked whether you can use my bsvars package. My answer is a conditional YES. By that I mean that yes, you can use the package, but you also need to submit a pull request to the package repo with your proposal for a new utility function. Your assignment will be accepted provided that the function's quality is sufficient to include it in the package. This includes writing up the help file for the function, which is not that difficult. So, yes, you can use my package, but you still have to write original code. Your contribution will be acknowledged in the contributors section of the package. If you want to pursue this option, please let me know and we'll talk about what this function can be.
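To give you an idea of the help-file part: the documentation is generated from roxygen2 comments placed above the function. A skeleton for a hypothetical utility function (the name and arguments below are made up, just for illustration) looks like this:

#' Title of the utility function
#'
#' @description A paragraph describing what the function computes.
#'
#' @param posterior posterior estimation output to operate on.
#'
#' @return The computed object.
#'
#' @export
compute_something <- function(posterior) {
  # the function body goes here
  posterior
}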
Hi @donotdespair
Having simulated the artificial data, I get the following error when running it through the WZ2003 code:
The code used to simulate the data is as follows:
library(MASS)   # needed for mvrnorm

n <- 1000
x0 <- c(0, 0)
cov_mat <- diag(2)
x <- matrix(1, n, 2)
x[1,] <- x0
for (i in 2:n) {
  x[i,] <- mvrnorm(1, x[i-1,], cov_mat)
}
Y=x[2:1000,]
y = Y
Y = ts(y)
p = 1
X = matrix(1,nrow(Y),1)
for (i in 1:p){
  X = cbind(X,y[2:1000-i,])
}
X = ts(X)
Running the following lines from the WZ2003 code:

t0 = proc.time()
B0.posterior = rgn(n=S.burnin, S=S.post, nu=nu.post, V=FF.V, B0.initial=B0.initial)

gives the following error (it comes from computing the orthogonal matrix in the function found in L16 codes.R: out = as.matrix(tmp[,(N[2]+1):N[1]])):

Error in tmp[, (N[2] + 1):N[1]] : subscript out of bounds
Any suggestions on what might be causing this and how to resolve it?
Kind regards T
OK, @ThomasKronhol
These lines:
Y=x[2:1000,]
X = cbind(X,y[2:1000-i,])
should be:
Y=x[(1+p):1000,]
X = cbind(X,y[(1+p):1000-i,])
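One more thing worth knowing here: in R the sequence operator : binds tighter than binary minus, so 2:1000-i is parsed as (2:1000)-i, not 2:(1000-i). That is what makes the lag shift above work, but it is a common source of confusion, so use parentheses when you mean something else:

i <- 1
(2:5) - i    # 1 2 3 4 — what 2:5 - i actually evaluates to
2:(5 - i)    # 2 3 4   — a different range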
Let me know if this works :)
Hi @donotdespair
It gives the same error message unfortunately :(
OK @ThomasKronhol
Use n consistently throughout:
n <- 300
x0 <- c(0, 0)
x <- matrix(1, n, 2)
x[1,] <- x0
cov_mat <- diag(2)
for (i in 2:n) {
  x[i,] <- mvtnorm::rmvnorm(1, x[i-1,], cov_mat)
}
p <- 1
Y <- x[(1+p):n,]
Y <- ts(Y)
X <- matrix(1, nrow(Y), 1)
for (i in 1:p){
  X <- cbind(X, x[(p+1):n-i,])   # x, not y: y is not defined in this snippet
}
X <- ts(X)
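And a quick sanity check that the dimensions line up:

dim(Y)                          # (n - p) rows, 2 columns
dim(X)                          # (n - p) rows, 1 + 2 * p columns
stopifnot(nrow(Y) == nrow(X))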
Hi @donotdespair
Do we need to impose our restrictions in this assignment, or do we only need to show that the estimation runs using artificial data?
Kind regards T
@ThomasKronhol, the latter, please. But you need to use some restrictions to estimate your target model :)
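For instance, one common way to encode exclusion restrictions is a pattern matrix for the structural matrix, as in this hypothetical trivariate example (NA marks a freely estimated element, 0 an excluded one):

# hypothetical lower-triangular pattern for B0
B0_pattern <- matrix(c(NA,  0,  0,
                       NA, NA,  0,
                       NA, NA, NA), nrow = 3, byrow = TRUE)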
Hi @donotdespair
Does it make sense that the estimation takes longer when the number of artificial data points increases from, e.g., 500 to 1000? In my mind it would make sense if it did not affect the speed, as it is simply matrix multiplications?
Kind regards T
Hi @ThomasKronhol
Yes, it might take longer, especially when you recompute these matrices in each Gibbs step. I would still think it shouldn't take that much longer, but apparently, it does. Cheers, T
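PS. One easy speed-up along these lines: compute the data cross-products once, before the sampler, instead of inside each iteration. A minimal sketch (my own notation, reusing X and Y from your simulation):

S <- 10                  # number of Gibbs draws (placeholder)
XX <- crossprod(X)       # t(X) %*% X, computed once
XY <- crossprod(X, Y)    # t(X) %*% Y, computed once
for (s in 1:S) {
  # each Gibbs step can reuse XX and XY instead of recomputing them
}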
Hi @donotdespair
I have problems running my estimation with large numbers of draws. I get the following error:
When I lower the number of draws, the code runs perfectly fine, although slowly. Any idea why this might be the case or what might resolve it?
Best T
OK @ThomasKronhol
My guess is that there is a bug somewhere in the code! It might be one in the rgn sampler that prevents us from reproducing the results, but it also might be something else. I will have some time to check the rgn function on Friday. In the meantime, please check whether you save and compute everything else correctly.
This is the part of polishing the code that is the most difficult: everything seems to work and be as it should, but it isn't, and we have to figure out why. It's an unavoidable part of code development.
Good luck!
T
Hi @donotdespair
I'm trying to plot my impulse responses, and it's going well when using the basic model.
However, when running the extended model (outside of my Quarto file, where it works perfectly), I get the following error. It's inside the rgn function, and I have not changed anything compared to the Quarto file. Any suggestions on why the following is occurring?
Best, T
Hi @ThomasKronhol
I get your point and thanks for the reminder! I'm not sure at all! I'd need more access to your code...
Shall we meet on Monday in my office?
Hi @donotdespair
Before submitting the empirical analysis, are we allowed to "delete" the estimation done on the artificial data, thus improving the speed of the computations, or should we keep it and try to use "cache"?
Best
T
Hey @ThomasKronhol I'd say that cache would be the most suitable here! Thanks
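In Quarto, caching is just a chunk option, for example:

```{r}
#| cache: true
# put the slow artificial-data estimation in this chunk;
# it re-runs only when the chunk's code changes
```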
Hi @donotdespair
I've included Federal funds in my model, and it gives the following IRFs.
Best T
Hi @ThomasKronhol
There are a few things to say about your setup.
What do you think?
Cheers, T
Hi @donotdespair
Thanks for the suggestions. I have tried changing the order, which yielded slightly less interpretable responses (see attached).
I think the argument that financial conditions affect the real economy with a lag is somewhat more justified than contemporaneous effects, so I might keep the order as before. Nevertheless, regarding the FF rate, I think the suggested causal story is good and "makes sense" economically!
Thanks a lot!
Best T
Hi @ThomasKronhol
Thanks for sending these outputs! Actually, I love them! Most of them are quite interpretable. Well, except for the initial responses at horizon zero, which usually have the opposite sign compared to those at longer horizons. But also, they are, in the end, not that different from your original ones if you look at the mid- and long-term implications.
How about presenting the results with the NFCI as the last variable as your benchmark results, and those with this variable first as a robustness check (which works at all but the initial horizon)?
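Mechanically, that is just reordering the columns of your data matrix before estimation, e.g. (assuming a data matrix Y with a column named "NFCI"; adjust to your actual names):

bench_order <- c(setdiff(colnames(Y), "NFCI"), "NFCI")   # NFCI last
Y_bench <- Y[, bench_order]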
Cheers,
@donotdespair
@donotdespair
It's a deal!
Hi @ThomasKronhol
The materials on Student-t error terms are not numerous. One is the appendix to the textbook by Bauwens, Lubrano, and Richard (1999). Another is the article by Geweke (1993). And the paper Nathan found instructive was by Bobeica and Hartwig.
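The key idea in Geweke (1993), in case it helps to see it in code: Student-t errors are a scale mixture of normals with inverse-gamma latent variances, so the Gibbs sampler only gains one extra sampling step. A rough sketch of simulating such errors (placeholder values, not the full sampler):

nu <- 5                                                      # degrees of freedom
T_obs <- 200                                                 # sample size (placeholder)
lambda <- 1 / rgamma(T_obs, shape = nu / 2, rate = nu / 2)   # latent scales
eps <- rnorm(T_obs, mean = 0, sd = sqrt(lambda))             # marginally t with nu df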
Hi @donotdespair
I have some questions regarding the estimation procedure.
Can I simply use the code provided in Lecture 15, including the functions where a Cholesky decomposition is applied to make the model structural, as my "baseline" model? Or should I use the code provided in Lecture 12?
If I want to extend my model using dummy priors, how do I get the artificial data that seems to be required?
Kind regards, Thomas