bsvars / bsvarSIGNs

Bayesian SVARs with Sign, Zero, and Narrative Restrictions
http://bsvars.org/bsvarSIGNs/

look into HD coding #47

Open adamwang15 opened 2 months ago

adamwang15 commented 2 months ago

compute_historical_decompositions() is based on the textbook treatment in Kilian, L., & Lütkepohl, H. (2017); check that the code is correct.
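For reference, a sketch of the quantity in question, transcribed from Kilian & Lütkepohl's treatment (so a reading of the textbook formula, not necessarily the exact form coded here): the contribution of structural shock k to variable j at time t cumulates the structural impulse responses against the identified shocks,

```math
\hat{y}^{(j)}_{kt} \;=\; \sum_{i=0}^{t-1} \theta_{jk,i}\, w_{k,t-i},
\qquad
y_{jt} \;\approx\; \sum_{k=1}^{N} \hat{y}^{(j)}_{kt},
```

where $\theta_{jk,i}$ is the response of variable j to shock k at horizon i, $w_{k,t}$ are the structural shocks, and the approximation holds up to an initial-condition term that vanishes for stationary models as t grows. The check is whether the code reproduces this expression.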

AlejandroOrtiz12345 commented 2 months ago

Hello Adam and Tomasz,

I was working on my historical decomposition and realized that, after computing it with the compute_historical_decompositions() function, I lost two observations from the dataset. The problem is that I don't know whether those lost observations come from the beginning or the end of the dataset.

Could the possible problem you pointed out to me last time have something to do with this?

Best regards, Alejandro

donotdespair commented 2 months ago

Hey @AlejandroOrtiz12345

Would you please share your analysis script with us? That would be useful because HDs often misbehave when the model specification needs improvement. I would go through your code and make sure the whole setup is as it should be. The data file or a plot would be useful as well.

Greetings,

T

AlejandroOrtiz12345 commented 2 months ago

Hello Mr. Wozniak, thank you so much for the reply and for the help you offered. As requested, I paste my R script below and attach the Excel database I'm using.

A BVAR for the Brazilian economy, for the historical decomposition of inflation (ipca_free_3mc)

```r
library(tidyverse)    # attaches ggplot2, dplyr, tidyr, among others
library(lubridate)
library(BVARverse)
library(BVAR)
library(rbcb)
library(readxl)
library(openxlsx)
library(bsvarSIGNs)
library(vars)
library(tseries)
library(coda)
library(parallel)
library(ggplot2)      # already attached via tidyverse
library(tidyr)        # already attached via tidyverse
library(dplyr)        # already attached via tidyverse
```

```r
setwd('C:/Users/acruceno/OneDrive - Fosun Brasil/Documentos/Baysiana/Bases de Dados')

# path relative to the working directory set above
dados <- read_excel('dados_projeto_alejandro.xlsx', sheet = 'svar')
```

Attachment: data_project_alejandro.xlsx

Visualizing the data

```r
dados %>%
  pivot_longer(
    cols      = -date,
    names_to  = 'variaveis',
    values_to = 'valores'
  ) %>%
  ggplot() +
  aes(x = date, y = valores) +
  geom_line() +
  facet_wrap(~variaveis, scales = 'free_y') +
  labs(title = 'Time Series')
```

Collecting the variables of interest in levels and in differences/growth rates

```r
variables       <- dados[, c('ic_br_3mma', 'scp_3mma', 'output_gap_bcb', 'ipca_free_3mc',
                             '90d_swap_3mma', 'wages_3mma', 'capb_3mma')]
stat_variables  <- dados[, c('ic_br_3md', 'scp_3md', 'output_gap_bcb_3md', 'ipca_free_3mc',
                             '90d_swap_3md', 'wages_3mc', 'capb_3md')]
stat_variables2 <- dados[, c('ic_br_3md', 'scp_3md', 'gdp_growth', 'ipca_free_3mc',
                             '90d_swap_3md', 'wages_3mc', 'capb_3md')]
stat_variables3 <- dados[, c('gdp_growth', 'ipca_free_3mc', '90d_swap_3md',
                             'wages_3mc', 'capb_3md')]
```

Checking stationarity of the variables in differences/growth rates with the KPSS test

```r
check_stationarity <- function(base) {
  stationary_vars     <- c()
  non_stationary_vars <- c()

  # these data frames contain no date column, so loop over all columns
  for (col in names(base)) {
    result <- tryCatch({
      kpss.test(base[[col]])$p.value
    }, error = function(e) NA)

    # the KPSS null hypothesis is stationarity, so p >= 0.05 means
    # stationarity is not rejected
    if (!is.na(result) && result >= 0.05) {
      stationary_vars <- c(stationary_vars, col)
    } else {
      non_stationary_vars <- c(non_stationary_vars, col)
    }
  }

  list(stationary = stationary_vars, non_stationary = non_stationary_vars)
}
```
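As a quick self-check of the helper (purely illustrative, on simulated series rather than the project data), white noise should land in the stationary bucket and a random walk in the non-stationary one:

```r
# toy data: white noise (stationary) and a random walk (non-stationary)
set.seed(1)
toy <- data.frame(wn = rnorm(200), rw = cumsum(rnorm(200)))
check_stationarity(toy)
```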

```r
results1 <- check_stationarity(variables)
results2 <- check_stationarity(stat_variables)

cat("Stationary variables:\n", results1$stationary, "\n")
cat("Non-stationary variables:\n", results1$non_stationary, "\n")
```

Estimating the model with the bsvarSIGNs package

```r
# sign restrictions on impact: rows are responding variables, columns are shocks;
# NA leaves a response unrestricted
sr3 <- matrix(NA, nrow = 7, ncol = 7)

sr3[1, ] <- c( 1, NA, NA, NA, NA, NA, NA)
sr3[2, ] <- c(NA,  1, NA, NA, NA, NA, NA)
sr3[3, ] <- c(NA, NA,  1, -1, -1, NA, -1)
sr3[4, ] <- c( 1,  1,  1,  1, -1,  1, -1)
sr3[5, ] <- c(NA, NA, NA, NA,  1, NA, NA)
sr3[6, ] <- c(NA, NA, NA, NA, NA,  1, NA)
sr3[7, ] <- c(NA, NA,  1, NA, NA, NA,  1)

sign_irf <- sr3
```
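Purely as a readability aid (the labels below are mine, not anything bsvarSIGNs requires), naming the dimensions makes the restriction pattern easier to audit, since rows index the responding variables and columns index the shocks:

```r
# illustrative labelling of the restriction matrix; has no effect on estimation
dimnames(sr3) <- list(
  variable = colnames(stat_variables),
  shock    = paste0("shock_", 1:7)
)
sr3
```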

```r
prior2 <- specify_bsvarSIGN$new(
  as.matrix(stat_variables),
  p          = 2,
  sign_irf   = sign_irf,
  stationary = rep(TRUE, 7)
)
```

Estimating the hyperparameters

```r
hyper_pams <- prior2$prior$estimate_hyper(
  S       = 20000,
  burn_in = 5000,
  mu      = FALSE,
  delta   = FALSE,
  lambda  = TRUE,
  psi     = TRUE
)

hp_df <- as.data.frame(t(hyper_pams))

trace_plot <- hp_df %>%
  mutate(index = row_number()) %>%
  pivot_longer(
    cols      = -index,
    names_to  = 'variable',
    values_to = 'values'
  ) %>%
  ggplot() +
  aes(x = index, y = values, color = variable) +
  geom_line(linewidth = 0.7) +
  facet_wrap(~variable, scales = 'free_y') +
  labs(title = 'Trace plot of MCMC chain') +
  theme(legend.position = 'none')
```
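Beyond eyeballing the trace plot, coda (attached above) can summarise mixing. This sketch assumes hyper_pams stores hyperparameters in rows and draws in columns, as the transpose above suggests:

```r
# hedged convergence check: effective sample size per hyperparameter chain
hp_mcmc <- coda::mcmc(t(hyper_pams))
coda::effectiveSize(hp_mcmc)
```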

Estimating the model:

```r
posterior2 <- estimate(prior2, S = 100)
```

Computing impulse responses

```r
impulse2 <- compute_impulse_responses(posterior2, horizon = 10)
plot(impulse2)
```
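Before slicing the array below, it may be worth confirming its layout; the extraction that follows assumes a (variable, shock, horizon, draw) ordering:

```r
dim(impulse2)   # expected: 7 x 7 x 11 x 100 (variable, shock, horizon, draw)
```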

```r
# the impulse-response array is indexed as (variable, shock, horizon, draw),
# so this slice keeps the responses of the 4th variable (ipca_free_3mc)
irfs_4th_var <- impulse2[4, , , ]

# posterior median across draws, leaving a (shock x horizon) matrix
median_irf_4th_var <- apply(irfs_4th_var, c(1, 2), median)

irf_df <- as.data.frame(t(median_irf_4th_var)) %>%
  mutate(horizon = 1:11) %>%
  rename(
    `Response of inflation to commodity price shock` = V1,
    `Response of inflation to supply chain shock`    = V2,
    `Response of inflation to output gap shock`      = V3,
    `Response of inflation to inflation shock`       = V4,
    `Response of inflation to MP shock`              = V5,
    `Response of inflation to wage shock`            = V6,
    `Response of inflation to FP shock`              = V7
  ) %>%
  pivot_longer(
    cols      = -horizon,
    names_to  = 'choque',
    values_to = 'valores'
  ) %>%
  ggplot() +
  aes(x = horizon, y = valores, color = valores) +
  geom_line() +
  facet_wrap(~choque, scales = 'free_y') +
  theme(legend.position = 'none') +
  labs(
    title    = "Impulse Response Functions of Inflation to Structural Shocks",
    subtitle = "Author's own calculations",
    x        = 'Horizon',
    y        = 'Response'
  )
```

Computing the historical decomposition

```r
hd <- compute_historical_decompositions(posterior = posterior2, show_progress = TRUE)
plot(hd)
```
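One way to locate the two missing observations discussed above, assuming the array is indexed (variable, shock, time, draw) as the extraction below relies on: compare its time dimension with the sample length. Under the usual VAR convention the first p observations serve as initial conditions, so the decomposition would start p periods into the sample, but that is exactly what this issue is meant to pin down:

```r
# hedged sanity check on the time dimension of the decomposition
dim(hd)
nrow(stat_variables) - dim(hd)[3]   # 2 if the p = 2 lag periods are dropped
```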

Keeping only the contributions to inflation

```r
# contributions of all shocks to the 4th variable (ipca_free_3mc);
# the array is indexed as (variable, shock, time, draw)
contrib_shocks_4th_var <- hd[4, , , ]

# posterior median across draws, leaving a (shock x time) matrix
median_contrib_shocks_4th_var <- apply(contrib_shocks_4th_var, c(1, 2), median)

contrib_df <- as.data.frame(t(median_contrib_shocks_4th_var))
# "Shock_" (not "Shock") so the names match the manual colour scales below
colnames(contrib_df) <- paste0("Shock_", 1:ncol(contrib_df))
contrib_df$Time <- seq.Date(
  from = as.Date('2003-09-01'),
  to   = as.Date('2024-03-01'),
  by   = 'quarter'
)
```
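An illustrative alternative to the hard-coded date range above: derive the time axis from the data itself. The indexing here assumes the dropped observations come off the start of the sample, which is the open question in this thread, so flip it if they turn out to come off the end:

```r
# hypothetical alignment of HD periods to dates (assumes the first p
# observations are the ones lost as initial conditions)
p <- 2
hd_dates <- as.Date(dados$date[(p + 1):nrow(dados)])
stopifnot(length(hd_dates) == nrow(contrib_df))
contrib_df$Time <- hd_dates
```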

```r
contrib_long <- tidyr::gather(contrib_df, key = "Shock", value = "Contribution", -Time)
```

Plotting the entire series of the historical decomposition

```r
ggplot(contrib_long, aes(x = Time, y = Contribution, color = Shock)) +
  geom_line() +
  labs(
    title = "Historical Decomposition of Free Prices Inflation",
    x     = "Time Period",
    y     = "Contribution",
    color = "Shock"
  ) +
  scale_color_manual(values = c(
    "Shock_1" = "#1f77b4",  # blue
    "Shock_2" = "#ff7f0e",  # orange
    "Shock_3" = "#2ca02c",  # green
    "Shock_4" = "#d62728",  # red
    "Shock_5" = "#9467bd",  # purple
    "Shock_6" = "#8c564b",  # brown
    "Shock_7" = "#000000"   # black
  )) +
  theme_minimal()
```

Line graphs of the HD from 2019-12-01 onward

```r
hd_graph <- contrib_long %>%
  dplyr::filter(Time >= as.Date('2019-12-01')) %>%
  ggplot() +
  aes(x = Time, y = Contribution, color = Shock) +
  geom_line() +
  labs(
    title = "Historical Decomposition of Free Prices Inflation (Last 20 Periods)",
    x     = "Time Period",
    y     = "Contribution",
    color = "Shock"
  ) +
  scale_color_manual(values = c(
    "Shock_1" = "#1f77b4",  # blue
    "Shock_2" = "#ff7f0e",  # orange
    "Shock_3" = "#2ca02c",  # green
    "Shock_4" = "#d62728",  # red
    "Shock_5" = "#9467bd",  # purple
    "Shock_6" = "#8c564b",  # brown
    "Shock_7" = "#000000"   # black
  )) +
  theme_minimal()
```

Historical decomposition as bar graphs stacked on top of each other, from 2019-12-01 onward:

```r
hd_bar_graph <- contrib_long %>%
  dplyr::filter(Time >= as.Date('2019-12-01')) %>%
  ggplot() +
  aes(x = Time, y = Contribution, fill = Shock) +
  geom_col(position = "stack") +   # equivalent to geom_bar(stat = "identity")
  labs(
    title    = "Historical Decomposition of Free Prices Inflation (Last 20 Periods)",
    subtitle = "Author's own calculations",
    x        = "Time Period",
    y        = "Contribution",
    fill     = "Shock"
  ) +
  scale_fill_manual(values = c(
    "Shock_1" = "#1f77b4",  # blue
    "Shock_2" = "#ff7f0e",  # orange
    "Shock_3" = "#2ca02c",  # green
    "Shock_4" = "#d62728",  # red
    "Shock_5" = "#9467bd",  # purple
    "Shock_6" = "#8c564b",  # brown
    "Shock_7" = "#000000"   # black
  )) +
  theme_minimal()
```
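Finally, a rough sanity check relevant to this issue: summing the shock contributions at each date should approximately reconstruct the decomposed series, up to deterministic and initial-condition components that this sketch ignores (and medians of contributions need not sum exactly to a median reconstruction). A large gap against the observed ipca_free_3mc series can flag an indexing or alignment problem:

```r
# sum the median contributions across shocks and plot the reconstruction
recon <- rowSums(contrib_df[, grep('^Shock_', names(contrib_df))])
plot(contrib_df$Time, recon, type = 'l',
     xlab = 'Time', ylab = 'Sum of contributions',
     main = 'Reconstructed inflation from shock contributions')
```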

donotdespair commented 1 month ago

Hey @AlejandroOrtiz12345

I just wanted to let you know that we're still revising the C++ code for historical decompositions. It takes time, but we're actively working on this.

Greetings, Tomasz