Column hyper-parameters are all initialized to the same value and are never updated before the MCMC kernels run for inference. Once the Gibbs kernel for column hyper-parameters is applied, the hyper-parameters are sampled from an empirical grid prior (the grid is built from the data inside the individual primitives, e.g. here for the Gaussian primitive GPM). Building that grid requires access to the data. The function that initializes a CrossCat model for model building does not have access to the data, so it cannot do better during initialization (it does not know which grid to sample from). That is not ideal: it makes inference for model building harder than it needs to be.
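To make the data dependence concrete, here is a rough sketch of how an empirical hyper-grid might be built inside a Gaussian primitive. The function name and the exact grid heuristic (log-spaced between the smallest gap and the range of the data) are illustrative assumptions, not the actual CGPM code; the point is that none of these quantities can be computed without the data.

```python
import numpy as np

def empirical_hyper_grid(data, n_grid=30):
    """Illustrative sketch (not the real CGPM API): build a log-spaced
    grid of candidate hyper-parameter values from the observed data,
    roughly as a Gaussian primitive GPM might.
    """
    data = np.sort(np.asarray(data, dtype=float))
    # Smallest gap between adjacent observations bounds the grid below;
    # the overall data range bounds it above. Both need the data.
    lo = max(float(np.min(np.diff(data))), 1e-6)
    hi = max(float(np.ptp(data)), 1e-6)
    return np.geomspace(lo, hi, n_grid)

grid = empirical_hyper_grid([0.1, 0.5, 2.0, 3.5])
```

An initializer that only sees column types has no `data` argument to pass here, which is exactly the gap described above.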
There are three ways to fix this:
1. During calls to incorporate for adding rows, the code could run the hyper-parameter inference kernel for each row. That is what Python-CGPM does. On the face of it, this seems unnecessarily inefficient.
2. One could construct the initial XCat model from both types and data (rather than just types, as here). Then the grid would exist during initialization, and one could sample from it.
3. One could create initial default hyper-grids, derived from looking at n different datasets. Those would eventually be replaced by empirical hyper-grids computed from the current data. This would also help with demos of sequential inference, i.e. incorporating rows one at a time and running inference after each incorporation.
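The third option can be sketched as follows. The class and method names are hypothetical stand-ins for the real primitive interface; the idea is simply that a primitive starts with a fixed default grid and swaps in an empirical one once it has seen enough data.

```python
import numpy as np

# Illustrative default hyper-grid, as might be derived offline from
# looking at several datasets (values are an assumption, not CGPM's).
DEFAULT_GRID = np.geomspace(1e-3, 1e3, 30)

class GaussianPrimitive:
    """Hypothetical sketch of option 3: begin with a default hyper-grid
    so hyper-parameter inference works from the first MCMC sweep, then
    replace it with an empirical grid as data is incorporated."""

    def __init__(self):
        self.data = []
        self.grid = DEFAULT_GRID

    def incorporate(self, x):
        self.data.append(float(x))
        if len(self.data) >= 2:
            # Enough data: rebuild the grid empirically (log-spaced
            # between the smallest adjacent gap and the data range).
            sorted_data = np.sort(self.data)
            lo = max(float(np.min(np.diff(sorted_data))), 1e-6)
            hi = max(float(np.ptp(sorted_data)), 1e-6)
            self.grid = np.geomspace(lo, hi, len(DEFAULT_GRID))
```

With this design, row-by-row incorporation with inference after each step never sees an empty or data-free grid: the default grid covers the first observation, and the empirical grid takes over afterwards.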