Open jsilveyra opened 1 day ago
From a Mössbauer paper:
"A covariance matrix is estimated at the end of fitting, and is then used to calculate the standard deviation of the model parameters. "
The error estimates have to be shown both for the fitted parameters and for the other calculated quantities:
When you compute derived parameters based on fitted model parameters, you need to propagate the uncertainties from the original parameters to these derived parameters. Follow these steps:
Define Derived Parameters
Suppose you have a model with parameters ( \boldsymbol{\theta} = [\theta_1, \theta_2, \ldots, \theta_p] ) and you want to compute derived parameters ( \boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_q] ) which are functions of the original parameters. For example:
[ \alpha_1 = f_1(\theta_1, \theta_2, \ldots, \theta_p) ] [ \alpha_2 = f_2(\theta_1, \theta_2, \ldots, \theta_p) ]
Obtain Variance-Covariance Matrix of Original Parameters
Let ( \mathbf{Cov}(\boldsymbol{\theta}) ) be the variance-covariance matrix of the original parameters. If the original parameter uncertainties are provided as standard errors or in parentheses, convert these to variances and compute the covariance matrix.
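If only per-parameter standard errors are available, a minimal NumPy sketch of this conversion (the numbers and the zero-correlation assumption are hypothetical; off-diagonal covariances should be filled in whenever the fit reports them):

```python
import numpy as np

# Standard errors of the fitted parameters (hypothetical values).
se = np.array([0.1, 0.2])

# With no correlation information, start from a diagonal covariance matrix;
# replace the off-diagonal zeros with Cov(theta_i, theta_j) when known.
cov_theta = np.diag(se**2)
```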
For example:
[ \mathbf{Cov}(\boldsymbol{\theta}) = \begin{bmatrix} \sigma_{\theta_1}^2 & \text{Cov}(\theta_1, \theta_2) \\ \text{Cov}(\theta_1, \theta_2) & \sigma_{\theta_2}^2 \end{bmatrix} ]
Compute Jacobian Matrix
Compute the Jacobian matrix ( J ) of the derived parameters with respect to the original parameters. The Jacobian matrix ( J ) has elements ( J_{ij} = \frac{\partial \alpha_i}{\partial \theta_j} ).
For example, if ( \alpha_i = f_i(\theta_1, \theta_2) ):
[ J = \begin{bmatrix} \frac{\partial \alpha_1}{\partial \theta_1} & \frac{\partial \alpha_1}{\partial \theta_2} \\ \frac{\partial \alpha_2}{\partial \theta_1} & \frac{\partial \alpha_2}{\partial \theta_2} \end{bmatrix} ]
Propagate Uncertainty
The variance-covariance matrix of the derived parameters ( \mathbf{Cov}(\boldsymbol{\alpha}) ) can be computed using the Jacobian matrix ( J ) and the variance-covariance matrix of the original parameters ( \mathbf{Cov}(\boldsymbol{\theta}) ):
[ \mathbf{Cov}(\boldsymbol{\alpha}) = J \cdot \mathbf{Cov}(\boldsymbol{\theta}) \cdot J^T ]
Here, ( J^T ) is the transpose of the Jacobian matrix.
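These two steps can be sketched in Python/NumPy. The helper names `numerical_jacobian` and `propagate_cov` are illustrative, and the central-difference Jacobian is a stand-in for analytic derivatives when they are inconvenient to write out:

```python
import numpy as np

def numerical_jacobian(f, theta, eps=1e-6):
    """Central-difference Jacobian J_ij = d(alpha_i)/d(theta_j) of f at theta."""
    theta = np.asarray(theta, dtype=float)
    f0 = np.atleast_1d(f(theta))
    J = np.zeros((f0.size, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = eps
        J[:, j] = (np.atleast_1d(f(theta + step))
                   - np.atleast_1d(f(theta - step))) / (2 * eps)
    return J

def propagate_cov(J, cov_theta):
    """Cov(alpha) = J . Cov(theta) . J^T"""
    J = np.atleast_2d(J)
    return J @ np.asarray(cov_theta) @ J.T
```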
Compute Standard Errors and Confidence Intervals
Standard Errors: The diagonal elements of ( \mathbf{Cov}(\boldsymbol{\alpha}) ) give the variances of the derived parameters. The standard errors are the square roots of these diagonal elements.
Confidence Intervals: For a 95% confidence interval, use ±1.96 standard errors (approximately ±2) if the uncertainties are normally distributed.
Assume you have two original parameters ( \theta_1 ) and ( \theta_2 ) with a variance-covariance matrix:
[ \mathbf{Cov}(\boldsymbol{\theta}) = \begin{bmatrix} 0.01 & 0.002 \\ 0.002 & 0.04 \end{bmatrix} ]
And you have a derived parameter ( \alpha = \theta_1 + 2\theta_2 ).
Compute the Jacobian Matrix:
For ( \alpha = \theta_1 + 2\theta_2 ):
[ J = \begin{bmatrix} 1 & 2 \end{bmatrix} ]
Propagate the Uncertainty:
[ \mathbf{Cov}(\alpha) = J \cdot \mathbf{Cov}(\boldsymbol{\theta}) \cdot J^T ]
[ \mathbf{Cov}(\alpha) = \begin{bmatrix} 1 & 2 \end{bmatrix} \cdot \begin{bmatrix} 0.01 & 0.002 \\ 0.002 & 0.04 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \end{bmatrix} ]
[ \mathbf{Cov}(\alpha) = \begin{bmatrix} 0.01 + 0.004 & 0.002 + 0.08 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \end{bmatrix} ]
[ \mathbf{Cov}(\alpha) = 0.014 + 0.164 = 0.178 ]
Standard Error of ( \alpha ):
[ \text{SE}(\alpha) = \sqrt{0.178} \approx 0.422 ]
Confidence Intervals:
For a 95% confidence interval:
[ \text{CI}_{\alpha} = \alpha \pm 1.96 \times \text{SE}(\alpha) ]
[ \text{CI}_{\alpha} = \alpha \pm 1.96 \times 0.422 ]
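The arithmetic of this worked example can be checked numerically, for instance with NumPy:

```python
import numpy as np

# Covariance matrix and Jacobian from the worked example above.
cov_theta = np.array([[0.01, 0.002],
                      [0.002, 0.04]])
J = np.array([[1.0, 2.0]])  # alpha = theta_1 + 2*theta_2

var_alpha = (J @ cov_theta @ J.T)[0, 0]   # 0.178
se_alpha = float(np.sqrt(var_alpha))      # ~0.422
ci_half_width = 1.96 * se_alpha           # ~0.827
```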
I'm not entirely sure how to report this, so I don't know whether it should go in the exported parameters file or whether we can report it during the fit. I'm copy-pasting from ChatGPT:
Summary
The number in parentheses provides the uncertainty or standard error of the parameter estimate, giving a sense of how precise the estimate is. This notation is particularly useful in scientific and technical fields where reporting the precision and reliability of measurements is crucial.
Practical Example
Suppose you have fitted a Mössbauer spectrum and obtained a fitted parameter of 2.4634 with an associated uncertainty (standard error) of 0.002. You can then infer that the true value of the parameter is expected to lie, at roughly the 1-sigma level, within the interval 2.4634 ± 0.002.
In MATLAB, the fitlm function from the Statistics and Machine Learning Toolbox provides an easy way to fit a linear regression model and compute the variance-covariance matrix of the parameter estimates. However, if you are looking for an approach that does not require any additional toolbox (e.g., without the Statistics and Machine Learning Toolbox), you need to compute the variance-covariance matrix manually. Here is how to do that using basic MATLAB functions:
Example Using Basic MATLAB Functions
Suppose you have the following data:
Fit a Linear Model:
You can use the basic matrix operations to fit a linear model.
Compute Residuals:
Compute Variance of Residuals:
Compute Variance-Covariance Matrix:
Extract Standard Errors and Confidence Intervals:
Explanation
X \ y: computes the least squares solution to the linear system.
sigma_squared: the variance of the residuals, used to scale the covariance matrix.
cov_matrix: the variance-covariance matrix of the parameter estimates.
Example Code
Here’s the complete example in MATLAB code:
This code provides a manual way to calculate the variance-covariance matrix, standard errors, and confidence intervals for a linear regression model without requiring additional toolboxes.
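The MATLAB listing itself did not survive the paste. As a stand-in, here is a sketch of the same steps in Python/NumPy (the data values are hypothetical), where np.linalg.lstsq plays the role of MATLAB's X \ y:

```python
import numpy as np

# Hypothetical data; in MATLAB this would be the design matrix X and vector y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
X = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]

# Fit a linear model (equivalent of beta = X \ y in MATLAB).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residuals and their variance, with n - p degrees of freedom.
residuals = y - X @ beta
n, p = X.shape
sigma_squared = residuals @ residuals / (n - p)

# Variance-covariance matrix of the estimates: sigma^2 * inv(X'X).
cov_matrix = sigma_squared * np.linalg.inv(X.T @ X)

# Standard errors and approximate 95% confidence intervals.
se = np.sqrt(np.diag(cov_matrix))
ci = np.column_stack([beta - 1.96 * se, beta + 1.96 * se])
```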