matias-gonz / mag-analyst

Matlab toolbox to model the magnetization of soft magnetic materials.

Add uncertainty estimates for calculated errors #183

Open jsilveyra opened 1 day ago

jsilveyra commented 1 day ago

I'm not sure how best to report this, so I don't know whether it should go in the exported parameters file or whether we can report it during the fit. I'm pasting ChatGPT's output below.

Summary

The digits in parentheses give the uncertainty (standard error) of the parameter estimate, expressed in units of the last quoted digit(s) of the value. This notation is particularly useful in scientific and technical fields where reporting the precision and reliability of measurements is crucial.

Practical Example

Suppose you have fitted a Mössbauer spectrum and obtained a parameter value:

2.4634(2)

This means the fitted parameter is 2.4634 and the associated uncertainty (standard error) is 0.0002, i.e. 2 in the last quoted digit. Thus, the true value of the parameter is expected to lie within the interval 2.4634 ± 0.0002.
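As a side note on reporting: a small helper can produce this compact notation from a value and its standard error. Here is a Python sketch for illustration (the helper name and the one-significant-digit rounding convention are assumptions, not part of mag-analyst):

```python
import math

def format_with_uncertainty(value, std_err, sig_digits=1):
    """Format value as e.g. 2.4634(2): the parenthesized digits are the
    standard error expressed in units of the last quoted decimal place.
    Illustrative helper only, not part of mag-analyst."""
    if std_err <= 0:
        return str(value)
    # Decimal position of the leading digit of the uncertainty
    exponent = math.floor(math.log10(std_err))
    decimals = max(0, sig_digits - 1 - exponent)
    scaled_err = round(std_err * 10 ** decimals)
    return f"{value:.{decimals}f}({scaled_err})"

print(format_with_uncertainty(2.4634, 0.0002))  # -> 2.4634(2)
```

A fitted slope of 1.3 with standard error 0.8 would print as `1.3(8)`.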

In MATLAB, the fitlm function from the Statistics and Machine Learning Toolbox provides an easy way to fit a linear regression model and obtain the variance-covariance matrix of the parameter estimates. If you want to avoid that toolbox, however, you can compute the variance-covariance matrix manually with basic matrix operations.

Here’s how you can compute the variance-covariance matrix manually using basic MATLAB functions:

Example Using Basic MATLAB Functions

Suppose you have the following data:

x = [1; 2; 3; 4; 5];
y = [2; 3; 5; 4; 6];
  1. Fit a Linear Model:

    You can use the basic matrix operations to fit a linear model.

    X = [ones(size(x)) x]; % Design matrix with intercept and predictor
    b = X \ y; % OLS estimates
  2. Compute Residuals:

    y_hat = X * b; % Predicted values
    residuals = y - y_hat; % Residuals
  3. Compute Variance of Residuals:

    n = length(y); % Number of observations
    k = size(X, 2); % Number of parameters
    sigma_squared = (residuals' * residuals) / (n - k); % Variance of residuals
  4. Compute Variance-Covariance Matrix:

    % Variance-Covariance Matrix
    X_transpose_X_inv = inv(X' * X); % Inverse of X'X ((X' * X) \ eye(k) is numerically preferable for ill-conditioned problems)
    cov_matrix = sigma_squared * X_transpose_X_inv;
  5. Extract Standard Errors and Confidence Intervals:

    % Standard Errors
    se = sqrt(diag(cov_matrix)); % Square root of diagonal elements
    
    % Confidence Intervals (note: tinv itself belongs to the Statistics
    % and Machine Learning Toolbox; for large n - k, 1.96 approximates
    % the 95% critical value)
    t_value = tinv(0.975, n - k); % t-value for 95% CI
    ci = [b - t_value * se, b + t_value * se]; % Confidence Intervals

Example Code

Here’s the complete example in MATLAB code:

% Example data
x = [1; 2; 3; 4; 5];
y = [2; 3; 5; 4; 6];

% Fit model
X = [ones(size(x)) x];
b = X \ y;

% Compute residuals
y_hat = X * b;
residuals = y - y_hat;

% Compute variance of residuals
n = length(y);
k = size(X, 2);
sigma_squared = (residuals' * residuals) / (n - k);

% Compute variance-covariance matrix
X_transpose_X_inv = inv(X' * X);
cov_matrix = sigma_squared * X_transpose_X_inv;

% Extract standard errors
se = sqrt(diag(cov_matrix));

% Compute 95% Confidence Intervals
t_value = tinv(0.975, n - k);
ci = [b - t_value * se, b + t_value * se];

% Display results
disp('Parameter Estimates:');
disp(b);
disp('Variance-Covariance Matrix:');
disp(cov_matrix);
disp('Standard Errors:');
disp(se);
disp('95% Confidence Intervals:');
disp(ci);

This code provides a manual way to calculate the variance-covariance matrix, standard errors, and confidence intervals for a linear regression model. One caveat: tinv is itself part of the Statistics and Machine Learning Toolbox, so for a fully toolbox-free version replace the t critical value with the normal approximation (1.96 for a 95% interval), which is accurate when n - k is reasonably large.
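The numbers produced by the MATLAB script above can be cross-checked independently; here is the same computation as a Python/NumPy sketch (purely a verification aid, not a proposed implementation for the toolbox):

```python
import numpy as np
from scipy import stats

# Same example data as the MATLAB script
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 3.0, 5.0, 4.0, 6.0])

X = np.column_stack([np.ones_like(x), x])   # design matrix
b, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates

residuals = y - X @ b
n, k = X.shape
sigma_squared = residuals @ residuals / (n - k)   # residual variance

cov_matrix = sigma_squared * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov_matrix))           # standard errors

t_value = stats.t.ppf(0.975, n - k)         # 95% critical value
ci = np.column_stack([b - t_value * se, b + t_value * se])

print(b)   # intercept 1.3, slope 0.9
print(se)
```

For this data the estimates are b = [1.3, 0.9] with standard errors of roughly 0.835 and 0.252.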

jsilveyra commented 1 day ago

From a Mössbauer paper:

"A covariance matrix is estimated at the end of fitting, and is then used to calculate the standard deviation of the model parameters. "

jsilveyra commented 1 day ago

The error estimates have to be shown both for the fitted parameters and for the other calculated quantities:

Computing Error Estimates for Derived Parameters

When you compute derived parameters based on fitted model parameters, you need to propagate the uncertainties from the original parameters to these derived parameters. Follow these steps:

Steps to Compute Error Estimates

  1. Define Derived Parameters

    Suppose you have a model with parameters ( \boldsymbol{\theta} = [\theta_1, \theta_2, \ldots, \theta_p] ) and you want to compute derived parameters ( \boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_q] ) which are functions of the original parameters. For example:

    [ \alpha_1 = f_1(\theta_1, \theta_2, \ldots, \theta_p) ] [ \alpha_2 = f_2(\theta_1, \theta_2, \ldots, \theta_p) ]

  2. Obtain Variance-Covariance Matrix of Original Parameters

    Let ( \mathbf{Cov}(\boldsymbol{\theta}) ) be the variance-covariance matrix of the original parameters. If only standard errors (or parenthesized digits) are reported, square them to obtain the diagonal variances; note that the off-diagonal covariances cannot be recovered from standard errors alone, so setting them to zero ignores parameter correlations.

    For example:

    [ \mathbf{Cov}(\boldsymbol{\theta}) = \begin{bmatrix} \sigma_{\theta_1}^2 & \text{Cov}(\theta_1, \theta_2) \\ \text{Cov}(\theta_1, \theta_2) & \sigma_{\theta_2}^2 \end{bmatrix} ]

  3. Compute Jacobian Matrix

    Compute the Jacobian matrix ( J ) of the derived parameters with respect to the original parameters. The Jacobian matrix ( J ) has elements ( J_{ij} = \frac{\partial \alpha_i}{\partial \theta_j} ).

    For example, if ( \alpha_i = f_i(\theta_1, \theta_2) ):

    [ J = \begin{bmatrix} \frac{\partial \alpha_1}{\partial \theta_1} & \frac{\partial \alpha_1}{\partial \theta_2} \\ \frac{\partial \alpha_2}{\partial \theta_1} & \frac{\partial \alpha_2}{\partial \theta_2} \end{bmatrix} ]

  4. Propagate Uncertainty

    The variance-covariance matrix of the derived parameters ( \mathbf{Cov}(\boldsymbol{\alpha}) ) can be computed using the Jacobian matrix ( J ) and the variance-covariance matrix of the original parameters ( \mathbf{Cov}(\boldsymbol{\theta}) ):

    [ \mathbf{Cov}(\boldsymbol{\alpha}) = J \cdot \mathbf{Cov}(\boldsymbol{\theta}) \cdot J^T ]

    Here, ( J^T ) is the transpose of the Jacobian matrix.

  5. Compute Standard Errors and Confidence Intervals

    • Standard Errors: The diagonal elements of ( \mathbf{Cov}(\boldsymbol{\alpha}) ) give the variances of the derived parameters. The standard errors are the square roots of these diagonal elements.

    • Confidence Intervals: For a 95% confidence interval, you can use approximately ±2 standard deviations if the uncertainties are normally distributed.

Practical Example

Assume you have two original parameters ( \theta_1 ) and ( \theta_2 ) with a variance-covariance matrix:

[ \mathbf{Cov}(\boldsymbol{\theta}) = \begin{bmatrix} 0.01 & 0.002 \\ 0.002 & 0.04 \end{bmatrix} ]

And you have a derived parameter ( \alpha = \theta_1 + 2\theta_2 ).

  1. Compute the Jacobian Matrix:

    For ( \alpha = \theta_1 + 2\theta_2 ):

    [ J = \begin{bmatrix} 1 & 2 \end{bmatrix} ]

  2. Propagate the Uncertainty:

    [ \mathbf{Cov}(\alpha) = J \cdot \mathbf{Cov}(\boldsymbol{\theta}) \cdot J^T ]

    [ \mathbf{Cov}(\alpha) = \begin{bmatrix} 1 & 2 \end{bmatrix} \cdot \begin{bmatrix} 0.01 & 0.002 \\ 0.002 & 0.04 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \end{bmatrix} ]

    [ \mathbf{Cov}(\alpha) = \begin{bmatrix} 0.01 + 0.004 & 0.002 + 0.08 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \end{bmatrix} ]

    [ \mathbf{Cov}(\alpha) = 0.014 + 0.164 = 0.178 ]

  3. Standard Error of ( \alpha ):

    [ \text{SE}(\alpha) = \sqrt{0.178} \approx 0.422 ]

  4. Confidence Intervals:

    For a 95% confidence interval:

    [ \text{CI}_{\alpha} = \alpha \pm 1.96 \times \text{SE}(\alpha) ]

    [ \text{CI}_{\alpha} = \alpha \pm 1.96 \times 0.422 ]
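The worked example above can be reproduced in a few lines; here is a Python sketch of the propagation step (again just an illustration, the toolbox itself is MATLAB):

```python
import numpy as np

# Variance-covariance matrix of theta_1, theta_2 from the example
cov_theta = np.array([[0.01, 0.002],
                      [0.002, 0.04]])

# Derived parameter alpha = theta_1 + 2 * theta_2
J = np.array([[1.0, 2.0]])            # Jacobian d(alpha)/d(theta)

cov_alpha = J @ cov_theta @ J.T       # propagated variance, J Cov J^T
se_alpha = float(np.sqrt(cov_alpha[0, 0]))

print(cov_alpha[0, 0])  # 0.178 (up to floating-point rounding)
print(se_alpha)         # about 0.422
```

For several derived parameters, J simply gains one row per derived quantity and the same line yields their full variance-covariance matrix, correlations included.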