stephanieherman opened 1 year ago
Thanks for the detailed report and analysis! This package is old and I haven't worked on it for ages. I had a look and agree with your suggestion, although I think there is a misunderstanding of what `dat` was meant to be: the whole original training data. Passing a subset was never intended to be supported (for no good reason; it just happened like that).

It took me a while to get started again, and I found a number of issues with the package, but for this particular problem I created a draft solution along the lines of your suggestion in #23. What do you think?
Hello!
I ran into some strange behavior of the `DModX()` function when projecting new data onto my PCA model. I've pasted some code to reproduce this below. First, I train a PCA model on the majority of the observations in the `mtcars` dataset.

When evaluating the DModX values of new observations, I noticed that I get different results depending on how many new observations I evaluate:
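A minimal sketch of the setup and the comparison (the train/hold-out split, `nPcs`, and scaling settings here are illustrative assumptions, not the exact original code):

```r
library(pcaMethods)

## Illustrative split: train on most of mtcars, hold out the rest.
train <- mtcars[1:28, ]
new   <- mtcars[29:32, ]
pc    <- pca(train, nPcs = 2, scale = "uv", center = TRUE)

## The normalized DModX reported for the same observation changes
## with the number of new observations passed in:
DModX(pc, new[1:2, ], newdata = TRUE)
DModX(pc, new,        newdata = TRUE)
```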
Moreover, you get the exact same normalized DModX value for any single new observation, no matter which observation it is:
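For instance, under the same illustrative setup (assumed split and settings):

```r
library(pcaMethods)

## Illustrative setup (assumed split and PCA settings).
pc  <- pca(mtcars[1:28, ], nPcs = 2, scale = "uv", center = TRUE)
new <- mtcars[29:32, ]

## Each single new observation, evaluated on its own, returns the
## same normalized DModX value:
DModX(pc, new[1, , drop = FALSE], newdata = TRUE)
DModX(pc, new[2, , drop = FALSE], newdata = TRUE)
```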
Finally, it only seems to occur for normalized DModX values (`type = "normalized"`), which is the default setting. The example below shows that the same DModX values are obtained when using absolute DModX values (`type = "absolute"`).
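Under the same illustrative setup, the absolute values are stable:

```r
library(pcaMethods)

## Illustrative setup (assumed split and PCA settings).
pc  <- pca(mtcars[1:28, ], nPcs = 2, scale = "uv", center = TRUE)
new <- mtcars[29:32, ]

## With type = "absolute" the value for a given observation is the
## same regardless of how many new observations are evaluated:
DModX(pc, new,                    newdata = TRUE, type = "absolute")
DModX(pc, new[1, , drop = FALSE], newdata = TRUE, type = "absolute")
```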
The documentation states that "Normalized values are adjusted to the total RSD of the model". Hence, since I am providing the same PCA model each time, the DModX value of a single new observation should not vary depending on which set of new observations is projected into the model space.
When looking at the function code (pasted below), I have a guess as to where this issue might arise. In the calculation of the normalization factor `s0`, it looks like the squared residuals of the new observations are summed (`sum(E2)`). Should this not be the sum of the squared residuals of the training data (i.e., `sum(resid(object, object@completeObs)^2)`)?
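The suggested change could be sketched as a standalone helper rather than a patch. `dModX_fixed` is a hypothetical name, not part of pcaMethods, and the degrees-of-freedom factors below follow the usual SIMCA convention, which may differ from the package's exact formula:

```r
library(pcaMethods)

## Hypothetical sketch of the suggested fix: normalize by the residual
## variation of the *training* data, not of the new data.
dModX_fixed <- function(object, newdata) {
  ## Per-row residual SD of the new observations (absolute DModX).
  Enew <- newdata - fitted(object, newdata)
  sNew <- sqrt(rowSums(Enew^2) / (ncol(newdata) - object@nPcs))

  ## s0 from the training residuals stored in the fitted model, so it
  ## does not depend on which new observations are supplied.
  E2train <- resid(object, object@completeObs)^2
  s0 <- sqrt(sum(E2train) /
             ((nrow(object@completeObs) - object@nPcs - 1) *
              (ncol(object@completeObs) - object@nPcs)))

  sNew / s0
}
```

With this definition, `s0` is a fixed property of the fitted model, so the normalized DModX of a single new observation no longer changes with the size of the projected set.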