Closed: vkhodygo closed this issue 4 years ago
If w is the variance vector, we might actually need to do something like this:
W = np.sqrt(np.diag(w))
Aw = np.dot(W, A)
Bw = np.dot(W, B)
X = np.linalg.lstsq(Aw, Bw)
in place of anywhere that currently calls scipy.linalg.lstsq.
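For concreteness, here is a self-contained sketch of that weighting scheme, assuming `w` holds per-point weights (if `w` were variances, one would scale by `1.0 / np.sqrt(w)` instead). The data and design matrix below are made up for illustration and are not part of the pwlf API:

```python
import numpy as np

# Synthetic straight-line data, purely illustrative.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
A = np.column_stack([x, np.ones_like(x)])          # design matrix for y = m*x + c
B = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)   # noisy observations
w = np.linspace(1.0, 3.0, x.size)                  # per-point weights (assumed)

W = np.sqrt(np.diag(w))          # sqrt so that W.T @ W == diag(w)
Aw = np.dot(W, A)                # scale each row of the design matrix
Bw = np.dot(W, B)                # scale each observation
X, *_ = np.linalg.lstsq(Aw, Bw, rcond=None)

# Cross-check against the weighted normal equations (A.T diag(w) A) X = A.T diag(w) B
X_ref = np.linalg.solve(A.T @ np.diag(w) @ A, A.T @ np.diag(w) @ B)
assert np.allclose(X, X_ref)
```

The cross-check confirms that scaling both sides by `W` and calling an ordinary `lstsq` reproduces the weighted least-squares solution.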
@cjekel
If w is the variance vector, we might actually need to do something like this:
W = np.sqrt(np.diag(w))
Aw = np.dot(W, A)
Bw = np.dot(W, B)
X = np.linalg.lstsq(Aw, Bw)
in place of anywhere that currently calls scipy.linalg.lstsq.
Indeed, but your matrix A is already built from the initial vector x, so it's just a matter of preference here, I think. That actually made me realize that a simple scaling of the matrices (or input vectors) should work for a given number of breaks; however, what happens when you provide not just the number of breakpoints but their positions?
I think the issue with adding the weights to x_data beforehand is that the assembly of A needs to depend upon the original x_data: I need to check which break zone each original data point falls in. Modifying x_data beforehand would affect the slopes and breakpoint locations.
I think we can add a keyword called weights, which can be either a float or a numpy array of the same length as y, where weights[i] corresponds to the weight for the (x[i], y[i]) data point.
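One way such a keyword could be validated is to broadcast a scalar to a full array and reject anything with the wrong length. This is only a sketch of the idea; the helper name `normalize_weights` is mine, not part of pwlf:

```python
import numpy as np

def normalize_weights(weights, n):
    """Hypothetical helper: accept a float or a length-n array of weights
    and always return a length-n float array."""
    weights = np.asarray(weights, dtype=float)
    if weights.ndim == 0:                 # a single float applies everywhere
        weights = np.full(n, float(weights))
    if weights.shape != (n,):
        raise ValueError(f"weights must be a float or an array of length {n}")
    return weights

print(normalize_weights(2.0, 4))        # broadcasts the scalar
print(normalize_weights([1, 2, 3], 3))  # passes an array through
```

With this shape, downstream code can always assume a per-point weight array regardless of what the caller passed.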
I think the issue with adding the weights to x_data beforehand is that the assembly of A needs to depend upon the original x_data.
I didn't think about that.
I think we can add a keyword called weights, which can be either a float or a numpy array of the same length as y, where weights[i] corresponds to the weight for the (x[i], y[i]) data point.
Does it change the result when you have identical weights everywhere? It looks like a simple scaling of the original minimization problem.
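The uniform-weight case can be checked directly: multiplying both A and B by the same constant leaves the least-squares solution unchanged, so identical weights everywhere reduce to the original problem. A quick check with made-up data:

```python
import numpy as np

# Random overdetermined system, purely illustrative.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 3))
B = rng.normal(size=30)

X_plain, *_ = np.linalg.lstsq(A, B, rcond=None)

c = 5.0                                                   # identical weight everywhere
X_scaled, *_ = np.linalg.lstsq(c * A, c * B, rcond=None)

# Uniform scaling does not change the minimizer.
assert np.allclose(X_plain, X_scaled)
```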
Hi, Charles
It's been some time since I last used your package. Now I need it again, and I realized that it has no feature for approximating data with y errors. My stats are quite rusty, but a simple scaling of the initial data using standard deviations should do the trick, I hope.
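As a sketch of that scaling idea (with synthetic data, assuming per-point standard deviations `sigma` are known): dividing each row of the design matrix and each observation by its sigma turns a plain least-squares call into an inverse-variance weighted fit.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
sigma = np.linspace(0.05, 0.5, x.size)           # per-point y standard deviations
y = 3.0 * x - 1.0 + rng.normal(0.0, sigma)       # heteroscedastic noise

A = np.column_stack([x, np.ones_like(x)])        # design matrix for y = m*x + c
# Scale each row of A and each y by 1/sigma_i, then use ordinary lstsq.
X, *_ = np.linalg.lstsq(A / sigma[:, None], y / sigma, rcond=None)
print(X)                                         # estimated slope and intercept
```

The points with small sigma then dominate the fit, which is exactly the inverse-variance weighting one would want for data with y errors.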