sandialabs / pyttb

Python Tensor Toolbox
https://pyttb.readthedocs.io
BSD 2-Clause "Simplified" License

230 tutorial gcp opt #253

Closed jeremy-myers closed 1 year ago

jeremy-myers commented 1 year ago

:books: Documentation preview :books:: https://pyttb--253.org.readthedocs.build/en/253/

dmdunla commented 1 year ago

This is still breaking: the docs build fails when the LBFGS line search fails:

Bad direction in the line search;
   refresh the lbfgs memory and restart the iteration.

I made the problem smaller and that did not help:

shape = (5, 6, 7)

I initialized with the solution, M_true, and that did not help:

# Compute rank-3 GCP approximation to X with GCP-OPT
result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(
    data=X, rank=rank, objective=objective, optimizer=optimizer, init=M_true
)

I created an initial guess of all ones and that did help on the smaller problem:


shape = (5, 6, 7)
...
M_true = ttb.ktensor(U).normalize()
M_init = ttb.ktensor.from_function(np.ones, shape, rank)
...
# Compute rank-3 GCP approximation to X with GCP-OPT
result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(
    data=X, rank=rank, objective=objective, optimizer=optimizer, init=M_init
)
...
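For context, a minimal self-contained sketch of the all-ones initialization that worked on the smaller problem is below. The data generation, the import paths (pyttb.gcp.handles.Objectives, pyttb.gcp.optimizers.LBFGSB), and the choice of the Rayleigh loss with default optimizer settings are assumptions filling in the elided ("...") parts of the snippet above, modeled on the tutorial; they are not taken verbatim from this PR.

import numpy as np
import pyttb as ttb

# NOTE: these import paths are assumptions about pyttb's gcp module layout
from pyttb.gcp.handles import Objectives
from pyttb.gcp.optimizers import LBFGSB

# Smaller problem size from the thread
shape = (5, 6, 7)
rank = 3

# Assumed data generation: random rank-3 factors, normalized as above;
# the tutorial's actual construction of X is elided in the snippet
rng = np.random.default_rng(0)
U = [rng.uniform(size=(dim, rank)) for dim in shape]
M_true = ttb.ktensor(U).normalize()
X = M_true.full()

# All-ones initial guess, which avoided the line-search failure here
M_init = ttb.ktensor.from_function(np.ones, shape, rank)

# Compute rank-3 GCP approximation to X with GCP-OPT
# (Rayleigh loss and default LBFGSB settings are assumed stand-ins)
result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(
    data=X,
    rank=rank,
    objective=Objectives.RAYLEIGH,
    optimizer=LBFGSB(),
    init=M_init,
)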
dmdunla commented 1 year ago

Why was the matplotlib code commented out? Was there a problem?

jeremy-myers commented 1 year ago

> Why was the matplotlib code commented out? Was there a problem?

Yes, some of the regression tests were failing.

jeremy-myers commented 1 year ago

> I created an initial guess of all ones and that did help on the smaller problem: ...

I'll incorporate this and commit again.

dmdunla commented 1 year ago

@jeremy-myers Please add an Issue for the Rayleigh loss failing with LBFGS when using a random start or starting with M_true.
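A minimal reproducer for that issue might look like the sketch below. As in the sketch above, the import paths, the Objectives.RAYLEIGH name, and the data construction are assumptions rather than code from this PR; the failing configuration (Rayleigh loss, LBFGS, starting from M_true or a random guess) is what the thread reports.

import numpy as np
import pyttb as ttb

# NOTE: assumed import paths, as in the earlier sketch
from pyttb.gcp.handles import Objectives
from pyttb.gcp.optimizers import LBFGSB

shape = (5, 6, 7)
rank = 3

# Assumed stand-in data: dense tensor built from a random rank-3 ktensor
rng = np.random.default_rng(0)
M_true = ttb.ktensor([rng.uniform(size=(dim, rank)) for dim in shape]).normalize()
X = M_true.full()

# Rayleigh loss with LBFGS, starting from M_true; a random start reportedly
# fails the same way ("Bad direction in the line search")
result, initial_guess, info = ttb.gcp_opt(
    data=X, rank=rank, objective=Objectives.RAYLEIGH, optimizer=LBFGSB(), init=M_true
)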