JeanKossaifi / tensorly-notebooks

Tensor methods in Python with TensorLy

Sparse example notebooks #6

Closed asmeurer closed 4 years ago

asmeurer commented 5 years ago

This is based on @jcrist's notebook from https://gist.github.com/jcrist/f7f0682ed01f12e96f9a40d8862b2477. I have added an example using parafac to the end of the notebook. I didn't know of any real-life example of a sparse tensor that can be factored easily (the nips tensor that Jim used in the notebook seems to have a very large rank), so I constructed one using a random sparse factorization. If you know of a better example, let me know.
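For illustration, here is roughly what that construction looks like (a minimal sketch, not the notebook's exact code; it assumes the pydata/sparse package and the sparse-wrapped parafac from tensorly.contrib.sparse, with placeholder shapes, rank, and densities):

```python
import sparse
from tensorly.contrib.sparse.decomposition import parafac

shape = (1000, 1001, 1002)
rank = 5

# One random sparse factor matrix per mode; the tensor built from them has
# CP rank at most `rank`, so it should factor easily.
factors = [sparse.random((dim, rank), density=0.01) for dim in shape]

# Sum of rank-1 outer products of the factor columns, kept sparse via
# broadcasting of the reshaped COO vectors.
tensor = None
for r in range(rank):
    component = (factors[0][:, r].reshape((shape[0], 1, 1))
                 * factors[1][:, r].reshape((1, shape[1], 1))
                 * factors[2][:, r].reshape((1, 1, shape[2])))
    tensor = component if tensor is None else tensor + component

# Sparse-wrapped CP decomposition.
cp = parafac(tensor, rank=rank, init='random')
```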

This requires https://github.com/tensorly/tensorly/pull/85.

JeanKossaifi commented 4 years ago

@asmeurer Now that sparse support is all merged in TensorLy, I realised I completely forgot about the notebooks, sorry!

I just had a look, starting with sparse_missing_values.ipynb. The notebooks are great.

In sparse_missing_values.ipynb there are a few issues caused by recent changes in TensorLy.

However, I noticed that (at least on my machine) the example seems to use a lot of memory, even though I significantly reduced the size of the tensor. Using memit seems to indicate that the memory usage increases at line 215 of candecomp_parafac.py, which is just the loop header for mode in range(tl.ndim(tensor)):, so that doesn't make much sense. What do you think?
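In case it helps reproduce, here is roughly how the per-line usage could be checked with memory_profiler's magics (a sketch only; it assumes tensor and rank are defined as in the notebook, and that %memit alone only reports the overall peak):

```python
%load_ext memory_profiler
from tensorly.decomposition import parafac

# Line-by-line memory report for parafac while it runs, to see where the
# increase actually happens rather than relying on the overall %memit peak.
%mprun -f parafac parafac(tensor, rank=rank, init='random')
```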

asmeurer commented 4 years ago

Hi.

It may be a while before I get time to work on this so feel free to push up the required changes yourself.

Regarding memory usage: if an example here is using a lot of memory, that might be a sign that the algorithm is breaking sparsity somewhere, so it should probably be investigated. The examples here all use array shapes that would use a ton of memory if things were densified. memit is most likely reporting things incorrectly, unless somehow ndim is densifying.

To be clear, just how much memory is being used? Does it appear to be densifying the full (1000, 1000, 1000) tensor, or is it smaller?
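For scale, a rough back-of-the-envelope check (assuming float64 entries and that the notebook's tensor variable is named tensor): densifying the full tensor would already need about 8 GB, so memory use anywhere near that would point to densification somewhere.

```python
dense_bytes = 1000 * 1000 * 1000 * 8   # ~8e9 bytes, i.e. roughly 8 GB if densified
sparse_bytes = tensor.nbytes           # what the COO representation actually stores
print(dense_bytes / 1e9, sparse_bytes / 1e9)
```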

JeanKossaifi commented 4 years ago

@asmeurer I fixed the issues due to changes in TensorLy and merged 4 of the notebooks (I didn't realise it would automatically close the PR when I created a new branch off yours).

There still remains an issue in the robust_pca one (somehow clip seems to be using the numpy version instead of dispatching to the sparse version), and there is an issue when calling parafac (the regular one, not the sparse-wrapped version) on a sparse tensor, due to it implicitly taking a product between a sparse tensor and a dense tensor; not sure why it worked before.

Will try to find out what is going on there.

Thanks again for the great notebooks @asmeurer, @scopatz, @jcrist and @hameerabbasi !

JeanKossaifi commented 4 years ago

For the record, here's the error when running the (dense) parafac on a sparse tensor:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-8-a2ccc3ec0a30> in <module>
----> 1 get_ipython().run_cell_magic('memit', '', "start_time = time.time()\nfactors = parafac(tensor, rank=rank, init='random', verbose=True)\nend_time = time.time()\ntotal_time = end_time - start_time\nprint('Took %d mins %d secs' % (divmod(total_time, 60)))\n")

~/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py in run_cell_magic(self, magic_name, line, cell)
   2360             with self.builtin_trap:
   2361                 args = (magic_arg_s, cell)
-> 2362                 result = fn(*args, **kwargs)
   2363             return result
   2364 

<decorator-gen-127> in memit(self, line, cell)

~/anaconda3/lib/python3.7/site-packages/IPython/core/magic.py in <lambda>(f, *a, **k)
    185     # but it's overkill for just that one bit of state.
    186     def magic_deco(arg):
--> 187         call = lambda f, *a, **k: f(*a, **k)
    188 
    189         if callable(arg):

~/anaconda3/lib/python3.7/site-packages/memory_profiler.py in memit(self, line, cell)
   1013                                timeout=timeout, interval=interval,
   1014                                max_usage=True,
-> 1015                                include_children=include_children)
   1016             mem_usage.append(tmp[0])
   1017 

~/anaconda3/lib/python3.7/site-packages/memory_profiler.py in memory_usage(proc, interval, timeout, timestamps, include_children, multiprocess, max_usage, retval, stream, backend)
    334             # Therefore, the whole process hangs indefinitely. Here, we are ensuring that the process gets killed!
    335             try:
--> 336                 returned = f(*args, **kw)
    337                 parent_conn.send(0)  # finish timing
    338                 ret = parent_conn.recv()

~/anaconda3/lib/python3.7/site-packages/memory_profiler.py in _func_exec(stmt, ns)
    786     # helper for magic_memit, just a function proxy for the exec
    787     # statement
--> 788     exec(stmt, ns)
    789 
    790 

<string> in <module>

~/git_repos/tensorly/tensorly/decomposition/candecomp_parafac.py in parafac(tensor, rank, n_iter_max, init, svd, normalize_factors, orthogonalise, tol, random_state, verbose, return_errors, non_negative, sparsity, l2_reg, mask, cvg_criterion, fixed_modes, linesearch)
    308             pseudo_inverse += Id
    309 
--> 310             mttkrp = unfolding_dot_khatri_rao(tensor, (None, factors), mode)
    311             factor = tl.transpose(tl.solve(tl.conj(tl.transpose(pseudo_inverse)),
    312                                     tl.transpose(mttkrp)))

~/git_repos/tensorly/tensorly/kruskal_tensor.py in unfolding_dot_khatri_rao(tensor, kruskal_tensor, mode)
    377     weights, factors = kruskal_tensor
    378     for r in range(rank):
--> 379         component = multi_mode_dot(tensor, [f[:, r] for f in factors], skip=mode)
    380         mttkrp_parts.append(component)
    381 

~/git_repos/tensorly/tensorly/tenalg/__init__.py in dynamically_dispatched_fun(*args, **kwargs)
     76         current_backend = _BACKENDS[_LOCAL_STATE.tenalg_backend]
     77         if hasattr(current_backend, name):
---> 78             fun = getattr(current_backend, name)(*args, **kwargs)
     79         else:
     80             warnings.warn(f'tenalg: defaulting to core tenalg backend, {name}'

~/git_repos/tensorly/tensorly/tenalg/core_tenalg/n_mode_product.py in multi_mode_dot(tensor, matrix_or_vec_list, modes, skip, transpose)
    121             res = mode_dot(res, T.conj(T.transpose(matrix_or_vec)), mode - decrement)
    122         else:
--> 123             res = mode_dot(res, matrix_or_vec, mode - decrement)
    124 
    125         if T.ndim(matrix_or_vec) == 1:

~/git_repos/tensorly/tensorly/tenalg/core_tenalg/n_mode_product.py in mode_dot(tensor, matrix_or_vector, mode)
     62                              'Provided array of dimension {} not in [1, 2].'.format(T.ndim(matrix_or_vector)))
     63 
---> 64         res = T.dot(matrix_or_vector, unfold(tensor, mode))
     65 
     66         if vec: # We contracted with a vector, leading to a vector

~/git_repos/tensorly/tensorly/backend/__init__.py in inner(*args, **kwargs)
    159 
    160     def inner(*args, **kwargs):
--> 161         return _get_backend_method(name)(*args, **kwargs)
    162 
    163     # We don't use `functools.wraps` here because some of the dispatched

~/git_repos/tensorly/tensorly/backend/numpy_backend.py in dot(a, b)
     36     @staticmethod
     37     def dot(a, b):
---> 38         return a.dot(b)
     39 
     40     @staticmethod

~/anaconda3/lib/python3.7/site-packages/sparse/_sparse_array.py in __array__(self, *args, **kwargs)
    221         if not AUTO_DENSIFY:
    222             raise RuntimeError(
--> 223                 "Cannot convert a sparse array to dense automatically. "
    224                 "To manually densify, use the todense method."
    225             )

RuntimeError: Cannot convert a sparse array to dense automatically. To manually densify, use the todense method.
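For context, here is a sketch of the two entry points in question (module paths as in the TensorLy layout at the time; the exact paths may differ between versions):

```python
from tensorly.decomposition import parafac                                    # dense code path
from tensorly.contrib.sparse.decomposition import parafac as sparse_parafac  # sparse-wrapped

# Calling the dense parafac on a pydata/sparse tensor eventually hits numpy's
# dot with a sparse operand (as in the traceback above) and raises the
# RuntimeError; the wrapped version is meant to keep everything sparse.
# factors = sparse_parafac(tensor, rank=rank, init='random')
```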

asmeurer commented 4 years ago

> There still remains an issue in the robust_pca one (somehow clip seems to be using the numpy version instead of dispatching to the sparse version), and there is an issue when calling parafac (the regular one, not the sparse-wrapped version) on a sparse tensor, due to it implicitly taking a product between a sparse tensor and a dense tensor; not sure why it worked before.

I vaguely remember there being some issues if you mixed sparse and dense, or used the unwrapped version, or something like that. You'll have to look back at the discussions. I think there was something about internal variables that were dense, so you wanted to store them as a dense array instead of sparse, but that was hard to do because the backend was designed for just one or the other. I think we concluded that it wasn't worth trying to support that. Again, this is only from memory, so I could be misremembering it.
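Here's a minimal sketch of that failure mode using pydata/sparse directly (tiny placeholder shapes, just to show the mechanism behind the traceback above):

```python
import numpy as np
import sparse

dense_vec = np.ones(10)
sparse_mat = sparse.random((10, 10), density=0.1)

# numpy's dot tries to coerce the sparse operand to a dense ndarray, which
# pydata/sparse refuses to do implicitly -- hence the RuntimeError above.
dense_vec.dot(sparse_mat)
```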

asmeurer commented 4 years ago

Also, I seem to remember putting both examples in the notebook, so if we did conclude it wasn't worth supporting, I might not have removed it from the notebook.