Closed: jules-vanaret closed this issue 1 year ago
Hi @jules-vanaret
Glad you like it!
- Scipy version (tested on Ubuntu)
I cannot reproduce this (fresh env with Python 3.8, but on Ubuntu 22). It's strange that your scipy version is that old, which I think is the reason. I am getting scipy==1.10.1 when installing via `pip install git+https...`.
- tarrow install folder is no longer a git repo
Yes, good catch, thx! This should be fixed in bafdad5.
Let me know if that helped.
Thank you for your answer! After tweaking my conda configuration, I ended up getting different error messages, which suggests the problem was on my end. Sorry for the false alarm.
I tested the installation with your new commit; it works perfectly.
To use a trained network, you suggested
```python
# Dense dummy representations
rep = model.embedding(x)
```
which gives the full (T, N_channels, W, H) embedding. To get only local scalar information, is Grad-CAM the only way to go, e.g. with something like
```python
import torch
from tqdm import tqdm

data = ...  # numpy array of shape (T, W, H)
frame_pairs = [torch.from_numpy(data[i:i+2].astype('int16')).unsqueeze(1).float() for i in range(data.shape[0] - 1)]
cams = [model.gradcam(pair, tile_size=(64, 64)) for pair in tqdm(frame_pairs)]
```
or is there another way to go from the dense embeddings to, say, a single scalar value between 0 and 1 indicating how reversible the events around a given pixel are?
Hi @jules-vanaret, thanks for the feedback. We have mostly used Grad-CAM (and regular CAMs in the beginning) to visually discover time-asymmetric events. Another way to get visually interpretable output would be some form of dimensionality reduction on the embedding space, e.g. to 1D, or 3D displayed as RGB. What do you want to use the single scalar value per pixel for?
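For instance, a minimal sketch of the RGB idea, assuming `model` and `x` as above (PCA is just my choice of reducer here, nothing tarrow-specific):

```python
# Reduce the (T, N_channels, W, H) embedding to 3 channels per pixel and view as RGB.
import numpy as np
from sklearn.decomposition import PCA

emb = model.embedding(x).detach().cpu().numpy()    # (T, C, W, H)
T, C, W, H = emb.shape
flat = emb.transpose(0, 2, 3, 1).reshape(-1, C)    # one C-dim vector per pixel

rgb = PCA(n_components=3).fit_transform(flat)      # (T*W*H, 3)
rgb -= rgb.min(axis=0)
rgb /= rgb.max(axis=0) + 1e-8                      # scale each channel to [0, 1]
rgb = rgb.reshape(T, W, H, 3)                      # one RGB image per frame
```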
My goal is to get a tool for quick visual inspection of division events in dense cellular environments (any area in which the scalar pixel values light up could be cropped and inspected).
Right now the CAM map is somewhat satisfying for me, but it is quite noisy, so I need to find a way to post-process it to make it really interpretable. Reducing to 3D and visualizing as RGB is a great idea, I'll give it a go!
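For the post-processing, I am considering something along these lines (just a sketch; the smoothing sigma and percentile threshold are arbitrary choices on my part):

```python
import numpy as np
from scipy import ndimage

cam = ...  # 2D CAM for one frame pair, shape (W, H)
smoothed = ndimage.gaussian_filter(cam, sigma=4)   # suppress pixel-level noise
mask = smoothed > np.percentile(smoothed, 99)      # keep the strongest 1% of pixels
labels, n = ndimage.label(mask)                    # connected components = candidate events
boxes = ndimage.find_objects(labels)               # crop boxes to inspect
```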
Hi, thank you for making such a polished package. The TensorBoard insets visualization is particularly useful and satisfying.
I have noticed two small issues with the installation of the package.
1. Scipy version (tested on Ubuntu)
When following the installation instructions on Ubuntu 20.04.3 LTS, from a new conda environment running Python 3.8.10, the pip installer displays some warnings and then finishes normally (classic pip shenanigans). Note that the resulting Scipy version is 1.6.3.
When running `import tarrow` from a Python script, an ImportError mentioning QhullError appears. Manually running `pip install scipy==1.8.1` (it seems to be the oldest scipy version that exposes QhullError) solved the issue.
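A quick way to confirm the fix (just a sanity check, assuming the failing import was indeed QhullError as the traceback suggested):

```python
# This import is what scipy 1.6.3 was missing; it works from scipy >= 1.8.
from scipy.spatial import QhullError
import tarrow  # now imports without error
```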
2. tarrow install folder is no longer a git repo if installed via pip install git+https... (tested on Ubuntu and Windows)
After following the installation instructions and setting up the data and config, running the train.py script fails with an error indicating that the package folder is not a git repository.
This could be due to the fact that, when installed via `pip install git+https...`, the package's root is (in my case) ~/.local/lib/python3.8/site-packages/tarrow, which is no longer a git repo. To solve this, I uninstalled the package and followed these steps:
- `git clone git@github.com:weigertlab/tarrow.git` to clone the repo manually, which preserves the git repo
- `cd tarrow` to go to the repo's root
- `pip install -e .` to perform an editable pip installation

When called from a Python script, the package is now read from this folder, which is a git repo. The training is now running flawlessly and looks VERY promising. ;)
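To double-check which copy of the package a script actually picks up (a quick sketch, nothing tarrow-specific):

```python
import os
import tarrow

pkg_dir = os.path.dirname(tarrow.__file__)
repo_root = os.path.dirname(pkg_dir)
print(pkg_dir)                                         # where the package is read from
print(os.path.isdir(os.path.join(repo_root, ".git")))  # True if it lives in a git repo
```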