Class2D involves many stochastic processes, so the result is not completely reproducible. There is no point comparing rlnClassNumber between different runs.
> Or can the accuracy of class assignments be greatly impacted by the ratio of the total number of samples to the number of classes --K given in the command?
To some extent, yes. I typically use 50-100 classes in Class2D right after AutoPick. For the second run of Class2D, I use up to 200. Using more than 200 classes is not recommended; it just takes too long. Also make sure you have at least several hundred particles per class.
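(A concrete illustration of this rule of thumb; the numbers are ours, not from the thread: with ~25,000 particles after autopicking, K=50 gives roughly 500 particles per class, comfortably above the several-hundred guideline, whereas K=200 would leave only ~125 per class.)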
Thanks for the reply!
It is a relief knowing that comparing rlnClassNumber is not the way to go.
However, we would still like to find ways to judge whether our outputs from Class2D make sense.
So I guess our next question is: what are the necessary criteria, down to each sample's metadata, for evaluating Class2D output across GPU and CPU platforms?
> However, we would still like to find ways to judge whether our outputs from Class2D make sense.
Why? There is no easy way to do this.
We are working on porting RELION to a platform other than CUDA and CPU. For this reason, we need to verify that our solution gives correct output, i.e. that the output of each application, including Class2D and others, aligns with that of the CUDA and CPU versions. Your help would be greatly appreciated if you could shed some light on how to evaluate whether those outputs are acceptable or not.
Do several iterations on GPU. Continue from that point and run only one iteration each on CUDA, CPU, and your own platform (an AMD GPU?). Then the results should be very close.
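A minimal sketch of that procedure (the job directories and iteration numbers below are our assumptions, not from this thread; --continue restarts relion_refine from a saved optimiser file):

# Run 24 iterations on the reference platform (CUDA here), then stop.
relion_refine --o Class2D/cuda/run --i Extract/job007/particles.star --ctf --iter 24 --tau2_fudge 2 --particle_diameter 200 --K 50 --zero_mask --oversampling 1 --psi_step 12 --offset_range 5 --offset_step 2 --norm --scale --gpu
# Continue from the shared 24th-iteration optimiser for exactly one more iteration, once per platform under test; the remaining parameters are read back from the optimiser file.
relion_refine --continue Class2D/cuda/run_it024_optimiser.star --o Class2D/check_cpu/run --iter 25
relion_refine --continue Class2D/cuda/run_it024_optimiser.star --o Class2D/check_gpu/run --iter 25 --gpu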
That being said, if you are seriously working on this, you should contact us to explore the possibility of working together. Our next version (3.1) introduces many changes to the internal data structures, and you might otherwise have to repeat your porting efforts again and again.
Thanks for the suggestions, we will try running it at once :)
And regarding version 3.1 and onward, we will hold an internal discussion about the possibility of working together. I look forward to any future discussion :)
We've tried running the 25th iteration on both CPU and GPU, with the first 24 iterations produced on CPU, as suggested below.
> Do several iterations on GPU. Continue from that point and run only one iteration each on CUDA, CPU, and your own platform (an AMD GPU?). Then the results should be very close.
There are 1584 samples, and 27 of them look quite different. We've attached our output below; would you kindly share your opinion on whether this looks acceptable or not? Or perhaps suggest a threshold ratio (num_samples to num_different_samples?) that we can follow if more unit tests are to come.
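For reference, a minimal way to compute such a ratio from the two run_it025_data.star files (the paths are our assumptions; the awk script looks up the _rlnClassNumber column index from the STAR header and prints that field for every data row, assuming a single RELION 3.0-style data block and identical particle order in both files):

# Extract the class assignment column from each run.
awk '/^_rlnClassNumber/ {c = substr($2, 2)} c && NF > 3 {print $c}' Class2D/cpu/run_it025_data.star > cpu_classes.txt
awk '/^_rlnClassNumber/ {c = substr($2, 2)} c && NF > 3 {print $c}' Class2D/gpu/run_it025_data.star > gpu_classes.txt
# Count particles whose class assignment differs between the two runs.
paste cpu_classes.txt gpu_classes.txt | awk '$1 != $2 {d++} END {printf "%d / %d particles differ\n", d, NR}'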
The goal of Class2D is not perfect reproducibility but to get rid of junk. Process a few datasets and compare the results. If the number of particles in good classes is more or less the same and the refinement leads to the same resolution, it's OK in practice.
It is difficult to give a threshold as a single number; the noisier the dataset, the more difficult class assignment is and the bigger the role stochasticity plays.
If you want to port RELION faithfully, you should compare the results in a more fine-grained way. For example, you can compare the output of the diff2 kernel. You can also compare the probability distribution functions. Dumping these internal values needs more hacking, though.
In your attachment, there are four good classes, containing 994 and 1008 particles in the GPU and CPU versions. Because the total number is so small, I cannot judge whether this difference is significant or not.
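A similarly minimal tally of per-class populations, to check whether the good classes hold roughly the same number of particles in both runs (same hypothetical paths and assumptions as the sketch above):

# Print "count class" pairs, one line per populated class, for each run.
awk '/^_rlnClassNumber/ {c = substr($2, 2)} c && NF > 3 {print $c}' Class2D/cpu/run_it025_data.star | sort -n | uniq -c
awk '/^_rlnClassNumber/ {c = substr($2, 2)} c && NF > 3 {print $c}' Class2D/gpu/run_it025_data.star | sort -n | uniq -c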
Thanks for the clarification and the suggestions! We are currently trying to follow your train of thought in our testing procedure, especially the following description.
> If the number of particles in good classes is more or less the same and the refinement leads to the same resolution, it's OK in practice.
Here we are a little unsure where to look for the refinement resolution; are you referring to the + Final Resolution in the snapshot below, output by 3D auto-refinement after a series of Class2D + AutoPick + Class2D + Class3D?
Yes. Did you go through our tutorial? Follow the tutorial with both the original version and your optimised version and see if the final resolution is significantly different or not.
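One way to pull that number out for a side-by-side comparison (the job directories are our assumptions; run.out is the log the GUI writes, so redirect stdout yourself if running from the command line):

# 3D auto-refinement reports its final resolution estimate in its log output.
grep "Final resolution" Refine3D/original/run.out
grep "Final resolution" Refine3D/ported/run.out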
If we were to compare the final resolutions, what magnitude of difference (down to how many decimal places?) is considered acceptable?
Also, we re-ran the Class2D test, in which the first 24 iterations were run on one platform and only the very last iteration differed, but this time with around 9000 samples (hopefully this is large enough). Would you kindly take a look at our output to see if the sample distribution among the good and bad classes looks reasonable?
> If we were to compare the final resolutions, what magnitude of difference (down to how many decimal places?) is considered acceptable?
As I keep saying, I cannot give a threshold as a single number.
> Would you kindly take a look at our output to see if the sample distribution among the good and bad classes looks reasonable?
Visually it looks fine, but the more reliable test is the resolution of the 3D reconstruction you ultimately get. You should also test on many datasets with different characteristics.
I assume you are a computer programmer. Do you or your colleagues have practical experience in cryo-EM data processing? I strongly recommend that you work with data processing experts.
Hi all, I hope you are all doing well. I am new to RELION 3.1 and I'm facing some issues with 2D classification. I followed the installation instructions and the GUI works, but when I tried to run 2D classification it did not work. I went back to the command-line approach, downloaded the benchmark dataset with its STAR file, and ran the following command:
mpirun -n XXX `which relion_refine_mpi` --i Particles/shiny_2sets.star --ctf --iter 25 --tau2_fudge 2 --particle_diameter 360 --K 200 --zero_mask --oversampling 1 --psi_step 6 --offset_range 5 --offset_step 2 --norm --scale --random_seed 0 --o class2d
I got the following error:
Could someone please help me run 2D classification in RELION 3.1? I would really appreciate it!
We are running RELION 2D reference-free classification based on the "Single-particle processing in RELION-3.0" tutorial PDF. We took our input data from the same PDF, with the links pasted below:
wget ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial_data.tar
wget ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial_precalculated_results.tar.gz
The command issued is as follows:
.../relion/build/bin/relion_refine --o Class2D/gpu/run --i Extract/job007/particles.star --dont_combine_weights_via_disc --pool 3 --pad 2 --ctf --iter 25 --tau2_fudge 2 --particle_diameter 200 --K 50 --flatten_solvent --zero_mask --oversampling 1 --psi_step 12 --offset_range 5 --offset_step 2 --norm --scale --j 1 --gpu
We tried running it on both CUDA and CPU, and found that the class number (_rlnClassNumber) assigned to each particle differs greatly from the 9th or 10th iteration onward through the final iteration, as indicated in run_it025_data.star. However, both runs converged when checking the _rlnChangesOptimalClasses value across iterations. So we would like to make sure: is checking _rlnClassNumber for each particle the proper way of evaluating the output of 2D classification? Or can the accuracy of class assignments be greatly impacted by the ratio of the total number of samples to the number of classes --K given in the command? If so, what would be the rule of thumb for assigning a proper value to --K according to the number of samples?