linnarsson-lab / loom-viewer

Tool for sharing, browsing and visualizing single-cell data stored in the Loom file format
BSD 2-Clause "Simplified" License

[sparkline] avoid awkward small, uneven grouping, and confusing column stretching #122

Open JobLeonard opened 6 years ago

JobLeonard commented 6 years ago

TL;DR: either plot data directly with near-even column sizes, or group with at least twenty cells per column. Column width should be increased to at least 3 pixels to minimize interpretation errors.

Plotting small groups (total cells between 1x and 20x the available pixels)

Flame maps highlight an inherent issue that we have to resolve when grouping data or plotting it directly. It applies to the other plotters too, but is invisible there. (NB: the data in the examples below is nonsensical as a flame map; it only serves to illustrate the inherent issue with the plotter.)

When there is so little data to plot that we can show it directly, our sparklines look fine (with a caveat I'll get back to):

image

Similarly, once we start grouping about 10 cells per column or more, things look pretty good too.

Roughly 10 cells: image

Roughly 100 cells:

image

However, there is an "awkward transition" phase between direct plotting and grouped plotting, which is especially visible in flame maps:

image

In this example, most groups contain two data points, but occasionally we see three. This leads to an ugly, hard-to-read flame map. (Another issue specific to flame maps is that at this scale the horizontal gradient dominates the vertical one, making them unintuitive to read.)

This issue is not limited to flame maps, however: every one of our plotters groups data like this.
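To make the uneven grouping concrete, here is a minimal sketch (my own illustration, not the actual loom-viewer code) of how naively rounding group boundaries produces groups of 2 and 3 cells side by side:

```javascript
// Map `cells` data points onto `columns` groups by rounding the
// cumulative boundary, the way a naive grouped plotter might.
function groupSizes(cells, columns) {
	const sizes = [];
	let prev = 0;
	for (let i = 1; i <= columns; i++) {
		const boundary = Math.round(i * cells / columns);
		sizes.push(boundary - prev);
		prev = boundary;
	}
	return sizes;
}

// 25 cells over 10 columns: groups of 2 and 3 alternate
console.log(groupSizes(25, 10)); // [3, 2, 3, 2, 3, 2, 3, 2, 3, 2]
```

Neighbouring groups differ by a full data point, which is a 50% difference in how much data each column summarizes.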

For bar plotters and heat maps, the difference between averaging two or three values is big, introducing unpredictable biases and artifacts into our plots; box plots do not have enough statistical power at this group size to be useful:

image

image

Compare to (roughly) 10 or 100 cells per group:

image

image

Again: nonsensical data; the point is whether min/max, average, and quartiles can be told apart.

One way to look at this is that grouping the data is essentially applying statistics. For these statistics to make sense, we need enough data per group, and a small enough variation in group size due to rounding. This rounding error shrinks as the group grows: groups of 2-3 cells vary by 50%, groups of 10-11 cells by 10%, and so on. I just had a brief chat with @gioelelm about a good statistical rule of thumb: to stay in line with the conventional p < 0.05 threshold, the obvious choice is a minimum of 20 cells per group, keeping the variation at 5%.
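That rule of thumb could be sketched as follows (function name and structure are my own, not an existing loom-viewer API): plot directly when there are enough pixels, otherwise group with at least 20 cells per column.

```javascript
// Pick a group size such that the rounding variation between groups
// stays below 1 / minGroupSize (5% for the default of 20 cells).
function pickGroupSize(cells, columns, minGroupSize = 20) {
	const naive = cells / columns;
	if (naive <= 1) {
		return 1; // enough pixels: plot every cell directly
	}
	return Math.max(minGroupSize, Math.ceil(naive));
}

console.log(pickGroupSize(300, 400));   // 1   (plot directly)
console.log(pickGroupSize(500, 400));   // 20  (naive groups of 1-2 would be too uneven)
console.log(pickGroupSize(40000, 400)); // 100 (normal grouping)
```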

Plotting really small datasets (cells between 1/3x and 1x the available pixels)

The other end of the spectrum is when we have less data than pixels: then we can spread the columns over multiple pixels. If we avoid anti-aliasing (and we should, since it smears black-and-white columns into grey mush), that again leads to rounding error, this time in column width.

If our cell count is somewhere between 1/2x and 1x the available pixels, some columns will be one pixel wide and some two pixels. That is a visual error of 100%, again without any indication to the viewer that it is happening.
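A small sketch of the width-rounding problem (hypothetical helper, not existing code): distributing columns over pixels by rounding pixel boundaries mixes 1-pixel and 2-pixel columns.

```javascript
// Spread `cells` columns over `pixels` without anti-aliasing:
// each column edge is rounded to a whole pixel, so widths vary.
function columnWidths(cells, pixels) {
	const widths = [];
	let prev = 0;
	for (let i = 1; i <= cells; i++) {
		const edge = Math.round(i * pixels / cells);
		widths.push(edge - prev);
		prev = edge;
	}
	return widths;
}

// 7 cells over 10 pixels: 1 px and 2 px columns mix (100% difference)
console.log(columnWidths(7, 10)); // [1, 2, 1, 2, 1, 2, 1]
```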

As a rule of thumb, I suggest that columns should be at least 3 pixels wide (plus retina correction). That way, the error is reduced to 33%. Significant in pure error terms, but the issue here is the likelihood of column misinterpretation rather than statistical rounding error, so it's not as big a deal.
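The 3-pixel rule could be sketched like this (the constant, function name, and pixel-ratio handling are assumptions on my part): only plot columns directly when each can be at least 3 device pixels wide; otherwise fall back to grouping.

```javascript
const MIN_COLUMN_PX = 3;

// "Retina correction": compare against device pixels, not CSS pixels.
// In the browser you would pass window.devicePixelRatio for pixelRatio.
function canPlotDirectly(cells, cssPixels, pixelRatio = 1) {
	const devicePixels = cssPixels * pixelRatio;
	return cells * MIN_COLUMN_PX <= devicePixels;
}

console.log(canPlotDirectly(100, 400, 1)); // true  (4 px per column)
console.log(canPlotDirectly(200, 400, 1)); // false (2 px per column, fall back to grouping)
```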

_PS: Incidentally, this gives an empirical answer to the famous Sorites paradox: "there's a point where it's neither heap nor non-heap, but just awkward, unusable in-between horribleness." I bet statisticians reached that conclusion years ago._