Morph an input dataset of 2D points into select shapes, while preserving the summary statistics to a given number of decimal places, through simulated annealing. It is intended to be used as a teaching tool to illustrate the importance of data visualization.
This is a rather large-looking PR (only because I am basing it off of https://github.com/stefmolin/data-morph/pull/198 for ease of merging!), and is a first attempt (which is why I am marking it as a draft) at caching the various statistics properties as described in #201.
TL;DR: since at each iteration we only move one point in the dataset (basically mapping $(x_i, y_i) \mapsto (x'_i, y'_i)$), why not compute the new statistics ($\langle X' \rangle$, $\text{Var}(X')$, and $r(X', Y')$) in terms of the old ones ($\langle X \rangle$, $\text{Var}(X)$, and $r(X, Y)$) plus the old ($(x_i, y_i)$) and new ($(x'_i, y'_i)$) points? This is essentially what this PR does.
Details
Mathematical derivation
The mean is shifted as:
$$ \langle X' \rangle = \langle X \rangle + \frac{\delta_x}{n} $$
where $\delta_x = x'_i - x_i$ (and analogously for $Y$). The variance (or the standard deviation if you want) is shifted as:
$$ \text{Var}(X') = \text{Var}(X) + 2 \frac{\delta_x}{n} (x_i - \langle X \rangle) + \frac{\delta_x^2}{n} - \frac{\delta_x^2}{n^2} $$
while the correlation coefficient is shifted as:
$$ r(X', Y') = \frac{\langle X Y \rangle + \frac{1}{n}(\delta_x y_i + \delta_y x_i + \delta_x \delta_y) - \langle X' \rangle \langle Y' \rangle}{\sqrt{\text{Var}(X') \, \text{Var}(Y')}} $$
where the only new quantity to keep track of is $\langle X Y \rangle$ (the rest can be obtained from the above two).
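As a sanity check, the shifted-statistics formulas can be verified numerically against a direct recomputation on the perturbed data (an illustrative sketch, not the PR's actual code; population-variance convention, i.e. `ddof=0`):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
x, y = rng.normal(size=n), rng.normal(size=n)

# Perturb a single point: (x_i, y_i) -> (x_i + dx, y_i + dy)
i, dx, dy = 7, 0.3, -0.2

# Old cached statistics; <XY> is the one extra quantity we track
mx, my = x.mean(), y.mean()
vx, vy = x.var(), y.var()
mxy = (x * y).mean()

# O(1) shifted statistics from the formulas above
mx2 = mx + dx / n
my2 = my + dy / n
vx2 = vx + 2 * dx / n * (x[i] - mx) + dx**2 / n - dx**2 / n**2
vy2 = vy + 2 * dy / n * (y[i] - my) + dy**2 / n - dy**2 / n**2
mxy2 = mxy + (dx * y[i] + dy * x[i] + dx * dy) / n
r2 = (mxy2 - mx2 * my2) / np.sqrt(vx2 * vy2)

# Direct O(n) recomputation on the perturbed data
xp, yp = x.copy(), y.copy()
xp[i] += dx
yp[i] += dy
assert np.isclose(mx2, xp.mean())
assert np.isclose(vx2, xp.var())
assert np.isclose(r2, np.corrcoef(xp, yp)[0, 1])
```

Note that the Bessel factor cancels in $r$, so the comparison against `np.corrcoef` holds regardless of the `ddof` convention.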
Workflow
With the math out of the way, this is how the new way of computing the statistics works:
1. Read the $(x, y)$ dataset, and make an instance of `Statistics` (subject to change for its somewhat ambiguous naming) from it.
2. Start the iteration.
3. `Statistics` has a method, `perturb`, which takes 3 arguments: the row (index) we wish to perturb, and the 2 values of the perturbations (in the `x` and `y` directions), and returns an instance of the new `SummaryStatistics` (i.e. the one from the perturbed data).
4. If `_is_close_enough` returns `True`, we call the `perturb` method again (it's $O(1)$, so not very expensive to call), but this time with `update=True`, which actually updates the data in the `Statistics` instance.
5. Go to the next iteration.
Note that there are a bunch of new methods (`shifted_mean`, `shifted_var`, `shifted_stdev`, `shifted_corrcoef`), which we may or may not want to make private (or refactor so they accept different arguments), as I didn't put too much thought into the overall API design (again, hence the "draft" status).
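To make the workflow concrete, here is a minimal, self-contained sketch of the idea. The `Statistics`/`SummaryStatistics` names and the `perturb(row, dx, dy, update=...)` signature follow the description above, but the implementation details are illustrative only and not the PR's actual code:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SummaryStatistics:
    """Container for the statistics that the morphing must preserve."""

    x_mean: float
    y_mean: float
    x_var: float
    y_var: float
    corr: float


class Statistics:
    """Cache the moments once, then shift them in O(1) per perturbation."""

    def __init__(self, x, y):
        self.x = np.array(x, dtype=float)
        self.y = np.array(y, dtype=float)
        self.n = len(self.x)
        # cached moments; <XY> is the one extra quantity we track
        self._mx, self._my = self.x.mean(), self.y.mean()
        self._vx, self._vy = self.x.var(), self.y.var()
        self._mxy = (self.x * self.y).mean()

    def perturb(self, row, dx, dy, update=False):
        n, xi, yi = self.n, self.x[row], self.y[row]
        mx = self._mx + dx / n
        my = self._my + dy / n
        vx = self._vx + 2 * dx / n * (xi - self._mx) + dx**2 / n - dx**2 / n**2
        vy = self._vy + 2 * dy / n * (yi - self._my) + dy**2 / n - dy**2 / n**2
        mxy = self._mxy + (dx * yi + dy * xi + dx * dy) / n
        if update:  # commit the move into the cached state
            self.x[row] += dx
            self.y[row] += dy
            self._mx, self._my = mx, my
            self._vx, self._vy = vx, vy
            self._mxy = mxy
        return SummaryStatistics(mx, my, vx, vy, (mxy - mx * my) / np.sqrt(vx * vy))
```

With `update=False` the call is a pure "what if" query, which is what makes it cheap to evaluate a candidate move before deciding whether to accept it.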
Performance
Now, onto the performance results! Note that the benchmarks were run with #201 already included, so the absolute numbers differ from the ones on `main`, but the reduction in the number of function calls is evident. I tested everything using `python -m cProfile -m data_morph --seed 42 --start-shape panda --target-shape [shape]`.
```
# star shape - perf before
79257097 function calls (78610163 primitive calls) in 43.075 seconds
# star shape - perf after
54021781 function calls (52470658 primitive calls) in 28.704 seconds

# rings shape - perf before
94095798 function calls (93420792 primitive calls) in 43.998 seconds
# rings shape - perf after
68846080 function calls (67219947 primitive calls) in 29.592 seconds

# bullseye shape - perf before
82412199 function calls (81796202 primitive calls) in 40.245 seconds
# bullseye shape - perf after
57109757 function calls (55635954 primitive calls) in 26.200 seconds
```
So for each of these shapes we make about 26M fewer primitive function calls, which results in roughly a third (33-35%) faster performance (computed as ((t_old - t_new) / t_old) * 100).
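The speedup percentages can be reproduced directly from the timings above:

```python
# Wall-clock timings (seconds) from the cProfile runs above
timings = {
    "star": (43.075, 28.704),
    "rings": (43.998, 29.592),
    "bullseye": (40.245, 26.200),
}

# Relative speedup: ((t_old - t_new) / t_old) * 100
speedups = {
    shape: (t_old - t_new) / t_old * 100
    for shape, (t_old, t_new) in timings.items()
}

for shape, pct in speedups.items():
    print(f"{shape}: {pct:.1f}% faster")
# → star: 33.4% faster, rings: 32.7% faster, bullseye: 34.9% faster
```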
Discussion items
Some possible points of discussion:
- I didn't pay too much attention to Bessel's correction in the derivation of the formulas above, though I imagine the numerical difference should be mostly negligible since all of the starting datasets have > 100 points.
- API design (make the `shifted_*` functions private? Could `perturb` do without an `update` argument? etc.)
- The tests for the `shifted_*` functions are a bit repetitive, and could take a while to run (I was initially very skeptical of the numerical stability, but it appears to be fine, though it could just be that I haven't tried enough scenarios to hit an instability).
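On the Bessel point: the `ddof=0` and `ddof=1` variances differ by a relative factor of exactly $1/n$, so for datasets with more than 100 points the discrepancy stays below 1% (a quick illustration, independent of the PR's code):

```python
import numpy as np

n = 150  # comparable to the starting datasets (> 100 points)
rng = np.random.default_rng(0)
x = rng.normal(size=n)

pop = x.var(ddof=0)     # population variance (no Bessel correction)
sample = x.var(ddof=1)  # sample variance (with Bessel correction)

# sample = pop * n / (n - 1), so the relative difference is exactly 1/n
rel_diff = (sample - pop) / sample
assert np.isclose(rel_diff, 1 / n)
assert rel_diff < 0.01  # under 1% for n = 150
```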
Feedback is welcome :)
Checklist
- [x] Test cases have been modified/added to cover any code changes.
- [x] Docstrings have been modified/created for any code changes.
- [x] All linting and formatting checks pass (see the contributing guidelines for more information).