grovduck closed this 11 months ago
@aazuspan, a stupid question. I thought using:

```python
from __future__ import annotations
```

would allow the use of `list` and `dict` (lowercase) as type hints (i.e., from PEP 585). I'm not sure I remember how you got this to work in `sknnr`, but my tests are failing here and I think this is the reason. I'm using this with `pydantic.BaseModel`-derived classes like this:
```python
from __future__ import annotations

from typing import Any

from pydantic import BaseModel


class Geometry(BaseModel):
    """Client-side proxy for ee.Geometry object"""

    type: str
    coordinates: list[Any]
```
Any help would be very much appreciated!
Interesting! If I'm remembering right, the `__future__` import works by effectively stringifying the type annotations so the interpreter doesn't try to evaluate incompatible ones. Looking at the failed test, I think `pydantic` is forcing those annotations to be evaluated anyway, which breaks the workaround.
A little short on details, but pydantic/pydantic#2112 seems to be describing the same issue. On the opposite end of the spectrum, pydantic/pydantic#2678 has a lot of detail, although I haven't read deeply enough into that discussion to figure out what the conclusion was or whether it's relevant to backwards compatibility...
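To illustrate the mechanism, here is a hypothetical minimal example (a plain class stands in for the `pydantic.BaseModel`, which I can't reproduce here): with PEP 563 the annotation is stored as an unevaluated string, and the failure only appears when something resolves the hints, as `typing.get_type_hints` (and pydantic, internally) does.

```python
from __future__ import annotations

import typing


class Geometry:
    """Stand-in for the pydantic model, as a plain class."""

    type: str
    coordinates: list[typing.Any]


# With the __future__ import, annotations are stored as unevaluated strings:
print(Geometry.__annotations__["coordinates"])  # 'list[typing.Any]'

# Anything that resolves the hints (pydantic does this to build validators)
# forces evaluation; on Python 3.8 that raises TypeError for list[...],
# while on 3.9+ it succeeds.
try:
    hints = typing.get_type_hints(Geometry)
    print(hints["coordinates"])
except TypeError as err:
    print(f"evaluation failed: {err}")
```

So the `__future__` import defers the problem but can't remove it for any consumer that evaluates annotations at runtime.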
@aazuspan, thanks for your help finding those issues. I'm not sure I understand much better than before, other than that it seems I'll have issues using `list` and `dict` in pydantic classes while trying to get 3.8 to pass. If I go back to using `typing.List` and `typing.Dict`, I get all kinds of complaints from the `flake8-future-annotations` check. Do you have any opinion on either:
Typing is great to have, but kind of a pain as well!
I'd be tempted to drop 3.8... I know we considered that with `sknnr` and held off, but now we're 6 months past NumPy's drop date, and it would sure simplify some things. With that said, option 3 would also be a quick fix, so I wouldn't hesitate to do that if you have reservations about limiting Python version support.
> Typing is great to have, but kind of a pain as well!

💯
> I'd be tempted to drop 3.8... I know we considered that with `sknnr` and held off, but now we're 6 months past NumPy's drop date, and it would sure simplify some things

Great, thanks for your input. I've dropped 3.8 (and added 3.12) in this commit.
@aazuspan, if you have the time (and can stomach it), I'd love for you to take a quick look at the changes here. The general thrust of this PR is to bring the `sknnr` transformers[^1] into this package to do client-side fitting (training). I didn't at all hew to the best practice of nice, small, atomic commits, so it's a bit to wade through, but the general pattern follows closely what you've done with `sknnr`.
`Raw` is the main base class and implements `train`, `predict` (spatial prediction), and `predict_fc` (feature collection [point] prediction). `Transformed` inherits from `Raw` but is an abstract class. Its abstract methods set the protocol for doing the transformations, which are delegated to the subclasses (`Euclidean`, `Mahalanobis`, `MSN`, and `GNN`). It also implements `transform_image` and `transform_fc`, which are called from `predict` and `predict_fc`, respectively. Finally, it overrides `train` to do the fitting, and the relevant objects are saved as EE server-side objects.

There are a few issues I know of at this point:
1. …`sknnr`). But given that the core neighbor finder is called `ee.Classifier.minimumDistance`, I felt a bit weird about calling these regressors again. But maybe it's better to keep them consistent with `sknnr` rather than with GEE.
2. …`Raw` (`transform_image` and `transform_fc`), which just pass back the input argument. Because I'm calling these methods from `Raw.predict` and `Raw.predict_fc`, mypy complained that these methods weren't originally defined in `Raw`. I can't make those methods abstract without making all of `Raw` abstract, which would defeat the purpose. Do you have a better way of defining these or changing `predict`/`predict_fc` so that it avoids this issue?
3. …`test_model_spatial.py` file in `ESTIMATOR_PARAMETERS` (the last value in the tuple). These represent the minimum number of expected matches between some check images that I created a while ago and the neighbor images created through the test, 360,000 being the maximum for a 600x600 image. I honestly can't remember how I created those test images because `pynnmap` didn't have the capacity to run the different methods, so I'm guessing I did it with `yaImpute`. But I think those check images might be worth recreating with `sknnr_spatial` to see if we can get higher values.

I know it's not in your nature, but please be brutally honest if you see some funky stuff and additional opportunities for refactoring. The good thing is that we're currently passing tests!
[^1]: We only need the transformers, as GEE's `ee.Classifier.minimumDistance` is responsible for neighbor finding. The transformers are used to set the ordination space.
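As a side note on the mypy/abstract-method question above: one pattern that keeps `Raw` concrete while still making `Transformed` abstract is to mix `abc.ABC` in only at the `Transformed` level. A minimal sketch under that assumption — the class and method names follow the discussion, but all behavior below is invented placeholder logic, not the PR's actual implementation:

```python
from abc import ABC, abstractmethod
from typing import Any


class Raw:
    """Concrete base: identity transforms plus shared prediction logic."""

    def transform_image(self, image: Any) -> Any:
        # No-op default; because the method exists on Raw itself, mypy
        # accepts the call in predict() without Raw becoming abstract.
        return image

    def transform_fc(self, fc: Any) -> Any:
        return fc

    def predict(self, image: Any) -> Any:
        # Spatial prediction always routes through the (maybe no-op) transform.
        return self.transform_image(image)

    def predict_fc(self, fc: Any) -> Any:
        return self.transform_fc(fc)


class Transformed(Raw, ABC):
    """Abstract layer: subclasses must supply real transformations."""

    @abstractmethod
    def transform_image(self, image: Any) -> Any: ...

    @abstractmethod
    def transform_fc(self, fc: Any) -> Any: ...


class Euclidean(Transformed):
    """Placeholder subclass; real code would project into ordination space."""

    def transform_image(self, image: Any) -> Any:
        return ("transformed", image)

    def transform_fc(self, fc: Any) -> Any:
        return ("transformed", fc)


# Raw stays instantiable, while Transformed raises TypeError if instantiated:
print(Raw().predict("img"))          # img
print(Euclidean().predict("img"))    # ('transformed', 'img')
```

The design choice here is that `Raw`'s defaults are real (identity) implementations rather than stubs, so `predict`/`predict_fc` can live once on the base class.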
Absolutely, I'm excited to take a look and get a better idea of how this works! I tried running tests and it looks like I can access the table assets, but not the images. Do you mind sharing those?
```
FAILED tests/test_model_spatial.py::test_image_match[5-raw] - ee.ee_exception.EEException: Image.load: Image asset 'users/gregorma/gee-knn/test-check/raw_neighbors_600' not found (does not exist or caller does not have access).
FAILED tests/test_model_spatial.py::test_image_match[5-euc] - ee.ee_exception.EEException: Image.load: Image asset 'users/gregorma/gee-knn/test-check/euc_neighbors_600' not found (does not exist or caller does not have access).
FAILED tests/test_model_spatial.py::test_image_match[5-mah] - ee.ee_exception.EEException: Image.load: Image asset 'users/gregorma/gee-knn/test-check/mah_neighbors_600' not found (does not exist or caller does not have access).
FAILED tests/test_model_spatial.py::test_image_match[5-msn] - ee.ee_exception.EEException: Image.load: Image asset 'users/gregorma/gee-knn/test-check/msn_neighbors_600' not found (does not exist or caller does not have access).
FAILED tests/test_model_spatial.py::test_image_match[5-gnn] - ee.ee_exception.EEException: Image.load: Image asset 'users/gregorma/gee-knn/test-check/gnn_neighbors_600' not found (does not exist or caller does not have access).
```
> Do you mind sharing those?
Sorry about that. Should be shared with your gmail account now. Let me know if you have further issues.
All tests passing now, thanks!
…but I made a first pass at least
@aazuspan, thanks for such a thorough review. I'll be picking through it over the next couple of days (then back to `sknnr`!)

Hopefully you can see my responses to your comments.
I do see a couple, but I got an email notification about a bunch of comments that aren't showing up for me, e.g. your answer about `retile` or the int IDs... Not quite sure what's going on there.
> I do see a couple, but I got an email notification about a bunch of comments that aren't showing up for me, e.g. your answer about `retile` or the int IDs... Not quite sure what's going on there.
I really messed everything up. But I think here is what happened (chronologically):
So, I'll try to go back and put in what I had written (I don't think deleted comments can be retrieved), but probably best to use the emails that were sent as the definitive source. I think the crux is that one shouldn't respond when an active review is ongoing. Sorry about this!
Oh no, that's a hassle to have to re-enter all your responses. I think I have the text of all your responses in the notification emails, I just don't necessarily know what they were responses to (although I can probably guess in most cases). Do you have those, or would it be helpful if I forwarded the email?
GitHub is a pretty smooth experience overall, but figuring out where/if review comments are going to show up always seems like a little bit of a gamble, so you're not alone!
> I think I have the text of all your responses in the notification emails, I just don't necessarily know what they were responses to (although I can probably guess in most cases). Do you have those, or would it be helpful if I forwarded the email?
If you have those and it would be easy enough to send, that would be great. I don't get a copy of changes I make.
Choosing names remains the hardest part of programming! I'm not sure that staying consistent with `sknnr` is necessarily the perfect choice, but it does seem like a defensible one at least, and would make it easy for someone familiar with one package to adapt to the other (although if you ever tried using them in a single script there might be some namespace headaches...). I think that's where I would lean by default, just because I don't have any better ideas.
I know it's not what you suggested, but I'm considering the following name changes:

- `Raw` -> `RawKNNClassifier`
- `Euclidean` -> `EuclideanKNNClassifier`
- `Mahalanobis` -> `MahalanobisKNNClassifier`
- `MSN` -> `MSNClassifier`
- `GNN` -> `GNNClassifier`
I'm thinking of this from the following perspective:

- In `sknnr`, we chose our naming off the main estimator we were deriving from - `KNeighborsRegressor`.
- With this package, I'm worried that if we call them regressors, users will be more confused by that terminology than by the more familiar `Classifier` terminology. The term isn't completely accurate, but GEE calls their estimators classifiers even if run in regression mode.

Thoughts?
> The term isn't completely accurate, but GEE calls their estimators classifiers even if run in regression mode.

Good point!
I think your logic makes sense, and I'm happy to go with the `Classifier` naming scheme. As you suggested, that should be a more familiar interface for users with some Earth Engine ML experience, who are probably the most likely to find and use this.
@aazuspan, you've been incredibly gracious with your time reviewing this PR. Thank you so much. I'm running out of steam a bit and will migrate the issues you've identified below into separate issues:

- `Colocation` class so that user doesn't need to instantiate
- `n_components` and `spp_transform` values on `GNN` to ensure correctness
- `mode` keyword on feature collections to get distances, and add tests
- `sknnr-spatial` and this training data to see if we can increase the number of expected matches of pixels in the test surfaces
- `id_field` does not store an integer type

There is still a bit of work to do, but I'd like to merge this PR now, mostly because I need to test with the larger SERVIR workflow this coming week. I've learned that this was much too ambitious of a PR on its own!
Sounds great @grovduck! Thanks for helping me get up to speed on this. I'm excited to start exploring it with some real data!
> Sounds great @grovduck! Thanks for helping me get up to speed on this. I'm excited to start exploring it with some real data!
@aazuspan, even though there are still many things to do on this, I wouldn't have gotten nearly as far without your helpful reviews and fixes. I appreciate it!
This PR addresses #3 by leveraging `sknnr` to provide estimators for this package. As noted in #3, running the estimators client-side will "interrupt" the GEE server-side flow and thus run training only once for all targets. It also removes the duplication of implementing these ordination estimators in more than one repository.

This is still very much a work in progress and I will be addressing the following issues:

- Create a base class for all estimators to reduce duplicated code across estimators
- …`sknnr`, make `Raw` the base class
- …`Raw` (currently called `Transformed`) to be the base class for all estimators that use transformations
- …`sknnr` transformation objects to handle client-side ordination
- Move `predict` and `predict_fc` to `Transformed` and remove from derived classes
- …`@abstractmethod` methods in `Transformed` to handle transformations and implement in derived classes

As this PR turned out to be very ambitious, there are a few tasks that I decided not to tackle with this PR and will turn into separate issues:
- `Colocation` class so that user doesn't need to instantiate
- `n_components` and `spp_transform` values on `GNN` to ensure correctness
- `mode` keyword on feature collections to get distances, and add tests
- `sknnr-spatial` and this training data to see if we can increase the number of expected matches of pixels in the test surfaces
- `id_field` does not store an integer type