mapaction / rolling-data-scramble-dashboard-poc

Proof of concept dashboard for the status of the Rolling Data Scrambles
MIT License

Implement hyper modern python #50

Closed. felnne closed this issue 3 years ago

felnne commented 3 years ago

From https://cjolowicz.github.io/posts/hypermodern-python-01-setup/ (and follow on posts in series)

Initially skipped but completed in #56.

I used this already.

Changed to using the conventional variable name, but not yet reading it from the package metadata (i.e. pyproject.toml).
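For reference, reading the version from the installed package metadata could look roughly like the sketch below (the distribution name used here is an assumption, not something already wired up in this project):

# Sketch only: read the version from the installed package metadata.
# The distribution name "rolling-data-scramble-dashboard-poc" is an assumption.
from importlib.metadata import version

__version__ = version("rolling-data-scramble-dashboard-poc")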

Changed to using the recommended package ('src') layout to avoid differences in how things like imports work between having the code installed as a package vs. being in your current directory. See this blog for more information & background.
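For context, the 'src' layout looks roughly like this (the package name mapaction_rds_dashboard is taken from the mypy configuration further down; the file names are placeholders):

rolling-data-scramble-dashboard-poc/
├── pyproject.toml
├── src/
│   └── mapaction_rds_dashboard/
│       └── __init__.py
└── tests/
    └── test_app.py    # placeholder name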

See #51.

Initially with a stub test. See #55 for the task to add real tests.
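For reference, a stub test is just enough to give Pytest something to collect and pass, e.g. (file and function names are placeholders):

# tests/test_app.py - stub test so Pytest has something to run (names are placeholders)
def test_stub() -> None:
    assert True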

Set to fail if coverage is less than 100%, which will be the case until #55 is complete.
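If coverage.py is configured via pyproject.toml (as in the Hypermodern Python guide), the 100% threshold is set with something like:

[tool.coverage.report]
fail_under = 100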

Not implemented yet, but will be for #55.

Postponed until #56.

Added, with plugins for:

Max line length increased to 120.
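For reference, that is just the following in the Flake8 configuration (a minimal sketch; the enabled plugins are not shown here):

[flake8]
max-line-length = 120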

Already added.

Initially skipped, now added.

Not initially implemented (https://github.com/mapaction/rolling-data-scramble-dashboard-poc/issues/50#issuecomment-787122818) but now added (https://github.com/mapaction/rolling-data-scramble-dashboard-poc/issues/50#issuecomment-792373548)

This needed some changes to variable names (for example, inadvertently redeclaring the same variable name in loops).
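As an illustration (not the project's actual code), mypy rejects re-using a name for a value of a different type, which is easy to do inside loops:

# Illustrative only - not the project's actual code.
values = ["1", "2", "3"]

# mypy rejects re-using 'value' for a different type inside the loop:
#
#   for value in values:
#       value = int(value)  # error: Incompatible types in assignment
#
# so the converted value gets its own name instead:
for value in values:
    parsed_value = int(value)
    print(parsed_value)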

I enabled some additional configuration options:

[mypy]
warn_redundant_casts = True

[mypy-mapaction_rds_dashboard.*]
disallow_untyped_calls = True
disallow_untyped_defs = True
disallow_incomplete_defs = True
check_untyped_defs = True
disallow_untyped_decorators = True
no_implicit_optional = True
warn_unused_ignores = True

I think there is still a task to check we only define type hints when needed (#59).

I used the approach from https://github.com/mapaction/rolling-data-scramble-dashboard-poc/issues/50#issuecomment-787436455 a couple of times.

Skipped as this doesn't support Python 3.9.

I want to look at this in #30, as I think that approach would work really well.

It may also be feasible to define a model/schema for the MapChef layer file - though as that already has a JSON Schema, we could use something like Pydantic to do that (or something less specific that takes a JSON Schema and generates a data class rather than a Pydantic model).
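As a very rough sketch of the Pydantic idea (the field names below are invented for illustration and are not taken from the real MapChef layer schema):

# Sketch only - field names are invented, not from the real MapChef layer schema.
from typing import List, Optional

from pydantic import BaseModel


class Layer(BaseModel):
    name: str
    status: str
    error_messages: Optional[List[str]] = None


layer = Layer(name="example", status="ok")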

Already added (using the Sphinx format that PyCharm uses by default).
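For example, the Sphinx/reStructuredText format PyCharm generates looks like this (the function and parameters here are hypothetical):

from typing import Dict


def summarise_results(results: Dict[str, bool]) -> str:
    """
    Build a one line summary of evaluation results (hypothetical example).

    :param results: mapping of layer name to pass/fail status
    :type results: Dict[str, bool]
    :return: human readable summary
    :rtype: str
    """
    passed = sum(1 for ok in results.values() if ok)
    return f"{passed}/{len(results)} layers passed"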

This felt redundant.

I don't think this is needed for this project, but it definitely would be for others (MapChef for example). I might come back to this later.

I adapted the examples given, as we don't use Nox yet, and I also added a parallel Windows job. I think these could be merged, but that might make them harder for people (read: me) to understand.
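Very roughly, the result is two parallel jobs rather than one matrix job. This is a sketch, not the project's actual workflow (job names, action versions and steps are assumptions):

# Sketch only - not the project's actual workflow.
name: CI

on: [push, pull_request]

jobs:
  test-ubuntu:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install poetry
      - run: poetry install
      - run: poetry run pytest

  test-windows:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install poetry
      - run: poetry install
      - run: poetry run pytest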

Implements part of #8.

This was pretty simple but I'm not yet happy with how this works:

As Sphinx was skipped this was skipped too.

felnne commented 3 years ago

Beginnings of a guide:

# install pyenv
# install poetry
$ cd ~/code  # or wherever
$ poetry new --src foo
$ cd foo
$ poetry install
# add LICENSE (MIT)
# replace README.rst with README.md
$ git init
$ git add .
# create GitLab project
Could also use:

felnne commented 3 years ago

There's a note in this guide about not installing development dependencies with Poetry:

How about we declare Flake8 as a development dependency of our project, like we did with Pytest in the previous chapter? Then we can benefit from Poetry as a dependency manager, and record the versions of Flake8 and its dependencies in its lock file. – Well, there is a catch ... This command installs a bunch of things our linting session does not need:

  • the package under development
  • the package dependencies
  • unrelated development dependencies (e.g. Pytest)

A major difference between testing and linting is that you need to install your package to be able to run the test suite, but you don’t need to install your package to run linters on it. Linters are static analysis tools, they don’t need to run your program.

Whilst I agree with this, in the sense that it's untidy that the package and its dependencies get installed unnecessarily, I don't think this is a catch as such: having these extra dependencies installed is not going to stop the linters working.

The solution to this issue in the guide is to generate a constraints file containing each linting package needed within each Nox environment. Whilst I'm sure this works, we currently don't need multiple environments (#56) so I'm not sure this is a big problem, and for simplicity, I decided not to implement this.

felnne commented 3 years ago

I didn't/haven't implemented the pre-commit logic because I have reservations about how this behaves, in the sense that it applies after the user has committed their files and can't be changed without then amending the commit. To me this reduces the user's control, and I'd rather either:

I've used this approach in other projects but I don't know how others will feel about this.

felnne commented 3 years ago

It's worth highlighting that we can declare a variable's type hint without assigning its value at the same time, e.g.:

# declaring variable with type
layers_properties_data: Dict[
    str,
    List[Dict[str, Any]],
] = json.load(fp=layers_properties_file)

# declaring type and value separately
layers_properties_data: Dict[str, List[Dict[str, Any]]]
layers_properties_data = json.load(fp=layers_properties_file)

I think the second version is more readable, provided people understand type hints.

felnne commented 3 years ago

Re: https://github.com/mapaction/rolling-data-scramble-dashboard-poc/issues/50#issuecomment-787122818

I was wrong about how this works, or more accurately, I was being really dumb about what pre-commit means - i.e. things don't change after you've made a commit; rather, Git won't let you make one if the relevant checks fail.

This makes a lot more sense and definitely seems like a good thing so I've gone back and added this.
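For anyone setting this up locally, the hooks need installing once per clone, e.g. (assuming pre-commit is available as a Poetry development dependency):

$ poetry run pre-commit install
# optionally run all hooks against the whole repository
$ poetry run pre-commit run --all-files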

felnne commented 3 years ago

Work on this is now complete. I will write something up about this as part of the coding standards work a bit later. I'll also review other best practices in future issues.