springfall2008 / batpred

Home battery prediction and charging automation for Home Assistant, supporting many inverter types
https://springfall2008.github.io/batpred/

Enforce various standards in code / docs #313

Closed iainfogg closed 8 months ago

iainfogg commented 9 months ago

This is an issue for triggering some discussion, so @springfall2008 and others can chip in on what, if anything, should be set as code / documentation standards for the project. It's also me coming up with some suggestions of how we might do it, but I'm not attached to any of it.

On #294 a few of us discussed some things that could be set down as fixed standards. I've had a go at coming up with a set of standards, together with the tooling that can monitor them, in the linked PR #312.

This is still a bit TBC as I've not got the auto-fixing working yet, but I should have that shortly.

Tooling

I'm suggesting using pre-commit - it's a local tool that runs other tools with a specified config, and there's also a web-based service that can push commits to fix specific errors in your code (so even if you miss a problem, it can be fixed for you - at least for the problems it's able to fix).

pre-commit can be run locally, if you wish. However, I'm not suggesting that everyone adopts that - rather, that we use their CI Lite offering (free for open-source projects), which does the auto-fixing. I think we should run that on PRs and on commits to main. Of course, anyone's welcome to run it locally to check stuff before committing, and I can document how to do that too.

Also, I've only run it locally, so I've not tried the web-based tool yet, but I expect it'll either give a tick (if everything was fine, or if it could auto-fix the issues) or a cross (if there were issues it couldn't auto-fix) to indicate whether the PR passed the checks, preventing stuff getting merged.

Standards:

I'm suggesting we adopt the following tools - some of which can auto-fix problems, and some of which just report them (a sketch of how these might be wired into a pre-commit config follows the list):

- Trailing whitespace - remove any whitespace at the end of lines
- End of file fixer - ensure each file ends with a single newline
- Check YAML - load the YAML files in the project to check they are valid
- Check JSON (and JSON5) - load the JSON (and JSON5) files to check they are valid
- File contents sorter - sort specified files into alphabetical order (for the custom dictionary and the requirements.txt files)
- Black - enforce various Python formatting standards. This currently has the line length set to 180 to match a recent PR, but it can be amended to whatever we like; it isn't flagging any issues with predbat.py at that line length.
- Markdownlint - check various standards for Markdown, to ensure the Markdown is correct
- CSpell - a spell checker (the one used within VS Code, if you're using the Codespaces environments), together with a custom dictionary containing the project-specific words (e.g. GivTCP, Predbat) it needs to pass. As well as checking comments, it also checks variable names, as long as you use this_word or thatWord format for variables. I've added all non-conforming variable names to the custom dictionary too, so it'll pass without needing code modifications.
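
To make that concrete, the config would look roughly like the sketch below. This is illustrative only - the repository URLs and hook IDs are the standard upstream ones, but the `rev` versions and file paths are placeholders, and the real config is in #312:

```yaml
# Illustrative sketch only - see PR #312 for the actual configuration.
# The rev values below are placeholders and should be pinned to current releases.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-json          # a separate hook would be needed for JSON5 files
      - id: file-contents-sorter
        files: requirements\.txt   # the custom-dictionary file would be listed here too
  - repo: https://github.com/psf/black
    rev: 23.11.0
    hooks:
      - id: black
        args: ["--line-length", "180"]
  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.37.0
    hooks:
      - id: markdownlint
  - repo: https://github.com/streetsidesoftware/cspell-cli
    rev: v8.0.0
    hooks:
      - id: cspell
```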

Impact on current codebase

With all those checks in place, the only things that need fixing are genuine issues:

- Some actual whitespace issues that crept in after I did a fix the other day
- The weird stuff at the start of solaredge.yaml
- Some actual spelling mistakes in comments in the code
- Some actual issues with the Markdown, e.g. some links that aren't written in Markdown style
- Some super-long lines in the Markdown docs (200-600 characters at times)

I'm happy to fix all those things as part of getting this in.

@springfall2008 @fboundy and anyone else - how's that sound? Please give honest feedback, and I can ditch this, or adapt it, as people see fit.

See #312 for the code.

fboundy commented 9 months ago

I’m all for standards and you obviously have a much better handle on this than me so I will attempt to comply with anything that is agreed. I’m very much a hacker rather than a proper developer!


gcoan commented 9 months ago

They all sound sensible to me.

Maybe some standards around procedure and code documentation / commenting as well?

Definitely something for further down the road, but how can we move toward more automated test packs that run as part of the release and deployment process? I don't know whether Trefor has any test API results and config that could be used as a starting point, and maybe it's way too optimistic, but the goal would be a test-driven development strategy whereby new features are only added once tests are built to prove the new function works as expected. When I look at the code there's a lot to it, and Trefor's knowledge of how it hangs together is critical. Even now things occasionally get broken and are only discovered out in the wild.

Two other things:

How do we make it easier to ensure new sensors, switches etc. get included in the dashboard automatically? Can we collect the definitions of these together in the code and generate the sample dashboard automatically?

And how do we make it easier for existing users to keep up to date with changes to the dashboards and apps.yaml?  I find myself looking at the source code diff for every release to work out what manual changes I need to apply as these are not installed in subsequent releases.

Geoffrey

iainfogg commented 9 months ago

Thanks @gcoan, that sounds sensible too. I'd suggest you create another issue (or multiple issues) for those things that aren't checkable with an automated style-checking tool.

Although I'll drop it in here: I've got this branch where I started doing some work on adding tests, but the complexity of Predbat (all the code being in just two classes, together with the reliance on AppDaemon for various methods used in Predbat, e.g. logging and accessing HA entities) made it really tricky. I plan on coming back to it at some point, but I'm equally happy if someone else can get somewhere with it themselves. Like you, I'm keen that we find a way to get some tests in.
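
To give a feel for the sort of thing I was attempting - everything below is hypothetical, the class, method and entity names are made up rather than the real Predbat internals - the idea was to stub out the AppDaemon-provided methods so that calculation-only code can be exercised with plain pytest:

```python
# Hypothetical sketch only - names are illustrative, not the actual Predbat code.
# The AppDaemon base class is replaced with a fake providing log() and get_state(),
# so calculation-only methods can be unit tested without Home Assistant running.


class FakeHass:
    """Stands in for appdaemon.plugins.hass.hassapi.Hass during tests."""

    def __init__(self, states=None):
        self.states = states or {}
        self.logged = []

    def log(self, msg, *args, **kwargs):
        self.logged.append(msg)

    def get_state(self, entity_id=None, **kwargs):
        return self.states.get(entity_id)


class PredictionUnderTest(FakeHass):
    """Imagine this pulling in the real calculation methods instead of this stub."""

    def dp2(self, value):
        # A small helper of the kind that is easy to test in isolation.
        return round(value * 100) / 100


def test_dp2_rounds_to_two_decimal_places():
    pred = PredictionUnderTest()
    assert pred.dp2(1.234) == 1.23
    assert pred.dp2(1.236) == 1.24


def test_state_comes_from_the_fake():
    # "sensor.predbat_soc_kw" is just an example entity id for the test.
    pred = PredictionUnderTest(states={"sensor.predbat_soc_kw": "5.2"})
    assert float(pred.get_state("sensor.predbat_soc_kw")) == 5.2
```

The real difficulty is that the genuine methods aren't this small and lean on a lot of shared state, which is where it got tricky.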

And for the standards around process, code documentation etc., I'd suggest that if you do create an issue to discuss those things, then once they're agreed we put that info into the documentation in the Developing on Predbat section.

iainfogg commented 9 months ago

If you look at this PR (it's from and to branches on my repo), you'll see in the commit history that I did a few things:

This is an example of the workflow for how it is able to fix things (only for the things it can fix, of course).

For things it can't fix, they get a red cross against the commit, and the PR stays marked as broken (with all the details of what actually failed) until someone fixes them and pushes changes.

My hope is that plenty of issues (trailing whitespace especially) will get fixed automatically, meaning that applying these coding standards can be less work than it otherwise might be.

iainfogg commented 9 months ago

Also, when a job fails due to a code problem, if you have the GitHub app on your phone or wherever, you get notifications to it, including being able to see the detail of the failures.

And when a PR goes from failing to passing again, you get a notification from that :)

iainfogg commented 9 months ago

If you're happy too @springfall2008, as the repo owner you'll need to follow point 1 on this link to give pre-commit.ci permission to push its auto-fixes to the Predbat repo.

No pressure or rush though, just wanted to get it down while it was in my head.

fboundy commented 9 months ago

As an aside I think there’s an opportunity to refactor this with more classes but that may be PredBat 8…


iainfogg commented 9 months ago

How do we make it easier to ensure new sensors switches etc get included in the dashboard automatically?  Can we collect the definitions of these together in the code and generate the sample dashboard automatically?

Just as a side note, @gcoan you mentioned keeping the dashboard up to date - have you seen this before (the auto generating card) in the docs https://springfall2008.github.io/batpred/output-data/#displaying-output-data ?

I now use this; it saves me faffing around adding entities for new releases.

gcoan commented 9 months ago

Yes I have seen it and I do have it set up on a dashboard view, but I prefer a more logical grouping of things together rather than all the controls being in alphabetical order.

So I've stuck to manually maintaining the sample dashboard, which is a repeated pain, so maybe some time I'll be motivated enough to adopt the auto-generated version.

I did wonder whether the auto-generated version could be more structured into themes or sub-areas, e.g. all the iboost controls and sensors together; ditto car charging, SOC limits, the main control switches and notification switches, the balance inverter control switches etc.

This would, I think, be easier for new people to adopt than a long alphabetical list. I'm not sure, though, whether it could ever be made to work based on entity naming conventions while continuing to be automatically generated, which is clearly the big advantage.
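
For what it's worth, themed grouping might be possible with several auto-entities cards, one per theme, each filtering on part of the entity name. A rough sketch (the name pattern is a guess based on the predbat_ prefix, so it would need checking against the real entity ids):

```yaml
# Rough sketch only - one card per theme; the filter pattern is illustrative.
type: custom:auto-entities
card:
  type: entities
  title: Predbat - iBoost
filter:
  include:
    - entity_id: "*predbat_iboost*"   # scoop up all iboost-related entities
sort:
  method: friendly_name
```

Repeating that with different patterns (car charging, SOC limits, the main control switches) would give themed sections while keeping the auto-generation benefit.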


iainfogg commented 9 months ago

Hi @springfall2008 when you get a chance, can you let me know if you're happy with this?

And if you are, can you do point 1 on this link as the repo owner please?

https://pre-commit.ci/lite

I know I forgot to include the link earlier, sorry!

springfall2008 commented 9 months ago

Seems like a good idea to me, I'll see if I can get this enabled soon

iainfogg commented 9 months ago

Thanks @springfall2008

It won't affect other PRs until the PR I've done is merged in (it'll just affect my PR for now, as that's the only branch with the relevant config in it).

DJBenson commented 9 months ago

Just chiming in on the question about how to keep dashboards updated - I submitted the auto-entities contribution as I was finding myself looking at what was added and then adding that to the dashboard, which was time-consuming and not something your average user will want to do. The card has so far scooped up 100% of the new entities @springfall2008 has added, so unless/until new naming conventions are added or existing ones change, it should be fairly robust.

I like the idea of an auto-generated dashboard if such a thing were possible, but I'm not sure if it effectively doubles up on what the auto-entities card does (granted, my solution requires additional HACS components).

My biggest issue, even as a fairly technically competent user, is keeping up with changes to apps.yaml - my method is to create an apps.txt file alongside apps.yaml and then use the Studio Code addon to compare the two and merge in the changes I want. I think @springfall2008 has moved things out of apps.yaml and into HA entities and I'm all for that where possible, but I think the documentation would benefit from a section on best practice for keeping apps.yaml up to date, and perhaps a technical solution for making it more seamless for less experienced users - though I'm not suggesting that's easy, if it's possible at all.

iainfogg commented 9 months ago

I think the documentation would benefit from a section on best practice for keeping apps.yaml up to date, and perhaps a technical solution for making it more seamless for less experienced users - though I'm not suggesting that's easy, if it's possible at all.

I actually think a YAML-reading utility shouldn't be too hard to write, for flagging up new items and removing unused ones.
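
A very rough sketch of the idea - the file names are hypothetical, and a real tool would probably want to walk nested sections rather than just comparing top-level settings:

```python
# Rough sketch only - assumes both files are AppDaemon-style apps.yaml, each with one
# app whose settings we want to compare. Not the real tool, just the shape of the idea.
import sys
import yaml  # PyYAML


def load_app_settings(path):
    """Load an apps.yaml and return the settings dict of its first (only) app."""
    with open(path, "r") as f:
        data = yaml.safe_load(f) or {}
    # apps.yaml maps app-name -> settings; take the first app found
    return next(iter(data.values()), {}) or {}


def main(template_path, user_path):
    template = load_app_settings(template_path)
    user = load_app_settings(user_path)

    missing = sorted(set(template) - set(user))   # new settings the user hasn't got yet
    unknown = sorted(set(user) - set(template))   # settings no longer in the template

    for key in missing:
        print(f"NEW in template (consider adding): {key}")
    for key in unknown:
        print(f"Not in template (possibly obsolete): {key}")


if __name__ == "__main__":
    # e.g. python compare_apps.py apps.yaml.template apps.yaml  (file names are hypothetical)
    main(sys.argv[1], sys.argv[2])
```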

iainfogg commented 9 months ago

@DJBenson I've experimented with a few ways of packaging up UI stuff like charts, entity lists etc. and haven't found anything yet that feels like a good solution. It's complicated because HA isn't good at distributing UIs anyway, and made worse because HACS won't let you distribute a back-end project like Predbat along with front-end stuff (e.g. custom cards), meaning you'd then need two repos (one for front-end, one for back-end).

The messing around that I did was here, if you're interested (and @springfall2008 ).

I'd like to see it improve, I'm just not clear yet on the best solution.

iainfogg commented 9 months ago

@springfall2008 did you manage to do that pre-commit.ci/lite thing I mentioned earlier please?

iainfogg commented 8 months ago

Sorry to chase @springfall2008 - I can't roll out the PR for enforcing standards until you've done point 1 on this link as the repo owner, as I don't have permissions to do it:

https://pre-commit.ci/lite

springfall2008 commented 8 months ago

I think I've done this now, please check

iainfogg commented 8 months ago

I've now put this stuff live.

Do check the documentation that explains the process around pre-commit, and the commits it may make on your behalf to fix the things it's able to fix.

Also check failing builds for spelling issues, and either fix the spelling mistake or add the word to the dictionary if needed.

Also, I've added a badge to the top of the README to show if the build is currently passing on main branch.

springfall2008 commented 8 months ago

Nice work @iainfogg :)

One thing I'm unsure about: when I merged a PR from a branch, it showed that it hadn't been reviewed, so I hit 'bypass checks' (I'm not asking others to review my changes yet). That seemed not to run the CI checks either. How can I run CI but ignore the review requirement myself?

iainfogg commented 8 months ago

Thanks :) @springfall2008

It should run the CI checks regardless of approval status.

Here you can see that this one (which has been approved) has separate checks for approvals, CI checks and no conflicts:

(screenshot: merge-checks)

For yourself, to put something in without it being reviewed, you'll see a red cross saying the changes aren't approved, but you should still look for a green tick against 'All checks have passed'.

It takes a few mins to come through, and if your initial commit had errors that it knows how to fix, you have to wait for the second run to complete with the fixes included.

Is that not what you're seeing?

springfall2008 commented 8 months ago

It's working now; perhaps I tried it just before it was ready.

iainfogg commented 8 months ago

It takes maybe 5 seconds from creating the PR for the middle line to appear, at which point it should show it's running, or waiting to run. You're obviously working faster than a computer can ;)

gcoan commented 8 months ago

Hi @iainfogg, I created PR #408, edited in Codespaces, which was good. I corrected the errors Codespaces showed, so I was expecting the PR to go through OK. But I'm confused: I thought I saw an error reported in the PR commit scripts, yet the scripts carried on running and all passed (screenshot), but then I received an email telling me the checks had failed (screenshot).

And looking at the failure I have no idea what's wrong (screenshot).

Confused!

iainfogg commented 8 months ago

Have a look at https://springfall2008.github.io/batpred/developing/#coding-standards

In short, there was a problem and it failed, but it was a problem it knows how to fix so it did a commit to fix it, which then made it pass.

But you'll need to pull that commit into your environment.
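
In practice that just means updating your checkout of the branch so it includes the auto-fix commit, e.g.:

```shell
git pull   # fetch the commit that pre-commit.ci pushed to your branch
```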

Let me know if the docs need clarifying (or feel free to do a PR on them). The docs should make clear how you can run the checks before you push as well.

iainfogg commented 8 months ago

@gcoan and you've probably got notifications set for failed workflows only, which is why you didn't get a notification for the successful workflow.

gcoan commented 8 months ago

Thanks @iainfogg, it looks like it was a trailing space. I'm sure I saw trailing spaces being flagged before in Codespaces, but this one wasn't flagged and it isn't now (screenshot).

Re-reading the coding standards, yes, I see how it explains that some workflows can be auto-corrected, and this is what appears to have happened for this one. I can see the commit on my fork (screenshot).

As you suggested, it turns out my alerts were set to only notify on failed workflows, which is why I got confused about it (screenshot).

I've unticked that option now.

I've had a re-read of the dev standards doc. Creating a Codespaces environment works fine, I followed the instructions with no issues, I can see lint errors reported in Codespaces that I can fix as I go, and I can preview the doc pages to check formatting etc. All that is fine.

But I am struggling with 'running the checks locally'. I don't want this to turn into a tutorial on using Codespaces and GitHub, but the instructions are all written as if you are issuing command-line instructions to mkdocs, pre-commit, git, etc., and I don't see how to enter these in Codespaces. If I look at the extensions in Codespaces, the code spell check, markdown preview, markdownlint and eslint extensions are all installed, but the pre-commit extension isn't. I can install it from the marketplace, but then I can't see a way of manually running the checks, e.g. "pre-commit run --all-files".

iainfogg commented 8 months ago

@gcoan anywhere it talks about running a command like mkdocs or pre-commit, it means running them in the terminal - they are all command-line applications.

Running command-line applications and commands is simple - from the menu, choose Terminal, New Terminal, and it'll pop up an extra tab where you can run those commands and others directly. You can create multiple terminals too if you like (e.g. if you have mkdocs serve running in one, and you want to run pre-commit or whatever in another).
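
For example, once a terminal is open you can run the commands already mentioned in this thread (the pre-commit install line is optional and just makes the hooks run on every local commit):

```shell
pre-commit run --all-files   # run all the configured checks across the whole repo
mkdocs serve                 # preview the documentation site locally
pre-commit install           # optional: run the hooks automatically on each git commit
```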

mkdocs and pre-commit are both installed when your Codespace is created, but they aren't extensions - they are programs installed into the VM (extensions are extra UI options and features specifically for VS Code).

If you look at https://github.com/springfall2008/batpred/blob/main/.devcontainer/devcontainer.json you can see how the Codespace VM is configured. There are VS Code extensions which are installed, but also towards the end there's a pip install command which installs anything listed in https://github.com/springfall2008/batpred/blob/main/requirements.txt .

You'll see both mkdocs and pre-commit are in there, and will be installed when the Codespace VM is created.

However, remember that when you create a Codespace, it does all the install stuff there and then, based on whatever is in those two files at that time, so if you created your Codespace before I added pre-commit to requirements.txt you won't have it installed - but you can just delete your Codespace and create a new one, as long as you do it off a branch which has the up to date packages to install in it.

Does that make sense? Feel free to add a note about those commands being run from a terminal in the 'Running locally' section if you think it would make it clearer.

gcoan commented 8 months ago

Thanks @iainfogg for the pointer, I thought it might be something like that. I've added a few more details about opening and running commands in the Terminal to PR #415.

iainfogg commented 8 months ago

Looks good, thanks @gcoan