CenterForTheBuiltEnvironment / comfort_tool

CBE Thermal Comfort Tool for ASHRAE-55
http://comfort.cbe.berkeley.edu

Addressed issues #10 and #9 #13

Closed — FedericoTartarini closed this pull request 5 years ago

FedericoTartarini commented 5 years ago

I have addressed the following issues:

chriswmackey commented 5 years ago

@FedericoTartarini ,

Thank you very much for this PR and for fixing + documenting the Python code. As the original author of the Python contribution, I can say that the Python portions of this PR should definitely be merged in. I had always assumed that I wasn't getting the SET to perfectly match on that decimal place because of a rounding error somewhere. Now that I know the exact problem, I can fix it in a couple of other places where I have the original code.

Speaking of this, I realize that I should note something before we end up with duplicated efforts for improving the Python code: over the last few months, I have been doing a lot of cleaning and documentation of the original Python comfort models, which you can see in the ladybug-comfort repo. In addition to adding docstrings (pardon my poorly-documented old code here) and bringing the code up to PEP8 standards, I have also put in place tests and automated continuous integration that run whenever someone sends a new PR to the repo. We've also implemented automated deployment of the library to PyPI so that people can easily install it and build Python applications with it. Online documentation of the library will be coming soon.

The current tests on the ladybug-comfort repo are not as good as what you have added here for SET (really great stuff with the SET validation matrix). Still, I am continuing to improve them and, over the next few months, I will be adding several other comfort models beyond the Standard-55 ones, including Physiological Equivalent Temperature, Predicted Heat Strain, and several others that are more suited to outdoor comfort.

Also, if you can't tell from the "Usage" section of the ReadMe, the ladybug-comfort library has a lot of built-in capabilities for calculating thermal comfort from streams of climate data and the results of building energy models. You can already see some of the benefits of this in how much more useful the SolarCal model becomes when you have direct access to sun positions derived from EPW data. Still, a lot of the work I am doing over there departs from the strict definitions of Standard-55, and that is a major reason why I didn't push it all directly to the CBE Tool repo here.

I had the intention of coming back here and improving the Python code but you clearly beat me to it and, in the case of SET, you did a better job than I would have. So I would like to hear your thoughts on this proposal for moving forward:

Given that the ladybug-comfort library is incorporating a lot of stuff beyond Standard-55 and EN-15251, what if we keep the Python code here strictly in accordance with these two standards? This would mean that we should probably take out the UTCI model from the Python code here, and I would need to send a PR that edits the adaptive comfort models so that they strictly obey Standard-55 and EN-15251 (I took a lot of liberties with the adaptive comfort capabilities I originally built in here). However, this way the Python code here serves a clear purpose that is distinct from ladybug-comfort and, for the comfort models that overlap between the two repos, we can share some code and testing frameworks. So I will push things like the local discomfort models here at the same time that I add them to ladybug-comfort (this should be in the next couple of months).

If this proposal sounds good to you, let me know if you prefer to take out the UTCI model with this PR or if you would like me to send a PR afterwards that does so along with the adaptive comfort edits.

chriswmackey commented 5 years ago

FYI, @FedericoTartarini ,

I implemented your fix over in the ladybug-comfort repository. I also implemented automated checks over there with the same ASHRAE-55 2017 validation table that you are using (thanks again for such a great idea). I think I will also implement checks against the validation tables given in Standard-55 for PMV/PPD and SolarCal. So we can also implement the checks here after I convert the tables to CSV and we merge in this PR.
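Once the tables are converted to CSV, a check along these lines could be driven directly from them. A minimal sketch using only the standard library (the column names and values below are hypothetical placeholders, and the table is inlined here instead of read from a file):

```python
import csv
import io
import math

# Hypothetical validation table; in practice this would be a CSV file
# converted from the Standard-55 validation tables.
table = io.StringIO("ta,rh,expected_set\n25.0,50,23.8\n")

for row in csv.DictReader(table):
    expected = float(row["expected_set"])
    calculated = 23.81  # stand-in for the real SET function's output
    # Match to within the one decimal place given in the standard.
    assert math.isclose(calculated, expected, rel_tol=1e-2)
```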

Also, if I didn't say it earlier, I am happy to help implement automated continuous integration of the Python code on this repo if that is desirable. I realize it might be odd, though, if people send a PR with only JavaScript and then automated checks run for the Python code.

FedericoTartarini commented 5 years ago

Hi, the rounding error is unfortunately still there. If you run the pytest units (the testing data come from the ASHRAE 55 standard) you will notice that the tests fail for the last 5 entries; however, the error is now negligible. The main issue before was that in .js the do {} while () loop runs at least once, while in Python the body of a while loop runs only if the condition is satisfied. That is why I proposed initializing TCL_OLD = False. Congratulations on the great effort in putting this code together.
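To illustrate the difference with a minimal sketch (not the actual SET code): JavaScript's do {} while () always executes its body once, so a faithful Python translation needs a sentinel initial value that forces the first iteration, as the TCL_OLD = False fix does:

```python
def iterate_until_converged(update, guess, eps=1e-6):
    """Emulate a do/while loop: run `update` at least once, then
    repeat until the change between iterations drops below eps."""
    previous = None  # sentinel (same role as TCL_OLD = False in the PR)
    current = guess
    while previous is None or abs(current - previous) > eps:
        previous = current
        current = update(previous)
    return current

# Example: Newton's iteration converges to sqrt(2) ~ 1.41421
root = iterate_until_converged(lambda x: (x + 2.0 / x) / 2.0, 1.0)
```

Without the sentinel, a literal `while abs(current - previous) > eps:` translation can skip the body entirely when the initial values already satisfy the exit condition, which is exactly the bug described above.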

I was not aware of the great work you have already done in the ladybug-comfort library.

I completely agree with your proposal. I would keep in this repository only the functions that are extracted from the EN and ASHRAE standards, and I would add a reference to the ladybug code in case people want to use the UTCI function, for example. I would also keep this pull request as it is and, later on, whenever you have time, you can send another PR with further edits (e.g., remove UTCI, rename functions as you did in the ladybug library, etc.).

In addition, in my PR I have added some TODOs. I know it may not be good practice to leave them there, but I wanted to discuss them with you. For example, I suggested returning JSONs rather than lists, but I noticed that you have already done that in the ladybug library. I think the best way forward is for you to send a pull request to this repository that includes all the latest changes you have made in the ladybug library, since my only contribution was to spot the issue caused by the while loop.

Regarding your last comment (testing), I am happy for you to remove or keep the testing unit. I am an engineer, not an expert in coding, so I do not know what the best practices are in this field.

Finally, I believe we should also add a button/link to the CBE comfort tool that people can use to download the Python code. Currently, users cannot access the Python functions from the website, and the .py functions are hidden in the GitHub repository.

chriswmackey commented 5 years ago

That is odd that you don't get the values to match within the one decimal place given in the validation table. You can see from the way that I wrote the test over in ladybug-comfort that I check for matching to within a relative tolerance of 1e-2, which covers the one decimal place given. Perhaps it's either a difference in how numpy.round works in relation to pytest.approx, or there's another variable in the SET calculation that I updated in my code over there due to a change in Standard-55 but forgot to update here.
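The two comparison strategies can in fact disagree near a rounding boundary. A small stdlib sketch with hypothetical numbers (math.isclose with rel_tol behaves like pytest.approx with rel; the values below are illustrative, not from the validation table):

```python
import math

calculated = 24.351  # hypothetical SET output
tabulated = 24.3     # hypothetical value from the validation table

# A relative-tolerance check (like pytest.approx(tabulated, rel=1e-2))
# passes, because the difference is well within 1%:
assert math.isclose(calculated, tabulated, rel_tol=1e-2)

# ...but comparing after rounding to one decimal place fails, since
# 24.351 rounds to 24.4. A round-then-compare test could therefore
# fail where an approx-style test passes:
assert round(calculated, 1) != tabulated
```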

In any case, yes, you are completely right that I did not implement the while loop correctly. Starting it with False solves it. On a side note, I realized that you won't need the i variable in the Python version. It doesn't hurt to have it, but it's leftover from when I originally translated the code from JavaScript. In any case, it's a huge step forward and, just to give you full credit for all of the updates you made, you also updated the TempCoreNeutral constant, which I realize was outdated in the version here before your edits.

Glad that you agree with the plan to make this a repo for only EN and ASHRAE models. I will prepare a PR to send once @thoyt approves this one and we merge it in. Let's also get Tyler's thoughts on whether automated Python testing is a good fit for this repo since it might be odd given that this repo is predominantly javascript and running Python tests locally isn't that much of a burden.

As for adding TODOs in the code, that is totally a good practice. If anything, it gives new potential contributors ideas for improvements they could make. Having methods that can return JSON objects would be very practical for full-stack developers who want to use the code on the back ends of web applications. So, yes, I support it, but I would suggest one change from exactly what you mention: I think we should still have core comfort functions that return Python objects (either lists or dictionaries), since people may be using the Python code in many places beyond a web back end. Then, we can have separate methods that call the core comfort functions, convert the returned dictionary into a JSON object using Python's built-in JSON module, and return the JSON.

Also, to give an example of a place "beyond a web back end" where the Python code is currently being applied: I use the code in a CAD plugin that runs IronPython (yes, IronPython apparently still gets used) and, to get the code to work there, we have to keep it very generic. This means I have written it in a way that works for both Python 2 and Python 3, and it doesn't use any modules beyond the ones built into Python (so no modules like numpy or pandas). For the record, it is totally fine that you used those modules in the tests you wrote, but I just wanted to clarify that we should try to keep the core comfort functions free of them.
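A sketch of that split, with hypothetical function names: the core function returns a plain dict (no external dependencies, so it stays compatible with Python 2, Python 3, and IronPython), and a thin wrapper serializes it with the built-in json module for web back ends:

```python
import json

def comfort_summary(air_temp, rel_humidity):
    """Hypothetical core comfort function: returns a plain dict so it
    works anywhere (no numpy/pandas, Python 2 and 3 compatible)."""
    return {"air_temperature": air_temp, "relative_humidity": rel_humidity}

def comfort_summary_json(air_temp, rel_humidity):
    """Thin wrapper for web back ends: same result as a JSON string."""
    return json.dumps(comfort_summary(air_temp, rel_humidity))
```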

Lastly, to your idea for a button on the CBE Tool, that would be wonderful! Knowing that the Python might be downloaded by people also gives me an incentive to maintain it better than I have the past couple of years. So I definitely support it. If we really want to make it accessible to others, I have one more suggestion, which is to package it into its own module and deploy it to PyPI. I know that PyPI is how most Python developers install external packages (I imagine that might be how you installed pytest, numpy and pandas) so, above all else, I think software developers would appreciate it.
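For reference, packaging it for PyPI could start from a minimal setup.py; this is only a sketch, and the package name, version, and metadata are placeholders, not anything decided here:

```python
# Hypothetical minimal setup.py for publishing the Python models to PyPI.
from setuptools import setup, find_packages

setup(
    name="cbe-comfort-models",   # placeholder name
    version="0.1.0",             # placeholder version
    description="Python comfort models from the CBE Thermal Comfort Tool",
    url="https://github.com/CenterForTheBuiltEnvironment/comfort_tool",
    packages=find_packages(),
    python_requires=">=2.7",     # keeps the Python 2 / IronPython goal
)
```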

thoyt commented 5 years ago

I think it would be a nice project to turn the python module into a proper python package. I think it would be fine to run python tests in an automated way, but doesn't seem too important given the small amount of commit activity. I think it would be best to document how to run the tests and keep it simple.

FedericoTartarini commented 5 years ago

Hi, issue #10 was caused by the while loop: in .js the do {} while () loop runs at least once; however, that is not the case in Python. Issue #9 was caused by the fact that when the user changed the velocity value, the function updateBounds() was called, and this function called redrawBounds(coolingEffect), which then extended the comfort region as a function of the cooling effect provided by increased air velocities. I have fixed and addressed all your comments; however, I have not changed the following:

My latest commit contains all the latest changes and addresses all the issues discussed above. Thank you very much for your time and help.

thoyt commented 5 years ago

@FedericoTartarini Thanks for making the changes!

I don't have an issue with the style changes themselves; it just becomes really hard to find the substantive changes to give feedback on. I think the IDE is a bit aggressive and it makes a lot of noise in the diff. It also makes it harder to find the substantive changes later down the road if we need them. But enough about that :)

I'll go ahead and merge and deploy this.

FedericoTartarini commented 5 years ago

I completely agree with you that it is hard to find the substantive changes to the code; my fault, I should have used another approach. A better approach on my side would have been to first send a pull request with only the formatting changes and then create a separate pull request for each specific issue. Sorry for that. I have, however, only realized this now while typing this message, and I have to apologize in advance: I have already addressed issue #3, but again I did not follow the aforementioned approach, so you may see a lot of changes in my pull request, the majority of which are formatting changes. Sorry for that, but now it is hard for me to revert them. The only thing I can do is add a detailed description to the pull request explaining in more detail what I have done, so it will be easier for you to see my contributions.

Perhaps wait a moment before deploying this so we can include the new pull request as well. :)

Thank you again for your help and for your time.

thoyt commented 5 years ago

@FedericoTartarini No worries. I think if you merge master into your other branch with #3, it will remove most of the formatting changes from the diff. I already deployed, but it's no big deal; I can deploy again after #3 is merged.