Closed alan-churley closed 1 year ago
The issue is still current so contributions are welcome.
This is a good issue for newcomers, as it allows you to understand how the utility works in depth, as well as for those looking to sharpen their Jest skills.
If anyone is interested in picking this up, please leave a comment here, then discuss your findings / proposed changes before opening a PR.
@dreamorosi Interested, I will look into this & share my findings.
Hi @arnabrahman, thank you for your interest in this issue ✨!
Looking forward to reviewing / collaborating on your proposal and findings in this issue.
If you have any questions while you look into it, please let me know either here or on our Discord.
Hello @dreamorosi, sorry for replying late to this.
I think this particular issue was first mentioned here by @saragerion. After looking at the unit tests for both `Logger` and `Tracer`, my understanding of what she meant is this: in the `Logger`/`Tracer` packages, all the tests are written around public functions, whereas in `Metrics` we follow a pattern of testing a particular behavior/feature of `Metrics`, rather than taking a single function and testing every case of that function.

So, if we want `Metrics` to follow a pattern/structure similar to `Logger`/`Tracer`, we would take a function and test every case of that function. For example, for the `addMetric` function, we want to test:

We can take this approach and apply it to every public function of `Metrics`.

These are my findings. I might be wrong about this, so let me know whether I am heading in the right direction. Thanks 🙂
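To make the function-centric style concrete, here is a minimal, hypothetical sketch. The `Metrics` class below is a toy stand-in (the real `addMetric` signature and validation rules live in the Powertools source), and plain `console.assert` calls stand in for Jest's `describe('addMetric', ...)` / `it(...)` / `expect(...)` blocks.

```typescript
// Toy stand-in for the real Metrics class -- NOT the actual Powertools code.
type StoredMetric = { unit: string; value: number };

class Metrics {
  private stored: Map<string, StoredMetric> = new Map();

  // Hypothetical simplified signature; the real addMetric differs.
  public addMetric(name: string, unit: string, value: number): void {
    if (name.length === 0) {
      throw new Error('Metric name cannot be empty');
    }
    this.stored.set(name, { unit, value });
  }

  public getStoredMetrics(): Map<string, StoredMetric> {
    return this.stored;
  }
}

// Function-centric tests: take one public method, cover every case of it.
const metrics = new Metrics();

// Case 1: a valid metric is stored.
metrics.addMetric('successfulBooking', 'Count', 1);
console.assert(
  metrics.getStoredMetrics().get('successfulBooking')?.value === 1,
  'valid metric should be stored',
);

// Case 2: invalid input is rejected.
let threw = false;
try {
  metrics.addMetric('', 'Count', 1);
} catch {
  threw = true;
}
console.assert(threw, 'empty metric name should throw');
```

In Jest, each "case" above would become one `it(...)` block under a `describe('addMetric', ...)` grouping, which is the structure the `Logger`/`Tracer` suites follow.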
Hi @arnabrahman thank you for taking the time to look into this!
Your findings are spot on, and this is indeed the direction that I think we should be going with this issue.
On top of the points you discussed above, as part of this issue I think we should:

- follow the same conventions used in the `Logger`/`Tracer` unit tests (i.e. comments, spacing, etc.)

If you're still interested in working on this issue, feel free to start working on a PR. I'll be happy to continue the conversation there.
@dreamorosi Yes, I am very much interested in working on this and contributing. I will start on a PR soon and will also follow your suggestions.
Great, I have assigned the issue to you and changed its status to "Working on it".
If you have any questions during the implementation, don't hesitate to reach out here or on Discord!
Having had the opportunity to introduce new functionality for high resolution metrics, and to write tests before extending the Metrics implementation, I made the observations described in issue #1373. The observations relevant to tests are points 3, 4, and 6.

A note on observation 4: as a contributor, I wish I had a tighter feedback loop when extending the Metrics implementation. In my observations I described one possible solution: have the Metrics tests that exercise the public methods target a `MetricsInterface` abstraction rather than the concrete `Metrics` implementation.
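A rough sketch of what testing against such an abstraction could look like (the `MetricsInterface` shape, the contract-test helper, and `FakeMetrics` are all hypothetical names for illustration, not existing exports of the library):

```typescript
// Hypothetical interface that the concrete Metrics class would implement.
interface MetricsInterface {
  addMetric(name: string, unit: string, value: number): void;
  serialize(): Record<string, unknown>;
}

// Contract tests written against the interface can exercise ANY
// implementation, which tightens the feedback loop when extending it.
function runAddMetricContractTests(factory: () => MetricsInterface): boolean {
  const metrics = factory();
  metrics.addMetric('orders', 'Count', 2);
  const output = metrics.serialize();
  return output['orders'] === 2;
}

// A minimal fake standing in for the concrete Metrics class.
class FakeMetrics implements MetricsInterface {
  private data: Record<string, number> = {};
  addMetric(name: string, _unit: string, value: number): void {
    this.data[name] = value;
  }
  serialize(): Record<string, unknown> {
    return { ...this.data };
  }
}

const passed = runAddMetricContractTests(() => new FakeMetrics());
console.assert(passed, 'contract tests should pass for any implementation');
```

The same `runAddMetricContractTests` helper could then be invoked with a factory for the real implementation, so extending the concrete class never requires rewriting the tests.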
Hi @arnabrahman, just wanted to follow up on this issue and ask if you were still working on this. If there's any question that we can help answer please let us know!
Yes, I am currently working on this. I have already made progress and so far have reached the 100% test coverage you mentioned. Now I need to refactor a few things, and then I will be finished. Sorry it's taking a bit of time. And yes, I will let you know if I run into any problems. Thanks @dreamorosi
Hey thank you for getting back to me so quickly, there's no rush at all - I just wanted to check!
FYI: I'm about to open a PR that adds 4 test cases related to a new feature on Metrics, hope it doesn't cause too much mess on your branch!
Thank you!
Ok, I have a question. After I rewrite all the tests for Metrics, what will happen to the old tests? Should I keep both old tests & new tests or should I only keep the new ones?
If coverage of the new tests is 100% and the structure/style/etc. is in line with the other modules then we could replace the existing ones.
I'll be able to give you a concrete answer once you open a PR though.
I'm about to merge the linked PR as the scope of this issue (as defined here) was to standardize the tests according to what is being done in the other utilities of the project.
I will provide a longer answer to all your points in the other original issue.
Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
Description of the feature request
Problem statement: Metrics unit tests are currently not structured the same way as the Logger module's.
In the Logger, only public methods are tested, following the recommendations here: https://github.com/goldbergyoni/javascript-testing-best-practices/#-%EF%B8%8F-14-stick-to-black-box-testing-test-only-public-methods
Opening a ticket to track discussion and work on improving this.
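As a tiny illustration of the black-box principle linked above, a test asserts only on observable output and never reaches into private state. The `Counter` class here is a made-up example, unrelated to the actual Powertools modules:

```typescript
// Made-up example class; black-box tests never touch its private field.
class Counter {
  private count = 0; // internal state: off-limits to tests
  public increment(): void {
    this.count += 1;
  }
  public report(): string {
    return `count=${this.count}`;
  }
}

// Black-box test: drive the public API, assert on the public output.
const counter = new Counter();
counter.increment();
counter.increment();
const report = counter.report();
console.assert(report === 'count=2', 'report should reflect two increments');
```

If the internal representation of `count` changed (say, to a `Map` of named counters), this test would keep passing as long as the public behavior is preserved, which is the point of the recommendation.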
Related issues, RFCs
https://github.com/awslabs/aws-lambda-powertools-typescript/pull/102