floroz closed this pull request 3 years ago.
@jescalan PR is ready, let me know your feedback 😄
Hey @floroz! First of all I want to thank you again for all the work you put into this project. It is amazing to see such a drive to contribute to open source and community tooling, and I deeply appreciate it.
That being said, I feel a little uncomfortable about these changes in general, because I am not convinced that they further our goals with this project of making it small, simple, stable, and easy to maintain and contribute to. Originally, the library was a single file that was maybe 100 LOC and quite easy to read through, update, and understand. After this set of changes, it is now made up of a large network of interconnected files that do exactly the same thing. Overall, this represents an increase in complexity, and a decrease in understandability and maintainability.
Of course, that tradeoff would be worth it if there was another aspect that was dramatically improved as a result. I think the primary argument for this is that adding tests makes the library much more stable and reliable, and that in theory it needs to be broken into many files to have good test coverage (questionable, but let's assume), so the changes are worth the hit we take to clarity and maintainability, and the extra LOC introduced.
However, I am not entirely convinced that the tests added here really add that much stability to the library. For example, one of the things you pointed out that could cause this library to break is that next.js could update its internals and break things. A test to cover whether that has happened or not would be a notable improvement. But I don't think any of the tests added here actually cover this case. Here, we mock the next function. Here, we mock the internal reloader object. And here, we mock the entire reload controller. I may have missed it, and I do apologize if I have, but I think even with these added tests, nextjs internals could change and break this library, and all the tests would still be passing.
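To make the concern concrete, here is a generic sketch (not the actual test code from this PR) of why a mock-based unit test stays green no matter what Next.js does internally:

```js
// Generic illustration only, not the PR's tests: once `next` itself is mocked,
// nothing below touches real Next.js internals, so a breaking change there
// would leave these assertions passing.
jest.mock('next', () =>
  jest.fn(() => ({
    prepare: jest.fn().mockResolvedValue(undefined),
    getRequestHandler: jest.fn(() => (req, res) => res.end('ok')),
  }))
)

const next = require('next')

test('boots next in dev mode', async () => {
  const app = next({ dev: true })
  await app.prepare()

  // These assertions only verify our own wiring against the mock.
  expect(next).toHaveBeenCalledWith({ dev: true })
  expect(app.getRequestHandler).toBeDefined()
})
```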
I do think there are some situations in which unit tests can be helpful tools, but I'm not sure I see this situation as one of them. What this library is severely lacking is an integration test -- a test that will tell us if something is actually wrong or broken. What this PR adds is a lot of extra code and complexity, but as far as I can see, there is no boost to the stability or reliability of the library overall.
I should note that I am just reviewing the code, and surely you have a much deeper knowledge of what's going on given that you wrote it. So it's for sure 100% possible that I am missing something here that would really be a large benefit. If so, I would really love it if you'd counter-argue my points above; I am very open to it, and I apologize for misunderstanding.
Aside from the breaking apart of the files and the testing, I think the addition of the linting setup is definitely a great value add, and we should hold on to that regardless. I guess the challenge that I'd put in front of you is whether you'd be able to keep tinkering with this and get it to a point where we have added the most possible value with the least possible complexity. If it were me, I'd probably try to keep the implementation as-is, and see if I could get a bona fide integration test rolling. During initial development, I set up a manual fixture, as you can see, where I could test it, but I didn't have time to go through the process of figuring out how to fully automate that test.
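For what it's worth, a fully automated version of that manual check might look roughly like this; the bin path, fixture location, and port below are assumptions and would need to match the actual repo layout:

```js
// Rough sketch of an automated integration test: no mocks, just the real binary
// running against a fixture Next.js app. Paths and port are assumptions.
const { spawn } = require('child_process')
const http = require('http')
const assert = require('assert')

function waitForServer(url, attemptsLeft = 30) {
  return new Promise((resolve, reject) => {
    const attempt = (left) => {
      http
        .get(url, (res) => {
          res.resume()
          resolve(res.statusCode)
        })
        .on('error', () => {
          if (left <= 0) return reject(new Error('server never came up'))
          setTimeout(() => attempt(left - 1), 1000)
        })
    }
    attempt(attemptsLeft)
  })
}

async function main() {
  // Start the CLI exactly as a user would, watching the fixture's content dir.
  const child = spawn('node', ['./bin/next-remote-watch', './test/fixture/content'], {
    stdio: 'inherit',
  })

  try {
    // If Next.js internals change in a breaking way, startup or serving fails
    // here -- exactly the kind of regression that mocked unit tests miss.
    const status = await waitForServer('http://localhost:3000')
    assert.strictEqual(status, 200)
    console.log('integration test passed')
  } finally {
    child.kill()
  }
}

main()
```

Something along these lines could then run in CI against the existing fixture.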
Hi Jeff,
Thanks for your reply and the thorough review.
Contributing to others' projects is always a great learning experience for me, and an opportunity to work outside the standards and conventions of my day-to-day work.
I understand your concern about readability and LOC.
Having everything in a single file might help the reader glance back and forth between lines, but it is ultimately a design choice with a significant trade-off: we end up with procedural code without any encapsulation, where it's hard to tell at first glance what is responsible for what.
If we separate the responsibilities within the binary, we can distinguish several separate entities: the CLI argument parsing, the express server setup, and the reload logic.
Keeping all of those things in the same file might make it easier to read through or to make a quick change, but it ultimately impairs our capacity to extend the functionality with confidence that a change to one line will not have an impact ten lines below.
The only way to overcome this problem is to separate the code into modules, each representing a single responsibility.
Maintaining a degree of simplicity and readability during this process is the developer's responsibility. I think things could be further improved here, and I would welcome any suggestions you might have on how to better structure this project.
Regarding the unit tests, I think some of them bring value, as explained with the example of the commander test, or the test making sure our function that creates the express server does what we expect it to do.
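As an example of the kind of unit test I have in mind (createExpressServer is just a placeholder name here, the real export may look different, and supertest would be an added dev dependency):

```js
// Placeholder names: `createExpressServer` stands in for whatever the refactored
// module actually exports, and supertest is assumed as a dev dependency.
const request = require('supertest')
const { createExpressServer } = require('../src/server')

describe('createExpressServer', () => {
  it('returns an express app that serves the configured route', async () => {
    const app = createExpressServer()
    await request(app).get('/').expect(200)
  })
})
```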
However, I do agree that we need an integration test that gives us confidence that the library is working end to end. I have a few ideas about that.
What I would recommend is perhaps having a synchronous chat over Slack/Zoom in the next week, to better coordinate our intent, build a shared understanding of what this project should look like, and map the way forward.
Let me know :)
Sounds good - let's find a time to meet. Could you reach out via email so we can set it up? You can find my email here 😁
Awesome, I've sent you an email 😄
hey @jescalan, I have reduced the scope of this PR to only introduce eslint and prettier, and to update the package.json with the peer dependency range.
Will introduce integration testing in the next PR 😄
First contribution to this library, congrats! 🎉🎉🎉🎉
This PR introduces linting and upgrades some packages in next-remote-watch:
- Adds next as a peer dependency, supporting versions greater than 10.
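For reference, a "greater than 10" range would typically be expressed in package.json along these lines (the exact range used in the PR may differ):

```json
{
  "peerDependencies": {
    "next": ">=10"
  }
}
```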
Static Analysis