joshkehn opened this issue 10 years ago
I agree. At a minimum, we could set up a workflow where at least two of us review and sign off on merges.
Goals are important not just for us but for other users too. We should provide structure for users who want to integrate the latest changes from both Tastypie and Swagger UI.
I think that we need tests before anything else - otherwise it's almost impossible to properly validate any contribution.
What we can do is define a release cycle - once a month or something - and anything we want to see fixed, improved, or implemented should be created as an issue; then at the start of the month we can flag whatever we want to plan for the next release.
Then it should be up to people to submit pull requests fixing tickets from the backlog or newly discovered bugs.
Also I think that providing a failing test case with every bug report is a must. Along the same lines, any new feature should come with test coverage.
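For example, a minimal sketch of what such a failing test case could look like, assuming a standard Django test setup (the URL, class name, and assertions are hypothetical placeholders, not the project's actual routes):

```python
# Hypothetical sketch only: a small failing test attached to a bug report.
from django.test import TestCase


class ResourceListingContentTypeTest(TestCase):
    def test_resource_listing_is_json(self):
        # Imagine a report that the Swagger resource listing comes back
        # with the wrong content type; this test would fail until fixed.
        response = self.client.get('/api/doc/resources/')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['Content-Type'], 'application/json')
```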
We also need to follow Swagger's development to be able to keep up with it.
I agree on most of it, although I don't think we can demand a failing test case. If someone finds a bug, I'd rather not discourage them from reporting it by requiring a pull request with a failing test. It'd be nice to encourage that, but we can't expect it from all users.
> Also I think that providing a failing test case with every bug report is a must.
Strongly disagree. While this is a developer-centric library, a failing test case as a requirement for a bug report discourages people from submitting bugs. Bugs are not always straightforward, and test cases aren't always easy. Additionally, requiring test cases could bloat the test suite with tests that each demonstrate one small bug rather than exercising the project as a whole. We should aim for general code coverage of basic usage.
> I think that we need tests before anything else - otherwise it's almost impossible to properly validate any contribution.
Tests would be great. Let's break the project down into units we can test separately before we start linking the pieces together. Maybe an add_tests branch we could use?
> What we can do is define a release cycle - once a month or something - and anything we want to see fixed, improved, or implemented should be created as an issue; then at the start of the month we can flag whatever we want to plan for the next release.
I think monthly is a bit fast for this project. It's not used that heavily, and the core team is relatively small right now. Let's set up some milestones and figure out what features we want to add or change before we commit to a release date. After a couple of iterations we should have a better sense of what we can target and can start putting dates on milestones.
Yep, I agree on almost everything, @joshkehn.
First, I meant that providing a failing test case should be "recommended" in the contributing document - I totally agree we cannot and shouldn't force people to provide one.
I'm personally for small tests that cover each bug - otherwise, how would we keep those bugs from popping up again? Testing the project as a whole is one thing; preventing bugs from reappearing is another.
If we provide good guidelines and good test examples, it shouldn't be too hard to keep them clean.
The add_tests branch is definitely a good idea. We can start by creating test methods without implementations to sketch out what we want to see tested.
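For example, a skeleton could just name the behaviours and skip them until they're implemented (a sketch only - the method names below are placeholders, not the project's actual API):

```python
# Hypothetical test skeleton for the add_tests branch: name the
# behaviours we want covered, then fill in the bodies later.
import unittest


class TastypieSwaggerSkeletonTests(unittest.TestCase):
    @unittest.skip("TODO: implement on the add_tests branch")
    def test_resources_are_discovered_from_the_api(self):
        pass

    @unittest.skip("TODO: implement on the add_tests branch")
    def test_tastypie_field_types_map_to_swagger_types(self):
        pass

    @unittest.skip("TODO: implement on the add_tests branch")
    def test_endpoint_parameters_are_documented(self):
        pass


if __name__ == '__main__':
    unittest.main()
```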
Good to see this is moving forward btw :-)
One more huge point: we need to improve the documentation and cover every feature we have - I don't think that's the case right now.
:+1:
@krimkus @concentricsky @johnraz
Let's set up a contributing document for people. I just merged #51 directly into master but don't feel good about that as a common practice. We need some milestones, possible test coverage (#19), and some rules of the road, i.e. what we want this library to provide and what the goals of the project are.