@ewilderj Sorry to mention you, but I don't know if you already have a ticket policy for this repo.
cc @martinwicke @dksb to comment on this issue. We are certainly aware of it. Perhaps Deepak has an update.
@bhack Can you please share a few issues for which you are getting nagging messages? We are revisiting the feature to see whether any kind of nagging makes sense or not.
@dksb This is the best sorted approximation of the query that I can produce right now (of course it doesn't analyze the frequency of Nagging Assignee comments): https://github.com/tensorflow/tensorflow/issues?page=2&q=is%3Aopen+Nagging+-label%3A%22stat%3Acontributions+welcome%22+sort%3Acreated-asc
Please share an improved one if you can compose it.
For example, at the top of that query's results:
Nagging Assignee @shivaniag: It has been 479 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
I think that when we have 479 days of "Nagging Assignee" something needs to happen :smile:
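Something like this, just as a rough sketch, could reproduce the query above through the GitHub search API instead of the web UI (it assumes a personal access token exported as `GITHUB_TOKEN`, and matching the word "Nagging" in the issue text is only an approximation of the bot's behaviour):

```python
# Rough sketch: approximate the "nagging" backlog through the GitHub search API.
# Assumes a personal access token exported as GITHUB_TOKEN; matching the word
# "Nagging" in the issue text is only an approximation of the bot's behaviour.
import os
import requests

query = (
    'repo:tensorflow/tensorflow is:issue is:open '
    'Nagging -label:"stat:contributions welcome"'
)

resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": query, "sort": "created", "order": "asc", "per_page": 50},
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    },
)
resp.raise_for_status()

# Oldest issues first, each with its creation date and URL.
for item in resp.json()["items"]:
    print(item["created_at"], item["html_url"], item["title"])
```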
@av8ramit might also be interested in this.
Two consecutive Nagging Assignee comments, at 153 and 158 days: https://github.com/tensorflow/tensorflow/issues/16087
It has been 240 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
We will turn off nagging on the issue soon and then start emailing assignees directly.
@av8ramit Good. But I think we still need some way to know the public status of an issue. Having 270 days of internal email reminders will not solve the problem in general. As you can see in my query, I filtered out "Contributions welcome", since those are the issues that could "acceptably" stay open forever without activity. For the others, though, we need to keep up a triage/management process. It is not just a question of the public spam/noise from the notifications, but mainly of the abandonware feeling they create.
I think this is a problem that @dksb is looking at. In my humble opinion, a project this large cannot escape the monotonically growing number of issues, however hard it tries to close them all. However we can certainly manage our messaging around them, I suppose. Can we characterize all these outlying issues in meaningful buckets?
@ewilderj I agree with you, and that is what I've tried to do with the query. The original idea was to have the bot apply a special label so that we can run a periodic review (re-triage?) activity on this cluster, because I think these issues often just need a re-evaluation, a closure, a stale notification (just a check that the users are still around), or a reassignment.
I have this query, but you can probably come up with a better one.
So I think that if we can label the cluster, we can periodically re-evaluate and act on the specific issues that were caught by the Nagging Assignee (or similar) logic. What do you think?
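To make the idea concrete, here is a rough sketch of the labeling step (the `re-triage` label name is hypothetical and would have to be created in the repo first, and the `GITHUB_TOKEN` used here would need write access to issues):

```python
# Rough sketch of the clustering idea: find every open issue already flagged
# by the Nagging Assignee bot and add a hypothetical "re-triage" label to it,
# so the whole cluster can be reviewed periodically.
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
}

query = (
    'repo:tensorflow/tensorflow is:issue is:open '
    '"Nagging Assignee" -label:"stat:contributions welcome"'
)

search = requests.get(
    f"{API}/search/issues",
    params={"q": query, "per_page": 100},
    headers=HEADERS,
)
search.raise_for_status()

for issue in search.json()["items"]:
    number = issue["number"]
    # Add the cluster label; existing labels on the issue are preserved.
    resp = requests.post(
        f"{API}/repos/tensorflow/tensorflow/issues/{number}/labels",
        json={"labels": ["re-triage"]},
        headers=HEADERS,
    )
    resp.raise_for_status()
    print(f"labeled #{number}: {issue['title']}")
```

A job like this could run on whatever re-triage frequency gets agreed, and the label alone would already give the periodic review a well-defined queue.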
@bhack your suggestions are good. Nagging is not very useful and we will turn public nagging off. But this is not a tooling problem so much as a process and resource problem. The "Contributions welcome" category is tricky: most of those issues are feature requests that cannot be closed right away. We are working on addressing the issue backlog while keeping up with the new issues.
Cheers, Deepak
Exactly, and I've excluded "Contributions welcome" from the query because in that case the communication is clear. There is no internal plan to schedule that work, and the message is explicit: "We are interested in that feature, or in maintaining it, but we are waiting for a community PR." I also think there is a cognitive bias for the TF core team in receiving repetitive notifications; it is better to cluster them and give them specific handling. IMHO it is better to have a resource-scheduling problem, where it is just a question of setting a frequency for the re-triage, than to be fooled by some attention bias. To scale, I think that sooner or later you will need to design some community roles for high-level contributors.
What do you think about introducing a grooming activity every quarter? VS Code is another high-traffic repository on GitHub, so we could take a little inspiration from them.
See https://github.com/microsoft/vscode/wiki/Issue-Grooming and https://github.com/microsoft/vscode/issues/82032#issuecomment-539010340
They also have a monthly public iteration plan issue that I find very transparent and community-friendly.
@chanshah is continuously working through our backlog, and you may have noticed that we have much better label coverage recently. I love the Issue Grooming announcement though -- it is something we could imitate.
cc @theadactyl @joanafilipa
The community loves their monthly Iteration Plan tickets. It would be very nice to have something like that in TensorFlow. An improved chart like this one would also be cool: https://github.com/microsoft/vscode/issues/82032#issuecomment-539010340
Edit: if you want to see the chart's source, it is at https://github.com/lannonbr/vscode-github-stats
We also need grooming across the repos owned by the TensorFlow org, not just the main TF repo. Just as an example, if you look at the other issues here (in this repo), they are quite unmaintained.
Also, the roadmap.md/roadmap docs are often systematically outdated and have too large a scope to be useful. Please consider something like an iteration plan policy: https://github.com/microsoft/vscode/wiki/Iteration-Plans
And a more dynamic roadmap: https://github.com/microsoft/vscode/wiki/Roadmap
cc @joanafilipa @theadactyl
I've added a roadmap request to one of the subprojects: https://github.com/tensorflow/graphics/issues/132. But we could probably handle iteration roadmaps for subprojects better with a general policy.
Sorry, is this going to be discussed internally? Can we have more transparent monthly planning and a regularly updated roadmap?
Gentle ping for the new year: can we have an improved monthly or per-release outlook and roadmap?
@bhack thanks for the reminder. If I am understanding correctly, you're looking for the following:
Am I missing something?
Agreed with @martinwicke that the vscode grooming announcement is a good model to follow. I'll work with @chanshah on this, for this year. As far as roadmaps, this is a challenging undertaking across such a broad organization. What I could see working is having project owners self-volunteer for a particular roadmap update schedule that they can reasonably achieve (monthly, quarterly, etc) and commit to publishing a "last updated" message according to that schedule, even if the roadmap hasn't significantly changed. What do you think?
Yes. For the first point, monthly would be ideal. It is monthly for VS Code because they have a monthly release cycle; we don't have an explicit release cycle in TensorFlow, or if we have one it is totally hidden/internal.
So if we are not going to have predictable public alpha/beta/RC/release cycles, we could assume monthly cycles.
Also, I think VS Code has a general monthly release issue to which all the teams and the ecosystem contribute (with issue/PR links).
Check this month's pinned ticket: https://github.com/microsoft/vscode/issues/87479
The related endgame: https://github.com/microsoft/vscode/issues/89326
And a sample ecosystem issue for January: https://github.com/microsoft/vscode-remote-release/issues/2075
@theadactyl Can we have an update on how the grooming and public roadmap threads are progressing internally, or whether they are stalled? It is important to have a keep-alive signal for the community.
For sure!
OK, thanks for the update. I will ping you again next month, especially if we don't see any public progress on your last point. The grooming topic was not only about RFCs; it was about the whole ecosystem and about grooming initiatives/sprints in coordination with the SIGs.
Honestly, I have high expectations of the TensorFlow teams cooperating on a regular public roadmap across the ecosystem. As I've shown with concrete examples in this thread, other very large-scale open-source projects manage to maintain a fairly regular, best-effort public roadmap across the teams in their ecosystem, including the complexity of handling closed-source components/extensions inside an open-source process (as TensorFlow does).
@theadactyl I don't know whether you have something new to share. In the meantime, for the grooming part of this ticket, we could probably start re-triaging the top 10 bugs in this list once a week. What do you think? As you can see, there are issues whose last comment dates back to 2017.
@dynamicwebpaige @goldiegadde FYI
We have been picking off issues, but definitely not FIFO. I'm wondering whether it's possible to surface our prioritization to create a roadmap? I know we're doing that for release milestones, but it might also make sense more generally to give a better sense of progress.
@martinwicke I understand there can be some tension around exposing your cards to the competition from a commercial/strategic point of view. But beyond giving a better "sense of progress", the point is also to let individual users, contributors, or business integrators do a bit of planning for their own activities that depend on the framework, which is a factor not to be underestimated.
I agree. It's less of a "keep our cards close to our chest" problem than a process problem. Keeping several tracking systems in sync and up to date is work, and someone has to do that. I want to avoid just adding more stale sources of information.
If it is a process problem, I think we are in a better position to find a solution than if we were starting from a policy issue. We have examples from quite large projects, like Kubernetes (partially) with its SIGs, or VS Code with its components and extensions, that have found some solutions. They are not perfect, but better than nothing.
Just to give you a recent example: numpy_ops. It just landed in the source tree directly, with no community RFC. I might be interested in using it in a project, but I don't know the scope of the feature, or whether it is day-by-day best-effort work without a milestone for the next release or it has one. And we could find many other examples: feature requests opened in the ticket system that the team is handling internally for the next milestone without us knowing about it, etc.
I think the community could help you in this effort, but you need to start somewhere: organize the process and ask for community support.
It is really hard to maintain the ticket system when the statuses and labels become so noisy.
E.g., how can you make progress or review priorities in your ticket routines when the "stat:awaiting tensorflower" queue contains assigned tickets that have been in this status since 2017/2018:
https://github.com/tensorflow/tensorflow/issues?q=is%3Aopen+label%3A%22stat%3Aawaiting+tensorflower%22+sort%3Acomments-asc
It is hard to search, it is easy to create duplicates from a user's point of view, it is easy to lose issues, it is hard to understand whether they are planned for a given release (we are not using release labels), etc.
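As a sketch of the kind of weekly pass I mean (same `GITHUB_TOKEN` assumption as in the earlier sketches; the batch size of 10 is an arbitrary choice), this would list the least recently updated issues in that queue together with their days of inactivity:

```python
# Rough sketch of a weekly re-triage pass: take the least recently updated
# open issues still labeled "stat:awaiting tensorflower" and print how long
# each has sat without activity.
import os
from datetime import datetime, timezone

import requests

resp = requests.get(
    "https://api.github.com/search/issues",
    params={
        "q": 'repo:tensorflow/tensorflow is:issue is:open '
             'label:"stat:awaiting tensorflower"',
        "sort": "updated",
        "order": "asc",
        "per_page": 10,  # arbitrary weekly batch size
    },
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    },
)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for item in resp.json()["items"]:
    updated = datetime.strptime(item["updated_at"], "%Y-%m-%dT%H:%M:%SZ")
    days_idle = (now - updated.replace(tzinfo=timezone.utc)).days
    print(f"{days_idle:4d} days without activity  {item['html_url']}")
```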
Some small steps in the right direction: https://github.com/orgs/tensorflow/projects/9
In the meantime, can we create a newsletter to which every internal TensorFlow team/SIG can contribute? E.g., the TensorFlow/Google folks working on MLIR are doing good work with their newsletter editions: https://llvm.discourse.group/t/mlir-news-11th-edition-7-11-2020/1326
It seems there are other small steps, like https://github.com/tensorflow/tensorflow/issues/42047. This is an internal issue placed on the 2.4.0 project kanban board; it looks like something is moving...
Again on a roadmap subtopic, contribution conflicts, which I have been trying to get solved since 2016: I just discussed with @mdanatg that we could still be exposed to contribution conflicts, and a fresh case is at https://github.com/tensorflow/tensorflow/issues/44485#issuecomment-752100847
I'll try to bring this up at the next meeting in January (Thursday).
@mihaimaruseac Thanks. I hope we can finally make some progress in the new year.
Thanks for pushing this topic, @bhack, and also for linking it throughout GitHub. Definitely valuable for the community.
We've instituted a lot of programs to manage issue backlog triage, etc. We are still looking at improving public roadmap access, but are making progress with teams posting to the forum: discuss.tensorflow.org.
I'm going to close this issue for now, feel free to open new ones against more specific processes.
Yes, we have made a lot of progress recently, considering that this was a 2018 ticket. There is still a lot to do, but we are on the right path.
I receive many emails with repeated Nagging Assignee comments and no maintainer activity. How are you handling Nagging Assignees? Can you add an automatic label to get a quick internal overview of these issues? Continuously pinging inactive maintainers doesn't give a good feeling about the library.