stacks-archive / app-mining

For App Mining landing page development and App Mining operations.
https://app.co/mining
MIT License

Update guidelines for Audit Period Changes #158

Closed GinaAbrams closed 4 years ago

GinaAbrams commented 4 years ago

What is the problem you are seeing? Please describe. During the audit period, in the event that apps are re-tested, we need to make sure that apps are tested against what they submitted at the beginning of the month, not against changes made mid-month.

We also need to publish a guide for what qualifies for retest.

How is this problem misaligned with goals of app mining? App Mining is a system that was designed to serve all app founders, and we want to ensure we're doing this fairly in the audit period as well.

What is the explicit recommendation you’re looking to propose? Gather feedback and change proposals from the community, publish, and implement.

friedger commented 4 years ago

In #159 I suggest

friedger commented 4 years ago

Highlight that the audit period is for finding systematic errors, not for improvement or decreasing the rank of individual apps.

For example, if an audience setting in TMUI is wrong, then the fix should be that the setting is made public at the beginning of the month, before tests begin.

If the months-since-launch figure on Product Hunt is wrong, then PBC could try to formalize how relaunches are handled.

njordhov commented 4 years ago

If the goal is to avoid allowing major changes before a TMUI retest, the results could be published immediately after completion, with a short period (like 24h) to request retesting.

GinaAbrams commented 4 years ago

The main item for discussion here is primarily TryMyUI related. Product Hunt months since launch is simple data that can be audited.

I'm not sure how making the review period shorter would improve the situation.

I think there can be more global rules such as:

How should we go about any changes that might be NIL related? We've had requests from folks that their Gaia score needs to be updated, but we don't want to reward apps for changes made mid-review.

njordhov commented 4 years ago

I'm not sure how making the review period shorter would improve the situation.

The idea is to provide the TMUI score immediately upon completion with a short review period, so that if a retest is done there isn't much time to make major changes to the product.

friedger commented 4 years ago

Highlight that the audit period is for finding systematic errors, not for improvement or decreasing the rank of individual apps.

The audit period should not be used to handle retests (be it by TMUI or NIL or whatever reviewer) for an individual app. Only errors in the algorithms should be accepted, not errors in the reviewers' judgment.

Errors in reviews, such as whether the reviewer is biased or not qualified, should be discussed in the monthly calls, along with how the quality of a reviewer could be improved. The audit period should not be used for that. Handling review errors in the audit period could lead to manipulation of individual scores and does not scale. We should treat the data input from the reviewers to the algorithms as immutable.

Examples for systematic errors could be

Walterion01 commented 4 years ago

From another point of view, limiting the audit period to systematic errors will not be perfect. As we will probably add or switch reviewers, and they are mostly human, they will make mistakes like the one behind this issue. We should not let developers take the loss for a reviewer's fault, especially considering that the mining ranking works in such a way that once an app goes down, it is very hard to bring it back up. So in this case, if we let the developer take the hit and only fix the issue for the next month, it will not help an app made with hard work.

Instead of dismissing the question, I propose optimising the flow and the reviewers' work to minimise faults. I assume that this way no one can abuse it: if there are no issues in the process, no one can object to it.

For more clarification: it is not wise to let someone go to jail with no chance to defend themselves because the judge made a mistake, and to say: "OK, we will fix that for the next person in line".

So I propose instead that we work on the issues in the reviewers' process and flow and try to fix them, e.g. fix the judge.

friedger commented 4 years ago

My view on this is that we are not talking about going to jail but about getting a free lunch. Everybody gets nothing by default (is not in jail); the algorithms determine the free lunch for your app based on the data they received. If you don't like the outcome, you can fix the algorithm or fix the data. You should probably not mix both fixes.

@Walterion1 Your concerns about the impact of last month's score should be discussed in a separate issue.

I do agree that issues with reviewers should be fixed, but not in the audit period.

Walterion01 commented 4 years ago

It is a metaphor, dear friedger. When you have worked on something for months and, through no fault of your own, it gets an unfair reward, it will discourage you. And one of the flaws of the proposal is the effect of LastRoundScore, so it should be discussed here.

An example for even more clarification: say in future months we have 500 apps, NIL checks them all, and some of them load a new version of blockstack.js that has a bug affecting, e.g., 30 of them. What should we do in this case? Call for an emergency? Under this proposal, we would fix the bug, not let them ask for a retest, and tell them: "Try next month". Shouldn't we instead have a proper structure to report the issue, e.g. on GitHub or a website for developers, and, if it is verified by PBC or another reviewer, let them have their rightful justice?

Honestly, I don't understand the motive behind letting someone else take the damage for a judge's mistake instead of improving the workflow. Do we want to take the easy way or the right way?

Walterion01 commented 4 years ago

@GinaAbrams I propose to have a way to report such issues in the app dashboard or on GitHub, and have PBC or a reviewer verify the issue against some clear rules. A sample of the rules:

ViniciusBP commented 4 years ago

IMO, for TryMyUI, each new retest should be requested here on GitHub, and the old and new videos should be reviewed to see whether the developer made significant changes to the app. Just look at what happened last month: will the old score of Arcane Maps be fixed if the videos prove that the app changed a lot? It is easy to check and see that the app changed a lot and had an unfair advantage; it is the same as granting a special deadline with almost one additional month of development time. Even worse now that TryMyUI scores change only every 2 months.

I don't like the idea of developers contacting reviewers directly and fixing their issues during audit time.

GinaAbrams commented 4 years ago

The app mining process is mostly manual these days, but is progressing toward automation. We have to treat app reviewers as the source of truth for app reviews, but also ensure a fair process for all. I propose the simple solution listed above, which we can revisit when the process is more programmatic.

Apps will not be re-reviewed unless the app reviewer did something blatantly wrong. If, for example, the wrong app was reviewed, then we can re-test. But if it is a matter of audience preferences, the app will not be re-reviewed.

This should be revisited in a few weeks when we launch the updated maker portal.