Open hstove opened 4 years ago
Related issues:
- Anti-pattern: #98
- Raise the quality: #137
- More NIL dimensions: #143
- Improper Gaia usage: #150
- Open source: #11
Friedger Müffke (https://friedger.de):
Two follow up points to make this more concrete:
Pause it. App Mining needs to take a short pause. This month's ranking makes zero sense to me.
Instead of removing apps from the program, we could add a new criterion to the New Internet Labs scoring rubric.
I see the problem that apps could still get good results through high scores from other reviewers. And in the extreme case where the large majority of app publishers violate the privacy criterion, they would all land near 0 points in NIL theta, so the new dimension would barely differentiate them.
I propose to subtract points from the final score instead, as proposed in #98.
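To illustrate why a new NIL dimension alone may not penalize widespread violations, here is a minimal sketch. The numbers and the plain z-score normalization are illustrative assumptions, not App Mining's actual theta formula: when most apps violate the criterion, each violator's normalized score barely moves, whereas a lone violator is penalized heavily.

```python
from statistics import mean, pstdev

def z_scores(scores):
    """Plain z-score normalization of one scoring dimension (illustrative)."""
    mu, sigma = mean(scores), pstdev(scores)
    return [(s - mu) / sigma for s in scores]

# Case A: 9 of 10 apps violate the privacy criterion (score 0), one complies (10).
majority_violate = [10] + [0] * 9
# Case B: 1 of 10 apps violates; the other nine comply.
single_violator = [0] + [10] * 9

print(round(z_scores(majority_violate)[1], 2))  # a violator among many: -0.33
print(round(z_scores(single_violator)[0], 2))   # a lone violator: -3.0
```

This is why a flat deduction from the final score, as in #98, penalizes a violation consistently no matter how widespread it is.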
@joshthegreatavenue Only pause it until this is fixed :-) Increase the pressure to fix this issue as early as possible.
Like using radiks ;)
https://docs.blockstack.org/develop/dapp_principles.html
I do not believe there is any argument to be made against this proposal.
A new dimension "NIL Radiks" would be sensible.
@larrysalibra can you please comment here
I propose to use blocksurvey.org with a new form each month for whistleblowing.
Jeff Domke (notifications@github.com) wrote on Mon, Nov 25, 2019, 17:49:
"@larrysalibra https://github.com/larrysalibra can you please comment here"
Just to be clear, I agree with @friedger that this should be implemented for December 1.
@jeffdomke @GinaAbrams can we get confirmation on a December implementation?
I agree that this is a problem.
I am recommending a whistleblower system, instead of a formal reviewer, because this type of activity is hard to detect at scale.
We need to build tools and technology to prevent this at scale. This is what I'm trying to do with future versions of Can't Be Evil Sandbox.
In the short-term, I'm fine with a whistleblower system for this, but I don't want to be involved in investigating or mediating disputes about it.
A whistleblower system could work fine for December. At least better than nothing.
@larrysalibra are you ok with having this be a NIL criterion, even if you weren't involved in scoring it?
I would also like to clarify a point around emailing users. Some apps ask for the user's email, and if provided, send them emails. There is no way to do this in a decentralized, private manner. As long as it is clear to the user that they're sharing their email, and why, then this would not violate the proposal.
What about using Google Analytics to collect anonymized user behaviour data? Is that against the proposal?
Google Analytics fingerprints users and enables them to be tracked all over the web. It is the antithesis of users owning their own data. If you are using it, stop now and look into usefathom or simpleanalytics.
GA and others are the target of upcoming proposals.
@dantrevino usefathom or simpleanalytics - I guess they're centralized analytics tools as well?
What do you guys think about using Cloudflare? They also set cookies when using their DNS system for … At the same time, they provide DDoS protection, easy and free SSL, and caching.
I can see this enforced two ways:
Gaia
In both cases there is no objective measurement here; we are evaluating on a case-by-case basis the "spirit" or "intent" (predicted) behind the implementation. I'm OK with this assuming the community is, but just want to confirm.
Alternate proposal:
Apps are in violation when data entered into the app by a user is not stored in Gaia. There is no way to exhaustively test this for each app, but it is possible to test major features and detect when an app is or isn't writing to Gaia based on user input.
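As a rough sketch of what such spot-testing could look like (all names below are hypothetical; a real audit would capture traffic from the running app via browser devtools or a proxy): enter a known string into a major feature, record the outgoing requests, and flag any request that carries that string in plaintext to a non-Gaia host.

```python
# Hypothetical audit helper; GAIA_HUBS and the captured requests are made up.
GAIA_HUBS = {"hub.blockstack.org"}  # hostnames of known Gaia hubs (assumption)

def flag_violations(requests, user_input):
    """Flag captured requests that send the user's plaintext input to a non-Gaia host."""
    return [
        r for r in requests
        if r["host"] not in GAIA_HUBS and user_input in r.get("body", "")
    ]

# Requests recorded while entering "my private note" into the app (illustrative):
captured = [
    {"host": "hub.blockstack.org", "body": "<ciphertext>"},           # encrypted Gaia write: fine
    {"host": "api.example-app.com", "body": "note=my private note"},  # plaintext to app server: violation
]

print(flag_violations(captured, "my private note"))
```

Note that this only catches plaintext leaks of the test string; it would miss data that is obfuscated before being sent to a central server, which is why case-by-case human review is still needed.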
Re "confirmation December implementation": @dant we still don't have a clear proposal on PBC vs. NIL and how this is scored. I don't see how we can enforce this, since we had no decision before the review period started. Once we are clear, we can do a dry run in December and use real data in January.
@sydneyitguy The issue here is not centralization, but rather the use of user data and the fingerprinting of browsers so users can be tracked across all of their web usage, not just within an app. UseFathom and SimpleAnalytics track app usage stats, not a user's global activity.
With UseFathom and SimpleAnalytics (and other privacy-respecting analytics), you can see all of the relevant usage data (devices, screen sizes, URLs) within your app without giving your users' data to legacy data silos.
@dantrevino I'm not sure whether giving our user data to alternative start-ups instead of Google is actually a better option. Yes, "they say" they don't sell the data, though...
What is the problem you are seeing? Please describe. There are multiple apps that are performing well in App Mining, but they're violating digital rights by sending private, unencrypted data to their central server.
How is this problem misaligned with goals of app mining? This is blatantly against what we stand for.
What is the explicit recommendation you’re looking to propose? Any app that engages in this behavior should be removed from App Mining.
Private, unencrypted data includes:
Describe your long term considerations in proposing this change. Please include the ways you can predict this recommendation could go wrong and possible ways to mitigate them. My recommendation is to rely on a whistleblower system to flag apps that engage in this type of behavior. Anyone can send me or a member of the App Mining team a message indicating how an app is behaving in this way. We will investigate their report, and if confirmed, we will notify the app developer about what we've found. They will have a chance to contest the removal, in the case of false positives.
I am recommending a whistleblower system, instead of a formal reviewer, because catching this type of activity is hard to detect at scale. We have a strong community that is willing to dig deep into apps, especially those that perform well. I believe this is enough to stop these apps from performing well in App Mining.
Additional context Some of the top apps, according to our audit data, engage in this type of behavior. They claim to be decentralized, but they are not. They use Gaia only minimally, just enough to get a full score from New Internet Labs. They are hurting the Blockstack ecosystem by pretending to be decentralized and private while in fact acting in a mostly centralized manner.
I am also open to ideas that simply give these apps a very low score, instead of removing them from the program.
In the long run, it is on us to build tools that block this type of behavior, or at least make it very clear what is happening to the average consumer. Until then, I think we need to use App Mining to incentivize the type of apps we want to see.