devlooped / SponsorLink

SponsorLink: an attempt at OSS sustainability
https://www.devlooped.com/SponsorLink
MIT License

Replace hashed email with manifest-based offline check #31

Closed kzu closed 10 months ago

kzu commented 11 months ago

It was brought to my attention that this wasn't sufficiently anonymizing, given that (especially for corporations) the email pattern is not hard to probe if you have a list of emails from somewhere, by attempting to access the cloud URL for the sponsorship.

The current proposal (client-side CLI implementation PR) would work as follows:

  1. Users build for the first time with a SponsorLink-enabled library. Analyzer looks for a manifest as a user envvar. Since it's not found (or empty), it cannot determine sponsoring status and informs (Info diagnostic) that Project X is seeking sponsorships. If you are already a sponsor, please sync your sponsorships using gh sponsors sync....
  2. User follows the diagnostic link, which explains the following steps clearly
  3. Install the GH CLI and the gh-sponsors CLI extension (basically running gh extension install devlooped/gh-sponsors).
  4. Runs gh sponsors (sync being the default command). On the first run, the tool explains again what's going to happen and performs the following steps:
     a. Creates a user envvar with a random GUID to use for salting all hashes
     b. Gets the user's active sponsored accounts
     c. Gets the user's verified emails
     d. Gets the user's organizations and their verified domain(s), as well as their sponsored accounts
     e. Hashes each user email c) with each sponsored account b) (salted with a)) and turns them into JWT claims ("hash"=hash)
     f. Hashes each verified org domain d) with sponsored accounts too (also salted with a)), and turns them into JWT claims too
     g. POSTs this to a GitHub-authenticated SponsorLink endpoint that signs the JWT with the SL private key. All the endpoint validates is that the logged-in GitHub user (via Auth0) is the same as for the GH CLI.
     h. The backend responds with a signed JWT with an expiration that covers the current month (sponsorships expire at the end of each month).
     i. The token is saved to the envvar checked in 1)
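The hashing in steps a/e/f can be sketched as follows. This is a hedged illustration in Python rather than the actual C# implementation; the concatenation order, separator, digest algorithm, and encoding are all assumptions, since the proposal only specifies that each email (or domain) is hashed with each sponsored account, salted with the random GUID:

```python
import base64
import hashlib
import uuid

def claim_hash(salt: str, value: str, sponsorable: str) -> str:
    # One-way, salted hash tying an email or verified domain to a
    # sponsorable account. The ":"-joined layout is an assumption.
    digest = hashlib.sha256(f"{salt}:{value}:{sponsorable}".encode()).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

# Step a: a random per-user salt, persisted as a user envvar.
salt = str(uuid.uuid4())

# Steps e/f: one claim per (email or domain, sponsored account) pair.
emails = ["dev@contoso.com"]    # hypothetical verified emails
domains = ["contoso.com"]       # hypothetical verified org domains
sponsored = ["devlooped"]       # hypothetical active sponsorships

claims = [claim_hash(salt, v, s) for v in emails + domains for s in sponsored]
```

The resulting claims (not the emails) are what gets POSTed in step g, so the backend only ever sees salted hashes it cannot reverse without the user-held salt.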

On a subsequent build:

  1. Analyzer sees both the manifest and salt envvars
  2. If the token is expired or invalid (i.e. not signed), an Info diagnostic tells the user to run gh sponsors again.
  3. Analyzer does hash(salt, email, sponsorable) and tries to find that claim within the JWT (all local). It also does a fallback check for hash(salt, domain(email), sponsorable), to support org sponsorships.
  4. If the hash is found, the user (or an org) is sponsoring. Otherwise, an Info tells the user to please sponsor, with a link to do so for the given sponsorable (project/user/org).
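Steps 3 and 4 of the subsequent-build check amount to a local set-membership test. A minimal Python sketch (the real check lives in the C# analyzer; the hash layout is the same assumption as above):

```python
import hashlib

def claim_hash(salt: str, value: str, sponsorable: str) -> str:
    # Must mirror however the manifest claims were generated; the exact
    # encoding here is an assumption.
    return hashlib.sha256(f"{salt}:{value}:{sponsorable}".encode()).hexdigest()

def is_sponsoring(claims: set, salt: str, email: str, sponsorable: str) -> bool:
    # Direct check: hash(salt, email, sponsorable)
    if claim_hash(salt, email, sponsorable) in claims:
        return True
    # Fallback: hash(salt, domain(email), sponsorable), so that a
    # verified-org-domain (organization) sponsorship also counts.
    domain = email.split("@", 1)[1]
    return claim_hash(salt, domain, sponsorable) in claims
```

Everything here happens locally; the only remote interaction was obtaining the signed manifest in the first place.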

Of note:

The goal is for integrators to just have a documented standard mechanism for verifying the JWT manifest token, even without any SL-provided code. But a simple loose file "helper" should be provided for simplicity.

GH CLI experience is similar to the following:

[screenshot of the gh sponsors CLI output]

harunlegoz commented 10 months ago

I’m glad that you’ve listened to the community and made the requested changes, @kzu. I like the new client-based approach, and that it’s based on info messages rather than build warnings. For that you have my respect. I hope it won’t be so much of a “nagging” as you described previously, but more of an ask.

One thing I’m not sure of:

e. POSTs this to a SponsorLink endpoint that verifies all entries match.

If you have the client’s sponsored library list through the gh sponsors CLI, why do you need to send it to a backend API to do the validation? Why not keep a local cache of the user’s sponsorships and check against it, refresh it hourly via GH’s own sponsors APIs, and even provide a gh sponsors refresh command for the user to force a cache refresh?
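The cache-and-refresh flow being suggested could look roughly like this (a sketch only; `fetch` stands in for whatever call the CLI would make to GitHub's sponsors API, and the cache path and hourly TTL are just the values proposed above):

```python
import json
import time
from pathlib import Path

def cached_sponsors(fetch, cache: Path, ttl: int = 3600):
    """Return the sponsor list from `cache` if fresh, else call `fetch()`.

    `fetch` is a placeholder for a call to GitHub's sponsors API.
    """
    if cache.exists() and time.time() - cache.stat().st_mtime < ttl:
        # Cache is still within its hourly TTL: stay fully offline.
        return json.loads(cache.read_text())
    sponsors = fetch()
    cache.parent.mkdir(parents=True, exist_ok=True)
    cache.write_text(json.dumps(sponsors))
    return sponsors
```

A `gh sponsors refresh` command would simply delete (or ignore) the cache file and call `fetch()` unconditionally.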

That way, you wouldn’t need to worry about any privacy laws or anything. The information would stay on the machine. And it would be just a library doing offline work.

If I’m reading this wrong and there are necessary checks going on in the API that can’t take place on the client machine, apologies; I might’ve missed that.

One last question: Does it still slow the builds?

TsengSR commented 10 months ago

Companies don't donate

https://github.blog/2023-04-04-whats-new-with-github-sponsors/#organization-funded-sponsorships-now-generally-available

While in beta, we saw exciting growth in direct funding from 3,500 partner organizations like AWS, American Express, Shopify, and Mercedes Benz. And in 2022, nearly 40% of sponsorship funding came from organizations, with each organization-funded sponsorship worth on average nearly 15X more to maintainers than the average individual sponsorship.

Perhaps you @TsengSR just have no idea what you're talking about. That's another angle.

Ironically, this plays against you, dude. If that's the case, why do you then need to blackmail companies to get money? Or did you suddenly get greedy and want more than what they deemed it was worth to you? Your story doesn't add up, man.

So, spit it out and come out with the truth for once. That would require you to make the details about the amounts donated public. Or don't you dare do that, because it would count against you when people realize there's been enough to sustain the development but you greedily wanted more?

Right now your arguments don't add up. Either you don't get enough and companies didn't donate, or they do and you lie. Which one is it? Now it's your turn; the public is waiting.

aschan commented 10 months ago

I see a few posts that discuss GDPR and other data protection legal frameworks marked as off-topic, and I feel this is a mistake. They are relevant to understanding the external requirements SponsorLink must adhere to. It has already been mentioned that there are extra-territorial restrictions. Just this summer, The Swedish Authority for Privacy Protection deemed Google Analytics problematic; see Four companies must stop using Google Analytics.

GDPR doesn't just impose restrictions on which PII you can handle or why. It also requires that you have processes in place for the user to:

  1. See what data is stored about them (transparency)
  2. Export their data (portability)
  3. Have their data deleted (right to be forgotten)

The last one is trivial, since deleting the data would just make them a non-sponsor. But the other two make one-way hashes impractical. And there are more regulations than this; I just summarized a few tenets of GDPR. So to create a SponsorLink solution you will have to set up several administrative processes/practices that are cumbersome and don't directly relate to the software. And this is just for one such framework; several others already exist and more are coming.

For SponsorLink to be a viable solution without a lot of administrative overhead, I believe the sponsorship must be proven via a downloaded license file or similar that doesn't export or store PII. Validity and expiration should be validated locally so that there is no external communication after the file is retrieved. Using a certificate would probably work and would have the added benefit that a corporation with multiple projects/repositories could purchase an organization-wide license.

kzu commented 10 months ago

@harunlegoz the problem is how to authenticate the user with the backend. I could send an API key via email, but that keeps adding more steps that I fear just add more and more friction. Collecting their sponsors and hashing+salting them in a way that the backend can verify (and thus send a signed manifest) is effectively using their current sponsors' state (calculated client-side) as a sort of API key that no one else should be able to reconstruct, without me having to keep a separate auth mechanism.

At least the auth/validation of the user downloading the signed manifest needs to happen on the backend, otherwise it would be trivial to circumvent.

Build slowing won't happen, no.

cmjdiff commented 10 months ago

Whoever marked the GDPR comments as off-topic needs to either undo that or explain themselves.

cmjdiff commented 10 months ago

@harunlegoz the problem is how to authenticate the user with the backend.

We've been through this before. The solution is not to have a backend and just serve static messages.

That's it. That's the entire thing. No other solution is both compliant with data protection principles and acceptable to enterprise users.

Precisely which part of this are you having difficulty with?

kzu commented 10 months ago

@cmjdiff with all due respect, I think you have replied enough. Your continued "GDPR GDPR" is hardly constructive.

The user in this case is:

  1. MANUALLY installing a CLI extension, which will TELL YOU CLEARLY what you're OPTING IN to (so, GDPR prevents users from opting in to stuff? Please go tell all the stupid sites asking for consent on the damn cookies).
  2. MANUALLY installing a GH App that EXPLICITLY requests your consent to sharing your email

Precisely which part of all this explicit consent all over the place isn't to your liking? Needless to say, you can NOT consent to ANY of that, and you will get a static message (as you're asking) that says PLEASE SPONSOR X.

The thing I'm trying to solve here is NOT annoy users who have ALREADY sponsored with needless messages (or perhaps change those to THANKS).

So, I understand you DON'T LIKE what I'm trying to do. But I think the usefulness of that "feedback" is questionable at this point.

TsengSR commented 10 months ago

Collecting their sponsors and hash+salt them in a way that the backend can verify and thus send a signed manifest, is effectively using their current sponsors' state (calculated client-side) as a sort of API key that no-one else should be able to reconstruct. Without me having to keep a separate auth thing.

This is not true and you know it. Stop looking for excuses. It's wrong that no one can reconstruct it: YOU can reconstruct it.

Here's an example of how these could be used to link user data: you get curious and want to know which projects this is integrated in, so you're like: "Okay, I don't know the emails, but I know how the hash is calculated. So I just 'brute force' the sponsor email addresses I have from GitHub and the identifier and recalculate the hash", then count how often it appears in which project and ta-da, you have gathered private data about the user, which you could use to send them advertisements for a similar project, but from you.
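For what it's worth, the attack being described here is really a dictionary lookup against unsalted (or known-salt) hashes, sketched below. A secret, user-held random salt is exactly what defeats this for outsiders, but whoever also knows the salt and a candidate email list can still run it:

```python
import hashlib

def unsalted_hash(email: str) -> str:
    # A hash derived only from values the operator also holds,
    # with no user-held secret salt mixed in.
    return hashlib.sha256(email.encode()).hexdigest()

# Operator side: candidate emails (e.g. a public sponsor list or a
# corporate directory). These are hypothetical addresses.
candidates = ["alice@contoso.com", "bob@fabrikam.com"]
lookup = {unsalted_hash(e): e for e in candidates}

# Any "anonymized" record whose input appears in the candidate list is
# re-identified by a simple dictionary lookup, not real brute force.
observed = unsalted_hash("alice@contoso.com")
reidentified = lookup.get(observed)
```

This is why hashing alone is pseudonymization rather than anonymization: the mapping is trivially recoverable by anyone holding the inputs.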

The point is not just that someone else could recalculate the hash, but that the company providing the service must not be able to reconstruct it either. That's why pseudo-anonymization via hashing IS NOT SUFFICIENT for GDPR: the data holder can still reconstruct the data and assign it to a single specific user!

At least the auth/validation of the user downloading the signed manifest, needs to happen on the backend, otherwise it would be trivial to circumvent.

You seriously ask people to download stuff from you, an untrusted third party, into their build pipelines without a trustworthy escrow organization in between (such as nuget, which can filter, ban and remove harmful packages/products/stuff)?

The way you acted so far could as well mean that if you get in a manic phase again, you'd maybe even replace that one with an executable or god knows what.

Not even to speak of other cases: what happens if it fails to download, your website goes down, or the service becomes unavailable or unresponsive? Will it then fail and sabotage the pipelines of 100,000 people again and cause millions in damage?

You just can't see through that. Your abnormal paranoia that people "may share the licence file" just shows how absurd that is. People who don't want to pay, won't pay. It didn't work with DRM either in the past 30 years. People who wanted to pirate something did it; they just made a crack and that's it. People who found it worth it bought it, and people who didn't just didn't pay and used something else.

You are just trapped in your fantasy world, thinking you can force people into paying w/o offering something like an SLA or support in return (spoiler: SLA and support can't be "cracked", "circumvented" or whatever, hence why it's a very solid business model a lot choose).

Perksey commented 10 months ago

Here's an example of how these could be used to link user data: you get curious and want to know which projects this is integrated in, so you're like: "Okay, I don't know the emails, but I know how the hash is calculated. So I just 'brute force' the sponsor email addresses I have from GitHub and the identifier and recalculate the hash", then count how often it appears in which project and ta-da, you have gathered private data about the user, which you could use to send them advertisements for a similar project, but from you.

The point is not just that someone else could recalculate the hash, but that the company providing the service must not be able to reconstruct it either. That's why pseudo-anonymization via hashing IS NOT SUFFICIENT for GDPR: the data holder can still reconstruct the data and assign it to a single specific user!

I don't think anyone was denying that the emails of sponsors who have consented to linking their GitHub account to SponsorLink are subject to GDPR, and this pseudo-anonymization isn't solving that at all. What it is solving is sufficiently anonymizing people who might not have sponsored or linked to SponsorLink, by hashing and sending just a few characters of the hash to the server. The data holder cannot brute force these hashes, as they have just a few characters and none of the data that went into the hash.

You are just trapped in your fantasy world thinking you can force people into paying w/o offering something like SLA or Support in return (spoiler: SLA and support can't be "cracked", "circumvented" or whatever, hence why it a very solid business model a lot choose)

If this is your opinion, and you object to the very idea itself, then you have no reason to be on this thread, which is actually trying to make an implementation of this idea that addresses community concerns.

TsengSR commented 10 months ago

@cmjdiff with all due respect, I think you have replied enough. Your continued "GDPR GDPR" is hardly constructive.

The user in this case is:

  1. MANUALLY installing a CLI extension, which will TELL YOU CLEARLY what you're OPTING IN to (so, GDPR prevents users from opting in to stuff? Please go tell all the stupid sites asking for consent on the damn cookies).

All these "stupid websites" are asking for consent. Not only that, they put up an options window where you see EVERY SINGLE LITTLE DETAIL of what the data is used for, with an opt-out for each.

It's even MANDATORY that the user can DECLINE each and every single piece of data collection (a "Reject all" button is mandatory), so that no data at all is collected. Can you do that too with your shady scheme?

  1. MANUALLY installing a GH App that EXPLICITLY requests your consent to sharing your email

That's not enough. The installation doesn't tell the user what the data is used for and what it isn't. And this is a critical part of the GDPR.

The thing I'm trying to solve here is NOT annoy users who have ALREADY sponsored with needless messages (or perhaps change those to THANKS).

So, I understand you DON'T LIKE what I'm trying to do. But I think the usefulness of that "feedback" is questionable at this point.

I'm pretty sure the usefulness of your spyware/pipeline sabotage is more than "questionable" at this point too, yet you still keep babbling about continuing with it.

Perksey commented 10 months ago

Also can we stop mentioning GDPR like we're all lawyers, because I'll place good money on none of us here being lawyers.

harunlegoz commented 10 months ago

@cmjdiff the man is trying to accomplish something. You may like it, use it, hate it, abandon it; up to you. I still have my reservations and opinions. However, there's a technical solution here, to which I think I can at least offer my suggestions to make it better. GDPR is very protective of customer data, but it doesn't prevent you from collecting user data if done properly. It's a safeguard, not a roadblock. Daniel is just trying to comply with it.

@kzu I never used GitHub's GraphQL API before, but I've done plenty of OAuth stuff (mainly against Azure AD). I'm assuming it works in a similar way (respecting the OAuth and OpenID Connect protocols). If so, you can do it without needing a client secret or a key. All you need is an app and a redirect URI.

A few example apps like Azure Storage Explorer, Azure Function Runtime, etc. do it like follows:

  1. Register an app with the authentication provider, make it a web app. That would only require a Redirect URI, not a client secret. We'll set the redirect URI in a second.
  2. Have your application host a temporary localhost endpoint. For example, Storage Explorer hosts a localhost URL when you're trying to authenticate. The whole purpose here is to have a redirect URI that can't be accessed externally and therefore can't be exploited. Let's say you come up with http://localhost:3881/auth/callback as your redirect URI.
  3. When the user clicks sign in (or rather calls a CLI command, in your case gh sponsors login), you:
     a. Have your app open up a temporary endpoint at http://localhost:3881/auth/callback and listen
     b. Open up a web page by constructing the login URL (I'm making it up, I don't know the exact URL): "https://githublogin.com/login?clientId=XYZT&redirectUri=http://localhost:3881/auth/callback"
  4. When the user logs into their GH account, it'll ask if they would like to give SponsorLink permission to access sponsor data (again, you'll need to set this up on the GH app; not sure about the permissions there). When they click Grant, they are redirected to your localhost URL with an authorization code.
  5. You shut down the temporary endpoint after you receive the code.
  6. You exchange the code for an access token, store it somewhere safe (Windows and macOS give proper APIs for it, and I believe dotnet implements these), then use it to directly connect to the GH API to fetch the sponsors.
  7. If the access token is expired, you can always get a new one using the refresh token that came with it.
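Steps 2 and 3a, the temporary loopback listener, could be sketched like this (the port and callback path are just the hypothetical values above; a real CLI would also validate a `state` parameter):

```python
import threading
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

class _CallbackHandler(BaseHTTPRequestHandler):
    """Captures the ?code=... query parameter from the OAuth redirect."""
    captured = {}

    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        params = urllib.parse.parse_qs(query)
        _CallbackHandler.captured["code"] = params.get("code", [None])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"You can close this window now.")

    def log_message(self, *args):  # keep the CLI output clean
        pass

def wait_for_auth_code(port: int = 3881) -> str:
    # Bind only to loopback: the redirect URI can't be reached externally.
    server = HTTPServer(("127.0.0.1", port), _CallbackHandler)
    server.handle_request()  # blocks until the browser redirects back
    server.server_close()
    return _CallbackHandler.captured.get("code")
```

The CLI would open the browser at the provider's authorize URL with `redirect_uri=http://localhost:3881/auth/callback`, call `wait_for_auth_code()`, then exchange the returned code for a token.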

Again, I really don't know how well implemented GH's API authentication is or how similar it is to Azure AD. But if they've done it right, you can use the Authorization Code Flow and do this in a way that doesn't require a backend. It would be like a client app on your machine or a mobile app doing it. I think Microsoft even has an example dotnet app with Azure AD that you can use as a starter: https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial

Here's a bit more documentation on the topic:

TsengSR commented 10 months ago

Also can we stop mentioning GDPR like we're all lawyers, because I'll place good money on none of us here being lawyers.

Lawyer or not, GDPR has very high importance to Europeans; it's something people asked and demanded for decades, to stop the insatiable Anglo-American companies from gathering endless amounts of data about users and abusing it against them (Brexit, the Trump elections, advertisement and such). There's a real danger to freedom and democracy in having companies use and abuse this data without people's knowledge.

You could call it the most important law to see the light of day since the invention of the internet.

cmjdiff commented 10 months ago

@cmjdiff with all due respect, I think you have replied enough. Your continued "GDPR GDPR" is hardly constructive.

You need to get it into your head that people will stop going "GDPR GDPR" at you when you stop insisting on trying to do things that violate GDPR.

Precisely which part of all this explicit consent all over the place isn't to your liking?

The part where you've provided mechanisms for people to give consent, but are still doing the checks on users who have not consented.

The thing I'm trying to solve here is NOT annoy users who have ALREADY sponsored with needless messages (or perhaps change those to THANKS).

Right. And that thing, entirely by itself, has unavoidable data protection implications.

So, I understand you DON'T LIKE what I'm trying to do. But I think the usefulness of that "feedback" is questionable at this point.

I think you've got that the wrong way round. What's happening here is that you're trying to do something that has data protection implications, and you DON'T LIKE that it has data protection implications, but want to do it anyway. Yet another example of you refusing to listen to feedback you don't like.

You haven't shown us where you're informing people that this check is being made. You haven't shown us where you're requesting consent to do that check. You haven't shown us where you're allowing people to withdraw that consent. You haven't shown us where you're allowing the user to object, in advance, to being shown the message in the first place.

You've addressed none of the legal problems, and are now wondering "why are people still bothering me about the legal problems?".

Perksey commented 10 months ago

@TsengSR sure, but my point is none of us have any authority to speak on whether something is GDPR compliant or not beyond a "best effort" implementation which is what is proposed. So either hire a lawyer or just stop mentioning this very specific and very intricate regulation, you have no authority to decide whether this implementation is GDPR compliant. The only reason we can say that the previous implementation was non-compliant is because it was blatantly, obviously sending emails without consent, whereas there are reasonable measures in place to at least think we are GDPR compliant (emphasis on think; until kzu or someone else hires a lawyer there is no way to be sure).

Or carry on nay-saying beyond usefulness, your choice ;)

cmjdiff commented 10 months ago

the man is trying to accomplish something. You may like it, use it, hate it, abandon it. Up to you.

It doesn't matter that he's trying to accomplish something, and it doesn't matter whether the rest of us like it. What matters is that the thing he's trying to accomplish cannot be done in a way that is fully compliant with data protection laws. It's an example of the exact thing the rules were written to protect against.

GDPR is very protective of customer data, but it doesn't prevent you from collecting user data, if done properly.

... and people are trying to tell him how it's "done properly", and he keeps ignoring it because he doesn't like that it stops him doing it the way he wants to.

Perksey commented 10 months ago

... and people are trying to tell him how it's "done properly", and he keeps ignoring it because he doesn't like that it stops him doing it the way he wants to.

When you say "people" do you mean the people actively proposing solutions on this thread or the people just saying it's not done properly and not doing a single thing proactive to suggest a compliant solution to be implemented?

cmjdiff commented 10 months ago

When you say "people" do you mean the people actively proposing solutions on this thread or the people just saying it's not done properly and not doing a single thing proactive to suggest a compliant solution to be implemented?

Translation:

When you say "people" do you mean the people trying to find workarounds on this thread or the people just saying it's not done properly and explaining why a compliant solution doesn't exist?

I'm sorry that GDPR's lack of wiggle room offends you.

I've offered a compliant solution: Don't do any checking and just put in the message regardless. Guaranteed 100% compliant with GDPR.

harunlegoz commented 10 months ago

@cmjdiff you're not the only GDPR-aware developer here. I'm in the UK and I'm in the data business. I've done this before, as have countless others. Just because you don't know how it can be done doesn't mean no one else does.

Maybe @kzu knows how it's done, maybe he doesn't. Advise him on how it can be done, then. Not how it's impossible.

We've all given him a list of "One Million Things That You Can't Do Under GDPR". We've all criticised him to hell and back, because he has done it wrong. Hell, I've given 6 different metaphors to explain what's wrong.

He gets it. He's trying to improve it. Give the man a break.

kzu commented 10 months ago

Thanks @harunlegoz for the suggestions.

The auth I was talking about was not about getting (locally) the user's sponsors list. The tricky part is coming up with some sort of SponsorLink-backend-signed manifest, not something just done on the client side. Otherwise, anyone could generate a manifest, share it with the whole team, and there would be no way to assert that a given manifest was validated/signed by SL itself.

So, gathering sponsors locally is already doable (it's what my PR to the other gh-sponsors repo does already), since the user running the GH CLI is already authenticated to GH. On the backend, similarly, the user has to install the GH App (via github.com), which grants the SL backend permission to read the user's email(s).

The missing piece is matching up the locally calculated sponsors with the backend-determined sponsors (this info would be shared by the sponsorable account when they set up the webhook to notify SL of active sponsors) for the "current user" (who is authenticated to the GH CLI but not to the SL endpoint). 🤔 Does it make sense?

harunlegoz commented 10 months ago

@kzu, ah, I see. Okay, now I understand. Sorry for making the wrong assumption.

Why not have the gh sponsors app run a background agent that exposes a localhost API (with CORS set to only accept localhost requests)? It can accept a package name, return true/false to the caller, never store the values anywhere other than memory (and maybe in a SecureString?), and can do checks on behalf of the calling packages. If done right, it wouldn't take more than maybe a couple MB of memory and literally no CPU usage.

Again, I know I'm not well-versed in SponsorLink's internal workings and only know it through your descriptions, so apologies if I'm not hitting the target.

seanterry commented 10 months ago

So either hire a lawyer or just stop mentioning this very specific and very intricate regulation, you have no authority to decide whether this implementation is GDPR compliant.

@Perksey GDPR rules are not intricate. They are very straightforward, and not aimed at lawyers; they are aimed at us, the processors of data. If you think the GDPR is intricate, oh man, just wait for user stories.

When you say "people" do you mean the people actively proposing solutions on this thread or the people just saying it's not done properly and not doing a single thing proactive to suggest a compliant solution to be implemented?

A number of solutions have been proposed. One should not need to propose a solution, because this is a solved problem, and has been for a very long time. If you want to get paid, charge for your product. That means change the license it is published under, and use a license key. It doesn't require a rocket scientist, brain surgeon, or rocket surgeon. It may require a lawyer, because you no longer get that "get out of lawsuit free" card that FOSS licenses tend to give you.

If you publish under a FOSS license, then waste a developer's time or a business' compute time, then you are undermining FOSS.

So in short, the proper way to handle this is:

  1. Charge for your product, or
  2. Ask for donations, but don't waste developer's time, compute time, phone home, or do literally anything else if they don't.

Full stop.

TsengSR commented 10 months ago

@TsengSR sure, but my point is none of us have any authority to speak on whether something is GDPR compliant or not beyond a "best effort" implementation which is what is proposed. So either hire a lawyer or just stop mentioning this very specific and very intricate regulation, you have no authority to decide whether this implementation is GDPR compliant.

Well, previously I worked at a company in the health sector, which involved sending/transferring estimates that contained A LOT of very critical PII, not just about the patient but also about their diagnosis and such, so I know a thing or two about GDPR, since the company HAD to comply with it. But no, not a lawyer.

What's your professional background on GDPR?

The only reason we can say that the previous implementation was non-compliant is because it was blatantly, obviously sending emails without consent, whereas there are reasonable measures in place to at least think we are GDPR compliant (emphasis on think; until kzu or someone else hires a lawyer there is no way to be sure).

No, it wasn't. There was no mention of it until people started asking and all of that boiled up. And even that only happened because pipelines failed after upgrading (for one because it was emitted as a warning, which fails "Warnings as Errors" pipelines, and some others even threw exceptions during build, preventing people from shipping stuff).

In the worst case, no one would even have noticed the PII being leaked to kzu!

kzu commented 10 months ago

@harunlegoz that wouldn't be an improvement over just checking a local file, I think. If anything, it would look even MORE suspicious to have a running process/agent.

@seanterry

Ask for donations, but don't waste developer's time, compute time, phone home, or do literally anything else if they don't.

Awesome, finally someone understands what this is about, since that's precisely what is being proposed. The "literally anything else if they don't" part, in particular. If there's no sponsorlink file, do nothing, just ask for donation.

Now, if the user IS sponsoring, the analyzer should be able to do something with that information. It can be changing the diagnostics message (or removing it entirely), or even feature-toggling some IDE-only feature. That's the part that needs solving in order for SL to actually provide value beyond just showing a static diagnostics message begging for sponsorships.

@TsengSR

And even that only happened because the pipelines failed after upgrading

I very much doubt that was the case. The analyzer NEVER ran in CI. You can see the extensive checks for editor-only conditions I performed since day 1: https://github.com/devlooped/SponsorLink/blob/main/src/Package/SessionManager.cs#L51-L74

NzKyle commented 10 months ago

I very much doubt that was the case. The analyzer NEVER ran in CI. You can see the extensive checks for editor-only conditions I performed since day 1: https://github.com/devlooped/SponsorLink/blob/main/src/Package/SessionManager.cs#L51-L74

How well tested is that IsCI method? Are you 100% certain that it covers all possible CI environments that dotnet devs would be using?

kzu commented 10 months ago

It ALSO has to be IsEditor

gep13 commented 10 months ago

@kzu if you did want some more checks in this area, you can find a number of CI providers that are checked for as part of the Cake build orchestration system. You can find that code here:

https://github.com/cake-build/cake/tree/develop/src/Cake.Common/Build

Each one has an IsRunningOn... method, for example:

https://github.com/cake-build/cake/blob/6be615392ac27b52d979b5284993f84f175c7201/src/Cake.Common/Build/AzurePipelines/AzurePipelinesProvider.cs#L32-L33
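Flattened out, what those Cake providers do is a set of well-known environment-variable checks. A sketch (the variable list is a non-exhaustive sample; each real provider checks its own vendor-specific variable):

```python
import os

# Well-known CI environment variables: GitHub Actions/Travis/GitLab set CI,
# Azure Pipelines sets TF_BUILD, Jenkins sets JENKINS_URL, and so on.
CI_VARS = ("CI", "TF_BUILD", "GITHUB_ACTIONS", "APPVEYOR",
           "TEAMCITY_VERSION", "JENKINS_URL", "TRAVIS")

def is_ci(env=None) -> bool:
    env = os.environ if env is None else env
    return any(env.get(v) for v in CI_VARS)
```

The weakness NzKyle points out remains: any CI system that sets none of the probed variables would be misdetected as a local editor session, which is why the IsEditor check matters as a second gate.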

Moxinilian commented 10 months ago

@cmjdiff

You need to get it into your head that people will stop going "GDPR GDPR" at you when you stop insisting on trying to do things that violate GDPR.

While I do agree with you that this project is a horrible idea, I don't think your GDPR claims are valid. In fact, I think the solution proposed in the OP is GDPR-compliant in principle. No personal data is processed unless the data subject consents to it for the sake of verifying they are a paying customer. The consequences of refusing to provide consent are arguably minor: this is akin to a user buying an ad-free version of the open source software (what a dystopian sentence to say).

aschan commented 10 months ago

@Moxinilian consent is only a small part of GDPR. It doesn't matter if there is consent and the technical implementation is ironclad if there aren't well-documented processes for how the data is stored, extracted and deleted, according to the tenets of transparency, portability and the right to be forgotten.

In all fairness, GDPR also applies to a commercial license model where there are paying customers with persisted information somewhere. The administrative overhead must still be handled.

Moxinilian commented 10 months ago

@Moxinilian consent is only a small part of GDPR. It doesn't matter if there is consent and the technical implementation is ironclad if there aren't well-documented processes for how the data is stored, extracted and deleted, according to the tenets of transparency, portability and the right to be forgotten.

In all fairness, GDPR also applies to a commercial license model where there are paying customers with persisted information somewhere. The administrative overhead must still be handled.

Yes, hence why I said it is compliant in principle. The rest is formalities.