photo / frontend

The official @github repository of the Trovebox frontend software. A photo sharing and photo management web interface for data stored "in the cloud" (i.e. Amazon S3, Rackspace CloudFiles, Google Storage).
https://trovebox.com
Apache License 2.0

Photo generation security concerns #333

Open jmathai opened 12 years ago

jmathai commented 12 years ago

Per a comment in issue #332 https://github.com/openphoto/frontend/issues/332#issuecomment-2890512

@cedricbonhomme There are a couple of constraints, but your points are still valid.

  1. If the photo is private then the viewer needs permission to access your photo to call that API.
  2. Calling /photo/:id/view.json with returnSizes does not generate any photos. It only returns URLs; the photos are generated when those URLs are first requested, if they don't already exist. More details: https://github.com/openphoto/frontend/blob/master/documentation/faq/PhotoGeneration.markdown
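
The lazy-generation behavior described in point 2 can be sketched roughly as follows. This is an illustrative Python sketch only, not the actual PHP backend; all names and the path layout are hypothetical.

```python
GENERATED = {}  # in-memory stand-in for generated files on disk

def version_path(photo_id, width, height):
    # Hypothetical path layout, for illustration only.
    return f"/photos/custom/{photo_id}_{width}x{height}.jpg"

def view_json(photo_id, return_sizes):
    # Mirrors the behavior described above for /photo/:id/view.json with
    # returnSizes: it only returns URLs and generates nothing.
    return {f"{w}x{h}": version_path(photo_id, w, h) for (w, h) in return_sizes}

def serve_version(photo_id, width, height):
    # Generation is deferred until a returned URL is actually fetched.
    path = version_path(photo_id, width, height)
    if path not in GENERATED:
        GENERATED[path] = f"resized {photo_id} to {width}x{height}"
    return path
```

The key point is that `view_json` has no side effects; work happens only in `serve_version`, on first fetch.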

We should identify the different use cases and criteria. The basic list is:

  1. The user is also the owner (all access)
  2. A user is logged in and has access to a private photo (a lot of access)
  3. A user is not logged in and viewing a public photo (little access)

There are probably a few others.
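
The three tiers above can be sketched as a single access check. This is hypothetical Python pseudocode, not the actual Trovebox permission model; field names like `shared_with` are assumptions.

```python
def access_level(viewer, photo):
    # Case 1: the viewer is the owner (all access).
    if viewer is not None and viewer == photo["owner"]:
        return "all"
    # Case 2: logged in and granted access to a private photo (a lot of access).
    if viewer is not None and viewer in photo.get("shared_with", ()):
        return "a lot"
    # Case 3: anonymous viewer of a public photo (little access).
    if photo.get("public", False):
        return "little"
    return "none"
```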

cedricbonhomme commented 12 years ago

I agree with your remarks. For me, points 1 and 2 (for a logged-in user) are not really a big problem. The owner of a photograph must be able to do what he wants...

Would it be a problem to make authentication a requirement for that API? The URL /photo/id/create/10d71/WxH.jpg could then still be used by everyone (as it is now).

I think that this API is used just after the upload of a photo. In that case it is not a problem, since you have to be authenticated in order to upload a photo. If the API is used by a specific theme, it can be a problem... and I think that this is the case at least with the beisel theme. That theme seems to use the API in order to generate different sizes (440x292, 280x186, ...) of photos.

jmathai commented 12 years ago

@cedricbonhomme We could add a restriction or signature to the /photos/list.json and /photo/:id/view.json endpoints. The major drawback is that JavaScript libraries won't be able to generate this signature without leaking the signing key (a signing key already exists on the backend).

The biggest problem is that photos are generated on demand, and there is no guarantee which kind of user (owner, authenticated, or unauthenticated) will request a specific version.

The issue lies in the /photos/list.json and /photo/:id/view.json endpoints. The create endpoint already has a signature (the 5-character string before WxH). The biggest factor is (IMO) client-side JavaScript.
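
For illustration, a short token like the one on the create endpoint could be produced as a truncated HMAC over the request parameters. This is a sketch under assumptions: the key name, hash choice, and truncation length are hypothetical, not the actual backend scheme. It also shows why client-side JavaScript is the sticking point: the key must stay on the server.

```python
import hashlib
import hmac

SIGNING_KEY = b"server-side-secret"  # must never reach client-side JavaScript

def sign(photo_id, size):
    # Truncated HMAC over the request parameters; the 5-character length
    # mirrors the short token seen in /photo/:id/create/10d71/WxH.jpg.
    msg = f"{photo_id}/{size}".encode()
    return hmac.new(SIGNING_KEY, msg, hashlib.sha1).hexdigest()[:5]

def verify(photo_id, size, token):
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(sign(photo_id, size), token)
```

Since only the server holds `SIGNING_KEY`, signed URLs would have to be minted server-side and embedded in API responses, rather than computed by JavaScript clients.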

jmathai commented 12 years ago

@cedricbonhomme Perhaps we could add some sort of rate limiting per endpoint. That's a bigger task but might be worth considering.
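
A minimal sketch of per-endpoint rate limiting, keyed by client and endpoint, might look like this. The window and limit values are arbitrary assumptions, and a real deployment would need shared storage rather than an in-process dict.

```python
import time
from collections import defaultdict

WINDOW = 60   # seconds
LIMIT = 30    # requests allowed per window, per (client, endpoint)

_hits = defaultdict(list)

def allow(client_ip, endpoint, now=None):
    # Sliding-window limiter: keep only timestamps inside the window,
    # then accept the request if the count is under the limit.
    now = time.time() if now is None else now
    key = (client_ip, endpoint)
    _hits[key] = [t for t in _hits[key] if now - t < WINDOW]
    if len(_hits[key]) >= LIMIT:
        return False
    _hits[key].append(now)
    return True
```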

cedricbonhomme commented 12 years ago

@jmathai I don't think that rate limiting is a very good solution. A signature or something like that would be better. I don't understand exactly the problem with JavaScript; I'll try to understand how it works with the signing key you are talking about...

Moreover, it seems that the Google bot can use the API ;-)

```
$ tail -50 admin/log/error.log | grep -i Copying
[Tue Nov 29 20:02:34 2011] [warn] [client 66.249.71.120] mod_fcgid: stderr: {severity:info, description:"Copying from /tmp/opmeLwFFTk to /home/cedric/www/index/photos/custom/201110/1318771858-20100622T135041_100x100xCR.jpg", additional:}
[Tue Nov 29 20:16:46 2011] [warn] [client 66.249.66.11] mod_fcgid: stderr: {severity:info, description:"Copying from /tmp/opmeAElTAq to /home/cedric/www/index/photos/custom/201110/1319126648-20100622T142301_100x100xCR.jpg", additional:}
[Tue Nov 29 20:38:39 2011] [warn] [client 66.249.66.11] mod_fcgid: stderr: {severity:info, description:"Copying from /tmp/opme5W8YTj to /home/cedric/www/index/photos/custom/201110/1319126648-20100622T142301_960x960.jpg", additional:}
[Tue Nov 29 20:39:36 2011] [warn] [client 66.249.66.11] mod_fcgid: stderr: {severity:info, description:"Copying from /tmp/opme9BxoTj to /home/cedric/www/index/photos/custom/201110/1319447318-20110306T134526_960x960.jpg", additional:}

cedric@ssh:~$ tail -50 admin/log/error.log | grep "66.249.71.120"
[Tue Nov 29 20:02:34 2011] [warn] [client 66.249.71.120] mod_fcgid: stderr: {severity:info, description:"Copying from /tmp/opmeLwFFTk to /home/cedric/www/index/photos/custom/201110/1318771858-20100622T135041_100x100xCR.jpg", additional:}
[Tue Nov 29 20:02:34 2011] [warn] [client 66.249.71.120] mod_fcgid: stderr: {severity:warn, description:"Uncaught exception (/home/cedric/www/photos/src/libraries/external/epi/EpiException.php:13): Query error: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '10-cedric_bonhomme@orange.fr-path100x100xCR' for key 'id' - INSERT INTO op_photoVersion (id, owner, key, path) VALUES(:id, :owner, :key, :value)", additional:}
```

jmathai commented 12 years ago

@cedricbonhomme Interesting. That makes sense, since OpenPhoto doesn't care who's making the request. But I see the concern here, and it's something we should either resolve or document as a known behavior.

I'd love to hear ideas that...

  1. Allow photos to be generated on demand
  2. Secure this generation, given that we don't know who the requester will be

jmathai commented 12 years ago

I've got a proposal: limit the number of low-resolution versions of a photo to a configurable number. Here's how it would work.

  1. The max number of photo versions is 5.
  2. Photo 1 is uploaded.
  3. Photo 1 has 5 versions.
  4. A 6th version of photo 1 is requested.
  5. We check how many versions exist.
  6. Since we are maxed out we delete the oldest version.
  7. We create the requested version of the photo.

This solves the problem of filling up disk space, doesn't require complicated rate limiting, and continues to allow "anonymous" requesters to generate photos. All that's at risk is overwhelming the host's CPU with mass requests that generate and delete photos. That's a valid tradeoff IMO, since if someone wants to DoS you, they will.
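
The seven steps above boil down to a capped, evict-oldest store. A minimal Python sketch (the class name, path layout, and cap constant are hypothetical, not the actual implementation):

```python
from collections import OrderedDict

MAX_VERSIONS = 5  # the configurable cap from the proposal

class PhotoVersions:
    """Capped, evict-oldest store for on-demand photo versions (sketch)."""

    def __init__(self, photo_id):
        self.photo_id = photo_id
        self.versions = OrderedDict()  # size string -> generated path

    def get(self, size):
        if size in self.versions:
            return self.versions[size]          # already generated (steps 2-3)
        if len(self.versions) >= MAX_VERSIONS:
            self.versions.popitem(last=False)   # maxed out: drop oldest (step 6)
        # Hypothetical path layout, for illustration only (step 7).
        path = f"/photos/custom/{self.photo_id}_{size}.jpg"
        self.versions[size] = path
        return path
```

Note this evicts the oldest *created* version (FIFO), matching "delete the oldest version"; evicting the least recently *requested* one would be a small variation on the same structure.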

cedricbonhomme commented 12 years ago

I think it's a very good idea: a kind of limited cache per photo. I think that the max number of versions would depend mainly on the theme (but that's not a requirement).

> if someone wants to DOS you, they will.

True.