olofster opened this issue 10 years ago
note for myself: https://console.aws.amazon.com/s3/home?region=us-west-2 (link to the bucket)
Just realized this is a different S3 account than the one we already have. For now I think we should use the existing S3 bucket, and later swap it out for this one in a separate GitHub issue.
which is the S3 account we already have? what are we storing there?
It's one I set up, currently has sponsor logos and project thumbnails, which use Paperclip with S3.
let's definitely use your one then. Once we have the pics I can get credentials and do the rest:)
Ach. The Rake task works perfectly on a real local web server but not on Heroku, with its restricted ephemeral filesystem.
Getting 403 errors when I upload now, even running locally. Posted the issue on Stack Exchange since this is no longer saving time. Plan B: copy them into git and serve from Heroku like we do the header images.
hey I can just upload to my S3
lmk
So, after trying to get this to work on and off over the weekend, it seems there are several problems with automating this. Mostly, the files are getting corrupted on the first download from Twitter (or anywhere else); open-uri seems to be breaking JPEGs, and there'd also have to be a way to detect different file types. On top of that, Twitter is being weird about automated downloading (this was the source of the 403 error that came up; testing with public Tumblr images downloaded (corrupted) images but didn't 403). Given other tasks I really need to tackle, it might be best to do this manually. :[
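For reference, the corruption described above is consistent with writing the bytes in text mode instead of binary, and the 403 is consistent with Twitter rejecting open-uri's default User-Agent. A minimal sketch of a binary-safe download with Content-Type-based extension detection (the helper names and the User-Agent string are assumptions, not from the actual rake task):

```ruby
require 'open-uri'

# Map Content-Type headers to file extensions so we don't have to
# guess the type from the URL (hypothetical helper).
CONTENT_TYPE_EXTENSIONS = {
  'image/jpeg' => '.jpg',
  'image/png'  => '.png',
  'image/gif'  => '.gif'
}.freeze

def extension_for(content_type)
  # Drop any "; charset=..." suffix before the lookup.
  CONTENT_TYPE_EXTENSIONS.fetch(content_type.to_s.split(';').first, '.bin')
end

# Download a remote image, writing in binary mode ('wb') so JPEG bytes
# aren't mangled, and sending a browser-ish User-Agent since some hosts
# 403 the default open-uri one.
def download_image(url, dest_basename)
  URI.open(url, 'User-Agent' => 'Mozilla/5.0') do |remote|
    path = dest_basename + extension_for(remote.content_type)
    File.open(path, 'wb') { |f| f.write(remote.read) }
    path
  end
end
```

This still writes to the local filesystem, so it sidesteps the corruption but not the Heroku problem.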
k. mind doing a db dump of first name / last name / profile pic URL of all participants for now? (I can mechanical turk it or taskrabbit it later on)
Would it be best to create a CSV with those fields? Heroku's dumps are only readable in Postgres.
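A minimal sketch of that export using Ruby's stdlib CSV. The field names (first_name, last_name, profile_pic_url) and the Participant model are assumptions; the sketch takes plain hashes so it works the same whether the rows come from ActiveRecord or a query dump:

```ruby
require 'csv'

# Build CSV text from rows shaped like
# { first_name: ..., last_name: ..., profile_pic_url: ... }.
# In the app this would be fed from Participant.all (assumed model).
def participants_to_csv(participants)
  CSV.generate do |csv|
    csv << %w[first_name last_name profile_pic_url]
    participants.each do |p|
      csv << [p[:first_name], p[:last_name], p[:profile_pic_url]]
    end
  end
end
```

A one-off rake task (or a `heroku run console` session) could then just `File.write('participants.csv', participants_to_csv(rows))` and the file can be mailed off.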
yeah if you can 'export' it to CSV somehow that'd be great
CSV sent.
404'ing Afterglow participant pics have been moved to S3 and are displaying now. Moving this item to a post-Afterglow milestone.
Olof to look into non-API way of getting the pics
you probably know this, but does this help? https://twitter.com/api/users/profile_image?screen_name=olofster (public)
Hmmm... these are the really small profile pics (48px x 48px), smaller than the 70px x 70px the avatar-image CSS class expects. I'll keep looking though.
Just created OAuth creds so we can get the full-sized images, maybe, if this works. As I recall, though, Heroku won't allow images to be downloaded into its ephemeral filesystem, and I don't know how to check whether files actually showed up there: since it's a cloud service, the Heroku console doesn't let me just run "ls" on the directory the images get downloaded to. Also, the call to get the profile image URL and the call to download the image are separate; the second one might need a custom HTTP header with all the OAuth stuff that would normally be handled by a library, but the library only has functions to get the URL of the profile pic, not the actual image.
I'm kind of amazed at how "moving files from one place on the internet to another" is so complicated in this particular case. I'm tempted to just make this a Chrome extension for the admin page.
So, for the sake of my sanity I'm going to put any problematic twitter photos into S3 manually and plug in the public S3 links in the admin page for Disnovate participants.
Hey @huertanix - I just created an S3 bucket so that profile pics can be stored properly, e.g. https://s3-us-west-1.amazonaws.com/arthackdaywebsite/kim.png
We should replace the current links to Twitter etc. with our own Amazon links. What are your thoughts on the best process: a script, or asking Elance / Fancy Hands?
These are proposed steps: //David via script or fancyhands
thoughts?
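If we go the script route, one shape it could take, sketched under assumptions (the Participant model, its field names, and the key-naming helper are all hypothetical; the bucket name and region come from the example link above): fetch each current pic into memory, put it straight to S3 with the AWS SDK, and point the record at the new URL, never touching Heroku's ephemeral filesystem.

```ruby
require 'open-uri'

BUCKET = 'arthackdaywebsite' # from the example bucket link above
REGION = 'us-west-1'

# Derive a stable S3 key from a participant's name, matching the
# "kim.png"-style keys in the example link (hypothetical helper).
def s3_key_for(first_name, last_name, ext = '.png')
  [first_name, last_name].compact.join('-').downcase.gsub(/[^a-z0-9-]+/, '') + ext
end

# Mirror one avatar from its current URL into S3 entirely in memory,
# then repoint the record at the public S3 URL.
def mirror_to_s3(participant)
  require 'aws-sdk-s3' # gem 'aws-sdk-s3'; credentials via ENV
  body = URI.open(participant.profile_pic_url, 'User-Agent' => 'Mozilla/5.0').read
  key  = s3_key_for(participant.first_name, participant.last_name)
  Aws::S3::Client.new(region: REGION)
    .put_object(bucket: BUCKET, key: key, body: body, acl: 'public-read')
  participant.update!(profile_pic_url: "https://s3-#{REGION}.amazonaws.com/#{BUCKET}/#{key}")
end
```

The in-memory `put_object(body: ...)` call is the piece that sidesteps the ephemeral-filesystem problem from earlier in the thread. Fancy Hands could alternatively work from the CSV and do the same thing by hand.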