Closed jbrukh closed 11 years ago
I'm not sure what's causing this, but it's something to do with either:
The solution will be to upload directly to S3; I will investigate the best way to do this.
Details for direct uploads are here:
http://aws.amazon.com/articles/1434?_encoding=UTF8&jiveRedirect=1
What's going to have to happen is the following:
When calling upload I will give you the following:
{
  type: ("s3", "direct"),
  local: [boolean],
  endpoint: [string],
  token: [string],
  resource_id: [string],
  aws_access_key_id: [string],
  policy: [string],
  signature: [string]
}
When the type is "direct" you will do what you do already; if it's "s3" you will follow the instructions above, and when you get the success_action_redirect you will extract the bucket, etag, and key values from the URL and put those in the message you send on upload success.
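The extraction step above could be sketched like this with the standard library's net/url package. It assumes S3 appends bucket, key, and etag as query parameters to the redirect URL, as described in the AWS browser-based-upload article linked earlier; the helper name is hypothetical.

```go
package main

import (
	"fmt"
	"net/url"
)

// extractS3Result pulls the bucket, key, and etag query parameters that S3
// appends to the success_action_redirect URL. Parameter names follow the
// AWS browser-based-upload docs.
func extractS3Result(redirect string) (bucket, key, etag string, err error) {
	u, err := url.Parse(redirect)
	if err != nil {
		return "", "", "", err
	}
	q := u.Query() // query values are percent-decoded for us
	return q.Get("bucket"), q.Get("key"), q.Get("etag"), nil
}

func main() {
	b, k, e, err := extractS3Result(
		"https://example.com/done?bucket=mybucket&key=uploads%2Ffoo&etag=%22abc123%22")
	if err != nil {
		panic(err)
	}
	fmt.Println(b, k, e) // mybucket uploads/foo "abc123"
}
```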
This is actually quite a complicated change, so we'll need to make sure we do it right. I want to keep backwards compatibility so we can use the app in dev without having to use S3.
Let me know what you think
kk, I'll start looking at this.
On Thu, Aug 15, 2013 at 8:36 PM, Jonathan Goldman notifications@github.com wrote:
Please send upload_params instead of S3-specific fields, where upload_params is an associative array (containing the fields).
This is instrumented in goavatar; also, endpoint and token are now parsed as keys/values in upload_params.
This is done, but we should have a good beating on it to make sure it all works right.
Nice.
On Sat, Aug 17, 2013 at 10:32 PM, Jonathan Goldman notifications@github.com wrote:
Closing, looks like it works.
The files do come through though...
handler.go:190: Octopus Socket: SENDING FILE var/local/a68e1f66-ea75-7fca-8056-26705d6fe41c
handler.go:132: Octopus Socket: RECEIVED {"token":"xCJv9xJn4yxEhgQgdPmw","resource_id":"a68e1f66-ea75-7fca-8056-26705d6fe41c","endpoint":"https://octopusmetrics.com/api/recordings/15/results","local":true,"id":"438332","message_type":"upload"}
upload.go:20: uploading file var/local/a68e1f66-ea75-7fca-8056-26705d6fe41c to endpoint: https://octopusmetrics.com/api/recordings/15/results
upload.go:95: UPLOAD REQUEST -------------------------------------
upload.go:97: UPLOAD RESPONSE -------------------------------------
{"status":"500","error":"Internal Server Error"}
handler.go:180: Octopus Socket: RESPONDED &{Id:438332 MessageType:upload Success:false Err:failed to upload, status: 500}