Hi there @berchev, thanks for reaching out. Looking at the logs for 1.6.6, it is possible that the upload of a large file is taking much longer to complete and get persisted onto the Vagrant Cloud backend storage. You can see that the 400 errors are retryable, so it's possible that we need to retry a bit more.
There was a change in v1.6.6 to how vagrant boxes are uploaded. Instead of using the API to upload, boxes are now uploaded directly to the Vagrant Cloud backend storage which looks to be S3.
Do you run into issues if you add "no_direct_upload": true to your configuration to disable the direct upload feature?
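For reference, a minimal sketch of where that setting goes in a JSON template with a vagrant-cloud post-processor (the access_token, box_tag, and version values below are placeholders, not taken from this issue):

```json
{
  "post-processors": [
    [
      {
        "type": "vagrant"
      },
      {
        "type": "vagrant-cloud",
        "access_token": "{{user `cloud_token`}}",
        "box_tag": "someuser/somebox",
        "version": "1.0.0",
        "no_direct_upload": true
      }
    ]
  ]
}
```

The vagrant-cloud post-processor is placed after the vagrant post-processor in the same sequence so it receives the packaged .box artifact.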
Hi @nywilken, thank you for the quick response!
I have just tested with the "no_direct_upload": true option set on the vagrant-cloud post-processor and can confirm that the upload is successful.
Hi @nywilken
Nice to meet you. I'm the person who originally reported this issue to @berchev. I also tried building my Packer template with the 'no_direct_upload' option, and the build and upload completed successfully, so I can work around this issue for now. However, I hope this issue will be fixed.
I cannot predict when an upload will fail; it probably depends on box size and internet speed, so it sometimes fails and at other times it does not. That leaves me no choice but to add the 'no_direct_upload' option at all times, which defeats the purpose of the option.
I would greatly appreciate it if you could give this a proper review.
@berchev Thank you very much for your kind cooperation.
Hello,
I want to mention that I reported this issue to HashiCorp support. They provided the same solution ('no_direct_upload': true) and it works.
Hi everyone. There have been a number of updates made to Vagrant Cloud which should resolve the box upload issue that was resulting in 400 errors. When no_direct_upload is false (which is the default), the box asset is uploaded directly to the backing asset storage. The TTL on the upload links was set too low to properly allow for retries, which was resulting in the errors. This has been resolved, and uploads directly to asset storage should be working as expected. Setting no_direct_upload to true will force the upload to be proxied through Vagrant Cloud but will result in slower uploads.
If any other issues are encountered with uploads to Vagrant Cloud, please feel free to open an issue in the hashicorp/vagrant repository or send an email to support.
@nywilken If you need anything else related to this issue, just let me know :slightly_smiling_face:
That's awesome, Chris -- thanks. I'll close this, and we can reopen if we see users still struggling.
Thanks @chrisroberts
Just a followup that we found an issue with direct uploads related to the size of the generated box asset. The modifications in #10820 resolve that issue.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Overview of the Issue
I tried to upload a Vagrant box (generated with Packer) to Vagrant Cloud, but I am hitting a bunch of 400 Bad Request errors when using Packer 1.6.6.
Using Packer 1.6.5, the upload is successful.
Assumptions
Maybe this is happening when the file is too big. The upload fails for me when I try to upload a 7.5 GB box.
Uploading a small box (900 MB) actually works with my xenial.json template using Packer 1.6.6. The xenial.json template and the successful upload log are provided below as gists, in the section Log Fragments and crash.log files.
Reproduction Steps
ovf-files: a .vmdk file and an .ovf file
Packer version
Packer 1.6.6, Vagrant 2.2.14
Simplified Packer Buildfile
My packer.json template is not too long, so I will paste it here:
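A rough, hypothetical sketch of a template of this general shape, assuming a virtualbox-ovf builder (since the box is built from .vmdk/.ovf files) chained into the vagrant and vagrant-cloud post-processors; every path, name, and value below is a placeholder rather than the actual file:

```json
{
  "variables": {
    "cloud_token": "{{env `VAGRANT_CLOUD_TOKEN`}}",
    "version": "1.0.0"
  },
  "builders": [
    {
      "type": "virtualbox-ovf",
      "source_path": "ovf-files/box.ovf",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "shutdown_command": "echo 'vagrant' | sudo -S shutdown -P now"
    }
  ],
  "post-processors": [
    [
      {
        "type": "vagrant"
      },
      {
        "type": "vagrant-cloud",
        "access_token": "{{user `cloud_token`}}",
        "box_tag": "someuser/somebox",
        "version": "{{user `version`}}"
      }
    ]
  ]
}
```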
Operating system and Environment details
I am using macOS, but I believe this can be hit from any OS.
Log Fragments and crash.log files
Attaching some gist files: