mfolkeseth opened 3 years ago
Interesting, do you get the same if you run `npx @eik/cli@next publish`?
EDIT: You could also try adding the `--debug` flag to see if that gives you any more insight. The CLI tends to swallow a fair amount of info by default. You might also try logging inside `node_modules/@eik/cli/classes/publish/package/tasks/upload-files.js` to see if you can get some more info about the error.
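As a minimal sketch of that kind of temporary logging (the function name and shape here are illustrative, not the actual `upload-files.js` code; Node 18+ global fetch is assumed), you could wrap the upload call so a failing response or a thrown error is printed instead of being swallowed:

```javascript
// Hypothetical helper: run an upload and log HTTP failures before
// handing the result back. `doUpload` stands in for whatever call
// upload-files.js makes against the Eik server.
async function uploadWithLogging(doUpload) {
    try {
        const res = await doUpload();
        if (res.status >= 400) {
            // Surface the status the server actually responded with.
            console.error('upload failed:', res.status, res.statusText);
        }
        return res;
    } catch (err) {
        // Network-level errors never reach the response path; log and rethrow.
        console.error('upload threw:', err.message);
        throw err;
    }
}
```

Dropping a couple of `console.error` lines like these around the request in the installed file is usually enough to see which status codes come back.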
The same issue happens when using `@next`.
The error produced in `upload-files.js` is the following:

```
url: 'https://servername.no/pkg/my-package/1.0.22?t=1624522654690',
status: 503,
statusText: 'Backend fetch failed',
headers: Headers { [Symbol(map)]: [Object: null prototype] },
counter: 0
```
I suspect that there may be some slowness somewhere between the Eik service and the Google Cloud bucket. This is my best guess so far, as everything seems to work out if we just keep publishing new packages more rapidly. "Nudging" the service, if you will. Any similar experiences on your end?
Hi again @digitalsadhu,
Any thoughts on this? Can it be that the server swallows some errors along the way as well? Somewhere within the `pkgPut` handler in core?
How large is the package you're trying to upload? How many files does it contain, and what is the total file size?
2 files. Debug tells me an approximate total size of 40 kB.
I'm pretty sure this HTTP error is surfaced from the server's underlying requests to GCS:
```
url: 'https://servername.no/pkg/my-package/1.0.22?t=1624522654690',
status: 503,
statusText: 'Backend fetch failed',
headers: Headers { [Symbol(map)]: [Object: null prototype] },
counter: 0
```
I'm not really able to dig up a clear answer on why GCS would respond with this. This is perhaps the closest I've got at the moment: https://stackoverflow.com/questions/54150761/why-do-i-see-503-backend-error-when-i-try-to-get-a-cloud-storage-service-accou/54246405#54246405
We have seen some issues with GCS, to be honest, and what we've seen might be related since this is a pretty small package.
So, when we publish a package to the server, the client packages the files into a tar file and uploads it to the server with an HTTP `PUT`. The Eik server then unpacks this tar file and uploads each file to the sink (in our case here, GCS). While doing so, the server also collects data about the files in the tar, and when all files are written to the sink, a set of meta files is also written to the sink.
When the meta files are written, the PUT operation is done and the Eik server responds with an HTTP redirect to one of the meta files written. The Eik client seamlessly follows this redirect and does a GET to fetch the meta file, and once fetched, info from this file is printed by the Eik client. Very, very REST-ishy.
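The round trip above can be sketched roughly like this (the function name, URL, and auth header are illustrative, not the actual Eik client code; Node 18+ global fetch is assumed):

```javascript
// Sketch of the publish flow: PUT the tarball, let the server respond
// with a redirect to the meta file, and read that meta file back.
async function publishTarball(serverUrl, tarballBuffer, token) {
    // fetch follows the redirect by default, and a 303 turns the
    // follow-up request into a GET, so the resolved response body
    // is the meta file the server redirected to.
    const res = await fetch(serverUrl, {
        method: 'PUT',
        body: tarballBuffer,
        headers: { Authorization: `Bearer ${token}` },
    });
    return res.json();
}
```

The neat property of this shape is that a single client call covers both the write and the read-back, which is also why a read-side lag in the sink can surface as a failed publish.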
What we have seen, though, is that sometimes when we write files to GCS, retrieving a file just some milliseconds after it was written fails. It seems like files written to GCS might have a small lag between when they are written and when they can be read.
This might be what is happening here.
Currently the client is a bit bad at detecting whether it's the write to GCS that errors or the read. Are you able to access your GCS admin panel and see if the files are stored in GCS?
We have sent a lot of uploads through `@eik/cli` lately and the files arrive at the bucket every time. We confirm this both through the admin panel and by retrieving the uploaded files through the Eik API.
All files in the following image are working files where the CLI reported the above error upon upload:
Then it's the read which is causing the error, not the publish.
It seems the approach Google recommends for this is an exponential retry: https://cloud.google.com/storage/docs/json_api/v1/status-codes#503_Service_Unavailable
I'll look into adding one.
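A minimal sketch of such an exponential backoff (names and parameters are illustrative, not the actual implementation that would land in the client) could look like this:

```javascript
// Retry an async operation with exponential backoff and jitter,
// along the lines of the GCS guidance linked above.
async function retryWithBackoff(fn, { retries = 5, baseDelayMs = 100 } = {}) {
    for (let attempt = 0; ; attempt++) {
        try {
            return await fn();
        } catch (err) {
            if (attempt >= retries) throw err;
            // Delay doubles each attempt; jitter avoids thundering herds.
            const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
            await new Promise((resolve) => setTimeout(resolve, delay));
        }
    }
}
```

Wrapping the read-back of the meta file in something like `retryWithBackoff(() => fetchMetaFile(url))` would paper over the short read-after-write lag without changing the publish path.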
Thanks a bunch! 👍
Hi,
Lately we have seen an issue happening more often than not on Eik (latest version).
Any idea what is causing this? I am currently having a hard time identifying whether this is a service issue or a CLI issue. We are using the Google sink.
EDIT: All files arrive at the Google bucket as expected, but the CLI does not clean up `.eik` after itself.