Open artturimatias opened 5 years ago
Streams are tricky. Are you waiting for it to finish? You could use something like end-of-stream to detect this easily.
Ideally you'd want to log out all stream errors to see if it's failing - if you could do that it might show you why no data is being written.
Thanks for the reply. The problem is that I don't get any errors and the stream seems to end properly. If I replace client.createWriteStream with fs.createWriteStream, the stream works and I get a new local file.
var eos = require('end-of-stream');
var fs = require('fs');
var path = require('path');
// `client` is the webdav client instance created with createClient(...)

var filename = "workcycles.jpg";
const local = fs.createReadStream(path.join("preview", filename));
const webdav = client.createWriteStream(filename); // this results in a zero-length file
// const webdav = fs.createWriteStream(filename); // this works

eos(webdav, function (err) {
  if (err) return console.log('webdav stream had an error or closed early');
  console.log('webdav stream has ended', this === webdav);
});

webdav.on('data', () => {
  // note: write streams normally don't emit 'data' events
  console.log('webdav data');
});

webdav.on('error', (err) => {
  console.log('webdav error', err);
});

local.pipe(webdav);
Maybe I'm just not using it right. Could you provide a working example of how one should use client.createWriteStream with Nextcloud?
@artturimatias Ok, so I tried with an example and the stream writes, but I don't get the end event at the correct moment. Here's the example I'm using:
const fs = require("fs");
const { createClient } = require("webdav");
const endOfStream = require("end-of-stream");
const client = createClient("https://somewebdav.com", {
username: "user",
password: "pass"
});
const readStream = fs.createReadStream("./webdav.jpg");
const writeStream = client.createWriteStream("/webdav.jpg");
readStream.pipe(writeStream);
endOfStream(writeStream, err => {
if (err) {
console.error(err);
return;
}
console.log("Done!");
});
I couldn't run a demo account at nextcloud.com as their service wasn't working.
Seems to be that the event comes when the write stream finishes but the request is still running. Need to think of a way to fix that, as it's most likely been around as an issue for some time. This is related to #147. You might be right that it's related to #173, but I can't be sure.
The issue is that the stream seems to drain and emit end, but the request is still ongoing. Ideally the stream would only fire the end event once the request has finished.
Thanks for the example. I tested with your code and I still got a zero-length file. If I run with debug (DEBUG=*), then I can see the options for the PUT request, which seem to be valid:
options { protocol: 'https:',
maxRedirects: 21,
maxBodyLength: 10485760,
path: '/remote.php/webdav/vepa.png',
method: 'PUT',
headers:
{ Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/x-www-form-urlencoded',
....
Yes, in your example the Content-Type is wrong... it should probably be application/octet-stream. I can't imagine why this would be.
Webdav-client doesn't set Content-Type, so it must be getting set in some default manner. You could modify this area to add the header Content-Type: application/octet-stream to see if that helps.
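A minimal sketch of overriding the default Content-Type at the client level rather than patching the library source, assuming the createClient options accept a headers object applied to all requests (this may vary by version):

const { createClient } = require("webdav");

// Assumption: a `headers` option on createClient is merged into every
// request made by this client instance.
const client = createClient("https://example.com/remote.php/webdav/", {
  username: "user",
  password: "pass",
  headers: {
    "Content-Type": "application/octet-stream"
  }
});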
Ok, there must be something odd in our Nextcloud installation. I tried with try.nextcloud.com and the upload worked (headers did not matter; I tried with modified headers and the default ones).
Sorry that I didn't test earlier with another instance. I need to check out how our installation differs from a stock installation.
Thanks for the help so far.
Here is the full code that I used for testing:
const fs = require("fs");
const { createClient } = require("webdav");
const endOfStream = require("end-of-stream");
const client = createClient("https://dummycloud.com/remote.php/webdav/", {
username: "user",
password: "pass"
});
const readStream = fs.createReadStream("./osc-vepa.png");
const writeStream = client.createWriteStream("/vepa.png");
readStream.pipe(writeStream);
endOfStream(writeStream, err => {
if (err) {
console.error(err);
return;
}
console.log("Done!");
});
@perry-mitchell I can confirm that Content-Type: application/x-www-form-urlencoded is set by default for all non-JSON requests, and I've found out that it is just the axios default (see source code).
Also, changing to application/octet-stream doesn't help with my zero-length issue; I just wanted to let you know where the wrong content type is coming from, and I guess this needs to be set in your code in the headers.
Investigating the zero-length issue further.
I just had confirmation that our Nextcloud has a load balancer in front of it, and that probably caused the stream breakage in my case.
We had another issue, #147, which might have played some part here. I know it was mentioned earlier, but it might be worth trying again now that it's fixed, as circumstances might have changed. The fix was released in 3.1.0.
Tested with 3.2.0 but no change; it still results in zero-length files in our Nextcloud. But like I said, the problem might be the load balancer in our case.
@artturimatias Please let me know if you discover otherwise. It’s hard to get streaming working perfectly.
I've noticed the createWriteStream function only works for small files but completely fails for large files (say, over 10 MB). I've tested it on my local WebDAV server, so we can rule out the possibility of the problem being caused by a load balancer.
I suspect the problem might be in the axios call inside the fetch function. It seems axios has a problem when PUTting/POSTing a stream to a remote server.
@Daniel-Pek I have tested with a 1 KB text file and a ~100 KB JPEG image; both result in a 0 KB file on the WebDAV server. The only WebDAV server I'm working with is the bigcommerce.com account WebDAV storage, which might be behind a load balancer, so I won't rule out the load balancer causing the problem.
Not sure how I can debug this problem.
@perry-mitchell Another observation: client.putFileContents works ok on the same server. So even if it is something related to the load balancer, only streaming the file fails; using client.putFileContents with any buffer read with fs.readFile works ok.
Let me know if/how I can further debug the problem to give you additional information. This issue is a blocker for us.
Hi guys, I can reproduce the issue with a local setup of Nextcloud too. No idea what happens at this point.
Edit: however, I can repro on try.nextcloud too. The code is rather simple; I am not sure what I have missed:
addFileFromReadable = async (readable, fileDefinition) => {
  const webDavClient = await this.getClient();
  const filePath = getFilePath(fileDefinition);
  const writeStream = webDavClient.createWriteStream(filePath);
  // note: pipe() does not return a promise, so this await does not
  // actually wait for the upload to finish
  await readable.pipe(writeStream);
  return fileDefinition;
};
and everything is fine if I do a fs.createWriteStream instead, so the readable is not faulty.
As pointed out by other users, the workaround is to transform the readable into a buffer and then use putFileContents instead (but I guess this uses more memory, as you load the whole file into your server's RAM when doing this).
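A minimal sketch of that buffer-based workaround, assuming a generic Node readable stream and a `client` created with createClient as in the earlier examples (helper names here are illustrative):

// Collect the readable into a single Buffer, then upload it in one go.
// Trade-off: the whole file sits in memory, so this only suits smaller files.
function streamToBuffer(readable) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    readable.on("data", chunk => chunks.push(chunk));
    readable.on("error", reject);
    readable.on("end", () => resolve(Buffer.concat(chunks)));
  });
}

async function addFileFromReadableBuffered(client, readable, remotePath) {
  const buffer = await streamToBuffer(readable);
  await client.putFileContents(remotePath, buffer, { overwrite: true });
}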
Edit: btw, big thank you for this lib. I've just finished replacing Meteor Files with Nextcloud using your package; except for this small issue, everything worked very smoothly.
Hey there,
I also stumbled over this issue. After googling around a bit I found people having issues with a missing Content-Length header on the request [1]. I tried to "hack" that header directly into createStream.js, which did indeed solve the issue for me. There's currently no way to set the header via the API of this package, so that's not a proper fix. I don't know if or when I might find the time to submit a proper fix for this by providing a way to set the headers. Until then, maybe this helps someone or inspires someone else to create a PR ;-). I'm using putFileContents() for now, which works fine enough for me.
I'm running the request against an up-to-date Nextcloud installation which also runs behind an nginx load balancer that I cannot control (it's hosted at uberspace.de). I don't know if that has something to do with it.
Best, Benjamin.
[1] i.e. https://github.com/axios/axios/issues/961#issuecomment-340615016
It's good to note that you can, at least as of v4.0.0-r01, upload a file stream using putFileContents instead of createWriteStream. That being said, I'm planning to add easy access to providing a file size value to the createWriteStream method so that this header can be set.
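A minimal sketch of the stream-based putFileContents usage mentioned above, assuming a local file and a client created as in the earlier examples:

const fs = require("fs");
const { createClient } = require("webdav");

const client = createClient("https://example.com/remote.php/webdav/", {
  username: "user",
  password: "pass"
});

async function uploadViaStream(localPath, remotePath) {
  const readStream = fs.createReadStream(localPath);
  // putFileContents accepts a readable stream as the data argument
  await client.putFileContents(remotePath, readStream, { overwrite: true });
}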
Unfortunately I have the same problem. With client.createWriteStream I only get a file with 0 bytes.
We use webdav in a backup plugin for the smart home project ioBroker.
Currently I solve it via client.putFileContents. This works, but because of fs.readFileSync it is very expensive in terms of RAM.
Is there a solution to the problem in the meantime?
Here is a link to the project: https://github.com/simatec/ioBroker.backitup/blob/007247e6c18537c54de28dc5062a23c7466e8e70/lib/scripts/65-webdav.js#L23
Hey guys, I actually have a workaround for this issue that you might want to give a go: getFileUploadLink.
This did work for me even for a very large file.
Thanks for your tip. Do you have an example of this?
@simatec putFileContents takes a stream, so you don't need to use fs.readFileSync.
Unfortunately, an upload is not possible without fs.readFileSync. A file with 0 bytes is also created here.
Hi @Daniel-Pek, what do you mean specifically by "Use the Node JS stream to pipe the files to that location by using the URL from step 1"? This is a bit complex and a piece of code would help greatly.
I don't get why it would work better than the code that already exists there: https://github.com/perry-mitchell/webdav-client/blob/master/source/operations/createStream.ts#L23
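For reference, a rough sketch of what that upload-link approach could look like, assuming getFileUploadLink returns a complete URL (with credentials embedded) that the server accepts for a plain PUT; the helper name and the explicit Content-Length are illustrative:

const fs = require("fs");
const https = require("https");
const { createClient } = require("webdav");

const client = createClient("https://example.com/remote.php/webdav/", {
  username: "user",
  password: "pass"
});

// Hypothetical helper: PUT a local file straight to the upload link,
// bypassing the library's streaming code entirely.
function uploadViaLink(localPath, remotePath) {
  return new Promise((resolve, reject) => {
    const uploadLink = client.getFileUploadLink(remotePath);
    const { size } = fs.statSync(localPath);
    const req = https.request(uploadLink, {
      method: "PUT",
      headers: { "Content-Length": size }
    }, res => {
      res.resume();
      res.on("end", () => resolve(res.statusCode));
    });
    req.on("error", reject);
    fs.createReadStream(localPath).pipe(req);
  });
}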
Some thoughts on 4.0.0, testing against Nextcloud: Uploading files > 2 GB seems to be impossible at the moment.
- putFileContents only works for files with a maximum size of 2 GB, no matter which value maxBodyLength has
- putFileContents does not work when passing streams (e.g. client.putFileContents('filename', fs.createReadStream('filename'))), but works using fs.readFileSync
- createWriteStream creates zero-length files (despite listening to the finish event)
My use case:
Will sporadically continue my debugging and update this post if I have new information.
That (adding the Content-Length header, as described by @bennigraf above) seems to work fine for now:
const fs = require("fs");
const endOfStream = require("end-of-stream");
// `client` is a webdav client created with createClient(...)

function writeStream(localFilePath, remoteFilePath) {
  fs.stat(localFilePath, (err, stats) => {
    // a try/catch around fs.stat would not catch errors thrown inside this
    // callback, so handle them here instead
    if (err) return console.error(err);
    const readStream = fs.createReadStream(localFilePath);
    const writeStream = client.createWriteStream(remoteFilePath, {
      headers: { "Content-Length": stats.size },
    });
    readStream.pipe(writeStream);
    endOfStream(writeStream, (err) => {
      if (err) return console.error(err);
      console.log("Done!");
    });
  });
}
(Why doesn't it format correctly?) - Fixed 🙂
Thanks @bennigraf!
The putFileContents method supports overriding the content-length header, so using this together with a stream should work. It's unfortunate that some servers require this header, as streaming is often performed without knowing the full file size beforehand.
I'll try to set aside some more testing time regarding uploading large 2 GB+ files.
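A small sketch of combining a stream with an explicit content length via putFileContents, assuming the options accept a contentLength value as the maintainer indicates above, and using fs.stat to determine the size first:

const fs = require("fs");
const { createClient } = require("webdav");

const client = createClient("https://example.com/remote.php/webdav/", {
  username: "user",
  password: "pass"
});

async function uploadStreamWithLength(localPath, remotePath) {
  const { size } = await fs.promises.stat(localPath);
  const readStream = fs.createReadStream(localPath);
  // Passing the file size so the request carries a Content-Length header
  await client.putFileContents(remotePath, readStream, {
    overwrite: true,
    contentLength: size
  });
}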
webdav: 4.10.0, WebDAV server: Apache 2.4.39, over HTTPS, with AuthType Digest and httpsAgent: new https.Agent({ rejectUnauthorized: false }), tested in a LAN (no firewalls, IDS, etc.)
Initially I also ran into the problem with createWriteStream(...) and zero-size files. But later I realized that while uploading via a stream, I was creating another WebDAVClient at the same time, and that somehow garbles the transfer of the previous one.
So, instead of creating new clients for each operation within the same folder, I cached the first one and reused it, destroying it afterwards. It works fine and I do not modify headers.
I found that out when I turned on forensic logging on the httpd side and saw two PUT requests for the same file, one of them with 0 content length. I cannot say how they intertwined, but caching solved the problem.
Also, if you are doing stat(..) right after writeStream.on('close', ...), you will probably get a false result, because the WebDAV server needs some small time (100 ms - 400 ms - 1 s) to make the file available.
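A minimal sketch of that client-caching pattern, with a short delay before the follow-up stat; the URL, delay value, and helper names are illustrative:

const { createClient } = require("webdav");

let cachedClient = null;

// Reuse one client for all operations instead of creating a new
// WebDAVClient per upload, which appeared to garble in-flight transfers.
function getClient() {
  if (!cachedClient) {
    cachedClient = createClient("https://example.com/remote.php/webdav/", {
      username: "user",
      password: "pass"
    });
  }
  return cachedClient;
}

async function statAfterUpload(remotePath) {
  // Give the server a moment before stat-ing the freshly uploaded file
  await new Promise(resolve => setTimeout(resolve, 500));
  return getClient().stat(remotePath);
}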
Hi, I have encountered the same problem in React Native. Node can use fs to obtain a stream; what should be done in React Native?
const imageBuffer = fsInterface.createWriteStream(localUrl);
await this.client.putFileContents(cloudPath, imageBuffer, { overwrite: true });
const writeStream = await client.createWriteStream(
  filePath,
  {
    headers: {
      'content-type': file.mimetype,
      'content-length': file.size
    }
  }
);
I'm writing a Nextcloud client application with Node.js and webdav-client. I have no problems reading the contents of the Nextcloud server, but uploading files with client.createWriteStream creates zero-length files.
This is how I'm using client.createWriteStream:
I can upload files with curl or cadaver so this is not a permission problem:
Am I doing something wrong here? Maybe this is related to #173?
version: 2.10.0