nextcloud / desktop

πŸ’» Desktop sync client for Nextcloud
https://nextcloud.com/install/#install-clients
GNU General Public License v2.0

"Connection closed" message when syncing files larger then +- 100Mb #4278

[Open] Jhutjens92 opened this issue 2 years ago

Jhutjens92 commented 2 years ago

I have my Nextcloud installation running in a Docker container. It's connected to a MySQL DB (another Docker container) and exposed to the web using SWAG. All the relevant php.ini/config files have been updated with the appropriate settings.

Uploading via the web server is no problem, but whenever I try to sync the same file using the Windows sync client I receive a "Connection closed" error.

Expected behaviour

Files should just upload to the Nextcloud server.

Actual behaviour

Files aren't being uploaded, and the client throws an error.

Steps to reproduce

  1. Run Nextcloud in a Docker container.
  2. Use SWAG (Docker container) to link a (sub)domain to Nextcloud.
  3. Upload a large file (larger than 100 MB; in my case an .mp4).
  4. See the response in the client log.

Client configuration

Client version: 3.4.2

Operating system: Microsoft Windows 10 Pro (10.0.19041 Build 19041)

OS language: Dutch

Installation path of client: C:\Program Files\Nextcloud

Nextcloud version: Nextcloud Hub II (23.0.0)

Storage backend: Local server storage

Logs

  1. Client logfile: Client_20220213_2039_owncloud.log.0.txt

  2. Web server error log: N.A.

  3. Server logfile: nextcloud log (data/nextcloud.log): nextcloud.log

(ignore the failed login)

JustArchi commented 1 year ago

I'm surprised this issue has been open for so long; maxChunkSize should be something like 95 MB by default, and that would solve the problem for the majority, if not all, users.

IsmaStifler commented 1 year ago

After a lot of searching for the perfect configuration that works with the Nextcloud desktop client behind Cloudflare's free proxy and DNS (which impose a 100 MB upload limit), this is what works for my client version (clientVersion=3.9.0stable-Win64, build 20230613). Reducing the maximum chunk size to 98 MB keeps uploads from colliding with the 100 MB limit, allowing for the Γ—1024 factor when converting bytes to MB.

maxChunkSize=98000000
minChunkSize=100
targetChunkUploadDuration=6000
chunkSize=50000000

added to %APPDATA%\Nextcloud\nextcloud.cfg

[General]
clientVersion=3.9.0stable-Win64 (build 20230613)
isVfsEnabled=false
overrideLocalDir=
overrideServerUrl=
updateSegment=92
confirmExternalStorage=true
crashReporter=true
monoIcons=false
newBigFolderSizeLimit=70000
optionalServerNotifications=false
showCallNotifications=false
showInExplorerNavigationPane=true
useNewBigFolderSizeLimit=true
maxChunkSize=98000000
minChunkSize=100
targetChunkUploadDuration=6000
chunkSize=50000000
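(For what it's worth, a quick sanity check of those numbers, assuming Cloudflare's free-tier cap is counted in binary megabytes; if it's decimal, 98,000,000 bytes still fits, just with less margin:)

echo $((100 * 1024 * 1024))   # 104857600 bytes - the assumed 100 MB limit
echo $((98 * 1000 * 1000))    # 98000000 bytes - the configured maxChunkSize, below the limit either way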

TheQwenton commented 1 year ago

where do you enter this data for the AIO version of nextcloud? my file is located in /var/lib/docker/volumes/nextcloud_aio_nextcloud/data/config/config.php but i cannot find where to enter this

I put mine in the client config, not the server config. Unless I misunderstood and your client is in Docker? If there is a server-side setting that takes this value, I don't know of it.

I'm not sure I understand - the client is the desktop application, isn't it? My Nextcloud is hosted in a container in Portainer that I connect to via my PC, phone, etc. Where are you entering this command?

This config isn't for the server, so no changes need to be made server-side. It's for the client.

I've included the instructions that someone else posted. You will need to do this on every machine the Nextcloud Desktop client is running on. (This does not apply to the mobile apps, so iOS and Android are fine.)

Windows fix: press Win+R on your keyboard to open the Run application and paste %APPDATA%\Nextcloud\nextcloud.cfg in the dialog box. This will either ask you to pick an application to open nextcloud.cfg or open it in your default text editor; Notepad or any other editor is fine. Add the line maxChunkSize=50000000 under the [General] section. Save the file, quit Nextcloud desktop, and start it again.

macOS fix: open a Finder window and press Command+Shift+G to bring up the 'Go to folder' window, then paste $HOME/Library/Preferences/Nextcloud. Open the nextcloud.cfg file (TextEdit works if you have no default editor for .cfg files). Add the line maxChunkSize=50000000 under the [General] section. Save the file, quit Nextcloud desktop, and start it again.

Linux fix: open a terminal window and edit the file with nano $HOME/.config/Nextcloud/nextcloud.cfg. Add the line maxChunkSize=50000000 under the [General] section. Save the file (Ctrl+O, Ctrl+X), then quit Nextcloud desktop and start it again.

THANK YOU SO MUCH

skaskiewicz commented 1 year ago

I didn't test this 'fix,' but the old version of the client (3.1.3 for Windows and Linux) works perfectly.

SODDINGIT commented 1 year ago

I was losing my mind and spent hours removing/increasing limits in SWAG as well as Nextcloud and PHP. Once I realized that uploads through the web worked, I came here and found this fix.

Works perfectly under Linux.

skaskiewicz commented 1 year ago

mobile

I was just checking the Nextcloud client on Android a moment ago, and it seems to have the same error. However, due to my limited knowledge of Android, I'm not able to confirm it 100%. With the older version 3.14.2 I don't see any errors in my web server's logs, whereas the newest version causes unstable operation of my web server.

anultravioletaurora commented 1 year ago

I ran into this same issue today. I'm routing traffic through Cloudflare, but was able to upload my larger files through the web UI.

Putting the aforementioned change in my nextcloud.cfg file fixed it for me. I'm using the 3.9.3 desktop client on Linux.

LokeYourC3PH commented 1 year ago

It's tragic that this is still "Open" and still hasn't been looked at or addressed by the devs. Terrible, really.

ThatTallGuy21 commented 1 year ago

It's tragic that this is still "Open" and still hasn't been looked at or addressed by the devs. Terrible, really.

Especially when this was reported over 1.5 years ago.

mlopezcoria commented 1 year ago

I randomly put this into nextcloud.cfg (in the [General] section) and somehow it works for me. (I don't know if it works for other people or not.)

chunkSize=10000000
minChunkSize=1000000
maxChunkSize=50000000
targetChunkUploadDuration=6000

Note: I use version 3.4.3 on Manjaro Linux. Note 2: I read https://docs.nextcloud.com/desktop/3.0/advancedusage.html because I didn't see the [General] section documented for 3.4 πŸ€”

I too can confirm that this solved the issue (Nextcloud client 3.4.3, Manjaro). Setting only "chunkSize" does not work; I had to set all 4 settings in the config file. I don't really understand why, but hey, it works. Thank you ❀️

Thank you for your help, I did this and it worked.

The documentation link is outdated, though. Now it's this: https://docs.nextcloud.com/desktop/3.10/advancedusage.html

szaimen commented 1 year ago

A fix is being worked on in https://github.com/nextcloud/desktop/pull/4826

anultravioletaurora commented 1 year ago

Great to hear ☺️ TY!!

TheJags commented 1 year ago

I too am getting "Connection closed" errors for 2 specific files (10.5 MB and 31.8 MB) on Ubuntu 23.04 Lunar. The rest of the files (including a 281 MB .tgz file) sync just fine.

Nextcloud client version is 3.10.50. I'm syncing these files with nch.pl.

QNetworkReply::RemoteHostClosedError "Connection closed" QVariant(Invalid)

The solution suggested by many is not working for me.

nano $HOME/.config/Nextcloud/nextcloud.cfg

and add:

[General]
chunkSize=10000000
minChunkSize=1000000
maxChunkSize=50000000
targetChunkUploadDuration=6000

In my case the files are well below 50 MB:

1. places.sqlite - 10.5 MB (the file comes from Firefox)
2. Bookmarks - 31.8 MB (there's no file extension; the file comes from Vivaldi)

These 2 files reside in a separate Nextcloud directory. In other words, I am NOT syncing these files directly from the Firefox or Vivaldi profile directories.

First, I transfer selected browser data files into a separate Nextcloud directory (within my home directory), and from there they are synced with nch.pl.

Syncing worked okay for a few weeks, but around Sep 15, 2023 or so, these "Connection closed" errors started showing up.

(Screenshot: Nextcloud_Sync_Error)

Related log entries:

[ warning nextcloud.sync.networkjob ./src/libsync/abstractnetworkjob.cpp:221 ]: QNetworkReply::RemoteHostClosedError "Connection closed" QVariant(Invalid)

[ warning nextcloud.sync.credentials.webflow ./src/gui/creds/webflowcredentials.cpp:208 ]:  QNetworkReply::RemoteHostClosedError

[ warning nextcloud.sync.credentials.webflow ./src/gui/creds/webflowcredentials.cpp:209 ]:  "Connection closed"

[ info nextcloud.sync.networkjob.put ./src/libsync/propagateupload.cpp:87 ]:    PUT of "https://nch.pl/remote.php/dav/uploads/<UserName>/<000000000>/00001" FINISHED WITH STATUS "RemoteHostClosedError Connection closed" QVariant(Invalid) QVariant(Invalid)

[ warning nextcloud.sync.propagator ./src/libsync/owncloudpropagator.cpp:284 ]: Could not complete propagation of "Browser_Profiles/Firefox/places.sqlite" by OCC::PropagateUploadFileNG(0x55c1c36a1e80) with status OCC::SyncFileItem::NormalError and error: "Connection closed"

[ warning nextcloud.gui.activity ./src/gui/tray/usermodel.cpp:878 ]:    Item  "Browser_Profiles/Firefox/places.sqlite"  retrieved resulted in  "Connection closed"

[ warning nextcloud.gui.activity ./src/gui/tray/usermodel.cpp:840 ]:    Item  "Browser_Profiles/Firefox/places.sqlite"  retrieved resulted in error  "Connection closed"

[ warning nextcloud.sync.networkjob ./src/libsync/abstractnetworkjob.cpp:221 ]: QNetworkReply::RemoteHostClosedError "Connection closed" QVariant(Invalid)

[ warning nextcloud.sync.credentials.webflow ./src/gui/creds/webflowcredentials.cpp:208 ]:  QNetworkReply::RemoteHostClosedError

[ warning nextcloud.sync.credentials.webflow ./src/gui/creds/webflowcredentials.cpp:209 ]:  "Connection closed"

[ info nextcloud.sync.networkjob.put ./src/libsync/propagateupload.cpp:87 ]:    PUT of "https://nch.pl/remote.php/dav/uploads/<UserName>/<000000000>/00001" FINISHED WITH STATUS "RemoteHostClosedError Connection closed" QVariant(Invalid) QVariant(Invalid)

[ warning nextcloud.sync.propagator ./src/libsync/owncloudpropagator.cpp:284 ]: Could not complete propagation of "Browser_Profiles/Vivaldi/Bookmarks" by OCC::PropagateUploadFileNG(0x55c1c31bbf70) with status OCC::SyncFileItem::NormalError and error: "Connection closed"

[ warning nextcloud.gui.activity ./src/gui/tray/usermodel.cpp:878 ]:    Item  "Browser_Profiles/Vivaldi/Bookmarks"  retrieved resulted in  "Connection closed"

[ warning nextcloud.gui.activity ./src/gui/tray/usermodel.cpp:840 ]:    Item  "Browser_Profiles/Vivaldi/Bookmarks"  retrieved resulted in error  "Connection closed"

Any help is greatly appreciated.

BenPicard commented 1 year ago

I randomly put this thing into nextcloud.cfg (in [General] section) and somehow it works for me. […]

chunkSize=10000000
minChunkSize=1000000
maxChunkSize=50000000
targetChunkUploadDuration=6000

[…]

Same problem on my macOS client with a 250 MB .mp4 file. Those lines added to the config file fixed the error.

matdave commented 12 months ago

I randomly put this thing into nextcloud.cfg (in [General] section) and somehow it works for me. […]

Thanks! This helped on Fedora. For the latest version, the link is https://docs.nextcloud.com/desktop/latest/advancedusage.html

ThatTallGuy21 commented 11 months ago

Hi all - I added a question here specifically about this issue, in the hopes that the co-founder of Nextcloud can speak to their process for prioritizing enhancements and defects, as well as if/when this one will be resolved. All questions in the linked forum will be discussed in an upcoming episode of the Uncast show. Show your support by liking any of the comments found there.

Matth3wW commented 11 months ago

Hi everybody, thanks for finding this. I found that simply setting the maximum chunk size to 50 MB (half of Cloudflare's 100 MB upload size limit) resolved this issue.

I put together a short guide to fixing this issue with the latest stable release (3.4.4, but it should work on any client v3.4+). I tried to make it as accessible as possible to follow.

Windows Fix

Press Win+R on your keyboard to open the Run application. Paste the following in the dialog box:

%APPDATA%\Nextcloud\nextcloud.cfg

This will either ask you to pick an application to open nextcloud.cfg or will open in your default text editor (unless you have something else set to open .cfg files). If it asks you to pick an application, feel free to use Notepad or any other editor.

Add the following line under the [General] section:

maxChunkSize=50000000

Save the file, quit Nextcloud desktop, and start it again.

MacOS Fix

Open a Finder window and press Command+Shift+G on your keyboard. This will bring up a 'Go to folder' window. Paste the following in the dialog box:

$HOME/Library/Preferences/Nextcloud

Open the nextcloud.cfg file. If you do not have a default editor for .cfg files, feel free to open the file with TextEdit.

Add the following line under the [General] section:

maxChunkSize=50000000

Save the file, quit Nextcloud desktop, and start it again.

Linux Fix

Open a terminal window and edit the following file:

nano $HOME/.config/Nextcloud/nextcloud.cfg

Add the following line under the [General] section:

maxChunkSize=50000000

Save the file (Ctrl+O, Ctrl+X), then quit Nextcloud desktop, and start it again.
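(If you'd rather script the Linux edit - a minimal sketch, assuming the key isn't already present and the client is not running; it appends the line directly under the [General] header:)

grep -q '^maxChunkSize=' "$HOME/.config/Nextcloud/nextcloud.cfg" || \
  sed -i '/^\[General\]/a maxChunkSize=50000000' "$HOME/.config/Nextcloud/nextcloud.cfg"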

I wanted to add that this fixed my issue, and I don't have Cloudflare implemented anywhere in my stack. This was happening to me on a local network: Nextcloud on a Debian container in Proxmox, and the Nextcloud client on Kubuntu 23.10. I was able to upload large files via the web interface, but not with the client on the same computer.

codefaux commented 10 months ago

Yet another voice to say this fixed my problem. Desktop to server, literally two feet apart, and the Windows client was spitting out error after error after error: "Connection closed."

I spent literally four hours chasing my own ass trying to figure out how my -server- was misconfigured.

Known issue referenced in several other issues for over a year.

PR seems to be approved except that it keeps breaking due to forward movement before someone can click the f-ing button.

Meanwhile, Nextcloud looks like trash as a desktop file sync option, because people who actually know what they're doing (not just the average point-and-click user, but every class of user from basic to expert) run into this issue.

I've run several update cycles of both the client and the server and I keep expecting it's just me or it'll get fixed, but somehow it really turns out to be neither.

I love the project. I haven't cared about 80% of the most recent updates to random desktop tools and apps; meanwhile, the core functionality is broken, with a known, proposed, and accepted fix.

Can we get some people to click some buttons on this or what? The Calendar or whatever can wait. Core functionality is broken for a fair portion of your userbase.

vithusel commented 10 months ago

Yet another voice to say this fixed my problem. […] Can we get some people to click some buttons on this or what? The Calendar or whatever can wait. Core functionality is broken for a fair portion of your userbase.

Although the team at Nextcloud owes us nothing, as this is an open-source project, I certainly agree about the PR being stuck in an endless loop. This became enough of an issue for me, along with issues with groupfolders, that I have abandoned Nextcloud entirely and moved to a different product for my storage needs, and I was surprised to find basic functions such as thumbnail generation and previews tremendously faster compared to Nextcloud. It doesn't have all the bells and whistles, but it certainly has all the feature sets a storage platform needs. I'll likely be migrating my customers away in the new year as well. Lots of features are being introduced, with very little appetite for improving existing ones.

codefaux commented 10 months ago

Although the team at Nextcloud owes us nothing, as this is an open-source project, I certainly agree about the PR being stuck in an endless loop. […] Lots of features are being introduced, with very little appetite for improving existing ones.

Make no mistake - I know they owe us nothing, and I imply no debt. I'm fully aware I'd have absolutely nothing if it weren't for the astounding amounts of care and work that go into a project like this over probably literal decades.

Nobody is going to use something which flat-out fails to work, especially after seeing it's been an issue for a year. Potential monetary investors aren't going to take a project seriously if there's a core issue with a pull request for basic functionality that simply isn't clicked on. They are hampering their own product. They're putting good effort behind nice new features while the core function is so broken that people stop using the product.

I'm frustrated, MANY users are frustrated, and their product fails to function at one of its most basic levels. Meanwhile, Notes and every other applet I don't use are receiving flavor updates and extra features.

I'm not saying "Fix this for me right now, I'm the user, rawr!" -- I'm saying that, from the outside, this looks bad.

Few people care if Talk has Markdown support when the core file sync feature won't work for many users deploying the preconfigured image most people expect to be simply plug-and-play. Quality-of-life features for Photos aren't going to make users forget that they can't get Sync to work consistently to get the photos there in the first place. They're putting actual work into LOTS of places, while the basic functionality is broken, and it would literally take a few text edits to an existing fix and a few button clicks to ship it.

All it takes is the coordinated effort of two or three individuals to make this happen. This project doesn't move so swiftly that any pull request goes out of date and breaks in a matter of hours.

It's not glamorous, and it may not be impacting enough of the dev team to make it happen, but the users are CLEARLY complaining, and it's nothing new. I literally would offer to do the work myself, but someone already did the work, a few times. Let's maybe take advantage of that?

jospoortvliet commented 10 months ago

@codefaux the issue is that this problem only affects a relatively small number of users. It's not in the top 10 of issues according to πŸ‘πŸ», though I admit that it's number 11 right now, and actually a shared 10th place... So I realize it's not unimportant, either.

I'll ask the team to prioritize it; there is a pull request that I hope can go in. Testing and help are welcome. But I know the team is busy - we have a couple of big customer migrations and projects going on - and we have to prioritize the parts of the business that pay the salaries.

codefaux commented 10 months ago

@jospoortvliet First and foremost, thank you for considering reprioritizing it. If you meet any resistance, feel free to direct them to this:

This issue is a year and several months old, and has had 121 comments from 63 participants. That's 63 people who have GitHub accounts and are the sort of people who will search for, raise, and/or comment on an issue. They also had to realize the problem was the client and not the server, or accidentally stumble their way here.

Personally, I've been dealing with this bug for nine months. I didn't interact anywhere. I couldn't figure it out so I ignored it until it drove me to consider a new project, but what felt like the Sunk Cost Fallacy (even at the time) kept me chasing it.

.

With all due respect, developers can have no solid understanding of how many users a bug actually impacts.

Do we have a page-view metric on this issue? Most users don't report; they move on to something which works. Of the users who stay, many still don't interact; they trust the developers to recognize the potential of a bug and deal with it, or they move to something which works. So we can't really trust bug reports as "the way" to prioritize bugs.

I can guarantee you this bug has the potential to impact any user who deploys the Docker container with zero unusual modifications and uses the Windows client to sync. I can say that because I personally am that user, and I've read of at least six others in this report, not to mention the four Reddit threads on issues similar to this one, the several on the Unraid forums, and so on. (I bet the dev team didn't know about half of those.) So, who can be affected? Docker deployment + Windows client, which is at least the largest or second-largest group of potential users. Also, based on what I've seen, Apache users sometimes have the issue too. "Potential reach" seems like a good thing to add to bug priority considerations.

This bug breaks core functionality, making any file over 100 MB a potential issue. That's A LOT of files nowadays. Sure, pics and documents will sync fine with only a few exceptions, but you can't even sync a TikTok video without hitting this bug. I was going to say a Vine, but that's long dead, and you get the point. Core functionality, on a reasonable fileset, is not guaranteed to work. How severe is the breakage? "Critical" comes to mind, from bug severity scales.

.

I love this project and I'm fully aware that thousands of hours of effort by hundreds would be an understatement. Every user should, at minimum, understand and appreciate just how much work goes into this. If every user gave a dollar software development would be a vastly different world, and I wish I could make up the difference.

Ignoring the actual reported scope of the bug: you have a critical-severity bug, potentially affecting your largest or second-largest subset of users, who have reasonable expectations and use expected deployment techniques. Agree or disagree?

.

Re: help: it looks to me like someone did the work. From what I can tell, the tests are the broken part, and the fix is just sitting there waiting. I'm just judging based on the comments from your team; I don't know what's actually going on. So, from what I can see, the project's tests are broken, blocking this and other pull requests. I know literally nothing of writing tests for software, but "Project not found. Please check the 'sonar.projectKey' and 'sonar.organization' properties, the 'SONAR_TOKEN' environment variable, or contact the project administrator" sounds like a straightforward enough error message to figure out. I have faith in you guys.

Re: testing: can I pull and use that PR? From what I see, there are changes requested and pending, and it says it's broken due to the tests. I'm not a GitHub pro -- does that prevent me from using it? If I do put out the effort to help you test it, will I be pulling the version before or after the requested changes? Is a report from before the requested changes worth my effort? I don't know how to pull the project, apply the PR, and build its Docker container to deploy on my stack, but if you can point me to documentation and convince me it won't be wasted effort, I'll go ahead and do it. As it stands it doesn't seem useful: people who know what they're doing are suggesting changes, and GitHub says it's broken.

Will that break updates in the future? I'm assuming that after a while I'll have to delete it and rebuild with the official image. How long do you think I'd have to be running the off-mainline version before the bug gets fixed? A year and a few months seems like a lot of time to ask a user to run a stale version, so what, I'd have to maintain my own fork with the pull request from your repository and either fix the PR any time it broke, or just wait for someone else to fix it, like I am now? And if I used the fix and reported it working, and a year passes, and the fix has to be changed again, will I need to re-test it? Etc.

Better question: if I can apply the PR and test it, wouldn't it be easier for an active developer to do so? Asking a user to help test this is a lot. It's near impossible to tell, as a user, what will be useful effort to expend. From my perspective it's out of my reach, and I'm well above the "average" user, so it seems facetious to say "you could help" from a position where you'd know better than me that I can't.
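(For anyone who does want to try it: one generic way to check out a GitHub PR branch locally - a rough sketch, assuming git plus the desktop client's usual CMake/Qt build prerequisites; the cmake flags are assumptions, so consult the repo's build docs for the real steps:)

git clone https://github.com/nextcloud/desktop.git
cd desktop
git fetch origin pull/4826/head:pr-4826   # fetch the PR head into a local branch
git checkout pr-4826
cmake -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo   # assumed flags; see the build docs
cmake --build build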

zachauker commented 10 months ago

Not that any additional substantiation is needed, but I can confirm that I am also experiencing this issue on macOS Ventura with Nextcloud client version 3.11. Adding the chunk size fields to the nextcloud.cfg file as described by @PaperMemo also seems to remedy the issue for me.

gdragon-git commented 9 months ago

I would love for this to get solved. As a first-time user of Nextcloud, getting "Connection closed" messages on sync and not knowing why is very off-putting, and makes me immediately question my choice to use Nextcloud. I still don't know how to solve it fully, but I can see that a PR added back in mid-2022 hasn't been processed, and there seems to be slowness here, too, in getting a proper fix out. I appreciate the hard work that FOSS takes, but it seems like something is slipping through here, and I hope it gets some love soon.

codefaux commented 9 months ago

They've said this bug isn't on their top list because not enough people have been impacted yet. They know it's there, but it isn't as fun as adding features to Markdown or messing with the calendar or whatever, so they need impetus to bother fixing it, even though it's already been fixed for them.

They're pretending it isn't impacting enough people to fix, so if it isn't fixed by this time next month, or week, or whenever I get fired up enough, I'm going to go to Reddit and farm for others impacted by this bug, and see if I can't populate this issue report with the volumes of people OBVIOUSLY impacted who have been silently trusting the devs to make the core features of the product work.

They don't understand that people don't report bugs; they stop using the product. They don't have any kind of data collection, so they have zero idea how big this actually is; they're guessing based on reports.

If you know anyone in any conversation threads where this bug has come up, urge every single person affected to, at minimum, come add an emoji reaction to this thread.

jospoortvliet commented 8 months ago

@codefaux there is no relation between web UI features like markdown and this issue getting fixed - we can't take a PHP coder and let them fix a C++ desktop client issue. I know it might look like that from the outside, but that is not how engineering works.

With regards to us "pretending" there aren't a lot of people impacted - we have several thousand customers, in some cases with tens of millions of users, and not a single one has reported this. I get that it's frustrating, but the reality is that if this affected a majority of users, we'd have tens of thousands of people in here, not ~60. That doesn't mean it is not important, but we have to prioritize what pays the bills.

I know it's frustrating to bump into this issue, but there is a work-around, which isn't true for all bugs. So those that have no work-around also often go first. I hope you can understand that.

As said before, you can help - by testing the PR and perhaps contributing to it. The team is, in the meantime, focusing 90% of their time on fixing bugs; we have very few features on our roadmap for the coming months, and those we do have are there because customers demand them. So they will get to this, hopefully long before we hit summer.

DoctorMcKay commented 8 months ago

I was avoiding bumping the issue because I find that to be bad manners, but if participant count is what triages issues, here's my +1.

vithusel commented 8 months ago

@codefaux there is no relation between web UI features like markdown and this issue getting fixed […] So they will get to this, hopefully long before we hit summer.

Just adding to this for users who are using Cloudflare Tunnels and experiencing this issue: the best solution is to stop using Cloudflare. They don't officially support its use for file transfers, and there are a few solutions outside their suite of products that provide the same tunnel benefits (including for users who don't have a static IP or are behind CGNAT).

I'm currently using a mixture of products which provides much higher availability as well as increased throughput via SD-WAN.

jcastro commented 8 months ago

It's also not working for me; I'm trying to sync 10+ GB .mov files.

jospoortvliet commented 8 months ago

I was avoiding bumping the issue because I find that to be bad manners, but if participant count is what triages issues, here's my +1.

It's the πŸ‘πŸ» on the first post of the issue that helps us track important issues. So vote there πŸ₯‡

DoctorMcKay commented 8 months ago

It's the πŸ‘πŸ» on the first post of the issue that helps us track important issues. So vote there πŸ₯‡

I'd have thought so, but the quoted number of affected users was "~60", which corresponds to the participant count for this issue, not the πŸ‘ reactors.

I understand prioritizing bugs that don't have workarounds over ones that do, but I don't think this qualifies as having a usable workaround. In any affected environment that isn't strictly single-user, the server admin doesn't have a way to force a chunk-size configuration on clients. It's unreasonable to expect end users to go edit a .cfg file in AppData.
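(To illustrate the pain point: with no server-side knob, the closest an admin can get is pushing the client-side setting out of band - a rough sketch for a fleet of Linux clients, hostnames hypothetical and GNU sed assumed:)

for host in client1 client2 client3; do
  # append maxChunkSize under [General] on each machine, if not already set
  ssh "$host" 'cfg="$HOME/.config/Nextcloud/nextcloud.cfg"; grep -q "^maxChunkSize=" "$cfg" || sed -i "/^\[General\]/a maxChunkSize=50000000" "$cfg"'
done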

Dantali0n commented 8 months ago
chunkSize=10000000
minChunkSize=1000000
maxChunkSize=50000000
targetChunkUploadDuration=6000

Can this also be done server-side, through some kind of configuration parameter or php.ini? Having to manually change the client config for all clients is really error-prone and cumbersome.

I am using the Nextcloud Docker image and HAProxy, so not Cloudflare, and I am seeing this issue.

codefaux commented 8 months ago
Can this also be done server-side, through some kind of configuration parameter or php.ini? […]

That's the entire point of this issue, and what's inside the pull request.

There is a proposed fix, and apparently it is good, looks good, and has been approved, but it can't be merged. At least that's the impression I get every time I look at it after being told I should be trying to fix it for/with them.

In their own thread for the pull request, one of their devs states that the pull request is good, but their tests are broken. When I looked, there was a token/missing-company-name error in the log of one of their tests, another ends in an assertion (so whichever dev put that in knows why it stopped), and the other has logs too large to be viewed without downloading them. According to the same thread, these same broken tests are holding up several pull requests.

(https://github.com/nextcloud/desktop/pull/4826#issuecomment-1716599407)

(https://github.com/nextcloud/desktop/pull/4826#issuecomment-1848964182)

They haven't fixed their tests, so they can't accept this pull request or several others, according to the discussion in that pull request. If you want it fixed, they keep telling us to fix it for/with them.

Otherwise, apparently the genius method of tracking report volume is to thumbs up the issue. Tell your friends and maybe we'll make the top list and get a fix. I'm told they're fixing bugs all day long.

sub20hz commented 7 months ago

@codefaux there is no relation between web UI features like markdown and this issue getting fixed […] So they will get to this, hopefully long before we hit summer.

Out of ~40 users, 16 reported a problem with uploads to me within a week of switching to Cloudflare (I suspect more have run into it and are either not reporting it or not even aware there is an issue). I broadly assume the number of users affected is far larger than activity on this issue implies, as many sysadmins and general users don't even know how to debug such a problem well enough to end up on this thread.

Regardless, if user reports can increase the likelihood of a server-side fix, +1

vithusel commented 7 months ago

@codefaux there is no relation between web UI features like markdown and this issue getting fixed […]

Out of ~40 users, 16 reported a problem with uploads to me within a week of switching to Cloudflare […] Regardless, if user reports can increase the likelihood of a server-side fix, +1

Out of curiosity, what's your reason for using Cloudflare? Are you using their tunnel in particular?

DoctorMcKay commented 7 months ago

For me personally at least (I'm not the person you responded to), I wanted to use Cloudflare to hide my home IP (and thus geolocation) from people I might share files with. I ended up having to proxy my home Nextcloud through a rented server to accomplish the same due to this issue.

vithusel commented 7 months ago

For me personally at least (I'm not the person you responded to), I wanted to use Cloudflare to hide my home IP (and thus geolocation) from people I might share files with. I ended up having to proxy my home Nextcloud through a rented server to accomplish the same due to this issue.

Understood.

My only issue with using Cloudflare Tunnel to achieve this is that their T&Cs don't allow anything like Nextcloud, and it's unfair to Nextcloud to have to implement a change for something that's technically not even supported and may in future get blocked by Cloudflare.

Renting your own VPS is probably the best option for such a use case. It's what I have done as well. There are multiple methods to achieve the end goal; I use a mixture of applications (HAProxy, WireGuard, and SSH tunneling) to provide a redundant network.

codefaux commented 7 months ago

it's unfair to NextCloud to have to implement a change for something that's technically not even supported and may in future get blocked by Cloudflare.

I'm not using Cloudflare. This is not a problem exclusive to Cloudflare users. The fix also makes Nextcloud resilient against issues with large transfers any time such transfers fail, to my (possibly inaccurate) understanding.

Nextcloud is being asked to fix a problem in their software which causes issues with Cloudflare AND other, apparently much less common, scenarios.

So, there's that.

sorvani commented 7 months ago

I've been experiencing this on Fedora (37, 38, & 39 for certain) for a year or more. I just grumbled and switched to the web interface to upload the problematic files. Adding maxChunkSize=50000000 to my client's config file resolved the issue. So if the submitted PR makes the client dynamically adjust the chunk size, it should resolve the issue.

codefaux commented 7 months ago

I've been experiencing this on Fedora (37, 38, and 39 for certain) for a year or more. […] So if the submitted PR makes the client dynamically adjust the chunk size, it should resolve the issue.

It's been pretty much confirmed that the PR fixes this behavior. According to their conversations, the hold-up is that despite the PR working and being (seemingly?) approved, their internal tests are broken, and that's stopping this and several other pull requests. I'm just going by what I'm reading, though. They've asked us to help several times, but I don't know how to fix their tests for them, or whether anyone outside their direct collaborators even could.

https://github.com/nextcloud/desktop/pull/4826#issuecomment-1716599407

https://github.com/nextcloud/desktop/pull/4826#issuecomment-1848964182

artforlife commented 7 months ago

Same issue. I get errors syncing files 100-500 MB in size or larger; in fact, I cannot see a dependence on file size at all. At this point the file-sharing part is completely unusable, which was a major reason I chose the platform in the first place. I ended up setting up Seafile instead.

P.S. I tried setting the chunk sizes as recommended in this thread, and it seemed to fix the behavior briefly, but then the same issue returned.

walec51 commented 7 months ago

Setting maxChunkSize to 50 MB solved the problem for me, but now I have to give every user instructions on how to modify their config file.

For 2 years now a lot of people have had problems syncing large files; moving from ownCloud was a mistake I regret every year...

Even with a contributed PR, this project is not capable of fixing bugs...

The default maxChunkSize used to be 100 MB, which makes sense, and that is how it remains per https://doc.owncloud.com/desktop/5.2/advanced_usage/configuration_file.html

On Nextcloud we now have 5 GB! That makes no sense - why have chunking at all at that point?!

I propose changing the default to 50 MB, as many PHP environments have an even lower max file upload limit of 96 MB.

That would be a one-line PR, which this team may be capable of reviewing and merging.

codefaux commented 7 months ago

Setting maxChunkSize to 50 MB solved the problem for me, but now I have to give every user instructions on how to modify their config file. […] I propose changing the default to 50 MB […] That would be a one-line PR, which this team may be capable of reviewing and merging.

Don't forget to thumbs-up the first post of the issue; we've been informed that's how they actually track how important issues are.

JasperTheMinecraftDev commented 6 months ago

Yeah, we really need that! I had the same issue today and fixed it with the manual client setting, but that's inconvenient and doesn't work everywhere.

ShyViolets commented 6 months ago

Wow, glad I found this, but disappointed this is STILL an issue. I was driving myself insane trying to find the cause of my closed connections, and didn't realize it was related to Cloudflare limits and the client itself.

aiohdfgaiuhg commented 5 months ago

+1 for fixing here

modernNeo commented 5 months ago

My 2 cents: I think the suggestion that @metheis posted here is all that is really needed. The only issue is that I had to go through 2-3 hours of investigation to find it, via https://help.nextcloud.com/t/if-youre-seeing-connection-closed-errors-uploading-large-files-100mb-while-using-cloudflare-we-have-a-fix/137510 - which, to me, implies this is a popular enough use case that the desktop client should suggest @metheis's fix alongside the error.

codefaux commented 5 months ago

My 2 cents: I think the suggestion that @metheis posted here is all that is really needed. […]

They aren't looking for, or in need of, the fix.

The fix exists.

Pull request #4826 fixes this problem.

Apply pull request #4826 and the problem ends.

The reason they haven't applied #4826 is that their tests -- the functions which validate incoming changes -- are broken, and they either can't be bothered, don't have anyone who understands how to fix tests, or have actively decided not to address it.

#4826 has been reported to fix the problem every time someone has mentioned it, during this two-plus-year stretch of ignoring a bug obviously impacting a significant number of users.

They've provided zero rationale for why they haven't fixed it, except that not enough people have clicked an emoji reaction on this issue for their team to consider it a priority.

They've suggested a dozen-ish times that someone outside their group do the work for them, but they haven't once acknowledged that their tests are broken and we can't fix that for them, and they haven't acknowledged that SOMEONE DID THE WORK FOR THEM and they have actively turned that into wasted effort.

fracture-point commented 3 months ago

Adding my 2c: (1) I am also affected on an out-of-the-box install behind Cloudflare, (2) the workaround works, and (3) this was a major reason why I moved off of ownCloud - the mobile app did not chunk properly, which made syncing behind CF unusable.

codefaux commented 3 months ago

@fracture-point While using this service behind Cloudflare is explicitly against Cloudflare's TOS, you are correct that the software should not break. Don't forget to thumbs-up the first post, in the hopes that we get enough of them to garner more attention than "fix it yourself" - when A) someone did, and B) the fix is still waiting to be accepted: pull request #4826 fixes the issue, according to the relevant conversation.

Edit: I just realized that the only reason I use Nextcloud is file sync. It doesn't work reliably, the devs don't take bugs seriously unless the bug gets a nebulous quantity known only as "enough" likes, and both the mobile AND desktop clients have recently started randomly failing to sync because of "file not found" on Android and "permissions" on desktop. That's funny: I'm the ONLY USER, I only have TWO DEVICES, and they're not even syncing the same folders. I'm nuking this crapware from my systems and moving to something at least as reliable as the Windows 98 Briefcase feature.