[Bug]: Syncing files larger than 1GB does not work #6222

Open
Malex14 opened this issue Nov 14, 2023 · 5 comments · May be fixed by nextcloud/documentation#11295


Malex14 commented Nov 14, 2023


Bug description

Since client version 3.10.0, syncing files larger than 1 GB no longer works. This is most likely due to commit cbbb4c8, where the default value for maxChunkSize was increased to 5 GB, while newer Apache versions (>2.4.53) limit the maximum request body size to 1 GiB:

LimitRequestBody (In Apache HTTP Server <=2.4.53 this defaulted to unlimited, but now defaults to 1 GiB. The new default limits uploads from non-chunking clients to 1 GiB. If this is a concern in your environment, override the new default by either manually setting it to 0 or to a value similar to that used for your local environment’s PHP upload_max_filesize / post_max_size / memory_limit parameters.)

When I manually set maxChunkSize in ~/.config/Nextcloud/nextcloud.cfg, syncing large files works again.
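
For reference, this is roughly what I added to nextcloud.cfg (a sketch of my setup; the value is in bytes and just needs to stay at or below Apache's 1 GiB limit):

    [General]
    maxChunkSize=1000000000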

Steps to reproduce

  1. Use client >= 3.10.0 and Apache server > 2.4.53
  2. Try to sync a 2GB file (e.g. dd if=/dev/zero of=test bs=2GB count=1)
  3. Observe a sync error

Expected behavior

Syncing files larger than 1GB works with default client and server config

Which files are affected by this bug

src/libsync/configfile.cpp

Operating system

Linux

Which version of the operating system you are running.

arch

Package

Distro package manager

Nextcloud Server version

27.1.3

Nextcloud Desktop Client version

3.10.1

Is this bug present after an update or on a fresh install?

Updated to a major version (ex. 3.3.6 to 3.4.0)

Are you using the Nextcloud Server Encryption module?

Encryption is Disabled

Are you using an external user-backend?

  • Default internal user-backend
  • LDAP/ Active Directory
  • SSO - SAML
  • Other

Nextcloud Server logs

[no app in context] Error: Sabre\DAV\Exception\BadRequest: Expected filesize of 1709545454 bytes, but 0 bytes were read (from the Nextcloud client) and written (to Nextcloud storage). This can either be a network problem on the sending side or a problem writing to the storage on the server side. at <<closure>>

 0. /var/www/html/apps/dav/lib/Connector/Sabre/Directory.php line 149
    OCA\DAV\Connector\Sabre\File->put("*** sensitive parameters replaced ***")
 1. /var/www/html/apps/dav/lib/Upload/UploadFolder.php line 50
    OCA\DAV\Connector\Sabre\Directory->createFile("*** sensitive parameters replaced ***")
 2. /var/www/html/3rdparty/sabre/dav/lib/DAV/Server.php line 1098
    OCA\DAV\Upload\UploadFolder->createFile("*** sensitive parameters replaced ***")
 3. /var/www/html/3rdparty/sabre/dav/lib/DAV/CorePlugin.php line 504
    Sabre\DAV\Server->createFile("*** sensitive parameters replaced ***")
 4. /var/www/html/3rdparty/sabre/event/lib/WildcardEmitterTrait.php line 89
    Sabre\DAV\CorePlugin->httpPut(["Sabre\\HTTP\\Request"], ["Sabre\\HTTP\\Response"])
 5. /var/www/html/3rdparty/sabre/dav/lib/DAV/Server.php line 472
    Sabre\DAV\Server->emit("method:PUT", [["Sabre\\HTTP\\ ... ]])
 6. /var/www/html/3rdparty/sabre/dav/lib/DAV/Server.php line 253
    Sabre\DAV\Server->invokeMethod(["Sabre\\HTTP\\Request"], ["Sabre\\HTTP\\Response"])
 7. /var/www/html/3rdparty/sabre/dav/lib/DAV/Server.php line 321
    Sabre\DAV\Server->start()
 8. /var/www/html/apps/dav/lib/Server.php line 365
    Sabre\DAV\Server->exec()
 9. /var/www/html/apps/dav/appinfo/v2/remote.php line 35
    OCA\DAV\Server->exec()
10. /var/www/html/remote.php line 172
    require_once("/var/www/html/a ... p")

PUT /remote.php/dav/uploads/*** sensitive parameters replaced ***/3043113402/00002
from *** sensitive parameters replaced *** by *** sensitive parameters replaced *** at 2023-11-14T09:20:25+00:00

Additional info

No response

Malex14 changed the title from "[Bug]:" to "[Bug]: Syncing files larger than 1GB does not work" on Nov 14, 2023
@joshtrichards
Member

When I manually set maxChunkSize in ~/.config/Nextcloud/nextcloud.cfg syncing large files works again.

This is the correct approach if you're unwilling to adjust your Apache LimitRequestBody (and/or still want to limit both parameters to something like 5G). Previously the desktop client's default max was 1G, so I guess this rarely came up.

The docs do need adjustment, though, to make clear that the Apache situation can also impact chunked uploads if the files are large enough. I'll try to address that now.
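
If you'd rather raise the Apache limit instead, something along these lines in your server or vhost config should do it (a sketch; 5368709120 bytes is 5 GiB, and 0 restores the old unlimited behavior):

    # allow request bodies up to 5 GiB (0 = unlimited, the pre-2.4.54 default)
    LimitRequestBody 5368709120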

joshtrichards added a commit to nextcloud/documentation that referenced this issue Nov 20, 2023
The recent addition clarified Apache's new behavior and this setting's impact on non-chunking clients. With the recent v2 chunking client implementations, the maxChunkSize can be as high as 5 GiB by default (e.g. see nextcloud/desktop#6222). This expands the language to note the impact for chunking clients too and clarifies the available options.

Signed-off-by: Josh Richards <[email protected]>

joshtrichards commented Nov 20, 2023

Docs update pending in nextcloud/documentation#11295

I'll close this here.

Edit: Leaving this open until merged, just in case.


Malex14 commented Nov 20, 2023

Thank you for adding the documentation. I think the default body size should additionally be increased in the official Docker image, since right now those two components are not fully compatible in their default configurations.

@joshtrichards
Member

I agree with the general idea that things should work as much as possible across the board with "the defaults". Unfortunately there are a lot of factors so it doesn't always end up being that way. Other times it just needs to get sorted out by somebody and then adjustments coordinated across the board.

Let's see if we can make some progress today. :-)

So I agree, but let's start with the documentation. The community Docker image tends to track the docs.

The default is already changed in the AIO Docker image (to 0/unlimited, the old Apache behavior). The community image, though, chose to add an environment variable to support overriding it while sticking with the upstream default value for now.

At the time that seemed enough and the docs weren't too assertive about the need to change it because it wasn't really an issue with v1 chunking.

Based upon a further review of the code (particularly the dynamic chunking in the desktop client) and your follow-up, I've further revised the doc change (in the previously linked PR). The change now recommends an official value of 5 GiB.
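
For the community image specifically, the override would look something like this in a compose file (a sketch; I'm writing the variable as APACHE_BODY_LIMIT here, so check the image's README for the exact name it exposes):

    # docker-compose.yml (excerpt)
    services:
      app:
        image: nextcloud:apache
        environment:
          # maps to Apache's LimitRequestBody; 5368709120 = 5 GiB, 0 = unlimited
          - APACHE_BODY_LIMIT=5368709120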


bkraul commented Jan 1, 2024

When I manually set maxChunkSize in ~/.config/Nextcloud/nextcloud.cfg syncing large files works again.

This is the correct approach if you're unwilling to adjust your Apache LimitRequestBody (and/or still want to limit both parameters to something like 5G). Previously the Desktop default max was 1G so I guess this rarely came up.

The docs do need adjustment though to make clear the Apache situation can also impact chunked uploads if the files are large enough. I'll try to address that now.

For those of us running nginx, the Nextcloud recommended nginx config (found here) needs to be updated:

    # set max upload size and increase upload timeout:
    client_max_body_size 512M;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;

I set mine to 5G and my uploads started working. The weird part is that large uploads worked just fine through the web interface, just not through the Nextcloud agent, but this change fixed it.
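
Concretely, the only line I changed from the recommended block was the body size (adjust the ceiling to whatever matches your client's maxChunkSize):

    # set max upload size and increase upload timeout:
    client_max_body_size 5G;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;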
