rclone/backend/s3
Nick Craig-Wood 9b5308144f s3: Reduce memory usage streaming files by reducing max stream upload size
Before this change rclone would allow the user to stream (e.g. with
rclone mount, rclone rcat or when uploading Google Photos or Docs)
files of up to 5 TB.  This meant that rclone allocated 4 * 525 MB
buffers per transfer, which is far too much memory by default.
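
For illustration, the arithmetic behind those numbers looks roughly
like this (a standalone Go sketch; the constants and names are
illustrative, not the ones used in the s3 backend):

    package main

    import "fmt"

    func main() {
        const (
            maxUploadParts = 10000                         // S3 multipart upload part limit
            maxStreamSize  = 5 * 1024 * 1024 * 1024 * 1024 // 5 TiB, the size previously assumed for streams
            concurrency    = 4                             // part buffers in flight per transfer
        )

        // Old behaviour: size each part so a 5 TiB stream fits in 10,000 parts.
        var partSize int64 = maxStreamSize / maxUploadParts
        fmt.Printf("part size: ~%d MB\n", partSize/(1024*1024)) // ~524 MB (the ~525 MB above)

        // With several part buffers allocated at once, per-transfer memory is roughly:
        fmt.Printf("memory per transfer: ~%d MB\n", concurrency*partSize/(1024*1024)) // ~2097 MB
    }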

This change makes rclone use the configured chunk size for streamed
uploads.  This is 5 MB by default, which means that rclone can stream
upload files of up to 48 GB by default while staying below the 10,000
chunk limit.

This limit can be increased with --s3-chunk-size if necessary.
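
As a rough sketch of the new sizing rule (the names below are
illustrative, not the backend's actual code), the streamed upload
limit is simply the chunk size multiplied by the 10,000 part limit:

    package main

    import "fmt"

    const maxUploadParts = 10000 // S3 multipart upload part limit

    // maxStreamedFileSize returns the largest file that can be streamed for a
    // given chunk size, since each chunk of an unknown-size upload becomes
    // one multipart part.
    func maxStreamedFileSize(chunkSize int64) int64 {
        return chunkSize * maxUploadParts
    }

    func main() {
        defaultChunk := int64(5 * 1024 * 1024) // the 5 MB default chunk size
        fmt.Printf("5 MB chunks  -> ~%d GiB limit\n", maxStreamedFileSize(defaultChunk)>>30)

        largerChunk := int64(64 * 1024 * 1024) // e.g. --s3-chunk-size raised to 64 MB
        fmt.Printf("64 MB chunks -> ~%d GiB limit\n", maxStreamedFileSize(largerChunk)>>30)
    }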

If rclone detects that a file is being streamed to S3 it will emit a
single NOTICE-level log message stating the limitation.
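
One possible shape for such a one-time notice, sketched here with the
standard library log package and a sync.Once rather than rclone's own
fs logging (the names are illustrative):

    package main

    import (
        "log"
        "sync"
    )

    var streamNoticeOnce sync.Once

    // warnStreamUpload logs the streaming size limitation a single time,
    // the first time an upload of unknown size is detected.
    func warnStreamUpload(chunkSize, maxParts int64) {
        streamNoticeOnce.Do(func() {
            log.Printf("NOTICE: streaming uploads using chunk size %d will have a maximum file size of %d",
                chunkSize, chunkSize*maxParts)
        })
    }

    func main() {
        warnStreamUpload(5*1024*1024, 10000) // logged once
        warnStreamUpload(5*1024*1024, 10000) // suppressed by sync.Once
    }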

This fixes the enormous memory usage.

Fixes #3568
See: https://forum.rclone.org/t/how-much-memory-does-rclone-need/12743
2019-11-09 15:55:19 +00:00
s3_test.go s3: fix SetModTime on GLACIER/ARCHIVE objects and implement set/get tier 2019-09-14 09:18:55 +01:00
s3.go s3: Reduce memory usage streaming files by reducing max stream upload size 2019-11-09 15:55:19 +00:00
v2sign.go s3: fix v2 signer on files with spaces - fixes #2438 2018-10-14 00:10:29 +01:00