Wasabi Became Disconnected While Writing to... Error

PROBLEM

A backup or archive job to Wasabi fails with the error "Wasabi became disconnected while writing to..."

 

CAUSE

When uploading data to Wasabi/S3 (using the S3 API), there are two ways to push data up: as one single object, or split into smaller chunks that are uploaded in parallel (a multipart upload). Multipart uploads allow more concurrent data streams, so the data transfers more quickly and efficiently. Multipart uploads have two limitations: the maximum object size is 5TB (not an issue here), and an object can be split into at most 10,000 parts (this is the issue here). The following errors were logged for the bucket 'haw.sd1':

156.3.153.2 PUT - /haw.sd1/2023-12-12_13_17_45/archive_0_SIS.partNumber=10000&uploadId=B93oFKE_U5oWlFP78tZuKvLKkrWgBGdMAalLHNZZmtdd_w-ucwQPa4Q1-Og-6u8sk29b40jBMCOQ0cNYHJCYE8-gju6livAuvIafayRUceTzrBmPJaldCDFORLrmWE7F 952EA10EAB3828D0:A  rclone/v1.63.1  Tuesday, December 12th 2023, 6:30:20 pm 1214    head05  haw.sd1 0   57868919    392 0   0   s3:PutObject    REST.PUT.PART   | Protocol: HTTPS =ServerName: s3.us-west-1.wasabisys.com CipherSuite: 4865 Version: 772 Neg.Protocol: http/1.1 | AwsAccessKey: HDDPWH6HUTQX2ND81OOC | UserNum: 100 Network connection was closed.  ConnectionClosed    410 0   0       0   0

156.3.153.2 PUT - /haw.sd1/2023-12-12_13_17_45/archive_0_SIS.partNumber=10001&uploadId=B93oFKE_U5oWlFP78tZuKvLKkrWgBGdMAalLHNZZmtdd_w-ucwQPa4Q1-Og-6u8sk29b40jBMCOQ0cNYHJCYE8-gju6livAuvIafayRUceTzrBmPJaldCDFORLrmWE7F 812CD7610969C457:A  rclone/v1.63.1  Tuesday, December 12th 2023, 6:30:20 pm 5   head12  haw.sd1 0   781 352 0   0   s3:PutObject    REST.PUT.PART   |  Protocol: HTTPS =ServerName: s3.us-west-1.wasabisys.com CipherSuite: 4865 Version: 772 Neg.Protocol: http/1.1 | AwsAccessKey: HDDPWH6HUTQX2ND81OOC | UserNum: 100 Part number must be an integer between 1 and 10000, inclusive   InvalidArgument 400 0   5       0   0

156.3.153.2 PUT - /haw.sd1/2023-12-12_13_17_45/archive_0_SIS.partNumber=10001&uploadId=B93oFKE_U5oWlFP78tZuKvLKkrWgBGdMAalLHNZZmtdd_w-ucwQPa4Q1-Og-6u8sk29b40jBMCOQ0cNYHJCYE8-gju6livAuvIafayRUceTzrBmPJaldCDFORLrmWE7F 3D9CBEC046906810:B  rclone/v1.63.1  Tuesday, December 12th 2023, 6:30:19 pm 6   R213-U31    haw.sd1 0   759 431 0   0   s3:PutObject    REST.PUT.PART   | Protocol: HTTP | AwsAccessKey: HDDPWH6HUTQX2ND81OOC | UserNum: 100 | CM CDR: MTcwMjQyMzIyNDY0OSAzOC4xNDYuNDAuMTAzIENvbklEOjMwMDA3MjkyNy9FbmdpbmVDb25JRDozODg2NzEwL0NvcmU6Mjk= | VS: ZGVmYXVsdA==    Part number must be an integer between 1 and 10000, inclusive   InvalidArgument 400 0   6       0   0

What we see here is that the upload reached part numbers 10000 and 10001, at which point Wasabi threw an error and closed the connection. This is why you see a connection reset and the backup fails.
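
To make the arithmetic behind this concrete: with a fixed ceiling of 10,000 parts, the chunk size determines the largest object you can upload, and the object size determines the smallest workable chunk size. The following minimal Python sketch is only illustrative (it is not part of any Wasabi or backup tooling, and the 1.5TB archive size is an assumed example):

MAX_PARTS = 10_000      # S3/Wasabi multipart upload part limit
MiB = 1024 ** 2
GiB = 1024 ** 3

def max_object_size(chunk_size_bytes):
    # Largest single object that fits within 10,000 parts at this chunk size
    return chunk_size_bytes * MAX_PARTS

def min_chunk_size(object_size_bytes):
    # Smallest chunk size that keeps the part count at or below 10,000 (ceiling division)
    return -(-object_size_bytes // MAX_PARTS)

print(max_object_size(64 * MiB) / GiB)              # 625.0   -> 64MB chunks cap a single object at ~625GB
print(min_chunk_size(int(1.5 * 1024 * GiB)) / MiB)  # ~157.3  -> a ~1.5TB archive needs chunks of at least ~158MB

In other words, at the default 64MB chunk size any single object larger than roughly 625GB exhausts the 10,000-part limit, which matches the failure shown in the logs above.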

 

RESOLUTION

The software uploading the data needs to use a larger chunk size so that the object fits within 10,000 parts. Whatever the current value is, at minimum double it (for example, if the chunk size is currently set to 64MB, set it to 128MB or 256MB instead). Once that change is made, retry the backup and confirm it succeeds.

Within /usr/bp/bpinit/master.ini, S3ChunkSize is set to 64MB by default. Increase it to 128MB; in this case it had to be increased to 256MB.

Example

[CloudUtil]
CacheDir=/backups/cloudcache ; Directory rclone will use for caching
CacheSize=4G ; Max total size of objects in cache
S3ChunkSize=256M ; Chunk size for multipart uploads to S3 (this is what we changed from 64MB)
S3Concurrency=8 ; Concurrency for multipart uploads to S3
UnmountAfter=1 ; Unmount cloud storage after archive job
RemoveFiles=1 ; Remove files after deleting archive set
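
After editing the file, it can be worth confirming the value that will actually be read. The short Python sketch below is only illustrative; it assumes the [CloudUtil] section and key names shown in the example above and does nothing more than read and print the settings (apply the change itself with a text editor):

import configparser

MASTER_INI = "/usr/bp/bpinit/master.ini"  # path referenced in this article

# The example above uses ';' for inline comments, so tell configparser to strip them
parser = configparser.ConfigParser(inline_comment_prefixes=(";",))
parser.read(MASTER_INI)

chunk = parser.get("CloudUtil", "S3ChunkSize", fallback="64M (default)")
concurrency = parser.get("CloudUtil", "S3Concurrency", fallback="(not set)")
print(f"S3ChunkSize={chunk}  S3Concurrency={concurrency}")
# Expect S3ChunkSize=256M (or at least 128M) after making the change above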
