Given the following S3 upload:
```php
$s3= $this->environment->endpoint('s3')->using('bucket');
$file= 'PNG<large amount of binary data>';
$path= 'target/upload.png';
$headers= [
  'x-amz-content-sha256' => hash('sha256', $file),
  'Content-Type'         => 'image/png',
  'Content-Length'       => strlen($file),
];
```
The following blocks until the entire file buffer has been transmitted:
```php
$r= $s3->request('PUT', $path, $headers, $file);
```
The new open() method allows transmitting the body in smaller chunks (e.g. 16 kB) and yielding control in between:
```php
const CHUNK= 16384;

$transfer= $s3->open('PUT', $path, $headers);
for ($i= 0; $i < strlen($file); $i+= CHUNK) {
  $transfer->write(substr($file, $i, CHUNK));
  // e.g. yield
}
$r= $transfer->finish();
```
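As a minimal sketch of the "e.g. yield" comment (illustration only, not from the library's documentation): streaming a file from disk through open(), write() and finish() inside a generator lets callers interleave other work between chunks. The local file name and the upload() wrapper are assumptions here; only open(), write() and finish() come from the release:

```php
// Illustration only: $source and upload() are hypothetical, not library API.
// Hash and length are computed from disk, without buffering the whole file.
$source= 'upload.png';
$headers= [
  'x-amz-content-sha256' => hash_file('sha256', $source),
  'Content-Type'         => 'image/png',
  'Content-Length'       => filesize($source),
];

// Wraps the transfer in a generator which yields between chunks
function upload($s3, string $path, array $headers, string $source): \Generator {
  $transfer= $s3->open('PUT', $path, $headers);
  $in= fopen($source, 'rb');
  try {
    while (!feof($in)) {
      $chunk= fread($in, 16384);
      if (false === $chunk || '' === $chunk) break;
      $transfer->write($chunk);
      yield;  // hand control back to the caller between chunks
    }
  } finally {
    fclose($in);
  }
  return $transfer->finish();
}

$upload= upload($s3, 'target/upload.png', $headers, $source);
foreach ($upload as $_) {
  // ...do other work between chunks...
}
$r= $upload->getReturn();
```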
Note there are also multipart uploads, which solve the problem of having to know the size and hash beforehand!
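For illustration, S3's documented initiate / upload part / complete sequence could be driven through the generic request() method from the first example. Everything below beyond request() itself is an assumption: the query-string handling, the response accessors content() and header(), and the $parts iterable are hypothetical, not verified library API:

```php
// Sketch only: S3 multipart upload via the generic request() method.
// content() and header() are ASSUMED response accessors; $parts is a
// hypothetical iterable of chunk strings (5 MB minimum, except the last).
$init= $s3->request('POST', $path.'?uploads', [
  'x-amz-content-sha256' => hash('sha256', ''),
  'Content-Length'       => 0,
], '');
preg_match('~<UploadId>([^<]+)</UploadId>~', $init->content(), $m);
$uploadId= $m[1];

// Each part is hashed individually, so neither the total size nor the
// overall hash needs to be known up front.
$etags= [];
$number= 1;
foreach ($parts as $part) {
  $r= $s3->request('PUT', $path.'?partNumber='.$number.'&uploadId='.urlencode($uploadId), [
    'x-amz-content-sha256' => hash('sha256', $part),
    'Content-Length'       => strlen($part),
  ], $part);
  $etags[$number++]= $r->header('ETag');
}

// Complete by listing all part numbers with their ETags
$xml= '<CompleteMultipartUpload>';
foreach ($etags as $n => $etag) {
  $xml.= '<Part><PartNumber>'.$n.'</PartNumber><ETag>'.$etag.'</ETag></Part>';
}
$xml.= '</CompleteMultipartUpload>';
$r= $s3->request('POST', $path.'?uploadId='.urlencode($uploadId), [
  'x-amz-content-sha256' => hash('sha256', $xml),
  'Content-Length'       => strlen($xml),
], $xml);
```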
Released in https://github.com/xp-forge/aws/releases/tag/v1.7.0