Zend Server for IBM i, versions 6 through 2019.x, are 32-bit programs, which means they are limited to file sizes of around 2 GB. This example shows how to use the split command in PASE to break a large file into smaller chunks and send them up to AWS. The split command is a standard command in Linux, so this technique would also apply there.
Zend Server versions 6 - 8.5.x, 9.1.x, 2018.x, and 2019.x.
The way to accomplish this with 32-bit PHP is to 'chunk' the file: break it up into smaller pieces and transmit the pieces. In PASE, the 'split' command breaks up the file, and the AWS SDK provides for sending the pieces. You can read the AWS documentation here:
Upload a File in Multiple Parts Using the PHP SDK Low-Level API
Chunking the file is fairly straightforward. Here is the format for the PASE command:
$ split -b 1000m /path/to/large/file /path/to/output/file/prefix
You end up with files named prefixaa, prefixab, and so on. You can then pass each chunk to uploadPart, and finally call completeMultipartUpload, and you're done. Here is an example PASE session:
$ split -b 500m test testnew
$ ls -la testnew*
-rw-r--r-- 1 maurice 0 524288000 Mar 10 14:38 testnewaa
-rw-r--r-- 1 maurice 0 524288000 Mar 10 14:39 testnewab
-rw-r--r-- 1 maurice 0 524288000 Mar 10 14:39 testnewac
-rw-r--r-- 1 maurice 0 524288000 Mar 10 14:40 testnewad
-rw-r--r-- 1 maurice 0 314572800 Mar 10 14:40 testnewae
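The session above can be reproduced at a smaller scale. This sketch uses a 5 MB scratch file and 2 MB chunks (the paths and sizes are only for illustration), and verifies that concatenating the chunks in name order reproduces the original file exactly, which is why the reassembled S3 object is byte-identical:

```shell
# Create a small 5 MB test file (stands in for the multi-GB file).
dd if=/dev/zero of=/tmp/bigfile bs=1024 count=5120 2>/dev/null

# Split it into 2 MB chunks: /tmp/bigfile.part.aa, .ab, .ac
split -b 2m /tmp/bigfile /tmp/bigfile.part.
ls -la /tmp/bigfile.part.*

# split names the pieces in sort order, so a plain glob
# concatenates them back in the right sequence.
cat /tmp/bigfile.part.* > /tmp/bigfile.rejoined
cmp /tmp/bigfile /tmp/bigfile.rejoined && echo "chunks verified"
```

Because the chunks reassemble losslessly, S3's completeMultipartUpload can stitch the uploaded parts back into the original object.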
So you call Aws\S3\S3Client::createMultipartUpload(), which will initialize the upload and give you an upload ID.
Then call Aws\S3\S3Client::uploadPart() with a chunk and a part number, iteratively or in parallel for all the chunks.
Finally, you call Aws\S3\S3Client::completeMultipartUpload() to finalize the upload.
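The three calls above can be sketched as follows with the AWS SDK for PHP (version 3). This is a minimal sketch, not production code: the bucket name, region, object key, and chunk path are placeholders you would substitute, and error handling (including abortMultipartUpload on failure) is omitted for brevity:

```php
<?php
// Sketch: multipart upload of chunks produced by the PASE split command.
// Assumes the AWS SDK for PHP v3 is installed via Composer and credentials
// are available from the environment or ~/.aws/credentials.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',          // placeholder region
]);

$bucket = 'my-bucket';                  // placeholder bucket name
$key    = 'test';                       // object key for the reassembled file
$chunks = glob('/path/to/testnew*');    // the pieces produced by split
sort($chunks);                          // split names pieces in order: aa, ab, ...

// 1. Initialize the upload and get an upload ID.
$result   = $s3->createMultipartUpload(['Bucket' => $bucket, 'Key' => $key]);
$uploadId = $result['UploadId'];

// 2. Upload each chunk with a 1-based part number, collecting ETags.
$parts = [];
foreach ($chunks as $i => $chunk) {
    $partNumber = $i + 1;
    $response = $s3->uploadPart([
        'Bucket'     => $bucket,
        'Key'        => $key,
        'UploadId'   => $uploadId,
        'PartNumber' => $partNumber,
        'Body'       => fopen($chunk, 'rb'),
    ]);
    $parts[] = ['PartNumber' => $partNumber, 'ETag' => $response['ETag']];
}

// 3. Finalize: S3 stitches the parts back into one object.
$s3->completeMultipartUpload([
    'Bucket'          => $bucket,
    'Key'             => $key,
    'UploadId'        => $uploadId,
    'MultipartUpload' => ['Parts' => $parts],
]);
```

Note that each part except the last must be at least 5 MB, which the 500m chunk size used in the session above satisfies comfortably.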
For code examples and more information, please refer to the AWS documentation linked above.