Question:
I am attempting to change the metadata of all of the objects in a particular bucket on S3 using the AWS PHP SDK2. I’ve had trouble finding a specific example using the new SDK, but have pieced together the following:
```php
$OBJ_aws_s3 = S3Client::factory($config);

$objects = $OBJ_aws_s3->getIterator('ListObjects', array(
    'Bucket'  => $bucket,
    'MaxKeys' => 10
));

foreach ($objects as $object) {
    $key = $object['Key'];
    echo "Processing " . $key . "\n";

    $response = $OBJ_aws_s3->copyObject(array(
        'Bucket'     => $bucket,
        'Key'        => $key,
        'CopySource' => $key,
        'Metadata'   => array(
            'Cache-Control' => 'max-age=94608000',
            'Expires'       => gmdate('D, d M Y H:i:s T', strtotime('+3 years'))
        ),
        'MetadataDirective' => 'REPLACE',
    ));
}
```
The foreach loop successfully loops through the first 10 items in the given $bucket, but I get a 403 error on the copyObject() operation:
```
Uncaught Aws\S3\Exception\AccessDeniedException: AWS Error Code: AccessDenied, Status Code: 403
```
I am not sure whether this is due to incorrect values being passed into copyObject, or some setting in S3. Note that I have yet to create a rights-restricted account in IAM and am using the base account, which should have full rights on the objects.
Any help appreciated.
Answer:
OK, I figured this out: my syntax was incorrect in two ways.
First, I was using the incorrect value for CopySource. From the documentation:

CopySource – (string) – The name of the source bucket and key name of the source object, separated by a slash (/). Must be URL-encoded.

So in my case, instead of using just 'CopySource' => $key, it should be 'CopySource' => urlencode($bucket . '/' . $key). This explains the 403 errors, as I was essentially telling the API that my source file was in a {bucket}/{key} of just {key}.
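A minimal sketch of the difference (the bucket name and key here are hypothetical, chosen to show how urlencode handles slashes and spaces):

```php
<?php
$bucket = 'my-bucket';           // hypothetical bucket name
$key    = 'images/my photo.jpg'; // hypothetical key

// Wrong: passing only the key makes the API treat part of the key as the bucket.
$wrong = urlencode($key);

// Right: "{bucket}/{key}", URL-encoded as the documentation requires.
$copySource = urlencode($bucket . '/' . $key);

echo $copySource . "\n"; // my-bucket%2Fimages%2Fmy+photo.jpg
```

Note that urlencode() also encodes the slash separator as %2F and spaces as +; the SDK accepts this form for CopySource.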
The second issue relates to the specific headers: specifying the Expires and Cache-Control headers inside the Metadata field results in the creation of Amazon-specific meta values, with keys prefixed with x-amz-meta-. Instead, I am now using the top-level Expires and CacheControl arguments. My final working code:
```php
$OBJ_aws_s3 = S3Client::factory($config);

$objects = $OBJ_aws_s3->getIterator('ListObjects', array(
    'Bucket'  => $bucket,
    'MaxKeys' => 10
));

foreach ($objects as $object) {
    $key = $object['Key'];
    echo "Processing " . $key . "\n";

    $response = $OBJ_aws_s3->copyObject(array(
        'Bucket'       => $bucket,
        'Key'          => $key,
        'CopySource'   => urlencode($bucket . '/' . $key),
        'CacheControl' => 'max-age=94608000',
        'Expires'      => gmdate('D, d M Y H:i:s T', strtotime('+3 years')),
        'MetadataDirective' => 'COPY',
    ));
}
```
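For reference, the max-age value of 94608000 is three years expressed in seconds (using 365-day years, as the value above implies), and Expires must be an HTTP-date string. A quick sketch showing where both values come from:

```php
<?php
// max-age is given in seconds: 3 years * 365 days * 24 h * 60 min * 60 s.
$maxAge = 3 * 365 * 24 * 60 * 60;

// Expires must be an HTTP-date (e.g. "Tue, 01 Jan 2030 00:00:00 GMT");
// gmdate() with this format string produces exactly that.
$expires = gmdate('D, d M Y H:i:s T', strtotime('+3 years'));

echo $maxAge . "\n";  // 94608000
echo $expires . "\n";
```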