How can I inject an artifact from AWS S3 into a Docker image?


I need to prepare a Docker image with an embedded JAR file and push it to ECR. The JAR file is stored in an S3 bucket. How can I get the JAR into the image without storing AWS access keys in the image itself?
Maybe I can use the AWS CLI, or is there some other way?
Making the S3 bucket public, or passing access keys as environment variables when running docker run, is also not acceptable.


You can define an AWS IAM Role and attach it to your EC2 instances. Any instance that needs to run this docker build command can do so as long as it has the IAM role attached, which you can do from the AWS Console. This solves the problem of putting AWS credentials on the instance itself.

You will still need the AWS CLI available in your Dockerfile. Once the IAM role is attached, you don't have to worry about credentials.
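As a rough sketch, a multi-stage build lets you fetch the JAR with the official amazon/aws-cli image and keep the CLI out of the final image. The bucket name, object key, and base images below are placeholder assumptions; substitute your own:

```dockerfile
# Stage 1: fetch the artifact with the official AWS CLI image.
# On an EC2 instance with an attached IAM role, the CLI resolves
# credentials from the instance metadata service, so no keys are
# baked into the image or passed as build args.
FROM amazon/aws-cli:latest AS fetch
RUN aws s3 cp s3://my-artifact-bucket/app.jar /tmp/app.jar

# Stage 2: clean runtime image; the AWS CLI never ships in it.
FROM eclipse-temurin:17-jre
COPY --from=fetch /tmp/app.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```

If you prefer a single-stage build, you can instead install the CLI in your own base image and run the same aws s3 cp line there; the multi-stage variant just keeps the runtime image lean.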

Recommended docs:

IAM Roles for Amazon EC2

Here’s an official blog post tutorial on how to do this:

Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI

Just make sure you specify in the IAM Role which S3 Buckets you want these instances to have access to.
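For example, a minimal policy attached to the role could look like this (the bucket name is a placeholder; add s3:ListBucket on the bucket ARN if you also need listing):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-artifact-bucket/*"
    }
  ]
}
```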
