I can’t download a file from S3 to EFS using Lambda

Question:

I connected the Lambda to EFS, and I want to download a small file from S3 to the EFS using Lambda.

I connected the Lambda function to the file system and added an access point with 777 permissions.

I have this small Python function in Lambda to download to the mount:
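(The function itself was omitted from the post; a minimal sketch of the kind of handler described, with the bucket name, key, and mount path as placeholder assumptions:)

```python
# Hypothetical sketch; "my-bucket", "img.jpg", and the mount path are
# placeholders, not the poster's actual values.
import os

EFS_MOUNT = "/mnt/my-efs"  # the Local Mount Path configured on the Lambda

def efs_target_path(key, mount=EFS_MOUNT):
    # Build the destination path inside the EFS mount from an S3 key.
    return os.path.join(mount, os.path.basename(key))

def handler(event, context):
    # boto3 ships with the Lambda runtime; imported here so the module
    # can also be loaded outside AWS.
    import boto3
    s3 = boto3.client("s3")
    target = efs_target_path("img.jpg")
    s3.download_file("my-bucket", "img.jpg", target)
    return {"statusCode": 200}
```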

And I get a timeout after 1 minute (this should take less than a second).

If I download to /tmp/img.jpg, it works. Even copying the file from /tmp/ to /mnt/my-efs/ works.

I have this IAM policy for the Lambda role:
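(The policy itself was omitted from the post; a typical policy for this setup, with the bucket ARN as a placeholder, might look like:)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ],
      "Resource": "*"
    }
  ]
}
```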

Why can’t I download file from S3 to EFS using Lambda?

Answer:

This gets a bit tricky, so let’s go slow and make sure we cover all the moving parts. I apologize if some of this is redundant, but I wanted to make sure a layman would have all the right parts. Please let me know if you think I’ve missed something.

Steps:

  1. serverless.yaml
  2. AWS Access point settings
  3. Adding EFS access to the Lambda
  4. Downloading items into the EFS drive
  5. Verifying your files are in EFS

1. serverless.yaml
I’m using serverless so this is what my serverless.yaml file looks like.
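(The file wasn’t included in the original answer; a sketch of the relevant provider-level section, with ARNs, names, and IDs as placeholders, might look like:)

```yaml
# Placeholder names/IDs; only the S3- and EFS-related parts are shown.
provider:
  name: aws
  runtime: python3.9
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource: arn:aws:s3:::my-bucket/*
    - Effect: Allow
      Action:
        - elasticfilesystem:ClientMount
        - elasticfilesystem:ClientWrite
      Resource: "*"
  # The Lambda must be in the same VPC as the EFS mount targets.
  vpc:
    securityGroupIds:
      - sg-xxxxxxxx
    subnetIds:
      - subnet-xxxxxxxx
```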

These were the minimum permissions I needed to download the file from S3 into the EFS drive location. Again, this project uses Serverless, so all of this can be translated to a CloudFormation template if you’re not using Serverless.

And under the handler function…
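(A sketch of the function-level EFS configuration, with the handler name and access-point ARN as placeholders:)

```yaml
functions:
  downloadToEfs:
    handler: handler.handler  # hypothetical module/function name
    fileSystemConfig:
      arn: arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-xxxxxxxx
      localMountPath: /mnt/my-efs
```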

NOTE: I left out a lot from my serverless.yaml file, and only included the parts pertaining to the EFS setup.

2. AWS EFS Access Point Settings

    1. Go to the EFS service.
    2. Click Access points.
    3. Create a new access point.
    4. Fill out the fields as needed. Below is how I configured mine:
    • Details
      • File System: Choose a dropdown selection
      • Name - optional Pick a name
      • Root directory path - optional: This will map to your underlying file system on the EC2 or SFTP server. I’m using an SFTP server, so I simply input the address on the SFTP server: sftp.companyX.com/images. This location will map to the Local Mount Path set inside your Lambda’s EFS configuration window (we’ll look more at that in step 3).
    • POSIX User
      • User ID: 1054 | Locate your /etc/passwd file and verify this to be true for the server hosting the EFS drive. I think 1054 is the default for Amazon Linux, but if you’re running a different OS on your EFS host, you should verify this value.
      • Group ID: 1054 (see last line)
    • Root directory creation permissions
      • Owner User ID: 1054 (see last line)
      • Owner Group ID: 1054 (see last line)
      • POSIX permissions to apply to the root directory path: 777 | Understand that these permissions relate to your iamRoleStatements.Action values; if they’re different, this 777 should be set appropriately.

3. Adding EFS access to the Lambda

  • EFS File System: (choose from dropdown selection)
  • Local Mount Path: | SUPER IMPORTANT, e.g. /mnt/my-efs. This part is very non-intuitive, IMO. This location, whatever you want it to be, will map to the Root directory path (step 2.4 above) you configured when you built your access point. Meaning, if you download an S3 file called img.jpg to /mnt/my-efs/, it will put that file into sftp.company.com/images/img.jpg.

4. Downloading items into the EFS drive

  • Obviously, this can be done many ways. Your original code looked perfectly fine. I’ve simply extended it to extract the file location from an S3 event object as an example. If it’s not obvious, the S3 event is triggered whenever the S3 bucket gets a new upload (a new object added to the bucket).
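(The extended code wasn’t included here; a sketch of pulling the bucket and key from the S3 event and downloading into the mount, with the mount path assumed to be /mnt/my-efs:)

```python
import os
import urllib.parse

EFS_MOUNT = "/mnt/my-efs"  # assumed Local Mount Path

def parse_s3_event(event):
    # Pull the bucket name and (URL-decoded) object key out of a
    # standard S3 put-event record.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return bucket, key

def handler(event, context):
    import boto3  # available in the Lambda runtime
    bucket, key = parse_s3_event(event)
    target = os.path.join(EFS_MOUNT, os.path.basename(key))
    boto3.client("s3").download_file(bucket, key, target)
    return {"downloaded": target}
```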

5. Verifying Your Files are in EFS

  • To verify the download works, you can just print the files from your EFS location from within your Lambda, then check your built-in CloudWatch logs to see that the results are as expected.

Special attention here! Notice I didn’t read from /mnt/my-efs/; I read from /my-efs. This is because the EFS / directory maps to the /mnt directory in the Lambda environment.

Finished 🚀

NOTE: If I’ve left something out, feel free to comment and I’ll add it as well if I can. Unfortunately I can’t share the repo, as it’s a work-related project (protected).
