Simple Storage Service

Hello Everyone

Welcome to CloudAffaire and this is Debjeet.

In the last blog post, we discussed the different options for EC2 instance monitoring and, with that, concluded our introductory series on the EC2 service.

In this blog post, we are going to discuss AWS Simple Storage Service or S3.

Simple Storage Service

Amazon Simple Storage Service (S3) is a highly scalable, reliable, fast, and inexpensive data storage infrastructure designed for the Internet. S3 is similar to Dropbox or Google Drive, with additional features, and you can use it to store and retrieve any amount of data, at any time, from anywhere on the web.

Key concepts:


Buckets:

A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket. For example, if the object named blogs/AutoScalingGroup.doc is stored in the cloudaffaire bucket, then it is addressable using the URL https://cloudaffaire.s3.amazonaws.com/blogs/AutoScalingGroup.doc. Buckets organize the Amazon S3 namespace at the highest level, they identify the account responsible for storage and data transfer charges, they play a role in access control, and they serve as the unit of aggregation for usage reporting. A bucket is scoped to the region in which it is created, but you can transfer data from one bucket to another across regions. By default, you can create up to 100 buckets in each of your AWS accounts.


Objects:

Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time the object is stored. An object is uniquely identified within a bucket by a key (name) and a version ID.


Keys:

A key is a unique identifier for an object within a bucket. Every object in a bucket has exactly one key. Because the combination of a bucket, key, and version ID uniquely identifies each object, Amazon S3 can be thought of as a basic data map between “bucket + key + version” and the object itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and, optionally, a version. For example, in the URL https://cloudaffaire.s3.amazonaws.com/blogs/AutoScalingGroup.doc, “cloudaffaire” is the name of the bucket and “blogs/AutoScalingGroup.doc” is the key.
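The “data map” idea above can be sketched in a few lines of Python. This is an illustrative toy, not the S3 API: the function name make_object_url and the in-memory dictionary are assumptions for the example, and the URL format shown is the virtual-hosted style.

```python
# Toy illustration: S3 addresses every object by endpoint + bucket + key
# (and optionally a version ID appended as a query parameter).

def make_object_url(bucket, key, version_id=None):
    """Build a virtual-hosted-style S3 URL from a bucket name and key."""
    url = f"https://{bucket}.s3.amazonaws.com/{key}"
    if version_id:
        url += f"?versionId={version_id}"
    return url

# A toy in-memory "data map" keyed by (bucket, key, version ID),
# mirroring the "bucket + key + version -> object" mapping described above.
store = {
    ("cloudaffaire", "blogs/AutoScalingGroup.doc", "1"): b"first draft",
    ("cloudaffaire", "blogs/AutoScalingGroup.doc", "2"): b"second draft",
}

print(make_object_url("cloudaffaire", "blogs/AutoScalingGroup.doc"))
# https://cloudaffaire.s3.amazonaws.com/blogs/AutoScalingGroup.doc
```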


Versioning:

Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. In one bucket, for example, you can have two objects with the same key, but different version IDs, such as blogs/AutoScalingGroup.doc (version 1) and blogs/AutoScalingGroup.doc (version 2).
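To make the behavior concrete, here is a minimal toy model of a versioned bucket in Python. The class name VersionedBucket and its methods are assumptions for illustration only; the point is that a GET without a version returns the latest version, while older versions remain retrievable.

```python
# Toy model of S3 versioning: each key maps to a list of versions,
# and a GET without a version ID returns the latest one.

class VersionedBucket:
    def __init__(self):
        self._objects = {}  # key -> list of bodies (version 1, 2, ...)

    def put(self, key, body):
        """Store a new version of the object and return its version number."""
        self._objects.setdefault(key, []).append(body)
        return len(self._objects[key])

    def get(self, key, version=None):
        """Return the latest version, or a specific one if requested."""
        versions = self._objects[key]
        return versions[-1] if version is None else versions[version - 1]

bucket = VersionedBucket()
bucket.put("blogs/AutoScalingGroup.doc", "v1 content")
bucket.put("blogs/AutoScalingGroup.doc", "v2 content")
print(bucket.get("blogs/AutoScalingGroup.doc"))             # v2 content
print(bucket.get("blogs/AutoScalingGroup.doc", version=1))  # v1 content
```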


Regions:

You can choose the geographical region where Amazon S3 will store the buckets you create. You might choose a region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in a region never leave the region unless you explicitly transfer them to another region.


Storage Classes:

Amazon S3 offers a range of storage classes designed for different use cases. These include Amazon S3 STANDARD for general-purpose storage of frequently accessed data, Amazon S3 STANDARD_IA for long-lived but less frequently accessed data, and GLACIER for long-term archival.

Bucket Policies:

Bucket policies provide centralized access control to buckets and objects based on a variety of conditions, including Amazon S3 operations, requesters, resources, and aspects of the request (e.g., IP address). The policies are expressed in the IAM access policy language and enable centralized management of permissions. The permissions attached to a bucket apply to all of the objects in that bucket.
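As a sketch, a bucket policy that allows reads only from a specific IP range might look like the following. The bucket name reuses the example above; the IP range 203.0.113.0/24 and the statement ID are placeholders for illustration.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadFromSpecificIpRange",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::cloudaffaire/*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```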


Encryption:

S3 provides encryption support for your data at rest and in transit. You can use S3 default encryption or create your own encryption setup. S3 supports server-side encryption, where you request Amazon S3 to encrypt your objects before saving them on disks in its data centers and decrypt them when you download them. You can also use client-side encryption, where you encrypt data on the client side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
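For example, a default-encryption configuration that asks S3 to apply server-side encryption with S3-managed keys (AES-256) to every new object could look roughly like this payload (the shape sketched here is the server-side encryption configuration passed to the bucket):

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}
```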


Lifecycle Management:

You can configure a lifecycle for S3 objects so that they are stored cost-effectively throughout their lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions: transition actions, which move objects between storage classes, and expiration actions, which delete objects after they expire.
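A lifecycle configuration combining both action types might look like the sketch below: objects under the blogs/ prefix transition to STANDARD_IA after 30 days and to GLACIER after 90, then expire after a year. The rule ID and the prefix are placeholders chosen for this example.

```json
{
  "Rules": [
    {
      "ID": "ArchiveOldBlogs",
      "Filter": { "Prefix": "blogs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```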


Event Notifications:

The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket.
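As a sketch, a notification configuration that publishes every object-creation event to an SNS topic might look like the following; the account ID and topic ARN are placeholders for illustration.

```json
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:s3-events",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```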

Cross-Region Replication:

Cross-region replication (CRR) enables automatic, asynchronous copying of objects across buckets in different AWS Regions.
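A minimal replication configuration could look roughly like this sketch, which replicates all objects to a destination bucket in another region. The IAM role ARN and the destination bucket name are placeholders, and this uses the simpler prefix-based rule form.

```json
{
  "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {
        "Bucket": "arn:aws:s3:::cloudaffaire-replica"
      }
    }
  ]
}
```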


Monitoring:

AWS provides various tools, such as CloudWatch alarms, CloudTrail logs, and the Amazon S3 dashboard, that you can use to monitor Amazon S3.

Server Access Logging:

To track requests for access to your bucket, you can enable server access logging. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and an error code, if relevant.
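Enabling server access logging amounts to pointing the source bucket at a target bucket and prefix for the log files. A minimal configuration payload might look like the following sketch; the target bucket name and prefix are placeholders for illustration.

```json
{
  "LoggingEnabled": {
    "TargetBucket": "cloudaffaire-logs",
    "TargetPrefix": "access-logs/"
  }
}
```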

Hope you have enjoyed this article. In the next blog, we will create our first S3 bucket and upload some files to it.

