How To Create Elasticsearch Cluster In AWS
Hello Everyone
Welcome to CloudAffaire and this is Debjeet.
In this series, we will explore one of the most popular log management stacks in DevOps, better known as the ELK (E=Elasticsearch, L=Logstash, K=Kibana) stack.
What Is the ELK Stack in DevOps?
The ELK Stack is a collection of three open-source products — Elasticsearch, Logstash, and Kibana — all developed, managed and maintained by Elastic. Elasticsearch is an open-source, full-text search and analysis engine, based on the Apache Lucene search engine. Logstash is a log aggregator that collects data from various input sources, executes different transformations and enhancements and then ships the data to various supported output destinations. Kibana is a visualization layer that works on top of Elasticsearch, providing users with the ability to analyze and visualize the data. Together, these different components are most commonly used for monitoring, troubleshooting and securing IT environments, business intelligence, and web analytics.
How does the ELK stack fit in AWS?
In AWS, there are two ways you can deploy an ELK stack: using fully managed services (E=Elasticsearch, L=Lambda, K=Kibana) or customer-managed (install and configure ELK on EC2). In this blog post, we will create an Elasticsearch cluster using the AWS Elasticsearch service (Kibana is included in the AWS Elasticsearch service), then insert some data into our Elasticsearch cluster, and finally view the data in the Kibana dashboard.
How To Create Elasticsearch Cluster In AWS:
Step 1: Create an access policy for your Elasticsearch cluster.
################################################
## How To Create Elasticsearch Cluster In AWS ##
################################################

## Prerequisite: AWS CLI installed and configured with proper access
## https://cloudaffaire.com/category/aws/aws-cli/

## Create a directory for this demo
mkdir elasticsearch && cd elasticsearch

## Get your public ip address
wget http://ipecho.net/plain -O - -q ; echo
# 113.239.49.214

## Create an access policy for your elasticsearch
## to restrict access from your ip address only
vi myaccesspolicy.json
---------------------
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "es:*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "113.239.49.214"
          ]
        }
      }
    }
  ]
}
---------------------
:wq
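Instead of hand-editing the policy file in vi, you can also generate it from a shell variable so the IP address is never typed twice. A minimal sketch, assuming `MY_IP` holds your public address (the `203.0.113.10` value below is a hypothetical example, not a real address — replace it with the IP returned by the wget command above):

```shell
# Sketch: generate the access policy from a variable instead of editing
# the file by hand. MY_IP is a hypothetical example value -- set it to
# the address returned by "wget http://ipecho.net/plain -O - -q".
MY_IP="203.0.113.10"

cat > myaccesspolicy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": [ "es:*" ],
      "Condition": {
        "IpAddress": { "aws:SourceIp": [ "${MY_IP}" ] }
      }
    }
  ]
}
EOF

# Sanity-check that the generated file is valid JSON before using it
python3 -m json.tool myaccesspolicy.json > /dev/null && echo "policy OK"
```

This avoids the most common failure mode at this step: a stray quote or missing bracket in the hand-edited JSON, which makes `create-elasticsearch-domain` reject the policy.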
Step 2: Create your Elasticsearch Cluster.
## Create your elasticsearch cluster
aws es create-elasticsearch-domain \
--domain-name cloudaffaire \
--elasticsearch-version 6.0 \
--elasticsearch-cluster-config InstanceType=t2.small.elasticsearch,InstanceCount=1 \
--ebs-options EBSEnabled=true,VolumeType=standard,VolumeSize=10 \
--access-policies file://myaccesspolicy.json

## Get your elasticsearch cluster status
aws es describe-elasticsearch-domain \
--domain-name cloudaffaire \
--query 'DomainStatus.Processing'

## It will take time for your cluster to be created
## if above output returns false, your cluster is ready
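Rather than re-running the describe command by hand, you can poll until `Processing` turns `false`. A small sketch of such a wait loop — the function name `wait_for_es` and its arguments are our own invention, not part of the AWS CLI:

```shell
# Sketch: poll a status command until it prints "false" (cluster ready).
# wait_for_es takes the status command as a string and an optional poll
# interval in seconds (default 30).
wait_for_es() {
  local check_cmd="$1" interval="${2:-30}"
  until [ "$(eval "$check_cmd")" = "false" ]; do
    sleep "$interval"
  done
  echo "cluster ready"
}

# Usage against the real cluster (assumes the AWS CLI is configured):
# wait_for_es "aws es describe-elasticsearch-domain \
#   --domain-name cloudaffaire \
#   --query DomainStatus.Processing --output text" 30
```

New clusters typically take ten minutes or more to provision, so a generous poll interval is fine here.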
Step 3: Get details of your Elasticsearch cluster.
## Get all the elasticsearch cluster names
aws es list-domain-names

## Get details of your elasticsearch cluster
aws es describe-elasticsearch-domain \
--domain-name cloudaffaire

## Get configuration details for your elasticsearch cluster
aws es describe-elasticsearch-domain-config \
--domain-name cloudaffaire

## Get your elasticsearch endpoint details
AWS_ES_ENDPOINT=$(aws es describe-elasticsearch-domain \
--domain-name cloudaffaire \
--query 'DomainStatus.Endpoint' \
--output text) && echo $AWS_ES_ENDPOINT

## Get your elasticsearch cluster health
curl -X GET "$AWS_ES_ENDPOINT/_cat/health?v&pretty"
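The `_cat/health?v` call returns a whitespace-separated table whose `status` column (green/yellow/red) is usually what you care about. A sketch of extracting just that column with awk — the `SAMPLE` row below is a made-up illustration, not output captured from a live cluster:

```shell
# Sketch: pull the "status" column out of _cat/health output with awk.
# SAMPLE is a hypothetical example row, not output from a real cluster.
SAMPLE='epoch      timestamp cluster      status node.total node.data shards pri relo init unassign
1577836800 00:00:00  cloudaffaire green  1          1         5      5   0    0    0'

# Find the column named "status" in the header, then print it from the row
status=$(echo "$SAMPLE" | awk 'NR==1 { for (i=1;i<=NF;i++) if ($i=="status") c=i } NR==2 { print $c }')
echo "cluster status: $status"

# Against the live cluster, pipe curl into the same awk program:
# curl -s "$AWS_ES_ENDPOINT/_cat/health?v" | awk '...'
```

Locating the column by header name rather than position keeps the extraction working even if Elasticsearch adds or reorders columns between versions.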
Step 4: Insert some data in your Elasticsearch cluster.
## Download some sample data and extract the files
## https://cloudaffaire.com/how-to-install-git-in-aws-ec2-instance/
git clone https://github.com/CloudAffaire/sample_data.git \
&& cp sample_data/Employees* .

## Bulk insert data into your elasticsearch cluster
curl -XPUT "$AWS_ES_ENDPOINT/cloudaffairempldb?pretty" \
-H 'Content-Type: application/json' \
-d @Employees25KHeader.json \
&& curl -XPUT "$AWS_ES_ENDPOINT/cloudaffairempldb/_bulk" \
-H 'Content-Type: application/json' \
--data-binary @Employees25K.json

## Get your elasticsearch index list
curl -XGET "$AWS_ES_ENDPOINT/_cat/indices"
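The `_bulk` endpoint expects newline-delimited JSON: an action line (e.g. `{"index": ...}`) followed by the document itself, one pair per document, with a trailing newline. A minimal sketch of building such a payload by hand — the field names and ids are illustrative, not taken from the Employees25K sample data:

```shell
# Sketch: build a minimal _bulk payload. Each document needs an action
# line followed by the document line; the payload is NDJSON, not a JSON
# array, and must end with a newline. Field names here are made up.
cat > bulk_demo.json <<'EOF'
{ "index" : { "_index" : "cloudaffairempldb", "_type" : "_doc", "_id" : "1" } }
{ "Name" : "Debjeet", "Designation" : "Blogger" }
{ "index" : { "_index" : "cloudaffairempldb", "_type" : "_doc", "_id" : "2" } }
{ "Name" : "Jane Doe", "Designation" : "Engineer" }
EOF

# Each line must be valid JSON on its own
while IFS= read -r line; do
  echo "$line" | python3 -m json.tool > /dev/null || echo "bad line: $line"
done < bulk_demo.json
echo "bulk payload OK"

# It could then be sent the same way as the sample data above:
# curl -XPUT "$AWS_ES_ENDPOINT/cloudaffairempldb/_bulk" \
#   -H 'Content-Type: application/json' \
#   --data-binary @bulk_demo.json
```

This is why the post uses `--data-binary` rather than `-d` for the bulk file: `-d` strips newlines, which would corrupt the NDJSON structure.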
Step 5: Configure your Elasticsearch Kibana dashboard.
## Configure Kibana, open below output in your browser
AWS_EC_KIBANA="https://"$AWS_ES_ENDPOINT"/_plugin/kibana/" \
&& echo $AWS_EC_KIBANA
Step 6: Cleanup.
## Delete your elasticsearch cluster
aws es delete-elasticsearch-domain \
--domain-name cloudaffaire \
&& cd && rm -rf elasticsearch
To get more details on ELK, please refer to the documentation below.
https://www.elastic.co/guide/index.html