How to decrease heartbeat time of slave nodes in Hadoop


I am working on AWS EMR.

I want to get information about a dead task node as soon as possible. But with Hadoop's default settings, a node is only marked as expired after 10 minutes without a heartbeat.

This is the default key-value pair in mapred-default: mapreduce.jobtracker.expire.trackers.interval : 600000 ms

I tried to modify the default value to 6000 ms by following this link.

After that, whenever I terminate an EC2 machine in the EMR cluster, I still do not see the state change that fast (within 6 seconds).

Resource Manager REST API: http://MASTER_DNS_NAME:8088/ws/v1/cluster/nodes
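For quick checks from the master node, the nodes endpoint can be filtered by state via the `states` query parameter of the standard YARN ResourceManager REST API (MASTER_DNS_NAME below is a placeholder for your master's hostname, as in the URL above):

```shell
# List only nodes the ResourceManager currently considers LOST or DECOMMISSIONED
curl -s "http://MASTER_DNS_NAME:8088/ws/v1/cluster/nodes?states=LOST,DECOMMISSIONED" \
  | python -m json.tool
```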


  1. What is the command to check the mapreduce.jobtracker.expire.trackers.interval value in a running EMR (Hadoop) cluster?
  2. Is this the right key to use to detect the state change? If not, please suggest another solution.
  3. What is the difference between the DECOMMISSIONING, DECOMMISSIONED, and LOST node states in the Resource Manager UI?


I tried a number of times, but the behaviour is inconsistent. Sometimes the node moves to the DECOMMISSIONING/DECOMMISSIONED state, and sometimes it moves directly to the LOST state after 10 minutes.

I need a quick state change so that I can trigger an event on it.

Here is my sample code –

These are the settings I changed in AWS EMR (i.e., in the underlying Hadoop) to reduce the time taken for the state change from RUNNING to another state (DECOMMISSIONING/DECOMMISSIONED/LOST).


  1. You can use “hdfs getconf”. Please refer to this post: Get a yarn configuration from commandline
  2. These links give information about the NodeManager health check and the properties you need to check:
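For example, run the following on the master node (assuming the key is present in the configuration the Hadoop client loads):

```shell
# Print the value of a single configuration key from the running cluster
hdfs getconf -confKey yarn.resourcemanager.nodemanagers.heartbeat-interval-ms
```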

Refer to “yarn.resourcemanager.nodemanagers.heartbeat-interval-ms” in the link below:

  3. Your queries are answered in this link:

Refer to the “attachments” and “sub-tasks” areas.
In simple terms, if the currently running ApplicationMaster and task containers get shut down properly (and/or restarted on other nodes), the NodeManager is said to be DECOMMISSIONED (gracefully); otherwise it is LOST.


“dfs.namenode.decommission.interval” is for HDFS DataNode decommissioning; it does not matter if you are concerned only with the NodeManager.
In some deployments a DataNode need not also be a compute node.

Try yarn.nm.liveness-monitor.expiry-interval-ms (default 600000 ms, which is why you saw the state change to LOST after 10 minutes; set it to a smaller value as you require) instead of mapreduce.jobtracker.expire.trackers.interval.

You have set “yarn.resourcemanager.nodemanagers.heartbeat-interval-ms” to 5000, which means the heartbeat goes to the ResourceManager once every 5 seconds, whereas the default is 1000 ms. Set it to a smaller value as you require.
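Putting the two YARN properties together, a yarn-site.xml fragment might look like this (the values below are illustrative, not recommendations; pick intervals that suit your cluster and workload):

```xml
<property>
  <!-- NodeManager -> ResourceManager heartbeat interval, in milliseconds -->
  <name>yarn.resourcemanager.nodemanagers.heartbeat-interval-ms</name>
  <value>1000</value>
</property>
<property>
  <!-- Mark a silent NodeManager as LOST after 60 seconds without a heartbeat -->
  <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
  <value>60000</value>
</property>
```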
