How to restart one live node in a multi-node Cassandra cluster?


I have a production Cassandra cluster of 6 nodes. I have made some changes to the cassandra.yaml file on one node and hence need to restart it.
How can I do this without losing any data or causing any cluster-related issues?
Can I just kill the Cassandra process on that particular node and start it again?
Cluster info:
6 nodes, all active.
I am using AWS Ec2Snitch.



If your replication factor is greater than 1 and you are not using the ALL consistency level for your writes/reads, you can perform the steps listed below without any downtime or data loss. If either of those conditions does not hold, you'll need to increase your replication factor or relax your request consistency level before you continue.
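You can check the replication factor of a keyspace before restarting. A sketch in CQL, assuming a keyspace named my_ks (the keyspace name, datacenter name, and RF value are examples; with Ec2Snitch the datacenter name is derived from the AWS region, e.g. us-east):

```cql
-- Check the current replication settings of your keyspace:
SELECT keyspace_name, replication
  FROM system_schema.keyspaces
 WHERE keyspace_name = 'my_ks';

-- If the RF is 1, raise it before restarting the node:
ALTER KEYSPACE my_ks
  WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': 3};
-- After raising the RF, run "nodetool repair my_ks" on each node
-- so the new replicas actually receive the data.
```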

  1. Perform nodetool drain on that node.
  2. Stop the service.
  3. Start the service.

In Cassandra, if durable writes are enabled, you should not lose data anyway: there is a commitlog replay mechanism that recovers unflushed writes after an accidental restart. So a plain restart should not lose any data, but replaying the commitlog can take some time.

The steps written above are part of the official upgrade procedure and are the "safest" option. Alternatively, you can run nodetool flush and then restart; this keeps commitlog replay on startup to a minimum and can be faster than the drain approach.
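The flush-then-restart alternative looks like this (again assuming a systemd service named cassandra; note that, unlike drain, flush does not stop the node from accepting new writes before the restart):

```shell
# Flush memtables first so commitlog replay on startup is minimal,
# then restart the service in one step.
nodetool flush
sudo systemctl restart cassandra
```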
