Question:
I have a task that processes a large amount of data, so the data is split into many batches. I have written a task definition for this type of work, but so far I only know how to set this up manually, by registering a separate task definition for each environment.
A sample where each task gets its own BATCH_ID environment variable:
aws ecs register-task-definition --cli-input-json file://taskdef1.json
aws ecs run-task --cluster $cluster --task-definition process_data_1
aws ecs register-task-definition --cli-input-json file://taskdef2.json
aws ecs run-task --cluster $cluster --task-definition process_data_2
It would also be nice to have a .manifest file listing all the task ARNs launched on the cluster.
Is there some way to run multiple similar ECS tasks with different environment parameters in a more elegant way than creating an enormous number of task definition files?
Thanks for any help and suggestions
Answer:
You can override environment variables when you run the task with the --overrides flag. I use this all the time; you can either override an existing environment variable (defined in the task definition) or simply add a new one. --overrides accepts only JSON (no shorthand syntax); in your case, it would look like:
{
  "containerOverrides": [
    {
      "name": "containerNameFromTaskDefinition",
      "environment": [
        {
          "name": "BATCH_ID",
          "value": "sampleBatchId"
        }
      ]
    }
  ]
}
and the command (note that the JSON must be quoted so the shell passes it as a single argument):
aws ecs run-task --cluster $cluster --task-definition process_data_1 \
    --overrides '{"containerOverrides":[...]}'
You can use --overrides to override even more things, of course: https://docs.aws.amazon.com/cli/latest/reference/ecs/run-task.html
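Putting it together, here is a minimal sketch of a loop that registers the task definition once and launches one run per batch, each with its own BATCH_ID. The cluster name, the task definition name `process_data`, the container name, and the batch IDs are all placeholder assumptions; substitute your own values. The actual `aws` calls are left commented out, including a `--query 'tasks[0].taskArn'` line that would append each launched task's ARN to a manifest file, as you wanted:

```shell
#!/bin/sh
# Build the --overrides JSON for a given batch id.
# "containerNameFromTaskDefinition" is an assumption; use your container's name.
build_overrides() {
    printf '{"containerOverrides":[{"name":"containerNameFromTaskDefinition","environment":[{"name":"BATCH_ID","value":"%s"}]}]}' "$1"
}

cluster=my-cluster       # assumption: your cluster name
manifest=tasks.manifest  # file collecting the launched task ARNs
: > "$manifest"          # truncate/create the manifest

for batch_id in 1 2 3; do
    overrides="$(build_overrides "$batch_id")"
    echo "would run: aws ecs run-task --cluster $cluster --task-definition process_data --overrides $overrides"
    # Uncomment to actually launch and record the ARN (requires AWS credentials):
    # aws ecs run-task --cluster "$cluster" --task-definition process_data \
    #     --overrides "$overrides" \
    #     --query 'tasks[0].taskArn' --output text >> "$manifest"
done
```

This way you keep a single task definition file and vary only the override JSON per run.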