What’s the difference between regular and ml AWS EC2 instances?


I’m experimenting with AWS SageMaker using a Free Tier account. According to the SageMaker pricing page, the free tier includes 50 hours of m4.xlarge and m5.xlarge instances for training. (I am safely within the two-month limit.) But when I attempt to train an algorithm with the XGBoost container using m5.xlarge, I get the error shown below.

Are the ml-type and non-ml-type instances the same hardware, with the prefix just marking the ones meant for SageMaker, or are they entirely different? The EC2 pricing page doesn’t even list the ml instances.

ClientError: An error occurred (ValidationException) when calling the
CreateTrainingJob operation: 1 validation error detected: Value
‘m5.xlarge’ at ‘resourceConfig.instanceType’ failed to satisfy
constraint: Member must satisfy enum value set: [ml.p2.xlarge,
ml.m5.4xlarge, ml.m4.16xlarge, ml.p4d.24xlarge, ml.c5n.xlarge,
ml.p3.16xlarge, ml.m5.large, ml.p2.16xlarge, ml.c4.2xlarge,
ml.c5.2xlarge, ml.c4.4xlarge, ml.c5.4xlarge, ml.c5n.18xlarge,
ml.g4dn.xlarge, ml.g4dn.12xlarge, ml.c4.8xlarge, ml.g4dn.2xlarge,
ml.c5.9xlarge, ml.g4dn.4xlarge, ml.c5.xlarge, ml.g4dn.16xlarge,
ml.c4.xlarge, ml.g4dn.8xlarge, ml.c5n.2xlarge, ml.c5n.4xlarge,
ml.c5.18xlarge, ml.p3dn.24xlarge, ml.p3.2xlarge, ml.m5.xlarge,
ml.m4.10xlarge, ml.c5n.9xlarge, ml.m5.12xlarge, ml.m4.xlarge,
ml.m5.24xlarge, ml.m4.2xlarge, ml.p2.8xlarge, ml.m5.2xlarge,
ml.p3.8xlarge, ml.m4.4xlarge]


The instances with the ml prefix are instance classes reserved specifically for use with SageMaker. They are provisioned through the SageMaker APIs (such as CreateTrainingJob) rather than through EC2, which is why they don’t appear on the EC2 page.

In addition to being managed by the SageMaker service, such an instance runs an AMI preloaded with the necessary libraries and packages, such as Jupyter.
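In practical terms, the fix for the error above is simply to pass the "ml."-prefixed name wherever SageMaker asks for an instance type. A minimal sketch (the helper function below is hypothetical, not part of any AWS SDK; the ResourceConfig keys mirror the CreateTrainingJob request shape):

```python
def sagemaker_instance_type(ec2_type: str) -> str:
    """Map an EC2-style instance type name to its SageMaker equivalent
    by adding the "ml." prefix if it is not already present."""
    return ec2_type if ec2_type.startswith("ml.") else f"ml.{ec2_type}"


# ResourceConfig as it would be passed to CreateTrainingJob:
resource_config = {
    "InstanceType": sagemaker_instance_type("m5.xlarge"),  # "ml.m5.xlarge"
    "InstanceCount": 1,
    "VolumeSizeInGB": 30,
}
```

Note that only the enum values listed in the error message are accepted, so a valid "ml." name must also correspond to an instance class SageMaker actually offers.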
