Question:
I am trying to convert about 1.5 GB of gzipped CSV into Parquet using AWS Glue. The script below is an autogenerated Glue job for that task. It seems to take a very long time (I've waited hours with 10 DPUs and never seen it finish or produce any output data).
I'm wondering if anyone has experience converting 1.5 GB+ of gzipped CSV into Parquet – is there a better way to accomplish this conversion?
I have TBs of data to convert, so it is concerning that converting just a few GBs already takes this long.
My Glue Job Logs have thousands of entries like:
18/03/02 20:20:20 DEBUG Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 172.31.58.225
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1520020335454
     final status: UNDEFINED
     tracking URL: http://ip-172-31-51-199.ec2.internal:20888/proxy/application_1520020149832_0001/
     user: root
AWS Autogenerated Glue Job Code:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

## @type: DataSource
## @args: [database = "test_datalake_db", table_name = "events2_2017_test", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test_datalake_db", table_name = "events2_2017_test", transformation_ctx = "datasource0")

## @type: ApplyMapping
## @args: [mapping = [("sys_vortex_id", "string", "sys_vortex_id", "string"), ("sys_app_id", "string", "sys_app_id", "string"), ("sys_pq_id", "string", "sys_pq_id", "string"), ("sys_ip_address", "string", "sys_ip_address", "string"), ("sys_submitted_at", "string", "sys_submitted_at", "string"), ("sys_received_at", "string", "sys_received_at", "string"), ("device_id_type", "string", "device_id_type", "string"), ("device_id", "string", "device_id", "string"), ("timezone", "string", "timezone", "string"), ("online", "string", "online", "string"), ("app_version", "string", "app_version", "string"), ("device_days", "string", "device_days", "string"), ("device_sessions", "string", "device_sessions", "string"), ("event_id", "string", "event_id", "string"), ("event_at", "string", "event_at", "string"), ("event_date", "string", "event_date", "string"), ("int1", "string", "int1", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("sys_vortex_id", "string", "sys_vortex_id", "string"), ("sys_app_id", "string", "sys_app_id", "string"), ("sys_pq_id", "string", "sys_pq_id", "string"), ("sys_ip_address", "string", "sys_ip_address", "string"), ("sys_submitted_at", "string", "sys_submitted_at", "string"), ("sys_received_at", "string", "sys_received_at", "string"), ("device_id_type", "string", "device_id_type", "string"), ("device_id", "string", "device_id", "string"), ("timezone", "string", "timezone", "string"), ("online", "string", "online", "string"), ("app_version", "string", "app_version", "string"), ("device_days", "string", "device_days", "string"), ("device_sessions", "string", "device_sessions", "string"), ("event_id", "string", "event_id", "string"), ("event_at", "string", "event_at", "string"), ("event_date", "string", "event_date", "string"), ("int1", "string", "int1", "string")], transformation_ctx = "applymapping1")

## @type: ResolveChoice
## @args: [choice = "make_struct", transformation_ctx = "resolvechoice2"]
## @return: resolvechoice2
## @inputs: [frame = applymapping1]
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")

## @type: DropNullFields
## @args: [transformation_ctx = "dropnullfields3"]
## @return: dropnullfields3
## @inputs: [frame = resolvechoice2]
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")

## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://devops-redshift*****/prd/parquet"}, format = "parquet", transformation_ctx = "datasink4"]
## @return: datasink4
## @inputs: [frame = dropnullfields3]
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://devops-redshift*****/prd/parquet"}, format = "parquet", transformation_ctx = "datasink4")

job.commit()
Answer:
Yes, I’ve recently found that Spark DataFrames – versus Glue’s DynamicFrames – are significantly faster for this kind of conversion.
# boiler plate, generated code
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# some job-specific variables
compression_type = 'snappy'  # 'snappy', 'gzip', or 'none'
source_path = 's3://source-bucket/part1=x/part2=y/'
destination_path = 's3://destination-bucket/part1=x/part2=y/'

# CSV to Parquet conversion
df = spark.read.option('delimiter', '|').option('header', 'true').csv(source_path)
df.write.mode("overwrite").format('parquet').option('compression', compression_type).save(destination_path)

job.commit()
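If you still want to read the source through the Glue Data Catalog (as the autogenerated job does) but write with the faster Spark path, one option is to convert the DynamicFrame to a DataFrame with toDF() and use the Spark DataFrameWriter for the Parquet output. This is only a minimal sketch, assuming the same test_datalake_db catalog table from the question; the destination path here is a placeholder, not the original job's bucket.

import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read through the catalog as before, then switch to a plain Spark DataFrame
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "test_datalake_db",
    table_name = "events2_2017_test",
    transformation_ctx = "datasource0")
df = datasource0.toDF()

# Write Parquet with the Spark DataFrameWriter (placeholder output path)
df.write.mode("overwrite").format("parquet").option("compression", "snappy").save("s3://destination-bucket/prd/parquet/")

job.commit()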