Question:
I think I’m running into a JAR incompatibility. I’m using the following JAR files to build a Spark cluster:
- spark-2.4.7-bin-hadoop2.7.tgz
- aws-java-sdk-1.11.885.jar
- hadoop-aws-2.7.4.jar
```python
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
import sys

spark = (SparkSession.builder
         .appName("AuthorsAges")
         .appName('SparkCassandraApp')
         .getOrCreate())

spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
spark._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
spark._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider")
spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "")

input_file = 's3a://spark-test-data/Fire_Department_Calls_for_Service.csv'

file_schema = StructType([StructField("Call_Number", StringType(), True),
                          StructField("Unit_ID", StringType(), True),
                          StructField("Incident_Number", StringType(), True),
                          ...
                          ...

# Read file into a Spark DataFrame
input_df = (spark.read.format("csv")
            .option("header", "true")
            .schema(file_schema)
            .load(input_file))
```
The code fails as soon as it executes the `spark.read` call. It appears that a class can’t be found: `java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException`.
My spark-defaults.conf is configured as follows:
```
spark.jars.packages com.amazonaws:aws-java-sdk:1.11.885,org.apache.hadoop:hadoop-aws:2.7.4
```
I would appreciate it if someone could help. Any ideas? Here is the full traceback:
```
Traceback (most recent call last):
  File "
  File "/usr/local/spark/spark-3.0.1-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 178, in load
    return self._df(self._jreader.load(path))
  File "/usr/local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1305, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/local/spark/spark-3.0.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 128, in deco
    return f(*a, **kw)
  File "/usr/local/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o51.load.
: java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
	at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
	at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:232)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.amazonaws.AmazonServiceException
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 30 more
```
Answer:
hadoop-aws 2.7.4 is built against aws-java-sdk 1.7.4, which isn’t fully compatible with newer releases, so if you put a newer version of aws-java-sdk on the classpath, Hadoop can’t find the classes it needs. You have the following choices:
- Remove the explicit dependency on aws-java-sdk and let hadoop-aws pull in the 1.7.4 release it was built against, if you don’t need newer SDK functionality (see the first sketch after this list).
- Compile Spark 2.4 with Hadoop 3 using the `hadoop-3.1` profile, as described in the documentation.
- Switch to Spark 3.0.x, which is already available as a build made with Hadoop 3.2 (see the second sketch after this list).
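For the first option, a minimal sketch of what that looks like, assuming the Spark 2.4.7 / Hadoop 2.7 build from the question: list only hadoop-aws in `spark.jars.packages` (in spark-defaults.conf or programmatically, as below) and let dependency resolution pull in the aws-java-sdk 1.7.4 that hadoop-aws 2.7.4 declares, instead of pinning 1.11.885:

```python
from pyspark.sql import SparkSession

# Sketch for option 1: depend only on hadoop-aws 2.7.4 and let its matching
# aws-java-sdk 1.7.4 come in as a transitive dependency. The equivalent
# spark-defaults.conf line would drop the com.amazonaws:aws-java-sdk:1.11.885
# entry and keep only org.apache.hadoop:hadoop-aws:2.7.4.
# Note: spark.jars.packages must be set before the JVM backing the session
# starts, so this only works for the first SparkSession of the application.
spark = (SparkSession.builder
         .appName("AuthorsAges")
         .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:2.7.4")
         .getOrCreate())

# The s3a settings from the question stay as they were.
spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")
```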
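For the Spark 3.0.x route, a sketch under the assumption that you download a `spark-3.0.x-bin-hadoop3.2` distribution: pick the hadoop-aws version that matches the bundled Hadoop (3.2.0) and let it pull in a compatible `aws-java-sdk-bundle` transitively, so no SDK artifact needs to be pinned by hand. Keep in mind that some s3a option values differ between Hadoop 2.7 and 3.x (the credentials-provider class names changed, for example), so the `hadoopConfiguration()` block from the question may need adjusting:

```python
from pyspark.sql import SparkSession

# Sketch for the Spark 3 option (assumes a spark-3.0.x-bin-hadoop3.2 build).
# hadoop-aws must match the Hadoop version bundled with Spark; its transitive
# aws-java-sdk-bundle dependency supplies a compatible AWS SDK.
spark = (SparkSession.builder
         .appName("AuthorsAges")
         .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.2.0")
         .getOrCreate())

# Reading from s3a then works the same way as in the question.
input_df = (spark.read.format("csv")
            .option("header", "true")
            .load("s3a://spark-test-data/Fire_Department_Calls_for_Service.csv"))
```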