
Cassandra + Spark + Python

We will use the spark-cassandra-connector to bring Spark and Cassandra together. When you run a Spark job using this library, you need to include the corresponding Spark Package:

spark-submit --packages com.datastax.spark:spark-cassandra-connector_2.12:3.1.0 --conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions …

You need to configure the SparkSession object to connect correctly to our cluster. Create your spark variable like this:

from pyspark.sql import SparkSession

cluster_seeds = ['node1.local', 'node2.local']
spark = SparkSession.builder.appName('Spark Cassandra example') \
    .config('spark.cassandra.connection.host', ','.join(cluster_seeds)) \
    .getOrCreate()

With this done, you should be able to read Cassandra tables as DataFrames, and write DataFrames back to Cassandra tables:

df = spark.read.format("org.apache.spark.sql.cassandra") \
    .options(table=table, keyspace=keyspace).load()
df.write.format("org.apache.spark.sql.cassandra") \
    .options(table=table, keyspace=keyspace).save()
Updated Mon Aug. 29 2022, 10:52 by ggbaker.