Spark Utils alternatives and similar packages
Based on the "Big Data" category.
Alternatively, view Spark Utils alternatives based on common mentions on social networks and blogs.
- Deeplearning4J: Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch, a modular and tiny C++ library for running math code and a Java based math library on top of the core C++ library. Also includes SameDiff: a PyTorch/TensorFlow-like library for running deep learn...
- Reactive-kafka: Alpakka Kafka connector. Alpakka is a Reactive Enterprise Integration library for Java and Scala, based on Reactive Streams and Akka.
- Schemer: Schema registry for CSV, TSV, JSON, AVRO and Parquet schemas. Supports schema inference and a GraphQL API.
- GridScale: Scala library for accessing various file, batch systems, job schedulers and grid middlewares.
README
Spark Utils
Motivation
One of the biggest challenges after taking the first steps into the world of writing Apache Spark applications in Scala is taking them to production.
An application of any kind needs to be easy to run and easy to configure.
This project tries to help developers write Spark applications while focusing mainly on the application logic rather than on the details of configuring the application and setting up the Spark context.
This project also tries to create and encourage a friendly yet professional environment for developers to help each other, so please do not be shy and join in through Gitter, Twitter, issue reports or pull requests.
Description
This project contains some basic utilities that can help with setting up a Spark application project.
The main point is the simplicity of writing Apache Spark applications, focusing just on the logic, while providing for easy configuration and argument passing.
The code sample below shows how easy it can be to write a file format converter from any supported input type, with any parsing configuration options, to any supported output format.
```scala
object FormatConverterExample extends SparkApp[FormatConverterContext, DataFrame] {
  override def createContext(config: Config) = FormatConverterContext(config)
  override def run(implicit spark: SparkSession, context: FormatConverterContext): Try[DataFrame] = {
    val inputData = spark.source(context.input).read
    inputData.sink(context.output).write
  }
}
```
Creating the configuration can be as simple as defining a case class to hold the configuration and a factory that helps extract both simple and complex data types, like input sources and output sinks.
```scala
case class FormatConverterContext(input: FormatAwareDataSourceConfiguration,
                                  output: FormatAwareDataSinkConfiguration)

object FormatConverterContext extends Configurator[FormatConverterContext] {
  import com.typesafe.config.Config
  import scalaz.ValidationNel

  def validationNel(config: Config): ValidationNel[Throwable, FormatConverterContext] = {
    import scalaz.syntax.applicative._
    config.extract[FormatAwareDataSourceConfiguration]("input") |@|
      config.extract[FormatAwareDataSinkConfiguration]("output") apply
      FormatConverterContext.apply
  }
}
```
Optionally, `SparkFun` can be used instead of `SparkApp` to make the code even more concise.
```scala
object FormatConverterExample extends
    SparkFun[FormatConverterContext, DataFrame](FormatConverterContext(_).get) {
  override def run(implicit spark: SparkSession, context: FormatConverterContext): Try[DataFrame] =
    spark.source(context.input).read.sink(context.output).write
}
```
For structured streaming applications the format converter might look like this:
```scala
object StreamingFormatConverterExample extends SparkApp[StreamingFormatConverterContext, DataFrame] {
  override def createContext(config: Config) = StreamingFormatConverterContext(config).get
  override def run(implicit spark: SparkSession, context: StreamingFormatConverterContext): Try[DataFrame] = {
    val inputData = spark.source(context.input).read
    inputData.streamingSink(context.output).write.awaitTermination()
  }
}
```
The streaming configuration can be just as simple:
```scala
case class StreamingFormatConverterContext(input: FormatAwareStreamingSourceConfiguration,
                                           output: FormatAwareStreamingSinkConfiguration)

object StreamingFormatConverterContext extends Configurator[StreamingFormatConverterContext] {
  import com.typesafe.config.Config
  import scalaz.ValidationNel
  import scalaz.syntax.applicative._

  def validationNel(config: Config): ValidationNel[Throwable, StreamingFormatConverterContext] = {
    config.extract[FormatAwareStreamingSourceConfiguration]("input") |@|
      config.extract[FormatAwareStreamingSinkConfiguration]("output") apply
      StreamingFormatConverterContext.apply
  }
}
```
The [SparkRunnable](docs/spark-runnable.md) and [SparkApp](docs/spark-app.md) or [SparkFun](docs/spark-fun.md), together with the configuration framework, provide for easy Spark application creation, with configuration that can be managed through configuration files or application parameters.
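To give a feel for that configuration, here is a minimal sketch, not taken from the project documentation, that builds the `FormatConverterContext` defined above from a hand-written HOCON string. The `format` and `path` keys are only assumptions about how a file based source and sink might be configured; the exact keys are described in the [DataSource](docs/data-source.md) and [DataSink](docs/data-sink.md) documentation.

```scala
import com.typesafe.config.{ Config, ConfigFactory }

// Hypothetical configuration keys, for illustration only; in a real SparkApp the
// Config instance is assembled for you from application parameters and configuration files.
val config: Config = ConfigFactory.parseString(
  """
    |input.format = csv
    |input.path = "/tmp/data/input.csv"
    |output.format = parquet
    |output.path = "/tmp/data/output"
  """.stripMargin)

// FormatConverterContext is the Configurator defined earlier in this README.
val context = FormatConverterContext(config)
```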
The IO frameworks for [reading](docs/data-source.md) and [writing](docs/data-sink.md) data frames add extra convenience for setting up batch and structured streaming jobs that transform various types of files and streams.
Last but not least, there are many utility functions that provide convenience for loading resources, dealing with schemas and so on.
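The exact helper names are covered in the API documentation; purely as a plain-Spark illustration of the kind of chore these utilities cover, loading a schema kept as a JSON resource might look like this (no spark-utils API is used here):

```scala
import org.apache.spark.sql.types.{ DataType, StructType }
import scala.io.Source

// Load a schema definition stored as a JSON resource on the classpath (the resource
// name is hypothetical) and turn it into a StructType usable when reading data.
val schemaJson = Source.fromInputStream(getClass.getResourceAsStream("/my-schema.json")).mkString
val schema: StructType = DataType.fromJson(schemaJson).asInstanceOf[StructType]
```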
Most of the common features are also implemented as decorators for the main Spark classes, like `SparkContext`, `DataFrame` and `StructType`, and they are conveniently available by importing the `org.tupol.spark.implicits._` package.
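For illustration, here is a minimal sketch of that decorator style, reusing the `source` and `sink` decorators shown in the examples above; the configuration parameters are hypothetical and their import lines are omitted:

```scala
import org.apache.spark.sql.SparkSession
import org.tupol.spark.implicits._

// With the implicits in scope, the SparkSession gains `source(...)` and the resulting
// data gains `sink(...)`, exactly as used in FormatConverterExample above.
// The configuration types are the ones from the FormatConverterContext sketch.
def convert(inputConfig: FormatAwareDataSourceConfiguration,
            outputConfig: FormatAwareDataSinkConfiguration)
           (implicit spark: SparkSession) = {
  val inputData = spark.source(inputConfig).read
  inputData.sink(outputConfig).write
}
```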
Documentation
The available documentation for the main utilities and frameworks:
- [SparkApp](docs/spark-app.md), [SparkFun](docs/spark-fun.md) and [SparkRunnable](docs/spark-runnable.md)
- [DataSource Framework](docs/data-source.md) for both batch and structured streaming applications
- [DataSink Framework](docs/data-sink.md) for both batch and structured streaming applications
Latest stable API documentation is available here.
An extensive tutorial and walk-through can be found here. Extensive samples and demos can be found here.
A nice example of how this library can be used can be found in the spark-tools project, through the implementation of a generic format converter and a SQL processor for both batch and structured streams.
Prerequisites
- Java 8 or higher
- Scala 2.12
- Apache Spark 3.0.X
Getting Spark Utils
Spark Utils is published to Maven Central and Spark Packages:
- Group id / organization: `org.tupol`
- Artifact id / name: `spark-utils`
- Latest stable versions:
  - Spark 2.4: `0.4.2`
  - Spark 3.0: `0.6.1`
Usage with SBT: add a dependency to the latest version of spark-utils to your sbt build definition file:

```scala
libraryDependencies += "org.tupol" %% "spark-utils" % "0.6.2"
```
Include this package in your Spark applications using `spark-shell` or `spark-submit`:

```bash
$SPARK_HOME/bin/spark-shell --packages org.tupol:spark-utils_2.12:0.4.2
```
Starting a New spark-utils Project
The simplest way to start a new spark-utils project is to make use of the spark-apps.seed.g8 template project.
To fill in the project options manually, run:

```bash
g8 tupol/spark-apps.seed.g8
```
The default options look like the following:
```
name [My Project]:
appname [My First App]:
organization [my.org]:
version [0.0.1-SNAPSHOT]:
package [my.org.my_project]:
classname [MyFirstApp]:
scriptname [my-first-app]:
scalaVersion [2.11.12]:
sparkVersion [2.4.0]:
sparkUtilsVersion [0.4.0]:
```
To fill in the options in advance:

```bash
g8 tupol/spark-apps.seed.g8 --name="My Project" --appname="My App" --organization="my.org" --force
```
What's new?
0.6.2
- Fixed the `core` dependency to `scala-utils`; now using `scala-utils-core`
- Refactored the `core`/`implicits` package to make the implicits a little more explicit
For previous versions please consult the [release notes](RELEASE-NOTES.md).
License
This code is open source software licensed under the [MIT License](LICENSE).