# Changelog History

## v0.8.3 Changes

July 23, 2017

Like v0.8.2, just with a fix for Docker and Debian builds.
## v0.8.2 Changes

June 28, 2017

> Note: for Spark < 2.0, see v0.7.0-pre2.

> This is (likely) the last release which supports Scala 2.10.

Various fixes and improvements, among others:

- redesigned UI to be more user-friendly (minimalistic UX, cell context menu, improved sidebar)
- better Scala 2.11 support (code autocompletion; fixed kernel failures; improved `customDeps` fetching)
- use `coursier` for faster dependency resolution during the build
- code cleanup and other stability fixes
- easier use of Mesos in Spark 2.1: the `spark-mesos` lib is now included by default (added the `-Dwith.mesos=true` build option)

New features:

- SBT project generation from a notebook (experimental)
- notebook edit versioning, with storage on Git (experimental)
- viewer-only mode: a build option which makes notebooks non-editable

Removed:

- removed the `:dp`, `:cp`, `:local-repo`, and `:remote-repo` commands (use `Edit -> Notebook metadata` instead)
- removed old plotting libs: `Rickshawts`, `TauChart`, `LinePlot`, `Bokeh` (all superseded by Plotly)
## v0.7.0 Changes

October 31, 2016

> Note: for Spark < 2.0, see v0.7.0-pre2.

- added Spark 2 support
- many fixes for better stability (more lenient with user input, avoid kernel crashes)
- lots of optimizations for the viz; also replaced most Dimple charts with C3
- introduced Plotly.js wrappers
- better Debian support
- improved "download as Markdown": now a zip, with charts rendered as PNGs referenced from an images folder
- better docs available at all times in the `doc` folder
- cell dirtiness detection based on the variables' dependency graph
- new default port 9001, to avoid conflicts with HDFS
- removed Wisp and Highcharts (in favor of plotly.js)
- code cleanup
## v0.7.0-pre2 Changes

October 31, 2016

THIS IS FOR SPARK PRE 2.0.

Based on v0.7.0, it includes its fixes, optimizations, and most new features, unless Spark 2 specific (`SparkSession`, for instance).
## v0.6.4 Changes

October 06, 2016

For `spark <= 1.6`, use this release or the `stale/spark-1.6-and-older` branch.
## v0.6.3 Changes

March 11, 2016

Aside from stabilization with all the bugs fixed, the new features are:

- improvement of the PivotChart
- improvement of completion, with type args and more
- better sampling for automatic/default plots
- added tests and Travis
- Spark jobs are tracked by cells; cells now have ids
- hardened the observables init
- improved Scala 2.11 support
- improved the Flow widget; added a Custom box taking Scala code directly as its logic
- the job for a cell can be cancelled
- read-only mode
- notebooks are now synced with respect to cell output (including reactive), but not cell adds/deletes or cell-content changes
- panels have landed:
  - general Spark monitor
  - defined variables and types
  - chat room
- cleaner Docker build
- added TauCharts viz lib support
- added `-Dguava.version` to support integration tools like the Cassandra connector from 1.5+

Again, we'd like to thank the community for their work and their support!
YOU'RE ALL AWESOME!
## v0.6.2 Changes

December 15, 2015

- build information in the UI
- better https support for web socket connections
- use the presentation compiler for completion
- fixed kernel restart
- server/Spark logs forwarded to the browser's console
- charts plot 25 entries by default (extendable using `maxPoints`), but this cap is changeable using a reactive HTML input
- the Spark jobs' monitor/progress bar is now always live (still in progress, needs some UI hardening and enhancements)
- graph plots are reactive
- table chart using dynatable
- HTTP proxy support for dependency management
- generic Spark version support, in a best-effort way, for any new Spark versions (including nightly builds)
- nightly build repos can be detected and injected with the `spark.resolver.search` JVM property set to `true`
- presentation mode added, including UI tuning
- variables environment support in metadata: local repo, VM arguments, and Spark configuration
- better `DataFrame` viz support
- PivotChart tuning, including viz and state management
- support `%%` in the deps definition to take care of the Scala version in use
- support the current Spark version in the deps definition using `_`, like `"org.apache.spark %% spark-streaming-kafka % _"`
- added `user_custom.css` for users' extensions or redefinitions of the CSS
- report the Spark UI link on the left-hand side of the notebook
- URL query parameter `action=recompute_now` to automatically recompute everything at loading time
- default logging is less verbose
- added a CSV downloader capability for `DataFrame` (directly into HDFS using spark-csv)!
- new C3-based widgets
- new GeoChart widget: support for JTS geometries, GeoJSON, and Strings
- new Flow widget for visual flow management using boxes and arrows (needs hardening and improvements)
- UI cleaning (menubars, ...)
- kernel auto-start can be disabled (useful for a view-only mode like presentations): `autostartOnNotebookOpen` in conf
- the UI shows when the kernel isn't ready
- dead kernels are now reported throughout the UI too
- added `manager.notebooks.override` to override and merge default values with the metadata provided before starting a notebook
- new example notebooks:
  - Machine Learning
  - C3
  - Geospatial
  - Flow
- more documentation (not enough...)

Special thanks to @vidma for his amazing work on many new and killer features!
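To make the `%%` and `_` dependency placeholders concrete, here is a minimal sketch of a `customDeps` entry as it might appear under `Edit -> Notebook metadata` (the coordinate is the one quoted in the changelog; the surrounding comments describe the expansion as I understand it, not an authoritative spec):

```scala
// Hypothetical customDeps entry in the notebook metadata.
// "%%" is expanded to the Scala binary version the notebook runs on
// (e.g. 2.10 or 2.11), and "_" stands in for the current Spark version,
// so the same notebook can move between Spark builds without edits.
"org.apache.spark %% spark-streaming-kafka % _"
```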
## v0.6.1 Changes

September 11, 2015

- ADD_JARS support (add jars to the context)
- notebook metadata saved on OK
- fixed `:dp` and `:cp` on Scala 2.11
- hide the Tachyon UI
- YARN_CONF_DIR support
- `customArgs` in metadata (application.conf, ...), adding JVM arguments to the process spawned for a notebook
- Spark 1.5.0 support
- Tachyon 0.7.1 integration for Spark 1.5.0
- added a reactive slider + example in `misc`
- the old X and Y renaming of tuples' field names was discarded; back to `_1`, `_2`
- example of the Cassandra connector (@maasg)
- reactive `widgets.PivotChart` support for simpler analysis of Scala data
- fixes, fixes, fixes
## v0.6.0 Changes

July 23, 2015

- a loooooot of fixes \o/
- a loooooot of documentation, including how to install and run the Spark Notebook on distros and clusters (YARN, MapR, EMR, ...)
- support for HADOOP_CONF_DIR and EXTRA_CLASSPATH to include Spark-cluster-specific classpath entries, like the Hadoop conf dir, but also the lzo jar and so on; this updates the classpath of both the notebook server and the notebook processes
- the custom repos specified in the metadata or application.conf have a higher priority
- support for Spark 1.4.1
- Mesos is added to the Docker distro
- code is now run asynchronously, allowing the introduction of the flame button, which can cancel all running Spark jobs
- added many new notebooks, including the @Data-Fellas ML and ADAM examples, and anomaly detection by @radek1st
- LOGO :-D
- added `:markdown`, `:javascript`, `:jpeg`, `:png`, `:latex`, `:svg`, `:pdf`, `:html`, and `:plain` contexts that support interpolation (using Scala variables)
- clusters can be deleted from the UI
- the spark-packages repo is available by default
- the Spark package format is now supported: `groupId:artifactId:version`
- added the `with.parquet` modifier to include the Parquet deps
- `spark.app.name` uses the name of the notebook by default (easier to track in clusters)
- dynamic table renderer for `DataFrame`
- added a users section in the README
- Tachyon can be disabled by setting `manager.tachyon.enabled` to `false`
- support for printing from the browser (CTRL+P)
- added `:ldp` for local dependency definitions (so not added to the Spark context)
- graphs (nodes-edges) can be plotted easily using the `Node` and `Edge` types; see `viz/Graph Plots.snb`
- geo data viz added using lat/lon data; see `viz/Geo Data (Map).snb`
- enhanced the Twitter stream example to show tweets on a map
- enhanced the WISP examples, including Histogram and BoxPlot; WISP plots can now be built using the lower-level API for Highcharts
- added the commons lib to the Spark context to enable extended viz using Spark jobs
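As a sketch of how the interpolating contexts might look in a cell (the variable `count` and the exact cell layout are assumptions for illustration, not taken from the release notes):

```scala
// Hypothetical notebook cell: the :markdown context renders its body
// as Markdown, interpolating Scala variables defined in earlier cells
// (here, `count` is assumed to exist in the session).
:markdown
**Processed ${count} records.**
```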
## v0.5.2

June 15, 2015