Changelog History
- v2.2.0-M1 (May 24, 2018)
- v2.2.0-M0 (May 23, 2018)
- v2.0.3 (December 11, 2017)
- v2.0.2 Changes (November 17, 2017)
  - Improve the error message for broadcasting errors.
  - Avoid `java.lang.Error` when using `Executors.singleThreadExecutor` as the execution context.
- v2.0.1 Changes (August 05, 2017)
  - Port all built-in plugins except `INDArray`-related plugins to Scala 2.12. Thanks to @lrytz (scala/bug#10334, scala/scala#5973 and scala/scala#5977).
- v2.0.0 Changes (July 26, 2017)

Today, we are happy to announce DeepLearning.scala 2.0.0, the new stable release of DeepLearning.scala, a simple library for creating complex neural networks from object-oriented and functional programming constructs.

- DeepLearning.scala runs on the JVM and can be used either in standalone JVM applications or in Jupyter Notebooks.
- DeepLearning.scala is expressive. Various types of neural network layers can be created by composing `map`, `reduce` or other higher-order functions.
- DeepLearning.scala supports plugins. You can share your own algorithms, models and hyperparameters as a plugin, as simply as creating a GitHub Gist.
- All the above features are statically type checked.
Features in DeepLearning.scala 2.0

In DeepLearning.scala 2.0, we removed the special support for differentiable ADT and `Boolean` types. Now differentiable computational graphs are ordinary Scala code, so all types, including ADTs and `Boolean`, are available in these graphs.

Dynamic neural networks
Unlike some other deep learning frameworks, the structure of neural networks in DeepLearning.scala is dynamically determined at run time. Our neural networks are programs. All Scala features, including functions and expressions, are available in neural networks.
For example:
```scala
def ordinaryScalaFunction(a: INDArray): Boolean = {
  a.signnum.sumT > math.random
}

def myDynamicNeuralNetwork(input: INDArray) = INDArrayLayer(monadic[Do] {
  val outputOfLayer1 = layer1(input).forward.each
  if (ordinaryScalaFunction(outputOfLayer1.data)) {
    dynamicallySelectedLayer2(outputOfLayer1).forward.each
  } else {
    dynamicallySelectedLayer3(outputOfLayer1).forward.each
  }
})
```
The above neural network will branch into different subnetworks according to the result of an ordinary Scala function.

With the ability to create dynamic neural networks, regular programmers are able to build complex neural networks from simple code. You write code almost as usual; the only difference is that code based on DeepLearning.scala is differentiable, which enables such code to evolve by modifying its parameters continuously.
Functional programming
DeepLearning.scala 2.0 is based on Monads, which are composable, so a complex layer can be built from primitive operators. Along with the Monad, we provide an Applicative type class to perform multiple calculations in parallel.

For example, the previous example can be rewritten in higher-order function style as follows:
```scala
def myDynamicNeuralNetwork(input: INDArray) = INDArrayLayer {
  layer1(input).forward.flatMap { outputOfLayer1 =>
    if (ordinaryScalaFunction(outputOfLayer1.data)) {
      dynamicallySelectedLayer2(outputOfLayer1).forward
    } else {
      dynamicallySelectedLayer3(outputOfLayer1).forward
    }
  }
}
```
The key construct in DeepLearning.scala 2.0 is the dependent type class `DeepLearning`, which witnesses a differentiable expression. In other words, given the `DeepLearning` type class instance, you can activate the deep learning ability of any type.
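To illustrate the idea, here is a minimal, self-contained sketch of such a dependent type class. The trait `Differentiable`, its type members and the `evaluate` helper are simplified stand-ins for illustration; they are not DeepLearning.scala's actual definitions.

```scala
// A simplified model of a dependent type class that witnesses
// "A is a differentiable expression" (illustrative names only,
// not DeepLearning.scala's real API).
trait Differentiable[A] {
  type Data   // the value produced by the forward pass
  type Delta  // the gradient type accepted by the backward pass
  def forward(a: A): Data
}

object Differentiable {
  // The usual Aux alias, so call sites can constrain the dependent type members.
  type Aux[A, Data0, Delta0] = Differentiable[A] {
    type Data = Data0
    type Delta = Delta0
  }
}

// Example instance: Double treated as a constant expression.
implicit val doubleIsDifferentiable: Differentiable.Aux[Double, Double, Double] =
  new Differentiable[Double] {
    type Data = Double
    type Delta = Double
    def forward(a: Double): Double = a
  }

// Any type with an instance gains the "deep learning ability":
def evaluate[A](a: A)(implicit dl: Differentiable[A]): dl.Data =
  dl.forward(a)

val result: Double = evaluate(42.0)
```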
Object-oriented programming

The code base of DeepLearning.scala 2.0 is organized according to Dependent Object Type calculus (DOT). All features are provided as mixin-able plugins. A plugin is able to change the APIs and behaviors of all DeepLearning.scala types. This approach not only resolves the expression problem, but also gives plugins the ability to virtually depend on other plugins.
For example, when a plugin author creates the Adagrad optimizer plugin, they do not have to explicitly call functions related to the learning rate. However, once a plugin user enables both the `Adagrad` plugin and the `FixedLearningRate` plugin, the computation in `FixedLearningRate` will eventually get called when the `Adagrad` optimization is executed, as the sketch below shows.
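The following self-contained sketch shows the general stackable-trait pattern behind this behaviour. The trait names, the `delta` method and the numbers are illustrative simplifications, not the library's actual plugin API.

```scala
// A simplified model of stackable optimizer plugins (illustrative only).
// Each plugin overrides `delta` and chains to `super`, so mixing several
// plugins composes their behaviour without them referring to each other.
trait Optimizer {
  def learningRate: Double = 1.0
  // The update applied to a weight for a given gradient.
  def delta(gradient: Double): Double = gradient
}

trait FixedLearningRate extends Optimizer {
  // Scales every update by the configured learning rate.
  abstract override def delta(gradient: Double): Double =
    super.delta(gradient) * learningRate
}

trait Adagrad extends Optimizer {
  private var sumOfSquares = 0.0
  def epsilon: Double = 1e-8
  // Adapts the update per parameter; it never mentions the learning rate,
  // yet FixedLearningRate still applies when both traits are mixed in.
  abstract override def delta(gradient: Double): Double = {
    sumOfSquares += gradient * gradient
    super.delta(gradient) / (math.sqrt(sumOfSquares) + epsilon)
  }
}

// Mixing both plugins: Adagrad's delta runs first, then FixedLearningRate's.
val optimizer = new Optimizer with FixedLearningRate with Adagrad {
  override def learningRate: Double = 0.01
}

println(optimizer.delta(2.0)) // scaled by both Adagrad and the learning rate
```

Because each trait only chains to `super`, the user chooses the composition at mix-in time, and neither plugin has to reference the other.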
Plugins for DeepLearning.scala 2.0

| Plugin Name | Plugin Description |
| --- | --- |
| Builtins | All the built-in plugins. |
| FixedLearningRate | Setup fixed learning rate when training INDArray weights. |
| Adagrad | An adaptive gradient algorithm with per-parameter learning rate for INDArray weights. |
| L1Regularization | L1 Regularization. |
| L2Regularization | L2 Regularization. |
| Momentum | The Momentum and NesterovMomentum optimizer for SGD. |
| RMSprop | The RMSprop optimizer for SGD. |
| Adam | The Adam optimizer for SGD. |
| INDArrayDumping | A plugin to dump weight matrices during training. |
| CNN | A standalone CNN implementation. |
| Add your own algorithms, models or any cool features here. | |
Links
- v2.0.0-RC7 (July 25, 2017)
- v2.0.0-RC6 (July 23, 2017)
- v2.0.0-RC5 (July 17, 2017)
- v2.0.0-RC4 (July 14, 2017)