Changelog History
v1.5.3 Changes
November 20, 2018

➕ Additions
Data Set API
The `DataSet` family of classes helps the user create and transform potentially large collections of data instances.

👉 Users can create and perform complex transformations on data sets, using the `DataPipe` API or simple Scala functions.

```scala
import _root_.io.github.mandar2812.dynaml.probability._
import _root_.io.github.mandar2812.dynaml.pipes._
import io.github.mandar2812.dynaml.tensorflow._

val random_numbers = GaussianRV(0.0, 1.0) :* GaussianRV(1.0, 2.0)

//Create a data set.
val dataset1 = dtfdata.dataset(random_numbers.iid(10000).draw)

val filter_gr_zero = DataPipe[(Double, Double), Boolean](
  c => c._1 > 0d && c._2 > 0d)

//Filter elements
val data_gr_zero = dataset1.filter(filter_gr_zero)

val abs_func: ((Double, Double)) => (Double, Double) =
  (c: (Double, Double)) => (math.abs(c._1), math.abs(c._2))

//Map elements
val data_abs = dataset1.map(abs_func)
```
Find out more about the `DataSet` API and its capabilities in the user guide.

Tensorflow Integration
📦 Package `dynaml.tensorflow`
Batch Normalisation
Batch normalisation is used to standardize activations of convolutional layers and to speed up training of deep neural nets.

Usage

```scala
import io.github.mandar2812.dynaml.tensorflow._

val bn = dtflearn.batch_norm("BatchNorm1")
```
Inception v2
The Inception architecture, proposed by Google, is an important building block of convolutional neural network architectures used in vision applications.

In a subsequent paper, the authors introduced optimizations to the Inception architecture, known colloquially as Inception v2.

In Inception v2, larger convolutions (i.e. `3 x 3` and `5 x 5`) are implemented in a factorized manner to reduce the number of parameters to be learned. For example, the `3 x 3` convolution is expressed as a combination of `1 x 3` and `3 x 1` convolutions. Similarly, the `5 x 5` convolution can be expressed as a combination of two `3 x 3` convolutions.

DynaML now offers the Inception cell as a computational layer.
Usage
```scala
import io.github.mandar2812.dynaml.pipes._
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

//Create a ReLU activation, given a string name/identifier.
val relu_act = DataPipe(tf.learn.ReLU(_))

//Learn 10 filters in each branch of the inception cell
val filters = Seq(10, 10, 10, 10)

val inception_cell = dtflearn.inception_unit(
  channels = 3, num_filters = filters, relu_act,
  //Apply batch normalisation after each convolution
  use_batch_norm = true)(layer_index = 1)
```
Dynamical Systems: Continuous Time RNN
Continuous time recurrent neural networks (CTRNN) are an important class of recurrent neural networks. They enable the modelling of non-linear and potentially complex dynamical systems of multiple variables, with feedback.

- ➕ Added CTRNN layer: `dtflearn.ctrnn`
- ➕ Added CTRNN layer with inferable time step: `dtflearn.dctrnn`
- ➕ Added a projection layer for CTRNN based models: `dtflearn.ts_linear` (a usage sketch follows this list)
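A minimal, hypothetical usage sketch follows; the argument lists passed to `dtflearn.ctrnn`, `dtflearn.dctrnn` and `dtflearn.ts_linear` below are assumptions made for illustration, not the documented signatures, so consult the `dtflearn` API docs before use.

```scala
import io.github.mandar2812.dynaml.tensorflow._

//Hypothetical sketch: the arguments (state size, time horizon, time step)
//are assumptions, not the documented signatures.
val ctrnn_layer = dtflearn.ctrnn("CTRNN_1", 5, 10, 0.1)

//Variant which infers the integration time step during training.
val dctrnn_layer = dtflearn.dctrnn("DCTRNN_1", 5, 10)

//Projection layer mapping the CTRNN state to the target dimensions.
val output_mapping = dtflearn.ts_linear("TSLinear_1", 3)
```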
Training Stopping Criteria
Create common and simple training stopping criteria, such as:

- Stop after a fixed number of iterations: `dtflearn.max_iter_stop(100000)`
- Stop after the change in the value of the loss falls below a threshold: `dtflearn.abs_loss_change_stop(0.0001)`
- Stop after the relative change in the value of the loss falls below a threshold: `dtflearn.rel_loss_change_stop(0.001)`
🏗 Neural Network Building Blocks
- Added helper method `dtflearn.build_tf_model()` for training tensorflow models/estimators.
Usage
```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._
import org.platanios.tensorflow.data.image.MNISTLoader
import ammonite.ops._

val tempdir = home/"tmp"

val dataSet = MNISTLoader.load(
  java.nio.file.Paths.get(tempdir.toString())
)

val trainImages = tf.data.TensorSlicesDataset(dataSet.trainImages)
val trainLabels = tf.data.TensorSlicesDataset(dataSet.trainLabels)

val trainData =
  trainImages.zip(trainLabels)
    .repeat()
    .shuffle(10000)
    .batch(256)
    .prefetch(10)

// Create the MLP model.
val input = tf.learn.Input(
  UINT8,
  Shape(
    -1,
    dataSet.trainImages.shape(1),
    dataSet.trainImages.shape(2))
)

val trainInput = tf.learn.Input(UINT8, Shape(-1))

val architecture = tf.learn.Flatten("Input/Flatten") >>
  tf.learn.Cast("Input/Cast", FLOAT32) >>
  tf.learn.Linear("Layer_0/Linear", 128) >>
  tf.learn.ReLU("Layer_0/ReLU", 0.1f) >>
  tf.learn.Linear("Layer_1/Linear", 64) >>
  tf.learn.ReLU("Layer_1/ReLU", 0.1f) >>
  tf.learn.Linear("Layer_2/Linear", 32) >>
  tf.learn.ReLU("Layer_2/ReLU", 0.1f) >>
  tf.learn.Linear("OutputLayer/Linear", 10)

val trainingInputLayer = tf.learn.Cast("TrainInput/Cast", INT64)

val loss = tf.learn.SparseSoftmaxCrossEntropy("Loss/CrossEntropy") >>
  tf.learn.Mean("Loss/Mean") >>
  tf.learn.ScalarSummary("Loss/Summary", "Loss")

val optimizer = tf.train.AdaGrad(0.1)

// Directory in which to save summaries and checkpoints
val summariesDir = java.nio.file.Paths.get(
  (tempdir/"mnist_summaries").toString()
)

val (model, estimator) = dtflearn.build_tf_model(
  architecture, input, trainInput, trainingInputLayer,
  loss, optimizer, summariesDir, dtflearn.max_iter_stop(1000),
  100, 100, 100)(trainData)
```
- 🏗 Build feedforward layers and feedforward layer stacks more easily.
Usage
```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

//Create a single feedforward layer
val layer = dtflearn.feedforward(num_units = 10, useBias = true)(id = 1)

//Create a stack of feedforward layers
val net_layer_sizes = Seq(10, 5, 3)

val stack = dtflearn.feedforward_stack(
  (i: Int) => dtflearn.Phi("Act_"+i),
  FLOAT64)(
  net_layer_sizes)
```
3D Graphics
📦 Package `dynaml.graphics`
Create 3D plots of surfaces. For a use case, see the `jzydemo.sc` and `tf_wave_pde.sc` scripts.
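A minimal sketch of producing such a surface plot is shown below; the `plot3d.draw` entry point and its argument are assumptions based on the scripts referenced above, which remain the canonical examples.

```scala
import io.github.mandar2812.dynaml.graphics._

//Render the surface z = sin(x)*cos(y) over the default domain.
//(The plot3d.draw call is an assumption; see jzydemo.sc for canonical usage.)
val surface_chart = plot3d.draw((x: Double, y: Double) => math.sin(x) * math.cos(y))
```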
Library Organisation
- ✂ Removed the `dynaml-notebook` module.
🐛 Bug Fixes
- 🛠 Fixed bug related to the `scalar` method of `VectorField`, `innerProdDouble` and other inner product implementations.
👌 Improvements and Upgrades
- ⬆️ Bumped up Ammonite version to 1.1.0
- 🐎 `RegressionMetrics` and `RegressionMetricsTF` now also compute Spearman rank correlation as one of the performance metrics (see the sketch below).
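A small illustrative sketch of reading off the new metric; the `RegressionMetrics` constructor usage shown here is an assumption based on DynaML's evaluation API, and the score-label pairs are made-up values.

```scala
import io.github.mandar2812.dynaml.evaluation._

//Illustrative (prediction, target) pairs from some regression model.
val scoresAndLabels = List((1.2, 1.0), (0.5, 0.7), (2.3, 2.1), (0.9, 1.1))

//Spearman rank correlation is now reported along with the other metrics.
//(The constructor arguments shown here are an assumption.)
val metrics = new RegressionMetrics(scoresAndLabels, scoresAndLabels.length)
metrics.print()
```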
v1.5.3-beta.2 Changes
May 27, 2018

➕ Additions
3D Graphics
📦 Package `dynaml.graphics`
Create 3D plots of surfaces. For a use case, see the `jzydemo.sc` and `tf_wave_pde.sc` scripts.
Tensorflow Utilities
📦 Package `dynaml.tensorflow`
Training Stopping Criteria
Create common and simple training stopping criteria, such as:

- Stop after a fixed number of iterations: `dtflearn.max_iter_stop(100000)`
- Stop after the change in the value of the loss falls below a threshold: `dtflearn.abs_loss_change_stop(0.0001)`
- Stop after the relative change in the value of the loss falls below a threshold: `dtflearn.rel_loss_change_stop(0.001)`
🏗 Neural Network Building Blocks
- Added helper method `dtflearn.build_tf_model()` for training tensorflow models/estimators.
Usage
```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._
import org.platanios.tensorflow.data.image.MNISTLoader
import ammonite.ops._

val tempdir = home/"tmp"

val dataSet = MNISTLoader.load(java.nio.file.Paths.get(tempdir.toString()))

val trainImages = tf.data.TensorSlicesDataset(dataSet.trainImages)
val trainLabels = tf.data.TensorSlicesDataset(dataSet.trainLabels)

val trainData =
  trainImages.zip(trainLabels)
    .repeat()
    .shuffle(10000)
    .batch(256)
    .prefetch(10)

// Create the MLP model.
val input = tf.learn.Input(UINT8, Shape(-1, dataSet.trainImages.shape(1), dataSet.trainImages.shape(2)))

val trainInput = tf.learn.Input(UINT8, Shape(-1))

val architecture = tf.learn.Flatten("Input/Flatten") >>
  tf.learn.Cast("Input/Cast", FLOAT32) >>
  tf.learn.Linear("Layer_0/Linear", 128) >>
  tf.learn.ReLU("Layer_0/ReLU", 0.1f) >>
  tf.learn.Linear("Layer_1/Linear", 64) >>
  tf.learn.ReLU("Layer_1/ReLU", 0.1f) >>
  tf.learn.Linear("Layer_2/Linear", 32) >>
  tf.learn.ReLU("Layer_2/ReLU", 0.1f) >>
  tf.learn.Linear("OutputLayer/Linear", 10)

val trainingInputLayer = tf.learn.Cast("TrainInput/Cast", INT64)

val loss = tf.learn.SparseSoftmaxCrossEntropy("Loss/CrossEntropy") >>
  tf.learn.Mean("Loss/Mean") >>
  tf.learn.ScalarSummary("Loss/Summary", "Loss")

val optimizer = tf.train.AdaGrad(0.1)

// Directory in which to save summaries and checkpoints
val summariesDir = java.nio.file.Paths.get((tempdir/"mnist_summaries").toString())

val (model, estimator) = dtflearn.build_tf_model(
  architecture, input, trainInput, trainingInputLayer,
  loss, optimizer, summariesDir, dtflearn.max_iter_stop(1000),
  100, 100, 100)(trainData)
```
- 🏗 Build feedforward layers and feedforward layer stacks more easily.
Usage
```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

//Create a single feedforward layer
val layer = dtflearn.feedforward(num_units = 10, useBias = true)(id = 1)

//Create a stack of feedforward layers
val net_layer_sizes = Seq(10, 5, 3)

val stack = dtflearn.feedforward_stack(
  (i: Int) => dtflearn.Phi("Act_"+i),
  FLOAT64)(
  net_layer_sizes.tail)
```
📦 Package `dynaml.tensorflow.layers`
Dynamical Systems: Continuous Time RNN
- ➕ Added CTRNN layer with inferable time step: `DynamicTimeStepCTRNN`.
- ➕ Added a projection layer for CTRNN based models: `FiniteHorizonLinear`.
Activations
- ➕ Added the cumulative Gaussian distribution function as an activation map: `dtflearn.Phi("actName")`.
- ➕ Added the generalised logistic function as an activation map: `dtflearn.GeneralizedLogistic("actName")` (see the usage sketch below).
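Both activations are instantiated with a string identifier, like the other `dtflearn` building blocks; a minimal sketch:

```scala
import io.github.mandar2812.dynaml.tensorflow._

//Cumulative Gaussian (probit-style) activation
val phi_act = dtflearn.Phi("Act_Phi")

//Generalised logistic activation
val gen_logistic_act = dtflearn.GeneralizedLogistic("Act_GenLogistic")
```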
🐛 Bug Fixes
- 🛠 Fixed bug related to the `scalar` method of `VectorField`, `innerProdDouble` and other inner product implementations.
👌 Improvements and Upgrades
- ⬆️ Bumped up Ammonite version to 1.1.0
v1.5.3-beta.1 Changes
March 09, 2018

➕ Additions
Tensorflow Utilities
📦 Package `dynaml.tensorflow`
The `dtfpipe` object houses data pipelines and workflows around tensorflow primitives. `dtfpipe.gaussian_standardization` performs Gaussian scaling of the data and returns `GaussianScalerTF` objects, one each for the input and output data. `dtfpipe.minmax_standardization` performs `[0, 1]` scaling of the features and outputs, returning `MinMaxScalerTF` objects (a sketch follows the Usage example below).
Usage
```scala
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

val (inputs, outputs): (Tensor, Tensor) = ...

val (scaledData, (features_scaler, targets_scaler)) =
  dtfpipe.gaussian_standardization(inputs, outputs)
```
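By analogy, `dtfpipe.minmax_standardization` can be called in the same way on the `(inputs, outputs)` pair above; this is a sketch assuming its return structure mirrors that of `gaussian_standardization`, as described earlier.

```scala
//[0, 1] scaling of features and outputs; the return structure is assumed
//to mirror that of gaussian_standardization above.
val (scaled_data_mm, (features_min_max, targets_min_max)) =
  dtfpipe.minmax_standardization(inputs, outputs)
```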
📦 Package `dynaml.tensorflow.utils`
- ➕ Added `GaussianScalerTF` and `MinMaxScalerTF`, to enable scaling and rescaling of tensorflow data sets (see the sketch below).
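Continuing from the Usage example above, a fitted scaler can be applied to a tensor and then inverted; the `.i` inverse accessor shown here is an assumption based on DynaML's reversible scaler convention.

```scala
//Apply a fitted scaler to a tensor and recover the original values.
//(The .i inverse accessor is an assumption.)
val scaled_features    = features_scaler(inputs)
val recovered_features = features_scaler.i(scaled_features)
```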
📦 Package `dynaml.tensorflow.layers`
Dynamical Systems: Continuous Time RNN
The continuous time recurrent neural network (CTRNN), when discretised over a finite time horizon, is represented by the computational layer `FiniteTimeCTRNN`.

📦 Package `dynaml.tensorflow.learn`
- ➕ Added `MVTimeSeriesLoss`, which helps quantify the average L2 loss over a finite time slice of a multivariate time series.
v1.5.2 Changes
March 05, 2018

➕ Additions
Tensorflow Integration
- Tensorflow (beta) support now live, thanks to the tensorflow_scala project! Try it out in:
📦 Package `dynaml.tensorflow`
📦 The `dtf` package object houses utility functions related to tensorflow primitives. It currently supports creation of tensors from arrays.

```scala
import io.github.mandar2812.dynaml.pipes._
import io.github.mandar2812.dynaml.probability._
import io.github.mandar2812.dynaml.tensorflow._
import org.platanios.tensorflow.api._

//Create a FLOAT32 Tensor of shape (2, 2), i.e. a square matrix
val mat = dtf.tensor_f32(2, 2)(1d, 2d, 3d, 4d)

//Create a random 2 * 3 matrix with independent standard normal entries
val rand_mat = dtf.random(FLOAT32, 2, 3)(
  GaussianRV(0d, 1d) > DataPipe((x: Double) => x.toFloat)
)

//Multiply matrices
val prod = mat.matmul(rand_mat)
println(prod.summarize())

val another_rand_mat = dtf.random(FLOAT32, 2, 3)(
  GaussianRV(0d, 1d) > DataPipe((x: Double) => x.toFloat)
)

//Stack tensors vertically, i.e. row wise
val vert_tensor = dtf.stack(Seq(rand_mat, another_rand_mat), axis = 0)
//Stack tensors horizontally, i.e. column wise
val horz_tensor = dtf.stack(Seq(rand_mat, another_rand_mat), axis = 1)
```
🏗 The `dtflearn` package object deals with basic neural network building blocks which are often needed while constructing prediction architectures.

```scala
//Create a simple neural architecture with one convolutional layer
//followed by a max pool and feedforward layer
val net = tf.learn.Cast("Input/Cast", FLOAT32) >>
  dtflearn.conv2d_pyramid(2, 3)(4, 2)(0.1f, true, 0.6F) >>
  tf.learn.MaxPool("Layer_3/MaxPool", Seq(1, 2, 2, 1), 1, 1, SamePadding) >>
  tf.learn.Flatten("Layer_3/Flatten") >>
  dtflearn.feedforward(256)(id = 4) >>
  tf.learn.ReLU("Layer_4/ReLU", 0.1f) >>
  dtflearn.feedforward(10)(id = 5)
```
Library Organisation
- ➕ Added `dynaml-repl` and `dynaml-notebook` modules to the repository.
DynaML Server
DynaML ssh server now available (only in Local mode).

```
$ ./target/universal/stage/bin/dynaml --server
```

To log in to the server, open a separate shell and type the following (when prompted for a password, just press ENTER):

```
$ ssh repl@localhost -p22222
```
Basis Generators
- Legendre polynomial basis generators
🛠 Bugfixes
- Acceptance rule of `HyperParameterMCMC` and related classes.
🔄 Changes
- 🖨 Increased pretty printing to the screen, instead of logging.
Cleanup
📦 Package `dynaml.models.svm`
- 📦 Removal of deprecated model classes from the `svm` package.
v1.5.2-beta.4 Changes
February 16, 2018

➕ Additions
- ➕ Added `MetricsTF`, a top level class for calculating metrics from tensorflow objects.
- ➕ Added the `dtflearn` object for housing common neural net building blocks.
v1.5.2-beta.3 Changes
February 07, 2018

🐛 Bug Fix Beta Release
Module `dynaml-repl`
- 🛠 Fixed `Router` code in `DynaMLRepl` so that script arguments are passed correctly.
v1.5.2-beta.2 Changes
January 26, 2018

➕ Additions
- ➕ Added `dynaml-repl` and `dynaml-notebook` modules to the repository.
📦 Package `dynaml.tensorflow`
- ➕ Added the `dtf` package object for utility functions related to tensorflow primitives. It currently supports creation of tensors from arrays.
Cleanup
📦 Package `dynaml.models.svm`
- 📦 Removal of deprecated model classes from the `svm` package.
v1.5.2-beta.1 Changes
November 10, 2017

➕ Additions
- Tensorflow (beta) support now live, thanks to the tensorflow_scala project!
- Legendre polynomial basis generators
DynaML ssh server now available.

```
$ ./target/universal/stage/bin/dynaml --server
```

To log in to the server, open a separate shell and type:

```
$ ssh repl@localhost -p22222
```
🛠 Bugfixes
- Acceptance rule of `HyperParameterMCMC` and related classes.
🔄 Changes
- 🖨 Increased pretty printing to the screen, instead of logging.
v1.5.1 Changes
September 20, 2017

➕ Additions
📦 Package `dynaml.probability.distributions`
- ➕ Added Kumaraswamy distribution, an alternative to the Beta distribution.
- ➕ Added Erlang distribution, a special case of the Gamma distribution.
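A hypothetical instantiation sketch; the constructor forms and parameter values below are assumptions made for illustration, so refer to the `dynaml.probability.distributions` API for the exact signatures.

```scala
import io.github.mandar2812.dynaml.probability.distributions._

//Hypothetical: the parameters below are assumptions made for illustration.
val kumaraswamy_dist = Kumaraswamy(2.0, 3.0) //shape parameters a, b
val erlang_dist      = Erlang(4, 1.5)        //shape and rate
```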
📦 Package `dynaml.analysis`
➕ Added Radial Basis Function generators.
- Gaussian
- Inverse Multi-Quadric
- Multi-Quadric
- Matern Half-Integer
➕ Added an inner product space implementation for `Tuple2`.
🐛 Bug Fixes
📦 Package `dynaml.kernels`
- 🛠 Fixed bug concerning hyper-parameter blocking in `CompositeCovariance` and its children.
📦 Package `dynaml.probability.distributions`
- 🛠 Fixed a calculation error in the normalisation constant of the multivariate T and Gaussian families.
v1.5 Changes
August 15, 2017

➕ Additions
📦 Package `dynaml.algebra`
➕ Added support for dual numbers.

```scala
import io.github.mandar2812.dynaml.algebra._

//Zero Dual
val zero = DualNumber.zero[Double]

val dnum = DualNumber(1.5, -1.0)
val dnum1 = DualNumber(-1.5, 1.0)

//Algebraic operations: multiplication and addition/subtraction
dnum1*dnum
dnum1 - dnum
dnum*zero
```
📦 Package `dynaml.probability`
- ➕ Added support for mixture distributions and mixture random variables: `MixtureRV` and `ContinuousDistrMixture` for random variables, and `MixtureDistribution` for constructing mixtures of breeze distributions.
📦 Package `dynaml.optimization`
- ➕ Added the `ModelTuner[T, T1]` trait as a super trait to `GlobalOptimizer[T]`.
- `GridSearch` and `CoupledSimulatedAnnealing` now extend `AbstractGridSearch` and `AbstractCSA` respectively.
- ➕ Added `ProbGPMixtureMachine`: constructs a mixture model after a CSA or grid search routine, by calculating the mixture probabilities of the members of the final hyper-parameter ensemble.
Stochastic Mixture Models
📦 Package `dynaml.models`
- ➕ Added `StochasticProcessMixtureModel` as the top level class for stochastic mixture models.
- ➕ Added `GaussianProcessMixture`: implementation of gaussian process mixture models.
- ➕ Added `MVTMixture`: implementation of mixture models over multioutput matrix T processes.
Kullback-Leibler Divergence
📦 Package `dynaml.probability`
- ➕ Added the method `KL()` to the `probability` package object, to calculate the Kullback-Leibler divergence between two continuous random variables backed by breeze distributions (see the sketch below).
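A minimal sketch of computing the divergence between two Gaussian random variables; the argument order of `KL()` is an assumption.

```scala
import io.github.mandar2812.dynaml.probability._

val p = GaussianRV(0.0, 1.0)
val q = GaussianRV(0.5, 1.5)

//Kullback-Leibler divergence between p and q; argument order is an assumption.
val kl_pq = KL(p, q)
```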
Adaptive Metropolis Algorithms

- `AdaptiveHyperParameterMCMC`: adapts the exploration covariance with each sample.
- `HyperParameterSCAM`: adapts the exploration covariance for each hyper-parameter independently.

Splines and B-Spline Generators
📦 Package `dynaml.analysis`
- B-Spline generators
- 📄 Bernstein and Cardinal b-spline generators.
- Arbitrary spline functions can be created using the `SplineGenerator` class.
Cubic Spline Interpolation Kernels
📦 Package `dynaml.kernels`
- 📄 Added the cubic spline interpolation kernel `CubicSplineKernel` and its ARD analogue `CubicSplineARDKernel`.
Gaussian Process Models for Linear Partial Differential Equations
Based on a legacy ICML 2003 paper by Graepel, DynaML now ships with the capability of performing PDE forward and inverse inference using the Gaussian Process API.
📦 Package `dynaml.models.gp`
- `GPOperatorModel`: models a quantity of interest which is governed by a linear PDE in space and time.
📦 Package `dynaml.kernels`
- `LinearPDEKernel`: the core kernel primitive accepted by the `GPOperatorModel` class.
- `GenExpSpaceTimeKernel`: a kernel of the exponential family which can serve as a handy base kernel for the `LinearPDEKernel` class.

Basis Function Gaussian Processes
👍 DynaML now supports GP models with explicitly incorporated basis functions as linear mean/trend functions.

📦 Package `dynaml.models.gp`

- `GPBasisFuncRegressionModel` can be used to create GP models with trends incorporated as a linear combination of basis functions.
🌲 Log Gaussian Processes
- `LogGaussianProcessModel` represents a stochastic process whose natural logarithm follows a gaussian process.
👌 Improvements
📦 Package `dynaml.probability`
- 🔄 Changes to `RandomVarWithDistr`: made the type parameter `Dist` covariant.
- Reform of the `IIDRandomVar` hierarchy.
📦 Package `dynaml.probability.mcmc`
- 🐛 Bug-fixes to the `HyperParameterMCMC` class.
General
- DynaML now ships with Ammonite `v1.0.0`.
🛠 Fixes
📦 Package `dynaml.optimization`
- Corrected energy calculation in `CoupledSimulatedAnnealing`; added 🌲 log likelihood due to the hyper-prior.