DynaML v1.5.3 Release Notes

Release Date: 2018-11-20
  • ➕ Additions

    Data Set API

    The DataSet family of classes helps the user create and transform potentially large collections of data instances.
    👉 Users can create data sets and perform complex transformations on them, using the DataPipe API or simple Scala functions.

    import _root_.io.github.mandar2812.dynaml.probability._
    import _root_.io.github.mandar2812.dynaml.pipes._
    import io.github.mandar2812.dynaml.tensorflow._

    val random_numbers = GaussianRV(0.0, 1.0) :* GaussianRV(1.0, 2.0)

    //Create a data set.
    val dataset1 = dtfdata.dataset(random_numbers.iid(10000).draw)

    val filter_gr_zero = DataPipe[(Double, Double), Boolean](
      c => c._1 > 0d && c._2 > 0d)

    //Filter elements: keep only pairs where both components are positive.
    val data_gr_zero = dataset1.filter(filter_gr_zero)

    val abs_func: ((Double, Double)) => (Double, Double) =
      (c: (Double, Double)) => (math.abs(c._1), math.abs(c._2))

    //Map elements: take the absolute value of each component.
    val data_abs = dataset1.map(abs_func)
    

    Find out more about the DataSet API and its capabilities in the user guide.

    Tensorflow Integration

    📦 Package dynaml.tensorflow

    Batch Normalisation

    Batch normalisation is used to standardize activations of convolutional layers and
    to speed up training of deep neural nets.

    Usage

    import io.github.mandar2812.dynaml.tensorflow._

    val bn = dtflearn.batch_norm("BatchNorm1")
    

    Inception v2

    The Inception architecture, proposed by Google, is an important
    building block of convolutional neural network architectures used in vision applications.

    📄 inception

    In a subsequent paper, the authors introduced optimizations in the Inception
    architecture, known colloquially as Inception v2.

    In Inception v2, larger convolutions (i.e. 3 x 3 and 5 x 5) are implemented in a factorized manner
    to reduce the number of parameters to be learned. For example, the 3 x 3 convolution is expressed as a
    combination of 1 x 3 and 3 x 1 convolutions.

    📄 inception

    Similarly, the 5 x 5 convolution can be expressed as a combination of two 3 x 3 convolutions.

    📄 inception2
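    As a back-of-the-envelope illustration of the savings, the per-channel weight counts (ignoring biases)
    can be compared directly; the snippet below is plain arithmetic and does not use any DynaML API.

    val full_3x3     = 3 * 3          //9 weights
    val factored_3x3 = 1 * 3 + 3 * 1  //6 weights for a 1 x 3 followed by a 3 x 1
    val full_5x5     = 5 * 5          //25 weights
    val stacked_3x3  = 2 * (3 * 3)    //18 weights for two stacked 3 x 3 convolutions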

    DynaML now offers the Inception cell as a computational layer.

    Usage

    import io.github.mandar2812.dynaml.pipes._
    import io.github.mandar2812.dynaml.tensorflow._
    import org.platanios.tensorflow.api._

    //Create a ReLU activation, given a string name/identifier.
    val relu_act = DataPipe(tf.learn.ReLU(_))

    //Learn 10 filters in each branch of the inception cell
    val filters = Seq(10, 10, 10, 10)

    val inception_cell = dtflearn.inception_unit(
      3,        //number of input channels
      filters,  //number of filters learned in each branch
      relu_act,
      //Apply batch normalisation after each convolution
      use_batch_norm = true)(layer_index = 1)
    

    Dynamical Systems: Continuous Time RNN

    Continuous time recurrent neural networks (CTRNN) are an important class of recurrent neural networks. They enable
    the modelling of non-linear and potentially complex dynamical systems of multiple variables, with feedback.

    • ➕ Added CTRNN layer: dtflearn.ctrnn
    • ➕ Added CTRNN layer with inferable time step: dtflearn.dctrnn
    • ➕ Added a projection layer for CTRNN based models: dtflearn.ts_linear (a usage sketch follows this list)
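    A minimal usage sketch is given below. The argument names shown (a name, the number of state
    variables, the simulation horizon, the integration time step and a layer index) are illustrative
    assumptions rather than the exact dtflearn signatures, so consult the API documentation before use.

    import io.github.mandar2812.dynaml.tensorflow._

    //Hypothetical sketch: argument names and positions are assumptions, not the exact API.
    //CTRNN layer with a fixed integration time step.
    val ctrnn_layer  = dtflearn.ctrnn("CTRNN_1", units = 4, horizon = 10, timestep = 0.1)

    //CTRNN layer that also infers the integration time step.
    val dctrnn_layer = dtflearn.dctrnn("DCTRNN_1", units = 4, horizon = 10)

    //Projection of the finite-horizon CTRNN state onto the target time series.
    val proj_layer   = dtflearn.ts_linear("Proj_1", units = 4)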

    Training Stopping Criteria

    Create common training stop criteria, such as:

    • Stop after a fixed number of iterations: dtflearn.max_iter_stop(100000)
    • Stop when the absolute change in the loss falls below a threshold: dtflearn.abs_loss_change_stop(0.0001)
    • Stop when the relative change in the loss falls below a threshold: dtflearn.rel_loss_change_stop(0.001)

    🏗 Neural Network Building Blocks

    • Added helper method dtflearn.build_tf_model() for training TensorFlow models/estimators.

    Usage

    import io.github.mandar2812.dynaml.tensorflow._
    import org.platanios.tensorflow.api._
    import org.platanios.tensorflow.data.image.MNISTLoader
    import ammonite.ops._

    val tempdir = home/"tmp"

    val dataSet = MNISTLoader.load(
      java.nio.file.Paths.get(tempdir.toString())
    )

    val trainImages = tf.data.TensorSlicesDataset(dataSet.trainImages)
    val trainLabels = tf.data.TensorSlicesDataset(dataSet.trainLabels)

    val trainData =
      trainImages.zip(trainLabels)
        .repeat()
        .shuffle(10000)
        .batch(256)
        .prefetch(10)

    // Create the MLP model.
    val input = tf.learn.Input(
      UINT8,
      Shape(-1, dataSet.trainImages.shape(1), dataSet.trainImages.shape(2))
    )

    val trainInput = tf.learn.Input(UINT8, Shape(-1))

    val architecture =
      tf.learn.Flatten("Input/Flatten") >>
      tf.learn.Cast("Input/Cast", FLOAT32) >>
      tf.learn.Linear("Layer_0/Linear", 128) >>
      tf.learn.ReLU("Layer_0/ReLU", 0.1f) >>
      tf.learn.Linear("Layer_1/Linear", 64) >>
      tf.learn.ReLU("Layer_1/ReLU", 0.1f) >>
      tf.learn.Linear("Layer_2/Linear", 32) >>
      tf.learn.ReLU("Layer_2/ReLU", 0.1f) >>
      tf.learn.Linear("OutputLayer/Linear", 10)

    val trainingInputLayer = tf.learn.Cast("TrainInput/Cast", INT64)

    val loss =
      tf.learn.SparseSoftmaxCrossEntropy("Loss/CrossEntropy") >>
      tf.learn.Mean("Loss/Mean") >>
      tf.learn.ScalarSummary("Loss/Summary", "Loss")

    val optimizer = tf.train.AdaGrad(0.1)

    // Directory in which to save summaries and checkpoints
    val summariesDir = java.nio.file.Paths.get(
      (tempdir/"mnist_summaries").toString()
    )

    val (model, estimator) = dtflearn.build_tf_model(
      architecture, input, trainInput, trainingInputLayer,
      loss, optimizer, summariesDir,
      dtflearn.max_iter_stop(1000),
      100, 100, 100)(trainData)
    
    • 🏗 Build feedforward layers and feedforward layer stacks more easily.

    Usage

    import io.github.mandar2812.dynaml.tensorflow._
    import org.platanios.tensorflow.api._

    //Create a single feedforward layer
    val layer = dtflearn.feedforward(num_units = 10, useBias = true)(id = 1)

    //Create a stack of feedforward layers
    val net_layer_sizes = Seq(10, 5, 3)

    val stack = dtflearn.feedforward_stack(
      (i: Int) => dtflearn.Phi("Act_"+i), FLOAT64)(
      net_layer_sizes)
    

    3D Graphics

    📦 Package dynaml.graphics

    Create 3D plots of surfaces; for a use case, see the jzydemo.sc and tf_wave_pde.sc scripts.
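    Below is a minimal sketch of creating such a plot. It assumes a plot3d helper in dynaml.graphics
    that renders a function of two variables; treat the exact call as an assumption and refer to
    jzydemo.sc for the canonical example.

    import io.github.mandar2812.dynaml.graphics._

    //Hypothetical sketch: assumes plot3d.draw renders a (Double, Double) => Double surface.
    val surface = (x: Double, y: Double) => math.sin(x) * math.cos(y)

    val chart = plot3d.draw(surface)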

    Library Organisation

    • ✂ Removed the dynaml-notebook module.

    🐛 Bug Fixes

    • 🛠 Fixed a bug related to the scalar method of VectorField, innerProdDouble and other inner product implementations.

    👌 Improvements and Upgrades

    • ⬆️ Bumped up Ammonite version to 1.1.0
    • 🐎 RegressionMetrics and RegressionMetricsTF now also compute the Spearman rank correlation as
      one of the performance metrics (a sketch of the computation is shown below).
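    For reference, the Spearman rank correlation is the Pearson correlation of the rank-transformed
    predictions and targets. The snippet below is a plain Scala sketch of that definition (without
    tie handling); it is not the library's implementation.

    //Plain-Scala sketch of Spearman's rho, ignoring ties; not DynaML's implementation.
    def ranks(xs: Seq[Double]): Seq[Double] =
      xs.zipWithIndex.sortBy(_._1).map(_._2).zipWithIndex
        .sortBy(_._1).map(_._2.toDouble + 1d)

    def pearson(a: Seq[Double], b: Seq[Double]): Double = {
      val (ma, mb) = (a.sum / a.length, b.sum / b.length)
      val cov  = a.zip(b).map { case (x, y) => (x - ma) * (y - mb) }.sum
      val sdev = math.sqrt(
        a.map(x => math.pow(x - ma, 2)).sum * b.map(y => math.pow(y - mb, 2)).sum)
      cov / sdev
    }

    def spearman(preds: Seq[Double], targets: Seq[Double]): Double =
      pearson(ranks(preds), ranks(targets))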

Previous changes from v1.5.3-beta.2

  • ➕ Additions

    3D Graphics

    📦 Package dynaml.graphics

    Create 3D plots of surfaces; for a use case, see the jzydemo.sc and tf_wave_pde.sc scripts.

    Tensorflow Utilities

    📦 Package dynaml.tensorflow

    Training Stopping Criteria

    Create common training stop criteria, such as:

    • Stop after a fixed number of iterations: dtflearn.max_iter_stop(100000)
    • Stop when the absolute change in the loss falls below a threshold: dtflearn.abs_loss_change_stop(0.0001)
    • Stop when the relative change in the loss falls below a threshold: dtflearn.rel_loss_change_stop(0.001)

    🏗 Neural Network Building Blocks

    • Added helper method dtflearn.build_tf_model() for training TensorFlow models/estimators.

    Usage

    import io.github.mandar2812.dynaml.tensorflow._
    import org.platanios.tensorflow.api._
    import org.platanios.tensorflow.data.image.MNISTLoader
    import ammonite.ops._

    val tempdir = home/"tmp"

    val dataSet = MNISTLoader.load(java.nio.file.Paths.get(tempdir.toString()))

    val trainImages = tf.data.TensorSlicesDataset(dataSet.trainImages)
    val trainLabels = tf.data.TensorSlicesDataset(dataSet.trainLabels)

    val trainData =
      trainImages.zip(trainLabels)
        .repeat()
        .shuffle(10000)
        .batch(256)
        .prefetch(10)

    // Create the MLP model.
    val input = tf.learn.Input(
      UINT8,
      Shape(-1, dataSet.trainImages.shape(1), dataSet.trainImages.shape(2)))

    val trainInput = tf.learn.Input(UINT8, Shape(-1))

    val architecture =
      tf.learn.Flatten("Input/Flatten") >>
      tf.learn.Cast("Input/Cast", FLOAT32) >>
      tf.learn.Linear("Layer_0/Linear", 128) >>
      tf.learn.ReLU("Layer_0/ReLU", 0.1f) >>
      tf.learn.Linear("Layer_1/Linear", 64) >>
      tf.learn.ReLU("Layer_1/ReLU", 0.1f) >>
      tf.learn.Linear("Layer_2/Linear", 32) >>
      tf.learn.ReLU("Layer_2/ReLU", 0.1f) >>
      tf.learn.Linear("OutputLayer/Linear", 10)

    val trainingInputLayer = tf.learn.Cast("TrainInput/Cast", INT64)

    val loss =
      tf.learn.SparseSoftmaxCrossEntropy("Loss/CrossEntropy") >>
      tf.learn.Mean("Loss/Mean") >>
      tf.learn.ScalarSummary("Loss/Summary", "Loss")

    val optimizer = tf.train.AdaGrad(0.1)

    // Directory in which to save summaries and checkpoints
    val summariesDir = java.nio.file.Paths.get((tempdir/"mnist_summaries").toString())

    val (model, estimator) = dtflearn.build_tf_model(
      architecture, input, trainInput, trainingInputLayer,
      loss, optimizer, summariesDir,
      dtflearn.max_iter_stop(1000),
      100, 100, 100)(trainData)
    
    • 🏗 Build feedforward layers and feedforward layer stacks more easily.

    Usage

    import io.github.mandar2812.dynaml.tensorflow._
    import org.platanios.tensorflow.api._

    //Create a single feedforward layer
    val layer = dtflearn.feedforward(num_units = 10, useBias = true)(id = 1)

    //Create a stack of feedforward layers
    val net_layer_sizes = Seq(10, 5, 3)

    val stack = dtflearn.feedforward_stack(
      (i: Int) => dtflearn.Phi("Act_"+i), FLOAT64)(
      net_layer_sizes.tail)
    

    📦 Package dynaml.tensorflow.layers

    Dynamical Systems: Continuous Time RNN

    • ➕ Added CTRNN layer with inferable time step: DynamicTimeStepCTRNN.
    • ➕ Added a projection layer for CTRNN based models: FiniteHorizonLinear.

    Activations

    • ➕ Added the cumulative Gaussian distribution function as an activation map: dtflearn.Phi("actName") (see the sketch after this list).
    • ➕ Added the generalised logistic function as an activation map: dtflearn.GeneralizedLogistic("actName").
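    For reference, the cumulative Gaussian is Phi(x) = 0.5 * (1 + erf(x / sqrt(2))). The snippet
    below is only a scalar sketch of that formula using Breeze's erf, not the TensorFlow-based layer itself.

    import breeze.numerics.erf

    //Scalar sketch of the cumulative Gaussian; dtflearn.Phi applies this element-wise to tensors.
    def phi(x: Double): Double = 0.5 * (1.0 + erf(x / math.sqrt(2.0)))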

    🐛 Bug Fixes

    • 🛠 Fixed a bug related to the scalar method of VectorField, innerProdDouble and other inner product implementations.

    👌 Improvements and Upgrades

    • ⬆️ Bumped up Ammonite version to 1.1.0