DynaML v1.5.3-beta.2 Release Notes

Release Date: 2018-05-27
  • ➕ Additions

    3D Graphics

    📦 Package dynaml.graphics

    Create 3D plots of surfaces; for example use cases, see the jzydemo.sc and tf_wave_pde.sc scripts.
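
    A minimal sketch of what a surface plot could look like; the plot3d.draw entry point used here is an assumption about the dynaml.graphics API, so treat jzydemo.sc as the authoritative example.

    import io.github.mandar2812.dynaml.graphics._

    // Assumption: plot3d.draw renders a function of two variables as a 3D surface
    // (see jzydemo.sc for the canonical usage).
    val surface = (x: Double, y: Double) => math.sin(x) * math.cos(y)
    val chart = plot3d.draw(surface)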

    TensorFlow Utilities

    📦 Package dynaml.tensorflow

    Training Stopping Criteria

    Create common and simple training stopping criteria, such as the following (a short usage sketch follows the list):

    • Stop after a fixed number of iterations: dtflearn.max_iter_stop(100000)
    • Stop when the absolute change in the loss value falls below a threshold: dtflearn.abs_loss_change_stop(0.0001)
    • Stop when the relative change in the loss value falls below a threshold: dtflearn.rel_loss_change_stop(0.001)
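
    These criteria are ordinary values, so they can be constructed once and reused across training runs; a minimal sketch (the threshold values are arbitrary examples):

    // Name the criteria up front and pass one to the training helpers,
    // e.g. as the stopping-criterion argument of dtflearn.build_tf_model
    // shown in the usage example further below.
    val stopAfterIters = dtflearn.max_iter_stop(100000)
    val stopOnPlateau = dtflearn.abs_loss_change_stop(0.0001)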

    ๐Ÿ— Neural Network Building Blocks

    • Added the helper method dtflearn.build_tf_model() for training TensorFlow models/estimators.

    Usage

    val dataSet = MNISTLoader.load(java.nio.file.Paths.get(tempdir.toString()))

    val trainImages = tf.data.TensorSlicesDataset(dataSet.trainImages)
    val trainLabels = tf.data.TensorSlicesDataset(dataSet.trainLabels)

    val trainData = trainImages.zip(trainLabels)
      .repeat()
      .shuffle(10000)
      .batch(256)
      .prefetch(10)

    // Create the MLP model.
    val input = tf.learn.Input(
      UINT8,
      Shape(-1, dataSet.trainImages.shape(1), dataSet.trainImages.shape(2)))

    val trainInput = tf.learn.Input(UINT8, Shape(-1))

    val architecture =
      tf.learn.Flatten("Input/Flatten") >>
        tf.learn.Cast("Input/Cast", FLOAT32) >>
        tf.learn.Linear("Layer_0/Linear", 128) >>
        tf.learn.ReLU("Layer_0/ReLU", 0.1f) >>
        tf.learn.Linear("Layer_1/Linear", 64) >>
        tf.learn.ReLU("Layer_1/ReLU", 0.1f) >>
        tf.learn.Linear("Layer_2/Linear", 32) >>
        tf.learn.ReLU("Layer_2/ReLU", 0.1f) >>
        tf.learn.Linear("OutputLayer/Linear", 10)

    val trainingInputLayer = tf.learn.Cast("TrainInput/Cast", INT64)

    val loss =
      tf.learn.SparseSoftmaxCrossEntropy("Loss/CrossEntropy") >>
        tf.learn.Mean("Loss/Mean") >>
        tf.learn.ScalarSummary("Loss/Summary", "Loss")

    val optimizer = tf.train.AdaGrad(0.1)

    // Directory in which to save summaries and checkpoints.
    val summariesDir = java.nio.file.Paths.get((tempdir/"mnist_summaries").toString())

    val (model, estimator) = dtflearn.build_tf_model(
      architecture, input, trainInput, trainingInputLayer,
      loss, optimizer, summariesDir, dtflearn.max_iter_stop(1000),
      100, 100, 100)(trainData)
    
    • ๐Ÿ— Build feedforward layers and feedforward layer stacks easier.

    Usage

    // Create a single feedforward layer.
    val layer = dtflearn.feedforward(num_units = 10, useBias = true)(id = 1)

    // Create a stack of feedforward layers.
    val stack = dtflearn.feedforward_stack(
      (i: Int) => dtflearn.Phi("Act_"+i),
      FLOAT64)(
      net_layer_sizes.tail)
    

    📦 Package dynaml.tensorflow.layers

    Dynamical Systems: Continuous Time RNN

    • ➕ Added a CTRNN layer with an inferable time step: DynamicTimeStepCTRNN.
    • ➕ Added a projection layer for CTRNN-based models: FiniteHorizonLinear (see the sketch below).
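
    A rough sketch of how these layers might be composed; the constructor arguments below are illustrative assumptions, not the exact signatures (consult the dynaml.tensorflow.layers sources for those):

    // Illustrative only: parameter names and arities are assumptions.
    val dynamics = DynamicTimeStepCTRNN("CTRNN/Dynamics", horizon = 10)
    val projection = FiniteHorizonLinear("CTRNN/Projection", units = 4)

    // Layers compose with >>, like the other building blocks above.
    val ctrnnModel = dynamics >> projection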

    Activations

    • ➕ Added the cumulative Gaussian distribution function as an activation map: dtflearn.Phi("actName").
    • ➕ Added the generalised logistic function as an activation map: dtflearn.GeneralizedLogistic("actName") (see the sketch below).
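
    The new activations slot into an architecture like any other layer; a minimal sketch (layer names and sizes are arbitrary):

    // Compose the new activation maps with linear layers,
    // in the same style as the MLP example above.
    val net =
      tf.learn.Linear("Layer_0/Linear", 16) >>
        dtflearn.Phi("Act_0") >>
        tf.learn.Linear("OutputLayer/Linear", 1) >>
        dtflearn.GeneralizedLogistic("Act_1")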

    ๐Ÿ› Bug Fixes

    • 🛠 Fixed a bug related to the scalar method of VectorField, innerProdDouble, and other inner product implementations.

    👌 Improvements and Upgrades

    • ⬆️ Bumped the Ammonite version to 1.1.0.