write_grads has not been implemented in TF2.x. It is one of the most eagerly awaited feature requests and is still open; see the corresponding GitHub issue tracking this feature request. For now, we simply disable the V2 behavior and use write_grads through the TF1.x compatibility callback (tf.compat.v1), as shown in the following code.
# Load the TensorBoard notebook extension
%load_ext tensorboard
import tensorflow as tf
import datetime
# Clear any logs from previous runs
!rm -rf ./logs/
# Disable V2 behavior
tf.compat.v1.disable_v2_behavior()
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
    return tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
model = create_model()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
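# write_grads=True is only accepted by the TF1.x-compatible callback,
# not by tf.keras.callbacks.TensorBoard in TF2.x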
tensorboard_callback = tf.compat.v1.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1, write_grads=True)
model.fit(x=x_train, y=y_train, epochs=1, validation_data=(x_test, y_test), callbacks=[tensorboard_callback])
%tensorboard --logdir logs/fit
Output:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
Train on 60000 samples, validate on 10000 samples
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training_v1.py:2048: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
32/60000 [..............................] - ETA: 0s - loss: 2.3311 - acc: 0.0312WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0055s vs `on_train_batch_end` time: 0.0235s). Check your callbacks.
60000/60000 [==============================] - 17s 288us/sample - loss: 0.2187 - acc: 0.9349 - val_loss: 0.1012 - val_acc: 0.9690
<tensorflow.python.keras.callbacks.History at 0x7f7ebd1d3d30>
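After training, the gradient histograms appear alongside the weight histograms in TensorBoard's Histograms and Distributions tabs. If you prefer to stay on native TF2.x APIs instead of disabling V2 behavior, a similar effect can be approximated by computing gradients with tf.GradientTape and writing them with tf.summary.histogram in a custom training loop. The following is a minimal sketch of that idea, not the write_grads implementation itself; it must run in a fresh session (without disable_v2_behavior), and the "logs/gradients" directory name and the loop structure are assumptions made for illustration.
# TF2-native sketch: log per-variable gradient histograms manually
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), _ = mnist.load_data()
x_train = x_train / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

# "logs/gradients" is an assumed log directory for this sketch
writer = tf.summary.create_file_writer("logs/gradients")
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)

for step, (x, y) in enumerate(dataset.take(200)):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # Write one histogram per trainable variable every 100 steps,
    # mirroring what write_grads recorded per epoch
    if step % 100 == 0:
        with writer.as_default():
            for var, grad in zip(model.trainable_variables, grads):
                tf.summary.histogram("gradients/" + var.name.replace(":", "_"),
                                     grad, step=step)
Pointing %tensorboard --logdir logs/gradients at these logs then shows the gradient distributions in the same Histograms and Distributions tabs.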