Copy `model` along with its weights to the TPU. (deprecated)
tf.contrib.tpu.keras_to_tpu_model(
    model, strategy=None
)
Returns a TPU model.
Usage:
a = Input(shape=(32,))
b = Dense(32)(a)
model = Model(inputs=a, outputs=b)

# If `num_cores_per_host` is greater than one, batch parallelism will be used
# to run on multiple TPU cores.
strategy = tf.contrib.tpu.TPUDistributionStrategy(tpu_cluster_resolver)
model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)
model.compile(
    optimizer=tf.compat.v1.train.GradientDescentOptimizer(learning_rate=1.0),
    ...)
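The snippet above omits the cluster resolver, the loss, and the training step. Below is a more complete sketch, assuming a TF 1.x runtime with tf.contrib available, a TPU worker advertised through the TPU_NAME environment variable (an assumption for illustration), and synthetic data in place of a real input pipeline:

import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Assumed: the TPU endpoint is available via TPU_NAME (e.g. on Cloud TPU).
tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
    tpu=os.environ.get('TPU_NAME'))

a = Input(shape=(32,))
b = Dense(32)(a)
model = Model(inputs=a, outputs=b)

strategy = tf.contrib.tpu.TPUDistributionStrategy(tpu_cluster_resolver)
tpu_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)
tpu_model.compile(
    optimizer=tf.compat.v1.train.GradientDescentOptimizer(learning_rate=1.0),
    loss='mse')

# Synthetic data purely for illustration; choose a global batch size that is
# divisible by the number of TPU cores so each core receives an equal shard.
x = np.random.rand(1024, 32).astype(np.float32)
y = np.random.rand(1024, 32).astype(np.float32)
tpu_model.fit(x, y, batch_size=128, epochs=1)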
Args:
  model: A tf.keras.Model instance.
  strategy: TPUDistributionStrategy. The strategy to use for replicating the
    model across multiple TPU cores.
Returns:
  A new KerasTPUModel instance.
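Because this function is deprecated together with the rest of tf.contrib, current TensorFlow releases replace the conversion step with tf.distribute.TPUStrategy: the model is built and compiled inside the strategy scope instead of being converted afterwards. A minimal sketch, assuming TensorFlow 2.x and a TPU discoverable by the default cluster resolver:

import tensorflow as tf

# Assumed: running where a TPU is auto-discoverable (e.g. a Cloud TPU VM or a
# Colab TPU runtime); otherwise pass the worker address to TPUClusterResolver.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Variables created under the scope are replicated across TPU cores, so no
# separate model-conversion call is needed.
with strategy.scope():
    inputs = tf.keras.Input(shape=(32,))
    outputs = tf.keras.layers.Dense(32)(inputs)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='sgd', loss='mse')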