Normalize and scale inputs or activations synchronously across replicas.
Inherits From: `Layer`, `Module`
```python
tf.keras.layers.experimental.SyncBatchNormalization(
    axis=-1,
    momentum=0.99,
    epsilon=0.001,
    center=True,
    scale=True,
    beta_initializer='zeros',
    gamma_initializer='ones',
    moving_mean_initializer='zeros',
    moving_variance_initializer='ones',
    beta_regularizer=None,
    gamma_regularizer=None,
    beta_constraint=None,
    gamma_constraint=None,
    **kwargs
)
```
Applies batch normalization to the activations of the previous layer at each
batch by synchronizing the global batch statistics across all devices that
are training the model. For specific details about batch normalization,
please refer to the `tf.keras.layers.BatchNormalization` layer docs.

When this layer is used with a `tf.distribute` strategy to train models
across devices/workers, an allreduce call aggregates the batch statistics
across all replicas at every training step. Without a `tf.distribute`
strategy, this layer behaves as a regular
`tf.keras.layers.BatchNormalization` layer.
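Conceptually, each replica computes its local batch moments, and the synchronized statistics are the moments of the combined global batch. A minimal sketch of that aggregation, assuming equal per-replica batch sizes and simulating the allreduce by stacking local results (the `global_batch_moments` helper is illustrative, not the layer's internal API):

```python
import tensorflow as tf

def global_batch_moments(replica_batches):
    """Illustrative only: combine per-replica batches into global moments."""
    # Global mean: average of per-replica means (equal batch sizes assumed).
    local_means = [tf.reduce_mean(b, axis=0) for b in replica_batches]
    global_mean = tf.reduce_mean(tf.stack(local_means), axis=0)
    # Global variance via E[x^2] - E[x]^2, accumulated across replicas.
    local_sq_means = [tf.reduce_mean(tf.square(b), axis=0)
                      for b in replica_batches]
    global_sq_mean = tf.reduce_mean(tf.stack(local_sq_means), axis=0)
    global_variance = global_sq_mean - tf.square(global_mean)
    return global_mean, global_variance

# Two "replicas", each holding a local batch of 4 samples with 16 features.
batches = [tf.random.normal([4, 16]), tf.random.normal([4, 16])]
mean, variance = global_batch_moments(batches)
```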
Example usage:

```python
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(16))
    model.add(tf.keras.layers.experimental.SyncBatchNormalization())
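```

To round out the example, here is a hedged sketch of training such a model under the same strategy; the synthetic data, loss, and optimizer choices are assumptions for illustration, not part of the layer's documentation:

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16),
        tf.keras.layers.experimental.SyncBatchNormalization(),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')

# Synthetic data purely for illustration.
x = np.random.normal(size=(256, 8)).astype('float32')
y = np.random.normal(size=(256, 1)).astype('float32')

# Each training step aggregates batch statistics across all replicas.
model.fit(x, y, batch_size=32, epochs=1)
```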
| Args | |
|---|---|
| `axis` | Integer, the axis that should be normalized (typically the features axis). For instance, after a `Conv2D` layer with `data_format="channels_first"`, set `axis=1` in `BatchNormalization`. |
| `momentum` | Momentum for the moving average. |
| `epsilon` | Small float added to variance to avoid dividing by zero. |
| `center` | If True, add offset of `beta` to normalized tensor. If False, `beta` is ignored. |
| `scale` | If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling will be done by the next layer. |
| `beta_initializer` | Initializer for the beta weight. |
| `gamma_initializer` | Initializer for the gamma weight. |
| `moving_mean_initializer` | Initializer for the moving mean. |
| `moving_variance_initializer` | Initializer for the moving variance. |
| `beta_regularizer` | Optional regularizer for the beta weight. |
| `gamma_regularizer` | Optional regularizer for the gamma weight. |
| `beta_constraint` | Optional constraint for the beta weight. |
| `gamma_constraint` | Optional constraint for the gamma weight. |
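As a concrete illustration of the `axis` argument with a channels-first layout (the tensor shape here is an arbitrary assumption for the sketch):

```python
import tensorflow as tf

# channels_first layout: (batch, channels, height, width), channel axis is 1.
inputs = tf.random.normal([8, 16, 32, 32])
sync_bn = tf.keras.layers.experimental.SyncBatchNormalization(axis=1)
outputs = sync_bn(inputs, training=True)
print(outputs.shape)        # (8, 16, 32, 32) -- same shape as the input
print(sync_bn.gamma.shape)  # (16,) -- one scale parameter per channel
```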
| Call arguments | |
|---|---|
| `inputs` | Input tensor (of any rank). |
| `training` | Python boolean indicating whether the layer should behave in training mode or in inference mode. `training=True`: The layer will normalize its inputs using the mean and variance of the current batch of inputs. `training=False`: The layer will normalize its inputs using the mean and variance of its moving statistics, learned during training. |
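A short sketch of the `training` argument in action (shapes are arbitrary):

```python
import tensorflow as tf

layer = tf.keras.layers.experimental.SyncBatchNormalization()
x = tf.random.normal([32, 4])

# training=True: normalize with the current batch's mean/variance and
# update the moving statistics.
y_train = layer(x, training=True)

# training=False: normalize with the moving mean/variance learned so far.
y_infer = layer(x, training=False)
```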
| Input shape |
|---|
| Arbitrary. Use the keyword argument `input_shape` (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. |
| Output shape |
|---|
| Same shape as input. |