```python
tf.keras.layers.BatchNormalization(
    axis=-1,
    momentum=0.99,
    epsilon=0.001,
    center=True,
    scale=True,
    beta_initializer="zeros",
    gamma_initializer="ones",
    moving_mean_initializer="zeros",
    moving_variance_initializer="ones",
    beta_regularizer=None,
    gamma_regularizer=None,
    beta_constraint=None,
    gamma_constraint=None,
    renorm=False,
    renorm_clipping=None,
    renorm_momentum=0.99,
    fused=None,
    trainable=True,
    virtual_batch_size=None,
    adjustment=None,
    name=None,
    **kwargs
)
```
Layer that normalizes its inputs.
Batch normalization applies a transformation that maintains the mean output close to 0 and the output standard deviation close to 1.
Importantly, batch normalization works differently during training and during inference.
During training (i.e. when using
fit() or when calling the layer/model
with the argument
training=True), the layer normalizes its output using
the mean and standard deviation of the current batch of inputs. That is to
say, for each channel being normalized, the layer returns
(batch - mean(batch)) / sqrt(var(batch) + epsilon) * gamma + beta, where:
- epsilon is a small constant (configurable as part of the constructor arguments).
- gamma is a learned scaling factor (initialized as 1), which can be disabled by passing scale=False to the constructor.
- beta is a learned offset factor (initialized as 0), which can be disabled by passing center=False to the constructor.
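As a minimal sketch of the training-mode formula above (the names layer and x are illustrative, not part of the API), a freshly built layer should match a manual computation, since gamma starts at 1 and beta at 0:

```python
import tensorflow as tf

layer = tf.keras.layers.BatchNormalization(axis=-1, epsilon=1e-3)
x = tf.random.normal((4, 3))

# Training mode: normalize with the current batch's statistics.
y = layer(x, training=True)

# Manual computation of the same formula over the batch axis.
mean = tf.reduce_mean(x, axis=0)
var = tf.math.reduce_variance(x, axis=0)
manual = (x - mean) / tf.sqrt(var + 1e-3) * layer.gamma + layer.beta

tf.debugging.assert_near(y, manual, atol=1e-4)
```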
During inference (i.e. when using
predict() or when
calling the layer/model with the argument
training=False, which is the
default), the layer normalizes its output using a moving average of the
mean and standard deviation of the batches it has seen during training. That
is to say, it returns
(batch - self.moving_mean) / sqrt(self.moving_var + epsilon) * gamma + beta.
self.moving_mean and self.moving_var are non-trainable variables that
are updated each time the layer is called in training mode, as such:
moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum)
moving_var = moving_var * momentum + var(batch) * (1 - momentum)
As such, the layer will only normalize its inputs during inference after having been trained on data that has statistics similar to the inference data.
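A small sketch of this update rule (illustrative names; it checks only moving_mean, whose update is the same for the fused and non-fused implementations):

```python
import tensorflow as tf

layer = tf.keras.layers.BatchNormalization(momentum=0.9)
x = tf.random.normal((32, 5))
layer.build(x.shape)  # creates moving_mean (zeros) and moving_variance (ones)

old_mean = tf.identity(layer.moving_mean)
_ = layer(x, training=True)  # one training-mode call updates the moving stats

# moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum)
expected = old_mean * 0.9 + tf.reduce_mean(x, axis=0) * (1 - 0.9)
tf.debugging.assert_near(layer.moving_mean, expected, atol=1e-5)
```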
Arguments

- center: If True, add offset of beta to normalized tensor. If False, beta is ignored.
- scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.
- renorm_clipping: A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar Tensors used to clip the renorm correction. The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax], and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively. (A usage sketch follows this list.)
- renorm_momentum: Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference.
- fused: If True, use a faster, fused implementation, or raise a ValueError if the fused implementation cannot be used. If None, use the faster implementation if possible. If False, do not use the fused implementation.
- trainable: Boolean, if True the variables will be marked as trainable.
- virtual_batch_size: An int. By default, virtual_batch_size is None, which means batch normalization is performed across the whole batch. When virtual_batch_size is not None, instead perform "Ghost Batch Normalization", which creates virtual sub-batches that are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution (also shown in the sketch after this list).
- adjustment: A function taking the Tensor containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis == -1, then adjustment = lambda shape: (tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1)) will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If None, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
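As a non-authoritative sketch of two of the less common options above (assuming a TF 2.x tf.keras build where renorm and virtual_batch_size are still available; the clipping constants are borrowed from the Batch Renormalization paper, not API defaults):

```python
import tensorflow as tf

# Batch Renormalization with clipped corrections; rmax/rmin/dmax are
# illustrative constants, not defaults of this layer.
renorm_layer = tf.keras.layers.BatchNormalization(
    renorm=True,
    renorm_clipping={"rmax": 3.0, "rmin": 1.0 / 3.0, "dmax": 5.0},
    renorm_momentum=0.99,
)

# Ghost Batch Normalization: an actual batch of 8 is split into
# virtual sub-batches of 4, each normalized separately in training.
ghost_layer = tf.keras.layers.BatchNormalization(virtual_batch_size=4)

x = tf.random.normal((8, 10))  # batch size must be divisible by 4 here
_ = renorm_layer(x, training=True)
_ = ghost_layer(x, training=True)
```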
Call arguments

- training=True: The layer will normalize its inputs using the mean and variance of the current batch of inputs.
- training=False: The layer will normalize its inputs using the mean and variance of its moving statistics, learned during training.
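A minimal sketch of the two call modes (layer and x are illustrative names):

```python
import tensorflow as tf

layer = tf.keras.layers.BatchNormalization()
x = tf.random.normal((4, 3))

y_train = layer(x, training=True)   # uses batch statistics, updates moving stats
y_infer = layer(x, training=False)  # uses moving statistics, no state updates
```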
Input shape: Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape: Same shape as input.
About setting layer.trainable = False on a BatchNormalization layer:
The meaning of setting
layer.trainable = False is to freeze the layer,
i.e. its internal state will not change during training:
its trainable weights will not be updated during fit() or
train_on_batch(), and its state updates will not be run.
This does not necessarily mean that the layer is run in inference
mode (which is normally controlled by the
training argument that can
be passed when calling a layer). "Frozen state" and "inference mode"
are two separate concepts.
However, in the case of the
BatchNormalization layer, setting
trainable = False on the layer means that the layer will be
subsequently run in inference mode (meaning that it will use
the moving mean and the moving variance to normalize the current batch,
rather than using the mean and variance of the current batch).
This behavior has been introduced in TensorFlow 2.0, in order
to make layer.trainable = False produce the most commonly
expected behavior in the convnet fine-tuning use case.
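For example, a hedged sketch of that fine-tuning use case (the base architecture and head are illustrative choices):

```python
import tensorflow as tf

# Freezing the base model sets trainable = False recursively, so its
# BatchNormalization layers run in inference mode under TF 2.x.
base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")
base.trainable = False  # BN layers will use their moving statistics

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```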
Note that:

- This behavior only occurs as of TensorFlow 2.0. In 1.*, setting
layer.trainable = False would freeze the layer but would
not switch it to inference mode.
- Setting trainable on a model containing other layers will
recursively set the trainable value of all inner layers.
- If the value of the trainable attribute is changed after calling
compile() on a model, the new value doesn't take effect for this model
until compile() is called again.
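A brief sketch of this caveat (the model is illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

model.layers[1].trainable = False  # ignored by the already-compiled model...
model.compile(optimizer="sgd", loss="mse")  # ...until compile() runs again
```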