Computes the mean squared logarithmic error between y_true and y_pred.
tf.keras.losses.MeanSquaredLogarithmicError(
    reduction=losses_utils.ReductionV2.AUTO, name='mean_squared_logarithmic_error'
)
loss = square(log(y_true + 1.) - log(y_pred + 1.))
Usage:
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
# Using 'auto'/'sum_over_batch_size' reduction type.
msle = tf.keras.losses.MeanSquaredLogarithmicError()
msle(y_true, y_pred).numpy()
0.240
# Calling with 'sample_weight'.
msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()
0.120
# Using 'sum' reduction type.
msle = tf.keras.losses.MeanSquaredLogarithmicError(
reduction=tf.keras.losses.Reduction.SUM)
msle(y_true, y_pred).numpy()
0.480
# Using 'none' reduction type.
msle = tf.keras.losses.MeanSquaredLogarithmicError(
reduction=tf.keras.losses.Reduction.NONE)
msle(y_true, y_pred).numpy()
array([0.240, 0.240], dtype=float32)
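The values above follow directly from the loss definition; as a quick check, the following minimal sketch reproduces the 0.240 result with plain TensorFlow ops (the variable names here are illustrative):

import tensorflow as tf

y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[1., 1.], [1., 0.]])
# Squared difference of the shifted logs, averaged over the last axis,
# gives one loss value per sample: ~[0.240, 0.240].
per_sample = tf.reduce_mean(
    tf.square(tf.math.log(y_true + 1.) - tf.math.log(y_pred + 1.)), axis=-1)
# The default 'sum_over_batch_size' reduction then averages over the batch: ~0.240.
tf.reduce_mean(per_sample).numpy()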
Usage with the compile API:
model = tf.keras.Model(inputs, outputs)
model.compile('sgd', loss=tf.keras.losses.MeanSquaredLogarithmicError())
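A slightly fuller sketch of the same pattern, with a hypothetical toy model and random data (the layer sizes and data shapes are illustrative, not part of this API):

import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(8,))
# A 'relu' output keeps predictions non-negative, which MSLE expects.
outputs = tf.keras.layers.Dense(2, activation='relu')(inputs)
model = tf.keras.Model(inputs, outputs)
model.compile('sgd', loss=tf.keras.losses.MeanSquaredLogarithmicError())
# Random non-negative inputs and targets, purely for demonstration.
x = np.random.rand(16, 8).astype('float32')
y = np.random.rand(16, 2).astype('float32')
model.fit(x, y, epochs=1, verbose=0)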
Args:
  reduction: (Optional) Type of tf.keras.losses.Reduction to apply to the
    loss. Default value is AUTO. AUTO indicates that the reduction option
    will be determined by the usage context. For almost all cases this
    defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy,
    outside of built-in training loops such as tf.keras compile and fit,
    using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this
    custom training tutorial for more details.
  name: Optional name for the op. Defaults to
    'mean_squared_logarithmic_error'.
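In a custom training loop under tf.distribute.Strategy, the pattern from the linked tutorial is to pass Reduction.NONE and average the per-example losses by the global batch size yourself; a minimal sketch of that pattern (GLOBAL_BATCH_SIZE and compute_loss are illustrative names):

import tensorflow as tf

GLOBAL_BATCH_SIZE = 64
msle = tf.keras.losses.MeanSquaredLogarithmicError(
    reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(y_true, y_pred):
  # One loss value per example; dividing by the global batch size is the
  # step that AUTO/SUM_OVER_BATCH_SIZE cannot perform safely per replica.
  per_example = msle(y_true, y_pred)
  return tf.nn.compute_average_loss(
      per_example, global_batch_size=GLOBAL_BATCH_SIZE)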
Methods
from_config
@classmethod
from_config(
    config
)
Instantiates a Loss from its config (output of get_config()).
Args:
  config: Output of get_config().
get_config
get_config()
Returns the config dictionary for a Loss instance.
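The two methods round-trip: serializing a configured loss and rebuilding it yields an equivalent instance. A short sketch (the name 'my_msle' is illustrative):

msle = tf.keras.losses.MeanSquaredLogarithmicError(
    reduction=tf.keras.losses.Reduction.SUM, name='my_msle')
config = msle.get_config()
# config is a plain dict, e.g. {'reduction': 'sum', 'name': 'my_msle'}.
restored = tf.keras.losses.MeanSquaredLogarithmicError.from_config(config)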
__call__
__call__(
    y_true, y_pred, sample_weight=None
)
Invokes the Loss instance.
Args:
  y_true: Ground truth values. shape = [batch_size, d0, .. dN], except for
    sparse loss functions such as sparse categorical crossentropy, where
    shape = [batch_size, d0, .. dN-1].
  y_pred: The predicted values. shape = [batch_size, d0, .. dN].
  sample_weight: Optional sample_weight acts as a coefficient for the loss.
    If a scalar is provided, then the loss is simply scaled by the given
    value. If sample_weight is a tensor of size [batch_size], then the total
    loss for each sample of the batch is rescaled by the corresponding
    element in the sample_weight vector. If the shape of sample_weight is
    [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each
    loss element of y_pred is scaled by the corresponding value of
    sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension,
    usually axis=-1.)
Returns:
  Weighted loss float Tensor. If reduction is NONE, this has shape
  [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because
  all loss functions reduce by 1 dimension, usually axis=-1.)
Raises:
  ValueError: If the shape of sample_weight is invalid.
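To make the sample_weight behavior concrete, here is a short sketch reusing the inputs from the usage examples above (the values in comments are rounded):

msle = tf.keras.losses.MeanSquaredLogarithmicError()
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
# Scalar weight: the reduced loss is simply scaled, ~0.480 (2 x 0.240).
msle(y_true, y_pred, sample_weight=2.0).numpy()
# Shape [batch_size] weights: each sample's loss is rescaled before the
# final reduction, reproducing the 0.120 example above.
msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()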