Clips gradients of a multitask loss by their global norm.
```python
tf.contrib.opt.clip_gradients_by_global_norm(
    gradients_variables, clip_norm=20.0
)
```
Ignores all-zero tensors when computing the global norm.
Args | |
---|---
`gradients_variables` | A list of `(gradient, variable)` pairs.
`clip_norm` | A float `Tensor`, the global norm to clip on. Default is 20.0.
Returns | |
---|---
`list` | A list of `(gradient, variable)` pairs of the same type as `gradients_variables`, with the gradients clipped.
`fixed_global_norm` | A 0-D (scalar) `Tensor` representing the global norm.
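The clipping arithmetic can be illustrated outside TensorFlow. The sketch below is a minimal NumPy version, assuming the same scaling rule as `tf.clip_by_global_norm` (each gradient is multiplied by `clip_norm / max(global_norm, clip_norm)`) and that all-zero tensors are skipped when the global norm is computed, as the docstring above states; it is not the TF op itself.

```python
import numpy as np

def clip_gradients_by_global_norm_sketch(gradients_variables, clip_norm=20.0):
    """NumPy sketch (not the TF implementation) of global-norm clipping
    that ignores all-zero gradient tensors when computing the norm."""
    grads = [g for g, _ in gradients_variables]
    # Global norm over the non-all-zero gradients only.
    norms = [np.linalg.norm(g) for g in grads if np.any(g)]
    global_norm = float(np.sqrt(sum(n ** 2 for n in norms)))
    # Scale is 1.0 when the norm is already within clip_norm,
    # otherwise gradients shrink so their global norm equals clip_norm.
    scale = clip_norm / max(global_norm, clip_norm)
    clipped = [(g * scale, v) for g, v in gradients_variables]
    return clipped, global_norm
```

For example, a gradient `[3.0, 4.0]` paired with an all-zero bias gradient has global norm 5.0; with `clip_norm=2.5` every gradient is halved, while with the default `clip_norm=20.0` nothing changes.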