Densely-connected layer class.

Inherits From: `Dense`, `Layer`, `Layer`, `Module`
    tf.compat.v1.layers.Dense(
        units,
        activation=None,
        use_bias=True,
        kernel_initializer=None,
        bias_initializer=tf.compat.v1.zeros_initializer(),
        kernel_regularizer=None,
        bias_regularizer=None,
        activity_regularizer=None,
        kernel_constraint=None,
        bias_constraint=None,
        trainable=True,
        name=None,
        **kwargs
    )
Migrate to TF2

This API is a legacy API that is only compatible with eager execution and `tf.function` if you combine it with `tf.compat.v1.keras.utils.track_tf1_style_variables`. Please refer to the `tf.layers` model mapping section of the migration guide to learn how to use your TensorFlow v1 model in TF2 with Keras.

The corresponding TensorFlow v2 layer is `tf.keras.layers.Dense`.
Structural Mapping to Native TF2

None of the supported arguments have changed name.

Before:

    dense = tf.compat.v1.layers.Dense(units=3)

After:

    dense = tf.keras.layers.Dense(units=3)
Description

This layer implements the operation:

    outputs = activation(inputs * kernel + bias)

where `activation` is the activation function passed as the `activation` argument (if not `None`), `kernel` is a weights matrix created by the layer, and `bias` is a bias vector created by the layer (only if `use_bias` is `True`).
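The operation above can be sketched in plain NumPy (a minimal illustration of the math, not the layer's actual implementation; the ReLU activation and the weight values here are arbitrary choices):

```python
import numpy as np

def relu(x):
    # A common choice for the `activation` argument.
    return np.maximum(x, 0.0)

def dense_forward(inputs, kernel, bias=None, activation=None):
    # outputs = activation(inputs * kernel + bias)
    outputs = inputs @ kernel          # matrix product: (batch, in) x (in, units)
    if bias is not None:               # bias is only added when use_bias=True
        outputs = outputs + bias
    if activation is not None:         # activation=None means linear output
        outputs = activation(outputs)
    return outputs

inputs = np.array([[1.0, -2.0]])       # batch of 1, 2 input features
kernel = np.array([[1.0, 0.0, -1.0],
                   [0.0, 1.0, 1.0]])   # maps 2 inputs to units=3 outputs
bias = np.array([0.5, 0.5, 0.5])

print(dense_forward(inputs, kernel, bias, activation=relu))
# one row of 3 values: [[1.5 0.  0. ]]
```

With `units=3` and a 2-feature input, the kernel has shape `(2, 3)` and the output shape is `(batch, 3)`, matching the "dimensionality of the output space" described for `units` below.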
| Args | |
|---|---|
| `units` | Integer or Long, dimensionality of the output space. |
| `activation` | Activation function (callable). Set it to `None` to maintain a linear activation. |
| `use_bias` | Boolean, whether the layer uses a bias. |
| `kernel_initializer` | Initializer function for the weight matrix. If `None` (default), weights are initialized using the default initializer used by `tf.compat.v1.get_variable`. |
| `bias_initializer` | Initializer function for the bias. |
| `kernel_regularizer` | Regularizer function for the weight matrix. |
| `bias_regularizer` | Regularizer function for the bias. |
| `activity_regularizer` | Regularizer function for the output. |
| `kernel_constraint` | An optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. |
| `bias_constraint` | An optional projection function to be applied to the bias after being updated by an `Optimizer`. |
| `trainable` | Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`). |
| `name` | String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require `reuse=True` in such cases. |
| `_reuse` | Boolean, whether to reuse the weights of a previous layer by the same name. |
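To make the `kernel_constraint`/`bias_constraint` contract concrete — a callable that receives the unprojected variable and returns a projected variable of the same shape — here is a max-norm projection sketched in NumPy (the name `max_norm` and the clipping threshold are our own illustrative choices, not a TensorFlow API):

```python
import numpy as np

def max_norm(weights, max_value=2.0):
    # Rescale any column whose L2 norm exceeds `max_value`.
    # Same-shape in, same-shape out: exactly what a constraint
    # function must guarantee.
    norms = np.linalg.norm(weights, axis=0, keepdims=True)
    scale = np.minimum(1.0, max_value / np.maximum(norms, 1e-7))
    return weights * scale

w = np.array([[3.0, 0.5],
              [4.0, 0.5]])             # column norms: 5.0 and ~0.707
w_proj = max_norm(w, max_value=2.0)

print(w_proj.shape == w.shape)         # True: shape is preserved
print(np.linalg.norm(w_proj, axis=0))  # first column clipped to 2.0,
                                       # second left untouched
```

In the TF1 layer, such a function would be applied to the kernel after each optimizer update; as the table notes, this is not safe under asynchronous distributed training because the projection and the updates can interleave.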
| Attributes | |
|---|---|
| `graph` | |
| `scope_name` | |
Methods

apply

    apply(
        *args, **kwargs
    )
get_losses_for

    get_losses_for(
        inputs
    )

Retrieves losses relevant to a specific set of inputs.

| Args | |
|---|---|
| `inputs` | Input tensor or list/tuple of input tensors. |

| Returns | |
|---|---|
| List of loss tensors of the layer that depend on `inputs`. | |
get_updates_for

    get_updates_for(
        inputs
    )

Retrieves updates relevant to a specific set of inputs.

| Args | |
|---|---|
| `inputs` | Input tensor or list/tuple of input tensors. |

| Returns | |
|---|---|
| List of update ops of the layer that depend on `inputs`. | |