tf.tpu.experimental.embedding.TPUEmbeddingV2

The TPUEmbedding mid-level API, running on TPU with the SparseCore accelerator.

Args
feature_config A nested structure of tf.tpu.experimental.embedding.FeatureConfig configs.
optimizer An instance of one of tf.tpu.experimental.embedding.SGD, tf.tpu.experimental.embedding.Adagrad or tf.tpu.experimental.embedding.Adam. When not created under TPUStrategy, may be set to None to avoid the creation of the optimizer slot variables; this is useful for reducing memory consumption when exporting the model for serving, where slot variables aren't needed.
pipeline_execution_with_tensor_core If True, the TPU embedding computations will overlap with the TensorCore computations (and hence will be one step old). Set to True for improved performance.

Raises
ValueError If optimizer is not one of tf.tpu.experimental.embedding.(SGD, Adam or Adagrad), or is None when created under a TPUStrategy.
RuntimeError If not created under TPUStrategy.
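
Putting the constructor arguments together, a minimal configuration sketch might look as follows. The table and feature names (`video_table`, `watched`) are hypothetical, and the TPUEmbeddingV2 object itself must be created under a TPUStrategy scope, shown here only in comments since it requires TPU hardware:

```python
import tensorflow as tf

# Hypothetical table of 1000 video embeddings of dimension 16.
table = tf.tpu.experimental.embedding.TableConfig(
    vocabulary_size=1000, dim=16, name="video_table")

# feature_config is a nested structure (here a dict) of FeatureConfigs.
feature_config = {
    "watched": tf.tpu.experimental.embedding.FeatureConfig(
        table=table, name="watched")
}

# One of the supported optimizers; learning rate chosen arbitrarily.
optimizer = tf.tpu.experimental.embedding.Adagrad(learning_rate=0.1)

# Under a TPUStrategy scope you would then create the mid-level API object:
# strategy = tf.distribute.TPUStrategy(resolver)
# with strategy.scope():
#     embedding = tf.tpu.experimental.embedding.TPUEmbeddingV2(
#         feature_config=feature_config,
#         optimizer=optimizer)
```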

Attributes
embedding_table_shards Returns a dict of embedding table shards, keyed by TableConfig.
embedding_tables Returns a dict of embedding tables, keyed by TableConfig.
variables Returns a dict of variables, keyed by TableConfig, then by slot name.

Methods

apply_gradients

Applies the gradient update to the embedding tables.

If a gradient of None is passed in any position of the nested structure, then a gradient update with a zero gradient is applied for that feature. For optimizers like SGD or Adagrad, this is the same as applying no update at all. For lazy Adam and other sparsely applied optimizers with decay, ensure you understand the effect of applying a zero gradient.

Args
gradients A nested structure of gradients, with structure matching the feature_config passed to this object.
preserved_outputs A dict of PartitionedCsrFormatTensor, coming from the second output of the embedding lookup call.

Raises
RuntimeError If not built.
ValueError If a non-tf.Tensor non-None gradient is passed in, or a tf.Tensor of the incorrect shape is passed in. Also if the size of any sequence in gradients does not match corresponding sequence in feature_config.
TypeError If the type of any sequence in gradients does not match corresponding sequence in feature_config.
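
A typical use of apply_gradients is in a training step that feeds the gradients of the lookup activations, together with the preserved outputs from the same lookup, back to the embedding tables. The sketch below assumes `embedding` is a built TPUEmbeddingV2 instance and that `dense_model` and `loss_fn` are hypothetical user-supplied callables; it is illustrative wiring, not a complete TPU training loop:

```python
import tensorflow as tf

@tf.function
def train_step(embedding, dense_model, loss_fn, features, labels):
    with tf.GradientTape() as tape:
        # First output: activations packed like feature_config.
        # Second output: PartitionedCsrFormatTensors to preserve.
        activations, preserved_outputs = embedding.embedding_lookup(features)
        tape.watch(activations)
        loss = loss_fn(labels, dense_model(activations))
    # Gradients w.r.t. the activations, matching feature_config's structure.
    grads = tape.gradient(loss, activations)
    # Pass the preserved outputs of the lookup back to the update.
    embedding.apply_gradients(grads, preserved_outputs)
    return loss
```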

build

Create variables and slot variables for TPU embeddings.

dequeue

Perform embedding lookup.

embedding_lookup

Perform embedding lookup on the input feature.

Args
features A nested structure of tf.Tensors, tf.SparseTensors or tf.RaggedTensors, with the same structure as feature_config. Inputs will be downcast to tf.int32. Only one type out of tf.SparseTensor or tf.RaggedTensor is supported per call.
weights If not None, a nested structure of tf.Tensors, tf.SparseTensors or tf.RaggedTensors, matching the above, except that the tensors should be of float type (and they will be downcast to tf.float32). For tf.SparseTensors we assume the indices are the same for the parallel entries from features and similarly for tf.RaggedTensors we assume the row_splits are the same.

Raises
ValueError If the input feature is not one of the Tensor, SparseTensor or RaggedTensor type.
TypeError If the type of any sequence in features does not match corresponding sequence in feature_config. Similarly for weights, if not None.

Returns
packed_activations Embedding lookup results packed as the same sequence of the input feature.
packed_output A dict of PartitionedCsrFormatTensors.
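
As an illustration of the expected input structure, the sketch below builds a batch of ragged-length id lists for a single hypothetical "watched" feature as a tf.SparseTensor, with optional per-id weights that share the same indices as required above:

```python
import tensorflow as tf

# Batch of 2 examples; example 0 has ids [3, 7], example 1 has id [1].
features = {
    "watched": tf.sparse.SparseTensor(
        indices=[[0, 0], [0, 1], [1, 0]],
        values=tf.constant([3, 7, 1], dtype=tf.int32),
        dense_shape=[2, 2])
}

# Optional weights: float values, and the SAME indices as `features`.
weights = {
    "watched": tf.sparse.SparseTensor(
        indices=[[0, 0], [0, 1], [1, 0]],
        values=tf.constant([1.0, 0.5, 2.0], dtype=tf.float32),
        dense_shape=[2, 2])
}

# Under TPUStrategy, the lookup would then be:
# activations, preserved_outputs = embedding.embedding_lookup(features, weights)
```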

enqueue

Preprocess the features on the host.

preprocess_features

Function to preprocess features.

__call__

Call the mid-level API to do embedding lookup.