Mapping from logical cores in a computation to the physical TPU topology.
```python
tf.tpu.experimental.DeviceAssignment(
    topology, core_assignment
)
```
Prefer the `DeviceAssignment.build()` helper to construct a `DeviceAssignment`; it is easier, though less flexible, than constructing a `DeviceAssignment` directly.
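A minimal sketch of the recommended path, assuming a TensorFlow 2.x program with a reachable TPU; the TPU name `'my-tpu'` is hypothetical and must match your deployment:

```python
import tensorflow as tf

# Hypothetical TPU name/address; replace with your own.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)

# Two replicas, one logical core per replica.
da = tf.tpu.experimental.DeviceAssignment.build(topology, num_replicas=2)
```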
| Args | |
|---|---|
| `topology` | A `Topology` object that describes the physical TPU topology. |
| `core_assignment` | A logical to physical core mapping, represented as a rank 3 numpy array. See the description of the `core_assignment` property for more details. |
| Raises | |
|---|---|
| `ValueError` | If `topology` is not a `Topology` object. |
| `ValueError` | If `core_assignment` is not a rank 3 numpy array. |
| Attributes | |
|---|---|
| `core_assignment` | The logical to physical core mapping (see the sketch after this table). |
| `num_cores_per_replica` | The number of cores per replica. |
| `num_replicas` | The number of replicas of the computation. |
| `topology` | A `Topology` that describes the TPU topology. |
Methods
build
```python
@staticmethod
build(
    topology, computation_shape=None, computation_stride=None, num_replicas=1
)
```
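A hedged sketch of a model-parallel assignment, with `topology` the `Topology` from the earlier sketch. It assumes an 8-core TPU (e.g. a v3-8) whose mesh has rank 4; the `computation_shape` values are illustrative and must be adjusted to your topology:

```python
# Each replica spans 2 cores along the last mesh dimension; 4 replicas
# then cover all 8 cores of the assumed v3-8.
da = tf.tpu.experimental.DeviceAssignment.build(
    topology,
    computation_shape=[1, 1, 1, 2],
    num_replicas=4)
```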
coordinates
```python
coordinates(
    replica, logical_core
)
```
Returns the physical topology coordinates of a logical core.
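Continuing the sketch, with `da` the `DeviceAssignment` built above:

```python
# Physical mesh coordinates of logical core 0 in replica 0;
# the layout and length depend on the topology rank.
coords = da.coordinates(replica=0, logical_core=0)
```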
host_device
```python
host_device(
    replica=0, logical_core=0, job=None
)
```
Returns the CPU device attached to a logical core.
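A hedged usage sketch; the `job` name `'worker'` and the commented result are illustrative and depend on the cluster:

```python
host = da.host_device(replica=0, logical_core=0, job='worker')
# e.g. '/job:worker/task:0/device:CPU:0'
```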
lookup_replicas
```python
lookup_replicas(
    task_id, logical_core
)
```
Looks up replica IDs by task number and logical core.
| Args | |
|---|---|
| `task_id` | TensorFlow task number. |
| `logical_core` | An integer, identifying a logical core. |
| Returns | |
|---|---|
| A sorted list of the replicas that are attached to that task and `logical_core`. |
| Raises | |
|---|---|
| `ValueError` | If no replica exists in the task which contains the logical core. |
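Continuing the sketch:

```python
# Replica IDs whose logical core 0 is placed on TensorFlow task 0;
# raises ValueError if that task holds no such replica.
replica_ids = da.lookup_replicas(task_id=0, logical_core=0)
```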
tpu_device
```python
tpu_device(
    replica=0, logical_core=0, job=None
)
```
Returns the name of the TPU device assigned to a logical core.
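A hedged usage sketch; the commented device string is illustrative and depends on the assignment and cluster:

```python
device = da.tpu_device(replica=0, logical_core=0, job='worker')
# e.g. '/job:worker/task:0/device:TPU:0'
```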
tpu_ordinal
```python
tpu_ordinal(
    replica=0, logical_core=0
)
```
Returns the ordinal of the TPU device assigned to a logical core.
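Continuing the sketch; the ordinal is the integer index of the TPU device on its host, useful for example when an op needs a per-host device ordinal rather than a full device name:

```python
ordinal = da.tpu_ordinal(replica=1, logical_core=0)
```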