tf.data.experimental.service.CrossTrainerCache

Options related to the tf.data service cross-trainer cache.

This is used to enable the cross-trainer cache when distributing a dataset. For example:

dataset = dataset.apply(tf.data.experimental.service.distribute(
    processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
    service=FLAGS.tf_data_service_address,
    job_name="job",
    cross_trainer_cache=tf.data.experimental.service.CrossTrainerCache(
        trainer_id=trainer_id())))

For more details, refer to https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers

Args:
  trainer_id: Each training job has a unique ID. Once a job has consumed data, the data remains in the cache and is re-used by jobs with different trainer_ids. Requests with the same trainer_id do not re-use data.

Raises:
  ValueError: if trainer_id is empty.
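
As an illustration of how trainer_id affects cache sharing, below is a minimal sketch (not taken from the TensorFlow docs): the service address, job name, and trainer IDs are placeholders, and it assumes a tf.data service dispatcher and worker are already running at that address.

# Minimal sketch: two trainers sharing one cached dataset through the
# tf.data service. The address, job name, and trainer IDs are placeholders.
import tensorflow as tf

SERVICE_ADDRESS = "grpc://localhost:5000"  # placeholder tf.data service address

def make_dataset(trainer_id):
  # Both trainers use the same job_name and ShardingPolicy.OFF, so they read
  # from the same tf.data service job and share its cross-trainer cache.
  dataset = tf.data.Dataset.range(1000)
  return dataset.apply(tf.data.experimental.service.distribute(
      processing_mode=tf.data.experimental.service.ShardingPolicy.OFF,
      service=SERVICE_ADDRESS,
      job_name="shared_job",
      cross_trainer_cache=tf.data.experimental.service.CrossTrainerCache(
          trainer_id=trainer_id)))

# Trainer 1 populates the cache as it consumes the data.
ds_trainer_1 = make_dataset(trainer_id="trainer_1")
# Trainer 2 has a different trainer_id, so it re-uses the cached data instead
# of triggering a second pass over the input pipeline.
ds_trainer_2 = make_dataset(trainer_id="trainer_2")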