Creates an Evaluator for evaluating metrics and plots.
tfma.evaluators.MetricsPlotsAndValidationsEvaluator(
    eval_config: tfma.EvalConfig,
    eval_shared_model: Optional[tfma.types.EvalSharedModel] = None,
    metrics_key: str = constants.METRICS_KEY,
    plots_key: str = constants.PLOTS_KEY,
    attributions_key: str = constants.ATTRIBUTIONS_KEY,
    run_after: str = slice_key_extractor.SLICE_KEY_EXTRACTOR_STAGE_NAME,
    schema: Optional[schema_pb2.Schema] = None,
    random_seed_for_testing: Optional[int] = None
) -> tfma.evaluators.Evaluator
Args

eval_config: Eval config.
eval_shared_model: Optional shared model (single-model evaluation) or list of
  shared models (multi-model evaluation). Only required if there are metrics to
  be computed in-graph using the model.
metrics_key: Name to use for metrics key in Evaluation output.
plots_key: Name to use for plots key in Evaluation output.
attributions_key: Name to use for attributions key in Evaluation output.
run_after: Extractor to run after (None means before any extractors).
schema: A schema to use for customizing metrics and plots.
random_seed_for_testing: Seed to use for unit testing.
Returns

Evaluator for evaluating metrics and plots. The output will be stored under
'metrics' and 'plots' keys.