afe.core.evaluate_networks
Classes

| PerformanceAnalyzerProxy | Proxy class used to provide the interface of DatasetPerformanceAnalyzer. |
| ComposeDatasetCompareFunction | ComposeDatasetCompareFunction(f, a) returns a dataset performance analyzer that behaves like a, with a composed compare function. |
| GraphEvaluatorLogger | Used for printing the progress and results of graph evaluation. |
| GraphEvaluator | Wrapper class encapsulating objects used in graph evaluation. |

Functions

| checked_zip | Zip together two iterables that must have the same length. |

Module Contents
- afe.core.evaluate_networks.checked_zip(x: Iterable[_A], y: Iterable[_B]) → Iterator[Tuple[_A, _B]] [source]
  Zip together two iterables that must have the same length. The returned iterator behaves like zip, except that it raises an exception if one iterator is longer than the other.
  - Parameters:
    x – First iterable
    y – Second iterable
  - Returns:
    Iterable of pairs of values taken from x and y
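A brief usage sketch (the data below is illustrative; this page does not specify which exception type is raised on a length mismatch):

    from afe.core.evaluate_networks import checked_zip

    # Equal lengths: behaves exactly like zip.
    for x, y in checked_zip([1, 2, 3], ["a", "b", "c"]):
        print(x, y)

    # Unequal lengths: iteration raises once one side is exhausted.
    try:
        list(checked_zip([1, 2], ["a"]))
    except Exception as err:
        print("length mismatch detected:", err)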
- class afe.core.evaluate_networks.PerformanceAnalyzerProxy(dataset_performance_analyzer: Any)[source]
  Proxy class used to provide the interface of DatasetPerformanceAnalyzer. DatasetPerformanceAnalyzer is used by GraphEvaluator to determine network performance by comparing network outputs to ground truth data.
  Attributes
  - attribute _dataset_performance_analyzer:
    Any. DatasetPerformanceAnalyzer object which performs the performance analysis. It must provide the same interface as this proxy class.
  - compare(net_out: List[numpy.ndarray], gt_data: _GroundTruth) → str | None [source]
    Compares the network output and the ground truth output and saves the performance information within the class. Returns a message detailing the performance for the given output. The message can also include the overall performance, taking into account the records from previous compare() results.
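The proxy forwards to any object that exposes a compatible compare method. A hypothetical top-1 accuracy analyzer satisfying that contract might look like the sketch below (the class and its internals are illustrative, not part of the AFE API):

    from typing import List, Optional
    import numpy as np

    class Top1AccuracyAnalyzer:
        # Hypothetical analyzer; any object with a compatible compare()
        # can stand in for DatasetPerformanceAnalyzer behind the proxy.
        def __init__(self) -> None:
            self._correct = 0
            self._total = 0

        def compare(self, net_out: List[np.ndarray], gt_data: int) -> Optional[str]:
            predicted = int(np.argmax(net_out[0]))
            self._correct += int(predicted == gt_data)
            self._total += 1
            return f"running top-1 accuracy: {self._correct / self._total:.3f}"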
- class afe.core.evaluate_networks.ComposeDatasetCompareFunction(dataset_performance_analyzer: Any, transform_outputs: Callable[[List[numpy.ndarray], _GroundTruth], Tuple[List[numpy.ndarray], _GroundTruth2]])[source]
  ComposeDatasetCompareFunction(f, a) returns a dataset performance analyzer that behaves like a, except that its compare function is equivalent to

      def compare(out, gt):
          out2, gt2 = f(out, gt)
          return a.compare(out2, gt2)

  See PerformanceAnalyzerProxy for method documentation.
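For example, a layout transform can be composed in front of an analyzer. This sketch reuses the hypothetical Top1AccuracyAnalyzer from above; to_nhwc is likewise illustrative:

    from typing import List, Tuple
    import numpy as np
    from afe.core.evaluate_networks import ComposeDatasetCompareFunction

    def to_nhwc(out: List[np.ndarray], gt: int) -> Tuple[List[np.ndarray], int]:
        # Reorder 4-D outputs from NCHW to NHWC; leave ground truth unchanged.
        return [o.transpose(0, 2, 3, 1) if o.ndim == 4 else o for o in out], gt

    # compare() now applies to_nhwc before delegating to the wrapped analyzer.
    analyzer = ComposeDatasetCompareFunction(Top1AccuracyAnalyzer(), to_nhwc)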
- class afe.core.evaluate_networks.GraphEvaluatorLogger(verbose: bool, log_filename: str | None)[source]
  Used for printing the progress and results of graph evaluation.
  Attributes
  - attribute _verbose:
    bool. Whether to print out the progress and results. If set to False, logging will be disabled.
  - attribute _log_file:
    Optional[IO]. The IO object used to keep the graph evaluation logs, if any.
- print_progressbar(current_step: int, total_steps: int, analysis_str: str)[source]
  Prints the progress bar if logging is enabled.
  - Parameters:
    current_step – int. The current step in the graph evaluation process.
    total_steps – int. The total number of steps in the evaluation process.
    analysis_str – str. The output of the current step in the evaluation.
- print_analysis_str(analysis_str: str)[source]
  Prints out the analysis string.
  - Parameters:
    analysis_str – str. The output of the current step in the evaluation.
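A minimal usage sketch (the step count, messages, and log file name are illustrative):

    from afe.core.evaluate_networks import GraphEvaluatorLogger

    logger = GraphEvaluatorLogger(verbose=True, log_filename="evaluation.log")
    for step in range(1, 11):
        logger.print_progressbar(step, 10, f"sample {step} compared")
    logger.print_analysis_str("final score: 0.91")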
- class afe.core.evaluate_networks.GraphEvaluator(input_generator: Iterable[Dict[afe.ir.defines.NodeName, numpy.ndarray]], ground_truth_data_generator: Iterable[_GroundTruth], performance_analyzer: Any, sample_count_hint: int, transpose_output: bool = False)[source]
  Wrapper class encapsulating objects used in graph evaluation.
  Attributes
  - attribute input_generator:
    DataGenerator. Used to generate input data used in graph evaluation.
  - attribute ground_truth_data_generator:
    DataGenerator. Used to generate ground truth outputs used in graph evaluation.
  - attribute dataset_performance_analyzer:
    PerformanceAnalyzerProxy. Used to perform graph evaluation.
  - attribute sample_count_hint:
    Optional[int]. Number of samples in the input, used for progress reporting. Does not affect the number of samples actually processed from the input. If None, the number of samples in the input is unknown and progress is not shown.
  - attribute transpose_output:
    bool. Whether to transpose output from NHWC to NCHW layout.
- dataset_performance_analyzer: PerformanceAnalyzerProxy[_GroundTruth][source]
- evaluate(run_func: Callable[[Dict[afe.ir.defines.NodeName, numpy.ndarray]], List[numpy.ndarray]], verbose: bool = False, analysis_log_filename: str | None = None) → float [source]
  Perform evaluation of the network performance. If an exception is raised while performing the evaluation, the performance is set to zero.
  - Parameters:
    run_func – Callable[[Dict[NodeName, np.ndarray]], List[np.ndarray]]. Function which takes an input dataset and generates inference results.
    verbose – bool. Default is False. If set to True, print out the evaluation results.
    analysis_log_filename – Optional[str]. Default is None. If given, the file to which the evaluation results are logged.
  - Returns:
    float. A number from 0 to 1 indicating the network's performance.
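Putting the pieces together, an evaluation run might look like the sketch below. It assumes NodeName is a string key, reuses the hypothetical Top1AccuracyAnalyzer from above, and substitutes a stand-in run_func; only the constructor and evaluate signatures come from this page:

    from typing import Dict, List
    import numpy as np
    from afe.core.evaluate_networks import GraphEvaluator

    # Illustrative inputs: two samples for a single-input network.
    inputs = [{"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
              for _ in range(2)]
    ground_truth = [0, 1]  # illustrative labels

    def run_func(sample: Dict[str, np.ndarray]) -> List[np.ndarray]:
        # Stand-in for real inference; returns fake class scores.
        return [np.random.rand(1, 10).astype(np.float32)]

    evaluator = GraphEvaluator(
        input_generator=inputs,
        ground_truth_data_generator=ground_truth,
        performance_analyzer=Top1AccuracyAnalyzer(),
        sample_count_hint=len(inputs),
    )
    score = evaluator.evaluate(run_func, verbose=True)
    print(f"network performance: {score:.3f}")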