pysys.perf.api

API for creating new performance reporters, and for manipulating performance run files outside the framework.

PerformanceUnit

class pysys.perf.api.PerformanceUnit(name, biggerIsBetter)[source]

Bases: object

Class which identifies the unit in which a performance result is measured.

Every unit encodes whether big numbers are better or worse (which can be used to calculate the improvement or regression when results are compared), e.g. better for throughput numbers, worse for time taken or latency numbers.

For consistency, we recommend using the pre-defined units where possible. For throughput numbers or rates, that means using PER_SECOND. For latency measurements that means using SECONDS if long time periods of several seconds are expected, or NANO_SECONDS (=10**-9 seconds) if sub-second time periods are expected (since humans generally find numbers such as 1,234,000 ns easier to skim-read and compare than fractional numbers like 0.001234).

Parameters
  • name (str) – The name of the unit. Should be short, for example “/s”.

  • biggerIsBetter (bool) – Indicates whether larger values are good (e.g. rate/TPS/throughput) or bad (latency/memory).
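For illustration, here is a short sketch using both a pre-defined unit and a custom one (this sketch assumes the pre-defined units named above are available as attributes of PerformanceUnit; the custom “MB” unit is purely hypothetical):

    from pysys.perf.api import PerformanceUnit

    # Pre-defined units; each encodes whether bigger values are better
    throughput = PerformanceUnit.PER_SECOND    # bigger is better
    latency = PerformanceUnit.NANO_SECONDS     # smaller is better

    # A custom unit for a metric where smaller values are better,
    # e.g. peak memory usage; the "MB" name is illustrative only
    memoryUnit = PerformanceUnit('MB', biggerIsBetter=False)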

BasePerformanceReporter

class pysys.perf.api.BasePerformanceReporter(project, summaryfile, testoutdir, runner, **kwargs)[source]

Bases: object

API base class for creating a reporter that handles or stores performance results for later analysis.

Each performance result consists of a value, a result key (which must be unique across all test cases and modes, and also stable across different runs), and a unit (which also encodes whether bigger values are better or worse). Each test can report any number of performance results.

Performance reporter implementations are required to be thread-safe.

Project configuration of performance reporters is through the PySys project XML file using the <performance-reporter> tag. Multiple reporters may be configured and their individual properties set through the nested <property> tag or XML attributes. Properties are set as Python attributes on the instance just after construction, with automatic conversion of type to match the default value if specified as a static attribute on the class.
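As an illustrative sketch of such configuration (the classname value and the summaryfile property are assumptions for the example, not confirmed values):

    <performance-reporter classname="myorg.perf.MyPerformanceReporter">
        <property name="summaryfile" value="perf_output/perf_@DATE@_@TIME@.csv"/>
    </performance-reporter>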

If no reporters are explicitly configured, default reporters will be added.

Variables
  • project (pysys.config.project.Project) – The project configuration instance.

  • testoutdir (str) – The output directory used for this test run (equal to runner.outsubdir): an identifying string which often contains the platform, and which can be used to distinguish between multiple test runs on the same machine. This is usually a relative path but may be an absolute path.

  • runner – A reference to the runner.

setup(**kwargs)[source]

Called before any tests begin, to prepare the performance writer for use, once the runner is set up and any project configuration properties for this performance reporter have been assigned to this instance.

Usually there is no reason to override the constructor, and any initialization can be done in this method.

getRunDetails(testobj=None, **kwargs)[source]

Return a dictionary of information about this test run (e.g. hostname, start time).

Overriding this method is discouraged; customization of the run details should usually be performed by changing the runner.runDetails dictionary from the pysys.baserunner.BaseRunner.setup() method.

Parameters

testobj – the test case instance registering the value

Changed in version 2.0: Added testobj parameter, for advanced cases where you want to provide different runDetails based on some feature of the test object or mode.
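For example, the recommended customization point looks like this (a minimal sketch; the buildNumber key and its value are illustrative):

    from pysys.baserunner import BaseRunner

    class PerfRunner(BaseRunner):
        def setup(self):
            super().setup()
            # Any extra string key/value pairs added here will then be
            # included in the run details seen by performance reporters
            self.runDetails['buildNumber'] = '1234'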

static valueToDisplayString(value)[source]

Pretty-print an integer or float value to a moderate number of significant figures.

The method additionally adds a “,” grouping for large numbers.

Subclasses may customize this if desired, including by reimplementing as a non-static method.

Parameters

value – the value to be displayed, which must be a numeric type.
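For instance (indicative only, since the exact number of significant figures is implementation-defined):

    s = BasePerformanceReporter.valueToDisplayString(1234567.891)
    print(s)  # something like '1,234,568', with "," grouping for readability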

getRunSummaryFile(testobj, **kwargs)[source]

Return the fully substituted location of the file to which summary performance results will be written.

This may include the following substitutions: @OUTDIR@ (= ${outDirName}, the basename of the output directory for this run, e.g. “linux”), @HOSTNAME@, @DATE@, @TIME@, and @TESTID@. The default is given by DEFAULT_SUMMARY_FILE. If the specified file does not exist it will be created; it is possible to use multiple summary files from the same run. The path will be resolved relative to the pysys project root directory unless an absolute path is specified.

Parameters

testobj – the test case instance registering the value
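For example, a summary file location might be configured with a value such as the following, which this method resolves by substitution (the directory layout is illustrative; summaryfile corresponds to the constructor argument shown above):

    # Illustrative summaryfile value; @OUTDIR@, @HOSTNAME@, @DATE@ and
    # @TIME@ are replaced when getRunSummaryFile resolves the path
    summaryfile = 'perf_output/@OUTDIR@_@HOSTNAME@/perf_@DATE@_@TIME@.csv'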

reportResult(testobj, value, resultKey, unit, toleranceStdDevs=None, resultDetails=None)[source]

Report a performance result, with an associated unique key that identifies it.

This method must be implemented by performance reporters. However, never call it directly; always use pysys.basetest.BaseTest.reportPerformanceResult, which performs some critical input validations before calling all the registered reporters.

Parameters
  • testobj – the test case instance registering the value. Use testobj.descriptor.id to get the testId.

  • value (int|float) – the value to be reported. This may be an int or a float.

  • resultKey (str) – a unique string that fully identifies what was measured.

  • unit (PerformanceUnit) – identifies the unit the value is measured in.

  • toleranceStdDevs (float) – indicates how many standard deviations away from the mean a result must be to be considered a regression.

  • resultDetails (dict[str,obj]) – A dictionary of detailed information that should be recorded together with the result.
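As an illustration, here is a minimal sketch of a reporter implementing this method (the JSON-lines format, file path, and attribute names are assumptions for the example, not part of the documented API):

    import json, threading
    from pysys.perf.api import BasePerformanceReporter

    class JSONLinesPerformanceReporter(BasePerformanceReporter):
        def setup(self, **kwargs):
            # Implementations must be thread-safe, so guard the shared file
            self.lock = threading.Lock()
            self.file = open('performance_results.jsonl', 'w', encoding='utf-8')

        def reportResult(self, testobj, value, resultKey, unit,
                toleranceStdDevs=None, resultDetails=None):
            record = {
                'resultKey': resultKey,
                'testId': testobj.descriptor.id,
                'value': value,
                'unit': unit.name,
                'biggerIsBetter': unit.biggerIsBetter,
                'toleranceStdDevs': toleranceStdDevs,
                'resultDetails': resultDetails or {},
                'runDetails': self.getRunDetails(testobj),
            }
            with self.lock:
                self.file.write(json.dumps(record) + '\n')

        def cleanup(self):
            # Finalize I/O once all tests have finished executing
            with self.lock:
                self.file.close()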

cleanup()[source]

Called when PySys has finished executing tests.

This is where any file footer and other I/O finalization can be written to the end of performance log files, and is also a good time to do any required aggregation, printing of summaries or artifact publishing.

static tryDeserializePerformanceFile(path)[source]

Advanced method which allows performance reporters to deserialize the files they write to allow them to be used as comparison baselines.

Most reporters do not need to worry about this method.

If you do implement it, return an instance of PerformanceRunData, or None if you do not support this file type, for example because the extension does not match. It is best to declare this as a static method if possible.

Return type

PerformanceRunData
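For example, a reporter that writes JSON-lines files (as in the earlier sketch) might deserialize them roughly like this (all details are assumptions for the example):

    import json
    from pysys.perf.api import BasePerformanceReporter, PerformanceRunData

    class JSONLinesPerformanceReporter(BasePerformanceReporter):
        @staticmethod
        def tryDeserializePerformanceFile(path):
            if not path.endswith('.jsonl'):
                return None  # not a file type this reporter can read
            with open(path, encoding='utf-8') as f:
                records = [json.loads(line) for line in f if line.strip()]
            # The earlier sketch stored runDetails on every record, so
            # recover it from the first one (empty if the file is empty)
            runDetails = records[0].get('runDetails', {}) if records else {}
            return PerformanceRunData(name=path, runDetails=runDetails, results=records)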

PerformanceRunData

class pysys.perf.api.PerformanceRunData(name, runDetails, results)[source]

Bases: object

Holds performance data for a single test run, consisting of runDetails and a list of performance results covering one or more cycles.

Variables
  • name (str) – The name, typically a filename.

  • runDetails (dict[str,str]) – A dictionary containing (string key, string value) information about the whole test run, for example operating system and hostname.

  • results (list[dict]) – A list where each item is a dictionary containing information about a given result. The current keys are: resultKey, testId, value, unit, biggerIsBetter, toleranceStdDevs, samples, stdDev, resultDetails.

static aggregate(runs)[source]

Aggregate a list of multiple runs and/or cycles into a single performance run data object, with a single entry for each unique resultKey whose value is the mean of all the observed samples.

Parameters

runs (list[PerformanceRunData]) – the list of run objects to aggregate.

Return type

PerformanceRunData
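For instance (a sketch; run1 and run2 are assumed to have been obtained from a deserialization method such as tryDeserializePerformanceFile above):

    # Combine two runs: each unique resultKey gets a single entry whose
    # value is the mean of all the observed samples
    combined = PerformanceRunData.aggregate([run1, run2])
    for r in combined.results:
        print(r['resultKey'], r['value'], r['samples'])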

CSVPerformanceFile

class pysys.perf.api.CSVPerformanceFile(contents, name=None)[source]

Bases: pysys.perf.api.PerformanceRunData

Holds performance data for a single test run in a CSV performance file.

If this file contains aggregated results the number of “samples” may be greater than 1 and the “value” will specify the mean result.

Variables
  • runDetails (dict[str,str]) – A dictionary containing (string key, string value) information about the whole test run.

  • results (list[dict]) – A list where each item is a dictionary containing information about a given result, with a value for each of the keys in COLUMNS, for example ‘resultKey’, ‘value’, etc.

  • RUN_DETAILS (str) – The constant prefix identifying information about the whole test run.

  • RESULT_DETAILS (str) – The constant prefix identifying detailed information about a given result.

  • COLUMNS (list[str]) – Constant list of the columns in the performance output.

Parameters

contents (str) – A string containing the contents of the file to be parsed (can be empty)

Return type

CSVPerformanceFile

static aggregate(files)[source]

Aggregate a list of performance file objects into a single CSVPerformanceFile object.

Takes a list of one or more CSVPerformanceFile objects and returns a single aggregated CSVPerformanceFile with a single row for each resultKey (with the “value” set to the mean if there are multiple results with that key, and the stdDev also set appropriately).

This method is now deprecated in favour of PerformanceRunData.aggregate.

Parameters

files (list[CSVPerformanceFile]) – the list of performance file objects to aggregate.

static load(src)[source]

Read the runDetails and results from the specified .csv file on disk.

Parameters

src (str) – The path to read.

Returns

A new CSVPerformanceFile instance.

New in version 2.1.

dump(dest)[source]

Dump the runDetails and results from this object to a CSV at the specified location.

Any existing file is overwritten.

Parameters

dest (str) – The destination path or file handle to write to.

New in version 2.1.
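Together, load and dump permit a simple round trip, as in this sketch (the file paths are illustrative):

    from pysys.perf.api import CSVPerformanceFile

    f = CSVPerformanceFile.load('perf_output/perf_myrun.csv')
    print(f.runDetails)                          # whole-run information
    f.dump('perf_output/perf_myrun_copy.csv')    # write it back out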

static toCSVLine(values)[source]

Convert a list or dictionary of input values into a CSV string.

Note that no newline character is included in the returned CSV string. The input can either be a list (any nested dictionaries are expanded into KEY=VALUE entries), or a dictionary (or OrderedDict) whose keys will be added in the same order as COLUMNS.

Parameters

values – the input list or dictionary
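For example (a sketch; the keys shown are a subset of COLUMNS, and appending the newline is left to the caller):

    line = CSVPerformanceFile.toCSVLine({
        'resultKey': 'HTTP GET throughput',  # illustrative values
        'testId': 'MyTest_001',
        'value': 1234.5,
        'unit': '/s',
        'biggerIsBetter': True,
    })
    with open('perf.csv', 'a') as f:
        f.write(line + '\n')  # toCSVLine does not append a newline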