pysys.basetest module

Contains the base test class for test execution and validation.

For more information see the pysys.basetest.BaseTest API documentation.

class pysys.basetest.BaseTest(descriptor, outsubdir, runner)[source]

Bases: pysys.process.user.ProcessUser

The base class for all PySys testcases.

BaseTest is the parent class of all PySys system testcases. The class provides utility functions for cross-platform process management and manipulation, test timing, and test validation. Any PySys testcase should inherit from the base test and provide an implementation of the abstract execute method defined in this class. Child classes can also override the setup, cleanup and validate methods to provide custom setup and cleanup actions for a particular test, and to perform all validation steps in a single method where that is logically simpler.

Execution of a PySys testcase is performed through an instance of the pysys.baserunner.BaseRunner class, or a subclass thereof. The base runner instantiates an instance of the testcase, and then calls the setup, execute, validate and cleanup methods of the instance. All processes started during the test execution are reference counted within the base test, and terminated within the cleanup method.

Validation of the testcase is through the assert* methods. Each call to one of these methods appends an outcome to the outcome data structure maintained by the ProcessUser base class, building up a record of the individual validation outcomes. Several potential outcomes are supported by the PySys framework (SKIPPED, BLOCKED, DUMPEDCORE, TIMEDOUT, FAILED, NOTVERIFIED, and PASSED) and the overall outcome of the testcase is determined using a precedence order of the individual outcomes.

All assert* methods except for assertThat support variable argument lists for common non-default parameters. Currently this includes the assertMessage parameter, to override the default statement logged by the framework to stdout and the run log, and the abortOnError parameter, to override the defaultAbortOnError project setting.

Variables:
  • mode (string) – The user defined mode the test is running within. Subclasses can use this in conditional checks to modify the test execution based upon the mode.
  • input (string) – Full path to the input directory of the testcase. This is used both by the class and its subclasses to locate the default directory containing all input data to the testcase, as defined in the testcase descriptor.
  • output (string) – Full path to the output sub-directory of the testcase. This is used both by the class and its subclasses to locate the default directory for output produced by the testcase. Note that this is the actual directory where all output is written; it is modified from the directory defined in the testcase descriptor to accommodate the sub-directory used within that location to sandbox concurrent execution of the test, and/or to denote the run number.
  • reference (string) – Full path to the reference directory of the testcase. This is used both by the class and its subclasses to locate the default directory containing all reference data to the testcase, as defined in the testcase descriptor.
  • log (logging.Logger) – Reference to the logger instance of this class
  • project (Project) – Reference to the project details as set on the module load of the launching executable
__init__(descriptor, outsubdir, runner)[source]

Create an instance of the BaseTest class.

Parameters:
  • descriptor – The descriptor for the test giving all test details
  • outsubdir – The output subdirectory the test output will be written to
  • runner – Reference to the runner responsible for executing the testcase
addResource(resource)[source]

Add a resource which is owned by the test and is therefore cleaned up (deleted) when the test is cleaned up.

Deprecated - please use addCleanupFunction instead of this function.

assertDiff(file1, file2, filedir1=None, filedir2=None, ignores=[], sort=False, replace=[], includes=[], encoding=None, **xargs)[source]

Perform a validation assert on the comparison of two input text files.

This method performs a file comparison on two input files. The files are pre-processed prior to the comparison to either ignore particular lines, sort their constituent lines, replace matches to regular expressions in a line with an alternate value, or to only include particular lines. Should the files be equivalent after pre-processing, a PASSED outcome is added to the test outcome list; otherwise a FAILED outcome is added.

Parameters:
  • file1 – The basename of the first file used in the file comparison
  • file2 – The basename of the second file used in the file comparison (often a reference file)
  • filedir1 – The dirname of the first file (defaults to the testcase output subdirectory)
  • filedir2 – The dirname of the second file (defaults to the testcase reference directory)
  • ignores – A list of regular expressions used to denote lines in the files which should be ignored
  • sort – Boolean flag to indicate if the lines in the files should be sorted prior to the comparison
  • replace – List of tuples of the form ('regexpr', 'replacement'). For each regular expression in the list, any occurrences in the files are replaced with the replacement value prior to the comparison being carried out. This is often useful to replace timestamps in logfiles etc.
  • includes – A list of regular expressions used to denote lines in the files which should be used in the comparison. Only lines which match an expression in the list are used for the comparison
  • encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
  • xargs – Variable argument list (see class description for supported parameters)
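
For example, a test might compare an output file against a reference file, masking out timestamps before the comparison (the file names and timestamp pattern shown here are illustrative only):

    # compare the test output against the reference file, ignoring DEBUG lines
    # and replacing timestamps so they do not cause spurious differences
    self.assertDiff('results.txt', 'ref_results.txt',
        replace=[(r'\d{2}:\d{2}:\d{2}', '<timestamp>')],
        ignores=['DEBUG'])
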
assertFalse(expr, **xargs)[source]

Perform a validation assert on the supplied expression evaluating to false.

If the supplied expression evaluates to false a PASSED outcome is added to the outcome list. Should the expression evaluate to true, a FAILED outcome is added.

Parameters:
  • expr – The expression to check for the true | false value
  • xargs – Variable argument list (see class description for supported parameters)
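
For example, using the assertMessage override described in the class description (the variable here is illustrative only):

    # 'errors' is an illustrative list built up earlier in the test
    errors = []
    self.assertFalse(len(errors) > 0, assertMessage='Checking no errors were reported')
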
assertGrep(file, filedir=None, expr='', contains=True, ignores=None, literal=False, encoding=None, **xargs)[source]

Perform a validation assert on a regular expression occurring in a text file.

When the contains input argument is set to true, this method will add a PASSED outcome to the test outcome list if the supplied regular expression is seen in the file; otherwise a FAILED outcome is added. Should contains be set to false, a PASSED outcome will only be added should the regular expression not be seen in the file.

Parameters:
  • file – The basename of the file used in the grep
  • filedir – The dirname of the file (defaults to the testcase output subdirectory)
  • expr – The regular expression to check for in the file (or a string literal if literal=True). If the match fails, the matching regex will be reported as the test outcome
  • contains – Boolean flag to denote if the expression should or should not be seen in the file
  • ignores – Optional list of regular expressions that will be ignored when reading the file.
  • literal – By default expr is treated as a regex, but set this to True to pass in a string literal instead
  • encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
  • xargs – Variable argument list (see class description for supported parameters)
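
For example (the file name and expressions shown are illustrative only):

    # check that the server logged the port it bound to
    self.assertGrep('myserver.out', expr=r'Listening on port \d+')
    # check that no error lines were logged
    self.assertGrep('myserver.out', expr=' ERROR ', contains=False)
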
assertLastGrep(file, filedir=None, expr='', contains=True, ignores=[], includes=[], encoding=None, **xargs)[source]

Perform a validation assert on a regular expression occurring in the last line of a text file.

When the contains input argument is set to true, this method will add a PASSED outcome to the test outcome list if the supplied regular expression is seen in the file; otherwise a FAILED outcome is added. Should contains be set to false, a PASSED outcome will only be added should the regular expression not be seen in the file.

Parameters:
  • file – The basename of the file used in the grep
  • filedir – The dirname of the file (defaults to the testcase output subdirectory)
  • expr – The regular expression to check for in the last line of the file
  • contains – Boolean flag to denote if the expression should or should not be seen in the file
  • ignores – A list of regular expressions used to denote lines in the file which should be ignored
  • includes – A list of regular expressions used to denote lines in the file which should be used in the assertion.
  • encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
  • xargs – Variable argument list (see class description for supported parameters)
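
For example (the file name and expression shown are illustrative only):

    # check that the final non-DEBUG line of the log records a clean shutdown
    self.assertLastGrep('myserver.out', expr='Shutdown complete', ignores=['DEBUG'])
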
assertLineCount(file, filedir=None, expr='', condition='>=1', ignores=None, encoding=None, **xargs)[source]

Perform a validation assert on the number of lines in a text file matching a specific regular expression.

This method will add a PASSED outcome to the outcome list if the number of lines in the input file matching the specified regular expression satisfies the supplied condition.

Parameters:
  • file – The basename of the file used in the line count
  • filedir – The dirname of the file (defaults to the testcase output subdirectory)
  • expr – The regular expression string used to match a line of the input file
  • condition – The condition to be met for the number of lines matching the regular expression
  • ignores – A list of regular expressions that will cause lines to be excluded from the count
  • encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
  • xargs – Variable argument list (see class description for supported parameters)
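
For example (the file name and expressions shown are illustrative only):

    # expect exactly 5 completed transactions and no error lines
    self.assertLineCount('myserver.out', expr='Transaction complete', condition='==5')
    self.assertLineCount('myserver.out', expr=' ERROR ', condition='==0')
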
assertOrderedGrep(file, filedir=None, exprList=[], contains=True, encoding=None, **xargs)[source]

Perform a validation assert on a list of regular expressions occurring in specified order in a text file.

When the contains input argument is set to true, this method will append a PASSED outcome to the test outcome list if the supplied regular expressions in the exprList are seen in the file in the order they appear in the list; otherwise a FAILED outcome is added. Should contains be set to false, a PASSED outcome will only be added should the regular expressions not be seen in the file in the order they appear in the list.

Parameters:
  • file – The basename of the file used in the ordered grep
  • filedir – The dirname of the file (defaults to the testcase output subdirectory)
  • exprList – A list of regular expressions which should occur in the file in the order they appear in the list
  • contains – Boolean flag to denote if the expressions should or should not be seen in the file in the order specified
  • encoding – The encoding to use to open the file. The default value is None which indicates that the decision will be delegated to the getDefaultFileEncoding() method.
  • xargs – Variable argument list (see class description for supported parameters)
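
For example (the file name and expressions shown are illustrative only):

    # check the lifecycle messages appear in the expected order
    self.assertOrderedGrep('myserver.out',
        exprList=['Starting', 'Listening on port', 'Ready'])
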
assertThat(conditionstring, *args)[source]

Perform a validation based on a python eval string.

The eval string should be specified as a format string, with zero or more %s-style arguments. This provides an easy way to check conditions that also produces clear outcome messages.

The safest way to pass arbitrary arguments of type string is to use the repr() function to add appropriate quotes and escaping.

e.g. self.assertThat('%d >= 5 or %s=="foobar"', myvalue, repr(mystringvalue))

Parameters:
  • conditionstring – A format string into which any following args are substituted before the result is evaluated as a boolean Python expression.
  • args – Zero or more arguments to be substituted into the format string
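
As a self-contained sketch of the usage shown above (the variables here are illustrative only):

    requestCount = 7          # illustrative value obtained by the test
    responseText = 'foobar'   # illustrative value obtained by the test
    # repr() adds the quoting needed for the string argument
    self.assertThat('%d >= 5 or %s == "foobar"', requestCount, repr(responseText))
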
assertTrue(expr, **xargs)[source]

Perform a validation assert on the supplied expression evaluating to true.

If the supplied expression evaluates to true a PASSED outcome is added to the outcome list. Should the expression evaluate to false, a FAILED outcome is added.

Parameters:
  • expr – The expression, as a boolean, to check for the True | False value
  • xargs – Variable argument list (see class description for supported parameters)
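
For example, using the assertMessage override described in the class description (the variable here is illustrative only):

    # 'linesWritten' is an illustrative value computed earlier in the test
    linesWritten = 100
    self.assertTrue(linesWritten == 100, assertMessage='Checking all 100 lines were written')
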
cleanup()[source]

Cleanup method which performs cleanup actions after execution and validation of the test.

The cleanup method performs actions to stop all processes started in the background and not explicitly killed during the test execution. It also stops all process monitors running in separate threads, and any instances of the manual tester user interface.

Should a custom cleanup for a subclass be required, use addCleanupFunction instead of overriding this method.
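
As an illustration of that approach, a subclass could register its own cleanup action from within execute; the resource being released here is hypothetical:

    def execute(self):
        # openDatabaseSession is a hypothetical helper that acquires an external resource
        session = openDatabaseSession()
        # registered functions are invoked during cleanup, after validation has completed
        self.addCleanupFunction(lambda: session.close())
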

execute()[source]

Execute method which must be overridden to perform the test execution steps.

Raises: NotImplementedError – Raised should the method not be overridden
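
As an illustration only, a minimal testcase module typically defines a class overriding execute and validate; the process name, arguments and output files shown here are hypothetical:

    import os
    from pysys.constants import *
    from pysys.basetest import BaseTest

    class PySysTest(BaseTest):
        def execute(self):
            # start the process under test (the server script and arguments are hypothetical)
            self.startProcess(command=os.path.join(self.input, 'myserver.sh'),
                arguments=['--port', '8080'],
                stdout=os.path.join(self.output, 'myserver.out'),
                stderr=os.path.join(self.output, 'myserver.err'),
                displayName='myserver')

        def validate(self):
            # check the captured output for the expected startup message
            self.assertGrep('myserver.out', expr='Started successfully')
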
reportPerformanceResult(value, resultKey, unit, toleranceStdDevs=None, resultDetails=None)[source]

Reports a new performance result, with an associated unique key that identifies it for comparison purposes.

Where possible it is better to report the rate at which an operation can be performed (e.g. throughput) rather than the total time taken, since this allows the number of iterations to be increased.

Parameters:
  • value – The value to be reported. Usually this is a float or integer, but string is also permitted.
  • resultKey – A unique string that fully identifies what was measured, which will be used to compare results from different test runs. For example "HTTP transport message sending throughput with 3 connections". The resultKey must be unique across all test cases and modes. It should be fully self-describing (without the need to look up extra information such as the associated testId). Do not include the test id or units in the resultKey string. It must be stable across different runs, so cannot contain process identifiers, date/times or other numbers that will vary. If possible resultKeys should be written so that related results will be together when all performance results are sorted by resultKey, which usually means putting general information near the start of the string and specifics (throughput/latency, sending/receiving) towards the end of the string. It should be as concise as possible (given the above).
  • unit – Identifies the unit the value is measured in, including whether bigger numbers are better or worse (used to determine improvement or regression). Must be an instance of pysys.utils.perfreporter.PerformanceUnit. In most cases, use pysys.utils.perfreporter.PerformanceUnit.SECONDS (e.g. for latency) or pysys.utils.perfreporter.PerformanceUnit.PER_SECOND (e.g. for throughput); the string literals 's' and '/s' can be used as a shorthand for those PerformanceUnit instances.
  • toleranceStdDevs – (optional) A float that indicates how many standard deviations away from the mean a result needs to be to be considered a regression.
  • resultDetails – (optional) A dictionary of detailed information about this specific result and/or test that should be recorded together with the result, for example information about what mode the test is running in.
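
For example, assuming the test has measured a message rate, the '/s' shorthand can be used for the unit (the value, key and details here are illustrative only):

    measuredRate = 1234.5  # illustrative value measured by the test
    self.reportPerformanceResult(measuredRate,
        'HTTP transport message sending throughput with 3 connections', '/s',
        resultDetails={'connections': 3})
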
setKeywordArgs(xargs)[source]

Set the xargs as data attributes of the test class.

Values in the xargs dictionary are set as data attributes using the builtin setattr method. Thus an xargs dictionary of the form {'foo': 'bar'} will result in a data attribute of the form self.foo with value bar. This is used so that subclasses can define default values of data attributes, which can be overridden on instantiation e.g. using the -X options to the runTest.py launch executable.

Parameters:xargs – A dictionary of the user defined extra arguments
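
As a sketch of that pattern, a subclass can declare a class-level default which a user may then override from the command line (e.g. with -Xiterations=500); note that values supplied via -X may arrive as strings, so convert them as needed:

    class PySysTest(BaseTest):
        # default value; can be overridden at runtime, e.g. -Xiterations=500
        iterations = '10'

        def execute(self):
            iterations = int(self.iterations)  # -X values may be supplied as strings
            self.log.info('Running %d iterations', iterations)
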
setup()[source]

Setup method which may optionally be overridden to perform custom setup operations prior to test execution.

startManualTester(file, filedir=None, state=11, timeout=1800)[source]

Start the manual tester.

The manual tester user interface (UI) is used to describe a series of manual steps to be performed to execute and validate a test. Only a single instance of the UI can be running at any given time; it can be run either in the FOREGROUND (the method will not return until the UI is closed or the timeout occurs) or in the BACKGROUND (the method will return straight away so automated actions may be performed concurrently). Should the UI be terminated due to expiry of the timeout, a TIMEDOUT outcome will be added to the outcome list. The UI can be stopped via the stopManualTester method. An instance of the UI not explicitly stopped within a test will automatically be stopped via the cleanup method of the BaseTest.

Parameters:
  • file – The name of the manual test xml input file (see pysys.xml.manual for details on the DTD)
  • filedir – The directory containing the manual test xml input file (defaults to the output subdirectory)
  • state – Start the manual tester either in the FOREGROUND or BACKGROUND (defaults to FOREGROUND)
  • timeout – The timeout period after which to terminate a manual tester running in the FOREGROUND
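
For example, to run a manual test definition in the background while automated actions proceed, then wait for the user to finish (the xml file name is illustrative, and the manual test file is assumed to live in the testcase input directory):

    from pysys.constants import BACKGROUND
    self.startManualTester('manual_steps.xml', filedir=self.input, state=BACKGROUND)
    # ... perform automated actions concurrently ...
    self.waitManualTester()
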
startProcessMonitor(process, interval, file, **kwargs)[source]

Start a separate thread to log process statistics to logfile, and return a handle to the process monitor.

This method uses the pysys.process.monitor module to perform logging of the process statistics, starting the monitor as a separate background thread. Should the request to log the statistics fail, a BLOCKED outcome will be added to the test outcome list. All process monitors not explicitly stopped using the returned handle are automatically stopped on completion of the test via the cleanup method of the BaseTest.

Parameters:
  • process – The process handle returned from the startProcess method
  • interval – The interval in seconds between collecting and logging the process statistics
  • file – The path to the filename used for logging the process statistics
  • kwargs – Keyword arguments to allow platform specific configurations
Returns: A handle to the process monitor (pysys.process.monitor.ProcessMonitor)
Return type: handle
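
For example, after starting a process the test could monitor it at five second intervals and stop monitoring explicitly before validation (the monitoring file name is illustrative, and 'process' is assumed to be a handle returned from an earlier startProcess call):

    import os
    monitor = self.startProcessMonitor(process, interval=5,
        file=os.path.join(self.output, 'monitor-myserver.tsv'))
    # ... run the workload while statistics are collected ...
    self.stopProcessMonitor(monitor)
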

stopManualTester()[source]

Stop the manual tester if running.

stopProcessMonitor(monitor)[source]

Stop a process monitor.

Parameters:monitor – The process monitor handle returned from the startProcessMonitor method
validate()[source]

Validate method which may optionally be overridden to group all validation steps.

wait(interval)[source]

Wait for a specified period of time.

Parameters:interval – The time interval in seconds to wait
waitManualTester(timeout=1800)[source]

Wait for the manual tester to be stopped via user interaction.