COMPOPT

COMPOPT [option=value ...]

This system command is used to set various compilation options. The options are evaluated when a Natural object is compiled.

If you enter the COMPOPT command without any options, a screen is displayed where you can enable or disable the options described below.

The default settings of the individual options are set with the corresponding keyword subparameters of the parameter macro NTCMPO in the Natural parameter module or in the profile parameter CMPO. When you change the library, the COMPOPT options are reset to their default values.

This document covers the following topics:

  • Syntax Explanation
  • Specifying Compiler Keyword Parameters
  • General Compilation Options
  • Compilation Options for Version and Platform Compatibility

Syntax Explanation

COMPOPT
If you issue the COMPOPT system command without options, the Compilation Options screen appears. The keywords available there are described below.

COMPOPT option=value
The keywords for the individual options are described below.

The setting assigned to a compiler option is in effect until you issue the next LOGON command to another library. At LOGON, the default settings set with the macro NTCMPO and/or profile parameter CMPO will be resumed.

Specifying Compiler Keyword Parameters

You can specify compiler keyword parameters on different levels:

  1. The default settings of the individual keyword parameters are specified in the macro NTCMPO in the Natural parameter module.

  2. At session start, you can override the compiler keyword parameters with the profile parameter CMPO.

  3. During an active Natural session, there are two ways to change the compiler keyword parameters with the COMPOPT system command: either directly using command assignment (COMPOPT option=value) or by issuing the COMPOPT command without keyword parameters, which displays the Compilation Options screen. The settings assigned to a compiler option are in effect until you issue the next LOGON command to another library. At LOGON, the default settings set with the macro NTCMPO and/or the profile parameter CMPO (see above) will be resumed. Example:

    COMPOPT KCHECK=ON

    Keyword checking is then in effect for the objects that are subsequently compiled in this library, for example:

    DEFINE DATA LOCAL
    1 #A (A25) INIT <'Hello World'>
    END-DEFINE
    WRITE #A
    END
  4. In a Natural object (for example: program, subprogram), you can set compiler parameters (options) with the OPTIONS statement. Example:

    OPTIONS KCHECK=ON
    WRITE 'Hello World'
    END

    The compiler options defined in an OPTIONS statement affect only the compilation of this object; they do not update the settings made with the COMPOPT command.

General Compilation Options

The following options are available:

These options correspond to the keyword subparameters of the CMPO profile parameter and/or the NTCMPO parameter macro.

CHKRULE - Validate INCDIR Statements in Maps

The CHKRULE option can be used to enable or disable a validation check during the catalog process for maps.

ON

INCDIR validation is enabled. If the file (DDM) or field referenced in the INCDIR control statement does not exist, syntax error NAT0721 is raised at compile time.

When a Natural map is created, you may include fields that are already defined in another existing object. This works with nearly all types of objects that allow you to define variables, and also with DDMs. When the included field is a database field, the map editor automatically adds an INCDIR control statement to the map's statement body (in addition to the included field); this statement triggers the upload and incorporation of a Predict rule when the map is compiled (STOW).

The mechanism is similar to the processing of an INCLUDE statement; however, instead of getting the source lines from a copycode object, they are retrieved from Predict. The search key used to find the rule(s) consists of the DDM name (which is regarded as the file name) and the field name, both of which are specified in the INCDIR statement. A rule requested at compile time via INCDIR does not have to exist in Predict, as there is no requirement for its existence; it is therefore by no means an error if a requested rule is not found.

When fields are incorporated from a DDM into a map, the corresponding INCDIR statements are created with the current DDM and field names as the "search key" for requesting existing rules from Predict. However, if the DDM is renamed after the copy process, the old DDM name (which is no longer valid) continues to be used in the INCDIR statement. As a result, no rule is loaded, and the programmer is not informed about this. A DDM rename is not the only cause of this situation; more likely, a wrong FDIC file is assigned by mistake. In this case, the DDM name is valid, but it cannot be found in the current Predict system file. The result is then the same as when the DDM does not exist at all: the processing rules that are supposed to be added from Predict are not included.

OFF INCDIR validation is disabled. This is the default value.

CPAGE - Code Page Support for Alphanumeric Constants

The CPAGE option can be used to activate a conversion routine which translates all alphanumeric constants (from the code page that was active at compilation time into the code page that is active at runtime) when the object is started at runtime.

See also CPAGE Compiler Option in the Unicode and Code Page Support documentation.

ON Code page support for alpha strings is enabled.
OFF Code page support for alpha strings is disabled. This is the default value.

DBSHORT - Interpretation of Database Short Field Names

A database field defined in a DDM is described by two names:

  • the short name with a length of 2 characters, used by Natural to communicate with the database (especially with Adabas);

  • the long name with a length of 3-32 characters (1-32 characters, if the underlying database type accessed is DB2/SQL), which is supposed to be used to reference the field in the Natural programming code.

Under special conditions, you may reference a database field in a Natural program with its short name instead of the long name. This applies if running in Reporting Mode without Natural Security and if the database access statement contains a reference to a DDM instead of a view.

Whether a field name is regarded as a short-name reference depends on the length of the name: if the field identifier consists of two characters, a short-name reference is assumed; a field name of any other length is considered a long-name reference. This standard interpretation rule for database fields can additionally be influenced and controlled by setting the compiler option DBSHORT to ON or OFF:

ON

The usage of a short name is allowed for referencing a database field.

However, a database short name is generally not permitted (even if DBSHORT=ON)

  • for the definition of a field when a view is created;

  • when a DEFINE DATA LOCAL statement was specified;

  • when running under Natural Security.

This is the default value.

OFF

A database field may only be referenced via its long name. Every database field identifier is considered a long-name reference, regardless of its length.

If a two-character name is supplied that can only be found as a short name, but not as a long name, syntax error NAT0981 is raised at compile time.

This makes it possible to use long names that are only two characters long, as defined in a DDM. This option is essential if the underlying database you access with this DDM is SQL (DB2) and table columns with two-character names exist. For all other database types (for example, Adabas), any attempt to define a long name of only two characters is rejected at DDM generation.

Moreover, if no short-name references are used (which can be enforced with DBSHORT=OFF), the program becomes independent of whether or not it is compiled under Natural Security.

Examples:

Assume the following database field definition in the DDM EMPLOYEES:

Short Name Long Name
AA PERSONNEL-ID

Example 1:

OPTIONS DBSHORT=ON
READ EMPLOYEES 
  DISPLAY AA      /* database short name AA is allowed
END

Example 2:

OPTIONS DBSHORT=OFF
READ EMPLOYEES 
  DISPLAY AA      /* syntax error NAT0981, because DBSHORT=OFF
END

Example 3:

OPTIONS DBSHORT=ON
DEFINE DATA LOCAL
1 V1 VIEW OF EMPLOYEES
  2  PERSONNEL-ID
END-DEFINE
READ V1 BY PERSONNEL-ID 
  DISPLAY AA     /* syntax error NAT0981, because PERSONNEL-ID is defined in view;
                 /* (even if DBSHORT=ON)
END-READ
END

DB2ARRY - Support DB2 Arrays in SQL SELECT and INSERT Statements

The DB2ARRY option can be used to activate the retrieval and/or insertion of multiple rows from/into DB2 with a single execution of an SQL SELECT or INSERT statement. It allows arrays to be specified as receiving fields in the SQL SELECT statement and as source fields in the SQL INSERT statement. If DB2ARRY is ON, it is no longer possible to use Natural alphanumeric arrays for DB2 VARCHAR/GRAPHIC columns; long alphanumeric Natural variables have to be used instead.

ON DB2 array support is enabled.
OFF DB2 array support is not enabled. This is the default value.

DB2BIN – Generate SQL Binary Data Types for Natural Binary Fields

The DB2BIN option can be used to support the DB2 data types BINARY and VARBINARY.

If DB2BIN is set to OFF, Natural binary fields (format B(n)) are generated as SQL data type CHAR (n<=253) or VARCHAR (253<n<=32767), as in previous releases. DB2BIN=OFF is intended for applications that have used Natural binary fields like SQL CHAR fields. B2 and B4 fields are treated as SQL SMALLINT and INTEGER.

If DB2BIN is set to ON, Natural binary fields (format B(n)) are generated as SQL data type BINARY (n<=255) or VARBINARY (255<n<=32767). DB2BIN=ON is intended for applications that want to use SQL binary columns. B2 and B4 fields are then treated as SQL BINARY(2) and BINARY(4).

Note:
The setting of DB2BIN at the end of the compilation is used for the complete Natural object. It cannot be changed for parts of a Natural object.

ON SQL data types BINARY and VARBINARY are generated for Natural binary fields.
OFF SQL data types CHAR and VARCHAR are generated for Natural binary fields, except for B2 and B4; the latter are treated as SQL data types SMALLINT and INTEGER.

This is the default value.
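
To illustrate the thresholds described above, the following sketch annotates a few Natural binary fields with the SQL data type generated for each under either setting (the field names are chosen for illustration only):

DEFINE DATA LOCAL
1 #B2   (B2)      /* DB2BIN=OFF: SMALLINT        DB2BIN=ON: BINARY(2)
1 #B4   (B4)      /* DB2BIN=OFF: INTEGER         DB2BIN=ON: BINARY(4)
1 #B20  (B20)     /* DB2BIN=OFF: CHAR(20)        DB2BIN=ON: BINARY(20)
1 #B500 (B500)    /* DB2BIN=OFF: VARCHAR(500)    DB2BIN=ON: VARBINARY(500)
END-DEFINE
END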

DB2PKYU – Place Primary Key Fields into the Natural DML UPDATE Statement

Only applies if supported by the Natural for DB2 version installed at your site.

The DB2PKYU option can be used to update DB2 primary key fields with a Natural DML UPDATE statement. DB2 primary key fields are fields whose short names begin with the character O in the DDM.

Note:
The setting of DB2PKYU at the end of the compilation is used for the complete Natural object. It cannot be changed for parts of a Natural object.

ON

DB2 primary key fields are updated.

DB2 primary key fields which are updated within the Natural program are placed into the resulting DB2 positioned UPDATE statement of a Natural DML UPDATE statement. The SQLCODE +535 that DB2 returns for this positioned UPDATE is treated as 0 (zero) by the Natural for DB2 runtime system.

OFF

DB2 primary key fields are not updated.

DB2 primary key fields which are updated within the Natural program are not placed into the resulting DB2 positioned UPDATE statement.

This is the default value.

DB2TSTI – Generate SQL TIMESTAMP Data Type for Natural TIME Fields

This option is used to map Natural TIME variables to the SQL TIMESTAMP data type instead of the default SQL TIME data type.

ON SQL type TIMESTAMP is generated for Natural TIME fields of Natural data format T.

This applies to the entire Natural object. You cannot generate only part of an object with the DB2TSTI setting.

OFF SQL type TIME is generated for Natural TIME fields of Natural data format T.

This is the default value.

Note:
A Natural TIME field only has a precision of tenths of a second, while an SQL TIMESTAMP column can have a much greater precision. Thus, the TIMESTAMP value read from the SQL database may be truncated if DB2TSTI=ON is set.

ECHECK - Existence Check for Object Calling Statements

ON The compiler checks for the existence of an object that is specified in an object calling statement, such as FETCH [RETURN/REPEAT], RUN [REPEAT], CALLNAT, PERFORM, INPUT USING MAP, PROCESS PAGE USING, function call and helproutine call.

The existence check is based on a search for the cataloged object or for the source of the object when it is invoked by a RUN [REPEAT] statement.

It requires that the name of the object to be called/run is defined as an alphanumeric constant (not as an alphanumeric variable).

Otherwise, ECHECK=ON will have no effect.
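
For illustration, the following sketch assumes that ECHECK=ON has been set (for example, with COMPOPT ECHECK=ON) and that HELLON is a hypothetical subprogram name. Because the object name is specified as an alphanumeric constant, the compiler verifies at compile time that a cataloged object HELLON exists; if it does not, an error is raised:

DEFINE DATA LOCAL
1 #PARM (A10)
END-DEFINE
CALLNAT 'HELLON' #PARM  /* existence of the cataloged object HELLON is checked at compile time
END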

Error Control for ECHECK=ON

The existence check is executed only when the object does not contain any syntax errors. The existence check is executed for every object calling statement.

The existence check is controlled by the PECK profile parameter (see the Parameter Reference documentation).

Problems in Using the CATALL Command with ECHECK=ON

When a CATALL system command is used in conjunction with ECHECK=ON, you should consider the following:

If a CATALL process is invoked, the order in which the objects are compiled depends primarily on the type of the object and secondarily on the alphabetical name of the object. The object type sequence used is:

GDAs, LDAs, PDAs, functions, subprograms, external subroutines, help routines, maps, adapters, programs, classes.

Within objects of the same type, the alphabetical order of the name determines the sequence in which they are cataloged.

As mentioned above, the existence check is performed against the compiled form of the called object (or against its source, in the case of RUN). If the calling object (the one which is being compiled and contains the object calling statement) is cataloged before the called object, the ECHECK result may be wrong, because at that point the object image of the called object has not yet been produced by the CATALL command.

Solution:

  1. Set compiler option ECHECK to OFF.

  2. Perform a general compile with CATALL on the complete library, or if just one or a few objects were changed, perform a separate compile on these objects.

  3. Set compiler option ECHECK=ON.

  4. On the complete library, perform a general compile with CATALL, selecting function CHECK.

OFF No existence check is performed. This is the default setting.

GDASC - GDA Signature Check

This option is used to store information on the structure of a GDA (global data area) to determine whether a Natural error is to be issued when an unchanged GDA is cataloged.

The GDA information (GDA signature) only changes when a GDA is modified. The GDA signature does not change when a GDA is (accidentally) cataloged but was not modified.

The signature of the GDA and the GDA signatures stored in all Natural objects referencing this GDA are compared at execution time, in addition to the time stamps of the objects.

ON GDA signatures are stored and compared during execution. Natural only issues an error message if the signatures are not identical.
OFF GDA signatures are not stored. This is the default value.

GFID - Generation of Global Format IDs

This option allows you to control Natural's internal generation of global format IDs so as to influence Adabas's performance concerning the re-usability of format buffer translations.

ON Global format IDs are generated for all views. This is the default value.
VID Global format IDs are generated only for views in local/global data areas, but not for views defined within programs.
OFF No global format IDs are generated.

For details on global format IDs, see the Adabas documentation.

Rules for Generating GLOBAL FORMAT-IDs in Natural

  • For Natural nucleus internal system-file calls:

    GFID=abccddee

    where:

    a    x'F9'
    b    x'22' or x'21', depending on the database statement
    cc   physical database number (2 bytes)
    dd   physical file number (2 bytes)
    ee   number created by the runtime (2 bytes)

  • For user programs or Natural utilities:

    GFID=abbbbbbb

    where:

    a        x'F8', x'F7' or x'F6', where:
             F6 = UPDATE SAME
             F7 = HISTOGRAM
             F8 = all others
    bbbbbbb  bytes 1-7 of the STOD value

    Note:
    STOD is the return value of the store clock machine instruction (STCK).

KCHECK - Keyword Checking

ON Field declarations in an object are checked against a set of critical Natural keywords. If a defined variable name matches one of these keywords, a syntax error is reported when the object is checked or cataloged.
OFF No keyword check is performed. This is the default value.

The section Performing a Keyword Check (in the Programming Guide) contains a list of the keywords that are checked by the KCHECK option.

The section Alphabetical List of Natural Reserved Keywords (in the Programming Guide) contains an overview of all Natural keywords and reserved words.
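
As a sketch of the effect, the following program declares a variable whose name matches a Natural statement keyword (assuming the chosen name, ADD, appears in the list of checked keywords referenced above). With KCHECK=ON, the declaration is rejected when the object is checked or cataloged; with KCHECK=OFF, it is accepted:

DEFINE DATA LOCAL
1 ADD (A10)   /* variable name matches a critical Natural keyword
END-DEFINE
END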

LOWSRCE - Allow Lower-Case Source

This option supports the use of lower-case or mixed-case program sources on mainframe platforms. It facilitates the transfer of programs written in mixed-case or lower-case characters from other platforms to a mainframe environment.

ON Allows any kind of lower/upper-case characters in the program source.
OFF Allows upper-case mode only. This requires keywords, variable names and identifiers to be defined in upper case. This is the default value.

When you use lower-case characters with LOWSRCE=ON, consider the following:

  • The syntax rules for variable names allow lower-case characters in subsequent positions. Therefore, you can define two variables, one written with lower-case characters and the other with upper-case characters.

    Example:

    DEFINE DATA LOCAL
    1 #Vari  (A20)
    1 #VARI  (A20)
    

    With LOWSRCE=OFF, these variables are treated as different variables.

    With LOWSRCE=ON, the compiler is not case sensitive and does not make a distinction between lower/upper-case characters. This will lead to a syntax error because a duplicate definition of a variable is not allowed.

  • When you use the session parameter EM (edit mask) in an I/O statement or in a MOVE EDITED statement, some characters control the layout of the data assigned to a variable (EM control characters), while other characters insert text fragments into the data.

    Example:

    #VARI :='1234567890'
      WRITE #VARI (EM=XXXXXxxXXXXX)
    

    With LOWSRCE=OFF, the output is "12345xx67890", because for alpha-format variables only the upper-case X, the H and the circumflex accent (^) can be used as EM control characters.

    With LOWSRCE=ON, the output is "1234567890", because a lower-case x is treated like an upper-case X and is therefore interpreted as an EM control character for that field format. To avoid this problem, enclose constant text fragments in apostrophes (').

    Example:

    WRITE #VARI(EM=XXXXX'xx'XXXXX)

    The text fragment is then not considered an EM control character, regardless of the LOWSRCE setting.

  • Since all variable names are converted to upper-case characters with LOWSRCE=ON, the display of variable names in I/O statements (INPUT, WRITE or DISPLAY) differs.

    Example:

    MOVE 'ABC' TO #Vari
    DISPLAY #Vari
    

    With LOWSRCE=OFF, the output is:

           #Vari
      --------------------
      
      ABC
    

    With LOWSRCE=ON, the output is:

           #VARI
      --------------------
      
      ABC
    

MAXPREC – Maximum Number of Digits after Decimal Point

This option determines the maximum number of digits after the decimal point that the Natural compiler generates for results of arithmetic operations.

7,…,29 The value denotes the maximum number of digits after the decimal point that the Natural compiler generates for results of arithmetic operations.

The default value 7 provides upwards compatibility for existing applications. If such applications are cataloged with MAXPREC=7, they will deliver the same results as before. Objects cataloged with a Natural version that did not support the MAXPREC option are executed as if MAXPREC=7 had been set.

If higher precision is desired for intermediate results, the value should be increased.

The setting of MAXPREC does not limit the number of digits after the decimal point that can be specified for user-defined fields and constants. However, the precision of such fields and constants influences the precision of results of arithmetic operations. This makes it possible to benefit from enhanced precision in selected computations without the need to set the compiler option MAXPREC to a value that unintentionally affects other computations. So even if MAXPREC=7 is in effect, the following example program can be cataloged and executed:

DEFINE DATA LOCAL
1 P (P1.15)
END-DEFINE
P := P + 0.1234567890123456
END

See also Precision of Results of Arithmetic Operations in the Programming Guide.

Warning:
Changing the value of the MAXPREC option that is being used to catalog a Natural object may lead to different results, even if the object source has not been changed. See example below.

Example:

DEFINE DATA LOCAL
1 #R (P1.7)
END-DEFINE
#R := 1.0008 * 1.0008 * 1.0008
IF #R = 1.0024018 THEN ... ELSE ... END-IF

The value of #R after the computation and the execution of the IF statement depend on the setting of MAXPREC:

Setting of MAXPREC Effective at Compile Time   Value of #R   Executed Clause of IF Statement
MAXPREC=7                                      1.0024018     THEN clause
MAXPREC=12                                     1.0024019     ELSE clause

MEMOPT - Memory Optimization for Locally Declared Variables

This option determines whether or not memory is allocated for unused level-1 fields or groups defined locally (DEFINE DATA LOCAL).

ON Storage is allocated only for
  • a level-1 field, if the field or a redefinition thereof is accessed;

  • a group, if the group or at least one field of the group is accessed.

OFF Data storage is allocated for all groups and fields declared locally. This is the default setting.
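
A minimal sketch of the effect (the field names are illustrative): with MEMOPT=ON, storage is allocated for #USED and for the group #GRP (one of its fields is accessed), but not for the unreferenced field #UNUSED; with MEMOPT=OFF, all three level-1 entries are allocated:

DEFINE DATA LOCAL
1 #USED   (A10)
1 #UNUSED (A1000)   /* never referenced below
1 #GRP
  2 #G-NUM (N5)
END-DEFINE
#USED  := 'ABC'
#G-NUM := 42
WRITE #USED #G-NUM
END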

PCHECK - Parameter Check for Object Calling Statements

ON The compiler checks the number, format, length and array index bounds of the parameters that are specified in an object calling statement, such as CALLNAT, PERFORM, INPUT USING MAP, PROCESS PAGE USING, function call and helproutine call. Also, the OPTIONAL feature of the DEFINE DATA PARAMETER statement is considered in the parameter check.

The parameter check is based on a comparison of the parameters of the object calling statement with the DEFINE DATA PARAMETER definitions for the object to be invoked.

It requires that

  • the name of the object to be called is defined as an alphanumeric constant (not as an alphanumeric variable),

  • the object to be called is available as a cataloged object.

Otherwise, PCHECK=ON will have no effect.
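
For illustration, consider a hypothetical subprogram CALCN that is available as a cataloged object with the parameter definition shown below. With PCHECK=ON, the compiler compares the parameters of the CALLNAT statement with this DEFINE DATA PARAMETER definition; a mismatch in number, format, length or array index bounds is reported at compile time:

* Subprogram CALCN (cataloged)
DEFINE DATA PARAMETER
1 #P-NUM (N4)
END-DEFINE
#P-NUM := #P-NUM + 1
END

* Calling program, compiled with PCHECK=ON
DEFINE DATA LOCAL
1 #NUM (N4)
END-DEFINE
CALLNAT 'CALCN' #NUM   /* number, format and length are checked against the parameters of CALCN
END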

Error Control for PCHECK=ON

The parameter check is executed only when the object does not contain any syntax errors. The parameter check is executed for every object calling statement.

The parameter check is controlled by the PECK profile parameter (see the Parameter Reference documentation).

Problems in Using the CATALL Command with PCHECK=ON

When a CATALL command is used in conjunction with PCHECK=ON, you should consider the following:

If a CATALL process is invoked, the order in which the objects are compiled depends primarily on the type of the object and secondarily on the alphabetical name of the object. The object type sequence used is:

GDAs, LDAs, PDAs, functions, subprograms, external subroutines, help routines, maps, adapters, programs, classes.

Within objects of the same type, the alphabetical order of the name determines the sequence in which they are cataloged.

As mentioned above, the parameters of the object calling statement are checked against the compiled form of the called object. If the calling object (the one which is being compiled and includes the object calling statement) is cataloged before the invoked object, the PCHECK result may be wrong if the parameters in the invoking statement and in the called object were changed. In this case, the new object image of the called object has not yet been produced by the CATALL command. This causes the new parameter layout in the object calling statement to be compared with the old parameter layout of the DEFINE DATA PARAMETER statement of the called subprogram.

Solution:

  1. Set compiler option PCHECK to OFF.

  2. Perform a general compile with CATALL on the complete library, or if just one or a few objects were changed, perform a separate compile on these objects.

  3. Set compiler option PCHECK=ON.

  4. On the complete library, perform a general compile with CATALL, selecting function CHECK.

OFF No parameter check is performed. This is the default setting.

PSIGNF - Internal Representation of Positive Sign of Packed Numbers

ON The positive sign of a packed number is represented internally as H'F'. This is the default value.
OFF The positive sign of a packed number is represented internally as H'C'.
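
A small sketch to make the difference visible (the redefinition and the hexadecimal edit mask serve only to display the internal bytes): the packed value +123 in a two-byte P3 field is stored as X'123F' when the object is cataloged with PSIGNF=ON, and as X'123C' with PSIGNF=OFF:

DEFINE DATA LOCAL
1 #P (P3) INIT <123>
1 REDEFINE #P
  2 #PA (A2)
END-DEFINE
WRITE #PA (EM=H(2))   /* prints 123F (PSIGNF=ON) or 123C (PSIGNF=OFF)
END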

THSEP - Dynamic Thousands Separator

This option can be used to enable or disable the use of thousands separators at compilation time. See also the profile and session parameter THSEPCH and the section Customizing Separator Character Displays (in the Programming Guide).

ON Thousands separators are used. Every thousands separator character that is not part of a string literal is replaced internally with a control character.
OFF Thousands separators are not used, i.e. no thousands separator control character is generated by the compiler. This is the compatibility setting.
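
As a sketch of the intended effect (assuming the comma is the thousands separator character in the program source): with THSEP=ON, the commas in the edit mask below are replaced internally with the thousands separator control character, so that at runtime the character defined by the THSEPCH parameter is inserted; with THSEP=OFF, the commas remain literal insertion characters:

WRITE 1234567 (EM=Z,ZZZ,ZZ9)   /* e.g. 1,234,567 if THSEPCH is the comma
END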

TQMARK - Translate Quotation Mark

ON Each double quotation mark within a text constant is output as a single apostrophe. This is the default value.
OFF Double quotation marks within a text constant are not translated; they are output as double quotation marks.

Example:

RESET A(A5)                    
A:= 'AB"CD'                    
WRITE '12"34' / A / A (EM=H(5))
END

With TQMARK ON, the output is:

12'34     
AB'CD     
C1C27DC3C4

With TQMARK OFF, the output is:

12"34     
AB"CD     
C1C27FC3C4

TSENABL - Applicability of TS Profile Parameter

This option determines whether the profile parameter TS (translate output for locations with non-standard lower-case usage) is to apply only to Natural system libraries (that is, libraries whose names begin with "SYS", except SYSTEM) or to all user libraries as well.

For Natural objects cataloged with TSENABL=ON, the TS parameter takes effect even if the objects are located in a non-system library.

ON The profile parameter TS applies to all libraries.
OFF The profile parameter TS only applies to Natural system libraries. This is the default value.

Compilation Options for Version and Platform Compatibility

The following options are available:

These options correspond to the keyword subparameters of the CMPO profile parameter and/or the NTCMPO parameter macro.

LUWCOMP - Disallow Syntax Not Available on UNIX or Windows

The LUWCOMP option checks whether the syntax of the features provided since Natural for Mainframes Version 8.2 is also supported by Natural for UNIX Version 8.3 and Natural for Windows Version 8.3. If any syntax incompatibilities between the mainframe and UNIX or Windows are detected, compilation under Natural for Mainframes Version 8.2 fails with an appropriate Natural error message and reason code.

The following values are possible:

ON When a program is compiled, every attempt to use a syntax construction that is supported by Natural for Mainframes but not by Natural for UNIX or Natural for Windows is rejected with a NAT0598 syntax error and an appropriate reason code (see the following section).
OFF No compatibility check is performed. Any inconsistencies between the mainframe and UNIX or Windows are ignored. This is the default value.

Reason Codes for Syntax Errors

The following reason codes indicate which syntax parts are not supported by UNIX or Windows:

Reason Code 001
A variable of the format P/N or a numeric constant with more than 7 precision digits is defined.

Example:

DEFINE DATA LOCAL
  1 #P(P5.8)

Reason Code 004
Either of the following compiler options is used:
  • MEMOPT

  • MAXPREC

Example:

OPTIONS MAXPREC=10

Reason Code 007
In a MOVE ALL statement, a SUBSTRING option is used for the source or target field.

Example:

MOVE ALL 'X' TO SUBSTRING(#A, 3, 5)

Reason Code 011
The ADJUST option is used in a READ WORK FILE statement to automatically resize an X-array field when it is accessed.

Example:

READ WORK FILE 1 #XARR(*) AND ADJUST

Reason Code 012
The field referenced in the REINPUT ... MARK clause is supplied with a (CV=...) option.

Example:

REINPUT 'text' MARK *#FLD (CV=#C)

Reason Code 013
System variables are referenced in the field list of a WRITE WORK FILE statement.

Reason Code 014
Within a READ or FIND statement,
  • an IN SHARED HOLD clause or

  • a SKIP RECORDS IN HOLD clause

is used.

Reason Code 015
Either of the following statements is used:

Reason Code 016
The source field in a SEPARATE statement was defined as an array.

Example:

SEPARATE #TEXT(*) INTO ...

Reason Code 017
The POSITION clause is used in a SEPARATE statement.

Reason Code 019
One of the following new system variables was used:

MASKCME - MASK Compatible with MOVE EDITED

ON The range of valid year values that match the YYYY mask characters is 1582 - 2699 to make the MASK option compatible with MOVE EDITED. If the profile parameter MAXYEAR is set to 9999, the range of valid year values is 1582 - 9999.
OFF The range of valid year values that match the YYYY mask characters is 0000 - 2699. This is the default value. If the profile parameter MAXYEAR is set to 9999, the range of valid year values is 0000 - 9999.
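
A minimal sketch of the difference: the year 1500 lies outside the range 1582 - 2699 but inside 0000 - 2699, so the MASK condition below is false with MASKCME=ON and true with MASKCME=OFF (assuming MAXYEAR is not set to 9999):

DEFINE DATA LOCAL
1 #DATE (A8) INIT <'15000101'>
END-DEFINE
IF #DATE = MASK (YYYYMMDD)
  WRITE 'year accepted'
ELSE
  WRITE 'year rejected'
END-IF
END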

NMOVE22 - Assignment of Numeric Variables of Same Length and Precision

ON Assignments of numeric variables where source and target have the same length and precision are performed as with Natural Version 2.2.
OFF Assignments of numeric variables where source and target have the same length and precision are performed as with Natural Version 2.3 and above, that is, they are processed as if source and target had different length or precision. This is the default value.