COMPOPT [option=value ...]
This system command is used to set various compilation options. The options are evaluated when a Natural object is compiled.
If you enter the COMPOPT
command without any
options, a screen is displayed where you can enable or disable the options
described below.
The default settings of the individual options are set with the corresponding profile parameters in the Natural parameter file.
This document covers the following topics:
Specifying Compiler Keyword Parameters (Remote Mainframe Environment)
Compilation Options for Version and Platform Compatibility (Remote Mainframe Environment)
COMPOPT | If you issue the COMPOPT system command without options, a dialog box appears. The keywords available there are described below. See also Compiler Options in Using Natural Studio. |
COMPOPT option=value | Instead of changing an option in the dialog box, you can also specify it directly with the COMPOPT command. Example: COMPOPT DBSHORT=ON |
The following compiler options are available. For details on the purpose of these options and the possible settings, see the description of the corresponding Natural profile parameter:
DBSHORT
GFID
KCHECK
MASKCME
PCHECK
PSIGNF
THSEP
TQMARK
You can specify compiler parameters on different levels:
The default settings of the individual compiler parameters are specified using the Compiler Options category of the Configuration Utility and are stored in the Natural parameter file NATPARM.
At session start, you can override the compiler option settings by specifying the corresponding profile parameters.
During an active Natural session, there are two ways to change the compiler parameter values with the COMPOPT system command: either directly, using command assignment (COMPOPT option=value), or by issuing the COMPOPT command without options, which displays the Compiler Options dialog box. The settings assigned to a compiler option are in effect until you issue the next LOGON command to another library. At LOGON to a different library, the default settings (see item 1 above) will be resumed.
Example:
OPTIONS KCHECK=ON
DEFINE DATA LOCAL
1 #A (A25) INIT <'Hello World'>
END-DEFINE
WRITE #A
END
In a Natural object (for example: program, subprogram), you can set
compiler parameters with the OPTIONS
statement. Example:
OPTIONS KCHECK=ON
WRITE 'Hello World'
END
The compiler options defined in an OPTIONS statement only affect the compilation of that object; they do not update the settings made with the COMPOPT command.
The topics provided below apply when using the
COMPOPT
command in a remote mainframe
environment.
You can specify compiler keyword parameters on different levels:
The default settings of the individual keyword parameters are specified in the macro NTCMPO in the Natural parameter module.
At session start, you can override the compiler keyword parameters
with the profile parameter CMPO
.
During an active Natural session, there are two ways to change the compiler keyword parameters with the COMPOPT system command: either directly, using command assignment (COMPOPT option=value), or by issuing the COMPOPT command without keyword parameters, which displays the Compilation Options screen. The settings assigned to a compiler option are in effect until you issue the next LOGON command to another library. At LOGON, the default settings set with the macro NTCMPO and/or the profile parameter CMPO (see above) will be resumed.
Example:
OPTIONS KCHECK=ON
DEFINE DATA LOCAL
1 #A (A25) INIT <'Hello World'>
END-DEFINE
WRITE #A
END
In a Natural object (for example: program, subprogram), you can set
compiler parameters (options) with the OPTIONS
statement. Example:
OPTIONS KCHECK=ON
WRITE 'Hello World'
END
The compiler options defined in an OPTIONS statement only affect the compilation of that object; they do not update the settings made with the COMPOPT command.
The following options are available:
DB2ARRY - Support DB2 Arrays in SQL SELECT and INSERT Statements
DB2BIN - Generate SQL Binary Data Types for Natural Binary Fields
DB2TSTI - Generate SQL TIMESTAMP Data Type for Natural TIME Fields
PSIGNF - Internal Representation of Positive Sign of Packed Numbers
These options correspond to the keyword subparameters of the
CMPO
profile parameter and/or the
NTCMPO
parameter macro.
The CHKRULE
option can be used to enable or disable a
validation check during the catalog process for maps.
ON |
When a Natural map is created, you may include fields which are
already defined inside another existing object. This works with nearly all
kinds of objects which allow you to define variables and also with DDMs. When
the included field is a database variable, it is a map editor built-in behavior
to automatically add (besides the included field) an additional
The function is similar to what is happening when an
When fields are incorporated from a DDM into a map, the
corresponding |
OFF |
INCDIR validation is disabled. This is the default
value.
|
The CPAGE
option can be used to activate a conversion
routine which translates all alphanumeric constants (from the code page that
was active at compilation time into the code page that is active at runtime)
when the object is started at runtime.
ON |
Code page support for alpha strings is enabled. |
OFF |
Code page support for alpha strings is disabled. This is the default value. |
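Example (a minimal sketch; the constant shown is arbitrary). With CPAGE=ON in effect when the object is cataloged, every alphanumeric constant compiled into the object is translated from the compile-time code page to the runtime code page when the object is executed:
* Assumes CPAGE=ON was in effect when this object was cataloged
WRITE 'Müller, São Paulo'   /* constant is translated to the runtime code page
END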
A database field defined in a DDM is described by two names:
the short name with a length of 2 characters, used by Natural to communicate with the database (especially with Adabas);
the long name with a length of 3-32 characters (1-32 characters, if the underlying database type accessed is DB2/SQL), which is supposed to be used to reference the field in the Natural programming code.
Under special conditions, you may reference a database field in a Natural program with its short name instead of the long name. This applies if running in Reporting Mode without Natural Security and if the database access statement contains a reference to a DDM instead of a view.
Whether a field name is regarded as a short-name reference depends on the length of the name. If the field identifier consists of two characters, a short-name reference is assumed; a field name of any other length is considered a long-name reference. This standard interpretation rule for database fields can additionally be controlled by setting the compiler option DBSHORT to ON or OFF:
ON |
The usage of a short name is allowed for referencing a database field. However, a database short name is not permitted in general (even if DBSHORT=ON): if the field is part of a view defined in DEFINE DATA, it can only be referenced by its long name (see Example 3 below).
This is the default value. |
OFF |
A database field may only be referenced via its long name. Every database field identifier is considered a long-name reference, regardless of its length. If a two-character name is supplied which can only be found as a short name but not as a long name, syntax error NAT0981 is raised at compile time. This makes it possible to use long names with a length of two characters defined in a DDM. This option is essential if the underlying database you access with this DDM is SQL (DB2) and table columns with a two-character name exist. For all other database types (for example, Adabas), any attempt to define a long name with a length of two characters is rejected at DDM generation. Moreover, the use of short-name references can be ruled out entirely by enforcing DBSHORT=OFF. |
Assume the following database field definition in the DDM EMPLOYEES:
Short Name | Long Name |
---|---|
AA | PERSONNEL-ID |
Example 1:
OPTIONS DBSHORT=ON
READ EMPLOYEES
DISPLAY AA /* data base short name AA is allowed
END
Example 2:
OPTIONS DBSHORT=OFF
READ EMPLOYEES
DISPLAY AA /* syntax error NAT0981, because DBSHORT=OFF
END
Example 3:
OPTIONS DBSHORT=ON
DEFINE DATA LOCAL
1 V1 VIEW OF EMPLOYEES
2 PERSONNEL-ID
END-DEFINE
READ V1 BY PERSONNEL-ID
DISPLAY AA /* syntax error NAT0981, because PERSONNEL-ID is defined in view
           /* (even if DBSHORT=ON)
END-READ
END
The DB2ARRY
option can be used to activate retrieval
and/or insertion of multiple rows from/into DB2 by a single SQL
SELECT
or
INSERT
statement
execution. This allows the specification of arrays as receiving fields in the
SQL SELECT
and as source fields in the SQL INSERT
statement. If DB2ARRY
is ON
, it is no longer possible
to use Natural alphanumeric arrays for DB2 VARCHAR/GRAPHIC columns. Instead of
these, long alphanumeric Natural variables have to be used.
ON |
DB2 array support is enabled. |
OFF |
DB2 array support is not enabled. This is the default value. |
The DB2BIN
option can be used to support the DB2 data
types BINARY and VARBINARY.
If DB2BIN is set to OFF, Natural binary fields (format B(n)) are generated as SQL data type CHAR (n <= 253) or VARCHAR (253 < n <= 32767), as in previous releases. DB2BIN=OFF is suitable for applications that use Natural binary fields like SQL CHAR fields. B2 and B4 fields are treated as SQL SMALLINT and INTEGER.
If DB2BIN is set to ON, Natural binary fields (format B(n)) are generated as SQL data type BINARY (n <= 255) or VARBIN (255 < n <= 32767). DB2BIN=ON is suitable for applications that want to use SQL binary columns. In this case, B2 and B4 fields are also treated as SQL BINARY(2) and BINARY(4).
Note:
The setting of DB2BIN
at the end of the compilation is
used for the complete Natural object. It cannot be changed for parts of a
Natural object.
ON |
SQL types BINARY and VARBIN are generated for Natural binary fields. |
OFF |
SQL types CHAR and VARCHAR are generated for Natural binary
fields, except B2 and B4. The latter are treated as SQL data types SMALLINT and
INTEGER.
This is the default value. |
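Example (a minimal sketch; the DDM name SQL-TABLE and the field names are hypothetical). The same Natural binary view fields map to different SQL data types depending on the DB2BIN setting:
DEFINE DATA LOCAL
1 V1 VIEW OF SQL-TABLE    /* hypothetical DDM for a DB2 table
  2 RAWKEY  (B16)         /* DB2BIN=ON: BINARY(16)    DB2BIN=OFF: CHAR(16)
  2 PAYLOAD (B1000)       /* DB2BIN=ON: VARBIN(1000)  DB2BIN=OFF: VARCHAR(1000)
  2 FLAGS   (B2)          /* DB2BIN=ON: BINARY(2)     DB2BIN=OFF: SMALLINT
END-DEFINE
END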
This option is used to map Natural TIME
variables to the SQL TIMESTAMP data type instead of the default SQL TIME data
type.
ON |
SQL type TIMESTAMP is generated for Natural TIME
fields of Natural data format T.
This applies to the entire Natural object. You cannot generate
only part of an object with the |
OFF |
SQL type TIME is generated for Natural TIME
fields of Natural data format T.
This is the default value. |
Note:
A Natural TIME field only holds a precision of tenths of seconds, while an SQL TIMESTAMP column can hold a much greater precision. Thus, the TIMESTAMP value read from the SQL database may be truncated if DB2TSTI=ON is set.
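Example (a minimal sketch; the DDM name SQL-TABLE and the field name are hypothetical). The mapping of a Natural TIME field depends on the DB2TSTI setting; with DB2TSTI=ON, the TIMESTAMP value is truncated to tenths of seconds when read into the TIME field:
DEFINE DATA LOCAL
1 V1 VIEW OF SQL-TABLE   /* hypothetical DDM for a DB2 table
  2 CREATED-AT (T)       /* DB2TSTI=ON: TIMESTAMP   DB2TSTI=OFF: TIME
END-DEFINE
END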
ON |
The compiler checks for the existence of an object that is specified in an object calling statement, such as:
FETCH [RETURN/REPEAT]
RUN [REPEAT]
CALLNAT
PERFORM
INPUT USING MAP
PROCESS PAGE USING
function call
helproutine call
The existence check is based on a search for the cataloged object, or for the source of the object when it is invoked by a RUN statement. It requires that the name of the object to be called/run is defined as an alphanumeric constant (not as an alphanumeric variable); otherwise, no existence check is performed.
Error Control for ECHECK=ON
The existence check is executed only when the object does not contain any syntax errors. The existence check is executed for every object calling statement. The existence check is controlled by the ECHECK compiler option.
Problems in Using the CATALL Command with ECHECK=ON
When a CATALL command is used to catalog the objects of a library, the objects are processed in a fixed order. If a library contains objects of different types, they are cataloged in the following order: GDAs, LDAs, PDAs, functions, subprograms, external subroutines, help routines, maps, adapters, programs, classes. Within objects of the same type, the alphabetical order of the name determines the sequence in which they are cataloged.
As mentioned above, the success of the object calling statement is checked against the compiled form of the called object. If the calling object (the one which is being compiled and includes the object calling statement) is cataloged before the invoked object, the existence check can fail although the called object is cataloged later in the same CATALL run.
Solution: Ensure that the called objects are cataloged before the calling objects, for example by repeating the CATALL command after the first run.
|
OFF |
No existence check is performed. This is the default setting. |
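Example (a minimal sketch; the program name HELLOPGM is hypothetical, and ECHECK=ON is assumed to be in effect, for example via COMPOPT ECHECK=ON):
FETCH 'HELLOPGM'   /* object name given as an alphanumeric constant:
                   /* the compiler verifies that HELLOPGM exists
END
If the object name were supplied in an alphanumeric variable instead, no existence check could be performed at compile time.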
This option is used to store information on the structure of a GDA (global data area) to determine whether a Natural error is to be issued when an unchanged GDA is cataloged.
The GDA information (GDA signature) only changes when a GDA is modified. The GDA signature does not change when a GDA is (accidentally) cataloged but was not modified.
The signature of the GDA and the GDA signatures stored in all Natural objects referencing this GDA are compared at execution time, in addition to the time stamps of the objects.
ON |
GDA signatures are stored and compared during execution. Natural only issues an error message if the signatures are not identical. |
OFF |
GDA signatures are not stored. This is the default value. |
This option allows you to control Natural's internal generation of global format IDs so as to influence Adabas's performance concerning the re-usability of format buffer translations.
ON |
Global format IDs are generated for all views. This is the default value. |
VID |
Global format IDs are generated only for views in local/global data areas, but not for views defined within programs. |
OFF |
No global format IDs are generated. |
For details on global format IDs, see the Adabas documentation.
GFID=abccddee
where | equals |
---|---|
a | x'F9' |
b | x'22' or x'21' depending on DB statement |
cc | physical database number (2 bytes) |
dd | physical file number (2 bytes) |
ee | number created by runtime (2 bytes) |
GFID=abbbbbbb
where | equals |
---|---|
a | x'F8' or x'F7' or x'F6'
where:
F6= |
bbbbbbb | bytes 1-7 of STOD value |
Note:
STOD is the return value of the store clock machine instruction
(STCK).
ON |
Field declarations in an object will be checked against a set of critical Natural keywords. If a variable name defined matches one of these keywords, a syntax error is reported when the object is checked or cataloged. |
OFF |
No keyword check is performed. This is the default value. |
The section Performing a Keyword
Check (in the Programming Guide)
contains a list of the keywords that are checked by the KCHECK
option.
The section Alphabetical List of Natural Reserved Keywords (in the Programming Guide) contains an overview of all Natural keywords and reserved words.
This option supports the use of lower or mixed-case program sources on mainframe platforms. It facilitates the transfer of programs written in mixed/lower-case characters from other platforms to a mainframe environment.
ON |
Allows any kind of lower/upper-case characters in the program source. |
OFF |
Allows upper-case mode only. This requires keywords, variable names and identifiers to be defined in upper case. This is the default value. |
When you use lower-case characters with LOWSRCE=ON
,
consider the following:
The syntax rules for variable names allow lower-case characters in subsequent positions. Therefore, you can define two variables, one written with lower-case characters and the other with upper-case characters.
Example:
DEFINE DATA LOCAL
1 #Vari (A20)
1 #VARI (A20)
With LOWSRCE=OFF
, these variables are treated as
different variables.
With LOWSRCE=ON
, the compiler is not case
sensitive and does not make a distinction between lower/upper-case characters.
This will lead to a syntax error because a duplicate definition of a variable
is not allowed.
When the session parameter EM (edit mask) is used in an I/O statement or in a MOVE EDITED statement, some characters of the edit mask influence the layout of the data assigned to a variable (EM control characters), while other characters insert text fragments into the data.
Example:
#VARI := '1234567890'
WRITE #VARI (EM=XXXXXxxXXXXX)
With LOWSRCE=OFF, the output is "12345xx67890", because for alphanumeric variables only the upper-case characters X and H and the circumflex accent (^) can be used as EM control characters.
With LOWSRCE=ON
, the output is
"1234567890", because an x character is treated like
an upper-case X and, therefore, interpreted as an EM
control character for that field format. To avoid this problem, enclose
constant text fragments in apostrophes (').
Example:
WRITE #VARI(EM=XXXXX'xx'XXXXX)
The text fragment is not considered an
EM
control character, regardless of the
LOWSRCE
settings.
Since all variable names are converted to upper-case characters with
LOWSRCE=ON
, the display of variable names in I/O statements
(INPUT
,
WRITE
or
DISPLAY
) differs.
Example:
MOVE 'ABC' TO #Vari
DISPLAY #Vari
With LOWSRCE=OFF
, the output is:
#Vari
--------------------
ABC
With LOWSRCE=ON
, the output is:
#VARI
--------------------
ABC
This option determines the maximum number of digits after the decimal point that the Natural compiler generates for results of arithmetic operations.
7,…,29 |
The value denotes the maximum number of digits after the
decimal point that the Natural compiler generates for results of arithmetic
operations.
The default value is 7. If higher precision is desired for intermediate results, the value should be increased. The setting of MAXPREC that is effective at compile time determines the precision of the generated results, as in the following example:
DEFINE DATA LOCAL
1 P (P1.15)
END-DEFINE
P := P + 0.1234567890123456
END
See also Precision of Results of Arithmetic Operations in the Programming Guide. |
Warning: Changing the value of the MAXPREC option
that is being used to catalog a Natural object may lead to different results,
even if the object source has not been changed. See example below. |
Example:
DEFINE DATA LOCAL
1 #R (P1.7)
END-DEFINE
#R := 1.0008 * 1.0008 * 1.0008
IF #R = 1.0024018 THEN
  ...
ELSE
  ...
END-IF
The value of #R
after the computation and the execution of
the IF
statement depend on the setting of
MAXPREC
:
Setting of MAXPREC Effective at Compile Time | Value of #R | Executed Clause of IF Statement |
---|---|---|
MAXPREC=7 | 1.0024018 | THEN clause |
MAXPREC=12 | 1.0024019 | ELSE clause |
This option determines whether or not memory is allocated for unused
level-1 fields or groups defined locally (DEFINE DATA LOCAL
).
ON |
Storage is allocated only for those level-1 fields or groups that are actually referenced in the object; unused level-1 fields or groups do not occupy data storage.
|
OFF |
Data storage is allocated for all groups and fields declared locally. This is the default setting. |
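Example (a minimal sketch; the field names are hypothetical, and MEMOPT=ON is assumed to be in effect, for example via COMPOPT MEMOPT=ON):
DEFINE DATA LOCAL
1 #USED (A10)
1 #UNUSED                /* this level-1 group is never referenced below
  2 #FILLER (A100)       /* MEMOPT=ON: no data storage is allocated for #UNUSED
END-DEFINE
#USED := 'ABC'
WRITE #USED
END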
ON |
The compiler checks the number, format, length and array index bounds of the parameters that are specified in an object calling statement, such as:
CALLNAT
PERFORM
INPUT USING MAP
PROCESS PAGE USING
function call
helproutine call
The OPTIONAL feature of the DEFINE DATA PARAMETER statement is also considered in the parameter check.
The parameter check is based on a comparison of the parameters of the object calling statement with the parameter definitions of the invoked object, taken from its cataloged form. It requires that the name of the object to be called is defined as an alphanumeric constant (not as an alphanumeric variable); otherwise, no parameter check is performed.
Error Control for PCHECK=ON
The parameter check is executed only when the object does not contain any syntax errors. The parameter check is executed for every object calling statement. The parameter check is controlled by the PCHECK compiler option.
Problems in Using the CATALL Command with PCHECK=ON
When a CATALL command is used to catalog the objects of a library, the objects are processed in a fixed order. If a library contains objects of different types, they are cataloged in the following order: GDAs, LDAs, PDAs, functions, subprograms, external subroutines, help routines, maps, adapters, programs, classes. Within objects of the same type, the alphabetical order of the name determines the sequence in which they are cataloged.
As mentioned above, the parameters of the object calling statement are checked against the compiled form of the called object. If the calling object (the one which is being compiled and includes the object calling statement) is cataloged before the invoked object, the parameter check is performed against an outdated version of the invoked object, or it fails if no cataloged version exists yet.
Solution: Ensure that the called objects are cataloged before the calling objects, for example by repeating the CATALL command after the first run.
|
OFF |
No parameter check is performed. This is the default setting. |
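Example (a minimal sketch; the subprogram name SUBPGM and its parameter layout are hypothetical). With PCHECK=ON in effect, the compiler compares the CALLNAT parameters with the parameters defined in the cataloged SUBPGM:
DEFINE DATA LOCAL
1 #NAME (A20)
1 #AGE  (I4)
END-DEFINE
CALLNAT 'SUBPGM' #NAME #AGE   /* number, format, length and index bounds are
                              /* checked against DEFINE DATA PARAMETER of SUBPGM
END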
ON |
The positive sign of a packed number is represented internally as H'F'. This is the default value. |
OFF |
The positive sign of a packed number is represented internally as H'C'. |
This option can be used to enable or disable the use of thousands
separators at compilation time. See also the profile
parameter THSEP
and
the profile and session parameter THSEPCH
and the
section Customizing Separator
Character Displays (in the Programming
Guide).
ON |
Thousands separator used. Every thousands separator character that is not part of a string literal is replaced internally with a control character. |
OFF |
Thousands separator not used, i.e. no thousands separator control character is generated by the compiler. This is the compatibility setting. |
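Example (a minimal sketch, assuming the thousands separator character in the source is a comma and THSEP=ON is in effect). The commas in the edit mask are compiled as thousands separators and rendered at runtime with the current THSEPCH character:
DEFINE DATA LOCAL
1 #AMOUNT (P9.2) INIT <1234567.89>
END-DEFINE
WRITE #AMOUNT (EM=ZZZ,ZZZ,ZZ9.99)   /* commas compiled as thousands separators
END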
ON |
Each double quotation mark within a text constant is output as a single apostrophe. This is the default value. |
OFF |
Double quotation marks within a text constant are not translated; they are output as double quotation marks. |
Example:
RESET A(A5)
A := 'AB"CD'
WRITE '12"34' / A / A (EM=H(5))
END
With TQMARK ON
, the output is:
12'34
AB'CD
C1C27DC3C4
With TQMARK OFF
, the output is:
12"34
AB"CD
C1C27FC3C4
This option determines whether the profile parameter TS (translate
output for locations with non-standard lower-case usage) is to apply only to
Natural system libraries (that is, libraries whose names begin with
"SYS", except SYSTEM
) or to all user
libraries as well.
Natural objects cataloged with TSENABL=ON
determine the
TS
parameter even if they are located in a non-system
library.
ON |
The profile parameter TS applies to all
libraries.
|
OFF |
The profile parameter TS only applies to
Natural system libraries. This is the default value.
|
The following options are available:
MASKCME
LUWCOMP
These options correspond to the keyword subparameters of the
CMPO
profile parameter and/or the
NTCMPO
parameter
macro.
ON |
The range of valid year values that match the YYYY mask
characters is 1582 - 2699 to make the MASK option compatible with
MOVE EDITED . If the profile parameter
MAXYEAR is set to 9999, the range of valid year values
is 1582 - 9999.
|
OFF |
The range of valid year values that match the YYYY mask
characters is 0000 - 2699. This is the default value. If the profile parameter
MAXYEAR is set to 9999, the range of valid year values
is 0000 - 9999.
|
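Example (a minimal sketch; the date value is arbitrary, and MAXYEAR is assumed not to be set to 9999):
DEFINE DATA LOCAL
1 #DATE (A8) INIT <'15000101'>   /* year 1500
END-DEFINE
IF #DATE = MASK(YYYYMMDD)
  WRITE 'matches'                /* reached with MASKCME=OFF (years 0000-2699)
ELSE
  WRITE 'no match'               /* reached with MASKCME=ON  (years 1582-2699)
END-IF
END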
The LUWCOMP
option checks whether the syntax of the
features provided since Natural for Mainframes Version
8.2 is also supported by Natural for UNIX Version 8.3 and Natural for Windows
Version 8.3. If any syntax incompatibilities
between the mainframe and UNIX or Windows are detected, compilation under
Natural for Mainframes Version 8.2 fails with an appropriate Natural error
message and reason code.
The following values are possible:
ON |
When a program is compiled, every attempt to use a syntax construction that is supported by Natural for Mainframes but not by Natural for UNIX or Natural for Windows is rejected with a NAT0598 syntax error and an appropriate reason code (see the following section). |
OFF |
No syntax check is performed. Any inconsistencies between the mainframe and UNIX or Windows are ignored. This is the default value. |
The following reason codes indicate which syntax parts are not supported by UNIX or Windows:
Reason Code | Invalid Syntax on UNIX or Windows |
---|---|
001 | A variable of the format P/N or a numeric constant with more than 7 precision digits is defined. Example: DEFINE DATA LOCAL 1 #P(P5.8) |
004 | Either of the following compiler options is used: Example: OPTIONS MAXPREC=10 |
007 | In a MOVE ALL statement, a SUBSTR option is used for the source or target field. Example: MOVE ALL 'X' TO SUBSTR(#A, 3, 5) |
011 | The ADJUST option is used in a READ WORK FILE statement to auto-resize an X-array field at access. Example: READ WORK FILE 1 #XARR(*) AND ADJUST |
012 | The field referenced in the REINPUT ... MARK clause is supplied with a (CV=...) option. Example: REINPUT 'text' MARK *#FLD (CV=#C) |
013 | System variables are referenced in the field list of a WRITE WORK FILE statement. |
014 | Within a READ or FIND statement, |
015 | Either of the following statements is used: |
016 | The source field in a SEPARATE statement was defined as an array. Example: SEPARATE #TEXT(*) INTO ... |
017 | The POSITION clause is used in a SEPARATE statement. |
019 | One of the following new system variables was used: |