Handling Large Flat Files
By default, Integration Server processes all flat files in the same manner, regardless of their size: it receives a flat file and keeps the entire file content in memory during processing. If you receive large files, this can cause problems because the system may not have enough memory to hold the entire parsed file.
If some or all of the flat files that you process cause problems because of memory constraints, you can set the iterator variable in the pub.flatFile:convertToValues service to true to process the top-level records (the children of the document root) in the flat file schema one at a time. After all child records of a top-level record are parsed, the pub.flatFile:convertToValues service returns and the iterator moves to the top level of the next record in the schema, until all records are parsed. Perform this parsing in a flow service using a REPEAT step; each time the pub.flatFile:convertToValues service returns, map the results and then drop them from the pipeline to conserve memory. If the results were kept in the pipeline, out-of-memory errors might occur.
The pub.flatFile:convertToValues service generates an output object (the ffIterator variable) that encapsulates and keeps track of the input records during processing. When all input data has been parsed, this object becomes null. When the ffIterator variable is null, use an EXIT step to exit the REPEAT step and discontinue processing.
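
For illustration only, the following Java service sketch shows the same iterator pattern outside of a flow service: it calls pub.flatFile:convertToValues with iterator set to true, processes each top-level record as it is returned, and stops when ffIterator becomes null. The class name, package, schema name, and handleRecord logic are assumptions and not part of the product; only the service and the documented parameter names (ffData, ffSchema, iterator, ffIterator, ffValues) are taken from this guide.

    package sample.flatfile;

    import com.wm.app.b2b.server.Service;
    import com.wm.data.IData;
    import com.wm.data.IDataCursor;
    import com.wm.data.IDataFactory;
    import com.wm.data.IDataUtil;

    public final class LargeFlatFileReader {

        // Parses ffData one top-level record at a time, mirroring the
        // REPEAT/EXIT flow pattern described above.
        public static void process(Object ffData, String schemaName) throws Exception {
            Object ffIterator = null;   // returned by convertToValues, passed back on the next call
            do {
                // Build the input pipeline for pub.flatFile:convertToValues
                IData input = IDataFactory.create();
                IDataCursor in = input.getCursor();
                IDataUtil.put(in, "ffData", ffData);
                IDataUtil.put(in, "ffSchema", schemaName);
                IDataUtil.put(in, "iterator", "true");   // parse one top-level record per call
                if (ffIterator != null) {
                    IDataUtil.put(in, "ffIterator", ffIterator);
                }
                in.destroy();

                IData output = Service.doInvoke("pub.flatFile", "convertToValues", input);

                IDataCursor out = output.getCursor();
                IData ffValues = IDataUtil.getIData(out, "ffValues");
                ffIterator = IDataUtil.get(out, "ffIterator");
                out.destroy();

                // Map or store the record, then let it go out of scope so it
                // does not accumulate in memory.
                handleRecord(ffValues);
            } while (ffIterator != null);   // a null ffIterator means all input has been parsed
        }

        // Placeholder for whatever per-record mapping or persistence you need.
        private static void handleRecord(IData ffValues) {
            // ...
        }
    }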