9.5. File Interoperability



At the most basic level, file interoperability is the ability to read the information previously written to a file---not just the bits of data, but the actual information the bits represent. MPI guarantees full interoperability within a single MPI environment, and supports increased interoperability outside that environment through the external data representation (Section External Data Representation: ``external32'') as well as the data conversion functions (Section User-Defined Data Representations).

Interoperability within a single MPI environment (which could be considered ``operability'') ensures that file data written by one MPI process can be read by any other MPI process, subject to the consistency constraints (see Section File Consistency), provided that it would have been possible to start the two processes simultaneously and have them reside in a single MPI_COMM_WORLD. Furthermore, both processes must see the same data values at every absolute byte offset in the file for which data was written.

This single-environment file interoperability implies that file data is accessible regardless of the number of processes. There are three aspects to file interoperability:

- transferring the bits,
- converting between different file structures, and
- converting between different machine representations.

The first two aspects of file interoperability are beyond the scope of this standard, as both are highly machine dependent. However, transferring the bits of a file into and out of the MPI environment (e.g., by writing a file to tape) is required to be supported by all MPI implementations. In particular, an implementation must specify how familiar operations similar to POSIX cp, rm, and mv can be performed on the file. Furthermore, it is expected that the facility provided maintains the correspondence between absolute byte offsets (e.g., after possible file structure conversion, the data bits at byte offset 102 in the MPI environment are at byte offset 102 outside the MPI environment). As an example, a simple off-line conversion utility that transfers and converts files between the native file system and the MPI environment would suffice, provided it maintained the offset coherence mentioned above. In a high quality implementation of MPI, users will be able to manipulate MPI files using the same or similar tools that the native file system offers for manipulating its files.

The remaining aspect of file interoperability, converting between different machine representations, is supported by the typing information specified in the etype and filetype. This facility allows the information in files to be shared between any two applications, regardless of whether they use MPI, and regardless of the machine architectures on which they run.

MPI supports multiple data representations: ``native,'' ``internal,'' and ``external32.'' An implementation may support additional data representations. MPI also supports user-defined data representations (see Section User-Defined Data Representations). The native and internal data representations are implementation dependent, while the external32 representation is common to all MPI implementations and facilitates file interoperability. The data representation is specified in the datarep argument to MPI_FILE_SET_VIEW.
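
For example, the following sketch (the file name, access mode, and choice of etype are illustrative only) selects a data representation by passing its name in the datarep argument of MPI_FILE_SET_VIEW:

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_File fh;

        MPI_Init(&argc, &argv);

        MPI_File_open(MPI_COMM_WORLD, "datafile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* datarep may be "native", "internal", "external32", or any
           additional implementation- or user-defined representation. */
        MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "external32", MPI_INFO_NULL);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }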


Advice to users.

MPI is not guaranteed to retain knowledge of what data representation was used when a file is written. Therefore, to correctly retrieve file data, an MPI application is responsible for specifying the same data representation as was used to create the file. (End of advice to users.)

``native''
Data in this representation is stored in a file exactly as it is in memory. The advantage of this data representation is that data precision and I/O performance are not lost in type conversions in a purely homogeneous environment. The disadvantage is the loss of transparent interoperability within a heterogeneous MPI environment.


Advice to users.

This data representation should only be used in a homogeneous MPI environment, or when the MPI application is capable of performing the data type conversions itself. (End of advice to users.)

Advice to implementors.

When implementing read and write operations on top of MPI message passing, the message data should be typed as MPI_BYTE to ensure that the message routines do not perform any type conversions on the data. (End of advice to implementors.)
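
As a purely illustrative sketch of this advice (the I/O server rank, tag, and forwarding protocol below are assumptions, not part of MPI), a client might forward the data of a contiguous write request typed as MPI_BYTE:

    #include <mpi.h>

    /* Hypothetical client-side piece of a write layered on MPI message
       passing with the "native" representation.  A real implementation
       would also handle noncontiguous datatypes (e.g., by packing first). */
    #define IO_SERVER_RANK 0
    #define IO_WRITE_TAG   77

    void forward_native_write(void *buf, int count, MPI_Datatype datatype,
                              MPI_Comm io_comm)
    {
        int size;

        /* Type the message as MPI_BYTE so the message layer performs no
           representation conversion on the data being written. */
        MPI_Type_size(datatype, &size);
        MPI_Send(buf, count * size, MPI_BYTE, IO_SERVER_RANK, IO_WRITE_TAG,
                 io_comm);
    }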

``internal''
This data representation can be used for I/O operations in a homogeneous or heterogeneous environment; the implementation will perform type conversions if necessary. The implementation is free to store data in any format of its choice, with the restriction that it will maintain constant extents for all predefined datatypes in any one file. The environment in which the resulting file can be reused is implementation-defined and must be documented by the implementation.


Rationale.

This data representation allows the implementation to perform I/O efficiently in a heterogeneous environment, though with implementation-defined restrictions on how the file can be reused. (End of rationale.)

Advice to implementors.

Since ``external32'' is a superset of the functionality provided by ``internal,'' an implementation may choose to implement ``internal'' as ``external32.'' (End of advice to implementors.)

``external32''
This data representation states that read and write operations convert all data from and to the ``external32'' representation defined in Section External Data Representation: ``external32''. The data conversion rules for communication also apply to these conversions (see Section 3.3.2, pages 25-27, of the MPI-1 document). The data on the storage medium is always in this canonical representation, and the data in memory is always in the local process's native representation.

This data representation has several advantages. First, all processes reading the file in a heterogeneous MPI environment will automatically have the data converted to their respective native representations. Second, the file can be exported from one MPI environment and imported into any other MPI environment with the guarantee that the second environment will be able to read all the data in the file.

The disadvantage of this data representation is that data precision and I/O performance may be lost in data type conversions.
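
A minimal usage sketch (the file name, offsets, and data values are illustrative) in which each process writes with ``external32'' so that the resulting file can later be read in a different MPI environment:

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_File fh;
        int rank, values[4] = {1, 2, 3, 4};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_File_open(MPI_COMM_WORLD, "portable.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "external32", MPI_INFO_NULL);

        /* Each process writes its own block; conversion from the native
           in-memory representation to external32 is part of the write. */
        MPI_File_write_at(fh, (MPI_Offset)(4 * rank), values, 4, MPI_INT,
                          MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }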


Advice to implementors.

When implementing read and write operations on top of MPI message passing, the message data should be converted to and from the ``external32'' representation in the client, and sent as type MPI_BYTE. This will avoid possible double data type conversions and the associated further loss of precision and performance. (End of advice to implementors.)






