==What TPF is not==
TPF is not a general-purpose operating system. Its specialized role is to process transaction input messages, then return output messages on a 1:1 basis at extremely high volume with short maximum elapsed time limits.

TPF has no built-in graphical user interface functionality, and it has never offered direct graphical display facilities: implementing them on the host would be considered an unnecessary and potentially harmful diversion of real-time system resources. TPF's user interface is command-line driven with simple text display terminals that scroll upward, and there are no mouse-driven cursors, windows, or icons on a TPF Prime CRAS (computer room agent set, which is best thought of as the "operator's console"). Character messages are intended to be the mode of communication with human users; all work is accomplished via the command line, similar to UNIX without X. Several products are available which connect to Prime CRAS and provide graphical interface functions to the TPF operator, such as TPF Operations Server. Graphical interfaces for end users, if desired, must be provided by external systems. Such systems perform analysis on character content (see screen scraping) and convert the message to/from the desired graphical form, depending on its context.

Being a special-purpose operating system, TPF does not host a compiler/assembler or text editor, nor does it implement the concept of a desktop, as one might expect to find in a general-purpose operating system. TPF application source code is commonly stored in external systems and likewise built "offline". Starting with z/TPF 1.1, Linux is the supported build platform; executable programs intended for z/TPF operation must observe the ELF format for s390x-ibm-linux.
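As a rough illustration of this offline build model, the sketch below assumes a generic GNU cross-toolchain for s390x (s390x-linux-gnu-gcc and readelf) rather than the actual z/TPF build environment; the program and file names are invented.

<syntaxhighlight lang="c">
/*
 * qxyz.c - trivial routine built "offline" on Linux for illustration.
 * This is not the real z/TPF build procedure; a generic cross-build
 * producing an s390x ELF shared object might look like:
 *
 *   s390x-linux-gnu-gcc -shared -fPIC -o qxyz.so qxyz.c
 *   readelf -h qxyz.so     (header reports an ELF64 object for IBM S/390)
 */
#include <stdio.h>

void qxyz_entry(void)          /* hypothetical entry point */
{
    printf("built offline, delivered as an ELF object for s390x\n");
}
</syntaxhighlight>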
Using TPF requires knowledge of its Command Guide, since there is no support for an online command "directory" or "man"/help facility to which users might be accustomed. Commands created and shipped by IBM for the system administration of TPF are called "functional messages", commonly referred to as "Z-messages" because they are all prefixed with the letter "Z"; other letters are reserved so that customers may write their own commands.

TPF implements debugging in a distributed client-server mode, which is necessary because of the system's headless, multi-processing nature: pausing the entire system in order to trap a single task would be highly counterproductive. Debugger packages have been developed by third-party vendors who took very different approaches to the "break/continue" operations required at the TPF host, each implementing its own communications protocol between the developer running the debugger client and the server-side debug controller, as well as its own form and function of debugger operations at the client side. Examples of third-party debugger packages are Step by Step Trace from Bedford Associates, and CMSTPF, TPF/GI, and zTPFGI from TPF Software, Inc. Neither vendor's packages are wholly compatible with the other's, nor with IBM's own offering. IBM's debugging client offering is packaged in an IDE called IBM TPF Toolkit.
==What TPF is==
TPF is highly optimized to permit messages from the supported network either to be switched out to another location or routed to an application (a specific set of programs), or to permit extremely efficient access to database records.
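As a purely illustrative sketch of that dispatch idea, the fragment below shows an input message either handed to a matching application package or switched out to another destination. The routing rule and every name here are invented; this is not TPF's actual message router.

<syntaxhighlight lang="c">
#include <string.h>

struct app {
    const char *prefix;                      /* message prefix this application owns */
    void      (*handler)(const char *msg);   /* entry into the application package   */
};

extern void switch_out(const char *msg);     /* hypothetical: forward to another location */

void dispatch(const struct app *apps, size_t napps, const char *msg)
{
    for (size_t i = 0; i < napps; i++) {
        if (strncmp(msg, apps[i].prefix, strlen(apps[i].prefix)) == 0) {
            apps[i].handler(msg);            /* route to the application */
            return;
        }
    }
    switch_out(msg);                         /* no local owner: switch to another location */
}
</syntaxhighlight>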
==Data records==
Historically, all data on the TPF system had to fit in fixed record (and memory block) sizes of 381, 1055 and 4K bytes. This was due in part to the physical record sizes of blocks located on DASD. Much overhead was saved by freeing the operating system from breaking large data entities into smaller ones during file operations and reassembling them during reads. Since IBM hardware does I/O via the use of channels and channel programs, TPF would generate very small and efficient channel programs to do its I/O, all in the name of speed. Because the early days also placed a premium on the size of storage media, whether memory or disk, TPF applications evolved to do very powerful things while using very few resources.

Today, most of these limitations have been removed. In fact, smaller-than-4K DASD records are still used only for legacy support. With the advances made in DASD technology, a read/write of a 4K record is just as efficient as of a 1055-byte record. The same advances have increased the capacity of each device, so that there is no longer a premium placed on the ability to pack data into the smallest space possible.
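The practical effect of the fixed sizes can be sketched as follows; the record layout is invented for illustration, and the point is only that a record must fit one of the standard blocks, so the operating system never has to split or reassemble it during I/O.

<syntaxhighlight lang="c">
#include <assert.h>

#define TPF_BLK_381   381u     /* historic small block size       */
#define TPF_BLK_1055 1055u     /* historic medium block size      */
#define TPF_BLK_4K   4096u     /* 4K block, the size still in use */

struct example_record {        /* hypothetical application record */
    char record_id[8];
    char payload[1000];
};

/* The application picks the smallest standard block the record fits in
 * (here 1055 bytes); the check guarantees a single 4K block always
 * suffices, so no record ever needs to be broken up on file I/O. */
static_assert(sizeof(struct example_record) <= TPF_BLK_4K,
              "record must fit within one fixed-size block");
</syntaxhighlight>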
==Programs and residency==
TPF also had its program segments allocated as 381-, 1055- and 4K-byte records at different points in its history. Each segment consisted of a single record, with a typically comprehensive application requiring perhaps tens or even hundreds of segments. For the first forty years of TPF's history, these segments were never link-edited. Instead, the relocatable object code (direct output from the assembler) was laid out in memory, had its internally self-referential relocatable symbols resolved, and the entire image was then written to file for later loading into the system. This created a challenging programming environment in which segments related to one another could not directly address each other; control transfer between them was implemented as the ENTER/BACK system service.

In ACP/TPF's earliest days (circa 1965), memory space was severely limited, which gave rise to a distinction between file-resident and core-resident programs: only the most frequently used application programs were written into memory and never removed (core residency); the rest were stored on file and read in on demand, with their backing memory buffers released after execution.

C language support was introduced to TPF at version 3.0, initially implemented conformant to segment conventions, including the absence of linkage editing. This scheme quickly demonstrated itself to be impractical for anything other than the simplest of C programs. At TPF 4.1, truly and fully linked load modules were introduced. These were compiled with the z/OS C/C++ compiler using TPF-specific header files and linked with IEWL, resulting in a z/OS-conformant load module which in no manner could be considered a traditional TPF segment. The TPF loader was extended to read the z/OS-unique load module file format and lay out file-resident load modules' sections into memory; meanwhile, assembly language programs remained confined to TPF's segment model, creating an obvious disparity between applications written in assembler and those written in higher-level languages (HLL).
At z/TPF 1.1, all source language types were conceptually unified and fully link-edited to conform to the ELF specification. The segment concept became obsolete, meaning that any program written in any source language, including assembler, may now be of any size. Furthermore, external references became possible, and separate source code programs that had once been segments can now be directly linked together into a shared object. One value point is that critical legacy applications can benefit from improved efficiency through simple repackaging: calls made between members of a single shared object now have a much shorter pathlength at run time than calls through the system's ENTER/BACK service. Members of the same shared object may now share writeable data regions directly, thanks to copy-on-write functionality also introduced at z/TPF 1.1, which coincidentally reinforces TPF's reentrancy requirements.
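As a rough sketch of that repackaging point, the example below contrasts segment-era control transfer with a direct call inside one shared object. All names are invented, and the lookup function merely stands in for ENTER/BACK-style dispatch; it is not a real TPF interface.

<syntaxhighlight lang="c">
/* Two routines that were once separate segments, now compiled and
 * link-edited into the same ELF shared object. */

extern void *find_program(const char *name);   /* hypothetical stand-in for
                                                   ENTER/BACK-style dispatch */

/* Segment-era style: the caller cannot address the target directly, so
 * control transfer goes through a system service that locates the target
 * program by name, giving a longer run-time pathlength. */
void old_style_caller(void)
{
    void (*target)(void) = (void (*)(void))find_program("QAB2");
    target();
}

/* z/TPF style: both routines are members of one shared object, so the
 * reference is resolved at link-edit time and the call is direct. */
void qab2_routine(void);                        /* former segment, now an
                                                   ordinary external function */
void new_style_caller(void)
{
    qab2_routine();                             /* short pathlength: plain call */
}
</syntaxhighlight>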
The concepts of file residency and memory residency were also made obsolete, due to a z/TPF design point which sought to have all programs resident in memory at all times. Since z/TPF had to maintain a call stack for high-level language programs, which gave HLL programs the ability to benefit from stack-based memory allocation, it was deemed beneficial to extend the call stack to assembly language programs on an optional basis, which can ease memory pressure and facilitate recursive programming. All z/TPF executable programs are now packaged as ELF shared objects.
==Memory usage==
Historically, and in step with the above, core blocks (memory) were also 381, 1055 and 4K bytes in size. Since ALL memory blocks had to be of one of these sizes, most of the overhead for obtaining memory found in other systems was discarded. The programmer merely needed to decide what size block would fit the need and ask for it. TPF maintained lists of the blocks in use and simply handed out the first block on the available list. Physical memory was divided into sections reserved for each size, so a 1055-byte block always came from its section and was returned there; the only overhead needed was to add its address to the appropriate physical block table's list. No compaction or garbage collection was required.

As applications became more advanced, demands for memory increased, and once C became available, memory chunks of indeterminate or large size were required. This gave rise to the use of heap storage and some memory management routines. To ease the overhead, TPF memory was broken into frames of 4 KB in size (1 MB with z/TPF). If an application needs a certain number of bytes, the number of contiguous frames required to fill that need is granted.
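The classic fixed-size block scheme described above can be sketched roughly as follows; this is illustrative only, not the actual z/TPF storage manager, and all names are invented.

<syntaxhighlight lang="c">
#include <stddef.h>

/* One available list per standard block size: 381, 1055 and 4K bytes. */
enum blk_size { BLK_381, BLK_1055, BLK_4K, BLK_NSIZES };

struct block { struct block *next; };

static struct block *available[BLK_NSIZES];    /* head of each available list */

/* Allocation: hand out the first block on the available list for the
 * requested size.  Constant time, no searching, no compaction. */
void *get_block(enum blk_size size)
{
    struct block *b = available[size];
    if (b == NULL)
        return NULL;                            /* that pool is exhausted */
    available[size] = b->next;
    return b;
}

/* Release: the block always goes back to the section it came from; the
 * only overhead is pushing its address back onto that list. */
void release_block(enum blk_size size, void *p)
{
    struct block *b = p;
    b->next = available[size];
    available[size] = b;
}
</syntaxhighlight>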
ALL memory blocks had to be of this size, most of the overhead for obtaining memory found in other systems was discarded. The programmer merely needed to decide what size block would fit the need and ask for it. TPF would maintain a list of blocks in use and simply hand out the first block on the available list. Physical memory was divided into sections reserved for each size so a 1055 byte block always came from a section and returned there, the only overhead needed was to add its address to the appropriate physical block table's list. No compaction or data collection was required. As applications got more advanced demands for memory increased, and once C became available memory chunks of indeterminate or large size were required. This gave rise to the use of heap storage and some memory management routines. To ease the overhead, TPF memory was broken into frames— 4 KB in size (1 MB with z/TPF). If an application needs a certain number of bytes, the number of contiguous frames required to fill that need are granted. ==References==