Parsytec's product range included:
• Megaframe (T414/T800) – one transputer per board, up to ten boards in a rack or as plug-in boards
• MultiCluster (T800) – up to 64 processors in a single rack
• SuperCluster (T800) – 16 to 1,024 processors in a single frame
• GigaCluster (planned: T9000; realized: T800 or MPC 601) – 64 to 16,384 processors in "cubes"
• x'plorer (T800 or MPC 601)
• Cognitive Computer (MPC 604 and Intel Pentium Pro)
• Powermouse (MPC 604)

In total, approximately 700 stand-alone systems (SC and GC) were shipped.

Initially, Parsytec participated in the GPMIMD (General Purpose
MIMD) project under the umbrella of the ESPRIT program, both of which were funded by the
European Commission's
Directorate for Science. However, after significant disagreements with other participants—
Meiko, Parsys,
Inmos, and Telmat—regarding the choice of a common physical architecture, Parsytec left the project and announced its own
T9000-based machine, the GC. Due to Inmos' issues with the
T9000, Parsytec was forced to switch to a system using a combination of
Motorola MPC 601 CPUs and Inmos
T805 processors. This led to the development of Parsytec's "hybrid" systems (e.g., GC/PP), where
transputers were used as communication processors while the computational tasks were offloaded to the
PowerPCs. Parsytec's cluster systems were operated by an external workstation, typically a
Sun workstation (e.g.,
Sun-4). There is considerable confusion regarding the names of Parsytec products. This is partly due to the architecture, but also because of the aforementioned unavailability of the Inmos
T9000, which forced Parsytec to use the
T805 and
PowerPC processors instead. Systems equipped with
PowerPC processors were given the prefix "Power." The architecture of GC systems was based on self-contained GigaCubes. The basic architectural element of a Parsytec system was a cluster, which consisted, among other components, of four transputers/processors (i.e., a cluster is a node in the classical sense). A GigaCube (sometimes referred to as a supernode or meganode) consisted of four clusters (nodes), each with 16
Inmos T805 transputers (30 MHz),
RAM (up to 4 MB per
T805), and an additional redundant
T805 (the 17th processor). It also included local link connections and four
Inmos C004 routing chips. Hardware fault tolerance was achieved by linking each
T805 to a different C004. The unusual spelling of x'plorer led to variations like xPlorer, and the Gigacluster is sometimes referred to as the GigaCube or Grand Challenge.
== Megaframe ==
Megaframe was the product name for a family of transputer-based parallel processing modules, some of which could be used to upgrade an
IBM PC. As a standalone system, a Megaframe could hold up to ten processor modules. Different versions of the modules were available, such as one featuring a 32-bit transputer
T414 with floating-point hardware (
Motorola 68881), 1 MB of
RAM (80 nanosecond access time), and a throughput of 10 MIPS, or one with four 16-bit
transputers (
T22x) with 64 kB of RAM. Additionally, cards for special features were offered, including a graphics processor with a resolution of 1280 x 1024 pixels and an I/O "cluster" with terminal and
SCSI interfaces.
== MultiCluster ==
The
MultiCluster-1 series consisted of statically configurable systems that could be tailored to specific user requirements, such as the number of processors, amount of memory, I/O configuration, and system topology. The required processor topology could be configured using UniLink connections, fed through a special backplane. Additionally, four external sockets were provided.
MultiCluster-2 used network configuration units (NCUs) that provided flexible, dynamically configurable interconnection networks. The multiuser environment could support up to eight users through Parsytec's multiple virtual architecture software. The NCU design was based on the
Inmos crossbar switch, the C004, which offers full crossbar connectivity for up to 16 transputers. Each NCU, made of C004s, connected up to 96 UniLinks, linking internal as well as external transputers and other I/O subsystems. MultiCluster-2 allowed various fixed interconnection topologies, such as tree or mesh structures, to be configured.

== SuperCluster ==
The SuperCluster had a hierarchical, cluster-based design. Its basic unit was a fully connected cluster of 16 T800 transputers; larger systems included additional levels of NCUs to form the necessary connections. The Network Configuration Manager (NCM) software controlled the NCUs and dynamically established the required connections. Each transputer could be equipped with 1 to 32 MB of dynamic RAM, with single-error correction and double-error detection.

== GigaCluster ==
While the GC/PP was a hybrid system, the GCel ("entry level") was based solely on the
T805. The GCel was designed to be upgradeable to the T9000 transputers (had they arrived in time), thus becoming a full GC. Since the T9000 was Inmos' evolutionary successor to the
T800, the upgrade was planned to be simple and straightforward, firstly because both transputers shared the same instruction set, and secondly because they had a similar ratio of compute power to communication throughput. A theoretical speed-up factor of 10 was expected, but in the end it was never achieved. The network structure of the GC was a two-dimensional lattice, with an inter-communication speed between the nodes (i.e., clusters in Parsytec's terminology) of 20 Mbit/s. For its time, the concept of the GC was exceptionally modular and scalable. A so-called GigaCube was a module that was already a one-gigaflop system and served as the building block for larger systems. A module (or "cube" in Parsytec's terminology) contained:
• Four clusters, each equipped with:
  • 16 transputers (plus one additional transputer for redundancy, making it 17 transputers per cluster),
  • 4 wormhole routing chips (C104 for the planned T9000, C004 for the realized T805),
  • A dedicated power supply and communication ports.
By combining modules (or cubes), one could theoretically connect up to 16,384 processors into a very powerful system (see the sketch at the end of this section). The two largest installations of the GC that were actually shipped had 1,024 processors (16 modules, with 64 transputers per module) and were operated at the data centers of the Universities of Cologne and Paderborn. In October 2004, the system at Paderborn was transferred to the Heinz Nixdorf MuseumsForum, where it is now inoperable. The power consumption of a system with 1,024 processors was approximately 27 kW, and its weight was nearly a ton. In 1992, the system was priced at around 1.5 million
DM. While the smaller versions, up to GC-3, were air-cooled, water cooling was mandatory for the larger systems. In 1992, a GC with 1,024 processors ranked on the TOP500 list of the world's fastest supercomputer installations. In Germany alone, it was the 22nd fastest computer. In 1995, there were nine Parsytec computers on the TOP500 list, including two GC/PP 192 installations, which ranked 117th and 188th. In 1996, they still ranked 230th and 231st on the TOP500 list.
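The modular scaling described above can be made concrete with a short, purely illustrative C sketch; the cluster and cube sizes are the figures quoted in this section, and only the 16 worker transputers per cluster (not the redundant 17th) are counted.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Figures taken from the text above: a GigaCube ("cube") holds 4 clusters,
 * each with 16 worker transputers (the 17th transputer per cluster is a
 * redundant spare and is not counted here). */
#define CLUSTERS_PER_CUBE        4
#define TRANSPUTERS_PER_CLUSTER 16

static int processors(int cubes)
{
    return cubes * CLUSTERS_PER_CUBE * TRANSPUTERS_PER_CLUSTER;
}

int main(void)
{
    /*   1 cube   ->     64 processors (one module)
     *  16 cubes  ->  1,024 processors (Cologne and Paderborn installations)
     * 256 cubes  -> 16,384 processors (theoretical maximum quoted above) */
    int configs[] = { 1, 16, 256 };
    for (int i = 0; i < 3; i++)
        printf("%3d cube(s): %6d processors\n", configs[i], processors(configs[i]));
    return 0;
}
</syntaxhighlight>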
== x'plorer ==
[Image: Sparcstation as front end]
The '''x'plorer''' model came in two versions: the initial version featured 16
transputers, each with access to 4 MB of
RAM, and was called x'plorer. Later, when Parsytec switched to the
PPC architecture, it was renamed POWERx'plorer and featured 8
MPC 601 CPUs. Both models were housed in the same desktop case, designed by Via 4 Design. In either version, the x'plorer was essentially a single "slice" (which Parsytec referred to as a cluster) of a GigaCube (PPC or transputer); the smallest GigaCube version (GC-1) used 4 of these clusters. As a result, some referred to it as a "GC-0.25". The POWERx'plorer was based on 8 processing units arranged in a 2D mesh (a possible layout is sketched below). Each processing unit included:
• One 80 MHz MPC 601 processor,
• 8 MB of local memory, and
• A transputer for establishing and maintaining communication links.
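The 2D mesh arrangement of the 8 processing units is not specified further in the text; as a small illustration only, the following C sketch assumes a 2 × 4 grid and derives each unit's mesh neighbours, roughly the kind of adjacency information the communication transputers would handle.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Illustration only: the article states a 2D mesh of 8 processing units but
 * not its shape, so a 2 x 4 grid is assumed here. */
#define ROWS 2
#define COLS 4

int main(void)
{
    for (int id = 0; id < ROWS * COLS; id++) {
        int row = id / COLS, col = id % COLS;
        printf("unit %d (row %d, col %d):", id, row, col);
        if (row > 0)        printf(" up=%d",    id - COLS);
        if (row < ROWS - 1) printf(" down=%d",  id + COLS);
        if (col > 0)        printf(" left=%d",  id - 1);
        if (col < COLS - 1) printf(" right=%d", id + 1);
        printf("\n");
    }
    return 0;
}
</syntaxhighlight>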
== Cognitive Computer ==
The Parsytec
CC (Cognitive Computer) system was an autonomous unit at the card rack level. The CC card rack subsystem provided the system with its infrastructure, including power supply and cooling. The system could be configured as a standard 19-inch rack-mounted unit, which accepted various 6U plug-in modules. The CC system was a distributed memory, message-passing parallel computer and is globally classified in the
MIMD category of parallel computers. There were two different versions available:
• CCe: Based on the
Motorola MPC 604 processor running at 133 MHz with 512 KB L2 cache. The modules were connected together at 1 Gbit/s using high-speed (HS) link technology according to the
IEEE 1355 standard, allowing data transfer at up to 75 MB/s. The communication controller was integrated into the processor nodes through the
PCI bus. The system board used the
MPC 105 chip for memory control,
DRAM refresh, and memory decoding for banks of DRAM and/or Flash. The CPU bus speed was limited to 66 MHz, while the PCI bus speed was a maximum of 33 MHz.
• CCi: Based on the
Intel Pentium Pro, its core elements were dual
Pentium Pro-based motherboards (at 266 MHz), which were interconnected using several high-speed networks. Each dual motherboard had 128 MB of memory. Each node had a peak performance of 200
MFLOPS. The operating systems were
Windows NT 4.0 and ParsyFrame (with an optional UNIX environment).

In all CC systems, the nodes were directly connected to the same router, which implemented an active hardware 8×8 crossbar switch for up to 8 connections using the 40 MB/s high-speed link. Regarding the CCe, the software was based on
IBM's
AIX 4.1 UNIX operating system, along with Parsytec's parallel programming environment, Embedded PARIX (EPX). This setup combined a standard
UNIX environment (including compilers, tools, and libraries) with an advanced software development environment. The system was integrated into the local area network using standard Ethernet. A CC node had a peak performance of 266 MFLOPS; the peak performance of the 8-node CC system installed at
Geneva University Hospital was therefore 2.1
GFLOPS.
== Powermouse ==
Powermouse was another scalable system that consisted of modules and individual components; it was a straightforward extension of the x'plorer system. Each module controlled the data flow in four directions to the other modules in the system. The bandwidth of a single node was 9 MB/s. For about 35,000
DM, a basic system consisting of 16
CPUs (i.e., four modules) could provide a total computing power of 9.6 Gflop/s. As with all Parsytec products, Powermouse required a
Sun SPARCstation as the front-end. All software, including
PARIX with
C++ and
Fortran 77 compilers and debuggers (alternatively providing
MPI or
PVM as user interfaces), was included.
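Since MPI was offered as one of the user interfaces, application code for such a system could have looked like the following minimal C/MPI sketch (a hypothetical example, not Parsytec-supplied code): each node computes a partial sum and the results are combined on node 0.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <mpi.h>

/* Hypothetical user program for a message-passing machine programmed through
 * MPI: every process sums a strided share of a range, and the partial sums
 * are combined on rank 0 with a reduction. */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* e.g. 16 on a basic Powermouse */

    long local = 0, total = 0;
    for (long i = rank; i < 1000000; i += size)
        local += i;

    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %ld, computed on %d nodes\n", total, size);

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>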
== Operating system ==
The operating system used was
PARIX (PARallel UnIX extensions) – PARIXT8 for the
T80x transputers and PARIXT9 for the
T9000 transputers, respectively. Based on
UNIX, PARIX supported
remote procedure calls and was compliant with the
POSIX standard. PARIX provided
UNIX functionality at the front-end (e.g., a
Sun SPARCstation, which had to be purchased separately) with library extensions for the needs of the parallel system at the back-end, which was the Parsytec product itself (connected to the front-end for operation). The PARIX software package included components for the program development environment (compilers, tools, etc.) and the runtime environment (libraries). PARIX offered various types of synchronous and asynchronous communication. In addition, Parsytec provided a parallel programming environment called Embedded PARIX (EPX).
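PARIX's native calls are not reproduced here; instead, as a rough stand-in, the following C fragment uses MPI (which Parsytec also offered as a user interface) to illustrate the distinction between synchronous (blocking) and asynchronous (non-blocking) message passing referred to above.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <mpi.h>

/* Stand-in illustration using MPI rather than the native PARIX calls:
 * a blocking send on rank 0 versus a non-blocking receive on rank 1. */
int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        buf = 42;
        /* Blocking (synchronous-style) send: returns when the buffer
         * can safely be reused. */
        MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        /* Non-blocking (asynchronous) receive: post it, overlap other
         * work with the transfer, then wait for completion. */
        MPI_Irecv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ... computation could run here while the message is in flight ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", buf);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>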
Helios could also be run on the machines; it supported Parsytec's special reset mechanism out of the box.

== See also ==