Universal high performance computing: we have just begun

Universality has been the key to the overwhelming success of computing in the commercial, intellectual, and military sectors of American life. The same hardware, and often the same software, can be applied to a wide range of problems. This has led to enormous economies of scale in manufacturing, training, and deployment.

At present, high performance computing is entirely different. Although there have been some impressive successes and there are continuing advances in parallel scientific computing, only a small fraction of the nation's intensive computing problems are currently amenable to high performance parallel computing. Each of these applications has required a very large and expensive development effort. Because of the high development costs, HPC has primarily caught on in very cost-insensitive areas. This was not problematic when the corresponding hardware was prohibitively expensive. With the availability of cost-efficient parallel hardware, however, it is now widely understood that a breakthrough toward universal parallel computing is needed for HPC to meet its promise in National Challenge applications and to become a major commercial factor.

A prerequisite for Universal High Performance Computing (UHPC) is convergence in the hardware area. In serial computing, a common basic model, the von Neumann model, has allowed hardware and software developments to proceed with the assurance that basic compatibility would be maintained. Until recently, parallel computing has been plagued by a plethora of very different computation models that made it impractical to develop a broad software base. Fortunately, there now seems to be widespread agreement that HPC hardware should follow a model based on general purpose processors in a distributed memory system. This model covers everything from networks of workstations (NOW) to the commercial parallel systems being produced by all the major manufacturers. The reason for this convergence is economic and is unlikely to change: the development cost for a new high-performance microprocessor is extremely high, whereas the market for HPC is limited compared to the market for personal computers and workstations. Building special processors for HPC is also very expensive and does not provide the same performance.
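
To make the converged hardware model concrete, the sketch below shows what programming against a distributed memory system of general purpose processors typically looks like: each processor owns its own private memory, and data moves between processors only through explicit messages. This is an illustrative sketch only, written in C against an MPI-style message-passing library; the use of MPI, the rank-based roles, and the sum-of-squares computation are assumptions introduced here for illustration and are not part of the original text.

    /* Illustrative sketch (not from the original text): distributed-memory
       message passing with one general purpose processor per process. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identity */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* total number of processes */

        /* Each process owns its memory; no process can read another's data
           directly, so all sharing happens through explicit messages. */
        int local_value = rank * rank;

        if (rank == 0) {
            /* Process 0 gathers one value from every other process. */
            int total = local_value;
            for (int src = 1; src < nprocs; src++) {
                int incoming;
                MPI_Recv(&incoming, 1, MPI_INT, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                total += incoming;
            }
            printf("sum of squares over %d processes = %d\n", nprocs, total);
        } else {
            /* Every other process sends its private value to process 0. */
            MPI_Send(&local_value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

The point of the model is that a program written this way is not tied to one machine class: the same source can run on a network of workstations or on a commercial distributed-memory parallel system, with only the message transport underneath differing.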
