The Illiac IV

Author: R.M. Hord
Publisher: Springer Science & Business Media
Total Pages: 362
Release: 2013-03-14
Genre: Computers
ISBN: 3662103451

The Illiac IV was the first large-scale array computer. As the forerunner of today's advanced computers, it brought whole classes of scientific computations into the realm of practicality. Conceived initially as a grand experiment in computer science, its revolutionary architecture incorporated both a high level of parallelism and pipelining. After a difficult gestation, the Illiac IV became operational in November 1975. For a decade it has been a substantial driving force behind the development of computer technology. Today the Illiac IV continues to serve large-scale scientific application areas including computational fluid dynamics, seismic stress wave propagation modeling, climate simulation, digital image processing, astrophysics, numerical analysis, spectroscopy, and other diverse areas. This volume brings together previously published material, adapted in an effort to provide the reader with a perspective on the strengths and weaknesses of the Illiac IV and the impact this unique computational resource has had on the development of technology. The history and current status of the Illiac system, the design and architecture of the hardware, the programming languages, and a considerable sampling of applications are all covered at some length. A final section is devoted to commentary.
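
For readers unfamiliar with the term, an "array computer" in the Illiac IV's sense broadcasts a single instruction stream to many processing elements, each operating in lockstep on its own slice of the data. The sketch below is a minimal Python illustration of that SIMD idea under stated assumptions: the 64-lane width matches the machine's 64 processing elements, but the function name and the enable-mask emulation are purely illustrative, not Illiac IV code.

    # Minimal SIMD-style sketch (plain Python): one instruction stream,
    # many processing elements (PEs) executing it in lockstep.
    # The 64-lane width matches the Illiac IV as built; the emulation
    # itself is purely illustrative, not actual Illiac IV code.

    NUM_PES = 64

    def simd_add(a, b, enable=None):
        # One "add" instruction applied across all PEs at once.
        # `enable` stands in for the per-PE mode bits: a disabled
        # lane skips the instruction and keeps its old value.
        if enable is None:
            enable = [True] * NUM_PES
        return [x + y if en else x for x, y, en in zip(a, b, enable)]

    # Each PE holds one element of each operand vector in local memory.
    a = list(range(NUM_PES))
    b = [10] * NUM_PES
    result = simd_add(a, b)  # all 64 lanes update in a single step
    assert result[0] == 10 and result[63] == 73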

Introduction to Parallel Processing

Author: Behrooz Parhami
Publisher: Springer Science & Business Media
Total Pages: 512
Release: 2006-04-11
Genre: Business & Economics
ISBN: 0306469642

THE CONTEXT OF PARALLEL PROCESSING

The field of digital computer architecture has grown explosively in the past two decades. Through a steady stream of experimental research, tool-building efforts, and theoretical studies, the design of an instruction-set architecture, once considered an art, has been transformed into one of the most quantitative branches of computer technology. At the same time, better understanding of various forms of concurrency, from standard pipelining to massive parallelism, and the invention of architectural structures to support a reasonably efficient and user-friendly programming model for such systems, have allowed hardware performance to continue its exponential growth. This trend is expected to continue in the near future. This explosive growth, linked with the expectation that performance will continue its exponential rise with each new generation of hardware and that (in stark contrast to software) computer hardware will function correctly as soon as it comes off the assembly line, has its downside. It has led to unprecedented hardware complexity and almost intolerable development costs. The challenge facing current and future computer designers is to institute simplicity where we now have complexity; to use the fundamental theories being developed in this area to gain performance and ease-of-use benefits from simpler circuits; and to understand the interplay between technological capabilities and limitations, on the one hand, and design decisions based on user and application requirements, on the other.

Proceedings

Author:
Publisher:
Total Pages: 608
Release: 1977
Genre: Computer programs
ISBN:

Encyclopedia of Parallel Computing

Author: David Padua
Publisher: Springer Science & Business Media
Total Pages: 2211
Release: 2011-09-08
Genre: Computers
ISBN: 0387097651

Containing over 300 entries in an A-Z format, the Encyclopedia of Parallel Computing provides easy, intuitive access to relevant information for professionals and researchers seeking any aspect of the broad field of parallel computing. Topics for this comprehensive reference were selected, written, and peer-reviewed by an international pool of distinguished researchers in the field. The Encyclopedia is broad in scope, covering machine organization, programming languages, algorithms, and applications. Within each area, concepts, designs, and specific implementations are presented. The highly structured essays in this work comprise synonyms, a definition and discussion of the topic, bibliographies, and links to related literature. Extensive cross-references to other entries within the Encyclopedia support efficient, user-friendly searches for immediate access to useful information. Key concepts presented in the Encyclopedia of Parallel Computing include: laws and metrics; specific numerical and non-numerical algorithms; asynchronous algorithms; libraries of subroutines; benchmark suites; applications; sequential consistency and cache coherency; machine classes such as clusters, shared-memory multiprocessors, special-purpose machines, and dataflow machines; specific machines such as Cray supercomputers, IBM's Cell processor, and Intel's multicore machines; race detection and auto-parallelization; parallel programming languages, synchronization primitives, collective operations, message-passing libraries, checkpointing, and operating systems. Topics covered: Speedup, Efficiency, Isoefficiency, Redundancy, Amdahl's law, Computer Architecture Concepts, Parallel Machine Designs, Benchmarks, Parallel Programming Concepts & Design, Algorithms, Parallel Applications. This authoritative reference will be published in two formats: print and online. The online edition features hyperlinks to cross-references and to additional significant research. Related subjects: supercomputing, high-performance computing, distributed computing.
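
Of the laws and metrics the Encyclopedia lists, Amdahl's law is perhaps the most frequently cited, so it is worth having the standard formula at hand: if a fraction f of a program's work can be parallelized across p processors, the achievable speedup is bounded by the remaining serial fraction. The notation below is the textbook convention, not taken from the Encyclopedia's own entry:

    S(p) = \frac{1}{(1 - f) + f/p}, \qquad \lim_{p \to \infty} S(p) = \frac{1}{1 - f}

For example, with f = 0.95 no number of processors can yield a speedup beyond 20; metrics such as efficiency and isoefficiency build on the same accounting.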

Introduction to Parallel and Vector Solution of Linear Systems

Author: James M. Ortega
Publisher: Springer Science & Business Media
Total Pages: 309
Release: 2013-06-29
Genre: Computers
ISBN: 1489921125

Although the origins of parallel computing go back to the last century, it was only in the 1970s that parallel and vector computers became available to the scientific community. The first of these machines (the 64-processor Illiac IV and the vector computers built by Texas Instruments, Control Data Corporation, and then Cray Research Corporation) had a somewhat limited impact. They were few in number and available mostly to workers in a few government laboratories. By now, however, the trickle has become a flood. There are over 200 large-scale vector computers now installed, not only in government laboratories but also in universities and in an increasing diversity of industries. Moreover, the National Science Foundation's Supercomputing Centers have made large vector computers widely available to the academic community. In addition, smaller, very cost-effective vector computers are being manufactured by a number of companies. Parallelism in computers has also progressed rapidly. The largest supercomputers now consist of several vector processors working in parallel. Although the number of processors in such machines is still relatively small (up to 8), it is expected that an increasing number of processors will be added in the near future (to a total of 16 or 32). Moreover, there are a myriad of research projects to build machines with hundreds, thousands, or even more processors. Indeed, several companies are now selling parallel machines, some with as many as hundreds, or even tens of thousands, of processors.