


[l/m 10/2/2001] Suggested readings comp.par/comp.sys.super (24/28) FAQ

Archive-Name: superpar-faq
Last-modified: 1 Oct 2001

24	Suggested (required) readings			< * this panel * >
26	Dead computer architecture society
28	Dedications
2	Introduction and Table of Contents and justification
4	Comp.parallel news group history
6	parlib
8	comp.parallel group dynamics
10	Related news groups, archives and references
12
14
16	
18	Supercomputing and Crayisms
20	IBM and Amdahl
22	Grand challenges and HPCC

So you didn't search TM-86000?  (panel 14).


Here's the context: this panel is more parallel (rather than super)
computing oriented.

Every calendar year, I ask in comp.parallel for everyone's opinions
on what people should be reading.  I couch this with the proviso that
the reader be at least a 1st- or 2nd-year grad student in computer science
or a related technical field.  This presumes some basic ACM core curriculum
knowledge like:
	basic computer architecture,
	compilers,
	operating systems, and some numerical analysis
	(some would argue: not enough, but that's a separate argument).

For better or worse, it's done numerically (a mid-1980s experiment).
Every suggester gets "10 votes."
Below you will see the 10 perceived "REQUIRED" readings in parallel computing
as judged by your colleagues: and they are very good colleagues, like JH and DP, DH, etc.

Disadvantages:
	1) sometimes 10 votes is not enough (I made the rules, I can make
	exceptions).
	2) new, unfamiliar books tend to take time to make it into "the top 10."
	Yes, some references might be old, so vote for newer references
	and encourage your colleagues to "vote" for those references, too.
	3) for those, we have a RECOMMENDED 100 (for recommended class
	reading lists).  Search TM-86000 (panel 14) and find them.
	I might make a separate FAQ panel later.  Ten is enough for now.
Some people will claim "anti-votes."  Sorry, I have no provision for anti-votes
except to note them in annotations.  Watch for them!

And if you have voted in the past and wish to change your "vote,"
just ask.

We are not doing this to sell textbooks.  This is merely a yearly opinion
survey.  You can suggest 10 at just about any time (especially if you want to
second an existing endorsement, or anti-vote, or whatever).



COME ON, COME ON!  You are long-winded.
-------------


Here:

REQUIRED

%A George S. Almasi
%A Allan Gottlieb
%T Highly Parallel Computing, 2nd ed.
%I Benjamin/Cummings division of Addison Wesley Inc.
%D 1994
%O ISBN 0-8053-0443-6 $36.95
%K ISBN # 0-8053-0177-1, book, text, Ultracomputer,
grequired99, 91(11): enm, cb@uk, ag, jlh, dp, gl, dar, dfk, a(umn), pb,
%d 1st edition, 1989
%X This is a kinda neat book.  There are special net anecdotes
which make this interesting.  Oh, there are a few significant typos:
LINPAK is really LINPACK, etc.  These were fixed in the second edition.
%X It's cheesy in places and the typography is
pitiful, but it's still the best survey of parallel processing.  We really
need a Hennessy and Patterson for parallel processing.
(The typography was much improved in the second edition, so much of
the cheesy flavor is gone --ag.)
%X (JLH & DP) The authors discuss the basic foundations, applications,
programming models, language and operating system issues and a wide
variety of architectural approaches.  The discussions of parallel
architectures include a section that describes the key concepts within
a particular approach.
%X Very broad coverage of architecture, languages, background theory,
software, etc. Not really a book on programming, of course, but
certainly a good book otherwise.
%X Top-10 required reading in computer architecture to Dave Patterson.
%X It is hardware oriented, but makes some useful comments on programming.
%X I agree that somehow the book design/typography is terrible,
but the content is great.
%X This book is more expensive (about $90) but has significantly more
content, including protocols for directory based caching and
bus snooping, as well as case studies of distributed memory architectures.

%A Michael Wolfe
%T Optimizing Supercompilers for Supercomputers
%S Pitman Research Monographs in Parallel and Distributed Computing
%I MIT Press
%C Cambridge, MA
%D 1989
%d October 1982
%r Ph. D. Dissertation
%K parallelization, compiler, summary,
%K book, text,
%K grequired93/91(9): cbuk, dmp, lls, +6 c.compilers,
%K Recursion removal and parallel code
%X Good technical intro to dependence analysis, based on Wolfe's PhD Thesis.
%X This dissertation was re-issued in 1989 by MIT Press under its Pitman
parallel processing series.
%X ...synchronization and locking instructions when compiling the
parallel procedures and those called by them. This is a bit like
the 'random synchronization' method described by Wolfe but
works with pointer-based data structures rather than array elements.
%X Cited Chapters:
Data Dependence 11-57
Structure of a Supercompiler 214-218
%X Consider replacing the classic reference with:
M. J. Wolfe, High Performance Compilers for Parallel Computing,
Addison Wesley, Reading, Mass., 1996,
which has somewhat different content but is also well worth reading.
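
If dependence analysis is new to you, here is the flavor in a minimal
C sketch (mine, not Wolfe's): the first loop carries a flow dependence
across iterations, so they must run in order; the second loop has no
cross-iteration dependence, so a parallelizing compiler may run all of
its iterations at once.

/* Illustrative sketch (not from the book): the core question of
 * dependence analysis is whether loop iterations may run in parallel. */
#include <stdio.h>

#define N 8

int main(void)
{
    int a[N + 1] = {1, 0};

    /* Loop-carried (flow) dependence: a[i] reads a[i-1], which the
     * previous iteration wrote.  Iterations must run in order. */
    for (int i = 1; i <= N; i++)
        a[i] = a[i - 1] + 1;

    /* No cross-iteration dependence: each iteration touches only its
     * own elements, so all iterations could execute simultaneously. */
    int b[N + 1];
    for (int i = 1; i <= N; i++)
        b[i] = a[i] * 2;

    printf("a[%d] = %d, b[%d] = %d\n", N, a[N], N, b[N]);
    return 0;
}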

%A W. Daniel Hillis
%A Guy L. Steele, Jr.
%Z Thinking Machines Corp.
%T Data Parallel Algorithms
%J Communications of the ACM
%V 29
%N 12
%D December 1986
%P 1170-1183
%r DP86-2
%K Special issue on parallel processing,
grequired97(8): enm, hcc, dmp, jlh, dp, jwvz, sm, wdh,
CR Categories and Subject Descriptors:
B.2.1 [Arithmetic and Logic Structures]: Design Styles - parallel;
C.1.2 [Processor Architectures]:
Multiple Data Stream Architectures (Multiprocessors) - parallel processors;
D.1.3 [Programming Techniques] Concurrent Programming;
D.3.3 [Programming Languages] Language Constructs -
concurrent programming structures;
E.2 [Data Storage Representations]: linked representations;
F.1.2 [Computation by Abstract Devices]: Modes of Computation - parallelism;
G.1.0 [Numerical Analysis] General- parallel algorithms,
General Terms: Algorithms
Additional Key Words and Phrases: Combinator reduction, combinators,
Connection Machine computer system, log-linked lists, parallel prefix,
SIMD, sorting, Ultracomputer,
%K Rhighnam, algorithms, analysis, Connection Machine, programming, SIMD, CM,
%X (JLH & DP) Discusses the challenges and approaches for programming an SIMD
like the Connection Machine.
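
The paper's signature technique is parallel prefix (scan).  A minimal
sequential C sketch of the idea (mine, not the paper's code): each of
the log2(n) steps adds x[i-d] into x[i] "for all i at once"; sweeping
i downward simulates that simultaneous step on one processor.

/* Hillis-Steele scan, simulated sequentially (sketch is mine). */
#include <stdio.h>

#define N 8

int main(void)
{
    int x[N] = {1, 1, 1, 1, 1, 1, 1, 1};

    for (int d = 1; d < N; d *= 2)          /* log2(N) parallel steps  */
        for (int i = N - 1; i >= d; i--)    /* descending order mimics */
            x[i] += x[i - d];               /* the simultaneous update */

    for (int i = 0; i < N; i++)             /* inclusive prefix sums   */
        printf("%d ", x[i]);                /* prints: 1 2 3 ... 8     */
    printf("\n");
    return 0;
}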

%A C. L. Seitz
%T The Cosmic Cube
%J Communications of the ACM
%V 28
%N 1
%D January 1985
%P 22-33
%r Hm83
%d jun'84
%K grequired91(6): enm, dmp, jlh, dp, j-lb, jwvz,
Rcccp, Rhighnam,
%K CR Categories and Subject Descriptors: C.1.2 [Processor Architectures]:
Multiple Data Stream Architectures (Multiprocessors);
C.5.4 [Computer System Implementation]: VLSI Systems;
D.1.2 [Programming Techniques]: Concurrent Programming;
D.4.1 [Operating Systems]: Process Management
General terms: Algorithms, Design, Experimentation
Additional Key Words and Phrases: highly concurrent computing,
message-passing architectures, message-based operating systems,
process programming, object-oriented programming, VLSI systems,
homogeneous machine, hypercube, C^3P,
%X Excellent survey of this project.
Reproduced in "Parallel Computing: Theory and Comparisons,"
by G. Jack Lipovski and Miroslaw Malek,
Wiley-Interscience, New York, 1987, pp. 295-311, appendix E.
%X * Brief survey of the cosmic cube, and its hardware
%X (JLH & DP) This is a good discussion of the Caltech approach, which
embodies the ideas several of these machines (often called hypercubes).
The work at Caltech is the basis for the machines at JPL and the Intel iPSC,
as well as closely related to the NCUBE design.  Another paper by Seitz
on this same topic appears in the Dec. 1984 issue of IEEE Trans.
on Computers.
%X One of my top-10 papers to Dave Patterson (on computer architecture).
%X Literature search yielded:
1450906 C85023854
The Cosmic Cube (Concurrent Computing)
Seitz, C.L.
Author Affil: Dept. of Comput. Sci., California Inst. of Technol.,
Pasadena, CA, USA
Source: Commun. ACM (USA) Vol. 28, No. 1, pp. 22-33
Publication Year: Jan. 1985
CODEN: CACMA2  ISSN: 0001-0782
U. S. Copyright Clearance Center Code: 0001-0782/85/0100-002275c
Treatment: Practical;
Document Type: Journal Paper
Languages: English
(14 Refs)
Abstract: Sixty-four small computers are connected by a network of
point-to-point communication channels in the plan of a binary 6-cube.  This
cosmic cube computer is a hardware simulation of a future VLSI
implementation that will consist of single-chip nodes.  The machine offers
high degrees of concurrency in applications and suggests that future
machines with thousands of nodes are both feasible and attractive.  It uses
message switching instead of shared variables for communicating between
concurrent processes.
Descriptors: multiprocessing systems; message switching
Identifiers: message passing architectures; process programming; VLSI
systems; point-to-point communication channels; binary 6-cube; cosmic cube;
hardware simulation; VLSI implementation; single-chip nodes; concurrency
Class codes: C5440; C5620
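
The cosmic-cube style of message passing is easiest to show today in
MPI terms (MPI postdates the machine; this sketch is mine, not Seitz's
node code): each node exchanges with the neighbor across one cube
dimension, rank XOR 2^d, giving a cube-wide sum in log2(P) steps.

/* Hedged sketch in modern MPI (not from the paper): nodes of a binary
 * d-cube exchange with the neighbor across each dimension, summing one
 * value cube-wide in log2(P) steps.  Assumes P is a power of two. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sum = rank;                          /* each node's local value */
    for (int bit = 1; bit < size; bit <<= 1) {
        int partner = rank ^ bit;            /* neighbor across one dim */
        int theirs;
        MPI_Sendrecv(&sum, 1, MPI_INT, partner, 0,
                     &theirs, 1, MPI_INT, partner, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        sum += theirs;                       /* messages, not shared vars */
    }
    if (rank == 0)
        printf("cube-wide sum = %d\n", sum); /* = 0+1+...+(P-1) */
    MPI_Finalize();
    return 0;
}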

%A Edward Gehringer
%A Daniel P. Siewiorek
%A Zary Segall
%Z CMU
%T Parallel Processing: The Cm* Experience
%I Digital Press
%C Boston, MA
%D 1987
%K book, text, multiprocessor,
%K grequired91(5): enm, ag, jlh, dp, dar,
%O ISBN 0-932376-91-6 $42
%X Looks okay!
%X [Extract from inside front cover]
.... a comprehensive report of the important parallel-processing
research carried out on Cm* at Carnegie-Mellon University. Cm* is a
multiprocessing system consisting of 50 tightly coupled processors and
has been in operation since the mid-1970s.
Two operating systems, StarOS and Medusa, are part of its development,
along with a vast number of applications.
%X (JLH & DP) This book reviews the Cm* experience.  The book
discusses hardware issues, operating system strategies,
programming systems, and includes an extensive discussion of the
experience with over 20 applications on Cm*.
%X (DAR) a must read to avoid re-inventing the wheel.

%A John Hennessy
%A David Patterson
%T Computer Architecture: A Quantitative Approach, 2nd ed.
%I Morgan Kaufmann Publishers Inc.
%C Palo Alto, CA 94303
%D 1995
%O ISBN 1-55860-069-8
%K books, text, textbook, basic concepts, multiprocessors,
computer architecture, textbook, pario bib,
%K grequired97(5): rgs, dn, a(umn), dab, sm,
%X http://Literary.com/mkp/new/hp2e/hp2e_index.shtml
%X This is an excellent book, and I would guess it is suitable
for second- or final-year undergraduate use.
%X The book emphasises quantitative measurement of various architectures,
as hinted at in the title. Thus, benchmarking, using real applications, is
heavily emphasised. Naturally, considering the authors, the benefits of
the class of processors generically referred to as 'RISC' are highlighted.
%X The book costs £25 Sterling here in England (hard-back).
%X Chapter titles are:
1. Fundamentals of Computer Design
2. Performance and Cost
3. Instruction Set Design: Alternatives and Principles
4. Instruction Set Examples and Measurements of Use
5. Basic Processor Implementation Strategies
6. Pipelining
7. Vector Processors
8. Memory-Hierarchy Design
9. Input/Output
10. Future Directions
Appendix A: Computer Arithmetic
Appendix B: Complete Instruction Set Tables
Appendix C: Detailed Instruction Set Measurements
Appendix D: Time Versus Frequency Measurements
Appendix E: Survey of RISC Architectures
%X Looks like a great coverage of architecture. Of course a chapter on I/O!
[David.Kotz@Dartmouth.edu]
%X Watch for printing or edition number in paper copies
(The "V. Pratt" Warning).

%A M. Ben-Ari
%T Principles of Concurrent and Distributed Programming
%I Prentice Hall International, Inc.
%C Englewood Cliffs, NJ
%D 1989
%O ISBN 0-13-711821-X
%K conditional grequired91 (1986 version was the suggested version, see VRP),
parallel processing (electronic computers),
%K sc, +3 votes posted from c.e. discussion.
%X Sound familiar?
%X I (VRP) ran into a problem with Prentice-Hall over Ben-Ari: they do not
regard his rewrite as a 2nd edition but as a completely new book.  If
you order it under the title you give in your bibliography THEY WILL
SHIP YOU THE OLD BOOK.  The Stanford bookstore even called them to ask
whether they'd be receiving the new edition and P-H told them that if
the instructor ordered it under the old title that was what he must want.
%X Why a publishing company would not only create a situation with such an
obvious built-in pitfall but then proceed to firmly and insistently
push their customers into this pit is utterly beyond me.  God and
publishers move in mysterious ways.
%X Moral: Change your title to "Principles of Concurrent and Distributed
Computing" and don't refer to it as "the second edition" since it isn't.

%A W. Daniel Hillis
%T The Connection Machine
%S Series in Artificial Intelligence
%I MIT Press
%C Cambridge, MA
%D 1985
%K book, text, PhD thesis, massive parallelism, SIMD, TMC CM-1,
%K grequired96, 91(5): JLb, dar, jwvz, dn, wdh,
%O ISBN #: 0262580977 $15.95 [1989 printing?]
%X TMC CM-1.
%X Has a chapter on why computer science is no good.
%X Patent 4,709,327, Connection Machine, 24 Nov 87 (individuals)
"Parallel Processor / Memory Circuit", W. Daniel Hillis, et al.
This looks like the meat of the Connection Machine design.
It probably has lots of stuff that until the patent was considered
proprietary.
%X Another dissertation rehash, woefully lacking in details
(a personal gripe about MIT theses), but otherwise a decent CM introduction.
%X Top-10 required reading in computer architecture to Dave Patterson.
%X See references for more readings.

%A Vipin Kumar
%A Ananth Grama
%A Anshul Gupta
%A George Karypis
%T Introduction to Parallel Computing:
Design and Analysis of Algorithms
%I Benjamin Cummings
%C Redwood City, CA
%D 1994
%K book, text, grequired(5)01 98/94: mp, a (umn), rc, gvw, mic,
scalability,
%X This new book takes an in-depth look at techniques for the
design and analysis of parallel algorithms.  Its broad, balanced
coverage of important core topics includes sorting and graph
algorithms, discrete optimization techniques, and scientific
computing applications.  The authors focus on parallel
algorithms for realistic machine models while avoiding
architectures that are unrealizable in practice. Numerous
examples and diagrams illustrate potentially difficult subjects
and each chapter concludes with an extensive list of
bibliographic references. In addition, problems of varying
degrees of difficulty challenge readers at different levels.
This is an ideal book for students and professionals who want
insight into problem-solving with parallel computers.
%X For a detailed ASCII brochure reply to: parallel@bc.aw.com.
%X This is to announce the availability of supplementary material and
other information regarding the text book "INTRODUCTION TO PARALLEL
COMPUTING: DESIGN AND ANALYSIS OF ALGORITHMS" (by Kumar, Grama,
Gupta and Karypis, Publisher: Benjamin Cummings, November 93) by
anonymous ftp.
%X The following supplementary material is currently available via
anonymous ftp from the sites ftp.cs.umn.edu:users/kumar/book and
bc.aw.com:bc/kumar:
%X
a) Postscript files containing the figures, tables and pseudocodes
in the text.
b) Errata sheet.
%X If you would like to receive more information on how to retrieve
these, or about the book in general, or be added to a mailing list
announcing updates and additional material on the book, you can
send E-MAIL to book-vk@cs.umn.edu.
%X Solutions to problem sets in the book are available in an
instructors guide directly from Benjamin/Cummings (or contact your
local Addison Wesley / Benjamin/Cummings representative).
%X (GVW) An excellent text on parallel algorithms.
The book's great strength is that
its authors invest little time in PRAM algorithms,
preferring ones which can be implemented on machines that
can actually be built.
The chapters discussing architectures and programming languages
are run-of-the-mill, but the bulk of the book describes and analyzes a
wide variety of algorithms for mesh- and hypercube-based multicomputers
(chosen as representative of sparsely-connected and densely-connected
machines respectively).
%X (anon.) While primarily a book on parallel algorithms,
it surveys parallel programming languages and models,
uses better examples than most texts, and is mercifully light on
impenetrable notation (a problem common to most algorithmist authors).
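
The book's scalability analyses look roughly like this worked sketch
(mine; the communication constant c is an assumption, not a number
from the book): model adding n numbers on p processors as
T_p = n/p + c*log2(p), then compute speedup S = T_1/T_p and
efficiency E = S/p.

/* Sketch (mine) in the book's analytic style: speedup and efficiency
 * for a parallel sum under an assumed communication cost model. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double n = 1 << 20;                /* problem size (unit ops)    */
    double c = 10.0;                   /* assumed cost per comm step */
    for (int p = 2; p <= 1024; p *= 4) {
        double t1 = n;                 /* serial time                */
        double tp = n / p + c * log2((double)p);
        double s  = t1 / tp;           /* speedup                    */
        printf("p = %4d  S = %7.2f  E = %.3f\n", p, s, s / p);
    }
    /* Efficiency falls as p grows at fixed n; growing n with p
     * (the book's isoefficiency idea) restores it. */
    return 0;
}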

%A Michael J. Quinn
%T Parallel Computing: Theory and Practice, 2nd ed.
%I McGraw-Hill
%C New York
%D 1994
%K book, text,
%K grequired(5): dgg, fpst, dfk, gvw, mic,
%X The man keeps his word!
%X Second edition, renamed from his original book:
        Designing Efficient Algorithms for Parallel Computers
        Supercomputing and Artificial Intelligence series,
        McGraw Hill, New York, 1987.
%X (GVW) A good introductory text on parallel computing, which
interleaves presentations of various parallel algorithms with
discussions of
their implementation and performance. The book also contains a good
overview of the kinds of parallel programming systems which
most parallel computer users are likely to encounter.
%X Chap. 8 parallel FFT algorithms (hypercube).
%X The 3rd printing has a page full of errata, dated Dec. 1995.
You might check MQ's web page for the errata.



Articles: comp.parallel
Administrative: eugene@cse.ucsc.edu.SNIP
Archive: http://groups.google.com/groups?hl=en&group=comp.parallel
