[l/m 10/2/2001] group history/glossary comp.parallel (4/28) FAQ

Archive-Name: superpar-faq
Last-modified: 2 Oct 2001

2	Introduction and Table of Contents and justification
4	Comp.parallel news group history, glossary, etc.
6	parlib
8	comp.parallel group dynamics
10	Related news groups, archives and references
18	Supercomputing and Crayisms
20	IBM and Amdahl
22	Grand challenges and HPCC
24	Suggested (required) readings
26	Dead computer architecture society
28	Dedications

News group history

Comp.parallel began in the late 1980s as a mailing list, run by
"Steve" Stevenson at Clemson University, specifically for
Floating Point Systems T-series hypercubes.  The list was later
gatewayed to Usenet, originally as comp.hypercube.  About six months in,
someone suggested that the group cover all parallel computing.
That's when it was changed (by democratic vote, to be sure) to the
moderated Usenet group comp.parallel.

Comp.parallel distinguished itself as one of the better Usenet groups,
with a high "signal to noise" posting ratio.
Prior to comp.parallel, parallel computing and supercomputing
[aka high performance computing] were discussed in
the unmoderated Usenet group comp.arch (poor signal to noise ratio).

I forget (personally) the discussion which went along with the creation of
comp.sys.super and comp.unix.cray.  It is enough to say that "it happened."

Comp.sys.super started as part of the "Great Usenet Reorganization"
(circa 1986/7).
C.s.s. was just seen as part of the existing sliding scale of
computer performance (from micros to supers).
Minicomputers (16-bit LSI machines) started disappearing about this time.

Where's the charter?

It will be inserted here.

What's okay to post here?

Most anything related to
	parallel computing (comp.parallel) or
	supercomputing (comp.sys.super, but unmoderated).
Additionally, one typically posts opinions about policy relating to
running the news group (i.e., news group maintenance).
Largely, it is up to the moderator of
comp.parallel to decide what ultimately propagates (in addition to the
usual propagation problems [What? You expect news to be propagated reliably?
I have a bridge to sell and some land in Florida which is occasionally
above water.]).

We are not here to hold your hand.  Read and understand the netiquette posts
in groups such as news.announce.newusers (or de.newusers or similar groups).
	Netiquette != etiquette.
	Netiquette ~= etiquette.
	Netiquette not = etiquette.
Avoid second- and third-degree flames: no pyramid posts or sympathy card calls.
Sure, someone might be dying, but that's more appropriate in other groups.
We have posted obits and funeral notices (e.g., Sid Fernbach, Dan Slotnick).
No spam.  We will stop spam, especially cross-posted spam.

	Current (1996) SPAM count to (comp.parallel): growing.
	Current (1996) SPAM count to (comp.sys.super): more than c.p.

The spam count is the number of attempts to spam the group which get
blocked by moderation.

	One more note:
	Good jokes are always appreciated.  Is it Monday?

Old joke (net.arch: 1984) with many variants:

In the 21st Century, we will have greater than Cray-1 power
with massive memories and huge disks, easily carryable under the arm
and costing less than $3000, and the first thing the user asks:
	"Is it PC compatible?"

Guidance on advertising:
Keep it short and small.  This applies to post-doc positions, employment ads,
product announcements, etc.  Don't post them too frequently.

What's okay to cross-post here?

Your moderators are in communication with other moderators.
Currently, if you cross-post to two or more moderated news groups,
a single moderator can approve or cancel such an article.
Mutual agreements for automatic cross-post approval have been
negotiated with:
	news.announce.conferences (moderator must email announcement to n.a.c.)

You are free to separately dual post (this isn't a cross-post) to
those moderated news groups.

Group Specific Glossary

Q: What does PRAM stand for?

Confused by acronyms?

The following are noted but not endorsed (other name collisions possible):
Frequent acronyms:
ICPP:	International Conference on Parallel Processing
ICDCS IDCS DCS:	International Conference on Distributed Computing Systems
ISCA:	International Symposium on Computer Architecture
MIN:	Multistage Interconnection Network
IJCNN:	International Joint Conference on Neural Networks
ACM && IEEE/CS:	two professional computer societies
	ACM: the one with the SIGs, IEEE: the one with the technical committees
INNS:	International Neural Network Society
CCC:	Cray Computer Corporation (defunct)
CRI:	Cray Research Inc. (SGI div.)
SSI(1):	Supercomputer Systems, Inc., Eau Claire, WI, S. Chen
SSI(2):	Supercomputer Systems, Inc., San Diego, CA
SSI(3):	Supercomputer Systems, AG, Zurich, CH
CDC:	Centers for Disease Control and Prevention
	Control Data Corporation (defunct)
CDS:	Control Data Services (defunct now CDC => CDS => Syntegra)
DMMP DMC: Distributed Memory Multi-Processor/Computer
DMMC:	Distributed Memory Multiprocessor Conference (aka Hypercube Conference)
ERA:	Engineering Research Associates
ETA:	nothing or Engineering Technology Associates (depending who you talk to)
ASC:	Texas Instruments Advanced Scientific Computer (real old)
ASCI:	Accelerated Strategic Computing Initiative
ASPLOS: Architectural Support for Programming Languages and Operating Systems

IPPS:	International Parallel Processing Symposium
JPDC:	Journal of Parallel and Distributed Computing
MIDAS:	Don't use.  Too many MIDASes in the world.
MIP(S):	Meaningless Indicators of Performance; also MFLOPS, GFLOPS, TFLOPS,
	PFLOPS (also substitute IPS and LIPS [logical inferences] for FLOPS)
NDA:	Non-disclosure Agreement
POPL:	Principles of Programming Languages
POPP PPOPP PPoPP: Principles and Practice of Parallel Programming
HPF:	High Performance Fortran (a parallel Fortran dialect)
MPI:	Message Passing Interface (also see PVM)
PVM:	Parallel Virtual Machine (clusters/networks of workstations)
	also see MPI
	Parallel "shared" Virtual Memory [Not the same as the other PVM]
SC'xx:	Supercomputing'xx (a conference, not to be confused with the journal)
SGI:	Silicon Graphics, Inc.
SUN:	Stanford University Network
SOSP:	Symposium on Operating Systems Principles
SPDC:	Symposium on Principles of Distributed Computing
SPAA:	Symposium on Parallel Algorithms and Architectures
TOC/ToC: IEEE Transactions on Computers
	Table of Contents
TOCS: ACM Transactions on Computer Systems
TPDS/PDS: Transactions on Parallel and Distributed Systems,
		Partitioned Data Set
TSE: Transactions on Software Engineering
Pascal && Unix && Tera:	They aren't acronyms.

You can suggest others.....
We have dozens of others, we are not encouraging their use.
This is a list of last resort.
While people use these macros in processors like BibTeX, many interdisciplinary
applications people reading these groups are clueless.  USE THE COMPLETE
expansion when possible, or include the macro with the citation.
Leave it out, and you will appear like
	"one of those arrogant computer scientists..." to quote a friend.

Less volatile acronyms (accepted in the community):

SISD: [Flynn's terminology] Single-Instruction stream, Single-Data stream
SIMD: [Flynn's terminology] Single-Instruction stream, Multiple-Data stream
MISD: [Flynn's terminology] Multiple-Instruction stream, Single-Data stream
MIMD: [Flynn's terminology] Multiple-Instruction stream, Multiple-Data stream
SPMD: Single Program Multiple Data [Darema's term]: a looser variant of SIMD/MIMD,
	typically with a non-shared address space.
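
The SPMD style can be illustrated without reference to any particular
machine: every process runs the same program, and each selects its own share
of the data by rank.  A minimal sketch (hypothetical example; real SPMD codes
of the period used MPI or PVM rather than Python):

```python
# SPMD sketch: every worker runs the SAME program on a DIFFERENT
# slice of the data, chosen by its rank.
from multiprocessing import Pool

def worker(args):
    rank, nprocs, data = args
    my_slice = data[rank::nprocs]   # this rank's strided share of the input
    return sum(my_slice)            # local partial result

def spmd_sum(data, nprocs=4):
    with Pool(nprocs) as pool:
        partials = pool.map(worker, [(r, nprocs, data) for r in range(nprocs)])
    return sum(partials)            # combine partials (a "reduction")

if __name__ == "__main__":
    print(spmd_sum(list(range(100))))   # 4950, same as the serial sum
```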

PRAM: Parallel Random Access Machine
QRQW: Queue-Read, Queue-Write (a PRAM variant)
EREW: Exclusive access Read, Exclusive access Write
CREW: Concurrent read, exclusive write PRAM
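
These models differ only in which same-cell accesses are legal within a single
step; CRCW (concurrent read, concurrent write), not listed above, completes the
family.  A toy conflict checker makes the distinction concrete (an illustrative
sketch only; the PRAM is a theoretical machine model, not something you run):

```python
# Classify one PRAM step: given the cell index each processor reads
# and writes this step (or None), return the weakest model permitting it.
def classify_step(reads, writes):
    def conflict(accesses):
        cells = [c for c in accesses if c is not None]
        return len(cells) != len(set(cells))   # same cell touched twice?
    if conflict(writes):
        return "CRCW"   # concurrent writes need the strongest model
    if conflict(reads):
        return "CREW"   # concurrent reads, but all writes exclusive
    return "EREW"       # every access is exclusive

print(classify_step(reads=[0, 0, 1], writes=[2, 3, None]))  # CREW
```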
ASCI = Accelerated Strategic Computing Initiative
        (i.e. simulating nuclear bombs, so we don't feel
         compelled to blow them up in order to test them.)
ASCI Red = the Intel machine at Sandia National Labs,
        consisting of >9000 200 MHz Pentium Pro cpus
        in a 2-D mesh configuration.
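
The ">9000" figure squares with the machine's roughly 1.8 TFLOPS peak if one
assumes the commonly quoted 9,216-processor configuration and one
floating-point result per clock per CPU (both are assumptions here, not stated
above):

```python
# Back-of-the-envelope peak for ASCI Red.
# Assumptions (not from the text above): 9216 CPUs, 1 FLOP/cycle/CPU.
cpus = 9216
clock_hz = 200e6          # 200 MHz Pentium Pro
flops_per_cycle = 1
peak = cpus * clock_hz * flops_per_cycle
print(peak / 1e12)        # 1.8432 -> about 1.8 TFLOPS peak
```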
ASCI Blue = Two systems, both targeted at 3 TFLOPS peak,
        1 TFLOPS sustained:
        1. A future IBM machine to be installed at Lawrence
           Livermore National Labs.  By the end of 1998 or 
           early 1999, it should be 512 SMP machines in
           a message-passing cluster.  Each machine is based
           on 8 PowerPC 630 processors.  For starters, IBM
           has installed an SP-2 machine.
        2. An SGI/Cray machine installed at Los Alamos National Lab. (LANL).
a. The machine is now in place.  It is composed of 48 SMPs, each
   with 128 PEs, for a total of 6144 processors.

b. A nit: LANL is a lab, not 'labs'. I actually saw the memo
   decreeing this, along with a rationale. Main reason? To ferret
   out outsiders... :)

c. If appropriate, here's the url:
     Blue pac: www.llnl.gov/asci/platforms/bluepac
     Red: http://mephisto.ca.sandia.gov/TFLOP/sc96/index.html

ASCI White

	Building under construction.

Shared Memory

1. A glossary of terms in parallel computing can be found at:
   (Most of this was taken from my IEEE P&DT article w/o my
   permission, and without proper credit; the credit thing has
   apparently now been fixed.)
2. My history of parallel computing is available as technical report
   CSRI-TR-312 from the Computer Systems Research Institute,
   University of Toronto, at:

%A Gregory V. Wilson
%T A Chronology of Major Events in Parallel Computing
%R CSRI-312
%I U. of Toronto, DCS
%D December 1994
%X ftp://ftp.csri.toronto.edu/csri-technical-reports
%X See Wilson's 1995/6 text book.


	http://www.convex.com/	# this might change	H-P
Got the pattern?


Brazil Parallel Processing Homepage
Dataflow Webpages
	# the UCC.ie web pages and mailing list have gone

	Use of this acronym is declining.

Other mailing lists

Where can I find "references?"

   BEWARE: The Law of Least Effort! (*if you need this reference, mail me.)

The references provided herein are not intended to be comprehensive for the
most part.  That's the purview of a bibliography.

The major biblios I am aware of:
Mine; I will attempt to integrate the following as well:
Cherri Pancake's parallel debugging biblio
David Kotz's parallel I/O biblio
H.T. Kung's Systolic array biblio 

NCSTRL Project: (from ARPA: CSTR)
the Unified CS TR index:

If you ask a query, and I know the answer, I might give you a quick
search off the top of the biblio, but I'm not your librarian.
I am a Journal Associate Editor for John Wiley & Sons, Inc.
If I don't answer, I don't have the time or don't know you well enough.
Knowledgeable people have up-to-date copies of my biblio
(and the other biblios).

If you are a student or a prof, and you assemble a biblio on some topics,
1) if you use one of these biblios: ACKnowledge that fact.
2) If you post it, separate the new entries and submit them directly to me.
If you don't, you make busy work for those of us maintaining it, because
we have to resolve entry collisions (that's not as simple as you might think:
name differences [full vs. abbreviated names], BibTeX macros without the
expansion [do you have any appreciation how irksome that is to some people?]).

Assembling a biblio is a fine student exercise, BUT
it should build on existing information.  It should also minimize the
propagation of typos and other errors (we are all still finding them in
the existing biblios).

Notorious (frequently posted) biblio topics:

	MINs (multistage interconnection networks).
	Load balancing.

While clearly important, these are topics which bore and upset some people
(ignore them, they can hit 'n' on their news system).  You are supposed to
kill file this FAQ after reading it (subject to last modified dates,
of course).

Some very telling personal favorite quotes from the literature of
parallel processing:

[Wulf81] describes the plight of the multiprocessor researcher:
We want to learn about the consequences of different designs on
the usability and performance of multiprocessors.
Unfortunately, each decision we make precludes us from exploring its
alternatives.  This is unfortunate, but probably inevitable for hardware.
Perhaps, however, it is not inevitable for the software....
and especially for the facilities provided by the operating system.

[Wulf81, pp. 276]:
In general, we believe that it's possible to make two major mistakes at the
outset of a project like C.mmp.  One is to design one's own processor;
doing so is guaranteed to add two years to the length of the project and,
quite possibly, sap the energy of the project staff to the point that nothing
beyond the processor ever gets done.  The second mistake is to use someone
else's processor.  Doing so forecloses a number of critical decisions, and thus
sufficiently muddies the water that crisp evaluations of the results are
difficult.  We can offer no advice.  We have now made the second mistake*
-- for variety, next time we'd like to make the first!  Given the chance, our
processor would:
	*[Wulf81]: Twice, in fact.  The second multiprocessor project
	at C-MU, Cm*, also uses the PDP-11.

Be both inherently more reliable and go to extremes not to propagate errors;
once an error is detected, it would report that error without further effect
on the machine state.

Provide rapid domain changing; we see no inherent reason that this should
require more than, say, a dozen instruction times.

Provide an adequate address space; actually, rather than a larger number of
address bits, we would prefer true capability-based addressing [Fabry74] at
the instruction level since this leads to a logically infinite address space.

"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other reason -- including blind
stupidity." -- Wm. A. Wulf

Make it work first before you make it work fast.
	--Bruce Whiteside in J. L. Bentley, More Programming Pearls

Articles: comp.parallel
Administrative: eugene@cse.ucsc.edu.SNIP
Archive: http://groups.google.com/groups?hl=en&group=comp.parallel
