Do we really need virtual machines?

Hallo,
According to their proponents virtual machines such as JVM and CLR are
the solution to all our (programming) problems, of which portability
is but one.

Maybe it's just because when I learnt programming the p-machine was
considered an interesting oddity, but with the exception of code that
really must run unchanged on unknown platforms, I fail to see what
I gain from a virtual machine that I don't already get from a good old
compiler/runtime support/standard library chain.

After all, isn't gcc the most ported virtual machine of all?

Now, this being the compiler forum, I'm interested in learning about
the advantages of virtual machines from the compiler writer
perspective.

Thank you,
Nicola Musatti
10/2/2004 5:17:26 AM

Nicola Musatti wrote:

> Maybe it's just because when I learnt programming the p-machine was
> considered an interesting oddity, but with the exception of code that
> really must run unchanged on unknown platforms, I fail to see what
> I gain from a virtual machine that I don't already get from a good old
> compiler/runtime support/standard library chain.

There is no requirement to virtualize code in order to gain
portability.  Quite the contrary, what the JVM and other
implementations do is perform reference checking, and JVM even tries
to do that before running the code (virtually or not) in order to
enhance performance. Further, a good virtual memory implementation can
perform complete reference checking as well.

To be fair, I don't think anyone from the JVM or .NET crowd is
claiming that virtualization is required for portability; otherwise
the just-in-time compiler ideas would make no sense (I don't believe
.NET even has a virtual machine implementation).

100% portability with standard compilers is a worthwhile study (I
define 100% portability as recompile only). I would divide the
portability problem into semantics and enforcement. Semantics is having
the same code perform the same action on different implementations;
enforcement is making sure the program does not violate the rules
under which the semantics are guaranteed to be correct. There seems to
be lots of agreement on semantics, but not much on enforcement.

--
Samiam is Scott A. Moore
Scott
10/2/2004 8:17:54 PM
On 2 Oct 2004 01:17:26 -0400, Nicola Musatti <Nicola.Musatti@ObjectWay.it>
wrote:

> Hallo,
> According to their proponents virtual machines such as JVM and CLR are
> the solution to all our (programming) problems, of which portability
> is but one.

Well I hope they aren't saying it will solve my performance problems -
because I'd have a hard time believing that!

> Maybe it's just because when I learnt programming the p-machine was
> considered an interesting oddity, but with the exception of code
> that really must run unchanged on unknown platforms, I fail to see
> what I gain from a virtual machine that I don't already get from
> a good old compiler/runtime support/standard library chain.

If you view a virtual machine as just a well-defined platform
targeted by compilers, then it seems there is nothing you can do for
a virtual machine that cannot be done for some other platform.  A VM
is practically synonymous with an emulator except that VM code is
designed to be emulated.

> After all, isn't gcc the most ported virtual machine of all?

I think it's not a virtual machine; when the program is running, gcc
is no longer running.

> Now, this being the compiler forum, I'm interested in learning about
> the advantages of virtual machines from the compiler writer
> perspective.

Because you get to design the instruction set, you can make it more
"friendly" to the compiler writers.

In the case of the JVM or SafeTSA you can make it simple to perform a
safety validation; safety proofs for other machine codes are far more
difficult.

But then again, most of the VM's benefit is portability - it is usually
easier to write an interpreter for virtual machine code than for, say,
Intel x86 machine code.
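
As a minimal sketch of how little such an interpreter needs (the opcodes
below are invented for illustration, not the JVM's or any other real VM's
bytecode), here is a complete stack-machine interpreter in Java:

    // Minimal stack-machine interpreter; the instruction set is made up.
    public class TinyVM {
        static final int PUSH = 0, ADD = 1, MUL = 2, PRINT = 3, HALT = 4;

        static void run(int[] code) {
            int[] stack = new int[256];
            int sp = 0, pc = 0;
            while (true) {
                switch (code[pc++]) {
                    case PUSH:  stack[sp++] = code[pc++]; break;  // operand follows opcode
                    case ADD:   stack[sp - 2] += stack[sp - 1]; sp--; break;
                    case MUL:   stack[sp - 2] *= stack[sp - 1]; sp--; break;
                    case PRINT: System.out.println(stack[--sp]); break;
                    case HALT:  return;
                    default:    throw new IllegalStateException("bad opcode");
                }
            }
        }

        public static void main(String[] args) {
            // (2 + 3) * 4  ==>  prints 20
            run(new int[] { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT });
        }
    }

A verifier in the JVM/SafeTSA spirit would be a similar single pass over
the instruction array, checking stack depth and operand kinds before the
code ever runs; doing the same for raw x86 is far harder.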

CU
Dobes
Dobes
10/2/2004 8:18:30 PM
Nicola Musatti wrote:

> According to their proponents virtual machines such as JVM and CLR
> are the solution to all our (programming) problems, of which
> portability is but one.

You are right. Distinct instruction sets are an obstacle to
portability. But my experience with learning Java tells me another
lesson: The instruction set is hard enough to create portably (see
below), but it is only the first step of portability. Look at Java and
C#: the most important obstacle to portability is the sheer number of
libraries, with all their quirks and dead ends (AWT).

Has anyone ever noticed that the "standard libraries" that come with
Java and C# are attempts to recreate Unix-like Operating Systems ?
Including APIs for memory, file system, scheduler, terminal, printer,
network, clock, GUI ...  O'Reilly publishes dozens of Java books. What
are they all about ? Library details; most of them becoming obsolete
if C# succeeds. Stuffing Java library details into my brain seems like
a waste of brain capacity.

> Now, this being the compiler forum, I'm interested in learning about
> the advantages of virtual machines from the compiler writer
> perspective.

Our moderator (John) has pointed out _very_ often that language
designers should first learn the lessons of the UNCOL period (1960s)
before they start inventing yet another UNCOL-variant. Nobody listens
to John; they are just too busy.
[The original UNCOL proposal was published in the CACM in 1958, and
the report on the first (and last) version was in the 1961 winter JCC
proceedings.  Also see a collection of UNCOL references I pulled
together in comp.compilers 13 years ago at
http://compilers.iecc.com/comparch/article/91-12-058 -John]
Juergen
10/2/2004 8:35:11 PM
On 2004-10-02, Jürgen Kahrs <Juergen.Kahrs@vr-web.de> wrote:
> Nicola Musatti wrote:
>
>> According to their proponents virtual machines such as JVM and CLR
>> are the solution to all our (programming) problems, of which
>> portability is but one. [...]
>
> Has anyone ever noticed that the "standard libraries" that come with
> Java and C# are attempts to recreate Unix-like Operating Systems ?
> Including APIs for memory, file system, scheduler, terminal, printer,
> network, clock, GUI ...  [....]
>
>> Now, this being the compiler forum, I'm interested in learning about
>> the advantages of virtual machines from the compiler writer
>> perspective.
>
> Our moderator (John) has pointed out _very_ often that language
> designers should first learn the lessons of the UNCOL period (1960s)
> before they start inventing yet another UNCOL-variant. Nobody listens
> to John; they are just too busy.

  I mostly agree with that comment, but I tend to believe that work
could still be done to define an intermediate language (not
necessarily a VM) for a common family of languages. For example, I
believe it could be possible to define an intermediate language
suitable for OCaml and SML, and perhaps also for Scheme. This form
won't be suitable for Java or Eiffel or C++ or C.

In other words, I would imagine that an intermediate form similar to
the "lambda" representation of the Ocaml compiler could be suitable
for SML and perhaps Scheme. Of course, the current "lambda"
representation is not it, but could be a little bit extended or
adapted.
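
As a purely illustrative sketch (hypothetical Java, not the OCaml
compiler's actual "lambda" form), such a shared intermediate form needs
little more than variables, constants, abstractions, applications, lets
and primitives:

    // Hypothetical shape of a lambda-style IR shared by ML-family front ends.
    abstract class Lam {}
    class Var   extends Lam { final String name;  Var(String n) { name = n; } }
    class Const extends Lam { final Object value; Const(Object v) { value = v; } }
    class Abs   extends Lam { final String param; final Lam body;
                              Abs(String p, Lam b) { param = p; body = b; } }
    class App   extends Lam { final Lam fun, arg; App(Lam f, Lam a) { fun = f; arg = a; } }
    class Let   extends Lam { final String name; final Lam bound, body;
                              Let(String n, Lam v, Lam b) { name = n; bound = v; body = b; } }
    class Prim  extends Lam { final String op; final Lam[] args;  // e.g. "+", "car"
                              Prim(String o, Lam[] a) { op = o; args = a; } }

    // (fun x -> x + 1) 41 becomes:
    //   new App(new Abs("x", new Prim("+", new Lam[] { new Var("x"), new Const(1) })),
    //           new Const(41))

Closures, tail calls and boxed data map naturally onto such a form;
classes, C-style memory games and the like do not, which is roughly why
the same IR would not serve Java, Eiffel, C++ or C.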

All this is just a belief, not a result of hard work.

On a vaguely related note, I feel very sorry that the JVM and CLR did
not evolve to incorporate the few necessary features - notably closures
and tail-recursive calls - to make them really usable for functional
languages like the ML family or Scheme. I suppose that the major
reason is not technical, but economic or political. (IIRC, some
relevant extensions to the JVM have been proposed by the functional
community but rejected by Sun.)

Regards.
--
Basile STARYNKEVITCH         http://starynkevitch.net/Basile/
email: basile<at>starynkevitch<dot>net
aliases: basile<at>tunes<dot>org = bstarynk<at>nerim<dot>net
8, rue de la Faïencerie, 92340 Bourg La Reine, France
[Common intermediate languages work OK if the source languages are
semantically similar and the targets are all about the same, e.g.
32 bit byte addressed twos-complement machines with flat addressing.
They collapse when you try to generalize them. -John]

Basile
10/4/2004 4:39:41 AM
On 2 Oct 2004 16:35:11 -0400, Jürgen Kahrs <juergen.kahrs@vr-web.de> wrote:

> Has anyone ever noticed that the "standard libraries" that come with
> Java and C# are attempts to recreate Unix-like Operating Systems ?

Yes, this is true. But I think that these attempts are very useful.

> Including APIs for memory, file system, scheduler, terminal, printer,
> network, clock, GUI ...  O'Reilly publishes dozens of Java books. What
> are they all about ? Library details; most of them becoming obsolete
> if C# succeeds. Stuffing Java library details into my brain seems like
> a waste of brain capacity.

I don't agree with your opinion. I have done some big Java
applications and I can run them on Windows, Solaris and Linux without
ANY problems and without needing to compile again. In fact, I don't
need to modify anything.  And this is true because Sun has made the
effort to recreate the standard libraries and unify them in the Java
library.

And if you use, for example, C++, which is possibly the most standard
language, you have more problems with portability even with the
standard libraries, because the standard libraries don't cover all of
the functionality you want in most of your programs, and you have to
use system-dependent libraries. For example, what if you want to write
an app with a GUI? (Yes, you have Qt, but...)  There are also compiler
incompatibilities: a lot of the time you have code that runs fine with
GNU g++ but not with MSVC.

Well, in conclusion, I think that simply building a VM doesn't solve
your portability problems in a magic way. On every platform where you
want to run code, you still have to battle through writing a
VM-to-native-code translator and also porting the libraries if you
define them. But I think that this isn't a waste of time.

And about Nicola:

>I gain from a virtual machine that I don't already get from a good old
>compiler/runtime support/standard library chain.

I think that if you implement this you can get the same functionality
that you get with a VM (except that you have to compile on every
platform). But in contrast you will have more, or at least the same,
work to do as with a VM.  And an advantage of a VM is that you can use
one, like the JVM, that has already been built.

Cheers,
--
Joan Jesús Pujol Espinar

Joan
10/4/2004 4:40:10 AM
Jürgen Kahrs wrote:

> Our moderator (John) has pointed out _very_ often that language
> designers should first learn the lessons of the UNCOL period (1960s)
> before they start inventing yet another UNCOL-variant. Nobody
> listens to John; they are just too busy.

In addition, despite the fact that most people were reminded of P-code
when Java came out, Sun attempted to put the JVM on silicon and
achieved EXACTLY the same results Western Digital did when they tried
to put the P-machine on silicon, namely a processor that could not run
bytecode as fast as a general purpose machine. Turns out all that
register manipulation and multiple modes makes a difference after all.

The tired saying that applies is: The first thing you learn from
history is that nobody learns from history.
--
Samiam is Scott A. Moore
Scott
10/4/2004 4:40:47 AM
In a sense we are all running on virtual machines all the time, even
when programming in C++ or Assembler. Until IBM introduced the System
360 in the 60's every computer was hardwired for its own instruction
set. The 360's were different: there were a number of
microprogrammable machines that all executed the 360 instruction
set. The trend since then has been to have a standard architecture and
instruction set that ranges over all sizes and speeds, but implemented
by different microprocessors running different microcode.

So a better question might be: at what level do we want
compatibility?

(And, yes, there were nanomachines that were programmed to look like
micromachines, but you need to find someone like Sam Cohen to find out
about them.)

john slimick
slimick@pitt.edu
[Microprogramming was invented in about 1952, but it wasn't until the
1960s that ROMs got fast and cheap enough to make it practical. -John]
John
10/4/2004 4:51:58 AM
Joan Pujol <joanpujol@gmail.com> writes:
|> On 2 Oct 2004 16:35:11 -0400, Jürgen Kahrs <juergen.kahrs@vr-web.de> wrote:
|>
|> > Has anyone ever noticed that the "standard libraries" that come with
|> > Java and C# are attempts to recreate Unix-like Operating Systems ?
|>
|> Yes, this is true. But I think that this attempts are very useful.

For some meanings of the word "useful".  Unfortunately, they also
attempt to implement every restriction of those languages, such as:

A minimal (and often useless) diagnostic model - no, a single integer
code is NOT useful in practice.  When you get ENOMEM from a write to a
descriptor that is connected via TCP/IP to a remote server, what have
you run out of and in what part of the system?

A synchronous, copying I/O model that gives really poor efficiency
(and I don't just mean bandwidth).  Many of the old mainframe models
were vastly better, and we used to get very high bandwidths with very
low memory and CPU utilisation on very slow CPUs.

There are other examples, but that should do.


Regards,
Nick Maclaren.
nmm1
10/10/2004 2:30:18 AM
Dobes Vandermeer <dobes@dobesland.com> wrote:
>Nicola Musatti <Nicola.Musatti@ObjectWay.it> wrote:
>
>> After all, isn't gcc the most ported virtual machine of all?
>
>I think it's not a virtual machine; when the program is running, gcc
>is no longer running.

All languages require some kind of run time support. They also require
standard cross-module calling conventions. To some extent you can view
the combination of the run-time library and the stylized machine code
emitted by the compiler as a virtual machine. This view is made
explicit by a large proportion of functional programming language
compilers, where the technique is to compile the language to an
abstract virtual machine which can be compiled to efficient machine
code.

It's also worth considering the partial evaluation view of compilers,
where the run-time system is the residual part of the interpreter that
is left after it has been partially evaluated with respect to a
program.

Tony.
--
f.a.n.finch  <dot@dotat.at>  http://dotat.at/
LUNDY FASTNET IRISH SEA SHANNON: WEST OR SOUTHWEST 6 OR 7, OCCASIONALLY GALE 8
AT FIRST IN LUNDY AND IRISH SEA, AND PERHAPS AGAIN LATER. RAIN OR SHOWERS.
MODERATE OR GOOD.
Tony
10/10/2004 2:31:32 AM
Joan Pujol <joanpujol@gmail.com> wrote
[...]
> And if you use for example C++ , that is possible the most standard
> language, with standard libraries you have more problems with
> portability. Because the standard libraries don't covers all of the
> functionallity you want in most of your programs. And you have to use
> system dependent libaries. For example what if you want to do an app
> with a GUI? (Yes, you have QT, but...)  And also there are compilers
> incompatibilities. A lot of times you have some code that runs well
> with GNU G++ but not with MSVC.

But if I combine g++ with either each HW supplier's own Unix, Linux or
Cygwin I get a rather compliant POSIX system. The advantage of
compiling directly to native machine code should compensate for the
extra care necessary in writing portable code.

Cheers,
Nicola Musatti
Nicola
10/10/2004 2:32:01 AM
Basile Starynkevitch [news] wrote:
{stuff deleted}
> On a vaguely related note, I feel very sorry that JVM or CLR did not
> evolve to incorrporate the few necessary features -notably closures &
> tail recursive calls- to make them really usable for functional
> languages like the ML family or Scheme.
{stuff deleted}

The CLR does contain a tail-call instruction. Unfortunately, this interacts
badly with the stack walking security model. So it doesn't "do the right
thing". However, at least there was an attempt to get it right.
Daniel
10/10/2004 2:34:17 AM
John Slimick wrote:

> In a sense we are all running on virtual machines all the time, even
> when programming in C++ or Assembler. Until IBM introduced the System
> 360 in the 60's every computer was hardwired for its own instruction
> set. The 360's were different: there were a number of
> microprogrammable machines that all executed the 360 instruction
> set. The trend since then has been to have a standard architecture and
> instruction set that ranges over all sizes and speeds, but implemented
> by different microprocessors running different microcode.

> So the a better question might be: at what level do we want
> compatibility?

It is interesting that the S/360 instruction set has had as long a
life as it has, surviving two changes to its address space, huge
factors in processor speed and memory size.  S/360 is fairly simple as
instruction sets go.  Only a few addressing modes and a few
instruction lengths.

Consider how long the VAX lasted, for example, with a large number of
addressing modes and instruction lengths.

RISC architectures, being closer to the underlying hardware, I see as
shorter lived.  Architectures like Itanium, requiring the compiler to
know fine details of the hardware, less virtual in the sense described
above, might be expected to have shorter lifetimes.

There has been discussion here before about systems that would accept
an intermediate code designed to be converted to run optimally on
the specific hardware available at load time.  The JIT model doesn't
seem so bad, either.

It will be interesting to see the direction processors are going in
the future.

-- glen
[Yes indeed.  The s/360 was a remarkably good design, enough address bits
to survive much larger memories, simple enough so that it was possible to
pipeline.  The Vax, on the other hand, was an exquisite design for the
wrong technology, hand-written code on byte-addressable machines with
very limited memory and a lot of microcode. -John]

glen
10/10/2004 2:50:33 AM
Jürgen Kahrs <Juergen.Kahrs@vr-web.de> wrote
<snip>
> Has anyone ever noticed that the "standard libraries" that come with
> Java and C# are attempts to recreate Unix-like Operating Systems ?

I've not noticed any such connection.

> Including APIs for memory, file system, scheduler, terminal, printer,
> network, clock, GUI

<snip>

None of these abstractions are specific to Unix, and you omit to
mention the parts of the "standard libraries" that come with Java and
C# that have no analogue whatsoever within the Unix OS (e.g. XML
parsing support, reflection, etc.).

Cheers,

Dave
david
10/12/2004 4:51:23 AM
Nicola.Musatti@ObjectWay.it (Nicola Musatti) writes:
>According to their proponents virtual machines such as JVM and CLR are
>the solution to all our (programming) problems, of which portability
>is but one.
>
>Maybe it's just because when I learnt programming the p-machine was
>considered an interesting oddity, but with the exception of code that
>really must run unchanged on unknown platforms, I fail to see what
>I gain from a virtual machine that I don't already get from a good old
>compiler/runtime support/standard library chain.

One thing you get is a somewhat obfuscated form of the source code,
and not being able to read the source code directly seems to be
important in some areas.

Maybe that's why JVM and CLR seem to be much less popular among free
software developers than among proprietary software developers.

>After all, isn't gcc the most ported virtual machine of all?

GCC has been widely retargeted, but portability of C (or GNU C) code
requires quite a bit of work, because C allows (and requires)
accessing a lot of stuff in the environment outside the gcc run-time
library.

>Now, this being the compiler forum, I'm interested in learning about
>the advantages of virtual machines from the compiler writer
>perspective.

Well, for VMs and compilers in general, VMs are a kind of intermediate
representation with a number of advantages:

- they can be interpreted easily and efficiently.

- they can be saved and loaded relatively simply.

- they isolate the front end from the back end relatively well.
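
As a sketch of the second point (hypothetical Java; the flat int-array
format is the invented one from the interpreter sketch earlier in this
thread, not a real class file or assembly):

    import java.io.*;

    // Saving and reloading a flat bytecode image.
    public class CodeImage {
        static void save(int[] code, File f) throws IOException {
            try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
                out.writeInt(code.length);
                for (int op : code) out.writeInt(op);
            }
        }

        static int[] load(File f) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
                int[] code = new int[in.readInt()];
                for (int i = 0; i < code.length; i++) code[i] = in.readInt();
                return code;
            }
        }
    }

Real formats (class files, CLR assemblies) add constant pools, metadata
and verification information, but the core task stays about this simple.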

  - anton
--
M. Anton Ertl
anton@mips.complang.tuwien.ac.at
http://www.complang.tuwien.ac.at/anton/home.html
anton
10/17/2004 8:16:09 PM
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
>It is interesting that the S/360 instruction set has had as long a
>life as it has, surviving two changes to its address space, huge
>factors in processor speed and memory size.  S/360 is fairly simple as
>instruction sets go.  Only a few addressing modes and a few
>instruction lengths.
>
>Consider how long the VAX lasted, for example, with a large number of
>addressing modes and instruction lengths.
>
>RISC architectures, being closer to the underlying hardware, I see as
>shorter lived.

Some features of some RISC architectures, like branch delay slots
(MIPS, SPARC, 88k) and architectural load delay slots (MIPS-I) have
indeed proved to be too close to hardware, and were eliminated in
newer RISC architectures, and sometimes eliminated or at least
deprecated even in later revisions of the same architecture.

And certainly the RISC principles seem to have provided more speedup
in the time around 1990 than nowadays, but I don't see that the RISC
principles as embodied in e.g., the Alpha architecture would provide a
disadvantage over the 360 or the 386 architecture on current hardware
or hardware in the foreseeable future.

If these architectures are not as long-lived as the 360 or 386, the
reason is not hardware, but software.

>Architectures like Itanium, requiring the compiler to know fine
>details of the hardware, less virtual in the sense described above,
>might be expected to have shorter lifetimes.

Here similarly, the software rather than the hardware will be the
decisive factor.

The compiler does not need to know details of the hardware.  Even
utilization of the fancy architectural features is optional, one could
just treat it as an ordinary RISC.

Followups to comp.arch

- anton
--
M. Anton Ertl
anton@mips.complang.tuwien.ac.at
http://www.complang.tuwien.ac.at/anton/home.html
[I would argue that the 360 survived both because it had IBM's marketing
might and because it had big enough addresses and wasn't overly optimized
for a particular hardware cost model like the Vax was.  But this argument
definitely belongs in comp.arch. See you there. -John]
anton
10/17/2004 8:18:23 PM
"Anton Ertl" <anton@mips.complang.tuwien.ac.at> wrote in message
news:04-10-140@comp.compilers...
> glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
> >It is interesting that the S/360 instruction set has had as long a
> >life as it has, surviving two changes to its address space, huge
> >factors in processor speed and memory size.  S/360 is fairly simple as
> >instruction sets go.  Only a few addressing modes and a few
> >instruction lengths.
> >
> >Consider how long the VAX lasted, for example, with a large number of
> >addressing modes and instruction lengths.
> >
> >RISC architectures, being closer to the underlying hardware, I see as
> >shorter lived.
>
> Some features of some RISC architectures, like branch delay slots
> (MIPS, SPARC, 88k) and architectural load delay slots (MIPS-I) have
> indeed proved to be too close to hardware, and were eliminated in
> newer RISC architectures, and sometimes eliminated or at least
> deprecated even in later revisions of the same architecture.
>
> And certainly the RISC principles seem to have provided more speedup
> in the time around 1990 than nowadays, but I don't see that the RISC
> principles as embodied in e.g., the Alpha architecture would provide a
> disadvantage over the 360 or the 386 architecture on current hardware
> or hardware in the foreseeable future.
>
> If these architectures are not as long-lived as the 360 or 386, the
> reason is not hardware, but software.
>
> >Architectures like Itanium, requiring the compiler to know fine
> >details of the hardware, less virtual in the sense described above,
> >might be expected to have shorter lifetimes.
>
> Here similarly, the software rather than the hardware will be the
> decisive factor.
>
> The compiler does not need to know details of the hardware.  Even
> utilization of the fancy architectural features is optional, one could
> just treat it as an ordinary RISC.
>
> Followups to comp.arch
>
> - anton
> --
> M. Anton Ertl
> anton@mips.complang.tuwien.ac.at
> http://www.complang.tuwien.ac.at/anton/home.html
> [I would argue that the 360 survived both because it had IBM's marketing
> might and because it had big enough addresses and wasn't overly optimized
> for a particular hardware cost model like the Vax was.  But this argument
> definitely belongs in comp.arch. See you there. -John]

I would posit that 360 survived because of its installed base of
business critical applications that were not easily replaceable.

del cecchi


del
10/17/2004 9:11:55 PM
"del cecchi" <dcecchi.nojunk@att.net> writes:
> I would posit that 360 survived because of its installed base of
> business critical applications that were not easily replaceable.

if it was a $100b invested in 360 application software in the early 70s
(w/360 less than 10 years ago) ... aka previous ref to previous amdahl
talk at mit
http://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?

one might conjecture it has hit several trillion. ... a good part of
it in fundamental business critical applications.

to redo and/or move the feature/function .... would need to show some
sort of cost advantage as possibly also demonstrable risk mitigation
(for some stuff, the cost to the business of an outage can be greater
than any measurable migration benefit) ... aka independent of the
difficulty of replacement ... there is also some risk issues in making
any change.

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Anne
10/17/2004 9:27:56 PM
>> [I would argue that the 360 survived both because it had IBM's marketing
>> might and because it had big enough addresses and wasn't overly optimized
>> for a particular hardware cost model like the Vax was.

>I would posit that 360 survived because of its installed base of
>business critical applications that were not easily replaceable.

Well, sure.  It keeps being easier to build new 360s than to port or
translate the code.  The final nail in the Vax's coffin was when DEC
found that they could do practical static translation to Alpha code,
which let them avoid building all the grotty instruction decoders,
register interlocks, and other stuff the Vax needed.

For the 360, certainly some part of its longevity is that old 360 code
uses EX instructions and stores into the instruction stream and
otherwise makes itself hard to translate.  But I think a large part is
that making the frequently used parts of the 360 instruction set run
fast still isn't all that hard.  The block oriented I/O architecture
helps, too.  (Can you still run 1960s channel programs on ESCON
hardware?)

For the 386 instruction set, in view of the marginal success of
Transmeta, I guess the jury is still out.


johnl
10/18/2004 12:28:41 AM
johnl@iecc.com (John R. Levine) writes:
>
> For the 386 instruction set, in view of the marginal success of
> Transmeta, I guess the jury is still out.

Transmeta isn't very successful, but VMware is (or they are, at least
for now, until AMD/Intel make virtualization a commodity). While their
task is a bit easier (translation to the same architecture with just
patching a few supervisor instructions), they still have to solve all
the hard problems, like handling self-modifying code efficiently.

Also Intel has shown with IA32el on IA64 that a software JIT can beat
a mediocre hardware implementation. The software version is considerably
faster than the built-in 32-bit hardware of McKinley.

-Andi
Andi
10/18/2004 1:47:14 AM
Anton Ertl wrote:

(snip)

> Some features of some RISC architectures, like branch delay slots
> (MIPS, SPARC, 88k) and architectural load delay slots (MIPS-I) have
> indeed proved to be too close to hardware, and were eliminated in
> newer RISC architectures, and sometimes eliminated or at least
> deprecated even in later revisions of the same architecture.

> And certainly the RISC principles seem to have provided more speedup
> in the time around 1990 than nowadays, but I don't see that the RISC
> principles as embodied in e.g., the Alpha architecture would provide a
> disadvantage over the 360 or the 386 architecture on current hardware
> or hardware in the foreseeable future.

I think I agree, though I think Alpha will be long dead 25 years
from now.  (S/360 is now over 40 years old.)  It might be for
political reasons rather than technical reasons, though.

> If these architectures are not as long-lived as the 360 or 386, the
> reason is not hardware, but software.

Well, the reason x86 lasted so long is definitely software.

S/360 had as part of its design to work over a large range
of machines.

>>Architectures like Itanium, requiring the compiler to know fine
>>details of the hardware, less virtual in the sense described above,
>>might be expected to have shorter lifetimes.

> Here similarly, the software rather than the hardware will be the
> decisive factor.

You mean the lack of software?

> The compiler does not need to know details of the hardware.  Even
> utilization of the fancy architectural features is optional, one could
> just treat it as an ordinary RISC.

When a small difference in SPEC score can make or break
a sale, I don't know that compiler writers have that choice.

I do like the idea of compiling to an intermediate form
that is then compiled for the specific CPU at program load
time. Or maybe just-in-time like the Java JVM JIT systems,
though with not quite so general an intermediate form.


(snip)

> [I would argue that the 360 survived both because it had IBM's marketing
> might and because it had big enough addresses and wasn't overly optimized
> for a particular hardware cost model like the Vax was.  But this argument
> definitely belongs in comp.arch. See you there. -John]

I agree about S/360 and VAX.  I don't know what this says
about Alpha or Itanium.

-- glen

glen
10/18/2004 4:21:47 AM
In article <2tg5h6F1u2c4nU1@uni-berlin.de>,
del cecchi <dcecchi.nojunk@att.net> wrote:
>
>I would posit that 360 survived because of its installed base of
>business critical applications that were not easily replaceable.

Nope.  That was not the main reason, at least early on.

The PC stream was marketed to people who didn't know that it wasn't
reasonable to have to power cycle a computer once every hour or so,
and the workstation ones to hackers who expected to have to modify
the software they used to make it work at all.

Yes, System/360 and the early System/370 was like that, but it was
somewhat better by the early 1980s.  And so were DEC's non-Unix
systems, and we know how they spread.

By the late 1980s, both the PC and RISC systems had improved very
considerably, and that is when the mainframes started suffering
badly.


Regards,
Nick Maclaren.
nmm1
10/18/2004 12:42:27 PM
johnl@iecc.com (John R. Levine) writes:
>The final nail in the Vax's coffin was when DEC
>found that they could do practical static translation to Alpha code,
>which let them avoid building all the grotty instruction decoders,
>register interlocks, and other stuff the Vax needed.

The VAX had several problems:

- Instruction decoding.  Looking at the stuff done for various IA-32
and especially AMD64 implementations (one decoder for three different
(though somewhat related) architectures, with various prefixes,
several instruction set extensions, and other warts), decoding several
VAX instructions per cycle should be doable nowadays.  Alternatively,
use a decoded-instruction cache, like the Pentium 4.

- execution (exceptions in the right order etc.).  With present-day
out-of-order execution techniques and instruction triage (make fast
paths for the frequent ones, and execute the others correctly), I
don't see a particular reason for being slower than, say,
implementations of the IA-32 architecture.

- 32-bit address space.  I don't know how easy it would have been to
extend the VAX to 64 bits, but certainly a VAX-like 64-bit
architecture with a 32-bit mode could have been defined (like AMD64 is
to IA-32).

Not sure what you mean by register interlocks.

Of course such a VAX implementation would have come later than the
first Alphas by several years, and would have required more design
effort and/or resulted in a slower CPU than the EV6; so going with
Alpha and binary translation probably was a good decision (I also note
that, unlike the DEC-10 and the Alpha, people don't seem to agonize
over the death of the VAX).

>For the 360, certainly some part of its longevity is that old 360 code
>uses EX instructions and stores into the instruction stream and
>otherwise makes itself hard to translate.  But I think a large part is
>that making the frequently used parts of the 360 instruction set run
>fast still isn't all that hard.

And I guess speed is also not that important.  The software was
written for slower machines anyway.  How fast are current 360
implementations in terms of CPU speed (e.g., SPECcpu)?

- anton
-- 
M. Anton Ertl                    Some things have to be seen to be believed
anton@mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html
anton
10/18/2004 2:35:25 PM
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
>Anton Ertl wrote:
[IA-64]
>> Here similarly, the software rather than the hardware will be the
>> decisive factor.
>
>You mean the lack of software?

If there continues to be a lack of software, IA-64 won't see a long
life.

>> The compiler does not need to know details of the hardware.  Even
>> utilization of the fancy architectural features is optional, one could
>> just treat it as an ordinary RISC.
>
>When a small difference in SPEC score can make or break
>a sale, I don't know that compiler writers have that choice.

Sure, for best SPEC scores the compiler should be optimized for the
implementation.  That's not just true for IA-64 implementations, but
also for implementations of other RISCs and CISCs.

However, wrt longevity I don't think this is important.  E.g., if
future IA-64 implementors found out that they could increase the clock
rate by a factor of two by implementing the plain-RISC parts of IA-64
efficiently, and implementing the fancy parts (EPIC) correctly (but
not necessarily quickly), they could just do that and have the
compilers generate plain-RISC-style code; that's similar to what
happened to IA-32 and other CISC architectures.

The calling convention might necessitate the efficient implementation
of the register-stack feature, though (just a guess).

- anton
-- 
M. Anton Ertl                    Some things have to be seen to be believed
anton@mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html
anton
10/18/2004 3:04:06 PM
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
> Nope.  That was not the main reason, at least early on.
>
> The PC stream was marketed to people who didn't know that it wasn't
> reasonable to have to power cycle a computer once every hour or so,
> and the workstation ones to hackers who expected to have to modify
> the software they used to make it work at all.
>
> Yes, System/360 and the early System/370 was like that, but it was
> somewhat better by the early 1980s.  And so were DEC's non-Unix
> systems, and we know how they spread.
>
> By the late 1980s, both the PC and RISC systems had improved very
> considerably, and that is when the mainframes started suffering
> badly.

there was a big explosion starting maybe as early as '79 in
distributed and departmental mainframes ... this is the niche that the
4341 and vax'es were selling into. there were customers buying 4341s
in quantities of multiple hundreds at a time (somewhat that the
"mainframe" price/performance had dropped below some
threshold)... example:
http://www.garlic.com/~lynn/2001m.html#15 departmental servers

by the mid-80s, the high-end pcs and workstations were starting to
take over this market .... the 4341 follow-on .... 4381 didn't see
anything like the explosion that the 4341 saw (even as replacement for
4341 as it went to end-of-life).

the other area was business PC applications ... one of the things that
helped the PC with market penetration was mainframe terminal emulation
....  business for about the same price as a mainframe terminal
.... could get in a single desk footprint ... single screen, single
keyboard ... something that provided both local computing capability
as well as mainframe terminal access.

one of the issues was that as the PC business market evolved, you
started to see more and more business applications running on the PC
.....  in part because of the human factors ... things like
spreadsheets, etc.

this resulted in a significant amount of tension between the disk
product division and the communication product division. the disk
product division wanted to introduce a brand new series of products
that gave PCs "disk-like" thruput and semantics to the glass house
mainframe disk farm. This would have an enormous impact on the install
base of communication division terminal controllers (which all the PC
terminal emulation connected to).

in the 80s ... somebody from the disk product division gave a featured
presentation at an internal worldwide communication product conference
claiming that the communication division was going to be the death of
the disk division (of course the presentation wasn't actually titled
that or they wouldn't have allowed him on the agenda).

The issue was (with the migration of applications to the PCs) that if
the (PC) access to the corporate data in the glass-house wasn't
provided with significantly better thruput and semantics (than
available with terminal emulation hardware & semantics) ... the data
would soon follow ... aka you would start to see a similar big
explosion in PC hard disks ... that you started to see in PC
computing.

so to some extent SAA was supposed to address this ... not so much for
providing better access to the data in the glass house disk farms, but
enable the possibility of migrating applications back to the mainframe
.... leaving the PC/mainframe interface as a fancy gui ...  which could
preserve the terminal emulation install base. random posts on SAA,
3-tier architecture, middle layer, etc
http://www.garlic.com/~lynn/subtopic.html#3tier

US & world-wide vax numbers by year (from a 1988 IDC report):
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

i don't have the comparable 4341 numbers ... but there was some
presentation (at share?) claiming that something like 11,000 of the
vax shipments should have been 4341 ... because of the better 4341
price/performance.

random topic drift ... the 4341 follow-on, the 4381 was somewhat
originally targeted to have been a fort knox machine. fort knox was
a program to consolidate the large number of corporate microprocessors
onto an 801 risc base (i.e. a large number of 360 & 370 models were
actually some sort of processor with microprogramming to emulate
360/370 architecture). I contributed to a document helping kill fort
knox ... at least for 370; not so much that i was against 801s ... but
that chip technology had advanced to the point where you could start
to get 370 actually directly implemented in silicon ... and enable
elimination of the expensive emulation layer. random 801 & fort knox
posts:
http://www.garlic.com/~lynn/subtopic.html#801

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Anne
10/18/2004 3:24:13 PM
In article <u4qks9hia.fsf@mail.comcast.net>,
Anne & Lynn Wheeler  <lynn@garlic.com> wrote:

Yes, quite.  Here follow a few nitpicks.

>there was a big explosion starting maybe as early as '79 in
>distributed and departmental mainframes ... this is the niche that the
>4341 and vax'es were selling into. there were customers buying 4341s
>in quantities of multiple hundreds at a time (somewhat that the
>"mainframe" price/performance had dropped below some
>threshold)... example:
>http://www.garlic.com/~lynn/2001m.html#15 departmental servers

Yes, indeed.  But it was very much those systems, and not the 68K
and other microprocessor-based workstations.

>by the mid-80s, the high-end pcs and workstations were starting to
>take over this market .... the 4341 follow-on .... 4381 didn't see
>anything like the explosion that the 4341 saw (even as replacement for
>4341 as it went to end-of-life).

Well, yes and no.  Even in academia, they hadn't started to make much
impact on the SERVER market (e.g. Email etc.), because of their dire
RAS etc.  The Unices were ghastly, and Microsoft's junk was worse.
They were becoming the application engines, but took rather longer
to displace the main servers.

>the other area was business PC applications ... one of the things that
>help the PC with market penetration was mainframe terminal emulation
>...  business for about the same price as a mainframe terminal
>... could get in a single desk footprint ... single screen, single
>keyboard ... something that provided both local computing capability
>as well as mainframe terminal access.

The BBC Micro was a prime example of that.  It was the main terminal
system of a great many organisations for quite a few years.  The IBM
PCs took longer, because the initial software was a crock, but
eventually replaced it.

>so to some extent SAA was supposed to address this ... not so much for
>providing better access to the data in the glass house disk farms, but
>enable the possibility of migrating applications back to the mainframe
>... leaving the PC/mainframe interface as a fancy gui ...  which could
>preserve the terminal emulation install base. random posts on SAA,
>3-tier architecture, middle layer, etc
>http://www.garlic.com/~lynn/subtopic.html#3tier

Oh, yes, indeed.  And therein hangs a tale ....  We could go on for
HOURS about that :-)

>i don't have the comparable 4341 numbers ... but there was some
>presentation (at share?) claiming that something like 11,000 of the
>vax shipments should have been 4341 ... because of the better 4341
>price/performance.

And it was absolute nonsense.  Where the VAX scored over the 4341
was in the superiority of VMS, not in its hardware.  If you costed
in the support effort and did NOT assume that you were starting
with people who knew VM/CMS, you got a very different result.

Don't get me wrong - VM/CMS wasn't bad, but VMS was a much better
system for a great many purposes.  A user unfamiliar with either
would typically take 1/3 the time to start using VMS effectively
as VM/CMS (or Unix, for that matter).  Let's leave MVS and TSO out
of this one ....


Regards,
Nick Maclaren.
nmm1
10/18/2004 4:31:07 PM
In article <cl0r4b$d8s$1@gemini.csx.cam.ac.uk>,
Nick Maclaren <nmm1@cus.cam.ac.uk> wrote:
>>presentation (at share?) claiming that something like 11,000 of the
>>vax shipments should have been 4341 ... because of the better 4341
>>price/performance.
>
>And it was absolute nonsense.  Where the VAX scored over the 4341
>was in the superiority of VMS, not in its hardware.  If you costed
>in the support effort and did NOT assume that you were starting
>with people who knew VM/CMS, you got a very different result.
>
>Don't get me wrong - VM/CMS wasn't bad, but VMS was a much better
>system for a great many purposes.  A user unfamiliar with either
>would typically take 1/3 the time to start using VMS effectively
>as VM/CMS (or Unix, for that matter).  Let's leave MVS and TSO out
>of this one ....

This reminds me of something older that may be relevant.  I once read
a document by George Mealy about the travails of OS/360 and some
comparisons with what was then the PDP-10 operating system.  What
struck me was that the PDP-10 system was designed as a remote terminal
oriented system from the ground up.  Whereas with OS/360 you had what
was basically a card driven system and had to graft on more layers of
software to get it to deal with operation from terminals.  Now I don't
pretend to know anything about IBM software, but I got the impression
that later on you had to have something called CICS to do what the
DEC software already was doing built-in; and even in my last contacts
with IBM stuff there seemed to be files that were card images and
printer line images.  And CICS required its own set of experts as if
it were another operating system running on top of the OS.
>
>
>Regards,
>Nick Maclaren.


-- 

jhhaynes at earthlink dot net

haynes
10/18/2004 5:40:41 PM
haynes@alumni.uark.edu (Jim Haynes) writes:
> This reminds me of something older that may be relevant.  I once
> read a document by George Mealy about the travails of OS/360 and
> some comparisons with what was then the PDP-10 operating system.
> What struck me was that the PDP-10 system was designed as a remote
> terminal oriented system from the ground up.  Whereas with OS/360
> you had what was basically a card driven system and had to graft on
> more layers of software to get it to deal with operation from
> terminals.  Now I don't pretend to know anything about IBM software,
> but I got the impression that later on you had to have something
> called CICS to do what the DEC software already was doing built-in;
> and even in my last contacts with IBM stuff there seemed to be files
> that were card images and printer line images.  And CICS required
> its own set of experts as if it were another operating system
> running on top of the OS.

there were lots of infrastructures that built their own online
operation on top of os/360 ... cps, apl\360, cics, etc. they had
subsystems that did their own tasking, scheduling, swapping, terminal,
etc .. recent post on cps
http://www.garlic.com/~lynn/2004m.html#54

while i was an undergraduate, the university got to be one of the
original ibm beta-test sites for what was to become the cics
product. the university sent some people to ibm class to be trained in
cics ... but I was the one that got to shoot cics bugs. cics had
been developed at a customer site for a specific environment ... and
ibm was taking that and turning it into a generalized product.

the university library had gotten a grant from onr to do online
library. some of the problems was that the library was using bdam
operations that hadn't been used in the original cics customer
environment.

for some topic drift ... a totally different (bdam) library project
from that era ... nlm 
http://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
http://www.garlic.com/~lynn/2004e.html#53 c.d.theory glossary (repost)
http://www.garlic.com/~lynn/2004l.html#52 Specifying all biz rules in relational data

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Anne
10/18/2004 6:36:58 PM
ref:
http://www.garlic.com/~lynn/2004m.html#59 RISCs too close to hardware?

this somewhat exemplifies the communication division orientation in
the mid-80s
http://www.garlic.com/~lynn/94.html#33b Hight Speed Data Transport (HSDT)

at the time we were doing the high speed data transport project.
http://www.garlic.com/~lynn/subnetwork.html#hsdt

we had a high-speed backbone running at the time of nsfnet-1 rfp
.... but weren't allowed to bid, however we got NSF technical
audit which concluded that what we had running was at least five
years ahead of bid submissions to build something new. random
posts
http://www.garlic.com/~lynn/internet.htm

in this ref:
http://www.garlic.com/~lynn/2001m.html#15

the particular gov. operation would have had about as many 4341 nodes
as there were total arpanet nodes at the time.

the internal explosion in the use of 4341s also helped fuel the explosive
growth in the size of the internal network .... where the internal
network was nearly 1000 nodes at the time arpanet was around 250 nodes
(about the time of the big 1/1/83 switch-over to internetworking protocol
and gateways). ... random internal network refs:
http://www.garlic.com/~lynn/subnetwork.html#internalnet

we had another problem involving HSDT with SAA in the late '80s. my
wife had written and delivered a response to a gov. RFI ... where she
had laid out many of the basics of what became 3-tier architecture,
middle layer, and middleware. we expanded on that and started giving
3-tier architecture marketing presentations. we started taking some
amount of heat from the SAA crowd at the time ... who somewhat could
be characterized as attempting to put the client/server (2-tier) genie
back into the bottle (and our pushing 3-tier architecture was going
in the opposite direction) .... 
http://www.garlic.com/~lynn/subtopic.html#saa

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Anne
10/18/2004 6:50:11 PM
In article <teTcd.417$ta5.137@newsread3.news.atl.earthlink.net>,
Jim Haynes <jhaynes@alumni.uark.edu> wrote:
>
>This reminds me of something older that may be relevant.  I once read
>a document by George Mealy about the travails of OS/360 and some
>comparisons with what was then the PDP-10 operating system.  What
>struck me was that the PDP-10 system was designed as a remote terminal
>oriented system from the ground up.  Whereas with OS/360 you had what
>was basically a card driven system and had to graft on more layers of
>software to get it to deal with operation from terminals.  Now I don't
>pretend to know anything about IBM software, but I got the impression
>that later on you had to have something called CICS to do what the
>DEC software already was doing built-in; and even in my last contacts
>with IBM stuff there seemed to be files that were card images and
>printer line images.  And CICS required its own set of experts as if
>it were another operating system running on top of the OS.

It's relevant, but the details are wrong.

CICS was a specialist sub-system for transaction processing, designed
to make the MVS brontosaurus fly on high context-switch processing.
You are thinking of TSO.  Despite claims, it WASN'T much more effort
to use than JCL, but you couldn't do much more with it ....

VM/CMS was written precisely because MVS/TSO was so ghastly - but the
Wheelers know a thousand times more about that than I do.  There were
several MVS sub-systems that were designed for interactive use, most
of which came out of academia, such as MTS (Michigan), GUTS (Gothenburg)
and Phoenix (Cambridge).  The last was the one most designed for remote
use as, by the time we got an IBM, Cambridge was ALREADY a remote access
site.

This is actually why I said that VMS was superior to VM/CMS.  I can't
judge them from performance, RAS or administrative ease, and it was
the ease of use and support that I was referring to.  For example, 90%
of our users did not have immediate access to IBM printed documentation,
and about 10% didn't have practical access to it.  Even then, we were
decentralised.

The VMS user interface was somewhat better than VM/CMS but, far more
importantly, it was designed (as was Phoenix) so that users did not
need access to printed documentation.  Not just for the commands, but
for the error messages.  VM/CMS was better than MVS/TSO in that
respect, but not up to VMS standards.


Regards,
Nick Maclaren.
0
nmm1
10/18/2004 6:53:17 PM
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
> And it was absolute nonsense.  Where the VAX scored over the 4341
> was in the superiority of VMS, not in its hardware.  If you costed
> in the support effort and did NOT assume that you were starting
> with people who knew VM/CMS, you got a very different result.
>
> Don't get me wrong - VM/CMS wasn't bad, but VMS was a much better
> system for a great many purposes.  A user unfamiliar with either
> would typically take 1/3 the time to start using VMS effectively
> as VM/CMS (or Unix, for that matter).  Let's leave MVS and TSO out
> of this one ....

i think that share eventually produced a report/presentation (as well
as some number of requirements for ibm) about the strengths of vms (it
may have been phrased w/o directly mentioning vms ... just listed
things that should be done by ibm for its products to make it more
competitive in the mid-range).

big issues were all sorts of skill levels, significant up-front
learning, and just the number of person-hours required for the care
and feeding of systems.

a customer with 20-50 people caring and feeding a single big mainframe
complex couldn't continue to follow the same paradigm when it was
cloned a couple hundred or thousand times.

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Anne
10/18/2004 7:02:39 PM
"Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message 
news:cl13et$9b$1@gemini.csx.cam.ac.uk...

snip

> You are thinking of TSO.  Despite claims, it WASN'T much more effort
> to use than JCL,

Talk about your "faint praise"!  :-)

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


Stephen
10/18/2004 7:04:51 PM
This thread is heading toward confusion!

CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are 
"conversational time sharing systems".

It would be more appropriate to compare VM-CMS or MVS-TSO or OS/400 with 
VAX-VMS and PDP-10.

While CICS is more like a character green screen version of Apache or an 
application server like WebSphere.

Mike Sicilian



"Anne & Lynn Wheeler" <lynn@garlic.com> wrote in message 
news:uwtxn98l1.fsf@mail.comcast.net...
> haynes@alumni.uark.edu (Jim Haynes) writes:
>> This reminds me of something older that may be relevant.  I once
>> read a document by George Mealy about the travails of OS/360 and
>> some comparisons with what was then the PDP-10 operating system.
>> What struck me was that the PDP-10 system was designed as a remote
>> terminal oriented system from the ground up.  Whereas with OS/360
>> you had what was basically a card driven system and had to graft on
>> more layers of software to get it to deal with operation from
>> terminals.  Now I don't pretend to know anything about IBM software,
>> but I got the impression that later on you had to have something
>> called CICS to do what the DEC software already was doing built-in;
>> and even in my last contacts with IBM stuff there seemed to be files
>> that were card images and printer line images.  And CICS required
>> its own set of experts as if it were another operating system
>> running on top of the OS.
>
> there were lots of infrastructures that built their own online
> operation on top of os/360 ... cps, apl\360, cics, etc. they had
> subsystems that did their own tasking, scheduling, swapping, terminal,
> etc .. recent post on cps
> http://www.garlic.com/~lynn/2004m.html#54
>
> while i was an undergraduate, the university got to be one of the
> original ibm beta-test sites for what was to become the cics
> product. the university sent some people to ibm class to be trained in
> cics ... but I was the one that got to shoot cics bugs. cics had
> been developed at a customer site for a specific environment ... and
> ibm was taking that and turning it into a generalized product.
>
> the university library had gotten a grant from onr to do online
> library. some of the problems was that the library was using bdam
> operations that hadn't been used in the original cics customer
> environment.
>
> for some topic drift ... a totally different (bdam) library project
> from that era ... nlm
> http://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers 
> really still in use at all?
> http://www.garlic.com/~lynn/2004e.html#53 c.d.theory glossary (repost)
> http://www.garlic.com/~lynn/2004l.html#52 Specifying all biz rules in 
> relational data
>
> -- 
> Anne & Lynn Wheeler | http://www.garlic.com/~lynn/ 


mike
10/18/2004 7:09:25 PM
In article <FxUcd.13618$a15.6129@newssvr15.news.prodigy.com>,
mike <mike@mike.net> wrote:
>
>
>This thread is heading toward confusion!
>
>CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are 
>"conversational time sharing systems".
>
Well the point I was trying to make, and maybe I'm wrong about this anyway,
is that in PDP-10 and VMS and Unix there is the underlying assumption that
you are going to get characters dribbling in from terminals, and after
some arbitrary number of characters you get one that tells you to act on
what you got.  Whereas with the mainframe OSes it seems like the underlying
assumption is that you are going to get card decks and send output to 
line printers and card punches.  And then something has to be wrapped 
around the OS that turns character streams from terminals into virtual
card decks, and turns punch and line printer files into something a
terminal can handle.

Maybe a case in point is Wylbur, which as I understood it was not a
timesharing system but a terminal-oriented system for creating virtual
card decks, running them as jobs, and then examining the output.

Something I was a little closer to was Burroughs B5500.  That machine was
superb for batch and inherently lousy for timesharing.  There was a 
timesharing version of the operating system.  There was also a customer-
supplied terminal front end for the batch operating system.  It was
called R/C (for "Remote Card") and that's how it worked; from a terminal
you could make a file that was a virtual card deck and submit it for
processing, and then you could examine the output file.

Oh, and speaking of JCL, years ago someone sent me a bumper sticker
that read, "Honk if you love JCL"
-- 

jhhaynes at earthlink dot net

0
haynes
10/18/2004 7:25:57 PM

Your view that MVS is a batch machine is absolutely correct.  It would be 
hard to overstate the lengths IBM went to in order to make it efficient for a 
relatively small number of long-running jobs with few context switches.

At the MVS level a context switch cost more than an average complete 
interactive transaction with all its application code and I/O on many other 
systems.  That is why TSO performed so badly against VM-CMS or OS/400 or 
VAX-VMS.

Then again it also would be very surprising if that bumper sticker generated 
much honking!

I think the best system IBM ever made, in terms of both usability and power of 
the command language and efficiency of context switching in a time-shared 
environment, was OS/400.  It was never marketed as a technical workstation 
like the VAX, but it is very elegant.

Mike Sicilian




"Jim Haynes" <haynes@alumni.uark.edu> wrote in message 
news:9NUcd.566$5i5.126@newsread2.news.atl.earthlink.net...
> In article <FxUcd.13618$a15.6129@newssvr15.news.prodigy.com>,
> mike <mike@mike.net> wrote:
>>
>>
>>This thread is heading toward confusion!
>>
>>CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are
>>"conversational time sharing systems".
>>
> Well the point I was trying to make, and maybe I'm wrong about this 
> anyway,
> is that in PDP-10 and VMS and Unix there is the underlying assumption that
> you are going to get characters dribbling in from terminals, and after
> some arbitrary number of characters you get one that tells you to act on
> what you got.  Whereas with the mainframe OSes it seems like the 
> underlying
> assumption is that you are going to get card decks and send output to
> line printers and card punches.  And then something has to be wrapped
> around the OS that turns character streams from terminals into virtual
> card decks, and turns punch and line printer files into something a
> terminal can handle.
>
> Maybe a case in point is Wylbur, which as I understood it was not a
> timesharing system but a terminal-oriented system for creating virtual
> card decks, running them as jobs, and then examining the output.
>
> Something I was a little closer to was Burroughs B5500.  That machine was
> superb for batch and inherently lousy for timesharing.  There was a
> timesharing version of the operating system.  There was also a customer-
> supplied terminal front end for the batch operating system.  It was
> called R/C (for "Remote Card") and that's how it worked; from a terminal
> you could make a file that was a virtual card deck and submit it for
> processing, and then you could examine the output file.
>
> Oh, and speaking of JCL, years ago someone sent me a bumper sticker
> that read, "Honk if you love JCL"
> -- 
>
> jhhaynes at earthlink dot net
> 


0
mike
10/18/2004 7:44:44 PM
"mike" <mike@mike.net> writes:
> This thread is heading toward confusion!
>
> CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are 
> "conversational time sharing systems".
>
> It would be more appropriate to compare VM-CMS or MVS_TSO or OS/400 with 
> VAX-VMS and PDP-10.
>
> While CICS is more like a character green screen version of Apache or an 
> application server like WebSphere.

cps was conversational programming system ... that was done by the
boston programming center ... and ran on os/360. they had also done
special microcode assist for the 360/50 that significantly improved
cps performance. recent cps related posting
http://www.garlic.com/~lynn/2004m.html#54

apl\360 ... well was apl\360 ... random apl along with some number of
hone posts
http://www.garlic.com/~lynn/subtopic.html#hone

hone was a major internal cms\apl based timesharing service that
supported all the field, sales, and marketing people worldwide.

there were a lot of subsystem/monitors that ran on os/360
.... providing their own contained environment, terminal support,
tasking, scheduling, allocation, swapping, etc. while some of the
commands differed between cics and say cps ... their system
implementation details were remarkably similar.

there was vmpc ... which was done for vs1 ... it was originally going
to be called pco (personal computing option) ... but they ran into an
acronym conflict with a political party in europe. pco was supposedly
going to be a cms-killer (as opposed to an enhanced crje like tso).

cp67 & cms was done at the science center, 4th floor, 545 tech sq
in the mid-60s
http://www.garlic.com/~lynn/subtopic.html#545tech

some of the people from ctss had gone to the 5th floor to work on
multics, and others went to the 4th floor and the science center.  the
boston programming center (and cps) was on the 3rd floor (until the
group was absorbed by the rapidly expanding vm/cms group ... after
cp67 had morphed into vm370).

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/18/2004 7:56:36 PM

As you have shown there were a lot of these time share systems that ran on 
top of MVS.

In addition to these internal IBM systems, there was MUSIC from McGill 
University.  I also seem to recall a version of MUMPS from Mass General 
Hospital, which like apl\360 was both a time sharing monitor and a 
programming environment for the MUMPS language and database.

Mike Sicilian


"Anne & Lynn Wheeler" <lynn@garlic.com> wrote in message 
news:uk6tn94wb.fsf@mail.comcast.net...
> "mike" <mike@mike.net> writes:
>> This thread is heading toward confusion!
>>
>> CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are
>> "conversational time sharing systems".
>>
>> It would be more appropriate to compare VM-CMS or MVS_TSO or OS/400 with
>> VAX-VMS and PDP-10.
>>
>> While CICS is more like a character green screen version of Apache or an
>> application server like WebSphere.
>
> cps was converstational programming system ... that was done by the
> boston programming center ... and ran on os/360. they had also done
> special microcode assist for the 360/50 that significantly improved
> cps performance. recent cps related posting
> http://www.garlic.com/~lynn/2004m.html#54
>
> apl\360 ... well was apl\360 ... random apl along with some number of
> hone posts
> http://www.garlic.com/~lynn/subtopic.html#hone
>
> hone was a major internal cms\apl based timesharing service that
> supported all the field, sales, and marketing people worldwide.
>
> there were a lot of subsystem/monitors that ran on os/360
> ... providing their own contained environment, terminal support,
> tasking, scheduling, allocation, swapping, etc. while some of the
> commands differed between cics and say cps ... their system
> implementation details were remarkably similar.
>
> there was vmpc ... which was done for vs1 ... it was originally going
> to be called pco (personal computing option) ... but they ran into an
> acronym conflict with a political party in europe. pco was supposedly
> going to be a cms-killer (as opposed to a enhanced crje like tso).
>
> cp67 & cms was done at the science center, 4th floor, 545 tech sq
> in the mid-60s
> http://www.garlic.com/~lynn/subtopic.html#545tech
>
> some of the people from ctss had gone to the 5th floor to work on
> multics, and others went to the 4th floor and the science center.  the
> boston programming center (and cps) was on the 3rd floor (until the
> group was absorbed by the rapidly expanding vm/cms group ... after
> cp67 had morphed into vm370).
>
> -- 
> Anne & Lynn Wheeler | http://www.garlic.com/~lynn/ 


0
mike
10/18/2004 8:12:49 PM
Nick Maclaren wrote:
> VM/CMS was written precisely because MVS/TSO was so ghastly - but the
> Wheelers know a thousand times more about that than I do.  There were
> several MVS sub-systems that were designed for interactive use, most
> of which came out of academia, such as MTS (Michigan), GUTS (Gothenburg)
> and Phoenix (Cambridge).  The last was the one most designed for remote
> use as, by the time we got an IBM, Cambridge was ALREADY a remote access
> site.

CP-67/CMS was written well before TSO.  I think TSO came 
next; MIT was running it on the 360/65 in 1973. Then CP was 
ported to the new 370 machines and called VM/CMS.
  http://pucc.princeton.edu/~melinda/

I believe that MTS was a standalone operating system, not
an MVS subsystem.  See
  http://www.itd.umich.edu/~doc/Digest/0596/feat02.html

First time I heard of GUTS or Phoenix;
please post details or pointers.
0
Tom
10/18/2004 9:10:25 PM
Nick Maclaren wrote:
> CICS was a specialist sub-system for transaction processing, designed
> to make the MVS brontosaurus fly on high context-switch processing.
> You are thinking of TSO.  Despite claims, it WASN'T much more effort
> to use than JCL, but you couldn't do much more with it ....

My impression was that CICS was a user application program 
that ran on OS/360. When run it burrowed into the OS, wired 
a bunch of core, fired up its own scheduler and job queue 
system, and took over one or more terminal controllers.

Customer Information Control System is what the name 
meant, and it evolved, as Lynn says, from a special purpose 
application into a widely used transaction processing 
system. The transaction application programs were mostly 
written in COBOL, right?  and they used system resources in 
very stylized ways, and had transaction commit and rollback 
semantics on the database, and a whole lot of stuff.

I never used it; people who know it well should correct and
expand my description.  I wrote a similar subsystem (minus
burrowing) for Multics, and then when I was at Tandem our
stuff competed with CICS for some applications, and I worked
with Pete Homan at Tandem, who had worked at the IBM Hursley
lab where CICS development was done in the 1980s.  GCOS had
two such transaction processing subsystems in the 1970s.
One was TDS and the other was ... I forget.

Don't call what these systems do "timesharing."  It'll just
confuse the discussion.
0
Tom
10/18/2004 9:22:37 PM
On Mon, 18 Oct 2004 17:10:25 -0400, Tom Van Vleck <thvv@multicians.org>  
wrote:

> Nick Maclaren wrote:
>> VM/CMS was written precisely because MVS/TSO was so ghastly - but the
>> Wheelers know a thousand times more about that than I do.  There were
>> several MVS sub-systems that were designed for interactive use, most
>> of which came out of academia, such as MTS (Michigan), GUTS (Gothenburg)
>> and Phoenix (Cambridge).  The last was the one most designed for remote
>> use as, by the time we got an IBM, Cambridge was ALREADY a remote access
>> site.
>
> CP-67/CMS was written well before TSO.  I think TSO came
> next; MIT was running it on the 360/65 in 1973. Then CP was
> ported to the new 370 machines and called VM/CMS.
>   http://pucc.princeton.edu/~melinda/
>
> I believe that MTS was a standalone operating system, not
> an MVS subsystem.  See
>   http://www.itd.umich.edu/~doc/Digest/0596/feat02.html
>
> First time I heard of GUTS or Phoenix;
> please post details or pointers.
Gothenburg Univ Timesharing System was mostly written in PL/I
and was used as late as 1994 (?) commercially by Information Resources
Inc. in Chicago.  I believe they wrote their own TP system and Database
and at that time collected sales data from 4000 supermarkets across the US.



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/m2/
0
Tom
10/18/2004 9:24:24 PM
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
> VM/CMS was written precisely because MVS/TSO was so ghastly - but
> the Wheelers know a thousand times more about that than I do.  There
> were several MVS sub-systems that were designed for interactive use,
> most of which came out of academia, such as MTS (Michigan), GUTS
> (Gothenburg) and Phoenix (Cambridge).  The last was the one most
> designed for remote use as, by the time we got an IBM, Cambridge was
> ALREADY a remote access site.

cp/67 first made it into customer (360/67) sites because tss/360 was
so ghastly ... predating mvt/tso. 

i was undergraduate at a university that was one of the tss/360 sites
and had an installed 360/67. however, tss/360 was having a hard time
coming to fruition ... so the machine ran in 360/65 (real memory) mode
most of the time with os/360. 

Cambridge had finished the port of cp/40 from custom modified 360/40
(with custom virtual memory hardware) to 360/67 as cp/67. It was
running at the science center and then was installed on the 360/67 out
at lincoln labs. The last week in jan, 1968, three people from the
science center came out to the university and installed cp/67 (the
univ. had somewhat gotten tired waiting for tss/360 to come to
reasonable fruition). I did a lot of performance and feature work on
cp/67 and cms as an undergraduate ... including adding tty/ascii
terminal support. In 69, i did a modification to HASP ... adding 2741
& tty terminal support ... as well as implementing cms editor syntax
for a conversational remote job entry function ... on an MVT release 18
base. I think TSO finally showed up in MVT release 20.something period
.... and I thot that the terminal CRJE hack that I had done on
HASP-base was better than the TSO offering.

In addition to the cp/67 alternative to tss/360 ... UofMich also did
MTS (michigan terminal system) for the 360/67 (360/67 was only 360
model with virtual memory hardware support).

370s initially came out with no virtual memory hardware support
.... but eventually virtual memory (and virtual memory operating
systems) were announced for all 370 models. tss/360 (as tss/370),
cp/67 (as vm/370), and MTS were all ported to virtual memory 370.

we have two different cambridge's involved here.  lots of (cambridge)
science center posts
http://www.garlic.com/~lynn/subtopic.html#545tech

Melinda's history is also a good source for a lot of this
http://pucc.princeton.edu/~melinda/

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/18/2004 9:39:25 PM
"mike" wrote:

> As you have shown there were a lot of these time share systems that ran on 
> top of MVS.
> 
> In addition to these internal IBM systems, there was MUSIC from McGill 
> University.  I also seem to recall a version of MUMPS from Mass General 
> Hospital, which like apl\360 was both a time sharing monitor and a 
> programming environment for the MUMPS language and database.
> 

Yup, also there was BRUIN, at Brown University.
I connected to that once on a guest account, and
couldn't get out.  LOGOUT -- no.  LOGOFF -- no.
QUIT -- no.  BYE -- no.  END -- no.  GOODBYE -- no.
ADIOS -- no.  Tried a bunch more, no luck.
Finally asked somebody.  CANCEL.
0
Tom
10/18/2004 9:50:16 PM
"Tom Linden" <tom@kednos.com> writes:
> Gothenburg Univ Timesharing System was mostly written in PL/I and
> was used as late as 1994 (?) commercially b Information Resources
> Inc. in Chicago.  I believe they wrote their own TP system and
> Database and at that time collected sales data from 4000
> supermarkets across the US

tss/360 was supposed to have been a time-sharing system ... TSO
.... while its mnemonic stands for time-sharing option, it was really a
conversational or online option ... as opposed to true time-sharing.

a lot of the conversation/online systems (whether or not they were
time-sharing) that were built on os/360 platform tended to have their
own subsystem infrastructures ... in many cases having substitute
feature/function for standard os/360 facilities.

one of the issues for cics online system was that standard os/360
scheduling and file open/close facilities were extremely heavy weight
.... not suitable for online/conversation activities. cics would do its
own subsystem tasking/scheduling. cics also tended to do (os/360)
operating system file opens at startup and keep them open for the
duration of cics (with conversational tasks doing internal cics file
open/closes). In addition to (people) terminals, CICS systems were
also used to drive a lot of other kinds of terminals; banking
terminals, ATM machines, cable TV head-end & settop boxes, etc.
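
To make the shape of that concrete, here is a toy Python sketch of the
pattern just described: open the files once at monitor startup, hold them
for the life of the monitor, and dispatch each incoming transaction to a
registered handler.  This is purely illustrative; it is not CICS code, and
every name in it is made up.

# Illustrative sketch only -- the structure described above, not real CICS.
# The "monitor" opens its data sets once at startup and then dispatches each
# incoming transaction to a registered handler; per-transaction work never
# pays a heavyweight OS-level open/close or task-creation cost.

class Monitor:
    def __init__(self, file_names):
        # open once at startup and hold for the life of the monitor
        self.files = {name: open(name, "a+") for name in file_names}
        self.handlers = {}                 # transaction code -> handler

    def register(self, txn_code, handler):
        self.handlers[txn_code] = handler

    def dispatch(self, txn_code, message):
        # the per-transaction "open" is just a dictionary lookup
        handler = self.handlers[txn_code]
        return handler(self.files, message)

def deposit(files, message):
    files["accounts.dat"].write("DEPOSIT %s\n" % message)
    return "OK"

monitor = Monitor(["accounts.dat"])
monitor.register("DEP1", deposit)
print(monitor.dispatch("DEP1", "acct=42 amount=100"))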

The other thing that falls into this category is now called TPF
(transaction processing facility) ... which is a totally independent
system. It started out life as its own operating system ... and it was
called ACP (airline control program) before the name change to TPF.
As ACP it drove many of the largest online airline related systems.
It somewhat got its name change as other industries picked up for
various operational use (other parts of travel industry, some of the
financial transaction oriented systems, etc).

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/18/2004 9:50:44 PM

Anne & Lynn Wheeler wrote:

(snip)

> ... and I thot that the terminal CRJE hack that I had done on
> HASP-base was better than the TSO offering.

I always wanted to know the difference between CRBE and CRJE.

-- glen

0
glen
10/18/2004 9:50:56 PM
Tom Van Vleck <thvv@multicians.org> writes:
> Yup, also there was BRUIN, at Brown University.  I connected to that
> once on a guest account, and couldn't get out.  LOGOUT -- no.
> LOGOFF -- no.  QUIT -- no.  BYE -- no.  END -- no.  GOODBYE -- no.
> ADIOS -- no.  Tried a bunch more, no luck.  Finally asked somebody.
> CANCEL.

the original (cp/67) cms had a "BRUIN" command that had been ported to
CMS ... i.e. somewhat like the port of apl\360 to cms\apl ... remove
all the multi-tasking and system infrastructure features ... leaving
just the user command interface stuff.

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/18/2004 9:53:07 PM
Tom Van Vleck wrote:
> "mike" wrote:
> 
> 
>>As you have shown there were a lot of these time share systems that ran on 
>>top of MVS.
>>
>>In addition to these internal IBM systems, there was MUSIC from McGill 
>>University.  I also seem to recall a version of MUMPS from Mass General 
>>Hospital, which like apl\360 was both a time sharing monitor and a 
>>programming environment for the MUMPS language and database.
>>
> 
> 
> Yup, also there was BRUIN, at Brown University.
> I connected to that once on a guest account, and
> couldn't get out.  LOGOUT -- no.  LOGOFF -- no.
> QUIT -- no.  BYE -- no.  END -- no.  GOODBYE -- no.
> ADIOS -- no.  Tried a bunch more, no luck.
> Finally asked somebody.  CANCEL.

    Would BAH's E-mail obfuscation have helped,
good buddy?

-- 
Eric.Sosman@sun.com

0
Eric
10/18/2004 10:10:51 PM
Jim Haynes wrote:
> This reminds me of something older that may be relevant.  I once read
> a document by George Mealy about the travails of OS/360 and some
> comparisons with what was then the PDP-10 operating system.  What
> struck me was that the PDP-10 system was designed as a remote terminal
> oriented system from the ground up.  Whereas with OS/360 you had what
> was basically a card driven system and had to graft on more layers of
> software to get it to deal with operation from terminals.  Now I don't
> pretend to know anything about IBM software, but I got the impression
> that later on you had to have something called CICS to do what the
> DEC software already was doing built-in; and even in my last contacts
> with IBM stuff there seemed to be files that were card images and
> printer line images.  And CICS required its own set of experts as if
> it were another operating system running on top of the OS.
> 

DEC had nothing like CICS, almost nobody did.  CICS was a transaction 
processing monitor.  In today's terms it was a single process that owned 
all the resources and the transactions operated as threads underneath 
it.  CICS provided all OS-type services for the transactions.  It was 
all designed from the ground up for flat-out efficiency and speed. 
Univac had TIP, which was somewhat similar.  I had thought Tuxedo was 
also a work-alike, but someone recently pointed out that it wasn't. 
Were there any others?

0
Peter
10/18/2004 11:10:55 PM
Jim Haynes wrote:
> Well the point I was trying to make, and maybe I'm wrong about this anyway,
> is that in PDP-10 and VMS and Unix there is the underlying assumption that
> you are going to get characters dribbling in from terminals, and after
> some arbitrary number of characters you get one that tells you to act on
> what you got. 


Sort of.  Non-IBM systems used async, character-at-a-time terminals. 
This let you do things like command completion, etc, but  put a heck of 
a load on the CPU to get good response time.

IBM systems used mostly block (3270-type) terminals, where the terminal 
handled the input and editing and only sent data to the CPU when the 
user pressed "Enter" (or other "AID" keys), and only when the FEP 
"polled" the device to request input.  This is not so good for 
timesharing, but much better for transaction processing.  HTML forms are 
a good analog of how the 3270 worked.
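
A rough way to see the difference in host load is to count interrupts per
user interaction.  The toy Python sketch below is not a model of the real
3270 data stream or of any particular async protocol; the names and numbers
are made up.

# Toy comparison of host interrupts per user input, not a real protocol model.

def char_at_a_time(line):
    # async terminal: every keystroke reaches the host as its own interrupt
    return len(line) + 1                  # +1 for the terminating CR

def block_mode(fields):
    # 3270-style terminal: local editing, one message when Enter is pressed
    return 1

line = "RUN PAYROLL WEEKLY"
print("async interrupts :", char_at_a_time(line))        # 19
print("block interrupts :", block_mode({"cmd": line}))   # 1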

0
Peter
10/18/2004 11:27:39 PM
"mike" <mike@mike.net> wrote in message 
news:FxUcd.13618$a15.6129@newssvr15.news.prodigy.com...
> This thread is heading toward confusion!
>
> CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are 
> "conversational time sharing systems".

Yes.

> It would be more appropriate to compare VM-CMS or MVS_TSO or OS/400 with 
> VAX-VMS and PDP-10.

without CICS, quite true.

> While CICS is more like a character green screen version of Apache or an 
> application server like WebSphere.

I believe the Unix "sort of" equivalent to CICS is/was systems like Tuxedo. 
And while CICS was originally a "green screen" application, there is now 
software that allows taking advantage of the processing power and high 
bandwidth to the screen of a PC.   Things like field editing can be moved to 
the PC.

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
10/18/2004 11:47:20 PM
"Peter Flass" <Peter_Flass@Yahoo.com> wrote in message 
news:LjYcd.17161$l07.14091@twister.nyroc.rr.com...
> Jim Haynes wrote:
>> Well the point I was trying to make, and maybe I'm wrong about this 
>> anyway,
>> is that in PDP-10 and VMS and Unix there is the underlying assumption 
>> that
>> you are going to get characters dribbling in from terminals, and after
>> some arbitrary number of characters you get one that tells you to act on
>> what you got.
>
>
> Sort of.  Non-IBM systems
                  ^^^^^^^^^^^^^^^^^^^^^

This should really be non-mainframe systems.  AFAIK, the BUNCH systems used 
primarily terminal systems similar to those IBM used (with totally different 
protocols of course!) just because none of them wanted to have the overhead 
of character-level interrupts on hundreds or thousands of terminals.

> used async, character-at-a-time terminals. This let you do things like 
> command completion, etc, but  put a heck of a load on the CPU to get good 
> response time.
>
> IBM systems used mostly block (3270-type) terminals, where the terminal 
> handled the input and editing and only sent data to the CPU when the user 
> pressed "Enter" (or other "AID" keys), and only when the FEP "polled" the 
> device to request input.  This is not so good for timesharing, but much 
> better for transaction processing.  HTML forms are a good analog of how 
> the 3270 worked.

Synch, line at a time terminals worked fine for much timesharing.  As long 
as you stuck to line oriented editors program development worked fine.  And 
there were even mechanisms to use the "advanced" features of the relatively 
smart terminals to do useful things.  But of course no real graphics.

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
10/18/2004 11:47:25 PM
In article <9NUcd.566$5i5.126@newsread2.news.atl.earthlink.net>,
haynes@alumni.uark.edu (Jim Haynes) writes:

>In article <FxUcd.13618$a15.6129@newssvr15.news.prodigy.com>,
>mike <mike@mike.net> wrote:
>
>>This thread is heading toward confusion!
>>
>>CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are
>>"conversational time sharing systems".

When I was working with Sperry's equivalent of CICS (which they
confusingly called IMS), I found that the best way to think of my
programs was as re-entrant subroutines called by the TP monitor.
It handed me some program information (Program Information Block),
a screen image from the terminal generating the transaction (Input
Message Area), and two blocks of memory to use for variables (Work
Area for temporary stuff, and Continuity Data Area for stuff that
was preserved across calls).  After I finished playing (which could
include file I/O) I would leave an image in the Output Message Area
which the monitor would take and display on the appropriate terminal,
as well as setting appropriate fields to indicate whether to do it
again and if so with which program.  Repeat ad nauseam.
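
In modern terms each transaction program is a re-entrant handler that the
monitor calls with its input and its saved state, and which hands back an
output screen plus a "what runs next" indication.  A minimal Python sketch
of that calling convention (the area names follow the terms above;
everything else, including the sample transaction, is invented for
illustration):

# Sketch of the TP-monitor calling convention described above (names from
# the post: PIB, Input Message Area, Work Area, Continuity Data Area,
# Output Message Area).  Not real Sperry/Unisys IMS code.

def greet_transaction(pib, input_msg, work_area, cda):
    """Re-entrant 'subroutine' invoked by the monitor once per input screen."""
    count = cda.get("count", 0) + 1        # Continuity Data Area: kept across calls
    cda["count"] = count
    output_msg = "Hello %s, call #%d" % (input_msg.strip(), count)
    next_program = "greet" if count < 3 else None   # None = conversation finished
    return output_msg, next_program

# The monitor owns the loop; the application never does its own terminal I/O.
cda = {}
for screen in ["ALICE", "ALICE", "ALICE"]:
    out, nxt = greet_transaction({"terminal": "T001"}, screen, {}, cda)
    print(out)           # the monitor would put this in the Output Message Area
    if nxt is None:
        break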

>Well the point I was trying to make, and maybe I'm wrong about this
>anyway, is that in PDP-10 and VMS and Unix there is the underlying
>assumption that you are going to get characters dribbling in from
>terminals, and after some arbitrary number of characters you get one
>that tells you to act on what you got.  Whereas with the mainframe
>OSes it seems like the underlying assumption is that you are going
>to get card decks and send output to line printers and card punches.
>And then something has to be wrapped around the OS that turns character
>streams from terminals into virtual card decks, and turns punch and
>line printer files into something a terminal can handle.

Some of this was done by the terminal hardware, since block mode was
taken for granted.

>Oh, and speaking of JCL, years ago someone sent me a bumper sticker
>that read, "Honk if you love JCL"

Somehow I get a mental image of a large number of lines of cryptic
code, starting with "// JOB" and ending with

//SYSIN DD *
HONK
/*

I kind of like "Forth love if honk then" instead.

--
/~\  cgibbs@kltpzyxm.invalid (Charlie Gibbs)
\ /  I'm really at ac.dekanfrus if you read it the right way.
 X   Top-posted messages will probably be ignored.  See RFC1855.
/ \  HTML will DEFINITELY be ignored.  Join the ASCII ribbon campaign!

0
Charlie
10/18/2004 11:58:14 PM
I seem to recall that the IMS database system also had its own CICS-like 
front end.

Mike Sicilian


"Anne & Lynn Wheeler" <lynn@garlic.com> wrote in message 
news:u8ya31yrv.fsf@mail.comcast.net...
> "Tom Linden" <tom@kednos.com> writes:
>> Gothenburg Univ Timesharing System was mostly written in PL/I and
>> was used as late as 1994 (?) commercially b Information Resources
>> Inc. in Chicago.  I believe they wrote their own TP system and
>> Database and at that time collected sales data from 4000
>> supermarkets across the US
>
> tss/360 was supposed to have been a time-sharing system ... TSO
> ... while a mnemonic for time-sharing operation was really
> conversational or online option ... as opposed to time-sharing option.
>
> a lot of the conversation/online systems (whether or not they were
> time-sharing) that were built on os/360 platform tended to have their
> own subsystem infrastructures ... in many cases having substitute
> feature/function for standard os/360 facilities.
>
> one of the issues for cics online system was that standard os/360
> scheduling and file open/close facilities were extremely heavy weight
> ... not suitable for online/conversation activities. cics would do its
> own subsystem tasking/scheduling. cics also tended to do (os/360)
> operating system file opens at startup and keep them open for the
> duration of cics (with conversational tasks doing internal cics file
> open/closes). In addition to (people) terminals, CICS systems were
> also used to drive a lot of other kinds of terminals; banking
> terminals, ATM machines, cable TV head-end & settop boxes, etc.
>
> The other thing that falls into this category is now called TPF
> (transaction processing system) ... which is a totally independent
> system. It started out life as its own operating system ... and it was
> called ACP (airline control program) before the name change to TPF.
> As ACP it drove many of the largest online airline related systems.
> It somewhat got its name change as other industries picked up for
> various operational use (other parts of travel industry, some of the
> financial transaction oriented systems, etc).
>
> -- 
> Anne & Lynn Wheeler | http://www.garlic.com/~lynn/ 


0
mike
10/19/2004 12:57:44 AM
"Anne & Lynn Wheeler" <lynn@garlic.com> wrote in message
news:u4qks9hia.fsf@mail.comcast.net...
> nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
> > Nope.  That was not the main reason, at least early on.
> >
> > The PC stream was marketed to people who didn't know that it wasn't
> > reasonable to have to power cycle a computer once every hour or so,
> > and the workstation ones to hackers who expected to have to modify
> > the software they used to make it work at all.
> >
> > Yes, System/360 and the early System/370 was like that, but it was
> > somewhat better by the early 1980s.  And so were DEC's non-Unix
> > systems, and we know how they spread.
> >
> > By the late 1980s, both the PC and RISC systems had improved very
> > considerably, and that is when the mainframes started suffering
> > badly.
>
> there was a big explosion starting maybe as early as '79 in
> distributed and departmental mainframes ... this is the niche that the
> 4341 and vax'es were selling into. there were customers buying 4341s
> in quantities of multiple hundreds at a time (somewhat that the
> "mainframe" price/performance had dropped below some
> threshold)... example:
> http://www.garlic.com/~lynn/2001m.html#15 departmental servers
>
> by the mid-80s, the high-end pcs and workstations were starting to
> take over this market .... the 4341 follow-on .... 4381 didn't see
> anything like the explosion that the 4341 saw (even as replacement for
> 4341 as it went to end-of-life).
>
(snippage)
And the racetrack machines died a horrible death.  I thought the 370
fort knox machine ended up as the 4361.

del cecchi


0
del
10/19/2004 1:07:03 AM
"Stephen Fuld" <s.fuld@PleaseRemove.att.net> wrote in message
news:hCYcd.719051$Gx4.718206@bgtnsc04-
snip
> >
> > IBM systems used mostly block (3270-type) terminals, where the terminal
> > handled the input and editing and only sent data to the CPU when the user
> > pressed "Enter" (or other "AID" keys), and only when the FEP "polled" the
> > device to request input.  This is not so good for timesharing, but much
> > better for transaction processing.  HTML forms are a good analog of how
> > the 3270 worked.
>
> Synch, line at a time terminals worked fine for much timesharing.  As long
> as you stuck to line oriented editors program development worked fine.  And
> there were even mechanisms to use the "advanced" features of the relatively
> smart terminals to do useful things.  But of course no real graphics.

You should have seen the cool graphics a guy in Rochester got out of a
3279,  wonderful color waveforms from the circuit simulator.  Some sort
of trick with downloading character sets that were really little chunks
of the picture or something like that.  I thought I had gone to heaven
after years of looking at waveforms plotted on a line printer.
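
The trick amounts to slicing the picture into character-cell-sized tiles,
loading the distinct tiles as a custom character set, and then "typing" the
image as a grid of symbol codes.  A small Python sketch of just the tiling
step (the cell size and bitmap are made up, and this is not the real 3270
programmed-symbols data stream):

# Slice a bitmap into character-cell tiles; each distinct tile becomes one
# "programmed symbol", and the screen is just a grid of symbol codes.
# Toy 8x4 bitmap with 4x2 cells -- real 3279 cells were larger.

bitmap = [
    "..##..##",
    ".#..#..#",
    "..##..##",
    ".#..#..#",
]
CELL_W, CELL_H = 4, 2

tiles, screen = {}, []
for row in range(0, len(bitmap), CELL_H):
    codes = []
    for col in range(0, len(bitmap[0]), CELL_W):
        tile = tuple(line[col:col + CELL_W] for line in bitmap[row:row + CELL_H])
        code = tiles.setdefault(tile, len(tiles))   # reuse identical tiles
        codes.append(code)
    screen.append(codes)

print(len(tiles), "programmed symbols for a", len(screen), "x", len(screen[0]), "screen")
print(screen)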

del cecchi
>
> -- 
>  - Stephen Fuld
>    e-mail address disguised to prevent spam
>
>


0
del
10/19/2004 1:11:10 AM
"del cecchi" <dcecchi.nojunk@att.net> writes:
> You should have seen the cool graphics a guy in Rochester got out of
> a 3279, wonderful color waveforms from the circuit simulator.  Some
> sort of trick with downloading character sets that were really
> little chunks of the picture or something like that.  I thought I
> had gone to heaven after years of looking at waveforms plotted on a
> line printer.

slightly related ... post about multi-user, distributed, space-war
game:
http://www.garlic.com/~lynn/2004m.html#20 Whatever happened to IBM's VM PC software?

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/19/2004 2:11:58 AM
On Mon, 18 Oct 2004 20:07:03 -0500 in alt.folklore.computers, "del
cecchi" <dcecchi.nojunk@att.net> wrote:

>
>"Anne & Lynn Wheeler" <lynn@garlic.com> wrote in message
>news:u4qks9hia.fsf@mail.comcast.net...

>> by the mid-80s, the high-end pcs and workstations were starting to
>> take over this market .... the 4341 follow-on .... 4381 didn't see
>> anything like the explosion that the 4341 saw (even as replacement for
>> 4341 as it went to end-of-life).

>And the racetrack machines died a horrible death.  I thought the 370
>fort knox machine ended up as the 4361.

And dual 4361s ended up as the service processors for the 309X?
series, providing LPAR functionality. 

-- 
Thanks. Take care, Brian Inglis 	Calgary, Alberta, Canada

Brian.Inglis@CSi.com 	(Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
    fake address		use address above to reply
0
Brian
10/19/2004 3:44:26 AM
"Stephen Fuld" <s.fuld@PleaseRemove.att.net> writes:
> I believe the Unix "sort of" equivalent to CICS is/was systems like Tuxedo. 
> And while CICS was originally a "greem screen" application, there is now 
> software that allows taking advantage of the processing power and high 
> bandwidth to the screen of a PC.   Things like field editing can be moved to 
> the PC.

tuxedo was transaction monitor .... while cics was transaction
processing subsystem ....  i got half dozen tuxedo books down in boxes
someplace. i believe tuxedo was spun off to bea(?).

there was also camelot ... out of cmu ... along with mach, andrew
widgets, andrew filesystem, etc. IBM had pumped something like $50m
into CMU for these projects about the same time that IBM & DEC each
funded Project Athena at MIT to the tune of something like $25m each.

some of this was spun out of cmu as Transarc (i believe also heavily
funded by ibm ... and then bought outright by ibm).

cics was much more like transaction processing in any of the rdbms
systems (loading transaction code, scheduling transaction code,
actually dispatching the code for execution) ... except it started
out interfacing to bdam files (as opposed to having full blown dbms).

cics beta test at the university was on mvt system on 360 machine
.... predating screens ... using 2741 and 1050s.

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/19/2004 3:50:04 AM
Nice topic.

This brings to mind an issue that I have with the various idealogies.

RISC vs. CISC is pointless in the days of large physical reg files and
OoO.

The real issue is how to express parallelism to the OoO, and having a
large reg file is but one way to give "intermediates" different names
that allow OoO to do its thing. Using memory addresses expresses
parallelism (or more importantly serial dependency) in the most
truly representative way.

The problem with x86 CISC is not that it has mem refs (an even better way
to express the independence of operands as there are orders of magnitude
more independent memory locations than registers) but that it only
has _some_ of its operands as mem-refs.

More importantly, it overwrites one architected reg with every op, forcing
architected regs to be a critical resource and causing excessive loads and stores.

Suppose you had an architecture with _only_ memory base registers, where
operative instructions take immediate offsets from those base registers as
operands and results.  That would make it pretty easy to support OoO up to
any physical register file size, and would also close the "semantic gap"
(blast from the past) in a substantive way: load/operate/store would simply
translate to operate.  It could be viewed as analogous to a stack machine
with very many stack pointers.
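
One way to picture why this renames cleanly: an OoO core can treat a
(base register, offset) pair as an operand name exactly the way it treats an
architectural register name, handing every new result a fresh physical slot
so false dependencies on the name disappear.  A toy Python renamer, purely
illustrative (no real ISA or microarchitecture is being modeled, and the
sample instruction is hypothetical):

# Toy renamer: operand "names" are (base_reg, offset) pairs instead of
# architectural registers; every result gets a fresh physical slot, so
# write-after-write/read hazards on the name itself vanish.

class Renamer:
    def __init__(self):
        self.map = {}            # (base_reg, offset) -> physical slot
        self.next_slot = 0

    def alloc(self):
        self.next_slot += 1
        return "p%d" % self.next_slot

    def read(self, name):
        return self.map.setdefault(name, self.alloc())

    def write(self, name):
        self.map[name] = self.alloc()     # new value gets a fresh physical slot
        return self.map[name]

r = Renamer()
# hypothetical "add [r2+8] <- [r2+16], [r3+0]": one operate instruction,
# no separate architectural load/store destinations to serialize on
src1 = r.read(("r2", 16))
src2 = r.read(("r3", 0))
dst  = r.write(("r2", 8))
print(dst, "<-", src1, "+", src2)         # e.g. p3 <- p1 + p2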

<flamebait>All the bottlenecks in x86 would be removed.</flamebait>

Peter




0
Peter
10/19/2004 4:30:09 AM
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
> And dual 4361s ended up as the service processors for the 309X?
> series, providing LPAR functionality. 

service processor for 3090 started out being a single 4331 ...
effectively an upgrade from the uc.5 service processor in the 308x.

it had a stabilized vm/370 & cms release 6 system with a number of
custom modifications ... like being able to "read" a bunch of service
ports in the 3090. the menu screens that had been custom stuff in the
uc.5 ... became ios3270 screens for the 3090.

the 4331 morphed into a 4361 and then dual-4361s ... for availability.

part of the issue was long standing requirement that field engineer in
the field could bootstrap hardware problem diagnostic ... starting
with very few diagnostic facilities like a scope. starting with the
3081 ... the machine was no longer scope'able. so a machine that was
scopable (a service processor) was put in ... that had a whole lot of
diagnostic interfaces to everywhere in the machine. the field engineer
could bootstrap diagnostics of the service processor ... and then use
the service processor to diagnose the real machine. somewhere along
the way .. it was decided to replicate the 4361 ... so to take a
failed 4361 out of the critical path for diagnosing the 3090.

since the vm/370 release 6 would be in use long past its product
lifetime, the engineering group had to put together its own software
support team. I contributed some amount of stuff for this custom
system ...  including a problem analysis and diagnostic tool for
analysing and diagnosing vm/370 software problems. random dumprx
postings:
http://www.garlic.com/~lynn/subtopic.html#dumprx

The virtual machine microcode assist (sie) was enhanced to do logical
partitioning (LPARs) of the hardware (w/o needing a vm kernel) called
PR/SM (processor resource/systems manager). the service processor was used
to setup PR/SM configuration ... but didn't actually execute PR/SM
functions.

this is standard 4361
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP4361.html

and standard 3090 (which had a pair of 4361s packaged inside) ...
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

3090 also offered a vector processing option. 3090 came with "extended
storage" ... it was electronic memory for paging with a wide, high-speed
bus ... and was accessed by (processor) synchronous page move
instructions (the theory was that the latency was too long for normal
memory ... but with wide-enuf bus ... a 4k move could go pretty fast).
When HiPPI support was added to the 3090 ... the standard I/O
interface wasn't fast enuf ... so a special interface was cut into the
extended storage bus to provide HiPPI support.

article from the palo alto science center on fortran support for
3090 vector facility
http://domino.research.ibm.com/tchjr/journalindex.nsf/4ac37cf0bdc4dd6a85256547004d47e1/1383665bc8da3f1c85256bfa0067f655?OpenDocument

article out of BNL about 3090 mentioning vector facility and extended
storage.
http://www.ccd.bnl.gov/LINK.bnl/1996/May96/histcom5.html

it has line about "IBM sites", such as SLAC, FERMILAB, and CERN ...
.... for a time, I was involved in monthly meetings at SLAC and there
was lots of application and software sharing between these sister lab
"IBM sites". it also mentions some of the issues around eventual
migration from 3090 to computational intensive risc workstations (of
course the hot new thing is the GRID, i happened to give a talk at a
GRID conference this summer).

for some topic drift, one could trace the invention of GML at the
cambridge science center, its integration into the CMS document
formater "SCRIPT", its wide deployment and standardization as SGML
.... and the eventual morphing at CERN into HTML. SLAC then has the
distinction of putting up the first web server in the US (on its
vm/cms system). a couple recent posts on the subject
http://www.garlic.com/~lynn/2004l.html#72
earlier post about slac's original web pages
http://www.garlic.com/~lynn/2004d.html#53
lots of random gml/sgml posts:
http://www.garlic.com/~lynn/subtopic.html#sgml

the bnl.gov article also mentions installing sql/ds ... which is the
tech transfer from sjr of the original rdbms effort, system/r to
endicott: random posts on system/r and some mention about sql/ds tech
transfer
http://www.garlic.com/~lynn/subtopic.html#systemr

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/19/2004 4:57:13 AM
>The VAX had several problems: ...

>Not sure what you mean with register interlocks.

The autoincrement and autodecrement address modes are interpreted
in strict order, so you have to be prepared for this:

 cmpw @(r2)+,@(r2)+

It compares two words at addresses pointed to by successive words
pointed to by r2.  You can't decode the two addresses in parallel if
they use the same register.  There aren't that many circumstances
where it's useful to do multiple references to the same register in an
instruction with at least one being autoinc or autodec, but it's legal
so you have to be prepared for it.  I would think this is a place
where static translation to the Alpha should win big, since it could
generate simple fast code in 99.9% of the cases and slower code for
the rare times that the registers interfere.
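
To see why the operand specifiers have to be evaluated strictly in order,
here is a small Python sketch of the sequential semantics.  The memory
contents and register values are made up, and the increment size (4, an
address-sized pointer) is an assumption for illustration, not a claim about
any particular implementation:

# Sequential evaluation of "cmpw @(r2)+,@(r2)+": the second operand specifier
# must see r2 as already incremented by the first.  Toy values only.

mem  = {100: 200, 104: 300,      # the two pointers r2 steps through
        200: 7,   300: 7}        # the words actually compared
regs = {"r2": 100}

def autoincrement_deferred(reg):
    addr = mem[regs[reg]]        # indirect through the pointer reg points at
    regs[reg] += 4               # ...then step reg past that pointer (assumed 4 bytes)
    return mem[addr]

op1 = autoincrement_deferred("r2")   # uses r2 = 100, leaves r2 = 104
op2 = autoincrement_deferred("r2")   # uses r2 = 104, leaves r2 = 108
print("equal" if op1 == op2 else "not equal", regs)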

>And I guess speed is also not that important.  The software was
>written for slower machines anyway.  How fast are current 360
>implementations in terms of CPU speed (e.g., SPECcpu)?

I've never seen specmarks for IBM mainframes.  There's not much point,
since people don't buy them for raw C program execution speed.

The current top of the line z990 is a 64 bit architecture with 64 bit
datapaths, dual issue superscalar, large I-cache and TLB, running at
1.2 GHz.  Most instructions are implemented in hardware, some of the
complex ones (decimal arithmetic and the like) are in "millicode",
firmware subroutines.  Making it even harder to make comparisons, they
have a lot of useful millicode application assists, notably one to run
Java.  There's two CPUs per chip, and typical systems have between 8
and 32 CPUs.  

They have extensive checking and recovery hardware both on-chip and in
the rest of the system to deal with hardware faults, more than in any
other design I know.  Systems have a couple of spare CPUs and will do
automatic failover if a CPU fails, generally recovering from a
hardware checkpoint so it's transparent to software.  You can also add
more "books" of CPU, cache, and memory, without rebooting.

See http://www.research.ibm.com/journal/rd48-34.html for more info on
the z990.  Don't miss the paper about their firmware compiler, which
grafts a front end for PL.8, the PL/I subset they've been using for
system programming since the 801 project 20 years ago, onto GCC to use
GCC's 64 bit code generator.

Regards,
John Levine, johnl@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Information Superhighwayman wanna-be, http://www.johnlevine.com, Mayor
"More Wiener schnitzel, please", said Tom, revealingly.




0
johnl
10/19/2004 5:40:05 AM
>CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are 
>"conversational time sharing systems".
>
>It would be more appropriate to compare VM-CMS or MVS_TSO or OS/400 with 
>VAX-VMS and PDP-10.
>
>While CICS is more like a character green screen version of Apache or an 
>application server like WebSphere.

I'd like to compare them with DTSS, the system that introduced the
user interface made familiar to a generation of Basic programmers, e.g.

10 let a=2+3
20 print a
30 end
run

DTSS originally ran on a GE 635, a 36 bit computer comparable to a
KA-10.  They used a much slower Datanet 30 as a tty front end that
delivered lines of characters to the 635.  The line edit/run process
was managed by a single user-mode monitor program that read all the
lines and when it saw one that didn't start with a digit, sorted all
the program's lines into order and ran the compiler in a subprocess,
which then ran the compiled code.
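
The line-collecting half of that monitor is tiny.  A rough Python sketch of
the idea (everything here is invented for illustration, and "run" just lists
the sorted program instead of invoking a compiler in a subprocess):

# Sketch of the DTSS-style line monitor: numbered lines are collected and
# kept per line number; any line that doesn't start with a digit is treated
# as a command.  Here "run" only prints the sorted program.

program = {}

def feed(line):
    if line[:1].isdigit():
        number, _, text = line.partition(" ")
        program[int(number)] = text        # a later entry replaces an earlier one
    elif line.strip().lower() == "run":
        for number in sorted(program):     # sort into order, then "compile"
            print(number, program[number])
    # other commands (list, save, ...) would be dispatched here

for line in ["20 print a", "10 let a=2+3", "30 end", "run"]:
    feed(line)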

What makes this interesting is that while TOPS-10 could run maybe 20
users on a KA, DTSS could run 100 with quite decent response times.
The structure of the system was more like CICS than like TOPS-10, but
they managed to make it look like (in fact to be) a general purpose
time-sharing system.  It supported multiple languages, run any program
you want as a user process, etc.  Quite an achievement, particularly
since it was written almost entirely by undergraduates.


0
johnl
10/19/2004 5:45:54 AM
In article <cCYcd.719050$Gx4.170739@bgtnsc04-news.ops.worldnet.att.net>,
Stephen Fuld <s.fuld@PleaseRemove.att.net> wrote:
>
>"mike" <mike@mike.net> wrote in message 
>news:FxUcd.13618$a15.6129@newssvr15.news.prodigy.com...
>> This thread is heading toward confusion!
>>
>> CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are 
>> "conversational time sharing systems".
>
>Yes.

And, inside CICS you do a somewhat monitored cooperative time
sharing; like in MacOS pre-X or non-nt windows. 
>
>> It would be more appropriate to compare VM-CMS or MVS_TSO or OS/400 with 
>> VAX-VMS and PDP-10.
>
>without CICS, quite true.
>
>> While CICS is more like a character green screen version of Apache or an 
>> application server like WebSphere.

I get loads of deja-vu's when I program cgi-bin's; they have a large
resemblance to how things were done in cics/vtam/3270. 

>I believe the Unix "sort of" equivalent to CICS is/was systems like Tuxedo. 
>And while CICS was originally a "greem screen" application, there is now 
>software that allows taking advantage of the processing power and high 
>bandwidth to the screen of a PC.   Things like field editing can be moved to 
>the PC.

Field editing has been in the terminal since at least the mid eighties. 

-- mrr
0
Morten
10/19/2004 6:00:04 AM
On Tue, 19 Oct 2004 06:00:04 GMT in alt.folklore.computers, Morten
Reistad <firstname@lastname.pr1v.n0> wrote:

>In article <cCYcd.719050$Gx4.170739@bgtnsc04-news.ops.worldnet.att.net>,
>Stephen Fuld <s.fuld@PleaseRemove.att.net> wrote:
>>
>>"mike" <mike@mike.net> wrote in message 
>>news:FxUcd.13618$a15.6129@newssvr15.news.prodigy.com...
>>> This thread is heading toward confusion!
>>>
>>> CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are 
>>> "conversational time sharing systems".
>>
>>Yes.
>
>And, inside CICS you do a somewhat monitored cooperative time
>sharing; like in MacOS pre-X or non-nt windows. 
>>
>>> It would be more appropriate to compare VM-CMS or MVS_TSO or OS/400 with 
>>> VAX-VMS and PDP-10.
>>
>>without CICS, quite true.
>>
>>> While CICS is more like a character green screen version of Apache or an 
>>> application server like WebSphere.
>
>I get loads of deja-vu's when I program cgi-bin's; they have a large
>reseblance to how things were done in cics/vtam/3270. 
>
>>I believe the Unix "sort of" equivalent to CICS is/was systems like Tuxedo. 
>>And while CICS was originally a "greem screen" application, there is now 
>>software that allows taking advantage of the processing power and high 
>>bandwidth to the screen of a PC.   Things like field editing can be moved to 
>>the PC.
>
>Field editing has been in the terminal since at least the mid eighties. 

IIRC field editing was in the earlier green screens, then moved into
the controller with the 3274 series, then moved back into the terminal
with HFT (High?er? Function Terminal) and DFT (Distributed Function
Terminal) terminal types. 

-- 
Thanks. Take care, Brian Inglis 	Calgary, Alberta, Canada

Brian.Inglis@CSi.com 	(Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
    fake address		use address above to reply
0
Brian
10/19/2004 8:11:51 AM
johnl@iecc.com (John R. Levine) wrote in message news:<ckv2np$d5i$1@xuxa.iecc.com>...
>  The block oriented I/O architecture
> helps, too.  

Without a doubt. In the commercial space, the large majority of
applications moved data without transforming it at all, or simply
sorted and collated it. S/360 & 370 I/O speeds and CPU disconnect on
the multiple channel architecture (basically dedicated I/O processors)
allowed very performant I/O, something that the zSeries delivers in
spades today. The 4300 series was nowhere near as good in this
respect.

-- 
Regards
Alex McDonald
0
alex_mcd
10/19/2004 8:48:00 AM
In article <9NUcd.566$5i5.126@newsread2.news.atl.earthlink.net>,
   haynes@alumni.uark.edu (Jim Haynes) wrote:
>In article <FxUcd.13618$a15.6129@newssvr15.news.prodigy.com>,
>mike <mike@mike.net> wrote:
>>
>>
>>This thread is heading toward confusion!
>>
>>CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are 
>>"conversational time sharing systems".
>>
>Well the point I was trying to make, and maybe I'm wrong about this anyway,
>is that in PDP-10 and VMS and Unix there is the underlying assumption that
>you are going to get characters dribbling in from terminals, and after
>some arbitrary number of characters you get one that tells you to act on
>what you got.  Whereas with the mainframe OSes it seems like the underlying
>assumption is that you are going to get card decks and send output to 
>line printers and card punches.  And then something has to be wrapped 
>around the OS that turns character streams from terminals into virtual
>card decks, and turns punch and line printer files into something a
>terminal can handle.

Yep.  And it was gagawful when the -10s tried to do forms programming
and ship products that did efficient forms handling.  We spent
an awful lot of time studying this.  MCS was essentially our first
stab at it.  I don't think we ever did DBMS very well compared
to IBM.


>
>Maybe a case in point is Wylbur, which as I understood it was not a
>timesharing system but a terminal-oriented system for creating virtual
>card decks, running them as jobs, and then examining the output.
>
>Something I was a little closer to was Burroughs B5500.  That machine was
>superb for batch and inherently lousy for timesharing.  There was a 
>timesharing version of the operating system.  There was also a customer-
>supplied terminal front end for the batch operating system.  It was
>called R/C (for "Remote Card") and that's how it worked; from a terminal
>you could make a file that was a virtual card deck and submit it for
>processing, and then you could examine the output file.
>
>Oh, and speaking of JCL, years ago someone sent me a bumper sticker
>that read, "Honk if you love JCL"

<GRIN>  SJ$IF$0TT$$  Honk!

/BAH


Subtract a hundred and four for e-mail.
0
jmfbahciv
10/19/2004 9:45:42 AM
In article <cl29mi$kgk$1@xuxa.iecc.com>,
   johnl@iecc.com (John R. Levine) wrote:
>>CICS is a "transaction processing system".  The PDP-10 and VAX-VMS are 
>>"conversational time sharing systems".
>>
>>It would be more appropriate to compare VM-CMS or MVS_TSO or OS/400 with 
>>VAX-VMS and PDP-10.
>>
>>While CICS is more like a character green screen version of Apache or an 
>>application server like WebSphere.
>
>I'd like to compare them with DTSS, the system that introduced the
>user interface made familiar to a generation of Basic programmers, e.g.
>
>10 let a=2+3
>20 print a
>30 end
>run
>
>DTSS originally ran on a GE 635, a 36 bit computer comparable to a
>KA-10.  They used a much slower Datanet 30 as a tty front end that
>delivered lines of characters to the 635.  The line edit/run process
>was managed by a single user-mode monitor program that read all the
>lines and when it saw one that didn't start with a digit, sorted all
>the program's lines into order and ran the compiler in a subprocess,
>which then ran the compiled code.
>
>What makes this interesting is that while TOPS-10 could run maybe 20
>users on a KA, DTSS could run 100 with quite decent response times.

I bet that was before the shuffler was replaced with the swapper.
In addition, I also bet that you did one disk I/O which would cause
a context switch in TOPS-10.  I would probably make a guess that
the comparison was 4S72 and not a good release of the 5-series
TOPS-10 monitor.


>The structure of the system was more like CICS than like TOPS-10, but
>they managed to make it look like (in fact to be) a general purpose
>time-sharing system.  It supported multiple languages, run any program
>you want as a user process, etc.  Quite an achievement, particuarly
>since it was written almost entirely by undergraduates.

That's the way to train our kiddies.  :-)

/BAH

Subtract a hundred and four for e-mail.
0
jmfbahciv
10/19/2004 9:56:08 AM
No context quoted, as it would be either too much or too little.

There is virtually no online information on Phoenix, which really
needs fixing.  It was one of the least invasive of the MVT/MVS
front ends, and was partially modelled on the Titan system, one of
the early and influential interactive systems, by some of the same
people.  At the University of Cambridge, not Massachusetts :-)

Strictly, it was a TSO command processor, much more like VMS than
TSO, CMS or Unix.  This was supported by a good dynamic allocation
module (under both MVT and MVS), a less good scheduler, a large
number of commands, a very good programmable editor, a very good
debugger, a tape library system and so on.

Like early Unix, it was designed to be succinct, to allow easy use
over 110 baud lines, but (unlike that) it had decent diagnostics.
Over 10 man years was put into the help system (much better even
than VMS's, and out of the CMS or Unix world), which paid for itself
in the reduction of staff effort at least threefold.  The program
was trivial; it was the data and its linkages - very different from
the current unstructured Web morass.

People who were not computer experts and who were unfamiliar with
it really did use it starting from 20 lines of instruction, but
most people either read an introductory manual or went on a course
of an hour or two.  Thereafter, it was usable with no documentation
beyond the online help system.  The only other system I used where
this was possible was VMS, and it was more painful to do so, though
several others (now also defunct) could also do it.

The front end consisted of PDP-11 systems, which handled the line-building,
so that MVT/MVS was not crippled by single-character handling.
Later, the Cambridge PAD was developed (by people in the Computing
SERVICE), which was heavily influential on later packet system
designs.  The Cambridge Ring was never attached to it.


Note that there were several aspects of its architecture that
should really be reinvented, as they were and are vastly superior
to modern designs.  As far as I know, none were unique to Phoenix,
but some may have originated there.


Regards,
Nick Maclaren.
0
nmm1
10/19/2004 10:39:45 AM
"Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
news:cl2qth$d91$1@scorpius.csx.cam.ac.uk...
>
> No context quoted, as it would be either too much or too little.
>
> There is virtually no online information on Phoenix, which really
> needs fixing.  It was one of the least invasive of the MVT/MVS
> front ends, and was partially modelled on the Titan system, one of
> the early and influential interactive systems, by some of the same
> people.  At the University of Cambridge, not Massachusetts :-)
>
[snip]
>
> Regards,
> Nick Maclaren.

Thanks for the memories. Stuff that I thought had gone forever just
reappeared in my brain - a true CAM (give me the *entire* content and I can
recall it).

Peter
Mathematics 1975-1979


0
Peter
10/19/2004 10:53:21 AM
del cecchi wrote:
> You should have seen the cool graphics a guy in Rochester got out of a
> 3279,  wonderful color waveforms from the circuit simulator.  Some sort
> of trick with downloading character sets that were really little chunks
> of the picture or something like that.  I thought I had gone to heaven
> after years of looking at waveforms plotted on a line printer.

IBM used the same trick in a very early demo of the EGA (Enhanced 
Graphics Adapter) for their PCs.

Terje

-- 
- <Terje.Mathisen@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
0
Terje
10/19/2004 12:21:19 PM
Nick Maclaren wrote:
> No context quoted, as it would be either too much or too little.
> 
> There is virtually no online information on Phoenix, which really
> needs fixing.  It was one of the least invasive of the MVT/MVS
> front ends, and was partially modelled on the Titan system, one of
> the early and influential interactive systems, by some of the same
> people.  At the University of Cambridge, not Massachusetts :-)
> 
> Strictly, it was a TSO command processor, much more like VMS than
> TSO, CMS or Unix.  This was supported by a good dynamic allocation
> module (under both MVT and MVS), a less good scheduler, a large
> number of commands, a very good programmable editor, a very good
> debugger, a tape library system and so on.
 > [...snip...]

Another "Thanks for the memories".

ZED (the programmable editor) mentioned by Nick was simply superb.
The *best* editor I've ever come across (and I'm a self-confessed
VMS bigot :-).  And TLS (Tape Library System) was extremely
clever;  it kept an updatable directory at the beginning of the
tape, with a very large erase gap (?) behind it.

The only 2 systems that came close to Phoenix were, in my humble
opinion, VMS and EMAS.  I remember vividly having to queue up
for terminal access even at 2 o'clock in the morning!

So what eventually happened to Phoenix?
What replaced it, and why?

Roy Omond
Blue Bubble Ltd.
0
Roy
10/19/2004 12:25:34 PM
In article <2tj7t8F1vie5mU1@uni-berlin.de>, dcecchi.nojunk@att.net 
says...
> 
> "Stephen Fuld" <s.fuld@PleaseRemove.att.net> wrote in message
> news:hCYcd.719051$Gx4.718206@bgtnsc04-
> snip
> > >
> > > IBM systems used mostly block (3270-type) terminals, where the terminal
> > > handled the input and editing and only sent data to the CPU when the user
> > > pressed "Enter" (or other "AID" keys), and only when the FEP "polled" the
> > > device to request input.  This is not so good for timesharing, but much
> > > better for transaction processing.  HTML forms are a good analog of how
> > > the 3270 worked.
> >
> > Synch, line at a time terminals worked fine for much timesharing.  As long
> > as you stuck to line oriented editors program development worked fine.  And
> > there were even mechanisms to use tthe "advanced" features of the relativly
> > smart terminals to do usefull things.  But of course no real graphics.
> 
> You should have seen the cool graphics a guy in Rochester got out of a
> 3279,  wonderful color waveforms from the circuit simulator.  Some sort
> of trick with downloading character sets that were really little chunks
> of the picture or something like that.  I thought I had gone to heaven
> after years of looking at waveforms plotted on a line printer.

Those "little chunks of the picture" were called "Programmed Symbols", 
which surprisingly was a good name for 'em, since the programmer wrote 
to the character generator.  

The 3279/PS didn't work very well on ALDs though (Automated Logic 
Drawings; schematics in ^IBMspeak).

-- 
  Keith
0
K
10/19/2004 12:45:52 PM
On 19 Oct 2004 10:39:45 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) wrote:

> No context quoted, as it would be either too much or too little.
> 
> There is virtually no online information on Phoenix, which really
> needs fixing.  It was one of the least invasive of the MVT/MVS

	Too true. Phoenix was great; it made a 370 pleasant to use.

> number of commands, a very good programmable editor, a very good

	Ah yes, ZED, the last word in line editors - that was nice.

> People who were not computer experts and who were unfamiliar with
> it really did use it starting from 20 lines of instruction, but
> most people either read an introductory manual or went on a course
> of an hour or two.  Thereafter, it was usable with no documentation
> beyond the online help system.

	This I can confirm, it was very easy to get started with.

-- 
C:>WIN                                      |   Directable Mirror Arrays
The computer obeys and wins.                | A better way to focus the sun
You lose and Bill collects.                 |    licences available see
                                            |    http://www.sohara.org/
0
Steve
10/19/2004 1:52:12 PM
In article <2tkfhlF20cdp0U1@uni-berlin.de>,
Roy Omond  <Roy.Omond@BlueBubble.UK.Com> wrote:
>ZED (the programmable editor) mentioned by Nick was simply superb.
>The *best* editor I've ever come across (and I'm a self-confessed
>VMS bigot :-).  And TLS (Tape Library System) was extremely
>clever;  it kept an updatable directory at the beginning of the
>tape, with a very large erase gap (?) behind it.
>
>The only 2 systems that came close to Phoenix were, in my humble
>opinion, VMS and EMAS.  I remember vividly having to queue up
>for terminal access even at 2 o'clock in the morning !

Ah. INFO.EAGLE.CURRENT.STATUS

Happy memories.

>So what eventually happened to Phoenix ?

http://www.rrw-net.co.uk/semiramis-org-uk/phoenix/index.html

>What replaced it, and why ?

I think it was retired in favour of the central unix service, but it
seems like a long time since I was a Cambridge undergraduate & my main
exposure to Phoenix was wasting time reading Groggs.

Phil
-- 
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
0
phil
10/19/2004 2:13:36 PM
| Brian Inglis wrote:
|> del cecchi wrote:
|>> Anne & Lynn Wheeler wrote:
|>> by the mid-80s, the high-end pcs and workstations were starting to
|>> take over this market .... the 4341 follow-on .... 4381 didn't see
|>> anything like the explosion that the 4341 saw (even as replacement for
|>> 4341 as it went to end-of-life).

|> And the racetrack machines died a horrible death.  I thought the 370
|> fort knox machine ended up as the 4361.

| And dual 4361s ended up as the service processors for the 309X?
| series, providing LPAR functionality.

No, LPAR functionality was done on the 309x hardware; in actuality it was
a version of VM with heavy use of SIE. _________________________Gerard S.



0
gerard46
10/19/2004 3:21:38 PM
johnl@iecc.com (John R. Levine) wrote in message news:<cl29bl$gck$1@xuxa.iecc.com>...
> >The VAX had several problems: ...
>  
> >Not sure what you mean with register interlocks.
> 
> The autoincrement and autodecrement address modes are interpreted
> in strict order, so you have to be prepared for this:
> 
>  cmpw @(r2)+,@(r2)+
> 
> It compares two words at addresses pointed to by successive words
> pointed to by r2.  You can't decode the two addresses in parallel if
> they use the same register.

Actually, with a reasonable instruction execution unit, even this is
not a problem. The first address is computed '@(r2)' while the second
is computed as '@(r2+sizeof(w))'. This is straightforward in a decoder
set up for VAX-like instruction decode.

The only thing that is hard is trying to assume that @(r2)+ has a
single (constant) decoder emission without regard to what comes
before or after.
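
A minimal C sketch of that idea (illustrative only, not real VAX decode
logic; the step size and all names here are assumptions):

  #include <stdint.h>

  /* Both operand specifiers of  cmpw @(r2)+,@(r2)+  can be derived from
   * the value r2 had at the start of the instruction: the decoder applies
   * the increment symbolically to the second specifier and writes r2 back
   * once, instead of serializing the two decodes.  STEP is whatever the
   * addressing mode's rules say the increment is; its value doesn't matter
   * for the point being made. */
  enum { STEP = 4 };

  typedef struct {
      uint32_t fetch1;    /* address of the first pointer fetch:  r2          */
      uint32_t fetch2;    /* address of the second pointer fetch: r2 + STEP   */
      uint32_t r2_final;  /* single combined writeback:           r2 + 2*STEP */
  } DecodedSpecs;

  static DecodedSpecs decode_both(uint32_t r2_initial)
  {
      DecodedSpecs d;
      d.fetch1   = r2_initial;
      d.fetch2   = r2_initial + STEP;     /* no need to wait for the first update */
      d.r2_final = r2_initial + 2 * STEP;
      return d;
  }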

Mitch
0
MitchAlsup
10/19/2004 3:51:24 PM
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
> I always wanted to know the difference between CRBE and CRJE.

the code in HASP that drove 2780 bisynch was called Remote Job Entry
.... there literally were decks of cards ... and they were frequently
referred to as a job deck (of cards).

The *CRJE* stuff i did in hasp involved deleting the 2780 code and
replacing it with 2741 & TTY terminal support ... and slipping in an
editor that supported the CMS edit syntax.

doing search engine on *remote job entry* turns up "about 17,500"
entries; *remote batch entry* turns up "about 467" entries.

some of the *remote batch entry* entries say something about remote
submission of batches of data ... so *conversational remote batch
entry* might possibly have some slight semantic conflict between
conversational and batch.

specifying both *2780* and *rje* to search engine turns up
"about 989" entries (down from "17,500" for just *rje*).

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/19/2004 3:56:14 PM
In article <thvv-08B113.17223718102004@comcast.dca.giganews.com>,
Tom Van Vleck <thvv@multicians.org> wrote:
>Nick Maclaren wrote:
>My impression was that CICS was a user application program
>that ran on OS/360. When run it burrowed into the OS, wired
>a bunch of core, fired up its own scheduler and job queue
>system, and took over one or more terminal controllers.
>  [ ... ]                                          GCOS had
>two such transaction processing subsystems in the 1970s.
>One was TDS and the other was ... I forget.

TPE: Transaction Processing Executive.

   TDS came out of Bull, and ran as a privileged slave
program, rather like TimeSharing Subsystem did.  It ran its
own scheduler and memory pool for the transaction processing
tasks under its control, and ran its own versions of IDS,
ISP and Random database library subroutines to journalize
the tasks' work.  Seems to have resembled CICS in these
regards.

   TPE was from Phoenix, and tried to extend the GCOS
executive to do transaction handling.  IDS journalling
support was installed in a GCOS executive module, and the
programs for submitting batch jobs were amended to speed
transaction-handling jobs into execution.  In the early
years when I saw it this was the bottleneck, since even with
the amendments, job submission was a very heavy task,
running through requirements analysis, peripheral
allocation, memory allocation, execution, and tear-down
afterwards.  None of these steps would take much for a
transaction program, but each one of them required waking up
a privileged-slave program to check the job out.  ISTR
the ongoing changes to job submission were the main thing
about TPE.

   Later on, the two programs may have merged, when GCOS
went virtual, but that was after my time.

        Regards.        Mel.
0
mwilson
10/19/2004 4:48:45 PM
"Alex McDonald" <alex_mcd@btopenworld.com> wrote in message 
news:b57b10b6.0410190048.62ed98c0@posting.google.com...
> johnl@iecc.com (John R. Levine) wrote in message 
> news:<ckv2np$d5i$1@xuxa.iecc.com>...
>>  The block oriented I/O architecture
>> helps, too.
>
> Without a doubt. In the commercial space, the large majority of
> applications moved data without transforming it at all, or simply
> sorted and collated it.

I think you can cut it finer than that.  Many commercial applications do 
transformations of data, but much of the time it is very specific 
transformations, for which hardware assists are provided.  For example, there 
is a lot more emphasis in commercial applications on human interfaces with 
data (screens, printed reports, keyed inputs, etc.), so there are hardware 
instructions to convert data between displayable forms and the forms that 
one computes with.  Also, since commercial data tends to include more fields 
of arbitrary byte lengths, there are instructions to manipulate 
individual bytes in strings, etc.  One of the few compute-intensive things 
that at least some of such systems often did is encryption.  As everyone 
probably knows, DES is very compute intensive, so IBM offers (and has for 
some time) an optional hardware "coprocessor" that does 
encryption/decryption.  So it is a matter of emphasis.

> S/360 & 370 I/O speeds and CPU disconnect on
> the multiple channel architecture (basically dedicated I/O processors)
> allowed very performant I/O, something that the zSeries delivers in
> spades today. The 4300 series was nowhere near as good in this
> respect.

Well, the 4300 series was at the low end of the line, so lots of the channel 
functions were performed by the same engines.  But the higher end systems 
of that era had separate channel engines.

One key is that you set up the program in ordinary memory then pointed the 
channel processor to the program and told it "go".  No memory mapping or 
loads/stores to specified addresses that were really on some other 
peripheral device.  BTW, this was certainly not unique to IBM.  At least 
most of the other mainframe systems had similar ideas, though of course the 
implementations were totally different.  This is one key to higher 
performance, one which systems like Infiniband were designed to do also.  To 
try to unite two threads here, I can't tell if ASI implements this in some 
way.
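
To illustrate the general idea, here is a schematic C sketch only - not the
actual S/360 CCW bit layout; start_io and the field names are made up:

  #include <stdint.h>

  /* The host builds a small "program" of command descriptors in ordinary
   * memory, then hands the channel/IO processor one pointer and says "go".
   * The channel walks the chain on its own, leaving the CPU free. */
  typedef struct {
      uint8_t  command;    /* e.g. READ, WRITE, SEEK - device-defined       */
      uint32_t data_addr;  /* buffer in main memory                         */
      uint16_t count;      /* bytes to transfer                             */
      uint8_t  chain;      /* nonzero: continue with the next descriptor    */
  } ChannelCmd;

  /* Hypothetical doorbell: on a real machine this would be a privileged
   * instruction or a write to a device register. */
  extern void start_io(unsigned channel, const ChannelCmd *program);

  void read_two_blocks(unsigned chan, void *buf0, void *buf1, uint16_t blksize)
  {
      static ChannelCmd prog[2];
      prog[0] = (ChannelCmd){ 0x02 /* READ */, (uint32_t)(uintptr_t)buf0, blksize, 1 };
      prog[1] = (ChannelCmd){ 0x02 /* READ */, (uint32_t)(uintptr_t)buf1, blksize, 0 };
      start_io(chan, prog);   /* CPU is now free; completion arrives as an interrupt */
  }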

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
10/19/2004 5:01:24 PM
nmm1@cus.cam.ac.uk (Nick Maclaren) wrote in message news:<cl0r4b$d8s$1@gemini.csx.cam.ac.uk>...

> Don't get me wrong - VM/CMS wasn't bad, but VMS was a much better
> system for a great many purposes.  A user unfamiliar with either
> would typically take 1/3 the time to start using VMS effectively
> as VM/CMS (or Unix, for that matter).  Let's leave MVS and TSO out
> of this one ....

Yeah -- but with VM you could junk CMS and write your own OS, which is
how the OS that I still run was started about 30 years ago...

Michel.
0
hack
10/19/2004 5:10:01 PM
"Anne & Lynn Wheeler" <lynn@garlic.com> wrote in message 
news:uoeizz7rn.fsf@mail.comcast.net...
> "Stephen Fuld" <s.fuld@PleaseRemove.att.net> writes:
>> I believe the Unix "sort of" equivalent to CICS is/was systems like 
>> Tuxedo.
>> And while CICS was originally a "greem screen" application, there is now
>> software that allows taking advantage of the processing power and high
>> bandwidth to the screen of a PC.   Things like field editing can be moved 
>> to
>> the PC.
>
> tuxedo was transaction monitor .... while cics was transaction
> processing subsystem ....

OK, that begs the question - please describe the differences.  Note that I 
know a little about CICS, but almost nothing about Tuxedo, so please take 
that as a starting point.

> i got half dozen tuxedo books down in boxes
> someplace. i believe tuxedo was spun off to bea(?).

I believe that yes, it was spun off to Bea Systems.

> there was also camelot ... out of cmu ... along with mach, andrew
> widgets, andrew filesystem, etc.

I don't know about Camelot, but the Andrew File system is not really a 
transaction anything.  It is a distributed file system.  But see below

> IBM had pumped something like $50m
> into CMU for these projects about the same time that IBM & DEC each
> funded Project Athena at MIT to the tune of something like $25m each.
>
> some of this was spun out of cmu as Transarc (i believe also heavily
> funded by ibm ... and then bought outright by ibm).

The Andrew File System - AFS (Yes, the Andrew in the name was to honor CMU's 
founder) was essentially renamed the Distributed File System - DFS, and 
became part of the Transarc distributed processing system.

> cics was much more like transaction processing in any of the rdbms
> systems (loading transaction code, scheduling transaction code,
> actually dispatching the code for executiong) ... except it started
> out interfacing to bdam files (as opposed to having full blown dbms).

Yes.  Eventually it got interfaces to the standard IBM database systems (IMS 
and DB2).  I understand (and agree with your statements) about what CICS 
did.  What was different about what Tuxedo did?  (As I said above, I know 
almost nothing about Tuxedo, so I am trying to learn.)

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
10/19/2004 5:11:34 PM
"Stephen Fuld" <s.fuld@PleaseRemove.att.net> writes:
> Well, the 4300 series was at the low end of the line, so lots of the
> channel functiopns were preformed by the same engines.  But the
> higher end systems of that era had separate channel engines.

there was this interesting problem of how fast various channels could
react ... more latency as opposed to raw, flat-out bandwidth.

vm formatted ckd disks in pseudo fixed-block architecture ... actually
from its start in the mid-60s. on 3330s disks there was this
interesting problem of having a request for a record on one track
.... and also a queued request for a "logical" sequential record on a
different track (on the same cylinder). the trick was for the channel
(and rest of infrastructure) to execute the switch track command(s)
and pickup the next record in a single revolution (w/o the start of
record having rotated past the heads before the start of the data
transfer operation ... resulting in an extra full revolution).
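
A toy C snippet just to make that timing constraint concrete (the numbers
in the comment are placeholders, not from the post; a 3330 revolved in
roughly 16.7 ms):

  /* If the channel/controller cannot complete the track-switch command
   * within the inter-record gap, the start of the target record has
   * already passed the heads and a full extra revolution is lost. */
  static double extra_delay_ms(double rotation_ms, double gap_ms,
                               double switch_latency_ms)
  {
      return (switch_latency_ms <= gap_ms) ? 0.0 : rotation_ms;
  }

  /* e.g. extra_delay_ms(16.7, 0.3, 0.1) -> 0.0  (made it in time)
   *      extra_delay_ms(16.7, 0.3, 0.5) -> 16.7 (missed; extra revolution) */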

the 168 outboard channels, the 148 channels, and the 4341 channels all
did this much better & consistently than the 158 channels.  the 158
had integrated channels, where the processor engine was time-shared
between the 370 microcode function and the channel microcode function.

moving to the 303x machines ... they took a 158 engine ... stripped
away the 370 microcode (leaving just the channel microcode) and called
it a channel director. all of the 303x processors used channel
directors for their channels. all of the 303x processors had the
channel command latency characteristics of the 158 ... while the 4341
had better channel command latency characteristics.

the 4341 was about a one mip machine that you could get with 16mbytes of
memory and six channels. the 3033 was a 4.5 mip machine that you could
get with 16mbytes of memory and sixteen channels. however six fully
configured 4341s were in about the same price range as a 3033 (meaning
you could get an aggregate of 6mips, 96mbytes of memory, and 36
channels). there was some internal tension at the time about clusters
of 4341 being extremely competitive with the high-end product.

now, if you are talking about the 4331 ... that was a much slower
machine.

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/19/2004 5:18:29 PM
"mike" <mike@mike.net> wrote in message 
news:cEZcd.18663$Qv5.4367@newssvr33.news.prodigy.com...
>
> I seem to recall that the IMS database system also had its own CICS like 
> front end.

I believe that was true, and IIRC it was called IMS/DC (for Data 
Communications).  Later, CICS got an interface to IMS.  Given it is IBM, I 
suspect IMS/DC is still supported, but probably deprecated and not much 
development activity nor new users.

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
10/19/2004 5:21:44 PM
"Mel Wilson" <mwilson@the-wire.com> wrote in message 
news:tVUdBls/KDbX089yn@the-wire.com...
> In article <thvv-08B113.17223718102004@comcast.dca.giganews.com>,
> Tom Van Vleck <thvv@multicians.org> wrote:
>>Nick Maclaren wrote:
>>My impression was that CICS was a user application program
>>that ran on OS/360. When run it burrowed into the OS, wired
>>a bunch of core, fired up its own scheduler and job queue
>>system, and took over one or more terminal controllers.
>>  [ ... ]                                          GCOS had
>>two such transaction processing subsystems in the 1970s.
>>One was TDS and the other was ... I forget.
>
> TPE: Transaction Processing Executive.
>
>   TDS came out of Bull, and ran as a privileged slave
> program, rather like TimeSharing Subsystem did.  It ran its
> own scheduler and memory pool for the transaction processing
> tasks under its control, and ran its own versions of IDS,
> ISP and Random database library subroutines to journalize
> the tasks' work.  Seems to have resembled CICS in these
> regards.
>
>   TPE was from Phoenix, and tried to extend the GCOS
> executive to do transaction handling.  IDS journalling
> support was installed in a GCOS executive module, and the
> programs for submitting batch jobs were amended to speed
> transaction-handling jobs into execution.  In the early
> years when I saw it this was the bottleneck, since even with
> the amendments, job submission was a very heavy task,
> running through requirements analysis, peripheral
> allocation, memory allocation, execution, and tear-down
> afterwards.  None of these steps would take much for a
> transaction program, but each one of them required waking up
> a privileged-slave program to check the job out.  ISTR
> the ongoing changes to job submission were the main thing
> about TPE.

Interesting.  TPE sounds sort of like what Univac/Sperry did with their 
transaction processing system, TIP (IIRC Transaction Interface Protocol). 
But instead of speeding up the protocols for submitting jobs, they changed 
the task allocation logic to allow running a task (i.e. a transaction) 
without a corresponding job (Univac called jobs "runs", but they were 
equivalent).  Thus no requirements analysis, peripheral allocation, setting 
up spool files for printed output, etc.  The files used for the database 
were set up at initialization time and didn't have to be allocated to each 
transaction and there were restrictions on what a transaction could do, but 
these weren't significant.  It worked pretty well and was/is the main 
competition to IBM's purpose built operating system ACP/TPF for airline res 
systems.  TIP doesn't have the throughput of TPF, but it is a much easier 
environment to develop programs for.

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
10/19/2004 7:51:52 PM
johnl@iecc.com (John R. Levine) writes:

> >The VAX had several problems: ...
> 
> >Not sure what you mean with register interlocks.
> 
> The autoincrement and autodecrement address modes are interpreted
> in strict order, so you have to be prepared for this:
> 
>  cmpw @(r2)+,@(r2)+
> 
> It compares two words at addresses pointed to by successive words
> pointed to by r2.  You can't decode the two addresses in parallel if
> they use the same register.  There aren't that many circumstances
> where it's useful to do multiple references to the same register in an
> instruction with at least one being autoinc or autodec, but it's legal
> so you have to be prepared for it.  I would think this is a place
> where static translation to the Alpha should win big, since it could
> generate simple fast code in 99.9% of the cases and slower code for
> the rare times that the registers interfere.

Not so necessary for VAX, but on PDP11 it was nice to be able to use
it as a stack machine, doing arithmetic with the data out in memory.
Not fast, though!
-- 
Joseph J. Pfeiffer, Jr., Ph.D.       Phone -- (505) 646-1605
Department of Computer Science       FAX   -- (505) 646-1002
New Mexico State University          http://www.cs.nmsu.edu/~pfeiffer
0
Joe
10/19/2004 7:52:32 PM
re: 
http://www.garlic.com/~lynn/2004n.html#14 360 longevity

and for a little more drift ... the 3090 had sort of the opposite
problem with i/o command latency/overhead processing.

i had been making these comments that the relative system performance
of disks had declined by a factor of 10 between 360 and 3081.
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

the disk division didn't like what i was saying and assigned the
performance group to refute the statements. after spending something
like 3 months looking at the issues ... they eventually concluded that
i had slightly understated the severity of the problem. this then
turned into a share presentation on configuring to improve disk
thruput.

i had also been wandering around the disk engineering and product test
labs in bldg. 14 & bldg. 15. they had these "test cells" that contained
development hardware .. that got "stand-alone" test time connection
to some processor for testing. at the time, if they tried connecting
development hardware to a machine running a standard MVS operating system,
the claim was that the MTBF was 15 minutes .... so they had a number
of processors that were serialy scheduled for dedicated development
hardware testing.

i took this as somewhat of a challenge to write an operating system
bullet proof i/o subsystem that would never crash &/or hang the
system. this was eventually deployed across the disk development and
product test processors in bldg. 14 & 15 .... and even eventually
migrated to some of the other disk division plant sites. so they would
typically be able to do possibly half dozen test cells on a processor
w/o crashing and/or hanging.

bldg. 15, product test lab ... had a 3033 for testing with new disk
products and added a 3830 controller with 16 3330 disk drives for
engineering timesharing use ... on a machine that had previously been
dedicated to disk hardware testing (note that testcell operation
was fairly i/o intensive ... but tended to only use one percent of the
processor ... or less).
http://www.garlic.com/~lynn/subtopic.html#disk

so one monday i came in ... and i have an irate call from bldg. 15
wanting to know what i had done to their system over the weekend;
system thruput and response had gone all to pieces and was horrible.
I said that I hadn't changed their system at all over the weekend and
asked them what changes they had made. Of course, they said they
hadn't made any changes. Well, it turned out that they had replaced
their 3830 controller with a new development 3880 (disk) controller.

Having isolated what the change was ... it was time to do a lot of
hardware analysis. The 3830 disk controller had a fast horizontal
microcode engine that handled commands and disk transfers. As part of
some policy(?) .. the 3880 had a relatively slow (JIB-prime)
vertical microcode processor (for doing command decode and execution)
with some dedicated hardware for actual data transfer handling up to
3mbyte/sec (which would be seen with the new 3380 disks). The problem
was that elapsed time for typical disk operations were taking a couple
milliseconds longer with 3880 controller compared to same exact
operation using 3830 controller. To compensate, they re-orged how some
stuff was done inside the 3880 and would signal operation complete to
the processor as soon as data finished transfer ... and all sorts of
internal disk controller task completion proceeding in parallel after
signaling completion (as opposed to waiting until the 3880 had
actually completed everything before signaling completion).

They claimed that in the product performance acceptance tests ...
this change allowed the 3880 to meet specifications. However, it
turned out that the performance acceptance test was done with a two
disk drive VS1 operating system ... running single thread operation.
In this scenario ... the 3880 signaled completion and the VS1 system
went on its way getting other stuff done (overlapped with the 3880
actually finishing the operation). The VS1 operating system would then
get around to eventually putting together the next operation ... and
by that time the 3880 would be done with its internal business.

What had happened (that Monday morning) in real live operation with 16
drives and lots of concurrent activity was that there tended to
frequently be queued operations waiting for the controller. The 3880
would signal operation complete ... and immediately be hit with start
of a new (queued) operation. Since the 3880 was busy ... it would
signal controller busy (SM+BUSY) back to the processor ... and the
system would have to requeue the operation and go off and do something
else. Since the controller had signaled SM+BUSY ... it was now forced
to schedule a controller free interrupt (CUE) to tell the processor
that it was ready to do the next operation. The VS1 system performance
test never saw the increase in latency because it had other stuff to
do getting ready for the next operation ... and it never experienced
the significant increase in pathlength caused by the requeuing because
of the SM+BUSY and the subsequent additional interrupt (the CUE).

So it was now back to the drawing board for the 3880 .... to actually
try and do something about the extra latency (rather than trying to
hide it and hope it was overlapped with something else the processor
needed to do). Fortunately this was still six months before first
customer ship for the 3880s ... so they had a little breathing room in
which to do something.

So the 3090 group in POK had been doing various kinds of capacity planning
.... some recent posts on this subject
http://www.garlic.com/~lynn/subtopic.html#bench

and balanced configuration thruput. The problem was that even after
fixing everything that could be possibly fixed in the 3880 ... there
was going to be significant more channel busy per operation (compared
to the same operations with 3830 controller).

Now, this is where my memory is a little vague. What i seem to
recollect was that the typical 3090 configuration had been assumed to
be six TCMs and 96 channels. All this stuff with 3880 channel busy meant
that the (customer's) disk farm had to be spread across a larger
number of channels in order to achieve the expected thruput; with
typical configuration now needing an extra 32 channels (to compensate
for the increased 3880 disk controller channel busy time), which in
turn required adding an extra (7th) TCM. There were jokes about taking
the cost of the extra TCM out of the disk division's revenue.

shorter version of the same tale:
http://www.garlic.com/2002b.html#3 Microcode? (& index searching)

various historical dates:
http://www.isham-research.com/chrono.html

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/19/2004 8:09:17 PM
here is cics history site (possibly more than you ever want to know)
http://objectz.com/columnists/tscott/part1.html

it mentions IMS as a partial competitor to CICS ... and mentions
some amount about IMS & DL/1

from above ...

IBM's IMS was a partial competitor to CICS.  It consists of two
products IMS/DB, a hierarchical database manager, and IBM/TS, a
transaction processing system (formerly referred to as a data
communications system, IMS/DC).  The application programming interface
for IMS was called DL/I.  With IMS being developed in San Jose,
California, it is easy to see how there could be more of a competitive
attitude between there and Hursley than a cooperative one.  Legend
within IBM relates that when the CICS team approached the IMS team to
work on an interface between the two products, the IMS team wanted no
part of it, saying they already had a transaction manager in IMS/DC.
The CICS team went ahead alone to build the interface.  The first
version, made available in 1974 with the first virtual storage version
of CICS (Aylmer-Hall, 1999), worked by making IMS/DB think it was
being invoked from a batch program.  First-hand experience of this
author revealed some of the problems of an interface designed without
cooperation from both sides.  When a problem in the interface caused
the CICS system to ABEND (Abnormally End), the application team might
call for IBM help from a CICS specialist or from an IMS specialist.
The CICS specialist would trace the problem to the IMS interface and
stop, saying he knew nothing of IMS.  If an IMS specialist was called,
he would look at the system dump and say that he could not find the
IMS control blocks that he needed to get started, because it was not
really a batch application.  Getting both specialists at once to solve
one problem proved impossible, so the team developing this CICS-IMS
application, especially this author, learned a lot about reading CICS
system dumps.

.... snip ...

when my wife did her time in POK responsible for loosely-coupled (aka
cluster) architecture ... she developed "peer-to-peer shared data"
... and spent some time working with IMS getting it adopted for IMS hot
standby ... misc
http://www.garlic.com/~lynn/subtopic.html#shareddata

the claim could be made that it was also the foundation for (the much
later) parallel sysplex ... parallel sysplex home page:
http://www-1.ibm.com/servers/eserver/zseries/pso/

of course we did ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp
and related
http://www.garlic.com/~lynn/subtopic.html#available

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/19/2004 8:35:48 PM
del cecchi wrote:
> "Stephen Fuld" <s.fuld@PleaseRemove.att.net> wrote in message
> news:hCYcd.719051$Gx4.718206@bgtnsc04-
> snip
> 
>>>IBM systems used mostly block (3270-type) terminals, where the terminal
>>>handled the input and editing and only sent data to the CPU when the user
>>>pressed "Enter" (or other "AID" keys), and only when the FEP "polled" the
>>>device to request input.  This is not so good for timesharing, but much
>>>better for transaction processing.  HTML forms are a good analog of how
>>>the 3270 worked.
>>
>>Synch, line at a time terminals worked fine for much timesharing.  As long
>>as you stuck to line oriented editors program development worked fine.  And
>>there were even mechanisms to use tthe "advanced" features of the relativly
>>smart terminals to do usefull things.  But of course no real graphics.
> 
> You should have seen the cool graphics a guy in Rochester got out of a
> 3279,  wonderful color waveforms from the circuit simulator.  Some sort
> of trick with downloading character sets that were really little chunks
> of the picture or something like that.  I thought I had gone to heaven
> after years of looking at waveforms plotted on a line printer.
> 

Any terminal with downloadable character sets can do what IBM calls 
"programmed symbol" graphics.  Unfortunately my 3290 doesn't have enough 
memory to download all possible bitmaps for a character cell.

0
Peter
10/19/2004 8:47:11 PM

Peter Flass wrote:
(someone wrote)

>> You should have seen the cool graphics a guy in Rochester got out of a
>> 3279,  wonderful color waveforms from the circuit simulator.  Some sort
>> of trick with downloading character sets that were really little chunks
>> of the picture or something like that.  I thought I had gone to heaven
>> after years of looking at waveforms plotted on a line printer.

> Any terminal with downloadable character sets can do what IBM calls 
> "programmed symbol" graphics.  Unfortunately my 3290 doesn't have enough 
> memory to download all possible bitmaps for a character cell.

It might be that it figures out which cell bitmaps it needs.
If not enough are available, then it figures out the set that minimizes
the number of bits that are missing.  (Extra ON bits probably look
worse than a few OFF bits that should be ON.)

For line graphics it shouldn't be too hard to make it look
pretty good.
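
A rough C sketch of that selection idea (all names are made up; a cell of
up to 16x16 pixels, one 16-bit word per row, and a weight of 2 for spurious
ON pixels are assumptions purely for illustration):

  #include <stdint.h>

  enum { CELL_ROWS = 16, EXTRA_ON_WEIGHT = 2 };

  static int popcount16(uint16_t v) { int n = 0; while (v) { v &= v - 1; n++; } return n; }

  /* Score one already-loaded glyph against the cell bitmap we actually want;
   * extra ON pixels are weighted heavier because they look worse. */
  static int glyph_error(const uint16_t want[CELL_ROWS], const uint16_t have[CELL_ROWS])
  {
      int err = 0;
      for (int r = 0; r < CELL_ROWS; r++) {
          uint16_t missing  = want[r] & ~have[r];   /* should be ON, isn't */
          uint16_t spurious = have[r] & ~want[r];   /* is ON, shouldn't be */
          err += popcount16(missing) + EXTRA_ON_WEIGHT * popcount16(spurious);
      }
      return err;
  }

  /* Pick the closest of the glyphs already resident in the terminal. */
  static int best_glyph(const uint16_t want[CELL_ROWS],
                        const uint16_t glyphs[][CELL_ROWS], int nglyphs)
  {
      int best = 0, best_err = glyph_error(want, glyphs[0]);
      for (int i = 1; i < nglyphs; i++) {
          int e = glyph_error(want, glyphs[i]);
          if (e < best_err) { best_err = e; best = i; }
      }
      return best;
  }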

-- glen

0
glen
10/19/2004 9:13:02 PM
glen herrmannsfeldt wrote:

> 
> 
> Peter Flass wrote:
> (someone wrote)
> 
>>> You should have seen the cool graphics a guy in Rochester got out of a
>>> 3279,  wonderful color waveforms from the circuit simulator.  Some sort
>>> of trick with downloading character sets that were really little chunks
>>> of the picture or something like that.  I thought I had gone to heaven
>>> after years of looking at waveforms plotted on a line printer.
> 
> 
>> Any terminal with downloadable character sets can do what IBM calls 
>> "programmed symbol" graphics.  Unfortunately my 3290 doesn't have 
>> enough memory to download all possible bitmaps for a character cell.
> 
> 
> It might be that it figures out which cell bitmaps it needs.
> If not enough are available then figure out the set that minimizes
> the number of bits that are missing.  (Extra ON bits probably look
> worse than a few OFF bits that should be ON.)

This is exactly how you'd do it, and it is of course a horrible hack:

You need far more cpu & IO bandwidth to draw some simple plots on the 
screen. :-(
> 
> For line graphics it shouldn't be too hard to make it look
> pretty good.

I'd do some extra hackery here, by making sure graphs that used any kind 
of grid would locate the grid lines on some (sub-)multiple of the 
character cell size.

BTW, this hack is still in use, in the form of text-mode windowing 
libraries that synthesize a cursor by modifying the font maps of any 
character that happens to be overlayed by the cursor!

If the cursor is the same size as a character cell or smaller, then it 
can never cover more than four cells, and the library can reserve four 
unused positions in the character bitmap table, changing them on the fly 
as the cursor is moved.
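
A small C sketch of that trick, with made-up names and an assumed 8x16 cell;
it only shows drawing the cursor (erasing is just restoring screen[] from
shadow[]):

  #include <stdint.h>
  #include <string.h>

  enum { CW = 8, CH = 16, COLS = 80, ROWS = 25, RESERVED_BASE = 252 };

  static uint8_t font[256][CH];       /* downloadable character bitmaps      */
  static uint8_t screen[ROWS][COLS];  /* glyph index displayed in each cell  */
  static uint8_t shadow[ROWS][COLS];  /* what each cell shows without cursor */

  /* Overlay a cell-sized cursor bitmap at an arbitrary pixel position by
   * rewriting (at most) four reserved glyph slots and remapping the (at
   * most) four covered screen cells to them. */
  static void draw_cursor(int px, int py, const uint8_t cursor[CH])
  {
      int col0 = px / CW, row0 = py / CH;

      for (int r = 0; r < 2; r++)
          for (int c = 0; c < 2; c++) {
              int cr = row0 + r, cc = col0 + c;
              if (cr >= ROWS || cc >= COLS) continue;
              int slot = RESERVED_BASE + 2 * r + c;
              memcpy(font[slot], font[shadow[cr][cc]], CH);  /* start from the normal glyph */
              screen[cr][cc] = (uint8_t)slot;
          }

      /* OR each cursor pixel into whichever reserved glyph it lands in. */
      for (int y = 0; y < CH; y++)
          for (int x = 0; x < CW; x++) {
              if (!(cursor[y] & (0x80 >> x))) continue;
              int sx = px + x, sy = py + y;
              if (sx >= COLS * CW || sy >= ROWS * CH) continue;   /* off screen */
              int r = sy / CH - row0, c = sx / CW - col0;         /* 0 or 1     */
              font[RESERVED_BASE + 2 * r + c][sy % CH] |= (uint8_t)(0x80 >> (sx % CW));
          }
  }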

Terje

-- 
- <Terje.Mathisen@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
0
Terje
10/19/2004 10:07:55 PM
"mike" <mike@mike.net> wrote in message news:<cEZcd.18663$Qv5.4367@newssvr33.news.prodigy.com>...
> I seem to recall that the IMS database system also had its own CICS like 
> front end.


Still does.  IMS/DC (data comm) as opposed to IMS/DB (the database
part).  Of course CICS programs can access IMS as well (just to keep
things confusing).
0
robertwessel2
10/19/2004 10:57:31 PM
Here in alt.folklore.computers,
"Stephen Fuld" <s.fuld@PleaseRemove.att.net> spake unto us, saying:

>> Sort of.  Non-IBM systems
>                  ^^^^^^^^^^^^^^^^^^^^^
>
>This should really be non-mainframe systems.  AFAIK, the BUNCH systems used 
>primarily systems that were similar to that IBM used (with totally different 
>prorocols of course!) just because non of them wanted to have the overhead 
>of character level interrupts on hundreds or thousands of terminals.

Correct.  I know that Sperry/Unisys "UTS" terminals were (still are)
intelligent block-mode terminals, and I think Burroughs/Unisys T27
terminals and workalikes/derivatives are similar.

-- 
 -Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Smyrna, GA USA
  OS/2 + eCS + Linux + Win95 + DOS + PC/GEOS + Executor = PC Hobbyist Heaven!
       WARNING: I've seen FIELDATA FORTRAN V and I know how to use it!
                   The Theorem Theorem: If If, Then Then.
0
rsteiner
10/20/2004 3:53:15 AM
Here in alt.folklore.computers,
"Stephen Fuld" <s.fuld@PleaseRemove.att.net> spake unto us, saying:

>> While CICS is more like a character green screen version of Apache or an 
>> application server like WebSphere.
>
>I believe the Unix "sort of" equivalent to CICS is/was systems like Tuxedo.

Yes, Tux is kinda sorta like a transaction system.

>And while CICS was originally a "greem screen" application, there is
>now software that allows taking advantage of the processing power and
>high bandwidth to the screen of a PC.   Things like field editing can
>be moved to the PC.

At least in the case of UNIVAC/Sperry 1100/2200-series boxes, screen
and field editing/processing was usually done on the terminal anyway
when one was using a transaction system like TIP or HVTIP.

That's one of the whole points behind having semi-intelligent terminals
using a block-mode protocol.

-- 
 -Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Smyrna, GA USA
  OS/2 + eCS + Linux + Win95 + DOS + PC/GEOS + Executor = PC Hobbyist Heaven!
       WARNING: I've seen FIELDATA FORTRAN V and I know how to use it!
                   The Theorem Theorem: If If, Then Then.
0
rsteiner
10/20/2004 3:55:57 AM
Here in alt.folklore.computers,
"Stephen Fuld" <s.fuld@PleaseRemove.att.net> spake unto us, saying:

>TIP doesn't have the throughput of TPF, but it is a much easier
>environment to develop programs for.

I've been given to understand that HVTIP (High-Volume TIP, which most
USAS applications are written for nowadays) is considerably faster in
many respects than standard TIP.

It's pretty easy to write software for TIP.  The hard part is finding a
company still using it!  :-)

-- 
 -Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Smyrna, GA USA
  OS/2 + eCS + Linux + Win95 + DOS + PC/GEOS + Executor = PC Hobbyist Heaven!
       WARNING: I've seen FIELDATA FORTRAN V and I know how to use it!
                   The Theorem Theorem: If If, Then Then.
0
rsteiner
10/20/2004 4:00:22 AM
Here in alt.folklore.computers,
Anne & Lynn Wheeler <lynn@garlic.com> spake unto us, saying:

>a customer with 20-50 people caring and feeding a single big mainframe
>complex couldn't continue to follow the same paradigm when it was
>cloned a couple hundred or thousand times.

You might want to specify "IBM mainframe complex" above -- I've been
given the very strong impression based on my days at Northwest Airlines
(which is both an IBM and a Unisys 2200-series mainframe shop) that the
IBM side of life required a considerably larger staff to maintain, both
on the systems side and on the applications development/support side.

-- 
 -Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Smyrna, GA USA
  OS/2 + eCS + Linux + Win95 + DOS + PC/GEOS + Executor = PC Hobbyist Heaven!
       WARNING: I've seen FIELDATA FORTRAN V and I know how to use it!
                   The Theorem Theorem: If If, Then Then.
0
rsteiner
10/20/2004 4:04:35 AM
In article <ufz4axx7f.fsf@mail.comcast.net>,
Anne & Lynn Wheeler <lynn@garlic.com> writes:
|> 
|> here is cics history site (possibly more than you ever want to know)
|> http://objectz.com/columnists/tscott/part1.html
|> 
|> it mentions IMS as a partial competitor to CICS ... and mentions
|> some amount about IMS & DL/1

IBM has always been very strong on internal competition :-)

I have been in several arguments where I have said it is
actually a strength, in that changes in the industry usually
catch IBM unawares, but have not so far left it with no viable
product lines.  The same is not true of all IT companies ....


Regards,
Nick Maclaren.
0
nmm1
10/20/2004 9:23:29 AM
Peter Flass <Peter_Flass@Yahoo.com> writes:

> Jim Haynes wrote:

>> This reminds me of something older that may be relevant.  I once
>> read a document by George Mealy about the travails of OS/360 and
>> some comparisons with what was then the PDP-10 operating system.
>> What struck me was that the PDP-10 system was designed as a remote
>> terminal oriented system from the ground up.  Whereas with OS/360
>> you had what was basically a card driven system and had to graft on
>> more layers of software to get it to deal with operation from
>> terminals.  Now I don't pretend to know anything about IBM
>> software, but I got the impression that later on you had to have
>> something called CICS to do what the DEC software already was doing
>> built-in; and even in my last contacts with IBM stuff there seemed
>> to be files that were card images and printer line images.  And
>> CICS required its own set of experts as if it were another
>> operating system running on top of the OS.

> DEC had nothing like CICS, almost nobody did.  CICS was a
> transaction processing monitor.  In today's terms it was a single
> process that owned all the resources and the transactions operated
> as threads underneath it.  CICS provided all OS-type services for
> the transactions.  It was all designed from the ground up for
> flat-out efficiency and speed. Univac had TIP. which was somewhat
> similar.  I had thought Tuxedo was also a work-alike, but someone
> recently pointed out that it wasn't. WEre there any others?

They had at least two. One was ACMS, along with RDB, TDMS et al. The
other was CICS. You could also get DB2/VMS to go with it.

-- 
Paul Repacholi                               1 Crescent Rd.,
+61 (08) 9257-1001                           Kalamunda.
                                             West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.
0
prep
10/20/2004 11:06:11 AM
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
> IBM has always been very strong on internal competition :-)

numerous times when it came to trying to kill off vm/cms ...
past post that mentions a pco (vs/pc) gimick
http://www.garlic.com/~lynn/2001f.html#49 

that mentions a couple of people using a model to calculate projected
pco/vspc performance (since it wasn't running yet) and nearly the
whole cms group involved in running mandated/required comparable real
benchmarks (upwards of six months time). when they finally got real
pco running ... it turned out that pco was something like ten times
slower than the simulated numbers were claiming.

also mentioned are the CERN tso/cms comparison tests .. and the CERN
report presented to share. internal corporate copies of the report
were quickly stamped "confidential - restricted" ...  available on a
strictly need-to-know basis only (for instance, you probably didn't
want the people marketing tso to know about it).

one could possibly tie the evolution of heavy CMS use at CERN to the
subsequent invention of HTML and the web.

random posts on gml/sgml, its invention at the science center in
'69; incorporation of gml support in cms document processing, etc
http://www.garlic.com/~lynn/subtopic.html#sgml

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/20/2004 2:43:33 PM
for a little drift ... there was this joke about working
four shifts; first shift in bldg 28 ... on various stuff like
http://www.garlic.com/~lynn/subtopic.html#systemr

2nd shift in bldgs 14/15
http://www.garlic.com/~lynn/subtopic.html#disk

3rd shift in bldg 90 doing some stuff for ims group
http://www.garlic.com/~lynn/subtopic.html#hsdt

and weekends/4th shift up at hone
http://www.garlic.com/~lynn/subtopic.html#hone

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
10/20/2004 3:16:41 PM
In article <rEedBpHpveMA092yn@visi.com>, rsteiner@visi.com
(Richard Steiner) writes:

>Here in alt.folklore.computers,
>"Stephen Fuld" <s.fuld@PleaseRemove.att.net> spake unto us, saying:
>
>>> Sort of.  Non-IBM systems
>>                  ^^^^^^^^^^^^^^^^^^^^^
>>
>>This should really be non-mainframe systems.  AFAIK, the BUNCH systems
>>used rimarily systems that were similar to that IBM used (with totally
>>different prorocols of course!) just because non of them wanted
>>to have the overhead of character level interrupts on hundreds or
>>thousands of terminals.
>
>Correct.  I know that Sperry/Unisys "UTS" terminals were (still are)
>intelligent block-mode terminals, and I think Burroughs/Unisys T27
>terminals and workalikes/derivatives are similar.

In fact, the state of a transaction was in part stored in the terminal's
screen buffer.  At least the way I wrote transaction programs.  :-)

--
/~\  cgibbs@kltpzyxm.invalid (Charlie Gibbs)
\ /  I'm really at ac.dekanfrus if you read it the right way.
 X   Top-posted messages will probably be ignored.  See RFC1855.
/ \  HTML will DEFINITELY be ignored.  Join the ASCII ribbon campaign!

0
Charlie
10/20/2004 5:43:11 PM
In article <cl2qth$d91$1@scorpius.csx.cam.ac.uk>,
	nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
> 
> No context quoted, as it would be either too much or too little.
> 
> There is virtually no online information on Phoenix, which really
> needs fixing.  It was one of the least invasive of the MVT/MVS
> front ends, and was partially modelled on the Titan system, one of
> the early and influential interactive systems, by some of the same
> people.  At the University of Cambridge, not Massachusetts :-)

GEC did a range of mini computers -- GEC 4000 series.
One of the operating systems for this mini, OS4000,
had a JCL language which was pretty much identical
to that of Phoenix command language. Actually, I have
a rather dog-eared copy of the Cambridge 370/165
Newcomers' Guide from 1978, which documents the Phoenix
command language.

-- 
Andrew Gabriel
Consultant Software Engineer
0
andrew
10/20/2004 8:01:35 PM
In article <9NUcd.566$5i5.126@newsread2.news.atl.earthlink.net>, haynes@alumni.uark.edu says...
>

>Something I was a little closer to was Burroughs B5500.  That machine was
>superb for batch and inherently lousy for timesharing.  There was a 
>timesharing version of the operating system.  There was also a customer-
>supplied terminal front end for the batch operating system.  It was
>called R/C (for "Remote Card") and that's how it worked; from a terminal
>you could make a file that was a virtual card deck and submit it for
>processing, and then you could examine the output file.
>
It has gotten a lot better, although the preferred terminal type is
still block mode, which isn't necessarily bad...

			- Tim
NOT speaking for Unisys.

0
tmm
10/21/2004 7:11:52 PM
On 20 Oct 04 09:43:11 -0800 in alt.folklore.computers, "Charlie Gibbs"
<cgibbs@kltpzyxm.invalid> wrote:

>In article <rEedBpHpveMA092yn@visi.com>, rsteiner@visi.com
>(Richard Steiner) writes:
>
>>Here in alt.folklore.computers,
>>"Stephen Fuld" <s.fuld@PleaseRemove.att.net> spake unto us, saying:
>>
>>>> Sort of.  Non-IBM systems
>>>                  ^^^^^^^^^^^^^^^^^^^^^
>>>
>>>This should really be non-mainframe systems.  AFAIK, the BUNCH systems
>>>used rimarily systems that were similar to that IBM used (with totally
>>>different prorocols of course!) just because non of them wanted
>>>to have the overhead of character level interrupts on hundreds or
>>>thousands of terminals.
>>
>>Correct.  I know that Sperry/Unisys "UTS" terminals were (still are)
>>intelligent block-mode terminals, and I think Burroughs/Unisys T27
>>terminals and workalikes/derivatives are similar.
>
>In fact, the state of a transaction was in part stored in the terminal's
>screen buffer.  At least the way I wrote transaction programs.  :-)

Invisible screen fields as compared to hidden HTML form fields. 

-- 
Thanks. Take care, Brian Inglis 	Calgary, Alberta, Canada

Brian.Inglis@CSi.com 	(Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
    fake address		use address above to reply
0
Brian
10/22/2004 12:00:49 AM
rsteiner@visi.com (Richard Steiner) wrote in message news:<TPedBpHpvCzS092yn@visi.com>...
> You might want to specify "IBM mainframe complex" above -- I've been
> given the very strong impression based on my days at Northwest Airlines
> (which is both an IBM and a Unisys 2200-series mainframe shop) that the
> IBM side of life required a consierably larger staff to maintain, both
> on the systems side and on the applications development/support side.

there were significant numbers of both vendor people and customer
people involved in the care and feeding of the system

there was some presentation someplace ... that initially amdahl was
selling into MTS and VM/370 accounts (many at universities) because of
the significantly lower dependency on vendor support people (most of
whom would presumably evaporate if the customer switched to an amdahl
processor).

i got somewhat roped into this from another standpoint. the first
thoroughly blue account (large commercial entity with large football
fields worth of installed gear) announced that they were going to be
the first (true-blue) installation to install amdahl. I got asked to
go live at the customer location as part of a strategy to try and
change the customer's mind.

lots of past amdahl mentions:
http://www.garlic.com/~lynn/99.html#2 IBM S/360
http://www.garlic.com/~lynn/99.html#188 Merced Processor Support at it
again
http://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it
again
http://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it
again
http://www.garlic.com/~lynn/99.html#209 Core (word usage) was
anti-equipment etc
http://www.garlic.com/~lynn/2000c.html#8 IBM Linux
http://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
http://www.garlic.com/~lynn/2000c.html#48 WHAT IS A MAINFRAME???
http://www.garlic.com/~lynn/2000d.html#61 "all-out" vs less aggressive
designs (was: Re: 36 to 32 bit transition)
http://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the
Internet?^
http://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE?  Big
Iron
http://www.garlic.com/~lynn/2000e.html#58 Why not an IBM zSeries
workstation?
http://www.garlic.com/~lynn/2000f.html#11 Amdahl Exits Mainframe
Market
http://www.garlic.com/~lynn/2000f.html#12 Amdahl Exits Mainframe
Market
http://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86
ultimate CISC? designs)
http://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86
ultimate CISC? designs)
http://www.garlic.com/~lynn/2001.html#18 Disk caching and file
systems.  Disk history...people forget
http://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches
flushed on a page fault ?
http://www.garlic.com/~lynn/2001b.html#12 Now early Arpanet security
http://www.garlic.com/~lynn/2001b.html#28 So long, comp.arch
http://www.garlic.com/~lynn/2001b.html#56 Why SMP at all anymore?
http://www.garlic.com/~lynn/2001b.html#67 Original S/360 Systems -
Models 60,62 70
http://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
http://www.garlic.com/~lynn/2001d.html#35 Imitation...
http://www.garlic.com/~lynn/2001d.html#70 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001e.html#19 SIMTICS
http://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to
Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2001l.html#17 mainframe question
http://www.garlic.com/~lynn/2001l.html#18 mainframe question
http://www.garlic.com/~lynn/2001l.html#47 five-nines
http://www.garlic.com/~lynn/2001n.html#22 Hercules, OCO, and IBM
missing a great opportunity
http://www.garlic.com/~lynn/2001n.html#83 CM-5 Thinking Machines,
Supercomputers
http://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
http://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
http://www.garlic.com/~lynn/2002.html#24 Buffer overflow
http://www.garlic.com/~lynn/2002.html#44 Calculating a Gigalapse
http://www.garlic.com/~lynn/2002.html#50 Microcode?
http://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does
a chip get designed?
http://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
http://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the
light (spotlight, that is)
http://www.garlic.com/~lynn/2002e.html#48 flags, procedure calls,
opinions
http://www.garlic.com/~lynn/2002e.html#51 IBM 360 definition (Systems
Journal)
http://www.garlic.com/~lynn/2002e.html#68 Blade architectures
http://www.garlic.com/~lynn/2002g.html#0 Blade architectures
http://www.garlic.com/~lynn/2002h.html#73 Where did text file line
ending characters begin?
http://www.garlic.com/~lynn/2002i.html#12 CDC6600 - just how powerful
a machine was it?
http://www.garlic.com/~lynn/2002i.html#17 AS/400 and MVS -
clarification please
http://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful
a machine was it?
http://www.garlic.com/~lynn/2002j.html#20 MVS on Power (was Re:
McKinley Cometh...)
http://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's
LCMP
http://www.garlic.com/~lynn/2002j.html#46 M$ SMP and old time IBM's
LCMP
http://www.garlic.com/~lynn/2002j.html#75 30th b'day
http://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
http://www.garlic.com/~lynn/2002o.html#14 Home mainframes
http://www.garlic.com/~lynn/2002p.html#40 Linux paging
http://www.garlic.com/~lynn/2002p.html#44 Linux paging
http://www.garlic.com/~lynn/2002p.html#48 Linux paging
http://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about
mainframes
http://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540
card reader?
http://www.garlic.com/~lynn/2003.html#9 Mainframe System
Programmer/Administrator market demand?
http://www.garlic.com/~lynn/2003.html#36 mainframe
http://www.garlic.com/~lynn/2003.html#37 Calculating expected
reliability for designed system
http://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
http://www.garlic.com/~lynn/2003.html#65 Amdahl's VM/PE
information/documentation sought
http://www.garlic.com/~lynn/2003c.html#76 COMTEN- IBM networking boxes
http://www.garlic.com/~lynn/2003d.html#68 unix
http://www.garlic.com/~lynn/2003e.html#13 unix
http://www.garlic.com/~lynn/2003e.html#15 unix
http://www.garlic.com/~lynn/2003e.html#16 unix
http://www.garlic.com/~lynn/2003e.html#17 unix
http://www.garlic.com/~lynn/2003e.html#18 unix
http://www.garlic.com/~lynn/2003e.html#20 unix
http://www.garlic.com/~lynn/2003f.html#10 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#3 Disk capacity and backup
solutions
http://www.garlic.com/~lynn/2003g.html#58 40th Anniversary of IBM
System/360
http://www.garlic.com/~lynn/2003h.html#32 IBM system 370
http://www.garlic.com/~lynn/2003h.html#56 The figures of merit that
make mainframes worth the price
http://www.garlic.com/~lynn/2003i.html#3 A Dark Day
http://www.garlic.com/~lynn/2003i.html#4 A Dark Day
http://www.garlic.com/~lynn/2003i.html#6 A Dark Day
http://www.garlic.com/~lynn/2003i.html#53 A Dark Day
http://www.garlic.com/~lynn/2003j.html#54 June 23, 1969: IBM
"unbundles" software
http://www.garlic.com/~lynn/2003l.html#11 how long does (or did) it
take to boot a timesharing system?
http://www.garlic.com/~lynn/2003l.html#31 IBM Manuals from the 1940's
and 1950's
http://www.garlic.com/~lynn/2003l.html#41 Secure OS Thoughts
http://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point
Fix
http://www.garlic.com/~lynn/2003n.html#22 foundations of relational
theory? - some references for the
http://www.garlic.com/~lynn/2003n.html#24 Good news for SPARC
http://www.garlic.com/~lynn/2003n.html#46 What makes a mainframe a
mainframe?
http://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
http://www.garlic.com/~lynn/2003p.html#30 Not A Survey Question
http://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak
Install via QuickLoad Product
http://www.garlic.com/~lynn/2004b.html#48 Automating secure
transactions
http://www.garlic.com/~lynn/2004b.html#49 new to mainframe asm
http://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be
redone
http://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
http://www.garlic.com/~lynn/2004c.html#11 40yrs, science center, feb.
1964
http://www.garlic.com/~lynn/2004c.html#39 Memory Affinity
http://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
http://www.garlic.com/~lynn/2004d.html#22 System/360 40th Anniversary
http://www.garlic.com/~lynn/2004g.html#28 Most dangerous product the
mainframe has ever seen
http://www.garlic.com/~lynn/2004h.html#20 Vintage computers are better
than modern crap !
http://www.garlic.com/~lynn/2004j.html#17 Wars against bad things
http://www.garlic.com/~lynn/2004l.html#51 Specifying all biz rules in
relational data
http://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?
http://www.garlic.com/~lynn/2004m.html#56 RISCs too close to hardware?
0
lynn
10/24/2004 11:59:12 PM
>>> On Tue, 19 Oct 2004 14:09:17 -0600, Anne & Lynn Wheeler
>>> <lynn@garlic.com> said:

lynn> re: http://www.garlic.com/~lynn/2004n.html#14 360 longevity

lynn> and for a little more drift ... the 3090 had sort of the opposite
lynn> problem with i/o command latency/overhead processing.

lynn> i had been making these comments about the relative system
lynn> performance of disks had declined by a factor of 10 times between
lynn> 360 and 3081.  http://www.garlic.com/~lynn/93.html#31 Big I/O or
lynn> Kicking the Mainframe out the Door

lynn> the disk division

The disk division eventually was kicked out of the door :-).

lynn> didn't like what i was saying and assigned the performance group
lynn> to refute the statements. after spending something like 3 months
lynn> looking at the issues ... they eventually concluded that i had
lynn> slightly understated the severity of the problem. this then turned
lynn> into a share presentation on configuring to improve disk thruput.

But there is practically no way around the problem, only palliatives.
The problem is ultimately the use of magnetic recording with a
``spherical'' diffusion pattern, which in practice implies rotating
media, and the ingrained habits of mass storage designers that think
even optics have the same limitation.

  My fantasy imagines a mass storage device shaped like a pyramid, with
  a laser on top, and non rotating optical media on bottom, a bit like
  CRT storage systems of old...

[ ... ]

lynn> i took this as somewhat of a challenge to write an operating
lynn> system bullet proof i/o subsystem that would never crash &/or hang
lynn> the system. [ ... ]

Perhaps at your IBM office people tolerated you for trying to do your
job well, but usually that kind of attitude gets you labeled as a
non-team-player, a know-it-all busybody. :-)

In other countries, or perhaps on the opposite coast, in the offices of
another big monopoly, people would consider it as a challenge to write
an i/o subsystem that would crash and hang the system as often as
possible, and get away with it. :-)

One of my musings is peripherally related to this, and it is that
currently companies like MS (or IBM in its heyday) have an attitude
similar to that of GM in the 60s (or USA RAM manufacturers in the 70s),
that is selling product with known high defect rates, with a cool ``we
are the only game in town, sucker'' attitude :-).

Then Toyota and Hitachi started selling essentially defect free cars and
RAM chips, and the rest is history. I wonder when this will happen to
software, or firmware. Just like essentially defect free manufacturing
requires a Japanese style culture, I wonder which country has a culture
that enables the writing of quality software. Surely not the UK (or
the USA), perhaps Russia (dream on :->).

lynn> [ ... terrible IO performance problems arising suddendly ... ] To
lynn> compensate, they re-orged how some stuff was done inside the 3880
lynn> and would signal operation complete to the processor as soon as
lynn> data finished transfer ... and all sorts of internal disk
lynn> controller task completion proceeding in parallel after signaling
lynn> completion (as opposed to waiting until the 3880 had actually
lynn> completed everything before signaling completion). [ ... ]

Ahhh, that's a classic tale of architectural mismatch. A lot of !"£$%^&
don't really understand the impact of latency.

On a slightly related issue: with contemporary cheapo ATA disks thanks
to improved density one can get sequential transfer speeds well in
excess of 40MB/s (outer tracks at least), both for reading and
writing.

But if one disables write caching, one gets sequential write speeds of
less than 1/10th the speed with write caching enabled.

There are a few theories as to the reason(s) for this, and most involve
latency, with scenarios not dissimilar to those you described above.
Nothing new under the sun...
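  A minimal sketch of how one might feel that latency cost from user
  space (illustration only: the path and sizes below are made up, and
  forcing every write synchronous with O_SYNC is not the same as turning
  the drive's own cache off with hdparm -W0, but the penalty it exposes
  is of the same nature):

    # Rough sketch, Python on Linux.  /tmp/scratch.bin is a made-up path
    # on the disk under test; os.O_SYNC makes each write wait for the
    # device.  This does not touch the drive's write cache, it only shows
    # how paying per-request latency on every write collapses sequential
    # throughput.
    import os, time

    PATH = "/tmp/scratch.bin"
    BLOCK = b"\0" * (64 * 1024)     # 64 KiB per write
    COUNT = 1024                    # 64 MiB total

    def bench(extra_flags):
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags)
        start = time.time()
        for _ in range(COUNT):
            os.write(fd, BLOCK)
        os.fsync(fd)
        os.close(fd)
        return len(BLOCK) * COUNT / (time.time() - start) / 1e6   # MB/s

    print("buffered writes: %.1f MB/s" % bench(0))
    print("O_SYNC writes:   %.1f MB/s" % bench(os.O_SYNC))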

  BTW, on my machine as a result I have write caching enabled, and I
  hate myself for this :-). I have heard of ATA RAID disk manufacturers
  having write caching enabled by default for the same reason too.
  Their customers usually don't know. Eheheh. :-)
0
pg_nh
10/31/2004 1:08:15 PM
"Peter Grandi" <pg_nh@0409.exp.sabi.co.UK> wrote in message 
news:yf38y9navcw.fsf@base.gp.example.com...

snip


> But there is practically no way around the problem, only palliatives.
> The problem is utimately the use of magnetic recording with a
> ``spherical'' diffusion pattern, which uin practice implies rotating
> media, and the ingrained habits of mass storage designers that think
> even optics have the same limitation.

I don't think that is fair.  Storage designers have generally kept to the 
existing configurations not because of their "habits", but because the other 
things they have tried didn't work as well.  Over the history of 
"peripheral" storage (i.e. not main memory), there have been things like 
IBM's data cell and both IBM and CDC tried "libraries" of mini tape strips. 
"Solid state" disks and drums have been somewhat popular, depending on the 
economics and the addressing and OS limitations of earlier systems.  Despite 
the many reports of its demise, tape in libraries is still popular for 
certain uses.  I briefly spent a little time looking at a holographic system 
technology, but the drawbacks were so large that it was impractical to 
productize.

If you have some ideas, there are lots of people who would just love to hear 
about them, and if they are practical, I am pretty sure you could get the 
wherewithal to develop them.  There is a natural bias toward what people 
know works, but lots of people are trying to do better.

>  My fantasy imagines a mass storage device shaped like a pyramid, with
>  a laser on top, and non rotating optical media on bottom, a bit like
> CRT storage systems of old...

Well, the holographic system I looked at still had a rotating "carrier" for 
the crystals, as you needed more than one crystal to get reasonable 
capacity.  I would guess that your system would have a poor volumetric 
efficiency and would thus be costly, but perhaps I don't fully understand 
your proposal.

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
10/31/2004 4:07:32 PM
In article <yf38y9navcw.fsf@base.gp.example.com>,
Peter Grandi <pg_nh@0409.exp.sabi.co.UK> wrote:
>>>> On Tue, 19 Oct 2004 14:09:17 -0600, Anne & Lynn Wheeler
>>>> <lynn@garlic.com> said:
>
>lynn> re: http://www.garlic.com/~lynn/2004n.html#14 360 longevity
>
>
>One of my musings is peripherally realted to this, and it is that
>currently companies like MS (or IBM in its heyday) have an attitude
>similar to that of GM in the 60s (or USA RAM manufacturers in the 70s),
>that is selling product with known high defect rates, with a cool ``we
>are the only game in town, sucker'' attitude :-).
>
>Then Toyota and Hitachi started selling essentially defect free cars and
>RAM chips, and the rest is history. I wonder when this will happen to
>software, or firmware. Just like essentially defect free manufacturing
>requires a Japanese style culture, I wonder which country has a culture
>that enables the writing of quality software. Surely not the the UK (or
>the USA), perhaps Russia (dream on :->).

I have earlier pointed out that perfectly functioning software is 
produced in lots of US companies; they are just driven out of the
mainstream by a monopoly. 

There were good cars in the US in the 60's as well, but they were 
small hand-made outfits that went extinct one by one; either they
went under or they started making specialist vehicles that took them
out of the clutch of the monopolists. (well, oligopolists really, but that
sounds too political here) 

>lynn> [ ... terrible IO performance problems arising suddendly ... ] To
>lynn> compensate, they re-orged how some stuff was done inside the 3880
>lynn> and would signal operation complete to the processor as soon as
>lynn> data finished transfer ... and all sorts of internal disk
>lynn> controller task completion proceeding in parallel after signaling
>lynn> completion (as opposed to waiting until the 3880 had actually
>lynn> completed everything before signaling completion). [ ... ]
>
>Ahhh, that's a classic tale of architectural mismatch. A lot of !"£$%^&
>don't really understand the impact of latency.
>
>On a slightly related issue: with contemporary cheapo ATA disks thanks
>to improved density one can get sequential transfer speeds well in
>excess of 40MB/s (outer tracks at least), both for reading and
>writing.
>
>But one disables write caching one gets sequential write speeds of less
>than 1/10th the speed with write caching enabled.
>
>There are a few theories as to the reason(s) for this, and most involve
>latency, with scenarios not dissimilar to those you described above.
>Nothing new under the sun...
>
>  BTW, on my machine as a result I have write caching enabled, and I
>  hate myself for this :-). I have heard of ATA RAID disk manufacturers
>  having write caching enabled by default for the same reason too.
>  Their customers usually don't know. Eheheh. :-)

-- mrr
0
Morten
10/31/2004 6:30:06 PM
Morten Reistad wrote:

>  
> 
> I have earlier pointed out that perfectly functioning software is 
> produced in lots of US companies; they are just driven out of the
> mainstream by a monopoly. 
> 
> There were good cars in the US in the 60's as well, but they were 
> small hand-made outfits that went extinct one by one; either they
> went under or they started making specialist vehicles that took them
> out of the clutch of the monopolists. (well, oligopolists really, but that
> sounds too political here) 
> 
> 
>> 
> 
> -- mrr
> 

I am sorry I missed your previous postings
about perfect software and monopoly.  Can you
refer me to some of your postings or email me
copies.

We are in a constant battle with our
engineers about the "perfectability" of
software, and I would welcome any help.

Thanks

JKA

0
J
11/3/2004 8:08:36 PM
pg_nh@0409.exp.sabi.co.UK (Peter Grandi) writes:
> Perhaps at your IBM office people tolerated you for trying to do your
> job well, but usually that kind of attitude gets you labeled as a
> non-team-player, a know-it-all busybody. :-)
>
> In other countries, or perhaps on the opposite coast, in the offices of
> another big monopoly, people would consider it as a challenge to write
> an i/o subsystem that would crash and hang the system as often as
> possible, and get away with it. :-)

it was at the internal disk engineering and product test labs
(bldg 14 & 15 on the san jose plant site)
http://www.garlic.com/~lynn/subtopic.html#disk

they had been doing all their testing using stand-alone machine time
....  that had to be serialized/scheduled among all the testers and
different testcells. they had tried running concurrently under MVS but
the MTBF for the operating system was on the order of 15 minutes.

the objective was a bullet proof i/o subsystem so that all disk
engineering activities could go on simulataneously & concurrently
sharing the same machine.

of course it had other side effects ... since the machines under heavy
testcell load were at possibly 1 percent cpu utilization ... which meant
that we could siphon off a lot of extraneous & otherwise
unaccounted-for cpu. the engineering & product test labs tended to be
the 2nd to get the newest processors out of POK (typically something
like serial 003, still engineering models ... but the processor
engineers had the first two ... and then disk engineering and product
test got the next one). 

at one point, one of the projects that was needing lots of cpu and
having trouble getting it allocated from the normal computing center
machines was the air bearing simulation work ... for designing the
flying disk (3380) heads. dropped it on a brand new 3033 engineering
model in bldg. 15 ... and let it rip for all the time it needed.

a recent posting about dealing with a 3880 issue when it was first
deployed in bldg. 15 ... for a standard string of 16 3330 drives as
part of interactive use by the engineers ... fortunately it was still
six months prior to first customer ship ... so there was time to
improve some of the issues:
http://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware

there was a problem raised from a somewhat unexpected source. there
was an internal-only corporate report and the MVS RAS manager in POK
strenuously objected to the mention of the MVS 15 minute MTBF. There was
some rumor that the objection was so strenuous that it squelched any
possibility of any award for significantly contributing to the
productivity of the disk engineering and product test labs.


-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
11/5/2004 2:51:22 PM
oh, and one reason the disk engineering and product test labs
tolerated me ... was that i wasn't in the gpd organizations. i had a
day job in research (bldg 28) ... the stuff in bldg. 14&15 was just
for the fun of it. something similar for the hone complex further up
the peninsula or for stl/bldg-90 or for the vlsi group out in
lsg/bldg.29. i would just show up and fix problems and go away. one of
the harder problems was keeping track of all the security
authorizations for the different data centers. I didn't exist in any
of those organizations.

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
11/5/2004 4:54:43 PM
Tom Van Vleck <thvv@multicians.org> wrote:

>Nick Maclaren wrote:
>> VM/CMS was written precisely because MVS/TSO was so ghastly - but the
>> Wheelers know a thousand times more about that than I do.  There were
>> several MVS sub-systems that were designed for interactive use, most
>> of which came out of academia, such as MTS (Michigan), GUTS (Gothenburg)
>> and Phoenix (Cambridge).  The last was the one most designed for remote
>> use as, by the time we got an IBM, Cambridge was ALREADY a remote access
>> site.

[snip]

>I believe that MTS was a standalone operating system, not
>an MVS subsystem.  See
>  http://www.itd.umich.edu/~doc/Digest/0596/feat02.html

     I remember docs that the standard MTS system ran under UMMPS
(University of Michigan Multi-Program Supervisor).  I also heard it
could run under JESS/2 (not sure of spelling).  I never heard of it
running under MVS.

[snip]

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
     I have preferences.
     You have biases.
     He/She has prejudices.
0
Gene
11/6/2004 7:01:31 AM
Gene Wirchenko <genew@mail.ocis.net> writes:
>      I remember docs that the standard MTS system ran under UMMPS
> (University of Michigan Multi-Program Supervisor).  I also heard it
> could run under JESS/2 (not sure of spelling).  I never heard of it
> running under MVS.

the folklore is that michigan adopted LLMPS (lincoln labs multi
programming supervisor) for UMMPS. 

random past posts mentioning llmps ... (i have hardcopy of the old
share contribution library document for llmps):
http://www.garlic.com/~lynn/93.html#15 unit record & other controllers
http://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
http://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
http://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
http://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
http://www.garlic.com/~lynn/2000.html#89 Ux's good points.
http://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2001e.html#13 High Level Language Systems was Re: computer books/authors (Re: FA:
http://www.garlic.com/~lynn/2001h.html#24 "Hollerith" card code to EBCDIC conversion
http://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
http://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
http://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
http://www.garlic.com/~lynn/2001k.html#27 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001l.html#5 mainframe question
http://www.garlic.com/~lynn/2001l.html#9 mainframe question
http://www.garlic.com/~lynn/2001m.html#55 TSS/360
http://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
http://www.garlic.com/~lynn/2001n.html#89 TSS/360
http://www.garlic.com/~lynn/2002.html#14 index searching
http://www.garlic.com/~lynn/2002b.html#6 Microcode?
http://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
http://www.garlic.com/~lynn/2002d.html#49 Hardest Mistake in Comp Arch to Fix
http://www.garlic.com/~lynn/2002e.html#47 Multics_Security
http://www.garlic.com/~lynn/2002f.html#47 How Long have you worked with MF's ? (poll)
http://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
http://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
http://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002m.html#28 simple architecture machine instruction set
http://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
http://www.garlic.com/~lynn/2002n.html#64 PLX
http://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
http://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
http://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003f.html#41 SLAC 370 Pascal compiler found
http://www.garlic.com/~lynn/2003i.html#8 A Dark Day
http://www.garlic.com/~lynn/2003m.html#32 SR 15,15 was: IEFBR14 Problems
http://www.garlic.com/~lynn/2004b.html#31 determining memory size
http://www.garlic.com/~lynn/2004d.html#31 someone looking to donate IBM magazines and stuff
http://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004l.html#16 Xah Lee's Unixism


-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
11/6/2004 3:25:15 PM
>>> On Sun, 31 Oct 2004 16:07:32 GMT, "Stephen Fuld"
>>> <s.fuld@PleaseRemove.att.net> said:

[ ... ]

>> But there is practically no way around the problem, only palliatives.
>> The problem is utimately the use of magnetic recording with a
>> ``spherical'' diffusion pattern, which uin practice implies rotating
>> media, and the ingrained habits of mass storage designers that think
>> even optics have the same limitation.

s.fuld> I don't think that is fair.  Storage designers have generally
s.fuld> kept to the existing configurations not because of their
s.fuld> "habits", but because the other things they have tried didn't
s.fuld> work as well.

Uhm, my impression is different -- consider for an extreme example the
spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)

s.fuld> Over the history of "peripheral" storage (i.e. not main memory),
s.fuld> there have been things like IBM's data cell and both IBM and CDC
s.fuld> tried "libraries" of mini tape strips.

But they are still magnetic, and thought of like magnetics, where
proximity to the medium is essential. With optics you can ``just''
focus your beam.

s.fuld> "Solid state" disks and drums have been somewhat popular,
s.fuld> depending on the economics and the addressing and OS limitations
s.fuld> of earlier systems.

This is still by analogy, but with another technology, based more on
electrical rather than magnetic charges.

[ ... ]

s.fuld> [ ... ] I briefly spent a little time looking at a holographic
s.fuld> system technology, but the drawbacks were so large that it was
s.fuld> impractical to productize. [ ... ]

As to holo systems, like those prototyped by MCC and then Tamarack and
then many others, I have two theories:

* The medium is the main problem, and rewritability like for most media
  (including apparently ferroelectric and magnetic RAM) is still bad.

* Actually they work quite well, but they are of such enormous advantage
  in SIGINT that they sold only to government agencies and are classified.

Note that I don't particularly like holo systems, they sound too
complicated to me.

>> My fantasy imagines a mass storage device shaped like a pyramid, with
>> a laser on top, and non rotating optical media on bottom, a bit like
>> CRT storage systems of old...

s.fuld> Well, the holographic system I looked at still had a rotating
s.fuld> "carrier" for the crystals, as you needed more than one crystal
s.fuld> to get reasonable capacity.

But you can deflect the beam over an entire crystal array without moving
anything. The rules are different with optics. That's the point.

s.fuld> I would guess that your system would have a poor volumetric
s.fuld> eficiency and would thus be costly, but perhaps I don't fully
s.fuld> understand your proposal.

Well, I actually have several (all handwaving :->), but they are all
based on the idea that optics is really different from magnetics, in
that it is damn easy (well, optimistically) to deflect and focus a beam
of light, but not a magnetic field, and that there are hundreds of years
of experience in building high precision, cheap, entirely solid state
optical systems (e.g. telescopes, cameras :->) with amazing information
densities.

Disc systems have two tragic problems: the active element moves (the
head), and the medium itself moves too (the disc). The reason is that
the active element needs to be as near as possible to the spot of the
medium it is recording to, because the field is ``spherical''.

Now consider a CD-ROM as the worst possible example: even assuming that
one wants to rotate the medium, why put the sensing element on an arm?
Unless one is constrained by the mental habits of vinyl discs and
magnetic discs?

I would like to see a storage system designed around optics on different
analogies: a camera, a photo enlarger, a laser printer, or even
something totally new (much more difficult) like the mirror arrays in
recent OHPs.

I just wish that some research institute or company unleashed some
impoverished optical engineers and physicists onto building storage
systems. But career structures at storage companies are probably based
on rewarding those that push magnetics furthest...
0
pg_nh
11/11/2004 8:12:26 PM
On Thu, 11 Nov 2004 21:12:26 +0000, Peter Grandi wrote:
> I would to see a storage system designed around optics on different
> analogies: a camera, a photo enlarger, a laser printer, or even
> something totally new (much more difficult) like the mirror arrays in
> recent OHPs.

Doesn't the optical i/o element and/or the medium physically move in most
inexpensive examples of all of these?  That's just a natural
optimisation based on the absence of a need for random access, I guess.

Even when direct access would be desirable (laser printer, for example),
the design choice is to use a bunch of DRAM as a serialization buffer,
rather than build a 2D focussing system and a flat-bed ion transfer
mechanism.

-- 
Andrew

0
Andrew
11/11/2004 8:47:51 PM
"Peter Grandi" <pg_nh@0409.exp.sabi.co.UK> wrote in message 
news:yf3ekj0ywk5.fsf@base.gp.example.com...
>>>> On Sun, 31 Oct 2004 16:07:32 GMT, "Stephen Fuld"
>>>> <s.fuld@PleaseRemove.att.net> said:
>
> [ ... ]
>
>>> But there is practically no way around the problem, only palliatives.
>>> The problem is utimately the use of magnetic recording with a
>>> ``spherical'' diffusion pattern, which uin practice implies rotating
>>> media, and the ingrained habits of mass storage designers that think
>>> even optics have the same limitation.
>
> s.fuld> I don't think that is fair.  Storage designers have generally
> s.fuld> kept to the existing configurations not because of their
> s.fuld> "habits", but because the other things they have tried didn't
> s.fuld> work as well.
>
> Uhm, my impression is different -- consider for an extreme example the
> spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)

Sure, but do you really think the idea of spiral versus concentric tracks is 
that big a deal?  There are small advantages for each, but I hardly consider 
spiral tracks "revolutionary".

> s.fuld> Over the history of "peripheral" storage (i.e. not main memory),
> s.fuld> there have been things like IBM's data cell and both IBM and CDC
> s.fuld> tried "libraries" of mini tape strips.
>
> But they are still magnetic and though like magnetic, where proximity to
> the medium is essential. With optics you can ``just'' focus your beam.

People have been doing, or at least attempting to do, optical storage for a 
long time.  In the 1970s there was a company that tried to do an optical 
thing sort of like the IBM Data Cell.  A laser ablated locations on a strip 
of metal coated plastic that was taken from a stack and wrapped around a 
drum.  Then there are all of the optical disks of various diameters, 
starting with at least 12 inches.

As for "just" focusing the beam, there is a reason that CDs and DVDs have a 
moving arm.  It is that you can't just focus the beam to the precision you 
need across the distances you need within the form factor that is 
reasonable.  Even these devices have moving mirrors, not "just" solid state 
devices.

> s.fuld> "Solid state" disks and drums have been somewhat popular,
> s.fuld> depending on the economics and the addressing and OS limitations
> s.fuld> of earlier systems.
>
> This is still by analogy, but with anothe technology, based more on
> electrical rather than magnetic charges.

OK, how about bubble memories?

> [ ... ]
>
> s.fuld> [ ... ] I briefly spent a little time looking at a holographic
> s.fuld> system technology, but the drawbacks were so large that it was
> s.fuld> impractical to productize. [ ... ]
>
> As to holo systems, like those prototyped by MCC and them Tamarack and
> then many others, I have two theories:
>
> * The medium is the main problem, and rewritability like for most media
>  (including apparently ferroelectric and magnetic RAM) is still bad.
>
> * Actually they work quite well, but they are of such enormous advantage
>  in SIGINT that they sold only to government agencies and are classified.

At least for the system I looked at, there were others.  The read process 
got you a huge amount of data (several megabytes) in one laser flash, but 
then you had to get that data into main memory somehow to use it and that 
limited the bandwidth.  Also, the latency, to get to the next burst was 
pretty long, 100s of ms, so it really wasn't a good disk replacement (what 
the project people wanted to do).  I proposed using it as a multi-media 
distribution thing (think movies on demand), but they didn't seem interested. 
And IIRC the energy required to write to the device was so large that the 
heat generated limited you to doing only one write every few seconds or the 
system would melt!
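  (To put rough numbers on that, with both figures below being my own
  guesses at ``several megabytes'' and ``100s of ms'' rather than
  measured values:

    per_flash_MB = 4.0     # assumed size of one laser flash
    next_burst_s = 0.3     # assumed time to the next burst
    print("streaming ceiling ~ %.0f MB/s" % (per_flash_MB / next_burst_s))
    print("random access ~ %.0f ms, vs roughly 10 ms for a disk seek"
          % (next_burst_s * 1e3))

  so even the best case is modest streaming bandwidth, while random
  access is an order of magnitude or two worse than a disk of that era.)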

> Note that I don't particularly like holo systems, they sound too
> complicated to me.
>
>>> My fantasy imagines a mass storage device shaped like a pyramid, with
>>> a laser on top, and non rotating optical media on bottom, a bit like
>>> CRT storage systems of old...
>
> s.fuld> Well, the holographic system I looked at still had a rotating
> s.fuld> "carrier" for the crystals, as you needed more than one crystal
> s.fuld> to get reasonable capacity.
>
> But you can deflect the beam over an entire crystal array without moving
> anything. The rules are different with optics. That's the point.

But you can't do that unless your "pyramid" is pretty tall (i.e. think of a 
CRT) and that loses you the volumetric efficiency that I was talking about.

> s.fuld> I would guess that your system would have a poor volumetric
> s.fuld> eficiency and would thus be costly, but perhaps I don't fully
> s.fuld> understand your proposal.
>
> Well, I actually have several (all handwaving :->), but they are all
> based on the idea that optics is really different from magnetics, in
> that is is damn easy (well, optimistically) to deflect and focus a beam
> of light, but not a magnetic field, and that there are hundreds of years
> of experience in building high precision, cheap, entirely solid state
> optical systems (e.g. telescopes, cameras :->) with amazing information
> densities.

But telescopes are essentially amplifiers.  They don't "address" any data in 
solid state.  The telescope has to be manually moved to see a different 
image.  Even cameras have to be moved or at least move the lens to focus on 
different depths.

> Disc systems have two tragic problems: the active element moves (the
> head), and the medium itself moves too (the disc). The reason is that
> the active element needs to be as near as possible to the spot of the
> medium it is recording to, because the field is ``spherical''.

And because it allows great space efficiency.  Bits per cubic inch is a 
relevant measure for many shops.

> Now consider a CD-ROM as the worst possible example: even assuming that
> one wants to rotate the medium, why put the sensing element on an arm?
> Unless one is constrained by the mental habits of vinyl discs and
> magnetic discs?

As I explained above, because it was the best engineering solution.  You 
need to do better than just hand waving to have anyone believe you that all 
of those people are just too dumb to do other than what they did.

> I would to see a storage system designed around optics on different
> analogies: a camera, a photo enlarger,

Problems discussed above.

> a laser printer,

Where the medium (the paper) moves.

> or even
> something totally new (much more difficult) like the mirror arrays in
> recent OHPs.

If you are talking about DLP technology, the mirrors move.

> I just wish that some research institute or company unleashed some
> impoverished optical engineers and physicists onto building storage
> systems. But career structures at storage companies are probably based
> on rewarding those that push magnetics furthest...

Or perhaps that such places do exist and the ideas you have proposed have 
been rejected for good and sufficient reasons.  Several companies in the 
disk business are or were also in the optical business.  There isn't some 
vast conspiracy to prevent some excellent technology from getting out.  In 
fact, the beauty of the free enterprise system is that it is just the 
opposite.

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
11/11/2004 8:57:35 PM
pg_nh@0409.exp.sabi.co.UK (Peter Grandi) writes:
> 
> s.fuld> I don't think that is fair.  Storage designers have generally
> s.fuld> kept to the existing configurations not because of their
> s.fuld> "habits", but because the other things they have tried didn't
> s.fuld> work as well.
> 
> Uhm, my impression is different -- consider for an extreme example the
> spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)

I was pretty stunned to find out that there was a good reason for that
at the time.  If they used a layout like magnetic disks, they would
have had to provide enough buffering to prevent skips when moving from
track to track.  Memory was expensive back then.  So... just use a
spiral track, and a feedback mechanism to keep track of whether the
track is moving out from under you and move the head to compensate.
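  (A toy model of that feedback idea, with invented gain and drift
  numbers, just to show how little machinery the servo needs:

    # Toy proportional servo: the spiral carries the track outward a bit
    # each revolution and the pickup is nudged to follow it.  All numbers
    # are made up for illustration.
    track = 0.0     # radial position of the groove
    head  = 0.0     # radial position of the pickup
    GAIN  = 0.5     # fraction of the sensed error corrected per revolution
    DRIFT = 0.02    # outward movement of the spiral per revolution

    for rev in range(10):
        track += DRIFT              # the spiral drifts under the pickup
        error = track - head        # tracking error sensed optically
        head += GAIN * error        # servo correction
        print("rev %2d  residual error %+.4f" % (rev, track - head))

  The residual error settles to a small constant offset, which is all a
  cheap player needs; no track-to-track buffering required.)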

Hmmm...  I'll bet somebody, somewhere, has done a laser-based vinyl
record pickup by now.  Anybody ever hear of such a thing?

> Now consider a CD-ROM as the worst possible example: even assuming that
> one wants to rotate the medium, why put the sensing element on an arm?
> Unless one is constrained by the mental habits of vinyl discs and
> magnetic discs?

Because of packaging constraints.

-- 
Joseph J. Pfeiffer, Jr., Ph.D.       Phone -- (505) 646-1605
Department of Computer Science       FAX   -- (505) 646-1002
New Mexico State University          http://www.cs.nmsu.edu/~pfeiffer
0
Joe
11/11/2004 10:30:01 PM
Joe Pfeiffer wrote:
> Hmmm...  I'll be somebody, somewhere, has done a laser-based vinyl
> record pickup by now.  Anybody ever hear of such a thing?

Sure they have. Main problem: Tiny dust motes that a regular pickup 
simply pushes away, but a laser detects as scratches.

People have even made a scanner-based vinyl reader sw: It results in 
recognizable melodies, but pretty bad sound. :-)

Terje
-- 
- <Terje.Mathisen@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
0
Terje
11/12/2004 6:59:07 AM
In article <yf3ekj0ywk5.fsf@base.gp.example.com>,
   pg_nh@0409.exp.sabi.co.UK (Peter Grandi) wrote:
<snip>

>Now consider a CD-ROM as the worst possible example: even assuming that
>one wants to rotate the medium, why put the sensing element on an arm?

We did that so the storage unit could be removed from the system.

<snip>

/BAH

Subtract a hundred and four for e-mail.
0
jmfbahciv
11/12/2004 1:25:59 PM
In article <3nQkd.8218$7i4.89@bgtnsc05-news.ops.worldnet.att.net>,
   "Stephen Fuld" <s.fuld@PleaseRemove.att.net> wrote:
<snip>

>Or perhaps that such places do exist and the ideas you have proposed have 
>been rejected for good and sufficient reasons.  Several companies in the 
>disk business are or were also in the optical business.  There isn't some 
>vast conspiracy to prevent some excellent technology from getting out.  In 
>fact, the beauty of the free enterprise system is that it is just the 
>opposite.

Another thing is that nobody knew how to convert light to 
electricity and vice versa until really recently.
              ^ please note the use of and.

/BAH

0
jmfbahciv
11/12/2004 1:30:37 PM
In article <1boei49fyu.fsf@cs.nmsu.edu>,
   Joe Pfeiffer <pfeiffer@cs.nmsu.edu> wrote:
>pg_nh@0409.exp.sabi.co.UK (Peter Grandi) writes:
>> 
>> s.fuld> I don't think that is fair.  Storage designers have generally
>> s.fuld> kept to the existing configurations not because of their
>> s.fuld> "habits", but because the other things they have tried didn't
>> s.fuld> work as well.
>> 
>> Uhm, my impression is different -- consider for an extreme example the
>> spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)
>
>I was pretty stunned to find out that there was a good reason for that
>at the time.  If they used a layout like magnetic disks, they would
>have had to provide enough buffering to prevent skips when moving from
>track to track.  Memory was expensive back then.  So... just use a
>spiral track, and a feedback mechanism to keep track of whether the
>track is moving out from under you and move the head to compensate.
>
>Hmmm...  I'll be somebody, somewhere, has done a laser-based vinyl
>record pickup by now.  Anybody ever hear of such a thing?
>
>> Now consider a CD-ROM as the worst possible example: even assuming that
>> one wants to rotate the medium, why put the sensing element on an arm?
>> Unless one is constrained by the mental habits of vinyl discs and
>> magnetic discs?
>
>Because of packaging constraints.

Packaging?  I would have guessed mechanical.  The devices that
weren't removable didn't have moving parts.

/BAH
 

Subtract a hundred and four for e-mail.
0
jmfbahciv
11/12/2004 1:33:30 PM
<jmfbahciv@aol.com> wrote in message news:9OKdnYqdKIqiXgncRVn-3A@rcn.net...
> In article <1boei49fyu.fsf@cs.nmsu.edu>,
>    Joe Pfeiffer <pfeiffer@cs.nmsu.edu> wrote:
> >pg_nh@0409.exp.sabi.co.UK (Peter Grandi) writes:
> >>
> >> s.fuld> I don't think that is fair.  Storage designers have generally
> >> s.fuld> kept to the existing configurations not because of their
> >> s.fuld> "habits", but because the other things they have tried didn't
> >> s.fuld> work as well.
> >>
> >> Uhm, my impression is different -- consider for an extreme example the
> >> spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)
> >
> >I was pretty stunned to find out that there was a good reason for that
> >at the time.  If they used a layout like magnetic disks, they would
> >have had to provide enough buffering to prevent skips when moving from
> >track to track.  Memory was expensive back then.  So... just use a
> >spiral track, and a feedback mechanism to keep track of whether the
> >track is moving out from under you and move the head to compensate.
> >
> >Hmmm...  I'll be somebody, somewhere, has done a laser-based vinyl
> >record pickup by now.  Anybody ever hear of such a thing?
> >
> >> Now consider a CD-ROM as the worst possible example: even assuming that
> >> one wants to rotate the medium, why put the sensing element on an arm?
> >> Unless one is constrained by the mental habits of vinyl discs and
> >> magnetic discs?
> >
> >Because of packaging constraints.
>
> Packaging?  I would have guessed mechanical.  The devices that
> weren't removable didn't have moving parts.
>
You have 600 Million bits spread over several tens of square inches of area.
You have to hit each of them with a laser beam focused down in order to read
it.  How would you propose to do that without moving parts?  Expensive
optical scanners?  This was a consumer product, designed to be inexpensive.
And a fine job of engineering it was.
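  (Rough arithmetic behind that point, using assumed round numbers
  rather than CD-spec values:

    import math
    bits = 600e6        # figure quoted above
    area_cm2 = 90.0     # assumed usable recording area of a 12 cm disc
    pitch_um = math.sqrt(area_cm2 * 1e8 / bits)   # cm^2 -> um^2
    print("spot pitch ~ %.1f micrometres" % pitch_um)

  A pitch of a few micrometres over a field roughly 12 cm across is about
  one part in tens of thousands, which suggests why a cheap moving pickup
  plus a servo beat any fixed long-throw optical addressing scheme on
  cost.)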

del cecchi


0
Del
11/12/2004 2:29:00 PM
"Del  Cecchi" <cecchinospam@us.ibm.com> writes:
> You have 600 Million bits spread over several tens of square inches
> of area.  You have to hit each of them with a laser beam focused
> down in order to read it.  How would you propose to do that without
> moving parts?  Expensive optical scanners?  This was a consumer
> product, designed to be inexpensive.  And a fine job of engineering
> it was.

in the mid-80s i was railing about the high-cost of computer specific
gear ... especially telecom ... was also doing some stuff with a
company (that at the time was called cyclotonics) on trying to apply
some reed-solomon ecc to some more conventional computer telecom. part
of the issues was the quantities weren't particularly large and so a
lot of the costs were heavily front-end loaded.

after some business trip to japan ... i came back with a statement
that i could get better technology out of a $300 cdrom player than
some $20k (maybe only a little exaggeration) fiber-optic computer
telecom gear ... and i would have a couple fine servo-motors left
over.

random past mentions of cyclotonics
http://www.garlic.com/~lynn/2001.html#1 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
http://www.garlic.com/~lynn/2002p.html#53 Free Desktop Cyber emulation on PC before Christmas
http://www.garlic.com/~lynn/2003e.html#27 shirts
http://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
Anne
11/12/2004 2:37:33 PM
In comp.arch Joe Pfeiffer <pfeiffer@cs.nmsu.edu> wrote:
> pg_nh@0409.exp.sabi.co.UK (Peter Grandi) writes:
> > 
> > s.fuld> I don't think that is fair.  Storage designers have generally
> > s.fuld> kept to the existing configurations not because of their
> > s.fuld> "habits", but because the other things they have tried didn't
> > s.fuld> work as well.
> > 
> > Uhm, my impression is different -- consider for an extreme example the
> > spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)
> 
> I was pretty stunned to find out that there was a good reason for that
> at the time.  If they used a layout like magnetic disks, they would
> have had to provide enough buffering to prevent skips when moving from
> track to track.  Memory was expensive back then.  So... just use a
> spiral track, and a feedback mechanism to keep track of whether the
> track is moving out from under you and move the head to compensate.
> 
> Hmmm...  I'll be somebody, somewhere, has done a laser-based vinyl
> record pickup by now.  Anybody ever hear of such a thing?

Oh, there are many such and they have been around for at least a decade now.
The thing is that a non-laser based vinyl disk reader slowly wears out
the tracks due to contact between the head and the tracks.

-- 
	Sander

+++ Out of cheese error +++
0
Sander
11/12/2004 4:26:56 PM
Terje Mathisen <terje.mathisen@hda.hydro.com> writes:

> Joe Pfeiffer wrote:
> > Hmmm...  I'll be somebody, somewhere, has done a laser-based vinyl
> > record pickup by now.  Anybody ever hear of such a thing?
> 
> Sure they have. Main problem: Tiny dust motes that a regular pickup
> simply pushes away, but a laser detects as scratches.

That seems like something that should be filterable...
-- 
Joseph J. Pfeiffer, Jr., Ph.D.       Phone -- (505) 646-1605
Department of Computer Science       FAX   -- (505) 646-1002
New Mexico State University          http://www.cs.nmsu.edu/~pfeiffer
0
Joe
11/12/2004 5:13:28 PM
jmfbahciv@aol.com writes:
> >
> >> Now consider a CD-ROM as the worst possible example: even assuming that
> >> one wants to rotate the medium, why put the sensing element on an arm?
> >> Unless one is constrained by the mental habits of vinyl discs and
> >> magnetic discs?
> >
> >Because of packaging constraints.
> 
> Packaging?  I would have guessed mechanical.  The devices that
> weren't removable didn't have moving parts.

I'll guess we mean the same thing -- having the laser in a fixed
location would imply some sort of lens or mirror (or both) arrangement
to get it to the track.  Doing that in the space available for a CD
would be...  interesting.  And a whole lot harder than just moving the laser.
-- 
Joseph J. Pfeiffer, Jr., Ph.D.       Phone -- (505) 646-1605
Department of Computer Science       FAX   -- (505) 646-1002
New Mexico State University          http://www.cs.nmsu.edu/~pfeiffer
0
Joe
11/12/2004 5:16:57 PM
On 11 Nov 2004 15:30:01 -0700, Joe Pfeiffer <pfeiffer@cs.nmsu.edu> wrote:
>
>Hmmm...  I'll be somebody, somewhere, has done a laser-based vinyl
>record pickup by now.  Anybody ever hear of such a thing?
>

Yep, there have been a few sold commercially, but AFAIK they disappeared
from the market years ago.  There was one organization that was using a 
laser system to recover music from wax cylinders without wearing them out.

-- 
Cheers,
Stan Barr     stanb .at. dial .dot. pipex .dot. com
(Remove any digits from the addresses when mailing me.)

The future was never like this!
0
stanb45
11/12/2004 6:35:02 PM
Anne & Lynn Wheeler <lynn@garlic.com> wrote in message news:<uactngmky.fsf@mail.comcast.net>...
> after some business trip to japan ... i came back with a statement
> that i could get better technology out of a $300 cdrom player than
> some $20k (maybe only a little exaggeration) fiber-optic computer
> telecom gear ... and i would have a couple fine servo-motors left
> over.

recent, slightly related post regarding one of those business trips to
japan in the mid-80s time-frame
http://www.garlic.com/~lynn/2004g.html#12 network history

and yes, the communication products division referenced in
the above post is the same one referenced in this
post from thurs:
http://www.garlic.com/~lynn/2004o.html#41 osi bits

another contrast ... the initial mainframe tcp/ip product got about
43kbytes/sec while consuming nearly a full (100 percent) 3090
engine. i added rfc1044 support to the base product and in tuning
tests at cray research between a cray and a 4341-clone was getting
1mbyte/sec sustained (nearly 25 times more thruput, hardware limit
with the 4341-attachment box) using very modest amount of 4341 engine
(and they actually shipped the code not too long after the rfc was
published) random past 1044 posts
http://www.garlic.com/~lynn/subnetwork.html#1044
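  (checking the arithmetic on the two figures above, taking 1mbyte/sec
  as 1024 kbytes/sec:

    base_kbytes  = 43.0      # base product, ~100% of a 3090 engine
    tuned_kbytes = 1024.0    # with rfc1044 support, modest 4341 load
    print("thruput ratio ~ %.0fx" % (tuned_kbytes / base_kbytes))  # ~24x

  and since the tuned case used only a modest share of a much smaller
  engine, the improvement per cpu cycle is larger still)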

and from my rfc index
http://www.garlic.com/~lynn/rfcietff.htm

the rfc1044 summary entry
http://www.garlic.com/~lynn/rfcidx3.htm#1044

as always, clicking on the ".txt=nnn" field retrieves the actual
rfc
0
lynn
11/13/2004 12:05:48 AM
In article <yf38y9navcw.fsf@base.gp.example.com>, Peter Grandi
<pg_nh@0409.exp.sabi.co.UK> writes
>Then Toyota and Hitachi started selling essentially defect free cars and
>RAM chips, and the rest is history. I wonder when this will happen to
>software, or firmware. Just like essentially defect free manufacturing
>requires a Japanese style culture, I wonder which country has a culture
>that enables the writing of quality software. Surely not the the UK (or
>the USA), perhaps Russia (dream on :->).
>
It's being tried - Ireland seems to have managed to hire Parnas: see
http://www.irishscientist.ie/2003/contents.asp?contentxml=03p139a.xml&contentxsl=is03pages.xsl
(I searched for SQRL and Parnas). If he does strike gold there, a couple
of much bigger countries are going to look really silly. The November
2004 issue of CACM carries an article by Glass suggesting that work from
another software quality institute, this time in Australia, might be
worth a look - The institute is http://www.sqi.gu.edu.au/indexFrameset.html,
and the work is Genetic Software engineering:
http://www.sqi.gu.edu.au/gse/indexFrameset.html

Some of the big Indian firms have big CMMI programs and their own
software institutes.

Personally, my prejudices lead me to bet on the Australians for culture.
To me, both Southern Ireland and India have cultures that applaud
talking your way out of difficult situations, which is hard when the
other participant is a computer.
-- 
A.G.McDowell
0
A
11/13/2004 6:23:59 AM
[ ... ]

>> Uhm, my impression is different -- consider for an extreme example the
>> spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)

s.fuld> Sure, but do you really think the idea of spiral versus
s.fuld> concentric tracks is that big a deal? [ ... ]

Well, spiral tracks make random positioning a lot harder. In another
post someone mentions that spiral tracks allow cheaper readers, but his
explanation to me reads ``a single spiraling groove like in vinyl is
cheaper to read'', which sort of reinforces my impression.

[ ... ]

>> With optics you can ``just'' focus your beam.

s.fuld> People have been doing, or at least attempting to do, optical
s.fuld> storage for a long time.  In the 1970s there was a company that
s.fuld> tried to do an optical thing sort of like the IBM Data Cell.  A
s.fuld> laser ablated locations on a strip of metal coated plastic that
s.fuld> was taken from a stack and wrapped around a drum.

That was an amazingly clumsy idea, designed to imitate magnetic strip
bulk storage. Similarly there have been various ``digital paper''
schemes, usually designed to imitate magnetic tape reels.

It seems really hard to shake long standing mental habits :-).

s.fuld> Then there are all of the optical disks of various diameters,
s.fuld> starting with at least 12 inches.

And they are still here: WORM storage is now extremely popular, and
practically every computer system has got WORM drives (aka CD/DVD-R),
and even MO/PC optical disks are very popular (AKA CD/DVD-RW and
DVD-RAM).

They are still just imitations of the magnetic disc idea in shape, even
if the medium is optical.

s.fuld> As for "just" focusing the beam, there is a reason that CDs and
s.fuld> DVDs have a moving arm.  It is that you can't just focus the
s.fuld> beam to the precision you need across the distances you need
s.fuld> within the form factor that is reasonable.

Really? And mechanically positioning an arm gives tighter tolerances?
And things like fiber optics don't exist? :-)

I think that the peculiar absurdity of CD-ROM pickup is that as a rule
the _laser_ is on the tip of the arm, and the optics too, as if the
``field emitter'' needed to be near the medium, as if it was emitting a
magnetic field.

Even assuming that one really had to do a moving arm following a
``groove'' on a CD-ROM, why not put just a mirror and/or a fiber optic
at the tip of that arm and make it much lighter/faster? :-)

My impression is simply that CD-ROM was designed to imitate the
design of a vinyl player as closely as possible, because that worked,
was well understood, and damn everything else.

s.fuld> Even these devices have moving mirrors, not "just" solid state
s.fuld> devices.

Yes, fine. A lot better than moving both the medium and the sensing
element. One can well do rotating media or even a movable pickup, but
there is no point in making the ``field emitter'' movable too, because
photons can be focused/waveguided.

And in the extreme one can do storage with a long fiber optic cable or a
laser beam reflected off the moon :-). Here I am joking, because that is
doing storage in the shape not even of a magnetic disc or a tape, but of
a mercury delay line...

[ ... ]

s.fuld> [ ... ] I briefly spent a little time looking at a holographic
s.fuld> system technology, but the drawbacks were so large that it was
s.fuld> impractical to productize. [ ... ]

[ ... theories on holostore non-appearance ... ]

>> * Actually they work quite well, but they are of such enormous
>> advantage in SIGINT that they sold only to government agencies and
>> are classified.

s.fuld> At least for the system I looked at, there were others.  The
s.fuld> read process got you a huge amount of data (several megabytes)
s.fuld> in one laser flash, but then you had to get that data into main
s.fuld> memory somehow to use it and that limited the bandwidth.

Yes, a problem. Solvable at a cost. Not necessarily a good idea for
PC-based mass storage.

s.fuld> Also, the latency, to get to the next burst was pretty long,
s.fuld> 100s of ms, so it really wasn't a good disk replacement (what
s.fuld> the project people wanted to do).

I have read rather different latencies for other devices...

s.fuld> I proposed using it as a multi-media distribution thing (think
s.fuld> movies on demand), but they didn't seem interested.

Uh, that's not a bad idea. Also, they seem good for storing gigantic
dictionaries of ``random data'' :-).

s.fuld> And IIRC the energy required to write to the device was so large
s.fuld> that the heat generated limited you to doing only one write
s.fuld> every few seconds or the system would melt!

Uhm, I have seen photos of holostores and the specs of their lasers and
they seemed pretty mild. I can imagine that perhaps different media
require different writing powers.

>>> My fantasy imagines a mass storage device shaped like a pyramid, with
>>> a laser on top, and non rotating optical media on bottom, a bit like
>>> CRT storage systems of old...

s.fuld> Well, the holographic system I looked at still had a rotating
s.fuld> "carrier" for the crystals, as you needed more than one crystal
s.fuld> to get reasonable capacity.

>> But you can deflect the beam over an entire crystal array without
>> moving anything. The rules are different with optics. That's the
>> point.

s.fuld> But you can't do that unless your "pyramid" is pretty tall
s.fuld> (i.e. think of a CRT) and that loses you the volumetric
s.fuld> efficiency that I was talking about.

Ah, but consider first the holostore cube carrier: it can be just a
sphere or a cylinder, and the beam can be deflected over the whole
cylinder or sphere without moving the latter. Deflection of that sort
can be done in a pretty tight space too. Not easy, but again high spec
optical systems have been around for a long time.

As to the pyramid, yes, that would be for non-PC style applications. I
am not really talking (yet) about PC-style mass storage. Rack size, as
long as the capacity and access times are good, is fine for many
applications.

s.fuld> I would guess that your system would have a poor volumetric
s.fuld> eficiency and would thus be costly, but perhaps I don't fully
s.fuld> understand your proposal.

>> Well, I actually have several (all handwaving :->), but they are all
>> based on the idea that optics is really different from magnetics, in
>> that is is damn easy (well, optimistically) to deflect and focus a
>> beam of light, but not a magnetic field, and that there are hundreds
>> of years of experience in building high precision, cheap, entirely
>> solid state optical systems (e.g. telescopes, cameras :->) with
>> amazing information densities.

s.fuld> But telescopes are essentially amplifiers.  They don't "address"
s.fuld> any data ia solid state. The telescope has to be manually moved
s.fuld> to see a different image. Even cameras have to be moved or at
s.fuld> least move the lense to focus on different depths.

I was giving these as examples of extremely high specification optical
devices, not storage systems -- simply to make the point that truly
optical storage can draw upon centuries of experience by optical
engineers to build extremely precise, big or small, optical systems that
look nowhere like tape or disc drives, and that guide light over
potentially large distances or tight corners to amazingly tight
tolerances.

>> Disc systems have two tragic problems: the active element moves (the
>> head), and the medium itself moves too (the disc). The reason is that
>> the active element needs to be as near as possible to the spot of the
>> medium it is recording to, because the field is ``spherical''.

s.fuld> And because it allows great space efficiency.  Bits per cubic
s.fuld> inch is a relevant measure for many shops.

Optical can do very high space efficiency too -- consider binoculars :-).
The one disadvantage I can see for optical devices is that usually they
have problems with weight more than size.

>> Now consider a CD-ROM as the worst possible example: even assuming
>> that one wants to rotate the medium, why put the sensing element on
>> an arm? Unless one is constrained by the mental habits of vinyl discs
>> and magnetic discs?

s.fuld> As I explained above, because it was the best engineering
s.fuld> solution.

It was not an explanation, it seemed to me to be pure handwaving. An
explanation contains numbers...

s.fuld> You need to do better than just hand waving to have anyone
s.fuld> believe you that all of those people are just too dumb to do
s.fuld> other than what they did.

But that was not my goal at all, and you are rather incorrectly
paraphrasing and misrepresenting what I have written.

I am trying to suggest that storage is stuck in a local maximum by
ingrained memes (and of course ingrained economics). If people are
intrigued by this impression, then perhaps they will try harder to get
out of the local maximum, with all the difficulties related to that, but
also with all the potential gains.

>> I would to see a storage system designed around optics on different
>> analogies: a camera, a photo enlarger,

s.fuld> Problems discussee above.

There are always problems -- consider the non trivial problems of
putting a head flying so near its medium, because of the magnetic field,
and the resulting limitations. Never mind also problems with medium
coercivity (or lack thereof) and so on.

I am daily amazed that magnetic storage systems in my PC, as absurd as they
seem to me, actually do work. If very bright magnetic storage engineers
have managed to make magnetic discs that work well, perhaps even optical
engineers can do something as good, starting from a base of hundreds
instead of dozens of years of experience.

>> a laser printer,

s.fuld> Where the medium (the paper) moves.

In a laser printer the medium is the drum/belt. It rotates, but my point
is not limited to ``no moving parts''. That's an ideal goal.

The intermediate steps are less moving parts, and if there are moving
parts, lighter/faster ones perhaps, thanks to not having to move
the ``field emitter'' as close as possible to the medium.

As to laser printers only the drum rotates, yes, but usually the ``field
emitter'' is static, and a fast mirror scans the medium, and then there
are non-moving arrays of LEDs for writing (poor track density, but
rather good access time).

>> or even something totally new (much more difficult) like the mirror
>> arrays in recent OHPs.

s.fuld> If you are talking about DLP technology, the mirrors move.

Well, but the ``field emitter'' is static and far away. Not at all like
a magnetic field emitter. This looks good to me.

s.fuld> [ ... ] There isn't some vast conspiracy to prevent some
s.fuld> excellent technology from getting out.

Indeed not (but perhaps there is one for holostores :->), but there is
the power of a local maximum, and of memes and sunk costs that drive
careers and resonate with that local maximum.

s.fuld> In fact, the beauty of the free enterprise system is that it is
s.fuld> just the opposite.

The free enterprise system existed perhaps in a couple of countries in
the 19th century, and perhaps in Hong Kong relatively recently... It is
almost solely of historical interest, and is rather unrelated to the
issues of technologies getting stuck in local maxima.
0
pg_nh
11/17/2004 9:56:42 PM
[ ... ]

s.fuld> I don't think that is fair.  Storage designers have generally
s.fuld> kept to the existing configurations not because of their
s.fuld> "habits", but because the other things they have tried didn't
s.fuld> work as well.

>> Uhm, my impression is different -- consider for an extreme example the
>> spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)

pfeiffer> I was pretty stunned to find out that there was a good reason
pfeiffer> for that at the time.  If they used a layout like magnetic
pfeiffer> disks, they would have had to provide enough buffering to
pfeiffer> prevent skips when moving from track to track. Memory was
pfeiffer> expensive back then. So... just use a spiral track, and a
pfeiffer> feedback mechanism to keep track of whether the track is
pfeiffer> moving out from under you and move the head to compensate.

But that's precisely what is so crazy (in hindsight of course)! Since
those engineers were told to do a music playback system that worked like
a vinyl player but was digital, they correctly reasoned that a
groove-following pickup was good enough, with the feedback mechanism
being the electronic equivalent of the groove walls.

Yes, if your goal is to make a cheap simulator of a vinyl player, making
it the digital equivalent of a vinyl player is perfectly all right.
After all, why should one try to think out of a well worn groove if it
gets the job done? :-)

But then I take issue with the goal -- the goal was then to make
something structurally, not merely functionally, equivalent to a vinyl
player.

[ ... ]

>> Now consider a CD-ROM as the worst possible example: even assuming
>> that one wants to rotate the medium, why put the sensing element on
>> an arm? Unless one is constrained by the mental habits of vinyl discs
>> and magnetic discs?

pfeiffer> Because of packaging constraints.

Does not follow. Why not put the laser and photoreceptor somewhere fixed
and use lightguides? Naturally, if all you want and all you care to think
about is a ``digital vinyl'' player, why bother? :-)


Then what happens is that once a half-assed technology achieves volume
despite its limitations, we all get stuck in that local maximum (see IA8
to IA32 for another example).
0
pg_nh
11/17/2004 10:09:50 PM
[ ... ]

>> Now consider a CD-ROM as the worst possible example: even assuming that
>> one wants to rotate the medium, why put the sensing element on an arm?

jmfbahciv> We did that so the storage unit could be removed from the system.

Does not follow! Does not follow! Even if you really wanted an arm, why
not put the sensing elements ``somewhere'' and use some kind of optics
to guide the tip of the arm? With optics you don't need to put the
laser and the sensor a few millimeters from the medium. No spherical
field.

The real answer as far as I can see is mostly: we were told to do a
digital vinyl player, so putting the whole heavy thing on the arm was
good enough.

[ ... ]

0
pg_nh
11/17/2004 10:13:42 PM
"Peter Grandi" <pg_nh@0409.exp.sabi.co.UK> wrote in message 
news:yf3u0rorvfp.fsf@base.gp.example.com...

snip

> s.fuld> As I explained above, because it was the best engineering
> s.fuld> solution.
>
> It was not an explanation, it seemed to me to be pure handwaving. An
> explanation contains numbers...

The numbers change with technology.  The fundamental issue here is that you 
seem to think that "mental indolence" is preventing some not totally 
defined (at least not yet by you, but with no or fewer moving parts) optical 
system from coming out that will overwhelm some segment of the current 
storage product landscape.  This is based, AFAICT, on a vague notion that we 
are "stuck" in a local maximum.  My position is that this is simply incorrect 
and that there is no "magic bullet" in what you are talking about that would 
get us to a better maximum than where we are.

It appears that the vast majority of people take my side, but we may be 
wrong.  If you are right, it should be easy to prove it; just develop such a 
product.  But most people seem to feel that the current systems are the way 
they are for good engineering reasons, and that lots of alternatives have 
been tried, and continue to be tried, in various research labs, so far 
without proving you right.  You seem to think that every one of these groups 
of very smart people is "blinded", but the only evidence you present for 
that side is that a solution of the type you propose hasn't come out.  I 
believe the reason it hasn't come out is simply that it isn't currently a 
good idea.  You seem to be so in love with it that you refuse to consider 
that alternative.  So my advice is, as I said before, just show you are 
right.  Fame and fortune awaits.  :-)

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
11/17/2004 11:45:40 PM
"Peter Grandi" <pg_nh@0409.exp.sabi.co.UK> wrote in message 
news:yf3pt2crutt.fsf@base.gp.example.com...
>[ ... ]
>
> s.fuld> I don't think that is fair.  Storage designers have generally
> s.fuld> kept to the existing configurations not because of their
> s.fuld> "habits", but because the other things they have tried didn't
> s.fuld> work as well.
>
>>> Uhm, my impression is different -- consider for an extreme example the
>>> spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)
>
> pfeiffer> I was pretty stunned to find out that there was a good reason
> pfeiffer> for that at the time.  If they used a layout like magnetic
> pfeiffer> disks, they would have had to provide enough buffering to
> pfeiffer> prevent skips when moving from track to track. Memory was
> pfeiffer> expensive back then. So... just use a spiral track, and a
> pfeiffer> feedback mechanism to keep track of whether the track is
> pfeiffer> moving out from under you and move the head to compensate.
>
> But that's precisely What is so crazy (in hindsight of course)! Since
> those engineers were told to do a music playback system that worked like
> a vinyl player but was digital, they correctly reasoned that a
> groove-following pickup was good enough, with the feedback mechanism
> being the electronic equivalent of the groove walls.

I suspect that no, that was not what happened.  I suspect they were given a 
spec that said there could be no arbitrary interruptions in the music and 
they didn't want to pay the cost penalty of the extra buffering that a 
concentric track scheme would have required so they used a solution that 
they knew worked.  They weren't trying to reinvent the vinyl record, but 
they were faced with (at least some of) the same requirements and that led 
to a similar solution.

The result was the same, but you apparently think less of the engineers than 
I do.

-- 
 - Stephen Fuld
   e-mail address disguised to prevent spam 


0
Stephen
11/17/2004 11:45:46 PM

Peter Grandi wrote:

>>>Uhm, my impression is different -- consider for an extreme example the
>>>spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)

> s.fuld> Sure, but do you really think the idea of spiral versus
> s.fuld> concentric tracks is that big a deal? [ ... ]

> Well, spiral tracks make random positioning a lot harder. In another
> post someone mentions that spiral tracks allow cheaper readers, but his
> explanation to me reads ``a single spiraling groove like in vinyl is
> cheaper to read'', which sort of reinforces my impression.

Why do spiral tracks make random positioning harder?  It is constant
linear velocity that makes positioning harder, not to mention that
you don't really know where you are going.  CD tracks, that is,
songs, can be of varying lengths with a minimum of about four seconds.
Each block includes its track number (stored as two digits of BCD).
During seeks it reads a block to find where it is, and to determine where
it should go.  CDs were designed when memory was still relatively
expensive, and one wouldn't keep a big table of positions of data
blocks on the disk.

Constant linear velocity gives about a factor of two in length
(time) which was pretty important in early CD days.
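
A rough back-of-the-envelope check of that factor of two, assuming the
program area runs from about a 25 mm inner radius to a 58 mm outer radius
(the radii and the simple density model are assumptions added for
illustration, not figures from this thread):

/* Rough check of the "factor of two" claim for CLV vs CAV capacity.
 * Assumes the program area spans r_in = 25 mm to r_out = 58 mm
 * (approximate CD dimensions; treat them as illustrative only).
 *
 * CAV with a fixed clock is limited by the innermost track, so
 * capacity ~ number_of_tracks * circumference(r_in) * linear_density.
 * CLV packs bits at the same linear density everywhere, so
 * capacity ~ total track length * linear_density ~ area / track_pitch.
 * The ratio of the two works out to (r_out + r_in) / (2 * r_in).
 */
#include <stdio.h>

int main(void)
{
    double r_in = 25.0;   /* mm, assumed inner program radius */
    double r_out = 58.0;  /* mm, assumed outer program radius */

    double ratio = (r_out + r_in) / (2.0 * r_in);
    printf("CLV/CAV capacity ratio ~ %.2f\n", ratio);  /* prints ~1.66 */
    return 0;
}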

> [ ... ]

>>>With optics you can ``just'' focus your beam.

(snip)

> s.fuld> As for "just" focusing the beam, there is a reason that CDs and
> s.fuld> DVDs have a moving arm.  It is that you can't just focus the
> s.fuld> beam to the precision you need across the distances you need
> s.fuld> within the form factor that is reasonable.

> Really? And mechanically positioning an arm gives tighter tolerances?
> And things like fiber optics don't exist? :-)

The CD reader is actually pretty complicated.  The pits are 1/4
wavelength, such that the reflection from the pit is out of phase
with the reflection from the region around the pit.  Getting
the light out, and then back again, using an optical fiber would
be pretty hard.  Focusing to a one micron spot from long distances
is also hard.  Collecting a large fraction of the light would require
a very large, very expensive lens.
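
The quarter-wave figure is easy to check, assuming the usual 780 nm CD
laser and a polycarbonate refractive index of about 1.55 (both numbers are
assumptions added here for illustration, not taken from the post):

/* Quarter-wave pit depth: the pit must shift the reflected light by
 * half a wavelength (a quarter wave down plus a quarter wave back up)
 * so the pit and land reflections interfere destructively.
 * Assumes lambda = 780 nm (CD infrared laser) and n = 1.55 for the
 * polycarbonate the beam travels through; both figures are assumptions.
 */
#include <stdio.h>

int main(void)
{
    double lambda_nm = 780.0;  /* vacuum wavelength, assumed */
    double n = 1.55;           /* refractive index of polycarbonate, assumed */

    double depth_nm = lambda_nm / (4.0 * n);
    printf("quarter-wave pit depth ~ %.0f nm\n", depth_nm);  /* ~126 nm */
    return 0;
}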

> I think that the peculiar absurdity of CD-ROM pickup is that as a rule
> the _laser_ is on the tip of the arm, and the optics too, as if the
> ``field emitter'' needed to be near the medium, as if it was emitting a
> magnetic field.

> Even assuming that one really had to do a moving arm following a
> ``groove'' on a CD-ROM, why not put just a mirror and/or a fiber optic
> at the tip of that arm and make it much lighter/faster? :-)

The lens is light, and is moved dynamically to follow the disk
motion.

> My impression is simply that CD-ROM was designed to imitate the
> design of a vinyl player as closely as possible, because that worked,
> was well understood, and damn everything else.
> 
> s.fuld> Even these devices have moving mirrors, not "just" solid state
> s.fuld> devices.
> 
> Yes, fine. A lot better than moving both the medium and the sensing
> element. One can well do rotating media or even a movable pickup, but
> there is no point in making the ``field emitter'' movable too, because
> photons can be focused/waveguided.

They can be, but keeping it coherent and focusing to a one micron
spot over long distances is hard, and gets much harder as you get
farther away.

-- glen

0
glen
11/17/2004 11:47:20 PM
On Wed, 17 Nov 2004 23:13:42 +0000, Peter Grandi wrote:
> The real answer as far as I can see is mostly: we were told to do a
> digital vinyl player, so putting the whole heavy thing on the arm was
> good enough.

The goal was to inexpensively package a sequential bit stream, in the late
'70s.  Random access was not a (significant) issue.  Good engineering is
avoiding over-engineering.

Remember that the competing technology was two different versions of
digital magnetics on cassettes of tape.

-- 
Andrew

0
Andrew
11/18/2004 12:00:44 AM

Stephen Fuld wrote:
(snip)

> I suspect that no, that was not what happened.  I suspect they were given a 
> spec that said there could be no arbitrary interruptions in the music and 
> they didn't want to pay the cost penalty of the extra buffering that a 
> concentric track scheme would have required so they used a solution that 
> they knew worked.  They weren't trying to reinvent the vinyl record, but 
> they were faced with (at least some of) the same requirements and that led 
> to a similar solution.

Definitely minimizing the logic was a requirement.  Consider that track
numbers are stored in BCD to simplify the conversion for the display
on the front.  That restricts the tracks to 99 instead of 255, but was
considered worthwhile.  Well, more would require the extra expense of
a three digit display, too.
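
A sketch of why packed BCD is so attractive for a two-digit front-panel
display: splitting the nibbles is all the "conversion" needed. The code
below is illustrative only, not actual player firmware.

/* Unpacking a packed-BCD track number for a two-digit display.
 * One byte holds two decimal digits (high nibble = tens, low = units),
 * which drives a seven-segment display with almost no logic, but caps
 * the count at 99.  A sketch for illustration.
 */
#include <stdio.h>

static void show_bcd_track(unsigned char bcd)
{
    unsigned tens  = (bcd >> 4) & 0x0F;
    unsigned units = bcd & 0x0F;
    printf("track %u%u\n", tens, units);
}

int main(void)
{
    show_bcd_track(0x07);  /* track 07 */
    show_bcd_track(0x99);  /* track 99, the maximum in two BCD digits */
    return 0;
}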

Buffer memory was expensive, too.  They did need enough memory to decode
the complex error-correcting codes that allow fairly large gaps
(such as from scratches), but a track-to-track seek could take a long
time.  What distance do you allow on the disk for the seek
(constant linear velocity)?  If it isn't enough, a complete revolution is
required before you have data again.
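
To put numbers on how much buffering a ride-through-the-seek scheme would
have needed, here is a small worked example; the seek times are made-up
round figures, and only the 44.1 kHz stereo 16-bit audio rate is fixed:

/* How much RAM would "ride through a seek" buffering have cost?
 * CD audio is 44100 samples/s x 2 channels x 16 bits = 176,400 bytes/s.
 * The seek times below are invented round numbers, just to show scale.
 */
#include <stdio.h>

int main(void)
{
    const double bytes_per_sec = 44100.0 * 2 * 2;
    double seek_s[] = { 0.1, 0.5, 1.0 };   /* assumed seek/settle times */

    for (int i = 0; i < 3; i++)
        printf("%.1f s gap -> %.0f KB of buffer\n",
               seek_s[i], bytes_per_sec * seek_s[i] / 1024.0);
    /* ~17 KB, ~86 KB, ~172 KB -- serious money in late-1970s DRAM */
    return 0;
}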

-- glen

0
glen
11/18/2004 12:40:01 AM

Andrew Reilly wrote:

> On Wed, 17 Nov 2004 23:13:42 +0000, Peter Grandi wrote:
> 
>>The real answer as far as I can see is mostly: we were told to do a
>>digital vinyl player, so putting the whole heavy thing on the arm was
>>good enough.

> The goal was to inexpensively package a sequential bit stream, in the late
> '70s.  Random access was not a (significant) issue.  Good engineering is
> avoiding over-engineering.

> Remember that the competing technology was two different versions of
> digital magnetics on cassettes of tape.

DCC came from Philips, so that wasn't competition.  I believe both
DCC and DAT came significantly later, so weren't competition at
the time.

-- glen

0
glen
11/18/2004 12:41:15 AM
On Wed, 17 Nov 2004 17:41:15 -0800, glen herrmannsfeldt wrote:
> Andrew Reilly wrote:
> 
>> On Wed, 17 Nov 2004 23:13:42 +0000, Peter Grandi wrote:
>> 
>>>The real answer as far as I can see is mostly: we were told to do a
>>>digital vinyl player, so putting the whole heavy thing on the arm was
>>>good enough.
> 
>> The goal was to inexpensively package a sequential bit stream, in the late
>> '70s.  Random access was not a (significant) issue.  Good engineering is
>> avoiding over-engineering.
> 
>> Remember that the competing technology was two different versions of
>> digital magnetics on cassettes of tape.
> 
> DCC came from Philips, so that wasn't competition.  I believe both
> DCC and DAT came significantly later, so weren't competition at
> the time.

Competing was perhaps too strong a word.  The digital audio technology
that preceded the CD was still mostly handled by video tape of various
sorts, in the pro-audio (not consumer) sphere.

The point that I was trying to make (badly) was only that the CD only needed
to have marginally better random access than tape to be a success.  There
was certainly no conceivable need for instantaneous, totally random
access, which is the only motivation that I can think of for trying to do
steady-state (or at least not spinning) optics.

Peter: spinning is a pretty benign and efficient form of moving.  If
you're more interested in density than access time (even if you do care
about linear bandwidth), I strongly suspect that physics is in favor of
keeping the data and the read sensor physically close, which is easy to
achieve when your data surface spins.  Spinning also requires no
additional space.

Cheers,

-- 
Andrew

0
Andrew
11/18/2004 2:49:55 AM
In article <cngo0h$vs7$1@gnus01.u.washington.edu>,
 glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote:

> Why do spiral tracks make random positioning harder?

*shrug*

> It is constant
> linear velocity that makes positioning harder, not to mention that
> you don't really know where you are going.

I was under the impression that the CLV specs on a CD were rather loose.

>  CD's were designed when memory was still relatively
> expensive, and one wouldn't keep a big table of positions of data
> blocks on the disk.

Well, that's the thing.  They had to do it cheap enough to sell.  That 
probably was the deciding factor in many decisions.  Heck, when the 
system came out, LSIs, ASICs, and solid-state lasers were all still pretty 
new.  Precision micro-optics in consumer devices were a novel idea too.

Then again, look at all the joking about NTSC, yet it works darn well.  
What I find more amazing is that it worked in the 1950s, and that they were 
actually able to make color CRTs back then at all - it took RCA a few 
tries to get it right, and things we take for granted today - phosphors 
on the screen surface, integral implosion protection, bright pictures, 
etc. - were major breakthroughs of their time.  The technology 
didn't really start hitting a level of maturity until the mid-60s (you 
could argue the PortaColor had the first truly modern CRT - rectangular, 
in-line gun, built-in implosion protection, rare earth phosphors).

> Constant linear velocity gives about a factor of two in length
> (time) which was pretty important in early CD days.

Wasn't cramming some Beethoven symphony onto a single-sided disc of X size 
the driving consideration?

> The CD reader is actually pretty complicated. 

Yeah.

> The lens is light, and is moved dynamically to follow the disk
> motion.

Source of that bizarre hiss CD players (older ones at least) make :)
0
Philip
11/18/2004 4:36:51 AM
> Well, spiral tracks make random positioning a lot harder. 

Not a design criterion for the original CD. Mechanical and electronic
limitations had a much larger effect on the speed of random track seeking
for the first generation of devices (and perhaps even today) than the track
layout.

> I think that the peculiar absurdity of CD-ROM pickup is that as a rule
> the _laser_ is on the tip of the arm, and the optics too, as if the
> ``field emitter'' needed to be near the medium, as if it was emitting a
> magnetic field.

It doesn't have to be -
> 
> Even assuming that one really had to do a moving arm following a
> ``groove'' on a CD-ROM, why not put just a mirror and/or a fiber optic
> at the tip of that arm and make it much lighter/faster? :-)

That's the point - _would_ it really be lighter and faster? You need to
weigh the mass of the fibre and its connectors against the mass of an LED
(very lightweight) and its optics. Also, tolerance to bending during
movement is well understood for wires, and not so well understood for
optical fibres - that's one more technical risk in an already risky
project - no thanks.

	Jan
0
ISO
11/18/2004 7:56:42 AM
Andrew Reilly <andrew-newspost@areilly.bpc-users.org> writes:

> The goal was to inexpensively package a sequential bit stream, in the late
> '70s.  Random access was not a (significant) issue.  Good engineering is
> avoiding over-engineering.

> Remember that the competing technology was two different versions of
> digital magnetics on cassettes of tape.

Well, you could consider existing analog audio formats as competition
as well.  The CD was a more robust medium with superior sound quality
and the ability for random access to tracks, in about the same size as
the smallest competitor (cassettes).  It was also cheap(er?) to
produce.

While we can argue whether players could have been designed
differently, they quickly became cheap enough for the format to be a
success, and there is no question that the CD was well engineered for
its purpose.  Blame the computer industry for jumping on the wrong
bandwagon.

(Makes you wonder what the world would look like if PCs had come with
NeXT-like optical drives, incompatible with CDs.  If we had to take
the trouble of ripping music through a poor-quality analog line-in,
would we still have laws like the DMCA and would Metallica still be
suing their fans? :-) 

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
0
Ketil
11/18/2004 10:49:37 AM
pg_nh@0409.exp.sabi.co.UK (Peter Grandi) writes:

> [ ... ]

> s.fuld> I don't think that is fair.  Storage designers have
> s.fuld> generally kept to the existing configurations not because of
> s.fuld> their "habits", but because the other things they have tried
> s.fuld> didn't work as well.

>>> Uhm, my impression is different -- consider for an extreme example
>>> the spiral tracks of CD-ROMs, which were designed to replace
>>> vinyl. :-)

> pfeiffer> I was pretty stunned to find out that there was a good
> pfeiffer> reason for that at the time.  If they used a layout like
> pfeiffer> magnetic disks, they would have had to provide enough
> pfeiffer> buffering to prevent skips when moving from track to
> pfeiffer> track. Memory was expensive back then. So... just use a
> pfeiffer> spiral track, and a feedback mechanism to keep track of
> pfeiffer> whether the track is moving out from under you and move
> pfeiffer> the head to compensate.

> But that's precisely What is so crazy (in hindsight of course)!
> Since those engineers were told to do a music playback system that
> worked like a vinyl player but was digital, they correctly reasoned
> that a groove-following pickup was good enough, with the feedback
> mechanism being the electronic equivalent of the groove walls.

Dig into the low-level details of the CD format; it is all a very tight
package with interrelations all over.

One of the biggest drivers was to keep the cost of the tracking system
within bounds, and that was done by combining a long-travel mechanical
sled or swing arm with an EMA corrector integrated with the
optics. Spiral rather than concentric tracks were chosen so the data
could be interleaved without having nasty end-of-track special cases
everywhere. It also allowed a big simplification of the servos.

The trade-offs you would make now are VERY different from what made sense
then. Consider CD caddies as an example. They were there to protect your
valuable disks! Now the caddies are far more expensive than a disk!!

-- 
Paul Repacholi                               1 Crescent Rd.,
+61 (08) 9257-1001                           Kalamunda.
                                             West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.
0
prep
11/18/2004 2:07:22 PM
pg_nh@0409.exp.sabi.co.UK (Peter Grandi) writes:

> [ ... ]

>>> Now consider a CD-ROM as the worst possible example: even assuming
>>> that one wants to rotate the medium, why put the sensing element
>>> on an arm?

> jmfbahciv> We did that so the storage unit could be removed from the
> jmfbahciv> system.

> Does not follow! Does not follow! Even if you really wanted an arm,
> why not put the sensing elements ``somewhere'' and use some kind of
> optics to guide the the tip of the arm? With optics you don't need
> to put the laser and the sensor a few millimeters from the
> medium. No spherical field.

Because putting optics of a large enough NA at a distance would cost
a fortune!!!
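
A quick illustration of why: for a fixed numerical aperture, the clear
aperture of the objective grows linearly with working distance. NA = 0.45
is the usual CD figure; the working distances below are arbitrary examples
added purely for illustration.

/* Why the objective sits millimetres from the disc: to keep a given
 * numerical aperture (and hence spot size), the clear aperture of the
 * lens must grow linearly with working distance.
 * NA = 0.45 is the usual CD objective figure (in air); the working
 * distances are assumptions chosen only to show the scaling.
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double na = 0.45;                      /* CD objective NA (in air) */
    double theta = asin(na);               /* half-angle of the focus cone */
    double wd_mm[] = { 2.0, 50.0, 500.0 }; /* assumed working distances */

    for (int i = 0; i < 3; i++)
        printf("%6.1f mm away -> lens diameter ~ %6.1f mm\n",
               wd_mm[i], 2.0 * wd_mm[i] * tan(theta));
    /* ~2 mm, ~50 mm, ~504 mm: a half-metre-wide objective is not
       consumer-price territory. */
    return 0;
}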

> The real answer as far as I can see is mostly: we were told to do a
> digital vinyl player, so putting the whole heavy thing on the arm
> was good enough.

Not so, the vinyl record had almost no influence at all, except for
the odd light humour.

Can you explain why, for instance, the CD tracks from the inside out?

-- 
Paul Repacholi                               1 Crescent Rd.,
+61 (08) 9257-1001                           Kalamunda.
                                             West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.
0
prep
11/18/2004 2:11:36 PM
Peter Grandi wrote:
> [ ... ]
> 
> s.fuld> I don't think that is fair.  Storage designers have generally
> s.fuld> kept to the existing configurations not because of their
> s.fuld> "habits", but because the other things they have tried didn't
> s.fuld> work as well.
> 
> 
>>>Uhm, my impression is different -- consider for an extreme example the
>>>spiral tracks of CD-ROMs, which were designed to replace vinyl. :-)
> 
> 
> pfeiffer> I was pretty stunned to find out that there was a good reason
> pfeiffer> for that at the time.  If they used a layout like magnetic
> pfeiffer> disks, they would have had to provide enough buffering to
> pfeiffer> prevent skips when moving from track to track. Memory was
> pfeiffer> expensive back then. So... just use a spiral track, and a
> pfeiffer> feedback mechanism to keep track of whether the track is
> pfeiffer> moving out from under you and move the head to compensate.
> 
> But that's precisely What is so crazy (in hindsight of course)! Since
> those engineers were told to do a music playback system that worked like
> a vinyl player but was digital, they correctly reasoned that a
> groove-following pickup was good enough, with the feedback mechanism
> being the electronic equivalent of the groove walls.
> 
> Yes, if your goal is to make a cheap simulator of a vinyl player, making
> it the digital equivalent of a vinyl player is perfectly all right.
> After all, why should one try to think out of a well worn groove if it
> gets the job done? :-)
> 
> But then I take issue with the goal -- the goal was then to make
> something structurally, not merely functionally, equivalent to a vinyl
> player.

At the time that the CD spec was being developed, random positioning in 
any medium -- vinyl or otherwise -- was incredibly expensive and not 
terribly effective. 100 tpi was a big deal for floppy disks; hard disks 
with accuracies in the 1000-TPI and up range cost as much as a car and 
typically required an entire head and platter devoted to servo 
information. And were noisy as heck.
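
For comparison, the CD's standard 1.6 micron track pitch works out to
roughly 15,900 tracks per inch -- an order of magnitude beyond those hard
disks (the pitch figure is added here for illustration):

/* Putting the numbers side by side: the CD's 1.6 micron track pitch,
 * expressed in the tracks-per-inch units used for the disks mentioned
 * above.  The 1.6 um figure is the standard CD pitch.
 */
#include <stdio.h>

int main(void)
{
    double pitch_um = 1.6;              /* CD track pitch */
    double tpi = 25400.0 / pitch_um;    /* 25,400 um per inch */
    printf("CD track pitch ~ %.0f TPI\n", tpi);  /* ~15875 TPI */
    return 0;
}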

It's nice to think, 10 hardware generations later, that the engineers 
should have been able to deliver better than state-of-the-art 
performance at mass consumer prices.

paul
0
Paul
11/18/2004 7:52:55 PM
In article <87y8gz5jz9.fsf@prep.synonet.com>, prep@prep.synonet.com 
wrote:

> Spiral rather than cylinder tracks where done so the data
> could be interleaved without having nasty end of track special cases
> every where. It also allowed a big simplification of the servos.

Could you even do cylindrical tracks back then?  Cheaply enough?  For 
some reason, I keep imagining the lens bumping over a track, then taking 
a bit of the turn to settle and center on the track....

A spiral at least makes staying on track a bit easier, and I suspect in 
the late 70's, just getting that to work reliably and still be 
economical was a task...
0
Philip
11/18/2004 10:12:55 PM
Peter Grandi wrote:
> [ ... ]
> 
> 
>>>Now consider a CD-ROM as the worst possible example: even assuming that
>>>one wants to rotate the medium, why put the sensing element on an arm?
> 
> 
> jmfbahciv> We did that so the storage unit could be removed from the system.
> 
> Does not follow! Does not follow! Even if you really wanted an arm, why
> not put the sensing elements ``somewhere'' and use some kind of optics
> to guide the the tip of the arm? With optics you don't need to put the
> laser and the sensor a few millimeters from the medium. No spherical
> field.
> 
> The real answer as far as I can see is mostly: we were told to do a
> digital vinyl player, so putting the whole heavy thing on the arm was
> good enough.
> 
> [ ... ]
> 

What a humorous discussion.  Let's strike out in other areas.

I can't believe we still use binary for ubiquitous computing.

The real answer, as far as I can see, is mostly:  we were told to 
enhance designs based on the switching action of transistors, so not 
trying out other ideas was good enough.

:-)

Meanwhile, back in CD-player land.  I seem to recall that the first 
mechanisms didn't use a swinging arm, but stepped across the surface 
perpendicular to the rotation like a floppy disk head.  In such an 
arrangement, there is no advantage to putting the optics somewhere else, 
because mass was not a problem (the diode/sensor assembly sits on a set 
of rails, actuated by a stepper motor or similar).  The swinging arm idea 
came later, as I recall, as designers tried for faster seek times.  They 
moved to the servo arm idea and tackled the issues of the non-constant 
movements necessary as the arm swings across the arc over the various 
tracks, how to quickly move said arm to a track taking into account its 
mass and inertia, etc.

Strangely, as they moved to the servo actuated arm, they no doubt would 
have preferred to remove the mass of the diode/sensor, but they did not, 
so there is most likely a good reason.

As an engineer who is regularly beaten up in social circles as a 
stereotypical "overdesigner", and who tries to be pragmatic in his 
approach to designing solutions, I find the veiled insinuations concerning 
the CD-ROM engineers highly ironic.

In my opinion, the best design is one that works, works well, costs the 
least, introduces the least amount of radical new direction, and can be 
extended a bit.

Jim

-- 
Jim Brain, Brain Innovations
brain@jbrain.com                                http://www.jbrain.com
Dabbling in WWW, Embedded Systems, Old CBM computers, and Good Times!
0
Jim
11/19/2004 7:25:46 AM
In article <_dhnd.116697$R05.4338@attbi_s53>,
 Jim Brain <brain@jbrain.com> wrote:
 ...
> Meanwhile, back in CD-player land.   I seem to recall that the first 
> mechanisms didn't use a swinging arm, but stepped across the surface 
> perpendicular to the rotation like a floppy disk head.  In such an 
> arrangement, there is no advantage to putting the optics somewhere else, 
> because mass was not a problem (The diode/sensor assembly sits on a set 
> of rails, actuated by a stepper motor or smilar. ...

I have one of those, a circa '91 (?) NEC 2x reader, which I still use 
sometimes (on a ca. 91 computer ;).
____________________________________________________________________
TonyN.:'                                          tonynlsn@shore.net
      '

0
Tony
11/20/2004 2:01:57 AM
Nicola Musatti wrote:

> Hallo,
> According to their proponents virtual machines such as JVM and CLR are
> the solution to all our (programming) problems, of which portability
> is but one.

I'm not sure if that is what has been claimed, or if that's your opinion,
but if it is what is being claimed, it's obviously dead wrong.

A Virtual Machine solves one problem and only one: allowing a
particular binary program to run in an otherwise incompatible
environment.  That the environment may not exist anywhere except in
software isn't really relevant.

There are a number of areas that a virtual machine does not solve,
including source portability (care to say that you are certain you can
run a program unchanged on two different machines that use the same
virtual machine environment?  Maybe you can, but it is quite likely
you can't); execution efficiency in critical environments (nobody
writes a boot loader to run in multiple virtual machine environments);
and graphics standardization (a game to run on Windows will be
different from one to run on Linux or the Macintosh even if it can run
in a virtual machine).

> Maybe it's just because when I learnt programming the p-machine was
> considered an interesting oddity, but with the exception of code that
> really must run unchanged on unknown platforms, I fail to see what do
> I gain from a virtual machine that I don't already get from a good old
> compiler/runtime support/standard library chain.

For text-based and batch processing in ordinary applications a virtual
machine may provide a better choice for targeting an application than
a recompile for dozens of targets.  This may be one reason Ryan
McFarland COBOL is quite popular, since the p-code for compiled
applications is the same for all platforms the compiler is written
for.

> After all, isn't gcc the most ported virtual machine of all?

I don't think GCC would qualify as a virtual machine as I think code
is generated to run natively, not to be interpreted.

> Now, this being the compiler forum, I'm interested in learning about
> the advantages of virtual machines from the compiler writer
> perspective.

It allows you to write the code generator once (as opposed to
rewriting the code generator for each target machine) and simply
rewrite (or recompile) the run-time library and virtual support
environment for each target machine (which you have to do anyway).
You would also have to rewrite the compiler's I/O routines to handle
differences in file management.

If the compiler itself can run on the run-time library then you might
not even have to rewrite any part of the compiler at all.
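
A minimal sketch of that division of labour: the compiler targets a tiny
bytecode once, and only the interpreter below (plus the runtime library)
has to be ported to each machine. The instruction set is invented here
purely for illustration.

/* A minimal stack-machine interpreter.  The compiler's code generator
 * emits this bytecode for every target; only this loop and the runtime
 * library need porting.  The opcodes are made up for this sketch.
 */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    int stack[64];
    int sp = 0;                 /* next free stack slot */

    for (size_t pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];              break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];      break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];      break;
        case OP_PRINT: printf("%d\n", stack[--sp]);           break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* (2 + 3) * 7, as a compiler targeting this VM might emit it */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 7, OP_MUL, OP_PRINT, OP_HALT };
    run(program);   /* prints 35 */
    return 0;
}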

Paul Robinson
"The lessons of history teach us - if they teach us anything - that
nobdy learns the lessons that history teaches us."

0
Paul
12/17/2004 5:31:14 AM
Paul Robinson wrote:

(snip)

> I'm not sure if that is what has been claimed, or if that's your opinion
>   but if it is what is being claimed, it's obviously dead wrong.

> A Virtual Machine solves one problem and only one: allowing a
> particular binary program to be operated in a non-compatible
> environment.  That the environment may not exist anywhere except in
> software isn't really relevant.

I would say that in addition they can provide more debugging
facilities than the hardware normally provides.

IBM used CP/67 to develop S/370 software before enough machines were
available, and VM/370 to help in debugging after they were available.
IBM's VM has kept the ability to run itself, I would presume, for
debugging purposes.

I am not so sure that the JVM provides debugging facilities similar to
VM/ESA, but it could be done much more easily than adding debugging
features to the hardware.

-- glen
0
glen
12/20/2004 4:49:43 AM
Paul Robinson wrote:
> Nicola Musatti wrote:
>>After all, isn't gcc the most ported virtual machine of all?
> I don't think GCC would qualify as a virtual machine as I think code
> is generated to run natively, not to be interpreted.

What about attaching GCC's intermediate representation to the executable
file? This will allow dynamic recompilation if
a) the assigned cache size varies during operation
b) the ratio of core speed to memory speed changes
c) the cache associativity changes

This recompilation can be done on another core or even processor and
will not decrease speed of the current executable.

If the results of these recompilations are attached as well, then it only
has to be done once.

Objections?
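
One objection is how the executable would know when recompilation is
worthwhile. A minimal sketch, assuming a glibc system where sysconf()
exposes cache geometry (those _SC_ names are glibc extensions, and the
stored fingerprint and the recompile step are invented for illustration):

/* Sketch of the "recompile when the cache changes" idea: the executable
 * carries a fingerprint of the cache geometry it was last tuned for,
 * compares it against the machine it finds itself on, and only then
 * would hand its embedded IR to a recompiler.
 * The sysconf() names are glibc extensions (they may return 0 or -1
 * elsewhere); the fingerprint values are assumptions.
 */
#include <stdio.h>
#include <unistd.h>

struct cache_fingerprint {
    long l1d_size, l1d_line, l2_size;
};

int main(void)
{
    /* geometry this binary was last specialized for (assumed values) */
    struct cache_fingerprint tuned = { 32 * 1024, 64, 2 * 1024 * 1024 };

    struct cache_fingerprint now = {
        sysconf(_SC_LEVEL1_DCACHE_SIZE),
        sysconf(_SC_LEVEL1_DCACHE_LINESIZE),
        sysconf(_SC_LEVEL2_CACHE_SIZE),
    };

    if (now.l1d_size != tuned.l1d_size ||
        now.l1d_line != tuned.l1d_line ||
        now.l2_size  != tuned.l2_size) {
        printf("cache geometry changed; would queue IR for recompilation\n");
        /* e.g. hand the embedded IR to a low-priority recompiler here */
    } else {
        printf("cached specialization still matches this machine\n");
    }
    return 0;
}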

bis besser,
Tobias
0
Tobias
12/22/2004 6:05:59 AM