Not in the sense of having a stack-oriented instruction set, although
I suppose that is more likely, even if less efficient.
No, it is the famous post that Robert Firth made on March 11, 1991, of
which I speak:
>In article <13...@darkstar.ucsc.edu> hay...@felix.ucsc.edu (99700000) writes:
>>Also 48 is a nice number for packing all kinds of bytes and nibbles into,
>>since it has so many divisors. I spoze that's less important now that
>>the price of memory has gone down so much compared to 1964.
>Indeed it was. Here is one list, from the KDF9 programming manual, p 24:
>THE KDF9 WORD HAS 48 BITS ...
>IT MAY BE USED AS...
> Eight 6-Bit Alpha-Numeric Characters
> One 48-Bit Fixed-Point Number
> Two 24-Bit (Half length) Fixed-Point Numbers
> Half of a 96-Bit (Double Length) Fixed-Point Number
> One 48-Bit Floating-Point Number
> Two 24-Bit (Half length) Floating-Point Numbers
> Half of a 96-Bit (Double length) Floating-Point Number
> Three 16-Bit (Fixed point) Integers
> Six 8-Bit Instruction Syllables
>An instruction was 1, 2 or 3 syllables; an address was 15 bits.
>O, memory! We shall not see its like again.
It would seem that with today's standard parts, oriented towards the
power-of-two philosophy first displayed in STRETCH (which at least
_did_ have bit addressing, for variable-width data) and followed up in
the IBM 360 and then copied by the PDP-11, there is no hope.
But then, you can choose to get 64-bit wide memories, or much more
expensive 72-bit wide memories, with 8 bits extra for SECDED ECC.
Going from 64 data bits up to 128 would push the check-bit count to 9;
but 8 check bits are still enough to provide single-error correction
and double-error detection for 120 data bits. So 120 bits plus ECC fit
exactly into nice standard 128-bit wide memory sticks.
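As a sanity check on that arithmetic, here is a small Python sketch
(the formula is just the standard Hamming bound for single-error
correction, plus one overall parity bit for double-error detection):

```python
def secded_check_bits(data_bits):
    # Hamming SEC needs the smallest r with 2**r >= data_bits + r + 1;
    # SECDED adds one overall parity bit on top of that.
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1

print(secded_check_bits(64))   # 8  -> 64 + 8  = 72-bit wide memory
print(secded_check_bits(120))  # 8  -> 120 + 8 = 128 bits exactly
print(secded_check_bits(128))  # 9  -> would no longer fit in 128 bits
```

So 120 data bits is the widest word that 8 check bits can protect,
which is what makes the 128-bit module width work out so neatly.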
I have noted that 36-bit floating-point numbers seemed to be a better
width than 32 bits, judging by the experience of so many programs
needing to start using double precision after the switch from the
36-bit 7090 to the 32-bit IBM 360.
I have noted that many early desktop scientific calculators, and the
pocket calculators that came after them, had ten digits of precision -
plus one or more guard digits - and two exponent digits. This level of
precision is what is provided by a 48-bit floating-point number, and
the Control Data 1604, its successor the 3600, and some Russian and
Chinese machines showed that 48-bit floating point was desirable for
serious numerical work.
I have noted that the Control Data 6600 had 60-bit words, and the loss
of four bits from double-precision floats didn't seem to bother its
users much.
So a computer with the ability to handle floating-point numbers of
those lengths would, I claim, be better suited to numerical work than
today's computers with their 32-bit and 64-bit floating-point numbers.
How to achieve this?
Well, if one had a chip that had a 120-bit data bus to memory - plus 8
bits for ECC - then, given six-fold interleaving, or at least cache
lines of 720 bits inside the chip, we would have a unit that can be
divided evenly into any of the following:
20 single precision floating-point numbers of 36 bits length
15 medium precision floating-point numbers of 48 bits length
12 double precision floating-point numbers of 60 bits length
24 integers of 30 bits length
72 Chen-Ho encoded groups of three decimal digits of 10 bits length
180 packed decimal digits of 4 bits in length
120 characters of 6 bits in length
90 characters of 8 bits in length
80 characters of 9 bits in length
45 UTF-16 characters of 16 bits in length
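All of those counts follow from 720 having so many divisors; a quick
check in Python:

```python
# item width in bits -> how many items fit in a 720-bit cache line
layouts = {36: 20, 48: 15, 60: 12, 30: 24, 10: 72,
           4: 180, 6: 120, 8: 90, 9: 80, 16: 45}
for width, count in layouts.items():
    # every width divides 720 exactly, with the claimed item count
    assert 720 % width == 0 and 720 // width == count
print("all ten layouts fill the 720-bit line exactly")
```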
However, addressing may potentially be quite complicated.
The external address, of course, selects 120-bit words from the
memory.
So, to make the architecture independent of this implementation
detail, I propose that base registers contain the addresses of 720 bit
blocks. This needs a multiply-by-three circuit (the shift by one place
to actually multiply by six being trivial).
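A minimal sketch of that conversion (the function name is mine, not
part of any real design): the multiply-by-three is one shift and one
add, and the final doubling is another shift.

```python
def block_to_word120(block_number):
    # 720-bit block number -> address of its first 120-bit external word.
    # Multiply by 3 via shift-and-add, then shift left once to get x6.
    times3 = (block_number << 1) + block_number
    return times3 << 1

print(block_to_word120(5))  # 30: block 5 begins at 120-bit word 30
```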
One can add to the resulting address of a 120-bit word an address for
an item that is 15, 30, or 60 bits in width with only a right shift.
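In rough Python terms (an illustrative sketch, with the shift amounts
following from a 120-bit word holding eight 15-bit, four 30-bit, or
two 60-bit items):

```python
def raw_address(item_index, width):
    # power-of-two item counts per 120-bit word make this a pure shift
    shift = {15: 3, 30: 2, 60: 1}[width]
    word = item_index >> shift                          # which 120-bit word
    offset = (item_index & ((1 << shift) - 1)) * width  # bit offset within it
    return word, offset

print(raw_address(9, 30))  # (2, 30): the tenth 30-bit item sits in word 2
```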
On the other hand, a divide-by-fifteen circuit would take the address
of a 6-bit character in memory, and turn it into the address of a unit
of 90 bits - which can then be shifted three places to make it the
address of a unit of 720 bits.
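The same two steps in Python, with integer division standing in for
the divide-by-fifteen circuit (again only a sketch of the idea):

```python
def char6_to_block(char_index):
    # fifteen 6-bit characters fill a 90-bit unit;
    # eight 90-bit units fill a 720-bit block
    unit90 = char_index // 15   # the divide-by-fifteen circuit
    return unit90 >> 3          # shift three places: 720-bit block number

print(char6_to_block(123))  # 1: character 123 lies in block 1 (chars 120-239)
```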
So if we have simply two basic addressing modes, "raw" addressing for
30 and 60 bit items, and "converted" addressing for 36 and 48 bit
items (which is also usable for 60 bit items), programs can do normal
multiplication by the length of an item for indexing in most cases. (A
third mode, "original" addressing, where a value is added to the
address of the 720 bit block after shifting, would only get us the
ability to address items 45 bits in length, which are already
accessible with raw addressing.)
However, eight-bit, nine-bit, and 16-bit characters still wouldn't be
fully addressable, only words composed of various numbers of them.
Since one already has a divide by fifteen circuit, though, instead of
putting its output into the multiply-by-three circuit, one could use
it straight out, and that allows direct addressing of 8-bit bytes.
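That works because fifteen 8-bit bytes fill one 120-bit external word
exactly, so the divider's output is itself the word address. A sketch:

```python
def byte_to_word120(byte_index):
    # 15 eight-bit bytes per 120-bit word: quotient is the word
    # address, remainder selects the byte within the word
    return byte_index // 15, byte_index % 15

print(byte_to_word120(31))  # (2, 1): byte 31 is the second byte of word 2
```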
It's possible that the addressing on such a machine would not drive
programmers completely insane...