


What can I do with old PL/1 code?

I have an old program that ran on an S/370 many years ago. It deals
with complex numerical computations and is about 10,000 lines long.

I have the IBM VisualAge PL/1 compiler, but it gives wrong results. Using
compatibility mode only makes the situation slightly better. Step-by-step
debugging with hand checking shows that I have to do some seemingly senseless
exchanging of lines of code, or (sometimes) write the code in another manner.
So I'm almost completely sure that the compiler has errors. This can be
supported by another (smaller) program of mine, which showed the same
behavior until I rewrote it in Fortran. I cannot do the same here, due to
its size and semantic complexity, and some features cannot be reproduced in
Fortran.

So I am trying to find another way. I found the Liant compiler and contacted
them, but got only one response in a week. Does anybody have experience with
that compiler?

At the same time, I learned about the OpenVMS PL/1 compiler for Alpha
computers. I think I could now buy such a computer for a modest price,
but what about the compiler? I went to the Kednos site and saw the Hobbyist
license, but I do not understand how I can get it. Also, can somebody tell
me about the features of that compiler? Or maybe that approach isn't good?

mikezmn (64)
12/10/2005 10:21:04 AM

MZN wrote:
> I have an old program running on S370 a lot of years ago. It have deal
> with complex numerical computations and its volume about 10000 lines.
> 
> I have IBM Visual Age PL/1 compiler, but it gives wrong results. Using
> compatibility mode makes situation slightly better only. Step by step
> debug with hand checking shows that I need to do some senseless
> exchanging lines of code, or write code by another manner (sometimes).
> So, I'm almost completely sure, that that compiler has errors. It's can
> be proved by my another (smaller) program, that had the same behavior
> until I rewrote it to Fortran. I cannot do the same here, due to size
> and semantic complexity. Some features couldn't be reproduced in
> Fortran.

It's not clear exactly what your problem is, but your description raises 
some questions.  It's also possible the compiler does not have errors.

First, what do you mean by "wrong results"?  Are you comparing them 
against previous results obtained a lot of years ago on an S370?  Or are 
they theoretically wrong?  That is, are they just different from those 
produced by old runs or are they different from expected results using 
mathematical analysis?

Second, when you say "complex numerical computations", do you mean using 
the FLOAT BINARY data type?

If changing the order of arithmetic operations changes the results, the 
difference may be due to roundoff error or other known computational 
issues related to limitations on precision and representation of real 
numbers.  In such cases, translating to Fortran may or may not help 
because Fortran has mostly the same operator precedences as PL/I so it 
evaluates expressions in mostly the same order.
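
Just to make the roundoff point concrete, here is a small stand-alone C
sketch (my own illustration, not part of the original program; C is used
only because it is easy to try anywhere) showing that merely reordering a
floating-point sum changes the result:

#include <stdio.h>

/* Floating-point addition is not associative, so reordering operations
   (by hand or by an optimizer) can change results.                     */
int main(void)
{
    float big  = 1.0e8f;                 /* large-magnitude value  */
    float tiny = 1.0f;                   /* small-magnitude value  */

    float left  = (big + tiny) - big;    /* tiny is lost in the rounded sum */
    float right = (big - big) + tiny;    /* tiny survives                   */

    printf("(big + tiny) - big = %g\n", left);    /* prints 0 */
    printf("(big - big) + tiny = %g\n", right);   /* prints 1 */
    return 0;
}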

In addition to compatibility mode, you might try recompiling with 
different levels of optimization.  Some compilers will change the order 
of arithmetic operations in "unsafe" ways for certain (usually higher) 
levels of optimization.  In this context, "unsafe" means the compiler 
changes the order of operations in such a way that roundoff error or 
cumulative loss of precision may occur or may be worse than without 
reordering.  This is not just a PL/I issue; some Fortran compilers do it 
as well.

Also, the S360/S370 binary floating point implementation did not have 
the same binary precision for all floating point numbers.  This is a 
hardware issue and would be the same for all supported languages. 
Basically, the S360/S370 floating point representation used a 
hexadecimal base (base-16) representation for floating point rather than 
a binary base.  This is why IBM 360/370 PL/I floating point declarations 
usually declare variables as having a precision of 21 (single) or 53 
(double) bits.

S360 single precision floating point numbers are stored in 32-bit words 
with a leading sign bit followed by a 7-bit biased base-16 exponent. 
The low-order 24 bits contain the normalized hexadecimal mantissa (in 6 
4-bit hexadecimal digits).  Thus, the mantissa value is adjusted so the 
high-order hexadecimal digit is non-zero; i.e., in the range 1 - 15 (or, 
in binary, in the range 0001 - 1111).  Note this means that when the 
high-order hexadecimal digit is 1, the 3 high-order binary digits must 
be zero so the representation of the value would contain only 21 
significant binary digits.

This is reflected in source-level declarations.  Although 
single-precision floating point on S360/S370 systems contain 6 
hexadecimal digits for a total of 24 bits, the hexadecimal 
representation cannot guarantee more than 21 of those 24 bits will be 
significant.  Since PL/I's FLOAT BINARY precision is specified in binary 
digits, declaring a variable FLOAT BINARY(24) on an S360/S370 system 
would require the compiler to allocate two words (64 bits) for the value 
because it would need to increase the precision in order to guarantee 
the full 24 significant bits for all represented values.

Similarly, although double-precision S360/S370 floating point uses 14 
hexadecimal digits (or 56 bits), it can only guarantee that 53 of those 
bits will be significant for all represented values.

During computations, this means that the _binary_ precision of the 
result of any single-precision arithmetic operation will vary between 21 
and 24 significant bits, depending on the value.  For long or 
complicated computations this will produce roundoff errors and 
propagated loss of precision that may be different from those produced 
either by a strict 21-bit binary precision or by a strict 24-bit binary 
precision representation.
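
To see the 21-to-24-bit effect numerically, here is a rough C sketch (again
my own illustration, assuming the S/360 single-precision layout described
above) that normalizes a value as a 6-hex-digit fraction and counts how many
of its 24 mantissa bits are actually significant:

#include <stdio.h>
#include <math.h>

/* Count how many of the 24 mantissa bits of an S/360-style single-precision
   hex float are significant for a given value.  The mantissa is 6 hex digits,
   normalized so the leading hex digit is nonzero; each leading zero bit in
   that first hex digit is a bit of precision lost.                          */
static int s360_significant_bits(double x)
{
    if (x == 0.0) return 0;
    x = fabs(x);

    /* Normalize to a base-16 fraction f in [1/16, 1): x = f * 16**e */
    int e = (int)ceil(log2(x) / 4.0);
    double f = x / pow(16.0, e);
    while (f >= 1.0)      { f /= 16.0; }
    while (f <  1.0/16.0) { f *= 16.0; }

    int first_digit = (int)(f * 16.0);   /* in 1 .. 15                     */
    int lost = 0;                        /* leading zero bits in that digit */
    for (int bit = 8; bit > first_digit; bit >>= 1) lost++;
    return 24 - lost;
}

int main(void)
{
    double samples[] = { 1.0, 0.5, 0.1, 15.0, 100.0, 3.14159 };
    for (int i = 0; i < (int)(sizeof samples / sizeof samples[0]); i++)
        printf("%10g -> %d significant bits\n",
               samples[i], s360_significant_bits(samples[i]));
    return 0;
}

For example 0.5 keeps all 24 bits, while 1.0 and 0.1 keep only 21.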

The IEEE floating point standard prescribes a binary representation so 
some long or involved computations will produce different results using 
IEEE representation than those produced using S360/S370 representation. 
  This would be true for any legacy S360/S370 application in any language.

> So, I try to find another way. I see Liant compiler and contact with
> them, but have one response only during to week. Have somebody
> experience with that compiler?

I've used the Liant compiler, but not for many years.  The information 
you've provided seems insufficient to determine whether switching to 
another compiler would change the results in the way you need.

> At the same time, I knew about Open VMS PL/1 compiler for Alpha
> computers. I think now I could buy such computer for a modest price,
> but what about compiler? Going to Kednos site and seen Hobbyist
> license, I do not understand how I can get it. Also, can somebody to
> tell about features of that compiler? Or, may be, such way isn't good?

Before switching to any other compiler, it might be a good idea to run a 
test.  You indicate you have a smaller program that exhibits the same 
(or similar) erroneous behavior.  I'd suggest testing any compiler using 
that smaller program before making the change.


Bob Lidral
lidral  at  alum  dot  mit  dot  edu
0
12/10/2005 1:35:20 PM
On 10 Dec 2005 02:21:04 -0800, MZN <MikeZmn@gmail.com> wrote:

> I have an old program running on S370 a lot of years ago. It have deal
> with complex numerical computations and its volume about 10000 lines.
>
> I have IBM Visual Age PL/1 compiler, but it gives wrong results. Using
> compatibility mode makes situation slightly better only. Step by step
> debug with hand checking shows that I need to do some senseless
> exchanging lines of code, or write code by another manner (sometimes).
> So, I'm almost completely sure, that that compiler has errors. It's can
> be proved by my another (smaller) program, that had the same behavior
> until I rewrote it to Fortran. I cannot do the same here, due to size
> and semantic complexity. Some features couldn't be reproduced in
> Fortran.
>
> So, I try to find another way. I see Liant compiler and contact with
> them, but have one response only during to week. Have somebody
> experience with that compiler?

I licensed it to them in the mid 80's

>
> At the same time, I knew about Open VMS PL/1 compiler for Alpha
> computers. I think now I could buy such computer for a modest price,
> but what about compiler? Going to Kednos site and seen Hobbyist
> license, I do not understand how I can get it. Also, can somebody to
> tell about features of that compiler? Or, may be, such way isn't good?
>
It is true that you can buy a reasonably powerful Alpha workstation for a
modest price.  I have several from Island Computers  http://www.islandco.com/
or ebay.

If this is not for commercial use then you would qualify for a hobbyist
license.  To obtain such a license you must first have the underlying
hobbyist license for VMS and all the layered products.  On our hobbyist's
page click on the shark logo and you will be directed to the appropriate
page, which contains all the instructions.  HP also has a "testdrive" on
their web site, but it didn't seem to be operational when I checked; I'll
find out and post when I have the answer.

Alternatively, if the code is fairly pure PL/I, not requiring much more
effort than compile, link and run, I could run it for you here.  If it is an
issue of precision, a VAX or a VAX emulator might be better, as it supports
greater precision.  From the online HELP file:

PLI

   Attributes

     FLOAT

        Data type attribute.

        Defines a floating-point arithmetic variable.

        FLOAT BINARY(p)   max=113  default=24  (OpenVMS VAX)
        FLOAT BINARY(p)   max=53   default=24  (OpenVMS AXP)
        FLOAT DECIMAL(p)  max=34   default=7   (OpenVMS VAX)
        FLOAT DECIMAL(p)  max=15   default=7   (OpenVMS AXP)

        If FLOAT is specified without any other data type  attributes,  the
        variable  has  the  attributes  FLOAT  BINARY(24).   Floating-point
        binary data with precision in the range of 54-113 may be  supported
        in software, depending on the processor type and hardware options.

You could also use scaled fixed decimal

     FIXED

        Data type attribute.

        Defines a fixed-point arithmetic variable with  precision  (p)  and
        scale-factor (q).

         FIXED BINARY  (p) max=31
                       (q) -31 to 31
                       default=31,0
         FIXED DECIMAL (p) max=31
                       (q) 0 to 31
                       default=10,0

        If FIXED is specified without any other data type  attributes,  the
        variable has the attributes FIXED BINARY(31,0).



Tom





0
tom284 (1839)
12/10/2005 2:57:59 PM
MZN wrote:
> I have an old program running on S370 a lot of years ago. It have deal
> with complex numerical computations and its volume about 10000 lines.
> 
> I have IBM Visual Age PL/1 compiler, but it gives wrong results. Using
> compatibility mode makes situation slightly better only. Step by step
> debug with hand checking shows that I need to do some senseless
> exchanging lines of code, or write code by another manner (sometimes).
> So, I'm almost completely sure, that that compiler has errors. It's can
> be proved by my another (smaller) program, that had the same behavior
> until I rewrote it to Fortran. I cannot do the same here, due to size
> and semantic complexity. Some features couldn't be reproduced in
> Fortran.
> 
> So, I try to find another way. I see Liant compiler and contact with
> them, but have one response only during to week. Have somebody
> experience with that compiler?
> 
> At the same time, I knew about Open VMS PL/1 compiler for Alpha
> computers. I think now I could buy such computer for a modest price,
> but what about compiler? Going to Kednos site and seen Hobbyist
> license, I do not understand how I can get it. Also, can somebody to
> tell about features of that compiler? Or, may be, such way isn't good?
> 
I would help to know [unless I just missed it]:
	What hardware are you using?
	What compiler?  Version, release, etc. are you using?
Carl
0
12/10/2005 7:07:13 PM
First of all, thank you very much, Bob!
Bob Lidral wrote:

> MZN wrote:
> > I have an old program running on S370 a lot of years ago. It have deal
> > with complex numerical computations and its volume about 10000 lines.
> >
> > I have IBM Visual Age PL/1 compiler, but it gives wrong results. Using
> > compatibility mode makes situation slightly better only. Step by step
> > debug with hand checking shows that I need to do some senseless
> > exchanging lines of code, or write code by another manner (sometimes).
> > So, I'm almost completely sure, that that compiler has errors. It's can
> > be proved by my another (smaller) program, that had the same behavior
> > until I rewrote it to Fortran. I cannot do the same here, due to size
> > and semantic complexity. Some features couldn't be reproduced in
> > Fortran.
>
> It's not clear exactly what your problem is, but your description raises
> some questions.  It's also possible the compiler does not have errors.
>
> First, what do you mean by "wrong results"?  Are you comparing them
> against previous results obtained a lot of years ago on an S370?  Or are
> they theoretically wrong?  That is, are they just different from those
> produced by old runs or are they different from expected results using
> mathematical analysis?

They are wrong in both senses. I have the old results, unfortunately only as
hard copy (computer printout), so I can compare.
>
> Second, when you say "complex numerical computations", do you mean using
> the FLOAT BINARY data type?
I declared the variables as FLOAT(16).
>
> If changing the order of arithmetic operations changes the results, the
> difference may be due to roundoff error or other known computational
> issues related to limitations on precision and representation of real
> numbers.  In such cases, translating to Fortran may or may not help
> because Fortran has mostly the same operator precedences as PL/I so it
> evaluates expressions in mostly the same order.
Yes, I suspect that this is due to roundoff error, but I don't have enough
knowledge to prevent it, and it would be a big job too. Fortran helped me
with a very similar, but smaller, program.
>
> In addition to compatibility mode, you might try recompiling with
> different levels of optimization.  Some compilers will change the order
> of arithmetic operations in "unsafe" ways for certain (usually higher)
> levels of optimization.  In this context, "unsafe" means the compiler
> changes the order of operations in such a way that roundoff error or
> cumulative loss of precision may occur or may be worse than without
> reordering.  This is not just a PL/I issue; some Fortran compilers do it
> as well.
I used all optimization modes.
>
> Also, the S360/S370 binary floating point implementation did not have
> the same binary precision for all floating point numbers.  This is a
> hardware issue and would be the same for all supported languages.
> Basically, the S360/S370 floating point representation used a
> hexadecimal base (base-16) representation for floating point rather than
> a binary base.  This is why IBM 360/370 PL/I floating point declarations
> usually declare variables as having a precision of 21 (single) or 53
> (double) bits.
I understand that, but all the algorithms are very clear, though complex
too.
>
> S360 single precision floating point numbers are stored in 32-bit words
> with a leading sign bit followed by a 7-bit biased base-16 exponent.
> The low-order 24 bits contain the normalized hexadecimal mantissa (in 6
> 4-bit hexadecimal digits).  Thus, the mantissa value is adjusted so the
> high-order hexadecimal digit is non-zero; i.e., in the range 1 - 15 (or,
> in binary, in the range 0001 - 1111).  Note this means that when the
> high-order hexadecimal digit is 1, the 3 high-order binary digits must
> be zero so the representation of the value would contain only 21
> significant binary digits.
>
> This is reflected in source-level declarations.  Although
> single-precision floating point on S360/S370 systems contain 6
> hexadecimal digits for a total of 24 bits, the hexadecimal
> representation cannot guarantee more than 21 of those 24 bits will be
> significant.  Since PL/I's FLOAT BINARY precision is specified in binary
> digits, declaring a variable FLOAT BINARY(24) on an S360/S370 system
> would require the compiler to allocate two words (64 bits) for the value
> because it would need to increase the precision in order to guarantee
> the full 24 significant bits for all represented values.
As I wrote before, I use FLOAT(16)
>
> Similarly, although double-precision S360/S370 floating point uses 14
> hexadecimal digits (or 56 bits), it can only guarantee that 53 of those
> bits will be significant for all represented values.
>
> During computations, this means that the _binary_ precision of the
> result of any single-precision arithmetic operation will vary between 21
> and 24 significant bits, depending on the value.  For long or
> complicated computations this will produce roundoff errors and
> propagated loss of precision that may be different from those produced
> either by a strict 21-bit binary precision or by a strict 24-bit binary
> precision representation.
>
> The IEEE floating point standard prescribes a binary representation so
> some long or involved computations will produce different results using
> IEEE representation than those produced using S360/S370 representation.
>   This would be true for any legacy S360/S370 application in any language.
>
> > So, I try to find another way. I see Liant compiler and contact with
> > them, but have one response only during to week. Have somebody
> > experience with that compiler?
>
> I've used the Liant compiler, but not for many years.  The information
> you've provided seems insufficient to determine whether switching to
> another compiler would change the results in the way you need.
>
> > At the same time, I knew about Open VMS PL/1 compiler for Alpha
> > computers. I think now I could buy such computer for a modest price,
> > but what about compiler? Going to Kednos site and seen Hobbyist
> > license, I do not understand how I can get it. Also, can somebody to
> > tell about features of that compiler? Or, may be, such way isn't good?
>
> Before switching to any other compiler, it might be a good idea to run a
> test.  You indicate you have a smaller program that exhibits the same
> (or similar) erroneous behavior.  I'd suggest testing any compiler using
> that smaller program before making the change.
Unfortunately, I now have a dual-Opteron machine with 64-bit Windows, and
the IBM PL/I compiler won't install on it. I'll try to install 32-bit
Windows XP (as a virtual machine) and do that, but it looks like a lot of
work.
>
>
> Bob Lidral
> lidral  at  alum  dot  mit  dot  edu

mikezmn (64)
12/10/2005 9:48:35 PM
Thank you, Tom!

1. I'll look into an Alpha machine.
2. If it is not a big problem for you, I'll prepare a zip archive with the
code and data set, but I only have the test results in hard-copy form, or as
graphs.
So, if you give me your e-mail address, I would be very grateful.

mikezmn (64)
12/10/2005 9:54:02 PM
The hardware is a dual Pentium III machine with 32-bit Windows XP.
The software is the IBM VisualAge PL/I compiler 2.1.7 with updates up to 2.1.13.

mikezmn (64)
12/10/2005 9:56:59 PM
Subject: What can I do with old PL/1 code?
From: "MZN" <MikeZmn@gmail.com>, http://groups.google.com
Date: 10 Dec 2005 02:21:04 -0800
>I have an old program running on S370 a lot of years ago. It have deal
>with complex numerical computations and its volume about 10000 lines.
>
>I have IBM Visual Age PL/1 compiler, but it gives wrong results. Using
>compatibility mode makes situation slightly better only. Step by step
>debug with hand checking shows that I need to do some senseless
>exchanging lines of code, or write code by another manner (sometimes).
>So, I'm almost completely sure, that that compiler has errors. It's can
>be proved by my another (smaller) program, that had the same behavior
>until I rewrote it to Fortran. I cannot do the same here, due to size
>and semantic complexity. Some features couldn't be reproduced in
>Fortran.
..
The first thing that you need to do is to enable
subscript bounds checking, and some other checks.
   Please add the following line before your main procedure statement:
(SUBRG, SIZE, STRINGRANGE, STRINGSIZE):

   There are differences between the IBM S/370 and the
PC in the way in which floating-point values are stored.
You may have an unstable algorithm.
   Have you tried using FLOAT (18), which will use the
extra precision of the PC?

   There are differences in the way in which MULTIPLY,
DIVIDE, ADD, and SUBTRACT are handled with fixed-point binary.
Are you using any of these functions?

Do you have IBM or ANSI rules specified?  ( %PROCESS option.)

   The VA compiler is a very robust compiler, and it is
unlikely that your problem is caused by the compiler;
nevertheless, a compiler error cannot be dismissed.

Other things you could try:

   Do you specify OPTIONS (REORDER) in your procedures?
Does your program give different results if compiled
with REORDER on or off (with optimization on)?

>So, I try to find another way. I see Liant compiler and contact with
>them, but have one response only during to week. Have somebody
>experience with that compiler?

>At the same time, I knew about Open VMS PL/1 compiler for Alpha
>computers. I think now I could buy such computer for a modest price,
>but what about compiler? Going to Kednos site and seen Hobbyist
>license, I do not understand how I can get it. Also, can somebody to
>tell about features of that compiler? Or, may be, such way isn't good?


robin_v (2737)
12/10/2005 11:46:42 PM
Thank you, Robin.

1. I used (SUBRG, SIZE, STRINGRANGE, STRINGSIZE): earlier and now - no luck.
2. I tried FLOAT(18) - no luck. The algorithm may possibly be unstable, but
a. I used some preventive measures;
b. it worked well on the S/370.
3. I do not use MULTIPLY, DIVIDE, ADD, or SUBTRACT.
4. I tried both IBM and ANSI rules.
5. OPTIONS(REORDER) does not affect the results.

mikezmn (64)
12/11/2005 12:39:32 AM
I have ported numerical computations in PL/I from S370 to NT4/2K/XP/2K3, 
with a volume of some 150,000 LOC, and had far fewer problems than you.

Here are the major gotcha's:
1) The Intel hardware has a stupidity known as imprecise interrupts that can 
wreak havoc. Compile absolutely everything with NOIMPRECISE and only permit 
IMPRECISE (the default) if you are absolutely sure that your code will never 
ever raise a floating point exception. If you are even slightly unsure about 
the remote possibility of a floating point exception, use NOIMPRECISE. 
Ignore the Programming Guide's recommendation.

2) The behaviour of the normal return from an ON block on the old PL/I 
compilers is to resume after an OVERFLOW and ZERODIVIDE: the newer compilers 
raise ERROR. You must alter your logic to handle this.

3) IEEE float has a shorter mantissa and longer exponent. This can affect 
convergence criteria, so that algorithms that converged on /370 don't when 
using IEEE float. You will need to review the convergence criteria of your 
algorithms to address this issue (see the sketch after this list). Note that 
a Fortran rewrite won't help you here, as the problem is not a language 
dependent one. [This was the worst problem I encountered].

4) When converted to a string, a float bin (53) ieee becomes a char (24), 
whereas a float bin (53) hexadec is a char(22). It is essential that you 
review all conversion of floats to chars to ensure you don't miss anything. 
This does not normally affect numerical algorithms.
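
The convergence point in (3) deserves a concrete picture. Here is a tiny C
sketch (illustration only; the 1e-17 tolerance is an invented example, not
taken from anyone's code) of why an absolute tolerance carried over from
/370 arithmetic can be unsatisfiable under IEEE double:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    double x   = sqrt(2.0);
    double ulp = nextafter(x, 2.0) - x;   /* spacing of doubles near sqrt(2) */

    printf("DBL_EPSILON      = %g\n", DBL_EPSILON);    /* about 2.2e-16 */
    printf("ulp near sqrt(2) = %g\n", ulp);            /* about 2.2e-16 */
    printf("a tolerance of 1e-17 is %s one ulp here\n",
           (1.0e-17 < ulp) ? "smaller than" : "at least");

    /* Consequence: a loop test such as  ABS(x - prev) < 1e-17  can only pass
       when x == prev exactly; an iteration that ends up oscillating between
       two neighbouring representable values never terminates on that test.
       A relative test, e.g.  ABS(x - prev) <= 4*DBL_EPSILON*ABS(x),  does
       not have that problem.                                               */
    return 0;
}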

There used to be a problem with the floating point condition enablement when 
a PL/I routine was called from non-PLI code. This problem was resolved a 
while back, so you won't have a problem with 2.1.13.

The use of Liant or other compiler will only exacerbate your problems, as 
these compilers are far less compatible.

If you can reproduce any of the problems you are encountering, you should 
open a PMR. The lab are normally very fast in resolving any problems they 
can reproduce.

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134210064.904867.222520@z14g2000cwz.googlegroups.com...
>I have an old program running on S370 a lot of years ago. It have deal
> with complex numerical computations and its volume about 10000 lines.
>
> I have IBM Visual Age PL/1 compiler, but it gives wrong results. Using
> compatibility mode makes situation slightly better only. Step by step
> debug with hand checking shows that I need to do some senseless
> exchanging lines of code, or write code by another manner (sometimes).
> So, I'm almost completely sure, that that compiler has errors. It's can
> be proved by my another (smaller) program, that had the same behavior
> until I rewrote it to Fortran. I cannot do the same here, due to size
> and semantic complexity. Some features couldn't be reproduced in
> Fortran.
>
> So, I try to find another way. I see Liant compiler and contact with
> them, but have one response only during to week. Have somebody
> experience with that compiler?
>
> At the same time, I knew about Open VMS PL/1 compiler for Alpha
> computers. I think now I could buy such computer for a modest price,
> but what about compiler? Going to Kednos site and seen Hobbyist
> license, I do not understand how I can get it. Also, can somebody to
> tell about features of that compiler? Or, may be, such way isn't good?
> 


12/11/2005 8:49:27 AM
Thanks, Mark.
1. I played with IMPRECISE/NOIMPRECISE, and do not remember any differences.
2. OVERFLOW and ZERODIVIDE: I'll check, but I do not think that I use them.
3. Convergence: yes, I agree, but the program rewritten in Fortran (which is
a piece of the bigger one) has a lot of algorithms where convergence is very
important. Robin wrote about algorithm instability. The big program probably
has one such spot (Gram-Schmidt orthogonalization, without the Bjork
modification). I'll check that too.
4. I do not use number-to-string conversion.

It is news to me that the Liant compiler is less compatible than IBM's;
thanks.

About an APAR: the code is very big; moreover, I have no old results in
electronic form.
Below I include two variants of the code: the first is the old one (and
gives wrong results on the PC), and the second one is corrected (it gives
15 good digits in comparison with the old printed results). As you can see,
there is no problem with convergence.
Lowercase letters mark my insertions, made as the IBM VA PL/I compiler
requested, and all the differences from the old code.
This is an internal procedure.
Old code begins -------------------------------------------------
 GM: PROC(AL, Q, G);
    DCL (AL, TIAL, SIPI, DV, AM, V5) FLOAT(16),
        (Q, TI) BIN FIXED(31),
        (G(*,*), GST) CPLX float(16);

    DO SI=POL TO Q;
       V5=SQRT(AL*ESG(SI));
       SIPI=SI*PI;
       J=(-1B)**SI;

       DO TI=0B TO PSCR;
          TIAL=TI*AL;
          DV, DIV=SIPI**2-TIAL**2;
          AM=MAX(SIPI, TIAL);
          IF AM > 1 THEN DV=DV/AM;
          IF TI=0B & SI=0B THEN GST=V5;                     /* (C) */

                           ELSE DO;
             IF ABS(DV) < 1.0d-10 THEN DO;
    /* (B) */   GST=V5/2;
                IF POL=1B THEN GST=+IU*GST;
                                     END;

                                ELSE DO;
 /* (A) */      GST=(J*EXP(IU*TIAL)-1)/DIV;
                IF POL=0B THEN GST=+IU*V5*TIAL*GST;
                          ELSE GST=-V5*SIPI*GST;
                                     END;

                                END;

          G(SI,TI)=GST;
       END;    /* TI */

    END;       /* SI */

 END GM;
Old code ended ----------------------------------------------
New code begins---------------------------------------------
 GM: PROC(AL, Q, G);
    DCL (AL, TIAL, SIPI, DV, AM, V5) FLOAT(16),
        (Q, TI) BIN FIXED(15),
        (G(*,*), GST, gst1) CPLX float(16);

    DO SI=POL TO Q;
       V5=SQRT(AL*ESG(SI));
       SIPI=SI*PI;
       J=(-1B)**SI;

       DO TI=0B TO PSCR;
          TIAL=TI*AL;
          DV, DIV=SIPI**2-TIAL**2;
          AM=MAX(SIPI, TIAL);
          IF AM > 1.0d0 THEN DV=DV/AM;
          IF TI=0B & SI=0B THEN GST=V5;      /* (C) */

                           ELSE DO;
          IF ABS(DV) < 1.0d-10 THEN DO;
 /* (B) */   GST=V5*0.5d0;
             IF POL=1B THEN GST=+IU*GST;
                                    END;

                             ELSE DO;
 /* (A) */   GST1=(J*EXP(IU*TIAL)-1.0d0)/DIV;
             IF POL=0B THEN GST=+IU*V5*TIAL*GST1;
                       ELSE GST=-V5*SIPI*GST;
                                  END;

                                END;

          G(SI,TI)=GST;
       END;   /* TI */

    END;       /* SI */

 END GM;
New code ended ---------------------------------------

mikezmn (64)
12/11/2005 10:06:31 AM
MZN wrote:
> [...]
> It's a new for me, that Liant compiler less compatible, than IBM,
> thanks.
> [...]
That shouldn't be a surprise.  The old S360/S370 PL/I compiler was
almost certainly an IBM product; your VA compiler is also an IBM
product.  I would expect a high degree of compatibility between the two.

OTOH, the Liant compiler is not an IBM product; I would not expect it to
be quite as compatible with the IBM S360 compiler as the IBM VA compiler is.

One advantage to the Liant compilers (Liant produces compilers for more
than one language) was that they were compatible across all supported
platforms.  That is, a Liant Fortran compiler produced code that ran in
the same way and produced the same results on any platform for which it
had been implemented.  The same was true of all of their compilers:
BASIC, C, C++, COBOL, Fortran, Pascal, and PL/I.  They were not always
completely compatible with compilers from other vendors but they were
generally compatible with published language standards.


Caveat: these comments are based on my own personal memories and do not
reflect the official position of any other person or company.


Bob Lidral
lidral  at  alum  dot  mit  dot  edu


12/11/2005 11:12:36 AM
Mark Yudkin wrote:
> 
> There used to be a problem with the floating point condition enablement when 
> a PL/I routine was called from non-PLI code. This problem was resolved a 
> while back, so you won't have a problem with 2.1.13.

This is fun!  I just got bit by this one.  The FPU control word that 
enables/disables interrupts and controls rounding isn't saved and 
restored by default on calls.  I believe this is true for windoze as 
well as OS/2.  Unfortunately, different languages/compilers have their 
own "preferred" setting for the FPU CW, so interlanguage or even OS 
calls may change the setting.  The PL/I language spec specifies 
truncation when converting float to fixed, other languages may use 
"round to nearest", etc.  This can make a significant difference.

It sounds like IBM has this one under control, though.
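
For anyone who wants to see the effect, here is a short C illustration (not
PL/I, and nothing to do with IBM's actual runtime code; it just pokes the
rounding mode through the standard <fenv.h> interface) of how the mode left
in the FPU state changes what a float-to-integer conversion produces:

#include <stdio.h>
#include <fenv.h>
#include <math.h>

#pragma STDC FENV_ACCESS ON

/* The rounding mode held in the FPU control/status state changes what
   "convert float to fixed" means.  PL/I expects truncation; a caller in
   another language may have left "round to nearest" in effect.          */
int main(void)
{
    double v = 2.7;

    fesetround(FE_TOWARDZERO);            /* truncation, as PL/I expects */
    printf("toward zero: lrint(%.1f) = %ld\n", v, lrint(v));   /* 2 */

    fesetround(FE_TONEAREST);             /* typical default elsewhere   */
    printf("to nearest : lrint(%.1f) = %ld\n", v, lrint(v));   /* 3 */

    return 0;
}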

Peter_Flass (956)
12/11/2005 12:41:11 PM
Peter Flass wrote:
> The PL/I language spec specifies
> truncation when converting float to fixed, other languages may use
> "round to nearest", etc.  This can make a significant difference.
Is that true for both the S/370 and the PC?

Peter, could you tell me when we can expect your compiler? Even at the beta
stage. I have OS/2 on one of my computers...

To all: Thank you, you have given me a lot to think about, but:
1. Remember, rewriting in Fortran helps at least sometimes.
2. Take a look at the pieces of code above. It looks very much like a
compiler (not floating-point) problem.

Mike

mikezmn (64)
12/11/2005 1:16:42 PM
One of the things you resisted was running the program without any 
optimization.  Even though it may take a lot longer to run, when weird 
things happen I always recommend making at least one run without any 
optimizations turned on.  This at least makes sure that your code and 
the compiler's optimizations are compatible.  There are too many ways to 
accidentally fool a compiler's optimizer, so at least one run without 
optimization should be done.
multicsfan (63)
12/11/2005 1:41:37 PM
To multicsfan:

Probably I wasn't clear. No, I began working without any
optimizations. I tried all the optimization modes only after that...

mikezmn (64)
12/11/2005 3:19:35 PM
My *goal* is an alpha by the end of second quarter next year.  It 
depends on how many unexpected things [like this] come up between now 
and then.

See http://home.nycap.rr.com/pflass/status.htm - updated sporadically.

Obviously, since I'm currently cross-compiling, the code won't be 
available until I've been able to completely compile myself.

MZN wrote:
> Peter Flass wrote:
>  The PL/I language spec specifies
> 
>>truncation when converting float to fixed, other languages may use
>>"round to nearest", etc.  This can make a significant difference.
> 
> Does that true for both S370 and PC?
> 
> Peter, could you tell me when we can expect your compiler? Even at beta
> stage. I have OS/2 on one my computer...
> 
> 2All: Thank you, you gave me a lot of subjects for thinking, but
> 1. Remember, rewriting to Fortran helps at least sometimes.
> 2. Take a look on above pieces of code. It's very similar to compiler
> (not floating point)problem.
> 
> Mike
> 

Peter_Flass (956)
12/11/2005 9:51:25 PM
On 10 Dec 2005 13:54:02 -0800, MZN <MikeZmn@gmail.com> wrote:

> Thank you, Tom!
>
> 1. I'll be look about Alpha machine.
> 2. If it is not a big problem to you, I'll prepare zip archive with
> code and data set, but I have test results in hard copy form, or as
> graphs.
> So, if you give me e-mail, I'll be very appreciated.
>
tom at kednos dot com
tom284 (1839)
12/12/2005 2:18:53 AM
"MZN" <MikeZmn@gmail.com> wrote in message
news:1134295591.880363.83490@f14g2000cwb.googlegroups.com...
> Mark thanks,
> 1. I played with IMPRECISE/NOIMPRECISE, and do not remember differences
> 2. OVERFLOW and ZERODIVIDE. I'll check it, but I do not think, that I
> use it.
> 3. Convergence. Yes, I'm agree, but rewritten to Fortran program (that
> is the piece of bigger)
>     have a lot of algorithms, where convergence is very important.
> Robin wrote about algorothm unstability.
>     Big program probably has one (Gram-Schmidt orthogonalization,
> without Bjork modification). I'll check it too.
> 4. I do not use number to string convesion.
>
> It's a new for me, that Liant compiler less compatible, than IBM,
> thanks.
>
> About APAR. The code is very big, moreover, I have no old results in
> electronic form.
> Below I apply two variant of code first is old (and gives wrong results
> on PC), and second one is corrected
> (gives good 15 signs in comparison with old printed results). As you
> can see, there is no problem with convergence.
> Small letters mean my insertions as IBM VA PLI compiler requested, and
> all differenses with old code.
> That is internal procedure.
> Old code begins -------------------------------------------------
>  GM: PROC(AL, Q, G);
>     DCL (AL, TIAL, SIPI, DV, AM, V5) FLOAT(16),
>         (Q, TI) BIN FIXED(31),
>         (G(*,*), GST) CPLX float(16);
>
>     DO SI=POL TO Q;
>        V5=SQRT(AL*ESG(SI));
>        SIPI=SI*PI;
>        J=(-1B)**SI;
>
>        DO TI=0B TO PSCR;
>           TIAL=TI*AL;
>           DV, DIV=SIPI**2-TIAL**2;
>           AM=MAX(SIPI, TIAL);
>           IF AM > 1 THEN DV=DV/AM;
>           IF TI=0B & SI=0B THEN GST=V5;                     /* (C) */
>
>                            ELSE DO;
>              IF ABS(DV) < 1.0d-10 THEN DO;

1q-10 is required for 18-digit precision.

>     /* (B) */   GST=V5/2;
>                 IF POL=1B THEN GST=+IU*GST;
>                                      END;
>
>                                 ELSE DO;
>  /* (A) */      GST=(J*EXP(IU*TIAL)-1)/DIV;
>                 IF POL=0B THEN GST=+IU*V5*TIAL*GST;
>                           ELSE GST=-V5*SIPI*GST;
>                                      END;
>
>                                 END;
>
>           G(SI,TI)=GST;
>        END;    /* TI */
>
>     END;       /* SI */
>
>  END GM;
> Old code ended ----------------------------------------------

Many of the variables are not declared in these procedures,
so we don't know what their types are.

Some variables, obviously local, are not declared.
These need to be declared.

Why, for example, is DIV not declared?  It is assigned the
same value as DV [which is FLOAT(16)].  Then it is
used as a divisor (so the result has an effective accuracy of
single precision if it is not declared anywhere).

A good idea is to force all variables to be declared (there is a compiler
option for that), and to pass into the subroutine, through the parameter
list, all variables that take their values from an external procedure.

Another suggestion: have you used the INITFILL compiler option?
This can be used to set local variables to any desired value on
procedure entry.
I use xBB, because it can pick up uninitialized variables,
particularly decimal (gives a data interrupt), but also helps integer
and floating-point, and string.
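
(For what the xBB trick buys you, the idea can be mimicked in C; the snippet
below is only an illustration of the principle, not of how the compiler
option is implemented.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rough analogue of INITFILL('BB'): pre-fill fresh storage with a
   recognizable byte pattern so that reading an uninitialized variable
   yields an implausible value instead of a quietly "lucky" zero.       */
int main(void)
{
    size_t n = 4;
    double *work = malloc(n * sizeof *work);
    if (work == NULL) return 1;

    memset(work, 0xBB, n * sizeof *work);  /* poison the storage */

    work[0] = 1.0;                         /* work[2] is "forgotten" */
    work[1] = 2.0;
    work[3] = 4.0;

    for (size_t i = 0; i < n; i++)
        printf("work[%zu] = %g\n", i, work[i]);  /* work[2] prints as junk */

    free(work);
    return 0;
}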

A long, long time ago, there was a bug in complex arithmetic.
The bug was fixed, but might have become unfixed.
It might pay to check the generated code for complex multiply and divide
(there is a %PROCESS option to show the assembly code interlisted
with the source listing).

> New code begins---------------------------------------------
>  GM: PROC(AL, Q, G);
>     DCL (AL, TIAL, SIPI, DV, AM, V5) FLOAT(16),
>         (Q, TI) BIN FIXED(15),
>         (G(*,*), GST, gst1) CPLX float(16);
>
>     DO SI=POL TO Q;
>        V5=SQRT(AL*ESG(SI));
>        SIPI=SI*PI;
>        J=(-1B)**SI;
>
>        DO TI=0B TO PSCR;
>           TIAL=TI*AL;
>           DV, DIV=SIPI**2-TIAL**2;
>           AM=MAX(SIPI, TIAL);
>           IF AM > 1.0d0 THEN DV=DV/AM;
>           IF TI=0B & SI=0B THEN GST=V5;      /* (C) */
>
>                            ELSE DO;
>           IF ABS(DV) < 1.0d-10 THEN DO;

1q-10 is required for 18-digit precision.

>  /* (B) */   GST=V5*0.5d0;
>              IF POL=1B THEN GST=+IU*GST;
>                                     END;
>
>                              ELSE DO;
>  /* (A) */   GST1=(J*EXP(IU*TIAL)-1.0d0)/DIV;
>              IF POL=0B THEN GST=+IU*V5*TIAL*GST1;
>                        ELSE GST=-V5*SIPI*GST;

GST in this ELSE clause does not appear to be initialized
if this is the first path taken.

>                                   END;
>
>                                 END;
>
>           G(SI,TI)=GST;
>        END;   /* TI */
>
>     END;       /* SI */
>
>  END GM;
> New code ended ---------------------------------------


robin_v (2737)
12/12/2005 2:38:34 AM
To Robin:

Thank you. Hm, after 22 years things look different!
Step by step:
1. I do not know what 1q-10 means.
2. I agree with you that some clearly local variables should be declared,
so that procedure is written in poor style (earlier I thought otherwise).
But in the containing procedure (it is very big, so I couldn't include it)
all of these are declared. They are: SI, J BIN FIXED(31) INIT(0B),
DIV FLOAT(16) INIT(0.0d0). Again, I now agree that it is not good style.
3. I also agree that using an internal procedure here is not a good idea,
but it worked, and I do not want to rewrite it.
4. About the INITFILL option: I use the default behavior for it. Your idea
about xBB looks very good.
5. What was the bug in complex arithmetic? In which compiler?
6. I agree about GST. That is because I changed the original code but didn't
check the branch where POL^=0. It should be
IF POL=0B THEN GST=+IU*V5*TIAL*GST1;
                  ELSE GST=-V5*SIPI*GST1;

Now I have installed the IBM compiler again, but it behaves strangely.
1. I installed version 2.1.7, then fixpack 2.1.10 (as required by the last
fixpack), and, finally, fixpack 2.1.13. But the splash screen and
Help->Product Information still show version 2.1.10, although the year in
the PLI compiler window is 2005. As I understand it, they simply forgot to
update it.
2. The editor marks correct pieces of code in red:
 BJ(0:N1+N3,0:Q1) INIT(((N1+N3+1)*(Q1+1)) 0) CPLX float(16) CTL;

-----------------RED---------------------------------------
In fairness, though, in earlier versions there were a lot more such cases.
3. Also, I cannot build the project yet, but I think that is my fault.

Mike

mikezmn (64)
12/12/2005 10:52:21 AM
"MZN" <MikeZmn@gmail.com> wrote in message
news:1134384741.175034.163840@z14g2000cwz.googlegroups.com...
> 2Robin
>
> Thank you. Hm, after 22 years things look different!
> Step by step.
> 1. I do not know, what lq-10 means?

"q" in 1q-10 gives 18-digit precision.
1e-10 gives default precision
1d-10 gives doubleprecision
1q-10 gives 18-digit precision.
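
(A loose analogue in C, purely to illustrate that the written form of a
constant fixes the precision it carries; the mapping to the letters above is
only approximate and is my own illustration, not PL/I.)

#include <stdio.h>

int main(void)
{
    float       e = 1e-10f;   /* roughly the default/short form  */
    double      d = 1e-10;    /* roughly the "d" (double) form   */
    long double q = 1e-10L;   /* roughly the "q" (extended) form */

    printf("float       : %.20e\n", (double)e);
    printf("double      : %.20e\n", d);
    printf("long double : %.20Le\n", q);
    return 0;
}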

> 2. I'm agree with you, that some definitely local variables should be
> declared, so that procedure written in not good style (earlier I
> thought else). But in covering procedure (it's very big, so I couldn't
> include it) all of theese are declared. There are: SI, J BIN FIXED(31)
> INIT(0B), DIV FLOAT(16) INIT(0.0d0). Again, now I'm agree, that it is
> not a good style.
> 3. I'm agree too, that using internal procedure here is not a good
> idea, but it worked, and I do not want to rewite it.
> 4. About INITFILL option. I use default behavior for it. Your idea
> about xBB looks very good.
> 5. What was the bug in complex arithmetic? In what compiler?

PL/I for OS/2 (precursor for VA PL/I).  As I said, it was
a long time ago, and it was fixed at the time.
After posting, I recalled that it involved assignment.
Solved by first initializing to zero, prior to initializing
to some non-zero value.

> 6. I'm agree about GST. That is due to that, I change original code,
> but didn't check branch when POL^=0. It should be
> IF POL=0B THEN GST=+IU*V5*TIAL*GST1;
>                   ELSE GST=-V5*SIPI*GST1;
>
> Mike


robin_v (2737)
12/13/2005 1:04:02 AM
MZN wrote:
> Hardware used are dual Pentium III machine with Windows XP 32 bit.
> Software IBM Visual age PL/I compiler 2.1.7 with updates up to 2.1.13.
> 
What compiler options?

We do all our development on the PC [OS/2 version of that compiler] and 
then do one final compile and test on z/OS [IBM mainframe] with OS PL/I. 
  We've used V1.5.1 and V2.3 with equal success.  I suspect most of your 
problems can be resolved with the right options for compatibility.
12/13/2005 4:40:00 AM
CG wrote:

> MZN wrote:
> > Hardware used are dual Pentium III machine with Windows XP 32 bit.
> > Software IBM Visual age PL/I compiler 2.1.7 with updates up to 2.1.13.
> >
> What compiler options?
>
> We do all our development on the PC [OS/2 version of that compiler] and
> then do one final compile and test on z/OS [IBM mainframe] with OS PL/I.
>   We've used V1.5.1 and V2.3 with equal success.  I suspect most of your
> problems can be resolved with the right options for compatibility.

My attempts to revive that program have a three-year history; I have worked
on it from time to time. I do not remember much about the earlier attempts,
but now I use version 2.1.13 of the compiler, installed on Windows XP running
as a VMware virtual machine under 64-bit Windows XP (the compiler does not
install under the 64-bit Windows XP system). The compiler options are as
follows:
LIMITS(EXTNAME(7),FIXEDDEC(15),NAME(31),FIXEDBIN(31))  AGGREGATE
ATTRIBUTES SOURCE INSOURCE OPTIONS NEST XREF COMPILE PROCEED SEMANTIC
GONUMBER SNAP TEST SYSTEM(WINDOWS PENTIUM) DEFAULT(IBM ASSIGNABLE
NOINITFILL NONCONNECTED DESCRIPTOR DESCLIST SHORT(HEXADEC)
DUMMY(ALIGNED) ORDINAL(MIN) BYADDR LINKAGE(OPTLINK) NOINLINE ORDER
NOOVERLAP NONRECURSIVE DESCLOCATOR NULL370 EVENDEC RETURNS(BYADDR)
NORETCODE ASCII IEEE NATIVE NATIVEADDR ALIGNED E(IEEE)) NOIMPRECISE
CHECK(STORAGE) LIST OFFSET PPTRACE

Mike

mikezmn (64)
12/13/2005 7:19:18 AM
robin wrote:

> "MZN" <MikeZmn@gmail.com> wrote in message
> news:1134384741.175034.163840@z14g2000cwz.googlegroups.com...

>>2Robin

>>Thank you. Hm, after 22 years things look different!
>>Step by step.
>>1. I do not know, what lq-10 means?

> "q" in 1q-10 gives 18-digit precision.
> 1e-10 gives default precision
> 1d-10 gives doubleprecision
> 1q-10 gives 18-digit precision.

In PL/I constants have the base, scale, precision, mode, and scale 
factor they are written in.  I don't believe that D and Q are allowed 
for exponents, except in a language from a nearby newsgroup.

-- glen

gah (12851)
12/13/2005 7:28:02 AM
Some additional questions:
1. Does anyone know how to connect line numbers in source files with offsets
in IBM VA PL/I?
I can't do that...
2. Now I use the following compiler options:
LIMITS(EXTNAME(7),FIXEDDEC(15),NAME(31),FIXEDBIN(31))  AGGREGATE
ATTRIBUTES SOURCE INSOURCE OPTIONS NEST XREF GONUMBER SNAP TEST
SYSTEM(WINDOWS PENTIUM) DEFAULT(IBM ASSIGNABLE NOINITFILL NONCONNECTED
DESCRIPTOR DESCLIST SHORT(HEXADEC) DUMMY(ALIGNED) ORDINAL(MIN) BYADDR
RETURNS(BYADDR) LINKAGE(OPTLINK) NOINLINE ORDER NOOVERLAP NONRECURSIVE
NULLSYS EVENDEC NORETCODE ASCII NONNATIVE NONNATIVEADDR ALIGNED
E(HEXADEC) DESCLOCATOR EVENDEC NULL370) LIST MDECK OFFSET PPTRACE
Is that correct in the sense of compatibility with the S/370?
3. To Robin: setting the INITFILL option immediately leads to a lot of
errors during the input-data check, so I am leaving it at the default (NO);
I will turn it back on when I have good results.

mikezmn (64)
12/13/2005 9:18:06 AM
I see that compiler version 2.1.13 behaves differently than earlier versions.
It gives (I never saw this before):

Cylarr.pli(92:2) : IBM1221I W Statement uses 65682 bytes for
temporaries.
NMAKE :  fatal error U1077:  'pli.exe' : return code '4'

That means:
IBM1221W Statement uses count bytes for temporaries. Explanation: This
message is produced if a statement uses more bytes for temporaries than
allowed by the STORAGE compiler option.

But I cannot find the place in the GUI where I can specify the STORAGE
option. Probably it is possible through MAKE, but I do not know how.

Now I have corrected the options in the following manner:
LIMITS(EXTNAME(7),FIXEDDEC(15),NAME(31),FIXEDBIN(31))  AGGREGATE
ATTRIBUTES SOURCE INSOURCE OPTIONS NEST XREF GONUMBER SNAP TEST
SYSTEM(WINDOWS PENTIUM) DEFAULT(IBM ASSIGNABLE NOINITFILL NONCONNECTED
DESCRIPTOR DESCLIST SHORT(HEXADEC) DUMMY(ALIGNED) ORDINAL(MIN) BYADDR
RETURNS(BYADDR) LINKAGE(OPTLINK) NOINLINE ORDER NOOVERLAP NONRECURSIVE
NULLSYS EVENDEC NORETCODE EBCDIC NONNATIVE NONNATIVEADDR ALIGNED
E(HEXADEC) DESCLOCATOR EVENDEC NULL370) NOIMPRECISE CHECK(STORAGE) LIST
MDECK OFFSET PPTRACE

so your advice will be appreciated.

Mike

mikezmn (64)
12/13/2005 10:35:14 AM
"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134210064.904867.222520@z14g2000cwz.googlegroups.com...
>I have an old program running on S370 a lot of years ago. It have deal
> with complex numerical computations and its volume about 10000 lines.
>
> I have IBM Visual Age PL/1 compiler, but it gives wrong results. Using
> compatibility mode makes situation slightly better only. Step by step
> debug with hand checking shows that I need to do some senseless
> exchanging lines of code, or write code by another manner (sometimes).
> So, I'm almost completely sure, that that compiler has errors. It's can
> be proved by my another (smaller) program, that had the same behavior
> until I rewrote it to Fortran. I cannot do the same here, due to size
> and semantic complexity. Some features couldn't be reproduced in
> Fortran.
>


I doubt there are ANY algorithms coded in your program that can't be 
translated to Fortran.
If you think otherwise, how about posting a small excerpt?


> So, I try to find another way. I see Liant compiler and contact with
> them, but have one response only during to week. Have somebody
> experience with that compiler?
>

The rumor is they want $15,000 for a copy, and it's not even shown as a 
SUPPORTED Liant product
(it's a LEGACY thingo that hasn't been updated for years and years).

> At the same time, I knew about Open VMS PL/1 compiler for Alpha
> computers. I think now I could buy such computer for a modest price,
> but what about compiler? Going to Kednos site and seen Hobbyist
> license, I do not understand how I can get it. Also, can somebody to
> tell about features of that compiler? Or, may be, such way isn't good?
>

PL/I is dead, and that's the odor you smell from reading messages in this 
newsgroup.


dave_frank (2243)
12/13/2005 12:10:48 PM
OK, this is more than a little "out there", but ---

What if you installed the Hercules 360/370/z-machine emulator and ran OS/MVT and PL/I(G) on a virtual 370?
Hercules is available for non-commercial use, as are the old IBM OS and compilers.   You lose some speed in
the emulation, but an emulated 370 on a modern processor still outruns the timing of the real hardware.
kgrhoads (401)
12/13/2005 2:07:58 PM
>The FPU control word that 
>enables/disables interrupts and controls rounding isn't saved and 
>restored by default on calls.  I believe this is true for windoze as 
>well as OS/2.

IIRC this first became known in a widespread manner with programs compiled with Borland compilers
running under Win3.1 and Win95.  It is an old problem that is fairly endemic.  It became worse
when MS decided to make the limited precision modes of the coprocessor the default, IIRC, somewhere
between MS C 5.1 or 6.0 and MS VC 1.0  (late '80's?)

Disclaimer, all above subject to fallible memory ....
kgrhoads (401)
12/13/2005 2:13:44 PM
David Frank wrote:


> I doubt there are ANY algorithms coded in your program that cant be
> translated to Fortran,
> If you think otherwise how about posting a small excerpt.?

ANY? There are a lot. It is not my goal, but:
DCL A(*,*) FLOAT DEC CTL;
...
ALLOCATE A;
GET DATA(A);

>
> The rumor is they want $15,000 for a copy, and its not even shown as a
> SUPPORTED Liant product,
> (its a LEGACY thingo that hasnt been updated for years and years)

It really is just a rumor. Did you contact them? I did.
>
> PL/I is dead, and thats the odor you smell from reading messages in this
> newsgroup..

I do not agree. This thread completely refutes your statement. Also,
I do not want to argue about that anymore, sorry.

Mike

mikezmn (64)
12/13/2005 2:15:02 PM
Please ignore David, he regularly trolls comp.lang.pli with "it is dead" stuff.
He also occasionally trolls comp.lang.fortran with "the only true compiler is" stuff.
Once upon a time he actually contributed some useful bits, but that appears to have
stopped years ago.  

So he is now in the "please don't feed the trolls" category.
kgrhoads (401)
12/13/2005 2:17:35 PM
"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134483302.217099.193430@g47g2000cwa.googlegroups.com...
David Frank wrote:


>> I doubt there are ANY algorithms coded in your program that cant be
>> translated to Fortran,
>> If you think otherwise how about posting a small excerpt.?

>ANY? There are a lot. It's not a my goal, but
>DCL A(*,*) FLOAT DEC CTL;
....

I would assume that in normal use the above is declared in a calling unit
(main program?) and the below is in a called unit (subroutine?); however,
there is no size allocation assigned to the A array below. Care to explain
why not?

>ALLOCATE A:
>GET DATA(A);       <- there is NOTHING to "GET"  as A hasn't been properly 
>allocated or data inserted into it.

But a main/subroutine algorithm might be coded in Fortran:

! ------------------
program test
real,allocatable :: a(:,:)

call sub1(a)
write (*,*) a       ! output to screen the 100 "massaged" values of A
end program

! -------------
subroutine sub1(a)
real,allocatable :: a(:,:)
allocate (A(10,10))        ! create a 2D array with 100 elements
open (1,file='test.dat')
read (1,*) a                   ! input 1st 100 values in file into array a

    |      ! massage the data  and exit from subroutine
end subroutine 


dave_frank (2243)
12/13/2005 2:51:58 PM
On Tue, 13 Dec 2005 14:07:58 +0000, Kevin G. Rhoads  
<kgrhoads@alum.mit.edu> wrote:

> OK this is more than a little "out there" but ---
>
> What if you installed the Hercules 360/370/z-machine emulator and ran  
> OS/MVT and PL/I(G) on a virtual 370?
> Hercules is available for non-commercial use, as are the old IBM OS and  
> compilers.   You lose some speed in
> the emulation, but emulated 370's still outran real hardware timing when  
> emulated on modern processors.
PL/I G doesn't support a number of things he uses
tom284 (1839)
12/13/2005 3:20:07 PM
>PL/I G doesn't support a number of things he uses

I didn't look closely at the code.  I'm not sure that PL/I(H) is available for use under Hercules,
or is this an even more extensive dialect?
kgrhoads (401)
12/13/2005 4:36:34 PM
1) Use cod2off; it is documented under the OFFSET compiler option.

2) Specify NOIMPRECISE. Do not argue, do not pass go, do not collect $200.
Forcing HEXADEC is probably not a good idea. Arithmetic is still carried out 
in IEEE, so you merely discard precision twice. NULL370 is also pointless in 
almost all code, as are a range of other performance-killer compatibility 
options and forced truncations.

3) Since INITFILL is revealing errors, you clearly have a problem with 
uninitialized storage.

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134465486.338905.200600@o13g2000cwo.googlegroups.com...
> Some additional questions:
> 1. Who knows how to connect line numbers in source files with offsets
> in IBM VA PLI?
> I can't do that...
> 2. Now I use the following compiler options:
> LIMITS(EXTNAME(7),FIXEDDEC(15),NAME(31),FIXEDBIN(31))  AGGREGATE
> ATTRIBUTES SOURCE INSOURCE OPTIONS NEST XREF GONUMBER SNAP TEST
> SYSTEM(WINDOWS PENTIUM) DEFAULT(IBM ASSIGNABLE NOINITFILL NONCONNECTED
> DESCRIPTOR DESCLIST SHORT(HEXADEC) DUMMY(ALIGNED) ORDINAL(MIN) BYADDR
> RETURNS(BYADDR) LINKAGE(OPTLINK) NOINLINE ORDER NOOVERLAP NONRECURSIVE
> NULLSYS EVENDEC NORETCODE ASCII NONNATIVE NONNATIVEADDR ALIGNED
> E(HEXADEC) DESCLOCATOR EVENDEC NULL370) LIST MDECK OFFSET PPTRACE
> Is that correct in compatibility with S370 sense?
> 3. To Robin: setting option INITFILL immediately drives to a lot of
> errors during inpet data check, so I leave it by default NO, and I
> return it back when I'll be have good results.
> 


12/13/2005 5:06:12 PM
What release are you using? I got that one fixed quite a while back. It 
affected calls into FROMALIEN routines (from non-PL/I code).

"Peter Flass" <Peter_Flass@Yahoo.com> wrote in message 
news:H7Vmf.25106$XJ5.1313@twister.nyroc.rr.com...
> Mark Yudkin wrote:
>>
>> There used to be a problem with the floating point condition enablement 
>> when a PL/I routine was called from non-PLI code. This problem was 
>> resolved a while back, so you won't have a problem with 2.1.13.
>
> This is fun!  I just got bit by this one.  The FPU control word that 
> enables/disables interrupts and controls rounding isn't saved and restored 
> by default on calls.  I believe this is true for windoze as well as OS/2. 
> Unfortunately, different languages/compilers have their own "preferred" 
> setting for the FPU CW, so interlanguage or even OS calls may change the 
> setting.  The PL/I language spec specifies truncation when converting 
> float to fixed, other languages may use "round to nearest", etc.  This can 
> make a significant difference.
>
> It sounds like IBM has this one under control, though.
> 


12/13/2005 5:11:40 PM
It was a PL/I compiler issue, and it was fixed when I reported it. Dunno 
about Borland's compilers of course.

"Kevin G. Rhoads" <kgrhoads@alum.mit.edu> wrote in message 
news:439ED718.4BEB9280@alum.mit.edu...
> >The FPU control word that
>>enables/disables interrupts and controls rounding isn't saved and
>>restored by default on calls.  I believe this is true for windoze as
>>well as OS/2.
>
> IIRC this first became known in a widespread manner with programs compiled 
> with Borland compilers
> running under Win3.1 and Win95.  It is an old problem that is fairly 
> endemic.  It became worse
> when MS decided to make the limited precision modes of the coprocessor the 
> default, IIRC, somewhere
> between MS C 5.1 or 6.0 and MS VC 1.0  (late '80's?)
>
> Disclaimer, all above subject to fallable memory .... 


12/13/2005 5:13:19 PM
David Frank wrote:
Sorry to all!
> ! ------------------
> program test
> real,allocatable :: a(:,:)
>
> call sub1(a)
> write (*,*) a       ! output to screen the 100 "massaged" values of A
> end program
>
> ! -------------
> subroutine sub1(a)
> real,allocatable :: a(:,:)
> allocate (A(10,10))        ! create a 2D array with 100 elements
> open (1,file='test.dat')
> read (1,*) a                   ! input 1st 100 values in file into array a
>
>     |      ! massage the data  and exit from subroutine
> end subroutine

Sorry, I wasn't specific. I meant that in Fortran an allocatable variable
can't appear in a namelist. Probably that is even required by the standard.
And remember, please, that I do not want to rewrite it in Fortran.

0
mikezmn (64)
12/13/2005 8:17:31 PM
MZN wrote:

> I have an old program running on S370 a lot of years ago. It have deal
> with complex numerical computations and its volume about 10000 lines.
The pl1gcc project is always looking for interesting code.
If you would release the code under a GPL-compliant license, I would very
much like to add it to pl1gcc's test cases.

That way you will also ensure the code will live on forever:-)

Henrik
0
12/13/2005 9:49:32 PM
I can't recall -- is there an Environment variable that can be set to 
give options?  'SET IBMPLI='?  If NMAKE works like Gnu Make, you can set 
env variables in the makefile.  I recently discovered this, and it has 
saved me lots of typing.

MZN wrote:

> I see, that compiler version 2.1.13 have another behavior than earlier.
> It gives (I never saw that earlier):
> 
> Cylarr.pli(92:2) : IBM1221I W Statement uses 65682 bytes for
> temporaries.
> NMAKE :  fatal error U1077:  'pli.exe' : return code '4'
> 
> That means:
> IBM1221W Statement uses count bytes for temporaries. Explanation: This
> message is produced if a statement uses more bytes for temporaries than
> allowed by the STORAGE compiler option.
> 
> But I can not find place in GUI where I can specify STORAGE option
> Probably, it's possible through MAKE, but I do not know how.
> 
> Now I corrected option by the following manner:
> LIMITS(EXTNAME(7),FIXEDDEC(15),NAME(31),FIXEDBIN(31))  AGGREGATE
> ATTRIBUTES SOURCE INSOURCE OPTIONS NEST XREF GONUMBER SNAP TEST
> SYSTEM(WINDOWS PENTIUM) DEFAULT(IBM ASSIGNABLE NOINITFILL NONCONNECTED
> DESCRIPTOR DESCLIST SHORT(HEXADEC) DUMMY(ALIGNED) ORDINAL(MIN) BYADDR
> RETURNS(BYADDR) LINKAGE(OPTLINK) NOINLINE ORDER NOOVERLAP NONRECURSIVE
> NULLSYS EVENDEC NORETCODE EBCDIC NONNATIVE NONNATIVEADDR ALIGNED
> E(HEXADEC) DESCLOCATOR EVENDEC NULL370) NOIMPRECISE CHECK(STORAGE) LIST
> MDECK OFFSET PPTRACE
> 
> so, your advises will be appreciated.
> 
> Mike
> 

0
Peter_Flass (956)
12/14/2005 12:16:10 AM
Kevin G. Rhoads wrote:

>>PL/I G doesn't support a number of things he uses
> 
> 
> I didn't look closely at the code.  I'm not sure that PL/I(H) is available for use under Hercules,
> or is this an even more extensive dialect?

That would be quite a feat, as (H) was never released.  (F) is 
available, but still limited compared to newer compilers.

0
Peter_Flass (956)
12/14/2005 12:18:53 AM
This was my own compiler.  I spent a day assuming a bug in the compiler, 
naturally, before I checked Google.

Mark Yudkin wrote:

> What release are you using? I got that one fixed quite a while back. It 
> affected calls into FROMALIEN routines (from non-PL/I code).
> 
> "Peter Flass" <Peter_Flass@Yahoo.com> wrote in message 
> news:H7Vmf.25106$XJ5.1313@twister.nyroc.rr.com...
> 
>>Mark Yudkin wrote:
>>
>>>There used to be a problem with the floating point condition enablement 
>>>when a PL/I routine was called from non-PLI code. This problem was 
>>>resolved a while back, so you won't have a problem with 2.1.13.
>>
>>This is fun!  I just got bit by this one.  The FPU control word that 
>>enables/disables interrupts and controls rounding isn't saved and restored 
>>by default on calls.  I believe this is true for windoze as well as OS/2. 
>>Unfortunately, different languages/compilers have their own "preferred" 
>>setting for the FPU CW, so interlanguage or even OS calls may change the 
>>setting.  The PL/I language spec specifies truncation when converting 
>>float to fixed, other languages may use "round to nearest", etc.  This can 
>>make a significant difference.
>>
>>It sounds like IBM has this one under control, though.
>>
> 
> 
> 

0
Peter_Flass (956)
12/14/2005 12:20:36 AM
Peter Flass wrote:

> I can't recall -- is there an Environment variable that can be set to
> give options?  'SET IBMPLI='?  If NMAKE works like Gnu Make, you can set
> env variables in the makefile.  I recently discovered this, and it has
> saved me lots of typing.

Thank you, I'll have a look. For now I add the STORAGE option by hand in the
window during every rebuild.

Mike

0
mikezmn (64)
12/14/2005 8:05:50 AM
To Mark
1. I set NOIMPRECISE. Nothing changes. I'll use it always ;)
2. What do you mean by HEXADEC? Is it HEXADEC, or E(HEXADEC), or
SHORT(HEXADEC), or all of them?

To Mark and Robin
When I used INITFILL before, it was simply INITFILL. Now I use
INITFILL(BB), and I get no errors.
But the manual definitely recommends the form INITFILL('BB'x)
(the compiler gives an error for that), so that is another error.

To Peter
Compiler options may be stored in IBM.OPTIONS environment variable
-----------
The IBM.OPTIONS environment variable specifies compiler option
settings. For example:
  set ibm.options=xref attributes
The syntax of the character string you assign to the IBM.OPTIONS
environment variable is the same as that required for the compile-time
options specified on the PLI command (see Using the PLI command to
invoke the compiler).
The defaults together with the changes you apply using this environment
variable become the new defaults. Any options you specify on the PLI
command or in your source program override these defaults.
---------------------

To Peter and Kevin
Where can I find the PL/I (F), (G) and (H) compilers?

To all
After applying all your advice (thanks), the results still remain bad. What
else can I do?

0
mikezmn (64)
12/14/2005 8:48:50 AM
"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134505051.511122.42330@g44g2000cwa.googlegroups.com...


>Sorry, I wasn't specific. I meaned, that in Fortran allocatable
>variable can't appear in namelist. Probably, it's even due to standard.
>and remember, please, that I do not like to rewrite it on Fortran.

re: "I do not like to rewrite it on Fortran"

OK, so that's the REAL reason.
I have a history here of defending against statements such as yours:
  "some features couldnt be reproduced in Fortran"

However its true that allocatable arrays cant be included in a Fortran 
namelist,
but one wonders whether PL/I accepts such arrays either in a namelist. 


0
dave_frank (2243)
12/14/2005 10:12:56 AM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:IeSnf.4310$3Z.833@newsread1.news.atl.earthlink.net...
>
>   "some features couldnt be reproduced in Fortran"

That's right.

> However its true that allocatable arrays cant be included in a Fortran
> namelist,
> but one wonders whether PL/I accepts such arrays either in a namelist.

The equivalent namelist has just been shown to you - viz. -
put data (a);


0
robin_v (2737)
12/14/2005 2:16:31 PM
"robin" <robin_v@bigpond.com> wrote in message 
news:3PVnf.15$V7.2@news-server.bigpond.net.au...
> "David Frank" <dave_frank@hotmail.com> wrote in message
> news:IeSnf.4310$3Z.833@newsread1.news.atl.earthlink.net...
>>
>>   "some features couldnt be reproduced in Fortran"
>
> That's right.
>
>> However its true that allocatable arrays cant be included in a Fortran
>> namelist,
>> but one wonders whether PL/I accepts such arrays either in a namelist.
>
> The equivalent namelist has just been shown to you - viz. -
> put data (a);
>
>

You in another of your happy hour hazes again?   no PL/I syntax was shown 
using "NAMELIST" 


0
dave_frank (2243)
12/14/2005 2:45:22 PM
David,

I cannot understand what you want. Here we're talking about other
things.
Moreover, maybe you want to prove something? Please start another
thread for it.

> You in another of your happy hour hazes again?   no PL/I syntax was shown
> using "NAMELIST"

A: PROGRAM OPTIONS(MAIN);
DCL B(:,:,:) FLOAT CTL, N FIXED;
GET DATA(N);
ALLOCATE B(N,N+1,N+2);
GET DATA(B);
PUT DATA(B);
END A;

This is the last time I communicate with you in this thread. Sorry about that!

Mike

0
mikezmn (64)
12/14/2005 3:29:43 PM
I mean that all use of hexadec representation, whether by attribute (except
for binary I/O) or by compiler option, is best avoided.

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134550130.239916.139680@g49g2000cwa.googlegroups.com...
> To Mark
> 1. I set NOIMPRECISE. Nothing changes. I'll be use that always ;)
> 2. What do you mean under HEXADEC? Is it HEXADEC, or E(HEXADEC), or
> SHORT(HEXADEC), or all of them?
>
> To Mark and Robin
> When I used INITFILL before, it was simply INITFILL. Now I use
> INITFILL(BB), and have no any errors.
> But manual definitely recommends use in a form of INITFILL('BB'x)
> (compiler gives error for that), so it is another error.
>
> To Peter
> Compiler options may be stored in IBM.OPTIONS environment variable
> -----------
> The IBM.OPTIONS environment variable specifies compiler option
> settings. For example:
>  set ibm.options=xref attributes
> The syntax of the character string you assign to the IBM.OPTIONS
> environment variable is the same as that required for the compile-time
> options specified on the PLI command (see Using the PLI command to
> invoke the compiler).
> The defaults together with the changes you apply using this environment
> variable become the new defaults. Any options you specify on the PLI
> command or in your source program override these defaults.
> ---------------------
>
> To Peter and Kevin
> Where I can find PL/I (F), (G) and (H) compilers?
>
> To all
> After applying all your advises (thanks) results still remain bad. What
> I can to do else?
> 


0
12/14/2005 5:22:22 PM
Use the MAXTEMP compiler option to increase the limit, or use the supplied 
compiler exit to downgrade IBM1221I to an I severity, or both (I do both).
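
For example, a minimal sketch (the limit value here is arbitrary, and I'm
assuming MAXTEMP is accepted on a *PROCESS statement like any other
compile-time option):

*PROCESS MAXTEMP(131072);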

"Peter Flass" <Peter_Flass@Yahoo.com> wrote in message 
news:evJnf.28962$XC4.25210@twister.nyroc.rr.com...
>I can't recall -- is there an Environment variable that can be set to give 
>options?  'SET IBMPLI='?  If NMAKE works like Gnu Make, you can set env 
>variables in the makefile.  I recently discovered this, and it has saved me 
>lots of typing.
>
> MZN wrote:
>
>> I see, that compiler version 2.1.13 have another behavior than earlier.
>> It gives (I never saw that earlier):
>>
>> Cylarr.pli(92:2) : IBM1221I W Statement uses 65682 bytes for
>> temporaries.
>> NMAKE :  fatal error U1077:  'pli.exe' : return code '4'
>>
>> That means:
>> IBM1221W Statement uses count bytes for temporaries. Explanation: This
>> message is produced if a statement uses more bytes for temporaries than
>> allowed by the STORAGE compiler option.
>>
>> But I can not find place in GUI where I can specify STORAGE option
>> Probably, it's possible through MAKE, but I do not know how.
>>
>> Now I corrected option by the following manner:
>> LIMITS(EXTNAME(7),FIXEDDEC(15),NAME(31),FIXEDBIN(31))  AGGREGATE
>> ATTRIBUTES SOURCE INSOURCE OPTIONS NEST XREF GONUMBER SNAP TEST
>> SYSTEM(WINDOWS PENTIUM) DEFAULT(IBM ASSIGNABLE NOINITFILL NONCONNECTED
>> DESCRIPTOR DESCLIST SHORT(HEXADEC) DUMMY(ALIGNED) ORDINAL(MIN) BYADDR
>> RETURNS(BYADDR) LINKAGE(OPTLINK) NOINLINE ORDER NOOVERLAP NONRECURSIVE
>> NULLSYS EVENDEC NORETCODE EBCDIC NONNATIVE NONNATIVEADDR ALIGNED
>> E(HEXADEC) DESCLOCATOR EVENDEC NULL370) NOIMPRECISE CHECK(STORAGE) LIST
>> MDECK OFFSET PPTRACE
>>
>> so, your advises will be appreciated.
>>
>> Mike
>>
> 


0
12/14/2005 5:24:04 PM
Fuckwit!

If Fortran spells DATA as NAMELIST, what difference does it make? Robin's 
code is a complete and correct implementation of the requirement.

"David Frank" <dave_frank@hotmail.com> wrote in message 
news:6eWnf.4352$3Z.1613@newsread1.news.atl.earthlink.net...
>
> "robin" <robin_v@bigpond.com> wrote in message 
> news:3PVnf.15$V7.2@news-server.bigpond.net.au...
>> "David Frank" <dave_frank@hotmail.com> wrote in message
>> news:IeSnf.4310$3Z.833@newsread1.news.atl.earthlink.net...
>>>
>>>   "some features couldnt be reproduced in Fortran"
>>
>> That's right.
>>
>>> However its true that allocatable arrays cant be included in a Fortran
>>> namelist,
>>> but one wonders whether PL/I accepts such arrays either in a namelist.
>>
>> The equivalent namelist has just been shown to you - viz. -
>> put data (a);
>>
>>
>
> You in another of your happy hour hazes again?   no PL/I syntax was shown 
> using "NAMELIST"
> 


0
12/14/2005 5:28:11 PM
To Mark
1. I cannot find the MAXTEMP option... Moreover, when I set
IBM.OPTIONS=STORAGE(128000), the compiler gives an error;
IBM.OPTIONS=STORAGE is OK. When I set STORAGE(128000) in the make file, it's
OK too.
2. Could you tell me about PMR preparation and where it can be sent? I
didn't find definitive information in the manual.

Another interesting thing. I tried to compare results for convergence
for a very simple case. When the series summation is very short (up to 5
terms), I had
15-16 correct digits. For a long summation (400 terms) the results aren't
correct at all. So I definitely have a problem with roundoff error
accumulation. Any advice on how it can be avoided?

Compiler options are
 +    AGGREGATE(DECIMAL)
 +    ATTRIBUTES(FULL)
      BIFPREC(31)
      BLANK('09'x)
      CHECK( NOCONFORMANCE NOSTORAGE )
      CMPAT(LE)
      CODEPAGE(00819)
    NOCOMPILE(S)
    NOCOPYRIGHT
      CURRENCY('$')
 +    DEFAULT(IBM ASSIGNABLE INITFILL('BB') NONCONNECTED LOWERINC
              DESCRIPTOR DESCLOCATOR DUMMY(ALIGNED) ORDINAL(MIN)
              BYADDR RETURNS(BYADDR) LINKAGE(OPTLINK) NORETCODE
              NOINLINE ORDER NOOVERLAP NONRECURSIVE ALIGNED
              NULL370 EVENDEC SHORT(IEEE)
              ASCII IEEE NONNATIVE NONNATIVEADDR E(IEEE))
    NODLLINIT
    NOEXIT
      EXTRN(SHORT)
 +    FLAG(I)
      FLOATINMATH(ASIS)
 +    GONUMBER
    NOGRAPHIC
 +  NOIMPRECISE
      INCAFTER(PROCESS(""))
      INCLUDE(EXT('inc' 'cpy' 'mac'))
    NOINITAUTO
    NOINITBASED
    NOINITCTL
    NOINITSTATIC
 +    INSOURCE(FULL)
      LANGLVL(SAA2 NOEXT)
      LIBS( SINGLE DYNAMIC )
 +    LIMITS( EXTNAME(7) FIXEDBIN(31,31) FIXEDDEC(15,15) NAME(31) )
      LINECOUNT(60)
    NOLINEDIR
 +    LIST
    NOMACRO
      MARGINI(' ')
      MARGINS(2,72)
      MAXMSG(W 250)
      MAXSTMT(4096)
      MAXTEMP(50000)
 +    MDECK
      MSG(*)
      NAMES('@#$' '@#$')
      NATLANG(ENU)
 +    NEST
      NOT('^')
      NUMBER
      OBJECT
 +    OFFSET
      OPTIMIZE(0)
 +    OPTIONS(DOC)
      OR('|')
    NOPP
 +    PPTRACE
      PRECTYPE(ANS)
      PREFIX(CONVERSION FIXEDOVERFLOW INVALIDOP OVERFLOW
             NOSIZE NOSTRINGRANGE NOSTRINGSIZE NOSUBSCRIPTRANGE
             UNDERFLOW ZERODIVIDE)
      PROBE
    NOPROCEED(S)
      PROCESS(DELETE)
      REDUCE
      RESEXP
      RESPECT()
      RULES(IBM BYNAME NODECSIZE EVENDEC GOTO NOLAXBIF
            NOLAXCTL LAXDCL NOLAXDEF LAXIF LAXINOUT LAXLINK
            LAXMARGINS LAXPUNC LAXQUAL LAXSEMI NOLAXSTRZ MULTICLOSE)
    NOSEMANTIC(S)
 +    SNAP
 +    SOURCE
      STATIC(SHORT)
    NOSTMT
    NOSTORAGE
    NOSYNTAX(S)
      SYSPARM('')
      SYSTEM(WINDOWS)
      TERMINAL
 +    TEST
      USAGE( ROUND(IBM) UNSPEC(IBM) )
      WIDECHAR(LITTLEENDIAN)
      WINDOW(1950)
 +    XINFO(DEF NOXML)
 +    XREF(FULL)

You can see the PRECTYPE and NOCONFORMANCE options here. Their descriptions
are absent from the manual.

Mike

0
mikezmn (64)
12/14/2005 7:49:51 PM
MZN wrote:
> I see, that compiler version 2.1.13 have another behavior than earlier.
> It gives (I never saw that earlier):
> 
> Cylarr.pli(92:2) : IBM1221I W Statement uses 65682 bytes for
> temporaries.
> NMAKE :  fatal error U1077:  'pli.exe' : return code '4'
> 
> That means:
> IBM1221W Statement uses count bytes for temporaries. Explanation: This
> message is produced if a statement uses more bytes for temporaries than
> allowed by the STORAGE compiler option.
> 
> But I can not find place in GUI where I can specify STORAGE option
> Probably, it's possible through MAKE, but I do not know how.
> 
> Now I corrected option by the following manner:
> LIMITS(EXTNAME(7),FIXEDDEC(15),NAME(31),FIXEDBIN(31))  AGGREGATE
> ATTRIBUTES SOURCE INSOURCE OPTIONS NEST XREF GONUMBER SNAP TEST
> SYSTEM(WINDOWS PENTIUM) DEFAULT(IBM ASSIGNABLE NOINITFILL NONCONNECTED
> DESCRIPTOR DESCLIST SHORT(HEXADEC) DUMMY(ALIGNED) ORDINAL(MIN) BYADDR
> RETURNS(BYADDR) LINKAGE(OPTLINK) NOINLINE ORDER NOOVERLAP NONRECURSIVE
> NULLSYS EVENDEC NORETCODE EBCDIC NONNATIVE NONNATIVEADDR ALIGNED
> E(HEXADEC) DESCLOCATOR EVENDEC NULL370) NOIMPRECISE CHECK(STORAGE) LIST
> MDECK OFFSET PPTRACE
> 
> so, your advises will be appreciated.
> 
> Mike
> 
You should be able to find documentation of all compiler options in the 
Programming Guide that comes with the compiler.  If you don't have access to 
your installation's copy, you should be able to download it in PDF format from 
IBM's website.

The Storage option controls whether or not the compiler produces a report in the 
listing of the storage requirements of each block.  Also if storage(xxx) is 
specified, an information message is issued for any statement that requires more 
than xxx bytes of temporary storage.  The default value of xxx is 1000.  Note 
that the description in the manual makes it sound like statements requiring more 
than xxx bytes of temporaries will not work, "... maximum amount of storage 
allowed for temporaries ..."  At least for the Personal PL/I compiler this is 
not so.  The statement still works correctly.  The message is just an 
information message for the programmer (that's why the message number ends in I).

One truly ironic circumstance is that despite the fact that the maximum string 
length is 32767, a statement involving several substring and/or concatenation 
operations on strings with variable maximum length can "require" several hundred 
thousand bytes of temporary storage!  Even the old F compiler managed to do a 
better job.  Programs containing such statements would run in as little as 50K 
(that's right K) as long as the actual lengths of the strings were modest.
0
jjw (608)
12/14/2005 8:12:55 PM
I have IBM Visual Age PL/I for Windows v. 2.1.7 updated to 2.1.13. In the
manual (Programming Guide) I cannot find a lot of the compiler options (and I
suspect something else is missing too). Examples are: FLOATINMATH, BIFPREC,
NOINITAUTO, NOINITBASED, NOINITCTL, NOINITSTATIC, MAXTEMP, PRECTYPE. Could I
have the wrong manual? At the same time, I can find at least some of these
options in the IBM manuals for the PL/I implementations for AIX and z/OS. It's
absolutely abnormal; I feel that I should call an IBM representative. Does
anybody have a manual of that kind?

0
mikezmn (64)
12/14/2005 9:57:21 PM
On Thu, 15 Dec 2005 00:14:09 GMT, Peter Flass <Peter_Flass@Yahoo.com>  
wrote:

> MZN wrote:
>> To Peter and Kevin
>> Where I can find PL/I (F), (G) and (H) compilers?
>>
>
> There is no (G) and (H).  (F) is available with the OS/360 source from  
> CBTTAPE.ORG, and also with the OS, MVS, and VM systems available with  
> Hercules. (see www.conmicro.cx/hercules).  The compiler runs just fine  
> not only on old systems, but also on the latest z/OS and z/VM.  Says a  
> lot about IBM's commitment to upward-compatibility.  There's also a  
> PL/I(D) for DOS/360, but you don't want to go there.
>>
>
I would have called that backward compatibility.
0
tom284 (1839)
12/15/2005 12:02:44 AM
MZN wrote:
> To Peter and Kevin
> Where I can find PL/I (F), (G) and (H) compilers?
> 

There is no (G) and (H).  (F) is available with the OS/360 source from 
CBTTAPE.ORG, and also with the OS, MVS, and VM systems available with 
Hercules. (see www.conmicro.cx/hercules).  The compiler runs just fine 
not only on old systems, but also on the latest z/OS and z/VM.  Says a 
lot about IBM's commitment to upward-compatibility.  There's also a 
PL/I(D) for DOS/360, but you don't want to go there.
> 

0
Peter_Flass (956)
12/15/2005 12:14:09 AM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:6eWnf.4352$3Z.1613@newsread1.news.atl.earthlink.net...
>
> "robin" <robin_v@bigpond.com> wrote in message
> news:3PVnf.15$V7.2@news-server.bigpond.net.au...
> > "David Frank" <dave_frank@hotmail.com> wrote in message
> > news:IeSnf.4310$3Z.833@newsread1.news.atl.earthlink.net...
> >>
> >>   "some features couldnt be reproduced in Fortran"
> >
> > That's right.
> >
> >> However its true that allocatable arrays cant be included in a Fortran
> >> namelist,
> >> but one wonders whether PL/I accepts such arrays either in a namelist.
> >
> > The equivalent namelist has just been shown to you - viz. -
> > put data (a);
>
> You in another of your happy hour hazes again?   no PL/I syntax was shown
> using "NAMELIST"

A PUT DATA statement writes out the name(s) of the variable(s) AND their values.
A GET DATA statement requires that the name(s) of the variable(s) and their
values be provided as data.
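
For instance, a minimal sketch with made-up names:

 dcl (x, y) float;
 get data (x, y);   /* reads name=value pairs, in any order, ended by ";" */
 put data (x, y);   /* writes the names and the values back, ended by ";" */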



0
robin_v (2737)
12/15/2005 1:57:24 AM
MZN wrote:
> I have IBM Visual Age PL/I for Windows v. 2.1.7 updated to 2.1.13. In
> manual (Programming Guide) I can not find a lot of compiler options (I
> suspect and something else too). There are examples: FLOATINMATH,
> BIFPREC, NOINITAUTO,
> NOINITBASED, NOINITCTL, NOINITSTATIC, MAXTEMP, PRECTYPE. Can I have
> wrong manual? At the same time, I can find, at least, some of theese
> options in IBM manuals for PL/I implementations for AIX and z/OS. It's
> absolutely abnormal, I feel that I should call IBM representative. Does
> anybody have the manual of such kind?
> 
Well, I just went on the IBM website (http://www.ibm.com/us/), went to the 
section on support and downloads, and searched on "PL/I Programming Guide".  VA 
PL/I for Windows Library was the third hit.  Clicked on that and there, along 
with three or four other relevant publications, was the PL/I Programming Guide 
for download (a PDF just over 2MB).  Chapter 4 is devoted to compile time 
options and lists them all along with detailed documentation.  Here is the URL:

ftp://ftp.software.ibm.com/software/websphere/awdtools/pli/VAPLIPG.PDF
0
jjw (608)
12/15/2005 5:36:39 AM
"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134574183.741918.107280@o13g2000cwo.googlegroups.com...
> David,
>
> I can not undestand what do you want? Here we're talking about another
> things.

You were saying that you can't translate your program to Fortran because it 
lacks the syntax,
in particular you mentioned an allocatable namelist.

> Moreover, may be you want to prove something? Please, make another
> thread for it.
>

Be my guest,   I am on-topic

re:  your example showing code you think is untranslatable can be translated 
using a
     roll-ur-own namelist...

 A: PROGRAM OPTIONS(MAIN);
 DCL B(:,:,:) FLOAT CTL, N FIXED;
 GET DATA(N);
 ALLOCATE B(N,N+1,N+2);
 GET DATA(B);
 PUT DATA(B);
 END A;

! ------------------
program A
real,allocatable :: b(:,:,:)
integer :: n
character(20) :: sn, sb
open (1,file='test.dat')    ! create test file
write (1,*) 'n ',2
write (1,*) 'data ', [1:24]
rewind (1)

! translation of 4 executable PL/I statements
read (1,*) sn,n
allocate ( b(n,n+1,n+2) )
read (1,*) sb, b
write (*,*) sb, b

end program A    ! outputs below


 data                   1.000000       2.000000       3.000000
   4.000000       5.000000       6.000000       7.000000       8.000000
   9.000000       10.00000       11.00000       12.00000       13.00000
   14.00000       15.00000       16.00000       17.00000       18.00000
   19.00000       20.00000       21.00000       22.00000       23.00000
   24.00000

> 


0
dave_frank (2243)
12/15/2005 8:09:49 AM
robin wrote:
> "David Frank" <dave_frank@hotmail.com> wrote in message

(snip)

>>You in another of your happy hour hazes again?   no PL/I syntax was shown
>>using "NAMELIST"

> A PUT DATA statement writes out the name(s) of the vaiable(s) AND their values.
> A GET DATA statement requires that the name(s) of the variable(s) and their
> values be provided as data.

The formatting is a little nicer than NAMELIST, and Fortran still 
doesn't have anything like GET DATA; or PUT DATA;  (the list of 
variables is optional.)

-- glen

0
gah (12851)
12/15/2005 9:20:26 AM
David Frank wrote:

(snip)

> However its true that allocatable arrays cant be included in a Fortran 
> namelist,
> but one wonders whether PL/I accepts such arrays either in a namelist. 

PL/I doesn't make arbitrary rules where it would be possible to
do something.  PL/I has real generic functions, unlike Fortran.
You can call SQRT with a CHARACTER variable or constant and
it will figure out how to do it.   Try that in Fortran!
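
A two-line sketch of what that looks like (made-up names; the string goes
through PL/I's normal character-to-arithmetic conversion):

 dcl c char(2) init('16');
 put skip list (sqrt(c));   /* '16' is converted to arithmetic; prints 4 in float form */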

-- glen

0
gah (12851)
12/15/2005 9:30:05 AM
Please review the PDF version of the Programming Guide for the most recent 
version. Your listing shows:
>      MAXTEMP(50000)
The STORAGE option has nothing to do with your problem and can be left at 
NOSTORAGE.

As I said, IEEE has a longer exponent and a smaller mantissa. You will need 
to use a larger epsilon in the convergence tests. As for numeric accuracy, 
assuming your algorithm is numerically stable, the best you can do is to use 
extended precision. Unfortunately, the longest Intel mantissa is 
considerable shorter than the longest /370 mantissa, but if your code was OK 
on /370 using float (16), it will be OK with float(18) on Intel.
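
In declaration terms that is just (a sketch with invented names; on this
compiler FLOAT DEC(18) should map to the 80-bit extended format):

 dcl (sum, term) float dec(18);   /* was float dec(16) on the /370 */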

To open a PMR, you contact your local IBM branch office. Review the 
documentation that came with your PL/I license for full details.

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134589791.256484.153270@o13g2000cwo.googlegroups.com...
> To Mark
> 1. I can not find MAXTEMP option... Moreover, when I set
> IBM.OPTIONS=STORAGE(128000), compiler gives error,
> IBM.OPTIONS=STORAGE is OK. When I set in make file STORAGE(128000) it's
> OK too.
> 2. Could you tell me about PMR preparation and where it can be sent? I
> didn't find devinitive information in manual.
>
> Another interesting thing. I tried to compare results for convergense
> for very simple case. When series summation is very small (up to 5
> terms), I had
> 15-16 correct digits. For a long summation (400 terms) results aren't
> correct at all. So, I'm definitely have problem with roundoff error
> accumulation. Any advise how it can be avoided?
>
> Compiler options are
> +    AGGREGATE(DECIMAL)
> +    ATTRIBUTES(FULL)
>      BIFPREC(31)
>      BLANK('09'x)
>      CHECK( NOCONFORMANCE NOSTORAGE )
>      CMPAT(LE)
>      CODEPAGE(00819)
>    NOCOMPILE(S)
>    NOCOPYRIGHT
>      CURRENCY('$')
> +    DEFAULT(IBM ASSIGNABLE INITFILL('BB') NONCONNECTED LOWERINC
>              DESCRIPTOR DESCLOCATOR DUMMY(ALIGNED) ORDINAL(MIN)
>              BYADDR RETURNS(BYADDR) LINKAGE(OPTLINK) NORETCODE
>              NOINLINE ORDER NOOVERLAP NONRECURSIVE ALIGNED
>              NULL370 EVENDEC SHORT(IEEE)
>              ASCII IEEE NONNATIVE NONNATIVEADDR E(IEEE))
>    NODLLINIT
>    NOEXIT
>      EXTRN(SHORT)
> +    FLAG(I)
>      FLOATINMATH(ASIS)
> +    GONUMBER
>    NOGRAPHIC
> +  NOIMPRECISE
>      INCAFTER(PROCESS(""))
>      INCLUDE(EXT('inc' 'cpy' 'mac'))
>    NOINITAUTO
>    NOINITBASED
>    NOINITCTL
>    NOINITSTATIC
> +    INSOURCE(FULL)
>      LANGLVL(SAA2 NOEXT)
>      LIBS( SINGLE DYNAMIC )
> +    LIMITS( EXTNAME(7) FIXEDBIN(31,31) FIXEDDEC(15,15) NAME(31) )
>      LINECOUNT(60)
>    NOLINEDIR
> +    LIST
>    NOMACRO
>      MARGINI(' ')
>      MARGINS(2,72)
>      MAXMSG(W 250)
>      MAXSTMT(4096)
>      MAXTEMP(50000)
> +    MDECK
>      MSG(*)
>      NAMES('@#$' '@#$')
>      NATLANG(ENU)
> +    NEST
>      NOT('^')
>      NUMBER
>      OBJECT
> +    OFFSET
>      OPTIMIZE(0)
> +    OPTIONS(DOC)
>      OR('|')
>    NOPP
> +    PPTRACE
>      PRECTYPE(ANS)
>      PREFIX(CONVERSION FIXEDOVERFLOW INVALIDOP OVERFLOW
>             NOSIZE NOSTRINGRANGE NOSTRINGSIZE NOSUBSCRIPTRANGE
>             UNDERFLOW ZERODIVIDE)
>      PROBE
>    NOPROCEED(S)
>      PROCESS(DELETE)
>      REDUCE
>      RESEXP
>      RESPECT()
>      RULES(IBM BYNAME NODECSIZE EVENDEC GOTO NOLAXBIF
>            NOLAXCTL LAXDCL NOLAXDEF LAXIF LAXINOUT LAXLINK
>            LAXMARGINS LAXPUNC LAXQUAL LAXSEMI NOLAXSTRZ MULTICLOSE)
>    NOSEMANTIC(S)
> +    SNAP
> +    SOURCE
>      STATIC(SHORT)
>    NOSTMT
>    NOSTORAGE
>    NOSYNTAX(S)
>      SYSPARM('')
>      SYSTEM(WINDOWS)
>      TERMINAL
> +    TEST
>      USAGE( ROUND(IBM) UNSPEC(IBM) )
>      WIDECHAR(LITTLEENDIAN)
>      WINDOW(1950)
> +    XINFO(DEF NOXML)
> +    XREF(FULL)
>
> You can see here PRECTYPE and NOCONFORMANCE options. Its description is
> absent in manual.
>
> Mike
> 


0
12/15/2005 5:38:36 PM
James J. Weinkam wrote:

> MZN wrote:
> > I have IBM Visual Age PL/I for Windows v. 2.1.7 updated to 2.1.13. In
> > manual (Programming Guide) I can not find a lot of compiler options (I
> > suspect and something else too). There are examples: FLOATINMATH,
> > BIFPREC, NOINITAUTO,
> > NOINITBASED, NOINITCTL, NOINITSTATIC, MAXTEMP, PRECTYPE. Can I have
> > wrong manual? At the same time, I can find, at least, some of theese
> > options in IBM manuals for PL/I implementations for AIX and z/OS. It's
> > absolutely abnormal, I feel that I should call IBM representative. Does
> > anybody have the manual of such kind?
> >
> Well, I just went on the IBM website (http://www.ibm.com/us/), went to the
> section on support and downloads, and searched on "PL/I Programming Guide"  VA
> PL/I for Windows Library was the third hit.  Clicked on that and there, along
> with three or four other relevant publications, was the PL/I Programming Guide
> for download (a PDF just over 2MB).  Chapter 4 is devoted to compile time
> options and lists them all along with detailed documentation.  Here is the URL:
>
> ftp://ftp.software.ibm.com/software/websphere/awdtools/pli/VAPLIPG.PDF

Thank you, James, I saw this already. The above-mentioned options are
absent there, and they are present in other IBM PL/I manuals, for AIX and
z/OS. But these options are printed by my IBM VA PL/I compiler for Windows
as in effect. So it's definitely IBM's miss. Tomorrow I'll contact them.

At the same time some of those options are included in the WebSphere Studio
PL/I for Windows Programming Guide located at
http://www-1.ibm.com/support/docview.wss?rs=0&q1=PL%2fI+Programming+Guide&uid=swg27005323&loc=en_US&cs=utf-8&cc=us&lang=en
But it's not my product (I have IBM VisualAge PL/I for Windows).

And, again, who knows how to avoid roundoff error accumulation?

0
mikezmn (64)
12/15/2005 5:58:19 PM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:hx9of.5055$Dd2.3588@newsread3.news.atl.earthlink.net...
>
> "MZN" <MikeZmn@gmail.com> wrote in message
> news:1134574183.741918.107280@o13g2000cwo.googlegroups.com...
> > David,
> >
> > I can not undestand what do you want? Here we're talking about another
> > things.
>
> You were saying that you cant translate your program to Fortran because it
> lacked syntax,
> in particular you mentioned allocatable namelist.
>
> > Moreover, may be you want to prove something? Please, make another
> > thread for it.
>
> Be my guest,   I am on-topic
>
> re:  your example showing code you think is untranslatable can be translated
> using a
>      roll-ur-own namelist...
>
>  A: PROGRAM OPTIONS(MAIN);
>  DCL B(:,:,:) FLOAT CTL, N FIXED;
>  GET DATA(N);
>  ALLOCATE B(N,N+1,N+2);
>  GET DATA(B);
>  PUT DATA(B);
>  END A;
>
> ! ------------------
> program A
> real,allocatable :: b(:,:,:)
> integer :: n
> character(20) :: sn, sb
> open (1,file='test.dat')    ! create test file
> write (1,*) 'n ',2
> write (1,*) 'data ', [1:24]
> rewind (1)
>
> ! translation of 4 executable PL/I statements

No it's not.  You avoided using NAMELIST.

> read (1,*) sn,n
> allocate ( b(n,n+1,n+2) )
> read (1,*) sb, b
> write (*,*) sb, b
>
> end program A    ! outputs below
>
>  data                   1.000000       2.000000       3.000000
>    4.000000       5.000000       6.000000       7.000000       8.000000
>    9.000000       10.00000       11.00000       12.00000       13.00000
>    14.00000       15.00000       16.00000       17.00000       18.00000
>    19.00000       20.00000       21.00000       22.00000       23.00000
>    24.00000

Obviously not only do you not understand GET DATA and PUT DATA,
but you also don't understand Fortran's NAMELIST either.

put data (a, b, c, d);
prints
a = 1.2345 b = 9876 c = 123 d = 9.87654e+05;


0
robin_v (2737)
12/15/2005 8:17:40 PM
"MZN" <MikeZmn@gmail.com> wrote in message
news:1134589791.256484.153270@o13g2000cwo.googlegroups.com...
> To Mark
> 1. I can not find MAXTEMP option... Moreover, when I set
> IBM.OPTIONS=STORAGE(128000), compiler gives error,
> IBM.OPTIONS=STORAGE is OK. When I set in make file STORAGE(128000) it's
> OK too.
> 2. Could you tell me about PMR preparation and where it can be sent? I
> didn't find devinitive information in manual.

To raise a PMR, you need to show that your code
is correct and to provide explicit evidence of a
compiler error.  (This often means reproducing the error
using a few statements.)

The question of the options, whether they are documented or not,
seems to be beside the point.

> Another interesting thing. I tried to compare results for convergense
> for very simple case. When series summation is very small (up to 5
> terms), I had
> 15-16 correct digits. For a long summation (400 terms) results aren't
> correct at all.

How many digits correct?

> So, I'm definitely have problem with roundoff error
> accumulation. Any advise how it can be avoided?

Maybe.  But the fact that you had to alter the program
to get the correct result suggests that the underlying
cause is a programming error.

To avoid errors in a summation, it may be necessary
to sort the values and to form the sum beginning with
the smallest value.
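
If sorting is impractical, compensated (Kahan) summation is another option;
a rough PL/I sketch (invented names, not from your program, and it relies
on the compiler not re-associating the expressions, e.g. OPTIMIZE(0)):

 kahan_sum: proc (t, n) returns (float bin(53));
    dcl t(*)         float bin(53);  /* the terms to be summed            */
    dcl n            fixed bin(31);
    dcl (s, c, y, v) float bin(53);  /* sum, compensation, temporaries    */
    dcl i            fixed bin(31);
    s = 0; c = 0;
    do i = 1 to n;
       y = t(i) - c;     /* correct the next term by the running error   */
       v = s + y;        /* this addition may lose low-order digits of y */
       c = (v - s) - y;  /* recover what was lost                        */
       s = v;
    end;
    return (s);
 end kahan_sum;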

For a summation of 400 numbers, in 18-digit precision,
a typical maximum error would be one part in 15 or so digits.
More suggests significant subtractive cancellation
(which suggests a programming problem rather than
a compiler error).
Have you listed these 400 values? and carefully examined them?

> Mike


0
robin_v (2737)
12/15/2005 8:17:40 PM
robin wrote:
Now, for the PMR I have:
1. Impossibility of setting the STORAGE option in any way described in the manual.
It is possible to do it by hand in the make file only.
2. Absence of a number of compiler options from the manual, whereas they are
present in the listing file. That leaves it vague whether they are available in
fact or not.
3. Impossibility of installing the compiler under the Windows 64-bit edition.
The installer writes "String MEMORY_NT was not found in string table."

> The question of options, whether they are in or not in,
> seem to be beside the point.
Does that mean we have a documentation error at least?

>
> > Another interesting thing. I tried to compare results for convergense
> > for very simple case. When series summation is very small (up to 5
> > terms), I had
> > 15-16 correct digits. For a long summation (400 terms) results aren't
> > correct at all.
>
> How many digits correct?
For the long summation, not a single one.
>
> > So, I'm definitely have problem with roundoff error
> > accumulation. Any advise how it can be avoided?
>
> Maybe.  But the fact that you had to alter the program
> to get the correct result suggests that the underlying
> cause is a programming error.
Nominally, yes, but the program gave correct results on the S370.
>
> To avoid errors in a summation, it may be necessary
> to sort the values and to form the sum beginning with
> the smallest value.
For that case it's possible. For other cases the terms may not be monotonic.
Anyway, it's a lot of additional work; I'd like to avoid it.

> For a summation of 400 numbers, in 18-digit precision,
> a typical maximum error would be one part in 15 or so digits.
> More suggests significant subtractive cancellation
> (which suggests a programming problem rather than
> a compiler error).
> Have you listed these 400 values? and carefully examined them?
Yes, but for that case I definitely know that the terms decrease as their
index increases.
And again, it worked earlier.
>
> > Mike

0
mikezmn (64)
12/15/2005 9:31:15 PM
David Frank wrote:
(snip)

> in particular you mentioned allocatable namelist.

>>Moreover, may be you want to prove something? Please, make another
>>thread for it.

> Be my guest,   I am on-topic

It seems that PL/I even allows pointer variables in PUT DATA, though
maybe not in GET DATA.

Does Fortran allow them in NAMELIST?

-- glen

0
gah (12851)
12/16/2005 4:33:28 AM
"robin" <robin_v@bigpond.com> wrote in message 
news:Ebkof.9830$V7.1811@news-server.bigpond.net.au...
> "David Frank" <dave_frank@hotmail.com> wrote in message
>>
>> ! translation of 4 executable PL/I statements
>
> No it's not.  You avoided using NAMELIST.
>
You aren't reading the topic thread; the author correctly stated (and I already 
confirmed) that Fortran doesn't allow allocatables in a namelist.
As a Fortran book author, how come YOU don't know this?

Below is my DIY namelist that DOES accept allocatables (and, for Glen H., 
pointers, etc.).

>> read (1,*) sn,n
>> allocate ( b(n,n+1,n+2) )
>> read (1,*) sb, b
>> write (*,*) sb, b
>>
>> end program A    ! outputs below
>>
>>  data                   1.000000       2.000000       3.000000
>>    4.000000       5.000000       6.000000       7.000000       8.000000
>>    9.000000       10.00000       11.00000       12.00000       13.00000
>>    14.00000       15.00000       16.00000       17.00000       18.00000
>>    19.00000       20.00000       21.00000       22.00000       23.00000
>>    24.00000
>
> Obviously not only do you not understand GET DATA and PUT DATA,
> but you also don't understand Fortran's NAMELIST either.
>
> put data (a, b, c, d);
> prints
> a = 1.2345 b = 9876 c = 123 d = 9.87654e+05;
>

If you write the above record to a file and then attempt to input the file via GET 
DATA (d), does it assign d the value of a? 


0
dave_frank (2243)
12/16/2005 10:22:52 AM
First off, forget the STORAGE option. It's only there for syntactic 
compatibility of *PROCESS statements ported from the host.

Also, the documentation issue won't get you anywhere as IBM do document all 
of the options in the PDF. You must review the PDF, not the INF file, as I 
already pointed out. [I keep complaining about the fact that the INF isn't 
updated too, but you won't get anywhere with a PMR.]

I agree with you that the Windows 64-bit edition installation problem should 
be PMR'd.

---

You also won't get anywhere complaining that IEEE-754 specified a larger 
exponent and smaller mantissa, as it isn't a defect, but an unfortunate fact 
of life. The Intel has lower precision floating point arithmetic than the 
/370, and you must learn to live with it. It isn't a question of "wrong now / 
doesn't work now" versus "right / working beforehand" - you have changed your 
floating point model and you have to accept the consequences of the 
architectural change.

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134682275.920678.144350@o13g2000cwo.googlegroups.com...

robin wrote:
Now, for PMR I have
1. Impossibility to set STORAGE option by anyway described in manual.
It is possible to do by hand in make file only.
2. Absense a number of compiler's options in manual, whereas thay are
present in listing file. That results a vagueness are they available in
fact, or not?
3. Inpossibility to install compiler under Windows 64 bit edition.
Installer writes @String MEMORY_NT was not found in string table."

> The question of options, whether they are in or not in,
> seem to be beside the point.
Does it mean that we have documentation error at least?

>
> > Another interesting thing. I tried to compare results for convergense
> > for very simple case. When series summation is very small (up to 5
> > terms), I had
> > 15-16 correct digits. For a long summation (400 terms) results aren't
> > correct at all.
>
> How many digits correct?
For long summation no one.
>
> > So, I'm definitely have problem with roundoff error
> > accumulation. Any advise how it can be avoided?
>
> Maybe.  But the fact that you had to alter the program
> to get the correct result suggests that the underlying
> cause is a programming error.
Nominally say, yes, but it program gave correct results on S370.
>
> To avoid errors in a summation, it may be necessary
> to sort the values and to form the sum beginning with
> the smallest value.
For that case, it's possible. For others terms may be not monotonous.
Anyway, it's a lot of additional work. I'd like to avoid it.

> For a summation of 400 numbers, in 18-digit precision,
> a typical maximum error would be one part in 15 or so digits.
> More suggests significant subtractive cancellation
> (which suggests a programming problem rather than
> a compiler error).
> Have you listed these 400 values? and carefully examined them?
Yes, but for that case I definitely know that term reduces when its
number increases.
And again, it worked earlier.
>
> > Mike


0
12/16/2005 10:53:10 AM
You have documentation on all of the options in the PDF. The INF is old, but 
there's no point complaining, as you have current documentation.

By default the current guide is installed to C:\Program 
Files\IBM\VAPLI\help\pdf\VAPLIPG.PDF. It's also in the PDF subfolder of the 
unzipped fixpack.

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134669499.428344.97140@g49g2000cwa.googlegroups.com...
James J. Weinkam wrote:

> MZN wrote:
> > I have IBM Visual Age PL/I for Windows v. 2.1.7 updated to 2.1.13. In
> > manual (Programming Guide) I can not find a lot of compiler options (I
> > suspect and something else too). There are examples: FLOATINMATH,
> > BIFPREC, NOINITAUTO,
> > NOINITBASED, NOINITCTL, NOINITSTATIC, MAXTEMP, PRECTYPE. Can I have
> > wrong manual? At the same time, I can find, at least, some of theese
> > options in IBM manuals for PL/I implementations for AIX and z/OS. It's
> > absolutely abnormal, I feel that I should call IBM representative. Does
> > anybody have the manual of such kind?
> >
> Well, I just went on the IBM website (http://www.ibm.com/us/), went to the
> section on support and downloads, and searched on "PL/I Programming Guide" 
> VA
> PL/I for Windows Library was the third hit.  Clicked on that and there, 
> along
> with three or four other relevant publications, was the PL/I Programming 
> Guide
> for download (a PDF just over 2MB).  Chapter 4 is devoted to compile time
> options and lists them all along with detailed documentation.  Here is the 
> URL:
>
> ftp://ftp.software.ibm.com/software/websphere/awdtools/pli/VAPLIPG.PDF

Thank you, James, I saw this already. Above mentioned options are
absent there.
And its present in another IBM's PL/I manuals for AIX and z/OS. But
theese options
printed by my IBM VA PL/I compiler for Windows as acting. So, it's
definitely IBM's miss.
Tomorrow I'll be contact with them.

At the same time some of that options are included in WebSphere Studio
PL/I for Windows Programming Guide located at
http://www-1.ibm.com/support/docview.wss?rs=0&q1=PL%2fI+Programming+Guide&uid=swg27005323&loc=en_US&cs=utf-8&cc=us&lang=en
But it's not my product (I have IBM VisualAge PL/I for Windows).

And, again, who knows how to avoid roundoff error accumulation?


0
12/16/2005 10:57:34 AM
Mark Yudkin wrote:

> You have documentation on all of the option in the PDF. The INF is old, but
> there's no point complaining, as you have current documentation.

Yes, you're right, sorry! But the comedy of errors continues.

> By default the current guide is installed to C:\Program
> Files\IBM\VAPLI\help\pdf\VAPLIPG.PDF. It's also in the PDF subfolder of the
> unzipped fixpack.
In that folder I have two files, VAPLIPG.PDF (Programming Guide) and
vaplilrm.pdf (Language Reference). They have the same content! It's the
Programming Guide for both!

Setting the environment variable
IBM.OPTION=FLOATINMATH(EXTENDED) MAXTEMP(90000)
in System Properties doesn't work. The compiler doesn't give an error, but in
the listing these options appear with other (unchanged) values. Probably it's
overridden by something...

IBM's representatives here told me that, as I bought the compiler a long
time ago (2.5 years), I need to buy a
support contract. Without it they do not want to open a PMR. Is that a
usual practice?

Now I set extended precision everywhere in the program, and for the simplest
test I have:
1. Where the series should converge at 2 terms, I have fully
(15-16 digits) correct results for 2, 3, 5, 10, 50 terms.
2. Where the series should converge at 200 terms, I have no more than 1-2
correct digits for 200, 400, 500, 1000 terms, but the new result has
very good convergence (internally).

Mark, I suspect the compiler, due to my own experience with rewriting to
Fortran. The arithmetic is the same, but the results are correct. Although
the Fortran compiler is Watcom, not IBM....

Moreover, we should expect more portability from a high-level
language. And the differences in floating-point implementation between the
S370 and PCs do not look so disastrous.

So my principal question is the same: is it possible to avoid roundoff
errors at a small price? And if yes, how?

0
mikezmn (64)
12/16/2005 2:24:05 PM
>"MZN" <MikeZmn@gmail.com> wrote in message
>news:1134682275.920678.144350@o13g2000cwo.googlegroups.com...
>robin ?????(?):
>Now, for PMR I have
>1. Impossibility to set STORAGE option by anyway described in manual.
>It is possible to do by hand in make file only.

See below (2).

>2. Absense a number of compiler's options in manual, whereas thay are
>present in listing file. That results a vagueness are they available in
>fact, or not?

Since you installed a fixpack, there will be additions to the
compiler that are not documented in the Lang. Ref. and/or
Programming Guide.  You need to look in the README documentation
for the differences.

>3. Inpossibility to install compiler under Windows 64 bit edition.
>Installer writes @String MEMORY_NT was not found in string table."

Did the documentation say anywhere that it can be installed
in 64-bit Windows?

>> The question of options, whether they are in or not in,
>> seem to be beside the point.

>Does it mean that we have documentation error at least?

>> > Another interesting thing. I tried to compare results for convergense
>> > for very simple case. When series summation is very small (up to 5
>> > terms), I had
>> > 15-16 correct digits. For a long summation (400 terms) results aren't
>> > correct at all.
>
>> How many digits correct?

>For long summation no one.

> > So, I'm definitely have problem with roundoff error
> > accumulation. Any advise how it can be avoided?
>
>> Maybe.  But the fact that you had to alter the program
>> to get the correct result suggests that the underlying
>> cause is a programming error.

>Nominally say, yes, but it program gave correct results on S370.

I don't recall how many times that I have heard that,
and in the end it turned out to be a programming error.
The fact that it once worked doesn't prove anything.
It doesn't prove that there are no bugs in the program.

>> To avoid errors in a summation, it may be necessary
>> to sort the values and to form the sum beginning with
>> the smallest value.

>For that case, it's possible. For others terms may be not monotonous.
>Anyway, it's a lot of additional work. I'd like to avoid it.

>> For a summation of 400 numbers, in 18-digit precision,
>> a typical maximum error would be one part in 15 or so digits.
>> More suggests significant subtractive cancellation
>> (which suggests a programming problem rather than
>> a compiler error).
>> Have you listed these 400 values? and carefully examined them?

>Yes, but for that case I definitely know that term reduces when its
>number increases.

Then, if that is the case, you need to investigate why the sum is wrong.
You should be able to make a file copy of the values
and to produce a progressive sum as each value is added,
and see at what stage the sum goes bad.
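
A few lines of this sort would show it (a sketch; the array name and size
are invented):

 dcl t(400) float bin(53);      /* the terms, however you already obtain them */
 dcl s      float bin(53) init (0);
 dcl i      fixed bin(31);
 do i = 1 to 400;
    s = s + t(i);
    put skip list (i, t(i), s); /* one line per term: index, term, partial sum */
 end;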

>And again, it worked earlier.

Please see above about working before.

>> > Mike


0
robin_v (2737)
12/16/2005 11:10:49 PM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:0Awof.5526$Dd2.5480@newsread3.news.atl.earthlink.net...
>
> "robin" <robin_v@bigpond.com> wrote in message
> news:Ebkof.9830$V7.1811@news-server.bigpond.net.au...
> > "David Frank" <dave_frank@hotmail.com> wrote in message
> >>
> >> ! translation of 4 executable PL/I statements
> >
> > No it's not.  You avoided using NAMELIST.
> >
> You arent reading the topic thread,  the author correctly (and I already
> confirmed) that Fortran doesnt allow allocatables in a namelist.

That's right.

> As a Fortran book author, how come YOU dont know this?

I do know that.  The problem is that YOU don't know
that the Fortran code you wrote does NOT in any way shape
or form produce NAMELIST style of output.

> Below is my DIY namelist that DOES accept allocatables  (and for Glen H.)
> pointers, etc.
>
> >> read (1,*) sn,n
> >> allocate ( b(n,n+1,n+2) )
> >> read (1,*) sb, b
> >> write (*,*) sb, b
> >>
> >> end program A    ! outputs below
> >>
> >>  data                   1.000000       2.000000       3.000000
> >>    4.000000       5.000000       6.000000       7.000000       8.000000
> >>    9.000000       10.00000       11.00000       12.00000       13.00000
> >>    14.00000       15.00000       16.00000       17.00000       18.00000
> >>    19.00000       20.00000       21.00000       22.00000       23.00000
> >>    24.00000
> >
> > Obviously not only do you not understand GET DATA and PUT DATA,
> > but you also don't understand Fortran's NAMELIST either.
> >
> > put data (a, b, c, d);
> > prints
> > a = 1.2345 b = 9876 c = 123 d = 9.87654e+05;
>
> If you write above record to file and then attempt to input the file via GET
> DATA (d)  does it assign d the value of a ?

No; why on earth should it ?

But what does get data (b, c, d, a); do?
or get data (c, a, d, b); do?


0
robin_v (2737)
12/16/2005 11:10:50 PM
"MZN" <MikeZmn@gmail.com> wrote in message
news:1134743045.656594.71670@g44g2000cwa.googlegroups.com...

>Now everywhere in program I set extended precision, and for simplest
>test I have:
>1. Where serie should be converge at 2 terms I have a fullly
>(15-16digits) correct results for 2,3,5,10, 50 terms.
>2.  Where serie should be converge at 200 terms I have no more 1-2
>correct digits for 200, 400, 500, 1000 terms, but new result have a
>very good convergence (internally)

>Mark, I suspect compiler,

There is no evidence for that conclusion.
You need to find out why your algorithm breaks down.
I have already suggested that as it is the summation that
breaks down with 200 values, you should begin by
carefully examining those 200 values, and
print out the partial sum after each element is added.

>Moreover, we're should expect more portability from high level
>language. And differences in floating point implementation on S370 and
>PCs do not look so disastrous.

That's why it looks like an unstable algorithm or a programming error.
(I have previously remarked that your changing the code
to produce a correct result lends support to the programming error
scenario).


0
robin_v (2737)
12/17/2005 12:19:25 AM
Robin,
In fact, all enhancements are documented, but in the PDFs, not in the INFs. 
This fact is stated in the readmes. I strongly suspect that IBM would like 
to get away from their OS/2-compatible INF format.

The PL/I installer has a number of bugs; I've opened PMRs on these before. 
There is no logical reason why PL/I should not install on the 32-bit 
subsystem of 64-bit Windows, and hence I recommended to Mike that he open a 
PMR.


"robin" <robin_v@bigpond.com> wrote in message 
news:ZPHof.20240$V7.11385@news-server.bigpond.net.au...
> >"MZN" <MikeZmn@gmail.com> wrote in message
>>news:1134682275.920678.144350@o13g2000cwo.googlegroups.com...
>>robin ?????(?):
>>Now, for PMR I have
>>1. Impossibility to set STORAGE option by anyway described in manual.
>>It is possible to do by hand in make file only.
>
> See below (2).
>
>>2. Absense a number of compiler's options in manual, whereas thay are
>>present in listing file. That results a vagueness are they available in
>>fact, or not?
>
> Since you inmstalled a fixpak, there will be additions to the
> compiler that are not ducumented in the Lang. Ref. and/or
> Programming Guide.  You need to look in the README dicumentation
> for differences.
>
>>3. Inpossibility to install compiler under Windows 64 bit edition.
>>Installer writes @String MEMORY_NT was not found in string table."
>
> Did the documentation say anywhere that it can be installed
> in 64-bit Windows?
>
>>> The question of options, whether they are in or not in,
>>> seem to be beside the point.
>
>>Does it mean that we have documentation error at least?
>
>>> > Another interesting thing. I tried to compare results for convergense
>>> > for very simple case. When series summation is very small (up to 5
>>> > terms), I had
>>> > 15-16 correct digits. For a long summation (400 terms) results aren't
>>> > correct at all.
>>
>>> How many digits correct?
>
>>For long summation no one.
>
>> > So, I'm definitely have problem with roundoff error
>> > accumulation. Any advise how it can be avoided?
>>
>>> Maybe.  But the fact that you had to alter the program
>>> to get the correct result suggests that the underlying
>>> cause is a programming error.
>
>>Nominally say, yes, but it program gave correct results on S370.
>
> I don't recall how many times that I have heard that,
> and in the end it turned out to be a programming error.
> The fact that it once worked doesn't prove anything.
> It doesn't prove that that there are no bugs in the program.
>
>>> To avoid errors in a summation, it may be necessary
>>> to sort the values and to form the sum beginning with
>>> the smallest value.
>
>>For that case, it's possible. For others terms may be not monotonous.
>>Anyway, it's a lot of additional work. I'd like to avoid it.
>
>>> For a summation of 400 numbers, in 18-digit precision,
>>> a typical maximum error would be one part in 15 or so digits.
>>> More suggests significant subtractive cancellation
>>> (which suggests a programming problem rather than
>>> a compiler error).
>>> Have you listed these 400 values? and carefully examined them?
>
>>Yes, but for that case I definitely know that term reduces when its
>>number increases.
>
> Then, if that is the case, you need to investigate why the sum is wrong.
> You should be able to make a file copy of the values
> and to produce a progressive sum as each value is added,
> and see at what stage the sum goes bad.
>
>>And again, it worked earlier.
>
> Please see above about working before.
>
>>> > Mike
>
> 


0
12/18/2005 8:28:09 AM
There was a packaging bug in the original FP13 and the LRM was the Guide! I 
PMR'd it and the fix pack was replaced (the replacement also fixed a 
regression in the SQL precompiler I also PMR'd). Grab the latest FP13.

The environment option is IBM.OPTIONS, not IBM.OPTION, so it's not 
surprising it didn't work.

It is normal either to need a support contract, or to have to pay if the 
case is not considered a bug. However, I do not work for IBM.

Fortran and PL/I both implement IEEE-754 and hence there should be no 
difference. Unless you're still using HEXADEC, in which case you are forcing 
pairs of rounding errors. Make sure that all of your floating point values 
are IEEE.
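
For example, something along these lines at the top of each compilation unit
(a sketch; it just repeats the DEFAULT suboptions already visible in your
options listing):

*PROCESS DEFAULT( IEEE SHORT(IEEE) E(IEEE) );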

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134743045.656594.71670@g44g2000cwa.googlegroups.com...
Mark Yudkin wrote:

> You have documentation on all of the option in the PDF. The INF is old, 
> but
> there's no point complaining, as you have current documentation.

Yes, you're right, sorry! But comedy of mistakes continues.

> By default the current guide is installed to C:\Program
> Files\IBM\VAPLI\help\pdf\VAPLIPG.PDF. It's also in the PDF subfolder of 
> the
> unzipped fixpack
..
In that folder I have two files VAPLIPG.PDF (Programming Reference) and

vaplilrm.pdf (Langiage Reference). They have the same content! It's
Programming Reference for both!

Setting in System properties environment variable
IBM.OPTION=FLOATINMATH(EXTENDED) MAXTEMP(90000)
doesn't work. Compiler doesn't give error but in listing theese options
appear with another (nochanged) values. Probably, it's overridden by
something...

IBM's representatives here told me, that as I bought compiler a long
time ago (2.5 years), I need to buy
support contract. Without it they do not want to open PMR. Does it a
usual practice?

Now everywhere in program I set extended precision, and for simplest
test I have:
1. Where serie should be converge at 2 terms I have a fullly
(15-16digits) correct results for 2,3,5,10, 50 terms.
2.  Where serie should be converge at 200 terms I have no more 1-2
correct digits for 200, 400, 500, 1000 terms, but new result have a
very good convergence (internally)

Mark, I suspect compiler, due to my own experience with rewritting to
Fortran. Arithmetics is the same, but results are correct. Although
Fortran compiler is Watcom, not IBM....

Moreover, we're should expect more portability from high level
language. And differences in floating point implementation on S370 and
PCs do not look so disastrous.

So, my principal question is the same: is it possible to avoid roundoff
errors by small price. And, if yes how to do that?


0
12/18/2005 8:42:36 AM
Mark Yudkin wrote:

> There was a packaging bug in the original FP13 and the LRM was the Guide! I
> PMR'd it and the fix pack was replaced (the replacement also fixed a
> regression in the SQL precompiler I also PMR'd). Grab the latest FP13.
So FP13 has at least two versions. That's a surprise to me. I'll do that.

> The environment option is IBM.OPTIONS, not IBM.OPTION, so it's not
> surprising it didn't work.
Sorry, I was wrong. Now I can set this variable through project
properties too.

> It is normal either to need a support contract, ot to have to pay if the
> case is not considered a bug. However, I do not work for IBM.
OK, I'll pay for that.

> Fortran and PL/I both implement IEEE-754 and hence there should be no
> difference. Unless you're still using HEXADEC, in which case you are forcing
> pairs of rounding errors. Make sure that all of your floating point values
> are IEEE.
Where can I read a detailed description of the floating-point implementations
on the S370 and in IEEE-754? Some experts on migration from the mainframe here
told me that IBM machines had different implementations for different
models of the S370 system.

Also, I would very much appreciate some examples of PL/I code that
definitely cause a problem.

0
mikezmn (64)
12/18/2005 10:06:09 AM
MZN wrote:
> Where I can read detailed description in floating point implementation
> on S370 and IEEE-754? Some experts on migration from mainframe here
> told me that IBM machines had different implementations for different
> models of S370 system.

Some odd models of the S/360 line were different from the others, and 
there was a large FP re-engineering applied to all machines ca. 1967 or 
so, but otherwise there were no differences to speak of until quite 
recently, when IEEE-754 (already in use by PCs and almost everything 
else) was added.

You may as well go to the source, reachable from 
<URL:http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?SSN=05LRP0001949928206&FNC=ONL&PBL=SA22-7832-04&TRL=TXTSRH#>

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
12/18/2005 3:18:10 PM
>"MZN" <MikeZmn@gmail.com> wrote in message
ews:1134900368.945970.27540@g47g2000cwa.googlegroups.com...
Mark Yudkin wrote:
>> Fortran and PL/I both implement IEEE-754 and hence there should be no
>> difference.

Well, actually, it seems his code originally ran with hex rather than IEEE.

>> Unless you're still using HEXADEC, in which case you are forcing
>> pairs of rounding errors. Make sure that all of your floating point values
>> are IEEE.
>Where I can read detailed description in floating point implementation
>on S370 and IEEE-754? Some experts on migration from mainframe here
>told me that IBM machines had different implementations for different
>models of S370 system.

>Also, I'll be very appreciated for some examples of PL/I code that
>definitely give a problem.

Your code is what is giving a problem, and you need to
examine the output as I suggested.
That way you can determine whether it is your code, the algorithm,
or the compiler that is the cause.


0
robin_v (2737)
12/19/2005 1:44:47 PM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:T4fpf.39027$L7.8883@fe12.lga...
> MZN wrote:
> > Where I can read detailed description in floating point implementation
> > on S370 and IEEE-754? Some experts on migration from mainframe here
> > told me that IBM machines had different implementations for different
> > models of S370 system.
>
> Some odd models of the S/360 line were different from the others, and
> there was a large FP re-engineering applied to all machines ca. 1967 or
> so,

This was for the S/360, and it was not major; it involved
adding the guard digit to the Floating-Point arithmetic unit.
It is irrelevant to this case.  Its main effect was to
improve accuracy for single precision working.  MZN
has been using double precision and extended precision.
Even on S/360 and S/370 the effects on DP operations
were nowhere near as noticeable as on single precision.

> but otherwise there were no differences to speak of until quite
> recently, when IEEE-754 (already in use by PCs and almost everything
> else) was added.




0
robin_v (2737)
12/19/2005 1:44:47 PM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:T4fpf.39027$L7.8883@fe12.lga...
>> MZN wrote:
>>> Where I can read detailed description in floating point implementation
>>> on S370 and IEEE-754? Some experts on migration from mainframe here
>>> told me that IBM machines had different implementations for different
>>> models of S370 system.
>> Some odd models of the S/360 line were different from the others, and
>> there was a large FP re-engineering applied to all machines ca. 1967 or
>> so,
> 
> This was for the S/360, and it was not major; it involved
> adding the guard digit to the Floating-Point arithmetic unit.
> It is irrelevant to this case.  Its main effect was to
> improve accuracy for single precision working.  MZN
> has been using double precision and extended precision.
> Even on S/360 and S/370 the effects on DP operations
> were not anywhere noticeable as on single precision.

The guard digit was added to double precision, postnormalization was 
added to the HER and HDR instructions, and the behavior of overflow and 
underflow was altered.

>> but otherwise there were no differences to speak of until quite
>> recently, when IEEE-754 (already in use by PCs and almost everything
>> else) was added.
> 
> 
> 
> 


-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
12/19/2005 1:54:33 PM
On Mon, 19 Dec 2005 08:54:33 -0500, John W. Kennedy  
<jwkenne@attglobal.net> wrote:

> robin wrote:
>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>> news:T4fpf.39027$L7.8883@fe12.lga...
>>> MZN wrote:
>>>> Where I can read detailed description in floating point implementation
>>>> on S370 and IEEE-754? Some experts on migration from mainframe here
>>>> told me that IBM machines had different implementations for different
>>>> models of S370 system.
>>> Some odd models of the S/360 line were different from the others, and
>>> there was a large FP re-engineering applied to all machines ca. 1967 or
>>> so,
>>  This was for the S/360, and it was not major; it involved
>> adding the guard digit to the Floating-Point arithmetic unit.
>> It is irrelevant to this case.  Its main effect was to
>> improve accuracy for single precision working.  MZN
>> has been using double precision and extended precision.
>> Even on S/360 and S/370 the effects on DP operations
>> were not anywhere noticeable as on single precision.
>
> The guard digit was added to double precision, postnormalization was  
> added to the HER and HDR instructions, and the behavior of overflow and  
> underflow was altered.

As an interesting aside, ICL (as did Siemens) licensed the Spectra series
from RCA and called it the system 4.  We had a 4/72 when I worked at the
European Space Agency in the early 70's and we found that it gave different
results for orbital calculations than the 360/65, because of the guard digit
producing different rounding behaviour.
>
>>> but otherwise there were no differences to speak of until quite
>>> recently, when IEEE-754 (already in use by PCs and almost everything
>>> else) was added.
>>
>
>

0
tom284 (1839)
12/19/2005 2:01:00 PM
"Tom Linden" <tom@kednos.com> wrote in message
news:ops11ajyr2zgicya@hyrrokkin...
> On Mon, 19 Dec 2005 08:54:33 -0500, John W. Kennedy
> <jwkenne@attglobal.net> wrote:
>
> > robin wrote:
> >> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> >> news:T4fpf.39027$L7.8883@fe12.lga...
> >>> MZN wrote:
> >>>> Where I can read detailed description in floating point implementation
> >>>> on S370 and IEEE-754? Some experts on migration from mainframe here
> >>>> told me that IBM machines had different implementations for different
> >>>> models of S370 system.
> >>> Some odd models of the S/360 line were different from the others, and
> >>> there was a large FP re-engineering applied to all machines ca. 1967 or
> >>> so,
> >>  This was for the S/360, and it was not major; it involved
> >> adding the guard digit to the Floating-Point arithmetic unit.
> >> It is irrelevant to this case.  Its main effect was to
> >> improve accuracy for single precision working.  MZN
> >> has been using double precision and extended precision.
> >> Even on S/360 and S/370 the effects on DP operations
> >> were not anywhere noticeable as on single precision.
> >
> > The guard digit was added to double precision, postnormalization was
> > added to the HER and HDR instructions, and the behavior of overflow and
> > underflow was altered.
>
> As an interesting aside, ICL (as did Siemens) licensed the Spectra series
>  from
> RCA and called it the system 4.

Actually, it was the English Electric Company who did that.
English Electric's computer division along with other British manufacturers
were merged under the ICL umbrella by c. 1970.

>  We had a 4/72 when I worked at the European
> Space Agency in the early 70's and we found that it gave different results
> for
> orbital calculations than the 360/65 because of the guard digit producing
> different
> rounding behaviour.

The 4-50 and 4-70 had the guard digit on the single precision instructions
(I don't know about the 4-72, but expect that it was the same being
the more-powerful of the series).
However, neither the 4-50 nor the 4-70 had a guard digit
on the HE, HER, HD, and HDR instructions.
It is surprising that those instructions did not match the D family
floating-point instructions, because the H family was about
10 to 30 times faster than the D family.




0
robin_v (2737)
12/19/2005 11:14:46 PM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:vYypf.39224$L7.37622@fe12.lga...

> The guard digit was added to double precision, postnormalization was
> added to the HER and HDR instructions, and the behavior of overflow and
> underflow was altered.

The HE, HER, HD, HDR set was a glaring design error - a faux pas.
That it failed to post-normalise meant that it couldn't be used
in a loop to divide by, say, 32.

However, it could be used to divide by 2 etc effectively
by employing the DP version and following that by AD :
SDR 0,0
LE 0,X
HDR 0,0
AD 0,zero
This was a pain, and to force folks to use that
to save some time compared with full division by 2
(between 10 and 30 times slower) was absurd,
because the memory available on the 360 was relatively small.


0
robin_v (2737)
12/20/2005 3:51:25 AM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:vYypf.39224$L7.37622@fe12.lga...
> robin wrote:
> > "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> > news:T4fpf.39027$L7.8883@fe12.lga...

> > This was for the S/360, and it was not major; it involved
> > adding the guard digit to the Floating-Point arithmetic unit.
> > It is irrelevant to this case.  Its main effect was to
> > improve accuracy for single precision working.  MZN
> > has been using double precision and extended precision.
> > Even on S/360 and S/370 the effects on DP operations
> > were not anywhere noticeable as on single precision.
>
> The guard digit was added to double precision,

I did not say otherwise.  I was referring to the fact that
the guard digit had more effect on single precision.

The guard digit had no effect when neither operand
required pre-normalization.  Nor did it have any effect
if the pre-normalized operand had zero in the guard digit.
Nor did it have any effect if the result of add/subtract
had a non-zero digit in the most-significant nibble of the
mantissa.
    In other words, it only had effect if the result mantissa
had a zero high-order nibble and the guard digit was non-zero.
    I do not recall any program being affected by this
change.  Not even a library subroutine or function.

> postnormalization was
> added to the HER and HDR instructions, and the behavior of overflow and
> underflow was altered.


0
robin_v (2737)
12/20/2005 3:51:26 AM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:vYypf.39224$L7.37622@fe12.lga...

>>The guard digit was added to double precision, postnormalization was
>>added to the HER and HDR instructions, and the behavior of overflow and
>>underflow was altered.

> The HE, HER, HD, HDR set was a glaring design error - a faux pas.
> That it failed to post-normalise meant that it couldn't be used
> in a loop to divide by, say, 32.

Well, first there are no HE or HD instructions.

I am pretty sure that HER and HDR will, and always have, done a
one digit shift when needed.  It might be that they won't normalize
a previously unnormalized number, but in that rare case using AER
or some other that will normalize should be fine.

(snip)

-- glen

0
gah (12851)
12/20/2005 8:43:30 AM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:vYypf.39224$L7.37622@fe12.lga...
> 
>> The guard digit was added to double precision, postnormalization was
>> added to the HER and HDR instructions, and the behavior of overflow and
>> underflow was altered.
> 
> The HE, HER, HD, HDR set was a glaring design error - a faux pas.
> That it failed to post-normalise meant that it couldn't be used
> in a loop to divide by, say, 32.

There never was an HE or HD instruction.

The main intention of the HER and HDR instructions was to accelerate 
taking square roots.

It is very well known that the entire 360 FP feature could have used 
some input from numerical analysts; it's shot full of design defects. 
Some of the mistakes were corrected in the 1967 re-engineering, but 
others (most grossly, the hexadecimal orientation) had to wait for IEEE-754.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
12/20/2005 3:08:04 PM
glen herrmannsfeldt wrote:
> robin wrote:
>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>> news:vYypf.39224$L7.37622@fe12.lga...
> 
>>> The guard digit was added to double precision, postnormalization was
>>> added to the HER and HDR instructions, and the behavior of overflow and
>>> underflow was altered.
> 
>> The HE, HER, HD, HDR set was a glaring design error - a faux pas.
>> That it failed to post-normalise meant that it couldn't be used
>> in a loop to divide by, say, 32.
> 
> Well, first there are no HE or HD instructions.
> 
> I am pretty sure that HER and HDR will, and always have, done a
> one digit shift when needed.  It might be that they won't normalize
> a previously unnormalized number, but in that rare case using AER
> or some other that will normalize should be fine.

Prior to 1967, they did not postnormalize.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
12/20/2005 3:08:37 PM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:vYypf.39224$L7.37622@fe12.lga...
>> robin wrote:
>>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>> news:T4fpf.39027$L7.8883@fe12.lga...
> 
>>> This was for the S/360, and it was not major; it involved
>>> adding the guard digit to the Floating-Point arithmetic unit.
>>> It is irrelevant to this case.  Its main effect was to
>>> improve accuracy for single precision working.  MZN
>>> has been using double precision and extended precision.
>>> Even on S/360 and S/370 the effects on DP operations
>>> were not anywhere noticeable as on single precision.
>> The guard digit was added to double precision,
> 
> I did not say otherwise.  I was referring to the fact that
> the guard digit had more effect on single precision.

But it was always there in single precision. The 1967 re-engineering 
added it to double precision.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
12/20/2005 3:10:03 PM
Can constructions such as those below be a source of numerical errors on
PCs?

DCL A FLOAT(18);

IF A=1 THEN ...
IF A=1E0 THEN ...
IF A=1.0Q0 THEN ...

or more accurately should be

IF ABS(A-1.0Q0)<EPS THEN ...

where EPS is some small number.

In other words, constructions with an equality sign were processed successfully
on the S370, but maybe that's not right for the PC?

0
mikezmn (64)
12/20/2005 5:21:40 PM
glen herrmannsfeldt wrote:
> robin wrote:
> 
>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>> news:vYypf.39224$L7.37622@fe12.lga...
> 
> 
>>> The guard digit was added to double precision, postnormalization was
>>> added to the HER and HDR instructions, and the behavior of overflow and
>>> underflow was altered.
> 
> 
>> The HE, HER, HD, HDR set was a glaring design error - a faux pas.
>> That it failed to post-normalise meant that it couldn't be used
>> in a loop to divide by, say, 32.
> 
> 
> Well, first there are no HE or HD instructions.
> 
> I am pretty sure that HER and HDR will, and always have, done a
> one digit shift when needed.  It might be that they won't normalize
> a previously unnormalized number, but in that rare case using AER
> or some other that will normalize should be fine.
> 
> (snip)
> 
> -- glen
> 
Unfortunately, I no longer have any S/360 manuals and there don't seem to be any 
free downloadable versions.  However, according to GA22-7000-8, IBM System/370 
Principles of Operation (1981), HER and HDR do the following:

The second operand is divided by 2 and the normalized quotient is placed in the 
first operand location.

The manual goes on to describe the exact operation of the instruction in detail, 
covering every conceivable eventuality.  Under "Programming Notes" it states:

3. The result of HALVE is zero only when the second operand fraction is zero, or 
when exponent underflow occurs with the exponent underflow mask set to zero.  A 
fraction with zeros in every bit position, except for a one in the rightmost bit 
position, does not become a zero after the right shift.  This is because the 
one bit is preserved in the guard digit and, when the result is not made a true 
zero because of underflow, becomes the leftmost bit after normalization of the 
result.

So much for not fully normalizing a previously unnormalized number.

As I recall the S/360, all floating point operations produced normalized 
results except for the various load instructions and the unnormalized instructions.
0
jjw (608)
12/20/2005 9:46:26 PM
With rare exception, when comparing a float to an exact value it is 
always best to use the

IF ABS(A-1.0Q0)<EPS THEN ...

form.  Unless you know for certain that your floating point binary values will
be exact, you can end up with programs that never terminate or
produce wrong answers.
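A minimal sketch of the difference (the whole program, including the tolerance
1.0Q-15, is an illustrative assumption, not anyone's production code):

EPS_TEST: PROC OPTIONS(MAIN);
   DCL A   FLOAT(18);
   DCL EPS FLOAT(18) VALUE(1.0Q-15); /* illustrative tolerance for extended precision */

   A = 1.0Q0 / 3.0Q0 * 3.0Q0;        /* mathematically 1, but may differ in the last bit */

   IF A = 1.0Q0 THEN                 /* fragile: exact comparison */
      PUT SKIP LIST('exactly 1');
   ELSE
      PUT SKIP LIST('not exactly 1');

   IF ABS(A - 1.0Q0) < EPS THEN      /* robust: "close enough" counts as equal */
      PUT SKIP LIST('equal to 1 within EPS');
END EPS_TEST;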

MZN wrote:
> Can such constructions as below to be an origin of numerical errors on
> PCs?
> 
> DCL A FLOAT(18);
> 
> IF A=1 THEN ...
> IF A=1E0 THEN ...
> IF A=1.0Q0 THEN ...
> 
> or more accurately should be
> 
> IF ABS(A-1.0Q0)<EPS THEN ...
> 
> where EPS is some small number.
> 
> In other words, constructions with equality sign successfully proceeded
> on S370, but may be that's not right for PC?
> 
0
multicsfan (63)
12/21/2005 12:26:13 AM
James J. Weinkam wrote:
>>
> Unfortunately, I no longer have any S/360 manuals and there don't seem 
> to be any free downloadable versions.  

http://www.bitsavers.org/pdf/ibm/360/poo/A22-6821-6_360PrincOpsJan67.pdf
http://www.bitsavers.org/pdf/ibm/360/poo/A22-6821-7_360PrincOpsDec67.pdf

0
Peter_Flass (956)
12/21/2005 12:53:26 AM
MZN wrote:
> Can such constructions as below to be an origin of numerical errors on
> PCs?

> DCL A FLOAT(18);

> IF A=1 THEN ...
> IF A=1E0 THEN ...
> IF A=1.0Q0 THEN ...

Most likely it could go either way.  There might be some that
round to 1 on PC and don't on S/360, and some that work the other
way around.

-- glen

0
gah (12851)
12/21/2005 5:44:33 AM
James J. Weinkam wrote:

(snip)

> Unfortunately, I no longer have any S/360 manuals and there don't seem 
> to be any free downloadable versions.  However, according to 
> GA22-7000-8, IBM System/370 Principles of Operation (1981), HER and HDR 
> do the following:

> The second operand is divided by 2 and the normalized quotient is placed 
> in the first operand location.

> The manual goes to describe the exact operation of the instruction in 
> detail, covering every conceivable eventuality.  Under "Programming 
> Notes" it states:

> 3. The result of HALVE is zero only when the second operand fraction is 
> zero, or when exonent underflow occurs with the exponent underflow mask 
> set to zero.  A fraction with zeros in every bit position, except for a 
> one in the rightmost bit position , does not become a zero after the 
> right shift.  This is brecause the one bit is preserved in the guard 
> digit and, when the result is not made a true zero because of underflow, 
> becomes the leftmost bitafter normalization of the result.

It seems that early S/360's didn't do that.  I believe it was changed
at the same time that a guard digit was added to long float operations.

-- glen

0
gah (12851)
12/21/2005 5:46:21 AM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:s7Vpf.39983$L7.34049@fe12.lga...
> robin wrote:
>
> The main intention of the HER and HDR instructions was to accelerate
> taking square roots.

First time I have heard of that one.
Division by 2 was and still is a fairly common operation
(averaging, etc) and given that float division took
an extraordinarily long time, HER and HDR were
golden opportunities to speed up algorithms, if only
compilers would use them.

> It is very well known that the entire 360 FP feature could have used
> some input from numerical analysts; it's shot full of design defects.

When the S/360 was designed (1964), hardware was expensive,
and a hexadecimal mantissa minimised the hardware and,
in particular, reduced shifting time [for normalization] to
a minimum.

> Some of the mistakes were corrected in the 1967 re-engineering, but
> others (most grossly, the hexadecimal orientation) had to wait for IEEE-754.

Like I said, hex mantissa was a design that maximised speed and
minimised cost.  IMHO it was a reasonable compromise.

> --
> John W. Kennedy
> "But now is a new thing which is very old--
> that the rich make themselves richer and not poorer,
> which is the true Gospel, for the poor's sake."
>    -- Charles Williams.  "Judgement at Chelmsford"


0
robin_v (2737)
12/21/2005 7:10:39 AM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:i9Vpf.39985$L7.38713@fe12.lga...
> robin wrote:
> > "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> > news:vYypf.39224$L7.37622@fe12.lga...
> >> robin wrote:
> >>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> >>> news:T4fpf.39027$L7.8883@fe12.lga...
> >
> >>> This was for the S/360, and it was not major; it involved
> >>> adding the guard digit to the Floating-Point arithmetic unit.
> >>> It is irrelevant to this case.  Its main effect was to
> >>> improve accuracy for single precision working.  MZN
> >>> has been using double precision and extended precision.
> >>> Even on S/360 and S/370 the effects on DP operations
> >>> were not anywhere noticeable as on single precision.
> >> The guard digit was added to double precision,
> >
> > I did not say otherwise.  I was referring to the fact that
> > the guard digit had more effect on single precision.
>
> But it was always there in single precision. The 1967 re-engineering
> added it to double precision.

Why not just read what I wrote?  I said:
     "... the guard digit had more effect on single precision".
>
> --
> John W. Kennedy


0
robin_v (2737)
12/21/2005 7:10:40 AM
"MZN" <MikeZmn@gmail.com> wrote in message
news:1135099300.244718.170560@f14g2000cwb.googlegroups.com...
> Can such constructions as below to be an origin of numerical errors on
> PCs?

They can be a source of numerical algorithm misbehaviour on
any system.

It's not that a value like 1, 1e0, 1q0 is not held precisely
as 1.000000000000000; it's that the floating-point value
of A may not be exactly unity.

> DCL A FLOAT(18);
>
> IF A=1 THEN ...
> IF A=1E0 THEN ...
> IF A=1.0Q0 THEN ...
>
> or more accurately should be
>
> IF ABS(A-1.0Q0)<EPS THEN ...
>
> where EPS is some small number.

This is better.  On IBM PL/I, the built-in function EPSILON
gives the smallest value that will change 1.0 to something different;
however, EPS is likely to be a much larger value than that,
for example, 1e-14*A
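For instance, a sketch only (the factor of 16 and the use of MAX are arbitrary
illustrative choices, not something taken from the compiler documentation):

DCL (A, B, TOL) FLOAT(18);
/* Relative tolerance built from the machine epsilon for this precision, */
/* scaled to the magnitude of the larger operand.                        */
TOL = 16 * EPSILON(A) * MAX(ABS(A), ABS(B));
IF ABS(A - B) <= TOL THEN
   PUT SKIP LIST('A and B agree to within rounding');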

> In other words, constructions with equality sign successfully proceeded
> on S370, but may be that's not right for PC?

There's no guarantee that a program will behave precisely
the same way to the last bit when it is run on a different
kind of machine.  If the algorithm has a basic numerical flaw
in it, it may not run at all or it may produce the wrong
result.

You already know that the IBM mainframe can use a
different floating-point form (Hex) compared to PC (IEEE).


0
robin_v (2737)
12/21/2005 7:10:40 AM
Peter Flass wrote:
> James J. Weinkam wrote:
> 
>>>
>> Unfortunately, I no longer have any S/360 manuals and there don't seem 
>> to be any free downloadable versions.  
> 
> 
> http://www.bitsavers.org/pdf/ibm/360/poo/A22-6821-6_360PrincOpsJan67.pdf
> http://www.bitsavers.org/pdf/ibm/360/poo/A22-6821-7_360PrincOpsDec67.pdf
> 
Thanks for this reference.  A22-6821-7 gives substantially the same description 
of HALVE as the 370 manual I quoted earlier.  Apparently by 1967 they had got it 
right on the 360.
0
jjw (608)
12/21/2005 7:42:47 AM
In our case, the worst example (because it slipped through testing and hit us 
only in production), our cubic spline algorithm sometimes failed to converge 
on IEEE-754 when it did on /370, because the rounding effects exceeded our 
convergence check. We altered the check to consider that IEEE float has a 
longer exponent and shorter mantissa and have no further production 
problems.

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1134900368.945970.27540@g47g2000cwa.googlegroups.com...

Mark Yudkin wrote:

> There was a packaging bug in the original FP13 and the LRM was the Guide! 
> I
> PMR'd it and the fix pack was replaced (the replacement also fixed a
> regression in the SQL precompiler I also PMR'd). Grab the latest FP13.
So, FP13 has two versions at least. It's surprise for me. I'll do that

> The environment option is IBM.OPTIONS, not IBM.OPTION, so it's not
> surprising it didn't work.
Sorry, I was wrong. Now I can set this variable through project
properties too.

> It is normal either to need a support contract, ot to have to pay if the
> case is not considered a bug. However, I do not work for IBM.
OK, I'll be pay for that.

> Fortran and PL/I both implement IEEE-754 and hence there should be no
> difference. Unless you're still using HEXADEC, in which case you are 
> forcing
> pairs of rounding errors. Make sure that all of your floating point values
> are IEEE.
Where I can read detailed description in floating point implementation
on S370 and IEEE-754? Some experts on migration from mainframe here
told me that IBM machines had different implementations for different
models of S370 system.

Also, I'll be very appreciated for some examples of PL/I code that
definitely give a problem.


0
12/21/2005 8:01:28 AM
Mark Yudkin wrote:

> In our case, the worst example (because it slipped though testing and hit us 
> only in production), our cubic spline algorithm sometimes failed to converge 
> on IEE-754 when it did on /370, because the rounding effects exceeded our 
> convergence check. We altered the check to consider that IEEE float has a 
> longer exponent and shorter mantissa and have no further production 
> problems.

(snip)

> Where I can read detailed description in floating point implementation
> on S370 and IEEE-754? 
http://publibfp.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/dz9zr003/CCONTENTS

This describes both HFP (Hex Floating Point, like S/370)
and BFP (IEEE-754).

 > Some experts on migration from mainframe here
 > told me that IBM machines had different implementations for different
 > models of S370 system.

Well, the 360/91 and related machines used a different divide algorithm
which generated a rounded quotient instead of the architecture specified
truncated quotient.  I don't believe any of those machines are around,
though.

It might be, though, that the difference between IEEE rounding and
S/370 truncating the quotient makes a difference in your test.

-- glen

0
gah (12851)
12/21/2005 9:00:35 AM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:i9Vpf.39985$L7.38713@fe12.lga...
>> robin wrote:
>>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>> news:vYypf.39224$L7.37622@fe12.lga...
>>>> robin wrote:
>>>>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>>>> news:T4fpf.39027$L7.8883@fe12.lga...
>>>>> This was for the S/360, and it was not major; it involved
>>>>> adding the guard digit to the Floating-Point arithmetic unit.
>>>>> It is irrelevant to this case.  Its main effect was to
>>>>> improve accuracy for single precision working.  MZN
>>>>> has been using double precision and extended precision.
>>>>> Even on S/360 and S/370 the effects on DP operations
>>>>> were not anywhere noticeable as on single precision.
>>>> The guard digit was added to double precision,
>>> I did not say otherwise.  I was referring to the fact that
>>> the guard digit had more effect on single precision.
>> But it was always there in single precision. The 1967 re-engineering
>> added it to double precision.
> 
> Why not just read what I wrote?  I said:

I did.

"it involved adding the guard digit to the Floating-Point arithmetic unit"

I say again, the '67 EC added the guard digit to double precision, but 
not to single precision, where it already existed.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
12/21/2005 6:19:03 PM
Another strange compiler behavior:

DCL (J, SI, N) BIN FIXED(31);

DO SI=1 TO N;
      J=(-1B)**SI;
....
Compiler gives message for operator J=...
IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to FIXED
BIN(31) will be done by library call.
for the case of
     J=(-1)**SI;
IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to FIXED
BIN(31) will be done by library call.
But in both cases there are no FLOAT variables, yet the compiler
considers -1 or -1B as FLOAT.

Although both cases, and even completely replacing it with different code,
make no difference to the results.

0
mikezmn (64)
12/21/2005 6:28:00 PM
Mark,

It would be very useful for me if you could give an example in code (if you
remember it, of course).

Does it mean that any operators like
IF A=B THEN
should be changed to
IF ABS(A-B)/B<1.0q-14 THEN
for additional portability?

In my case I found and changed only one, without any change in the result.

0
mikezmn (64)
12/21/2005 6:39:32 PM
MZN wrote:
> Another strange compiler behavior:
> 
> DCL (J, SI, N) BIN FIXED(31);
> 
> DO SI=1 TO N;
>       J=(-1B)**SI;
> ...
> Compiler gives message for operator J=...
> IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to FIXED
> BIN(31) will be done by library call.
> for the case of
>      J=(-1)**SI;
> IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to FIXED
> BIN(31) will be done by library call.
> But in both cases there are no any FLOAT variables, but compiler
> consider -1 or -1B as FLOAT.
> 
> Although both cases, and even its complete exchaging by different code
> do not matter for results.
> 
Except in special cases where the exponent is a sufficiently small integer 
constant, the result of exponentiation is always FLOAT.  It is that result which 
must be converted for assignment.
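A sketch of that distinction (the declarations and constants are illustrative
only, not taken from the program under discussion):

DCL (J, SI) BIN FIXED(31);
J = (-1B)**3;     /* exponent is a small integer constant: the result stays FIXED BIN */
J = (-1B)**SI;    /* exponent is a variable: the result is FLOAT and must be converted */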
0
jjw (608)
12/21/2005 11:42:04 PM
"MZN" <MikeZmn@gmail.com> wrote in message
news:1135190372.506431.325700@g44g2000cwa.googlegroups.com...
> Does it mean, that any operators like
> IF A=B THEN
> should be changed by
> IF ABS(A-B)/B<1.0q-14 THEN
> for additional portability?

Something like that.



0
robin_v (2737)
12/22/2005 4:24:53 AM
"MZN" <MikeZmn@gmail.com> wrote in message
news:1135189680.582849.292440@g14g2000cwa.googlegroups.com...
> Another strange compiler behavior:
>
> DCL (J, SI, N) BIN FIXED(31);
>
> DO SI=1 TO N;
>       J=(-1B)**SI;
> ...
> Compiler gives message for operator J=...
> IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to FIXED
> BIN(31) will be done by library call.
> for the case of
>      J=(-1)**SI;
> IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to FIXED
> BIN(31) will be done by library call.
> But in both cases there are no any FLOAT variables, but compiler
> consider -1 or -1B as FLOAT.

No, this is not "strange" behaviour.
The result is always FLOAT for this, because it is not one of
the special cases (the special case being that the result is FIXED BINARY
iff the result will fit in 31 bits based on the declared precision of N).

Actually, this is a grossly inefficient way to change sign.
Better is to include X = -X;
somewhere in the loop, and to use X.



0
robin_v (2737)
12/22/2005 4:24:53 AM
On Thu, 22 Dec 2005 04:24:53 GMT, robin <robin_v@bigpond.com> wrote:

> "MZN" <MikeZmn@gmail.com> wrote in message
> news:1135189680.582849.292440@g14g2000cwa.googlegroups.com...
>> Another strange compiler behavior:
>>
>> DCL (J, SI, N) BIN FIXED(31);
>>
>> DO SI=1 TO N;
>>       J=(-1B)**SI;
>> ...
>> Compiler gives message for operator J=...
>> IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to FIXED
>> BIN(31) will be done by library call.
>> for the case of
>>      J=(-1)**SI;
>> IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to FIXED
>> BIN(31) will be done by library call.
>> But in both cases there are no any FLOAT variables, but compiler
>> consider -1 or -1B as FLOAT.
>
> No, this is not "strange" behaviour.
> The result is always FLOAT for this, because it is not one of
> the special cases (specal cases being that the result is FIXED BINARY
> iff the result will fit in 31 bits based on the declared precision of N.
>
> Actually, this is a grossly inefficient way to change sign.
> Better is to include X = -X;
> somewhere in the loop, and to use X.
>
>
or a static array initialized to 1,-1,1,-1...  no test required
or unroll the loop once so both odd and even terms are treated on the same pass
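A sketch of that unrolling (TERM, whether an array or a function, and an even
N are assumptions made purely for illustration):

DCL SUM     FLOAT(18);
DCL (SI, N) BIN FIXED(31);
/* Sum of (-1)**SI * TERM(SI) for SI = 1 to N, with N assumed even:   */
/* each pass handles one odd (negative) and one even (positive) term, */
/* so neither a sign variable nor a test is needed.                   */
SUM = 0;
DO SI = 1 TO N BY 2;
   SUM = SUM - TERM(SI) + TERM(SI+1);
END;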

0
tom284 (1839)
12/22/2005 4:06:30 PM
On Thu, 22 Dec 2005 22:46:21 GMT, robin <robin_v@bigpond.com> wrote:

> "Tom Linden" <tom@kednos.com> wrote in message
> news:ops160c4nazgicya@hyrrokkin...
>> On Thu, 22 Dec 2005 04:24:53 GMT, robin <robin_v@bigpond.com> wrote:
>>
>> > "MZN" <MikeZmn@gmail.com> wrote in message
>> > news:1135189680.582849.292440@g14g2000cwa.googlegroups.com...
>> >> Another strange compiler behavior:
>> >>
>> >> DCL (J, SI, N) BIN FIXED(31);
>> >>
>> >> DO SI=1 TO N;
>> >>       J=(-1B)**SI;
>> >> ...
>> >> Compiler gives message for operator J=...
>> >> IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to FIXED
>> >> BIN(31) will be done by library call.
>> >> for the case of
>> >>      J=(-1)**SI;
>> >> IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to FIXED
>> >> BIN(31) will be done by library call.
>> >> But in both cases there are no any FLOAT variables, but compiler
>> >> consider -1 or -1B as FLOAT.
>> >
>> > No, this is not "strange" behaviour.
>> > The result is always FLOAT for this, because it is not one of
>> > the special cases (specal cases being that the result is FIXED BINARY
>> > iff the result will fit in 31 bits based on the declared precision of  
>> N.
>> >
>> > Actually, this is a grossly inefficient way to change sign.
>> > Better is to include X = -X;
>> > somewhere in the loop, and to use X.
>> >
>> or a static array initialized to 1,-1,1,-1...  no test required
>
> This is impractical, as N and the loop control variable SI
> ae defined as BIXED BIN(31).

Well, in practice these are economized polynomial expansions of typically
less than 30 terms so it is actually a good approach.  For example, for
computing satellite orbits we don't need to go higher than 24th spherical
harmonic of earth's gravity field.
>
>> or unroll the loop once so both odd an even treated on same pass
>
> Possible if the loop is short, but otherwise not practical, and in
> any case, unnecessary.

The loop will typically be short, as cited above, so this is really the
most efficient way.

>
>
>

0
tom284 (1839)
12/22/2005 10:40:10 PM
"Tom Linden" <tom@kednos.com> wrote in message
news:ops160c4nazgicya@hyrrokkin...
> On Thu, 22 Dec 2005 04:24:53 GMT, robin <robin_v@bigpond.com> wrote:
>
> > "MZN" <MikeZmn@gmail.com> wrote in message
> > news:1135189680.582849.292440@g14g2000cwa.googlegroups.com...
> >> Another strange compiler behavior:
> >>
> >> DCL (J, SI, N) BIN FIXED(31);
> >>
> >> DO SI=1 TO N;
> >>       J=(-1B)**SI;
> >> ...
> >> Compiler gives message for operator J=...
> >> IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to FIXED
> >> BIN(31) will be done by library call.
> >> for the case of
> >>      J=(-1)**SI;
> >> IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to FIXED
> >> BIN(31) will be done by library call.
> >> But in both cases there are no any FLOAT variables, but compiler
> >> consider -1 or -1B as FLOAT.
> >
> > No, this is not "strange" behaviour.
> > The result is always FLOAT for this, because it is not one of
> > the special cases (specal cases being that the result is FIXED BINARY
> > iff the result will fit in 31 bits based on the declared precision of N.
> >
> > Actually, this is a grossly inefficient way to change sign.
> > Better is to include X = -X;
> > somewhere in the loop, and to use X.
> >
> or a static array initialized to 1,-1,1,-1...  no test required

This is impractical, as N and the loop control variable SI
are defined as FIXED BIN(31).

> or unroll the loop once so both odd an even treated on same pass

Possible if the loop is short, but otherwise not practical, and in
any case, unnecessary.



0
robin_v (2737)
12/22/2005 10:46:21 PM
No, there is nothing strange in the behaviour. Rather the Windows platform 
provides more warnings about the defined behaviour of arithmetic. The rules 
are as defined in the LRM, and are consistent with the host. I strongly 
recommend your using "dcl ... value()" to address such issues.

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1135189680.582849.292440@g14g2000cwa.googlegroups.com...
> Another strange compiler behavior:
>
> DCL (J, SI, N) BIN FIXED(31);
>
> DO SI=1 TO N;
>      J=(-1B)**SI;
> ...
> Compiler gives message for operator J=...
> IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to FIXED
> BIN(31) will be done by library call.
> for the case of
>     J=(-1)**SI;
> IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to FIXED
> BIN(31) will be done by library call.
> But in both cases there are no any FLOAT variables, but compiler
> consider -1 or -1B as FLOAT.
>
> Although both cases, and even its complete exchaging by different code
> do not matter for results.
> 


0
12/23/2005 7:02:09 AM
That sort of logic is anyway normal, although your case is buggy (if B < 0 
your test always succeeds, if B=0 you get a zerodivide).

I assume also that A and B are extended, as 1q-14 is quite small. You should 
consider using the floating point inquiry BIFs instead of constants - that's 
what they're for.

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1135190372.506431.325700@g44g2000cwa.googlegroups.com...
> Mark,
>
> It's seems very useful for me, if you give me example by code (if you
> remember it, of course).
>
> Does it mean, that any operators like
> IF A=B THEN
> should be changed by
> IF ABS(A-B)/B<1.0q-14 THEN
> for additional portability?
>
> In my case I found and changed one only without changing of result.
> 


0
12/23/2005 7:05:49 AM
Thanks to all of you for the responses.

DCL (J, SI, N) BIN FIXED(31);
....
DO SI=1 TO N;
      J=(-1B)**SI;
appeared once in the whole program, and I changed it to
DCL (J, SI, N) BIN FIXED(31);
J=1;
....
DO SI=1 TO N;
      ...
      J=-J;
without affecting the results.

Unfortunately, I couldn't unroll loops because N can be as large as a
thousand.

Of course, in
IF ABS(A-B)/B<1.0q-14 THEN
a ZERODIVIDE should be prevented.
Finally, in the general case it should be something like:

DCL (A, B, C) FLOAT(18);
C=ABS(A-B);
IF (A^=0.0Q0 & C/ABS(A)>TINY(A)) | (B^=0.0Q0 & C/ABS(B)>TINY(A)) THEN

I have some more questions:
1. In the program I often use EXP(1.0Q0I*A), where A reaches 100..10000,
so we have loss of significance. Are there known differences in how
trigonometric functions are evaluated on mainframes and PCs?

2. In VA PL/I there are trigonometric functions with an F suffix (TANF,
SINF etc.). They take only REAL arguments, but work at the
hardware level (faster?). Do they have advantages in terms of precision?

0
mikezmn (64)
12/23/2005 4:37:20 PM
Thank you, Mark,

> The rules are as defined in the LRM, and are consistent with the host.

I don't understand that phrase. What does LRM mean?

> I strongly recommend your using "dcl ... value()" to address such issues.

Could you explain it in more detail?

0
mikezmn (64)
12/23/2005 7:21:43 PM
On 23 Dec 2005 11:21:43 -0800, MZN <MikeZmn@gmail.com> wrote:

> Thank you, Mark,
>
>> The rules are as defined in the LRM, and are consistent with the host.
>
> I don't understand that phrase. What means LRM?

Language Reference Manual
>
>> I strongly recommend your using "dcl ... value()" to address such  
>> issues.
>
> Could you explain it more detailed?
>

0
tom284 (1839)
12/24/2005 2:45:36 PM
LRM is Language Reference Manual.
dcl ... value is documented in the LRM. The point is that numeric constants 
in PL/I have an implicit base, scale, precision and mode. The rules for PL/I 
expressions consider these. The result of using these can be that arithmetic 
has "strange" results.

As the LRM illustrates:
<quote>
  dcl I fixed bin(31,5) init(1);
      I = I+.1;

The value of I is now 1.0625. This is because .1 is converted to FIXED 
BINARY (5,4), so that the nearest binary approximation is 0.0001B (no 
rounding occurs). The decimal equivalent of this is .0625. The result 
achieved by specifying .1000 in place of .1 would be different.
</quote>

Such issues can be bypassed very simply by using dcl ... value to specify a 
named constant having the attributes you want, a recommendation that is also 
discussed in the documentation.
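A minimal sketch of that named-constant approach (the names TENTH and I, and
the attributes chosen, are illustrative assumptions):

DCL TENTH FIXED DEC(15,5) VALUE(0.1); /* VALUE fixes the base, scale and precision, */
DCL I     FIXED DEC(15,5) INIT(1);    /* so ".1" is converted on your terms         */

I = I + TENTH;                        /* I is now exactly 1.10000 */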

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1135365703.687993.206780@f14g2000cwb.googlegroups.com...
> Thank you, Mark,
>
>> The rules are as defined in the LRM, and are consistent with the host.
>
> I don't understand that phrase. What means LRM?
>
>> I strongly recommend your using "dcl ... value()" to address such issues.
>
> Could you explain it more detailed?
> 


0
12/25/2005 10:32:17 AM
The accuracy of the PL/I mathematical library functions differs between 
/370 and IEEE-754 because the numeric representation does, as I keep telling 
you. Hence the answer is that there are known differences that are 
unavoidable, and that this should be quite obvious. See also the LRM under 
"Accuracy of mathematical function".

The PL/I library is based on the LE (Language Environment) algorithms as 
ported to Windows; they are contained in HEPWM20.DLL (and HEPWS20.DLL). 
Other IBM languages use the same LE implementations. Hence changing the 
language won't change anything.

As stated in the LRM under "Accuracy of mathematical function": "The 
mathematical built-in functions that are implemented using inline machine 
instructions produce results of different accuracy." For specific functions, 
the LRM indicates "The accuracy of the result is set by the hardware." This 
is polite for "less accurate", and I would therefore recommend avoiding the 
hardware-based instructions when there is no pressing need for their use.

May I suggest your taking the time to actually consult the LRM before asking 
questions here that it answers?

"MZN" <MikeZmn@gmail.com> wrote in message 
news:1135355840.393571.59310@g43g2000cwa.googlegroups.com...
> Thanks all of you for responses
>
> DCL (J, SI, N) BIN FIXED(31);
> ...
> DO SI=1 TO N;
>      J=(-1B)**SI;
> was once in whole program and it changed by
> DCL (J, SI, N) BIN FIXED(31);
> J=1;
> ...
> DO SI=1 TO N;
>      ...
>      J=-J;
> without influence on results
>
> Unfortunately, I couldn't unroll loops due to N can be equal to
> thousand.
>
> Of course, in
> IF ABS(A-B)/B<1.0q-14 THEN
> zerodivide should be prevented
> Finally in general caseit should be like:
>
> DCL (A, B, C) FLOAT(18);
> C=ABS(A-B);
> IF (A^=0.0Q0 & C/ABS(A)>TINY(A)) | (B^=0.0Q0 & C/ABS(B)>TINY(A)) TNEN
>
> I have another questions:
> 1. In program I'm often use EXP(1.0Q0I*A), where A becomes 100..10000,
> so we have loss of significance. Are known some differences in
> proceeding of trigonometrical functions on mainframes and PCs?
>
> 2. In VA PL/I there are trigonometrical functions with F suffix (TANF,
> SINF etc.). They should be have REAL arguments only, but work at
> hardware level (faster?). Have they advantages in precision sense?
> 


0
12/25/2005 10:45:07 AM
"Tom Linden" <tom@kednos.com> wrote in message
news:ops17ik8iizgicya@hyrrokkin...
> On Thu, 22 Dec 2005 22:46:21 GMT, robin <robin_v@bigpond.com> wrote:
>
> > "Tom Linden" <tom@kednos.com> wrote in message
> > news:ops160c4nazgicya@hyrrokkin...
> >> On Thu, 22 Dec 2005 04:24:53 GMT, robin <robin_v@bigpond.com> wrote:
> >>
> >> > "MZN" <MikeZmn@gmail.com> wrote in message
> >> > news:1135189680.582849.292440@g14g2000cwa.googlegroups.com...
> >> >> Another strange compiler behavior:
> >> >>
> >> >> DCL (J, SI, N) BIN FIXED(31);
> >> >>
> >> >> DO SI=1 TO N;
> >> >>       J=(-1B)**SI;
> >> >> ...
> >> >> Compiler gives message for operator J=...
> >> >> IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to FIXED
> >> >> BIN(31) will be done by library call.
> >> >> for the case of
> >> >>      J=(-1)**SI;
> >> >> IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to FIXED
> >> >> BIN(31) will be done by library call.
> >> >> But in both cases there are no any FLOAT variables, but compiler
> >> >> consider -1 or -1B as FLOAT.
> >> >
> >> > No, this is not "strange" behaviour.
> >> > The result is always FLOAT for this, because it is not one of
> >> > the special cases (specal cases being that the result is FIXED BINARY
> >> > iff the result will fit in 31 bits based on the declared precision of
> >> N.
> >> >
> >> > Actually, this is a grossly inefficient way to change sign.
> >> > Better is to include X = -X;
> >> > somewhere in the loop, and to use X.
> >> >
> >> or a static array initialized to 1,-1,1,-1...  no test required
> >
> > This is impractical, as N and the loop control variable SI
> > ae defined as BIXED BIN(31).
>
> Well, in practice these are economized polynomial expansions of typically
> less than 30 terms

I wasn't referring to your specific case; I was referring to the
code that MZN supplied.

> so it is actually a good approach.  For example, for
> computing satellite orbits we don't need to go higher than 24th spherical
> harmonic of earth's gravity field.
> >
> >> or unroll the loop once so both odd an even treated on same pass
> >
> > Possible if the loop is short, but otherwise not practical, and in
> > any case, unnecessary.
>
> The loop will typically be short, as cited above, so this is really the
> most efficient way.

No, using an array is not particularly efficient.  Nor is it necessary.
x = -x; or similar is the simplest (KISS) and quickest, and smallest in
terms of storage.


0
robin_v (2737)
12/25/2005 10:52:01 AM
On Sun, 25 Dec 2005 10:52:01 GMT, robin <robin_v@bigpond.com> wrote:

>
> "Tom Linden" <tom@kednos.com> wrote in message
> news:ops17ik8iizgicya@hyrrokkin...
>> On Thu, 22 Dec 2005 22:46:21 GMT, robin <robin_v@bigpond.com> wrote:
>>
>> > "Tom Linden" <tom@kednos.com> wrote in message
>> > news:ops160c4nazgicya@hyrrokkin...
>> >> On Thu, 22 Dec 2005 04:24:53 GMT, robin <robin_v@bigpond.com> wrote:
>> >>
>> >> > "MZN" <MikeZmn@gmail.com> wrote in message
>> >> > news:1135189680.582849.292440@g14g2000cwa.googlegroups.com...
>> >> >> Another strange compiler behavior:
>> >> >>
>> >> >> DCL (J, SI, N) BIN FIXED(31);
>> >> >>
>> >> >> DO SI=1 TO N;
>> >> >>       J=(-1B)**SI;
>> >> >> ...
>> >> >> Compiler gives message for operator J=...
>> >> >> IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to  
>> FIXED
>> >> >> BIN(31) will be done by library call.
>> >> >> for the case of
>> >> >>      J=(-1)**SI;
>> >> >> IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to  
>> FIXED
>> >> >> BIN(31) will be done by library call.
>> >> >> But in both cases there are no any FLOAT variables, but compiler
>> >> >> consider -1 or -1B as FLOAT.
>> >> >
>> >> > No, this is not "strange" behaviour.
>> >> > The result is always FLOAT for this, because it is not one of
>> >> > the special cases (specal cases being that the result is FIXED  
>> BINARY
>> >> > iff the result will fit in 31 bits based on the declared precision  
>> of
>> >> N.
>> >> >
>> >> > Actually, this is a grossly inefficient way to change sign.
>> >> > Better is to include X = -X;
>> >> > somewhere in the loop, and to use X.
>> >> >
>> >> or a static array initialized to 1,-1,1,-1...  no test required
>> >
>> > This is impractical, as N and the loop control variable SI
>> > ae defined as BIXED BIN(31).
>>
>> Well, in practice these are economized polynomial expansions of  
>> typically
>> less than 30 terms
>
> I wasn't referring to your specific case; I was referring to the
> code that MZN supplied.
>
>> so it is actually a good approach.  For example, for
>> computing satellite orbits we don't need to go higher than 24th  
>> spherical
>> harmonic of earth's gravity field.
>> >
>> >> or unroll the loop once so both odd an even treated on same pass
>> >
>> > Possible if the loop is short, but otherwise not practical, and in
>> > any case, unnecessary.
>>
>> The loop will typically be short, as cited above, so this is really the
>> most efficient way.
>
> No, using an array is not particularly efficient.  Nor is it necessary.
> x = -x; or similar is the simplest (KISS) and quickest, and smallest in
> terms of storage.
>
The code was an expansion with alternating sign, which is why I suggested
unrolling the loop once to simultaneously treat odd and even terms and
thereby avoid the test for negation.

0
tom284 (1839)
12/25/2005 2:06:23 PM
"MZN" <MikeZmn@gmail.com> wrote in message
news:1135355840.393571.59310@g43g2000cwa.googlegroups.com...
> Thanks all of you for responses
>
> DCL (J, SI, N) BIN FIXED(31);
> ...
> DO SI=1 TO N;
>       J=(-1B)**SI;
> was once in whole program and it changed by
> DCL (J, SI, N) BIN FIXED(31);
> J=1;
> ...
> DO SI=1 TO N;
>       ...
>       J=-J;
> without influence on results

As expected.  It will, however, eliminate compiler warning messages
and will be faster.  How much faster depends on the magnitude of N,
the number of places that J=(-1B)**SI; was used, and the length of the loop.

> Unfortunately, I couldn't unroll loops due to N can be equal to
> thousand.

A loop can be unrolled by executing the body twice in each loop,
or three times, or four, etc.

However, unrolling does not necessarily save anything.

> Of course, in
> IF ABS(A-B)/B<1.0q-14 THEN
> zerodivide should be prevented
> Finally in general caseit should be like:
>
> DCL (A, B, C) FLOAT(18);
> C=ABS(A-B);
> IF (A^=0.0Q0 & C/ABS(A)>TINY(A)) | (B^=0.0Q0 & C/ABS(B)>TINY(A)) TNEN
>
> I have another questions:
> 1. In program I'm often use EXP(1.0Q0I*A), where A becomes 100..10000,
> so we have loss of significance. Are known some differences in
> proceeding of trigonometrical functions on mainframes and PCs?
>
> 2. In VA PL/I there are trigonometrical functions with F suffix (TANF,
> SINF etc.). They should be have REAL arguments only, but work at
> hardware level (faster?). Have they advantages in precision sense?

The LRM doesn't say much about what the differences are.
Advantage should be speed.  The functions give hardware exceptions
for values out of range.


0
robin_v (2737)
12/26/2005 12:23:17 AM
"Mark Yudkin" <myudkinATcompuserveDOTcom@boingboing.org> wrote in message
news:43ae7533$0$1156$5402220f@news.sunrise.ch...
> LRM is Language Reference Manual.
> dcl ... value is documented in the LRM. The point is that numeric constants
> in PL/I have an implicit base, scale, precision and mode. The rules for PL/I
> expressions consider these. The result of using these can be that arithmetic
> has "strange" results.
>
> As the LRM illustrates:
> <quote>
>   dcl I fixed bin(31,5) init(1);
>       I = I+.1;
>
> The value of I is now 1.0625. This is because .1 is converted to FIXED
> BINARY (5,4), so that the nearest binary approximation is 0.0001B (no
> rounding occurs). The decimal equivalent of this is .0625. The result
> achieved by specifying .1000 in place of .1 would be different.

True, but not especially different.  The sum would yield
1.09375.
The difference from the decimal sum of 1.1 is caused by the fact that
the declaration of I does not cater for a sufficient number of places
after the binary point.

The example in the manual is not a good one, and a better
way to illustrate it is to have
dcl I fixed binary (31, 28);
and then I = I + .1; yields 1.0625 [initial value of I is 1 as before]
but that
dcl I fixed binary(31,28), tenth fixed binary (31,28) value (0.1);
and then I = I + tenth;
gives
1.099999999
approx.

> </quote>
>
> Such issues can be bypassed very simply by using dcl ... value to specify a
> named constant having the attributes you want, a recommendation that is also
> discussed in the documentation.

or, simply, to specify the computation as

I = I + 0.100000000;
which is clearer IMHO.

But if you want it to be accurate, then
dcl I fixed decimal (15,5);
I = I + .1;
always gives 1.10000 precisely [again, assuming an initial value of I of 1].


0
robin_v (2737)
12/26/2005 12:23:18 AM
On Tue, 27 Dec 2005 00:33:11 GMT, robin <robin_v@bigpond.com> wrote:

>> The code was an expansion with alternating sign, which is why I suggeted
>> unrolling the loop once to simultaneously treat odd and even terms and
>> therby by avoid the test for negation.
> No test is required for negation.
> Using one or other of the suggestions (or variations on them) is  
> sufficient.

We are obviously not talking about the same thing.  I was referring to
the code where he had (-1)**n as part of the coefficient in an expansion.
0
tom284 (1839)
12/27/2005 12:24:24 AM
"Tom Linden" <tom@kednos.com> wrote in message
news:ops2cesxfhzgicya@hyrrokkin...
> On Sun, 25 Dec 2005 10:52:01 GMT, robin <robin_v@bigpond.com> wrote:
>
> > "Tom Linden" <tom@kednos.com> wrote in message
> > news:ops17ik8iizgicya@hyrrokkin...
> >> On Thu, 22 Dec 2005 22:46:21 GMT, robin <robin_v@bigpond.com> wrote:
> >>
> >> > "Tom Linden" <tom@kednos.com> wrote in message
> >> > news:ops160c4nazgicya@hyrrokkin...
> >> >> On Thu, 22 Dec 2005 04:24:53 GMT, robin <robin_v@bigpond.com> wrote:
> >> >>
> >> >> > "MZN" <MikeZmn@gmail.com> wrote in message
> >> >> > news:1135189680.582849.292440@g14g2000cwa.googlegroups.com...
> >> >> >> Another strange compiler behavior:
> >> >> >>
> >> >> >> DCL (J, SI, N) BIN FIXED(31);
> >> >> >>
> >> >> >> DO SI=1 TO N;
> >> >> >>       J=(-1B)**SI;
> >> >> >> ...
> >> >> >> Compiler gives message for operator J=...
> >> >> >> IBM2805I I For assignment to J, conversion from FLOAT BIN(1) to
> >> FIXED
> >> >> >> BIN(31) will be done by library call.
> >> >> >> for the case of
> >> >> >>      J=(-1)**SI;
> >> >> >> IBM2805I I For assignment to J, conversion from FLOAT DEC(1) to
> >> FIXED
> >> >> >> BIN(31) will be done by library call.
> >> >> >> But in both cases there are no any FLOAT variables, but compiler
> >> >> >> consider -1 or -1B as FLOAT.
> >> >> >
> >> >> > No, this is not "strange" behaviour.
> >> >> > The result is always FLOAT for this, because it is not one of
> >> >> > the special cases (the special case being that the result is
> >> >> > FIXED BINARY iff the result will fit in 31 bits based on the
> >> >> > declared precision of N).
> >> >> >
> >> >> > Actually, this is a grossly inefficient way to change sign.
> >> >> > Better is to include X = -X;
> >> >> > somewhere in the loop, and to use X.
> >> >> >
> >> >> or a static array initialized to 1,-1,1,-1...  no test required
> >> >
> >> > This is impractical, as N and the loop control variable SI
> >> > are defined as FIXED BIN(31).
> >>
> >> Well, in practice these are economized polynomial expansions of
> >> typically
> >> less than 30 terms
> >
> > I wasn't referring to your specific case; I was referring to the
> > code that MZN supplied.
> >
> >> so it is actually a good approach.  For example, for
> >> computing satellite orbits we don't need to go higher than 24th
> >> spherical
> >> harmonic of earth's gravity field.
> >> >
> >> >> or unroll the loop once so both odd and even terms are treated on the same pass
> >> >
> >> > Possible if the loop is short, but otherwise not practical, and in
> >> > any case, unnecessary.
> >>
> >> The loop will typically be short, as cited above, so this is really the
> >> most efficient way.
> >
> > No, using an array is not particularly efficient.  Nor is it necessary.
> > x = -x; or similar is the simplest (KISS) and quickest, and smallest in
> > terms of storage.
> >
> The code was an expansion with alternating sign, which is why I suggested
> unrolling the loop once to simultaneously treat odd and even terms and
> thereby avoid the test for negation.

No test is required for negation.
Using one or other of the suggestions (or variations on them) is sufficient.


0
robin_v (2737)
12/27/2005 12:33:11 AM
"Tom Linden" <tom@kednos.com> wrote in message
news:ops2e12yzvzgicya@hyrrokkin...
> On Tue, 27 Dec 2005 00:33:11 GMT, robin <robin_v@bigpond.com> wrote:
>
> >> The code was an expansion with alternating sign, which is why I suggested
> >> unrolling the loop once to simultaneously treat odd and even terms and
> >> thereby avoid the test for negation.
> > No test is required for negation.
> > Using one or other of the suggestions (or variations on them) is
> > sufficient.
>
> We are obviously not talking about the same thing.  I was referring to
> the code where he had (-1)**n  as part of the coefficient in an expansion

So was I.
No test is required for negation.


0
robin_v (2737)
12/27/2005 10:51:34 PM
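
For readers following the (-1)**SI sub-thread, here is a minimal PL/I sketch
of the sign-flip idea.  The names (SI, N, SIGN, SUM) and the TERM array are
placeholders of mine, not MZN's code; the point is only that the alternating
factor can be carried in a variable, with no exponentiation, no FLOAT
intermediate result and no test.

   dcl (si, n)  bin fixed(31);
   dcl sign     float bin(53);
   dcl sum      float bin(53) init(0);
   dcl term(30) float bin(53);       /* hypothetical precomputed series terms */

   n = 24;                           /* a short expansion, as Tom notes       */
   sign = 1;
   do si = 1 to n;
      sign = -sign;                  /* -1, 1, -1, ... matches (-1)**SI       */
      sum  = sum + sign*term(si);
   end;
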
I'm trying to convince MZN to RTFM, and posting examples from the manual 
appears to me to be a polite way to point out that his question is answered 
by the documentation he's failing to consult. I wasn't trying to post a 
"better" example.

"robin" <robin_v@bigpond.com> wrote in message 
news:WJGrf.107456$V7.63638@news-server.bigpond.net.au...
> "Mark Yudkin" <myudkinATcompuserveDOTcom@boingboing.org> wrote in message
> news:43ae7533$0$1156$5402220f@news.sunrise.ch...
>> LRM is Language Reference Manual.
>> dcl ... value is documented in the LRM. The point is that numeric 
>> constants
>> in PL/I have an implicit base, scale, precision and mode. The rules for 
>> PL/I
>> expressions consider these. The result of using these can be that 
>> arithmetic
>> has "strange" results.
>>
>> As the LRM illustrates:
>> <quote>
>>   dcl I fixed bin(31,5) init(1);
>>       I = I+.1;
>>
>> The value of I is now 1.0625. This is because .1 is converted to FIXED
>> BINARY (5,4), so that the nearest binary approximation is 0.0001B (no
>> rounding occurs). The decimal equivalent of this is .0625. The result
>> achieved by specifying .1000 in place of .1 would be different.
>
> True, but not especially different.  The sum would yield
> 1.09375
> The difference from the decimal sum of 1.1 is caused by the fact that
> the declaration of I does not cater for sufficient number of places
> after the binary point.
>
> The example in the manual is not a good one, and a better
> way to illustrate it is to have
> dcl I fixed binary (31, 28);
> and then I = I + .1; yields 1.0625 [initial value of I is 1 as before]
> but that
> dcl I fixed binary(31,28), tenth fixed binary (31,28) value (0.1);
> and then I = I + tenth;
> gives
> 1.099999999
> approx.
>
>> </quote>
>>
>> Such issues can be bypassed very simply by using dcl ... value to specify 
>> a
>> named constant having the attributes you want, a recommendation that is 
>> also
>> discussed in the documentation.
>
> or, simply, to specify the computation as
>
> I = I + 0.100000000;
> which is clearer IMHO.
>
> But if you want it to be accurate, then
> dcl I fixed decimal (15,5);
> I = I + .1;
> always gives 1.10000 precisely [again,assuming initial value of I as 1].
>
> 


0
12/30/2005 7:22:16 AM
"Mark Yudkin" <myudkinATcompuserveDOTcom@boingboing.org> wrote in message
news:43b4e020$0$1154$5402220f@news.sunrise.ch...
> I'm trying to convince MZN to RTFM, and posting examples from the manual
> appear to me to be a polite way to point out that his question is answered
> by the documentation he's failing to consult. I wasn't trying to post a
> "better" example.

Fair enough, but in this particular instance, the example in the
ref. manual is not a good one, which is why I elaborated.

> "robin" <robin_v@bigpond.com> wrote in message
> news:WJGrf.107456$V7.63638@news-server.bigpond.net.au...
> > "Mark Yudkin" <myudkinATcompuserveDOTcom@boingboing.org> wrote in message
> > news:43ae7533$0$1156$5402220f@news.sunrise.ch...
> >> LRM is Language Reference Manual.
> >> dcl ... value is documented in the LRM. The point is that numeric
> >> constants
> >> in PL/I have an implicit base, scale, precision and mode. The rules for
> >> PL/I
> >> expressions consider these. The result of using these can be that
> >> arithmetic
> >> has "strange" results.
> >>
> >> As the LRM illustrates:
> >> <quote>
> >>   dcl I fixed bin(31,5) init(1);
> >>       I = I+.1;
> >>
> >> The value of I is now 1.0625. This is because .1 is converted to FIXED
> >> BINARY (5,4), so that the nearest binary approximation is 0.0001B (no
> >> rounding occurs). The decimal equivalent of this is .0625. The result
> >> achieved by specifying .1000 in place of .1 would be different.
> >
> > True, but not especially different.  The sum would yield
> > 1.09375
> > The difference from the decimal sum of 1.1 is caused by the fact that
> > the declaration of I does not cater for sufficient number of places
> > after the binary point.
> >
> > The example in the manual is not a good one, and a better
> > way to illustrate it is to have
> > dcl I fixed binary (31, 28);
> > and then I = I + .1; yields 1.0625 [initial value of I is 1 as before]
> > but that
> > dcl I fixed binary(31,28), tenth fixed binary (31,28) value (0.1);
> > and then I = I + tenth;
> > gives
> > 1.099999999
> > approx.
> >
> >> </quote>
> >>
> >> Such issues can be bypassed very simply by using dcl ... value to specify
> >> a
> >> named constant having the attributes you want, a recommendation that is
> >> also
> >> discussed in the documentation.
> >
> > or, simply, to specify the computation as
> >
> > I = I + 0.100000000;
> > which is clearer IMHO.
> >
> > But if you want it to be accurate, then
> > dcl I fixed decimal (15,5);
> > I = I + .1;
> > always gives 1.10000 precisely [again,assuming initial value of I as 1].


0
robin_v (2737)
12/31/2005 7:39:04 AM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:i9Vpf.39985$L7.38713@fe12.lga...
> robin wrote:
> > "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> > news:vYypf.39224$L7.37622@fe12.lga...
> >> robin wrote:
> >>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> >>> news:T4fpf.39027$L7.8883@fe12.lga...
> >
> >>> This was for the S/360, and it was not major; it involved
> >>> adding the guard digit to the Floating-Point arithmetic unit.
> >>> It is irrelevant to this case.  Its main effect was to
> >>> improve accuracy for single precision working.  MZN
> >>> has been using double precision and extended precision.
> >>> Even on S/360 and S/370 the effects on DP operations
> >>> were nowhere near as noticeable as on single precision.
> >> The guard digit was added to double precision,
> >
> > I did not say otherwise.  I was referring to the fact that
> > the guard digit had more effect on single precision.
>
> But it was always there in single precision. The 1967 re-engineering
> added it to double precision.

Again, I did not say otherwise.  What I said was that the
guard digit had more effect on single precision.
    English Electric (and subsequently ICL) did not see fit to
retrofit the System 4 with a guard digit for double precision.


0
robin_v (2737)
12/31/2005 7:39:05 AM
"James J. Weinkam" <jjw@cs.sfu.ca> wrote in message
news:SY_pf.26191$Hl4.15544@clgrps13...
> glen herrmannsfeldt wrote:
> > robin wrote:
> >
> >> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> >> news:vYypf.39224$L7.37622@fe12.lga...
> >
> >>> The guard digit was added to double precision, postnormalization was
> >>> added to the HER and HDR instructions, and the behavior of overflow and
> >>> underflow was altered.
> >
> >> The HE, HER, HD, HDR set was a glaring design error - a faux pas.
> >> That it failed to post-normalise meant that it couldn't be used
> >> in a loop to divide by, say, 32.
> >
> > Well, first there are no HE or HD instructions.
> >
> > I am pretty sure that HER and HDR will, and always have, done a
> > one digit shift when needed.  It might be that they won't normalize
> > a previously unnormalized number, but in that rare case using AER
> > or some other that will normalize should be fine.
> >
> > (snip)
> >
> > -- glen
> >
> Unfortunately, I no longer have any S/360 manuals and there don't seem to
> be any free downloadable versions.  However, according to GA22-7000-8, IBM
> System/370 Principles of Operation (1981), HER and HDR do the following:
>
> The second operand is divided by 2 and the normalized quotient is placed
> in the first operand location.
>
> The manual goes on to describe the exact operation of the instruction in
> detail, covering every conceivable eventuality.  Under "Programming Notes"
> it states:
>
> 3. The result of HALVE is zero only when the second operand fraction is
> zero, or when exponent underflow occurs with the exponent underflow mask
> set to zero.  A fraction with zeros in every bit position, except for a
> one in the rightmost bit position, does not become a zero after the right
> shift.  This is because the one bit is preserved in the guard digit and,
> when the result is not made a true zero because of underflow, becomes the
> leftmost bit after normalization of the result.
>
> So much for not fully normalizing a previously unnormalized number.
>
> As I recall the S/360, all floating-point operations produced normalized
> results except for the various load instructions and the unnormalized
> instructions.

Except, of course, initially for HER and HDR.  The 1964 Principles of Operation
makes this clear.
    Indeed, the RCA Spectra (and the EE System 4, which was a licensed copy)
did not normalize in the case of HER and HDR.  That never changed for the
EE System 4 (I don't know what RCA subsequently did for the Spectra.)
Nor did the System 4 retrofit a guard digit on d.p.


0
robin_v (2737)
12/31/2005 7:39:09 AM
robin wrote:

(snip regarding HER, HDR, and the lack of normalization in the early
versions of S/360.)

> Except, of course, initially for HER and HDR.  The 1964 Principles of Operation
> makes this clear.
>     Indeed, the RCA Spectra (and the EE System 4 which was a licenced copy)
> did not normalize in the case of HER and HDR.  That never changed for the
> EE Systrem 4 (I don't know what RCA subsequently did for the Spectra.)
> Nor did the System 4 retrofit a guard digit on d.p.

Well, one could always add zero, still probably faster than divide,
but if you always need to do that it makes (made) more sense to fix it.

It might be that there would be use for HU and HW, the obvious
mnemonics for unnormalized versions, but I can't think of them right now.

-- glen

0
gah (12851)
1/2/2006 8:16:22 PM
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:CZSdnfkDXvuBFyTeRVn-tg@comcast.com...
> robin wrote:
>
> (snip regarding HER, HDR, and the lack of normalization in the early
> versions of S/360.)
>
> > Except, of course, initially for HER and HDR.  The 1964 Principles of
Operation
> > makes this clear.
> >     Indeed, the RCA Spectra (and the EE System 4 which was a licenced copy)
> > did not normalize in the case of HER and HDR.  That never changed for the
> > EE Systrem 4 (I don't know what RCA subsequently did for the Spectra.)
> > Nor did the System 4 retrofit a guard digit on d.p.
>
> Well, one could always add zero, still probably faster than divide,

Definitely faster than divide, but that took an extra instruction (4 bytes)
and possibly an extra constant (4 or 8 bytes) when there was precious
little store to hold the extras.
The real problem with the unnormalized HER and HDR, however,
was the loss of precision if the most-significant nibble was 1.

> but if you always need to do that it makes (made) more sense to fix it.
>
> It might be that there would be use for HU and HW, the obvious
> mnemonics for unnormalized versions, but I can't think of them right now.


0
robin_v (2737)
1/3/2006 1:56:13 AM
robin wrote:

> "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
> news:CZSdnfkDXvuBFyTeRVn-tg@comcast.com...

(snip regarding HER, HDR, and the lack of normalization in the early
versions of S/360.)

>>>Except, of course, initially for HER and HDR.  
 >> The 1964 Principles of Operation makes this clear.

(snip)

>>Well, one could always add zero, still probably faster than divide,

> Definitely faster than divide, but that took an extra instruction (4 bytes)
> and possibly an extra constant (4 or 8 bytes) when there was precious
> little store to hold the extras.
> The real problem with HER and HDR, however, with the unnormalized
> version was the loss of precision if the most-significant nibble ws 1.

In sqrt you can likely live with that until the last iteration.

With the common implementation for binary machines, you lose, anyway.
For S/360 the last iteration is done something like:

y4=y3+(x/y3-y3)/2

this is required for full precision HFP arithmetic, even with a
normalizing HDR.

(x/y3-y3) normally won't have many significant bits, so there is probably
no loss in the non-normalizing HDR.

It is also fairly common to do the initial approximation in fixed point.
If one really wanted to, one could test the exponent bits prior to the 
HDR at the end.

-- glen

     DE      FR0,BUFF        GIVE TWO PASSES OF NEWTON-RAPHSON
     AU      FR0,BUFF          ITERATION
     HER     FR0,FR0
     DER     FR2,FR0         (X/Y1+Y1)/2 = (Y1-X/Y1)/2+X/Y1 TO GUARD
     AU      FR0,ROUND         LAST DIGIT-.  ADD ROUNDING FUDGE
     SER     FR0,FR2
     HER     FR0,FR0
     AER     FR0,FR2


0
gah (12851)
1/4/2006 10:11:31 AM
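
For readers who don't follow the S/360 mnemonics, the recurrence glen quotes
can also be written as plain PL/I.  This is only a sketch of the textbook
Newton-Raphson step, y4 = y3 + (x/y3 - y3)/2; x and y are placeholders, the
naive starting guess and the convergence test are mine, and real square-root
code would instead derive its first guess from the exponent so that a fixed
handful of iterations suffices.

   dcl (x, y) float bin(53);

   /* assume x > 0; y = x is a deliberately naive first guess */
   y = x;
   do while ( abs(y*y - x) > 1e-12 * x );
      y = y + (x/y - y)/2;           /* glen's  y4 = y3 + (x/y3 - y3)/2 */
   end;
   /* y now approximates sqrt(x) */
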
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:m4ednbbFy-jGAibenZ2dnUVZ_s-dnZ2d@comcast.com...
> robin wrote:
>
> > "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
> > news:CZSdnfkDXvuBFyTeRVn-tg@comcast.com...
>
> (snip regarding HER, HDR, and the lack of normalization in the early
> versions of S/360.)
>
> >>>Except, of course, initially for HER and HDR.
>  >> The 1964 Principles of Operation makes this clear.
>
> (snip)
>
> >>Well, one could always add zero, still probably faster than divide,
>
> > Definitely faster than divide, but that took an extra instruction (4 bytes)
> > and possibly an extra constant (4 or 8 bytes) when there was precious
> > little store to hold the extras.
> > The real problem with HER and HDR, however, with the unnormalized
> > version was the loss of precision if the most-significant nibble ws 1.
>
> In sqrt you can likely live with that until the last iteration.

Not every program needs SQRT.  In any case, it was probably
done by invoking a function, in which case, storage requirements
would not have been an issue.
    Many programs, however, routinely require division by 2,
and as there may be a number of these in a program,
the amount of extra storage required would become a drawback.

> With the common implementation for binary machines, you lose, anyway.
> For S/360 the last iteration is done something like:
>
> y4=y3+(x/y3-y3)/2
>
> this is required for full precision HFP arithmetic, even with a
> normalizing HDR.
>
> (x/y3-y3) normally won't have many significant bits, so there is probably
> no loss in the non-normalizing HDR.

The loss of a bit [non-post normalizing] for this step of halving
is irrelevant except for the last.

> It is also fairly common to do the initial approximation in fixed point.
> If one really wanted to, one could test the exponent bits prior to the
> HDR at the end.

The crux was that for general use, the original HER and HDR
were not as attractive as they would seem.

> -- glen




0
robin_v (2737)
1/4/2006 3:45:54 PM
robin wrote:

(snip)

> Not every program needs SQRT.  In any case, it was probably
> done by invoking a function, in which case, storage requirements
> would not have been an issue.

>     Many programs, however, routinely require division by 2,
> and as there may be a number of these in a program,
> the amount of extra storage required would become a drawback.

(snip)

> The crux was that for general use, the original HER and HDR
> were not as attractive as they would seem.

Someone claimed that they were originally intended to speed up square root,
and I don't remember ever seeing them used anywhere else.  It does
seem that they could be useful, though.

-- glen

0
gah (12851)
1/5/2006 8:41:33 AM
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:qeednYOF5PpZRiHeRVn-qg@comcast.com...
> robin wrote:
>
> (snip)
>
> > Not every program needs SQRT.  In any case, it was probably
> > done by invoking a function, in which case, storage requirements
> > would not have been an issue.
>
> >     Many programs, however, routinely require division by 2,
> > and as there may be a number of these in a program,
> > the amount of extra storage required would become a drawback.
>
> (snip)
>
> > The crux was that for general use, the original HER and HDR
> > were not as attractive as they would seem.
>
> Someone claimed that they were originally to speed up square root,
> and I don't remember ever seeing them used anywhere else.

Once post-normalization was fixed, they would have been useful
anywhere, especially for optimization, and I see no impediment
to their being used even at low optimization levels.  Their
particular attraction is, of course, speed.

>  It does
> seem that they could be useful, though.


0
robin_v (2737)
1/6/2006 12:04:42 AM
John W. Kennedy wrote:
> It is very well known that the entire 360 FP feature could have used
> some input from numerical analysts; it's shot full of design defects.

Could you elaborate on those design defects?

How did S/360 compare with its predecessor machines (i.e. 709x) regarding
those defects?  What differences did competitors' machines--those
available in 1965--have compared to S/360 regarding these defects?

0
hancock4 (224)
1/13/2006 5:25:48 PM
hancock4@bbs.cpcn.com wrote:
> John W. Kennedy wrote:
>> It is very well known that the entire 360 FP feature could have used
>> some input from numerical analysts; it's shot full of design defects.
> 
> Could you elaborate on those design defects?
> 
> How did S/360 compare with its predecessor machines (ie 709x) regarding
> those defects?  What differences did competitors machines--those
> available in 1965--have compared to S/360 regarding these defects?

To start with, the S/360 word was four bits shorter than the 704 word. 
This was, at least, a strategic error, because it meant that /up/grading 
to a 360 meant, in this area, a /down/grading in function.

But the hexadecimal base further meant that the effective length of the 
fraction was essentially 21 bits (single precision) or 53 bits (double 
precision), rather than the superficial 24 or 56, and this was not 
clearly understood at first.

Other problems were corrected in a massive Engineering Change, which 
added a guard digit to double precision, added postnormalization to the 
halve instructions HER and HDR, and changed the results returned in 
cases of overflow and underflow.

The early competitors generally had words longer than 32 bits, but I am 
not familiar with any of them in detail.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/13/2006 8:26:11 PM
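
As a side note on where the 21- and 53-bit figures above come from:
hexadecimal normalization only guarantees a non-zero leading hex digit, and
that digit can be as small as 1 (binary 0001), so up to three of the leading
fraction bits may carry no information.  Hence

   single precision: 24 fraction bits - 3 = 21 effective bits
   double precision: 56 fraction bits - 3 = 53 effective bits
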
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:E1Uxf.1208$l03.452@fe11.lga...
> hancock4@bbs.cpcn.com wrote:
> > John W. Kennedy wrote:
> >> It is very well known that the entire 360 FP feature could have used
> >> some input from numerical analysts; it's shot full of design defects.
> >
> > Could you elaborate on those design defects?
> >
> > How did S/360 compare with its predecessor machines (ie 709x) regarding
> > those defects?  What differences did competitors machines--those
> > available in 1965--have compared to S/360 regarding these defects?
>
> To start with, the S/360 word was four bits shorter than the 704 word.
> This was, at least, a strategic error, because it meant that /up/grading
> to a 360 meant, in this area, a /down/grading in function.

Yes and no.  Double precision gave 28 extra bits.
But for most work, there is little difference between 36 bits and 32 bits.
But that's no measure, anyhow.  The appropriate measure is
the number of mantissa bits and the range of the exponent.

And as for a "strategic error", the S/360 was the only architecture
that was copied around the world [apart from the PC],
and is the only architecture that survives from the 1960s and earlier
[albeit updated].

> But the hexadecimal base further meant that the effective length of the
> fraction was essentially 21 bits (single precision) or 53 bits (double
> precision), rather than the superficial 24 or 56, and this was not
> clearly understood at first.

I never had any difficulty with that, and I suspect
that nobody else did either.

How would you have done it better?
With binary, you would have, say, 21 bit mantissa plus sign
and 9-bit exponent plus sign (or biased 10 bits).

The reason for choosing the 8-bit exponent field was influenced by
byte-orientation, which, among other things, permitted instructions
like IC and STC to manipulate the exponent.
    Then there was the question of performance during pre- and
post-normalising.  Shifts of 4 bits at a time (maximum of 6 shifts
for single precision) for hex is a lot quicker than 1 bit at a time
for binary (maximum 24 shifts) [single precision, and corresponding
values for double precision].
    The choice gave a range of 10**-78 thru 10**75 IIRC,
while some competitors had a less-accommodating range of
10**-35 to 10**35.

    And if you chose 24 bit mantissa, that would give you 7 biased
exponent bits, or 6 real bits.  Which doesn't give you an
exciting range of exponents, to put it mildly.

> Other problems were corrected in a massive Engineering Change, which
> added a guard digit to double precision, added postnormalization to the
> halve instructions HER and HDR, and changed the results returned in
> cases of overflow and underflow.

Are you sure of that?  The 1964 Principles of Operations
states that a zero word is returned for underflow,
which it always did.

> The early competitors generally had words longer than 32 bits, but I am
> not familiar with any of them in detail.

Competitive equipment had 32 bits, 48 bits, 36 bits, 60 bits
but in the main, more than 32 bits was scarcely the rule.


0
robin_v (2737)
1/13/2006 11:28:44 PM
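
A quick check on the range robin quotes, assuming the usual S/360 layout of a
7-bit characteristic biased by 64 (a power of 16 running from -64 to +63):

   largest magnitude   ~ (1 - 16**-6) * 16**63  =  just under 2**252  ~  7.2 * 10**75
   smallest normalized =  (1/16) * 16**-64      =  16**-65 = 2**-260  ~  5.4 * 10**-79

which is consistent with the "10**-78 thru 10**75" figure above.
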
The ones I know are:

(precision is single/double)

GE 600 (later Honeywell 6000) were 36/72 bits, ASCII-based character 
set, either 6/6 bit or 4/9 bit per word.  (Multics PL/1 allowed direct 
access to the EIS (Extended Instruction Set, not used by GCOS) unit, which 
supported 63 decimal digit maximum precision.)

DEC PDP-10/20 was 36/72 bit.  ASCII characters were 5/7 bit per word.

Amdahl was an IBM 360/370 clone with the same instruction set, 32/64 hex 
based, EBCDIC characters 4/8 bit/word.  I think there was some support 
for 128 bit floating point, but it was not part of the original 360 
instruction set.

CDC and Cray(?) were 60 bit, no hardware double IIRC.  There were also 4 
special flag bits to indicate that the value was the result of things 
like division by zero, underflow, overflow (again IIRC).  10/6 bit per 
word.

I don't really know what the rest were, but I think many were either 36 
bit, or, for machines designed after the 360, may have been 32 bit to be more 
compatible with IBM.

John W. Kennedy wrote:

> hancock4@bbs.cpcn.com wrote:
> 
>> John W. Kennedy wrote:
>>
>>> It is very well known that the entire 360 FP feature could have used
>>> some input from numerical analysts; it's shot full of design defects.
>>
>>
>> Could you elaborate on those design defects?
>>
>> How did S/360 compare with its predecessor machines (ie 709x) regarding
>> those defects?  What differences did competitors machines--those
>> available in 1965--have compared to S/360 regarding these defects?
> 
> 
> To start with, the S/360 word was four bits shorter than the 704 word. 
> This was, at least, a strategic error, because it meant that /up/grading 
> to a 360 meant, in this area, a /down/grading in function.
> 
> But the hexadecimal base further meant that the effective length of the 
> fraction was essentially 21 bits (single precision) or 53 bits (double 
> precision), rather than the superficial 24 or 56, and this was not 
> clearly understood at first.
> 
> Other problems were corrected in a massive Engineering Change, which 
> added a guard digit to double precision, added postnormalization to the 
> halve instructions HER and HDR, and changed the results returned in 
> cases of overflow and underflow.
> 
> The early competitors generally had words longer than 32 bits, but I am 
> not familiar with any of them in detail.
> 
0
multicsfan (63)
1/14/2006 12:22:28 AM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:E1Uxf.1208$l03.452@fe11.lga...
>> hancock4@bbs.cpcn.com wrote:
>>> John W. Kennedy wrote:
>>>> It is very well known that the entire 360 FP feature could have used
>>>> some input from numerical analysts; it's shot full of design defects.
>>> Could you elaborate on those design defects?
>>>
>>> How did S/360 compare with its predecessor machines (ie 709x) regarding
>>> those defects?  What differences did competitors machines--those
>>> available in 1965--have compared to S/360 regarding these defects?
>> To start with, the S/360 word was four bits shorter than the 704 word.
>> This was, at least, a strategic error, because it meant that /up/grading
>> to a 360 meant, in this area, a /down/grading in function.
> 
> Yes and no.  Double precision gave 28 extra bits.

The 704 family offered double precision, too; it was not fully 
implemented in hardware, but the hardware assisted it, and the FORTRAN 
compiler supported it.

> But for most work, little difference between 36 bits and 32 bits.
> But that's no measure, anyhow. The appropriate mesaure is
> the number of mantissa bits and range of exponent.

They add up to the word size, one way or the other. In any case, the 
S/360 had significantly fewer effective fraction bits (21) in single 
precision than the 7094 (27).  In practice, a very, very large number of 
FORTRAN programs had to be altered to use double precision where single 
precision had once served.

> And as for a "strategic error", the S/360 was the only architecture
> that was copied around the world [apart from the PC],
> and is the only architecture that survives from the 1960s and earlier
> [albeit updated].

The _whole_ S/360 architecture was copied, but, whereas the 8/16/32/64 
two's-complement, byte-addressable data architecture has become 
universal, the S/360 floating-point design was never used outside of the 
context of full S/360 compatibility, and the modern descendants of the 
S/360 now offer the vastly superior IEEE-754 as an alternative. Note, 
too, that floating-point has become nearly a dead issue in the S/360 
world; the z/OS FORTRAN compiler is decades old, and several generations 
out of date.

>> But the hexadecimal base further meant that the effective length of the
>> fraction was essentially 21 bits (single precision) or 53 bits (double
>> precision), rather than the superficial 24 or 56, and this was not
>> clearly understood at first.

> I never had any difficulty with that, and I suspect
> that nobody else did either.

There were many problems with S/360 floating point in the early days; 
the literature was awash with the subject.

> How would you have done it better?
> With binary, you would have, say, 21 bit mantissa plus sign
> and 9-bit exponent plus sign (or biased 10 bits).
> 
> The reason for chosing the 8-bit exponent field was influenced by
> byte-orientation, which, among other things, permitted instructions
> like IC and STC to manipulate the exponent.

In other words, hardware convenience at the cost of usability.

>     Then there was the question of performance during pre- and
> post-normalising  Shifts of 4 bits at a time (maximum of 6 shifts
> for single precision) for hex is a lot quicker than 1 bit at a time
> for binary (maximum 24 shifts) [single precision, and corresponding
> values for double precision].
>     The choice gave a range of 10**-78 thru 10**75 IIRC,
> while some competitors had a less-accommodating range of
> 10**-35 to 10**35.
> 
>     And if you chose 24 bit mantissa, that would give you 7 biased
> exponent bits, or 6 real bits.  Which doesn't give you an
> exciting range of exponents, to put it mildly.
> 
>> Other problems were corrected in a massive Engineering Change, which
>> added a guard digit to double precision, added postnormalization to the
>> halve instructions HER and HDR, and changed the results returned in
>> cases of overflow and underflow.
> 
> Are you sure of that?  The 1964 Principles of Operations
> states that a zero word is returned for underflow,
> which it always did.

That was before the Engineering Change. After the Engineering Change, if 
  the Underflow Mask bit in the PSW is 1, the exponent is wrapped (i.e., 
is set to 128 more than the correct value).

>> The early competitors generally had words longer than 32 bits, but I am
>> not familiar with any of them in detail.
> 
> Competitive equipment had 32 bits, 48 bits, 36 bits, 60 bits
> but in the main, more than 32 bits was scarcely the rule.

32 bits was rare before the 360.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/14/2006 12:30:47 AM
"robin"  wrote
...............
>> Other problems were corrected in a massive Engineering Change, which
>> added a guard digit to double precision, added postnormalization to the
>> halve instructions HER and HDR, and changed the results returned in
>> cases of overflow and underflow.
>
> Are you sure of that?  The 1964 Principles of Operations
> states that a zero word is returned for underflow,
> which it always did.

Has the handling of exponent underflow not always been under control of
the PSW Program mask for exponent underflow like it is today?

Your description is correct for the mask bit set to zero, but if the mask
bit is one the operation is completed with the exponent set to 128
greater than the correct value and a program interrupt is generated.

Regards Sven 


0
no.direct (6)
1/14/2006 12:32:19 AM
Sven Pran wrote:
> "robin"  wrote
> ...............
>>> Other problems were corrected in a massive Engineering Change, which
>>> added a guard digit to double precision, added postnormalization to the
>>> halve instructions HER and HDR, and changed the results returned in
>>> cases of overflow and underflow.
>> Are you sure of that?  The 1964 Principles of Operations
>> states that a zero word is returned for underflow,
>> which it always did.
> 
> Has the handling of exponent underflow not always been under control of
> the PSW Program mask for exponent underflow like it is today?

Trapping or not trapping was always under control of the mask, but 
before the great Engineering Change, the stored value was always true 
zero, no matter which way the mask was set.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/14/2006 12:49:54 AM
John W. Kennedy wrote:

> robin wrote:

(snip)

>> But for most work, little difference between 36 bits and 32 bits.
>> But that's no measure, anyhow. The appropriate mesaure is
>> the number of mantissa bits and range of exponent.

> They add up to the word size, one way or the other. In any case, the 
> S/360 had significantly fewer effective fraction bits (21) in single 
> precision than the 7094 (27).  In practice, a very, very large number of 
> FORTRAN programs had to be altered to use double precision where single 
> precision had once served.

That may be true, but for many numerical algorithms the number of 
bits required increases as the size of the problem increases, which 
likely would have happened in the transition from 7094 to 360.

If the speed ratio was much smaller on 360 than 7094 that would
also have helped.

>> And as for a "strategic error", the S/360 was the only architecture
>> that was copied around the world [apart from the PC],
>> and is the only architecture that survives from the 1960s and earlier
>> [albeit updated].

(snip)

>>> But the hexadecimal base further meant that the effective length of the
>>> fraction was essentially 21 bits (single precision) or 53 bits (double
>>> precision), rather than the superficial 24 or 56, and this was not
>>> clearly understood at first.

For many algorithms the average number of bits, 22.5, is more 
representative than the minimum.  There is always a tradeoff between 
exponent and fraction.

>> I never had any difficulty with that, and I suspect
>> that nobody else did either.

> There were many problems with S/360 floating point in the early days; 
> the literature was awash with the subject.
> 
>> How would you have done it better?
>> With binary, you would have, say, 21 bit mantissa plus sign
>> and 9-bit exponent plus sign (or biased 10 bits).

>> The reason for chosing the 8-bit exponent field was influenced by
>> byte-orientation, which, among other things, permitted instructions
>> like IC and STC to manipulate the exponent.

There are formats which use an 8 bit exponent followed by the sign and
fraction.  That allows the exponent to be manipulated using byte 
instructions.

> In other words, hardware convenience at the cost of usability.

Mostly I would say that it took more work to come up with algorithms 
suitable for HFP.  I explained previously the modification to the SQRT 
algorithm, simple once you know it but someone had to figure that out.

(snip)

>> Competitive equipment had 32 bits, 48 bits, 36 bits, 60 bits
>> but in the main, more than 32 bits was scarcely the rule.

> 32 bits was rare before the 360.

One of the results of designing a machine useful for both fixed and 
floating point problems.

-- glen


0
gah (12851)
1/14/2006 10:00:48 AM

glen herrmannsfeldt wrote:
> John W. Kennedy wrote:
> 
>> robin wrote:
> 
> 
> (snip)
> 
>>> But for most work, little difference between 36 bits and 32 bits.
>>> But that's no measure, anyhow. The appropriate mesaure is
>>> the number of mantissa bits and range of exponent.
> 
> 
>> They add up to the word size, one way or the other. In any case, the 
>> S/360 had significantly fewer effective fraction bits (21) in single 
>> precision than the 7094 (27).  In practice, a very, very large number 
>> of FORTRAN programs had to be altered to use double precision where 
>> single precision had once served.
> 
> 
> That may be true, but the for many numerical algorithms the number of 
> bits required increases as the size of the problem increases, which 
> likely would have happened in the transition from 7094 to 360.

In the early 1960's when I worked at a Division of North American 
Aviation (the aerospace portions now owned by Boeing) we received the 
second 360/65 on the West Coast, and we were an IBM beta site for 
several of their software products.  The biggest headaches we had in 
migrating programs from 7094 to 360 were (1) learning job control 
language (JCL), and (2) redeclaring Fortran variables from single- to 
double-precision where the loss of 4 bits made a difference.  Since we 
were on cost-plus government-funded projects the efficiency of the 360 
in terms of an extra machine cycle per F.P. computation was of 
absolutely no consequence monetarily or operationally.

> 
> If the speed ratio was much smaller on 360 than 7094 that would
> also have helped.
> 
>>> And as for a "strategic error", the S/360 was the only architecture
>>> that was copied around the world [apart from the PC],
>>> and is the only architecture that survives from the 1960s and earlier
>>> [albeit updated].
> 
> 
> (snip)
> 
>>>> But the hexadecimal base further meant that the effective length of the
>>>> fraction was essentially 21 bits (single precision) or 53 bits (double
>>>> precision), rather than the superficial 24 or 56, and this was not
>>>> clearly understood at first.
> 
> 
> For many algorithms the average number of bits, 22.5, is more 
> representative than the minimum.  There is always a tradeoff between 
> exponent and fraction.
> 
>>> I never had any difficulty with that, and I suspect
>>> that nobody else did either.
> 
> 
>> There were many problems with S/360 floating point in the early days; 
>> the literature was awash with the subject.
>>
>>> How would you have done it better?
>>> With binary, you would have, say, 21 bit mantissa plus sign
>>> and 9-bit exponent plus sign (or biased 10 bits).
> 
> 
>>> The reason for chosing the 8-bit exponent field was influenced by
>>> byte-orientation, which, among other things, permitted instructions
>>> like IC and STC to manipulate the exponent.
> 
> 
> There are formats which use an 8 bit exponent followed by the sign and
> fraction.  That allows the exponent to be manipulated using byte 
> instructions.
> 
>> In other words, hardware convenience at the cost of usability.
> 
> 
> Mostly I would say that it took more work to come up with algorithms 
> suitable for HFP.  I explained previously the modification to the SQRT 
> algorithm, simple once you know it but someone had to figure that out.
> 
> (snip)
> 
>>> Competitive equipment had 32 bits, 48 bits, 36 bits, 60 bits
>>> but in the main, more than 32 bits was scarcely the rule.
> 
> 
>> 32 bits was rare before the 360.
> 
> 
> One of the results of designing a machine useful for both fixed and 
> floating point problems.
> 
> -- glen
> 
> 
0
donaldldobbs (108)
1/14/2006 7:53:14 PM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:YCXxf.66$Fd6.27@fe08.lga...
> robin wrote:
> > "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> > news:E1Uxf.1208$l03.452@fe11.lga...
> >> hancock4@bbs.cpcn.com wrote:
> >>> John W. Kennedy wrote:
> >>>> It is very well known that the entire 360 FP feature could have used
> >>>> some input from numerical analysts; it's shot full of design defects.
> >>> Could you elaborate on those design defects?
> >>>
> >>> How did S/360 compare with its predecessor machines (ie 709x) regarding
> >>> those defects?  What differences did competitors machines--those
> >>> available in 1965--have compared to S/360 regarding these defects?
> >> To start with, the S/360 word was four bits shorter than the 704 word.
> >> This was, at least, a strategic error, because it meant that /up/grading
> >> to a 360 meant, in this area, a /down/grading in function.
> >
> > Yes and no.  Double precision gave 28 extra bits.
>
> The 704 family offered double precision, too; it was not fully
> implemented in hardware, but the hardware assisted it, and the FORTRAN
> compiler supported it.

It had to, in order to meet the standard.

> > But for most work, little difference between 36 bits and 32 bits.
> > But that's no measure, anyhow. The appropriate measure is
> > the number of mantissa bits and range of exponent.
>
> They add up to the word size, one way or the other.

Not relevant; what's important is the breakdown --
and in particular, the number of mantissa bits.

> In any case, the
> S/360 had significantly fewer effective fraction bits (21) in single
> precision than the 7094 (27).

Leaving only 7 bits for the exponent.  In other words, a reduced
range of exponent, which the S/360 corrected.

>  In practice, a very, very large number of
> FORTRAN programs had to be altered to use double precision where single
> precision had once served.

I do not recall receiving a single complaint of that category,
even though the machine that we upgraded from used 31
mantissa bits for scientific work.

BTW, the PL/I SSP for the S/360 used - wait for it - SINGLE precision
as the default.  Many of the FORTRAN SSP routines were
provided only in single precision, some in both single and double,
and some in double only.

> > And as for a "strategic error", the S/360 was the only architecture
> > that was copied around the world [apart from the PC],
> > and is the only architecture that survives from the 1960s and earlier
> > [albeit updated].
>
> The _whole_ S/360 architecture was copied, but, whereas the 8/16/32/64
> two's-complement, byte-addressable data architecture has become
> universal, the S/360 floating-point design was never used outside of the
> context of full S/360 compatibility,

The S/360 was not copied in its entirety, and even in those
cases where it was not copied in its entirety, the
original hex floating-point design was retained
(without guard digit on double, with zero for underflow, etc)

> and the modern descendants of the
> S/360 now offer the vastly superior IEEE-754 as an alternative. Note,
> too, that floating-point has become nearly a dead issue in the S/360
> world; the z/OS FORTRAN compiler is decades old, and several generations
> out of date.
>
> >> But the hexadecimal base further meant that the effective length of the
> >> fraction was essentially 21 bits (single precision) or 53 bits (double
> >> precision), rather than the superficial 24 or 56, and this was not
> >> clearly understood at first.
>
> > I never had any difficulty with that, and I suspect
> > that nobody else did either.
>
> There were many problems with S/360 floating point in the early days;

Strange, we got along well with F.P.
And both machines that we subsequently obtained used
the original hex floating point (without guard digit, etc).

It is clear that it was not the problem that you imagine.

So-called "clones" retained the hex model without guard
digit on d.p.  Strange, that.
How come *they* did not get "many problems"?

And if it was as bad as you claim, how come they
never implemented something better?

> the literature was awash with the subject.

Such as?

> > How would you have done it better?

No idea?

> > With binary, you would have, say, 21 bit mantissa plus sign
> > and 9-bit exponent plus sign (or biased 10 bits).
> >
> > The reason for chosing the 8-bit exponent field was influenced by
> > byte-orientation, which, among other things, permitted instructions
> > like IC and STC to manipulate the exponent.
>
> In other words, hardware convenience at the cost of usability.
>
> >     Then there was the question of performance during pre- and
> > post-normalising  Shifts of 4 bits at a time (maximum of 6 shifts
> > for single precision) for hex is a lot quicker than 1 bit at a time
> > for binary (maximum 24 shifts) [single precision, and corresponding
> > values for double precision].
> >     The choice gave a range of 10**-78 thru 10**75 IIRC,
> > while some competitors had a less-accommodating range of
> > 10**-35 to 10**35.
> >
> >     And if you chose 24 bit mantissa, that would give you 7 biased
> > exponent bits, or 6 real bits.  Which doesn't give you an
> > exciting range of exponents, to put it mildly.
> >
> >> Other problems were corrected in a massive Engineering Change, which
> >> added a guard digit to double precision, added postnormalization to the
> >> halve instructions HER and HDR, and changed the results returned in
> >> cases of overflow and underflow.
> >
> > Are you sure of that?  The 1964 Principles of Operations
> > states that a zero word is returned for underflow,
> > which it always did.
>
> That was before the Engineering Change. After the Engineering Change, if
>   the Underflow Mask bit in the PSW is 1, the exponent is wrapped (i.e.,
> is set to 128 more than the correct value).
>
> >> The early competitors generally had words longer than 32 bits, but I am
> >> not familiar with any of them in detail.
> >
> > Competitive equipment had 32 bits, 48 bits, 36 bits, 60 bits
> > but in the main, more than 32 bits was scarcely the rule.
>
> 32 bits was rare before the 360.

Bendix?, Pilot ACE, DEUCE come to mind as 32-bit machines.
Others had 16 (or was it 18?) and 12 IIRC.

Others having a longer word were designed thus to accommodate
two instructions of, say, 24 bits, and the integer size was 24 bits.

The real problem with the S/360 was not the FPU but
the fact that you didn't get much bang per buck.
For some work, the machine that it replaced was faster.


0
robin_v (2737)
1/15/2006 2:25:08 PM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:YCXxf.66$Fd6.27@fe08.lga...
>> robin wrote:
>>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>> news:E1Uxf.1208$l03.452@fe11.lga...
>>>> hancock4@bbs.cpcn.com wrote:
>>>>> John W. Kennedy wrote:
>>>>>> It is very well known that the entire 360 FP feature could have used
>>>>>> some input from numerical analysts; it's shot full of design defects.
>>>>> Could you elaborate on those design defects?
>>>>>
>>>>> How did S/360 compare with its predecessor machines (ie 709x) regarding
>>>>> those defects?  What differences did competitors machines--those
>>>>> available in 1965--have compared to S/360 regarding these defects?
>>>> To start with, the S/360 word was four bits shorter than the 704 word.
>>>> This was, at least, a strategic error, because it meant that /up/grading
>>>> to a 360 meant, in this area, a /down/grading in function.
>>> Yes and no.  Double precision gave 28 extra bits.
>> The 704 family offered double precision, too; it was not fully
>> implemented in hardware, but the hardware assisted it, and the FORTRAN
>> compiler supported it.
> 
> It had to, in order to meet the standard.

There was no FORTRAN standard until long afterwards.

>>> But for most work, little difference between 36 bits and 32 bits.
>>> But that's no measure, anyhow. The appropriate measure is
>>> the number of mantissa bits and range of exponent.
>> They add up to the word size, one way or the other.
> 
> Not relevant; what's important is the breakdown --
> and in particular, the number of mantissa bits.

In order to make any sense of your argument, I can only assume that you 
do not know what the words "relevant" and "mantissa" mean. Kindly look 
them up.

>> In any case, the
>> S/360 had significantly fewer effective fraction bits (21) in single
>> precision than the 7094 (27).

> Leaving only 7 bits for the exponent.  In other words, a reduced
> range of exponent, which the S/360 corrected.

Having trouble with subtraction, are we now?

>>  In practice, a very, very large number of
>> FORTRAN programs had to be altered to use double precision where single
>> precision had once served.

> I do not recall receiving a single complaint of that category,
> even though the machine that we upgraded from used 31
> mantissa bits for scientific work.

Then you were doing unusually undemanding work; plenty of shops had 
major problems.

> The S/360 was not copied in its entirety,

Problem state was.

> and even in those
> cases where it was not copied in its entirety, the
> original hex floating-point design was retained
> (without guard digit on double, with zero for underflow, etc)

I'm sure IBM spent all that money upgrading all those machines without 
payment just for fun.

> Strange, we got along well with F.P.
> And both machines that we subsequently obtained used
> the original hex floating point (without guard digit, etc).

> It is clear that it was not the problem that you imagine.

It is clear that it wasn't a problem for /you/. (Or, alternatively, that 
it /was/ a problem, but you didn't audit your results adequately.)

> So-called "clones" retained the hex model without guard
> digit on d.p.  Strange, that.
> How come *they* did not get "many problems"?

You buy cheap imitations, you get cheap imitations.

> And if it was as bad as you claim, how come they
> never implemented something better?

They did. In 1967, and again, recently.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/15/2006 4:46:34 PM
John W. Kennedy wrote:
(snip)

> In order to make any sense of your argument, I can only assume that you 
> do not know what the words "relevant" and "mantissa" mean. Kindly look 
> them up.

Mantissa: The fractional part of a logarithm.

Until a log instruction is implemented, you likely won't find the word
mantissa in the Principles of Operations manual for any IBM processor.

http://publibfp.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/dz9zr003/9.2.2

-- glen

0
gah (12851)
1/16/2006 8:26:15 AM
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:J6qdnbufBcshxVbenZ2dnUVZ_sSdnZ2d@comcast.com...
> John W. Kennedy wrote:
> > In order to make any sense of your argument, I can only assume that you
> > do not know what the words "relevant" and "mantissa" mean. Kindly look
> > them up.
>
> Mantissa: The fractional part of a logarithm.

Also part of an FPN.

> Until a log instruction is implemented, you likely won't find the word
> mantissa in the Principles of Operations manual for any IBM processor.

Different manufacturers call the fields of an FPN different things.

You'll find "mantissa" used in any computer science text to
describe part of an FPN.

e.g. 1, Computers & Programming, Hannula, 1974.
"The sign, the mantissa, and the characteristic of a floating-point
number are all stored in the same cell."
The diagrams show "mantissa" occupying bits 8 thru 31 and 8 thru 63.

e.g. 2, Rudd, Assembler Language Programming & the IBM 360 & 370, 1976.

Both books published in USA.

e.g. 3, Clone hardware ref manual:
For AE, AD, ADR, ADR: " If they [exponents] do not agree,
the mantissa with the smaller exponent operand is shifted right."

These were the first 3 books that I picked up.


0
robin_v (2737)
1/16/2006 11:28:40 PM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:O%uyf.24$pp1.17@fe11.lga...
> robin wrote:
> > "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> > news:YCXxf.66$Fd6.27@fe08.lga...
> >> robin wrote:
> >>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> >>> news:E1Uxf.1208$l03.452@fe11.lga...
> >>>> hancock4@bbs.cpcn.com wrote:
> >>>>> John W. Kennedy wrote:
> >>>>>> It is very well known that the entire 360 FP feature could have used
> >>>>>> some input from numerical analysts; it's shot full of design defects.
> >>>>> Could you elaborate on those design defects?
> >>>>>
> >>>>> How did S/360 compare with its predecessor machines (ie 709x) regarding
> >>>>> those defects?  What differences did competitors machines--those
> >>>>> available in 1965--have compared to S/360 regarding these defects?
> >>>> To start with, the S/360 word was four bits shorter than the 704 word.
> >>>> This was, at least, a strategic error, because it meant that /up/grading
> >>>> to a 360 meant, in this area, a /down/grading in function.
> >>> Yes and no.  Double precision gave 28 extra bits.
> >> The 704 family offered double precision, too; it was not fully
> >> implemented in hardware, but the hardware assisted it, and the FORTRAN
> >> compiler supported it.
> >
> > It had to, in order to meet the standard.
>
> There was no FORTRAN standard until long afterwards.

IBM set it.

> >>> But for most work, little difference between 36 bits and 32 bits.
> >>> But that's no measure, anyhow. The appropriate measure is
> >>> the number of mantissa bits and range of exponent.
> >> They add up to the word size, one way or the other.
> >
> > Not relevant; what's important is the breakdown --
> > and in particular, the number of mantissa bits.
>
> In order to make any sense of your argument, I can only assume that you
> do not know what the words "relevant" and "mantissa" mean. Kindly look
> them up.

The term "mantissa" has been used since the early days of computers
to describe part of floating-point number.

Are you having a bad day?

> >> In any case, the
> >> S/360 had significantly fewer effective fraction bits (21) in single
> >> precision than the 7094 (27).
>
> > Leaving only 7 bits for the exponent.  In other words, a reduced
> > range of exponent, which the S/360 corrected.
>
> Having trouble with subtraction, are we now?

When I last looked, 27 + 1 + 7 + 1 = 36.

> >>  In practice, a very, very large number of
> >> FORTRAN programs had to be altered to use double precision where single
> >> precision had once served.
>
> > I do not recall receiving a single complaint of that category,
> > even though the machine that we upgraded from used 31
> > mantissa bits for scientific work.
>
> Then you were doing unusually undemanding work; plenty of shops had
> major problems.

Research is typically demanding.

>>>The _whole_ S/360 architecture was copied, but, whereas the 8/16/32/64
>>>two's-complement, byte-addressable data architecture has become
>>>universal, the S/360 floating-point design was never used outside of the
>>>context of full S/360 compatibility, and the modern descendants of the
>>>S/360 now offer the vastly superior IEEE-754 as an alternative. Note,
>>>too, that floating-point has become nearly a dead issue in the S/360
>>>world; the z/OS FORTRAN compiler is decades old, and several generations
>>>out of date.

You're overlooking PL/I, for which z/OS has a recent compiler.

> > The S/360 was not copied in its entirety,
>
> Problem state was.

Only the original, not the revised hardware, as I previously stated (below).

> > and even in those
> > cases where it was not copied in its entirety, the
> > original hex floating-point design was retained
> > (without guard digit on double, with zero for underflow, etc)
>
> I'm sure IBM spent all that money upgrading all those machines without
> payment just for fun.

AFAIK, no-one else followed suit.

> > Strange, we got along well with F.P.
> > And both machines that we subsequently obtained used
> > the original hex floating point (without guard digit, etc).


> > It is clear that it was not the problem that you imagine.
>
> It is clear that it wasn't a problem for /you/.

It wasn't a problem for anyone in an extensive institution.

> (Or, alternatively, that
> it /was/ a problem, but you didn't audit your results adequately.)

My results were always "audited".  So were those of others.

> > So-called "clones" retained the hex model without guard
> > digit on d.p.  Strange, that.
> > How come *they* did not get "many problems"?
>
> You buy cheap imitations, you get cheap imitations.

I didn't buy anything.  But I would point out that those
"cheap" systems had superior real-time performance, with
multiple register sets and processor states for handling
interrupts.

> > And if it was as bad as you claim, how come they
> > never implemented something better?
>
> They did. In 1967,

No they didn't.  I was referring to clones in which the guard digit
on d.p. was NEVER provided. [see above]

>>> the literature was awash with the subject.

>>Such as?

Still no instance?

>>> > How would you have done it better ?

Still no answer?


0
robin_v (2737)
1/16/2006 11:28:41 PM
robin wrote:
>>>> The 704 family offered double precision, too; it was not fully
>>>> implemented in hardware, but the hardware assisted it, and the FORTRAN
>>>> compiler supported it.
>>> It had to, in order to meet the standard.
>> There was no FORTRAN standard until long afterwards.

> IBM set it.

So your argument is that the 704 hardware had to implement 
double-precision floating-point in 1954 in order to support FORTRAN IV, 
  which didn't even come out until 1962 (two hardware generations later)?

>>>>> But for most work, little difference between 36 bits and 32 bits.
>>>>> But that's no measure, anyhow. The appropriate measure is
>>>>> the number of mantissa bits and range of exponent.
>>>> They add up to the word size, one way or the other.
>>> Not relevant; what's important is the breakdown --
>>> and in particular, the number of mantissa bits.
>> In order to make any sense of your argument, I can only assume that you
>> do not know what the words "relevant" and "mantissa" mean. Kindly look
>> them up.
> 
> The term "mantissa" has been used since the early days of computers
> to describe part of floating-point number.
> 
> Are you having a bad day?

Either you are attempting to argue that the size of the fraction and the 
size of the exponent are each more important than one another, while 
simultaneously maintaining that word size has nothing to do with the 
issue either way, or else you are simply misusing words.

>>>> In any case, the
>>>> S/360 had significantly fewer effective fraction bits (21) in single
>>>> precision than the 7094 (27).
>>> Leaving only 7 bits for the exponent.  In other words, a reduced
>>> range of exponent, which the S/360 corrected.
>> Having trouble with subtraction, are we now?
> 
> When I last looked, 27 + 1 + 7 + 1 = 36.

Are you under the impression that the 704 series had a 35-bit word with 
a parity bit?

>>>> The _whole_ S/360 architecture was copied, but, whereas the 8/16/32/64
>>>> two's-complement, byte-addressable data architecture has become
>>>> universal, the S/360 floating-point design was never used outside of the
>>>> context of full S/360 compatibility, and the modern descendants of the
>>>> S/360 now offer the vastly superior IEEE-754 as an alternative. Note,
>>>> too, that floating-point has become nearly a dead issue in the S/360
>>>> world; the z/OS FORTRAN compiler is decades old, and several generations
>>>> out of date.

> You're overlooking, PL/I, which for which z/OS has a recent compiler.

PL/I is not a major player in the raw-science market; if it were, IBM 
would have implemented PL/I support for the (now dead, like every other 
attempt to put the 360 family back into the high-performance-computing 
market) 370 vector processor.

>>> The S/360 was not copied in its entirety,
>> Problem state was.
> 
> Only the original, not the revised hardware, as I previously stated (below).

Indeed, even the TS instruction was not implemented.

>>> and even in those
>>> cases where it was not copied in its entirety, the
>>> original hex floating-point design was retained
>>> (without guard digit on double, with zero for underflow, etc)
>> I'm sure IBM spent all that money upgrading all those machines without
>> payment just for fun.
> 
> AFAIK, no-one else followed suite.

Because they couldn't afford to.

>>> Strange, we got along well with F.P.
>>> And both machines that we subsequently obtained used
>>> the original hex floating point (without guard digit, etc).
> 
> 
>>> It is clear that it was not the problem that you imagine.
>> It is clear that it wasn't a problem for /you/.
> 
> It wasn't a problem for anyone in an extensive institution.

It demonstrably was a problem. IBM spent a fortune fixing what could be 
fixed (note that it had to implement the change on at least seven 
different machine types), the literature was full of problems introduced 
by the S/360, and, in the end, the S/360 and follow-up lines were never 
more than marginally successful in the supercomputing arena.

> I didn't buy anything.  But I would point out that those
> "cheap" systems had superior real-time performance, with
> multiple resister sets and processor states for handling
> interrupts.
> 
>>> And if it was as bad as you claim, how come they
>>> never implemented something better?
>> They did. In 1967,
> 
> No they didn't.  I was referring to clones in which the guard digit
> on d.p. was NEVER provided. [see above]
> 
>>>> the literature was awash with the subject.
> 
>>> Such as?
> 
> Still no instance?

Gee, somehow I can't find my old computer magazines from the mid-60's. I 
guess my mother threw them out with my comic books.

>>>>> How would you have done it better ?
> 
> Still no answer?

I was in Junior High when these choices were being made. All I know is 
that they were found inadequate in the field by many customers.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/17/2006 1:11:37 AM
John W. Kennedy wrote:

(snip)

>>>> Strange, we got along well with F.P.
>>>> And both machines that we subsequently obtained used
>>>> the original hex floating point (without guard digit, etc).

>>>> It is clear that it was not the problem that you imagine.

>>> It is clear that it wasn't a problem for /you/.

>> It wasn't a problem for anyone in an extensive institution.

> It demonstrably was a problem. IBM spent a fortune fixing what could be 
> fixed (note that it had to implement the change on at least seven 
> different machine types), the literature was full of problems introduced 
> by the S/360, and, in the end, the S/360 and follow-up lines were never 
> more than marginally successful in the supercomputing arena.

Maybe, but I doubt this was why.  If you consider what a Cray-1 does
with floating point multiply and divide, you will find that IBM, even
before the fix, wasn't all that bad.

Also, IBM had extended precision which has rarely been matched by
others, as far as hardware implemented floating point.

-- glen

0
gah (12851)
1/17/2006 3:28:21 AM
On Mon, 16 Jan 2006 19:28:21 -0800, glen herrmannsfeldt  
<gah@ugcs.caltech.edu> wrote:

> Also, IBM had extended precision which has rarely been matched by
> others, as far as hardware implemented floating point.
>
VAX wasn't bad at 113 fractional bits (and yes, the manual does _not_
call it a mantissa (you're welcome, Glen!))
0
tom284 (1839)
1/17/2006 4:19:01 AM
Tom Linden wrote:

> On Mon, 16 Jan 2006 19:28:21 -0800, glen herrmannsfeldt  
> <gah@ugcs.caltech.edu> wrote:

>> Also, IBM had extended precision which has rarely been matched by
>> others, as far as hardware implemented floating point.

> VAX wasn't bad at 113 fractional bits  (and yes the manual does _not_
> call it a mantissa (your welcome Glen!) )

I used to work at a place with three 11/750's and an 11/730.  I never 
tried it, but guess which one is supposed to be faster for H-float.
Yep, the 730.  It was one of the few that had hardware for H-float.

I don't know the later VAXes well enough to know which did, but I believe
it was rare.  All would do it in software, though.

The IBM extended precision format was specifically designed to be
easy to implement in software.

-- glen

0
gah (12851)
1/17/2006 5:59:58 AM

glen herrmannsfeldt wrote:
[snip]

> Maybe, but I doubt this was why.  If you consider what a Cray-1 does
> with floating point multiply and divide, you will find that IBM, even
> before the fix, wasn't all that bad.

The Cray-1 did not initially come with hardware divide; the divide was 
done in software in the pipe.  Hardware divide was a $500K option which 
was slower than the software divide ;)

Memory parity was also a $500K option.  To check for memory errors, the 
idea was that the Cray was so fast you could just run your program twice 
and, if you got the same answer, there was no memory problem.
0
multicsfan (63)
1/17/2006 1:01:21 PM
Hello,

multicsfan wrote:

<snip>

> The Cray-1 did not initially come with hardware divide, the divide was 
> done in software in the pipe.  Hardware divide was a $500K option which 
> was slower then the software divide ;)

No Cray-1 ever had hardware divide.
Floating point divide was always a reciprocal approximation,
followed by correcting multiplications.  A special 2-a.b instruction
applied part of the correction.
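
To make the scheme concrete, here is a rough Python sketch of it (the
starting guess and the iteration count are illustrative assumptions,
not a description of the actual Cray hardware sequence):

    def cray_style_divide(b, a, recip_guess):
        # Divide b by a with no divide unit: refine an approximate
        # reciprocal of a, then multiply by it.
        r = recip_guess                 # low-precision reciprocal of a
        for _ in range(2):              # correcting multiplications
            r = r * (2.0 - a * r)       # the "2 - a.b" correction step
        return b * r

    # Example: 10/3 from a deliberately poor starting guess
    print(cray_style_divide(10.0, 3.0, 0.3))   # close to 3.3333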

> Memory Parity was also a $500K option.  To check for memory errors the 
> idea was the Cray was so fast you could just run your program twice and 
> if you got the same answer there was no memory problem.

Serial #1 Cray-1 had no parity; Serial #2 was being built without parity
but was scrapped before completion.  Serials #3 and beyond
all had SECDED.  It added one clock to scalar memory fetch times,
raising it from 10 to 11 cp.

The only option ever available on the Cray-1 was how much memory
the customer wanted to buy.  (Modulo peripherals, of course.)
Most were sold with 1 MW, from 256 KW to 4 MW was
theoretically possible.

-- 
Cheers!

Dan Nagle
Purple Sage Computing Solutions, Inc.
0
dannagle (1018)
1/17/2006 1:35:59 PM
Well, those options were in the proposal Cray sent to RPI around 
1974/1975, when RPI was looking to replace their old 360-50.  Those 
numbers came from the Cray proposal.  It is possible that at the time of 
the proposal those were options and later things changed.  I don't know 
how many Crays had been delivered at that time.  Around 1976/1977 the 
RPI-ACM had a guest lecturer who was from, or had worked at, a site with 
a Cray.  The comment I remember most was that the MTBF was about 4 
hours and the MTTR was about 10 minutes (the time to wiggle all the boards).

Dan Nagle wrote:
> Hello,
> 
> multicsfan wrote:
> 
> <snip>
> 
>> The Cray-1 did not initially come with hardware divide, the divide was 
>> done in software in the pipe.  Hardware divide was a $500K option 
>> which was slower then the software divide ;)
> 
> 
> No Cray-1 ever had hardware divide.
> Floating point divide was always a reciprocal approximation,
> followed by correcting multiplications.  A special 2-a.b instruction
> applied part of the correction.
> 
>> Memory Parity was also a $500K option.  To check for memory errors the 
>> idea was the Cray was so fast you could just run your program twice 
>> and if you got the same answer there was no memory problem.
> 
> 
> Serial #1 Cray-1 had no parity, Serial #2 was being built without parity
> but was scrapped before completion.  Serials #3 and beyond
> all had SECDED.  It added one clock to scalar memory fetch times,
> raising it from 10 to 11 cp.
> 
> The only options ever available on the Cray-1 was how much memory
> the customer wanted to buy.  (Modulo peripherals, of course.)
> Most were sold with 1 MW, from 256 KW to 4 MW was
> theoretically possible.
> 
0
multicsfan (63)
1/17/2006 9:09:17 PM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:ivXyf.40$gh5.16@fe08.lga...
> robin wrote:
> >>>> The 704 family offered double precision, too; it was not fully
> >>>> implemented in hardware, but the hardware assisted it, and the FORTRAN
> >>>> compiler supported it.
> >>> It had to, in order to meet the standard.
> >> There was no FORTRAN standard until long afterwards.
>
> > IBM set it.
>
> So your argument is that the 704 hardware had to implement
> double-precision floating-point in 1954 in order to support FORTRAN IV,
>   which didn't even come out until 1962 (two hardware generations later)?

You said it; I didn't.

> >>>>> But for most work, little difference between 36 bits and 32 bits.
> >>>>> But that's no measure, anyhow. The appropriate measure is
> >>>>> the number of mantissa bits and range of exponent.
> >>>> They add up to the word size, one way or the other.
> >>> Not relevant; what's important is the breakdown --
> >>> and in particular, the number of mantissa bits.
> >> In order to make any sense of your argument, I can only assume that you
> >> do not know what the words "relevant" and "mantissa" mean. Kindly look
> >> them up.
> >
> > The term "mantissa" has been used since the early days of computers
> > to describe part of floating-point number.
> >
> > Are you having a bad day?
>
> Either you are attempting to argue that the size of the fraction and the
> size of the exponent are each more important than one another, while
> simultaneously maintaining that word size has nothing to do with the
> issue either way, or else you are simply misusing words.

Are you trying to divert attention from "mantissa"?

> >>>> In any case, the
> >>>> S/360 had significantly fewer effective fraction bits (21) in single
> >>>> precision than the 7094 (27).
> >>> Leaving only 7 bits for the exponent.  In other words, a reduced
> >>> range of exponent, which the S/360 corrected.
> >> Having trouble with subtraction, are we now?
> >
> > When I last looked, 27 + 1 + 7 + 1 = 36.
>
> Are you under the impression that the 704 series had a 35-bit word with
> a parity bit?

You said 36 bits earlier.

> >>>> The _whole_ S/360 architecture was copied, but, whereas the 8/16/32/64
> >>>> two's-complement, byte-addressable data architecture has become
> >>>> universal, the S/360 floating-point design was never used outside of the
> >>>> context of full S/360 compatibility, and the modern descendants of the
> >>>> S/360 now offer the vastly superior IEEE-754 as an alternative. Note,
> >>>> too, that floating-point has become nearly a dead issue in the S/360
> >>>> world; the z/OS FORTRAN compiler is decades old, and several generations
> >>>> out of date.
>
> > You're overlooking, PL/I, which for which z/OS has a recent compiler.
>
> PL/I is not a major player in the raw-science market; if it were, IBM
> would have implemented PL/I support for the (now dead, like every other
> attempt to put the 360 family back into the high-performance-computing
> market) 370 vector processor.
>
> >>> The S/360 was not copied in its entirety,
> >> Problem state was.
> >
> > Only the original, not the revised hardware, as I previously stated (below).
>
> Indeed, even the TS instruction was not implemented.
>
> >>> and even in those
> >>> cases where it was not copied in its entirety, the
> >>> original hex floating-point design was retained
> >>> (without guard digit on double, with zero for underflow, etc)
> >> I'm sure IBM spent all that money upgrading all those machines without
> >> payment just for fun.
> >
> > AFAIK, no-one else followed suite.
>
> Because they couldn't afford to.

Try again.  They could have done it when the system
was built.

Most of the changes would have required only an alteration to
the ROM (except for the hardwired machines).

> >>> Strange, we got along well with F.P.
> >>> And both machines that we subsequently obtained used
> >>> the original hex floating point (without guard digit, etc).
> >
> >>> It is clear that it was not the problem that you imagine.
> >> It is clear that it wasn't a problem for /you/.
> >
> > It wasn't a problem for anyone in an extensive institution.
>
> It demonstrably was a problem. IBM spent a fortune fixing what could be
> fixed

Most of the changes would have required only an alteration
to the ROM.

> (note that it had to implement the change on at least seven
> different machine types), the literature was full of problems introduced
> by the S/360,

You still haven't named any.

> and, in the end, the S/360 and follow-up lines were never
> more than marginally successful in the supercomputing arena.

This was principally on account of the price of the machine
and its speed (or rather, lack of it), rather than any other factors.

FYI, our s/360 was slower than the machine that it replaced
for small jobs -- yet the machine it replaced was 50 times
slower (add time 64uS).  When LCS was put on the S/360,
it ran even slower, because the OS took up most of the
fast memory, so that user programs were loaded into slow
memory.
    One of the factors contributing to the slowness showed up
when we consistently got "disk overruns" (with slow memory).
These errors only came up after some 250 attempts to read a track
of disc to memory, and each retry failed (DMA).
The remedy?  Increase the number of attempts!

> > I didn't buy anything.  But I would point out that those
> > "cheap" systems had superior real-time performance, with
> > multiple resister sets and processor states for handling
> > interrupts.
> >
> >>> And if it was as bad as you claim, how come they
> >>> never implemented something better?
> >> They did. In 1967,
> >
> > No they didn't.  I was referring to clones in which the guard digit
> > on d.p. was NEVER provided. [see above]
> >
> >>>> the literature was awash with the subject.
> >
> >>> Such as?
> >
> > Still no instance?
>
> Gee, somehow I can't find my old computer magazines from the mid-60's. I
> guess my mother threw them out with my comic books.
>
> >>>>> How would you have done it better ?
> >
> > Still no answer?
>
> I was in Junior High when these choices were being made. All I know is
> that they were found inadequate in the field by many customers.


0
robin_v (2737)
1/17/2006 9:18:17 PM
Hello,

multicsfan wrote:
> Well those options were in the proposal Cray sent to RPI around 
> 1974/1975 when RPI was looking to replace their old 360-50.  Those 
> numbers came from the Cray proposal.  It is possible that at the time of 
> the proposal those were options and later things changed.  I don'tknow 
> how many Cray's had been delivered at that time.

The first Cray-1 went to Los Alamos in 1976.
It was given for a 6-month trial, with negotiations
at the end of the trial.  The negotiations
resulted in SECDED being added to all future systems.

>  Around 1976/1977 the 
> RPI-ACM had a guest lecturer from a site with a Cray or had worked at a 
> site with a Cray.  The comment I remember most was the MTBF was about 4 
> hours and the MTTR was about 10 minutes (the time to wiggle all the 
> boards).

These are numbers from SECDED systems.
MTTCR (mean time to cosmic rays) was more like a half-hour
to an hour.

> Dan Nagle wrote:
> 
>> Hello,
>>
>> multicsfan wrote:
>>
>> <snip>
>>
>>> The Cray-1 did not initially come with hardware divide, the divide 
>>> was done in software in the pipe.  Hardware divide was a $500K option 
>>> which was slower then the software divide ;)
>>
>>
>>
>> No Cray-1 ever had hardware divide.
>> Floating point divide was always a reciprocal approximation,
>> followed by correcting multiplications.  A special 2-a.b instruction
>> applied part of the correction.
>>
>>> Memory Parity was also a $500K option.  To check for memory errors 
>>> the idea was the Cray was so fast you could just run your program 
>>> twice and if you got the same answer there was no memory problem.
>>
>>
>>
>> Serial #1 Cray-1 had no parity, Serial #2 was being built without parity
>> but was scrapped before completion.  Serials #3 and beyond
>> all had SECDED.  It added one clock to scalar memory fetch times,
>> raising it from 10 to 11 cp.
>>
>> The only options ever available on the Cray-1 was how much memory
>> the customer wanted to buy.  (Modulo peripherals, of course.)
>> Most were sold with 1 MW, from 256 KW to 4 MW was
>> theoretically possible.
>>


-- 
Cheers!

Dan Nagle
Purple Sage Computing Solutions, Inc.
0
dannagle (1018)
1/17/2006 10:00:29 PM
Sounds like they hadn't delivered the first one at the time of the 
proposal to RPI, which is probably where the discrepancy is.

Dan Nagle wrote:

> Hello,
> 
> multicsfan wrote:
> 
>> Well those options were in the proposal Cray sent to RPI around 
>> 1974/1975 when RPI was looking to replace their old 360-50.  Those 
>> numbers came from the Cray proposal.  It is possible that at the time 
>> of the proposal those were options and later things changed.  I 
>> don'tknow how many Cray's had been delivered at that time.
> 
> 
> The first Cray-1 went to Los Alamos in 1976.
> It was given as a 6-month trial, negotiations
> at the end of the trial.  The negotiations
> resulted in SECDED being added to all future systems.
> 
>>  Around 1976/1977 the RPI-ACM had a guest lecturer from a site with a 
>> Cray or had worked at a site with a Cray.  The comment I remember most 
>> was the MTBF was about 4 hours and the MTTR was about 10 minutes (the 
>> time to wiggle all the boards).
> 
> 
> These are numbers from SECDED systems.
> MTTCR (mean time to cosmic rays) was more like a half-hour
> to an hour.
> 
>> Dan Nagle wrote:
>>
>>> Hello,
>>>
>>> multicsfan wrote:
>>>
>>> <snip>
>>>
>>>> The Cray-1 did not initially come with hardware divide, the divide 
>>>> was done in software in the pipe.  Hardware divide was a $500K 
>>>> option which was slower then the software divide ;)
>>>
>>>
>>>
>>>
>>> No Cray-1 ever had hardware divide.
>>> Floating point divide was always a reciprocal approximation,
>>> followed by correcting multiplications.  A special 2-a.b instruction
>>> applied part of the correction.
>>>
>>>> Memory Parity was also a $500K option.  To check for memory errors 
>>>> the idea was the Cray was so fast you could just run your program 
>>>> twice and if you got the same answer there was no memory problem.
>>>
>>>
>>>
>>>
>>> Serial #1 Cray-1 had no parity, Serial #2 was being built without parity
>>> but was scrapped before completion.  Serials #3 and beyond
>>> all had SECDED.  It added one clock to scalar memory fetch times,
>>> raising it from 10 to 11 cp.
>>>
>>> The only options ever available on the Cray-1 was how much memory
>>> the customer wanted to buy.  (Modulo peripherals, of course.)
>>> Most were sold with 1 MW, from 256 KW to 4 MW was
>>> theoretically possible.
>>>
> 
> 
0
multicsfan (63)
1/17/2006 10:11:17 PM
"robin" <robin_v@bigpond.com> writes:
> FYI, our s/360 was slower than the machine that it replaced for
> small jobs -- yet the machine it replaced was 50 times slower (add
> time 64uS).  When LCS was put on the S/360, it ran even slower,
> because the OS took up most of the fast memory, so that user
> programs were loaded into slow memory.

our 360/67 at the univ. had significantly worse thruput than the 709
(tube machine) that it replaced. the 360/67 had 8-byte wide 750ns
memory (and instruction) cycle time (compared to 360/50 2-byte 2mic
memory, and most LCS was 8mic). 360/67 was essentially a 360/65 that
had virtual memory hardware bolted on.

a dominant workload was fortran student jobs ... there was a 1401
front-end that handled unitrecord<->tape ... and you carried tapes
between the 1401 and 709 (this was 40 yrs ago, 1966). the 709 fortran
monitor ran student jobs (tape to tape) at a couple seconds per.

360/67 came in with os/360 and 2311 disks. job processing was
synchronous with unit record processing ... read the cards ... write stuff
to disk, read stuff from disk, eventually execute ... print output
.... do the next job. this was taking minutes per student job. most of
the time 360/67 ran as non-virtual memory 360/65 with vanilla os/360.
the 67 associative array hardware lookup (virtual to real address
translation) did add 150ns to the 750ns basic memory cycle (900ns total).

by os/mft11 release, the univ got HASP, which decoupled the unit
record processing from the job execution. the other operating system
gorp of moving lots of stuff back&forth between memory and disk was
still resulting in student fortran job processing taking over 30
seconds.

i did a lot of detailed analysis and careful construction/placement of
operating system stuff on disk for optimized arm seek operation and
got the typical student fortran job elapsed time to a little under 12
seconds (still longer than they had run on 709) ... but almost three
times faster than it had been taking.

it wasn't until we got watfor monitor from univ. of waterloo that
student fortran job thruput started to exceed what it had been on the
709.

i gave a talk at the fall '68 share meeting in boston on both the
optimization work on os/360 as well as the extensive kernel rewrites that i
had done to cp/67 (i was still an undergraduate) ... previous posting
referencing part of that talk
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

for a little drift ... i had done some of the work on gcard ...
an ios3270 version of the 360 green card ... and just recently
did something of a rough conversion to html
http://www.garlic.com/~lynn/gcard.html

and of course, the 360 functional characteristics documents gave
detailed machine timings ... several have been scanned and are
available here (including 360/65, 360/67, 360/91, and 360/195):
http://bitsavers.trailing-edge.com/pdf/ibm/360/funcChar/
http://www.bitsavers.org/pdf/ibm/360/funcChar/

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
0
lynn13 (400)
1/17/2006 11:37:33 PM
Dan Nagle wrote:

(snip)

> No Cray-1 ever had hardware divide.
> Floating point divide was always a reciprocal approximation,
> followed by correcting multiplications.  A special 2-a.b instruction
> applied part of the correction.

Well, it was used to implement divide in high-level languages.

I thought I remembered that multiply was not accurate out to the
last bit; the reciprocal approximation obviously isn't.  Maybe the
corrected reciprocal is pretty close.

Back to the 360, the 360/91 uses a divide algorithm similar to Cray,
except that it does the multiply at the same time.  It generates a
rounded quotient, unlike the truncated quotient that S/360 specifies.
As a result, it violates the Principles of Operation by being closer to 
the right answer.
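
A trivial illustration of the difference (a made-up 24-bit quotient,
nothing to do with the 91's actual datapath):

    from fractions import Fraction

    q = Fraction(2, 3)                        # exact quotient
    scaled = q * (1 << 24)                    # keep 24 quotient bits
    truncated = int(scaled)                   # truncated, as the architecture specifies
    rounded = int(scaled + Fraction(1, 2))    # rounded, as described above for the 91
    print(truncated, rounded)                 # differ by 1 in the last place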

-- glen

0
gah (12851)
1/18/2006 3:11:38 AM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:ivXyf.40$gh5.16@fe08.lga...
>> robin wrote:
>>>>>> The 704 family offered double precision, too; it was not fully
>>>>>> implemented in hardware, but the hardware assisted it, and the FORTRAN
>>>>>> compiler supported it.
>>>>> It had to, in order to meet the standard.
>>>> There was no FORTRAN standard until long afterwards.
>>> IBM set it.
>> So your argument is that the 704 hardware had to implement
>> double-precision floating-point in 1954 in order to support FORTRAN IV,
>>   which didn't even come out until 1962 (two hardware generations later)?
> 
> You said it ; I didn't.

The bloody quotes are right above.

>>>>>>> But for most work, little difference between 36 bits and 32 bits.
>>>>>>> But that's no measure, anyhow. The appropriate measure is
>>>>>>> the number of mantissa bits and range of exponent.
>>>>>> They add up to the word size, one way or the other.
>>>>> Not relevant; what's important is the breakdown --
>>>>> and in particular, the number of mantissa bits.
>>>> In order to make any sense of your argument, I can only assume that you
>>>> do not know what the words "relevant" and "mantissa" mean. Kindly look
>>>> them up.
>>> The term "mantissa" has been used since the early days of computers
>>> to describe part of floating-point number.
>>>
>>> Are you having a bad day?
>> Either you are attempting to argue that the size of the fraction and the
>> size of the exponent are each more important than one another, while
>> simultaneously maintaining that word size has nothing to do with the
>> issue either way, or else you are simply misusing words.
> 
> Are you trying to divert attention from "mantissa"?

We'll try this one more time.

You argue simultaneously that the "mantissa" is most important and that 
the exponent is most important. This means one of two things: you think 
"mantissa" means "exponent", or you're contradicting yourself.

>>>>>> In any case, the
>>>>>> S/360 had significantly fewer effective fraction bits (21) in single
>>>>>> precision than the 7094 (27).
>>>>> Leaving only 7 bits for the exponent.  In other words, a reduced
>>>>> range of exponent, which the S/360 corrected.
>>>> Having trouble with subtraction, are we now?
>>> When I last looked, 27 + 1 + 7 + 1 = 36.
>> Are you under the impression that the 704 series had a 35-bit word with
>> a parity bit?
> 
> You said 36 bits earlier.

Yes, I did, because it /did/ have 36. But your breakdown above includes 
an extra 1-bit field that cannot be accounted for.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/18/2006 3:16:30 AM
The 360/91 wasn't the only one with problems.  When I played with the 
Watfiv source there were comments here and there about problems 
different model 360's had.  Some were in error trapping.  I don't 
remember which model(s) (I still have the printed source), but some of 
them with pipelines could not accurately report what instruction caused 
an error, just that it was one of the ones in the pipe that caused the 
problem.

I believe some other early pipelined machines had this problem or even 
worse ones.  Not only did they have problems with error tracing during 
execution, but sometimes the program counter wasn't properly tracked 
during normal interrupts like I/O, timers, etc.

IIRC the UCLA people doing the IBM part of the network project I worked 
on said their 360/91 would get slower as they added RAM due to the cable 
lengths between the memory boxes and the CPU box.

glen herrmannsfeldt wrote:

> Dan Nagle wrote:
> 
> (snip)
> 
>> No Cray-1 ever had hardware divide.
>> Floating point divide was always a reciprocal approximation,
>> followed by correcting multiplications.  A special 2-a.b instruction
>> applied part of the correction.
> 
> 
> Well, it was used to implement divide in high-level languages.
> 
> I thought I remembered that multiply was not accurate out to the
> last bit, the reciprocal approximation obviously isn't.  Maybe the
> corrected reciprocal is pretty close.
> 
> Back to the 360, the 360/91 uses a divide algorithm similar to Cray,
> except that it does the multiply at the same time.  It generates a
> rounded quotient, unlike the truncated quotient that S/360 specifies.
> As a result, it violates the Principles of Operation by being closer to 
> the right answer.
> 
> -- glen
> 
0
multicsfan (63)
1/18/2006 4:18:15 AM
multicsfan wrote:

> The 360/91 wasn't the only one with problems.  When I played with the 
> Watfiv source there were comments here and there about problems 
> different model 360's had.  Some were in error trapping.  I don't 
> remember which model(s) (I still have the printed source), but some of 
> them wiht pipelines could not accuratly report what instruction caused 
> an error, just that it was one of the ones in the pipe that caused the 
> problem.

> I believe some other early pipelined machines had this problem or even 
> worse ones.  not only did they have problems with error tracing during 
> execution but sometimes the program counter wasn't properly tracked 
> during normal interrupts like I/O, timers, etc.

Imprecise interrupts.

Some other processors had imprecise interrupts for things like 
protection, but the 360/91 and /195 had them for many of the floating 
point operations.  With out of order execution and completion, it
must complete all instructions before the interrupt can be taken.

PL/I (F) has a message that changes when sysgenned for the 91,
such that messages say NEAR when indicating where the error occurred.
Though with the STMT option it will put BR 0 instructions between each
statement to flush the pipeline. (And slow everything down.)

-- glen

0
gah (12851)
1/18/2006 5:00:10 AM
"multicsfan" <multicsfan@hotmail.com> wrote in message
news:BU5zf.2504$SD3.2051@trndny07...
>
> The Cray-1 did not initially come with hardware divide, the divide was
> done in software in the pipe.  Hardware divide was a $500K option which
> was slower then the software divide ;)
>
> Memory Parity was also a $500K option.  To check for memory errors the
> idea was the Cray was so fast you could just run your program twice and
> if you got the same answer there was no memory problem.

Just like the old days (1950s)!


0
robin_v (2737)
1/18/2006 1:27:46 PM
In article <pqizf.1906$EU3.1442@fe12.lga>,
 John W. Kennedy <jwkenne@attglobal.net> wrote:

>>>> When I last looked, 27 + 1 + 7 + 1 = 36.
>>> Are you under the impression that the 704 series had a 35-bit word with
>>> a parity bit?
>> 
>> You said 36 bits earlier.
>
> Yes, I did, because it /did/ have 36. But your breakdown above includes 
> an extra 1-bit field that cannot be accounted for.

There are two sign bits: sign of mantissa, and sign of exponent.  "Excess"
representation of the exponent hides the explicit sign, but the bit is still
effectively a sign.

Also, with binary floating point, a normalized mantissa would always have a
1 as the leftmost bit, so in most implementations, that's assumed and
overwritten by the sign bit.
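
For a modern illustration of that hidden leading 1, here is a minimal
Python sketch that decodes an IEEE-754 single-precision bit pattern
(infinities and NaNs are ignored for brevity, and this says nothing
about the 704-era formats discussed above):

    def decode_ieee_single(word):
        # word: 32-bit integer holding an IEEE-754 single
        sign     = (word >> 31) & 0x1
        exponent = (word >> 23) & 0xFF     # excess-127
        fraction = word & 0x7FFFFF         # 23 stored fraction bits
        if exponent == 0:                  # subnormal: no hidden bit
            magnitude = (fraction / float(1 << 23)) * 2.0 ** -126
        else:                              # normal: implicit leading 1
            magnitude = (1.0 + fraction / float(1 << 23)) * 2.0 ** (exponent - 127)
        return -magnitude if sign else magnitude

    print(decode_ieee_single(0x3FC00000))   # 1.5
    print(decode_ieee_single(0xC0490FDB))   # about -3.1415927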

-- 
Randy Hudson

0
ime1 (5)
1/19/2006 6:31:58 AM
Randy Hudson wrote:
> In article <pqizf.1906$EU3.1442@fe12.lga>,
>  John W. Kennedy <jwkenne@attglobal.net> wrote:
> 
>>>>> When I last looked, 27 + 1 + 7 + 1 = 36.
>>>> Are you under the impression that the 704 series had a 35-bit word with
>>>> a parity bit?
>>> You said 36 bits earlier.
>> Yes, I did, because it /did/ have 36. But your breakdown above includes 
>> an extra 1-bit field that cannot be accounted for.
> 
> There's two sign bits: sign of mantissa, and sign of exponent.  "Excess"
> representation of the exponent hides the explicit sign, but the bit is still
> effectively a sign.

In forty years, I have yet to see one single hardware manual that 
describes it so.

> Also, with binary floating point, a normalized mantissa would always have a
> 1 as the leftmost bit, so in most implementations, that's assumed and
> overwritten by the sign bit.

That's a relatively modern sophistication, and definitely not applicable 
to the vacuum-tube and discrete-transistor eras.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/19/2006 4:31:46 PM
On Thu, 19 Jan 2006 11:31:46 -0500, John W. Kennedy  
<jwkenne@attglobal.net> wrote:

> Randy Hudson wrote:
>> In article <pqizf.1906$EU3.1442@fe12.lga>,
>>  John W. Kennedy <jwkenne@attglobal.net> wrote:
>>
>>>>>> When I last looked, 27 + 1 + 7 + 1 = 36.
>>>>> Are you under the impression that the 704 series had a 35-bit word  
>>>>> with
>>>>> a parity bit?
>>>> You said 36 bits earlier.
>>> Yes, I did, because it /did/ have 36. But your breakdown above  
>>> includes an extra 1-bit field that cannot be accounted for.
>>  There's two sign bits: sign of mantissa, and sign of exponent.   
>> "Excess"
>> representation of the exponent hides the explicit sign, but the bit is  
>> still
>> effectively a sign.
>
> In forty years, I have yet to see one single hardware manual that  
> describes it so.
>
>> Also, with binary floating point, a normalized mantissa would always  
>> have a
>> 1 as the leftmost bit, so in most implementations, that's assumed and
>> overwritten by the sign bit.
>
> That's a relatively modern sophistication, and definitely not applicable  
> to the vacuum-tube and discrete-transistor eras.
>
I don't believe that is true; I believe most floating-point
representations that have used a binary exponent have suppressed the
leading one to obtain one more bit of accuracy.  But with a radix-16
exponent you can't, of course, do that.

Not sure how far this goes back in time, but I bet it is to the '50s
anyway.

0
tom284 (1839)
1/19/2006 10:25:29 PM
From: "John W. Kennedy" <jwkenne@attglobal.net>
Sent: Wednesday, January 18, 2006 2:16 PM

> robin wrote:
> > "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> > news:ivXyf.40$gh5.16@fe08.lga...
> >> robin wrote:
> >>>>>> The 704 family offered double precision, too; it was not fully
> >>>>>> implemented in hardware, but the hardware assisted it, and the FORTRAN
> >>>>>> compiler supported it.
> >>>>> It had to, in order to meet the standard.
> >>>> There was no FORTRAN standard until long afterwards.
> >>> IBM set it.
> >> So your argument is that the 704 hardware had to implement
> >> double-precision floating-point in 1954 in order to support FORTRAN IV,
> >>   which didn't even come out until 1962 (two hardware generations later)?
> >
> > You said it ; I didn't.
>
> The bloody quotes are right above.

I think that it would be a good idea if you ceased
your ridiculous allegations, which have not been based on anything.

> >>>>>>> But for most work, little difference between 36 bits and 32 bits.
> >>>>>>> But that's no measure, anyhow. The appropriate measure is
> >>>>>>> the number of mantissa bits and range of exponent.
> >>>>>> They add up to the word size, one way or the other.
> >>>>> Not relevant; what's important is the breakdown --
> >>>>> and in particular, the number of mantissa bits.
> >>>> In order to make any sense of your argument, I can only assume that you
> >>>> do not know what the words "relevant" and "mantissa" mean. Kindly look
> >>>> them up.
> >>> The term "mantissa" has been used since the early days of computers
> >>> to describe part of floating-point number.
> >>>
> >>> Are you having a bad day?
> >> Either you are attempting to argue that the size of the fraction and the
> >> size of the exponent are each more important than one another, while
> >> simultaneously maintaining that word size has nothing to do with the
> >> issue either way, or else you are simply misusing words.
> >
> > Are you trying to divert attention from "mantissa"?
>
> We'll try this one more time.
>
> You argue simultaneously that the "mantissa" is most important and that
> the exponent is most important.

If you look again, you will see that I didn't say that.

> This means one of two things: you think
> "mantissa" means "exponent", or you're contradicting yourself.

The only person who doesn't know what "mantissa" means
is yourself.  Do you still think that it is to do with logarithms?
And BTW, I didn't contradict myself.

> >>>>>> In any case, the
> >>>>>> S/360 had significantly fewer effective fraction bits (21) in single
> >>>>>> precision than the 7094 (27).
> >>>>> Leaving only 7 bits for the exponent.  In other words, a reduced
> >>>>> range of exponent, which the S/360 corrected.
> >>>> Having trouble with subtraction, are we now?
> >>> When I last looked, 27 + 1 + 7 + 1 = 36.
> >> Are you under the impression that the 704 series had a 35-bit word with
> >> a parity bit?
> >
> > You said 36 bits earlier.
>
> Yes, I did, because it /did/ have 36. But your breakdown above includes
> an extra 1-bit field that cannot be accounted for.

I'll leave it for you to work out what the bits might be for.



0
robin_v (2737)
1/20/2006 12:14:16 AM
"Randy Hudson" <ime@panix.com> wrote in message
news:dqnbot$jb8$1@reader2.panix.com...
> In article <pqizf.1906$EU3.1442@fe12.lga>,
>  John W. Kennedy <jwkenne@attglobal.net> wrote:
>
> >>>> When I last looked, 27 + 1 + 7 + 1 = 36.
> >>> Are you under the impression that the 704 series had a 35-bit word with
> >>> a parity bit?
> >>
> >> You said 36 bits earlier.
> >
> > Yes, I did, because it /did/ have 36. But your breakdown above includes
> > an extra 1-bit field that cannot be accounted for.
>
> There's two sign bits: sign of mantissa, and sign of exponent.  "Excess"
> representation of the exponent hides the explicit sign, but the bit is still
> effectively a sign.
>
> Also, with binary floating point, a normalized mantissa would always have a
> 1 as the leftmost bit,

Except for zero.

> so in most implementations, that's assumed and
> overwritten by the sign bit.

Early float implementations did not do that.


0
robin_v (2737)
1/20/2006 12:14:17 AM
"Anne & Lynn Wheeler" <lynn@garlic.com> wrote in message
news:m37j8ymnnm.fsf@lhwlinux.garlic.com...
> "robin" <robin_v@bigpond.com> writes:
> > FYI, our s/360 was slower than the machine that it replaced for
> > small jobs -- yet the machine it replaced was 50 times slower (add
> > time 64uS).  When LCS was put on the S/360, it ran even slower,
> > because the OS took up most of the fast memory, so that user
> > programs were loaded into slow memory.
>
> our 360/67 at the univ. had significantly worse thruput than the 709
> (tube machine) that it replaced. the 360/67 had 8-byte wide 750ns
> memory (and instruction) cycle time (compared to 360/50 2-byte 2mic
> memory, and most LCS was 8mic). 360/67 was essentially a 360/65 that
> had virtual memory hardware bolted on.
>
> a dominate workload was fortran student jobs ... there was 1401
> front-end that handled unitrecord<->tape ... and you carried tapes
> between the 1401 and 709 (this was 40 yrs ago, 1966). the 709 fortran
> monitor ran student jobs (tape to tape) at a couple seconds per.
>
> 360/67 came in with os/360 and 2311 disks. job processing was
> syncronous with unit record process ... read the cards ... write stuff
> to disk, read stuff from disk, eventually execute ... print output
> ... do the next job. this was taking minutes per student job. most of
> the time 360/67 ran as non-virtual memory 360/65 with vanilla os/360.
> the 67 associative array hardware lookup (virtual to real address
> translation) did add 150ns to 750ns basic memory cycle (900ms total).
>
> by os/mft11 release, the univ got HASP, which decoupled the unit
> record processing from the job execution. the other operating system
> gorp of moving lots of stuff back&forth between memory and disk was
> still resulting in student fortran job processing taking over 30
> seconds.
>
> i did a lot of detailed analysis and careful construction/placement of
> operating system stuff on disk for optimized arm seek operation and
> got the typical student fortran job elapsed time to a little under 12
> seconds (still longer than they had run on 709) ... but almost three
> times faster than it had been taking.
>
> it wasn't until we got watfor monitor from univ. of waterloo that
> student fortran job thruput started to exceed what it had been on the
> 709.

Your experiences with LCS, HASP, and WATFOR pretty well match
ours.  The time for a small job in Fortran, PL/I, and COBOL using IBM
compilers took about one-and-a-half minutes.  Much of that was
taken up by the link editor.  We were able to improve on that
somewhat by buffering compiler I/O, but with limited memory
there was an upper bound on buffer size.
    The real improvements came with HASP and WATFOR,
although with WATFOR the I/O bottleneck became worse.

> i gave talk at fall '68 share meeting in boston on both the
> optimization work on os/360 as well extensive kernel rewrites that i
> had done to cp/67 (i was still undergraduate) ... previous posting
> referencing part of that talk
> http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
> http://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
>
> for a little drift ... i had done some of the work on gcard ...
> an ios3270 version of the 360 green card ... and just recently
> did something of a rough conversion to html
> http://www.garlic.com/~lynn/gcard.html
>
> and of course, the 360 functional characteristics documents game
> detailed machine timings ... several have been scanned and are
> available here (including 360/65, 360/67, 360/91, and 360/195):
> http://bitsavers.trailing-edge.com/pdf/ibm/360/funcChar/
> http://www.bitsavers.org/pdf/ibm/360/funcChar/
> --
> Anne & Lynn Wheeler | http://www.garlic.com/~lynn/


0
robin_v (2737)
1/20/2006 12:14:18 AM
John W. Kennedy wrote:
> Randy Hudson wrote:

(snip)

>> There's two sign bits: sign of mantissa, and sign of exponent.  "Excess"
>> representation of the exponent hides the explicit sign, but the bit is 
>> still effectively a sign.

> In forty years, I have yet to see one single hardware manual that 
> describes it so.

For ones complement machines, as I understand it, it is usually stated
as a sign bit.  For twos complement machines with a biased exponent it
isn't.

-- glen

0
gah (12851)
1/20/2006 4:14:04 AM
Tom Linden wrote:
> On Thu, 19 Jan 2006 11:31:46 -0500, John W. Kennedy 
> <jwkenne@attglobal.net> wrote:
> 
>> Randy Hudson wrote:
>>> In article <pqizf.1906$EU3.1442@fe12.lga>,
>>>  John W. Kennedy <jwkenne@attglobal.net> wrote:
>>>
>>>>>>> When I last looked, 27 + 1 + 7 + 1 = 36.
>>>>>> Are you under the impression that the 704 series had a 35-bit word 
>>>>>> with
>>>>>> a parity bit?
>>>>> You said 36 bits earlier.
>>>> Yes, I did, because it /did/ have 36. But your breakdown above 
>>>> includes an extra 1-bit field that cannot be accounted for.
>>>  There's two sign bits: sign of mantissa, and sign of exponent.  
>>> "Excess"
>>> representation of the exponent hides the explicit sign, but the bit 
>>> is still
>>> effectively a sign.
>>
>> In forty years, I have yet to see one single hardware manual that 
>> describes it so.
>>
>>> Also, with binary floating point, a normalized mantissa would always 
>>> have a
>>> 1 as the leftmost bit, so in most implementations, that's assumed and
>>> overwritten by the sign bit.
>>
>> That's a relatively modern sophistication, and definitely not 
>> applicable to the vacuum-tube and discrete-transistor eras.
>>
> I don't believe that is true, I believe most floating point 
> representations that
> have used a binary exponent have suppressed the leading one to obtain 
> one more
> bit of accuracy.  But with a radix 16 exponent you can't. of course do 
> that.
> 
> Not sure how far this goes back in time, but i bet it is to the 50's 
> anyway.

At the very least, it is /not/ the case in the IBM 
704/709/7040/7044/7090/7094 family, which is the architecture that 
FORTRAN was designed for, the architecture under discussion in this 
subthread, and the most important scientific architecture previous to 
the 360.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/21/2006 5:11:31 AM
robin wrote:
> From: "John W. Kennedy" <jwkenne@attglobal.net>
> Sent: Wednesday, January 18, 2006 2:16 PM
> 
>> robin wrote:
>>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>> news:ivXyf.40$gh5.16@fe08.lga...
>>>> robin wrote:
>>>>>>>> The 704 family offered double precision, too; it was not fully
>>>>>>>> implemented in hardware, but the hardware assisted it, and the FORTRAN
>>>>>>>> compiler supported it.
>>>>>>> It had to, in order to meet the standard.
>>>>>> There was no FORTRAN standard until long afterwards.
>>>>> IBM set it.
>>>> So your argument is that the 704 hardware had to implement
>>>> double-precision floating-point in 1954 in order to support FORTRAN IV,
>>>>   which didn't even come out until 1962 (two hardware generations later)?
>>> You said it ; I didn't.
>> The bloody quotes are right above.
> 
> I think that it would be a good idea if you ceased
> your ridiculous allegations, which have not been based on anything.

You said -- it's quoted right above -- that the FORTRAN compiler for the 
704 had to support double precision "in order to meet the standard", 
which is absurd, because the FORTRAN compiler for the 704 was the first 
FORTRAN compiler there ever was, and existed long before any standard.

>>>>>>>>> But for most work, little difference between 36 bits and 32 bits.
>>>>>>>>> But that's no measure, anyhow. The appropriate measure is
>>>>>>>>> the number of mantissa bits and range of exponent.
>>>>>>>> They add up to the word size, one way or the other.
>>>>>>> Not relevant; what's important is the breakdown --
>>>>>>> and in particular, the number of mantissa bits.
>>>>>> In order to make any sense of your argument, I can only assume that you
>>>>>> do not know what the words "relevant" and "mantissa" mean. Kindly look
>>>>>> them up.
>>>>> The term "mantissa" has been used since the early days of computers
>>>>> to describe part of floating-point number.
>>>>>
>>>>> Are you having a bad day?
>>>> Either you are attempting to argue that the size of the fraction and the
>>>> size of the exponent are each more important than one another, while
>>>> simultaneously maintaining that word size has nothing to do with the
>>>> issue either way, or else you are simply misusing words.
>>> Are you trying to divert attention from "mantissa"?
>> We'll try this one more time.
>>
>> You argue simultaneously that the "mantissa" is most important and that
>> the exponent is most important.
> 
> If you look again, you will see that I didn't say that.

In one and the same posting, you said, "What's important is ... the 
number of mantissa bits," and then followed it up by indicating that the 
360 did a good thing by increasing the exponent range at the expense of 
fraction bits. You can't have it both ways.

>> This means one of two things: you think
>> "mantissa" means "exponent", or you're contradicting yourself.
> 
> The only person who doesn't know what "mantissa" means
> is your self.  Do you still think that it is to do with logarithms?
> And BTW, I didn't contradict myself.

Actually, I didn't raise the issue of logarithms; Glen did. However, he 
was right; the use of "mantissa" to mean "fraction component of a 
floating-point number", though widespread, is an abuse, like using "k" 
to mean 1024.

I have already indicated how you contradicted yourself.

>>>>>>>> In any case, the
>>>>>>>> S/360 had significantly fewer effective fraction bits (21) in single
>>>>>>>> precision than the 7094 (27).
>>>>>>> Leaving only 7 bits for the exponent.  In other words, a reduced
>>>>>>> range of exponent, which the S/360 corrected.
>>>>>> Having trouble with subtraction, are we now?
>>>>> When I last looked, 27 + 1 + 7 + 1 = 36.
>>>> Are you under the impression that the 704 series had a 35-bit word with
>>>> a parity bit?
>>> You said 36 bits earlier.
>> Yes, I did, because it /did/ have 36. But your breakdown above includes
>> an extra 1-bit field that cannot be accounted for.
> 
> I'll leave it for you to work out what the bits might be for.

The actual format of a 704 floating-point number was:

S (it was called S rather than zero): sign
1-8: excess-128 exponent
9-35: fraction.
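
A quick Python sketch of that layout (it handles only ordinary
normalized values and makes no claim about how the 704 treated
unnormalized or special operands):

    def decode_704_float(word):
        # word: 36-bit integer, bits numbered S,1..35 from the left
        sign     = (word >> 35) & 0x1
        exponent = (word >> 27) & 0xFF       # bits 1-8, excess-128
        fraction = word & ((1 << 27) - 1)    # bits 9-35, binary fraction < 1
        magnitude = (fraction / float(1 << 27)) * 2.0 ** (exponent - 128)
        return -magnitude if sign else magnitude

    # Example: exponent 129, fraction 0.75 -> 1.5
    print(decode_704_float((129 << 27) | (3 << 25)))   # 1.5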

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/21/2006 5:22:49 AM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:8ojAf.3953$e9.3645@fe12.lga...
> Tom Linden wrote:
> > Not sure how far this goes back in time, but i bet it is to the 50's
> > anyway.
>
> At the very least, it is /not/ the case in the IBM
> 704/709/7040/7044/7090/7094 family, which is the architecture that
> FORTRAN was designed for, the architecture under discussion in this
> subthread, and the most important scientific architecture previous to
> the 360.

The most important scientific architecture before the 360
was the Pilot ACE, on which pioneering work on floating point and
the theory of rounding errors was done.


0
robin_v (2737)
1/22/2006 2:11:29 AM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:KyjAf.3955$e9.3653@fe12.lga...
> robin wrote:
> > From: "John W. Kennedy" <jwkenne@attglobal.net>
> > Sent: Wednesday, January 18, 2006 2:16 PM
> >
> >> robin wrote:
> >>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> >>> news:ivXyf.40$gh5.16@fe08.lga...
> >>>> robin wrote:
> >>>>>>>> The 704 family offered double precision, too; it was not fully
> >>>>>>>> implemented in hardware, but the hardware assisted it, and the
FORTRAN
> >>>>>>>> compiler supported it.
> >>>>>>> It had to, in order to meet the standard.
> >>>>>> There was no FORTRAN standard until long afterwards.
> >>>>> IBM set it.
> >>>> So your argument is that the 704 hardware had to implement
> >>>> double-precision floating-point in 1954 in order to support FORTRAN IV,
> >>>>   which didn't even come out until 1962 (two hardware generations later)?
> >>> You said it ; I didn't.
> >> The bloody quotes are right above.
> >
> > I think that it would be a good idea if you ceased
> > your ridiculous allegations, which have not been based on anything.
>
> You said -- it's quoted right above -- that the FORTRAN compiler for the
> 704 had to support double precision "in order to meet the standard",
> which is absurd, because the FORTRAN compiler for the 704 was the first
> FORTRAN compiler there ever was, and existed long before any standard.

And I replied that IBM set it.

> >>>>>>>>> But for most work, little difference between 36 bits and 32 bits.
> >>>>>>>>> But that's no measure, anyhow. The appropriate measure is
> >>>>>>>>> the number of mantissa bits and range of exponent.
> >>>>>>>> They add up to the word size, one way or the other.
> >>>>>>> Not relevant; what's important is the breakdown --
> >>>>>>> and in particular, the number of mantissa bits.
> >>>>>> In order to make any sense of your argument, I can only assume that you
> >>>>>> do not know what the words "relevant" and "mantissa" mean. Kindly look
> >>>>>> them up.
> >>>>> The term "mantissa" has been used since the early days of computers
> >>>>> to describe part of floating-point number.
> >>>>>
> >>>>> Are you having a bad day?
> >>>> Either you are attempting to argue that the size of the fraction and the
> >>>> size of the exponent are each more important than one another, while
> >>>> simultaneously maintaining that word size has nothing to do with the
> >>>> issue either way, or else you are simply misusing words.
> >>> Are you trying to divert attention from "mantissa"?
> >> We'll try this one more time.
> >>
> >> You argue simultaneously that the "mantissa" is most important and that
> >> the exponent is most important.
> >
> > If you look again, you will see that I didn't say that.
>
> In one and the same posting, you said, "What's important is ... the
> number of mantissa bits," and then followed it up by indicating that the
> 360 did a good thing by increasing the exponent range at the expense of
> fraction bits. You can't have it both ways.

I didn't say that at all.  Your "conclusions" are wrong.
What I did say was that the hardware considerations
related to the choice of hex, which reduced the number
of shifts during post-normalization from 23 to 5, and
for double precision from 55 to 13.
This arrangement (hex) gave a good range of exponent (roughly
10**-78 to 10**75) and a reasonable number of bits for the
mantissa.
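
For what it's worth, here is a minimal Python sketch of the S/360
short (single-precision) hexadecimal layout and the approximate
normalized range it implies; the bit fields are the published ones,
the rest is illustration:

    def decode_s360_short(word):
        # word: 32-bit integer holding an S/360 short hex float
        sign     = (word >> 31) & 0x1
        exponent = (word >> 24) & 0x7F     # 7 bits, excess-64, base 16
        fraction = word & 0xFFFFFF         # 6 hex digits, binary fraction < 1
        magnitude = (fraction / float(1 << 24)) * 16.0 ** (exponent - 64)
        return -magnitude if sign else magnitude

    print(decode_s360_short(0x41100000))   # 1.0
    print(16.0 ** -65)                     # smallest normalized, ~5.4e-79
    print((1 - 16.0 ** -6) * 16.0 ** 63)   # largest, ~7.2e+75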

> >> This means one of two things: you think
> >> "mantissa" means "exponent", or you're contradicting yourself.
> >
> > The only person who doesn't know what "mantissa" means
> > is your self.  Do you still think that it is to do with logarithms?
> > And BTW, I didn't contradict myself.
>
> Actually, I didn't raise the issue of logarithms; Glen did. However, he
> was right; the use of "mantissa" to mean "fraction component of a
> floating-point number", though widespread, is an abuse, like using "k"
> to mean 1024.

YOU raised the issue of the word "mantissa" saying
that I didn't know what it meant, and implying that
I was using it wrongly, which I wasn't.
    Whereas all along it was YOU who didn't know
what "mantissa" meant.

> I have already indicated how you contradicted yourself.

No you didn't because I didn't contradict myself.

> >>>>>>>> In any case, the
> >>>>>>>> S/360 had significantly fewer effective fraction bits (21) in single
> >>>>>>>> precision than the 7094 (27).
> >>>>>>> Leaving only 7 bits for the exponent.  In other words, a reduced
> >>>>>>> range of exponent, which the S/360 corrected.
> >>>>>> Having trouble with subtraction, are we now?
> >>>>> When I last looked, 27 + 1 + 7 + 1 = 36.
> >>>> Are you under the impression that the 704 series had a 35-bit word with
> >>>> a parity bit?
> >>> You said 36 bits earlier.
> >> Yes, I did, because it /did/ have 36. But your breakdown above includes
> >> an extra 1-bit field that cannot be accounted for.
> >
> > I'll leave it for you to work out what the bits might be for.
>
> The actual format of a 704 floating-point number was:
>
> S (it was called S rather than zero): sign
> 1-8: excess-128 exponent
> 9-35: fraction.

Good, I'm glad that you finally worked it out.



0
robin_v (2737)
1/22/2006 2:11:31 AM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:8ojAf.3953$e9.3645@fe12.lga...
>> Tom Linden wrote:
>>> Not sure how far this goes back in time, but i bet it is to the 50's
>>> anyway.
>> At the very least, it is /not/ the case in the IBM
>> 704/709/7040/7044/7090/7094 family, which is the architecture that
>> FORTRAN was designed for, the architecture under discussion in this
>> subthread, and the most important scientific architecture previous to
>> the 360.
> 
> The most important scientific architecture before the 360
> was the Pilot ACE in which pioneering work on floating point and
> theory of rounding errors was developed.

Every FORTRAN program written for the Pilot ACE was successfully 
recompiled for the S/360 on September 8, 1752.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/22/2006 4:53:43 AM
robin wrote:
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:KyjAf.3955$e9.3653@fe12.lga...
>> robin wrote:
>>> From: "John W. Kennedy" <jwkenne@attglobal.net>
>>> Sent: Wednesday, January 18, 2006 2:16 PM
>>>
>>>> robin wrote:
>>>>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>>>> news:ivXyf.40$gh5.16@fe08.lga...
>>>>>> robin wrote:
>>>>>>>>>> The 704 family offered double precision, too; it was not fully
>>>>>>>>>> implemented in hardware, but the hardware assisted it, and the
> FORTRAN
>>>>>>>>>> compiler supported it.
>>>>>>>>> It had to, in order to meet the standard.
>>>>>>>> There was no FORTRAN standard until long afterwards.
>>>>>>> IBM set it.
>>>>>> So your argument is that the 704 hardware had to implement
>>>>>> double-precision floating-point in 1954 in order to support FORTRAN IV,
>>>>>>   which didn't even come out until 1962 (two hardware generations later)?
>>>>> You said it ; I didn't.
>>>> The bloody quotes are right above.
>>> I think that it would be a good idea if you ceased
>>> your ridiculous allegations, which have not been based on anything.
>> You said -- it's quoted right above -- that the FORTRAN compiler for the
>> 704 had to support double precision "in order to meet the standard",
>> which is absurd, because the FORTRAN compiler for the 704 was the first
>> FORTRAN compiler there ever was, and existed long before any standard.
> 
> And I replied that IBM set it.

In other words, you are loudly insisting that the first FORTRAN compiler 
for the 704 was compatible with itself. Some people might find that 
observation a little useless.

>>>>>>>>>>> But for most work, little difference between 36 bits and 32 bits.
>>>>>>>>>>> But that's no measure, anyhow. The appropriate measure is
>>>>>>>>>>> the number of mantissa bits and range of exponent.
>>>>>>>>>> They add up to the word size, one way or the other.
>>>>>>>>> Not relevant; what's important is the breakdown --
>>>>>>>>> and in particular, the number of mantissa bits.
>>>>>>>> In order to make any sense of your argument, I can only assume that you
>>>>>>>> do not know what the words "relevant" and "mantissa" mean. Kindly look
>>>>>>>> them up.
>>>>>>> The term "mantissa" has been used since the early days of computers
>>>>>>> to describe part of floating-point number.
>>>>>>>
>>>>>>> Are you having a bad day?
>>>>>> Either you are attempting to argue that the size of the fraction and the
>>>>>> size of the exponent are each more important than one another, while
>>>>>> simultaneously maintaining that word size has nothing to do with the
>>>>>> issue either way, or else you are simply misusing words.
>>>>> Are you trying to divert attention from "mantissa"?
>>>> We'll try this one more time.
>>>>
>>>> You argue simultaneously that the "mantissa" is most important and that
>>>> the exponent is most important.
>>> If you look again, you will see that I didn't say that.
>> In one and the same posting, you said, "What's important is ... the
>> number of mantissa bits," and then followed it up by indicating that the
>> 360 did a good thing by increasing the exponent range at the expense of
>> fraction bits. You can't have it both ways.
> 
> I didn't say that at all.

Yes, you did, as is plainly visible.

I see no point in arguing further with a pathological liar. I suggest 
you seek professional help.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/22/2006 4:59:22 AM
On Sat, 21 Jan 2006 23:53:43 -0500, "John W. Kennedy"
<jwkenne@attglobal.net> wrote:

>robin wrote:
>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>> news:8ojAf.3953$e9.3645@fe12.lga...
>>> Tom Linden wrote:
>>>> Not sure how far this goes back in time, but i bet it is to the 50's
>>>> anyway.
>>> At the very least, it is /not/ the case in the IBM
>>> 704/709/7040/7044/7090/7094 family, which is the architecture that
>>> FORTRAN was designed for, the architecture under discussion in this
>>> subthread, and the most important scientific architecture previous to
>>> the 360.
>> 
>> The most important scientific architecture before the 360
>> was the Pilot ACE in which pioneering work on floating point and
>> theory of rounding errors was developed.
>
>Every FORTRAN program written for the Pilot ACE was successfully 
>recompiled for the S/360 on September 8, 1752.

1752 ???


-- 
ArarghMail601 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html

To reply by email, remove the garbage from the reply address.
0
1/22/2006 5:45:18 AM
>>Every FORTRAN program written for the Pilot ACE was successfully
>>recompiled for the S/360 on September 8, 1752.
>
> 1752 ???

Yes, the Declaration of Independence was first drafted on punched cards and 
printed on an 800 LPM printer.  Of course King George III wanted to tax the 
KWH used by the computer installations so the great Boston Byte Party took 
place where the Colonists (called Terrorists by KG III) dumped data into the 
bit bucket.

Tom Lake 


0
tlake (477)
1/22/2006 7:27:13 AM
On Sun, 22 Jan 2006 07:27:13 GMT, "Tom Lake" <tlake@twcny.rr.com>
wrote:

>>>Every FORTRAN program written for the Pilot ACE was successfully
>>>recompiled for the S/360 on September 8, 1752.
>>
>> 1752 ???
>
>Yes, the Declaration of Independence was first drafted on punched cards and 
>printed on an 800 LPM printer.  Of course King George III wanted to tax the 
>KWH used by the computer installations so the great Boston Byte Party took 
>place where the Colonists (called Terrorists by KG III) dumped data into the 
>bit bucket.

Didn't you forget a :-) ?  :-)

-- 
ArarghMail601 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html

To reply by email, remove the garbage from the reply address.
0
1/22/2006 8:45:39 AM
<ararghmail601NOSPAM@NOW.AT.arargh.com> wrote in message 
news:pgh6t19n009k27r1q44h9prp2h5d7supvo@4ax.com...
> On Sun, 22 Jan 2006 07:27:13 GMT, "Tom Lake" <tlake@twcny.rr.com>
> wrote:
>
>>>>Every FORTRAN program written for the Pilot ACE was successfully
>>>>recompiled for the S/360 on September 8, 1752.
>>>
>>> 1752 ???
>>
>>Yes, the Declaration of Independence was first drafted on punched cards 
>>and
>>printed on an 800 LPM printer.  Of course King George III wanted to tax 
>>the
>>KWH used by the computer installations so the great Boston Byte Party took
>>place where the Colonists (called Terrorists by KG III) dumped data into 
>>the
>>bit bucket.
>
> Didn't you forget a :-) ?  :-)

Nu, maybe I expected you should believe it!  8^)

Tom Lake 


0
tlake (477)
1/22/2006 10:26:38 AM
Tom Lake wrote:
>>>Every FORTRAN program written for the Pilot ACE was successfully
>>>recompiled for the S/360 on September 8, 1752.
>>
>>1752 ???
> 
> 
> Yes, the Declaration of Independence was first drafted on punched cards and 
> printed on an 800 LPM printer.  Of course King George III wanted to tax the 
> KWH used by the computer installations so the great Boston Byte Party took 
> place where the Colonists (called Terrorists by KG III) dumped data into the 
> bit bucket.
> 
> Tom Lake 
> 
> 
You must be using the Gregorian calendar.


Bob Lidral
lidral  at  alum  dot  mit  dot  edu
0
1/22/2006 2:05:28 PM
ararghmail601NOSPAM@NOW.AT.arargh.com wrote:
> On Sat, 21 Jan 2006 23:53:43 -0500, "John W. Kennedy"
> <jwkenne@attglobal.net> wrote:
> 
>> robin wrote:
>>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>> news:8ojAf.3953$e9.3645@fe12.lga...
>>>> Tom Linden wrote:
>>>>> Not sure how far this goes back in time, but i bet it is to the 50's
>>>>> anyway.
>>>> At the very least, it is /not/ the case in the IBM
>>>> 704/709/7040/7044/7090/7094 family, which is the architecture that
>>>> FORTRAN was designed for, the architecture under discussion in this
>>>> subthread, and the most important scientific architecture previous to
>>>> the 360.
>>> The most important scientific architecture before the 360
>>> was the Pilot ACE in which pioneering work on floating point and
>>> theory of rounding errors was developed.
>> Every FORTRAN program written for the Pilot ACE was successfully 
>> recompiled for the S/360 on September 8, 1752.
> 
> 1752 ???

Precisely. It is an especially memorable date, as it was also on that 
same day that the first ironclad proof was found that the Apollo 
landings were faked.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/22/2006 5:37:49 PM
Bob Lidral wrote:
> Tom Lake wrote:
>>>> Every FORTRAN program written for the Pilot ACE was successfully
>>>> recompiled for the S/360 on September 8, 1752.
>>>
>>> 1752 ???
>>
>>
>> Yes, the Declaration of Independence was first drafted on punched 
>> cards and printed on an 800 LPM printer.  Of course King George III 
>> wanted to tax the KWH used by the computer installations so the great 
>> Boston Byte Party took place where the Colonists (called Terrorists by 
>> KG III) dumped data into the bit bucket.
>>
>> Tom Lake
>>
> You must be using the Gregorian calendar.

Only Papists use the Gregorian calendar. George III used the New Style 
calendar, which has the inestimable value of being invented by Protestants.

-- 
John W. Kennedy
"But now is a new thing which is very old--
that the rich make themselves richer and not poorer,
which is the true Gospel, for the poor's sake."
   -- Charles Williams.  "Judgement at Chelmsford"
0
jwkenne (1442)
1/22/2006 5:39:26 PM
> Only Papists use the Gregorian calendar.

Mea Culpa. Seig Heil,  Benedict XVI !!!

George III used the New Style
> calendar, which has the inestimable value of being invented by 
> Protestants.

An anti-Catholic Kennedy?  How novel!  8^)

Tom Lake



0
tlake (477)
1/22/2006 6:07:39 PM
John W. Kennedy wrote:
> Bob Lidral wrote:
> 
>> Tom Lake wrote:
>>
>>>>> Every FORTRAN program written for the Pilot ACE was successfully
>>>>> recompiled for the S/360 on September 8, 1752.
>>>>
>>>>
>>>> 1752 ???
>>>
>>>
>>>
>>> Yes, the Declaration of Independence was first drafted on punched 
>>> cards and printed on an 800 LPM printer.  Of course King George III 
>>> wanted to tax the KWH used by the computer installations so the great 
>>> Boston Byte Party took place where the Colonists (called Terrorists 
>>> by KG III) dumped data into the bit bucket.
>>>
>>> Tom Lake
>>>
>> You must be using the Gregorian calendar.
> 
> 
> Only Papists use the Gregorian calendar. George III used the New Style 
> calendar, which has the inestimable value of being invented by Protestants.
> 
And which has the further virtue of having no September 8, 1752. :-)
0
1/23/2006 6:47:44 AM
John W. Kennedy wrote in message ...
>robin wrote:
>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>> news:8ojAf.3953$e9.3645@fe12.lga...
>>> Tom Linden wrote:
>>>> Not sure how far this goes back in time, but i bet it is to the 50's
>>>> anyway.
>>> At the very least, it is /not/ the case in the IBM
>>> 704/709/7040/7044/7090/7094 family, which is the architecture that
>>> FORTRAN was designed for, the architecture under discussion in this
>>> subthread, and the most important scientific architecture previous to
>>> the 360.
>>
>> The most important scientific architecture before the 360
>> was the Pilot ACE in which pioneering work on floating point and
>> theory of rounding errors was developed.
>
>Every FORTRAN program written for the Pilot ACE was successfully
>recompiled for the S/360 on September 8, 1752.

The Pilot ACE and its design go back to 1946,
and thus precede the 360 and, of course, the 704 family.




0
robin_v (2737)
1/24/2006 2:25:07 PM
John W. Kennedy wrote in message ...
>robin wrote:
>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>> news:KyjAf.3955$e9.3653@fe12.lga...
>>> robin wrote:
>>>> From: "John W. Kennedy" <jwkenne@attglobal.net>
>>>> Sent: Wednesday, January 18, 2006 2:16 PM
>>>>
>>>>> robin wrote:
>>>>>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>>>>> news:ivXyf.40$gh5.16@fe08.lga...
>>>>>>> robin wrote:
>>>>>>>>>>> The 704 family offered double precision, too; it was not fully
>>>>>>>>>>> implemented in hardware, but the hardware assisted it, and the
>> FORTRAN
>>>>>>>>>>> compiler supported it.
>>>>>>>>>> It had to, in order to meet the standard.
>>>>>>>>> There was no FORTRAN standard until long afterwards.
>>>>>>>> IBM set it.
>>>>>>> So your argument is that the 704 hardware had to implement
>>>>>>> double-precision floating-point in 1954 in order to support FORTRAN IV,
>>>>>>>   which didn't even come out until 1962 (two hardware generations later)?
>>>>>> You said it ; I didn't.
>>>>> The bloody quotes are right above.
>>>> I think that it would be a good idea if you ceased
>>>> your ridiculous allegations, which have not been based on anything.
>>> You said -- it's quoted right above -- that the FORTRAN compiler for the
>>> 704 had to support double precision "in order to meet the standard",
>>> which is absurd, because the FORTRAN compiler for the 704 was the first
>>> FORTRAN compiler there ever was, and existed long before any standard.
>>
>> And I replied that IBM set it.
>
>In other words, you are loudly insisting that the first FORTRAN compiler
>for the 704 was compatible with itself. Some people might find that
>observation a little useless.
>
>>>>>>>>>>>> But for most work, little difference between 36 bits and 32 bits.
>>>>>>>>>>>> But that's no measure, anyhow. The appropriate measure is
>>>>>>>>>>>> the number of mantissa bits and range of exponent.
>>>>>>>>>>> They add up to the word size, one way or the other.
>>>>>>>>>> Not relevant; what's important is the breakdown --
>>>>>>>>>> and in particular, the number of mantissa bits.
>>>>>>>>> In order to make any sense of your argument, I can only assume that you
>>>>>>>>> do not know what the words "relevant" and "mantissa" mean. Kindly look
>>>>>>>>> them up.
>>>>>>>> The term "mantissa" has been used since the early days of computers
>>>>>>>> to describe part of floating-point number.
>>>>>>>>
>>>>>>>> Are you having a bad day?
>>>>>>> Either you are attempting to argue that the size of the fraction and the
>>>>>>> size of the exponent are each more important than one another, while
>>>>>>> simultaneously maintaining that word size has nothing to do with the
>>>>>>> issue either way, or else you are simply misusing words.
>>>>>> Are you trying to divert attention from "mantissa"?
>>>>> We'll try this one more time.
>>>>>
>>>>> You argue simultaneously that the "mantissa" is most important and that
>>>>> the exponent is most important.
>>>> If you look again, you will see that I didn't say that.
>>> In one and the same posting, you said, "What's important is ... the
>>> number of mantissa bits," and then followed it up by indicating that the
>>> 360 did a good thing by increasing the exponent range at the expense of
>>> fraction bits. You can't have it both ways.
>>
>> I didn't say that at all.
>
>Yes, you did, as is plainly visible.
>
>I see no point in arguing further with a pathological liar. I suggest
>you seek professional help.

The only one who needs help is yourself, who has clear difficulty
in comprehending posts.

BTW, I do not lie.  It is you who is going out of your way
to put interpretations, inferences, and conclusions on my posts
that simply aren't there.






0
robin_v (2737)
1/24/2006 2:25:08 PM
R. Vowels wrote in message <7NqBf.225893$V7.72835@news-server.bigpond.net.au>...
>John W. Kennedy wrote in message ...
>>robin wrote:
>>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>> news:8ojAf.3953$e9.3645@fe12.lga...
>>>> Tom Linden wrote:
>>>>> Not sure how far this goes back in time, but i bet it is to the 50's
>>>>> anyway.
>>>> At the very least, it is /not/ the case in the IBM
>>>> 704/709/7040/7044/7090/7094 family, which is the architecture that
>>>> FORTRAN was designed for, the architecture under discussion in this
>>>> subthread, and the most important scientific architecture previous to
>>>> the 360.
>>>
>>> The most important scientific architecture before the 360
>>> was the Pilot ACE in which pioneering work on floating point and
>>> theory of rounding errors was developed.
>>
>>Every FORTRAN program written for the Pilot ACE was successfully
>>recompiled for the S/360 on September 8, 1752.
>
>The Pilot ACE and its design goes back to 1946,
>and thus precedes the 360, and of course, the 704 family.

The word size of the Pilot ACE was 32 bits.




0
robin_v (2737)
2/2/2006 11:36:41 PM
>robin wrote in message <8Xsyf.217843$V7.146535@news-server.bigpond.net.au>...
>>"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
>>news:YCXxf.66$Fd6.27@fe08.lga...
>> robin wrote:

>> There were many problems with S/360 floating point in the early days;
>> the literature was awash with the subject.
>
>Such as?

Can you name just one?


0
robin_v (2737)
2/3/2006 3:05:02 PM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message
news:KyjAf.3955$e9.3653@fe12.lga...
> robin wrote:
> > From: "John W. Kennedy" <jwkenne@attglobal.net>
> > Sent: Wednesday, January 18, 2006 2:16 PM
> >
> >> robin wrote:
> >>> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> >>> news:ivXyf.40$gh5.16@fe08.lga...
> >>>> robin wrote:
> >>>>>>>> The 704 family offered double precision, too; it was not fully
> >>>>>>>> implemented in hardware, but the hardware assisted it, and the
FORTRAN
> >>>>>>>> compiler supported it.
> >>>>>>> It had to, in order to meet the standard.
> >>>>>> There was no FORTRAN standard until long afterwards.
> >>>>> IBM set it.
> >>>> So your argument is that the 704 hardware had to implement
> >>>> double-precision floating-point in 1954 in order to support FORTRAN IV,
> >>>>   which didn't even come out until 1962 (two hardware generations later)?
> >>> You said it ; I didn't.
> >> The bloody quotes are right above.
> >
> > I think that it would be a good idea if you ceased
> > your ridiculous allegations, which have not been based on anything.
>
> You said -- it's quoted right above -- that the FORTRAN compiler for the
> 704 had to support double precision "in order to meet the standard",
> which is absurd, because the FORTRAN compiler for the 704 was the first
> FORTRAN compiler there ever was, and existed long before any standard.

And I replied that IBM set it.

> >>>>>>>>> But for most work, little difference between 36 bits and 32 bits.
> >>>>>>>>> But that's no measure, anyhow. The appropriate measure is
> >>>>>>>>> the number of mantissa bits and range of exponent.
> >>>>>>>> They add up to the word size, one way or the other.
> >>>>>>> Not relevant; what's important is the breakdown --
> >>>>>>> and in particular, the number of mantissa bits.
> >>>>>> In order to make any sense of your argument, I can only assume that you
> >>>>>> do not know what the words "relevant" and "mantissa" mean. Kindly look
> >>>>>> them up.
> >>>>> The term "mantissa" has been used since the early days of computers
> >>>>> to describe part of floating-point number.
> >>>>>
> >>>>> Are you having a bad day?
> >>>> Either you are attempting to argue that the size of the fraction and the
> >>>> size of the exponent are each more important than one another, while
> >>>> simultaneously maintaining that word size has nothing to do with the
> >>>> issue either way, or else you are simply misusing words.
> >>> Are you trying to divert attention from "mantissa"?
> >> We'll try this one more time.
> >>
> >> You argue simultaneously that the "mantissa" is most important and that
> >> the exponent is most important.
> >
> > If you look again, you will see that I didn't say that.
>
> In one and the same posting, you said, "What's important is ... the
> number of mantissa bits," and then followed it up by indicating that the
> 360 did a good thing by increasing the exponent range at the expense of
> fraction bits. You can't have it both ways.

Again, I didn't say that at all.  What I did say was that
there were design considerations: in particular, the choice of hex
reduced the number of shifts required for post-normalization
from 23 to 5 for single precision, and correspondingly
for double precision (from 55 to 13).  This gave a good range of exponent,
consistent with a reasonable number of mantissa bits.
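
For concreteness, those figures can be checked against the usual
description of the S/360 single-precision format (1 sign bit, 7-bit
excess-64 exponent of 16, 24-bit fraction); a short Python sketch,
for illustration only:

    smallest = 16.0 ** -65                   # minimum normalized: (1/16) * 16**-64
    largest = (1 - 16.0 ** -6) * 16.0 ** 63  # maximum: fraction just under 1.0
    print(smallest)                          # about 5.4e-79, the "10**-78" end
    print(largest)                           # about 7.2e+75, the "10**75" end

    # A normalized leading hex digit may carry up to three leading zero bits,
    # so the 24-bit fraction guarantees only about 21 significant bits --
    # the "effective fraction bits (21)" figure mentioned earlier in the thread.
    print(24 - 3)                            # 21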

I then invited you to state how you would have done it better,
and of course there was no response at all.

> >> This means one of two things: you think
> >> "mantissa" means "exponent", or you're contradicting yourself.
> >
> > The only person who doesn't know what "mantissa" means
> > is your self.  Do you still think that it is to do with logarithms?
> > And BTW, I didn't contradict myself.
>
> Actually, I didn't raise the issue of logarithms;

YOU raised the question of "mantissa", saying that I didn't
know what the word means, and implying that the word was
used incorrectly, which it wasn't.  And isn't.
Whereas in fact, all along it's YOU who doesn't know what
the word means.

> Glen did. However, he
> was right; the use of "mantissa" to mean "fraction component of a
> floating-point number", though widespread, is an abuse, like using "k"
> to mean 1024.
>
> I have already indicated how you contradicted yourself.

No you haven't, because there is no contradiction.

> >>>>>>>> In any case, the
> >>>>>>>> S/360 had significantly fewer effective fraction bits (21) in single
> >>>>>>>> precision than the 7094 (27).
> >>>>>>> Leaving only 7 bits for the exponent.  In other words, a reduced
> >>>>>>> range of exponent, which the S/360 corrected.
> >>>>>> Having trouble with subtraction, are we now?
> >>>>> When I last looked, 27 + 1 + 7 + 1 = 36.
> >>>> Are you under the impression that the 704 series had a 35-bit word with
> >>>> a parity bit?
> >>> You said 36 bits earlier.
> >> Yes, I did, because it /did/ have 36. But your breakdown above includes
> >> an extra 1-bit field that cannot be accounted for.
> >
> > I'll leave it for you to work out what the bits might be for.
>
> The actual format of a 704 floating-point number was:
>
> S (it was called S rather than zero): sign
> 1-8: excess-128 exponent
> 9-35: fraction.

Good; you finally worked it out.


0
robin_v (2737)
4/8/2006 3:34:11 AM
robin wrote:

(snip)

> Again I didn't say that at all.  What I did say was that
> there were design considerations, in particular the choice of hex
> reduced the number of shifts required for post normalization
> from 23 to 5 for single precision, and correspondingly
> for double precision (55 and 13).  This gave a good range of exponent,
> consistent with a reasonable number of mantissa bits.

I believe this is true for some cases.  I don't know that S/360
satisfied those cases.

(snip)

> YOU raised the question of "mantissa", saying that I didn't
> know what the word means, and implying that the word was
> used incorrectly, which it wasn't.  And isn't.
> Whereas in fact, all along it's YOU who doesn't know what
> the word means.

It is wrong.  The fact that it is done fairly often doesn't
change the fact that it is wrong.  IBM consistently uses 'fraction'
instead of 'mantissa' in their documentation.  (If used wrong often 
enough it will eventually be adopted.  As far as I know, that hasn't 
happened yet.)

-- glen

0
gah (12851)
4/8/2006 5:16:04 AM
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:I62dnRS6YruW2qrZRVn-hg@comcast.com...
> robin wrote:
>
> (snip)
>
> > Again I didn't say that at all.  What I did say was that
> > there were design considerations, in particular the choice of hex
> > reduced the number of shifts required for post normalization
> > from 23 to 5 for single precision, and correspondingly
> > for double precision (55 and 13).  This gave a good range of exponent,
> > consistent with a reasonable number of mantissa bits.
>
> I believe this is true for some cases.  I don't know that S/360
> satisfied those cases.

We are referring to the S/360, and my comments have been
about that machine.

> > YOU raised the question of "mantissa", saying that I didn't
> > know what the word means, and implying that the word was
> > used incorrectly, which it wasn't.  And isn't.
> > Whereas in fact, all along it's YOU who doesn't know what
> > the word means.
>
> It is wrong.  The fact that it is done fairly often doesn't
> change the fact that it is wrong.  IBM consistently uses 'fraction'
> instead of 'mantissa' in their documentation.  (If used wrong often
> enough it will eventually be adopted.  As far as I know, that hasn't
> happened yet.)

The word "mantissa" has been used, and correctly, since the
early days of computing, and you will find it in many texts
describing the S/360 and /370.  I listed some in this newsgroup.


0
robin_v (2737)
4/10/2006 1:05:09 AM
On Sun, 09 Apr 2006 18:05:09 -0700, robin <robin_v@bigpond.com> wrote:

> The word "mantissa" has been used, and correctly, since the
> early days of computing, and you will find it in many texts
> describing the S/360 and /370.  I listed some in this newsgroup.

When I went to school, the mantissa was the fractional part of the
logarithm.  It has been widely misused to mean the fractional part of
a normalized floating-point representation of a number; googling, I even
found one definition giving it as the fractional part of a real number.
Now let's end it there, please.
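
The two senses being argued over can be put side by side; a short Python
sketch, for illustration only:

    import math

    x = 250.0

    # Original sense: the fractional part of a common logarithm.
    characteristic, mantissa = divmod(math.log10(x), 1.0)
    print(characteristic, mantissa)     # 2.0  0.39794...

    # Looser, widespread sense: the fraction (significand) of a
    # floating-point representation, here x == fraction * 2**exponent.
    fraction, exponent = math.frexp(x)
    print(fraction, exponent)           # 0.9765625  8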
0
tom284 (1839)
4/10/2006 1:34:38 PM