


0**0

Hello,

I didn't find in the standard, what should be the interpretation
of 0**0, 0d0**0, etc. ?

I ask because I just noticed with Absoft compiler (ver. 11.5.2), that
0d0**0 == 1d0, whereas 0q0**0 == 0q0 (q for quad precision numbers).
Thus, porting some double precision code to quad precision gave me
unexpected results.

Jean-Claude Arbaut
Jean
10/6/2012 9:26:26 AM
comp.lang.fortran


In article <506ff947$0$18052$ba4acef3@reader.news.orange.fr>,
Jean-Claude Arbaut <jeanclaudearbaut@orange.fr> writes: 

> I didn't find in the standard, what should be the interpretation
> of 0**0, 0d0**0, etc. ?
> 
> I ask because I just noticed with Absoft compiler (ver. 11.5.2), that
> 0d0**0 == 1d0, whereas 0q0**0 == 0q0 (q for quad precision numbers).
> Thus, porting some double precision code to quad precision gave me
> unexpected results.

My guess is that you shouldn't be relying on this at all.  Both answers 
are probably "correct" in the sense that the compiler can do anything if 
the code is invalid, and these operations are probably not allowed, so 
the question of the "correct" value is moot.

helbig
10/6/2012 10:43:46 AM
Hi,

> > I didn't find in the standard, what should be the interpretation 
> > of 0**0, 0d0**0, etc. ?
> >
> > I ask because I just noticed with Absoft compiler (ver. 11.5.2), that
> > 0d0**0 == 1d0, whereas 0q0**0 == 0q0 (q for quad precision numbers).
> > Thus, porting some double precision code to quad precision gave me
> > unexpected results.
> 
> My guess is that you shouldn't be relying on this at all.  Both answers 
> are probably "correct" in the sense that the compiler can do anything if 
> the code is invalid, and these operations are probably not allowed, so 
> the question of the "correct" value is moot.

how is the code invalid?

I think the common mathematical definition is such that 0**0 equals 1 (which is also what I get from gfortran for all the variants given above).

Btw, I don't think the Fortran standard contains a complete collection of mathematical axioms and definitions. It is supposed to define Fortran as a language, and not all of mathematics.

Cheers,
Janus
Janus
10/6/2012 1:14:17 PM
On 2012-10-06, Janus Weil <janus@gcc.gnu.org> wrote:

> I think the common mathematical definition is such that 0**0
> equals 1 (which is also what I get from gfortran for all the
> variants given above).

It is generally regarded as undefined, because there are different
limiting cases for

lim[x->0] x**0 and lim[x->0] 0**x.
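The disagreement between the two limits is easy to check numerically; a quick sketch in Python (rather than Fortran, purely for illustration):

```python
# x**0 is 1 for every nonzero x, while 0**x is 0 for every positive x,
# so the two limits as x -> 0+ disagree: that is why 0**0 is often
# regarded as undefined.
for x in [0.1, 0.01, 0.001]:
    print(x ** 0, 0.0 ** x)   # always prints: 1.0 0.0

# Python itself, like many languages, adopts the convention 0**0 == 1:
print(0.0 ** 0.0)             # prints: 1.0
```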
Thomas
10/6/2012 1:54:20 PM
Jean-Claude Arbaut <jeanclaudearbaut@orange.fr> wrote:
> Hello,

> I didn't find in the standard, what should be the interpretation
> of 0**0, 0d0**0, etc. ?

> I ask because I just noticed with Absoft compiler (ver. 11.5.2), that
> 0d0**0 == 1d0, whereas 0q0**0 == 0q0 (q for quad precision numbers).
> Thus, porting some double precision code to quad precision gave me
> unexpected results.

Mathematically, both are right. 

The usual implementation of REAL or COMPLEX powers, 
including 0.0 and (0.0,0.0), is through LOG and EXP,
and will fail for 0.0**0.0 or return NaN.

The usual implementation of integer powers involves
multiplication and squaring, and for negative powers
a final reciprocal. I believe that can give either 0 or 1.

-- glen
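The multiply-and-square scheme glen describes can be sketched as follows, a hypothetical illustration in Python (chosen for brevity; this is not any vendor's actual library code) of how two reasonable variants of the same routine can disagree on 0**0:

```python
def ipow(x, n, zero_base_shortcut=False):
    """Integer power x**n (n >= 0) by square-and-multiply.

    zero_base_shortcut=True models a hypothetical routine that returns 0
    for any zero base without first checking whether n == 0.
    """
    if zero_base_shortcut and x == 0.0:
        return 0.0                 # shortcut fires before n == 0 is seen
    result, base = 1.0, x
    while n > 0:
        if n & 1:                  # multiply in this bit of the exponent
            result *= base
        base *= base               # square for the next bit
        n >>= 1
    return result                  # accumulator started at 1, so 0**0 -> 1

print(ipow(0.0, 0))                           # prints: 1.0
print(ipow(0.0, 0, zero_base_shortcut=True))  # prints: 0.0
```

Either variant is defensible once the standard leaves the case open, which is consistent with the point that the result can come out as either 0 or 1.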
glen
10/6/2012 3:09:14 PM
First, thanks to all who answered.

Actually, I know 0**0 is mathematically undefined, but usually,
in programming languages, there is a convention to set it to either
0 or 1. The latter seems to be the most usual, and it's nice when
computing values of a polynomial with powers (which is what I did).
I thought there were such conventions in Fortran, but if there is
none, I guess I should not count on a value, even if a given compiler
gives one.

At least, it may be a compiler bug to get this inconsistency between
double and quad precision - even though the notation 0q0 is
undefined in fortran, it's perfectly admissible to get access to quad
precision through selected_real_kind(30), the "q" is only syntactic
sugar. It would seem natural to get the same value with all available
real precisions.

Anyway, it was not essential to my program, I just wanted to know
whether it was in Fortran standard.

Jean-Claude Arbaut

On 06/10/2012 17:09, glen herrmannsfeldt wrote:
> Jean-Claude Arbaut <jeanclaudearbaut@orange.fr> wrote:
>> Hello,
>
>> I didn't find in the standard, what should be the interpretation
>> of 0**0, 0d0**0, etc. ?
>
>> I ask because I just noticed with Absoft compiler (ver. 11.5.2), that
>> 0d0**0 == 1d0, whereas 0q0**0 == 0q0 (q for quad precision numbers).
>> Thus, porting some double precision code to quad precision gave me
>> unexpected results.
>
> Mathematically, both are right.
>
> The usual implementation of REAL or COMPLEX powers,
> including 0.0 and (0.0,0.0), is through LOG and EXP,
> and will fail for 0.0**0.0 or return NaN.
>
> The usual implementation of integer powers involves
> multiplication and squaring, and for negative powers
> a final reciprocal. I believe that can give either 0 or 1.
>
> -- glen
>

Jean
10/6/2012 3:35:14 PM
On 10/6/12 4:26 AM, Jean-Claude Arbaut wrote:
> Hello,
>
> I didn't find in the standard, what should be the interpretation
> of 0**0, 0d0**0, etc. ?
>
> I ask because I just noticed with Absoft compiler (ver. 11.5.2), that
> 0d0**0 == 1d0, whereas 0q0**0 == 0q0 (q for quad precision numbers).
> Thus, porting some double precision code to quad precision gave me
> unexpected results.
>
> Jean-Claude Arbaut
It's processor dependent.  The section on Evaluation of Operations says

"The execution of any numeric operation whose result is not defined by 
the arithmetic used by the processor is prohibited."

You're stuck with reading the processor documentation.  Trying to 
evaluate 0.0**0 won't give you any guaranteed useful information.  If 
the processor supports that expression, then it will give you an answer. 
If the processor doesn't support that expression, then it can still 
give you an answer (or do anything else) because your program isn't 
standard conforming.  Not a great situation.

Dick Hendrickson

Dick
10/6/2012 5:04:09 PM
Janus Weil <janus@gcc.gnu.org> wrote:

> how is the code invalid?
> 
> I think the common mathematical definition is such that 0**0 equals 1
> (which is also what I get from gfortran for all the variants given
> above).
> 
> Btw, I don't think the Fortran standard contains a complete
> collection of mathematical axioms and definitions. It is supposed to
> define Fortran as a language, and not all of mathematics.

There are multiple "common mathematical definitions", which is exactly
the problem. I actually wish the Fortran standard were a little more
explicit about a few points of the math in some areas. I occasionally
(well, quite a bit more than just occasionally) run into people who don't
realize that the particular rules they learned in their elementary
school aren't actually universal ones; rounding is one area I have seen
that, with some people being convinced that a rule they learned is the
one and only correct way to do rounding (and it turns out to be a rule
with poor numerical properties).

The Fortran standard did actually used to mention the specific case of
0**0, though I can't find it in a quick skim of f2003. I think that rule
might have disappeared when IEEE stuff came into the language. But it
at least used to be there. I did just find it in the f77 standard:

  "Any arithmetic operation whose result is not mathematically defined
   is prohibited in the execution of an executable program. Examples
   are dividing by zero and raising a zero-valued primary to a
   zero-valued or negative-valued power."

I suppose this does show an example in the standard of the kind of
narrow view of mathematics that I referred to above. That bit in the f77
standard reads as though the writer thought that there was one and only
one mathematical system, and it was one in which 0**0 was not defined.
Dick quoted a later version of the same prohibition, which acknowledges
that there are multiple mathematical systems and leaves it up to the
processor which system it is using, also deleting the implication that,
of course, mathematics could not include things like dividing by zero.

There still remain some places in the standard where what I might call
the elementary-school version of mathematics shows through. One of those
is the "mathematical equivalence" rule for evaluating expressions. There
have been plenty of debates on the fine points of what counts as
"mathematical equivalence", but I note that the use of that term and its
contrast to "computational equivalence" implies that someone didn't think
that what computers did with floating point counted as mathematics. I
beg to differ.

-- 
Richard Maine                    | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle           |  -- Mark Twain
nospam
10/6/2012 7:22:50 PM
Jean-Claude Arbaut <jeanclaudearbaut@orange.fr> wrote:

> At least, it may be a compiler bug to get this inconsistency between
> double and quad precision - even though the notation 0q0 is
> undefined in fortran, it's perfectly admissible to get access to quad
> precision through selected_real_kind(30), the "q" is only syntactic
> sugar. It would seem natural to get the same value with all available
> real precisions.

I don't think you can call it a bug. At least it isn't a bug in the
sense of violating the standard. I'll buy that one might consider it a
shortcoming in quality of implementation.

There are, of course, plenty of expressions where one will get
dramatically different results depending on the precision. And by
dramatically different, I mean up to and including the difference
between giving any result at all versus aborting. Try dividing by
something that underflows to zero with one precision, for example.

-- 
Richard Maine                    | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle           |  -- Mark Twain
nospam
10/6/2012 7:30:30 PM
On 2012-10-06 12:35:14 -0300, Jean-Claude Arbaut said:

> First, thanks to all who answered.
> 
> Actually, I know 0**0 is mathematically undefined, but usually,
> in programming languages, there is a convention to set it to either
> 0 or 1. The latter seems to be the most usual, and it's nice when
> computing values of a polynomial with powers (which is what I did).
> I thought there were such conventions in Fortran, but if there is
> none, I guess I should not count on a value, even if a given compiler
> gives one.
> 
> At least, it may be a compiler bug to get this inconsistency between
> double and quad precision - even though the notation 0q0 is
> undefined in fortran, it's perfectly admissible to get access to quad
> precision through selected_real_kind(30), the "q" is only syntactic
> sugar. It would seem natural to get the same value with all available
> real precisions.
> 
> Anyway, it was not essential to my program, I just wanted to know
> whether it was in Fortran standard.
> 
> Jean-Claude Arbaut

What did the vendor say? They might think that the two answers should be the
same even if the standard allows anything. This is usually called a quality of
implementation issue when there is a useful answer for many situations even if
the standard does not supply one for all (or even any) situations.

I have had mixed experiences with Absoft. They have different behaviours of
some timers between the 32 and 64 bit versions. They blame it on maintaining
a silly error in the C runtime that they use for much of their Fortran.
Such an answer may have made them happy but it did not make me happy!

> On 06/10/2012 17:09, glen herrmannsfeldt wrote:
>> Jean-Claude Arbaut <jeanclaudearbaut@orange.fr> wrote:
>>> Hello,
>> 
>>> I didn't find in the standard, what should be the interpretation
>>> of 0**0, 0d0**0, etc. ?
>> 
>>> I ask because I just noticed with Absoft compiler (ver. 11.5.2), that
>>> 0d0**0 == 1d0, whereas 0q0**0 == 0q0 (q for quad precision numbers).
>>> Thus, porting some double precision code to quad precision gave me
>>> unexpected results.
>> 
>> Mathematically, both are right.
>> 
>> The usual implementation of REAL or COMPLEX powers,
>> including 0.0 and (0.0,0.0), is through LOG and EXP,
>> and will fail for 0.0**0.0 or return NaN.
>> 
>> The usual implementation of integer powers involves
>> multiplication and squaring, and for negative powers
>> a final reciprocal. I believe that can give either 0 or 1.
>> 
>> -- glen


Gordon
10/6/2012 7:48:58 PM
On 06-10-12 21:48, Gordon Sande wrote:
> On 2012-10-06 12:35:14 -0300, Jean-Claude Arbaut said:
>
>> First, thanks to all who answered.
>>
>> Actually, I know 0**0 is mathematically undefined, but usually,
>> in programming languages, there is a convention to set it to either
>> 0 or 1. The latter seems to be the most usual, and it's nice when
>> computing values of a polynomial with powers (which is what I did).
>> I thought there were such conventions in Fortran, but if there is
>> none, I guess I should not count on a value, even if a given compiler
>> gives one.
>>
>> At least, it may be a compiler bug to get this inconsistency between
>> double and quad precision - even though the notation 0q0 is
>> undefined in fortran, it's perfectly admissible to get access to quad
>> precision through selected_real_kind(30), the "q" is only syntactic
>> sugar. It would seem natural to get the same value with all available
>> real precisions.
>>
>> Anyway, it was not essential to my program, I just wanted to know
>> whether it was in Fortran standard.
>>
>> Jean-Claude Arbaut
>
> What did the vendor say? They might think that the two answers should be
> the same even if the standard allows anything. This is usually called a
> quality of implementation issue when there is a useful answer for many
> situations even if the standard does not supply one for all (or even any)
> situations.
>
> I have had mixed experiences with Absoft. They have different behaviours of
> some timers between the 32 and 64 bit versions. They blame it on maintaining
> a silly error in the C runtime that they use for much of their Fortran.
> Such an answer may have made them happy but it did not make me happy!
>
>> On 06/10/2012 17:09, glen herrmannsfeldt wrote:
>>> Jean-Claude Arbaut <jeanclaudearbaut@orange.fr> wrote:
>>>> Hello,
>>>
>>>> I didn't find in the standard, what should be the interpretation
>>>> of 0**0, 0d0**0, etc. ?
>>>
>>>> I ask because I just noticed with Absoft compiler (ver. 11.5.2), that
>>>> 0d0**0 == 1d0, whereas 0q0**0 == 0q0 (q for quad precision numbers).
>>>> Thus, porting some double precision code to quad precision gave me
>>>> unexpected results.
>>>
>>> Mathematically, both are right.
>>>
>>> The usual implementation of REAL or COMPLEX powers,
>>> including 0.0 and (0.0,0.0), is through LOG and EXP,
>>> and will fail for 0.0**0.0 or return NaN.
>>>
>>> The usual implementation of integer powers involves
>>> multiplication and squaring, and for negative powers
>>> a final reciprocal. I believe that can give either 0 or 1.
>>>
>>> -- glen
All,


As there is no mathematically unique answer, neither the standard nor any
vendor should force a unique answer.

Only the limits that are sometimes mentioned can determine whether or
not there is an answer at all.


Kind regards,


Jan Gerrit Kootstra

Jan
10/6/2012 8:17:52 PM
On 10/6/2012 3:17 PM, Jan Gerrit Kootstra wrote:
....

> As there is no mathematically unique answer, neither the standard nor any
> vendor should force a unique answer.
....

A vendor _has_ to provide a solution for an implementation; that it 
should at least be self-consistent is all OP asked for...

--
dpb
10/6/2012 8:33:11 PM
In article <k4q4i3$rt2$1@speranza.aioe.org>, dpb <none@non.net> 
wrote:

> On 10/6/2012 3:17 PM, Jan Gerrit Kootstra wrote:
> ...
> 
> > As there is no mathematically unique answer, neither the standard nor any
> > vendor should force a unique answer.
> ...
> 
> A vendor _has_ to provide a solution for an implementation; that it 
> should at least be self-consistent is all OP asked for...

There are some situations, including this one, where the best 
solution might not be to return an answer at all, but rather abort 
with an error message.

As far as evaluating polynomials, doing so with exponentiation is 
almost never the best way in any language, including fortran. In the 
general case there is Horner's rule to consider, and in other 
specific cases there are often recurrence relations that can be 
used.  In addition to being more efficient, these approaches often 
can avoid problems such as subtracting quantities with nearly equal 
values, or trying to add large and small values, that occur with the 
exponentiation expressions.  I don't know if these considerations 
apply to the underlying code being discussed in this case, but it is 
not unlikely.
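As an illustration of the Horner's rule point, a minimal sketch in Python (not the OP's actual code, which hasn't been posted):

```python
def horner(coeffs, x):
    """Evaluate c[0] + c[1]*x + ... + c[n]*x**n without exponentiation,
    so the question of 0**0 never arises."""
    result = 0.0
    for c in reversed(coeffs):   # highest-order coefficient first
        result = result * x + c
    return result

# At x == 0 the value is just the constant term; no 0**0 is ever formed.
print(horner([3.0, 2.0, 1.0], 0.0))   # prints: 3.0
print(horner([3.0, 2.0, 1.0], 2.0))   # 1.0*4 + 2.0*2 + 3.0 -> 11.0
```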

In any case, as a practical matter I would suggest changing the code 
at least to avoid the x**0 expression evaluation, not only for 
portability reasons, but also for efficiency reasons. I can imagine 
what the code probably looks like from the discussion so far, but we 
can't be certain. If the OP posts the code, then we can make more 
concrete suggestions.

$.02 -Ron Shepard
Ron
10/6/2012 9:32:21 PM
On 10/6/2012 4:32 PM, Ron Shepard wrote:
> In article<k4q4i3$rt2$1@speranza.aioe.org>, dpb<none@non.net>
> wrote:
>
>> On 10/6/2012 3:17 PM, Jan Gerrit Kootstra wrote:
>> ...
>>
>>> As there is no mathematically unique answer, neither the standard nor any
>>> vendor should force a unique answer.
>> ...
>>
>> A vendor _has_ to provide a solution for an implementation; that it
>> should at least be self-consistent is all OP asked for...
>
> There are some situations, including this one, where the best
> solution might not be to return an answer at all, but rather abort
> with an error message.
....

"Might" is an operative word here... :)

And, if that were the chosen path I'd still expect consistency in an 
implementation.

--
dpb
10/6/2012 9:51:39 PM
On 10/6/2012 4:26 AM, Jean-Claude Arbaut wrote:
> Hello,
>
> I didn't find in the standard, what should be the interpretation
> of 0**0, 0d0**0, etc. ?
>
> I ask because I just noticed with Absoft compiler (ver. 11.5.2), that
> 0d0**0 == 1d0, whereas 0q0**0 == 0q0 (q for quad precision numbers).
> Thus, porting some double precision code to quad precision gave me
> unexpected results.
>
> Jean-Claude Arbaut
>


FYI

---------------------
Matlab

EDU>> 0^0
      1
----------------------

--------------------
Mathematica

In[174]:= 0^0
Out[174]= Indeterminate
--------------------

--------------
Maple 16
> 0^0;
           1
---------------------

-------------------
Python
In [6]: 0^0
Out[6]: 0
--------------------

--Nasser

Nasser
10/7/2012 1:44:35 PM
On 10/7/2012 8:44 AM, Nasser M. Abbasi wrote:

Oops, I had a mistake in the Python example; I should use ** not ^.
Here is the correct one:

--------------------
Python
In [9]: 0**0
Out[9]: 1
-------------------

--Nasser




Nasser
10/7/2012 1:47:10 PM
On 10/6/12 4:51 PM, dpb wrote:
> On 10/6/2012 4:32 PM, Ron Shepard wrote:
>> In article<k4q4i3$rt2$1@speranza.aioe.org>, dpb<none@non.net>
>> wrote:
>>
>>> On 10/6/2012 3:17 PM, Jan Gerrit Kootstra wrote:
>>> ...
>>>
>>>> As there is no mathematically unique answer, neither the standard nor any
>>>> vendor should force a unique answer.
>>> ...
>>>
>>> A vendor _has_ to provide a solution for an implementation; that it
>>> should at least be self-consistent is all OP asked for...
>>
>> There are some situations, including this one, where the best
>> solution might not be to return an answer at all, but rather abort
>> with an error message.
> ...
>
> "Might" is an operative word here... :)
>
> And, if that were the chosen path I'd still expect consistency in an
> implementation.
>
> --
The problem is that exponentiation is common and testing every 
exponentiation for special cases is likely to be a time waster.  Or at 
least people will perceive it to be.

And, consistency in an implementation isn't very useful if there isn't 
also portability.

Dick Hendrickson
Dick
10/7/2012 2:10:41 PM
On 10/7/2012 9:10 AM, Dick Hendrickson wrote:
....

> And, consistency in an implementation isn't very useful if there isn't
> also portability.
....

But I contend inconsistency in an implementation whether it's portable 
or not isn't very useful, either (or is at least error-inducing-prone).

There's a reason these areas are left as implementation-dependent...but 
imo a quality implementation would strive to be self-consistent.

Now, there _may_ be sufficient reason for a vendor to decide the vagaries 
of a particular hardware platform or other considerations make them 
choose not to be so.  Nowhere in anything I've written before is there a 
statement denying that; only that I think ideally consistency within an 
implementation is a reasonable expectation of the user.

--
dpb
10/7/2012 2:34:31 PM
On 10/7/2012 9:10 AM, Dick Hendrickson wrote:
....

> And, consistency in an implementation isn't very useful if there isn't
> also portability.
....

But I contend inconsistency in an implementation whether it's portable 
or not isn't very useful, either (or is at least error-inducing-prone).
....

That is, error-inducing on the part of writing code w/ only a modicum of 
testing, relying on the "expected" behavior of the compiler one is using.

The OP's example works (again, w/o addressing the question of whether he 
should have written his particular example as he did, but similar things 
could be envisioned that aren't all that unreasonable) -- you know a 
particular construct is implementation-dependent, so as OP you test for a 
given condition, discover its behavior, and rely on that for your 
code logic (making dutiful notes/comments and even, perhaps, isolating 
it as a system dependency).

Now, simply changing the precision as the OP did gives one a different 
behavior--while strictly speaking, yes, it's your fault; you shoulda' 
either not made the reliance to begin with and found another workaround 
or tested every possible behavior, it's still imo not as good a quality 
implementation as one would desire.  And again, yes, there's the caveat 
that the vendor has the right and perhaps even a good justification...

This kinda' reminds me of discussions years and years and years ago w/ 
NRC over the introduction of digital systems into reactor safety 
systems--some wanted to not allow them at all because there was, in 
their mind, no way to ever fully verify proper operation because 
fundamentally one could trace back to the point of there being some 
branch under which even the processor itself might err.  This was 
nonproductive since, carried to its conclusion, it meant we couldn't use 
computers for numerical design work, either.

--
dpb
10/7/2012 3:03:33 PM
In article <k4s3tm$188$1@speranza.aioe.org>, dpb <none@non.net> 
wrote:

> There's a reason these areas are left as implementation-dependent...but 
> imo a quality implementation would strive to be self-consistent.

I don't know if this is one of these situations, but this is the 
kind of expression that might change with optimization levels or 
with compiler options. And there might be some compiler options or 
optimizations levels that would result in a program abort and an 
error message, and that might also depend on the KIND values. This 
would make self-consistency (between different KIND values) 
difficult or impossible to achieve.

Will the expression X**0 be evaluated the same as X**I when I==0?  
Maybe not.  It is common at some optimization levels for the 
compiler to replace expressions like X**0, X**1, X**2, X**3 and so 
on with the appropriate multiplications and never call the power 
operator, whereas an expression like X**I where the compiler does 
not know the value of I would generally use it. Should the compiler 
treat the X==0.0 case special, and if so, is that the same as the 
special treatment within the general power function?  On the other 
hand, what about 0.0**I, should the compiler try to simplify this or 
pass it on to the power function?  If X==0.0 is not treated as a 
special case, then almost certainly X**0 is going to end up with a 
different value than 0.0**I when the compiler tries to simplify 
these expressions inline and avoid the power function.  For 
different KIND values for X and I, it also would not be unusual to 
return different results.  Some KIND values are going to map 
directly to the hardware, while others are going to involve software 
emulation.  In this case, the same compiler software will return 
different results when installed on slightly different hardware.

And for the integer cases, 0**0, I**0, 0**I, I**J, it is easy to see 
that a compiler optimization might do something entirely different 
than it does for the floating point cases.  And, it might do 
different things for different integer KIND values in these cases 
too.

So in situations like this, I don't think that any kind of 
self-consistency should be expected.

The OP has not yet posted the actual code, so we do not know which of 
these situations apply.  We can imagine what the code might look 
like, but we don't really know, so all we can do is talk about these 
generalities.

$.02 -Ron Shepard
Ron
10/7/2012 4:03:29 PM
On 10/7/2012 11:03 AM, Ron Shepard wrote:
> In article<k4s3tm$188$1@speranza.aioe.org>, dpb<none@non.net>
> wrote:
>
>> There's a reason these areas are left as implementation-dependent...but
>> imo a quality implementation would strive to be self-consistent.
>
> I don't know if this is one of these situations, but this is the
> kind of expression that might change with optimization levels or
> with compiler options. And there might be some compiler options or
> optimizations levels that would result in a program abort and an
> error message, and that might also depend on the KIND values. This
> would make self-consistency (between different KIND values)
> difficult or impossible to achieve.

....[list of possibilities elided for brevity]...

> So in situations like this, I don't think that any kind of
> self-consistency should be expected.
>
> The OP has not yet posted the actual code, so we do not know which of
> these situations apply.  We can imagine what the code might look
> like, but we don't really know, so all we can do is talk about these
> generalities.
>
> $.02 -Ron Shepard

True, but one would presume in this case the code is the same except for 
the KIND value, and therefore eliminates the differences spoken of above 
about integer vis-a-vis fp exponents, written constants vis-a-vis 
variables, etc., and reduces to simply the same 
expression w/ two differing KINDs.  Under those circumstances I _STILL_ 
think consistency would be _a_good_thing_ (tm)

I thought I had put in enough caveats several times previously that if 
there's a real reason a vendor doesn't do so, they have the leeway and 
are still compliant; so again I'm speaking of what I think a 
"quality implementation" should strive for if feasible.  Again, yes, if 
one gets bit it's not possible to blame anybody but oneself, but 
then again, vendors shouldn't go out of their way to ensure that 
one does get bitten, either... :)

--


dpb
10/7/2012 6:08:11 PM
Ron Shepard <ron-shepard@nospam.comcast.net> wrote:

(snip, someone wrote)
>> There's a reason these areas are left as implementation-dependent...but 
>> imo a quality implementation would strive to be self-consistent.

> I don't know if this is one of these situations, but this is the 
> kind of expression that might change with optimization levels or 
> with compiler options. And there might be some compiler options or 
> optimizations levels that would result in a program abort and an 
> error message, and that might also depend on the KIND values. This 
> would make self-consistency (between different KIND values) 
> difficult or impossible to achieve.

> Will the expression X**0 be evaluated the same as X**I when I==0?  

This is interesting. Given the different mathematical limits
for I**0 and 0**I as I goes to zero, one might imagine a 
compiler optimizing the two differently.

> Maybe not.  It is common at some optimization levels for the 
> compiler to replace expressions like X**0, X**1, X**2, X**3 and so 
> on with the appropriate multiplications and never call the power 
> operator, whereas an expression like X**I where the compiler does 
> not know the value of I would generally use it. Should the compiler 
> treat the X==0.0 case special, and if so, is that the same as the 
> special treatment within the general power function?  

The usual real power (including 2.0, 1.0, and 0.0) is done with
LOG and EXP. The usual integer power is done through multiplications.
The I**J case still leaves some uncertainty: do you start with 1, and
multiply it by I, or do you start with I? (Similarly for X**J.)

The implementations that I know about start with I or X, and either
square or multiply by I or X depending on the bits of the exponent.
Most will test for 0 as a special case, but maybe not all.

(snip)

> And for the integer cases, 0**0, I**0, 0**I, I**J, it is easy to see 
> that a compiler optimization might do something entirely different 
> than it does for the floating point cases.  And, it might do 
> different things for different integer KIND values in these cases 
> too.

> So in situations like this, I don't think that any kind of 
> self-consistency should be expected.

> The OP has not yet posted the actual code, so we do not know which of 
> these situations apply.  We can imagine what the code might look 
> like, but we don't really know, so all we can do is talk about these 
> generalities.

-- glen
glen
10/7/2012 9:46:56 PM
dpb <none@non.net> wrote:
> On 10/7/2012 9:10 AM, Dick Hendrickson wrote:
> ...

>> And, consistency in an implementation isn't very useful if there isn't
>> also portability.
> ...

> But I contend inconsistency in an implementation whether it's portable 
> or not isn't very useful, either (or is at least error-inducing-prone).
> ...

> That is, error-inducing on the part of writing code w/ only a modicum of 
> testing, relying on the "expected" behavior of the compiler one is using.

This is true, but it has a long history. One has been the variable
precision resulting from the x87 stack, and the uncertainty of
a value staying in a register or being stored with precision loss.
That can very easily be optimized differently.

Add that to the variations, (at least pre-IEEE) in floating point
implementations, and the rearrangements that optimizers are allowed,
and this one should pretty much go into the noise.

> The OP's example works (again, w/o addressing the question of whether he 
> should have written his particular example as he did, but similar things 
> could be envisioned that aren't all that unreasonable) -- you know a 
> particular construct is implementation-dependent, so as OP you test for a 
> given condition, discover its behavior, and rely on that for your 
> code logic (making dutiful notes/comments and even, perhaps, isolating 
> it as a system dependency).

Well, in this case it is easy to test and treat as needed. 

If one wants a specific result from something the standard allows
to be implementation dependent, then one should test for it and
do it right.

> Now, simply changing the precision as the OP did gives one a different 
> behavior--while strictly speaking, yes, it's your fault; you shoulda' 
> either not made the reliance to begin with and found another workaround 
> or tested every possible behavior, it's still imo not as good a quality 
> implementation as one would desire.  And again, yes, there's the caveat 
> that the vendor has the right and perhaps even a good justification...

Quad precision, at least with the Q exponent, is an extension,
and one often implemented in software emulation. It is not unusual
for software emulation to give different results from the hardware.

> This kinda' reminds me of discussions years and years and years ago w/ 
> NRC over the introduction of digital systems into reactor safety 
> systems--some wanted to not allow them at all because there was, in 
> their mind, no way to ever fully verify proper operation because 
> fundamentally one could trace back to the point of there being some 
> branch under which even the processor itself might error.  This was 
> nonproductive since, carried to its conclusion, it meant we couldn't use 
> computers for numerical design work, either.

Not to mention the large number of errors from wetware, more of which
can't be tested for than for computational hardware.
As far as I know, TMI was mostly a wetware error, and Chernobyl
pretty much completely a wetware error.

-- glen
glen
10/7/2012 9:58:38 PM
Hi,

On 2012-10-07 15:03:33 +0000, dpb said:

> But I contend inconsistency in an implementation whether it's portable 
> or not isn't very useful, either (or is at least error-inducing-prone).

But it is useful if it causes the programmer to be alert
to the issue early in the process.

-- 
Cheers!

Dan Nagle

Dan
10/7/2012 10:02:39 PM
On 10/7/2012 4:58 PM, glen herrmannsfeldt wrote:
....

>> This kinda' reminds me of discussions years and years and years ago w/
>> NRC over the introduction of digital systems into reactor safety
>> systems--some wanted to not allow them at all because there was, in
>> their mind, no way to ever fully verify proper operation because
>> fundamentally one could trace back to the point of there being some
>> branch under which even the processor itself might error.  This was
>> nonproductive since, carried to its conclusion, it meant we couldn't use
>> computers for numerical design work, either.
>
> Not to mention the large number of errors from wetware, and that
> more of those can't be tested for than for computational hardware.
> As far as I know, TMI was mostly a wetware error, ...

The end result, yes, entirely.  After the initial turbine trip and PORV 
reseat failure, if the operators had simply left the safety systems alone, 
all would have ended well.  When the initial operators failed to recognize 
the symptoms and stopped the RCPs, that instigated the worst of the result, 
which wasn't cured until the next shift's SRO recognized the issue and 
restarted the RCPs to reestablish core circulation...

> ... and Chernobyl pretty much completely a wetware error.

Indeed, it was a tragedy of error from conception to 
inception...compounded by the design lacking a containment building to 
help mitigate the result--

<http://www.world-nuclear.org/info/chernobyl/inf07app.html>

--
dpb
10/8/2012 4:43:21 AM
On Oct 8, 8:58 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

> It is not unusual
> for software emulation to give different results from the hardware.

Especially when the hardware is in error, as it was in the case of
the KDF9 hardware multiplier!
Robin
10/8/2012 10:01:13 AM
On Oct 8, 1:10 am, Dick Hendrickson <dick.hendrick...@att.net> wrote:

> The problem is that exponentiation is common and testing every
> exponentiation for special cases is likely to be a time waster.  Or at
> least people will perceive it to be.

Testing for 0.0 is scarcely going to waste much time.
It's an important special case, and it should be included
in generated code.
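What Robin describes might look like the following sketch (assumed names, not any vendor's actual runtime): one comparison ahead of the general exp(y*log(x)) path.

```fortran
! Sketch of a runtime power routine with the special-case tests
! Robin describes.  The general branch assumes x > 0; a real
! runtime would also handle negative x and negative y.
function rpow(x, y) result(r)
  implicit none
  real, intent(in) :: x, y
  real :: r
  if (y == 0.0) then
     r = 1.0                 ! defines x**0 = 1, including 0.0**0
  else if (x == 0.0) then
     r = 0.0                 ! 0.0**y for y > 0
  else
     r = exp(y * log(x))     ! general case, x > 0 assumed
  end if
end function rpow
```

The two comparisons are cheap next to the exp/log pair, which is the substance of Robin's claim.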


Robin
10/8/2012 10:24:23 AM
In article 
<d21aaea7-57d3-4c92-985b-1dd5cff49a2c@r8g2000pbs.googlegroups.com>,
 Robin Vowels <robin.vowels@gmail.com> wrote:

> On Oct 8, 1:10 am, Dick Hendrickson <dick.hendrick...@att.net> wrote:
> 
> > The problem is that exponentiation is common and testing every
> > exponentiation for special cases is likely to be a time waster.  Or at
> > least people will perceive it to be.
> 
> Testing for 0.0 is scarcely going to waste much time.
> It's an important special case, and it should be included
> in generated code.

An exception to this would be using SIMD hardware or pipelining to 
evaluate several results simultaneously.  Here, the same operations 
must be performed on a full set of values, so special cases cannot 
be singled out.  These kinds of considerations have been important 
in scientific computing since the early 1980s; this is not recent.

$.02 -Ron Shepard
Ron
10/8/2012 3:37:57 PM