
### Problems to calculate sin


```I must create a program that use trigonometry function.
I know sin(30)=0.5 but when I use Math.sin() I can't get it

Math.sin(30*Math.PI/180)=0.49999999999999994

What's the problem?

Thank you

Stefano Buscherini
```

```Steve70 wrote:
> I must create a program that use trigonometry function.
> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>
> Math.sin(30*Math.PI/180)=0.49999999999999994

That *is* 0.5, at least to rather good tolerance.

BugBear
```

On Mar 4, 3:40 pm, Steve70 <batst...@libero.it> wrote:
> I must create a program that use trigonometry function.
> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>
> Math.sin(30*Math.PI/180)=0.49999999999999994
>
> What's the problem?
>
> Thank you
>
> Stefano Buscherini

Math.sin uses Log tables to calculate value of Sin So they are
approximate to 0.0000000000001 Value.

You can use this value in your Calculations with error of
0.0000000000001  Thats not a big difference unless you are using it
for Rocket/ Missile Launching Program. Then you may create a function
to calculate Sin value yourself.

I suppose you know how to create Sin() using Mathematical Formulla.
Give me \$15 and I will give you the Formula.

Bye
Sanny
```

``` S> I must create a program that use trigonometry function.
S> I know sin(30)=0.5 but when I use Math.sin() I can't get it

S> Math.sin(30*Math.PI/180)=0.49999999999999994

S> What's the problem?

you're using floating point numbers. their precision is quite limited, and
they cannot represent all numbers exactly. (it's not possible with a fixed
amount of bits.)
so, you should not depend on exact results. use rounding when doing output.
expect some degree of error when doing comparison.

using floating point numbers correctly (so you have minimal errors) might be
quite a complex thing. for an introduction, read
http://en.wikipedia.org/wiki/Floating_point

```
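The advice above (round only for output, compare with a tolerance) can be sketched in Java. The class name and the 1e-9 tolerance are arbitrary choices for illustration:

```java
import java.util.Locale;

public class SinOutput {
    public static void main(String[] args) {
        double s = Math.sin(30 * Math.PI / 180);
        // Keep the full double for calculations; round only for display.
        System.out.println(s);                                   // raw: 0.49999999999999994
        System.out.println(String.format(Locale.ROOT, "%.10f", s)); // display: 0.5000000000
        // Never compare floating point with ==; use a tolerance instead.
        System.out.println(Math.abs(s - 0.5) < 1e-9);            // true
    }
}
```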

```On Tue, 4 Mar 2008 02:40:23 -0800 (PST), Steve70 <batsteve@libero.it>
wrote, quoted or indirectly quoted someone who said :

>I must create a program that use trigonometry function.
>I know sin(30)=0.5 but when I use Math.sin() I can't get it
>
>Math.sin(30*Math.PI/180)=0.49999999999999994

see http://mindprod.com/jgloss/floatingpoint.html
http://mindprod.com/jgloss/trigonometry.html

If you asked a carpenter for a table .5 meters long and it was
0.49999999999999994, how would you notice?
--

The Java Glossary
http://mindprod.com
```

```Sanny wrote:
> I suppose you know how to create Sin() using Mathematical Formulla.
> Give me \$15 and I will give you the Formula.

Come on, Sanny, do you really think anyone is going to give you cash for that
kind of thing?

"sin", lower-case "m" in "mathematical", "formula" is spelled with one "l"
and, in that context, lower-case "f".  No charge for the information.

Are you this careful with the expression of your formulas, too?

--
Lew
```

```On Mar 4, 5:40 am, Steve70 <batst...@libero.it> wrote:
> I must create a program that use trigonometry function.
> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>
> Math.sin(30*Math.PI/180)=0.49999999999999994
>
> What's the problem?
>
> Thank you
>
> Stefano Buscherini

A few of the other posters have commented that you need to understand
the limits of floating point arithmetic.  Note that there are at least
three ways that it comes into play in this problem.

Mathematically we know
sin(pi/6) = 1/2
exactly.  However Math.PI is an approximation of the value of PI so
30*Math.PI/180 is an approximation of pi/6.  So when we take the sine
we are taking the sine of a number that's a little different from the
number we really wanted.  That's one source of error.  You can see
this most easily by looking at the value
of
Math.sin(Math.PI)
It is not 0 but a value of about 10^-16.  This is not an error in the
computation of the sine, it represents the difference between Java's
approximation to pi and the true value.  This source of error is
sometimes a little surprising to users.

The second source of error is in the arithmetic within the
parentheses.  Anytime you operate on floating point numbers, and
especially if you are dividing, you are likely to get an answer that
is not exact.  For division this is easy to see.  You cannot write
10./3 in any finite decimal representation.  While a computer uses
binary rather than decimal, the same issue arises.  If we combine this
with the first error it's possible that there is a number that Java
could represent that is closer to the value pi/6 than the value you
will actually get using 30*Math.PI/180.

The last source is in the computation of the sine itself.  Java
permits (but does not require except in StrictMath) mathematical
functions to have very small errors in the calculations.  If the
correct answer is x, the result of the computation can be either x, or
the smallest number the computer can distinguish from x that is larger
than x, or the largest number the computer can distinguish from x that
is smaller than x.  In practice this means that the answer is within
about 10^-16 x of the true value.

Regards,
Tom McGlynn
```
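The Math.sin(Math.PI) observation above can be run directly:

```java
public class PiResidual {
    public static void main(String[] args) {
        // Math.PI is the nearest double to pi, about 1.2e-16 below the
        // true value, so the sine comes out near 1.2e-16 rather than 0.
        System.out.println(Math.sin(Math.PI));  // ~1.2246467991473532E-16
    }
}
```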

```Sanny wrote:
> On Mar 4, 3:40 pm, Steve70 <batst...@libero.it> wrote:
>> I must create a program that use trigonometry function.
>> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>>
>> Math.sin(30*Math.PI/180)=0.49999999999999994
>>
>> What's the problem?
>>
>> Thank you
>>
>> Stefano Buscherini
>
> Math.sin uses Log tables to calculate value of Sin So they are
> approximate to 0.0000000000001 Value.

I'm curious. Why log tables?

I don't know how Math.sin is implemented, and don't even assume it is
implemented the same way in all JVMs, but if I had to guess I would have
expected some sort of truncated Taylor series.

Patricia
```

On Mar 4, 6:29 pm, Patricia Shanahan <p...@acm.org> wrote:
> Sanny wrote:
> > On Mar 4, 3:40 pm, Steve70 <batst...@libero.it> wrote:
> >> I must create a program that use trigonometry function.
> >> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>
> >> Math.sin(30*Math.PI/180)=0.49999999999999994
>
> >> What's the problem?
>
> >> Thank you
>
> >> Stefano Buscherini
>
> > Math.sin uses Log tables to calculate value of Sin So they are
> > approximate to 0.0000000000001 Value.
>
> I'm curious. Why log tables?
>
> I don't know how Math.sin is implemented, and don't even assume it is
> implemented the same way in all JVMs, but if I had to guess I would have
> expected some sort of truncated Taylor series.
>
> Patricia

Using Tables are faster than using a formula to compute a value.

By log table I just mean a Table for Sine. Log tables are more
familiar than Log Tables.

Bye
Sanny

```

```Steve70 wrote:
> I must create a program that use trigonometry function.
> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>
> Math.sin(30*Math.PI/180)=0.49999999999999994

Question: Keeping in mind the fact that π is an irrational
number, do you think that 30*Math.PI/180 is exactly π/6?

--
Eric Sosman
esosman@ieee-dot-org.invalid
```
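One way to answer Eric's question is to print the double's exact decimal expansion; the BigDecimal(double) constructor shows every digit the bits actually represent:

```java
import java.math.BigDecimal;

public class ExactAngle {
    public static void main(String[] args) {
        // pi/6 = 0.5235987755982988730771...  The printed expansion shows
        // that the double computed for 30*Math.PI/180 only approximates it.
        System.out.println(new BigDecimal(30 * Math.PI / 180));
    }
}
```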

```Sanny wrote:
> On Mar 4, 6:29 pm, Patricia Shanahan <p...@acm.org> wrote:
>> Sanny wrote:
>>> On Mar 4, 3:40 pm, Steve70 <batst...@libero.it> wrote:
>>>> I must create a program that use trigonometry function.
>>>> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>>>> Math.sin(30*Math.PI/180)=0.49999999999999994
>>>> What's the problem?
>>>> Thank you
>>>> Stefano Buscherini
>>> Math.sin uses Log tables to calculate value of Sin So they are
>>> approximate to 0.0000000000001 Value.
>> I'm curious. Why log tables?
>>
>> I don't know how Math.sin is implemented, and don't even assume it is
>> implemented the same way in all JVMs, but if I had to guess I would have
>> expected some sort of truncated Taylor series.
>>
>> Patricia
>
> Using Tables are faster than using a formula to compute a value.
>
> By log table I just mean a Table for Sine.

(chuckle)

BugBear
```

```Sanny wrote:
> On Mar 4, 6:29 pm, Patricia Shanahan <p...@acm.org> wrote:
>> Sanny wrote:
>>> On Mar 4, 3:40 pm, Steve70 <batst...@libero.it> wrote:
>>>> I must create a program that use trigonometry function.
>>>> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>>>> Math.sin(30*Math.PI/180)=0.49999999999999994
>>>> What's the problem?
>>>> Thank you
>>>> Stefano Buscherini
>>> Math.sin uses Log tables to calculate value of Sin So they are
>>> approximate to 0.0000000000001 Value.
>> I'm curious. Why log tables?
>>
>> I don't know how Math.sin is implemented, and don't even assume it is
>> implemented the same way in all JVMs, but if I had to guess I would have
>> expected some sort of truncated Taylor series.
>>
>> Patricia
>
> Using Tables are faster than using a formula to compute a value.

Depends on many things, including the size of the tables. Remember that
one can do quite a lot of simple constant loading and floating point
arithmetic for the cost of one cache miss. Given the max 1 ulp error
requirement for Math.sin, I would expect a polynomial approximation to
be faster than a sufficiently precise table look-up.

> By log table I just mean a Table for Sine. Log tables are more
> familiar than Log Tables.

I assume you mean something like "Log tables are more familiar than sine
tables." However, my bachelor's degree was in mathematics. I am familiar
with sine tables and with series-based approximations to sine.

Do you actually know how Math.sin is implemented, or are you just saying
how you think you would implement it?

Patricia
```

```On Mar 4, 9:05 am, Sanny <softta...@hotmail.com> wrote:
> On Mar 4, 6:29 pm, Patricia Shanahan <p...@acm.org> wrote:
>
>
>
> > Sanny wrote:
> > > On Mar 4, 3:40 pm, Steve70 <batst...@libero.it> wrote:
> > >> I must create a program that use trigonometry function.
> > >> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>
> > >> Math.sin(30*Math.PI/180)=0.49999999999999994
>
> > >> What's the problem?
>
> > >> Thank you
>
> > >> Stefano Buscherini
>
> > > Math.sin uses Log tables to calculate value of Sin So they are
> > > approximate to 0.0000000000001 Value.
>
> > I'm curious. Why log tables?
>
> > I don't know how Math.sin is implemented, and don't even assume it is
> > implemented the same way in all JVMs, but if I had to guess I would have
> > expected some sort of truncated Taylor series.
>
> > Patricia
>
> Using Tables are faster than using a formula to compute a value.
>
> By log table I just mean a Table for Sine. Log tables are more
> familiar than Log Tables.
>
> Bye
> Sanny

Looking at the fdlibm libraries, whose algorithms are used in Sun's
JVMs, and which define the results that must be given when StrictMath
is specified, the sine function consists of a range reduction such
that only the range 0 to pi/4 need be considered, followed by a
polynomial expansion to 13th order.  I didn't check to see if this is
simply the Taylor expansion as Patricia suggested.  It's possible that
some slight modification has better error properties.  Since only odd
terms need be considered this takes relatively few operations, ~7
multiplies and adds, so the overhead of the function call is probably
a non-trivial fraction of the total cost.

Table lookup and interpolation might be faster, but I wouldn't bet on
it, e.g., finding the integer index into the table given the real
input value probably soaks up a few cycles.  Note that table lookup
would still have to do a range reduction first if the table were to be
any feasible size.

Regards,
Tom McGlynn

```
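A toy version of the polynomial scheme described above, using plain Taylor coefficients through x^13. fdlibm's actual constants are minimax-tuned, so this is only an illustration of the shape of the computation, not the real kernel:

```java
public class SinPoly {
    // Taylor series for sin truncated at x^13, evaluated in Horner form;
    // adequate only on the reduced range |x| <= pi/4.
    static double sinPoly(double x) {
        double x2 = x * x;
        return x * (1 + x2 * (-1.0 / 6 + x2 * (1.0 / 120 + x2 * (-1.0 / 5040
             + x2 * (1.0 / 362880 + x2 * (-1.0 / 39916800
             + x2 * (1.0 / 6227020800.0)))))));
    }

    public static void main(String[] args) {
        double x = Math.PI / 6;  // already below pi/4, no reduction needed
        System.out.println(sinPoly(x));
        System.out.println(Math.sin(x));
        // The two agree to roughly the truncation bound |x|^15 / 15!.
    }
}
```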

```Thomas.a.mcglynn@nasa.gov wrote:
> On Mar 4, 9:05 am, Sanny <softta...@hotmail.com> wrote:
>> On Mar 4, 6:29 pm, Patricia Shanahan <p...@acm.org> wrote:
>>
>>
>>
>>> Sanny wrote:
>>>> On Mar 4, 3:40 pm, Steve70 <batst...@libero.it> wrote:
>>>>> I must create a program that use trigonometry function.
>>>>> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>>>>> Math.sin(30*Math.PI/180)=0.49999999999999994
>>>>> What's the problem?
>>>>> Thank you
>>>>> Stefano Buscherini
>>>> Math.sin uses Log tables to calculate value of Sin So they are
>>>> approximate to 0.0000000000001 Value.
>>> I'm curious. Why log tables?
>>> I don't know how Math.sin is implemented, and don't even assume it is
>>> implemented the same way in all JVMs, but if I had to guess I would have
>>> expected some sort of truncated Taylor series.
>>> Patricia
>> Using Tables are faster than using a formula to compute a value.
>>
>> By log table I just mean a Table for Sine. Log tables are more
>> familiar than Log Tables.
>>
>> Bye
>> Sanny
>
> Looking at the fdlibm libraries, whose algorithms are used in Sun's
> JVMs, and which define the results that must be given when StrictMath
> is specified, the sine function consists of a range reduction such
> that only the range 0 to pi/4 need be considered, followed by a
> polynomial expansion to 13th order.  I didn't check to see if this is
> simply the Taylor expansion as Patricia suggested.  It's possible that
> some slight modification has better error properties.  Since only odd
> terms need be considered this takes relatively few operations, ~7
> multiplies and adds, so the overhead of the function call is probably
> a non-trivial fraction of the total cost.

That's about what I would have guessed. Thanks for the information.

Patricia
```

```Patricia Shanahan wrote:
>>
>> Using Tables are faster than using a formula to compute a value.
>
> Depends on many things, including the size of the tables. Remember that
> one can do quite a lot of simple constant loading and floating point
> arithmetic for the cost of one cache miss.

I remember the shock when I first encountered this; in the days
of the 68000 - 68020 a look up table was almost
always a clear win, and I regarded (naively) lookup
tables as an "obvious" optimisation.

Then I started coding on Sun/SPARC...

I also used to think that avoiding floating point arithmetic
AT ALL COSTS was a correct speed strategy. Also naive.

BugBear
```

```Steve70 wrote:
> I must create a program that use trigonometry function.
> I know sin(30)=0.5 but when I use Math.sin() I can't get it
>
> Math.sin(30*Math.PI/180)=0.49999999999999994
>
> What's the problem?
>
> Thank you
>
> Stefano Buscherini

Given the information below, you can calculate the error bound on sine by:

|x|^15 / 15!

Using pi/4 = 0.7853982 I get an error bound of 2.04 x 10^-14, so you can
truncate/round all sines at 13 digits after the decimal point and get a
reasonable result.  Note that issues with floats might still pop up, I'd
recommend doubles to be safe, unless you have a specific reason to use
floats.

```
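The quoted bound is easy to check numerically (15! = 1307674368000):

```java
public class TaylorBound {
    public static void main(String[] args) {
        double x = Math.PI / 4;
        // Remainder of the x^13 Taylor truncation: |x|^15 / 15!
        double bound = Math.pow(x, 15) / 1307674368000.0;
        System.out.println(bound);  // about 2.0e-14
    }
}
```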

```Thomas.a.mcglynn@nasa.gov wrote:
> Table lookup and interpolation might be faster, but I wouldn't bet on
> it, e.g., finding the integer index into the table given the real
> input value probably soaks up a few cycles.  Note that table lookup
> would still have to do a range reduction first if the table were to be
> any feasible size.
Even with range reduction and say cubic interpolation you would still
need an enormous table to give the required accuracy. The interpolation
would cost almost as much as the series evaluation. Tables are more
common when the accuracy requirement is low and the processor has a low
performance FPU (or no hardware floating point at all).

Mark Thornton
```
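To make the trade-off concrete, here is a sketch of the kind of small table Mark has in mind: 256 entries plus linear interpolation over [0, pi/2]. The size and scheme are arbitrary illustrative choices; the point is that its error (on the order of 10^-6) is nowhere near the 1-ulp requirement:

```java
public class SinTable {
    static final int N = 256;
    static final double[] TABLE = new double[N + 1];
    static {
        for (int i = 0; i <= N; i++) {
            TABLE[i] = Math.sin(i * (Math.PI / 2) / N);
        }
    }

    // Linear interpolation between table entries; valid only for 0 <= x <= pi/2.
    static double sinTable(double x) {
        double t = x / (Math.PI / 2) * N;
        int i = Math.min((int) t, N - 1);
        double frac = t - i;
        return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i]);
    }

    public static void main(String[] args) {
        double x = 0.6;
        System.out.println(Math.abs(sinTable(x) - Math.sin(x)));
        // error is on the order of 1e-6, far worse than 1 ulp
    }
}
```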

```On Mar 4, 3:10 pm, Mark Thornton <mark.p.thorn...@ntlworld.com> wrote:
> Thomas.a.mcgl...@nasa.gov wrote:
> > Table lookup and interpolation might be faster, but I wouldn't bet on
> > it, e.g., finding the integer index into the table given the real
> > input value probably soaks up a few cycles.  Note that table lookup
> > would still have to do a range reduction first if the table were to be
> > any feasible size.
>
> Even with range reduction and say cubic interpolation you would still
> need an enormous table to give the required accuracy. The interpolation
> would cost almost as much as the series evaluation. Tables are more
> common when the accuracy requirement is low and the processor has a low
> performance FPU (or no hardware floating point at all).
>
> Mark Thornton

With cubic interpolation, I'd anticipate errors of order the fourth
power of the step size, which suggests about 10,000 interpolation
intervals would be needed to get errors of order 10^-16 (for the range
0 - pi/4).  We need smaller errors for smaller values, but the
increasing linearity of sin(x) for small x probably takes care of
that. 10,000 is big enough that I'd want strong evidence that the
table approach was desirable, but not enough to preclude a table
driven approach based only upon the size of the table -- seems like it
should fit into typical cache.

However, I make no claims to understand how modern CPUs operate with
their dizzying hierarchy of caches.  It may well be that this
simplistic analysis is inappropriate.

Regards,
Tom McGlynn
```

```On Tue, 4 Mar 2008 05:29:03 -0800 (PST), Thomas.a.mcglynn@nasa.gov
wrote, quoted or indirectly quoted someone who said :

>The last source is in the computation of the sine itself.

sines are computed by polynomial approximations.  It is amazing they
are as accurate as they are with the low order polynomials they use.
--

The Java Glossary
http://mindprod.com
```

```On Tue, 4 Mar 2008 05:29:03 -0800 (PST), Thomas.a.mcglynn@nasa.gov
wrote, quoted or indirectly quoted someone who said :

> Java
>permits (but does not require except in strictmath) mathematical
>functions to have very small errors in the calculations.

the Intel FP instruction set has a sine-computing instruction.  It
works inside with polynomial approximations. Any error is Intel's
doing.

I suppose in some future chip it will have special in-parallel checks
for 45 degrees, 90 degrees, 0 degrees, 30 degrees to get as perfect as
possible results.
--

The Java Glossary
http://mindprod.com
```

```Thomas.a.mcglynn@nasa.gov wrote:
....
> With cubic interpolation, I'd anticipate errors of order the fourth
> power of the step size, which suggests about 10,000 interpolation
> intervals would be needed to get errors of order 10^-16 (for the range
> 0 - pi/4).  We need smaller errors for smaller values, but the
> increasing linearity of sin(x) for small x probably takes care of
> that. 10,000 is big enough that I'd want strong evidence that the
> table approach was desirable, but not enough to preclude a table
> driven approach based only upon the size of the table -- seems like it
> should fit into typical cache.

How many bytes per interval?

Patricia
```

```On Wed, 05 Mar 2008 01:09:56 +0000, Roedy Green wrote:

[Snip]
> the Intel FP instruction set has a sine-computing instruction.  It
> works inside with polynomial approximations. Any error is Intel's
> doing.
[Snip]

I read somewhere that Java doesn't use this sine-computing instruction
since it doesn't meet the accuracy requirements guaranteed by the class.
This was quoted as the reason that Java performs slower on transcendental
functions than other languages on the platform.

Is this information out of date?

--
Kenneth P. Turvey <kt-usenet@squeakydolphin.com>
```

```On Mar 4, 9:45 pm, Patricia Shanahan <p...@acm.org> wrote:
> Thomas.a.mcgl...@nasa.gov wrote:
>
> ...
>
> > With cubic interpolation, I'd anticipate errors of order the fourth
> > power of the step size, which suggests about 10,000 interpolation
> > intervals would be needed to get errors of order 10^-16 (for the range
> > 0 - pi/4).  We need smaller errors for smaller values, but the
> > increasing linearity of sin(x) for small x probably takes care of
> > that. 10,000 is big enough that I'd want strong evidence that the
> > table approach was desirable, but not enough to preclude a table
> > driven approach based only upon the size of the table -- seems like it
> > should fit into typical cache.
>
> How many bytes per interval?
>
> Patricia

For a cubic fit presumably one needs 4 values per interval, or 32 bytes
for double coefficients.  The expansion isn't around 0, so the even and
odd terms both show up.  So about 300kB altogether.  That's too big to
want to use without a good reason, but if, contrary to our belief, the
computation of the sine were faster using the table, it would be easily
accommodated within typical programs, and is a bit smaller than the
typical cache size.  Presumably one would need to compute a lot of
sines to amortize the setup overhead, though.

Tom
```

```Thomas.a.mcglynn@nasa.gov wrote:
> On Mar 4, 9:45 pm, Patricia Shanahan <p...@acm.org> wrote:
>> Thomas.a.mcgl...@nasa.gov wrote:
>>
>> ...
>>
>>> With cubic interpolation, I'd anticipate errors of order the fourth
>>> power of the step size, which suggests about 10,000 interpolation
>>> intervals would be needed to get errors of order 10^-16 (for the range
>>> 0 - pi/4).  We need smaller errors for smaller values, but the
>>> increasing linearity of sin(x) for small x probably takes care of
>>> that. 10,000 is big enough that I'd want strong evidence that the
>>> table approach was desirable, but not enough to preclude a table
>>> driven approach based only upon the size of the table -- seems like it
>>> should fit into typical cache.
>> How many bytes per interval?
>>
>> Patricia
>
> For a cubic fit presumably one needs 4 values per interval, or 32 bytes
> for double coefficients.  The expansion isn't around 0, so the even and
> odd terms both show up.  So about 300kB altogether.  That's too big to
> want to use without a good reason, but if, contrary to our belief, the
> computation of the sine were faster using the table, it would be easily
> accommodated within typical programs, and is a bit smaller than the
> typical cache size.  Presumably one would need to compute a lot of
> sines to amortize the setup overhead.
>
>    Tom

I think at the moment the polynomial approximation approach wins,
because the processor has the greatest freedom to reorder and schedule
arithmetic involving only a few constants and one parameter, with no
memory references to wait for.

As caches get bigger, the economics may shift again.

It's the sort of thing where if I were doing it professionally I might
maintain implementations both ways, and compare them on each processor
generation to see shifts in the relative performance.

Patricia
```

```On Mar 5, 12:41 pm, Patricia Shanahan <p...@acm.org> wrote:
> Thomas.a.mcgl...@nasa.gov wrote:
> > For a cubic fit presumably one needs 4 values per interval or 32 bytes
> > for double coefficients.  The expansion isn't around 0 so the even and
> > odd terms both show.  So about 300kB altogether.  Too big to want to
....
>
> I think at the moment the polynomial approximation approach wins,
> because the processor has the greatest freedom to reorder and schedule
> arithmetic involving only a few constants and one parameter, with no
> memory references to wait for.
>
> As caches get bigger, the economics may shift again.
>
> It's the sort of thing where if I were doing it professionally I might
> maintain implementations both ways, and compare them on each processor
> generation to see shifts in the relative performance.
>
> Patricia

I'm sure you're right about the current situation.  However I suspect I
overestimated the needed table size somewhat.  I was naively
thinking of 10,000 independent cubic fits, but it's more reasonable
and efficient to use the values from several points near where the
interpolation is to be done.  In that case we only need a single
coefficient for each point (presumably the function value) so we can
get by with a mere ~100kB of table.  I don't think this affects your
conclusion though.

Regards,
Tom
```

```Kenneth P. Turvey wrote:
> On Wed, 05 Mar 2008 01:09:56 +0000, Roedy Green wrote:
>
> [Snip]
>> the Intel FP instruction set has a sine-computing instruction.  It
>> works inside with polynomial approximations. Any error is Intel's
>> doing.
> [Snip]
>
> I read somewhere that Java doesn't use this sine-computing instruction
> since it doesn't meet the accuracy requirements guaranteed by the class.
> This was quoted as the reason that Java performs slower on transcendental
> functions than other languages on the platform.

It does use the FSIN function, but only after reducing the argument to
+-PI/4 (I believe). The accuracy problem with the FSIN function is the
way Intel do argument reduction using a 66-bit approximation of PI. The
Java specification requires many more bits of PI in some cases.

Mark Thornton
```
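The argument-reduction pitfall Mark describes can be imitated in pure Java: reducing with 2 * Math.PI (a 53-bit approximation of 2*pi, analogous in spirit to FSIN's 66-bit one) injects an error that grows with the argument, while Math.sin itself reduces with a much longer representation of pi:

```java
public class Reduce {
    public static void main(String[] args) {
        double x = 1.0e6;
        // Naive reduction with an approximate 2*pi: r lands in [-pi, pi],
        // but carries an error of roughly (x / 2pi) * (2pi - 2*Math.PI).
        double r = Math.IEEEremainder(x, 2 * Math.PI);
        System.out.println(Math.sin(x));
        System.out.println(Math.sin(r));  // close, but off by ~1e-11 here
    }
}
```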

```Thomas.a.mcglynn@nasa.gov wrote:
> On Mar 5, 12:41 pm, Patricia Shanahan <p...@acm.org> wrote:
>> Thomas.a.mcgl...@nasa.gov wrote:
>>> For a cubic fit presumably one needs 4 values per interval or 32 bytes
>>> for double coefficients.  The expansion isn't around 0 so the even and
>>> odd terms both show.  So about 300kB altogether.  Too big to want to
> ...
>> I think at the moment the polynomial approximation approach wins,
>> because the processor has the greatest freedom to reorder and schedule
>> arithmetic involving only a few constants and one parameter, with no
>> memory references to wait for.
>>
>> As caches get bigger, the economics may shift again.
>>
>> It's the sort of thing where if I were doing it professionally I might
>> maintain implementations both ways, and compare them on each processor
>> generation to see shifts in the relative performance.
>>
>> Patricia
>
> I'm sure you're right about the current situation.  However I suspect I
> overestimated the needed table size somewhat.  I was naively
> thinking of 10,000 independent cubic fits, but it's more reasonable
> and efficient to use the values from several points near where the
> interpolation is to be done.  In that case we only need a single
> coefficient for each point (presumably the function value) so we can
> get by with a mere ~100kB of table.  I don't think this affects your
> conclusion though.
>
>     Regards,
>     Tom

It may be worth noting that the Level one data caches are often much
smaller than this (32KB/core on my machine).

Mark Thornton
```

```On Mar 4, 8:09 pm, Roedy Green <see_webs...@mindprod.com.invalid>
wrote:
>
> I suppose in some future chip it will have special in-parallel checks
> for 45 degrees, 90 degrees, 0 degrees, 30 degrees to get as perfect as
> possible results.

Hmmm....  Do you mean that you would like something like

Math.sin(Math.PI)

to return 0?  That seems very dangerous.

I could see having special checking on new degree-based functions
though:  E.g., [untested]

double sind(double x) {
    double[] specialValue = {
        0, 0.5, Math.sqrt(3) / 2, 1, Math.sqrt(3) / 2, 0.5,
        0, -0.5, -Math.sqrt(3) / 2, -1, -Math.sqrt(3) / 2, -0.5};

    x = x % 360;
    if (x < 0) {
        x += 360;
    }

    if (x % 30 == 0) {
        return specialValue[(int) (x / 30)];
    } else {
        return Math.sin(x * Math.PI / 180);
    }
}

Regards,
Tom McGlynn
```

```On 05 Mar 2008 09:41:15 GMT, "Kenneth P. Turvey"
<kt-usenet@squeakydolphin.com> wrote, quoted or indirectly quoted
someone who said :

>I read somewhere that Java doesn't use this sine-computing instruction
>since it doesn't meet the accuracy requirements guaranteed by the class.
>This was quoted as the reason that Java performs slower on transcendental
>functions than other languages on the platform.
>
>Is this information out of date?

I don't know. The code would in a native class. The chip will not have
changed, so likely that information would be stable.
--

The Java Glossary
http://mindprod.com
```

```Roedy Green <see_website@mindprod.com.invalid> wrote:
> On 05 Mar 2008 09:41:15 GMT, "Kenneth P. Turvey"
>>I read somewhere that Java doesn't use this sine-computing instruction
>>since it doesn't meet the accuracy requirements guaranteed by the class.
>>This was quoted as the reason that Java performs slower on transcendental
>>functions than other languages on the platform.
>>Is this information out of date?

I don't know. The code would be in a native class. The chip will not have
changed, so likely that information would be stable.

My guess was that if one needed math more exact than the native
processor's arithmetic, one would use "strictfp"; so while I
don't know for sure, I wouldn't bet on the abovementioned information.

```

```Andreas Leitgeb wrote:
> Roedy Green <see_website@mindprod.com.invalid> wrote:
>> On 05 Mar 2008 09:41:15 GMT, "Kenneth P. Turvey"
>>> I read somewhere that Java doesn't use this sine-computing instruction
>>> since it doesn't meet the accuracy requirements guaranteed by the class.
>>> This was quoted as the reason that Java performs slower on transcendental
>>> functions than other languages on the platform.
>>> Is this information out of date?
>
> I don't know. The code would be in a native class. The chip will not have
> changed, so likely that information would be stable.
>
> My guess was, that if one needed math more exact than native
> processor's arithmetics, he would use "strictfp", so while I
> don't know for sure, I wouldn't bet on abovementioned information.
>

The minimum precision requirements in the Math.sin API documentation are
not dependent on strictfp.

Patricia
```

```On Mar 6, 11:26 am, Patricia Shanahan <p...@acm.org> wrote:
> Andreas Leitgeb wrote:
....

> > My guess was, that if one needed math more exact than native
> > processor's arithmetics, he would use "strictfp", so while I
> > don't know for sure, I wouldn't bet on abovementioned information.
>
> The minimum precision requirements in the Math.sin API documentation are
> not dependent on strictfp.
>

Perhaps Andreas may be thinking of the StrictMath class rather than
strictfp?  This class duplicates the functionality of Math but
requires that the results be identical to the fdlibm results.  [I
think somewhere else in this thread I also confused this with
strictfp].

It's a bit counterintuitive, but I believe the general effect of
either a strictfp block or use of the StrictMath class is a less
accurate calculation.  They are all about consistency, not accuracy.

Regards,
Tom McGlynn

```

```Tom McGlynn wrote:
> On Mar 6, 11:26 am, Patricia Shanahan <p...@acm.org> wrote:
>> Andreas Leitgeb wrote:
> ...
>
>>> My guess was, that if one needed math more exact than native
>>> processor's arithmetics, he would use "strictfp", so while I
>>> don't know for sure, I wouldn't bet on abovementioned information.
>> The minimum precision requirements in the Math.sin API documentation are
>> not dependent on strictfp.
>>
>
> Perhaps Andreas may be thinking of the StrictMath class rather than
> strictfp?  This class duplicates the functionality of Math but
> requires that the results be identical to the fdlibm results.  [I
> think somewhere else in this thread I also confused this with
> strictfp].
>
> It's a bit counterintuitive, but I believe the general effect of
> either a strictfp block or use of the StrictMath class is a less
> accurate calculation.  They are all about consistency, not accuracy.

I think StrictMath can go either way. There are generally two
representable numbers that could be a valid Math.sin result for a given
angle, because each is within one ULP of the infinite precision answer.
Math.sin can return either. StrictMath.sin must return the one fdlibm
would pick, which may or may not be the closer of the two.

Patricia
```
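Patricia's two-candidate picture can be checked with the Math.ulp helper:

```java
public class UlpCheck {
    public static void main(String[] args) {
        double s = Math.sin(30 * Math.PI / 180);  // 0.49999999999999994
        System.out.println(0.5 - s);           // half an ulp of 0.5
        System.out.println(Math.ulp(0.5));     // 1.1102230246251565E-16
        // The result is within one ulp of 0.5, as the Math.sin spec allows.
        System.out.println(Math.abs(s - 0.5) <= Math.ulp(0.5));  // true
    }
}
```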

```And why Turbo Pascal calculate sin(30*PI/180)=0.5?

```

```Steve70 wrote:
> And why Turbo Pascal calculate sin(30*PI/180)=0.5?

Does it?  Does it really?

You can make the Java result of Math.sin( 30.0 * Math.PI / 180.0 ) look like
it returns 0.5, too.  The significant thing is that the result of that
calculation has guaranteed precision and accuracy in Java.  Your
Pascal or C implementations will vary in their internal precision, accuracy
and output.

As pointed out quite frequently in these newsgroups, floating point
expressions in a binary device will generally be inaccurate.  The science of
numerical analysis places limits on that inaccuracy, and yields algorithms to
minimize it.  Programmers need to have some knowledge of that science in order
to understand, and more importantly, control what their code does.

Note the definition of Java's Math.PI:
<http://java.sun.com/javase/6/docs/api/java/lang/Math.html#PI>
> The double value that is closer than any other to pi,
> the ratio of the circumference of a circle to its diameter.

If PI cannot be exactly represented, then it is not reasonable to expect that
the result of a calculation on that approximation involving division by 180.0,
whose result generally will also be inaccurate, much less the result of an
approximation of a transcendental function of that calculation, will be
exactly representable in the binary floating-point format, much less exactly
accurate.

--
Lew
```
 0
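Lew's point about making the Java result "look like" 0.5 comes down to output formatting; rounding at print time hides the last-bit error (a quick sketch, class name invented):

```java
import java.util.Locale;

public class RoundedOutput {
    public static void main(String[] args) {
        double s = Math.sin(30.0 * Math.PI / 180.0);
        System.out.println(s);                         // full precision, e.g. 0.49999999999999994
        System.out.printf(Locale.ROOT, "%.10f%n", s);  // 0.5000000000 -- rounded at print time only
    }
}
```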

```Steve70 wrote:
> And why Turbo Pascal calculate sin(30*PI/180)=0.5?
>
>
>
>
>

And what result does it print for sin(30*PI/180)-0.5?

Did you verify that the result was exactly 0.5, or was it merely converted to
string as 0.5?
```
 0
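Mark's diagnostic is easy to run on the Java side for comparison (an illustrative sketch; the class name is made up):

```java
public class DiffCheck {
    public static void main(String[] args) {
        double s = Math.sin(30.0 * Math.PI / 180.0);
        // Subtracting 0.5 exposes what a value displayed as "0.5" can hide:
        // on typical JVMs the difference is a tiny nonzero number (about -2^-54).
        System.out.println(s - 0.5);
        System.out.println(s == 0.5);
    }
}
```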

```On Mar 4, 9:40 pm, Steve70 <batst...@libero.it> wrote:
...
> Math.sin(30*Math.PI/180)=0.49999999999999994
>
> What's the problem?

research IEEE 754.

I expect turbo Pascal was doing internal rounding
(knowing Borland* - suspecting at time of output).

* I cannot imagine that Turbo Pascal was *not*
using IEEE 754 internally - I wrote a few trivial
number crunching progs. using turbo Pascal and
found the results quite plausible (they confirmed
basic theories of physics I was trying to disprove).

--
Andrew T.
PhySci.org
```
 0

```On Mar 10, 9:04 pm, Andrew Thompson <andrewtho...@gmail.com> wrote:

> ...I wrote a few trivial
> number crunching progs. using turbo Pascal and
> found the results quite plausible (they confirmed ..

..to 10+ decimal places..

> ..basic theories of physics I was trying to disprove).

--
Andrew T.
PhySci.org
```
 0

```Mark Thornton <mark.p.thornton@ntlworld.com> wrote:
> Steve70 wrote:
>> And why Turbo Pascal calculate sin(30*PI/180)=0.5?
> And what result does it print for sin(30*PI/180)-0.5
> Did you verify that the result was exactly 0.5 or merely converted to
> string as 0.5.

I do find it believable that TurboPascal produces a more exact
result, since afaik it not only uses the processor's "double extended"
floating point arithmetic(*), but also has the 10-byte datatype "extended"
to store such values rather than only the 8-byte "double".

So, I'd expect even "sin(30*PI/180)-0.5" to be significantly
smaller (that is: nearer to zero) than in java, with or without
strictfp/StrictMath.

(*): From Wikipedia: IEEE 754 specifies four formats for representing
floating-point values: single-precision (32-bit), double-precision
(64-bit), single-extended precision (≥ 43-bit, not commonly used) and
double-extended precision (≥ 79-bit, usually implemented with 80 bits).

PS: don't have any TurboPascal here at hand. Only know it from back
in the MsDos-era.
```
 0

```Lew wrote:
> As pointed out quite frequently in these newsgroups, floating point
> expressions in a binary device will generally be inaccurate.

In this case, binary has nothing to do with it. With binary /or/
decimal, no computer offers infinite precision. In fact, all other
things being equal, binary is better; decimal is only to be preferred in
a computational context with decimal quantization (i.e., finance).
--
John W. Kennedy
"...when you're trying to build a house of cards, the last thing you
should do is blow hard and wave your hands like a madman."
--  Rupert Goodwins
```
 0

```John W. Kennedy wrote:
> Lew wrote:
>> As pointed out quite frequently in these newsgroups, floating point
>> expressions in a binary device will generally be inaccurate.
>
> In this case, binary has nothing to do with it. With binary /or/
> decimal, no computer offers infinite precision. In fact, all other
> things being equal, binary is better; decimal is only to be preferred in
> a computational context with decimal quantization (i.e., finance).

Oo-kaay.

Binariness is not relevant in the abstract, but since this community uses Java
on binary devices it is relevant in practice.  All that stuff about decimal,
not that it'd matter if you'd discussed duodecimal or sexagintadecimal, is by
the wayside and doesn't alter the salient point a jot.

The point is that fixed-precision computations will be inaccurate for real
numbers, and error management therein is a vital concern.

--
Lew
```
 0

```On Mar 6, 3:35 pm, Patricia Shanahan <p...@acm.org> wrote:
> Tom McGlynn wrote:
> > On Mar 6, 11:26 am, Patricia Shanahan <p...@acm.org> wrote:
> >> Andreas Leitgeb wrote:
> > ...
>
> >>> My guess was, that if one needed math more exact than native
> >>> processor's arithmetics, he would use "strictfp", so while I
> >>> don't know for sure, I wouldn't bet on abovementioned information.
> >> The minimum precision requirements in the Math.sin API documentation are
> >> not dependent on strictfp.
>
> > Perhaps Andreas may be thinking of the StrictMath class rather than
> > strictfp?  This class duplicates the functionality of Math but
> > requires that the results be identical to the fdlibm results.  [I
> > think somewhere else in this thread I also confused this with
> > strictfp].
>
> > It's a bit counterintuitive, but I believe the general effect of
> > either a strictfp block or use of the StrictMath class is a less
> > accurate calculation.  They are all about consistency, not accuracy.
>
> I think StrictMath can go either way. There are generally two
> representable numbers that could be a valid Math.sin result for a given
> angle, because each is within one ULP of the infinite precision answer.
> Math.sin can return either. StrictMath.sin must return the one fdlibm
> would pick, which may or may not be the closer of the two.
>
> Patricia

You're certainly correct that it's possible in principle to build a
variant Math library which is legal but less accurate than
StrictMath.  My sense, though, is that it's not actually that easy
(e.g., see below). So in practice, where Math and StrictMath differ
(I'm not aware of anyplace this has actually been done), my suspicion
would be that the Math library is likely to be the more accurate
overall.

Two items of interest that I came across in looking up some of the
background...

First, the problem with using sine hardware is that while it generally
provides accuracy to better than 10^-16, one needs to compute sines to
an absolute accuracy of 10^-32 when one considers the regions where
the sine crosses 0.  That's not a problem at x=0, but it is at other
integral multiples of pi.  E.g., Math.sin(Math.PI) is of order 10^-16,
but the Java requirement states that it needs to be within one ULP of
the correct answer, so it needs an absolute accuracy of 10^-32.

Second, the discussion of how to make trigonometric calculations
efficient in Java (e.g., to use FPU hardware), notes that an optimizer
is free to substitute an FPU sine call when it knows it will be within
the required accuracy.  This suggested to me that it might be possible
for the value of the sine function to change during the execution of a
program that uses runtime optimization, like HotSpot, giving one
answer before the function was optimized and a different answer
(presumably 1 ULP away) afterwards.  That sounds a bit weird, and
perhaps it's forbidden somewhere...  The explicit wiggle room given in
the definition of the Math libraries, though, makes them a bit different
from most functions.

Regards,
Tom McGlynn
```
 0
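Tom's point about zero crossings shows up directly at Math.PI itself: the result is not 0 but, to first order, the amount by which Math.PI misses the true pi. A small sketch (class name invented):

```java
public class SinNearPi {
    public static void main(String[] args) {
        // Math.PI misses the true pi by roughly 1.2e-16, and sin(pi + eps) is
        // approximately -eps, so the result is tiny but nonzero -- and the
        // one-ULP requirement forces an implementation to get that tiny value right.
        System.out.println(Math.sin(Math.PI));  // on the order of 1.2e-16
    }
}
```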

```Tom McGlynn wrote:
> Second, the discussion of how to make trigonometric calculations
> efficient in Java (e.g., to use FPU hardware), notes that an optimizer
> is free to substitute an FPU sine call when it knows it will be within
> the required accuracy.  This suggested to me that it might be possible
> for the value of the sine function to change during the execution of a
> program that uses runtime optimization, like HotSpot, giving one
> answer before the function was optimized and a different answer
> (presumably 1 ULP away) afterwards.  That sounds a bit weird, and

This sort of behaviour is readily observable on older JVM that do not
correctly reduce the arguments when using the x86 hardware FSIN
instruction. Much harder to notice now.

Mark Thornton
```
 0

```On Mar 11, 10:33 am, Mark Thornton <mark.p.thorn...@ntlworld.com>
wrote:
> Tom McGlynn wrote:
> > Second, the discussion of how to make trigonometric calculations
> > efficient in Java (e.g., to use FPU hardware), notes that an optimizer
> > is free to substitute an FPU sine call when it knows it will be within
> > the required accuracy.  This suggested to me that it might be possible
> > for the value of the sine function to change during the execution of a
> > program that uses runtime optimization, like HotSpot, giving one
> > answer before the function was optimized and a different answer
> > (presumably 1 ULP away) afterwards.  That sounds a bit weird, and
>
> This sort of behaviour is readily observable on older JVM that do not
> correctly reduce the arguments when using the x86 hardware FSIN
> instruction. Much harder to notice now.
>
> Mark Thornton

Fascinating...  I gather, though, that the older JVMs were incorrect
in that they were giving FPU results that were far more than
one ULP off?  Even so it's strange to think that the program:

public class Test {
    public static void main(String[] args) throws Exception {
        double x = ...;
        double y = Math.sin(x);
        double z = Math.sin(x);
        System.out.println("y==z:" + (y == z));
    }
}

might legally print false if the sine function got optimized between
the two calls!

Tom
```
 0

```Tom McGlynn wrote:
> On Mar 6, 3:35 pm, Patricia Shanahan <p...@acm.org> wrote:
>> Tom McGlynn wrote:
>>> On Mar 6, 11:26 am, Patricia Shanahan <p...@acm.org> wrote:
>>>> Andreas Leitgeb wrote:
>>> ...
>>>>> My guess was, that if one needed math more exact than native
>>>>> processor's arithmetics, he would use "strictfp", so while I
>>>>> don't know for sure, I wouldn't bet on abovementioned information.
>>>> The minimum precision requirements in the Math.sin API documentation are
>>>> not dependent on strictfp.
>>> Perhaps Andreas may be thinking of the StrictMath class rather than
>>> strictfp?  This class duplicates the functionality of Math but
>>> requires that the results be identical to the fdlibm results.  [I
>>> think somewhere else in this thread I also confused this with
>>> strictfp].
>>> It's a bit counterintuitive, but I believe the general effect of
>>> either a strictfp block or use of the StrictMath class is a less
>>> accurate calculation.  They are all about consistency, not accuracy.
>> I think StrictMath can go either way. There are generally two
>> representable numbers that could be a valid Math.sin result for a given
>> angle, because each is within one ULP of the infinite precision answer.
>> Math.sin can return either. StrictMath.sin must return the one fdlibm
>> would pick, which may or may not be the closer of the two.
>>
>> Patricia
>
> You're certainly correct that it's possible in principle to build a
> variant Math library which is legal but less accurate that
> StrictMath.  My sense, though is that it's not actually that easy
> (e.g., see below). So in practice where Math and StrictMath differ
> (I'm not aware of anyplace this has actually been done) my suspicion
> would be that the Math library is likely to be the more accurate
> overall.

I agree with you about "overall".

> First, the problem with using sine hardware is that while it generally
> provides accuracy to better than 10^-16, one needs to compute sines to
> an absolute accuracy of 10^-32 when one considers the regions where
> the sine crosses 0.  That's not a problem at x=0, but it is at other
> integral multiples of pi.  E.g., Math.sin(Math.PI) is of order 10^-16,
> but the Java requirement states that it needs to be within one ULP of
> the correct answer, so it needs an absolute accuracy of 10^-32.

The sine implementations I've seen all begin by normalizing the angle.
The problem you point out puts demands on the accuracy of the normalization.

>
> Second, the discussion of how to make trigonometric calculations
> efficient in Java (e.g., to use FPU hardware), notes that an optimizer
> is free to substitute an FPU sine call when it knows it will be within
> the required accuracy.  This suggested to me that it might be possible
> for the value of the sine function to change during the execution of a
> program that uses runtime optimization, like HotSpot, giving one
> answer before the function was optimized and a different answer
> (presumably 1 ULP away) afterwards.  That sounds a bit weird, and
> perhaps its forbidden somewhere...  The explicit wiggle room given in
> the definition of the Math libraries though makes them a bit different
> from most functions though.

Would changing the Math.sin, for a given input, during a run break the
semi-monotonicity requirement?

"Therefore, most methods with more than 0.5 ulp errors are required to
be semi-monotonic: whenever the mathematical function is non-decreasing,
so is the floating-point approximation, likewise, whenever the
mathematical function is non-increasing, so is the floating-point
approximation."

The mathematical sine of a given angle neither increases nor decreases
from call to call.

Patricia
```
 0
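The semi-monotonicity requirement Patricia quotes can be spot-checked over a stretch where the mathematical sine is increasing (an illustrative sketch, not a proof; class name invented):

```java
public class MonotoneCheck {
    public static void main(String[] args) {
        // Walk 1000 consecutive doubles upward from 1.0 (inside [0, pi/2],
        // where the mathematical sine is increasing) and check that the
        // floating-point approximation never decreases along the way.
        double x = 1.0;
        double prev = Math.sin(x);
        boolean ok = true;
        for (int i = 0; i < 1000; i++) {
            x = Math.nextUp(x);
            double cur = Math.sin(x);
            if (cur < prev) {
                ok = false;
            }
            prev = cur;
        }
        System.out.println("semi-monotonic on sample: " + ok);
    }
}
```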

```On Mar 11, 11:05 am, Patricia Shanahan <p...@acm.org> wrote:
> Tom McGlynn wrote:
> > On Mar 6, 3:35 pm, Patricia Shanahan <p...@acm.org> wrote:
> >> Tom McGlynn wrote:
> >>> On Mar 6, 11:26 am, Patricia Shanahan <p...@acm.org> wrote:
> >>>> Andreas Leitgeb wrote:
> >>> ...
....
>
>
> > Second, the discussion of how to make trigonometric calculations
> > efficient in Java (e.g., to use FPU hardware), notes that an optimizer
> > is free to substitute an FPU sine call when it knows it will be within
> > the required accuracy.  This suggested to me that it might be possible
> > for the value of the sine function to change during the execution of a
> > program that uses runtime optimization, like HotSpot, giving one
> > answer before the function was optimized and a different answer
> > (presumably 1 ULP away) afterwards.  That sounds a bit weird, and
> > perhaps its forbidden somewhere...  The explicit wiggle room given in
> > the definition of the Math libraries though makes them a bit different
> > from most functions though.
>
> Would changing the Math.sin, for a given input, during a run break the
> semi-monotonicity requirement?
>
> "Therefore, most methods with more than 0.5 ulp errors are required to
> be semi-monotonic: whenever the mathematical function is non-decreasing,
> so is the floating-point approximation, likewise, whenever the
> mathematical function is non-increasing, so is the floating-point
> approximation."
>
> The mathematical sine of a given angle neither increases nor decreases
> from call to call.
>
> Patricia

That sounds good to me (coupled with the language that sine is one
of the functions to which this applies). It seems like there's still a
tiny bit of ambiguity at the minima and maxima.  E.g., if we look at
the arguments with values just below and above 2 pi, between those two
values the sine both rises and declines, so I might be able to make an
argument that the exact function is neither non-increasing nor non-
decreasing at those two points and so I'm still allowed to select
either of my two possible values there, i.e., -1 and -1+ULP. In
practice I can't imagine any algorithm that's otherwise valid that
won't give -1 here.

It sounds like the implementations that Mark talked about would clearly
be illegal, though, even had they met the required standard of accuracy.

Regards,
Tom
```
 0

```Lew wrote:
> John W. Kennedy wrote:
>> Lew wrote:
>>> As pointed out quite frequently in these newsgroups, floating point
>>> expressions in a binary device will generally be inaccurate.
>>
>> In this case, binary has nothing to do with it. With binary /or/
>> decimal, no computer offers infinite precision. In fact, all other
>> things being equal, binary is better; decimal is only to be preferred
>> in a computational context with decimal quantization (i.e., finance).
>
> Oo-kaay.
>
> Binariness is not relevant in the abstract, but since this community
> uses Java on binary devices it is relevant in practice.

At present, the devices don't actually matter; Java uses binary by
definition. If (per impossibile) Java were to be implemented on an old
IBM 7080, it would still be required to /seem/ binary.

On the other hand, IEEE-754r is on the way. Some slight support of it is
already available in BigDecimal.
--
John W. Kennedy
"Compact is becoming contract,
Man only earns and pays."
-- Charles Williams.  "Bors to Elayne:  On the King's Coins"
```
 0

```On Thu, 06 Mar 2008 11:35:42 -0800, Patricia Shanahan <pats@acm.org>
wrote, quoted or indirectly quoted someone who said :

>There are generally two
>representable numbers that could be a valid Math.sin result for a given
>angle, because each is within one ULP of the infinite precision answer.

I think of the work I did early in my career designing high voltage
transmission lines.  I would have happily given up a little accuracy
in computing sine for extra speed.  I was computing positions of
swinging insulators to check for sufficient clearance.  3 digits of
precision would have been ample.

You might be tempted in such a situation to use a table lookup for
a fast sine, but it probably would not buy you much since the overhead
for a JNI call is so high.

--

The Java Glossary
http://mindprod.com
```
 0
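For the low-precision, high-speed case Roedy describes, a pure-Java lookup table avoids JNI overhead entirely. A minimal sketch (the class name `FastSin` is made up; 1024 entries with linear interpolation give roughly 5 digits, far more than the 3 digits needed):

```java
final class FastSin {
    private static final int N = 1024;
    private static final double[] TABLE = new double[N + 1];
    static {
        // Precompute one full period of sine at N evenly spaced points.
        for (int i = 0; i <= N; i++) {
            TABLE[i] = Math.sin(2.0 * Math.PI * i / N);
        }
    }
    /** Sine by table lookup with linear interpolation; error is about 5e-6. */
    static double sin(double x) {
        double t = x / (2.0 * Math.PI);
        t -= Math.floor(t);              // wrap the angle into [0, 2*pi)
        double pos = t * N;
        int i = (int) pos;
        double frac = pos - i;
        return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i]);
    }
    public static void main(String[] args) {
        System.out.println(FastSin.sin(Math.PI / 6));  // close to 0.5
    }
}
```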

```On Mon, 10 Mar 2008 03:04:30 -0700 (PDT), Andrew Thompson
<andrewthommo@gmail.com> wrote, quoted or indirectly quoted someone
who said :

>research IEEE 754.

see http://mindprod.com/jgloss/ieee754.html
--

The Java Glossary
http://mindprod.com
```
 0