
### couple of observations, Fortran needs matrix diagonal() function, and determinant()


```For fun, I spent over 3 hrs adding Fortran to one entry
of my HOWTO cheat sheet page. The entry is on how to compute
the adjugate matrix of a square matrix.

The adjugate matrix is defined here

It is basically the transpose of the cofactor matrix. If you
divide it by the determinant of the matrix, you get the
inverse of the matrix.

I thought this would take me maybe 10-20 minutes in Fortran,
but it ended up taking more than 3 hrs. The first reason is
that Fortran does not have a built-in determinant function,
and it also does not have a function to read the diagonal
of a matrix. So I ended up linking to LAPACK's DGETRF() to
do an LU decomposition in order to obtain the determinant
from the PLU result of the decomposition.

But all went well in the end. I got the same results as
shown on the wiki, and the same as Matlab and Mathematica,
tested on a number of matrices, and learned a bit of Fortran.

But my question is: why is there no intrinsic for finding
the determinant of a rank-2 matrix? I find this strange
given that Fortran is for numerical computation and given
that LAPACK does not have a built-in function either
(one needs to use LU and more processing afterwards).

And also there is no function to obtain the diagonal of
a rank-2 array. I see Fortran now has intrinsics for matrix
multiply and transpose. I think a diag() function would be
nice to have, and also a det() function.

I found Fortran array/matrix operations to be flexible
and easy to work with. I just think it needs a few more
intrinsic functions, such as the missing ones above, to make
it 'easier' to do common things. For example, I did this
same operation in Mathematica in only 3 lines, and a solution
in Matlab is about the same.

Here is the Fortran function I wrote. If you find improvements,
please feel free to let me know and I'll correct it.

-------------------------------------
!-- Find the Adjugate of a square matrix
!-- Version 6/22/2012
!-- gfortran 4.6.3
program t44
implicit none

integer, parameter :: n=3
integer :: INFO,i,j,ii,IPIV(n-1),number_row_exchange
real (kind=kind(0.0d0)) :: A(n,n),B(n-1,n-1),C(n,n)
logical :: M(n,n)

A(1,:) = [1,2,3];
A(2,:) = [4,5,6];
A(3,:) = [7,8,9];

!A(1,:) = [-3,2,-5];
!A(2,:) = [-1,0,-2];
!A(3,:) = [3,-4,1];

DO i=1,n
   DO j=1,n

      !build the mask to extract submatrices
      M = .true.
      M(:,j) = .false.
      M(i,:) = .false.
      B = reshape(pack(A, M),[n-1,n-1])

      !-- LU decomposition in order to obtain determinant
      CALL DGETRF( n-1, n-1, B, n-1, IPIV, INFO )

      !-- determinant of U is the product of diagonal
      C(i,j)=1.0d0
      DO ii=1,n-1
         C(i,j)=C(i,j)*B(ii,ii)
      END DO

      !-- Adjust sign based on number of row exchanges
      number_row_exchange = 0
      DO ii=1,n-1
         IF (IPIV(ii) .NE. ii) THEN
            number_row_exchange = number_row_exchange + 1
         END IF
      END DO

      IF (MOD(number_row_exchange,2).EQ.1) THEN
         C(i,j)=-C(i,j)
      END IF

      !-- Apply the (-1)^(i+j) logic
      IF (MOD(i+j,2).EQ.1) THEN
         C(i,j)=-C(i,j)
      END IF

   END DO
END DO

!-- Transpose the matrix of cofactor to obtain the adjugate matrix
C = TRANSPOSE(C)

PRINT *,'C='
write(*,'(3F15.4)')  ( (C(i,j),j=1,n), i=1,n)

end program t44
-----------------------------

export LD_LIBRARY_PATH=/usr/lib/atlas-base/:/usr/lib/atlas-base/atlas/
gfortran -fcheck=all -Wall t44.f90 -L/usr/lib/atlas-base/atlas/ -lblas -llapack

>./a.out
C=
-3.0000         6.0000        -3.0000
6.0000       -12.0000         6.0000
-3.0000         6.0000        -3.0000

FYI, Mathematica has a built-in Cofactor function:
-------------------------
<<"Combinatorica`"
mat      = {{1,2,3},{4,5,6},{7,8,9}};
cof      = Table[Cofactor[mat,{i,j}],{i,3},{j,3}];
------------------------

Out[22]= { {-3,   6, -3},
{ 6, -12,  6},
{-3,   6, -3}}

--Nasser
```
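The PLU-based determinant step described above can also be pulled out into a reusable helper. A minimal sketch, assuming LAPACK's DGETRF is linked as in the build line in the post (the det name and its interface are hypothetical, not intrinsics; callers need an explicit interface, e.g. by placing it in a module):

```fortran
!-- Sketch: determinant via LAPACK LU factorization, as in the post
function det(A0) result(d)
implicit none
real (kind=kind(0.0d0)), intent(in) :: A0(:,:)
real (kind=kind(0.0d0)) :: d
real (kind=kind(0.0d0)), allocatable :: A(:,:)
integer, allocatable :: IPIV(:)
integer :: n, i, INFO

n = size(A0,1)
allocate(A(n,n), IPIV(n))
A = A0                         ! DGETRF overwrites its input
CALL DGETRF( n, n, A, n, IPIV, INFO )

d = 1.0d0
DO i=1,n
   d = d*A(i,i)                ! det(U) is the product of the diagonal
   IF (IPIV(i) .NE. i) d = -d  ! each row exchange flips the sign
END DO
end function det
```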


```In linear algebra computations you almost never use the determinant directly.
While having such a function may be useful in those very rare cases, it also
puts a burden on the library. The problem is: where do you stop? The Fortran
intrinsics could also include a full FFT package or a statistics package for
that matter.

Regards,

Arjen
```
Reply arjen.markus895 (716) 6/22/2012 11:20:56 AM

```On 6/22/2012 6:20 AM, Arjen Markus wrote:
> In linear algebra computations you almost never use the determinant directly.

Well, I use det() a lot in my courses to solve my HWs (or maybe
the teacher just wanted to give us a hard time then :)

For example, to find the characteristic polynomial of A, I'd solve
det(A-I*lambda)=0, or find the area spanned by 3 points, and so
many other examples. But in the real world, you could very well
be correct.

I know that the matrix inverse operation is rarely used
directly, if at all.

Anyway, just thought to suggest it. You are right that one
can't add too many things to the library. But how about the
matrix diagonal? This is very useful and easy to add to the
Fortran intrinsics. Having to make a loop each time to
extract a matrix diagonal is not really fun.

One nice thing about Fortran is that one can link directly to
so much Fortran code out there. There are zillions of Fortran
functions out there to be used. I was browsing Fortran libraries
on the net, and there is so much free Fortran code, it is amazing.

--Nasser
```

```On 6/22/2012 1:20 PM, Arjen Markus wrote:
> In linear algebra computations you almost never use the determinant directly.
> While having such a function may be useful in those very rare cases, it also
> puts a burden on the library. The problem is: where do you stop? The Fortran
> intrinsics could also include a full FFT package or a statistics package for
> that matter.

Det, inverse and FFT are good examples. These are
difficult problems which means that:

1) It is not clear what exactly you want! FFT could
be butterfly-based, or z-chirp, or other things, with
different trade offs. Likewise, inversion could use
simple Gauss elimination, or blocking for speed-up,
or Strassen-like methods.

2) It is often questionable whether you really
need it. Like the inverse (using a conjugate
gradient solver may be superior) or det (time
consuming, and it usually overflows your real!)

It seems best to have those difficult things in
separate packages. If Fortran is just some
language to use for writing algorithms, then
it shouldn't by itself contain all algorithms.

(BTW, confluent hypergeometric functions are
also missing, but six Bessel functions are in.
A similar discussion..)

--
Jos
```
Reply jos.bergervoet (38) 6/22/2012 12:30:59 PM

```On Jun 22, 10:18 pm, "Nasser M. Abbasi" <n...@12000.org> wrote:

> Any way, just thought to suggest it. You are right that one
> can't add too many things to the library. But how about
> matrix diagonal? This is very useful and easy to add
> to Fortran intrinsic? Having to make a loop each time to
> extract a matrix diagonal is not really fun.

It's trivial:

y = (/ (x(i,i), i = 1,n) /)
```
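For repeated use, the same implied-do can be wrapped in a small function. A sketch (the diag name is hypothetical, not an intrinsic; put it in a module so callers see an explicit interface):

```fortran
!-- Sketch: diag() built on the implied-do trick above
function diag(x) result(y)
implicit none
real (kind=kind(0.0d0)), intent(in) :: x(:,:)
real (kind=kind(0.0d0)) :: y(minval(shape(x)))
integer :: i

y = (/ (x(i,i), i = 1, size(y)) /)   ! gather the main diagonal
end function diag
```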
Reply louisa.hutch (142) 6/22/2012 12:44:54 PM

```On 2012-06-22 09:30:59 -0300, Jos Bergervoet said:

> On 6/22/2012 1:20 PM, Arjen Markus wrote:
>> In linear algebra computations you almost never use the determinant directly.
>> While having such a function may be useful in those very rare cases, it also
>> puts a burden on the library. The problem is: where do you stop? The Fortran
>> intrinsics could also include a full FFT package or a statistics package for
>> that matter.
>
> Det, inverse and FFT are good examples. These are
> difficult problems which means that:
>
> 1) It is not clear what exactly you want! FFT could
> be butterfly-based, or z-chirp, or other things, with
> different trade offs. Likewise, inversion could use
> simple Gauss elimination, or blocking for speed-up,
> or Strassen-like methods.

Strassen etc. is not numerically stable. In any case the
speedups do not gain traction until beyond the current
ranges of interest. Large problems tend to be sparse
which is a major topic by itself.

Such choices of technology should not be fixed into the
language.

> 2) It is often questionable whether you really
> need it. Like the inverse (using a conjugate
> gradient solver may be superior) or det (time
> consuming, and it usually overflows your real!)
>
> It seems best to have those difficult things in
> separate packages. If fortran is just some
> language to use for writing algorithms, than
> it shouldn't by itself contain all algorithms.
>
> (BTW, confluent hypergeometric functions are
> also missing, but six Bessel functions are in.
> A similar discussion..)

```
Reply Gordon.Sande1 (252) 6/22/2012 1:14:56 PM

```On 2012-06-22 09:18:24 -0300, Nasser M. Abbasi said:

> On 6/22/2012 6:20 AM, Arjen Markus wrote:
>> In linear algebra computations you almost never use the determinant directly.
>
> Well,  I use det() alot in my courses to solve my HWs (or may be
> the teacher just wanted to give us hard time then :)

Other than in undergraduate courses you will probably never see
a determinant. They live in zoos called undergraduate courses but
not in the wild!

> For example, to find characteristic polynomial of A, I'd solve
> det(A-I*lambda)=0, finding the area spanned by 3 points, and so
> many other examples. But in the real world, you could very well
> be correct.

All sorts of lovely late 1800s linear algebra/geometry can be expressed
in determinants. There are some very thick Dover books on the topics
by (usually) German mathematicians with dates like 1880. What is now
called numerical analysis was never an issue back then. Now one does
QR for what was LU back then. Hand calculation corresponds to infinite
precision which is hard to come by in real computers with their grubby
floating point. Until about 1930 a 10x10 matrix was large but
now 10000x10000 is routine. Old stuff was dense but current stuff
tends to be sparse.

> I know that the matrix inverse operation is not used directly
> if at all.
>
> Any way, just thought to suggest it. You are right that one
> can't add too many things to the library. But how about
> matrix diagonal? This is very useful and easy to add
> to Fortran intrinsic? Having to make a loop each time to
> extract a matrix diagonal is not really fun.

FORALL is the tool that gets you a matrix diagonal. It is an array
assignment with more flexibility than the array notation provides.

> One thing nice about Fortran, is that one can link directly to
> so much Fortran code out there. There are zillions of Fortran
> functions out there to be used. I was browsing Fortran libraries
> on the net, and there are so much free Fortran code, it is amazing.
>
> --Nasser

```
Reply Gordon.Sande1 (252) 6/22/2012 1:14:57 PM

```"Nasser M. Abbasi" <nma@12000.org> wrote in message
news:js1gc6\$k2d\$1@speranza.aioe.org...

> And also there is no function to obtain the diagonal of
> rank 2 array? I see Fortran now has intrinsic for matrix
> multiply, transpose.  I think a diag() function would be
> nice to have, and also a det() function.

> I found Fortran array/matrix operation to be flexible
> and easy to work with. I just think it needs few more
> intrinsic functions such as the above missing ones to make
> it 'easier' to do common things. For example, I did this
> same operation in Mathematica in only 3 lines, a solution in
> Matlab is the same.

> Here is the Fortran function I wrote. If you find improvements
> please feel free to let me know and I'll correct

> -------------------------------------
> !-- Find the Adjugate of a square matrix
> !-- Version 6/22/2012
> !-- gfortran 4.6.3
> program t44
> implicit none

> integer, parameter :: n=3
> integer :: INFO,i,j,ii,IPIV(n-1),number_row_exchange
> real (kind=kind(0.0d0)) :: A(n,n),B(n-1,n-1),C(n,n)
> logical :: M(n,n)

> A(1,:) = [1,2,3];
> A(2,:) = [4,5,6];
> A(3,:) = [7,8,9];

A = reshape([ &
1, 2, 3, &
4, 5, 6, &
7, 8, 9], &
shape(A), order = [2,1])

> !A(1,:) = [-3,2,-5];
> !A(2,:) = [-1,0,-2];
> !A(3,:) = [3,-4,1];

> DO i=1,n
>    DO j=1,n

>       !build the mask to extract submatrices
>       M = .true.
>       M(:,j) = .false.
>       M(i,:) = .false.
>       B = reshape(pack(A, M),[n-1,n-1])

Elegant!  But you could have reversed the meaning of i and
j here and avoided the need for a transpose later.

>       !-- LU decomposition in order to obtain determinant
>       CALL DGETRF( n-1, n-1, B, n-1, IPIV, INFO )

>       !-- determinant of U is the product of diagonal
>       C(i,j)=1.0d0
>       DO ii=1,n-1
>          C(i,j)=C(i,j)*B(ii,ii)
>       END DO

C(i,j) = product([(B(ii,ii),ii=1,n-1)])

>       !-- Adjust sign based on number of row exchanges
>       number_row_exchange = 0
>       DO ii=1,n-1
>          IF (IPIV(ii) .NE.  ii) THEN
>             number_row_exchange = number_row_exchange +1
>          END IF
>       END DO

number_row_exchange = count(ipiv/=[(ii,ii=1,n-1)])

>       IF (MOD(number_row_exchange,2).EQ.1) THEN
>           C(i,j)=-C(i,j)
>       END IF

>       !-- Apply the (-1)^(i+j) logic
>       IF (MOD(i+j,2).EQ.1) THEN
>          C(i,j)=-C(i,j)
>       END IF

C(i,j) = C(i,j)*(1-2*modulo(number_row_exchange+i+j,2))

>    END DO
> END DO

> !-- Transpose the matrix of cofactor to obtain the adjugate matrix
> C = TRANSPOSE(C)

OK

> PRINT *,'C='
> write(*,'(3F15.4)')  ( (C(i,j),j=1,n), i=1,n)

write(*,'(3F15.4)') transpose(C) ! Write out C in row-major form

> end program t44
> -----------------------------

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end

```
Reply not_valid (1693) 6/22/2012 1:19:07 PM

```On 6/22/2012 7:44 AM, Louisa wrote:
> On Jun 22, 10:18 pm, "Nasser M. Abbasi" <n...@12000.org> wrote:
>
>> Any way, just thought to suggest it. You are right that one
>> can't add too many things to the library. But how about
>> matrix diagonal? This is very useful and easy to add
>> to Fortran intrinsic? Having to make a loop each time to
>> extract a matrix diagonal is not really fun.
>
> It's trivial:
>
> y = (/ (x(i,i), i = 1,n) /)
>

This is nice. I did not know I could do that. (I am still
learning Fortran.)

I changed the code using the above trick. Compare the difference
now. Here I wanted to find the product of the matrix diagonal.
The matrix is of size N.

-----------before-----------
C(i,j)=1.0d0

DO ii=1,N
C(i,j)=C(i,j)*B(ii,ii)
END DO

--- now -------------
C(i,j)= PRODUCT( (/ (B(ii,ii), ii = 1,N) /) )

Much better, thanks.
--Nasser

```

```On 06/22/2012 02:30 PM, Jos Bergervoet wrote:
> (BTW, confluent hypergeometric functions are
> also missing, but six Bessel functions are in.
> A similar discussion..)

I found it surprising how long it took until the error function became
officially supported, given that most compilers supported it for ages.

Regarding more special functions, C and C++ have produced some standard
document which adds some more functions. My impression is that most
systems do not support those. In any case, the Fortran committee is
pondering whether it should add them as well or not. See
ftp://ftp.nag.co.uk/sc22wg5/N1901-N1950/N1921.pdf

Though that proposal seems to only add Laguerre, Legendre, Beta, Bessel
(spherical/cylindrical), Hermite, Zeta and Neumann functions and
elliptic integrals.

You could lobby at J3 for adding confluent hypergeometric functions, if
you think that such an addition makes sense.

Tobias
```
Reply burnus (564) 6/22/2012 1:31:07 PM

```"Gordon Sande" <Gordon.Sande@gmail.com> wrote in message
news:js1r4h\$hc1\$2@dont-email.me...

> On 2012-06-22 09:18:24 -0300, Nasser M. Abbasi said:

> Other than in undergraduate courses you will probably never see
> a determinant. They live in zoos called undergraduate courses but
> not in the wild!

>> I know that the matrix inverse operation is not used directly
>> if at all.

>> Any way, just thought to suggest it. You are right that one
>> can't add too many things to the library. But how about
>> matrix diagonal? This is very useful and easy to add
>> to Fortran intrinsic? Having to make a loop each time to
>> extract a matrix diagonal is not really fun.

> FORALL is the tool that gets you a matrix diagonal. It is an array
> assignment with more flexability than the array notation provides.

It seems to me that FORALL sets, rather than gets the diagonal.
E.g.

A = 0
FORALL(i=1:size(A,1)) A(i,i) = D(i)

looks a lot simpler than the equivalent DO loop.
But how would you compute trace(A) with a parallel construct like
FORALL?

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end

```
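For the trace question: the reduction itself does not need FORALL at all; SUM over an implied-do already does it. A minimal sketch:

```fortran
program trace_demo
implicit none
integer, parameter :: n=3
integer :: i
real (kind=kind(0.0d0)) :: A(n,n), tr

! A filled column-major with 1..9, so the diagonal is 1, 5, 9
A = reshape([(real(i,kind(0.0d0)), i=1,n*n)], [n,n])
tr = sum( (/ (A(i,i), i=1,n) /) )   ! trace = 1 + 5 + 9
print *, tr                          ! prints 15.0
end program trace_demo
```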
Reply not_valid (1693) 6/22/2012 1:31:35 PM

```On 06/22/2012 12:11 PM, Nasser M. Abbasi wrote:
> And also there is no function to obtain the diagonal of
> rank 2 array?

I am not claiming that the following is very elegant or that one should
use pointers, but it is one way to get access to the diagonal of the
matrix via a pointer and pointer-rank remapping. The program prints as
expected: "1 5 9".

Tobias

integer, parameter :: n = 3
integer, target :: chunk(n*n)
integer, pointer :: matrix(:,:), diag(:)

chunk = [1,2,3,4,5,6,7,8,9]

matrix(1:n,1:n) => chunk
diag => chunk(::n+1)

print *, diag
end
```
Reply burnus (564) 6/22/2012 1:40:51 PM

```On 22/06/2012 14:14, Gordon Sande wrote:
> Other than in undergraduate courses you will probably never see
> a determinant. They live in zoos called undergraduate courses but
> not in the wild!

I certainly remember having to grapple with them as an undergraduate,
but not having found a need to do that at any time since.  Determinants
do seem to have the aura of the 19th century about them.

It is said that after Lewis Carroll wrote "Alice's Adventures in
Wonderland", it so charmed Queen Victoria that she invited him to tea at
Buckingham Palace and asked him to send her a copy of his next book.
He agreed, and so a year or so later there was delivered to the Palace a
small packet containing a copy of "An Elementary Treatise on
Determinants, With Their Application to Simultaneous Linear Equations
and Algebraic Equations".  Carroll was, of course, the pen name of a
maths don at Oxford.  The book may still be relevant, for all I know.

--
Clive Page
```
Reply usenet1820 (75) 6/22/2012 1:46:44 PM

```Thanks James for your review as well. I incorporated your
suggestions; the MODULO is interesting but a bit confusing
to understand, and I did not know it existed. I replaced MOD
with it as you suggested. Below is the current version. So much
smaller than before!
ps. I kept the matrix initialization as is, since I find this way
easier to read, and it is the same number of lines.

---------------------------------------
!Fortran program to find matrix adjugate, version 6/22/2012, 9 AM
program t44
implicit none

integer, parameter :: n=3
integer :: INFO,i,j,ii,IPIV(n-1),number_row_exchange
real (kind=kind(0.0d0)) :: A(n,n),B(n-1,n-1),C(n,n)
logical :: M(n,n)

A(1,:) = [1,  2,  3];
A(2,:) = [4,  5,  6];
A(3,:) = [7,  8,  9];

DO j=1,n
   DO i=1,n

      M = .true.
      M(:,j) = .false.
      M(i,:) = .false.
      B = reshape(pack(A, M),[n-1,n-1])

      !-- LU decomposition in order to obtain determinant
      CALL DGETRF( n-1, n-1, B, n-1, IPIV, INFO )

      !-- determinant of U is the product of diagonal
      C(i,j)= PRODUCT( [(B(ii,ii),ii=1,n-1)] )

      !-- Adjust sign based on number of row exchanges and (-1)^(i+j)
      number_row_exchange = count(IPIV /= [(ii,ii=1,n-1)])
      C(i,j) = C(i,j)*(1-2*MODULO(number_row_exchange+i+j,2))

   END DO
END DO
write(*,'(3F15.4)') C
end program t44
---------------------------

export LD_LIBRARY_PATH=/usr/lib/atlas-base/:/usr/lib/atlas-base/atlas/
gfortran -fcheck=all -Wall t44.f90 -L/usr/lib/atlas-base/atlas/ -lblas -llapack

>./a.out
C=
-3.0000         6.0000        -3.0000
6.0000       -12.0000         6.0000
-3.0000         6.0000        -3.0000

--Nasser
```

```On 22/06/12 14:31, James Van Buskirk wrote:
> "Gordon Sande"<Gordon.Sande@gmail.com>  wrote in message
> news:js1r4h\$hc1\$2@dont-email.me...
>
>> On 2012-06-22 09:18:24 -0300, Nasser M. Abbasi said:
>
>> Other than in undergraduate courses you will probably never see
>> a determinant. They live in zoos called undergraduate courses but
>> not in the wild!
>
>

In building up the theory, yes, especially for CI and related, but with
the big caveat that I've never worked on anything post-Hartree-Fock, I
don't think they are used much (directly) when actually implementing it.

I've come across a use in Berry phase stuff, but that's about it.

Ian
```

```"Nasser M. Abbasi" <nma@12000.org> wrote in message
news:js1u8n\$o0s\$1@speranza.aioe.org...

> Thanks James for your review as well. I incorporated your
> suggestions, the MODULO is interesting but a bit confusing
> to understand and did not know it existed. I replace MOD with
> it as you suggested. Below is the currect version. So much smaller
> than before !

MODULO is great because MODULO(A,P) maps A into its P
congruence classes mod P, whereas there are 2*P-1 possible
results for MOD(A,P), often requiring further processing to
do what MODULO achieves in a single step.  Kind of like
ATAN2(y,x) vs. ATAN(y/x).  In this case you already know that
A >= 0, however the compiler knows that it can replace

MODULO(n,2)

with

IAND(n,1)

without having to do analysis to show that A >= 0 so you
may get the benefits of IAND(n,1) without its unreadability.

> ps. I kept matrix initialization as is, since I find this way
> easier to read, and it is the same number of lines.

In this case the line-by-line initialization works, but if
you wanted to write

real(kind=kind(0.0d0)) A(3,3) = reshape([ &
1, 2, 3, &
4, 5, 6, &
7, 8, 9], &
[3,3], order = [2,1])

which you would have to do if A were a named constant. In that
case you need the RESHAPE syntax and the ORDER optional argument.

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end

```
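The MOD/MODULO difference only shows up when an argument is negative; a quick check:

```fortran
program mod_demo
implicit none
print *, MOD(7,2),  MODULO(7,2)    ! 1  1   (same for non-negative A)
print *, MOD(-7,2), MODULO(-7,2)   ! -1  1  (MODULO stays in 0..P-1)
end program mod_demo
```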
Reply not_valid (1693) 6/22/2012 2:53:28 PM

```On 6/22/2012 4:53 PM, James Van Buskirk wrote:
> "Nasser M. Abbasi" <nma@12000.org> wrote in message
..
>> ps. I kept matrix initialization as is, since I find this way
>> easier to read, and it is the same number of lines.
>
> In this case the line-by-line initialization works, but if
> you wanted to write
>
> real(kind=kind(0.0d0)) A(3,3) = reshape([ &
>     1, 2, 3, &
>     4, 5, 6, &
>     7, 8, 9], &
>     [3,3], order = [2,1])
>
> Which you would have to do if A were a named constant, then
> you need the RESHAPE syntax, and the ORDER optional argument

Why would line-by-line become a problem for this
but real(kind(1d0)) by itself seems OK to me..)

--
Jos

```
Reply jos.bergervoet (38) 6/22/2012 3:18:35 PM

```"Jos Bergervoet" <jos.bergervoet@xs4all.nl> wrote in message
news:4fe48ccb\$0\$6963\$e4fe514c@news2.news.xs4all.nl...

> On 6/22/2012 4:53 PM, James Van Buskirk wrote:

>> In this case the line-by-line initialization works, but if
>> you wanted to write

>> real(kind=kind(0.0d0)) A(3,3) = reshape([ &
>>     1, 2, 3, &
>>     4, 5, 6, &
>>     7, 8, 9], &
>>     [3,3], order = [2,1])

>> Which you would have to do if A were a named constant, then
>> you need the RESHAPE syntax, and the ORDER optional argument

> Why would line-by-line become a problem for this
> but real(kind(1d0)) by itself seems OK to me..)

Initializers have to be a single expression.  The line-by-line
method required 3 expressions, one per row.  You could write it
out with DATA statements or with assignment statements in the
executable part of the program unit, but assignment statements
mean a different thing than initializers.  Also default
initialization of structure components requires an initializer.
There was an error in my code in that I forgot the double colons.
Here is something that actually compiles:

real(kind=kind(0.0d0)) :: A(3,3) = reshape([ &
1, 2, 3, &
4, 5, 6, &
7, 8, 9], &
[3,3], order = [2,1])
real(kind(A)) B(3,3)
data B(1,:)/1,2,3/
data B(2,:)/4,5,6/
data B(3,:)/7,8,9/
end

If you meant that the O.P.'s code didn't require an initializer,
then yes, I agree, as does the compiler, that this is the case. My
remark was intended to be informative in case the reader
encountered an instance where an initializer was preferred or
required, how to make the initialization expression look more
like the way the matrix might be written out in a book.

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end

```
Reply not_valid (1693) 6/22/2012 3:46:09 PM

```In article <js1u8n\$o0s\$1@speranza.aioe.org>,
"Nasser M. Abbasi" <nma@12000.org> wrote:

> DO j=1,n
>     DO i=1,n
>
>        M = .true.
>        M(:,j) = .false.
>        M(i,:) = .false.
>        B = reshape(pack(A, M),[n-1,n-1])
>
>        !-- LU decomposition in order to obtain determinant
>        CALL DGETRF( n-1, n-1, B, n-1, IPIV, INFO )
>
>        !-- determinant of U is the product of diagonal
>        C(i,j)= PRODUCT( [(B(ii,ii),ii=1,n-1)] )
>
>        !-- Adjust sign based on number of row exchanges and (-1)^(i+j)
>        number_row_exchange = count(IPIV /= [(ii,ii=1,n-1)])
>        C(i,j) = C(i,j)*(1-2*MODULO(number_row_exchange+i+j,2))
>
>     END DO
> END DO

I think if you do rough operation counts, you will see that the call
to DGETRF requires (n-1)^3 operations, and it is within two nested
do loops, resulting in an overall n^5 computational effort.

What you are computing is the determinant of a matrix times its
inverse.  Computing the inverse of a dense matrix requires only n^3
effort, and the determinant of the matrix can be obtained for free
during that operation.

So you are using an n^5 algorithm to compute something that should
require only n^3 effort.

This is one of the differences between formal methods, including
those based on determinants, minors, and cofactors, and real
computational methods that people actually use.

Another common problem with computing determinants is that they
often overflow or underflow the range of floating point variables.
Some libraries, including LINPACK (and also LAPACK, I think, but I'm
not sure), compute determinant values by returning two floating
point values, one for the mantissa (the significant digits) and
another one separately for the exponent.  The adjugate that you are
computing has this same problem.  That is why after you finish your
course, you will probably never see or need to compute one again.

You need to learn the formal stuff, so you have to go through this
step, but you are only taking the first step.  The interesting and
useful stuff will come next.

\$.02 -Ron Shepard
```
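The n^3 route Ron describes can be sketched with LAPACK's DGETRF/DGETRI pair (this glue code is my own, not from the thread, and it fails when A is singular, unlike the cofactor version):

```fortran
!-- Sketch: adjugate(A) = det(A)*inverse(A), an O(n^3) approach
program adj_n3
implicit none
integer, parameter :: n=3
integer :: IPIV(n), INFO, i
real (kind=kind(0.0d0)) :: A(n,n), d, WORK(64*n)

A(1,:) = [-3, 2,-5]
A(2,:) = [-1, 0,-2]
A(3,:) = [ 3,-4, 1]

CALL DGETRF( n, n, A, n, IPIV, INFO )   ! PLU factorization
d = 1.0d0
DO i=1,n
   d = d*A(i,i)                         ! det from diagonal of U
   IF (IPIV(i) .NE. i) d = -d           ! row exchanges flip the sign
END DO
CALL DGETRI( n, A, n, IPIV, WORK, size(WORK), INFO )  ! A := inverse(A)
A = d*A                                 ! A now holds the adjugate
write(*,'(3F10.4)') transpose(A)        ! print in row-major form
end program adj_n3
```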
Reply ron-shepard (1197) 6/22/2012 4:43:06 PM

```"Ron Shepard" <ron-shepard@NOSPAM.comcast.net> wrote in message
news:ron-shepard-F447F0.11430622062012@news60.forteinc.com...

> I think if you do rough operation counts, you will see that the call
> to DGETRF requires (n-1)^3 operations, and it is within two nested
> do loops, resulting in an overall n^5 computational effort.

> What you are computing is the determinant of a matrix times its
> inverse.  Computing the inverse of a dense matrix requires only n^3
> effort, and the determinant of the matrix can be obtained for free
> during that operation.

But if the matrix is singular, the algorithm as posted works but
computing the inverse fails.

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end

```
Reply not_valid (1693) 6/22/2012 4:54:35 PM

```On 6/22/2012 11:43 AM, Ron Shepard wrote:

>
> You need to learn the formal stuff, so you have to go through this
> step, but you are only taking the first step.  The interesting and
> useful stuff will come next.
>
> \$.02 -Ron Shepard
>

I have no idea what all this mumbo jumbo means.

How else will you compute the adjugate matrix of this matrix?

A = [2 2;
     2 2]

Naive application of what you say results in

------------------------------
A=[2 2;
2 2]

inv(A)*det(A)
Warning: Matrix is singular to working precision.

ans =
NaN   NaN
NaN   NaN
------------------

With what I showed, you obtain the correct result

2  -2
-2   2

Having a correct algorithm that works all the time is
more important than having a fast one that works sometimes
and fails some other times.

--Nasser
```

```On Friday, June 22, 2012 7:11:06 PM UTC+9, Nasser M. Abbasi wrote:
>Thanks James for your review as well. I incorporated your
>suggestions, the MODULO is interesting but a bit confusing
>to understand and did not know it existed.

Nasser,

Judging by this and previous comments, I think you would do yourself a Big
Favour by carefully reading through a complete list of all the intrinsic
procedures. You will find it in any book or in the standard itself. You seem
to be working on problems for which they were designed.
Regards,

Mike Metcalf
```
Reply michaelmetcalf (810) 6/23/2012 4:03:14 AM

```On 6/23/2012 2:25 AM, Nasser M. Abbasi wrote:
..
> Having a correct algorithm that works all the time is
> more important than having a fast one that works sometimes
> and fails some other times.

is slow, works sometimes and fails some other times!

But for those failures it has a beautiful mechanism to
throw exceptions. :-)

--
Jos

```
Reply jos.bergervoet (38) 6/23/2012 6:23:40 AM

```On 2012-06-22 13:43:06 -0300, Ron Shepard said:

> In article <js1u8n\$o0s\$1@speranza.aioe.org>,
>  "Nasser M. Abbasi" <nma@12000.org> wrote:
>
>> DO j=1,n
>> DO i=1,n
>>
>> M = .true.
>> M(:,j) = .false.
>> M(i,:) = .false.
>> B = reshape(pack(A, M),[n-1,n-1])
>>
>> !-- LU decomposition in order to obtain determinant
>> CALL DGETRF( n-1, n-1, B, n-1, IPIV, INFO )
>>
>> !-- determinant of U is the product of diagonal
>> C(i,j)= PRODUCT( [(B(ii,ii),ii=1,n-1)] )
>>
>> !-- Adjust sign based on number of row exchanges and (-1)^(i+j)
>> number_row_exchange = count(IPIV /= [(ii,ii=1,n-1)])
>> C(i,j) = C(i,j)*(1-2*MODULO(number_row_exchange+i+j,2))
>>
>> END DO
>> END DO
>
> I think if you do rough operation counts, you will see that the call
> to DGETRF requires (n-1)^3 operations, and it is within two nested
> do loops, resulting in an overall n^5 computational effort.
>
> What you are computing is the determinant of a matrix times its
> inverse.  Computing the inverse of a dense matrix requires only n^3
> effort, and the determinant of the matrix can be obtained for free
> during that operation.
>
> So you are using an n^5 algorithm to compute something that should
> require only n^3 effort.
>
> This is one of the differences between formal methods, including
> those based on determinants, minors, and cofactors, and real
> computational methods that people actually use.
>
> Another common problem with computing determinants is that they
> often overflow or underflow the range of floating point variables.
> Some libraries, including LINPACK (and also LAPACK, I think, but I'm
> not sure), compute determinant values by returning two floating
> point values, one for the mantissa (the significant digits) and
> another one separately for the exponent.  The adjugate that you are
> computing has this same problem.  That is why after you finish your
> course, you will probably never see or need to compute one again.
>
> You need to learn the formal stuff, so you have to go through this
> step, but you are only taking the first step.  The interesting and
> useful stuff will come next.
>
> \$.02 -Ron Shepard

For a nice survey of computing the adjugate, try Pete Stewart's
paper, which is offered at CiteSeer. You can also get it at CS U. Maryland
from his ftp site. Nice numerical analysis, including algorithms
that are more like inverting via an LDU or SVD factoring.

to find it again. I see that it has several citations from more modern
numerical analysis books.

Not the kind of stuff that shows up in a first course in either programming
or linear algebra/numerical analysis. That is a hint for what the OP can
find when he gets around to more advanced material. The old 1880s stuff is
still very solid but some techniques have advanced and it all has much
better exposition from new/better terminology that comes from newer viewpoints.

```
 0
Reply Gordon.Sande1 (252) 6/23/2012 7:09:55 PM
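Ron Shepard's two points above (the determinant falls out of a single factorization, and raw determinant values easily overflow) can be sketched together. The following is an illustrative pure-Python sketch, not anyone's actual code from the thread: it computes the determinant from an LU factorization with partial pivoting, and accumulates the result LINPACK-style as a (mantissa, exponent) pair so that very large or very small determinants cannot overflow or underflow.

```python
import math

def det_lu(a):
    """Determinant via LU factorization with partial pivoting.

    Returns (mantissa, exponent) with det = mantissa * 2**exponent and
    0.5 <= |mantissa| < 1 (or mantissa == 0), LINPACK-style, so the
    determinant of a large matrix cannot overflow or underflow.
    Illustrative sketch only; real code would call LAPACK's DGETRF.
    """
    n = len(a)
    a = [row[:] for row in a]          # work on a copy
    sign = 1.0
    for k in range(n):
        # partial pivoting: bring the largest remaining pivot to row k
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0.0:
            return 0.0, 0              # singular matrix
        if p != k:
            a[k], a[p] = a[p], a[k]
            sign = -sign               # each row swap flips the sign
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k + 1, n):
                a[i][j] -= m * a[k][j]
    # det(A) = sign * product of U's diagonal, accumulated in
    # (mantissa, exponent) form to dodge overflow/underflow
    mant, expo = sign, 0
    for k in range(n):
        mant *= a[k][k]
        m, e = math.frexp(mant)
        mant, expo = m, expo + e
    return mant, expo

m, e = det_lu([[4.0, 3.0], [6.0, 3.0]])
print(m * 2**e)   # -6.0
```

The two-part return value is the design point: the product of the diagonal can span thousands of binary orders of magnitude even when each pivot is modest, and renormalizing with `frexp` after every multiply keeps each intermediate in range.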

```On 6/23/12 2:09 PM, Gordon Sande wrote:
> On 2012-06-22 13:43:06 -0300, Ron Shepard said:
>
>> In article <js1u8n\$o0s\$1@speranza.aioe.org>,
>> "Nasser M. Abbasi" <nma@12000.org> wrote:
>>
>>> DO j=1,n
>>> DO i=1,n
>>>
>>> M = .true.
>>> M(:,j) = .false.
>>> M(i,:) = .false.
>>> B = reshape(pack(A, M),[n-1,n-1])
>>>
>>> !-- LU decomposition in order to obtain determinant
>>> CALL DGETRF( n-1, n-1, B, n-1, IPIV, INFO )
>>>
>>> !-- determinant of U is the product of diagonal
>>> C(i,j)= PRODUCT( [(B(ii,ii),ii=1,n-1)] )
>>>
>>> !-- Adjust sign based on number of row exchanges and (-1)^(i+j)
>>> number_row_exchange = count(IPIV /= [(ii,ii=1,n-1)])
>>> C(i,j) = C(i,j)*(1-2*MODULO(number_row_exchange+i+j,2))
>>>
>>> END DO
>>> END DO
>>
>> I think if you do rough operation counts, you will see that the call
>> to DGETRF requires (n-1)^3 operations, and it is within two nested
>> do loops, resulting in an overall n^5 computational effort.
>>
>> What you are computing is the determinant of a matrix times its
>> inverse. Computing the inverse of a dense matrix requires only n^3
>> effort, and the determinant of the matrix can be obtained for free
>> during that operation.
>>
>> So you are using an n^5 algorithm to compute something that should
>> require only n^3 effort.
>>
>> This is one of the differences between formal methods, including
>> those based on determinants, minors, and cofactors, and real
>> computational methods that people actually use.
>>
>> Another common problem with computing determinants is that they
>> often overflow or underflow the range of floating point variables.
>> Some libraries, including LINPACK (and also LAPACK, I think, but I'm
>> not sure), compute determinant values by returning two floating
>> point values, one for the mantissa (the significant digits) and
>> another one separately for the exponent. The adjugate that you are
>> computing has this same problem. That is why after you finish your
>> course, you will probably never see or need to compute one again.
>>
>> You need to learn the formal stuff, so you have to go through this
>> step, but you are only taking the first step. The interesting and
>> useful stuff will come next.
>>
>> \$.02 -Ron Shepard
>
> For a nice survey of computing the adjugate try Pete Stewart's
> paper, which CiteSeer offers. You can also get it at CS U. Maryland
> from his ftp site. Nice numerical analysis including algorithms
> that are more like inverting via an LDU or SVD factoring.
>
> It took some searching to find it again. I see that it has several
> citations from more modern numerical analysis books.
>
> Not the kind of stuff that shows up in a first course in either programming
> or linear algebra/numerical analysis. That is a hint for what the OP can
> find when he gets around to more advanced material. The old 1880s stuff is
> still very solid but some techniques have advanced and it all has much
> better exposition from new/better terminology that comes from newer
> viewpoints.
>
Three observations on this topic.

1)  There originally was a "determinant" function (I don't remember its
exact name), a diagonal function, and a matrix "divide" operator (solved
linear equations: A*x=B => x = B"/"A) in the mid-80s versions of Fortran
90.  The diagonal function was removed as part of one of the language
simplification programs (it is a trivial thing to compute by hand).  The
other two were removed when approximately every numerical analyst in the
world came and told X3J3 not to do it.  Basically, doing either involves
inverting a matrix and that's hard to do.  Every physically interesting
matrix has an adjective in front of it (tri-diagonal, sparse, Hermitian,
Hamiltonian, positive definite, ill-conditioned, ...) and, for
interesting (i.e., large) matrices, the solution methods are radically
different, hard to do, and involve serious round-off and cancellation
problems that make the results meaningless if you do it wrong.  If
people don't understand the problems, then the language shouldn't point
a gun at their foot; if they do understand the problems, then the
language ....

2)  The man from ICL (Alan Wilson) who was developing matrix/array
library functions for their unusual distributed array processor (DAP)
said he spent much of his time reading 100 year old books on
matrix-algebra-ish topics.  The old algorithms (mostly) worked very well
on ICL hardware; not very well on their competitors'.  ICL went out of
business.

3)  There is a matrix with terms something like A(I,J) = 1/((i+1)*(j+1)),
where the "+" might be "-" and the "/" might be in a different place.
Does anybody remember its name?  It's interesting in the sense that
it's hard to invert.  Google doesn't help me.

Dick Hendrickson
```
 0
Reply dick.hendrickson (1286) 6/23/2012 11:41:02 PM

```Dick Hendrickson <dick.hendrickson@att.net> wrote:

> 3)  There is a matrix with terms something like A(I,J) = 1/((i+1)*(j+1)),
> where the "+" might be "-" and the "/" might be in a different place.
> Does anybody remember its name?  It's interesting in the sense that
> it's hard to invert.  Google doesn't help me.

I suspect you are talking about a Hilbert matrix. I was having trouble
remembering the name also. Thought I recalled that it began with H.
Things like Hermitian and Hamiltonian kept popping into my mind, but no,
those aren't right. Finally, Hilbert occurred to me. (And once you have
the name, Google is much more help - or just go straight to the
Wikipedia entry).

--
Richard Maine                    | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle           |  -- Mark Twain
```
 0
Reply nospam47 (9744) 6/24/2012 12:17:48 AM
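The Hilbert matrix Dick and Richard pin down above, H(i,j) = 1/(i+j-1), is easy to play with in exact rational arithmetic, which sidesteps the conditioning problem entirely and shows just how fast its determinant collapses toward zero. A stdlib-only sketch (helper names are made up for illustration):

```python
from fractions import Fraction

def hilbert(n):
    """The n x n Hilbert matrix, H[i][j] = 1/(i + j + 1) (0-based)."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det_exact(a):
    """Exact determinant by Gaussian elimination over the rationals --
    no rounding, so the notorious ill-conditioning never bites."""
    n = len(a)
    a = [row[:] for row in a]
    det = Fraction(1)
    for k in range(n):
        if a[k][k] == 0:               # never happens for Hilbert
            return Fraction(0)
        det *= a[k][k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k + 1, n):
                a[i][j] -= m * a[k][j]
    return det

for n in (3, 5, 8):
    d = det_exact(hilbert(n))
    print(n, d, float(d))
# det(H3) = 1/2160; det(H5) is already ~3.75e-12
```

The exact values make the "hard to invert" point concrete: in double precision the determinant of even a modest Hilbert matrix is smaller than the rounding noise in its entries.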

```Dick Hendrickson <dick.hendrickson@att.net> wrote:

(snip)
> Three observations on this topic.

> 1)  There originally was a "determinant" function (I don't remember its
> exact name), a diagonal function, and a matrix "divide" operator (solved
> linear equations: A*x=B => x = B"/"A) in the mid-80s versions of Fortran
> 90.  The diagonal function was removed as part of one of the language
> simplification programs (it is a trivial thing to compute by hand).  The
> other two were removed when approximately every numerical analyst in the
> world came and told X3J3 not to do it.  Basically, doing either involves
> inverting a matrix and that's hard to do.

I think I agree, but it is interesting that many versions
of BASIC have a matrix inversion operation. Maybe not so bad,
though, as it is more usual to use smaller matrices in BASIC
than in Fortran.

-- glen
```
 0
Reply gah (12850) 6/24/2012 12:39:27 AM

```In article <js32ei\$en8\$1@speranza.aioe.org>,
"Nasser M. Abbasi" <nma@12000.org> wrote:

> On 6/22/2012 11:43 AM, Ron Shepard wrote:
>
> >
> > You need to learn the formal stuff, so you have to go through this
> > step, but you are only taking the first step.  The interesting and
> > useful stuff will come next.
> >
> > \$.02 -Ron Shepard
> >
>
> I have no idea what all this mumble jumble means.
>
> How else will you compute the adjugate matrix of this matrix?
>
> A = [2 2;
>      2 2]
>
> Naive application of what you say results in
>
> ------------------------------
> A=[2 2;
>    2 2]
>
> inv(A)*det(A)
> Warning: Matrix is singular to working precision.

Yes, you have a zero/zero situation.  If you are seriously asking
how to do this computation with an n^3 algorithm rather than the
n^5, then there are probably many ways.  One of the simplest to
understand is based on the singular value decomposition of the
matrix.

A = U.sigma.V^T

where U and V are orthogonal and sigma is diagonal with nonnegative
entries.  The SVD of a matrix requires only n^3 effort.  So given
the SVD of A, how do you compute the determinant and the inverse?

Det(A) = Det(U.sigma.V^T)
= Det(U)*Det(sigma)*Det(V)
= Det(sigma)
= Product(sigma(k,k), k=1,n)

Inverse(A) = V.Inverse(sigma).U^T
= Sum(j) (1/sigma(j,j)) * v(:,j) * u(:,j)^T

For a nonsingular matrix, all of the v(:,j) and u(:,j) vectors are
defined, and the adjugate is

Adjugate(A) = Det(A) * Inverse(A)
= Sum(j) (Product(sigma(k,k), k=1,n)/sigma(j,j))
* v(:,j) * u(:,j)^T

But consider the situation where limit sigma(m,m)-->0 for some m.
Now, each term in that summation has a zero coefficient except for
j=m.  The sigma(m,m) term appears in both the numerator and the
denominator, so they cancel.  The coefficient is then the product of
the remaining nonzero singular values times the v(:,m)*u(:,m)^T
outer product.

What happens when there are two or more zero singular values?  Each
term in the summation is then zero, so the adjugate matrix itself ends
up being zero.  The denominator can cancel one of the zero factors in
the numerator, but it cannot cancel the other zero factors.

I don't think that SVD is the only way to handle these singular
situations, it was just the easiest way to explain how those
zero/zero factors cancel.  I think the same thing can be achieved
with LU factorization (that is what DGETRF does), or QR
factorization, or probably other factorizations too.  LU
factorization is cheaper than SVD, although they both require n^3
effort.

>
> ans =
>      NaN   NaN
>      NaN   NaN
> ------------------
>
> With what I showed, you obtain the correct result
>
>      2  -2
>     -2   2
>
> Having a correct algorithm that works all the time is
> more important than having a fast one that works sometimes
> and fails at other times.

Well yes, but there still is no need for an n^5 algorithm, there are
n^3 ways to handle the general case, and also to detect singular
matrices and treat those special cases appropriately.  One of the
problems with numerical analysis these days is that it is so easy to
access a library and do something with an algorithm that scales badly
without even noticing.  If instead of writing those two loops you were
doing things by hand, say with pencil and paper, then it would be very
clear to you what the difference between n^5 and n^3 is, and it would
never even occur to you to try to justify the n^5 method; you would
instead try everything you could to get the n^3 method to work.  For a
10x10 matrix, that is a factor of 100 difference.

\$.02 -Ron Shepard
```
 0
Reply ron-shepard (1197) 6/24/2012 8:08:26 AM
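Ron's limit argument above, that the vanishing singular value cancels between det(A) and inv(A), can be seen without any SVD machinery by perturbing Nasser's [2 2; 2 2] example. A hedged 2x2 sketch (the function name is invented for illustration; this is the naive det*inv formula, not a recommended algorithm):

```python
def adj_via_det_times_inv(a, b, c, d):
    """det(A) * inv(A) for a 2x2 matrix [[a, b], [c, d]].

    Algebraically this *is* the adjugate [[d, -b], [-c, a]], but
    evaluated the naive way it is 0/0 when det == 0."""
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [[det * x for x in row] for row in inv]

# Exactly singular: naive det*inv blows up ...
try:
    print(adj_via_det_times_inv(2.0, 2.0, 2.0, 2.0))
except ZeroDivisionError:
    print("0/0: matrix is singular")

# ... but the limit as the matrix approaches singular is finite,
# because the vanishing factor cancels between det and inv:
for eps in (1e-4, 1e-8, 1e-12):
    print(adj_via_det_times_inv(2.0, 2.0, 2.0, 2.0 + eps))
# each entry converges to the adjugate [[2, -2], [-2, 2]]
```

With det = 2*eps, the 1/det in the inverse and the det out front cancel exactly, which is the same cancellation Ron exhibits with the sigma(m,m) factor in the SVD form.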

```On Friday, June 22, 2012 7:25:40 PM UTC-5, Nasser M. Abbasi wrote:
> On 6/22/2012 11:43 AM, Ron Shepard wrote:
>
> >
> > You need to learn the formal stuff, so you have to go through this
> > step, but you are only taking the first step.  The interesting and
> > useful stuff will come next.
> >
> > \$.02 -Ron Shepard
> >
>
> I have no idea what all this mumble jumble means.
>
> How else will you compute the adjugate matrix of this matrix?
>
> A = [2 2;
>      2 2]
>
> Naive application of what you say results in
>
> ------------------------------
> A=[2 2;
>    2 2]
>
> inv(A)*det(A)
> Warning: Matrix is singular to working precision.
>
> ans =
>      NaN   NaN
>      NaN   NaN
> ------------------
>
> With what I showed, you obtain the correct result
>
>      2  -2
>     -2   2
>
> Having a correct algorithm that works all the time is
> more important than having a fast one that works sometimes
> and fails at other times.
>
> --Nasser

Be serious now: the mumble jumble is important unless, of course, you deal only with toy problems, and no one in their right mind is going to do inv(A)*det(A).
```
 0

```On Fri, 22 Jun 2012 10:14:57 -0300, Gordon Sande wrote:

> On 2012-06-22 09:18:24 -0300, Nasser M. Abbasi said:
>
>> On 6/22/2012 6:20 AM, Arjen Markus wrote:
>>> In linear algebra computations you almost never use the determinant directly.
>>
>> Well, I use det() a lot in my courses to solve my HWs (or maybe
>> the teacher just wanted to give us a hard time :)
>
> Other than in undergraduate courses you will probably never see
a determinant. They live in zoos called undergraduate courses but
> not in the wild!
>

Do I have to give my PhD back, because a fundamental part of my
research at that time involved finding the solutions of det(A(z)) = 0?
A(z) was a 5x5 matrix where each element was an expression
containing some superposition of spherical Bessel and/or Neumann
functions of complex order z and real arguments.

--
steve
```
 0
Reply sgk (134) 6/25/2012 2:45:19 PM

```Steven G. Kargl <sgk@removetroutmask.apl.washington.edu> wrote:
> On Fri, 22 Jun 2012 10:14:57 -0300, Gordon Sande wrote:
(snip)
>>> On 6/22/2012 6:20 AM, Arjen Markus wrote:
>>>> In linear algebra computations you almost never use
>>>> the determinant directly.

(snip)
>> Other than in undergraduate courses you will probably never see
>> a determinant. They live in zoos called undergraduate courses but
>> not in the wild!

> Do I have to give my PhD back, because a fundamental part of my
> research at that time involved finding the solutions of
> det(A(z)) = 0?

> A(z) was a 5x5 matrix where each element was an expression
> containing some superposition of spherical Bessel and/or Neumann
> functions of complex order z and real arguments.

I forget the name of the rule that says that there is an
exception for every rule. The one above says almost never
and probably never, so not quite never.

Much bigger than 5x5 and double precision isn't enough too much
of the time, but I would believe 5x5. Did you do iterative
refinement on it? (I think that works for determinants but I
don't know that I ever tried it.) I used to have a polynomial
least squares routine that did iterative refinement on the
solution.

-- glen

```
 0
Reply gah (12850) 6/25/2012 6:26:34 PM

```On Mon, 25 Jun 2012 18:26:34 +0000, glen herrmannsfeldt wrote:

> Steven G. Kargl <sgk@removetroutmask.apl.washington.edu> wrote:
>> On Fri, 22 Jun 2012 10:14:57 -0300, Gordon Sande wrote:
> (snip)
>>>> On 6/22/2012 6:20 AM, Arjen Markus wrote:
>>>>> In linear algebra computations you almost never use
>>>>> the determinant directly.
>
> (snip)
>>> Other than in undergraduate courses you will probably never see
>>> a determinant. They live in zoos called undergraduate courses but
>>> not in the wild!
>
>> Do I have to give my PhD back, because a fundamental part of my
>> research at that time involved finding the solutions of
>> det(A(z)) = 0?
>
>> A(z) was a 5x5 matrix where each element was an expression
>> containing some superposition of spherical Bessel and/or Neumann
>> functions of complex order z and real arguments.
>
> I forget the name of the rule that says that there is an
> exception for every rule. The one above says almost never
> and probably never, so not quite never.
>
> Much bigger than 5x5 and double precision isn't enough too much
> of the time, but I would believe 5x5. Did you do iterative
> refinement on it? (I think that works for determinants but I
> don't know that I ever tried it.) I used to have a polynomial
> least squares routine that did iterative refinement on the
> solution.

The method I used involved specifying rectangular or circular
contours in the complex z plane, and computing the winding
number to determine if a zero was enclosed.  For contours
that contained a zero, one can then find the zero by computing
a Cauchy integral.  The details can be found in an appendix of
a paper I wrote long ago: S. G. Kargl, P. L. Marston, J. Acoust.
Soc. Am. 85, 1014-1028 (1989).

--
steve
```
 0
Reply sgk (134) 6/25/2012 7:08:14 PM
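The contour method Steve describes can be sketched numerically: the argument principle counts the enclosed zeros from the integral of f'/f, and a Cauchy integral of z*f'/f then recovers an isolated zero. This toy uses a scalar f(z) with a known root in place of det(A(z)); the function and parameter choices are illustrative only, not the paper's actual code.

```python
import cmath
import math

def winding_and_zero(f, df, center, radius, m=2000):
    """Count zeros of f inside a circular contour and locate a single
    enclosed zero via numerical contour integrals:
        N  = (1/(2*pi*i)) * integral of f'(z)/f(z) dz    (argument principle)
        z0 = (1/(2*pi*i)) * integral of z*f'(z)/f(z) dz  (valid when N == 1)
    Sketch of the approach described above, with a trapezoidal rule
    around the circle (spectrally accurate for periodic integrands)."""
    n_sum = 0j
    z_sum = 0j
    for k in range(m):
        t = 2 * math.pi * k / m
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * (z - center) * (2 * math.pi / m)   # dz = i*r*e^(it) dt
        w = df(z) / f(z)
        n_sum += w * dz
        z_sum += z * w * dz
    two_pi_i = 2j * math.pi
    return round((n_sum / two_pi_i).real), z_sum / two_pi_i

# toy "det"-like function with a known simple zero at z = 1.5 - 0.5j
z_star = 1.5 - 0.5j
f  = lambda z: (z - z_star) * cmath.exp(z)
df = lambda z: cmath.exp(z) * (1 + (z - z_star))
n, z0 = winding_and_zero(f, df, center=1 + 0j, radius=1.0)
print(n, z0)   # one enclosed zero, z0 close to 1.5-0.5j
```

For f = det(A(z)) the same integrals apply; only the evaluation of f (e.g. by LU factorization at each quadrature point) changes.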

```Steven G. Kargl <sgk@removetroutmask.apl.washington.edu> wrote:

(snip on determinants)
>>> Do I have to give my PhD back, because a fundamental part of my
>>> research at that time involved finding the solutions of
>>> det(A(z)) = 0?

(snip, I wrote)
>> I forget the name of the rule that says that there is an
>> exception for every rule. The one above says almost never
>> and probably never, so not quite never.

>> Much bigger than 5x5 and double precision isn't enough too much
>> of the time, but I would believe 5x5. Did you do iterative
>> refinement on it? (I think that works for determinants but I
>> don't know that I ever tried it.) I used to have a polynomial
>> least squares routine that did iterative refinement on the
>> solution.

> The method I used involved specifying rectangular or circular
> contours in the complex z plane, and computing the winding
> number to determine if a zero was enclosed.

OK, so not directly computing the determinant.

> For contours
> that contained a zero, one can then find the zero by computing
> a cauchy integral.  The details can be found in an appendix of
> a paper I wrote long ago: SG. Kargl, PL.Marston,  J. Acoust.
> Soc. Am., 85, 1014-1028 (1989).

Reminds me, I believe in this group, someone wrote a program
to compute the number of ways to make change given a set of
possible coins as a numeric contour integral.

-- glen
```
 0
Reply gah (12850) 6/25/2012 8:21:48 PM
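The change-counting trick glen recalls works because the count is a coefficient of a generating function, and coefficients can be pulled out with a numerical Cauchy integral over a small circle. A sketch under those assumptions (coin set, radius, and sample count chosen arbitrarily), checked against ordinary dynamic programming:

```python
import cmath

COINS = (1, 5, 10, 25)

def ways_dp(n):
    """Reference count of ways to make n cents, by dynamic programming."""
    ways = [1] + [0] * n
    for c in COINS:
        for v in range(c, n + 1):
            ways[v] += ways[v - c]
    return ways[n]

def ways_contour(n, r=0.6, m=256):
    """Same count as the coefficient of z**n in the generating function
        G(z) = prod over coins c of 1/(1 - z**c),
    extracted with a numerical Cauchy integral on a circle of radius r < 1:
        a_n = (1/(2*pi*i)) * integral of G(z)/z**(n+1) dz
    which the trapezoidal rule turns into a plain average over m points."""
    total = 0j
    for k in range(m):
        z = r * cmath.exp(2j * cmath.pi * k / m)
        g = 1.0
        for c in COINS:
            g /= (1 - z ** c)
        total += g / z ** n
    return round((total / m).real)

print(ways_dp(25), ways_contour(25))   # 13 13
```

Rounding the real part recovers the exact integer count as long as the quadrature error stays below 0.5, which these parameters comfortably achieve for small n.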

```On Mon, 25 Jun 2012 20:21:48 +0000, glen herrmannsfeldt wrote:

> Steven G. Kargl <sgk@removetroutmask.apl.washington.edu> wrote:
>
> (snip on determinants)
>>>> Do I have to give my PhD back, because a fundamental part of my
>>>> research at that time involved finding the solutions of
>>>> det(A(z)) = 0?
>
> (snip, I wrote)
>>> I forget the name of the rule that says that there is an
>>> exception for every rule. The one above says almost never
>>> and probably never, so not quite never.
>
>>> Much bigger than 5x5 and double precision isn't enough too much
>>> of the time, but I would believe 5x5. Did you do iterative
>>> refinement on it? (I think that works for determinants but I
>>> don't know that I ever tried it.) I used to have a polynomial
>>> least squares routine that did iterative refinement on the
>>> solution.
>
>> The method I used involved specifying rectangular or circular
>> contours in the complex z plane, and computing the winding
>> number to determine if a zero was enclosed.
>
> OK, so not directly computing the determinant.

Huh?  Given the function f(z) = det(A(z)) where A(z) is
a 5x5 matrix, how else are you going to find f(z) = 0
if you don't compute the determinant?

I never stated how the determinant was actually computed,
which was performed by an LU decomposition of the matrix
followed by the product of the diagonal elements.  If you
inferred that I did not directly compute the determinant
via cofactor expansion, then you're sort of correct.  I, in
fact, did write out the cofactor expansion, and tested the
resulting ugly mess against the LU decomposition method.

--
steve
```
 0
Reply sgk (134) 6/25/2012 9:32:00 PM

```Steven G. Kargl <sgk@removetroutmask.apl.washington.edu> wrote:

(snip, I wrote)
>> OK, so not directly computing the determinant.

> Huh?  Given the function f(z) = det(A(z)) where A(z) is
> a 5x5 matrix, how else are you going to find f(z) = 0
> if you don't compute the determinant?

> I never stated how the determinant was actually computed,
> which was performed by an LU decomposition of the matrix
> followed by the product of the diagonal elements.  If you
> inferred that I did not directly compute the determinant
> via cofactor expansion, then you're sort of correct.  I, in
> fact, did write out the cofactor expansion, and tested the
> resulting ugly mess against the LU decomposition method.

Well, there was the question of why no determinant function
in Fortran. So, was your problem a counterexample to the
uselessness of a DET() function?

More specifically, if Fortran had a DET function at the time,
would you have used it?

-- glen
```
 0
Reply gah (12850) 6/25/2012 10:23:20 PM

```On Mon, 25 Jun 2012 22:23:20 +0000, glen herrmannsfeldt wrote:

> Steven G. Kargl <sgk@removetroutmask.apl.washington.edu> wrote:
>
> (snip, I wrote)
>>> OK, so not directly computing the determinant.
>
>> Huh?  Given the function f(z) = det(A(z)) where A(z) is
>> a 5x5 matrix, how else are you going to find f(z) = 0
>> if you don't compute the determinant?
>
>> I never stated how the determinant was actually computed,
>> which was performed by an LU decomposition of the matrix
>> followed by the product of the diagonal elements.  If you
>> inferred that I did not directly compute the determinant
>> via cofactor expansion, then you're sort of correct.  I, in
>> fact, did write out the cofactor expansion, and tested the
>> resulting ugly mess against the LU decomposition method.
>
> Well, there was the question of why no determinant function
> in Fortran. So, was your problem a counterexample to the
> uselessness of a DET() function?

You should re-read my original response to Gordon.  He wrote:

% Other than in undergraduate courses you will probably never see
% a determinant. They live in zoos called undergraduate courses but
% not in the wild!

My point is that Gordon's assertion was incorrect.  Determinants
(and more importantly computing determinants) can be found in
the wild.

> More specifically, if Fortran had a DET function at the time,
> would you have used it?

I suppose if one existed I would have used it or at least tested
it.  I use sin(), cos(), sqrt(), and many more intrinsic procedures.
I see no reason to avoid these, so why should I have avoided a det()?

For the record, my personal math library, libm90, has a det()
function.

--
steve
```
 0
Reply sgk (134) 6/25/2012 11:43:18 PM

```Steven G. Kargl <sgk@removetroutmask.apl.washington.edu> wrote:

(snip, I wrote)
>> Well, there was the question of why no determinant function
>> in Fortran. So, was your problem a counterexample to the
>> uselessness of a DET() function?

> You should re-read my original response to Gordon.  He wrote:

> % Other than in undergraduate courses you will probably never see
> % a determinant. They live in zoos called undergraduate courses but
> % not in the wild!

> My point is that Gordon's assertion was incorrect.  Determinants
> (and more importantly computing determinants) can be found in
> the wild.

>> More specifically, if Fortran had a DET function at the time,
>> would you have used it?

> I suppose if one existed I would have used it or at least tested
> it.  I use sin(), cos(), sqrt(), and many more intrinsic procedures.
> I see no reason to avoid these, so why should I have avoided a det()?

The usual sin(), cos(), and sqrt() routines work well for the usual
argument domain. If you want an accurate value of sin(1e100)
or even sin(1e100+1) then not so well.

It seems to me that it isn't hard to write a det() function
that works well for small matrices, even up to 5x5 or so.
When you get to 100x100 or 10000x10000 it is numerically much
more difficult. There are many subtractions of nearly equal
values and of values so different that all significance of the
smaller value is lost. Would it be reasonable to supply an
intrinsic det() and then limit it to 5x5 or smaller? Maybe.

Would an intrinsic det() do cofactor expansion, LU decomposition,
or something else? Does it matter? In cases where the specific
algorithm is sensitive to the problem at hand, it is best to leave
the user to find a routine in one of the many available libraries.

If you read the introduction to Numerical Recipes, the whole idea
is that such routines not be black boxes. That it is important
in many cases to know the details about how they work inside,
to determine the appropriateness for the problem at hand.
In those cases, a black-box intrinsic might not be a good idea.

Now, there might be cases where the intrinsic sin() and cos()
aren't appropriate. I can imagine that they might have a
discontinuous first or second derivative. Most of the time that
isn't a problem, but sometimes it might be. That standard doesn't
say much about that, but users who have such requirements might
have to find their own sin() and cos().

> For the record, my personal math library, libm90, has a det()
> function.

-- glen

```
 0
Reply gah (12850) 6/26/2012 1:01:52 AM

```On Monday, June 25, 2012 6:01:52 PM UTC-7, glen herrmannsfeldt wrote:
> Steven G. Kargl <sgk@removetroutmask.apl.washington.edu> wrote:
>
> (snip, I wrote)
> >> Well, there was the question of why no determinant function
> >> in Fortran. So, was your problem a counterexample to the
> >> uselessness of a DET() function?
>
> > You should re-read my original response to Gordon.  He wrote:
>
> > % Other than in undergraduate courses you will probably never see
> > % a determinant. They live in zoos called undergraduate courses but
> > % not in the wild!
>
> > My point is that Gordon's assertion was incorrect.  Determinants
> > (and more importantly computing determinants) can be found in
> > the wild.
>
> >> More specifically, if Fortran had a DET function at the time,
> >> would you have used it?
>
> > I suppose if one existed I would have used it or at least tested
> > it.  I use sin(), cos(), sqrt(), and many more intrinsic procedures.
> > I see no reason to avoid these, so why should I have avoided a det()?
>
> The usual sin(), cos(), and sqrt() routines work well for the usual
> argument domain. If you want an accurate value of sin(1e100)
> or even sin(1e100+1) then not so well.

Derailing a discussion with the classic sin(1e100) example.  Yep,
a person could do something totally stupid like sin(1e100).  It
certainly does not give me much confidence in anything that person
has to say.

BTW, sqrt() should work perfectly well over its entire domain
where "perfectly well" is defined as an error of less than or
equal to 0.5 ULP in all rounding modes.

> It seems to me that it isn't hard to write a det() function
> that works well for small matrices, even up to 5x5 or so.
> When you get to 100x100 or 10000x10000 it is numerically much
> more difficult. There are many subtractions of nearly equal
> values and of values so different that all significance of the
> smaller value is lost. Would it be reasonable to supply an
> intrinsic det() and then limit it to 5x5 or smaller? Maybe.

Fortran contains a MATMUL intrinsic without some artificial
low limit on the size of the matrices.  You can have the
same problems of subtraction of nearly equal values and
of values so different that all significance of smaller values
is lost.  This did not stop J3 from adding MATMUL.

> Would an intrinsic det() do cofactor expansion, LU decomposition,
> or something else? Does it matter?

It does not matter how the computation is done.  The standard
does not specify the implementation.  See RANDOM_NUMBER.

And, for the record, I think the standard should not include
a DET().

--
steve

```
 0
Reply kargls (354) 6/26/2012 1:51:30 AM

```On 6/25/12 8:51 PM, kargls@comcast.net wrote:
> On Monday, June 25, 2012 6:01:52 PM UTC-7, glen herrmannsfeldt wrote:
>>
>> (snip, I wrote)
>>>> Well, there was the question of why no determinant function
>>>> in Fortran. So, was your problem a counterexample to the
>>>> uselessness of a DET() function?
>>
>>> You should re-read my original response to Gordon.  He wrote:
>>
>>> % Other than in undergraduate courses you will probably never see
>>> % a determinant. They live in zoos called undergraduate courses but
>>> % not in the wild!
>>
>>> My point is that Gordon's assertion was incorrect.  Determinants
>>> (and more importantly computing determinants) can be found in
>>> the wild.
>>
>>>> More specifically, if Fortran had a DET function at the time,
>>>> would you have used it?
>>
>>> I suppose if one existed I would have used it or at least tested
>>> it.  I use sin(), cos(), sqrt(), and many more intrinsic procedures.
>>> I see no reason to avoid these, so why should I have avoided a det()?
>>
>> The usual sin(), cos(), and sqrt() routines work well for the usual
>> argument domain. If you want an accurate value of sin(1e100)
>> or even sin(1e100+1) then not so well.
>
> Derailing a discussion with the classic sin(1e100) example.  Yep,
> a person could do something totally stupid like sin(1e100).  It
> certainly does not give me much confidence in anything that person
> has to say.
>
> BTW, sqrt() should work perfectly well over its entire domain
> where "perfectly well" is defined as an error of less than or
> equal to 0.5 ULP in all rounding modes.
>
>> It seems to me that it isn't hard to write a det() function
>> that works well for small matrices, even up to 5x5 or so.
>> When you get to 100x100 or 10000x10000 it is numerically much
>> more difficult. There are many subtractions of nearly equal
>> values and of values so different that all significance of the
>> smaller value is lost. Would it be reasonable to supply an
>> intrinsic det() and then limit it to 5x5 or smaller? Maybe.
>
> Fortran contains a MATMUL intrinsic without some artificial
> low limit on the size of the matrices.  You can have the
> same problems of subtraction of nearly equal values and
> of values so different that all significance of smaller values
> is lost.  This did not stop J3 from adding MATMUL.

Right, this is a judgment call.  Matrix multiply is simple,
straightforward arithmetic without anything unusual going on (other than
overflow or catastrophic cancellation, which are normal for any computer
arithmetic).  Det() is simply hard and there is no simple solution (with
apologies to Cramer).

>
>> Would an intrinsic det() do cofactor expansion, LU decomposition,
>> or something else? Does it matter?

Beats me.  Beats me.  Beats me.  Yes!

There is a significant difference (as well as some overlap) between
arithmetic (MATMUL) and numerical analysis (DET).

>
> It does not matter how the computation is done.  The standard
> does not specify the implementation.  See RANDOM_NUMBER.

True, the standard doesn't specify the implementation.  But, if the
standard requires something, there is the implication that there is a
pretty good implementation that is pretty much useful pretty much most
of the time.

>
> And, for the record, I think the standard should not include
> a DET().

Yes!  But where were you when RANDOM_NUMBER was introduced?  ;)

Dick Hendrickson
>

```
 0
Reply dick.hendrickson (1286) 6/26/2012 3:00:54 AM

```On Fri, 22 Jun 2012 14:46:44 +0100, Clive Page <usenet@page2.eu>
wrote in <a4jbaaFlkrU1@mid.individual.net>:
> On 22/06/2012 14:14, Gordon Sande wrote:
>> Other than in undergraduate courses you will probably never see
>> a determinent. They live in zoos called undergraduate courses but
>> not in the wild!

> I certainly remember having to grapple with them as an undergraduate,
> but not having found a need to do that at any time since.  Determinants
> do seem to have the aura of the 19th century about them.

> It is said that after Lewis Carroll wrote "Alice's Adventures in
> Wonderland", it so charmed Queen Victoria that she invited him to tea at
> Buckingham Palace and asked him to send her a copy of his next book.
> He agreed, and so a year or so later there was delivered to the Palace a
> small packet containing a copy of "An Elementary Treatise on
> Determinants, With Their Application to Simultaneous Linear Equations
> and Algebraic Equations".  Carroll was, of course, the pen name of a
> maths don at Oxford.  The book may still be relevant, for all I know.

It's only 145 pages, she probably read it in an afternoon!

--
Ivan Reid, School of Engineering & Design, _____________  CMS Collaboration,
Brunel University.    Ivan.Reid@[brunel.ac.uk|cern.ch]    Room 40-1-B12, CERN
KotPT -- "for stupidity above and beyond the call of duty".
```
 0
Reply Ivan.Reid1 (10) 6/28/2012 5:38:59 PM
