


Comparison of C Sharp and C performance

A C program and a C Sharp program were written to calculate the 64-bit
value of 19 factorial one million times, using both the iterative and
recursive methods to compute (and compare) the results.

Here is the C code.

#include <stdio.h>
#include <time.h>

long long factorial(long long N)
{
    long long nFactorialRecursive;
    long long nFactorialIterative;
    long long Nwork;
    if (N <= 2) return N;
    for ( nFactorialIterative = 1, Nwork = N;
          Nwork > 1;
          Nwork-- )
        nFactorialIterative *= Nwork;
    nFactorialRecursive = N * factorial(N-1);
    if (nFactorialRecursive != nFactorialIterative)
       printf("%I64d! is %I64d recursively but %I64d iteratively wtf!\n",
              N,
              nFactorialRecursive,
              nFactorialIterative);
    return nFactorialRecursive;
}

int main(void)
{
    long long N;
    long long Nfactorial;
    double dif;
    long long i;
    long long K;
    time_t start;
    time_t end;
    N = 19;
    K = 1000000;
    time (&start);
    for (i = 0; i < K; i++)
        Nfactorial = factorial(N);
    time (&end);
    dif = difftime (end,start);
    printf("%I64d! is %I64d: %.2f seconds to calculate %I64d times\n",
           N, Nfactorial, dif, K);
    return 0; // Gee is that right?
}

Here is the C Sharp code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace N_factorial
{
    class Program
    {
        static void Main(string[] args)
        {
            long N;
            long Nfactorial = 0;
            TimeSpan dif;
            long i;
            long K;
            DateTime start;
            DateTime end;
            N = 19;
            K = 1000000;
            start = DateTime.Now;
            for (i = 0; i < K; i++)
                Nfactorial = factorial(N);
            end = DateTime.Now;
            dif = end - start;
            Console.WriteLine
                ("The factorial of " +
                 N.ToString() + " is " +
                 Nfactorial.ToString() + ": " +
                 dif.ToString() + " " +
                 "seconds to calculate " +
                 K.ToString() + " times");
            return;
        }

        static long factorial(long N)
        {
            long nFactorialRecursive;
            long nFactorialIterative;
            long Nwork;
            if (N <= 2) return N;
            for ( nFactorialIterative = 1, Nwork = N;
                  Nwork > 1;
                  Nwork-- )
                nFactorialIterative *= Nwork;
            nFactorialRecursive = N * factorial(N-1);
            if (nFactorialRecursive != nFactorialIterative)
                Console.WriteLine
                ("The iterative factorial of " +
                 N.ToString() + " " +
                 "is " +
                 nFactorialIterative.ToString() + " " +
                 "but its recursive factorial is " +
                 nFactorialRecursive.ToString());
            return nFactorialRecursive;
        }
    }
}

The C Sharp code runs at 110% of the speed of the C code, which may
seem to "prove" the half-literate Urban Legend that "C is more
efficient than C Sharp or VM/bytecode languages in general, d'oh".

As I take pains to point out in my book, "Build Your Own .Net Language
and Compiler" (Apress 2004) (buy it now buy it now), it's not even
grammatical to say that a programming language is more "efficient"
than another programming language.

But far more significantly: the ten percent "overhead" would be
several orders of magnitude greater were C Sharp an "inefficient,
interpreted language", which many C programmers claim it is. That is
because a true interpreter parses and/or unpacks each instruction when
it is executed, and both of the above examples execute their
instructions millions of times.

Were C Sharp to be interpreted, the above C Sharp code would run very,
very slowly, but C Sharp isn't interpreted.

Instead, a one-time modification is made to the byte code upon loading
to thread the codes together. This explains part of the ten percent
"overhead". For the remainder of execution, a sort of switch statement
operates in which the handler code for individual byte codes uses goto
to transfer control. This means that C and C Sharp execute at the same
effective rate of speed, and the ONLY efficiency-based reason for
choosing C is avoiding the initial overhead of setting up the .Net
virtual machine.

But what does this virtual machine provide? Almost 100 percent safety
against memory leaks and many other bad things.

Indeed, C is like the (unwritten) British constitution. In that
arrangement, Parliament cannot "entrench" an act that would bind all
subsequent Parliaments, because Parliamentary supremacy (like the
putative power of C) must at all costs be preserved: this was the
innovation of 1688/9, when Parliament hired King William and his
Better Half as Kingie and Queenie on condition that they be nice and
obey Parliament. This means that in fact the British constitution
contains no protection against a runaway, tyrannical, "long"
Parliament. It promised not to do so in 1911 and confirmed that it
would be nice in 1949, but there is nothing in the British
constitution to prevent Parliament from enacting a new bill, as long
as it could get enough Peers in the House of Lords to wake up and
permit it to do so (Lords approval being required unlike money bills),
and HM the Queen to give Royal Assent.

When Kurt Godel was studying the booklet given him in Princeton to
pass the US Citizenship test, he claimed to find a bug that would
allow America to be a dictatorship. I think he'd be even more
terrified of the British constitution, for like his self-reflexive
paradoxical statement in his incompleteness/inconsistency result, the
very power of Parliament renders it impotent to write a Constitution!

Whereas .Net and Java provide "Constitutional" safeguards against code
doing nasty things even as the American constitution was intended to
be, and to some practical extent is, "a machine that runs of itself".

Both constitutions can fail, but the British constitution is more
likely to. It enabled Margaret Thatcher to rule by decree and override
even her own Cabinet, and ramrod through a medieval "poll tax" in 1990
that produced civil disturbances. Britons enjoy human rights mostly
through the EU. Whereas misuse of the American constitution during
Bush's administration was more vigorously resisted especially in its
courts, where the judges are truly independent.

It is true that a massive "bug" in the American constitution developed
in 1860 with the outbreak of civil war, but this was extra-
Constitutional. It resulted from a deliberate misinterpretation of
states' rights under the Tenth Amendment, in which the states retained
a "nullifying" level of sovereignty; but their assent to the
Constitution in 1789 had itself nullified this strong interpretation
of "states' rights".

Since 1689, no such "bug" has occurred in the British constitution.
However, the British constitution existed before 1689, and its bug was
just as serious, for it produced the English civil war. This was
because there is no provision in the British constitution for a pig-
headed king: Charles I refused to govern with Parliament, and even
today a monarch could conceivably refuse Royal Assent to needed
legislation, or use the British Army (which is NOT under the control
of Parliament, but of the Monarch to whom officers swear fealty)
against his own people.

C Sharp programs can fail as can the American Constitution. But the
idiotic equation of the reliability of C and C Sharp in fact resembles
the political passivity of Britons who talk darkly of the EU being a
"new world order" destroying their "rights as Englishmen" when in fact
it's the best thing that ever happened to them. And, I've not
addressed how the rights of Irishmen have been abused under the
British constitution.

I for one am tired of the Urban Legends of the lower middle class,
whether in programming or politics.
spinoza1111
12/27/2009 2:36:54 PM
comp.lang.c 30657 articles. 4 followers. spinoza1111 (3246) is leader.

448 Replies
2817 Views

On Dec 27, 9:36 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> A C and a C Sharp program was written to calculate the 64-bit value of
> 19 factorial one million times, using both the iterative and recursive
> methods to solve (and compare) the results
> [...]

OT, but *much* more interesting than your rambles about "C:The
Complete Nonsense", etc.
What about the memory footprint of C vs C#? Sure, .Net code is
compiled rather than interpreted so it doesn't have a huge time
penalty - but it is compiled to an intermediate language designed to
run in a *huge* virtual machine. Bloatware which is only slightly
slower than nonbloated code is still bloated and still slower. Also, a
much more interesting question (than simple factorial calculations) is
how something like Microsoft Excel would work if its code base was
rewritten from C to C#. I suspect that the cumulative slowdowns would
result in a spreadsheet which was annoyingly slower.

Out of curiosity, what rules of grammar does the sentence "C is more
efficient than C#." violate?
scattered
12/27/2009 4:09:39 PM
"spinoza1111" <spinoza1111@yahoo.com> wrote in message 
news:c3d9c29f-0f7b-41e3-9478-ce62d98b7c76@d4g2000pra.googlegroups.com...
>A C and a C Sharp program was written to calculate the 64-bit value of
> 19 factorial one million times, using both the iterative and recursive
> methods to solve (and compare) the results
>
> Here is the C code.

> Here is the C Sharp code.

> The C Sharp code runs at 110% of the speed of the C code, which may
> seem to "prove" the half-literate Urban Legend that "C is more
> efficient than C Sharp or VM/bytecode languages in general, d'oh".

C# was 10% faster? On my 32-bit machine, the C code was generally faster, 
depending on compilers and options.

But changing both to use 32-bit arithmetic, the C was nearly double the 
speed of C#, which is what is to be expected.

That's not bad, but C# does need a very complex environment to run, so 
cannot be used as a replacement for many uses of C, and it is still slower.

(BTW your timing routines in the C code seem to round to the nearest second; 
not very useful for that purpose.)

> Were C Sharp to be interpreted, the above C Sharp code would run very,
> very slowly, but C Sharp isn't interpreted.

There are interpreters, and interpreters. You could be looking at 
slow-downs, compared with C, of 3x to 300x, but usually there will be some 
benefits that make up for performance deficiencies in these kinds of 
benchmarks.

C#, I believe, is executed as native code, after translation from MSIL or 
CIL or some such acronym.

-- 
Bartc 

bartc
12/27/2009 5:25:54 PM
On Sun, 27 Dec 2009 06:36:54 -0800, spinoza1111 wrote:

> The C Sharp code runs at 110% of the speed of the C code, which may seem
> to "prove" the half-literate Urban Legend that "C is more efficient than
> C Sharp or VM/bytecode languages in general, d'oh".

	Really? I would have thought that, at best, it proves that a 
particular C# implementation of a particular algorithm, when using a 
particular C# virtual machine, on a particular platform, has beaten a 
particular C implementation of a particular algorithm, with some 
particular C compiler and some particular compile-time settings, on a 
particular platform. This, assuming that one can believe you - and why 
should we?

	If anything, you seem to have proved that you know nothing much 
about rigor, and that on this occasion you indulged in forcing facts 
to fit your preconceived notions. Way to go.

	As for your ranting on the American and British constitutions - 
what kind of mental condition affects you?
Jens
12/27/2009 5:27:18 PM
I transformed that program a bit. It now calculates the factorial of 20,
and it does that 10 million times.

Note that the difftime function is not accurate. I used the utility "timethis".
Machine: Intel i7 (8 cores) with 12GB RAM

The results are:
-------------------------------------------------------------------------------------
D:\temp>csc /o tfact.cs                         C# Optimizations ON
Microsoft (R) Visual C# 2008 Compiler version 3.5.30729.1
for Microsoft (R) .NET Framework version 3.5
Copyright (C) Microsoft Corporation. All rights reserved.
D:\temp>timethis tfact
TimeThis :  Command Line :  tfact
TimeThis :    Start Time :  Sun Dec 27 18:33:53 2009

The factorial of 20 is 2432902008176640000: 00:00:03.7460000 seconds to calculate 10000000 times

TimeThis :  Command Line :  tfact
TimeThis :    Start Time :  Sun Dec 27 18:33:53 2009
TimeThis :      End Time :  Sun Dec 27 18:33:57 2009
TimeThis :  Elapsed Time :  00:00:03.804
---------------------------------------------------------------------------------------
D:\temp>cl -Ox tfact.c                             C optimizations ON
Microsoft (R) C/C++ Optimizing Compiler Version 15.00.21022.08 for x64
Copyright (C) Microsoft Corporation.  All rights reserved.
tfact.c
Microsoft (R) Incremental Linker Version 9.00.21022.08
Copyright (C) Microsoft Corporation.  All rights reserved.
/out:tfact.exe
tfact.obj
D:\temp>timethis tfact
TimeThis :  Command Line :  tfact
TimeThis :    Start Time :  Sun Dec 27 18:34:10 2009

The factorial of 20 is 2432902008176640000: 3.00 seconds to calculate 10000000 times

TimeThis :  Command Line :  tfact
TimeThis :    Start Time :  Sun Dec 27 18:34:10 2009
TimeThis :      End Time :  Sun Dec 27 18:34:13 2009
TimeThis :  Elapsed Time :  00:00:02.666
D:\temp>

----------------------------------------------------------------------------------------

The result is clear: C takes 2.666 seconds, C# takes 3.804 seconds
jacob
12/27/2009 5:48:03 PM
> On Dec 27, 9:36 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> Here is the C code.

> long long factorial(long long N)
> {
>         long long nFactorialRecursive;
>         long long nFactorialIterative;
>         long long Nwork;
>         if (N <= 2) return N;
>         for ( nFactorialIterative = 1, Nwork = N;
>                     Nwork > 1;
>                     Nwork-- )
>                 nFactorialIterative *= Nwork;
>         nFactorialRecursive = N * factorial(N-1);
>         if (nFactorialRecursive != nFactorialIterative)
>                printf("%I64d! is %I64d recursively but %I64d iteratively wtf!
> \n",
>                             N,
>                             nFactorialIterative,
>                             nFactorialRecursive);
>         return nFactorialRecursive;
>
> }

I'm impressed.  Very few people could define an N^2 algorithm for calculating
factorials.

Hint:  When you call factorial(19), you calculate factorial(19) iteratively,
and then you calculate 19 * factorial(18).  You then calculate factorial(18)
iteratively, then calculate 18 * factorial(17).  Etcetera.

In short, for factorial(19), instead of performing 38 multiplications and
19 calls, you perform 19 multiplications and 19 calls for the recursive
calculation, plus 164 multiplications for the iterative calculations.

This is not a reasonable way to go about things.  This is a pretty
impressive screwup on your part, and you can't blame the language design;
this is purely at the algorithm-design level, not any kind of mysterious
quirk of C.

Again, the problem isn't with C's design; it's that you are too muddled
to design even a basic test using two algorithms, as you embedded one
in another.

Here's two test programs.  One's yours, but I switched to 'long double'
and used 24! instead of 19! as the test case, and multiplied the number
of trials by 10.

The main loop is unchanged except for the change in N and the switch to
%Lf.

Yours:
	long double factorial(long double N)
	{
	    long double nFactorialRecursive;
	    long double nFactorialIterative;
	    long double Nwork;
	    if (N <= 2) return N;
	    for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
		nFactorialIterative *= Nwork;
	    nFactorialRecursive = N * factorial(N-1);
	    if (nFactorialRecursive != nFactorialIterative)
	       printf("%Lf! is %Lf recursively but %Lf iteratively wtf!\n",
		      N,
		      nFactorialIterative,
		      nFactorialRecursive);
	    return nFactorialRecursive;
	}

Mine:

	long double ifactorial(long double N)
	{
	    long double nFactorialIterative;
	    long double Nwork;
	    if (N <= 2) return N;
	    for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
		nFactorialIterative *= Nwork;
	    return nFactorialIterative;
	}

	long double rfactorial(long double N)
	{
	    long double nFactorialRecursive;
	    if (N <= 2) return N;
	    nFactorialRecursive = N * rfactorial(N-1);
	    return nFactorialRecursive;
	}

	long double factorial(long double N)
	{
	    long double nFactorialRecursive;
	    long double nFactorialIterative;
	    nFactorialIterative = ifactorial(N);
	    nFactorialRecursive = rfactorial(N);
	    if (nFactorialRecursive != nFactorialIterative)
	       printf("%Lf! is %Lf recursively but %Lf iteratively wtf!\n",
		      N,
		      nFactorialIterative,
		      nFactorialRecursive);
	    return nFactorialRecursive;
	}

Output from the main loops:

24! is 620448401733239409999872: 14.00 seconds to calculate 10000000 times
24! is 620448401733239409999872: 5.00 seconds to calculate 10000000 times

.... Which is to say, no one cares whether C# is faster or slower than C
by a few percent, when non-idiotic code is faster than idiotic code by
nearly a factor of three.

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
Seebs
12/27/2009 8:20:53 PM
"Seebs" <usenet-nospam@seebs.net> wrote in message 
news:slrnhjfgf3.7fu.usenet-nospam@guild.seebs.net...
>> On Dec 27, 9:36 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>> Here is the C code.
>
>> long long factorial(long long N)
>> {
....
>> }
>
> I'm impressed.  Very few people could define an N^2 algorithm for 
> calculating
> factorials.

> Here's two test programs.  One's yours, but I switched to 'long double'
> and used 24! instead of 19! as the test case, and multiplied the number
> of trials by 10.
>
> The main loop is unchanged except for the change in N and the switch to
> %Lf.

> Output from the main loops:
>
> 24! is 620448401733239409999872: 14.00 seconds to calculate 10000000 times
> 24! is 620448401733239409999872: 5.00 seconds to calculate 10000000 times
>
> ... Which is to say, no one cares whether C# is faster or slower than C
> by a few percent, when non-idiotic code is faster than idiotic code by
> nearly a factor of three.

It doesn't matter too much that the code was idiotic, provided both 
languages were executing the same idiotic algorithm.

The original code wasn't a bad test of long long multiplication in each 
language.

-- 
Bartc 

bartc
12/27/2009 8:39:56 PM
"bartc" <bartc@freeuk.com> wrote in message 
news:CMMZm.19365$Ym4.9442@text.news.virginmedia.com...
> "spinoza1111" <spinoza1111@yahoo.com> wrote in message 
> news:c3d9c29f-0f7b-41e3-9478-ce62d98b7c76@d4g2000pra.googlegroups.com...
>>A C and a C Sharp program was written to calculate the 64-bit value of
>> 19 factorial one million times, using both the iterative and recursive
>> methods to solve (and compare) the results
>>
>> Here is the C code.
>
>> Here is the C Sharp code.
>
>> The C Sharp code runs at 110% of the speed of the C code, which may
>> seem to "prove" the half-literate Urban Legend that "C is more
>> efficient than C Sharp or VM/bytecode languages in general, d'oh".
>
> C# was 10% faster? On my 32-bit machine, the C code was generally faster, 
> depending on compilers and options.
>

yeah, much more significant may well be what the code actually does...

micro-benchmarks of this sort are rarely telling of overall performance.

something a little larger, such as a ray-tracer, would likely be a better 
example.


> But changing both to use 32-bit arithmetic, the C was nearly double the 
> speed of C#, which is what is to be expected.
>

yes, but to be fair, C# 'long' == C 'long long'...

(since C# followed Java in making long always 64-bits).


> That's not bad, but C# does need a very complex environment to run, so 
> cannot be used as a replacement for many uses of C, and it is still 
> slower.
>
> (BTW your timing routines in the C code seem to round to the nearest 
> second; not very useful for that purpose.)
>
>> Were C Sharp to be interpreted, the above C Sharp code would run very,
>> very slowly, but C Sharp isn't interpreted.
>
> There are interpreters, and interpreters. You could be looking at 
> slow-downs, compared with C, of 3x to 300x, but usually there will be some 
> benefits that make up for performance deficiencies in these kinds of 
> benchmarks.
>
> C#, I believe, is executed as native code, after translation from MSIL or 
> CIL or some such acronym.
>

yeah.
MSIL is the older / original term.
the IL was latter redubbed CIL.

it is much the same as the difference between x86-64, x64, and AMD64 or 
EM64T...


C# is first compiled to MSIL / CIL, and then the JIT stage does further 
compilation and optimization.

MSIL is not, in itself, inherently slow.

rather, what usually adds some slowdown is the OO facilities, array 
handling, ... this is because the OO is hard to fine-tune to the same extent 
as in, say, C++, and because of the use of bounds-checking of arrays 
(although there are many cases where bounds checking can be eliminated by 
"proving" that it will never fail).

like Java, it uses a particular approach to GC which tends to make GC 
inevitable (although, they may do like what many JVM's do, and essentially 
pay a slight up-front cost to free objects known-garbage up-front, or 
possibly use ref-counting as a culling method).


actually, simple stack machines I have found are underrated.
granted, SSA is a little easier to optimize, but I have gradually learned 
from experience what was my original intuition:
a stack machine is generally a much nicer IL stage than trying to connect 
the frontend and backend directly with SSA. SSA is nasty, IMO, and it is 
much better IME to unwind these internal stack-machines to a pseudo-SSA form 
during low-level compilation.

it also leads to the "nicety" that one has a much better idea where the 
temporaries and phi's are located, since these tend to emerge "naturally" 
from the operations on the virtual stack.

actually, AFAICT, it is much easier to unwind a stack into SSA form than it 
is to try to coerce ASTs into SSA. I am not sure then why most other 
compilers seem to take the direct AST -> SSA route, since there would not 
seem to be a whole lot of difference in terms of the quality of the final 
code produced, even though there is a notable difference WRT how much 
difference it is to work on the compiler.


nevermind that I may at some point want to redesign RPNIL some (my personal 
RPN-based IL), probably essentially "flipping" the stack (IOW: putting 
everything in left-to-right ordering, rather than x86-style right-to-left). 
this would make it more in-line with pretty much every other RPN.

originally, I chose an essentially backwards ordering, as I had assumed more 
or less 1:1 mapping with the underlying x86, and so everything was laid out 
to allow a "naive" translator (my original lower-end was rather naive, but 
as a cost it has left me with a fairly ugly IL with extra funky rules and a 
"backwards" stack ordering).

FWIW, that reason no longer holds, so I may consider an eventual 
redesign (and probably also for a lot of the syntax, such as making it 
properly token-based, ...).


> -- 
> Bartc 


0
BGB
12/27/2009 8:50:18 PM
On 2009-12-27, bartc <bartc@freeuk.com> wrote:
> It doesn't matter too much that the code was idiotic, provided both 
> languages were executing the same idiotic algorithm.

True.

> The original code wasn't a bad test of long long multiplication in each 
> language.

Fair enough.  It's not exactly what I'd call an interesting test case for
real-world code.  I'd be a lot more interested in performance of, say,
large lists or hash tables.

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/27/2009 9:10:27 PM
On Dec 28, 12:09 am, scattered <still.scatte...@gmail.com> wrote:
> On Dec 27, 9:36 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > A C and a C Sharp program was written to calculate the 64-bit value of
> > 19 factorial one million times, using both the iterative and recursive
> > methods to solve (and compare) the results
>
> > Here is the C code.
>
> > #include <stdio.h>
> > #include <time.h>
>
> > long long factorial(long long N)
> > {
> >     long long nFactorialRecursive;
> >     long long nFactorialIterative;
> >     long long Nwork;
> >     if (N <= 2) return N;
> >     for ( nFactorialIterative = 1, Nwork = N;
> >           Nwork > 1;
> >           Nwork-- )
> >         nFactorialIterative *= Nwork;
> >     nFactorialRecursive = N * factorial(N-1);
> >     if (nFactorialRecursive != nFactorialIterative)
> >        printf("%I64d! is %I64d recursively but %I64d iteratively wtf!\n",
> >               N,
> >               nFactorialIterative,
> >               nFactorialRecursive);
> >     return nFactorialRecursive;
> > }
>
> > int main(void)
> > {
> >     long long N;
> >     long long Nfactorial;
> >     double dif;
> >     long long i;
> >     long long K;
> >     time_t start;
> >     time_t end;
> >     N = 19;
> >     K = 1000000;
> >     time (&start);
> >     for (i = 0; i < K; i++)
> >         Nfactorial = factorial(N);
> >     time (&end);
> >     dif = difftime (end,start);
> >     printf("%I64d! is %I64d: %.2f seconds to calculate %I64d times\n",
> >            N, Nfactorial, dif, K);
> >     return 0; // Gee is that right?
> > }
>
> > Here is the C Sharp code.
>
> > using System;
> > using System.Collections.Generic;
> > using System.Linq;
> > using System.Text;
>
> > namespace N_factorial
> > {
> >     class Program
> >     {
> >         static void Main(string[] args)
> >         {
> >             long N;
> >             long Nfactorial = 0;
> >             TimeSpan dif;
> >             long i;
> >             long K;
> >             DateTime start;
> >             DateTime end;
> >             N = 19;
> >             K = 1000000;
> >             start = DateTime.Now;
> >             for (i = 0; i < K; i++)
> >                 Nfactorial = factorial(N);
> >             end = DateTime.Now;
> >             dif = end - start;
> >             Console.WriteLine
> >                 ("The factorial of " +
> >                  N.ToString() + " is " +
> >                  Nfactorial.ToString() + ": " +
> >                  dif.ToString() + " " +
> >                  "seconds to calculate " +
> >                  K.ToString() + " times");
> >             return;
> >         }
>
> >         static long factorial(long N)
> >         {
> >             long nFactorialRecursive;
> >             long nFactorialIterative;
> >             long Nwork;
> >             if (N <= 2) return N;
> >             for ( nFactorialIterative = 1, Nwork = N;
> >                   Nwork > 1;
> >                   Nwork-- )
> >                 nFactorialIterative *= Nwork;
> >             nFactorialRecursive = N * factorial(N-1);
> >             if (nFactorialRecursive != nFactorialIterative)
> >                 Console.WriteLine
> >                 ("The iterative factorial of " +
> >                  N.ToString() + " " +
> >                  "is " +
> >                  nFactorialIterative.ToString() + " " +
> >                  "but its recursive factorial is " +
> >                  nFactorialRecursive.ToString());
> >             return nFactorialRecursive;
> >         }
> >     }
> > }
>
> > The C Sharp code runs at 110% of the speed of the C code, which may
> > seem to "prove" the half-literate Urban Legend that "C is more
> > efficient than C Sharp or VM/bytecode languages in general, d'oh".
>
> > As I take pains to point out in my book, "Build Your Own .Net Language
> > and Compiler" (Apress 2004) (buy it now buy it now), it's not even
> > grammatical to say that a programming language is more "efficient"
> > than another pl.
>
> > But far more significantly: the ten percent "overhead" would be
> > several orders of magnitude were C Sharp to be an "inefficient,
> > interpreted language" which many C programmers claim it is. That is
> > because a true interpreter parses and/or unpacks each instruction when
> > it is executed, and both of the above examples execute their
> > instructions millions of times.
>
> > Were C Sharp to be interpreted, the above C Sharp code would run very,
> > very slowly, but C Sharp isn't interpreted.
>
> > Instead, a one-time modification is made to the byte code upon loading
> > to thread the codes together. This explains part of the ten percent
> > "overhead". For the remainder of execution, a sort of switch statement
> > is operating in which the code for individual byte codes using go to
> > to transfer control. This means that C and C Sharp execute at the same
> > effective rate of speed, and the ONLY efficiency-based reason for
> > choosing C is avoiding the initial overhead of setting up the .Net
> > virtual machine.
>
> > But what does this virtual machine provide? Almost 100 percent safety
> > against memory leaks and many other bad things.
>
> > Indeed, C is like the (unwritten) British constitution. In that
> > arrangement, Parliament cannot "entrench" an act that would bind all
> > subsequent Parliaments, because Parliamentary supremacy (like the
> > putative power of C) must at all costs be preserved: this was the
> > innovation of 1688/9, when Parliament hired King William and his
> > Better Half as Kingie and Queenie on condition that they be nice and
> > obey Parliament. This means that in fact the British constitution
> > contains no protection against a runaway, tyrannical, "long"
> > Parliament. It promised not to do so in 1911 and confirmed that it
> > would be nice in 1949, but there is nothing in the British
> > constitution to prevent Parliament from enacting a new bill, as long
> > as it could get enough Peers in the House of Lords to wake up and
> > permit it to do so (Lords approval being required unlike money bills),
> > and HM the Queen to give Royal Assent.
>
> > When Kurt Godel was studying the booklet given him in Princeton to
> > pass the US Citizenship test, he claimed to find a bug that would
> > allow America to be a dictatorship. I think he'd be even more
> > terrified of the British constitution, for like his self-reflexive
> > paradoxical statement in his incompleteness/inconsistency result, the
> > very power of Parliament renders it impotent to write a Constitution!
>
> > Whereas .Net and Java provide "Constitutional" safeguards against code
> > doing nasty things even as the American constitution was intended to
> > be, and to some practical extent is, "a machine that runs of itself".
>
> > Both constitutions can fail, but the British constitution is more
> > likely to. It enabled Margaret Thatcher to rule by decree and override
> > even her own Cabinet, and ramrod through a medieval "poll tax" in 1990
> > that produced civil disturbances. Britons enjoy human rights mostly
> > through the EU. Whereas misuse of the American constitution during
> > Bush's administration was more vigorously resisted especially in its
> > courts, where the judges are truly independent.
>
> > It is true that a massive "bug" in the American constitution developed
> > in 1860 with the outbreak of civil war, but this was extra-
> > Constitutional. It resulted from a deliberate misinterpretation of
> > state's rights under the Tenth Amendment in which the states retained
> > a "nullifying" level of sovereignity, but their assent to the
> > Constitution in 1789 had itself nullified this strong interpretation
> > of "state's rights".
>
> > Since 1689, no such "bug" has occured in the British constitution.
> > However, the British constitution existed before 1689, and its bug was
> > just as serious, for it produced the English civil war. This was
> > because there is no provision in the British constitution for a pig-
> > headed king, and King Charles II could conceivably in the future
> > refuse Royal Assent to needed legislation, or use the British Army
> > (which is NOT under the control of Parliament, but of the Monarch to
> > whom officers swear fealty) against his own people.
>
> > C Sharp programs can fail as can the American Constitution. But the
> > idiotic equation of the reliability of C and C Sharp in fact resembles
> > the political passivity of Britons who talk darkly of the EU being a
> > "new world order" destroying their "rights as Englishmen" when in fact
> > it's the best thing that ever happened to them. And, I've not
> > addressed how the rights of Irishmen have been abused under the
> > British constitution.
>
> > I'm for one tired of the Urban Legends of the lower middle class,
> > whether in programming or politics.
>
> OT, but *much* more interesting than your rambles about "C:The
> Complete Nonsense", etc.
> What about the memory footprint of C vs C#? Sure, .Net code is
> compiled rather than interpreted so it doesn't have a huge time
> penalty - but it is compiled to an intermediate language designed to
> run in a *huge* virtual machine. Bloatware which is only slightly
> slower than nonbloated code is still bloated and still slower. Also, a
> much more interesting question (than simple factorial calculations) is
> how something like Microsoft Excel would work if its code base was
> rewritten from C to C#. I suspect that the cumulative slowdowns would
> result in a spreadsheet which was annoyingly slower.

Newer spreadsheets such as Google's are faster, but they're mostly
written in C, probably (by highly competent people) owing to
inertia...not in Java and not in C Sharp.

In the example, the C DLL is 5120 bytes whereas the C Sharp DLL is
7680 bytes. Again, not even an order of magnitude, and you get greater
safety for the so-called "bloat".

It's barbaric, given Moore's law, to so overfocus in such a Puritan
way on "bloat" when the number one issue, as Dijkstra said in 2000, is
"not making a mess of it".
>
> Out of curiosity, what rules of grammar does the sentence "C is more
> efficient than C#." violate?

0
spinoza1111
12/28/2009 3:31:20 PM
On Dec 28, 1:25 am, "bartc" <ba...@freeuk.com> wrote:
> "spinoza1111" <spinoza1...@yahoo.com> wrote in message
>
> news:c3d9c29f-0f7b-41e3-9478-ce62d98b7c76@d4g2000pra.googlegroups.com...
>
> >A C and a C Sharp program was written to calculate the 64-bit value of
> > 19 factorial one million times, using both the iterative and recursive
> > methods to solve (and compare) the results
>
> > Here is the C code.
> > Here is the C Sharp code.
> > The C Sharp code runs at 110% of the speed of the C code, which may
> > seem to "prove" the half-literate Urban Legend that "C is more
> > efficient than C Sharp or VM/bytecode languages in general, d'oh".
>
> C# was 10% faster? On my 32-bit machine, the C code was generally faster,
> depending on compilers and options.

No, dear boy. 110% means that C Sharp is ten percent SLOWER, and it
means I don't care that it is.
>
> But changing both to use 32-bit arithmetic, the C was nearly double the
> speed of C#, which is what is to be expected.

OK, let's try that at home...

...of course, you do know that the value overflows for C Sharp int and
C long arithmetic. The maximum value we can compute is 12 factorial...

...to maintain realistic execution times, we must change both versions
to execute ten million times.

Oops. Check this out! Here, C Sharp is actually faster!

C Sharp result: The factorial of 12 is 479001600: 00:00:06.8750000
seconds to calculate 10000000 times
C result: 12! is 479001600: 8.00 seconds to calculate 10000000 times

In other words, you changed your version of the code to calculate the
wrong answer "twice as fast" as C Sharp. In other words, you made the
code significantly less "powerful" to get the wrong result, and when
we fix it, C runs slower. This is probably because cache usage, which
is easily implemented safely in a hosted VM, becomes more and more
important the more the same code is executed on similar data.


>
> That's not bad, but C# does need a very complex environment to run, so
> cannot be used as a replacement for many uses of C, and it is still slower.

Excuse me, but that's a canard. C, to run efficiently, needs a
complete optimizer. A VM is less complex. Architectures have in fact
been tuned in very complex ways to run C adequately, and that tuning
has caused errors, perhaps including the Intel arithmetic bug.

Whereas in OO designs such as are effectively supported by VM-hosted
languages like C Sharp and Java, data can be retained in stateful
objects, making caching at all levels a natural gesture. For example,
a stateful factorial class can save each factorial calculated over its
execution lifetime. A C program cannot do this without saving material
at a global level that's overly visible to other modules.

>
> (BTW your timing routines in the C code seem to round to the nearest second;
> not very useful for that purpose.)

Yes...the timing routines standard in C SHARP EXPRESS, shipped free to
script kiddies and even girl programmers worldwide, are superior to
the stuff in C.
>
> > Were C Sharp to be interpreted, the above C Sharp code would run very,
> > very slowly, but C Sharp isn't interpreted.
>
> There are interpreters, and interpreters. You could be looking at
> slow-downs, compared with C, of 3x to 300x, but usually there will be some
> benefits that make up for performance deficiencies in these kinds of
> benchmarks.

No. An interpreter has a fixed overhead K for each instruction, and in
programs like this, K is multiplied many, many times. C Sharp is not
interpreted save in the male programmer subconscious which because of
the very real emasculation he faces in the corporation, fears a
fantasy emasculation implicit in not using his father's language.

C sharp doesn't have this.

>
> C#, I believe, is executed as native code, after translation from MSIL or
> CIL or some such acronym.

That is more or less correct.
>
> --
> Bartc

0
spinoza1111
12/28/2009 3:55:44 PM
In article <786af291-2bc3-4f43-af7f-efacfa6b9d54@d20g2000yqh.googlegroups.com>,
spinoza1111  <spinoza1111@yahoo.com> wrote:
....
>Newer spreadsheets such as Google's are faster but they're mostly
>written in C, probably (by highly competent people) owing to
>inertia...not in Java and not in C Sharp.
>
>In the example, the C DLL is 5120 bytes whereas the C Sharp DLL is
>7680 bytes. Again, not even an order of magnitude, and you get greater
>safety for the so-called "bloat".
>
>It's barbaric, given Moore's law, to so overfocus in such a Puritan
>way on "bloat" when the number one issue, as Dijkstra said in 2000, is
>"not making a mess of it".

Obviously, neither you nor Dijkstra understand the true economic
underpinnings of the software biz.  The whole point is to be hot, sexy,
and broken.  Repeat that a few times to get the point.

Because, as I've said so many times, if they ever actually solve the
problem in the software world, the business is dead.  And this has
serious economic consequences for everyone.  Think job-loss, which is
generally considered the worst thing imaginable.

There are many examples of situations where programmers did actually
solve their problem and ended up on unemployment as a result.  At the
bigger, company level, think Windows XP.  MS basically solved the OS
problem with Windows XP.  They were thus forced to tear it up and start
over with Vista/Windows 7/etc.

0
gazelle
12/28/2009 3:58:29 PM
On Dec 28, 1:27 am, Jens Stuckelberger
<Jens_Stuckelber...@nowhere.net> wrote:
> On Sun, 27 Dec 2009 06:36:54 -0800, spinoza1111 wrote:
> > The C Sharp code runs at 110% of the speed of the C code, which may seem
> > to "prove" the half-literate Urban Legend that "C is more efficient than
> > C Sharp or VM/bytecode languages in general, d'oh".
>
>         Really? I would have thought that, at best, it proves that a
> particular C# implementation of a particular algorithm, when using a
> particular C# virtual machine, on a particular platform, has beaten a
> particular C implementation of a particular algorithm, with some
> particular C compiler and some particular compile-time settings, on a
> particular platform. This, assuming that one can believe you - and why
> should we?
>
>         If anything, you seem to have proved that you know nothing much
> about rigor, and that on this occasion you indulged in forcing facts to
> fit your preconceived notions. Way to go.

Blow me, dear chap. In fact, we can generalize from what we see, as
long as we take the results cum grano salis.
>
>         As for your ranting on the American and British constitutions -
> what kind of mental condition affects you?

I'd call it literacy, dear boy. You see, to help a student, I read AV
Dicey's Introduction to the Study of the Law of the Constitution and
Ian Loveland's Constitutional Law, Administrative Law, and Human
Rights cover to cover over a couple of weeks. Like Godel, I noticed
some quasi-mathematical paradoxes, in my case in the British
constitution. I thought them amusing.

0
spinoza1111
12/28/2009 3:59:01 PM
On Dec 28, 1:48 am, jacob navia <ja...@nospam.org> wrote:
> I transformed that program a bit. It calculates now factorial of 20, and
> it does that 10 million times.
>
> Note that the difftime function is not accurate. I used the utility
> "timethis".
> Machine: Intel i7 (8 cores) with 12GB RAM
>
> The results are:
> ---------------------------------------------------------------------------
> D:\temp>csc /o tfact.cs                                 C# Optimizations ON
> Microsoft (R) Visual C# 2008 Compiler version 3.5.30729.1
> for Microsoft (R) .NET Framework version 3.5
> Copyright (C) Microsoft Corporation. All rights reserved.
> D:\temp>timethis tfact
> TimeThis :  Command Line :  tfact
> TimeThis :    Start Time :  Sun Dec 27 18:33:53 2009
>
> The factorial of 20 is 2432902008176640000: 00:00:03.7460000 seconds to
> calculate 10000000 times
>
> TimeThis :  Command Line :  tfact
> TimeThis :    Start Time :  Sun Dec 27 18:33:53 2009
> TimeThis :      End Time :  Sun Dec 27 18:33:57 2009
> TimeThis :  Elapsed Time :  00:00:03.804
> ---------------------------------------------------------------------------
> D:\temp>cl -Ox tfact.c                                   C optimizations ON
> Microsoft (R) C/C++ Optimizing Compiler Version 15.00.21022.08 for x64
> Copyright (C) Microsoft Corporation.  All rights reserved.
> tfact.c
> Microsoft (R) Incremental Linker Version 9.00.21022.08
> Copyright (C) Microsoft Corporation.  All rights reserved.
> /out:tfact.exe
> tfact.obj
> D:\temp>timethis tfact
> TimeThis :  Command Line :  tfact
> TimeThis :    Start Time :  Sun Dec 27 18:34:10 2009
>
> The factorial of 20 is 2432902008176640000: 3.00 seconds to calculate
> 10000000 times
>
> TimeThis :  Command Line :  tfact
> TimeThis :    Start Time :  Sun Dec 27 18:34:10 2009
> TimeThis :      End Time :  Sun Dec 27 18:34:13 2009
> TimeThis :  Elapsed Time :  00:00:02.666
> D:\temp>
>
> ---------------------------------------------------------------------------
>
> The result is clear: C takes 2.666 seconds, C# takes 3.804 seconds

Previous chap had a glimmer of insight with which I partly agreed. We
do have to take individual results cum grano salis. We need to think
here only in terms of orders of magnitude, and you haven't shown, Mr.
Navia, and with all due respect, that C Sharp runs an order of
magnitude or more slower, which it would be if *le canard* that "C
Sharp is for girls and script kiddies because it is interpreted" were
true, which we have falsified.

Thanks for the better timer. However, part of the C problem is the
fact that suboptimal library routines appear first in documentation
whereas in C Sharp, best of breed is at the top.

0
spinoza1111
12/28/2009 4:02:40 PM
On Dec 28, 4:20 am, Seebs <usenet-nos...@seebs.net> wrote:
> > On Dec 27, 9:36 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> > Here is the C code.
> > long long factorial(long long N)
> > {
> >     long long nFactorialRecursive;
> >     long long nFactorialIterative;
> >     long long Nwork;
> >     if (N <= 2) return N;
> >     for ( nFactorialIterative = 1, Nwork = N;
> >           Nwork > 1;
> >           Nwork-- )
> >         nFactorialIterative *= Nwork;
> >     nFactorialRecursive = N * factorial(N-1);
> >     if (nFactorialRecursive != nFactorialIterative)
> >        printf("%I64d! is %I64d recursively but %I64d iteratively wtf!\n",
> >               N,
> >               nFactorialIterative,
> >               nFactorialRecursive);
> >     return nFactorialRecursive;
> > }
>
> I'm impressed.  Very few people could define an N^2 algorithm for calculating
> factorials.

Hold on to your hats, it's Scripto Boy, who's never taken a computer
science class...
>
> Hint:  When you call factorial(19), you calculate factorial(19) iteratively,
> and then you calculate 19 * factorial(18).  You then calculate factorial(18)
> iteratively, then calculate 18 * factorial(17).  Etcetera.
>
> In short, for factorial(19), instead of performing 38 multiplications and
> 19 calls, you perform 19 multiplications and 19 calls for the recursive
> calculation, plus 164 multiplications for the iterative calculations.
>
> This is not a reasonable way to go about things.  This is a pretty
> impressive screwup on your part, and you can't blame the language design;
> this is purely at the algorithm-design level, not any kind of mysterious
> quirk of C.
>
> Again, the problem isn't with C's design; it's that you are too muddled
> to design even a basic test using two algorithms, as you embedded one
> in another.
>
> Here's two test programs.  One's yours, but I switched to 'long double'
> and used 24! instead of 19! as the test case, and multiplied the number
> of trials by 10.
>
> The main loop is unchanged except for the change in N and the switch to
> %Lf.
>
> Yours:
>         long double factorial(long double N)
>         {
>             long double nFactorialRecursive;
>             long double nFactorialIterative;
>             long double Nwork;
>             if (N <= 2) return N;
>             for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
>                 nFactorialIterative *= Nwork;
>             nFactorialRecursive = N * factorial(N-1);
>             if (nFactorialRecursive != nFactorialIterative)
>                printf("%Lf! is %Lf recursively but %Lf iteratively wtf!\n",
>                       N,
>                       nFactorialIterative,
>                       nFactorialRecursive);
>             return nFactorialRecursive;
>         }
>
> Mine:
>
>         long double ifactorial(long double N)
>         {
>             long double nFactorialIterative;
>             long double Nwork;
>             if (N <= 2) return N;
>             for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
>                 nFactorialIterative *= Nwork;
>             return nFactorialIterative;
>         }
>
>         long double rfactorial(long double N)
>         {
>             long double nFactorialRecursive;
>             if (N <= 2) return N;
>             nFactorialRecursive = N * rfactorial(N-1);
>             return nFactorialRecursive;
>         }
>
>         long double factorial(long double N)
>         {
>             long double nFactorialRecursive;
>             long double nFactorialIterative;
>             nFactorialIterative = ifactorial(N);
>             nFactorialRecursive = rfactorial(N);
>             if (nFactorialRecursive != nFactorialIterative)
>                printf("%Lf! is %Lf recursively but %Lf iteratively wtf!\n",
>                       N,
>                       nFactorialIterative,
>                       nFactorialRecursive);
>             return nFactorialRecursive;
>         }
>
> Output from the main loops:
>
> 24! is 620448401733239409999872: 14.00 seconds to calculate 10000000 times
> 24! is 620448401733239409999872: 5.00 seconds to calculate 10000000 times
>
> ... Which is to say, no one cares whether C# is faster or slower than C
> by a few percent, when non-idiotic code is faster than idiotic code by
> nearly a factor of three.

You misunderstood the solution, Scripto boy, just as you misunderstood
Schildt; and as in the case of Schildt, you use your own mental
confusion to attack the credibility of your elders and betters. I
don't care that within each recursion I recalculate the next value of
N both ways, recursively and iteratively. Indeed, it was my goal to
get meaningful performance numbers by executing many, many
instructions repeatedly. Since I can look up factorials in seconds, I
did not need a new factorial program, as you so very foolishly seem to
believe.

My point was that the absence of "interpretive overhead" in C Sharp
caused it to run at (far) less than one order of magnitude slower than
the C code, and that by migrating to C Sharp one avoids the legacy
unsafety and stupidity of C, as well as its fragmentation into
mutually warring tribes, with the attendant bad feeling and politics
of personalities.

I suggest you return to school and acquire academic qualifications and
the ability to behave like a professional before ruining people's
reputations with falsehoods, Scripto boy.
>
> -s
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
12/28/2009 4:15:54 PM
On Dec 28, 4:20=A0am, Seebs <usenet-nos...@seebs.net> wrote:
> > On Dec 27, 9:36 =A0 am,spinoza1111<spinoza1...@yahoo.com> wrote:
> > Here is the C code.
> > long long factorial(long long N)
> > {
> > =A0 =A0 =A0 =A0 long long nFactorialRecursive;
> > =A0 =A0 =A0 =A0 long long nFactorialIterative;
> > =A0 =A0 =A0 =A0 long long Nwork;
> > =A0 =A0 =A0 =A0 if (N <=3D 2) return N;
> > =A0 =A0 =A0 =A0 for ( nFactorialIterative =3D 1, Nwork =3D N;
> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 Nwork > 1;
> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 Nwork-- )
> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 nFactorialIterative *=3D Nwork;
> > =A0 =A0 =A0 =A0 nFactorialRecursive =3D N * factorial(N-1);
> > =A0 =A0 =A0 =A0 if (nFactorialRecursive !=3D nFactorialIterative)
> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0printf("%I64d! is %I64d recursively but =
%I64d iteratively wtf!
> > \n",
> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 N,
> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 nFactorialItera=
tive,
> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 nFactorialRecur=
sive);
> > =A0 =A0 =A0 =A0 return nFactorialRecursive;
>
> > }
>
> I'm impressed. =A0Very few people could define an N^2 algorithm for calcu=
lating
> factorials.
>
> Hint: =A0When you call factorial(19), you calculate factorial(19) iterati=
vely,
> and then you calculate 19 * factorial(18). =A0You then calculate factoria=
l(18)
> iteratively, then calcualte 18 * factorial(17). =A0Etcetera.
>
> In short, for factorial(19), instead of performing 38 multiplications and
> 19 calls, you perform 19 multiplications and 19 calls for the recursive
> calculation, plus 164 multiplications for the iterative calculations.
>
> This is not a reasonable way to go about things. =A0This is a pretty
> impressive screwup on your part, and you can't blame the language design;
> this is purely at the algorithm-design level, not any kind of mysterious
> quirk of C.
>
> Again, the problem isn't with C's design; it's that you are too muddled
> to design even a basic test using two algorithms, as you embedded one
> in another.
>
> Here's two test programs. =A0One's yours, but I switched to 'long double'
> and used 24! instead of 19! as the test case, and multiplied the number
> of trials by 10.
>
> The main loop is unchanged except for the change in N and the switch to
> %Lf.
>
> Yours:
>         long double factorial(long double N)
>         {
>             long double nFactorialRecursive;
>             long double nFactorialIterative;
>             long double Nwork;
>             if (N <= 2) return N;
>             for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
>                 nFactorialIterative *= Nwork;
>             nFactorialRecursive = N * factorial(N-1);
>             if (nFactorialRecursive != nFactorialIterative)
>                printf("%Lf! is %Lf recursively but %Lf iteratively wtf!\n",
>                       N,
>                       nFactorialIterative,
>                       nFactorialRecursive);
>             return nFactorialRecursive;
>         }
>
> Mine:
>
>         long double ifactorial(long double N)
>         {
>             long double nFactorialIterative;
>             long double Nwork;
>             if (N <= 2) return N;
>             for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
>                 nFactorialIterative *= Nwork;
>             return nFactorialIterative;
>         }
>
>         long double rfactorial(long double N)
>         {
>             long double nFactorialRecursive;
>             if (N <= 2) return N;
>             nFactorialRecursive = N * rfactorial(N-1);
>             return nFactorialRecursive;
>         }
>
>         long double factorial(long double N)
>         {
>             long double nFactorialRecursive;
>             long double nFactorialIterative;
>             nFactorialIterative = ifactorial(N);
>             nFactorialRecursive = rfactorial(N);
>             if (nFactorialRecursive != nFactorialIterative)
>                printf("%Lf! is %Lf recursively but %Lf iteratively wtf!\n",
>                       N,
>                       nFactorialIterative,
>                       nFactorialRecursive);
>             return nFactorialRecursive;
>         }
>
> Output from the main loops:
>
> 24! is 620448401733239409999872: 14.00 seconds to calculate 10000000 times
> 24! is 620448401733239409999872: 5.00 seconds to calculate 10000000 times
>
> ... Which is to say, no one cares whether C# is faster or slower than C
> by a few percent, when non-idiotic code is faster than idiotic code by
> nearly a factor of three.
>
> -s
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

Also, in terms of stylistic communication, it's a mistake to use "long
double" for a mathematical function that is defined only for
integers, although it is not likely here that double-precision errors
would play a role. Long double doesn't even make a difference in
Microsoft C++ as far as I know.

So, "we" are not impressed with your stunt. You optimized code not
meant to be optimized, removing "our" ability to detect, over some
sufficiently long execution time, whether "C is an order of
magnitude faster than C sharp"; at that point, and only at that
point, would "we" be interested in converting back to C. You
calculated the exact value of 24 factorial faster for no reason,
whereas I ran code side by side to test whether C provides
sufficiently dramatic and repeatable performance savings to warrant
even considering its use. I proved it does not.

As to lists and hash tables: C Sharp provides these precoded using
state of the art and best of breed algorithms written by really cool
guys, perhaps Ray Chen and Adam Barr. Whereas in C, one's always being
advised to visit some really creepy site to download some creep's
code, bypassing exhortations to come to Jesus and advertisements for
ammo. As King Lear said, it would be a "delicate stratagem", not to
shoe a troop of horse with felt, but to write lists and hash tables;
but we grownups have better things to do at this time.

Peter Seebach, we are not amused.
0
spinoza1111
12/28/2009 4:28:11 PM
On Dec 28, 11:58 pm, gaze...@shell.xmission.com (Kenny McCormack)
wrote:
> In article <786af291-2bc3-4f43-af7f-efacfa6b9...@d20g2000yqh.googlegroups.com>,
> spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> ...
>
> >Newer spreadsheets such as Google's are faster but they're mostly
> >written in C, probably (by highly competent people) owing to
> >inertia...not in Java and not in C Sharp.
>
> >In the example, the C DLL is 5120 bytes whereas the C Sharp DLL is
> >7680 bytes. Again, not even an order of magnitude, and you get greater
> >safety for the so-called "bloat".
>
> >It's barbaric, given Moore's law, to so overfocus in such a Puritan
> >way on "bloat" when the number one issue, as Dijkstra said in 2000, is
> >"not making a mess of it".
>
> Obviously, neither you nor Dijkstra understand the true economic
> underpinnings of the software biz.  The whole point is to be hot, sexy,
> and broken.  Repeat that a few times to get the point.
>
> Because, as I've said so many times, if they ever actually solve the
> problem in the software world, the business is dead.  And this has
> serious economic consequences for everyone.  Think job-loss, which is
> generally considered the worst thing imaginable.
>
> There are many examples of situations where programmers did actually
> solve their problem and ended up on unemployment as a result.  At the
> bigger, company level, think Windows XP.  MS basically solved the OS
> problem with Windows XP.  They were thus forced to tear it up and start
> over with Vista/Windows 7/etc.

ROTFLMAO. Hey, is that why I'm a teacher in Asia? Izzit cuz I was such
a great programmer, actually providing Bell Northern Research and
Princeton with stuff that worked, instead of posting nasty notes
about Herb Schildt? I like to think I prefer to be a teacher, but
perhaps I'm Deludo Boy.

Thanks as always for another five star and amusing remark. Have one on
me.

She's hot sexy and broke
And that ain't no joke
0
spinoza1111
12/28/2009 4:32:35 PM
On Dec 28, 12:09 am, scattered <still.scatte...@gmail.com> wrote:
> On Dec 27, 9:36 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
>
>
>
>
> > A C and a C Sharp program was written to calculate the 64-bit value of
> > 19 factorial one million times, using both the iterative and recursive
> > methods to solve (and compare) the results
>
> > Here is the C code.
>
> > #include <stdio.h>
> > #include <time.h>
>
> > long long factorial(long long N)
> > {
> >     long long nFactorialRecursive;
> >     long long nFactorialIterative;
> >     long long Nwork;
> >     if (N <= 2) return N;
> >     for ( nFactorialIterative = 1, Nwork = N;
> >           Nwork > 1;
> >           Nwork-- )
> >         nFactorialIterative *= Nwork;
> >     nFactorialRecursive = N * factorial(N-1);
> >     if (nFactorialRecursive != nFactorialIterative)
> >        printf("%I64d! is %I64d recursively but %I64d iteratively wtf!\n",
> >               N,
> >               nFactorialIterative,
> >               nFactorialRecursive);
> >     return nFactorialRecursive;
>
> > }
>
> > int main(void)
> > {
> >     long long N;
> >     long long Nfactorial;
> >     double dif;
> >     long long i;
> >     long long K;
> >     time_t start;
> >     time_t end;
> >     N = 19;
> >     K = 1000000;
> >     time (&start);
> >     for (i = 0; i < K; i++)
> >         Nfactorial = factorial(N);
> >     time (&end);
> >     dif = difftime (end,start);
> >     printf("%I64d! is %I64d: %.2f seconds to calculate %I64d times\n",
> >            N, Nfactorial, dif, K);
> >     return 0; // Gee is that right?
>
> > }
>
> > Here is the C Sharp code.
>
> > using System;
> > using System.Collections.Generic;
> > using System.Linq;
> > using System.Text;
>
> > namespace N_factorial
> > {
> >     class Program
> >     {
> >         static void Main(string[] args)
> >         {
> >             long N;
> >             long Nfactorial = 0;
> >             TimeSpan dif;
> >             long i;
> >             long K;
> >             DateTime start;
> >             DateTime end;
> >             N = 19;
> >             K = 1000000;
> >             start = DateTime.Now;
> >             for (i = 0; i < K; i++)
> >                 Nfactorial = factorial(N);
> >             end = DateTime.Now;
> >             dif = end - start;
> >             Console.WriteLine
> >                 ("The factorial of " +
> >                  N.ToString() + " is " +
> >                  Nfactorial.ToString() + ": " +
> >                  dif.ToString() + " " +
> >                  "seconds to calculate " +
> >                  K.ToString() + " times");
> >             return;
> >         }
>
> >         static long factorial(long N)
> >         {
> >             long nFactorialRecursive;
> >             long nFactorialIterative;
> >             long Nwork;
> >             if (N <= 2) return N;
> >             for ( nFactorialIterative = 1, Nwork = N;
> >                   Nwork > 1;
> >                   Nwork-- )
> >                 nFactorialIterative *= Nwork;
> >             nFactorialRecursive = N * factorial(N-1);
> >             if (nFactorialRecursive != nFactorialIterative)
> >                 Console.WriteLine
> >                 ("The iterative factorial of " +
> >                  N.ToString() + " " +
> >                  "is " +
> >                  nFactorialIterative.ToString() + " " +
> >                  "but its recursive factorial is " +
> >                  nFactorialRecursive.ToString());
> >             return nFactorialRecursive;
> >         }
> >     }
>
> > }
>
> > The C Sharp code runs at 110% of the speed of the C code, which may
> > seem to "prove" the half-literate Urban Legend that "C is more
> > efficient than C Sharp or VM/bytecode languages in general, d'oh".
>
> > As I take pains to point out in my book, "Build Your Own .Net Language
> > and Compiler" (Apress 2004) (buy it now buy it now), it's not even
> > grammatical to say that a programming language is more "efficient"
> > than another pl.
>
> > But far more significantly: the ten percent "overhead" would be
> > several orders of magnitude were C Sharp to be an "inefficient,
> > interpreted language" which many C programmers claim it is. That is
> > because a true interpreter parses and/or unpacks each instruction when
> > it is executed, and both of the above examples execute their
> > instructions millions of times.
>
> > Were C Sharp to be interpreted, the above C Sharp code would run very,
> > very slowly, but C Sharp isn't interpreted.
>
> > Instead, a one-time modification is made to the byte code upon loading
> > to thread the codes together. This explains part of the ten percent
> > "overhead". For the remainder of execution, a sort of switch statement
> > is operating in which the code for individual byte codes using go to
> > to transfer control. This means that C and C Sharp execute at the same
> > effective rate of speed, and the ONLY efficiency-based reason for
> > choosing C is avoiding the initial overhead of setting up the .Net
> > virtual machine.
>
> > But what does this virtual machine provide? Almost 100 percent safety
> > against memory leaks and many other bad things.
>
> > Indeed, C is like the (unwritten) British constitution. In that
> > arrangement, Parliament cannot "entrench" an act that would bind all
> > subsequent Parliaments, because Parliamentary supremacy (like the
> > putative power of C) must at all costs be preserved: this was the
> > innovation of 1688/9, when Parliament hired King William and his
> > Better Half as Kingie and Queenie on condition that they be nice and
> > obey Parliament. This means that in fact the British constitution
> > contains no protection against a runaway, tyrannical, "long"
> > Parliament. It promised not to do so in 1911 and confirmed that it
> > would be nice in 1949, but there is nothing in the British
> > constitution to prevent Parliament from enacting a new bill, as long
> > as it could get enough Peers in the House of Lords to wake up and
> > permit it to do so (Lords approval being required unlike money bills),
> > and HM the Queen to give Royal Assent.
>
> > When Kurt Godel was studying the booklet given him in Princeton to
> > pass the US Citizenship test, he claimed to find a bug that would
> > allow America to be a dictatorship. I think he'd be even more
> > terrified of the British constitution, for like his self-reflexive
> > paradoxical statement in his incompleteness/inconsistency result, the
> > very power of Parliament renders it impotent to write a Constitution!
>
> > Whereas .Net and Java provide "Constitutional" safeguards against code
> > doing nasty things even as the American constitution was intended to
> > be, and to some practical extent is, "a machine that runs of itself".
>
> > Both constitutions can fail, but the British constitution is more
> > likely to. It enabled Margaret Thatcher to rule by decree and override
> > even her own Cabinet, and ramrod through a medieval "poll tax" in 1990
> > that produced civil disturbances. Britons enjoy human rights mostly
> > through the EU. Whereas misuse of the American constitution during
> > Bush's administration was more vigorously resisted especially in its
> > courts, where the judges are truly independent.
>
> > It is true that a massive "bug" in the American constitution developed
> > in 1860 with the outbreak of civil war, but this was extra-
> > Constitutional. It resulted from a deliberate misinterpretation of
> > state's rights under the Tenth Amendment in which the states retained
> > a "nullifying" level of sovereignity, but their assent to the
> > Constitution in 1789 had itself nullified this strong interpretation
> > of "state's rights".
>
> > Since 1689, no such "bug" has occured in the British constitution.
> > However, the British constitution existed before 1689, and its bug was
> > just as serious, for it produced the English civil war. This was
> > because there is no provision in the British constitution for a pig-
> > headed king, and King Charles II could conceivably in the future
> > refuse Royal Assent to needed legislation, or use the British Army
> > (which is NOT under the control of Parliament, but of the Monarch to
> > whom officers swear fealty) against his own people.
>
> > C Sharp programs can fail as can the American Constitution. But the
> > idiotic equation of the reliability of C and C Sharp in fact resembles
> > the political passivity of Britons who talk darkly of the EU being a
> > "new world order" destroying their "rights as Englishmen" when in fact
> > it's the best thing that ever happened to them. And, I've not
> > addressed how the rights of Irishmen have been abused under the
> > British constitution.
>
> > I'm for one tired of the Urban Legends of the lower middle class,
> > whether in programming or politics.
>
> OT, but *much* more interesting than your rambles about "C:The
> Complete Nonsense", etc.
> What about the memory footprint of C vs C#? Sure, .Net code is
> compiled rather than interpreted so it doesn't have a huge time
> penalty - but it is compiled to an intermediate language designed to
> run in a *huge* virtual machine. Bloatware which is only slightly
> slower than nonbloated code is still bloated and still slower. Also, a
> much more interesting question (than simple factorial calculations) is
> how something like Microsoft Excel would work if its code base was
> rewritten from C to C#. I suspect that the cumulative slowdowns would
> result in a spreadsheet which was annoyingly slower.
>
> Out of curiosity, what rules of grammar does the sentence "C is more
> efficient than C#." violate?

Glad you asked. It violates a semantic rule. A LANGUAGE cannot be
"efficient" unless it must be executed by a strict interpreter (code
that, unlike a Java or .Net virtual machine, does something that takes
K time each time each instruction is executed). But, of course, C
Sharp can be compiled. Sure, extra instructions are compiled in that
check array bounds, but these are very nice things if one takes
responsibility for one's code after some clown has changed it.

Bjarne Stroustrup somewhere writes about programmers who claim that it
is "inefficient" to keep run time tests and asserts() in code after it
is released to "production". He compares them to sailors who leave
port without lifeboats or life jackets.

Most programmers have only a crude and overly normative understanding
of what "efficiency" is. It's not raw and blinding speed. For example,
Peter Seebach thinks it was more "efficient" to take loops out that
needed to stay in code meant to be a simple benchmark.

Indeed, many times, "inefficient" means "I don't want to think".

"Far from perceiving such prohibitions on thought as something
hostile, the candidates – and all scientists are candidates – feel
relieved. Because thinking burdens them with a subjective
responsibility, which their objective position in the production-
process prevents them from fulfilling, they renounce it, shake a bit
and run over to the other side. The displeasure of thinking soon turns
into the incapacity to think at all: people who effortlessly invent
the most refined statistical objections, when it is a question of
sabotaging a cognition, are not capable of making the simplest
predictions of content ex cathedra [Latin: from the chair, e.g. Papal
decision]. They lash out at the speculation and in it kill common
sense. The more intelligent of them have an inkling of what ails their
mental faculties, because the symptoms are not universal, but appear
in the organs, whose service they sell. Many still wait in fear and
shame, at being caught with their defect. All however find it raised
publicly to a moral service and see themselves being recognized for a
scientific asceticism, which is nothing of the sort, but the secret
contour of their weakness. Their resentment is socially rationalized
under the formula: thinking is unscientific. Their intellectual energy
is thereby amplified in many dimensions to the utmost by the mechanism
of control. The collective stupidity of research technicians is not
simply the absence or regression of intellectual capacities, but an
overgrowth of the capacity of thought itself, which eats away at the
latter with its own energy. The masochistic malice [Bosheit] of young
intellectuals derives from the malevolence [Bösartigkeit] of their
illness."

TW Adorno, Minima Moralia (Reflections on Damaged Life) 1948

0
spinoza1111
12/28/2009 4:51:05 PM
Kenny McCormack wrote:
>
> In article <786af291-2bc3-4f43-af7f-efacfa6b9d54@d20g2000yqh.googlegroups.com>,
> spinoza1111  <spinoza1111@yahoo.com> wrote:
> ...
>>Newer spreadsheets such as Google's are faster but they're mostly
>>written in C, probably (by highly competent people) owing to
>>inertia...not in Java and not in C Sharp.
>>
>>In the example, the C DLL is 5120 bytes whereas the C Sharp DLL is
>>7680 bytes. Again, not even an order of magnitude, and you get greater
>>safety for the so-called "bloat".
>>
>>It's barbaric, given Moore's law, to so overfocus in such a Puritan
>>way on "bloat" when the number one issue, as Dijkstra said in 2000, is
>>"not making a mess of it".
>
> Obviously, neither you nor Dijkstra understand the true economic
> underpinnings of the software biz.  The whole point is to be hot, sexy,
> and broken.  Repeat that a few times to get the point.

It seems that you consider yourself quite knowledgeable and expert,
even superior to Dijkstra, yet you offer so little actual news or
information actually about C.

In fact I have to wonder, do you ever actually contribute anything
about the C programming language?

> Because, as I've said so many times, if they ever actually solve the
> problem in the software world, the business is dead.  And this has
> serious economic consequences for everyone.  Think job-loss, which is
> generally considered the worst thing imaginable.
>
> There are many examples of situations where programmers did actually
> solve their problem and ended up on unemployment as a result.  At the
> bigger, company level, think Windows XP.  MS basically solved the OS
> problem with Windows XP.  They were thus forced to tear it up and start
> over with Vista/Windows 7/etc.

This sort of sarcasm, petty whining, attempted humor, ... can get
boorish when applied, in your words, "many times". Just a thought but
the phrase "you can be part of the problem, or you can be part of the
solution" springs to mind often when trying to follow threads. 

As for the topic/rant in question, if I ever set out to develop a
project for only a MS proprietary platform I'll probably care about C#
vs options but since my normal development world is not MS the
question isn't really important for me personally. My world also
slants more towards system stuff than the latest shiny user app - in
other words I'm not fascinated by eye candy. I seem to be in a
minority admittedly. But in my world the fact that C is used to build
C tools intuitively feels like a win. No intended swipe here but does
a C# compiler for C# exist? With source for study?
0
stan
12/28/2009 7:43:03 PM
spinoza1111 wrote:
> On Dec 28, 1:25 am, "bartc" <ba...@freeuk.com> wrote:
>> "spinoza1111" <spinoza1...@yahoo.com> wrote in message
>>
>> news:c3d9c29f-0f7b-41e3-9478-ce62d98b7c76@d4g2000pra.googlegroups.com...
>>
>>> A C and a C Sharp program was written to calculate the 64-bit value
>>> of 19 factorial one million times, using both the iterative and
>>> recursive methods to solve (and compare) the results
>>
>>> Here is the C code.
>>> Here is the C Sharp code.
>>> The C Sharp code runs at 110% of the speed of the C code, which may
>>> seem to "prove" the half-literate Urban Legend that "C is more
>>> efficient than C Sharp or VM/bytecode languages in general, d'oh".
>>
>> C# was 10% faster? On my 32-bit machine, the C code was generally
>> faster, depending on compilers and options.
>
> No, dear boy. 110% means that C Sharp is ten percent SLOWER, and it
> means I don't care that it is.

So about 90% of the speed of C.

>>
>> But changing both to use 32-bit arithmetic, the C was nearly double
>> the speed of C#, which is what is to be expected.
>
> OK, let's try that at home...
>
> ...of course, you do know that the value overflows for C Sharp int and
> C long arithmetic. The maximum value we can compute is 12 factorial...

I didn't care; both versions overflowed to give the same wrong result. I
just didn't want the results dominated by attempting 64-bit multiplies
on my 32-bit machine.

> ...to maintain realistic execution times, we must change both versions
> to execute ten million times.
>
> Oops. Check this out! Here, C Sharp is actually faster!
>
> C Sharp result: The factorial of 12 is 479001600: 00:00:06.8750000
> seconds to calculate 10000000 times
> C result: 12! is 479001600: 8.00 seconds to calculate 10000000 times

Check again. I got 5.4 seconds for C# and  3.0 seconds for C. Are you using
32-bits in C, where it's not always obvious?

> In other words, you changed your version of the code to calculate the
> wrong answer "twice as fast" as C Sharp. In other words, you made the
> code significantly less "powerful" to get the wrong result, and when
> we fix it, C runs slower. This is probably because cache usage, which
> is easily implemented safely in a hosted VM, becomes more and more
> important the more the same code is executed on similar data.

In this tiny program everything should fit into the cache.

>> That's not bad, but C# does need a very complex environment to run,
>> so cannot be used as a replacement for many uses of C, and it is
>> still slower.
>
> Excuse me, but that's a canard. C, to run efficiently, needs a
> complete optimizer. A VM is less complex. Architectures have in fact
> been tuned in very complex ways to run C adequately in ways that have
> caused errors, perhaps including the Intel arithmetic bug.

An optimising C compiler is a few MB, and is not needed to run the result.
To compile your C# code, I needed a 63MB download. Then I needed .NET which
probably was already in my OS, but appears to 'take up to 500MB of hard disk
space'.

The C version would have required a few tens of KB in total to run.

>> (BTW your timing routines in the C code seem to round to the nearest
>> second; not very useful for that purpose.)
>
> Yes...the timing routines standard in C SHARP EXPRESS, shipped free to
> script kiddies and even girl programmers world wide, are superior to
> the stuff in C.

C's clock() function gives elapsed times in msec.

-- 
Bartc 

0
bartc
12/28/2009 7:48:22 PM
In article <7ben07-6hc.ln1@invalid.net>, stan  <smoore@exis.net>
attempted a bunch of lame flames in my direction:
(snip)

What rock did you just climb out from under?

(Or, equivalently, which reg are you a sock puppet of?)

0
gazelle
12/28/2009 7:50:30 PM
On 2009-12-28, stan <smoore@exis.net> wrote:
> In fact I have to wonder, do you ever actually contribute anything
> about the C programming language?

I saw it happen once.

Oddly, over in comp.unix.shell, he actually contributes topical material
and shows technical understanding, so it's not a general incapacity.  I
guess he's just butthurt about comp.lang.c for some reason.

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/28/2009 7:51:43 PM
"bartc" <bartc@freeuk.com> writes:
[...]
> C's clock() function gives elapsed times in msec.

No, it gives CPU time (not elapsed time), expressed as a value
of type clock_t.  To get the time in seconds, the result must be
scaled by dividing by CLOCKS_PER_SEC.

On one system I just tried, clock_t is compatible with signed long,
and CLOCKS_PER_SEC is 1000000; on another, clock_t is compatible
with unsigned long, and CLOCKS_PER_SEC is 1000.  (Note that on the
first system I tried, clock_t values will overflow in less than 36
CPU minutes.)

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/28/2009 8:04:41 PM
Seebs wrote:
>
>
> On 2009-12-28, stan <smoore@exis.net> wrote:
>> In fact I have to wonder, do you ever actually contribute anything
>> about the C programming language?
>
> I saw it happen once.
>
> Oddly, over in comp.unix.shell, he actually contributes topical material
> and shows technical understanding, so it's not a general incapacity.  I
> guess he's just butthurt about comp.lang.c for some reason.

I can add the same observation for awk stuff where he appears lucid,
but I have to admit I question more and more of what I see from
him. When I find myself agreeing with him I start to wonder if I AM wrong.

I must agree with your word choice "odd".
0
stan
12/28/2009 9:22:02 PM
On Dec 29, 3:43 am, stan <smo...@exis.net> wrote:
> Kenny McCormack wrote:
>
> > In article <786af291-2bc3-4f43-af7f-efacfa6b9...@d20g2000yqh.googlegroups.com>,
> > spinoza1111 <spinoza1...@yahoo.com> wrote:
> > ...
> >>Newer spreadsheets such as Google's are faster but they're mostly
> >>written in C, probably (by highly competent people) owing to
> >>inertia...not in Java and not in C Sharp.
>
> >>In the example, the C DLL is 5120 bytes whereas the C Sharp DLL is
> >>7680 bytes. Again, not even an order of magnitude, and you get greater
> >>safety for the so-called "bloat".
>
> >>It's barbaric, given Moore's law, to so overfocus in such a Puritan
> >>way on "bloat" when the number one issue, as Dijkstra said in 2000, is
> >>"not making a mess of it".
>
> > Obviously, neither you nor Dijkstra understand the true economic
> > underpinnings of the software biz.  The whole point is to be hot, sexy,
> > and broken.  Repeat that a few times to get the point.
>
> It seems that you consider yourself quite knowledgeable and expert,
> even superior to Dijkstra, yet you offer so little actual news or
> information actually about C.
>
> In fact I have to wonder, do you ever actually contribute anything
> about the C programming language?
>
> > Because, as I've said so many times, if they ever actually solve the
> > problem in the software world, the business is dead.  And this has
> > serious economic consequences for everyone.  Think job-loss, which is
> > generally considered the worst thing imaginable.
>
> > There are many examples of situations where programmers did actually
> > solve their problem and ended up on unemployment as a result.  At the
> > bigger, company level, think Windows XP.  MS basically solved the OS
> > problem with Windows XP.  They were thus forced to tear it up and start
> > over with Vista/Windows 7/etc.
>
> This sort of sarcasm, petty whining, attempted humor, ... can get
> boorish when applied, in your words, "many times". Just a thought but
> the phrase "you can be part of the problem, or you can be part of the
> solution" springs to mind often when trying to follow threads.

The fact is that corporate programmers do have short careers which
tend not to be adequate to raise a family: studies have found that
many programmers find it almost impossible to get work after more than
ten years of experience, even if they retrain.

I was an anomaly not because I retrained, although I learned C in the
1980s and .Net more recently, but because I work out regularly even
today and therefore look significantly younger than I am, with the
result that I worked continuously from Nov 1971 to 2005.

Slogans from the 1960s don't change the fact that programmers, as
people who carry out the will of most uncaring managements in the most
excruciating detail, are treated like dirt, and respond not with
solidarity but with regression, here, to what Adorno called "the
nightmare of childhood": Fascism.

>
> As for the topic/rant in question, if I ever set out to develop a
> project for only a MS proprietary platform I'll probably care about C#
> vs options but since my normal development world is not MS the
> question isn't really important for me personally. My world also
> slants more towards system stuff than the latest shiny user app - in
> other words I'm not fascinated by eye candy. I seem to be in a
> minority admittedly. But in my world the fact that C is used to build
> C tools intuitively feels like a win. No intended swipe here but does
> a C# compiler for C# exist? With source for study?

I wrote and published, for study, a compiler for most of Quick Basic,
written in Visual Basic .Net. Cf. "Build Your Own .Net Language and
Compiler", Edward G Nilges, Apress 2004.

There's no reason why a C Sharp compiler couldn't be written for C
Sharp.

0
spinoza1111
12/29/2009 1:35:04 AM
On Dec 29, 3:48 am, "bartc" <ba...@freeuk.com> wrote:
> spinoza1111 wrote:
> > On Dec 28, 1:25 am, "bartc" <ba...@freeuk.com> wrote:
> >> "spinoza1111" <spinoza1...@yahoo.com> wrote in message
>
> >>news:c3d9c29f-0f7b-41e3-9478-ce62d98b7c76@d4g2000pra.googlegroups.com...
>
> >>> A C and a C Sharp program was written to calculate the 64-bit value
> >>> of 19 factorial one million times, using both the iterative and
> >>> recursive methods to solve (and compare) the results
>
> >>> Here is the C code.
> >>> Here is the C Sharp code.
> >>> The C Sharp code runs at 110% of the speed of the C code, which may
> >>> seem to "prove" the half-literate Urban Legend that "C is more
> >>> efficient than C Sharp or VM/bytecode languages in general, d'oh".
>
> >> C# was 10% faster? On my 32-bit machine, the C code was generally
> >> faster, depending on compilers and options.
>
> > No, dear boy. 110% means that C Sharp is ten percent SLOWER, and it
> > means I don't care that it is.
>
> So about 90% of the speed of C.

Yes. And you get much higher levels of built-in reality for that ten
percent.
>
>
>
> >> But changing both to use 32-bit arithmetic, the C was nearly double
> >> the speed of C#, which is what is to be expected.
>
> > OK, let's try that at home...
>
> > ...of course, you do know that the value overflows for C Sharp int and
> > C long arithmetic. The maximum value we can compute is 12 factorial...
>
> I didn't care; both versions overflowed to give the same wrong result. I
> just didn't want the results dominated by the attempted 64-bit multiplies
> on my 32-bit machine.
>
> > ...to maintain realistic execution times, we must change both versions
> > to execute ten million times.
>
> > Oops. Check this out! Here, C Sharp is actually faster!
>
> > C Sharp result: The factorial of 12 is 479001600: 00:00:06.8750000
> > seconds to calculate 10000000 times
> > C result: 12! is 479001600: 8.00 seconds to calculate 10000000 times
>
> Check again. I got 5.4 seconds for C# and 3.0 seconds for C. Are you using
> 32-bits in C, where it's not always obvious?

Yes, and it's obvious. I am using int in C Sharp and long in C, where
"int" in C Sharp means 32 bits and "long" in C means 32 bits.

>
> > In other words, you changed your version of the code to calculate the
> > wrong answer "twice as fast" as C Sharp. In other words, you made the
> > code significantly less "powerful" to get the wrong result, and when
> > we fix it, C runs slower. This is probably because cache usage, which
> > is easily implemented safely in a hosted VM, becomes more and more
> > important the more the same code is executed on similar data.
>
> In this tiny program everything should fit into the cache.

Hmm..."the" cache. There is "a" stack at runtime, and Schildt was
correct in using the definite article. But "the" cache? Anyway, it
appears that if there was "a" cache, it worked better in .Net.
>
> >> That's not bad, but C# does need a very complex environment to run,
> >> so cannot be used as a replacement for many uses of C, and it is
> >> still slower.
>
> > Excuse me, but that's a canard. C, to run efficiently, needs a
> > complete optimizer. A VM is less complex. Architectures have in fact
> > been tuned in very complex ways to run C adequately in ways that have
> > caused errors, perhaps including the Intel arithmetic bug.
>
> An optimising C compiler is a few MB, and is not needed to run the result.
> To compile your C# code, I needed a 63MB download. Then I needed .NET which
> probably was already in my OS, but appears to 'take up to 500MB of hard disk
> space'.

Boo hoo. If safety means anything, then 500MB is worth it. We're not
running our Commodore 64 in Mom's basement any more.
>
> The C version would have required a few tens of KB in total to run.
>
> >> (BTW your timing routines in the C code seem to round to the nearest
> >> second; not very useful for that purpose.)
>
> > Yes...the timing routines standard in C SHARP EXPRESS, shipped free to
> > script kiddies and even girl programmers world wide, are superior to
> > the stuff in C.
>
> C's clock() function gives elapsed times in msec.
>
> --
> Bartc

0
spinoza1111
12/29/2009 1:40:40 AM
In article <b518670c-2e25-4c6e-a38e-0f2e545d7f17@p8g2000yqb.googlegroups.com>,
spinoza1111  <spinoza1111@yahoo.com> wrote his usual good stuff:
....

Be aware that this "stan" character is a toad.  Let's not bother with him.

He is almost certainly also a sock of one of the regs - probably Seebs.

0
gazelle
12/29/2009 1:58:13 AM
On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2009-12-27, bartc <ba...@freeuk.com> wrote:
>
> > It doesn't matter too much that the code was idiotic, provided both
> > languages were executing the same idiotic algorithm.
>
> True.
>
> > The original code wasn't a bad test of long long multiplication in each
> > language.
>
> Fair enough.  It's not exactly what I'd call an interesting test case for
> real-world code.  I'd be a lot more interested in performance of, say,
> large lists or hash tables.
>
> -s
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

C code to hash several numbers, iterated to get somewhat better
performance numbers.

#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#define ARRAY_SIZE 1000
#define SESSIONS 100000

int main(void)
{
    int hash[ARRAY_SIZE];
    int i;
    int r;
    int j;
    int k;
    int collisions;
    time_t start;
    time_t end;
    double dif;
    int tests;
    int sessions;
    time (&start);
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (i = 0; i < ARRAY_SIZE; i++) hash[i] = 0;
        collisions = 0;
        tests = ARRAY_SIZE;
        for (i = 0; i < tests; i++)
        {
            r = rand();
            j = r % ARRAY_SIZE;
            k = j;
            if (hash[j] != 0) collisions++;
            while (hash[j] != r && hash[j] != 0)
            {
                if (j >= ARRAY_SIZE - 1) j = 0; else j++;
                if (j == k)
                {
                    printf("Table is full\n");
                    break;
                }
            }
            if (hash[j] == 0) hash[j] = r;
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %d numbers with %d collisions, %d times\n",
           dif, tests, collisions, sessions);
    return 0; // Gee is that right?
}

C Sharp code to do the same:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            HashSet<int> hash = null;
            hash = new HashSet<int>();
            Random R = new Random(1);
            int i;
            int j;
            TimeSpan dif;
            DateTime start;
            DateTime end;
            start = DateTime.Now;
            int sessions = 100000;
            int tests = 1000;
            for (i = 0; i < sessions; i++)
            {
                hash.Clear();
                for (j = 0; j < tests; j++)
                    hash.Add(R.Next(32767));
            }
            end = DateTime.Now;
            dif = end - start;
            Console.WriteLine
                ("It took C Sharp " +
                 dif.ToString() + " " +
                 "seconds to hash " +
                 tests.ToString() + " numbers " +
                 sessions.ToString() + " times");
            return; // Ha ha I don't have to worry about the shibboleth
        }
    }
}


The C Sharp code is not only smaller above, it runs dramatically
faster: 24 secs on my machine as contrasted with 35 secs for the C
code.

This is because the HashSet (also available in Java) can be written as
fast as you like in that it's a part of the extended "os". It may
itself be written in C, and please note that this does NOT mean that
YOU should write in C after all, because the HashSet was written state
of the art by studly guys and gals at Microsoft, and painstakingly
reviewed by other studly guys and gals.

And no, I don't want to visit some creepy site to get best C practise
for hashing. The fact is that the research it takes at creepy come-to-
Jesus and ammo sites to find "good" C, quite apart from its being a
waste of spirit in an expense of shame, provides no "forensic"
assurance that the creepy guy who gave you the code didn't screw up or
insert a time bomb. HashSet is available, shrink-wrapped and out of
the box, and IT RUNS TWICE AS FAST.

HashSet can even safely run as managed code but be developed as Java
bytecode or .Net MSIL as a form of safe assembler language. When it is
maintained by the vendor, you get the benefit. Whereas el Creepo's
code is all you get, period, unless you call him a year later, only to
find that his ex-wife has thrown him out of the house.

C is NOT more "efficient" than C Sharp. That is not even a coherent
thing to say.

Furthermore, even the best of us screw up (as I have screwn up) when
implementing the basics. Donald E. Knuth has said that he always gets
binary search wrong the first time he recodes it. It is a mark of the
smart person to have trouble with low-level math and code; Einstein
(cf Walter Kaufman's bio) had in fact a great deal of difficulty
working out the details of relativistic mathematics and required help
from other mathematicians and physicists.

Therefore we class acts prefer C Sharp.
0
spinoza1111
12/29/2009 3:11:03 AM
On Dec 29, 11:11 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:
> > Fair enough.  It's not exactly what I'd call an interesting test case for
> > real-world code.  I'd be a lot more interested in performance of, say,
> > large lists or hash tables.
>
> C code to hash several numbers, iterated to get somewhat better
> performance numbers.
>
> [C and C Sharp hash benchmark code snipped; quoted in full upthread]
>
> The C Sharp code is not only smaller above, it runs dramatically
> faster: 24 secs on my machine as contrasted with 35 secs for the C
> code.
>
> [snip]
>
> Therefore we class acts prefer C Sharp.

And don't waste my time. Yes, the C hash is oversimplified and creates
clusters, slowing it down dramatically as the table fills up. A
"delicate stratagem" would be of course to make each entry a linked
list base which would reduce the search time on average for each new
number to K times the average number of collisions C. In the C code
above the search time is larger than K*C because it's affected by
neighboring collisions. As the table fills up, the search time
increases very rapidly towards the maximum which is the total table
size. In the linked list solution (not implemented here but all over
Knuth and the Knet like a rash) search time is ALWAYS K*C!

But that's my point. Whereas it's a very good idea to reinvent the
wheel the place to do it is in an ACM approved Komputer Science Skool,
not on the job. Anyone who neglects academic training is a net cost to
a Korporation that pays him to write C so he can learn what he shuddha
learnt. 'Course, I wouldn't be talking about any auto-didact here.
There is nothing wrong with being an auto-didact as long as you don't
set out to destroy alter-didacts with MSCSen and BSen from the Univ of
Illinois.

My guess is that the studly Dudleys who developed Hashset went to
Komputer Skool and learnt their Knuth. That's because when I was at
Princeton, I helped people with their Microsoft interview having
crashed and burned at it myself in the spirit of people who flunked
their exams in ancient China and who started Konfucian Skewls in
preference, with one exception, to deciding they were Christ's baby
brother and starting the Tai'ping revolution. Microsoft was, for
technical positions, exclusively recruiting comp sci majors.
0
spinoza1111
12/29/2009 4:16:24 AM
On Dec 29, 5:22 am, stan <smo...@exis.net> wrote:
> Seebs wrote:
>
> > On 2009-12-28, stan <smo...@exis.net> wrote:
> >> In fact I have to wonder, do you ever actually contribute anything
> >> about the C programming language?
>
> > I saw it happen once.
>
> > Oddly, over in comp.unix.shell, he actually contributes topical material
> > and shows technical understanding, so it's not a general incapacity.  I
> > guess he's just butthurt about comp.lang.c for some reason.
>
> I can add the same observation for awk stuff where he appears lucid,
> but I have to admit I question more and more of what I see from
> him. When I find myself agreeing with him I start to wonder if I AM wrong.

What the hell kind of thing is this to say about another person? It's
less wrong to be wrong on some technical matter than to lie, as
Richard Heathfield has lied, about another person's publications in
comp.risks, or to start what's little more than a rumor about Schildt.
These are evil acts. Being wrong on some technical matter is NOT evil.

If a person lies, you can't trust him on technical matters, whereas if
a person merely makes technical mistakes, you can at least trust him
like you trust a monkey with a typewriter or wikipedia. He might
occasionally have insights, as does Shakespeare's Fool in King Lear.

In fact, one of the lessons Gerald "The Psychology of Computer
Programming" Weinberg relays is that even junior people in structured
walkthroughs could spot bugs, in some cases faster than the "senior"
programmers.

What bothers you about Kenny is that he has a sense of humor and mocks
the utter pretense of the regs.
>
> I must agree with your word choice "odd".

0
spinoza1111
12/29/2009 10:44:50 AM
On Dec 29, 3:43 am, stan <smo...@exis.net> wrote:
> Kenny McCormack wrote:
>
> > In article <786af291-2bc3-4f43-af7f-efacfa6b9...@d20g2000yqh.googlegroups.com>,
> > spinoza1111 <spinoza1...@yahoo.com> wrote:
> > ...
> >>Newer spreadsheets such as Google's are faster but they're mostly
> >>written in C, probably (by highly competent people) owing to
> >>inertia...not in Java and not in C Sharp.
>
> >>In the example, the C DLL is 5120 bytes whereas the C Sharp DLL is
> >>7680 bytes. Again, not even an order of magnitude, and you get greater
> >>safety for the so-called "bloat".
>
> >>It's barbaric, given Moore's law, to so overfocus in such a Puritan
> >>way on "bloat" when the number one issue, as Dijkstra said in 2000, is
> >>"not making a mess of it".
>
> > Obviously, neither you nor Dijkstra understand the true economic
> > underpinnings of the software biz.  The whole point is to be hot, sexy,
> > and broken.  Repeat that a few times to get the point.
>
> It seems that you consider yourself quite knowledgeable and expert,
> even superior to Dijkstra, yet you offer so little actual news or
> information actually about C.
>
> In fact I have to wonder, do you ever actually contribute anything
> about the C programming language?
>
> > Because, as I've said so many times, if they ever actually solve the
> > problem in the software world, the business is dead.  And this has
> > serious economic consequences for everyone.  Think job-loss, which is
> > generally considered the worst thing imaginable.
>
> > There are many examples of situations where programmers did actually
> > solve their problem and ended up on unemployment as a result.  At the
> > bigger, company level, think Windows XP.  MS basically solved the OS
> > problem with Windows XP.  They were thus forced to tear it up and start
> > over with Vista/Windows 7/etc.
>
> This sort of sarcasm, petty whining, attempted humor, ... can get
> boorish when applied, in your words, "many times". Just a thought but
> the phrase "you can be part of the problem, or you can be part of the
> solution" springs to mind often when trying to follow threads.
>
> As for the topic/rant in question, if I ever set out to develop a
> project for only a MS proprietary platform I'll probably care about C#
> vs options but since my normal development world is not MS the
> question isn't really important for me personally. My world also
> slants more towards system stuff than the latest shiny user app - in
> other words I'm not fascinated by eye candy. I seem to be in a
> minority admittedly. But in my world the fact that C is used to build
> C tools intuitively feels like a win. No intended swipe here but does

The problem is that the C tools had to be built to make C even
minimally useful. I realize that this was in part intentional on
Kernighan and Ritchie's part. C was intended to be the essential
kernel of a system extended with libraries.

But the best laid plans...instead of being extended with best of breed
libraries, C programmers remained chained to the original libraries
such as the abominable string library. Note that in a sense, the
abomination of desolation that is the NUL-terminated string is not a
part of the C language. Instead, true C is a RISC language that
completely omits strings. You don't have to use strings, or you can
use a correct implementation that embeds a length code, or joins
strings with links into unbounded ropes.

Samuel Johnson, when told that Scotland had many noble and wild
prospects:

"Mr. Ogilvie then took new ground, where, I suppose, he thought
himself perfectly safe; for he observed, that Scotland had a great
many noble wild prospects. JOHNSON. 'I believe, Sir, you have a great
many. Norway, too, has noble wild prospects; and Lapland is remarkable
for prodigious noble wild prospects. But, Sir, let me tell you, the
noblest prospect which a Scotchman ever sees, is the high road that
leads him to England!'"

C likewise contains certain Noble and Wild prospects, but the Noblest
prospect which a C programmer ever sees is his first .Net C Sharp
program.

> a C# compiler for C# exist? With source for study?

0
spinoza1111
12/29/2009 10:53:49 AM
spinoza1111 wrote:
> On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:

>> I'd be a lot more interested in performance of, say,
>> large lists or hash tables.

> C code to hash several numbers, iterated to get somewhat better
> performance numbers.

> C Sharp code to do the same:

> The C Sharp code is not only smaller above, it runs dramatically
> faster: 24 secs on my machine as contrasted with 35 secs for the C
> code.

C was a bit faster on my machine.

But, you are now starting to get away from good benchmarking practices, by
comparing an implementation of something in one language, with a built-in
version in another.

This is a little dangerous: you might well find that feature is faster, or 
just a little slower, in ruby, python or perl. Does that mean these 
scripting languages should replace C or even C#?

> Therefore we class acts prefer C Sharp.

I quite respect C# as a language, but it would be nice if it could be 
extracted from the clutches of MS and existed independently.

-- 
Bartc 

0
bartc
12/29/2009 11:29:01 AM
spinoza1111 wrote:
> On Dec 29, 3:48 am, "bartc" <ba...@freeuk.com> wrote:
>> spinoza1111wrote:

>>> In other words, you changed your version of the code to calculate
>>> the wrong answer "twice as fast" as C Sharp. In other words, you
>>> made the code significantly less "powerful" to get the wrong
>>> result, and when we fix it, C runs slower. This is probably because
>>> cache usage, which is easily implemented safely in a hosted VM,
>>> becomes more and more important the more the same code is executed
>>> on similar data.
>>
>> In this tiny program everything should fit into the cache.
>
> Hmm..."the" cache. There is "a" stack at runtime, and Schildt was
> correct in using the definite article. But "the" cache? Anyway, it
> appears that if there was "a" cache, it worked better in .Net.


Whatever. It's probably the same one you mentioned first. And if C# runs on 
it, then it's quite likely to have cache memory, hardware stack, the 
complete works.

-- 
Bartc 

0
bartc
12/29/2009 12:00:10 PM
On Dec 29, 7:29 pm, "bartc" <ba...@freeuk.com> wrote:
> spinoza1111wrote:
> > On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:
> >> I'd be a lot more interested in performance of, say,
> >> large lists or hash tables.
> > C code to hash several numbers, iterated to get somewhat better
> > performance numbers.
> > C Sharp code to do the same:
> > The C Sharp code is not only smaller above, it runs dramatically
> > faster: 24 secs on my machine as contrasted with 35 secs for the C
> > code.
>
> C was a bit faster on my machine.
>
> But, you are now starting to get away from good benchmarking practices, by
> comparing an implementation of something in one language, with a built-in
> version in another.

But that's my point. Not only do you not have to "reinvent the wheel",
the built-in wheel is better and safer to use.

>
> This is a little dangerous: you might well find that feature is faster, or
> just a little slower, in ruby, python or perl. Does that mean these
> scripting languages should replace C or even C#?
> scripting languages should replace C or even C#?

No. The real issue is software safety.
>
> > Therefore we class acts prefer C Sharp.
>
> I quite respect C# as a language, but it would be nice if it could be
> extracted from the clutches of MS and existed independently.

It does: Google the mono project. Microsoft has in fact worked hard to
make .Net open architecture (if closed source).
>
> --
> Bartc

0
spinoza1111
12/29/2009 12:02:59 PM
On Mon, 28 Dec 2009 19:11:03 -0800, spinoza1111 wrote:

> 000

In your C example, you are using linear probing combined
with a fill factor of 100%. That is a bad choice, IMO.

Also, beware of rand(). It is guaranteed to deliver only 15 random bits;
implementations _may_ produce a larger range of values.

HTH,
AvK
0
Moi
12/29/2009 12:22:59 PM
"bartc" <bartc@freeuk.com> writes:

> spinoza1111 wrote:
>> On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:
>
>>> I'd be a lot more interested in performance of, say,
>>> large lists or hash tables.
>
>> C code to hash several numbers, iterated to get somewhat better
>> performance numbers.
>
>> C Sharp code to do the same:
>
>> The C Sharp code is not only smaller above, it runs dramatically
>> faster: 24 secs on my machine as contrasted with 35 secs for the C
>> code.
>
> C was a bit faster on my machine.
>
> But, you are now starting to get away from good benchmarking
> practices, by comparing an implementation of something in one language,
> with a built-in version in another.

I think it is much worse than that.  The C# code can choose the table
size for good performance (and I can't imagine it would not).  In the
C version, the table was chosen to be as bad as it could be without
failing (i.e. exactly the same size as the number of numbers being put
into the set).

There is no evidence that a reasonable comparison is being attempted
here.

To perform the same task, the obvious C solution would use a bit
array, since the number range in the C# code was quite small, but I
suspect that was an arbitrary choice.

-- 
Ben.
0
Ben
12/29/2009 12:26:03 PM
spinoza1111 wrote:
> On Dec 29, 7:29 pm, "bartc" <ba...@freeuk.com> wrote:


>> I quite respect C# as a language, but it would be nice if it could be
>> extracted from the clutches of MS and existed independently.
>
> It does: Google the mono project. Microsoft has in fact worked hard to
> make .Net open architecture (if closed source).


OK I'll have a closer look at Mono for Windows, although I was really
talking about the core language, i.e. without its complex environment.

-- 
Bartc 

0
bartc
12/29/2009 1:00:02 PM
In article <cd32d98a-5f2d-4384-9af7-3be685bc8733@j14g2000yqm.googlegroups.com>,
spinoza1111  <spinoza1111@yahoo.com> wrote:
....
>> C was a bit faster on my machine.
>>
>> But, you are now starting to get away from good benchmarking practices, by
>> comparing a implementation of something in one language, with a built-in
>> version in another.
>
>But that's my point. Not only do you not have to "reinvent the wheel",
>the built-in wheel is better and safer to use.

Hot, sexy, broken.  Full employment.  C is good.

Seriously, does anyone not see just how bad the economy could get if
software actually worked?

0
gazelle
12/29/2009 1:16:11 PM
On 2009-12-29, bartc <bartc@freeuk.com> wrote:
> spinoza1111 wrote:
>> It does: Google the mono project. Microsoft has in fact worked hard to
>> make .Net open architecture (if closed source).

> OK I'll have a closer look at Mono for Windows, although I was really 
> talking about the core language, ie. without it's complex environment.

In any event, it's a red herring; Mono exists to make people think C# is
an open language, when it is in fact nothing of the sort.  It's a pure
deception, so far as I can tell.

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/29/2009 3:31:47 PM
On 2009-12-29, Moi <root@invalid.address.org> wrote:
> In your C example, you are using linear probing combined
> with a fill factor of 100%. That is a bad choice, IMO.

The unanswerable question:

Did he do that because he's fundamentally dishonest, and wanted to rig the
test to look bad for C, or did he do that because he doesn't understand
the algorithm well enough to do better?

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/29/2009 3:32:51 PM
On Tue, 29 Dec 2009 15:32:51 +0000, Seebs wrote:

> On 2009-12-29, Moi <root@invalid.address.org> wrote:
>> In your C example, you are using linear probing combined with a fill
>> factor of 100%. That is a bad choice, IMO.
> 
> The unanswerable question:
> 
> Did he do that because he's fundamentally dishonest, and wanted to rig
> the test to look bad for C, or did he do that because he doesn't
> understand the algorithm well enough to do better?


I don't know, I don't say, I don't care.
Any possible answer would be off-topic.
Let everybody decide for themselves how to deal with pedant trolls.

BTW: Spinoza's handling of zeros used as NULL-values inside the hash table 
is wrong, too. Could have been handled by some clever bit-cramming.

AvK
0
Moi
12/29/2009 3:45:25 PM
On Dec 29, 8:26 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> "bartc" <ba...@freeuk.com> writes:
> >spinoza1111wrote:
> >> On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:
>
> >>> I'd be a lot more interested in performance of, say,
> >>> large lists or hash tables.
>
> >> C code to hash several numbers, iterated to get somewhat better
> >> performance numbers.
>
> >> C Sharp code to do the same:
>
> >> The C Sharp code is not only smaller above, it runs dramatically
> >> faster: 24 secs on my machine as contrasted with 35 secs for the C
> >> code.
>
> > C was a bit faster on my machine.
>
> > But, you are now starting to get away from good benchmarking
> > practices, by comparing an implementation of something in one language,
> > with a built-in version in another.
>
> I think it is much worse than that.  The C# code can choose the table
> size for good performance (and I can't imagine it would not).  In the
> C version, the table was chosen to be as bad as it could be without
> failing (i.e. exactly the same size as the number of numbers being put
> into the set).

That is not the case, and part of the problem is that you think so
normatively. In a toxic and pseudo-scientific corporate environment,
where (in Adorno's words) "all scientists are candidates for posts"
there's no real problem solving, just a continual (and futile) effort
to establish and destroy credibility.

You are correct in saying that the C version becomes very slow as the
number of table entries approaches the size of the table. I've already
noted that this is a problem, and how to solve it by using a linked
list originating at each entry.
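
To be concrete, the linked-list fix I mean looks roughly like this (a
quick sketch with invented names, not production code):

```c
#include <stdlib.h>

#define NBUCKETS 1024   /* fixed bucket count; chains absorb overflow */

struct cnode {
    long key;
    long value;
    struct cnode *next;   /* the list "originating at each entry" */
};

static struct cnode *buckets[NBUCKETS];

/* Insert at the head of the bucket's chain; the table never fills up,
   lookups just cost K*C where C is the average chain length. */
int chain_put(long key, long value)
{
    unsigned h = (unsigned long)key % NBUCKETS;
    struct cnode *n;
    for (n = buckets[h]; n != NULL; n = n->next)
        if (n->key == key) { n->value = value; return 0; }
    n = malloc(sizeof *n);
    if (n == NULL) return -1;
    n->key = key;
    n->value = value;
    n->next = buckets[h];
    buckets[h] = n;
    return 0;
}

/* Returns 1 and stores the value through *value on a hit, 0 on a miss. */
int chain_get(long key, long *value)
{
    unsigned h = (unsigned long)key % NBUCKETS;
    struct cnode *n;
    for (n = buckets[h]; n != NULL; n = n->next)
        if (n->key == key) { *value = n->value; return 1; }
    return 0;
}
```

The point being that even this toy version is more code than the C
Sharp one-liner.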

However, the two programs demonstrate my point. C more or less forces
the C program to decide on the maximum table size, therefore the test,
which filled up the table, was realistic, since tables often fill up
in production.

Whereas in a completely encapsulated way the C Sharp program either
preallocated more than enough storage or automatically expanded the
table. We don't know what it did because we do not need to know. In
production, the only responsibility of the user of hashset is to put
additions in try..catch error handling.

Giving C this type of flexibility would involve a lot of extra coding,
or visiting some creepy site.

If it becomes necessary to delete from the C hash table, and the basic
"search forward to available entry" method actually implemented is in
use, a special value is needed to mark deleted entries, since entries
that match the hash code need to be retrieved past the deleted entry.
If linked lists are used, they need to be searched to find the entry
to be deleted. Whereas in the C Sharp code, deletion is one line of
additional code!
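
For the record, the tombstone scheme that deletion forces on the
open-addressing version is something like this (a sketch; keys assumed
positive, names invented):

```c
#define TSIZE    8
#define EMPTY    0    /* slot never used */
#define DELETED -1    /* tombstone: probes must continue past it */

static long slot[TSIZE];   /* keys must be positive in this sketch */

/* Linear probe; reuses EMPTY and DELETED slots. Returns index or -1. */
int probe_insert(long key)
{
    int i, h = (int)(key % TSIZE);
    for (i = 0; i < TSIZE; i++) {
        int j = (h + i) % TSIZE;
        if (slot[j] == EMPTY || slot[j] == DELETED) { slot[j] = key; return j; }
        if (slot[j] == key) return j;
    }
    return -1;   /* table full */
}

int probe_find(long key)
{
    int i, h = (int)(key % TSIZE);
    for (i = 0; i < TSIZE; i++) {
        int j = (h + i) % TSIZE;
        if (slot[j] == EMPTY) return -1;   /* a true hole ends the probe */
        if (slot[j] == key) return j;      /* DELETED slots are skipped  */
    }
    return -1;
}

int probe_delete(long key)
{
    int j = probe_find(key);
    if (j < 0) return -1;
    slot[j] = DELETED;   /* not EMPTY: later keys may lie past this slot */
    return 0;
}
```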

Therefore, the benchmark numbers are realistic. A simple C hash table,
implemented in the simplest fashion without a link list, will probably
run much slower than a C Sharp hash table user application with much
more coding effort. It will be much harder to maintain.

C sucks, and it's not "efficient" in practise.

>
> There is no evidence that any attempt at a reasonable comparison is
> being attempted, here.
>
> To perform the same task, the obvious C solution would use a bit
> array, since the number range in the C# code was quite small, but I
> suspect that was an arbitrary choice.
>
> --
> Ben.

0
spinoza1111
12/29/2009 3:50:11 PM
On Dec 29, 9:16 pm, gaze...@shell.xmission.com (Kenny McCormack)
wrote:
> In article <cd32d98a-5f2d-4384-9af7-3be685bc8...@j14g2000yqm.googlegroups.com>, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> ...
>
> >> C was a bit faster on my machine.
>
> >> But, you are now starting to get away from good benchmarking practices, by
> >> comparing an implementation of something in one language, with a built-in
> >> version in another.
>
> >But that's my point. Not only do you not have to "reinvent the wheel",
> >the built-in wheel is better and safer to use.
>
> Hot, sexy, broken.  Full employment.  C is good.
>
> Seriously, does anyone not see just how bad the economy could get if
> software actually worked?

I do see that it is getting really bad, precisely because more and
more shops are using much better software written for free by people
who don't even know they're virtual slaves. Unemployment in the USA is
at 10.2% and much higher for minorities.

Add to this the fact that businesses don't care about software safety,
and the employment picture is grim, indeed.
0
spinoza1111
12/29/2009 3:52:19 PM
On Dec 29, 9:16 pm, gaze...@shell.xmission.com (Kenny McCormack)
wrote:
> In article <cd32d98a-5f2d-4384-9af7-3be685bc8...@j14g2000yqm.googlegroups.com>, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> ...
>
> >> C was a bit faster on my machine.
>
> >> But, you are now starting to get away from good benchmarking practices, by
> >> comparing an implementation of something in one language, with a built-in
> >> version in another.
>
> >But that's my point. Not only do you not have to "reinvent the wheel",
> >the built-in wheel is better and safer to use.
>
> Hot, sexy, broken.  Full employment.  C is good.
>
> Seriously, does anyone not see just how bad the economy could get if
> software actually worked?

You (or you and I) should write a book about bad software and
programmer stupidity called "hot, sexy, and broken" with a bimbo in a
bikini on the cover. It'd sell like hotcakes, like "Hot Naked Chicks
and World News Report" in Idiocracy.
0
spinoza1111
12/29/2009 3:53:51 PM

spinoza1111 wrote:

> C sucks, and it's not "efficient" in practise.

Broad based statement backed by a very narrow example.

The conclusion is intellectually dishonest

w..



--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
0
Walter
12/29/2009 3:56:57 PM
On Dec 29, 8:26 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> "bartc" <ba...@freeuk.com> writes:
> >spinoza1111wrote:
> >> On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:
>
> >>> I'd be a lot more interested in performance of, say,
> >>> large lists or hash tables.
>
> >> C code to hash several numbers, iterated to get somewhat better
> >> performance numbers.
>
> >> C Sharp code to do the same:
>
> >> The C Sharp code is not only smaller above, it runs dramatically
> >> faster: 24 secs on my machine as contrasted with 35 secs for the C
> >> code.
>
> > C was a bit faster on my machine.
>
> > But, you are now starting to get away from good benchmarking
> > practices, by comparing an implementation of something in one language,
> > with a built-in version in another.
>
> I think it is much worse than that.  The C# code can choose the table
> size for good performance (and I can't imagine it would not).  In the
> C version, the table was chosen to be as bad as it could be without
> failing (i.e. exactly the same size as the number of numbers being put
> into the set).
>
> There is no evidence that any attempt at a reasonable comparison is
> being attempted, here.
>
> To perform the same task, the obvious C solution would use a bit
> array, since the number range in the C# code was quite small, but I
> suspect that was an arbitrary choice.
>
Might be. But the problem is that the optimal solution is conceivable
only by a person who thinks in C all the time. I submit that this
stunts the mind even as a programmer, not to mention humanity. The
sort of crap that's posted here indicates this.

I can concede that a Troglodyte who overspecializes in C could write
code that operates in a fraction of the C Sharp time for the same
reason that assembler can in the last analysis run at a fraction of
the time of even C; the distributed software that calculates large
primes on thousands of computers all over the world, using spare
cycles on volunteer computers, seems to be written in assembler.

This shows the ultimate vanity of C; it pretends to be as fast as
assembler but is not.

But in either assembler or C, a level of commitment and intensity
consistently destroys the programmer's humanity save in rare cases of
true genius, of a sort which isn't found here, and which itself (like
Grigory Perelman, the Russian mathematician who left the field after
refusing the Fields medal) is smart enough to recognize an expense of
spirit in a waste of shame when it sees it.
> --
> Ben.

0
spinoza1111
12/29/2009 4:01:45 PM
On Dec 29, 8:22 pm, Moi <r...@invalid.address.org> wrote:
> On Mon, 28 Dec 2009 19:11:03 -0800, spinoza1111 wrote:
> > 000
>
> In your C example, you are using linear probing combined
> with a fill factor of 100%. That is a bad choice, IMO.

I explained that already as a problem. I said that the average search
time slows down as the table fills up. I've known this for nearly
forty years, since I read it in Knuth in 1971. I said (did you miss
it) that you need to use linked lists based at each entry to get K*C
time where C is the average number of collisions.
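
The slowdown is easy to demonstrate. This little simulation (my own
sketch, with an arbitrary multiplicative key scrambler, not code from
the benchmark) counts total probes as a linear-probing table
approaches 100% full:

```c
#include <string.h>

#define M 1024   /* table size for the demonstration */

/* Total probes needed to insert n scrambled keys into a linear-probing
   table of size M; the average cost explodes as n approaches M. */
long insert_probes(int n)
{
    static unsigned long t[M];
    long probes = 0;
    int i;
    memset(t, 0, sizeof t);   /* fresh table on every call */
    for (i = 1; i <= n; i++) {
        /* multiplicative scramble so keys don't arrive in hash order */
        unsigned long key = (unsigned long)i * 2654435761UL % 1000000007UL;
        unsigned j = key % M;
        probes++;
        while (t[j] != 0) { j = (j + 1) % M; probes++; }
        t[j] = key;
    }
    return probes;
}
```

Run it at 50% and at 99% fill and compare the totals: the last few
dozen inserts cost more than the first five hundred.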

The problem is, as I've said, that the C program is more
psychologically complex and harder to maintain than C Sharp code which
does the same thing better.
>
> Also, beware of rand(). It is guaranteed to deliver only 15 bits of random.
> Implementations _may_ produce a larger range of values.

I am also aware of the limitations of rand(). Indeed, many defenses of
C snigger at its obvious facilities and recommend arcana, some of
which is available from creepy sites. But this is completely absurd,
because one of the major reasons for using a programming language is
clarity.

Of course, the hatred of clarity, combined with a basic
misunderstanding of the very word, is on display in the two documents
that started the Schildt canard: they call him "clear" (which means
he's correct by implication) and then tear into him...for being
insufficiently clear, and for failing to tell C programmers what Thou
Shalt Not like a proper *ayatollah* or *imam* of Holy C.
>
> HTH,
> AvK

0
spinoza1111
12/29/2009 4:07:40 PM
On Dec 29, 8:26 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> "bartc" <ba...@freeuk.com> writes:
> >spinoza1111wrote:
> >> On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:
>
> >>> I'd be a lot more interested in performance of, say,
> >>> large lists or hash tables.
>
> >> C code to hash several numbers, iterated to get somewhat better
> >> performance numbers.
>
> >> C Sharp code to do the same:
>
> >> The C Sharp code is not only smaller above, it runs dramatically
> >> faster: 24 secs on my machine as contrasted with 35 secs for the C
> >> code.
>
> > C was a bit faster on my machine.
>
> > But, you are now starting to get away from good benchmarking
> > practices, by comparing a implementation of something in one language,
> > with a built-in version in another.
>
> I think it is much worse than that.  The C# code can choose the table
> size for good performance (and I can't imagine it would not).  In the
> C version, the table was chosen to be as bad as it could be without
> failing (i.e. exactly the same size as the number of numbers being put
> into the set).
>
> There is no evidence that any attempt at a reasonable comparison is
> being attempted, here.
>
> To perform the same task, the obvious C solution would use a bit

The problem is that like most "obvious C solutions", a bit array
doesn't scale. Sure, it's cheesy, and in the small "elegant" to small
minds, to allocate a huge bit array and index into it directly, for
the space taken would be 1/32 of an array of longs.

But if the key is changed to a more realistic range of values (such as
an alphanumeric symbol), the bit vector gets too large. You're back
where you started, and this often happens when the user initially says
"the key is a number" and later says "it's a symbol".

Whereas the hashkey scales nicely. It would even be better to roll
your own hashkey class, one that allows the user to give it an idea of
the key range on instance creation. Then, the hashkey class could use
a bit map as appropriate.
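
For reference, the bit array being proposed really is only a few lines
while the key range stays small, which is exactly why it doesn't
survive a change of key type (a sketch, not code from this thread):

```c
#include <limits.h>

#define RANGE 100000                         /* only viable while keys stay small */
#define WORD  (CHAR_BIT * sizeof(unsigned long))

static unsigned long bits[RANGE / WORD + 1];

/* Membership set over 0..RANGE-1: one bit per possible key. */
void bit_set(unsigned k)  { bits[k / WORD] |= 1UL << (k % WORD); }
int  bit_test(unsigned k) { return (bits[k / WORD] >> (k % WORD)) & 1UL; }
```

Change the key to an alphanumeric symbol and the whole approach
evaporates; the hash table merely slows down.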


> array, since the number range in the C# code was quite small, but I
> suspect that was an arbitrary choice.
>
> --
> Ben.

0
spinoza1111
12/29/2009 4:11:51 PM
On Dec 29, 11:32 pm, Seebs <usenet-nos...@seebs.net> wrote:
> On 2009-12-29, Moi <r...@invalid.address.org> wrote:
>
> > In your C example, you are using linear probing combined
> > with a fill factor of 100%. That is a bad choice, IMO.
>
> The unanswerable question:
>
> Did he do that because he's fundamentally dishonest, and wanted to rig the
> test to look bad for C, or did he do that because he doesn't understand
> the algorithm well enough to do better?

Peter, when you make the issue into personalities, you enable the
destruction of this newsgroup. That's because people have the right to
defend themselves.

Like most code posted here, the code was written quickly for
discussion by educated professionals, who've studied computer science
in a non-corporate environment or somehow acquired collegial habits
and basic decency.

A more tuned C version (such as Ben's bit map) doesn't scale. It is of
course good to know that inserting hash table entries after the
collision creates slow performance, and it was I who said this first.
But we need to compare programs of roughly equal psychological
complexity.

Where's your code, Scripto Boy?

Shove it up your ass, fella.
>
> -s
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
12/29/2009 4:17:27 PM
On Dec 29, 11:31 pm, Seebs <usenet-nos...@seebs.net> wrote:
> On 2009-12-29, bartc <ba...@freeuk.com> wrote:
>
> >spinoza1111wrote:
> >> It does: Google the mono project. Microsoft has in fact worked hard to
> >> make .Net open architecture (if closed source).
> > OK I'll have a closer look at Mono for Windows, although I was really
> > talking about the core language, ie. without its complex environment.
>
> In any event, it's a red herring; Mono exists to make people think C# is
> an open language, when it is in fact nothing of the sort.  It's a pure
> deception, so far as I can tell.

C# is an open language. It had to be to be ECMA certified. It is being
changed and improved rapidly but in a controlled fashion, so we won't
have to worry about "standardizing" crap.
>
> -s
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
12/29/2009 4:18:54 PM
On Dec 29, 11:56 pm, Walter Banks <wal...@bytecraft.com> wrote:
> spinoza1111wrote:
> > C sucks, and it's not "efficient" in practise.
>
> Broad based statement backed by a very narrow example.

Intelligent students can see that the square of the hypotenuse is
equal to the sum of the squares of the opposing sides based on one
diagram. Likewise, I show that for approximately equal levels of
effort, you get a much more efficient, powerful and easier to maintain
piece of code.
>
> The conclusion is intellectually dishonest
>
> w..
>
> --- news://freenews.netfront.net/ - complaints: n...@netfront.net ---

0
spinoza1111
12/29/2009 4:21:17 PM
On Dec 29, 11:17 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> A more tuned C version (such as Ben's bit map) doesn't scale. It is of
> course good to know that inserting hash table entries after the
> collision creates slow performance, and it was I who said this first.
> But we need to compare programs of roughly equal psychological
> complexity.

One thing you repeatedly fail to grasp is the concept of APIs.  You're
not comparing C# to C, you're comparing C# with a tuned hash library
to C with a poorly implemented hash algorithm.  You also fail at
computer science in general.

For one thing, your C# hash array is NOT bounded at 1000 entries.
You're trying to add 1000 entries to a hash array with 1000 spots
which means you're getting a lot of collisions and doing many
sequential searches [for empty spots].  That could easily explain the
time difference.  Generally if you have an unbounded input size you'd
use a tree not a hash.  Hashes are useful when you know you have at
most X symbols, then you can easily allocate some Y >> X space and
hash collisions should be low.

But don't let this stop you.

Tom
0
Tom
12/29/2009 4:35:36 PM
On Dec 29, 10:50 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> You are correct in saying that the C version becomes very slow as the
> number of table entries approaches the size of the table. I've already
> noted that this is a problem, and how to solve it by using a linked
> list originating at each entry.

The solution is not to use linked lists [at least not that way].
Either use a larger table or a tree.  Generally for unbounded inputs a
tree is a better idea as it has better properties on the whole (as the
symbol set size grows...).
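
And if you want the tree without writing one, POSIX already ships it;
something like this sketch using tsearch/tfind from <search.h> (tree
teardown omitted for brevity):

```c
#include <search.h>   /* POSIX tsearch/tfind */
#include <stddef.h>

/* Three-way comparison of long keys for the tree routines. */
static int cmp_long(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

/* Insert a few keys into a binary tree, then look one up.
   Returns 1 if the lookup succeeds, 0 or -1 otherwise. */
int demo_tree(void)
{
    static long keys[] = { 19, 7, 42 };
    long probe = 42;
    void *root = NULL;
    size_t i;
    for (i = 0; i < sizeof keys / sizeof keys[0]; i++)
        if (tsearch(&keys[i], &root, cmp_long) == NULL)
            return -1;   /* allocation failure */
    return tfind(&probe, &root, cmp_long) != NULL;
}
```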

> However, the two programs demonstrate my point. C more or less forces
> the C program to decide on the maximum table size, therefore the test,
> which filled up the table, was realistic, since tables often fill up
> in production.

No, you could grow the table at some threshold, reimport existing
symbols into the new hash.  A naive approach would just add another X
symbol spots every time you get more than [say] 80% full.
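
The naive grow-at-80% approach is maybe twenty lines of C (a sketch:
linear probing, nonzero keys, no duplicate check):

```c
#include <stdlib.h>

static long *tab;            /* current table, tab[j] == 0 means empty */
static unsigned cap, used;

static void reinsert(long key)
{
    unsigned j = (unsigned long)key % cap;
    while (tab[j] != 0) j = (j + 1) % cap;
    tab[j] = key;
}

/* Insert, doubling the table and reimporting existing symbols
   whenever it gets more than 80% full. */
int grow_put(long key)
{
    if (cap == 0 || used * 5 >= cap * 4) {        /* past 80% full */
        long *old = tab;
        unsigned oldcap = cap, j;
        cap = cap ? cap * 2 : 16;
        tab = calloc(cap, sizeof *tab);
        if (!tab) { tab = old; cap = oldcap; return -1; }
        for (j = 0; j < oldcap; j++)              /* reimport old symbols */
            if (old[j] != 0) reinsert(old[j]);
        free(old);
    }
    reinsert(key);
    used++;
    return 0;
}

/* Membership test; safe because load never reaches 100%. */
int grow_has(long key)
{
    unsigned j = (unsigned long)key % cap;
    while (tab[j] != 0) {
        if (tab[j] == key) return 1;
        j = (j + 1) % cap;
    }
    return 0;
}
```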

Most of the time when a hash is preferable over a tree [or other
structure] is when the symbol set is of known dimensions.  E.g. you're
writing a lexer for a parser for a language.

> Whereas in a completely encapsulated way the C Sharp program either
> preallocated more than enough storage or automatically expanded the
> table. We don't know what it did because we do not need to know. In
> production, the only responsibility of the user of hashset is to put
> additions in try..catch error handling.

And you couldn't have a hash_add(), hash_find() function in C that
does all of this hash table manipulations [like done in C#] in C
because....?

The same algorithm(s) that C# uses could be implemented in a C library
[I'll bet they exist all over the net].

> Giving C this type of flexibility would involve a lot of extra coding,
> or visiting some creepy site.

Or doing a 2 second google search

http://www.gnu.org/s/libc/manual/html_node/Hash-Search-Function.html

Wow.  That was hard.

> If it becomes necessary to delete from the C hash table, and the basic
> "search forward to available entry" method actually implemented is in
> use, a special value is needed to mark deleted entries, since entries
> that match the hash code need to be retrieved past the deleted entry.
> If linked lists are used, they need to be searched to find the entry
> to be deleted. Whereas in the C Sharp code, deletion is one line of
> additional code!

hash_delete()

You really need to learn what functions are.

> Therefore, the benchmark numbers are realistic. A simple C hash table,
> implemented in the simplest fashion without a link list, will probably
> run much slower than a C Sharp hash table user application with much
> more coding effort. It will be much harder to maintain.
>
> C sucks, and it's not "efficient" in practise.

So because you suck at software development and computer science in
general, C sucks.

When you're comparing equivalent algorithms maybe you might have a
point.   Until then this is just further demonstration that there are
people in the world stupider than I.

Tom
0
Tom
12/29/2009 4:43:13 PM
On Dec 29, 11:21 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> On Dec 29, 11:56 pm, Walter Banks <wal...@bytecraft.com> wrote:
>
> > spinoza1111wrote:
> > > C sucks, and it's not "efficient" in practise.
>
> > Broad based statement backed by a very narrow example.
>
> Intelligent students can see that the square of the hypotenuse is
> equal to the sum of the squares of the opposing sides based on one
> diagram. Likewise, I show that for approximately equal levels of
> effort, you get a much more efficient, powerful and easier to maintain
> piece of code.

Which is of course dishonest since a quick google search turns up all
sorts of hashing code, some of which is PART OF GNU LIBC.

Tom
0
Tom
12/29/2009 4:46:34 PM
On Dec 29, 11:07 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> I explained that already as a problem. I said that the average search
> time slows down as the table fills up. I've known this for nearly
> forty years, since I read it in Knuth in 1971. I said (did you miss
> it) that you need to use linked lists based at each entry to get K*C
> time where C is the average number of collisions.

You've been in software for 40 years (1971 was 38 years ago btw...)
and it didn't occur to you to use a binary tree?

> The problem is, as I've said, that the C program is more
> psychologically complex and harder to maintain than C Sharp code which
> does the same thing better.

man hsearch
man bsearch
man tsearch

Sure those are not part of C99, but they're part of GNU libc, which a
lot of people have access to.  There are standalone data management
libraries out there.

You're being obtuse to think that people haven't worked on these
problems and that ALL developers must code their own solutions from
scratch.

> I am also aware of the limitations of rand(). Indeed, many defenses of
> C snigger at its obvious facilities and recommend arcana, some of
> which is available from creepy sites. But this is completely absurd,
> because one of the major reasons for using a programming language is
> clarity.

rand() is useful for simple non-sequential numbers.  If you need a
statistically meaningful PRNG use one.  I'd hazard a guess C# is no
different in its use of an LCG anyways.  So I wouldn't be so apt to
praise it.

You could easily write an API that had functions like

hash_add(table, key, value);
value = hash_search(table, key);
hash_remove(table, key);

Why would that be so much harder than your class based methods from
C#?
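
Here's roughly what such an API looks like with an opaque handle (a
sketch, not a real library: chaining underneath, values assumed
nonzero so 0 can mean "not found"):

```c
#include <stdlib.h>

/* Callers never see the layout, much like the C# class. */
typedef struct hash_table hash_table;

struct hentry { long key, value; struct hentry *next; };
struct hash_table { struct hentry *b[256]; };

hash_table *hash_new(void)
{
    return calloc(1, sizeof(hash_table));
}

/* Address of the link that points at key's node (or at the chain end). */
static struct hentry **slot_for(hash_table *t, long key)
{
    struct hentry **p = &t->b[(unsigned long)key % 256];
    while (*p && (*p)->key != key) p = &(*p)->next;
    return p;
}

int hash_add(hash_table *t, long key, long value)
{
    struct hentry **p = slot_for(t, key), *e;
    if (*p) { (*p)->value = value; return 0; }   /* update in place */
    e = malloc(sizeof *e);
    if (!e) return -1;
    e->key = key; e->value = value; e->next = NULL;
    *p = e;
    return 0;
}

long hash_search(hash_table *t, long key)
{
    struct hentry **p = slot_for(t, key);
    return *p ? (*p)->value : 0;   /* 0 stands in for "not found" */
}

void hash_remove(hash_table *t, long key)
{
    struct hentry **p = slot_for(t, key), *e = *p;
    if (e) { *p = e->next; free(e); }   /* unlink and free: one line of use */
}
```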

Tom
0
Tom
12/29/2009 4:53:22 PM
On 2009-12-29, Tom St Denis <tom@iahu.ca> wrote:
> On Dec 29, 11:21 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>>[stuff]
> Which is of course dishonest

I think Hanlon's Razor applies -- never ascribe to malice that which can
be adequately explained by stupidity.

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/29/2009 4:56:56 PM
On Dec 29, 11:56 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2009-12-29, Tom St Denis <t...@iahu.ca> wrote:
>
> > On Dec 29, 11:21 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> >>[stuff]
> > Which is of course dishonest
>
> I think Hanlon's Razor applies -- never ascribe to malice that which can
> be adequately explained by stupidity.

In this case I don't think it's that he can't perform a google search
[e.g., being stupid]; rather I think he's trying to troll to a point and is
shying away from doing any actual research in his quest to prove his
point because the truth of the matter is he's full of sh!#.  He tried
to compare two completely different algorithms in an attempt to prove
that C# is better than C.  Which, even if it were true wouldn't matter
in the slightest.

This is like comparing a 1L diesel Mini to a 5L hemi and saying the
Mini gets better mileage therefore diesel is better than gasoline.

If he actually researched the hash implementation in Mono, then coded
up a reasonable implementation in C, compared the speed [and/or the
complexity of the C code] he might actually have a point.

But as it stands all this thread has proved is he doesn't understand
Computer Science and that he's anti-social.

Tom
0
Tom
12/29/2009 5:05:27 PM
In article <001b66dc-b78d-48c0-8c0a-6a5229f4ad7c@e27g2000yqd.googlegroups.com>,
spinoza1111  <spinoza1111@yahoo.com> wrote:
....
(seebs)
>> In any event, it's a red herring; Mono exists to make people think C# is
>> an open language, when it is in fact nothing of the sort.  It's a pure
>> deception, so far as I can tell.
(spinny)
>C# is an open language. It had to be to be ECMA certified. It is being

On this one, I think seebs is right.  Something about a stopped clock.

As a practical matter, .NET is an MS technology.  The slice of the
computing world that is going to go for the ideas and concepts that are
.NET and yet would use an off-brand implementation is vanishingly small.

Give them credit.  MS does know how to play the standards game for show,
while at the same time keeping the ball in their home territory.

0
gazelle
12/29/2009 5:32:15 PM
Tom St Denis <tom@iahu.ca> writes:
[...]
> man hsearch
> man bsearch
> man tsearch
>
> Sure those are not part of C99, but they're part of GNU libc, which a
> lot of people have access to.  There are standalone data management
> libraries out there.
[...]

bsearch is standard in both C90 and C99.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/29/2009 5:46:57 PM
C Sharp "sucks". The proof is that MS keeps replacing it with new versions. 
How long did 1.0 last? Less than 2 years? MS has billions to spend on their 
strategies of market control and dominance without even worrying about 
profit. OS and app sales fund their developer technologies. Do you like 
being on the never ending MS learning cycle? I use C# professionally and I 
don't like it. 

Ohh now we have covariance and contravariance!! Let's all get sexually 
aroused and blog about it! C# 4!!!

Remember the Windows 2000 era? Win32 programs just POPPED onto the screen, 
lightning fast redraws.

But to this day I can spot a .Net application by its sluggish drawing. Aero 
glass now hides this somewhat. Sneaky.

I'm sure you can come up with some C# algo. that performs super fast. We 
should not be surprised by this should we? C# doesn't magically run on a 
different processor or something. Like C, it's all machine language in the 
end.
0
jamm
12/29/2009 9:40:46 PM
On Dec 30, 12:35 am, Tom St Denis <t...@iahu.ca> wrote:
> On Dec 29, 11:17 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > A more tuned C version (such as Ben's bit map) doesn't scale. It is of
> > course good to know that inserting hash table entries after the
> > collision creates slow performance, and it was I who said this first.
> > But we need to compare programs of roughly equal psychological
> > complexity.
>
> One thing you repeatedly fail to grasp is the concept of APIs.  You're
> not comparing C# to C, you're comparing C# with a tuned hash library
> to C with a poorly implemented hash algorithm.  You also fail at
> computer science in general.

C# comes WITH a tuned hash library whereas the libraries that "come
with" C in the sense of being visible are a joke: snprintf being an
example.

>
> For one thing, your C# hash array is NOT bounded at 1000 entries.
> You're trying to add 1000 entries to a hash array with 1000 spots

Many arrays in fact fill up in production. And don't waste my time: I
raised this issue right after I posted the original code, pointing out
that the average probe time would go to shit and describing how to fix
it...the point being that none of this was necessary with C Sharp.

> which means you're getting a lot of collisions and doing many
> sequential searches [for empty spots].  That could easily explain the
> time difference.  Generally if you have an unbounded input size you'd
> use a tree not a hash.  Hashes are useful when you know you have at
> most X symbols, then you can easily allocate some Y >> X space and
> hash collisions should be low.

Talk about bloatware: on the one hand one guy complains that C Sharp
executables are too large, here a C fan suggests wasting memory to
make a crude algorithm perform. It might be a good idea in certain
circumstances, but it fails to demonstrate that C doesn't suck. The
linked list that I suggested (but haven't implemented in this thread) is
better.


>
> But don't let this stop you.
>
> Tom

0
spinoza1111
12/30/2009 2:24:29 AM
On Dec 30, 12:43 am, Tom St Denis <t...@iahu.ca> wrote:
> On Dec 29, 10:50 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > You are correct in saying that the C version becomes very slow as the
> > number of table entries approaches the size of the table. I've already
> > noted that this is a problem, and how to solve it by using a linked
> > list originating at each entry.
>
> The solution is not to use linked lists [at least not that way].
> Either use a larger table or a tree.  Generally for unbounded inputs a
> tree is a better idea as it has better properties on the whole (as the
> symbol set size grows...).

Fair enough. The linked list might have long searches whereas the tree
search can be bounded as long as you can keep it balanced. See (you
probably have) Knuth. I have to buy a new copy if this nonsense
continues, I gave away my old copy to DeVry in a Prospero moment:

Now my Charmes are all ore-throwne,
And what strength I haue's mine owne.


>
> > However, the two programs demonstrate my point. C more or less forces
> > the C program to decide on the maximum table size, therefore the test,
> > which filled up the table, was realistic, since tables often fill up
> > in production.
>
> No, you could grow the table at some threshold, reimport existing
> symbols into the new hash.  A naive approach would just add another X
> symbol spots every time you get more than [say] 80% full.

Been there, did that...in PL.1 at Illinois Bell. It's a good practice.
>
> Most of the time when a hash is preferable over a tree [or other
> structure] is when the symbol set is of known dimensions.  E.g. you're
> writing a lexer for a parser for a language.

This is discussed in my book (buy my book or die Earthlings...ooops
commercial promo), "Build Your Own .Net Language and Compiler", Apress
2004.
>
> > Whereas in a completely encapsulated way the C Sharp program either
> > preallocated more than enough storage or automatically expanded the
> > table. We don't know what it did because we do not need to know. In
> > production, the only responsibility of the user of hashset is to put
> > additions in try..catch error handling.
>
> And you couldn't have a hash_add(), hash_find() function in C that
> does all of this hash table manipulations [like done in C#] in C
> because....?

...because IT'S BEEN DONE, and is more easily accessible in C Sharp
and Java. C'mon, guy. I need to program more examples in Wolfram and
use my computer to search for intelligent life on other planets (sure
ain't found much here ha ha).

>
> The same algorithm(s) that C# uses could be implemented in a C library
> [I'll bet they exist all over the net].

Dammit, I addressed this. Yes, it's possible that the C Sharp
libraries are in C. It's also possible that they were written in MSIL
directly, or C Sharp itself. And before there was C Sharp, C was used
to produce better things than C, just like the old gods built the
world only to be overthrown by cooler gods.

>
> > Giving C this type of flexibility would involve a lot of extra coding,
> > or visiting some creepy site.
>
> Or doing a 2 second google search
>
> http://www.gnu.org/s/libc/manual/html_node/Hash-Search-Function.html
>
> Wow.  That was hard.

You're neglecting the forensic problem. Not only "should" I not use
this code in commercial products, I have gnow way of gnowing that gnu
will ingneroperate.

>
> > If it becomes necessary to delete from the C hash table, and the basic
> > "search forward to available entry" method actually implemented is in
> > use, a special value is needed to mark deleted entries, since entries
> > that match the hash code need to be retrieved past the deleted entry.
> > If linked lists are used, they need to be searched to find the entry
> > to be deleted. Whereas in the C Sharp code, deletion is one line of
> > additional code!
>
> hash_delete()
>
> You really need to learn what functions are.

I think I do. Do you? And I have said before that this constant,
lower-middle-class questioning of credentials by people who have every
reason to worry about their own is boring and disgusting.

>
> > Therefore, the benchmark numbers are realistic. A simple C hash table,
> > implemented in the simplest fashion without a link list, will probably
> > run much slower than a C Sharp hash table user application with much
> > more coding effort. It will be much harder to maintain.
>
> > C sucks, and it's not "efficient" in practise.
>
> So because you suck at software development and computer science in
> general, C sucks.

No, C sucks because I started programming before it existed and saw
University of Chicago programmers do a better job, only to see C
become widely used merely because of the prestige of a campus with no
particular distinction in comp sci at the time but an upper class
reputation. I was hired by Princeton in the 1980s in part because it
was behind the curve technically.

You see, public relations machinery worked on behalf of Princeton on
the right coast and later Apple on the left coast to re-present men
who were at best tokens of a type as real inventors. Because
industrial relations had imposed the same overall contours on
technology world-wide, men were simultaneously discovering the same
things all over the world; but American military power
(established by slaughtering the people of Hiroshima and Nagasaki)
made it seem as if prestige Americans invented the computer whereas it
was being simultaneously invented in places as diverse as Nazi
Germany, Pennsylvania, and Iowa.

The result is a perpetual childishness and anxiety in which certain
inventors are celebrated as gods by public relations machinery and the
"rest of us" are encouraged to fight for scraps of status.


>
> When you're comparing equivalent algorithms maybe you might have a
> point. Until then this is just further demonstration that there are
> people in the world stupider than I.

Well, if that were true, that would make you happy. But if your goal
is to discover a vanishingly small value for a number, I suggest you
scram.

>
> Tom

0
spinoza1111
12/30/2009 2:45:20 AM
On Dec 30, 12:53 am, Tom St Denis <t...@iahu.ca> wrote:
> On Dec 29, 11:07 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > I explained that already as a problem. I said that the average search
> > time slows down as the table fills up. I've known this for nearly
> > forty years, since I read it in Knuth in 1971. I said (did you miss
> > it) that you need to use linked lists based at each entry to get K*C
> > time where C is the average number of collisions.
>
> You've been in software for 40 years (1971 was 38 years ago btw...)
> and it didn't occur to you to use a binary tree?
>
> > The problem is, as I've said, that the C program is more
> > psychologically complex and harder to maintain than C Sharp code which
> > does the same thing better.
>
> man hsearch
> man bsearch
> man tsearch
>
> Sure those are not part of C99, but they're part of GNU libc, which a
> lot of people have access to. There are standalone data management
> libraries out there.
>
> You're being obtuse to think that people haven't worked on these
> problems and that ALL developers must code their own solutions from
> scratch.
>
> > I am also aware of the limitations of rand(). Indeed, many defenses of
> > C snigger at its obvious facilities and recommend arcana, some of
> > which is available from creepy sites. But this is completely absurd,
> > because one of the major reasons for using a programming language is
> > clarity.
>
> rand() is useful for simple non-sequential numbers. If you need a
> statistically meaningful PRNG use one. I'd hazard a guess C# is no
> different in its use of an LCG anyways. So I wouldn't be so apt to
> praise it.
>
> You could easily write an API that had functions like
>
> hash_add(table, key, value);
> value = hash_search(table, key);
> hash_remove(table, key);
>
> Why would that be so much harder than your class based methods from
> C#?

Because I didn't have to write them. I just used hashkey.

And even if I use the gnu products (which should not be ethically used
in commercial products) the forensic problems of C remain. Because it
provides zero protection against memory leaks, aliasing, and the
simultaneous usage of certain non-reentrant library functions, I would
have to test the code doing the hashing thoroughly IN ADDITION to the
code written to solve the actual problem. Whereas forensically I am
within my rights to assume hashkey works.
>
> Tom

0
spinoza1111
12/30/2009 2:51:06 AM
On Dec 29, 6:45 pm, spinoza1111 <spinoza1...@yahoo.com> wrote:
> On Dec 30, 12:43 am, Tom St Denis <t...@iahu.ca> wrote:
>
> > On Dec 29, 10:50 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > > You are correct in saying that the C version becomes very slow as the
> > > number of table entries approaches the size of the table. I've already
> > > noted that this is a problem, and how to solve it by using a linked
> > > list originating at each entry.
>
> > The solution is not to use linked lists [at least not that way].
> > Either use a larger table or a tree. Generally for unbounded inputs a
> > tree is a better idea as it has better properties on the whole (as the
> > symbol set size grows...).
>
> Fair enough. The linked list might have long searches whereas the tree
> search can be bounded as long as you can keep it balanced. See (you
> probably have) Knuth. I have to buy a new copy if this nonsense
> continues; I gave away my old copy to DeVry in a Prospero moment:
>
> Now my Charmes are all ore-throwne,
> And what strength I haue's mine owne.
Oh jeeze well then you'd better get that book back post haste.
0
Oliver
12/30/2009 5:30:27 AM
On Dec 27, 4:36 pm, spinoza1111 <spinoza1...@yahoo.com> wrote:
> A C and a C Sharp program was written to calculate the 64-bit value of
> 19 factorial one million times, using both the iterative and recursive
> methods to solve (and compare) the results
>
> Here is the C code.
>
> #include <stdio.h>
> #include <time.h>
>
> long long factorial(long long N)
> {
>     long long nFactorialRecursive;
>     long long nFactorialIterative;
>     long long Nwork;
>     if (N <= 2) return N;
>     for ( nFactorialIterative = 1, Nwork = N;
>           Nwork > 1;
>           Nwork-- )
>         nFactorialIterative *= Nwork;
>     nFactorialRecursive = N * factorial(N-1);
>     if (nFactorialRecursive != nFactorialIterative)
>        printf("%I64d! is %I64d recursively but %I64d iteratively wtf!\n",
>               N,
>               nFactorialIterative,
>               nFactorialRecursive);
>     return nFactorialRecursive;
>
> }
>
> int main(void)
> {
>     long long N;
>     long long Nfactorial;
>     double dif;
>     long long i;
>     long long K;
>     time_t start;
>     time_t end;
>     N = 19;
>     K = 1000000;
>     time (&start);
>     for (i = 0; i < K; i++)
>         Nfactorial = factorial(N);
>     time (&end);
>     dif = difftime (end,start);
>     printf("%I64d! is %I64d: %.2f seconds to calculate %I64d times\n",
>            N, Nfactorial, dif, K);
>     return 0; // Gee is that right?
>
> }
>
> Here is the C Sharp code.
>
> using System;
> using System.Collections.Generic;
> using System.Linq;
> using System.Text;
>
> namespace N_factorial
> {
>     class Program
>     {
>         static void Main(string[] args)
>         {
>             long N;
>             long Nfactorial = 0;
>             TimeSpan dif;
>             long i;
>             long K;
>             DateTime start;
>             DateTime end;
>             N = 19;
>             K = 1000000;
>             start = DateTime.Now;
>             for (i = 0; i < K; i++)
>                 Nfactorial = factorial(N);
>             end = DateTime.Now;
>             dif = end - start;
>             Console.WriteLine
>                 ("The factorial of " +
>                  N.ToString() + " is " +
>                  Nfactorial.ToString() + ": " +
>                  dif.ToString() + " " +
>                  "seconds to calculate " +
>                  K.ToString() + " times");
>             return;
>         }
>
>         static long factorial(long N)
>         {
>             long nFactorialRecursive;
>             long nFactorialIterative;
>             long Nwork;
>             if (N <= 2) return N;
>             for ( nFactorialIterative = 1, Nwork = N;
>                   Nwork > 1;
>                   Nwork-- )
>                 nFactorialIterative *= Nwork;
>             nFactorialRecursive = N * factorial(N-1);
>             if (nFactorialRecursive != nFactorialIterative)
>                 Console.WriteLine
>                 ("The iterative factorial of " +
>                  N.ToString() + " " +
>                  "is " +
>                  nFactorialIterative.ToString() + " " +
>                  "but its recursive factorial is " +
>                  nFactorialRecursive.ToString());
>             return nFactorialRecursive;
>         }
>     }
>
> }
>
> The C Sharp code runs at 110% of the speed of the C code, which may
> seem to "prove" the half-literate Urban Legend that "C is more
> efficient than C Sharp or VM/bytecode languages in general, d'oh".

You don't know how to benchmark programs. D'oh (sic).

> But far more significantly: the ten percent "overhead" would be
> several orders of magnitude were C Sharp to be an "inefficient,
> interpreted language" which many C programmers claim it is.

Show them "many C programmers".

> I'm for one tired of the Urban Legends of the lower middle class,
> whether in programming or politics.

I long for the day when you'll grow tired of (your own) incompetence
as well.
0
Michael
12/30/2009 7:14:42 AM
On Dec 29, 5:11 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:
>
>
>
>
>
> > On 2009-12-27, bartc <ba...@freeuk.com> wrote:
>
> > > It doesn't matter too much that the code was idiotic, provided both
> > > languages were executing the same idiotic algorithm.
>
> > True.
>
> > > The original code wasn't a bad test of long long multiplication in each
> > > language.
>
> > Fair enough. It's not exactly what I'd call an interesting test case for
> > real-world code. I'd be a lot more interested in performance of, say,
> > large lists or hash tables.
>
>
> > -s
> > --
> > Copyright 2009, all wrongs reversed. Peter Seebach / usenet-nos...@seebs.net
> > http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> > http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
>
> C code to hash several numbers, iterated to get somewhat better
> performance numbers.
>
> #include <stdio.h>
> #include <time.h>
> #include <stdlib.h>
> #define ARRAY_SIZE 1000
> #define SESSIONS 100000
>
> int main(void)
> {
>     int hash[ARRAY_SIZE];
>     int i;
>     int r;
>     int j;
>     int k;
>     int collisions;
>     time_t start;
>     time_t end;
>     double dif;
>     int tests;
>     int sessions;
>     time (&start);
>     for (sessions = 0; sessions < SESSIONS; sessions++)
>     {
>         for (i = 0; i < ARRAY_SIZE; i++) hash[i] = 0;
>         collisions = 0;
>         tests = ARRAY_SIZE;
>         for (i = 0; i < tests; i++)
>         {
>             r = rand();
>             j = r % ARRAY_SIZE;
>             k = j;
>             if (hash[j] != 0) collisions++;
>             while(hash[j] != r && hash[j] != 0)
>             {
>                 if (j >= ARRAY_SIZE) j = 0; else j++;
>                 if (j==k)
>                 {
>                     printf("Table is full\n");
>                     break;
>                 }
>             }
>             if (hash[j] == 0) hash[j] = r;
>         }
>     }
>     time (&end);
>     dif = difftime (end,start);
>     printf("It took C %.2f seconds to hash %d numbers with %d collisions, %d times\n",
>            dif, tests, collisions, sessions);
>     return 0; // Gee is that right?
>
> }
>
> C Sharp code to do the same:
>
> using System;
> using System.Collections.Generic;
> using System.Linq;
> using System.Text;
>
> namespace ConsoleApplication1
> {
>     class Program
>     {
>         static void Main(string[] args)
>         {
>             HashSet<int> hash = null;
>             hash = new HashSet<int>();
>             Random R = new Random(1);
>             int i;
>             int j;
>             TimeSpan dif;
>             DateTime start;
>             DateTime end;
>             start = DateTime.Now;
>             int sessions = 100000;
>             int tests = 1000;
>             for (i = 0; i < sessions; i++)
>             {
>                 hash.Clear();
>                 for (j = 0; j < tests; j++)
>                     hash.Add(R.Next(32767));
>             }
>             end = DateTime.Now;
>             dif = end - start;
>             Console.WriteLine
>                 ("It took C Sharp " +
>                  dif.ToString() + " " +
>                  "seconds to hash " +
>                  tests.ToString() + " numbers " +
>                  sessions.ToString() + " times");
>             return; // Ha ha I don't have to worry about the shibboleth
>         }
>     }
>
> }
>
> The C Sharp code is not only smaller above, it runs dramatically
> faster: 24 secs on my machine as contrasted with 35 secs for the C
> code.
>
> This is because the HashSet (also available in Java) can be written as
> fast as you like in that it's a part of the extended "os". It may
> itself be written in C, and please note that this does NOT mean that
> YOU should write in C after all, because the HashSet was written state
> of the art by studly guys and gals at Microsoft, and painstakingly
> reviewed by other studly guys and gals.
>
> And no, I don't want to visit some creepy site to get best C practise
> for hashing. The fact is that the research it takes at creepy come-to-
> Jesus and ammo sites to find "good" C, quite apart from its being a
> waste of spirit in an expense of shame, provides no "forensic"
> assurance that the creepy guy who gave you the code didn't screw up or
> insert a time bomb. HashSet is available, shrink-wrapped and out of
> the box, and IT RUNS TWICE AS FAST.
>
> HashSet can even safely run as managed code but be developed as Java
> bytecode or .Net MSIL as a form of safe assembler language. When it is
> maintained by the vendor, you get the benefit. Whereas el Creepo's
> code is all you get, period, unless you call him a year later, only to
> find that his ex-wife has thrown him out of the house.
>
> C is NOT more "efficient" than C Sharp. That is not even a coherent
> thing to say.
>
> Furthermore, even the best of us screw up (as I have screwn up) when
> implementing the basics. Donald E. Knuth has said that he always gets
> binary search wrong the first time he recodes it. It is a mark of the
> smart person to have trouble with low-level math and code; Einstein
> (cf Walter Kaufman's bio) had in fact a great deal of difficulty
> working out the details of relativistic mathematics and required help
> from other mathematicians and physicists.
>
> Therefore we class acts prefer C Sharp.

With a little modification, here is my result:

root@varyag-laptop:~# time ./x
It took C 6.00 seconds to hash 1000 numbers with 82 collisions, 100000
times

real	0m5.468s
user	0m5.456s
sys	0m0.000s
root@varyag-laptop:~# time mono xc.exe
It took C Sharp 00:00:10.1764660 seconds to hash 1000 numbers 100000
times

real	0m10.260s
user	0m10.241s
sys	0m0.004s

This is where this trash talking ends... ;)
0
Boris
12/30/2009 10:18:51 AM
On Dec 30, 6:18 pm, "Boris S." <pard...@gmail.com> wrote:
> On Dec 29, 5:11 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> [snip: full re-quote of the benchmark post and code, unchanged from above]
>
>
> With a little modification, here is my result:
>
> root@varyag-laptop:~# time ./x
> It took C 6.00 seconds to hash 1000 numbers with 82 collisions, 100000
> times
>
> real    0m5.468s
> user    0m5.456s
> sys     0m0.000s
> root@varyag-laptop:~# time mono xc.exe
> It took C Sharp 00:00:10.1764660 seconds to hash 1000 numbers 100000
> times
>
> real    0m10.260s
> user    0m10.241s
> sys     0m0.004s
>
> This is where this trash talking ends... ;)

The trash talking in my case is in all instances a response to the
annoying lower middle class habit of always transforming technical
issues into issues of personal standing, a habit that is based on
rather well-deserved feelings of inadequacy.

You forgot to disclose the modification you made, and the performance
improvement of C simply doesn't warrant using it, because C as such
presents the forensic problem that any C program can do nasty things
that a managed C Sharp program cannot.

Besides, we know that the C program performs poorly because the table
fills up, resulting in longer and longer searches for holes. I was the
first, in fact, to point this out, because I'm not here to destroy
others by concealing information unfavorable to my case.

Here are the comparative numbers when the C program, only, is
restricted to using 75% of the table (which trades bloat for speed).

It took C 13.00 seconds to hash 750 numbers in a table with available
size 1000 with 101913509 collisions, 100000 times
Total probes: 176913509: Average number of collisions: 0.576064

It took C Sharp 00:00:24.4531250 seconds to hash 1000 numbers 100000
times

Yes, folks, C sharp took twice as long.

Shame on it. Bad, Commie, Terrorist, Evil C Sharp, from the Dark Side
of the Force.

However, only an order of magnitude would be significant, because an
order of magnitude would indicate that the C Sharp code was
interpreted, which it is not despite urban legends. With the slower
speed (and the "larger" executable and putative "code bloat", the C
Sharp executable being 5120 bytes and the C executable being...hey
wait a minute...7680 bytes, because MSIL is not prolix) one gets
freedom from The Complete Nonsense of C which stunts the mind and rots
the spirit down to the level of the Troglodyte.


0
spinoza1111
12/30/2009 10:42:39 AM
On Tue, 29 Dec 2009 18:51:06 -0800, spinoza1111 wrote:

> On Dec 30, 12:53 am, Tom St Denis <t...@iahu.ca> wrote:

> 
> And even if I use the gnu products (which should not be ethically used
> in commercial products) the forensic problems of C remain. Because it

They are probably licenced under the LGPL.

> provides zero protection against memory leaks, aliasing, and the
> simultaneous usage of certain non-reentrant library functions, I would
> have to test the code doing the hashing thoroughly IN ADDITION to the
> code written to solve the actual problem. Whereas forensically I am
> within my rights to assume hashkey works.

You are wrong. C offers 100% protection against all evil scenarios.
Remember: it is your code; if it is wrong you can fix it. Its behavior is
dictated in the standard.

OTOH the dotnet runtime offers no such guarantee. It does what it does.
It may or may not leak, it may or may not be reentrant, it may suddenly start 
the garbage collector, without you knowing it. It may even call home to invoke
real programmers. And its internals may change at any time, without you 
knowing it.

Most clc regulars will code a trivial hashtable like this in about half an hour
(and probably another to get it right); and most of us have some existing code
base (or knowledge) to draw from.

Your time savings were in the second part: getting it right. Instead you relied 
on Bill G. and Steve B. to get it right for you.

AvK
0
Moi
12/30/2009 11:06:23 AM
On Dec 29, 9:45 pm, spinoza1111 <spinoza1...@yahoo.com> wrote:
> Fair enough. The linked list might have long searches whereas the tree

No, not "fair enough": you don't get to post an inefficient algorithm,
compare it to optimized C#, draw conclusions, and then, when called on
it, just say "fair enough." Admit you were wrong.

> ...because IT'S BEEN DONE, and is more easily accessible in C Sharp
> and Java. C'mon, guy. I need to program more examples in Wolfram and
> use my computer to search for intelligent life on other planets (sure
> ain't found much here ha ha).

It's been done in C too though.  Did you miss the part where I pointed
out that there are libraries for C out there that do trees, heaps,
queues, etc?

You seem to assume that all C developers start with no external APIs
and write everything from scratch themselves.  That's both naive and
ignorant.

> Dammit, I addressed this. Yes, it's possible that the C Sharp
> libraries are in C. It's also possible that they were written in MSIL
> directly, or C Sharp itself. And before there was C Sharp, C was used
> to produce better things than C, just like the old gods built the
> world only to be overthrown by cooler gods.

Ok, but what I'm trying to say is your argument that C sucks because
it doesn't /come with/ a hash library is dishonest.  Finding a hash
library for C is not hard, and they're not hard to use.  You were
being dishonest when you claimed that only C# provides such
functionality.

> You're neglecting the forensic problem. Not only "should" I not use
> this code in commercial products, I have gnow way of gnowing that gnu
> will ingneroperate.

Ok, but there are other libraries out there.  Point is I found that
with a 2 second google search.  So for you to claim that there are no
suitably licensed data management libraries out there is lazy and
dishonest.

> I think I do. Do u? And I have said before that this constant, lower
> middle class, questioning of credentials by people who have every
> reason to worry about their own is boring and disgusting.

You keep claiming that people have to embed 100s of messy C lines in
all of their code to get anything done in C.  First it was messy
string functions, now it's hashes, what next malloc and free?  My
point was you're missing the part where you put those algorithms in
functions that users can then call without embedding 100s of lines of
messy C all over their programs.

> No, C sucks because I started programming before it existed and saw
> University of Chicago programmers do a better job, only to see C
> become widely use merely because of the prestige of a campus with no
> particular distinction in comp sci at the time but an upper class
> reputation. I was hired by Princeton in the 1980s in part because it
> was behind the curve technically.

You keep claiming that you're this "old timer" programmer from back in
the day, like that matters.  Even if it were true, that doesn't
preclude the possibility that even with all that time to get
experience you STILL have no idea what you're talking about.

If you want to impress me with your credentials you'd stop spouting
obvious lies, better yet, you'd stop trolling usenet.  Better yet,
you'd post with your real name...

Tom
0
Tom
12/30/2009 11:47:56 AM
On Dec 29, 9:51 pm, spinoza1111 <spinoza1...@yahoo.com> wrote:
> On Dec 30, 12:53 am, Tom St Denis <t...@iahu.ca> wrote:
>
>
>
> > On Dec 29, 11:07 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > > I explained that already as a problem. I said that the average search
> > > time slows down as the table fills up. I've known this for nearly
> > > forty years, since I read it in Knuth in 1971. I said (did you miss
> > > it) that you need to use linked lists based at each entry to get K*C
> > > time where C is the average number of collisions.
>
> > You've been in software for 40 years (1971 was 38 years ago btw...)
> > and it didn't occur to you to use a binary tree?
>
> > > The problem is, as I've said, that the C program is more
> > > psychologically complex and harder to maintain than C Sharp code which
> > > does the same thing better.
>
> > man hsearch
> > man bsearch
> > man tsearch
>
> > Sure those are not part of C99, but they're part of GNU libc, which a
> > lot of people have access to.  There are standalone data management
> > libraries out there.
>
> > You're being obtuse to think that people haven't worked on these
> > problems and that ALL developers must code their own solutions from
> > scratch.
>
> > > I am also aware of the limitations of rand(). Indeed, many defenses of
> > > C snigger at its obvious facilities and recommend arcana, some of
> > > which is available from creepy sites. But this is completely absurd,
> > > because one of the major reasons for using a programming language is
> > > clarity.
>
> > rand() is useful for simple non-sequential numbers.  If you need a
> > statistically meaningful PRNG use one.  I'd hazard a guess C# is no
> > different in its use of a LCG anyways.  So I wouldn't be so apt to
> > praise it.
>
> > You could easily write an API that had functions like
>
> > hash_add(table, key, value);
> > value = hash_search(table, key);
> > hash_remove(table, key);
>
> > Why would that be so much harder than your class based methods from
> > C#?
>
> Because I didn't have to write them. I just used hashkey.
>
> And even if I use the gnu products (which should not be ethically used
> in commercial products) the forensic problems of C remain. Because it
> provides zero protection against memory leaks, aliasing, and the
> simultaneous usage of certain non-reentrant library functions, I would
> have to test the code doing the hashing thoroughly IN ADDITION to the
> code written to solve the actual problem. Whereas forensically I am
> within my rights to assume hashkey works.

All functions in libc which aren't thread safe are marked as such.
Turns out the *_r() variants ARE thread safe.  And as "Moi" pointed
out, GNU LIBC is licensed under LGPL which doesn't attach a license to
your linked image.
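To see what "the *_r() variants keep their state in caller-owned storage" means concretely, compare strtok(), which parks its scan position in hidden static storage, with POSIX strtok_r(), which takes the position from the caller. A small sketch (the function name `count_fields` and the fixed buffer size are illustrative):

```c
#include <string.h>

/* strtok() hides its cursor in static storage, so two interleaved
 * tokenizations stomp on each other.  strtok_r() keeps the cursor
 * in a pointer the caller owns, which is exactly why the _r
 * variants are safe to interleave and to call from threads. */
int count_fields(const char *line, const char *sep)
{
    char buf[256];
    char *save;                 /* the caller-owned scan state */
    char *tok;
    int n = 0;

    strncpy(buf, line, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    for (tok = strtok_r(buf, sep, &save);
         tok != NULL;
         tok = strtok_r(NULL, sep, &save))
        n++;
    return n;
}
```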

And where you got the idea that C# or Java don't have memory
management issues is beyond me... Have you never seen a tomcat server
taking 5GB of ram to handle a few 100 connections?  Just because there IS a GC in
Java doesn't mean it's used effectively.  Point is, if you really did
have "40 years of experience" (impressive since you stated this began
38 years ago, how did you make up the two years?) you'd be comfortable
with pointers, heaps, array indices, etc... If not from C, from any
of the dozens of languages that came out around/before it.

With all your experience though you don't seem to get testing and
verification.  In an ideal model you assume libc is working and reduce
things to that.  Then you prove your libraries are working, and reduce
things to them, and so on.  Under your line of thinking you'd have to
go to the customer's site and prove that electrons are moving on mains,
that the computer fans spin clockwise, and so on.  NO
ASSUMPTIONS!

Tom
0
Tom
12/30/2009 11:53:04 AM
On Dec 29, 9:24 pm, spinoza1111 <spinoza1...@yahoo.com> wrote:
> C# comes WITH a tuned hash library whereas the libraries that "come
> with" C in the sense of being visible are a joke: snprintf being an
> example.

I don't get what your point is.  There is a ton of 3rd party Java/C#/
PHP/etc code out there.  If developers only stayed with what the core
C# provided it'd be no better [in that regard].

> Many arrays in fact fill up in production. And don't waste my time: I
> raised this issue right after I posted the original code, pointing out
> that the average probe time would go to shit and describing how to fix
> it...the point being that none of this was necessary with C Sharp.

No, all this proves is after analyzing a problem that YOU invented,
YOU came up with an inappropriately inferior solution.  It proves that
you don't really know what you're talking about, or at least are not
honest enough to make a point properly.

> Talk about bloatware: on the one hand one guy complains that C Sharp
> executables are too large, here a C fan suggests wasting memory to
> make a crude algorithm perform. It might be a good idea in certain
> circumstances, but it fails to demonstrate that C doesn't suck. The
> linked list that I suggested (but haven't implemented in this thread) is
> better.

That's how you improve hashing though.  More memory.  If you have
unbounded inputs use a tree and learn how to splay.  OH MY GOD,
COMPUTER SCIENCE!!!
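The tree alluded to above doesn't even need a third-party library: POSIX tsearch()/tfind() from <search.h> manage a binary search tree for you (glibc keeps it balanced internally). A minimal sketch with string keys:

```c
#include <search.h>
#include <string.h>

/* Comparator over the stored keys (here, plain C strings). */
static int cmp(const void *a, const void *b)
{
    return strcmp(a, b);
}

/* Insert three keys, then check one present and one absent key.
 * tfind() returns NULL when the key is not in the tree. */
int tree_demo(void)
{
    void *root = NULL;          /* tsearch manages this opaquely */
    char *words[] = { "splay", "hash", "tree" };
    size_t i;

    for (i = 0; i < 3; i++)
        tsearch(words[i], &root, cmp);

    return tfind("hash", &root, cmp) != NULL    /* present */
        && tfind("heap", &root, cmp) == NULL;   /* absent  */
}
```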

Tom
0
Tom
12/30/2009 11:56:19 AM
On Dec 30, 5:42 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> The trash talking in my case is in all instances a response to the
> annoying lower middle class habit of always transforming technical
> issues into issues of personal standing, a habit that is based on
> rather well-deserved feelings of inadequacy.

It'd help if you stopped trying to explain away your ignorance in
terms of other peoples failures.

> You forgot to disclose the modification you made, and the performance
> improvement of C simply doesn't warrant using it, because C as such
> presents the forensic problem that any C program can do nasty things
> that a managed C Sharp program cannot.

Dude, learn how to design testable and verifiable software.

> Besides, we know that the C program performs poorly because the table
> fills up, resulting in longer and longer searches for holes. I was the
> first, in fact, to point this out, because I'm not here to destroy
> others by concealing information unfavorable to my case.

No, *YOUR* program performed poorly because you designed an
inappropriate solution.  If you wrote the same algorithm in C# it'd be
just as inefficient.

> However, only an order of magnitude would be significant, because an
> order of magnitude would indicate that the C Sharp code was
> interpreted, which it is not despite urban legends. With the slower
> speed (and the "larger" executable and putative "code bloat", the C
> Sharp executable being 5120 bytes and the C executable being...hey
> wait a minute...7680 bytes (because MSIL is not prolix) one gets
> freedom from The Complete Nonsense of C which stunts the mind and rots
> the spirit down to the level of the Troglodyte.

Nobody here is saying that C# is not executed as bytecode.  And if
they are they're wrong.  I also wouldn't compare executable sizes
unless you also consider the C# DLL baggage [on top of the C runtime
ironically..]

Here's a helpful hint:  If you're trying to make a point, consider it
from all angles first.  If three seconds of thinking can shoot down
every point you're trying to make you're in the wrong business.  Maybe
trolling is not for you, have you considered travel instead?

Tom
0
Tom
12/30/2009 12:01:21 PM
On Dec 30, 8:01 pm, Tom St Denis <t...@iahu.ca> wrote:
> On Dec 30, 5:42 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > The trash talking in my case is in all instances a response to the
> > annoying lower middle class habit of always transforming technical
> > issues into issues of personal standing, a habit that is based on
> > rather well-deserved feelings of inadequacy.
>
> It'd help if you stopped trying to explain away your ignorance in
> terms of other peoples failures.

The process has consistently started with "regs" here questioning my
bona fides, since I write markedly better (more consistently excellent
grammar and spelling, wider vocabulary, more knowledge) than the regs.
Their defensive response has been to start attacks, and newbies and
fools read the attacks only. This causes them to join what constitutes
a cybernetic mob.

In fact, people here are mostly failures whose programme is to show
that other people are liars and failures. For example, recently,
Richard Heathfield, the editor of an unsuccessful book on C from a
publisher with a bad reputation, claimed I'd never posted to the well-
regarded and tightly moderated group comp.risks. The charge was a
malicious lie and libel under UK law and refutable with laughable
ease, and it proved that Heathfield is both stupid and malicious.

Whether you like it or not, and whether or not you've been
oversocialized to not defend yourself, I defend myself, and this
creates the illusion that I'm making trouble.

Do your homework.

>
> > You forgot to disclose the modification you made, and the performance
> > improvement of C simply doesn't warrant using it, because C as such
> > presents the forensic problem that any C program can do nasty things
> > that a managed C Sharp program cannot.
>
> Dude, learn how to design testable and verifyable software.

Dude, learn how to spell. The first step is to select a quality tool.
C is not a quality tool since it exists in incompatible forms and was
never properly designed.

>
> > Besides, we know that the C program performs poorly because the table
> > fills up, resulting in longer and longer searches for holes. I was the
> > first, in fact, to point this out, because I'm not here to destroy
> > others by concealing information unfavorable to my case.
>
> No, *YOUR* program performed poorly because you designed an
> inappropriate solution.  If you wrote the same algorithm in C# it'd be
> just as inefficient.

The Lower Middle Class Parent inhabits too many posters here, and he
transforms all speech into the minatory register, and nothing gets
learnt.

But that's a good idea. I shall indeed do so ASAP. We need to see how
C sharp performs for the ignorant programmer who doesn't know hashset.
My prediction is poorly, for as I was the first to say in this thread,
the algorithm used in the C program slows down as the table fills.

>
> > However, only an order of magnitude would be significant, because an
> > order of magnitude would indicate that the C Sharp code was
> > interpreted, which it is not despite urban legends. With the slower
> > speed (and the "larger" executable and putative "code bloat", the C
> > Sharp executable being 5120 bytes and the C executable being...hey
> > wait a minute...7680 bytes (because MSIL is not prolix) one gets
> > freedom from The Complete Nonsense of C which stunts the mind and rots
> > the spirit down to the level of the Troglodyte.
>
> Nobody here is saying that C# is not executed as bytecode.  And if
> they are they're wrong.  I also wouldn't compare executable sizes
> unless you also consider the C# DLL baggage [on top of the C runtime
> ironically..]

We don't know whether the .Net runtime is in C, since the particular
implementation I use is closed source. But there is no necessary
connection between C and the .Net runtime, or Windows and .Net.
The .Net runtime can be written in anything you like, such as
unmanaged C Sharp.

>
> Here's a helpful hint:  If you're trying to make a point, consider it
> from all angles first.  If three seconds of thinking can shoot down
> every point you're trying to make you're in the wrong business.  Maybe
> trolling is not for you, have you considered travel instead?

The next step, from the minatory register, for the Lower Middle Class
Parent, is the abstract recommendation that one shape up. I have in
this thread considered things from different points of view, for I was
the first to note that the C algorithm slows down as the table fills
up (and to find a solution).

Here's a helpful hint: shove your lectures up your ass, and confine
yourself to on-topic technical remarks. Use Ben Bacarisse's posts as
an example. He's no friend of mine, but he focuses on technical points
almost exclusively and is very intelligent as a techie.

>
> Tom

0
spinoza1111
12/30/2009 12:42:33 PM
On Dec 30, 7:06 pm, Moi <r...@invalid.address.org> wrote:
> On Tue, 29 Dec 2009 18:51:06 -0800, spinoza1111 wrote:
> > On Dec 30, 12:53 am, Tom St Denis <t...@iahu.ca> wrote:
>
> > And even if I use the gnu products (which should not be ethically used
> > in commercial products) the forensic problems of C remain. Because it
>
> They are probably licensed under the LGPL.
>
> > provides zero protection against memory leaks, aliasing, and the
> > simultaneous usage of certain non-reentrant library functions, I would
> > have to test the code doing the hashing thoroughly IN ADDITION to the
> > code written to solve the actual problem. Whereas forensically I am
> > within my rights to assume hashkey works.
>
> You are wrong. C offers 100% protection against all evil scenarios.

Poppycock.

> Remember: it is your code, if it is wrong you can fix it. Its behavior is

No, it belongs to the organization paying your salary.

"Those Bolshevists are trying to take our factories?"
"Your factories? You don't even own the smoke!"

- International Workers of the World cartoon ca. 1915

The fact is that in many environments, the suits don't give
programmers enough time to do quality assurance. This means that using
such an unconstrained language as C is professional malpractice.

The suits encourage a form of self-defeating programmer machismo such
that no programmer ever admits to not having enough time. Instead,
based on the macho culture, he'll destroy his health working extra,
unpaid hours to "prove" he's a "man", and the suits laugh all the way
to the bank...since he's lowered his salary.



> dictated in the standard.
>
> OTOH the dotnet runtime offers no such guarantee. It does what it does.
> It may or may not leak, it may or may not be reentrant, it may suddenly start
> the garbage collector, without you knowing it. It may even call home to invoke
> real programmers. And its internals may change at any time, without you
> knowing it.

This is all true, but highly unlikely.

>
> Most clc regulars will code a trivial hashtable like this in about half an hour
> (and probably another to get it right); and most of us have some existing code
> base (or knowledge) to lend from.

In fact, the regs you so admire almost never post new code, for
they're afraid of making errors. Keith Thompson, Seebs and Heathfield
specialize in enabling campaigns of personal destruction against
people who actually accomplish anything, and this started with Seebs'
adolescent mockery of Schildt.

Whereas I've posted C code despite the fact that I think C sucks, and
last used it at the time I was asked at Princeton to assist a Nobel
prize winner with C.

[Damn straight I'll repeat myself about Nash. I find the same lies
about Schildt and myself year in and year out, and this will continue
until it ends and/or Heathfield loses his house in a libel suit.]
>
> Your time savings were in the second part: getting it right. Instead you relied
> on Bill G. and Steve B. to get it right for you.

You're lying. I was the first in this thread to show how the C code
slows down as the table fills up, since I first read of the algorithm
in 1972 and implemented it first in 1976 in PL/I. I wrote the code at
the start of the week before boarding the ferry to work. On the ferry
I realized that I would have to explain a fact about performance that
isn't obvious, so I plugged in my Vodaphone thingie and got back on
air. Nobody else had commented on the issue at the time. You're lying,
punk.
>
> AvK

0
spinoza1111
12/30/2009 12:54:27 PM
On Dec 30, 7:47 pm, Tom St Denis <t...@iahu.ca> wrote:
> On Dec 29, 9:45 pm, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > Fair enough. The linked list might have long searches whereas the tree
>
> No fair enough, you don't get to post an inefficient algorithm compare
> it to optimized C# and make conclusions, then when told basically say
> "fair enough." =A0Admit you were wrong.
>
> > ...because IT'S BEEN DONE, and is more easily accessible in C Sharp
> > and Java. C'mon, guy. I need to program more examples in Wolfram and
> > use my computer to search for intelligent life on other planets (sure
> > ain't found much here ha ha).
>
> It's been done in C too though.  Did you miss the part where I pointed
> out that there are libraries for C out there that do trees, heaps,
> queues, etc?
>

I addressed this point before you mentioned it. I said that I don't
want to use virtual slave labor (which is what open source is) at GNU
nor do I want to go to some creepy site.

> You seem to assume that all C developers start with no external APIs
> and write everything from scratch themselves.  That's both naive and
> ignorant.

False. I posted the example to show how a real problem is typically
solved. In C, the default is to hack new code. In C Sharp the default
is to use a tool. The fact is most C programmers are deficient at
reuse.


>
> > Dammit, I addressed this. Yes, it's possible that the C Sharp
> > libraries are in C. It's also possible that they were written in MSIL
> > directly, or C Sharp itself. And before there was C Sharp, C was used
> > to produce better things than C, just like the old gods built the
> > world only to be overthrown by cooler gods.
>
> Ok, but what I'm trying to say is your argument that C sucks because
> it doesn't /come with/ a hash library is dishonest.  Finding a hash
> library for C is not hard, and they're not hard to use.  You were
> being dishonest when you claimed that only C# provides such
> functionality.

The problem isn't that C doesn't come with a hash library. The problem
is that it comes with too many.

There's no way (except perhaps consulting some Fat Bastard at your
little shop, or one of the regs here, such as the pathological liar
Heathfield) of telling which library actually works, and this is a
serious matter, because statistically, C programs are less likely to
work than C Sharp independent of programmer skill: this is a
mathematical result of the ability to alias and the fact that other
people change "your" code.


>
> > You're neglecting the forensic problem. Not only "should" I not use
> > this code in commercial products, I have gnow way of gnowing that gnu
> > will ingneroperate.
>
> Ok, but there are other libraries out there.  Point is I found that
> with a 2 second google search.  So for you to claim that there are no
> suitably licensed data management libraries out there is lazy and
> dishonest.

You're searching toxic waste.

>
> > I think I do. Do u? And I have said before that this constant, lower
> > middle class, questioning of credentials by people who have every
> > reason to worry about their own is boring and disgusting.
>
> You keep claiming that people have to embed 100s of messy C lines in
> all of their code to get anything done in C.  First it was messy
> string functions, now it's hashes, what next malloc and free?  My
> point was you're missing the part where you put those algorithms in
> functions that users can then call without embedding 100s of lines of
> messy C all over their programs.

...only to find for example that simultaneous calls fail because
global data cannot be hidden properly in C. There's no static nesting
whatsoever, have you noticed? Even PL/I had this!

The result? If the library function has a state it cannot be called by
a handler handling its failure. This is well known for malloc() but
unknown and unpredictable for any arbitrary "solution" recommended by
some Fat Bastard, recommended by some pathological liar, or found in a
Google search.


>
> > No, C sucks because I started programming before it existed and saw
> > University of Chicago programmers do a better job, only to see C
> > become widely used merely because of the prestige of a campus with no
> > particular distinction in comp sci at the time but an upper class
> > reputation. I was hired by Princeton in the 1980s in part because it
> > was behind the curve technically.
>
> You keep claiming that you're this "old timer" programmer from back in
> the day, like that matters.  Even if it were true, that doesn't
> preclude the possibility that even with all that time to get
> experience you STILL have no idea what you're talking about.

It is true that corporate life is an eternal childhood. However, I
also worked independent of the corporation, for example as the author
of Build Your Own .Net Compiler (buy it now or I will kill this dog)
and the programmer of its (26000 line) exemplary compiler.

>
> If you want to impress me with your credentials you'd stop spouting
> obvious lies, better yet, you'd stop trolling usenet.  Better yet,
> you'd post with your real name...

It is well known that I nyah ha ha am Bnarg, the Ruler of the Galaxy,
posting from my Lair on the Planet Gazumbo.

Seriously, it is well known to the regs here that I am Edward G.
Nilges.

>
> Tom

0
spinoza1111
12/30/2009 1:04:34 PM
On Dec 30, 7:42 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> The process has consistently started with "regs" here questioning my
> bona fides, since I write markedly better (more consistently excellent
> grammar and spelling, wider vocabulary, more knowledge) than the regs.
> Their defensive response has been to start attacks, and newbies and
> fools read the attacks only. This causes them to join what constitutes
> a cybernetic mob.

I don't care about the "regs,"  in this thread I only care about what
YOU are trying to pass off as knowledge.

> Whether you like it or not, and whether or not you've been
> oversocialized to not defend yourself, I defend myself, and this
> creates the illusion that I'm making trouble.

Defend yourself by being right then.  If you're trying to make
arguments actually make sure they're sound and well reasoned instead
of just shotgunning stupidity and seeing what sticks.

> Dude, learn how to spell. The first step is to select a quality tool.
> C is not a quality tool since it exists in incompatible forms and was
> never properly designed.

Which is ironic given how much of your day to day life is probably the
result of C programs...

So far it seems to be you saying C sucks, and nobody caring.  I don't
care if you program in C or not.  I'm only replying here because you
posted some nonsense comparison and are trying to pass it off as
science.  You're a fraud.

> But that's a good idea. I shall indeed do so ASAP. We need to see how
> C sharp performs for the ignorant programmer who doesn't know hashset.
> My prediction is poorly, for as I was the first to say in this thread,
> the algorithm used in the C program slows down as the table fills.

So why bother the comparison?  If you knew your algorithm in your C
program was not comparable why bother?

That'd be like comparing bubble sort in C# to qsort in C ...

> We don't know whether the .Net runtime is in C, since the particular
> implementation I use is closed source. But there is no necessary
> connection between C and the .Net runtime, or Windows and .Net.
> The .Net runtime can be written in anything you like, such as
> unmanaged C Sharp.

Well the Windows kernel has a C interface for the syscalls.  So at
some point something has to call that.  So chances are good that the
C# runtime is based on top of the C runtime.

Tom
0
Tom
12/30/2009 1:23:33 PM
On Dec 30, 8:04 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> I addressed this point before you mentioned it. I said that I don't
> want to use virtual slave labor (which is what open source is) at GNU
> nor do I want to go to some creepy site.

OSS is hardly slave labour.  Many people in that scene get paid for
their work.  It's just a community effort.  Of course if this is how
you play ostrich then so be it.

> False. I posted the example to show how a real problem is typically
> solved. In C, the default is to hack new code. In C Sharp the default
> is to use a tool. The fact is most C programmers are deficient at
> reuse.

Citation needed.

> The problem isn't that C doesn't come with a hash library. The problem
> is that it comes with too many.

So first it's that there are no solutions in C to the problem.  Now
there are too many?

> There's no way (except perhaps consulting some Fat Bastard at your
> little shop, or one of the regs here, such as the pathological liar
> Heathfield) of telling which library actually works, and this is a
> serious matter, because statistically, C programs are less likely to
> work than C Sharp independent of programmer skill: this is a
> mathematical result of the ability to alias and the fact that other
> people change "your" code.

I don't get what your rant is.  By your logic we shouldn't trust the
code you produced since we didn't write it.

Also, if I import version X of an OSS library then NOBODY is changing
it on me...

> ...only to find for example that simultaneous calls fail because
> global data cannot be hidden properly in C. There's no static nesting
> whatsoever, have you noticed? Even PL/I had this!

Um, the stack of the threads is where you typically put cheap per-
thread data.  Otherwise you allocate it off the heap.  In the case of
the *_r() GNU libc functions they store any transient data in the
structure you pass it.  That's how they achieve thread safety.

It's like in OOP where you have a fresh instance of a class per
thread.  The class has public/private data members that are unique to
the instance.  OMG C++ HAS NO THREAD SAFETY!!!

> The result? If the library function has a state it cannot be called by
> a handler handling its failure. This is well known for malloc() but
> unknown and unpredictable for any arbitrary "solution" recommended by
> some Fat Bastard, recommended by some pathological liar, or found in a
> Google search.

malloc() is thread safe in GNU libc.  It can fail/succeed in multiple
threads simultaneously.  What's your point?

> Seriously, it is well known to the regs here that I am Edward G.
> Nilges.

I didn't know that (nor do I assume to know that, being the victim of
a joe-job myself I don't trust anything most people say w.r.t.
identities unless they prove it through other means).  That being
said, why not reserve your posts for more positive or constructive
contributions instead?

Tom
0
Tom
12/30/2009 1:32:27 PM
On Dec 30, 7:54 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> In fact, the regs you so admire almost never post new code, for
> they're afraid of making errors.

You say that like it's a bad thing.  Real developers SHOULD be afraid
of "winging" it.  Software [no matter the language] is hard to get
right all the time.  A smart developer would re-use as much as
possible, such that if you had asked me to write, say a routine to
store GZIP'ed data, I'd call libz, I wouldn't re-invent a deflate/gzip
codec on the fly just to boast of what a wicked C coder I am...

In this case, I would have googled for a hash API, and wrote a program
based on it.  The fact that GNU libc provides one as part of the
standard library means I would have used it.
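The hash API in question is POSIX hcreate()/hsearch() from <search.h>: one process-wide table, ENTER inserts, FIND looks up and returns NULL on a miss. A minimal sketch (table size and keys are arbitrary):

```c
#include <search.h>
#include <stdlib.h>

/* Store one key/value pair, then check a hit and a miss. */
int lookup_demo(void)
{
    ENTRY e, *ep;

    hcreate(50);                  /* room for ~50 entries */

    e.key = "nineteen";
    e.data = (void *)(long)19;
    hsearch(e, ENTER);            /* insert */

    e.key = "nineteen";
    ep = hsearch(e, FIND);        /* hit: non-NULL, data preserved */
    int hit = (ep != NULL && (long)ep->data == 19);

    e.key = "twenty";
    int miss = (hsearch(e, FIND) == NULL);  /* miss: NULL */

    hdestroy();
    return hit && miss;
}
```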

The fact that these concepts are foreign to you just serves to prove
you know nothing of software development.

Tom
0
Tom
12/30/2009 1:35:12 PM
On Dec 30, 9:35 pm, Tom St Denis <t...@iahu.ca> wrote:
> On Dec 30, 7:54 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > In fact, the regs you so admire almost never post new code, for
> > they're afraid of making errors.
>
> You say that like it's a bad thing.  Real developers SHOULD be afraid
> of "winging" it.  Software [no matter the language] is hard to get
> right all the time.  A smart developer would re-use as much as
> possible, such that if you had asked me to write, say a routine to
> store GZIP'ed data, I'd call libz, I wouldn't re-invent a deflate/gzip
> codec on the fly just to boast of what a wicked C coder I am...

That's true. The exception: teachers, who have to "reinvent the wheel"
to explain how hash functions work or in Schildt's case, how C works.
And of course they're going to get it wrong. The best use the getting-
it-wrong to help students learn, as in the case where the teacher
allows the student to correct him, or find something he missed. The
most famous example being Knuth's checks to people who find bugs in
his books...hmm, maybe Seebach is mad at Herb for not cutting him a
check.

In my case I needed to illustrate an important fact about C Sharp:
that it's not interpreted. If it were, its execution time would be an
order of magnitude (ten times) higher. It wasn't.

I also intended to show that C Sharp avoids reinventing the wheel by
providing a canonical set of libraries that function in a truly
encapsulated way. In C Sharp, you just don't have to worry about re-
entrance UNLESS (and this is the only exception that comes to mind)
the routine you call uses disk memory for some silly reason, say
writing and reading to the infamous "Windows Registry". Whereas in C,
malloc() and other routines aren't re-entrant.

I've called these "forensic" concerns (my term of art) because they
have to do with things that you have to worry about, for which there
is no definable deliverable: you can deliver C code but you cannot
provide a forensic assurance that it's completely exposure free.

This is true even if you're as competent as the regs here fantasize
themselves to be, because truly great programmers in many cases make
more mistakes as part of being more productive, and more creative.
Knuth famously claimed that every time he does binary search he makes
the same mistake, and John von Neumann bragged about his errors (after
fixing them). Whereas here, technicians wait in Adorno's "fear and
shame" for their errors to be detected.

In this toxic environment, anyone who tries to teach or learn is
exposed to thuggish conduct from people who cannot teach and learn by
rote.

>
> In this case, I would have googled for a hash API, and wrote a program
> based on it.  The fact that GNU libc provides one as part of the
> standard library means I would have used it.
>
> The fact that these concepts are foreign to you just serves to prove
> you know nothing of software development.

Guess not, Tommy boy. In fact, I've seen software development "mature"
to the point where nothing gets done because everybody equates
rationality with the ability to dominate conversations, if necessary
by trashing other people. I want very little to do with this type of
software development, which is why I left the field for English
teaching after a successful career that at thirty years was about
three times longer than the average.

>
> Tom

0
spinoza1111
12/30/2009 4:10:16 PM
On Dec 30, 11:10 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> That's true. The exception: teachers, who have to "reinvent the wheel"
> to explain how hash functions work or in Schildt's case, how C works.
> And of course they're going to get it wrong. The best use the getting-
> it-wrong to help students learn, as in the case where the teacher
> allows the student to correct him, or find something he missed. The
> most famous example being Knuth's checks to people who find bugs in
> his books...hmm, maybe Seebach is mad at Herb for not cutting him a
> check.

USENET postings != published book.  I'm not going to write from memory
the AES encryption algorithm every time a student asks about it in
sci.crypt.  I'll answer questions about parts of it, and point them to
source if they need a reference.

Why would anyone here repeatedly post complicated and time consuming
code for every single newcomer who comes by to ask?  If you want to
learn about hashing and data management, buy a book on the subject and/
or google.

> In my case I needed to illustrate an important fact about C Sharp:
> that it's not interpreted. If it were, its execution time would be an
> order of magnitude (ten times) higher. It wasn't.

Depends on what you're doing.  If your application is highly syscall
dependent [e.g. reading large files from disk] an interpreted program
might not be much slower than compiled.  In this case though, all you
showed is that if you use the wrong algorithm in C you get slower
results than the right algorithm in C#.

> I also intended to show that C Sharp avoids reinventing the wheel by
> providing a canonical set of libraries that function in a truly
> encapsulated way. In C Sharp, you just don't have to worry about re-
> entrance UNLESS (and this is the only exception that comes to mind)
> the routine you call uses disk memory for some silly reason, say
> writing and reading to the infamous "Windows Registry". Whereas in C,
> malloc() and other routines aren't re-entrant.

C# does ***NOT*** include every possible routine a developer would
ever need.  If it did there wouldn't be third party C# libraries out
there.  So your point is not only wrong and based on false pretenses,
but ignorant and naive.

Also as I posted elsewhere malloc() *is* thread safe in GNU libc.
It's even thread safe in the windows C runtime libraries.

> I've called these "forensic" concerns (my term of art) because they
> have to do with things that you have to worry about, for which there
> is no definable deliverable: you can deliver C code but you cannot
> provide a forensic assurance that it's completely exposure free.

You need to learn how to write testable and verifiable software.

> This is true even if you're as competent as the regs here fantasize
> themselves to be, because truly great programmers in many cases make
> more mistakes as part of being more productive, and more creative.

Which is why they don't wing it in USENET posts, instead they reduce
things to proven quantities.  If I spend the time and energy to test
and verify a hash library, I can then reduce my correctness to the
fact that I've already checked the library.  I don't need to re-verify
everything.

> In this toxic environment, anyone who tries to teach or learn is
> exposed to thuggish conduct from people who cannot teach and learn by
> rote.

You can't teach because you lack the humility to admit when you're
wrong.

> Guess not, Tommy boy. In fact, I've seen software development "mature"
> to the point where nothing gets done because everybody equates
> rationality with the ability to dominate conversations, if necessary
> by trashing other people. I want very little to do with this type of
> software development, which is why I left the field for English
> teaching after a successful career that at thirty years was about
> three times longer than the average.

Resorting to insults won't win you any points.  All you've "proven" in
this thread is you fundamentally don't understand computer science,
that you don't know C that well, that you can't teach, that you're not
that familiar with software engineering, and that you're genuinely not
a nice person.

Tom
0
Tom
12/30/2009 4:22:40 PM
In article <cca8960d-b90d-4714-a100-3cc3682e3cb0@1g2000vbe.googlegroups.com>,
Tom St Denis  <tom@iahu.ca> described himself to a tee thusly:
....
>Resorting to insults won't win you any points.  All you've "proven" in
>this thread is you fundamentally don't understand computer science,
>that you don't know C that well, that you can't teach, that you're not
>that familiar with software engineering, and that you're genuinely not
>a nice person.

It's so funny - all of those things are true to a tee about you.

Projecting much?

0
gazelle
12/30/2009 4:32:25 PM
On Dec 30, 9:32 pm, Tom St Denis <t...@iahu.ca> wrote:
> On Dec 30, 8:04 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > I addressed this point before you mentioned it. I said that I don't
> > want to use virtual slave labor (which is what open source is) at GNU
> > nor do I want to go to some creepy site.
>
> OSS is hardly slave labour.  Many people in that scene get paid for
> their work.  It's just a community effort.  Of course if this is how
> you play ostrich then so be it.
>
> > False. I posted the example to show how a real problem is typically
> > solved. In C, the default is to hack new code. In C Sharp the default
> > is to use a tool. The fact is most C programmers are deficient at
> > reuse.
>
> Citation needed.

OO was developed because of insufficient code reuse. In my own book, I
show how each distinct piece of the puzzle was developed
as something that could be independently tested and reused. An Amazon
reviewer (A Customer at
http://www.amazon.com/Build-Your-NET-Language-Compiler/product-reviews/1590591348/ref=cm_cr_pr_link_2?ie=UTF8&showViewpoints=0&pageNumber=2&sortBy=bySubmissionDateDescending)
writes:

"This book contains a huge amount of code that has obviously been
under development and evolving for a long time. The code has a level
of documentation, error checking and self-consistency testing that is
rare to see even in commercial code, much less sample code for a
book."

You see, OO (which C does NOT have) allowed me to craft the compiler
in a few months in 2003, since I used basic software (such as software
to parse words and do arithmetic in nondecimal bases) which I'd
written in the Nineties. I didn't have to worry about this code
breaking something remote since it existed in static libraries.

On this foundation I was able to build encapsulated stateful objects
such as the scanner, the representation of a data type, and the
representation of a variable at run time. Each one of these is
deliverable as testable, stand alone.

The kicker is that I worked in Visual Basic (.Net). Apart from its
lack of modern syntactic sugar, its clunky syntax is in fact block-
structured, with the blocks being expressed in a very different way
from C. The key was the structure imposed by the .Net object oriented
environment.
>
> > The problem isn't that C doesn't come with a hash library. The problem
> > is that it comes with too many.
>
> So first it's that there are no solutions in C to the problem.  Now
> there are too many?

It is paradoxical but true: if one's truly worried about
software reliability (which few programmers are in fact) then a large
set of solutions, any one of which can with a great deal of
probability be buggy owing to the aliasing and "powerful" nature of C,
constitutes a null set UNTIL you have done your homework and
determined the subset of "solutions" that work. We see this all the
time here, with constant wars over which C solution is "best" or even
works.

For example, we're not supposed to expect a basic malloc() to work
recursively, and we're not supposed to use sprintf(). The large number
of tools is actually the problem.

Every time I read the following words of the midcentury sociologist
and musician Theodor Adorno, I am astonished by his prescience as
regards computing:

"Their intellectual energy is thereby amplified in many dimensions to
the utmost by the mechanism of control. The collective stupidity of
research technicians is not simply the absence or regression of
intellectual capacities, but an overgrowth of the capacity of thought
itself, which eats away at the latter with its own energy. The
masochistic malice [Bosheit] of young intellectuals derives from the
malevolence [Bösartigkeit] of their illness."
>
> > There's no way (except perhaps consulting some Fat Bastard at your
> > little shop, or one of the regs here, such as the pathological liar
> > Heathfield) of telling which library actually works, and this is a
> > serious matter, because statistically, C programs are less likely to
> > work than C Sharp independent of programmer skill: this is a
> > mathematical result of the ability to alias and the fact that other
> > people change "your" code.
>
> I don't get what your rant is.  By your logic we shouldn't trust the
> code you produced since we didn't write it.

Please don't, since it was produced to illustrate things off the cuff.
>
> Also, if I import version X of an OSS library then NOBODY is changing
> it on me...
>
> > ...only to find for example that simultaneous calls fail because
> > global data cannot be hidden properly in C. There's no static nesting
> > whatsoever, have you noticed? Even PL.1 had this!
>
> Um, the stack of the threads is where you typically put cheap per-
> thread data.  Otherwise you allocate it off the heap.  In the case of
> the *_r() GNU libc functions they store any transient data in the
> structure you pass it.  That's how they achieve thread safety.

It's a clumsy and old-fashioned method, not universally used. It also
has bugola potential.

You see, the called routine is telling the caller to supply him with
"scratch paper". This technique is an old dodge. It was a requirement
in IBM 360 BAL (Basic Assembler Language) that the caller provide the
callee with a place to save the 16 "general purpose registers" of the
machine.

The problem then and now is what happens if the caller is called
recursively by the callee as it might be in exception handling, and
the callee uses the same structure. It's not supposed to but it can
happen.

In OO, the called "procedure" is just a separate object instance with
completely separate work areas. The caller need do nothing special,
and this means that things are less likely to break under maintenance.
Whereas your "solution" violates a central tenet of good software: the
caller should NOT have to do anything special based on the internal
needs of the callee.

Your solution partitions library functions that need scratch memory
from pure functions. Such apartheid always creates problems, for
example where the library developers discover that a library routine
needs (or does not need) state.

>
> It's like in OOP where you have a fresh instance of a class per
> thread.  The class has public/private data members that are unique to
> the instance.  OMG C++ HAS NO THREAD SAFETY!!!
>
> > The result? If the library function has a state it cannot be called by
> > a handler handling its failure. This is well known for malloc() but
> > unknown and unpredictable for any arbitrary "solution" recommended by
> > some Fat Bastard, recommended by some pathological liar, or found in a
> > Google search.
>
> malloc() is thread safe in GNU libc.  It can fail/succeed in multiple
> threads simultaneously.  What's your point?

That's nice to know. But not all mallocs are created equal, are they?
>
> > Seriously, it is well known to the regs here that I am Edward G.
> > Nilges.
>
> I didn't know that (nor do I assume to know that, being the victim of
> a joe-job myself I don't trust anything most people say w.r.t.
> identities unless they prove it through other means).  That being
> said, why not reserve your posts for more positive or constructive
> contributions instead?

Each thread I start begins with a positive contribution. Then the regs
fuck with me and I don't take it lying down.

>
> Tom

0
spinoza1111
12/30/2009 4:39:34 PM
On Dec 30, 9:23 pm, Tom St Denis <t...@iahu.ca> wrote:
> On Dec 30, 7:42 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > The process has consistently started with "regs" here questioning my
> > bonafides, since I write markedly better (more consistently excellent
> > grammar and spelling, wider vocabulary, more knowledge) than the regs.
> > Their defensive response has been to start attacks, and newbies and
> > fools read the attacks only. This causes them to join what constitutes
> > a cybernetic mob.
>
> I don't care about the "regs," in this thread I only care about what
> YOU are trying to pass off as knowledge.
>
> > Whether you like it or not, and whether or not you've been
> > oversocialized to not defend yourself, I defend myself, and this
> > creates the illusion that I'm making trouble.
>
> Defend yourself by being right then.  If you're trying to make
> arguments actually make sure they're sound and well reasoned instead
> of just shotgunning stupidity and seeing what sticks.
>
> > Dude, learn how to spell. The first step is to select a quality tool.
> > C is not a quality tool since it exists in incompatible forms and was
> > never properly designed.
>
> Which is ironic given how much of your day to day life is probably the
> result of C programs...

ROTFLMAO

Oh great C how have I offended thee?
You power the ATM which won't give me money!
I abase me before thee oh C thou God
You help make the Body Butter I put on my bod.

Give. Me. A. Break. The daily life of ordinary people is on balance
MESSED UP by computer systems, especially computer systems programmed
by thugs who refuse to learn modern languages designed for
reliability. Starting with the financial models that assembled the
"securitized mortgages" so carelessly as to cause a massive financial
crash.

I really, really hope that the next time I fly an Airbus its avionics
are in Ada...not C, and it's likely they are, since Europeans feel no
special loyalty towards C.
>
> So far it seems to be you saying C sucks, and nobody caring.  I don't
> care if you program in C or not.  I'm only replying here because you
> posted some nonsense comparison and are trying to pass it off as
> science.  You're a fraud.
>
> > But that's a good idea. I shall indeed do so ASAP. We need to see how
> > C sharp performs for the ignorant programmer who doesn't know hashset.
> > My prediction is poorly, for as I was the first to say in this thread,
> > the algorithm used in the C program slows down as the table fills.
>
> So why bother the comparison?  If you knew your algorithm in your C
> program was not comparable why bother?
>
> That'd be like comparing bubble sort in C# to qsort in C ...

My point was that the two programs took roughly the same TIME to do
the same THING, but the C program had a performance issue. It did so
because like the C Sharp program, it used the most direct method.

In C, if you don't have the time to reuse code, you do it yourself,
hopefully based on a model in Knuth (or Schildt). The simplest and
most obvious way to hash is to use the adjacent cells, not the linked
list.

In C Sharp the simplest way to hash is to select the ONE library that
is shipped with .Net.

That silly "1984" Apple ad, with the Father talking about the "one
right way" and the cute runner smashing the glass, has overinfluenced
minds here. It's silly (and Theodore Roszak made this point at the
same time as the Apple ad in a small essay called "From Satori to
Silicon Valley") to believe that high technology can be a matter of
"freedom"; it seemed to Roszak that it's about control, and
exacerbating social inequality. C programmers bought into the illusion
that they could be "free" and "code the way they wanted" and the
result was a forensic mess: one is constantly learning that this or
that function (such as sprintf()) can't be used by grave "libertarian"
lawn trolls whose abstract commitment to freedom is in fact belied by
the necessarily thuggish way in which they have to ensure that what
they THINK is right, is followed.

That is: if you kill the Father you still have Big Brother.
>
> > We don't know whether the .Net runtime is in C, since the particular
> > implementation I use is closed source. But there is no necessary
> > connection between C and the .Net runtime, or Windows and .Net.
> > The .Net runtime can be written in anything you like, such as
> > unmanaged C Sharp.
>
> Well the Windows kernel has a C interface for the syscalls.  So at
> some point something has to call that.  So chances are good that the
> C# runtime is based on top of the C runtime.

Your point being? My experience inside IBM and what I know of
Microsoft is that development groups who do this sort of work use
proprietary code generators which might generate C but which force
programmers to code a certain way.
>
> Tom

0
spinoza1111
12/30/2009 4:54:08 PM
On Wed, 30 Dec 2009 08:39:34 -0800 (PST), spinoza1111
<spinoza1111@yahoo.com> wrote:

>On Dec 30, 9:32 pm, Tom St Denis <t...@iahu.ca> wrote:
>> On Dec 30, 8:04 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
[snip]
>> Um, the stack of the threads is where you typically put cheap per-
>> thread data.  Otherwise you allocate it off the heap.  In the case of
>> the *_r() GNU libc functions they store any transient data in the
>> structure you pass it.  That's how they achieve thread safety.
>
>It's a clumsy and old-fashioned method, not universally used. It also
>has bugola potential.
>
>You see, the called routine is telling the caller to supply him with
>"scratch paper". This technique is an old dodge. It was a requirement
>in IBM 360 BAL (Basic Assembler Language) that the caller provide the
>callee with a place to save the 16 "general purpose registers" of the
>machine.
>
>The problem then and now is what happens if the caller is called
>recursively by the callee as it might be in exception handling, and
>the callee uses the same structure. It's not supposed to but it can
>happen.

He's not talking about the technique used in BAL etc.  The
transient data is contained within a structure that is passed by
the caller to the callee.  The space for the structure is on the
stack.  Recursion is permitted.


Richard Harter, cri@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Infinity is one of those things that keep philosophers busy when they 
could be more profitably spending their time weeding their garden.
0
cri
12/30/2009 5:43:44 PM
On Dec 30, 11:32 am, gaze...@shell.xmission.com (Kenny McCormack)
wrote:
> In article <cca8960d-b90d-4714-a100-3cc3682e3...@1g2000vbe.googlegroups.com>,
> Tom St Denis <t...@iahu.ca> described himself to a tee thusly:
> ...
>
>Resorting to insults won't win you any points.  All you've "proven" in
> >this thread is you fundamentally don't understand computer science,
> >that you don't know C that well, that you can't teach, that you're not
> >that familiar with software engineering, and that you're genuinely not
> >a nice person.
>
> It's so funny - all of those things are true to a tee about you.
>
> Projecting much?

Citation needed.

Tom
0
Tom
12/30/2009 6:24:53 PM
spinoza1111 wrote:

> So, "we" are not impressed with your stunt. You optimized code not
> meant to be optimized

Amazing. 


Rui Maciel
0
Rui
12/30/2009 6:34:45 PM
On Dec 30, 11:39 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> OO was developed because of insufficient code reuse. In my own book, I
> show how each distinct piece of the puzzle was developed
> as something that could be independently tested and reused. An Amazon
> reviewer (A Customer at http://www.amazon.com/Build-Your-NET-Language-Compiler/product-review...)
> writes:

Amazon reviews are bullshit most of the time.  I have published two
books of my own [which I won't cite].  And for one of them I had
people carrying over from usenet [when I was being joejobed]
protesting the book.  So what?  Review comments are meaningless.

Code reuse is NOT a sole feature of OO programming languages.

> You see, OO (which C does NOT have) allowed me to craft the compiler
> in a few months in 2003, since I used basic software (such as software
> to parse words and do arithmetic in nondecimal bases) which I'd
> written in the Nineties. I didn't have to worry about this code
> breaking something remote since it existed in static libraries.

If I were to write a quick parser [beyond trivial complexity] today
I'd use flex and yacc... tools written years ago to do such a job... I
don't have to re-invent the wheel.

> On this foundation I was able to build encapsulated stateful objects
> such as the scanner, the representation of a data type, and the
> representation of a variable at run time. Each one of these is
> deliverable as testable, stand alone.

Ya, and?

> The kicker is that I worked in Visual Basic (.Net). Apart from its
> lack of modern syntactic sugar, its clunky syntax is in fact block-
> structured, with the blocks being expressed in a very different way
> from C. The key was the structure imposed by the .Net object oriented
> environment.

I'm happy for you.  How does any of this have any bearing whatsoever
on you misrepresenting computer science throughout this thread?

> It is rather paradoxical but true, but if one's truly worried about
> software reliability (which few programmers are in fact) then a large
> set of solutions, any one of which can with a great deal of
> probability be buggy owing to the aliasing and "powerful" nature of C,
> constitutes a null set UNTIL you have done your homework and
> determined the subset of "solutions" that work. We see this all the
> time here, with constant wars over which C solution is "best" or even
> works.

And there aren't competing libraries/classes in Java and C#?  This is
more naive nonsense.

> For example, we're not supposed to expect a basic malloc() to work
> recursively, and we're not supposed to use sprintf(). The large number
> of tools is actually the problem.

How does malloc() work recursively?  Like what does that even mean?
Can you show some C code that recursively calls malloc()?

> It's a clumsy and old-fashioned method, not universally used. It also
> has bugola potential.

The alternative is you pass things globally.  And I don't get how

myclass::member(...) { this->integer = 5; }

Is any diff from say

myclass_member(struct self *this, ...) { this->integer = 5; }

In terms of data storage.  If you write

myclass mc;
mc.member();

Or

struct self mc;
myclass_member(&mc);

They're both essentially on the stack, and you're passing data as
such.

> In OO, the called "procedure" is just a separate object instance with
> completely separate work areas. The caller need do nothing special,
> and this means that things are less likely to break under maintenance.
> Whereas your "solution" violates a central tenet of good software: the
> caller should NOT have to do anything special based on the internal
> needs of the callee.

Um, if your function needs transient data, the caller has to maintain
that or you can NEVER be thread safe.  Suppose I have a class with a
function doWork(void);  How do I tell doWork() what data to work on?
It'll be a member of the class right?  And who maintains the class?
The caller.

You're writing nonsense again...

Generally speaking if a function needs scratch memory that is not used
between calls the function itself will maintain it, not the caller.
The structure you pass to the GNU libc *_r() functions is not scratch
memory though.  It's your damn data management structures...

> > malloc() is thread safe in GNU libc.  It can fail/succeed in multiple
> > threads simultaneously.  What's your point?
>
> That's nice to know. But not all mallocs are created equal, are they?

malloc() has a very prescribed behaviour.  For platforms which support
threading [which is not part of C99] they MUST support thread-safe C99
libc functions.

> Each thread I start begins with a positive contribution. Then the regs
> fuck with me and I don't take it lying down.

This thread [and all posts therein] contain nothing but your asinine
lies, fabrications, and other manufactured "facts."  I'm not fucking
with you, I'm just pointing out that you've been repeatedly dead
wrong.  Try posting things that are true and people will stop
correcting you.

Tom
0
Tom
12/30/2009 6:38:06 PM
On Dec 30, 11:54 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> My point was that the two programs took roughly the same TIME to do
> the same THING, but the C program had a performance issue. It did so
> because like the C Sharp program, it used the most direct method.

But not even.  You could have used a hash library, one which even
comes with GNU libc to do the same work.  You chose not to.  That's
nobody's fault but yours.  And even then you used the least efficient
configuration possible.  You even admitted as much, too.  So your "test"
proved nothing except you don't know how to run a test.

> In C, if you don't have the time to reuse code, you do it yourself,
> hopefully based on a model in Knuth (or Schildt). The simplest and
> most obvious way to hash is to use the adjacent cells, not the linked
> list.

Yes, but if I knew I had 1000 entries to hash I wouldn't have used a
hash with only 1000 buckets.  I would have used one with say 2000 or
more (so you're not 100% full).  And I sincerely, with every fibre of
my being doubt that Knuth endorses a "wing it" approach to software
development.  And even if he did, he is NOT the authority on software
engineering.  He's a computer scientist for sure, far beyond my own
standards, but he isn't [as far as I know] much of a software
developer in terms of writing maintainable code, documentation, etc,
etc.

And regardless, this has nothing to do with C itself.  You could
ascribe that development model to ANY language.  So keep your
criticism specific to C itself.

> In C Sharp the simplest way to hash is to select the ONE library that
> is shipped with .Net.

What if the supplied hash library isn't suitable for what you need to
hash?  What if a tree is more suitable?  What if it's a specific form
of RB-tree or ... I sincerely doubt C# comes standard with every form
of data management technique known to man.

Can I conclude from that then that C# sucks because it doesn't have
splay tree functionality as part of the standard library?

Tom
0
Tom
12/30/2009 6:45:31 PM
On 2009-12-30, Tom St Denis <tom@iahu.ca> wrote:
> And I sincerely, with every fibre of
> my being doubt that Knuth endorses a "wing it" approach to software
> development.  And even if he did, he is NOT the authority on software
> engineering.  He's a computer scientist for sure, far beyond my own
> standards, but he isn't [as far as I know] much of a software
> developer in terms of writing maintainable code, documentation, etc,
> etc.

Actually, he sort of is -- in the sense that he developed an entire new
model of programming in order to write software that he could maintain
with very, very, few bugs.  If you get a chance, pick up and read his
book on literate programming; I'm not 100% sold on it, but you can
hardly argue with the maintainability and reliability of TeX.

> Can I conclude from that then that C# sucks because it doesn't have
> splay tree functionality as part of the standard library?

No, you can only conclude that Spinny is not really arguing from
anything but emotion.

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/30/2009 7:46:07 PM
Tom St Denis <tom@iahu.ca> writes:
> On Dec 30, 11:39 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
[more of the same]
[...]
> This thread [and all posts therein] contain nothing but your asinine
> lies, fabrications, and other manufactured "facts."

Tom, "spinoza1111" has been told this N times, for some very large
value of N.  What makes you think that his response to the N+1th
attempt will be any different?

You are wasting your time (and ours) by attempting to reason with him.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/30/2009 7:48:51 PM
On 2009-12-30, Tom St Denis <tom@iahu.ca> wrote:
> OSS is hardly slave labour.  Many people in that scene get paid for
> their work.  It's just a community effort.  Of course if this is how
> you play ostrich then so be it.

In the interests of full disclosure:  $DAYJOB is the Linux side of Wind
River Systems.  I get paid for writing open source software, and we do
indeed distribute most of what I write under open source licenses.  Not
because of linking with other open source software, but because management
believes that it's good citizenship to contribute back to the community
as well as using free code.

We contribute back to projects, from bug reports to patches.  One of
my coworkers has put a ton of work into implementing debugger
support in the Linux kernel, and more importantly (to the community)
into making that support clean enough for upstream inclusion.

>> Seriously, it is well known to the regs here that I am Edward G.
>> Nilges.

> I didn't know that (nor do I assume to know that, being the victim of
> a joe-job myself I don't trust anything most people say w.r.t.
> identities unless they prove it through other means).  That being
> said, why not reserve your posts for more positive or constructive
> contributions instead?

Because he can't make any, I think?

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/30/2009 7:51:28 PM
gazelle@shell.xmission.com (Kenny McCormack) writes:

> In article <cca8960d-b90d-4714-a100-3cc3682e3cb0@1g2000vbe.googlegroups.com>,
> Tom St Denis  <tom@iahu.ca> described himself to a tee thusly:
> ...
>>Resorting to insults won't win you any points.  All you've "proven" in
>>this thread is you fundamentally don't understand computer science,
>>that you don't know C that well, that you can't teach, that you're not
>>that familiar with software engineering, and that you're genuinely not
>>a nice person.
>
> It's so funny - all of those things are true to a tee about you.
>
> Projecting much?
>

Amazing, isn't it. Tom comes rolling in here pronouncing his
ill-thought-out views, gets his arse handed to him on a plate pretty
much, starts huffing and puffing about the bleeding obvious (he has to
be a Dilbert character - something like "Obvious Tom" - you know, with
meeting highlights such as "Let's not lose our focus on quality" or
"This solution needs to be Engineered"). He's another one of those
MS-hating blowhards who's been allowed to blow his own self-importance
up a little high.

I'm still laughing at him when he was on his soap box about good
programmers and designing SW as opposed to hacking shit together. Duh! I
never thought of that ...

As for his comments about C, well, It sure gave me a chuckle.

-- 
"Avoid hyperbole at all costs, its the most destructive argument on
the planet" - Mark McIntyre in comp.lang.c
0
Richard
12/30/2009 9:04:20 PM
On Wed, 30 Dec 2009 11:48:51 -0800, Keith Thompson
<kst-u@mib.org> wrote:

>Tom St Denis <tom@iahu.ca> writes:
>> On Dec 30, 11:39 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>[more of the same]
>[...]
>> This thread [and all posts therein] contain nothing but your asinine
>> lies, fabrications, and other manufactured "facts."
>
>Tom, "spinoza1111" has been told this N times, for some very large
>value of N.  What makes you think that his response to the N+1th
>attempt will be any different?
>
>You are wasting your time (and ours) by attempting to reason with him.

So why are you reading this thread?




Richard Harter, cri@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Infinity is one of those things that keep philosophers busy when they 
could be more profitably spending their time weeding their garden.
0
cri
12/30/2009 11:00:38 PM
cri@tiac.net (Richard Harter) writes:
> On Wed, 30 Dec 2009 11:48:51 -0800, Keith Thompson
> <kst-u@mib.org> wrote:
>>Tom St Denis <tom@iahu.ca> writes:
>>> On Dec 30, 11:39 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>>[more of the same]
>>[...]
>>> This thread [and all posts therein] contain nothing but your asinine
>>> lies, fabrications, and other manufactured "facts."
>>
>>Tom, "spinoza1111" has been told this N times, for some very large
>>value of N.  What makes you think that his response to the N+1th
>>attempt will be any different?
>>
>>You are wasting your time (and ours) by attempting to reason with him.
>
> So why are you reading this thread?

Hmm.  I don't have a good answer to that.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
Keith
12/30/2009 11:20:44 PM
In article <ln3a2s3v77.fsf@nuthaus.mib.org>,
Keith Thompson  <kst-u@mib.org> wrote:
>cri@tiac.net (Richard Harter) writes:
>> On Wed, 30 Dec 2009 11:48:51 -0800, Keith Thompson
>> <kst-u@mib.org> wrote:
>>>Tom St Denis <tom@iahu.ca> writes:
>>>> On Dec 30, 11:39 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>>>[more of the same]
>>>[...]
>>>> This thread [and all posts therein] contain nothing but your asinine
>>>> lies, fabrications, and other manufactured "facts."
>>>
>>>Tom, "spinoza1111" has been told this N times, for some very large
>>>value of N.  What makes you think that his response to the N+1th
>>>attempt will be any different?
>>>
>>>You are wasting your time (and ours) by attempting to reason with him.
>>
>> So why are you reading this thread?
>
>Hmm.  I don't have a good answer to that.

Looks like another leaky killfile.

gazelle
12/30/2009 11:40:20 PM
On 2009-12-30, Keith Thompson <kst-u@mib.org> wrote:
> cri@tiac.net (Richard Harter) writes:
>> So why are you reading this thread?

> Hmm.  I don't have a good answer to that.

I have something of one:

The fact is, you can learn a lot more sometimes from understanding why
something is wrong than you would from understanding why it is right.

I've seen dozens of implementations, recursive and iterative alike, of
the factorial function.  Spinny's was the first O(N^2) I've ever seen,
and it actually highlights something that may be non-obvious to the
reader about interactions between algorithms.

Watching people who do understand something explain why someone who
doesn't is wrong can be very enlightening.  We recently had someone in
comp.lang.ruby who was advocating that the language be changed to round
outputs "near zero" to zero exactly so that sin(180) would produce the
correct exact value of zero rather than some other value.  In fact, this
turns out to be an extremely bad idea -- but the discussion of *why* it's
bad taught me more about floating point.

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
Seebs
12/31/2009 12:31:08 AM
On Dec 31, 1:43 am, c...@tiac.net (Richard Harter) wrote:
> On Wed, 30 Dec 2009 08:39:34 -0800 (PST), spinoza1111
> <spinoza1...@yahoo.com> wrote:
> >On Dec 30, 9:32 pm, Tom St Denis <t...@iahu.ca> wrote:
> >> On Dec 30, 8:04 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> [snip]
> >> Um, the stack of the threads is where you typically put cheap per-
> >> thread data.  Otherwise you allocate it off the heap.  In the case of
> >> the *_r() GNU libc functions they store any transient data in the
> >> structure you pass it.  That's how they achieve thread safety.
>
> >It's a clumsy and old-fashioned method, not universally used. It also
> >has bugola potential.
>
> >You see, the called routine is telling the caller to supply him with
> >"scratch paper". This technique is an old dodge. It was a requirement
> >in IBM 360 BAL (Basic Assembler Language) that the caller provide the
> >callee with a place to save the 16 "general purpose registers" of the
> >machine.
>
> >The problem then and now is what happens if the caller is called
> >recursively by the callee as it might be in exception handling, and
> >the callee uses the same structure. It's not supposed to but it can
> >happen.
>
> He's not talking about the technique used in BAL etc.  The
> transient data is contained within a structure that is passed by
> the caller to the callee.  The space for the structure is on the
> stack.  Recursion is permitted.

If control "comes back" to the caller who has stacked the struct and
the caller recalls the routine in question with the same struct, this
will break.

But the main point is that the callee isn't a black box if I have to
supply it with storage.
>
> Richard Harter, c...@tiac.net
> http://home.tiac.net/~cri, http://www.varinoma.com
> Infinity is one of those things that keep philosophers busy when they
> could be more profitably spending their time weeding their garden.

spinoza1111
12/31/2009 2:33:04 AM
On Dec 31, 3:46 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2009-12-30, Tom St Denis <t...@iahu.ca> wrote:
>
> > And I sincerely, with every fibre of
> > my being doubt that Knuth endorses a "wing it" approach to software
> > development.  And even if he did, he is NOT the authority on software
> > engineering.  He's a computer scientist for sure, far beyond my own
> > standards, but he isn't [as far as I know] much of a software
> > developer in terms of writing maintainable code, documentation, etc,
> > etc.
>
> Actually, he sort of is -- in the sense that he developed an entire new
> model of programming in order to write software that he could maintain
> with very, very, few bugs.  If you get a chance, pick up and read his
> book on literate programming; I'm not 100% sold on it, but you can
> hardly argue with the maintainability and reliability of TeX.

Here, Tom engages in a strange offset or shift I've seen in the real
world. This is to defend bad practice by denying that scientific
leaders have any credibility in a slightly offset, slightly redefined
"real world" in which the "best" practitioners are not "scientists":
yet strangely deserve the respect ordinarily given scientists.

The unacknowledged fact: Knuth is seen by the ordinary techie as being
the "champ", or "contendah" he could have been, with long-term
economic security guaranteed not by the impoverishing "free market"
but by tenure. This causes the ordinary techie to slightly redefine
software quality to make himself feel better about a world in which
it's just true that he's been made a perpetual candidate for his own
post and a perpetual infant.

The literate code of TeX is then re-read as somewhat pretentious and a
waste of resources.

It is true that Knuth's literate programming reduces bugs. He observed
that they still occur but diminish exponentially when the programmer
is free to develop as needed his own approaches, including in Knuth's
case the writing of essays about code while the code is being written,
something that Dijkstra did as well.

This is not done in industry. First of all, we need only to look at
the extraordinarily low level of literacy at postings here to realize
that most "senior" people have lost the ability to be coherent, so
much so that provably accurate grammar is to them a marker of mental
disorder. Secondly, essays are a "waste of time" in the infantile
corporate playbook.

>
> > Can I conclude from that then that C# sucks because it doesn't have
> > splay tree functionality as part of the standard library?
>
> No, you can only conclude that Spinny is not really arguing from
> anything but emotion.

I am almost as disgusted with .Net as I am with C, since computer
systems in general are designed to make the rich, richer. I don't
love .Net, and emotion has nothing to do with my preferences. The only
time my emotions are engaged is when you maliciously attack people.
>
> -s
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

spinoza1111
12/31/2009 2:45:41 AM
On Dec 31, 7:00 am, c...@tiac.net (Richard Harter) wrote:
> On Wed, 30 Dec 2009 11:48:51 -0800, Keith Thompson
> <ks...@mib.org> wrote:
> >Tom St Denis <t...@iahu.ca> writes:
> >> On Dec 30, 11:39 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> >[more of the same]
> >[...]
> >> This thread [and all posts therein] contain nothing but your asinine
> >> lies, fabrications, and other manufactured "facts."
>
> >Tom, "spinoza1111" has been told this N times, for some very large
> >value of N.  What makes you think that his response to the N+1th
> >attempt will be any different?
>
> >You are wasting your time (and ours) by attempting to reason with him.
>
> So why are you reading this thread?

Because like "respectable" white Americans, photographed staring at
the strangled and burned black body at a Lynching, they are attracted
to the firelit circle, the screams, the blood and glass.
>
> Richard Harter, c...@tiac.net
> http://home.tiac.net/~cri, http://www.varinoma.com
> Infinity is one of those things that keep philosophers busy when they
> could be more profitably spending their time weeding their garden.

spinoza1111
12/31/2009 2:48:00 AM
On Dec 31, 8:31 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2009-12-30, Keith Thompson <ks...@mib.org> wrote:
>
> > c...@tiac.net (Richard Harter) writes:
> >> So why are you reading this thread?
> > Hmm.  I don't have a good answer to that.
>
> I have something of one:
>
> The fact is, you can learn a lot more sometimes from understanding why
> something is wrong than you would from understanding why it is right.
>
> I've seen dozens of implementations, recursive and iterative alike, of
> the factorial function.  Spinny's was the first O(N^2) I've ever seen,
Again, if you'd studied computer science in the university
environment, in which practitioners are not being infantilized by
being made perpetual candidates, you'd realize that order(n^2) is not
an evaluative or condemnatory term. It was pointed out to you that the
purpose of the code was to execute instructions repeatedly, and
discover whether C Sharp had an interpretive overhead. It was pointed
out to you that had it such an overhead, the time taken by the C Sharp
would have itself order(n^2) with respect to the C time.

It was pointed out to you that the reason for starting order n^2 was
to factor out random effects from a too-fast program, to let things
run.

In university classes in computer science that you did not attend, a
qualified professor would have shown you that not only is the language
of computational complexity not evaluative, but also that we have to
write such algorithms to learn more.

You are making the same sort of ignorance-based attack you made years
ago on Schildt when you said "the 'heap' is a DOS term".

> and it actually highlights something that may be non-obvious to the
> reader about interactions between algorithms.
>
> Watching people who do understand something explain why someone who
> doesn't is wrong can be very enlightening.  We recently had someone in
> comp.lang.ruby who was advocating that the language be changed to round
> outputs "near zero" to zero exactly so that sin(180) would produce the
> correct exact value of zero rather than some other value.  In fact, this
> turns out to be an extremely bad idea -- but the discussion of *why* it's
> bad taught me more about floating point.
>
> -s
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

spinoza1111
12/31/2009 2:55:59 AM
On Dec 31, 5:04 am, Richard <rgrd...@gmail.com> wrote:
> gaze...@shell.xmission.com (Kenny McCormack) writes:
> > In article <cca8960d-b90d-4714-a100-3cc3682e3...@1g2000vbe.googlegroups.com>,
> > Tom St Denis  <t...@iahu.ca> described himself to a tee thusly:
> > ...
> >>Resorting to insults won't win you any points.  All you've "proven" in
> >>this thread is you fundamentally don't understand computer science,
> >>that you don't know C that well, that you can't teach, that you're not
> >>that familiar with software engineering, and that you're genuinely not
> >>a nice person.
>
> > Its so funny - all of those things are true to a tee about you.
>
> > Projecting much?
>
> Amazing isn't it. Tom comes rolling in here pronouncing his ill thought
> out views, gets his arse handed to him on a plate pretty much, starts
> huffing and puffing about the bleeding obvious (he has to be a Dilbert
> character - something like "Obvious Tom" - you know with meeting
> highlights such as "Let's not lose our focus on quality" or "This
> solutions needs to be Engineered". He's another one of those MS hating
> blowhards whose been allowed to blow his own self importance up a little
> high.

Indeed. Tom believes "with every fibre of his being" that we must
write Quality software. But I guess I have fibres which demur. Bad
fibres.


>
> I'm still laughing at him when he was on his soap box about good
> programmers and designing SW as opposed to hacking shit together. Duh! I
> never thought of that ...
>
> As for his comments about C, well, It sure gave me a chuckle.
>
> --
> "Avoid hyperbole at all costs, its the most destructive argument on
> the planet" - Mark McIntyre in comp.lang.c

spinoza1111
12/31/2009 3:05:26 AM
On Dec 31, 2:45 am, Tom St Denis <t...@iahu.ca> wrote:
> On Dec 30, 11:54 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>
> > My point was that the two programs took roughly the same TIME to do
> > the same THING, but the C program had a performance issue. It did so
> > because like the C Sharp program, it used the most direct method.
>
> But not even.  You could have used a hash library, one which even
> comes with GNU libc to do the same work.  You chose not to.  That's
> nobody's fault but yours.  And even then you used the least efficient
> configuration possible.  You even admitted as much to.  So your "test"
> proved nothing except you don't know how to run a test.
>
> > In C, if you don't have the time to reuse code, you do it yourself,
> > hopefully based on a model in Knuth (or Schildt). The simplest and
> > most obvious way to hash is to use the adjacent cells, not the linked
> > list.
>
> Yes, but if I knew I had 1000 entries to hash I wouldn't have used a
> hash with only 1000 buckets.  I would have used one with say 2000 or
> more (so you're not 100% full).

Since I saw your mistake
I can this assertion make
It is a mistake I'd never make.

When I make a mistake
I go and eat a steak
And thought I take
How to hide my own mistake.

> And I sincerely, with every fibre of
> my being doubt that Knuth endorses a "wing it" approach to software
> development.  And even if he did, he is NOT the authority on software
> engineering.  He's a computer scientist for sure, far beyond my own
> standards, but he isn't [as far as I know] much of a software
> developer in terms of writing maintainable code, documentation, etc,
> etc.
>
> And regardless, this has nothing to do with C itself.  You could
> ascribe that development model to ANY language.  So keep your
> criticism specific to C itself.
>
> > In C Sharp the simplest way to hash is to select the ONE library that
> > is shipped with .Net.
>
> What if the supplied hash library isn't suitable for what you need to
> hash?  What if a tree is more suitable?  What if it's a specific form
> of RB-tree or ... I sincerely doubt C# comes standard with every form
> of data management technique known to man.

What part of "inheritance" don't you understand? Oh never mind...
>
> Can I conclude from that then that C# sucks because it doesn't have
> splay tree functionality as part of the standard library?
>
> Tom

spinoza1111
12/31/2009 3:10:52 AM
On Wed, 30 Dec 2009 18:55:59 -0800, spinoza1111 wrote:

> On Dec 31, 8:31 am, Seebs <usenet-nos...@seebs.net> wrote:
>> On 2009-12-30, Keith Thompson <ks...@mib.org> wrote:
>>

>>
>> I have something of one:
>>
>> The fact is, you can learn a lot more sometimes from understanding why
>> something is wrong than you would from understanding why it is right.
>>
>> I've seen dozens of implementations, recursive and iterative alike, of
>> the factorial function.  Spinny's was the first O(N^2) I've ever seen,
> 
> Again, if you'd studied computer science in the university environment,
> in which practitioners are not being infantilized by being made
> perpetual candidates, you'd realize that order(n^2) is not a evaluative
> or condemnatory term. It was pointed out to you that the purpose of the
> code was to execute instructions repeatedly, and discover whether C
> Sharp had an interpretive overhead. It was pointed out to you that had
> it such an overhead, the time taken by the C Sharp would have itself
> order(n^2) with respect to the C time.
> 
> It was pointed out to you that the reason for starting order n^2 was to
> factor out random effects from a too-fast program, to let things run.
> 
> In university classes in computer science that you did not attend, a
> qualified professor would have shown you that not only is the language
> of computational complexity not evaluative, but also that we have to
> write such algorithms to learn more.

The reason for the sub-optimal hashtable now is clear to me.

BTW, I fixed your hashtable for you. It is even shorter, and runs faster.
There still is a lot of place for improvement. Feel free to improve it.

*****/

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 1000
#define SESSIONS 100000

/* One node per inserted value; head is the bucket head index for the
   slot with the same subscript, and next chains colliding entries. */
struct meuk {
        int head;
        int next;
        int val;
        } nilgesmeuk[ARRAY_SIZE];

#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])

int main(void)
{
    int val , *p;
    time_t start , end;
    double dif;
    unsigned ii,slot,sessions,hops;
    time (&start);
    hops = 0;
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        /* mark every bucket empty */
        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++) nilgesmeuk[ii].head = -1;

        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
        {
            nilgesmeuk[ii].val = val = rand();
            nilgesmeuk[ii].next = -1;
            slot =  val % COUNTOF(nilgesmeuk);
            /* walk the chain to its end, counting collisions as hops */
            for( p =  &nilgesmeuk[slot].head ; *p >= 0 ; p = &nilgesmeuk[*p].next ) {hops++;}
            *p = ii;
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %u numbers with %u hops(chainlength=1+%6.4f) , %u times\n"
           , dif, ii, hops, (float) hops/(ii*sessions), sessions);
    return 0;
}

/*****

AvK

Moi
12/31/2009 9:26:42 AM
On Dec 31, 5:26 pm, Moi <r...@invalid.address.org> wrote:
> On Wed, 30 Dec 2009 18:55:59 -0800, spinoza1111 wrote:
> > On Dec 31, 8:31 am, Seebs <usenet-nos...@seebs.net> wrote:
> >> On 2009-12-30, Keith Thompson <ks...@mib.org> wrote:
>
> >> I have something of one:
>
> >> The fact is, you can learn a lot more sometimes from understanding why
> >> something is wrong than you would from understanding why it is right.
>
> >> I've seen dozens of implementations, recursive and iterative alike, of
> >> the factorial function.  Spinny's was the first O(N^2) I've ever seen,
>
> > Again, if you'd studied computer science in the university environment,
> > in which practitioners are not being infantilized by being made
> > perpetual candidates, you'd realize that order(n^2) is not an evaluative
> > or condemnatory term. It was pointed out to you that the purpose of the
> > code was to execute instructions repeatedly, and discover whether C
> > Sharp had an interpretive overhead. It was pointed out to you that had
> > it such an overhead, the time taken by the C Sharp would have itself
> > order(n^2) with respect to the C time.
>
> > It was pointed out to you that the reason for starting order n^2 was to
> > factor out random effects from a too-fast program, to let things run.
>
> > In university classes in computer science that you did not attend, a
> > qualified professor would have shown you that not only is the language
> > of computational complexity not evaluative, but also that we have to
> > write such algorithms to learn more.
>
> The reason for the sub-optimal hashtable now is clear to me.
>
> BTW, I fixed your hashtable for you. It is even shorter, and runs faster.
> There still is a lot of place for improvement. Feel free to improve it.
>
> [hashtable code snipped; quoted in full upthread]
>
> AvK

Get back to you when I have more time to study this, but...

I don't know why you calculate what seems to be an invariant in a for
loop

I don't know why it's apparently necessary to calculate the size of
the array
spinoza1111
12/31/2009 11:23:25 AM
spinoza1111 wrote:
> On Dec 31, 5:26 pm, Moi <r...@invalid.address.org> wrote:
>> On Wed, 30 Dec 2009 18:55:59 -0800, spinoza1111 wrote:
>>> On Dec 31, 8:31 am, Seebs <usenet-nos...@seebs.net> wrote:
>>>> On 2009-12-30, Keith Thompson <ks...@mib.org> wrote:
>>>> I have something of one:
>>>> The fact is, you can learn a lot more sometimes from understanding why
>>>> something is wrong than you would from understanding why it is right.
>>>> I've seen dozens of implementations, recursive and iterative alike, of
>>>> the factorial function.  Spinny's was the first O(N^2) I've ever seen,
>>> Again, if you'd studied computer science in the university environment,
>>> in which practitioners are not being infantilized by being made
>>> perpetual candidates, you'd realize that order(n^2) is not a evaluative
>>> or condemnatory term. It was pointed out to you that the purpose of the
>>> code was to execute instructions repeatedly, and discover whether C
>>> Sharp had an interpretive overhead. It was pointed out to you that had
>>> it such an overhead, the time taken by the C Sharp would have itself
>>> order(n^2) with respect to the C time.
>>> It was pointed out to you that the reason for starting order n^2 was to
>>> factor out random effects from a too-fast program, to let things run.
>>> In university classes in computer science that you did not attend, a
>>> qualified professor would have shown you that not only is the language
>>> of computational complexity not evaluative, but also that we have to
>>> write such algorithms to learn more.
>> The reason for the sub-optimal hashtable now is clear to me.
>>
>> BTW, I fixed your hashtable for you. It is even shorter, and runs faster.
>> There still is a lot of place for improvement. Feel free to improve it.
>>
>> *****/
>>
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <time.h>
>> #define ARRAY_SIZE 1000
>> #define SESSIONS 100000
>>
>> struct meuk {
>>         int head;
>>         int next;
>>         int val;
>>         } nilgesmeuk[ARRAY_SIZE];
>>
>> #define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
>>
>> int main(void)
>> {
>>     int val , *p;
>>     time_t start , end;
>>     double dif;
>>     unsigned ii,slot,sessions,hops;
>>     time (&start);
>>     hops = 0;
>>     for (sessions = 0; sessions < SESSIONS; sessions++)
>>     {
>>         for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++) nilgesmeuk[ii].head = -1;
>>
>>         for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
>>         {
>>             nilgesmeuk[ii].val = val = rand();
>>             nilgesmeuk[ii].next = -1;
>>             slot =  val % COUNTOF(nilgesmeuk);
>>             for( p =  &nilgesmeuk[slot].head ; *p >= 0 ; p = &nilgesmeuk[*p].next ) {hops++;}
>>             *p = ii;
>>         }
>>     }
>>     time (&end);
>>     dif = difftime (end,start);
>>     printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
>>            , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
>>     return 0;
>>
>> }
>>
>> /*****
>>
>> AvK
> 
> Get back to you when I have more time to study this, but...
> 
> I don't know why you calculate what seems to be an invariant in a for
> loop

If you mean the COUNTOF thing, it's a compilation-time calculation. 
There is no run-time cost, since the compiler can work out the division 
at compilation time.

> I don't know why it's apparently necessary to calculate the size of
> the array

So that he can change the size of the array during editing, and have 
that change automatically reflected in the loop control code without his 
having to remember to update it.

He could just as easily have used ARRAYSIZE, but doing so would 
introduce into the code an assumption that his array has (at least) 
ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust with 
regard to later editing of the definition of the array.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
12/31/2009 11:31:15 AM
On Thu, 31 Dec 2009 03:23:25 -0800, spinoza1111 wrote:

> On Dec 31, 5:26 pm, Moi <r...@invalid.address.org> wrote:
>> On Wed, 30 Dec 2009 18:55:59 -0800, spinoza1111 wrote:
>> > On Dec 31, 8:31 am, Seebs <usenet-nos...@seebs.net> wrote:
>> >> On 2009-12-30, Keith Thompson <ks...@mib.org> wrote:
>>
>> >> I have something of one:
>>
>> >> The fact is, you can learn a lot more sometimes from understanding
>> >> why something is wrong than you would from understanding why it is
>> >> right.
>>
>> >> I've seen dozens of implementations, recursive and iterative alike,
>> >> of the factorial function.  Spinny's was the first O(N^2) I've ever
>> >> seen,
>>
>> > Again, if you'd studied computer science in the university
>> > environment, in which practitioners are not being infantilized by
>> > being made perpetual candidates, you'd realize that order(n^2) is not
>> > a evaluative or condemnatory term. It was pointed out to you that the
>> > purpose of the code was to execute instructions repeatedly, and
>> > discover whether C Sharp had an interpretive overhead. It was pointed
>> > out to you that had it such an overhead, the time taken by the C
>> > Sharp would have itself order(n^2) with respect to the C time.
>>
>> > It was pointed out to you that the reason for starting order n^2 was
>> > to factor out random effects from a too-fast program, to let things
>> > run.
>>
>> > In university classes in computer science that you did not attend, a
>> > qualified professor would have shown you that not only is the
>> > language of computational complexity not evaluative, but also that we
>> > have to write such algorithms to learn more.
>>
>> The reason for the sub-optimal hashtable now is clear to me.
>>
>> BTW, I fixed your hashtable for you. It is even shorter, and runs
>> faster. There still is a lot of place for improvement. Feel free to
>> improve it.
>>
>> *****/
>>

> Get back to you when I have more time to study this, but...
> 
> I don't know why you calculate what seems to be an invariant in a for
> loop

IMHO there is no invariant.
The 'val' variable is redundant (the struct member could be used instead), but that is harmless.
(the 'slot' variable can also be eliminated; at the expense of clarity)

> 
> I don't know why it's apparently necessary to calculate the size of the
> array

Nothing is calculated. Maybe you are confused by the sizeof operator ?

HTH,
AvK
Moi
12/31/2009 11:45:14 AM
On Dec 31, 7:31 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:
> > On Dec 31, 5:26 pm, Moi <r...@invalid.address.org> wrote:
> >> On Wed, 30 Dec 2009 18:55:59 -0800, spinoza1111 wrote:
> >>> [snip]
> >> The reason for the sub-optimal hashtable now is clear to me.
>
> >> BTW, I fixed your hashtable for you. It is even shorter, and runs faster.
> >> There still is a lot of place for improvement. Feel free to improve it.
>
> >> [hashtable code snipped; quoted in full upthread]
>
> >> AvK
>
> > Get back to you when I have more time to study this, but...
>
> > I don't know why you calculate what seems to be an invariant in a for
> > loop
>
> If you mean the COUNTOF thing, it's a compilation-time calculation.
> There is no run-time cost, since the compiler can work out the division
> at compilation time.

I am relieved to realize that thx to u. Of course, a modern language
would not so conflate and confuse operations meant to be executed at
compile time with runtime operations.

However, you assume that all C compilers will do the constant division
of the two sizeofs. Even if "standard" compilers do, the functionality
of doing what we can at compile time can't always be trusted in actual
compilers, I believe. In fact, doing constant operations inside
preprocessor macros neatly makes nonsense of any simple explanation of
preprocessing as a straightforward word-processing transformation of
text.

Dijkstra's "making a mess of it" includes hacks that invalidate
natural language claims about what software does.
>
> > I don't know why it's apparently necessary to calculate the size of
> > the array
>
> So that he can change the size of the array during editing, and have
> that change automatically reflected in the loop control code without his
> having to remember to update it.

That is absurd, dear boy. You just alter ARRAY_SIZE in its define. OK,
you don't evaluate an invariant in a for loop IF the preprocessor
divides sizeofs. If this was made a Standard, that, in my view, is a
gross mistake.
>
> He could just as easily have used ARRAYSIZE, but doing so would
> introduce into the code an assumption that his array has (at least)
> ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust with
> regard to later editing of the definition of the array.

I think, with all due respect, that it's a mess. It assumes that the
preprocessor will execute constant operations in violation of the
understanding of intelligent people that the preprocessor doesn't do
anything but process text, and that constant time evaluation is the
job of the optimizer.


>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

0
spinoza1111
12/31/2009 12:35:01 PM
On Dec 31, 7:45 pm, Moi <r...@invalid.address.org> wrote:
> On Thu, 31 Dec 2009 03:23:25 -0800,spinoza1111wrote:
> > On Dec 31, 5:26 pm, Moi <r...@invalid.address.org> wrote:
> >> On Wed, 30 Dec 2009 18:55:59 -0800,spinoza1111wrote:
> >> > On Dec 31, 8:31 am, Seebs <usenet-nos...@seebs.net> wrote:
> >> >> On 2009-12-30, Keith Thompson <ks...@mib.org> wrote:
>
> >> >> I have something of one:
>
> >> >> The fact is, you can learn a lot more sometimes from understanding
> >> >> why something is wrong than you would from understanding why it is
> >> >> right.
>
> >> >> I've seen dozens of implementations, recursive and iterative alike,
> >> >> of the factorial function.  Spinny's was the first O(N^2) I've ever
> >> >> seen,
>
> >> > Again, if you'd studied computer science in the university
> >> > environment, in which practitioners are not being infantilized by
> >> > being made perpetual candidates, you'd realize that order(n^2) is not
> >> > a evaluative or condemnatory term. It was pointed out to you that the
> >> > purpose of the code was to execute instructions repeatedly, and
> >> > discover whether C Sharp had an interpretive overhead. It was pointed
> >> > out to you that had it such an overhead, the time taken by the C
> >> > Sharp would have itself order(n^2) with respect to the C time.
>
> >> > It was pointed out to you that the reason for starting order n^2 was
> >> > to factor out random effects from a too-fast program, to let things
> >> > run.
>
> >> > In university classes in computer science that you did not attend, a
> >> > qualified professor would have shown you that not only is the
> >> > language of computational complexity not evaluative, but also that we
> >> > have to write such algorithms to learn more.
>
> >> The reason for the sub-optimal hashtable now is clear to me.
>
> >> BTW, I fixed your hashtable for you. It is even shorter, and runs
> >> faster. There still is a lot of place for improvement. Feel free to
> >> improve it.
>
> >> *****/
>
> > Get back to you when I have more time to study this, but...
>
> > I don't know why you calculate what seems to be an invariant in a for
> > loop
>
> IMHO there is no invariant.
> The 'var' variable is redundant and not used, but that is harmless.
> (the 'slot' variable can also be eliminated; at the expense of clarity)
>
>
>
> > I don't know why it's apparently necessary to calculate the size of the
> > array
>
> Nothing is calculated. Maybe you are confused by the sizeof operator ?

At this point, dear boy, I think you're confused. I know that the
sizeof operator is a compile-time constant. And my C compiler won't
let me #define FOO as (1/0), which means that constant evaluation is
done inside the preprocessor.

Well, I'll be dipped in shit.

As my grandpa said when he found a headless statue of Venus in a
thrift shop: who busted this?

Who was the moron who added that unnecessary feature to the standard
when I wasn't coding in C? Clive? Was that your idea?

It's dumb, since like sequence points, it invalidates English
discourse about C.

And it was your responsibility, Moi, not to use such a stupid bug-
feature.
>
> HTH,
> AvK

0
spinoza1111
12/31/2009 12:41:31 PM

spinoza1111 wrote:

> However, you assume that all C compilers will do the constant division
> of the two sizeofs. Even if "standard" compilers do, the functionality
> of doing what we can at compile time can't always be trusted in actual
> compilers, I believe. In fact, doing constant operations inside
> preprocessor macros neatly makes nonsense of any simple explanation of
> preprocessing as a straightforward word-processing transformation of
> text.

Don't confuse two separate translate operations. Macros are pure text
transformations. Most compilers separately may choose to evaluate
constant operations at compile or code generation time to save code
and execution time.

w..


--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
0
Walter
12/31/2009 12:45:04 PM
In article <06idnZ3lEvwaFqHWnZ2dnUVZ8nmdnZ2d@bt.com>,
Richard Heathfield  <rjh@see.sig.invalid> wrote:
>>> [...]
>>> #define ARRAY_SIZE 1000
>>> struct meuk nilgesmeuk[ARRAY_SIZE];
>>> #define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
>>> int ii;
>>>         for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
>>> [...]
>
>If you mean the COUNTOF thing, it's a compilation-time calculation. 
>There is no run-time cost, since the compiler can work out the division 
>at compilation time.
> [...]
>So that he can change the size of the array during editing, and have 
>that change automatically reflected in the loop control code without his 
>having to remember to update it.
>
>He could just as easily have used ARRAYSIZE, but doing so would 
>introduce into the code an assumption that his array has (at least) 
>ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust with 
>regard to later editing of the definition of the array.

Caveat: if you ever decide to change the array from a
statically allocated array to a dynamically allocated array:

  struct meuk *nilgesmeuk = malloc(ARRAY_SIZE * sizeof *nilgesmeuk);

then the COUNTOF macro silently does the wrong thing.
It would be prudent to add a compile-time assertion to make sure that
nilgesmeuk is a real array. Something like:

  #define IS_ARRAY(a) ((void*)&(a) == &(a)[0])
  COMPILE_TIME_ASSERT(IS_ARRAY(nilgesmeuk));
0
ike
12/31/2009 1:32:54 PM
In article <4760b6f4-1dd6-4cfb-8b1f-8f9d3c240887@h9g2000yqa.googlegroups.com>,
spinoza1111  <spinoza1111@yahoo.com> wrote:
>I know that the
>sizeof operator is constant time And, my C compiler won't let me
>#define FOO as (1/0), which means that constant evaluation is done
>inside the preprocessor.

Then your C compiler is broken. Or perhaps you're mistaken.
Can you post the code that you used to check this?

0
ike
12/31/2009 1:46:14 PM
On Thu, 31 Dec 2009 04:35:01 -0800, spinoza1111 wrote:

> On Dec 31, 7:31 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111wrote:
>> > On Dec 31, 5:26 pm, Moi <r...@invalid.address.org> wrote:
>> >> On Wed, 30 Dec 2009 18:55:59 -0800,spinoza1111wrote:
>> >>> On Dec 31, 8:31 am, Seebs <usenet-nos...@seebs.net> wrote:
>> >>>> On 2009-12-30, Keith Thompson <ks...@mib.org> wrote: I have
>> >>>> something of one:
>> >>>> The fact is, you can learn a lot more sometimes from understanding
>> >>>> why something is wrong than you would from understanding why it is
>> >>>> right. I've seen dozens of implementations, recursive and
>> >>>> iterative alike, of the factorial function.  Spinny's was the
>> >>>> first O(N^2) I've ever seen,
>> >>> Again, if you'd studied computer science in the university
>> >>> environment, in which practitioners are not being infantilized by
>> >>> being made perpetual candidates, you'd realize that order(n^2) is
>> >>> not a evaluative or condemnatory term. It was pointed out to you
>> >>> that the purpose of the code was to execute instructions
>> >>> repeatedly, and discover whether C Sharp had an interpretive
>> >>> overhead. It was pointed out to you that had it such an overhead,
>> >>> the time taken by the C Sharp would have itself order(n^2) with
>> >>> respect to the C time. It was pointed out to you that the reason
>> >>> for starting order n^2 was to factor out random effects from a
>> >>> too-fast program, to let things run. In university classes in
>> >>> computer science that you did not attend, a qualified professor
>> >>> would have shown you that not only is the language of computational
>> >>> complexity not evaluative, but also that we have to write such
>> >>> algorithms to learn more.
>> >> The reason for the sub-optimal hashtable now is clear to me.
>>
>> >> BTW, I fixed your hashtable for you. It is even shorter, and runs
>> >> faster. There still is a lot of place for improvement. Feel free to
>> >> improve it.
>>
>> >> *****/
>>
>> >> #include <stdio.h>
>> >> #include <stdlib.h>
>> >> #include <time.h>
>> >> #define ARRAY_SIZE 1000
>> >> #define SESSIONS 100000
>>
>> >> struct meuk {
>> >>         int head;
>> >>         int next;
>> >>         int val;
>> >>         } nilgesmeuk[ARRAY_SIZE];
>>
>> >> #define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
>>
>> >> int main(void)
>> >> {
>> >>     int val , *p;
>> >>     time_t start , end;
>> >>     double dif;
>> >>     unsigned ii,slot,sessions,hops;
>> >>     time (&start);
>> >>     hops = 0;
>> >>     for (sessions = 0; sessions < SESSIONS; sessions++) {
>> >>         for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
>> >>         nilgesmeuk[ii].head = -1;
>>
>> >>         for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++) {
>> >>             nilgesmeuk[ii].val = val = rand();
>> >>             nilgesmeuk[ii].next = -1;
>> >>             slot =  val % COUNTOF(nilgesmeuk);
>> >>             for( p =  &nilgesmeuk[slot].head ; *p >= 0 ; p =
>> >>             &nilgesmeuk[*p].next ) {hops++;} *p = ii;
>> >>         }
>> >>     }
>> >>     time (&end);
>> >>     dif = difftime (end,start);
>> >>     printf("It took C %.2f seconds to hash %d numbers with %d
>> >>     hops(chainlength=1+%6.4f) , %d times\n"
>> >>            , dif, (int) ii, hops, (float) hops/(ii*sessions),
>> >>            sessions);
>> >>     return 0;
>>
>> >> }
>>
>> >> /*****
>>
>> >> AvK
>>
>> > Get back to you when I have more time to study this, but...
>>
>> > I don't know why you calculate what seems to be an invariant in a for
>> > loop
>>
>> If you mean the COUNTOF thing, it's a compilation-time calculation.
>> There is no run-time cost, since the compiler can work out the division
>> at compilation time.
> 
> I am relieved to realize that thx to u. Of course, a modern language
> would not so conflate and confuse operations meant to be executed at
> compile time with runtime operations.
> 
> However, you assume that all C compilers will do the constant division
> of the two sizeofs. Even if "standard" compilers do, the functionality
> of doing what we can at compile time can't always be trusted in actual
> compilers, I believe. In fact, doing constant operations inside
> preprocessor macros neatly makes nonsense of any simple explanation of
> preprocessing as a straightforward word-processing transformation of
> text.
> 
> Dijkstra's "making a mess of it" includes hacks that invalidate natural
> language claims about what software does.
>>
>> > I don't know why it's apparently necessary to calculate the size of
>> > the array
>>
>> So that he can change the size of the array during editing, and have
>> that change automatically reflected in the loop control code without
>> his having to remember to update it.
> 
> That is absurd, dear boy. You just alter ARRAY_SIZE in its define. OK,
> you don't evaluate an invariant in a for loop IF the preprocessor
> divides sizeofs. If this was made a Standard, that, in my view, is a
> gross mistake.
>>
>> He could just as easily have used ARRAYSIZE, but doing so would
>> introduce into the code an assumption that his array has (at least)
>> ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust
>> with regard to later editing of the definition of the array.
> 
> I think, with all due respect, that it's a mess. It assumes that the
> preprocessor will execute constant operations in violation of the
> understanding of intelligent people that the preprocessor doesn't do
> anything but process text, and that constant time evaluation is the job
> of the optimizer.


Well, it depends.
Suppose I want to decouple the heads and nextpointers:


*****************/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 1000
#define ARRAY2_SIZE (3*ARRAY_SIZE)
#define SESSIONS 100000

struct meuk {
        struct meuk* next;
        int val;
        } nilgesmeuk[ARRAY_SIZE];

struct meuk *nilgesheads[ARRAY2_SIZE];

#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])

int main(void)
{
    int val ;
    struct meuk **hnd;
    time_t start , end;
    double dif;
    unsigned ii,slot,sessions,hops;
    time (&start);
    hops = 0;
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (ii = 0; ii < COUNTOF(nilgesheads); ii++) nilgesheads[ii] = NULL;

        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
        {
            nilgesmeuk[ii].val = val = rand();
            nilgesmeuk[ii].next = NULL;
            slot =  val % COUNTOF(nilgesheads);
            for( hnd =  &nilgesheads[slot] ; *hnd ; hnd = &(*hnd)->next ) {hops++;}
            *hnd = &nilgesmeuk[ii];
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
           , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
    return 0;
}

/*****************

Then I would not have to care which ARRAY_SIZE to use as an upper bound to my indexing operations.
The preprocessor and the compiler take care for me, for I don't fear them.

HTH,
AvK 
0
Moi
12/31/2009 1:51:18 PM
On Thu, 31 Dec 2009 13:32:54 +0000, Ike Naar wrote:

> In article <06idnZ3lEvwaFqHWnZ2dnUVZ8nmdnZ2d@bt.com>, Richard Heathfield
>  <rjh@see.sig.invalid> wrote:
>>>> [...]
>>>> #define ARRAY_SIZE 1000
>>>> struct meuk nilgesmeuk[ARRAY_SIZE];
>>>> #define COUNTOF(a) (sizeof(a)/sizeof(a)[0]) int ii;
>>>>         for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
>>>> [...]
>>
>>If you mean the COUNTOF thing, it's a compilation-time calculation.
>>There is no run-time cost, since the compiler can work out the division
>>at compilation time.
>> [...]
>>So that he can change the size of the array during editing, and have
>>that change automatically reflected in the loop control code without his
>>having to remember to update it.
>>
>>He could just as easily have used ARRAYSIZE, but doing so would
>>introduce into the code an assumption that his array has (at least)
>>ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust with
>>regard to later editing of the definition of the array.
> 
> Caveat: if you ever decide to change the array from a statically
> allocated array to a dynamically allocated array:
> 
>   struct meuk *nilgesmeuk = malloc(ARRAY_SIZE * sizeof *nilgesmeuk);
> 
> then the COUNTOF macro silently does the wrong thing. It would be
> prudent to add a compile-time assertion to make sure that nilgesmeuk is
> a real array. Something like:
> 
>   #define IS_ARRAY(a) ((void*)&(a) == &(a)[0])
>   COMPILE_TIME_ASSERT(IS_ARRAY(nilgesmeuk));

Yes, of course this is true. (nice macro)
But, changing static allocation ( := constant size)
into dynamic allocation would always need an extra variable
somewhere to store the current size (or number of elements).
Plus a minor rework on the index bounds. IMHO, renaming the
(array --> pointer) variable would in that case be a sane thing
to catch all the dangling COUNTOF()s.


AvK

0
Moi
12/31/2009 1:59:04 PM
"Walter Banks" <walter@bytecraft.com> wrote in message 
news:4B3C9CD0.2F746021@bytecraft.com...
>
>
> spinoza1111 wrote:
>
>> However, you assume that all C compilers will do the constant division
>> of the two sizeofs. Even if "standard" compilers do, the functionality
>> of doing what we can at compile time can't always be trusted in actual
>> compilers, I believe. In fact, doing constant operations inside
>> preprocessor macros neatly makes nonsense of any simple explanation of
>> preprocessing as a straightforward word-processing transformation of
>> text.
>
> Don't confuse two separate translate operations. Macros are pure text
> transformations. Most compilers separately may choose to evaluate
> constant operations at compile or code generation time to save code
> and execution time.

If he knew anything about compilers or code generation........

Dennis
 

0
Dennis
12/31/2009 2:06:29 PM
On Wed, 30 Dec 2009 18:33:04 -0800 (PST), spinoza1111
<spinoza1111@yahoo.com> wrote:

On Dec 31, 1:43 am, c...@tiac.net (Richard Harter) wrote:
> On Wed, 30 Dec 2009 08:39:34 -0800 (PST),spinoza1111
>
> <spinoza1...@yahoo.com> wrote:
> >On Dec 30, 9:32 pm, Tom St Denis <t...@iahu.ca> wrote:
> >> On Dec 30, 8:04 am,spinoza1111<spinoza1...@yahoo.com> wrote:
> [snip]
> >> Um, the stack of the threads is where you typically put cheap per-
> >> thread data.  Otherwise you allocate it off the heap.  In the case of
> >> the *_r() GNU libc functions they store any transient data in the
> >> structure you pass it.  That's how they achieve thread safety.
>
> >It's a clumsy and old-fashioned method, not universally used. It also
> >has bugola potential.
>
> >You see, the called routine is telling the caller to supply him with
> >"scratch paper". This technique is an old dodge. It was a requirement
> >in IBM 360 BAL (Basic Assembler Language) that the caller provide the
> >callee with a place to save the 16 "general purpose registers" of the
> >machine.
>
> >The problem then and now is what happens if the caller is called
> >recursively by the callee as it might be in exception handling, and
> >the callee uses the same structure. It's not supposed to but it can
> >happen.
>
> He's not talking about the technique used in BAL etc.  The
> transient data is contained within a structure that is passed by
> the caller to the callee.  The space for the structure is on the
> stack.  Recursion is permitted.
>
>If control "comes back" to the caller who has stacked the struct and
>the caller recalls the routine in question with the same struct, this
>will break.

This isn't right; I dare say the fault is mine for being unclear.
C uses call by value; arguments are copied onto the stack.  The
upshot is that callee operates on copies of the original
variables.  This is true both of elemental values, i.e., ints,
floats, etc, and composite values, i.e., structs.

So, when the calling sequence contains a struct a copy of the
struct is placed on the stack.  The callee does not have access
to the caller's struct.  To illustrate suppose that foo calls bar
and bar calls foo, and that foo passes a struct to bar which in
turn passes the struct it received to foo.  There will be two
copies of the struct on the stack, one created when foo called
bar, and one created when bar called foo.
  
>
>But the main point is that the callee isn't a black box if I have to
>supply it with storage.

Whether or not it is a black box depends on the specification of
the callee and upon the nature of the storage being provided.
  



Richard Harter, cri@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Infinity is one of those things that keep philosophers busy when they 
could be more profitably spending their time weeding their garden.
0
cri
12/31/2009 2:57:05 PM
spinoza1111 wrote:
> On Dec 31, 7:31 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111wrote:

<snip>

>>> I don't know why you calculate what seems to be an invariant in a for
>>> loop
>> If you mean the COUNTOF thing, it's a compilation-time calculation.
>> There is no run-time cost, since the compiler can work out the division
>> at compilation time.

<snip>

> However, you assume that all C compilers will do the constant division
> of the two sizeofs.

Right. In theory, it isn't guaranteed by the Standard, and there's 
nothing to stop a compiler performing the division at runtime. In 
practice, I don't know of any current mainstream compiler that does 
this, and it is unlikely that any future mainstream implementation will 
pessimise in that way.

 > Even if "standard" compilers do, the functionality
> of doing what we can at compile time can't always be trusted in actual
> compilers, I believe.

Do you have a specific compiler in mind?

<snip>

>>> I don't know why it's apparently necessary to calculate the size of
>>> the array
>> So that he can change the size of the array during editing, and have
>> that change automatically reflected in the loop control code without his
>> having to remember to update it.
> 
> That is absurd, dear boy. You just alter ARRAY_SIZE in its define.

No, it's not absurd, in the general case. If you're only interested in 
this specific example, you can just alter ARRAY_SIZE, as you say. But if 
you have two or more arrays that originally use the same #define, 
ARRAY_SIZE say, for their size, and then a requirements change makes it 
necessary to modify the size of one of the arrays, i < 
COUNTOF(arrayname) will not need to be modified, whereas i < ARRAY_SIZE 
would need to be changed (and could easily be missed).

<snip>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
12/31/2009 3:00:40 PM
Ike Naar wrote:

<snip>

> Caveat: if you ever decide to change the array from a
> statically allocated array to a dynamically allocated array:
> 
>   struct meuk *nilgesmeuk = malloc(ARRAY_SIZE * sizeof *nilgesmeuk);
> 
> then the COUNTOF macro silently does the wrong thing.

Right. This is one of the reasons I prefer to keep dynamic allocation 
wrapped up tight inside container code.

<snip>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
12/31/2009 3:03:41 PM
spinoza1111 wrote:

>> If you mean the COUNTOF thing, it's a compilation-time calculation.
>> There is no run-time cost, since the compiler can work out the division
>> at compilation time.
> 
> I am relieved to realize that thx to u. Of course, a modern language
> would not so conflate and confuse operations meant to be executed at
> compile time with runtime operations.

And that particular macro is an idiom that most experienced C 
programmers will recognise at once.
> 
> However, you assume that all C compilers will do the constant division
> of the two sizeofs. Even if "standard" compilers do, the functionality
> of doing what we can at compile time can't always be trusted in actual
> compilers, I believe. In fact, doing constant operations inside
> preprocessor macros neatly makes nonsense of any simple explanation of
> preprocessing as a straightforward word-processing transformation of
> text.

Why is it that whenever C does something that you did not know about you 
blame the language? Computer Science does not specify a 'one true way'. 
In addition every time you write that something is not the way that a 
modern language would do it you are being unreasonable (even when it is 
true) because C is an ancient language (in computer terms).

Whether you like it or not, there is an immense amount of software 
written in C and quite a bit of it is in safety critical areas. Changing 
the language so that such source code is broken is not an exercise that 
any standards authority would undertake lightly. 'Modern' languages 
start with an advantage in that they do not have to consider the needs 
of code written more than a decade ago.

Some industries (such as the automotive industry) are less than happy 
with the changes that were made by C99 to the extent that they did not 
want to use C99 compilers.

Whether you agree or not, standards are concerned with commercial usage 
and not academic purity. But, of course, you consider that to be some 
form of conspiracy.
> 
> Dijkstra's "making a mess of it" includes hacks that invalidate
> natural language claims about what software does.
>>> I don't know why it's apparently necessary to calculate the size of
>>> the array
>> So that he can change the size of the array during editing, and have
>> that change automatically reflected in the loop control code without his
>> having to remember to update it.
> 
> That is absurd, dear boy. You just alter ARRAY_SIZE in its define. OK,
> you don't evaluate an invariant in a for loop IF the preprocessor
> divides sizeofs. If this was made a Standard, that, in my view, is a
> gross mistake.
>> He could just as easily have used ARRAYSIZE, but doing so would
>> introduce into the code an assumption that his array has (at least)
>> ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust with
>> regard to later editing of the definition of the array.
> 
> I think, with all due respect, that it's a mess. It assumes that the
> preprocessor will execute constant operations in violation of the
> understanding of intelligent people that the preprocessor doesn't do
> anything but process text, and that constant time evaluation is the
> job of the optimizer.

And while I have my fingers on my keyboard, note that any attempt to 
compare C with C# is pretty pointless. Compare C# with Java or C++ or 
even Smalltalk if you must.

Now there is the issue about compile versus interpreted as if this were 
a binary choice. As those who have actually listened to their lecturers 
in CS courses know there is a continuum between heavily optimised 
compilation and completely unoptimised (one word at a time) interpretation.

One hallmark of professionals is that they are aware of the level of 
their ignorance. You seem to assume that everyone else is ignorant. You 
just rant and rarely argue.
0
Francis
12/31/2009 3:50:03 PM
On 2009-12-31, Francis Glassborow <francis.glassborow@btinternet.com> wrote:
> Why is it that whenever C does something that you did not know about you 
> blame the language?

I am beginning to suspect that all internet trolls have NPD.  Consider how
much more sense his behavior makes if you assume that.  He's never at
fault; it's always someone or something else.

> Now there is the issue about compile versus interpreted as if this were 
> a binary choice. As those who have actually listened to their lecturers 
> in CS courses know there is a continuum between heavily optimised 
> compilation and completely unoptimised (one word at a time) interpretation.

Yes.  Compilation to bytecode is an intermediate state between purely
interpreted language (such as most Unix shells) and languages like C,
which compile directly to machine code.  There are a number of interesting
special cases to be found.  From what little I know of C#, it sounds like
C# and Java are in the same basic class -- they compile once into an
intermediate form other than processor instructions, which is then
"interpreted" by a bytecode interpreter of some sort.  That's a pretty
reasonable compromise.

> One hallmark of professionals is that they are aware of the level of 
> their ignorance. You seem to assume that everyone else is ignorant. You 
> just rant and rarely argue.

Dunning-Kruger effect.

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/31/2009 4:12:25 PM
On Dec 31 2009, 10:06 pm, "Dennis \(Icarus\)"
<nojunkm...@ever.invalid> wrote:
> "Walter Banks" <wal...@bytecraft.com> wrote in message
>
> news:4B3C9CD0.2F746021@bytecraft.com...
>
>
>
>
>
>
>
> >spinoza1111wrote:
>
> >> However, you assume that all C compilers will do the constant division
> >> of the two sizeofs. Even if "standard" compilers do, the functionality
> >> of doing what we can at compile time can't always be trusted in actual
> >> compilers, I believe. In fact, doing constant operations inside
> >> preprocessor macros neatly makes nonsense of any simple explanation of
> >> preprocessing as a straightforward word-processing transformation of
> >> text.
>
> > Don't confuse two separate translate operations. Macros are pure text
> > transformations. Most compilers separately may choose to evaluate
> > constant operations at compile or code generation time to save code
> > and execution time.
>
> If he knew anything about compilers or code generation........

You know, I'm reading Habermas (A Theory of Communicative Action,
etc). And he would find this constant (and off-topic) raising of the
issue of personal competence very strange, because to him, coherent
language starts in basic decency, including the basic decency of not
going offtopic, and making absurd claims about people's competence.

These claims are especially absurd given the fact that the moderator
of clcm, Peter Seebach, is without any qualifications whatsoever in
computer science, having majored in psychology.

Whereas I took all the academic compsci I could manage with a straight-
A average, debugged a Fortran compiler in machine language at the age
of 22, wrote a compiler in 1K of storage, developed compilers at Bell
Northern Research, and authored the book and the software for Build
Your Own .Net Language and Compiler. But I'm tired of having to
identify these facts to people who crawl in here wounded by what
corporations do, which is constantly discount and downgrade real
knowledge in favor of a money-mad mysticism in which "results" as
predefined by the suits "are all that matters", and take their anger
out on their fellow human beings.

Stop wasting my time with these endless canards. They are more
reflective of a basic personal insecurity of people who were hired by
corporations because they seemed pliable, and assembled into teams
where their mistakes, it's hoped by the suits, can be factored out as
they learn on the job.


Asshole.

>
> Dennis

0
spinoza1111
12/31/2009 4:37:07 PM
On Dec 31 2009, 8:45 pm, Walter Banks <wal...@bytecraft.com> wrote:
> spinoza1111wrote:
> > However, you assume that all C compilers will do the constant division
> > of the two sizeofs. Even if "standard" compilers do, the functionality
> > of doing what we can at compile time can't always be trusted in actual
> > compilers, I believe. In fact, doing constant operations inside
> > preprocessor macros neatly makes nonsense of any simple explanation of
> > preprocessing as a straightforward word-processing transformation of
> > text.
>
> Don't confuse two separate translate operations. Macros are pure text
> transformations. Most compilers separately may choose to evaluate
> constant operations at compile or code generation time to save code
> and execution time.


I think you're confused, Walter, but it's understandable, since C is a
badly-designed language that creates confusion. While it is said that
"macros are pure text operations", on a bit more thought I realized
that this statement has never been true.

You see, to evaluate #if statements in C, we need to evaluate constant
expressions. Might as well do this all the time, or perhaps some of
the time (evaluate constant expressions only in the #if statement). My
guess is that compilers vary, and this is one more reason not to use
C.

C Sharp got rid of all but a very simple form of preprocessing for this
reason.

In my example, I wanted to define 1/0 as a pure text operation. I
would want to do so in reality in order, let us say, to test an error
handler by generating an error. The compiler wouldn't let me.

This means I have to retract my charge that the Standard invalidated
perfectly reasonable English prose about C which was illuminating. It
looks like it was never true that the preprocessor was "a pure text
operation" after all.

*Quel dommage*!


>
> w..
>
> --- news://freenews.netfront.net/ - complaints: n...@netfront.net ---

0
spinoza1111
12/31/2009 4:50:04 PM
On Dec 31 2009, 9:46 pm, i...@localhost.claranet.nl (Ike Naar) wrote:
> In article <4760b6f4-1dd6-4cfb-8b1f-8f9d3c240...@h9g2000yqa.googlegroups.com>,
>
> spinoza1111 <spinoza1...@yahoo.com> wrote:
> >I know that the
> >sizeof operator is constant time And, my C compiler won't let me
> >#define FOO as (1/0), which means that constant evaluation is done
> >inside the preprocessor.
>
> Then your C compiler is broken. Or perhaps you're mistaken.
> Can you post the code that you used to check this?

#define FOO (1/0)

int main(void)
{
    int i; i = FOO;
    return 0;
}

Output at compile and link time:

1>------ Build started: Project: hashing, Configuration: Release Win32
------
1>Compiling...
1>hashing.c
1>..\..\..\hashing.c(5) : error C2124: divide or mod by zero
1>Build log was saved at "file://c:\egnsf\C and C sharp comparisions
\Hashing\C\hashing\hashing\Release\BuildLog.htm"
1>hashing - 1 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========


But: maybe it's not the preprocessor's "fault". Let's try this:



int main(void)
{
    int i; i = 1/0;
    return 0;
}

Same diagnostic, of course. So the preprocessor can be said to make a
straight textual substitution, after all.

However, I can conceive of any number of scenarios in which I'd want
to divide by zero, as in my example of an error handler. Given the
permissive philosophy of C, I don't see why I can't do this.

"Constant folding" should be for optimization. The fatal error should
be a warningg IMO.
0
spinoza1111
12/31/2009 5:02:09 PM
On Dec 31 2009, 9:51 pm, Moi <r...@invalid.address.org> wrote:
> On Thu, 31 Dec 2009 04:35:01 -0800, spinoza1111 wrote:
> > On Dec 31, 7:31 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111 wrote:
> >> > On Dec 31, 5:26 pm, Moi <r...@invalid.address.org> wrote:
> >> >> On Wed, 30 Dec 2009 18:55:59 -0800, spinoza1111 wrote:
> >> >>> On Dec 31, 8:31 am, Seebs <usenet-nos...@seebs.net> wrote:
> >> >>>> On 2009-12-30, Keith Thompson <ks...@mib.org> wrote: I have
> >> >>>> something of one:
> >> >>>> The fact is, you can learn a lot more sometimes from understanding
> >> >>>> why something is wrong than you would from understanding why it is
> >> >>>> right. I've seen dozens of implementations, recursive and
> >> >>>> iterative alike, of the factorial function.  Spinny's was the
> >> >>>> first O(N^2) I've ever seen,
> >> >>> Again, if you'd studied computer science in the university
> >> >>> environment, in which practitioners are not being infantilized by
> >> >>> being made perpetual candidates, you'd realize that order(n^2) is
> >> >>> not an evaluative or condemnatory term. It was pointed out to you
> >> >>> that the purpose of the code was to execute instructions
> >> >>> repeatedly, and discover whether C Sharp had an interpretive
> >> >>> overhead. It was pointed out to you that had it such an overhead,
> >> >>> the time taken by the C Sharp would have itself order(n^2) with
> >> >>> respect to the C time. It was pointed out to you that the reason
> >> >>> for starting order n^2 was to factor out random effects from a
> >> >>> too-fast program, to let things run. In university classes in
> >> >>> computer science that you did not attend, a qualified professor
> >> >>> would have shown you that not only is the language of computational
> >> >>> complexity not evaluative, but also that we have to write such
> >> >>> algorithms to learn more.
> >> >> The reason for the sub-optimal hashtable now is clear to me.
>
> >> >> BTW, I fixed your hashtable for you. It is even shorter, and runs
> >> >> faster. There still is a lot of place for improvement. Feel free to
> >> >> improve it.
>
> >> >> *****/
>
> >> >> #include <stdio.h>
> >> >> #include <stdlib.h>
> >> >> #include <time.h>
> >> >> #define ARRAY_SIZE 1000
> >> >> #define SESSIONS 100000
>
> >> >> struct meuk {
> >> >>         int head;
> >> >>         int next;
> >> >>         int val;
> >> >>         } nilgesmeuk[ARRAY_SIZE];
>
> >> >> #define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
>
> >> >> int main(void)
> >> >> {
> >> >>     int val , *p;
> >> >>     time_t start , end;
> >> >>     double dif;
> >> >>     unsigned ii,slot,sessions,hops;
> >> >>     time (&start);
> >> >>     hops = 0;
> >> >>     for (sessions = 0; sessions < SESSIONS; sessions++) {
> >> >>         for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
> >> >>             nilgesmeuk[ii].head = -1;
>
> >> >>         for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++) {
> >> >>             nilgesmeuk[ii].val = val = rand();
> >> >>             nilgesmeuk[ii].next = -1;
> >> >>             slot = val % COUNTOF(nilgesmeuk);
> >> >>             for( p = &nilgesmeuk[slot].head ; *p >= 0 ; p = &nilgesmeuk[*p].next ) {hops++;} *p = ii;
> >> >>         }
> >> >>     }
> >> >>     time (&end);
> >> >>     dif = difftime (end,start);
> >> >>     printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
> >> >>            , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
> >> >>     return 0;
>
> >> >> }
>
> >> >> /*****
>
> >> >> AvK
>
> >> > Get back to you when I have more time to study this, but...
>
> >> > I don't know why you calculate what seems to be an invariant in a for
> >> > loop
>
> >> If you mean the COUNTOF thing, it's a compilation-time calculation.
> >> There is no run-time cost, since the compiler can work out the division
> >> at compilation time.
>
> > I am relieved to realize that thx to u. Of course, a modern language
> > would not so conflate and confuse operations meant to be executed at
> > compile time with runtime operations.
>
> > However, you assume that all C compilers will do the constant division
> > of the two sizeofs. Even if "standard" compilers do, the functionality
> > of doing what we can at compile time can't always be trusted in actual
> > compilers, I believe. In fact, doing constant operations inside
> > preprocessor macros neatly makes nonsense of any simple explanation of
> > preprocessing as a straightforward word-processing transformation of
> > text.
>
> > Dijkstra's "making a mess of it" includes hacks that invalidate natural
> > language claims about what software does.
>
> >> > I don't know why it's apparently necessary to calculate the size of
> >> > the array
>
> >> So that he can change the size of the array during editing, and have
> >> that change automatically reflected in the loop control code without
> >> his having to remember to update it.
>
> > That is absurd, dear boy. You just alter ARRAY_SIZE in its define. OK,
> > you don't evaluate an invariant in a for loop IF the preprocessor
> > divides sizeofs. If this was made a Standard, that, in my view, is a
> > gross mistake.
>
> >> He could just as easily have used ARRAYSIZE, but doing so would
> >> introduce into the code an assumption that his array has (at least)
> >> ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust
> >> with regard to later editing of the definition of the array.
>
> > I think, with all due respect, that it's a mess. It assumes that the
> > preprocessor will execute constant operations in violation of the
> > understanding of intelligent people that the preprocessor doesn't do
> > anything but process text, and that constant time evaluation is the job
> > of the optimizer.
>
> Well, it depends.
> Suppose I want to decouple the heads and nextpointers:
>
> *****************/
> #include <stdio.h>
> #include <stdlib.h>
> #include <time.h>
> #define ARRAY_SIZE 1000
> #define ARRAY2_SIZE (3*ARRAY_SIZE)
> #define SESSIONS 100000
>
> struct meuk {
>         struct meuk* next;
>         int val;
>         } nilgesmeuk[ARRAY_SIZE];
>
> struct meuk *nilgesheads[ARRAY2_SIZE];
>
> #define COUNTOF(a) (sizeof(a)/sizeof(a)[0])
>
> int main(void)
> {
>     int val ;
>     struct meuk **hnd;
>     time_t start , end;
>     double dif;
>     unsigned ii,slot,sessions,hops;
>     time (&start);
>     hops = 0;
>     for (sessions = 0; sessions < SESSIONS; sessions++)
>     {
>         for (ii = 0; ii < COUNTOF(nilgesheads); ii++) nilgesheads[ii] = NULL;
>
>         for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
>         {
>             nilgesmeuk[ii].val = val = rand();
>             nilgesmeuk[ii].next = NULL;
>             slot = val % COUNTOF(nilgesheads);
>             for( hnd = &nilgesheads[slot] ; *hnd ; hnd = &(*hnd)->next ) {hops++;}
>             *hnd = &nilgesmeuk[ii];
>         }
>     }
>     time (&end);
>     dif = difftime (end,start);
>     printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
>            , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
>     return 0;
>
> }
>
> /*****************
>
> Then I would not have to care which ARRAY_SIZE to use as an upper bound to my indexing operations.
> The preprocessor and the compiler take care for me, for I don't fear them.

OK, this makes sense now. I can see where it might be a useful
technique.

The problem remains: sizeof as a pure compile time operation isn't
textually distinguished from a run time operation.

The preprocessor is widely considered to be a mistake because what it
wants to do is better done by OO language features. It sounded cool in
the early days to be able to write "conditional macros", and
assemblers (such as IBM mainframe BAL) had Turing-complete facilities,
which I used to generate tables for the first cellphone OS. But
statistically speaking, you're using a programmable monkey at a
typewriter when you use conditional macro instructions, and the C
preprocessor (perhaps fortunately) isn't Turing-complete, whereas in
BAL you could construct loops with an assembly-time GOTO.

Also, isn't there an alignment/padding problem on some machines in the
way you calculate the size of the total array?
>
> HTH,
> AvK

0
spinoza1111
12/31/2009 5:10:03 PM
On Dec 31 2009, 10:57 pm, c...@tiac.net (Richard Harter) wrote:
> On Wed, 30 Dec 2009 18:33:04 -0800 (PST), spinoza1111
>
>
>
>
>
> <spinoza1...@yahoo.com> wrote:
> >On Dec 31, 1:43 am, c...@tiac.net (Richard Harter) wrote:
> >> On Wed, 30 Dec 2009 08:39:34 -0800 (PST), spinoza1111
>
> >> <spinoza1...@yahoo.com> wrote:
> >> >On Dec 30, 9:32 pm, Tom St Denis <t...@iahu.ca> wrote:
> >> >> On Dec 30, 8:04 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> >> [snip]
> >> >> Um, the stack of the threads is where you typically put cheap per-
> >> >> thread data.  Otherwise you allocate it off the heap.  In the case of
> >> >> the *_r() GNU libc functions they store any transient data in the
> >> >> structure you pass it.  That's how they achieve thread safety.
>
> >> >It's a clumsy and old-fashioned method, not universally used. It also
> >> >has bugola potential.
>
> >> >You see, the called routine is telling the caller to supply him with
> >> >"scratch paper". This technique is an old dodge. It was a requirement
> >> >in IBM 360 BAL (Basic Assembler Language) that the caller provide the
> >> >callee with a place to save the 16 "general purpose registers" of the
> >> >machine.
>
> >> >The problem then and now is what happens if the caller is called
> >> >recursively by the callee as it might be in exception handling, and
> >> >the callee uses the same structure. It's not supposed to but it can
> >> >happen.
>
> >> He's not talking about the technique used in BAL etc.  The
> >> transient data is contained within a structure that is passed by
> >> the caller to the callee.  The space for the structure is on the
> >> stack.  Recursion is permitted.
>
> >If control "comes back" to the caller who has stacked the struct and
> >the caller recalls the routine in question with the same struct, this
> >will break.
>
> This isn't right; I dare say the fault is mine for being unclear.
> C uses call by value; arguments are copied onto the stack.  The
> upshot is that callee operates on copies of the original
> variables.  This is true both of elemental values, i.e., ints,
> floats, etc, and composite values, i.e., structs.
>
> So, when the calling sequence contains a struct a copy of the
> struct is placed on the stack.  The callee does not have access
> to the caller's struct.  To illustrate suppose that foo calls bar
> and bar calls foo, and that foo passes a struct to bar which in
> turn passes the struct it received to foo.  There will be two
> copies of the struct on the stack, one created when foo called
> bar, and one created when bar called foo.

Correct. And foo sees bar's scratchpad memory, which is a security
exposure. It creates opportunities for fun and games.

I'm foo. I have to pass bar this struct:

struct { int i; char * workarea; }

i is data. workarea is an area which bar needs to do its work. bar
puts customer passwords in workarea. Control returns to foo in an
error handler which is passed the struct. foo can now see the
passwords.

Because you violate encapsulation, you have a security hole, right?
Better to use an OO language in which each invocation of the stateful
foo object gets as much memory as it needs.

Let me know if I am missing anything.
>
>
>
> >But the main point is that the callee isn't a black box if I have to
> >supply it with storage.
>
> Whether or not it is a black box depends on the specification of
> the callee and upon the nature of the storage being provided.
>
> Richard Harter, c...@tiac.net
> http://home.tiac.net/~cri, http://www.varinoma.com
> Infinity is one of those things that keep philosophers busy when they
> could be more profitably spending their time weeding their garden.

0
spinoza1111
12/31/2009 5:17:28 PM
On Dec 31 2009, 11:00 pm, Richard Heathfield <r...@see.sig.invalid>
wrote:
> spinoza1111 wrote:
> > On Dec 31, 7:31 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111 wrote:
>
> <snip>
>
> >>> I don't know why you calculate what seems to be an invariant in a for
> >>> loop
> >> If you mean the COUNTOF thing, it's a compilation-time calculation.
> >> There is no run-time cost, since the compiler can work out the division
> >> at compilation time.
>
> <snip>
>
> > However, you assume that all C compilers will do the constant division
> > of the two sizeofs.
>
> Right. In theory, it isn't guaranteed by the Standard, and there's
> nothing to stop a compiler performing the division at runtime. In
> practice, I don't know of any current mainstream compiler that does
> this, and it is unlikely that any future mainstream implementation will
> pessimise in that way.
>
> > Even if "standard" compilers do, the functionality
>
> > of doing what we can at compile time can't always be trusted in actual
> > compilers, I believe.
>
> Do you have a specific compiler in mind?
>
> <snip>
>
> >>> I don't know why it's apparently necessary to calculate the size of
> >>> the array
> >> So that he can change the size of the array during editing, and have
> >> that change automatically reflected in the loop control code without his
> >> having to remember to update it.
>
> > That is absurd, dear boy. You just alter ARRAY_SIZE in its define.
>
> No, it's not absurd, in the general case. If you're only interested in
> this specific example, you can just alter ARRAY_SIZE, as you say. But if
> you have two or more arrays that originally use the same #define,
> ARRAY_SIZE say, for their size, and then a requirements change makes it
> necessary to modify the size of one of the arrays, i <
> COUNTOF(arrayname) will not need to be modified, whereas i < ARRAY_SIZE
> would need to be changed (and could easily be missed).

You and Moi have shown me that this could be a useful coding
technique. However, fixed array sizes, whether the same or different
for two arrays, tend in production to allocate not too much (the
dreaded "code bloat", which is not in itself a bug) but, strangely,
too little. Using objects hides the choice. Of course, at some point
inside a low-level object an array may have to be allocated, but that
happens in one place. Or linked lists of classes and structures can
be used, replacing malloc with object creation.

The trick seems a shibboleth which relies on a non-orthogonal fact
about sizeof: that it happens to be a compile time operation.


>
> <snip>
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

0
spinoza1111
12/31/2009 5:23:33 PM
On Dec 31 2009, 11:50 pm, Francis Glassborow
<francis.glassbo...@btinternet.com> wrote:
> spinoza1111 wrote:
> >> If you mean the COUNTOF thing, it's a compilation-time calculation.
> >> There is no run-time cost, since the compiler can work out the division
> >> at compilation time.
>
> > I am relieved to realize that thx to u. Of course, a modern language
> > would not so conflate and confuse operations meant to be executed at
> > compile time with runtime operations.
>
> And that particular macro is an idiom that most experienced C
> programmers will recognise at once.

Well hoo ray.
>
>
>
> > However, you assume that all C compilers will do the constant division
> > of the two sizeofs. Even if "standard" compilers do, the functionality
> > of doing what we can at compile time can't always be trusted in actual
> > compilers, I believe. In fact, doing constant operations inside
> > preprocessor macros neatly makes nonsense of any simple explanation of
> > preprocessing as a straightforward word-processing transformation of
> > text.
>
> Why is it that whenever C does something that you did not know about you
> blame the language? Computer Science does not specify a 'one true way'.
> In addition every time you write that something is not the way that a
> modern language would do it you are being unreasonable (even when it is
> true) because C is an ancient language (in computer terms).

Which means it should be sent to the knacker's yard.
>
> Whether you like it or not, there is an immense amount of software
> written in C and quite a bit of it is in safety critical areas. Changing
> the language so that such source code is broken is not an exercise that
> any standards authority would undertake lightly. 'Modern' languages
> start with an advantage in that they do not have to consider the needs
> of code written more than a decade ago.

The "needs of code" is a strange concept. I prefer human needs.
>
> Some industries (such as the automotive industry) are less than happy
> with the changes that were made by C99 to the extent that they did not
> want to use C99 compilers.

You mean, in fact, those geniuses in the AMERICAN auto industry: the
folks who need government handouts to survive because they preferred
in the fat years to build gas guzzlers for Yuppie scum.
>
> Whether you agree or not, standards are concerned with commercial usage
> and not academic purity. But, of course, you consider that to be some
> form of conspiracy.

No, I think it's quite open and blatant. An unqualified upper
management doesn't understand its own data systems (this was clear
when Ken Lay said he didn't know what was going on at Enron). It
preserves out of date approaches through FUD (fear, uncertainty and
doubt).
>
>
>
>
>
>
>
> > Dijkstra's "making a mess of it" includes hacks that invalidate
> > natural language claims about what software does.
> >>> I don't know why it's apparently necessary to calculate the size of
> >>> the array
> >> So that he can change the size of the array during editing, and have
> >> that change automatically reflected in the loop control code without his
> >> having to remember to update it.
>
> > That is absurd, dear boy. You just alter ARRAY_SIZE in its define. OK,
> > you don't evaluate an invariant in a for loop IF the preprocessor
> > divides sizeofs. If this was made a Standard, that, in my view, is a
> > gross mistake.
> >> He could just as easily have used ARRAYSIZE, but doing so would
> >> introduce into the code an assumption that his array has (at least)
> >> ARRAYSIZE elements - the COUNTOF macro is thus slightly more robust with
> >> regard to later editing of the definition of the array.
>
> > I think, with all due respect, that it's a mess. It assumes that the
> > preprocessor will execute constant operations in violation of the
> > understanding of intelligent people that the preprocessor doesn't do
> > anything but process text, and that constant time evaluation is the
> > job of the optimizer.
>
> And while I have my fingers on my keyboard, note that any attempt to
> compare C with C# is pretty pointless. Compare C# with Java or C++ or
> even Smalltalk if you must.

Why? I can solve the same problems using C Sharp without having to
rote-memorize non-orthogonal nonsense and use techniques that,
although valid and somewhat elegant in a twisted way, insult the
intelligence.
>
> Now there is the issue about compile versus interpreted as if this were
> a binary choice. As those who have actually listened to their lecturers
> in CS courses know there is a continuum between heavily optimised
> compilation and completely unoptimised (one word at a time) interpretation.

Incorrect. Pure interpretation replaces the computer you have with one
that is K times slower, where K is the time needed to "understand"
individual bytecodes. In some interpreters K can vary, as in the case
of interpreting a bytecode that says "scan for this character", for
example. The result is a dramatic slowdown.

Whereas an interpreter that translates the bytecodes on first
encountering them is only marginally slower than direct machine
language.
>
> One hallmark of professionals is that they are aware of the level of
> their ignorance. You seem to assume that everyone else is ignorant. You
> just rant and rarely argue.

No, I think that many people here buy into a paradigm that's out of
date and it controls their imagination.

0
spinoza1111
12/31/2009 5:35:21 PM
spinoza1111 wrote:
> On Dec 31 2009, 9:51 pm, Moi <r...@invalid.address.org> wrote:

<snip>

>> Then I would not have to care which ARRAY_SIZE to use as an upper bound to my indexing operations.
>> The preprocessor and the compiler take care for me, for I don't fear them.
> 
> OK, this makes sense now. I can see where it might be a useful
> technique.

And not absurd at all.

> The problem remains: sizeof as a pure compile time operation isn't
> textually distinguished from a run time operation.

That is not a problem to people who actually know the language. Those 
who don't know the language have no reason to care, unless they are 
learning it, in which case this is one of the things they need to learn.

In any case, since C99 it is no longer true that sizeof is uniquely a 
compile time operation - the size of VLAs is computed at runtime.

> The preprocessor is widely considered to be a mistake because what it
> wants to do is better done by OO language features.

That doesn't follow at all. Firstly, whether the preprocessor's role is 
better undertaken by OO language features is a matter of opinion. 
Secondly, the preprocessor pre-dated the adoption of such features into 
mainstream languages. Even if your hindsight is 20/20 (which is far from 
universally agreed), it's still hindsight.

<snip>

> Also, isn't there an alignment/padding problem on some machines in the
> way you calculate the size of the total array?

No, there isn't.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
12/31/2009 5:43:48 PM
spinoza1111 wrote:

<snip>

> You and moi have shown me that this could be a useful coding
> technique.

In practice, in real code, it's hardly used at all, except perhaps in 
environments where malloc & co have been banned - i.e. where a so-called 
(and mis-named) "safe" subset of C is used.

<snip>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
12/31/2009 5:49:16 PM
spinoza1111 wrote:
> On Dec 31 2009, 11:50 pm, Francis Glassborow wrote:

<snip>

>> And while I have my fingers on my keyboard, note that any attempt to
>> compare C with C# is pretty pointless. Compare C# with Java or C++ or
>> even Smalltalk if you must.
> 
> Why? I can solve the same problems using C Sharp

Nobody is stopping you.

<snip>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
12/31/2009 5:50:40 PM
On Jan 1, 12:12 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2009-12-31, Francis Glassborow <francis.glassbo...@btinternet.com> wrote:
>
> > Why is it that whenever C does something that you did not know about you
> > blame the language?
>
> I am beginning to suspect that all internet trolls have NPD.  Consider how
> much more sense his behavior makes if you assume that.  He's never at
> fault; it's always someone or something else.

How dare you brag about your own ADHD and implicitly ask for sympathy
for it, and use a fantasized narcissistic personality disorder in
another as an argument in a technical discussion?


>
> > Now there is the issue about compile versus interpreted as if this were
> > a binary choice. As those who have actually listened to their lecturers
> > in CS courses know there is a continuum between heavily optimised
> > compilation and completely unoptimised (one word at a time) interpretation.
>
> Yes.  Compilation to bytecode is an intermediate state between purely

Wrong. It's not halfway. It's more like 1..10% slower. And for this,
you get protection against incompetence and fraud, and the ability to
port without having to hire a Fat Bastard of a C "expert".

> interpreted language (such as most Unix shells) and languages like C,
> which compile directly to machine code.

Supported in fact by a runtime in all cases which performs stack and
heap management, as described by Herb Schildt, by example.

> There are a number of interesting
> special cases to be found.  From what little I know of C#, it sounds like

....you're going to pontificate about C# based on ignorance while
decrying this with respect to C. You can open your yap about C# but
nobody can speak about C without repeating shibboleths.

What a jerk!

> C# and Java are in the same basic class -- they compile once into an
> intermediate form other than processor instructions, which is then
> "interpreted" by a bytecode interpreter of some sort. =A0That's a pretty
> reasonable compromise.

WRONG. The compiler translates to bytecode. That bytecode is then
transformed into threaded, and directly executable, machine
instructions the first time the runtime software (which is not an
interpreter, but is properly known as just in time compiler)
encounters it. Bytecodes that are not executed remain in bytecode
form. After this ONE TIME operation the code is MACHINE LANGUAGE. The
one time operation also performs the invaluable service of detecting
incompetence and malfeasance.

Do your homework before posting on matters on which you are without
standing. Withdraw the Schildt post, because without standing you
seriously damaged his reputation because as here, you did not do your
homework ("the 'heap' is a DOS term"). Stop calling people "morons"
and "crazy" based on your knowing some detail of C that they don't, or
based on their willingness to speak truth to power.

"You CHILD. You COMPANY MAN. You stupid ****ing cunt. You, Williamson,
I'm talking to you, ****head. You just cost me $6,000. Six thousand
dollars, and one Cadillac. That's right. What are you going to do
about it? What are you going to do about it, asshole? You're ****ing
****. Where did you learn your trade, you stupid ****ing cunt, you
idiot? Who ever told you that you could work with men? Oh, I'm gonna
have your job, ****head. " - David Mamet, Glengarry Glen Ross
0
spinoza1111
12/31/2009 5:56:45 PM
"Seebs" <usenet-nospam@seebs.net> wrote in message 
news:slrnhjpje9.ke4.usenet-nospam@guild.seebs.net...
<snip>
>
> Yes.  Compilation to bytecode is an intermediate state between purely
> interpreted language (such as most Unix shells) and languages like C,
> which compile directly to machine code.  There are a number of interesting
> special cases to be found.  From what little I know of C#, it sounds like
> C# and Java are in the same basic class -- they compile once into an
> intermediate form other than processor instructions, which is then
> "interpreted" by a bytecode interpreter of some sort.  That's a pretty
> reasonable compromise.

You can also compile as a native executable. This is useful when you have a 
C# executable that needs to work with a 32-bit dll on a 64-bit system.
If left as MSIL, the executable is run as a 64-bit process, which cannot work 
with the 32-bit dll.


Dennis 

0
Dennis
12/31/2009 6:26:33 PM
On Thu, 31 Dec 2009 09:10:03 -0800, spinoza1111 wrote:

> On Dec 31 2009, 9:51 pm, Moi <r...@invalid.address.org> wrote:
>> On Thu, 31 Dec 2009 04:35:01 -0800, spinoza1111 wrote:

> The preprocessor is widely considered to be a mistake because what it

In the rest of the universe, the use of the word 'widely' is considered
useless.

> to generate tables for the first cellphone OS. But statistically
> speaking, you're using a programmable monkey at a typewriter when you

You cannot speak 'statistically'. You can speak nonsense, but that would 
easily be recognized.


> use conditional macro instructions, and the C preprocessor (perhaps
> fortunately) isn't Turing-complete whereas in BAL you could construct

Stop calling Turing (or other names). BTW did I mention that I am related
to Dijkstra? *NO I did not.* I only post code here.
(and too many comments to trolls and/or idiots)

You are not worthy. Post code instead, please.

> 
> Also, isn't there an alignment/padding problem on some machines in the
> way you calculate the size of the total array?

No, there is not (IMHO)
sizeof yields a size_t (introduced by C89/C90, IIRC)

so: dividing two of them yields a ... _drumroll_ ... size_t.

Which (in this case) seems adequate for me
(there is a subtle issue lurking here, but that
is _presumably_ beyond your grasp)


HTH,
AvK

0
Moi
12/31/2009 6:36:23 PM
On Jan 1, 2:36 am, Moi <r...@invalid.address.org> wrote:
> On Thu, 31 Dec 2009 09:10:03 -0800, spinoza1111 wrote:
> > On Dec 31 2009, 9:51 pm, Moi <r...@invalid.address.org> wrote:
> >> On Thu, 31 Dec 2009 04:35:01 -0800, spinoza1111 wrote:
> > The preprocessor is widely considered to be a mistake because what it
>
> In the rest of the universe, the use of the word 'widely' is considered
> useless.

Perhaps inside black holes into which light goes and from which it
returneth, not.
>
> > to generate tables for the first cellphone OS. But statistically
> > speaking, you're using a programmable monkey at a typewriter when you
>
> You cannot speak 'statistically'. You can speak nonsense, but that would
> easily be recognized.

Oh? I just did.
>
> > use conditional macro instructions, and the C preprocessor (perhaps
> > fortunately) isn't Turing-complete whereas in BAL you could construct
>
> Stop calling Turing (or other names). BTW did I mention that I am related
> to Dijkstra? *NO I did not.* I only post code here.

Yeah, but you just did.

Didn't you.

And...actually helping Nash find a bug is a real qualification,
whereas being related to Dijkstra...isn't.

> (and too many comments to trolls and/or idiots)
>
> You are not worthy. Post code instead, please

I'd rather not code in C, because it's a poor language.

>
>
>
> > Also, isn't there an alignment/padding problem on some machines in the
> > way you calculate the size of the total array?
>
> No, there is not (IMHO)
> sizeof yields a size_t (introduced by C89/C90, IIRC)
>
> so: dividing two of them yields a ... _drumroll_ ... size_t.
>
> Which (in this case) seems adequate for me
> (there is a subtle issue lurking here, but that
> is _presumably_ beyond your grasp)

Fuck off, turd blossom. This space is for technical discussion, not
personalities. In fact, the less  I know of your personality, the
better, it seems.
0
spinoza1111
12/31/2009 6:45:09 PM
In article <33a5995d-bb4b-4839-ab45-d2656fc8bd62@a32g2000yqm.googlegroups.com>,
spinoza1111  <spinoza1111@yahoo.com> wrote:
....
>This space is for technical discussion, not personalities.

Well, see, that's where you are wrong.  As I've shown many times in the
past, other than langauge lawyering (*), it is impossible to be on-topic
in this newsgroup.  Therefore, the clear and simple purpose/raison
d'etre of this group is for trading insults.  And we do that well!

(*) Which is on-topic, but has all the appeal to a sensible person as,
    well, real life lawyering...

0
gazelle
12/31/2009 6:52:48 PM
On Thu, 31 Dec 2009 09:17:28 -0800 (PST), spinoza1111
<spinoza1111@yahoo.com> wrote:

>On Dec 31 2009, 10:57 pm, c...@tiac.net (Richard Harter) wrote:
>> On Wed, 30 Dec 2009 18:33:04 -0800 (PST), spinoza1111
>> <spinoza1...@yahoo.com> wrote:
>> >On Dec 31, 1:43 am, c...@tiac.net (Richard Harter) wrote:
>> >> On Wed, 30 Dec 2009 08:39:34 -0800 (PST), spinoza1111
>> >> <spinoza1...@yahoo.com> wrote:
>> >> >On Dec 30, 9:32 pm, Tom St Denis <t...@iahu.ca> wrote:
>> >> >> On Dec 30, 8:04 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
>> >> [snip]
>> >> >> Um, the stack of the threads is where you typically put cheap per-
>> >> >> thread data.  Otherwise you allocate it off the heap.  In the case of
>> >> >> the *_r() GNU libc functions they store any transient data in the
>> >> >> structure you pass it.  That's how they achieve thread safety.
>>
>> >> >It's a clumsy and old-fashioned method, not universally used. It also
>> >> >has bugola potential.
>>
>> >> >You see, the called routine is telling the caller to supply him with
>> >> >"scratch paper". This technique is an old dodge. It was a requirement
>> >> >in IBM 360 BAL (Basic Assembler Language) that the caller provide the
>> >> >callee with a place to save the 16 "general purpose registers" of the
>> >> >machine.
>>
>> >> >The problem then and now is what happens if the caller is called
>> >> >recursively by the callee as it might be in exception handling, and
>> >> >the callee uses the same structure. It's not supposed to but it can
>> >> >happen.
>>
>> >> He's not talking about the technique used in BAL etc.  The
>> >> transient data is contained within a structure that is passed by
>> >> the caller to the callee.  The space for the structure is on the
>> >> stack.  Recursion is permitted.
>>
>> >If control "comes back" to the caller who has stacked the struct and
>> >the caller recalls the routine in question with the same struct, this
>> >will break.
>>
>> This isn't right; I dare say the fault is mine for being unclear.
>> C uses call by value; arguments are copied onto the stack.  The
>> upshot is that callee operates on copies of the original
>> variables.  This is true both of elemental values, i.e., ints,
>> floats, etc, and composite values, i.e., structs.
>>
>> So, when the calling sequence contains a struct a copy of the
>> struct is placed on the stack.  The callee does not have access
>> to the caller's struct.  To illustrate suppose that foo calls bar
>> and bar calls foo, and that foo passes a struct to bar which in
>> turn passes the struct it received to foo.  There will be two
>> copies of the struct on the stack, one created when foo called
>> bar, and one created when bar called foo.
>
>Correct. And foo sees bar's scratchpad memory, which is a security
>exposure. It creates opportunities for fun and games.
>
>I'm foo. I have to pass bar this struct:
>
>struct { int i; char * workarea; }
>
>i is data. workarea is an area which bar needs to do its work. bar
>puts customer passwords in workarea. Control returns to foo in an
>error handler which is passed the struct. foo can now see the
>passwords.
>
>Because you violate encapsulation, you have a security hole, right?
>Better to use an OO language in which each invocation of the stateful
>foo object gets as much memory as it needs.
>
>Let me know if I am missing anything.

As an initial remark, your example is not the kind of thing that
St. Denis was discussing.  The "per-thread data" is data that is
passed down to routines in a thread.  The top level routines in
the thread get passed a copy of the per-thread data struct; in
turn they pass copies down to the routines they call.  Either the
struct has no pointers at all or, if it does have pointers, they
are opaque pointers.  There is no path for passing data up.

As a second initial remark, your example should be qualified.
Generally speaking, functions that need scratch space provide it
internally.  For the sake of argument, let's suppose that there
is some good reason for supplying scratch space.

You don't "have to pass bar this struct" - there are a number of
alternatives, perhaps too many.  Here are some:

(1) If we know the size of work area we can make it an array.
The struct is:

struct stuff {
    int i;
    char workarea[size];
}

Foo and bar will have separate copies of the workarea.  The
upside is that there is no security hole.  The down side is that
there will be two copies on the stack.

(2) We can malloc the space in the calling sequence.  The
following won't pass muster in a code review but it has the
general idea:

struct stuff {int i; void * workarea;} barbill;

barbill.workarea = malloc(size);

bar(barbill);

In this version, bar frees the workarea.  The upside is that
there is only one copy of the work area.  The downside is that
bar must free the work area space.

(3) Bar can zero out the space when it is done with it.

(4) Foo does not call bar directly; instead it calls an interface
routine with a handle to bar as an argument; the interface
routine calls bar and takes care of zeroing out the space.

Usw.





Richard Harter, cri@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Infinity is one of those things that keep philosophers busy when they 
could be more profitably spending their time weeding their garden.
0
cri
12/31/2009 7:22:40 PM
On Thu, 31 Dec 2009 10:45:09 -0800, spinoza1111 wrote:

> On Jan 1, 2:36 am, Moi <r...@invalid.address.org> wrote:

>> Stop calling Turing (or other names). BTW did I mention that I am
>> related to Dijkstra? *NO I did not.* I only post code here.
> 
> Yeah, but you just did.
> 
> Didn't you.

I'm sorry.
I won't ever do so again.
(Is it important to you why?)

> And...actually helping Nash find a bug is a real qualification, whereas
> being related to Dijkstra...isn't.
>

I get your point. Did you get mine ?


>> (and too many comments to trolls and/or idiots)
>>
>> You are not worthy. Post code instead, plaease
> 
> I'd rather not code in C, because it's a poor language.
> 

Thanks: please don't
Thank you,

AvK
0
Moi
12/31/2009 7:39:48 PM
Moi <root@invalid.address.org> writes:
> On Thu, 31 Dec 2009 09:10:03 -0800, spinoza1111 wrote:
[...]
>> use conditional macro instructions, and the C preprocessor (perhaps
>> fortunately) isn't Turing-complete whereas in BAL you could construct
>
> Stop calling Turing (or other names). BTW did I mention that I am related
> to Dijkstra? *NO I did not.* I only post code here.
[...]

Using the phrase "Turing-complete", which refers to a well known
concept in computer science, is not name-dropping.  The fact that the
C preprocessor isn't Turing-complete may well be relevant to whatever
is being discussed.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/31/2009 7:40:05 PM
On Thu, 31 Dec 2009 11:40:05 -0800, Keith Thompson wrote:

> Moi <root@invalid.address.org> writes:
>> On Thu, 31 Dec 2009 09:10:03 -0800, spinoza1111 wrote:
> [...]
>>> use conditional macro instructions, and the C preprocessor (perhaps
>>> fortunately) isn't Turing-complete whereas in BAL you could construct
>>
>> Stop calling Turing (or other names). BTW did I mention that I am
>> related to Dijkstra? *NO I did not.* I only post code here.
> [...]
> 
> Using the phrase "Turing-complete", which refers to a well known concept
> in computer science, is not name-dropping.  The fact that the C
> preprocessor isn't Turing-complete may well be relevant to whatever is
> being discussed.

And that is -of course- correct.
(but irrelevant in this flame^H^H^H^H^H discussion)

AvK
0
Moi
12/31/2009 7:46:11 PM
On 2009-12-31, Dennis (Icarus) <nojunkmail@ever.invalid> wrote:
> You can also compile as a native executable. This is useful when you have a 
> C# executable that needs to work with a 32-bit DLL on a 64-bit system.
> If left as MSIL, the executable runs as a 64-bit process, which cannot load 
> the 32-bit DLL.

This suggests that, were one seriously interested in benchmarking, the three
useful cases would be C# as MSIL, C# as native, and C as native.

Actually, the best thing, if you're seriously interested in benchmarking, is
for someone to give you a lollipop.  :P

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/31/2009 8:18:05 PM

spinoza1111 wrote:

> On Dec 31 2009, 8:45 pm, Walter Banks <wal...@bytecraft.com> wrote:
> > spinoza1111wrote:
> > > However, you assume that all C compilers will do the constant division
> > > of the two sizeofs. Even if "standard" compilers do, the functionality
> > > of doing what we can at compile time can't always be trusted in actual
> > > compilers, I believe. In fact, doing constant operations inside
> > > preprocessor macros neatly makes nonsense of any simple explanation of
> > > preprocessing as a straightforward word-processing transformation of
> > > text.
> >
> > Don't confuse two separate translate operations. Macros are pure text
> > transformations. Most compilers separately may choose to evaluate
> > constant operations at compile or code generation time to save code
> > and execution time.
>
> I think you're confused, Walter, but it's understandable, since C is a
> badly-designed language that creates confusion. While it is said that
> "macros are pure text operations", on a bit more thought I realized
> that this statement has never been true.
>
> You see, to evaluate #if statements in C, we need to evaluate constant
> expressions. Might as well do this all the time, or perhaps some of
> the time (evaluate constant expressions only in the #if statement). My
> guess is that compilers vary, and this is one more reason not to use
> C.

You appear to be confusing C macros with conditional compilation.

w..



--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
0
Walter
12/31/2009 9:10:08 PM
On 2009-12-31, Walter Banks <walter@bytecraft.com> wrote:
> You appear to be confusing C macros with conditional compilation.

Thank you!  Now I think I see what he's getting at.

Evaluation of #if is clearly compile time.  Therefore the preprocessor can
do compile-time evaluations.  Therefore when the preprocessor expands
macros which perform computations, obviously, the preprocessor should be
doing compile-time arithmetic!

.... Totally wrong, but it would almost sort of make sense if you didn't
understand the preprocessing phase at all.  (And yes, I'll happily grant
that the macro language layer is sort of wonky and inconsistent with
the rest of the language.  As soon as you've got a time machine, go back
and fix it, 'k?)

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
12/31/2009 9:27:55 PM
On Jan 1, 5:27 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2009-12-31, Walter Banks <wal...@bytecraft.com> wrote:
>
> > You appear to be confusing C macros with conditional compilation.
>
> Thank you!  Now I think I see what he's getting at.
>
> Evaluation of #if is clearly compile time.  Therefore the preprocessor can
> do compile-time evaluations.  Therefore when the preprocessor expands
> macros which perform computations, obviously, the preprocessor should be
> doing compile-time arithmetic!
>
> ... Totally wrong, but it would almost sort of make sense if you didn't
> understand the preprocessing phase at all.  (And yes, I'll happily grant
> that the macro language layer is sort of wonky and inconsistent with
> the rest of the language.  As soon as you've got a time machine, go back
> and fix it, 'k?)
>
> -s
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

I already addressed this and (re) determined what the preprocessor
does, which is pure preprocessing. The compiler does constant folding.
I am using this thread to learn and relearn.

This is a dialog, asshole, not an opportunity for you to appear to be
more qualified than you are by putting down other people. In it,
people have the right to make tentative suggestions and be corrected.

Even Richard Heathfield conducts himself better than you here, because
he has actual, if overly narrow, knowledge and is willing to act
professionally most of the time. He does lie: but it's an indication
of how evil people can become here that you act much worse.

You're completely deluded about bytecodes, unqualified to speak on
them and unwilling to learn. You have a psychology degree and are
apparently a sort of script kiddie, tech writer and gofer who is
pretending to be a programmer by destroying other people.
0
spinoza1111
1/1/2010 3:19:39 AM
On Jan 1, 3:46 am, Moi <r...@invalid.address.org> wrote:
> On Thu, 31 Dec 2009 11:40:05 -0800, Keith Thompson wrote:
> > Moi <r...@invalid.address.org> writes:
> >> On Thu, 31 Dec 2009 09:10:03 -0800, spinoza1111 wrote:
> > [...]
> >>> use conditional macro instructions, and the C preprocessor (perhaps
> >>> fortunately) isn't Turing-complete whereas in BAL you could construct
>
> >> Stop calling Turing (or other names). BTW did I mention that I am
> >> related to Dijkstra? *NO I did not.* I only post code here.
> > [...]
>
> > Using the phrase "Turing-complete", which refers to a well known concept
> > in computer science, is not name-dropping.  The fact that the C
> > preprocessor isn't Turing-complete may well be relevant to whatever is
> > being discussed.
>
> And that is -of course- correct.
> (but irrelevant in this flame^H^H^H^H^H discussion)

You made it a bullying session (not a flame war, since that term
implies that I'm engaged in similar behavior to yours, and I am not).

And, the fact is that the preprocessor concept was simultaneously
stolen, downsized and messed up by the original designers of C
including Dennis Ritchie. Much of C is this kind of vandalism. It's
not popular outside of the USA, especially in the EU, for that reason.
>
> AvK

0
spinoza1111
1/1/2010 3:25:22 AM
On Jan 1, 3:22 am, c...@tiac.net (Richard Harter) wrote:
> On Thu, 31 Dec 2009 09:17:28 -0800 (PST), spinoza1111
> <spinoza1...@yahoo.com> wrote:
> >On Dec 31 2009, 10:57 pm, c...@tiac.net (Richard Harter) wrote:
> >> On Wed, 30 Dec 2009 18:33:04 -0800 (PST), spinoza1111
> >> <spinoza1...@yahoo.com> wrote:
> >> >On Dec 31, 1:43 am, c...@tiac.net (Richard Harter) wrote:
> >> >> On Wed, 30 Dec 2009 08:39:34 -0800 (PST), spinoza1111
> >> >> <spinoza1...@yahoo.com> wrote:
> >> >> >On Dec 30, 9:32 pm, Tom St Denis <t...@iahu.ca> wrote:
> >> >> >> On Dec 30, 8:04 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> >> >> [snip]
> >> >> >> Um, the stack of the threads is where you typically put cheap per-
> >> >> >> thread data.  Otherwise you allocate it off the heap.  In the case of
> >> >> >> the *_r() GNU libc functions they store any transient data in the
> >> >> >> structure you pass it.  That's how they achieve thread safety.
>
> >> >> >It's a clumsy and old-fashioned method, not universally used. It also
> >> >> >has bugola potential.
>
> >> >> >You see, the called routine is telling the caller to supply him with
> >> >> >"scratch paper". This technique is an old dodge. It was a requirement
> >> >> >in IBM 360 BAL (Basic Assembler Language) that the caller provide the
> >> >> >callee with a place to save the 16 "general purpose registers" of the
> >> >> >machine.
>
> >> >> >The problem then and now is what happens if the caller is called
> >> >> >recursively by the callee as it might be in exception handling, and
> >> >> >the callee uses the same structure. It's not supposed to but it can
> >> >> >happen.
>
> >> >> He's not talking about the technique used in BAL etc.  The
> >> >> transient data is contained within a structure that is passed by
> >> >> the caller to the callee.  The space for the structure is on the
> >> >> stack.  Recursion is permitted.
>
> >> >If control "comes back" to the caller who has stacked the struct and
> >> >the caller recalls the routine in question with the same struct, this
> >> >will break.
>
> >> This isn't right; I dare say the fault is mine for being unclear.
> >> C uses call by value; arguments are copied onto the stack.  The
> >> upshot is that callee operates on copies of the original
> >> variables.  This is true both of elemental values, i.e., ints,
> >> floats, etc, and composite values, i.e., structs.
>
> >> So, when the calling sequence contains a struct a copy of the
> >> struct is placed on the stack.  The callee does not have access
> >> to the caller's struct.  To illustrate suppose that foo calls bar
> >> and bar calls foo, and that foo passes a struct to bar which in
> >> turn passes the struct it received to foo.  There will be two
> >> copies of the struct on the stack, one created when foo called
> >> bar, and one created when bar called foo.
>
> >Correct. And foo sees bar's scratchpad memory, which is a security
> >exposure. It creates opportunities for fun and games.
>
> >I'm foo. I have to pass bar this struct:
>
> >struct { int i; char * workarea; }
>
> >i is data. workarea is an area which bar needs to do its work. bar
> >puts customer passwords in workarea. Control returns to foo in an
> >error handler which is passed the struct. foo can now see the
> >passwords.
>
> >Because you violate encapsulation, you have a security hole, right?
> >Better to use an OO language in which each invocation of the stateful
> >foo object gets as much memory as it needs.
>
> >Let me know if I am missing anything.
>
> As an initial remark, your example is not the kind of thing that
> St. Denis was discussing.  The "per-thread data" is data that is
> passed down to routines in a thread.  The top level routines in
> the thread get passed a copy of the per-thread data struct; in
> turn they pass copies down to the routines they call.  Either the
> struct has no pointers at all or, if it does have pointers, they
> are opaque pointers.  There is no path for passing data up.
>
> As a second initial remark, your example should be qualified.
> Generally speaking, functions that need scratch space provide it
> internally.  For the sake of argument, let's suppose that there
> is some good reason for supplying scratch space.
>
> You don't "have to pass bar this struct" - there are a number of
> alternatives, perhaps too many.  Here are some:
>
> (1) If we know the size of work area we can make it an array.
> The struct is:
>
> struct stuff {
>     int i;
>     char workarea[size];
> }
>
> Foo and bar will have separate copies of the workarea.  The
> upside is that there is no security hole.  The down side is that
> there will be two copies on the stack.
>
> (2) We can malloc the space in the calling sequence.  The
> following won't pass muster in a code review but it has the
> general idea:
>
> struct stuff {int i; void * workarea;} barbill;
>
> barbill.workarea = malloc(size);
>
> bar(barbill);
>
> In this version, bar frees the workarea.  The upside is that
> there is only one copy of the work area.  The downside is that
> bar must free the work area space.
>
> (3) Bar can zero out the space when it is done with it.
>
> (4) Foo does not call bar directly; instead it calls an interface
> routine with a handle to bar as an argument; the interface
> routine calls bar and takes care of zeroing out the space.

Yes, but now you may not be seeing that you're fixing a fix,
responding to the issue of security exposure (which is a well-known
and unsolvable problem in C starting with sprintf()) with more code
which may or may not be used in practice.
>
> Usw.
>
> Richard Harter, c...@tiac.net
> http://home.tiac.net/~cri, http://www.varinoma.com
> Infinity is one of those things that keep philosophers busy when they
> could be more profitably spending their time weeding their garden.

0
spinoza1111
1/1/2010 3:33:02 AM
On Jan 1, 5:10 am, Walter Banks <wal...@bytecraft.com> wrote:
> spinoza1111 wrote:
> > On Dec 31 2009, 8:45 pm, Walter Banks <wal...@bytecraft.com> wrote:
> > > spinoza1111 wrote:
> > > > However, you assume that all C compilers will do the constant division
> > > > of the two sizeofs. Even if "standard" compilers do, the functionality
> > > > of doing what we can at compile time can't always be trusted in actual
> > > > compilers, I believe. In fact, doing constant operations inside
> > > > preprocessor macros neatly makes nonsense of any simple explanation of
> > > > preprocessing as a straightforward word-processing transformation of
> > > > text.
>
> > > Don't confuse two separate translate operations. Macros are pure text
> > > transformations. Most compilers separately may choose to evaluate
> > > constant operations at compile or code generation time to save code
> > > and execution time.
>
> > I think you're confused, Walter, but it's understandable, since C is a
> > badly-designed language that creates confusion. While it is said that
> > "macros are pure text operations", on a bit more thought I realized
> > that this statement has never been true.
>
> > You see, to evaluate #if statements in C, we need to evaluate constant
> > expressions. Might as well do this all the time, or perhaps some of
> > the time (evaluate constant expressions only in the #if statement). My
> > guess is that compilers vary, and this is one more reason not to use
> > C.
>
> You appear to be confusing C macros with conditional compilation.

(Sigh) C macros provide an incomplete form of conditional compilation
(#if and #ifdef). But a C preprocessor statement cannot iterate or go
to. But even without iterate and go to it has created a lot of damage.

What the original poster's trick shows: that a combination of constant
folding and the preprocessor in effect negates one useful thing to say
about the preprocessor: that IN EFFECT it makes a pure textual
substitution. This effect is negated by a "standard" which gives the
compiler permission to do constant folding in a visible way up front
(resulting in my example in an unwanted error message when I divide by
zero on purpose in a simple program to test an error handler). To see
the intent original poster's  method for determining the size of a
table, you have to forget what you learned when you read elsethread,
that "the C preprocessor does a pure textual replacement".

The *telos* of a system increasingly manned by unqualified corporate
types like Seebach is to destroy the ability to speak plain English
about C. What irritates them isn't "mistakes", since Seebach spoke
mistakenly about .Net and Java bytecode "interpretation" without
bothering to look up JIT compilation.

It's the use of language itself.

The comparison to religious fundamentalism which I've made elsethread
is apropos, because religious fundamentalists hate the metaphors of
speech and destroy religion when (as in Plymouth Bay colony in
Massachusetts in the 17th century) they prefer to forbid women Bible
discussion groups, and persecute their members as witches or (as
today) Islamic and Jewish feminists horrify the self-appointed male
thought leaders.

Seebach dislikes Schildt because he used language to talk about C in a
way Seebach could not: yet Seebach makes even more egregious errors
("the 'heap' is a DOS term" and "bytecodes are interpreted in .Net and
Java").

>
> w..
>
> --- news://freenews.netfront.net/ - complaints: n...@netfront.net ---

0
spinoza1111
1/1/2010 3:51:52 AM
spinoza1111 wrote:

<snip>

> I already addressed this and (re) determined what the preprocessor
> does, which is pure preprocessing.

Actually, I don't think you did. What you appear to have done was guess 
what the preprocessor did, perform the textual substitution yourself, 
and then test the result. That is, it seems to me that your hypothesis 
was: given an input X, and processes P() and C() (preprocessing and 
compilation), you observed Z = C(P(X)), hypothesised Y = P(X), wrote Y 
yourself, and found that Z = C(Y). But you did not demonstrate that you 
had proved the hypothesis Y = P(X), i.e. you did not demonstrate that 
you had determined what the preprocessor does.

Fortunately, this is quite easy to do with most modern compilers.

Here's your code again:

#define FOO (1/0)

int main(void)
{
     int i; i = FOO;
     return 0;
}

Here's the output from Borland's C compiler:

Warning W8082 delme.c 5: Division by zero in function main
Warning W8004 delme.c 7: 'i' is assigned a value that is never used in 
function main

So far, we've just duplicated your Visual Studio results (except that 
Borland produces an executable file, which apparently your version of VS 
doesn't, at least using the options you chose).

So the next step is to find out what the preprocessor actually does. One 
way to do this is to tell the compiler to write the preprocessor output 
to a file. This is trivial to do with gcc (man gcc to find out how, but 
without looking it up I think it's -E or -P), and with Visual Studio 
(use -EP). Right now I'm on a gcc-less VS-less laptop, however, and the 
Borland compiler I do have installed lacks a preprocessor output dump 
option. Feel free to do the experiment yourself, however.

> The compiler does constant folding.
> I am using this thread to learn and relearn.

Then you may find it interesting to learn that the Borland compiler, 
when given this program:

#include <stdio.h>

#define FOO (1/0)

int main(void)
{
     int i; i = FOO;
     printf("%d\n", i);
     return 0;
}

produced the warnings quoted above, and then produced 1 as the output 
from running the compiled program. Any answer would be legal (since the 
behaviour of the program is undefined). I changed 1/0 to 6/0 and the 
output changed to 6. It seems that bcc32 just ignores the division 
completely, which it is entitled to do (because dividing by zero invokes 
undefined behaviour, so any result is legal).

<nonsense snipped>

> people have the right to make tentative suggestions and be corrected.

Or even non-tentative suggestions, yes.

> Even Richard Heathfield conducts himself better than you here,

Personally I think Seebs conducts himself perfectly well.

 > because he has actual, if overly narrow, knowledge

In comp.lang.c, I only know C. That's kind of the point of topicality. I 
have other interests and other skills, but they are not relevant here, 
so I don't (or at least rarely) bring them up.

 > and is willing to act professionally most of the time.

There is no professionalism requirement for subscribing to, or 
contributing to, comp.lang.c, and therefore the issue of professionalism 
is at best a side issue. Having said that, it seems to me that if you're 
going to criticise comp.lang.c contributors for a lack of 
professionalism, Seebs would not be the obvious target.

 > He does lie:

Do you know anyone who never tells lies? But I try hard to keep my 
Usenet articles honest and accurate. Sometimes I make mistakes, which 
you have been known to interpret as lies, but stating a falsehood 
through mistake or ignorance is very different to stating a falsehood 
deliberately, with the intent of deceiving another. You, on the other 
hand, have been known to set out to deceive others deliberately. If you 
sue me for libel, I can easily prove this in court, using archive 
material (if I can *find* it, but hey, I'd be motivated, right?). And 
once I've done so, you'll find it hard to convince any judge not to 
throw your libel case out of court. Incidentally, I still haven't 
received a writ. What's the hold-up?

> but it's an indication
> of how evil people can become here that you act much worse.

No, it isn't, because no, he doesn't.

> You're completely deluded about bytecodes, unqualified to speak on
> them and unwilling to learn. You have a psychology degree and are
> apparently a sort of script kiddie, tech writer and gofer who is
> pretending to be a programmer by destroying other people.

Actually, Seebs is a programmer of no mean repute, and his employers no 
doubt count themselves fortunate to have him on the team. In fact, they 
are doubly fortunate, since they also have Chris Torek on board.

His job is irrelevant here, however, as is his degree and your opinion 
about his personality. What is relevant here is C, and Peter Seebach's 
knowledge of C is considerable.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
1/1/2010 4:06:46 AM
spinoza1111 wrote:

<snip>

> And, the fact is that the preprocessor concept was simultaneously
> stolen, downsized and messed up by the original designers of C
> including Dennis Ritchie.

What is it with you and Dennis Ritchie? Did he refuse you an autograph 
or something?

 > Much of C is this kind of vandalism. It's
> not popular outside of the USA esp. in the EU for that reason.

Actually, C is still very popular in the UK (which, heaven help us, is 
in the EU). I wouldn't know about the rest of the EU, but I have no 
reason to suspect it's unpopular abroad.

Anyway, if you don't like C, nobody is forcing you to use it.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
1/1/2010 4:20:01 AM
spinoza1111 wrote:

<snip>

> Yes, but now you may not be seeing that you're fixing a fix,
> responding to the issue of security exposure (which is a well-known
> and unsolvable problem in C starting with sprintf()) with more code
> which may or may not be used in practice.

It is true that sprintf can be used unsafely. It is also true that a 
teaspoon can be used unsafely. Nevertheless, the problem of how to use a 
teaspoon safely is not a difficult one. Neither is the problem of how to 
use sprintf safely.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
1/1/2010 4:22:27 AM
spinoza1111 wrote:
> On Jan 1, 5:10 am, Walter Banks <wal...@bytecraft.com> wrote:

<snip>

>> You appear to be confusing C macros with conditional compilation.
> 
> (Sigh) C macros provide an incomplete form of conditional compilation
> (#if and #ifdef).

You appear to be confusing C macros with conditional compilation. 
Neither #if nor #ifdef is a C macro. They are preprocessor directives.

 > But a C preprocessor statement cannot iterate or go
> to. But even without iterate and go to it has created a lot of damage.

If your claim is that you have done a lot of damage using the C 
preprocessor, I would not dream of disputing it. Nevertheless, some 
people seem to manage to use it safely. We call these people "C 
programmers".

> What the original poster's trick shows:

It's not a trick. It's a technique. It's even demonstrated in the 
language spec.

 > that a combination of constant
> folding and the preprocessor in effect negates one useful thing to say
> about the preprocessor: that IN EFFECT it makes a pure textual
> substitution.

Look, it's really easy. If you do this:

#define X y

then, while that directive is in effect, every occurrence of X that is 
not in a string or comment is replaced with y. That's easy, right?

And if you do this:

#define F(a, b) a foo b bar a baz b

then, while that directive is in effect, every occurrence of this:

F(x, y)

is replaced with this:

x foo y bar x baz y

This is not rocket science.

> This effect is negated by a "standard" which gives the
> compiler permission to do constant folding in a visible way up front
> (resulting in my example in an unwanted error message when I divide by
> zero on purpose in a simple program to test an error handler). To see
the intent of the original poster's method for determining the size of a
> table, you have to forget what you learned when you read elsethread,
> that "the C preprocessor does a pure textual replacement".

What are you going on about? The definition was along these lines:

#define COUNTOF(array) sizeof array / sizeof array[0]

modulo the odd parenthesis or so.

Your claim was that the invariant was calculated within the loop 
control, right?

Now, all the preprocessor does is change:

for(i = 0; i < COUNTOF(zog); i++)

to

for(i = 0; i < sizeof zog / sizeof zog[0]; i++)

The calculation of sizeof zog / sizeof zog[0] is NOTHING TO DO WITH the 
preprocessor. The preprocessor does in fact do what you call a pure 
textual replacement. The compiler sees the division operator and, in any 
sensible modern compiler, will replace the constA / constB with the 
result constC. That is nothing to do with the preprocessor whatsoever. 
It is astonishing that you still seem to think it is.

<nonsense snipped>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
1/1/2010 4:32:53 AM
On Jan 1, 12:32 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:
> > On Jan 1, 5:10 am, Walter Banks <wal...@bytecraft.com> wrote:
>
> <snip>
>
> >> You appear to be confusing C macros with conditional compilation.
>
> > (Sigh) C macros provide an incomplete form of conditional compilation
> > (#if and #ifdef).
>
> You appear to be confusing C macros with conditional compilation.
> Neither #if nor #ifdef is a C macro. They are preprocessor directives.

You don't know macros. You see, "macros" were invented long before C.
Their first use was for textual substitution in assembler programs,
but then it was noticed that at the same time and in the same way,
conditional and looping macros could provide more apparent power.

Buy my book to read about this prehistory. Or die.

For this reason, my toolkit of "macros" which generated the tables of
the first cellular mobile OS included conditional and looping
operations to generate Z-80 tables accurately.

Likewise, the code I maintained at Princeton in PL/I that used textual
substitution, conditional compilation and macro-time do while was
"macro" code.

Get more experience before you speak.

>
>  > But a C preprocessor statement cannot iterate or go
>
> > to. But even without iterate and go to it has created a lot of damage.
>
> If your claim is that you have done a lot of damage using the C
> preprocessor, I would not dream of disputing it. Nevertheless, some

I have done very little, since at the time I was (ta da) assisting
Nash with C, I noticed that it was less adequate for real development
than Visual Basic or Rexx.

> people seem to manage to use it safely. We call these people "C
> programmers".
>
> > What the original poster's trick shows:
>
> It's not a trick. It's a technique. It's even demonstrated in the
> language spec.
>
>  > that a combination
>
> > folding and the preprocessor in effect negates one useful thing to say
> > about the preprocessor: that IN EFFECT it makes a pure textual
> > substitution.
>
> Look, it's really easy. If you do this:
>
> #define X y
>
> then, while that directive is in effect, every occurrence of X that is
> not in a string or comment is replaced with y. That's easy, right?
>
> And if you do this:
>
> #define F(a, b) a foo b bar a baz b
>
> then, while that directive is in effect, every occurrence of this:
>
> F(x, y)
>
> is replaced with this:
>
> x foo y bar x baz y
>
> This is not rocket science.

And if you do this

#define FOO (1/0)

you get a compiler message even though you want to test an error
handler. And so C is powerful in what way?

>
> > This effect is negated by a "standard" which gives the
> > compiler permission to do constant folding in a visible way up front
> > (resulting in my example in an unwanted error message when I divide by
> > zero on purpose in a simple program to test an error handler). To see
> > the intent of the original poster's method for determining the size of a
> > table, you have to forget what you learned when you read elsethread,
> > that "the C preprocessor does a pure textual replacement".
>
> What are you going on about? The definition was along these lines:
>
> #define COUNTOF(array) sizeof array / sizeof array[0]
>
> modulo the odd parenthesis or so.
>
> Your claim was that the invariant was calculated within the loop
> control, right?

Revoked that. Admitted I was wrong, as you need to admit you were
wrong about the much more serious matter of my publications on
comp.risks.
>
> Now, all the preprocessor does is change:
>
> for(i = 0; i < COUNTOF(zog); i++)
>
> to
>
> for(i = 0; i < sizeof zog / sizeof zog[0]; i++)
>
> The calculation of sizeof zog / sizeof zog[0] is NOTHING TO DO WITH the
> preprocessor. The preprocessor does in fact do what you call a pure
> textual replacement. The compiler sees the division operator and, in any
> sensible modern compiler, will replace the constA / constB with the

Wrong. Constant folding is best done behind the scenes in the internal
DAG of the compiler, and must be turnable offable for any program that
desires to see a known calculation being executed at run time. That is
how I implemented it in the compiler I wrote for "Build Your Own
Goddamn .Net Language and Compiler".

> result constC. That is nothing to do with the preprocessor whatsoever.

The confusion that results has much to do with the preprocessor
interacting with the inappropriately visible constant-folding bug-
feature.

The result is that the programming language makes intelligent people
look stupid, and small-minded clerks look smart. Plato may have had a
genuine beef against the Sophists and their writing after all,
although I think Plato is full of shit.

Seriously: the intent of high level programming languages was to
empower intelligent grown ups to use computers, not to empower legions
of stunted and nasty little clerks who can't see the forest for the
trees and think it's the height of wisdom to maliciously destroy
people.

> It is astonishing that you still seem to think it is.
>
> <nonsense snipped>
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

spinoza1111
1/1/2010 4:47:01 AM
On 2010-01-01, Richard Heathfield <rjh@see.sig.invalid> wrote:
> There is no professionalism requirement for subscribing to, or 
> contributing to, comp.lang.c, and therefore the issue of professionalism 
> is at best a side issue. Having said that, it seems to me that if you're 
> going to criticise comp.lang.c contributors for a lack of 
> professionalism, Seebs would not be the obvious target.

It would never have occurred to me to seek professionalism.  No one's
paying me to read this newsgroup.  My employer can only afford so many
of my opinions.  :)

>[spinny sez]
>> You're completely deluded about bytecodes, unqualified to speak on
>> them and unwilling to learn.

I have no idea on what basis he claims this.

>> You have a psychology degree and are

Hey, look!  He's stopped mistaking me for Peter Seibel!

>> apparently a sort of script kiddie,

lolwut

>> tech writer

Yup.

>> and gofer who is

I still think this is the awesomest attempt at a putdown yet.  Part of
my job involves taking alleged toolchain bug reports and trying to make an
initial determination as to whether it's likely enough to be a bug to
bother producing or cleaning up a reproducer and forwarding it to the
kind folks at Code Sourcery.  That's hardly the whole of my job, but
yeah, I guess in that part I'm mostly just a gofer.

I just don't see why I'm supposed to feel bad about this.  I'm acting as
a gobetween to hand stuff to a few of the most prolific contributors
to the GNU toolchains.  These people know the compiler far better than
I am ever likely to.  It is more efficient to hand the data off to the
experts.  Why should I feel bad about this?  I get plenty of programming
time; sometimes it's nice to not have to do that.

>> pretending to be a programmer by destroying other people.

I don't think I've tried to destroy someone since about 5th grade, and
even then I don't think I had any clue what was going on.  I have no
interest in destroying, or even harming, Schildt.  I wish to protect
people from deeply flawed material which is demonstrably likely to
cause them serious difficulties in learning to program.  If Schildt
wants to take advantage of these criticisms to correct some of his
material (as he did for some in the 4th edition of C:TCR), great!  He
can improve his understanding of C, and/or his writing, and be a better
person.

> Actually, Seebs is a programmer of no mean repute, and his employers no 
> doubt count themselves fortunate to have him on the team. In fact, they 
> are doubly fortunate, since they also have Chris Torek on board.

Ayup.  And a fair number of other people who are pretty amazing.  I'm
probably our domain expert on C, but only by a pretty narrow margin.
Mostly I'm useful because I can ask useful questions or give likely
guesses on pretty much anything vaguely computer related whether or
not I've ever heard of it before.  I am a very effective Second Pair
Of Eyes, and a bit better than the average rubber duck.

-s
p.s.:  Happy new year!
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
Seebs
1/1/2010 5:05:56 AM
spinoza1111 wrote:
> On Jan 1, 12:32 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111 wrote:
>>> On Jan 1, 5:10 am, Walter Banks <wal...@bytecraft.com> wrote:
>> <snip>
>>
>>>> You appear to be confusing C macros with conditional compilation.
>>> (Sigh) C macros provide an incomplete form of conditional compilation
>>> (#if and #ifdef).
>> You appear to be confusing C macros with conditional compilation.
>> Neither #if nor #ifdef is a C macro. They are preprocessor directives.
> 
> You don't know macros.

I know C macros. You appear to be confusing C macros with conditional 
compilation. Get a C book, and read up on the preprocessor.

<completely crass nonsense snipped>

> And if you do this
> 
> #define FOO (1/0)
> 
> you get a compiler message even though you want to test an error
> handler. And so C is powerful in what way?

As far as I can recall without a copy of the Standard to hand, the 
diagnostic message is not required by the Standard, since no constraint 
is violated. It's a quality of implementation issue. A good 
implementation will warn against dividing by zero. You appear to have an 
implementation which, in that respect at least, is good.

If the implementation did not produce the message, you'd complain for 
that reason instead.

<snip>

>> Your claim was that the invariant was calculated within the loop
>> control, right?
> 
> Revoked that.

Yes, but you still seem to think that this is *about the C preprocessor*.

 > Admitted I was wrong, as you need to admit you were
> wrong about the much more serious matter of my publications on
> comp.risks.

(a) It's actually not a very serious matter at all, and (b) I already 
dealt with it. Learn to read.

<snip>

>> The calculation of sizeof zog / sizeof zog[0] is NOTHING TO DO WITH the
>> preprocessor. The preprocessor does in fact do what you call a pure
>> textual replacement. The compiler sees the division operator and, in any
>> sensible modern compiler, will replace the constA / constB with the
> 
> Wrong.

No.

> Constant folding is best done behind the scenes in the internal
> DAG of the compiler,

The precise details of what is done and when are up to the 
implementation. The point is that it's (conceptually speaking) done at 
TP7, and not before. So it's nothing to do with the preprocessor.

 > and must be turnable offable for any program that
> desires to see a known calculation being executed at run time.

The C Standard does not require this as far as I know. If you know 
different, please provide chapter and verse.

<nonsense snipped>

>> result constC. That is nothing to do with the preprocessor whatsoever.
> 
> The confusion that results

Is yours.

 > has much to do with the preprocessor
> interacting with the inappropriately visible constant-folding bug-
> feature.

No. It isn't a bug, it's nothing to do with the preprocessor, and 
there's nothing inappropriate about it.

> 
> The result is that the programming language makes intelligent people
> look stupid, and small-minded clerks look smart.

The programming language does no such thing. In this thread, you have 
made yourself look stupid by making bizarre claims about a subject you 
don't know very well, but that's your fault, not C's fault. If you 
choose to call those who know more than you "small-minded clerks", 
that's up to you, but your words effectively translate to "spinoza1111 
considers himself to be even more ignorant than a small-minded clerk, 
and therefore his views are of no account".

<snip>

> Seriously: the intent of high level programming languages was to
> empower intelligent grown ups to use computers, not to empower legions
> of stunted and nasty little clerks who can't see the forest for the
> trees and think it's the height of wisdom to maliciously destroy
> people.

I had no idea you were stunted.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
1/1/2010 6:37:27 AM
"spinoza1111" <spinoza1111@yahoo.com> wrote in message
news:30f65bf9-b306-4575-a31e-f1869a0edc3a@21g2000yqj.googlegroups.com...
On Dec 28, 5:10 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2009-12-27, bartc <ba...@freeuk.com> wrote:
>C is NOT more "efficient" than C Sharp. That is not even a coherent
>thing to say.

To see which is more "efficient", you have to look at array operations
(or operations between ints, doubles, etc.), for example

int i, a[10000], b[10000];

for (i = 0; i < 10000; ++i) {
    a[i] = i;
    b[i] = a[i];
}

the same algorithm in whatever language you want.




io_x
1/1/2010 8:16:16 AM
On Jan 1, 1:05 pm, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-01-01, Richard Heathfield <r...@see.sig.invalid> wrote:
>
> > There is no professionalism requirement for subscribing to, or
> > contributing to, comp.lang.c, and therefore the issue of professionalism

No, but there's such a thing as respect and decency and speaking
within your area of certified professional competence, especially when
your conduct harms another.

> > is at best a side issue. Having said that, it seems to me that if you're
> > going to criticise comp.lang.c contributors for a lack of
> > professionalism, Seebs would not be the obvious target.
>
> It would never have occurred to me to seek professionalism.  No one's
> paying me to read this newsgroup.  My employer can only afford so many
> of my opinions.  :)
>
> >[spinny sez]
> >> You're completely deluded about bytecodes, unqualified to speak on
> >> them and unwilling to learn.
>
> I have no idea on what basis he claims this.
>

Your post in which you merely speculated that Java and .Net bytecodes
were "interpreted".

> >> You have a psychology degree and are
>
> Hey, look!  He's stopped mistaking me for Peter Seibel!
>
> >> apparently a sort of script kiddie,
>
> lolwut
>
> >> tech writer
>
> Yup.
>
> >> and gofer who is
>
> I still think this is the awesomest attempt at a putdown yet.  Part of
> my job involves taking alleged toolchain bug reports and trying to make an
> initial determination as to whether it's likely enough to be a bug to
> bother producing or cleaning up a reproducer and forwarding it to the
> kind folks at Code Sourcery.  That's hardly the whole of my job, but
> yeah, I guess in that part I'm mostly just a gofer.

Case closed. Herb is a programmer who wrote a complete compiler and
interpreter for C, and you were unqualified to attack him. I need you
to remove "C: The Complete Nonsense" and replace it with an apology
for speaking far beyond your area of competence.

>
> I just don't see why I'm supposed to feel bad about this.  I'm acting as
> a gobetween to hand stuff to a few of the most prolific contributors
> to the GNU toolchains.  These people know the compiler far better than
> I am ever likely to.  It is more efficient to hand the data off to the
> experts.  Why should I feel bad about this?  I get plenty of programming
> time; sometimes it's nice to not have to do that.
>
> >> pretending to be a programmer by destroying other people.
>
> I don't think I've tried to destroy someone since about 5th grade, and
> even then I don't think I had any clue what was going on.  I have no
> interest in destroying, or even harming, Schildt.  I wish to protect

But you  did.

> people from deeply flawed material which is demonstrably likely to
> cause them serious difficulties in learning to program.

How would you know? You have said that at this time, you analyze bug
reports. This is a semi-clerical activity, not programming. Therefore
YOU need to learn programming (including how bytecodes are processed
in .Net and Java and the widespread use of the heap outside DOS)
before passing judgement on real programmers.

> If Schildt
> wants to take advantage of these criticisms to correct some of his
> material (as he did for some in the 4th edition of C:TCR), great!  He
> can improve his understanding of C, and/or his writing, and be a better
> person.
>
> > Actually, Seebs is a programmer of no mean repute, and his employers no
> > doubt count themselves fortunate to have him on the team. In fact, they
> > are doubly fortunate, since they also have Chris Torek on board.
>
> Ayup.  And a fair number of other people who are pretty amazing.  I'm
> probably our domain expert on C, but only by a pretty narrow margin.

A strange one indeed, who doesn't use it, only judges other people
based on a pretty virginal understanding of the theory and the
standard.

> Mostly I'm useful because I can ask useful questions or give likely
> guesses on pretty much anything vaguely computer related whether or
> not I've ever heard of it before.  I am a very effective Second Pair
> Of Eyes, and a bit better than the average rubber duck.

A false humility coupled with the vicious way in which you have tried
to destroy the reputation of Herb Schildt. Ugly.

But, risk free in the Korporation where the real producers have been
laid off.

>
> -s
> p.s.:  Happy new year!
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

spinoza1111 (3246)
1/1/2010 1:41:40 PM
On Jan 1, 2:37 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:
> > On Jan 1, 12:32 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111 wrote:
> >>> On Jan 1, 5:10 am, Walter Banks <wal...@bytecraft.com> wrote:
> >> <snip>
>
> >>>> You appear to be confusing C macros with conditional compilation.
> >>> (Sigh) C macros provide an incomplete form of conditional compilation
> >>> (#if and #ifdef).
> >> You appear to be confusing C macros with conditional compilation.
> >> Neither #if nor #ifdef is a C macro. They are preprocessor directives.
>
> > You don't know macros.
>
> I know C macros. You appear to be confusing C macros with conditional
> compilation. Get a C book, and read up on the preprocessor.

I've already addressed this. You want your terminology, which you
learned laboriously and by rote, to be used so that you're right. But
as I have already explained, macros and conditional compilation are
the same entity done at the same time, originating as I have said in
assembler language and PL/1.
>
> <completely crass nonsense snipped>
>
> > And if you do this
>
> > #define FOO (1/0)
>
> > you get a compiler message even though you want to test an error
> > handler. And so C is powerful in what way?
>
> As far as I can recall without a copy of the Standard to hand, the
> diagnostic message is not required by the Standard, since no constraint
> is violated. It's a quality of implementation issue. A good
> implementation will warn against dividing by zero. You appear to have an
> implementation which, in that respect at least, is good.
>
> If the implementation did not produce the message, you'd complain for
> that reason instead.

I would not. C confuses constant folding with macro processing.
>
> <snip>
>
> >> Your claim was that the invariant was calculated within the loop
> >> control, right?
>
> > Revoked that.
>
> Yes, but you still seem to think that this is *about the C preprocessor*.
>
>  > Admitted I was wrong, as you need to admit you were
>
> > wrong about the much more serious matter of my publications on
> > comp.risks.
>
> (a) It's actually not a very serious matter at all, and (b) I already
> dealt with it. Learn to read.
>
> <snip>
>
> >> The calculation of sizeof zog / sizeof zog[0] is NOTHING TO DO WITH the
> >> preprocessor. The preprocessor does in fact do what you call a pure
> >> textual replacement. The compiler sees the division operator and, in any
> >> sensible modern compiler, will replace the constA / constB with the
>
> > Wrong.
>
> No.
>
> > Constant folding is best done behind the scenes in the internal
> > DAG of the compiler,
>
> The precise details of what is done and when are up to the
> implementation. The point is that it's (conceptually speaking) done at
> TP7, and not before. So it's nothing to do with the preprocessor.
>
>  > and must be turnable offable for any program that
>
> > desires to see a known calculation being executed at run time.
>
> The C Standard does not require this as far as I know. If you know
> different, please provide chapter and verse.

Quit changing the subject from the collective crappy behavior of
actual C compilers to the standard.
>
> <nonsense snipped>
>
> >> result constC. That is nothing to do with the preprocessor whatsoever.
>
> > The confusion that results
>
> Is yours.
>
>  > has much to do with the preprocessor
>
> > interacting with the inappropriately visible constant-folding bug-
> > feature.
>
> No. It isn't a bug, it's nothing to do with the preprocessor, and
> there's nothing inappropriate about it.
>
>
>
> > The result is that the programming language makes intelligent people
> > look stupid, and small-minded clerks look smart.
>
> The programming language does no such thing. In this thread, you have
> made yourself look stupid by making bizarre claims about a subject you

No, I've cleared a matter up by asking questions and being willing to
be wrong. You are making a mess of the clarity that resulted.

> don't know very well, but that's your fault, not C's fault. If you
> choose to call those who know more than you "small-minded clerks",

I do, because what you know doesn't make you effective programmers. It
makes you into Korporate slaveys.

> that's up to you, but your words effectively translate to "spinoza1111
> considers himself to be even more ignorant than a small-minded clerk,
> and therefore his views are of no account".
>
> <snip>
>
> > Seriously: the intent of high level programming languages was to
> > empower intelligent grown ups to use computers, not to empower legions
> > of stunted and nasty little clerks who can't see the forest for the
> > trees and think it's the height of wisdom to maliciously destroy
> > people.
>
> I had no idea you were stunted.
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

spinoza1111 (3246)
1/1/2010 1:47:00 PM
Richard Heathfield wrote:
> spinoza1111 wrote:
> 
> <snip>
> 
>> You and moi have shown me that this could be a useful coding
>> technique.
> 
> In practice, in real code, it's hardly used at all, except perhaps in 
> environments where malloc & co have been banned - i.e. where a so-called 
> (and mis-named) "safe" subset of C is used.

Banning *alloc and recursion does have advantages. You have a provable 
upper bound on memory usage, so you know you won't run out of memory. If 
you need to do clever malloc-like things like allocating memory out of a 
static array you can tune it so that you won't suffer from memory 
fragmentation (having a pool of objects of the correct size) etc.

Of course, not all problems can really be solved without something like 
malloc, but the environments where malloc is banned tend to be ones 
where the problems to be solved don't actually require it.
-- 
Flash Gordon
smap (838)
1/1/2010 1:56:40 PM
On 31 Dec 2009, 18:36, Moi <r...@invalid.address.org> wrote:

> Stop calling Turing (or other names). BTW did I mention that I am related
> to Dijkstra? *NO I did not.* I only post code here.

I met Babbage's great(n)-son once
1/1/2010 2:51:40 PM
On 1 Jan, 04:20, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:

<snip>

> > [C is]
> > not popular outside of the USA esp. in the EU for that reason.
>
> Actually, C is still very popular in the UK (which, heaven help us, is
> in the EU). I wouldn't know about the rest of the EU, but I have no
> reason to suspect it's unpopular abroad.

It's used in Italy
1/1/2010 2:57:00 PM
spinoza1111 wrote:

<snip>

> Case closed. Herb is a programmer who wrote a complete compiler and
> interpreter for C,

It wasn't a compiler, and it wasn't complete.

 > and you were unqualified to attack him.

Seebs is perfectly qualified to point out Schildt's mistakes, however, 
which is precisely what he did.

 > I need you
> to remove "C: The Complete Nonsense" and replace it with an apology
> for speaking far beyond your area of competence.

If you can find any error of fact on the page, I'm sure Seebs will be 
delighted to correct it.

<nonsense snipped>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
rjh (10791)
1/1/2010 2:59:50 PM
spinoza1111 wrote:
> On Jan 1, 2:37 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111 wrote:
>>> On Jan 1, 12:32 pm, Richard Heathfield <r...@see.sig.invalid> wrote:

<snip>

>>>> Neither #if nor #ifdef is a C macro. They are preprocessor directives.
>>> You don't know macros.
>> I know C macros. You appear to be confusing C macros with conditional
>> compilation. Get a C book, and read up on the preprocessor.
> 
> I've already addressed this.

And yet you remain confused on the difference between macros and 
conditional preprocessing directives.

<snip>

>> If the implementation did not produce the message, you'd complain for
>> that reason instead.
> 
> I would not.

You have a track record of changing your criticism as your understanding 
of reality changes - for example, you complained that Peter Seebach was 
too academic, and now you complain that he isn't academic enough.

> C confuses constant folding with macro processing.

No, but /you/ do. The macro processing happens during preprocessing. The 
constant folding happens during TP7 (translation). They are totally 
separate. It is you who confused them.

>> The C Standard does not require this as far as I know. If you know
>> different, please provide chapter and verse.
> 
> Quit changing the subject from the collective crappy behavior of
> actual C compilers to the standard.

Firstly, you have yet to show why the behaviour of your Microsoft 
compiler is "crappy". Secondly, I'm not changing the subject. I'm 
showing that the compiler is not breaking the Standard's rules. If you 
want to show that the Microsoft compiler is crappy, fine, but you 
haven't done so yet.

>>> The result is that the programming language makes intelligent people
>>> look stupid, and small-minded clerks look smart.
>> The programming language does no such thing. In this thread, you have
>> made yourself look stupid by making bizarre claims about a subject you
> 
> No, I've cleared a matter up by asking questions and being willing to
> be wrong.

The matter wasn't unclear to anyone except you.

 > You are making a mess of the clarity that resulted.

That you still confuse macro processing with constant folding (above) 
suggests that you are not yet as clear as you think.

<snip>

>> If you
>> choose to call those who know more than you "small-minded clerks",
> 
> I do,

...then it follows that you know less than a small-minded clerk, by your
own admission.

> because what you know doesn't make you effective programmers. It
> makes you into Korporate slaveys.

It is not ignorance that makes effective programmers, but knowledge and 
skill. Your shunning of knowledge can never make you more effective.

<snip>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
rjh (10791)
1/1/2010 3:16:42 PM
On 2010-01-01, Richard Heathfield <rjh@see.sig.invalid> wrote:
> If you can find any error of fact on the page, I'm sure Seebs will be 
> delighted to correct it.

Indeed, even though he's made it clear that he can't, I did actually
pick up the 4th edition, and will eventually be shuffling things around
to replace it with a more recent criticism.

Note that the 4th edition fixes several things, but continues to repeatedly
and totally misunderstand feof().

-s
-- 
Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
usenet-nospam (2309)
1/1/2010 5:06:03 PM
Seebs wrote:
> On 2010-01-01, Richard Heathfield <rjh@see.sig.invalid> wrote:
>> If you can find any error of fact on the page, I'm sure Seebs will be 
>> delighted to correct it.
> 
> Indeed, even though he's made it clear that he can't, I did actually
> pick up the 4th edition, and will eventually be shuffling things around
> to replace it with a more recent criticism.
> 
> Note that the 4th edition fixes several things, but continues to repeatedly
> and totally misunderstand feof().

This might be the time to categorise the errors by edition. That is, for 
each error, make it clear whether the error is, is not, or is not known 
to be, present in each given edition (and perhaps whether any specific 
edition has fixed a problem in a previous edition).

It would increase the housekeeping work involved, true, but may help to 
demonstrate an attempt at "playing fair" to Schildt by recognising where 
he has fixed problems, whilst still recording those problems for the 
sake of those users who have an edition in which the problem is not fixed.

Of course, the person who *should* be doing this is... Herbert Schildt.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
rjh (10791)
1/1/2010 5:17:37 PM
Seebs <usenet-nospam@seebs.net> writes:
> On 2010-01-01, Richard Heathfield <rjh@see.sig.invalid> wrote:
>> If you can find any error of fact on the page, I'm sure Seebs will be 
>> delighted to correct it.
>
> Indeed, even though he's made it clear that he can't, I did actually
> pick up the 4th edition, and will eventually be shuffling things around
> to replace it with a more recent criticism.

Let me encourage you to keep a copy of your original review online,
for comparison purposes if nothing else.

> Note that the 4th edition fixes several things, but continues to repeatedly
> and totally misunderstand feof().

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
kst-u (21963)
1/1/2010 5:50:34 PM
spinoza1111 wrote:
> On Jan 1, 2:37 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111wrote:
>>> On Jan 1, 12:32 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>>>> spinoza1111wrote:
>>>>> On Jan 1, 5:10 am, Walter Banks <wal...@bytecraft.com> wrote:
>>>> <snip>
>>>>>> You appear to be confusing C macros with conditional compilation.
>>>>> (Sigh) C macros provide an incomplete form of conditional compilation
>>>>> (#if and #ifdef).
>>>> You appear to be confusing C macros with conditional compilation.
>>>> Neither #if nor #ifdef is a C macro. They are preprocessor directives.
>>> You don't know macros.
>> I know C macros. You appear to be confusing C macros with conditional
>> compilation. Get a C book, and read up on the preprocessor.
> 
> I've already addressed this. You want your terminology, which you
> learned laboriously and by rote, to be used so that you're write. But
> as I have already explained, macros and conditional compilation are
> the same entity done at the same time, originating as I have said in
> assembler language and PL/1.

Once again you choose to assert things about which you have no 
knowledge. How Richard learnt anything is outside your knowledge. And 
what gives you the right (note the spelling) to say what C terminology 
means?

I suspect that the only reason most of us bother to reply to your 
rantings is that we worry that some newcomer will take them seriously.

I must say that you have managed to make Herbert Schildt look quite 
knowledgeable in comparison to your ignorance.
1/1/2010 6:01:05 PM
On Jan 1, 10:57 pm, Nick Keighley <nick_keighley_nos...@hotmail.com>
wrote:
> On 1 Jan, 04:20, Richard Heathfield <r...@see.sig.invalid> wrote:
>
> > spinoza1111 wrote:
>
> <snip>
>
> > > [C is]
> > > not popular outside of the USA esp. in the EU for that reason.
>
> > Actually, C is still very popular in the UK (which, heaven help us, is
> > in the EU). I wouldn't know about the rest of the EU, but I have no
> > reason to suspect it's unpopular abroad.
>
> It's used in Italy

Of course it's "used". But by what people? How competent are they?
And, it would be interesting to know whether their use of C tracks
political (pro-American) views.
spinoza1111
1/1/2010 11:33:07 PM
On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:
>
> <snip>
>
> > Case closed. Herb is a programmer who wrote a complete compiler and
> > interpreter for C,
>
> It wasn't a compiler, and it wasn't complete.

(Sigh) This has been resolved and not in your favor. Many compiler
books (including Aho/Sethi/Ullman and my own) contain small compilers
which generate interpretive code.
>
> > and you were unqualified to attack him.
>
> Seebs is perfectly qualified to point out Schildt's mistakes, however,
> which is precisely what he did.

This is based on the Folklore of the Holy and Pure Fool (the reine
Pful of the Parzival legend and "the emperor has no clothes"). It is
thought (esp. by Americans) that the ordinary person uncorrupted by
"false expertise" can spot flaws and fraud that is covered up by
institutions.

However, his right-to-speak (cf Habermas) is based on openness and
lack of the constant focus on attributes of persons. Seebs enabled the
rote transformation of Schildt into a name for "erroneous" in such a
way that the actual facts (notably the very few errors Seebs
documented) are forgotten.


>
> > I need you
>
> > to remove "C: The Complete Nonsense" and replace it with an apology
> > for speaking far beyond your area of competence.
>
> If you can find any error of fact on the page, I'm sure Seebs will be
> delighted to correct it.

"The 'Heap' is a DOS term"

The following passage is self-contradictory:

"The following is a partial list of the errors I am aware of, sorted
by page number. I am not including everything; just many of them."

"I am missing several hundred errors. Please write me if you think you
know of any I'm missing. Please also write if you believe one of these
corrections is inadequate or wrong; I'd love to see it."

"Currently known: [20 trivial errors]"

When Seebach writes "I am missing several hundred errors", this
(rather incoherent) sentence seems to mean "I would like you to help
me find more errors".

Etc. The document is incomplete and never was. However, it became the
one authoritative source for all claims about Schildt's book.
>
> <nonsense snipped>
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

spinoza1111
1/1/2010 11:47:46 PM
On Jan 1, 11:16 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:
> > On Jan 1, 2:37 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111wrote:
> >>> On Jan 1, 12:32 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>
> <snip>
>
> >>>> Neither #if nor #ifdef is a C macro. They are preprocessor directives.
> >>> You don't know macros.
> >> I know C macros. You appear to be confusing C macros with conditional
> >> compilation. Get a C book, and read up on the preprocessor.
>
> > I've already addressed this.
>
> And yet you remain confused on the difference between macros and
> conditional preprocessing directives.
>
> <snip>
>
> >> If the implementation did not produce the message, you'd complain for
> >> that reason instead.
>
> > I would not.
>
> You have a track record of changing your criticism as your understanding
> of reality changes - for example, you complained that Peter Seebach was
> too academic, and now you complain that he isn't academic enough.

No, I never said he was too academic. You use the Korporate definition
of "academic" when I used more complex English syntax than you to
describe Seebach as flawed and malicious. The syntax irritated you
because instead of using hackneyed Korporate phrases such as "track
record", it used language that was more tightly integrated with my
overall view, which you fail to understand, that a false Populism has
merely enabled people without education in their field to "criticize"
people in a way to which they cannot respond, since the "criticism"
starts with the assertion that the person criticised is a profoundly
flawed character and begs the question thereafter.

I've shown that you've lied ("Nilges has never posted to comp.risks")
and that Seebach makes the same sort of "errors" as Herb ("the 'Heap'
is a DOS term": "Java and .Net code is interpreted").


>
> > C confuses constant folding with macro processing.
>
> No, but /you/ do. The macro processing happens during preprocessing. The
> constant folding happens during TP7 (translation). They are totally
> separate. It is you who confused them.
>
> >> The C Standard does not require this as far as I know. If you know
> >> different, please provide chapter and verse.
>
> > Quit changing the subject from the collective crappy behavior of
> > actual C compilers to the standard.
>
> Firstly, you have yet to show why the behaviour of your Microsoft
> compiler is "crappy". Secondly, I'm not changing the subject. I'm
> showing that the compiler is not breaking the Standard's rules. If you
> want to show that the Microsoft compiler is crappy, fine, but you
> haven't done so yet.
>
> >>> The result is that the programming language makes intelligent people
> >>> look stupid, and small-minded clerks look smart.
> >> The programming language does no such thing. In this thread, you have
> >> made yourself look stupid by making bizarre claims about a subject you
>
> > No, I've cleared a matter up by asking questions and being willing to
> > be wrong.
>
> The matter wasn't unclear to anyone except you.
>
> > You are making a mess of the clarity that resulted.
>
> That you still confuse macro processing with constant folding (above)
> suggests that you are not yet as clear as you think.

No, I showed how you and Seebach, lacking formal education in CS, have
no clue about separation of concerns, and how in Seebach's case this
resulted in the silliness of "sequence points", as if it was proper or
even possible to "optimize" a language in the first place (under
separation of concerns, it is not possible to do so).

>
> <snip>
>
> >> If you
> >> choose to call those who know more than you "small-minded clerks",
>
> > I do,
>
> ...then it follows that you know less than a small-minded clerk, by your
> own admission.
>
> > because what you know doesn't make you effective programmers. It
> > makes you into Korporate slaveys.
>
> It is not ignorance that makes effective programmers, but knowledge and
> skill. Your shunning of knowledge can never make you more effective.
>
> <snip>
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

spinoza1111
1/2/2010 12:02:30 AM
On Jan 2, 1:17 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> Seebs wrote:
> > On 2010-01-01, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> If you can find any error of fact on the page, I'm sure Seebs will be
> >> delighted to correct it.
>
> > Indeed, even though he's made it clear that he can't, I did actually
> > pick up the 4th edition, and will eventually be shuffling things around
> > to replace it with a more recent criticism.
>
> > Note that the 4th edition fixes several things, but continues to repeatedly
> > and totally misunderstand feof().
>
> This might be the time to categorise the errors by edition. That is, for
> each error, make it clear whether the error is, is not, or is not known
> to be, present in each given edition (and perhaps whether any specific
> edition has fixed a problem in a previous edition).
>
> It would increase the housekeeping work involved, true, but may help to
> demonstrate an attempt at "playing fair" to Schildt by recognising where
> he has fixed problems, whilst still recording those problems for the
> sake of those users who have an edition in which the problem is not fixed.
>
> Of course, the person who *should* be doing this is... Herbert Schildt.
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

A wiki-style collective discussion would be more appropriate but ONLY
if Schildt agreed to participate. You see, it's unfair and
unproductive of the truth to conduct a discussion about his book
without including him. And guess what? If he won't participate (and
given what I've seen of your behavior I can see why) then you need to
simply drop the issue.
spinoza1111
1/2/2010 12:05:49 AM
On Jan 2, 1:50 am, Keith Thompson <ks...@mib.org> wrote:
> Seebs <usenet-nos...@seebs.net> writes:
> > On 2010-01-01, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> If you can find any error of fact on the page, I'm sure Seebs will be
> >> delighted to correct it.
>
> > Indeed, even though he's made it clear that he can't, I did actually
> > pick up the 4th edition, and will eventually be shuffling things around
> > to replace it with a more recent criticism.
>
> Let me encourage you to keep a copy of your original review online,
> for comparison purposes if nothing else.

Why? Is this some sort of Internet classic?

No, it's a poor document written by a pretentious person without basic
knowledge of computer science, who tells us that "the 'heap' is a DOS
term", who actually believed that because a language standard didn't
mention a runtime facility (the stack) this means that that runtime
facility cannot be mentioned, and who tasked Schildt for assigning
sensible meanings to pathological expressions rather than writing that
things are "undefined" in practice.


>
> > Note that the 4th edition fixes several things, but continues to repeatedly
> > and totally misunderstand feof().
>
> --
> Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
> Nokia
> "We must do something.  This is something.  Therefore, we must do this."
>     -- Antony Jay and Jonathan Lynn, "Yes Minister"

spinoza1111
1/2/2010 1:55:05 AM
On Fri, 1 Jan 2010 15:33:07 -0800 (PST)
spinoza1111 <spinoza1111@yahoo.com> wrote:

> And, it would be interesting to know whether their use of C tracks
> political (pro-American) views.

I can't see a connection between politics and the use of a programming
language. I know, that's my fault...
Lorenzo
1/2/2010 3:42:39 AM
"Lorenzo Villari" <vlllnz@tiscali.it> wrote in message 
news:20100102044239.7e8d997f@kubuntu...
> On Fri, 1 Jan 2010 15:33:07 -0800 (PST)
> spinoza1111 <spinoza1111@yahoo.com> wrote:
>
>> And, it would be interesting to know whether their use of C tracks
>> political (pro-American) views.
>
> I can't see a connection between politics and the use of a programming
> language. I know, that's my fault...

It's a long-standing plot, going back to 1812.

It can't be a coincidence that the United States national anthem begins
"Oh, say can you C by the dawn's early light"

Clearly this refers to the C programming language, and how programmers have 
to work long through the night in an effort to meet deadlines.

Dennis
 

Dennis
1/2/2010 3:57:47 AM
spinoza1111 wrote:
> On Jan 1, 10:57 pm, Nick Keighley <nick_keighley_nos...@hotmail.com>
> wrote:
>> On 1 Jan, 04:20, Richard Heathfield <r...@see.sig.invalid> wrote:
>>
>>> spinoza1111 wrote:
>> <snip>
>>
>>>> [C is]
>>>> not popular outside of the USA esp. in the EU for that reason.
>>> Actually, C is still very popular in the UK (which, heaven help us, is
>>> in the EU). I wouldn't know about the rest of the EU, but I have no
>>> reason to suspect it's unpopular abroad.
>> It's used in Italy
> 
> Of course it's "used". But by what people?

Lots of different kinds of people. The world is more complex than you 
like to think.

 > How competent are they?

It varies. Some are really very good at it, but some are only slightly 
better at it than you are.

> And, it would be interesting to know whether their use of C tracks
> political (pro-American) views.

To you, perhaps, but I doubt whether anyone else would be even slightly 
interested.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
1/2/2010 6:22:05 AM
spinoza1111 wrote:
> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111 wrote:
>>
>> <snip>
>>
>>> Case closed. Herb is a programmer who wrote a complete compiler and
>>> interpreter for C,
>> It wasn't a compiler, and it wasn't complete.
> 
> (Sigh) This has been resolved and not in your favor.

Only in your fevered imagination. The implementation did not produce 
standalone object code; it was an interpreter, not a compiler. And it 
did not implement the entire language. Therefore it was not complete. 
Therefore, the claim that it was a complete compiler - like so many of 
your claims - is mistaken.

<nonsense snipped>

>>  > I need you
>>> to remove "C: The Complete Nonsense" and replace it with an apology
>>> for speaking far beyond your area of competence.
>> If you can find any error of fact on the page, I'm sure Seebs will be
>> delighted to correct it.
> 
> "The 'Heap' is a DOS term"

If Seebs agrees with you that "the heap" is *not* a DOS term, I'm sure 
he'll fix the page. If you can cite an international standard that 
supports your case, I'm sure that will help you to persuade him. Failing 
that, citing an authoritative author will help (hint: Spinoza and Adorno 
are not generally considered to be authoritative on the subject of 
MS-DOS). But I think you'll find authoritative MS-DOS works to be 
littered with references to the heap. I can dig one out if you like.

> 
> The following passage is self-contradictory:
> 
> "The following is a partial list of the errors I am aware of, sorted
> by page number. I am not including everything; just many of them."

Let's say that there are Z errors in the book, where Z is actually 
unknown because nobody has actually got as far as finding them all, but 
let us assume that it is at least a fixed number. Peter's complete list 
contains Y items, where Y < Z. His published list contains X items, 
where X < Y.

> "I am missing several hundred errors. Please write me if you think you
> know of any I'm missing. Please also write if you believe one of these
> corrections is inadequate or wrong; I'd love to see it."

He would seem to mean Z - Y == several_hundred, which is probably about 
right. I find it difficult to open the book without finding another 
error I hadn't previously noticed.

> "Currently known: [20 trivial errors]"

Whether they are trivial depends on whether you care about learning the 
language properly. In any case, you have already shown, through your 
ignorance of simple C matters, that you are not in any position to judge 
whether an error is trivial or not. For example, Schildt's advice on EOF 
will seriously confuse a newbie who takes it on trust.

> When Seebach writes "I am missing several hundred errors", this
> (rather incoherent) sentence seems to mean "I would like you to help
> me find more errors".

Feel free.

> Etc. The document is incomplete and never was.

Then it needs to be renamed "C: The Incomplete Nonsense", at least for 
the time being.

 > However, it became the
> one authoritative source for all claims about Schildt's book.

So you consider Peter Seebach's page to be authoritative. Fine, so 
what's the problem?

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
1/2/2010 6:43:10 AM
On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111wrote:
> > On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >>spinoza1111wrote:
>
> >> <snip>
>
> >>> Case closed. Herb is a programmer who wrote a complete compiler and
> >>> interpreter for C,
> >> It wasn't a compiler, and it wasn't complete.
>
> > (Sigh) This has been resolved and not in your favor.
>
> Only in your fevered imagination. The implementation did not produce
> standalone object code; it was an interpreter, not a compiler. And it

WRONG because it had to scan and parse. If something can scan and
parse and not be at some scale a compiler, then you do not know
your profession.

By the way, what are YOUR academic credentials?

> did not implement the entire language. Therefore it was not complete.
> Therefore, the claim that it was a complete compiler - like so many of
> your claims - is mistaken.
>
> <nonsense snipped>
>
> >> > I need you
> >>> to remove "C: The Complete Nonsense" and replace it with an apology
> >>> for speaking far beyond your area of competence.
> >> If you can find any error of fact on the page, I'm sure Seebs will be
> >> delighted to correct it.
>
> > "The 'Heap' is a DOS term"
>
> If Seebs agrees with you that "the heap" is *not* a DOS term, I'm sure
> he'll fix the page. If you can cite an international standard that
> supports your case, I'm sure that will help you to persuade him. Failing
> that, citing an authoritative author will help (hint: Spinoza and Adorno
> are not generally considered to be authoritative on the subject of
> MS-DOS). But I think you'll find authoritative MS-DOS works to be
> littered with references to the heap. I can dig one out if you like.

...which doesn't at all imply that "the heap" is a DOS term. The term
occurred in several books I read in the 1970s including Saul Rosen's
Programming Systems and Languages (1970). This was before Gates stole
MS-DOS.
>
>
>
> > The following passage is self-contradictory:
>
> > "The following is a partial list of the errors I am aware of, sorted
> > by page number. I am not including everything; just many of them."
>
> Let's say that there are Z errors in the book, where Z is actually
> unknown because nobody has actually got as far as finding them all, but
> let us assume that it is at least a fixed number. Peter's complete list
> contains Y items, where Y < Z. His published list contains X items,
> where X < Y.

ROTLMFAO. It is the mark of the subliterate to count ideas. Ideas are
not countable, and reading English properly not only means being
able to grasp things qualitatively all the way down, and a fundamental
commitment to honesty and decency (cf Habermas, Kant and the paradox
of the Liar) which you don't have.
>
> > "I am missing several hundred errors. Please write me if you think you
> > know of any I'm missing. Please also write if you believe one of these
> > corrections is inadequate or wrong; I'd love to see it."
>
> He would seem to mean Z - Y == several_hundred, which is probably about
> right. I find it difficult to open the book without finding another
> error I hadn't previously noticed.

Because you never learned critical reading in whatever inferior
Borstal or technical college you attended, you actually believe that
the first interpretation that occurs to you is correct.
>
> > "Currently known: [20 trivial errors]"
>
> Whether they are trivial depends on whether you care about learning the
> language properly. In any case, you have already shown, through your
> ignorance of simple C matters, that you are not in any position to judge

Fuck you, asshole. I ask questions and have the balls to be wrong
because I'm MUCH better educated than you in both CS and general
culture. This is what competent people do as opposed to corporate
clowns who've worked for a series of "banks and insurance companies",
and either destroyed others at these jobs or been fired.

> whether an error is trivial or not. For example, Schildt's advice on EOF
> will seriously confuse a newbie who takes it on trust.
>
> > When Seebach writes "I am missing several hundred errors", this
> > (rather incoherent) sentence seems to mean "I would like you to help
> > me find more errors".
>
> Feel free.
>
> > Etc. The document is incomplete and never was.
>
> Then it needs to be renamed "C: The Incomplete Nonsense", at least for
> the time being.
>
> > However, it became the
>
> > one authoritative source for all claims about Schildt's book.
>
> So you consider Peter Seebach's page to be authoritative. Fine, so
> what's the problem?

Stop playing word games. Not only do you unfairly ascribe ignorance to
others based on superficial textual clues and your own verbal
limitations, you also pretend ignorance in the Korporate way, which is
easy since you are in fact profoundly ignorant about your trade and
many, many other things. You should know that by "authoritative" I
meant "accepted at face value by people who should know better because
people who should know better mindlessly cited Seebach".

You feign ignorance or use real ignorance to make your case, and your
ignorance (and that of Seebach) is marshaled by companies to create
minimally acceptable software in the service of their stock price.
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

spinoza1111
1/2/2010 7:04:36 AM
On 2010-01-02, Richard Heathfield <rjh@see.sig.invalid> wrote:
> Only in your fevered imagination. The implementation did not produce 
> standalone object code; it was an interpreter, not a compiler. And it 
> did not implement the entire language. Therefore it was not complete. 
> Therefore, the claim that it was a complete compiler - like so many of 
> your claims - is mistaken.

The book clearly says it's an interpreter.

>>>  > I need you
>>>> to remove "C: The Complete Nonsense" and replace it with an apology
>>>> for speaking far beyond your area of competence.
>>> If you can find any error of fact on the page, I'm sure Seebs will be
>>> delighted to correct it.
>> 
>> "The 'Heap' is a DOS term"
>
> If Seebs agrees with you that "the heap" is *not* a DOS term, I'm sure 
> he'll fix the page. If you can cite an international standard that 
> supports your case, I'm sure that will help you to persuade him. Failing 
> that, citing an authoritative author will help (hint: Spinoza and Adorno 
> are not generally considered to be authoritative on the subject of 
> MS-DOS). But I think you'll find authoritative MS-DOS works to be 
> littered with references to the heap. I can dig one out if you like.

I think I'll clarify that one, in any event.  The term "heap" is used
very heavily in a DOS environment.  The term "heap" is not used at all
in some Unix docs, but glibc uses it occasionally -- interestingly,
specifically to point out that malloc returns some pointers to space
allocated separately outside the heap.  (By unhappy coincidence, I know
WAY WAY too much about the details there, but they're not particularly
relevant.)

The point I was aiming for (but frankly didn't make properly) was that
the concept of the "heap" is not necessarily an intrinsic part of the
C language -- less so still is the specific memory layout, or the notion
that if you allocate enough stuff on "the heap" you will run into
the stack.  (In fact, on some of the systems I use, this is effectively
impossible, because the stack pointer and the "break" address are
far enough apart that you run into the resource limits on both long
before they come near each other.)

I think I'll probably clean that wording up at some point, probably around
the time I get to updating the page for the 4th edition.

> Let's say that there are Z errors in the book, where Z is actually 
> unknown because nobody has actually got as far as finding them all, but 
> let us assume that it is at least a fixed number. Peter's complete list 
> contains Y items, where Y < Z. His published list contains X items, 
> where X < Y.

Exactly.  My copy of the book has some notes in it, many of which I
didn't feel were worth listing.

>> "I am missing several hundred errors. Please write me if you think you
>> know of any I'm missing. Please also write if you believe one of these
>> corrections is inadequate or wrong; I'd love to see it."

> He would seem to mean Z - Y == several_hundred, which is probably about 
> right. I find it difficult to open the book without finding another 
> error I hadn't previously noticed.

That's my guess.  Note that I'm counting repetitions, so basically
every sample program in the book counts as an instance of the void
main error...

> Whether they are trivial depends on whether you care about learning the 
> language properly. In any case, you have already shown, through your 
> ignorance of simple C matters, that you are not in any position to judge 
> whether an error is trivial or not. For example, Schildt's advice on EOF 
> will seriously confuse a newbie who takes it on trust.

Reminding me, I want to do a poll on what happens on real systems if
you do putchar(EOF).

>> Etc. The document is incomplete and never was.

> Then it needs to be renamed "C: The Incomplete Nonsense", at least for 
> the time being.

Hee.

My assertion is not that my listing is a complete annotation of all
the nonsense, 

> > However, it became the
>> one authoritative source for all claims about Schildt's book.

> So you consider Peter Seebach's page to be authoritative. Fine, so 
> what's the problem?

The problem is that he's lying here; my page is not the "one authoritative
source".  Rather, the talk page for Herbert Schildt on Wikipedia contains
a number of debates about whether or not the "controversy" is justified,
which Spinny lost specifically on the grounds that someone pointed to my
page.  That has caused him to mistakenly think it's the sole authoritative
source, but in fact, I think anyone would consider the pages by Francis
or Clive on the same topic to be comparably authoritative -- both have
been at least as active in C standardization as I have.

So it's all tilting at windmills.  Someone accepted my page as an argument,
so he thinks if he can make it go away he can win the argument with the
wikipedia people -- but in fact, while my page was *sufficient* to win
the argument, that's far from making it the only qualified source.

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
Seebs
1/2/2010 7:28:12 AM
On Jan 2, 11:57 am, "Dennis \(Icarus\)" <nojunkm...@ever.invalid>
wrote:
> "Lorenzo Villari" <vll...@tiscali.it> wrote in message
>
> news:20100102044239.7e8d997f@kubuntu...
>
> > On Fri, 1 Jan 2010 15:33:07 -0800 (PST)
> >spinoza1111<spinoza1...@yahoo.com> wrote:
>
> >> And, it would be interesting to know whether their use of C tracks
> >> political (pro-American) views.
>
> > I can't see a connection between politics and the use of a programming
> > language. I know, that's my fault...
>
> It's a long-standing plot, going back to 1812.
>
> It can't be a coincidence that the United States national anthem begins
> "Oh, say can you C by the dawn's early light"
>
> Clearly this refers to the C programming language, and how programmers have
> to work long through the night in an effort to meet deadlines.
>
> Dennis

Lack of cultural or historical knowledge is not an argument. Americans
have always been proud of the "can do" spirit expressed in rapid
development without a lot of theory. It produced the Model T, the
Liberty ship, and Fortran, which beat the more European Algol.
However, it was clear by 1964 that Algol was more powerful in the
sense of being more expressive, and so the American firm IBM bastardized
Algol to create PL/I (giving its Vienna lab only six months to come up
with formal semantics).

But because neither programmers nor compiler writers really knew what
they were doing at the time, Multics, which proposed to use PL/I to
create a time-sharing "computer utility", foundered as a result of
political underestimates of the time needed and the failure to see
that software grows exponentially while bugs are reduced only
inversely exponentially.

Kernighan and Ritchie, however, failed to see how funding politics
created the appearance of delay, and taking their superficial
understanding at face value, reverted to can-do Fortran pragmatism
while incorporating some of what had been learned in Algol.

The result? A mess.

The mere fact that you can refer light-heartedly to working "long
hours through the night" when the theft of programmer time has
destroyed many lives shows that you're imprisoned within a sort of
Saturday Night Live irony which is flippant and childish about real
issues.
spinoza1111
1/2/2010 7:33:32 AM
spinoza1111 wrote:
> On Jan 1, 11:16 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111 wrote:
>>> On Jan 1, 2:37 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>>>> spinoza1111wrote:
>>>>> On Jan 1, 12:32 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> <snip>
>>
>>>>>> Neither #if nor #ifdef is a C macro. They are preprocessor directives.
>>>>> You don't know macros.
>>>> I know C macros. You appear to be confusing C macros with conditional
>>>> compilation. Get a C book, and read up on the preprocessor.
>>> I've already addressed this.
>> And yet you remain confused on the difference between macros and
>> conditional preprocessing directives.
>>
>> <snip>
>>
>>>> If the implementation did not produce the message, you'd complain for
>>>> that reason instead.
>>> I would not.
>> You have a track record of changing your criticism as your understanding
>> of reality changes - for example, you complained that Peter Seebach was
>> too academic, and now you complain that he isn't academic enough.
> 
> No, I never said he was too academic.

Well, I can't prove that you did (because Google Groups's search tool is 
being particularly nonsensical today), so I guess I'll have to accept 
that you may not have used the precise phrase "too academic". 
Nevertheless, I remember (as others here will no doubt remember) you 
complaining that he was too academic - and then he revealed that he has 
no CS degree, ever since which you've been attacking him for not being 
sufficiently academic.

<nonsense snipped>

> I've shown that you've lied ("Nilges has never posted to comp.risks")

No you haven't; the above quote is utterly inaccurate. What I said was 
that it seemed to me that the comp.risks moderator could be credited 
with the sense not to post articles from spinoza1111@yahoo.com; as it 
transpired, I was wrong - the moderator /has/ posted such articles, but 
for presumably good reasons posts all moderated articles under a single 
originating address, something I'd forgotten about when conducting the 
search, which was based on author. In other words, my cheap shot (and 
yes, I know it was a cheap shot) misfired - but it wasn't a lie.

> and that Seebach makes the same sort of "errors" as Herb ("the 'Heap'
> is a DOS term": "Java and .Net code is interpreted").

He doesn't, it is, and they can be and are. For example, "DOS 
Programmer's Reference", 3rd edition (Dettman and Johnson), is littered 
with references to the heap. And "The bytecode verifier acts as a sort 
of gatekeeper: it ensures that code passed to the Java interpreter is in 
a fit state to be executed and can run without fear of breaking the Java 
interpreter." is taken directly from Java's home, the Sun site. URL: 
<http://java.sun.com/docs/white/langenv/Security.doc3.html> And here's a 
quote from microsoft.com: "The .NET Micro Framework HAL can communicate 
with the CLR through event masks. Before entering into the interpreter 
loop, the .NET Micro Framework CLR checks for signals..."

<snip>

>> That you still confuse macro processing with constant folding (above)
>> suggests that you are not yet as clear as you think.
> 
> No, I showed how you and Seebach, lacking formal education in CS, have
> no clue about separation of concerns, and how in Seebach's case this
> resulted in the silliness of "sequence points", as if it was proper or
> even possible to "optimize" a language in the first place (under
> separation of concerns, it is not possible to do so).

Sequence points haven't even been mentioned in this thread, as far as I 
can tell. We were discussing #define FOO (1/0), remember? No sequence 
points need apply. Sheesh. Learn to separate concerns, why don't you? We 
can argue about sequence points if you like, but at least start a new 
thread for it.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
1/2/2010 7:36:40 AM
spinoza1111 wrote:
> On Jan 2, 1:17 am, Richard Heathfield <r...@see.sig.invalid> wrote:
>> Seebs wrote:
>>> On 2010-01-01, Richard Heathfield <r...@see.sig.invalid> wrote:
>>>> If you can find any error of fact on the page, I'm sure Seebs will be
>>>> delighted to correct it.
>>> Indeed, even though he's made it clear that he can't, I did actually
>>> pick up the 4th edition, and will eventually be shuffling things around
>>> to replace it with a more recent criticism.
>>> Note that the 4th edition fixes several things, but continues to repeatedly
>>> and totally misunderstand feof().
>> This might be the time to categorise the errors by edition. That is, for
>> each error, make it clear whether the error is, is not, or is not known
>> to be, present in each given edition (and perhaps whether any specific
>> edition has fixed a problem in a previous edition).
>>
>> It would increase the housekeeping work involved, true, but may help to
>> demonstrate an attempt at "playing fair" to Schildt by recognising where
>> he has fixed problems, whilst still recording those problems for the
>> sake of those users who have an edition in which the problem is not fixed.
>>
>> Of course, the person who *should* be doing this is... Herbert Schildt.
>>
>> --
>> Richard Heathfield <http://www.cpax.org.uk>
>> Email: -http://www. +rjh@
>> "Usenet is a strange place" - dmr 29 July 1999
>> Sig line vacant - apply within
> 
> A wiki-style collective discussion would be more appropriate

So that you could get in there and screw it up like you did Wikipedia? 
No thanks. I'm sure Peter will be delighted to take into consideration 
the views of anyone who emails him on the subject, always assuming they 
haven't tried his patience in email before to the extent of getting 
themselves killfiled.

 > but ONLY if Schildt agreed to participate.

It is precisely *because* Schildt won't do his own errata that *other 
people* are doing his errata.

 > You see, it's unfair and
> unproductive of the truth to conduct a discussion about his book
> without including him.

Who has excluded him?

 > And guess what? If he won't participate (and
> given what I've seen of your behavior I can see why) then you need to
> simply drop the issue.

Exactly wrong. If he *will* participate, preferably by doing his own 
laundry on his own Web site, the issue will drop itself.

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
1/2/2010 7:40:34 AM
On Jan 2, 11:42 am, Lorenzo Villari <vll...@tiscali.it> wrote:
> On Fri, 1 Jan 2010 15:33:07 -0800 (PST)
>
> spinoza1111<spinoza1...@yahoo.com> wrote:
> > And, it would be interesting to know whether their use of C tracks
> > political (pro-American) views.
>
> I can't see a connection between politics and the use of a programming
> language. I know, that's my fault...

The connection is unseen because (cf Adorno) people in mass society
must of necessity be deluded about their true relationship to the
means of production. The dialectical logic of programming is this:

The owners of capital know that their private ownership of a public
good is unfair
Therefore they appoint (cf Galbraith, The Economics of Innocent Fraud)
a buffer class between them and the general public, call management
But like the owners of capital, management can't work; it can
only...manage
But this is a contradiction, since management insofar as it's real
"does" things
Therefore the formerly actual work of management (setting rules and
standards) becomes computer programming
But because this is low-paid management, it can neither be recognized
as fully professional nor compensated justly nor even performed
acceptably
To reconcile them with their intolerable situation, the dehumanized
programmers create shibboleths and cargo cults of competence such as C
mythology
spinoza1111
1/2/2010 7:42:49 AM
On 2010-01-02, Richard Heathfield <rjh@see.sig.invalid> wrote:
> Well, I can't prove that you did (because Google Groups's search tool is 
> being particularly nonsensical today), so I guess I'll have to accept 
> that you may not have used the precise phrase "too academic". 
> Nevertheless, I remember (as others here will no doubt remember) you 
> complaining that he was too academic - and then he revealed that he has 
> no CS degree, ever since which you've been attacking him for not being 
> sufficiently academic.

To be picky, he was pretty general about who it was that was too
academic; he appeared to be talking about the whole category of people
who were disagreeing with him, claiming that they had AP'd out of
entry-level CS and thus missed the content -- apparently, he feels
that AP courses don't really cover the material or something.

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
Seebs
1/2/2010 7:48:07 AM
spinoza1111 wrote:
> On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111wrote:
>>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>>>> spinoza1111wrote:
>>>> <snip>
>>>>> Case closed. Herb is a programmer who wrote a complete compiler and
>>>>> interpreter for C,
>>>> It wasn't a compiler, and it wasn't complete.
>>> (Sigh) This has been resolved and not in your favor.
>> Only in your fevered imagination. The implementation did not produce
>> standalone object code; it was an interpreter, not a compiler. And it
> 
> WRONG because it had to scan and parse.

So does printf. That doesn't mean that printf is a compiler.

> If something can scan and
> parse and not being at some scale a compiler, then you do not know
> your profession.

Are you claiming that printf is a compiler?

> By the way, what are YOUR academic credentials?

Many eminent computer scientists do not have a CS degree. Neither do I - 
which does not of course make me an eminent computer scientist, but it 
does show that a CS degree is not necessary for CS expertise. We had 
this discussion before, remember?

But if I have no CS degree, why should you - or anyone - take my views 
on C seriously? Well, firstly, I do have some kind of presence in the 
academic CS universe: a book to which I contributed about a quarter of 
the text (roughly 344 pages out of 1250 or so), and for which I picked 
most of the rest of the writing team, is (or at least was, last time I 
checked) on the required reading list for at least two university CS 
courses - one in the UK and one in the USA. Secondly, even people who 
really don't like me very much are prepared to concede that I know my C. 
  You have said so yourself, in fact. Thirdly, I have a long track 
record of helping people to learn C in this very group, which has quite 
a few C experts in it who are only too ready to pounce on any mistakes I 
might make. I am far more interested in their opinion of my ability than 
I am in your opinion, since it has long been evident to me that your 
opinion is based on ignorance and prejudice, whereas theirs is based on 
knowledge and skill.

<snip>

>> But I think you'll find authoritative MS-DOS works to be
>> littered with references to the heap. I can dig one out if you like.
> 
> ...which doesn't at all imply that "the heap" is a DOS term.

Of course it does.

 > The term
> occurred in several books I read in the 1970s including Saul Rosen's
> Programming Systems and Languages (1970).

I don't think anyone has claimed that "the heap" is *only* a DOS term. 
It is, however, most certainly a DOS term.

 > This was before Gates stole MS-DOS.

Gates did not steal MS-DOS. He *bought* QDOS, for $50,000, and re-badged 
it MS-DOS. I am given to understand that the vendor had to threaten a 
law-suit to get the money, but that doesn't mean Gates stole the software.

If it were likely that anyone would take your claims seriously, Bill 
Gates would now have grounds for a libel suit against you. Since nobody 
is going to believe your claim, however, you can rest easy.

<nonsense snipped>

> Fuck you, asshole. I ask questions and have the balls to be wrong

Almost all the time, in fact.

> because I'm MUCH better educated than you in both CS and general
> culture.

It is ironic that you talk about culture, since you are one of the least 
cultured people I know. If you're much better educated than me in CS in 
a way that is relevant in comp.lang.c, then you will have no difficulty 
in showing my C knowledge to be erroneous in a way that convinces 
acknowledged C experts in the group, such as Peter Seebach, Keith 
Thompson, David Thompson, Ben Pfaff, Dann Corbit, Ben Bacarisse, and so 
on. Having said that, you know and I know that I know C a darn sight 
better than you do, so your appeals to your own academic authority are 
trite and meaningless.

 > This is what competent people do as opposed to corporate
> clowns who've worked for a series of "banks and insurance companies",
> and either destroyed others at these jobs or been fired.

For the record, I've worked for a variety of companies in a variety of 
industries, including consumer electronics, automotive, and airline. 
I've never been fired from a programming job (my very first programming 
job was made redundant, and the company went bust soon after for reasons 
unrelated to their IT). I've quit a few, but never been fired. I have 
only once been in a position where I felt obliged to recommend anyone's 
dismissal (a suggestion which turned out to be unnecessary, since the 
responsible manager had already decided to dismiss all eight of the 
people concerned). In fact I have on occasion provided programming 
lessons (in my and their own time) for colleagues who were struggling 
and who recognised their need to improve. You, on the other hand, 
regularly accuse other people of theft, libel, Nazism, Fascism, and all 
the rest of it, in an obvious attempt to destroy them. Most people's 
hypocrisy is relatively low-level, a sort of background hypocrisy. Yours 
doesn't just redline the meter - it wraps around the pin.

<snip>

>>> Etc. The document is incomplete and never was.
>> Then it needs to be renamed "C: The Incomplete Nonsense", at least for
>> the time being.
>>
>>  > However, it became the
>>
>>> one authoritative source for all claims about Schildt's book.
>> So you consider Peter Seebach's page to be authoritative. Fine, so
>> what's the problem?
> 
> Stop playing word games.

Do you mean you were mistaken to claim that Seebs's page is authoritative?

<nonsense snipped>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
1/2/2010 9:45:51 AM
Seebs wrote:
> On 2010-01-02, Richard Heathfield <rjh@see.sig.invalid> wrote:

<snip>

>>> However, it became the
>>> one authoritative source for all claims about Schildt's book.
> 
>> So you consider Peter Seebach's page to be authoritative. Fine, so 
>> what's the problem?
> 
> The problem is that he's lying here; my page is not the "one authoritative
> source".

Um, I think it more likely that he's expressing an opinion. Hanlon's 
Razor and all that.

<snip>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
Richard
1/2/2010 9:48:13 AM
Richard Heathfield wrote:
> spinoza1111 wrote:

>>
>> "The 'Heap' is a DOS term"
> 
> If Seebs agrees with you that "the heap" is *not* a DOS term, I'm sure 
> he'll fix the page. If you can cite an international standard that 
> supports your case, I'm sure that will help you to persuade him. Failing 
> that, citing an authoritative author will help (hint: Spinoza and Adorno 
> are not generally considered to be authoritative on the subject of 
> MS-DOS). But I think you'll find authoritative MS-DOS works to be 
> littered with references to the heap. I can dig one out if you like.
> 

Yes, Nilges prides himself on accurate writing (or should that be 
righting?). However  his failure to understand that a statement of the 
form 'a is a b' does not exclude 'a is a c' suggests that his reading 
skills need a bit of honing.

In addition there is a considerable difference between a possibly 
thoughtless statement in a newsgroup which is easy to correct (and yes, 
we (in this newsgroup) all know that 'heap' is used in many other 
contexts including other computer OSs) and enshrining a misstatement 
in a published book.

Now I have had the misfortune to review several of Schildt's books on 
both C and C++ (and in at least one he managed to confuse the two, and 
his C++ code is definitely poor and not an example to be followed, but I 
know of at least one author who was seriously worse). I would be 
reluctant to even attempt to list all the errors in any one of his books 
because they were so pervasive (rather less in his more recent works so 
perhaps he does take notice) that errors only went to demonstrate the 
inadequacy of his understanding of C. Like every reviewer, my work is my 
opinion. If a theatre critic slates a play he is not generally liable 
for libel. However if he makes personal comments about the playwright, 
producer, actors etc. that is a different matter.

A comparison between what Peter, Richard and I have written about one or 
more books by Schildt with what you have repeatedly said about Peter and 
Richard should make the point. Any one of us can be guilty of making 
mistakes or misspeaking but that does not make us liars, conspirators 
or incompetent.  If anyone took you seriously Peter would have a rock 
solid case against you for libel as you have repeatedly impugned his 
professional competence in ways that could damage his reputation and career.

Francis
1/2/2010 10:41:16 AM
On 1 Jan, 23:33, spinoza1111 <spinoza1...@yahoo.com> wrote:
> On Jan 1, 10:57 pm, Nick Keighley <nick_keighley_nos...@hotmail.com>
> > On 1 Jan, 04:20, Richard Heathfield <r...@see.sig.invalid> wrote:
> > > spinoza1111 wrote:

> > > > [C is]
> > > > not popular outside of the USA esp. in the EU for that reason.
>
> > > Actually, C is still very popular in the UK (which, heaven help us, is
> > > in the EU). I wouldn't know about the rest of the EU, but I have no
> > > reason to suspect it's unpopular abroad.
>
> > It's used in Italy

I have Italian colleagues, a code base I maintain is largely written
by Italians.

> Of course it's "used". But by what people? How competent are they?

Some of them are pretty competent. A few of them are very competent.
I've not seen any real crap.

> And, it would be interesting to know whether their use of C tracks
> political (pro-American) views.

oddly it's never crossed my mind to discuss American politics with
Italians... I don't know much about it to be honest. I haven't even
discussed *Italian* politics with them very much. I think it's
impolite to start spouting on a subject I know nothing about.

 I suspect political opinions vary as much as they vary anywhere else.

The whole idea of choosing a programming language based on your
political opinions, is... bizarre.

How do you recruit? "We're a C based shop here, so do you agree with
us that the war on terrorism is furthered by American Hegemonic
occupation of the Islamic peoples? No? Sorry you're not for us
then..."
Nick
1/2/2010 10:56:52 AM
On Jan 2, 6:41 pm, Francis Glassborow
<francis.glassbo...@btinternet.com> wrote:
> Richard Heathfield wrote:
> >spinoza1111wrote:
>
> >> "The 'Heap' is a DOS term"
>
> > If Seebs agrees with you that "the heap" is *not* a DOS term, I'm sure
> > he'll fix the page. If you can cite an international standard that
> > supports your case, I'm sure that will help you to persuade him. Failing
> > that, citing an authoritative author will help (hint: Spinoza and Adorno
> > are not generally considered to be authoritative on the subject of
> > MS-DOS). But I think you'll find authoritative MS-DOS works to be
> > littered with references to the heap. I can dig one out if you like.
>
> Yes, Nilges prides himself on accurate writing (or should that be
> righting?). However his failure to understand that a statement of the
> form 'a is a b' does not exclude 'a is a c' suggests that his reading
> skills need a bit of honing.

Let's see if that's true.
>
> In addition there is a considerable difference between a possibly
> thoughtless statement in a newsgroup which is easy to correct (and yes,
> we (in this newsgroup) all know that 'heap' is used in many other
> contexts including other computer OSs) and enshrining a misstatement
> in a published book.

Hold it right there. Seebie's statement (the 'heap' is a DOS term) was
not like Richard Heathfield's claim that he did not find "spinoza1111"
in comp.risks, meant I believe as a malicious lie but in a newsgroup
where we don't always review what we say, and we can undo damage done
by apology (although Richard has not apologized for his malicious
lie).

"C: the Complete Nonsense" was an Internet publication with its own
Web site that is readily found by Google. Seebie allowed it to stand
even after its mistakes were criticised whereas in newsgroups people
can retract what they say. Because bubble butts and aliterates can so
readily find it, it is far more influential than a book could ever be.

Furthermore, Clive Feather effectively gave it certification when he
based a similarly written post citing it, and it's likely that
Feather's cite gave the first post its viral "authority".

>
> Now I have had the misfortune to review several of Schildt's books on
> both C and C++ (and in at least one he managed to confuse the two, and

As sensibly distinct languages, they are hopelessly and incestuously
intertwined in a way that we see right here is not sensibly
explainable; most answers to noob questions degenerate into claims and
counterclaims amongst the regs. One of the reasons for the brutal but
explainable syntax and semantics of Fortran I, and the elegance of the
Algol Report, was that both Backus and the Algol team knew that great
engineering is simple engineering...but not simpler than necessary. It
can be described sensibly.

> his C++ code is definitely poor and not an example to be followed, but I
> know of at least one author who was seriously worse). I would be

Who?

> reluctant to even attempt to list all the errors in any one of his books

"Errors in books" is Fundamentalist language. It makes sense only when
talking about books that describe an independent scientific or
historical phenomenon, such as a falsehood in a math book or the claim
that FDR dropped the atom bomb, which did appear in a Texas school
book.

The consistent illusion here, one which makes real scientists laugh
when they learn of it, is that a social, human artifact such as C in
actual use is independent and to be spoken of as part of nature. What
you call "errors" are in fact Schildt's bold attempt to make sense of
a mess for real people, and in a self-contradictory fashion, nearly
all of Schildt's critics call him "clear", not knowing that "clarity"
logically implies truth.

Because C is a mess, it is impossible in general and for real
compilers to make universal and sensible predictions of its behavior
true in all cases. For example, the final "truth" about certain
pathological uses of post-increment is Heisenbergian and useless in
practice, for Seebie wants us to all say "undefined" like Killer
Zombies from Space...when, in fact, most compilers allow the
pathological case and execute it in some very well-defined way, and
maintenance programmers who come upon pathology that can't be removed
without breaking something else need to know how the pathological case
works.

The problem isn't Herb.

> because they were so pervasive (rather less in his more recent works so
> perhaps he does take notice) that errors only went to demonstrate the
> inadequacy of his understanding of C. Like every reviewer, my work is my
> opinion. If a theatre critic slates a play he is not generally liable
> for libel. However if he makes personal comments about the playwright,
> producer, actors etc. that is a different matter.

"Bullschildt" is the smoking gun here, for he's saying of a respected
individual with standing in his community and a family name being
assaulted that that individual can be expected to write bullshit. Note
that theater and movie critics do NOT do this! They do NOT make
predictions, as the anti-Schildt creeps do with respect to Schildt and
mine own enemies do about me, about our future output.

Remember Gene Siskel and Roger Ebert? Did you ever hear them say about
a director that "he" is a jerk who cannot make a good movie? Rarely if
ever: instead, they say anything they like about the MOVIE, the one
particular production. This is for a very good reason, as you indicate
above: fear of a libel lawsuit.

You see, adults, as opposed to people working in fields for which they
are unprepared and people who've worked for a succession of "banks and
insurance companies" without acquiring the maturity to know when he's
making a malicious lie, can differentiate between criticism of the
work, and criticism of the *auteur*. They are not corporate droids who
have learned to criticise and back-stab coworkers and people on the
Internet behind their backs because they are too cowardly not to be
worked to death by the corporation.

Amazon was in its infancy when "C: The Complete Nonsense" was
published, but Seebach could have penned a review which did not even
mention the author in order to avoid his implication that Schildt is
"always" wrong. He did not. As a result, this meme went viral and
became the common "knowledge" of C programmers too lazy and
incompetent to think, and the answer to a FAQ, and Seebach didn't have
the common decency to stop this process.
>
> A comparison between what Peter, Richard and I have written about one or
> more books by Schildt with what you have repeatedly said about Peter and
> Richard should make the point. Any one of us can be guilty of making
> mistakes or miss-speaking but that does not make us liars, conspirators
> or incompetent. If anyone took you seriously Peter would have a rock
> solid case against you for libel as you have repeatedly impugned his
> professional competence in ways that could damage his reputation and career.

The situation here is different.

I had a friend who was foully mistreated by Richard Heathfield's
publisher, SAMs. She set up a Web site criticising SAMs. SAMs sued her
under Chicago municipal laws concerning phone harassment.

Although I think SAMs treated this person like shit both in the
initial employment relationship and the lawsuit, I think she made a
mistake by creating a Web site for the same reason I think Seebach's
standalone post was a vicious mistake.

What I say about Heathfield is self-defense in a dialogue in which he
mobilizes what I think is his stupidity and lack of education, and
enables others to participate in cybernetic lynch mobs, and I am by no
means the only person to be subject to his bullying. And unlike Siskel
and Ebert, I do indeed make reality-based predictions and
generalizations about him.

But these predictions are justified and verified. I knew he was a yob
way back in 2000 when he reacted in an infantile manner to being
sidelined by his lack of education in a well-received discussion I
triggered on programmers as professionals, and thereafter lost no
opportunity to enable mob actions, as when I used an invariant
expression in a for loop (a vastly less significant and predictive-of-
incompetence error than saying "the 'heap' is a DOS term" or ".Net
code is interpreted", to cite two of Seebie's gems, or
"comp.programming is not about programmers" or "Nilges isn't in
comp.risks" to cite Heathfield).

The predictions were in dialogues in which Heathfield and Seebie
fought dirty and spoke outside their narrow areas of competence.
Whereas to create a Web site assaulting Herb, apparently hold McGraw
Hill to ransom, and not seriously try to involve him in a dialog (try
picking up the phone during business hours) smacks of civilly and
criminally actionable conduct.

spinoza1111
1/2/2010 4:42:53 PM
On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111wrote:
> > On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111wrote:
> >>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >>>> spinoza1111wrote:
> >>>> <snip>
> >>>>> Case closed. Herb is a programmer who wrote a complete compiler and
> >>>>> interpreter for C,
> >>>> It wasn't a compiler, and it wasn't complete.
> >>> (Sigh) This has been resolved and not in your favor.
> >> Only in your fevered imagination. The implementation did not produce
> >> standalone object code; it was an interpreter, not a compiler. And it
>
> > WRONG because it had to scan and parse.
>
> So does printf. That doesn't mean that printf is a compiler.

You are very ignorant because you could not tell from parsing that
parsing means parsing at Chomsky level 1 in this context.
>
> > If something can scan and
> > parse and not being at some scale a compiler, then you do not know
> > your profession.
>
> Are you claiming that printf is a compiler?
>
No: see above.

> > By the way, what are YOUR academic credentials?
>
> Many eminent computer scientists do not have a CS degree. Neither do I -
> which does not of course make me an eminent computer scientist, but it
> does show that a CS degree is not necessary for CS expertise. We had
> this discussion before, remember?

Yes, and you lost, remember, for you never answered my riposte:

(1) You're no Dijkstra. You don't have the intelligence to invent
theory and practice in the absence of prior art. You're not even
Nilges, since in 1974, byotch, I developed a data base with selection
and formatting in the absence of prior art and development tools
beyond a primitive assembler.

(2) There was no opportunity to take academic course work in the time
of Dijkstra; if you'd done so yourself, you would know that the
content of academic CS was created by Dijkstra et al. How could
Dijkstra have taken computer classes in Holland of the early fifties? I
took the very first CS class offered by my own university in 1970!

One of the strongest indications that neither you nor Seebach have any
academic training in computer science apart from programming classes
in technical colleges in your case (which you disrupted to show off
your knowledge) is indeed your confusion of C with computer science.

>
> But if I have no CS degree, why should you - or anyone - take my views
> on C seriously?

Guess what? I don't. What I do take seriously is your specific advice
on low level technical issues regarding C. You're a clerk.

> Well, firstly, I do have some kind of presence in the
> academic CS universe: a book to which I contributed about a quarter of
> the text (roughly 344 pages out of 1250 or so), and for which I picked
> most of the rest of the writing team, is (or at least was, last time I
> checked) on the required reading list for at least two university CS
> courses

God help us all. Oh well, I have taught at Roosevelt University, a
third-rate school in Chicago, Princeton, and DeVry, which I am the
first to admit is a strange range. In that experience I am aware that
at the lower level a lot of bad practice is being taught, and
universities do an abominable job of selecting instructors, and poor
instructors are allowed to select inferior books, such as C Unleashed.

[Note my choice of words. I don't know if C Unleashed contains a lotta
errors, and I am not gonna set up a Web site unleashing the hounds of
hell on your book. I think it is in a global state of sin since it
promotes a bad language for new development. Your next book might be
great. Perhaps you'll write a slightly pornographic mystery novel.]

> - one in the UK and one in the USA. Secondly, even people who
> really don't like me very much are prepared to concede that I know my C.
>   You have said so yourself, in fact.

Yes, I have. I think you know the trees but are lost in a very dark
wood.

>Thirdly, I have a long track
> record of helping people to learn C in this very group, which has quite
> a few C experts in it who are only too ready to pounce on any mistakes I
> might make. I am far more interested in their opinion of my ability than
> I am in your opinion, since it has long been evident to me that your
> opinion is based on ignorance and prejudice, whereas theirs is based on
> knowledge and skill.
>
> <snip>
>
> >> But I think you'll find authoritative MS-DOS works to be
> >> littered with references to the heap. I can dig one out if you like.
>
> > ...which doesn't at all imply that "the heap" is a DOS term.
>
> Of course it does.
>
>  > The term
>
> > occurred in several books I read in the 1970s including Saul Rosen's
> > Programming Systems and Languages (1970).
>
> I don't think anyone has claimed that "the heap" is *only* a DOS term.

"The 'heap' is a DOS term" in context means that Schildt was wrong to
talk about it whilst explaining C, and that implies, logically, that
at the time Seebach incorrectly believed that heaps were invented for
DOS. This error has been allowed to stand without Seebach's document
being subject to the foul treatment he accords Herb's book.

It's an inexcusable error, and NOT because we're supposed to naturalize
concepts and treat computer shit as more valuable than a man's good
name. It's inexcusable because Seebach was being a damned hypocrite.

> It is, however, most certainly a DOS term.
>
>  > This was before Gates stole MS-DOS.
>
> Gates did not steal MS-DOS. He *bought* QDOS, for $50,000, and re-badged
> it MS-DOS. I am given to understand that the vendor had to threaten a
> law-suit to get the money, but that doesn't mean Gates stole the software.
>
> If it were likely that anyone would take your claims seriously, Bill
> Gates would now have grounds for a libel suit against you. Since nobody
> is going to believe your claim, however, you can rest easy.

Then he (and Jobs) should first go after the makers of the film
Pirates of Silicon Valley, or any number of places where this story is
affirmed. He'll get more money.


>
> <nonsense snipped>
>
> > Fuck you, asshole. I ask questions and have the balls to be wrong
>
> Almost all the time, in fact.
>
> > because I'm MUCH better educated than you in both CS and general
> > culture.
>
> It is ironic that you talk about culture, since you are one of the least
> cultured people I know.

Sure, in terms of the British lower middle class and their anxious
gentility and passivity. I am Onslow to your Violet Bouquet. But, I'm
right.


>If you're much better educated than me in CS in
> a way that is relevant in comp.lang.c, then you will have no difficulty
> in showing my C knowledge to be erroneous in a way that convinces
> acknowledged C experts in the group, such as Peter Seebach, Keith
> Thompson, David Thompson, Ben Pfaff, Dann Corbit, Ben Bacarisse, and so

Oh my goodness, *quelle Pantheon*: a clerk who's never taken a CS
class and a buncha ordinary slobs with broadband [Ben is very smart
esp. on details].

Because of the foul conduct of people like you, competent people
almost never come in here. Michael Kinsley of MSNBC asked for his blog
to be shut down because of the filth posted by losers like you.
Facebook is more popular with smart people because you can get rid of
psychos in a heartbeat. I occasionally correspond with Brian
Kernighan, who I met while at Princeton, and get cordial replies, but
I would never ask him to come in here because the "regs" apart from
Ben are head-cases.


> on. Having said that, you know and I know that I know C a darn sight

Forest and trees, mate. Forest and trees.

> better than you do, so your appeals to your own academic authority are
> trite and meaningless.
>
>  > This is what competent people do as opposed to corporate
>
> > clowns who've worked for a series of "banks and insurance companies",
> > and either destroyed others at these jobs or been fired.
>
> For the record, I've worked for a variety of companies in a variety of
> industries, including consumer electronics, automotive, and airline.

I will stand corrected. OK, you've worked with a series of companies.
And many programmers change jobs. But given the social skills you
display, I can only wonder how you got along with co-workers, and I
think you were a back-stabber, based on my long experience here.

> I've never been fired from a programming job (my very first programming
> job was made redundant, and the company went bust soon after for reasons
> unrelated to their IT). I've quit a few, but never been fired. I have
> only once been in a position where I felt obliged to recommend anyone's
> dismissal (a suggestion which turned out to be unnecessary, since the
> responsible manager had already decided to dismiss all eight of the
> people concerned). In fact I have on occasion provided programming
> lessons (in my and their own time) for colleagues who were struggling
> and who recognised their need to improve. You, on the other hand,
> regularly accuse other people of theft, libel, Nazism, Fascism, and all
> the rest of it, in an obvious attempt to destroy them. Most people's
> hypocrisy is relatively low-level, a sort of background hypocrisy. Yours
> doesn't just redline the meter - it wraps around the pin.
>
> <snip>
>
> >>> Etc. The document is incomplete and never was.
> >> Then it needs to be renamed "C: The Incomplete Nonsense", at least for
> >> the time being.
>
> >>  > However, it became the
>
> >>> one authoritative source for all claims about Schildt's book.
> >> So you consider Peter Seebach's page to be authoritative. Fine, so
> >> what's the problem?
>
> > Stop playing word games.
>
> Do you mean you were mistaken to claim that Seebs's page is authoritative?

You've never read King Lear:

there thou might'st behold the great image of Authoritie, a Dogg's
obey'd in Office.

That is, the connotation of the word for you is always positive. This
is because of your lack of education and culture. Many people, on the
other hand, have from wider exposure to the humanities a notion of
"bad" authority.

In the Shakespeare play, Kent speaks positively of Lear's authority at
the beginning

Kent.
No Sir, but you haue that in your countenance, which I would faine
call Master.
Lear.
What's that?
Kent.
Authority.

This actually fooled the pompous Roger Scruton into thinking that
Shakespeare favored Authority, but apparently, Scrotum didn't finish
the play, for Kent and Lear learn what authority is: it is tearing a
host's (Gloucester's) eyes out even as people think it's cute here to
destroy others, and act offended when their marks fight back.

Indeed, your whole concept of authority and intelligence is fucked.
You equate the two with the ability to dominate a conversation and for
that reason you descend into absurdity when your dominance is
threatened: "Nilges isn't in comp.risks because I looked for
spinoza1111", "comp.programming is not about programmers", "it is a
thoughtcrime and conclusive indication of incompetence to use an
invariant expression in a for loop", blah blah blah.

But you've never read Shakespeare I'd hazard.
>
> <nonsense snipped>
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

0
spinoza1111
1/2/2010 5:19:52 PM
On Jan 2, 3:48 pm, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-01-02, Richard Heathfield <r...@see.sig.invalid> wrote:
>
> > Well, I can't prove that you did (because Google Groups's search tool is
> > being particularly nonsensical today), so I guess I'll have to accept
> > that you may not have used the precise phrase "too academic".
> > Nevertheless, I remember (as others here will no doubt remember) you
> > complaining that he was too academic - and then he revealed that he has
> > no CS degree, ever since which you've been attacking him for not being
> > sufficiently academic.
>
> To be picky, he was pretty general about who it was that was too
> academic; he appeared to be talking about the whole category of people
> who were disagreeing with him, claiming that they had AP'd out of
> entry-level CS and thus missed the content -- apparently, he feels
> that AP courses don't really cover the material or something.

OK, let's give you credit. Please correct any one of these facts:

You took some high school classes that prepared you to avoid taking
the required CS class which was probably less intense than CS 101 for
prospective majors, being the easier class for nonmajors (like the
class Kernighan teaches at Princeton). The high school classes, given
the era, taught you, probably, how to write code in Pascal.

You passed the AP exam to avoid taking the required class, perhaps
with flying colors. Good for you, but as a person with educational
experience, I'd say that entirely too many students over-rely on AP to
avoid having to spend money and "waste time" in survey and
introductory classes.

They don't expect to encounter a great teacher in intro classes
(though Kernighan teaches intro classes at Princeton) because of
widespread contempt for teachers based on money madness: students
reason that really smart people will become gazillionaires by starting
companies, and not have the compassion, patience and decency of a
Kernighan. Therefore they take AP. But as a result, they never see
that every subject, even math, and especially a SOCIAL praxis like
mere computing, has multiple POVs...and they wind up libeling a
Schildt.

You then took an easy major (psychology) and entered the workforce
during an era in which companies were increasing their hiring of
ignorant and therefore malleable people who could be trusted to do
boring things like route bugs, and had terminated great developers to
please Wall Street.

You never learned in school that the stack and heap are commonly used,
as is the Turing machine, to explain computation. You never learned
that in CS, the professor might make an error, and either correct it
himself or have it corrected by the student, and use the situation to
teach the facts and a bit of humility as well.

You then, with initiative that is in some sense admirable, became an
enthusiastic auto-didact.

Is this correct?
>
> -s
> --
> Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
1/2/2010 5:34:20 PM
On 2010-01-02, Richard Heathfield <rjh@see.sig.invalid> wrote:
> Seebs wrote:
>> The problem is that he's lying here; my page is not the "one authoritative
>> source".

> Um, I think it more likely that he's expressing an opinion. Hanlon's 
> Razor and all that.

Hmm.  You have a point; he is confused enough that he might well have
mistaken the citation on the wikipedia talk page for the claim he's now
making.  I retract the accusation.

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
1/2/2010 7:21:22 PM
On 2010-01-02, Francis Glassborow <francis.glassborow@btinternet.com> wrote:
> A comparison between what Peter, Richard and I have written about one or 
> more books by Schildt with what you have repeatedly said about Peter and 
> Richard should make the point. Any one of us can be guilty of making 
> mistakes or mis-speaking but that does not make us liars, conspirators 
> or incompetent.  If anyone took you seriously Peter would have a rock 
> solid case against you for libel as you have repeatedly impugned his 
> professional competence in ways that could damage his reputation and career.

You know, it occurs to me that I really ought to point my lawyer at
this stuff.  Not because I think there'd be any point in a defamation
case, but because he usually finds my kooks funny.

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
1/2/2010 7:22:36 PM
spinoza1111 wrote:
> On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111 wrote:
>>> On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>>>> spinoza1111 wrote:
>>>>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>>>>>> spinoza1111 wrote:
>>>>>> <snip>
>>>>>>> Case closed. Herb is a programmer who wrote a complete compiler and
>>>>>>> interpreter for C,
>>>>>> It wasn't a compiler, and it wasn't complete.
>>>>> (Sigh) This has been resolved and not in your favor.
>>>> Only in your fevered imagination. The implementation did not produce
>>>> standalone object code; it was an interpreter, not a compiler. And it
>>> WRONG because it had to scan and parse.
>> So does printf. That doesn't mean that printf is a compiler.
> 
> You are very ignorant because you could not tell from parsing that
> parsing means parsing at Chomsky level 1 in this context.

Firstly, say what you mean. Secondly, parsing (at *any* level) does not 
make a compiler. It is necessary, but certainly not sufficient.

>>> If something can scan and
>>> parse and not being at some scale a compiler, then you do not know
>>> your profession.
>> Are you claiming that printf is a compiler?
>>
> No: see above.

I'll tell you what - I'll accept that, according to *your* definition of 
a compiler, Herbert Schildt wrote a compiler. According to mine, 
however, he didn't.

>>> By the way, what are YOUR academic credentials?
>> Many eminent computer scientists do not have a CS degree. Neither do I -
>> which does not of course make me an eminent computer scientist, but it
>> does show that a CS degree is not necessary for CS expertise. We had
>> this discussion before, remember?
> 
> Yes, and you lost, remember,

No, I don't remember that.

> for you never answered my riposte:

So what? You write so much twaddle that nobody can reasonably be 
expected to read even 1% of it.

> (1) You're no Dijkstra.

Accepted.

<nonsense snipped>

 > You're not even Nilges,

Thank heaven for small mercies. Or indeed large ones.

> since in 1974, byotch, I developed a data base with selection
> and formatting in the absence of prior art and development tools
> beyond a primitive assembler.

Bear in mind that experience of your previous erroneous and occasionally 
deceptive articles leads me to treat any claim of yours with a large 
pinch of salt.

> (2) There was no opportunity to take academic course work in the time
> of Dijkstra; if you'd done so yourself, you would know that the
> content of academic CS was created by Dijkstra et al. How could
> Dijkstra have taken computer classes in Holland of the early fifties?

He couldn't. That's kind of the point. He couldn't, and therefore he 
didn't. And therefore, by being a computer scientist despite not having 
a CS degree (because he couldn't possibly have got one, because there 
was no such thing at the time), he has demonstrated that a CS degree is 
not a prerequisite for being a computer scientist. I mean, this is so 
blindingly obvious that even a child of 7 could understand it. I suggest 
you consult a child of 7.

 > I took the very first CS class offered by my own university in 1970!

It doesn't appear to have done you much good.

<nonsense snipped>

> God help us all. Oh well, I have taught at Roosevelt University, a
> third-rate school in Chicago, Princeton, and DeVry, which I am the
> first to admit is a strange range. In that experience I am aware that
> at the lower level a lot of bad practice is being taught,

If they let you loose, I'm not surprised.

> [Note my choice of words. I don't know if C Unleashed contains a lotta
> errors, and I am not gonna set up a Web site unleashing the hounds of
> hell on your book.

You'd be too late. Such a site already exists. I wrote it, and I 
maintain it.

<nonsense snipped>

>> If you're much better educated than me in CS in
>> a way that is relevant in comp.lang.c, then you will have no difficulty
>> in showing my C knowledge to be erroneous in a way that convinces
>> acknowledged C experts in the group, such as Peter Seebach, Keith
>> Thompson, David Thompson, Ben Pfaff, Dann Corbit, Ben Bacarisse, and so
> 
> Oh my goodness, *quelle Pantheon*: a clerk who's never taken a CS
> class and a buncha ordinary slobs with broadband [Ben is very smart
> esp. on details]

can run CS rings around a CS graduate who seems to have almost no grasp 
of CS. Odd, that.

<nonsense snipped>

>>  > This is what competent people do as opposed to corporate
>>
>>> clowns who've worked for a series of "banks and insurance companies",
>>> and either destroyed others at these jobs or been fired.
>> For the record, I've worked for a variety of companies in a variety of
>> industries, including consumer electronics, automotive, and airline.
> 
> I will stand corrected. OK, you've worked with a series of companies.
> And many programmers change jobs.

Especially those who choose to work on short-term contracts.

 > But given the social skills you
> display, I can only wonder how you got along with co-workers, and I
> think you were a back-stabber, based on my long experience here.

I get on fine with almost everybody, and I have only once 
(metaphorically!) stabbed anyone in the back - well, eight people in one 
stab actually, but it turned out they were already (metaphorically) dead 
anyway. I considered the stabbing to be a necessary precaution. They had 
wasted a huge amount of time and budget turning out a piece of software 
that might have embarrassed even you.

<snip>

>>>>> Etc. The document is incomplete and never was.
>>>> Then it needs to be renamed "C: The Incomplete Nonsense", at least for
>>>> the time being.
>>>>  > However, it became the
>>>>> one authoritative source for all claims about Schildt's book.
>>>> So you consider Peter Seebach's page to be authoritative. Fine, so
>>>> what's the problem?
>>> Stop playing word games.
>> Do you mean you were mistaken to claim that Seebs's page is authoritative?
> 
> You've never read King Lear:

Oh, but I have.

> there thou might'st behold the great image of Authoritie, a Dogg's
> obey'd in Office.

"O matter, and impertinency mixt, Reason in Madnesse"! The word 
"authority" has both positive and negative connotations, but the word 
"authoritative" is a different (albeit related) word which has, as far 
as I'm aware, only positive connotations. If I care enough, I'll take it 
up with alt.usage.english or possibly alt.english.usage - don't bother 
to try to educate me on the matter, since I wouldn't believe you anyway.

<nonsense snipped>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
1/2/2010 8:31:39 PM
On 31 Dec 2009, 19:36, Moi <r...@invalid.address.org> wrote:
> On Thu, 31 Dec 2009 09:10:03 -0800, spinoza1111 wrote:
> > On Dec 31 2009, 9:51 pm, Moi <r...@invalid.address.org> wrote:
> >> On Thu, 31 Dec 2009 04:35:01 -0800, spinoza1111 wrote:
> > use conditional macro instructions, and the C preprocessor (perhaps
> > fortunately) isn't Turing-complete whereas in BAL you could construct
>
> Stop calling Turing (or other names). BTW did I mention that

For all this silly question is worth:
yes: the PL/1 Precompiler is Turing-complete
and yes: the C Prep is not!
(but mightier than you might think: I computed the first
thousand primes with it - nothing I would do on a PC)
I don't remember BAL enough, but it seems to me, that HLA was
the one with a real Macro language.

Greetings,
Wolfgang
0
wolfgang
1/2/2010 8:57:21 PM
spinoza1111 <spinoza1111@yahoo.com> writes:

> On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111 wrote:
>> > On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >> spinoza1111 wrote:
>> >>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >>>> spinoza1111 wrote:
>> >>>> <snip>
>> >>>>> Case closed. Herb is a programmer who wrote a complete compiler and
>> >>>>> interpreter for C,
>> >>>> It wasn't a compiler, and it wasn't complete.
>> >>> (Sigh) This has been resolved and not in your favor.
>> >> Only in your fevered imagination. The implementation did not produce
>> >> standalone object code; it was an interpreter, not a compiler. And it
>>
>> > WRONG because it had to scan and parse.
>>
>> So does printf. That doesn't mean that printf is a compiler.
>
> You are very ignorant because you could not tell from parsing that
> parsing means parsing at Chomsky level 1 in this context.

Did you mean "type 1" rather than "level 1"?  If so, why is parsing
linked to type 1 grammars?  What has this to do with C?

<snip>
-- 
Ben.
0
Ben
1/2/2010 9:21:57 PM
On Jan 3, 5:21 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> spinoza1111 <spinoza1...@yahoo.com> writes:
> > On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111 wrote:
> >> > On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >> spinoza1111 wrote:
> >> >>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >>>> spinoza1111 wrote:
> >> >>>> <snip>
> >> >>>>> Case closed. Herb is a programmer who wrote a complete compiler and
> >> >>>>> interpreter for C,
> >> >>>> It wasn't a compiler, and it wasn't complete.
> >> >>> (Sigh) This has been resolved and not in your favor.
> >> >> Only in your fevered imagination. The implementation did not produce
> >> >> standalone object code; it was an interpreter, not a compiler. And it
>
> >> > WRONG because it had to scan and parse.
>
> >> So does printf. That doesn't mean that printf is a compiler.
>
> > You are very ignorant because you could not tell from parsing that
> > parsing means parsing at Chomsky level 1 in this context.
>
> Did you mean "type 1" rather than "level 1"?  If so, why is parsing
> linked to type 1 grammars?  What has this to do with C?

Yes, and most literate people (Ben) are not of such a literal mind.

Lots of luck making the case that "parsing has nothing to do with C".

And I've just realized why people say "x has nothing to do with y"
here so often as an argument when it would seem that for arbitrary x
and y, the fact that a chain of significance can always be created
between them means that "x has z to do with y" is always trivially
true.

It's because a critical mass of people here didn't learn CS at uni.
Instead they majored in something else, couldn't get a job in their
field, and wound up getting snapped up, much to their astonishment, by
some slave raider looking for nubile and pliable young flesh that some
company could whip into shape by teaching it a few basic motions in
Taylorist style.

It's at uni where x has z to do with y more often.

The parsing in printf is trivial.


>
> <snip>
> --
> Ben.

0
spinoza1111
1/3/2010 4:30:57 AM
On Jan 3, 4:57 am, "wolfgang.riedel" <wolfgang.riede...@web.de> wrote:
> On 31 Dec 2009, 19:36, Moi <r...@invalid.address.org> wrote:
>
> > > On Thu, 31 Dec 2009 09:10:03 -0800, spinoza1111 wrote:
> > > > On Dec 31 2009, 9:51 pm, Moi <r...@invalid.address.org> wrote:
> > > >> On Thu, 31 Dec 2009 04:35:01 -0800, spinoza1111 wrote:
> > > use conditional macro instructions, and the C preprocessor (perhaps
> > > fortunately) isn't Turing-complete whereas in BAL you could construct
>
> > Stop calling Turing (or other names). BTW did I mention that
>
> For all this silly question is worth:
> yes: the PL/1 Precompiler is Turing-complete
> and yes: the C Prep is not!
> (but mightier than you might think: I computed the first
> thousand primes with it - nothing I would do on a PC)

Slick. Do you have the code?

> I don't remember BAL enough, but it seems to me, that HLA was
> the one with a real Macro language.
>
> Greetings,
> Wolfgang

0
spinoza1111
1/3/2010 4:31:52 AM
On Jan 3, 3:21 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-01-02, Richard Heathfield <r...@see.sig.invalid> wrote:
>
> > Seebs wrote:
> >> The problem is that he's lying here; my page is not the "one authoritative
> >> source".
> > Um, I think it more likely that he's expressing an opinion. Hanlon's
> > Razor and all that.
>
> Hmm.  You have a point; he is confused enough that he might well have
> mistaken the citation on the wikipedia talk page for the claim he's now
> making.  I retract the accusation.

No, you're confused.

Most people think in arrays and not in tree structures due to lack of
education. This actually means that you think of the wikipedia cite
and your libelous article as two data points, and the FAQ about
Schildt as a third. But since your libelous article was the source of
citations cited in turn, it is probably the apex of a tree. It caused
a malicious campaign of libel and slander.
>
> -s
> --
> Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
1/3/2010 4:38:14 AM
On Jan 3, 3:22 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-01-02, Francis Glassborow <francis.glassbo...@btinternet.com> wrote:
>
> > A comparison between what Peter, Richard and I have written about one or
> > more books by Schildt with what you have repeatedly said about Peter and
> > Richard should make the point. Any one of us can be guilty of making
> > mistakes or mis-speaking but that does not make us liars, conspirators
> > or incompetent.  If anyone took you seriously Peter would have a rock
> > solid case against you for libel as you have repeatedly impugned his
> > professional competence in ways that could damage his reputation and career.

Not if it can be proven that Seebach did so first to Schildt, for that
is my claim, and the First Amendment to the Constitution protects it,
along with my readiness to defend myself (pro se if needed) in court.
>
> You know, it occurs to me that I really ought to point my lawyer at
> this stuff.  Not because I think there'd be any point in a defamation
> case, but because he usually finds my kooks funny.

I'd be careful with that. A lawyer is a sort of rent-a-Father in a
society that's killed the Father and he might laugh or snarl at you
for wasting his time.
>
> -s
> --
> Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
1/3/2010 4:40:25 AM
On Jan 3, 4:31 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:
> > On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111 wrote:
> >>> On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >>>> spinoza1111 wrote:
> >>>>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >>>>>> spinoza1111 wrote:
> >>>>>> <snip>
> >>>>>>> Case closed. Herb is a programmer who wrote a complete compiler and
> >>>>>>> interpreter for C,
> >>>>>> It wasn't a compiler, and it wasn't complete.
> >>>>> (Sigh) This has been resolved and not in your favor.
> >>>> Only in your fevered imagination. The implementation did not produce
> >>>> standalone object code; it was an interpreter, not a compiler. And it
> >>> WRONG because it had to scan and parse.
> >> So does printf. That doesn't mean that printf is a compiler.
>
> > You are very ignorant because you could not tell from parsing that
> > parsing means parsing at Chomsky level 1 in this context.
>
> Firstly, say what you mean. Secondly, parsing (at *any* level) does not
> make a compiler. It is necessary, but certainly not sufficient.
>
> >>> If something can scan and
> >>> parse and not being at some scale a compiler, then you do not know
> >>> your profession.
> >> Are you claiming that printf is a compiler?
>
> > No: see above.
>
> I'll tell you what - I'll accept that, according to *your* definition of
> a compiler, Herbert Schildt wrote a compiler. According to mine,
> however, he didn't.
>
> >>> By the way, what are YOUR academic credentials?
> >> Many eminent computer scientists do not have a CS degree. Neither do I -
> >> which does not of course make me an eminent computer scientist, but it
> >> does show that a CS degree is not necessary for CS expertise. We had
> >> this discussion before, remember?
>
> > Yes, and you lost, remember,
>
> No, I don't remember that.
>
> > for you never answered my riposte:
>
> So what? You write so much twaddle that nobody can reasonably be
> expected to read even 1% of it.
>
> > (1) You're no Dijkstra.
>
> Accepted.
>
> <nonsense snipped>
>
>  > You're not even Nilges,
>
> Thank heaven for small mercies. Or indeed large ones.
>
> > since in 1974, byotch, I developed a data base with selection
> > and formatting in the absence of prior art and development tools
> > beyond a primitive assembler.
>
> Bear in mind that experience of your previous erroneous and occasionally
> deceptive articles leads me to treat any claim of yours with a large
> pinch of salt.
>
> > (2) There was no opportunity to take academic course work in the time
> > of Dijkstra; if you'd done so yourself, you would know that the
> > content of academic CS was created by Dijkstra et al. How could
> > Dijkstra have taken computer classes in Holland of the early fifties?
>
> He couldn't. That's kind of the point. He couldn't, and therefore he
> didn't. And therefore, by being a computer scientist despite not having
> a CS degree (because he couldn't possibly have got one, because there
> was no such thing at the time), he has demonstrated that a CS degree is
> not a prerequisite for being a computer scientist. I mean, this is so
> blindingly obvious that even a child of 7 could understand it. I suggest
> you consult a child of 7.
>
> > I took the very first CS class offered by my own university in 1970!
>
> It doesn't appear to have done you much good.
>
> <nonsense snipped>
>
> > God help us all. Oh well, I have taught at Roosevelt University, a
> > third-rate school in Chicago, Princeton, and DeVry, which I am the
> > first to admit is a strange range. In that experience I am aware that
> > at the lower level a lot of bad practice is being taught,
>
> If they let you loose, I'm not surprised.
>
> > [Note my choice of words. I don't know if C Unleashed contains a lotta
> > errors, and I am not gonna set up a Web site unleashing the hounds of
> > hell on your book.
>
> You'd be too late. Such a site already exists. I wrote it, and I
> maintain it.
>
> <nonsense snipped>
>
> >> If you're much better educated than me in CS in
> >> a way that is relevant in comp.lang.c, then you will have no difficulty
> >> in showing my C knowledge to be erroneous in a way that convinces
> >> acknowledged C experts in the group, such as Peter Seebach, Keith
> >> Thompson, David Thompson, Ben Pfaff, Dann Corbit, Ben Bacarisse, and so
>
> > Oh my goodness, *quelle Pantheon*: a clerk who's never taken a CS
> > class and a buncha ordinary slobs with broadband [Ben is very smart
> > esp. on details]
>
> can run CS rings around a CS graduate who seems to have almost no grasp
> of CS. Odd, that.
>
> <nonsense snipped>
>
> >>> This is what competent people do as opposed to corporate
>
> >>> clowns who've worked for a series of "banks and insurance companies",
> >>> and either destroyed others at these jobs or been fired.
> >> For the record, I've worked for a variety of companies in a variety of
> >> industries, including consumer electronics, automotive, and airline.
>
> > I will stand corrected. OK, you've worked with a series of companies.
> > And many programmers change jobs.
>
> Especially those who choose to work on short-term contracts.

... and other bottom feeders who become "consultants" to be
"independent" and who dialectically wind up being completely dependent
on the good will of bad people in a business that at least in the USA
may be mobbed up.

I am very familiar with the ethics of such "consultants".

>
> > But given the social skills you
>
> > display, I can only wonder how you got along with co-workers, and I
> > think you were a back-stabber, based on my long experience here.
>
> I get on fine with almost everybody, and I have only once
> (metaphorically!) stabbed anyone in the back - well, eight people in one
> stab actually, but it turned out they were already (metaphorically) dead

Wow, listen to this guy...

> anyway. I considered the stabbing to be a necessary precaution. They had
> wasted a huge amount of time and budget turning out a piece of software
> that might have embarrassed even you.

OK, you lied, eight people lost their jobs, probably unnecessarily.

>
> <snip>
>
> >>>>> Etc. The document is incomplete and never was.
> >>>> Then it needs to be renamed "C: The Incomplete Nonsense", at least for
> >>>> the time being.
> >>>>> However, it became the
> >>>>> one authoritative source for all claims about Schildt's book.
> >>>> So you consider Peter Seebach's page to be authoritative. Fine, so
> >>>> what's the problem?
> >>> Stop playing word games.
> >> Do you mean you were mistaken to claim that Seebs's page is authoritative?
>
> > You've never read King Lear:
>
> Oh, but I have.

I don't think so, and if so, you have read without understanding.
>
> > there thou might'st behold the great image of Authoritie, a Dogg's
> > obey'd in Office.
>
> "O matter, and impertinency mixt, Reason in Madnesse"! The word
> "authority" has both positive and negative connotations, but the word
> "authoritative" is a different (albeit related) word which has, as far
> as I'm aware, only positive connotations. If I care enough, I'll take it

Literate people extend their connotations to a fully derived word.

> up with alt.usage.english or possibly alt.english.usage - don't bother
> to try to educate me on the matter, since I wouldn't believe you anyway.
>
> <nonsense snipped>
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

0
spinoza1111
1/3/2010 5:35:33 AM
spinoza1111 wrote:
> On Jan 3, 4:31 am, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111wrote:

<snip>

> I am very familiar with the ethics of such "consultants".

You are very quick to generalise.

> 
>>> But given the social skills you
>>
>>> display, I can only wonder how you got along with co-workers, and I
>>> think you were a back-stabber, based on my long experience here.
>> I get on fine with almost everybody, and I have only once
>> (metaphorically!) stabbed anyone in the back - well, eight people in one
>> stab actually, but it turned out they were already (metaphorically) dead
> 
> Wow, listen to this guy...

It would do you good.

>> anyway. I considered the stabbing to be a necessary precaution. They had
>> wasted a huge amount of time and budget turning out a piece of software
>> that might have embarrassed even you.
> 
> OK, you lied,

I don't see how you make that out.

> eight people lost their jobs, probably unnecessarily.

The decision, as it turns out, had already been made, but my evidence 
was confirmation - if confirmation were needed - that the right decision 
had been made.

<snip>

>>> You've never read King Lear:
>> Oh, but I have.
> 
> I don't think so,

That doesn't affect reality in the slightest. I have read King Lear, 
whether you think so or not. And this seems to be the problem - you 
appear to assume that reality is what you think it to be. It isn't. It 
is independent of your imagination.

<yet more nonsense snipped>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
1/3/2010 9:02:05 AM
On Jan 3, 5:02 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111wrote:
> > On Jan 3, 4:31 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111wrote:
>
> <snip>
>
> > I am very familiar with the ethics of such "consultants".
>
> You are very quick to generalise.

Based on experience. It's called "intelligence".

In my experience, nobody should brag about being a contract
programmer. Most of my career, I've not been one, and most programmers
I've known who were all-contract-all-the-time were forced to work to
rule as low level temps.

From the viewpoint of a company that's heavily invested in C, the last
thing it wants to hear is that C sucks, so obtaining a warm body
labeled "C programmer" is in the shortest of terms "rational", because
at the level of mere programmer, the programmer is expected to be
uneducated in comp sci so that he's loyal to the language he knows.

Your constant contortions to "prove" various factoids about C show
that you're overly invested in the language from a financial point of
view.
>
>
>
> >>> But given the social skills you
>
> >>> display, I can only wonder how you got along with co-workers, and I
> >>> think you were a back-stabber, based on my long experience here.
> >> I get on fine with almost everybody, and I have only once
> >> (metaphorically!) stabbed anyone in the back - well, eight people in one
> >> stab actually, but it turned out they were already (metaphorically) dead
>
> > Wow, listen to this guy...
>
> It would do you good.
>
> >> anyway. I considered the stabbing to be a necessary precaution. They had
> >> wasted a huge amount of time and budget turning out a piece of software
> >> that might have embarrassed even you.
>
> > OK, you lied,
>
> I don't see how you make that out.
>
> > eight people lost their jobs, probably unnecessarily.
>
> The decision, as it turns out, had already been made, but my evidence
> was confirmation - if confirmation were needed - that the right decision
> had been made.
>
> <snip>
>
> >>> You've never read King Lear:
> >> Oh, but I have.
>
> > I don't think so,
>
> That doesn't affect reality in the slightest. I have read King Lear,

OK, you have. So did Roger Scruton. But just as he misinterpreted it,
so did you, in the same way you don't understand C.

> whether you think so or not. And this seems to be the problem - you
> appear to assume that reality is what you think it to be. It isn't. It
> is independent of your imagination.
>
> <yet more nonsense snipped>
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

0
spinoza1111
1/3/2010 9:47:21 AM
On Jan 3, 4:31 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111wrote:
> > On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111wrote:
> >>> On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >>>> spinoza1111wrote:
> >>>>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >>>>>> spinoza1111wrote:
> >>>>>> <snip>
> >>>>>>> Case closed. Herb is a programmer who wrote a complete compiler and
> >>>>>>> interpreter for C,
> >>>>>> It wasn't a compiler, and it wasn't complete.
> >>>>> (Sigh) This has been resolved and not in your favor.
> >>>> Only in your fevered imagination. The implementation did not produce
> >>>> standalone object code; it was an interpreter, not a compiler. And it
> >>> WRONG because it had to scan and parse.
> >> So does printf. That doesn't mean that printf is a compiler.
>
> > You are very ignorant because you could not tell from parsing that
> > parsing means parsing at Chomsky level 1 in this context.
>
> Firstly, say what you mean. Secondly, parsing (at *any* level) does not
> make a compiler. It is necessary, but certainly not sufficient.
>
> >>> If something can scan and
> >>> parse and not being at some scale a compiler, then you do not know
> >>> your profession.
> >> Are you claiming that printf is a compiler?
>
> > No: see above.
>
> I'll tell you what - I'll accept that, according to *your* definition of
> a compiler, Herbert Schildt wrote a compiler. According to mine,
> however, he didn't.
>
> >>> By the way, what are YOUR academic credentials?
> >> Many eminent computer scientists do not have a CS degree. Neither do I -
> >> which does not of course make me an eminent computer scientist, but it
> >> does show that a CS degree is not necessary for CS expertise. We had
> >> this discussion before, remember?
>
> > Yes, and you lost, remember,
>
> No, I don't remember that.
>
> > for you never answered my riposte:
>
> So what? You write so much twaddle that nobody can reasonably be
> expected to read even 1% of it.
>
> > (1) You're no Dijkstra.
>
> Accepted.
>
> <nonsense snipped>
>
> > You're not even Nilges,
>
> Thank heaven for small mercies. Or indeed large ones.
>
> > since in 1974, byotch, I developed a data base with selection
> > and formatting in the absence of prior art and development tools
> > beyond a primitive assembler.
>
> Bear in mind that experience of your previous erroneous and occasionally
> deceptive articles leads me to treat any claim of yours with a large
> pinch of salt.
>
> > (2) There was not opportunity to take academic course work in the time
> > of Dijkstra since if you'd done so yourself, you would know that the
> > content of academic CS was created by Dijkstra et al. How could
> > Dijkstra taken computer classes in Holland of the early fifties?
>
> He couldn't. That's kind of the point. He couldn't, and therefore he
> didn't. And therefore, by being a computer scientist despite not having
> a CS degree (because he couldn't possibly have got one, because there
> was no such thing at the time), he has demonstrated that a CS degree is
> not a prerequisite for being a computer scientist. I mean, this is so
> blindingly obvious that even a child of 7 could understand it. I suggest
> you consult a child of 7.
>
> > I took the very first CS class offered by my own university in 1970!
>
> It doesn't appear to have done you much good.
>
> <nonsense snipped>
>
> > God help us all. Oh well, I have taught at Roosevelt University, a
> > third-rate school in Chicago, Princeton, and DeVry, which I am the
> > first to admit is a strange range. In that experience I am aware that
> > at the lower level a lot of bad practice is being taught,
>
> If they let you loose, I'm not surprised.
>
> > [Note my choice of words. I don't know if C Unleashed contains a lotta
> > errors, and I am not gonna set up a Web site unleashing the hounds of
> > hell on your book.
>
> You'd be too late. Such a site already exists. I wrote it, and I
> maintain it.
>
> <nonsense snipped>
>
> >> If you're much better educated than me in CS in
> >> a way that is relevant in comp.lang.c, then you will have no difficulty
> >> in showing my C knowledge to be erroneous in a way that convinces
> >> acknowledged C experts in the group, such as Peter Seebach, Keith
> >> Thompson, David Thompson, Ben Pfaff, Dann Corbit, Ben Bacarisse, and so
>
> > Oh my goodness, *quelle Pantheon*: a clerk who's never taken a CS
> > class and a buncha ordinary slobs with broadband [Ben is very smart
> > esp. on details]
>
> can run CS rings around a CS graduate who seems to have almost no grasp
> of CS. Odd, that.
>
> <nonsense snipped>
>
> >>> This is what competent people do as opposed to corporate
>
> >>> clowns who've worked for a series of "banks and insurance companies",
> >>> and either destroyed others at these jobs or been fired.
> >> For the record, I've worked for a variety of companies in a variety of
> >> industries, including consumer electronics, automotive, and airline.
>
> > I will stand corrected. OK, you've worked with a series of companies.
> > And many programmers change jobs.
>
> Especially those who choose to work on short-term contracts.
>
> > But given the social skills you
>
> > display, I can only wonder how you got along with co-workers, and I
> > think you were a back-stabber, based on my long experience here.
>
> I get on fine with almost everybody, and I have only once
> (metaphorically!) stabbed anyone in the back - well, eight people in one
> stab actually, but it turned out they were already (metaphorically) dead
> anyway. I considered the stabbing to be a necessary precaution. They had
> wasted a huge amount of time and budget turning out a piece of software
> that might have embarrassed even you.
>
> <snip>
>
> >>>>> Etc. The document is incomplete and never was.
> >>>> Then it needs to be renamed "C: The Incomplete Nonsense", at least for
> >>>> the time being.
> >>>>> However, it became the
> >>>>> one authoritative source for all claims about Schildt's book.
> >>>> So you consider Peter Seebach's page to be authoritative. Fine, so
> >>>> what's the problem?
> >>> Stop playing word games.
> >> Do you mean you were mistaken to claim that Seebs's page is authoritative?
>
> > You've never read King Lear:
>
> Oh, but I have.
>
> > there thou might'st behold the great image of Authoritie, a Dogg's
> > obey'd in Office.
>
> "O matter, and impertinency mixt, Reason in Madnesse"! The word

Very good. Like me, you know where to find the First Folio edition.
However, unlike me, you don't connect what goes on in this newsgroup
with Lear. You don't understand that Lear was a protest against false
authority such as yours.

> "authority" has both positive and negative connotations, but the word
> "authoritative" is a different (albeit related) word which has, as far
> as I'm aware, only positive connotations. If I care enough, I'll take it
> up with alt.usage.english or possibly alt.english.usage - don't bother
> to try to educate me on the matter, since I wouldn't believe you anyway.
>
> <nonsense snipped>
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within

0
spinoza1111
1/3/2010 9:49:06 AM
On Jan 3, 3:22 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-01-02, Francis Glassborow <francis.glassbo...@btinternet.com> wrote:
>
> > A comparison between what Peter, Richard and I have written about one or
> > more books by Schildt with what you have repeatedly said about Peter and
> > Richard should make the point. Any one of us can be guilty of making
> > mistakes or miss-speaking but that does not make us liars, conspirators
> > or incompetent.  If anyone took you seriously Peter would have a rock
> > solid case against you for libel as you have repeatedly impugned his
> > professional competence in ways that could damage his reputation and career.
>
> You know, it occurs to me that I really ought to point my lawyer at
> this stuff.  Not because I think there'd be any point in a defamation
> case, but because he usually finds my kooks funny.

"Your" lawyer (who's probably some hard working bottom feeder with a
lot of potential and actual clients) might be irritated at you wasting
his time, because lawyers sue people and institutions with deep
pockets, and he'd see that you have no case. Although truth is not an
absolute defense, it's a good one, and the truth is that you set up a
malicious Web site which harmed Schildt, without qualifications in
computer science of a sort a lawyer would accept (hint: lawyers vastly
prefer external and verifiable certification, such as degrees in CS
from Univ of Illinois, to the claims of autodidacts and AP scores).

Lawyers can also tell the difference between self-defense in what they
call "chat rooms" and malicious standalone Web sites which have high
Google page rankings, and which can be demonstrated to be the apex of
trees of citations in which I can show that the damage to a reputation
started with you.

Lawyers also love older but fit guys like me who can be articulate on
the stand and describe factually how they helped Nash with C, worked
in C until finding a better language, published a book which has
ranked in the top ten Amazon compiler books, etc.

Lawyers don't like geeks who have evolved their own private set of
rules in crummy chat rooms.

So don't make legal threats, even in jest.
>
> -s
> --
> Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
1/3/2010 9:56:53 AM
spinoza1111 wrote:
> On Jan 3, 5:02 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111wrote:
>>> On Jan 3, 4:31 am, Richard Heathfield <r...@see.sig.invalid> wrote:
>>>> spinoza1111wrote:
>> <snip>
>>
>>> I am very familiar with the ethics of such "consultants".
>> You are very quick to generalise.
> 
> Based on experience. It's called "intelligence".
> 
> In my experience, nobody should brag about being a contract
> programmer. Most of my career, I've not been one, and most programmers
> I've known who were all-contract-all-the-time were forced to work to
> rule as low level temps.
> 
> From the viewpoint of a company that's heavily invested in C, the last
> thing it wants to hear is that C sucks, so obtaining a warm body
> labeled "C programmer" is in the shortest of terms "rational", because
> at the level of mere programmer, the programmer is expected to be
> uneducated in comp sci so that he's loyal to the language he knows.

OK that tells us something about the programmers you normally come in 
contact with. Contract workers come in two bands, those who are too 
awful to employ permanently and those who are so good that paying them 
what they are worth as employees would distort the company pay structures.

I have been a critic of pay structures that are based on hierarchies 
exactly because they result in your most skilled employees either being 
promoted out of their skill area (there is no reason to think that an 
excellent programmer can be a manager) or leaving (actually often to be 
re-employed as a consultant).

There are a few companies around that understand that employees should 
be paid well for doing a good job irrespective of what that job is. I 
have been known to suggest that managers at all levels should have their 
pay tied to the performance/productivity of their 'team'.

0
Francis
1/3/2010 10:05:16 AM
On Jan 2, 3:28 pm, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-01-02, Richard Heathfield <r...@see.sig.invalid> wrote:
>
> > Only in your fevered imagination. The implementation did not produce
> > standalone object code; it was an interpreter, not a compiler. And it
> > did not implement the entire language. Therefore it was not complete.
> > Therefore, the claim that it was a complete compiler - like so many of
> > your claims - is mistaken.
>
> The book clearly says it's an interpreter.

We've been over this, Mister ADHD: an interpreter plus a front end
scanner and parser is a compiler. If you'd taken CS at uni instead of
psychology you would know this. You see, universities, unlike
Korporations, don't see the need to make money by running fast but
incorrect software to cheat people.

Your implicit definition of a compiler as a translator that generates
native object code is false. It would if true have several absurd
results. It would exclude retargetable COMPILERS that generate C code
for portability. It would exclude .Net and Java compilers that do NOT
generate interpreted code, but code that is compiled by a small tool
(the JIT translator) at run time to native code. It would deprive most
early COMPILER developers of their Turing and other awards for
generating COMPILERS that generated interpreted code, such as the
developers of the Purdue University PUFFT compiler.

Your silly definition also invalidates pp 1 & 2 of Aho Sethi et al.'s
Dragon Book, which defines a compiler as a language transformer,
specifically on p 2 identifying directly executable machine language
as a subset of the possibilities.
>
> >>>> I need you
> >>>> to remove "C: The Complete Nonsense" and replace it with an apology
> >>>> for speaking far beyond your area of competence.
> >>> If you can find any error of fact on the page, I'm sure Seebs will be
> >>> delighted to correct it.
>
> >> "The 'Heap' is a DOS term"
>
> > If Seebs agrees with you that "the heap" is *not* a DOS term, I'm sure
> > he'll fix the page. If you can cite an international standard that
> > supports your case, I'm sure that will help you to persuade him. Failing
> > that, citing an authoritative author will help (hint: Spinoza and Adorno
> > are not generally considered to be authoritative on the subject of
> > MS-DOS). But I think you'll find authoritative MS-DOS works to be
> > littered with references to the heap. I can dig one out if you like.
>
> I think I'll clarify that one, in any event.  The term "heap" is used
> very heavily in a DOS environment.  The term "heap" is not used at all
> in some Unix docs, but glibc uses it occasionally -- interestingly,
> specifically to point out that malloc returns some pointers to space
> allocated separately outside the heap.  (By unhappy coincidence, I know
> WAY WAY too much about the details there, but they're not particularly
> relevant.)
>
> The point I was aiming for (but frankly didn't make properly) was that
> the concept of the "heap" is not necessarily an intrinsic part of the
> C language -- less so still is the specific memory layout, or the notion
> that if you allocate enough stuff on "the heap" you will run into
> the stack.  (In fact, on some of the systems I use, this is effectively
> impossible, because the stack pointer and the "break" address are
> far enough apart that you run into the resource limits on both long
> before they come near each other.)

The C language as a syntax has nothing to do with runtime mechanisms.
However, had you attended a proper computer science program, you would
have learned that formal semantics often is explained by formal models
which aren't intended to be the only possible reality.

This goes back to Euclid. I'm sure some clowns in his audience at the
beach, while Euclid drew the diagram showing the Pythagorean theorem,
didn't "get" it, in the way you don't "get" it, and asked why they
should believe Euclid for this one triangle. Would the theorem hold at
the fag beach, they would ask, and then say, that's where you belong.

In fact, this is exactly what you did to Schildt. You mocked him for
using a simple model. And if you are gay, please don't bother telling
me that in your defense. I don't want to know, and since 1972 at least
I am aware that most "gay" people can be thugs, same as straights,
because most people confuse instrumental and communicative reason,
systematically.

You'd mock Euclid, and Turing.

You were just literally wrong when you said "the 'heap' is a DOS
term". You don't understand the meaning of a language standard, and
you also don't understand how flawed is the standard on which you
worked to advance your rather half-assed career.

It appears that the C99 standard did not explain semantics using any
one specific runtime model. However, this is a major flaw in the
standard, which to satisfy greedy vendors, made too many things
"undefined" and is written in bureaucratese-waffling style
systematically.
>
> I think I'll probably clean that wording up at some point, probably around
> the time I get to updating the page for the 4th edition.
>
> > Let's say that there are Z errors in the book, where Z is actually
> > unknown because nobody has actually got as far as finding them all, but
> > let us assume that it is at least a fixed number. Peter's complete list
> > contains Y items, where Y < Z. His published list contains X items,
> > where X < Y.
>
> Exactly.  My copy of the book has some notes in it, many of which I
> didn't feel were worth listing.
>
> >> "I am missing several hundred errors. Please write me if you think you
> >> know of any I'm missing. Please also write if you believe one of these
> >> corrections is inadequate or wrong; I'd love to see it."
> > He would seem to mean Z - Y == several_hundred, which is probably about
> > right. I find it difficult to open the book without finding another
> > error I hadn't previously noticed.
>
> That's my guess.  Note that I'm counting repetitions, so basically
> every sample program in the book counts as an instance of the void
> main error...

That is very dishonest. It's the same confusion the authors of the
anti-Schildt C FAQ made: between token and type. They counted separate
copies of citations of your stupid document as separate pieces of
evidence.

A distinct "error" isn't a recount of a previous error. What happened
is that you found the usual number of post-publication errors, padded
them with anti-Microsoft opinions, and maligned Schildt as a person.

>
> > Whether they are trivial depends on whether you care about learning the
> > language properly. In any case, you have already shown, through your
> > ignorance of simple C matters, that you are not in any position to judge
> > whether an error is trivial or not. For example, Schildt's advice on EOF
> > will seriously confuse a newbie who takes it on trust.
>
> Reminding me, I want to do a poll on what happens on real systems if
> you do putchar(EOF).
>
> >> Etc. The document is incomplete and never was.
> > Then it needs to be renamed "C: The Incomplete Nonsense", at least for
> > the time being.
>
> Hee.
>
> My assertion is not that my listing is a complete annotation of all
> the nonsense,
>
> > > However, it became the
> >> one authoritative source for all claims about Schildt's book.
> > So you consider Peter Seebach's page to be authoritative. Fine, so
> > what's the problem?
>
> The problem is that he's lying here; my page is not the "one authoritative
> source".  Rather, the talk page for Herbert Schildt on Wikipedia contains
> a number of debates about whether or not the "controversy" is justified,
> which Spinny lost specifically on the grounds that someone pointed to my
> page.  That has caused him to mistakenly think it's the sole authoritative
> source, but in fact, I think anyone would consider the pages by Francis
> or Clive on the same topic to be comparably authoritative -- both have
> been at least as active in C standardization as I have.

You were their source and inspiration, jerk face.

>
> So it's all tilting at windmills.  Someone accepted my page as an argument,
> so he thinks if he can make it go away he can win the argument with the

Fuck is your problem? If you have a complaint make it to me. Stop
talking to others. It's cowardly.

> wikipedia people -- but in fact, while my page was *sufficient* to win
> the argument, that's far from making it the only qualified source.
>
> -s
> --
> Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
1/3/2010 10:21:06 AM
On 2010-01-03, Francis Glassborow <francis.glassborow@btinternet.com> wrote:
> I have been a critic of pay structures that are based on hierarchies 
> exactly because they result in your most skilled employees either being 
> promoted out of their skill area (there is no reason to think that an 
> excellent programmer can be a manager) or leaving (actually often to be 
> re-employed as a consultant).

Yeah.  One of the things I quite like about my employer's system; short
of the C*O level, you can promote as far as you want on a technical track,
you don't have to become a manager.

This can result in someone "taking orders" from another person who's two
or three ranks lower in the theoretical chart of titles, but who cares?
Managers manage, programmers program, everyone's happy.

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
1/3/2010 10:24:40 AM
On Jan 3, 6:05 pm, Francis Glassborow
<francis.glassbo...@btinternet.com> wrote:
> spinoza1111 wrote:
> > On Jan 3, 5:02 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111wrote:
> >>> On Jan 3, 4:31 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> >>>> spinoza1111wrote:
> >> <snip>
>
> >>> I am very familiar with the ethics of such "consultants".
> >> You are very quick to generalise.
>
> > Based on experience. It's called "intelligence".
>
> > In my experience, nobody should brag about being a contract
> > programmer. Most of my career, I've not been one, and most programmers
> > I've known who were all-contract-all-the-time were forced to work to
> > rule as low level temps.
>
> > From the viewpoint of a company that's heavily invested in C, the last
> > thing it wants to hear is that C sucks, so obtaining a warm body
> > labeled "C programmer" is in the shortest of terms "rational", because
> > at the level of mere programmer, the programmer is expected to be
> > uneducated in comp sci so that he's loyal to the language he knows.
>
> OK that tells us something about the programmers you normally come in
> contact with. Contract workers come in two bands, those who are too
> awful to employ permanently and those who are so good that paying them
> what they are worth as employees would distort the company pay structures.

The Troglodyte dances before the shadow on the wall
Sprites and spirits doth he try to call
While unseen outside there is a sun
Which outside, gives light to every one.

Yes, this abstract distinction you make exists. There are really good
programmers and many "studies" have found that they are light years
ahead of their mates...in some cases so many astronomical units ahead
that their coworkers are "offended" by them and start Seebie-style
whispering campaigns.

However, to believe that companies want to pay people what they're
worth ranks alongside trust in the Tooth Fairy. Companies in fact make it
their business to hire underqualified people (such as people without
training in academic CS) so as to enhance stock price.

>
> I have been a critic of pay structures that are based on hierarchies
> exactly because they result in your most skilled employees either being
> promoted out of their skill area (there is no reason to think that an
> excellent programmer can be a manager) or leaving (actually often to be
> re-employed as a consultant).

Oh good. American and G-8 societies are already too unequal, and you
wish to make them more so. Programming "ability" is not a fixed thing and it
has many dimensions. For example, Richard Heathfield has valuable
knowledge of details about C but almost no computing imagination.

Programming ability also disappears rapidly in cases of divorce and
substance abuse.

It is completely unlike medical or legal ability because essential to
what we think of as the "ability" of a doctor and lawyer is their
ability to sign-off on decisions that have important legal and
financial ramifications. Programmers have no such ability.


>
> There are a few companies around that understand that employees should
> be paid well for doing a good job irrespective of what that job is. I
> have been known to suggest that managers at all levels should have their
> pay tied to the performance/productivity of their 'team'.

We have to ask what it's for. If you're writing highly optimized C so
as to enable a financial firm to make an offer in an online market
that jacks up your stock price, and immediately cancels the offer
without purchasing a share (to name one cute post-panic trading game)
you are risking a market meltdown bigger than 2008 based on your games,
and your conduct should be criminalized by legislation if it has not
already been criminalized.

Married men and women with > 2 children as opposed to single people in
my view deserve higher pay, and people whose work is a net benefit to
a real community (not some fanciful "open source" community of pirates
and slaves). Senior people especially people on the verge of
retirement deserve higher pay as do people who've troubled to get
formal education in their professions (such as a computer science
degree).

I realize that this view radically differs from the typical programmer
view.

"Intelligence is a moral category" (Adorno). If you lie, cheat and
steal, misrepresenting the reputation of another, then as we see here,
you have a tendency to say shockingly unintelligent things ("the
'heap' is a DOS term": "Nilges ain't in Risks"). cf. also Habermas: a
communicative act such as the claim that a compiler can be one which
generates interpreted code has to be made in an environment of mutual
trust and a common shared decency.

However, Americans and their friends abroad seem to have a problem
with this notion. They want everybody to be "free", and then punish
people who freely elect Hamas by paying Israel to kill their children.
0
spinoza1111
1/3/2010 10:53:31 AM
On Jan 3, 6:24 pm, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-01-03, Francis Glassborow <francis.glassbo...@btinternet.com> wrote:
>
> > I have been a critic of pay structures that are based on hierarchies
> > exactly because they result in your most skilled employees either being
> > promoted out of their skill area (there is no reason to think that an
> > excellent programmer can be a manager) or leaving (actually often to be
> > re-employed as a consultant).
>
> Yeah.  One of the things I quite like about my employer's system; short
> of the C*O level, you can promote as far as you want on a technical track,
> you don't have to become a manager.
>
The Programmer Peter Pan Syndrome is "I love to code and do not want
to be a manager". This is because the promotion to management is a
psychological symbol of having to grow up and eventually die.

The "technical track" usually dead ends at a cliff; programming
careers have been found on average too brief owing to technical change
to properly raise a family, since people with ten years or more
experience are either behind the curve, or perceived to be, or both.

Figure it out. If a large company has a stable and mature system
maintained and incrementally improved by a set of aging geeks that are
experts on this system and who (in order to do a good job) have not
enhanced their other skills, the aging geeks are a cost center. If the
company can find an Open Source solution to replace these people, it
will.

Geeks at Bell-Northern Research developed a compiler for PBX
programming, disruptively creating a new market in the 1970s. But as
early as 1980, the Ottawa compiler group, which contained very smart
people, was perceived by the suits as a cost center because the
compiler seemed to be...perfect. The Ottawa compiler group was
disbanded. But then, the Mountain View PBX people needed all sorts of
new features for new customers with new requirements, which created my
job.

But after I made the many changes needed, including a new compiler for
24 bits and a complete auto-installer, any suggestions I had as to
further modifying the compiler (such as using optimization to predict
bugs) were met, politely, with the response that Northern's strategy
would be less "proprietary" in future...even though like Apple, being
"proprietary" had made it successful. The upshot was that Northern
Telecom, Bell-Northern's parent, wasted megabucks on a variety of mad
schemes and some of its executives went to jail for stock
manipulations.

There is really no such thing as being "valued" in a company in the
way programmers fantasize they are "valued". It's like the skilled Jew
in the concentration camp who fantasizes that he'll be spared, that
the Nazis won't be so "irrational" in the large.

[No, corporations aren't Fascist dictatorships. Instead, underlying
mechanisms make rationality in the small into irrationality when it
scales up. Small businesses almost never develop Fascistic
pathologies.]

If these programmer Peter Pans lack formal education in computer
science, their next job will be at McDonald's since they have no way
to convince an employer to retrain them in a modern platform, and the
teaching profession is closed to them.

Peter, repeat after me: you want fries with that?

[Note: I don't mean to imply that Seebach is unemployable absent a
demand for C programmers. I do not know enough about him to know. But
I do know that programmer Peter Pans exist.]

> This can result in someone "taking orders" from another person who's two
> or three ranks lower in the theoretical chart of titles, but who cares?
> Managers manage, programmers program, everyone's happy.
>
> -s
> --
> Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
1/3/2010 11:14:28 AM
spinoza1111 <spinoza1111@yahoo.com> writes:

> On Jan 3, 5:21 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> spinoza1111<spinoza1...@yahoo.com> writes:
>> > On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >> spinoza1111wrote:
>> >> > On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >> >> spinoza1111wrote:
>> >> >>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >> >>>> spinoza1111wrote:
>> >> >>>> <snip>
>> >> >>>>> Case closed. Herb is a programmer who wrote a complete compiler and
>> >> >>>>> interpreter for C,
>> >> >>>> It wasn't a compiler, and it wasn't complete.
>> >> >>> (Sigh) This has been resolved and not in your favor.
>> >> >> Only in your fevered imagination. The implementation did not produce
>> >> >> standalone object code; it was an interpreter, not a compiler. And it
>>
>> >> > WRONG because it had to scan and parse.
>>
>> >> So does printf. That doesn't mean that printf is a compiler.
>>
>> > You are very ignorant because you could not tell from parsing that
>> > parsing means parsing at Chomsky level 1 in this context.
>>
>> Did you mean "type 1" rather than "level 1"?  If so, why is parsing
>> linked to type 1 grammars?  What has this to do with C?
>
> Yes, and most literate people (Ben) are not of such a literal mind.
>
> Lots of luck making the case that "parsing has nothing to do with C".

I don't want to.  I wondered what type 1 grammars have to do with C
and why you limit the term "parsing" to this one type even if only "in
this context".

I think what has happened is that you expect people like me with
literal minds to auto-correct what you write until it makes technical
sense.  I am happy to do that, but if I auto-correct what you say
about C I will end up with my (technical) opinions about it, not yours.

<snip>
-- 
Ben.
0
Ben
1/3/2010 1:58:42 PM
On Jan 3, 9:58 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> spinoza1111<spinoza1...@yahoo.com> writes:
> > On Jan 3, 5:21 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> >>spinoza1111<spinoza1...@yahoo.com> writes:
> >> > On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >> spinoza1111wrote:
> >> >> > On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >> >> spinoza1111wrote:
> >> >> >>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >> >>>> spinoza1111wrote:
> >> >> >>>> <snip>
> >> >> >>>>> Case closed. Herb is a programmer who wrote a complete compiler and
> >> >> >>>>> interpreter for C,
> >> >> >>>> It wasn't a compiler, and it wasn't complete.
> >> >> >>> (Sigh) This has been resolved and not in your favor.
> >> >> >> Only in your fevered imagination. The implementation did not produce
> >> >> >> standalone object code; it was an interpreter, not a compiler. And it
>
> >> >> > WRONG because it had to scan and parse.
>
> >> >> So does printf. That doesn't mean that printf is a compiler.
>
> >> > You are very ignorant because you could not tell from parsing that
> >> > parsing means parsing at Chomsky level 1 in this context.
>
> >> Did you mean "type 1" rather than "level 1"?  If so, why is parsing
> >> linked to type 1 grammars?  What has this to do with C?
>
> > Yes, and most literate people (Ben) are not of such a literal mind.
>
> > Lots of luck making the case that "parsing has nothing to do with C".
>
> I don't want to.  I wondered what type 1 grammars have to do with C
> and why you limit the term "parsing" to this one type even if only "in
> this context".

....because in most computer science books, "parsing" is as opposed to
"scanning". "Scanning (aka lexical analysis) versus parsing
(syntactical analysis)".

Aho, Sethi, et al. COMPILERS: PRINCIPLES, TECHNIQUES AND TOOLS, 2nd ed.
p5: "The first phase of a compiler is called *lexical analysis* or
*scanning*".

p. 8: "The second phase of a compiler is called *syntactical analysis*
or *parsing*".

In this usage, "parsing" is limited, but, of course, the "parsing" of
Chomsky type (or level, or stage, or fuck-all) 3 (or zero, or
"regular" or sod-all) == scanning.

Intellectual (and even moral) growth involves abandoning the habit of
rote terminology and the concomitant hostility towards the terminology
of the Other, and the appreciation of structures of wider and wider
scope. If, like friend Peter, you didn't take komputer science, then you
learned things bottom up and by chance, whereas at uni you go more top
down, learning an initial framework in which a set of words, not one
word, is the focus.

You also learn that everything has everything to do with everything,
the question being in what way we can sensibly explain.
>
> I think what has happened is that you expect people like me with
> literal minds to auto-correct what you write until it makes technical
> sense.  I am happy to do that, but if I auto-correct what you say
> about C I will end up with my (technical) opinions about it, not yours.

Indeed.
>
> <snip>
> --
> Ben.

0
spinoza1111
1/3/2010 2:21:48 PM
spinoza1111 <spinoza1111@yahoo.com> writes:

> On Jan 2, 3:28 pm, Seebs <usenet-nos...@seebs.net> wrote:
<snip>
>> The book clearly says it's an interpreter.
>
> We've been over this, Mister ADHD: an interpreter plus a front end
> scanner and parser is a compiler. If you'd taken CS at uni instead of
> psychology you would know this.

That disagrees with the usage I learnt from my CS degree course.

There is some flexibility in these terms but I wonder what you call a
system that directly interprets a parsed form of the source code.
This seems to be what "little C" does, and I think it is useful to
have a term for such systems so as to distinguish them from
"compilers" that produce some form of target code (albeit sometimes an
intermediate code rather than machine instructions).  I call them
interpreters.  What do you call them if not interpreters?

<snip>
-- 
Ben.
0
Ben
1/3/2010 2:31:34 PM
spinoza1111 <spinoza1111@yahoo.com> writes:

> On Jan 3, 9:58 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> spinoza1111<spinoza1...@yahoo.com> writes:
>> > On Jan 3, 5:21 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> >>spinoza1111<spinoza1...@yahoo.com> writes:
>> >> > On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >> >> spinoza1111wrote:
>> >> >> > On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >> >> >> spinoza1111wrote:
>> >> >> >>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>> >> >> >>>> spinoza1111wrote:
>> >> >> >>>> <snip>
>> >> >> >>>>> Case closed. Herb is a programmer who wrote a complete compiler and
>> >> >> >>>>> interpreter for C,
>> >> >> >>>> It wasn't a compiler, and it wasn't complete.
>> >> >> >>> (Sigh) This has been resolved and not in your favor.
>> >> >> >> Only in your fevered imagination. The implementation did not produce
>> >> >> >> standalone object code; it was an interpreter, not a compiler. And it
>>
>> >> >> > WRONG because it had to scan and parse.
>>
>> >> >> So does printf. That doesn't mean that printf is a compiler.
>>
>> >> > You are very ignorant because you could not tell from parsing that
>> >> > parsing means parsing at Chomsky level 1 in this context.
>>
>> >> Did you mean "type 1" rather than "level 1"?  If so, why is parsing
>> >> linked to type 1 grammars?  What has this to do with C?
>>
>> > Yes, and most literate people (Ben) are not of such a literal mind.
>>
>> > Lots of luck making the case that "parsing has nothing to do with C".
>>
>> I don't want to.  I wondered what type 1 grammars have to do with C
>> and why you limit the term "parsing" to this one type even if only "in
>> this context".
>
> ...because in most computer science books, "parsing" is as opposed to
> "scanning". "Scanning (aka lexical analysis) versus parsing
> (syntactical analysis)".
>
> Aho, Sethi, et al. COMPILERS: PRINCIPLES, TECHNIQUES AND TOOLS, 2nd ed.
> p5: "The first phase of a compiler is called *lexical analysis* or
> *scanning*".

That is still no answer.  I don't have that book anymore but I don't
recall anything much about parsing type 1 grammars in it.

<snip>
>> I think what has happened is that you expect people like me with
>> literal minds to auto-correct what you write until it makes technical
>> sense.  I am happy to do that, but if I auto-correct what you say
>> about C I will end up with my (technical) opinions about it, not yours.
>
> Indeed.

Do you not want me to correct your initial statement so that it makes
sense, then?  I am pretty sure you meant "Chomsky type 2".  Why not
just use the more usual term "context free grammars" so you don't
have to remember what type number they are?

-- 
Ben.
0
Ben
1/3/2010 3:13:43 PM
On Sun, 03 Jan 2010 14:31:34 +0000
Ben Bacarisse <ben.usenet@bsb.me.uk> wrote:

> This seems to be what "little C" does, and I think it is useful to
> have a term for such systems so as to distinguish them from
> "compilers" that produce some form of target code (albeit sometimes an
> intermediate code rather than machine instructions).  I call them
> interpreters.  What do you call them if not interpreters?

I bought this one some years ago

http://www.amazon.com/C-Complete-Reference-4th-Ed/dp/0072121246

and "Little C" is introduced as "A C Interpreter" by the author. I don't
know if this is an error done by the translator from English, because
I've got the italian version, but I don't think so...  

0
Lorenzo
1/3/2010 3:27:52 PM
Lorenzo Villari <vlllnz@tiscali.it> writes:

> On Sun, 03 Jan 2010 14:31:34 +0000
> Ben Bacarisse <ben.usenet@bsb.me.uk> wrote:
>
>> This seems to be what "little C" does, and I think it is useful to
>> have a term for such systems so as to distinguish them from
>> "compilers" that produce some form of target code (albeit sometimes an
>> intermediate code rather than machine instructions).  I call them
>> interpreters.  What do you call them if not interpreters?
>
> I bought this one some years ago
>
> http://www.amazon.com/C-Complete-Reference-4th-Ed/dp/0072121246
>
> and "Little C" is introduced as "A C Interpreter" by the author. I don't
> know if this is an error done by the translator from English, because
> I've got the italian version, but I don't think so...  

Yes.  I don't think there is any doubt that the vast majority of
people (including the author) would call little C an interpreter.

-- 
Ben.
0
Ben
1/3/2010 4:10:34 PM
"Ben Bacarisse" <ben.usenet@bsb.me.uk> wrote in message 
news:0.0e316b296bcc09db5599.20100103143134GMT.87eim79s55.fsf@bsb.me.uk...
> spinoza1111 <spinoza1111@yahoo.com> writes:
>
>> On Jan 2, 3:28 pm, Seebs <usenet-nos...@seebs.net> wrote:
> <snip>
>>> The book clearly says it's an interpreter.
>>
>> We've been over this, Mister ADHD: an interpreter plus a front end
>> scanner and parser is a compiler. If you'd taken CS at uni instead of
>> psychology you would know this.
>
> That disagrees with the usage I learnt from my CS degree course.

Same here.

>
> There is some flexibility in these terms but I wonder what you call a
> system that directly interprets a parsed form of the source code.
> This seems to be what "little C" does, and I think it is useful to
> have a term for such systems so as to distinguish them from
> "compilers" that produce some form of target code (albeit sometimes an
> intermediate code rather than machine instructions).  I call them
> interpreters.  What do you call them if not interpreters?
>

The dragon book (Aho, et. al.) calls them interpreters, as do the other 
compiler texts I have.

Dennis 

0
Dennis
1/3/2010 4:27:05 PM
spinoza1111 wrote:

I've read several of your latest batch of replies, but they didn't seem 
to contain any content worth responding to - your usual twaddle, in 
fact. This one is slightly different, insofar as it has some technical 
content, hence this reply. I've snipped out the awesomely stupid bits, 
though, and just left the mildly stupid and, on occasion, merely 
misguided bits.

> On Jan 2, 3:28 pm, Seebs <usenet-nos...@seebs.net> wrote:
>> On 2010-01-02, Richard Heathfield <r...@see.sig.invalid> wrote:
>>
>>> Only in your fevered imagination. The implementation did not produce
>>> standalone object code; it was an interpreter, not a compiler. And it
>>> did not implement the entire language. Therefore it was not complete.
>>> Therefore, the claim that it was a complete compiler - like so many of
>>> your claims - is mistaken.
>> The book clearly says it's an interpreter.
> 
> We've been over this, Mister ADHD: an interpreter plus a front end
> scanner and parser is a compiler.

According to your definition of "compiler", sure. Most people don't 
think of it like that. Most people, erroneously in my view, think of a 
compiler as generating (as you say later) native object code. My own 
view is that a compiler is a program that takes as input a program 
written in some language, and produces as output a program that does the 
same thing, but written in another language. (The "identity" compiler, 
which produces output written in the /same/ language, is cute but 
trivial.) If the target language is native object code, all well and 
good, but it doesn't have to be. An interpreter, on the other hand, 
takes a program written in some language, and executes it. Thus, we 
might think of a computer itself as a "machine code interpreter".

<snip>

> Your implicit definition of a compiler as a translator that generates
> native object code is false.

As far as I can tell, Seebs neither gave nor implied such a definition.

<snip>

> Your silly definition

Cite, please.

> also invalidates pp 1 & 2 of Aho Sethi et al.'s
> Dragon Book, which defines a compiler as a language transformer,
> specifically on p 2 identifying directly executable machine language
> as a subset of the possibilities.

Yes, the Dragon Book has got it right.

<usual rubbish snipped>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
1/3/2010 5:38:57 PM
Seebs <usenet-nospam@seebs.net> wrote:

> On 2009-12-30, Keith Thompson <kst-u@mib.org> wrote:
> > cri@tiac.net (Richard Harter) writes:
> >> So why are you reading this thread?
> 
> > Hmm.  I don't have a good answer to that.
> 
> I have something of one:
> 
> The fact is, you can learn a lot more sometimes from understanding why
> something is wrong than you would from understanding why it is right.

That's true for reading G.B. Shaw, who is almost always wrong, but
almost always in intelligent, interesting ways.

It ain't always so for other dumb folks, bubba.

Richard
0
raltbos
1/3/2010 6:50:58 PM
In article <0.0e316b296bcc09db5599.20100103143134GMT.87eim79s55.fsf@bsb.me.uk>,
Ben Bacarisse  <ben.usenet@bsb.me.uk> wrote:

>There is some flexibility in these terms but I wonder what you call a
>system that directly interprets a parsed form of the source code.

The term "semi-compiler" has been used, but even that usually
implies something more than a parsed version of the source.

On the other hand, I don't think many people would dispute that a
system that converts a program to byte codes, which are then
interpreted, is a compiler.  And of course even the machine
instructions generated by a native-code compiler may still be
"interpreted" by microcode.

I would say that the key feature of a compiler is that it translates
to something structurally simpler.  Turning loops into tests and jumps,
structure field accesses into arithmetic and dereferencing, that sort
of thing.  Some reduction in the complexity of the operations.  A parser
on the other hand merely produces a more convenient representation of
the same operations, so a parser plus something that interprets the
parse tree is not a compiler.  It's just a better interpreter than
one that operates on the program text directly.

A Fortran to C translator would be less likely to be called a compiler
than a Lisp to C translator, because the C constructs produced are likely
to be of comparable complexity to those in the original Fortran, but not
those in the original Lisp.

-- Richard
-- 
Please remember to mention me / in tapes you leave behind.
0
richard
1/3/2010 6:59:55 PM
On 2010-01-03, Ben Bacarisse <ben.usenet@bsb.me.uk> wrote:
> That disagrees with the usage I learnt from my CS degree course.

That means you're too educated.

I think Spinny's actually made the key point himself, probably
unintentionally:

>> We've been over this, Mister ADHD: an interpreter plus a front end
>> scanner and parser is a compiler.

The point, of course, being that generally a "compiled language" is one
which doesn't have the interpreter.  A normal C implementation doesn't
have an interpreter; it generates code which is run natively.  Schildt
did not write a thing which generated native code, so it's not what is
normally called a "compiler" -- thus his emphasis on it being an
interpreter.

You could argue, in a sort of pedantic way, that the interpreter actually
compiles-to-X, where X is something other than native code, and then
something else interprets the X.

So, given Java as an example to draw from:  Does Schildt's "C interpreter"
allow you to convert C programs to some kind of data file, which another
program or piece of hardware then executes?

-s
-- 
Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
0
Seebs
1/3/2010 8:26:39 PM
Richard Heathfield <rjh@see.sig.invalid> writes:
[...]
> According to your definition of "compiler", sure. Most people don't
> think of it like that. Most people, erroneously in my view, think of a
> compiler as generating (as you say later) native object code. My own
> view is that a compiler is a program that takes as input a program
> written in some language, and produces as output a program that does
> the same thing, but written in another language. (The "identity"
> compiler, which produces output written in the /same/ language, is
> cute but trivial.) If the target language is native object code, all
> well and good, but it doesn't have to be. An interpreter, on the other
> hand, takes a program written in some language, and executes it. Thus,
> we might think of a computer itself as a "machine code interpreter".
[...]

I would restrict the definition of "compiler" a bit more than that.
A translator that leaves substantial portions of the input language
unchecked and unmodified in the output is something I'd call a
preprocessor but not a compiler.

For example, the old Ratfor preprocessor, which translated a dialect
of Fortran (that had such things as if/then/else and structured loops
when Fortran itself didn't) into (then-) standard Fortran was not,
IMHO, a compiler.  You could write a Fortran program that didn't
use any Ratfor-specific constructs, and the translator would leave
it unchanged.  Even if there were errors in the input, the translator
would pass them through and leave it up to the Fortran compiler to
diagnose them.

Similarly, I don't consider the C preprocessor to be a compiler.

cfront, the old C++-to-C translator, on the other hand, was IMHO
a compiler.  In theory, if there were *any* errors in the input,
cfront itself would diagnose them.

I don't claim that my definition is more correct than anyone else's
(well, mostly), but that's how I think of it.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
1/3/2010 9:58:28 PM
Dennis (Icarus) wrote:
> 
> "Ben Bacarisse" <ben.usenet@bsb.me.uk> wrote in message 
> news:0.0e316b296bcc09db5599.20100103143134GMT.87eim79s55.fsf@bsb.me.uk...
>> spinoza1111 <spinoza1111@yahoo.com> writes:
>>
>>> On Jan 2, 3:28 pm, Seebs <usenet-nos...@seebs.net> wrote:
>> <snip>
>>>> The book clearly says it's an interpreter.
>>>
>>> We've been over this, Mister ADHD: an interpreter plus a front end
>>> scanner and parser is a compiler. If you'd taken CS at uni instead of
>>> psychology you would know this.
>>
>> That disagrees with the usage I learnt from my CS degree course.
> 
> Same here.
> 
>>
>> There is some flexibility in these terms but I wonder what you call a
>> system that directly interprets a parsed form of the source code.
>> This seems to be what "little C" does, and I think it is useful to
>> have a term for such systems so as to distinguish them from
>> "compilers" that produce some form of target code (albeit sometimes an
>> intermediate code rather than machine instructions).  I call them
>> interpreters.  What do you call them if not interpreters?
>>
> 
> The dragon book (Aho, et. al.) calls them interpreters, as do the other 
> compiler texts I have.
> 
> Dennis

You are completely missing the point, technical terms always mean 
exactly what Nilges says they do and if we disagree it is because we are 
ignorant and did not study CS at the same places he did.

Of course in the real world that the rest of us inhabit many terms have
changed their meaning over the last 50 years. Just think about how the
word 'computer' has changed its meaning. Come to that, exactly what is a
computer? (And be careful, because careless definitions will include many
things that most of us would not actually consider to be computers.)




0
Francis
1/3/2010 11:14:34 PM
On Jan 4, 7:14 am, Francis Glassborow
<francis.glassbo...@btinternet.com> wrote:
> Dennis (Icarus) wrote:
>
> > "Ben Bacarisse" <ben.use...@bsb.me.uk> wrote in message
> >news:0.0e316b296bcc09db5599.20100103143134GMT.87eim79s55.fsf@bsb.me.uk...
> >>spinoza1111<spinoza1...@yahoo.com> writes:
>
> >>> On Jan 2, 3:28 pm, Seebs <usenet-nos...@seebs.net> wrote:
> >> <snip>
> >>>> The book clearly says it's an interpreter.
>
> >>> We've been over this, Mister ADHD: an interpreter plus a front end
> >>> scanner and parser is a compiler. If you'd taken CS at uni instead of
> >>> psychology you would know this.
>
> >> That disagrees with the usage I learnt from my CS degree course.
>
> > Same here.
>
> >> There is some flexibility in these terms but I wonder what you call a
> >> system that directly interprets a parsed form of the source code.
> >> This seems to be what "little C" does, and I think it is useful to
> >> have a term for such systems so as to distinguish them from
> >> "compilers" that produce some form of target code (albeit sometimes an
> >> intermediate code rather than machine instructions).  I call them
> >> interpreters.  What do you call them if not interpreters?
>
> > The dragon book (Aho, et. al.) calls them interpreters, as do the other
> > compiler texts I have.
>
> > Dennis
>
> You are completely missing the point, technical terms always mean
> exactly what Nilges says they do and if we disagree it is because we are
> ignorant and did not study CS at the same places he did.

Well, you are ignorant, but technical terms (I've been saying) take
their meaning from multi-term structures in context, not from what I
say they mean.
>
> Of course in the real world that the rest of us inhabit many terms have
> changed their meaning over the last 50 years. Just think about how the
> word 'computer' has changed its meaning. Come to that, exactly what is a
> computer (and be careful because careless definitions will include many
> things that most of us would not actually consider to be computers)

0
spinoza1111
1/4/2010 1:07:26 AM
On Jan 4, 7:14 am, Francis Glassborow
<francis.glassbo...@btinternet.com> wrote:
> Dennis (Icarus) wrote:
>
> > "Ben Bacarisse" <ben.use...@bsb.me.uk> wrote in message
> >news:0.0e316b296bcc09db5599.20100103143134GMT.87eim79s55.fsf@bsb.me.uk...
> >>spinoza1111<spinoza1...@yahoo.com> writes:
>
> >>> On Jan 2, 3:28 pm, Seebs <usenet-nos...@seebs.net> wrote:
> >> <snip>
> >>>> The book clearly says it's an interpreter.
>
> >>> We've been over this, Mister ADHD: an interpreter plus a front end
> >>> scanner and parser is a compiler. If you'd taken CS at uni instead of
> >>> psychology you would know this.
>
> >> That disagrees with the usage I learnt from my CS degree course.
>
> > Same here.
>
> >> There is some flexibility in these terms but I wonder what you call a
> >> system that directly interprets a parsed form of the source code.
> >> This seems to be what "little C" does, and I think it is useful to
> >> have a term for such systems so as to distinguish them from
> >> "compilers" that produce some form of target code (albeit sometimes an
> >> intermediate code rather than machine instructions).  I call them
> >> interpreters.  What do you call them if not interpreters?
>
> > The dragon book (Aho, et. al.) calls them interpreters, as do the other
> > compiler texts I have.
>
> > Dennis
>
> You are completely missing the point, technical terms always mean
> exactly what Nilges says they do and if we disagree it is because we are
> ignorant and did not study CS at the same places he did.
>
> Of course in the real world that the rest of us inhabit many terms have
> changed their meaning over the last 50 years. Just think about how the
> word 'computer' has changed its meaning. Come to that, exactly what is a
> computer (and be careful because careless definitions will include many
> things that most of us would not actually consider to be computers)

Golly think ponder ponder...it's a Turing-complete device?

0
spinoza1111
1/4/2010 1:09:00 AM
On Jan 4, 4:26 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-01-03, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>
> > That disagrees with the usage I learnt from my CS degree course.
>
> That means you're too educated.
>
> I think Spinny's actually made the key point himself, probably
> unintentionally:
>
> >> We've been over this, Mister ADHD: an interpreter plus a front end
> >> scanner and parser is a compiler.
>
> The point, of course, being that generally a "compiled language" is one
> which doesn't have the interpreter.  A normal C implementation doesn't
> have an interpreter; it generates code which is run natively.  Schildt
> did not write a thing which generated native code, so it's not what is
> normally called a "compiler" -- thus his emphasis on it being an
> interpreter.

From the GNU C Compiler documentation:

"Historically, COMPILERS [my emphasis] for many languages, including C++
and Fortran, have been implemented as "preprocessors" which emit
another high level language such as C. None of the compilers included
in GCC are implemented this way; they all generate machine code
directly."

Because of sexual anxieties, many developers reserve words with high
positive connotation to mean complex and more difficult to implement
things, where the software becomes Lacan's and Zizek's "big
other" (the Stalinist father of WWII who by definition is never
satisfied with the son). Which of course goes against the meaning of
the adage that great engineering is simple engineering. If you're
writing a demo or instructional compiler as were Herb and I, the first
edition best emits interpreted code, and for this sacrifice of speed
you get better debugging.

The GNU documentation clearly implies that a COMPILER which "generates
machine code directly" is better than one that doesn't. But it also
uses "compiler" to refer to what Herb wrote.


>
> You could argue, in a sort of pedantic way, that the interpreter actually
> compiles-to-X, where X is something other than native code, and then
> something else interprets the X.

Gee, you could.
>
> So, given Java as an example to draw from:  Does Schildt's "C interpreter"
> allow you to convert C programs to some kind of data file, which another
> program or piece of hardware then executes?

Java and .Net do not typically interpret code: we've told you that,
dear little Peter. They translate bytecode to native code the first
time the path of logic that contains the code is executed in most
implementations, although there exist implementations which are
interpreters. Confusingly, esp. to people without proper education in
computer science, Microsoft documentation calls this step JIT
compilation, which is incorrect if we reserve the term "compiler" to
something that translates something in a Chomsky 1 or 0 source
language writable and readable by humans.
>
> -s
> --
> Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
1/4/2010 1:22:05 AM
On Jan 4, 4:26 am, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-01-03, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>
> > That disagrees with the usage I learnt from my CS degree course.
>
> That means you're too educated.

Wow. Wow. "Too educated". That's one for the red book:

"The 'Heap' is a DOS term"
"You're too educated"

"Too" educated?

Oh that this too too solid Flesh, would melt,
Thaw, and resolue it selfe into a Dew
>
> I think Spinny's actually made the key point himself, probably
> unintentionally:
>
> >> We've been over this, Mister ADHD: an interpreter plus a front end
> >> scanner and parser is a compiler.
>
> The point, of course, being that generally a "compiled language" is one

Yes. The participle is in opposition to interpreted. But the use isn't
grammatically consistent.

> which doesn't have the interpreter.  A normal C implementation doesn't
> have an interpreter; it generates code which is run natively.  Schildt
> did not write a thing which generated native code, so it's not what is
> normally called a "compiler" -- thus his emphasis on it being an
> interpreter.
>
> You could argue, in a sort of pedantic way, that the interpreter actually
> compiles-to-X, where X is something other than native code, and then
> something else interprets the X.
>
> So, given Java as an example to draw from:  Does Schildt's "C interpreter"
> allow you to convert C programs to some kind of data file, which another
> program or piece of hardware then executes?

Or that the Euerlasting had not fixt
316: His Cannon 'gainst Selfe-slaughter. O God, O God!
317: How weary, stale, flat, and vnprofitable
318: Seemes to me all the vses of this world?
319: Fie on't? Oh fie, fie, 'tis an vnweeded Garden
320: That growes to Seed: Things rank, and grosse in Nature
321: Possesse it meerely. That it should come to this!
>
> -s
> --
> Copyright 2010, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

0
spinoza1111
1/4/2010 1:26:40 AM
On Jan 3, 11:13 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> spinoza1111 <spinoza1...@yahoo.com> writes:
> > On Jan 3, 9:58 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> >> spinoza1111 <spinoza1...@yahoo.com> writes:
> >> > On Jan 3, 5:21 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> >> >> spinoza1111 <spinoza1...@yahoo.com> writes:
> >> >> > On Jan 2, 5:45 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >> >> spinoza1111 wrote:
> >> >> >> > On Jan 2, 2:43 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >> >> >> spinoza1111 wrote:
> >> >> >> >>> On Jan 1, 10:59 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> >> >> >>>> spinoza1111 wrote:
> >> >> >> >>>> <snip>
> >> >> >> >>>>> Case closed. Herb is a programmer who wrote a complete compiler and
> >> >> >> >>>>> interpreter for C,
> >> >> >> >>>> It wasn't a compiler, and it wasn't complete.
> >> >> >> >>> (Sigh) This has been resolved and not in your favor.
> >> >> >> >> Only in your fevered imagination. The implementation did not produce
> >> >> >> >> standalone object code; it was an interpreter, not a compiler. And it
>
> >> >> >> > WRONG because it had to scan and parse.
>
> >> >> >> So does printf. That doesn't mean that printf is a compiler.
>
> >> >> > You are very ignorant because you could not tell from parsing that
> >> >> > parsing means parsing at Chomsky level 1 in this context.
>
> >> >> Did you mean "type 1" rather than "level 1"?  If so, why is parsing
> >> >> linked to type 1 grammars?  What has this to do with C?
>
> >> > Yes, and most literate people (Ben) are not of such a literal mind.
>
> >> > Lots of luck making the case that "parsing has nothing to do with C".
>
> >> I don't want to.  I wondered what type 1 grammars have to do with C
> >> and why you limit the term "parsing" to this one type even if only "in
> >> this context".
>
> > ...because in most computer science books, "parsing" is as opposed to
> > "scanning". "Scanning (aka lexical analysis) versus parsing
> > (syntactical analysis)".
>
> > Aho, Sethi, et al. COMPILERS: PRINCIPLES, TECHNIQUES, AND TOOLS 2nd ed.
> > p5: "The first phase of a compiler is called *lexical analysis* or
> > *scanning*".
>
> That is still no answer.  I don't have that book anymore but I don't
> recall anything much about parsing type 1 grammars in it.
>
> <snip>
>
> >> I think what has happened is that you expect people like me with
> >> literal minds to auto-correct what you write until it makes technical
> >> sense.  I am happy to do that, but if I auto-correct what you say
> >> about C I will end up with my (technical) opinions about it, not yours.
>
> > Indeed.
>
> Do you not want me to correct your initial statement so that it makes
> sense, then?  I am pretty sure you meant "Chomsky type 2".  Why not
> just use the more usual term "context free grammars" so you don't
> have to remember what type number they are?

No thanks, because in my readings about automata theory (which started
with independent reading of Hopcroft and Ullman in 1971 and included a
graduate level class), this numbering system has VARIED.

When will you learn that clerkish knowledge isn't knowledge? And when
will you start using your skills for good and not evil?
>
> --
> Ben.

0
spinoza1111
1/4/2010 1:31:40 AM
On Jan 3, 6:05 pm, Francis Glassborow
<francis.glassbo...@btinternet.com> wrote:
> spinoza1111 wrote:
> > On Jan 3, 5:02 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
> >> spinoza1111 wrote:
> >>> On Jan 3, 4:31 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> >>>> spinoza1111 wrote:
> >> <snip>
>
> >>> I am very familiar with the ethics of such "consultants".
> >> You are very quick to generalise.
>
> > Based on experience. It's called "intelligence".
>
> > In my experience, nobody should brag about being a contract
> > programmer. Most of my career, I've not been one, and most programmers
> > I've known who were all-contract-all-the-time were forced to work to
> > rule as low level temps.
>
> > From the viewpoint of a company that's heavily invested in C, the last
> > thing it wants to hear is that C sucks, so obtaining a warm body
> > labeled "C programmer" is in the shortest of terms "rational", because
> > at the level of mere programmer, the programmer is expected to be
> > uneducated in comp sci so that he's loyal to the language he knows.
>
> OK that tells us something about the programmers you normally come in
> contact with. Contract workers come in two bands, those who are too

"You are the homeless men and crack hos I see from my car" is not an
argument. Nearly all contract programmers are basically rejects from a
bad system, which says something about them and the system.
> awful to employ permanently and those who are so good that paying them
> what they are worth as employees would distort the company pay structures.
>
> I have been a critic of pay structures that are based on hierarchies
> exactly because they result in your most skilled employees either being
> promoted out of their skill area (there is no reason to think that an
> excellent programmer can be a manager) or leaving (actually often to be
> re-employed as a consultant).
>
> There are a few companies around that understand that employees should
> be paid well for doing a good job irrespective of what that job is. I
> have been known to suggest that managers at all levels should have their
> pay tied to the performance/productivity of their 'team'.

0
spinoza1111
1/4/2010 1:35:25 AM
Keith Thompson wrote:

<snip>

> I would restrict the definition of "compiler" a bit more than that.
> A translator that leaves substantial portions of the input language
> unchecked and unmodified in the output is something I'd call a
> preprocessor but not a compiler.
> 
> For example, the old Ratfor preprocessor, which translated a dialect
> of Fortran (that had such things as if/then/else and structured loops
> when Fortran itself didn't) into (then-) standard Fortran was not,
> IMHO, a compiler.  You could write a Fortran program that didn't
> use any Ratfor-specific constructs, and the translator would leave
> it unchanged.  Even if there were errors in the input, the translator
> would pass them through and leave it up to the Fortran compiler to
> diagnose them.

Pro*C is such a beast, too - it converts embedded SQL constructs into C 
function calls, but doesn't attempt to translate real C. I am accustomed 
to calling such programs "pre-compilers", and I think this is fairly 
widespread, at least for Pro*C itself.

I have occasionally had cause to write code that generates preprocessor 
instructions, and I've tended to think of it as prepreprocessing rather 
than precompilation, probably because it's relatively simple compared to 
the kind of stuff pro*C gets up to.

> Similarly, I don't consider the C preprocessor to be a compiler.

Neither do I. Nevertheless, that's kind of what it is. (Oddly, it 
manages to translate from C to C and yet is not an "identity compiler".)

<snip>

-- 
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
0
Richard
1/4/2010 2:05:59 AM
spinoza1111 <spinoza1111@yahoo.com> writes:

> On Jan 3, 11:13 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> spinoza1111<spinoza1...@yahoo.com> writes:
>> > On Jan 3, 9:58 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> >>spinoza1111<spinoza1...@yahoo.com> writes:
>> >> > On Jan 3, 5:21 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> >> >>spinoza1111<spinoza1...@yahoo.com> writes:
<snip>
>> >> >> > You are very ignorant because you could not tell from parsing that
>> >> >> > parsing means parsing at Chomsky level 1 in this context.
>>
>> >> >> Did you mean "type 1" rather than "level 1"?  If so, why is parsing
>> >> >> linked to type 1 grammars?  What has this to do with C?
>>
>> >> > Yes, and most literate people (Ben) are not of such a literal mind.
<snip>
>> Do you not want me to correct your initial statement so that it makes
>> sense, then?  I am pretty sure you meant "Chomsky type 2".  Why not
>> just use the more usual term "context free grammars" so you don't
>> have to remember what type number they are?
>
> No thanks, because in my readings about automata theory (which started
> with independent reading of Hopcroft and Ullman in 1971 and included a
> graduate level class), this numbering system has VARIED.

Citation?  Nothing I've seen by anyone as reputable as Hopcroft or
Ullman uses any other numbering.  Who does (other than yourself)?

<snip>
-- 
Ben.
0
Ben
1/4/2010 2:33:10 AM
spinoza1111 <spinoza1111@yahoo.com> writes:
<snip>
> The GNU documentation clearly implies that a COMPILER which "generates
> machine code directly" is better than one that doesn't. But it also
> uses "compiler" to refer to what Herb wrote.

Citation?  Specifically, where the GNU documentation uses "compiler"
for what is clearly an interpreter.  I can't find any such use.

<snip>
-- 
Ben.
0
Ben
1/4/2010 2:36:47 AM
On Dec 27 2009, 12:20 pm, Seebs <usenet-nos...@seebs.net> wrote:
> > On Dec 27, 9:36 am, spinoza1111 <spinoza1...@yahoo.com> wrote:
> > Here is the C code.
> > long long factorial(long long N)
> > {
> >     long long nFactorialRecursive;
> >     long long nFactorialIterative;
> >     long long Nwork;
> >     if (N <= 2) return N;
> >     for ( nFactorialIterative = 1, Nwork = N;
> >           Nwork > 1;
> >           Nwork-- )
> >         nFactorialIterative *= Nwork;
> >     nFactorialRecursive = N * factorial(N-1);
> >     if (nFactorialRecursive != nFactorialIterative)
> >        printf("%I64d! is %I64d recursively but %I64d iteratively wtf!\n",
> >               N,
> >               nFactorialIterative,
> >               nFactorialRecursive);
> >     return nFactorialRecursive;
>
> > }
>
> I'm impressed.  Very few people could define an N^2 algorithm for calculating
> factorials.
>
> Hint:  When you call factorial(19), you calculate factorial(19) iteratively,
> and then you calculate 19 * factorial(18).  You then calculate factorial(18)
> iteratively, then calculate 18 * factorial(17).  Etcetera.
>
> In short, for factorial(19), instead of performing 38 multiplications and
> 19 calls, you perform 19 multiplications and 19 calls for the recursive
> calculation, plus 164 multiplications for the iterative calculations.
>
> This is not a reasonable way to go about things.  This is a pretty
> impressive screwup on your part, and you can't blame the language design;
> this is purely at the algorithm-design level, not any kind of mysterious
> quirk of C.
>
> Again, the problem isn't with C's design; it's that you are too muddled
> to design even a basic test using two algorithms, as you embedded one
> in another.
>
> Here's two test programs.  One's yours, but I switched to 'long double'
> and used 24! instead of 19! as the test case, and multiplied the number
> of trials by 10.
>
> The main loop is unchanged except for the change in N and the switch to
> %Lf.
> Yours:
>         long double factorial(long double N)
>         {
>             long double nFactorialRecursive;
>             long double nFactorialIterative;
>             long double Nwork;
>             if (N <= 2) return N;
>             for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
>                 nFactorialIterative *= Nwork;
>             nFactorialRecursive = N * factorial(N-1);
>             if (nFactorialRecursive != nFactorialIterative)
>                printf("%Lf! is %Lf recursively but %Lf iteratively wtf!\n",
>                       N,
>                       nFactorialIterative,
>                       nFactorialRecursive);
>             return nFactorialRecursive;
>         }
>
> Mine:
>
>         long double ifactorial(long double N)
>         {
>             long double nFactorialIterative;
>             long double Nwork;
>             if (N <= 2) return N;
>             for ( nFactorialIterative = 1, Nwork = N; Nwork > 1; Nwork-- )
>                 nFactorialIterative *= Nwork;
>             return nFactorialIterative;
>         }
>
>         long double rfactorial(long double N)
>         {
>             long double nFactorialRecursive;
>             if (N <= 2) return N;
>             nFactorialRecursive = N * rfactorial(N-1);
>             return nFactorialRecursive;
>         }
>
>         long double factorial(long double N)
>         {
>             long double nFactorialRecursive;
>             long double nFactorialIterative;
>             nFactorialIterative = ifactorial(N);
>             nFactorialRecursive = rfactorial(N);
>             if (nFactorialRecursive != nFactorialIterative)
>                printf("%Lf! is %Lf recursively but %Lf iteratively wtf!\n",
>                       N,
>                       nFactorialIterative,
>                       nFactorialRecursive);
>             return nFactorialRecursive;
>         }
>
> Output from the main loops:
>
> 24! is 620448401733239409999872: 14.00 seconds to calculate 10000000 times
> 24! is 620448401733239409999872: 5.00 seconds to calculate 10000000 times
>
> ... Which is to say, no one cares whether C# is faster or slower than C
> by a few percent, when non-idiotic code is faster than idiotic code by
> nearly a factor of three.
>
> -s
> --
> Copyright 2009, all wrongs reversed.  Peter Seebach / usenet-nos...@seebs.net
> http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
> http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

Sorry to interrupt. I am just curious why the second implementation
is significantly faster than the first implementation. My guess is
that the second implementation breaks the two factorial calculations
into two subroutines, hence the memory can be released after each
calculation, which boosts the speed. Is my guess reasonable?

-jw
0
JW
1/4/2010 7:31:07 AM
On Jan 4, 10:33 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> spinoza1111 <spinoza1...@yahoo.com> writes:
> > On Jan 3, 11:13 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> >> spinoza1111 <spinoza1...@yahoo.com> writes:
> >> > On Jan 3, 9:58 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> >> >> spinoza1111 <spinoza1...@yahoo.com> writes:
> >> >> > On Jan 3, 5:21 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> >> >> >> spinoza1111 <spinoza1...@yahoo.com> writes:
> <snip>
> >> >> >> > You are very ignorant because you could not tell from parsing that
> >> >> >> > parsing means parsing at Chomsky level 1 in this context.
>
> >> >> >> Did you mean "type 1" rather than "level 1"?  If so, why is parsing
> >> >> >> linked to type 1 grammars?  What has this to do with C?
>
> >> >> > Yes, and most literate people (Ben) are not of such a literal mind.
> <snip>
> >> Do you not want me to correct your initial statement so that it makes
> >> sense, then?  I am pretty sure you meant "Chomsky type 2".  Why not
> >> just use the more usual term "context free grammars" so you don't
> >> have to remember what type number they are?
>
> > No thanks, because in my readings about automata theory (which started
> > with independent reading of Hopcroft and Ullman in 1971 and included a
> > graduate level class), this numbering system has VARIED.
>
> Citation?  Nothing I've seen by anyone as reputable as Hopcroft or
> Ullman uses any other numbering.  Who does (other than yourself)?



It's not important, Ben. What's important is the taxonomy: regular
languages, context free languages, context sensitive languages, and
everything else. I studied this stuff outside the workplace and on my
own time in 1971 and revisited in a graduate school class in which I
got an A. However, I've never used the information on the job nor have
I taught it. I think that the awareness that the taxonomy exists is
the important thing here, and this awareness doesn't exist in the
typical auto-didact.

Having one's own notation could indicate that one appreciates it at a
deeper level than the rote-learning swot, or invented it on one's own
like some sort of crazed savant. In my case, the former is true and
the latter is false.

>
> <snip>
> --
> Ben.

0
spinoza1111
1/4/2010 8:43:28 AM
On 4 Jan, 08:43, spinoza1111 <spinoza1...@yahoo.com> wrote:
> On Jan 4, 10:33 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> > spinoza1111 <spinoza1...@yahoo.com> writes:
> > > On Jan 3, 11:13 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> > >> spinoza1111 <spinoza1...@yahoo.com> writes:
> > >> > On Jan 3, 9:58 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> > >> >> spinoza1111 <spinoza1...@yahoo.com> writes:
> > >> >> > On Jan 3, 5:21 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> > >> >> >> spinoza1111 <spinoza1...@yahoo.com> writes:

> > >> >> >> > You are very ignorant because you could not tell from parsing that
> > >> >> >> > parsing means parsing at Chomsky level 1 in this context.

the amusing thing is you name-drop Chomsky then apparently get it
wrong. It probably would have taken you 60s tops to check it out.

> > >> >> >> Did you mean "type 1" rather than "level 1"?  If so, why is parsing
> > >> >> >> linked to type 1 grammars?  What has this to do with C?
>
> > >> >> > Yes, and most literate people (Ben) are not of such a literal mind.

once you start using technical vocabulary you enter the realm of
exactness.

> > >> Do you not want me to correct your initial statement so that it makes
> > >> sense, then?  I am pretty sure you meant "Chomsky type 2".  Why not
> > >> just use the more usual term "context free grammars" so you don't
> > >> have to remember what type number they are?
>
> > > No thanks, because in my readings about automata theory (which started
> > > with independent reading of Hopcroft and Ullman in 1971 and included a
> > > graduate level class), this numbering system has VARIED.

so you were self-taught on this stuff?

> > Citation?  Nothing I've seen by anyone as reputable as Hopcroft or
> > Ullman uses any other numbering.  Who does (other than yourself)?
>
> It's not important, Ben. What's important is the taxonomy: regular
> languages, context free languages, context sensitive languages, and
> everything else.

If you'd said that at the beginning it would have been impressive.


> I studied this stuff outside the workplace and on my
> own time in 1971 and revisited in a graduate school class in which I
> got an A. However, I've never used the information on the job nor have
> I taught it. I think that the awareness that the taxonomy exists is
> the important thing here, and this awareness doesn't exist in the
> typical auto-didact.

doesn't auto-didact mean "self taught"?


> Having one's own notation could indicate that one appreciates it at a
> deeper level than the rote-learning swot, or invented it on one's own
> like some sort of crazed savant. In my case, the former is true and
> the latter is false.

riight...

0
Nick
1/4/2010 9:38:06 AM
On 4 Jan, 01:22, spinoza1111 <spinoza1...@yahoo.com> wrote:
> On Jan 4, 4:26 am, Seebs <usenet-nos...@seebs.net> wrote:
> > On 2010-01-03, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:


> > > That disagrees with the usage I learnt from my CS degree course.
>
> > That means you're too educated.
>
> > I think Spinny's actually made the key point himself, probably
> > unintentionally:
>
> > >> We've been over this, Mister ADHD: an interpreter plus a front end
> > >> scanner and parser is a compiler.
>
> > The point, of course, being that generally a "compiled language" is one
> > which doesn't have the interpreter.  A normal C implementation doesn't
> > have an interpreter; it generates code which is run natively.  Schildt
> > did not write a thing which generated native code, so it's not what is
> > normally called a "compiler" -- thus his emphasis on it being an
> > interpreter.
>
> From the GNU C Compiler documentation:
>
> "Historically, COMPILERS [my emphasis] for many languages, including C++
> and Fortran, have been implemented as "preprocessors" which emit
> another high level language such as C. None of the compilers included
> in GCC are implemented this way; they all generate machine code
> directly."

<snip sociological bullshit>

> If you're
> writing a demo or instructional compiler as were Herb and I, the first
> edition best emits interpreted code, and for this sacrifice of speed
> you get better debugging.

did Schildt's "translator" (to use a neutral term) emit any sort of
code?

> The GNU documentation clearly implies that a COMPILER which "generates
> machine code directly" is better than one that doesn't.

no it doesn't. It says many compilers emit HLL output (for instance
C-front, the first C++ compiler, emitted C which could then be compiled to
native code using a traditional C compiler). gcc emits native code; it
doesn't say (or imply) one is "better" than the other.

> But it also
> uses "compiler" to refer to what Herb wrote.

I don't think it does.

> > You could argue, in a sort of pedantic way, that the interpreter actually
> > compiles-to-X, where X is something other than native code, and then
> > something else interprets the X.
>
> Gee, you could.

point being?

I'm playing around with a scheme (Lisp) compiler that emits C which
can then be compiled with a C compiler. I'm happy to call that a
compiler. I got the impression Schildt's translator almost directly
executed the source code.


> > So, given Java as an example to draw from:  Does Schildt's "C interpreter"
> > allow you to convert C programs to some kind of data file, which another
> > program or piece of hardware then executes?
>
> Java and .Net do not typically interpret code: we've told you that,
> dear little Peter. They translate bytecode to native code the first
> time the path of logic that contains the code is executed in most
> implementations, although there exist implementations which are
> interpreters. Confusingly, esp. to people without proper education in
> computer science, Microsoft documentation calls this step JIT
> compilation, which is incorrect if we reserve the term "compiler" to
> something that translates something in a Chomsky 1 or 0 source
> language writable and readable by humans.

JIT compiler seems a reasonable term. I'm not convinced "compiling" is
limited to human readable texts.

Isn't it theoretically possible for JIT compilers to be faster than
normal compilers because they actually know what environment the code
is running in? That is, they could (in theory) profile the code as it
runs.


0
Nick
1/4/2010 9:51:18 AM
On 3 Jan, 23:14, Francis Glassborow
<francis.glassbo...@btinternet.com> wrote:
> Dennis (Icarus) wrote:
>
> > "Ben Bacarisse" <ben.use...@bsb.me.uk> wrote in message
> >news:0.0e316b296bcc09db5599.20100103143134GMT.87eim79s55.fsf@bsb.me.uk...
> >> spinoza1111 <spinoza1...@yahoo.com> writes:
>
> >>> On Jan 2, 3:28 pm, Seebs <usenet-nos...@seebs.net> wrote:
> >> <snip>
> >>>> The book clearly says it's an interpreter.
>
> >>> We've been over this, Mister ADHD: an interpreter plus a front end
> >>> scanner and parser is a compiler. If you'd taken CS at uni instead of
> >>> psychology you would know this.
>
> >> That disagrees with the usage I learnt from my CS degree course.
>
> > Same here.
>
> >> There is some flexibility in these terms but I wonder what you call a
> >> system that directly interprets a parsed form of the source code.
> >> This seems to be what "little C" does, and I think it is useful to
> >> have a term for such systems so as to distinguish them from
> >> "compilers" that produce some form of target code (albeit sometimes an
> >> intermediate code rather than machine instructions).  I call them
> >> interpreters.  What do you call them if not interpreters?
>
> > The dragon book (Aho, et. al.) calls them interpreters, as do the other
> > compiler texts I have.
>
> > Dennis
>
> You are completely missing the point, technical terms always mean
> exactly what Nilges says they do and if we disagree it is because we are
> ignorant and did not study CS at the same places he did.
>
> Of course in the real world that the rest of us inhabit many terms have
> changed their meaning over the last 50 years. Just think about how the
> word 'computer' has changed its meaning. Come to that, exactly what is a
> computer (and be careful because careless definitions will include many
> things that most of us would not actually consider to be computers).

an information transforming device? I'm happy to accept much embedded
stuff to be a computer though I feel I've let in everything from a
microphone to a printing press.





0
Nick
1/4/2010 9:54:00 AM
Nick Keighley wrote:
> On 3 Jan, 23:14, Francis Glassborow
> <francis.glassbo...@btinternet.com> wrote:

>> Of course in the real world that the rest of us inhabit many terms have
>> changed their meaning over the last 50 years. Just think about how the
>> word 'computer' has changed its meaning. Come to that, exactly what is a
>> computer (and be careful because careless definitions will include many
>> things that most of us would not actually consider to be computers).
> 
> an information transforming device? I'm happy to accept much embedded
> stuff to be a computer though I feel I've let in everything from a
> microphone to a printing press.
> 

That, of course, is the problem. Nilges' "Turing-complete device" is too 
restrictive because such devices cannot actually exist (they require 
unlimited resources) and more practical definitions tend to let in more 
than we feel comfortable with. Is a mobile (cell) phone a computer? What 
about a digital camera? A TV? Where do we draw the line?
0
Francis
1/4/2010 10:27:58 AM
spinoza1111 wrote:
> 
> It's not important, Ben. What's important is the taxonomy: regular
> languages, context free languages, context sensitive languages, and
> everything else. I studied this stuff outside the workplace and on my
> own time in 1971 and revisited in a graduate school class in which I
> got an A. However, I've never used the information on the job nor have
> I taught it. I think that the awareness that the taxonomy exists is
> the important thing here, and this awareness doesn't exist in the
> typical auto-didact.

Many years ago I was a sub-editor (proof reader in reality) and I was 
given a book by a well known expert on some form of technology. My editor
asked that I be vigilant in ensuring that no mention was made anywhere 
in the front matter o