absence of max

classical question

maybe someone knows this already (i realized it clearly quite recently): there
is no max (nor min) anywhere in the c standard library (afaik there are only
fmax and fmin for floats in math.h)

this is weird, maybe a max from the library would be slow (though compilers
have intrinsics for that so it could be made fast) and maybe this is the
reason, but the absence of min/max has the weird result that it must be
defined in each project (no big worry as it is one line, but still weird)
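
For reference, the usual one-liner looks something like this (a generic sketch; the names are illustrative and not from any particular project):

  #define MAX(a, b) (((a) > (b)) ? (a) : (b))   /* classic macro form; evaluates a and b twice */

  static inline int max_int(int a, int b) {     /* function form avoids double evaluation, */
      return a > b ? a : b;                     /* but covers only one type */
  }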
fir
11/26/2016 8:49:41 PM

It takes 14 seconds to write your own custom max() function, one capable of
handling your unique data/app's needs.

That's probably why.

Best regards,
Rick C. Hodgin
Rick
11/26/2016 11:34:37 PM
On 26/11/2016 23:34, Rick C. Hodgin wrote:
> It takes 14 seconds to write your own custom max() function, one capable of
> handling your unique data/app's needs.
>
> That's probably why.

It takes 14 seconds in Python as well, yet it's a built-in function.

Whereas, the way I implement it, I can also write:

  x max:= y       # x := max(x,y)

In C, you'd have to write different versions for signed, unsigned, 
double and pointers (8 functions). If you don't want to always use 
64-bit arithmetic, you might want 32-bit versions; that's another 6 
functions.

So 14 functions so far, and you haven't yet looked at in-place versions.

If you try and do it with macros, then you have to think about problems 
like MAX(a[++i],b). Plus all the usual issues you get when you leave 
things like this to the creativity of millions of individual programmers.
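
For instance, a small sketch of the hazard (the array and values here are made up purely to illustrate):

  #define MAX(a, b) ((a) > (b) ? (a) : (b))

  int a[] = {3, 7, 5};
  int i = 0, b = 4;
  int m = MAX(a[++i], b);
  /* expands to ((a[++i]) > (b) ? (a[++i]) : (b)):
     i is incremented twice, so m ends up as a[2] (5) rather than a[1] (7),
     and i ends up as 2 */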

There are benefits to having even apparently trivial features like this 
supported by the language.

-- 
Bartc
BartC
11/27/2016 12:21:12 AM
I wrote a reply last night, but I guess it was lost somehow.  It basically said
Python's a weakly-typed language, isn't it?  If so it would have to have its
own built-in min() and max() functions to be able to reliably compare
disparate types.  I see you also mention this as a need in C, but I would
argue to a lesser degree because of casting.  We can create a macro with
a ternary operator and generate code for every case.  The compiler may
complain here or there so you apply the appropriate cast.

Another thought this morning.  Python moves by people's needs.  C
moves by committee choices.  If something is not deemed desirable
by the committee, it will never officially be added.  But, I think
that era of fixedness in those maintaining the C Standard will have to
change, lest it be ignored as people move on past it.

Best regards,
Rick C. Hodgin
Rick
11/27/2016 12:25:51 PM
On 11/26/2016 3:49 PM, fir wrote:
> classical question
> 
> maybe someone knows this already (i realized it clearly quite recently): there is no max (nor min) anywhere in the c standard library (afaik there are only fmax and fmin for floats in math.h)
> 
> this is weird, maybe a max from the library would be slow (though compilers have intrinsics for that so it could be made fast) and maybe this is the reason, but the absence of min/max has the weird result that it must be defined in each project (no big worry as it is one line, but still weird)
> 
gcc -ffast-math optimizes fmax et al. by discarding the special
requirements about non-finite operands.
Recent releases of icc optimize the usual macros (with the usual caveats).
There isn't adequate overlap even between icc and gcc on expressing this
efficiently (nor between icpc, which optimizes std::max, and g++ or
msvc++).  So I have macros which translate according to the compiler in use.
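
A hedged sketch of what such per-compiler macros might look like (the FAST_FMAX name and the exact predefined macros tested are illustrative assumptions, not Tim's actual code):

  #include <math.h>

  #if defined(__INTEL_COMPILER)
    /* icc reportedly optimizes the plain comparison form well
       (note it still evaluates its arguments twice) */
    #define FAST_FMAX(a, b) ((a) > (b) ? (a) : (b))
  #elif defined(__GNUC__)
    /* gcc with -ffast-math optimizes calls to fmax() itself */
    #define FAST_FMAX(a, b) fmax((a), (b))
  #else
    #define FAST_FMAX(a, b) ((a) > (b) ? (a) : (b))
  #endif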
Tim
11/27/2016 12:40:50 PM
On 27-11-16 13:25, Rick C. Hodgin wrote:
> I wrote a reply last night, but I guess it was lost somehow.  It basically said
> Python's a weakly-typed language, isn't it?  If so it would have to have its
> own built-in min() and max() functions to be able to reliably compare
> disparate types.  I see you also mention this as a need in C, but I would
> argue to a lesser degree because of casting.  We can create a macro with
> a trinary operator and generate code for every case.  The compiler may
> complain here or there so you apply the appropriate cast.
>
> Another thought this morning.  Python moves by people needs.  C
> moves by committee choices.  If something is not deemed desirable
> by the committee, it will never officially be added.  But, I think
> that era of fixedness in those maintaining the C Standard will have to
> change, lest it be ignored as people move on past it.

Also, don't forget we're comparing two completely different languages here.
A high-level interpreted language with little optimization benefits from a
high number of built-in functions; they're much faster than user-created
code. In C the speed gain is much less, often completely non-existent.

-- 
Robert Spanjaard
Robert
11/27/2016 12:54:05 PM
On Sun, 27 Nov 2016 04:25:51 -0800 (PST)
"Rick C. Hodgin" <rick.c.hodgin@gmail.com> wrote:

> I wrote a reply last night, but I guess it was lost somehow.  It
> basically said Python's a weakly-typed language, isn't it?  If so it
> would have to have its own built-in min() and max() functions to be
> able to reliably compare disparate types.  I see you also mention
> this as a need in C, but I would argue to a lesser degree because of
> casting.  We can create a macro with a trinary operator and generate
> code for every case.  The compiler may complain here or there so you
> apply the appropriate cast.
> 
> Another thought this morning.  Python moves by people needs.  C
> moves by committee choices.  If something is not deemed desirable
> by the committee, it will never officially be added.  But, I think
> that era of fixedness in those maintaining the C Standard will have to
> change, lest it be ignored as people move on past it.
> 
> Best regards,
> Rick C. Hodgin

Python has bad breakage between 2 and 3...

-- 
press any key to continue or any other to quit
Melzzzzz
11/27/2016 1:06:00 PM
On 27/11/2016 12:54, Robert Spanjaard wrote:
> On 27-11-16 13:25, Rick C. Hodgin wrote:
>> I wrote a reply last night, but I guess it was lost somehow.  It
>> basically said
>> Python's a weakly-typed language, isn't it?  If so it would have to
>> have its
>> own built-in min() and max() functions to be able to reliably compare
>> disparate types.  I see you also mention this as a need in C, but I would
>> argue to a lesser degree because of casting.  We can create a macro with
>> a trinary operator and generate code for every case.  The compiler may
>> complain here or there so you apply the appropriate cast.
>>
>> Another thought this morning.  Python moves by people needs.  C
>> moves by committee choices.  If something is not deemed desirable
>> by the committee, it will never officially be added.  But, I think
>> that era of fixedness in those maintaining the C Standard will have to
>> change, lest it be ignored as people move on past it.
>
> Also, don't forget we're comparing two completely different languages here.
> A high-level interpreted language with little optimization benefits from a
> high number of built-in functions; they're much faster than user-created
> code. In C the speed gain is much less, often completely non-existant.

Actually, if I write such a function in user-code:

     def mymax(a,b):
         if a>=b: return a
         return b

it's actually *faster* than the built-in 'max'!

(Nearly 50% faster on one Python version. However, the specification is 
different, as the built-in 'max' deals with N operands, or can do an 
element-by-element compare of N lists, returning the 'greater' of all the 
lists.

But if you do want to simply compare two numbers, then a user max 
function can be faster. And here, you only need to write one such 
function (and another for 'min'), as the overloading needed is automatic.)

But it's not about speed; to me, 'min' and 'max' are almost as basic as 
C's built-in, overloaded operators '<', '>', '<=' and '>='.

-- 
Bartc
BartC
11/27/2016 1:54:56 PM
On Sunday, November 27, 2016 at 7:55:08 AM UTC-6, Bart wrote:
> Actually, if I write such a function in user-code:
> 
>      def mymax(a,b):
>          if a>=b: return a
>          return b
> 
> it's actually *faster* than the built-in 'max'!
> 
> (Nearly 50% faster on one Python version. However the specification is 
> different as built-in 'max' deals with N operands, or can do an 
> element-by-element compare of N lists returning the 'greater' of all the 
> lists.

Out of curiosity, how do the semantics compare if one of the operands is
NaN?  There are some situations where it would be useful to have max(X,NaN)
and max(NaN,X) both return X, and cases where it would be helpful to have
both return NaN, and some where it wouldn't matter.  I can't think of any
cases where the semantics of your version would be more desirable than any
other.  BTW, it's been years since IEEE added a new set of relational
comparisons, but I don't yet know of any language including them.  What I'd
like to see would be for languages to include types that are analogous to
"float" and "double", and may be freely converted, but whose relational
operators used the new semantics.
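
For concreteness, here is how the two common C spellings differ on NaN (this reflects standard fmax semantics and ordinary comparison rules; it is not a claim about Bart's Python version):

  #include <math.h>
  #include <stdio.h>

  #define MAX(a, b) ((a) > (b) ? (a) : (b))

  int main(void) {
      double x = 1.0;
      printf("%g %g\n", fmax(x, NAN), fmax(NAN, x));  /* 1 1: fmax treats NaN as missing data */
      printf("%g %g\n", MAX(x, NAN),  MAX(NAN, x));   /* nan 1: any comparison with NaN is false,
                                                         so the second operand is returned */
      return 0;
  }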
supercat
11/27/2016 3:33:46 PM
On 27/11/16 01:21, BartC wrote:
> On 26/11/2016 23:34, Rick C. Hodgin wrote:
>> It takes 14 seconds to write your own custom max() function, one
>> capable of
>> handling your unique data/app's needs.
>>
>> That's probably why.
>
> It takes 14 seconds in Python as well, yet it's a built-in function.
>
> While, the way I implement it, I can also write:
>
>   x max:= y       # x := max(x,y)
>
> In C, you'd have to write different versions for signed, unsigned,
> double and pointers (8 functions). If you don't want to always use
> 64-bit arithmetic, you might want 32-bit versions; that's another 6
> functions.
>
> So 14 functions so far, and you haven't yet looked at in-place versions.

And that is why there is no "max" standard function in C.

>
> If you try and do it with macros, then you have to think about problems
> like MAX(a[++i],b). Plus all the usual issues you get when you leave
> things like this to the creativity of millions of individual programmers.
>
> There are benefits to having even apparently trivial features like this
> supported by the language.
>

In C, this would have to be implemented as an operator, not a function, 
in order to work on a range of types.  It can't be a macro, even a C11 
generic macro, without additional language support to avoid the usual 
"evaluate the argument twice" problem.

gcc used to have an extension "a <? b" and "a >? b" as the minimum and 
maximum operators - precisely because making an operator was the only 
way to achieve the effect for different types while being safe from the 
"evaluate twice" problem.  But these operators have long since been 
removed from gcc, for a variety of reasons:

1. They were rarely used.  It makes little sense to maintain such an 
extension unless people actually find it useful.

2. It was non-portable.  If you are happy with gcc extensions, then 
there are perfectly good ways to achieve the effect using other gcc 
extensions that /are/ useful in a number of contexts:

#define max(a,b) \
        ({ typeof (a) _a = (a); \
            typeof (b) _b = (b); \
          _a > _b ? _a : _b; })

3. It is not needed in C++, which has templates (and therefore very 
simple std::max and std::min functions).  So it is only relevant for C, 
not C++.

4. It takes only 14 seconds to write your own custom function or inline 
code when you need a "max".  That is less time than it takes to look up 
the gcc manual to learn about the extra operator.

So in the end, there is not much need of a "max" in the language.

David
11/27/2016 4:35:17 PM
x>y?x:y
asetofsymbols
11/27/2016 4:46:55 PM
On 27/11/16 13:25, Rick C. Hodgin wrote:
> I wrote a reply last night, but I guess it was lost somehow.  It basically said
> Python's a weakly-typed language, isn't it?  If so it would have to have its
> own built-in min() and max() functions to be able to reliably compare
> disparate types.  I see you also mention this as a need in C, but I would
> argue to a lesser degree because of casting.  We can create a macro with
> a trinary operator and generate code for every case.  The compiler may
> complain here or there so you apply the appropriate cast.
>

Python is a strongly typed language, not a weakly typed language (such 
as C).  But Python is a dynamically typed language, with duck-typing. 
This means that in effect, every function in Python is a bit like a 
template in C++.  And writing a new "max" function in Python is as 
simple as:

def my_max(a, b):
    if a > b:
        return a
    return b

When the implementation is so simple, there is very little reason /not/ 
to put it into the standard library - just as C++ has std::max with a 
very simple implementation as a template.

C, on the other hand, has /no/ possible implementation of a max function 
or macro that works on all numeric types until C11 _Generic - and even 
then, the implementation is more than a little ugly.

> Another thought this morning.  Python moves by people needs.  C
> moves by committee choices.  If something is not deemed desirable
> by the committee, it will never officially be added.  But, I think
> that era of fixedness in those maintaining the C Standard will have to
> change, lest it be ignored as people move on past it.
>

C moves by people needs as well, just a lot more slowly - it needs a lot 
of people, and a very clear need, to make changes happen.  "max" is not 
needed that much.


David
11/27/2016 4:47:04 PM
On Sunday, 27 November 2016 at 17:47:01 UTC+1, asetof...@gmail.com wrote:
> x>y?x:y

max looks better, it is quicker to read

imo max would need to be defined in the c lib (and well implemented, but with
intrinsics this is i think achievable)

it should also be defined for more than two arguments

max btw could also be defined for arrays:
max(int* arr, int size) - probably nobody uses that but maybe it could be useful
fir
11/27/2016 5:18:45 PM
> x>y?x:y
  max(x,y)
one character more
asetofsymbols
11/27/2016 5:22:21 PM
On Sunday, 27 November 2016 at 18:22:29 UTC+1, asetof...@gmail.com wrote:
> > x>y?x:y
>   max(x,y)
> one character more

(x>y?x:y)
 max(x,y)
one character less; say it's maybe a draw if you take a group of cases, but
that is not the point
max() is imo more readable, and it is also established by tradition

fir
11/27/2016 5:27:49 PM
On 27/11/2016 17:22, asetofsymbols@gmail.com wrote:
>> x>y?x:y
>  max(x,y)
> one character more

max(a[i]->m.x,a[i]->m.y)
(a[i]->m.x>a[i]->m.y)?a[i]->m.x:a[i]->m.y

Quite a few characters more (especially if you insert spaces for extra 
readability).

I also can't tell at a glance that it is returning the max value of two 
expressions. Plus there is more maintenance as there are two copies of 
each expression.

-- 
bartc
BartC
11/27/2016 5:33:38 PM
On Sunday, November 27, 2016 at 10:35:25 AM UTC-6, David Brown wrote:
> > In C, you'd have to write different versions for signed, unsigned,
> > double and pointers (8 functions). If you don't want to always use
> > 64-bit arithmetic, you might want 32-bit versions; that's another 6
> > functions.
> >
> > So 14 functions so far, and you haven't yet looked at in-place versions.
> 
> And that is why there is no "max" standard function in C.

The number of programs to be compiled vastly exceeds the number of compilers.
Even if only 1% of programs would use a feature, implementing it in every
compiler would be cheaper than implementing it in every program.

Further, if the Standard were to either specify corner-case behaviors or
indicate ways by which implementations could indicate them [via predefined
macros] then someone reading a program would know what corner-case behaviors
for a built-in maximum function would be without having to look at the code.
By contrast, if a program calls a custom-written "max" function with values
that might be NaN, the programmer would need to examine the code for that
function to see how it would behave.

Also, a number of platforms offer efficient ways of loading a register with
the larger of two values; it would seem easier for a compiler to use that
means when given a "__max*" intrinsic than for it to try to recognize all
the ways programmers might try to accomplish the same thing in cases where
they don't care about NaN corner-case behavior [if the hardware feature's
NaN behavior would differ from that of using manual comparisons, a compiler
could not make the substitution while in an Annex-F-compliant mode].

> In C, this would have to be implemented as an operator, not a function, 
> in order to work on a range of types.  It can't be a macro, even a C11 
> generic macro, without additional language support to avoid the usual 
> "evaluate the argument twice" problem.

There's no reason compiler intrinsics couldn't handle a range of types.
Alternatively, there could be a set of built-ins with names that indicate
the type as well as an optional (see below) form which would take care of
overloading.  If the type-specific forms use a consistent naming convention,
the cognitive load of having N kinds of intrinsics each supporting T types
would be N+T rather than NT.

> gcc used to have an extension "a <? b" and "a >? b" as the minimum and 
> maximum operators - precisely because making an operator was the only 
> way to achieve the effect for different types while being safe from the 
> "evaluate twice" problem.  But these operators have long since been 
> removed from gcc, for a variety of reasons:
> 
> 1. They were rarely used.  It makes little sense to maintain such an 
> extension unless people actually find it useful.
> 
> 2. It was non-portable.  If you are happy with gcc extensions, then 
> there are perfectly good ways to achieve the effect using other gcc 
> extensions that /are/ useful in a number of contexts:

Programmers are generally going to be loath to use extensions unless they
have reason to believe that by the time it's necessary to port code to
another implementation, that implementation will likely support the
extension or, at minimum, support something similar that can be used to
port the code.

IMHO, the Standard's attitude toward constraint violations is unhelpful; the
language would develop much better if the Standard were to specify that an
implementation need not issue diagnostics if something that would be a
constraint violation is made legitimate by an expressly-documented extension,
and also if the authors would add "optional" features to the language with
less justification than would be required for mandatory features.

If the Standard had said that implementations are not required to accept
the <? or ?> operators, but that conforming implementations that do accept
them must process them with a particular meaning, that wouldn't impose any
burden on implementation writers unless or until customers start demanding
support for them [which would in turn suggest that customers find them
useful].

> #define max(a,b) \
>         ({ typeof (a) _a = (a); \
>             typeof (b) _b = (b); \
>           _a > _b ? _a : _b; })

I wonder why the Standard doesn't recognize statement-expressions?  They
add essentially the same sort of complexity as allowing declarations to
appear after executable statements, a feature for which C99 mandated
support, and there are many situations where the useful lifetime of a
variable will end with a use of the value computed therein.

> 4. It takes only 14 seconds to write your own custom function or inline 
> code when you need a "max".  That is less time than it takes to look up 
> the gcc manual to learn about the extra operator.

There's a chicken-and-egg problem here.  If the Standard describes what an
operator will do on implementations that accept it, then anyone who has
encountered it once on any implementation will know what it does on all.
By contrast, someone reading code in a project that defines its own "max"
function would need to inspect that function to know what it does.
supercat
11/27/2016 5:39:43 PM
BartC <bc@freeuk.com> writes:

> On 27/11/2016 17:22, asetofsymbols@gmail.com wrote:
>>> x>y?x:y
>>  max(x,y)
>> one character more
>
> max(a[i]->m.x,a[i]->m.y)
> a[i]->m.x>a[i]->m.y)?a[i]->m.x:a[i]->m.y

Not to mention all the bog standard "double evaluation of expression" caveats

Gareth
11/27/2016 5:51:32 PM
On 27/11/2016 16:35, David Brown wrote:
> On 27/11/16 01:21, BartC wrote:

>> So 14 functions so far, and you haven't yet looked at in-place versions.
>
> And that is why there is no "max" standard function in C.

Huh? That is the very reason why it ought to be built-in and benefiting 
from overloading.

>> If you try and do it with macros, then you have to think about problems
>> like MAX(a[++i],b). Plus all the usual issues you get when you leave
>> things like this to the creativity of millions of individual programmers.
>>
>> There are benefits to having even apparently trivial features like this
>> supported by the language.
>
> In C, this would have to be implemented as an operator, not a function,
> in order to work on a range of types.

Yes, but it can be an operator with function-like syntax. I can write in 
my language any of:

   a:=max(b,c)
   a:=b max c
   a max:=b

which translates to this C:

      a = (b>c?b:c);
      a = (b>c?b:c);
      a = (a>b?a:b);

(When a,b,c are simple; more complex terms generate more fiddly code 
that may call built-in functions. The important thing however is that 
that isn't a concern of the guy writing the above code using 'max'. The 
best implementation is left to the language.)

> gcc used to have an extension "a <? b" and "a >? b" as the minimum and
> maximum operators - precisely because making an operator was the only
> way to achieve the effect for different types while being safe from the
> "evaluate twice" problem.  But these operators have long since been
> removed from gcc, for a variety of reasons:
>
> 1. They were rarely used.  It makes little sense to maintain such an
> extension unless people actually find it useful.
>
> 2. It was non-portable.  If you are happy with gcc extensions, then
> there are perfectly good ways to achieve the effect using other gcc
> extensions that /are/ useful in a number of contexts:
>
> #define max(a,b) \
>        ({ typeof (a) _a = (a); \
>            typeof (b) _b = (b); \
>          _a > _b ? _a : _b; })
>
> 3. It is not needed in C++, which has templates (and therefore very
> simple std::max and std::min functions).  So it is only relevant for C,
> not C++.
>
> 4. It takes only 14 seconds to write your own custom function or inline
> code when you need a "max".  That is less time than it takes to look up
> the gcc manual to learn about the extra operator.
>
> So in the end, there is not much need of a "max" in the language.

It also takes less than 14 seconds, or not much more, to:

* Write a=a+1 instead of ++a
* Write a+=1 instead of ++a
* Write !(a==b) instead of a!=b
* Write b<a instead of a>b
* Write (*P).m instead of P->m
* Write a strlen() or strcpy() function
* Write an abs() function

etc.

But these features presumably exist for a reason.

Your reasons are just the usual excuses that try and justify why some 
obvious feature isn't in C. Of course if it /was/ in, then it would be 
indispensable!

And FWIW I use min and max a lot. (There is also 'clamp', a related 
feature, although that is used less often: y = clamp(x,lower,upper).)
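
A minimal sketch of such a clamp, the way it would typically look in C (the name and the use of double are just illustrative):

  static inline double clamp(double x, double lower, double upper) {
      /* constrain x to the closed range [lower, upper] */
      if (x < lower) return lower;
      if (x > upper) return upper;
      return x;
  }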

-- 
Bartc

BartC
11/27/2016 6:02:34 PM
On 27/11/16 18:39, supercat@casperkitty.com wrote:
> On Sunday, November 27, 2016 at 10:35:25 AM UTC-6, David Brown wrote:
>>> In C, you'd have to write different versions for signed, unsigned,
>>> double and pointers (8 functions). If you don't want to always use
>>> 64-bit arithmetic, you might want 32-bit versions; that's another 6
>>> functions.
>>>
>>> So 14 functions so far, and you haven't yet looked at in-place versions.
>>
>> And that is why there is no "max" standard function in C.
>
> The number of programs to be compiled vastly exceeds the number of compilers.
> Even if only 1% of programs would use a feature, implementing it in every
> compiler would be cheaper than implementing it in every program.

That /might/ be true if the cost of implementing it in a compiler were 
similar to the cost of implementing it in a program.  But when you 
consider the effort required to extend the language with extra operators 
or other features (such as C11 generics), and the work needed to 
develop, document and test it in a compiler, it is a very different 
matter.  Remember, you are comparing it against 14 seconds' work in user 
code (I know Rick was not serious about the exact figure - but he is 
entirely correct that it is a very simple thing to do in user code). 
And how many programs would use a "max" function?  It is certainly a 
/long/ way from "every program".

>
> Further, if the Standard were to either specify corner-case behaviors or
> indicate ways by which implementations could indicate them [via predefined
> macros] then someone reading a program would know what corner-case behaviors
> for a built-in maximum function would be without having to look at the code.
> By contrast, if a program calls a custom-written "max" function with values
> that might be NaN, the programmer would need to examine the code for that
> function to see how it would behave.
>
> Also, a number of platforms offer efficient ways of loading a register with
> the larger of two values; it would seem easier for a compiler to use that
> means when given a "__max*" intrinsic than for it to try to recognize all
> the ways programmers might try to accomplish the same thing in cases where
> they don't care about NaN corner-case behavior [if the hardware feature's
> NaN behavior would differ from that of using manual comparisons, a compiler
> could not make the substitution while in an Annex-F-compliant mode].
>
>> In C, this would have to be implemented as an operator, not a function,
>> in order to work on a range of types.  It can't be a macro, even a C11
>> generic macro, without additional language support to avoid the usual
>> "evaluate the argument twice" problem.
>
> There's no reason compiler intrinsics couldn't handle a range of types.

Absolutely true.  But compiler intrinsics are non-standard extensions. 
Compilers such as gcc have lots of them.  If the gcc folk thought people 
would find a __builtin_max function useful, they would have added it. 
In reality, they found the non-standard extension max and min operators 
to be so little use that they removed them.

> Alternatively, there could be a set of built-ins with names that indicate
> the type as well as an optional (see below) form which would take care of
> overloading.  If the type-specific forms use a consistent naming convention,
> the cognitive load of having N kinds of intrinsics each supporting T types
> would be N+T rather than NT.

Indeed they could do this - and indeed gcc /does/ do this for functions 
that are found to be useful because they fill a need and cannot easily 
be replicated by normal C (or C++).

(gcc is the compiler I know best, so I am using it as an example - the 
same may apply to clang, MSVC, etc.)

David
11/27/2016 7:35:35 PM
On 27/11/16 19:02, BartC wrote:
> On 27/11/2016 16:35, David Brown wrote:
>> On 27/11/16 01:21, BartC wrote:
>
>>> So 14 functions so far, and you haven't yet looked at in-place versions.
>>
>> And that is why there is no "max" standard function in C.
>
> Huh? That is the very reason why it ought to be built-in and benefiting
> from overloading.

Standard C library functions don't support overloading.  Only operators 
support overloading.  (And with C11 generics, you can make a kind of 
overloading via macros.)

>
>>> If you try and do it with macros, then you have to think about problems
>>> like MAX(a[++i],b). Plus all the usual issues you get when you leave
>>> things like this to the creativity of millions of individual
>>> programmers.
>>>
>>> There are benefits to having even apparently trivial features like this
>>> supported by the language.
>>
>> In C, this would have to be implemented as an operator, not a function,
>> in order to work on a range of types.
>
> Yes, but it can be a operator with function-like syntax. I can write in
> my language any of:
>
>   a:=max(b,c)
>   a:=b max c
>   a max:=b
>
> which translates to this C:
>
>      a = (b>c?b:c);
>      a = (b>c?b:c);
>      a = (a>b?a:b);

That's fine - in /your/ language.  There are many languages where 
operators and functions are basically the same thing, just with a bit of 
syntactic sugar.  C is not one of those languages.  And we were 
discussing C, were we not?

>
> (When a,b,c are simple; more complex terms generate more fiddly code
> than may call built-in functions. The important thing however is that
> that isn't a concern of the guy writing the above code using 'max'. The
> best implementation is left to the language.)
>
>> gcc used to have an extension "a <? b" and "a >? b" as the minimum and
>> maximum operators - precisely because making an operator was the only
>> way to achieve the effect for different types while being safe from the
>> "evaluate twice" problem.  But these operators have long since been
>> removed from gcc, for a variety of reasons:
>>
>> 1. They were rarely used.  It makes little sense to maintain such an
>> extension unless people actually find it useful.
>>
>> 2. It was non-portable.  If you are happy with gcc extensions, then
>> there are perfectly good ways to achieve the effect using other gcc
>> extensions that /are/ useful in a number of contexts:
>>
>> #define max(a,b) \
>>        ({ typeof (a) _a = (a); \
>>            typeof (b) _b = (b); \
>>          _a > _b ? _a : _b; })
>>
>> 3. It is not needed in C++, which has templates (and therefore very
>> simple std::max and std::min functions).  So it is only relevant for C,
>> not C++.
>>
>> 4. It takes only 14 seconds to write your own custom function or inline
>> code when you need a "max".  That is less time than it takes to look up
>> the gcc manual to learn about the extra operator.
>>
>> So in the end, there is not much need of a "max" in the language.
>
> It also takes less than 14 seconds, or not much more, to:
>
> * Write a=a+1 instead of ++a
> * Write a+=1 instead of ++a
> * Write !(a==b) instead of a!=b
> * Write b<a instead of a>b
> * Write (*P).m instead of P->m
> * Write a strlen() or strcpy() function
> * Write an abs() function
>
> etc.
>
> But these features presumably exist for a reason.

Yes, there are two key reasons.  One is that it makes compilers simpler 
(which was very relevant in the past), allowing compilers to translate 
"a = a + b" into "add a, b" and "a++" into "inc a".  The other is that 
these sorts of features are used a great deal, so it is very convenient 
to have them on hand.  This is much less the case for "max" and "min", 
which rarely have direct assembly implementations, and which are not 
used nearly as often in user code.  (The fact that they rarely exist as 
assembly instructions is a good indication of this.)

>
> Your reasons are just the usual excuses that try and justify why some
> obvious feature isn't in C. Of course if it /was/ in, then it would be
> indispensable!

/I/ did not specify the C language, write the standards, or implement 
compilers.  I am not justifying anything here - I have no need to do so. 
  The question was asked why there is no "max" or "min" function in the 
C standard library, so I have given some explanations for why they are 
not there.  It would not bother me if they /were/ there, and I might 
have occasional use for them in my code if they were standardised.  But 
I don't feel bothered by their lack.  Some people, of course, /would/ 
object to them being added - either because they don't want the language 
to grow, or because they don't want conflicts with their own use of 
those identifiers (it is /really/ hard to add new features to C without 
annoying someone!).  Oh, and there are lots of features of C that I 
could happily live without, and lots more that I would change if I 
could.  But in most cases I can understand why the feature exists, even 
if I don't personally want it.

>
> And FWIW I use min and max a lot. (There is also 'clamp', a related
> feature, although that is used less often: y = clamp(x,lower,upper).)
>

And in C++, where it is much easier to add new features to the standard 
library than C, "clamp" has been added to C++17 to join "min" and "max". 
  But then, C++ has templates making the implementation trivial and thus 
giving a different balance of pros and cons.

David
11/27/2016 7:52:42 PM
On 27/11/2016 19:35, David Brown wrote:
> On 27/11/16 18:39, supercat@casperkitty.com wrote:

>> The number of programs to be compiled vastly exceeds the number of
>> compilers.
>> Even if only 1% of programs would use a feature, implementing it in every
>> compiler would be cheaper than implementing it in every program.
>
> That /might/ be true if the cost of implementing it in a compiler were
> similar to the cost of implementing it in a program.  But when you
> consider the effort required to extend the language with extra operators
> or other features (such as C11 generics), and the work needed to
> develop, document and test it in a compiler, it is a very different
> matter.  Remember, you are comparing it against 14 seconds' work in user
> code (I know Rick was not serious about the exact figure - but he is
> entirely correct that it is a very simple thing to do in user code). And
> how many programs would use a "max" function?  It is certainly a /long/
> way from "every program".

So how many programs use _Complex?

I don't have many open-source projects lying around on my machine, but I 
do happen to have CPython and SQLite. And by coincidence, they both 
define MIN and MAX macros!

SQLITE:
/*
** Macros to compute minimum and maximum of two numbers.
*/
#define MIN(A,B) ((A)<(B)?(A):(B))
#define MAX(A,B) ((A)>(B)?(A):(B))

CPython:
#undef MAX
#undef MIN
#define MAX(x, y) ((x) < (y) ? (y) : (x))
#define MIN(x, y) ((x) < (y) ? (x) : (y))

The purpose of having such functions built-in is precisely to avoid each 
application defining its own inferior versions.

-- 
Bartc
BartC
11/27/2016 8:10:44 PM
On 27/11/2016 19:52, David Brown wrote:
> On 27/11/16 19:02, BartC wrote:

>> Yes, but it can be a operator with function-like syntax. I can write in
>> my language any of:
>>
>>   a:=max(b,c)
>>   a:=b max c
>>   a max:=b
>>
>> which translates to this C:
>>
>>      a = (b>c?b:c);
>>      a = (b>c?b:c);
>>      a = (a>b?a:b);
>
> That's fine - in /your/ language.  There are many languages where
> operators and functions are basically the same thing, just with a bit of
> syntactic sugar.  C is not one of those languages.  And we were
> discussing C, were we not?

We're discussing having min and max built-in. The standard ways of 
adding this stuff via user-code is with functions and with macros, both 
of which have their own problems.

Why /couldn't/ there be an operator that uses function-like syntax? (As 
happens with 'syntax(x)'.) But it could be operator-like syntax as I've shown 
as well.

My example shows that it is doable, it's readable, and can map easily to 
existing C features.

 >> And FWIW I use min and max a lot. (There is also 'clamp', a related
 >> feature, although that is used less often: y = clamp(x,lower,upper).)
 >>
 >
 > And in C++, where it is much easier to add new features to the standard
 > library than C, "clamp" has been added to C++17 to join "min" and "max".
 >  But then, C++ has templates making the implementation trivial and

How trivial was it to implement templates!


 >> But these features presumably exist for a reason.

 > This is much less the case for "max" and "min",
 > which rarely have direct assembly implementations, and which are not
 > used nearly as often in user code.  (The fact that they rarely exist as
 > assembly instructions is a good indication of this.)

My own implementation of min/max is not very demanding, certainly 
nowhere on the scale of C++ templates.

It was made easier more recently with special x86 assembly instructions, 
namely:

    cmov                  for integers
    minss, maxss etc      for floating point


Yes, the instruction set for floating point includes dedicated min and 
max instructions! Extraordinary given that such operations are 
apparently so rare.

-- 
Bartc
BartC
11/27/2016 8:31:39 PM
On 27/11/16 21:10, BartC wrote:
> On 27/11/2016 19:35, David Brown wrote:
>> On 27/11/16 18:39, supercat@casperkitty.com wrote:
>
>>> The number of programs to be compiled vastly exceeds the number of
>>> compilers.
>>> Even if only 1% of programs would use a feature, implementing it in
>>> every
>>> compiler would be cheaper than implementing it in every program.
>>
>> That /might/ be true if the cost of implementing it in a compiler were
>> similar to the cost of implementing it in a program.  But when you
>> consider the effort required to extend the language with extra operators
>> or other features (such as C11 generics), and the work needed to
>> develop, document and test it in a compiler, it is a very different
>> matter.  Remember, you are comparing it against 14 seconds' work in user
>> code (I know Rick was not serious about the exact figure - but he is
>> entirely correct that it is a very simple thing to do in user code). And
>> how many programs would use a "max" function?  It is certainly a /long/
>> way from "every program".
>
> So how many programs use _Complex?

None of my programs, anyway.  I would not object to it being missing 
from the standards - I think a standard library with a typedef'ed struct 
and functions/macros would have made more sense than making a special 
set of _Complex types in C.  It always struck me as a lot of effort for 
a feature rarely used - putting it in the library rather than the 
language would have been less intrusive.  It could also then have been 
compatible with C++'s complex numbers.  But I guess some people thought 
is was important enough to be able to use operators on complex numbers, 
and thus it was put into the language.  (An alternative would have been 
to add operator overloading - Jacob's compiler has shown how that could 
have been done.)

>
> I don't have many open-source projects lying around on my machine, but I
> do happen to have CPython and SQLite. And by coincidence, they both
> define MIN and MAX macros!
>
> SQLITE:
> /*
> ** Macros to compute minimum and maximum of two numbers.
> */
> #define MIN(A,B) ((A)<(B)?(A):(B))
> #define MAX(A,B) ((A)>(B)?(A):(B))
>
> CPython:
> #undef MAX
> #undef MIN
> #define MAX(x, y) ((x) < (y) ? (y) : (x))
> #define MIN(x, y) ((x) < (y) ? (x) : (y))
>
> The purpose of having such functions built-in is precisely to avoid each
> application defining it's own inferior versions.
>

These definitions are clear, simple, and written in about 14 seconds. 
Until C11 generics came along, they could not be improved upon within 
the C language - there would need to be changes to the language that 
were far bigger than just a "min" or "max" standard library function in 
order to implement them.


David
11/27/2016 9:07:44 PM
On 27/11/2016 20:31, BartC wrote:

> Why /couldn't/ there be an operator that uses function-like syntax? (As
> happens with 'syntax(x)'.

Oops: I mean 'sizeof(x)'.


BartC
11/27/2016 10:09:15 PM
On Sunday, November 27, 2016 at 4:09:29 PM UTC-6, Bart wrote:
> On 27/11/2016 20:31, BartC wrote:
> 
> > Why /couldn't/ there be an operator that uses function-like syntax? (As
> > happens with 'syntax(x)'.
> 
> Oops: I mean 'sizeof(x)'.

The sizeof operator is often written as though it takes function-like syntax,
but it actually has two syntactic forms: it can be applied to a value, in
which case parentheses are not required, or to a type, in which case they
are required.  It's interesting that a lot of code writes "sizeof(variable)"
or "sizeof (int)" rather than "sizeof variable" or "sizeof (int)", which
probably contributes to the confusion.
 
supercat
11/27/2016 10:14:36 PM
On 27/11/2016 21:07, David Brown wrote:
> On 27/11/16 21:10, BartC wrote:

>> I don't have many open-source projects lying around on my machine, but I
>> do happen to have CPython and SQLite. And by coincidence, they both
>> define MIN and MAX macros!
>>
>> SQLITE:
>> /*
>> ** Macros to compute minimum and maximum of two numbers.
>> */
>> #define MIN(A,B) ((A)<(B)?(A):(B))
>> #define MAX(A,B) ((A)>(B)?(A):(B))
>>
>> CPython:
>> #undef MAX
>> #undef MIN
>> #define MAX(x, y) ((x) < (y) ? (y) : (x))
>> #define MIN(x, y) ((x) < (y) ? (x) : (y))
>>
>> The purpose of having such functions built-in is precisely to avoid each
>> application defining it's own inferior versions.
>>
>
> These definitions are clear, simple, and written in about 14 seconds.

Wasn't one of your points that such a feature would rarely be used? Yet 
the first two applications I look at both rely on them.

(And looking further, with GMP there is:

#define __GMP_MAX(h,i) ((h) > (i) ? (h) : (i))

although strangely no __GMP_MIN. And in NCURSES:

  #ifndef max
  # define max(a,b) (((a) > (b)) ? (a) : (b))
  #endif
  #ifndef min
  # define min(a,b) (((a) < (b)) ? (a) : (b))
  #endif
)

Anyway these are all inferior because MAX(a[++i],b[++j]) might not do 
what you'd expect; they are not a general solution.

> Until C11 generics came along, they could not be improved upon within
> the C language - there would need to be changes to the language that
> were far bigger than just a "min" or "max" standard library function in
> order to implement them.

You're looking at language-building features as a solution. I don't 
agree. At the minute we are expected to use macros so that we end up 
with thousands of variations of half-assed macros, all with slightly 
different names, to do stuff like:

  FOR(i, a, b) {...}
  REPEAT(N) {...}
  ARRAY_LEN(a)
  MAX(a,b)
  CLAMP(x,y,z)
  BINARY(101101)
  DECIMAL_WITH_COMMAS(1,048,576)

Etc.

I think the C11 changes will have the same problems as macros. And 
presumably, the generic features end up generating standard C code which 
is compiled as normal? That's not much different to my wrapping a thick 
syntax layer around the whole language! (Which is then processed with a 
tool resulting in standard C to compile.)

-- 
Bartc
BartC
11/27/2016 10:55:08 PM
On 27/11/2016 22:14, supercat@casperkitty.com wrote:
> On Sunday, November 27, 2016 at 4:09:29 PM UTC-6, Bart wrote:
>> On 27/11/2016 20:31, BartC wrote:
>>
>>> Why /couldn't/ there be an operator that uses function-like syntax? (As
>>> happens with 'syntax(x)'.
>>
>> Oops: I mean 'sizeof(x)'.
>
> The sizeof operator is often written as though it takes function-like syntax,
> but it actually has two syntactic forms: it can be applied to a value, in
> which case parentheses are not required, or to a type, in which case they
> are required.  It's interesting that a lot of code writes "sizeof(variable)"
> or "sizeof (int)" rather than "sizeof variable" or "sizeof (int)", which
> probably contributes to the confusion.

Yeah ... because a lot of people can't remember when it requires 
parentheses and when it doesn't!

My proposed max() would always require parentheses.

(That's for C; as I use it now, it can also be written in infix form, 
but that would be a bit much for C. Infix doesn't really suit min or 
max, but it makes it possible to do 'a max= b' like other infix operators.)

-- 
Bartc


BartC
11/27/2016 11:02:17 PM
On 27/11/16 23:02, BartC wrote:
> On 27/11/2016 22:14, supercat@casperkitty.com wrote:
>> On Sunday, November 27, 2016 at 4:09:29 PM UTC-6, Bart wrote:
>>> On 27/11/2016 20:31, BartC wrote:
>>>
>>>> Why /couldn't/ there be an operator that uses function-like syntax? (As
>>>> happens with 'syntax(x)'.
>>>
>>> Oops: I mean 'sizeof(x)'.
>>
>> The sizeof operator is often written as though it takes function-like
>> syntax,
>> but it actually has two syntactic forms: it can be applied to a value, in
>> which case parentheses are not required, or to a type, in which case they
>> are required.  It's interesting that a lot of code writes
>> "sizeof(variable)"
>> or "sizeof (int)" rather than "sizeof variable" or "sizeof (int)", which
>> probably contributes to the confusion.
>
> Yeah ... because a lot of people can't remember when it requires
> parentheses and when it doesn't!

It never, *ever* requires parentheses except on the vanishingly few 
occasions when you need to specify a type rather than an expression.

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
Richard
11/27/2016 11:26:04 PM
On 27/11/2016 23:26, Richard Heathfield wrote:
> On 27/11/16 23:02, BartC wrote:
>> On 27/11/2016 22:14, supercat@casperkitty.com wrote:

>>> It's interesting that a lot of code writes
>>> "sizeof(variable)"
>>> or "sizeof (int)" rather than "sizeof variable" or "sizeof (int)", which
>>> probably contributes to the confusion.
>>
>> Yeah ... because a lot of people can't remember when it requires
>> parentheses and when it doesn't!
>
> It never, *ever* requires parentheses except on the vanishingly few
> occasions when you need to specify a type rather than an expression.

In my code, that can happen a lot. (See below, although here I used it 
so much that I wrapped it in a macro.)


-------------------------------------------------------

#include <stdio.h>
#include <windows.h>

#define SHOWSIZE(T) fprintf(f,#T " %d\n",(int)sizeof(T))

int main(void) {
FILE* f;

if (sizeof(void*)==8) {
	f=fopen("sizes64","wb");
	fprintf(f,"-- 64 BITS\n");
}
else {
	f=fopen("sizes32","wb");
	fprintf(f,"-- 32 BITS\n");
}

SHOWSIZE(BOOL);
SHOWSIZE(WORD);
SHOWSIZE(INT);
SHOWSIZE(INPUT_RECORD);
SHOWSIZE(WCHAR);
SHOWSIZE(DWORD);
SHOWSIZE(HWND);
SHOWSIZE(UINT);
SHOWSIZE(CHAR);
SHOWSIZE(KEY_EVENT_RECORD);
SHOWSIZE(WPARAM);
SHOWSIZE(LPARAM);
SHOWSIZE(SCROLLINFO);
SHOWSIZE(SCROLLBARINFO);
SHOWSIZE(COLORREF);
SHOWSIZE(LOGPALETTE);
SHOWSIZE(PALETTEENTRY);
SHOWSIZE(PALETTEENTRY);

SHOWSIZE(BITMAP);
SHOWSIZE(BITMAPCOREHEADER);
SHOWSIZE(BITMAPCOREINFO);
SHOWSIZE(BITMAPFILEHEADER);
SHOWSIZE(BITMAPINFO);
SHOWSIZE(BITMAPINFOHEADER);
SHOWSIZE(BITMAPV4HEADER);
SHOWSIZE(BITMAPV5HEADER);
SHOWSIZE(BLENDFUNCTION);
SHOWSIZE(DIBSECTION);
SHOWSIZE(RGBQUAD);
SHOWSIZE(RGBTRIPLE);
SHOWSIZE(SIZE);
SHOWSIZE(LPSIZE);
SHOWSIZE(TRIVERTEX);
SHOWSIZE(RGNDATA);
SHOWSIZE(RGNDATAHEADER);
SHOWSIZE(PAINTSTRUCT);
SHOWSIZE(LOGBRUSH);
//SHOWSIZE(LOGBRUSH32);
SHOWSIZE(ABC);
SHOWSIZE(ABCFLOAT);
SHOWSIZE(ENUMLOGFONT);
SHOWSIZE(TEXTMETRIC);
SHOWSIZE(NEWTEXTMETRIC);
SHOWSIZE(LOGFONT);
SHOWSIZE(EXTLOGFONT);
SHOWSIZE(EXTLOGPEN);
SHOWSIZE(LOGPEN);
SHOWSIZE(MSG);
SHOWSIZE(CHAR_INFO);
SHOWSIZE(CONSOLE_CURSOR_INFO);
SHOWSIZE(CONSOLE_FONT_INFO);
SHOWSIZE(CONSOLE_SCREEN_BUFFER_INFO);
//SHOWSIZE(CONSOLE_SCREEN_BUFFER_INFOEX);
SHOWSIZE(INPUT_RECORD);
SHOWSIZE(KEY_EVENT_RECORD);
SHOWSIZE(FOCUS_EVENT_RECORD);
SHOWSIZE(MENU_EVENT_RECORD);
SHOWSIZE(MOUSE_EVENT_RECORD);
SHOWSIZE(SMALL_RECT);
SHOWSIZE(WINDOW_BUFFER_SIZE_RECORD);
SHOWSIZE(COORD);
SHOWSIZE(POINT);
SHOWSIZE(POINTS);
SHOWSIZE(RECT);
SHOWSIZE(DISPLAY_DEVICE);
//SHOWSIZE(VIDEOPARAMETERS);
SHOWSIZE(STARTUPINFO);
SHOWSIZE(PROCESS_INFORMATION);
SHOWSIZE(WNDCLASSEX);
SHOWSIZE(LRESULT);
SHOWSIZE(OPENFILENAME);

fclose(f);

}


-- 
Bartc
BartC
11/27/2016 11:43:24 PM
On 27/11/16 23:43, BartC wrote:
> On 27/11/2016 23:26, Richard Heathfield wrote:
>> On 27/11/16 23:02, BartC wrote:
>>> On 27/11/2016 22:14, supercat@casperkitty.com wrote:
>
>>>> It's interesting that a lot of code writes
>>>> "sizeof(variable)"
>>>> or "sizeof (int)" rather than "sizeof variable" or "sizeof (int)",
>>>> which
>>>> probably contributes to the confusion.
>>>
>>> Yeah ... because a lot of people can't remember when it requires
>>> parentheses and when it doesn't!
>>
>> It never, *ever* requires parentheses except on the vanishingly few
>> occasions when you need to specify a type rather than an expression.
>
> In my code, that can happen a lot.

I have only found one instance in my own code. The sizeof(type) 
construct is often used unwisely (which is not to say that there are not 
legitimate reasons to use it).

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
Richard
11/27/2016 11:46:41 PM
On Sunday, November 27, 2016 at 6:43:32 PM UTC-5, Bart wrote:
> On 27/11/2016 23:26, Richard Heathfield wrote:
> > On 27/11/16 23:02, BartC wrote:
> >> On 27/11/2016 22:14, supercat@casperkitty.com wrote:
> 
> >>> It's interesting that a lot of code writes
> >>> "sizeof(variable)"
> >>> or "sizeof (int)" rather than "sizeof variable" or "sizeof (int)", which
> >>> probably contributes to the confusion.
> >>
> >> Yeah ... because a lot of people can't remember when it requires
> >> parentheses and when it doesn't!
> >
> > It never, *ever* requires parentheses except on the vanishingly few
> > occasions when you need to specify a type rather than an expression.
> 
> In my code, that can happen a lot. (See below, although here I used it 
> so much that I wrapped it in a macro.)
> 
> 
> -------------------------------------------------------
> 
> #include <stdio.h>
> #include <windows.h>
> 
> #define SHOWSIZE(T) fprintf(f,#T " %d\n",sizeof(T))
> 
> int main(void) {
> FILE* f;
> 
> if (sizeof(void*)==8) {
> 	f=fopen("sizes64","wb");
> 	fprintf(f,"-- 64 BITS\n");
> }
> else {
> 	f=fopen("sizes32","wb");
> 	fprintf(f,"-- 32 BITS\n");
> }
> 
> SHOWSIZE(BOOL);
> SHOWSIZE(WORD);
> SHOWSIZE(INT);
> SHOWSIZE(INPUT_RECORD);
> SHOWSIZE(WCHAR);
> SHOWSIZE(DWORD);
> SHOWSIZE(HWND);
> SHOWSIZE(UINT);
> SHOWSIZE(CHAR);
> SHOWSIZE(KEY_EVENT_RECORD);
> SHOWSIZE(WPARAM);
> SHOWSIZE(LPARAM);
> SHOWSIZE(SCROLLINFO);
> SHOWSIZE(SCROLLBARINFO);
> SHOWSIZE(COLORREF);
> SHOWSIZE(LOGPALETTE);
> SHOWSIZE(PALETTEENTRY);
> SHOWSIZE(PALETTEENTRY);
> 
> SHOWSIZE(BITMAP);
> SHOWSIZE(BITMAPCOREHEADER);
> SHOWSIZE(BITMAPCOREINFO);
> SHOWSIZE(BITMAPFILEHEADER);
> SHOWSIZE(BITMAPINFO);
> SHOWSIZE(BITMAPINFOHEADER);
> SHOWSIZE(BITMAPV4HEADER);
> SHOWSIZE(BITMAPV5HEADER);
> SHOWSIZE(BLENDFUNCTION);
> SHOWSIZE(DIBSECTION);
> SHOWSIZE(RGBQUAD);
> SHOWSIZE(RGBTRIPLE);
> SHOWSIZE(SIZE);
> SHOWSIZE(LPSIZE);
> SHOWSIZE(TRIVERTEX);
> SHOWSIZE(RGNDATA);
> SHOWSIZE(RGNDATAHEADER);
> SHOWSIZE(PAINTSTRUCT);
> SHOWSIZE(LOGBRUSH);
> //SHOWSIZE(LOGBRUSH32);
> SHOWSIZE(ABC);
> SHOWSIZE(ABCFLOAT);
> SHOWSIZE(ENUMLOGFONT);
> SHOWSIZE(TEXTMETRIC);
> SHOWSIZE(NEWTEXTMETRIC);
> SHOWSIZE(LOGFONT);
> SHOWSIZE(EXTLOGFONT);
> SHOWSIZE(EXTLOGPEN);
> SHOWSIZE(LOGPEN);
> SHOWSIZE(MSG);
> SHOWSIZE(CHAR_INFO);
> SHOWSIZE(CONSOLE_CURSOR_INFO);
> SHOWSIZE(CONSOLE_FONT_INFO);
> SHOWSIZE(CONSOLE_SCREEN_BUFFER_INFO);
> //SHOWSIZE(CONSOLE_SCREEN_BUFFER_INFOEX);
> SHOWSIZE(INPUT_RECORD);
> SHOWSIZE(KEY_EVENT_RECORD);
> SHOWSIZE(FOCUS_EVENT_RECORD);
> SHOWSIZE(MENU_EVENT_RECORD);
> SHOWSIZE(MOUSE_EVENT_RECORD);
> SHOWSIZE(SMALL_RECT);
> SHOWSIZE(WINDOW_BUFFER_SIZE_RECORD);
> SHOWSIZE(COORD);
> SHOWSIZE(POINT);
> SHOWSIZE(POINTS);
> SHOWSIZE(RECT);
> SHOWSIZE(DISPLAY_DEVICE);
> //SHOWSIZE(VIDEOPARAMETERS);
> SHOWSIZE(STARTUPINFO);
> SHOWSIZE(PROCESS_INFORMATION);
> SHOWSIZE(WNDCLASSEX);
> SHOWSIZE(LRESULT);
> SHOWSIZE(OPENFILENAME);
> 
> fclose(f);
> 
> }

This is an atypical use of sizeof: you're just collecting information about the
characteristics of a particular implementation.
More typical uses are when you need the size of an object in order to do
something size-dependent to that object, such as malloc(), memcpy(), or
fwrite(). It's always possible to use sizeof(Type) in those cases, but it's
generally more appropriate (and less error prone) to write "sizeof expression",
where expression is an lvalue referring to the object whose size you need.
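
A small illustration of that distinction (a generic sketch, not taken from either poster's code):

  #include <stdlib.h>

  struct point { double x, y; };

  void demo(void) {
      struct point *p = malloc(sizeof *p);            /* size of the object: stays correct
                                                         even if p's type is later changed */
      struct point *q = malloc(sizeof(struct point)); /* size of the type: compiles the same,
                                                         but silently goes wrong if q is later
                                                         changed to point at a different type */
      free(p);
      free(q);
  }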

Of course, your refusal to correctly understand the distinctions between
pointers and arrays in C means that you would probably often fail to choose an
appropriate lvalue expression - sizeof is one of the key cases where your
carefully cultivated confusion about such matters is most likely to cause
problems.
jameskuyper
11/28/2016 12:21:13 AM
On 27/11/16 23:55, BartC wrote:
> On 27/11/2016 21:07, David Brown wrote:
>> On 27/11/16 21:10, BartC wrote:
> 
>>> I don't have many open-source projects lying around on my machine, but I
>>> do happen to have CPython and SQLite. And by coincidence, they both
>>> define MIN and MAX macros!
>>>
>>> SQLITE:
>>> /*
>>> ** Macros to compute minimum and maximum of two numbers.
>>> */
>>> #define MIN(A,B) ((A)<(B)?(A):(B))
>>> #define MAX(A,B) ((A)>(B)?(A):(B))
>>>
>>> CPython:
>>> #undef MAX
>>> #undef MIN
>>> #define MAX(x, y) ((x) < (y) ? (y) : (x))
>>> #define MIN(x, y) ((x) < (y) ? (x) : (y))
>>>
>>> The purpose of having such functions built-in is precisely to avoid each
>>> application defining it's own inferior versions.
>>>
>>
>> These definitions are clear, simple, and written in about 14 seconds.
> 
> Wasn't one of your points there such a feature would rarely be used? Yet
> the the first two applications I look at both rely on them.

All you have shown is that two rather large code bases both define MAX
and MIN macros.  That is still perfectly consistent with being rarely
used.  Now, if you count the number of /uses/ of these macros and show
it to be a reasonable fraction of the number of lines of code in the
source, then you can say it is commonly used in at least these two code
bases.

(I have no proof for saying that max/min are rarely used - it is merely
an opinion.  I also haven't given any kind of indication about what I
meant by "rarely", such as comparisons to the usefulness of other
features.  So don't get hung up about that point - my only concrete
justification is that cpu instruction sets usually do not have any
integer min/max instruction.  Floating point is always more complicated,
and cannot be easily generated from a few individual instructions
because of NaNs and other nonsense - thus an fmin or fmax instruction
/is/ worth the effort.)

> 
> (And looking further, with GMP there is:
> 
> #define __GMP_MAX(h,i) ((h) > (i) ? (h) : (i))
> 
> although strangely no __GMP_MIN. And in NCURSES:
> 
>  #ifndef max
>  # define max(a,b) (((a) > (b)) ? (a) : (b))
>  #endif
>  #ifndef min
>  # define min(a,b) (((a) < (b)) ? (a) : (b))
>  #endif
> )
> 
> Anyway these are all inferior because MAX(a[++i],b[++j]) might not do
> what you'd expect; they are not a general solution.

They are what you get in C, so they are what people use.  And they are
simple and easy to make.

The question is, would a "safer" built-in version be worth the effort?
It would not be more efficient, and it would not save noticeably on
developer effort (since the macro here is so easy to write).  It would
not help for anything that can use gcc (or clang) extensions, as "safe"
macros can be written or copied directly from the gcc manual.

So the only problem with today's "max" solutions is when you call them
with arguments that have side-effects.  Frankly, I am always sceptical
to any function call whose arguments have side effects - in most cases,
I think the code is less clear and potentially more fragile (there is no
defined order for these side effects).  There can be particular cases
where it makes sense, but I would usually reject max(a[++i], b[++j]) as
smart-ass code that tries to do too much in one line.

> 
>> Until C11 generics came along, they could not be improved upon within
>> the C language - there would need to be changes to the language that
>> were far bigger than just a "min" or "max" standard library function in
>> order to implement them.
> 
> You're looking at language-building features as a solution. I don't
> agree. At the minute we are expected to use macros so that we end up
> with thousands of variations of half-assed macros, all with slightly
> different names, to do stuff like:
> 
>  FOR(i, a, b) {...}
>  REPEAT(N) {...}
>  ARRAY_LEN(a)
>  MAX(a,b)
>  CLAMP(x,y,z)
>  BINARY(101101)
>  DECIMAL_WITH_COMMAS(1,048,576)
> 

And how is this in any way different from any other programming
language?  No language is "complete" with standard library functions
covering /everything/.  C happens to have a relatively small standard
library - but that is by design, not by accident.  It means that the
language is good for small and efficient programs with little overhead.
 It is also fine when combined with other libraries - libraries don't
need to be part of the C standard to be useful.  It also means that it
is often not a great choice for big projects, because you do need to
invent so much yourself.

> Etc.
> 
> I think the C11 changes will have the same problems as macros. And
> presumably, the generic features end up generating standard C code which
> is compiled as normal? That's not much different to my wrapping a thick
> syntax layer around the whole language! (Which is then processed with a
> tool resulting in standard C to compile.)
> 

Have you actually /looked/ at _Generic, or are you just guessing?  Here
is a generic "max" macro.  It is ugly and verbose, especially compared
to something like C++ templates, but it does the job.  It gets the types
right, while avoiding double evaluation of side effects.

Should it be added to the C standard library?  I don't really see why
not, now that _Generic is there.  But the functions could not be called
"max" and "min" without clashing too much with existing code - they
would have to be something like _Max and _Min (with a <stdmaxmin.h> to
define "max" and "min" names).  Would they be used enough to make it
worth the effort?  Probably not.



static inline int _maxi(int const x, int const y) {
    return y > x ? y : x;
}

static inline unsigned _maxui(unsigned const x, unsigned const y) {
    return y > x ? y : x;
}

static inline long _maxl(long const x, long const y) {
    return y > x ? y : x;
}

static inline unsigned long _maxul(unsigned long const x, unsigned long
const y) {
    return y > x ? y : x;
}

static inline long long _maxll(long long const x, long long const y) {
    return y > x ? y : x;
}

static inline unsigned long long _maxull(unsigned long long const x,
unsigned long long const y) {
    return y > x ? y : x;
}

static inline float _maxf(float const x, float const y) {
    return y > x ? y : x;
}

static inline double _maxd(double const x, double const y) {
    return y > x ? y : x;
}

static inline long double _maxld(long double const x, long double const y) {
    return y > x ? y : x;
}

#define max(x, y) (_Generic((x) + (y),   \
    int:                _maxi,             \
    unsigned:           _maxui,           \
    long:               _maxl,            \
    unsigned long:      _maxul,           \
    long long:          _maxll,           \
    unsigned long long: _maxull,          \
    float:              _maxf,            \
    double:             _maxd,            \
    long double:        _maxld)((x), (y)))
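
A quick usage sketch of the macro above (hypothetical values, just to
show which helper gets selected):

    int a = 3;
    long b = 7;
    long   m = max(a, b);      /* (a)+(b) has type long, so _maxl is used  */
    double d = max(2, 1.5);    /* int + double gives double, so _maxd      */

The selection happens at compile time, so only the chosen function is
ever called.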


0
David
11/28/2016 11:46:53 AM
On 28/11/16 00:46, Richard Heathfield wrote:
> On 27/11/16 23:43, BartC wrote:
>> On 27/11/2016 23:26, Richard Heathfield wrote:
>>> On 27/11/16 23:02, BartC wrote:
>>>> On 27/11/2016 22:14, supercat@casperkitty.com wrote:
>>
>>>>> It's interesting that a lot of code writes
>>>>> "sizeof(variable)"
>>>>> or "sizeof (int)" rather than "sizeof variable" or "sizeof (int)",
>>>>> which
>>>>> probably contributes to the confusion.
>>>>
>>>> Yeah ... because a lot of people can't remember when it requires
>>>> parentheses and when it doesn't!
>>>
>>> It never, *ever* requires parentheses except on the vanishingly few
>>> occasions when you need to specify a type rather than an expression.
>>
>> In my code, that can happen a lot.
> 
> I have only found one instance in my own code. The sizeof(type)
> construct is often used unwisely (which is not to say that there are not
> legitimate reasons to use it).
> 

I find I use sizeof on types quite a lot, but in many of these cases it
is as a static_assert to confirm the size of a struct before it is used,
and therefore before there are any objects of that type in the code.
But the need to make structs that match exactly to existing formats is
probably more common in my type of programming than many other types of
code.

I also like to use parentheses when using sizeof on objects.  I fully
agree with you that it is not necessary - but I think it looks clearer.
 That is purely a personal choice, of course, and not required by the
language.

0
David
11/28/2016 11:56:03 AM
On 28/11/2016 11:46, David Brown wrote:
> On 27/11/16 23:55, BartC wrote:

>> Wasn't one of your points there such a feature would rarely be used? Yet
>> the first two applications I look at both rely on them.
>
> All you have shown is that two rather large code bases both define MAX
> and MIN macros.  That is still perfectly consistent with being rarely
> used.  Now, if you count the number of /uses/ of these macros and show
> it to be a reasonable fraction of the number of lines of code in the
> source, then you can say it is commonly used in at least these two code
> bases.

It shows people often have to end up inventing these macros. And the 
fact that they are pretty much the same macros suggests it is quite a 
common need.

The use of generics to do that doesn't really change things, as the 
definitions for min/max still have to be added, and that is now a lot 
more complicated (and makes the code, for now, less portable). Although 
the resulting implementations will be better.

(Note the lccwin compiler appears to include min/max macros as standard, 
but because it is just one compiler, that is less useful. It is also 
dangerous: as the names are lower case, people might think they are functions.)

>> Anyway these are all inferior because MAX(a[++i],b[++j]) might not do
>> what you'd expect; they are not a general solution.
>
> They are what you get in C, so they are what people use.  And they are
> simple and easy to make.
>
> The question is, would a "safer" built-in version be worth the effort?
> It would not be more efficient, and it would not save noticeably on
> developer effort (since the macro here is so easy to write).  It would
> not help anyone who can use gcc (or clang) extensions, as "safe"
> macros can be written or copied directly from the gcc manual.
>
> So the only problem with today's "max" solutions is when you call them
> with arguments that have side-effects.

So you say sloppy coding is good? Great, that means a lot less work for 
my own code generation efforts! The question of side effects comes up, 
apart from min/max, in the implementation of constructs like these, when 
x and y are arbitrary expressions:

   x += y;
   x==y==z;       # [not C] when interpreted as (x==y) && (y==z)
   swap(x,y);     # [not C]

and such, where the simplest implementation involves re-evaluating one 
or more operands. (Some code-generation techniques will take care of 
such issues; but the one I use doesn't!)
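
A C analogue of the same re-evaluation hazard, with a made-up SWAP_INT
macro:

#define SWAP_INT(x, y) do { int _t = (x); (x) = (y); (y) = _t; } while (0)

/* SWAP_INT(a[i++], b) evaluates (x) twice - once to read it, once to
   assign to it - so i is incremented twice and the wrong element is
   exchanged. */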

>> You're looking at language-building features as a solution. I don't
>> agree. At the minute we are expected to use macros so that we end up
>> with thousands of variations of half-assed macros, all with slightly
>> different names, to do stuff like:
>>
>>  FOR(i, a, b) {...}
>>  REPEAT(N) {...}
>>  ARRAY_LEN(a)
>>  MAX(a,b)
>>  CLAMP(x,y,z)
>>  BINARY(101101)
>>  DECIMAL_WITH_COMMAS(1,048,576)
>>
>
> And how is this in any way different from any other programming
> language?  No language is "complete" with standard library functions
> covering /everything/.

Many of those are fundamental (well not clamp and probably not min/max). 
That is, simple iterative loops, repeating a block N times, getting the 
number of elements in an array, writing binary literals or literals with 
separators, and so on.

You don't want everyone either reinventing the same workaround with 
macros, or not doing so and writing things out in full, or writing 
binary as hex, or long numbers without separators.
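
For example, practically every project grows its own copy of something
like this (one common formulation; the name is just what it is usually
called):

#include <stddef.h>

#define ARRAY_LEN(a) (sizeof(a) / sizeof((a)[0]))

int t[10];
size_t n = ARRAY_LEN(t);   /* 10 - but silently wrong if t has decayed to a pointer */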

> C happens to have a relatively small standard
> library

It's not a library thing. If the language has to, it can implement some 
of these with macros (as it does many things), then it's only an extra 
few lines in one of the standard headers. But they will then be standard 
at least.

>> I think the C11 changes will have the same problems as macros.

> Have you actually /looked/ at _Generic, or are you just guessing?  Here
> is a generic "max" macro.  It is ugly and verbose, especially compared
> to something like C++ templates, but it does the job.  It gets the types
> right, while avoiding double evaluation of side effects.

> static inline int _maxi(int const x, int const y) {
>     return y > x ? y : x;
> }
........

> #define max(x, y) (_Generic((x) + (y),   \

(I was thinking (x)>(y) would be better here, as that is used within the 
functions. And it would work for pointers too. But then, that will 
always return type int ...)

>     int:                _maxi,             \
>     unsigned:           _maxui,           \
>     long:               _maxl,            \
>     unsigned long:      _maxul,           \
>     long long:          _maxll,           \
>     unsigned long long: _maxull,          \
>     float:              _maxf,            \
>     double:             _maxd,            \
>     long double:        _maxld)((x), (y)))

That's quite neat. (I may borrow this feature actually...)

But it's also pretty much what I do in my code-generation: a family of 
min/max function variations, but the selection is done within the 
translator (unless the expression can be handled in-line).

With these C versions, the onus is on the programmer to write all this, 
and provide it with the distribution of the source code. And if code is 
subsequently shared, then A's implementation of min/max may end up 
clashing with B's similar implementation.

However I can't, at the minute, post a code snippet using min/max and 
expect anyone else to be able to compile it.

Otherwise it seems a fine feature to use for building a library.

-- 
Bartc
0
BartC
11/28/2016 1:16:49 PM
On 28/11/16 14:16, BartC wrote:
> On 28/11/2016 11:46, David Brown wrote:
>> On 27/11/16 23:55, BartC wrote:
> 
>>> Wasn't one of your points there such a feature would rarely be used? Yet
>>> the first two applications I look at both rely on them.
>>
>> All you have shown is that two rather large code bases both define MAX
>> and MIN macros.  That is still perfectly consistent with being rarely
>> used.  Now, if you count the number of /uses/ of these macros and show
>> it to be a reasonable fraction of the number of lines of code in the
>> source, then you can say it is commonly used in at least these two code
>> bases.
> 
> It shows people often have to end up inventing these macros. And the
> fact that they are pretty much the same macros suggests it is quite a
> common need.
> 
> The use of generics to do that doesn't really change things, as the
> definitions for min/max still have to be added, and that is now a lot
> more complicated (and makes the code, for now, less portable). Although
> the resulting implementations will be better.
> 
> (Note the lccwin compiler appears to include min/max macros as standard,
> but because it is just one compiler, that is less useful. It is also
> dangerous: as the names are lower case, people might think they are functions.)
> 
>>> Anyway these are all inferior because MAX(a[++i],b[++j]) might not do
>>> what you'd expect; they are not a general solution.
>>
>> They are what you get in C, so they are what people use.  And they are
>> simple and easy to make.
>>
>> The question is, would a "safer" built-in version be worth the effort?
>> It would not be more efficient, and it would not save noticeably on
>> developer effort (since the macro here is so easy to write).  It would
>> not help anyone who can use gcc (or clang) extensions, as "safe"
>> macros can be written or copied directly from the gcc manual.
>>
>> So the only problem with today's "max" solutions is when you call them
>> with arguments that have side-effects.
> 
> So you say sloppy coding is good? Great, that means a lot less work for
> my own code generation efforts! The question of side effects comes up,
> apart from min/max, in the implementation of constructs like these, when
> x and y are arbitrary expressions:
> 
>   x += y;
>   x==y==z;       # [not C] when interpreted as (x==y) && (y==z)
>   swap(x,y);     # [not C]
> 
> and such, where the simplest implementation involves re-evaluating one
> or more operands. (Some code-generation techniques will take care of
> such issues; but the one I use doesn't!)

It is quite simple: don't call macros with arguments that have side effects.

If you want to say that this is bad, and that it is poor language
design, then I agree.  It /is/ bad.  There are better alternatives.  C
supports three ways to improve on macros with this sort of problem.  One
is "static inline" functions - but then you have lost type generics.
Another is "_Generic" - but then you need C11, and it is ugly and
verbose.  A third is gcc extensions - but then you need gcc (or a
compatible compiler).  And of course, other languages do a better job -
C++ has templates, but then you are no longer working in C.
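
The first option, sketched out (the names here are made up), looks like:

static inline int    max_int(int a, int b)       { return a > b ? a : b; }
static inline double max_dbl(double a, double b) { return a > b ? a : b; }

max_int(a[++i], b) is fine - each argument is evaluated once - but you
need a separate name for every type, and picking the wrong one silently
converts the arguments.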

If you want to /complain/ about C, then you are on your own - C is what
it is, despite what we might wish.


> 
>>> You're looking at language-building features as a solution. I don't
>>> agree. At the minute we are expected to use macros so that we end up
>>> with thousands of variations of half-assed macros, all with slightly
>>> different names, to do stuff like:
>>>
>>>  FOR(i, a, b) {...}
>>>  REPEAT(N) {...}
>>>  ARRAY_LEN(a)
>>>  MAX(a,b)
>>>  CLAMP(x,y,z)
>>>  BINARY(101101)
>>>  DECIMAL_WITH_COMMAS(1,048,576)
>>>
>>
>> And how is this in any way different from any other programming
>> language?  No language is "complete" with standard library functions
>> covering /everything/.
> 
> Many of those are fundamental (well not clamp and probably not min/max).
> That is, simple iterative loops, repeating a block N times, getting the
> number of elements in an array, writing binary literals or literals with
> separators, and so on.

The loops and iteration macros here are not remotely "fundamental".  I
have never felt the need for them, and I write plenty of loops in C.
I'll warrant that the same applies to most C programmers.

Binary literals are a rather specialised thing - few people need to
write them apart from embedded developers, and embedded compilers
usually support 0b0101 syntax.  (This is something I really think should
be included in the C standards - I cannot comprehend why it is missing
from C11, as it is supported by many compilers and is standardised in
C++14.)
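
For what it's worth, the extension form and the portable workarounds
look like this (0b literals are a gcc/clang/embedded extension, not C11):

unsigned flags  = 0b00101101;                          /* extension        */
unsigned flags2 = 0x2D;                                /* portable: hex    */
unsigned flags3 = (1u<<5)|(1u<<3)|(1u<<2)|(1u<<0);     /* portable: shifts */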

Digit separators would be nice, but they are not "fundamental" - C
programmers have managed without them for a good many years.  C++ added
them with ' as the separator, which was not the nicest of choices (but
alternatives such as , and _ had too many other problems).

> 
> You don't want everyone either reinventing the same workaround with
> macros, or not doing so and writing things out in full, or writing
> binary as hex, or long numbers without separators.

People who write C /do/ reinvent the same wheels, again and again.  I
have often said that most people who program in C, and most programs
written in C, would be better using higher level languages.  But I don't
think the problem is with the C language - the problem is that people
choose C when other languages would let them write better code, or be
more productive in writing their code.

> 
>> C happens to have a relatively small standard
>> library
> 
> It's not a library thing. If the language has to, it can implement some
> of these with macros (as it does many things), then it's only an extra
> few lines in one of the standard headers. But they will then be standard
> at least.
> 
>>> I think the C11 changes will have the same problems as macros.
> 
>> Have you actually /looked/ at _Generic, or are you just guessing?  Here
>> is a generic "max" macro.  It is ugly and verbose, especially compared
>> to something like C++ templates, but it does the job.  It gets the types
>> right, while avoiding double evaluation of side effects.
> 
>> static inline int _maxi(int const x, int const y) {
>>     return y > x ? y : x;
>> }
> .......
> 
>> #define max(x, y) (_Generic((x) + (y),   \
> 
> (I was thinking (x)>(y) would be better here, as that is used within the
> functions. And it would work for pointers too. But then, that will
> always return type int ...)

Exactly.  Using "+" gets the promotions and conversions right.

> 
>>     int:                _maxi,             \
>>     unsigned:           _maxui,           \
>>     long:               _maxl,            \
>>     unsigned long:      _maxul,           \
>>     long long:          _maxll,           \
>>     unsigned long long: _maxull,          \
>>     float:              _maxf,            \
>>     double:             _maxd,            \
>>     long double:        _maxld)((x), (y)))
> 
> That's quite neat. (I may borrow this feature actually...)

Feel free.  I "borrowed" it from the internet somewhere :-)

> 
> But it's also pretty much what I do in my code-generation: a family of
> min/max function variations, but the selection is done within the
> translator (unless the expression can be handled in-line).

You are writing a code generator - it makes sense to do this sort of
thing during code generation.

> 
> With these C versions, the onus is on the programmer to write all this,
> and provide it with the distribution of the source code. And if code is
> subsequently shared, then A's implementation of min/max may end up
> clashing with B's similar implementation.
> 
> However I can't, at the minute, post a code snippet using min/max and
> expect anyone else to be able to compile it.
> 
> Otherwise it seems a fine feature to use for building a library.
> 

0
David
11/28/2016 1:52:15 PM
On Sunday, November 27, 2016 at 11:47:10 AM UTC-5, David Brown wrote:
> On 27/11/16 13:25, Rick C. Hodgin wrote:
> > I wrote a reply last night, but I guess it was lost somehow.  It basically said
> > Python's a weakly-typed language, isn't it?  If so it would have to have its
> > own built-in min() and max() functions to be able to reliably compare
> > disparate types.  I see you also mention this as a need in C, but I would
> > argue to a lesser degree because of casting.  We can create a macro with
> > a trinary operator and generate code for every case.  The compiler may
> > complain here or there so you apply the appropriate cast.
> 
> Python is a strongly typed language, not a weakly typed language (such 
> as C).

Are you saying "C is a weakly typed language" or "Python is a strongly
typed language like C?"

> But Python is a dynamically typed language, with duck-typing.

How can a dynamically typed language be a strongly typed language?
I was under the impression in Python you could code this:

    n = 5
    print n
    n = "Hello, world"
    print n

The token "n" can be used for multiple variable types.  I tested this
using the online test page:

    http://mathcs.holycross.edu/~kwalsh/python/

I see on the Python wiki that it answers this question:

    https://wiki.python.org/moin/Why%20is%20Python%20a%20dynamic%20language%20and%20also%20a%20strongly%20typed%20language

I disagree with the way the term "strongly typed" is used there because
I consider C to be a strongly typed language.  You type "char c" and
that's all C can ever be within that scope.

That's a philosophical difference I suppose.

> This means that in effect, every function in Python is a bit like a 
> template in C++.  And writing a new "max" function in Python is as 
> simple as:
> 
> def my_max(a, b):
>     if a > b:
>         return a
>     return b
> 
> When the implementation is so simple, there is very little reason /not/ 
> to put it into the standard library - just as C++ has std::max with a 
> very simple implementation as a template.

Python is required, within its own internal processing engine, to compare
disparate types, such as a floating point value to another integer value,
and return correct results, without any custom/fancy casting or other
things required in C.

That's what I was referring to.  Python needs this type of built-in
ability because it's weakly typed.  A and B come in and they can be
anything.  The nuances of the comparison logic have to be able to
handle comparing type x to type y generically, and with correct
results in each case.

Visual FreePro has to do this as well and it was complex to write all
of the code which does this as there are many possible comparisons.

> C, on the other hand, has /no/ possible implementation of a max function 
> or macro that works on all numeric types until C11 _Generic - and even 
> then, the implementation is more than a little ugly.
> 
> > Another thought this morning.  Python moves by people needs.  C
> > moves by committee choices.  If something is not deemed desirable
> > by the committee, it will never officially be added.  But, I think
> > that era of fixedness in those maintaining the C Standard will have to
> > change, lest it be ignored as people move on past it.
> 
> C moves by people needs as well, just a lot more slowly - it needs a lot 
> of people, and a very clear need, to make changes happen.  "max" is not 
> needed that much.

Python is an open project.  If I desire to have x, y, or z added I can
do it, it goes into the main line, and is then available to all.  I
can't do that with C.  I can do that with GCC or CLANG or some instance
of a C compiler, but not C, not to the standard.

I would argue it doesn't move by people needs, but by committee needs.
Otherwise a lot would've been introduced that, because it wasn't,
spawned off many other languages.

It's why I've more or less ditched C and am proceeding with CAlive.  I
will provide support through C99 eventually however.  And other people
are free to add additional newer support.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 3:13:16 PM
On 28/11/2016 15:13, Rick C. Hodgin wrote:
> On Sunday, November 27, 2016 at 11:47:10 AM UTC-5, David Brown wrote:
>> On 27/11/16 13:25, Rick C. Hodgin wrote:
>>> I wrote a reply last night, but I guess it was lost somehow.  It basically said
>>> Python's a weakly-typed language, isn't it?  If so it would have to have its
>>> own built-in min() and max() functions to be able to reliably compare
>>> disparate types.  I see you also mention this as a need in C, but I would
>>> argue to a lesser degree because of casting.  We can create a macro with
>>> a trinary operator and generate code for every case.  The compiler may
>>> complain here or there so you apply the appropriate cast.
>>
>> Python is a strongly typed language, not a weakly typed language (such
>> as C).
>
> Are you saying "C is a weakly typed language" or "Python is a strongly
> typed language like C?"
>
>> But Python is a dynamically typed language, with duck-typing.
>
> How can a dynamically typed language be a strongly typed language?
> I was under the impression in Python you could code this:
>
>     n = 5
>     print n
>     n = "Hello, world"
>     print n

> The token "n" can be used for multiple variable types.

C allows this:

   char* s = "Hello world";
   int n = 5;

   printf("%s\n", s + n);
   printf("%d\n", s + n);
   printf("%p\n", s + n);

So it can add a number to a string (and print the result as a string, 
number or pointer). I guess that means C has weak typing; it doesn't try 
too hard to stop you getting around its type system.

 > The token "n" can be used for multiple variable types.

C can do a bit of that too. And in the same block:

int n=5;

{ printf("%d\n",n);          // n is a number
   char* n="Hello";
   printf("%s\n",n);          // n is now a string
}


-- 
Bartc


0
BartC
11/28/2016 3:35:17 PM
On Monday, November 28, 2016 at 10:35:24 AM UTC-5, Bart wrote:
> On 28/11/2016 15:13, Rick C. Hodgin wrote:
> > On Sunday, November 27, 2016 at 11:47:10 AM UTC-5, David Brown wrote:
> >> On 27/11/16 13:25, Rick C. Hodgin wrote:
> >>> I wrote a reply last night, but I guess it was lost somehow.  It basically said
> >>> Python's a weakly-typed language, isn't it?  If so it would have to have its
> >>> own built-in min() and max() functions to be able to reliably compare
> >>> disparate types.  I see you also mention this as a need in C, but I would
> >>> argue to a lesser degree because of casting.  We can create a macro with
> >>> a trinary operator and generate code for every case.  The compiler may
> >>> complain here or there so you apply the appropriate cast.
> >>
> >> Python is a strongly typed language, not a weakly typed language (such
> >> as C).
> >
> > Are you saying "C is a weakly typed language" or "Python is a strongly
> > typed language like C?"
> >
> >> But Python is a dynamically typed language, with duck-typing.
> >
> > How can a dynamically typed language be a strongly typed language?
> > I was under the impression in Python you could code this:
> >
> >     n = 5
> >     print n
> >     n = "Hello, world"
> >     print n
> 
> > The token "n" can be used for multiple variable types.
> 
> C allows this:
> 
>    char* s = "Hello world";
>    int n = 5;
> 
>    printf("%s\n", s + n);
>    printf("%d\n", s + n);
>    printf("%p\n", s + n);
> 
> So it can add a number to a string (and print the result as a string, 
> number or pointer). I guess that means C has weak typing; it doesn't try 
> too hard to stop you getting around its type system.

I don't view that as weak typing in any sense.  In that case, you are
providing input to a target which is strongly typed, in that it can
only receive what it's been told to receive.

The C compiler upsizes or downsizes or translates the various types
you specify as input for you, so that they meet the rigid requirements
of the called function.  And whereas printf() uses variadic parameters,
they are still dictated by the operands specified in the format string.

Python is not like that.  It receives generic parameters and then is
called upon to attempt to conduct operations on them.  It will,
internally, resolve the comparison form automatically.

Visual FreePro also does this, and it's nontrivial to implement.  But,
it does save a lot of difficult work for every developer who uses it.

C maintains a rigid form that does not change at any point after compile.
"char c" is always "char."  It can be cast into other forms, but that's
an operation, not a reinvention of the fundamental type.

>  > The token "n" can be used for multiple variable types.
> 
> C can do a bit of that too. And in the same block:
> 
> int n=5;
> 
> { printf("%d\n",n);          // n is a number
>    char* n="Hello";
>    printf("%s\n",n);          // n is now a string
> }

New scope.  Neither value of "n" has changed, just what the compiler
chooses to reference, by its design, when the token "n" is used in
each scope.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 3:48:40 PM
On Monday, November 28, 2016 at 10:48:47 AM UTC-5, Rick C. Hodgin wrote:
> On Monday, November 28, 2016 at 10:35:24 AM UTC-5, Bart wrote:
> > On 28/11/2016 15:13, Rick C. Hodgin wrote:
> > > On Sunday, November 27, 2016 at 11:47:10 AM UTC-5, David Brown wrote:
> > > > But Python is a dynamically typed language, with duck-typing.
> > >
> > > How can a dynamically typed language be a strongly typed language?
> > > I was under the impression in Python you could code this:
> > >
> > >     n = 5
> > >     print n
> > >     n = "Hello, world"
> > >     print n
> > 
> > > The token "n" can be used for multiple variable types.
> > 
> > C allows this:
> > 
> >    char* s = "Hello world";
> >    int n = 5;
> > 
> >    printf("%s\n", s + n);
> >    printf("%d\n", s + n);
> >    printf("%p\n", s + n);
> > 
> > So it can add a number to a string (and print the result as a string, 
> > number or pointer). I guess that means C has weak typing; it doesn't try 
> > too hard to stop you getting around its type system.
> 
> I don't view that as weak typing in any sense.  In that case, you are
> providing input to a target which is strongly typed, in that it can
> only receive what it's been told to receive.

And to be a little more precise on this, every piece of strongly typed
data that is used in C goes through transformation mechanisms that are
explicitly designed to receive one type in, and emit another type out.

Everything in C is completely strongly typed I would argue.  In fact,
I might even use/coin the term very strongly typed for how C regards its
data.

The C compiler happens to be very adept at manipulating strongly typed
variables into other forms required for input into the other thing,
which is what makes it so desirable.

> The C compiler upsizes or downsizes or translates the various types
> you specify as input for you, so that they meet the rigid requirements
> of the called function.  And whereas printf() uses variadic parameters,
> they are still dictated by the operands specified in the format string.
> 
> Python is not like that.  It receives generic parameters and then is
> called upon to attempt to conduct operations on them.  It will,
> internally, resolve the comparison form automatically.
> 
> Visual FreePro also does this, and it's nontrivial to implement.  But,
> it does save a lot of difficult work for every developer who uses it.
> 
> C maintains a rigid form that does not change at any point after compile.
> "char c" is always "char."  It can be cast into other forms, but that's
> an operation, not a reinvention of the fundamental type.
> 
> >  > The token "n" can be used for multiple variable types.
> > 
> > C can do a bit of that too. And in the same block:
> > 
> > int n=5;
> > 
> > { printf("%d\n",n);          // n is a number
> >    char* n="Hello";
> >    printf("%s\n",n);          // n is now a string
> > }
> 
> New scope.  Neither value of "n" has changed, just what the compiler
> chooses to reference as by its design when the token "n" is utilized
> in each scope.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 3:53:16 PM
On 28/11/16 16:13, Rick C. Hodgin wrote:
> On Sunday, November 27, 2016 at 11:47:10 AM UTC-5, David Brown wrote:
>> On 27/11/16 13:25, Rick C. Hodgin wrote:
>>> I wrote a reply last night, but I guess it was lost somehow.  It basically said
>>> Python's a weakly-typed language, isn't it?  If so it would have to have its
>>> own built-in min() and max() functions to be able to reliably compare
>>> disparate types.  I see you also mention this as a need in C, but I would
>>> argue to a lesser degree because of casting.  We can create a macro with
>>> a trinary operator and generate code for every case.  The compiler may
>>> complain here or there so you apply the appropriate cast.
>>
>> Python is a strongly typed language, not a weakly typed language (such 
>> as C).
> 
> Are you saying "C is a weakly typed language" or "Python is a strongly
> typed language like C?"

I don't think there are any clear and non-controversial definitions of
strong and weak typing.  But I believe C is usually classified as
"weakly typed" because it is relatively easy to mess around with types
(some conversions are done implicitly, especially via void*, and enums
are particularly weak) and a great deal of data is simply "int".
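
Two small examples of what is meant (they compile in C, at most with a
warning - shown to illustrate the point, not as recommended style):

enum colour { RED, GREEN, BLUE };
enum colour c = 7;      /* out of range of the enumerators, but C just stores the int */

void *p = &c;
double *dp = p;         /* void* converts implicitly, no cast required
                           (dereferencing dp here would of course be undefined) */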

Python is classified as strongly typed because every object has a
specific type, and type conversions actually change the data (so if you
use a number as a string, a formatting function is called to generate a
new string from the number - the language does not merely try to
re-interpret the number as though it were a string).


> 
>> But Python is a dynamically typed language, with duck-typing.
> 
> How can a dynamically typed language be a strongly typed language?
> I was under the impression in Python you could code this:
> 
>     n = 5
>     print n
>     n = "Hello, world"
>     print n

Yes, you can do that - because Python is dynamically typed.  But dynamic
typing does not mean weak typing.  In the Python code above, "n" is just
a name - it is not an object.  The objects are 5 and "Hello, world",
each of which has very clear and definitive types that do not change.
All that changes is the binding of the name "n" to those two objects.

> 
> The token "n" can be used for multiple variable types.  I tested this
> using the online test page:
> 
>     http://mathcs.holycross.edu/~kwalsh/python/
> 
> I see on the Python wiki that it answers this question:
> 
>     https://wiki.python.org/moin/Why%20is%20Python%20a%20dynamic%20language%20and%20also%20a%20strongly%20typed%20language
> 
> I disagree with the way the term "strongly typed" is used there because
> I consider C to be a strongly typed language.  You type "char c" and
> that's all C can ever be within that scope.
> 
> That's a philosophical difference I suppose.

C is statically typed, while Python is dynamically typed.  Those terms
are /not/ synonymous with "strongly typed" and "weakly typed" - I think
you can have all combinations (though I can't think of any weakly typed
dynamically typed language off hand).

Once you have your "char c", it is not hard to make an "int *" pointer
to point to it and read the same data as a different type - that is what
makes it "weak".  It is very hard to do something similar in Python.

Of course, the terms are not really black and white, and you can
reasonably argue that "C with warnings enabled" has stronger typing
because it is hard to "cheat" on your types while avoiding warnings.


> 
>> This means that in effect, every function in Python is a bit like a 
>> template in C++.  And writing a new "max" function in Python is as 
>> simple as:
>>
>> def my_max(a, b):
>>     if a > b:
>>         return a
>>     return b
>>
>> When the implementation is so simple, there is very little reason /not/ 
>> to put it into the standard library - just as C++ has std::max with a 
>> very simple implementation as a template.
> 
> Python is required, within its own internal processing engine, to compare
> disparate types, such as a floating point value to another integer value,
> and return correct results, without any custom/fancy casting or other
> things required in C.
> 
> That's what I was referring to.  Python needs this type of built-in
> ability because it's weakly typed.  A and B come in and they can be
> anything.  The nuances of the comparison logic has to be able to
> handle comparing type x to type y generically, and with correct
> results in each case.

No, it is /strongly/ typed.  When Python needs to convert an operand (or
both operands) in order to carry out an operation like "a > b", it uses
well-defined conversion functions.  These exist for built-in types - if
you are using your own types, you need to define these functions or you
will get a run-time error.  C is weakly typed, because it will happily
(well, maybe unhappily, with a warning) do what you ask here in many
cases, even if it makes no sense.

> 
> Visual FreePro has to do this as well and it was complex to write all
> of the code which does this as there are many possible comparisons.
> 
>> C, on the other hand, has /no/ possible implementation of a max function 
>> or macro that works on all numeric types until C11 _Generic - and even 
>> then, the implementation is more than a little ugly.
>>
>>> Another thought this morning.  Python moves by people needs.  C
>>> moves by committee choices.  If something is not deemed desirable
>>> by the committee, it will never officially be added.  But, I think
>>> that era of fixedness in those maintaining the C Standard will have to
>>> change, lest it be ignored as people move on past it.
>>
>> C moves by people needs as well, just a lot more slowly - it needs a lot 
>> of people, and a very clear need, to make changes happen.  "max" is not 
>> needed that much.
> 
> Python is an open project.  If I desire to have x, y, or z added I can
> do it, it goes into the main line, and is then available to all.  

No, you can't.  Python is an open source project - you can download the
source, make all the changes you want, and use it yourself.  You can
publish the modified version as you want.  But it will only get back to
the mainline if it is approved by the project maintainers, which
involves going through a formal process of "Python Enhancement
Proposals" to give the main Python developers and the Python community a
chance to consider the change and its implications.  The change must
meet the approval of Python's Benevolent Dictator For Life, Guido van
Rossum, before it has a chance to become part of the official mainline
Python.

Still, you have a far better chance of getting a change into Python than
getting a change into C :-)

> I
> can't do that with C.  I can do that with GCC or CLANG or some instance
> of a C compiler, but not C, not to the standard.
> 
> I would argue it doesn't move by people needs, but by committee needs.
> Otherwise a lot would've been introduced that, because it wasn't,
> spawned off many other languages.
> 
> It's why I've more or less ditched C and am proceeding with CAlive.  I
> will provide support through C99 eventually however.  And other people
> are free to add additional newer support.
> 
> Best regards,
> Rick C. Hodgin
> 

0
David
11/28/2016 3:53:46 PM
On Monday, November 28, 2016 at 10:54:00 AM UTC-5, David Brown wrote:
> On 28/11/16 16:13, Rick C. Hodgin wrote:
> > On Sunday, November 27, 2016 at 11:47:10 AM UTC-5, David Brown wrote:
> >> On 27/11/16 13:25, Rick C. Hodgin wrote:
> >>> I wrote a reply last night, but I guess it was lost somehow.  It basically said
> >>> Python's a weakly-typed language, isn't it?  If so it would have to have its
> >>> own built-in min() and max() functions to be able to reliably compare
> >>> disparate types.  I see you also mention this as a need in C, but I would
> >>> argue to a lesser degree because of casting.  We can create a macro with
> >>> a trinary operator and generate code for every case.  The compiler may
> >>> complain here or there so you apply the appropriate cast.
> >>
> >> Python is a strongly typed language, not a weakly typed language (such 
> >> as C).
> > 
> > Are you saying "C is a weakly typed language" or "Python is a strongly
> > typed language like C?"
> 
> I don't think there are any clear and non-controversial definitions of
> strong and weak typing.  But I believe C is usually classified as
> "weakly typed" because it is relatively easy to mess around with types
> (some conversions are done implicitly, especially via void*, and enums
> are particularly weak) and a great deal of data is simply "int".
> 
> Python is classified as strongly typed because every object has a
> specific type, and type conversions actually change the data (so if you
> use a number as a string, a formatting function is called to generate a
> new string from the number - the language does not merely try to
> re-interpret the number as though it were a string).
> 
> 
> > 
> >> But Python is a dynamically typed language, with duck-typing.
> > 
> > How can a dynamically typed language be a strongly typed language?
> > I was under the impression in Python you could code this:
> > 
> >     n = 5
> >     print n
> >     n = "Hello, world"
> >     print n
> 
> Yes, you can do that - because Python is dynamically typed.  But dynamic
> typing does not mean weak typing.  In the Python code above, "n" is just
> a name - it is not an object.  The objects are 5 and "Hello, world",
> each of which has very clear and definitive types that do not change.
> All that changes is the binding of the name "n" to those two objects.

Visual FreePro does the same thing.  I have a struct with a discriminating
union which determines the actual type of the data within.  It can be
reassigned as needed, as in python (or originally as in other xbase
languages as that's what it's emulating).

The data changes internally, meaning it's weakly typed because the type
goes back to the thing identified by its token name (in my definition).
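
A rough C sketch of the kind of discriminated union being described
(names made up for the example):

typedef struct {
    enum { VT_INT, VT_DOUBLE, VT_STRING } type;   /* the discriminant */
    union {
        int         i;
        double      d;
        const char *s;
    } v;
} value_t;

/* The same variable can hold an int now and a string later; the type
   field is checked at run time before each use. */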

> > The token "n" can be used for multiple variable types.  I tested this
> > using the online test page:
> > 
> >     http://mathcs.holycross.edu/~kwalsh/python/
> > 
> > I see on the Python wiki that it answers this question:
> > 
> >     https://wiki.python.org/moin/Why%20is%20Python%20a%20dynamic%20language%20and%20also%20a%20strongly%20typed%20language
> > 
> > I disagree with the way the term "strongly typed" is used there because
> > I consider C to be a strongly typed language.  You type "char c" and
> > that's all C can ever be within that scope.
> > 
> > That's a philosophical difference I suppose.
> 
> C is statically typed, while Python is dynamically typed.  Those terms
> are /not/ synonymous with "strongly typed" and "weakly typed" - I think
> you can have all combinations (though I can't think of any weakly typed
> dynamically typed language off hand).

I think they must be.  If something is statically typed it cannot be
altered.  It remains that way forever, which means it is strongly typed.

> Once you have your "char c", it is not hard to make an "int *" pointer
> to point to it and read the same data as a different type - that is what
> makes it "weak".

I disagree.  "char c" has not changed.  It will always still yield the
char value at its location in memory.  The new "int *" (be it a cast, or
a variable that's set to the &c address) is a new and discrete thing with
its own ability to act on the thing it's pointing to.

In both cases, both are strongly typed.

> It is very hard to do something similar in Python.
>
> Of course, the terms are not really black and white, and you can
> reasonably argue that "C with warnings enabled" has stronger typing
> because it is hard to "cheat" on your types while avoiding warnings.

Even in the case of explicitly added casts, which are fundamental
operations you've manually injected into an expression to receive a
particular type, and emit another type, I don't see how it could be
viewed as anything other than strongly typed.

Everything in C is nailed down categorically and the correct function
is called based on what is known at compile-time.

In python (and Visual FreePro) it does a test on variable types to
determine which branch to flow to in order to process a particular
type.

> >> This means that in effect, every function in Python is a bit like a 
> >> template in C++.  And writing a new "max" function in Python is as 
> >> simple as:
> >>
> >> def my_max(a, b):
> >>     if a > b:
> >>         return a
> >>     return b
> >>
> >> When the implementation is so simple, there is very little reason /not/ 
> >> to put it into the standard library - just as C++ has std::max with a 
> >> very simple implementation as a template.
> > 
> > Python is required, within its own internal processing engine, to compare
> > disparate types, such as a floating point value to another integer value,
> > and return correct results, without any custom/fancy casting or other
> > things required in C.
> > 
> > That's what I was referring to.  Python needs this type of built-in
> > ability because it's weakly typed.  A and B come in and they can be
> > anything.  The nuances of the comparison logic has to be able to
> > handle comparing type x to type y generically, and with correct
> > results in each case.
> 
> No, it is /strongly/ typed.  When Python needs to convert an operand (or
> both operands) in order to carry out an operation like "a > b", it uses
> well-defined conversion functions.

Of course.  But, they are not known at compile-time.  They are known
only at runtime with internal tests being made on each type to determine
what internal logic path to flow to.

The same input names "n" and "m" could flow through different paths
based on their internal values at runtime.  It's not known at compile-
time, and has no ability to be known outside of a static program with
no variables like:

    (pseudo code)
    function x() {
        if (current_millisecond % 7 == 0)
            return "Hello world";
        else
            return 5;
    }

Any python app calling a function like that (in its proper syntax form)
will have variable types being returned, but stored in the same token
name.  When you later do a comparison to another, it will flow through
to the proper handler based on the runtime peculiarities of the data,
and not due to anything known at compile time.

At each use / reference, it must be manually examined and then branched
to.

In C, that's not the case.  The entire program is completely known at
compile time and there is no ability to change anything.  You must use
something like custom logic and variadic functions in order to do those
variable things.  But even then, in each case, every flow through every
possible type is already known, with only manual tests on data, not
variable types, determining where flow goes.

I view that as very strongly typed.

> These exist for built-in types - if
> you are using your own types, you need to define these functions or you
> will get a run-time error.  C is weakly typed, because it will happily
> (well, maybe unhappily, with a warning) do what you ask here in many
> cases, even if it makes no sense.
> 
> > 
> > Visual FreePro has to do this as well and it was complex to write all
> > of the code which does this as there are many possible comparisons.
> > 
> >> C, on the other hand, has /no/ possible implementation of a max function 
> >> or macro that works on all numeric types until C11 _Generic - and even 
> >> then, the implementation is more than a little ugly.
> >>
> >>> Another thought this morning.  Python moves by people needs.  C
> >>> moves by committee choices.  If something is not deemed desirable
> >>> by the committee, it will never officially be added.  But, I think
> >>> that era of fixedness in those maintaining the C Standard will have to
> >>> change, lest it be ignored as people move on past it.
> >>
> >> C moves by people needs as well, just a lot more slowly - it needs a lot 
> >> of people, and a very clear need, to make changes happen.  "max" is not 
> >> needed that much.
> > 
> > Python is an open project.  If I desire to have x, y, or z added I can
> > do it, it goes into the main line, and is then available to all.  
> 
> No, you can't.  Python is an open source project - you can download the
> source, make all the changes you want, and use it yourself.  You can
> publish the modified version as you want.  But it will only get back to
> the mainline if it is approved by the project maintainers, which
> involves going through a formal process of "Python Enhancement
> Proposals" to give the main Python developers and the Python community a
> chance to consider the change and its implications.  The change must
> meet the approval of Python's Benevolent Dictator For Life, Guido van
> Rossum, before it has a chance to become part of the official mainline
> Python.

Roughly, how many changes have been allowed since python came about,
compared to how many changes have been allowed to the C Standard?

> Still, you have a far better chance of getting a change into Python than
> getting a change into C :-)

My point exactly.

> > I
> > can't do that with C.  I can do that with GCC or CLANG or some instance
> > of a C compiler, but not C, not to the standard.
> > 
> > I would argue it doesn't move by people needs, but by committee needs.
> > Otherwise a lot would've been introduced that, because it wasn't,
> > spawned off many other languages.
> > 
> > It's why I've more or less ditched C and am proceeding with CAlive.  I
> > will provide support through C99 eventually however.  And other people
> > are free to add additional newer support.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 4:07:19 PM
On Monday, November 28, 2016 at 9:13:24 AM UTC-6, Rick C. Hodgin wrote:
> On Sunday, November 27, 2016 at 11:47:10 AM UTC-5, David Brown wrote:
> > Python is a strongly typed language, not a weakly typed language (such 
> > as C).
> 
> Are you saying "C is a weakly typed language" or "Python is a strongly
> typed language like C?"

C has all the dangers associated with loosely-typed languages; some dialects
expose the programmer to the semantic power of loosely-typed languages, while
others only expose the dangers.

In Python, one can have an array which is capable of holding things of any
type, but *the system will always know what types of things it holds*.  In
some dialects of C, heap storage is nothing more nor less than a bunch of
bits that can be interpreted as the programmer sees fit, but the
implementation will neither know nor care what types they represent.  In
other dialects, however, an implementation might behave as though it attaches
type information to an arbitrary subset of information on the heap, and may
behave in arbitrary fashion if information is used in a fashion contrary to
its right type.

I would describe Python as being strongly typed since the implementation
always knows the type of every object.  I would describe dialects of C which
allow storage to be freely reinterpreted as weakly-typed.  I'm not sure
what adjective would be appropriate for the dialects which don't allow
programmers to exploit the semantics of loose typing but don't provide the
protections associated with strong typing.

> > But Python is a dynamically typed language, with duck-typing.
> 
> How can a dynamically typed language be a strongly typed language?
> I was under the impression in Python you could code this:
> 
>     n = 5
>     print n
>     n = "Hello, world"
>     print n

You can, but if you tried e.g. 

     n = 5
     n = n-3;
     print n
     n = "Hello world"
     n = n-3;
     print n

you'd get a type mismatch error on the second subtract since it's not
possible to subtract a number from a string.  An analogous situation in
C to your first example would be:

     void *p = malloc(8);
     *(uint32_t*)p = 5;
     printf("%d", *(uint32_t**)p);
     *(char const*)p = "Hello there!";
     printf("%s", *(char const**)p);

and the second example:

     void *p = malloc(4 + sizeof (char*));
     *(uint32_t*)p = 5;
     *(uint32_t*)p -= 3;
     printf("%d", *(uint32_t*)p);
     *(char const**)p = "Hello there!";
     *(uint32_t*)p -= 3;
     printf("%s", *(char const**)p);

The second code example in C would not likely yield a type mismatch error
at runtime.  The malloc storage would be capable of holding either an integer
or a character pointer, but an implementation would not typically keep track
of which one it held.

> I disagree with the way the term "strongly typed" is used there because
> I consider C to be a strongly typed language.  You type "char c" and
> that's all C can ever be within that scope.

Things that are statically typed in C are strongly typed.  In Python,
things are strongly dynamically typed.
0
supercat
11/28/2016 4:16:48 PM
On Monday, November 28, 2016 at 9:54:00 AM UTC-6, David Brown wrote:
> Once you have your "char c", it is not hard to make an "int *" pointer
> to point to it and read the same data as a different type - that is what
> makes it "weak".  It is very hard to do something similar in Python.

It's easy syntactically to write code whose only sensible meaning is that
the bits at some address should be written as a particular type.  It's also
easy to write code whose only sensible meaning is that the bits at some
address should be read as a particular type.

In a truly-weakly-typed language, combining the two operations should be
a simple way to take the bit pattern used for one type and reinterpret it
as another.  Some dialects of C are weakly-typed by that definition, but
the Standard does not describe such semantics except to note that some
implementations happen to offer them (but not defining a means by which
code can test for such implementations).
0
supercat
11/28/2016 4:26:49 PM
On Monday, November 28, 2016 at 11:17:00 AM UTC-5, supe...@casperkitty.com wrote:
> On Monday, November 28, 2016 at 9:13:24 AM UTC-6, Rick C. Hodgin wrote:
> > On Sunday, November 27, 2016 at 11:47:10 AM UTC-5, David Brown wrote:
> > > Python is a strongly typed language, not a weakly typed language (such 
> > > as C).
> > 
> > Are you saying "C is a weakly typed language" or "Python is a strongly
> > typed language like C?"
> 
> C has all the dangers associated with loosely-typed languages; some dialects
> expose the programmer to the semantic power of loosely-typed languages, while
> others only expose the dangers.
> 
> In Python, one can have an array which is capable of holding things of any
> type, but *the system will always know what types of things it holds*.  In
> some dialects of C, heap storage is nothing more nor less than a bunch of
> bits that can be interpreted as the programmer sees fit, but the
> implementation will neither know nor care what types they represent.  In
> other dialects, however, an implementation might behave as though it attaches
> type information to an arbitrary subset of information on the heap, and may
> behave in arbitrary fashion if information is used in a fashion contrary to
> its right type.
> 
> I would describe Python as being strongly typed since the implementation
> always knows the type of every object.  I would describe dialects of C which
> allow storage to be freely reinterpreted as weakly-typed.  I'm not sure
> what adjective would be appropriate for the dialects which don't allow
> programmers to exploit the semantics of loose typing but don't provide the
> protections associated with strong typing.
> 
> > > But Python is a dynamically typed language, with duck-typing.
> > 
> > How can a dynamically typed language be a strongly typed language?
> > I was under the impression in Python you could code this:
> > 
> >     n = 5
> >     print n
> >     n = "Hello, world"
> >     print n
> 
> You can, but if you tried e.g. 
> 
>      n = 5
>      n = n-3;
>      print n
>      n = "Hello world"
>      n = n-3;
>      print n
> 
> you'd get a type mismatch error on the second subtract since it's not
> possible to subtract a number from a string.  An analogous situation in
> C to your first example would be:
> 
>      void *p = malloc(8);
>      *(uint32_t*)p = 5;
>      printf("%d", *(uint32_t**)p);
>      *(char const*)p = "Hello there!";
>      printf("%s", *(char const**)p);
> 
> and the second example:
> 
>      void *p = malloc(4 + sizeof (char*));
>      *(uint32_t*)p = 5;
>      *(uint32_t*)p -= 3;
>      printf("%d", *(uint32_t*)p);
>      *(char const**)p = "Hello there!";
>      *(uint32_t*)p -= 3;
>      printf("%s", *(char const**)p);
> 
> The second code example in C would not likely yield a type mismatch error
> at runtime.  The malloc storage would be capable of holding either an integer
> or a character pointer, but an implementation would not typically keep track
> of which one it held.

If this is your position regarding C being a weakly typed language,
then I disagree with you completely.

In each of those cases, the void* p value is being used in a way
which requires the explicit use of a cast.  Each explicit use of
a cast creates a new strongly typed form which is expressly known
to the compiler, and is used only as the compiler is able to use
it.  The fact that the cast operation is taking input and putting
it through to something improperly is not a feature of language
typing.  It's a feature of the misuse of language typing.

> > I disagree with the way the term "strongly typed" is used there because
> > I consider C to be a strongly typed language.  You type "char c" and
> > that's all C can ever be within that scope.
> 
> Things that are statically typed in C are strongly typed.  In Python,
> things are strongly dynamically typed.

All things in C are statically typed.  There are no exceptions.  Even
in the case of unions, every reference to each member is totally and
completely known top-to-bottom as to what it is.  Whether or not it's
been misapplied through the nuances of the application is another
matter.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 5:31:33 PM
On Monday, November 28, 2016 at 6:47:01 AM UTC-5, David Brown wrote:

<snip>

> Should it be added to the C standard library?  I don't really see why
> not, now that _Generic is there.  But the functions could not be called
> "max" and "min" without clashing too much with existing code - they
> would have to be something like _Max and _Min (with a <stdmaxmin.h> to
> define "max" and "min" names).  Would they be used enough to make it
> worth the effort?  Probably not.
> 
> 
> 
> static inline int _maxi(int const x, int const y) {
>     return y > x ? y : x;
> }
> 
> static inline unsigned _maxui(unsigned const x, unsigned const y) {
>     return y > x ? y : x;
> }
> 
> static inline long _maxl(long const x, long const y) {
>     return y > x ? y : x;
> }
> 
> static inline unsigned long _maxul(unsigned long const x, unsigned long
> const y) {
>     return y > x ? y : x;
> }
> 
> static inline long long _maxll(long long const x, long long const y) {
>     return y > x ? y : x;
> }
> 
> static inline unsigned long long _maxull(unsigned long long const x,
> unsigned long long const y) {
>     return y > x ? y : x;
> }
> 
> static inline float _maxf(float const x, float const y) {
>     return y > x ? y : x;
> }
> 
> static inline double _maxd(double const x, double const y) {
>     return y > x ? y : x;
> }
> 
> static inline long double _maxld(long double const x, long double const y) {
>     return y > x ? y : x;
> }
> 
> #define max(x, y) (_Generic((x) + (y),   \
>     int:                _maxi,             \
>     unsigned:           _maxui,           \
>     long:               _maxl,            \
>     unsigned long:      _maxul,           \
>     long long:          _maxll,           \
>     unsigned long long: _maxull,          \
>     float:              _maxf,            \
>     double:             _maxd,            \
>     long double:        _maxld)((x), (y)))

You might consider using a macro for your function definitions.

#define MAX_FN_IMPL( NAME, TYPE )                           \
static inline TYPE _max##NAME(TYPE const x, TYPE const y) { \
  return y > x ? y : x;                                     \
}

I actually use this style on occasion to define type specific functions
as needed.
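
For example (just a sketch, keeping the _max##NAME naming and the
_Generic dispatch from the post above), the nine hand-written
definitions shrink to:

MAX_FN_IMPL(i,   int)
MAX_FN_IMPL(ui,  unsigned)
MAX_FN_IMPL(l,   long)
MAX_FN_IMPL(ul,  unsigned long)
MAX_FN_IMPL(ll,  long long)
MAX_FN_IMPL(ull, unsigned long long)
MAX_FN_IMPL(f,   float)
MAX_FN_IMPL(d,   double)
MAX_FN_IMPL(ld,  long double)

and with the max() macro above, max(3, 7) picks _maxi while
max(3.0, 7) picks _maxd.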

Best regards,
John D.
0
jadill33
11/28/2016 5:41:20 PM
BartC <bc@freeuk.com> writes:
> On 27/11/2016 16:35, David Brown wrote:
>> On 27/11/16 01:21, BartC wrote:
>
>>> So 14 functions so far, and you haven't yet looked at in-place versions.
>>
>> And that is why there is no "max" standard function in C.
>
> Huh? That is the very reason why it ought to be built-in and benefiting 
> from overloading.
[...]
> Your reasons are just the usual excuses that try and justify why some 
> obvious feature isn't in C. Of course if it /was/ in, then it would be 
> indispensable!

Ultimately, the set of features that are built into C is pretty
much arbitrary.  Built-in min and max operators might have been nice
to have, but they just don't happen to have been in the original
language -- and the functionality can always be implemented in code,
though a bit less conveniently than using a built-in operator.
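
(The usual home-grown substitute is a one-line macro pair; a minimal
sketch, with the usual double-evaluation caveat:

    #define MAX(a, b)  ((a) > (b) ? (a) : (b))
    #define MIN(a, b)  ((a) < (b) ? (a) : (b))

These work for any arithmetic operands, subject to the usual
conversions, but they evaluate each argument twice, so something
like MAX(a[i++], b) misbehaves.)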

If Ritchie had decided to implement min and max operators, I don't
believe I'd be saying that they're indispensable.  I'd probably
say that they can be useful, but they're never strictly necessary,
and the language could have gotten along without them.

[...]

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/28/2016 6:10:44 PM
On Monday, November 28, 2016 at 11:31:40 AM UTC-6, Rick C. Hodgin wrote:
> All things in C are statically typed.  There are no exceptions.  Even
> in the case of unions, every reference to each member is totally and
> completely known top-to-bottom as to what it is.  Whether or not it's
> been misapplied through the nuances of the application is another
> matter.

That depends upon whether the contents of heap storage are considered a
"thing", and whether one expects type violations to have some effect other
than Undefined Behavior.  If one allows a language to be called "strongly
typed" without requiring any deterministic enforcement of typing rules,
then heap storage would qualify as "strongly typed" under such a definition.
Otherwise, heap storage isn't strongly typed; whether it would qualify as
"weakly-typed" would depend upon the implementation.
0
supercat
11/28/2016 6:38:12 PM
On 28/11/2016 17:31, Rick C. Hodgin wrote:
> On Monday, November 28, 2016 at 11:17:00 AM UTC-5, supe...@casperkitty.com wrote:

>> The second code example in C would not likely yield a type mismatch error
>> at runtime.  The malloc storage would be capable of holding either an integer
>> or a character pointer, but an implementation would not typically keep track
>> of which one it held.
>
> If this is your position regarding C being a weakly typed language,
> then I disagree with you completely.

C is strongly typed? Even without explicit casts, C allows these 
expressions, which would raise eyebrows if you are not familiar with the 
language:

(s,t are char[]s containing a string; p is a pointer to one object; a,b 
are arrays; f is a function; i,j are signed integers)

  s+5;         // add number to string; like my first example
  s-t;         // subtract strings
  p[i];        // index a pointer
  *a;          // dereference an array
  s[1]/2;      // s[1] is a char; halve a 'char'
  (*f)();      // dereference a function
  i[a];        // index i with an array?
  a+i;         // add number to array
  a-b;         // subtract arrays
  a==0;        // compare array with zero
  (&i)[j];     // index i like an array

With casts, type-punning and unions, you can completely bypass the 
type-system and with very little effort.

I can't show the equivalent of the above in Python, as it lacks the 
ability to do the pointer examples. However my own dynamic language does 
have them, and all the above expressions are errors**.

So, which language is the more strongly, or strictly, typed: the one 
that allows pretty much anything, whether it makes sense /in the context 
of the program/ or not, or the one that raises objections?

> All things in C are statically typed.  There are no exceptions.  Even
> in the case of unions, every reference to each member is totally and
> completely known top-to-bottom as to what it is.  Whether or not it's
> been misapplied through the nuances of the application is another
> matter.

That's not a very compelling argument. Is a processor instruction set 
strictly typed or not? Since you will always be working with machine 
words containing bit patterns of so many bits.

(**Except for the equivalent of (*f)(); now I'll have to find out why!)

-- 
Bartc
0
BartC
11/28/2016 6:57:12 PM
On Monday, November 28, 2016 at 1:38:18 PM UTC-5, supe...@casperkitty.com wrote:
> On Monday, November 28, 2016 at 11:31:40 AM UTC-6, Rick C. Hodgin wrote:
> > All things in C are statically typed.  There are no exceptions.  Even
> > in the case of unions, every reference to each member is totally and
> > completely known top-to-bottom as to what it is.  Whether or not it's
> > been misapplied through the nuances of the application is another
> > matter.
> 
> That depends upon whether the contents of heap storage are considered a
> "thing", and whether one expects type violations to have some effect other
> than Undefined Behavior.  If one allows a language to be called "strongly
> typed" without requiring any deterministic enforcement of typing rules,
> then heap storage would qualify as "strongly typed" under such a definition.
> Otherwise, heap storage isn't strongly typed; whether it would qualify as
> "weakly-typed" would depend upon the implementation.

I don't consider the heap to be a typeable quantity.  I consider access
to the heap to be done only through strongly typed variables.  I consider
the heap to be a load/store mechanism which populates into/out of typed
things by the application's protocol (bitmaps accessed as bitmaps, rtf
files as rich text, etc.).

Same with remote data loaded into the heap (disk, network).

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 6:59:11 PM
On Monday, November 28, 2016 at 12:57:21 PM UTC-6, Bart wrote:
> C is strongly typed? Even without explicit casts, C allows these 
> expressions, which would raise eyebrows if you are not familiar with the 
> language:
> 
> (s,t are char[]s containing a string; p is a pointer to one object; a,b 
> are arrays; f is a function; i,j are signed integers)
> 
>   s+5;         // add number to string; like my first example

Incidentally, Python will allow both 3+4 (yielding 7) and "3"+"4" (yielding
"34"), but will squawk at 3+"4" or "3"+4.  If one wants to add 3 to the integer
value of "4", or append the string representation of 4 to the string "3",
one would write 3+int("4") or "3"+str(4), respectively.  That's a reasonably
nice way to handle overloading, since operators work consistently in all
cases where they work without error.

Javascript, by contrast, is more loosey-goosey, and will try to guess what
"5"+4 or "7"-2 should mean; in the former case it will decide that since
there is an overload of "+" for strings, it should convert the value 4
to a string, while in the latter case it will decide that since there is
no overload of "-" for strings, but there is one for numbers, it should
convert "7" to a number.

0
supercat
11/28/2016 7:33:16 PM
On 28/11/16 18:57, BartC wrote:
<snip>
>
> C is strongly typed? Even without explicit casts,

Just for the record, "explicit" is redundant here. Casts are explicit 
conversions, so you are saying the equivalent of "explicit explicit 
conversions".

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
11/28/2016 8:09:16 PM
On Monday, November 28, 2016 at 12:59:31 PM UTC-6, Rick C. Hodgin wrote:
> I don't consider the heap to be a typeable quantity.  I consider access
> to the heap to be done only through strongly typed variables.  I consider
> the heap to be a load/store mechanism which populates into/out of typed
> things by the application's protocol (bitmaps accessed as bitmaps, rtf
> files as rich text, etc.).

Until C99, I think most programmers considered the bytes in there as
having types only in an ephemeral sense; if `p` happens to hold address
0x12345678, then "*(uint32_t*)p = 0xCAFEBABE;" would on most machines set four
bytes starting at address 0x12345678 to [0xCA,0xFE,0xBA,0xBE] or else
[0xBE,0xBA,0xFE,0xCA], and once that was done the system would cease knowing
or caring about any relationship between those bytes and the type "uint32_t".
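
A sketch of that view (a fragment; needs <stdio.h>, <stdlib.h> and
<stdint.h>, and error handling is omitted):

    unsigned char *bytes = malloc(sizeof(uint32_t));
    *(uint32_t*)bytes = 0xCAFEBABE;
    /* the same four bytes read back, with no uint32_t in sight;
       prints CA FE BA BE or BE BA FE CA depending on byte order */
    printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);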

C99 authorizes the compiler to treat the heap as strongly-typed storage at
its leisure, but without providing the programmer with anything like the
safety net that is promised in strongly-typed languages.  Modern compilers
can make things more dangerous than they would be in even a weakly-typed
system, since in a weakly-typed system storage which has been formerly used
as an unknown type may be treated as holding an arbitrary unknown set of bit
values.  If code does something like:

    unsigned index = itemLookup[itemValue];
    if ((index < numItems) && (itemValues[index] == itemValue))
      ...handle found item
    else
      ...item not in collection

the values of bits in itemLookup[] will only matter for items which are
in the collection.  If an item is not in the collection, then
itemLookup[itemValue] will either yield a value which is too big to
identify a valid item within itemValues, or else it will identify an item
that doesn't match the one of interest.  Since the value of the bits in
itemLookup[itemValue] won't matter, there should be no need to clear out that
storage before use.

In C99, however, reading storage as the wrong type invokes UB even in cases
where code would work equally well with any bit pattern the storage in
question might hold.  C99 will thus create new failure modes that would not
have existed if heap storage were weakly typed.
0
supercat
11/28/2016 8:09:44 PM
On Monday, November 28, 2016 at 1:57:21 PM UTC-5, Bart wrote:
> On 28/11/2016 17:31, Rick C. Hodgin wrote:
> > On Monday, November 28, 2016 at 11:17:00 AM UTC-5, supe...@casperkitty.com wrote:
> 
> >> The second code example in C would not likely yield a type mismatch error
> >> at runtime.  The malloc storage would be capable of holding either an integer
> >> or a character pointer, but an implementation would not typically keep track
> >> of which one it held.
> >
> > If this is your position regarding C being a weakly typed language,
> > then I disagree with you completely.
> 
> C is strongly typed? Even without explicit casts, C allows these 
> expressions, which would raise eyebrows if you are not familiar with the 
> language:
> 
> (s,t are char[]s containing a string; p is a pointer to one object; a,b 
> are arrays; f is a function; i,j are signed integers)
> 
>   s+5;         // add number to string; like my first example
>   s-t;         // subtract strings
>   p[i];        // index a pointer
>   *a;          // dereference an array
>   s[1]/2;      // s[1] is a char; halve a 'char'
>   (*f)();      // dereference a function
>   i[a];        // index i with an array?
>   a+i;         // add number to array
>   a-b;         // subtract arrays
>   a==0;        // compare array with zero
>   (&i)[j];     // index i like an array
> 
> With casts, type-punning and unions, you can completely bypass the 
> type-system and with very little effort.

You write compilers, Bart.  You know that's not true.  Every use
of a union, every use of type-punning, every use of any translation
mechanism, you are generating conversion code to handle the new
thing explicitly.

No type constraints are violated or altered, but only by the express
direction of the developer, and that developer may not have done
things properly ... but that's not the issue.

Other languages like Python and xBase allow you to change types at
your leisure continually.  I view that as weakly typed, even though
each internal type at any given instant is well defined and will
only behave as it should.

> I can't show the equivalent of the above in Python, as it lacks the 
> ability to do the pointer examples. However my own dynamic language does 
> have them, and all the above expressions are errors**.
> 
> So, which language is the more strongly, or strictly, typed: the one 
> that allows pretty much anything, whether it makes sense /in the context 
> of the program/ or not, or the one that raises objections?

I would say C is completely restrictive.  But, I can see what your
point is that through the use of the many mechanisms provided for by
its casting abilities, a developer can force the compiler to do things
with data the language itself wouldn't otherwise allow apart from the
use of those castings.

I don't see that as any kind of a weakness in typing.  It's an ability
or extension of the language to look at any of its data in any way.
That's actually why I like C so much, because I used to do everything
in assembly and I could do that.  When I was younger the extra typing
and somewhat more difficult requirements to get everything debugged
were not an issue for me.  As I get older I welcome C's help. :-)

> > All things in C are statically typed.  There are no exceptions.  Even
> > in the case of unions, every reference to each member is totally and
> > completely known top-to-bottom as to what it is.  Whether or not it's
> > been misapplied through the nuances of the application is another
> > matter.
> 
> That's not a very compelling argument. Is a processor instruction set 
> strictly typed or not? Since you will always be working with machine 
> words containing bit patterns of so many bits.

Absolutely a processor's ISA is strictly typed.  It's also strongly
typed.  It's hard-wired literally. :-)  Things are explicit and exact
always and only.

The mechanisms of the algorithm written atop those fundamental ISA
abilities expose usages which are able to wield the more fundamental
operations into other meanings, just as quarks join to form protons,
neutrons and other particles.  And those particles join to form atoms.  And
those atoms form to create the bread we have, and the shoes we wear,
and so on.

But at each layer, such as from the ISA's perspective, everything is
strictly and strongly typed, following exact rules without question.

The same is true in C, it's just that when viewed through the mechanisms
C allows, the ones you've explicitly instructed the compiler to employ
when manipulating data for you, it can yield results which give rise to
errors.  That's not a problem with C, however, but with the application
developer.

Python, on the other hand, is not like that.  It can take a named token
and reassign its associated data and type at any point and time.  The
type contained at any given instant is a formal and strong
type, but the association within the token is completely variable and
subject to change line by line.

> (**Except for the equivalent of (*f)(); now I'll have to find out why!)

I've always assumed the (*f)() syntax was just that:  a syntax.  I've
never understood why it's that way except so that it could be parsed
for what it is more easily.  I would think it should've been "f*()" if
you're going to be consistent with other C syntax like char*, and then
to call the function, you reference it in your code dereferenced as *f()
in each use to indicate visually you're calling a dereferenced function
pointer.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 8:29:17 PM
On Monday, November 28, 2016 at 3:09:55 PM UTC-5, supe...@casperkitty.com wrote:
> On Monday, November 28, 2016 at 12:59:31 PM UTC-6, Rick C. Hodgin wrote:
> > I don't consider the heap to be a typeable quantity.  I consider access
> > to the heap to be done only through strongly typed variables.  I consider
> > the heap to be a load/store mechanism which populates into/out of typed
> > things by the application's protocol (bitmaps accessed as bitmaps, rtf
> > files as rich text, etc.).
> 
> Until C99, I think most programmers considered the bytes in there as
> having types only in an ephemeral sense; if `p` happens to hold address
> 0x12345678, then "*(uint32_t*)p = 0xCAFEBABE;" would on most machines set four
> bytes starting at address 0x12345678 to [0xCA,0xFE,0xBA,0xBE] or else
> [0xBE,0xBA,0xFE,0xCA], and once that was done the system would cease knowing
> or caring about any relationship between those bytes and the type "uint32_t".
> 
> C99 authorizes the compiler to treat the heap as strongly-typed storage at
> its leisure, but without providing the programmer with anything like the
> safety net that is promised in strongly-typed languages.  Modern compilers
> can make things more dangerous than they would be in even a weakly-typed
> system, since in a weakly-typed system storage which has been formerly used
> as an unknown type may be treated as holding an arbitrary unknown set of bit
> values.  If code does something like:
> 
>     unsigned index = itemLookup[itemValue];
>     if ((index < numItems) && (itemValues[index] == itemValue))
>       ...handle found item
>     else
>       ...item not in collection
> 
> the values of bits in itemLookup[] will only matter for items which are
> in the collection.  If an item is not in the collection, then
> itemLookup[itemValue] will either yield a value which is too big to
> identify a valid item within itemValues, or else it will identify an item
> that doesn't match the one of interest.  Since the value of the bits in
> itemLookup[itemValue] won't matter, there should be no need to clear out that
> storage before use.
> 
> In C99, however, reading storage as the wrong type invokes UB even in cases
> where code would work equally well with any bit pattern the storage in
> question might hold.  C99 will thus create new failure modes that would not
> have existed if heap storage were weakly typed.

Maybe the issue is I don't understand the definitions of strongly and
weakly typed. :-)

I view strongly typed as anything that can only be one thing, only used
as one thing, and that's its type.

I view weakly typed as anything that can be more than one thing, can be
used as one thing now, another thing later, and its type is instance-by-
instance determined, rather than being known at compile time.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 8:46:13 PM
On Monday, November 28, 2016 at 2:46:29 PM UTC-6, Rick C. Hodgin wrote:
> I view strongly typed as anything that can only be one thing, only used
> as one thing, and that's its type.
> 
> I view weakly typed as anything that can be more than one thing, can be
> used as one thing now, another thing later, and its type is instance-by-
> instance determined, rather than being known at compile time.

Python is strongly dynamically typed.  Static typing would imply that the
compiler would know about the types of things.  Strong typing implies that
*something* knows about the types of things and will enforce proper usage.
In C, directly-accessed objects are strongly statically typed; storage
which is accessed only via pointers is handled in whatever manner the
implementation feels like.
0
supercat
11/28/2016 9:05:20 PM
On 28/11/2016 20:29, Rick C. Hodgin wrote:
> On Monday, November 28, 2016 at 1:57:21 PM UTC-5, Bart wrote:

>> That's not a very compelling argument. Is a processor instruction set
>> strictly typed or not? Since you will always be working with machine
>> words containing bit patterns of so many bits.

> Python, on the other hand, is not like that.  It can take a named token
> and reassign its associated data and type at any point and time.  The
> type contained at any given instant is a formal and strong
> type, but the association within the token is completely variable and
> subject to change line by line.

That's just dynamic typing. Take this fragment of Python:

  x = "ONE "
  y = 10
  z = x*y          # duplicate string y times
  x=len(z)

  print (z,x)

Output is: ONE ONE ONE ONE ONE ONE ONE ONE ONE ONE  40

The 'x' variable here changes type. But you can do the same in C with 
some effort (CPython is written in C). Example in C (but depends on 
substantial support from my interpreter core):

     static varrec x, y, z;    // type-tagged objects

     vx_makestring("ONE ",&x);
     vx_makeint(10,&y);

     vx_ushare(&x);            // to do with reference-counting
     vx_mul(&x,&y,&z);

     vx_ushare(&z);
     pch_print(&z,0);

     vx_ushare(&z);
     vx_len(&z,&x);

     pch_print(&x,0);

Output is: ONE ONE ONE ONE ONE ONE ONE ONE ONE ONE  40

Snap.

The type system for C implemented at compile time for native types is a 
little different from a runtime type system dependent on type-tagged 
objects. The latter is more strongly typed IMO (possibly due to not 
having the same leisure the compiler has to do analysis, it has to make 
billions of quick decisions not a few thousand).
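
If anyone wants to see what "type-tagged" means in plain C, here is a 
minimal sketch (it is not my actual varrec, just an illustration; needs 
<stdio.h>):

     enum tag { T_INT, T_STR };

     typedef struct {
         enum tag tag;
         union { long long i; const char *s; } u;
     } value;

     /* every primitive operation inspects the tags before acting */
     value v_add(value a, value b) {
         value r = { T_INT, { 0 } };
         if (a.tag == T_INT && b.tag == T_INT)
             r.u.i = a.u.i + b.u.i;
         else
             fprintf(stderr, "type error: '+' wants two ints\n");
         return r;
     }

The check happens at run time, per operation, which is why I call that 
system the stricter one.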

>> (**Except for the equivalent of (*f)(); now I'll have to find out why!)
>
> I've always assumed the (*f)() syntax was just that:  a syntax.

In my example f was a function, not a pointer to a function.

>  I've
> never understood why it's that way except so that it could be parsed
> for what it is more easily.  I would think it should've been "f*()"

* isn't a postfix op in C. So it was to be *f, /if/ f was a pointer (but 
C allows any number of extra derefs for functions or function pointers). 
(My syntax is postfix and would be f^(), although I've not fixed that 
problem so f^ is an error when f is a normal function.)

> if
> you're going to be consistent with other C syntax like char*, and then
> to call the function, you reference it in your code dereferenced as *f()
> in each use to indicate visually you're calling a dereferenced function
> pointer.

I'm not sure what you mean here. The * goes on the right of a type, but 
on the left of a name, and it's the name that appears in an expression. 
But my example would need to be written (*f)(), as *f() has a different 
meaning - call f() then apply a deref.


-- 
Bartc
0
BartC
11/28/2016 9:30:11 PM
On Monday, November 28, 2016 at 4:30:20 PM UTC-5, Bart wrote:
> On 28/11/2016 20:29, Rick C. Hodgin wrote:
> > On Monday, November 28, 2016 at 1:57:21 PM UTC-5, Bart wrote:
> 
> >> That's not a very compelling argument. Is a processor instruction set
> >> strictly typed or not? Since you will always be working with machine
> >> words containing bit patterns of so many bits.
> 
> > Python, on the other hand, is not like that.  It can take a named token
> > and reassign its associated data and type at any point and time.  The
> > type contained at any given instant is a formal and strong
> > type, but the association within the token is completely variable and
> > subject to change line by line.
> 
> That's just dynamic typing. Take this fragment of Python:

I don't see the difference.  The token has a name.  Its type is, in
the case of dynamic typing, whatever it happens to be at the time.
It is weakly typed because you can't know at compile time what it
will be in all cases.

But, I accept the fact that I don't understand formal terminology
as I've never studied it, but just garnered it from what I've read
online and heard other people talking about.

I had an employer in 2007 standing over my shoulder guiding me on
how to code this particular thing.  He said at one point, "dereference
the pointer," and I sat there thinking, "Dereference, dereference, I
wonder what that means?"  I finally turned and looked at him and he
laughed and said, "type an asterisk."  I then said, "OH!" LOL :-)

>   x = "ONE "
>   y = 10
>   z = x*y          # duplicate string y times
>   x=len(z)
> 
>   print (z,x)
> 
> Output is: ONE ONE ONE ONE ONE ONE ONE ONE ONE ONE  40
> 
> The 'x' variable here changes type. But you can do the same in C with 
> some effort (CPython is written in C). Example in C (but depends on 
> substantial support from my interpreter core):
> 
>      static varrec x, y, z;    // type-tagged objects
> 
>      vx_makestring("ONE ",&x);
>      vx_makeint(10,&y);
> 
>      vx_ushare(&x);            // to do with reference-counting
>      vx_mul(&x,&y,&z);
> 
>      vx_ushare(&z);
>      pch_print(&z,0);
> 
>      vx_ushare(&z);
>      vx_len(&z,&x);
> 
>      pch_print(&x,0);
> 
> Output is: ONE ONE ONE ONE ONE ONE ONE ONE ONE ONE  40
> 
> Snap.

Protocol.  You can build atop fundamental operations to create just
about anything.  That doesn't mean they are that higher thing.  Here
we see some type of protocol employed by called functions which then
manipulate the data to present as the thing, but it's weakly typed,
meaning in each case the called function must examine the type to
see what it is and then, through the logic test, branch to the
appropriate handler.

> The type system for C implemented at compile time for native types is a 
> little different from a runtime type system dependent on type-tagged 
> objects. The latter is more strongly typed IMO (possibly due to not 
> having the same leisure the compiler has to do analysis, it has to make 
> billions of quick decisions not a few thousand).
> 
> >> (**Except for the equivalent of (*f)(); now I'll have to find out why!)
> >
> > I've always assumed the (*f)() syntax was just that:  a syntax.
> 
> In my example f was a function, not a pointer to a function.

I am unaware of that syntax in C/C++ apart from function declarations.
They define functions that are determined by an address stored in a
variable, and not something known at compile-time, though they can be
initially populated if they include an address-of reference, and may
even be optimized away into static functions if allowed... Still,
fundamentally they are:

    move  register,f
    call  register    ; Indirect call to address contained in f

On i386:

    mov   ebx,f
    call  ebx

> >  I've
> > never understood why it's that way except so that it could be parsed
> > for what it is more easily.  I would think it should've been "f*()"
> 
> * isn't a postfix op in C. So it was to be *f, /if/ f was a pointer (but 
> C allows any number of extra derefs for functions or function pointers). 
> (My syntax is postfix and would be f^(), although I've not fixed that 
> problem so f^ is an error when f is a normal function.)

I think it should be a pointer because internally that's what it is,
it just happens to be a function pointer.

> > if
> > you're going to be consistent with other C syntax like char*, and then
> > to call the function, you reference it in your code dereferenced as *f()
> > in each use to indicate visually you're calling a dereferenced function
> > pointer.
> 
> I'm not sure what you mean here. The * goes on the right of a type, but 
> on the left of a name, and it's the name that appears in an expression. 
> But my example would need to be written (*f)(), as *f() has a different 
> meaning - call f() then apply a deref.

Well, I only know the one syntax in C/C++ for the (*f)() reference you
indicate, which is:

    int (*my_function)(int a, int b);

In that case I call with:
    int x = my_function(1, 2);

I think the syntax should've been:
    int my_function*(int a, int b);

And then to use it you should've had to use:
    int x = *my_function(1, 2);

In this way, you would know at declaration time that my_function is
a function pointer declaration which returns type int, and receives
two ints as input.  And in usage, you would know that my_function is
a function pointer.

So, I don't know what the (*f)() syntax is you're referring to.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 9:47:06 PM
On Monday, November 28, 2016 at 4:05:27 PM UTC-5, supe...@casperkitty.com wrote:
> On Monday, November 28, 2016 at 2:46:29 PM UTC-6, Rick C. Hodgin wrote:
> > I view strongly typed as anything that can only be one thing, only used
> > as one thing, and that's its type.
> > 
> > I view weakly typed as anything that can be more than one thing, can be
> > used as one thing now, another thing later, and its type is instance-by-
> > instance determined, rather than being known at compile time.
> 
> Python is strongly dynamically typed.  Static typing would imply that the
> compiler would know about the types of things.  Strong typing implies that
> *something* knows about the types of things and will enforce proper usage.
> In C, directly-accessed objects are strongly statically typed; storage
> which is accessed only via pointers is handled in whatever manner the
> implementation feels like.

I can accept that.  My use of the term strong and weak referred more to
the static condition.

I still don't see how something can be weakly typed under that definition.
I saw a reference to the possibility of altering the internal type through
an operation, such as possibly multiplying an integer by a floating point
and changing the resulting type from integer to floating point?  Would
that be weakly typed?  I would argue it's dynamically typed.  And I
would then argue that there is no dynamic and static typing, but only
strong and weak typing.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 9:50:35 PM
On Monday, November 28, 2016 at 3:47:20 PM UTC-6, Rick C. Hodgin wrote:
> I don't see the difference.  The token has a name.  Its type is, in
> the case of dynamic typing, whatever it happens to be at the time.
> It is weakly typed because you can't know at compile time what it
> will be in all cases.

Consider the C code:

    void foo(void *thing) { ... }

    void test1(void)
    {
      char *blob = malloc(65536*4);
      for (int32_t i=0; i<65536*4; i+=4)
      {
        if (((i * 5) & 0xCAFE) % 7)
          *(uint32_t*)(blob+i) = 0;
        else
          *(float*)(blob+i) = 0.0f;
      }
      for (int32_t i=0; i<65536*4; i+=4)
      {
        foo(blob+i);
      }
    }

The storage identified by "blob" will hold a mixture of "float" and
"uint32_t" values, but it is likely nothing will distinguish which
values were written as which type.  A strongly-typed language would
preserve the distinction between writing a "float" with a value of 0.0
and writing a "uint32_t" that happened to have the same bit pattern.
A weakly-typed language would allow bit patterns in storage to be
interpreted as any type for which they would be valid.  Most C
implementations will usually behave as though they are weakly typed,
but may occasionally keep track of object types well enough to break
code that would benefit from being able to exploit weak typing behavior.

0
supercat
11/28/2016 10:59:20 PM
On Monday, November 28, 2016 at 5:59:39 PM UTC-5, supe...@casperkitty.com wrote:
> On Monday, November 28, 2016 at 3:47:20 PM UTC-6, Rick C. Hodgin wrote:
> > I don't see the difference.  The token has a name.  Its type is, in
> > the case of dynamic typing, whatever it happens to be at the time.
> > It is weakly typed because you can't know at compile time what it
> > will be in all cases.
> 
> Consider the C code:
> 
>     void foo(void *thing) { ... }
> 
>     void test1(void)
>     {
>       char *blob = malloc(65536*4);
>       for (int32_t i=0; i<65536*4; i+=4)
>       {
>         if (((i * 5) & 0xCAFE) % 7)
>           *(uint32_t*)(blob+i) = 0;
>         else
>           *(float*)(blob+i) = 0.0f;
>       }
>       for (int32_t i=0; i<65536*4; i+=4)
>       {
>         foo(blob+i);
>       }
>     }
> 
> The storage identified by "blob" will hold a mixture of "float" and
> "uint32_t" values, but it is likely nothing will distinguish which
> values were written as which type.  A strongly-typed language would
> preserve the distinction between writing a "float" with a value of 0.0
> and writing a "uint32_t" that happened to have the same bit pattern.
> A weakly-typed language would allow bit patterns in storage to be
> interpreted as any type for which they would be valid.  Most C
> implementations will usually behave as though they are weakly typed,
> but may occasionally keep track of object types well enough to break
> code that would benefit from being able to exploit weak typing behavior.

Okay.  I would've viewed that as an explicit data tag.  Unless data is
tagged to be something explicit, either as by being part of a known
structure, for example, so that each offset within the structure is
known to contain a specific thing, or by including some type of
discriminator nearby which uniquely identifies the type, then the
actual type falls back on the manner in which it is accessed.

Best regards,
Rick C. Hodgin
0
Rick
11/28/2016 11:13:00 PM
BartC <bc@freeuk.com> writes:

> On 28/11/2016 17:31, Rick C. Hodgin wrote:
<snip>
>> If this is your position regarding C being a weakly typed language,
>> then I disagree with you completely.
>
> C is strongly typed? Even without explicit casts, C allows these
> expressions, which would raise eyebrows if you are not familiar with
> the language:
>
> (s,t are char[]s containing a string; p is a pointer to one object;

All pointers point to one object.

> a,b are arrays; f is a function; i,j are signed integers)
>
>  s+5;         // add number to string; like my first example
>  s-t;         // subtract strings

The comments describe actions that are not possible in C.

>  p[i];        // index a pointer

Yup.

>  *a;          // dereference an array

You can't dereference an array in C.

>  s[1]/2;      // s[1] is a char; halve a 'char'

Yup (char is an integer type).

>  (*f)();      // dereference a function
>  i[a];        // index i with an array?
>  a+i;         // add number to array
>  a-b;         // subtract arrays
>  a==0;        // compare array with zero

The comments, again, all describe actions that are not possible in C.

>  (&i)[j];     // index i like an array

Yup.

Anyone learning C should be warned that your comments (the ones that I
describe as being wrong) simply reflect your spin on what's going on in
the language.  It is not possible for me to believe, after all these
years, that you still think that one can, for example, subtract arrays
in C, so describing such a thing can only be another attempt to spread
confusion.  I really wish you would stop doing that.

> With casts, type-punning and unions, you can completely bypass the
> type-system and with very little effort.

Yes, this is why C's type system is usually described as weak, but the
lack of many run-time checks can also be seen as a sign of a weak type
system.  Other classic examples include

  *strchr("abc", 'a') = 'x';

which compiles without complaint (string literals have type char[N], not
const char[N]), yet the attempted modification is undefined behaviour and
nothing at run time is required to catch it.

<snip>
-- 
Ben.
0
Ben
11/29/2016 12:01:03 AM
On 28/11/2016 21:47, Rick C. Hodgin wrote:
 > On Monday, November 28, 2016 at 4:30:20 PM UTC-5, Bart wrote:

>> I'm not sure what you mean here. The * goes on the right of a type, but
>> on the left of a name, and it's the name that appear in an expression.
>> But my example would need to be written (*f)(), as *f() has a different
>> meaning - call f() then apply a deref.
>
> Well, I only know the one syntax in C/C++ for the (*f)() reference you
> indicate, which is:
>
>     int (*my_function)(int a, int b);
>
> In that case I call with:
>     int x = my_function(1, 2);
>
> I think the syntax should've been:
>     int my_function*(int a, int b);
>
> And then to use it you should've had to use:
>     int x = *my_function(1, 2);

Don't forget that this is C; there is always an extra twist! Given a 
function, and a function pointer:

    int fn(int,int);
    int (*fnptr)(int,int);

You call each of them like this:

    fn(10,20);           // call fn
    fnptr(10,20);        // call the function at *fnptr

Yes, exactly the same! Even though in ASM they will be:

     call fn
     ....
     mov R,[fnptr]
     call R

Unless you have a extra level of pointer, then you /will/ need a deref, 
but only one, and it needs to use this syntax:

   (*fnptrptr)(10,20);

However, C allows you to add ANY NUMBER of extra derefs when it comes to 
functions, so that you can /choose/ to have a matching number of derefs:

    fn(10,20);
    (*fnptr)(10,20);
    (**fnptrptr)(10,20);

(Just like my language /requires/, but it uses syntax like 
fnptrptr^^(10,20).)

Probably, you've never had to deal with more than one level of function 
pointer, so perhaps have never used (*fnptr)().

-- 
Bartc
0
BartC
11/29/2016 12:12:47 AM
BartC <bc@freeuk.com> writes:
<snip>
> C allows this:
>
>   char* s = "Hello world";
>   int n = 5;
>
>   printf("%s\n", s + n);
>   printf("%d\n", s + n);
>   printf("%p\n", s + n);
>
> So it can add a number to a string

No it can't.  I think you know it can't but saying it can satisfies some
private propaganda agenda of yours.  Please stop.

<snip>
-- 
Ben.
0
Ben
11/29/2016 12:14:56 AM
On 29/11/2016 00:01, Ben Bacarisse wrote:
> BartC <bc@freeuk.com> writes:

>> C is strongly typed? Even without explicit casts, C allows these
>> expressions, which would raise eyebrows if you are not familiar with
>> the language:
>>
>> (s,t are char[]s containing a string; p is a pointer to one object;
>
> All pointers point to one object.

(No; some are known, by the programmer, to point to the first of 
multiple consecutive objects. But others will point to only one, with 
the area beyond that object unknown, undefined or out of bounds. The 
programmer may or may not know.

The language however never knows whether there is a valid object at P+1, 
if the only spec for P is T*. In my example, it is known that *(P+1) or 
P[1] is invalid.)

>>  s+5;         // add number to string; like my first example
>>  s-t;         // subtract strings
>
> The comments describe actions that are not possible in C.

Yes, that was the intention. This is how it looks to someone not 
familiar with C, as I mentioned.

If I write this in Python (the semicolons are OK):

    s = "ABC";
    t = "DEF";

    s-t;

It fails because subtraction is not allowed between strings. Now plug 
that very same code into C (s,t need to be char* now not char[] to use 
the same assignments), and it compiles and runs!

<Snip>

-- 
Bartc

0
BartC
11/29/2016 12:29:50 AM
On 29/11/2016 00:14, Ben Bacarisse wrote:
> BartC <bc@freeuk.com> writes:
> <snip>
>> C allows this:
>>
>>   char* s = "Hello world";
>>   int n = 5;
>>
>>   printf("%s\n", s + n);
>>   printf("%d\n", s + n);
>>   printf("%p\n", s + n);
>>
>> So it can add a number to a string
>
> No it can't.

Yes it can; that's what the code is doing.

s is used to refer to a string.

n is a number.

The language allows you to add those together using 's + n'.

Of course the actual operation it does, because of how C works, is not 
very interesting: it takes the address of the start of the string, and 
steps it by 5. It doesn't end up with "Hello world5" or, after trying to 
convert "Hello World" to number and getting 0, adding 5 to that.

But then exactly what 'add number to a string' does hasn't been defined.

> I think you know it can't but saying it can satisfies some
> private propaganda agenda of your.  Please stop.

The subthread was about strong and weak typing.

-- 
Bartc
0
BartC
11/29/2016 12:36:43 AM
BartC <bc@freeuk.com> writes:
> On 29/11/2016 00:01, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>>> C is strongly typed? Even without explicit casts, C allows these
>>> expressions, which would raise eyebrows if you are not familiar with
>>> the language:
>>>
>>> (s,t are char[]s containing a string; p is a pointer to one object;
>>
>> All pointers point to one object.
>
> (No; some are known, by the programmer, to point to the first of 
> multiple consecutive objects. But others will point to only one, with 
> the area beyond that object unknown, undefined or out of bounds. The 
> programmer may or may not know.

That would have made more sense if you replaced the initial "No" by
"Yes".

It's not quite true that all pointers point to one object.  Function
pointers don't, and object pointers may be null, or invalid, or may
point just past the end of an array object.

But (and I'm getting really tired of explaining this to you), a pointer
to an object that happens to be the initial element of an array object
is still a pointer to that one object.  Given:

    int arr[10];
    int *p = arr;

the pointer p points to a single int object (which can also be referred
to as arr[0]).

[...]

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/29/2016 1:01:01 AM
On Monday, November 28, 2016 at 6:29:59 PM UTC-6, Bart wrote:
> On 29/11/2016 00:01, Ben Bacarisse wrote:
> > All pointers point to one object.
> 
> (No; some are known, by the programmer, to point to the first of 
> multiple consecutive objects. But others will point to only one, with 
> the area beyond that object unknown, undefined or out of bounds. The 
> programmer may or may not know.

Given the code:

    int a[5];
    int *b = a+5;

to what object would b point?
0
supercat
11/29/2016 1:02:49 AM
"Rick C. Hodgin" <rick.c.hodgin@gmail.com> writes:

> On Monday, November 28, 2016 at 4:30:20 PM UTC-5, Bart wrote:
<snip>

>> That's just dynamic typing.
>
> I don't see the difference.  The token has a name.  Its type is, in
> the case of dynamic typing, whatever it happens to be at the time.
> It is weakly typed because you can't know at compile time what it
> will be in all cases.

I know I promised myself I wouldn't reply to you anymore but your posts
in this thread are very muddled and likely to confuse others.  And with
BartC doing his best to spin matters to fit his agenda, there is just
too much confusion here for my liking.

The strong/weak distinction is really about rules and their enforcement.
A strongly-typed language has strict type rules and does not let the
programmer break them, so, to some extent, how strongly-typed C is is a
matter of opinion about what "breaking a type rule" really means.  For
example, is accessing a float in a union whose int member was the last
one set "breaking a type" rule or not?  What about accessing an array
beyond it's declared bounds?  The size is part of the type, but are the
rules about the limits of pointer arithmetic type rules or not?  I'd say
yes, but there is room for disagreement.
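
To make the union case concrete (a sketch, assuming 32-bit int and IEEE
float):

  union pun { int i; float f; } u;
  u.i = 0x3F800000;   /* the int member is the one last stored      */
  float x = u.f;      /* read the float member anyway; under the
                         assumed representation x is 1.0f, and no
                         diagnostic is required either way           */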

The other distinction you are muddling up with this one is about when
types are known and whether (once known) they are fixed or not; often
simplified as static vs. dynamic typing.  Again, there is room for
interpretation.  I would argue that C has rules about types that are
only known at run-time.  Many people would be surprised to hear someone
say that C has dynamic types, but the effective types rules, the rules
about VLAs and some of the rules about how unions may be used are all
examples of rules about types that can not be known at compile time.
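
The VLA case is perhaps the simplest to show (a sketch):

  #include <stdlib.h>

  void demo(size_t n)             /* assume n > 0 */
  {
      double (*m)[n];             /* the type double (*)[n] depends on */
      m = malloc(3 * sizeof *m);  /* a value known only at run time    */
      if (m) {
          m[2][n - 1] = 1.0;      /* a heap-allocated 3-by-n matrix    */
          free(m);
      }
  }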

C is often described as statically typed because the declared types of
objects can be determined at compile time and (therefore) do not change.
I don't really object to that, but it is, at best, an oversimplification.

The bottom line, though, is that C's type rules (of whatever sort) are
not strongly enforced.  That makes it a weakly typed language.

<snip>
> Well, I only know the one syntax in C/C++ for the (*f)() reference you
> indicate, which is:
>
>     int (*my_function)(int a, int b);
>
> In that case I call with:
>     int x = my_function(1, 2);
<snip>
> So, I don't know what the (*f)() syntax is you're referring to.

You can also write that call as

  int x = (*my_function)(1, 2);

and, indeed, you *had* to write it like that many years ago.  That made
sense -- in early C you called functions, and my_function is not the
name of a function but the name of a pointer to one, so to get an actual
function to call you must dereference it.

But C has been changed so that function calling is now defined to happen
though a pointer (just like array indexing is defined to happen though a
pointer), and function-valued expressions convert to pointers in most
situations (just like array-valued expressions do).  That's why

  int x = my_function(1, 2);

is now legal, and why BartC does not like it.

-- 
Ben.
0
Ben
11/29/2016 1:11:21 AM
On 29/11/2016 01:02, supercat@casperkitty.com wrote:
> On Monday, November 28, 2016 at 6:29:59 PM UTC-6, Bart wrote:
>> On 29/11/2016 00:01, Ben Bacarisse wrote:
>>> All pointers point to one object.
>>
>> (No; some are known, by the programmer, to point to the first of
>> multiple consecutive objects. But others will point to only one, with
>> the area beyond that object unknown, undefined or out of bounds. The
>> programmer may or may not know.
>
> Given the code:
>
>     int a[5];
>     int *b = a+5;
>
> to what object would b point?

To the value 1638280? (According to one attempt I made to print *b).

Since this is someone else's code, the answer is I don't know what their 
intention is, whether b is stepped backwards, whether b is treated as an 
pointer to an isolated int, or whether it's used as pointer to a slice 
or block or if it will be reassigned.

You can do these manipulations with pointers; that's what they're for.

My objection, a few posts back, was about being able to do b[i], or 
treating b like an array. To someone coming from a difference language, 
that sounds like asking for trouble.

-- 
Bartc
0
BartC
11/29/2016 1:21:50 AM
On Monday, November 28, 2016 at 7:21:58 PM UTC-6, Bart wrote:
> On 29/11/2016 01:02, supercat wrote:
> > On Monday, November 28, 2016 at 6:29:59 PM UTC-6, Bart wrote:
> >> On 29/11/2016 00:01, Ben Bacarisse wrote:
> >>> All pointers point to one object.
> >>
> >> (No; some are known, by the programmer, to point to the first of
> >> multiple consecutive objects. But others will point to only one, with
> >> the area beyond that object unknown, undefined or out of bounds. The
> >> programmer may or may not know.
> >
> > Given the code:
> >
> >     int a[5];
> >     int *b = a+5;
> >
> > to what object would b point?
> 
> To the value 1638280? (According to one attempt I made to print *b).
> 
> Since this is someone else's code, the answer is I don't know what their 
> intention is, whether b is stepped backwards, whether b is treated as an 
> pointer to an isolated int, or whether it's used as pointer to a slice 
> or block or if it will be reassigned.

Pointer "b" points just past the end of an array of 5 ints.  Since the
array is not extern, that's pretty much all that can be said about it (if
the array were extern, it might be imported from a language that allows
more precise control over allocations than C does).
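
Which is to say b is a perfectly usable pointer so long as it is never
dereferenced (a sketch):

    int a[5];
    int *b = a + 5;                /* one past the end: fine to form  */

    for (int *p = a; p != b; p++)  /* fine to compare against         */
      *p = 0;

    /* *b itself, or b[0], would be Undefined Behavior */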

> You can do these manipulations with pointers; that's what they're for.
> 
> My objection, a few posts back, was about being able to do b[i], or 
> treating b like an array. To someone coming from a different language, 
> that sounds like asking for trouble.

Applying the [] operator to a pointer is defined as being equivalent to
using the "+" operator to add an offset and then using the "*" operator
to dereference the resulting address.  Applying it to arrays is unfortunately
defined as decaying the array to a pointer and then applying the operator
to that resulting pointer, which muddles up the semantics of array bounds.
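
That equivalence is what makes the odd-looking indexing forms earlier in
the thread legal (a sketch):

    int a[5];
    int i = 2;

    a[i] = 1;      /* *(a + i): the array decays to a pointer to a[0] */
    i[a] = 1;      /* *(i + a): the same thing, since "+" commutes    */
    (&i)[0] = 3;   /* *(&i + 0): indexing a pointer to a lone object  */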

Given "int arr[2][5];", for example, the expression "arr[0]" should
decompose into an int* pointing to the first element of the array.
Further, an int* which points at the first element of the array should
be usable to access all ten elements thereof.  On the other hand the
expression arr[0][x] will invoke UB if x is greater than 4.  Such
behavior would be easily understandable if the effect of applying [] to
an int[] could be different from the effect of applying it to an int*,
but saying that the "arr[0]" portion of the above expression decomposes
to an "int*" to which the "[x]" is then applied muddies the waters.
0
supercat
11/29/2016 1:53:45 AM
On Monday, November 28, 2016 at 8:11:29 PM UTC-5, Ben Bacarisse wrote:
> "Rick C. Hodgin" <rick.c.hodgin@gmail.com> writes:
> 
> > On Monday, November 28, 2016 at 4:30:20 PM UTC-5, Bart wrote:
> <snip>
> 
> >> That's just dynamic typing.
> >
> > I don't see the difference.  The token has a name.  Its type is, in
> > the case of dynamic typing, whatever it happens to be at the time.
> > It is weakly typed because you can't know at compile time what it
> > will be in all cases.
> 
> I know I promised myself I wouldn't reply to you anymore but your posts
> in this thread are very muddled and likely to confuse others.  And with
> BartC doing his best to spin matters to fit his agenda, there is just
> too much confusion here for my liking.

If it makes you feel more comfortable, you can reply generically,
correcting the places where I'm in error, or am postulating things
which are not in line with proper C.  I do not profess to know all
things about C, and very quickly state that I am severely lacking
in my knowledge of proper C.

I don't expect you to respond.  I offer my thoughts below for general
consumption.

> The strong/weak distinction is really about rules and their enforcement.
> A strongly-typed language has strict type rules and does not let the
> programmer break them, so, to some extent, how strongly-typed C is is a
> matter of opinion about what "breaking a type rule" really means.  For
> example, is accessing a float in a union whose int member was the last
> one set "breaking a type rule" or not?

No.  But, that is just my opinion.

> What about accessing an array
> beyond its declared bounds?

No.  But, that is just my opinion.

> The size is part of the type, but are the
> rules about the limits of pointer arithmetic type rules or not?  I'd say
> yes, but there is room for disagreement.

No.  But, that is just my opinion.

I view the strongly typed aspect of C to be that its rules are never
violated, /except/ when explicitly instructed to do so by the developer.
And in such a case, it is not C that is violating its rules, but it is
the developer who presses the override button, disabling the standard
mechanisms C provides for safe execution because, presumably, the
developer is aware of things the compiler isn't, and therefore makes
decisions the compiler itself can't make.

I view C as a tool to access data.  It provides a host of mechanisms
which can be pointed at the data, and then set loose to wield and
manipulate that data in a variety of ways, every one of which follows
explicit operations that are wholly known at compile time.

> The other distinction you are muddling up with this one is about when
> types are known and whether (once known) they are fixed or not; often
> simplified as static vs. dynamic typing.  Again, there is room for
> interpretation.  I would argue that C has rules about types that are
> only known at run-time.  Many people would be surprised to hear someone
> say that C has dynamic types, but the effective types rules, the rules
> about VLAs and some of the rules about how unions may be used are all
> examples of rules about types that can not be known at compile time.

Yes.  I do disagree almost completely.

In converting C source code to a binary executable, there are not any
aspects that are left to chance.  Everything is cast into stone and it
cannot be altered.  It behaves exactly as it is indicating, only
accessing data as instructed, only processing it through as instructed,
and if it happens to do it incorrectly because of developer overrides
injected into the program (including casts and unions, and I place
memory access, VLAs, and even static arrays in other categories which
require a different level of developer-imposted constraints), then
that is in no way C's fault.

The whole purpose of C is to wield data in its most fundamental form.
It's like operating a chainsaw with all the safetys removed.  Sure you
can get in there and cut everything possible, but you can also lop off
your legs in about two seconds.  It requires skill and precision to
utilize such a tool, but if you do it correctly the job can get done
much faster than it can with other tools.

> C is often described as statically typed because the declared types of
> objects can be determined at compile time and (therefore) do not change.
> I don't really object to that, but it is, at best, an oversimplification.

I disagree.  For me, knowing that information is what makes something
strongly typed or not.  There is nothing in C that will violate the
ability to wield a char differently than a char, apart from the direct
and explicit instructions to do so by the developer.  In such a case,
it is not C that is at fault, but the developer, if there is an issue
with the way that data is processed.

> The bottom line, though, is that C's type rules (of whatever sort) are
> not strongly enforced.  That makes it a weakly typed language.

I completely disagree.  In C, the rules are entirely enforced.  It is
just that there are areas where the tool (the language, its syntax)
expose things which require a properly skilled laborer to work it,
just as do machines in physical manufacturing plants.  And when they
go astray, it's not the tool's fault.  The tool only does what it was
designed to do, requiring the needs of the person to be in place so
as to limit the tool's ability to exceed what's right and productive.

> <snip>
> > Well, I only know the one syntax in C/C++ for the (*f)() reference you
> > indicate, which is:
> >
> >     int (*my_function)(int a, int b);
> >
> > In that case I call with:
> >     int x = my_function(1, 2);
> <snip>
> > So, I don't know what the (*f)() syntax is you're referring to.
> 
> You can also write that call as
> 
>   int x = (*my_function)(1, 2);
> 
> and, indeed, you *had* to write it like that many years ago.  That made
> sense -- in early C you called functions, and my_function is not the
> name of a function but the name of a pointer to one, so to get an actual
> function to call you must dereference it.
> 
> But C has been changed so that function calling is now defined to happen
> though a pointer (just like array indexing is defined to happen though a
> pointer), and function-valued expressions convert to pointers in most
> situations (just like array-valued expressions do).  That's why
> 
>   int x = my_function(1, 2);
> 
> is now legal, and why BartC does not like it.

I was completely unaware of the old syntax.  I am glad C changed it.

Best regards,
Rick C. Hodgin
0
Rick
11/29/2016 2:01:55 AM
BartC <bc@freeuk.com> writes:

> On 29/11/2016 00:14, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>> <snip>
>>> C allows this:
>>>
>>>   char* s = "Hello world";
>>>   int n = 5;
>>>
>>>   printf("%s\n", s + n);
>>>   printf("%d\n", s + n);
>>>   printf("%p\n", s + n);
>>>
>>> So it can add a number to a string
>>
>> No it can't.
>
> Yes it can; that's what the code is doing.

No.  Please don't keep this nonsense up.  You know perfectly well that s
is a pointer and the code adds a number to a pointer, not a string.

<snip>
-- 
Ben.
0
Ben
11/29/2016 2:03:11 AM
BartC <bc@freeuk.com> writes:

> On 29/11/2016 00:01, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>
>>> C is strongly typed? Even without explicit casts, C allows these
>>> expressions, which would raise eyebrows if you are not familiar with
>>> the language:
>>>
>>> (s,t are char[]s containing a string; p is a pointer to one object;
>>
>> All pointers point to one object.
>
> (No;

Yes.  If a pointer points to anything at all (some don't of course) it
points to one single object.

> some are known, by the programmer, to point to the first of
> multiple consecutive objects.

And some may point to the last of a collection or to some element that
is neither, but in all cases the pointer points to just the one
element.

<snip>
>>>  s+5;         // add number to string; like my first example
>>>  s-t;         // subtract strings
>>
>> The comments describe actions that are not possible in C.
>
> Yes, that was the intention.

Really?  You intended to write comments that mislead the reader?  Shame
on you.

> This is how it looks to someone not
> familiar with C, as I mentioned.

Then you should explain what's happening or those readers will never
learn what type rules C actually has, but I suspect you don't care about
that.  The more confusion you can create the better.

<snip>
-- 
Ben.
0
Ben
11/29/2016 2:12:45 AM
"Rick C. Hodgin" <rick.c.hodgin@gmail.com> writes:
<snip>
> No.
<snip>
> No.
<snip>
> No.
<snip>
> I do disagree almost completely.
<snip>
> I disagree.
<snip>
> I completely disagree.
<snip>

What a relief.  Should I worry about the "almost"?  No, there are more
important things to worry about.

-- 
Ben.
0
Ben
11/29/2016 2:19:18 AM
BartC <bc@freeuk.com> writes:
[...]
> C allows this:
>
>    char* s = "Hello world";
>    int n = 5;
>
>    printf("%s\n", s + n);
>    printf("%d\n", s + n);
>    printf("%p\n", s + n);
>
> So it can add a number to a string (and print the result as a string, 
> number or pointer). I guess that means C has weak typing; it doesn't try 
> too hard to stop you getting around its type system.

The first and third examples do not demonstrate weak typing.
The result of adding a char* value and an int value is of type char*.
Permitting operators to have operands of different types is not
weak typing.  (Though "%p" requires a void* argument, and s + n
is of type char*.)
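
(A fully conformant version of the third call just converts the
argument - a minimal sketch:

    printf("%p\n", (void *)(s + n));   /* %p expects a void* */

The first call needs no such change, since %s takes a char*.)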

The second example does demonstrate weak typing, in a sense.
In fact, that call has undefined behavior.  printf is not required
to *enforce* correct types for its arguments.

I don't think there's any rigorous definition of the terms "weak
typing" and "strong typing".  I'd call C's typing somewhat "strong"
in the sense that every expression has an unambiguous well-defined
type.  The major way in which I'd call C's typing "weak" is its
widespread use of implicit conversions.

[...]

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/29/2016 5:05:57 AM
BartC <bc@freeuk.com> writes:
> On 29/11/2016 01:02, supercat@casperkitty.com wrote:
[...]
>> Given the code:
>>
>>     int a[5];
>>     int *b = a+5;
>>
>> to what object would b point?
>
> To the value 1638280? (According to one attempt I made to print *b).

"the value 1638280" is not an object.

[...]

> My objection, a few posts back, was about being able to do b[i], or 
> treating b like an array. To someone coming from a different language,
> that sounds like asking for trouble.

To someone who knows C, it's perfectly ordinary.  To someone who
pretends not to know C, it's an opportunity to whine.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/29/2016 7:11:22 AM
BartC <bc@freeuk.com> writes:
> On 29/11/2016 00:14, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>> <snip>
>>> C allows this:
>>>
>>>   char* s = "Hello world";
>>>   int n = 5;
>>>
>>>   printf("%s\n", s + n);
>>>   printf("%d\n", s + n);
>>>   printf("%p\n", s + n);
>>>
>>> So it can add a number to a string
>>
>> No it can't.
>
> Yes it can; that's what the code is doing.
>
> s is used to refer to a string.

Yes, s is used to refer to a string.  s is not a string.  s is a
pointer.  You understand that, but you spread confusion by pretending
not to.  Stop it.

> n is a number.

Yup.  (More precisely an integer, even more precisely an object of type
int).

> The language allows you to add those together using 's + n'.

Yes, you can add a pointer and an integer, with semantics that have been
discussed at length.

> Of course the actual operation it does, because of how C works, is not 
> very interesting: it takes the address of the start of the string, and 
> steps it by 5. It doesn't end up with "Hello world5" or, after trying to 
> convert "Hello World" to number and getting 0, adding 5 to that.

Oh?  I'd call it very interesting.  It's at the core of how C deals with
arrays and their elements.  Of course you pretend not to understand that
because it helps you spread confusion.
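
For anyone genuinely unfamiliar, the semantics are straightforward:
s + n designates element n of the array that s points into.  A small
sketch:

    char *s = "Hello world";
    printf("%s\n", s + 6);    /* prints "world" */
    printf("%s\n", &s[6]);    /* identical: s + 6 and &s[6] are the same pointer */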

> But then exactly what 'add number to a string' does hasn't been defined.

No, it hasn't, because it doesn't make any sense.  Please stop making
things up.  C can be confusing enough.  Stop trying to make it worse.

[...]

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/29/2016 7:23:03 AM
On 28/11/16 17:07, Rick C. Hodgin wrote:
> On Monday, November 28, 2016 at 10:54:00 AM UTC-5, David Brown wrote:
>> On 28/11/16 16:13, Rick C. Hodgin wrote:
>>> On Sunday, November 27, 2016 at 11:47:10 AM UTC-5, David Brown wrote:
>>>> On 27/11/16 13:25, Rick C. Hodgin wrote:
>>>>> I wrote a reply last night, but I guess it was lost somehow.  It basically said
>>>>> Python's a weakly-typed language, isn't it?  If so it would have to have its
>>>>> own built-in min() and max() functions to be able to reliably compare
>>>>> disparate types.  I see you also mention this as a need in C, but I would
>>>>> argue to a lesser degree because of casting.  We can create a macro with
>>>>> a trinary operator and generate code for every case.  The compiler may
>>>>> complain here or there so you apply the appropriate cast.
>>>>
>>>> Python is a strongly typed language, not a weakly typed language (such 
>>>> as C).
>>>
>>> Are you saying "C is a weakly typed language" or "Python is a strongly
>>> typed language like C?"
>>
>> I don't think there are any clear and non-controversial definitions of
>> strong and weak typing.  But I believe C is usually classified as
>> "weakly typed" because it is relatively easy to mess around with types
>> (some conversions are done implicitly, especially via void*, and enums
>> are particularly weak) and a great deal of data is simply "int".
>>
>> Python is classified as strongly typed because every object has a
>> specific type, and type conversions actually change the data (so if you
>> use a number as a string, a formatting function is called to generate a
>> new string from the number - the language does not merely try to
>> re-interpret the number as though it were a string).
>>
>>
>>>
>>>> But Python is a dynamically typed language, with duck-typing.
>>>
>>> How can a dynamically typed language be a strongly typed language?
>>> I was under the impression in Python you could code this:
>>>
>>>     n = 5
>>>     print n
>>>     n = "Hello, world"
>>>     print n
>>
>> Yes, you can do that - because Python is dynamically typed.  But dynamic
>> typing does not mean weak typing.  In the Python code above, "n" is just
>> a name - it is not an object.  The objects are 5 and "Hello, world",
>> each of which has very clear and definitive types that do not change.
>> All that changes is the binding of the name "n" to those two objects.
> 
> Visual FreePro does the same thing.  I have a struct with a discriminating
> union which determines the actual type of the data within.  It can be
> reassigned as needed, as in python (or originally as in other xbase
> languages as that's what it's emulating).
> 
> The data changes internally, meaning it's weakly typed because the type
> goes back to the thing identified by its token name (in my definition).
> 
>>> The token "n" can be used for multiple variable types.  I tested this
>>> using the online test page:
>>>
>>>     http://mathcs.holycross.edu/~kwalsh/python/
>>>
>>> I see on the Python wiki that it answers this question:
>>>
>>>     https://wiki.python.org/moin/Why%20is%20Python%20a%20dynamic%20language%20and%20also%20a%20strongly%20typed%20language
>>>
>>> I disagree with the way the term "strongly typed" is used there because
>>> I consider C to be a strongly typed language.  You type "char c" and
>>> that's all C can ever be within that scope.
>>>
>>> That's a philosophical difference I suppose.
>>
>> C is statically typed, while Python is dynamically typed.  Those terms
>> are /not/ synonymous with "strongly typed" and "weakly typed" - I think
>> you can have all combinations (though I can't think of any weakly typed
>> dynamically typed language off hand).
> 
> I think they must be.  If something is statically typed it cannot be
> altered.  It remains that way forever, which means it is strongly typed.

No, "statically typed" means you give things specific known types at
compile-time.  A statically typed language can also be strong, meaning
it is hard to change those types or view the data as a different type.
A statically typed language can also be weak, meaning there is little to
hinder you changing the type of an object (as distinct from /converting/
the object to a new object of a different type).

Consider these two functions in C (which is clearly a statically typed
language, although unions allow a certain degree of dynamic typing).

int floatToInt1(float f) {
  return f;
}

int floatToInt2(float f) {
  void * p = &f;
  int * q = p;
  return *q;
}

The first function is type-safe, and valid in a strongly-typed language
(or strongly-typed programming), as the floating point value f is
/converted/ to an integer.  The second function demonstrates weak typing
- the bit representation of the floating point value is re-interpreted
as an integer with no regard for the actual types involved.  It has
implementation-dependent behaviour, but is syntactically valid C that
will be accepted by compilers, usually without warning.

The second function is invalid as C++ code - you need to explicitly cast
the void* pointer to an int* pointer.  C++ does not make it impossible
to break the type system, but it makes it a little harder than C, and
that is one of the several reasons why C++ is considered a stronger
typed language than C.


Note that although C allows weak typing like this, programs do not have
to take advantage of it - you can code in a manner that uses stronger
typing.
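
(And where reinterpreting the bits genuinely is what you want, there is
a form whose behaviour the language pins down rather better - a sketch,
with a hypothetical name, and assuming int and float have the same size
just as the pointer version above does:

#include <string.h>

int floatToInt3(float f) {
  int i;
  memcpy(&i, &f, sizeof i);   /* copies the object representation of f */
  return i;
}

The result is still representation-dependent, of course, but no pointer
aliasing is involved.)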


> 
>> Once you have your "char c", it is not hard to make an "int *" pointer
>> to point to it and read the same data as a different type - that is what
>> makes it "weak".
> 
> I disagree.  "char c" has not changed.  It will always still yield the
> char value at its location in memory.  The new "int *" (be it a cast, or
> a variable that's set to the &c address) is a new and discrete thing with
> its own ability to act on the thing it's pointing to.
> 
> In both cases, both are strongly typed.

See above, for what I hope is a clearer example.

> 
>> It is very hard to do something similar in Python.
>>
>> Of course, the terms are not really black and white, and you can
>> reasonably argue that "C with warnings enabled" has stronger typing
>> because it is hard to "cheat" on your types while avoiding warnings.
> 
> Even in the case of explicitly added casts, which are fundamental
> operations you've manually injected into an expression to receive a
> particular type, and emit another type, I don't see how it could be
> viewed as anything other than strongly typed.
> 
> Everything in C is nailed down categorically and the correct function
> is called based on what is known at compile-time.
> 
> In python (and Visual FreePro) it does a test on variable types to
> determine which branch to flow to in order to process a particular
> type.
> 

It is this run-time test that is key to making Python a much stronger
typed language.  No matter how you try to cheat the type system,
operations on objects check the actual type of the object.  You cannot
tell Python "I know I said this object was a floating point variable,
but now I want you to treat it as an integer".

(For /really/ advanced usage, it is possible to force changes to Python
types by fiddling with metaclasses.  There is always a way to cheat!)

>>>> This means that in effect, every function in Python is a bit like a 
>>>> template in C++.  And writing a new "max" function in Python is as 
>>>> simple as:
>>>>
>>>> def my_max(a, b):
>>>>     if a > b:
>>>>         return a
>>>>     return b
>>>>
>>>> When the implementation is so simple, there is very little reason /not/ 
>>>> to put it into the standard library - just as C++ has std::max with a 
>>>> very simple implementation as a template.
>>>
>>> Python is required, within its own internal processing engine, to compare
>>> disparate types, such as a floating point value to another integer value,
>>> and return correct results, without any custom/fancy casting or other
>>> things required in C.
>>>
>>> That's what I was referring to.  Python needs this type of built-in
>>> ability because it's weakly typed.  A and B come in and they can be
>>> anything.  The nuances of the comparison logic has to be able to
>>> handle comparing type x to type y generically, and with correct
>>> results in each case.
>>
>> No, it is /strongly/ typed.  When Python needs to convert an operand (or
>> both operands) in order to carry out an operation like "a > b", it uses
>> well-defined conversion functions.
> 
> Of course.  But, they are not known at compile-time.  

Correct.  Python is dynamically typed and strongly typed.  Languages
(and to some extent programs written in the language) can be classified
in many ways, with attributes on different axes.  Strong-weak typing is
one axis, static-dynamic is another.  (And you can have many other axes
for language classification, of course.)

> They are known
> only at runtime with internal tests being made on each type to determine
> what internal logic path to flow to.
> 
> The same input names "n" and "m" could flow through different paths
> based on their internal values at runtime.  It's not known at compile-
> time, and has no ability to be known outside of a static program with
> no variables like:
> 
>     (pseudo code)
>     function x() {
>         if (current_millisecond % 7 == 0)
>             return "Hello world";
>         else
>             return 5;
>     }
> 

Yes, but that is /dynamic/ typing.

If you write "y = x()", then "print y.len()", the validity of taking the
length of y depends on the actual type of y - that is /strong/ typing.

> Any python app calling a function like that (in its proper syntax form)
> will have variable types being returned, but stored in the same token
> name.  When you later do a comparison to another, it will flow through
> to the proper handler based on the runtime peculiarities of the data,
> and not due to anything known at compile time.

What is returned is not a variable - you have no direct access to
variables in Python.  (And in this case, there are no variables - both
the string and the number are constant objects.)  The return is a
/reference/ that is set to point at either the constant number object 5,
or the constant string object "Hello world".  At compile time, it is not
known what type of object will be referred to by the returned reference.

> 
> At each use / reference, it must be manually examined and then branched
> to.

Yes, that is dynamic typing.

> 
> In C, that's not the case.  The entire program is completely known at
> compile time and there is no ability to change anything.  You must use
> something like custom logic and variadic functions in order to do those
> variable things.  But even then, in each case, every flow through every
> possible type is already known, with only manual tests on data, not
> variable types, determining where flow goes.
> 
> I view that as very strongly typed.

I know you think that means strongly typed.  You are wrong - it means
/statically/ typed.  It is a very common mixup, and can be hard to get
your head around - I hope my posts here make things a little clearer for
you.

> 
>> These exist for built-in types - if
>> you are using your own types, you need to define these functions or you
>> will get a run-time error.  C is weakly typed, because it will happily
>> (well, maybe unhappily, with a warning) do what you ask here in many
>> cases, even if it makes no sense.
>>
>>>
>>> Visual FreePro has to do this as well and it was complex to write all
>>> of the code which does this as there are many possible comparisons.
>>>
>>>> C, on the other hand, has /no/ possible implementation of a max function 
>>>> or macro that works on all numeric types until C11 _Generic - and even 
>>>> then, the implementation is more than a little ugly.
>>>>
>>>>> Another thought this morning.  Python moves by people needs.  C
>>>>> moves by committee choices.  If something is not deemed desirable
>>>>> by the committee, it will never officially be added.  But, I think
>>>>> that era of fixedness in those maintaining the C Standard will have to
>>>>> change, lest it be ignored as people move on past it.
>>>>
>>>> C moves by people needs as well, just a lot more slowly - it needs a lot 
>>>> of people, and a very clear need, to make changes happen.  "max" is not 
>>>> needed that much.
>>>
>>> Python is an open project.  If I desire to have x, y, or z added I can
>>> do it, it goes into the main line, and is then available to all.  
>>
>> No, you can't.  Python is an open source project - you can download the
>> source, make all the changes you want, and use it yourself.  You can
>> publish the modified version as you want.  But it will only get back to
>> the mainline if it is approved by the project maintainers, which
>> involves going through a formal process of "Python Enhancement
>> Proposals" to give the main Python developers and the Python community a
>> chance to consider the change and its implications.  The change must
>> meet the approval of Python's Benevolent Dictator For Life, Guido van
>> Rossum, before it has a chance to become part of the official mainline
>> Python.
> 
> Roughly, how many changes have been allowed since python came about,
> compared to how many changes have been allowed to the C Standard?

Oh, lots - and between major versions there have always been seriously
incompatible changes.  But these do not come about simply because
someone writes a bit of code and checks it in to the project's
repository.  Changes pass through a serious formal process with a lot of
consideration - it is simply that this process is more continual than
C's (no need to wait for a new version of the standard), more open than
C's (discussions are public and everyone can be involved), and faster.

> 
>> Still, you have a far better chance of getting a change into Python than
>> getting a change into C :-)
> 
> My point exactly.

I just wanted to correct the impression you were giving of pure anarchy
in the development of the Python language.

> 
>>> I
>>> can't do that with C.  I can do that with GCC or CLANG or some instance
>>> of a C compiler, but not C, not to the standard.
>>>
>>> I would argue it doesn't move by people needs, but by committee needs.
>>> Otherwise a lot would've been introduced that, because it wasn't,
>>> spawned off many other languages.
>>>
>>> It's why I've more or less ditched C and am proceeding with CAlive.  I
>>> will provide support through C99 eventually however.  And other people
>>> are free to add additional newer support.
> 
> Best regards,
> Rick C. Hodgin
> 

0
David
11/29/2016 9:37:57 AM
On 28/11/16 17:26, supercat@casperkitty.com wrote:
> On Monday, November 28, 2016 at 9:54:00 AM UTC-6, David Brown wrote:
>> Once you have your "char c", it is not hard to make an "int *" pointer
>> to point to it and read the same data as a different type - that is what
>> makes it "weak".  It is very hard to do something similar in Python.
> 
> It's easy syntactically to write code whose only sensible meaning is that
> the bits at some address should be written as a particular type.  It's also
> easy to write code whose only sensible meaning is that the bits at some
> address should be read as a particular type.
> 
> In a truly-weakly-typed language, combining the two operations should be
> a simple way to take the bit pattern used for one type and reinterpret it
> as another.  Some dialects of C are weakly-typed by that definition, but
> the Standard does not describe such semantics except to note that some
> implementations happen to offer them (but not defining a means by which
> code can test for such implementations).
> 

Certainly C is not as weak as it gets - assembly and Forth are examples
of untyped languages.  C is merely /weaker/ than Python, C++, and many
other languages.  When people classify languages in a black-and-white
fashion (which I think is a very bad idea), C is usually called a
"weakly typed, statically typed language", while C++ is "strongly typed,
statically typed" and Python is "strongly typed, dynamically typed".
But these should really be sliding scales.  C++ is more strongly typed
than C, but Python is more strongly typed than C++.  And C++ has more
dynamic typing than C but less dynamic than Python.

0
David
11/29/2016 9:42:06 AM
On 28/11/16 18:31, Rick C. Hodgin wrote:

> If this is your position regarding C being a weakly typed language,
> then I disagree with you completely.
> 
> In each of those cases, the void* p value is being used in a way
> which requires the explicit use of a cast.  

No, C does /not/ require explicit casts of void* pointers.  C++ needs
it, and you use a C++ compiler to compile your C code - therefore in
/your/ code, you need the explicit casts.  (C++ has stronger typing than
C - this is one of the reasons.)

Allowing re-interpretations like this without explicit casts is very
weak typing.  Requiring an explicit cast but otherwise making it that
easy is less weak, but still not particularly strong.  (C++ is
classified as strongly typed for many other features of its type system
despite this weakness.)


> Each explicit use of
> a cast creates a new strongly typed form which is expressly known
> to the compiler, and is used only as the compiler is able to use
> it.  

No, the cast is not required for the compiler to know the types
involved.  Indeed, there is no requirement for the casted-to type to be
an exact match for the assigned-to type - it just has to be something
that can be converted to the assigned-to type.

But again, you are talking about static typing, not strong typing.

> The fact that the cast operation is taking input and putting
> it through to something improperly is not a feature of language
> typing.  It's a feature of the misuse of language typing.
> 

It is weak typing - something that C makes easy.  It is also something
that many types of C code take advantage of, such as the standard
library qsort function.
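
A typical qsort comparison function shows those implicit void*
conversions in action - a minimal sketch, with a hypothetical name:

#include <stdlib.h>

static int cmp_int(const void *pa, const void *pb) {
  const int *a = pa;   /* void* converts to const int* - no cast needed in C */
  const int *b = pb;
  return (*a > *b) - (*a < *b);
}

/* ... qsort(values, count, sizeof values[0], cmp_int); */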

0
David
11/29/2016 9:49:44 AM
On 28/11/16 22:50, Rick C. Hodgin wrote:
> On Monday, November 28, 2016 at 4:05:27 PM UTC-5, supe...@casperkitty.com wrote:
>> On Monday, November 28, 2016 at 2:46:29 PM UTC-6, Rick C. Hodgin wrote:
>>> I view strongly typed as anything that can only be one thing, only used
>>> as one thing, and that's its type.
>>>
>>> I view weakly typed as anything that can be more than one thing, can be
>>> used as one thing now, another thing later, and its type is instance-by-
>>> instance determined, rather than being known at compile time.
>>
>> Python is strongly dynamically typed.  Static typing would imply that the
>> compiler would know about the types of things.  Strong typing implies that
>> *something* knows about the types of things and will enforce proper usage.
>> In C, directly-accessed objects are strongly statically typed; storage
>> which is accessed only via pointers are handled in whatever manner the
>> implementation feels like.
> 
> I can accept that.  My use of the term strong and weak referred more to
> the static condition.

And I hope now you see that this usage is incorrect.  Static versus
dynamic typing is an important classification for languages - arguably
more important than strong or weak typing.  But it is a different concept.

These terms are not merely words we happen to use in one thread of a
newsgroup - these are commonly used terms in discussing programming
language characteristics (even though there is plenty of disagreement
about exact definitions).

> 
> I still don't see how something can be weakly typed under that definition.
> I saw a reference to the possibility of altering the internal type through
> an operation, such as possibly multiplying an integer by a floating point
> and changing the resulting type from integer to floating point?  Would
> that be weakly typed?  I would argue it's dynamically typed.  And I
> would then argue that there is no dynamic and static typing, but only
> strong and weak typing.
> 
> Best regards,
> Rick C. Hodgin
> 

0
David
11/29/2016 9:52:45 AM
On 28/11/16 21:09, Richard Heathfield wrote:
> On 28/11/16 18:57, BartC wrote:
> <snip>
>>
>> C is strongly typed? Even without explicit casts,
> 
> Just for the record, "explicit" is redundant here. Casts are explicit
> conversions, so you are saying the equivalent of "explicit explicit
> conversions".
> 

Yes, but we are explicitly emphasising the explicitness of the explicit
casts!

And since google never forgets, "for the record" is redundant too :-)


0
David
11/29/2016 9:55:02 AM
On 28/11/16 18:41, jadill33@gmail.com wrote:
> 
> You might consider using a macro for your function definitions.
> 
> #define MAX_FN_IMPL( NAME, TYPE )                           \
> static inline TYPE _max##NAME(TYPE const x, TYPE const y) { \
>   return y > x ? y : x;                                     \
> }
> 

Yes, I considered writing it this way - but I thought it better to avoid
the extra layers of macros for this example.
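
For reference, instantiating it for a couple of types would look
something like this (the names simply follow the macro above and are
only illustrative):

MAX_FN_IMPL(_int, int)      /* defines static inline int _max_int(...)    */
MAX_FN_IMPL(_dbl, double)   /* defines static inline double _max_dbl(...) */

/* ... int m = _max_int(a, b); */

(Though note that the leading underscore makes such file-scope names
technically reserved, so real code would probably pick something like
max_int instead.)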

0
David
11/29/2016 10:04:29 AM
On 29/11/16 09:55, David Brown wrote:
> On 28/11/16 21:09, Richard Heathfield wrote:
>> On 28/11/16 18:57, BartC wrote:
>> <snip>
>>>
>>> C is strongly typed? Even without explicit casts,
>>
>> Just for the record, "explicit" is redundant here. Casts are explicit
>> conversions, so you are saying the equivalent of "explicit explicit
>> conversions".
>>
>
> Yes, but we are explicitly emphasising the explicitness of the explicit
> casts!

But are we doing so implicitly, explicitly, or explicitly explicitly?

>
> And since google never forgets,

Yeah, right. :-)

> "for the record" is redundant too :-)

Google remembers what /it/ wants to remember, not what /we/ want it to 
remember.

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
11/29/2016 10:06:20 AM
On 29/11/16 11:06, Richard Heathfield wrote:
> On 29/11/16 09:55, David Brown wrote:
>> On 28/11/16 21:09, Richard Heathfield wrote:
>>> On 28/11/16 18:57, BartC wrote:
>>> <snip>
>>>>
>>>> C is strongly typed? Even without explicit casts,
>>>
>>> Just for the record, "explicit" is redundant here. Casts are explicit
>>> conversions, so you are saying the equivalent of "explicit explicit
>>> conversions".
>>>
>>
>> Yes, but we are explicitly emphasising the explicitness of the explicit
>> casts!
> 
> But are we doing so implicitly, explicitly, or explicitly explicitly?
> 

Implicitly, by using the word "explicitly" explicitly, I think.
Opinions may vary, of course - they usually do.

>>
>> And since google never forgets,
> 
> Yeah, right. :-)
> 
>> "for the record" is redundant too :-)
> 
> Google remembers what /it/ wants to remember, not what /we/ want it to
> remember.
> 

I recently came across a good example of something google did /not/
know, which I thought it should.  But I can't remember what it was.

0
David
11/29/2016 10:18:21 AM
On 29/11/16 10:18, David Brown wrote:
> On 29/11/16 11:06, Richard Heathfield wrote:
>> On 29/11/16 09:55, David Brown wrote:
>>> On 28/11/16 21:09, Richard Heathfield wrote:
>>>> On 28/11/16 18:57, BartC wrote:
>>>> <snip>
>>>>>
>>>>> C is strongly typed? Even without explicit casts,
>>>>
>>>> Just for the record, "explicit" is redundant here. Casts are explicit
>>>> conversions, so you are saying the equivalent of "explicit explicit
>>>> conversions".
>>>>
>>>
>>> Yes, but we are explicitly emphasising the explicitness of the explicit
>>> casts!
>>
>> But are we doing so implicitly, explicitly, or explicitly explicitly?
>>
>
> Implicitly, by using the word "explicitly" explicitly, I think.
> Opinions may vary, of course - they usually do.
>
>>>
>>> And since google never forgets,
>>
>> Yeah, right. :-)
>>
>>> "for the record" is redundant too :-)
>>
>> Google remembers what /it/ wants to remember, not what /we/ want it to
>> remember.
>>
>
> I recently came across a good example of something google did /not/
> know, which I thought it should.  But I can't remember what it was.


Try googling for it.

(Okay, I'm done!)

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
11/29/2016 10:28:37 AM
On 28/11/2016 20:09, Richard Heathfield wrote:
> On 28/11/16 18:57, BartC wrote:
> <snip>
>>
>> C is strongly typed? Even without explicit casts,
>
> Just for the record, "explicit" is redundant here. Casts are explicit
> conversions, so you are saying the equivalent of "explicit explicit
> conversions".

The 'explicit casts' was for my benefit, as I use 'casts' to more 
loosely mean a conversion that is either added internally by a compiler, 
or that is applied in source code by the programmer. The latter would be 
explicit.

(I also call them 'hard' and 'soft' conversions, 'hard' being explicit. 
With a hard conversion, the compiler will try and go along with what the 
programmer requests, unless it is completely nonsensical.)

Although I understand that some people have trouble seeing past the 
narrow definition of any technical term as it is spelled out in the 
Language Reference.


-- 
Bartc
0
BartC
11/29/2016 10:49:01 AM
On 29/11/16 10:49, BartC wrote:
> On 28/11/2016 20:09, Richard Heathfield wrote:
>> On 28/11/16 18:57, BartC wrote:
>> <snip>
>>>
>>> C is strongly typed? Even without explicit casts,
>>
>> Just for the record, "explicit" is redundant here. Casts are explicit
>> conversions, so you are saying the equivalent of "explicit explicit
>> conversions".
>
> The 'explicit casts' was for my benefit, as I use 'casts' to more
> loosely mean a conversion that is either added internally by a compiler,
> or that is applied in source code by the programmer. The latter would be
> explicit.
>
> (I also call them 'hard' and 'soft' conversions, 'hard' being explicit.
> With a hard conversion, the compiler will try and go along with what the
> programmer requests, unless it is completely nonsensical.)
>
> Although I understand that some people have trouble seeing past the
> narrow definition of any technical term as it is spelled out in the
> Language Reference.

Or, to put it another way, you don't understand the value of defining 
things unambiguously?

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
11/29/2016 10:51:05 AM
On 29/11/2016 02:12, Ben Bacarisse wrote:
> BartC <bc@freeuk.com> writes:

>> some are known, by the programmer, to point to the first of
>> multiple consecutive objects.
>
> And some may point to the last of a collection or to some element that
> is neither, but in all cases the pointer points to just the one
> element.

Here are my original comments, with extra emphasis:

.....p is a pointer to ONE object....

  p[i];        // index a pointer

By one object, I mean it is known, to the programmer, that there is 
nothing meaningful or valid beyond or before that object. (Or, possibly 
worse, that it is not known! As otherwise this could be fixed. The 
language however is oblivious.)

Yet, to an outsider, C is allowing you to access p as though it was 
actually an array. That must seem extraordinarily lax. You can even do 
it quite blatantly:

   int A;
   A = (&A)[123456789];

Not a beep out of the compiler! (This crashes BTW.)

Now that's what I would call weak typing.

(Yes, if A is an actual array, then A[i] could be an out of bounds 
access. But that's like one or two magnitudes lower down the scale that 
just being able to index ANYTHING.)

-- 
Bartc


0
BartC
11/29/2016 11:17:25 AM
BartC <bc@freeuk.com> writes:

> On 28/11/2016 20:09, Richard Heathfield wrote:
>> On 28/11/16 18:57, BartC wrote:
>> <snip>
>>>
>>> C is strongly typed? Even without explicit casts,
>>
>> Just for the record, "explicit" is redundant here. Casts are explicit
>> conversions, so you are saying the equivalent of "explicit explicit
>> conversions".
>
> The 'explicit casts' was for my benefit, as I use 'casts' to more
> loosely mean a conversion that is either added internally by a
> compiler, or that is applied in source code by the programmer. The
> latter would be explicit.

Why not just call them conversions?  What's the benefit of borrowing a
technical term ("cast") with another meaning and using it a way that is
already well described by a plain English word?  It just seems to be
part of plan to be make programming sounds as confusing as possible.

<snip>
-- 
Ben.
0
Ben
11/29/2016 11:40:19 AM
On 29/11/2016 11:40, Ben Bacarisse wrote:
> BartC <bc@freeuk.com> writes:
>
>> On 28/11/2016 20:09, Richard Heathfield wrote:
>>> On 28/11/16 18:57, BartC wrote:
>>> <snip>
>>>>
>>>> C is strongly typed? Even without explicit casts,
>>>
>>> Just for the record, "explicit" is redundant here. Casts are explicit
>>> conversions, so you are saying the equivalent of "explicit explicit
>>> conversions".
>>
>> The 'explicit casts' was for my benefit, as I use 'casts' to more
>> loosely mean a conversion that is either added internally by a
>> compiler, or that is applied in source code by the programmer. The
>> latter would be explicit.

> Why not just call them conversions?

'Cast' is shorter than 'conversion' or 'coercion'?

> What's the benefit of borrowing a
> technical term ("cast") with another meaning and using it a way that is
> already well described by a plain English word?  It just seems to be
> part of plan to be make programming sounds as confusing as possible.

The plan seemed more to nitpick any of my posts!

[If you want 'confusing', in my language there is also a kind of cast 
where the programmer doesn't specify the type: A = cast(B); here 
whatever type conversion is needed is applied. So the cast is 
semi-explicit, while the word 'cast' is used more explicitly than it 
would be in C.]

-- 
bartc

0
BartC
11/29/2016 12:17:48 PM
On Tuesday, November 29, 2016 at 4:52:51 AM UTC-5, David Brown wrote:
> On 28/11/16 22:50, Rick C. Hodgin wrote:
> > On Monday, November 28, 2016 at 4:05:27 PM UTC-5, supe...@casperkitty.com wrote:
> >> On Monday, November 28, 2016 at 2:46:29 PM UTC-6, Rick C. Hodgin wrote:
> >>> I view strongly typed as anything that can only be one thing, only used
> >>> as one thing, and that's its type.
> >>>
> >>> I view weakly typed as anything that can be more than one thing, can be
> >>> used as one thing now, another thing later, and its type is instance-by-
> >>> instance determined, rather than being known at compile time.
> >>
> >> Python is strongly dynamically typed.  Static typing would imply that the
> >> compiler would know about the types of things.  Strong typing implies that
> >> *something* knows about the types of things and will enforce proper usage.
> >> In C, directly-accessed objects are strongly statically typed; storage
> >> which is accessed only via pointers are handled in whatever manner the
> >> implementation feels like.
> > 
> > I can accept that.  My use of the term strong and weak referred more to
> > the static condition.
> 
> And I hope now you see that this usage is incorrect.  Static versus
> dynamic typing is an important classification for languages - arguably
> more important than strong or weak typing.  But it is a different concept.

I disagree with the definition, but I do see what everybody in this
thread has been explaining.

> These terms are not merely words we happen to use in one thread of a
> newsgroup - these are commonly used terms in discussing programming
> language characteristics (even though there is plenty of disagreement
> about exact definitions).

Yep.

Best regards,
Rick C. Hodgin
0
Rick
11/29/2016 1:44:16 PM
On 29/11/16 14:44, Rick C. Hodgin wrote:
> On Tuesday, November 29, 2016 at 4:52:51 AM UTC-5, David Brown wrote:
>> On 28/11/16 22:50, Rick C. Hodgin wrote:
>>> On Monday, November 28, 2016 at 4:05:27 PM UTC-5, supe...@casperkitty.com wrote:
>>>> On Monday, November 28, 2016 at 2:46:29 PM UTC-6, Rick C. Hodgin wrote:
>>>>> I view strongly typed as anything that can only be one thing, only used
>>>>> as one thing, and that's its type.
>>>>>
>>>>> I view weakly typed as anything that can be more than one thing, can be
>>>>> used as one thing now, another thing later, and its type is instance-by-
>>>>> instance determined, rather than being known at compile time.
>>>>
>>>> Python is strongly dynamically typed.  Static typing would imply that the
>>>> compiler would know about the types of things.  Strong typing implies that
>>>> *something* knows about the types of things and will enforce proper usage.
>>>> In C, directly-accessed objects are strongly statically typed; storage
>>>> which is accessed only via pointers are handled in whatever manner the
>>>> implementation feels like.
>>>
>>> I can accept that.  My use of the term strong and weak referred more to
>>> the static condition.
>>
>> And I hope now you see that this usage is incorrect.  Static versus
>> dynamic typing is an important classification for languages - arguably
>> more important than strong or weak typing.  But it is a different concept.
> 
> I disagree with the definition, but I do see what everybody in this
> thread has been explaining.
> 

I can quite understand why you might feel that static typing is in a
sense "better", "more solid", or "stronger" than dynamic typing (though
in truth, each has its pros and cons - and languages are rarely entirely
static, or entirely dynamic in their typing).  But the term "strong
typing" refers to how much you can mess around with the language's type
system.  It's just the way it is - consistency of terminology is often
more important than the particular choice of words.

It is also just an unfortunate mix of time zones that meant you had made
quite a few posts before you understood that it was your terminology
that was incorrect, rather than your understanding of C.


>> These terms are not merely words we happen to use in one thread of a
>> newsgroup - these are commonly used terms in discussing programming
>> language characteristics (even though there is plenty of disagreement
>> about exact definitions).
> 
> Yep.
> 
> Best regards,
> Rick C. Hodgin
> 

0
David
11/29/2016 2:12:46 PM
On 11/29/2016 6:17 AM, BartC wrote:
> On 29/11/2016 02:12, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
> 
>>> some are known, by the programmer, to point to the first of
>>> multiple consecutive objects.
>>
>> And some may point to the last of a collection or to some element that
>> is neither, but in all cases the pointer points to just the one
>> element.
> 
> Here are my original comments, with extra emphasis:
> 
> ....p is a pointer to ONE object....
> 
>  p[i];        // index a pointer
> 
> By one object, I mean it is known, to the programmer, that there is
> nothing meaningful or valid beyond or before that object. (Or, possibly
> worse, that it is not known! As otherwise this could be fixed. The
> language however is oblivious.)
> 
> Yet, to an outsider, C is allowing you to access p as though it was
> actually an array. That must seem extraordinarily lax. You can even do
> it quite blatantly:
> 
>   int A;
>   A = (&A)[123456789];
> 
> Not a beep out of the compiler! (This crashes BTW.)
> 
> Now that's what I would call weak typing.
> 
> (Yes, if A is an actual array, then A[i] could be an out of bounds
> access. But that's like one or two magnitudes lower down the scale that
> just being able to index ANYTHING.)
> 

I just want to know one thing, Bart.  Why did none of the students in my
C or C++ classes have a problem understanding that the name of an array,
used without an index, is a pointer to the first character?  You can't
seem to understand such a simple concept, despite multiple people trying
to explain it to you?

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
11/29/2016 2:46:39 PM
On Tuesday, November 29, 2016 at 9:13:01 AM UTC-5, David Brown wrote:
> On 29/11/16 14:44, Rick C. Hodgin wrote:
> > On Tuesday, November 29, 2016 at 4:52:51 AM UTC-5, David Brown wrote:
> >> On 28/11/16 22:50, Rick C. Hodgin wrote:
> >>> On Monday, November 28, 2016 at 4:05:27 PM UTC-5, supe...@casperkitty.com wrote:
> >>>> On Monday, November 28, 2016 at 2:46:29 PM UTC-6, Rick C. Hodgin wrote:
> >>>>> I view strongly typed as anything that can only be one thing, only used
> >>>>> as one thing, and that's its type.
> >>>>>
> >>>>> I view weakly typed as anything that can be more than one thing, can be
> >>>>> used as one thing now, another thing later, and its type is instance-by-
> >>>>> instance determined, rather than being known at compile time.
> >>>>
> >>>> Python is strongly dynamically typed.  Static typing would imply that the
> >>>> compiler would know about the types of things.  Strong typing implies that
> >>>> *something* knows about the types of things and will enforce proper usage.
> >>>> In C, directly-accessed objects are strongly statically typed; storage
> >>>> which is accessed only via pointers are handled in whatever manner the
> >>>> implementation feels like.
> >>>
> >>> I can accept that.  My use of the term strong and weak referred more to
> >>> the static condition.
> >>
> >> And I hope now you see that this usage is incorrect.  Static versus
> >> dynamic typing is an important classification for languages - arguably
> >> more important than strong or weak typing.  But it is a different concept.
> > 
> > I disagree with the definition, but I do see what everybody in this
> > thread has been explaining.
> > 
> 
> I can quite understand why you might feel that static typing is in a
> sense "better", "more solid", or "stronger" than dynamic typing (though
> in truth, each has its pros and cons - and languages are rarely entirely
> static, or entirely dynamic in their typing).  But the term "strong
> typing" refers to how much you can mess around with the language's type
> system.  It's just the way it is - consistency of terminology is often
> more important than the particular choice of words.

I've written a few lengthy responses to this thread and then deleted
them.  It's not a point I care to argue, though I have specific reasons
why I disagree with the view that data is the determination as to how/
when a type access is violated.

I plan to create my own ecosystem with CAlive so it won't really be an
issue in maintaining legacy terminology.  My goals are to take people
away from traditional programming regimes, and allow them to more or
less ditch the entire ecosystem they grew up in, moving instead to a
new platform which is founded upon Jesus Christ, and upon His teachings
in how to use the skills He first gave a person to give back to Him and
the other people of the Earth.

It's a literal planned exodus, out from the old way of slavery and
bondage to a system which rejects God, and into the new way of freedom
in God, resulting in great benefits for those who will receive and
embrace it.

That's the plan, and the hope.  An entire ecosystem from hardware
through user apps, one founded completely in Jesus Christ, in His
teachings, in the application of His gifts given back expressly for
Him and other people (and not for money or power or control or
dominance as by patents or copyrights), but as part of the community
He has built and is building through those who will receive Him and
work for Him in this world.

My area of doing this is software and hardware.  Other people have
special skills in other areas.  Together, we will change the world,
because the One we pursue has that authority, power, and ability.

> It is also just an unfortunate mix of time zones that meant you had made
> quite a few posts before you understood that it was your terminology
> that was incorrect, rather than your understanding of C.
>
> >> These terms are not merely words we happen to use in one thread of a
> >> newsgroup - these are commonly used terms in discussing programming
> >> language characteristics (even though there is plenty of disagreement
> >> about exact definitions).
> > 
> > Yep.

Best regards,
Rick C. Hodgin
0
Rick
11/29/2016 2:54:13 PM
On 29/11/16 12:17, BartC wrote:
> On 29/11/2016 11:40, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>>
>>> On 28/11/2016 20:09, Richard Heathfield wrote:
>>>> On 28/11/16 18:57, BartC wrote:
>>>> <snip>
>>>>>
>>>>> C is strongly typed? Even without explicit casts,
>>>>
>>>> Just for the record, "explicit" is redundant here. Casts are explicit
>>>> conversions, so you are saying the equivalent of "explicit explicit
>>>> conversions".
>>>
>>> The 'explicit casts' was for my benefit, as I use 'casts' to more
>>> loosely mean a conversion that is either added internally by a
>>> compiler, or that is applied in source code by the programmer. The
>>> latter would be explicit.
>
>> Why not just call them conversions?
>
> 'Cast' is shorter than 'conversion' or 'coercion'?

'C' is even shorter.

Or you could just use '.', which isn't just shorter lengthwise but also 
heightwise.

> [If you want 'confusing', in my language

In C, "cast" means "explicit conversion". If you want to discuss your 
own language, take it up in a newsgroup devoted to that language.

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
11/29/2016 3:18:43 PM
On Tuesday, November 29, 2016 at 5:17:35 AM UTC-6, Bart wrote:
> Yet, to an outsider, C is allowing you to access p as though it was 
> actually an array. That must seem extraordinarily lax. You can even do 
> it quite blatantly:
> 
>    int A;
>    A = (&A)[123456789];
> 
> Not a beep out of the compiler! (This crashes BTW.)

Change the definition of "A" to "extern" or "volatile" and the
construct might be meaningful if a compiler doesn't mess with it
[though most usages of the concept would use a smaller displacement
and store the result somewhere else].  Standard C does not provide
any means of specifying that a global or automatic variable should
be placed at some known offset relative to anything else, but some
other systems do.  Further, some systems may allow memory to be
accessed from multiple addresses with various tweaks.  For example,
a system might map addresses 0x00000000-0x00FFFFFF to 16MiB of
RAM accessed via write-through cache, and 0x01000000-0x01FFFFFF to
that same 16MiB of RAM bypassing the cache.  If code uses DMA to read
some data, but the cache is blind to the DMA, it could be helpful to
have code use the upper addresses to force the CPU to see the data
which was actually fetched.
0
supercat
11/29/2016 3:21:14 PM
On 29/11/2016 14:46, Jerry Stuckle wrote:
> On 11/29/2016 6:17 AM, BartC wrote:

>> Here are my original comments, with extra emphasis:
>>
>> ....p is a pointer to ONE object....
>>
>>  p[i];        // index a pointer
>>
>> By one object, I mean it is known, to the programmer, that there is
>> nothing meaningful or valid beyond or before that object. (Or, possibly
>> worse, that it is not known! As otherwise this could be fixed. The
>> language however is oblivious.)
>>
>> Yet, to an outsider, C is allowing you to access p as though it was
>> actually an array. That must seem extraordinarily lax. You can even do
>> it quite blatantly:

> Why did none of the students in my
> C or C++ classes have a problem understanding that the name of an array,
> used without an index, is a pointer to the first character?  You can't
> seem to understand such a simple concept, despite multiple people trying
> to explain it to you?

I said 'to an outsider'. But what's harder to understand than the mere 
fact of how it works is the implications.

Namely, a lack of transparency and a lack of type safety. That is, even 
just the minimum, perfectly reasonable sort of type safety where you declare this:

  char c;
  char* pc=&c;
  char** ppc=&pc;

and the language squawks when you then write ppc[i][j]. Or the lack of 
transparency that happens when you see ppc[i][j], and find the 
declaration of ppc could be anything from the above at one end to char 
ppc[10][100] at the other.

And have you tried explaining to your students why the language turning 
a blind eye to:

    int A;
    A = (&A)[123456789];

makes perfect sense and was an excellent design choice?

int sumarray(int* A, int N){
   int i, sum=0;
   for (i=0; i<N; ++i) sum+=A[i];
   return sum;
}

Fine so far.

#define N 100
int A;
int* P = &A;
printf("%d\n", sumarray(P, N));

Oops - this is wrong as P isn't set up to a block of 100 ints. 
Unfortunately the language thinks this is perfectly fine, so you have to 
find out the hard way!

-- 
Bartc


0
BartC
11/29/2016 3:45:18 PM
On 29/11/2016 15:21, supercat@casperkitty.com wrote:
> On Tuesday, November 29, 2016 at 5:17:35 AM UTC-6, Bart wrote:
>> Yet, to an outsider, C is allowing you to access p as though it was
>> actually an array. That must seem extraordinarily lax. You can even do
>> it quite blatantly:
>>
>>    int A;
>>    A = (&A)[123456789];
>>
>> Not a beep out of the compiler! (This crashes BTW.)
>
> Change the definition of "A" to "extern" or "volatile" and the
> construct might be meaningful if a compiler doesn't mess with it

The same thing has been discussed recently. I suggested that for such 
specific uses, then it can be written like this:

      A = *(&A + 123456789);

Reserving the [] array syntax for actual arrays. Even if the language 
doesn't detect cross-over usage of pointers and arrays, if such 
guidelines are used, then errors due to mix-ups could occur less 
frequently, and source would be more self-documenting.

But it would mean passing actual array pointers not pointers to first 
elements, so it wouldn't be popular.

> [though most usages of the concept would use a smaller displacement
> and store the result somewhere else].

I needed to go up to 123456789 to get a definite crash!

-- 
Bartc
0
BartC
11/29/2016 3:50:24 PM
Richard Heathfield <rjh@cpax.org.uk> writes:

> Or you could just use '.', which isn't just shorter lengthwise but
> also heightwise.

Have you opened the Xmas Sherry tonight Richard?
0
Gareth
11/29/2016 6:30:39 PM
On 29/11/16 18:30, Gareth Owen wrote:
> Richard Heathfield <rjh@cpax.org.uk> writes:
>
>> Or you could just use '.', which isn't just shorter lengthwise but
>> also heightwise.
>
> Have you opened the Xmas Sherry tonight Richard?

<grin> No, I'm just exercising reductio ad absurdum. Or perhaps, in this 
case, reductio ad abpointum.

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
11/29/2016 6:49:24 PM
On Tuesday, November 29, 2016 at 9:18:50 AM UTC-6, Richard Heathfield wrote:
> In C, "cast" means "explicit conversion". If you want to discuss your 
> own language, take it up in a newsgroup devoted to that language.

I like to use the phrase "explicit cast" in cases where casts are intended
to explicitly indicate, to "anything" reading the code (compiler or human),
that something "unusual" may be going on.  Consider the code:

    float f1,f2;
    double d1,d2;
    // Assign values to f1,f2 somehow...
    d1 = f1*f2;
    d2 = (float)(f1*f2);

While saying that (f1*f2) is explicitly cast to "float" (rather than merely
saying that it's "cast") might nominally be redundant, I think it would make
clearer the intention that something "unusual" may be going on.  In the
absence of the cast, some implementations evaluate f1*f2 in extra precision
and store in d1 a value that isn't representable in type "float".  Adding the cast
before the assignment to d2 would make explicit a desire *not* to have such
a value stored in d2.
0
supercat
11/29/2016 6:56:55 PM
On 11/29/2016 01:49 PM, Richard Heathfield wrote:
> On 29/11/16 18:30, Gareth Owen wrote:
>> Richard Heathfield <rjh@cpax.org.uk> writes:
>>
>>> Or you could just use '.', which isn't just shorter lengthwise but
>>> also heightwise.
>>
>> Have you opened the Xmas Sherry tonight Richard?
>
> <grin> No, I'm just exercising reductio ad absurdum. Or perhaps, in this
> case, reductio ad abpointum.

BartC's reason was pretty absurd even before reduction.


0
James
11/29/2016 7:09:55 PM
supercat@casperkitty.com writes:
> On Tuesday, November 29, 2016 at 9:18:50 AM UTC-6, Richard Heathfield wrote:
>> In C, "cast" means "explicit conversion". If you want to discuss your 
>> own language, take it up in a newsgroup devoted to that language.
>
> I like to use the phrase "explicit cast" in cases where casts are intended
> to explicitly indicate, to "anything" reading the code (compiler or human),
> that something "unusual" may be going on.
[...]

I like to use the word "cast" for that.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/29/2016 7:29:56 PM
On 29/11/2016 19:29, Keith Thompson wrote:
> supercat@casperkitty.com writes:
>> On Tuesday, November 29, 2016 at 9:18:50 AM UTC-6, Richard Heathfield wrote:
>>> In C, "cast" means "explicit conversion". If you want to discuss your
>>> own language, take it up in a newsgroup devoted to that language.
>>
>> I like to use the phrase "explicit cast" in cases where casts are intended
>> to explicitly indicate, to "anything" reading the code (compiler or human),
>> that something "unusual" may be going on.
> [...]
>
> I like to use the word "cast" for that.

The C Standard uses 'explicit cast' in 6.5.4p3 (2007 and 2009 drafts).

I wonder what they could possibly mean?

-- 
Bartc


0
BartC
11/29/2016 8:27:24 PM
On 29/11/16 20:27, BartC wrote:
> On 29/11/2016 19:29, Keith Thompson wrote:
>> supercat@casperkitty.com writes:
>>> On Tuesday, November 29, 2016 at 9:18:50 AM UTC-6, Richard Heathfield
>>> wrote:
>>>> In C, "cast" means "explicit conversion". If you want to discuss your
>>>> own language, take it up in a newsgroup devoted to that language.
>>>
>>> I like to use the phrase "explicit cast" in cases where casts are
>>> intended
>>> to explicitly indicate, to "anything" reading the code (compiler or
>>> human),
>>> that something "unusual" may be going on.
>> [...]
>>
>> I like to use the word "cast" for that.
>
> The C Standard uses 'explicit cast' in 6.5.4p3 (2007 and 2009 drafts).
>
> I wonder what they could possibly mean?

Since the Standard /defines/ 'cast' as 'explicit conversion', presumably 
they mean 'explicit explicit conversion'. If you feel strongly enough 
about it, raise a DR.

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
11/29/2016 8:31:47 PM
BartC <bc@freeuk.com> writes:
> On 29/11/2016 19:29, Keith Thompson wrote:
>> supercat@casperkitty.com writes:
>>> On Tuesday, November 29, 2016 at 9:18:50 AM UTC-6, Richard Heathfield wrote:
>>>> In C, "cast" means "explicit conversion". If you want to discuss your
>>>> own language, take it up in a newsgroup devoted to that language.
>>>
>>> I like to use the phrase "explicit cast" in cases where casts are
>>> intended to explicitly indicate, to "anything" reading the code
>>> (compiler or human), that something "unusual" may be going on.
>> [...]
>>
>> I like to use the word "cast" for that.
>
> The C Standard uses 'explicit cast' in 6.5.4p3 (2007 and 2009 drafts).

The C90, C99, and C11 editions of the standard all use that phrase.
(In C90 it's under Semantics; in C99 and C11 it's under Constraints.)

> I wonder what they could possibly mean?

It means the same thing as "cast".  Presumably the authors chose
to add the redundant word "explicit" for emphasis.  I probably
wouldn't have done so, but it's not wrong.

Do you think that "explicit cast" means something different from
"cast", in that context or any other C context?  If so, what?

For that matter, I'm fairly sure the entire sentence:

    Conversions that involve pointers, other than where permitted
    by the constraints of 6.5.16.1, shall be specified by means of
    an explicit cast.

is redundant, since it's already covered by the constraints on simple
assignment (which also apply to parameter passing, initialization,
and return statements).  A little redundancy isn't necessarily
a bad thing (the C standard is not an entirely formal document),
but in my opinion that particular sentence should be a footnote.
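
To make the constraint concrete, a minimal sketch (the variable names are
just for illustration):

    int n = 0;
    void *vp = &n;             /* allowed without a cast by 6.5.16.1            */
    float *fp = (float *)&n;   /* other pointer conversions need an explicit cast */
    /* float *bad = &n;           constraint violation: no cast                  */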

My guess is that that wording was added to emphasize a change in the
language.  Looking back at the 1974 and 1975 versions of the C manual,
I'm a bit surprised to see that the language didn't have a cast
operator.  K&R1 (1978) did, but it's not clear that a cast was required
for pointer conversions.

In old C, it was common to see things like
    char *ptr = 0xDEADBEEF;
or, more likely for the PDP-11:
    char *ptr = 0177770;
to allow references to a specific memory address.  I think the wording
was added to emphasize that things like that were no longer valid.
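
Under the current rules the same intent has to be written with a cast,
something along these lines (whether the resulting pointer is usable at all
is implementation-defined):

    char *ptr = (char *)0xDEADBEEF;   /* integer-to-pointer conversion now
                                         requires a cast; the result is
                                         implementation-defined */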

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/29/2016 8:54:34 PM
On 11/29/2016 03:54 PM, Keith Thompson wrote:
> BartC <bc@freeuk.com> writes:
>> On 29/11/2016 19:29, Keith Thompson wrote:
>>> supercat@casperkitty.com writes:
>>>> On Tuesday, November 29, 2016 at 9:18:50 AM UTC-6, Richard Heathfield wrote:
>>>>> In C, "cast" means "explicit conversion". If you want to discuss your
>>>>> own language, take it up in a newsgroup devoted to that language.
>>>>
>>>> I like to use the phrase "explicit cast" in cases where casts are
>>>> intended to explicitly indicate, to "anything" reading the code
>>>> (compiler or human), that something "unusual" may be going on.

Note, in particular, that what the C standard calls a cast might NOT 
qualify as a BartC "explicit cast" by this definition. If it was not 
intended to indicate that something unusual may be going on, then it's 
not a BartC "explicit cast", despite being a cast, and despite being an 
"explicit conversion" according to the C standard.

In order to determine whether or not a given cast is BartC explicit, 
it's not sufficient to examine what the code's author wrote; it's also 
necessary to determine what the author's intent was. In particular, just 
because something "unusual" is, in fact, going on, does NOT make it a 
BartC "explicit cast" - if the author did not intend to indicate that 
fact (possibly because he was unaware of that fact), it is not a BartC 
"explicit cast". This dependence upon the author's intent makes this a 
very peculiar term, one I would never have any particular use for, even 
if the standard were modified to endorse this usage.

>>> [...]
>>>
>>> I like to use the word "cast" for that.
>>
>> The C Standard uses 'explicit cast' in 6.5.4p3 (2007 and 2009 drafts).
....
> Do you think that "explicit cast" means something different from
> "cast", in that context or any other C context?  If so, what?

As he has indicated elsewhere, a BartC "cast" is "a conversion that is 
either added internally by a compiler, or that is applied in source code 
by the programmer", rendering the term a pointless synonym for 
"conversion", which makes it different from a BartC "explicit cast". The 
fact that BartC's meaning for "cast" is explicitly in conflict with the 
C standard's definition of the term "cast" doesn't seem to bother him - 
which is consistent with the assertion that he chooses to misuse this 
word with the deliberate intent of causing confusion.

0
James
11/29/2016 9:21:24 PM
On Monday, November 28, 2016 at 8:02:03 PM UTC-6, Rick C. Hodgin wrote:

> In converting C source code to a binary executable, there are not any
> aspects that are left to chance.  Everything is cast into stone and it
> cannot be altered.  It behaves exactly as it is indicating, only
> accessing data as instructed, only processing it through as instructed,
> and if it happens to do it incorrectly because of developer overrides
> injected into the program (including casts and unions, and I place
> memory access, VLAs, and even static arrays in other categories which
> require a different level of developer-imposed constraints), then
> that is in no way C's fault.

In a modern compiler, the behavior of one piece of the code may be affected
in unpredictable fashion by the context in which it is used.  Once the
executable code is generated, it may operate in fully consistent fashion,
but in many places where the Standard imposes no requirements it is common
for compilers to generate, at least 99% of the time, machine code which
will work in a predictable consistent fashion 100% of the time.  The cases
where compilers fail to do so are often not terribly predictable.
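
A commonly cited concrete case (whether a given compiler actually does this
depends on its version and options) is an overflow test that the optimizer
may delete because signed overflow is undefined:

    int wraps(int x) {
        return x + 1 < x;   /* undefined on overflow; an optimizing compiler
                               may fold this whole expression to 0 */
    }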

> The whole purpose of C is to wield data in its most fundamental form.
> It's like operating a chainsaw with all the safetys removed.  Sure you
> can get in there and cut everything possible, but you can also lop off
> your legs in about two seconds.  It requires skill and precision to
> utilize such a tool, but if you do it correctly the job can get done
> much faster than it can with other tools.

That may have been the purpose for which C was invented (I think it was)
but that is not how "modern" compilers work.  It seems fashionable for
compiler writers to place more emphasis on making their tools sharp than
on making them controllable.
0
supercat
11/29/2016 9:26:56 PM
On 29/11/2016 20:54, Keith Thompson wrote:
> BartC <bc@freeuk.com> writes:

>> The C Standard uses 'explicit cast' in 6.5.4p3 (2007 and 2009 drafts).
>
> The C90, C99, and C11 editions of the standard all use that phrase.
> (In C90 it's under Semantics; in C99 and C11 it's under Constraints.)
>
>> I wonder what they could possibly mean?
>
> It means the same thing as "cast".  Presumably the authors chose
> to add the redundant word "explicit" for emphasis.  I probably
> wouldn't have done so, but it's not wrong.
>
> Do you think that "explicit cast" means something different from
> "cast", in that context or any other C context?  If so, what?

I don't think anything of it.

But I was picked up on it for using it in one of my posts and a few 
people have apparently exchanged some amusing posts at my expense.

Yet all the time the venerable C Standard has been using exactly the 
same term!

(And when I tried to inject some amusement of my own by saying my 
language explicitly uses "cast", I was politely told to **** off.)

-- 
Bartc

0
BartC
11/29/2016 9:30:55 PM
On 29/11/2016 21:21, James R. Kuyper wrote:

> As he has indicated elsewhere, a BartC "cast" is "a conversion that is
> either added internally by a compiler, or that is applied in source code
> by the programmer", rendering the term a pointless synonym for
> "conversion", which makes it different from a BartC "explicit cast". The
> fact that BartC's meaning for "cast" is explicitly in conflict with the
> C standard's definition of the term "cast" doesn't seem to bother him -
> which is consistent with the assertion that he chooses to misuse this
> word with the deliberate intent of causing confusion.

More personal propaganda against me.

I don't think anyone at all was confused by 'explicit cast' in a post to 
do with strong and weak typing, until RH choose to focus on just that 
phrase.

So we end up talking about the equivalent of whether the "number" in 
"PIN number" is redundant in all conceivable situations.

But not everyone realises what PIN stands for (especially if spoken out 
loud), and not everyone knows every detail of all the C Standards off by 
heart.

-- 
Bartc
0
BartC
11/29/2016 9:47:08 PM
On 11/29/2016 10:45 AM, BartC wrote:
> On 29/11/2016 14:46, Jerry Stuckle wrote:
>> On 11/29/2016 6:17 AM, BartC wrote:
> 
>>> Here are my original comments, with extra emphasis:
>>>
>>> ....p is a pointer to ONE object....
>>>
>>>  p[i];        // index a pointer
>>>
>>> By one object, I mean it is known, to the programmer, that there is
>>> nothing meaningful or valid beyond or before that object. (Or, possibly
>>> worse, that it is not known! As otherwise this could be fixed. The
>>> language however is oblivious.)
>>>
>>> Yet, to an outsider, C is allowing you to access p as though it was
>>> actually an array. That must seem extraordinarily lax. You can even do
>>> it quite blatantly:
> 
>> Why did none of the students in my
>> C or C++ classes have a problem understanding that the name of an array,
>> used without an index, is a pointer to the first character?  You can't
>> seem to understand such a simple concept, despite multiple people trying
>> to explain it to you?
> 
> I said 'to an outsider'. But what's harder to understand than the mere
> fact of how it works, are the implications.
>

That is true for ANY language - programming or otherwise.  If you don't
know it, don't expect to understand it.

Although COBOL might be an exception :)

> Namely, lack of transparency and lack of type safety. That is, just the
> minimum, perfectly reasonable sort of type safety where you declare this:
> 
>  char c;
>  char* pc=&c;
>  char** ppc=&pc;
> 
> and the language squawks when you then write ppc[i][j]. Or the lack of
> transparency that happens when you see ppc[i][j], and find the
> declaration of ppc could be anything from the above at one end to char
> ppc[10][100] at the other.
> 

Sure, but in order for the compiler to properly evaluate ppc[i][j] it
would have to know the number of elements in [i]. You never specified
that, so [i][j] cannot be evaluated.

> And have you tried explaining to your students why the language turning
> a blind eye to:
> 
>    int A;
>    A = (&A)[123456789];
> 
> makes perfect sense and was an excellent design choice?
> 

No, because it does NOT make "perfect sense".  It's a stupid
construction.  And none of my students ever tried something so idiotic.

> int sumarray(int* A, int N){
>   int i, sum=0;
>   for (i=0; i<N; ++i) sum+=A[i];
>   return sum;
> }
> 
> Fine so far.
> 
> #define N 100
> int A;
> int* P=&A;
> printf("%d\n", sumarray(P,N));
> 
> Oops - this is wrong as P isn't set up to point to a block of 100 ints.
> Unfortunately the language thinks this is perfectly fine, so you have to
> find out the hard way!
> 

C requires the programmer to have a limited amount of intelligence and
to take responsibility for his/her constructs.  I think maybe you need
another language.  Maybe BASIC is more your speed.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
11/29/2016 10:04:05 PM
BartC <bc@freeuk.com> writes:
> On 29/11/2016 20:54, Keith Thompson wrote:
>> BartC <bc@freeuk.com> writes:
>>> The C Standard uses 'explicit cast' in 6.5.4p3 (2007 and 2009 drafts).
>>
>> The C90, C99, and C11 editions of the standard all use that phrase.
>> (In C90 it's under Semantics; in C99 and C11 it's under Constraints.)
>>
>>> I wonder what they could possibly mean?
>>
>> It means the same thing as "cast".  Presumably the authors chose
>> to add the redundant word "explicit" for emphasis.  I probably
>> wouldn't have done so, but it's not wrong.
>>
>> Do you think that "explicit cast" means something different from
>> "cast", in that context or any other C context?  If so, what?
>
> I don't think anything of it.
>
> But I was picked up on it for using it in one of my posts and a few 
> people have apparently exchanged some amusing posts at my expense.
>
> Yet all the time the venerable C Standard has been using exactly the 
> same term!

There are some differences.  I suspect nobody had noticed the
standard's use of that phrase until you pointed it out.  It's mostly
harmless in that particular context, but I can and do (mildly)
pick on the authors for using it.  You like to pretend that we
defend the Standard as if it were some kind of holy writ.  We don't.

You, on the other hand, tried to make a distinction between "explicit
cast" and "cast", and you used the word "cast" incorrectly.  (By
"incorrectly", I mean in a manner inconsistent with the unambiguous
definition given by the standard for the language we discuss in this
newsgroup.)  Quoting you from upthread:

    The 'explicit casts' was for my benefit, as I use 'casts' to more
    loosely mean a conversion that is either added internally by a
    compiler, or that is applied in source code by the programmer. The
    latter would be explicit.

I suggest that if you want to use the word "cast" when posting to this
newsgroup, you should use it in the sense defined by the C standard --
*if* you want to communicate clearly.

> (And when I tried to inject some amusement of my own by saying my 
> language explicitly uses "cast", I was politely told to **** off.)

Not in so many words, but it seemed to be yet another attempt to inject
deliberate confusion into the discussion.  Perhaps that wasn't your
intent?

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/29/2016 10:12:26 PM
On 29/11/2016 22:12, Keith Thompson wrote:
> BartC <bc@freeuk.com> writes:

> You, on the other hand, tried to make a distinction between "explicit
> cast" and "cast", and you used the word "cast" incorrectly.

I tried to justify my use of it because I hadn't really noticed. If 
anyone was really bamboozled by my apparent use of 'explicit explicit 
conversion' then I'd be surprised.

(There are some 700 C topics on stackoverflow which use 'explicit 
cast(s)' or 'implicit cast(s)'. Perhaps someone needs to get down there 
and ask them to please stop using that phrase. Before the concept of 
'explicit explicit conversion' 'or implicit explicit conversion' blows 
someone's mind!)

   (By
> "incorrectly", I mean in a manner inconsistent with the unambiguous
> definition given by the standard for the language we discuss in this
> newsgroup.)  Quoting you from upthread:
>
>     The 'explicit casts' was for my benefit, as I use 'casts' to more
>     loosely mean a conversion that is either added internally by a
>     compiler, or that is applied in source code by the programmer. The
>     latter would be explicit.
>
> I suggest that if you want to use the word "cast" when posting to this
> newsgroup, you should use it in the sense defined by the C standard --
> *if* you want to communicate clearly.

Were *you* confused? Because it sounded like nobody was really confused, 
they just wanted to pick a fight over absolutely nothing.

Obviously there aren't enough proper threads to get stuck in to.

-- 
Bartc
0
BartC
11/29/2016 10:27:59 PM
On 29/11/16 22:27, BartC wrote:
> On 29/11/2016 22:12, Keith Thompson wrote:

<snip>

>> I suggest that if you want to use the word "cast" when posting to this
>> newsgroup, you should use it in the sense defined by the C standard --
>> *if* you want to communicate clearly.
>
> Were *you* confused?

I doubt very much whether Keith was confused. I wasn't confused. But, it 
seems, *you* were confused, or at least sufficiently confused that you 
were clearly misusing the term, which is why I posted the correction.

-- 
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
0
Richard
11/29/2016 10:33:08 PM
On Tuesday, November 29, 2016 at 5:28:07 PM UTC-5, Bart wrote:
> On 29/11/2016 22:12, Keith Thompson wrote:
> > BartC <bc@freeuk.com> writes:
>
> > You, on the other hand, tried to make a distinction between "explicit
> > cast" and "cast", and you used the word "cast" incorrectly.
>
> I tried to justify my use of it because I hadn't really noticed. If
> anyone was really bamboozled by my apparent use of 'explicit explicit
> conversion' then I'd be surprised.

It wasn't your use of "explicit cast" that was the problem; that's merely
redundant. It was your use of "cast" to refer to things that are not casts
that is a problem.

> (There are some 700 C topics on stackoverflow which use 'explicit
> cast(s)' or 'implicit cast(s)'. Perhaps someone needs to get down there

"implicit cast" is a problem, since it's probably being used to refer to
cases where there's no cast whatsoever, just a conversion occurring without
a cast. It would be nice to get them to correct such misuse of the term,
and if I were monitoring conversations there, I would do so. But I don't
monitor stackoverflow, I do monitor this newsgroup.
0
jameskuyper
11/29/2016 10:52:40 PM
jameskuyper@verizon.net writes:
> On Tuesday, November 29, 2016 at 5:28:07 PM UTC-5, Bart wrote:
[...]
>> (There are some 700 C topics on stackoverflow which use 'explicit
>> cast(s)' or 'implicit cast(s)'. Perhaps someone needs to get down
>> there
>
> "implicit cast" is a problem, since it's probably being used to refer
> to cases where there's no cast whatsoever, just a conversion occurring
> without a cast. It would be nice to get them to correct such misuse of
> the term, and if I were monitoring conversations there, I would do
> so. But I don't monitor stackoverflow, I do monitor this newsgroup.

Stack Overflow is not a discussion forum, it's a collection of questions
and answers (and comments).  I do correct people who use the term
"implicit cast" when I have the opportunity, but I don't go out of my
way to find and correct mistakes (it's a *big* site).

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/29/2016 11:43:29 PM
BartC <bc@freeuk.com> writes:

> On 29/11/2016 11:40, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
<snip>
>>> The 'explicit casts' was for my benefit, as I use 'casts' to more
>>> loosely mean a conversion that is either added internally by a
>>> compiler, or that is applied in source code by the programmer. The
>>> latter would be explicit.
>
>> Why not just call them conversions?
>
> 'Cast' is shorter than 'conversion' or 'coercion'?

The question mark makes me think even you don't really consider this
explanation adequate.  If you have one you consider plausible, do post
it; I'm genuinely curious.  Expanding the narrow technical meaning of
cast to cover all (or almost all) conversions is not uncommon and I'm
not sure why this happens.

<snip>
-- 
Ben.
0
Ben
11/29/2016 11:51:44 PM
BartC <bc@freeuk.com> writes:

> On 29/11/2016 02:12, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>
>>> some are known, by the programmer, to point to the first of
>>> multiple consecutive objects.
>>
>> And some may point to the last of a collection or to some element that
>> is neither, but in all cases the pointer points to just the one
>> element.
>
> Here are my original comments, with extra emphasis:
>
> ....p is a pointer to ONE object....
>
>  p[i];        // index a pointer
>
> By one object, I mean it is known, to the programmer, that there is
> nothing meaningful or valid beyond or before that object. (Or,
> possibly worse, that it is not known! As otherwise this could be
> fixed. The language however is oblivious.)

So what's your point?  That p+i may not point to an object at all
(when i != 0)?  Yes, we agree about that (and I said as much in my post)
but I thought your objection to what I said was that a pointer can point
to more than one object.

(All this is, obviously, about object pointers -- function pointers are
another matter altogether.)

> Yet, to an outsider, C is allowing you to access p as though it was
> actually an array. That must seem extraordinarily lax.

It's lax to an insider too.  C is a lax language.  That's one reason
it's popular.

An outsider is obviously very likely to be confused, but that
confusion tells us very little about the point I raised: that a (valid
object) pointer always points to a single object.

> You can even do
> it quite blatantly:
>
>   int A;
>   A = (&A)[123456789];
>
> Not a beep out of the compiler! (This crashes BTW.)
>
> Now that's what I would call weak typing.

What has this to do with the incontrovertible fact that a valid object
pointer points to a single object?

<snip>
-- 
Ben.
0
Ben
11/30/2016 12:19:05 AM
On 30/11/2016 00:19, Ben Bacarisse wrote:
> BartC <bc@freeuk.com> writes:

>> You can even do
>> it quite blatantly:
>>
>>   int A;
>>   A = (&A)[123456789];
>>
>> Not a beep out of the compiler! (This crashes BTW.)
>>
>> Now that's what I would call weak typing.
>
> What has this to do with the incontrovertible fact that a valid object
> pointer points to a single object?

How does the fact that a pointer can only refer to one thing at a time 
affect any of my arguments?

Obviously that's how it has to work. But you're ignoring the extra 
information, some of which is present at compile time, sometimes at 
runtime, that THERE MIGHT BE MANY MORE OBJECTS FOLLOWING THAT ONE, 
forming part of a larger object.

This is very important to know. The programmer should know it (but can't 
always be certain if the code they write is called from someone else's 
code).

The language sometimes knows it, but chooses to ignore it. And in fact 
encourages it by allowing P[i] notation.

This is where the weak typing comes into it. You have something 
which is patently not an array, being used as an array.

My suggestion has always been that the one object being pointed to should 
itself be an array. Now, whether there are more objects following is not 
relevant (there might be bounds errors, but that's another, well 
understood matter, and to do with program logic not an iffy type system).

Now the type model strengthens, but only a little, since C still allows 
mere pointers to be indexed. Like putting a five-lever lock on your 
front door, but having bar-type swing doors right next to it!

-- 
Bartc

0
BartC
11/30/2016 12:43:39 AM
On 29/11/2016 23:51, Ben Bacarisse wrote:
> BartC <bc@freeuk.com> writes:
>
>> On 29/11/2016 11:40, Ben Bacarisse wrote:
>>> BartC <bc@freeuk.com> writes:
> <snip>
>>>> The 'explicit casts' was for my benefit, as I use 'casts' to more
>>>> loosely mean a conversion that is either added internally by a
>>>> compiler, or that is applied in source code by the programmer. The
>>>> latter would be explicit.
>>
>>> Why not just call them conversions?
>>
>> 'Cast' is shorter than 'conversion' or 'coercion'?
>
> The question mark makes me think even you don't really consider this
> explanation adequate.  If you have one you consider plausible, do post
> it; I'm genuinely curious.  Expanding the narrow technical meaning of
> cast to cover all (or almost all) conversions is not uncommon and I'm
> not sure why this happens.

Sorry I've lost the thread of this conversation.

I wrote 'explicit cast' instead of 'cast'; so shoot me!

People here seem to forget that usenet articles are not program code 
where everything has to be just so. They're in English.

And even in English, these are not legal documents or reference manuals 
or a precise set of instructions. It's informal.

They also forget that not everyone is an expert who knows that 'cast' 
has a very, very precise meaning - inside the C language reference.

-- 
Bartc

0
BartC
11/30/2016 12:51:51 AM
On Tuesday, November 29, 2016 at 4:27:10 PM UTC-5, supe...@casperkitty.com wrote:
> On Monday, November 28, 2016 at 8:02:03 PM UTC-6, Rick C. Hodgin wrote:
> 
> > In converting C source code to a binary executable, there are not any
> > aspects that are left to chance.  Everything is cast into stone and it
> > cannot be altered.  It behaves exactly as it is indicating, only
> > accessing data as instructed, only processing it through as instructed,
> > and if it happens to do it incorrectly because of developer overrides
> > injected into the program (including casts and unions, and I place
> > memory access, VLAs, and even static arrays in other categories which
> > require a different level of developer-imposed constraints), then
> > that is in no way C's fault.
> 
> In a modern compiler, the behavior of one piece of the code may be affected
> in unpredictable fashion by the context in which it is used.  Once the
> executable code is generated, it may operate in fully consistent fashion,
> but in many places where the Standard imposes no requirements it is common
> for compilers to generate, at least 99% of the time, machine code which
> will work in a predictable consistent fashion 100% of the time.  The cases
> where compilers fail to do so are often not terribly predictable.
> 
> > The whole purpose of C is to wield data in its most fundamental form.
> > It's like operating a chainsaw with all the safetys removed.  Sure you
> > can get in there and cut everything possible, but you can also lop off
> > your legs in about two seconds.  It requires skill and precision to
> > utilize such a tool, but if you do it correctly the job can get done
> > much faster than it can with other tools.
> 
> That may have been the purpose for which C was invented (I think it was)
> but that is not how "modern" compilers work.  It seems fashionable for
> compiler writers to place more emphasis on making their tools sharp than
> on making them controllable.

Well, let's change that.  My goal is to give people tools which manipulate
data in a wide array of ways, including those which operate in the way a
human being would expect, and not something a computer scientist might
recognize due to bitsize limitations in a signed / unsigned computation,
for example.  I believe in auto-upsizing during computation, and then
sign-saturating the final value into the result if necessary.  I plan
to provide facilities within my CPU's ISA to directly support that
ability, for example.
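
In plain C terms, the widen-then-saturate idea described above might be
sketched like this (illustrative only; the function name and widths are
assumptions, not part of any existing tool or ISA):

    #include <stdint.h>

    int32_t sat_add32(int32_t a, int32_t b) {
        int64_t wide = (int64_t)a + (int64_t)b;   /* "auto-upsized" intermediate */
        if (wide > INT32_MAX) return INT32_MAX;   /* saturate instead of wrapping */
        if (wide < INT32_MIN) return INT32_MIN;
        return (int32_t)wide;
    }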

Best regards,
Rick C. Hodgin
0
Rick
11/30/2016 1:09:51 AM
BartC <bc@freeuk.com> writes:

> On 30/11/2016 00:19, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>
>>> You can even do
>>> it quite blatantly:
>>>
>>>   int A;
>>>   A = (&A)[123456789];
>>>
>>> Not a beep out of the compiler! (This crashes BTW.)
>>>
>>> Now that's what I would call weak typing.
>>
>> What has this to do with the incontrovertible fact that a valid object
>> pointer points to a single object?
>
> How does the fact that a pointer can only refer to one thing at a time
> affect any of my arguments?

It relates to a comment you made, not an argument.  I pointed out that
"All pointers point to one object" (because you seemed to imply
otherwise) and indeed you replied to that directly with a point-blank
"No".

> Obviously that's how it has to work.

Yes, which makes me question this whole exchange.  Why on earth did you
say "no" if you meant "yes, that's how it was to work"?

> But you're ignoring the extra
> information, some of which is present at compile time, sometimes at
> runtime, that THERE MIGHT BE MANY MORE OBJECTS FOLLOWING THAT ONE,
> forming part of a larger object.

I'm ignoring it because it's not in dispute.

<snip>
-- 
Ben.
0
Ben
11/30/2016 1:51:19 AM
BartC <bc@freeuk.com> writes:

> On 29/11/2016 23:51, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>>
>>> On 29/11/2016 11:40, Ben Bacarisse wrote:
>>>> BartC <bc@freeuk.com> writes:
>> <snip>
>>>>> The 'explicit casts' was for my benefit, as I use 'casts' to more
>>>>> loosely mean a conversion that is either added internally by a
>>>>> compiler, or that is applied in source code by the programmer. The
>>>>> latter would be explicit.
>>>
>>>> Why not just call them conversions?
>>>
>>> 'Cast' is shorter than 'conversion' or 'coercion'?
>>
>> The question mark makes me think even you don't really consider this
>> explanation adequate.  If you have one you consider plausible, do post
>> it; I'm genuinely curious.  Expanding the narrow technical meaning of
>> cast to cover all (or almost all) conversions is not uncommon and I'm
>> not sure why this happens.
>
> Sorry I've lost the thread of this conversation.
>
> I wrote 'explicit cast' instead of 'cast'; so shoot me!

Yes, you have lost the thread.  I don't really mind about that (not all
people who reply to you share exactly the same views).

You said (and it's still quoted above)

  "I use 'casts' to more loosely mean a conversion that is either added
  internally by a compiler, or that is applied in source code by the
  programmer"

I wondered why you did not just call then conversions.  You replied

  "'Cast' is shorter than 'conversion' or 'coercion'?"

where the question mark suggests that even you don't really think that's
a good reason.  I am curious to know if you have any insight into why
you use "cast" in this way because you are not alone.  Maybe your
reasons are the same as other people's reasons.

> People here seem to forget that usenet articles are not program code
> where everything has to be just so. They're in English.

No, I don't forget that.  In fact it's one reason I was surprised by
your usage.  "Conversion" is plain English, but "cast" sounds, to me,
like techie jargon.

<snip>
-- 
Ben.
0
Ben
11/30/2016 2:08:07 AM
BartC <bc@freeuk.com> writes:
> On 29/11/2016 23:51, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>>> On 29/11/2016 11:40, Ben Bacarisse wrote:
>>>> BartC <bc@freeuk.com> writes:
>> <snip>
>>>>> The 'explicit casts' was for my benefit, as I use 'casts' to more
>>>>> loosely mean a conversion that is either added internally by a
>>>>> compiler, or that is applied in source code by the programmer. The
>>>>> latter would be explicit.
>>>
>>>> Why not just call them conversions?
>>>
>>> 'Cast' is shorter than 'conversion' or 'coercion'?
>>
>> The question mark makes me think even you don't really consider this
>> explanation adequate.  If you have one you consider plausible, do post
>> it; I'm genuinely curious.  Expanding the narrow technical meaning of
>> cast to cover all (or almost all) conversions is not uncommon and I'm
>> not sure why this happens.
>
> Sorry I've lost the thread of this conversation.
>
> I wrote 'explicit cast' instead of 'cast'; so shoot me!

Nobody is going to shoot you for using the phrase "explicit cast".
Someone (RH, I think) did point out, quite correctly that the word
"explicit" in that phrase is redundant (and yes, the standard uses it
too).  You could either acknowledge or ignore that point, and move on.

> People here seem to forget that usenet articles are not program code 
> where everything has to be just so. They're in English.
>
> And even in English, these are not legal documents or reference manuals 
> or a precise set of instructions. It's informal.
>
> They also forget that not everyone is an expert who knows that 'cast' 
> has a very, very precise meaning - inside the C language reference.

Not everyone does -- but surely you do.  Which means *you* can choose to
use the terms correctly, setting an example for others.

The problem isn't the mostly harmlessly redundant phrase "explicit
cast".  The problem is that you wrote upthread (quoted at the top of
this article) that you use the word "cast" to mean something other than
what it actually means.  As far as I can tell, you use the word "cast"
to mean what is properly (in C) called a "conversion".  That's not just
harmlessly redundant, it's incorrect and misleading.  As you've pointed
out, a lot of other people make the same mistake -- but you know better.

We all make mistakes; I've certainly made my share.  But when someone
points out that you're misusing a technical term, you don't admit it,
and you give the impression (at least to me) that you're deliberately
pretending not to understand things that you actually do understand,
presumably with the goal of making some point that still eludes me.

A conversion may be implicit or explicit.  An explicit conversion is
called a cast, where a cast operator is written as a parenthesized type
name.  What is so hard about that?
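
A minimal illustration of that distinction (the variable names are just for
the example):

    double d = 3.7;
    int i = d;        /* implicit conversion: no cast appears in the source      */
    int j = (int)d;   /* a cast, i.e. an explicit conversion, via the (int) operator */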

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/30/2016 2:10:01 AM
On 11/29/2016 7:43 PM, BartC wrote:
> On 30/11/2016 00:19, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
> 
>>> You can even do
>>> it quite blatantly:
>>>
>>>   int A;
>>>   A = (&A)[123456789];
>>>
>>> Not a beep out of the compiler! (This crashes BTW.)
>>>
>>> Now that's what I would call weak typing.
>>
>> What has this to do with the incontrovertible fact that a valid object
>> pointer points to a single object?
> 
> How does the fact that a pointer can only refer to one thing at a time
> affect any of my arguments?
> 
> Obviously that's how it has to work. But you're ignoring the extra
> information, some of which is present at compile time, sometimes at
> runtime, that THERE MIGHT BE MANY MORE OBJECTS FOLLOWING THAT ONE,
> forming part of a larger object.
> 
> This is very important to know. The programmer should know it (but can't
> always be certain if the code they write is called from someone else's
> code).
>

Yes, you can.  It's called DOCUMENTATION.  YOU set the rules for calling
YOUR function and document them.  If the caller doesn't follow those
rules, that's his problem.

> The language sometimes knows it, but chooses to ignore it. And in fact
> encourages it by allowing P[i] notation.
> 
> This is where the weak typing comes into it. You have something
> which is patently not an array, being used as an array.
> 

No, once again you are mixing pointers and arrays.  Weak typing has
nothing to do with it.  In C, the name of an array is a pointer to the
first character.  Period.

> My suggestion has always been that the one object being pointed to should
> itself be an array. Now, whether there are more objects following is not
> relevant (there might be bounds errors, but that's another, well
> understood matter, and to do with program logic not an iffy type system).
> 
> Now the type model strengthens, but only a little, since C still allows
> mere pointers to be indexed. Like putting a five-lever lock on your
> front door, but having bar-type swing doors right next to it!
> 

You are the only one who seems to have problems like this, Bart.  So you
should be creating your own language instead of continuing to bitch
about C, when virtually every other C programmer in the world
understands and uses the syntax.

But not you.


-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
11/30/2016 3:26:56 AM
On 11/30/16 12:17 AM, BartC wrote:
>
> Yet, to an outsider, C is allowing you to access p as though it was
> actually an array. That must seem extraordinarily lax. You can even do
> it quite blatantly:
>
>     int A;
>     A = (&A)[123456789];
>
> Not a beep out of the compiler! (This crashes BTW.)

cc x.c
"x.c", line 6: Warning: Likely uninitialized read (variable A): main
CC x.c
"x.c", line 6: Warning: Likely out-of-bound read: 
*((main)::A[123456789]) in function main

So make the index 1:

cc x.c
"x.c", line 6: Warning: Likely uninitialized read (variable A): main
CC x.c
"x.c", line 6: Warning: Likely out-of-bound read: *((main)::A[1]) in 
function main

So some compilers "beep"..

-- 
Ian
0
Ian
11/30/2016 7:50:30 AM
On 29/11/16 15:54, Rick C. Hodgin wrote:
> On Tuesday, November 29, 2016 at 9:13:01 AM UTC-5, David Brown wrote:
>> On 29/11/16 14:44, Rick C. Hodgin wrote:
>>> On Tuesday, November 29, 2016 at 4:52:51 AM UTC-5, David Brown wrote:
>>>> On 28/11/16 22:50, Rick C. Hodgin wrote:
>>>>> On Monday, November 28, 2016 at 4:05:27 PM UTC-5, supe...@casperkitty.com wrote:
>>>>>> On Monday, November 28, 2016 at 2:46:29 PM UTC-6, Rick C. Hodgin wrote:
>>>>>>> I view strongly typed as anything that can only be one thing, only used
>>>>>>> as one thing, and that's its type.
>>>>>>>
>>>>>>> I view weakly typed as anything that can be more than one thing, can be
>>>>>>> used as one thing now, another thing later, and its type is instance-by-
>>>>>>> instance determined, rather than being known at compile time.
>>>>>>
>>>>>> Python is strongly dynamically typed.  Static typing would imply that the
>>>>>> compiler would know about the types of things.  Strong typing implies that
>>>>>> *something* knows about the types of things and will enforce proper usage.
>>>>>> In C, directly-accessed objects are strongly statically typed; storage
>>>>>> which is accessed only via pointers are handled in whatever manner the
>>>>>> implementation feels like.
>>>>>
>>>>> I can accept that.  My use of the term strong and weak referred more to
>>>>> the static condition.
>>>>
>>>> And I hope now you see that this usage is incorrect.  Static versus
>>>> dynamic typing is an important classification for languages - arguably
>>>> more important than strong or weak typing.  But it is a different concept.
>>>
>>> I disagree with the definition, but I do see what everybody in this
>>> thread has been explaining.
>>>
>>
>> I can quite understand why you might feel that static typing is in a
>> sense "better", "more solid", or "stronger" than dynamic typing (though
>> in truth, each has its pros and cons - and languages are rarely entirely
>> static, or entirely dynamic in their typing).  But the term "strong
>> typing" refers to how much you can mess around with the language's type
>> system.  It's just the way it is - consistency of terminology is often
>> more important than the particular choice of words.
> 
> I've written a few lengthy responses to this thread and then deleted
> them.  It's not a point I care to argue, though I have specific reasons
> why I disagree with the view that data is the determination as to how/
> when a type access is violated.
> 
> I plan to create my own ecosystem with CAlive so it won't really be an
> issue in maintaining legacy terminology.

You really should try to keep the same terminology as everyone else.
That does not mean you need to keep the same sort of type system in your
own languages as you find in others.  If you want to implement
duck-typing on CAlive rather than nominal or structural typing, that
would be fine - but call it "duck-typing", not "goat-typing" or
"sheep-typing".  If you want to implement static typing, call it static
typing.  If you want strong typing, call it strong typing.

It is entirely up to /you/ to decide how you want to implement the type
system in your language, considering a variety of possible type system
characteristics (not just strong/weak and static/dynamic, but also
manifest/inferred, nominal/structural/duck-typing, and perhaps other
characteristics).  And these are not all-or-nothing choices - usually
you can make them sliding scales.  For example, you could make code
normally strongly typed (no type-punning unions, no void*) but also
allow modules or functions to be marked as "unsafe" and have access to
type-safety breaking features to make it easier to implement things like
memory allocators.

But you should understand the terminology, and understand the terms -
only then can you make the best choices from an informed position.  It
is fine to want to create something new and not limited by existing
languages - but it is still worth learning from other people's work and
experience!

And if you start using your own terms, different from everyone else's,
then you will just confuse people (not to mention confusing yourself).



0
David
11/30/2016 9:37:22 AM
On 29/11/16 21:31, Richard Heathfield wrote:
> On 29/11/16 20:27, BartC wrote:
>> On 29/11/2016 19:29, Keith Thompson wrote:
>>> supercat@casperkitty.com writes:
>>>> On Tuesday, November 29, 2016 at 9:18:50 AM UTC-6, Richard Heathfield
>>>> wrote:
>>>>> In C, "cast" means "explicit conversion". If you want to discuss your
>>>>> own language, take it up in a newsgroup devoted to that language.
>>>>
>>>> I like to use the phrase "explicit cast" in cases where casts are
>>>> intended
>>>> to explicitly indicate, to "anything" reading the code (compiler or
>>>> human),
>>>> that something "unusual" may be going on.
>>> [...]
>>>
>>> I like to use the word "cast" for that.
>>
>> The C Standard uses 'explicit cast' in 6.5.4p3 (2007 and 2009 drafts).
>>
>> I wonder what they could possibly mean?
> 
> Since the Standard /defines/ 'cast' as 'explicit conversion', presumably
> they mean 'explicit explicit conversion'. If you feel strongly enough
> about it, raise a DR.
> 

Think of it like "The great big enormous turnip".  Redundancy for
emphasis or additional clarity is not a problem.  Remember, some people
reading the C standards are doing so for the first time, and some people
reading articles or Usenet posts about C do not have decades of
experience in C terminology details - it does no harm to write "explicit
cast" on occasion, especially in contrast to implicit conversions.

There is enough confusion about terms in this thread.  Surely we can
live with a little redundancy - it is a step up from a lot of inaccuracy!

0
David
11/30/2016 9:45:24 AM
David Brown wrote:
> [snip]

Thank you for offering your advice, David.  I will consider it.

Best regards,
Rick C. Hodgin
0
Rick
11/30/2016 11:07:00 AM
On 30/11/2016 07:50, Ian Collins wrote:
> On 11/30/16 12:17 AM, BartC wrote:
>>
>> Yet, to an outsider, C is allowing you to access p as though it was
>> actually an array. That must seem extraordinarily lax. You can even do
>> it quite blatantly:
>>
>>     int A;
>>     A = (&A)[123456789];
>>
>> Not a beep out of the compiler! (This crashes BTW.)
>
> cc x.c
> "x.c", line 6: Warning: Likely uninitialized read (variable A): main
> CC x.c
> "x.c", line 6: Warning: Likely out-of-bound read:
> *((main)::A[123456789]) in function main
>
> So make the index 1:
>
> cc x.c
> "x.c", line 6: Warning: Likely uninitialized read (variable A): main
> CC x.c
> "x.c", line 6: Warning: Likely out-of-bound read: *((main)::A[1]) in
> function main
>
> So some compilers "beep"..

Which one is that?

The only warning I get is that A is uninitialised:

   gcc -Wall -Wextra -Wpedantic -Warray-bounds c.c

And that's if I turn on all that.

But that code fragment is just an illustration; the real problem is 
indexing any T* type, especially when it is a function parameter, as the 
compiler can't tell if the T* properly points to something where indexing 
would be meaningful, or if the programmer has made a mistake.

Maybe, on some compilers, with the right combination of options, when 
all the relevant code is in the same module, then you might be lucky 
enough to see a warning. But that's no thanks to the language.

C has effectively decided that any individual variable, any element in a 
mixed-type struct, a single scalar parameter, a blind pointer to 
anything at all, can be considered to be a 1-element array, and 
therefore fair game to be used as though it /is/ an array, whether it is 
in actuality or not.

And because the rules of the language allow it (and therefore turn any 
nonsense use of indexing into a mere out-of-bounds error, passing the 
buck to the programmer), none of the experts here are going to say there 
is anything wrong with it. They can't.

-- 
Bartc
0
BartC
11/30/2016 11:11:10 AM
On 30/11/2016 03:26, Jerry Stuckle wrote:
> On 11/29/2016 7:43 PM, BartC wrote:

>> Now the type model strengthens, but only a little, since C still allows
>> mere pointers to be indexed. Like putting a five-lever lock on your
>> front door, but having bar-type swing doors right next to it!
>>
>
> You are the only one who seems to have problems like this, Bart.  So you
> should be creating your own language instead of continuing to bitch
> about C,

Perhaps you haven't followed my posting history closely. That's exactly 
what I did, long before I properly encountered C. And I've since been 
wondering why it's got so many things backwards!

Some things have taken me a while to appreciate; maybe I couldn't quite 
believe they were as bad as they apparently are.

My alternate languages can do pretty much everything that C can do, but 
without this particular hole in the type system, with the simple 
expedient of requiring constructs like this:

   T* P;
   P[i];

to be written as:

   *(P+i);

A[i]-style indexing can only be applied when A is an array; end of.



-- 
Bartc
0
BartC
11/30/2016 11:25:47 AM
BartC <bc@freeuk.com> writes:
<snip>
>> On 11/30/16 12:17 AM, BartC wrote:
<snip>
>>>     int A;
>>>     A = (&A)[123456789];
<snip>

> C has effectively decided that any individual variable, any element in
> a mixed-type struct, a single scalar parameter, a blind pointer to
> anything at all, can be considered to be a 1-element array, and
> therefore fair game to be used as though it /is/ an array, whether it
> is in actuality or not.

No, C hasn't decided that.  It's decided that it is most definitely not
"fair game" to do that.  Making something undefined is the opposite of
that.

(C does not require an implementation to enforce those rules, but you
can get gcc to give you a run-time check using its sanitize options.)
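
For example, something along these lines (a sketch; the exact flags and
diagnostics vary with the gcc version):

    gcc -g -fsanitize=address,undefined x.c && ./a.out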

> And because the rules of the language allow it [...]

What I think you mean is that the syntax allows it.

-- 
Ben.
0
Ben
11/30/2016 12:00:35 PM
On 30/11/16 12:07, Rick C. Hodgin wrote:
> David Brown wrote:
>> [snip]
> 
> Thank you for offering your advice, David.  I will consider it.
> 

Just a small thing - I have not made any "offering".  I posted some
/advice/, not an "offering of advice".  It was not an "offering" or a
"sacrifice" to you or anyone else - it was advice that I thought might
benefit you and perhaps others who read it.  Please do not make it look
like I am "donating" something to you, or supporting your plans, ideals
or visions.  I am merely giving help where I can to people in this
newsgroup.

If you want to know more about typing or type systems, then there are
plenty of people in Usenet who can help.  There are definitely people in
c.l.c. who know a good deal more than me about the subject.  But you
will probably find it makes more sense to discuss it in a group such as
comp.programming - in c.l.c., it is only really on-topic when we are
discussing the type system in C or comparisons with other languages.  If
you want to start a thread in comp.programming on the subject, I will
probably join in.

That's an /offer/ of advice, but not an /offering/ of advice :-)


0
David
11/30/2016 12:28:36 PM
On Wednesday, November 30, 2016 at 7:28:44 AM UTC-5, David Brown wrote:
> [snip]

Thank you for correcting my mistake, David.  You are always seeking to
help others do better and achieve more.  Such efforts are appreciated.

Best regards,
Rick C. Hodgin
0
Rick
11/30/2016 1:12:33 PM
On 11/30/2016 6:25 AM, BartC wrote:
> On 30/11/2016 03:26, Jerry Stuckle wrote:
>> On 11/29/2016 7:43 PM, BartC wrote:
> 
>>> Now the type model strengthens, but only a little, since C still allows
>>> mere pointers to be indexed. Like putting a five-lever lock on your
>>> front door, but having bar-type swing doors right next to it!
>>>
>>
>> You are the only one who seems to have problems like this, Bart.  So you
>> should be creating your own language instead of continuing to bitch
>> about C,
> 
> Perhaps you haven't followed my posting history closely. That's exactly
> what I did, long before I properly encountered C. And I've since been
> wondering why it's got so many things backwards!
>

Oh, I've read your posting history, unfortunately.  And from your
constant bitching, it's obvious that C is not for you.

C doesn't have anything backwards.  But it doesn't hold your hand like
many other languages do.  It generates efficient code, with little
run-time checking.  But it means you, the programmer, have to be more
intelligent in what you code.

> Some things have taken me a while to appreciate; maybe I couldn't quite
> believe they were as bad as they apparently are.
> 

There is nothing bad about C - other than the fact you seem to need a
language that holds your hand constantly.  C is not it.

> My alternate languages can do pretty much everything that C can do, but
> without this particular hole in the type system, with the simple
> expedient of requiring constructs like this:
> 
>   T* P;
>   P[i];
> 
> to be written as:
> 
>   *(P+i);
> 
> A[i]-style indexing can only be applied when A is an array; end of.
> 
> 
> 

There is no hole in the type system.  The name of an array is a pointer
to the first character.  Period.  They are not two different types, like
they are in most other languages.

And since your alternate languages can do pretty much everything that C
can do, I suggest you use them.  You would rather bitch about C than
learn how to do it properly.  Had you been in one of my classes, I would
have kicked your arse out and told your manager to get you a job
emptying waste cans.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
11/30/2016 2:07:04 PM
On 30/11/2016 14:07, Jerry Stuckle wrote:
> On 11/30/2016 6:25 AM, BartC wrote:

>> A[i]-style indexing can only be applied when A is an array; end of.

> There is no hole in the type system.

There /is/ a hole where value array types would go. C decided, instead 
of leaving the hole, so as to trap errors when someone attempts an 
unsupported operation [/and to allow for the possibility of value array 
ops in the future/], to fill the hole with 'pointer to array first element'.

I think that was a bad decision. And it leads to inconsistencies such as 
'int A[10]' meaning different things in different contexts.
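
For instance, a small sketch of that point:

    void f(int A[10]) {        /* as a parameter, A is adjusted to 'int *A';
                                  sizeof A == sizeof(int *) here              */
        (void)A;
    }
    int A[10];                 /* as an object declaration, A is a real array;
                                  sizeof A == 10 * sizeof(int)                */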

> The name of an array is a pointer
> to the first character.  Period.  They are not two different types, like
> they are in most other languages.

I wonder why 'most other languages' do things differently!

> And since your alternate languages can do pretty much everything that C
> can do, I suggest you use them.

I do. When I use C now, it's usually for intermediate code. Or to figure 
out how interfaces work (at the minute, I'm about to delve into the 
mysteries of 'dirent.h' in order to derive the binary interface I need).

>  You would rather bitch about C than
> learn how to do it properly.  Had you been in one of my classes, I would
> have kicked your arse out

You only want students who take everything at face value and don't ask 
awkward questions?

BTW do /you/ find everything in C (or in any language) 100% perfect? If 
not, how would relish being 'kicked out' of classes if you dared to 
voice your opinions?

Finally, how do languages ever evolve if nobody ever objects to anything 
in them?

-- 
Bartc
0
BartC
11/30/2016 2:22:41 PM
Bart wrote:
>   *(P+i); 

Because index is one important
operation why not just P.i ?

one char less than P[i]
0
asetofsymbols
11/30/2016 2:36:59 PM
BartC <bc@freeuk.com> writes:
[...]
> C has effectively decided that any individual variable, any element in a 
> mixed-type struct, a single scalar parameter, a blind pointer to 
> anything at all, can be considered to be a 1-element array, and 
> therefore fair game to be used as though it /is/ an array, whether it is 
> in actuality or not.

Yes, it has.  N1570 6.5.6p7:

    For the purposes of these operators, a pointer to an object that
    is not an element of an array behaves the same as a pointer to
    the first element of an array of length one with the type of
    the object as its element type.

where the phrase "these operators" refers to the "+" and "-"
operators taking a pointer and an integer as operands.  I'm glad
you understand that.

> And because the rules of the language allow it (and therefore turn any 
> nonsense use of indexing into a mere out-of-bounds error, passing the 
> buck to the programmer), none of the experts here are going to say there 
> is anything wrong with it. They can't.

I don't say there's anything wrong with it because, in my humble
opinion, there isn't anything wrong with it.

I can see that there could be some disadvantages to it.  Perhaps I
might incorrectly assume that a pointer points to an element of an
array, and index it incorrectly -- though that's just one example
of the fact that C doesn't require array index checking.

I can imagine a C-like language that has two kinds of pointers,
one supporting pointer arithmetic and therefore array indexing,
and another that can only point to a single object and cannot be
used to access other objects that are elements of the same array.
Perhaps that would be a good idea.  But C is not that language.
(And if such a feature were added, I presume that it would still be
possible to convert one kind of pointer to the other, overriding
any protection.  There is something to be said for requiring such
overriding to be explicit -- but again, C is not that language.)

There are also some advantages to the way C does it.  Maybe I have
a function that iterates over the elements of an array, and I want
to use it on a single object that's not explicitly an array element.
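
For instance, a minimal sketch (the function name is just for illustration):

    int sum(const int *a, int n) {
        int i, s = 0;
        for (i = 0; i < n; i++) s += a[i];
        return s;
    }

    int x = 42;
    int t = sum(&x, 1);   /* valid: &x behaves as a pointer to the first element
                             of an array of length one (N1570 6.5.6p7) */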

You clearly understand the rule.  You understand, I presume, that the
rule cannot be changed in a future version of the language without
breaking existing code.  You understand that most C experienced
programmers don't have a problem with it.

*We* understand that you don't like it.  (We don't understand what
the [deleted] you expect us to do about it.)

By all means let us know if you have anything *new* to say on
the topic.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/30/2016 4:31:52 PM
On 11/30/2016 07:00 AM, Ben Bacarisse wrote:
> BartC <bc@freeuk.com> writes:
> <snip>
>>> On 11/30/16 12:17 AM, BartC wrote:
> <snip>
>>>>     int A;
>>>>     A = (&A)[123456789];
> <snip>
> 
>> C has effectively decided that any individual variable, any element in
>> a mixed-type struct, a single scalar parameter, a blind pointer to
>> anything at all, can be considered to be a 1-element array, and
>> therefore fair game to be used as though it /is/ an array, whether it
>> is in actuality or not.
> 
> No, C hasn't decided that.  It's decided that it is most definitely not
> "fair game" to do that.  Making something undefined is the opposite of
> that.

The expression above does have undefined behavior, but so would the
1-element array version:

    int B[1];
    (&B[0])[1234567890];

so that's not something that distinguishes a single object from a
1-element array.

As far as I can see, 6.5.6p7 has precisely the effect he's described in
such disparaging terms:

"For the purposes of these operators, a pointer to an object that is not
an element of an array behaves the same as a pointer to the first
element of an array of length one with the type of the object as its
element type."

Is there anything you can do with &B[0] that has defined behavior, that
has undefined behavior if you try to do the same thing with &A?

> (C does not require an implementation to enforce those rules, but you
> can get gcc to give you a run-time check using its sanitize options.)
> 
>> And because the rules of the language allow it [...]
> 
> What I think you mean is that the syntax allows it.

6.5.6p7 is in the semantics section, not the syntax section.

0
James
11/30/2016 4:32:53 PM
On 30/11/2016 14:36, asetofsymbols@gmail.com wrote:
> Bart wrote:
>>   *(P+i);
>
> Because index is one important
> operation why not just P.i ?
>
> one char less than P[i]

P.i already has another meaning that would clash. But a dedicated syntax 
for *(P+i), if P[i] was off-limits, might be P.(i) (or P.[i] but I would 
reserve that for something else). This allows for P.(i+1) which wouldn't 
work as P.i+1.

The one- or two- character difference between these forms is not 
significant, when you allow for more sensible identifiers and the usual 
punctuation that C needs anyway.

-- 
Bartc
0
BartC
11/30/2016 4:41:59 PM
On Wednesday, November 30, 2016 at 8:22:55 AM UTC-6, Bart wrote:
> There /is/ a hole where value array types would go. C decided, instead 
> of leaving the hole, so as to trap errors when someone attempts an 
> unsupported operation [/and to allow for the possibility of value array 
> ops in the future/], to fill the hole with 'pointer to array first element'.

An interesting semantic feature of C is that given the declarations:

    someType someThing = {1,2};
    void func(someType it);

the function call "func(someThing);" can pass someThing with either
value or reference semantics, depending upon what "someType" is.  I'm
not sure that the benefits of that feature outweigh the potential for
confusion if code tries to exploit it, but for better or worse some
code does rely upon it.
0
supercat
11/30/2016 5:03:31 PM
On 11/26/2016 3:49 PM, fir wrote:
> classical question
>
> maybe someone know that (i realized it clearly quite recently) that there
> is no max (nor min) anywhere in c standard library  (afaik there are fmax
> fmin for floats in math.h )
>
> this is weird, maybe max from ibrary would be slow (though compilers have
> intrisinc for that so it could be made fast) and maybe this is a reason
> but absence of min max got weirld result that it must be defined in each
> project (no big worry as it is one line but still weird)

Perhaps because there would need to be different versions for char, signed 
char, unsigned char, int, unsigned int, long, unsigned long, float, double, 
and so on?
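
One way to paper over that nowadays is C11's _Generic; a minimal sketch
covering just a few of those types (the helper names are made up):

    #include <stdio.h>

    static inline int      max_i(int a, int b)          { return a > b ? a : b; }
    static inline unsigned max_u(unsigned a, unsigned b) { return a > b ? a : b; }
    static inline long     max_l(long a, long b)         { return a > b ? a : b; }
    static inline double   max_d(double a, double b)     { return a > b ? a : b; }

    #define MAX(a, b) _Generic((a) + (b), \
        int:      max_i,                  \
        unsigned: max_u,                  \
        long:     max_l,                  \
        double:   max_d)((a), (b))

    int main(void)
    {
        printf("%d %.1f\n", MAX(3, 7), MAX(2.5, -1.0));
        return 0;
    }

Since the arguments are only evaluated once, in the selected function
call, the usual MAX(a[++i],b) macro pitfall goes away as well.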

-- 
Kenneth Brody

0
Ken
11/30/2016 6:15:40 PM
James Kuyper <jameskuyper@verizon.net> writes:

> On 11/30/2016 07:00 AM, Ben Bacarisse wrote:
>> BartC <bc@freeuk.com> writes:
>> <snip>
>>>> On 11/30/16 12:17 AM, BartC wrote:
>> <snip>
>>>>>     int A;
>>>>>     A = (&A)[123456789];
>> <snip>
>> 
>>> C has effectively decided that any individual variable, any element in
>>> a mixed-type struct, a single scalar parameter, a blind pointer to
>>> anything at all, can be considered to be a 1-element array, and
>>> therefore fair game to be used as though it /is/ an array, whether it
>>> is in actuality or not.
>> 
>> No, C hasn't decided that.  It's decided that it is most definitely not
>> "fair game" to do that.  Making something undefined is the opposite of
>> that.
>
> The expression above does have undefined behavior, but so would the
> 1-element array version:
>
>     int B[1];
>     (&B[0])[1234567890];
>
> so that's not something that distinguishes a single object from a
> 1-element array.

Sure.  But I don't think BartC's objection is to this equivalence
between plain objects (so to speak) and arrays of length 1.  If it were,
he'd have used (&A)[0] as the example to mock.  He's complaining that
you can arbitrarily index &A.  (Previous posts on this topic used
(&A)[i] as a general case.)

<snip>
>>> And because the rules of the language allow it [...]
>> 
>> What I think you mean is that the syntax allows it.
>
> 6.5.6p7 is in the semantics section, not the syntax section.

I think BartC wants a syntax that distinguishes between pointers and
arrays, or, at the very least, between pointers that can be indexed and
those that can only be dereferenced (with arithmetic on them being
prohibited).

He complains that there is no peep from the compiler about the code
above and concludes from that that C permits it.  All C permits is the
syntax of the code he wrote above.  The semantics of what he wrote (with
1234567890 as the index) is not permitted.

-- 
Ben.
0
Ben
11/30/2016 7:25:05 PM
On 30/11/2016 19:25, Ben Bacarisse wrote:

>>>>> On 11/30/16 12:17 AM, BartC wrote:
>>>>>>     int A;
>>>>>>     A = (&A)[123456789];

> I think BartC wants a syntax that distinguishes between pointers and
> arrays,

Syntax on its own won't know what's an array and what's a pointer.

The restriction I would have liked (or would simply have liked others to 
acknowledge since its too late for the language) was that A[i] is only 
allowed when A is of type T[] (ie. array of T).

And that *A or *(A+i) is only allowed when A is of type T*

> or, at the very least, between pointers that can be indexed and
> those that can only be dereferenced (with arithmetic on them being
> prohibited).

That was touched on in some thread or other (having two varieties of 
pointers) but I don't think it's necessary to go that far (although I've 
never created such an implementation to see how well it might work).

I think that with the segregation of arrays, that might be enough 
to remove many of the conflicts I'm not happy about. People will always want 
to do ++P and *(P+offset) on pointers, but it will be obvious that this 
is messing about with pointers and that some insalubrious goings-on 
might be in hand! So code will be scrutinised more carefully.

So, with just one variety of pointer, this is still possible (in my 
language too):

     int A;
     int* P=&A;
     *(P+123456789);

but not:

     P[123456789];

  > He complains that there is no peep from the compiler about the code
> above and concludes from that that C permits it.  All C permits is the
> syntax of the code he wrote above.  The semantics of what he wrote (with
> 1234567890 as the index) is not permitted.

I think that in the form of P[123...] it has to be permitted, as P might 
well be a char* pointing to a 1.5GB block of memory. The purpose of the 
(&A)[123...] example was to show more clearly how nonsensical it can be.

-- 
Bartc

0
BartC
11/30/2016 8:45:51 PM
BartC <bc@freeuk.com> writes:
> On 30/11/2016 19:25, Ben Bacarisse wrote:
>>>>>> On 11/30/16 12:17 AM, BartC wrote:
>>>>>>>     int A;
>>>>>>>     A = (&A)[123456789];
>
>> I think BartC wants a syntax that distinguishes between pointers and
>> arrays,
>
> Syntax on its own won't know what's an array and what's a pointer.
>
> The restriction I would have liked (or would simply have liked others to 
> acknowledge since its too late for the language) was that A[i] is only 
> allowed when A is of type T[] (ie. array of T).
[...]

Can you clarify what you mean by "would simply have liked others
to acknowledge"?  What form would such an acknowledgement take?

I've repeatedly acknowledged that you want such a restriction.
What more do you want?

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/30/2016 8:52:34 PM
On 11/30/2016 02:25 PM, Ben Bacarisse wrote:
> James Kuyper <jameskuyper@verizon.net> writes:
> 
>> On 11/30/2016 07:00 AM, Ben Bacarisse wrote:
>>> BartC <bc@freeuk.com> writes:
>>> <snip>
>>>>> On 11/30/16 12:17 AM, BartC wrote:
>>> <snip>
>>>>>>     int A;
>>>>>>     A = (&A)[123456789];
>>> <snip>
>>>
>>>> C has effectively decided that any individual variable, any element in
>>>> a mixed-type struct, a single scalar parameter, a blind pointer to
>>>> anything at all, can be considered to be a 1-element array, and
>>>> therefore fair game to be used as though it /is/ an array, whether it
>>>> is in actuality or not.
>>>
>>> No, C hasn't decided that.  It's decided that it is most definitely not
>>> "fair game" to do that.  Making something undefined is the opposite of
>>> that.
>>
>> The expression above does have undefined behavior, but so would the
>> 1-element array version:
>>
>>     int B[1];
>>     (&B[0])[1234567890];
>>
>> so that's not something that distinguishes a single object from a
>> 1-element array.
> 
> Sure.  But I don't think BartC's objection is to this equivalence
> between plain objects (so to speak) and arrays of length 1.  If it were,
> he's have used (&A)[0] as the example to mock.  He's complaining that
> you can arbitrarily index &A.  (Previous posts on this topic used
> (&A)[i] as a general case.)

I don't think he's any happier about indexing it with 0 than with any
other number.
While his example involves out-of-bounds access, the statement he made
about "what C has decided" makes no mention of that issue. It merely
paraphrases 6.5.6p7. I was objecting to your claim that C has NOT
decided it.

> <snip>
>>>> And because the rules of the language allow it [...]
>>>
>>> What I think you mean is that the syntax allows it.
>>
>> 6.5.6p7 is in the semantics section, not the syntax section.
> 
> I think BartC wants a syntax that distinguishes between pointers and
> arrays, or, at the very least, between pointers that can be indexed and
> those that can only be dereferenced (with arithmetic on them being
> prohibited).

Yes - and the equivalence of a pointer to a single object and a pointer
to the first element of a 1-element array is something he would
therefore be adamantly opposed to. As far as he's concerned, a routine
that handles n-element arrays for arbitrary values of n must NOT be able
to handle single objects that are not part of an array - that would
require a different declaration for the routine.
0
James
11/30/2016 9:52:45 PM
On Wednesday, November 30, 2016 at 2:46:08 PM UTC-6, Bart wrote:
> I think that with the segregation of arrays, then that might be enough 
> to remove many of conflicts I'm not happy about. People will always want 
> to do ++P and *(P+offset) on pointers, but it will be obvious that this 
> is messing about with pointers and that some insalubrious goings-on 
> might be in hand! So code will be scrutinised more carefully.

From an optimization perspective, it would be useful to have the language
distinguish between accessing an element which is x from the start of an
array, versus accessing an object which is displaced x from the address
of another object (which might not be the first one in an array).  For
the kind of compilers Dennis Ritchie was anticipating, the two operations
would be equivalent, but given code like:

    void foo(int array1[], int array2[], int start, int end)
    {
      for (int i=start; i<end; i++)
        array1[i] = array2[i] * 100;
    }

it would be helpful for a compiler to know that the operations on all of
the array elements may be performed independently in any order, which will
be possible if the regions accessed through array1 and array2 are either
identical or disjoint, but will not be possible if they partially overlap.
If the array size were known at compile time and the function were written
as:

    void foo(int (*array1)[1000], int (*array2)[1000], int start, int end)
    {
      for (int i=start; i<end; i++)
        (*array1)[i] = (*array2)[i] * 100;
    }

I think a compiler would be entitled to make that assumption, but I don't
think there's any equivalent form for the scenario where the array size
isn't known.  It might be possible using variable-length array types, but
that would require that "foo" be passed the array size even though nothing
in the code would actually make use of the value passed.
0
supercat
11/30/2016 10:17:49 PM
On 11/30/2016 9:22 AM, BartC wrote:
> On 30/11/2016 14:07, Jerry Stuckle wrote:
>> On 11/30/2016 6:25 AM, BartC wrote:
> 
>>> A[i]-style indexing can only be applied when A is an array; end of.
> 
>> There is no hole in the type system.
> 
> There /is/ a hole where value array types would go. C decided, instead
> of leaving the hole, so as to trap errors when someone attempts an
> unsupported operation [/and to allow for the possibility of value array
> ops in the future/], to fill the hole with 'pointer to array first
> element'.
>

YOU consider it a hole. No one else does.  Who do you think is more
likely to be wrong?  Hint: it isn't everybody else.

> I think that was a bad decision. And leads to inconsistencies such as
> 'int A[10]' meaning different things in different contexts.
> 

It doesn't make a difference what you think.   That's the way it is.

>> The name of an array is a pointer
>> to the first character.  Period.  They are not two different types, like
>> they are in most other languages.
> 
> I wonder why 'most other languages' do things differently!
> 

Other languages hold you by the hand and do lots of runtime checks.  This
leads to code bloat and slow performance.  C has been very popular due
to the size and speed of its generated code.

>> And since your alternate languages can do pretty much everything that C
>> can do, I suggest you use them.
> 
> I do. When I use C now, it's usually for intermediate code. Or to figure
> out how interfaces work (at the minute, I'm about to delve into the
> mysteries of 'dirent.h' in order to derive the binary interface I need).
> 

Well, maybe you should use your other languages for your intermediate
code also, then.  And you can also use assembler.

>>  Your would rather bitch about C than
>> learn how to do it properly.  Had you been in one of my classes, I would
>> have kicked your arse out
> 
> You only want students who take everything at face value and don't ask
> awkward questions?
> 

I encouraged them to ask any questions - and I got a lot.  But I never
got one as stupid as what you're asking.

> BTW do /you/ find everything in C (or in any language) 100% perfect? If
> not, how would relish being 'kicked out' of classes if you dared to
> voice your opinions?
> 

I never said it was perfect.  But it does well for what I want.  I don't
waste time and energy complaining about something that works quite well.

> Finally, how do languages ever evolve if nobody ever objects to anything
> in them?
> 

So, get on the C committee and propose changes.  See if anyone else
agrees with you.  Hint: they won't.

This newsgroup isn't monitored by anyone on the C committee, AFAIK - at
least I've never seen any names I recognize post here, and the couple of
guys I do know don't read it.  All you are doing is wasting everyone
else's time with your constant bitching (and that's exactly what it is).

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
11/30/2016 10:25:27 PM
On 30/11/2016 22:25, Jerry Stuckle wrote:
> On 11/30/2016 9:22 AM, BartC wrote:
>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>> On 11/30/2016 6:25 AM, BartC wrote:
>>
>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>
>>> There is no hole in the type system.
>>
>> There /is/ a hole where value array types would go. C decided, instead
>> of leaving the hole, so as to trap errors when someone attempts an
>> unsupported operation [/and to allow for the possibility of value array
>> ops in the future/], to fill the hole with 'pointer to array first
>> element'.
>
> YOU consider it a hole. No one else does.  Who do you think is more
> likely to be wrong?  Hint: it isn't everybody else.
	
Have you got an opinion of your own, or do you have to depend on someone 
else's?

Take this pointer to pointer to array[4] of int:

   int (**P)[4];

Here's how it behaves without the 'hole':

   P;          // type: ptr to ptr to array[4]int
   *P          // type:        ptr to array[4]int
   **P         // type:               array[4]int

Notice the regular progression as more derefs are applied. Each ptr of 
the original type is gradually struck through as each deref is done.

And here's how it behaves in C:

   P;          // type: ptr to ptr to array[4]int
   *P          // type:        ptr to array[4]int
   **P         // type:                ptr to int

Notice the discontinuity in the third line; what's happened to the 
array, and why are there three lots of pointers when the original type 
had two? (I don't want any answers, just showing what happens.)

With the top version, also, if you subsequently want to index **P, it 
still has the array[4]int identity, and bounds checking and other 
goodies are on the cards. With the second version as it works in C, the 
array identity and the bounds have been lost. The int* is now 
indistinguishable from any other.

Here is a pointer to pointer to pointer to int:

   int ***Q;

And this is how it behaves in C:

   Q;          // type: ptr to ptr to ptr to int
   *Q          // type:        ptr to ptr to int
   **Q         // type:               ptr to int

Notice the last line for **Q has exactly the same type as the last **P 
line above for C. Surely P and Q must therefore have the same type? Um, 
no, they don't!


-- 
Bartc
0
BartC
11/30/2016 11:29:42 PM
On 30/11/2016 17:03, supercat@casperkitty.com wrote:
> On Wednesday, November 30, 2016 at 8:22:55 AM UTC-6, Bart wrote:
>> There /is/ a hole where value array types would go. C decided, instead
>> of leaving the hole, so as to trap errors when someone attempts an
>> unsupported operation [/and to allow for the possibility of value array
>> ops in the future/], to fill the hole with 'pointer to array first element'.
>
> An interesting semantic feature of C is that given the declarations:
>
>     someType someThing = {1,2};
>     void func(someType it);
>
> the function call "func(someThing);" can pass someThing with either
> value or reference semantics, depending upon what "someType" is.  I'm
> not sure that the benefits of that feature outweigh the potential for
> confusion if code tries to exploit it, but for better or worse some
> code does rely upon it.

Here's an example. someType1/2 are both opaque types as far as main() is 
concerned. But the behaviour is different depending on whether they are 
defined as a two-element struct, or two-element array:

#include <stdio.h>

typedef struct {int x, y;} someType1;
typedef int someType2[2];

void func1(someType1 P) {P.x = 9999;}
void func2(someType2 P) {P[0] = 9999;}

int main(void)
{
   someType1 A = {10,20};
   someType2 B = {10,20};

   func1(A);
   func2(B);

   printf("A = {%d,%d}\n", A.x, A.y);
   printf("B = {%d,%d}\n", B[0], B[1]);
}

Output:

  A = {10,20}
  B = {9999,20}

-- 
Bartc
0
BartC
11/30/2016 11:38:27 PM
BartC <bc@freeuk.com> writes:
[...]
> Here's an example. someType1/2 are both opaque types as far as main() is 
> concerned. But the behaviour is different depending on whether they are 
> defined as a two-element struct, or two-element array:

Then they're not opaque, are they?

[snip]

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
11/30/2016 11:49:50 PM
On 30/11/2016 23:49, Keith Thompson wrote:
> BartC <bc@freeuk.com> writes:
> [...]
>> Here's an example. someType1/2 are both opaque types as far as main() is
>> concerned. But the behaviour is different depending on whether they are
>> defined as a two-element struct, or two-element array:
>
> Then they're not opaque, are they?

main() doesn't know whether each is struct or array. (Except in the 
printf calls for this test, but that can also be provided as 
print_someType1 and so on.)

If such a type provided by a library changed implementation between 
array and struct, then code will need recompiling, but may also behave 
differently.

This is a consequence of array types hitting that 'hole' in the type 
system and needing to change from pass-by-value, to 
pass-by-value-of-pointer-to-first-element-of-array. Or in reverse if 
moving away from array types.

Everyone knows this, but Jerry Stuckle is insisting either that there is 
no such discontinuity in the type system, or is saying that it is 
inconsequential. Although more than likely he just likes to have the 
last word. (A bit like me then...)

-- 
Bartc
0
BartC
12/1/2016 12:10:23 AM
supercat@casperkitty.com writes:
> On Thursday, December 1, 2016 at 3:14:24 AM UTC-6, David Brown wrote:
>> Yes, if only C had a way to write something like:
>> 
>> void foo(int array1[restrict], int array2[restrict], int start, int end)
>> 
>> then the optimiser would be free to assume that array1 and array2 don't
>> overlap!
>> 
>> Oh wait, C /does/ have a way to write that.
>
> A search of C99 "restrict" revealed an example of that syntax, and a search
> of N1570 finds the same example, but I couldn't find anyplace in either
> standard that actually described its meaning.  What am I missing?

N1570 6.7.3p8 discusses it briefly.

N1570 6.7.3.1 is the "Formal definition of restrict".

Does your PDF viewer not have a search function?

> Also, there's no reason such meaning should only be available to parameters
> but I don't think that syntax would be allowable in any other context.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/1/2016 1:01:01 AM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 11/30/2016 9:22 AM, BartC wrote:
>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>> On 11/30/2016 6:25 AM, BartC wrote:
>> 
>>>> A[i]-style indexing can only be applied when A is an array; end of.
>> 
>>> There is no hole in the type system.
>> 
>> There /is/ a hole where value array types would go. C decided, instead
>> of leaving the hole, so as to trap errors when someone attempts an
>> unsupported operation [/and to allow for the possibility of value array
>> ops in the future/], to fill the hole with 'pointer to array first
>> element'.
>>
>
> YOU consider it a hole. No one else does.

How can you possibly know that?  I can say for sure you can't, because I
also consider it a hole, and I'd go so far as to say that it's the
majority opinion amongst the people I've talked to about C's type
system.

In the early days of C, everything that could be assigned and passed
into and out of a function would, typically, fit in a machine register.
But once structs could be assigned, passed and returned, the situation
became more obviously asymmetric with arrays, uniquely amongst the
object types, being the only ones that can't be assigned, passed or
returned.

<snip>
-- 
Ben.
0
Ben
12/1/2016 1:18:26 AM
BartC <bc@freeuk.com> writes:
> On 30/11/2016 23:49, Keith Thompson wrote:
>> BartC <bc@freeuk.com> writes:
>> [...]
>>> Here's an example. someType1/2 are both opaque types as far as main() is
>>> concerned. But the behaviour is different depending on whether they are
>>> defined as a two-element struct, or two-element array:
>>
>> Then they're not opaque, are they?
>
> main() doesn't know whether each is struct or array. (Except in the 
> printf calls for this test, but that can also be provided as 
> print_someType1 and so on.)

Sure it does.  The declarations of both types are visible from main.

> If such a type provided by a library changed implementation between 
> array and struct, then code will need recompiling, but may also behave 
> differently.

Yes, if you change a type, the code will behave differently.

> This is a consequence of array types hitting that 'hole' in the type 
> system and needing to change from pass-by-value, to 
> pass-by-value-of-pointer-to-first-element-of-array. Or in reverse if 
> moving away from array types.

More accurately, arrays cannot be passed as arguments (though it's easy
to achieve the effect of passing them by reference).
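
For example, a small sketch of the two usual idioms (the function names
are made up):

    #include <stdio.h>

    /* The common way: the argument decays to a pointer to the first element. */
    static void fill(int *p, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            p[i] = (int)i;
    }

    /* Keeping the array type: a pointer to the whole array keeps the bound. */
    static void fill10(int (*pa)[10])
    {
        for (size_t i = 0; i < 10; i++)
            (*pa)[i] = (int)i;
    }

    int main(void)
    {
        int a[10];
        fill(a, 10);    /* effectively by reference to the elements */
        fill10(&a);     /* address of the array itself */
        printf("%d %d\n", a[0], a[9]);
        return 0;
    }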

[...]

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/1/2016 1:22:38 AM
On 01/12/2016 01:22, Keith Thompson wrote:
> BartC <bc@freeuk.com> writes:
>> On 30/11/2016 23:49, Keith Thompson wrote:
>>> BartC <bc@freeuk.com> writes:
>>> [...]
>>>> Here's an example. someType1/2 are both opaque types as far as main() is
>>>> concerned. But the behaviour is different depending on whether they are
>>>> defined as a two-element struct, or two-element array:
>>>
>>> Then they're not opaque, are they?
>>
>> main() doesn't know whether each is struct or array. (Except in the
>> printf calls for this test, but that can also be provided as
>> print_someType1 and so on.)
>
> Sure it does.  The declarations of both types are visible from main.

Then there can be no such thing as an opaque type, since you can always 
trawl through macros, typedefs and headers to find out what they are. 
And the compiler will of course know.

An opaque type is one where the user of that type should not need to know 
what it is, or have to change any user code if the implementation of 
the type changes.

-- 
bartc
0
BartC
12/1/2016 1:40:46 AM
BartC <bc@freeuk.com> writes:
> On 01/12/2016 01:22, Keith Thompson wrote:
>> BartC <bc@freeuk.com> writes:
>>> On 30/11/2016 23:49, Keith Thompson wrote:
>>>> BartC <bc@freeuk.com> writes:
>>>> [...]
>>>>> Here's an example. someType1/2 are both opaque types as far as main() is
>>>>> concerned. But the behaviour is different depending on whether they are
>>>>> defined as a two-element struct, or two-element array:
>>>>
>>>> Then they're not opaque, are they?
>>>
>>> main() doesn't know whether each is struct or array. (Except in the
>>> printf calls for this test, but that can also be provided as
>>> print_someType1 and so on.)
>>
>> Sure it does.  The declarations of both types are visible from main.
>
> Then there can be no such thing as an opaque type, since you can always 
> trawl through macros, typedefs and headers to find out what they are. 
> And the compiler will of course know.

You can define an incomplete struct type and use pointers to it.  The
full type is visible only in the .c file that implements the operations
on it.  Type FILE could be an example.
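
The usual shape of that pattern, as a sketch (the type and file names
are made up):

    /* counter.h -- users see only an incomplete type and pointers to it */
    typedef struct counter counter;
    counter *counter_new(void);
    void     counter_bump(counter *c);
    int      counter_value(const counter *c);
    void     counter_free(counter *c);

    /* counter.c -- the full definition is private to this file */
    #include <stdlib.h>
    struct counter { int n; };

    counter *counter_new(void)               { return calloc(1, sizeof(counter)); }
    void     counter_bump(counter *c)        { c->n++; }
    int      counter_value(const counter *c) { return c->n; }
    void     counter_free(counter *c)        { free(c); }

Callers only ever hold a counter *, much as they only ever hold a FILE *.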

> An opaque type is one where the use of that type should not need to know 
> what it is, or to have to change any user-code if the implementation of 
> the type changes.

Then it shouldn't be implemented as an array type.

You're absolutely right that implementing an opaque type as an array
type can cause problems (unless operations are limited to act only on
pointers to the type, as for FILE).  Your conclusion is that there's a
fatal flaw in the language.  Mine is that you shouldn't implement an
opaque type as an array type.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/1/2016 2:03:21 AM
max() = reverse(min())?
max() = min(-1)

I vaguely remember the sorting functions in C using (-1, 1) to 
specify the sort order.
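
If the idea is a comparator-driven max in the spirit of qsort's
negative/zero/positive convention, a sketch might look like this
(max_elem is a made-up helper, not anything in the standard library):

    #include <stdio.h>

    /* qsort-style comparator: negative, zero, or positive. */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* Largest element of any array, driven by a comparator.
       Negating the comparator's result would give the minimum instead. */
    static const void *max_elem(const void *base, size_t n, size_t size,
                                int (*cmp)(const void *, const void *))
    {
        const char *best = base, *p = base;
        for (size_t i = 1; i < n; i++) {
            p += size;
            if (cmp(p, best) > 0)
                best = p;
        }
        return best;
    }

    int main(void)
    {
        int v[] = {3, 9, 2, 7};
        printf("%d\n", *(const int *)max_elem(v, 4, sizeof v[0], cmp_int));
        return 0;
    }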
0
Mr
12/1/2016 4:33:54 AM
On 11/30/2016 6:29 PM, BartC wrote:
> On 30/11/2016 22:25, Jerry Stuckle wrote:
>> On 11/30/2016 9:22 AM, BartC wrote:
>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>
>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>
>>>> There is no hole in the type system.
>>>
>>> There /is/ a hole where value array types would go. C decided, instead
>>> of leaving the hole, so as to trap errors when someone attempts an
>>> unsupported operation [/and to allow for the possibility of value array
>>> ops in the future/], to fill the hole with 'pointer to array first
>>> element'.
>>
>> YOU consider it a hole. No one else does.  Who do you think is more
>> likely to be wrong?  Hint: it isn't everybody else.
>     
> Have you got an opinion of your own, or do you have to depend on someone
> else's?
>

I go by the standard.

> Take this pointer to pointer to array[4] of int:
> 
>   int (**P)[4];
> 
> Here's how it behaves without the 'hole':
> 
>   P;          // type: ptr to ptr to array[4]int
>   *P          // type:        ptr to array[4]int
>   **P         // type:               array[4]int
> 
> Notice the regular progression as more derefs are applied. Each ptr of
> the original type is gradually struck through as each deref is done.
> 

Incorrect.  In each case, it only points at an int (element 0 of the
array).  Whether there are more elements or not is immaterial to the
definition of P.

> And here's how it behaves in C:
> 
>   P;          // type: ptr to ptr to array[4]int
>   *P          // type:        ptr to array[4]int
>   **P         // type:                ptr to int
> 

Incorrect, as I indicated above.  They are all eventually pointing at a
single value of type int.

> Notice the discontinuity in the third line; what's happened to the
> array, and why are there three lots of pointers when the original type
> had two? (I don't want any answers, just showing what happens.)
> 

The discontinuity is between your ears.

> With the top version, also, if you subsequently want to index **P, it
> still has the array[4]int identity, and bounds checking and other
> goodies are on the cards. With the second version as it works in C, the
> array identity and the bounds have been lost. The int* is now
> indistinguishable from any other.
> 
> Here is a pointer to pointer to pointer to int:
> 
>   int ***Q;
> 
> And this is how it behaves in C:
> 
>   Q;          // type: ptr to ptr to ptr to int
>   *Q          // type:        ptr to ptr to int
>   **Q         // type:               ptr to int
> 
> Notice the last line for **Q has exactly the same type as the last **P
> line above for C. Surely P and Q must therefore have the same type? Um,
> no, they don't!
> 
> 

Which is exactly how P works.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/1/2016 4:55:29 AM
On 11/30/2016 8:18 PM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
> 
>> On 11/30/2016 9:22 AM, BartC wrote:
>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>
>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>
>>>> There is no hole in the type system.
>>>
>>> There /is/ a hole where value array types would go. C decided, instead
>>> of leaving the hole, so as to trap errors when someone attempts an
>>> unsupported operation [/and to allow for the possibility of value array
>>> ops in the future/], to fill the hole with 'pointer to array first
>>> element'.
>>>
>>
>> YOU consider it a hole. No one else does.
> 
> How can you possibly know that?  I can say for sure you can't, because I
> also consider it a hole, and I'd go so far as to say that it's the
> majority opinion amongst the people I've talked to about C's type
> system.
>

Let me correct myself.  No one who understands C thinks it is a hole.

> In the early days of C, everything that could be assigned and passed
> into and out of a function would, typically, fit in a machine register.
> But once structs could be assigned, passed and returned the situation
> became more obviously asymmetric with arrays, uniquely amongst the
> object types, being the only ones that can't be assigned, passed or
> returned.
> 
> <snip>
> 

Not in my experience.  But mine only goes back to 1984 or so.  We used a
lot of values which could not be passed in registers.  Those were
available in the first edition of K&R, also.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/1/2016 4:57:41 AM
On 01/12/16 00:29, BartC wrote:
> On 30/11/2016 22:25, Jerry Stuckle wrote:
>> On 11/30/2016 9:22 AM, BartC wrote:
>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>
>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>
>>>> There is no hole in the type system.
>>>
>>> There /is/ a hole where value array types would go. C decided, instead
>>> of leaving the hole, so as to trap errors when someone attempts an
>>> unsupported operation [/and to allow for the possibility of value array
>>> ops in the future/], to fill the hole with 'pointer to array first
>>> element'.
>>
>> YOU consider it a hole. No one else does.  Who do you think is more
>> likely to be wrong?  Hint: it isn't everybody else.
>     
> Have you got an opinion of your own, or do you have to depend on someone
> else's?
> 

Often Jerry is the one with the unique opinion contrary to everyone else
- but here he is entirely correct.

Basically, Bart, you seem to want C to be able to spot all logical
errors in code as syntax errors in the language.  But that is pure
wishful thinking - /no/ language does that (not even Ada, where it is
said that if you can get the code to compile, it is ready to ship).
Certainly it is true that C /allows/ more code to be syntactically valid
while clearly logically incorrect than many other languages.  But that
is the price for the efficiency of a low-level language.  If you want
arrays and pointers that are stricter, try a different programming
language or use C in a stricter manner.

It is true that C passes arrays as pointers (by reference) rather than
by value, and it is true that C syntax lets you take a pointer to a
single int and then use array indexing on that pointer.  It also lets
you write code like this:

int The_author_of_this_code_is_a_moron;
int integerSquareRoot(int x) {
	return 42;
}

Do you want the C language to be changed to disallow /all/ incorrect
code, or is it just some bits that you specially don't like?

The way to handle this is perfectly simple.  Learn how C works, then use
it correctly.  Don't write code that is clearly incorrect, even if it is
syntactically valid.  For someone who only uses C as a generated
intermediary language, you have absolutely /no/ excuse - you should use
only the subset of the language that you understand and can use correctly.


And for your reference, C has a perfectly good way of making arrays that
are passed by value.  You just embed them in a struct:


typedef struct {
	int data[4];
} intArray4;

intArray4 doubler(intArray4 xs) {
  intArray4 ys;
  for (int i = 0; i < 4; i++) {
    	ys.data[i] = xs.data[i] * 2;
  }
  return ys;
}

int foobar(void) {
  intArray4 xs = {{ 1, 2, 3, 4}};
  intArray4 ys = doubler(xs);
  return ys.data[0] + ys.data[3];
}





0
David
12/1/2016 9:05:22 AM
On 30/11/16 23:17, supercat@casperkitty.com wrote:
> On Wednesday, November 30, 2016 at 2:46:08 PM UTC-6, Bart wrote:
>> I think that with the segregation of arrays, then that might be enough 
>> to remove many of conflicts I'm not happy about. People will always want 
>> to do ++P and *(P+offset) on pointers, but it will be obvious that this 
>> is messing about with pointers and that some insalubrious goings-on 
>> might be in hand! So code will be scrutinised more carefully.
> 
> From an optimization perspective, it would be useful to have the language
> distinguish between accessing an element which is x from the start of an
> array, versus accessing an object which is displaced x from the address
> of another object (which might not be the first one in an array).  For
> the kind of compilers Dennis Ritchie was anticipating, the two operations
> would be equivalent, but given code like:
> 
>     void foo(int array1[], int array2[], int start, int end)
>     {
>       for (int i=start; i<end; i++)
>         array1[i] = array2[i] * 100;
>     }
> 
> it would be helpful for a compiler to know that the operations on all of
> the array elements may be performed independently in any order, which will
> be possible if the regions accessed through array1 and array2 are either
> identical or disjoint, but will not be possible if they partially overlap.
> If the array size were known at compile time and the function were written
> as:
> 
>     void foo(int (*array1)[1000], int (*array2)[1000], int start, int end)
>     {
>       for (int i=start; i<end; i++)
>         (*array1)[i] = (*array2)[i] * 100;
>     }
> 
> I think a compiler would be entitled to make that assumption, but I don't
> think there's any equivalent form for the scenario where the array size
> isn't known.  It might be possible using variable-length array types, but
> that would require that "foo" be passed the array size even though nothing
> in the code would actually make use of the value passed.
> 

Yes, if only C had a way to write something like:

void foo(int array1[restrict], int array2[restrict], int start, int end)

then the optimiser would be free to assume that array1 and array2 don't
overlap!

Oh wait, C /does/ have a way to write that.
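
For reference, a compilable sketch of both spellings, reusing the loop
body from the earlier example (foo2 is just a second name so both fit in
one file):

    void foo(int array1[restrict], int array2[restrict], int start, int end)
    {
        for (int i = start; i < end; i++)
            array1[i] = array2[i] * 100;
    }

    /* Equivalent spelling with the qualifier on the pointer declarators. */
    void foo2(int * restrict array1, int * restrict array2, int start, int end)
    {
        for (int i = start; i < end; i++)
            array1[i] = array2[i] * 100;
    }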

0
David
12/1/2016 9:14:08 AM
David Brown <david.brown@hesbynett.no> writes:

> On 01/12/16 00:29, BartC wrote:
>> On 30/11/2016 22:25, Jerry Stuckle wrote:
>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>
>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>
>>>>> There is no hole in the type system.
>>>>
>>>> There /is/ a hole where value array types would go. C decided, instead
>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>> unsupported operation [/and to allow for the possibility of value array
>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>> element'.
>>>
>>> YOU consider it a hole. No one else does.  Who do you think is more
>>> likely to be wrong?  Hint: it isn't everybody else.
>>     
>> Have you got an opinion of your own, or do you have to depend on someone
>> else's?
>
> Often Jerry is the one with the unique opinion contrary to everyone else
> - but here he is entirely correct.

I'm very surprised to see you write that.  C's array types stand out
amongst all its object types because they can't be assigned nor passed
to or returned from functions.  That seems to be the hole BartC is
referring to and, until I saw this post, I'd assumed it was not a
contentious statement.

<snip>
-- 
Ben.
0
Ben
12/1/2016 11:20:09 AM
On 01/12/2016 09:05, David Brown wrote:
> On 01/12/16 00:29, BartC wrote:

> Basically, Bart, you seem to want C to be able to spot all logical
> errors in code as syntax errors in the language.

I'm only talking here about arrays and pointers. There would be a 
considerable number of benefits in strictly enforcing their separation
...

> The way to handle this is perfectly simple.  Learn how C works, then use
> it correctly.  Don't write code that is clearly incorrect

... such as being able to more easily suss out what someone else's code is 
doing. (E.g. what does that 'int*' parameter actually represent?)

 > Do you want the C language to be changed to disallow /all/ incorrect
 > code, or is it just some bits that you specially don't like?

The discussion about changing C is hypothetical (that argument has long 
been lost). I'm just astonished that so many can't see the issues; from 
'C is how it is; <shrug>' to those who think such a crazy type system is 
a positive advantage. It seems I'm the only one who can see the Emperor 
isn't wearing any clothes!

> , even if it is
> syntactically valid.  For someone who only uses C as a generated
> intermediary language, you have absolutely /no/ excuse - you should use
> only the subset of the language that you understand and can use correctly.
>
>
> And for your reference, C has a perfectly good way of making arrays that
> are passed by value.  You just embed them in a struct:

Are you serious? That's the sort of subterfuge you'd have to resort to 
when generating C code from another language. In fact, what I'd have to 
do if I was actually interested in passing arrays by value; I'm not. 
(I've always been able to do that, but I never used the feature.)

But it /was/ important that value-array expressions were part of the 
type system as they made it work more logically and orthogonally. And 
allowed assignment and equality compare.

> intArray4 doubler(intArray4 xs) {
>   intArray4 ys;
>   for (int i = 0; i < 4; i++) {
>     	ys.data[i] = xs.data[i] * 2;
>   }
>   return ys;
> }
>
> int foobar(void) {
>   intArray4 xs = {{ 1, 2, 3, 4}};
>   intArray4 ys = doubler(xs);
>   return ys.data[0] + ys.data[3];
> }

See? xs and ys are clearly structs, not arrays. Fine as intermediate C 
where you can't see the joins; but not as actual C.

(BTW, here's how it's done properly. I've had to resurrect an ancient 
compiler which fully implemented value arrays:

function doubler([4]int xs)[4]int=
     int i
     [4]int ys

     for i:=1 to xs.upb do
     	ys[i]:=xs[i]*2
     od
     return ys
end

function foobar:int=
     [4]int xs=(10,20,30,40)
     [4]int ys
     ys := doubler(xs)
     return ys[1]+ys[4]
end

proc start=
     println foobar()
end

Output:

C:\archive\qc>a
  100

I could also have used a type where the length is not hardwired into the 
name:

  type intarray = [4]int

from which I can still extract the upper bound as 'xs.upb' without 
needing to know the details of your intArray4 and having to write sizeof 
xs.data/sizeof xs.data[0].

</pldesign101>)

-- 
Bartc

0
BartC
12/1/2016 11:35:01 AM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 11/30/2016 8:18 PM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> 
>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>
>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>
>>>>> There is no hole in the type system.
>>>>
>>>> There /is/ a hole where value array types would go. C decided, instead
>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>> unsupported operation [/and to allow for the possibility of value array
>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>> element'.
>>>>
>>>
>>> YOU consider it a hole. No one else does.
>> 
>> How can you possibly know that?  I can say for sure you can't, because I
>> also consider it a hole, and I'd go so far as to say that it's the
>> majority opinion amongst the people I've talked to about C's type
>> system.
>
> Let me correct myself.  No one who understands C thinks it is a hole.

I'd ask a normal person to explain what's lacking in my understanding of
C that makes me see arrays as unique in C's type system, but experience
suggests you are probably only one post away from starting the insults,
so I don't hold out much hope for a civilised exchange of views.

>> In the early days of C, everything that could be assigned and passed
>> into and out of a function would, typically, fit in a machine register.
>> But once structs could be assigned, passed and returned the situation
>> became more obviously asymmetric with arrays, uniquely amongst the
>> object types, being the only ones that can't be assigned, passed or
>> returned.
>
> Not in my experience.  But mine only goes back to 1984 or so.

I should have been more clear about what I meant by early C.  1984 is
quite late in the sense that C was mature by then.  Not only is it post
K&R but, crucially, structs had become nearly first-class citizens of
the type system (only lacking a literal denotation).  Studying early C
shows the thinking behind its design.

<snip>
-- 
Ben.
0
Ben
12/1/2016 11:38:25 AM
On 01/12/2016 11:20, Ben Bacarisse wrote:
> David Brown <david.brown@hesbynett.no> writes:
>
>> On 01/12/16 00:29, BartC wrote:

>>> Have you got an opinion of your own, or do you have to depend on someone
>>> else's?
>>
>> Often Jerry is the one with the unique opinion contrary to everyone else
>> - but here he is entirely correct.
>
> I'm very surprised to see you write that.  C's array types stand out
> amongst all it objects types because they can't be assigned nor passed
> to or returned from functions.

Or compared (for equality at least). Absence of padding bytes means 
there are actually fewer issues with doing that than for structs. For 
fixed-length arrays anyway.
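
For what it's worth, the effect can be approximated today with the mem*
functions, at least for element types without padding bits (a sketch):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int a[10] = {1, 2, 3};
        int b[10] = {1, 2, 3};

        memcpy(a, b, sizeof a);             /* stands in for A = B  */

        if (memcmp(a, b, sizeof a) == 0)    /* stands in for A == B */
            puts("equal");

        return 0;
    }

It compares representations rather than values, which is fine for int
arrays but dubious for structs with padding.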

-- 
Bartc
0
BartC
12/1/2016 11:41:51 AM
On 01/12/16 12:20, Ben Bacarisse wrote:
> David Brown <david.brown@hesbynett.no> writes:
> 
>> On 01/12/16 00:29, BartC wrote:
>>> On 30/11/2016 22:25, Jerry Stuckle wrote:
>>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>>
>>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>>
>>>>>> There is no hole in the type system.
>>>>>
>>>>> There /is/ a hole where value array types would go. C decided, instead
>>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>>> unsupported operation [/and to allow for the possibility of value array
>>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>>> element'.
>>>>
>>>> YOU consider it a hole. No one else does.  Who do you think is more
>>>> likely to be wrong?  Hint: it isn't everybody else.
>>>     
>>> Have you got an opinion of your own, or do you have to depend on someone
>>> else's?
>>
>> Often Jerry is the one with the unique opinion contrary to everyone else
>> - but here he is entirely correct.
> 
> I'm very surprised to see you write that.  C's array types stand out
> amongst all its object types because they can't be assigned nor passed
> to or returned from functions.  That seems to be the hole BartC is
> referring to and, until I saw this post, I'd assumed it was not a
> contentious statement.
> 

What I really meant is that it is not a problem in C.  Yes, C array
types are a bit different from other types - but I don't see it as a
"hole" or a problem here.  It is just the way C works.  There are many
aspects of C and C types that are inconsistent (such as Supercat's
favourite, the way unsigned short types can get promoted to signed ints
leading to undefined behaviour).  But the way arrays work in C is not a
"hole in the type system" - it is just a little odd.


0
David
12/1/2016 12:48:11 PM
On 01/12/16 12:35, BartC wrote:
> On 01/12/2016 09:05, David Brown wrote:
>> On 01/12/16 00:29, BartC wrote:
> 
>> Basically, Bart, you seem to want C to be able to spot all logical
>> errors in code as syntax errors in the language.
> 
> I'm only talking here about arrays and pointers. There would be a
> considerable number of benefits in strictly enforcing their separation
> ...
> 
>> The way to handle this is perfectly simple.  Learn how C works, then use
>> it correctly.  Don't write code that is clearly incorrect
> 
> ... such as being able to more easily suss out how what else's code is
> doing. (Eg. what does that 'int*' parameter actually represent?)
> 
>> Do you want the C language to be changed to disallow /all/ incorrect
>> code, or is it just some bits that you specially don't like?
> 
> The discussion about changing C is hypothetical (that argument has long
> been lost). I'm just astonished that so many can't see the issues; from
> 'C is how it is; <shrug>' to those who think such a crazy type system is
> a positive advantage. It seems I'm the only one who can see the Emperor
> isn't wearing any clothes!
> 

First off, C /is/ the way it is - and no amount of moaning will change
that.  So there is nothing wrong with making that clear.  If you don't
like a feature of C, don't use it.  If you want different features, pick
a different language.  C++ is not far away, and it has std::array types.

Secondly, I haven't noticed many people saying that the way C supports
arrays is great - merely that it is not a problem.  It is designed to be
efficient and easy to implement - a system that allows arrays of
arbitrary lengths to be passed by value is going to involve considerably
bigger and slower code.  It would not be unreasonable to allow it, and
insist that to pass arrays by reference you have to take addresses just
as you do for other types.  I guess the C language designers simply
picked a system that was least effort for the implementation and least
effort for programmers in the common case.

> , even if it is
>> syntactically valid.  For someone who only uses C as a generated
>> intermediary language, you have absolutely /no/ excuse - you should use
>> only the subset of the language that you understand and can use
>> correctly.
>>
>>
>> And for your reference, C has a perfectly good way of making arrays that
>> are passed by value.  You just embed them in a struct:
> 
> Are you serious? That's the sort of subterfuge you'd have to resort to
> when generating C code from another language.

You /are/ generating C code from another language - there is absolutely
no reason not to use such constructs!

But yes, I am serious - this is a perfectly good way to get arrays that
act as "normal" types.  I use it regularly.  Wrapping things in a struct
is a standard C way to make special types - it is not uncommon to have
structs that contain nothing but a single "int" field, simply to make
them incompatible with normal int's in order to make programming
mistakes less likely.
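
For instance (an illustrative sketch; the type names are invented):

    /* Distinct wrapper types: the compiler rejects mixing them up. */
    typedef struct { int value; } Celsius;
    typedef struct { int value; } Fahrenheit;

    static Celsius c_from_f(Fahrenheit f)
    {
        Celsius c = { (f.value - 32) * 5 / 9 };
        return c;
    }

    int main(void)
    {
        Fahrenheit f = { 212 };
        Celsius c = c_from_f(f);
        /* c_from_f(c);  -- constraint violation: wrong wrapper type */
        return c.value == 100 ? 0 : 1;
    }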

>  In fact, what I'd have to
> do if I was actually interested in passing arrays by value; I'm not.
> (I've always be able to do that, but I never used the feature.)
> 
> But it /was/ important that value-array expressions were part of the
> type system as they made it work more logically and orthogonally. And
> allowed assignment and equality compare.
> 
>> intArray4 doubler(intArray4 xs) {
>>   intArray4 ys;
>>   for (int i = 0; i < 4; i++) {
>>         ys.data[i] = xs.data[i] * 2;
>>   }
>>   return ys;
>> }
>>
>> int foobar(void) {
>>   intArray4 xs = {{ 1, 2, 3, 4}};
>>   intArray4 ys = doubler(xs);
>>   return ys.data[0] + ys.data[3];
>> }
> 
> See? xs and ys are clearly structs, not arrays. Fine as intermediate C
> where you can't see the joins; but not as actual C.

It's fine as actual C.  xs and ys are clearly not plain C arrays - but
they are equally clearly types that hold a sequence of ints that can
easily be accessed in an indexed fashion.

> 
> (BTW, here's how it's done properly. I've had to resurrect an ancient
> compiler which fully implemented value arrays:

That is not "done properly".  It is "done differently".  Baring
irrelevant syntax differences, the structure of your code here is
identical to mine.  The only difference is that mine is familiar to
millions of C developers and works on every C compiler, while yours is
in a language supported only by your own outdated tools.

> 
> function doubler([4]int xs)[4]int=
>     int i
>     [4]int ys
> 
>     for i:=1 to xs.upb do
>         ys[i]:=xs[i]*2
>     od
>     return ys
> end
> 
> function foobar:int=
>     [4]int xs=(10,20,30,40)
>     [4]int ys
>     ys := doubler(xs)
>     return ys[1]+ys[4]
> end
> 
> proc start=
>     println foobar()
> end
> 
> Output:
> 
> C:\archive\qc>a
>  100
> 
> I could also have used a type where the length is not hardwired into the
> name:
> 
>  type intarray = [4]int
> 
> from which I can still extract the upper bound as 'xs.upb' without
> needing to know the details of your intArray4 and having to write sizeof
> xs.data/sizeof xs.data[0].
> 
> </pldesign101>)
> 

0
David
12/1/2016 1:51:51 PM
On 01/12/2016 13:51, David Brown wrote:
> On 01/12/16 12:35, BartC wrote:

>> The discussion about changing C is hypothetical (that argument has long
>> been lost). I'm just astonished that so many can't see the issues; from
>> 'C is how it is; <shrug>' to those who think such a crazy type system is
>> a positive advantage. It seems I'm the only one who can see the Emperor
>> isn't wearing any clothes!
>>
>
> First off, C /is/ the way it is - and no amount of moaning will change
> that.  So there is nothing wrong with making that clear.  If you don't
> like a feature of C, don't use it.  If you want different features, pick
> a different language.  C++ is not far away, and it has std::array types.
>
> Secondly, I haven't noticed many people saying that the way C supports
> arrays is great - merely that it is not a problem.  It is designed to be
> efficient and easy to implement - a system that allows arrays of
> arbitrary lengths to be passed by value is going to involve a good deal
> bigger and slower code.

I'm not saying that at all. Just that value arrays ought to be (or to have 
been) a proper part of the type system and of expressions instead of 
being practically banished.

I've had the opportunity to pass arrays by value and have never felt the 
need to. But you can do this:

   int A[10], B[10];
   int (*P)[10];

   A = B;    // assign
   A == B;   // compare
   A;        // array
   &A;       // pointer to array
   &A[0];    // pointer to first element
   P;        // pointer to array
   *P;       // array
   *A;       // error
   **P;      // error
   P[i];     // error
   f(A);     // pass by value (or pseudo-value)
   f(&A);    // pass by pointer-value

   void f(int A[]);       // not allowed (unknown size of value array)
   void f(int A[10]);     // pass by value
   void f(int (*A)[]);    // pass by pointer-value

Everything just fits into place and is always completely logical. There 
are no nasty surprises.

> It would not be unreasonable to allow it, and
> insist that to pass arrays by reference you have to take addresses just
> as you do for other types.

No it wouldn't. (And I also use a pass-by-reference scheme which 
eliminates explicit & and * operators, but with some loss of 
transparency within the code.)

> I guess the C language designers simply
> picked a system that was least effort for the implementation and least
> effort for programmers in the common case.

It was a reasonable compromise for 1970, for a machine-oriented 
language. However I started creating my languages (after exposure to 
Fortran, Algol and Pascal, two of which already existed in 1970) only 
ten years or so later.

> You /are/ generating C code from another language - there is absolutely
> no reason not to use such constructs!

There are any number of tricks that can be used. But the discussion is 
about C. Unfortunately wrapping another language around it doesn't 
completely eliminate the need to analyse other people's C code to find 
out what it does and what it means.

> But yes, I am serious - this is a perfectly good way to get arrays that
> act as "normal" types.  I use it regularly.  Wrapping things in a struct
> is a standard C way to make special types - it is not uncommon to have
> structs that contain nothing but a single "int" field, simply to make
> them incompatible with normal int's in order to make programming
> mistakes less likely.

You are liable to make mistakes in C? I wonder what Jerry Stuckle would 
say here!

>> (BTW, here's how it's done properly. I've had to resurrect an ancient
>> compiler which fully implemented value arrays:
>
> That is not "done properly".  It is "done differently".  Baring
> irrelevant syntax differences,

Irrelevant?!

>  the structure of your code here is
> identical to mine.  The only difference is that mine is familiar to
> millions of C developers and works on every C compiler, while yours is
> in a language supported only by your own outdated tools.

C was created around 1970. My language has been revised as recently as 
this year (although I admit it's quite old-fashioned in many ways; lots 
of gleaming new languages about). I only had to dig out an old compiler 
because I wanted to run the code.

To make value arrays work for the new compiler needs a day or two's more 
effort for native code, much longer for a C target as this is a rather 
severe mismatch between the type systems. That I consider not worth the 
effort.

(I can't use a feature that doesn't work on all targets. The old 
compiler only had to worry about native code - that's easy!)

-- 
Bartc
0
BartC
12/1/2016 3:05:32 PM
On Thursday, December 1, 2016 at 3:14:24 AM UTC-6, David Brown wrote:
> Yes, if only C had a way to write something like:
> 
> void foo(int array1[restrict], int array2[restrict], int start, int end)
> 
> then the optimiser would be free to assume that array1 and array2 don't
> overlap!
> 
> Oh wait, C /does/ have a way to write that.

A search of C99 for "restrict" revealed an example of that syntax, and a search
of N1570 finds the same example, but I couldn't find anyplace in either
standard that actually described its meaning.  What am I missing?

Also, there's no reason such meaning should only be available to parameters
but I don't think that syntax would be allowable in any other context.
0
supercat
12/1/2016 3:37:07 PM
On 01/12/16 16:37, supercat@casperkitty.com wrote:
> On Thursday, December 1, 2016 at 3:14:24 AM UTC-6, David Brown wrote:
>> Yes, if only C had a way to write something like:
>>
>> void foo(int array1[restrict], int array2[restrict], int start, int end)
>>
>> then the optimiser would be free to assume that array1 and array2 don't
>> overlap!
>>
>> Oh wait, C /does/ have a way to write that.
> 
> A search of C99 "restrict" revealed an example of that syntax, and a search
> of N1570 finds the same example, but I couldn't find anyplace in either
> standard that actually described its meaning.  What am I missing?


void foo(int array1[restrict], int array2[restrict], int start, int end)

is compatible with:

void foo(int * restrict array1, int * restrict array2, int start, int end)

So presumably it has the same meaning.  I did not see any clear
definition in the standard either.

> 
> Also, there's no reason such meaning should only be available to parameters
> but I don't think that syntax would be allowable in any other context.
> 

I am trying to think of an example where it could be relevant.  You have
to get your array references from somewhere.  Either you get them from
parameters (and can use the "restrict" qualifier), or you have direct
access to the actual arrays, in which case the compiler already knows
that they can't overlap.  Can you give a /realistic/ example?

0
David
12/1/2016 4:05:59 PM
On Thursday, December 1, 2016 at 10:06:09 AM UTC-6, David Brown wrote:
> On 01/12/16 16:37, supercat wrote:
> > A search of C99 "restrict" revealed an example of that syntax, and a search
> > of N1570 finds the same example, but I couldn't find anyplace in either
> > standard that actually described its meaning.  What am I missing?
> 
> void foo(int array1[restrict], int array2[restrict], int start, int end)
> 
> is compatible with:
> 
> void foo(int * restrict array1, int * restrict array2, int start, int end)
> 
> So presumably it has the same meaning.  I did not see any clear
> definition in the standard either.

It would also be compatible with the same definition without qualifiers,
even though a function definition without qualifiers would clearly not
have the same meaning.

The semantics I was asking for would be incompatible with a function
definition that applied the qualifier to the variables, since that would
require that array1 and array2 be disjoint, rather than merely forbidding
*partial* overlap.  It would make sense for the [restrict] syntax to
allow coincident pointers but not partial overlap, but I don't see anything
in the Standard which would actually say such a thing.

> > Also, there's no reason such meaning should only be available to parameters
> > but I don't think that syntax would be allowable in any other context.
> 
> I am trying to think of an example where it could be relevant.  You have
> to get your array references from somewhere.  Either you get them from
> parameters (and can use the "restrict" qualifier), or you have direct
> access to the actual arrays, in which case the compiler already knows
> that they can't overlap.  Can you give a /realistic/ example?

Calling a function that returns a pointer to the start of an allocated
region which is used as an array.  Global variables which hold pointers
to the start of an allocated region which is used as an array.

Knowing that a pointer identifies the start of something that's used as
an array, and knowing that two such pointers can only alias at elements
with matching indices may facilitate optimizations in cases where a
compiler would not otherwise be able to ascertain anything useful about
the provenance of the pointers in question.
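
To make that concrete, here is a small sketch (the names and setup are
entirely invented for illustration) of arrays reached through globals and
allocation rather than through parameters, where there is no parameter to
qualify with "restrict":

    #include <stdlib.h>

    /* Hypothetical globals holding pointers to allocated regions used as
       arrays (error checking omitted). */
    static double *table_a;
    static double *table_b;

    void init_tables(size_t n)
    {
        table_a = malloc(n * sizeof *table_a);
        table_b = malloc(n * sizeof *table_b);
    }

    void scale_tables(size_t n, double k)
    {
        /* No parameter carries these pointers, so there is nothing to mark
           restrict here; the compiler can only rule out aliasing if it can
           prove something about where table_a and table_b came from. */
        for (size_t i = 0; i < n; i++)
            table_a[i] = table_b[i] * k;
    }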
0
supercat
12/1/2016 6:06:57 PM
On 12/1/2016 6:38 AM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
> 
>> On 11/30/2016 8:18 PM, Ben Bacarisse wrote:
>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>
>>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>>
>>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>>
>>>>>> There is no hole in the type system.
>>>>>
>>>>> There /is/ a hole where value array types would go. C decided, instead
>>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>>> unsupported operation [/and to allow for the possibility of value array
>>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>>> element'.
>>>>>
>>>>
>>>> YOU consider it a hole. No one else does.
>>>
>>> How can you possibly know that?  I can say for sure you can't, because I
>>> also consider it a hole, and I'd go so far as to say that it's the
>>> majority opinion amongst the people I've talked to about C's type
>>> system.
>>
>> Let me correct myself.  No one who understands C thinks it is a hole.
> 
> I'd ask a normal person to explain what's lacking in my understanding of
> C that makes me see arrays as unique in C's type system, but experience
> suggests you are probably only one post away from starting the insults,
> so I don't hold out much hope for a civilised exchange of views.
> 
>>> In the early days of C, everything that could be assigned and passed
>>> into and out of a function would, typically, fit in a machine register.
>>> But once structs could be assigned, passed and returned the situation
>>> became more obviously asymmetric with arrays, uniquely amongst the
>>> object types, being the only ones that can't be assigned, passed or
>>> returned.
>>
>> Not in my experience.  But mine only goes back to 1984 or so.
> 
> I should have been more clear about what I meant by early C.  1984 is
> quite late in the sense that C was mature by then.  Not only is it post
> K&R but, crucially, structs had become nearly first-class citizens of
> the type system (only lacking a literal denotation).  Studying early C
> shows the thinking behind its design.
> 
> <snip>
> 

Yes, and even the first edition of K&R had structures.

C has always been able to pass values without using registers.  In fact,
it's only been the last few years that has become common.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/1/2016 6:28:01 PM
David Brown <david.brown@hesbynett.no> writes:
> On 01/12/16 12:20, Ben Bacarisse wrote:
[...]
>> I'm very surprised to see you write that.  C's array types stand out
>> amongst all its object types because they can't be assigned nor passed
>> to or returned from functions.  That seems to be the hole BartC is
>> referring to and, until I saw this post, I'd assumed it was not a
>> contentious statement.
>
> What I really meant is that it is not a problem in C.  Yes, C array
> types are a bit different from other types - but I don't see it as a
> "hole" or a problem here.  It is just the way C works.  There are many
> aspects of C and C types that are inconsistent (such as Supercat's
> favourite, the way unsigned short types can get promoted to signed ints
> leading to undefined behaviour).  But the way arrays work in C is not a
> "hole in the type system" - it is just a little odd.

Now all we need is a rigorous definition of "hole", and we can all
agree.

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/1/2016 6:43:58 PM
BartC <bc@freeuk.com> writes:
> On 01/12/2016 09:05, David Brown wrote:
>> On 01/12/16 00:29, BartC wrote:
>> Basically, Bart, you seem to want C to be able to spot all logical
>> errors in code as syntax errors in the language.
>
> I'm only talking here about arrays and pointers. There would be a 
> considerable number of benefits in strictly enforcing their separation
> ...
>
>> The way to handle this is perfectly simple.  Learn how C works, then use
>> it correctly.  Don't write code that is clearly incorrect
>
> ... such as being able to more easily suss out what someone else's code is
> doing. (Eg. what does that 'int*' parameter actually represent?)
>
>  > Do you want the C language to be changed to disallow /all/ incorrect
>  > code, or is it just some bits that you specially don't like?
>
> The discussion about changing C is hypothetical (that argument has long 
> been lost). I'm just astonished that so many can't see the issues; from 
> 'C is how it is; <shrug>' to those who think such a crazy type system is 
> a positive advantage. It seems I'm the only one who can see the Emperor 
> isn't wearing any clothes!

[...]

I see.  So you understand how arrays and pointers actually work in C,
and you are able to write and generate working C code that relies on
that.  There's no actual disagreement about what C's rules are, or about
the fact that it's completely impractical to change them.

The thing that bothers you so much that you post article after article
after article about it is that the rest of us don't *feel* the same way
about it that you do.

Is this really worth your time (and ours)?

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/1/2016 6:49:38 PM
On 12/01/2016 11:05 AM, David Brown wrote:
> On 01/12/16 16:37, supercat@casperkitty.com wrote:
>> On Thursday, December 1, 2016 at 3:14:24 AM UTC-6, David Brown wrote:
>>> Yes, if only C had a way to write something like:
>>>
>>> void foo(int array1[restrict], int array2[restrict], int start, int end)
>>>
>>> then the optimiser would be free to assume that array1 and array2 don't
>>> overlap!
>>>
>>> Oh wait, C /does/ have a way to write that.
>>
>> A search of C99 "restrict" revealed an example of that syntax, and a search
>> of N1570 finds the same example, but I couldn't find anyplace in either
>> standard that actually described its meaning.  What am I missing?
> 
> 
> void foo(int array1[restrict], int array2[restrict], int start, int end)
> 
> is compatible with:
> 
> void foo(int * restrict array1, int * restrict array2, int start, int end)

More precisely, those are two different ways of declaring precisely the
same function prototype.

> So presumably it has the same meaning.  I did not see any clear
> definition in the standard either.

Did you fail to find 6.7.3.1, or did you fail to consider it a clear
definition? If the latter, I can sympathize. I think I understand that
clause, but it's one of the more obscure parts of the standard.

>> Also, there's no reason such meaning should only be available to parameters
>> but I don't think that syntax would be allowable in any other context.

The use of array syntax to declare what are actually pointers is allowed
only in function parameter declarations. However, any pointer to an
object type may be restrict-qualified.

Note: restrict doesn't say that the arrays the pointers point at cannot
overlap, it says that if any object accessed by using an expression
based upon a restrict-qualified pointer is also accessed within the
scope of that pointer by any other means, the behavior is undefined.
Consider a global 2-dimensional array named squares, representing the
squares of a chess board, and a routine that declares a
restrict-qualified pointer to a square, named wsq, which is only ever
used to access the white squares on the board. If the same routine uses
the global array directly, but only to access the black squares, then
such use does not violate the requirements for restrict - no square is
accessed by both methods.
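
As a rough illustration (this is my own made-up sketch, using a flat
64-element array for simplicity, not anything taken from the standard):

    /* wsq is restrict-qualified and only ever used for the white squares;
       the global array is used directly only for the black squares, so no
       square is accessed both ways and restrict's requirements are met. */
    int squares[64];

    void update_board(void)
    {
        int *restrict wsq = squares;
        for (int i = 0; i < 64; i++) {
            if ((i / 8 + i % 8) % 2 == 0)
                wsq[i] += 1;        /* white square: accessed via wsq only  */
            else
                squares[i] -= 1;    /* black square: accessed via the array */
        }
    }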
0
James
12/1/2016 6:53:38 PM
On Thursday, December 1, 2016 at 12:52:34 PM UTC-6, Keith Thompson wrote:
> supercat writes:
> > A search of C99 "restrict" revealed an example of that syntax, and a search
> > of N1570 finds the same example, but I couldn't find anyplace in either
> > standard that actually described its meaning.  What am I missing?
> 
> N1570 6.7.3p8 discusses it briefly.
> 
> N1570 6.7.3.1 is the "Formal definition of restrict".
> 
> Does your PDF viewer not have a search function?

I know what "int foo(int *restrict p)" means.  I don't see anything,
however, that suggests that "int foo(int p[restrict])" would mean the
same thing.  Given that the meaning of a "static" qualifier in an array
bound is totally different from its meaning in other contexts, and I
can't think of any cases where a qualifier within an array bound would
be applied to the array, I see nothing in the Standard which would
describe the meaning of the syntax using "[restrict]", beyond the fact
that its inclusion in an example suggests that it must be regarded as
valid and should probably mean something.
0
supercat
12/1/2016 7:51:47 PM
On Thursday, December 1, 2016 at 12:53:47 PM UTC-6, James Kuyper wrote:
> On 12/01/2016 11:05 AM, David Brown wrote:
> > void foo(int array1[restrict], int array2[restrict], int start, int end)
> > 
> > is compatible with:
> > 
> > void foo(int * restrict array1, int * restrict array2, int start, int end)
> 
> More precisely, those are two different ways of declaring precisely the
> same function prototype.

What in the Standard says that they are equivalent?  A static qualifier
within an array bound behaves totally differently from a qualifier anywhere
outside the array bound, and I can't think of any qualifiers which are
specified as working identically inside or outside.

> Did you fail to find 6.7.3.1, or did you fail to consider it a clear
> definition? If the latter, I can sympathize. I think I understand that
> clause, but it's one of the more obscure parts of the standard.
> 
> >> Also, there's no reason such meaning should only be available to parameters
> >> but I don't think that syntax would be allowable in any other context.
> 
> The use of array syntax to declare what are actually pointers is allowed
> only in function parameter declarations. However, any pointer to an
> object type may be restrict-qualified.
> 
> Note: restrict doesn't say that the arrays the pointers point at cannot
> overlap, it says that if any object accessed by using an expression
> based upon a restrict-qualified pointer is also accessed within the
> scope of that pointer by any other means, the behavior is undefined.
> Consider a global 2-dimensional array named squares, representing the
> squares of a chess board, and a routine that declares a
> restrict-qualified pointer to a square, named wsq, which is only ever
> used to access the white squares on the board. If the same routine uses
> the global array directly, but only to access the black squares, then
> such use does not violates the requirements for restrict - no square is
> accessed by both methods.

I understand what restrict means.  My point is that given a piece of code
like:

    void scale_array(double *dest, double const *src, int n, double scale)
    {
      for (int i = 0; i < n; i++)
        dest[i] = src[i] * scale;
    }

there's no qualifier that would allow safe vectorization while still
allowing the function to operate on an array in place without having to
explicitly write the code as:

    void scale_array(double *restrict dest, double const *restrict src,
                     int n, double scale)
    {
      if (dest == src)
        for (int i = 0; i < n; i++)
          dest[i] = dest[i] * scale;
      else
        for (int i = 0; i < n; i++)
          dest[i] = src[i] * scale;
    }

effectively doubling the amount of code necessary to do the same thing.

If writing the header as

    void scale_array(double dest[restrict], double const src[restrict],
                     int n, double scale)

would allow vectorization without requiring code duplication, then I
may have been complaining about the "lack" of a feature which actually
exists, but if the Standard doesn't make the meaning clear, then I
would suggest the feature isn't really usable.
0
supercat
12/1/2016 8:22:54 PM
On 01/12/2016 18:49, Keith Thompson wrote:
> BartC <bc@freeuk.com> writes:

> The thing that bothers you so much that you post article after article
> after article about it is that the rest of us don't *feel* the same way
> about it that you do.
>
> Is this really worth your time (and ours)?

I don't usually reply to myself.

So you might ask the same of anyone else who takes part.

But I believe that some people reading this stuff (not the 'regulars') 
might be learning something as well and perhaps getting a wider 
perspective on the language.

Some might also have their own reservations about some aspects and are 
pleased someone is speaking up about them.

-- 
bartc




0
BartC
12/1/2016 8:48:17 PM
supercat@casperkitty.com writes:
> On Thursday, December 1, 2016 at 12:52:34 PM UTC-6, Keith Thompson wrote:
>> supercat writes:
>> > A search of C99 "restrict" revealed an example of that syntax, and
>> > a search of N1570 finds the same example, but I couldn't find
>> > anyplace in either standard that actually described its meaning.
>> > What am I missing?
>> 
>> N1570 6.7.3p8 discusses it briefly.
>> 
>> N1570 6.7.3.1 is the "Formal definition of restrict".
>> 
>> Does your PDF viewer not have a search function?

Sorry, that was inappropriately snarky.

> I know what "int foo(int *restrict p)" means.  I don't see anything,
> however, that suggests that "int foo(int p[restrict])" would mean the
> same thing.  Given that the meaning of a "static" qualifier in an array
> bound is totally different from its meaning in other contexts, and I
> can't think of any cases where a qualifier within an array bound would
> be applied to the array, I see nothing in the Standard which would
> describe the meaning of the syntax using "[restrict]", beyond the fact
> that its inclusion in an example suggests that it must be regarded as
> valid and should probably mean something.

N1570 6.7.6.3p7:

    A declaration of a parameter as "array of type" shall be adjusted
    to "qualified pointer to type", where the type qualifiers (if
    any) are those specified within the [ and ] of the array type
    derivation. If the keyword static also appears within the [
    and ] of the array type derivation, then for each call to the
    function, the value of the corresponding actual argument shall
    provide access to the first element of an array with at least
    as many elements as specified by the size expression.

"restrict" is a type qualifier, so a parameter declaration
    int p[restrict]
is equivalent to
    int *restrict p

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/1/2016 8:52:37 PM
David Brown <david.brown@hesbynett.no> writes:

> On 01/12/16 16:37, supercat@casperkitty.com wrote:
>> On Thursday, December 1, 2016 at 3:14:24 AM UTC-6, David Brown wrote:
>>> Yes, if only C had a way to write something like:
>>>
>>> void foo(int array1[restrict], int array2[restrict], int start, int end)
>>>
>>> then the optimiser would be free to assume that array1 and array2 don't
>>> overlap!
>>>
>>> Oh wait, C /does/ have a way to write that.
>> 
>> A search of C99 "restrict" revealed an example of that syntax, and a search
>> of N1570 finds the same example, but I couldn't find anyplace in either
>> standard that actually described its meaning.  What am I missing?
>
> void foo(int array1[restrict], int array2[restrict], int start, int end)
>
> is compatible with:
>
> void foo(int * restrict array1, int * restrict array2, int start, int end)
>
> So presumably it has the same meaning.  I did not see any clear
> definition in the standard either.

6.7.6.3 p7:

  "A declaration of a parameter as 'array of type' shall be adjusted to
  'qualified pointer to type', where the type qualifiers (if any) are
  those specified within the [ and ] of the array type derivation."

<snip>
-- 
Ben.
0
Ben
12/1/2016 9:13:33 PM
David Brown <david.brown@hesbynett.no> writes:

> On 01/12/16 12:20, Ben Bacarisse wrote:
>> David Brown <david.brown@hesbynett.no> writes:
>> 
>>> On 01/12/16 00:29, BartC wrote:
>>>> On 30/11/2016 22:25, Jerry Stuckle wrote:
>>>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>>>
>>>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>>>
>>>>>>> There is no hole in the type system.
>>>>>>
>>>>>> There /is/ a hole where value array types would go. C decided, instead
>>>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>>>> unsupported operation [/and to allow for the possibility of value array
>>>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>>>> element'.
>>>>>
>>>>> YOU consider it a hole. No one else does.  Who do you think is more
>>>>> likely to be wrong?  Hint: it isn't everybody else.
>>>>     
>>>> Have you got an opinion of your own, or do you have to depend on someone
>>>> else's?
>>>
>>> Often Jerry is the one with the unique opinion contrary to everyone else
>>> - but here he is entirely correct.
>> 
>> I'm very surprised to see you write that.  C's array types stand out
>> amongst all its object types because they can't be assigned nor passed
>> to or returned from functions.  That seems to be the hole BartC is
>> referring to and, until I saw this post, I'd assumed it was not a
>> contentious statement.
>
> What I really meant is that it is not a problem in C.  Yes, C array
> types are a bit different from other types - but I don't see it as a
> "hole" or a problem here.  It is just the way C works.
<snip>

Since the term "hole" is not a technical one you can use it in any way
you like, but BartC was clearer, saying "There /is/ a hole where value
array types would go".  He made it clear what was missing (that is,
after all, what a hole usually is), and it seems we all agree on
that[1].  The disagreement (if there is any) would be about how much it
matters and since I can't quantify that I won't say any more.

[1] There's a technicality here -- C does have array values, it's just
that they get converted in almost all situations so you can't do much
with them.

-- 
Ben.
0
Ben
12/1/2016 9:23:46 PM
On 12/1/2016 3:48 PM, BartC wrote:
> On 01/12/2016 18:49, Keith Thompson wrote:
>> BartC <bc@freeuk.com> writes:
> 
>> The thing that bothers you so much that you post article after article
>> after article about it is that the rest of us don't *feel* the same way
>> about it that you do.
>>
>> Is this really worth your time (and ours)?
> 
> I don't usually reply to myself.
> 
> So you might ask the same of anyone else who takes part.
> 
> But I believe that some people reading this stuff (not the 'regulars')
> might be learning something as well and perhaps getting a wider
> perspective on the language.
> 
> Some might also have their own reservations about some aspects and are
> pleased someone is speaking up about them.
> 

Yes, unfortunately, some people might be learning.  And then having to
spend a long time unlearning your junk.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/1/2016 9:51:07 PM
On Thursday, December 1, 2016 at 2:52:37 PM UTC-6, Keith Thompson wrote:
> supercat writes:
> > I know what "int foo(int *restrict p)" means.  I don't see anything,
> > however, that suggests that "int foo(int p[restrict])" would mean the
> > same thing.  Given that the meaning of a "static" qualifier in an array
> > bound is totally different from its meaning in other contexts, and I
> > can't think of any cases where a qualifier within an array bound would
> > be applied to the array, I see nothing in the Standard which would
> > describe the meaning of the syntax using "[restrict]", beyond the fact
> > that its inclusion in an example suggests that it must be regarded as
> > valid and should probably mean something.
> 
> N1570 6.7.6.3p7:
> 
>     A declaration of a parameter as "array of type" shall be adjusted
>     to "qualified pointer to type", where the type qualifiers (if
>     any) are those specified within the [ and ] of the array type
>     derivation. If the keyword static also appears within the [
>     and ] of the array type derivation, then for each call to the
>     function, the value of the corresponding actual argument shall
>     provide access to the first element of an array with at least
>     as many elements as specified by the size expression.
> 
> "restrict" is a type qualifier, so a parameter declaration
>     int p[restrict]
> is equivalent to
>     int *restrict p

Thanks.  I wonder what the reasoning was behind that rule; the rule
appears as 6.7.5.3p7 in C99 [found by searching for the language
therein], and nothing in the C99 rationale for 6.7.5.3 seems to mention
it; it seems like a purely redundant way of expressing something that
could be easily expressed in another fashion, precluding the use of the
syntax for some other different purpose.

In any case, since the syntax is defined in the fashion you indicate, that
would suggest that C does, as I said, lack any concept of "pointer to the
start of an array of unknown size".
0
supercat
12/1/2016 10:28:53 PM
Ben Bacarisse <ben.usenet@bsb.me.uk> writes:
[...]
> [1] There's a technicality here -- C does have array values, it's just
> that they get converted in almost all situations so you can't do much
> with them.

As long as we're being technical ...

Yes, C does have array values.  The definition of "value" ("precise
meaning of the contents of an object when interpreted as having
a specific type") applies to arrays as much as to any other kind
of object type.

But array *values* are not converted.  6.3.2.1p3 says:

    Except when it is the operand of the sizeof operator, the
    _Alignof operator, or the unary & operator, or is a string
    literal used to initialize an array, an expression that has
    type "array of type" is converted to an expression with type
    "pointer to type"" that points to the initial element of the
    array object and is not an lvalue.

It's an *expression* (a source code construct) that's being
"converted", not the value of that expression (a run-time construct).
This is not a run-time conversion; there's no information in the
value of an array object from which the address of its initial
element could be inferred.  It would have been clearer IMHO to use
the word "adjusted" here, as 6.7.6.3 does for function parameters
of array type.  In both cases, it's a compile-time adjustment of an
array expression/declaration to a pointer expression/declaration,
respectively.

But I agree with your conclusion: array values do exist, but there's
little or nothing you can (directly) do with them.
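
A tiny illustration of that distinction (my own sketch, not taken from the
standard):

    #include <stdio.h>

    int a[10];

    int main(void)
    {
        /* Array expression, no conversion: prints 10 * sizeof(int). */
        printf("%zu\n", sizeof a);
        /* Here the expression a is converted to int *, so this prints
           sizeof(int *). */
        printf("%zu\n", sizeof (a + 0));
        /* The expression a is adjusted, at translation time, to a pointer
           to a[0]; nothing is read from the array object to produce it. */
        int *p = a;
        (void)p;
        return 0;
    }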

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/1/2016 10:41:59 PM
supercat@casperkitty.com writes:
> On Thursday, December 1, 2016 at 2:52:37 PM UTC-6, Keith Thompson wrote:
>> supercat writes:
>> > I know what "int foo(int *restrict p)" means.  I don't see anything,
>> > however, that suggests that "int foo(int p[restrict])" would mean the
>> > same thing.  Given that the meaning of a "static" qualifier in an array
>> > bound is totally different from its meaning in other contexts, and I
>> > can't think of any cases where a qualifier within an array bound would
>> > be applied to the array, I see nothing in the Standard which would
>> > describe the meaning of the syntax using "[restrict]", beyond the fact
>> > that its inclusion in an example suggests that it must be regarded as
>> > valid and should probably mean something.
>> 
>> N1570 6.7.6.3p7:
>> 
>>     A declaration of a parameter as "array of type" shall be adjusted
>>     to "qualified pointer to type", where the type qualifiers (if
>>     any) are those specified within the [ and ] of the array type
>>     derivation. If the keyword static also appears within the [
>>     and ] of the array type derivation, then for each call to the
>>     function, the value of the corresponding actual argument shall
>>     provide access to the first element of an array with at least
>>     as many elements as specified by the size expression.
>> 
>> "restrict" is a type qualifier, so a parameter declaration
>>     int p[restrict]
>> is equivalent to
>>     int *restrict p
>
> Thanks.  I wonder what the reasoning was behind that rule; the rule
> appears as 6.7.5.3p7 in C99 [found by searching for the language
> therein], and nothing in the C99 rationale for 6.7.5.3 seems to mention
> it; it seems like a purely redundant way of expressing something that
> could be easily expressed in another fashion, precluding the use of the
> syntax for some other different purpose.

The "static" syntax can only be used with array syntax.  Allowing other
qualifiers as well means the features can be combined, for example
    void func(char [restrict static 10]);

> In any case, since the syntax as defined in the fashion you indicate, that
> would suggest that C does as I said lack any concept of "pointer to the
> start of an array of unknown size".

Isn't that pretty much what an object pointer is, given that any object
can be treated as a single-element array as far as pointer arithmetic is
concerned?

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/1/2016 10:48:05 PM
On Thursday, December 1, 2016 at 4:47:24 PM UTC-6, Keith Thompson wrote:
> supercat writes:
> > Thanks.  I wonder what the reasoning was behind that rule; the rule
> > appears as 6.7.5.3p7 in C99 [found by searching for the language
> > therein], and nothing in the C99 rationale for 6.7.5.3 seems to mention
> > it; it seems like a purely redundant way of expressing something that
> > could be easily expressed in another fashion, precluding the use of the
> > syntax for some other different purpose.
> 
> The "static" syntax can only be used with array syntax.  Allowing other
> qualifiers as well means the features can be combined, for example
>     void func(char [restrict static 10]);

That seems like a feeble attempt at working around the weird binding of
"restrict" which would make it unclear whether

     void func(char *restrict foo[static 10]) {...}

should accept a pointer to an array of 10 restrict-qualified pointers,
or a restrict-qualified pointer to an array of 10 non-qualified pointers.
Allowing the syntax resolves that problem, sorta, but it seems like a
needlessly ugly way to do so, compared with e.g. saying that to apply
a restrict qualifier to the array itself one would use something like:

     void func(char *(restrict foo)[static 10]) {...}

> > In any case, since the syntax as defined in the fashion you indicate, that
> > would suggest that C does as I said lack any concept of "pointer to the
> > start of an array of unknown size".
> 
> Isn't that pretty much what an object pointer is, given that any object
> can be treated as a single-element array as far as pointer arithmetic is
> concerned?

Given:

    void foo(int arr1[], int arr2[]) {...}

a compiler must recognize aliasing between any element of arr1 and any
element of arr2.  Given

    void foo(int *restrict arr1, int *restrict arr2) {...}

a compiler could assume there will be no aliasing at all between the two
arrays.  In cases like the array-scaling function I posted earlier, however,
it may be necessary for a compiler to allow for aliasing between items in
one array and the *corresponding* items in the other, even though it would
be allowable for a compiler to assume that no item in the first array would
alias any *other* element in the second array.
0
supercat
12/1/2016 11:00:10 PM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 12/1/2016 6:38 AM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> 
>>> On 11/30/2016 8:18 PM, Ben Bacarisse wrote:
>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>>
>>>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>>>
>>>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>>>
>>>>>>> There is no hole in the type system.
>>>>>>
>>>>>> There /is/ a hole where value array types would go. C decided, instead
>>>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>>>> unsupported operation [/and to allow for the possibility of value array
>>>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>>>> element'.
>>>>>>
>>>>>
>>>>> YOU consider it a hole. No one else does.
>>>>
>>>> How can you possibly know that?  I can say for sure you can't, because I
>>>> also consider it a hole, and I'd go so far as to say that it's the
>>>> majority opinion amongst the people I've talked to about C's type
>>>> system.
>>>
>>> Let me correct myself.  No one who understands C thinks it is a hole.
>> 
>> I'd ask a normal person to explain what's lacking in my understanding of
>> C that makes me see arrays as unique in C's type system, but experience
>> suggests you are probably only one post away from starting the insults,
>> so I don't hold out much hope for a civilised exchange of views.
>> 
>>>> In the early days of C, everything that could be assigned and passed
>>>> into and out of a function would, typically, fit in a machine register.
>>>> But once structs could be assigned, passed and returned the situation
>>>> became more obviously asymmetric with arrays, uniquely amongst the
>>>> object types, being the only ones that can't be assigned, passed or
>>>> returned.
>>>
>>> Not in my experience.  But mine only goes back to 1984 or so.
>> 
>> I should have been more clear about what I meant by early C.  1984 is
>> quite late in the sense that C was mature by then.  Not only is it post
>> K&R but, crucially, structs had become nearly first-class citizens of
>> the type system (only lacking a literal denotation).  Studying early C
>> shows the thinking behind its design.
>> 
>> <snip>
>
> Yes, and even the first edition of K&R had structures.

But they could not be assigned nor passed into or out of functions.

The treatment of arrays in C comes from BCPL and later B where every
value must be a machine word.  Neither BCPL nor B had structures, so
they only had to find a solution to handling arrays and they both used
the "pointer to the start" idea as the machine word "version" of the
array.

Early C added structures, but the compiler could still use machine words
for everything, because the only meaningful things you could do with a
struct resulted in a word-sized value: take the size, take the address
or access a member.  Anything that resulted in larger value could be
dealt with at compile time.  For example, you could, logically, access
s.x where x was itself a struct, but the compiler could just throw that
value away because you could not do anything with it unless it was part
of a larger expression that takes its size or its address or accesses
a member further within it.  Ultimately you ended up with code that
generated a word-sized result or which could be ignored because it
didn't.
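
Something like this toy fragment (hypothetical types, made up purely to
illustrate the point) shows how every useful struct operation bottomed out
in a word-sized value:

    struct point { int x, y; };
    struct shape { struct point origin; int kind; };

    int f(struct shape *p)
    {
        int k = p->kind;                /* member access: a word-sized value */
        struct point *q = &p->origin;   /* taking an address: word-sized     */
        int n = (int)sizeof p->origin;  /* taking the size: a constant       */
        return k + q->x + n;
    }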

C has always been able to pass values without using registers.  In fact,
it's only in the last few years that passing them in registers has become
common.

That has nothing to do with what I said.  I said all of early C's values
would fit into a machine register.

-- 
Ben.
0
Ben
12/2/2016 12:06:47 AM
jameskuyper@verizon.net writes:
> On Friday, December 2, 2016 at 12:39:30 PM UTC-5, supe...@casperkitty.com wrote:
[...]
>> and thus no code would have been broken by a requirement that pointer
>> arguments whose type is specified at all must be specified using pointer
>> syntax.
>
> False. That rule would break the following code:
>
>     int foo() int bar; char boz[]; {/* body of foo */}

The correct (old-style) syntax for that would be:

    int foo(bar, boz) int bar; char boz[]; {/* body of foo */}
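
For readers who have never seen the old style, a complete definition in
that form (with a made-up body) looks like this:

    int foo(bar, boz)
    int bar;
    char boz[];
    {
        return bar + boz[0];
    }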

[...]

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/2/2016 1:01:01 AM
On 12/1/2016 7:06 PM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
> 
>> On 12/1/2016 6:38 AM, Ben Bacarisse wrote:
>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>
>>>> On 11/30/2016 8:18 PM, Ben Bacarisse wrote:
>>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>>>
>>>>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>>>>
>>>>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>>>>
>>>>>>>> There is no hole in the type system.
>>>>>>>
>>>>>>> There /is/ a hole where value array types would go. C decided, instead
>>>>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>>>>> unsupported operation [/and to allow for the possibility of value array
>>>>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>>>>> element'.
>>>>>>>
>>>>>>
>>>>>> YOU consider it a hole. No one else does.
>>>>>
>>>>> How can you possibly know that?  I can say for sure you can't, because I
>>>>> also consider it a hole, and I'd go so far as to say that it's the
>>>>> majority opinion amongst the people I've talked to about C's type
>>>>> system.
>>>>
>>>> Let me correct myself.  No one who understands C thinks it is a hole.
>>>
>>> I'd ask a normal person to explain what's lacking in my understanding of
>>> C that makes me see arrays as unique in C's type system, but experience
>>> suggests you are probably only one post away from starting the insults,
>>> so I don't hold out much hope for a civilised exchange of views.
>>>
>>>>> In the early days of C, everything that could be assigned and passed
>>>>> into and out of a function would, typically, fit in a machine register.
>>>>> But once structs could be assigned, passed and returned the situation
>>>>> became more obviously asymmetric with arrays, uniquely amongst the
>>>>> object types, being the only ones that can't be assigned, passed or
>>>>> returned.
>>>>
>>>> Not in my experience.  But mine only goes back to 1984 or so.
>>>
>>> I should have been more clear about what I meant by early C.  1984 is
>>> quite late in the sense that C was mature by then.  Not only is it post
>>> K&R but, crucially, structs had become nearly first-class citizens of
>>> the type system (only lacking a literal denotation).  Studying early C
>>> shows the thinking behind its design.
>>>
>>> <snip>
>>
>> Yes, and even the first edition of K&R had structures.
> 
> But they could not be assigned nor passed into or out of functions.
>

Sure they could.

> The treatment of arrays in C comes from BCPL and later B where every
> value must be a machine word.  Neither BCPL nor B had structures, so
> they only had to find a solution to handling arrays and they both used
> the "pointer to the start" idea as the machine word "version" of the
> array.
> 

So?

> Early C added structures, but the compiler could still use machine words
> for everything, because the only meaningful things you could do with a
> struct resulted in a word-sized value: take the size, take the address
> or access a member.  Anything that resulted in larger value could be
> dealt with at compile time.  For example, you could, logically access
> s.x where x was itself a struct but the compiler could just throw that
> value away because you could not do anything with it unless it was part
> of a larger expression that takes its size or its address or further
> access a member within it.  Ultimately you ended up with code that
> generated a word-sized result or which could be ignored because it
> didn't.
> 

No, in early days, *everything* was passed on the stack (or save area or
whatever).  Nothing was passed in registers.  And the early systems you
are talking about often had 8 bit registers - but ints even then were at
least 16 bits.  They could not be passed in an 8 bit register.

>> C has always been able to pass values without using registers.  In fact,
>> it's only in the last few years that passing them in registers has become
>> common.
> 
> That has nothing to do with what I said.  I said all of early C's values
> would fit into a machine register.
> 

It has everything to do with what you said.  Early C's values would NOT
fit into machine registers.  And even if they would, early compilers did
NOT typically pass them in registers.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/2/2016 1:02:24 AM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 12/1/2016 7:06 PM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> 
>>> On 12/1/2016 6:38 AM, Ben Bacarisse wrote:
>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>>
>>>>> On 11/30/2016 8:18 PM, Ben Bacarisse wrote:
>>>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>>>>
>>>>>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>>>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>>>>>
>>>>>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>>>>>
>>>>>>>>> There is no hole in the type system.
>>>>>>>>
>>>>>>>> There /is/ a hole where value array types would go. C decided, instead
>>>>>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>>>>>> unsupported operation [/and to allow for the possibility of value array
>>>>>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>>>>>> element'.
>>>>>>>>
>>>>>>>
>>>>>>> YOU consider it a hole. No one else does.
>>>>>>
>>>>>> How can you possibly know that?  I can say for sure you can't, because I
>>>>>> also consider it a hole, and I'd go so far as to say that it's the
>>>>>> majority opinion amongst the people I've talked to about C's type
>>>>>> system.
>>>>>
>>>>> Let me correct myself.  No one who understands C thinks it is a hole.
>>>>
>>>> I'd ask a normal person to explain what's lacking in my understanding of
>>>> C that makes me see arrays as unique in C's type system, but experience
>>>> suggests you are probably only one post away from starting the insults,
>>>> so I don't hold out much hope for a civilised exchange of views.
>>>>
>>>>>> In the early days of C, everything that could be assigned and passed
>>>>>> into and out of a function would, typically, fit in a machine register.
>>>>>> But once structs could be assigned, passed and returned the situation
>>>>>> became more obviously asymmetric with arrays, uniquely amongst the
>>>>>> object types, being the only ones that can't be assigned, passed or
>>>>>> returned.
>>>>>
>>>>> Not in my experience.  But mine only goes back to 1984 or so.
>>>>
>>>> I should have been more clear about what I meant by early C.  1984 is
>>>> quite late in the sense that C was mature by then.  Not only is it post
>>>> K&R but, crucially, structs had become nearly first-class citizens of
>>>> the type system (only lacking a literal denotation).  Studying early C
>>>> shows the thinking behind its design.
>>>>
>>>> <snip>
>>>
>>> Yes, and even the first edition of K&R had structures.
>> 
>> But they could not be assigned nor passed into or out of functions.
>
> Sure they could.

Page 121: "structures may not be assigned to or copied as a unit, and
.... they can not be passed to or returned from functions".  Automatic
structures could not even be initialised.

>> The treatment of arrays in C comes from BCPL and later B where every
>> value must be a machine word.  Neither BCPL nor B had structures, so
>> they only had to find a solution to handling arrays and they both used
>> the "pointer to the start" idea as the machine word "version" of the
>> array.
>
> So?
>
>> Early C added structures, but the compiler could still use machine words
>> for everything, because the only meaningful things you could do with a
>> struct resulted in a word-sized value: take the size, take the address
>> or access a member.  Anything that resulted in larger value could be
>> dealt with at compile time.  For example, you could, logically access
>> s.x where x was itself a struct but the compiler could just throw that
>> value away because you could not do anything with it unless it was part
>> of a larger expression that takes its size or its address or further
>> access a member within it.  Ultimately you ended up with code that
>> generated a word-sized result or which could be ignored because it
>> didn't.
>
> No, in early days, *everything* was passed on the stack (or save area or
> whatever).  Nothing was passed in registers.

I am talking about the size of the values and you are talking about
where they are passed, as if that contradicts what I'm saying.

> And the early systems you
> are talking about often had 8 bit registers - but ints even then were at
> least 16 bits.  They could not be passed in an 8 bit register.

No.  I'm talking about early C that existed only on the PDP-11, IBM
S/360 and a GE/Honeywell 6000 machine.  C was only implemented on
"smaller" machines when it was far more developed.

<snip>
-- 
Ben.
0
Ben
12/2/2016 2:22:53 AM
On 12/1/2016 9:22 PM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
> 
>> On 12/1/2016 7:06 PM, Ben Bacarisse wrote:
>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>
>>>> On 12/1/2016 6:38 AM, Ben Bacarisse wrote:
>>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>>>
>>>>>> On 11/30/2016 8:18 PM, Ben Bacarisse wrote:
>>>>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>>>>>
>>>>>>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>>>>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>>>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>>>>>>
>>>>>>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>>>>>>
>>>>>>>>>> There is no hole in the type system.
>>>>>>>>>
>>>>>>>>> There /is/ a hole where value array types would go. C decided, instead
>>>>>>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>>>>>>> unsupported operation [/and to allow for the possibility of value array
>>>>>>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>>>>>>> element'.
>>>>>>>>>
>>>>>>>>
>>>>>>>> YOU consider it a hole. No one else does.
>>>>>>>
>>>>>>> How can you possibly know that?  I can say for sure you can't, because I
>>>>>>> also consider it a hole, and I'd go so far as to say that it's the
>>>>>>> majority opinion amongst the people I've talked to about C's type
>>>>>>> system.
>>>>>>
>>>>>> Let me correct myself.  No one who understands C thinks it is a hole.
>>>>>
>>>>> I'd ask a normal person to explain what's lacking in my understanding of
>>>>> C that makes me see arrays as unique in C's type system, but experience
>>>>> suggests you are probably only one post away from starting the insults,
>>>>> so I don't hold out much hope for a civilised exchange of views.
>>>>>
>>>>>>> In the early days of C, everything that could be assigned and passed
>>>>>>> into and out of a function would, typically, fit in a machine register.
>>>>>>> But once structs could be assigned, passed and returned the situation
>>>>>>> became more obviously asymmetric with arrays, uniquely amongst the
>>>>>>> object types, being the only ones that can't be assigned, passed or
>>>>>>> returned.
>>>>>>
>>>>>> Not in my experience.  But mine only goes back to 1984 or so.
>>>>>
>>>>> I should have been more clear about what I meant by early C.  1984 is
>>>>> quite late in the sense that C was mature by then.  Not only is it post
>>>>> K&R but, crucially, structs had become nearly first-class citizens of
>>>>> the type system (only lacking a literal denotation).  Studying early C
>>>>> shows the thinking behind its design.
>>>>>
>>>>> <snip>
>>>>
>>>> Yes, and even the first edition of K&R had structures.
>>>
>>> But they could not be assigned nor passed into or out of functions.
>>
>> Sure they could.
> 
> Page 121: "structures may not be assigned to or copied as a unit, and
> ... they can not be passed to or returned from functions".  Automatic
> structures could not even be initialised.
>

Sorry, I can't look at it right now.  I'm not in my office.


>>> The treatment of arrays in C comes from BCPL and later B where every
>>> value must be a machine word.  Neither BCPL nor B had structures, so
>>> they only had to find a solution to handling arrays and they both used
>>> the "pointer to the start" idea as the machine word "version" of the
>>> array.
>>
>> So?
>>
>>> Early C added structures, but the compiler could still use machine words
>>> for everything, because the only meaningful things you could do with a
>>> struct resulted in a word-sized value: take the size, take the address
>>> or access a member.  Anything that resulted in larger value could be
>>> dealt with at compile time.  For example, you could, logically access
>>> s.x where x was itself a struct but the compiler could just throw that
>>> value away because you could not do anything with it unless it was part
>>> of a larger expression that takes its size or its address or further
>>> access a member within it.  Ultimately you ended up with code that
>>> generated a word-sized result or which could be ignored because it
>>> didn't.
>>
>> No, in early days, *everything* was passed on the stack (or save area or
>> whatever).  Nothing was passed in registers.
> 
> I am talking about the size of the values and you are talking about
> where they are passed as of that contradicts what I'm saying.
>

You stated (in comments that you have deleted) "In the early days of C,
everything that could be assigned and passed into and out of a function
would, typically, fit in a machine register."

I'm just proving that it is patently false.  Not only could values larger
than a register be passed, but more values than there are registers could
be passed.

Additionally, doubles, which were even then often 32 bit values, could
be passed.  And 16 bit ints could be passed, even when the only
registers were 8 bit (and yes, C ran on 8 bit systems - even in the mid
80's).

>> And the early systems you
>> are talking about often had 8 bit registers - but ints even then were at
>> least 16 bits.  They could not be passed in an 8 bit register.
> 
> No.  I'm talking about early C that existed only on the PDP-11, IBM
> S/360 and a GE/Honeywell 6000 machine.  C was only implemented on
> "smaller" machines when it was far more developed.
> 
> <snip>
> 

No, C was implemented on microprocessors very early in the game.  I
remember hearing about it on the Motorola 6800, for instance (even
though I didn't do any C programming at the time).

And S/360 had 32 bit registers, hardware floating point support and a
lot of things which only came to microprocessors in the late 80's or
early 90's.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/2/2016 2:42:12 AM
On Thursday, December 1, 2016 at 3:23:35 PM UTC-5, supe...@casperkitty.com wrote:
> On Thursday, December 1, 2016 at 12:53:47 PM UTC-6, James Kuyper wrote:
> > On 12/01/2016 11:05 AM, David Brown wrote:
> > > void foo(int array1[restrict], int array2[restrict], int start, int end)
> > >
> > > is compatible with:
> > >
> > > void foo(int * restrict array1, int * restrict array2, int start, int end)
> >
> > More precisely, those are two different ways of declaring precisely the
> > same function prototype.
>
> What in the Standard says that they are equivalent?

The only grammar productions that allow type qualifiers such as
restrict to appear inside the square brackets of an array declaration
are in 6.7.6.2p1, which says "See 6.7.6.3 for the meaning of the
optional type qualifiers...".

6.7.6.3p7 says:
"A declaration of a parameter as =E2=80=98=E2=80=98array of type=E2=80=99=
=E2=80=99 shall be adjusted
to =E2=80=98=E2=80=98qualified pointer to type=E2=80=99=E2=80=99, where the=
 type qualifiers (if any)
are those specified within the [ and ] of the array type derivation."

Within the limitations of standardese, you couldn't hope for a more
explicit statement of the fact that, as function parameter
declarations, "int array1[restrict]" and "int * restrict array1" are
exactly equivalent.

> ...  A static qualifier
> within an array bound behaves totally differently from a qualifier
> anywhere outside the array bound, and I can't think of any
> qualifiers which are specified as working identically inside or
> outside.

The only defined behavior for type qualifiers inside the array bound
is the one provided by 6.7.6.3p7. I'm curious what the difference is
that you thought there was between those two declarations.
0
jameskuyper
12/2/2016 3:02:02 AM
On Thursday, December 1, 2016 at 5:29:09 PM UTC-5, supe...@casperkitty.com wrote:
> On Thursday, December 1, 2016 at 2:52:37 PM UTC-6, Keith Thompson wrote:
....
> > N1570 6.7.6.3p7:
> > 
> >     A declaration of a parameter as "array of type" shall be adjusted
> >     to "qualified pointer to type", where the type qualifiers (if
> >     any) are those specified within the [ and ] of the array type
> >     derivation. If the keyword static also appears within the [
> >     and ] of the array type derivation, then for each call to the
> >     function, the value of the corresponding actual argument shall
> >     provide access to the first element of an array with at least
> >     as many elements as specified by the size expression.
> > 
> > "restrict" is a type qualifier, so a parameter declaration
> >     int p[restrict]
> > is equivalent to
> >     int *restrict p
> 
> Thanks.  I wonder what the reasoning was behind that rule; the rule
> appears as 6.7.5.3p7 in C99 [found by searching for the language
> therein], and nothing in the C99 rationale for 6.7.5.3 seems to mention
> it; it seems like a purely redundant way of expressing something that
> could be easily expressed in another fashion, precluding the use of the
> syntax for some other different purpose.

I'm not sure how long C has had the rule that a parameter declared as an
array was equivalent to declaring that parameter as a pointer - but I
believe it dates back to K&R C, possibly back to B. The sole purpose of
that rule was to provide, as you say, a "purely redundant way of
expressing something that could be easily expressed in another fashion".
That "redundant" way was considered convenient by many people, which is
why the decision was made to provide it. I personally wouldn't care if
that rule were dropped, but there's essentially 0% chance of it ever
being changed, because huge amounts of code, some of it dating back to
the earliest days of C, depends upon that rule.

You point out, correctly, that this use "... preclud[es] the use of the
syntax for some other different purpose." That was considered an
acceptable cost, because there's just a single plausible candidate for
that "other different purpose" that is consistent with the way the rest
of the declaration syntax works: it would declare passing an array by
value. Since the decision was made to not support passing of arrays by
value, there was no need to reserve this syntax for that purpose. You
might disagree with that decision, but it is not going to change,
because of the importance of backwards compatibility.
The syntax that would otherwise have been interpreted as passing an
array to a function has been given the semantics of instead passing a
pointer to the first element of the array (again, the need for backwards
compatibility means that this rule will not change). It is therefore
reasonable that the syntax that would otherwise be interpreted as
describing an array parameter should instead declare the pointer that
will be passed.

However, this "redundant" way of expressing a pointer declaration was an
incomplete substitute for an actual pointer declaration: there was no
way to apply a qualifier to the pointer itself. The committee decided to
make the language more consistent by making sure that both ways of
declaring a pointer parameter would be able to declare qualified
pointers.
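
To make the equivalence concrete, a small sketch (the function names below
are invented): each pair declares exactly the same function type, and the
bracketed form lets the adjusted pointer itself carry the qualifier:

    /* These two prototypes declare the same function type: */
    void copy1(int n, const double src[], double dst[]);
    void copy1(int n, const double *src, double *dst);

    /* And so do these, with restrict applied to the adjusted pointers: */
    void copy2(int n, const double src[restrict], double dst[restrict]);
    void copy2(int n, const double *restrict src, double *restrict dst);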
0
jameskuyper
12/2/2016 3:28:44 AM
On 01/12/16 22:23, Ben Bacarisse wrote:
> David Brown <david.brown@hesbynett.no> writes:
> 
>> On 01/12/16 12:20, Ben Bacarisse wrote:
>>> David Brown <david.brown@hesbynett.no> writes:
>>>
>>>> On 01/12/16 00:29, BartC wrote:
>>>>> On 30/11/2016 22:25, Jerry Stuckle wrote:
>>>>>> On 11/30/2016 9:22 AM, BartC wrote:
>>>>>>> On 30/11/2016 14:07, Jerry Stuckle wrote:
>>>>>>>> On 11/30/2016 6:25 AM, BartC wrote:
>>>>>>>
>>>>>>>>> A[i]-style indexing can only be applied when A is an array; end of.
>>>>>>>
>>>>>>>> There is no hole in the type system.
>>>>>>>
>>>>>>> There /is/ a hole where value array types would go. C decided, instead
>>>>>>> of leaving the hole, so as to trap errors when someone attempts an
>>>>>>> unsupported operation [/and to allow for the possibility of value array
>>>>>>> ops in the future/], to fill the hole with 'pointer to array first
>>>>>>> element'.
>>>>>>
>>>>>> YOU consider it a hole. No one else does.  Who do you think is more
>>>>>> likely to be wrong?  Hint: it isn't everybody else.
>>>>>     
>>>>> Have you got an opinion of your own, or do you have to depend on someone
>>>>> else's?
>>>>
>>>> Often Jerry is the one with the unique opinion contrary to everyone else
>>>> - but here he is entirely correct.
>>>
>>> I'm very surprised to see you write that.  C's array types stand out
>>> amongst all it objects types because they can't be assigned nor passed
>>> to or returned from functions.  That seems to be the hole BartC is
>>> referring to and, until I saw this post, I'd assumed it was not a
>>> contentious statement.
>>
>> What I really meant is that it is not a problem in C.  Yes, C array
>> types are a bit different from other types - but I don't see it as a
>> "hole" or a problem here.  It is just the way C works.
> <snip>
> 
> Since the term "hole" is not a technical one you can use it in any way
> you like, but BartC was clearer, saying "There /is/ a hole where value
> array types would go".  He made it clear what was missing (that is,
> after all, what a hole usually is), and it's seems we all agree on
> that[1].  The disagreement (if there is any) would be about how much it
> matters and since I can't quantify that I won't say any more.

Fair enough.  I have expressed myself badly - as you and Keith say,
"hole" is not a well-defined term, and I have being using it in a
particularly vaguer way.  All I really meant was that while I agree that
arrays are different from other types in C, they way they work is not a
problem for most C programmers.

> 
> [1] There's a technicality here -- C does have array values, it's just
> that they get converted in almost all situations so you can't do much
> with them.
> 

0
David
12/2/2016 8:39:44 AM
On 12/ 2/16 01:06 PM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
>
>> C has always been able to pass values without using registers.  In fact,
>> it's only been the last few years that has become common.
>
> That has nothing to do with what I said.  I said all of early C's values
> would fit into a machine register.

Don't forget that Jerry World is stuck somewhere in the 90s where 
machines that favour passing parameters in registers, such as SPARC, are 
recent inventions.

-- 
Ian
0
Ian
12/2/2016 8:57:10 AM
On 01/12/16 19:53, James Kuyper wrote:
> On 12/01/2016 11:05 AM, David Brown wrote:
>> On 01/12/16 16:37, supercat@casperkitty.com wrote:
>>> On Thursday, December 1, 2016 at 3:14:24 AM UTC-6, David Brown wrote:
>>>> Yes, if only C had a way to write something like:
>>>>
>>>> void foo(int array1[restrict], int array2[restrict], int start, int end)
>>>>
>>>> then the optimiser would be free to assume that array1 and array2 don't
>>>> overlap!
>>>>
>>>> Oh wait, C /does/ have a way to write that.
>>>
>>> A search of C99 "restrict" revealed an example of that syntax, and a search
>>> of N1570 finds the same example, but I couldn't find anyplace in either
>>> standard that actually described its meaning.  What am I missing?
>>
>>
>> void foo(int array1[restrict], int array2[restrict], int start, int end)
>>
>> is compatible with:
>>
>> void foo(int * restrict array1, int * restrict array2, int start, int end)
> 
> More precisely, those are two different ways of declaring precisely the
> same function prototype.

I was basing this on 6.7.6.3 Example 5, which says "The following are
all compatible function prototype declarations".  (I am using the N1570
draft of C11 standard here.)  As far as I know, these are, as you say,
different ways of declaring the same prototype.  But I could only find a
reference for them being /compatible/.

> 
>> So presumably it has the same meaning.  I did not see any clear
>> definition in the standard either.
> 
> Did you fail to find 6.7.3.1, or did you fail to consider it a clear
> definition? If the latter, I can sympathize. I think I understand that
> clause, but it's one of the more obscure parts of the standard.

I did find the section - and it is not the easiest part of the standard!
 In particular, I did not find any clear way to see what "restrict"
meant in the context of "int array1[restrict]" - and it was clarity that
Supercat wanted.  Maybe I could have studied the section harder.

> 
>>> Also, there's no reason such meaning should only be available to parameters
>>> but I don't think that syntax would be allowable in any other context.
> 
> The use of array syntax to declare what are actually pointers is allowed
> only in function parameter declarations. However, any pointer to an
> object type may be restrict-qualified.
> 
> Note: restrict doesn't say that the arrays the pointers point at cannot
> overlap, it says that if any object accessed by using an expression
> based upon a restrict-qualified pointer is also accessed within the
> scope of that pointer by any other means, the behavior is undefined.
> Consider a global 2-dimensional array named squares, representing the
> squares of a chess board, and a routine that declares a
> restrict-qualified pointer to a square, named wsq, which is only ever
> used to access the white squares on the board. If the same routine uses
> the global array directly, but only to access the black squares, then
> such use does not violate the requirements for restrict - no square is
> accessed by both methods.
> 

0
David
12/2/2016 9:05:03 AM
jameskuyper@verizon.net writes:
<snip>

> I'm not sure how long C has had the rule that a parameter declared as an
> array was equivalent to declaring that parameter as a pointer - but I
> believe it dates back to K&R C, possibly back to B.

Yes, it goes right back to the very earliest C (pre K&R) but it does not
appear in B, because B had no parameter declarations as such -- you just
wrote a name and that's it.  B used the [] syntax to define named
objects where the value of the name would be the object address rather
than the contents of a named storage location.  That's the origin of C's
treatment of arrays.  (BCPL did the same but using a keyword, VEC).

<snip>
-- 
Ben.
0
Ben
12/2/2016 10:30:27 AM
On Thursday, December 1, 2016 at 9:28:53 PM UTC-6, james...@verizon.net wrote:
> I'm not sure how long C has had the rule that a parameter declared as an
> array was equivalent to declaring that parameter as a pointer - but I
> believe it dates back to K&R C, possibly back to B.

The ability to declare parameter types *at all* only goes back as far as
C++; C compiler vendors recognized it was a useful feature and incorporated
it into C.  I'm not sure how C++ compilers handled qualifier syntax within
brackets, since it wasn't standardized until years after C89 was published.

0
supercat
12/2/2016 3:24:41 PM
On Thursday, December 1, 2016 at 9:28:53 PM UTC-6, james...@verizon.net wrote:
> You point out, correctly, that this use "... preclud[es] the use of the
> syntax for some other different purpose." That was considered an
> acceptable cost, because there's just a single plausible candidate for
> that "other different purpose" that is consistent with the way the rest
> of the declaration syntax works: it would declare passing an array by
> value. Since the decision was made to not support passing of arrays by
> value, there was no need to reserve this syntax for that purpose.

There are many cases where it would be useful for an optimizer to know,
given two pointers p1 and p2, that there will be no values of i and j with
i!=j such that p1+i and p2+j will be used to alias the same storage in
conflicting fashion.  As a reminder:

    void scale_array(double *dest, double *src, int n, double scale)
    {
      for (int i=0; i<n; i++)
        dest[i] = src[i] * scale;
    }

For purposes of optimization, it won't matter if dest and src are identical
or disjoint, but it will be important that no value written using "dest"
within the loop is read later using "src".

A declaration which would indicate that "dest" and "src" should each be
treated as the start of an array would allow a compiler to vectorize the
loop, without precluding the use of the code to scale an array "in-place".
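
For comparison, a sketch (added here, not part of the original post) of how
the same routine is usually annotated with C99's restrict; note that, unlike
the hypothetical array-start declaration described above, calling this
version with dest == src (or otherwise overlapping arguments) is undefined:

    /* restrict: the compiler may assume that nothing written through dest
       is later read through src, which permits vectorization - but
       in-place use (dest == src with elements modified) becomes undefined */
    void scale_array_r(double * restrict dest, const double * restrict src,
                       int n, double scale)
    {
        for (int i = 0; i < n; i++)
            dest[i] = src[i] * scale;
    }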

Do you think the application or optimization is obscure?
0
supercat
12/2/2016 3:35:32 PM
On 12/2/2016 3:57 AM, Ian Collins wrote:
> On 12/ 2/16 01:06 PM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>
>>> C has always been able to pass values without using registers.  In fact,
>>> it's only been the last few years that has become common.
>>
>> That has nothing to do with what I said.  I said all of early C's values
>> would fit into a machine register.
> 
> Don't forget that Jerry World is stuck somewhere in the 90s where
> machines that favour passing parameters in registers, such as SPARC, are
> recent inventions.
> 

Don't forget that Ian is a total idiot with no understanding of real
computer operations.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/2/2016 3:46:06 PM
On 12/2/2016 10:24 AM, supercat@casperkitty.com wrote:
> On Thursday, December 1, 2016 at 9:28:53 PM UTC-6, james...@verizon.net wrote:
>> I'm not sure how long C has had the rule that a parameter declared as an
>> array was equivalent to declaring that parameter as a pointer - but I
>> believe it dates back to K&R C, possibly back to B.
> 
> The ability to declare parameter types *at all* only goes back as far as
> C++; C compiler vendors recognized it was a useful feature and incorporated
> it into C.  I'm not sure how C++ compilers handled qualifier syntax within
> brackets, since wasn't standardized until years after C89 was published.
> 

Not true.  The ability to declare parameter types was there before C++
came out.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/2/2016 3:50:02 PM
On 12/02/2016 04:05 AM, David Brown wrote:
> On 01/12/16 19:53, James Kuyper wrote:
>> On 12/01/2016 11:05 AM, David Brown wrote:
....
>>> void foo(int array1[restrict], int array2[restrict], int start, int end)
>>>
>>> is compatible with:
>>>
>>> void foo(int * restrict array1, int * restrict array2, int start, int end)
>>
>> More precisely, those are two different ways of declaring precisely the
>> same function prototype.
>
> I was basing this on 6.7.6.3 Example 5, which says "The following are
> all compatible function prototype declarations".  (I am using the N1570
> draft of C11 standard here.)  As far as I know, these are, as you say,
> different ways of declaring the same prototype.  But I could only find a
> reference for them being /compatible/.

I posted a response to supercat more than 6 hours before you sent the 
message above, but your response suggests that you might not have read 
it. Therefore, I'll repeat the key points I made in that message that 
are relevant to your response:

6.7.6.3p7:
"A declaration of a parameter as ‘‘array of type’’ shall be adjusted
to ‘‘qualified pointer to type’’, where the type qualifiers (if any)
are those specified within the [ and ] of the array type derivation."

Thus, "int array1[restrict]" gets adjusted to "int * restrict array1".

....
>> Did you fail to find 6.7.3.1, or did you fail to consider it a clear
>> definition? If the latter, I can sympathize. I think I understand that
>> clause, but it's one of the more obscure parts of the standard.
>
> I did find the section - and it is not the easiest part of the standard!
>  In particular, I did not find any clear way to see what "restrict"
> meant in the context of "int array1[restrict]" - and it was clarity that
> Supercat wanted.  Maybe I could have studied the section harder.

What it means in the context of a parameter declaration like "int 
array1[restrict]", is, per 6.7.6.3p7, precisely the same as what it 
means in "int * restrict array1", because the array declaration gets 
adjusted to become the pointer declaration.
0
James
12/2/2016 4:01:58 PM
On 02/12/16 17:01, James R. Kuyper wrote:
> On 12/02/2016 04:05 AM, David Brown wrote:
>> On 01/12/16 19:53, James Kuyper wrote:
>>> On 12/01/2016 11:05 AM, David Brown wrote:
> ...
>>>> void foo(int array1[restrict], int array2[restrict], int start, int
>>>> end)
>>>>
>>>> is compatible with:
>>>>
>>>> void foo(int * restrict array1, int * restrict array2, int start,
>>>> int end)
>>>
>>> More precisely, those are two different ways of declaring precisely the
>>> same function prototype.
>>
>> I was basing this on 6.7.6.3 Example 5, which says "The following are
>> all compatible function prototype declarations".  (I am using the N1570
>> draft of C11 standard here.)  As far as I know, these are, as you say,
>> different ways of declaring the same prototype.  But I could only find a
>> reference for them being /compatible/.
> 
> I posted a response to supercat more than 6 hours before you sent the
> message above, but your response suggests that you might not have read
> it. Therefore, I'll repeat the key points I made in that message that
> are relevant to your response:
> 

I have Usenet threads in a tree view.  Usually I try to read all
existing posts before posting new ones, but sometimes I reply to one
post before I've read all the others.  I apologise for doing so here.

> 6.7.6.3p7:
> "A declaration of a parameter as ‘‘array of type’’ shall be adjusted
> to ‘‘qualified pointer to type’’, where the type qualifiers (if any)
> are those specified within the [ and ] of the array type derivation."
> 
> Thus, "int array1[restrict]" gets adjusted to "int * restrict array1".
> 
> ...
>>> Did you fail to find 6.7.3.1, or did you fail to consider it a clear
>>> definition? If the latter, I can sympathize. I think I understand that
>>> clause, but it's one of the more obscure parts of the standard.
>>
>> I did find the section - and it is not the easiest part of the standard!
>>  In particular, I did not find any clear way to see what "restrict"
>> meant in the context of "int array1[restrict]" - and it was clarity that
>> Supercat wanted.  Maybe I could have studied the section harder.
> 
> What it means in the context of a parameter declaration like "int
> array1[restrict]", is, per 6.7.6.3p7, precisely the same as what it
> means in "int * restrict array1", because the array declaration gets
> adjusted to become the pointer declaration.

Yes, I understand it now.  Thanks.

0
David
12/2/2016 4:28:25 PM
On Friday, December 2, 2016 at 10:24:53 AM UTC-5, supe...@casperkitty.com wrote:
> On Thursday, December 1, 2016 at 9:28:53 PM UTC-6, james...@verizon.net wrote:
> > I'm not sure how long C has had the rule that a parameter declared as an
> > array was equivalent to declaring that parameter as a pointer - but I
> > believe it dates back to K&R C, possibly back to B.
> 
> The ability to declare parameter types *at all* only goes back as far as
> C++; ...

The ability to declare parameter types goes all the way back to K&R C:

float ida(i, d, a)
   int i;
   double d;
   int a[];
{
   /* Body of the function. */
}

Without that ability, how would the compiler know what type they were? I
think that in B (or at least early versions of it), all supported types
were represented the same way, so there was no need to declare them. But as
soon as meaningfully different types were supported, there had to be a way
of declaring parameter types.

> ... C compiler vendors recognized it was a useful feature and incorporated
> it into C. ...

What was introduced in C++ and ported back to C was function prototypes,
which allowed you to declare parameter types in a declaration that was not
part of the function definition. This allowed the following useful features:
1) If an argument could be implicitly converted to the type of the
corresponding parameter, it would be.
2) If it could not be implicitly converted, that constituted a constraint
violation, for which at least one diagnostic was mandatory.
3) Compatibility of function pointer types could be based upon the types of
the parameters, and not just the return type of the function.
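
As a small illustrative sketch (added here; the function name is made up,
not from the post above), point 1 means that with a prototype in scope an
argument of the "wrong" type is quietly converted before the call:

    #include <stdio.h>

    int half(int x);                 /* prototype: parameter type is known */

    int main(void)
    {
        printf("%d\n", half(9.9));   /* 9.9 is converted to int 9; prints 4 */
        return 0;
    }

    int half(int x)
    {
        return x / 2;
    }
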
> ...   I'm not sure how C++ compilers handled qualifier syntax within
> brackets, since wasn't standardized until years after C89 was published.

I've lost access to my only copy of C++98, but as I remember it C++98 did
not allow such qualifiers. I think this may have been a feature that a
later version of C++ borrowed from C99 (but without documentation to back
it up, I wouldn't recommend relying upon the accuracy of that memory).
0
jameskuyper
12/2/2016 4:54:34 PM
supercat@casperkitty.com writes:

> On Thursday, December 1, 2016 at 9:28:53 PM UTC-6, james...@verizon.net wrote:
>> I'm not sure how long C has had the rule that a parameter declared as an
>> array was equivalent to declaring that parameter as a pointer - but I
>> believe it dates back to K&R C, possibly back to B.
>
> The ability to declare parameter types *at all* only goes back as far as
> C++;

Perhaps a better way to put it is to say that in K&R C you could half
declare them -- they were declared for the function body but not for the
caller:

  double sin(x)
  double x;
  {
     ... function body ...
  }

<snip>
-- 
Ben.
0
Ben
12/2/2016 5:20:24 PM
On Friday, December 2, 2016 at 9:49:53 AM UTC-6, Jerry Stuckle wrote:
> Not true.  The ability to declare parameter types was there before C++
> came out.

Sorry--I meant the ability to declare argument types as part of the function
header.  Previously, a function declaration would have been:

    int foo();

and the definition:

    int foo(bar, boz)
      int bar;
      char *boz;
    {
      printf("bar=%d boz=%s", bar, boz);
    }

Prior to C++, no code would have used a declaration like:

    int foo(int bar, char boz[]);

and thus no code would have been broken by a requirement that pointer
arguments whose type is specified at all must be specified using pointer
syntax.

    int foo(int bar, char *boz);

Even if old C allowed [] for parameters declared between the header and
the function body, that wouldn't have necessitated allowing the syntax
for parameters declared within the function header.

It could be argued that the wheels fell off prior to C++, when typedef
was added to the language, since that's probably the point where the
disparate treatment of arrays declared before a function body became a
semantic rather than syntactic issue, but the remedy would have been to
have a new syntax which, given a non-array type, would declare something
of that type, but which if given an array type would declare something of
a pointer-to-element type.  Code wanting the old semantics with a new-
style function header could have declared the appropriate parameter with
that header, and code which wants a local variable of the same type could
declare that (something that's presently not possible).
0
supercat
12/2/2016 5:39:22 PM
On Friday, December 2, 2016 at 10:35:39 AM UTC-5, supe...@casperkitty.com wrote:
....
> There are many cases where it would be useful for an optimizer to know,
> given two pointers p1 and p2, that there will be no values of i and j with
> i!=j such that p1+i and p2+j will be used to alias the same storage in
> conflicting fashion.  As a reminder:
> 
>     void scale_array(double *dest, double *src, int n, double scale)
>     {
>       for (int i=0; i<n; i++)
>         dest[i] = src[i] * scale;
>     }
> 
> For purposes of optimization, it won't matter if dest and src are identical
> or disjoint, but it will be important that no value written using "dest"
> within the loop is read later using "src".

The restrict qualifier enables precisely those optimizations, but has a
more general meaning.

> A declaration which would indicate that "dest" and "src" should each be
> treated as the start of an array would allow a compiler to vectorize the
> loop, ...

The restrict qualifier imposes a more general requirement, which is a
better fit to what the compiler needs to know to justify performing the
relevant optimizations. It doesn't just require that dest and src will
never be used to access the same object, it requires that any object
accessed, directly or indirectly, by using a pointer based upon 'dest' will
never be accessed by ANY other method within the scope of that declaration.
For example, if the function accesses an external object, the compiler
doesn't have to worry about the possibility that a pointer based upon dest
might also be used to access that object. This is more precisely what the
optimizer needs - only knowing that dest and src don't alias each other is
insufficient.

Furthermore, restrict applies to all pointers based upon the one it's
applied to. That means that the same restrictions apply to dest[-5] as to
dest[5]; it doesn't have to point at the start of the array. Again, this is
more precisely what the optimizer needs, if the code ever applies a
negative offset to the pointer.

There's a point which your explanation doesn't address. Let's use
_ArrayStart as a keyword applying the feature you're proposing to the
following identifier. Consider the following code:

void dummy(size_t m, size_t n,
    char * _ArrayStart dest, const char * _ArrayStart src)
{
   size_t i, j;
   for(j=0; j<n; j += 2)
      for(i=0; i<m; i += 2)
          if(dest[j] == src[i])
              dest[j] = '\0';
}

int func()
{
    char array[256];
    // Fill in array

    dummy(128, 128, array, array+1);
}

Would such code violate the requirements imposed by your new feature?
Would that feature enable optimization based upon not worrying about the
possibility that src[j] might alias the same location in memory as
dest[i]? If "restrict" were used in place of _ArrayStart, the answers to
those questions would be "No" and "Yes", respectively.
0
jameskuyper
12/2/2016 5:52:26 PM
On Friday, December 2, 2016 at 12:39:30 PM UTC-5, supe...@casperkitty.com wrote:
> On Friday, December 2, 2016 at 9:49:53 AM UTC-6, Jerry Stuckle wrote:
> > Not true.  The ability to declare parameter types was there before C++
> > came out.
> 
> Sorry--I meant the ability to declare argument types as part of the
> function header.

"header" is not a term that the C standard uses for that purpose. It only
uses it for C standard library headers.
I believe that what you're referring to is what the C standard calls a
function declaration, which can appear either on its own, or as the
starting part of a function definition.
Most headers include function declarations, but "header" refers to the
entire thing, which in general might include macro, typedef, and struct
definitions, not just function declarations.

If that's what you were referring to, how does it justify objecting to my
statement:

> I'm not sure how long C has had the rule that a parameter declared as an
> array was equivalent to declaring that parameter as a pointer - but I
> believe it dates back to K&R C, possibly back to B.

In K&R C, it was not possible to declare a parameter as having any
particular type in the function declaration, neither as an array, nor as a
pointer, nor as a float, a double, a char or an int. The only place you
could declare a function parameter as having an array type was in the
declaration list that precedes the body of the function - and in that
location, the rule I described did apply.

> Prior to C++, no code would have used a declaration like:
>
>     int foo(int bar, char boz[]);

True.

> and thus no code would have been broken by a requirement that pointer
> arguments whose type is specified at all must be specified using pointer
> syntax.

False. That rule would break the following code:

    int foo() int bar; char boz[]; {/* body of foo */}

boz is a pointer parameter of foo() declared using array syntax, violating
the rule you're suggesting.

>     int foo(int bar, char *boz);
> 
> Even if old C allowed [] for parameters declared between the header and
> the function body, ...

It did.

> ... that wouldn't have necessitated allowing the syntax
> for parameters declared within the function header.

Well, yes, because old C didn't allow declarations of parameters in the
function declaration at all - neither as arrays nor as pointers. All that
was allowed was an identifier list.
0
jameskuyper
12/2/2016 6:17:11 PM
On Friday, December 2, 2016 at 11:52:41 AM UTC-6, james...@verizon.net wrote:
> On Friday, December 2, 2016 at 10:35:39 AM UTC-5, supercat wrote:
> > There are many cases where it would be useful for an optimizer to know,
> > given two pointers p1 and p2, that there will be no values of i and j
> > with i!=j such that p1+i and p2+j will be used to alias the same
> > storage in conflicting fashion.

> The restrict qualifier enables precisely those optimizations, but has a
> more general meaning.

The meaning of "restrict" is broader, which means that it gives the
compiler more information than what I'm suggesting in the cases where
it would be usable, but also means that it is usable in fewer situations
(including the one I illustrated) unless one wants to duplicate code.

> The restrict qualifier imposes a more general requirement, which is
> a better fit to what the compiler needs to know to justify performing
> the relevant optimizations.

In some cases a compiler would benefit from the additional information; in
other cases, it could perform useful optimizations just fine without it.

> It doesn't just require that dest and src will never be used to access
> the same object, it requires that any object accessed, directly or
> indirectly, by using a pointer based upon 'dest' will never be accessed
> by ANY other method within the scope of that declaration. For example,
> if the function accesses an external object, the compiler doesn't have
> to worry about the possibility that a pointer based upon dest might also
> be used to access that object. This is more precisely what the optimizer
> needs - only knowing that dest and src don't alias each other is
> insufficient.

A compiler looking at something like the function I posted would have no
trouble determining that nothing accessed using the two pointers would be
accessed via any other means during its execution.  If my function made
calls to outside code, a "restrict" qualifier would tell the compiler that
such outside code would not access any part of the array which my function
wrote, nor write to any part of the array which my function read.  A useful
thing for it to know, if true.  In cases where it wasn't true, however, no
existing qualifier would let the compiler vectorize the loop even if no
outside calls occurred *within* the loop.

> Furthermore, restrict applies to all pointers based upon the one it's
> applied to. That means that the same restrictions apply to dest[-5] as to
> dest[5]; it doesn't have to point at the start of the array. Again, this
> is more precisely what the optimizer needs, if the code ever applies a
> negative offset to the pointer.

Vectorized code wouldn't care if src[3] aliases dest[3], src[4] aliases
dest[4], etc.  Problems would occur, however, if src[4] aliases dest[3].
While it wouldn't be necessary that the pointer identify the start of an
actual array object, what would be important is that the only way an
object could be identified using two different array-aligned pointer
variables would be if the variables held the same address.

> There's a point which your explanation doesn't address. Let's use
> _ArrayStart as a keyword applying the feature you're proposing to the
> following identifier. Consider the following code:
> 
> void dummy(size_t m, size_t n,
>     char * _ArrayStart dest, const char * _ArrayStart src)
> {
>    size_t i, j;
>    for(j=0; j<n; j += 2)
>       for(i=0; i<m; i += 2)
>           if(dest[j] == src[i])
>               dest[j] = '\0';
> }
> 
> int func()
> {
>     char array[256];
>     // Fill in array
> 
>     dummy(128, 128, array, array+1);
> }
>
> Would such code violate the requirements imposed by your new feature?
> Would that feature enable optimization based upon not worrying about
> the possibility that src[j] might alias the same location in memory as
> dest[i]? If "restrict" were used in place of _ArrayStart, the answers to
> those questions would be "No" and "Yes", respectively.

My rule would essentially say that an object will not be accessed via two
_ArrayStart-qualified pointers using the [] operator unless the two pointers
hold the same address.  The code you gave would be legal with my qualifiers
attached, though my qualifier wouldn't help much with vectorization since
many items of src[] are used in the computation of each value of dest[].

Note that if the function were rewritten as:

> void dummy(size_t m, size_t n,
>     char * _ArrayStart dest, const char * _ArrayStart src)
> {
>    size_t i, j;
>    for(j=0; j<n; j += 2)
>       for(i=1; i<m; i += 2)
>           if(dest[j] == src[i])
>               dest[j] = '\0';
> }

calling it with equal values for src and dest would be equivalent to calling
the old function with the indicated values, and the compiler would be able
to vectorize it since it would be able to tell that src[] was only accessed
with odd indices and dest[] was only accessed with even indices, and thus
they couldn't alias.
0
supercat
12/2/2016 6:44:26 PM
On Friday, December 2, 2016 at 12:17:19 PM UTC-6, james...@verizon.net wrote:
> If that's what you were referring to, how does it justify objecting to my
> statement:
> 
> > I'm not sure how long C has had the rule that a parameter declared as an
> > array was equivalent to declaring that parameter as a pointer - but I
> > believe it dates back to K&R C, possibly back to B.

My point was that the syntax in which the types appeared within the first
set of parentheses was new with C++, and so anything having to do with the
behavior of such types would have been defined at that time.

> In K&R C, it was not possible to declare a parameter as having any
> particular type in the function declaration, neither as an array, nor as
> a pointer, nor as a float, a double, a char or an int. The only place you
> could declare a function parameter as having an array type was in the
> declaration list that precedes the body of the function - and in that
> location, the rule I described did apply.

True, but I see no particular reason the same rule would have had to have
been carried through when the new function syntax was defined.  The rule
may have made some sense in languages which did not allow any kinds of
aggregates to be passed by value, but the addition of the new function
declaration syntax and deprecation of the old one would have been a perfect
opportunity to deprecate the use of the array-to-element-pointer transform
as well.  Old-style function declarations would interpret "int boz[];" in
the pre-body argument list like "int *boz", but that doesn't mean that new-
style declarations had to do likewise.  It was the C++ decision to make
them do so which compelled C to do likewise.

> > Prior to C++, no code would have used a declaration like:
> > 
> >     int foo(int bar, char boz[]);
> 
> True.
> 
> > and thus no code would have been broken by a requirement that pointer
> > arguments whose type is specified at all must be specified using pointer
> > syntax.
> 
> False. That rule would break the following code:
> 
>     int foo() int bar; char boz[]; {/* body of foo */}

> boz is a pointer parameter of foo() declared using array syntax, violating
> the rule you're suggesting.

I should have said "pointer arguments *in new-style declarations*".

> >     int foo(int bar, char *boz);
> > 
> > Even if old C allowed [] for parameters declared between the header and
> > the function body, ...

> 
> It did.
> 
> > ... that wouldn't have necessitated allowing the syntax
> > for parameters declared within the function header.

> Well, yes, because old C didn't allow declarations of parameters in the
> function declaration at all - neither as arrays nor as pointers. All that
> was allowed was an identifier list.

By "function header" I meant the part up to the closing paren following
the argument list.  Given

    int foo(foo,boz)  // 1
      int foo,boz[];    // 2
    {
    }

I'm not sure what term would best describe the line marked //2, but when
I said "function header" I was intending to refer only to the portion
marked //1 which could be common to the definition and declaration.
0
supercat
12/2/2016 7:58:50 PM
On 12/02/2016 02:46 PM, Keith Thompson wrote:
> jameskuyper@verizon.net writes:
>> On Friday, December 2, 2016 at 12:39:30 PM UTC-5, supe...@casperkitty.com wrote:
> [...]
>>> and thus no code would have been broken by a requirement that pointer
>>> arguments whose type is specified at all must be specified using pointer
>>> syntax.
>>
>> False. That rule would break the following code:
>>
>>     int foo() int bar; char boz[]; {/* body of foo */}
>
> The correct (old-style) syntax for that would be:
>
>     int foo(bar, boz) int bar; char boz[]; {/* body of foo */ }

You're right - it's been a long time since I made serious use of that 
syntax. I've been using almost exclusively function prototypes since the 
first time I used a compiler that supported them.


0
James
12/2/2016 8:02:44 PM
On 12/2/2016 12:39 PM, supercat@casperkitty.com wrote:
> On Friday, December 2, 2016 at 9:49:53 AM UTC-6, Jerry Stuckle wrote:
>> Not true.  The ability to declare parameter types was there before C++
>> came out.
> 
> Sorry--I meant the ability to declare argument types as part of the function
> header.  Previously, a function declaration would have been:
> 
>     int foo();
> 
> and the definition:
> 
>     int foo(bar, boz)
>       int bar;
>       char *boz;
>     {
>       printf("bar=%d boz=%s", bar, boz);
>     }
> 
> Prior to C++, no code would have used a declaration like:
> 
>     int foo(int bar, char boz[]);
> 
> and thus no code would have been broken by a requirement that pointer
> arguments whose type is specified at all must be specified using pointer
> syntax.
> 
>     int foo(int bar, char *boz);
> 
> Even if old C allowed [] for parameters declared between the header and
> the function body, that wouldn't have necessitated allowing the syntax
> for parameters declared within the function header.
>

Also not true.  There were C compilers in the mid 80's which supported
both the original and what is now the standard function declaration syntax.

The syntax became "official" with the first ANSI standard, but that was
after it was available.

> It could be argued that the wheels fell off prior to C++, when typedef
> was added to the language, since that's probably the point where the
> disparate treatment of arrays declared before a function body became a
> semantic rather than syntactic issue, but the remedy would have been to
> have a new syntax which, given a non-array type, would declare something
> of that type, but which if given an array type would declare something of
> a pointer-to-element type.  Code wanting the old semantics with a new-
> style function header could have declare the appropriate parameter with
> that header, and code which wants a local variable of the same type could
> declare that (something that's presently not possible).
> 

typedef was a great addition to the language.  It made the code less
dependent on the basic types used by that compiler.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/2/2016 8:31:47 PM
On Friday, December 2, 2016 at 2:59:02 PM UTC-5, supe...@casperkitty.com wrote:
> On Friday, December 2, 2016 at 12:17:19 PM UTC-6, james...@verizon.net wrote:
> > If that's what you were referring to, how does it justify objecting to my statement:
> > 
> > > I'm not sure how long C has had the rule that a parameter declared as an
> > > array was equivalent to declaring that parameter as a pointer - but I believe
> > > it dates back to K&R C, possibly back to B.
> 
> My point was that the syntax in which the types appeared within the first
> set of parentheses was new with C++, and so anything having to do the

It would have been helpful, then, if your point had been conveyed by words that
bore some faint resemblance to addressing that point. Instead, you chose to
respond to a perfectly correct statement about how parameter declarations have
always been handled, by claiming incorrectly that parameter declarations
weren't allowed prior to C++. Function prototypes changed the location of
parameter declarations, but they existed long before function prototypes were
invented.

> behavior of such types would have been defined at that time.
> > In K&R C, it was not possible to declare a parameter as having any
> > particular type in the function declaration, neither as an array, nor as a
> > pointer, nor as a float, a double, a char or an int. The only place you
> > could declare a function parameter as having an array type was in the
> > declaration list the precedes the body of the function - and in that
> > location, the rule I described did apply.
> 
> True, but I see no particular reason the same rule would have had to have
> been carried through when the new function syntax was defined.

This discussion was originally about the new ability in C99 to add qualifiers
to the leading dimension of a pointer parameter declared using array syntax.
The decision to allow pointer parameters to be declared using array syntax in
function prototypes was made in C90; the C99 feature was added in a context
where that decision had already been made, and could not be revoked without
breaking existing code. People who like that syntax more than you do wanted to
make it more useful by allowing the resulting pointer to be qualified, and the
committee agreed that it would be a good idea.

The C90 decision wasn't made because it had to be made - it was made because
the committee thought it would be a good idea. They wanted parameter
declarations in C90 function prototypes to look pretty much the same as
parameter declarations in K&R function definitions. In particular, that means
that the meaning of those declarations needed to be the same: since an array
parameter declared in a K&R function definition was treated as the declaration
of a pointer parameter, consistency required that it have the same effect when
declared in a function prototype. Furthermore, even if consistency hadn't been
an issue, the same people who liked to declare pointer parameters in K&R C
using array syntax still wanted to do so in C90, and the committee saw nothing
wrong with continuing to support them.

You can feel free to value such consistency less than the committee did, which
is why I'm glad you're not on the committee. As a result of their concerns
about consistency, the transition period (which lasted decades) was a lot
easier: that's what made it possible to define simple macros that expanded into
either a K&R declaration or a function prototype, depending upon which version
of C was being used.
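
The idiom being described looked roughly like this (a sketch from memory;
the macro name PROTO is just a common convention, not anything mandated):

    /* expands to a real prototype under a Standard C compiler, and to an
       empty (old-style) parameter list under a K&R compiler */
    #ifdef __STDC__
    #define PROTO(args) args
    #else
    #define PROTO(args) ()
    #endif

    int foo PROTO((int bar, char *boz));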

I personally made little use of such macros - I built all new code using
prototypes as soon as I had access to a compiler which supported them. When
necessary, I converted old code to use prototypes, and almost always discovered
defects in the code that were revealed as violations of the constraints imposed
by prototypes. I refused to use compilers that didn't support them.

However, other people didn't have that luxury. People who needed to be able to
compile their code using K&R compilers relied upon such macros for decades
after function prototypes were first introduced. Some people are still using
those macros even though they no longer have such a need (which I disapprove
of, but understand).

> ...  The rule
> may have made some sense in languages which did not allow any kinds of
> aggregates to be passed by value, but the addition of the new function
> declaration syntax and deprecation of the old one would have been a perfect
> opportunity to deprecate the use of the array-to-element-pointer transform
> as well.

If they'd thought it would be a good idea to allow passing array arguments,
they might have considered that approach - but I doubt that they thought it
would be a good idea. You're free to disagree.

> > > Even if old C allowed [] for parameters declared between the header and
> > > the function body, ...
> > 
> > It did.
> > 
> > > ... that wouldn't have necessitated allowing the syntax
> > > for parameters declared within the function header.
> 
> > Well, yes, because old C didn't allow declarations of parameters in the
> > function declaration at all - neither as arrays nor as pointers. All that
> > was allowed was an identifier list.
> 
> By "function header" I meant the part up to the closing paren following
> the argument list.  Given
>
>     int foo(foo,boz)  // 1
>       int foo,boz[];    // 2
>     {
>     }
> 
> I'm not sure what term would best describe the line marked //2, but when
> I said "function header" I was intending to refer only to the portion
> marked //1 which could be common to the definition and declaration.

The C standard defines no name that covers all of line //2. The above code
parses as (6.9p1):

    declaration-specifiers: int
    declarator: foo(foo, boz)
    declaration_list: int foo, boz[]
    compound statement: {}

The declarator can be further broken down as (6.7.6p1): 
    direct-declarator: foo
    identifier list: foo, boz

It's the identifier_list that marks this as a K&R declaration; a
parameter_type_list in the same location would have made it a prototype. The
definition of a function with parameters must have either a parameter_type_list
or a declaration_list - it can't have both (6.9.1).
0
jameskuyper
12/2/2016 9:25:13 PM
On Friday, December 2, 2016 at 2:31:42 PM UTC-6, Jerry Stuckle wrote:
> Also not true.  There were C compilers in the mid 80's which supported
> both the original and what is now the standard function declaration syntax.

Yes, but were there any such compilers before the new syntax appeared in C++?

> The syntax became "official" with the first ANSI standard, but that was
> after it was available.

Of course; I've used compilers from that era.  I think they got the syntax
from C++, though, didn't they?

> > It could be argued that the wheels fell off prior to C++, when typedef
> > was added to the language, since that's probably the point where the
> > disparate treatment of arrays declared before a function body became a
> > semantic rather than syntactic issue, but the remedy would have been to
> > have a new syntax which, given a non-array type, would declare something
> > of that type, but which if given an array type would declare something of
> > a pointer-to-element type.  Code wanting the old semantics with a new-
> > style function header could have declare the appropriate parameter with
> > that header, and code which wants a local variable of the same type could
> > declare that (something that's presently not possible).
> > 
> 
> typedef was a great addition to the language.  It made the code less
> dependent on the basic types used by that compiler.

Of course typedef is useful when applied to things that behave like value
objects; I'm less convinced of the utility of the way it is defined for
arrays.  Given:

    int foo(thing bar)
    {
      thing boz;
      ...
    }

it would seem logical that "bar" and "boz" should have the same type; even
if use of [] syntax within a parameter list of whatever form would get
reinterpreted as a pointer, that wouldn't imply that array types should
sport that same behavior in cases which don't use that syntax.
0
supercat
12/2/2016 9:34:58 PM
On 12/2/2016 4:34 PM, supercat@casperkitty.com wrote:
> On Friday, December 2, 2016 at 2:31:42 PM UTC-6, Jerry Stuckle wrote:
>> Also not true.  There were C compilers in the mid 80's which supported
>> both the original and what is now the standard function declaration syntax.
> 
> Yes, but were there any such compilers before the new syntax appeared in C++?
>

That's what I said.  The first version of C++ from AT&T Labs (Version
"E" for "Educational") was released to universities in 1986.  it was
buggy and not fit for production; more of a "proof of concept".  Several
C compilers already had support for the new syntax. (BTW - the first
production C++ compiler was released in 1987 - still buggy, but better.
In 1988 the first compiler that really could be used in production was
released).

>> The syntax became "official" with the first ANSI standard, but that was
>> after it was available.
> 
> Of course; I've used compilers from that era.  I think they got the syntax
> from C++, though, didn't they?
> 

Nope, the compilers were using it before C++ came out.

>>> It could be argued that the wheels fell off prior to C++, when typedef
>>> was added to the language, since that's probably the point where the
>>> disparate treatment of arrays declared before a function body became a
>>> semantic rather than syntactic issue, but the remedy would have been to
>>> have a new syntax which, given a non-array type, would declare something
>>> of that type, but which if given an array type would declare something of
>>> a pointer-to-element type.  Code wanting the old semantics with a new-
>>> style function header could have declare the appropriate parameter with
>>> that header, and code which wants a local variable of the same type could
>>> declare that (something that's presently not possible).
>>>
>>
>> typedef was a great addition to the language.  It made the code less
>> dependent on the basic types used by that compiler.
> 
> Of course typedef is useful when applied to things that behave like value
> objects; I'm less convinced of the utility of the way it is defined for
> arrays.  Given:
> 
>     int foo(thing bar)
>     {
>       thing boz;
>       ...
>     }
> 
> it would seem logical that "bar" and "boz" should have the same type; even
> if use of [] syntax within a parameter list of whatever form would get
> reinterpreted as a pointer, that wouldn't imply that array types should
> sport that same behavior in cases which don't use that syntax.
> 

They will have exactly the same type.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/3/2016 12:32:43 AM
jameskuyper@verizon.net writes:
> On Saturday, December 3, 2016 at 11:15:35 AM UTC-5, David Brown wrote:
>> On 03/12/16 16:32, James R. Kuyper wrote:
> ...
>> > C2011 adds a feature that allows us to check this directly (which won't
>> > necessarily convince Jerry, either):
>> >
>> > #include <stdio.h>
>> > #define TYPE(a) _Generic(a, \
>> >     int*:"int *", int[4]:"int[4]", default:"other")
>> >
>> > typedef int thing[4];
>> > void foo(thing bar)
>> > {
>> >    thing boz;
>> >    printf("%s:%s\n", "bar", TYPE(bar));
>> >    printf("%s:%s\n", "boz", TYPE(boz));
>> > }
>> >
>> > int main(int argc, char *argv[])
>> > {
>> >     foo(&argc);
>> >
>> >     return 0;
>> > }
>> >
>> > Unfortunately, the compiler I currently have access to doesn't support
>> > C2011. Therefore, I can't confirm whether I've used _Generic correctly,
>> > nor can I confirm the results. Could someone who has a compiler that
>> > does support _Generic() check this out?
>> 
>> I had a look using <https://godbolt.org>, but it looks like both get 
>> printed as "int *".  Maybe _Generic also turns arrays into int* like 
>> parameter passing?  I don't know the details of _Generic well enough to 
>> tell.
>
> "Except when it is the operand of the sizeof operator, the _Alignof
> operator, or the unary & operator, or is a string literal used to
> initialize an array, an expression that has type ‘‘array of type’’ is
> converted to an expression with type ‘‘pointer to type’’ that points
> to the initial element of the array object and is not an lvalue."
> (6.3.2.1p3)
>
> I think there should also have been an exception for _Generic, but
> that's not what was actually done. Therefore, my suggestion is not as
> useful as I thought it would be.

But it can be made useful by taking the address of the operand:

#include <stdio.h>
#define TYPE(a) _Generic(&a, \
    int**:"int *", int(*)[4]:"int[4]", default:"other")

typedef int thing[4];
void foo(thing bar)
{
   thing boz;
   printf("%s:%s\n", "bar", TYPE(bar));
   printf("%s:%s\n", "boz", TYPE(boz));
}

int main(int argc, char *argv[])
{
    foo(&argc);

    return 0;
}

Output:

bar:int *
boz:int[4]

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/3/2016 1:01:01 AM
On 03/12/2016 00:32, Jerry Stuckle wrote:
> On 12/2/2016 4:34 PM, supercat@casperkitty.com wrote:

>> Of course typedef is useful when applied to things that behave like value
>> objects; I'm less convinced of the utility of the way it is defined for
>> arrays.  Given:
>>
>>     int foo(thing bar)
>>     {
>>       thing boz;
>>       ...
>>     }
>>
>> it would seem logical that "bar" and "boz" should have the same type; even
>> if use of [] syntax within a parameter list of whatever form would get
>> reinterpreted as a pointer, that wouldn't imply that array types should
>> sport that same behavior in cases which don't use that syntax.
>>
>
> They will have exactly the same type.

Interesting. When I run the  code below, I get this output:

  sizeof(bar) = 4
  sizeof(boz) = 16

If they're exactly the same type, how do you explain this result?

--------------------------------
#include <stdio.h>

typedef int thing[4];

void foo(thing bar){
   thing boz;

   printf("sizeof(bar) = %d\n",sizeof bar);
   printf("sizeof(boz) = %d\n",sizeof boz);
}

int main(void) {
   thing x;
   foo(x);
}

-- 
Bartc
0
BartC
12/3/2016 1:11:06 AM
On 12/2/2016 8:11 PM, BartC wrote:
> On 03/12/2016 00:32, Jerry Stuckle wrote:
>> On 12/2/2016 4:34 PM, supercat@casperkitty.com wrote:
> 
>>> Of course typedef is useful when applied to things that behave like
>>> value
>>> objects; I'm less convinced of the utility of the way it is defined for
>>> arrays.  Given:
>>>
>>>     int foo(thing bar)
>>>     {
>>>       thing boz;
>>>       ...
>>>     }
>>>
>>> it would seem logical that "bar" and "boz" should have the same type;
>>> even
>>> if use of [] syntax within a parameter list of whatever form would get
>>> reinterpreted as a pointer, that wouldn't imply that array types should
>>> sport that same behavior in cases which don't use that syntax.
>>>
>>
>> They will have exactly the same type.
> 
> Interesting. When I run the  code below, I get this output:
> 
>  sizeof(bar) = 4
>  sizeof(boz) = 16
> 
> If they're exactly the same type, how do you explain this result?
> 
> --------------------------------
> #include <stdio.h>
> 
> typedef int thing[4];
> 
> void foo(thing bar){
>   thing boz;
> 
>   printf("sizeof(bar) = %d\n",sizeof bar);
>   printf("sizeof(boz) = %d\n",sizeof boz);
> }
> 
> int main(void) {
>   thing x;
>   foo(x);
> }
> 

Because both are of type int[] or int * const.  You can use either
syntax with either bar or boz.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/3/2016 1:48:15 AM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 12/1/2016 9:22 PM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> 
>>> On 12/1/2016 7:06 PM, Ben Bacarisse wrote:
>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
<snip>
>>>>> Yes, and even the first edition of K&R had structures.
>>>>
>>>> But they could not be assigned nor passed into or out of functions.
>>>
>>> Sure they could.
>> 
>> Page 121: "structures may not be assigned to or copied as a unit, and
>> ... they can not be passed to or returned from functions".  Automatic
>> structures could not even be initialised.
>
> Sorry, I can't look at it right now.  I'm not in my office.

Gosh.  You think I'd make up a quote?  Extraordinary.  Let me know when
you've looked it up.

<snip>
-- 
Ben.
0
Ben
12/3/2016 1:56:35 AM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 12/2/2016 8:11 PM, BartC wrote:
>> On 03/12/2016 00:32, Jerry Stuckle wrote:
>>> On 12/2/2016 4:34 PM, supercat@casperkitty.com wrote:
>> 
>>>> Of course typedef is useful when applied to things that behave like
>>>> value
>>>> objects; I'm less convinced of the utility of the way it is defined for
>>>> arrays.  Given:
>>>>
>>>>     int foo(thing bar)
>>>>     {
>>>>       thing boz;
>>>>       ...
>>>>     }
>>>>
>>>> it would seem logical that "bar" and "boz" should have the same type;
>>>> even
>>>> if use of [] syntax within a parameter list of whatever form would get
>>>> reinterpreted as a pointer, that wouldn't imply that array types should
>>>> sport that same behavior in cases which don't use that syntax.
>>>>
>>>
>>> They will have exactly the same type.
>> 
>> Interesting. When I run the  code below, I get this output:
>> 
>>  sizeof(bar) = 4
>>  sizeof(boz) = 16
>> 
>> If they're exactly the same type, how do you explain this result?
>> 
>> --------------------------------
>> #include <stdio.h>
>> 
>> typedef int thing[4];
>> 
>> void foo(thing bar){
>>   thing boz;
>> 
>>   printf("sizeof(bar) = %d\n",sizeof bar);
>>   printf("sizeof(boz) = %d\n",sizeof boz);
>> }
>> 
>> int main(void) {
>>   thing x;
>>   foo(x);
>> }
>> 
>
> Because both are of type int[] or int * const.  You can use either
> syntax with either bar or boz.

No, bar has type int * and boz has type int [4].

-- 
Ben.
0
Ben
12/3/2016 2:11:36 AM
On 12/2/2016 8:56 PM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
> 
>> On 12/1/2016 9:22 PM, Ben Bacarisse wrote:
>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>
>>>> On 12/1/2016 7:06 PM, Ben Bacarisse wrote:
>>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
> <snip>
>>>>>> Yes, and even the first edition of K&R had structures.
>>>>>
>>>>> But they could not be assigned nor passed into or out of functions.
>>>>
>>>> Sure they could.
>>>
>>> Page 121: "structures may not be assigned to or copied as a unit, and
>>> ... they can not be passed to or returned from functions".  Automatic
>>> structures could not even be initialised.
>>
>> Sorry, I can't look at it right now.  I'm not in my office.
> 
> Gosh.  You think I'd make up a quote?  Extraordinary.  Let me know when
> you've looked it up.
> 
> <snip>
> 

I don't know what you would do, Ben.  But it will be at least next week
before I get a chance to look it up.  That is, if I care enough to bother.

But I do know that by 1984 or so, everything was passed on the stack.
And from what I've seen of earlier code, the same was true, because you
could pass more parameters than there were registers.  But there was
basically no optimization in those compilers, either.

More recent compiler versions started optimizing the code, and one of
the first optimizations was to pass some values in registers.  But even
in early compiler versions that was only possible when the called
function was in the same compilation unit.  Newer compilers have done
even better.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/3/2016 2:25:08 AM
On 12/2/2016 9:11 PM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
> 
>> On 12/2/2016 8:11 PM, BartC wrote:
>>> On 03/12/2016 00:32, Jerry Stuckle wrote:
>>>> On 12/2/2016 4:34 PM, supercat@casperkitty.com wrote:
>>>
>>>>> Of course typedef is useful when applied to things that behave like
>>>>> value
>>>>> objects; I'm less convinced of the utility of the way it is defined for
>>>>> arrays.  Given:
>>>>>
>>>>>     int foo(thing bar)
>>>>>     {
>>>>>       thing boz;
>>>>>       ...
>>>>>     }
>>>>>
>>>>> it would seem logical that "bar" and "boz" should have the same type;
>>>>> even
>>>>> if use of [] syntax within a parameter list of whatever form would get
>>>>> reinterpreted as a pointer, that wouldn't imply that array types should
>>>>> sport that same behavior in cases which don't use that syntax.
>>>>>
>>>>
>>>> They will have exactly the same type.
>>>
>>> Interesting. When I run the  code below, I get this output:
>>>
>>>  sizeof(bar) = 4
>>>  sizeof(boz) = 16
>>>
>>> If they're exactly the same type, how do you explain this result?
>>>
>>> --------------------------------
>>> #include <stdio.h>
>>>
>>> typedef int thing[4];
>>>
>>> void foo(thing bar){
>>>   thing boz;
>>>
>>>   printf("sizeof(bar) = %d\n",sizeof bar);
>>>   printf("sizeof(boz) = %d\n",sizeof boz);
>>> }
>>>
>>> int main(void) {
>>>   thing x;
>>>   foo(x);
>>> }
>>>
>>
>> Because both are of type int[] or int * const.  You can use either
>> syntax with either bar or boz.
> 
> No, bar has type int * and boz has type int [4].
> 

Yes and no.  You can use either pointer or array syntax with either
variable.  They are effectively the same type - int[] or int *.

The only difference is you can get the number of elements in boz, but
not in bar.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/3/2016 2:27:40 AM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 12/2/2016 8:56 PM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> 
>>> On 12/1/2016 9:22 PM, Ben Bacarisse wrote:
>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>>
>>>>> On 12/1/2016 7:06 PM, Ben Bacarisse wrote:
>>>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> <snip>
>>>>>>> Yes, and even the first edition of K&R had structures.
>>>>>>
>>>>>> But they could not be assigned nor passed into or out of functions.
>>>>>
>>>>> Sure they could.
>>>>
>>>> Page 121: "structures may not be assigned to or copied as a unit, and
>>>> ... they can not be passed to or returned from functions".  Automatic
>>>> structures could not even be initialised.
>>>
>>> Sorry, I can't look at it right now.  I'm not in my office.
>> 
>> Gosh.  You think I'd make up a quote?  Extraordinary.  Let me know when
>> you've looked it up.
>> 
>> <snip>
>> 
>
> I don't know what you would do, Ben.  But it will be at least next week
> before I get a chance to look it up.  That is, if I care enough to bother.
>
> But I do know that by 1984 or so, everything was passed on the stack.

That's the wrong era and the wrong issue.  My point was about (a) early
(pre-K&R) C and (b) the sizes of the values that can be assigned and
passed about (not how they are passed).

<snip>
-- 
Ben.
0
Ben
12/3/2016 2:33:24 AM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 12/2/2016 9:11 PM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> 
>>> On 12/2/2016 8:11 PM, BartC wrote:
>>>> On 03/12/2016 00:32, Jerry Stuckle wrote:
>>>>> On 12/2/2016 4:34 PM, supercat@casperkitty.com wrote:
>>>>
>>>>>> Of course typedef is useful when applied to things that behave like
>>>>>> value
>>>>>> objects; I'm less convinced of the utility of the way it is defined for
>>>>>> arrays.  Given:
>>>>>>
>>>>>>     int foo(thing bar)
>>>>>>     {
>>>>>>       thing boz;
>>>>>>       ...
>>>>>>     }
>>>>>>
>>>>>> it would seem logical that "bar" and "boz" should have the same type;
>>>>>> even
>>>>>> if use of [] syntax within a parameter list of whatever form would get
>>>>>> reinterpreted as a pointer, that wouldn't imply that array types should
>>>>>> sport that same behavior in cases which don't use that syntax.
>>>>>>
>>>>>
>>>>> They will have exactly the same type.
>>>>
>>>> Interesting. When I run the  code below, I get this output:
>>>>
>>>>  sizeof(bar) = 4
>>>>  sizeof(boz) = 16
>>>>
>>>> If they're exactly the same type, how do you explain this result?
>>>>
>>>> --------------------------------
>>>> #include <stdio.h>
>>>>
>>>> typedef int thing[4];
>>>>
>>>> void foo(thing bar){
>>>>   thing boz;
>>>>
>>>>   printf("sizeof(bar) = %d\n",sizeof bar);
>>>>   printf("sizeof(boz) = %d\n",sizeof boz);
>>>> }
>>>>
>>>> int main(void) {
>>>>   thing x;
>>>>   foo(x);
>>>> }
>>>>
>>>
>>> Because both are of type int[] or int * const.  You can use either
>>> syntax with either bar or boz.
>> 
>> No, bar has type int * and boz has type int [4].
>> 
>
> Yes and no.

No, they are of different types.  The two types are not even compatible.

> You can use either pointer or array syntax with either
> variable.  They are effectively the same type - int[] or int *.

Even those two types are different from each other, but now that you've
dropped the const at least one of them is the type of one of the objects
in question and the other is at least compatible with the type of the
other.

> The only difference is you can get the number of elements in boz, but
> not in bar.

No, the main difference is that they are of different types.  This
produces all sorts of differences beside the difference in size.  For
example, the two types may have different alignments (they do on my
system) and pointers to the two objects are not even compatible, let
alone of the same type.  Try this, for example:

  int foo(thing bar)
  {
       thing boz;
       return &boz == &bar;
  }

-- 
Ben.
0
Ben
12/3/2016 2:50:31 AM
On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
> int foo(thing bar)
>   {
>        thing boz;
>        return &boz == &bar;
>   }

Yes, because boz is of type int * const and bar is int *.  But they are
both pointers to int and an array of int, and you can use either syntax
with either one.

Why is that so hard for you and Bart to understand?

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/3/2016 3:45:35 AM
On 03/12/2016 03:45, Jerry Stuckle wrote:
> On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
>> int foo(thing bar)
>>   {
>>        thing boz;
>>        return &boz == &bar;
>>   }
>
> Yes, because boz is of type int * const and bar is int *.  But they are
> both pointers to int and an array of int, and you can use either syntax
> with either one.

You said this: "They will have exactly the same type."

Actually, in my example, one has type int* and the other int[4]. 'const' 
doesn't come into it.

They only end up as the same type when used inside an expression, if boz 
is not an operand to & or sizeof, but neither my example nor supercat's 
used bar and boz in an expression;

> Why is that so hard for you and Bart to understand?

I don't find it hard to understand is that you haven't got a clue.

-- 
Bartc
0
BartC
12/3/2016 11:07:50 AM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
>> int foo(thing bar)
>>   {
>>        thing boz;
>>        return &boz == &bar;
>>   }
>
> Yes, because boz is of type int * const and bar is int *.

No.  See upthread for the correct types.

> But they are
> both pointers to int and an array of int, and you can use either syntax
> with either one.
>
> Why is that so hard for you and Bart to understand?

It's easy to understand.  What would be needed for you to believe that
two declared objects have different types?  If the named objects in
question had the same types they would have the same size, the same
alignment requirements, both would be assignable (or both would not be)
and pointers to them would be of compatible types (in fact the pointers
would have the same types).  All of these can be easily verified as false.
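
A minimal sketch of those checks, using invented names; the exact numbers
printed depend on the platform (on the system discussed above they would
be 4 and 16):

#include <stdio.h>

typedef int thing[4];

void foo(thing bar)
{
    thing boz;
    printf("sizeof bar = %zu, sizeof boz = %zu\n", sizeof bar, sizeof boz);
    printf("alignof:     %zu vs %zu\n", _Alignof(int *), _Alignof(thing));
    bar = boz;        /* fine: bar is an ordinary, assignable pointer     */
    /* boz = bar; */  /* constraint violation: an array is not a
                         modifiable lvalue and cannot be assigned to      */
}

int main(void)
{
    thing x = {1, 2, 3, 4};
    foo(x);
    return 0;
}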

-- 
Ben.
0
Ben
12/3/2016 11:33:08 AM
On 03/12/16 03:25, Jerry Stuckle wrote:
> On 12/2/2016 8:56 PM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>
>>> On 12/1/2016 9:22 PM, Ben Bacarisse wrote:
>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>>
>>>>> On 12/1/2016 7:06 PM, Ben Bacarisse wrote:
>>>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> <snip>
>>>>>>> Yes, and even the first edition of K&R had structures.
>>>>>>
>>>>>> But they could not be assigned nor passed into or out of functions.
>>>>>
>>>>> Sure they could.
>>>>
>>>> Page 121: "structures may not be assigned to or copied as a unit, and
>>>> ... they can not be passed to or returned from functions".  Automatic
>>>> structures could not even be initialised.
>>>
>>> Sorry, I can't look at it right now.  I'm not in my office.
>>
>> Gosh.  You think I'd make up a quote?  Extraordinary.  Let me know when
>> you've looked it up.
>>
>> <snip>
>>
>
> I don't know what you would do, Ben.  But it will be at least next week
> before I get a chance to look it up.  That is, if I care enough to bother.
>
> But I do know that by 1984 or so, everything was passed on the stack.
> And from what I've seen of earlier code, the same was true, because you
> could pass more parameters than there were registers.  But there was
> basically no optimization in those compilers, either.
>
> More recent compiler versions started optimizing the code, and one of
> the first optimizations was to pass some values in registers.  But even
> in early compiler versions that was only possible when the called
> function was in the same compilation unit.  Newer compilers have done
> even better.
>

Outside of the compilation unit, the choice of how to pass parameters is 
determined by the ABI - either compiler-specific (on weakly defined 
platforms like Windows) or system-specific (on systems with clearer 
ABIs, like *nix systems).  Especially in the latter case, the compiler
does not get the choice about how to pass parameters unless it is purely 
internal (such as calling static functions in the same compilation unit, 
or with link-time optimisation).  It doesn't matter how smart a compiler 
for 32-bit x86 Linux is, for example - /all/ parameters to external 
functions must be passed on the stack.

0
David
12/3/2016 2:20:45 PM
On 12/02/2016 09:11 PM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
>
>> On 12/2/2016 8:11 PM, BartC wrote:
>>> On 03/12/2016 00:32, Jerry Stuckle wrote:
>>>> On 12/2/2016 4:34 PM, supercat@casperkitty.com wrote:
>>>
>>>>> Of course typedef is useful when applied to things that behave like
>>>>> value
>>>>> objects; I'm less convinced of the utility of the way it is defined for
>>>>> arrays.  Given:
>>>>>
>>>>>     int foo(thing bar)
>>>>>     {
>>>>>       thing boz;
>>>>>       ...
>>>>>     }
>>>>>
>>>>> it would seem logical that "bar" and "boz" should have the same type;
>>>>> even
>>>>> if use of [] syntax within a parameter list of whatever form would get
>>>>> reinterpreted as a pointer, that wouldn't imply that array types should
>>>>> sport that same behavior in cases which don't use that syntax.
>>>>>
>>>>
>>>> They will have exactly the same type.
>>>
>>> Interesting. When I run the  code below, I get this output:
>>>
>>>  sizeof(bar) = 4
>>>  sizeof(boz) = 16
>>>
>>> If they're exactly the same type, how do you explain this result?
>>>
>>> --------------------------------
>>> #include <stdio.h>
>>>
>>> typedef int thing[4];
>>>
>>> void foo(thing bar){
>>>   thing boz;
>>>
>>>   printf("sizeof(bar) = %d\n",sizeof bar);
>>>   printf("sizeof(boz) = %d\n",sizeof boz);
>>> }
>>>
>>> int main(void) {
>>>   thing x;
>>>   foo(x);
>>> }
>>>
>>
>> Because both are of type int[] or int * const.  You can use either

Jerry's explanation for the following results:

     sizeof(bar) = 4
     sizeof(boz) = 16

is that bar and boz both have the same type?

I'd recommend adding in sizeof(int*) and sizeof(int[4]) for comparison.

>> syntax with either bar or boz.
>
> No, bar has type int * and boz has type int [4].

I'd expect any reasonably normal person to accept that if bar and boz 
have different sizes, it means they must have different types. This is 
Jerry, however, so such rules don't apply.
C2011 adds a feature that allows us to check this directly (which won't 
necessarily convince Jerry, either):

#include <stdio.h>
#define TYPE(a) _Generic(a, \
     int*:"int *", int[4]:"int[4]", default:"other")

typedef int thing[4];
void foo(thing bar)
{
    thing boz;
    printf("%s:%s\n", "bar", TYPE(bar));
    printf("%s:%s\n", "boz", TYPE(boz));
}

int main(int argc, char *argv[])
{
     foo(&argc);

     return 0;
}

Unfortunately, the compiler I currently have access to doesn't support 
C2011. Therefore, I can't confirm whether I've used _Generic correctly, 
nor can I confirm the results. Could someone who has a compiler that 
does support _Generic() check this out?
0
James
12/3/2016 3:32:11 PM
On 03/12/16 16:32, James R. Kuyper wrote:
> On 12/02/2016 09:11 PM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>
>>> On 12/2/2016 8:11 PM, BartC wrote:
>>>> On 03/12/2016 00:32, Jerry Stuckle wrote:
>>>>> On 12/2/2016 4:34 PM, supercat@casperkitty.com wrote:
>>>>
>>>>>> Of course typedef is useful when applied to things that behave like
>>>>>> value
>>>>>> objects; I'm less convinced of the utility of the way it is
>>>>>> defined for
>>>>>> arrays.  Given:
>>>>>>
>>>>>>     int foo(thing bar)
>>>>>>     {
>>>>>>       thing boz;
>>>>>>       ...
>>>>>>     }
>>>>>>
>>>>>> it would seem logical that "bar" and "boz" should have the same type;
>>>>>> even
>>>>>> if use of [] syntax within a parameter list of whatever form would
>>>>>> get
>>>>>> reinterpreted as a pointer, that wouldn't imply that array types
>>>>>> should
>>>>>> sport that same behavior in cases which don't use that syntax.
>>>>>>
>>>>>
>>>>> They will have exactly the same type.
>>>>
>>>> Interesting. When I run the  code below, I get this output:
>>>>
>>>>  sizeof(bar) = 4
>>>>  sizeof(boz) = 16
>>>>
>>>> If they're exactly the same type, how do you explain this result?
>>>>
>>>> --------------------------------
>>>> #include <stdio.h>
>>>>
>>>> typedef int thing[4];
>>>>
>>>> void foo(thing bar){
>>>>   thing boz;
>>>>
>>>>   printf("sizeof(bar) = %d\n",sizeof bar);
>>>>   printf("sizeof(boz) = %d\n",sizeof boz);
>>>> }
>>>>
>>>> int main(void) {
>>>>   thing x;
>>>>   foo(x);
>>>> }
>>>>
>>>
>>> Because both are of type int[] or int * const.  You can use either
>
> Jerry's explanation for the following results:
>
>     sizeof(bar) = 4
>     sizeof(boz) = 16
>
> is that bar and boz both have the same type?
>
> I'd recommend adding in sizeof(int*) and sizeof(int[4]) for comparison.
>
>>> syntax with either bar or boz.
>>
>> No, bar has type int * and boz has type int [4].
>
> I'd expect any reasonably normal person to accept that if bar and boz
> have different sizes, it means they must have different types. This is
> Jerry, however, so such rules don't apply.
> C2011 adds a feature that allows us to check this directly (which won't
> necessarily convince Jerry, either):
>
> #include <stdio.h>
> #define TYPE(a) _Generic(a, \
>     int*:"int *", int[4]:"int[4]", default:"other")
>
> typedef int thing[4];
> void foo(thing bar)
> {
>    thing boz;
>    printf("%s:%s\n", "bar", TYPE(bar));
>    printf("%s:%s\n", "boz", TYPE(boz));
> }
>
> int main(int argc, char *argv[])
> {
>     foo(&argc);
>
>     return 0;
> }
>
> Unfortunately, the compiler I currently have access to doesn't support
> C2011. Therefore, I can't confirm whether I've used _Generic correctly,
> nor can I confirm the results. Could someone who has a compiler that
> does support _Generic() check this out?

I had a look using <https://godbolt.org>, but it looks like both get 
printed as "int *".  Maybe _Generic also turns arrays into int* like 
parameter passing?  I don't know the details of _Generic well enough to 
tell.

0
David
12/3/2016 4:15:27 PM
On Saturday, December 3, 2016 at 11:15:35 AM UTC-5, David Brown wrote:
> On 03/12/16 16:32, James R. Kuyper wrote:
...
> > C2011 adds a feature that allows us to check this directly (which won't
> > necessarily convince Jerry, either):
> >
> > #include <stdio.h>
> > #define TYPE(a) _Generic(a, \
> >     int*:"int *", int[4]:"int[4]", default:"other")
> >
> > typedef int thing[4];
> > void foo(thing bar)
> > {
> >    thing boz;
> >    printf("%s:%s\n", "bar", TYPE(bar));
> >    printf("%s:%s\n", "boz", TYPE(boz));
> > }
> >
> > int main(int argc, char *argv[])
> > {
> >     foo(&argc);
> >
> >     return 0;
> > }
> >
> > Unfortunately, the compiler I currently have access to doesn't support
> > C2011. Therefore, I can't confirm whether I've used _Generic correctly,
> > nor can I confirm the results. Could someone who has a compiler that
> > does support _Generic() check this out?
>
> I had a look using <https://godbolt.org>, but it looks like both get
> printed as "int *".  Maybe _Generic also turns arrays into int* like
> parameter passing?  I don't know the details of _Generic well enough to
> tell.

"Except when it is the operand of the sizeof operator, the _Alignof
operator, or the unary & operator, or is a string literal used to
initialize an array, an expression that has type ‘‘array of type’’ is
converted to an expression with type ‘‘pointer to type’’ that points to
the initial element of the array object and is not an lvalue." (6.3.2.1p3)

I think there should also have been an exception for _Generic, but that's
not what was actually done. Therefore, my suggestion is not as useful as I
thought it would be.
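
A short self-contained demonstration of that effect, added as a sketch only
(the names are invented); it matches what David observed on godbolt: because
_Generic is not among the exceptions listed in 6.3.2.1p3, the controlling
expression undergoes the array-to-pointer conversion, so an int[4] lvalue is
seen as int *.

#include <stdio.h>

#define TYPE(a) _Generic((a), int *: "int *", default: "other")

int main(void)
{
    int arr[4];
    int *p = arr;
    puts(TYPE(arr));   /* prints "int *": arr decays before the selection */
    puts(TYPE(p));     /* prints "int *" as well                          */
    return 0;
}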
0
jameskuyper
12/3/2016 4:33:06 PM
On 12/3/2016 9:20 AM, David Brown wrote:
> On 03/12/16 03:25, Jerry Stuckle wrote:
>> On 12/2/2016 8:56 PM, Ben Bacarisse wrote:
>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>
>>>> On 12/1/2016 9:22 PM, Ben Bacarisse wrote:
>>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>>>
>>>>>> On 12/1/2016 7:06 PM, Ben Bacarisse wrote:
>>>>>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>> <snip>
>>>>>>>> Yes, and even the first edition of K&R had structures.
>>>>>>>
>>>>>>> But they could not be assigned nor passed into or out of functions.
>>>>>>
>>>>>> Sure they could.
>>>>>
>>>>> Page 121: "structures may not be assigned to or copied as a unit, and
>>>>> ... they can not be passed to or returned from functions".  Automatic
>>>>> structures could not even be initialised.
>>>>
>>>> Sorry, I can't look at it right now.  I'm not in my office.
>>>
>>> Gosh.  You think I'd make up a quote?  Extraordinary.  Let me know when
>>> you've looked it up.
>>>
>>> <snip>
>>>
>>
>> I don't know what you would do, Ben.  But it will be at least next week
>> before I get a chance to look it up.  That is, if I care enough to
>> bother.
>>
>> But I do know that by 1984 or so, everything was passed on the stack.
>> And from what I've seen of earlier code, the same was true, because you
>> could pass more parameters than there were registers.  But there was
>> basically no optimization in those compilers, either.
>>
>> More recent compiler versions started optimizing the code, and one of
>> the first optimizations was to pass some values in registers.  But even
>> in early compiler versions that was only possible when the called
>> function was in the same compilation unit.  Newer compilers have done
>> even better.
>>
> 
> Outside of the compilation unit, the choice of how to pass parameters is
> determined by the ABI - either compiler-specific (on weakly defined
> platforms like Windows) or system-specific (on systems with clearer
> ABIs, like *nix systems).  Especially in the latter case, the compiler
> does not get the choice about how to pass parameters unless it is purely
> internal (such as calling static functions in the same compilation unit,
> or with link-time optimisation).  It doesn't matter how smart a compiler
> for 32-bit x86 Linux is, for example - /all/ parameters to external
> functions must be passed on the stack.
> 

David, I know how it works.  From what you've posted in the past, much
better than you do.  Please don't try to make yourself look intelligent
by stating the obvious.  It doesn't work.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/3/2016 8:40:54 PM
On 12/3/2016 6:07 AM, BartC wrote:
> On 03/12/2016 03:45, Jerry Stuckle wrote:
>> On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
>>> int foo(thing bar)
>>>   {
>>>        thing boz;
>>>        return &boz == &bar;
>>>   }
>>
>> Yes, because boz is of type int * const and bar is int *.  But they are
>> both pointers to int and an array of int, and you can use either syntax
>> with either one.
> 
> You said this: "They will have exactly the same type."
> 
> Actually, in my example, one has type int* and the other int[4]. 'const'
> doesn't come into it.
>

Yes, it does, because the address pointed to by boz cannot be changed.
The address pointed to by bar can.

> They only end up as the same type when used inside an expression, if boz
> is not an operand to & or sizeof, but neither my example nor supercat's
> used bar and boz in an expression;
> 
>> Why is that so hard for you and Bart to understand?
> 
> I don't find it hard to understand is that you haven't got a clue.
> 

And you can use either pointer syntax or array syntax for either bar or
boz.  Why is that so hard for you to understand?  It's a concept even my
beginning C students understood very readily.

It's also why you can have:

char p[4] = "abc";
char * q = "def";

funca(char * p1) {
   char c1 = *p1;
   char c2 = p1[0];
}

funcb(char p2[]) {
   char c3 = *p2;
   char c4 = p3[0];
}


And then have:

funca(p);
funca(q);
funcb(p);
funcb(q);

Try it - they all work just fine, because both p and q are pointers to
the first character of arrays.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/3/2016 8:46:40 PM
On 12/3/2016 6:33 AM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
> 
>> On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
>>> int foo(thing bar)
>>>   {
>>>        thing boz;
>>>        return &boz == &bar;
>>>   }
>>
>> Yes, because boz is of type int * const and bar is int *.
> 
> No.  See upthread for the correct types.
> 
>> But they are
>> both pointers to int and an array of int, and you can use either syntax
>> with either one.
>>
>> Why is that so hard for you and Bart to understand?
> 
> It's easy to understand.  What would be needed for you to believe that
> two declared objects have different types?  If the named objects in
> question had the same types they would have the same size, the same
> alignment requirements, both would be assignable (or both would not be)
> and pointers to them would be of compatible types (in fact the pointers
> would have the same types).  All of these can be easily verified as false.
> 

If it's so easy to understand, why do you and Bart have so much trouble
understanding it?  Even my beginning C students understand such simple C
concepts.  But it seems to be beyond your level of comprehension.

See my response to Bart for another example.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/3/2016 8:48:11 PM
"James R. Kuyper" <jameskuyper@verizon.net> writes:

> On 12/02/2016 09:11 PM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>
>>> On 12/2/2016 8:11 PM, BartC wrote:
>>>> On 03/12/2016 00:32, Jerry Stuckle wrote:
>>>>> On 12/2/2016 4:34 PM, supercat@casperkitty.com wrote:
>>>>
>>>>>> Of course typedef is useful when applied to things that behave like
>>>>>> value
>>>>>> objects; I'm less convinced of the utility of the way it is defined for
>>>>>> arrays.  Given:
>>>>>>
>>>>>>     int foo(thing bar)
>>>>>>     {
>>>>>>       thing boz;
>>>>>>       ...
>>>>>>     }
>>>>>>
>>>>>> it would seem logical that "bar" and "boz" should have the same type;
>>>>>> even
>>>>>> if use of [] syntax within a parameter list of whatever form would get
>>>>>> reinterpreted as a pointer, that wouldn't imply that array types should
>>>>>> sport that same behavior in cases which don't use that syntax.
>>>>>>
>>>>>
>>>>> They will have exactly the same type.
>>>>
>>>> Interesting. When I run the  code below, I get this output:
>>>>
>>>>  sizeof(bar) = 4
>>>>  sizeof(boz) = 16
>>>>
>>>> If they're exactly the same type, how do you explain this result?
>>>>
>>>> --------------------------------
>>>> #include <stdio.h>
>>>>
>>>> typedef int thing[4];
>>>>
>>>> void foo(thing bar){
>>>>   thing boz;
>>>>
>>>>   printf("sizeof(bar) = %d\n",sizeof bar);
>>>>   printf("sizeof(boz) = %d\n",sizeof boz);
>>>> }
>>>>
>>>> int main(void) {
>>>>   thing x;
>>>>   foo(x);
>>>> }
>>>>
>>>
>>> Because both are of type int[] or int * const.  You can use either
>
> Jerry's explanation for the following results:
>
>     sizeof(bar) = 4
>     sizeof(boz) = 16
>
> is that bar and boz both have the same type?

So it would seem.

> I'd recommend adding in sizeof(int*) and sizeof(int[4]) for comparison.
>
>>> syntax with either bar or boz.
>>
>> No, bar has type int * and boz has type int [4].
>
> I'd expect any reasonably normal person to accept that if bar and boz
> have different sizes, it means they must have different types. This is
> Jerry, however, so such rules don't apply.
> C2011 adds a feature that allows us to check this directly (which
> won't necessarily convince Jerry, either):
>
> #include <stdio.h>
> #define TYPE(a) _Generic(a, \
>     int*:"int *", int[4]:"int[4]", default:"other")
>
> typedef int thing[4];
> void foo(thing bar)
> {
>    thing boz;
>    printf("%s:%s\n", "bar", TYPE(bar));
>    printf("%s:%s\n", "boz", TYPE(boz));
> }
>
> int main(int argc, char *argv[])
> {
>     foo(&argc);
>
>     return 0;
> }
>
> Unfortunately, the compiler I currently have access to doesn't support
> C2011. Therefore, I can't confirm whether I've used _Generic
> correctly, nor can I confirm the results. Could someone who has a
> compiler that does support _Generic() check this out?

_Generic tests for compatible types rather than identical types, and it
also uses the type of the expression and not the type of the object.  To
see that the objects bar and boz have different types (using _Generic)
you'd need to use some other expression where array types are not
converted.  For example

  _Generic(&bar, int **: ..., int (*)[4]: ...);
  _Generic(&boz, int **: ..., int (*)[4]: ...);

but, frankly, if the different sizes do not persuade someone, this won't
make much difference.

There are many ways in which bar and boz are obviously of different types
but it's hard to know what might ever tip the balance.  There's the fact
that the sizes are different, the fact that &bar and &boz are not
compatible types, the fact that the expression `bar' is an lvalue but
`boz' is not...  Maybe he will be persuaded by C++ where you can test
directly:

  int foo(thing bar)
  {
       thing boz;
       return typeid(bar) == typeid(boz);
  }

(You can also print typeid(bar).name() and typeid(boz).name(), but the
results, though clearly different to each other, are encoded: Pi and
A4_i for my version of g++.)
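
To make that concrete in C11 terms, here is a complete program along the
lines sketched above; the PTYPE macro and the driver are invented purely
for illustration, and the two calls select different branches of the
_Generic expression:

#include <stdio.h>

typedef int thing[4];

#define PTYPE(e) _Generic((e),          \
    int **:     "int **",               \
    int (*)[4]: "int (*)[4]",           \
    default:    "other")

void foo(thing bar)
{
    thing boz;
    printf("&bar has type %s\n", PTYPE(&bar));  /* int **      */
    printf("&boz has type %s\n", PTYPE(&boz));  /* int (*)[4]  */
}

int main(void)
{
    thing x = {1, 2, 3, 4};
    foo(x);
    return 0;
}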

-- 
Ben.
0
Ben
12/3/2016 9:31:54 PM
On Saturday, December 3, 2016 at 3:46:30 PM UTC-5, Jerry Stuckle wrote:
> On 12/3/2016 6:07 AM, BartC wrote:
> > On 03/12/2016 03:45, Jerry Stuckle wrote:
> >> On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
> >>> int foo(thing bar)
> >>>   {
> >>>        thing boz;
> >>>        return &boz == &bar;
> >>>   }
> >>
> >> Yes, because boz is of type int * const and bar is int *.  But they are
> >> both pointers to int and an array of int, and you can use either syntax
> >> with either one.
> >
> > You said this: "They will have exactly the same type."
> >
> > Actually, in my example, one has type int* and the other int[4]. 'const'
> > doesn't come into it.
> >
>
> Yes, it does, because the address pointed to by boz cannot be changed.
> The address pointed to by bar can.

The type of boz is int[4], not int *const. In most contexts, boz will be
implicitly converted into a pointer to the first element of that array
(6.2.3.1p3), which makes the two hard to distinguish. In order to see a
difference, it's necessary to invoke one the three situations where that
conversion does not occur, only two of which can apply in this particular case.

Ben has already shown that sizeof(boz) == 16, while sizeof(bar) == 4. You've
ignored that difference, not even bothering to address the question of why the
size of objects that, according to you, should have the same type, is different.

The other context where it makes a difference is as an argument of unary &, and
Keith's re-write of my _Generic code shows that &boz has the type int (*)[4],
while &bar has the type int**. If boz had the type int *const, then &boz should
have the type int **const - so where did the 'const' go to, and where did the
()[4] come from, and why is there only one '*'?

Note: "No two generic associations in the same generic selection shall specify
compatible types." (6.5.1.1p2). That's a Constraints section, so a diagnostic
is mandatory for any violation, yet the code compiled with no such diagnostic,
despite the fact that one association used int** and the other used int(*)[4].
Therefore, please don't even consider explaining this behavior of _Generic() by
claiming that int ** and int (*)[4] are different ways of expressing the same
type.

> > They only end up as the same type when used inside an expression, if boz
> > is not an operand to & or sizeof, but neither my example nor supercat's
> > used bar and boz in an expression;
> >
> >> Why is that so hard for you and Bart to understand?
> >
> > I don't find it hard to understand is that you haven't got a clue.
> >
>
> And you can use either pointer syntax or array syntax for either bar or
> boz.  Why is that so hard for you to understand?  It's a concept even my
> beginning C students understood very readily.

Yes, but the more advanced ones know that there are exceptions to the
equivalence of pointer and array syntax.

> It's also why you can have:
>
> char p[4] = "abc";
> char * q = "def";
>
> funca(char * p1) {
>    char c1 = *p1;
>    char c2 = p1[0];
> }
>
> funcb(char p2[]) {
>    char c3 = *p2;
>    char c4 = p3[0];

You haven't provided any declaration for p3. I presume it's a typo for p2?

> }
>
>
> And then have:
>
> funca(p);
> funca(q);
> funcb(p);
> funcb(q);
>
> Try it - they all work just fine, because both p and q are pointers to
> the first character of arrays.

If the Keith's code using _Generic() is a problem for you because it relies
upon C2011, here's code that you compile with a C90 compiler, also based upon
the differences between &p and &q. According to you, p and q have the same
type, so the same should be true for &p and &q.. funcc() and funcd() are
written a little differently than funca() and funcb() in order to avoid warning
messages from my compiler, but the only important difference is one extra level
of indirection, in order to avoid the implicit conversion referred to above.

static int funcc(char ** p1) {
   return **p1 == p1[0][0];
}

static int funcd(char (*p2)[4]) {
   return **p2 == p3[0][0];
}

int main(void)
{
    funcd(&p);
    funcc(&q);
#ifndef NOTSAMETYPE
    funcc(&p); // Constraint violation
    funcd(&q); // Constraint violation
#endif

    return 0;
}

When I compile the above code, I get the following error messages, which are
hard to explain if p and q have the same type:

notsametype.c: In function ‘main’:
notsametype.c:17: warning: passing argument 1 of ‘funcc’ from incompatible
pointer type
notsametype.c:4: note: expected ‘char **’ but argument is of type ‘char (*)[4]’
notsametype.c:18: warning: passing argument 1 of ‘funcd’ from incompatible
pointer type
notsametype.c:8: note: expected ‘char (*)[4]’ but argument is of type ‘char **’

It compiles and executes without any problem, but only if I use the
-DNOTSAMETYPE option.
0
jameskuyper
12/3/2016 9:46:03 PM
jameskuyper@verizon.net writes:

> On Saturday, December 3, 2016 at 3:46:30 PM UTC-5, Jerry Stuckle wrote:
>> On 12/3/2016 6:07 AM, BartC wrote:
>> > On 03/12/2016 03:45, Jerry Stuckle wrote:
>> >> On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
>> >>> int foo(thing bar)
>> >>>   {
>> >>>        thing boz;
>> >>>        return &boz == &bar;
>> >>>   }
>> >>
>> >> Yes, because boz is of type int * const and bar is int *.  But they are
>> >> both pointers to int and an array of int, and you can use either syntax
>> >> with either one.
>> > 
>> > You said this: "They will have exactly the same type."
>> > 
>> > Actually, in my example, one has type int* and the other int[4]. 'const'
>> > doesn't come into it.
>> >
>> 
>> Yes, it does, because the address pointed to by boz cannot be changed.
>> The address pointed to by bar can.
>
> The type of boz is int[4], not int *const. In most contexts, boz will be
> implicitly converted into a pointer to the first element of that array
> (6.2.3.1p3), which makes the two hard to distinguish.

But, just to hammer home the point that Jerry is wrong even about the
converted type,

  _Generic(boz,
           int *const: puts("Hey! It converts to a const pointer!"),
           default:    puts("No, of course it doesn't."));

prints "No, of course it doesn't.".

-- 
Ben.
0
Ben
12/3/2016 11:29:37 PM
On 12/03/2016 06:29 PM, Ben Bacarisse wrote:
> jameskuyper@verizon.net writes:
...
>> The type of boz is int[4], not int *const. In most contexts, boz will be
>> implicitly converted into a pointer to the first element of that array
>> (6.2.3.1p3), which makes the two hard to distinguish.

Typo: that should be 6.3.2.1p3

> But, just to hammer home the point that Jerry is wrong even about the
> converted type,
>
>   _Generic(boz,
>            int *const: puts("Hey! It converts to a const pointer!"),
>            default:    puts("No, of course it doesn't."));
>
> prints "No, of course it doesn't.".

Jerry's correct in believing that it's a pointer whose value cannot be
modified, but that's because it "... is not an lvalue." (6.3.2.1p3), not
because it's const-qualified.
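
A minimal sketch of that distinction (the function names are invented for
the example): the array itself cannot be assigned to, yet the pointer value
it converts to is a plain int *, not a const-qualified one:

void take_ptr(int *p) { (void)p; }   /* accepts an ordinary int *          */

void demo(void)
{
    int boz[4];
    int *q = boz;    /* array-to-pointer conversion yields an int * value  */
    take_ptr(boz);   /* fine: the converted value is not const-qualified   */
    /* boz = q; */   /* error: the array expression is not a modifiable
                        lvalue, so it cannot appear on the left of =       */
    (void)q;
}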
0
James
12/3/2016 11:42:11 PM
"James R. Kuyper" <jameskuyper@verizon.net> writes:

> On 12/03/2016 06:29 PM, Ben Bacarisse wrote:
>> jameskuyper@verizon.net writes:
> ...
>>> The type of boz is int[4], not int *const. In most contexts, boz will be
>>> implicitly converted into a pointer to the first element of that array
>>> (6.2.3.1p3), which makes the two hard to distinguish.
>
> Typo: that should be 6.3.2.1p3
>
>> But, just to hammer home the point that Jerry is wrong even about the
>> converted type,
>>
>>   _Generic(boz,
>>            int *const: puts("Hey! It converts to a const pointer!"),
>>            default:    puts("No, of course it doesn't."));
>>
>> prints "No, of course it doesn't.".
>
> Jerry's correct in believing that it's a pointer whose value cannot be
> modified, but that's because it "... is not an lvalue." (6.3.2.1p3),
> not because it's const-qualified.

Yes.  I thought you'd covered that.  Maybe not.  The new point was just
that the converted (rvalue) pointer is not const.

-- 
Ben.
0
Ben
12/3/2016 11:59:34 PM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 12/3/2016 6:33 AM, Ben Bacarisse wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> 
>>> On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
>>>> int foo(thing bar)
>>>>   {
>>>>        thing boz;
>>>>        return &boz == &bar;
>>>>   }
>>>
>>> Yes, because boz is of type int * const and bar is int *.
>> 
>> No.  See upthread for the correct types.
>> 
>>> But they are
>>> both pointers to int and an array of int, and you can use either syntax
>>> with either one.
>>>
>>> Why is that so hard for you and Bart to understand?
>> 
>> It's easy to understand.  What would be needed for you to believe that
>> two declared objects have different types?  If the named objects in
>> question had the same types they would have the same size, the same
>> alignment requirements, both would be assignable (or both would not be)
>> and pointers to them would be of compatible types (in fact the pointers
>> would have the same types).  All of these can be easily verified as false.
>
> If it's so easy to understand, why do you and Bart have so much trouble
> understanding it?  Even my beginning C students understand such simple C
> concepts.  But it seems to be beyond your level of comprehension.

Here's where we are at now.  You now accept (it seems) that the types
are /not/ the same because even you give them as being different: int
*const and int *.  These are not even compatible types, let alone the
same type.  It would be a step forward if you would at least agree that
the types are not "identical" as you originally claimed.

These two different types have the same size yet you know that sizeof
boz and sizeof bar are not the same.  This should make it clear to you
that boz and bar can't have the types you say they do.  (You've got the
type of bar right, it's the type of boz that's confusing you.)

Furthermore, whilst you are correct that boz will often be converted to
a pointer type, the type you give is wrong, even for these situations
since it's not converted to a const-qualified pointer type.  This can be
verified with a simple _Generic selection or by trying to pass &boz to
a function that has an int *const * parameter.

Even correcting your mistake about the const qualifier, you can't pass
&boz to a function with an int ** parameter.  How do you explain that?
And, mysteriously (to you) you /can/ pass &boz to a function that takes
an int (*)[4] argument.  I can explain that very simply -- boz has type
int [4] so &boz has type int (*)[4], but how do you explain it?

It's hard to see how you could be more muddled.

> See my response to Bart for another example.

Here it is:

| char p[4] = "abc";
| char * q = "def";
|
| funca(char * p1) {
|    char c1 = *p1;
|    char c2 = p1[0];
| }
|
| funcb(char p2[]) {
|    char c3 = *p2;
|    char c4 = p3[0];
| }

p1, p2 and q all have the same type ("pointer to char").  p has type
"array of 4 char".  Do you agree?

Only if you were to incorrectly claim that p and q (or p and p1 or p and
p2) had the same type could it be "another example".  If you agree with
me about the types here, it's just an example of you being right.

-- 
Ben.
0
Ben
12/4/2016 12:59:56 AM
On 12/3/2016 4:46 PM, jameskuyper@verizon.net wrote:
> On Saturday, December 3, 2016 at 3:46:30 PM UTC-5, Jerry Stuckle wrote:
>> On 12/3/2016 6:07 AM, BartC wrote:
>>> On 03/12/2016 03:45, Jerry Stuckle wrote:
>>>> On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
>>>>> int foo(thing bar)
>>>>>   {
>>>>>        thing boz;
>>>>>        return &boz == &bar;
>>>>>   }
>>>>
>>>> Yes, because boz is of type int * const and bar is int *.  But they are
>>>> both pointers to int and an array of int, and you can use either syntax
>>>> with either one.
>>>
>>> You said this: "They will have exactly the same type."
>>>
>>> Actually, in my example, one has type int* and the other int[4]. 'const'
>>> doesn't come into it.
>>>
>>
>> Yes, it does, because the address pointed to by boz cannot be changed.
>> The address pointed to by bar can.
> 
> The type of boz is int[4], not int *const. In most contexts, boz will be
> implicitly converted into a pointer to the first element of that array
> (6.2.3.1p3), which makes the two hard to distinguish. In order to see a
> difference, it's necessary to invoke one the three situations where that
> conversion does not occur, only two of which can apply in this particular case.
> 
> Ben has already shown that sizeof(boz) == 16, while sizeof(bar) == 4. You've
> ignored that difference, not even bothering to address the question of why the
> size of objects that, according to you, should have the same type, is different.
> 
> The other context where it makes a difference is as an argument of unary &, and
> Keith's re-write of my _Generic code shows that &boz has the type int (*)[4],
> while &bar has the type int**. If boz had the type int *const, then &boz should
> have the type int **const - so where did the 'const' go to, and where did the
> ()[4] come from, and why is there only one '*'?
> 
> Note: "No two generic associations in the same generic selection shall specify
> compatible types." (6.5.1.1p2). That's a Constraints section, so a diagnostic
> is mandatory for any violation, yet the code compiled with no such diagnostic,
> despite the fact that one association used int** and the other used int(*)[4].
> Therefore, please don't even consider explaining this behavior of _Generic() by
> claiming that int ** and int (*)[4] are different ways of expressing the same
> type.
>  
>>> They only end up as the same type when used inside an expression, if boz
>>> is not an operand to & or sizeof, but neither my example nor supercat's
>>> used bar and boz in an expression;
>>>
>>>> Why is that so hard for you and Bart to understand?
>>>
>>> I don't find it hard to understand is that you haven't got a clue.
>>>
>>
>> And you can use either pointer syntax or array syntax for either bar or
>> boz.  Why is that so hard for you to understand?  It's a concept even my
>> beginning C students understood very readily.
> 
> Yes, but the more advanced ones know that there are exceptions to the
> equivalence of pointer and array syntax.
> 
>> It's also why you can have:
>>
>> char p[4] = "abc";
>> char * q = "def";
>>
>> funca(char * p1) {
>>    char c1 = *p1;
>>    char c2 = p1[0];
>> }
>>
>> funcb(char p2[]) {
>>    char c3 = *p2;
>>    char c4 = p3[0];
> 
> You haven't provided any declaration for p3. I presume it's a typo for p2?
> 
>> }
>>
>>
>> And then have:
>>
>> funca(p);
>> funca(q);
>> funcb(p);
>> funcb(q);
>>
>> Try it - they all work just fine, because both p and q are pointers to
>> the first character of arrays.
> 
> If the Keith's code using _Generic() is a problem for you because it relies
> upon C2011, here's code that you compile with a C90 compiler, also based upon
> the differences between &p and &q. According to you, p and q have the same
> type, so the same should be true for &p and &q.. funcc() and funcd() are
> written a little differently than funca() and funcb() in order to avoid warning
> messages from my compiler, but the only important difference is one extra level
> of indirection, in order to avoid the implicit conversion referred to above.
> 
> static int funcc(char ** p1) {
>    return **p1 == p1[0][0];
> }
> 
> static int funcd(char (*p2)[4]) {
>    return **p2 == p3[0][0];
> }
> 
> int main(void)
> {
>     funcd(&p);
>     funcc(&q);
> #ifndef NOTSAMETYPE
>     funcc(&p); // Constraint violation
>     funcd(&q); // Constraint violation
> #endif
> 
>     return 0;
> }
> 
> When I compile the above code, I get the following error messages, which are
> hard to explain if p and q have the same type:
> 
> notsametype.c: In function ‘main’:
> notsametype.c:17: warning: passing argument 1 of ‘funcc’ from incompatible
> pointer type
> notsametype.c:4: note: expected ‘char **’ but argument is of type ‘char (*)[4]’
> notsametype.c:18: warning: passing argument 1 of ‘funcd’ from incompatible
> pointer type
> notsametype.c:8: note: expected ‘char (*)[4]’ but argument is of type ‘char **’
> 
> It compiles and executes without any problem, but only if I use the
> -DNOTSAMETYPE option.
> 

Gee, no shit, Sherlock.  You miss the entire point.

But that's pretty normal for you.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/4/2016 1:19:53 AM
On 12/3/2016 7:59 PM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
> 
>> On 12/3/2016 6:33 AM, Ben Bacarisse wrote:
>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>
>>>> On 12/2/2016 9:50 PM, Ben Bacarisse wrote:
>>>>> int foo(thing bar)
>>>>>   {
>>>>>        thing boz;
>>>>>        return &boz == &bar;
>>>>>   }
>>>>
>>>> Yes, because boz is of type int * const and bar is int *.
>>>
>>> No.  See upthread for the correct types.
>>>
>>>> But they are
>>>> both pointers to int and an array of int, and you can use either syntax
>>>> with either one.
>>>>
>>>> Why is that so hard for you and Bart to understand?
>>>
>>> It's easy to understand.  What would be needed for you to believe that
>>> two declared objects have different types?  If the named objects in
>>> question had the same types they would have the same size, the same
>>> alignment requirements, both would be assignable (or both would not be)
>>> and pointers to them would be of compatible types (in fact the pointers
>>> would have the same types).  All of these can be easily verified as false.
>>
>> If it's so easy to understand, why do you and Bart have so much trouble
>> understanding it?  Even my beginning C students understand such simple C
>> concepts.  But it seems to be beyond your level of comprehension.
> 
> Here's where we are at now.  You now accept (it seems) that the types
> are /not/ the same because even you give them as being different: int
> *const and int *.  These are not even compatible types, let alone the
> same type.  It would be a step forward if you would at least agree that
> the types are not "identical" as you originally claimed.
> 
> These two different types have the same size yet you know that sizeof
> boz and sizeof bar are not the same.  This should make it clear to you
> that boz and bar can't have the types you say they do.  (You've got the
> type of bar right, it's the type of boz that's confusing you.)
> 
> Furthermore, whilst you are correct that boz will often be converted to
> a pointer type, the type you give is wrong, even for these situations
> since it's not converted to a const-qualified pointer type.  This can be
> verified with a simple _Generic selection or by trying to pass &boz to
> a function that has an int *const * parameter.
> 
> Even correcting your mistake about the const qualifier, you can't pass
> &boz to a function with an int ** parameter.  How do you explain that?
> And, mysteriously (to you) you /can/ pass &boz to a function that takes
> an int (*)[4] argument.  I can explain that very simply -- boz has type
> int [4] so &boz has type int (*)[4], but how do you explain it?
> 
> It's hard to see how you could be more muddled.
> 
>> See my response to Bart for another example.
> 
> Here it is:
> 
> | char p[4] = "abc";
> | char * q = "def";
> |
> | funca(char * p1) {
> |    char c1 = *p1;
> |    char c2 = p1[0];
> | }
> |
> | funcb(char p2[]) {
> |    char c3 = *p2;
> |    char c4 = p3[0];
> | }
> 
> p1, p2 and q all have the same type ("pointer to char").  p has type
> "array of 4 char".  Do you agree?
> 
> Only if you were to incorrectly claim that p and q (or p and p1 or p and
> p2) had the same type could it be "another example".  If you agree with
> me about the types here, it's just an example of you being right.
> 

Ben, you are totally hopeless.  I'm through trying to explain the basics
of C to you.

Get an education.  Then maybe we can talk intelligently.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/4/2016 1:23:40 AM
On 04/12/2016 01:19, Jerry Stuckle wrote:

> Gee, no shit, Sherlock.  You miss the entire point.
>
> But that's pretty normal for you.


Why don't you just admit you were mistaken when you said, about bar and 
boz in this code fragment when 'thing' is an array typedef:


     int foo(thing bar)
     {
       thing boz;
       ...
     }

that:

 > They will have exactly the same type.

?

-- 
Bartc

0
BartC
12/4/2016 1:28:00 AM
On 12/ 4/16 02:28 PM, BartC wrote:
> On 04/12/2016 01:19, Jerry Stuckle wrote:
>
>> Gee, no shit, Sherlock.  You miss the entire point.
>>
>> But that's pretty normal for you.
>
>
> Why don't you just admit you were mistaken when you said, about bar and
> boz in this code fragment when 'thing' is an array typedef:
>
>
>       int foo(thing bar)
>       {
>         thing boz;
>         ...
>       }
>
> that:
>
>   > They will have exactly the same type.
>
> ?

Because he never does.  The closest he gets to admitting he is wrong 
will be a childish insult.

-- 
Ian
0
Ian
12/4/2016 1:34:59 AM
On Saturday, December 3, 2016 at 8:19:44 PM UTC-5, Jerry Stuckle wrote:
...
> Gee, no shit, Sherlock.  You miss the entire point.

Ah - it's insult time. That means you have no counter-arguments - though
you will claim it just means you don't want to bother arguing. I'm fully
aware that you've still deluded yourself into believing you're correct -
but starting up with the insults is the closest that you ever come to
admitting that you're wrong, so I'll take it as such.
0
jameskuyper
12/4/2016 3:19:49 AM
On 12/3/2016 10:19 PM, jameskuyper@verizon.net wrote:
> On Saturday, December 3, 2016 at 8:19:44 PM UTC-5, Jerry Stuckle wrote:
> ...
>> Gee, no shit, Sherlock.  You miss the entire point.
> 
> Ah - it's insult time. That means you have no counter-arguments - though you will claim it just means you don't want to bother arguing. I'm fully aware that you've still deluded yourself into believing you're correct - but starting up with the insults is the closest that you ever come to admitting that you're wrong, so I'll take it as such.
> 

Nope, it means I'm tired of trying to teach the pig to sing.

And no, I don't admit I'm wrong when I'm not.  Only when I am wrong.

You and some others here can read the C Standard.  But you don't
UNDERSTAND the C standard.

There is a HUGE difference.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/4/2016 4:26:01 PM
On Friday, December 2, 2016 at 6:32:45 PM UTC-6, Jerry Stuckle wrote:
> On 12/2/2016 4:34 PM, supercat wrote:
> > Yes, but were there any such compilers before the new syntax appeared in C++?
> 
> That's what I said.  The first version of C++ from AT&T Labs (Version
> "E" for "Educational") was released to universities in 1986.  it was
> buggy and not fit for production; more of a "proof of concept".  Several
> C compilers already had support for the new syntax. (BTW - the first
> production C++ compiler was released in 1987 - still buggy, but better.
> In 1988 the first compiler that really could be used in production were
> released).

C++ and the first C compilers I've heard of with prototype support both
came out in 1986, but I don't know exactly when within that year.  Further,
I don't know when documentation of what C++ was going to be started to
spread.  Do you have any more precise dates for either of those things?
0
supercat
12/4/2016 4:40:15 PM
Ian Collins <ian-news@hotmail.com> writes:

> On 12/ 4/16 02:28 PM, BartC wrote:
>> On 04/12/2016 01:19, Jerry Stuckle wrote:
>>
>>> Gee, no shit, Sherlock.  You miss the entire point.
>>>
>>> But that's pretty normal for you.
>>
>>
>> Why don't you just admit you were mistaken when you said, about bar and
>> boz in this code fragment when 'thing' is an array typedef:
>>
>>
>>       int foo(thing bar)
>>       {
>>         thing boz;
>>         ...
>>       }
>>
>> that:
>>
>>   > They will have exactly the same type.
>>
>> ?
>
> Because he never does.  The closest he gets to admitting he is wrong
> will be a childish insult.

The good thing though is that the insult posts rarely contain any
further technical inaccuracies so there is no need to reply to them.
They mark the end of the thread in a very satisfactory way.

-- 
Ben.
0
Ben
12/5/2016 12:02:58 AM
On 12/4/2016 11:40 AM, supercat@casperkitty.com wrote:
> On Friday, December 2, 2016 at 6:32:45 PM UTC-6, Jerry Stuckle wrote:
>> On 12/2/2016 4:34 PM, supercat wrote:
>>> Yes, but were there any such compilers before the new syntax appeared in C++?
>>
>> That's what I said.  The first version of C++ from AT&T Labs (Version
>> "E" for "Educational") was released to universities in 1986.  it was
>> buggy and not fit for production; more of a "proof of concept".  Several
>> C compilers already had support for the new syntax. (BTW - the first
>> production C++ compiler was released in 1987 - still buggy, but better.
>> In 1988 the first compiler that really could be used in production were
>> released).
> 
> C++ and the first C compilers I've heard of with prototype support both
> came out in 1986, but I don't know exactly when within that year.  Further,
> I don't know when documentation of what C++ was going to be started to
> spread.  Do you have any more precise dates for either of those things?
> 

As I said - the first C++ compiler (Version "E") came out was for
educational use only - just a proof of concept.  It was not ready for
prime time, nor was it meant to be.  The first commercial C++ compiler
came out in 1987.  Better, but still buggy.  By 1988, C++ compilers were
getting pretty good.

C compiler support for function declarations came out about 1985, IIRC.
My first compiler didn't have it around 1984, but it also wasn't
necessarily the latest.  Rather, it was the cheapest, since I wasn't
doing any C programming at the time.  Later I had to upgrade to a newer
compiler (around 1986?  I don't remember for sure now) because I was
seeing more and more code which used the new method.

When OS/2 1.0 came out in 1988, the IBM compiler supported function
declarations.  It wasn't the first compiler I used that supported it,
but it is the first one I can nail down a time on.

It was later that year that I started with C++, with a compiler that
actually was a translator from C++ to C.  You then compiled it with a C
compiler.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/5/2016 12:39:16 AM
Ben Bacarisse <ben.usenet@bsb.me.uk> writes:
[...]
> You don't need (and have never needed) the function definition to
> precede it's use.  In old C, a function declaration does just as well --
> it provides as much information to the compiler as a function definition
> does -- but neither a preceding declaration nor a preceding definition
> will allow the compiler to convert the arguments to match the parameter
> types.  (The definition has more information in it, but the language
> rules mean that the compiler is not allowed to use it.  Indeed, some
> very early C programs used the argument/parameter mismatch as a
> poor-person's union.)

I'm not sure what the rules were (if any) in pre-ANSI C, but in C89 and
later calling an unprototyped function with an incorrect type and/or
number of arguments has undefined behavior.  So in principle, a compiler
*could* generate code to convert the arguments to the expected type.

In practice, I don't know of any compilers that do so.

The correct solution, of course, is to use prototypes.
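
For instance (a sketch with an invented function, valid up to C17, which
still accepts old-style definitions):

#include <stdio.h>

double half();     /* old-style declaration: says nothing about parameters */

double half(x)     /* old-style (K&R) definition                           */
double x;
{
    return x / 2;
}

int main(void)
{
    printf("%g\n", half(3.0));      /* fine: a double is passed            */
    /* printf("%g\n", half(3)); */  /* int passed where double is expected:
                                       no implicit conversion is done, so
                                       the call has undefined behaviour    */
    return 0;
}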

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"
0
Keith
12/5/2016 1:01:01 AM
On 05/12/16 01:02, Ben Bacarisse wrote:
> Ian Collins <ian-news@hotmail.com> writes:
> 
>> On 12/ 4/16 02:28 PM, BartC wrote:
>>> On 04/12/2016 01:19, Jerry Stuckle wrote:
>>>
>>>> Gee, no shit, Sherlock.  You miss the entire point.
>>>>
>>>> But that's pretty normal for you.
>>>
>>>
>>> Why don't you just admit you were mistaken when you said, about bar and
>>> boz in this code fragment when 'thing' is an array typedef:
>>>
>>>
>>>       int foo(thing bar)
>>>       {
>>>         thing boz;
>>>         ...
>>>       }
>>>
>>> that:
>>>
>>>   > They will have exactly the same type.
>>>
>>> ?
>>
>> Because he never does.  The closest he gets to admitting he is wrong
>> will be a childish insult.
> 
> The good thing though is that the insult posts rarely contain any
>> further technical inaccuracies so there is no need to reply to them.
> They mark the end of the thread in a very satisfactory way.
> 

I would not say they end the thread in a satisfactory way, but they do
end it.  I think it is a variation of Godwin's law.

0
David
12/5/2016 9:34:01 AM
On 12/5/2016 4:34 AM, David Brown wrote:
> On 05/12/16 01:02, Ben Bacarisse wrote:
>> Ian Collins <ian-news@hotmail.com> writes:
>>
>>> On 12/ 4/16 02:28 PM, BartC wrote:
>>>> On 04/12/2016 01:19, Jerry Stuckle wrote:
>>>>
>>>>> Gee, no shit, Sherlock.  You miss the entire point.
>>>>>
>>>>> But that's pretty normal for you.
>>>>
>>>>
>>>> Why don't you just admit you were mistaken when you said, about bar and
>>>> boz in this code fragment when 'thing' is an array typedef:
>>>>
>>>>
>>>>       int foo(thing bar)
>>>>       {
>>>>         thing boz;
>>>>         ...
>>>>       }
>>>>
>>>> that:
>>>>
>>>>   > They will have exactly the same type.
>>>>
>>>> ?
>>>
>>> Because he never does.  The closest he gets to admitting he is wrong
>>> will be a childish insult.
>>
>> The good thing though is that the insult posts rarely contain any
>> further technical inaccuracies so there is not need to reply to them.
>> They mark the end of the thread in a very satisfactory way.
>>
> 
> I would not say they end the thread in a satisfactory way, but they do
> end it.  I think it is a variation of Godwin's law.
> 

No, it just means I get tired of trying to teach the pigs to sing.  You
read the C standard.  But you don't UNDERSTAND the language.  That is
VERY obvious.

I've had programmers just out of college with a better understanding of
C than I find in this newsgroup.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/5/2016 1:38:40 PM
On 05/12/16 14:38, Jerry Stuckle wrote:
> On 12/5/2016 4:34 AM, David Brown wrote:
>> On 05/12/16 01:02, Ben Bacarisse wrote:
>>> Ian Collins <ian-news@hotmail.com> writes:

>>>>
>>>> Because he never does.  The closest he gets to admitting he is wrong
>>>> will be a childish insult.
>>>
>>> The good thing though is that the insult posts rarely contain any
>>> further technical inaccuracies so there is no need to reply to them.
>>> They mark the end of the thread in a very satisfactory way.
>>>
>>
>> I would not say they end the thread in a satisfactory way, but they do
>> end it.  I think it is a variation of Godwin's law.
>>
> 
> No, it just means I get tired of trying to teach the pigs to sing.  You
> read the C standard.  But you don't UNDERSTAND the language.  That is
> VERY obvious.
> 

But you never seem to get tired of your favourite expressions, no matter
how immature they make you appear, and your cheery disposition and
friendly nature are unaffected by your tiredness.

0
David
12/5/2016 2:52:09 PM
On Sunday, December 4, 2016 at 6:39:24 PM UTC-6, Jerry Stuckle wrote:
> C compiler support for function declarations came out about 1985, IIRC.
> My first compiler didn't have it around 1984, but it also wasn't
> necessarily the latest.  Rather, it was the cheapest, since I wasn't
> doing any C programming at the time.  Later I had to upgrade to a newer
> compiler (around 1986?  I don't remember for sure now) because I was
> seeing more and more code which used the new method.

I wonder if those early compilers supported the syntax of putting qualifiers
in brackets, or when that came about as a widely-recognized feature?  Early
compilers didn't have any qualifiers, and prior to "restrict" the only
important qualifier one could attach to arguments (as opposed to the things
they would point to) would have been "volatile", and that would only be
necessary in rather obscure circumstances.

0
supercat
12/5/2016 3:15:02 PM
On 12/5/2016 9:52 AM, David Brown wrote:
> On 05/12/16 14:38, Jerry Stuckle wrote:
>> On 12/5/2016 4:34 AM, David Brown wrote:
>>> On 05/12/16 01:02, Ben Bacarisse wrote:
>>>> Ian Collins <ian-news@hotmail.com> writes:
> 
>>>>>
>>>>> Because he never does.  The closest he gets to admitting he is wrong
>>>>> will be a childish insult.
>>>>
>>>> The good thing though is that the insult posts rarely contain any
>>>> further technical inaccuracies so there is no need to reply to them.
>>>> They mark the end of the thread in a very satisfactory way.
>>>>
>>>
>>> I would not say they end the thread in a satisfactory way, but they do
>>> end it.  I think it is a variation of Godwin's law.
>>>
>>
>> No, it just means I get tired of trying to teach the pigs to sing.  You
>> read the C standard.  But you don't UNDERSTAND the language.  That is
>> VERY obvious.
>>
> 
> But you never seem to get tired of your favourite expressions, no matter
> how immature they make you appear, and your cheery disposition and
> friendly nature are unaffected by your tiredness.
> 

Yup, the pigs get very annoyed.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/5/2016 9:03:32 PM
On 12/5/2016 10:15 AM, supercat@casperkitty.com wrote:
> On Sunday, December 4, 2016 at 6:39:24 PM UTC-6, Jerry Stuckle wrote:
>> C compiler support for function declarations came out about 1985, IIRC.
>> My first compiler didn't have it around 1984, but it also wasn't
>> necessarily the latest.  Rather, it was the cheapest, since I wasn't
>> doing any C programming at the time.  Later I had to upgrade to a newer
>> compiler (around 1986?  I don't remember for sure now) because I was
>> seeing more and more code which used the new method.
> 
> I wonder if those early compilers supported the syntax of putting qualifiers
> in brackets, or when that came about as a widely-recognized feature?  Early
> compilers didn't have any qualifiers, and prior to "restrict" the only
> important qualifier one could attach to arguments (as opposed to the things
> they would point to) would have been "volatile", and that would only be
> necessary in rather obscure circumstances.
> 

I honestly don't remember any more.

What I do remember is ordering the source files in reverse of their call
order - main() at the bottom and functions above it.  Never call a
function below you in the source file.  And it became habit to load your
file in the editor and immediately go to the end so you could find main().

This was due to the lack of function declarations.  If you did it this
way, the compiler would have the function definition before the function
call, and could compare and, if necessary, convert the parameter type in
the call to the one in the function.  If you didn't have this, there was
no comparison and all bets were off - the programmer had to ensure all
calls matched the function definition.
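
A minimal sketch of that layout (hypothetical names, K&R-style definitions as
used in that era; callees above their callers, main() at the bottom):

    #include <stdio.h>

    /* callee first: its definition also serves as its declaration
       for everything below it in the file */
    int square(x)
    int x;
    {
        return x * x;
    }

    /* caller next: square() is already known at this point */
    void report(n)
    int n;
    {
        printf("%d squared is %d\n", n, square(n));
    }

    /* main() goes at the bottom of the file */
    int main()
    {
        report(7);
        return 0;
    }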

And, of course, for larger programs, this became even more awkward
because you would end up with everything in one huge source file (or
multiple source files, each including others, which caused other problems).

That's why I remember function declarations so well.  It made
programming so much easier! :)


-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/5/2016 9:09:03 PM
On Monday, December 5, 2016 at 3:09:10 PM UTC-6, Jerry Stuckle wrote:
> What I do remember is ordering the source files in reverse of their call
> order - main() at the bottom and functions above it.  Never call a
> function below you in the source file.  And it became habit to load your
> file in the editor and immediately go to the end so you could find main().
> 
> This was due to the lack of function declarations.  If you did it this
> way, the compiler would have the function definition before the function
> call, and could compare and, if necessary, convert the parameter type in
> the call to the one in the function.

C has always (since 1974 at least) supported function declarations, and when
not using the "new-style" parameter syntax it supported constructs like:

    void print1or2(ct,x,y)
      int ct,x,y;
    {
      printf("%d", x);
      if (ct > 1) printf(" %d",y); 
      printf("\n");
    }
    void test(void)
    {
      print1or2(1, 10);  // Behavior is defined if 
                         // called function only *uses* first two args
      print1or2(2, 20,30);
      print1or2(3, 40,50,60); // Last arg is ignored, but behavior is defined
    }

Are you saying that compilers would include lint-like logic to keep track
of how functions used their parameters and squawk if a caller passed
something inconsistent with that?  That would make sense, though there
was no requirement that callers always pass the number of arguments that
functions appeared to expect.
0
supercat
12/5/2016 9:38:28 PM
On Monday, December 5, 2016 at 4:09:10 PM UTC-5, Jerry Stuckle wrote:
> On 12/5/2016 10:15 AM, supercat@casperkitty.com wrote:
> > On Sunday, December 4, 2016 at 6:39:24 PM UTC-6, Jerry Stuckle wrote:
> >> C compiler support for function declarations came out about 1985, IIRC.
> >> My first compiler didn't have it around 1984, but it also wasn't
> >> necessarily the latest.  Rather, it was the cheapest, since I wasn't
> >> doing any C programming at the time.  Later I had to upgrade to a newer
> >> compiler (around 1986?  I don't remember for sure now) because I was
> >> seeing more and more code which used the new method.
> > 
> > I wonder if those early compilers supported the syntax of putting qualifiers
> > in brackets, or when that came about as a widely-recognized feature?  Early
> > compilers didn't have any qualifiers, and prior to "restrict" the only
> > important qualifier one could attach to arguments (as opposed to the things
> > they would point to) would have been "volatile", and that would only be
> > necessary in rather obscure circumstances.
> > 
> 
> I honestly don't remember any more.
> 
> What I do remember is ordering the source files in reverse of their call
> order - main() at the bottom and functions above it.  Never call a
> function below you in the source file.  And it became habit to load your
> file in the editor and immediately go to the end so you could find main().
> 
> This was due to the lack of function declarations.  If you did it this
> way, the compiler would have the function definition before the function
> call, and could compare and, if necessary, convert the parameter type in
> the call to the one in the function.  If you didn't have this, there was
> no comparison and all bets were off - the programmer had to ensure all
> calls matched the function definition.
> 
> And, of course, for larger programs, this became even more awkward
> because you would end up with everything in one huge source file (or
> multiple source files, each including others, which caused other problems).
> 
> That's why I remember function declarations so well.  It made
> programming so much easier! :)

There's a terminology problem here. All three of the following are C function
declarations:

int foo();
int foo(char, int *);
int foo(char c, int *pi) {*pi = c;}

The first declaration uses K&R syntax, already well established when the first
edition of K&R came out. The first two declarations are also called forward
declarations. The second and third declarations use function prototype syntax,
and the third function declaration is also a function definition.

Forward declarations would have been sufficient to allow you to avoid writing
code the way you describe, and forward declarations using K&R syntax have been
supported by C almost from the very beginning. Most C code that I've ever seen,
going all the way back to 1979, when I first learned C, used forward
declarations for all subroutines first, then defined calling routines first,
followed by definitions of the called routines. I don't like unnecessary
forward declarations, so one of the oddities I've picked up is that I usually
do write my code with called functions defined before the calling functions.
However, I've never used a version of C so primitive that it didn't support
forward declarations using K&R syntax. How long ago is the time period you're
describing?
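
A minimal sketch of the kind of K&R-style forward declaration I'm describing
(hypothetical names):

    #include <stdio.h>

    int triple();       /* K&R-style forward declaration: name and return type only */

    int main()
    {
        printf("%d\n", triple(5));   /* legal: triple() is declared above */
        return 0;
    }

    int triple(n)        /* the definition may come after the caller */
    int n;
    {
        return 3 * n;
    }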

Function prototype syntax was not described in 1st edition K&R C, but was
standardized in C90. I'm not sure when compilers that supported function
prototypes first became available, but your claim that function declarations
were added to some C compilers as early as 1985 would make more sense if you
were actually talking about function prototypes, rather than function
declarations. 

The idea that function definitions also serve as function declarations goes
back at least to K&R C. The only way to not use function declarations in C is
if the only function in your program is main() - the definition of main() would
still also count as a function declaration but unless main() is recursive, that
declaration would never be put to use.
The fact that the function definitions also serve as function declarations is
precisely what makes writing your code in the order you describe work.
Functions defined later in the file can call functions earlier in the file,
precisely because those earlier definitions also serve as declarations, with a
scope that starts at the declarator and extends to the end of the file.
Again, your comment that you were NOT using function declarations when you
first wrote your code that way would make more sense if what you actually meant
was that you were not using function prototypes.

I'm sure that you will ignore everything I've said, except possibly to respond
with some insults. I'm sure that you will continue to use the term "function
declaration" in contexts where what you actually mean is "function prototype",
and will continue to fail to recognize that function definitions are also
function declarations. I'm only writing this for the benefit of other people
who may be as confused as I was by your statements, before I figured out how
you were misusing those terms.
0
jameskuyper
12/5/2016 10:12:37 PM
Jerry Stuckle <jstucklex@attglobal.net> writes:

> On 12/5/2016 10:15 AM, supercat@casperkitty.com wrote:
>> On Sunday, December 4, 2016 at 6:39:24 PM UTC-6, Jerry Stuckle wrote:
>>> C compiler support for function declarations came out about 1985,
>>> IIRC.

I think you mean function prototypes.  C has always had function
declarations.

>>> My first compiler didn't have it around 1984, but it also wasn't
>>> necessarily the latest.  Rather, it was the cheapest, since I wasn't
>>> doing any C programming at the time.  Later I had to upgrade to a newer
>>> compiler (around 1986?  I don't remember for sure now) because I was
>>> seeing more and more code which used the new method.
>> 
>> I wonder if those early compilers supported the syntax of putting qualifiers
>> in brackets, or when that came about as a widely-recognized feature?  Early
>> compilers didn't have any qualifiers, and prior to "restrict" the only
>> important qualifier one could attach to arguments (as opposed to the things
>> they would point to) would have been "volatile", and that would only be
>> necessary in rather obscure circumstances.
>> 
>
> I honestly don't remember any more.
>
> What I do remember is ordering the source files in reverse of their call
> order - main() at the bottom and functions above it.  Never call a
> function below you in the source file.  And it became habit to load your
> file in the editor and immediately go to the end so you could find main().
>
> This was due to the lack of function declarations.

C has always had function declarations.  Some of what you write below
suggest you are confusing plain declarations (which C has always had)
with function prototypes which are more recent (though very old now).

> If you did it this way, the compiler would have the function
> definition before the function call, and could compare and, if
> necessary, convert the parameter type in the call to the one in the
> function.

You don't need (and have never needed) the function definition to
precede its use.  In old C, a function declaration does just as well --
it provides as much information to the compiler as a function definition
does -- but neither a preceding declaration nor a preceding definition
will allow the compiler to convert the arguments to match the parameter
types.  (The definition has more information in it, but the language
rules mean that the compiler is not allowed to use it.  Indeed, some
very early C programs used the argument/parameter mismatch as a
poor-person's union.)

A function declaration that includes a prototype does provide the
information required to convert argument types.  That seems to be what
you are talking about (backed up by the dates -- they appeared in
compilers in the second half of the 1980s).
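
A minimal sketch of the difference, using hypothetical names (C89-style, so
the old-style declaration and K&R definition are still accepted):

    #include <stdio.h>

    double half();              /* old-style declaration: no parameter information */
    double halve(double v);     /* prototype: the parameter type is part of it     */

    int main(void)
    {
        double a = half(7);     /* 7 is passed as an int: undefined behavior --
                                   the compiler is not allowed to convert it       */
        double b = halve(7);    /* 7 is converted to 7.0, as in an assignment      */
        printf("%f %f\n", a, b);
        return 0;
    }

    double half(v)              /* old-style (K&R) definition */
    double v;
    {
        return v / 2;
    }

    double halve(double v)      /* prototyped definition */
    {
        return v / 2;
    }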

<snip>
> That's why I remember function declarations so well.  It made
> programming so much easier! :)

You must mean prototypes here.  That was the major improvement that
arrived in C90 (though it was available in some compiler before that of
course).

-- 
Ben.
0
Ben
12/5/2016 10:49:23 PM
On 12/5/2016 5:12 PM, jameskuyper@verizon.net wrote:
> On Monday, December 5, 2016 at 4:09:10 PM UTC-5, Jerry Stuckle wrote:
>> On 12/5/2016 10:15 AM, supercat@casperkitty.com wrote:
>>> On Sunday, December 4, 2016 at 6:39:24 PM UTC-6, Jerry Stuckle wrote:
>>>> C compiler support for function declarations came out about 1985, IIRC.
>>>> My first compiler didn't have it around 1984, but it also wasn't
>>>> necessarily the latest.  Rather, it was the cheapest, since I wasn't
>>>> doing any C programming at the time.  Later I had to upgrade to a newer
>>>> compiler (around 1986?  I don't remember for sure now) because I was
>>>> seeing more and more code which used the new method.
>>>
>>> I wonder if those early compilers supported the syntax of putting qualifiers
>>> in brackets, or when that came about as a widely-recognized feature?  Early
>>> compilers didn't have any qualifiers, and prior to "restrict" the only
>>> important qualifier one could attach to arguments (as opposed to the things
>>> they would point to) would have been "volatile", and that would only be
>>> necessary in rather obscure circumstances.
>>>
>>
>> I honestly don't remember any more.
>>
>> What I do remember is ordering the source files in reverse of their call
>> order - main() at the bottom and functions above it.  Never call a
>> function below you in the source file.  And it became habit to load your
>> file in the editor and immediately go to the end so you could find main().
>>
>> This was due to the lack of function declarations.  If you did it this
>> way, the compiler would have the function definition before the function
>> call, and could compare and, if necessary, convert the parameter type in
>> the call to the one in the function.  If you didn't have this, there was
>> no comparison and all bets were off - the programmer had to ensure all
>> calls matched the function definition.
>>
>> And, of course, for larger programs, this became even more awkward
>> because you would end up with everything in one huge source file (or
>> multiple source files, each including others, which caused other problems).
>>
>> That's why I remember function declarations so well.  It made
>> programming so much easier! :)
> 
> There's a terminology problem here. All three of the following are C function
> declarations:
> 
> int foo();
> int foo(char, int *);
> int foo(char c, int *pi) {*pi = c;}
> 
> The first declaration uses K&R syntax, already well established when the first
> edition of K&R came out. The first two declarations are also called forward
> declarations. The second and third declarations use function prototype syntax,
> and the third function declaration is also a function definition.
>

The first is a K&R-style declaration, supported to this day.  The second
is an ANSI declaration (which was not supported in early C compilers).
The third is a definition (which also was not supported in early C
compilers).  A definition is also a declaration, but not vice versa.


> Forward declarations would have been sufficient to allow you to avoid writing
> code the way you describe, and forward declarations using K&R syntax have been
> supported by C almost from the very beginning. Most C code that I've ever seen,
> going all the way back to 1979, when I first learned C, used forward
> declarations for all subroutines first, then defined calling routines first,
> followed by definitions of the called routines. I don't like unnecessary
> forward declarations, so one of the oddities I've picked up is that I usually
> do write my code with called functions defined before the calling functions.
> However, I've never used a version of C so primitive that it didn't support
> forward declarations using K&R syntax. How long ago is the time period you're
> describing?
> 

Forward declarations with K&R did not allow for parameter type matching.
Only function names and return types were supported.

> Function prototype syntax was not described in 1st edition K&R C, but was
> standardized in C90. I'm not sure when compilers that supported function
> prototypes first became available, but your claim that function declarations
> were added to some C compilers as early as 1985 would make more sense if you
> were actually talking about function prototypes, rather than function
> declarations. 
> 

Yes, well after 1985.  And when they started supporting it is what we're
talking about.  But then we already know you don't read so well.

> The idea that function definitions also serve as function declarations goes
> back at least to K&R C. The only way to not use function declarations in C is
> if the only function in your program is main() - the definition of main() would
> still also count as a function declaration but unless main() is recursive, that
> declaration would never be put to use.

Which is what I said in the first place.

> The fact that the function definitions also serve as function declarations is
> precisely what makes writing your code in the order you describe work.
> Functions defined later in the file can call functions earlier in the file,
> precisely because those earlier definitions also serve as declarations, with a
> scope that starts at the declarator and extends to the end of the file.
> Again, your comment that you were NOT using function declarations when you
> first wrote your code that way would make more sense if what you actually meant
> was that you were not using function prototypes.
> 

No shit, Sherlock.  Do you even try to read?  Or do you just argue to argue?

> I'm sure that you will ignore everything I've said, except possibly to respond
> with some insults. I'm sure that you will continue to use the term "function
> declaration" in contexts where what you actually mean is "function prototype",
> and will continue to fail to recognize that function definitions are also
> function declarations. I'm only writing this for the benefit of other people
> who may be as confused as I was by your statements, before I figured out how
> you were misusing those terms.
> 

I might as well ignore it.  You've shown once again you are ignorant.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/6/2016 3:21:04 AM
On 12/5/2016 5:49 PM, Ben Bacarisse wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
> 
>> On 12/5/2016 10:15 AM, supercat@casperkitty.com wrote:
>>> On Sunday, December 4, 2016 at 6:39:24 PM UTC-6, Jerry Stuckle wrote:
>>>> C compiler support for function declarations came out about 1985,
>>>> IIRC.
> 
> I think you mean function prototypes.  C has always had function
> declarations.
>

If you call

int foo();

with no parameters, etc. a function declaration, then yes.  We didn't
back then.

>>>> My first compiler didn't have it around 1984, but it also wasn't
>>>> necessarily the latest.  Rather, it was the cheapest, since I wasn't
>>>> doing any C programming at the time.  Later I had to upgrade to a newer
>>>> compiler (around 1986?  I don't remember for sure now) because I was
>>>> seeing more and more code which used the new method.
>>>
>>> I wonder if those early compilers supported the syntax of putting qualifiers
>>> in brackets, or when that came about as a widely-recognized feature?  Early
>>> compilers didn't have any qualifiers, and prior to "restrict" the only
>>> important qualifier one could attach to arguments (as opposed to the things
>>> they would point to) would have been "volatile", and that would only be
>>> necessary in rather obscure circumstances.
>>>
>>
>> I honestly don't remember any more.
>>
>> What I do remember is ordering the source files in reverse of their call
>> order - main() at the bottom and functions above it.  Never call a
>> function below you in the source file.  And it became habit to load your
>> file in the editor and immediately go to the end so you could find main().
>>
>> This was due to the lack of function declarations.
> 
> C has always had function declarations.  Some of what you write below
> suggest you are confusing plain declarations (which C has always had)
> with function prototypes which are more recent (though very old now).
> 

If you want to call them function declarations, fine.  We didn't back then.

>> If you did it this way, the compiler would have the function
>> definition before the function call, and could compare and, if
>> necessary, convert the parameter type in the call to the one in the
>> function.
> 
> You don't need (and have never needed) the function definition to
> precede its use.  In old C, a function declaration does just as well --
> it provides as much information to the compiler as a function definition
> does -- but neither a preceding declaration nor a preceding definition
> will allow the compiler to convert the arguments to match the parameter
> types.  (The definition has more information in it, but the language
> rules mean that the compiler is not allowed to use it.  Indeed, some
> very early C programs used the argument/parameter mismatch as a
> poor-person's union.)
> 

In pre-K&R, it was the ONLY WAY to specify the parameter list.  If you
wanted the compiler to validate the parameters being passed and convert if
necessary, you had to do it that way.

> A function declaration that includes a prototype does provide the
> information required to convert argument types.  That seems to be what
> you are talking about (backed-up by the dates -- they appears in
> compiler in the second half of the 1980s).
> 

> <snip>
>> That's why I remember function declarations so well.  It made
>> programming so much easier! :)
> 
> You must mean prototypes here.  That was the major improvement that
> arrived in C90 (though it was available in some compiler before that of
> course).
> 

If you want to call those function declarations, fine.  We didn't back then.

Function declarations were widely touted when the compilers first
started supporting them.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/6/2016 3:24:46 AM
On 12/5/2016 6:24 PM, Keith Thompson wrote:
> Ben Bacarisse <ben.usenet@bsb.me.uk> writes:
> [...]
>> You don't need (and have never needed) the function definition to
>> precede its use.  In old C, a function declaration does just as well --
>> it provides as much information to the compiler as a function definition
>> does -- but neither a preceding declaration nor a preceding definition
>> will allow the compiler to convert the arguments to match the parameter
>> types.  (The definition has more information in it, but the language
>> rules mean that the compiler is not allowed to use it.  Indeed, some
>> very early C programs used the argument/parameter mismatch as a
>> poor-person's union.)
> 
> I'm not sure what the rules were (if any) in pre-ANSI C, but in C89 and
> later calling an unprototyped function with an incorrect type and/or
> number of arguments has undefined behavior.  So in principle, a compiler
> *could* generate code to convert the arguments to the expected type.
> 
> In practice, I don't know of any compilers that do so.
> 
> The correct solution, of course, is to use prototypes.
> 

Keith, they did if the function was defined, as I indicated above.  But
not otherwise.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/6/2016 3:25:30 AM
On Monday, December 5, 2016 at 10:21:12 PM UTC-5, Jerry Stuckle wrote:
> On 12/5/2016 5:12 PM, jameskuyper@verizon.net wrote:
....
> > There's a terminology problem here. All three of the following are C function
> > declarations:
> >
> > int foo();
> > int foo(char, int *);
> > int foo(char c, int *pi) {*pi = c;}
> >
> > The first declaration uses K&R syntax, already well established when the first
> > edition of K&R came out. The first two declarations are also called forward
> > declarations. The second and third declarations use function prototype syntax,
> > and the third function declaration is also a function definition.
> >
>
> The first is a K&R-style declaration, supported to this day.  The second
> is an ANSI declaration (which was not supported in early C compilers).

Yes, it is an ANSI declaration. All three declarations are ANSI, so that's
not a particularly useful phrase for distinguishing between them. They're
also all ISO declarations, so that doesn't help distinguish them either. The
term "Prototype" is what you need to properly make the distinction.

> The third is a definition (which also was not supported in early C
> compilers).

Again, you're not making the proper distinctions. Function definitions are
pretty much as old as C itself. What early C compilers did not support was
function definitions using function prototype syntax; they used K&R syntax
instead.

>   A definition is also a declaration, but not vice versa.

That's one sentence you didn't mess up by using the wrong terms.
Congratulations.

....
> > However, I've never used a version of C so primitive that it didn't support
> > forward declarations using K&R syntax. How long ago is the time period you're
> > describing?
> >
>
> Forward declarations with K&R did not allow for parameter type matching.
> Only function names and return types were supported.

Yes, but they did allow you to call a function whose definition occurs later
in the file, so ordering the definitions in the reverse of call order was
not necessary. Function prototypes didn't change that. They added a bunch
of other useful features to the language, but that isn't one of them.

> > Function prototype syntax was not described in 1st edition K&R C, but was
> > standardized in C90. I'm not sure when compilers that supported function
> > prototypes first became available, but your claim that function declarations
> > were added to some C compilers as early as 1985 would make more sense if you
> > were actually talking about function prototypes, rather than function
> > declarations.
> >
>
> Yes, well after 1985.  And when they started supporting it is what we're
> talking about.  But then we already know you don't read so well.

I read well enough to notice you making statements about "declaration" when
what you're saying about them isn't true for all declarations, but only for
those declarations that use function prototype syntax.

> > The idea that function definitions also serve as function declarations goes
> > back at least to K&R C. The only way to not use function declarations in C is
> > if the only function in your program is main() - the definition of main() would
> > still also count as a function declaration but unless main() is recursive, that
> > declaration would never be put to use.
>
> Which is what I said in the first place.

What you said was "This was due to the lack of function declarations." There
was no such lack. Function declarations, and in particular, forward
declarations of functions, were supported long before function prototypes
were invented. You were using function declarations, at the very time that
you cited a "lack of function declarations" as the reason why you had to
organize your code that way. You weren't using forward declarations, or you
wouldn't have had to organize your code in that order - but the reason why
you felt forced to use that order was not the lack of forward declarations,
either - just, apparently, a failure to use them.

....
> > Again, your comment that you were NOT using function declarations when you
> > first wrote your code that way would make more sense if what you actually meant
> > was that you were not using function prototypes.
> >
>
> No shit, Sherlock.  Do you even try to read?  Or do you just argue to argue?

Yes, I try to read. And when I see the term "function declaration" used in
a statement that is false as written, but would be correct if that is changed
to "function prototype", I (sometimes) argue. Not because I love to argue,
but to correct the incorrect terminology, in the hopes that people with more
mental flexibility than you have will learn to avoid confusing people through
the use of the wrong terms.
0
jameskuyper
12/6/2016 6:05:27 AM
On Monday, December 5, 2016 at 10:24:56 PM UTC-5, Jerry Stuckle wrote:
> On 12/5/2016 5:49 PM, Ben Bacarisse wrote:
> > Jerry Stuckle <jstucklex@attglobal.net> writes:
> > 
> >> On 12/5/2016 10:15 AM, supercat@casperkitty.com wrote:
> >>> On Sunday, December 4, 2016 at 6:39:24 PM UTC-6, Jerry Stuckle wrote:
> >>>> C compiler support for function declarations came out about 1985,
> >>>> IIRC.
> > 
> > I think you mean function prototypes.  C has always had function
> > declarations.
> >
> 
> If you call
> 
> int foo();
> 
> with no parameters, etc. a function declaration, then yes.  We didn't
> back then.
....
> > C has always had function declarations.  Some of what you write below
> > suggest you are confusing plain declarations (which C has always had)
> > with function prototypes which are more recent (though very old now).
> > 
> 
> If you want to call them function declarations, fine.  We didn't back then.
....
> > You must mean prototypes here.  That was the major improvement that
> > arrived in C90 (though it was available in some compiler before that of
> > course).
> > 
> 
> If you want to call those function declarations, fine.  We didn't back then.

That's too bad. We did call them function declarations back then. Why didn't
you like the terminology? It's a declaration (trivial to prove from the
grammar), and the thing it declared was a function, so why wouldn't you call it
a function declaration? The term was good enough for K&R (appendix A, section
17, page 213, 1st edition). Why wasn't it good enough for you?
0
jameskuyper
12/6/2016 6:47:30 AM
On Monday, December 5, 2016 at 9:24:56 PM UTC-6, Jerry Stuckle wrote:
> In pre-K&R, it was the ONLY WAY to specify the parameter list.  If you
> wanted the compiler to validate the parameters being passed and convert if
> necessary, you had to do it that way.

Out of curiosity, can you identify some compilers that would do argument-
type conversions?  I can believe there might have been some, but a lot of
compilers would basically have a caller lay out arguments based upon their
type (as seen by the caller) oblivious to what the called function was
expecting, and the called function would look at the indicated spots for
arguments, oblivious to what arguments the caller had actually put there.
I think "lint" tools got their start doing argument validation; I would
think they should have been able to perform validation across a number of
C files, even in the absence of prototypes (since one of the purposes of
such tools, even today, is to squawk if a function is defined using one
signature, but a caller has seen a prototype--not visible at the point of
definition--which uses a different signature).
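
A hypothetical two-file sketch of the kind of mismatch such a tool is meant
to catch:

    /* file1.c -- the definition uses one signature */
    int scale(x)
    int x;
    {
        return 10 * x;
    }

    /* file2.c -- the caller, with only an old-style declaration visible,
       passes a double; the argument does not match what scale() expects */
    int scale();

    int bad_call(void)
    {
        return scale(2.5);   /* a lint-style cross-file check flags this */
    }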
0
supercat
12/6/2016 2:10:38 PM
On 12/6/2016 1:05 AM, jameskuyper@verizon.net wrote:
> On Monday, December 5, 2016 at 10:21:12 PM UTC-5, Jerry Stuckle wrote:
>> On 12/5/2016 5:12 PM, jameskuyper@verizon.net wrote:
> ...
>>> There's a terminology problem here. All three of the following are C function
>>> declarations:
>>>
>>> int foo();
>>> int foo(char, int *);
>>> int foo(char c, int *pi) {*pi = c;}
>>>
>>> The first declaration uses K&R syntax, already well established when the first
>>> edition of K&R came out. The first two declarations are also called forward
>>> declarations. The second and third declarations use function prototype syntax,
>>> and the third function declaration is also a function definition.
>>>
>>
>> The first is a K&R-style declaration, supported to this day.  The second
>> is an ANSI declaration (which was not supported in early C compilers).
> 
> Yes, it is an ANSI declaration. All three declarations are ANSI, so that's not a particularly useful phrase for distinguishing between them. They're also all ISO declarations, so that doesn't help distinguish them either. The term "Prototype" is what you need to properly make the distinction.
>

That is a term YOU use.  That was not the term that was used at the time.

>> The third is a definition (which also was not supported in early C
>> compilers).
> 
> Again, you're not making the proper distinctions. Function definitions are pretty much as old as C itself. What early C compilers did not support was function definitions using function prototype syntax; they used K&R syntax instead.
> 

Incorrect.  Function definitions ARE as old as C itself.  In fact, they
are much older - going all the way back to Fortran.

And you keep getting hung up on the term "prototype".  That was NOT the
term that was used at the time, and we are talking about what was going
on in the mid 80's.

>>   A definition is also a declaration, but not vice versa.
> 
> That's one sentence you didn't mess up by using the wrong terms. Congratulations.
> 

Cut the shit, Sherlock.  You think you're showing your intelligence, but
you're just doing the opposite.

> ...
>>> However, I've never used a version of C so primitive that it didn't support
>>> forward declarations using K&R syntax. How long ago is the time period you're
>>> describing?
>>>
>>
>> Forward declarations with K&R did not allow for parameter type matching.
>> Only function names and return types were supported.
> 
> Yes, but they did allow you to call a function whose definition occurs later in the file, so ordering the definitions in the reverse of call order was not necessary. Function prototypes didn't change that. They added a bunch of other useful features to the language, but that isn't one of them.
>

You obviously never wrote any C before ANSI definitions came about.  If
what you say is true, why was virtually ALL code back then coded in
reverse order? Maybe because they knew what they were doing and you
don't.  But that is quite obvious.

And you can't read, either.

>>> Function prototype syntax was not described in 1st edition K&R C, but was
>>> standardized in C90. I'm not sure when compilers that supported function
>>> prototypes first became available, but your claim that function declarations
>>> were added to some C compilers as early as 1985 would make more sense if you
>>> were actually talking about function prototypes, rather than function
>>> declarations. 
>>>
>>
>> Yes, well after 1985.  And when they started supporting it is what we're
>> talking about.  But then we already know you don't read so well.
> 
> I read well enough to notice you making statements about "declaration" when what you're saying about them isn't true for all declarations, but only for those declarations that use function prototype syntax.
> 

Once again you "read".  But you don't "understand" or "comprehend".
Your head is so far up your arse that you can see your tonsils.  But
then that's its normal position.

>>> The idea that function definitions also serve as function declarations goes
>>> back at least to K&R C. The only way to not use function declarations in C is
>>> if the only function in your program is main() - the definition of main() would
>>> still also count as a function declaration but unless main() is recursive, that
>>> declaration would never be put to use.
>>
>> Which is what I said in the first place.
> 
> What you said was "This was due to the lack of function declarations." There was no such lack. Function declarations, and in particular, forward declarations of functions, were supported long before function prototypes were invented. You were using function declarations, at the very time that you cited a "lack of function declarations" as the reason why you had to organize your code that way. You weren't using forward declarations, or you wouldn't have had to organize your code in that order - but the reason why you felt forced to use that order was not the lack of forward declarations, either - just, apparently, a failure to use them.
> 

Again, you read, but you don't comprehend.  And you have no idea what
the terminology was in use at the time - which is what we are talking about.

> ...
>>> Again, your comment that you were NOT using function declarations when you
>>> first wrote your code that way would make more sense if what you actually meant
>>> was that you were not using function prototypes.
>>>
>>
>> No shit, Sherlock.  Do you even try to read?  Or do you just argue to argue?
> 
> Yes, I try to read. And when I see the term "function declaration" used in a statement that is false as written, but would be correct if that is changed to "function prototype", I (sometimes) argue. Not because I love to argue, but to correct the incorrect terminology, in the hopes that people with more mental flexibility than you have will learn to avoid confusing people through the use of the wrong terms.
> 

Yes, you *try* to read.  Now try to *understand* and *comprehend*.  IOW,
get your head out of your arse.

But I also know that is beyond the capacity of your simple mind.  You
have shown that repeatedly.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/6/2016 3:46:48 PM
On 12/6/2016 1:47 AM, jameskuyper@verizon.net wrote:
> On Monday, December 5, 2016 at 10:24:56 PM UTC-5, Jerry Stuckle wrote:
>> On 12/5/2016 5:49 PM, Ben Bacarisse wrote:
>>> Jerry Stuckle <jstucklex@attglobal.net> writes:
>>>
>>>> On 12/5/2016 10:15 AM, supercat@casperkitty.com wrote:
>>>>> On Sunday, December 4, 2016 at 6:39:24 PM UTC-6, Jerry Stuckle wrote:
>>>>>> C compiler support for function declarations came out about 1985,
>>>>>> IIRC.
>>>
>>> I think you mean function prototypes.  C has always had function
>>> declarations.
>>>
>>
>> If you call
>>
>> int foo();
>>
>> with no parameters, etc. a function declaration, then yes.  We didn't
>> back then.
> ...
>>> C has always had function declarations.  Some of what you write below
>>> suggest you are confusing plain declarations (which C has always had)
>>> with function prototypes which are more recent (though very old now).
>>>
>>
>> If you want to call them function declarations, fine.  We didn't back then.
> ...
>>> You must mean prototypes here.  That was the major improvement that
>>> arrived in C90 (though it was available in some compiler before that of
>>> course).
>>>
>>
>> If you want to call those function declarations, fine.  We didn't back then.
> 
> That's too bad. We did call them function declarations back then. Why didn't
> you like the terminology? It's a declaration (trivial to prove from the
> grammar), and the thing it declared was a function, so why wouldn't you call it
> a function declaration? The term was good enough for K&R (appendix A, section
> 17, page 213, 1st edition). Why wasn't it good enough for you?
> 

Gee, I don't know where you were.  We didn't call them function
declarations in the real world.  And I'm not saying just me.  It
included all of the C programmers I knew at IBM, the C user's group I
belonged to (over 100 people, most non-IBMers), the people I knew on
CompuServe....

It seems there is only one person who claims to have called them
function declarations - YOU.

But then you're always one to argue little shit instead of looking at the
big picture.  But that's what the stoopid do - because they can't
understand the big picture - as you have shown in this thread.

You obviously did NOT do any C programming before ANSI declarations
started being used.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/6/2016 3:51:24 PM
On 12/6/2016 9:10 AM, supercat@casperkitty.com wrote:
> On Monday, December 5, 2016 at 9:24:56 PM UTC-6, Jerry Stuckle wrote:
>> In pre-K&R, it was the ONLY WAY to specify the parameter list.  If you
>> wanted the compiler to validate the parameters being passed and convert if
>> necessary, you had to do it that way.
> 
> Out of curiosity, can you identify some compilers that would do argument-
> type conversions?  I can believe there might have been some, but a lot of
> compilers would basically have a caller lay out arguments based upon their
> type (as seen by the caller) oblivious to what the called function was
> expecting, and the called function would look at the indicated spots for
> arguments, oblivious to what arguments the caller had actually put there.
> I think "lint" tools got their start doing argument validation; I would
> think they should have been able to perform validation across a number of
> C files, even in the absence of prototypes (since one of the purposes of
> such tools, even today, is to squawk if a function is defined using one
> signature, but a caller has seen a prototype--not visible at the point of
> definition--which uses a different signature).
> 

Every one of them, if the function was defined before it was called.
And incompatible conversions would be flagged as errors.

But if the function wasn't defined until after it was called, there was
no way for the compiler to know if conversions were necessary at the
type of the call, and it was up to the programmer to ensure the calls
were correct.  The only assumption in this case was that the function
returned an int.  It could have had any number of any type of parameters
- they would all compile.

Early lint tools didn't help either, because they were single-pass
tools.  But they did catch a lot of possible errors (which many
compilers can now warn for) like "if (a=b)" or mismatches between printf
formatting commands and parameter types.
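
For example (a hypothetical snippet showing both of those classic catches):

    #include <stdio.h>

    void classic_lint_catches(int a, int b)
    {
        if (a = b)                   /* assignment where a comparison was
                                        probably intended: a == b          */
            printf("equal\n");

        printf("%d\n", 3.14);        /* format specifier says int, but the
                                        argument is a double               */
    }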

As lint matured, it became better and better at catching things like you
mentioned.

-- 
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
0
Jerry
12/6/2016 3:58:08 PM
On Tuesday, December 6, 2016 at 10:46:59 AM UTC-5, Jerry Stuckle wrote:
> On 12/6/2016 1:05 AM, jameskuyper@verizon.net wrote:
> > On Monday, December 5, 2016 at 10:21:12 PM UTC-5, Jerry Stuckle wrote:
> >> On 12/5/2016 5:12 PM, jameskuyper@verizon.net wrote:
> > ...
> >>> There's a terminology problem here. All three of the following are C function
> >>> declarations:
> >>>
> >>> int foo();
> >>> int foo(char, int *);
> >>> int foo(char c, int *pi) {*pi = c;}
> >>>
> >>> The first declaration uses K&R syntax, already well established when the first
> >>> edition of K&R came out. The first two declarations are also called forward
> >>> declarations. The second and third declarations use function prototype syntax,
> >>> and the third function declaration is also a function definition.
> >>>
> >>
> >> The first is a K&R-style declaration, supported to this day.  The second
> >> is an ANSI declaration (which was not supported in early C compilers).
> > 
> > Yes, it is an ANSI declaration. All three declarations are ANSI, so that's
> > not a particularly useful phrase for distinguishing between them. They're
> > also all ISO declarations, so that doesn't help distinguish them either.
> > The term "Prototype" is what you need to properly make the distinction.
> That is a term YOU use.  That was not the term that was used at the time.

It is a term I used, at that time. I first learned C from a class I took in
1979. My textbook was K&R C, 1st edition, which used the term "function
declaration". So did my teacher. So did my boss on my first job writing C code.
I can't remember talking to any other people about C code at that time, but I'm
sure of those two.

I'm curious, back in the day, if you wanted to convey to somebody the fact that 

    double * moving_average_temperature();

was a C construct that serves to declare the fact that the specified identifier
identifies a function, what term did you use to convey that fact? Things that
declare identifiers have been officially been called "declarations" from the
very beginnings of C. Prefixing it with the adjective "function" to specify the
type of thing being declared seems a very obvious and logical thing to do. So
what did you use instead of "function declaration" to convey such information.
More importantly, WHY did you use something other than "function declaration"
to describe a declaration that declares a function?

> >> The third is a definition (which also was not supported in early C
> >> compilers).
> > 
> > Again, you're not making the proper distinctions. Function definitions are
> > pretty much as old as C itself. What early C compilers did not support was
> > function definitions using function prototype syntax; they used K&R syntax
> > instead.
> > 
> 
> Incorrect.  Function definitions ARE as old as C itself.  In fact, they
> are much older - going all the way back to Fortran.

True - I started using Fortran in 1976, and it did have function definitions. I
was only talking about the C concept, for which B would be a more appropriate
predecessor to mention.

> And you keep getting hung up on the term "prototype".  That was NOT the
> term that was used at the time, and we are talking about what was going
> on in the mid 80's.

Agreed - the term "function prototype" did not come into use until function
prototypes came into use, so that's reasonable. I'm sure there may have been an
intermediate stage where prototypes existed, and did not have a name yet, or
had a name but had not been implemented yet - but I never saw that intermediate
stage, so I'm not sure whether the feature or the name of the feature came
first.
I'm merely asking you to use the term "function prototype" when making
statements about function prototypes that are not true about other types of
function declarations. Function declarations did exist in the 1980s, and were
called function declarations by everyone I knew who worked with C.

> >>> However, I've never used a version of C so primitive that it didn't support
> >>> forward declarations using K&R syntax. How long ago is the time period you're
> >>> describing?
> >>>
> >>
> >> Forward declarations with K&R did not allow for parameter type matching.
> >> Only function names and return types were supported.
> > 
> > Yes, but they did allow you to call a function whose definition occurs
> > later in the file, so ordering the definitions in the reverse of call order
> > was not necessary. Function prototypes didn't change that. They added a
> > bunch of other useful features to the language, but that isn't one of them.
> You obviously never wrote any C before ANSI definitions came about.  If

I started writing C in 1979. The ANSI standard didn't come out till a decade
later. I was an eager supporter of the new standard in general, and function
prototypes in particular, because I make lots of typos, and function prototypes
made those typos easier to catch. I started using function prototypes as soon
as I got access to a compiler that supported them, but that wasn't until a
couple years after the standard came out.

> what you say is true, why was virtually ALL code back then coded in
> reverse order? ...

Virtually all of the code I've ever worked on, including code from the early
1980s, was written using forward function declarations to allow it to be
written in forward order. I'm the only person whose code I'm familiar with who
uses reverse order (and I only started doing that relatively recently, in the
last decade or two). That's because I don't like unnecessary forward
declarations - it's not because I have ever used a C compiler that failed to
support forward declarations.

> ... Maybe because they knew what they were doing and you
> don't.  But that is quite obvious.

My code compiled without error messages (eventually!) and did what it was
supposed to do, which seriously limits how incorrect my understanding could
have been. It worked despite the fact that, in those days, I imitated the
examples I'd been given, using forward declarations to allow me to write code
where the calling routine was defined before the called routine. Perhaps forward
declarations were not actually supported by the compiler, and my code couldn't
actually call functions that had only been declared but not yet defined -
but because the code had undefined behavior, it just happened to behave in
precisely the way I was incorrectly expecting it to work? That doesn't seem
very plausible.

....
> Again, you read, but you don't comprehend.  And you have no idea what
> the terminology was in use at the time - which is what we are talking about.

I know what terminology K&R C 1st edition used, my teacher used, I used, and my
boss used - at that time.