PL/I, COBOL, Advantages, Equivalence, et al

I started to write a similar note last night, but decided not to.  Today, I 
really think I should.

I hate to disagree AND agree with both D.F. *and* Robin, but it seems that in my 
opinion (not fact <G>) they both have similar problems in what they post (on 
some topics).

To me, the POWER of a programming language has absolutely NOTHING to do with 
"Can you translate XYZ syntax from language to another in 27 keystrokes taking 
no more than 3.64 lines of code".  The power of a programming language is 
determined by:

 - What type of programming requirements can you SOLVE in a programming language 
(what types of applications can the programming language be used for)
 - Given that MOST currently supported programming languages can be used to 
solve MOST programming requirements, (not all for either of these), the 
questions then become:

  - How well does the resulting object (machine) code perform?  (You can DO 
complex arithmetic in COBOL, for example, but it certainly wouldn't perform very 
well).
 - Then compare the run-time performance with the ability to maintain the code 
(how easy is it to get programmers to understand, maintain and enhance the 
source code.  The often-cited COBOL requirement is commonly stated as "Can the 
average COBOL maintenance programmer understand and fix a "bug" in the source 
code at 3 a.m. in the morning?"  If not, the code is probably not "easily 
maintainable".)

    ***
Language wars are usually about as useful as discussions by children of  "My 
father can beat up your father".

I would use COBOL to write a weather forecasting/modeling application about as 
soon as I would recommend using Fortran to write an IBM mainframe CICS 
transaction processing routine.  I think that languages such as REXX, PERL, even 
AWK or SPITBOL/SNOBOL are better for "regular expression" text handling than 
either COBOL or PL/I (although both of them usually CAN do such text handling). 
If I were writing a new version of a Unix-(like) operating system and didn't use 
C (or equivalent), then I can't imagine anyone thinking that I had made the 
correct decision.  I don't (personally) know what "language" current games, 
CAD/CAM, or embedded systems are being written in, but I can almost guarantee it 
isn't REXX (or COBOL).

The "right tool" for the "right job" has and probably always will make sense. 
In fact, the HISTORY of PL/I was that much (not all) of its original design 
criteria was that it be able to handle (well) what then-current COBOL and 
Fortran could already separately do - but what neither could do that the other 
could.  Even today, if I were in an IBM mainframe shop that did BOTH scientific 
and business data processing and wanted to share resources (data and 
programmers), PL/I would probably be a better choice than COBOL or Fortran (but 
NOT necessarily C/C++).  However, it is equally true that both Fortran and COBOL 
have added features since the days that PL/I was designed to make them BETTER 
(not perfectly) suited to more "general" programming needs.

  ***

To me, D.F. is (for no useful reason) so bothered by Robin's fact statement on 
"language power" that he often makes erroneous statements and raises issues that 
have little or nothing to do with actual programming language requirements, e.g. 
"Translate this syntax from Fortran into some other language" - rather than 
SOLVE this programming requirement in your "language of choice".  Meanwhile, 
Robin states things as "fact" that are neither substantiated nor universally 
accepted.  Probably MOST (not all) programmers who PREFER using PL/I agree with 
them (so that they are reasonable to express in this newsgroup and the PL/I FAQ) 
but when viewed by "non-PL/I believers" they  do tend to reduce Robin's GENERAL 
credibility.  My biggest objection to the FAQ statement is not that it 
accurately reflects a COMMONLY (not universally) held opinion, but rather that 
IF it were changed, D.F. has indicated that he would stop posting his (often 
stupid) challenges in THIS newsgroup.  If the FAQ were reworded to more 
accurately reflect "opinion" or "for some applications in some environments" 
PL/I *is* a better choice than other programming languages that COULD solve the 
same problem, *AND* if D.F. actually did (then) stop his ridiculous posts, I 
would think many comp.lang.pl1 readers would be MUCH happier.

 ***

Having made comments on "fact vs opinion", I did want to express (yet again) 
some of what I understand (but can be corrected on) are advantages of both COBOL 
and PL/I *for IBM mainframe business* programming.

PL/I advantages:
 - preprocessor (other than - possibly but not certainly - the HLASM macro 
processor, I don't know of any similar tool on IBM or other environments that 
has NEARLY the power of the PL/I preprocessor.  A small illustrative sketch of 
this and several of the items below follows this list.)
 - Bit processing (COBOL can use LE callable services for this, but it is pretty 
ugly and not very intuitive.  The current ISO 2002 Standard includes bit 
support - but it isn't yet available on IBM mainframes.  I don't see the need in 
MOST IBM mainframe business applications, but it certainly would be nice to have 
in COBOL)
 - Vector handling and Complex arithmetic  (Both of these could be done in 
COBOL, but certainly not in "normal" code.  Vectors can be handled by "loop" 
logic, but certainly not in a good or well-performing manner.  Complex 
arithmetic would require LE callable services or other hand-written subroutines. 
HOWEVER, in my (limited) experience, neither of these are commonly needed in 
business logic. In fact, I have never seen the requirement to use a PL/I 
subroutine to handle such for an IBM mainframe business application - limiting 
the use of COBOL for the same application.  I am certain such applications DO 
exist; they simply are NOT common).
 - PL/I "native" condition handling does provide portable features not available 
in IBM mainframe COBOL.  (Again, the "common condition handling" declaratives 
model is part of the ISO 2002 COBOL Standard, but not available on IBM 
mainframes.  The LE condition handling provides most - possibly all - that PL/I 
can do, but this is NOT something that the "average" COBOL programmer would know 
how to use or feel very comfortable with)
 - In a site that wants to SHARE resources (data, programmers, etc) between a 
"scientific" side and a "business" side, PL/I would definitely be a better 
choice than COBOL.
 - VARYING strings (mentioned in a number of PL/I newsgroup threads) have 
similar facilities in COBOL but the design is sufficiently different that the 
COBOL standards groups are in the process of adding "prefixed" and "delimited" 
ANY LENGTH strings into the next COBOL Standard.  I don't know when (if) they 
will be added to IBM mainframe COBOL, but currently this is certainly something 
available today in PL/I but not in COBOL.
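
Here is a small, untested PL/I sketch of several of the items above (the bit 
handling, array/complex arithmetic, VARYING strings, and a one-line taste of the 
preprocessor).  The names and values are purely illustrative, not from any real 
application:

  example: procedure options(main);

     /* preprocessor: a compile-time variable; each later N becomes 100 */
     %declare n character;
     %n = '100';
     declare table(n) fixed binary(31);

     /* bit processing: BIT strings with the usual logical operators */
     declare flags bit(8) initial('00000000'b);
     flags = flags | '00000100'b;             /* turn one flag on */
     if substr(flags, 6, 1) = '1'b then
        put skip list('flag 6 is set');

     /* vector handling and complex arithmetic: whole-array expressions */
     declare (a(100), b(100), c(100)) float binary(53) complex;
     a = 0;  b = 0;
     c = a * b + (2 + 3i);                    /* element by element */

     /* VARYING strings: current length is tracked separately from the maximum */
     declare msg character(200) varying;
     msg = 'hello';                           /* current length is 5 */
     msg = msg || ', world';

  end example;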

        * * * *

COBOL Advantages
 - first and foremost, COBOL is MORE commonly available at IBM mainframe shops 
than PL/I - and this includes programming resources currently available and also 
the "pool" of programmers from which to hire NEW programmers.  OTOH, the pool is 
shrinking (at least outside of India and some "out-sourcing" areas of the 
globe).  However, I think it is still SIGNIFICANTLY greater than PL/I resources. 
It is my perception (and has been ever since I first became aware of PL/I vs 
COBOL issues) that there are parts of the world and specific industries in which 
PL/I is more popular than it is in "general" data processing in the US. 
However, there are NO places in the world and no industries in which IBM PL/I is 
SIGNIFICANTLY more popular than IBM COBOL for BUSINESS data processing; while 
the converse is true that there are parts of the world and industries in which 
IBM mainframe COBOL has significantly greater use than IBM mainframe PL/I.
 - IBM's mainframe COBOL fully supports OO COBOL - both with and without Java 
interoperability.  This has not (as far as I can tell) "caught on" among IBM 
mainframe COBOL users, but it does have a "slightly" growing interest - if not 
use. (As others have pointed out, you CAN do much of this with PL/I today, but 
it isn't much better than trying to do bit-twiddling with COBOL).
 - COBOL Standard Report Writer. (This is controversial even among IBM mainframe 
COBOL sites; some love it and some hate it. It is currently limited to "fixed 
font" reports so I don't personally see it having a "growing future" among 
today's IBM mainframe sites, but it does have its uses that require 
significantly more complex coding in PL/I).
 - The latest COBOL compiler seems to support larger INDIVIDUAL character 
data-items. According to:
        http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/ibm3lr40/A.0
  "Maximum length of CHARACTER         32767"
while
      http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IGY3LR31/APPENDIX1.2
has
  "01-49 data item size          134,217,727 bytes"

This is (somewhat) useful for XML data and DB2 "BLOBs".  On the other hand, 
some of the PL/I limits are 2,147,483,647 where COBOL is limited to 
134,217,727.  So PL/I may be able to handle some data that COBOL can't.

- I believe that there are more "purchasable" packages that allow for COBOL 
customization and subroutines than there are PL/I packages - on the IBM mainframe 
market today.

   ***

In general, (but probably with SOME exceptions), the rest of the "advantages" 
for IBM mainframe data processing between COBOL and PL/I are a matter of "style" 
and what a programmer is used to.  Certainly items like "verbosity" (COBOL) or 
unstructured condition handling (PL/I) *are* matters of opinion rather than 
matters of fact.  The popularity of COBOL (over PL/I) in existing and past IBM 
mainframe shops does speak to "how common" certain opinions are.  On the other 
hand, just because some shops have and continue to use COBOL when PL/I *could* 
be used doesn't mean that they have made the BEST choice - any more than the 
reverse decision is "always right" when it occurs.

  ***

Again, I find "language wars" not very useful, but I did think I should post 
this note to express MY opinion and hopefully separate SOME "fact" from 
"opinion" (mine or others).


-- 
Bill Klein
 wmklein <at> ix.netcom.com 


wmklein
9/18/2006 7:32:20 PM

On Mon, 18 Sep 2006 12:32:20 -0700, William M. Klein  
<wmklein@nospam.netcom.com> wrote:

> I started to write a similar note last night, but decided not to.   
> Today, I
> really think I should.
>
> I hate to disagree AND agree with both D.F. *and* Robin, but it seems  
> that in my
> opinion (not fact <G>) they both have similar problems in what they post  
> (on
> some topics).
>
> To me, the POWER of a programming language has absolutely NOTHING to do  
> with
> "Can you translate XYZ syntax from language to another in 27 keystrokes  
> taking
> no more than 3.64 lines of code".  The power of a programming language is
> determined by:
>
>  - What type of programming requirements can you SOLVE in a programming  
> language
> (what types of applications can the programming language be used for)
>  - Given that MOST currently supported programming languages can be used  
> to
> solve MOST programming requirements, (not all for either of these), the
> questions then become:
>
>   - How well does the resulting object (machine) code perform.  (You can  
> DO
> complex arithmetic in COBOL, for example, but it certainly wouldn't  
> perform very
> well).
>  - Then compare the run-time performance with the ability to maintain  
> the code
> (how easy is it to get programmers to understand, maintain and enhance  
> the
> source code.  The often-cited COBOL requirement is commonly stated as  
> "Can the
> average COBOL maintenance programmer understand and fix a "bug" in the  
> source
> code at 3 a.m. in the morning?  If not, the code is probably not "easily
> maintainable".)
>
I would go one step further, and drop the importance of object code efficiency 
in lieu of how does the language help the programmer right more reliable 
code?  How well does the compiler do in semantic analysis?

(big snip)



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
tom294
9/18/2006 8:03:59 PM
William M. Klein <wmklein@nospam.netcom.com> wrote:
(snip)
 
> To me, the POWER of a programming language has absolutely NOTHING 
> to do with "Can you translate XYZ syntax from language to another 
> in 27 keystrokes taking

(big snip)

I sometimes find comparisons of languages interesting, in that you
can understand the design goals of a language by seeing what it allows
and disallows.  I try to make my comparisons fair, stating facts separately
from opinions.   I consider it similar to the "compare and contrast"
assignments for studying literature.  

-- glen
gah1
9/18/2006 9:17:27 PM
On Mon, 18 Sep 2006 13:03:59 -0700, Tom Linden <tom@kednos-remove.com>  
wrote:

(snip)

> I would go one step further,  and drop the importance of object code  
> efficiency
> in lieu of how does the language help the programmer right more reliable
                                                         write
> code?  How well does the compiler do in semantic analysis?

(big snip)



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
tom294
9/18/2006 9:34:41 PM
"Tom Linden" <tom@kednos-remove.com> writes:
> I would go one step further, and drop the importance of object code
> efficiency in lieu of how does the language help the programmer
> right more reliable code?  How well does the compiler do in semantic
> analysis?

There are times when that's most relevant, and times when it's not.

When trying to get a Cray (or the likes) to render as many frames per
hour as possible on the latest would-be blockbuster, the goal may
indeed be to maximize object code efficiency.

On the other hand, for "business applications," whether that be
accounting, inventory analysis, or the like, you're probably I/O
bound, and hence not so interested in maxxing out the CPU.  (And
hence, features that somehow enable reliability / resilience are to be
desired...)

Both are legitimate scenarios that crop up.  The latter is probably
more common...
-- 
let name="cbbrowne" and tld="ca.afilias.info" in String.concat "@" [name;tld];;
<http://dba2.int.libertyrms.com/>
Christopher Browne
(416) 673-4124 (land)
cbbrowne2
9/19/2006 1:28:36 PM
On Tue, 19 Sep 2006 06:28:36 -0700, Christopher Browne  
<cbbrowne@ca.afilias.info> wrote:

> When trying to get a Cray (or the likes) to render as many frames per
> hour as possible on the latest would-be blockbuster, the goal may
> indeed be to maximize object code efficiency.

This is IMV a special case requiring more hands-on nurturing, much
like a race car vs. an ordinary car.  IIRC, SGI used the MIPS compiler
suite, which pruned the stack frame from leaf nodes, making it more
difficult to recover gracefully from errors.

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
tom294
9/19/2006 1:33:11 PM
glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu> writes:
> William M. Klein <wmklein@nospam.netcom.com> wrote:
> (snip)
>  
>> To me, the POWER of a programming language has absolutely NOTHING 
>> to do with "Can you translate XYZ syntax from language to another 
>> in 27 keystrokes taking
>
> (big snip)
>
> I sometimes find comparisons of languages interesting, in that you
> can understand the design goals of a language by seeing what it allows
> and disallows.  I try to make my comparisons fair, stating facts separately
> from opinions.   I consider it similar to the "compare and contrast"
> assignments for studying literature.  

The fact that different languages are good at expressing different
things has pointed some to the notion that you should learn a new
language (and not one just like the others you already know) every few
years.

COBOL, FORTRAN, and PL/I are three different languages, but to get to *truly*
different requires going a fair bit further afield.

Very different from any of these (and each other) would be such things
as:
 - ICON
 - Snobol
 - Lisp
 - Haskell
 - Python
 - Perl
 - C

There are problems that each of these could solve "in 27 keystrokes"
that would likely take 27 pages in some other choice of language
(perhaps hyperbole, to a small degree...)

Walking in such extra shoes is claimed to expand your ability to think
about different kinds of problems, solutions, and solution methods...
-- 
output = ("cbbrowne" "@" "ca.afilias.info")
<http://dba2.int.libertyrms.com/>
Christopher Browne
(416) 673-4124 (land)
cbbrowne2
9/19/2006 1:33:40 PM
Christopher Browne <cbbrowne@ca.afilias.info> wrote:
(snip)
 
> The fact that different languages are good at expressing different
> things has pointed some to the notion that you should learn a new
> language (and not one just like the others you already know) every few
> years.
 
> COBOL, FORTRAN, and PL/I are three different, but to head to *truly*
> different requires going a fair bit further afield.
 
> Very different from any of these (and each other) would be such things
> as:
> - ICON
> - Snobol
> - Lisp
> - Haskell
> - Python
> - Perl
> - C

I would also add languages like Mathematica, Matlab, and R.

Interpreted languages are especially convenient for doing things
in a small number of keystrokes, if the language has defined just
the operation that one needs.  While the runtime may be slower,
getting the right answer might still be faster.

-- glen
gah1
9/19/2006 6:37:35 PM
William M. Klein wrote in message <7vCPg.71251$PM1.31867@fe04.news.easynews.com>...

> - Then compare the run-time performance with the ability to maintain the code
>(how easy is it to get programmers to understand, maintain and enhance the
>source code.  The often-cited COBOL requirement is commonly stated as "Can the
>average COBOL maintenance programmer understand and fix a "bug" in the source
>code at 3 a.m. in the morning?

Which is why you should be using PL/I.

PL/I programs can be made failsafe, and do not need
debugging at 3 a.m. in the morning.
PL/I can trap virtually every kind of run-time error,
and can recover and continue, after having produced an
exception report.
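
For anyone who hasn't used them, a minimal (illustrative, untested) sketch of 
the kind of ON-unit handling described above - the file layout and the names 
are made up:

  audit: procedure options(main);
     declare in_rec  character(80),
             amount  fixed decimal(9,2),
             total   fixed decimal(11,2) initial(0),
             done    bit(1)              initial('0'b);

     on endfile(sysin) done = '1'b;

     /* If a field will not convert, report the record and substitute zero;  */
     /* assigning to ONSOURCE retries the conversion, so the run continues   */
     /* instead of abending, and the bad data ends up in the exception log.  */
     on conversion begin;
        put skip list('bad record logged:', in_rec);
        onsource = '0';
     end;

     get edit(in_rec) (a(80));
     do while (^done);
        amount = in_rec;                      /* may raise CONVERSION */
        total  = total + amount;
        get edit(in_rec) (a(80));
     end;
     put skip list('total =', total);
  end audit;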


robin_v
9/20/2006 3:53:40 AM
William M. Klein wrote in message <7vCPg.71251$PM1.31867@fe04.news.easynews.com>...
>I started to write a similar note last night, but decided not to.  Today, I
>really think I should.
>
>I hate to disagree AND agree with both D.F. *and* Robin, but it seems that in my
>opinion (not fact <G>) they both have similar problems in what they post (on
>some topics).
>
>To me, the POWER of a programming language has absolutely NOTHING to do with
>"Can you translate XYZ syntax from language to another in 27 keystrokes taking
>no more than 3.64 lines of code".  The power of a programming language is
>determined by:
>
> - What type of programming requirements can you SOLVE in a programming language
>(what types of applications can the programming language be used for)
> - Given that MOST currently supported programming languages can be used to
>solve MOST programming requirements, (not all for either of these),

Can they? I would dispute C, for example.
There is also the issue of how well they do that,
and how reliably.

You speak of debugging COBOL programs at 3 o'clock in the morning.

    Let's examine that in the context of your statements above.

A PL/I program is robust and fault tolerant.
In the event that something unexpected should happen,
the program (with its built-in PL/I facilities) can print (or write to
an exception file) the details of the error and all the circumstances
(including the actual data) that caused the error.

    And it can then continue with the next lot of data.

    No need for someone to come in at 3am to fix the program.
No need to re-run the program to find out where and why the
program crashed.

    The problem can be analyzed with a fresh mind in the light of day.


robin_v
9/20/2006 3:53:41 AM
William M. Klein wrote in message <7vCPg.71251$PM1.31867@fe04.news.easynews.com>...

>The "right tool" for the "right job" has and probably always will make sense.
>In fact, the HISTORY of PL/I was that much (not all) of its original design
>criteria was that it be able to handle (well) what then-current COBOL and
>Fortran could already separately do - but what neither could do that the other
>could.

It was also designed to do things that Algol could.

    There were many tasks that Fortran could not do at all,
and these were addressed in the design of PL/I.

    Three of those issues that spring to mind were the ability
to handle errors [Fortran simply gave up, i.e., abended],
dynamic arrays, and character strings.

    [With IBM's compilers, it was possible to write a main program
and subroutines in PL/I, and to call a Fortran subprogram.
Even if an error occurred in the Fortran code (e.g., division by zero),
the whole thing did not fall over, because PL/I's error handling
trapped the error and allowed the program to continue.]

>  Even today, if I were in an IBM mainframe shop that did BOTH scientific
>and business data processing and wanted to share resources (data and
>programmers), PL/I would probably be a better choice than COBOL or Fortran (but
>NOT necessarily C/C++).

    PL/I is unequivocally better than C, in terms of reliability and robustness
in particular, and from every other standpoint.

>  However, it is equally true that both Fortran and COBOL
>have added features since the days that PL/I was designed to make them BETTER
>(not perfectly) suited to more "general" programming needs.


But PL/I has also added features.  So the relative relationship has
remained unchanged.


robin_v
9/20/2006 3:53:42 AM
robin wrote:

> William M. Klein wrote in message <7vCPg.71251$PM1.31867@fe04.news.easynews.com>...
> 
>> [...]
>>To me, the POWER of a programming language has absolutely NOTHING to do with
>>"Can you translate XYZ syntax from language to another in 27 keystrokes taking
>>no more than 3.64 lines of code".  The power of a programming language is
>>determined by:
>>
>>- What type of programming requirements can you SOLVE in a programming language
>>(what types of applications can the programming language be used for)
>>- Given that MOST currently supported programming languages can be used to
>>solve MOST programming requirements, (not all for either of these),
> 
> 
> Can they? I would dispute C, for example.

C can do just about anything PL/I can do.  And assembly/machine language 
absolutely can do anything PL/I can do (it does get compiled into 
machine language, after all).

> There is also the issue of how well they do that,
> and how reliably.

Absolutely.

> 
> You speak of debugging COBOL programs at 3 o'clock in the morning.
> 
>     Let's examine that in the context of your statements above.
> 
> A PL/I program is robust and fault tolerant.

Not exactly.  The most you can truthfully say is that a PL/I program can 
be robust and fault tolerant.  Much depends on the skill and experience 
of the programmer.

In my experience, the more powerful and more expressive a programming 
language is, the easier it is for inexperienced programmers to get into 
real trouble.  Well, as a general rule, anyway.  I have seen 
horribly-written, virtually incomprehensible code written in just about 
every computer language I've ever learned well (no fair for me to 
complain about hard-to-understand code in languages I don't know well :-) ).

I once heard a theory that one of the reasons COBOL was so verbose was 
to make it difficult to do anything really clever because that helped 
keep beginning programmers from getting into too much trouble and from 
creating a need for the 3:00 AM emergency debugging sessions.  Lest all 
you COBOL proponents (there are some here, aren't there?) take exception 
to that, I don't subscribe to that theory.  COBOL is an old enough 
language that it was designed before such issues had come into 
consideration.

One of the costs of using a robust language is the length of time it 
takes to learn it well enough to use it properly.

> In the event that something unexpected should happen,
> the program (with its built-in PL/I facilities) can print (or write to
> an exception file) the details of the error and all the circumstances
> including the actual data) that caused the error.
> 
>     And it can then continue with the next lot of data.
> 
Well, only if it's written that way.  PL/I programs don't write 
themselves.  The language certainly has those capabilities, but it's a 
little optimistic to assert that all programs written in it make use of 
those facilities.

>     No need for someone to come in at 3am to fix the program.
> No need to re-run the program to find out where and why the
> program crashed.
> 
>     The problem can be analyzed with a fresh mind in the light of day.
> 
> 
That's actually more a function of management than of programming 
language.  I don't care what language is used, if a shop is sufficiently 
under-funded, under-staffed, or over-committed mistakes will happen that 
require the proverbial 3:00 AM debugging session.  Choosing PL/I (or 
Fortran, or COBOL, or RPG, or ...) won't change that.


Bob Lidral
lidral  at  alum  dot  mit  edu
9/20/2006 7:22:53 AM
"William M. Klein" <wmklein@nospam.netcom.com> wrote in message 
news:7vCPg.71251$PM1.31867@fe04.news.easynews.com...
>
> I hate to disagree AND agree with both D.F. *and* Robin, but it seems that 
> in my opinion (not fact <G>) they both have similar problems in what they 
> post (on some topics).

The problem in this newsgroup is that the REALLY talented PL/I users left 
years ago, leaving today a cadre of mostly inactive users (that don't even know 
the syntax for IF THEN or can't translate simple Fortran statements).

E.g., no one is willing to confirm whether PL/I has an equivalent declaration of
Fortran's defined type variables, because they don't trust their own knowledge 
well enough to state the facts, OR, in Vowels' case, he won't respond because he 
knows it does NOT.

Without such syntax I don't see how the arbitrary list problem can be solved
using PL/I...

type list
   character,allocatable :: name(:)
   integer,allocatable     :: nums(:)
end type
type (list),allocatable :: lists(:)



dave_frank
9/20/2006 11:40:24 AM

robin wrote:
> William M. Klein wrote in message <7vCPg.71251$PM1.31867@fe04.news.easynews.com>...
> 
> 
>>- Then compare the run-time performance with the ability to maintain the code
>>(how easy is it to get programmers to understand, maintain and enhance the
>>source code.  The often-cited COBOL requirement is commonly stated as "Can the
>>average COBOL maintenance programmer understand and fix a "bug" in the source
>>code at 3 a.m. in the morning?
> 
> 
> Which is why you should be using PL/I.
> 
> PL/I programs can be made failsafe, and do not need
> debugging at 3a.m. in the morning.
> PL/I can trap virtually every kind of run-time error,
> and can recover and continue, after having produced an
> exception report.
> 
> 
Here, I have to agree with Robin.  I have been programming in PL/I since 
it was first available (I was working at a beta site, ca. 1965).  The 
only problems we had were with really junior programmers and the integer 
divide gotcha.  Once we got them past that concept they produced good 
robust code.  Bottom line:  in 41 years of coding in PL/I and being 
around PL/I shops, etc. I don't ever recall a single emergency 3 a.m. 
bug fixing session.  When the programs were released for production they 
were solid.
donaldldobbs
9/20/2006 4:08:08 PM

Bob Lidral wrote:

> robin wrote:
> 
>> (snip)
>>
>> Can they? I would dispute C, for example.
> 
> 
> C can do just about anything PL/I can do. 

It doesn't do nesting and scoping of variables very well.  And it 
certainly can obfuscate.

(big snip)

donaldldobbs
9/20/2006 4:15:53 PM
David Frank <dave_frank@hotmail.com> wrote:
 
> E.G.  No one is willing to confirm if PL/I has equivalent declaration of
> Fortran's defined type variables because they dont trust there own knowledge 
> well enuf to state the facts OR in Vowels case he wont respond because he 
> knows it does NOT.

I don't know if it has defined type variables; I presume you mean
something like C's typedef.  I don't remember that Fortran does, either.

PL/I has had structures (as far as I know, borrowed from COBOL) since
the beginning, along with structure pointers.  List processing
with pointer variables has been part of PL/I from the beginning.
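
A minimal (untested) sketch of that style of PL/I - a based structure and a 
pointer walk.  The names are made up, and this is not offered as a translation 
of the Fortran quoted below:

  declare 1 node based(p),
            2 name     character(20) varying,
            2 nums(10) fixed binary(31),
            2 next     pointer;
  declare (p, head) pointer;
  declare null builtin;

  allocate node;                      /* create a node; P now locates it */
  node.name = 'first';
  node.nums = 0;
  node.next = null;
  head      = p;

  /* walk the list */
  do p = head repeat p -> node.next while (p ^= null);
     put skip list(p -> node.name);
  end;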
 
> Without such syntax I dont see how the arbitrary list problem can be solved
> using PL/I...
 
> type list
>   character,allocatable :: name(:)
>   integer,allocatable     :: nums(:)
> end type
> type (list),allocatable :: lists(:)

I don't know what this has to do with arbitrary lists.  If you want
list processing, you need pointers.  PL/I has always had allocatable
arrays of structures of allocatable arrays.

-- glen
gah1
9/20/2006 7:00:20 PM
"robin" <robin_v@bigpond.com> wrote in message 
news:8X2Qg.32466$rP1.21609@news-server.bigpond.net.au...
> William M. Klein wrote in message 
> <7vCPg.71251$PM1.31867@fe04.news.easynews.com>...
>
>> - Then compare the run-time performance with the ability to maintain the code
>>(how easy is it to get programmers to understand, maintain and enhance the
>>source code.  The often-cited COBOL requirement is commonly stated as "Can the
>>average COBOL maintenance programmer understand and fix a "bug" in the source
>>code at 3 a.m. in the morning?
>
> Which is why you should be using PL/I.
>
> PL/I programs can be made failsafe, and do not need
> debugging at 3a.m. in the morning.
> PL/I can trap virtually every kind of run-time error,
> and can recover and continue, after having produced an
> exception report.
>
>

Current (and recent) IBM mainframe COBOL can do the same.  The question (as 
others have pointed out) is what the programmer has put into the code.  Since 
24/7 processing has become more common, there certainly IS a lot more 
"fail-safe" COBOL than there used to be.  However, the "tradition" in COBOL 
application programming was to give the user what they SAID they 
wanted/expected.  Often, this led to middle of the night "it wasn't ever 
SUPPOSED to get data like this" application failures.  (See comments in this 
newsgroup and in the IBM documentation on the performance overhead from using 
TOO MUCH PL/I "condition handling".)
-- 
Bill Klein
 wmklein <at> ix.netcom.com 


wmklein
9/20/2006 8:22:32 PM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:ees344$6s$7@naig.caltech.edu...
> David Frank <dave_frank@hotmail.com> wrote:
>
>> E.G.  No one is willing to confirm if PL/I has equivalent declaration of
>> Fortran's defined type variables because they dont trust there own 
>> knowledge
>> well enuf to state the facts OR in Vowels case he wont respond because he
>> knows it does NOT.
>
> I don't know if it has defined type variables, I presume you mean
> something like C's typedef.  I don't remember that Fortran does, either.
>

My documentation calls the declarations below Defined Type Variables;
what do you call them?
 C's typedef == Fortran TYPE

> PL/I has had structures (as far as I know, borrowed from COBOL) since
> the beginning, along with structure pointers.  List processing
> with pointer variables has been part of PL/I from the beginning.
>
>> Without such syntax I dont see how the arbitrary list problem can be 
>> solved
>> using PL/I...
>
>> type list
>>   character,allocatable :: name(:)
>>   integer,allocatable     :: nums(:)
>> end type
>> type (list),allocatable :: lists(:)
>
> I don't know what this has to do with arbitrary lists.  If you want
> list processing, you need pointers.  PL/I has always had allocatable
> arrays of structures of allocatable arrays.
>
> -- glen

Then quit farting around and show us the translation of the above TYPE 
statements
to a PL/I allocatable structure with 2 allocatable array members.
If that's true, how about showing us the translation of the above to such a
PL/I structure?


dave_frank
9/20/2006 11:08:19 PM
Donald L. Dobbs wrote:

> 
> 
> Bob Lidral wrote:
> 
>> robin wrote:
>>
>>> (snip)
>>>
>>> Can they? I would dispute C, for example.
>>
>>
>>
>> C can do just about anything PL/I can do. 
> 
> 
> It doesn't do nesting, and scoping of variables very well.  And it 
> certainly can obfuscate.
> 

My comments were intended to be in the same context as William M. 
Klein's: "What type of programming requirements can you SOLVE ..." which 
I took to mean the type of application rather than the programming 
method.  C doesn't have PICTURE or DECIMAL data, either -- and a lot of 
other things PL/I doesn't have.  OTOH, PL/I doesn't allow the use of an 
assignment operator as the condition for an if or while statement nor 
does it have short-cut Boolean operators except as an extension.  Those 
differences, by themselves, don't mean they can't solve the same 
programming problems, merely that they could, or in some cases would 
have to, use different methods to do so.
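
For anyone who has not been bitten by it, here is the C pitfall being referred to, reduced to a minimal hypothetical example; in PL/I an "=" inside an IF condition can only mean comparison, so this particular mistake cannot be written there:

    #include <stdio.h>

    int main(void) {
        int x = 5;

        /* The classic mistake: "=" assigns, "==" compares.  "x = 0" stores 0
           in x and the if then tests that 0, so the branch is never taken --
           and x has been silently clobbered along the way. */
        if (x = 0)
            printf("never reached\n");

        /* What was almost certainly intended. */
        if (x == 0)
            printf("x is zero (it was just overwritten above)\n");

        return 0;
    }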

Actually, I've seen some interesting and occasionally difficult to 
analyze bugs caused by nesting and variable scoping.


Bob Lidral
lidral  at  alum  dot  mit  edu
0
9/21/2006 7:37:38 AM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:ees344$6s$7@naig.caltech.edu...
> David Frank <dave_frank@hotmail.com> wrote:
>
<snip>
>
>> Without such syntax I dont see how the arbitrary list problem can be 
>> solved
>> using PL/I...
>
>> type list
>>   character,allocatable :: name(:)
>>   integer,allocatable     :: nums(:)
>> end type
>> type (list),allocatable :: lists(:)
>
> I don't know what this has to do with arbitrary lists.  If you want
> list processing, you need pointers.

Obviously not, since my solution, which reads an arbitrary number of lists from 
a file into the data structure (above) holding the decoded data, has no pointer 
declarations.

> PL/I has always had allocatable arrays of structures of allocatable 
> arrays.
>
> -- glen

OK, show everyone that it does and post a translation of my data structure 
declaration above 


0
dave_frank (2243)
9/21/2006 8:30:58 AM
Bob Lidral wrote:
> OTOH, PL/I doesn't allow the use of an 
> assignment operator as the condition for an if or while statement 

Good one!  This "feature" alone has probably led to more C programming 
errors than all others combined.

0
Peter_Flass (956)
9/21/2006 10:36:51 AM
Bob Lidral wrote:
> My comments were intended to be in the same context as William M. 
> Klein's: "What type of programming requirements can you SOLVE ..." which 
> I took t mean the type of application rather than the programming 
> method.  C doesn't have PICTURE or DECIMAL data, either -- and a lot of 
> other things PL/I doesn't have.  OTOH, PL/I doesn't allow the use of an 
> assignment operator as the condition for an if or while statement nor 
> does it have short-cut Boolean operators except as an extension.

Worse than that -- it doesn't have short-cut Boolean operators, period, 
but some compilers produce short-cut semantics as a side-effect of 
optimization.
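
A small C illustration of why the distinction is more than pedantry (hypothetical names; C's && always short-circuits, while its bitwise & evaluates both operands, which is roughly what a non-short-cut Boolean operator does):

    #include <stddef.h>

    struct node { struct node *next; int value; };

    int has_next_shortcut(const struct node *p) {
        /* && stops after the first operand when p is NULL,
           so p->next is never evaluated.  Safe. */
        return p != NULL && p->next != NULL;
    }

    int has_next_full_eval(const struct node *p) {
        /* & evaluates BOTH operands, so p->next is read even when p is
           NULL -- exactly the kind of guard that only works if the
           operator is guaranteed to short-circuit. */
        return (p != NULL) & (p->next != NULL);
    }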

-- 
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
   -- Charles Williams.  "Taliessin through Logres: Prelude"
0
jwkenne (1442)
9/21/2006 1:58:21 PM
William M. Klein wrote in message ...
>
>"robin" <robin_v@bigpond.com> wrote in message
>news:8X2Qg.32466$rP1.21609@news-server.bigpond.net.au...
>> William M. Klein wrote in message
>> <7vCPg.71251$PM1.31867@fe04.news.easynews.com>...
>>
>>> - Then compare the run-time performance with the ability to maintain the code
>>>(how easy is it to get programmers to understand, maintain and enhance the
>>>source code.  The often-cited COBOL requirement is commonly stated as "Can the
>>>average COBOL maintenance programmer understand and fix a "bug" in the source
>>>code at 3 a.m. in the morning?
>>
>> Which is why you should be using PL/I.
>>
>> PL/I programs can be made failsafe, and do not need
>> debugging at 3a.m. in the morning.
>> PL/I can trap virtually every kind of run-time error,
>> and can recover and continue, after having produced an
>> exception report.
>
>Current (and recent) IBM mainframe COBOL can do the same.  The question (as
>others have pointed out) is what the programmer has put into the code.  Since
>24/7 processing has become more common,

In those days (1960s) 24-hour processing was the norm.
It was unusual NOT to run around the clock.
Computers were very expensive, and often had inadequate
processing capacity.

> there certainly IS a lot more
>"fail-safe" COBOL than there used to be.  However, the "tradition" in COBOL
>application programming was to give the user what they SAID they
>wanted/expected.  Often, this led to middle of the night "it wasn't ever
>SUPPOSED to get data like this" application failures.

Sounds like poor programming.  One of the first things
a production program must do is to check that the data is valid,
and to produce an exception report for any invalid data.
    If it doesn't at least do that, the program is not robust.
That has nothing to do with interrupt handling.
Now, with condition handling, unforeseen problems can be
trapped and handled, and it doesn't require a 3am debugging session.
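
The shape is the same in any language; a minimal C sketch of the idea (the file names and record layout are invented for illustration):

    #include <stdio.h>

    int main(void) {
        FILE *in  = fopen("input.dat", "r");        /* hypothetical input file */
        FILE *exc = fopen("exceptions.rpt", "w");   /* exception report        */
        int qty;
        double price;

        if (!in || !exc)
            return 1;

        while (fscanf(in, "%d %lf", &qty, &price) == 2) {
            /* validate first; report and skip bad records, keep running */
            if (qty <= 0 || price < 0.0) {
                fprintf(exc, "rejected record: qty=%d price=%g\n", qty, price);
                continue;
            }
            printf("%g\n", qty * price);
        }
        fclose(in);
        fclose(exc);
        return 0;
    }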

>  (See comments in this
>newsgroup and in the IBM documentation on the performance overhead from using
>TOO MUCH PL/I "condition handling.)

Where?


0
robin_v (2737)
9/21/2006 2:09:01 PM
Bob Lidral wrote in message <4510EC4D.7060004@comcast.net>...
>robin wrote:
>
>> William M. Klein wrote in message <7vCPg.71251$PM1.31867@fe04.news.easynews.com>...
>>
>>> [...]
>>>To me, the POWER of a programming language has absolutely NOTHING to do with
>>>"Can you translate XYZ syntax from language to another in 27 keystrokes taking
>>>no more than 3.64 lines of code".  The power of a programming language is
>>>determined by:
>>>
>>>- What type of programming requirements can you SOLVE in a programming language
>>>(what types of applications can the programming language be used for)
>>>- Given that MOST currently supported programming languages can be used to
>>>solve MOST programming requirements, (not all for either of these),
>>
>>
>> Can they? I would dispute C, for example.
>
>C can do just about anything PL/I can do.  And assembly/machine language
>  absolutely can do anything PL/I can do (it does get compiled into
>machine language, after all).
>
>> There is also the issue of how well they do that,
>> and how reliably.
>
>Absolutely.
>
>> You speak of bebugging COBOL programs at 3 o'clock in the morning.
>>
>>     Let's examine that in the context of your statements above.
>>
>> A PL/I program is robust and fault tolerant.
>
>Not exactly.  The most you can truthfully say is that a PL/I program can
>be robust and fault tolerant.

That's exactly what I said in my immediately-preceding post
(that was posted within a few seconds of the other), and is quoted here :-
    "PL/I programs can be made failsafe, and do not need
    "debugging at 3a.m. in the morning.
    "PL/I can trap virtually every kind of run-time error,
    "and can recover and continue, after having produced an
    "exception report."

>  Much depends on the skill and experience
>of the programmer.

It takes no skill to include SIZE, STRINGRANGE, and SUBSCRIPTRANGE
in a program.
    And as for validating data, one of the first things a beginner
learns is the importance of validating data.

    But yes, to recover from an error does require some experience.

>In my experience, the more powerful and more expressive a programming
>language is, the easier it is for inexperienced programmers to get into
>real trouble.  Well, as a general rule, anyway.  I have seen
>horribly-written, virtually incomprehensible code written in just about
>every computer language I've ever learned well (no fair for me to
>complain about hard-to-understand code in languages I don't know well :-) ).
>
>I once heard a theory that one of the reasons COBOL was so verbose was
>to make it difficult to do anything really clever because that helped
>keep beginning programmers from getting into too much trouble and from
>creating a need for the 3:00 AM emergency debugging sessions.  Lest all
>you COBOL proponents (there are some here, aren't there) take exception
>to that, I don't subscribe to that theory.  COBOL is an old enough
>language that it was designed before such issues had come into
>consideration.
>
>One of the costs of using a robust language is the length of time it
>takes to learn it well enough to use it properly.
>
>> In the event that something unexpected should happen,
>> the program (with its built-in PL/I facilities) can print (or write to
>> an exception file) the details of the error and all the circumstances
>> including the actual data) that caused the error.
>>
>>     And it can then continue with the next lot of data.
>>
>Well, only if it's written that way.

That's what I said.

>  PL/I programs don't write
>themselves.  The language certainly has those capabilities, but it's a
>little optimistic to assert that all programs written in it make use of
>those facilities.

That's why I said "can".  And recall my text that I quoted above.

>>     No need for someone to come in at 3am to fix the program.
>> No need to re-run the program to find out where and why the
>> program crashed.
>>
>>     The problem can be analyzed with a fresh mind in the light of day.
>>
>>
>That's actually more a function of management than of programming
>language.

No it's not.

>  I don't care what language is used, if a shop is sufficiently
>under-funded, under-staffed, or over-committed mistakes will happen that
>require the proverbial 3:00 AM debugging session.

Not in PL/I, because putting in the code to make a program robust
is trivial.

>  Choosing PL/I (or
>Fortran, or COBOL, or RPG, or ... won't change that).

Choosing Fortran* won't change that, nor would other languages, including C.

    But choosing PL/I can and does change that,
because PL/I was designed for real-time processing,
and has the above-mentioned facilities built-in and ready to use.

>Bob Lidral

______________
footnote
* Earlier versions of Fortran simply crashed when a division by zero
or some such thing occurred.


0
robin_v (2737)
9/21/2006 2:09:02 PM
On Thu, 21 Sep 2006 06:58:21 -0700, John W. Kennedy  
<jwkenne@attglobal.net> wrote:

> Bob Lidral wrote:
>> My comments were intended to be in the same context as William M.  
>> Klein's: "What type of programming requirements can you SOLVE ..."  
>> which I took t mean the type of application rather than the programming  
>> method.  C doesn't have PICTURE or DECIMAL data, either -- and a lot of  
>> other things PL/I doesn't have.  OTOH, PL/I doesn't allow the use of an  
>> assignment operator as the condition for an if or while statement nor  
>> does it have short-cut Boolean operators except as an extension.
>
> Worse than that -- it doesn't have short-cut Boolean operators, period,  
> but some compilers produce short-cut semantics as a side-effect of  
> optimization.
>
That is a questionable optimization, and strictly speaking is not legal.
We added extensions to handle these:
http://www.kednos.com/pli/docs/REFERENCE_MANUAL/6291pro_016.html#index_x_799



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
9/21/2006 2:32:22 PM
David Frank <dave_frank@hotmail.com> wrote:

>>> type list
>>>   character,allocatable :: name(:)
>>>   integer,allocatable     :: nums(:)
>>> end type
>>> type (list),allocatable :: lists(:)
 
> Then quit farting around and show us the translation of above TYPE 
> statements
> to a PL/I allocatable structure with 2 allocatable array members.
> If thats true, how about showing us the translation of  above  to such a
> PL/I structure> 

If it was a defined type variable it wouldn't need the type keyword.

In C,

typedef struct {
   char *name;
   int *nums;
   } list;

list *lists;

Note that it doesn't say struct list.  Can you do that in Fortran
(through 2008)?
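
And, purely as a hypothetical usage sketch, the member arrays of such a struct are sized at run time with malloc, which is the nearest plain-C analogue to the Fortran allocatable components (the name and sizes below are invented):

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
       char *name;
       int  *nums;
       } list;

    list *make_lists(size_t nlists) {
        /* allocate the array of structures, then each member array */
        list *lists = malloc(nlists * sizeof *lists);
        for (size_t i = 0; lists && i < nlists; i++) {
            lists[i].name = malloc(16);
            if (lists[i].name)
                strcpy(lists[i].name, "placeholder");   /* invented id     */
            lists[i].nums = calloc(10, sizeof(int));    /* invented length */
        }
        return lists;
    }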

-- glen
0
gah1 (524)
9/21/2006 6:22:01 PM
"robin" <robin_v@bigpond.com> wrote in message 
news:22xQg.33336$rP1.17888@news-server.bigpond.net.au...
> Bob Lidral wrote in message <4510EC4D.7060004@comcast.net>...
>>robin wrote:
<snip>
>>  Choosing PL/I (or
>>Fortran, or COBOL, or RPG, or ... won't change that).
>
> Choosing Fortran* won't change that, and other languages too, including C.
>
>    But choosing PL/I can and does change that,
> because PL/I was designed for real-time processing,
> and has the above-mentioned facilities built-in and ready to use.
>

Again, for IBM mainframe commercial programming (the target of my ORIGINAL 
comments), this is just as built-in for COBOL as it is for PL/I.  Programmers 
either will or won't use it based on whatever experience and design requirements 
they have/get.

-- 
Bill Klein
 wmklein <at> ix.netcom.com


0
wmklein (2605)
9/21/2006 9:02:21 PM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:eeul89$n45$5@naig.caltech.edu...
>
> In C,
>
> typedef struct {
>   char *name;
>   int *nums;
>   } list;
>
> list *lists;
>
> Note that it doesn't say struct list.  Can you do that in Fortran
> (through 2008)?

In Fortran
type list
 character,allocatable :: name(:)
  integer,allocatable     :: nums(:)
end type
type (list),allocatable :: lists(:)

Please show Vowels how to translate above Fortran Derived Type declarations 
to PL/I
or retract your statement that PL/I has had this capability "since the 
beginning"


0
dave_frank (2243)
9/22/2006 12:15:40 PM
David Frank wrote:

(snip)

> In Fortran
> type list
>  character,allocatable :: name(:)
>   integer,allocatable     :: nums(:)
> end type
> type (list),allocatable :: lists(:)

> Please show Vowels how to translate above Fortran Derived Type declarations 
> to PL/I
> or retract your statement that PL/I has had this capability "since the 
> beginning"

DCL 1 lists(*) ctl
       2 name(*) ctl char(1),
       2 nums(*) ctl fixed bin;

-- glen

0
gah (12851)
9/26/2006 3:41:37 AM
glen herrmannsfeldt wrote:
> David Frank wrote:
> 
> (snip)
> 
>> In Fortran
>> type list
>>  character,allocatable :: name(:)
>>   integer,allocatable     :: nums(:)
>> end type
>> type (list),allocatable :: lists(:)
> 
> 
>> Please show Vowels how to translate above Fortran Derived Type 
>> declarations to PL/I
>> or retract your statement that PL/I has had this capability "since the 
>> beginning"
> 
> 
> DCL 1 lists(*) ctl
>       2 name(*) ctl char(1),
>       2 nums(*) ctl fixed bin;
> 
> -- glen
> 

Sorry, Glen, this won't do.  The controlled attribute can only be applied to a 
level 1 identifier.  Here's what it takes:

  dcl
   (m,l) bin fixed(15), n bin fixed(31),
   1 arrays(m) ctl,
    2 id ptr,
    2 values ptr,

   1 arrayid based,
    2 idlen bin fixed(15),
    2 idtext char(l refer(idlen)),

   1 arrayvalues based,
    2 vlen bin fixed(31),
    2 numbers(n refer(vlen)) bin fixed(31),

as in DF's original program, you read the entire file into a (presumably huge) 
character array and count the *'s (to do it right they should be only the *'s 
that come in column 1, i.e., the very first character or one immediately 
following a cr-lf pair) giving m. (DF's program will fail if there are any *'s 
in the array id's.)  Then you allocate the controlled structure, arrays.  Next 
you scan the input again and process each of the m arrays.  For the i-th array, 
you find the i-th * in column 1 and get l (that's an ell not a one) the length 
of the rest of the line, i.e., up to the next cr-lf, then allocate an arrayid, 
stuff the l characters into idtext and store the pointer in id(i).  Then for 
each line up to the next * in column 1, you count the commas and add 1 and sum 
these up giving n.  Now you can allocate an arrayvalues and store the pointer in 
values(i).  Finally you go back to the line following that with the i-th * 
(whose location you have thoughtfully remembered) and process each line again 
picking off the character values between commas and/or line boundaries, 
converting them to bin fixed(31) (by assignment) while stuffing the j-th result 
into values(i)->numbers(j).  It's really rather simple.

Needless to say I am not going to post the code.  If DF wants to see that he can 
download a PL/I LRM and work it out for himself.  I've given him enough hints.

Unfortunately, this so-called challenge is not typical of how things are done in 
real data processing applications.  Usually the files are much too big even to 
dream about reading the entire thing into memory.
0
jjw (608)
9/26/2006 8:14:43 AM
"James J. Weinkam" <jjw@cs.sfu.ca> wrote in message 
news:Tj5Sg.35750$cz3.14004@edtnps82...
> glen herrmannsfeldt wrote:
>> David Frank wrote:
>>
>> (snip)
>>
>>> In Fortran
>>> type list
>>>  character,allocatable :: name(:)
>>>   integer,allocatable     :: nums(:)
>>> end type
>>> type (list),allocatable :: lists(:)
>>
>>
>>> Please show Vowels how to translate above Fortran Derived Type 
>>> declarations to PL/I
>>> or retract your statement that PL/I has had this capability "since the 
>>> beginning"
>>
>>
>> DCL 1 lists(*) ctl
>>       2 name(*) ctl char(1),
>>       2 nums(*) ctl fixed bin;
>>
>> -- glen
>>
>
> Sorry, Glen, this won't do.  The controlled attribute can only be applied 
> to a level 1 identifier.  Here's what it takes:
>
>  dcl
>   (m,l) bin fixed(15), n bin fixed(31),
>   1 arrays(m) ctl,
>    2 id ptr,
>    2 values ptr,
>
>   1 arrayid based,
>    2 idlen bin fixed(15),
>    2 idtext char(l refer(idlen)),
>
>   1 arrayvalues based,
>    2 vlen bin fixed(31),
>    2 numbers(n refer(vlen)) bin fixed(31),
>

<snip description of a typical pointer allocation chain>

> It's really rather simple.

What you describe is not simple, and is NOT equivalent to the syntax other 
language(s) use for derived type variables.
Can't you just admit PL/I has no such support?

>
> Needless to say I am not going to post the code.  If DF wants to see that 
> he can download a PL/I LRM and work it out for himself.  I've given him 
> enough hints.
>

At the end of my program's reading of the lists, I show that the data is contained 
in a data structure DIRECTLY addressable with standard syntax like SUM etc.
e.g.
       id = lists(n)%name                  ! directly accesses n'th list name
       total = total + sum(lists(n)%nums)  ! add sum of n'th list's numbers

Your PROPOSED data structure is not an entity that contains ALL the data in 
its internal arrays, but a string of pointer-connected arrays.

> Unfortunately, this so called challenge is not typical of how things are 
> done in real data processing applications.  Usually the files are much too 
> big even to dream about reading the entire thing into memory.

Many PCs have more memory than the old mainframe clunkers running 30-year-old 
COBOL programs.

The first solution I posted over in comp.lang.fortran does not read the file
completely into memory before starting its processing:
       http://home.earthlink.net/~dave_gemini/list.f90

In any case, while interesting, your proposal remains just that:
a description that you think might work, but without actual source or any 
proof posted.



 


0
dave_frank (2243)
9/26/2006 1:16:37 PM
William M. Klein wrote in message ...
>"robin" <robin_v@bigpond.com> wrote in message
>news:22xQg.33336$rP1.17888@news-server.bigpond.net.au...
>> Bob Lidral wrote in message <4510EC4D.7060004@comcast.net>...
>>>robin wrote:
><snip>
>>>  Choosing PL/I (or
>>>Fortran, or COBOL, or RPG, or ... won't change that).
>>
>> Choosing Fortran* won't change that, and other languages too, including C.
>>
>>    But choosing PL/I can and does change that,
>> because PL/I was designed for real-time processing,
>> and has the above-mentioned facilities built-in and ready to use.

>Again, for IBM mainframe commercial programming (the target of my ORIIGNAL
>comments), this is just as built-in for COBOL as it is for PL/I.

Consider the following PL/I fragment:

on error snap begin;
    on error system;
    put data (p, q, r); /* Could create an exception file here */
    go to start_set;
end;

start_set: do forever;
    <<stuff>>
    end start_set;

which regains control for any kind of error.
    The ON statement in this fragment specifies the action
to be taken in the event of any kind of error.
    Having executed the ON statement (not the ON-unit),
the program then enters the main loop specified by DO - END.
    Now, in the event that some problem arises, the ON-unit
is executed; as well as producing details of the error (including
the name of the error and the location where it occurred),
PL/I produces a traceback giving the names of the procedures
in the calling chain and where each was invoked.
    The names and values of variables P, Q, and R are then printed
[in practice, some kind of detailed error report would be produced].
Finally, the program resumes with the next set of data.
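
Very roughly, and only to illustrate the report-and-resume control flow (not PL/I's condition machinery), the same shape can be imitated in C with setjmp/longjmp; everything below is invented for the sketch:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf next_set;

    static void report_and_resume(const char *what, int p, int q, int r) {
        /* crude stand-in for the exception report the ON-unit would write */
        fprintf(stderr, "error: %s (p=%d q=%d r=%d)\n", what, p, q, r);
        longjmp(next_set, 1);               /* back to the main loop */
    }

    int main(void) {
        int p, q, r;
        while (scanf("%d %d %d", &p, &q, &r) == 3) {
            if (setjmp(next_set))           /* re-entered after an error */
                continue;                   /* ...so go on with the next set */
            if (q == 0)
                report_and_resume("q is zero", p, q, r);
            printf("%d\n", p / q + r);
        }
        return 0;
    }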


0
robin_v (2737)
9/26/2006 2:36:07 PM
James J. Weinkam <jjw@cs.sfu.ca> wrote:
> glen herrmannsfeldt wrote:
 
>> DCL 1 lists(*) ctl
>>       2 name(*) ctl char(1),
>>       2 nums(*) ctl fixed bin;
> Sorry, Glen, this won't do.  The controlled attribute can only be 
> applied to a level 1 identifier.  Here's what it takes:

I wondered about that just after I posted it.  Still,
you can use (*) and specify the size later.
 
>  dcl
>   (m,l) bin fixed(15), n bin fixed(31),
>   1 arrays(m) ctl,
>    2 id ptr,
>    2 values ptr,
 
>   1 arrayid based,
>    2 idlen bin fixed(15),
>    2 idtext char(l refer(idlen)),
 
>   1 arrayvalues based,
>    2 vlen bin fixed(31),
>    2 numbers(n refer(vlen)) bin fixed(31),

I probably would have done pointers to controlled
arrays or structures.  

I don't know why DF has character arrays instead
of character variables, though.  I forget when
Fortran allowed allocatable length character variables.

> Unfortunately, this so called challenge is not typical of how 
> things are done in real data processing applications.  
> Usually the files are much too big even to 
> dream about reading the entire thing into memory.

This is true, and it is done way too often.  Even so, there
are still applications where reading a large amount of
data of unknown size into memory is useful.  C's realloc()
usually works pretty well, if you only realloc() one array
(or array of struct) inside the loop.
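
A minimal sketch of that pattern -- one array of struct grown with realloc inside the read loop (the record layout is invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    struct point { double x, y; };

    int main(void) {
        struct point *pts = NULL, *tmp;
        size_t n = 0, cap = 0;
        double x, y;

        while (scanf("%lf %lf", &x, &y) == 2) {
            if (n == cap) {                        /* grow geometrically */
                cap = cap ? cap * 2 : 16;
                tmp = realloc(pts, cap * sizeof *pts);
                if (!tmp) { free(pts); return 1; }
                pts = tmp;
            }
            pts[n].x = x;
            pts[n].y = y;
            n++;
        }
        printf("read %zu points\n", n);
        free(pts);
        return 0;
    }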

-- glen
0
gah1 (524)
9/26/2006 5:48:10 PM
glen herrmannsfeldt wrote:
> 
> I wondered about that just after I posted it.  Still,
> you can use (*) and specify the size later.
>  
> 
The only reason to use * is if the allocated dimension is going to come from 
different variables in different allocate statements.  In fact, even if a 
variable is specified for the dimension in the declare statement, it can always 
be overridden in an allocate statement.  So the * doesn't introduce any new 
capability.
0
jjw (608)
9/26/2006 7:47:37 PM
James J. Weinkam wrote in message ...
>Sorry, Glen, this won't do.  The controlled attribute can only be applied to a
>level 1 identifier.  Here's what it takes:
>
>  dcl
>   (m,l) bin fixed(15), n bin fixed(31),
>   1 arrays(m) ctl,
>    2 id ptr,
>    2 values ptr,
>
>   1 arrayid based,
>    2 idlen bin fixed(15),
>    2 idtext char(l refer(idlen)),
>
>   1 arrayvalues based,
>    2 vlen bin fixed(31),
>    2 numbers(n refer(vlen)) bin fixed(31),
>
>as in DF's original program, you read the entire file into a (presumably huge)
>character array and count the *'s

No, this won't do at all.
You say that the array is huge, but your declarations of bounds
are for arrays and variables to be up to only 32,767.

> (to do it right they should be only the *'s
>that come in column 1, i.e., the very first character or one immediately
>following a cr-lf pair) giving m. (DF's program will fail if there are any *'s
>in the array id's.)  Then you allocate the controlled structure, arrays.  Next
>you scan the input again

This won't do at all.
You are reading the data twice.

> and process each of the m arrays.  For the i-th array,
>you find the i-th * in column 1 and get l (that's an ell not a one) the length
>of the rest of the line, i.e., up to the next cr-lf, then allocate an arrayid,
>stuff the l characters into idtext and store the pointer in id(i).  Then for
>each line up to the next * in column 1, you count the commas and add 1 and sum
>these up giving n.  Now you can allocate an arrayvalues and store the pointer in
>values(i).  Finally you go back to the line following that with the i-th *
>(whose location you have thoughtfully remembered) and process each line again
>picking off the character values between commas and/or line boundaries,
>converting them to bin fixed(31) (by assignment) while stuffing the j-th result
>into values(i)->numbers(j).  It's really rather simple.

It is?


0
robin_v (2737)
9/27/2006 10:33:12 AM
robin wrote:
> James J. Weinkam wrote in message ...
> 
>>Sorry, Glen, this won't do.  The controlled attribute can only be applied to a
>>level 1 identifier.  Here's what it takes:
>>
>> dcl
>>  (m,l) bin fixed(15), n bin fixed(31),
>>  1 arrays(m) ctl,
>>   2 id ptr,
>>   2 values ptr,
>>
>>  1 arrayid based,
>>   2 idlen bin fixed(15),
>>   2 idtext char(l refer(idlen)),
>>
>>  1 arrayvalues based,
>>   2 vlen bin fixed(31),
>>   2 numbers(n refer(vlen)) bin fixed(31),
>>
>>as in DF's original program, you read the entire file into a (presumably huge)
>>character array and count the *'s
> 
> 
> No, this won't do at all.
> You say that the array is huge, 

The main purpose of my post was to point out that you cannot nest controlled 
variables within an aggregate.  I then showed how the "problem" can be 
approached  using the same methodology DF used in his original post, namely:

1. Find out the file size, allocate space and read the entire file into it.  (I 
did not actually show the data structure for this part but it would have to be 
an array with large bounds.)

2. Count the number of *'s and allocate an array of that length to keep track of 
the id's and number arrays.

3. For each id line:

    a) find the end, allocate the string, and assign the id to it

    b) count the ,'s and line ends up to the next id line, allocate the number 
array and place the values in it.

I then pointed out that this so-called "challenge" posed by DF is not typical of 
the approach taken in most modern data processing applications.
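
For what it is worth, here is a loose C sketch of that count-first, then-allocate, then-fill approach.  It reads line by line and rewinds the file instead of holding the whole thing in memory, and it grows each number array with realloc rather than pre-counting the commas, so it only approximates the steps above; every name in it is invented:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct list { char *name; int *nums; size_t n; };

    int main(void) {
        FILE *f = fopen("lists.txt", "r");         /* hypothetical input */
        char line[1024];
        size_t m = 0, i = 0;
        if (!f) return 1;

        /* pass 1: count the '*' header lines to size the outer array */
        while (fgets(line, sizeof line, f))
            if (line[0] == '*') m++;

        struct list *lists = calloc(m, sizeof *lists);
        if (!lists) return 1;

        /* pass 2: fill in each id and its numbers */
        rewind(f);
        while (fgets(line, sizeof line, f)) {
            line[strcspn(line, "\r\n")] = '\0';
            if (line[0] == '*') {                  /* "*name" starts a list */
                struct list *cur = &lists[i++];
                cur->name = malloc(strlen(line));  /* id without the '*' */
                if (cur->name) strcpy(cur->name, line + 1);
            } else if (i > 0) {                    /* comma-separated values */
                struct list *cur = &lists[i - 1];
                for (char *tok = strtok(line, ","); tok; tok = strtok(NULL, ",")) {
                    int *tmp = realloc(cur->nums, (cur->n + 1) * sizeof *tmp);
                    if (!tmp) return 1;
                    cur->nums = tmp;
                    cur->nums[cur->n++] = atoi(tok);
                }
            }
        }
        fclose(f);
        printf("%zu lists read\n", m);             /* cleanup omitted */
        return 0;
    }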


> but your declarations of bounds
> are for arrays and variables to be up to only 32,767.
>



> 
>>(to do it right they should be only the *'s
>>that come in column 1, i.e., the very first character or one immediately
>>following a cr-lf pair) giving m. (DF's program will fail if there are any *'s
>>in the array id's.)  Then you allocate the controlled structure, arrays.  Next
>>you scan the input again
> 
> 
> This won't do at all.
> You are reading the data twice.

No, I am reading it once and scanning it several times internally which is the 
approach used by DF.  I am not endorsing that approach.  The method that you 
posted read the file once and created an intermediate data structure consisting 
of an allocation of a controlled binary fixed(31) variable for each number in 
the array currently being processed.  Once the number of values in that array 
was known, you allocated an array of the desired size and copied each value to 
the appropriate array element then freed the value.  This is an additional 
internal scan of the data. In personal PL/I for OS/2 the intermediate data 
structure occupies 12 times as much storage as the final array.  This ratio may 
be different in other implementations, but is probably at least 6 in all 
implementations.  Moreover the allocation and freeing of each element adds 
significant time overhead to the large space overhead.
> 
> 
>>and process each of the m arrays.  For the i-th array,
>>you find the i-th * in column 1 and get l (that's an ell not a one) the length
>>of the rest of the line, i.e., up to the next cr-lf, then allocate an arrayid,
>>stuff the l characters into idtext and store the pointer in id(i).  Then for
>>each line up to the next * in column 1, you count the commas and add 1 and sum
>>these up giving n.  Now you can allocate an arrayvalues and store the pointer in
>>values(i).  Finally you go back to the line following that with the i-th *
>>(whose location you have thoughtfully remembered) and process each line again
>>picking off the character values between commas and/or line boundaries,
>>converting them to bin fixed(31) (by assignment) while stuffing the j-th result
>>into values(i)->numbers(j).  It's really rather simple.
> 
> 
> It is?
> 
> 
It is.
0
jjw (608)
9/28/2006 12:55:01 AM
James J. Weinkam wrote:
> robin wrote:
> 
> 
>> but your declarations of bounds
>> are for arrays and variables to be up to only 32,767.
>>
Sorry for replying to my own post but the following sentence failed to appear:

Also if you look again you will see that the variables for the bound of the 
number array are bin fixed(31).  It is only the number of arrays and the lengths 
of the id's that were assumed to be <32768.  If that is too small it is readily 
changed.
0
jjw (608)
9/28/2006 5:36:26 AM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:een2d7$lol$4@naig.caltech.edu...
>
> I sometimes find comparisons of languages interesting, in that you
> can understand the design goals of a language by seeing what it allows
> and disallows.  I try to make my comparisons fair, stating facts separately
> from opinions.   I consider it similar to the "compare and contrast"
> assignments for studying literature.
>
Good observation.    When comparing programming languages, one
needs to set out the criteria by which such a comparison will be
evaluated.   Not every language is good at everything.   Not every
language will evaluate well under every criteria.

Some years ago, one of the popular computer science magazines
had a "hello world" contest to determine which language could
solve this problem in the fewest number of statements.   Such
contests are usually pretty useless, but this one was among the
silliest of all.

The criteria for language evaluation need to be carefully selected,
clearly stated, and weighted according to their importance in
the targeted problem-solution domain.

Often, the inherent virtues of the language will not be the dominant
concerns.   For example, people often choose C++ for their
programming projects, but that language is characterized largely
by its potential for creating flawed software.  In fact, it often
causes me to wonder why anyone would choose a toolset that
is error-prone for creating software and expect a result that is
error-free.   The reasons for choosing C++, the criteria being
used, have little to do with the inherent difficulties of that
language and more to do with its widespread use by
the programming community.

If one of my criteria is that a language support object-oriented
programming, PL/I will be quickly eliminated from consideration.
If I am concerned about support for some specific database
environment, the language must include direct support for that
environment.  If dependability is the foremost concern, we would
probably choose Eiffel or Ada. If we are writing a bunch of
"hello world" programs, we would probably want to use a
simple interpreted scripting language.

Arguing about programming languages in the abstract is a lot
like saying, "My dog is better than your dog!"   Better at
what?   Some dogs are better at pointing to a potential
winged target hiding in the brush.  Others are better at
catching a frisbee.  If I am going deer hunting, I really
don't want to take a noisy little chihuahua.

So, when comparing programming languages, we need
to understand the bounds within which each comparison
will be made.   We need to agree on the criteria.   We
must get beyond the abstract and go to the heart of the
problem domain in which we intend to use that language.

Richard Riehle 


0
adaworks2 (748)
10/4/2006 1:23:02 PM
On Wed, 04 Oct 2006 06:23:02 -0700, <adaworks@sbcglobal.net> wrote:

>
> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message
> news:een2d7$lol$4@naig.caltech.edu...
>>
>> I sometimes find comparisons of languages interesting, in that you
>> can understand the design goals of a language by seeing what it allows
>> and disallows.  I try to make my comparisons fair, stating facts  
>> separately
>> from opinions.   I consider it similar to the "compare and contrast"
>> assignments for studying literature.
>>
> Good observation.    When comparing programming languages, one
> needs to set out the criteria by which such a comparison will be
> evaluated.   Not every language is good at everything.   Not every
> language will evaluate well under every criteria.
>
> Some years ago, one of the popular computer science magazines
> had a "hello world" contest to determine which language could
> solve this problem in the fewest number of statements.   Such
> contests are usually pretty useless, but this one was among the
> silliest of all.
>
> The criteria for language evaluation need to be carefully selected,
> clearly stated, and weighted according to their importance in
> the targeted problem-solution domain.
>
> Often, the inherent virtues of the language will not be the dominant
> concerns.   For example, people often choose C++ for their
> programming projects, but that language is characterized largely
> by its potential for creating flawed software.  In fact, it often
> causes me to wonder why anyone would choose a toolset that
> is error-prone for creating software and expect a rresult that is
> error-free.   The reasons for choosing C++, the criteria being
> used, has little to do with the inherent difficulties of that
> language and more to do with its widespread use by
> the programming community.
>
> If one of my criteria is that a language support object-oriented
> programming, PL/I will be quickly eliminated from consideration.
> If I am concerned about support for some specific database
> environment, the language must include direct support for that
> environment.  I dependability is the foremost concern, we would
> probably choose Eiffel or Ada. If we are writing a bunch of
> "hello world" programs, we would probably want to use a
> simple interpreted scripting language.

Support for OOP as a criterion seems more of a fashion statement; at
best it could be a derived requirement from more fundamental
criteria.

>
> Arguing about programming languages in the abstract is a lot
> like saying, "My dog is better than your dog!"   Better at
> what?   Some dogs are better at pointing to a potential
> winged target hiding in the brush.  Others are better at
> catching a frisbee.  If I am going deer hunting, I really
> don't want to take a noisy little chihuahua.
>
> So, when comparing programming languages, we need
> to understand the bounds within which each comparison
> will be made.   We need to agree on the criteria.   We
> must get beyond the abstract and go to the heart of the
> problem domain in which we intend to use that language.
>
> Richard Riehle
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/4/2006 1:28:33 PM
adaworks@sbcglobal.net wrote:


> 
> Good observation.    When comparing programming languages, one
> needs to set out the criteria by which such a comparison will be
> evaluated.   Not every language is good at everything.   Not every
> language will evaluate well under every criteria.


> The criteria for language evaluation need to be carefully selected,
> clearly stated, and weighted according to their importance in
> the targeted problem-solution domain.

Yes, particularly if you're going to go on to make statements like the 
one below.  So what are your criteria?



> 
> Often, the inherent virtues of the language will not be the dominant
> concerns.   For example, people often choose C++ for their
> programming projects, 

And for good reason IMO, but of course YMMV.

 > but that language is characterized largely
> by its potential for creating flawed software.  

Really?  Um, can you tell us who characterizes it that way?  And for 
what reasons?  Probably, keeping in mind that any language can be abused.

 > In fact, it often
> causes me to wonder why anyone would choose a toolset that
> is error-prone for creating software and expect a rresult that is
> error-free.   

Can you please be more specific about the "error prone"?


 > The reasons for choosing C++, the criteria being
> used, has little to do with the inherent difficulties of that
> language and more to do with its widespread use by
> the programming community.

Widespread use?  Again, yes, and for good reason.

LR


0
lruss (582)
10/4/2006 2:35:28 PM
Tom Linden wrote:


> Support for OOP as a criterion seems more of a fashion statement, at
> least it could at best be a derived requirement from more fundemantal
> criteria.


Seems more like an ease of use thing to me than a fashion statement. 
Can you please tell me why you seem to have implied that OOP is merely 
fashion?  Do you think it will go away?

LR
0
lruss (582)
10/4/2006 2:37:04 PM
On Wed, 04 Oct 2006 07:37:04 -0700, LR <lruss@superlink.net> wrote:

> Tom Linden wrote:
>
>
>> Support for OOP as a criterion seems more of a fashion statement, at
>> least it could at best be a derived requirement from more fundemantal
>> criteria.
>
>
> Seems more like an ease of use thing to me than a fashion statement. Can  
> you please tell me why you seem to have implied that OOP is merely  
> fashion?  Do you think it will go away?
>
> LR

Good programmers can write good code in any language; some may require more
effort than others.  But there aren't that many 'good' programmers.
Languages like C++ do not enforce adequate discipline.  Overloading of objects
leads, with the passage of time, to diffuse meaning, resulting in disuse of
objects, contrary to one of the stated advantages of OOP.  Class libraries, I
would suspect, aren't as rigorously tested as traditional compilers like PL/I,
Ada or Cobol, as many are amended and cobbled together for a particular
application.

No, I don't think it will go away.  If selection were based on the merits of
the language, everyone would be coding in PL/I.

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/4/2006 3:02:53 PM
Tom Linden wrote:

> On Wed, 04 Oct 2006 07:37:04 -0700, LR <lruss@superlink.net> wrote:
> 
>> Tom Linden wrote:
>>
>>
>>> Support for OOP as a criterion seems more of a fashion statement, at
>>> least it could at best be a derived requirement from more fundemantal
>>> criteria.
>>
>>
>>
>> Seems more like an ease of use thing to me than a fashion statement. 
>> Can  you please tell me why you seem to have implied that OOP is 
>> merely  fashion?  Do you think it will go away?
>>
>> LR
> 
> 
> Good programmers can write good code in any langugae, 

Yes, very true.


> some may require more
> effort than others.  

Do you refer to the effort by the programmers, the effort to write in a 
particular language, both, or something else entirely?


> But there aren't that many 'good' programmers.   

Relevance?

> Languages
> like C++ do not enforce adequate discpline.  

What particular discipline do you want enforced?


> Overloading of objects leads,
> with the passage of time, to diffuse meaning, 

In what way?


> resulting in disuse of  
> objects,

How can non-OOP languages do any better?  For example, in non-OOP 
languages you very often find flags of some kind used to do what can 
more easily be done in OOP through inheritance.  Talk about becoming 
diffuse over time.  Besides which, these flags lead to an enormous 
maintenance headache, or lead the programmer to invent their own OOPish 
'language' in whatever language they're programming in.

Besides which, doesn't your fave language have some support for 
overloading functions at least?  I seem to recall a post about this 
where I jumped to the conclusion that PL/I didn't have this feature; 
being corrected reminded me that jump-tos are considered evil.


> contrary to one of the stated advantages of OOP.  

Per above, I think we disagree on this.

> Class libraries, I 
> would  suspect,
> aren't as rigorously tested as a traditional compilers like PL/I, Ada 
> or  Cobol,

Are you speaking of the class libraries that might, for example, come 
with a standard C++ compiler?  I suspect these are tested about the same 
as a compiler is.


> as many are amended and cobbled together for a particular application.

Not sure what you're talking of here.  This sounds more like application 
libs put together by application programmers.  Surely you're not 
suggesting that non-OOP libs aren't ever amended and cobbled together 
for a particular application?

Or perhaps you have some specific example in mind?

> No I don't think it will go away.  

There we agree, which makes me wonder why you think it's a fashion.


 > If selection were based on the
> merits  of the
> language everyone would be coding in PL/I.

And strangely, I think that if selection were based on the merits of the 
language everyone would be coding in C++.

So what?  Unless you are able to put forth the actual merits, and prove, 
or at least provide reasons for, their superiority, it's just another 
pointless 'my language is better than yours' claim.

LR

0
lruss (582)
10/4/2006 4:13:26 PM
<adaworks@sbcglobal.net> wrote in message
news:WAOUg.9626$e66.4140@newssvr13.news.prodigy.com...
>
> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message
> news:een2d7$lol$4@naig.caltech.edu...
> >
> > I sometimes find comparisons of languages interesting, in that you
> > can understand the design goals of a language by seeing what it allows
> > and disallows.  I try to make my comparisons fair, stating facts separately
> > from opinions.   I consider it similar to the "compare and contrast"
> > assignments for studying literature.
> >
> Good observation.    When comparing programming languages, one
> needs to set out the criteria by which such a comparison will be
> evaluated.   Not every language is good at everything.   Not every
> language will evaluate well under every criteria.
>
> Some years ago, one of the popular computer science magazines
> had a "hello world" contest to determine which language could
> solve this problem in the fewest number of statements.   Such
> contests are usually pretty useless, but this one was among the
> silliest of all.

Not necessarily.
It identifies immediately those languages that are verbose,
and which may be unsuitable for such a purpose.

> The criteria for language evaluation need to be carefully selected,
> clearly stated, and weighted according to their importance in
> the targeted problem-solution domain.
>
> Often, the inherent virtues of the language will not be the dominant
> concerns.   For example, people often choose C++ for their
> programming projects, but that language is characterized largely
> by its potential for creating flawed software.  In fact, it often
> causes me to wonder why anyone would choose a toolset that
> is error-prone for creating software and expect a rresult that is
> error-free.   The reasons for choosing C++, the criteria being
> used, has little to do with the inherent difficulties of that
> language and more to do with its widespread use by
> the programming community.
>
> If one of my criteria is that a language support object-oriented
> programming, PL/I will be quickly eliminated from consideration.
> If I am concerned about support for some specific database
> environment, the language must include direct support for that
> environment.  I dependability is the foremost concern, we would
> probably choose Eiffel or Ada.

Or PL/I, of course.
Dependability and robustness are attributes of PL/I,
and have been for 40 years.

> If we are writing a bunch of
> "hello world" programs, we would probably want to use a
> simple interpreted scripting language.

No we wouldn't.

> Arguing about programming languages in the abstract is a lot
> like saying, "My dog is better than your dog!"   Better at
> what?   Some dogs are better at pointing to a potential
> winged target hiding in the brush.  Others are better at
> catching a frisbee.  If I am going deer hunting, I really
> don't want to take a noisy little chihuahua.

You wouldn't want _any_ dog.

> Richard Riehle


0
robin_v (2737)
10/5/2006 8:01:38 AM
robin wrote:

> <adaworks@sbcglobal.net> wrote in message
> news:WAOUg.9626$e66.4140@newssvr13.news.prodigy.com...
> 
>>"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message
>>news:een2d7$lol$4@naig.caltech.edu...
>>
>>>I sometimes find comparisons of languages interesting, 


>>Some years ago, one of the popular computer science magazines
>>had a "hello world" contest to determine which language could
>>solve this problem in the fewest number of statements.   Such
>>contests are usually pretty useless, but this one was among the
>>silliest of all.
> 
> 
> Not necessarily.
> It identifies immediately those languages that are verbose,
> and which may be unsuitable for such a purpose.

Hmmm.  That sounds like a certain poster who thinks that the number of 
lines a program takes is pretty important.  Am I seeing the beginnings of 
some common ground?


Anyway, those of you who enjoyed the "Hello World" program contest may 
enjoy "99 Bottles of Beer", in the language of your choice.
http://www.westnet.com/mirrors/99bottles/beer.html

Argue all you want, but Brainf*** is best.
http://www.westnet.com/mirrors/99bottles/beer_a_c.html#brainfuck

LR


0
lruss (582)
10/5/2006 12:39:30 PM
"LR" <lruss@superlink.net> wrote in message 
news:4523c687$0$25791$cc2e38e6@news.uslec.net...
> adaworks@sbcglobal.net wrote:
>
>
>
> > but that language (C++) is characterized largely
>> by its potential for creating flawed software.
>
> Really?  Um, can you tell us who characterizes it that way?  And for what 
> reasons?  Probably, keeping in mind that any language can be abused.
>
I have written software in C++.   Also, every conversation I have had,
in recent weeks, with a group of highly experienced C++ programmers
in the midst of a project on which they are working, has reinforced this
view.   There are more ways to make programming mistakes in C++
than in any contemporary language.  The mistakes are often difficult
to discover even long after the programs have been deployed.

> > In fact, it often
>> causes me to wonder why anyone would choose a toolset that
>> is error-prone for creating software and expect a rresult that is
>> error-free.
>
> Can you please be more specific about the "error prone"?
>
As noted above.   However, the pointer model is horrid, the
defaults on constructors and copy constructors can cause
serious defects in the code, and the memory management
model is non-existent.   We could go on for many pages
itemizing specific problems with C++, but anyone who has
used the language for any length of time knows how sensitive
it is to even the slightest deviation from careful programming.
Worse, the compiler fails to notify the programmer of a lot
of those problems.   This is why debuggers are regarded as
a necessary tool when programming in C++.   Not so in
some other languages.
>
> > The reasons for choosing C++, the criteria being
>> used, has little to do with the inherent difficulties of that
>> language and more to do with its widespread use by
>> the programming community.
>
> Widespread use?  Again, yes, and for good reason.
>
But those reasons have nothing to do with the dependability
of the final software product.

Richard Riehle 


0
adaworks2 (748)
10/5/2006 2:54:19 PM
adaworks@sbcglobal.net wrote:
> "LR" <lruss@superlink.net> wrote in message 
> news:4523c687$0$25791$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>
>>
>>
>>>but that language (C++) is characterized largely
>>>by its potential for creating flawed software.
>>
>>Really?  Um, can you tell us who characterizes it that way?  And for what 
>>reasons?  Probably, keeping in mind that any language can be abused.
>>
> 
> I have written software in C++.   Also, every conversation I have had,
> in recent weeks, with a group of highly experienced C++ programmers
> in the midst of a project on which they are working, has reinforced this
> view.   There are more ways to make programming mistakes in C++
> than in any contemporary language.  The mistakes are often difficult
> to discover even long after the programs have been deployed.

Highly experienced C++ programmers?  In C++ or another language? 
Because if they're really 'experienced', for which we might more 
reasonably read 'knowledgeable', then they are likely to avoid the nasty 
corners of the language.  Which C++ has, no question.  BTW, do you know 
of a language without those?  Are they actually making mistakes of the 
kind we're discussing, or only complaining about the potential?

> 
> 
>>>In fact, it often
>>>causes me to wonder why anyone would choose a toolset that
>>>is error-prone for creating software and expect a rresult that is
>>>error-free.
>>
>>Can you please be more specific about the "error prone"?
>>
> 
> As noted above.   However, the pointer model is horrid, 

Use with caution if at all.  If used, wrap it up in a class.  Manage 
your risk.  Use smart pointers.

 > the
> defaults on constructors and copy constructors can cause
> serious defects in the code, 

Yes.  Don't do that. Are the "highly experienced" programmers you spoke 
to using the default ctors?  Don't forget the default assignment operator.



 > and the memory management
> model is non-existent.   

It's not non-existent, it's just not what you like.  Maybe the 
programmers you spoke to are using raw pointers and not smart pointers? 
Using malloc/free instead of new/delete?  Using raw pointers to arrays 
instead of std::vector? Shame on them.  The beauty of C++ is that if you 
don't like the features the language has you can roll your own.  And 
with Boost and TR1, more are available.



 >  We could go on for many pages
> itemizing specific problems with C++, 

Whereas most other languages suffer from a single flaw:  They're not C++. ;)

 > but anyone who has
> used the language for any length of time knows how sensitive
> it is to even the slightest deviation from careful programming.

Please suggest a language that doesn't require careful programming.

> Worse, the compiler fails to notify the programmer for a lot
> of those problems.  

That's an implementation issue, not a language-specific issue.  I 
recommend lint.  I recommend it highly.



 > This is why debuggers are regarded as
> a necessary tool when programming in C++.  

I've never met anyone who regarded a debugger as necessary in any 
language.  Nice to have.  And it's particularly nice that C++'s market 
share makes for nice debugging and other tools.

 > Not so in
> some other languages.

Perchance, are those languages without a debugger available?  BTW, I 
don't think that I've ever met a programmer who wouldn't rather have a 
good symbolic debugger for a language than not.


>>>The reasons for choosing C++, the criteria being
>>>used, has little to do with the inherent difficulties of that
>>>language and more to do with its widespread use by
>>>the programming community.
>>
>>Widespread use?  Again, yes, and for good reason.
>>
> 
> But those reasons have nothing to do with the dependability
> of the final software product.

No language choice has anything to do with the dependability of the 
final software product.

You either program well, or you don't.  Use the language wisely or don't.

LR
0
lruss (582)
10/5/2006 3:24:20 PM
adaworks@sbcglobal.net wrote:
(snip)

> I have written software in C++.   Also, every conversation I have had,
> in recent weeks, with a group of highly experienced C++ programmers
> in the midst of a project on which they are working, has reinforced this
> view.   There are more ways to make programming mistakes in C++
> than in any contemporary language.  The mistakes are often difficult
> to discover even long after the programs have been deployed.

I agree.  If one is dedicated to object-oriented methodology,
and explicitly avoids the possible mistakes, it might not be
so bad.  I believe that the designers of Java tried to learn
from C++'s mistakes.  It seems to me that, for an OO extension
of C, Java is closer to C in many ways than C++ is, except that
C++ can actually compile C code.
 
(snip)

> As noted above.   However, the pointer model is horrid, the
> defaults on constructors and copy constructors can cause
> serious defects in the code, and the memory management
> model is non-existent. 

I think some of that is left over from the requirement
of early C++ compilers to translate to C as an intermediate,
and for C compatibility in any case.  Java's insistence
on initializing scalar variables helps prevent some defects,
though I still don't like it when it is wrong.

(snip)

-- glen
0
gah1 (524)
10/5/2006 7:11:38 PM
LR <lruss@superlink.net> wrote:
(snip on C++ and experienced programmers)
 
> Highly experienced C++ programmers?  In C++ or another language? 
> Because if they're really 'experienced', for which we might more 
> reasonably read 'knowlegeable', then they are likely to avoid the nasty 
> corners of the language.  Which C++ has, no question.  BTW, do you know 
> of a language without those?  Are they actually making mistakes of the 
> kind we're discussing, or only complaining about the potential?

I recently found a bug in a large program written by an
experienced and knowledgeable C++ programmer.  This program tries
to check every argument for being in range, and otherwise having
the right value.  At one point it does a recursive search through
what is supposed to be a binary tree, but which hadn't actually been
allocated yet.  Due to one small mistake in not initializing
a pointer to null, the program chased an infinite loop
of pointers with a cycle over 19,000 long (all four-byte aligned)
until it ran out of memory.  Anyone can miss one initialization
in a large program, no matter how much experience they have.
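
The shape of that bug, reduced to a few lines of C (hypothetical code, not the actual program): the node comes back from malloc with garbage in its pointer members, and the search happily follows the garbage.

    #include <stdlib.h>

    struct node {
        int key;
        struct node *left, *right;
    };

    struct node *make_node(int key) {
        struct node *n = malloc(sizeof *n);   /* left/right hold garbage here */
        if (n) n->key = key;
        /* missing: n->left = n->right = NULL;  (or just use calloc)          */
        return n;
    }

    const struct node *find(const struct node *t, int key) {
        if (t == NULL) return NULL;           /* relies on NULL terminators! */
        if (key < t->key) return find(t->left, key);
        if (key > t->key) return find(t->right, key);
        return t;
    }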

-- glen

0
gah1 (524)
10/5/2006 7:18:13 PM
glen herrmannsfeldt wrote:

> LR <lruss@superlink.net> wrote:
> (snip on C++ and experienced programmers)
>  
> 
>>Highly experienced C++ programmers?  In C++ or another language? 
>>Because if they're really 'experienced', for which we might more 
>>reasonably read 'knowlegeable', then they are likely to avoid the nasty 
>>corners of the language.  Which C++ has, no question.  BTW, do you know 
>>of a language without those?  Are they actually making mistakes of the 
>>kind we're discussing, or only complaining about the potential?
> 
> 
> I recently found a bug in a large program written by an
> experienced and knowledgable C++ programmer.  This program tries
> to check every argument for being in range, and otherwise having
> the right value.  At one point it does a recursive search through
> what is supposed to be a binary tree, but hadn't actually been
> allocated yet.  

Was this something the experienced and knowledgeable C++ programmer had 
tried to implement themselves when std::set and std::map are available 
and waiting to be used?

 > Due to one small mistake in not initializing
> a pointer to null, 

Sounds like someone was using raw pointers.

 > the program chased an infinite loop
> of pointer with a cycle over 19000 long (and all four byte aligned)
> until it ran out of memory.  

Lint, lint always, lint forever.  Also, some compilers give warnings of 
uninitialized variables.

 > Anyone can miss one initialization
> in a large program, no matter how much experience they have.

I agree, but I don't think this problem is limited to C++ or pointers, 
and besides, even if you do initialize things you can give them the wrong 
value.  Right?

I remember writing some code in PL/C (is PL/C close enough for the point 
I'm trying to make?) years ago that resulted in a couple of infinite 
loops. Lucky I was using an account that was limited to a few CPU secs 
per run, IIRC. I don't, ahem, make mistakes like this anymore, well, not 
often, and if I do, I, uh, no longer tell anyone. ;)

LR
0
lruss (582)
10/5/2006 10:37:57 PM
I will answer all your questions in this part of the reply
rather than embedding them in the text.

My preferred language is one that does not have all the
potential for errors that you seem to admit is present in
C++.  It is designed so the compiler will catch the maximum
number of errors at compile time.   It provides a model for
indirection that does not require me to wonder whether
a particular pointer construct might have a hidden dangling
reference or an eventual conflict somewhere.  I don't have
the concerns with copy constructors that are present in
C++.   Lint is not a substitute for good language design
in the first place.

Language choice does impact dependability.   There are
languages that you probably have not used that are characterized
by their emphasis on dependability.   C++ is not one of them.

It is true that one cannot depend entirely on the programming
language, and programming always involves being careful. However,
C++ is especially error-prone when compared with most
alternatives.

When I compare C++ with one of the better languages
such as Ada, I find myself preferring Ada.   When I
compare it with Eiffel, I find myself preferring Eiffel.

Furthermore, with Ada I get all the flexibility I need,
along with the required efficiency.   The compiler catches
more errors at compile-time leaving me time to spend on
my own programming mistakes, those not inherent in
the design of the language.

I know C++ well.  I know Ada well.   When I compare,
feature-for-feature, according to the one criterion that
is most important to me, dependability of the final
program, C++ consistently falls short.

The better I get to know both languages, the more I become
aware that C++ is one of the worst choices for any software
where dependability is important.  It is, at first, fun chasing
the little bugs around the code, but after a while, one needs to
take a more professional attitude toward one's work and
realize that we are not in the business of tracking down bugs,
but rather we are in the business of trying to produce reliable
software.   C++ is not focused on that concern.   While some
programmers may find it fun to deal with the peculiarities of
C++ on a day-by-day basis, I would rather be able to focus
on the problems we are supposed to solve than the eccentricities
of the toolset we use to solve those problems.

Richard Riehle
"LR" <lruss@superlink.net> wrote in message 
news:45252379$0$25786$cc2e38e6@news.uslec.net...
> adaworks@sbcglobal.net wrote:
>> "LR" <lruss@superlink.net> wrote in message 
>> news:4523c687$0$25791$cc2e38e6@news.uslec.net...
>>
>>>adaworks@sbcglobal.net wrote:
>>>
>>>
>>>
>>>
>>>>but that language (C++) is characterized largely
>>>>by its potential for creating flawed software.
>>>
>>>Really?  Um, can you tell us who characterizes it that way?  And for what 
>>>reasons?  Probably, keeping in mind that any language can be abused.
>>>
>>
>> I have written software in C++.   Also, every conversation I have had,
>> in recent weeks, with a group of highly experienced C++ programmers
>> in the midst of a project on which they are working, has reinforced this
>> view.   There are more ways to make programming mistakes in C++
>> than in any contemporary language.  The mistakes are often difficult
>> to discover even long after the programs have been deployed.
>
> Highly experienced C++ programmers?  In C++ or another language? Because if 
> they're really 'experienced', for which we might more reasonably read 
> 'knowlegeable', then they are likely to avoid the nasty corners of the 
> language.  Which C++ has, no question.  BTW, do you know of a language without 
> those?  Are they actually making mistakes of the kind we're discussing, or 
> only complaining about the potential?
>
>>
>>
>>>>In fact, it often
>>>>causes me to wonder why anyone would choose a toolset that
>>>>is error-prone for creating software and expect a rresult that is
>>>>error-free.
>>>
>>>Can you please be more specific about the "error prone"?
>>>
>>
>> As noted above.   However, the pointer model is horrid,
>
> Use with caution if at all.  If used, wrap it up in a class.  Manage your 
> risk.  Use smart pointers.
>
> > the
>> defaults on constructors and copy constructors can cause
>> serious defects in the code,
>
> Yes.  Don't do that. Are the "highly experienced" programmers you spoke to 
> using the default ctors?  Don't forget the default assignment operator.
>
>
>
> > and the memory management
>> model is non-existent.
>
> It's not non existant, it's just not what you like.  Maybe the programmers you 
> spoke to are using raw pointers and not smart pointers? Using malloc/free 
> instead of new/delete?  Using raw pointers to arrays instead of std::vector? 
> Shame on them.  The beauty of C++ is that if you don't like the features the 
> language has you can roll your own.  And with boost and TR1 more available.
>
>
>
> >  We could go on for many pages
>> itemizing specific problems with C++,
>
> Whereas most other languages suffer from a single flaw:  They're not C++. ;)
>
> > but anyone who has
>> used the language for any length of time knows how sensitive
>> it is to even the slightest deviation from careful programming.
>
> Please suggest a language that doesn't require careful programming.
>
>> Worse, the compiler fails to notify the programmer for a lot
>> of those problems.
>
> That's an implementation issue, not a language specific issue.  I recommend 
> lint.  I recommend it highly.
>
>
>
> > This is why debuggers are regarded as
>> a necessary tool when programming in C++.
>
> I've never met anyone who regarded a debugger as necessary in any language. 
> Nice to have.  And it's particularly nice that C++'s market share makes for 
> nice debugging and other tools.
>
> > Not so in
>> some other languages.
>
> Perchance, are those languages without a debugger available?  BTW, I don't 
> think that I've ever met a programmer who wouldn't rather have a good symbolic 
> debugger for a language than not.
>
>
>>>>The reasons for choosing C++, the criteria being
>>>>used, has little to do with the inherent difficulties of that
>>>>language and more to do with its widespread use by
>>>>the programming community.
>>>
>>>Widespread use?  Again, yes, and for good reason.
>>>
>>
>> But those reasons have nothing to do with the dependability
>> of the final software product.
>
> No language choice has anything to do with the dependability of the final 
> software product.
>
> You either program well, or you don't.  Use the language wisely or don't.
>
> LR 


0
adaworks2 (748)
10/6/2006 4:44:15 AM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:eg3lda$ep4$3@naig.caltech.edu...
>
> I think some of that is left over from the requirement
> of early C++ compilers to translate to C as an intermediate,
> and for C compatibility in any case.  Java's insistance
> on initializing scalar variables helps prevent some defects,
> though I still don't like it when it is wrong.
>
LR is correct when he suggests that it is possible to initialize
a variable with the wrong value.   The Ada Safety and Security
Annex has a pragma Normalize_Scalars that helps to ameliorate
this problem.

Often, it is better not to initialize a scalar to some value simply
because it can be done.   An Ada compiler always gives the
programmer a warning when a scalar is never assigned a
value anywhere, initialized or not.   This enables the
programmer to examine the warning and determine what
action is appropriate.  The fact that a scalar is not initialized
is less problematic than the realization that it never gets a
value assigned anywhere in the program.

When using the SPARK examiner (a preprocessor for
creating highly reliable Ada code), one gets an even stronger
model for correctness.   At this stage of software practice,
there is no toolset better guaranteed to provide correct
programs than SPARK.   Before naysaying this, you need
to study SPARK for yourself.   Otherwise, you simply
won't understand the argument.

Richard Riehle 


0
adaworks2 (748)
10/6/2006 4:53:02 AM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:ees344$6s$7@naig.caltech.edu...
> David Frank <dave_frank@hotmail.com> wrote:
>
>> E.G.  No one is willing to confirm if PL/I has equivalent declaration of
>> Fortran's defined type variables because they dont trust there own knowledge
>> well enuf to state the facts OR in Vowels case he wont respond because he
>> knows it does NOT.
>
> I don't know if it has defined type variables, I presume you mean
> something like C's typedef.  I don't remember that Fortran does, either.
>
typedef is a farce.  Too many C programmers think it is doing
something it isn't doing at all.   It is not a capability for declaring
or defining new types.  Rather, it is a way to create an alias for an
existing type.
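
In C++ terms (a hypothetical sketch, not anyone's quoted code), the
difference is between an alias, which the compiler treats as the same type,
and a wrapper class, which really is a new type:

    #include <cstdint>

    typedef std::int16_t Temperature;   // just another name for int16_t
    typedef std::int16_t Pressure;      // same underlying type

    // A small wrapper class creates a genuinely distinct type.
    struct Metres  { std::int16_t value; };
    struct Seconds { std::int16_t value; };

    void demo() {
        Temperature t = 20;
        Pressure p = t;                  // compiles: the aliases are interchangeable
        Metres m{100};
        // Seconds s = m;                // would not compile: distinct types
        (void)p; (void)m;
    }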

I think maybe David is asking whether one can invent new types as one
does in Ada.  For example, how would one declare, in PL/I, the following?

          type Int16 is range -2**15 .. 2**15 - 1;
          for Int16'Size use 16;

which says give the new type called Int16 a range as shown
and force it to be stored in 16 bits;  or,

        type Color is (Red, Yellow, Blue);
        for Color use (Red    => 16#34F2#,
                       Yellow => 16#34F3#,
                       Blue   => 16#34F4#);

which says, for the enumerated values named Red, Yellow, and Blue
force the machine representation to the hexadecimal values shown.

I am pretty sure something like this is possible in PL/I.   Perhaps Robin
can give an example in PL/I source code.

Richard Riehle 


0
adaworks2 (748)
10/6/2006 5:05:55 AM
"Tom Linden" <tom@kednos-remove.com> wrote in message 
news:op.tgwfpvk4tte90l@hyrrokkin...
>
> Support for OOP as a criterion seems more of a fashion statement, at
> least it could at best be a derived requirement from more fundemantal
> criteria.
>
When a language does not support OOP, especially in these times,
that language is slightly crippled.   On the other hand, when a language
does support OOP, but is so filled with potential for screw-ups, one
needs to question whether it would not be better to stay away from
OOP if that is the only language available.

A software object is an instance of a class.   A class is simply a
specialized kind of abstract data type.   The special features of
a class are support for inheritance, dynamic binding, and
polymorphism.   A fully-formed class model will also include
parameterized classes (sometimes called templates).   These
features taken together make it possible to consider the
lifecycle of the software process more as an evolutionary
model.

A class is extensible.   That is, we can specialize an
extended class based on an existing class without changing
the base class.  This is a powerful idea and lends itself
well to evolutionary and prototypical styles of software
development and management.

A language that does not support the class construct is going
to be limited to a more linear way of thinking about software
development.   That is, without the class notion, one is forced
into procedural thinking.  This is not a bad thing and we have
used this approach to building perfectly good software for
well over forty years.

However, without the extensibility afforded by OOP, each time
one needs to extend the capabilities of an existing software
product, it is necessary to do the close equivalent of open-heart
surgery.    OOP does not require this.   We extend existing
code without touching the existing code.   This makes long
term adaptability a little easier and a lot safer.
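
As a small C++ illustration of that extend-without-touching idea (the class
names here are invented for the example):

    #include <iostream>
    #include <memory>
    #include <vector>

    // Existing base class: its source is never edited again.
    class Report {
    public:
        virtual ~Report() = default;
        virtual void print() const { std::cout << "plain report\n"; }
    };

    // Added later as an extension; the base class is untouched.
    class SummaryReport : public Report {
    public:
        void print() const override { std::cout << "summary report\n"; }
    };

    int main() {
        std::vector<std::unique_ptr<Report>> reports;
        reports.push_back(std::make_unique<Report>());
        reports.push_back(std::make_unique<SummaryReport>());  // dynamic binding
        for (const auto& r : reports) r->print();               // polymorphic call
    }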

I think it is very short-sighted of the PL/I community to continue
to resist developing an OOP version of the language.  Fortran
has support for some of the important ideas in OOP.  COBOL
now has that support.   Most modern languages have support
for OOP.   If PL/I does not eventually have OOP as part of
its fundamental model, it will continue to fall into disuse.   No
programmer graduating from any computer science program
anywhere in the world would consider adopting a programming
language that fails to support the object model.

All of that being said, PL/I could be adapted to OOP.   At this
stage of our knowledge of the good, the bad, and the ugly (C++)
of OOP, the upgrade of PL/I to support OOP could learn from
the many mistakes already in place with some languages that
are ostensibly OOP.

I would encourage those who want to see the long-term survival
of the best of PL/I to examine this issue and foment action on the
part of those who are charged with the continued health of the
language.

Richard Riehle 


0
adaworks2 (748)
10/6/2006 5:27:26 AM
adaworks@sbcglobal.net wrote:
> 
> typedef is a farce.  Too many C programmers think it is doing
> something it isn't doing at all.   It is not a capability for declaring or 
> defining
> new types.  Rather, it is a way to create an alias for an existing type.
> 
> I think maybe David is asking whether one can invent new types as one
> does in Ada.  For example, how would one declare, in PL/I, the following?
> 
>           type Int16 is range -2**15 .. 2**15 - 1;
>           for Int16'Size use 16;

You have just described bin fixed(15).  You can give it a name if you insist, 
but why bother.

> 
> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits;  or,
> 
>         type  Color is (Red, Yellow, Blue);
>         for Color use (Red     => 16#34F2#,
>                               Yellow => 16#34F3#,
>                               Blue     => 16#34F4#);
> 
> which says, for the enumerated values named Red, Yellow, and Blue
> force the machine representation to the hexadecimal values shown.
> 
> I am pretty sure something like this is possible in PL/I.   Perhaps Robin
> can give an example in PL/I source code.
> 
define ordinal color
  (red value('34f2'xn),yellow value('34f3'xn),blue value('34f4'xn))
   precision(16) unsigned;

BTW, I have never seen an uglier representation for hex values than the one you 
used above.  Just my opinion.
0
jjw (608)
10/6/2006 8:04:46 AM
<adaworks@sbcglobal.net> wrote in message 
news:TulVg.7845$TV3.6595@newssvr21.news.prodigy.com...
>
>
> I think maybe David is asking whether one can invent new types as one
> does in Ada.  For example, how would one declare, in PL/I, the following?
>
>          type Int16 is range -2**15 .. 2**15 - 1;
>          for Int16'Size use 16;

integer(2) :: Int16

but if you insist on declaring a derived type variable  then
type Int16
   integer(2) :: k
end type

>
> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits;  or,
>
>        type  Color is (Red, Yellow, Blue);
>        for Color use (Red     => 16#34F2#,
>                              Yellow => 16#34F3#,
>                              Blue     => 16#34F4#);
>
> which says, for the enumerated values named Red, Yellow, and Blue
> force the machine representation to the hexadecimal values shown.
>

integer(2),parameter  :: Red = #34f2, Yellow = #34f3, Blue =  #34f4

provides exact size 16bit constants,
plus the new Fortran standard has "C Interoperate" syntax which includes 
support for  C's enum syntax.

Otoh, since you haven't explicitly shown us Ada's equivalent of Fortran's
     type list
        character,allocatable :: name(:)
        integer,allocatable     :: nums(:)
    end type
    type (list),allocatable :: lists(:)

which I assume means it doesn't have an equivalent,
just as we have to deduce that PL/I doesn't have derived types,
let alone derived types with allocatable members.



0
dave_frank (2243)
10/6/2006 9:36:54 AM
In <2PlVg.7848$TV3.3125@newssvr21.news.prodigy.com>, on 10/06/2006
   at 05:27 AM, <adaworks@sbcglobal.net> said:

>I think it is very short-sighted of the PL/I community to continue to
>resist developing an OOP version of the language.

It would be if they were.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/6/2006 12:56:25 PM
In <y6oVg.47165$bf5.6750@edtnps90>, on 10/06/2006
   at 08:04 AM, "James J. Weinkam" <jjw@cs.sfu.ca> said:

>You have just described bin fixed(15).  You can give it a name if you
>insist,  but why bother.

It was a poorly chosen example. Better would have been

           type Int16 is range -10000 .. 20000;
           for Int16'Size use 16;

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/6/2006 1:00:04 PM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:45262687$0$3016$ec3e2dad@news.usenetmonster.com...
>
> <adaworks@sbcglobal.net> wrote in message
> news:TulVg.7845$TV3.6595@newssvr21.news.prodigy.com...
>
> > I think maybe David is asking whether one can invent new types as one
> > does in Ada.  For example, how would one declare, in PL/I, the following?
> >
> >          type Int16 is range -2**15 .. 2**15 - 1;
> >          for Int16'Size use 16;
>
> integer(2) :: Int16

No, this doesn't give you 16 bits in Fortran.
It doesn't guarantee anything.
In fact, with this, you could even get a severe compilation error,
because there's no guarantee that a compiler has a corresponding
kind.

> but if you insist on declaring a derived type variable  then
> type Int16
>    integer(2) :: k

It doesn't.  Same problem as above.

> end type
>
> > which says give the new type called Int16 a range as shown
> > and force it to be stored in 16 bits;  or,
> >        type  Color is (Red, Yellow, Blue);
> >        for Color use (Red     => 16#34F2#,
> >                              Yellow => 16#34F3#,
> >                              Blue     => 16#34F4#);
> > which says, for the enumerated values named Red, Yellow, and Blue
> > force the machine representation to the hexadecimal values shown.
>
> integer(2),parameter  :: Red = #34f2, Yellow = #34f3, Blue =  #34f4

> provides exact size 16bit constants,

No it doesn't.  Still the same problem.


0
robin_v (2737)
10/6/2006 2:14:27 PM
<adaworks@sbcglobal.net> wrote in message
news:TulVg.7845$TV3.6595@newssvr21.news.prodigy.com...
>
> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message
> news:ees344$6s$7@naig.caltech.edu...
> > David Frank <dave_frank@hotmail.com> wrote:
> >
> >> E.G.  No one is willing to confirm if PL/I has equivalent declaration of
> >> Fortran's defined type variables because they dont trust there own
knowledge
> >> well enuf to state the facts OR in Vowels case he wont respond because he
> >> knows it does NOT.
> >
> > I don't know if it has defined type variables, I presume you mean
> > something like C's typedef.  I don't remember that Fortran does, either.
> >
> I think maybe David is asking whether one can invent new types as one
> does in Ada.  For example, how would one declare, in PL/I, the following?
>
>           type Int16 is range -2**15 .. 2**15 - 1;
>           for Int16'Size use 16;
>
> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits;  or,
>
>         type  Color is (Red, Yellow, Blue);
>         for Color use (Red     => 16#34F2#,
>                               Yellow => 16#34F3#,
>                               Blue     => 16#34F4#);
>
> which says, for the enumerated values named Red, Yellow, and Blue
> force the machine representation to the hexadecimal values shown.

Why would you want to do that?*
Simpler is:-
define ordinal color (red, yellow, blue);

Or, if you must have specific values,
define ordinal color (red value(1), yellow value (5), blue value(200));

Or if you really must have a hex constant,

define ordinal color (red value('34F2'xn), yellow, blue);

is sufficient, as the internal values increase consecutively.
_______
* JW has already given an equivalent, so I'll just add a few remarks.



0
robin_v (2737)
10/6/2006 2:14:28 PM
adaworks@sbcglobal.net wrote:
> I will answer all your questions in this part of the reply
> rather than embedding them in the text.
> 
> My preferred language is one that does not have all the
> potential for errors that you seem to admit is present in
> C++.  

And in all languages.

> It is designed so the compiler will catch the maximum
> number of errors at compile time.  

The kinds of errors that you're speaking of are ones that are mostly 
made by sloppy programmers and sloppy programmers will make errors no 
matter what language they're working in.

 > It provides a model for
> indirection that does not require me to wonder whether
> a particular pointer construct might have a hidden dangling
> reference or an eventual conflict somewhere.  I don't have
> the concerns with copy constructors that are present in
> C++.   Lint is not a substitute for good language design
> in the first place.

You've made some assumptions about what a "good language design" is.


> Language choice does impact dependability.   

I suspect not as much as programmer choice.

 > There are
> languages that you probably have not used that are characterized
> by their emphasis on dependability.   C++ is not one of them.

It might be interesting if you'd define dependability for us.



> It is true that one cannot depend entirely on the programming
> language, and programming always involves being careful. However,
> C++ is especially error-prone when compared with most
> alternatives.

Sure, if you're not careful, you'll be error prone.





> When I compare C++ with one of the better languages
> such as Ada, I find myself preferring Ada.   When I
> compare it with Eiffel, I find myself preferring Eiffel.

Well, since you've already decided that it's better, of course you find 
yourself preferring it.


> 
> Furthermore, with Ada I get all the flexibility I need,

I've never thought that there was some task you can accomplish in 
another language that you can't accomplish in Ada.  Or did you mean 
something else by the word "flexibility"?


> along with the required efficiency.   The compiler catches
> more errors at compile-time leaving me time to spend on
> my own programming mistakes, those not inherent in
> the design of the language.

Strange, but after years of programming in C++, I just don't seem to run 
into that many errors that I think are language based.  I wonder why?



> I know C++ well.  I know Ada well.   When I compare,
> feature-for-feature, according to the one criterion that
> is most important to me, dependability of the final
> program, C++ consistently falls short.

Details?


> 
> The better I get to know both languages, the more I become
> aware that C++ is one of the worst choices for any software
> where dependability is important.  It is, at first, fun chasing
> the little bugs around the code, 

Huh?


> but after a while, one needs to
> take a more professional attitdude toward one's work

Programming is not and never will be a profession.  Simply not possible. 
And also not legal, even, or perhaps most especially where it's been 
made 'law', whatever that might mean nowadays.  IMHO.  But IANAL.

 > and
> realize that we are not in the business of tracking down bugs,
> but rather we are in the business of trying to produce reliable
> software.   


Reliable?  At what cost?  And how do you measure your results?




> C++ is not focused on that concern.   While some
> programmers may find it fun to deal with the peculiarities of
> C++ on a day-by-day basis, I would rather be able to focus
> on the problems we are supposed to solve than the eccentricities
> of the toolset we use to solve those problems.

I would rather be able to focus on my ability to express my thoughts in 
code.  I find it leads to fewer problems.

LR


> 
> Richard Riehle
> "LR" <lruss@superlink.net> wrote in message 
> news:45252379$0$25786$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>>"LR" <lruss@superlink.net> wrote in message 
>>>news:4523c687$0$25791$cc2e38e6@news.uslec.net...
>>>
>>>
>>>>adaworks@sbcglobal.net wrote:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>>but that language (C++) is characterized largely
>>>>>by its potential for creating flawed software.
>>>>
>>>>Really?  Um, can you tell us who characterizes it that way?  And for what 
>>>>reasons?  Probably, keeping in mind that any language can be abused.
>>>>
>>>
>>>I have written software in C++.   Also, every conversation I have had,
>>>in recent weeks, with a group of highly experienced C++ programmers
>>>in the midst of a project on which they are working, has reinforced this
>>>view.   There are more ways to make programming mistakes in C++
>>>than in any contemporary language.  The mistakes are often difficult
>>>to discover even long after the programs have been deployed.
>>
>>Highly experienced C++ programmers?  In C++ or another language? Because if 
>>they're really 'experienced', for which we might more reasonably read 
>>'knowlegeable', then they are likely to avoid the nasty corners of the 
>>language.  Which C++ has, no question.  BTW, do you know of a language without 
>>those?  Are they actually making mistakes of the kind we're discussing, or 
>>only complaining about the potential?
>>
>>
>>>
>>>>>In fact, it often
>>>>>causes me to wonder why anyone would choose a toolset that
>>>>>is error-prone for creating software and expect a rresult that is
>>>>>error-free.
>>>>
>>>>Can you please be more specific about the "error prone"?
>>>>
>>>
>>>As noted above.   However, the pointer model is horrid,
>>
>>Use with caution if at all.  If used, wrap it up in a class.  Manage your 
>>risk.  Use smart pointers.
>>
>>
>>>the
>>>defaults on constructors and copy constructors can cause
>>>serious defects in the code,
>>
>>Yes.  Don't do that. Are the "highly experienced" programmers you spoke to 
>>using the default ctors?  Don't forget the default assignment operator.
>>
>>
>>
>>
>>>and the memory management
>>>model is non-existent.
>>
>>It's not non existant, it's just not what you like.  Maybe the programmers you 
>>spoke to are using raw pointers and not smart pointers? Using malloc/free 
>>instead of new/delete?  Using raw pointers to arrays instead of std::vector? 
>>Shame on them.  The beauty of C++ is that if you don't like the features the 
>>language has you can roll your own.  And with boost and TR1 more available.
>>
>>
>>
>>
>>> We could go on for many pages
>>>itemizing specific problems with C++,
>>
>>Whereas most other languages suffer from a single flaw:  They're not C++. ;)
>>
>>
>>>but anyone who has
>>>used the language for any length of time knows how sensitive
>>>it is to even the slightest deviation from careful programming.
>>
>>Please suggest a language that doesn't require careful programming.
>>
>>
>>>Worse, the compiler fails to notify the programmer for a lot
>>>of those problems.
>>
>>That's an implementation issue, not a language specific issue.  I recommend 
>>lint.  I recommend it highly.
>>
>>
>>
>>
>>>This is why debuggers are regarded as
>>>a necessary tool when programming in C++.
>>
>>I've never met anyone who regarded a debugger as necessary in any language. 
>>Nice to have.  And it's particularly nice that C++'s market share makes for 
>>nice debugging and other tools.
>>
>>
>>>Not so in
>>>some other languages.
>>
>>Perchance, are those languages without a debugger available?  BTW, I don't 
>>think that I've ever met a programmer who wouldn't rather have a good symbolic 
>>debugger for a language than not.
>>
>>
>>
>>>>>The reasons for choosing C++, the criteria being
>>>>>used, has little to do with the inherent difficulties of that
>>>>>language and more to do with its widespread use by
>>>>>the programming community.
>>>>
>>>>Widespread use?  Again, yes, and for good reason.
>>>>
>>>
>>>But those reasons have nothing to do with the dependability
>>>of the final software product.
>>
>>No language choice has anything to do with the dependability of the final 
>>software product.
>>
>>You either program well, or you don't.  Use the language wisely or don't.
>>
>>LR 
> 
> 
> 
0
lruss (582)
10/6/2006 3:37:19 PM
adaworks@sbcglobal.net wrote:

> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
> news:eg3lda$ep4$3@naig.caltech.edu...
> 
>>I think some of that is left over from the requirement
>>of early C++ compilers to translate to C as an intermediate,
>>and for C compatibility in any case.  Java's insistance
>>on initializing scalar variables helps prevent some defects,
>>though I still don't like it when it is wrong.
>>
> 
> LR is correct when he suggests that it is possible to initialize
> a variable with the wrong value.   The Ada Safety and Security
> Annex has a pragma Normalize_Scalars that helps to ameliorate
> this problem.

Does that really help?  Seriously?
http://en.wikibooks.org/wiki/Ada_Programming/Pragmas/Normalize_Scalars

"The pragma Normalize_Scalars directs the compiler to initialize 
otherwise uninitialized scalar variables with predictable values. If 
possible, the compiler will choose out-of-range values."

I think that might sometimes be worse.

I remember PL/C being 'helpful', with messages like (and this isn't even 
close to being exact):  "Array index out of bounds, set to one."  With 
predictably unpredictable results.

> 
> Often, it is better not to initialize a scalar to some value simply
> because it can be done.   

Could you clarify/amplify that?

 > An Ada compiler always gives the
> programmer a warning when a scalar is never assigned a
> value anywhere, initialized or not.  

Does Ada support a separate compilation model?  Interlanguage programming?


> This warning enables the
> programmer to examine that warning and determine what
> action is appropriate.  The fact that a scalar is not initialized
> is less problematic than the realization that it never gets a
> value asssigned anywhere in the program.

"anywhere in the program"?  Or anywhere in a "translation unit" (sorry, 
I'm not sure what the proper name for this would be for Ada, so please 
translate appropriately)?

> 
> When using the SPARK examiner (a preprocessor for
> creating highly reliable Ada code), one gets an even stronger
> model for correctness.   At this stage of software practice,
> there is no toolset better guaranteed to provide correct
> programs than SPARK.   Before naysaying this, you need
> to study SPARK for yourself.   Otherwise, you simply
> won't understand the argument.

I took a look at this: 
http://en.wikipedia.org/wiki/SPARK_programming_language

Interesting, but it leaves me unconvinced.  I looked at 
http://www.praxis-his.com/sparkada/ but couldn't find a tutorial there. 
Perhaps you could recommend an online tutorial.

LR
0
lruss (582)
10/6/2006 4:00:12 PM
First,  thanks for all the replies.   Note that I never said
that PL/I could not accomplish the equivalent of what
I posted.   In fact, I suggested that Robin would have
a good solution and invited him to show it to us.

As to the Ada list type, there are a variety of ways to
do the same thing in Ada.   For the example shown, I might
simply do this:

          type List_Type is record
                name : string(1..30);
                nums : integer;
          end record;

          type List_Type_Collection is array (Positive range <>) of List_Type;

giving me an unconstrained array of List_Type records.    I could also simply
use an existing linked-list library, a tree library, or whatever other
collection library I might want.

 or, if I want to have an unconstrained name in List_Type,


          type List_Type is record
                name : unbounded_string;
                nums : integer;
          end record;

which will allow me to have strings of whatever size I want.

Richard Riehle

============================================================
"David Frank" <dave_frank@hotmail.com> wrote in message 
news:45262687$0$3016$ec3e2dad@news.usenetmonster.com...
>
> <adaworks@sbcglobal.net> wrote in message 
> news:TulVg.7845$TV3.6595@newssvr21.news.prodigy.com...
>>
>>
>> I think maybe David is asking whether one can invent new types as one
>> does in Ada.  For example, how would one declare, in PL/I, the following?
>>
>>          type Int16 is range -2**15 .. 2**15 - 1;
>>          for Int16'Size use 16;
>
> integer(2) :: Int16
>
> but if you insist on declaring a derived type variable  then
> type Int16
>   integer(2) :: k
> end type
>
>>
>> which says give the new type called Int16 a range as shown
>> and force it to be stored in 16 bits;  or,
>>
>>        type  Color is (Red, Yellow, Blue);
>>        for Color use (Red     => 16#34F2#,
>>                              Yellow => 16#34F3#,
>>                              Blue     => 16#34F4#);
>>
>> which says, for the enumerated values named Red, Yellow, and Blue
>> force the machine representation to the hexadecimal values shown.
>>
>
> integer(2),parameter  :: Red = #34f2, Yellow = #34f3, Blue =  #34f4
>
> provides exact size 16bit constants,
> plus the new Fortran standard has "C Interoperate" syntax which includes 
> support for  C's enum syntax.
>
> Otoh,  Since you havent explicitly shown us Ada's equivalent of Fortran's
>     type list
>        character,allocatable :: name(:)
>        integer,allocatable     :: nums(:)
>    end type
>    type (list),allocatable :: lists(:)
>
> which I assume means your silence means it doesnt have an equivalent
> just like we have to deduce that  PL/I doesnt have derived types
> let alone derived types with allocatable members.
>
>
> 


0
adaworks2 (748)
10/6/2006 4:03:47 PM
<adaworks@sbcglobal.net> wrote in message 
news:D7vVg.2266$NE6.342@newssvr11.news.prodigy.com...
> First,  thanks for all the replies.   Note that I never said
> that PL/I could not accomplish the equivalent of what
> I posted.   In fact, I suggested that Robin would have
> a good solution and invited him to show it to us.
>
> As to the Ada list type, there are a variety of ways to a
> do the same thing in Ada.   For the example shown, I might
> simply do this:
>
>          type List_Type is record
>                name : string(1..30);
>                nums : integer;
>          end record;
>
>          type List_Type_Collection is array (Positive range <>) of 
> List_Type;
>
> giving me an uconstrained array of List_Type records.    I could also 
> simply
> use an existing linked-list library, a tree library, or whatever other 
> collection
> library I might want.
>
> or, if I want to have an unconstrained name in List_Type,
>
>
>          type List_Type is record
>                name : unbounded_string;
>                nums : integer;
>          end record;
>
> which will allow me to have strings of whatever size I want.
>
> Richard Riehle
>

but you have declared nums as a scalar, not as an allocatable ARRAY member of
List_Type, therefore it isn't equivalent to my Fortran declaration and as a
result can't hold ALL the data of the "arbitrary lists" problem.



0
dave_frank (2243)
10/6/2006 5:15:16 PM
adaworks@sbcglobal.net wrote:
(very large snip)

> No programmer graduating from any computer science program
> anywhere in the world would consider adopting a programming
> language that fails to support the object model.

I would say that most scientific programmers don't come
from the computer science program, but from engineering
and physical sciences.  

PL/I by design included features from COBOL for
the business community, and from Fortran for the
scientific community.  The life cycle of scientific
and engineering software is a little different from 
that of business or 'computer science' software.

-- glen
0
gah1 (524)
10/6/2006 6:19:18 PM
LR <lruss@superlink.net> wrote:
 
> "The pragma Normalize_Scalars directs the compiler to initialize 
> otherwise uninitialized scalar variables with predictable values. If 
> possible, the compiler will choose out-of-range values."

For debugging programs that will generally work on a
system that doesn't initialize variables, initializing
to a value that will easily be recognized as wrong works.

My favorite is X'81', which tends to be a large negative
integer, and small negative floating point value.

For pointers, it may or may not point outside the available
addressing range.
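
To see why an X'81' fill pattern stands out, here is a small C++ check (an
illustration of the idea; the exact values assume a typical 32-bit
two's-complement, IEEE-754 machine):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        unsigned char fill[4];
        std::memset(fill, 0x81, sizeof fill);   // every byte set to X'81'

        std::int32_t i;
        float f;
        std::memcpy(&i, fill, sizeof i);
        std::memcpy(&f, fill, sizeof f);

        // Typically prints a large negative integer (-2122219135) and a
        // tiny negative float (about -4.8e-38), both easy to spot in output.
        std::printf("as int:   %d\n", (int) i);
        std::printf("as float: %g\n", f);
    }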

-- glen
0
gah1 (524)
10/6/2006 6:30:55 PM
adaworks@sbcglobal.net wrote:
(snip)

> typedef is a farce.  Too many C programmers think it is doing
> something it isn't doing at all.   It is not a capability for declaring or 
> defining
> new types.  Rather, it is a way to create an alias for an existing type.

True, but that existing type can be a struct or union, which
gives it some generality.  C has the standard FILE, where the
internal structure is system dependent, but contains everything
needed for file I/O.

> I think maybe David is asking whether one can invent new types as one
> does in Ada.  For example, how would one declare, in PL/I, the following?
 
>          type Int16 is range -2**15 .. 2**15 - 1;
>          for Int16'Size use 16;

PL/I allows specifying the number of bits or decimal digits
needed, independent of the underlying machine.  

Typedef is interesting in that the types it creates work
like the standard types, with no extra qualifier such as TYPE.
 
> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits;  or,
(snip of enumeration example)

I am pretty sure PL/I now has enumerations, but
I don't believe it did originally.

-- glen
0
gah1 (524)
10/6/2006 6:38:18 PM
glen herrmannsfeldt wrote:

 > (very large snip)
[more snippage]

> I would say that most scientific programmers don't come
> from the computer science program, but from engineering
> and physical sciences.  

Where I hear they're still all (ok, all might be an overstatement) 
taught a nice simple subset of Fortran, because it's so 'simple' and 
'easy to use'.


> The life cycle of scientific
> and engineering software is a little different from 
> that of business or 'computer science' software.

In what way?

LR
0
lruss (582)
10/6/2006 9:07:53 PM
glen herrmannsfeldt wrote:

> LR <lruss@superlink.net> wrote:
>  
> 
>>"The pragma Normalize_Scalars directs the compiler to initialize 
>>otherwise uninitialized scalar variables with predictable values. If 
>>possible, the compiler will choose out-of-range values."
> 
> 
> For debugging programs that will generally work on a
> system that doesn't initialize variables, initializing
> to a value that will easily be recognized as wrong works.
> 
> My favorite is X'81', which tends to be a large negative
> integer, and small negative floating point value.

I've heard about "deadbeef".

A compiler I use in debug mode initializes uninitialized integer 
variables, I think that might be bin(31) fixed to you, to 0xcccccccc, 
which is -858993460. Similar results for real/float types. I'm thankful 
that this particular compiler warns me that the variable is uninitialized. 
(Although, that's an implementation issue.)

> For pointers, it may or may not point outside the available
> addressing range.

NULL works well for this in C & C++.  I'm curious, is there a value in 
PL/I for a pointer which will always be invalid?  If not, what do you do 
about writing code that has to move between platforms?


Also from 
http://en.wikibooks.org/wiki/Ada_Programming/Pragmas/Normalize_Scalars
---------------------------------------------------------------------
My_Variable : Positive; -- Oops, forgot to initialize this variable.
                         -- The compiler (may) initialize this to 0
....
-- Oops, using a variable before it is initialized!
-- An exception should be raised here, since the compiler
-- initialized the value to 0 - an out-of-range value for the Positive type.
Some_Other_Variable := My_Variable;
---------------------------------------------------------------------

Which looks interesting, but I worry about variables that are set to 
invalid values to begin with. And I think I'd rather know about it at 
compile time than run time.

LR
0
lruss (582)
10/6/2006 9:27:12 PM
glen herrmannsfeldt wrote:

> adaworks@sbcglobal.net wrote:
> (snip)
> 
> 
>>typedef is a farce.  Too many C programmers think it is doing
>>something it isn't doing at all.   It is not a capability for declaring or 
>>defining
>>new types.  Rather, it is a way to create an alias for an existing type.
> 
> 
> True, but that existing type can be a struct or union, which
> gives it some generality.  C has the standard FILE, where the
> internal structure is system dependent, but contains everything
> needed for file I/O.

And typedef can be very useful.  If you know what you're doing with it. 
IOW, don't use a screwdriver as a chisel if you know what's good for you.


> PL/I allows specifying the number of bits or decimal digits
> needed, independent of the underlying machine.  

I've always wondered though, what happens if you specify something like 
bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?



> Typedef is interesting in that the types it creates work
> like the standard types, with no extra qualifier such as TYPE.

Allowing, as you pointed out, the usage of FILE.

LR



0
lruss (582)
10/6/2006 9:38:46 PM
LR <lruss@superlink.net> wrote:
 
(I wrote)
 
>> The life cycle of scientific
>> and engineering software is a little different from 
>> that of business or 'computer science' software.
 
> In what way?

One is that speed is usually pretty important, so run time
checks are usually reduced.   

Another is that many times, though not all, something 
is written to solve one problem and never used again.  
In that case, extendability is not very important.

Some of the previously mentioned compiler restrictions that
stop you from making mistakes require a lot of work to get
around, resulting in just as many mistakes.  That is, when
you really do need to get around them.

-- glen
0
gah1 (524)
10/6/2006 10:00:11 PM
LR <lruss@superlink.net> wrote:
(snip regarding initializing variables)
 
>> My favorite is X'81', which tends to be a large negative
>> integer, and small negative floating point value.
 
> I've heard about "deadbeef".

How about X'cafebabe'.  That is the first four bytes of
a Java class file.
 
(snip)
 
>> For pointers, it may or may not point outside the available
>> addressing range.
 
> NULL works well for this in C & C++.  I'm curious, is there a value in 
> PL/I for a pointer which will always be invalid?  If not, what do you do 
> about writing code that has to move between platforms?

Well, NULL, which PL/I also has, tends to be a valid value
when a pointer doesn't have anything to point to.  It is
often used and tested for in many programs and languages.

Note that Intel processors reserve segment selector zero
as the null segment selector.  That is, hardware support
for a null pointer.

I would say a large value for a true invalid pointer.
Odd on machines with word aligned data.

-- glen
0
gah1 (524)
10/6/2006 10:11:55 PM
glen herrmannsfeldt wrote:

> LR <lruss@superlink.net> wrote:

[snip]

>>>For pointers, it may or may not point outside the available
>>>addressing range.
> 
>  
> 
>>NULL works well for this in C & C++.  I'm curious, is there a value in 
>>PL/I for a pointer which will always be invalid?  If not, what do you do 
>>about writing code that has to move between platforms?
> 
> 
> Well, NULL, which PL/I also has, tends to be a valid value
> when a pointer doesn't have anything to point to. 

I'm not sure I follow that. Do you mean valid until dereferenced?

 > It is
> often used and tested for in many programs and languages.
> 
> Note that Intel processors reserve segment selector zero
> as the null segment selector.  That is, hardware support
> for a null pointer.
> 
> I would say a large value for a true invalid pointer.

Sorry, but I don't understand what advantage this would have over NULL.

> Odd on machines with word aligned data.

Does that assume that you're pointing to something that requires word 
alignment?  Does character data normally require word alignment in PL/I?

LR
0
lruss (582)
10/6/2006 11:25:23 PM
On Fri, 06 Oct 2006 16:25:23 -0700, LR <lruss@superlink.net> wrote:

> glen herrmannsfeldt wrote:
>
>> LR <lruss@superlink.net> wrote:
>
> [snip]
>
>>>> For pointers, it may or may not point outside the available
>>>> addressing range.
>>
>>> NULL works well for this in C & C++.  I'm curious, is there a value in  
>>> PL/I for a pointer which will always be invalid?  If not, what do you  
>>> do about writing code that has to move between platforms?
>>   Well, NULL, which PL/I also has, tends to be a valid value
>> when a pointer doesn't have anything to point to.
>
> I'm not sure I follow that. Do you mean valid until dereferenced?

Dereferencing is not meaningful within the context of PL/I.  Pointers
are a bona fide data type, unlike C, where they are attributed by
association.  A pointer is simply the address of a memory location;
what you choose to put or get from there is up to you.
>
>  > It is
>> often used and tested for in many programs and languages.
>>  Note that Intel processors reserve segment selector zero
>> as the null segment selector.  That is, hardware support
>> for a null pointer.
>>  I would say a large value for a true invalid pointer.
>
> Sorry, but I don't understand what advantage this would have over NULL.

In PL/I there is a builtin function null() which returns the value of the
null pointer, which may not be zero.  This is implementation defined.  On
Prime, for example, it had some unique value which addressed an invalid
segment, causing a trap, IIRC, which facilitated error recovery.

>
>> Odd on machines with word aligned data.
>
> Does that assume that you're pointing to something that requires word  
> alignment?  Does character data normally require word alignment in PL/I?

This is somewhat of a fuzzy issue owing to the advent of architectures with
stricter alignment requirements, but in general, unless the ALIGNED attribute
is specified, you may assume byte alignment.  Bit fields are padded to the
nearest byte.
>
> LR



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/7/2006 12:14:30 AM
"LR" <lruss@superlink.net> wrote in message 
news:45252379$0$25786$cc2e38e6@news.uslec.net...
> adaworks@sbcglobal.net wrote:
>> "LR" <lruss@superlink.net> wrote in message 
>> news:4523c687$0$25791$cc2e38e6@news.uslec.net...
>>
I have snipped away your comments and mine.  It is clear
that we will not find many points of agreement in this discussion.

What is also clear is that you are satisfied with C++, just
as I was satisfied with languages I once knew when I was
busy working in the day-to-day world of programming
over a nearly thirty year period.

I have been privileged to have the time to step back and
look closely at the relative merits of a large number of
languages in recent years.   I suspect that this is not
something you have had the time to do since you are
probably heavily involved in actually making software
work each day, just as I was during my early career.

My comments about C++, and other languages, are
based on both my experience as a programmer and
my research into the foundations of programming and
programming language design.   That research includes
a lot of examination of a lot of programs and interviews
with a lot of programmers. Most of those programmers
are ardent about their own language choice and will
argue the virtues of their chosen language with vigor
and fervent commitment.   That is as it should be
since it is important to have confidence in one's
choices.

However, as I examine different language designs, it
becomes clear that some language design choices,
while seeming to be a good idea when developed,
have not been as good as they might have seemed
to the original designers.  This is why new and better
language designs continue to emerge.

Whatever your favorite language might be, it is important
for intellectual honesty to prevail.   As you have indicated,
no language design is perfect.  Even the best of the newly
designed languages can be criticized at some level.  Still,
those new designs do advance the state of programming
practice.   Older languages that evolve to adapt to new
ideas about programming and software development
are able to hold on to some share of the programming
marketplace.   In some cases, the evolution results in the
language becoming really good in some niche.   Other
times, the evolution of the language represents some real
improvements that guarantee a following for a long time.
The continued evolution of Fortran is a good case for
that last statement.

As I look at the evolution of C++, it seems that many new
features are intended to compensate for flaws in the original
design.   The language seems to be turning into the rough
equivalent of a "pile of dry rot held up by flying buttresses."
New language designs such as Eiffel are so much better
that one wonders why C++ even exists.   Of course the
answer is largely based on tradition, not on the value of
its inherent language design model.

I indicated earlier that language design choices need to be
made on the basis of criteria relevant to the problem one
is trying to solve.   One of the primary criteria for the
environment in which I work is dependability.  At present,
the most powerful language toolset to satisfy the need
for high-integrity, highly dependable software is called
SPARK, not C++, not PL/I.   It is a niche language,
to be sure.  One would not use SPARK for pedestrian
projects such as business data processing.  However,
there is currently no language model better suited to
the creation of safety-critical software.

On the other hand, most languages can be used with
some confidence for other kinds of software, even C++.
As you have agreed, C++ includes a lot of very dangerous
options.  What you have not acknowledged is that not
all languages do include those same opportunities for
mistakes.

Perhaps when you have the opportunity to step away from
your programming practice long enough to make objective
comparisons of the many language choices, you will begin
to discover how each of these variations in design makes
a difference to the success of a project, depending on the
criteria you have chosen to define success.

Thanks for an interesting dialogue,

Richard Riehle 


0
adaworks2 (748)
10/7/2006 1:42:58 AM
"LR" <lruss@superlink.net> wrote in message 
news:45267d5e$0$25792$cc2e38e6@news.uslec.net...
> adaworks@sbcglobal.net wrote:
>
>
>>
>> Often, it is better not to initialize a scalar to some value simply
>> because it can be done.
>
> Could you clarify/amplify that?
>
Yes.  Under rare circumstances, it might be better to
let an exception occur, provided one includes a proper
exception handler.   This decision would be driven by the
case where a valid value of any kind might deliver incorrect
results that might not be trapped until too late in the program.

> > An Ada compiler always gives the
>> programmer a warning when a scalar is never assigned a
>> value anywhere, initialized or not.
>
> Does Ada support a seperate compilation model?  Interlanguage programming?
>
Actually, Ada is one of the most democratic languages you
will find.   The separate compilation model is multi-layered,
and the language includes direct support for interoperability
with C, Java, C++, Fortran, and COBOL.   That model
could be easily extended (it is defined in the language) to
include other languages as they become popular.  Oh, and
Ada can also interact with Assembler and low-level machine code.
>
>> This warning enables the
>> programmer to examine that warning and determine what
>> action is appropriate.  The fact that a scalar is not initialized
>> is less problematic than the realization that it never gets a
>> value asssigned anywhere in the program.
>
> "anywhere in the program"?  Or anywhere in a "translation unit" (sorry, I'm 
> not sure what the proper name for this would be for Ada, so please translate 
> appropriately)?
>
The compiler will determine that a scalar is never initialized at
any point where it is visible.   The scoping rules are quite a bit
more strict in Ada than in most languages.   Also Ada separates
the notion of scope and visibility.   Therefore, the compiler can
easily determine whether a scalar has any chance of ever getting
a legal value when the program is executed.
>>
>> When using the SPARK examiner (a preprocessor for
>> creating highly reliable Ada code), one gets an even stronger
>> model for correctness.   At this stage of software practice,
>> there is no toolset better guaranteed to provide correct
>> programs than SPARK.   Before naysaying this, you need
>> to study SPARK for yourself.   Otherwise, you simply
>> won't understand the argument.
>
> I took a look at this: http://en.wikipedia.org/wiki/SPARK_programming_language
>
> Interesting, but it leaves me unconvinced.  I looked at 
> http://www.praxis-his.com/sparkada/ but couldn't find a tutorial there. 
> Perhaps you could recomend an online tutorial.
>
I don't have a stake in SPARK other than as a user.   However, I
think the people at PRAXIS might be quite willing to answer any
questions you might have.   SPARK, at present, seems to be the
most effective design for the support of and inclusion of formal
methods in a programming process.

Thanks again for your interest,

Richard Riehle 


0
adaworks2 (748)
10/7/2006 1:54:27 AM
"David Frank" <dave_frank@hotmail.com> wrote in message 
news:452691fb$0$3036$ec3e2dad@news.usenetmonster.com...
>
> <adaworks@sbcglobal.net> wrote in message 
> news:D7vVg.2266$NE6.342@newssvr11.news.prodigy.com...
>> First,  thanks for all the replies.   Note that I never said
>> that PL/I could not accomplish the equivalent of what
>> I posted.   In fact, I suggested that Robin would have
>> a good solution and invited him to show it to us.
>>
>> As to the Ada list type, there are a variety of ways to a
>> do the same thing in Ada.   For the example shown, I might
>> simply do this:
>>
>>          type List_Type is record
>>                name : string(1..30);
>>                nums : integer;
>>          end record;
>>
>>          type List_Type_Collection is array (Positive range <>) of List_Type;
>>
>> giving me an uconstrained array of List_Type records.    I could also simply
>> use an existing linked-list library, a tree library, or whatever other 
>> collection
>> library I might want.
>>
>> or, if I want to have an unconstrained name in List_Type,
>>
>>
>>          type List_Type is record
>>                name : unbounded_string;
>>                nums : integer;
>>          end record;
>>
>> which will allow me to have strings of whatever size I want.
>>
>> Richard Riehle
>>
>
> but you have declared nums as a scalar not as a allocatable ARRAY member of 
> List_Type
> therefore it isnt equivalent to my Fortran declaration and as a result cant 
> hold ALL the data
> of the "arbitrary lists" problem..
>
OK.   I will simply include an allocatable array of integers
in the record.

         type Integer_List is array(Positive range <>) of Integer;
         type List_Type (Allocated_Size : Positive) is record
                name : unbounded_string;
                nums : integer_list(1..Allocated_Size);
          end record;

Now, when I declare a record of this type I will simply code,

         My_Record : List_Type(Allocated_Size => 200);

which will give me a bounded list, dynamically allocated, for the
record.   I could also do this with an indirection-based solution,
but that will require a little more code.

Richard Riehle 


0
adaworks2 (748)
10/7/2006 2:00:28 AM
adaworks@sbcglobal.net wrote:
> "LR" <lruss@superlink.net> wrote in message 
> news:45252379$0$25786$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>>"LR" <lruss@superlink.net> wrote in message 
>>>news:4523c687$0$25791$cc2e38e6@news.uslec.net...
>>>
> 
> I have snipped away your comments and mine.  It is clear
> that we will not find many points of agreement in this discussion.
> 
> What is also clear is that you are satisfied with C++, just
> as I was satisfied with languages I once knew when I was
> busy working in the day-to-day world of programming
> over a nearly thirty year period.

No, that's not clear.  I use the language and like it for many reasons, 
but satisfied?  I'm not sure about that.


> I have been privileged to have the time to step back and
> look closely at the relative merits of a large number of
> languages in recent years.   I suspect that this is not
> something you have had the time to do since you are
> probably heavily involved in actually making software
> work each day, just as I was during my early career.

I wouldn't say that I haven't done this at all.


> 
> My comments about C++, and other languages, are
> based on both my experience as a programmer and
> my research into the foundations of programming and
> programming language design.   That research includes
> a lot of examination of a lot of programs and interviews
> with a lot of programmers. Most of those programmers
> are ardent about their own language choice and will
> argue the virtues of their chosen language with vigor
> and fervent commitment.   That is as it should be
> since it is important to have confidence in one's
> choices.
> 
> However, as I examine different language designs, it
> becomes clear that some language design choices,
> while seeming to be a good idea when developed,
> have not been as good as they might have seemed
> to the original designers.  This is why new language
> and better designs continue to emerge.

And worse as well, no?  In any case, most languages have both good 
things and bad things.



> Whatever your favorite language might be, it is important
> for intellectual honesty to prevail.   As you have indicated,
> no language design is perfect.  Even the best of the newly
> designed languages can be criticized at some level.  

New isn't the equivalent of better.

 > Still,
> those new designs do advance the state of programming
> practice.   

I don't always share that view.  There's at least one newer language 
that I know of that I don't think was created to advance the state of 
the art, but to attack one of the creator's competitors.

 > Older languages that evolve to adapt to new
> ideas about programming and software development
> are able to hold on to some share of the programming
> marketplace.   

Yes, as you've pointed out, this is the case for Fortran.

 > In some cases, the evolution results in the
> language becoming really good in some niche.   Other
> times, the evolution of the language represents some real
> improvements that guarantee a following for a long time.
> The continued evolution of Fortran is a good case for
> that last statement.

I'm curious to see how this plays out.  There are plenty of people who 
program in Fortran who, or so it seems to me, are probably using a very 
narrow subset of the current language.  This persists because the subset 
is considered to be simple and easy to use in an age where software is 
becoming more and more complex.



> As I look at the evolution of C++, it seems that many new
> features are intended to compensate for flaws in the original
> design.   The language seems to be turning into the rough
> equivalent of a "pile of dry rot held up by flying buttresses."

Interesting perspective.  I tend to think that languages that get used 
acquire interesting features.  Much like human languages do.

> New language designs such as Eiffel are so much better
> that one wonders why C++ even exists.   

Utility?  Availability?  Familiarity?  I wonder why you wonder about it.

> Of course the
> answer is largely based on tradition, not on the value of
> its inherent language design model.

Tradition?  Honestly, I've never heard anyone suggest that they chose a 
language because of tradition.


> 
> I indicated earlier that language design choices need to be
> made on the basis of criteria relevant to the problem one
> is trying to solve.   

The best tool for the job?  I once read a book printed around the start 
of WWII that was for owners of milling machines that showed them how 
they could do things that are normally done on a lathe, like make gears. 
   I guess there was a shortage of lathes.  Best to use the tools you 
have, and the people who know how to use them, and make the gears rather 
than suffer the paralysis of fretting over the best tool.


 > One of the primary criterion for the
> environment in which I work is dependability.  At present,
> the most powerful language toolset to satisfy the need
> for high-integrity, highly dependable software is called
> SPARK, not C++, not PL/I.  

I don't see how SPARK is really all that different from a liberal use of 
assert() and lint.  Perhaps I'm missing something obvious.

 >  It is a niche language,
> to be sure.  One would not use SPARK for pedestrian
> projects such as business data processing.  

I honestly don't understand this.  Why not?

 > However,
> there is currently no language model better suited to
> the creation of safety-critical software.




> On the other hand, most languages can be used with
> some confidence for other kinds of software, even C++.
> As you have agreed, C++ includes a lot of very dangerous
> options.  What you have not acknowledged is that not
> all languages do include those same opportunities for
> mistakes.

All languages include opportunities for mistakes.



> Perhaps when you have the opportunity to step away from
> your programming practice long enough to make objective
> comparisons of the many languages choices, you will begin
> to discover how each of these variations in design makes
> a difference to the success of a project, depending on the
> criteria you have chosen to define success.

I like that qualifier. ;)


> Thanks for an interesting diaglogue,


Thank you very much too.

LR
0
lruss (582)
10/7/2006 3:21:42 AM
adaworks@sbcglobal.net wrote:

> "LR" <lruss@superlink.net> wrote in message 
> news:45267d5e$0$25792$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>
>>
>>>Often, it is better not to initialize a scalar to some value simply
>>>because it can be done.
>>
>>Could you clarify/amplify that?
>>
> 
> Yes.  Under rare circumstances, it might be better to
> let an exception occur, provided one includes a proper
> exception handler.   This decision would be driven by the
> case where a valid value of any kind might deliver incorrect
> results that might not be trapped until too late in the program.

I'm not sure that I understand this.  Are you saying that the condition 
is met when a variable in your code that has a valid value will cause 
an incorrect result?  That sounds like poor design.

As if we had a shoe that is fine unless it has a foot in it.



>>>An Ada compiler always gives the
>>>programmer a warning when a scalar is never assigned a
>>>value anywhere, initialized or not.
>>
>>Does Ada support a seperate compilation model?  Interlanguage programming?
>>
> 
> Actually, Ada is one of the most democratic languages you
> will find.   The separate compilation model is multi-layered,
> and the language includes direct support for interoperability
> with C, Java, C++, Fortran, and COBOL.   That model
> could be easily extended (it is defined in the language) to
> include other languages as they become popular.  Oh, and
> Ada can also interact with Assembler and low-level machine code.

My question really had more to do with how SPARK was going to figure out 
if a particular variable is initialized or how it should be.  It seems 
to me that separate compilation might cause complications.  Is the 
conditional information saved in whatever the object files are called?


> 
>>>This warning enables the
>>>programmer to examine that warning and determine what
>>>action is appropriate.  The fact that a scalar is not initialized
>>>is less problematic than the realization that it never gets a
>>>value asssigned anywhere in the program.
>>
>>"anywhere in the program"?  Or anywhere in a "translation unit" (sorry, I'm 
>>not sure what the proper name for this would be for Ada, so please translate 
>>appropriately)?
>>
> 
> The compiler will determine that a scalar is never initialized at
> any point where it is visible.   The scoping rules are quite a bit
> more strict in Ada than in most languages.   Also Ada separates
> the notion of scope and visibility.   Therefore, the compiler can
> easily determine whether a scalar has any chance of ever getting
> a legal value when the program is executed.

Are you speaking of Ada, or SPARK here?  I'm not sure I can see how this 
can happen for a separate compilation model unless there is an awful lot 
of info kept after compilation.



> 
>>>When using the SPARK examiner (a preprocessor for
>>>creating highly reliable Ada code), one gets an even stronger
>>>model for correctness.   At this stage of software practice,
>>>there is no toolset better guaranteed to provide correct
>>>programs than SPARK.   Before naysaying this, you need
>>>to study SPARK for yourself.   Otherwise, you simply
>>>won't understand the argument.
>>
>>I took a look at this: http://en.wikipedia.org/wiki/SPARK_programming_language
>>
>>Interesting, but it leaves me unconvinced.  I looked at 
>>http://www.praxis-his.com/sparkada/ but couldn't find a tutorial there. 
>>Perhaps you could recomend an online tutorial.
>>
> 
> I don't have a stake in SPARK other than as a user.   However, I
> think the people at PRAXIS might be quite willing to answer any
> questions you might have.   SPARK, at present, seems to be the
> most effective design for the support of and inclusion of formal
> methods in a programming process.

For now.  Like you said elsewhere in this thread, languages evolve. 
It'll be interesting to see if other languages try to adapt this more 
formally.  Or not.

LR

0
lruss (582)
10/7/2006 3:27:54 AM
adaworks@sbcglobal.net wrote:
> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
> news:ees344$6s$7@naig.caltech.edu...
>> David Frank <dave_frank@hotmail.com> wrote:
>>
>>> E.G.  No one is willing to confirm if PL/I has equivalent declaration of
>>> Fortran's defined type variables because they dont trust there own knowledge
>>> well enuf to state the facts OR in Vowels case he wont respond because he
>>> knows it does NOT.
>> I don't know if it has defined type variables, I presume you mean
>> something like C's typedef.  I don't remember that Fortran does, either.
>>
> typedef is a farce.  Too many C programmers think it is doing
> something it isn't doing at all.   It is not a capability for declaring or 
> defining
> new types.  Rather, it is a way to create an alias for an existing type.
> 
> I think maybe David is asking whether one can invent new types as one
> does in Ada.  For example, how would one declare, in PL/I, the following?
> 
>           type Int16 is range -2**15 .. 2**15 - 1;
>           for Int16'Size use 16;
> 
> which says give the new type called Int16 a range as shown
> and force it to be stored in 16 bits;  or,
> 
>         type  Color is (Red, Yellow, Blue);
>         for Color use (Red     => 16#34F2#,
>                               Yellow => 16#34F3#,
>                               Blue     => 16#34F4#);
> 
> which says, for the enumerated values named Red, Yellow, and Blue
> force the machine representation to the hexadecimal values shown.
> 
> I am pretty sure something like this is possible in PL/I.   Perhaps Robin
> can give an example in PL/I source code.

DEFINE ALIAS INT16 FIXED BINARY(15,0); -- but it is not guaranteed to be 
in 16 bits. Of course, it isn't necessarily guaranteed in Ada, either. 
And PL/I does not have the ability to declare true numeric types -- only 
aliases. Ranges are also unavailable, except insofar as they are 
implied by precisions.

DEFINE ORDINAL COLOR (RED VALUE (34F2B4),
                       YELLOW VALUE (34F3B4),
                       BLUE VALUE (34F4B4));

-- 
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
   -- Charles Williams.  "Taliessin through Logres: Prelude"
0
jwkenne (1442)
10/7/2006 5:30:54 AM
LR wrote:
> I've always wondered though, what happens if you specify something like 
> bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?

Whatever the compiler designer feels like doing -- but, as a rule, it 
will only go up to whatever a C compiler on the same system implements 
as "long".

-- 
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
   -- Charles Williams.  "Taliessin through Logres: Prelude"
0
jwkenne (1442)
10/7/2006 5:35:45 AM
LR wrote:
> 
> 
> I don't always share that view.  There's at least one newer language 
> that I know of that I don't think was created to advance the state of 
> the art, but to attack one of the creator's competitors.
> 

Wow, what a statement! Either tell us the language, the creator, and the 
competitor, or don't tell us anything at all.  Otherwise all it is is innuendo.
0
jjw (608)
10/7/2006 7:42:47 AM
John W. Kennedy wrote:
> adaworks@sbcglobal.net wrote:
> 
>> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
>> news:ees344$6s$7@naig.caltech.edu...
>>
>>> David Frank <dave_frank@hotmail.com> wrote:
>>>
>>>> E.G.  No one is willing to confirm if PL/I has equivalent 
>>>> declaration of
>>>> Fortran's defined type variables because they dont trust there own 
>>>> knowledge
>>>> well enuf to state the facts OR in Vowels case he wont respond 
>>>> because he
>>>> knows it does NOT.
>>>
>>> I don't know if it has defined type variables, I presume you mean
>>> something like C's typedef.  I don't remember that Fortran does, either.
>>>
>> typedef is a farce.  Too many C programmers think it is doing
>> something it isn't doing at all.   It is not a capability for 
>> declaring or defining
>> new types.  Rather, it is a way to create an alias for an existing type.
>>
>> I think maybe David is asking whether one can invent new types as one
>> does in Ada.  For example, how would one declare, in PL/I, the following?
>>
>>           type Int16 is range -2**15 .. 2**15 - 1;
>>           for Int16'Size use 16;
>>
>> which says give the new type called Int16 a range as shown
>> and force it to be stored in 16 bits;  or,
>>
>>         type  Color is (Red, Yellow, Blue);
>>         for Color use (Red     => 16#34F2#,
>>                               Yellow => 16#34F3#,
>>                               Blue     => 16#34F4#);
>>
>> which says, for the enumerated values named Red, Yellow, and Blue
>> force the machine representation to the hexadecimal values shown.
>>
>> I am pretty sure something like this is possible in PL/I.   Perhaps Robin
>> can give an example in PL/I source code.
> 
> 
> DEFINE ALIAS INT16 FIXED BINARY(15,0); -- but it is not guaranteed to be 
> in 16 bits. Of course, it isn't necessarily guaranteed in Ada, either. 
> And PL/I does not have the ability to declare true numeric types -- only 
> a aliases. Ranges are also unavailable, except insofar as they are 
> implied by precisions.
> 
> DEFINE ORDINAL COLOR (RED VALUE (34F2B4),
>                       YELLOW VALUE (34F3B4),
>                       BLUE VALUE (34F4B4));
> 
You seem to have omitted some quotes in your B4 constants.
0
jjw (608)
10/7/2006 7:47:06 AM
<adaworks@sbcglobal.net> wrote in message 
news:0TDVg.8033$TV3.2838@newssvr21.news.prodigy.com...
>
> "David Frank" <dave_frank@hotmail.com> wrote in message 
> news:452691fb$0$3036$ec3e2dad@news.usenetmonster.com...
>>
>
>> but you have declared nums as a scalar not as a allocatable ARRAY member 
>> of List_Type
>> therefore it isnt equivalent to my Fortran declaration and as a result 
>> cant hold ALL the data
>> of the "arbitrary lists" problem..
>>
> OK.   I will simply include an allocatable array of integers
> in the record.
>
>         type Integer_List is array(Positive range <>) of Integer;
>         type List_Type(Allocated_Size) is record
>                name : unbounded_string;
>                nums : integer_list(1..Allocated_Size);
>          end record;
>
> Now, when I declare a record of this type I will simply code,
>
>         My_Record : List_Type(Allocated_Size => 200);
>
> which will give me a bounded list, dynamically allocated, for the
> record.   I could also do this with an indirection-based solution,
> but that will require a little more code.
>
> Richard Riehle
>

Not sure, but it appears you still haven't duplicated my Fortran declaration
   Type List
      Character,Allocatable :: Name(:)
      Integer,Allocatable :: Nums(:)
   End Type
   Type (List),Allocatable :: Lists(:)

Which allows me to have INDEPENDENT array sizes for the
Name, Nums members of EACH instance of List within Lists.

IOW, all data from a file with an arbitrary # of lists can be contained.

Which allows EACH of the n lists' Nums arrays, Lists(n)%Nums, to be
independently allocated.


0
dave_frank (2243)
10/7/2006 8:50:32 AM
"LR" <lruss@superlink.net> wrote in message 
news:45271e8c$0$25783$cc2e38e6@news.uslec.net...
> adaworks@sbcglobal.net wrote:
>
>
>>
>> Yes.  Under rare circumstances, it might be better to
>> let an exception occur, provided one includes a proper
>> exception handler.   This decision would be driven by the
>> case where a valid value of any kind might deliver incorrect
>> results that might not be trapped until too late in the program.
>
> I'm not sure that I understand this.  Are you saying that the condition is met 
> if you have a variable in your code with a valid value will cause an incorrect 
> result?  That sounds like poor design.
>
> As if we had a shoe that is fine unless it has a foot in it.
>
The initialization of a scalar with a value that could be interpreted
as correct at run-time, if it becomes a kind of default value, may
cause more run-time errors than if it is not initialized at all.  It is not
always possible to decide that a given initialization is better than no
value at all.  The circumstances will vary, of course.
>
>
>>>>An Ada compiler always gives the
>>>>programmer a warning when a scalar is never assigned a
>>>>value anywhere, initialized or not.
>>>
>>>Does Ada support a seperate compilation model?  Interlanguage programming?
>>>
>>
>> Actually, Ada is one of the most democratic languages you
>> will find.   The separate compilation model is multi-layered,
>> and the language includes direct support for interoperability
>> with C, Java, C++, Fortran, and COBOL.   That model
>> could be easily extended (it is defined in the language) to
>> include other languages as they become popular.  Oh, and
>> Ada can also interact with Assembler and low-level machine code.
>
> My question really had more to do with how SPARK was going to figure out if a 
> particular variable is initialized or how it should be.  It seems to me that 
> seperate compilation might cause complications.  Is the conditional 
> information saved in whatever the object files are called?
>
During development, there is quite a bit of supporting information saved
for a full verification process.   Separate compilation is designed over the
package model.  Let me provide a simple example here.  Be aware that
this is a toy example.

A library unit may be composed of multiple compilation units.  In every
case, the various parts of a library unit may be compiled separately.
The first compilation unit is the specification.

         generic
             type Item is private;
         package Stack is
             procedure Push(Data : in   Item);
             procedure Pop (Data : out Item);
             function Is_Full return Boolean;
        end Stack;

This is a generic package meaning that it is independent of any
particular data type.   The same package could be instantiated
for an integer, a float, or whatever.
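
For instance, an instantiation for Integer might look roughly like this
(Integer_Stack is just an illustrative name):

         with Stack;
         package Integer_Stack is new Stack (Item => Integer);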

Next is the implementation part, called a package body.

        package body Stack is
             -- here we can define the structure of the Stack
             procedure Push(Data : in   Item) is separate;
             procedure Pop (Data : out Item) is separate;
             function Is_Full return Boolean is separate;
        end Stack;

Note that the body does not need to refer to the specification
through #include mechanisms as one would with C++.  This
is because the library unit is integral even though it can be
separately compiled.

Each of the procedures and functions can also be compiled
as separate files.   Once again, the Ada library model is
designed so the entire library unit is treated as a single library
unit, even though the compilation units can be in separate
files.
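
As a sketch of what one of those separately compiled files might contain
(the null body is only a placeholder for the real stack logic):

         separate (Stack)
         procedure Push (Data : in Item) is
         begin
            null;   --  placeholder: push Data onto the stack here
         end Push;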

Ada (and SPARK which uses Ada as its underlying compilation
engine) has a unique visibility model that ensures consistency across
these compilation units.   Unlike a #include which gives one both
scope and visibility, Ada separates these two ideas.  An element
of a library unit may be in scope, but not be directly visible.  This
subtle difference assists in the feature I described earlier:  ensuring
that every scalar can be tested for whether it can ever be given a
value anywhere in the final program.

In another reply, you indicate that you don't see the difference
between the assert()-and-lint approach and what SPARK does.   The assert is not
as fine-grained in its checking as the SPARK model.  Further,
SPARK's checking is primarily static.   A designer inserts the
assertions in the code and the entire program is statically
evaluated to determine whether there are any places in the
code where those assertions can be violated.   Also, SPARK
can detect, not in all cases, but in some, whether there will
be conflicting assertions.

The SPARK model is very close to a theorem-proving
approach, although we still have a long way to go in software
before we are really able to satisfy all the issues of theorem
proving.

You asked why we would not do this with all software.  The
answer is primarily economics.   Formal methods are not the
right approach for every software problem.  It is an expensive
way to build software.   However, it is also the right way to
build software in safety-critical environments.  SPARK is
unique and that means expensive.   If a software system
must absolutely work according to its specification or
people could be killed or maimed because of a software
failure, SPARK is probably the right approach.   Most
software does not fall into that category.

For safety-critical software, the stakes are very high. Few
other language designs will be adequate; not C++, not
PL/I, not C, not Fortran, and not even all of Ada.   SPARK
forbids the use of some Ada constructs because they cannot
be confirmed as safe by the SPARK Examiner.

Thanks for your question.

Richard Riehle


0
adaworks2 (748)
10/7/2006 4:14:24 PM
"David Frank" <dave_frank@hotmail.com> wrote in message 
news:45276d40$0$3047$ec3e2dad@news.usenetmonster.com...
>
> <adaworks@sbcglobal.net> wrote in message 
> news:0TDVg.8033$TV3.2838@newssvr21.news.prodigy.com...
>>
>> "David Frank" <dave_frank@hotmail.com> wrote in message 
>> news:452691fb$0$3036$ec3e2dad@news.usenetmonster.com...
>>>
>>
>>> but you have declared nums as a scalar not as a allocatable ARRAY member of 
>>> List_Type
>>> therefore it isnt equivalent to my Fortran declaration and as a result cant 
>>> hold ALL the data
>>> of the "arbitrary lists" problem..
>>>
>> OK.   I will simply include an allocatable array of integers
>> in the record.
>>
>>         type Integer_List is array(Positive range <>) of Integer;
>>         type List_Type(Allocated_Size) is record
>>                name : unbounded_string;
>>                nums : integer_list(1..Allocated_Size);
>>          end record;
>>
>> Now, when I declare a record of this type I will simply code,
>>
>>         My_Record : List_Type(Allocated_Size => 200);
>>
>> which will give me a bounded list, dynamically allocated, for the
>> record.   I could also do this with an indirection-based solution,
>> but that will require a little more code.
>>
>> Richard Riehle
>>
>
> Not sure, but it appears you still havent duplicated my Fortran declaration
>   Type List
>      Character,Allocatable :: Name(:)
>      Integer,Allocatable :: Nums(:)
>   End Type
>   Type (List),Allocatable :: Lists(:)
>
> Which allows me to have INDEPENDENT array sizes for
> Name, Nums members of EACH  instance of List within Lists
>
> IOW   all data from a file with arbitrary #lists can be contained.
>
>
> Which allows me EACH  of n Lists  Lists(n)%Nums   to be independently 
> allocated
The example I posted shows an unconstrained array and an unconstrained
record.   At the time of declaration, each record can be allocated a different
number of integer values.   The Name is an unbounded string which means
it can vary according to however much I want to put in it.  In fact, we
can vary that string size dynamically, if we wish.    For the integer list,
had I chosen to use a simple linked list for my implementation, the list
could also grow and shrink dynamically.
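
To make that concrete, a small sketch (the object names are illustrative,
not taken from the thread):

         Short_List : List_Type (Allocated_Size => 5);
         Long_List  : List_Type (Allocated_Size => 10_000);
         --  Short_List.nums holds 5 integers, Long_List.nums holds 10_000,
         --  and each name is an Unbounded_String that can grow on its own.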

Richard Riehle 


0
adaworks2 (748)
10/7/2006 4:18:34 PM
"LR" <lruss@superlink.net> wrote in message 
news:4526ccb8$0$25774$cc2e38e6@news.uslec.net...
>
> I've always wondered though, what happens if you specify something like 
> bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?
>
Some languages such as Scheme, Smalltalk, Python, and many others
allow the programmer to have numeric values of any size they want, and
to do arithmetic on them.  Consider,

x := 15415112987195719571729512219305 / 
1412989375160325123512512612740571975923571925791

This would evaluate just fine in some of the languages named.  The
numbers are not implemented simply on the basis of the underlying
word size of the machine.

Richard Riehle



0
adaworks2 (748)
10/7/2006 4:23:10 PM
adaworks@sbcglobal.net wrote:

> "LR" <lruss@superlink.net> wrote in message 
> news:4526ccb8$0$25774$cc2e38e6@news.uslec.net...
> 
>>I've always wondered though, what happens if you specify something like 
>>bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?
>>
> 
> Some languages such as Scheme, Smalltalk, Python, and many others
> allow the programmer to have numeric values of any size they want, and
> to do arithmetic on them.  Consider,
> 
> x := 15415112987195719571729512219305 / 
> 1412989375160325123512512612740571975923571925791
> 
> This would evaluate just fine in some of the languages named.  The
> numbers are not implemented simply on the basis of the underlying
> word size of the machine.

I'm aware of this, but I was asking particularly about PL/I.

But since you've raised the issue, how for example, are irrational 
constants represented/stored in these languages?

And of course, for my money, the nice thing about C++ is that you can 
come pretty close to the syntax above, if you take the time to write 
classes that can deal with these kinds of numbers.

I think that Java has something like BigNum or BigInt or something to 
handle these already.

LR
0
lruss (582)
10/7/2006 5:46:57 PM
adaworks@sbcglobal.net wrote:

> "LR" <lruss@superlink.net> wrote in message 
> news:45271e8c$0$25783$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>
>>
>>>Yes.  Under rare circumstances, it might be better to
>>>let an exception occur, provided one includes a proper
>>>exception handler.   This decision would be driven by the
>>>case where a valid value of any kind might deliver incorrect
>>>results that might not be trapped until too late in the program.
>>
>>I'm not sure that I understand this.  Are you saying that the condition is met 
>>if you have a variable in your code with a valid value will cause an incorrect 
>>result?  That sounds like poor design.
>>
>>As if we had a shoe that is fine unless it has a foot in it.
>>
> 
> The initialization of a scalar with a value that could be intepreted
> as correct at run-time, if it becomes a kind of default value, may
> cause more run-time errors than if it is not initialized at all.  It is not
> always possible to decide that a given initialization is better than no
> value at all.  The circumstances will vary, of course.

I find this pretty confusing.  How can a variable have "no value at all" 
unless you have some meta-data attached to the variable that indicates 
that it hasn't been initialized or had a value assigned to it? 
Otherwise, I think the bits will have some 'value'.  It may be a 'legal' 
value or 'not legal' but the bits will indicate some value.  No?

I get the feeling I'm missing something.

Also, can you give an example where no initialization is better than 
initialization?

[snip]
[snip]
> Note that the body does not need to refer to the specification
> through #include mechanisms as one would with C++.  This
> is because the library unit is integral even though it can be
> separately compiled.

I see pluses and minuses in that.

> 
> Each of the procedures and functions can also be compiled
> as separate files.   Once again, the Ada library model is
> designed so the entire library unit is treated as a single library
> unit, even though the compilation units can be in separate
> files.

There has to be some underlying method for determining where the files 
are though, right?  Is this implementation/platform dependent?


[snip]
> In another reply, you indicate that you don't see the difference
> between lint assert and what SPARK does.   

I should have been more specific.  I don't see much of a difference, 
although it seems to me that SPARK is kind of like these but stronger.

 >  The assert is not
> as fine-grained in its checking as the SPARK model.  Further,
> SPARK's checking is primarily static.   

Another difference is that assert is a runtime check.  But C++ may yet 
get some static checking feature.  Templates will make that likely.

 > A designer inserts the
> assertions in the code and the entire program is statically
> evaluated to determine whether there are any places in the
> code where those assertions can be violated.   

'Are' violated, or 'can be' violated?  I feel a little confused here, is 
the code that evaluates the SPARK assertions good enough to tell if the 
constraints will be violated at run time?

 > Also, SPARK
> can detect, not in all cases, but in some, whether there will
> be conflicting assertions.
> 
> The SPARK model is very close to a theorem-proving
> approach, although we still have a long way to go in software
> before we are really able to satisfy all the issues of theorem
> proving.

I'd like to know about it when you can tell if my program will halt. ;)

> 
> You asked why we would not do this with all software.  The
> answer is primarily economics.   

Of course.

I'm also curious about the size of the programs that you've used these 
methods for, and how long the compilation step takes.  Is there any 
overhead in the executables that you create?

Also, what kinds of problems have you run into?  Things that surprised you?

 > Formal methods are not the
> right approach for every software problem.  It is an expensive
> way to build software.   However, it is also the right way to
> build software in safety-critical environments.  SPARK is
> unique and that means expensive.   If a software system
> must absolutely work according to is specification or

What method are you using to specify how the software works, and how do 
you convert or translate the specification into SPARK code?


> people could be killed or maimed because of a software
> failure, SPARK is probably the right approach.   Most
> software does not fall into that category.
> 
> For safety-critical software, the stakes are very high. Few
> other language designs will be adequate; not C++, not
> PL/I, not C, not Fortran, and not even all of Ada.   SPARK
> forbids the use of some Ada constructs because they cannot
> be confirmed as safe by the SPARK Examiner.

Examples of this last part please?

> 
> Thanks for your question.

Thanks for your answers.

LR
0
lruss (582)
10/7/2006 6:03:13 PM
James J. Weinkam wrote:
> LR wrote:
>>
>>
>> I don't always share that view.  There's at least one newer language 
>> that I know of that I don't think was created to advance the state of 
>> the art, but to attack one of the creator's competitors.
>>
> 
> Wow, what a statement! Either tell us the language, the creator, and the 
> competitor, or don't tell us anything at all.  Otherwise all it is is 
> innuendo.

C#, of course. Microsoft is unalterably opposed to portable standards of 
all kinds.

-- 
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
   -- Charles Williams.  "Taliessin through Logres: Prelude"
0
jwkenne (1442)
10/7/2006 11:40:56 PM
LR wrote:
> adaworks@sbcglobal.net wrote:
> 
>> "LR" <lruss@superlink.net> wrote in message 
>> news:4526ccb8$0$25774$cc2e38e6@news.uslec.net...
>>
>>> I've always wondered though, what happens if you specify something 
>>> like bin(1000) fixed, on a machine whose largest native fixed type is 
>>> 32 bits?
>>>
>>
>> Some languages such as Scheme, Smalltalk, Python, and many others
>> allow the programmer to have numeric values of any size they want, and
>> to do arithmetic on them.  Consider,
>>
>> x := 15415112987195719571729512219305 / 
>> 1412989375160325123512512612740571975923571925791
>>
>> This would evaluate just fine in some of the languages named.  The
>> numbers are not implemented simply on the basis of the underlying
>> word size of the machine.
> 
> I'm aware of this, but I was asking particularly about PL/I.
> 
> But since you've raised the issue, how for example, are irrational 
> constants represented/stored in these languages?

Irrationals are normally restricted to floating-point, and to a certain 
precision.

A few languages implement rationals as a distinct type, with numerators 
and denominators. In such a language 7*(1/7) is guaranteed to return 
exactly 1.

> And of course, for my money, the nice thing about C++ is that you can 
> come pretty close to the syntax above, if you take the time to write 
> classes that can deal with these kinds of numbers.
> 
> I think that Java has something like BigNum or BigInt or something to 
> handle these already.

BigInteger and BigDecimal. However, because Java does not overload 
operators, expressions are of the form

   a.multiply(x).add(b)

rather than

   a*x+b

Ruby (which is purely OO) automatically switches between machine 
integers and big software integers.

-- 
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
   -- Charles Williams.  "Taliessin through Logres: Prelude"
0
jwkenne (1442)
10/7/2006 11:49:52 PM
James J. Weinkam wrote:
> John W. Kennedy wrote:
>> adaworks@sbcglobal.net wrote:
>>
>>> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
>>> news:ees344$6s$7@naig.caltech.edu...
>>>
>>>> David Frank <dave_frank@hotmail.com> wrote:
>>>>
>>>>> E.G.  No one is willing to confirm if PL/I has equivalent 
>>>>> declaration of
>>>>> Fortran's defined type variables because they dont trust there own 
>>>>> knowledge
>>>>> well enuf to state the facts OR in Vowels case he wont respond 
>>>>> because he
>>>>> knows it does NOT.
>>>>
>>>> I don't know if it has defined type variables, I presume you mean
>>>> something like C's typedef.  I don't remember that Fortran does, 
>>>> either.
>>>>
>>> typedef is a farce.  Too many C programmers think it is doing
>>> something it isn't doing at all.   It is not a capability for 
>>> declaring or defining
>>> new types.  Rather, it is a way to create an alias for an existing type.
>>>
>>> I think maybe David is asking whether one can invent new types as one
>>> does in Ada.  For example, how would one declare, in PL/I, the 
>>> following?
>>>
>>>           type Int16 is range -2**15 .. 2**15 - 1;
>>>           for Int16'Size use 16;
>>>
>>> which says give the new type called Int16 a range as shown
>>> and force it to be stored in 16 bits;  or,
>>>
>>>         type  Color is (Red, Yellow, Blue);
>>>         for Color use (Red     => 16#34F2#,
>>>                               Yellow => 16#34F3#,
>>>                               Blue     => 16#34F4#);
>>>
>>> which says, for the enumerated values named Red, Yellow, and Blue
>>> force the machine representation to the hexadecimal values shown.
>>>
>>> I am pretty sure something like this is possible in PL/I.   Perhaps 
>>> Robin
>>> can give an example in PL/I source code.
>>
>>
>> DEFINE ALIAS INT16 FIXED BINARY(15,0); -- but it is not guaranteed to 
>> be in 16 bits. Of course, it isn't necessarily guaranteed in Ada, 
>> either. And PL/I does not have the ability to declare true numeric 
>> types -- only a aliases. Ranges are also unavailable, except insofar 
>> as they are implied by precisions.
>>
>> DEFINE ORDINAL COLOR (RED VALUE (34F2B4),
>>                       YELLOW VALUE (34F3B4),
>>                       BLUE VALUE (34F4B4));
>>
> You seem to have omitted some quotes in your B4 constants.

Yeah. Make that: '34F2'XN, etc.. I've never had occasion to use hex 
FIXED BIN constants (as opposed to hex BIT constants), so I messed up.

-- 
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
   -- Charles Williams.  "Taliessin through Logres: Prelude"
0
jwkenne (1442)
10/7/2006 11:55:25 PM
John W. Kennedy wrote:

> James J. Weinkam wrote:
> 
>> LR wrote:
>>
>>>
>>>
>>> I don't always share that view.  There's at least one newer language 
>>> that I know of that I don't think was created to advance the state of 
>>> the art, but to attack one of the creator's competitors.
>>>
>>
>> Wow, what a statement! Either tell us the language, the creator, and 
>> the competitor, or don't tell us anything at all.  Otherwise all it is 
>> is innuendo.

True.

> 
> 
> C#, of course. 

I said "at least one".

 > Microsoft is unalterably opposed to portable standards of
> all kinds.

I don't think this is so.  I recall reading something by Herb Sutter 
that said that MS was maintaining its commitment to the C++ standard.

And speaking of standards, where is the standard for Java, or when was 
the standard for PL/I last updated?

LR

0
lruss (582)
10/8/2006 12:36:38 AM
"LR" <lruss@superlink.net> wrote in message 
news:452847e6$0$25778$cc2e38e6@news.uslec.net...
> John W. Kennedy wrote:
>>
>> C#, of course.
>
> > Microsoft is unalterably opposed to portable standards of
>> all kinds.
>
> I don't think this is so.  I recall reading something by Herb Sutter that said 
> that MS was maintaining it's commitment to the C++ standard.
>
> And speaking of standards where is the standard for Java, or when was the 
> standard for PL/I last updated?
>
The C# language, the existence of which may have been motivated
by the evil intent of Microsoft policy, is actually a slight improvement
over Java.   In particular, the addition of a feature called "delegates"
enhances the power of C# over Java for functional style software
development.

Microsoft originally announced that it intended for C# to become
an ISO standard (as are C++, Ada, and several other languages)
although I have not seen them submit an application to ISO yet.

Even so, I will admit that the reason for creating C# in the first
place appeared less inspired by the need for a better language
than by the spiteful deed of a mean-spirited monopolistic company.
The reasons for the invention of C# should not be taken as a
reason for criticizing the good job its designers did in bringing
it into existence.

Further, the entire .NET model (on which C# is built) has some
very nice properties.   In particular, the CLR (Common
Language Runtime) enhances the options for language
interoperability.   CLR and .NET are significant contributions
to the software architecture environment.

I am no fan of Microsoft, and don't place a lot of trust in
their good intentions.  However, when they do produce a
good product, I have to admit it.

Richard Riehle 


0
adaworks2 (748)
10/8/2006 1:32:40 AM
"LR" <lruss@superlink.net> wrote in message 
news:4527ebb2$0$25792$cc2e38e6@news.uslec.net...
> adaworks@sbcglobal.net wrote:
>>
>> The initialization of a scalar with a value that could be intepreted
>> as correct at run-time, if it becomes a kind of default value, may
>> cause more run-time errors than if it is not initialized at all.  It is not
>> always possible to decide that a given initialization is better than no
>> value at all.  The circumstances will vary, of course.
>
> I find this pretty confusing.  How can a variable have "no value at all" 
> unless you have some meta-data attached to the variable that indicates that it 
> hasn't been initialized or had a value assigned to it? Otherwise, I think the 
> bits will have some 'value'.  It may be a 'legal' value or 'not legal' but the 
> bits will indicate some value.  No?
>
> I get the feeling I'm missing something.
>
> Also, can you give an example where no initialization is better than 
> initialization?
>
Suppose I have a variable that I initialize to zero so my program
can compile without warnings.   If my program is designed so I
never have a method that updates that value, when the program
tries to use that value, it turns out to be valid and there is no
immediate error message.

On the other hand, suppose I do not assign an initial value to that
variable.   When I try to use it in my program, it will be an invalid
value and the program will raise an exception.   It is often better
for a program to fail to do anything than to do something that looks
right but isn't.
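
A rough Ada sketch of that idea follows.  The language does not promise
an exception for every uninitialized scalar, so this version uses a
range-constrained subtype and a 'Valid test to force the failure;
Sensor_Value and Reading are illustrative names:

         declare
            subtype Sensor_Value is Integer range 0 .. 1_000;
            Reading : Sensor_Value;     --  deliberately left uninitialized
         begin
            if not Reading'Valid then
               raise Constraint_Error;  --  fail loudly rather than compute
            end if;                     --  with a plausible-looking default
         end;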
>
>> Note that the body does not need to refer to the specification
>> through #include mechanisms as one would with C++.  This
>> is because the library unit is integral even though it can be
>> separately compiled.
>
> I see pluses and minuses in that.
>
>>
>> Each of the procedures and functions can also be compiled
>> as separate files.   Once again, the Ada library model is
>> designed so the entire library unit is treated as a single library
>> unit, even though the compilation units can be in separate
>> files.
>
> There has to be some underlying method for determining where the files are 
> though, right?  Is this implementation/platform dependent?
>
No.  This is not implementation dependent.   The specification for
the Ada language demands that the compiler detect every inconsistency,
even in separate compilation.

Unlike C or C++, Ada library units must compile correctly before
any dependent units can be compiled.   That is, where the #include
is textual, the Ada equivalent is library based.   The existence of the
library, along with the scope and visibility rules, ensure that no
artifact of a large program will be ignored during the compilation
of some dependent unit.
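
A sketch of what that looks like from the client side (Client is a
made-up name; Integer_Stack refers to the instantiation sketched earlier):

         with Integer_Stack;   --  a semantic dependence on the compiled
                               --  library unit, not a textual inclusion
         procedure Client is
         begin
            Integer_Stack.Push (42);
         end Client;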

In the early days of Ada, we made a lot of mistakes in design that
led to excessively long compilation times.  I recall a program of
4.5 million lines where the compilations took almost two days. The
computers were slower, but most of the slowness was due to our
failure to understand correct design procedures.   Once we did
understand those procedures, slowness due to dependency
checking virtually vanished.
>
> [snip]
>> In another reply, you indicate that you don't see the difference
>> between lint assert and what SPARK does.
>
> I should have been more specific.  I don't see much of a difference, although 
> it seems to me that SPARK is kind of like these but stronger.
>
SPARK goes well beyond the simple assert model.   To begin with, it
directly supports the notion of pre-, post-, and invariant conditions. The
post-condition model is especially powerful.   Eiffel also supports this
as a dynamic (run-time) feature.

SPARK also goes beyond simple assertion checking.   It includes a
special program called the Examiner which performs static analysis
of the entire set of programs.  In part, this is possible because of
SPARK's use of Ada's library model, but it is also a function of
the many kinds of assertions (including dependency assertions)
the designer can include in the code.
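
As a flavor of those annotations, a sketch in the SPARK annotation style
of that era (Increment and X are illustrative names; the --# lines are
what the Examiner reads):

         procedure Increment (X : in out Natural);
         --# derives X from X;
         --# pre    X < Natural'Last;
         --# post   X = X~ + 1;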
>
> 'Are' violated, or 'can be' violated?  I feel a little confused here, is the 
> code that evaluates the SPARK assertions good enough to tell if the 
> constraints will be violated at run time?
>
A good question.   The PRAXIS people will tell you that the
kind of static checking done by SPARK will eliminate any
errors that can be checked by the SPARK Examiner.  This
seems to be a very large number of kinds of errors.

Even so, no one claims that a programmer, or software
designer, will always specify every feature with perfect
accuracy.  There is always room for some kind of error.
All SPARK can do, and does do, is lower the probability
of errors.  It seems to do that better than anything else currently
available for software development.
>
>>
>> The SPARK model is very close to a theorem-proving
>> approach, although we still have a long way to go in software
>> before we are really able to satisfy all the issues of theorem
>> proving.
>
> I'd like to know about it when you can tell if my program will halt. ;)
>
The "halting problem" is still with us, as a problem in formal proofs.
However, we usually find ways to avoid dealing with it in real
software solutions.   I cannot think of the last time one of my
programs failed to halt, even though I could not have provided
a formal proof that it would.
>>
>> You asked why we would not do this with all software.  The
>> answer is primarily economics.
>
> Of course.
>
> I'm also curious about the size of the programs that you've used these methods 
> for, and how long the compilation step takes.  Is there any overhead in the 
> executables that you create?
>
Whenever we leave exception-handling activated in a deployed program,
there is a slight overhead.   Engineering is largely about trade-offs in
design and deployment decisions.   An engineer is striving to create a
product that abides by the "principle of least surprise."  SPARK and
Ada are designed, to a large extent, to reduce surprise in a software
product.

While we cannot eliminate surprise entirely in large-scale software
products, we can reduce the incidence of surprise.  Further, we can
also include "software circuit breakers" in our design in the form of
exception handling routines.   Safety-critical software should not
rely too heavily on exception handling, but no one would install
electrical wiring in their home without considering the potential for
a spike in the current that might burn down their home.
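
In Ada terms, such a "circuit breaker" is simply a handler wrapped around
the risky operation; Run_Control_Cycle and Enter_Safe_State below are
made-up names for the sketch:

         begin
            Run_Control_Cycle;
         exception
            when Constraint_Error | Program_Error =>
               Enter_Safe_State;   --  degrade to a known-safe state instead
         end;                      --  of continuing with bad data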

>>
>> For safety-critical software, the stakes are very high. Few
>> other language designs will be adequate; not C++, not
>> PL/I, not C, not Fortran, and not even all of Ada.   SPARK
>> forbids the use of some Ada constructs because they cannot
>> be confirmed as safe by the SPARK Examiner.
>
> Examples of this last part please?
>
Ada includes a model for concurrency.   The use of this
feature cannot be proven correct for a complex system.  Also,
dynamic binding (as in OOP) is a no-no for SPARK.  It is
too dependent on unpredictable events (regardless of what
OOP language one might use).   There are other features of
Ada (and other languages) that cannot be proven with formal
methods.   Anything that cannot be confirmed by SPARK is
rejected by it.

Even so, SPARK does nothing more than ensure that those
things it can check are valid and safe.   If someone chooses
to use unsafe constructs, they are on their own.
>>
I strongly recommend you check out the web site from
PRAXIS.   You will get a lot more information from them,
probably more accurate information than you will get from me.
The PRAXIS people are continually improving their product,
and I may be a little short on information regarding the latest
advances in their pursuit of highly reliable software.

I do know that they devote their entire set of corporate
resources to high-integrity software and that those organizations
who have chosen to use SPARK are regularly contributing
new ideas for even better dependability.   The safety-critical
software community is fairly small relative to other parts of
the software world, but they are dedicated to the constant
improvement in tools that ensure the safety of the software
products that fly people around the planet, control nuclear
power-plants, control the switching mechanisms in rail
transportation systems, and keep software-controlled
medical devices working without failures.

Richard Riehle 


0
adaworks2 (748)
10/8/2006 2:12:44 AM
"John W. Kennedy" <jwkenne@attglobal.net> wrote in message 
news:G2XVg.69$Ii5.22@newsfe10.lga...
> LR wrote:
>> adaworks@sbcglobal.net wrote:
>>
>>> "LR" <lruss@superlink.net> wrote in message 
>>> news:4526ccb8$0$25774$cc2e38e6@news.uslec.net...
>>>
>>>> I've always wondered though, what happens if you specify something like 
>>>> bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?
>>>
>>> Some languages such as Scheme, Smalltalk, Python, and many others
>>> allow the programmer to have numeric values of any size they want, and
>>> to do arithmetic on them.  Consider,
>>>
>>> x := 15415112987195719571729512219305 / 
>>> 1412989375160325123512512612740571975923571925791
>>>
>>> This would evaluate just fine in some of the languages named.  The
>>> numbers are not implemented simply on the basis of the underlying
>>> word size of the machine.
>>
>> I'm aware of this, but I was asking particularly about PL/I.
>>
>> But since you've raised the issue, how for example, are irrational constants 
>> represented/stored in these languages?
>
> Irrationals are normally restricted to floating-point, and to a certain 
> precision.
>
We need to be a little careful about terminology.  The numbers I showed
were rational numbers (i.e., based on a ratio model).     Many languages
have built in support for rational numbers in the form of fractions.  For
example, in Scheme I can add the following quite easily (not Scheme
syntax),

         (1/4 + 5/17 + 3/93) * (827 / 515/359)

will give a fractional result, not a decimal fraction.

The rational numbers, as shown, are never converted, internally,
to decimal fractions.   This preserves a high degree of accuracy
since we never lose precision due to the conversion to binary
and back to decimal that occurs with so many language designs.

Richard Riehle 


0
adaworks2 (748)
10/8/2006 2:33:02 AM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote in 
message news:45266089$14$fuzhry+tra$mr2ice@news.patriot.net...
> In <2PlVg.7848$TV3.3125@newssvr21.news.prodigy.com>, on 10/06/2006
>   at 05:27 AM, <adaworks@sbcglobal.net> said:
>
>>I think it is very short-sighted of the PL/I community to continue to
>>resist developing an OOP version of the language.
>
> It would be if they were.
>
Please expand on this reply.   Is there an operational version
of PL/I that now supports object-oriented programming?

          Extensible inheritance?
          Polymorphism?
          Dynamic binding?
          Message passing?
          Distinguished receiver?
          Genericity?

Thanks.

Richard Riehle 


0
adaworks2 (748)
10/8/2006 2:35:46 AM
"robin" <robin_v@bigpond.com> wrote in message 
news:7xtVg.42536$rP1.13239@news-server.bigpond.net.au...
> "David Frank" <dave_frank@hotmail.com> wrote in message
> news:45262687$0$3016$ec3e2dad@news.usenetmonster.com...
>>
>> integer(2) :: Int16
>

 > No, this doesn't give you 16 bits in Fortran.

It certainly does for those current compilers that support 16-bit integers.

...



0
dave_frank (2243)
10/8/2006 5:55:37 AM
adaworks@sbcglobal.net wrote:

> "LR" <lruss@superlink.net> wrote in message 
> news:4527ebb2$0$25792$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>>The initialization of a scalar with a value that could be intepreted
>>>as correct at run-time, if it becomes a kind of default value, may
>>>cause more run-time errors than if it is not initialized at all.  It is not
>>>always possible to decide that a given initialization is better than no
>>>value at all.  The circumstances will vary, of course.
>>
>>I find this pretty confusing.  How can a variable have "no value at all" 
>>unless you have some meta-data attached to the variable that indicates that it 
>>hasn't been initialized or had a value assigned to it? Otherwise, I think the 
>>bits will have some 'value'.  It may be a 'legal' value or 'not legal' but the 
>>bits will indicate some value.  No?
>>
>>I get the feeling I'm missing something.
>>
>>Also, can you give an example where no initialization is better than 
>>initialization?
>>
> 
> Suppose I have a variable that I initialize to zero so my program
> can compile without warnings.   If my program is designed so I
> never have a method that updates that value, when the program
> tries to use that value, it turns out to be valid and there is not
> immediate error message.
> 
> On the other hand, suppose I do not assign an initial value to that
> variable.   When I try to use it in my program, it will be an invalid
> value and the program will raise an exception.   It is often better
> for a program to fail to do anything than to do something that looks
> right but isn't.
> 

What do you mean by an "invalid value"?  Most variables are defined in 
some context that restricts their valid values to some subset (or, in 
pathological cases, some superset) of the values representable by the 
underlying machine representation.  For example, a variable intended to 
be used as an array subscript would probably be represented internally 
as some sort of native machine integer value.  Valid values for such a 
variable would be the range of integers corresponding to valid 
subscripts for the array.  In such a case, it's easy to choose an 
"invalid value" -- i.e., one outside the range of legal subscripts -- 
but it's not clear to me how that would necessarily raise an exception 
at run time.  Certainly it could be made to do so if the array were a 
properly-defined C++ class or if SUBSCRIPTRANGE were in effect in PL/I 
or similar mechanisms in other languages but, in most cases, raising an 
exception for such a case is not automatic.

The IEEE floating point standard has a signaling NaN that could be used 
to cause an uninitialized floating point variable (well, one initialized 
to a signaling NaN in the absence of explicit initialization) to raise 
an exception but that depends on the underlying hardware and is not 
likely to work for integers, character strings, Booleans, pointers, etc.

Then there's the issue of the validity of a value depending on context. 
  For a real-world example, take shirt sizes.  Men's shirt sizes are 
frequently specified as a combination of neck size and sleeve length. 
Except for special orders, only certain sleeve lengths are available for 
any given neck size and the available sleeve lengths will be different 
for different neck sizes.  In this case, the range of valid values for 
one variable would depend on the current value of another variable. 
This is not the best example, because one could easily pick a value for 
sleeve length that would be guaranteed to be invalid for any neck size 
(negative length, for example).  But there are certain to be 
applications for which any initial value chosen could be either valid or 
invalid depending on the value of some other variable at run time.

So -- what do you mean by "invalid value" for various data types 
(especially for a 1-bit representation of a Boolean) such that its use 
at run time will cause an exception to be raised without some additional 
programmer effort?

> [...]
> 
>>>The SPARK model is very close to a theorem-proving
>>>approach, although we still have a long way to go in software
>>>before we are really able to satisfy all the issues of theorem
>>>proving.
>>
>>I'd like to know about it when you can tell if my program will halt. ;)
>>
> 
> The "halting problem" is still with us, as a problem in formal proofs.
> However, we usually find ways to avoid dealing with it in real
> software solutions.   I cannot think of the last time one of my
> programs failed to halt, even though I could not have provided
> a formal proof that it would.
> 
The halting problem is only a problem for FSAs.  Humans do not share the 
same limitations.

> [...]
> 
> Richard Riehle 
> 
> 
Bob Lidral
lidral  at  alum  dot  mit  dot  edu
0
10/8/2006 8:55:07 AM
On Sat, 07 Oct 2006 19:33:02 -0700, <adaworks@sbcglobal.net> wrote:

>
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:G2XVg.69$Ii5.22@newsfe10.lga...
>> LR wrote:
>>> adaworks@sbcglobal.net wrote:
>>>
>>>> "LR" <lruss@superlink.net> wrote in message
>>>> news:4526ccb8$0$25774$cc2e38e6@news.uslec.net...
>>>>
>>>>> I've always wondered though, what happens if you specify something like
>>>>> bin(1000) fixed, on a machine whose largest native fixed type is 32 bits?
>>>>
>>>> Some languages such as Scheme, Smalltalk, Python, and many others
>>>> allow the programmer to have numeric values of any size they want, and
>>>> to do arithmetic on them.  Consider,
>>>>
>>>> x := 15415112987195719571729512219305 /
>>>> 1412989375160325123512512612740571975923571925791
>>>>
>>>> This would evaluate just fine in some of the languages named.  The
>>>> numbers are not implemented simply on the basis of the underlying
>>>> word size of the machine.
>>>
>>> I'm aware of this, but I was asking particularly about PL/I.
>>>
>>> But since you've raised the issue, how for example, are irrational
>>> constants represented/stored in these languages?
>>
>> Irrationals are normally restricted to floating-point, and to a certain
>> precision.
>>
> We need to be a little careful about terminology.  The numbers I showed
> were rational numbers (i.e., based on a ratio model).     Many languages
> have built in support for rational numbers in the form of fractions.  For
> example, in Scheme I can add the following quite easily (not Scheme
> syntax),
>
>          (1/4 + 5/17 + 3/93) * (827 / 515/359)
>
> will give a fractional result, not a decimal fraction.

Do you mean it determines the common denominator? Does it reduce it?
Interesting, but not useful.

>
> The rational numbers, as shown, are never converted, internally,
> to decimal fractions.   This preserves a high degree of accuracy
> since we never lose precision due to the conversion to binary
> and back to decimal that occurs with so many language designs.
>
> Richard Riehle
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/8/2006 12:37:18 PM
On Sat, 07 Oct 2006 16:55:25 -0700, John W. Kennedy <jwkenne@attglobal.net> wrote:

> James J. Weinkam wrote:
>> John W. Kennedy wrote:
>>> adaworks@sbcglobal.net wrote:
>>>
>>>> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in messag=
e  =

>>>> news:ees344$6s$7@naig.caltech.edu...
>>>>
>>>>> David Frank <dave_frank@hotmail.com> wrote:
>>>>>
>>>>>> E.G.  No one is willing to confirm if PL/I has equivalent  =

>>>>>> declaration of
>>>>>> Fortran's defined type variables because they dont trust there ow=
n  =

>>>>>> knowledge
>>>>>> well enuf to state the facts OR in Vowels case he wont respond  =

>>>>>> because he
>>>>>> knows it does NOT.
>>>>>
>>>>> I don't know if it has defined type variables, I presume you mean
>>>>> something like C's typedef.  I don't remember that Fortran does,  =

>>>>> either.
>>>>>
>>>> typedef is a farce.  Too many C programmers think it is doing
>>>> something it isn't doing at all.   It is not a capability for  =

>>>> declaring or defining
>>>> new types.  Rather, it is a way to create an alias for an existing =
 =

>>>> type.
>>>>
>>>> I think maybe David is asking whether one can invent new types as o=
ne
>>>> does in Ada.  For example, how would one declare, in PL/I, the  =

>>>> following?
>>>>
>>>>           type Int16 is range -2**15 .. 2**15 - 1;
>>>>           for Int16'Size use 16;
>>>>
>>>> which says give the new type called Int16 a range as shown
>>>> and force it to be stored in 16 bits;  or,
>>>>
>>>>         type  Color is (Red, Yellow, Blue);
>>>>         for Color use (Red     =3D> 16#34F2#,
>>>>                               Yellow =3D> 16#34F3#,
>>>>                               Blue     =3D> 16#34F4#);
>>>>
>>>> which says, for the enumerated values named Red, Yellow, and Blue
>>>> force the machine representation to the hexadecimal values shown.
>>>>
>>>> I am pretty sure something like this is possible in PL/I.   Perhaps=
  =

>>>> Robin
>>>> can give an example in PL/I source code.
>>>
>>>
>>> DEFINE ALIAS INT16 FIXED BINARY(15,0); -- but it is not guaranteed t=
o  =

>>> be in 16 bits. Of course, it isn't necessarily guaranteed in Ada,  =

>>> either. And PL/I does not have the ability to declare true numeric  =

>>> types -- only a aliases. Ranges are also unavailable, except insofar=
  =

>>> as they are implied by precisions.
>>>
>>> DEFINE ORDINAL COLOR (RED VALUE (34F2B4),
>>>                       YELLOW VALUE (34F3B4),
>>>                       BLUE VALUE (34F4B4));
>>>
>> You seem to have omitted some quotes in your B4 constants.
>
> Yeah. Make that: '34F2'XN, etc.. I've never had occasion to use hex  =

> FIXED BIN constants (as opposed to hex BIT constants), so I messed up.=

>
Well the notation is certainly superfluous.  Hex bit constants work just=

fine without any loss of efficiency


-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/8/2006 12:40:46 PM
On Sat, 07 Oct 2006 19:12:44 -0700, <adaworks@sbcglobal.net> wrote:

>
> "LR" <lruss@superlink.net> wrote in message
> news:4527ebb2$0$25792$cc2e38e6@news.uslec.net...
>> adaworks@sbcglobal.net wrote:
>>>
>>> The initialization of a scalar with a value that could be interpreted
>>> as correct at run-time, if it becomes a kind of default value, may
>>> cause more run-time errors than if it is not initialized at all.  It  
>>> is not
>>> always possible to decide that a given initialization is better than no
>>> value at all.  The circumstances will vary, of course.
>>
>> I find this pretty confusing.  How can a variable have "no value at all"
>> unless you have some meta-data attached to the variable that indicates  
>> that it
>> hasn't been initialized or had a value assigned to it? Otherwise, I  
>> think the
>> bits will have some 'value'.  It may be a 'legal' value or 'not legal'  
>> but the
>> bits will indicate some value.  No?
>>
>> I get the feeling I'm missing something.
>>
>> Also, can you give an example where no initialization is better than
>> initialization?
>>
> Suppose I have a variable that I initialize to zero so my program
> can compile without warnings.   If my program is designed so I
> never have a method that updates that value, when the program
> tries to use that value, it turns out to be valid and there is no
> immediate error message.
>
> On the other hand, suppose I do not assign an initial value to that
> variable.   When I try to use it in my program, it will be an invalid
> value and the program will raise an exception.   It is often better
> for a program to fail to do anything than to do something that looks
> right but isn't.
>>
>>> Note that the body does not need to refer to the specification
>>> through #include mechanisms as one would with C++.  This
>>> is because the library unit is integral even though it can be
>>> separately compiled.
>>
>> I see pluses and minuses in that.
>>
>>>
>>> Each of the procedures and functions can also be compiled
>>> as separate files.   Once again, the Ada library model is
>>> designed so the entire library unit is treated as a single library
>>> unit, even though the compilation units can be in separate
>>> files.
>>
>> There has to be some underlying method for determining where the files  
>> are
>> though, right?  Is this implementation/platform dependent?
>>
> No.  This is not implementation dependent.   The specification for
> the Ada language demands that the compiler detect every inconsistency,
> even in separate compilation.
>
> Unlike C or C++, Ada library units must compile correctly before
> any dependent units can be compiled.   That is, where the #include
> is textual, the Ada equivalent is library based.   The existence of the
> library, along with the scope and visibility rules, ensure that no
> artifact of a large program will be ignored during the compilation
> of some dependent unit.
>
> In the early days of Ada, we made a lot of mistakes in design that
> led to excessively long compilation times.  I recall a program of
> 4.5 million lines where the compilations took almost two days. The
> computers were slower, but most of the slowness was due to our
> failure to understand correct design procedures.   Once we did
> understand those procedures, slowness due to dependency
> checking virtually vanished.
>>
>> [snip]
>>> In another reply, you indicate that you don't see the difference
>>> between lint assert and what SPARK does.
>>
>> I should have been more specific.  I don't see much of a difference,  
>> although
>> it seems to me that SPARK is kind of like these but stronger.
>>
> SPARK goes well beyond the simple assert model.   To begin with, it
> directly supports the notion of pre-, post-, and invariant conditions.  
> The
> post-condition model is especially powerful.   Eiffel also supports this
> as a dynamic (run-time) feature.
>
> SPARK also goes beyond simple assertion checking.   It includes a
> special program called the Examiner which performs static analysis
> of the entire set of programs.  In part, this is possible because of
> SPARK's use of Ada's library model, but it is also a function of
> the many kinds of assertions (including dependency assertions)
> the designer can include in the code.

What does that static analysis include?

>>
>> 'Are' violated, or 'can be' violated?  I feel a little confused here,  
>> is the
>> code that evaluates the SPARK assertions good enough to tell if the
>> constraints will be violated at run time?
>>
> A good question.   The PRAXIS people will tell you that the
> kind of static checking done by SPARK will eliminate any
> errors that can be checked by the SPARK Examiner.  This
> seems to be a very large number of kinds of errors.
>
> Even so, no one claims that a programmer, or software
> designer, will always specify every feature with perfect
> accuracy.  There is always room for some kind of error.
> All SPARK can do, and does do, is lower the probability
> of errors.  It seems to do that better than anything else currently
> available for software development.
>>
>>>
>>> The SPARK model is very close to a theorem-proving
>>> approach, although we still have a long way to go in software
>>> before we are really able to satisfy all the issues of theorem
>>> proving.
>>
>> I'd like to know about it when you can tell if my program will halt. ;)
>>
> The "halting problem" is still with us, as a problem in formal proofs.
> However, we usually find ways to avoid dealing with it in real
> software solutions.   I cannot think of the last time one of my
> programs failed to halt, even though I could not have provided
> a formal proof that it would.
>>>
>>> You asked why we would not do this with all software.  The
>>> answer is primarily economics.
>>
>> Of course.
>>
>> I'm also curious about the size of the programs that you've used these  
>> methods
>> for, and how long the compilation step takes.  Is there any overhead in  
>> the
>> executables that you create?
>>
> Whenever we leave exception-handling activated in a deployed program,
> there is a slight overhead.   Engineering is largely about trade-offs in
> design and deployment decisions.   An engineer is striving to create a
> product that abides by the "principle of least surprise."  SPARK and
> Ada are designed, to a large extent, to reduce surprise in a software
> product.
>
Exception handling IS an integral part of well-designed code.

> While we cannot eliminate surprise entirely in large-scale software
> products, we can reduce the incidence of surprise.  Further, we can
> also include "software circuit breakers" in our design in the form of
> exception handling routines.   Safety-critical software should not
> rely too heavily on exception handling, but no one would install
> electrical wiring in their home without considering the potential for
> a spike in the current that might burn down their home.
>
>>>
>>> For safety-critical software, the stakes are very high. Few
>>> other language designs will be adequate; not C++, not
>>> PL/I, not C, not Fortran, and not even all of Ada.   SPARK
>>> forbids the use of some Ada constructs because they cannot
>>> be confirmed as safe by the SPARK Examiner.
>>
>> Examples of this last part please?
>>
> Ada includes a model for concurrency.   The use of this
> feature cannot be proven correct for a complex system.  Also,
> dynamic binding (as in OOP) is a no-no for SPARK.  It is
> too dependent on unpredictable events (regardless of what
> OOP language one might use).   There are other features of
> Ada (and other languages) that cannot be proven with formal
> methods.   Anything that cannot be confirmed by SPARK is
> rejected by it.
>
> Even so, SPARK does nothing more than ensure that those
> things it can check are valid and safe.   If someone chooses
> to use unsafe constructs, they are on their own.
>>>
> I strongly recommend you check out the web site from
> PRAXIS.   You will get a lot more information from them,
> probably more accurate information than you will get from me.
> The PRAXIS people are continually improving their product,
> and I may be a little short on information regarding the latest
> advances in their pursuit for highly-reliable software.
>
> I do know that they devote their entire set of corporate
> resources to high-integrity software and that those organizations
> who have chosen to use SPARK are regularly contributing
> new ideas for even better dependability.   The safety-critical
> software community is fairly small relative to other parts of
> the software world, but they are dedicated to the constant
> improvement in tools that ensure the safety of the software
> products that fly people around the planet, control nuclear
> power-plants, control the switching mechanisms in rail
> transportation systems, and keep software-controlled
> medical devices working without failures.
>
> Richard Riehle
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/8/2006 12:54:32 PM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message
news:eg66n6$7b7$2@naig.caltech.edu...
> adaworks@sbcglobal.net wrote:
> (very large snip)
>
> > No programmer graduating from any computer science program
> > anywhere in the world would consider adopting a programming
> > language that fails to support the object model.
>
> I would say that most scientific programmers don't come
> from the computer science program, but from engineering
> and physical sciences.

And mathematics?  And Chemistry?

> PL/I by design included features from COBOL for
> the business community, and from Fortran for the
> scientific community.  The life cycle of scientific
> and engineering software is a little different from
> that of business or 'computer science' software.

How so?
Some commercial software runs for years and years.
So does some scientific software.
Look at scientific subroutine libraries.
Some are still around decades later.


0
robin_v (2737)
10/8/2006 2:24:19 PM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message
news:eg6jlb$cl8$1@naig.caltech.edu...
> LR <lruss@superlink.net> wrote:
>
> (I wrote)
>
> >> The life cycle of scientific
> >> and engineering software is a little different from
> >> that of business or 'computer science' software.
>
> > In what way?
>
> One is that speed is usually pretty important, so run time
> checks are usually reduced.
>
> Another is that many times, though not all, something
> is written to solve one problem and never used again.

And that doesn't happen with commercial work?


0
robin_v (2737)
10/8/2006 2:24:19 PM
"LR" <lruss@superlink.net> wrote in message
news:4526e5b5$0$25784$cc2e38e6@news.uslec.net...

> Does that assume that you're pointing to something that requires word
> alignment?  Does character data normally require word alignment in PL/I?

No.


0
robin_v (2737)
10/8/2006 2:24:20 PM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message
news:eg6kbb$cl8$2@naig.caltech.edu...
> LR <lruss@superlink.net> wrote:
>
> >> For pointers, it may or may not point outside the available
> >> addressing range.
>
> > NULL works well for this in C & C++.  I'm curious, is there a value in
> > PL/I for a pointer which will always be invalid?  If not, what do you do
> > about writing code that has to move between platforms?
>
> Well, NULL, which PL/I also has, tends to be a valid value
> when a pointer doesn't have anything to point to.

???

NULL and SYSNULL are the only valid values that a pointer
can contain when it is desired that the pointer have a value that does not
designate a legitimate data item.


0
robin_v (2737)
10/8/2006 2:24:20 PM
"Tom Linden" <tom@kednos-remove.com> wrote in message
news:op.tg0yygo1tte90l@hyrrokkin...
>
> In PL/I there is a builtin function null() which returns the value of the
> null pointer, which may not be zero.  This is implementation defined.

True, just like the format of a FLOAT is implementation defined.
The programmer has no need of that internal value.


0
robin_v (2737)
10/8/2006 2:24:21 PM
<adaworks@sbcglobal.net> wrote in message
news:CCDVg.8029$TV3.5843@newssvr21.news.prodigy.com...

> I indicated earlier that language design choices need to be
> made on the basis of criteria relevant to the problem one
> is trying to solve.   One of the primary criteria for the
> environment in which I work is dependability.  At present,
> the most powerful language toolset to satisfy the need
> for high-integrity, highly dependable software is called
> SPARK, not C++, not PL/I.   It is a niche language,
> to be sure.  One would not use SPARK for pedestrian
> projects such as business data processing.  However,
> there is currently no language model better suited to
> the creation of safety-critical software.

Other than, of course, PL/I.


0
robin_v (2737)
10/8/2006 2:24:21 PM
"robin" <robin_v@bigpond.com> wrote in message 
news:pS7Wg.43679$rP1.16736@news-server.bigpond.net.au...
>
> <adaworks@sbcglobal.net> wrote in message
> news:CCDVg.8029$TV3.5843@newssvr21.news.prodigy.com...
>
>> I indicated earlier that language design choices need to be
>> made on the basis of criteria relevant to the problem one
>> is trying to solve.   One of the primary criteria for the
>> environment in which I work is dependability.  At present,
>> the most powerful language toolset to satisfy the need
>> for high-integrity, highly dependable software is called
>> SPARK, not C++, not PL/I.   It is a niche language,
>> to be sure.  One would not use SPARK for pedestrian
>> projects such as business data processing.  However,
>> there is currently no language model better suited to
>> the creation of safety-critical software.
>
> Other than, of course, PL/I.
>
Sorry Robin, but in this case PL/I does not even run
a close second.  Before you object, you need to study
this issue.  I am quite certain you know little or nothing
about SPARK.  I am not even sure how well-prepared
you are in the topic of formal methods.

Of course, you are welcome to object, but be sure you
know whereof you speak by getting informed first.

Richard
>
> 


0
adaworks2 (748)
10/8/2006 5:16:14 PM
"Tom Linden" <tom@kednos-remove.com> wrote in message 
news:op.tg3r0giitte90l@hyrrokkin...
On Sat, 07 Oct 2006 19:33:02 -0700, <adaworks@sbcglobal.net> wrote:

>
> "John W. Kennedy" <jwkenne@attglobal.net> wrote in message
> news:G2XVg.69$Ii5.22@newsfe10.lga...
>> LR wrote:
>>> adaworks@sbcglobal.net wrote:
>>>
>>>> "LR" <lruss@superlink.net> wrote in message
>>>> news:4526ccb8$0$25774$cc2e38e6@news.uslec.net...
>>>>
>>>>> I've always wondered though, what happens if you specify something  like
>>>>> bin(1000) fixed, on a machine whose largest native fixed type is 32  bits?
>>>>
>>>> Some languages such as Scheme, Smalltalk, Python, and many others
>>>> allow the programmer to have numeric values of any size they want, and
>>>> to do arithmetic on them.  Consider,
>>>>
>>>> x := 15415112987195719571729512219305 /
>>>> 1412989375160325123512512612740571975923571925791
>>>>
>>>> This would evaluate just fine in some of the languages named.  The
>>>> numbers are not implemented simply on the basis of the underlying
>>>> word size of the machine.
>>>
>>> I'm aware of this, but I was asking particularly about PL/I.
>>>
>>> But since you've raised the issue, how for example, are irrational 
>>> constants
>>> represented/stored in these languages?
>>
>> Irrationals are normally restricted to floating-point, and to a certain
>> precision.
>>
> We need to be a little careful about terminology.  The numbers I showed
> were rational numbers (i.e., based on a ratio model).     Many languages
> have built in support for rational numbers in the form of fractions.  For
> example, in Scheme I can add the following quite easily (not Scheme
> syntax),
>
>          (1/4 + 5/17 + 3/93) * (827 / 515/359)
>
> will give a fractional result, not a decimal fraction.

Do you mean it determines the common denominator? Does it reduce it?
Interesting, but not useful.
>
Actually, this can be quite useful in a lot of mathematical problems.
The rational number (e.g., 5/17, 25/57, etc.) is never converted to
a floating point value.   This reduces the loss of accuracy due to
frequent conversions in a long mathematical problem that involves
a lot of rational numbers with a denominator other than 1.

And yes, continual reduction and computation along the GCD
model is part of the solution space.   The downside of this can
be the additional time involved in making these computations.
The upside is the constancy of the accuracy of those computations.
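
For instance, in Python (the fractions module here is only a stand-in for the
kind of exact rational arithmetic being described):

    from fractions import Fraction

    total = Fraction(1, 4) + Fraction(5, 17) + Fraction(3, 93)
    print(total)          # 1215/2108, already reduced; no rounding occurred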
>
> Richard Riehle




-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/ 


0
adaworks2 (748)
10/8/2006 5:24:29 PM
adaworks@sbcglobal.net wrote:

> "LR" <lruss@superlink.net> wrote in message 
> news:4527ebb2$0$25792$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>>The initialization of a scalar with a value that could be interpreted
>>>as correct at run-time, if it becomes a kind of default value, may
>>>cause more run-time errors than if it is not initialized at all.  It is not
>>>always possible to decide that a given initialization is better than no
>>>value at all.  The circumstances will vary, of course.
>>
>>I find this pretty confusing.  How can a variable have "no value at all" 
>>unless you have some meta-data attached to the variable that indicates that it 
>>hasn't been initialized or had a value assigned to it? Otherwise, I think the 
>>bits will have some 'value'.  It may be a 'legal' value or 'not legal' but the 
>>bits will indicate some value.  No?
>>
>>I get the feeling I'm missing something.
>>
>>Also, can you give an example where no initialization is better than 
>>initialization?
>>
> 
> Suppose I have a variable that I initialize to zero so my program
> can compile without warnings.   If my program is designed so I
> never have a method that updates that value, when the program
> tries to use that value, it turns out to be valid and there is no
> immediate error message.
> 
> On the other hand, suppose I do not assign an initial value to that
> variable.   When I try to use it in my program, it will be an invalid
> value and the program will raise an exception.   It is often better
> for a program to fail to do anything than to do something that looks
> right but isn't.

So in SPARK, if a variable isn't initialized it gets an invalid value? 
Suppose the variable in question is just a plain old binary integer type 
(or whatever that is in SPARK) that can contain any value.  Or does 
SPARK not allow these kinds of variables?  Then there would still have 
to be some meta-data that says that it's not initialized, right?

BTW, does, or maybe I should say, could, SPARK run on ones-complement 
machines?




>>There has to be some underlying method for determining where the files are 
>>though, right?  Is this implementation/platform dependent?
>>
> 
> No.  This is not implementation dependent.   The specification for
> the Ada language demands that the compiler detect every inconsistency,
> even in separate compilation.

How does an Ada/SPARK compiler tell where the files that contain, say, other 
functions are?

> 
> Unlike C or C++, Ada library units must compile correctly before
> any dependent units can be compiled.   That is, where the #include
> is textual, the Ada equivalent is library based.   The existence of the
> library, along with the scope and visibility rules, ensure that no
> artifact of a large program will be ignored during the compilation
> of some dependent unit.

Then the compiler has to have some way of figuring out where the 
'library' is?



>>[snip]
>>
>>>In another reply, you indicate that you don't see the difference
>>>between lint assert and what SPARK does.
>>
>>I should have been more specific.  I don't see much of a difference, although 
>>it seems to me that SPARK is kind of like these but stronger.
>>
> 
> SPARK goes well beyond the simple assert model.   To begin with, it
> directly supports the notion of pre-, post-, and invariant conditions. The
> post-condition model is especially powerful.   Eiffel also supports this
> as a dynamic (run-time) feature.

By directly supports, do you mean supports with syntax in the language?

> 
> SPARK also goes beyond simple assertion checking.   It includes a
> special program called the Examiner which performs static analysis
> of the entire set of programs.  In part, this is possible because of
> SPARK's use of Ada's library model, but it is also a function of
> the many kinds of assertions (including dependency assertions)
> the designer can include in the code.
> 
>>'Are' violated, or 'can be' violated?  I feel a little confused here, is the 
>>code that evaluates the SPARK assertions good enough to tell if the 
>>constraints will be violated at run time?
>>
> 
> A good question.   The PRAXIS people will tell you that the
> kind of static checking done by SPARK will eliminate any
> errors that can be checked by the SPARK Examiner.  This
> seems to be a very large number of kinds of errors.

Heh.  When I hear stuff like that you'll forgive me if I become a little 
suspicious.


> 
> Even so, no one claims that a programmer, or software
> designer, will always specify every feature with perfect
> accuracy.  There is always room for some kind of error.

What I said!



>>>The SPARK model is very close to a theorem-proving
>>>approach, although we still have a long way to go in software
>>>before we are really able to satisfy all the issues of theorem
>>>proving.
>>
>>I'd like to know about it when you can tell if my program will halt. ;)
>>
> 
> The "halting problem" is still with us, as a problem in formal proofs.
> However, we usually find ways to avoid dealing with it in real
> software solutions.   I cannot think of the last time one of my
> programs failed to halt, even though I could not have provided
> a formal proof that it would.

There's always, well, almost always, a power cord that can be pulled out 
of the wall.  What?  No wall?  No cord?  Then everybody panic.


> 
>>>You asked why we would not do this with all software.  The
>>>answer is primarily economics.
>>
>>Of course.
>>
>>I'm also curious about the size of the programs that you've used these methods 
>>for, and how long the compilation step takes.  Is there any overhead in the 
>>executables that you create?
>>
> 
> Whenever we leave exception-handling activated in a deployed program,
> there is a slight overhead.   Engineering is largely about trade-offs in
> design and deployment decisions.  

But programming, especially if you're selling the idea of correctness 
and provability, isn't engineering.

 > An engineer is striving to create a
> product that abides by the "principle of least surprise."  


 > SPARK and
> Ada are designed, to a large extent, to reduce surprise in a software
> product.

 From what you're telling me, they seem to be designed to reduce 
surprise for the user of a programming language, not a software product, 
which might contain something like (in pseudo code).

if(Modulus(randomFunction(),2) equals zero) then
	print "Surprise, Surprise, Surprise."
endif

Which I think will function as perfectly as SPARK or Ada, or maybe any 
other language, will make it, and will in fact provide a surprise about 
50% of the time for the user of the software product.



> 
> While we cannot eliminate surprise entirely in large-scale software
> products, we can reduce the incidence of surprise.  Further, we can
> also include "software circuit breakers" in our design in the form of
> exception handling routines.   Safety-critical software should not
> rely too heavily on exception handling, but no one would install
> electrical wiring in their home without considering the potential for
> a spike in the current that might burn down their home.

Can you please provide an example of a software circuit breaker?




> I strongly recommend you check out the web site from
> PRAXIS.   

As I told you, I took a look at their site, but it seems mostly for 
sales and not much in the way of info.  Maybe I missed it?  Suggested 
links are welcome.


> The PRAXIS people are continually improving their product,

That provability thing will provide a lifetime or two of fun development 
questions and entertainment for them, I'm sure.


> I do know that they devote their entire set of corporate
> resources to high-integrity software and that those organizations
> who have chosen to use SPARK are regularly contributing
> new ideas for even better dependability.   The safety-critical
> software community is fairly small relative to other parts of
> the software world, but they are dedicated to the constant
> improvement in tools that ensure the safety of the software
> products that fly people around the planet, control nuclear
> power-plants, control the switching mechanisms in rail
> transportation systems, and keep software-controlled
> medical devices working without failures.

No mention of automobiles?
0
lruss (582)
10/8/2006 5:38:33 PM
I have received quite a few questions about SPARK
since I first raised the issue in this forum.

SPARK is an approach to creating safety-critical software
largely based on the application of formal methods (formal
mathematical methods) to achieve a higher level of software
dependability than one might expect from more traditional
approaches to this problem.

So far, it seems to have succeeded quite well where it has
been used.  However, as I mentioned in a separate posting,
the economics of SPARK are such that it is not appropriate
for every kind of software project.

I am not an expert in SPARK, nor am I a spokesperson for
it.  My role is simply that of a developer and a user.  For a
more comprehensive look at this topic I am going to refer
you to:

             http://www.praxis-his.com/

The people at Praxis are quite helpful.   I am sure they can
refer you to up-to-date tutorials as well as good literature
on the subject.

A PL/I advocate need not feel threatened by SPARK.  The
vast majority of programs for which PL/I is used (as is also
the case for other languages) do not fall into the niche domain
best suited to SPARK.

Also, I am not really interested in arguing the relative merits
of SPARK versus some other language unless you are well
informed about 1) formal methods, and 2) how SPARK
really works.

I provide this message for informational purposes, not as a
troll, an invitation to argument, nor a universal endorsement
of SPARK.    It is simply a tool that is good for some kinds
of software development, not _the_ tool that will solve
all of our problems in the world of software development. At
present, there is no such tool.

I am cc'ing the Praxis folks in case anyone from there would
want to comment.

Richard Riehle



0
adaworks2 (748)
10/8/2006 5:40:33 PM
In <45267d5e$0$25792$cc2e38e6@news.uslec.net>, on 10/06/2006
   at 12:00 PM, LR <lruss@superlink.net> said:

>Does Ada support a separate compilation model?

Yes. Not that comp.lang.pl1 is the proper place to debate the relative
merits of Ada and C++!

>Interlanguage programming?

Yes.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/9/2006 12:07:32 AM
In <452847e6$0$25778$cc2e38e6@news.uslec.net>, on 10/07/2006
   at 08:36 PM, LR <lruss@superlink.net> said:

>And speaking of standards where is the standard for Java, or when was
> the standard for PL/I last updated?

Which PL/I? ANSI updated the G subset but refused to update the full
language.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/9/2006 12:11:05 AM
In <6uZVg.3040$NE6.2582@newssvr11.news.prodigy.com>, on 10/08/2006
   at 02:35 AM, <adaworks@sbcglobal.net> said:

>Please expand on this reply.   Is there an operational version of
>PL/I that now supports object-oriented programming?

I'm not sure what is deployed, but IBM did a proof of concept and the
user community expressed a good deal of interest. There may be a lack
of resources, but I've seen no sign of resistance.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/9/2006 12:13:51 AM
In <D7vVg.2266$NE6.342@newssvr11.news.prodigy.com>, on 10/06/2006
   at 04:03 PM, <adaworks@sbcglobal.net> said:

>First,  thanks for all the replies.   Note that I never said that
>PL/I could not accomplish the equivalent of what I posted.   In fact,
>I suggested that Robin would have
>a good solution and invited him to show it to us.

The problem is that DF keeps throwing out bogus challenges, often with
incorrect FORTRAN code; after a while it becomes obvious that there is
no point in answering them, or even in seeing his articles. He's in my
twit filter because his sole purpose seems to be to disrupt news
groups devoted to languages other than FORTRAN.

BTW, don't blame the other FORTRAN users for his nonsense. Some of
them are just as turned off by it as we (TINW) are.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/9/2006 12:16:09 AM
In <eg6kbb$cl8$2@naig.caltech.edu>, on 10/06/2006
   at 10:11 PM, glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu>
said:

>Note that Intel processors reserve segment selector zero as the null
>segment selector.  That is, hardware support for a null pointer.

That's a software convention, available on any machine with
segmentation support. In fact, you can do the same thing on any
machine with paging support by having the OS reserve one page as always
invalid.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/9/2006 12:19:05 AM
In <4526ca04$0$25793$cc2e38e6@news.uslec.net>, on 10/06/2006
   at 05:27 PM, LR <lruss@superlink.net> said:

>NULL works well for this in C & C++.  I'm curious, is there a value
>in  PL/I for a pointer which will always be invalid?

Yes, null. It does not, however, have a value for "uninitialized", and
it is common to initialize pointers to null.

>If not, what do you do 
>about writing code that has to move between platforms?

You use null.

Now, if you're asking whether there is a numeric value that can be
converted to a pointer and guaranteed to be invalid, I would say that
not only is there no such numeric value but that attempting to assign
numeric values to pointers is a sin for which there is no forgiveness.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/9/2006 12:22:54 AM
In <eg67qq$7b7$4@naig.caltech.edu>, on 10/06/2006
   at 06:38 PM, glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu>
said:

>I am pretty sure PL/I now has enumerations, but
>I don't believe it did originally.

Correct on both.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/9/2006 12:23:49 AM
On Sun, 08 Oct 2006 10:40:33 -0700, <adaworks@sbcglobal.net> wrote:

>
> I have received quite a few questions about SPARK
> since I first raised the issue in this forum.
>
> SPARK is an approach to creating safety-critical software
> largely based on the application of formal methods (formal
> mathematical methods) to achieve a higher level of software
> dependability than one might expect from more traditional
> approaches to this problem.
>
> So far, it seems to have succeeded quite well where it has
> been used.  However, as I mentioned in a separate posting,
> the economics of SPARK are such that it is not appropriate
> for every kind of software project.
>
> I am not an expert in SPARK, nor am I a spokesperson for
> it.  My role is simply that of a developer and a user.  For a
> more comprehensive look at this topic I am going to refer
> you to:
>
>              http://www.praxis-his.com/
>
> The people at Praxis are quite helpful.   I am sure they can
> refer you to up-to-date tutorials as well as good literature
> on the subject.
>
> A PL/I advocate need not feel threatened by SPARK.  The
> vast majority of programs for which PL/I is used (as is also
> the case for other languages) do not fall into the niche domain
> best suited to SPARK.
>
> Also, I am not really interested in arguing the relative merits
> of SPARK versus some other language unless you are well
> informed about 1) formal methods, and 2) how SPARK
> really works.
>
> I provide this message for informational purposes, not as a
> troll, an invitation to argument, nor a universal endorsement
> of SPARK.    It is simply a tool that is good for some kinds
> of software development, not _the_ tool that will solve
> all of our problems in the world of software development. At
> present, there is no such tool.
>
> I am cc'ing the Praxis folks in case anyone from there would
> want to comment.
>
> Richard Riehle
>
>

Richard, you didn't answer my question as to what is included in the static
analysis.




-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/9/2006 12:37:26 AM
On Sun, 08 Oct 2006 10:24:29 -0700, <adaworks@sbcglobal.net> wrote:

> Do you mean it determines the common denominator? Does it reduce it?
> Interesting, but not useful.
>>
> Actually, this can be quite useful in a lot of mathematical problems.
> The rational number (e.g., 5/17, 25/57, etc.) is never converted to
> a floating point value.   This reduces the loss of accuracy due to
> frequent conversions in a long mathematical problem that involves
> a lot of rational numbers with a denominator other than 1.
> And yes, continual reduction and computation along the GCD
> model is part of the solution space.   The downside of this can
> be the additional time involved in making these computations.
> The upside is the constancy of the accuracy of those computations.

You will need to demonstrate by example how this is useful.  Ultimately you
will need to output the result, and I doubt you would present it as
398562947593/892908359830.   Remember, the set of rationals forms a set of
measure 0 and is in the same cardinal class as the integers, Aleph 0.  Real
men use real numbers.  This strikes me at best as an onanistic
self-indulgence of no value.

The PL/I standard foresaw this by defining the precision of intermediate
results, to preserve accuracy; actual implementations may differ.
-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/9/2006 12:53:32 AM
Tom Linden wrote:

(snip on rational arithmetic)

> You will need to demonstrate by example how this is useful.  Ultimately you
> will need to output the result, and I doubt you would present it as
> 398562947593/892908359830.   Remember, the set of rationals forms a set of
> measure 0 and is in the same cardinal class as the integers, Aleph 0.  Real
> men use real numbers.  This strikes me at best as an onanistic
> self-indulgence of no value.

There are places it is useful and used, and overall I would say that
floating point is overused, but otherwise I agree.

Floating point was designed for quantities with a relative error.

Consider the weight of an object.  The weight of a proton can
be calculated to about 10 digits relative accuracy.  The weight
of a large object, say the earth, can be calculated to three or
four digits, pretty much no matter how hard you try.

Distance measured using an interferometer can probably be
calculated with an absolute error.  That is, the error might
not need to increase as the distance increases.  Often, though,
it is only needed to relative accuracy if it is combined with
other quantities with a relative error.  This is true for most
physical measurements, and most experimental science.

There are probably some pure math problems where rational
arithmetic makes sense.  Note that Mathematica can do it,
and does by default if the input is all integers.

-- glen

0
gah (12851)
10/9/2006 1:36:11 AM
Shmuel (Seymour J.) Metz wrote:

> In <eg6kbb$cl8$2@naig.caltech.edu>, on 10/06/2006
>    at 10:11 PM, glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu>
> said:

>>Note that Intel processors reserve segment selector zero as the null
>>segment selector.  That is, hardware support for a null pointer.

> That's a software convention, available on any machine with
> segmentation support. In fact, you can do the same thing on any
> machine with paging support by having the OS reserve on page as always
> invalid.

It could probably be done in software, but there is hardware support,
at least on the 80286.  I am pretty sure it is still there.
The hardware loads a segment descriptor matching the segment selector
in the segment register.  (Except in real mode.)  It is very convenient
to have a segment selector that doesn't load a segment descriptor, and
Intel supports that.  Others may require a dummy entry in the
descriptor table.

-- glen

0
gah (12851)
10/9/2006 1:42:26 AM
adaworks@sbcglobal.net wrote:

(snip on SPARK)

>>Other than, of course, PL/I.

> Sorry Robin, but in this case PL/I does not even run
> a close second.  Before you object, you need to study
> this issue.  I am quite certain you know little or nothing
> about SPARK.  I am not even sure how well-prepared
> you are in the topic of formal methods.

I don't know much about it, but I can understand the idea
of formal methods.

> Of course, you are welcome to object, but be sure you
> know whereof you speak by getting informed first.

My thought would probably be Java for second place.
Not that it is intended for formal model programming,
but its exception model is pretty strong.

-- glen

0
gah (12851)
10/9/2006 1:54:25 AM
robin wrote:

(snip)

>>Another is that many times, though not all, something
>>is written to solve one problem and never used again.

> And that doesn't happen with commercial work?

Depending on your definition of program.

For programs compiled, more or less, to machine code,
I would say it is relatively rare.  If you include interpreted
languages, such as Excel, then I would probably agree that
it happens often in commercial work, maybe too often.

-- glen

0
gah (12851)
10/9/2006 1:57:48 AM
Tom Linden wrote:
> 
> You will need to demonstrate by example how this is useful.  Ultimately you
> will need to output the result, and I doubt you would present it as
> 398562947593/892908359830.   Remember, the set of rationals forms a set of
> measure 0 and is in the same cardinal class as the integers, Aleph 0.  

This is true, but it is equally true that the rationals are a dense subset of 
the reals and suffice to approximate any real number to arbitrary accuracy.

> Real men use real numbers.  

You know not whereof you speak.

We have given names to a handful of irrational numbers such as pi, e, Euler's 
constant, n-th roots of integers that are not a perfect n-th power, and the like. 
  There are a handful (tens or hundreds of thousands or maybe even millions or 
billions, but still only a handful in the grand scheme of things) of formulas 
that have been rigorously proved to yield one of these values.  In fact, the set 
of all conceivable such formulas is countable, so its union with the rationals 
is also of measure 0.  But this is symbolic mathematics, not numerical 
calculation.

All actual numerical calculation is done using rational numbers carried to 
enough (or in all too many cases not enough) digits to achieve the required 
accuracy.

After all, every irrational number has an infinite, non repeating expansion in 
every base.  All actual numeric calculation is perforce done with a finite 
number (however large it may be) of digits regardless of the base or bases used. 
  Therefore all numbers used and intermediate and final results obtained in any 
numerical calculation are rational.  QED.
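
(One way to see this from Python: a stored floating-point value is itself an
exact rational, and fractions.Fraction will recover it.)

    from fractions import Fraction

    x = 0.1                    # stored as the nearest binary fraction, not exactly 1/10
    print(Fraction(x))         # the exact rational the hardware actually holds
    print(Fraction(x) == Fraction(1, 10))   # False: 1/10 has no finite binary expansion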
0
jjw (608)
10/9/2006 2:00:22 AM
robin wrote:

(snip)

>>I would say that most scientific programmers don't come
>>from the computer science program, but from engineering
>>and physical sciences.

> And mathematics?  And Chemistry?

Chemistry is part of the physical sciences.  For mathematics, it
is harder to say.  For pure math, I would say no.  Applied
math is often related to engineering or physical science.


>>PL/I by design included features from COBOL for
>>the business community, and from Fortran for the
>>scientific community.  The life cycle of scientific
>>and engineering software is a little different from
>>that of business or 'computer science' software.

> How so?
> Some commercial software runs for years and years.
> So does some scientific software.
> Look at scientific subroutine libraries.
> Some are still around decades later.

I meant it in terms of a complete program, not a subroutine
library.  Some scientific software runs unchanged for years,
and some commercial (business) software lasts for days.

(And again, I am not counting simple programs for interpreted
languages, like Excel spreadsheets.  I would count the Excel
program itself.)

-- glen

0
gah (12851)
10/9/2006 2:04:51 AM
LR wrote:

> glen herrmannsfeldt wrote:

(snip)

>> I recently found a bug in a large program written by an
>> experienced and knowledgable C++ programmer.  This program tries
>> to check every argument for being in range, and otherwise having
>> the right value.  At one point it does a recursive search through
>> what is supposed to be a binary tree, but hadn't actually been
>> allocated yet.  

> Was this something the experienced and knowledgeable C++ programmer had 
> tried to implement themselves when std::set and std::map are available 
> and waiting to be used?

I don't think so, but even if it was, that isn't a solution for
not initializing the pointer to the tree.  The problem occurred
when the tree hadn't been allocated, and the deallocate routine
was called, something like:

if(tree) deallocate_tree(tree);

-- glen

0
gah (12851)
10/9/2006 4:59:06 AM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote in 
message news:12ij8ipgm71md7a@corp.supernews.com...
> In <6uZVg.3040$NE6.2582@newssvr11.news.prodigy.com>, on 10/08/2006
>   at 02:35 AM, <adaworks@sbcglobal.net> said:
>
>>Please expand on this reply.   Is there an operational version of
>>PL/I that now supports object-oriented programming?
>
> I'm not sure what is deployed, but IBM did a proof of concept and the
> user community expressed a good deal of interest. There may be a lack
> of resources, but I've seen no sign of resistance.
>
Perhaps my use of the word resistance was a bit harsh.   I do think
PL/I would benefit from support of the object model.   I wonder
why IBM does not update the language to support OOP.

Richard Riehle 


0
adaworks2 (748)
10/9/2006 6:05:22 AM
"Tom Linden" <tom@kednos-remove.com> wrote in message 
news:op.tg4pcoyltte90l@hyrrokkin...
>
> Richard, you didn't answer my question as to what is included in the static
> analysis.
>
SPARK is built over a concept of formal assertions.
Those assertions come in a variety of constructs.
Using those assertions, SPARK will determine the
validity of related constructs throughout the code.

At the simplest level, this involves type-safety.  At the
more advanced levels, it goes deeper into the code
to determine whether a particular construct can ever
go awry at run-time.   This process is far more extensive
in SPARK than in any other development model now
in place.  It goes substantially beyond simple compile-time
checking.

Assertions of the elementary form such as pre-, post-, and
invariant conditions are one level of static checking.   Another
level involves the concept of weakest pre-conditions versus
corresponding post-conditions.   Assertions are also stated
in terms of formal methods, and those can also be statically
checked.
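
Purely as an illustration of the pre-/post-condition idea, here is a small
Python sketch; this is ordinary run-time assertion checking, not SPARK, whose
Examiner discharges such obligations statically:

    import math

    def integer_sqrt(n):
        assert n >= 0, "pre-condition: argument must be non-negative"
        r = math.floor(math.sqrt(n))   # floating sqrt is adequate here for modest n
        assert r * r <= n < (r + 1) * (r + 1), "post-condition: r is the integer square root"
        return r

    print(integer_sqrt(10))   # 3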

I recommend John Barnes' book, High Integrity Programming
Using SPARK.  It has a more detailed and in-depth presentation
of the underlying model.

Perhaps in a future posting, when I have more of my SPARK
materials readily at-hand, I can add some more details.   Meanwhile,
as I noted in my posting, I think you can get more information from
the PRAXIS people.   Perhaps one of them will comment on this
thread since I did cc them in my last posting.

Richard 


0
adaworks2 (748)
10/9/2006 6:19:17 AM
"Tom Linden" <tom@kednos-remove.com> wrote in message 
news:op.tg4p3isitte90l@hyrrokkin...
> On Sun, 08 Oct 2006 10:24:29 -0700, <adaworks@sbcglobal.net> wrote:
>
>> Do you mean it determines the common denominator? Does it reduce it?
>> Interesting, but not useful.
>>>
>> Actually, this can be quite useful in a lot of mathematical problems.
>> The rational number (e.g., 5/17, 25/57, etc.) is never converted to
>> a floating point value.   This reduces the loss of accuracy due to
>> frequent conversions in a long mathematical problem that involves
>> a lot of rational numbers with a denominator other than 1.
>> And yes, continual reduction and computation along the GCD
>> model is part of the solution space.   The downside of this can
>> be the additional time involved in making these computations.
>> The upside is the constancy of the accuracy of those computations.
>
> You will need to demonstrate by example how this is useful.  Ultimately you
> will need to output the result, and I doubt you would present it as
> 398562947593/892908359830.   Remember, the set of rationals forms a set of
> measure 0 and is in the same cardinal class as the integers, Aleph 0.  Real
> men use real numbers.  This strikes me at best as an onanistic
> self-indulgence of no value.
>
Wow!  A reference to Onan!   I would respectfully disagree that
this is the mathematical equivalent of onanism, but the concept is
rather delightful.

Consider a long series of rational numbers where the denominators
are other than 1.    As we perform an operation as simple as addition
on, say 30,000 of these, we want to avoid any kind of cumulative
drift from floating-point error.    When we continually add those
rational numbers without doing a floating-point conversion, we have
no errors due to rounding, truncation, or decisions about where to
stop the computation.    We can also calculate very large fractions
this way.
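
A quick way to see the effect from Python's standard library (fractions here
is merely a stand-in for the exact rational arithmetic being described, and
3,000 terms are used instead of 30,000 just to keep the demonstration quick):

    from fractions import Fraction

    denominators = range(3, 3003)
    exact = sum(Fraction(1, d) for d in denominators)   # no rounding at any step
    approx = sum(1.0 / d for d in denominators)         # binary floating point, rounds each step
    print(float(exact - Fraction(approx)))              # the accumulated drift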

This is not simply an academic issue.   In many kinds of data reduction
problems, scientific computations, and analyses of large volumes of
information collected from continual recording of data, this kind of
thing becomes quite useful.   It is also useful in systems where there
is no on-board floating-point co-processor (e.g., some space
applications, etc.).

Sometimes we have matrices that are composed entirely of rational
numbers with denominators other than 1.   This kind of thing is not
as uncommon as you might suppose.

As to presentation of the final solution,  it can always be converted
to a floating-point value if that is appropriate.  Sometimes, we would
rather use a graphical representation.   Other times, the unusual
rational number is simply used as input to yet another set of equations.

I realize this is not a common thing to do in business data processing,
but it is valuable in other kinds of computing.    That is why, as noted
in another post, MatLab also supports this capability.   Engineers like
the feature.  Accountants are not interested in it.   Systems programmers
find it strange, I suppose.

Richard Riehle 


0
adaworks2 (748)
10/9/2006 6:35:00 AM
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message 
news:PvWdnZpLl8sQNrTYnZ2dnUVZ_tudnZ2d@comcast.com...
> adaworks@sbcglobal.net wrote:
>
> (snip on SPARK)
>
>>>Other than, of course, PL/I.
>
>> Sorry Robin, but in this case PL/I does not even run
>> a close second.  Before you object, you need to study
>> this issue.  I am quite certain you know little or nothing
>> about SPARK.  I am not even sure how well-prepared
>> you are in the topic of formal methods.
>
> I don't know it so much, but I can understand the idea
> of formal methods.
>
>> Of course, you are welcome to object, but be sure you
>> know whereof you speak by getting informed first.
>
> My thought would probably be Java for second place.
> Not that it is intended for formal model programming,
> but its exception model is pretty strong.
>
I would probably put Eiffel slightly ahead of Java.  However,
there is a third-party product for Java called IContract that
does support assertions of the pre-, post-, and invariant
condition variety.   Even so, no implementation of Java
includes the level of support for formal methods found
in SPARK.

Richard 


0
adaworks2 (748)
10/9/2006 6:39:38 AM
adaworks@sbcglobal.net wrote:

> On the other hand, suppose I do not assign an initial value to that
> variable.   When I try to use it in my program, it will be an invalid
> value and the program will raise an exception.   It is often better
> for a program to fail to do anything than to do something that looks
> right but isn't.

But it might accidentally have a valid but wrong value.

For pointer variables there is a fairly low but non-zero
chance of an uninitialized variable pointing to a valid
address.  In one case I posted previously, I had a pointer that
accidentally pointed to a valid pointer that pointed
to a valid pointer, eventually reaching a pointer
loop more than 19000 pointers long.  The pointer
chain it was supposed to be pointing to had not yet
been allocated.

For integers there are values that are much less likely to
occur in real data and reasonably likely to cause problems
when used as data.

I also once had a program that accidentally depended on a variable
being initialized to a non-zero value.  I didn't discover the
bug until I ran it on a system that initialized to zero.

-- glen

0
gah (12851)
10/9/2006 7:37:48 AM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote in 
message news:12ij8n437tq8e0a@corp.supernews.com...
>
> The problem is that DF keeps throwing out bogus challenges,

In many cases these challenges are first posted in comp.lang.fortran by
others, and if I manage to post a solution there I usually SHARE it here as
edification to those that EVENTUALLY will have to migrate to a modern Fortran.

> often with incorrect FORTRAN code;

Nonsense;
my solution(s) invariably include source/test case/output, and are usually
made available online.   The arbitrary lists problem, FOR WHICH THERE IS NO
PL/I SOLUTION because there is no PL/I syntax for allocatable derived types,
is the latest example.

  http://home.earthlink.net/~dave_gemini/list1.f90

This illustrates AGAIN that PL/I is NOT more powerful than Fortran,
as this newsgroup claims.

> no point to answering them, or even in seeing his articles. He's in my
> twit filter because his sole purpose seems to be to disrupt news
> groups devoted to languages other than FORTRAN.
>

That's a misstatement; I haven't posted in a language newsgroup other than
Fortran or here in years.



0
dave_frank (2243)
10/9/2006 8:15:40 AM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:452895d7$0$3010$ec3e2dad@news.usenetmonster.com...
>
> "robin" <robin_v@bigpond.com> wrote in message
> news:7xtVg.42536$rP1.13239@news-server.bigpond.net.au...
> > "David Frank" <dave_frank@hotmail.com> wrote in message
> > news:45262687$0$3016$ec3e2dad@news.usenetmonster.com...
> >>
> >> integer(2) :: Int16
>
>  > No, this doesn't give you 16 bits in Fortran.
>
> It certainly does for those current compilers that support 16-bit integers.

You're not saying anything with that response, as you well know.
You have tried to push that line in comp.lang.fortran
and have been vigorously howled down.

The only way to ensure that a fatal compilation error
does not occur is to specify

    integer (kind = kind(32767)) :: int16

or an equivalent form.

But you will, of course, get a 16-bit integer if and only if
the compiler supports them.

Come back when you understand the rudiments of Fortran.


0
robin_v (2737)
10/9/2006 1:55:26 PM
glen herrmannsfeldt wrote:
> LR wrote:
> 
>> glen herrmannsfeldt wrote:
> 
> 
> (snip)
> 
>>> I recently found a bug in a large program written by an
>>> experienced and knowledgeable C++ programmer.  This program tries
>>> to check every argument for being in range, and otherwise having
>>> the right value.  At one point it does a recursive search through
>>> what is supposed to be a binary tree, but hadn't actually been
>>> allocated yet.  
> 
> 
>> Was this something the experienced and knowledgeable C++ programmer had 
>> tried to implement themselves when std::set and std::map are available 
>> and waiting to be used?
> 
> 
> I don't think so, but even if it was, that isn't a solution for
> not initializing the pointer to the tree.  The problem occurred
> when the tree hadn't been allocated, and the deallocate routine
> was called, something like:
> 
> if(tree) deallocate_tree(tree);

I'm going to take a wild guess that tree was a raw pointer, and not some 
smart pointer?  Yet the programmer was "experienced and knowledgeable"?

LR


0
lruss (582)
10/9/2006 4:30:31 PM
adaworks@sbcglobal.net wrote:

> "Tom Linden" <tom@kednos-remove.com> wrote in message 
> news:op.tg4pcoyltte90l@hyrrokkin...
> 
>>Richard, you didn't answer my question as to what is included in the static
>>analysis.
>>
> 
> SPARK is built over a concept of formal assertions.
> Those assertions come in a variety of constructs.
> Using those assertions, SPARK will determine the
> validity of related constructs throughout the code.
> 
> At the simplest level, this involves type-safety.  At the
> more advanced levels, it goes deeper into the code
> to determine whether a particular construct can ever
> go awry at run-time.   This process is far more extensive
> in SPARK than in any other development model now
> in place.  It goes substantially beyond simple compile-time
> checking.
> 
> Assertions of the elementary form such as pre-, post-, and
> invariant conditions are one level of static checking.   Another
> level involves the concept of weakest pre-conditions versus
> corresponding post-conditions.   Assertions are also stated
> in terms of formal methods, and those can also be statically
> checked.
> 
> I recommend John Barnes' book, High Integrity Programming
> Using SPARK.  It has a more detailed and in-depth presentation
> of the underlying model.
> 
> Perhaps in a future posting, when I have more of my SPARK
> materials readily at-hand, I can add some more details.   Meanwhile,
> as I noted in my posting, I think you can get more information from
> the PRAXIS people.   Perhaps one of them will comment on this
> thread since I did cc them in my last posting.


May I ask what kind of software development you do in SPARK.  IE, what 
is your application area?

Also, do you happen to know what SPARK is implemented in?

LR
0
lruss (582)
10/9/2006 4:32:20 PM
I just received an email from Rod Chapman, one of
the members of the SPARK team.   He indicates that
he will answer questions about SPARK if there are
any remaining.

Thanks to everyone for their interest.

Richard Riehle





0
adaworks2 (748)
10/9/2006 4:38:57 PM
"LR" <lruss@superlink.net> wrote in message 
news:45293768$0$25785$cc2e38e6@news.uslec.net...
> adaworks@sbcglobal.net wrote:
>
>> "LR" <lruss@superlink.net> wrote in message 
>> news:4527ebb2$0$25792$cc2e38e6@news.uslec.net...
>>
>>>adaworks@sbcglobal.net wrote:
>>>
>>>>The initialization of a scalar with a value that could be interpreted
>>>>as correct at run-time, if it becomes a kind of default value, may
>>>>cause more run-time errors than if it is not initialized at all.  It is not
>>>>always possible to decide that a given initialization is better than no
>>>>value at all.  The circumstances will vary, of course.
>>>
>>>I find this pretty confusing.  How can a variable have "no value at all" 
>>>unless you have some meta-data attached to the variable that indicates that 
>>>it hasn't been initialized or had a value assigned to it? Otherwise, I think 
>>>the bits will have some 'value'.  It may be a 'legal' value or 'not legal' 
>>>but the bits will indicate some value.  No?
>>>
>>>I get the feeling I'm missing something.
>>>
>>>Also, can you give an example where no initialization is better than 
>>>initialization?
>>>
>>
>> Suppose I have a variable that I initialize to zero so my program
>> can compile without warnings.   If my program is designed so I
>> never have a method that updates that value, when the program
>> tries to use that value, it turns out to be valid and there is no
>> immediate error message.
>>
>> On the other hand, suppose I do not assign an initial value to that
>> variable.   When I try to use it in my program, it will be an invalid
>> value and the program will raise an exception.   It is often better
>> for a program to fail to do anything than to do something that looks
>> right but isn't.
>
> So in SPARK, if a variable isn't initialized it gets an invalid value? Suppose 
> the variable in question is just a plain old binary integer type (or whatever 
> that is in SPARK) that can contain any value.  Or does SPARK not allow these 
> kinds of variables?  Then there would still have to be some meta-data that 
> says that it's not initialized, right?
>
SPARK ensures that, at run-time, every scalar will have a value
that conforms to the invariant given for that value.

> BTW, does, or maybe I should say, could, SPARK run on ones-complement 
> machines?
>
If any other kind of program will run on that machine, there is no
reason why a program using SPARK cannot be compiled and linked
for it as well.
>
>
> How does an Ada/SPARK compiler tell where files that contain say, other 
> functions are?
>
Ada, the underlying language engine for SPARK, is designed to compile
using a library model.   When an Ada program is dependent on another
library unit, that unit must have already successfully compiled and reside
in the library.  All the entities of that library unit(s) are in scope for the
compiler.   However, being in scope is not enough.   Ada also has a
strong model of visibility.   If any entity from some other library unit
is referenced in any dependent library unit, the programmer must make
it directly visible before that unit can be compiled.  In this way, the
compiler has a direct reference to that entity, and it can easily be
checked for other kinds of validity vis a vis the places in the dependent
unit where it is being used.
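
As a rough sketch of that compilation and visibility model (hypothetical
unit name; plain Ada, nothing SPARK-specific):

   with Ada.Text_IO;   -- Ada.Text_IO must already be compiled into the library
   procedure Show_Total is
   begin
      -- Referenced here by its expanded name; a "use" clause would make
      -- the entities of Ada.Text_IO directly visible instead.
      Ada.Text_IO.Put_Line ("total computed");
   end Show_Total;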
>
> Then the compiler has to have some way of figuring out where the 'library' is?
>
Yes, this is a part of the language design.   It is a requirement for any
Ada compiler and ensures that every visible entity is properly handled
and evaluated at compile time.    An entity that is in scope and not
visible is not a problem since the dependent unit can neither alter nor
inspect that entity.  It has no effect on the dependent unit.
>
>>
>> SPARK goes well beyond the simple assert model.   To begin with, it
>> directly supports the notion of pre-, post-, and invariant conditions. The
>> post-condition model is especially powerful.   Eiffel also supports this
>> as a dynamic (run-time) feature.
>
> By directly supports, do you mean supports with syntax in the language?
>
Yes.  Eiffel is one of the few languages that is deliberately designed to
support a programming model built over pre-, post-, and invariant assertions.
>>
>>
>> Whenever we leave exception-handling activated in a deployed program,
>> there is a slight overhead.   Engineering is largely about trade-offs in
>> design and deployment decisions.
>
> But programming, especially if you're selling the idea of correctness, and 
> proveability, isn't engineering.
>
The creation of highly-reliable software is an engineering problem.  In
designing that kind of software, the programmers and software engineers
are part of a larger engineering team.   That team often includes electrical
engineers, mechanical engineers, avionics engineers, chemical (combustion)
engineers, and many others.

The software members of that team are required to adopt an engineering
attitude toward their own contribution.  It is not appropriate to take an
attitude that, because software is difficult to engineer, we should throw
up our hands and abandon attempts to apply engineering principles and
practices.   Rather, that very difficulty demands that we pursue an
engineering approach even more vigorously.   Remember that, for safety
critical software, people's lives and personal safety are often at stake.

Although the state of software engineering is not yet as fully developed
as other kinds of engineering, we are required to take an engineering
approach to the development of requirements, testing, design, and
implementation of our software products when people might be killed
or maimed by our final result.
>
> if(Modulus(randomFunction(),2) equals zero) then
> print "Surprise, Surprise, Surprise."
> endif
>
> Which I think will function as perfectly as SPARK or ADA, or maybe any other 
> language will make it, and will in fact provide a surprise about 50% of the 
> time for the user of the software product.
>
For the result from a random() function, one must take some care in
using that result.   This is also true of input from an outside source
that might be in error.   This is probably one of the easiest things
to prevent.

In assertion-based programming, we establish, as part of the requirements
process, the legal upper and lower bounds for any value.  There are many
ways to do this.   In fact, this feature has already been included in the UML
Object Constraint Language (OCL).

Here is a pseudo-code example of how that might look in one of several
languages that support invariants.

            type Number is ...
            invariant (number) > 34 and < 67 and (not = 0)

This is a very simple example.   The invariant will apply throughout
the program wherever an instance of number is declared or used.

In Ada, this might have been coded as,

           type Number is range 34 .. 67;

where the test for not = Zero is not shown.  In SPARK, the
test for Zero would be more easily included.
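
As a rough sketch of how such an exclusion can be written on a numeric
subtype, here is a hedged example using a later (Ada 2012) predicate aspect,
on a hypothetical range that actually spans zero; it is not the SPARK
notation discussed above:

   --  Hypothetical subtype: the range spans zero, the predicate excludes it.
   subtype Offset is Integer range -67 .. 67
      with Dynamic_Predicate => Offset /= 0;

With assertion checks enabled, assigning zero to an Offset raises
Assertion_Error at run time.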
>
> Can you please provide an example of a software circuit breaker?
>
I wrote an article for the Journal of Object-oriented Programming
some years ago about the notion of a software circuit-breaker.  In
software, this is akin to exception handling.  I made up the term
software circuit-breaker to demonstrate the parallels between the
physical world of fuses and circuit-breakers and the soft world of
computer programs where we could do something similar.
>
>
>> I do know that they devote their entire set of corporate
>> resources to high-integrity software and that those organizations
>> who have chosen to use SPARK are regularly contributing
>> new ideas for even better dependability.   The safety-critical
>> software community is fairly small relative to other parts of
>> the software world, but they are dedicated to the constant
>> improvement in tools that ensure the safety of the software
>> products that fly people around the planet, control nuclear
>> power-plants, control the switching mechanisms in rail
>> transportation systems, and keep software-controlled
>> medical devices working without failures.
>
> No mention of automobiles?
>
The absence of an application domain in my list does not
mean that such a domain is unimportant.   MISRA, the
Motor Industry Software Reliability Association, does
have an assessment of software tools and languages that
it recommends as appropriate for high-integrity software.
>
Richard Riehle 


0
adaworks2 (748)
10/9/2006 5:16:05 PM
James J. Weinkam <jjw@cs.sfu.ca> wrote:
 
> After all, every irrational number has an infinite, non repeating 
> expansion in every base.  All actual numeric calculation is perforce 
> done with a finite number (however large it may be) of digits 
> regardless of the base or bases used. 

Well, only for every rational base.  If you write e in base e, or
pi in base pi, then the expansion terminates.

I thought once about having one bit in a floating point format
to represent a factor of pi, such that multiples of pi could
be exactly represented.  

There is a story of a professor writing a number on the board
and asking the students what number it was.  The first answer is
e**pi, the second pi**e, the third pi**e times some constant.

OK, it isn't that funny, but it seems applicable here.

-- glen
0
gah1 (524)
10/9/2006 5:18:51 PM
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message 
news:GeednfFAyvaVYbTYnZ2dnUVZ_rWdnZ2d@comcast.com...
> adaworks@sbcglobal.net wrote:
>  For integers there are values that are much less likely to
> occur in real data and reasonably likely to cause problems
> when used as data.
>
Your point is very good.   This is another reason why invariants,
at the type specification level, are so important.   For example,

       type My_Float is digits 7 range 200.0 .. 800.0;

will guarantee that no instance of My_Float will be allowed to
stray beyond the bounds of 200.0 and 800.0.   An initialization
such as,
              x : My_Float := 0.0;

will fail at compile-time.   At run-time, it will raise a constraint
exception.
>
> I also once had a program that accidentally depended on a variable
> being initialized to a non-zero value.  I didn't discover the
> bug until I ran it on a system that initialized to zero.
>
A SPARK assert statement would have prevented this.

Richard Riehle 


0
adaworks2 (748)
10/9/2006 5:21:22 PM
adaworks@sbcglobal.net wrote:
 
> Consider a long series of rational numbers where the denominators
> are other than 1.    As we perform an operation as simple as addition
> on, say 30,000 of these, we want to avoid any kind of cumulative
> drift from floating-point error.    When we continually add those
> rational numbers without doing a floating-point conversion, we have
> no errors due to rounding, truncation, or decisions about where to
> stop the computation.    We can also calculate very large fractions
> this way.

Mathematica will do that for you by default if the inputs to
an operation are not 'machine precision' numbers.  If the inputs
are Integer (Mathematica capitalizes the first letter of all
system defined symbols, including Pi and E.) the computation
will be done in rational arithmetic.  Even in calculating the
determinant or inverse of a very large matrix.

> Sometimes we have matrices that are composed entirely of rational
> numbers with denominators other than 1.   This kind of thing is not
> as uncommon as you might suppose.
 
> As to presentation of the final solution,  it can always be converted
> to a floating-point value if that is appropriate.  Sometimes, we would
> rather use a graphical representation.   Other times, the unusual
> rational number is simply used as input to yet another set of equations.

If you are going to convert to floating point in the end, you might
as well do the whole calculation in arbitrary precision floating point?

-- glen
0
gah1 (524)
10/9/2006 5:32:18 PM
glen herrmannsfeldt wrote:
> adaworks@sbcglobal.net wrote:
> [...]  
> 
>>As to presentation of the final solution,  it can always be converted
>>to a floating-point value if that is appropriate.  Sometimes, we would
>>rather use a graphical representation.   Other times, the unusual
>>rational number is simply used as input to yet another set of equations.
> 
> 
> If you are going to convert to floating point in the end, you might
> as well do the whole calculation in arbitrary precision floating point?
> 
> -- glen

Doing the whole computation using rational numbers rather than arbitrary 
precision floating point eliminates rounding errors and propagation 
thereof in intermediate results.  I've worked on a few programs where an 
extra (least significant) digit resulted in a significant, double-digit 
percentage change to the computed result which would certainly have 
shown up in the arbitrary precision floating point result conversion  -- 
and those weren't particularly long computations.  Propagated rounding 
errors are not limited to the order of magnitude of the least 
significant digit.


Bob Lidral
lidral  at  alum  dot  mit  dot  edu
0
10/9/2006 7:25:36 PM
Bob Lidral <l1dralspamba1t@comcast.net> wrote:
(I wrote)
 
>> If you are going to convert to floating point in the end, you might
>> as well do the whole calculation in arbitrary precision floating point?
 
> Doing the whole computation using rational numbers rather than arbitrary 
> precision floating point eliminates rounding errors and propagation 
> thereof in intermediate results.  I've worked on a few programs where an 
> extra (least significant) digit resulted in a significant, double-digit 
> percentage change to the computed result which would certainly have 
> shown up in the arbitrary precision floating point result conversion  -- 
> and those weren't particularly long computations.  Propagated rounding 
> errors are not limited to the order of magnitude of the least 
> significant digit.

OK, the problem is that you don't know in advance how much
precision to use in the calculation to get the desired precision
in the result.  I believe in some cases Mathematica determines
this at run time, possibly iteratively.  (Try the calculation,
test for the result precision, try again.)  In other cases, it
might be able to do the precision calculation symbolically.

I have seen Mathematica try to do something like a least squares
fit using rational arithmetic.  Even more than that, it puts symbolic
square roots in, too.  It gets very slow for large n, though.

-- glen
0
gah1 (524)
10/9/2006 7:37:26 PM
adaworks@sbcglobal.net wrote:

> "LR" <lruss@superlink.net> wrote in message 
> news:45293768$0$25785$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>
>>>"LR" <lruss@superlink.net> wrote in message 
>>>news:4527ebb2$0$25792$cc2e38e6@news.uslec.net...
>>>
> 




>>>Whenever we leave exception-handling activated in a deployed program,
>>>there is a slight overhead.   Engineering is largely about trade-offs in
>>>design and deployment decisions.
>>
>>But programming, especially if you're selling the idea of correctness, and 
>>proveability, isn't engineering.
>>
> 
> The creation of highly-reliable software is an engineering problem.  

I'm fascinated.  Would you please tell me how you define 'software' and 
'engineering'?


> The software members of that team are required to adopt an engineering
> attitude toward their own contribution.  

To have an "engineering attitude", whatever that might mean, is not to 
be an engineer and not to be engineering.

 > It is not appropriate to take an
> attitude that, because software is difficult to engineer, we should throw
> up our hands and abandon attempts to apply engineering principles and
> practices.   

No, not at all, but it's not a question of difficulty.  It's simply a 
question of the nature of software and the nature of engineering.



> Rather, that very difficulty demands that we pursue an
> engineering approach even more vigorously.   

An "engineering approach", whatever that means, is not engineering.


 > Remember that, for safety
> critical software, people's lives and personal safety are often at stake.

I see.  Well then, let's have the Engineering Good Fairy tell us to feel 
better, because <POOF>  now we're engineers, if we have the attitude and 
the approach.  No, sorry, won't work.  There is no Engineering Good 
Fairy.  You're still stuck with an inability to change the nature of the 
universe through wishful thinking.  You'll have to depend on a 
legislature for that.  No worries, they're good at that kind of thing.


> 
> Although the state of software engineering is not yet as fully developed
> as other kinds of engineering, we are required to take an engineering
> approach to the development of requirements, testing, design, and
> implementation of our software products when people might be killed
> or maimed by our final result.

Be as rigorous as you desire.  I will await your definitions of both 
'software' and 'engineering'.


> 
>>if(Modulus(randomFunction(),2) equals zero) then
>>print "Surprise, Surprise, Surprise."
>>endif
>>
>>Which I think will function as perfectly as SPARK or ADA, or maybe any other 
>>language will make it, and will in fact provide a surprise about 50% of the 
>>time for the user of the software product.
>>
> 
> For the result from a random() function, one must take some care in
> using that result.   This is also true of input from an outside source
> that might be in error.   This is probably one of the easiest things
> to prevent.

But you might not want to prevent it, that was my point.  Surprise for 
the user of a computer language is one thing, surprise for the user of a 
software product _might_ in some circumstances be desirable.


> 
> In assertion-based programming, we establish, as part of the requirements
> process, the legal upper and lower bounds for any value.  There are many
> ways to do this.   In fact, this feature has already been included in the UML
> Object Constraint Language (OCL).
> 
> Here is a pseudo-code example of how that might look in one of several
> languages that support invariants.
> 
>             type Number is ...
>             invariant (number) > 34 and < 67 and (not = 0)
> 
> This is a very simple example.   The invariant will apply throughout
> the program wherever an instance of number is declared or used.
> 
> In Ada, this might have been coded as,
> 
>            type Number is range 34 .. 67;
> 
> where the test for not = Zero is not shown.  In SPARK, the
> test for Zero would be more easily included.

What has this to do with the result of the function being pseudo-random 
or not?

What if you're required to use a true random source for your numbers, 
what then?


> 
>>Can you please provide an example of a software circuit breaker?
>>
> 
> I wrote an article for the Journal of Object-oriented Programming
> some years ago about the notion of a software circuit-breaker.  In
> software, this is akin to exception handling.  I made up the term
> software circuit-breaker to demonstrate the parallels between the
> physical world of fuses and circuit-breakers and the soft world of
> computer programs where we could do something similar.

I'm sorry, but I really don't see how a circuit-breaker is like an 
exception.  Could you please expand on that?


LR
0
lruss (582)
10/9/2006 8:29:21 PM
adaworks@sbcglobal.net wrote:

> "LR" <lruss@superlink.net> wrote in message 
> news:45293768$0$25785$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>
>>>"LR" <lruss@superlink.net> wrote in message 
>>>news:4527ebb2$0$25792$cc2e38e6@news.uslec.net...
>>>
>>>
>>>>adaworks@sbcglobal.net wrote:
>>>>
>>>>
>>>>>The initialization of a scalar with a value that could be interpreted
>>>>>as correct at run-time, if it becomes a kind of default value, may
>>>>>cause more run-time errors than if it is not initialized at all.


> SPARK ensures that, at run-time, every scalar will have a value
> that conforms to the invariant given for that value.

I'm having some problems understanding these.

LR
0
lruss (582)
10/9/2006 8:38:44 PM
glen herrmannsfeldt wrote:
> James J. Weinkam <jjw@cs.sfu.ca> wrote:
>  
> 
>>After all, every irrational number has an infinite, non repeating 
>>expansion in every base.  All actual numeric calculation is perforce 
>>done with a finite number (however large it may be) of digits 
>>regardless of the base or bases used. 
> 
> 
> Well, only for every rational base.  If you write e in base e, or
> pi in base pi, then the expansion terminates.
> 
Well yes, there is literature on non standard number representations using non 
integral or even irrational bases, but I have never heard of a practical 
implementation.  Can you cite any?
0
jjw (608)
10/9/2006 11:05:56 PM
"LR" <lruss@superlink.net> wrote in message 
news:452ab0ef$0$25776$cc2e38e6@news.uslec.net...
>
> I'm fascinated.  Would you please tell me how you define 'software' and 
> 'engineering'?
>
Software engineering is the application of engineering principles,
practices, and methods to the creation and management of
software.

Engineering is the management and use of settled knowledge
in science, mathematics, and previous engineering practice,
within economic constraints, to produce a dependable design
within defined tolerances to achieve a predictable outcome.

The application of engineering principles and practices
to the creation of software is continuing to advance, although
we still have a long way to go before we are at the level of
practice that other engineering disciplines have achieved.

Reuse of artifacts and knowledge from previous engineering practice
includes, for software, some of the very ideas you have propounded
in earlier contributions to this forum.   The use of existing software
classes and generic components that already work as they should
is one kind of engineering practice.  The application of assertion
based specifications is another.

I just returned from a conference on Automated Software Engineering
where the tool demonstrations were quite a bit beyond what the
average programmer knows.   As long as we think of the software
process in terms of programming, especially old-fashioned programming
as represented by most programming languages, we are not going
to get very close to software engineering.

There are organizations that are making headway in the application
of engineering principles and practices in software.   You may
be unaware of them, but that does not mean they don't exist.

My specific area of study and practice is software engineering.  I have
spent much of my long career in software as a programmer, and I
have a pretty good understanding of the limitations that engineering
might involve when it comes to laying down code.   However, I also
realize that source code is only one part of the software process.
Moreover, I also know, through extensive research and practice,
that traditional approaches to the creation of source code fall short
of what is now possible using new tools and methods.  This is one
reason I find SPARK so valuable as an engineering tool for
software.

As to my concept of a software circuit-breaker, any design in the
physical world that involves electrical current usually includes some
kind of fail-safe device such as a circuit-breaker.   When a modern
program fails in PL/I, Ada, Java, C++, Eiffel, or most other languages,
it is common to include some kind of fail-safe code.  This code acts,
in a program, much the same way a circuit-breaker does in an
electrical system.   In the case of the software, it is often self-resetting.
In the physical world, we often require manual intervention. 
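
A minimal Ada sketch of that fail-safe idea (hypothetical procedure names;
the handler plays the role of a self-resetting breaker):

   loop
      begin
         Run_Control_Cycle;       -- hypothetical: one pass of the normal work
      exception
         when others =>
            Reset_To_Safe_State;  -- hypothetical: "trip", reset, carry on
      end;
   end loop;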


0
adaworks2 (748)
10/10/2006 12:23:50 AM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:ege132$6uk$4@naig.caltech.edu...
>
> If you are going to convert to floating point in the end, you might
> as well do the whole calculation in arbitrary precision floating point?
>
Not really.   The final conversion might be perfectly OK in its
precision, but because the intermediate calculations have no
cumulative drift due to rounding error, the final result is not
as far off as it might otherwise be.

You are correct that tools such as Mathematica and MatLab
provide for this quite nicely.  In C++ and Ada it is common
to use a package or a class that is designed to do fractional
arithmetic.  For example,

      package Rational_Number is
          type Fraction is private;
          function "+" (Left, Right : Fraction) return Fraction;
          -- more operations would follow
      private
          type Fraction is record
              Numerator : Integer;
              Denominator : Integer;
          end record;
       end Rational_Number;

Now, anyone who uses this package will be able to
do arithmetic directly on Fractions.
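
For what it is worth, here is a hedged sketch of how the body of "+" might
be completed (cross-multiplication only; reduction to lowest terms and
overflow handling are omitted):

      package body Rational_Number is
          function "+" (Left, Right : Fraction) return Fraction is
          begin
              return (Numerator   => Left.Numerator * Right.Denominator
                                       + Right.Numerator * Left.Denominator,
                      Denominator => Left.Denominator * Right.Denominator);
          end "+";
      end Rational_Number;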

Richard Riehle 


0
adaworks2 (748)
10/10/2006 12:30:15 AM
adaworks@sbcglobal.net wrote:
 
> You are correct that tools such as Mathematica and MatLab
> provide for this quite nicely.  In C++ and Ada it is common
> to use a package or a class that is designed to do fractional
> arithmetic.  For example,

In Mathematica you can do something like:

y=Table[Random[Integer,{100,1000000}],{100},{100}]

That is a 100 by 100 matrix of random integers between 100 and 1000000.

Now  z=Inverse[y];

You get a matrix of rational numbers with about 600 digits in
the numerator and denominator.  It doesn't take long to calculate,
though it takes a little longer to display.

You can ask for the eigenvalues of y, but it will take it somewhat
longer to calculate them. It seems that instead of giving the values
it gives polynomials whose roots are the eigenvalues.

-- glen


0
gah1 (524)
10/10/2006 12:48:32 AM
adaworks@sbcglobal.net wrote:
> "LR" <lruss@superlink.net> wrote in message 
> news:452ab0ef$0$25776$cc2e38e6@news.uslec.net...
> 
>>I'm fascinated.  Would you please tell me how you define 'software' and 
>>'engineering'?
>>
> 
> Software engineering is the application of engineering principles,
> practices, and methods to the creation and management of
> software.

That's not exactly engineering.  But not an impossible thing. 
Programmers certainly can learn from observing what engineers do.  But 
that won't ever make them engineers.


> Engineering is the management and use of settled knowledge
> in science, mathematics, and previous engineering practice,
> within economic constraints, to produce a dependable design
> within defined tolerances to achieve a predictable outcome.


Interesting.  But not the definition I've gotten over the years when 
I've asked actual engineers.  That definition is:  The application of 
scientific principles.  This would not include the application of 
mathematics except as a tool.

Now, I wonder, what scientific principles may be applied to the 
development of software?  For example, do you ever use the 
formula F=MA to produce software?  I distinguish this from writing a 
program, a mathematical construct after all, to evaluate the formula itself.

And what might "defined tolerances" mean regarding software?  For 
example, does software respond to stress in the same way that metal 
might?  Does it deform?

And "settled knowledge"?  Robin is pretty settled in the knowledge that 
PL/I is the best language ever.  Maybe that's not what you meant.  Ok, 
then I think we're back to asking about issues of proof.



> As to my concept of a software circuit-breaker, any design in the
> physical world that involves electrical current usually includes some
> kind of fail-safe device such as a circuit-breaker.   When a modern
> program fails in PL/I, Ada, Java, C++, Eiffel, or most other languages,
> it is common to include some kind of fail-safe code.  This code acts,
> in a program, much the same way a circuit-breaker does in an
> electrical system.   In the case of the software, it is often self-resetting.
> In the physical world, we often require manual intervention. 

That seems a fundamental distinction to me.  In software, 'resetting' 
and perhaps continuing on with some task as best we are able, may often 
be a good choice.  In dealing with electrical equipment, a circuit 
breaker that resets itself might often prove to be fatal.

0
lruss (582)
10/10/2006 1:19:52 AM
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:ZaGdnQSdFrpiMLTYnZ2dnUVZ_sCdnZ2d@comcast.com...
> robin wrote:
>
> >>I would say that most scientific programmers don't come
> >>from the computer science program, but from engineering
> >>and physical sciences.
>
> > And mathematics?  And Chemistry?
>
> Chemistry is part of the physical sciences.  For mathematics,
> it is harder to say.

There is nothing "harder to say" about mathematics.
Mathematics was the largest user, above physics, chem,
engineering and all the others at a uni where I worked.

Maths (numerical algorithms) has historically been
the largest user, before the expansion of applications
in the other fields.


0
robin_v (2737)
10/10/2006 1:34:04 AM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote in
message news:12ij93qhmco4591@corp.supernews.com...
> In <4526ca04$0$25793$cc2e38e6@news.uslec.net>, on 10/06/2006
>    at 05:27 PM, LR <lruss@superlink.net> said:
>
> >NULL works well for this in C & C++.  I'm curious, is there a value
> >in  PL/I for a pointer which will always be invalid?
>
> Yes -null.

NULL is a valid value for a pointer.

> It does not, however, have a value for uninitialized, and
> it is common to intialize pointers to null.
>
> >If not, what do you do
> >about writing code that has to move between platforms?
>
> You use null.


0
robin_v (2737)
10/10/2006 1:34:04 AM
<adaworks@sbcglobal.net> wrote in message
news:ptvWg.21133$Ij.8932@newssvr14.news.prodigy.com...
>
> "LR" <lruss@superlink.net> wrote in message
> news:45293768$0$25785$cc2e38e6@news.uslec.net...
> > adaworks@sbcglobal.net wrote:
> >
> >> "LR" <lruss@superlink.net> wrote in message
> >> news:4527ebb2$0$25792$cc2e38e6@news.uslec.net...
> >>
> >>>adaworks@sbcglobal.net wrote:
> >>>
> >>>>The initialization of a scalar with a value that could be interpreted
> >>>>as correct at run-time, if it becomes a kind of default value, may
> >>>>cause more run-time errors than if it is not initialized at all.  It is not
> >>>>always possible to decide that a given initialization is better than no
> >>>>value at all.  The circumstances will vary, of course.
> >>>
> >>>I find this pretty confusing.  How can a variable have "no value at all"
> >>>unless you have some meta-data attached to the variable that indicates that
> >>>it hasn't been initialized or had a value assigned to it? Otherwise, I think
> >>>the bits will have some 'value'.  It may be a 'legal' value or 'not legal'
> >>>but the bits will indicate some value.  No?
> >>>
> >>>I get the feeling I'm missing something.
> >>>
> >>>Also, can you give an example where no initialization is better than
> >>>initialization?
> >>>
> >>
> >> Suppose I have a variable that I initialize to zero so my program
> >> can compile without warnings.   If my program is designed so I
> >> never have a method that updates that value, when the program
> >> tries to use that value, it turns out to be valid and there is no
> >> immediate error message.
> >>
> >> On the other hand, suppose I do not assign an initial value to that
> >> variable.   When I try to use it in my program, it will be an invalid
> >> value and the program will raise an exception.   It is often better
> >> for a program to fail to do anything than to do something that looks
> >> right but isn't.
> >
> > So in SPARK, if a variable isn't initialized it gets an invalid value? Suppose
> > the variable in question is just a plain old binary integer type (or whatever
> > that is in SPARK) that can contain any value.  Or does SPARK not allow these
> > kinds of variables?  Then there would still have to be some meta-data that
> > says that it's not initialized, right?
> >
> SPARK ensures that, at run-time, every scalar will have a value
> that conforms to the invariant given for that value.

That doesn't answer the questions.
He asked 1. whether or not an unitialized variable gets an invalid value;
2. does spark allow variables to have any value?


0
robin_v (2737)
10/10/2006 1:34:05 AM
<adaworks@sbcglobal.net> wrote in message
news:myvWg.21134$Ij.1684@newssvr14.news.prodigy.com...
>
> "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
> news:GeednfFAyvaVYbTYnZ2dnUVZ_rWdnZ2d@comcast.com...
> > adaworks@sbcglobal.net wrote:
> >  For integers there are values that are much less likely to
> > occur in real data and reasonably likely to cause problems
> > when used as data.
> >
> Your point is very good.   This is another reason why invariants,
> at the type specification level, are so important.   For example,
>
>        type My_Float is digits 7 range 200.0 .. 800.0;
>
> will guarantee that no instance of My_Float will be allowed to
> stray beyond the bounds of 200.0 and 800.0.   An initialization
> such as,
>               x : My_Float := 0.0;
>
> will fail at compile-time.   At run-time, it will raise a constraint
> exception.
> >
> > I also once had a program that accidentally depended on a variable
> > being initialized to a non-zero value.  I didn't discover the
> > bug until I ran it on a system that initialized to zero.
> >
> A SPARK assert statement would have prevented this.

It would?
This should have been picked up as an uninitialized
variable.

> Richard Riehle


0
robin_v (2737)
10/10/2006 1:34:06 AM
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:N6GdnZzh1v3aOrTYnZ2dnUVZ_oGdnZ2d@comcast.com...
>
 > There are places it is useful and used, and overall I would say that
> floating point is overused, but otherwise I agree.
>
> Floating point was designed for quantities with a relative error.

No it wasn't.  It was designed to cater for a wider range of numbers
than was available with fixed-point forms.


0
robin_v (2737)
10/10/2006 1:34:06 AM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:452a0842$0$2979$ec3e2dad@news.usenetmonster.com...
>
> "Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote in
> message news:12ij8n437tq8e0a@corp.supernews.com...
> >
> > The problem is that DF keeps throwing out bogus challenges,
>
> In many cases these challenges are first posted in comp.lang.fortran by
> others and if I manage to post a solution there I usually SHARE it here as
> edification to those that EVENTUALLY will have to migrate to a modern
> Fortran.

We have been using modern Fortran for 40 years.
It's called PL/I.

> > often with incorrect FORTRAN code;
>
> Nonsense,
> my solution(s) invariably include source/test case/output, and are usually
> made available online.

That doesn't change the fact that the code usually doesn't work.

>   The arbitrary lists problem FOR WHICH THERE IS NO PL/I SOLUTION

I posted a solution some weeks ago.  You read it.
What does that make you?

Here it is again.

   dcl (x ctl, a(*) ctl) float, name char (10) var ctl,
        i fixed binary;
   dcl onsource builtin;

   on conversion begin;
      put skip edit (onsource, '=')(a);
      allocate name; name = onsource;
      go to next;
   end;
   on endfile (sysin) go to next;
next:
   if allocation (x) > 0 then do;
      allocate a(allocation(x));
      do i = 1 to allocation (x);
         a(hbound(a,1)-i+1) = x;
         free x;
      end;
   end;

   if endfile (sysin) then signal finish;

   do forever;
      allocate x;
      get list (x);
      put list (x);
   end;

> because there is no PL/I syntax for allocatable derived types, is the
> latest example.
>
>   http://home.earthlink.net/~dave_gemini/rubbish.f90
>
> these illustrates AGAIN that PL/I is NOT more powerful than Fortran
>  as this newsgroup claims.

All you have proved is that you know nothing about PL/I.


0
robin_v (2737)
10/10/2006 1:34:07 AM
robin wrote:
> "Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote in
> message news:12ij93qhmco4591@corp.supernews.com...
> 
>>In <4526ca04$0$25793$cc2e38e6@news.uslec.net>, on 10/06/2006
>>   at 05:27 PM, LR <lruss@superlink.net> said:
>>
>>
>>>NULL works well for this in C & C++.  I'm curious, is there a value
>>>in  PL/I for a pointer which will always be invalid?
>>
>>Yes -null.
> 
> 
> NULL is a valid value for a pointer.


Maybe I should clarify.  Yes, NULL can be assigned to a pointer, so it 
is a valid value for a pointer, but you can't dereference it.

LR




0
lruss (582)
10/10/2006 3:43:30 AM
"robin" <robin_v@bigpond.com> wrote in message 
news:iMCWg.44642$rP1.31379@news-server.bigpond.net.au...
>
RR> I also once had a program that accidentally depended on a variable
RR> being initialized to a non-zero value.  I didn't discover the
RR> bug until I ran it on a system that initialized to zero.
RR>
RR> A SPARK assert statement would have prevented this.
>
> It would?
> This should have been picked up as an uninitialized
> variable.
>
When does a variable get initialized?  It is OK to have
the variable initialized somewhere in the program
other than where it is declared.   In fact, it is often a
bad idea to initialize the variable where it is declared.

1) An Ada compiler will issue a warning if a variable
    is never given a value anywhere in the program,
2) SPARK will issue additional warnings, based on
    the assertion provided for that variable or its
    type.  SPARK will also examine the value to
    ensure that the range of acceptable values is
    not violated, along with a lot of other checks.
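
As a rough sketch of the kind of report meant in (1) and (2) above
(hypothetical code; the messages are paraphrased):

   procedure Demo (Y : out Integer) is
      X : Integer;     -- declared but never given a value
   begin
      Y := X + 1;      -- an Ada compiler will typically warn here; the
                       -- SPARK Examiner reports a data-flow error because
                       -- X is undefined at this use
   end Demo;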

I know you are trying to get in a statement about
PL/I here, but as far as I know, PL/I does not have
a model for assigning and examining assertions.  It
certainly does not support fine-grained assertions
as one finds in SPARK (or even Eiffel).   This would
be a good thing to add to PL/I.   It might not even be
very difficult given the macro facility of the language.
However, the current macro capability is not quite
powerful enough to carry this off at the level of
rigor found in tools that are specifically designed for
this task.

Richard 


0
adaworks2 (748)
10/10/2006 5:46:43 AM
"robin" <robin_v@bigpond.com> wrote in message 
news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
> <adaworks@sbcglobal.net> wrote in message
 >
RR> SPARK ensures that, at run-time, every scalar will have a value
RR> that conforms to the invariant given for that value.
>
> That doesn't answer the questions.
He asked 1. whether or not an uninitialized variable gets an invalid value;
> 2. does spark allow variables to have any value?
>
Perhaps you did not quite understand my answer.  SPARK will
not allow an invalid value for a variable of a given type.  Further,
the assertions (usually invariants) for that type, or for that instance
of the type, will constrain it to a specified set of valid values.
That set of values is not a direct function of the machine
representation of that value, but it is a constraint that becomes
a part of the completed program.

Finally, if SPARK determines that a variable is never
assigned a value at any point in the program, it will not
allow that program to pass its own validation and verification
process.   That is, SPARK will reject a program where some
variable is never assigned a value.

This is only a tiny part of the kind of checking done by SPARK.
In fact, it is one of the more trivial checks.

Richard
> 


0
adaworks2 (748)
10/10/2006 5:53:25 AM
adaworks@sbcglobal.net wrote:
> "robin" <robin_v@bigpond.com> wrote in message 
> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
> 
>><adaworks@sbcglobal.net> wrote in message
> 
>  >
> RR> SPARK ensures that, at run-time, every scalar will have a value
> RR> that conforms to the invariant given for that value.
> 
>>That doesn't answer the questions.
>>He asked 1. whether or not an uninitialized variable gets an invalid value;
>>2. does spark allow variables to have any value?
>>
> 
> Perhaps you did not quite understand my answer.  SPARK will
> not allow an invalid value for a variable of a given type.  Further,
> the assertions (usually invariants) for that type or for that instance
> of the type, will be controlled within a specified set of valid
> values.  That set of values is not a direct function of the machine
> representation of that value, but it is a constraint that becomes
> a part of the completed program.
> 
> Finally, if SPARK determines that a variable is never
> assigned a value at any point in the program, it will not
> allow that program to pass its own validation and verification
> process.   That is, SPARK will reject a program where some
> variable is never assigned a value.
> 
> This is only a tiny part of the kind of checking done by SPARK.
> In fact, it is one of the more trivial checks.
> 
> Richard
> 
> 
> 
Maybe I missed something, but there's a question I've seen asked several 
times here that you haven't answered yet.

Suppose there's a variable that can legally have any value representable 
by its underlying machine representation (integer, character, Boolean, 
floating point, etc. -- especially Boolean) that is initialized to a 
value somewhere in the program other than where it's declared.

Further, suppose that variable is only used in parts of the program 
where it's not possible to determine statically at compilation time 
whether it has already been set to some value.

In such a case, how would SPARK determine the variable had not been 
initialized before being used?  Clearly it can't reject the program at 
compilation time.  Presumably, if I've understood your postings, SPARK 
will somehow ensure it is initialized (at load time?) to some invalid 
value so when it is first used, its use will raise some sort of exception.

Please pardon the reference to C or PL/I data types (well, it is the 
PL/I newsgroup, despite DF's rantings).  Please give some examples of 
invalid values for C's char, unsigned char, short, or float variables. 
Please give some examples of invalid values for PL/I's character or 
bit(1) variables.  How do these values cause exceptions to occur?  For 
the IEEE representations of floating point data, it's possible to use a 
signaling NaN -- if it's supported by the hardware.  But what's an 
invalid value for bit(1)?  Valid values are '0'b and '1'b and PL/I only 
uses 1 bit to store such values.  How many other values can a single bit 
represent?  One would hope that if a single bit actually did hold some 
value other than '0'b or '1'b, the hardware might raise an exception, 
but I'm not sure how reliable such hardware would be in the first place. 
  :-)


Thanks.


Bob Lidral
lidral  at  alum  dot  mit  dot  edu
0
10/10/2006 7:54:40 AM
I am one of the designers of SPARK, so I thought I might take up Richard
on his offer and stick my oar in.

SPARK is the result of an effort (spanning some 20 years) to design and
build programming languages with sound, modular and efficient static
verification as the primary design goals.  By "sound" I mean the absence
of false-negatives (i.e. the tool tells you that your program is OK when
actually it isn't...a bad thing!)

SPARK is an "annotated" (aka "design by contract") strict subset of Ada.
For compilation, it appears to a standard Ada compiler to just be a
standard Ada program, with the very special property that there are _no_
implementation-dependent ("unspecified" in C language terms) language
features or undefined behaviours.  This means that all SPARK programs
mean the _same_ thing regardless of what choices a compiler might make.
This means the semantics implemented by the verification tools really do
correspond with the semantics implemented by all industrially used
compilers - a property that surprisingly eludes most static analysis
tools! :-)

From the point of view of design and verification, though, SPARK really
is a totally different language - it is certainly _not_ "just a subset"
of Ada at all.

It is aimed at the needs of embedded, hard real-time critical systems.
It is mostly used in safety- and security-critical applications, such as
avionics, railway signalling, and high-grade secure applications.

The verification tools first perform static semantic analysis, which
includes freedom from function side-effects and aliasing issues.  The
language is designed so that these analyses are sound and efficient
(actually - all this is done in P-time.)  These are important
pre-requisites for later on.

The information flow analysis engine then kicks in.  This is based on
the work of Carre and Bergeretti (ACM TOPLAS Jan 1985 for details).
This subsumes all traditional data-flow analysis as well, so spotting
undefined variables is again sound and efficient.

This sets up the real kicker - the VC generator.  This is an
implementation of classical Hoare-logic style weakest-precondition
generation.  The resulting verification conditions (VCs) are then thrown
at either an automatic or a user-assisted theorem-proving tool.

The proof system can prove static type-safety (i.e. no "run-time errors"
like buffer overflow, division by zero, arithmetic overflow etc. etc),
partial correctness with respect to user-supplied pre- and
post-conditions (which are first-order, of course...) and invariants
such as safety-properties.
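
Purely as a hedged illustration (not taken from the SPARK book or from
the toolset itself), a contract in the annotation style of that era looks
roughly like this, where X~ in the postcondition denotes the initial value
of X:

   procedure Increment (X : in out Integer);
   --# derives X from X;
   --# pre  X < Integer'Last;
   --# post X = X~ + 1;

The Examiner generates VCs from annotations like these and from the body,
and the proof tools then discharge them (or show you which ones they cannot).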

If you want to know more, then drop us a line, check out the SPARK textbook
(see www.praxis-his.com/sparkada/sparkbook.asp for details) or see the
numerous publications on www.sparkada.com

Yours,
 Rod Chapman, SPARK Team

0
10/10/2006 8:07:31 AM
> Suppose there's a variable that can legally have any value representable
> by its underlying machine representation (integer, character, Boolean,
> floating point, etc. -- especially Boolean) that is initialized to a
> value somewhere in the program other than where it's declared.
>
> Further, suppose that variable is only used in parts of the program
> where it's not possible to determine statically at compilation time
> whether it has already been set to some value.

SPARK has no such "parts" as you describe.  All library-level variables
in SPARK have a contract that declares whether or not they are
initialized at their point of declaration.  This contract is verified
when the actual declaration is eventually supplied.  Clients of such
units read the contract and just know whether the variable can be
assumed to be initialized or not - classic modular "guarantee/assume"
style design-by-contract verification.
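
As a rough sketch (hypothetical package name, paraphrased comments) of the
kind of contract meant here, a SPARK package announces its state and says
whether it initializes that state at elaboration:

   package Sensor
   --# own State;          -- state announced to the flow analyser
   --# initializes State;  -- contract: State is given a value at elaboration
   is
      procedure Read (Value : out Integer);
      --# global  in State;
      --# derives Value from State;
   end Sensor;

A client unit reads these annotations and may assume State is initialized;
the Examiner checks that the package body actually honours the contract.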

> In such a case, how would SPARK determine the variable had not been
> initialized before being used?  Clearly it can't reject the program at
> compilation time.  Presumably, if I've understood your postings, SPARK
> will somehow ensure it is initialized (at load time?) to some invalid
> value so when it is first used, its use will raise some sort of exception.

No - SPARK ensures static type-safety
by a combination of information-flow analysis and theorem-proving.

 - Rod Chapman, SPARK Team

0
10/10/2006 8:13:47 AM
roderick.chapman@googlemail.com wrote:

(snip)

> SPARK is an "annotated" (aka "design by contract") strict subset of
> Ada.   For compilation, it appears to a standard Ada compiler to just
 > be a standard Ada program, with the very special property that there
 > are _no_ implementation-dependent ("unspecified" in C language terms)
 > language features or undefined behaviours.

Much of the implementation dependent behavior in other languages
is allowing for hardware implementation dependencies.

C, for example, allows signed data to be twos complement, ones complement,
or sign magnitude.  Fortran not only allows for those, but any base
greater than one.   Both also have allowed for either sign for the
remainder for integer division of negative numbers.  (It might be
that more recent standards have restricted this.)

Then there is the byte ordering for multiple byte quantities.

To allow operation on a wide variety of hardware without restrictions
that would slow down operation on processors with differing hardware,
languages allow for those differences.

-- glen

0
gah (12851)
10/10/2006 9:03:08 AM
"robin" <robin_v@bigpond.com> wrote in message 
news:jMCWg.44644$rP1.30083@news-server.bigpond.net.au...
> "David Frank" <dave_frank@hotmail.com> wrote in message
> news:452a0842$0$2979$ec3e2dad@news.usenetmonster.com...
>>
>
>>   The arbitrary lists problem FOR WHICH THERE IS NO PL/I SOLUTION
>
> I posted a solution some weeks ago.  You read it.
> What does that make you?
>

DISCRIMINATING!!

> Here it is again.
>
>   dcl (x ctl, a(*) ctl) float, name char (10) var ctl,
>        i fixed binary;
>   dcl onsource builtin;
>
>   on conversion begin;
>      put skip edit (onsource, '=')(a);
>      allocate name; name = onsource;
>      go to next;
>   end;
>   on endfile (sysin) go to next;
> next:
>   if allocation (x) > 0 then do;
>      allocate a(allocation(x));
>      do i = 1 to allocation (x);
>         a(hbound(a,1)-i+1) = x;
>         free x;
>      end;
>   end;
>
>   if endfile (sysin) then signal finish;
>
>   do forever;
>      allocate x;
>      get list (x);
>      put list (x);
>   end;
>

Let's see how many maroons here agree with you.
ANYONE who agrees that Vowel's solution is valid, please vote AYE!!


0
dave_frank (2243)
10/10/2006 12:32:52 PM
On Mon, 09 Oct 2006 22:46:43 -0700, <adaworks@sbcglobal.net> wrote:

>
> "robin" <robin_v@bigpond.com> wrote in message
> news:iMCWg.44642$rP1.31379@news-server.bigpond.net.au...
>>
> RR> I also once had a program that accidentally depended on a variable
> RR> being initialized to a non-zero value.  I didn't discover the
> RR> bug until I ran it on a system that initialized to zero.
> RR>
> RR> A SPARK assert statement would have prevented this.
>>
>> It would?
>> This should have been picked up as an uninitialized
>> variable.
>>
> When does a variable get initialized?  It is OK to have
> the variable initialized at someplace in the program
> other than where it is declared.   In fact, it is often a
> bad idea to initialize the variable where it is declared.

I disagree; this is the right place to do it.
>
> 1) An Ada compiler will issue a warning if a variable
>     is never given a value anywhere in the program,

Supposing initialization occurs in an external procedure.  How
would the compiler reference that?  I include as an option in our
PL/I compiler the ability to print out into the listing file the
locations where a variable is set or referenced.


> 2) SPARK will issue additional warnings, based on
>     the assertion provided for that variable or its
>     type.  SPARK will also examine the value to
>     ensure that the range of acceptable values is
>     not violated, along with a lot of other checks.
>
> I know you are trying to get in a statement about
> PL/I here, but as far as I know, PL/I does not have
> a model for assigning and examining assertions.  It
> certainly does not support fine-grained assertions
> as one finds in SPARK (or even Eiffel).   This would
> be a good thing to add to PL/I.   It might not even be
> very difficult given the macro facility of the language.
> However, the current macro capability is not quite
> powerful enough to carry this off at the level of
> rigor found in tools that are specifically designed for
> this task.

I am not convinced that putting the assertions into the compiler
as opposed to the application is particularly useful, although
holonomic constraints and signalling would be a trivial extension.

>
> Richard
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/10/2006 12:39:06 PM
On Tue, 10 Oct 2006 01:13:47 -0700, <roderick.chapman@googlemail.com>  
wrote:

>> Suppose there's a variable that can legally have any value representable
>> by its underlying machine representation (integer, character, Boolean,
>> floating point, etc. -- especially Boolean) that is initialized to a
>> value somewhere in the program other than where it's declared.
>>
>> Further, suppose that variable is only used in parts of the program
>> where it's not possible to determine statically at compilation time
>> whether it has already been set to some value.
>
> SPARK has no such "parts" as you describe.  All library-level variables
> in SPARK have a contract that declares whether or not they
> are initialized at their point of declaration.  This contract is
> verified
> when the actual declaration is eventually supplied.  Clients of such
> units read the contract and just know whether the variable can be
> assumed
> to be initialized or not - classic modular "guarantee/assume" style
> design-by-contract verification.

By library-level variables I presume you mean global external?
>
>> In such a case, how would SPARK determine the variable had not been
>> initialized before being used?  Clearly it can't reject the program at
>> compilation time.  Presumably, if I've understood your postings, SPARK
>> will somehow ensure it is initialized (at load time?) to some invalid
>> value so when it is first used, its use will raise some sort of  
>> exception.
>
> No - SPARK ensures static type-safety

What does static type-safety mean?  Is it more than semantic analysis
of args and params?

> by a combination of information-flow analysis and theorem-proving.
>
>  - Rod Chapman, SPARK Team
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/10/2006 12:48:21 PM
roderick.chapman@googlemail.com wrote:
> I am one of the designers of SPARK, so I thought I might take up
> Richard
> on his offer and stick my oar in.
> 
> SPARK is the result of an effort (spanning some 20 years) to design
> and build programming languages with sound, modular and
> efficient static verification as the primary
> design goals. By "sound" I mean the absence of false-negatives (i.e.
> the tool
> tells you that your program is OK when actually it isn't...a bad
> thing!)

Forgive me for being pickity, but I always thought that if a tool says 
ok, when in fact it isn't, that's a false-positive.

> 
> SPARK is an "annotated" (aka "design by contract") strict subset of
> Ada.
> For compilation, it appears to a standard Ada compiler to just be a
> standard
> Ada program, with the very special property that there are _no_
> implementation-
> dependent ("unspecified" in C language terms) language features or
> undefined behaviours.  

Richard suggested that SPARK code could run on a ones complement 
machine; is that true?  What about machines where the character size 
isn't 8 bits but, say, 9 or 6?


 > This means that all SPARK programs mean the
> _same_ thing regardless of what choices a compiler might make.  This
> means
> the semantics implemented by the verification tools really do
> correspond with
> the semantics implemented by all industrially used compilers - a
> property that
> surprisingly eludes most static analysis tools! :-)
> 
> From the point of view of design and verification, though, SPARK really
> is
> a totally different language - it is certainly _not_ "just a subset" of
> Ada at all.

Could you give an example of that please?

> 
> It is aimed at the needs of embedded, hard real-time critical systems.
> It is
> mostly used in safety- and security-critical applications, such as
> avionics,
> railway signalling, and high-grade secure applications.

Hmmm... Richard said railroad switching, if memory serves. ;)

But seriously, he mentioned that SPARK doesn't do what we might call 
simultaneous processes.  So if you're doing railway control stuff, how 
do you handle interactions with other processes that occur at the same 
time?  Or other processes over which you don't have control?


> 
> The verification tools first perform static semantic analysis, which
> includes
> freedom from function side-effects and aliasing issues.  The language
> is designed so that these analyses are sound and efficient (actually -
> all
> this is done in P-time.)  These are important pre-requisites for later
> on.
> 
> The information flow analysis engine then kicks in.  This is based on
> the work
> of Carre and Bergeretti (ACM TOPLAS Jan 1985 for details).  This
> subsumes
> all traditional data-flow analysis as well, so spotting undefined
> variables
> is again sound and efficient.

Do you mean undefined, or uninitialized?  I'm really having trouble 
understanding what Richard is saying about these.  Some of what he says 
seems to imply that it might be better for a variable not to be 
initialized at the time of its creation, or maybe declaration.  Some of 
what he says implies that values that aren't initialized take on invalid 
values, which means to me that there _must_ be some meta-data for some 
types, as has already been pointed out in this thread, for example, 
single bit types.


> 
> This sets up the real kicker - the VC generator.  This is an
> implementation
> of classical Hoare-logic style weakest-precondition generation.  The
> resulting
> verification conditions (VCs) are then thrown at either an automatic or
> a user-assisted
> theorem-proving tool.
> 
> The proof system can prove static type-safety (i.e. no "run-time
> errors" like
> buffer overflow, division by zero, arithmetic overflow etc. etc),
> partial
> correctness with respect to user-supplied pre- and post-conditions
> (which
> are first-order, of course...) and invariants such as
> safety-properties.
> 
> If you want to know more, then drop us a line, check out the SPARK
> textbook
> (see www.praxis-his.com/sparkada/sparkbook.asp for details) or
> see the numerous publications on www.sparkada.com

I'd very much like to learn more, can you recommend an on line tutorial?

LR
0
lruss (582)
10/10/2006 2:14:32 PM
In <CElWg.12671$6S3.6373@newssvr25.news.prodigy.net>, on 10/09/2006
   at 06:05 AM, <adaworks@sbcglobal.net> said:

>I wonder why IBM does not update the language to support OOP.

The usual answer to such questions is resources. IBM has a
requirements process by which customers can request enhancements, but
the customer has to make a business case that IBM accepts. Research
projects don't go through that process, except when they turn into
products.

I don't know when the requirement was last submitted, but if somebody
needs it then it can't hurt to ask. Just remember that the business
case is key.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/10/2006 3:02:07 PM
In <-YOdnSn_59wjNbTYnZ2dnUVZ_t6dnZ2d@comcast.com>, on 10/08/2006
   at 06:42 PM, glen herrmannsfeldt <gah@ugcs.caltech.edu> said:

>It could probably be done in software, but there is hardware support,
>at least on the 80286. 

I didn't believe you, and checked. It is there on the 80486. What a
waste of chip area :-(

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/10/2006 3:03:14 PM
In <452b16b1$0$25785$cc2e38e6@news.uslec.net>, on 10/09/2006
   at 11:43 PM, LR <lruss@superlink.net> said:

>Maybe I should clarify.  Yes, NULL can be assigned to a pointer, so
>it  is a valid value for a pointer, but you can't dereference it.

Not legitimately. That's why IBM changed the representation of NULL.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/10/2006 3:10:46 PM
In <N6GdnZzh1v3aOrTYnZ2dnUVZ_oGdnZ2d@comcast.com>, on 10/08/2006
   at 06:36 PM, glen herrmannsfeldt <gah@ugcs.caltech.edu> said:

>There are probably some pure math problems where rational arithmetic
>makes sense.

Floating point arithmetic is a special case of rational arithmetic,
unless you have an infinitely long word.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/10/2006 3:12:00 PM
In <gMCWg.44640$rP1.29773@news-server.bigpond.net.au>, on 10/10/2006
   at 01:34 AM, "robin" <robin_v@bigpond.com> said:

>NULL is a valid value for a pointer.

NULL is a valid value to assign a pointer. It is not a valid value for
the -> operator.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/10/2006 3:16:59 PM
In <jMCWg.44644$rP1.30083@news-server.bigpond.net.au>, on 10/10/2006
   at 01:34 AM, "robin" <robin_v@bigpond.com> said:

>> In many cases these challenges are first posted in comp.lang.fortran by
>> others and if I manage to post a solution there I usually SHARE it here as
>> edification to those that EVENTUALLY will have to migrate to a modern
>Fortran..

It is that pompous attitude that caused me to add him to my twit list.
His belief that his delusions justify posting to the wrong news group
is enough to ensure that nothing he writes will ever be trustworthy.
His repetition of the same lies that you have already debunked is even
worse. Why not plonk him?

>All you have proved is that you know nothing about PL/I.

And yet you continue trying to teach the pig to sing. He won't learn.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/10/2006 3:22:47 PM
robin <robin_v@bigpond.com> wrote:

>> > And mathematics?  And Chemistry?

>> Chemistry is part of the physical sciences.  For mathematics,
>> is harder to say.
 
> There is nothing "harder to say" about mathematics.
> Mathematics was the largest user, above physics, chem,
> engineering and all the others at a uni where I worked.

I didn't say anything about being the largest user,
I said it was harder to say whether math was a physical
science.  You even cut out just the right two sentences,
and then answered a different question.

People I know in applied math are doing computational
fluid dynamics, computationally intensive and a physical
science.

-- glen
0
gah1 (524)
10/10/2006 5:29:58 PM
Bob Lidral <l1dralspamba1t@comcast.net> wrote:
 
> C can do just about anything PL/I can do.  And assembly/machine language 
>  absolutely can do anything PL/I can do (it does get compiled into 
> machine language, after all).

I used to think this, too, but apparently there are some things
that OS/390 compilers can do but the assembler cannot do.  
Those are (or were) related to dynamic linking and generating the
appropriate control information.  I once suggested that with the
PUNCH instruction (which writes data directly to the object file)
the assembler could, but that doesn't really count.

-- glen
0
gah1 (524)
10/10/2006 5:36:45 PM
robin <robin_v@bigpond.com> wrote:
> "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message

>> Floating point was designed for quantities with a relative error.
 
> No it wasn't.  It was designed to cater for a wider range of numbers
> than was available with fixed-point forms.

A wide range of numbers with a relative error.  If they have
absolute error you will need arbitrary length fixed point data.

-- glen
0
gah1 (524)
10/10/2006 5:38:02 PM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote:
(someone wrote) 
>>All you have proved is that you know nothing about PL/I.
 
> And yet you continue trying to teach the pig to sing. He won't learn.

My kids were three years old not so long ago.  
The problem isn't much different.

-- glen 
0
gah1 (524)
10/10/2006 7:18:44 PM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote:
 
> I didn't believe you, and checked. It is there on the 80486. What a
> waste of chip area :-(

It probably doesn't take much chip area.  It saves the time
to load a segment descriptor once in a while.  (Though I
believe pointer assignment is not usually done through 
segment registers.)

I was running OS/2 1.2 on a 486 until 2.0 came out.  1.2 is
16 bit, so needs segment selectors.  How many 80286 machines
never ran in protected mode?  (Mine ran OS/2 1.0 until I got
1.2, and before I got the 486.)

Even more, I am pretty sure the hardware on the 80386 and
above supports multiple segments in 32 bit mode, with 48 bit pointers.  
(16 bit selector, 32 bit offset.) It might be that OS/2 supports it, 
but I don't know of any other OS that does.

I believe the Watcom compilers will generate large model 32 bit
code,  and it might be that OS/2 will run it.

-- glen
0
gah1 (524)
10/10/2006 8:37:59 PM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:452b9626$0$3127$ec3e2dad@news.usenetmonster.com...
> "robin" <robin_v@bigpond.com> wrote in message
> news:jMCWg.44644$rP1.30083@news-server.bigpond.net.au...
> > "David Frank" <dave_frank@hotmail.com> wrote in message
> > news:452a0842$0$2979$ec3e2dad@news.usenetmonster.com...
> >>   The arbitrary lists problem FOR WHICH THERE IS NO PL/I SOLUTION
> >
> > I posted a solution some weeks ago.  You read it.
> > What does that make you?
>
> DISCRIMINATING!!

Why is it that you are always wrong?

The correct answer was: "A LIAR".

It makes you a liar.



0
robin_v (2737)
10/10/2006 10:09:00 PM
This thread seems like a logical place for me to repeat that IEEE is having a 
"ballot" on a draft standard for DECIMAL floating point support.  If you are 
interested, you might want to check out:

    http://754r.ucbtest.org/balloting.txt

and to see the current drafts, look at:


http://math.berkeley.edu/~scanon/754/754r.odt

http://math.berkeley.edu/~scanon/754/754r.pdf

http://math.berkeley.edu/~scanon/754/754r.sxw



-- 
Bill Klein
 wmklein <at> ix.netcom.com
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:egglpq$sg6$4@naig.caltech.edu...
> robin <robin_v@bigpond.com> wrote:
>> "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
>
>>> Floating point was designed for quantities with a relative error.
>
>> No it wasn't.  It was designed to cater for a wider range of numbers
>> than was available with fixed-point forms.
>
> A wide range of numbers with a relative error.  If they have
> absolute error you will need arbitrary length fixed point data.
>
> -- glen 


0
wmklein (2605)
10/10/2006 11:29:44 PM
works better if you leave the r out, ie,   .../754/754.pdf

William M. Klein wrote:
> This thread seems like a logical place for me to repeat that IEEE is having a 
> "ballot" on a draft standard for DECIMAL floating point support.  If you are 
> interested, you might want to check out:
> 
>     http://754r.ucbtest.org/balloting.txt
> 
> and to see the current drafts, look at:
> 
> 
> http://math.berkeley.edu/~scanon/754/754r.odt
> 
> http://math.berkeley.edu/~scanon/754/754r.pdf
> 
> http://math.berkeley.edu/~scanon/754/754r.sxw
> 
> 
> 
0
donaldldobbs (108)
10/11/2006 1:12:30 AM
"Bob Lidral" <l1dralspamba1t@comcast.net> wrote in message 
news:452B51C0.1080309@comcast.net...
> adaworks@sbcglobal.net wrote:
>> "robin" <robin_v@bigpond.com> wrote in message 
>> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>>
>>><adaworks@sbcglobal.net> wrote in message
>>
>>  >
>> RR> SPARK ensures that, at run-time, every scalar will have a value
>> RR> that conforms to the invariant given for that value.
>>
>>>That doesn't answer the questions.
>>>He asked 1. whether or not an unitialized variable gets an invalid value;
>>>2. does spark allow variables to have any value?
>>>
>>
>> Perhaps you did not quite understand my answer.  SPARK will
>> not allow an invalid value for a variable of a given type.  Further,
>> the assertions (usually invariants) for that type or for that instance
>> of the type, will be controlled within a specified set of valid
>> values.  That set of values is not a direct function of the machine
>> representation of that value, but it is a constraint that becomes
>> a part of completed program.
>>
>> Finally, if SPARK determines that a value is never
>> assigned a value at any point in the program, it will not
>> allow that program to pass its own validation and verification
>> process.   That is, SPARK will reject a program where some
>> variable is never assigned a value.
>>
>> This only a tiny part of the kind of checking done by SPARK.
>> In fact, it is one of the more trivial checks.
>>
>> Richard
>>
>>
>>
> Maybe I missed something, but there's a question I've seen asked several times 
> here that you haven't answered yet.
>
> Suppose there's a variable that can legally have any value representable by 
> its underlying machine representation (integer, character, Boolean, floating 
> point, etc. -- especially Boolean) that is initialized to a value somewhere in 
> the program other than where it's declared.
>
> Further, suppose that variable is only used in parts of the program where it's 
> not possible to determine statically at compilation time whether it has 
> already been set to some value.
>
> In such a case, how would SPARK determine the variable had not been 
> initialized before being used?  Clearly it can't reject the program at 
> compilation time.  Presumably, if I've understood your postings, SPARK will 
> somehow ensure it is initialized (at load time?) to some invalid value so when 
> it is first used, its use will raise some sort of exception.
>
> Please pardon the reference to C or PL/I data types (well, it is the PL/I 
> newsgroup, despite DF's rantings).  Please give some examples of invalid 
> values for C's char, unsigned char, short, or float variables. Please give 
> some examples of invalid values for PL/I's character or bit(1) variables.  How 
> do these values cause exceptions to occur?  For the IEEE representations of 
> floating point data, it's possible to use a signaling NaN -- if it's supported 
> by the hardware.  But what's an invalid value for bit(1)?  Valid values are 
> '0'b and '1'b and PL/I only uses 1 bit to store such values.  How many other 
> values can a single bit represent?  One would hope that if a single bit 
> actually did hold some value other than '0'b or '1'b, the hardware might raise 
> an exception, but I'm not sure how reliable such hardware would be in the 
> first place. :-)
>
The first part of your question is about a value that is legally
representable on a particular machine.   This is not the criterion
used by either SPARK or Ada.   Rather, it uses the notion
of a value that is legally representable for some type.

A type is not the same as a legal machine representation in
this model.  Rather, a type is a legal representation based
on how the type is defined.   The underlying concept is
name equivalence rather than structural equivalence.  Let
me begin with a very simple type declaration.

        type Number is range -473..250;
        for Number'Size use 32;

The for statement is not required, but I added it to
force Number to be represented in 32 bits.

A value of type Number cannot be outside the
bounds of -473 through 250 even though it is
represented in the machine as 32 bits.

A value of a type may not have a lifetime longer
than the declaration of that type.  Therefore, once
the type is defined, any variables of that type are
going to be in scope.   However, even though they
are in scope, they may not be directly visible.

At any place where a value of a type is manipulated,
whether through assignment or otherwise, it must
be directly visible.  There will never be hidden
operations on a value of a declared type.  The
compiler can easily check whether a value of a given
type is ever initialized to a value, either at the time of
declaration or somewhere else in the program.

When the compiler determines, and it will always
determine this, that a variable can never be given a
value anywhere in the program, it will raise an
error at compile-time.    Further, if a variable is
declared and initialized at the time of declaration,
and if it is never used anywhere in the program, the
compiler will report this too.

A variable may not be given an invalid value anywhere in
the program. Certainly it cannot be initialized to an
invalid value at the place of declaration.   For example,
given the type example for Number, the following would
never compile.

          x : Number := 300;   -- will not compile
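
As a small aside on the name-equivalence point above, here is a minimal
sketch (the type and procedure names are chosen only for illustration):
two integer types with identical ranges are still distinct types, and an
explicit conversion is required to move values between them.

        procedure Name_Equivalence_Demo is
           type Number  is range -473 .. 250;
           type Account is range -473 .. 250;  -- same range, distinct type
           N : Number  := 10;
           A : Account := 10;
        begin
           --  A := N;           -- rejected at compile time: different types
           A := Account (N);     -- an explicit conversion is required
        end Name_Equivalence_Demo;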

You mentioned C.   C is notoriously not type-safe.
An integer can be assigned to a float.  A character
can be an integer or an integer can be a pointer.  This
is not evil, it is just the way things are with C.

As for PL/I, it is certainly more type-safe than C.  In
some respects, it appears to be more type-safe than
C++.   PL/I does not support the same model of
type declaration found in Ada, but for the kinds of
applications designed with PL/I, it is probably OK.

Beyond simple type-safety, we come to SPARK. Here
we have a model that is based on the Hoare Triplet
as well as the concept of Invariants.   Further, when
one type is derived from another, as often happens,
that relationship is also checked carefully by SPARK.
The fact is that SPARK, with assertions and active
annotations in place, does a great deal more static
checking than one sees in most other development
environments.

As to uninitialized variables, SPARK will not permit
a program to compile if it determines that a variable
can never be assigned a value anywhere in the
program.   Because of the library model, the visibility
rules, the scope of a type lifetime rules, and other rules,
this is one of the more easily detected errors in a program.

Richard Riehle 


0
adaworks2 (748)
10/11/2006 1:34:20 AM
"Tom Linden" <tom@kednos-remove.com> wrote in message 
news:op.tg7hfgwutte90l@hyrrokkin...
>
> Supposing initialization occurs in an external procedure.  How
> would the compiler reference that?  I include as an option in our
> PL/I compiler the ability to print out into the listing file the
> locations where a variable is set or referenced.
>
Every library unit must compile successfully before any
dependent units will compile successfully.    If a value, x,
is in some other library unit, and another unit tries to
assign a value to it, that value must be made explicitly
and directly visible before that can happen.

The visibility rules are quite strict, and the condition you
describe simply cannot occur.
>
>
> I am not conviced that putting the assertions into the compiler
> as opposed to the application is particularly useful, although
> holonomic constraints and signalling would be a trivial extension.
>
So far, this approach has proven useful in a number of places.  It
has certainly been a powerful asset in Eiffel.  The assertions are
in the application.  It simply happens that the compiler can check
them.   Also, in the case of Eiffel (not SPARK) violation of the
assertions at run-time will raise some kind of exception.

PL/I has many  of the pieces in place for a good assertion model
already.   I think it could be a good upgrade to provide this
feature.  However, you are the PL/I expert, not I, and you are
better able to see whether this would have value to the PL/I
community.

Richard 


0
adaworks2 (748)
10/11/2006 1:47:32 AM
"robin" <robin_v@bigpond.com> wrote in message 
news:jMCWg.44644$rP1.30083@news-server.bigpond.net.au...
>
> We have been using modern Fortran for 40 years.
> It's called PL/I.
>
This may have been true up through some of the
recent standards revisions.   I think it is no longer
true.

The latest Fortran standard has extended the language
a bit beyond what is currently available in production
versions of PL/I.

Of special interest is Fortran's new standard (2003)
to support programming by extension, something not
yet included in PL/I.   Also, the Fortran model for
Abstract Data Types seems a bit better than that
of PL/I.

Languages evolve and improve.   Fortran has been one
of the languages that has evolved quite well.   I suggest
you might be interested in taking an in-depth look at the
Fortran 2003 standard.

After looking at PL/I, including your FAQ's and the many
interesting emails I have received describing solutions in
that language, I am persuaded that PL/I, while once a
good language choice, needs to catch up to the more
contemporary approaches of software practice.   However,
it is a fundamentally good design that could, if those who
control it were so motivated, be brought into the modern
world of software practice quite easily with a few extensions
and adaptations.

Richard 


0
adaworks2 (748)
10/11/2006 1:56:39 AM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote in 
message
>
> Floating point arithmetic is a special case of rational arithmetic,
> unless you have an infinitely long word.
>
But summation of a series of rational numbers that are kept
in their fractional form will produce a greater degree of
accuracy and no cumulative drift.   When we convert rational
numbers to binary, there is almost always some inaccuracy,
unless the values are "model" numbers. By model numbers,
I mean those few floating-point values that can be exactly
represented in a binary representation.

The inability to convert most decimal fractions exactly to a
binary word instantly creates a problem with accuracy.  By
preserving the numerator and denominator as whole numbers,
and computing a long series such as a summation on fractions
built over whole numbers, we have no inaccuracy.

If, after performing a set of operations on that series, we need
to represent the final result as a decimal fraction, we will still get
some inaccuracy, but the inaccuracy will not be nearly as great
as it would be if we had converted every fraction in the series
into a binary representation of a decimal fraction throughout
the computation.
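
As a minimal sketch of the idea (in Ada notation, since that is what the
typed examples in this thread use; the Rational record and the "+"
operator below are invented for illustration), summing 1/3 three times
stays exact when the numerator and denominator are kept as whole numbers:

        procedure Rational_Sum is
           type Rational is record
              Num, Den : Long_Integer;   -- Den kept positive and non-zero
           end record;

           function "+" (L, R : Rational) return Rational is
           begin
              return (Num => L.Num * R.Den + R.Num * L.Den,
                      Den => L.Den * R.Den);
           end "+";

           Third : constant Rational := (Num => 1, Den => 3);
           Sum   : Rational := (Num => 0, Den => 1);
        begin
           for I in 1 .. 3 loop
              Sum := Sum + Third;     -- exact at every step
           end loop;
           --  Sum is now 27/27, i.e. exactly 1; a binary floating-point
           --  sum 1.0/3.0 + 1.0/3.0 + 1.0/3.0 is only approximately 1.0.
        end Rational_Sum;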

Richard Riehle 


0
adaworks2 (748)
10/11/2006 2:08:56 AM
On Tue, 10 Oct 2006 18:47:32 -0700, <adaworks@sbcglobal.net> wrote:

>
> "Tom Linden" <tom@kednos-remove.com> wrote in message
> news:op.tg7hfgwutte90l@hyrrokkin...
>>
>> Supposing initialization occurs in an external procedure.  How
>> would the compiler reference that?  I include as an option in our
>> PL/I compiler the ability to print out into the listing file the
>> locations where a variable is set or referenced.
>>
> Every library unit must compile successfully before any
> dependent units will compile successfully.    I a value, x,
> is in some other library unit, and another unit tries to
> assign a value to it, that value must be made explicitly
> and directly visible before that can happen.
>
> The visibility rules are quite strict, and the condition you
> describe simply cannot occur.
>>
>>
>> I am not conviced that putting the assertions into the compiler
>> as opposed to the application is particularly useful, although
>> holonomic constraints and signalling would be a trivial extension.
>>
> So far, this approach has proven useful in a number of places.  It
> has certainly been a powerful asset in Eiffel.  The assertions are
> in the application.  It simply happens that the compiler can check
> them.   Also, in the case of Eiffel (not SPARK) violation of the
> assertions at run-time will raise some kind of exception.
>
> PL/I has many  of the pieces in place for a good assertion model
> already.   I think it could be a good upgrade to provide this
> feature.  However, you are the PL/I expert, not I, and you are
> better able to see whether this would have value to the PL/I
> community.

The pieces are there for the application.  For example, suppose you wanted
to apply constraints to some variable, x, which were related to variables y
and z.  You could then have something like

if f(x,y,z) then signal condition(assertx);

and in a suitable spot you had the handler

on condition(assertx) begin;
                       .
                       .
                       .
                       end;

So my point is this,  assertions (BTW, constraints would be a better term)
can already be handled in PL/I, and since they are application specific,
the applications programmer is better suited to implement them.
>
> Richard
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/11/2006 2:52:07 AM
In <egglnd$sg6$3@naig.caltech.edu>, on 10/10/2006
   at 05:36 PM, glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu>
said:

>I used to think this, too, but apparently there are some things that
>OS/390 compilers can do but the assembler cannot do.

There have been some LE enhancements in support of HLA since MVS was
called OS/390. 

As for C, it's like Johnson's marvelous dancing dog; the marvel was
not how well it danced, but that it danced at all. In some ways HLA is
a higher level language than C, and PL/I definitely is.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/11/2006 11:55:36 AM
In <eggrmk$1h5$4@naig.caltech.edu>, on 10/10/2006
   at 07:18 PM, glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu>
said:

>My kids were three years old not so long ago.

Your kids will grow more mature.

>The problem isn't much different.

Head, meet wall. Wall, meet head. Good luck teaching him. Personally,
I'd rather teach the 3 year olds; they're more rational.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/11/2006 11:58:18 AM
In <op.tg8kw5vjtte90l@hyrrokkin>, on 10/10/2006
   at 07:52 PM, "Tom Linden" <tom@kednos-remove.com> said:

>So my point is this,  assertions (BTW, constraints would be a better
>term) can already be handled in PL/I, and since they are application
>specific, the applications programmer is better suited to implement
>them.

I believe that his point is that the compiler is better suited to
apply flow analysis to the constraints. I haven't used SPARK, but I
have used Ada. While I prefer PL/I, there are definitely things in Ada
worth looking at, and constraining variables to specific ranges is one
of them.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/11/2006 12:02:06 PM
In <egh0b7$3cb$1@naig.caltech.edu>, on 10/10/2006
   at 08:37 PM, glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu>
said:

>It probably doesn't take much chip area.  It saves the time to load
>a segment descriptor once in a while.  (Though I believe pointer
>assignment is not usually done through segment registers.)

Assignment, no. Dereferencing is.

>Even more, I am pretty sure the hardware on the 80386 and above
>supports multiple segments in 32 bit mode, with 48 bit pointers.

The hardware supports it, but there's no translation lookaside buffer,
making it expensive to exploit. OS/2 2.0 uses a flat memory model for
32-bit code, and I'm sure that the overhead of reloading segment
descriptors was a factor.

>I believe the Watcom compilers will generate large model 32 bit
>code,  and it might be that OS/2 will run it.

OS/2 will run 32-bit code from Watcom; it won't run 48-bit code.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/11/2006 12:08:12 PM
In <U2YWg.13334$6S3.9165@newssvr25.news.prodigy.net>, on 10/11/2006
   at 01:47 AM, <adaworks@sbcglobal.net> said:

>Every library unit must compile successfully before any
>dependent units will compile successfully.

I hope that you're not saying that the body must compile before
dependent routines can be compiled. That would be a disaster for the
typical Ada application.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/11/2006 12:13:16 PM
In <YmYWg.13341$6S3.3964@newssvr25.news.prodigy.net>, on 10/11/2006
   at 02:08 AM, <adaworks@sbcglobal.net> said:

>But summation of a series of rational numbers that are kept in their
>fractional form will produce a greater degree of accuracy and no
>cumulative drift.

I'm not questioning that one partial representation of rational
numbers may have an advantage over another, either globally or in
specific cases. I'm saying that none of them is able to represent
arbitrary real numbers.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/11/2006 12:17:07 PM
<adaworks@sbcglobal.net> wrote in message
news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
>
> "robin" <robin_v@bigpond.com> wrote in message
> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
> > <adaworks@sbcglobal.net> wrote in message
>  >
> RR> SPARK ensures that, at run-time, every scalar will have a value
> RR> that conforms to the invariant given for that value.
> >
> > That doesn't answer the questions.
> > He asked 1. whether or not an unitialized variable gets an invalid value;
> > 2. does spark allow variables to have any value?
> >
> Perhaps you did not quite understand my answer.

You haven't understood the question.

>  SPARK will
> not allow an invalid value for a variable of a given type.

That's not what was asked.

>  Further,
> the assertions (usually invariants) for that type or for that instance
> of the type, will be controlled within a specified set of valid
> values.  That set of values is not a direct function of the machine
> representation of that value, but it is a constraint that becomes
> a part of completed program.
>
> Finally, if SPARK determines that a value

variable

> is never
> assigned a value at any point in the program, it will not
> allow that program to pass its own validation and verification
> process.   That is, SPARK will reject a program where some
> variable is never assigned a value.

That's still inadequate.  A variable might well be assigned a value,
but before it is assigned a value, an attempt is made to use it.

> This only a tiny part of the kind of checking done by SPARK.
> In fact, it is one of the more trivial checks.

The two questions at the top are still not answered.


0
robin_v (2737)
10/11/2006 1:05:30 PM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote in
message news:452bc8d7$26$fuzhry+tra$mr2ice@news.patriot.net...
> In <jMCWg.44644$rP1.30083@news-server.bigpond.net.au>, on 10/10/2006
>    at 01:34 AM, "robin" <robin_v@bigpond.com> said:
>
> >> In many cases these challenges are first posted in comp.lang.fortran by
> >> others and if I manage to post a solution there I usually SHARE it here as
> >> edification to those that EVENTUALLY will have to migrate to a modern
> >Fortran..
>
> It is that pompous attitude that caused me to add him to my twit list.
> His belief that his delusions justify posting to the wrong news group
> is enough to ensure that nothing he writes will ever be trustworthy.
> His repetition of the same lies that you have already debunked is even
> worse. Why not plonk him?
>
> >All you have proved is that you know nothing about PL/I.
>
> And yet you continue trying to teach the pig to sing. He won't learn.

I think that Peter Cooke had the right idea,
with his skit of teaching ravens to fly [or was it to sing?] under water.


0
robin_v (2737)
10/11/2006 1:05:31 PM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message
news:egglpq$sg6$4@naig.caltech.edu...
> robin <robin_v@bigpond.com> wrote:
> > "glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
>
> >> Floating point was designed for quantities with a relative error.
>
> > No it wasn't.  It was designed to cater for a wider range of numbers
> > than was available with fixed-point forms.
>
> A wide range of numbers with a relative error.

They were designed to allow a wide range of numbers.  Period.

Whether or not there is a relative error is irrelevant.

And if it does become of interest, it's a secondary issue.

Some f.p. problems can in point of fact be solved
without ever encountering any error at all.


0
robin_v (2737)
10/11/2006 1:05:31 PM
<adaworks@sbcglobal.net> wrote in message
news:qKBWg.8973$TV3.4811@newssvr21.news.prodigy.com...
>
> As to my concept of a software circuit-breaker, any design in the
> physical world that involves electrical current usually includes some
> kind of fail-safe device such as a circuit-breaker.

That is probably not a good analogy.  A circuit-breaker
is not what I would call a "fail-safe" device.
It merely breaks the circuit.  Permanently.
There's no guarantee that damage has not already been done.

>   When a modern
> program fails in PL/I, Ada, Java, C++, Eiffel, or most other languages,
> it is common to include some kind of fail-safe code.  This code acts,
> in a program, much the same way a circuit-breaker does in an
> electrical system.

Not really.  A circuit breaker switches off the circuit.
There is no opportunity afterwards to do anything.

>   In the case of the software, it is often self-resetting.

The equivalent in the software world is to abort.
A circuit-breaker is not "self-resetting".
It would/could be dangerous to do so.

> In the physical world, we often require manual intervention.


0
robin_v (2737)
10/11/2006 1:05:32 PM
On Wed, 11 Oct 2006 05:02:06 -0700, Shmuel (Seymour J.) Metz  
<spamtrap@library.lspace.org.invalid> wrote:

> In <op.tg8kw5vjtte90l@hyrrokkin>, on 10/10/2006
>    at 07:52 PM, "Tom Linden" <tom@kednos-remove.com> said:
>
>> So my point is this,  assertions (BTW, constraints would be a better
>> term) can already be handled in PL/I, and since they are application
>> specific, the applications programmer is better suited to implement
>> them.
>
> I believe that his point is that the compiler is better suited to
> apply flow analysis to the constraints. I haven't used SPARK, but I
> have used Ada. While I prefer PL/I, there are definitely things in Ada
> worth looking at, and constraining variables to specific ranges is one
> of them.
>
So you would favour putting the machinery in the compiler, which BTW, is
not difficult.  Would you want this to apply to individual statements as
is done with condition prefixes or only per scope?  Would you want general
holonomic constraint as might be implemented by a function call, or are
you thinking of just linear constraints, e.g., a range of values?


-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/11/2006 3:01:21 PM
"robin" <robin_v@bigpond.com> wrote in message 
news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
> <adaworks@sbcglobal.net> wrote in message
> news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
>>
>> "robin" <robin_v@bigpond.com> wrote in message
>> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>> > <adaworks@sbcglobal.net> wrote in message
>>  >
>> RR> SPARK ensures that, at run-time, every scalar will have a value
>> RR> that conforms to the invariant given for that value.
>> >
>> > That doesn't answer the questions.
>> > He asked 1. whether or not an unitialized variable gets an invalid value;
>> > 2. does spark allow variables to have any value?
>> >
      1)   SPARK will not allow a variable to have an invalid value

       2)  SPARK will allow a variable to have a value that conforms
            to the range constraint for the type of that variable.

                   type Number is range 50..70;
                   x : Number := 15;    -- illegal;
                 ========================
                   x := any value not in the range of 50 through 70,
                          is still illegal.

>
> That's still inadequate.  A variable might well be assigned a value,
> but before it is assigned a value, an attempt is made to use it.
>
That can never happen.   The program is checked at compile-time,
and any such attempt will cause a compile-time error.  This is
easy to detect.  Note that even within a procedure, operations
on values are checked by the compiler.  Consider,

            procedure ABC (x : in integer;  y : out integer) is
                 z : integer;
            begin
                 x := y;       -- compile error; x is an in parameter and
                                 -- cannot be assigned a new value
                 y := z;       -- compile-time error; z has never been
                                 -- assigned a value;
                 z := y        -- compile-time error; y is an out parameter,
                                 -- and it cannot be assigned to another 
variable
                                 -- unless it has been assigned a value first
           end ABC;

In the above example, the compiler does a lot of checking to ensure
the absence of stupid mistakes.  A parameter designated as "in" is
effectively a constant within the procedure's algorithm.  One that is
designated as "out" cannot be assigned to another value until it has
a value of its own.   A local variable that has never been assigned
a value cannot be used on the right side of assignment unless it has
already been assigned a value.
>
Further, one might ask whether a variable is assigned a value
far away, as global data, from its declaration.   This is also
checked by the compiler.   The visibility rules guarantee that
no error can occur even when the variable seems to be far
away from where it is used.  In Ada, there are ways a
programmer can deliberately circumvent the visibility rules,
but that cannot happen in SPARK.

In any case, global data, in modern programming practice
is far less common (no pun intended) than it was during
the earlier years.  Does PL/I allow/encourage global data?
Does PL/I support a strong model of data localization? It
seems to provide such support, but I wonder about how
that is used in practice.
>
> The two questions at the top are still not answered.
>
I hope they are now.   Let me know if you need further
clarification.

Richard 


0
adaworks2 (748)
10/11/2006 3:07:50 PM
On Wed, 11 Oct 2006 05:17:07 -0700, Shmuel (Seymour J.) Metz  
<spamtrap@library.lspace.org.invalid> wrote:

> In <YmYWg.13341$6S3.3964@newssvr25.news.prodigy.net>, on 10/11/2006
>    at 02:08 AM, <adaworks@sbcglobal.net> said:
>
>> But summation of a series of rational numbers that are kept in their
>> fractional form will produce a greater degree of accuracy and no
>> cumulative drift.
>
> I'm not questioning that one partial representation of rational
> numbers may have an advantage over another, either globally or in
> specific cases. I'm saying that none of them is able to represent
> arbitrary real numbers.
>
As I stated elsewhere, they do not form a dense set.  Anyone who has had
even a cursory reading on the topic of numerical analysis would be of the
opinion that the use of rationals for computations is silly and ill-advised.
You design your code to achieve certain levels of accuracy, much the
equivalent of maintaining a given signal-to-noise ratio.  It is, after all,
possible to analyze the propagation of errors in any set of calculations.
Consider, for example, using binary floating point for financial calculations
rather than scaled fixed decimal.  You can determine in advance how large the
characteristic of the former must be to ensure acceptable propagation of
errors.  The other point here is that the latter, with proper precision and
scale, forms a closed, bounded set with respect to these types of
calculations; whereas the former does not, and uses approximations to the
decimals within the acceptable margin of error.
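
To make the contrast concrete, here is a minimal sketch (in Ada notation,
as used elsewhere in this thread; the type and procedure names are
invented for illustration) of summing 0.01 a thousand times with binary
floating point and with a scaled decimal fixed-point type:

        with Ada.Text_IO; use Ada.Text_IO;

        procedure Money_Sum is
           type Cents is delta 0.01 digits 12;  -- scaled decimal fixed point
           F : Long_Float := 0.0;
           D : Cents      := 0.0;
        begin
           for I in 1 .. 1_000 loop
              F := F + 0.01;   -- 0.01 has no exact binary representation
              D := D + 0.01;   -- exact: 0.01 is a multiple of the delta
           end loop;
           Put_Line ("binary float : " & Long_Float'Image (F)); -- not exactly 10.0
           Put_Line ("fixed decimal: " & Cents'Image (D));      -- exactly 10.00
        end Money_Sum;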


-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/11/2006 3:20:32 PM
On Wed, 11 Oct 2006 08:07:50 -0700, <adaworks@sbcglobal.net> wrote:

>
> "robin" <robin_v@bigpond.com> wrote in message
> news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
>> <adaworks@sbcglobal.net> wrote in message
>> news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
>>>
>>> "robin" <robin_v@bigpond.com> wrote in message
>>> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>>> > <adaworks@sbcglobal.net> wrote in message
>>>  >
>>> RR> SPARK ensures that, at run-time, every scalar will have a value
>>> RR> that conforms to the invariant given for that value.
>>> >
>>> > That doesn't answer the questions.
>>> > He asked 1. whether or not an unitialized variable gets an invalid value;
>>> > 2. does spark allow variables to have any value?
>>> >
>       1)   SPARK will not allow a variable to have an invalid value
>
>        2)  SPARK will allow a variable to have a value that conforms
>             to the range constraint for the type of that variable.
>
>                    type Number is range 50..70;
>                    x : Number := 15;    -- illegal;
>                  ========================
>                    x := any value not in the range of 50 through 70,
>                           is still illegal.
>
>>
>> That's still inadequate.  A variable might well be assigned a value,
>> but before it is assigned a value, an attempt is made to use it.
>>
> That can never happen.   The progam is checked at compile-time,
> and any such attempt will cause a compile-time error.  This is
> easy to detect.  Note that even within a procedure, operations
> on values are checked by the compiler.  Consider,
>
>             procedure ABC (x : in integer;  y : out integer) is
>                  z : integer;
>             begin
>                  x := y;       -- compile error; x is an in parameter and
>                                  -- cannot be assigned a new value
>                  y := z;       -- compile-time error; z has never been
>                                  -- assigned a value;
>                  z := y        -- compile-time error; y is an out parameter,
>                                  -- and it cannot be assigned to another variable
>                                  -- unless it has been assigned a value first
>            end ABC;
>
> In the above example, the compiler does a lot of checking to ensure
> the absence of stupid mistakes.  A parameter designated as "in" is
> effectively a constant within the procedure's algorithm.  One that is
> designated as "out" cannot be assigned to another value until it has
> a value of its own.   A local variable that has never been assigned
> a value cannot be used on the right side of assignment unless it has
> already been assigned a value.
>>
> Further, one might ask whether a variable is assigned a value
> far away, as global data, from its declaration.   This is also
> checked by the compiler.   The visibility rules guarantee that
> no error can occur even when the variable seems to be far
> away from where it is used.  In Ada, there are ways a
> programmer can deliberately circumvent the visibility rules,
> but that cannot happen in SPARK.
>
> In any case, global data, in modern programming practice
> is far less common (no pun intended) than it was during
> the earlier years.  Does PL/I allow/encourage global data?
> Does PL/I support a strong model of data localization? It
> seems to provide such support, but I wonder about how
> that is used in practice.

It is neutral in this regard.  You may use global data or not.
I use it.  This is also more an issue of the OS.  For example, under
OpenVMS declarations can have the added attribute GLOBALDEF or GLOBALREF
and you can specify the psect in which the GLOBALDEF resides.

Data localization, if I am following you,  can be in some psect as above
or in an AREA.

>>
>> The two questions at the top are still not answered.
>>
> I hope they are now.   Let me know if you need further
> clarification.
>
> Richard
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/11/2006 3:29:46 PM
adaworks@sbcglobal.net wrote:

> "robin" <robin_v@bigpond.com> wrote in message 
> news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
> 
>><adaworks@sbcglobal.net> wrote in message
>>news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
>>
>>>"robin" <robin_v@bigpond.com> wrote in message
>>>news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>>>
>>>><adaworks@sbcglobal.net> wrote in message
>>>
>>> >
>>>RR> SPARK ensures that, at run-time, every scalar will have a value
>>>RR> that conforms to the invariant given for that value.
>>>
>>>>That doesn't answer the questions.
>>>>He asked 1. whether or not an unitialized variable gets an invalid value;
>>>>2. does spark allow variables to have any value?
>>>>
> 
>       1)   SPARK will not allow a variable to have an invalid value
> 
>        2)  SPARK will allow a variable to have a value that conforms
>             to the range constraint for the type of that variable.
> 
>                    type Number is range 50..70;
>                    x : Number := 15;    -- illegal;
>                  ========================
>                    x := any value not in the range of 50 through 70,
>                           is still illegal.


Suppose using SPARK (and bear with me please, because I'm not a SPARK or 
Ada programmer) someone writes:
	procedure ABC() is z : integer;
	begin
		type Number is range 50..70;
		x : Number;
		y : Number := 51;
		x := y;
		z := 0;
	end ABC;

Ok, then assuming that I haven't completely botched the syntax and 
semantics, will the line that says "x : Number;"  yield some sort of 
warning or error?  If not, then what value will x have after variable y 
is instantiated and given its value, but before the line "x := y;" is 
executed?

Keeping in mind that provability is a very nice concept, but we're still 
using software as a tool to do this, so we must consider that buggen 
lurk within, two more:

If it's possible to write the above code, will x's value be predictable 
at that point?

If it's possible to write the above code, will x's value, be in the 
range that is specified for the type (if that's what it's called) Number 
at that point?



> 
> 
>>That's still inadequate.  A variable might well be assigned a value,
>>but before it is assigned a value, an attempt is made to use it.
>>
> 
> That can never happen.   The progam is checked at compile-time,
> and any such attempt will cause a compile-time error.  This is
> easy to detect.  Note that even within a procedure, operations
> on values are checked by the compiler.  Consider,
> 
>             procedure ABC (x : in integer;  y : out integer) is
>                  z : integer;
>             begin
>                  x := y;       -- compile error; x is an in parameter and
>                                  -- cannot be assigned a new value
>                  y := z;       -- compile-time error; z has never been
>                                  -- assigned a value;
>                  z := y        -- compile-time error; y is an out parameter,
>                                  -- and it cannot be assigned to another 
> variable
>                                  -- unless it has been assigned a value first
>            end ABC;
> 
> In the above example, the compiler does a lot of checking to ensure
> the absence of stupid mistakes.  A parameter designated as "in" is
> effectively a constant within the procedure's algorithm.  One that is
> designated as "out" cannot be assigned to another value until it has
> a value of its own.   A local variable that has never been assigned
> a value cannot be used on the right side of assignment unless it has
> already been assigned a value.
> 
> Further, one might ask whether a variable is assigned a value
> far away, as global data, from its declaration.   This is also
> checked by the compiler.   The visibility rules guarantee that
> no error can occur even when the variable seems to be far
> away from where it is used.  In Ada, there are ways a
> programmer can deliberately circumvent the visibility rules,
> but that cannot happen in SPARK.
> 
> In any case, global data, in modern programming practice
> is far less common (no pun intended) than it was during
> the earlier years.  Does PL/I allow/encourage global data?
> Does PL/I support a strong model of data localization? It
> seems to provide such support, but I wonder about how
> that is used in practice.
> 
>>The two questions at the top are still not answered.
>>
> 
> I hope they are now.   Let me know if you need further
> clarification.
> 
> Richard 
> 
> 
0
lruss (582)
10/11/2006 3:46:12 PM
On Wed, 11 Oct 2006 08:29:46 -0700, Tom Linden <tom@kednos-remove.com> wrote:

> On Wed, 11 Oct 2006 08:07:50 -0700, <adaworks@sbcglobal.net> wrote:
>
>>
>> "robin" <robin_v@bigpond.com> wrote in message
>> news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
>>> <adaworks@sbcglobal.net> wrote in message
>>> news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
>>>>
>>>> "robin" <robin_v@bigpond.com> wrote in message
>>>> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>>>> > <adaworks@sbcglobal.net> wrote in message
>>>>  >
>>>> RR> SPARK ensures that, at run-time, every scalar will have a value
>>>> RR> that conforms to the invariant given for that value.
>>>> >
>>>> > That doesn't answer the questions.
>>>> > He asked 1. whether or not an unitialized variable gets an invalid value;
>>>> > 2. does spark allow variables to have any value?
>>>> >
>>       1)   SPARK will not allow a variable to have an invalid value
>>
>>        2)  SPARK will allow a variable to have a value that conforms
>>             to the range constraint for the type of that variable.
>>
>>                    type Number is range 50..70;
>>                    x : Number := 15;    -- illegal;
>>                  ========================
>>                    x := any value not in the range of 50 through 70,
>>                           is still illegal.
>>
>>>
>>> That's still inadequate.  A variable might well be assigned a value,
>>> but before it is assigned a value, an attempt is made to use it.
>>>
>> That can never happen.   The progam is checked at compile-time,
>> and any such attempt will cause a compile-time error.  This is
>> easy to detect.  Note that even within a procedure, operations
>> on values are checked by the compiler.  Consider,
>>
>>             procedure ABC (x : in integer;  y : out integer) is
>>                  z : integer;
>>             begin
>>                  x := y;       -- compile error; x is an in parameter and
>>                                  -- cannot be assigned a new value
>>                  y := z;       -- compile-time error; z has never been
>>                                  -- assigned a value;
>>                  z := y        -- compile-time error; y is an out parameter,
>>                                  -- and it cannot be assigned to another variable
>>                                  -- unless it has been assigned a value first
>>            end ABC;
>>
>> In the above example, the compiler does a lot of checking to ensure
>> the absence of stupid mistakes.  A parameter designated as "in" is
>> effectively a constant within the procedure's algorithm.  One that is
>> designated as "out" cannot be assigned to another value until it has
>> a value of its own.   A local variable that has never been assigned
>> a value cannot be used on the right side of assignment unless it has
>> already been assigned a value.
>>>
>> Further, one might ask whether a variable is assigned a value
>> far away, as global data, from its declaration.   This is also
>> checked by the compiler.   The visibility rules guarantee that
>> no error can occur even when the variable seems to be far
>> away from where it is used.  In Ada, there are ways a
>> programmer can deliberately circumvent the visibility rules,
>> but that cannot happen in SPARK.
>>
>> In any case, global data, in modern programming practice
>> is far less common (no pun intended) than it was during
>> the earlier years.  Does PL/I allow/encourage global data?
>> Does PL/I support a strong model of data localization? It
>> seems to provide such support, but I wonder about how
>> that is used in practice.
>
> It is neutral in this regard.  You may use global data or not.
> I use it.  This is also more an issue of the OS.  For example, under
> OpenVMS declarations can have the added attribute GLOBALDEF or GLOBALREF
> and you can specify the psect in which the GLOBALDEF resides.
>
> Data localization, if I am following you,  can be in some psect as above
> or in an AREA.

Or some lexical scope from which contained scopes inherit the declaration.

>
>>>
>>> The two questions at the top are still not answered.
>>>
>> I hope they are now.   Let me know if you need further
>> clarification.
>>
>> Richard
>>
>>
>
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/11/2006 3:49:32 PM
"robin" <robin_v@bigpond.com> writes:
> The equivalent in the software world is to abort.
> A circuit-breaker is not "self-resetting".
> It would/could be dangerous to do so.

The "big boys" (electrical utilities) do have self-resetting
CBs.  These are why the uninitiated are warned to stay away
from downed power lines -- the lines may come back to life
without warning.  Safety isn't the issue with these devices;
the devices are to protect/restore service with minimal
manual intervention.
0
mojaveg866 (241)
10/11/2006 4:20:31 PM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote:
> In <egh0b7$3cb$1@naig.caltech.edu>, on 10/10/2006
>   at 08:37 PM, glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu>
> said:
 
>>It probably doesn't take much chip area.  It saves the time to load
>>a segment descriptor once in a while.  (Though I believe pointer
>>assignment is not usually done through segment registers.)
 
> Assignment, no. Dereferencing is.

Assignment could be, but I don't know any compilers that do it.
 
>>Even more, I am pretty sure the hardware on the 80386 and above
>>supports multiple segments in 32 bit mode, with 48 bit pointers.
 
> The hardware supports it, but there's no translation lookaside buffer,
> making it expensive to exploit. OS/2 2.0 uses a flat memory model for
> 32-bit code, and I'm sure that the overhead of reloading segment
> descriptors was a factor.

I have had people tell me that there was a TLB, or something
similar, for segment selectors on some models, but I never found
anything in the documentation about it.

>>I believe the Watcom compilers will generate large model 32 bit
>>code,  and it might be that OS/2 will run it.
 
> OS/2 will run 32-bit code from Watcom; it won't run 48-bit code.

I was pretty sure that other OS didn't, but I thought it possible
that the segment selector code was still in OS/2.  

In the OS/2 1.x days (when everyone else was using DOS 5.0) I was
debugging a C program doing complicated array operations that
would sometimes go outside the array bounds.  (Some were arrays
of pointers to arrays, where the arrays had different lengths.)

By allocating segments directly from OS/2 of the appropriate length,
I had a hardware check on array bounds for either reading or writing.

I still did this for some debugging with OS/2 2.x, but production
code was compiled as 32bit code.  

With a segment selector cache it could have worked well for 
large programs.

-- glen 
0
gah1 (524)
10/11/2006 6:22:51 PM
(snip, someone wrote)

> In any case, global data, in modern programming practice
> is far less common (no pun intended) than it was during
> the earlier years.  Does PL/I allow/encourage global data?

There is STATIC EXTERNAL, pretty much equivalent to Fortran
COMMON or C's extern.  The default is always automatic, as
usual for a language allowing recursion.  
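
A minimal PL/I sketch of the difference (variable names invented, for
illustration only):

     declare shared_total fixed binary(31) static external;
        /* one copy shared by every procedure that declares it        */
        /* EXTERNAL, much like Fortran COMMON or a C extern variable  */
     declare local_count fixed binary(31);
        /* AUTOMATIC by default: a fresh copy for each activation     */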

> Does PL/I support a strong model of data localization? It
> seems to provide such support, but I wonder about how
> that is used in practice.

What is data localization?  Is that structures?

-- glen
0
gah1 (524)
10/11/2006 6:28:35 PM
adaworks@sbcglobal.net wrote:
 
> "robin" <robin_v@bigpond.com> wrote in message 
> news:jMCWg.44644$rP1.30083@news-server.bigpond.net.au...

>> We have been using modern Fortran for 40 years.
>> It's called PL/I.

> This may have been true up through some of the
> recent standards revisions.   I think it is no longer
> true.

It is true in the sense that one goal of PL/I was to
replace Fortran.  I believe IBM intended not to write
Fortran or COBOL compilers for OS/360, and convert everyone
to PL/I.  

If the compiler came out on time, was as fast as Fortran
and COBOL compilers, and generated code ran as fast, it
might have worked.  

Do you remember when OS/2 was called DOS 5.0?

-- glen
0
gah1 (524)
10/11/2006 6:34:24 PM
It is my best guess that IBM always expected to provide a COBOL compiler.  This 
is because (at that time) getting contracts with the US government (not JUST the 
DOD) required that you have a COBOL compiler for your machine/operating system. 
This requirement didn't go away until the late 1990's.

P.S.  As one of (if not the only) "customer" for IBM's "PRPQ" that supported 
the '85 COBOL Standard before VS COBOL II, R3.0 came out, I am pretty certain 
that IBM took this requirement pretty seriously.

P.P.S.  However, I also always said that the NIST "certification tests" for 
COBOL were "so weak that - with a little effort - a decent PL/I preprocessor 
could have been written to pass the tests".  So maybe, they were going to offer 
their PL/I compiler as a "COBOL conforming" compiler <G>.

-- 
Bill Klein
 wmklein <at> ix.netcom.com
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:egjdfg$orp$4@naig.caltech.edu...
> adaworks@sbcglobal.net wrote:
>
>> "robin" <robin_v@bigpond.com> wrote in message
>> news:jMCWg.44644$rP1.30083@news-server.bigpond.net.au...
>
>>> We have been using modern Fortran for 40 years.
>>> It's called PL/I.
>
>> This may have been true up through some of the
>> recent standards revisions.   I think it is no longer
>> true.
>
> It is true in the sense that one goal of PL/I was to
> replace Fortran.  I believe IBM intended not to write
> Fortran or COBOL compilers for OS/360, and convert everyone
> to PL/I.
>
> If the compiler came out on time, was as fast as Fortran
> and COBOL compilers, and generated code ran as fast, it
> might have worked.
>
> Do you remember when OS/2 was called DOS 5.0?
>
> -- glen 


0
wmklein (2605)
10/11/2006 6:47:13 PM
William M. Klein <wmklein@nospam.netcom.com> wrote:
> It is my best guess that IBM always expected to provide a COBOL 
> compiler.  This is because (at that time) getting contracts with the 
> US government (not JUST the DOD) required that you have a COBOL 
> compiler for your machine/operating system. 

You may be right, but it also might depend on the definition
of compiler.  They did offer a COBOL to PL/I translator.
That as a front end for a PL/I compiler might qualify.

I am more sure about Fortran, though I was only about 7 at the time.

Consider how the early C++ compilers used C as an intermediate.

-- glen
0
gah1 (524)
10/11/2006 7:04:04 PM
On Wed, 11 Oct 2006 20:41:43 -0700, robin <robin_v@bigpond.com> wrote:

> "Tom Linden" <tom@kednos-remove.com> wrote in message
> news:op.tg8kw5vjtte90l@hyrrokkin...
>> On Tue, 10 Oct 2006 18:47:32 -0700, <adaworks@sbcglobal.net> wrote:
>>
>> > "Tom Linden" <tom@kednos-remove.com> wrote in message
>> > news:op.tg7hfgwutte90l@hyrrokkin...
>> > So far, this approach has proven useful in a number of places.  It
>> > has certainly been a powerful asset in Eiffel.  The assertions are
>> > in the application.  It simply happens that the compiler can check
>> > them.   Also, in the case of Eiffel (not SPARK) violation of the
>> > assertions at run-time will raise some kind of exception.
>> >
>> > PL/I has many  of the pieces in place for a good assertion model
>> > already.   I think it could be a good upgrade to provide this
>> > feature.  However, you are the PL/I expert, not I, and you are
>> > better able to see whether this would have value to the PL/I
>> > community.
>>
>> The pieces are there for the application.  For example, suppose you  
>> wanted
>> to apply constraints to some variable, x, which were related to  
>> variables y
>> and z.  You could then have something like
>>
>> if f(x,y,z) then signal condition assertx;
>
> or even
>
>     assert (x, y, z);
>
> as I have pointed out before.

I presume that is a macro?

>
>> and in a suitable spot you had the handler
>>
>> on condition(assertx) begin;
>>                        .
>>                        .
>>                        .
>>                        end;
>>
>> So my point is this,  assertions (BTW, constraints would be a better  
>> term)
>> can already be handled in PL/I, and since they are application specific,
>> the applications programmer is better suited to implement them.
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/12/2006 3:33:20 AM
"Tom Linden" <tom@kednos-remove.com> wrote in message
news:op.tg8kw5vjtte90l@hyrrokkin...
> On Tue, 10 Oct 2006 18:47:32 -0700, <adaworks@sbcglobal.net> wrote:
>
> > "Tom Linden" <tom@kednos-remove.com> wrote in message
> > news:op.tg7hfgwutte90l@hyrrokkin...
> > So far, this approach has proven useful in a number of places.  It
> > has certainly been a powerful asset in Eiffel.  The assertions are
> > in the application.  It simply happens that the compiler can check
> > them.   Also, in the case of Eiffel (not SPARK) violation of the
> > assertions at run-time will raise some kind of exception.
> >
> > PL/I has many  of the pieces in place for a good assertion model
> > already.   I think it could be a good upgrade to provide this
> > feature.  However, you are the PL/I expert, not I, and you are
> > better able to see whether this would have value to the PL/I
> > community.
>
> The pieces are there for the application.  For example, suppose you wanted
> to apply constraints to some variable, x, which were related to variables y
> and z.  You could then have something like
>
> if f(x,y,z) then signal condition assertx;

or even

    assert (x, y, z);

as I have pointed out before.

> and in a suitable spot you had the handler
>
> on condition(assertx) begin;
>                        .
>                        .
>                        .
>                        end;
>
> So my point is this,  assertions (BTW, constraints would be a better term)
> can already be handled in PL/I, and since they are application specific,
> the applications programmer is better suited to implement them.


0
robin_v (2737)
10/12/2006 3:41:43 AM
<adaworks@sbcglobal.net> wrote in message
news:wSXWg.13333$6S3.12640@newssvr25.news.prodigy.net...
>
> "Bob Lidral" <l1dralspamba1t@comcast.net> wrote in message
> news:452B51C0.1080309@comcast.net...
> > adaworks@sbcglobal.net wrote:
> >> "robin" <robin_v@bigpond.com> wrote in message
> >> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
> >>
> > Maybe I missed something, but there's a question I've seen asked several
times
> > here that you haven't answered yet.
> >
> > Suppose there's a variable that can legally have any value representable by
> > its underlying machine representation (integer, character, Boolean, floating
> > point, etc. -- especially Boolean) that is initialized to a value somewhere
in
> > the program other than where it's declared.
> >
> > Further, suppose that variable is only used in parts of the program where
it's
> > not possible to determine statically at compilation time whether it has
> > already been set to some value.
> >
> > In such a case, how would SPARK determine the variable had not been
> > initialized before being used?  Clearly it can't reject the program at
> > compilation time.  Presumably, if I've understood your postings, SPARK will
> > somehow ensure it is initialized (at load time?) to some invalid value so
when
> > it is first used, its use will raise some sort of exception.
> >
> > Please pardon the reference to C or PL/I data types (well, it is the PL/I
> > newsgroup, despite DF's rantings).  Please give some examples of invalid
> > values for C's char, unsigned char, short, or float variables. Please give
> > some examples of invalid values for PL/I's character or bit(1) variables.
How
> > do these values cause exceptions to occur?  For the IEEE representations of
> > floating point data, it's possible to use a signaling NaN -- if it's
supported
> > by the hardware.  But what's an invalid value for bit(1)?  Valid values are
> > '0'b and '1'b and PL/I only uses 1 bit to store such values.  How many other

> > values can a single bit represent?  One would hope that if a single bit
> > actually did hold some value other than '0'b or '1'b, the hardware might
raise
> > an exception, but I'm not sure how reliable such hardware would be in the
> > first place. :-)
> >
> The first part of your question is about a value that is legally
> representable on a particular machine.   This is not the criterion
> used by either SPARK or Ada.   Rather, it uses the notion
> of a value that is legally representable for some type.
>
> A type is not the same as a legal machine representation in
> this model.  Rather, a type is a legal representation based
> on how the type is defined.   The underlying concept is
> name equivalence rather than structural equivalence.  Let
> me begin with a very simple type declaration.
>
>         type Number is range -473..250;
>         for Number'Size use 32;
>
> The for statement is not required, but I added it to
> force Number to be represented in 32 bits.
>
> A value of type Number cannot be outside the
> bounds of -473 through 250 even though it is
> represented in the machine as 32 bits.
>
> A value of a type may not have a lifetime longer
> than the declaration of that type.  Therefore, once
> the type is defined, any variables of that type are
> going to be in scope.   However, even though they
> are in scope, they may not be directly visible.
>
> At any place where a value of a type is manipulated,
> whether through assignment or otherwise, it must
> be directly visible.  There will never be hidden
> operations on a value of a declared type.  The
> compiler can easily check whether a value of a given
> type is ever initialized to a value, either at the time of
> declaration or somewhere else in the program.
>
> When the compiler determines, and it will always
> determine this, that a value can never be given a
> value anywhere in the program, it will raise an
> error at compile-time.    Further, if a value is
> declared and initialized at the time of declaration,
> and if it is never used anywhere in the program, the
> compiler will report this too.

Is there anything special about this?  Some non-Ada compilers
do this.


0
robin_v (2737)
10/12/2006 3:41:45 AM
In <op.tg9iojx5tte90l@hyrrokkin>, on 10/11/2006
   at 08:01 AM, "Tom Linden" <tom@kednos-remove.com> said:

>So you would favour putting the machinery in the compiler, which BTW,
>is not difficult.

Yes, subject to enabling and disabling conditions, e.g., no check on
JOE if

(NORANGE(JOE)):JOE=JOHN;

>Would you want this to apply to individual statements as is done
>with condition prefixes or only per scope?

Range checking should apply[1] throughout the scope of the variable,
as with any other element of a declaration. There might be other types
of constraints that applied to single statements.

>Would you want general holonomic contraint as might be implemented 
>by a function call, or are you thinking of just, linear constraints, 
>e.g., range of values ?

I'd want range checking regardless, but if the goal is automatic
correctness proofs then you need something more general. I'm not sure
what the best way is to fit pre- and post-assertions into PL/I, but my
first[2] thought is to use keywords on BEGIN, END and PROCEDURE.

[1] Except where explicitly suppressed.

[2] I don't know whether the granularity would be fine enough.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/12/2006 12:01:24 PM
In <op.tg9jkig8tte90l@hyrrokkin>, on 10/11/2006
   at 08:20 AM, "Tom Linden" <tom@kednos-remove.com> said:

>As I stated elsewhere, they do not form a dense set.  Anyone who has
>had even a cursory reading on the topic of numerical analysis would
>be of the opinion that the use of rationals for computations is silly
>and ill-advised.

Rationals is all we have. It's like the old joke whose punch line is
"We've already settled that; now we're haggling over the price."
Whether it's scaled fixed point, floating point or explicit fractions,
it's all rational numbers, limited to finite size and precision by the
hardware.

>You design your code

Within the constraints of what is physically realizable.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/12/2006 12:06:04 PM
In <aN7Xg.10728$vJ2.8224@newssvr12.news.prodigy.com>, on 10/11/2006
   at 03:07 PM, <adaworks@sbcglobal.net> said:

>That can never happen.

Why not? What if the assignment is in a conditional construct,
possibly involving external input?

>In any case, global data, in modern programming practice is far less
>common

ObPedant ITYM are

>Does PL/I allow/encourage global data?

Allow, not encourage.

>Does PL/I support a strong model of data localization?

? The default is AUTOMATIC, which is lexically scoped.

>but I wonder about how that is used in practice.

Well, I've seen a lot of code that uses STATIC, but overall AUTOMATIC
is more common[1].

[1] Maybe your pun was unintended; mine wasn't ;-)

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/12/2006 12:17:14 PM
In <egjcpr$orp$2@naig.caltech.edu>, on 10/11/2006
   at 06:22 PM, glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu>
said:

>I have had people tell me that there was a TLB, or something similar,
>for segment selectors on some models, but I never found anything in
>the documentation about it.

The claims that I've seen related to the Pentium, which was far too
late in the game. It was needed on the 80386.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/12/2006 12:21:09 PM
In <egjdfg$orp$4@naig.caltech.edu>, on 10/11/2006
   at 06:34 PM, glen herrmannsfeldt <gah@seniti.ugcs.caltech.edu>
said:

>If the compiler came out on time, was as fast as Fortran and COBOL
>compilers, and generated code ran as fast, it might have worked.

Only if the LCP's were efficient, which they were not.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/12/2006 12:22:30 PM
On Thu, 12 Oct 2006 05:06:04 -0700, Shmuel (Seymour J.) Metz  
<spamtrap@library.lspace.org.invalid> wrote:

> In <op.tg9jkig8tte90l@hyrrokkin>, on 10/11/2006
>    at 08:20 AM, "Tom Linden" <tom@kednos-remove.com> said:
>
>> As I stated elsewhere, they do not form a dense set.  Anyone who has
>> had even a cursory reading on the topic of numerical analysis would
>> be of the opinion that the use of rationals for computations is silly
>> and ill-advised.
>
> Rationals is all we have. It's like the old joke whose punch line is
> "We've already settled that; now we're haggling over the price."
> Whether it's scaled fixed point, floating point or explicit fractions,
> it's all rational numbers, limited to finite size and precision by the
> hardware.
Yes and no.  It is true that all numerical representations on computers
are rationals, but the discussion started with using representations
which retained the numerator/denominator.  The different
representations used are not isomorphic and only bounded under certain
operations.
>
>> You design your code
>
> Within the constraints of what is physically realizable.
>
and what is suitable, i.e. scaled fixed decimals for financials,
float for structural analysis, etc.
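
To make the distinction concrete, a small PL/I sketch (illustrative only;
the variable names are invented):

     declare balance fixed decimal(15,2);  /* scaled fixed decimal       */
     declare rate    float binary(53);     /* binary floating point      */
     balance = 0.10;   /* held exactly: decimal scaling preserves cents  */
     rate    = 0.10;   /* only the nearest representable binary value    */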


-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom294 (608)
10/12/2006 9:21:59 PM
"robin" <robin_v@bigpond.com> wrote in message 
news:ZPiXg.45810$rP1.26232@news-server.bigpond.net.au...
> <adaworks@sbcglobal.net> wrote in message
> news:wSXWg.13333$6S3.12640@newssvr25.news.prodigy.net...
>>
>> "Bob Lidral" <l1dralspamba1t@comcast.net> wrote in message
>> news:452B51C0.1080309@comcast.net...
>> > adaworks@sbcglobal.net wrote:
>> >> "robin" <robin_v@bigpond.com> wrote in message
>> >> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>> >>
>> > Maybe I missed something, but there's a question I've seen asked several
> times
>> > here that you haven't answered yet.
>> >
>> > Suppose there's a variable that can legally have any value representable by
>> > its underlying machine representation (integer, character, Boolean, 
>> > floating
>> > point, etc. -- especially Boolean) that is initialized to a value somewhere
> in
>> > the program other than where it's declared.
>> >
>> > Further, suppose that variable is only used in parts of the program where
> it's
>> > not possible to determine statically at compilation time whether it has
>> > already been set to some value.
>> >
>> > In such a case, how would SPARK determine the variable had not been
>> > initialized before being used?  Clearly it can't reject the program at
>> > compilation time.  Presumably, if I've understood your postings, SPARK will
>> > somehow ensure it is initialized (at load time?) to some invalid value so
> when
>> > it is first used, its use will raise some sort of exception.
>> >
>> > Please pardon the reference to C or PL/I data types (well, it is the PL/I
>> > newsgroup, despite DF's rantings).  Please give some examples of invalid
>> > values for C's char, unsigned char, short, or float variables. Please give
>> > some examples of invalid values for PL/I's character or bit(1) variables.
> How
>> > do these values cause exceptions to occur?  For the IEEE representations of
>> > floating point data, it's possible to use a signaling NaN -- if it's
> supported
>> > by the hardware.  But what's an invalid value for bit(1)?  Valid values are
>> > '0'b and '1'b and PL/I only uses 1 bit to store such values.  How many 
>> > other
>
>> > values can a single bit represent?  One would hope that if a single bit
>> > actually did hold some value other than '0'b or '1'b, the hardware might
> raise
>> > an exception, but I'm not sure how reliable such hardware would be in the
>> > first place. :-)
>> >
>> The first part of your question is about a value that is legally
>> representable on a particular machine.   This is not the criterion
>> used by either SPARK or Ada.   Rather, it uses the notion
>> of a value that is legally representable for some type.
>>
>> A type is not the same as a legal machine representation in
>> this model.  Rather, a type is a legal representation based
>> on how the type is defined.   The underlying concept is
>> name equivalence rather than structural equivalence.  Let
>> me begin with a very simple type declaration.
>>
>>         type Number is range -473..250;
>>         for Number'Size use 32;
>>
>> The for statement is not required, but I added it to
>> force Number to be represented in 32 bits.
>>
>> A value of type Number cannot be outside the
>> bounds of -473 through 250 even though it is
>> represented in the machine as 32 bits.
>>
>> A value of a type may not have a lifetime longer
>> than the declaration of that type.  Therefore, once
>> the type is defined, any variables of that type are
>> going to be in scope.   However, even though they
>> are in scope, they may not be directly visible.
>>
>> At any place where a value of a type is manipulated,
>> whether through assignment or otherwise, it must
>> be directly visible.  There will never be hidden
>> operations on a value of a declared type.  The
>> compiler can easily check whether a value of a given
>> type is ever initialized to a value, either at the time of
>> declaration or somewhere else in the program.
>>
>> When the compiler determines, and it will always
>> determine this, that a value can never be given a
>> value anywhere in the program, it will raise an
>> error at compile-time.    Further, if a value is
>> declared and initialized at the time of declaration,
>> and if it is never used anywhere in the program, the
>> compiler will report this too.
>
> Is there anything special about this?  Some non-Ada compilers
> do this.
>
Good.  However, I'm not sure many do everything I just
described.

Richard 


0
adaworks2 (748)
10/13/2006 5:57:51 AM
"Tom Linden" <tom@kednos-remove.com> wrote in message 
news:op.tg9jzwxatte90l@hyrrokkin...

TL>It is neutral in this regard.  You may use global data or not.
TL>I use it.  This is also more an issue of the OS.  For example, under
TL>OpenVMS declarations can have the added attribute GLOBALDEF or GLOBALREF
TL>and you can specify the psect in which the GLOBALDEF resides.
TL>
TL>Data localization, if I am following you,  can be in some psect as above
TL>or in an AREA.
TL>
Localization versus globalization is not an operating systems issue.
It is more of a programming design issue.   We have learned over
the past forty years that global data is usually a really bad idea.
While it makes a program easy to write, it makes it hard for
continued modification.

In the object model, we go a bit beyond simple localization.  This
is called encapsulation.   Encapsulation is based on the notion of
a tight binding of the public methods to hidden implementation and
hidden data.   That is, the data is only accessible by a client of
that data through the public methods.

There are many levels of localization.  In general, languages that are
strictly procedural (i.e., not OOP) have fewer levels of localization
available.

This concept is closely related to the notions of cohesion and
coupling.   The more global the data, the tighter the coupling,
in most cases.  The more localized the data, the safer it is
from unruly behavior within the rest of the program.

Richard 


0
adaworks2 (748)
10/13/2006 6:06:59 AM
"glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
news:egjd4j$orp$3@naig.caltech.edu...
> (snip, someone wrote)
>
>> In any case, global data, in modern programming practice
>> is far less common (no pun intended) than it was during
>> the earlier years.  Does PL/I allow/encourage global data?
>
> There is STATIC EXTERNAL, pretty much equivalent to Fortran
> COMMON or C's extern.  The default is always automatic, as
> usual for a language allowing recursion.
>
>> Does PL/I support a strong model of data localization? It
>> seems to provide such support, but I wonder about how
>> that is used in practice.
>
> What is data localization?  Is that structures?
>
Localization is the practice of restricting direct visibility
of data to the smallest scope possible.   Where one can
separate the concerns of scope from visibility, as we can
in some languages, it also limits visibility to the smallest
set of relevant operations possible.
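
A small PL/I sketch of shrinking visibility to an inner block (names
invented, purely as an illustration):

     outer: procedure;
        declare wide fixed binary(31);       /* visible throughout OUTER       */
        begin;
           declare narrow fixed binary(31);  /* visible only within this block */
           narrow = 1;
           wide = narrow;
        end;
        /* NARROW is no longer visible here */
     end outer;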

Global data can be considered in terms of the COBOL Data
Division, or Fortran Common, both of which are really bad
language design constructs.   The Data Division makes a
program harder and harder to maintain over a long period
of time.

In 1977, in a paper on the goals and principles of software
engineering, Ross, Goodenough, and Irvine described those
principles.   Localization was one of them.  Others included
abstraction, information hiding, confirmability, etc.

Richard Riehle 


0
adaworks2 (748)
10/13/2006 6:14:20 AM
adaworks@sbcglobal.net wrote:

> "robin" <robin_v@bigpond.com> wrote in message 
> news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
> 
>><adaworks@sbcglobal.net> wrote in message
>>news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
>>
>>>"robin" <robin_v@bigpond.com> wrote in message
>>>news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>>>
>>>><adaworks@sbcglobal.net> wrote in message
>>>
>>> >
>>>RR> SPARK ensures that, at run-time, every scalar will have a value
>>>RR> that conforms to the invariant given for that value.
>>>
>>>>That doesn't answer the questions.
>>>>He asked 1. whether or not an uninitialized variable gets an invalid value;
>>>>2. does spark allow variables to have any value?
>>>>
> 
>       1)   SPARK will not allow a variable to have an invalid value
> 
>        2)  SPARK will allow a variable to have a value that conforms
>             to the range constraint for the type of that variable.
> 
>                    type Number is range 50..70;
>                    x : Number := 15;    -- illegal;
>                  ========================
>                    x := any value not in the range of 50 through 70,
>                           is still illegal.
> 
> 
>>That's still inadequate.  A variable might well be assigned a value,
>>but before it is assigned a value, an attempt is made to use it.
>>
> 
> That can never happen.   The program is checked at compile-time,
> and any such attempt will cause a compile-time error.  This is
> easy to detect.  Note that even within a procedure, operations
> on values are checked by the compiler.  Consider,
> 
>             procedure ABC (x : in integer;  y : out integer) is
>                  z : integer;
>             begin
>                  x := y;       -- compile error; x is an in parameter and
>                                  -- cannot be assigned a new value
>                  y := z;       -- compile-time error; z has never been
>                                  -- assigned a value;
>                  z := y        -- compile-time error; y is an out parameter,
>                                  -- and it cannot be assigned to another 
> variable
>                                  -- unless it has been assigned a value first
>            end ABC;
> 
> In the above example, the compiler does a lot of checking to ensure
> the absence of stupid mistakes.  A parameter designated as "in" is
> effectively a constant within the procedure's algorithm.  One that is
> designated as "out" cannot be assigned to another value until it has
> a value of its own.   A local variable that has never been assigned
> a value cannot be used on the right side of assignment unless it has
> already been assigned a value.
 > [...]

That's all well and good for straight line code segments.  But the 
problem is a lot more difficult than that.  What about loops and 
conditional statements?  What if initial assignment to a variable 
depends on a value read from input?


Bob Lidral
lidral  at  alum  dot  mit  dot  edu
0
10/13/2006 6:19:43 AM
"LR" <lruss@superlink.net> wrote in message 
news:452d1191$0$25785$cc2e38e6@news.uslec.net...
> adaworks@sbcglobal.net wrote:
>
>> "robin" <robin_v@bigpond.com> wrote in message 
>> news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
>>
>>><adaworks@sbcglobal.net> wrote in message
>>>news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
>>>
>>>>"robin" <robin_v@bigpond.com> wrote in message
>>>>news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>>>>
>>>>><adaworks@sbcglobal.net> wrote in message
>>>>
>>>> >
>>>>RR> SPARK ensures that, at run-time, every scalar will have a value
>>>>RR> that conforms to the invariant given for that value.
>>>>
>>>>>That doesn't answer the questions.
>>>>>He asked 1. whether or not an uninitialized variable gets an invalid value;
>>>>>2. does spark allow variables to have any value?
>>>>>
>>
>>       1)   SPARK will not allow a variable to have an invalid value
>>
>>        2)  SPARK will allow a variable to have a value that conforms
>>             to the range constraint for the type of that variable.
>>
>>                    type Number is range 50..70;
>>                    x : Number := 15;    -- illegal;
>>                  ========================
>>                    x := any value not in the range of 50 through 70,
>>                           is still illegal.
>
>
> Suppose using SPARK (and bear with me please, because I'm not a SPARK or Ada 
> programmer) someone writes:
> procedure ABC() is z : integer;
> begin
> type Number is range 50..70;
> x : Number;
> y : Number : = 51;
> x := y;
> z := 0;
> end ABC;
>
> Ok, then assuming that I haven't completely botched the syntax and semantics, 
> will the line that says "x : Number;"  yield some sort of warning or error? 
> If not, then what value will x have after variable y is instantiated and given 
> its value, but before the line "x := y;" is executed?
>
If, at run-time, an instance of type Number is given a value that
is not within the bounds of 50 through 70, a constraint error exception
will be raised.
>
> Keeping in mind that provability is a very nice concept, but we're still using 
> software as a tool to do this, so we must consider that buggen lurk within, 
> two more:
>
No bugs in the code just shown. No errors will get past this either.
>
> If it's possible to write the above code, will x's value be predictable at 
> that point?
>
All we know about x is that it must be within the range of values
given for the type of which it is an instance.
>
> If it's possible to write the above code, will x's value, be in the range that 
> is specified for the type (if that's what it's called) Number at that point?
>
First, the code shown will not compile because y has a value out of
range that will be rejected by the compiler.  Second, x must have
a value within the range specified for the type.  If, somehow, a
different value were supplied anywhere in the program, whether
through a calculation or otherwise, a constraint error will be
raised.

A type definition can specify a set of legal values for instances
of that type.  The run-time will always raise an exception when
that instance somehow gets a value not in the set of legal values.
Of course, a programmer can use a compiler directive to suppress
exception checking, but that defeats the purpose of the model.

Richard 


0
adaworks2 (748)
10/13/2006 6:21:45 AM
Shmuel (Seymour J.) Metz wrote:
> In <YmYWg.13341$6S3.3964@newssvr25.news.prodigy.net>, on 10/11/2006
>    at 02:08 AM, <adaworks@sbcglobal.net> said:
> 
> 
>>But summation of a series of rational numbers that are kept in their
>>fractional form will produce a greater degree of accuracy and no
>>cumulative drift.
> 
> 
> I'm not questioning that one partial representation of rational
> numbers may have an advantage over another, either globally or in
> specific cases. I'm saying that none of them is able to represent
> arbitrary real numbers.
> 
What do you mean by the phrase "represent arbitrary real numbers?"

What method are you proposing to use to represent arbitrary real numbers?

If you have in mind some specific rational number, you can represent it exactly 
in many ways.

If you have in mind some specific irrational real number you can only represent 
it exactly by giving a formula that yields its value or an equation that it 
satisfies (or by mentioning an agreed upon name for it that is backed up by one 
or more such formulas or equations).  In fact, if you can't cite such a formula 
or equation or name, I deny that you can have a specific irrational real number 
in mind.  After all there is an uncountable infinity of them, how are you 
proposing to single out the one you say you are thinking of and distinguish it 
from all others?

As to representing an arbitrary real number, may I suggest x.  At any rate it's 
traditional.

0
jjw (608)
10/13/2006 6:24:43 AM
robin wrote:
> 
> The equivalent in the software world is to abort.
> A circuit-breaker is not "self-resetting".

Household circuit breakers in the service entrance panel generally don't self 
reset, but the ones in power company substations generally do attempt one or 
more automatic resets.  Also, quite a few appliances have circuit breakers that 
self reset after what might be called a cooling off period.
0
jjw (608)
10/13/2006 6:31:59 AM
Tom Linden wrote:
> On Wed, 11 Oct 2006 05:17:07 -0700, Shmuel (Seymour J.) Metz  
> <spamtrap@library.lspace.org.invalid> wrote:
> 
>> In <YmYWg.13341$6S3.3964@newssvr25.news.prodigy.net>, on 10/11/2006
>>    at 02:08 AM, <adaworks@sbcglobal.net> said:
>>
>>> But summation of a series of rational numbers that are kept in their
>>> fractional form will produce a greater degree of accuracy and no
>>> cumulative drift.
>>
>>
>> I'm not questioning that one partial representation of rational
>> numbers may have an advantage over another, either globally or in
>> specific cases. I'm saying that none of them is able to represent
>> arbitrary real numbers.
>>
> As I stated elsewhere, they do not form a dense set.

Nonsense.

The rational numbers are a dense subset of the reals.  The proof is trivial. 
The closure of the set of rational numbers is the set of real numbers because 
every real number is the limit of a sequence of rational numbers.


> Anyone who has had
> even a cursory reading on the topic of numerical analysis would be of the
> opinion that the use of rationals for computations is silly and  
> ill-advised.

Again nonsense.  Aside from nonstandard use of irrational bases, which doesn't 
really change anything, all practical computations are carried out entirely with 
rational numbers.



> You design your code to achieve certain levels of accuracy, much the  
> equivalent
> of maintaining a given signal to noise ratio.  It is, after all, possible
> to analyze the propagation of errors in any set of calculations.  Consider,
> for example, using binary floating point for financial calculations rather
> than scaled fixed decimal.  You can in advance determine how large the
> characteristic of the former must be to ensure acceptable propagation of
> errors.  The other point here is that the latter, with proper precision
> and  scale
> forms a closed, bounded set wrt these type of calculations;

No computer arithmetic data type that purports to represent either integers or 
reals, whether floating point or fixed, whether binary or decimal, is closed. 
There are always pairs of values for which operations such as addition or 
multiplication produce overflows or underflows instead of valid results. 
Moreover, in floating point there are always pairs of values and operations for 
which the "exact" result is not even representable and an approximation is 
substituted.
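
A tiny PL/I illustration of that point about closure; whether and when the
condition is raised can depend on compiler options, so treat this as a
sketch only:

     declare (i, j) fixed binary(31);
     on fixedoverflow
        put skip list ('fixed-point overflow: result not representable');
     i = 2147483647;      /* the largest fixed binary(31) value  */
     j = i + 1;           /* raises the FIXEDOVERFLOW condition  */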

Computer arithmetic is also not associative, so optimizers that introduce 
transformations based on the associative law can introduce errors.

For example, in 32 bit integer arithmetic, let x=(2**31)-10, y=(2**31)-20, and 
z=-x.  Consider the expression x+y+z.  If 32 bit integer computer arithmetic is 
associative we must have (x+y)+z=x+(y+z).  But the left hand side overflows 
while the right hand side evaluates to (2**31)-20.  Ergo, even integer 
arithmetic is not associative.  The situation is even worse for floating point 
in that it is easy to give values of x, y, and z, where both sides produce valid 
values but the values are not the same.  Moreover the difference between the two 
associations can be embarrassingly large.
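
A short PL/I sketch of the floating-point case, assuming IEEE double
precision and round-to-nearest (the values are chosen only to show the
effect):

     declare (a, b, c) float binary(53);
     a =  1e16;
     b = -1e16;
     c =  1;
     put skip list ( (a + b) + c );  /* prints 1                                  */
     put skip list ( a + (b + c) );  /* prints 0: c is lost when added to b first */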

Computer arithmetic on native data types is inherently bounded, tis true.

  whereas,
> the  former
> does not and uses approximations to the decimals within the acceptable
> margin of error
> 
> 
0
jjw (608)
10/13/2006 7:36:15 AM
"Tom Linden" <tom@kednos-remove.com> wrote in message
news:op.thahhutttte90l@hyrrokkin...
> On Wed, 11 Oct 2006 20:41:43 -0700, robin <robin_v@bigpond.com> wrote:
>
> > "Tom Linden" <tom@kednos-remove.com> wrote in message
> > news:op.tg8kw5vjtte90l@hyrrokkin...
> >> On Tue, 10 Oct 2006 18:47:32 -0700, <adaworks@sbcglobal.net> wrote:
> >>
> >> > "Tom Linden" <tom@kednos-remove.com> wrote in message
> >> > news:op.tg7hfgwutte90l@hyrrokkin...
> >> > So far, this approach has proven useful in a number of places.  It
> >> > has certainly been a powerful asset in Eiffel.  The assertions are
> >> > in the application.  It simply happens that the compiler can check
> >> > them.   Also, in the case of Eiffel (not SPARK) violation of the
> >> > assertions at run-time will raise some kind of exception.
> >> >
> >> > PL/I has many  of the pieces in place for a good assertion model
> >> > already.   I think it could be a good upgrade to provide this
> >> > feature.  However, you are the PL/I expert, not I, and you are
> >> > better able to see whether this would have value to the PL/I
> >> > community.
> >>
> >> The pieces are there for the application.  For example, suppose you
> >> wanted
> >> to apply constraints to some variable, x, which were related to
> >> variables y
> >> and z.  You could then have something like
> >>
> >> if f(x,y,z) then signal condition assertx;
> >
> > or even
> >
> >     assert (x, y, z);
> >
> > as I have pointed out before.
>
> I presume that is a macro?

It's a macro call.
The macro procedure generates the tests relevant to the
assertion.
Its use simplifies the writing of such checks.
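
The macro itself isn't shown in the thread, but a hand-written equivalent of
what an ASSERT(x, y, z) invocation might expand into could look like this
sketch (the condition name ASSERT_FAILED and the particular test are
invented):

     declare assert_failed condition;

     on condition(assert_failed)
        begin;
           put skip list ('assertion on x, y, z violated');
           /* log, repair, or stop here, as the application requires */
        end;

     if ^(x >= y & x <= z) then    /* whatever relation the assertion states */
        signal condition(assert_failed);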


0
robin_v (2737)
10/13/2006 1:29:30 PM
adaworks@sbcglobal.net wrote:

> "LR" <lruss@superlink.net> wrote in message 
> news:452d1191$0$25785$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>
>>>"robin" <robin_v@bigpond.com> wrote in message 
>>>news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
>>>
>>>
>>>><adaworks@sbcglobal.net> wrote in message
>>>>news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
>>>>
>>>>
>>>>>"robin" <robin_v@bigpond.com> wrote in message
>>>>>news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>>>>>
>>>>>
>>>>>><adaworks@sbcglobal.net> wrote in message
>>>>>
>>>>>RR> SPARK ensures that, at run-time, every scalar will have a value
>>>>>RR> that conforms to the invariant given for that value.
>>>>>
>>>>>
>>>>>>That doesn't answer the questions.
>>>>>>He asked 1. whether or not an uninitialized variable gets an invalid value;
>>>>>>2. does spark allow variables to have any value?
>>>>>>
>>>
>>>      1)   SPARK will not allow a variable to have an invalid value
>>>
>>>       2)  SPARK will allow a variable to have a value that conforms
>>>            to the range constraint for the type of that variable.
>>>
>>>                   type Number is range 50..70;
>>>                   x : Number := 15;    -- illegal;
>>>                 ========================
>>>                   x := any value not in the range of 50 through 70,
>>>                          is still illegal.
>>
>>
>>Suppose using SPARK (and bear with me please, because I'm not a SPARK or Ada 
>>programmer) someone writes:
>>procedure ABC() is z : integer;
>>begin
>>type Number is range 50..70;
>>x : Number;
>>y : Number : = 51;
>>x := y;
>>z := 0;
>>end ABC;
>>
>>Ok, then assuming that I haven't completely botched the syntax and semantics, 
>>will the line that says "x : Number;"  yield some sort of warning or error? 
>>If not, then what value will x have after variable y is instantiated and given 
>>its value, but before the line "x := y;" is executed?
>>
> 
> If, at run-time, an instance of type Number is given a value that
> is not within the bounds of 50 through 70, a constraint error exception
> will be raised.


Now I'm really confused, because in another post in this thread 
<1160583443.135888.62980@i3g2000cwc.googlegroups.com> in response to 
this question, roderick.chapman@googlemail.com wrote:

RC> "No - perfectly OK variable declaration."

And then I asked:

LR>> If it's possible to write the above code,
LR>> will x's value, be in the
LR>> range that is specified for the type (
LR>> if that's what it's called) Number
LR>> at that point?

Where the point referred to is after y is declared and given a value.

RC> No.

So if I'm interpreting correctly, what RC has written implies that x's 
declaration won't get any kind of message from SPARK at compile time, 
but since you write that an out of range value will get a runtime error 
in SPARK, and RC writes that the value of x will be out of range during 
runtime, then:  Will this get a runtime error?



>>Keeping in mind that provability is a very nice concept, but we're still using 
>>software as a tool to do this, so we must consider that buggen lurk within, 
>>two more:
>>
> 
> No bugs in the code just shown. No errors will get past this either.

I hope you'll forgive me, but I don't think that what you've written is 
responsive to the point I made.  It also seems to contradict other 
things that have been written in this thread.


> 
>>If it's possible to write the above code, will x's value be predictable at 
>>that point?
>>
> 
> All we know about x is that it must be within the range of values
> given for the type of which it is an instance.

Actually, RC responded more simply to this:

RC>> "No - "undefined" means all bets are off - unpredictable."


> 
>>If it's possible to write the above code, will x's value, be in the range that 
>>is specified for the type (if that's what it's called) Number at that point?
>>
> 
> First, the code shown will not compile because y has a value out of
> range that will be rejected by the compiler. 

I'm sorry, isn't 51 in the range 50..70?  Or have I misunderstood 
something so fundamental, that ((50 <= 51) and (51 <= 70)) is not true? 
  Please clarify.

RC rewrote my code and didn't change that value or suggest that it would 
cause a problem as follows

RC>> procedure ABC is
RC>>    z : integer;
RC>>   type Number is range 50..70;
RC>>   x : Number;
RC>>   y : Number : = 51;
RC>>begin
RC>>   -- HERE
RC>>   x := y;
RC>>   z := 0;
RC>>end ABC;

So what exactly is the problem?


>  Second, x must have
> a value within the range specified for the type.  If, somehow, a
> different value were supplied anywhere in the program, whether
> through a calculation or otherwise, a constraint error will be
> raised.

If you're right about y not getting a valid value.



> A type definition can specify a set of legal values for instances
> of that type.  The run-time will always raise an exception when
> that instance somehow gets a value not in the set of legal values.

So again, by what you say, declaring x without an initial value will not 
get a compile time error, but will get a runtime error in SPARK?

> Of course, a programmer can use a compiler directive to suppress
> exception checking, but that defeats the purpose of the model.

I think it would be more useful to just issue a message saying that x 
isn't initialized, but then I think that would conflict with your view 
that x can and should sometimes have an invalid value.  Although, now 
you're asserting that will result in a runtime error.

And if I understood any of this correctly, I don't know, but does 
Ada/SPARK support a boolean type?

If so, what invalid value would it be initialized to, if the user 
doesn't give it a true/false value?

TIA, and thanks for your answers so far.

LR


0
lruss (582)
10/13/2006 2:50:26 PM
Bob Lidral wrote:

> adaworks@sbcglobal.net wrote:
> 
>> "robin" <robin_v@bigpond.com> wrote in message 
>> news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
>>
>>> <adaworks@sbcglobal.net> wrote in message
>>> news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
>>>
>>>> "robin" <robin_v@bigpond.com> wrote in message
>>>> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
>>>>
>>>>> <adaworks@sbcglobal.net> wrote in message
>>>>
>>>>
>>>> >
>>>> RR> SPARK ensures that, at run-time, every scalar will have a value
>>>> RR> that conforms to the invariant given for that value.
>>>>
>>>>> That doesn't answer the questions.
>>>>> He asked 1. whether or not an uninitialized variable gets an invalid 
>>>>> value;
>>>>> 2. does spark allow variables to have any value?
>>>>>
>>
>>       1)   SPARK will not allow a variable to have an invalid value
>>
>>        2)  SPARK will allow a variable to have a value that conforms
>>             to the range constraint for the type of that variable.
>>
>>                    type Number is range 50..70;
>>                    x : Number := 15;    -- illegal;
>>                  ========================
>>                    x := any value not in the range of 50 through 70,
>>                           is still illegal.
>>
>>
>>> That's still inadequate.  A variable might well be assigned a value,
>>> but before it is assigned a value, an attempt is made to use it.
>>>
>>
>> That can never happen.   The program is checked at compile-time,
>> and any such attempt will cause a compile-time error.  This is
>> easy to detect.  Note that even within a procedure, operations
>> on values are checked by the compiler.  Consider,
>>
>>             procedure ABC (x : in integer;  y : out integer) is
>>                  z : integer;
>>             begin
>>                  x := y;       -- compile error; x is an in parameter and
>>                                  -- cannot be assigned a new value
>>                  y := z;       -- compile-time error; z has never been
>>                                  -- assigned a value;
>>                  z := y        -- compile-time error; y is an out 
>> parameter,
>>                                  -- and it cannot be assigned to 
>> another variable
>>                                  -- unless it has been assigned a 
>> value first
>>            end ABC;
>>
>> In the above example, the compiler does a lot of checking to ensure
>> the absence of stupid mistakes.  A parameter designated as "in" is
>> effectively a constant within the procedure's algorithm.  One that is
>> designated as "out" cannot be assigned to another value until it has
>> a value of its own.   A local variable that has never been assigned
>> a value cannot be used on the right side of assignment unless it has
>> already been assigned a value.
> 
>  > [...]
> 
> That's all well and good for straight line code segments.  But the 
> problem is a lot more difficult than that.  What about loops and 
> conditional statements?  

If this stuff truly works then I don't think that this would be a 
problem.  For one thing, if it makes it easier to do, all of these can 
be converted to while loops, but failing that, for loops have limits 
that can be checked. Conditional statements can be checked as well. 
Perhaps I'm missing something obvious, can you give an example of 
something that you think can't be checked in principle?

(Although, of course, as I've pointed out elsewhere, since we're relying 
on software to check the proof, we're likely not to be able to do this 
in fact.)


 > What if initial assignment to a variable
 > depends on a value read from input?

I could see how reading a value weakens the proof, but there's no reason 
why SPARK can't require the user to assert that the value is valid 
after a read.  Or generate code that does so. Or perhaps something else? 
Or do you think this conflicts with the idea of exceptions not being a 
good thing?

OTOH, I wonder what happens if you have to do something like validate 
the zip code of an address, which would probably come from a database of 
some sort.  Then our _proof_ would be trouble.  Perhaps this severely 
limits the range of problem domains for SPARK?  Or perhaps in practice 
it's ignored?

LR
0
lruss (582)
10/13/2006 2:59:50 PM
In <LiGXg.61636$E67.11841@clgrps13>, on 10/13/2006
   at 06:24 AM, "James J. Weinkam" <jjw@cs.sfu.ca> said:

>What do you mean by the phrase "represent arbitrary real numbers?"

I mean represent arbitrary real numbers, e.g., e, Pi.

>What method are you proposing to use to represent arbitrary real
>numbers?

What method do you propose for picking up the moon? I don't tend to
propose methods for doing things that are impossible. You're
challenging things that I didn't write instead of addressing the ones
that I did.

>If you have in mind some specific rational number, you can represent
>it exactly  in many ways.

I wrote "arbitrary"; I didn't limit it to rational numbers.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/13/2006 5:17:53 PM
"Shmuel (Seymour J.) Metz" <spamtrap@library.lspace.org.invalid> wrote in 
message news:452cedec$63$fuzhry+tra$mr2ice@news.patriot.net...
> In <U2YWg.13334$6S3.9165@newssvr25.news.prodigy.net>, on 10/11/2006
>   at 01:47 AM, <adaworks@sbcglobal.net> said:
>
>>Every library unit must compile successfully before any
>>dependent units will compile successfully.
>
> I hope that you're not saying that the body must compile before
> dependent routines can be compiled. That would be a disaster for the
> typical Ada application.
>
The specification can be compiled independently of the body.

The successful compilation of library units is not dependent on
successful compilation of the body (implementation).   This is
one of the powerful features of Ada (and Modula).   A design
can be compiled at the specification level long before anyone
writes a single line of algorithmic code.

Once the specification model (i.e., successful compilation of the
architecture of the system) is compiled, work can begin on the
implementation of each specification.   From time to time, we
may find a need to revisit a specification, but that is not as
problematic as it may sound.   One reason for this is that
we ordinarily push dependencies down to the body whenever
we can.  This reduces overall dependencies, and also reduces
compile-time on very large software systems.

Separate compilation is one of the more powerful features
of Ada not easily duplicated in most other languages.

Richard 


0
adaworks2 (748)
10/13/2006 5:31:19 PM
"William M. Klein" <wmklein@nospam.netcom.com> wrote in message 
news:R_aXg.58537$f62.39355@fe07.news.easynews.com...
> It is my best guess that IBM always expected to provide a COBOL compiler. 
> This is because (at that time) getting contracts with the US government (not 
> JUST the DOD) required that you have a COBOL compiler for your 
> machine/operating system. This requirement didn't go away until the late 
> 1990's.
>
Actually, IBM intended to abandon COBOL.   However, other vendors persuaded
the DoD to require COBOL for open bids.  The "straw that broke the camel's
back" was a contract for the Air Force in the late 1960's.  IBM wanted to 
propose
PL/I, but the "seven dwarfs" made the case for COBOL.
>
> P.S.  As one of - it not the only) "customer" for IBM's "PRPQ" that supported 
> the '85 COBOL Standard before VS COBOL II, R3.0 came out, I am pretty certain 
> that IBM took this requirement pretty seriously.
>
IBM did support some of the ANSI-85 COBOL standard pretty early.  The
best support, however, came from Tandem Computers.
>
> P.P.S.  However, I also always said that the NIST "certification tests" for 
> COBOL were "so weak that - with a little effort - a decent PL/I preprocessor 
> could have been written to pass the tests".  So maybe, they were going to 
> offer their PL/I compiler as a "COBOL conforming" compiler <G>.
>
NIST did try to keep up, but it just had too much going on to satisfy
all the demands on it.
>
Richard

> -- 
> Bill Klein
> wmklein <at> ix.netcom.com
> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message 
> news:egjdfg$orp$4@naig.caltech.edu...
>> adaworks@sbcglobal.net wrote:
>>
>>> "robin" <robin_v@bigpond.com> wrote in message
>>> news:jMCWg.44644$rP1.30083@news-server.bigpond.net.au...
>>
>>>> We have been using modern Fortran for 40 years.
>>>> It's called PL/I.
>>
>>> This may have been true up through some of the
>>> recent standards revisions.   I think it is no longer
>>> true.
>>
>> It is true in the sense that one goal of PL/I was to
>> replace Fortran.  I believe IBM intended not to write
>> Fortran or COBOL compilers for OS/360, and convert everyone
>> to PL/I.
>>
>> If the compiler came out on time, was as fast as Fortran
>> and COBOL compilers, and generated code ran as fast, it
>> might have worked.
>>
>> Do you remember when OS/2 was called DOS 5.0?
>>
>> -- glen
>
>
> 


0
adaworks2 (748)
10/13/2006 5:31:19 PM
"Tom Linden" <tom@kednos-remove.com> wrote in message 
news:op.tg9jkig8tte90l@hyrrokkin...
> On Wed, 11 Oct 2006 05:17:07 -0700, Shmuel (Seymour J.) Metz 
> <spamtrap@library.lspace.org.invalid> wrote:
>
>> In <YmYWg.13341$6S3.3964@newssvr25.news.prodigy.net>, on 10/11/2006
>>    at 02:08 AM, <adaworks@sbcglobal.net> said:
>>
>>> But summation of a series of rational numbers that are kept in their
>>> fractional form will produce a greater degree of accuracy and no
>>> cumulative drift.
>>
>> I'm not questioning that one partial representation of rational
>> numbers may have an advantage over another, either globally or in
>> specific cases. I'm saying that none of them is able to represent
>> arbitrary real numbers.
>>
> As I stated elsewhere, they do not form a dense set.  Anyone who has had
> even a cursory reading on the topic of numerical analysis would be of the
> opinion that the use of rationals for computations is silly and  ill-advised.
>
I do have a bit more than a "cursory reading ... of numerical analysis" and
would suggest that "silly and ill-advised" does not describe the choice
to use rational numbers.
>
Very few decimal fractions can be represented exactly in any binary
representation (0.1, for example, has no finite binary expansion).  It is
true that, with a larger word size, the issue becomes less important.
However, we are still deploying a lot of space applications and weapon
systems on machines whose word size is smaller than 32 bits.

The addition of a series of true fractions, keeping a fractional result at
each step, does preserve a level of accuracy not easily achievable by
converting each fraction to a binary approximation of its decimal value.
The cumulative drift of the latter approach may, in some cases, be
intolerable.
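
A minimal Ada sketch of that idea (the type, names, and operations are
invented for illustration only; a production library would also guard
against overflow of the numerator and denominator):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Rational_Demo is

      type Rational is record
         Num : Integer;
         Den : Positive;
      end record;

      function GCD (A, B : Natural) return Natural is
      begin
         if B = 0 then
            return A;
         else
            return GCD (B, A mod B);
         end if;
      end GCD;

      --  Exact addition: a/b + c/d = (a*d + c*b) / (b*d), then reduce.
      function "+" (L, R : Rational) return Rational is
         N : constant Integer  := L.Num * R.Den + R.Num * L.Den;
         D : constant Positive := L.Den * R.Den;
         G : constant Positive := GCD (abs N, D);
      begin
         return (Num => N / G, Den => D / G);
      end "+";

      One_Third : constant Rational := (Num => 1, Den => 3);
      Sum       : Rational          := (Num => 0, Den => 1);

   begin
      --  1/3 + 1/3 + 1/3 is exactly 1/1; no rounding drift accumulates,
      --  unlike repeated addition of a binary approximation of 1/3.
      for I in 1 .. 3 loop
         Sum := Sum + One_Third;
      end loop;
      Put_Line (Integer'Image (Sum.Num) & " /" & Integer'Image (Sum.Den));
   end Rational_Demo;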
>
Richard Riehle
>
==================================================
> You design your code to achieve certain levels of accuracy, much the 
> equivalent
> of maintaining a given signal to noise ratio.  It is, after all, possible
> to analyze the propagation of errors in any set of calculations.  Consider,
> for example, using binary floating point for financial calculations rather
> than scaled fixed decimal.  You can in advance determine how large the
> characteristic of the former must be to ensure acceptable propagation of
> errors.  The other point here is that the latter, with proper precision and
> scale,
> forms a closed, bounded set wrt these types of calculations; whereas, the 
> former
> does not and uses approximations to the decimals within the acceptable
> margin of error
>
>
> -- 
> Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
> 


0
adaworks2 (748)
10/13/2006 5:31:20 PM
<adaworks@sbcglobal.net> wrote in message 
news:H3QXg.10030$TV3.4668@newssvr21.news.prodigy.com...
>
>
> Separate compilation is one of the more powerful features
> of Ada not easily duplicated in most other languages.
>

How about showing us some Ada compilation with results...
Pls provide us a look at how you would write/rewrite/translate the Fortran 
shown at:

http://home.earthlink.net/~dave_gemini/list1.f90

including the same test case file generation thats built-in and the same 
outputs shown at
end of code.   Note the file is only input once.

If you cant do it with file input once,  try doing it the way I originally 
did by
reading the file twice.

http://home.earthlink.net/~dave_gemini/list.f90

In either case you shud wind up with the data contained wholly in 1 data 
structure.
Thanks...


 


0
dave_frank (2243)
10/13/2006 7:36:54 PM
"David Frank" <dave_frank@hotmail.com> wrote in message 
news:452fed01$0$17417$ec3e2dad@news.usenetmonster.com...
>
<snip>
> How about showing us some Ada compilation with results...
> Pls provide us a look at how you would write/rewrite/translate the Fortran 
> shown at:
>
> http://home.earthlink.net/~dave_gemini/list1.f90
>
> including the same test case file generation thats built-in and the same 
> outputs shown at
> end of code.   Note the file is only input once.
>

David,
   What POSSIBLE interest do you think translating Fortran source code into ADA 
has for a PL/I newsgroup????

-- 
Bill Klein
 wmklein <at> ix.netcom.com


0
wmklein (2605)
10/13/2006 8:30:55 PM
On Thu, 12 Oct 2006 23:14:20 -0700, <adaworks@sbcglobal.net> wrote:

>
> "glen herrmannsfeldt" <gah@seniti.ugcs.caltech.edu> wrote in message
> news:egjd4j$orp$3@naig.caltech.edu...
>> (snip, someone wrote)
>>
>>> In any case, global data, in modern programming practice
>>> is far less common (no pun intended) than it was during
>>> the earlier years.  Does PL/I allow/encourage global data?
>>
>> There is STATIC EXTERNAL, pretty much equivalent to Fortran
>> COMMON or C's extern.  The default is always automatic, as
>> usual for a language allowing recursion.
>>
>>> Does PL/I support a strong model of data localization? It
>>> seems to provide such support, but I wonder about how
>>> that is used in practice.
>>
>> What is data localization?  Is that structures?
>>
> Localization is the practice of restricting direct visibility
> of data to the smallest scope possible.   Where one can
> separate the concerns of scope from visibility, as we can
> in some languages, it also limits visibility to the smallest
> set of relevant operations possible.
>
> Global data can be considered in terms of the COBOL Data
> Division, or Fortran Common, both of which are really bad
> language design constructs.   The Data Division makes a
> program harder and harder to maintain over a long period
> of time.
>
> In 1977, in a paper on the goals and principles of software
> engineering, Ross, Goodenough, and Irvine described the
> principles of software engineering.   Localization was one
> of those principles.  Others included abstraction, information
> hiding, confirmability, etc.

I met those guys shortly thereafter; it sure didn't help them write their
first Ada compiler.  The name of their company escapes me at the moment;
it was in Waltham, MA.


>
> Richard Riehle
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom284 (1839)
10/13/2006 10:07:23 PM
On Thu, 12 Oct 2006 23:06:59 -0700, <adaworks@sbcglobal.net> wrote:

>
> "Tom Linden" <tom@kednos-remove.com> wrote in message
> news:op.tg9jzwxatte90l@hyrrokkin...
>
> TL>It is neutral in this regard.  You may use global data or not.
> TL>I use it.  This is also more an issue of the OS.  For example, under
> TL>OpenVMS declarations can have the added attribute GLOBALDEF or  
> GLOBALREF
> TL>and you can specify the psect in which the GLOBALDEF resides.
> TL>
> TL>Data localization, if I am following you,  can be in some psect as  
> above
> TL>or in an AREA.
> TL>
> Localization versus globalization is not an operating systems issue.
> It is more of a programming design issue.   We have learned over
> the past forty years that global data is usually a really bad idea.
> While it makes a program easy to write, it makes it hard for
> continued modification.

Richard, I have to confess, I really don't know what you are talking
about.  "We" haven't learned that, but then you are using a rather
different nomenclature.
>
> In the object model, we go a bit beyond simple localization.  This
> is called encapsulation.   Encapsulation is based on the notion of
> a tight binding of the public methods to hidden implementation and
> hidden data.   That is the data is only accessible by a client of
> that data through the public methods.

Well, that requires the complicity of the OS, as in the L4 microkernel or
Gnosis.
>
> There are many levels of localization.  In general, languages that are
> strictly procedural (i.e., not OOP) have fewer levels of localization
> available.
>
> This concept is closely related to the notions of cohesion and
> coupling.   The more global the data, the tighter the coupling,
> in most cases.  The more localized the data, the safer it is
> from unruly behavior within the rest of the program.

I think we understood this pretty well 40 years ago and did indeed
practice it.  There is nothing wrong with global data, depending on
how you are using it.


>
> Richard
>
>



-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom284 (1839)
10/13/2006 10:20:15 PM
Shmuel (Seymour J.) Metz wrote:
> In <LiGXg.61636$E67.11841@clgrps13>, on 10/13/2006
>    at 06:24 AM, "James J. Weinkam" <jjw@cs.sfu.ca> said:
> 
> 
>>What do you mean by the phrase "represent arbitrary real numbers?"
> 
> 
> I mean represent arbitrary real numbers, e.g., e, Pi.
> 
> 
>>What method are you proposing to use to represent arbitrary real
>>numbers?
> 
> 
> What method do you propose for picking up the moon? I don't tend to
> propose methods for doing things that are impossible. You're
> challenging things that I didn't write instead of addressing the ones
> that I did.
>
Well, excuse me if I misunderstood you.

Here's how I understood the thread:

Tom Linden had been arguing against using rational numbers represented as 
numerator and denominator in particular and against using rational numbers in 
general and making statements such as "Real men use real numbers" and that the 
rationals are not a dense subset of the reals.

Richard Riehle explained how using numerator and denominator to represent 
rationals eliminates truncation or rounding error in intermediate results.

You then said that you agreed that one method of representing rationals may have 
advantages over others, but that none of them could represent reals.  I took 
that to mean that you did not consider any form of rational computation adequate 
to represent reals and was curious to know how you would go about it.

> 
>>If you have in mind some specific rational number, you can represent
>>it exactly  in many ways.
> 
> 
> I wrote "arbitrary"; I didn't limit it to rational numbers.
>
You apparently did not look at the next paragraph.  I was contrasting the 
ability to represent an arbitrary rational exactly in this paragraph with the 
fact that we can only represent an arbitrary irrational real symbolically in the 
next paragraph.  I don't think we are actually in disagreement.

Perhaps you could clarify what point you were intending to make with your 
statement that rational numbers cannot represent (irrational) reals.
0
jjw (608)
10/14/2006 12:43:49 AM
"William M. Klein" <wmklein@nospam.netcom.com> wrote in message 
news:2ISXg.18281$0s1.4000@fe03.news.easynews.com...
> "David Frank" <dave_frank@hotmail.com> wrote in message 
> news:452fed01$0$17417$ec3e2dad@news.usenetmonster.com...
>>
> <snip>
>> How about showing us some Ada compilation with results...
>> Pls provide us a look at how you would write/rewrite/translate the 
>> Fortran shown at:
>>
>> http://home.earthlink.net/~dave_gemini/list1.f90
>>
>> including the same test case file generation thats built-in and the same 
>> outputs shown at
>> end of code.   Note the file is only input once.
>>
>
> David,
>   What POSSIBLE interest do you think translating Fortran source code into 
> ADA has for a PL/I newsgroup????
>
> -- 
> Bill Klein
> wmklein <at> ix.netcom.com
>
>

I would ask him to translate the PL/I version of this problem EXCEPT
there hasn't been a valid one posted with outputs proving it works.

How about a COBOL version, or have you stuck around so long that they
kicked you upstairs into management and have forgotten how to program,
like many here?


0
dave_frank (2243)
10/14/2006 8:24:47 AM
When someone posts a request for COBOL examples in the COBOL newsgroup and
  A) "shows their own work"
        and
  B) explains what they want (not how to translate something from another 
language)

Then I often provide sample code snippets

-- 
Bill Klein
 wmklein <at> ix.netcom.com
"David Frank" <dave_frank@hotmail.com> wrote in message 
news:4530a168$0$17419$ec3e2dad@news.usenetmonster.com...
>
> "William M. Klein" <wmklein@nospam.netcom.com> wrote in message 
> news:2ISXg.18281$0s1.4000@fe03.news.easynews.com...
>> "David Frank" <dave_frank@hotmail.com> wrote in message 
>> news:452fed01$0$17417$ec3e2dad@news.usenetmonster.com...
>>>
>> <snip>
>>> How about showing us some Ada compilation with results...
>>> Pls provide us a look at how you would write/rewrite/translate the Fortran 
>>> shown at:
>>>
>>> http://home.earthlink.net/~dave_gemini/list1.f90
>>>
>>> including the same test case file generation thats built-in and the same 
>>> outputs shown at
>>> end of code.   Note the file is only input once.
>>>
>>
>> David,
>>   What POSSIBLE interest do you think translating Fortran source code into 
>> ADA has for a PL/I newsgroup????
>>
>> -- 
>> Bill Klein
>> wmklein <at> ix.netcom.com
>>
>>
>
> I would ask him to translate the PL/I version of this problem EXCEPT there
> hasn't been a valid one posted with outputs proving it works.
>
> How about a COBOL version, or have you stuck around so long that they
> kicked you upstairs into management and have forgotten how to program,
> like many here?
>
> 


0
wmklein (2605)
10/14/2006 1:19:43 PM
Tom,

Global data, when used in small programs, tends to be
relatively benign.   As programs and software systems
get larger, global data becomes a serious issue.

I have programmed in a lot of languages over forty-plus
years and used global data liberally, as did all my
colleagues.  I look back on that experience and compare
it to the kind of programming I am able to do now, with the
realization that, e.g., Fortran COMMON and the COBOL
DATA DIVISION were part of the problem, not
part of the solution.

Localization prevents the programmer from having to
work from long cross-reference listings since every
data item is either local to a module or a parameter
to a module.  This requires more careful design, and
it also requires the software architect to reason more
carefully about the system being designed.

The reality is that modern programming style, including
the supporting languages, is leaning more and more
toward localization and further and further from global
data.

It is certainly easier to write a program using global data.
The programmer need not think too much about the
conflicts that can occur within calling subroutines.  For
Q&D programs, global data works out just fine.  However,
maintenance of a program designed around global data
becomes harder and harder (and more error-prone) the
longer that program lives, and the more different people
make changes to it.

Most software researchers and language designers have come
to a common understanding on this issue.  The best of modern
language design will usually include some approach to limiting
visibility of data and strong support for localization.

Limiting of visibility is often effected through some kind of
mechanism such as public, private, and protected categories
of data and methods (subprograms).
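
A small Ada sketch of that style (the package and names are invented for
illustration): the state lives in the package body rather than in any
COMMON-style global area, so clients can reach it only through the two
visible operations.

   package Counter is
      procedure Increment;
      function  Value return Natural;
   end Counter;

   package body Counter is

      Count : Natural := 0;   --  hidden; no client can name this directly

      procedure Increment is
      begin
         Count := Count + 1;
      end Increment;

      function Value return Natural is
      begin
         return Count;
      end Value;

   end Counter;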

Richard Riehle

================================================

"Tom Linden" <tom@kednos.com> wrote in message news:op.thdsb1jczgicya@murphus...
> On Thu, 12 Oct 2006 23:06:59 -0700, <adaworks@sbcglobal.net> wrote:
>
>>
>> "Tom Linden" <tom@kednos-remove.com> wrote in message
>> news:op.tg9jzwxatte90l@hyrrokkin...
>>
>> TL>It is neutral in this regard.  You may use global data or not.
>> TL>I use it.  This is also more an issue of the OS.  For example, under
>> TL>OpenVMS declarations can have the added attribute GLOBALDEF or  GLOBALREF
>> TL>and you can specify the psect in which the GLOBALDEF resides.
>> TL>
>> TL>Data localization, if I am following you,  can be in some psect as  above
>> TL>or in an AREA.
>> TL>
>> Localization versus globalization is not an operating systems issue.
>> It is more of a programming design issue.   We have learned over
>> the past forty years that global data is usually a really bad idea.
>> While it makes a program easy to write, it makes it hard for
>> continued modification.
>
> Richard, I have to confess, I really don't know what you are talking
> about.  "We" haven't learned that, but then you are using a rather
> different nomenclature
>>
>> In the object model, we go a bit beyond simple localization.  This
>> is called encapsulation.   Encapsulation is based on the notion of
>> a tight binding of the public methods to hidden implementation and
>> hidden data.   That is the data is only accessible by a client of
>> that data through the public methods.
>
> Well, that requires the complicity of the OS, as in the L4 microkernel or
> Gnosis.
>>
>> There are many levels of localization.  In general, languages that are
>> strictly procedural (i.e., not OOP) have fewer levels of localization
>> available.
>>
>> This concept is closely related to the notions of cohesion and
>> coupling.   The more global the data, the tighter the coupling,
>> in most cases.  The more localized the data, the safer it is
>> from unruly behavior within the rest of the program.
>
> I think we understood this pretty well 40 years ago and did indeed
> practice it.  There is nothing wrong with global data, depending on
> how you are using it.
>
>
>>
>> Richard
>>
>>
>
>
>
> -- 
> Using Opera's revolutionary e-mail client: http://www.opera.com/mail/ 


0
adaworks2 (748)
10/14/2006 5:51:45 PM
"LR" <lruss@superlink.net> wrote in message 
news:452fa72e$0$2552$cc2e38e6@news.uslec.net...
> adaworks@sbcglobal.net wrote:
>>>>>>
>>>>>>RR> SPARK ensures that, at run-time, every scalar will have a value
>>>>>>RR> that conforms to the invariant given for that value.
>>>>>>
>>>>>>
>>>>>>>That doesn't answer the questions.
>>>>>>>He asked 1. whether or not an unitialized variable gets an invalid value;
>>>>>>>2. does spark allow variables to have any value?
>>>>>>>
>>>>
>>>>      1)   SPARK will not allow a variable to have an invalid value
>>>>
>>>>       2)  SPARK will allow a variable to have a value that conforms
>>>>            to the range constraint for the type of that variable.
>>>>
>>>>                   type Number is range 50..70;
>>>>                   x : Number := 15;    -- illegal;
>>>>                 ========================
>>>>                   x := any value not in the range of 50 through 70,
>>>>                          is still illegal.
>>>
>>>
LR>>>Suppose using SPARK (and bear with me please, because I'm not a SPARK or 
Ada
LR>>>programmer) someone writes:
LR>>>      procedure ABC() is z : integer;               --1
LR>>>      begin                                                     -- 2
LR>>>            type Number is range 50..70;          -- 3
LR>>>            x : Number;                                    -- 4
LR>>>            y : Number : = 51;                          -- 5
LR>>>            x := y;                                            -- 6
LR>>>            z := 0;                                            -- 7
LR>>>       end ABC;                                           -- 8
LR>>>
LR>>>Ok, then assuming that I haven't completely botched the syntax and 
semantics,
LR>>>will the line that says "x : Number;"  yield some sort of warning or error?
LR>>>If not, then what value will x have after variable y is instantiated and 
given
LR>>>it's value, but before the line "x := y;" is executed?
LR>>>
LR>>>
>
> LR>> If it's possible to write the above code,
> LR>> will x's value, be in the
> LR>> range that is specified for the type (
> LR>> if that's what it's called) Number
> LR>> at that point?
>
Let me comment on the code you posted line by line.  Syntactically,
it is not correct Ada, but that is not important in this discussion.  First,
the type definitions are likely to be at a different level of the design. It
might be that a package somewhere will look like this (abbreviated)

           package Number_Types is
                type Number is range 50..70;
                -- more type definitions follow
                -- some procedures and functions here
          end Number_Types;

An instance/value of type Number, anywhere in the program,
must conform to the range constraint given in the type definition.
In a more sophisticated design, we could limit the set of
operations, or introduce new operations, or new versions
of existing operations.  This example does not show that,
but it is a powerful way to limit the potential for errors that
can so easily occur with pre-defined types.

The procedure you wrote, that declares values of
type Number will have to make Number visible
before it can be used.   This might look like this

    with Number_Types;                            -- 1 put the package in scope
    procedure ABC(q, r : Number_Types.Number) is  -- 2 parameters for x and y
           z : Number_Types.Number := 0;          -- 3
           x, y : Number_Types.Number := 55;      -- 4
    begin                                         -- 5
          x := q;                                 -- 6
          y := 13;                                -- 7 will fail during compile
          z := r * 10;                            -- 8
    end ABC;                                      -- 9

Line 1 brings the package Number_Types into scope.   Since
we want to avoid global data, none of the elements of that
package are directly visible.  We make them directly visible
through dot notation  (z : Number_Types.Number) by
giving the package name and the type within the package
we want to use.   When there are more types, each will
require the same syntax.  This increases the type safety
of the overall design.

The parameters, q and r, are local to the procedure and
any call to this procedure will require that q and r have
values in the valid range for Number.   Line 3 is wrong since 0 is
not in the range.   The compiler will note this error and
the program will be deemed wrong.

Line 4 is OK since 55 is in range.   Line 6 is OK since
the procedure would never have gotten this far if the
parameters were out of range.   Line 7 is wrong.  It is
not in the correct range.  Line 8 is problematic.   Any
value of type Number multiplied by 10 is likely to be
an error.  An Ada compiler may issue a warning;  I
think SPARK will reject this, but I have not tested it
to be certain.

If Line 8 is allowed to pass, it will probably generate a
run-time exception.   Ada has a good model for exception
handling, and a programmer can include a handler for this.
SPARK will not like it and will not want to rely on an
exception handler since it is so easy to evaluate in the
SPARK Examiner.
>
LC> I think it would be more useful to just issue a message saying that x
LC> isn't initialized, but then I think that would conflict with your view
LC> that x can and should sometimes have an invalid value.  Although, now
LC> you're asserting that will result in a runtime error.
LC>
SPARK does not like to pass code that has the potential for a run-time
error.  It does far more rigorous checking than an Ada compiler -- or
any other kind of compiler, for that matter.
>
LC> And if I understood any of this correctly, I don't know, but does
LC> Ada/SPARK support a boolean type?
LC>
        the Ada package named Standard has a Boolean type.
>
LC> If so, what invalid value would it be initialized to, if the user
LC> doesn't give it a true/false value?
LC>
In any function with a Boolean return type, the programmer
must supply a return value of Boolean.  Failure to do so will cause
a compile-time error.  No program will get deployed.  Boolean
is one of the easiest things to control in compile-time checking.

Richard Riehle


0
adaworks2 (748)
10/14/2006 6:27:08 PM
"LR" <lruss@superlink.net> wrote in message 
news:452fa962$0$2557$cc2e38e6@news.uslec.net...
> Bob Lidral wrote:
>
>>  > [...]
>>
>> That's all well and good for straight line code segments.  But the problem is 
>> a lot more difficult than that.  What about loops and conditional statements?
>
LR> If this stuff truly works then I don't think that this would be a
LR> problem.  For one thing, if it makes it easier to do, all of these can
LR> be converted to while loops, but failing that, for loops have limits
LR> that can be checked. Conditional statements can be checked as well.
LR> Perhaps I'm missing something obvious, can you give an example of
LR> something that you think can't be checked in principle?
>
Very good observations, but then this ...

LR> (Although, of course, as I've pointed out elsewhere, since we're relying
LR> on software to check the proof, we're likely not to be able to do this
LR> in fact.)
>
Well, in fact, we do it just fine. This is not a theoretical exercise.  A lot
of software has been built using this set of tools.

There is only so much we can cover in a forum of this kind.  I am going
to suggest that, for those with a serious interest, you consult the book
by John Barnes on SPARK.
>
Richard Riehle 


0
adaworks2 (748)
10/14/2006 6:33:33 PM
"William M. Klein" <wmklein@nospam.netcom.com> wrote in message 
news:2ISXg.18281$0s1.4000@fe03.news.easynews.com...
> "David Frank" <dave_frank@hotmail.com> wrote in message 
> news:452fed01$0$17417$ec3e2dad@news.usenetmonster.com...
>>
> <snip>
>> How about showing us some Ada compilation with results...
>> Pls provide us a look at how you would write/rewrite/translate the Fortran 
>> shown at:
>>
>> http://home.earthlink.net/~dave_gemini/list1.f90
>>
>> including the same test case file generation thats built-in and the same 
>> outputs shown at
>> end of code.   Note the file is only input once.
>>
>
> David,
>   What POSSIBLE interest do you think translating Fortran source code into ADA 
> has for a PL/I newsgroup????
>
Thank you Bill.   I had actually considered accepting his
challenge since it is fairly trivial to do.  However, I am
not sure this would be the best use of my time, nor would
it contribute value to most of those involved in this
discussion.

I might go ahead and do it anyway and send him the
code.  Let's see.  It is Saturday.  I can write an Ada
program or hang-out with my grandchildren.   What
to do?  What to do?

Richard 


0
adaworks2 (748)
10/14/2006 6:56:10 PM
"James J. Weinkam" <jjw@cs.sfu.ca> wrote in message 
news:9pWXg.13119$P7.7638@edtnps90...
>
> Here's how I understood the thread:
>
> Tom Linden had been arguing against using rational numbers represented as 
> numerator and denominator in particular and against using rational numbers in 
> general and making statements such as "Real men use real numbers" and that the 
> rationals are not a dense subset of the reals.
>
> Richard Riehle explained how using numerator and denominator to represent 
> rationals eliminates truncation or rounding error in intermediate results.
>
Let me be clear that Tom is not wrong in his assertion that, for most kinds
of problems, the use of non-decimal fractions might be superfluous.   In
particular, for accounting problems, decimal fractions with a fixed number
of decimal places work out just fine.   PL/I, COBOL, Ada, and a few
other languages have direct support for accounting data types.

The important point is that it is useful to have more than one way to
represent numbers for the wide range of computing problems we
are asked to solve.   In some, not all, cases, the use of non-decimal
fractions has value.   I don't think Tom would object to that view. He
is simply indicating that, to use this as the only model for computation
would be a bit silly.

Richard Riehle 


0
adaworks2 (748)
10/14/2006 7:04:58 PM
adaworks@sbcglobal.net wrote:
> "LR" <lruss@superlink.net> wrote in message 

> LR> (Although, of course, as I've pointed out elsewhere, since we're relying
> LR> on software to check the proof, we're likely not to be able to do this
> LR> in fact.)
> 
> Well, in fact, we do it just fine. This is not a theoretical exercise.  

Of course not.  You're actually depending on a software product to check 
to see if both it and the software you're developing have bugs lurking 
within.

 > A lot
> of software has been built using this set of tools.

Well then, perhaps I shouldn't have used the word proof.

How about 'likelihood' instead?  Or perhaps you could suggest a word 
that is descriptive of how far short of 'proof' this concept is?

LR
0
lruss (582)
10/14/2006 8:20:30 PM
adaworks@sbcglobal.net wrote:

> "LR" <lruss@superlink.net> wrote in message 
> news:452fa72e$0$2552$cc2e38e6@news.uslec.net...
> 
>>adaworks@sbcglobal.net wrote:
>>
>>>>>>>RR> SPARK ensures that, at run-time, every scalar will have a value
>>>>>>>RR> that conforms to the invariant given for that value.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>That doesn't answer the questions.
>>>>>>>>He asked 1. whether or not an unitialized variable gets an invalid value;
>>>>>>>>2. does spark allow variables to have any value?
>>>>>>>>
>>>>>
>>>>>     1)   SPARK will not allow a variable to have an invalid value
>>>>>
>>>>>      2)  SPARK will allow a variable to have a value that conforms
>>>>>           to the range constraint for the type of that variable.
>>>>>
>>>>>                  type Number is range 50..70;
>>>>>                  x : Number := 15;    -- illegal;
>>>>>                ========================
>>>>>                  x := any value not in the range of 50 through 70,
>>>>>                         is still illegal.
>>>>
>>>>
> LR>>>Suppose using SPARK (and bear with me please, because I'm not a SPARK or 
> Ada
> LR>>>programmer) someone writes:
> LR>>>      procedure ABC() is z : integer;               --1
> LR>>>      begin                                                     -- 2
> LR>>>            type Number is range 50..70;          -- 3
> LR>>>            x : Number;                                    -- 4
> LR>>>            y : Number : = 51;                          -- 5
> LR>>>            x := y;                                            -- 6
> LR>>>            z := 0;                                            -- 7
> LR>>>       end ABC;                                           -- 8
> LR>>>
> LR>>>Ok, then assuming that I haven't completely botched the syntax and 
> semantics,
> LR>>>will the line that says "x : Number;"  yield some sort of warning or error?
> LR>>>If not, then what value will x have after variable y is instantiated and 
> given
> LR>>>it's value, but before the line "x := y;" is executed?
> LR>>>
> LR>>>
> 
>>LR>> If it's possible to write the above code,
>>LR>> will x's value, be in the
>>LR>> range that is specified for the type (
>>LR>> if that's what it's called) Number
>>LR>> at that point?
>>
> 
> Let me comment on the code you posted line by line.  Syntactically,
> it is not correct Ada, but that is not important in this discussion.  First,
> the type definitions are likely to be at a different level of the design. It
> might be that a package somewhere will look like this (abbreviated)
> 
>            package Number_Types is
>                 type Number is range 50..70;
>                 -- more type definitions follow
>                 -- some procedures and functions here
>           end Number_Types;
> 
> An instance/value of type Number, anywhere in the program,
> must conform to the range constraint given in the type definition.
> In a more sophisticated design, we could limit the set of
> operations, or introduce new operations, or new versions
> of existing operations.  This example does not show that,
> but it is a powerful way to limit the potential for errors that
> can so easily occur with pre-defined types.
> 
> The procedure you wrote, that declares values of
> type Number will have to make Number visible
> before it can be used.   This might look like this
> 
>     with Number_Types;                            -- 1 put the package in scope
>     procedure ABC(q, r : Number_Types.Number) is  -- 2 parameters for x and y
>            z : Number_Types.Number := 0;          -- 3
>            x, y : Number_Types.Number := 55;      -- 4
>     begin                                         -- 5
>           x := q;                                 -- 6
>           y := 13;                                -- 7 will fail during compile
>           z := r * 10;                            -- 8
>     end ABC;                                      -- 9
> 
> Line 1 brings the package Number_Types into scope.   Since
> we want to avoid global data, none of the elements of that
> package are directly visible.  We make them directly visible
> through dot notation  (z : Number_Types.Number) by
> giving the package name and the type within the package
> we want to use.   When there are more types, each will
> require the same syntax.  This increases the type safety
> of the overall design.
> 
> The parameters, q and r, are local to the procedure and
> any call to this procedure will require that r and q have
> valid ranges for Number.   Line 3 is wrong since 0 is
> not in the range.   The compiler will note this error and
> the program will be deemed wrong.
> 
> Line 4 is OK since 55 is in range.   Line 6 is OK since
> the procedure would never have gotten this far if the
> parameters were out of range.   Line 7 is wrong.  It is
> not in the correct range.  Line 8 is problematic.   Any
> value of type Number multiplied by 10 is likely to be
> an error.  An Ada compiler may issue a warning;  I
> think SPARK will reject this, but I have not tested it
> to be certain.

Suppose it was part of an if statement.  Again, I'm probably botching the 
syntax, but anyway,

if (r >= 5 && r <= 7) then
	z := r * 10;
endif

What would happen?  Would SPARK pass this code as ok?



> If Line 8 is allowed to pass, it will probably generate a
> run-time exception.   


I think I'd like to have better knowledge of tools I might use,
particularly if I were writing safety-critical kinds of software.
Personally, I think I might be better off using C++ than relying on an
error that might be issued at compile time, or at run time.






> Ada has a good model for exception
> handling, and a programmer can include a handler for this.
> SPARK will not like it and will not want to rely on an
> exception handler since it is so easy to evaluate in the
> SPARK Examiner.
> 
> LC> I think it would be more useful to just issue a message saying that x
> LC> isn't initialized, but then I think that would conflict with your view
> LC> that x can and should sometimes have an invalid value.  Although, now
> LC> you're asserting that will result in a runtime error.
> LC>
> SPARK does not like to pass code that has the potential for a run-time
> error.  It does far more rigorous checking than a an Ada compiler -- or
> any other kind of compiler, for that matter.
> 
> LC> And if I understood any of this correctly, I don't know, but does
> LC> Ada/SPARK support a boolean type?
> LC>
>         the Ada package named Standard has a Boolean type.
> 
> LC> If so, what invalid value would it be initialized to, if the user
> LC> doesn't give it a true/false value?
> LC>
> In any function that with a Boolean return type, the programmer
> must supply a return value of Boolean.  Failure to do so will cause
> a compile-time error.  No program will get deployed.  Boolean
> is one of the easiest things to control in compile-time checking.


I wasn't speaking of functions that return a boolean value, but of 
boolean variables.  I think this question has been asked several times 
and an answer has not yet been forthcoming.

LR
0
lruss (582)
10/14/2006 8:34:45 PM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:4530a168$0$17419$ec3e2dad@news.usenetmonster.com...
>
> "William M. Klein" <wmklein@nospam.netcom.com> wrote in message
> news:2ISXg.18281$0s1.4000@fe03.news.easynews.com...
> > "David Frank" <dave_frank@hotmail.com> wrote in message
> > news:452fed01$0$17417$ec3e2dad@news.usenetmonster.com...
> >>
> >> How about showing us some Ada compilation with results...
> >> Pls provide us a look at how you would write/rewrite/translate the
> >> Fortran shown at:
> >>
> >> http://home.earthlink.net/~dave_gemini/garbage.f90
> >>
> >> including the same test case file generation thats built-in and the same
> >> outputs shown at
> >> end of code.   Note the file is only input once.
>
> > David,
> >   What POSSIBLE interest do you think translating Fortran source code into
> > ADA has for a PL/I newsgroup????
> >
> I would ask for him to translate the PL/I version of this problem EXCEPT
> there hasnt been a valid one posted

You're no judge of that, since you know nothing
about PL/I.

And there has been at least one posted in PL/I.



0
robin_v (2737)
10/14/2006 11:56:29 PM
"robin" <robin_v@bigpond.com> wrote in message 
news:w_5Xg.45528$rP1.2900@news-server.bigpond.net.au...
> <adaworks@sbcglobal.net> wrote in message
> news:qKBWg.8973$TV3.4811@newssvr21.news.prodigy.com...
>>
>> As to my concept of a software circuit-breaker, any design in the
>> physical world that involves electrical current usually includes some
>> kind of fail-safe device such as a circuit-breaker.
>
> That is probably not a good analogy.  A circuit-breaker
> is not what I would call a "fail-safe" device.
> It merely breaks the circuit.  Permanently.
> There's no guarantee that damage has not already been done.
>
>>   When a modern
>> program fails in PL/I, Ada, Java, C++, Eiffel, or most other languages,
>> it is common to include some kind of fail-safe code.  This code acts,
>> in a program, much the same way a circuit-breaker does in an
>> electrical system.
>
> Not really.  A circuit breaker switches off the circuit.
> There is no opportunity afterwards to do anything.
>
>>   In the case of the software, it is often self-resetting.
>
> The equivalent in the software world is to abort.
> A circuit-breaker is not "self-resetting".
> It would/could be dangerous to do so.
>
>> In the physical world, we often require manual intervention.
>
I see your objection as something of a quibble.   The point of the
analogy is that, with software, we can insert the equivalent of a
circuit-breaker into our designs.   However, we can also go beyond
the limitations of the world of physical engineering, and also build
recovery routines.

The circuit-breaker analogy works just fine.  The addition of
the exception handler is usually unique to software. However,
there are, in the physical world, some circuit-breaker designs
that "trip" and reset before failing entirely.

Richard 


0
adaworks2 (748)
10/15/2006 12:23:24 AM
adaworks@sbcglobal.net wrote:
> "James J. Weinkam" <jjw@cs.sfu.ca> wrote in message 
> news:9pWXg.13119$P7.7638@edtnps90...
> 
>>Here's how I understood the thread:
>>
>>Tom Linden had been arguing against using rational numbers represented as 
>>numerator and denominator in particular and against using rational numbers in 
>>general and making statements such as "Real men use real numbers" and that the 
>>rationals are not a dense subset of the reals.
>>
>>Richard Riehle explained how using numerator and denominator to represent 
>>rationals eliminates truncation or rounding error in intermediate results.
>>
> 
> Let me be clear that Tom is not wrong in his assertion that, for most kinds
> of problems, the use of non-decimal fractions might be superfluous.   In
> particular, for accounting problems, decimal fractions with a fixed number
> of decimal places work out just fine.   PL/I, COBOL, Ada, and a few
> other languages have direct support for accounting data types.
> 
> The important point is that it is useful to have more than one way to
> represent numbers for the wide range of computing problems we
> are asked to solve.   In some, not all, cases, the use of non-decimal
> fractions has value.   I don't think Tom would object to that view. He
> is simply indicating that, to use this as the only model for computation
> would be a bit silly.
> 
I don't disagree with anything you have said.  However, no one had proposed 
using rational fractions as the only model of computation and Tom certainly came 
pretty close to saying that he saw no value in them at all.  Also his several 
times repeated statement that the rationals are not a dense subset of the reals 
is just plain wrong.
0
jjw (608)
10/15/2006 5:42:42 AM
LR wrote:

> Bob Lidral wrote:
> 
>> adaworks@sbcglobal.net wrote:
>>
>>> "robin" <robin_v@bigpond.com> wrote in message 
>>> news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
>>>
>>> [...]
>>>
>>>> That's still inadequate.  A variable might well be assigned a value,
>>>> but before it is assigned a value, an attempt is made to use it.
>>>>
>>>
>>> That can never happen.   The progam is checked at compile-time,
>>> and any such attempt will cause a compile-time error.  This is
>>> easy to detect.  Note that even within a procedure, operations
>>> on values are checked by the compiler.  Consider,
>>>
>>>             procedure ABC (x : in integer;  y : out integer) is
>>>                  z : integer;
>>>             begin
>>>                  x := y;       -- compile error; x is an in parameter 
>>> and
>>>                                  -- cannot be assigned a new value
>>>                  y := z;       -- compile-time error; z has never been
>>>                                  -- assigned a value;
>>>                  z := y        -- compile-time error; y is an out 
>>> parameter,
>>>                                  -- and it cannot be assigned to 
>>> another variable
>>>                                  -- unless it has been assigned a 
>>> value first
>>>            end ABC;
>>>
>>> In the above example, the compiler does a lot of checking to ensure
>>> the absence of stupid mistakes.  A parameter designated as "in" is
>>> effectively a constant within the procedure's algorithm.  One that is
>>> designated as "out" cannot be assigned to another value until it has
>>> a value of its own.   A local variable that has never been assigned
>>> a value cannot be used on the right side of assignment unless it has
>>> already been assigned a value.
>>
>>
>>  > [...]
>>
>> That's all well and good for straight line code segments.  But the 
>> problem is a lot more difficult than that.  What about loops and 
>> conditional statements?  
> 
> 
> If this stuff truly works then I don't think that this would be a 
> problem.  For one thing, if it makes it easier to do, all of these can 
> be converted to while loops, but failing that, for loops have limits 
> that can be checked. Conditional statements can be checked as well. 
> Perhaps I'm missing something obvious, can you give an example of 
> something that you think can't be checked in principle?
> 
> (Although, of course, as I've pointed out elsewhere, since we're relying 
> on software to check the proof, we're likely not to be able to do this 
> in fact.)
> 
> 
>  > What if initial assignment to a variable
>  > depends on a value read from input?
> 
> I could see how reading a value weakens the proof, but there's no reason 
> why the SPARK can't require the user to assert that the value is valid 
> after a read.  Or generate code that does so. Or perhaps something else? 
> Or do you think this conflicts with the idea of exceptions not being a 
> good thing?
> 
That was partly my question.  I'm getting lost in the deeply-nested 
postings and replies so I don't always remember who posted what, but I 
thought adaworks originally claimed that uninitialized variables had 
invalid values prior to being assigned an initial value.  Later, I 
thought I saw a claim that variables could never contain invalid values. 
  It's difficult to keep up with who wrote what but there seem to be 
conflicting claims about Ada -- or perhaps I'm confusing the claims 
about Ada with those about SPARK.

Anyhow, in practice programs are much more complicated than 
three-statement inline code segments.  Of course, eliminating all but 
one of two control structures in the style of Pascal may make the 
disciples of St. Nik happy, but such an approach makes a language much 
more difficult to use -- besides, I don't see how an if statement or a 
switch statement could be easily converted to a while or for statement.

I had thought someone claimed that SPARK/Ada could determine at any 
point in a program at compilation time whether any particular variable 
held a valid value or not.  (Of course, if variables can never hold 
invalid values, then the answer becomes more complicated.)  As I wrote, 
in practice real programs are more complicated than short code segments 
(especially straight-line segments) can demonstrate.  No really huge 
examples come to mind, but a couple of short ones do:

Example 1: Input values.

     read x   /* in whatever programming language syntax you prefer */

     After this statement is executed, does x hold a valid value for its
     type?  As with Schroedinger's cat, at compilation time the answer is
     "yes or no", but not "yes", "no", or "maybe".  It's not possible to
     narrow down the answer to "yes" or "no" until execution time (well,
     the answer is probably more like "yes" or "an exception will be
     raised at execution time").  The only way to guarantee (or test)
     program correctness at this point is for the compiler or programmer
     to insert some sort of runtime check, possibly with some sort of
     recovery mechanism -- but the answer cannot be determined at
     compilation time.


Example 2:  Conditional code.

     Again, feel free to convert to the syntax of your favorite
     programming language:

     int x, y, z;

     read x;
     if (x < 12) y = 4;
     /* At this point, does y hold a valid value? */
     z = y;

     Again, the answer at compilation time is "yes or no".  If the value
     read for x was valid and less than 12, then y will have the value 4,
     otherwise the value of y will be undefined and the assignment to z
     will be undefined or illegal.  How would SPARK/Ada handle such a
     situation?  Suppose the code path is more complicated and involves
     several reads, computations, tests, conditional branches, select
     statements, loops, etc. prior to the read statement.  Then the
     answer becomes "'yes, if y held a valid value before the if
     statement', otherwise 'yes or no'".  Can SPARK/Ada always determine
     whether y will have a value or might be undefined under such
     circumstances?  Of course, there's always the brute force method of
     deciding that y never has a valid value after the if statement, but
     that seems heavy-handed in the more complicated case for which it
     might be safe.


> OTOH, I wonder what happens if you have to do something like validate 
> the zip code of an address, which would probably come from a database of 
> some sort.  Then our _proof_ would be trouble.  Perhaps this severely 
> limits the range of problem domains for SPARK?  Or perhaps in practice 
> it's ignored?
> 
> LR

And that is related to a question I asked in an earlier post that has 
not previously been addressed.  What happens if the range of valid 
values for one variable is dependent on the specific value(s) of one or 
more other variables depending on data read at run time?  I claim the 
validity of values for such a variable cannot be determined at 
compilation time and requires some sort of runtime consistency check, 
probably generated by the programmer rather than automatically by the 
compiler.  Yet, unless I misunderstood, it seemed to me there were some 
who claimed SPARK/Ada could determine the validity of values for such a 
variable by static compilation-time analysis without the aid of runtime 
checks or programmer supplied assertions.


Bob Lidral
lidral  at  alum  dot  mit  edu
0
10/15/2006 8:20:49 AM
"robin" <robin_v@bigpond.com> wrote in message 
news:NOeYg.47262$rP1.36516@news-server.bigpond.net.au...
> "David Frank" <dave_frank@hotmail.com> wrote in message
> news:4530a168$0$17419$ec3e2dad@news.usenetmonster.com...
>>
> >
>> I would ask for him to translate the PL/I version of this problem EXCEPT
>> there hasnt been a valid one posted
>
> You're no judge of that, since you know nothing
> about PL/I.
>
> And there has been at least one posted in PL/I.
>

I invited ANY of your peers to vote AYE if they agreed you had posted a
valid solution; NO-ONE has.

Neither have you responded to my inquiry whether PL/I has derived-type
syntax.  Why is that?  How tough is it to post a YES/NO?

Or, I get it: Robin Vowels will never admit PL/I has less capability than
Fortran.

However, none of your peers will admit it either; what a bunch of losers!




0
dave_frank (2243)
10/15/2006 8:57:30 AM
<adaworks@sbcglobal.net> wrote in message 
news:epaYg.12770$GR.6318@newssvr29.news.prodigy.net...
>
> "William M. Klein" <wmklein@nospam.netcom.com> wrote in message 
> news:2ISXg.18281$0s1.4000@fe03.news.easynews.com...
>> "David Frank" <dave_frank@hotmail.com> wrote in message 
>> news:452fed01$0$17417$ec3e2dad@news.usenetmonster.com...
>>>
>> <snip>
>>> How about showing us some Ada compilation with results...
>>> Pls provide us a look at how you would write/rewrite/translate the 
>>> Fortran shown at:
>>>
>>> http://home.earthlink.net/~dave_gemini/list1.f90
>>>
>>> including the same test case file generation thats built-in and the same 
>>> outputs shown at
>>> end of code.   Note the file is only input once.
>>>
>>
>> David,
>>   What POSSIBLE interest do you think translating Fortran source code 
>> into ADA has for a PL/I newsgroup????
>>
> Thank you Bill.   I had actually considered accepting his
> challenge since it is fairly trivial to do.  However, I am
> not sure this would be the best use of my time, nor would
> it contribute value to most of those involved in this
> discussion.
>
> I might go ahead and do it anyway and send him the
> code.  Let's see.  It is Saturday.  I can write an Ada
> program or hang-out with my grandchildren.   What
> to do?  What to do?
>
> Richard
>

If you send me your solution I will post it on my site, similar to what
I did when LR sent me his C++ solution.

BTW, he thinks his C++ is comparable in readability to my Fortran
solution(s).

     http://home.earthlink.net/~dave_gemini/list.cpp

Yet another example of C++ programmers generating WRITE-ONLY source

Note my list1 solution has no explicit "do loops" processing each list.

    http://home.earthlink.net/~dave_gemini/list1.f90

Looking forward to seeing your list1.ada translation;
it's for sure there won't be a PL/I one.



 


0
dave_frank (2243)
10/15/2006 9:18:14 AM
On Sat, 14 Oct 2006 22:42:42 -0700, James J. Weinkam <jjw@cs.sfu.ca> wrote:

> adaworks@sbcglobal.net wrote:
>> "James J. Weinkam" <jjw@cs.sfu.ca> wrote in message  
>> news:9pWXg.13119$P7.7638@edtnps90...
>>
>>> Here's how I understood the thread:
>>>
>>> Tom Linden had been arguing against using rational numbers represented  
>>> as numerator and denominator in particular and against using rational  
>>> numbers in general and making statements such as "Real men use real  
>>> numbers" and that the rationals are not a dense subset of the reals.
>>>
>>> Richard Riehle explained how using numerator and denominator to  
>>> represent rationals eliminates truncation or rounding error in  
>>> intermediate results.
>>>
>>  Let me be clear that Tom is not wrong in his assertion that, for most  
>> kinds
>> of problems, the use of non-decimal fractions might be superfluous.   In
>> particular, for accounting problems, decimal fractions with a fixed  
>> number
>> of decimal places work out just fine.   PL/I, COBOL, Ada, and a few
>> other languages have direct support for accounting data types.
>>  The important point is that it is useful to have more than one way to
>> represent numbers for the wide range of computing problems we
>> are asked to solve.   In some, not all, cases, the use of non-decimal
>> fractions has value.   I don't think Tom would object to that view. He
>> is simply indicating that, to use this as the only model for computation
>> would be a bit silly.
>>
> I don't disagree with anything you have said.  However, no one had  
> proposed using rational fractions as the only model of computation and  
> Tom certainly came pretty close to saying that he saw no value in them  
> at all.  Also his several times repeated statement that the rationals  
> are not a dense supset of the reals is just plain wrong.

I thought I had already replied to this, but it appears not; if so, sorry
for being redundant (new laptop, new version of everything).

It has been more than 40 years since I studied that branch of math,
including measure theory (the name Paul Halmos comes to mind), that deals
with these topics.  I believe it was Georg Cantor who demonstrated that
the rationals can be put in one-to-one correspondence with the integers.
The Lebesgue integral over the rationals on the unit interval is 0; over
the reals it is 1.  QED.

Of course, this math and what we do on computers with finite precision
are a bit different.  Because the number systems we use are not closed or
isomorphic, we necessarily must think about the required accuracy for any
given set of calculations and how to manage error propagation.




-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
0
tom284 (1839)
10/15/2006 3:46:24 PM
"Bob Lidral" <l1dralspamba1t@comcast.net> wrote in message 
news:4531EF61.7070005@comcast.net...
>>
> That was partly my question.  I'm getting lost in the deeply-nested postings 
> and replies so I don't always remember who posted what, but I thought adaworks 
> originally claimed that uninitialized variables had invalid values prior to 
> being assigned an initial value.  Later, I thought I saw a claim that 
> variables could never contain invalid values. It's difficult to keep up with 
> who wrote what but there seem to be conflicting claims about Ada -- or perhaps 
> I'm confusing the claims about Ada with those about SPARK.
>
In Ada and SPARK, when a constraint is given for a type,
no object of that type can hold a value that is not a member of
that type.   As another example, if it is an enumerated type,

    type Color is (Red, Orange, Yellow, Blue, Indigo, Violet);

a variable, named hue, could hold any of those values, but none
other.  Further,

                 hue : color := 3;

would be illegal.   If during an input operation, there was an attempt
to assign a value outside the set of defined values for Color, it would
raise an exception, in Ada.   In SPARK, this would undergo even
more strict testing to assure that such an error would have a very
low probability of occurrence.
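
A small sketch of how that plays out on input in plain Ada (illustrative
only; it assumes the Ada 2005 Get_Line function in Ada.Text_IO):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Read_Color is
      type Color is (Red, Orange, Yellow, Blue, Indigo, Violet);
      Hue : Color;
   begin
      begin
         --  Text that does not name one of the six defined values
         --  raises Constraint_Error; no out-of-range value is stored.
         Hue := Color'Value (Get_Line);
         Put_Line ("Read " & Color'Image (Hue));
      exception
         when Constraint_Error =>
            Put_Line ("Input is not a Color");
      end;
   end Read_Color;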
>
> Anyhow, in practice programs are much more complicated than three-statement 
> inline code segments.  Of course, eliminating all but one of two control 
> structures in the style of Pascal may make the disciples of St. Nik happy, but 
> such an approach makes a language much more difficult to use -- besides, I 
> don't see how an if statement or a switch statement could be easily converted 
> to a while or for statement.
>
The famous proof, written by Jacopini and Bohm, makes it clear that
any correct computer program can be written with a small set of
fundamental control structures.   Although the paper is written using
formal mathematics, I believe everyone involved in the design of
computer programs should be familiar with this seminal work in
our field.

As to switch statements, the way they are formed in the C family of
languages, with the automatic fall-through default, is a slightly
pathological construct.   It is one of those C constructs that so
easily leads to errors, and it is also difficult to debug.   Such a
construct should never be permitted in the development of
safety-critical software.   It is, I admit, convenient for the programmer,
but it is really awful from a software engineering perspective.
>
> I had thought someone claimed that SPARK/Ada could determine at any point in a 
> program at compilation time whether any particular variable held a valid value 
> or not.  (Of course, if variables can never hold invalid values, then the 
> answer becomes more complicated.)  As I wrote, in practice real programs are 
> more complicated than short code segments (especially straight-line segments) 
> can demonstrate.  No really huge examples come to mind, but a couple of short 
> ones do:
>
Let me be clear about the relationship of SPARK to Ada.   A SPARK
program will compile with an Ada compiler.   The reverse would not be
true, if there were a SPARK compiler.  While Ada remains the best general
purpose language for creating dependable software solutions, it still has
features that cannot be proven during development.  That is, any attempt to
apply formal methods to all of Ada will not work.    This is true of every other
programming language, including the dominant language of this forum.

SPARK is a kind of extension to a provable subset of Ada.   The purpose
of SPARK is to ensure the highest quality of software possible for those
kinds of systems where reliability is of the utmost importance.   At present,
it does this better than any other tool I know of.   This is not to suggest that
it is perfect.  At present, the world of software has a long way to go before
we have reached the level of engineering capability that even the most
pedestrian of mechanical engineers can apply in that field.   But SPARK is
a big step in the right direction.
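
To give a flavour of what the SPARK Examiner actually checks (my own
sketch of the classic "derives" example, so treat the details as
illustrative rather than authoritative), information-flow annotations
are written as special comments on the Ada specification:

    --  SPARK data-flow annotation: the Examiner verifies that the body
    --  derives each exported value only from the listed imports, and
    --  that nothing is read before it has been given a value.
    procedure Swap (X, Y : in out Integer);
    --# derives X from Y &
    --#         Y from X;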

The designers of SPARK did not choose Ada as the under-the-hood
language without evaluating the many alternatives.   I know Dr. Rod Chapman,
and I know that he took very great care in deciding what language model
would best support the formal methods required in SPARK.  As nearly
as I can tell, C, C++, and PL/I were not even in the running.  I think that
an early, experimental version of SPARK might have been built over a
subset of Pascal, but Dr. Chapman can best address that.

I am leaving the rest of your inquiry, as included below, to Dr. Chapman,
if he chooses to take the time to reply.

Richard

==============================================

> Example 1: Input values.
>
>     read x   /* in whatever programming language syntax you prefer */
>
>     After this statement is executed, does x hold a valid value for its
>     type?  As with Schroedinger's cat, at compilation time the answer is
>     "yes or no", but not "yes", "no", or "maybe".  It's not possible to
>     narrow down the answer to "yes" or "no" until execution time (well,
>     the answer is probably more like "yes" or "an exception will be
>     raised at execution time").  The only way to guarantee (or test)
>     program correctness at this point is for the compiler or programmer
>     to insert some sort of runtime check, possibly with some sort of
>     recovery mechanism -- but the answer cannot be determined at
>     compilation time.
>
>
> Example 2:  Conditional code.
>
>     Again, feel free to convert to the syntax of your favorite
>     programming language:
>
>     int x, y, z;
>
>     read x;
>     if (x < 12) y = 4;
>     /* At this point, does y hold a valid value? */
>     z = y;
>
>     Again, the answer at compilation time is "yes or no".  If the value
>     read for x was valid and less than 12, then y will have the value 4,
>     otherwise the value of y will be undefined and the assignment to z
>     will be undefined or illegal.  How would SPARK/Ada handle such a
>     situation?  Suppose the code path is more complicated and involves
>     several reads, computations, tests, conditional branches, select
>     statements, loops, etc. prior to the read statement.  Then the
>     answer becomes "'yes, if y held a valid value before the if
>     statement', otherwise 'yes or no'".  Can SPARK/Ada always determine
>     whether y will have a value or might be undefined under such
>     circumstances?  Of course, there's always the brute force method of
>     deciding that y never has a valid value after the if statement, but
>     that seems heavy-handed in the more complicated case for which it
>     might be safe.
>
>
>> OTOH, I wonder what happens if you have to do something like validate the zip 
>> code of an address, which would probably come from a database of some sort. 
>> Then our _proof_ would be trouble.  Perhaps this severely limits the range of 
>> problem domains for SPARK?  Or perhaps in practice it's ignored?
>>
>> LR
>
> And that is related to a question I asked in an earlier post that has not 
> previously been addressed.  What happens if the range of valid values for one 
> variable is dependent on the specific value(s) of one or more other variables 
> depending on data read at run time?  I claim the validity of values for such a 
> variable cannot be determined at compilation time and requires some sort of 
> runtime consistency check, probably generated by the programmer rather than 
> automatically by the compiler.  Yet, unless I misunderstood, it seemed to me 
> there were some who claimed SPARK/Ada could determine the validity of values 
> for such a variable by static compilation-time analysis without the aid of 
> runtime checks or programmer supplied assertions.
>
>
> Bob Lidral
> lidral  at  alum  dot  mit  edu 


0
adaworks2 (748)
10/16/2006 12:15:05 AM
"LR" <lruss@superlink.net> wrote in message 
news:4531495b$0$2538$cc2e38e6@news.uslec.net...
>
> I think I'd like to have better knowledge of tools I might use. Particularly 
> if I was writing safety critical kinds of software. Personally, I think I 
> might be better off using C++ then relying on an error might be issued at 
> compile time, or at run time.
>
You will have a far greater number of errors with C++ than with
Ada.  Further, C++ continues to have a lot of little surprises for
the unwary developer.   Years ago, when I was doing a lot more
with C++, I used to enjoy reading the column titled "Obfuscated
C++" in a magazine called "C++ Report."   Sometimes I could
solve the problem easily without a compiler.  Often, I had to
resort to compiling and executing the code to find out what
would happen.

There are, at this writing, more gotchas in C++ than in any other
language in widespread use.  On one project I know of, a major well-known
weapon system currently under development, there are two principal
languages being used:  Ada and C++.   On that system, very few
new rules are required for the Ada code.  However, for the C++,
a very strict set of guidelines is in place that forbids a lot of things
that C++ programmers take for granted.   Why?   Because too
many C++ constructs can lead to unpredictable outcomes.
>   I wasn't speaking of functions that return a boolean value, but of boolean 
> variables.  I think this question has been asked several times and an answer 
> has not yet been forthcoming.
>
OK.  Yes, Ada does have a Boolean type.   Variables of that type
can be declared.   The set of values for the Boolean type is False
and True.   Boolean is defined as a special kind of enumerated type.

           type Boolean is (False, True);

An Ada Boolean is not an alias for any numerical value as it is
in C++.  Therefore, the following code,

         B : Boolean := 0;

would fail at compile-time.    It is possible to create an array of
boolean values such as,

        type BA is array (Positive range <>) of Boolean;

where we could declare instances of that array as:

       Q : BA (1..6);
       R : BA (1..20);

Further, we could force a constrained boolean array
to a set size of bits.   One way to do this is:

      type Word is array (0..31) of Boolean;
      pragma Pack(Word);

which would force every element of the array to be represented
as a single bit.   We could add some additional representation
clauses, if they were deemed necessary.   For example,

     for Word'Size use 32;

which would force the values of type Word to be represented
in 32 bits.

We can also do logical operations on a boolean array.  These
include and, or, and xor.  This allows us to do a lot of low-level
operations (at the machine level) when appropriate.
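
Putting the pieces above together, a minimal sketch (mine, not part of
the original reply) of those array-wide logical operations:

    procedure Bit_Ops is
       type Word is array (0 .. 31) of Boolean;
       pragma Pack (Word);
       for Word'Size use 32;

       A : constant Word := (0 .. 15 => True,  others => False);
       B : constant Word := (0 .. 7  => False, others => True);
       Conj, Disj, Excl : Word;
    begin
       Conj := A and B;   -- element-wise conjunction
       Disj := A or  B;   -- element-wise disjunction
       Excl := A xor B;   -- element-wise exclusive "or"
    end Bit_Ops;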

There is a lot more I could add. However, the boolean model of
Ada is sufficiently powerful to accommodate any modern programming
problem in the domain of boolean manipulation.

I hope this answers your question.

Richard 


0
adaworks2 (748)
10/16/2006 12:37:06 AM
Bob Lidral wrote:

> LR wrote:
> 
>> Bob Lidral wrote:
>>
>>> adaworks@sbcglobal.net wrote:
>>>
>>>> "robin" <robin_v@bigpond.com> wrote in message 
>>>> news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
>>>>
>>>> [...]
>>>>
>>>>> That's still inadequate.  A variable might well be assigned a value,
>>>>> but before it is assigned a value, an attempt is made to use it.
>>>>>
>>>>
>>>> That can never happen.   The program is checked at compile-time,
>>>> and any such attempt will cause a compile-time error.  This is
>>>> easy to detect.  Note that even within a procedure, operations
>>>> on values are checked by the compiler.  Consider,
>>>>
>>>>             procedure ABC (x : in integer;  y : out integer) is
>>>>                  z : integer;
>>>>             begin
>>>>                  x := y;       -- compile error; x is an in 
>>>> parameter and
>>>>                                  -- cannot be assigned a new value
>>>>                  y := z;       -- compile-time error; z has never been
>>>>                                  -- assigned a value;
>>>>                  z := y        -- compile-time error; y is an out 
>>>> parameter,
>>>>                                  -- and it cannot be assigned to 
>>>> another variable
>>>>                                  -- unless it has been assigned a 
>>>> value first
>>>>            end ABC;
>>>>
>>>> In the above example, the compiler does a lot of checking to ensure
>>>> the absence of stupid mistakes.  A parameter designated as "in" is
>>>> effectively a constant within the procedure's algorithm.  One that is
>>>> designated as "out" cannot be assigned to another value until it has
>>>> a value of its own.   A local variable that has never been assigned
>>>> a value cannot be used on the right side of assignment unless it has
>>>> already been assigned a value.
>>>
>>>
>>>
>>>  > [...]
>>>
>>> That's all well and good for straight line code segments.  But the 
>>> problem is a lot more difficult than that.  What about loops and 
>>> conditional statements?  
>>
>>
>>
>> If this stuff truly works then I don't think that this would be a 
>> problem.  For one thing, if it makes it easier to do, all of these can 
>> be converted to while loops, but failing that, for loops have limits 
>> that can be checked. Conditional statements can be checked as well. 
>> Perhaps I'm missing something obvious, can you give an example of 
>> something that you think can't be checked in principle?
>>
>> (Although, of course, as I've pointed out elsewhere, since we're 
>> relying on software to check the proof, we're likely not to be able to 
>> do this in fact.)
>>
>>
>>  > What if initial assignment to a variable
>>  > depends on a value read from input?
>>
>> I could see how reading a value weakens the proof, but there's no 
>> reason why the SPARK can't require the user to assert that the value 
>> is valid after a read.  Or generate code that does so. Or perhaps 
>> something else? Or do you think this conflicts with the idea of 
>> exceptions not being a good thing?
>>
> That was partly my question.  I'm getting lost in the deeply-nested 
> postings and replies so I don't always remember who posted what, but I 
> thought adaworks originally claimed that uninitialized variables had 
> invalid values prior to being assigned an initial value.  Later, I 
> thought I saw a claim that variables could never contain invalid values. 
>  It's difficult to keep up with who wrote what but there seem to be 
> conflicting claims about Ada -- or perhaps I'm confusing the claims 
> about Ada with those about SPARK.

I think that we've seen some self-contradictory claims which is very 
disturbing, considering that we're talking about a tool that is being 
used for safety critical systems.



> Anyhow, in practice programs are much more complicated than 
> three-statement inline code segments.  Of course, eliminating all but 
> one of two control structures in the style of Pascal may make the 
> disciples of St. Nik happy, but such an approach makes a language much 
> more difficult to use -- 

True, but I wasn't suggesting that the user be required to write code 
like this, but that a mechanical proof system could do this to 
potentially simplify some of the proof requirements.


 > besides, I don't see how an if statement or a
> switch statement could be easily converted to a while or for statement.

Please consider these three posts using, sorry, c++,
http://groups.google.com/group/comp.lang.c++.moderated/msg/1cd4be1180fa3492?hl=en&
http://groups.google.com/group/comp.lang.c++.moderated/msg/1edc3375a3f555cf?hl=en&
and a slight correction to my code.
http://groups.google.com/group/comp.lang.c++.moderated/msg/1775d3736da5522e?hl=en&

Does that satisfy?  If not, please let me know.




>> OTOH, I wonder what happens if you have to do something like validate 
>> the zip code of an address, which would probably come from a database 
>> of some sort.  Then our _proof_ would be trouble.  Perhaps this 
>> severely limits the range of problem domains for SPARK?  Or perhaps in 
>> practice it's ignored?
>>
>> LR
> 
> 
> And that is related to a question I asked in an earlier post that has 
> not previously been addressed.  What happens if the range of valid 
> values for one variable is dependent on the specific value(s) of one or 
> more other variables depending on data read at run time?  I claim the 
> validity of values for such a variable cannot be determined at 
> compilation time and requires some sort of runtime consistency check, 
> probably generated by the programmer rather than automatically by the 
> compiler.  Yet, unless I misunderstood, it seemed to me there were some 
> who claimed SPARK/Ada could determine the validity of values for such a 
> variable by static compilation-time analysis without the aid of runtime 
> checks or programmer supplied assertions.

Yes, it is related, but there are questions that are lots more fun than 
this.  For example, one of the applications claimed for SPARK is RR 
switch and signal control, which generally had to interface at some point 
with a dispatch system, but I understand there are some problems updating 
these systems in real time, so that conditions in the field may take 
some time to be reflected on the display of a person who needs that 
information.

LR
0
lruss (582)
10/16/2006 12:57:43 AM
Tom Linden wrote:
> 
> I thought I had already replied to this, but it appears not, if so sorry
> for being redundant, new laptop new version of everything.
> 
> It has been more than 40 years since I studied that branch of math
> including Measure theory (the name Paul Halmos comes to mind) that 
> deals  with
> these topics .  I believe that it was Georg Cantor who demonstrated 
> that  the
> rationals were isomorphic to the integers.

He demonstrated a lot of things but not that.

The rationals and the integers have the same cardinality - they can be placed in 
1-1 correspondence, but that does not make them isomorphic.  For that to be so 
there would have to be a 1-1 correspondence that preserved all the algebraic 
structure and that's impossible.

Algebraically, the integers are a ring, in fact an integral domain, and the 
rationals are a field of characteristic 0.  Of course, the rationals are also a 
ring because after all a field is a ring with additional properties.  In general 
a ring homomorphism between two rings R and S with identity does not necessarily 
map the identity of R onto the identity of S.  However if a homomorphism, h, 
from R to S maps 1 in R onto x ~= 1 in S, we have from 1*1 = 1 that 
h(1)h(1)=h(1) or xx=x. This gives x(x-1)=0.  So x is idempotent and a divisor of 
0.  Now we are contemplating a homomorphism from the integers Z to the rationals 
R.  The rationals, being a field, do not have any divisors of zero (the 
integers, being an integral domain, don't either, but that's neither here nor 
there) so we must have h(1)=1.  Perforce h(0)=0, because any ring homomorphism 
preserves the additive identity. We can now deduce that 
h(2)=h(1+1)=h(1)+h(1)=1+1=2. And by induction h(n)=n.  Also h(-n)=-h(n)=-n.  In 
other words, any ring homomorphism from Z into R must map Z onto itself as a 
subring of R.  Thus h is a ring isomorphism but it is not onto the entire ring R 
but only onto the subring Z.  Moreover it should not come as a big surprise.

There are several methods for constructing the reals from the rationals.  All 
are equivalent in the sense that the resulting algebraic structures are 
isomorphic.  Perhaps the simplest to describe in a few lines is the method of 
b-adic expansions.  Let b be a fixed integer greater than 1 and E be the set of 
all functions, e, from the integers, Z, to the set {0,...,b-1} for which there 
is a largest n such that e(n)>0 (note, the value of n varies from function to 
function).  Clearly, SUM{e(i)*b**i: i in Z} is a convergent series all of whose 
terms and partial sums are rational.

Let T be the subset of E consisting of those expansions for which there is a 
smallest n such that e(n)>0.  The elements of T are the terminating expansions 
and they all have rational sums; they do not, however, correspond to the 
complete set of rational numbers - they are just those rationals that have two 
expansions in base b.  Each element in the set E~T (the nonterminating 
expansions) is deemed to be a real number and the series corresponding to that 
element converges to itself.

To complete the construction one must show how addition, subtraction, and 
multiplication are done and that these operations have the required properties 
(associative and distributive laws, etc), how to impose an ordering, that the 
limits of arbitrary convergent sequences of reals are real numbers, etc, etc.

The reals are a field of characteristic 0 and are the smallest field containing 
the rationals that is closed under the taking of limits of sequences.  Since 
every real number has been exhibited as the limit of a sequence of rational 
numbers, it follows that the rationals are everywhere dense in the reals.

The complex numbers can be constructed from the reals by field extension by the 
square root of -1 (i, or if you are an engineer, j).  The complex numbers are 
algebraically closed over the reals and also over themselves.  In other words, 
all roots of polynomial equations with either real or complex coefficients are 
complex numbers and every such equation has a complete set of n roots (provided 
multiplicity of roots is properly taken into account), where n is the degree of 
the equation (the fundamental theorem of algebra).

The cardinality of both the real numbers and the complex numbers as well as 
Euclidean spaces of any countable dimension is aleph=2**aleph0.

A complex number that satisfies a polynomial equation with rational coefficients 
is called an algebraic number. (Actually by multiplying both sides of the 
equation by the lcm of the denominators of the coefficients it can be seen that 
the coefficients can be assumed to be integers.)  The algebraic numbers form an 
algebraically closed field of characteristic 0 and are the smallest such field 
containing the rationals.  The algebraic numbers are countable.

You just have to reconcile yourself to the fact that infinite sets have amazing 
and sometimes counter-intuitive properties.  For example, the seemingly 
paradoxical fact that k-dimensional Euclidean space has the same cardinality as 
the reals (1 dimensional Euclidean space) is easily demonstrated:

Let ei(j) i in {0,...,k-1} j in Z be the non terminating b-adic expansions of 
the coordinates of a point in k dimensional Euclidean space.  Associate this 
k-tuple with the real number having non terminating b-adic expansion e(n)=ei(j) 
where i=mod(n,k) and j=floor(n/k).  This gives a 1-1 correspondence between 
k-dimensional Euclidean space and the reals, showing that they have the same 
cardinality.  This is just a 1-1 mapping, not an isomorphism.  As already 
mentioned, an isomorphism preserves all algebraic structure, including 
dimensionality.




The Lebesgue integral on
> the  unit
> interval over the rationals is 0 for the reals it is 1.  QED.

The sentence is true but what was proved?  If you think that proves the 
rationals are not dense in the reals you are mistaken.  Consult any good text on 
set theory, or read the exposition above.
> 
> Of course, this math and what we do on computers with finite precision is
> a bit different, because the number systems we use are not closed or  
> isomorphic

isomorphic to what?

> we necessarily must think about the required accuracy for any given set of
> calculations and how to manage error propagation.
> 
I can't disagree with that.
0
jjw (608)
10/16/2006 7:17:21 AM
LR wrote:
> Bob Lidral wrote:
> 
> [...]
>  > besides, I don't see how an if statement or a
> 
>> switch statement could be easily converted to a while or for statement.
> 
> 
> Please consider these three posts using, sorry, c++,
> http://groups.google.com/group/comp.lang.c++.moderated/msg/1cd4be1180fa3492?hl=en& 
> 
> http://groups.google.com/group/comp.lang.c++.moderated/msg/1edc3375a3f555cf?hl=en& 
> 
> and a slight correction to my code.
> http://groups.google.com/group/comp.lang.c++.moderated/msg/1775d3736da5522e?hl=en& 
> 
> 
> Does that satisfy, if not, please let me know.
> 
Oops, sorry -- I had forgotten just how ugly structured code could be, 
how convoluted were the required workarounds, and how many extraneous 
Booleans would be required when the language was restricted to eliminate 
most of the control structures in the manner of Pascal.  OTOH, the 
aesthetic appeal of the transformed code is irrelevant to the validity 
of the proof.

You're right.  I wasn't thinking -- or at least not thinking hard enough 
or digging deeply enough into my memories of early (pre-version 3.0) 
Pascal.

> [...]
> 
> 
> LR

Bob Lidral
lidral  at  alum  dot  mit  dot  edu
0
10/16/2006 8:10:26 AM
In <op.thdsb1jczgicya@murphus>, on 10/13/2006
   at 03:20 PM, "Tom Linden" <tom@kednos.com> said:

>Well, that requires the complicity of the OS as in L4 Micro kernel or
>Gnosis

No. Encapsulation can be handled by the compiler and the run-time
library.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/16/2006 11:37:21 AM
In <H3QXg.10030$TV3.4668@newssvr21.news.prodigy.com>, on 10/13/2006
   at 05:31 PM, <adaworks@sbcglobal.net> said:

>The body can be compiled independently of the specification.

Water is wet.

>The successful compilation of library units is not dependent on
>successful compilation of the body (implementation).

Which doesn't address the issue of determining whether a particular
variable is set. Does SPARK add compilation dependencies that do not
exist in Ada?

>The successful compilation of library units is not dependent on
>successful compilation of the body (implementation).

Which has nothing to do with determining whether a variable is set.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/16/2006 11:42:53 AM
In <9pWXg.13119$P7.7638@edtnps90>, on 10/14/2006
   at 12:43 AM, "James J. Weinkam" <jjw@cs.sfu.ca> said:

>Perhaps you could clarify what point you were intending to make with
>your  statement that rational numbers cannot represent (irrational)
>reals.

The point that I was making to glen herrmannsfeldt et al was that
*all* of the schemes discussed are rational arithmetic and that *none*
of them handles arbitrary real numbers. Only this and nothing more.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org

0
spamtrap16 (3722)
10/16/2006 11:51:49 AM
<adaworks@sbcglobal.net> wrote in message
news:0cfYg.22738$Ij.12957@newssvr14.news.prodigy.com...
>
> "robin" <robin_v@bigpond.com> wrote in message
> news:w_5Xg.45528$rP1.2900@news-server.bigpond.net.au...
> > <adaworks@sbcglobal.net> wrote in message
> > news:qKBWg.8973$TV3.4811@newssvr21.news.prodigy.com...
> >>
> >> As to my concept of a software circuit-breaker, any design in the
> >> physical world that involves electrical current usually includes some
> >> kind of fail-safe device such as a circuit-breaker.
> >
> > That is probably not a good analogy.  A circuit-breaker
> > is not what I would call a "fail-safe" device.
> > It merely breaks the circuit.  Permanently.
> > There's no guarantee that damage has not already been done.
> >
> >>   When a modern
> >> program fails in PL/I, Ada, Java, C++, Eiffel, or most other languages,
> >> it is common to include some kind of fail-safe code.  This code acts,
> >> in a program, much the same way a circuit-breaker does in an
> >> electrical system.
> >
> > Not really.  A circuit breaker switches off the circuit.
> > There is no opportunity afterwards to do anything.
> >
> >>   In the case of the software, it is often self-resetting.
> >
> > The equivalent in the software world is to abort.
> > A circuit-breaker is not "self-resetting".
> > It would/could be dangerous to do so.
> >
> >> In the physical world, we often require manual intervention.
> >
> I see your objection as something of a quibble.

It's not a quibble.
The circuit breaker is an inappropriate analogy.

>   The point of the
> analogy is that, with software, we can insert the equivalent of a
> circuit-breaker into our designs.

The equivalent of a circuit breaker in software
is a dead halt, such as caused by division by zero,
exponent overflow, etc., or anything else that immediately
terminates the program.

>   However, we can also go beyond
> the limitations of the world of physical engineering, and also build
> recovery routines.
>
> The circuit-breaker analogy works just fine.

Really!  A circuit breaker does just that.  It breaks the circuit.
A circuit breaker is there to protect a circuit.
When a circuit breaker breaks the circuit, it is
because of an overload of because of a fault
condition.  A fault condition can arise in the case of
(a) personal contact with active conductor;
(b) overload of the circuit controlled by the breaker;
(c) a faulty appliance.
Any of these could cause personal injury and/or death.
Under no circumstances would the circuit breaker
be automatically switched back on.

>  The addition of
> the exception handler is usually unique to software.

Scarcely unique, since it's been available since 1966.

> However,
> there are, in the physical world, some circuit-breaker designs
> that "trip" and reset before failing entirely.

These are more in the nature of thermal cutouts,
that re-apply the power after a given time interval.
They are not circuit breakers.
They are usually on devices such as refrigerators
where it is undesirable for the power to be off for
lengthy periods.

> Richard


0
robin_v (2737)
10/16/2006 12:10:48 PM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:4531fab4$0$17479$ec3e2dad@news.usenetmonster.com...
>
> "robin" <robin_v@bigpond.com> wrote in message
> news:NOeYg.47262$rP1.36516@news-server.bigpond.net.au...
> > "David Frank" <dave_frank@hotmail.com> wrote in message
> > news:4530a168$0$17419$ec3e2dad@news.usenetmonster.com...
> >>
> > >
> >> I would ask for him to translate the PL/I version of this problem EXCEPT
> >> there hasnt been a valid one posted
> >
> > You're no judge of that, since you know nothing
> > about PL/I.
> >
> > And there has been at least one posted in PL/I.
> >
>
> I invited ANY of your peers to vote AYE if they agreed you had posted a
> valid solution, NO-ONE  has..

They aren't going to do your program testing for you.


0
robin_v (2737)
10/16/2006 12:10:48 PM
"David Frank" <dave_frank@hotmail.com> wrote in message
news:4531ff87$0$17468$ec3e2dad@news.usenetmonster.com...

> Looking forward to seeing your list1.ada  translation,
> its for sure there wont be a PL/I one..

PL/I code has already been posted.


0
robin_v (2737)
10/16/2006 12:10:49 PM
<adaworks@sbcglobal.net> wrote in message
news:ynaWg.12481$6S3.7124@newssvr25.news.prodigy.net...
>
> "robin" <robin_v@bigpond.com> wrote in message
> news:pS7Wg.43679$rP1.16736@news-server.bigpond.net.au...
> >
> > <adaworks@sbcglobal.net> wrote in message
> > news:CCDVg.8029$TV3.5843@newssvr21.news.prodigy.com...
> >
> >> I indicated earlier that language design choices need to be
> >> made on the basis of criteria relevant to the problem one
> >> is trying to solve.   One of the primary criterion for the
> >> environment in which I work is dependability.  At present,
> >> the most powerful language toolset to satisfy the need
> >> for high-integrity, highly dependable software is called
> >> SPARK, not C++, not PL/I.   It is a niche language,
> >> to be sure.  One would not use SPARK for pedestrian
> >> projects such as business data processing.  However,
> >> there is currently no language model better suited to
> >> the creation of safety-critical software.
> >
> > Other than, of course, PL/I.
> >
> Sorry Robin, but in this case PL/I does not even run
> a close second.

PL/I has equal facilities for "high-integrity, highly dependable software".
It has had these facilities since 1966.


0
robin_v (2737)
10/16/2006 12:10:49 PM
<adaworks@sbcglobal.net> wrote in message
news:zVFXg.11100$vJ2.4589@newssvr12.news.prodigy.com...
>
> "robin" <robin_v@bigpond.com> wrote in message
> news:ZPiXg.45810$rP1.26232@news-server.bigpond.net.au...
> > <adaworks@sbcglobal.net> wrote in message
> > news:wSXWg.13333$6S3.12640@newssvr25.news.prodigy.net...
> >>
> >> "Bob Lidral" <l1dralspamba1t@comcast.net> wrote in message
> >> news:452B51C0.1080309@comcast.net...
> >> > adaworks@sbcglobal.net wrote:
> >> >> "robin" <robin_v@bigpond.com> wrote in message
> >> >> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
> >> >>
> >> > Maybe I missed something, but there's a question I've seen asked several
> > times
> >> > here that you haven't answered yet.
> >> >
> >> > Suppose there's a variable that can legally have any value representable
by
> >> > its underlying machine representation (integer, character, Boolean,
> >> > floating
> >> > point, etc. -- especially Boolean) that is initialized to a value
somewhere
> > in
> >> > the program other than where it's declared.
> >> >
> >> > Further, suppose that variable is only used in parts of the program where
> > it's
> >> > not possible to determine statically at compilation time whether it has
> >> > already been set to some value.
> >> >
> >> > In such a case, how would SPARK determine the variable had not been
> >> > initialized before being used?  Clearly it can't reject the program at
> >> > compilation time.  Presumably, if I've understood your postings, SPARK
will
> >> > somehow ensure it is initialized (at load time?) to some invalid value so
> > when
> >> > it is first used, its use will raise some sort of exception.
> >> >
> >> > Please pardon the reference to C or PL/I data types (well, it is the PL/I
> >> > newsgroup, despite DF's rantings).  Please give some examples of invalid
> >> > values for C's char, unsigned char, short, or float variables. Please
give
> >> > some examples of invalid values for PL/I's character or bit(1) variables.
> > How
> >> > do these values cause exceptions to occur?  For the IEEE representations
of
> >> > floating point data, it's possible to use a signaling NaN -- if it's
> > supported
> >> > by the hardware.  But what's an invalid value for bit(1)?  Valid values
are
> >> > '0'b and '1'b and PL/I only uses 1 bit to store such values.  How many
> >> > other
> >
> >> > values can a single bit represent?  One would hope that if a single bit
> >> > actually did hold some value other than '0'b or '1'b, the hardware might
> > raise
> >> > an exception, but I'm not sure how reliable such hardware would be in the
> >> > first place. :-)
> >> >
> >> The first part of your question is about a value that is legally
> >> representable on a particular machine.   This is not the criteria
> >> used by either SPARK nor Ada.   Rather, it uses the notion
> >> of a value that is legally representable for some type.
> >>
> >> A type is not the same as a legal machine representation in
> >> this model.  Rather, a type is a legal representation based
> >> on how the type is defined.   The underlying concept is
> >> name equivalence rather than structural equivalence.  Let
> >> me begin with a very simple type declaration.
> >>
> >>         type Number is range -473..250;
> >>         for Number'Size use 32;
> >>
> >> The for statement is not required, but I added it to
> >> force Number to be represented in 32 bits.
> >>
> >> A value of type Number cannot be outside the
> >> bounds of -473 through 250 even though it is
> >> represented in the machine as 32 bits.
> >>
> >> A value of a type may not have a lifetime longer
> >> than the declaration of that type.  Therefore, once
> >> the type is defined, any variables of that type are
> >> going to be in scope.   However, even though they
> >> are in scope, they may not be directly visible.
> >>
> >> At any place where a value of a type is manipulated,
> >> whether through assignment or otherwise, it must
> >> be directly visible.  There will never be hidden
> >> operations on a value of a declared type.  The
> >> compiler can easily check whether a value of a given
> >> type is ever initialized to a value, either at the time of
> >> declaration or somewhere else in the program.
> >>
> >> When the compiler determines, and it will always
> >> determine this, that a value can never be given a
> >> value anywhere in the program, it will raise an
> >> error at compile-time.    Further, if a value is
> >> declared and initialized at the time of declaration,
> >> and if it is never used anywhere in the program, the
> >> compiler will report this too.
> >
> > Is there anying special about this?  Some non-Ada compilers
> > do this.
> >
> Good.  However, I'm not sure many do everything I just
> described.

One or more do, so Ada does not offer anything new here.
Off the top of my head, WATFOR (c. 1967) included an uninitialized-variable
test, as did PL/C (c. 1970), and Salford Fortran (from c. 1991).


0
robin_v (2737)
10/16/2006 12:10:51 PM
<adaworks@sbcglobal.net> wrote in message
news:aN7Xg.10728$vJ2.8224@newssvr12.news.prodigy.com...
>
> "robin" <robin_v@bigpond.com> wrote in message
> news:u_5Xg.45525$rP1.16301@news-server.bigpond.net.au...
> > <adaworks@sbcglobal.net> wrote in message
> > news:pzGWg.10432$vJ2.1196@newssvr12.news.prodigy.com...
> >>
> >> "robin" <robin_v@bigpond.com> wrote in message
> >> news:hMCWg.44641$rP1.19275@news-server.bigpond.net.au...
> >> > <adaworks@sbcglobal.net> wrote in message
> >>  >
> >> RR> SPARK ensures that, at run-time, every scalar will have a value
> >> RR> that conforms to the invariant given for that value.
> >> >
> >> > That doesn't answer the questions.
> >> > He asked 1. whether or not an unitialized variable gets an invalid value;
> >> > 2. does spark allow variables to have any value?
> >> >
>       1)   SPARK will not allow a variable to have an invalid value
>
>        2)  SPARK will allow a variable to have a value that conforms
>             to the range constraint for the type of that variable.
>
>                    type Number is range 50..70;
>                    x : Number := 15;    -- illegal;
>                  ========================
>                    x := any value not in the range of 50 through 70,
>                           is still illegal.
>
> >
> > That's still inadequate.  A variable might well be assigned a value,
> > but before it is assigned a value, an attempt is made to use it.
> >
> That can never happen.   The program is checked at compile-time,
> and any such attempt will cause a compile-time error.  This is
> easy to detect.  Note that even within a procedure, operations
> on values are checked by the compiler.  Consider,
>
>             procedure ABC (x : in integer;  y : out integer) is
>                  z : integer;
>             begin
>                  x := y;       -- compile error; x is an in parameter and
>                                  -- cannot be assigned a new value
>                  y := z;       -- compile-time error; z has never been
>                                  -- assigned a value;
>                  z := y        -- compile-time error; y is an out parameter,
>                                  -- and it cannot be assigned to another
> variable
>                                  -- unless it has been assigned a value first
>            end ABC;
>
> In the above example, the compiler does a lot of checking to ensure
> the absence of stupid mistakes.  A parameter designated as "in" is
> effectively a constant within the procedure's algorithm.  One that is
> designated as "out" cannot be assigned to another value until it has
> a value of its own.   A local variable that has never been assigned
> a value cannot be used on the right side of assignment unless it has
> already been assigned a value.
> >
> Further, one might ask whether a variable is assigned a value
> far away, as global data, from its declaration.   This is also
> checked by the compiler.   The visibility rules guarantee that
> no error can occur even when the variable seems to be far
> away from where it is used.  In Ada, there are ways a
> programmer can deliberately circumvent the visibility rules,
> but that cannot happen in SPARK.
>
> In any case, global data, in modern programming practice
> is far less common (no pun intended) than it was during
> the earlier years.  Does PL/I allow/encourage global data?
> Does PL/I support a strong model of data localization? It
> seems to provide such support, but I wonder about how
> that is used in practice.
> >
> > The two questions at the top are still not answered.
> >
> I hope they are now.   Let me know if you need further
> clarification.

I didn't need clarification.
The original poster did.
I merely pointed out that the questions that he asked
had not been answered.


0
robin_v (2737)
10/16/2006 12:10:52 PM
<adaworks@sbcglobal.net> wrote in message
news:Rs9Yg.14790$6S3.9243@newssvr25.news.prodigy.net...
> Tom,
>
> Global data, when used in small programs, tends to be
> relatively benign.   As programs and software systems
> get larger, global data becomes a serious issue.
>
> I have programmed a in a lot of languages over forty
> plus years and used global data liberally, as did all my
> colleagues.  I look back on that experience and compare
> it to the kind of programming I am able to now with the
> realization that e.g. Fortran Common

COMMON was originally provided to facilitate communication
between various procedures of a large FORTRAN program
where global variables per se were not available.
    Computers of those times were typified by small memories,
and it was often necessary to resort to overlay techniques.
COMMON provided this facility, and it helped to avoid long
and unmanageable lists of variables in argument lists.
    COMMON therefore was part of the solution to the problem.

> and COBOL
> DATA DIVISION were part of the problem, not
> part of the solution.

Of course, there's been no need to use COMMON since
the advent of Fortran 90, apart from the maintenance
of old codes.


0
robin_v (2737)
10/16/2006 12:10:53 PM