COBOL Compiler for Windows



Dear All,
I am looking for a Free COBOL IDE and Compiler for Windows XP.
Regards,
0
Reply ahsharif (52) 4/28/2008 10:31:42 AM



On 28 Apr, 11:31, amir <ahsha...@gmail.com> wrote:
> Dear All,
> I am looking for a Free COBOL IDE and Compiler for Windows XP.
> Regards,

www.adtools.com used to provide a free v3 of Fujitsu Cobol and if you
look on the Microfocus website you can download the personal edition
of v5 of their compiler.
0
Reply alistair7 (2054) 4/28/2008 5:50:35 PM

On Apr 28, 10:31 pm, amir <ahsha...@gmail.com> wrote:
> Dear All,
> I am looking for a Free COBOL IDE and Compiler for Windows XP.
> Regards,

http://www.adtools.com/student/index.htm

0
Reply riplin (4127) 4/28/2008 6:28:39 PM

Hello,
Further questions please..
Do the MS and Fujitsu compilers support VSAM KSDS files? What about
DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch and
EXEC SQL suchandsuch.
Thanks
Graham

On Mon, 28 Apr 2008 10:50:35 -0700 (PDT), Alistair
<alistair@ld50macca.demon.co.uk> wrote:

>On 28 Apr, 11:31, amir <ahsha...@gmail.com> wrote:
>> Dear All,
>> I am looking for a Free COBOL IDE and Compiler for Windows XP.
>> Regards,
>
>www.adtools.com used to provide a free v3 of Fujitsu Cobol and if you
>look on the Microfocus website you can download the personal edition
>of v5 of their compiler.

0
Reply ghobbs (117) 4/29/2008 4:11:55 AM

On Apr 29, 4:11 pm, Graham Hobbs <gho...@cdpwise.net> wrote:
> Hello,
> Further questions please..
> Do the MS and Fujitsu compilers support VSAM KSDS files? What about

What do you mean by 'support' ?  You are unlikely to find a VSAM KSDS
file on Windows. Windows is not MVS or zOS. It does, however, have
Cobol INDEXED SEQUENTIAL files.
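
By way of illustration only (file and field names are invented), this
is the sort of INDEXED file a Windows COBOL compiler gives you where a
mainframe program would use a VSAM KSDS - a keyed file you can load
and then read back directly by key:

      * Sketch only - file and field names are invented.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. IDXDEMO.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT CUSTOMER-FILE ASSIGN TO "CUSTOMER.DAT"
               ORGANIZATION IS INDEXED
               ACCESS MODE IS DYNAMIC
               RECORD KEY IS CUST-ID
               FILE STATUS IS WS-STAT.
       DATA DIVISION.
       FILE SECTION.
       FD  CUSTOMER-FILE.
       01  CUSTOMER-REC.
           05  CUST-ID    PIC X(8).
           05  CUST-NAME  PIC X(30).
       WORKING-STORAGE SECTION.
       01  WS-STAT        PIC XX.
       PROCEDURE DIVISION.
      * Load one record, then retrieve it again directly by key,
      * much as a KSDS would be processed on the mainframe.
           OPEN OUTPUT CUSTOMER-FILE
           MOVE "00000001" TO CUST-ID
           MOVE "EXAMPLE CUSTOMER" TO CUST-NAME
           WRITE CUSTOMER-REC
           CLOSE CUSTOMER-FILE
           OPEN INPUT CUSTOMER-FILE
           MOVE "00000001" TO CUST-ID
           READ CUSTOMER-FILE KEY IS CUST-ID
               INVALID KEY DISPLAY "NOT FOUND, STATUS " WS-STAT
               NOT INVALID KEY DISPLAY "FOUND " CUST-NAME
           END-READ
           CLOSE CUSTOMER-FILE
           STOP RUN.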

> DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch

Windows does not normally run a CICS service. CICS is not part of
Cobol, though Cobol programs are normally used in a CICS service.

> and
> EXEC SQL suchandsuch.

SQL service is also not part of Cobol. Most SQL is achieved with a
different product, such as a preprocessor, that converts the EXEC SQL
lines into Cobol CALLs and such.

Fujitsu version 3 does not include (AFAIK) SQL. Later versions of
Fujitsu do support ODBC access through a limited set of EXEC SQL
statements. You can always write the appropriate CALL statements.
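
For anyone who hasn't seen it, this is roughly what such an embedded
SQL fragment looks like in a COBOL program (table and host-variable
names invented); a precompiler - or an integrated translator - usually
ends up replacing the EXEC SQL block with CALLs to the vendor's
runtime:

      * Sketch only - invented table and host-variable names.
      * Host variables, declared in WORKING-STORAGE:
       01  WS-CUST-ID    PIC X(8).
       01  WS-CUST-NAME  PIC X(30).
      * In the PROCEDURE DIVISION (an EXEC SQL INCLUDE SQLCA END-EXEC
      * normally supplies the SQLCODE field tested below):
           EXEC SQL
               SELECT CUST_NAME
                 INTO :WS-CUST-NAME
                 FROM CUSTOMER
                WHERE CUST_ID = :WS-CUST-ID
           END-EXEC
           IF SQLCODE NOT = 0
               DISPLAY "SQL ERROR, SQLCODE = " SQLCODE
           END-IF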


0
Reply riplin (4127) 4/29/2008 5:44:55 AM

On Tue, 29 Apr 2008 00:11:55 -0400, Graham Hobbs <ghobbs@cdpwise.net>
wrote:

>Hello,
>Further questions please..
>Do the MS and Fujitsu compilers support VSAM KSDS files? What about
>DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch and
>EXEC SQL suchandsuch.
>Thanks
>Graham

Huh?    It looks as though you want an IBM mainframe compiler, not a
Windows compiler.   Or maybe you want terminal software to allow you
to run TSO/SPF on your mainframe.
0
Reply howard (6275) 4/29/2008 1:32:12 PM

On Tue, 29 Apr 2008 07:32:12 -0600, Howard Brazee <howard@brazee.net>
wrote:

>On Tue, 29 Apr 2008 00:11:55 -0400, Graham Hobbs <ghobbs@cdpwise.net>
>wrote:
>
>>Hello,
>>Further questions please..
>>Do the MS and Fujitsu compilers support VSAM KSDS files? What about
>>DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch and
>>EXEC SQL suchandsuch.
>>Thanks
>>Graham
>
>Huh?    It looks as though you want an IBM mainframe compiler, not a
>Windows compiler.   Or maybe you want terminal software to allow you
>to run TSO/SPF on your mainframe.

Right about a mainframe compiler. My 1998 IBM compiler runs nicely
under NT but fails utterly under XP, hence my interest in an XP version.
From IBM it is part of a $10,000 'great' package (last I heard) - most
unfriendly.

I semi-expected your comments about CICS and DB2 but what's the
difference between KSDS and Indexed Sequential? For that matter what's
the diff between ESDS and 'flat or sequential' files?

I really should know this stuff but I don't!

Thanks
Graham
P.S. Have used DB2 and CICS calls from Cobol pgms many times but have
'never' ever examined syntax etc - never thought about writing my own
DB2 calls - maybe it's not difficult.

0
Reply ghobbs (117) 4/29/2008 2:13:48 PM

Howard Brazee wrote:
> On Tue, 29 Apr 2008 00:11:55 -0400, Graham Hobbs <ghobbs@cdpwise.net>
> wrote:
> 
>> Do the MS and Fujitsu compilers support VSAM KSDS files?

Do you mean "MF and Fujitsu compilers"? Microsoft hasn't sold a COBOL 
compiler since the mid-1990s, if memory serves, and even then it was a 
rebadged Micro Focus compiler.

>> What about
>> DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch and
>> EXEC SQL suchandsuch.
> 
> Huh?    It looks as though you want an IBM mainframe compiler, not a
> Windows compiler.   Or maybe you want terminal software to allow you
> to run TSO/SPF on your mainframe.

Or maybe he wants a mainframe emulation environment, which would let 
him run programs with EXEC CICS and EXEC SQL macros. We do sell such a 
product, after all (a few of 'em, actually).

It's not free, though. It's not even cheap. Emulating the mainframe 
operating environment is not trivial.

For the truly ambitious, there's the free Hercules mainframe emulator, 
on which you can install the public-domain MVS 3.8 and its compiler 
pack, which includes an ancient IBM COBOL compiler. (Apparently 
someone has now hacked that to even support VSAM files. Luxury!) But I 
don't believe there's any free emulator for CICS available.

If the OP can do without CICS, and this is for personal 
(non-commercial) use, there's the academic version of MF COBOL: Net 
Express Personal Edition, which is a free (big) download or $50 for 
media in a box.[1] It includes Open ESQL ("EXEC SQL") support.


[1] http://microfocus.com/Resources/Communities/Academic/shop/index.asp


-- 
Michael Wojcik
Micro Focus
0
Reply mwojcik (1879) 4/29/2008 2:38:57 PM

On Tue, 29 Apr 2008 10:13:48 -0400, Graham Hobbs <ghobbs@cdpwise.net>
wrote:

>I semi-expected your comments about CICS and DB2 but what's the
>difference between KSDS and Indexed Sequential? For that matter what's
>the diff between ESDS and 'flat or sequential' files?
>
>I really should know this stufff but I don't!

It depends on the system.   VSAM is a file management system that was
supposed to replace the three types of files that IBM already had on
mainframes (flat, indexed, and relative).   In my experience it made
them more like the Univac 9030 file systems I was familiar with at the
time.

IDCAMS is the utility to run this file management system.    But
people were already familiar with the non-managed systems, and the
only place I've seen ESDS files was where I implemented them myself -
to replace the 9030 functionality.

On the other hand, KSDS files had some significant performance
advantages over IBM's previous indexed file system, so it became the
standard in most shops.    (Many shops have replaced visible KSDS by
databases (which sometimes use KSDS behind the scenes)).

Someone who works with another mainframe company's machines just sees
different flavors of indexed files.   Those companies didn't come
up with new names for their improvements in file handling.
0
Reply howard (6275) 4/29/2008 2:43:41 PM

On 04/29/08 01:44 am, Richard wrote:
> On Apr 29, 4:11 pm, Graham Hobbs <gho...@cdpwise.net> wrote:
>> Hello,
>> Further questions please..
>> Do the MS and Fujitsu compilers support VSAM KSDS files? What about
> 
> What do you mean by 'support' ?  You are unlikely to find a VSAM KSDS
> file on Windows. Windows is not MVS or zOS. It does, however, have
> Cobol INDEXED SEQUENTIAL files.

The IBM workstation compilers [while you may not like the prices] do 
allow use of emulated VSAM files.  These programs can be directly ported 
[via just a recompile] between mainframe and workstation, assuming you 
don't intentionally use features that are unique to one platform.

>> DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch
> 
> Winows does not normally run a CICS service. CICS is not part of
> Cobol, though normally Cobol programs are used in a CICS service.

I have not looked lately, but certainly there used to be workstation-based 
CICS packages that would run the programs compiled by workstation 
compilers.

>> and
>> EXEC SQL suchandsuch.
> 
> SQL service is also not part of Cobol. Most SQL is achieved with a
> different product, such as a preprocessor, that converts the EXEC SQL
> lines into Cobol CALLs and such.

Well, not quite true.  The Enterprise COBOL [and PL/I] compilers have 
integrated the processing of CICS and DB2 'EXEC ...' statements into the 
compiler.  While you can still use the preprocessors, they are no longer 
required.
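
Purely as an illustration of the sort of statement being discussed
(data names invented), an EXEC CICS command in a COBOL program looks
like this; whether the old translator or the integrated compiler
handles it, it ends up as a call into the CICS interface stub:

      * Sketch only - invented data names.
       01  WS-MESSAGE  PIC X(40)
                       VALUE "HELLO FROM A CICS COBOL PROGRAM".
      * In the PROCEDURE DIVISION of a CICS transaction program:
           EXEC CICS SEND TEXT
               FROM(WS-MESSAGE)
               LENGTH(LENGTH OF WS-MESSAGE)
               ERASE
           END-EXEC
           EXEC CICS RETURN END-EXEC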

> Fujitsu version 3 does not include (AFAIK) SQL. Later versions of
> Fujitsu do support ODBC access through a limited set of EXEC SQL
> statements. You can always write the appropriate CALL statements.
I have no knowledge of what Fujitsu provides.
0
Reply Carl.Gehr.ButNoSPAMStuff5 (41) 4/29/2008 2:53:39 PM

There have been lots of replies - with varying levels of "helpfulness".

It would SEEM to me that you are asking for a PC (Windows) based mainframe 
emulation environment.  These exist but are not cheap.  (IBM and Micro Focus 
both sell extensive and expensive products for this).  However, I am a little 
confused as to WHY you would want this.  If you want this in order to do Windows 
development for applications intended for mainframe deployment, then I would 
expect that your mainframe EMPLOYER would provide such tools (and they often can 
afford them).

If you are actually trying to develop (or even "play with") programs that are 
just intended for the PC/Windows environment, then you are looking for the wrong 
tools.

Can you tell us WHY you want DB2, CICS, VSAM support?  What do you plan on doing 
with this compiler/product?  With that information, we may be able to better 
help you.

-- 
Bill Klein
 wmklein <at> ix.netcom.com
"Graham Hobbs" <ghobbs@cdpwise.net> wrote in message 
news:6n7d145dpefuk1vqfk8b9cj2uvk906ooc3@4ax.com...
> Hello,
> Further questions please..
> Do the MS and Fujitsu compilers support VSAM KSDS files? What about
> DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch and
> EXEC SQL suchandsuch.
> Thanks
> Graham
>
> On Mon, 28 Apr 2008 10:50:35 -0700 (PDT), Alistair
> <alistair@ld50macca.demon.co.uk> wrote:
>
>>On 28 Apr, 11:31, amir <ahsha...@gmail.com> wrote:
>>> Dear All,
>>> I am looking for a Free COBOL IDE and Compiler for Windows XP.
>>> Regards,
>>
>>www.adtools.com used to provide a free v3 of Fujitsu Cobol and if you
>>look on the Microfocus website you can download the personal edition
>>of v5 of their compiler.
>
> ** Posted from http://www.teranews.com ** 


0
Reply wmklein (2605) 4/29/2008 7:01:56 PM


"Howard Brazee" <howard@brazee.net> wrote in message 
news:nhce14purq6ud3pk76qmg5hgtc15torruo@4ax.com...
> On Tue, 29 Apr 2008 10:13:48 -0400, Graham Hobbs <ghobbs@cdpwise.net>
> wrote:
>
>>I semi-expected your comments about CICS and DB2 but what's the
>>difference between KSDS and Indexed Sequential? For that matter what's
>>the diff between ESDS and 'flat or sequential' files?
>>
>>I really should know this stufff but I don't!

GOOGLE is your friend...

>
> It depends on the system.   VSAM is a file management system that was
> supposed to replace the three types of files that IBM already had on
> mainframes (flat, indexed, and relative).   In my experience it made
> them more like the Univac 9030 file systems I was familiar with at the
> time.
>

VSAM (the Very Silly Access Method) was a major leap up from ISAM (the 
Incredibly Silly Access Method). The main enhancement was that VSAM was self 
reorganizing and space managing. This eliminated problems like second level 
overflow which previously degraded performance.

Difference between Indexed Sequential and VSAM/KSDS?  Pretty much what I 
just said. KSDS (Keep Spinning Don't Stop) was able to utilize the new VSAM 
technology to assist with Index and Data organisation, Control Area and 
Interval splits when more space was needed, and no need for a separate 
Utility to reorganize. (Except that, in practice, IDCAMS (It Doesn't 
Contribute Anything. Mostly Silly) is needed to reload datasets regularly, 
or performance degrades just as it always did...)

> IDCAMS is the utility to run this file management system.    But
> people were already familiar with the non-managed systems, and the
> only place I've seen ESDS files was where I implemented them myself -
> to replace the 9030 functionality.

ESDS (Entirely Superficial Data Storage) was fine for data entry, where you 
could lose the odd item and re-enter it later, retrieval speed wasn't a big 
priority and sequence wasn't a problem, but if you needed to find something 
in a hurry, the last member of the VSAM family, RRDS (Really Righteous Data 
Service), was the best. Unfortunately, figuring out the hashing algorithms 
to make it work effectively was a bit beyond the average programmer, so it 
languished in a cupboard gathering dust... I once wrote an entire access 
system based on it and everyone was very impressed... Dunno why, it was only 
COBOL... :-)
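
For anyone who never met one, the COBOL analogue is a RELATIVE file
addressed by record number, so the program has to turn the key into a
slot number itself - which is where the hashing came in. A sketch only,
with invented names and a deliberately crude hash:

      * Sketch only - invented names, crude hash.
      * In FILE-CONTROL:
           SELECT ACCT-FILE ASSIGN TO "ACCT.REL"
               ORGANIZATION IS RELATIVE
               ACCESS MODE IS RANDOM
               RELATIVE KEY IS WS-SLOT
               FILE STATUS IS WS-STAT.
      * In WORKING-STORAGE:
       01  WS-SLOT     PIC 9(6).
       01  WS-ACCT-NO  PIC 9(8).
       01  WS-STAT     PIC XX.
      * In the PROCEDURE DIVISION: hash the key to a slot number and
      * go straight to that record - no index to traverse.
           COMPUTE WS-SLOT = FUNCTION MOD(WS-ACCT-NO, 9973) + 1
           READ ACCT-FILE
               INVALID KEY DISPLAY "NO RECORD IN SLOT " WS-SLOT
           END-READ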

>
> On the other hand, KSDS files had some significant performance
> advantages over IBM's previous indexed file system, so it became the
> standard in most shops.    (Many shops have replaced visible KSDS by
> databases (which sometimes use KSDS behind the scenes)).

>
> Someone who works with alternative mainframe company's machines just
> see different flavors of indexed files.   Their companies didn't come
> up with new names for their improvements in file handling.

Sometimes they not only came up with new names but actually used different 
access methods entirely. Everyone knows about "Indexed Sequential"... ever 
used "Indexed Random"?

It was a really cool idea and I was blown away when I first encountered it. 
Did everything Indexed Sequential did, but in less than half the time. 
Perfect for online systems which were just starting to appear when it was 
invented. It was multitasked and certain housekeeping functions ran in the 
background without affecting performance in the foreground. It was developed 
by Burroughs (the Unisys power of two that WASN'T Univac) and caused me to 
realise (to my horror) that all the good IT ideas DIDN'T come out of IBM... 
:-)

Pete.
-- 
"I used to write COBOL...now I can do anything." 


0
Reply dashwood (4370) 4/30/2008 2:02:54 PM

"Michael Wojcik" <mwojcik@newsguy.com> wrote in message
news:fv7fch02im3@news1.newsguy.com...
> Howard Brazee wrote:
>> On Tue, 29 Apr 2008 00:11:55 -0400, Graham Hobbs <ghobbs@cdpwise.net>
>> wrote:
>>
>>> Do the MS and Fujitsu compilers support VSAM KSDS files?
>
> Do you mean "MF and Fujitsu compilers"? Microsoft hasn't sold a COBOL
> compiler since the mid-1990s, if memory serves, and even then it was a
> rebadged Micro Focus compiler.
>
>>> What about
>>> DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch and
>>> EXEC SQL suchandsuch.
>>
>> Huh?    It looks as though you want an IBM mainframe compiler, not a
>> Windows compiler.   Or maybe you want terminal software to allow you
>> to run TSO/SPF on your mainframe.
>
> Or maybe he wants a mainframe emulation environment, which would let him
> run programs with EXEC CICS and EXEC SQL macros. We do sell such a
> product, after all (a few of 'em, actually).
>
> It's not free, though. It's not even cheap. Emulating the mainframe
> operating environment is not trivial.

Sure it is.

But the people who want it are used to paying through the nose so why
disappoint them?

One day the mainframe world will wake up to the fact that they have a
choice. There WAS a time when mainframe hardware was the only game in town
and vendors (both hardware and software) could charge anything they liked.

We paid $300,000 for an IBM 360-30 that had a fraction of the processing
power that the machine I'm writing this on has.

(I did NOT pay $300,000 for this Notebook...:-))

EVERYTHING in the mainframe environment was overpriced, even the manuals...

There was no choice. You want/need computer capability...cough up.

It is ingrained into the mainframe psyche; a living legacy to the Watson
philosophy right up to this day.

I've done it myself... written mainframe software over a few weeks and
charged tens of thousands for it. The same stuff on a PC workstation would
sell for a few hundred.

The world is voting with its feet, they won't buy software at grossly
inflated prices. The only people who will do so are mainframers, and they
are a dying breed.

Why would a small developer pay several thousand for a .NET COBOL
environment, when he can get C# or Java for free (and they are actually many
times more powerful)?  Only because he knows COBOL and has an investment in
it.

Given he can leverage his current investment by using Interop services in
the .NET environment, then the only choice he has to make is whether he
would rather pay the money and stay with a language that's going nowhere
(can't even produce a credible or viable standard), or incur the hassle and
learning curve of Java/C#.

For me, I was gratified to find that learning both Java and C# was nowhere
near as difficult as learning OO COBOL...

I believe it is only a matter of time before larger businesses make the move
also. Or divest themselves of in-house COBOL IT altogether and simply
outsource the service.

Both the Fujitsu and MicroFocus offerings for .NET COBOL are excellent
products (albeit, both overpriced), but it's like producing an excellent
horse, just as the Model T starts rolling off the assembly line.

Given that the car costs much less than the horse and requires no grooming
or cleaning up after, surely it's a no-brainer which one to go for?

Judging by the ads for Developers on JobServe and Seek,  it looks like the
car has a future.


>
> For the truly ambitious, there's the free Hercules mainframe emulator, on
> which you can install the public-domain MVS 3.8 and its compiler pack,
> which includes an ancient IBM COBOL compiler. (Apparently someone has now
> hacked that to even support VSAM files. Luxury!) But I don't believe
> there's any free emulator for CICS available.
>
> If the OP can do without CICS, and this is for personal (non-commercial)
> use, there's the academic version of MF COBOL: Net Express Personal
> Edition, which is a free (big) download or $50 for media in a box.[1] It
> includes Open ESQL ("EXEC SQL") support.
>
>
> [1] http://microfocus.com/Resources/Communities/Academic/shop/index.asp
>
>
> -- 
> Michael Wojcik
> Micro Focus

Gee, Michael, guess you forgot to mention the runtime fees...?

Or have they been removed?

Pete.
--
"I used to write COBOL...now I can do anything."




0
Reply dashwood (4370) 4/30/2008 2:56:51 PM

Hello,
Thanks for the responses. Learning all the time.
Not only 'not like the price', 'haven't got the money'.
Hercules! For me, a whole new line of research, maybe later.
MF not MS, sorry.
Pete, thanks for the history.
Bill Klein asked for more explanation so ..
 
Bill,
Am retired so have no EMPLOYER and cost is a serious factor, especially
for a project that today might be of dubious saleability.

Until a year ago I had an old IBM compiler (VA Cobol V2.2) that runs
on an old laptop under Windows NT plus DB2 V7.2, CICS for Windows 3.1
and VSAM KSDS via Pervasive's bTrieve, all 1990's vintage. On this
laptop I'm developing a software package written in batch Cobol that
generates Cobol/CICS programs (dinosaurial as it may seem, have gone
too far to stop). The package is aimed at any platform that runs CICS,
especially these days it seems, z/OS. 

So the V2.2 compiler I have performs two functions, a) compile/produce
executables of the batch pgms of my software, b) compile/produce
executables for the online Cobol/CICS pgms that my software generates.

The old NT laptop grew old, slow and full so I bought a new one. It
won't accept NT so I opted for Windows XP. My CICS and DB2 work fine
thereon. The Cobol fails horribly and no fixes available. 

Today the XP laptop has NT running under Microsoft's Virtual PC thus
some of my development can continue on the NT side.

But the ultimate intent is to email pgms between myself and clients.
Technically this must be done on the XP side while compilations etc
must be done on the NT side - which is labour intensive and 'almost'
impractical.

Thus the need for a free/cheap XP Cobol compiler that accepts CICS and
DB2 commands and produces working executables - in essence goodbye NT.

As you said mainframe emulators are expensive (unless my government
will give me a grant :-000))) (still choice 1), PWD might be amenable
(choice 2), a Cobol compiler NT to XP fix was available (choice 3),
what else?

So in contacting the group, I am really looking at the 'what else'
scenario. Hope that better explains my situation.
Any help appreciated. Thanks
Graham

On Tue, 29 Apr 2008 19:01:56 GMT, "William M. Klein"
<wmklein@nospam.netcom.com> wrote:

>There have been lots of replies - with varying levels of "helpfulness".
>
>It would SEEM to me that you are asking for a PC (Windows) based mainframe 
>emulation environment.  These exist but are not cheap.  (IBM and Micro Focus 
>both sell extensive and expensive products for this).  However, I am a little 
>confused as to WHY you would want this.  If you want this in order to do Windows 
>development for applications intended for mainframe deployment, then I would 
>expect that your mainframe EMPLOYER would provide such tools (and they often can 
>afford them).
>
>If you are actually trying to develop (or even "play with") programs that are 
>just intended for the PC/Windows environment, then you are looking for the wrong 
>tools.
>
>Can you tell us WHY you want DB2, CICS, VSAM support?  What do you plan on doing 
>with this compiler/product?  With that information, we may be able to better 
>help you.

0
Reply ghobbs (117) 4/30/2008 3:30:08 PM

On Thu, 1 May 2008 02:02:54 +1200, "Pete Dashwood"
<dashwood@removethis.enternet.co.nz> wrote:

>> IDCAMS is the utility to run this file management system.    But
>> people were already familiar with the non-managed systems, and the
>> only place I've seen ESDS files was where I implemented them myself -
>> to replace the 9030 functionality.
>
>ESDS (Entirely Superficial Data Storage) was fine for data entry,  where you 
>could lose the odd item and re-enter it later, retrieval speed wasn't a big 
>priority and sequence wasn't a problem, but if you needed to find something 
>in a hurry, the last member of the VSAM family, RRDS (Really  Righteous Data 
>Service), was the best. Unfortunately,figuring out the hashing algorithms to 
>make it work effectively, was a bit beyond the average programmer, so it 
>languished in a cupboard gathering dust...I once wrote an entire access 
>system based on it and everyone was very impressed... Dunno why, it was only 
>COBOL... :-)

Come to think of it, my RRDS experience was almost entirely in the
same project - moving our data-entry files from Univac 9030 to IBM. I
have never used non-RRDS relative files on IBM mainframes.

>It was a really cool idea and I was blown away when I first encountered it. 
>Did everything Indexed Sequential did, but in less than half the time. 
>Perfect for online systems which were just starting to appear when it was 
>invented. It was multitasked and certain housekeeping functions ran in the 
>background without affecting performance in the foreground. It was developed 
>by Burroughs (the Unisys power of two that WASN'T Univac) and caused me to 
>realise (to my horror) that all the good IT ideas DIDN'T come out of IBM... 
>:-)

Compare WFL or DCL with JCL.    Of course IBM wanted people to switch
to a REXX environment, but nobody listened (well, the techies used
REXX, but the JCL environment stayed).

I used an Amdahl operating system called Aspen, which I liked very
much.
0
Reply howard (6275) 4/30/2008 4:42:30 PM

On Thu, 1 May 2008 02:56:51 +1200, "Pete Dashwood"
<dashwood@removethis.enternet.co.nz> wrote:

>> It's not free, though. It's not even cheap. Emulating the mainframe
>> operating environment is not trivial.
>
>Sure it is.
>
>But the people who want it are used to paying through the nose so why
>disappoint them?
>
>One day the mainframe world will wake up to the fact that they have a
>choice. There WAS a time when mainframe hardware was the only game in town
>and vendors (both hardware and software) could charge anything they liked.

Why should they worry about emulating the mainframe environment on a
PC?

The future of mainframes is as a component in the system.  It will be
the database full of security rules and powerful behind-the-scenes
action.    There is no reason to duplicate real data on PCs (security
audits will make sure we don't), nor to create allowable test data.
The DBAs and systems people will handle that component, just as the
network people handle the routers and ports.
0
Reply howard (6275) 4/30/2008 4:58:52 PM

"Pete Dashwood" <dashwood@removethis.enternet.co.nz> wrote in message 
news:67rcgfF2qieu6U1@mid.individual.net...
>KSDS (Keep Spinning Don't Stop)....
>ESDS (Entirely Superficial Data Storage)....
>RRDS (Really  Righteous Data Service)...

Either I have lived a sheltered life or someone has WAAAAY too much free 
time on his hands...


MCM


0
Reply mmattias (364) 4/30/2008 5:59:35 PM

On Wed, 30 Apr 2008 11:30:08 -0400, Graham Hobbs <ghobbs@cdpwise.net>
wrote:

>Hello,
>Thanks for the responses. Learning all the time.
>Not only 'not like the price', 'havent got the money'.
>Hercules! For me, a whole new line of research, maybe later.
>MF not MS, sorry.
>Pete, thanks for the history.
>Bill Klein asked for more explanation so ..
> 
>Bill,
>Am retired so have no EMPLOYER and cost is a serious factor especially
>for a project that today might be dubious saleability.
>
>Until a year ago I had an old IBM compiler (VA Cobol V2.2) that runs
>on an old laptop under Windows NT plus DB2 V7.2, CICS for Windows 3.1
>and VSAM KSDS via Pervasive's bTrieve, all 1990's vintage. On this
>laptop I'm developing a software package written in batch Cobol that
>generates Cobol/CICS programs (dinosaurial as it may seem, have gone
>too far to stop). The package is aimed at any platform that runs CICS,
>especially these days it seems, z/OS. 
>
>So the V2.2 compiler I have performs two functions, a) compile/produce
>executables of the batch pgms of my software, b) compile/produce
>executables for the online Cobol/CICS pgms that my software generates.
>
>The old NT laptop grew old, slow and full so I bought a new one. It
>won't accept NT so I opted for Windows XP. My CICS and DB2 work fine
>thereon. The Cobol fails horribly and no fixes available. 
>
>Today the XP laptop has NT running under Microsoft's Virtual PC thus
>some of my development can continue on the NT side.
>
>But the ultimate intent is to email pgms between myself and clients.
>Technically this must be done on the XP side while compilations etc
>must be done on the NT side - is labour intensive and 'almost'
>impractical.
>
>Thus the need for a free/cheap XP Cobol compiler that accepts CICS and
>DB2 commands and produces working executables - in essence goodbye NT.
>
>As you said mainframe emulators are expensive (unless my government
>will give me a grant :-000)))(still choice 1), PWD might be amenable
>(choice 2), a Cobol compiler NT to XP fix was available (choice 3), 
>whatelse?. 
>
>So In contacting the group, I am really looking at the 'whatelse'
>scenario. Hope that better explains my situation.
>Any help appreciated. Thanks
>Graham
>
Have you looked at
http://www-1.ibm.com/support/docview.wss?uid=swg21104993

Not sure if this relates to your problem, as you haven't specified it.


I can't test it myself as I don't have VA COBOL. If it is a free
version, I would not mind testing it myself, if you wish to
supply it to me.



Frederico Fonseca
ema il: frederico_fonseca at syssoft-int.com
0
Reply real-email-in-msg-spam (383) 4/30/2008 6:01:36 PM


"Michael Mattias" <mmattias@talsystems.com> wrote in message 
news:Tw2Sj.410$17.104@newssvr22.news.prodigy.net...
> "Pete Dashwood" <dashwood@removethis.enternet.co.nz> wrote in message 
> news:67rcgfF2qieu6U1@mid.individual.net...
>>KSDS (Keep Spinning Don't Stop)....
>>ESDS (Entirely Superficial Data Storage)....
>>RRDS (Really  Righteous Data Service)...
>
> Either I have lived a sheltered life or someone has WAAAAY too much free 
> time on his hands...
>
>
> MCM
>
>
As I am really flat out with these reports at the moment, I guess it must be 
the former :-)

(Reinventing acronyms has been a favourite game of mine for years now. The 
ones in that post were entirely original and invented on the spur of the 
moment.)

Pete.
-- 
"I used to write COBOL...now I can do anything." 


0
Reply dashwood (4370) 4/30/2008 10:11:10 PM


"Howard Brazee" <howard@brazee.net> wrote in message 
news:229h14t368gh6g93tuiavg8i6va75qhk0l@4ax.com...
> On Thu, 1 May 2008 02:56:51 +1200, "Pete Dashwood"
> <dashwood@removethis.enternet.co.nz> wrote:
>
>>> It's not free, though. It's not even cheap. Emulating the mainframe
>>> operating environment is not trivial.
>>
>>Sure it is.
>>
>>But the people who want it are used to paying through the nose so why
>>disappoint them?
>>
>>One day the mainframe world will wake up to the fact that they have a
>>choice. There WAS a time when mainframe hardware was the only game in town
>>and vendors (both hardware and software) could charge anything they liked.
>
> Why should they worry about emulating the mainframe environment on a
> PC?
>
> The future of mainframes is as a component in the system.  It will be
> the database full of security rules and powerful behind-the-scenes
> action.    There is no reason to duplicate real data on PCs (security
> audits will make sure we don't), nor to create allowable test data.
> The DBAs and systems people will handle that component, just as the
> network people handle the routers and ports.

For a very long time I completely agreed with this position, Howard.

Now I'm not so sure.

Using the mainframe as a component of the Network makes sense as long as the 
mainframe has something to offer, that Network servers can't.

Most people consider these factors to be unique to mainframes:

1. Huge processing power.
2. Stability and security.
3. Traditional proven development methodology, perfect for batch processing.

To obtain these benefits, many sites are happy to pay the much higher costs 
of hardware/software and the support teams necessary to keep them running. 
(They have been conditioned to this over the years by vendors, ever since 
the days when the mainframe was the only game in town.) But I think it is 
being questioned now.

The latest top end Network servers can compete very favourably on point 1. 
And even though the price of mainframes has fallen, the servers are much 
cheaper and can be more easily configured and reconfigured and redeployed in 
ways to suit the Business.

Point 2 is largely an illusion. Certainly, there are no viruses on 
mainframes, but there will be once they are opened up to the network; it is 
only a matter of time. (No, I don't have mainframe virus writing on my 
"TODO" list, but it would be an "interesting" exercise... :-)).

Point 3 will be overtaken by events. A perfect online system will obviate 
the need for batch processing. Historical reports can be run as a background 
task (on one or more dedicated servers, if required) rather than in an 
overnight window, and even if multiple servers were required, the cost is 
still far less than a single mainframe. The days when it simply wasn't 
possible to process all of an online transaction in real time are (almost) 
over. There's no need to leave the back end updates to overnight batch 
processing because they can now be done at the front end, without 
significantly increasing the overall transaction time.

When you balance the above arguments against the powerful persuasion of much 
lower costs, I think that mainframes as we have understood the term, may be 
on a sticky wicket long term.

I know there are indications that mainframe purchases have increased in the 
last year, but I believe that is due to aggressive (maybe even desperate) 
marketing. If you look at the overall picture of mainframes as a percentage 
of ALL computers running commercial applications, it is not a pretty sight. 
Even the strongholds of fortress COBOL (Banks, Insurance companies, 
Financial Institutions...) are deserting their posts and opting for cheaper 
solutions.

At this point, I'm not prepared to predict when the "death of mainframes" 
will occur, but I still stand by 2015 for the death of in-house COBOL. My 
feeling is that the demise of the mainframe won't be far behind that.

Pete.
-- 
"I used to write COBOL...now I can do anything." 


0
Reply dashwood (4370) 4/30/2008 10:39:46 PM


"Graham Hobbs" <ghobbs@cdpwise.net> wrote in message 
news:nb3h141500phmuq82nj3buhtmo5ibr3sfp@4ax.com...
> Hello,
> Thanks for the responses. Learning all the time.
> Not only 'not like the price', 'havent got the money'.
> Hercules! For me, a whole new line of research, maybe later.
> MF not MS, sorry.
> Pete, thanks for the history.
> Bill Klein asked for more explanation so ..
>
> Bill,
> Am retired so have no EMPLOYER and cost is a serious factor especially
> for a project that today might be dubious saleability.

Ah, Glasshopper, truly the journey is often more important than the 
destination... :-)

>
> Until a year ago I had an old IBM compiler (VA Cobol V2.2) that runs
> on an old laptop under Windows NT plus DB2 V7.2, CICS for Windows 3.1
> and VSAM KSDS via Pervasive's bTrieve, all 1990's vintage. On this
> laptop I'm developing a software package written in batch Cobol that
> generates Cobol/CICS programs (dinosaurial as it may seem, have gone
> too far to stop). The package is aimed at any platform that runs CICS,
> especially these days it seems, z/OS.
>
> So the V2.2 compiler I have performs two functions, a) compile/produce
> executables of the batch pgms of my software, b) compile/produce
> executables for the online Cobol/CICS pgms that my software generates.
>
> The old NT laptop grew old, slow and full so I bought a new one. It
> won't accept NT so I opted for Windows XP. My CICS and DB2 work fine
> thereon. The Cobol fails horribly and no fixes available.

Graham, have you tried running the compiler in "compatibility mode" on XP?

Right-click the icon you use to run COBOL. (If you are running it from a 
command line, create a shortcut to it and place that on your desktop so you 
have an icon.) A drop-down menu opens. The bottom option is "Properties". 
Click this and then click the "Compatibility" tab. You will find an option 
there to run your application (in this case, your COBOL compiler) in NT 
compatibility mode. XP creates an environment to run the application that 
emulates whichever version of Windows you select.

It doesn't always work, but it usually does...

>
> Today the XP laptop has NT running under Microsoft's Virtual PC thus
> some of my development can continue on the NT side.
>
> But the ultimate intent is to email pgms between myself and clients.
> Technically this must be done on the XP side while compilations etc
> must be done on the NT side - is labour intensive and 'almost'
> impractical.
>
> Thus the need for a free/cheap XP Cobol compiler that accepts CICS and
> DB2 commands and produces working executables - in essence goodbye NT.
>
> As you said mainframe emulators are expensive (unless my government
> will give me a grant :-000)))(still choice 1), PWD might be amenable
> (choice 2), a Cobol compiler NT to XP fix was available (choice 3),
> whatelse?.
>
> So In contacting the group, I am really looking at the 'whatelse'
> scenario. Hope that better explains my situation.
> Any help appreciated. Thanks
> Graham

I sincerely wish you luck with your enterprise.

Anything that generates code gets my vote :-)

Let us know how you get on.

Pete.
-- 
"I used to write COBOL...now I can do anything."

<previous snipped>


0
Reply dashwood (4370) 4/30/2008 10:55:04 PM

On Thu, 1 May 2008 10:55:04 +1200, "Pete Dashwood"
<dashwood@removethis.enternet.co.nz> wrote:

>
>
>"Graham Hobbs" <ghobbs@cdpwise.net> wrote in message 
>news:nb3h141500phmuq82nj3buhtmo5ibr3sfp@4ax.com...
>> <previous snipped>
>
>Graham, have you tried running the compiler in "compatibility mode" on XP?
>
>Right click the icon you use to run COBOL. (If you are running it from a 
>command line, create a shortcut to it and place that on your desktop so you 
>have an icon.) A drop down menu opens up. The bottom option is "Properties". 
>Click this and then click the "Compatibility" tab. You will find an option 
>there to run your application (in this case, your COBOL compiler) in NT 
>compatibility mode. XP creates an environment to run the application, that 
>emulates whichever environment you select.
>
>It doesn't always work, but it usually does...

It didn't.

Not only that, but later found out that installing the Cobol software
on XP destroyed access to XP Help. I didn't hang around long enough to
see what else got shafted. Had to reload the OS. 

1990's VA Cobol is a very nice virus for XP but I don't think either
IBM or MS had this in mind :-)

But thanks for the thought. 
> <previous snipped>

0
Reply ghobbs (117) 4/30/2008 11:43:00 PM

"Graham Hobbs" <ghobbs@cdpwise.net> wrote in message 
news:nb3h141500phmuq82nj3buhtmo5ibr3sfp@4ax.com...
> Hello,
> Thanks for the responses. Learning all the time.
> Not only 'not like the price', 'havent got the money'.
> Hercules! For me, a whole new line of research, maybe later.
> MF not MS, sorry.
> Pete, thanks for the history.
> Bill Klein asked for more explanation so ..
>
> Bill,
> Am retired so have no EMPLOYER and cost is a serious factor especially
> for a project that today might be dubious saleability.
>
> Until a year ago I had an old IBM compiler (VA Cobol V2.2) that runs
> on an old laptop under Windows NT plus DB2 V7.2, CICS for Windows 3.1
> and VSAM KSDS via Pervasive's bTrieve, all 1990's vintage. On this
> laptop I'm developing a software package written in batch Cobol that
> generates Cobol/CICS programs (dinosaurial as it may seem, have gone
> too far to stop). The package is aimed at any platform that runs CICS,
> especially these days it seems, z/OS.
>
> So the V2.2 compiler I have performs two functions, a) compile/produce
> executables of the batch pgms of my software, b) compile/produce
> executables for the online Cobol/CICS pgms that my software generates.
>
> The old NT laptop grew old, slow and full so I bought a new one. It
> won't accept NT so I opted for Windows XP. My CICS and DB2 work fine
> thereon. The Cobol fails horribly and no fixes available.
>
> Today the XP laptop has NT running under Microsoft's Virtual PC thus
> some of my development can continue on the NT side.
>
> But the ultimate intent is to email pgms between myself and clients.
> Technically this must be done on the XP side while compilations etc
> must be done on the NT side - is labour intensive and 'almost'
> impractical.
>
> Thus the need for a free/cheap XP Cobol compiler that accepts CICS and
> DB2 commands and produces working executables - in essence goodbye NT.
>
> As you said mainframe emulators are expensive (unless my government
> will give me a grant :-000)))(still choice 1), PWD might be amenable
> (choice 2), a Cobol compiler NT to XP fix was available (choice 3),
> whatelse?.
>
> So In contacting the group, I am really looking at the 'whatelse'
> scenario. Hope that better explains my situation.
> Any help appreciated. Thanks
> Graham


How about using MS Virtual PC 2007 or VMware etc and running NT under that?

See: http://www.pcmag.com/article2/0,2817,2099563,00.asp 


0
Reply chottel (601) 4/30/2008 11:47:25 PM

On Wed, 30 Apr 2008 19:01:36 +0100, Frederico Fonseca
<real-email-in-msg-spam@email.com> wrote:

>On Wed, 30 Apr 2008 11:30:08 -0400, Graham Hobbs <ghobbs@cdpwise.net>
>wrote:
>
>> <previous snipped>
>>
>Have you looked at
>http://www-1.ibm.com/support/docview.wss?uid=swg21104993
>
>Not sure if this relates to your problem, as you havent specified it.
>
>
>I can't test it myself as I dont have VA COBOL. If this is a free
>version, then I would not mind testing it myself if you wish and
>supply it to me.
>
>
>
>Frederico Fonseca
>ema il: frederico_fonseca at syssoft-int.com

Hello Frederico,

Thanks for your help.

I tried that website a long time ago, besides trying Pete Dashwood's
suggestion of compatibility mode. Neither worked, for reasons I can't
fully remember, but they certainly included: install OK, compile OK,
pgm execution gave Dr Watson stuff (EVERY pgm).

It was never free - I got Cobol & CICS for $2200 way back then - and
while I'd love for you to try it, you'd be very unhappy if what happens
is what I think will happen, which is ..

After the install, XP's Help and Support feature ceased working; click
on it and nothing happens. My retail dealer and I proved it. How could
you ever know what else VA Cobol had done to the OS? And is Help &
Support just the tip of the iceberg?

0
Reply ghobbs (117) 5/1/2008 1:42:03 AM

On Wed, 30 Apr 2008 19:47:25 -0400, "Charles Hottel"
<chottel@earthlink.net> wrote:

>
>"Graham Hobbs" <ghobbs@cdpwise.net> wrote in message 
>news:nb3h141500phmuq82nj3buhtmo5ibr3sfp@4ax.com...
>> <previous snipped>
>
>
>How about using MS Virtual PC 2007 or VMware etc and running NT uder that?
>
>See: http://www.pcmag.com/article2/0,2817,2099563,00.asp 
>
Charles
I do exactly that right now via Virtual PC but my 'project' is too
severely handicapped to continue this way. My NT CICS is 10 years old
and doesn't have the CICS Web features, CTG, etc I will eventually
need.
Cheers
Graham
0
Reply ghobbs (117) 5/1/2008 2:10:32 AM

On Wed, 30 Apr 2008 10:58:52 -0600, Howard Brazee <howard@brazee.net> wrote:

>On Thu, 1 May 2008 02:56:51 +1200, "Pete Dashwood"
><dashwood@removethis.enternet.co.nz> wrote:
>
>>> It's not free, though. It's not even cheap. Emulating the mainframe
>>> operating environment is not trivial.
>>
>>Sure it is.
>>
>>But the people who want it are used to paying through the nose so why
>>disappoint them?
>>
>>One day the mainframe world will wake up to the fact that they have a
>>choice. There WAS a time when mainframe hardware was the only game in town
>>and vendors (both hardware and software) could charge anything they liked.
>
>Why should they worry about emulating the mainframe environment on a
>PC?
>
>The future of mainframes is as a component in the system.  It will be
>the database full of security rules and powerful behind-the-scenes
>action.    There is no reason to duplicate real data on PCs (security
>audits will make sure we don't), nor to create allowable test data.
>The DBAs and systems people will handle that component, just as the
>network people handle the routers and ports.

Unix database servers are already doing the same, cheaper. 

For example, Sabre (airline reservations, Travelocity) cut its total expenses 50% when it
switched from mainframe to Unix. It handles 15,000 transactions per second. The database
resides on 17 HP S86000 boxes (now obsolete). 

For example, US telephone networks run entirely on Unix machines. About 10 servers handle
200,000 transactions (calls) per second. Each transaction creates at least one database
row. 

For example, PULSE, the largest ATM/EFT clearing system in the US, runs entirely on Unix
servers. 

"The base zSeries 990-332 machine, without disk and memory, costs around $15 million. If
you had to beef it up with an appropriate amount of memory and disk, the mainframe
hardware might cost $30 million. If you have to add monthly software fees for three years
to this machine, it is probably on the order of $50 million for this machine over three
years, including maintenance. Mainframes don't have list prices--which used to be against
the law for IBM--so it is hard to say for sure. Even if you assume a 50 percent discount,
after adding in the software costs, you are talking about $55 per TPM. That's a 10 to 1
price premium. And it will get worse as the Power6 and Power7 generations roll out, unless
IBM consolidates the zSeries into one of these future Power-based servers. And that is why
many people believe IBM will do just that, as it has already done with its proprietary
OS/400-based servers."

http://www.itjungle.com/tug/tug120204-story02.html
0
Reply Robert 5/1/2008 3:38:36 AM

On Thu, 1 May 2008 10:39:46 +1200 "Pete Dashwood"
<dashwood@removethis.enternet.co.nz> wrote:

:>Most people consider these factors to be unique to mainframes:

:>1. Huge processing power.
:>2. Stability and security.
:>3. Traditional proven development methodology, perfect for batch processing.

:>To obtain these benefits, many sites are happy to pay the much higher costs 
:>of hardware/software and the support teams necessary to keep them running. 
:>(They have been conditioned to over years by vendors, ever since the days 
:>when the mainframe was the only game in town.) But I think it is being 
:>questioned now.

:>The latest top end Network servers can compete very favourably on point 1. 
:>And even though the price of mainframes has fallen, the servers are much 
:>cheaper and can be more easily configured and reconfigured and redeployed in 
:>ways to suit the Business.

You have to define "huge processing power". Many chips run faster than those
on mainframes.

They can parallel process huge amounts of data.

:>Point 2 is largely an illusion. Certainly, there are no viruses on 
:>mainframes, but there will be once they are opened up to the network; it is 
:>only a matter of time. (No, I don't have mainframe virus writing on my 
:>"TODO" list, but it would be an "interesting" exercise... :-)).

Spoken like a true PC expert, who has no problem with reboots. So much so,
that they do it multiple times a day.

Point two is really 5 9s.

The reliability of mainframes and their components is such that failures can
be tolerated with failover, so that the end users are not even aware that a
component has failed.

Mainframes regularly run months without rebooting. And that is with CPU 95%+
busy much of the time.

Point three is so meaningless that it isn't even worth addressing.

--
Binyamin Dissen <bdissen@dissensoftware.com>
http://www.dissensoftware.com

Director, Dissen Software, Bar & Grill - Israel


Should you use the mailblocks package and expect a response from me,
you should preauthorize the dissensoftware.com domain.

I very rarely bother responding to challenge/response systems,
especially those from irresponsible companies.
0
Reply postingid (197) 5/1/2008 8:54:55 AM

On Apr 28, 12:31 pm, amir <ahsha...@gmail.com> wrote:
> Dear All,
> I am looking for a Free COBOL IDE and Compiler for Windows XP.
> Regards,


You can use OpenCOBOL (compiled under cygwin) and UltraEdit or
Notepad++ for viewing your code.

You can also build an editor with the SynEdit component, using your
own list of COBOL keywords.
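
A minimal program is enough to check that the tool chain works; with
OpenCOBOL the compile command is normally something like
cobc -x HELLO.cob, though the details may differ on your installation:

      * Minimal check of the compiler; program name is arbitrary.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO.
       PROCEDURE DIVISION.
           DISPLAY "Hello from COBOL on Windows XP".
           STOP RUN.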

Kr,
gilles
0
Reply Crebassa.Gilles (5) 5/1/2008 9:10:53 AM


"Binyamin Dissen" <postingid@dissensoftware.com> wrote in message 
news:iu0j14p6qu5vcaj726itutemuaimqvpta8@4ax.com...
> On Thu, 1 May 2008 10:39:46 +1200 "Pete Dashwood"
> <dashwood@removethis.enternet.co.nz> wrote:
>
> :>Most people consider these factors to be unique to mainframes:
>
> :>1. Huge processing power.
> :>2. Stability and security.
> :>3. Traditional proven development methodology, perfect for batch 
> processing.
>
> :>To obtain these benefits, many sites are happy to pay the much higher 
> costs
> :>of hardware/software and the support teams necessary to keep them 
> running.
> :>(They have been conditioned to over years by vendors, ever since the 
> days
> :>when the mainframe was the only game in town.) But I think it is being
> :>questioned now.
>
> :>The latest top end Network servers can compete very favourably on point 
> 1.
> :>And even though the price of mainframes has fallen, the servers are much
> :>cheaper and can be more easily configured and reconfigured and 
> redeployed in
> :>ways to suit the Business.
>
> You have to define "huge processing power". Many chips run faster than 
> those
> on mainframes.

Thank you. That was my point.
>
> They can parallel process huge amounts of data.
>

Yes, they can.

> :>Point 2 is largely an illusion. Certainly, there are no viruses on
> :>mainframes, but there will be once they are opened up to the network; it 
> is
> :>only a matter of time. (No, I don't have mainframe virus writing on my
> :>"TODO" list, but it would be an "interesting" exercise... :-)).
>
> Spoken like a true PC expert, who has no problem with reboots. So much so,
> that they do it multiple times a day.

I have a Win 98 SE machine that runs 24/7. It was last rebooted 3 months 
ago. The Notebook I am writing this on (Win XP SP2) has not been booted for 
6 days (it hibernates between uses).

The last commercial site I was managing had multiple 8 processor Xeons and 
they were cold booted twice during the year I was there.

The last IBM mainframe system I worked on (admittedly a long time ago now), 
was rebooted every Monday morning after Preventive Maintenance.

While I don't claim to be a PC expert, I certainly have no problem with 
rebooting ANY computer system if it needs it.


>
> Point two is really 5 9s.

45...? :-)
>
> The reliability of mainframes and their components are such that failures 
> can
> be tolerated with fallover, so that the end users are not even aware that 
> a
> component has failed.

The same is easily achievable with network servers also.
>
> Mainframes regularly run months without rebooting. And that is with CPU 
> 95%+
> busy much of the time.

So do certain networks. Your point?
>
> Point three is so meaningless that it isn't even worth addressing.
>

No, probably not. Batch processing will be redundant within your lifetime.

So will mainframes (as we currently understand the term). The boundaries are 
becoming more and more blurred; in the end, the great mainframe con will be 
recognised for what it is, and cost effectiveness will simply win the day.

Pete.
-- 
"I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/1/2008 9:45:41 AM

In article <67thq5F2p8o04U1@mid.individual.net>,
Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:

[snip]

>No, probably not. Batch processing will be redundant within your lifetime.

Mr Dashwood, leaving aside the myriad of legislated requirements for batch 
reporting - and The Law, in some parts of the world, is rather gastropodic 
in its rate of change - what would a demonstration of this assertion look 
like?

'All right, I'm dead now and folks still need batch!'

DD

0
Reply docdwarf (6044) 5/1/2008 11:43:16 AM

On Thu, 1 May 2008 10:39:46 +1200, "Pete Dashwood"
<dashwood@removethis.enternet.co.nz> wrote:

>Most people consider these factors to be unique to mainframes:
>
>1. Huge processing power.
>2. Stability and security.
>3. Traditional proven development methodology, perfect for batch processing.
>
>To obtain these benefits, many sites are happy to pay the much higher costs 
>of hardware/software and the support teams necessary to keep them running. 
>(They have been conditioned to over years by vendors, ever since the days 
>when the mainframe was the only game in town.) But I think it is being 
>questioned now.

Costs will change.   Right now, there are times when big iron is more
cost effective, and times when it is less cost effective than
alternatives.

It is interesting that with the price of Oil going up, freight trains
are moving more goods (relative to trucks), than recently.   But the
balance will shift back and forth.   Notice though that our trains and
ships do carry truck-trailers.    When we program such a package on
our computer, we slap a label on it, and don't care how it gets to its
destination.

The above factors you mentioned aren't absolutes - they are relative
characteristics.    What we choose will be based upon cost-benefit
analysis at a particular moment.   (Again - I won't be choosing the
back-end hardware.   I'm a programmer, the nuts and bolts won't matter
as long as the infrastructure is flexible enough to handle my
requests).

0
Reply howard (6275) 5/1/2008 1:43:14 PM

On Thu, 1 May 2008 11:43:16 +0000 (UTC), docdwarf@panix.com () wrote:

>>No, probably not. Batch processing will be redundant within your lifetime.
>
>Mr Dashwood, leaving aside the myriad of legislated requirements for batch 
>reporting - and The Law, in some parts of the world, is rather gastropodic 
>in its rate of change - what would a demonstration of this assertion look 
>like?


I'm not quite sure what "batch processing" is.    What's the
difference between an operator submitting a job (and sometimes
entering parms interactively), CA-7 submitting a job, Time Machine
taking its hourly backup, Windows starting its boot-up tasks, Me
selecting a document and clicking on "print" (instead of opening it to
edit it), clicking on a .bat file that synchronizes my data to my
portable or network drive (to access from home), or my telling iTunes
to synchronize with my computer?

Lots of the arguments about how the old way of doing things will die
aren't about absolutes.   

0
Reply howard (6275) 5/1/2008 1:51:07 PM

On Wed, 30 Apr 2008 22:38:36 -0500, Robert <no@e.mail> wrote:

>>The future of mainframes is as a component in the system.  It will be
>>the database full of security rules and powerful behind-the-scenes
>>action.    There is no reason to duplicate real data on PCs (security
>>audits will make sure we don't), nor to create allowable test data.
>>The DBAs and systems people will handle that component, just as the
>>network people handle the routers and ports.
>
>Unix database servers are already doing the same, cheaper. 

Some of those Unix database servers are on desktop machines, some on
minis, some on server farms, and some on big iron.   (Although the way
we can tell our Sun and our IBM mainframe in our computer room is by
color - they are the same size).

"Cheaper" depends on lots of factors.     Which is cheapest, shipping
by truck, ship, foot, train, or by air?    Answer:   It depends.

Which is the cheapest platform to run a database server?    Answer: It
depends.
0
Reply howard (6275) 5/1/2008 1:55:03 PM

Pete Dashwood wrote:
> "Michael Wojcik" <mwojcik@newsguy.com> wrote in message
> news:fv7fch02im3@news1.newsguy.com...
 >>
>> Or maybe he wants a mainframe emulation environment, which would let him
>> run programs with EXEC CICS and EXEC SQL macros. We do sell such a
>> product, after all (a few of 'em, actually).
>>
>> It's not free, though. It's not even cheap. Emulating the mainframe
>> operating environment is not trivial.
> 
> Sure it is.

OK, Pete, code it up. I'm sure you could make a mint selling it.

Really, this must be one of the weakest claims I've ever seen you post 
here, and that's saying something. There are 622 pages of technical 
specifications in the CICS Prog Ref alone. Duplicating all of that 
behavior requires a major development investment, regardless of the 
technology it's built on or who's doing the work. And that's just the 
CICS API.

Let's take an easy bit, as an example: emulating IBM's versions of the 
BSD sockets interface (EZASOKET and EZACICAL) on top of Winsock and 
SUSv3 sockets. That's about seven thousand lines of C source. You're 
welcome to try to do better.

> One day the mainframe world will wake up to the fact that they have a
> choice. There WAS a time when mainframe hardware was the only game in town
> and vendors (both hardware and software) could charge anything they liked.

All of our customers know they have a choice. They're exercising it by 
buying our products.

Perhaps one day you'll realize that you're just another programmer 
with an opinion, and not a prophet with a divine channel to 
transcendent and unassailable truth. But I'm not holding my breath.

> We paid $300,000 for an IBM 360-30 that had a fraction of the processing
> power that the machine I'm writing this on has.

Hurrah for you. That has absolutely nothing to do with the effort 
required to emulate the mainframe environment.

> Gee, Michael, guess you forgot to mention the runtime fees...?

No; I didn't mention them because they weren't relevant. I didn't 
forget to mention the price of tea in China, either.

-- 
Michael Wojcik
Micro Focus
0
Reply mwojcik (1879) 5/1/2008 2:56:08 PM

In article <v9ij14p4um1t9cm69sq7f65bp686egicjn@4ax.com>,
Howard Brazee  <howard@brazee.net> wrote:
>On Thu, 1 May 2008 11:43:16 +0000 (UTC), docdwarf@panix.com () wrote:
>
>>>No, probably not. Batch processing will be redundant within your lifetime.
>>
>>Mr Dashwood, leaving aside the myriad of legislated requirements for batch 
>>reporting - and The Law, in some parts of the world, is rather gastropodic 
>>in its rate of change - what would a demonstration of this assertion look 
>>like?
>
>
>I'm not quite sure what "batch processing" is.

According to IBM - and who could ask for more? - it is 'A method of 
running a program or a series of programs in which one or more records (a 
batch) are processed with little or no action from the user or operator. 
Contrast with interactive processing.'

<http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/ddp/rbal1glossary.htm>

DD

0
Reply docdwarf (6044) 5/1/2008 2:58:27 PM


<docdwarf@panix.com> wrote in message news:fvcack$bej$1@reader2.panix.com...
> In article <67thq5F2p8o04U1@mid.individual.net>,
> Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:
>
> [snip]
>
>>No, probably not. Batch processing will be redundant within your lifetime.
>
> Mr Dashwood, leaving aside the myriad of legislated requirements for batch
> reporting - and The Law, in some parts of the world, is rather gastropodic
> in its rate of change - what would a demonstration of this assertion look
> like?

There is not ONE single legislated requirement for batch processing and you 
know it. There are legislated requirements for information, and it must be 
organised a certain way. Currently, IT people see that as best accomplished 
by batch processing, but the legislation doesn't require that it be.

If legislated data requirements can be met by alternative technology, you 
won't find Government complaining. (In fact, as long as the legal 
requirements are met they CAN'T complain...)

As technology changes so will people's minds. New approaches and 
possibilities open up. Oh, Brave new world that has such people in it!

>
> 'All right, I'm dead now and folks still need batch!'

:-)

Some folks will believe they still need batch processing long after all of 
us are dead. That doesn't mean that they actually do need it.

Pete.
-- 
"I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/1/2008 3:15:37 PM

On Thu, 1 May 2008 14:58:27 +0000 (UTC), docdwarf@panix.com () wrote:

>According to IBM - and who could ask for more? - it is 'A method of 
>running a program or a series of programs in which one or more records (a 
>batch) are processed with little or no action from the user or operator. 
>Contrast with interactive processing.'
>
><http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/ddp/rbal1glossary.htm>

From that I infer that the following are batch processes:

1.   My computer's triggered backup.
2.   My .bat file that copies changed files to my portable drive.
3.   The job that sends out our past-due bills to the collection
agency.
4.   The job that checks my IRS form and sends out rebates.
5.   The job that looks for significant events in the experiments at
the Large Hadron Collider.
6.   The process that starts when I load all the songs in my playlist
into my iPod.

So which of these are going to soon be obsolete?
0
Reply howard (6275) 5/1/2008 3:19:43 PM


"Michael Wojcik" <mwojcik@newsguy.com> wrote in message 
news:fvcm1e06lq@news1.newsguy.com...
> Pete Dashwood wrote:
>> "Michael Wojcik" <mwojcik@newsguy.com> wrote in message
>> news:fv7fch02im3@news1.newsguy.com...
> >>
>>> Or maybe he wants a mainframe emulation environment, which would let him
>>> run programs with EXEC CICS and EXEC SQL macros. We do sell such a
>>> product, after all (a few of 'em, actually).
>>>
>>> It's not free, though. It's not even cheap. Emulating the mainframe
>>> operating environment is not trivial.
>>
>> Sure it is.
>
> OK, Pete, code it up. I'm sure you could make a mint selling it.
>

There was a time when  I would have... :-)

I made a mint selling other things instead.

> Really, this must be one of the weakest claims I've ever seen you post 
> here, and that's saying something. There are 622 pages of technical 
> specifications in the CICS Prog Ref alone. Duplicating all of that 
> behavior requires a major development investment, regardless of the 
> technology it's built on or who's doing the work. And that's just the CICS 
> API.

And yet the command interface is not that complex. Ok, I'll agree that it is 
a serious amount of code. :-)

>
> Let's take an easy bit, as an example: emulating IBM's versions of the BSD 
> sockets interface (EZASOKET and EZACICAL) on top of Winsock and SUSv3 
> sockets. That's about seven thousand lines of C source. You're welcome to 
> try to do better.

I guess that would be a zillion lines of COBOL... :-)

OK, I'll upgrade my estimate to a VERY serious amount of code... :-)
>
>> One day the mainframe world will wake up to the fact that they have a
>> choice. There WAS a time when mainframe hardware was the only game in 
>> town
>> and vendors (both hardware and software) could charge anything they 
>> liked.
>
> All of our customers know they have a choice. They're exercising it by 
> buying our products.

So the rest of the marketplace exercised it the other way :-)?

I think your products are excellent. But I don't have any. Oddly enough, I 
may be in the market for some, after some events that occurred today... It 
would be ironic if I end up paying what I consider to be an unnecessarily 
high price for NetExpress... :-) (I'll probably get it on e-bay, if I have 
to...)

It's the runtime licensing that really puts me off.

>
> Perhaps one day you'll realize that you're just another programmer with an 
> opinion, and not a prophet with a divine channel to transcendent and 
> unassailable truth. But I'm not holding my breath.

:-) I have stated here many times that what I'm expressing is opinion. I've 
never claimed divinity. There are those who would question my claim to be a 
programmer... :-)

However, my opinions are honestly held and I'm happy to debate them.

(Nice turn of phrase, by the way...:-))

>
>> We paid $300,000 for an IBM 360-30 that had a fraction of the processing
>> power that the machine I'm writing this on has.
>
> Hurrah for you. That has absolutely nothing to do with the effort required 
> to emulate the mainframe environment.

Ah, we are debating different things. I'm concerned about the costs of 
software and service in mainframe environments, which I believe have derived 
from the times when people (even your customers) DIDN'T have any option. You 
are apparently miffed because I dismissed the effort required to emulate a 
mainframe TP monitor on a PC. (I actually said it tongue in cheek, but now I 
stand by it...)

For MY argument, the purchase cost of a now obsolete mainframe was very 
pertinent.

Let's be clear about this.

Your company has put in a lot of effort building a mainframe environment 
emulator which is a very good product. My argument is that it's overpriced. 
If I'm wrong, then show how I am. Maybe there's a limited market or the 
market was underestimated before development started or the job was a lot 
more difficult than expected and blew out the development cost... any of 
these would be perfectly valid reasons for the price being high.

But so is my posit that it is easy to charge high prices to people who are 
used to high prices. Unfair? Then show why that isn't the case and I'll 
sincerely apologize (and I don't unless I'm sorry...)


>
>> Gee, Michael, guess you forgot to mention the runtime fees...?
>
> No; I didn't mention them because they weren't relevant. I didn't forget 
> to mention the price of tea in China, either.
>
Given that the discussion was about the cost of things IT, I can agree that 
the price of tea in China isn't relevant...:-)

Runtime fees, however... are another story.

Pete.
--
"I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/1/2008 3:57:13 PM

In article <6cnj14db0cpscakdtdacpotptrgtjnkf75@4ax.com>,
Howard Brazee  <howard@brazee.net> wrote:

[snip]

>So which of these are going to soon be obsolete?

Mr Dashwood's assertion did not address obsolescence.

DD

0
Reply docdwarf (6044) 5/1/2008 4:00:36 PM

In article <67u54oF2p4cubU1@mid.individual.net>,
Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:
>
>
><docdwarf@panix.com> wrote in message news:fvcack$bej$1@reader2.panix.com...
>> In article <67thq5F2p8o04U1@mid.individual.net>,
>> Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:
>>
>> [snip]
>>
>>>No, probably not. Batch processing will be redundant within your lifetime.
>>
>> Mr Dashwood, leaving aside the myriad of legislated requirements for batch
>> reporting - and The Law, in some parts of the world, is rather gastropodic
>> in its rate of change - what would a demonstration of this assertion look
>> like?
>
>There is not ONE single legislated requirement for batch processing and you 
>know it.

Mr Dashwood, that might be a reason for my speaking of 'batch reporting', 
not 'batch processing'.  If one wishes to design an online transaction 
processing environment to create batch reports, have at it... that does 
not change the fact that there are legislated batch reporting 
requirements.

[snip]

>> 'All right, I'm dead now and folks still need batch!'
>
>:-)
>
>Some folks will believe they still need batch processing long after all of 
>us are dead. That doesn't mean that they actually do need it.

I wasn't asking into what 'some folks believe', Mr Dashwood, I was asking 
into what a demonstration of your assertion would look like.  I offered a 
suggestion and your response was an emoticon with... other stuff.

DD

0
Reply docdwarf (6044) 5/1/2008 4:11:39 PM

On Fri, 2 May 2008 03:15:37 +1200, "Pete Dashwood"
<dashwood@removethis.enternet.co.nz> wrote:

>There is not ONE single legislated requirement for batch processing and you 
>know it. There are legislated requirements for information, and it must be 
>organised a certain way. Currently, IT people see that as best accomplished 
>by batch processing, but the legislation doesn't require that it be.
>
>If legislated data requirements can be met by alternative technology, you 
>won't find Government complaining. (In fact, as long a sthe legal 
>requirements are met they CAN'T complain...)

But the administrations require that we send them files in their
formats.    Maybe someday they will accept real-time data updates,
although it is not obvious to me how this will happen.    Real-time
paychecks, tax withdrawals, and tax reporting might take a while.
0
Reply howard (6275) 5/1/2008 4:37:41 PM

On Thu, 01 May 2008 07:55:03 -0600, Howard Brazee <howard@brazee.net> wrote:

>On Wed, 30 Apr 2008 22:38:36 -0500, Robert <no@e.mail> wrote:
>
>>>The future of mainframes is as a component in the system.  It will be
>>>the database full of security rules and powerful behind-the-scenes
>>>action.    There is no reason to duplicate real data on PCs (security
>>>audits will make sure we don't), nor to create allowable test data.
>>>The DBAs and systems people will handle that component, just as the
>>>network people handle the routers and ports.
>>
>>Unix database servers are already doing the same, cheaper. 
>
>Some of those Unix database servers are on desktop machines, some on
>minis, some on server farms, and some on big iron.   (Although the way
>we can tell our Sun and our IBM mainframe in our computer room is by
>color - they are the same size).
>
>"Cheaper" depends on lots of factors.     Which is cheapest, shipping
>by truck, ship, foot, train, or by air?    Answer:   It depends.
>
>Which is the cheapest platform to run a database server?    Answer: It
>depends.

It depends on getting a 90% discount from the IBM salesman. 
0
Reply Robert 5/1/2008 5:22:17 PM

On Thu, 1 May 2008 21:45:41 +1200 "Pete Dashwood"
<dashwood@removethis.enternet.co.nz> wrote:

:>"Binyamin Dissen" <postingid@dissensoftware.com> wrote in message 
:>news:iu0j14p6qu5vcaj726itutemuaimqvpta8@4ax.com...
:>> On Thu, 1 May 2008 10:39:46 +1200 "Pete Dashwood"
:>> <dashwood@removethis.enternet.co.nz> wrote:

:>> :>Most people consider these factors to be unique to mainframes:

:>> :>1. Huge processing power.
:>> :>2. Stability and security.
:>> :>3. Traditional proven development methodology, perfect for batch 
:>> processing.

:>> :>To obtain these benefits, many sites are happy to pay the much higher 
:>> costs
:>> :>of hardware/software and the support teams necessary to keep them 
:>> running.
:>> :>(They have been conditioned to over years by vendors, ever since the 
:>> days
:>> :>when the mainframe was the only game in town.) But I think it is being
:>> :>questioned now.

:>> :>The latest top end Network servers can compete very favourably on point 
:>> 1.
:>> :>And even though the price of mainframes has fallen, the servers are much
:>> :>cheaper and can be more easily configured and reconfigured and 
:>> redeployed in
:>> :>ways to suit the Business.

:>> You have to define "huge processing power". Many chips run faster than 
:>> those
:>> on mainframes.

:>Thank you. That was my point.

:>> They can parallel process huge amounts of data.

:>Yes, they can.

:>> :>Point 2 is largely an illusion. Certainly, there are no viruses on
:>> :>mainframes, but there will be once they are opened up to the network; it 
:>> is
:>> :>only a matter of time. (No, I don't have mainframe virus writing on my
:>> :>"TODO" list, but it would be an "interesting" exercise... :-)).

:>> Spoken like a true PC expert, who has no problem with reboots. So much so,
:>> that they do it multiple times a day.

:>I have a Win 98 SE machine that runs 24/7. It was last rebooted 3 months 
:>ago. The Notebook I am writing this on (Win XP SP2) has not been booted for 
:>6 days (it hibernates between uses).

Running at 95% CPU most of the time?

:>The last commercial site I was managing had multiple 8 processor Xeons and 
:>they were cold booted twice during the year I was there.

Not familiar with the hardware.

:>The last IBM mainframe system I worked on (admittedly a long time ago now), 
:>was rebooted every Monday morning after Preventive Maintenance.

Measured in decades. They weren't close to 5 9s at that time.

:>While I don't claim to be a PC expert, I certainly have no problem with 
:>rebooting ANY computer system if it needs it.

The issue is how often it needs it.

And is the first response from a support call - reboot the machine?

:>> Point two is really 5 9s.

:>45...? :-)

99.999% availability. Quite important when doing heavy transaction loads.
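(For a sense of scale: 0.001% of a year is roughly five minutes of downtime per year.)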

:>> The reliability of mainframes and their components are such that failures 
:>> can
:>> be tolerated with fallover, so that the end users are not even aware that 
:>> a
:>> component has failed.

:>The same is easily achievable with network servers also.

Multiple paths to the data and redundant data?

Nice.

:>> Mainframes regularly run months without rebooting. And that is with CPU 
:>> 95%+
:>> busy much of the time.

:>So do certain networks. Your point?

Certain networks run at 95% CPU?

:>> Point three is so meaningless that it isn't even worth addressing.

:>No, probably not. Batch processing will be redundant within your lifetime.

:>So will mainframes (as we currently understand the term). The boundaries are 
:>becoming more and more blurred; in the end, the great mainframe con will be 
:>recognised for what it is, and cost effectiveness will simply win the day.

Depends as to how you define it. Not hard to knock down your own straw man.

--
Binyamin Dissen <bdissen@dissensoftware.com>
http://www.dissensoftware.com

Director, Dissen Software, Bar & Grill - Israel


Should you use the mailblocks package and expect a response from me,
you should preauthorize the dissensoftware.com domain.

I very rarely bother responding to challenge/response systems,
especially those from irresponsible companies.
0
Reply postingid (197) 5/1/2008 5:40:43 PM

On May 2, 5:40 am, Binyamin Dissen <postin...@dissensoftware.com>
wrote:

> :>> Spoken like a true PC expert, who has no problem with reboots. So much so,
> :>> that they do it multiple times a day.
>
> :>I have a Win 98 SE machine that runs 24/7. It was last rebooted 3 months
> :>ago. The Notebook I am writing this on (Win XP SP2) has not been booted for
> :>6 days (it hibernates between uses).
>
> Running at 95% CPU most of the time?
>
> :>The last commercial site I was managing had multiple 8 processor Xeons and
> :>they were cold booted twice during the year I was there.
>
> Not familiar with the hardware.
>
> :>The last IBM mainframe system I worked on (admittedly a long time ago now),
> :>was rebooted every Monday morning after Preventive Maintenance.
>
> Measured in decades. They weren't close to 5 9s at that time.
>
> :>While I don't claim to be a PC expert, I certainly have no problem with
> :>rebooting ANY computer system if it needs it.
>
> The issue is how often it needs it.
>
> And is the first response from a support call - reboot the machine?
>
> :>> Point two is really 5 9s.
>
> :>45...? :-)
>
> 99.999% availability. Quite important when doing heavy transaction loads.
>
> :>> The reliability of mainframes and their components are such that failures
> :>> can
> :>> be tolerated with fallover, so that the end users are not even aware that
> :>> a
> :>> component has failed.
>
> :>The same is easily achievable with network servers also.
>
> Multiple paths to the data and redundant data?
>
> Nice.
>
> :>> Mainframes regularly run months without rebooting. And that is with CPU
> :>> 95%+
> :>> busy much of the time.
>
> :>So do certain networks. Your point?
>
> Certain networks run at 95% CPU?


You seem to co-mingle the term 'PC' with the Windows operating system.
The first machine referred to as a 'PC' was the Apple ][ in 1979. It
happens that IBM's design for their 'PC' became dominant and that
Microsoft supplied the majority OS on this design.

But it is only Windows that reboots and uses 95% CPU and has spyware
and viruses. Using the same 'PC' hardware as may be used for a
Windows machine, I have systems that have only been rebooted after the
power has failed.

0
Reply riplin (4127) 5/1/2008 6:43:53 PM

Graham,
  This thread has wandered far from what you originally asked.  I think the EASY 
summary is:

1) Getting a free or cheap COBOL compiler for Windows that supports IBM 
mainframe COBOL syntax is easy.  Both Fujitsu and Micro Focus provide such. 
OpenCOBOL is "pretty close" to supporting IBM dialect COBOL.  All of these will 
work for doing "batch COBOL" on PC/Windows (or Unix or Linux).  Differences 
between VSAM and "COBOL Indexed" files are trivial.  Many of these also have some 
sort of SQL support and should be reasonable for developing DB2 applications 
(see the embedded-SQL sketch below, after point 2).

2) Getting a CICS emulation on a PC is another question entirely.  The products 
that supply "integrated" CICS/COBOL development are EXPENSIVE.  These include 
IBM and Micro Focus (and in my opinion, less so CA-Realia and Fujitsu).  I did 
find a webpage with some products that MIGHT be useful.  Check out:
   http://www.cobug.com/cobug/docs/transproc0044.html
However, my guess is that the ones on that page that meet your needs are 
expensive as well.  (Lots of those products aren't what you are looking for - 
but some are, I think).
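
For what it is worth, here is a minimal sketch of what embedded SQL looks like 
in a COBOL program.  The table, column and host-variable names are invented, 
and the precompile/bind/compile steps depend entirely on which product you use:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. GETCUST.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * the SQLCA supplies SQLCODE after every EXEC SQL statement
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  WS-CUST-ID      PIC S9(9) COMP VALUE 12345.
       01  WS-CUST-NAME    PIC X(40).
       PROCEDURE DIVISION.
      * host variables are written with a leading colon inside EXEC SQL
           EXEC SQL
               SELECT CUST_NAME
                 INTO :WS-CUST-NAME
                 FROM CUSTOMER
                WHERE CUST_ID = :WS-CUST-ID
           END-EXEC
           IF SQLCODE = 0
               DISPLAY "Customer name: " WS-CUST-NAME
           ELSE
               DISPLAY "SELECT failed, SQLCODE = " SQLCODE
           END-IF
           GOBACK.

The point is only that the EXEC SQL ... END-EXEC blocks sit inline in the 
COBOL source; a preprocessor (or the compiler itself) turns them into CALLs 
to the database runtime.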

  ***

If it were I, I would look into the IBM "partner" program and see if you can get 
qualified.  They no longer use Flex-ES "pc" for mainframe emulation, but do (I 
think) provide time share on actual IBM mainframes.  See:
   http://www-304.ibm.com/jct09002c/isv/index.html

If that doesn't work, you may want to check your area for Universities or other 
companies that might be willing to sell (at a minimal cost) some mainframe time.

Another thing that you need to think about is exactly WHAT environment you want 
to target your product for.  Will you have a "minimum" level of COBOL and/or 
CICS that you will support?  For example, CICS TS V3.x on the mainframe no 
longer supports IBM's OS/VS COBOL.  You would need to know if this is or is not 
something that you will be requiring for your prospective customers.

  ***

I hope this helps and gives you some ideas

-- 
Bill Klein
 wmklein <at> ix.netcom.com
"Graham Hobbs" <ghobbs@cdpwise.net> wrote in message 
news:nb3h141500phmuq82nj3buhtmo5ibr3sfp@4ax.com...
> Hello,
> Thanks for the responses. Learning all the time.
> Not only 'not like the price', 'havent got the money'.
> Hercules! For me, a whole new line of research, maybe later.
> MF not MS, sorry.
> Pete, thanks for the history.
> Bill Klein asked for more explanation so ..
>
> Bill,
> Am retired so have no EMPLOYER and cost is a serious factor especially
> for a project that today might be dubious saleability.
>
> Until a year ago I had an old IBM compiler (VA Cobol V2.2) that runs
> on an old laptop under Windows NT plus DB2 V7.2, CICS for Windows 3.1
> and VSAM KSDS via Pervasive's bTrieve, all 1990's vintage. On this
> laptop I'm developing a software package written in batch Cobol that
> generates Cobol/CICS programs (dinosaurial as it may seem, have gone
> too far to stop). The package is aimed at any platform that runs CICS,
> especially these days it seems, z/OS.
>
> So the V2.2 compiler I have performs two functions, a) compile/produce
> executables of the batch pgms of my software, b) compile/produce
> executables for the online Cobol/CICS pgms that my software generates.
>
> The old NT laptop grew old, slow and full so I bought a new one. It
> won't accept NT so I opted for Windows XP. My CICS and DB2 work fine
> thereon. The Cobol fails horribly and no fixes available.
>
> Today the XP laptop has NT running under Microsoft's Virtual PC thus
> some of my development can continue on the NT side.
>
> But the ultimate intent is to email pgms between myself and clients.
> Technically this must be done on the XP side while compilations etc
> must be done on the NT side - is labour intensive and 'almost'
> impractical.
>
> Thus the need for a free/cheap XP Cobol compiler that accepts CICS and
> DB2 commands and produces working executables - in essence goodbye NT.
>
> As you said mainframe emulators are expensive (unless my government
> will give me a grant :-000)))(still choice 1), PWD might be amenable
> (choice 2), a Cobol compiler NT to XP fix was available (choice 3),
> whatelse?.
>
> So In contacting the group, I am really looking at the 'whatelse'
> scenario. Hope that better explains my situation.
> Any help appreciated. Thanks
> Graham
>
> On Tue, 29 Apr 2008 19:01:56 GMT, "William M. Klein"
> <wmklein@nospam.netcom.com> wrote:
>
>>There have been lots of replies - with varying levels of "helpfulness".
>>
>>It would SEEM to me that you are asking for a PC (Windows) based mainframe
>>emulation environment.  These exist but are not cheap.  (IBM and Micro Focus
>>both sell extensive and expensive products for this).  However, I am a little
>>confused as to WHY you would want this.  If you want this in order to do 
>>Windows
>>development for applications intended for mainframe deployment, then I would
>>expect that your mainframe EMPLOYER would provide such tools (and they often 
>>can
>>afford them).
>>
>>If you are actually trying to develop (or even "play with") programs that are
>>just intended for the PC/Windows environment, then you are looking for the 
>>wrong
>>tools.
>>
>>Can you tell us WHY you want DB2, CICS, VSAM support?  What do you plan on 
>>doing
>>with this compiler/product?  With that information, we may be able to better
>>help you.
>
> ** Posted from http://www.teranews.com ** 


0
Reply wmklein (2605) 5/1/2008 7:21:11 PM

On Thu, 01 May 2008 19:21:11 GMT, "William M. Klein"
<wmklein@nospam.netcom.com> wrote:

>If that doesn't work, you may want to check your area for Universities or other 
>companies that might be willing to sell (at a minimal cost) some mainframe time.

That's the approach I'd take.

>Another thing that you need to think about is exactly WHAT environment you want 
>to target your product for.  Will you have a "minimum" level of COBOL and/or 
>CICS that you will support?  For example, CICS TS V3.x on the mainframe no 
>longer supports IBM's OS/VS COBOL.  You would need to know if this is or is not 
>something that you will be requiring for your prospective customers.

Good point.
0
Reply howard (6275) 5/1/2008 7:30:34 PM


<docdwarf@panix.com> wrote in message news:fvcpf4$ge8$1@reader2.panix.com...
> In article <6cnj14db0cpscakdtdacpotptrgtjnkf75@4ax.com>,
> Howard Brazee  <howard@brazee.net> wrote:
>
> [snip]
>
>>So which of these are going to soon be obsolete?
>
> Mr Dashwood's assertion did not address obsolescence.
>
> DD
>
Fair comment, Doc.

Before this spins out of control I better be clear about what particular 
flavour of "batch processing" I WAS addressing.

It was the "traditional" processing of "batches" of transactions. The stuff 
that is currently done in overnight windows to update back end databases. 
However, I will extend it to include manually written DB scans that build 
reports. (as opposed to single queries or Lambdas where the RDBMS software 
decides what will be scanned and by which processor and when...)

I said it would be redundant.

My reasons for that are as follows:

1. Before we had online processing, batch processing was the only 
processing. COBOL was ideally suited to it and this led to the development 
of a culture that "took it as read".

2. When the first 3270 style displays appeared (running in 32K in Foreground 
1) the processor power available was such that the entire updates required 
by the online transactions could not be accomplished (and there were other 
security considerations and backup implications) so transaction files were 
written for processing in batch against the back end DB, and the online 
processing confined itself mainly to data retrieval.
As more sophisticated DB software arrived and processors became more 
powerful, online transactions were able to accomplish more and the need to 
"complete" transaction processing in batch diminished. Batch processing then 
became the domain of reporting which involved DB scans, and the deferred 
processing of batched transactions collected during the day.

3. My argument is that with modern networked processors parallel processing 
and multitasking can simply do what once required a batch process to 
achieve. There will still be reports that are based around a time series, 
for example, and the data for each line of the report needs to be collected 
from somewhere. However, if the problem is viewed differently from the 
traditional approach, different ways of achieving it become apparent.

For example, suppose a certain "monthly report" was viewed as a report 
"object" with a "collection" of report lines. The problem is now one of 
ensuring the collection covers what happened during the month. Using the 
traditional approach we might scan all the transactions between the 
requisite dates, do some figuring on each transaction, and place report 
lines into the collection from our batch scan. The collection would be built 
by a single processor running a dedicated (batch) process.

But it quickly becomes apparent that that is not the ONLY way to build the 
collection. Suppose, as each transaction occurred in real time it kicked off 
a sub task (maybe running on a different processor) that added the necessary 
report line to the collection right then and there... No need to filter by 
date or scan databases, just create the line in real time. (The report 
object could contain a method that filtered its own collection by date as 
the lines were sent to the print spool. You can argue that this is also a 
batch process, but it isn't within the scope I defined above...)
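
To make that concrete, here is a minimal COBOL sketch of the idea.  To keep it 
short it assumes a small subprogram called once per transaction rather than a 
genuinely asynchronous sub task, and the record layout and file name are 
invented for illustration:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADDRPTLN.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT OPTIONAL REPORT-LINES ASSIGN TO "rptlines.dat"
               ORGANIZATION IS INDEXED
               ACCESS MODE IS RANDOM
               RECORD KEY IS RL-KEY.
       DATA DIVISION.
       FILE SECTION.
       FD  REPORT-LINES.
       01  REPORT-LINE-REC.
           05  RL-KEY.
               10  RL-DATE     PIC 9(8).
               10  RL-TRAN-ID  PIC 9(10).
           05  RL-AMOUNT       PIC S9(9)V99.
           05  RL-TEXT         PIC X(60).
       LINKAGE SECTION.
       01  TRAN-REC.
           05  TR-DATE         PIC 9(8).
           05  TR-ID           PIC 9(10).
           05  TR-AMOUNT       PIC S9(9)V99.
           05  TR-DESC         PIC X(60).
       PROCEDURE DIVISION USING TRAN-REC.
      * called once per transaction: the report line is persisted now,
      * so "month end" is just a keyed read of rptlines.dat by date
           OPEN I-O REPORT-LINES
           MOVE TR-DATE   TO RL-DATE
           MOVE TR-ID     TO RL-TRAN-ID
           MOVE TR-AMOUNT TO RL-AMOUNT
           MOVE TR-DESC   TO RL-TEXT
           WRITE REPORT-LINE-REC
               INVALID KEY DISPLAY "Duplicate report line: " RL-KEY
           END-WRITE
           CLOSE REPORT-LINES
           GOBACK.

(Opening and closing the file on every call is obviously not how you would do 
it for real; it just keeps the sketch self-contained.)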

The gist of my argument is that distributed parallel processing power (such 
as will emerge in the next decade or so) will render traditional batch 
processing redundant. This also has implications for the "death of the 
mainframe" as discussed elsewhere...

As noted at the start, scans carried out by RDB software to resolve a single 
query, and data warehousing/BI processes were also not included in my 
scope...

Pete.
-- 
"I used to write COBOL...now I can do anything."



0
Reply dashwood (4370) 5/1/2008 11:55:22 PM

In article <67sapjF2qabbrU1@mid.individual.net>,
	"Pete Dashwood" <dashwood@removethis.enternet.co.nz> writes:
> 
> Point 2 is largely an illusion. Certainly, there are no viruses on 
> mainframes, but there will be once they are opened up to the network; it is 
> only a matter of time. (No, I don't have mainframe virus writing on my 
> "TODO" list, but it would be an "interesting" exercise... :-)).
> 

I don't agree with any of your conclusions but this is the easiest
target.  There are a lot of systems that have been connected to the
INTERNET since the ARPA and NSFNet days and have never had a virus.
Being on the net does not guarantee a virus will just pop up.  Not all
systems are targets for virus writers.  Some systems have inherent
security that makes the kind of attacks  that are so common today 
totally impossible.

bill

-- 
Bill Gunshannon          |  de-moc-ra-cy (di mok' ra see) n.  Three wolves
billg999@cs.scranton.edu |  and a sheep voting on what's for dinner.
University of Scranton   |
Scranton, Pennsylvania   |         #include <std.disclaimer.h>   
0
Reply billg999 (2588) 5/2/2008 12:44:26 AM

In article <67v3jaF2q8a1cU1@mid.individual.net>,
Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:
>
>
><docdwarf@panix.com> wrote in message news:fvcpf4$ge8$1@reader2.panix.com...
>> In article <6cnj14db0cpscakdtdacpotptrgtjnkf75@4ax.com>,
>> Howard Brazee  <howard@brazee.net> wrote:
>>
>> [snip]
>>
>>>So which of these are going to soon be obsolete?
>>
>> Mr Dashwood's assertion did not address obsolescence.
>>
>Fair comment, Doc.

Seems to be supported by what I can find a URL to point at, aye.

>
>Before this spins out of control I better be clear about what particular 
>flavour of "batch processing" I WAS addressing.

Too late... wooWOOWoowoowoowoowoo!

DD

0
Reply docdwarf (6044) 5/2/2008 1:07:04 AM

Bill,
Will ponder this greatly and get back to you. Is excellent. 
Not a word mincer are you:-). 
Graham

On Thu, 01 May 2008 19:21:11 GMT, "William M. Klein"
<wmklein@nospam.netcom.com> wrote:

>Graham,
>  This thread has wondered far from what you originally asked.  I think the EASY 
>summary is:
>
>1) Getting a free or cheap COBOL compiler for Windows that supports IBM 
>mainframe COBOL syntax is easy.  Both Fujitsu and Micro Focus provides such. 
>OpenCOBOL is "pretty close" to supporting IBM dialect COBOL.  All of these will 
>work for doing "batch COBOL" on PC/Windows (or Unix or Linux).  Differences 
>between VSAM and "COBOL Indexed" files is trivial.  Many of these also have some 
>sort of SQL support and should be reasonable for developing DB2 applications
>
>2) Getting a CICS emulation on a PC is another question entirely.  The products 
>that supply "integrated" CICS/COBOL development are EXPENSIVE.  These include 
>IBM and Micro Focus (and in my opinion, less so CA-Realia and Fujitsu).  I did 
>find a webpage with some products that MIGHT be useful.  Check out:
>   http://www.cobug.com/cobug/docs/transproc0044.html
>However, my guess is that the ones on that page that meet your needs are 
>expensive as well.  (Lots of those products aren't what you are looking for - 
>but some are, I think).
>
>  ***
>
>If it were I, I would look into the IBM "partner" program and see if you can get 
>qualified.  They no longer use Flex-ES "pc" for mainframe emulation, but do (I 
>think) provide time share on actual IBM mainframes.  See:
>   http://www-304.ibm.com/jct09002c/isv/index.html
>
>If that doesn't work, you may want to check your area for Universities or other 
>companies that might be willing to sell (at a minimal cost) some mainframe time.
>
>Another thing that you need to think about is exactly WHAT environment you want 
>to target your product for.  Will you have a "minimum" level of COBOL and/or 
>CICS that you will support?  For example, CICS TS V3.x on the mainframe no 
>longer supports IBM's OS/VS COBOL.  You would need to know if this is or is not 
>something that you will be requiring for your prospective customers.
>
>  ***
>
>I hope this helps and gives you some ideas

** Posted from http://www.teranews.com **
0
Reply ghobbs (117) 5/2/2008 2:11:55 AM

On Fri, 2 May 2008 11:55:22 +1200, "Pete Dashwood" <dashwood@removethis.enternet.co.nz>
wrote:
> Suppose, as each transaction occurred in real time it kicked of 
>a sub task (maybe running on a different procesor) that added the necessary 
>report line to the collection right then and there... No need to filter by 
>date or scan databases, just create the line in real time. (The report 
>object could contain a method that filtered its own collection by date as 
>the lines were sent to the print spool. You can argue that this is also a 
>batch process, but it isn't within the scope I defined above...)

It's called a Materialized View in Oracle. You don't have to write anything except 

create materialized view foo 
  build immediate refresh on commit enable query rewrite
  as select customer, count(*) from customers group by customer;

Oracle remembers the select. Whenever you insert or delete from customers, it
automagically updates foo. If someone issues the select ... from customers .., Oracle
figures out it has the result set in foo and changes the query to read foo. 

Data warehouses use this all the time. They scan the logs for frequent long-running
queries and cast them into materialized views without the user's knowledge or
participation. Suddenly the queries run much faster. 

On other databases, you can do it with triggers, but getting it to use the view
automatically takes a bit of work.


0
Reply Robert 5/2/2008 2:16:45 AM

On Thu, 1 May 2008 10:39:46 +1200, "Pete Dashwood" <dashwood@removethis.enternet.co.nz>
wrote:

>Point 2 is largely an illusion. Certainly, there are no viruses on 
>mainframes, but there will be once they are opened up to the network; it is 
>only a matter of time. (No, I don't have mainframe virus writing on my 
>"TODO" list, but it would be an "interesting" exercise... :-)).

How does a virus get into a database server? You can't send it executable code. 
0
Reply Robert 5/2/2008 2:24:38 AM

Pete Dashwood wrote:
> <docdwarf@panix.com> wrote in message news:fvcpf4$ge8$1@reader2.panix.com...
> 
>>In article <6cnj14db0cpscakdtdacpotptrgtjnkf75@4ax.com>,
>>Howard Brazee  <howard@brazee.net> wrote:
>>
>>[snip]
>>
>>
>>>So which of these are going to soon be obsolete?
>>
>>Mr Dashwood's assertion did not address obsolescence.
>>
>>DD
>>
> 
> Fair comment, Doc.
> 
> Before this spins out of control I better be clear about what particular 
> flavour of "batch processing" I WAS addressing.
> 
> It was the "traditional" processing of "batches" of transactions. The stuff 
> that is currently done in overnight windows to update back end databases. 
> However, I will extend it to include manually written DB scans that build 
> reports. (as opposed to single queries or Lambdas where the RDBMS software 
> decides what will be scanned and by which processor and when...)
> 
> I said it would be redundant.
> 
> My reasons for that are as follows:
> 
> 1. Before we had online processing, batch processing was the only 
> processing. COBOL was ideally suited to it and this led to the development 
> of a culture that "took it as read".
> 
> 2. When the first 3270 style displays appeared (running in 32K in Foreground 
> 1) the processor power available was such that the entire updates required 
> by the online transactions could not be accomplished (and there were other 
> security considerations and backup implications) so transaction files were 
> written for processing in batch against the back end DB, and the online 
> processing confined itself mainly to data retrieval.
> As more sophisticated DB software arrived and processors became more 
> powerful, online transactions were able to accomplish more and the need to 
> "complete" transaction processing in batch diminished. Batch processing then 
> became the domain of reporting which involved DB scans, and the deferred 
> processing of batched transactions collected during the day.
> 
> 3. My argument is that with modern networked processors parallel processing 
> and multitasking can simply do what once required a batch process to 
> achieve. There will still be reports that are based around a time series, 
> for example, and the data for each line of the report needs to be collected 
> from somewhere. However, if the problem is viewed differently from the 
> traditional approach, different ways of achieving it become apparent.
> 
> For example, suppose a certain "monthly report" was viewed as a report 
> "object" with a "collection" of report lines. The problem is now one of 
> ensuring the collection covers what happened during the month. Using the 
> traditional approach we might scan all the transactions between the 
> requisite dates, do some figuring on each transaction, and place report 
> lines into the collection from our batch scan.The collection would be built 
> by a single processor running a dedicated (batch) process.
> 
> But it quickly becomes apparent that that is not the ONLY way to build the 
> collection. Suppose, as each transaction occurred in real time it kicked of 
> a sub task (maybe running on a different procesor) that added the necessary 
> report line to the collection right then and there... No need to filter by 
> date or scan databases, just create the line in real time. (The report 
> object could contain a method that filtered its own collection by date as 
> the lines were sent to the print spool. You can argue that this is also a 
> batch process, but it isn't within the scope I defined above...)

You make it sound so easy, if you leave pertinent bits out.

All fine and dandy above, but all your elements in the Collection, and 
the Collection itself, are NON-PERSISTENT objects, in memory as 
vapourware. Theoretically it could work, providing you don't exit the 
application or somebody pulls the wall-plug. For security's sake you 
would have to store the elements as PERSISTENT objects in text form in a 
file or DB. When appropriate you could read the Persistent DB Table and 
either recreate the Collection for reporting or print direct from the DB.

I can't see 'batch processing' entirely disappearing - not in the first 
half of this century, anyway. I mentioned some years back a friend who 
was the night-time supervisor for a bank data centre in Calgary; 
collecting ALL cheques for Alberta, parts of North West B.C and a little 
of Saskatchewan, on our eastern border. (Their cut-off was something 
like 02:00 hours to get the encoded batches back East (Ontario)).

During her tenure volumes decreased to 30% and most likely are now 
around 20-25%, given Internet banking, plus the ever increasing volume 
of debit/credit cards. The current problem is they (bank data centres) 
have to encode (MICR) the numerals for "Sixty-six dollars and two 
cents". Back in 1967 OCR was a challenge and still IS - there's no way, 
other than universal draconian legislation, to make humans all write 
their numbers in a set format. (The way you and I write the Arabic 
numeral 'five' is not the way an Egyptian writes it).

Some years back I did read where software had been written to recognise 
Chinese characters and that Microsoft was interested. So in due course 
of time if there's an enhanced OCR, which can read the Rosetta Stone 
then banks won't be faced with the manual intervention of keying in MICR. 
Nonetheless, however the cheques are encoded you are still left with a 
bunch of cheques = BATCH !

Jimmy, Calgary AB
> 
> The gist of my argument is that distributed parallel processing power (such 
> as will emerge in the next decade or so) will render traditional batch 
> processing redundant. This also has implications for the "death of the 
> mainframe" as discussed elsewhere...
> 
> As noted at the start, scans carried out by RDB software to resolve a single 
> query, and data warehousing/BI processes were also not included in my 
> scope...
> 
> Pete.
0
Reply jgavandeletethis (1047) 5/2/2008 3:53:21 AM

On Thu, 01 May 2008 21:24:38 -0500 Robert <no@e.mail> wrote:

:>On Thu, 1 May 2008 10:39:46 +1200, "Pete Dashwood" <dashwood@removethis.enternet.co.nz>
:>wrote:

:>>Point 2 is largely an illusion. Certainly, there are no viruses on 
:>>mainframes, but there will be once they are opened up to the network; it is 
:>>only a matter of time. (No, I don't have mainframe virus writing on my 
:>>"TODO" list, but it would be an "interesting" exercise... :-)).

:>How does a virus get into a database server? You can't send it executable code. 

Buffer overflow.

--
Binyamin Dissen <bdissen@dissensoftware.com>
http://www.dissensoftware.com

Director, Dissen Software, Bar & Grill - Israel


Should you use the mailblocks package and expect a response from me,
you should preauthorize the dissensoftware.com domain.

I very rarely bother responding to challenge/response systems,
especially those from irresponsible companies.
0
Reply postingid (197) 5/2/2008 4:58:49 AM


"Robert" <no@e.mail> wrote in message 
news:kdtk14pk2oococs6u2ruukcbhmtqmv9ck1@4ax.com...
> On Fri, 2 May 2008 11:55:22 +1200, "Pete Dashwood" 
> <dashwood@removethis.enternet.co.nz>
> wrote:
>> Suppose, as each transaction occurred in real time it kicked of
>>a sub task (maybe running on a different procesor) that added the 
>>necessary
>>report line to the collection right then and there... No need to filter by
>>date or scan databases, just create the line in real time. (The report
>>object could contain a method that filtered its own collection by date as
>>the lines were sent to the print spool. You can argue that this is also a
>>batch process, but it isn't within the scope I defined above...)
>
> It's called a Materialized View in Oracle. You don't have to write 
> anything except
>
> create materialized view foo
>  build immediate refresh on commit enable query rewrite
>  as select customer, count(*) from customers group by customer;
>
> Oracle remembers the select. Whenever you insert or delete from customers, 
> it
> automagically updates foo. If someone issues the select ... from customers 
> .., Oracle
> figures out it has the result set in foo and changes the query to read 
> foo.
>
> Data warehouses use this all the time. They scan the logs for frequent 
> long-running
> queries and cast them into materialized views without the user's knowledge 
> or
> participation. Suddenly the queries run much faster.
>
> On other databases, you can do it with triggers, but getting it to use the 
> view
> automatically takes a bit of work.
>
>
Thanks for that Robert. I didn't know about it and found your description 
interesting.

Pete.
-- 
"I used to write COBOL...now I can do anything." 


0
Reply dashwood (4370) 5/2/2008 9:59:07 AM


"James J. Gavan" <jgavandeletethis@shaw.ca> wrote in message 
news:RewSj.107364$rd2.67172@pd7urf3no...
> Pete Dashwood wrote:
>> <docdwarf@panix.com> wrote in message 
>> news:fvcpf4$ge8$1@reader2.panix.com...
>>
>>>In article <6cnj14db0cpscakdtdacpotptrgtjnkf75@4ax.com>,
>>>Howard Brazee  <howard@brazee.net> wrote:
>>>
>>>[snip]
>>>
>>>
>>>>So which of these are going to soon be obsolete?
>>>
>>>Mr Dashwood's assertion did not address obsolescence.
>>>
>>>DD
>>>
>>
>> Fair comment, Doc.
>>
>> Before this spins out of control I better be clear about what particular 
>> flavour of "batch processing" I WAS addressing.
>>
>> It was the "traditional" processing of "batches" of transactions. The 
>> stuff that is currently done in overnight windows to update back end 
>> databases. However, I will extend it to include manually written DB scans 
>> that build reports. (as opposed to single queries or Lambdas where the 
>> RDBMS software decides what will be scanned and by which processor and 
>> when...)
>>
>> I said it would be redundant.
>>
>> My reasons for that are as follows:
>>
>> 1. Before we had online processing, batch processing was the only 
>> processing. COBOL was ideally suited to it and this led to the 
>> development of a culture that "took it as read".
>>
>> 2. When the first 3270 style displays appeared (running in 32K in 
>> Foreground 1) the processor power available was such that the entire 
>> updates required by the online transactions could not be accomplished 
>> (and there were other security considerations and backup implications) so 
>> transaction files were written for processing in batch against the back 
>> end DB, and the online processing confined itself mainly to data 
>> retrieval.
>> As more sophisticated DB software arrived and processors became more 
>> powerful, online transactions were able to accomplish more and the need 
>> to "complete" transaction processing in batch diminished. Batch 
>> processing then became the domain of reporting which involved DB scans, 
>> and the deferred processing of batched transactions collected during the 
>> day.
>>
>> 3. My argument is that with modern networked processors parallel 
>> processing and multitasking can simply do what once required a batch 
>> process to achieve. There will still be reports that are based around a 
>> time series, for example, and the data for each line of the report needs 
>> to be collected from somewhere. However, if the problem is viewed 
>> differently from the traditional approach, different ways of achieving it 
>> become apparent.
>>
>> For example, suppose a certain "monthly report" was viewed as a report 
>> "object" with a "collection" of report lines. The problem is now one of 
>> ensuring the collection covers what happened during the month. Using the 
>> traditional approach we might scan all the transactions between the 
>> requisite dates, do some figuring on each transaction, and place report 
>> lines into the collection from our batch scan.The collection would be 
>> built by a single processor running a dedicated (batch) process.
>>
>> But it quickly becomes apparent that that is not the ONLY way to build 
>> the collection. Suppose, as each transaction occurred in real time it 
>> kicked of a sub task (maybe running on a different procesor) that added 
>> the necessary report line to the collection right then and there... No 
>> need to filter by date or scan databases, just create the line in real 
>> time. (The report object could contain a method that filtered its own 
>> collection by date as the lines were sent to the print spool. You can 
>> argue that this is also a batch process, but it isn't within the scope I 
>> defined above...)
>
> You make it sound so easy, if you leave pertinent bits out.
>

What pertinent bit do you believe was left out, Jimmy?

> All fine and dandy above, but all your elements in the Collection, and the 
> Collection itself, are NON-PERSISTENT objects, in memory as vapourware.

Only very briefly until they are stored. What's important, and the point I 
was trying to make, is that by thinking about it differently, you can arrive 
at a different solution.

Sure, the objects are volatile if you are talking about OO processing. I'm 
talking about OO concepts, not the programming level. The collection of 
report lines is "viewed as" a collection (I did say that...) conceptually. 
Of course it has to be stored somewhere and it is totally irrelevant whether 
you use an object storage system or a standard RDB.


>Theoretically it could work, providing you don't exit the application or 
>somebody pulls the wall-plug. For security's sake you would have to store 
>the elements as PERSISTENT objects in text form in a file or DB. When 
>appropriate you could read the Persistent DB Table and either recreate the 
>Collection for reporting or print direct from the DB.

There you go... :-) (That's not the only way to do it, BTW...)
>
> I can't see 'batch processing' entirely disappearing - not in the first 
> half of this century, anyway. I mentioned some years back a friend who was 
> the night-time supervisor for a bank data centre in Calgary; collecting 
> ALL cheques for Alberta, parts of North West B.C and a little of 
> Saskatchewan, on our eastern border. (Their cut-off was something like 
> 02:00 hours to get the encoded batches back East (Ontario)).
>
> During her tenure volumes decreased to 30% and most likely are now around 
> 20-25%, given Internet banking, plus the every increasing volume of 
> debit/credit cards. The current problem is they,(bank data centres), have 
> to encode (MICR) the numerals for "Sixty-six dollars and two cents". Back 
> in 1967 OCR was a challenge and still IS - there's no way, other than 
> universal draconian legislation, to make humans all write their numbers in 
> a set format. (The way you and I write the Arabic numeral 'five' is not 
> the way an Egyptian writes it).
>
> Some years back I did read where software had been written to recognise 
> Chinese characters and that Microsoft was interested. So in due course of 
> time if there's an enhanced OCR, which can read the Rosetta Stone then 
> banks wont be faced with the manual intervention of keying in MICR. 
> Nonetheless, however the cheques are encoded you are still left with a 
> bunch of cheques = BATCH !
>
OK, my point was that batch processing will become redundant (largely 
because the processing cycles it consumes will be available in parallel and 
real time, so they can be applied at the time the transaction occurs, using 
distributed parallel processors (which looks like being the next "big thing" 
to hit IT, and will change some of the ways we look at programming...))

Taking your example of the Bank above, I hope you would agree that if 
cheques are eliminated, so is that particular problem.

I believe they will be. (I haven't personally written a cheque for at least 
15 years now and I haven't seen one for at least five years. (NZ is very 
much an electronic society so we may be atypical.) Even if they (cheques) 
are not eliminated, MICR encoding is pretty archaic. Leaps have been made in 
OCR technology and recognition software generally.  There are systems now 
that can recognise faces from moving video, shot at a distance in poor 
light; deciphering numbers is unlikely to be a problem.

A truly illegible cheque won't make it into the system anyway.

Online banking means not sending money through the mail as a piece of paper. 
I pay regular bills by standing orders or direct debits, and one offs by 
electronic transfer, initiated from my Notebook computer, direct to the 
recipient's bank account. (I bought something on "TradeMe" (NZ equivalent of 
e-bay) in exactly this way, just a few days ago. Completely painless... a 
vendor in Wellington 300 miles away received my electronic transfer into 
their account overnight and I had the goods (sent by courier once the funds 
were in the bank), before the end of the same day.)

Both my company and personal income is all electronic. I can't speak for all 
New Zealanders, but I'm pretty sure most of us no longer carry more than a 
few dollars in cash, never mind cheque books. We use EFTPOS (Electronic 
Funds Transfer at Point Of Sale) in more than 90% of businesses and the same 
system allows credit and debit cards. I would wager there are people in our 
society now, over 18 and working, who have never seen or handled a cheque.

BOTTOM LINE:  It won't be fifty years :-)

Pete.
--
"I used to write COBOL...now I can do anything."




0
Reply dashwood (4370) 5/2/2008 10:44:26 AM


"Bill Gunshannon" <billg999@cs.uofs.edu> wrote in message 
news:67v6faF2qb5npU1@mid.individual.net...
> In article <67sapjF2qabbrU1@mid.individual.net>,
> "Pete Dashwood" <dashwood@removethis.enternet.co.nz> writes:
>>
>> Point 2 is largely an illusion. Certainly, there are no viruses on
>> mainframes, but there will be once they are opened up to the network; it 
>> is
>> only a matter of time. (No, I don't have mainframe virus writing on my
>> "TODO" list, but it would be an "interesting" exercise... :-)).
>>
>
> I don't agree with any of your conclusions but this is the easiest
> target.  There are a lot of systems that have been connected to the
> INTERNET since the ARPA and NSFNet days and have never had a virus.
> Being on the net does not guarantee virus will just pop up.  Not all
> systems are targets for virus writers.  Some systems have inherent
> security that makes the kind of attacks  that are so common today
> totally impossible.

You might well be right.

However, I believe the "challenge" of an architecture that hasn't previously 
been broken into (that we know about...) will prove pretty irresistible to 
some of the people who write this stuff...:-)

Time will tell.

Pete.
-- 
"I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/2/2008 10:48:28 AM


"Robert" <no@e.mail> wrote in message 
news:8muk14pvqchs7f765h7ea8uqvsd3rne72b@4ax.com...
> On Thu, 1 May 2008 10:39:46 +1200, "Pete Dashwood" 
> <dashwood@removethis.enternet.co.nz>
> wrote:
>
>>Point 2 is largely an illusion. Certainly, there are no viruses on
>>mainframes, but there will be once they are opened up to the network; it 
>>is
>>only a matter of time. (No, I don't have mainframe virus writing on my
>>"TODO" list, but it would be an "interesting" exercise... :-)).
>
> How does a virus get into a database server? You can't send it executable 
> code.

Wanna bet? You can if you store it as data... :-)

Once it has open ports it can be reached in a number of ways, not just using 
normal protocols...if you can get access to the storage system you're in...

I was using "virus" loosely, perhaps I should have said "malware"...

What about a stored procedure with a hidden triggerable process? Maybe 
disguised as an existing procedure (Trojan clone) which it replaces in the 
system libraries.

What about an entirely new type of virus written specifically for the 
mainframe architecture? Something that hooks into a particular SVC or ESTAE 
for example?  Bury a trigger deep in REXX or JCL, cause a data or addressing 
exception and away you go... :-)

Sure, all fanciful stuff and not very likely. But I'm not really trying and 
I have no desire to do it. Nevertheless, I bet there's someone out there, 
smarter than me, who DOES want to do it :-)

And I don't see the mainframes as being ONLY database servers...

Pete.
--
"I used to write COBOL...now I can do anything." 


0
Reply dashwood (4370) 5/2/2008 11:09:56 AM

Pete Dashwood wrote:
> "Bill Gunshannon" <billg999@cs.uofs.edu> wrote in message 
> news:67v6faF2qb5npU1@mid.individual.net...
>>>
>> I don't agree with any of your conclusions but this is the easiest
>> target.  There are a lot of systems that have been connected to the
>> INTERNET since the ARPA and NSFNet days and have never had a virus.
>> Being on the net does not guarantee virus will just pop up.
> 
> You might well be right.

And why let facts get in the way of perfectly good speculation, eh?

> However, I believe the "challenge" of an arcitecture that hasn't previously 
> been broken into (that we know about...) will prove pretty irresistible to 
> some of the people who write this stuff...:-)

Apparently this message has been inadvertently held by some server for 
a quarter-century and only now forwarded on.

ARPAnet switched to TCP/IP in 1983. I'd be willing to bet that before 
the year was up, there were S/370s running VM/CMS connected to the 
Internet. MIT likely had one, for example, given their close 
relationship with IBM's Cambridge Scientific Center and the popularity 
of CMS (which was written at CSC) in academia.

I know personally of sites that had S/390s (running OS/390, CICS, and 
IMS) with non-firewalled Internet connections in the early 1990s. 
Certainly at least since 1990 or so there were many AS/400s and S/390s 
connected to the Internet. (Prior to 1990, read "S/370" in place of 
"S/390".) Often they were behind firewalls, but not always.

In the early 1980s, there were plenty of black hats and curious kids 
trying to break into systems of all sorts over dialup connections, 
using war-dialing modems to find targets. I knew a number of them, and 
the exploits of the more famous are well-documented in journals like 
_2600_, not to mention various personal retrospectives, academic 
studies, etc.

There were also plenty of mainframe systems on other networks that 
were open to college students and other ... curious ... parties. All 
of BITNET, for example.

Many of these systems were penetrated to some extent by unauthorized 
users; many were not.

The hardware architecture is only one small part of the security of a 
system. Certainly the capability architecture of the AS/400 / iSeries 
makes it resistant to many of the forms of attack that are popular 
against more conventional virtual-memory computers, for example. But 
many exploits target vulnerabilities in the operating system, and far 
more in applications, in system administration errors, and - above all 
- in users, who remain the weakest link in the security chain.

In short, there is nothing new about various "mainframe" platforms 
being Internet-accessible, or otherwise available to hackers and 
writers of malware. The relative lack of documented malware (there are 
exceptions, like IBM's own "christmas card" trojan) for these 
platforms can be attributed to many things: smaller profile, smaller 
attack surface, generally more conservative administration, less 
casual use, and so forth. But it cannot reasonably be attributed to a 
historical unavailability to attackers, so there is no "new challenge" 
for them to find there.


-- 
Michael Wojcik
Micro Focus
0
Reply mwojcik (1879) 5/2/2008 2:25:57 PM

Robert wrote:
> 
> How does a virus get into a database server? You can't send it executable code. 

Typically, via SQL injection attacks that create stored procedures.

See for example David Litchfield's recent paper on "Lateral SQL 
Injection" attacks against Oracle:

http://www.databasesecurity.com/dbsec/lateral-sql-injection.pdf

In brief: attacker sends data to the system that is incorrectly 
interpreted as SQL, due to incautious handling of tainted data.

This class of vulnerability potentially exists for any database that 
supports any privileged operations over SQL. The hole is actually in 
the application code that processes untrustworthy input and builds SQL 
requests, and in database administration (giving the account used by 
the application excess privilege, for example).

-- 
Michael Wojcik
Micro Focus
0
Reply mwojcik (1879) 5/2/2008 2:30:39 PM

In article <6809kaF2qgkhnU1@mid.individual.net>,
Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:
>
>
>"James J. Gavan" <jgavandeletethis@shaw.ca> wrote in message 
>news:RewSj.107364$rd2.67172@pd7urf3no...

[snip]

>> Some years back I did read where software had been written to recognise 
>> Chinese characters and that Microsoft was interested. So in due course of 
>> time if there's an enhanced OCR, which can read the Rosetta Stone then 
>> banks wont be faced with the manual intervention of keying in MICR. 
>> Nonetheless, however the cheques are encoded you are still left with a 
>> bunch of cheques = BATCH !
>>
>OK, my point was that batch processing will become redundant (largely 
>because the processing cycles it consumes will be available in parallel and 
>real time, so they can be applied at the time the transaction occurs, using 
>distributed parallel processors (which looks like being the next "big thing" 
>to hit IT, and will change some of the ways we look at programming...))
>
>Taking your example of the Bank above, I hope you would agree that if 
>cheques are eliminated, so is that particular problem.

Mr Dashwood, I am not certain of the various legal requirements about when 
and how an institution is required to supply an authority (individual or 
governmental) with a statement of account(s)... but no, eliminating 
checks/cheques does not eliminate a bank's responsibility to be able to 
generate a list of transactions for an account or range of accounts over a 
given period of time.

Given the example above... eliminating the physical, negotiable 
instruments known as cheques/checks in no wise eliminates the need or 
desirability of an organisation to know the state of its accounts, 
inventory, personnel activity or other phenomena segregated discretely 
over time.

To see everything as discrete transactions ignores the possibility of 
trend analysis... this was one of the reasons, e'er-so-long ago, that 
information was weighted against cost-of-production and some stuff deemed 
worthy of weekly reports... and other of monthly... and others quarterly, 
semi-annually, annually, biennially, etc.  Yes, sometime In The Future 
hardware will be such that data organised for an OnLine Transaction 
Processing system can be used for data-mining... but not just yet... and 
until hardware becomes such, data will have to be manipulated from the 
OLTP-entry state to a summarisable state with little or no operator 
interaction.

DD
0
Reply docdwarf (6044) 5/2/2008 2:43:27 PM

On Fri, 2 May 2008 11:55:22 +1200, "Pete Dashwood"
<dashwood@removethis.enternet.co.nz> wrote:

>Fair comment, Doc.
>
>Before this spins out of control I better be clear about what particular 
>flavour of "batch processing" I WAS addressing.
>
>It was the "traditional" processing of "batches" of transactions. The stuff 
>that is currently done in overnight windows to update back end databases. 
>However, I will extend it to include manually written DB scans that build 
>reports. (as opposed to single queries or Lambdas where the RDBMS software 
>decides what will be scanned and by which processor and when...)
>
>I said it would be redundant.
>
>My reasons for that are as follows:
>
>1. Before we had online processing, batch processing was the only 
>processing. COBOL was ideally suited to it and this led to the development 
>of a culture that "took it as read".


To me, this is kind of like someone a century ago saying bicycles
would be obsolete - by defining bicycles as being "Ordinary bicycles".
But in real life, things evolve and change.

Even with your definition, I see the need for snapshot reports to
continue.    I also see the need to process a file of transactions
(even if that file is an individual's tax form that gets e-mailed to
the feds). 

Business will have to change considerably before we can tell companies
to deposit pay into individual accounts as the employee is working.
Weekly payroll processing will continue for a while - but modified so
that the person who only worked one day can get his pay when he
leaves.

0
Reply howard (6275) 5/2/2008 2:49:06 PM

On Fri, 02 May 2008 03:53:21 GMT, "James J. Gavan"
<jgavandeletethis@shaw.ca> wrote:

>(The way you and I write the Arabic 
>numeral 'five' is not the way an Egyptian writes it).

What does the Egyptian five look like?
0
Reply howard (6275) 5/2/2008 2:52:50 PM

Robert <no@e.mail> wrote in message
news:kdtk14pk2oococs6u2ruukcbhmtqmv9ck1@4ax.com...
> On Fri, 2 May 2008 11:55:22 +1200, "Pete Dashwood"
<dashwood@removethis.enternet.co.nz>
> wrote:
> > Suppose, as each transaction occurred in real time it kicked off
> >a sub task (maybe running on a different processor) that added the necessary
> >report line to the collection right then and there... No need to filter by
> >date or scan databases, just create the line in real time. (The report
> >object could contain a method that filtered its own collection by date as
> >the lines were sent to the print spool. You can argue that this is also a
> >batch process, but it isn't within the scope I defined above...)
>
> It's called a Materialized View in Oracle. You don't have to write
> anything except
>
> create materialized view foo
>   build immediate refresh on commit enable query rewrite
>   as select customer, count(*) from customers group by customer;
>


I don't know if this process answers the obvious concern that PD's
suggestion doesn't seem to address: that of the posted information remaining
correct until it's "printed" or otherwise disseminated.  Example: credit
card monthly statements or any A/R listing: customers change their address
between the time the transaction is posted and the time the report is
printed.   What's the phrase - you don't commit to a changeable piece of
information or the results of a calculation until you actually have to.

Of course you could have a trigger associated with a customer address change
which would go in and change the information in the accumulating file but
that opens up a whole host of related issues (such as whether the changed
information adds a line or two to the address).  There is a level of
complexity in the on-the-spot real-time approach which, if exceeded, makes
the old-fashioned batch approach seem very sane and desirable.

PL


0
Reply lacey1 (490) 5/2/2008 3:18:25 PM


"tlmfru" <lacey@mts.net> wrote in message 
news:ajGSj.111848$Ft5.38403@newsfe15.lga...
>
> Robert <no@e.mail> wrote in message
> news:kdtk14pk2oococs6u2ruukcbhmtqmv9ck1@4ax.com...
>> On Fri, 2 May 2008 11:55:22 +1200, "Pete Dashwood"
> <dashwood@removethis.enternet.co.nz>
>> wrote:
>> > Suppose, as each transaction occurred in real time it kicked off
>> >a sub task (maybe running on a different processor) that added the necessary
>> >report line to the collection right then and there... No need to filter by
>> >date or scan databases, just create the line in real time. (The report
>> >object could contain a method that filtered its own collection by date as
>> >the lines were sent to the print spool. You can argue that this is also a
>> >batch process, but it isn't within the scope I defined above...)
>>
>> It's called a Materialized View in Oracle. You don't have to write
>> anything except
>>
>> create materialized view foo
>>   build immediate refresh on commit enable query rewrite
>>   as select customer, count(*) from customers group by customer;
>>
>
>
> I don't know if this process answers the obvious concern that PD's
> suggestion doesn't seem to address: that of the posted information 
> remaining
> correct until it's "printed" or otherwise disseminated.  Example: credit
> card monthly statements or any A/R listing: customers change their address
> between the time the transaction is posted and the time the report is
> printed.   What's the phrase - you don't commit to a changeable piece of
> information or the results of a calculation until you actually have to.

Yes, that's a good point.
>
> Of course you could have a trigger associated with a customer address 
> change
> which would go in and change the information in the accumulating file but
> that opens up a whole host of related issues (such as whether the changed
> information adds a line or two to the address).  There is a level of
> complexity in the on-the-spot real-time approach which, if exceeded, makes
> the old-fashioned batch approach seem very sane and desirable.
>

Good points. I was more concerned with the concept than the details.

It may well be that batch processing will run on dedicated servers for some 
time to come. My point was that it doesn't have to and it won't do so 
forever. The real point is that a lot of batch processing is done because 
that's how we've always done things.

I believe the advent of prolific and powerful parallel processing will 
change a lot of traditions...

Pete.
-- 
"I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/2/2008 4:53:47 PM


<docdwarf@panix.com> wrote in message news:fvf9af$dcs$1@reader2.panix.com...
> In article <6809kaF2qgkhnU1@mid.individual.net>,
> Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:
>>
>>
>>"James J. Gavan" <jgavandeletethis@shaw.ca> wrote in message
>>news:RewSj.107364$rd2.67172@pd7urf3no...
>
> [snip]
>
>>> Some years back I did read where software had been written to recognise
>>> Chinese characters and that Microsoft was interested. So in due course 
>>> of
>>> time if there's an enhanced OCR, which can read the Rosetta Stone then
>>> banks wont be faced with the manual intervention of keying in MICR.
>>> Nonetheless, however the cheques are encoded you are still left with a
>>> bunch of cheques = BATCH !
>>>
>>OK, my point was that batch processing will become redundant (largely
>>because the processing cycles it consumes will be available in parallel 
>>and
>>real time, so they can be applied at the time the transaction occurs, 
>>using
>>distributed parallel processors (which looks like being the next "big 
>>thing"
>>to hit IT, and will change some of the ways we look at programming...))
>>
>>Taking your example of the Bank above, I hope you would agree that if
>>cheques are eliminated, so is that particular problem.
>
> Mr Dashwood, I am not certain of the various legal requirement about when
> and how an institution is required to supply an authority (individual or
> governmental) with a statement of account(s)... but no, eliminating
> checks/cheques does not eliminate a bank's responsibility to be able to
> generate a list of transactions for an account or range of accounts over a
> given period of time.

It does,  however, eliminate their specific requirement to encode cheques 
with MICR and process them in batches, which I believe is what we were 
discussing. (I agree the other responsibilities you mention remain in 
place.)
>
> Given the example above... eliminating the physical, negotiable
> instruments known as cheques/checks in no wise eliminates the need or
> desirability of an organisation to know the state of its accounts,
> inventory, personnel activity or other phenomena segregated discretely
> over time.

No, I agree.
>
> To see everything as discrete transactions ignores the possibility of
> trend analysis... this was one of the reasons, e'er-so-long ago, that
> information was weighted against cost-of-production and some stuff deemed
> worthy of weekly reports... and other of monthly... and others quarterly,
> semi-annually, annually, biennially, etc.  Yes, sometime In The Future
> hardware will be such that data organised for an OnLine Transaction
> Processing system can be used for data-mining... but not just yet... and
> until hardware becomes such, data will have to be manipulated from the
> OLTP-entry state to a summarisable state with little or no operator
> interaction.

I agree again. I never said we can abolish batch processing right now.

Pete.
-- 
"I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/2/2008 4:59:29 PM


"Michael Wojcik" <mwojcik@newsguy.com> wrote in message 
news:fvfctq01flb@news4.newsguy.com...
> Pete Dashwood wrote:
>> "Bill Gunshannon" <billg999@cs.uofs.edu> wrote in message 
>> news:67v6faF2qb5npU1@mid.individual.net...
>>>>
>>> I don't agree with any of your conclusions but this is the easiest
>>> target.  There are a lot of systems that have been connected to the
>>> INTERNET since the ARPA and NSFNet days and have never had a virus.
>>> Being on the net does not guarantee virus will just pop up.
>>
>> You might well be right.
>
> And why let facts get in the way of perfectly good speculation, eh?
>
>> However, I believe the "challenge" of an arcitecture that hasn't 
>> previously been broken into (that we know about...) will prove pretty 
>> irresistible to some of the people who write this stuff...:-)
>
> Apparently this message has been inadvertently held by some server for a 
> quarter-century and only now forwarded on.
>
> ARPAnet switched to TCP/IP in 1983. I'd be willing to bet that before the 
> year was up, there were S/370s running VM/CMS connected to the Internet. 
> MIT likely had one, for example, given their close relationship with IBM's 
> Cambridge Scientific Center and the popularity of CMS (which was written 
> at CSC) in academia.
>
> I know personally of sites that had S/390s (running OS/390, CICS, and IMS) 
> with non-firewalled Internet connections in the early 1990s. Certainly at 
> least since 1990 or so there were many AS/400s and S/390s connected to the 
> Internet. (Prior to 1990, read "S/370" in place of "S/390".) Often they 
> were behind firewalls, but not always.
>
> In the early 1980s, there were plenty of black hats and curious kids 
> trying to break into systems of all sorts over dialup connections, using 
> war-dialing modems to find targets. I knew a number of them, and the 
> exploits of the more famous are well-documented in journals like _2600_, 
> not to mention various personal retrospectives, academic studies, etc.
>
> There were also plenty of mainframe systems on other networks that were 
> open to college students and other ... curious ... parties. All of BITNET, 
> for example.
>
> Many of these systems were penetrated to some extent by unauthorized 
> users; many were not.
>
> The hardware architecture is only one small part of the security of a 
> system. Certainly the capability architecture of the AS/400 / iSeries 
> makes it resistant to many of the forms of attack that are popular against 
> more conventional virtual-memory computers, for example. But many exploits 
> target vulnerabilities in the operating system, and far more in 
> applications, in system administration errors, and - above all - in users, 
> who remain the weakest link in the security chain.
>
> In short, there is nothing new about various "mainframe" platforms being 
> Internet-accessible, or otherwise available to hackers and writers of 
> malware. The relative lack of documented malware (there are exceptions, 
> like IBM's own "christmas card" trojan) for these platforms can be 
> attributed to many things: smaller profile, smaller attack surface, 
> generally more conservative administration, less casual use, and so forth. 
> But it cannot reasonably be attributed to a historical unavailability to 
> attackers, so there is no "new challenge" for them to find there.

OK. If I have this right, you are saying I was wrong to suggest they will be 
penetrated once they take over the role of "network server" (as some people 
suggested they will), because they've been connected to the network for 
years already, and have already been penetrated?

Fair enough.

Thanks for the history, very interesting.

Pete.
-- 
"I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/2/2008 5:09:20 PM

In article <680v8pF2r3apqU1@mid.individual.net>,
Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:

[snip]

>Good points. I was more concerned with the concept than the details.

How very... Managerial.  Are you a 'Big Picture' guy, too?

DD

0
Reply docdwarf (6044) 5/2/2008 5:37:44 PM

On Fri, 2 May 2008 10:18:25 -0500, "tlmfru" <lacey@mts.net> wrote:

>
>Robert <no@e.mail> wrote in message
>news:kdtk14pk2oococs6u2ruukcbhmtqmv9ck1@4ax.com...
>> On Fri, 2 May 2008 11:55:22 +1200, "Pete Dashwood"
><dashwood@removethis.enternet.co.nz>
>> wrote:
>> > Suppose, as each transaction occurred in real time it kicked off
>> >a sub task (maybe running on a different processor) that added the necessary
>> >report line to the collection right then and there... No need to filter by
>> >date or scan databases, just create the line in real time. (The report
>> >object could contain a method that filtered its own collection by date as
>> >the lines were sent to the print spool. You can argue that this is also a
>> >batch process, but it isn't within the scope I defined above...)
>>
>> It's called a Materialized View in Oracle. You don't have to write
>> anything except
>>
>> create materialized view foo
>>   build immediate refresh on commit enable query rewrite
>>   as select customer, count(*) from customers group by customer;
>>
>
>
>I don't know if this process answers the obvious concern that PD's
>suggestion doesn't seem to address: that of the posted information remaining
>correct until it's "printed" or otherwise disseminated.  Example: credit
>card monthly statements or any A/R listing: customers change their address
>between the time the transaction is posted and the time the report is
>printed.   What's the phrase - you don't commit to a changeable piece of
>information or the results of a calculation until you actually have to.
>
>Of course you could have a trigger associated with a customer address change
>which would go in and change the information in the accumulating file but
>that opens up a whole host of related issues (such as whether the changed
>information adds a line or two to the address).  There is a level of
>complexity in the on-the-spot real-time approach which, if exceeded, makes
>the old-fashioned batch approach seem very sane and desirable.

Updating a materialized view is done with cpu cycles that would otherwise be idle. It does
not slow the primary update (to customers, in this case). In other words it shifts cpu
cycles from the critical path, when the user is waiting, to times when usage is low.

A simple trigger WOULD slow the update. It would be better to append changes to a
sequential log file or table, posting changes to the summary (materialized view) table
later.

If no one is waiting for an answer, you want to minimize total cpu cycles. Batch might be
the best, or it might not, depending on change volume. If nothing changed since last
month's report, repreparing it is a waste of time. But when people are waiting for answers
in real time, it is rational to 'waste' cpu cycles anticipating those queries. 
0
Reply Robert 5/2/2008 5:41:52 PM

Howard Brazee wrote:
> On Fri, 02 May 2008 03:53:21 GMT, "James J. Gavan"
> <jgavandeletethis@shaw.ca> wrote:
> 
> 
>>(The way you and I write the Arabic 
>>numeral 'five' is not the way an Egyptian writes it).
> 
> 
> What does the Egyptian five look like?

I have no idea :-). It was '54 when I left Egypt so I think I can plead 
the onset of Alzheimer's. Our only view of 'Egypt' was in our RAF bus 
taking us swimming in the afternoon, and en route we passed through the 
outskirts of Ismalia to get to our Lagoon, an offshoot of the Suez 
Canal. Saw the odd civilian vehicle with Arabic number plates, but the 
numbers were unintelligible. The best I can remember was that zero was 
written at a slant as a sort of four-sided character.

Long shot - googling might show the characters they use(d).

Jimmy, Calgary AB
0
Reply jgavandeletethis (1047) 5/2/2008 7:04:30 PM

On Sat, 3 May 2008 04:53:47 +1200, "Pete Dashwood"
<dashwood@removethis.enternet.co.nz> wrote:

>It may well be that batch processing will run on dedicated servers for some 
>time to come. My point was that it doesn't have to and it won't do so 
>forever. The real point is that a lot of batch rocessing is done because 
>that's how we've always done things.

And this "we" includes accountants, managers, and other business
people.    Not all manufacturing fits into JIT processing, and lots of
business techniques are of a quantum nature.    We will still have
"packets" of data to process.

>I believe the advent of prolific and powerful parallel processing will 
>change a lot of traditions...

For various values of "powerful".   Ubiquitous connected data give us
more choices.    Life itself is community based, we have more alien
cells in our bodies than we have human cells.   Our brains work by
creating consensus between various elements pulling our decisions in
different ways.   Von Neumann architecture is simple and reliable -
but limited.
0
Reply howard (6275) 5/2/2008 7:11:37 PM

On Fri, 02 May 2008 19:04:30 GMT, "James J. Gavan"
<jgavandeletethis@shaw.ca> wrote:

>> What does the Egyptian five look like?
>
>I have no idea :-). It was '54 when I left Egypt so I think I can plead 
>the onset of Alzheimer's. Our only view of 'Egypt' was in our RAF bus 
>taking us swimming in the afternoon, and en route we passed through the 
>outskirts of Ismalia to get to our Lagoon, an offshoot of the Suez 
>Canal. Saw the odd civilian vehicle with Arabic number plates, but the 
>numbers were unintelligible. The best I can remember was that zero was 
>written at a slant as a sort of four-sided character.

I like the story that our number system is based upon how many angles
each digit has.    That requires a closed 4 and a 7 with a bar that
drops, plus only zero has a rounded shape.

One would think that the Arabic speaking countries would use Arabic
numerals.
0
Reply howard (6275) 5/2/2008 7:21:54 PM

On May 2, 4:58 pm, Binyamin Dissen <postin...@dissensoftware.com>
wrote:
> On Thu, 01 May 2008 21:24:38 -0500 Robert <n...@e.mail> wrote:
>
> :>On Thu, 1 May 2008 10:39:46 +1200, "Pete Dashwood" <dashw...@removethis.enternet.co.nz>
> :>wrote:
>
> :>>Point 2 is largely an illusion. Certainly, there are no viruses on
> :>>mainframes, but there will be once they are opened up to the network; it is
> :>>only a matter of time. (No, I don't have mainframe virus writing on my
> :>>"TODO" list, but it would be an "interesting" exercise... :-)).
>
> :>How does a virus get into a database server? You can't send it executable code.
>
> Buffer overflow.
>
> Buffer overflow.

In order for a 'buffer overflow' to work, there first needs to be
a buffer that can overflow; that is, a data block larger than
the buffer must be able to reach it unchecked. If the gateway or I/O
system checks the data block size then a buffer overflow won't happen.

Next it needs to overflow from a data area into an area that can be
executed. If the memory is segmented so that data segments cannot be
executed then the buffer overflow may corrupt data but it won't
execute code.

Then, if it can get executed, it needs to be able to establish itself in
the system, by, for example, adding some code to an executable on the
disk so that it can be run each time that program is started. If the
file system privileges prevent this then it will only be a data
corruption in a single program run.

So why are the vast majority of viruses aimed at Windows? It is
because MS makes it easy and other systems make it hard.


0
Reply riplin (4127) 5/2/2008 7:42:30 PM

On 29 Apr, 06:44, Richard <rip...@azonic.co.nz> wrote:
> On Apr 29, 4:11 pm, Graham Hobbs <gho...@cdpwise.net> wrote:
>
> > Hello,
> > Further questions please..
> > Do the MS and Fujitsu compilers support VSAM KSDS files? What about
>
> What do you mean by 'support'?  You are unlikely to find a VSAM KSDS
> file on Windows. Windows is not MVS or zOS. It does, however, have
> Cobol INDEXED SEQUENTIAL files.
>
> > DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch
>
> Windows does not normally run a CICS service. CICS is not part of
> Cobol, though normally Cobol programs are used in a CICS service.
>
> > and
> > EXEC SQL suchandsuch.
>
> SQL service is also not part of Cobol. Most SQL is achieved with a
> different product, such as a preprocessor, that converts the EXEC SQL
> lines into Cobol CALLs and such.
>
> Fujitsu version 3 does not include (AFAIK) SQL. Later versions of
> Fujitsu do support ODBC access through a limited set of EXEC SQL
> statements. You can always write the appropriate CALL statements.

If you had bought the Fujitsu v3 compiler then there would have been
ODBC compatibility (similar to DB2 and using SQL).

Fujitsu and Microfocus do support ODBC and SQL. Microfocus also has
CICS and JCL emulators available (at a price). DB2 can be obtained for
free from IBM and runs on the PC under Windows.

To save opening up another thread, VSAM can be emulated on the PC
(albeit with z/390 Assembler by Don Higgins).
0
Reply alistair7 (2054) 5/2/2008 7:42:46 PM

On 29 Apr, 15:13, Graham Hobbs <gho...@cdpwise.net> wrote:
> On Tue, 29 Apr 2008 07:32:12 -0600, Howard Brazee <how...@brazee.net>
> wrote:
>
> >On Tue, 29 Apr 2008 00:11:55 -0400, Graham Hobbs <gho...@cdpwise.net>
> >wrote:
>
> >>Hello,
> >>Further questions please..
> >>Do the MS and Fujitsu compilers support VSAM KSDS files? What about
> >>DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch and
> >>EXEC SQL suchandsuch.
> >>Thanks
> >>Graham
>
> >Huh?    It looks as though you want an IBM mainframe compiler, not a
> >Windows compiler.   Or maybe you want terminal software to allow you
> >to run TSO/SPF on your mainframe.
>
> Right about a mainframe compiler. My 1998 IBM compiler runs nicely
> under NT but fails utterly under XP thus my interest in an XP version.
> From IBM it is part of a $10,000 'great' package (last I heard) - most
> unfriendly.
>
> I semi-expected your comments about CICS and DB2 but what's the
> difference between KSDS and Indexed Sequential? For that matter what's
> the diff between ESDS and 'flat or sequential' files?
>

You can put KSDS alternate indices on to an ESDS file. You can't do
that with ISAM although that is effectively an indexed flat file.

> I really should know this stufff but I don't!
>
> Thanks
> Graham
> P.S. Have used DB2 and CICS calls from Cobol pgms many times but have
> 'never' ever examined syntax etc - never thought about writing my own
> DB2 calls - maybe it's not difficult.
>
> ** Posted from http://www.teranews.com **

0
Reply alistair7 (2054) 5/2/2008 7:45:00 PM

On May 2, 10:44 pm, "Pete Dashwood"
<dashw...@removethis.enternet.co.nz> wrote:

[snip]

> Both my company and personal income is all electronic. I can't speak for all
> New Zealanders, but I'm pretty sure most of us no longer carry more than a
> few dollars in cash, never mind cheque books. We use EFTPOS (Electronic
> Funds Transfer at Point Of Sale) in more than 90% of businesses and the same
> system allows credit and debit cards.

I use cash and cheques. I have never once used EFTPOS and have used my
credit card locally perhaps 4 times in the last 10 years. I do use it
for overseas stuff. The wife does use credit cards but it is always
paid off every month. When clients ask for my bank details to do
direct credit I 'lose' the paperwork and then eventually send a
cheque.

One complains that they have to keep a cheque printer around just for
me, having now given up on trying to do it another way.

I do use ATMs, however, but distrust them.


> I would wager there are people in our
> society now, over 18 and working, who have never seen or handled a cheque.
>
> BOTTOM LINE:  It won't be fifty years :-)
>
> Pete.
> --
> "I used to write COBOL...now I can do anything."

0
Reply riplin (4127) 5/2/2008 7:55:59 PM

On May 2, 11:09 pm, "Pete Dashwood"
<dashw...@removethis.enternet.co.nz> wrote:
> "Robert" <n...@e.mail> wrote in message
>
> news:8muk14pvqchs7f765h7ea8uqvsd3rne72b@4ax.com...
>
> > On Thu, 1 May 2008 10:39:46 +1200, "Pete Dashwood"
> > <dashw...@removethis.enternet.co.nz>
> > wrote:
>
> >>Point 2 is largely an illusion. Certainly, there are no viruses on
> >>mainframes, but there will be once they are opened up to the network; it is
> >>only a matter of time. (No, I don't have mainframe virus writing on my
> >>"TODO" list, but it would be an "interesting" exercise... :-)).
>
> > How does a virus get into a database server? You can't send it executable
> > code.
>
> Wanna bet? You can if you store it as data... :-)
>
> Once it has open ports it can be reached in a number of ways, not just using
> normal protocols...if you can get access to the storage system you're in...
>
> I was using "virus" loosely, perhaps I should have said "malware"...
>
> What about a stored procedure with a hidden triggerable process? Maybe
> disguised as an existing procedure (Trojan clone) which it replaces in the
> system libraries.
>
> What about an entirely new type of virus written specifically for the
> mainframe architecture? Something that hooks into a particular SVC or ESTAE
> for example?  Bury a trigger deep in REXX or JCL, cause a data or addressing
> exception and away you go... :-)
>
> Sure, all fanciful stuff and not very likely. But I'm not really trying and
> I have no desire to do it. Nevertheless, I bet there's someone out there,
> smarter than me, who DOES want to do it :-)
>
> And I don't see the mainframes as being ONLY database servers...

You have been indoctrinated by the MicroSoft astroturf that claims
that the faults in Windows exist in other systems and it is only
because Windows is so well loved that malware attacks them.

> if you store it as data.

In most systems, if you 'store it as data' then it is just data.

MSSQL, however, has this 'feature' that statements need not refer to a
table. An SQL injection can cause an executable statement without the
injection needing to know anything about the structures in the
database. This, when it is output in an .ASP page as a data field, can
inject JavaScript into the page.

> Once it has open ports

Data can't have 'open ports'; only executing code can. If the
architecture won't allow data segments to be executed then ...

But even if data segments can contain executable code, how does it
'execute'? Something else needs to change a code pointer or the stack
to point to it.

> it can be reached in a number of ways, not just using
> normal protocols...

Any normal firewall will block all ports and will filter traffic in
and out of the system.

It happens that Windows' firewall has many failings; one is that it
starts _after_ the network is up, so there is a time when no firewall
exists.

> if you can get access to the storage system you're in...

That is why most reasonable systems have file system security that
prevents such access. For example, on Unix/Linux a file is required to
be marked with the 'executable' attribute before it can be executed. A
download or save will not do this. On Windows (or more specifically
Outlook Express) an email attachment 'xyz.jpeg.exe' will execute just
by opening the email. NO OTHER email client will do this. NO OTHER
system goes to such pains as to hide the fact that this will execute.

This is not to say that other systems are immune, but that they are not
designed to be so easy. e.g. ActiveX.

0
Reply riplin (4127) 5/2/2008 8:16:39 PM

This might be a useful top post reply.

Graham - have you tried right clicking on the setup executable and
setting Windows Compatibility mode to NT?

If it does install but just doesn't work - have you tried right
clicking the Icon that starts it and setting Windows compatibility
mode?

Please let us know here on the newsgroup if this helps.


On Apr 30, 10:30 am, Graham Hobbs <gho...@cdpwise.net> wrote:
> Hello,
> Thanks for the responses. Learning all the time.
> Not only 'not like the price', 'havent got the money'.
> Hercules! For me, a whole new line of research, maybe later.
> MF not MS, sorry.
> Pete, thanks for the history.
> Bill Klein asked for more explanation so ..
>
> Bill,
> Am retired so have no EMPLOYER and cost is a serious factor especially
> for a project that today might be dubious saleability.
>
> Until a year ago I had an old IBM compiler (VA Cobol V2.2) that runs
> on an old laptop under Windows NT plus DB2 V7.2, CICS for Windows 3.1
> and VSAM KSDS via Pervasive's bTrieve, all 1990's vintage. On this
> laptop I'm developing a software package written in batch Cobol that
> generates Cobol/CICS programs (dinosaurial as it may seem, have gone
> too far to stop). The package is aimed at any platform that runs CICS,
> especially these days it seems, z/OS.
>
> So the V2.2 compiler I have performs two functions, a) compile/produce
> executables of the batch pgms of my software, b) compile/produce
> executables for the online Cobol/CICS pgms that my software generates.
>
> The old NT laptop grew old, slow and full so I bought a new one. It
> won't accept NT so I opted for Windows XP. My CICS and DB2 work fine
> thereon. The Cobol fails horribly and no fixes available.
>
> Today the XP laptop has NT running under Microsoft's Virtual PC thus
> some of my development can continue on the NT side.
>
> But the ultimate intent is to email pgms between myself and clients.
> Technically this must be done on the XP side while compilations etc
> must be done on the NT side - is labour intensive and 'almost'
> impractical.
>
> Thus the need for a free/cheap XP Cobol compiler that accepts CICS and
> DB2 commands and produces working executables - in essence goodbye NT.
>
> As you said mainframe emulators are expensive (unless my government
> will give me a grant :-000)))(still choice 1), PWD might be amenable
> (choice 2), a Cobol compiler NT to XP fix was available (choice 3),
> whatelse?.
>
> So In contacting the group, I am really looking at the 'whatelse'
> scenario. Hope that better explains my situation.
> Any help appreciated. Thanks
> Graham
>
> On Tue, 29 Apr 2008 19:01:56 GMT, "William M. Klein"
>
>
>
> <wmkl...@nospam.netcom.com> wrote:
> >There have been lots of replies - with varying levels of "helpfulness".
>
> >It would SEEM to me that you are asking for a PC (Windows) based mainframe
> >emulation environment.  These exist but are not cheap.  (IBM and Micro Focus
> >both sell extensive and expensive products for this).  However, I am a little
> >confused as to WHY you would want this.  If you want this in order to do Windows
> >development for applications intended for mainframe deployment, then I would
> >expect that your mainframe EMPLOYER would provide such tools (and they often can
> >afford them).
>
> >If you are actually trying to develop (or even "play with") programs that are
> >just intended for the PC/Windows environment, then you are looking for the wrong
> >tools.
>
> >Can you tell us WHY you want DB2, CICS, VSAM support?  What do you plan on doing
> >with this compiler/product?  With that information, we may be able to better
> >help you.
>
> ** Posted from http://www.teranews.com **

0
Reply thaneh (142) 5/2/2008 8:18:50 PM

On Fri, 2 May 2008 12:42:46 -0700 (PDT), Alistair
<alistair@ld50macca.demon.co.uk> wrote:

>On 29 Apr, 06:44, Richard <rip...@azonic.co.nz> wrote:
>> On Apr 29, 4:11 pm, Graham Hobbs <gho...@cdpwise.net> wrote:
>>
>> > Hello,
>> > Further questions please..
>> > Do the MS and Fujitsu compilers support VSAM KSDS files? What about
>>
>> What do you mean by 'support'?  You are unlikely to find a VSAM KSDS
>> file on Windows. Windows is not MVS or zOS. It does, however, have
>> Cobol INDEXED SEQUENTIAL files.
>>
>> > DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch
>>
>> Winows does not normally run a CICS service. CICS is not part of
>> Cobol, though normally Cobol programs are used in a CICS service.
>>
>> > and
>> > EXEC SQL suchandsuch.
>>
>> SQL service is also not part of Cobol. Most SQL is achieved with a
>> different product, such as a preprocessor, that converts the EXEC SQL
>> lines into Cobol CALLs and such.
>>
>> Fujitsu version 3 does not include (AFAIK) SQL. Later versions of
>> Fujitsu do support ODBC access through a limited set of EXEC SQL
>> statements. You can always write the appropriate CALL statements.
>
>If you had bought the Fujitsu v3 compiler then there would have been
>ODBC compatibility (similar to DB2 and using SQL).
>
>Fujitsu and Microfocus do support ODBC and SQL. Microfocus also has
>CICS and JCL emulators available (at a price). DB2 can be obtained for
>free from IBM and runs on the pc under windows.
>
>To save opening up another thread, VSAM can be emulated on the pc
>(albeit with z/390 Assembler by Don Higgins).
Alistair,
I started with IBM Cobol and CICS in the early nineties and the
whatevers for DB2 and VSAM, and it's starting to look as though that's
where I'll have to stay. Not sure Fujies even had a product.
Anyway, my project is entirely DB2 and VSAM KSDS so noted your
comments posted elsewhere about ESDS and ISAM for knowledge sake.
Thanks.
Graham
Amazing the amount of off-topic stuff that got into this subject, eh! :-)
** Posted from http://www.teranews.com **
0
Reply ghobbs (117) 5/2/2008 8:53:09 PM

Thane, 

I tried Compatibility NT mode. Didn't work.

I bought the new laptop a year ago with XP thereon. Installing the VA
Cobol went OK, compiling went OK, executing pgms failed every time.
Went to check Help & Support - it wasn't there - the install had
rubbed it out. Have no idea what else also got destroyed. Anyway, after
reloading the OS I wasn't going to put the Cobol back on again.

The question that arises in my mind is . . back then I thought I had
tried everything - maybe I overlooked something. I don't think I did.

The zero feedback on any success stories tells me nobody's done
this. Too bad for me - mais c'est la vie.

Graham

On Fri, 2 May 2008 13:18:50 -0700 (PDT), Thane
<thaneh@softwaresimple.com> wrote:

>This might be a useful top post reply.
>
>Graham - have you tried right clicking on the setup executable and
>setting Windows Compatibility mode to NT?
>
>If it does install but just doesn't work - have you tried right
>clicking the Icon that starts it and setting Windows compatibility
>mode?
>
>Please let us know here on the newsgroup if this helps.
>
>
>On Apr 30, 10:30 am, Graham Hobbs <gho...@cdpwise.net> wrote:
>> Hello,
>> Thanks for the responses. Learning all the time.
>> Not only 'not like the price', 'havent got the money'.
>> Hercules! For me, a whole new line of research, maybe later.
>> MF not MS, sorry.
>> Pete, thanks for the history.
>> Bill Klein asked for more explanation so ..
>>
>> Bill,
>> Am retired so have no EMPLOYER and cost is a serious factor especially
>> for a project that today might be dubious saleability.
>>
>> Until a year ago I had an old IBM compiler (VA Cobol V2.2) that runs
>> on an old laptop under Windows NT plus DB2 V7.2, CICS for Windows 3.1
>> and VSAM KSDS via Pervasive's bTrieve, all 1990's vintage. On this
>> laptop I'm developing a software package written in batch Cobol that
>> generates Cobol/CICS programs (dinosaurial as it may seem, have gone
>> too far to stop). The package is aimed at any platform that runs CICS,
>> especially these days it seems, z/OS.
>>
>> So the V2.2 compiler I have performs two functions, a) compile/produce
>> executables of the batch pgms of my software, b) compile/produce
>> executables for the online Cobol/CICS pgms that my software generates.
>>
>> The old NT laptop grew old, slow and full so I bought a new one. It
>> won't accept NT so I opted for Windows XP. My CICS and DB2 work fine
>> thereon. The Cobol fails horribly and no fixes available.
>>
>> Today the XP laptop has NT running under Microsoft's Virtual PC thus
>> some of my development can continue on the NT side.
>>
>> But the ultimate intent is to email pgms between myself and clients.
>> Technically this must be done on the XP side while compilations etc
>> must be done on the NT side - is labour intensive and 'almost'
>> impractical.
>>
>> Thus the need for a free/cheap XP Cobol compiler that accepts CICS and
>> DB2 commands and produces working executables - in essence goodbye NT.
>>
>> As you said mainframe emulators are expensive (unless my government
>> will give me a grant :-000)))(still choice 1), PWD might be amenable
>> (choice 2), a Cobol compiler NT to XP fix was available (choice 3),
>> whatelse?.
>>
>> So In contacting the group, I am really looking at the 'whatelse'
>> scenario. Hope that better explains my situation.
>> Any help appreciated. Thanks
>> Graham
>>
>> On Tue, 29 Apr 2008 19:01:56 GMT, "William M. Klein"
>>
>>
>>
>> <wmkl...@nospam.netcom.com> wrote:
>> >There have been lots of replies - with varying levels of "helpfulness".
>>
>> >It would SEEM to me that you are asking for a PC (Windows) based mainframe
>> >emulation environment.  These exist but are not cheap.  (IBM and Micro Focus
>> >both sell extensive and expensive products for this).  However, I am a little
>> >confused as to WHY you would want this.  If you want this in order to do Windows
>> >development for applications intended for mainframe deployment, then I would
>> >expect that your mainframe EMPLOYER would provide such tools (and they often can
>> >afford them).
>>
>> >If you are actually trying to develop (or even "play with") programs that are
>> >just intended for the PC/Windows environment, then you are looking for the wrong
>> >tools.
>>
>> >Can you tell us WHY you want DB2, CICS, VSAM support?  What do you plan on doing
>> >with this compiler/product?  With that information, we may be able to better
>> >help you.
>>
>> ** Posted from http://www.teranews.com **

** Posted from http://www.teranews.com **
0
Reply ghobbs (117) 5/2/2008 9:29:44 PM

On May 3, 7:42 am, Alistair <alist...@ld50macca.demon.co.uk> wrote:
> On 29 Apr, 06:44, Richard <rip...@azonic.co.nz> wrote:
>
>
>
> > On Apr 29, 4:11 pm, Graham Hobbs <gho...@cdpwise.net> wrote:
>
> > > Hello,
> > > Further questions please..
> > > Do the MS and Fujitsu compilers support VSAM KSDS files? What about
>
> > What do you mean by 'support' ?  You are unlikely to find a VSAM KSDS
> > file on Windows. Windows is not MVS or zOS. It does, however, have
> > Cobol INDEXED SEQUENTIAL files.
>
> > > DB2 access, CICS access, guess I'm asking EXEC CICS suchandsuch
>
> > Winows does not normally run a CICS service. CICS is not part of
> > Cobol, though normally Cobol programs are used in a CICS service.
>
> > > and
> > > EXEC SQL suchandsuch.
>
> > SQL service is also not part of Cobol. Most SQL is achieved with a
> > different product, such as a preprocessor, that converts the EXEC SQL
> > lines into Cobol CALLs and such.
>
> > Fujitsu version 3 does not include (AFAIK) SQL. Later versions of
> > Fujitsu do support ODBC access through a limited set of EXEC SQL
> > statements. You can always write the appropriate CALL statements.
>
> If you had bought the Fujitsu v3 compiler then there would have been
> ODBC compatibility (similar to DB2 and using SQL).

I have now checked and the Fujitsu version 3 available for download
does not include ODBC or SQL. The syntax may be in the compiler but
the libraries required are not included (see webpage).

I did actually buy version 3 (very cheaply) after having several free
betas and I even bought a set of printed manuals.  The copy I had
included Windows 3.1 compilers and stuff.

The SQL in Fujitsu is not arbitrary SQL but a subset that is directly
supported by the compiler. For example, you cannot do an 'EXEC SQL
CREATE TABLE ...'.
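
To make the "subset" point concrete, here is a minimal sketch of the
kind of statement an embedded-SQL-capable PC compiler or precompiler
will normally take - plain DML against host variables - as opposed to
DDL such as CREATE TABLE. The table, column and host-variable names
are invented, and the details (whether a BEGIN/END DECLARE SECTION is
required, how the SQLCA is brought in) vary from product to product.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SQLSKTCH.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      *    Host variables; some precompilers want these wrapped in
      *    EXEC SQL BEGIN/END DECLARE SECTION END-EXEC.
       01  WS-CUST-ID           PIC S9(9) COMP.
       01  WS-CUST-NAME         PIC X(30).
           EXEC SQL INCLUDE SQLCA END-EXEC.
       PROCEDURE DIVISION.
           MOVE 1001 TO WS-CUST-ID
      *    Simple DML such as SELECT INTO is what a compiler-supported
      *    SQL subset typically accepts; DDL such as CREATE TABLE is
      *    the sort of statement that tends to be excluded.
           EXEC SQL
               SELECT CUST_NAME INTO :WS-CUST-NAME
               FROM   CUSTOMER
               WHERE  CUST_ID = :WS-CUST-ID
           END-EXEC
           IF SQLCODE = 0
               DISPLAY "FOUND: " WS-CUST-NAME
           ELSE
               DISPLAY "SQLCODE: " SQLCODE
           END-IF
           STOP RUN.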


>  Fujitsu and Microfocus do support ODBC and SQL. Microfocus also has
> CICS and JCL emulators available (at a price). DB2 can be obtained for
> free from IBM and runs on the pc under windows.
>
> To save opening up another thread, VSAM can be emulated on the pc
> (albeit with z/390 Assembler by Don Higgins).

0
Reply riplin (4127) 5/2/2008 10:04:45 PM


<docdwarf@panix.com> wrote in message news:fvfjh8$evc$1@reader2.panix.com...
> In article <680v8pF2r3apqU1@mid.individual.net>,
> Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:
>
> [snip]
>
>>Good points. I was more concerned with the concept than the details.
>
> How very... Managerial.  Are you a 'Big Picture' guy, too?
>

Is that really the best you can do, Doc?

Sad.

Pete.
-- 
"I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/2/2008 11:20:49 PM


"Richard" <riplin@azonic.co.nz> wrote in message 
news:2db5bfab-5d42-4a26-af71-695e3cf8f93a@b5g2000pri.googlegroups.com...
On May 2, 10:44 pm, "Pete Dashwood"
<dashw...@removethis.enternet.co.nz> wrote:
> "James J. Gavan" <jgavandeletet...@shaw.ca> wrote in 
> messagenews:RewSj.107364$rd2.67172@pd7urf3no...
>
>
>
> > Pete Dashwood wrote:
> >> <docdw...@panix.com> wrote in message
> >>news:fvcpf4$ge8$1@reader2.panix.com...
>
> >>>In article <6cnj14db0cpscakdtdacpotptrgtjnk...@4ax.com>,
> >>>Howard Brazee <how...@brazee.net> wrote:
>
> >>>[snip]
>
> >>>>So which of these are going to soon be obsolete?
>
> >>>Mr Dashwood's assertion did not address obsolescence.
>
> >>>DD
>
> >> Fair comment, Doc.
>
> >> Before this spins out of control I better be clear about what 
> >> particular
> >> flavour of "batch processing" I WAS addressing.
>
> >> It was the "traditional" processing of "batches" of transactions. The
> >> stuff that is currently done in overnight windows to update back end
> >> databases. However, I will extend it to include manually written DB 
> >> scans
> >> that build reports. (as opposed to single queries or Lambdas where the
> >> RDBMS software decides what will be scanned and by which processor and
> >> when...)
>
> >> I said it would be redundant.
>
> >> My reasons for that are as follows:
>
> >> 1. Before we had online processing, batch processing was the only
> >> processing. COBOL was ideally suited to it and this led to the
> >> development of a culture that "took it as read".
>
> >> 2. When the first 3270 style displays appeared (running in 32K in
> >> Foreground 1) the processor power available was such that the entire
> >> updates required by the online transactions could not be accomplished
> >> (and there were other security considerations and backup implications) 
> >> so
> >> transaction files were written for processing in batch against the back
> >> end DB, and the online processing confined itself mainly to data
> >> retrieval.
> >> As more sophisticated DB software arrived and processors became more
> >> powerful, online transactions were able to accomplish more and the need
> >> to "complete" transaction processing in batch diminished. Batch
> >> processing then became the domain of reporting which involved DB scans,
> >> and the deferred processing of batched transactions collected during 
> >> the
> >> day.
>
> >> 3. My argument is that with modern networked processors parallel
> >> processing and multitasking can simply do what once required a batch
> >> process to achieve. There will still be reports that are based around a
> >> time series, for example, and the data for each line of the report 
> >> needs
> >> to be collected from somewhere. However, if the problem is viewed
> >> differently from the traditional approach, different ways of achieving 
> >> it
> >> become apparent.
>
> >> For example, suppose a certain "monthly report" was viewed as a report
> >> "object" with a "collection" of report lines. The problem is now one of
> >> ensuring the collection covers what happened during the month. Using 
> >> the
> >> traditional approach we might scan all the transactions between the
> >> requisite dates, do some figuring on each transaction, and place report
> >> lines into the collection from our batch scan. The collection would be
> >> built by a single processor running a dedicated (batch) process.
>
> >> But it quickly becomes apparent that that is not the ONLY way to build
> >> the collection. Suppose, as each transaction occurred in real time it
> >> kicked off a sub task (maybe running on a different processor) that added
> >> the necessary report line to the collection right then and there... No
> >> need to filter by date or scan databases, just create the line in real
> >> time. (The report object could contain a method that filtered its own
> >> collection by date as the lines were sent to the print spool. You can
> >> argue that this is also a batch process, but it isn't within the scope 
> >> I
> >> defined above...)
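
The "report as a collection of lines" idea quoted above can be
sketched in very ordinary COBOL; no OO is needed to see its shape.
Everything below is invented for illustration: a working-storage table
stands in for the collection, ADD-REPORT-LINE is the "append a line as
the transaction happens" step, and PRINT-RANGE is the month-end
"filter by date and print" step. (In practice the lines would be
written somewhere persistent, a point the rest of the exchange takes
up.)

       IDENTIFICATION DIVISION.
       PROGRAM-ID. RPTCOLL.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-LINE-COUNT        PIC 9(4)  VALUE 0.
       01  WS-COLLECTION.
           05  WS-RPT-ENTRY OCCURS 1000 TIMES INDEXED BY RPT-IDX.
               10  WS-RPT-DATE  PIC 9(8).
               10  WS-RPT-TEXT  PIC X(60).
       01  WS-TRAN-DATE         PIC 9(8).
       01  WS-TRAN-TEXT         PIC X(60).
       01  WS-FROM-DATE         PIC 9(8).
       01  WS-TO-DATE           PIC 9(8).
       PROCEDURE DIVISION.
       MAIN-PARA.
      *    Two transactions arriving "in real time".
           MOVE 20080401 TO WS-TRAN-DATE
           MOVE "INVOICE 1001 PAID" TO WS-TRAN-TEXT
           PERFORM ADD-REPORT-LINE
           MOVE 20080415 TO WS-TRAN-DATE
           MOVE "INVOICE 1002 PAID" TO WS-TRAN-TEXT
           PERFORM ADD-REPORT-LINE
      *    Month end: emit only the April 2008 lines.
           MOVE 20080401 TO WS-FROM-DATE
           MOVE 20080430 TO WS-TO-DATE
           PERFORM PRINT-RANGE
           STOP RUN.
       ADD-REPORT-LINE.
      *    Called as each transaction occurs - no batch scan later.
           ADD 1 TO WS-LINE-COUNT
           SET RPT-IDX TO WS-LINE-COUNT
           MOVE WS-TRAN-DATE TO WS-RPT-DATE (RPT-IDX)
           MOVE WS-TRAN-TEXT TO WS-RPT-TEXT (RPT-IDX).
       PRINT-RANGE.
           PERFORM VARYING RPT-IDX FROM 1 BY 1
                   UNTIL RPT-IDX > WS-LINE-COUNT
               IF WS-RPT-DATE (RPT-IDX) >= WS-FROM-DATE
                  AND WS-RPT-DATE (RPT-IDX) <= WS-TO-DATE
                   DISPLAY WS-RPT-DATE (RPT-IDX) " "
                           WS-RPT-TEXT (RPT-IDX)
               END-IF
           END-PERFORM.
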
>
> > You make it sound so easy, if you leave pertinent bits out.
>
> What pertinent bit do you believe was left out, Jimmy?
>
> > All fine and dandy above, but all your elements in the Collection, and 
> > the
> > Collection itself, are NON-PERSISTENT objects, in memory as vapourware.
>
> Only very briefly until they are stored. What's important, and the point I
> was trying to make, is that by thinking about it differently, you can 
> arrive
> at a different solution.
>
> Sure, the objects are volatile if you are talking about OO processing. I'm
> talking about OO concepts, not the programming level. The collection of
> report lines is "viewed as" a collection (I did say that...) conceptually.
> Of course it has to be stored somewhere and it is totally irrelevant 
> whether
> you use an object storage system or a standard RDB.
>
> >Theoretically it could work, providing you don't exit the application or
> >somebody pulls the wall-plug. For security's sake you would have to store
> >the elements as PERSISTENT objects in text form in a file or DB. When
> >appropriate you could read the Persistent DB Table and either recreate 
> >the
> >Collection for reporting or print direct from the DB.
>
> There you go... :-) (That's not the only way to do it, BTW...)
>
>
>
> > I can't see 'batch processing' entirely disappearing - not in the first
> > half of this century, anyway. I mentioned some years back a friend who 
> > was
> > the night-time supervisor for a bank data centre in Calgary; collecting
> > ALL cheques for Alberta, parts of North West B.C and a little of
> > Saskatchewan, on our eastern border. (Their cut-off was something like
> > 02:00 hours to get the encoded batches back East (Ontario)).
>
> > During her tenure volumes decreased to 30% and most likely are now 
> > around
> > 20-25%, given Internet banking, plus the every increasing volume of
> > debit/credit cards. The current problem is they,(bank data centres), 
> > have
> > to encode (MICR) the numerals for "Sixty-six dollars and two cents". 
> > Back
> > in 1967 OCR was a challenge and still IS - there's no way, other than
> > universal draconian legislation, to make humans all write their numbers 
> > in
> > a set format. (The way you and I write the Arabic numeral 'five' is not
> > the way an Egyptian writes it).
>
> > Some years back I did read where software had been written to recognise
> > Chinese characters and that Microsoft was interested. So in due course 
> > of
> > time if there's an enhanced OCR, which can read the Rosetta Stone then
> > banks won't be faced with the manual intervention of keying in MICR.
> > Nonetheless, however the cheques are encoded you are still left with a
> > bunch of cheques = BATCH !
>
> OK, my point was that batch processing will become redundant (largely
> because the processing cycles it consumes will be available in parallel 
> and
> real time, so they can be applied at the time the transaction occurs, 
> using
> distributed parallel processors (which looks like being the next "big 
> thing"
> to hit IT, and will change some of the ways we look at programming...))
>
> Taking your example of the Bank above, I hope you would agree that if
> cheques are eliminated, so is that particular problem.
>
> I believe they will be. (I haven't personally written a cheque for at 
> least
> 15 years now and I haven't seen one for at least five years. (NZ is very
> much an electronic society so we may be atypical.) Even if they (cheques)
> are not eliminated, MICR encoding is pretty archaic. Leaps have been made 
> in
> OCR technology and recognition software generally. There are systems now
> that can recognise faces from moving video, shot at a distance in poor
> light; deciphering numbers is unlikely to be a problem.
>
> A truly illegible cheque won't make it into the system anyway.
>
> Online banking means not sending money through the mail as a piece of 
> paper.
> I pay regular bills by standing orders or direct debits, and one offs by
> electronic transfer, initiated from my Notebook computer, direct to the
> recipient's bank account. (I bought something on "TradeMe" (NZ equivalent 
> of
> e-bay) in exactly this way, just a few days ago. Completely painless... a
> vendor in Wellington 300 miles away received my electronic transfer into
> their account overnight and I had the goods (sent by courier once the 
> funds
> were in the bank), before the end of the same day.)
>
> Both my company and personal income is all electronic. I can't speak for 
> all
> New Zealanders, but I'm pretty sure most of us no longer carry more than a
> few dollars in cash, never mind cheque books. We use EFTPOS (Electronic
> Funds Transfer at Point Of Sale) in more than 90% of businesses and the 
> same
> system allows credit and debit cards.

I use cash and cheques. I have never once used EFTPOS and have used my
credit card locally perhaps 4 times in the last 10 years. I do use it
for overseas stuff. The wife does use credit cards but it is always
paid off every month. When clients ask for my bank details to do
direct credit I 'lose' the paperwork and then eventually send a
cheque.

One complains that they have to keep a cheque printer around just for
me, having now given up on trying to do it another way.

I do use ATMs, but I distrust them.

[Pete]

Thanks for that Richard. Your post prompted me to survey some other friends. 
I talked to 6. 2 of them are under thirty and would not even consider using 
cheques, seeing them as "quaint" :-) The three over 50 all have cheque books 
(I don't myself), but only one of them uses it regularly.

The remaining one reckoned she never pays bills anyway and if she needs 
anything she steals it from the Warehouse... :-) (She had had a few drinks 
and I saw no point in pursuing a serious discussion :-))

I think my perception may have been wrong and more people actually use 
cheques and cash than I thought. Certainly the older folks said they 
objected to EFTPOS because it was hard enough to remember a pin for ATMs 
without having another one as well...


> I would wager there are people in our
> society now, over 18 and working, who have never seen or handled a cheque.
>
> BOTTOM LINE: It won't be fifty years :-)
>
> Pete.
> --
> "I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/2/2008 11:33:43 PM


Pete Dashwood wrote:
> "Richard" <riplin@azonic.co.nz> wrote in message
> news:2db5bfab-5d42-4a26-af71-695e3cf8f93a@b5g2000pri.googlegroups.com...
> On May 2, 10:44 pm, "Pete Dashwood"
> <dashw...@removethis.enternet.co.nz> wrote:

>
> [Pete]
>
> Thanks for that Richard. Your post prompted me to survey some other friends.
> I talked to 6. 2 of them are under thirty and would not even consider using
> cheques, seeing them as "quaint" :-) The three over 50 all have cheque books
> (I don't myself), but only one of them uses it regularly.
>
> The remaining one reckoned she never pays bills anyway and if she needs
> anything she steals it from the Warehouse... :-) (She had had a few drinks
> and I saw no point in pursuing a serious discussion :-))

So, what did you pursue??

> I think my perception may have been wrong and more people actually use
> cheques and cash than I thought. Certainly the older folks said they
> objected to EFTPOS because it was hard enough to remember a pin for ATMs
> without having another one as well...

I have a more fundamental objection (I can still remember 8388608), in
that I object to any organization, especially banks, taking a
percentage, or a charge, for all transactions. It is bad enough that
GST does this, but at least some of that goes to good causes.

Banks want  to be like a Casino and rake off a small percentage of
_everything_, just because they can (if you let them).




0
Reply riplin (4127) 5/3/2008 2:04:45 AM

Pete Dashwood wrote:
> OK. If I have this right, you are saying I was wrong to suggest they will be 
> penetrated once they take over the role of "network server" (as some people 
> suggested they will), because they've been connected to the network for 
> years already, and have already been penetrated?

More or less. In other words, I agree with you on what I think is the 
major thesis (mainframe systems will be attacked when they are 
accessible to attackers), though I disagree on the minor one (when and 
how rapidly this will happen).

I don't expect there will ever be an explosion of malware specifically 
for the mainframe OSes. They're a less-rewarding target (fewer of 
them, in particular) that requires greater effort. There's less 
interactive use, which means a smaller attack surface for trojans; and 
many of the worms these days are platform-independent, targeting 
multiplatform server languages (eg PHP) and subsystems (eg Oracle).

(Viruses are more difficult to write for any system with C2 or better 
security than they were for unsecured PC OSes, but they're also 
largely passé, and are much less useful to today's blackhats, most of 
whom are professional malware writers for organized crime and 
industrial espionage.)

The main security advantages of the mainframe over, say, Windows, are 
cultural. People don't tend to run unnecessary services on mainframes. 
People don't tend to use their mainframe servers for checking email 
and surfing the web. People who administer mainframes tend to be 
cranky about giving anyone permission to do anything.

Fact is, it's pretty rare these days that anything exciting happens in 
computer security. The arms race between attackers and defenders is 
fairly even, so attack and penetration rates stay fairly level over 
the long term. It's just noise to most people - until it's their 
system that's successfully penetrated.

-- 
Michael Wojcik
0
Reply mwojcik (1879) 5/3/2008 3:46:02 AM


"Richard" <riplin@azonic.co.nz> wrote in message 
news:c2b47643-e6e3-4cbe-9916-5bb3a2e4e179@u36g2000prf.googlegroups.com...
>
>
> Pete Dashwood wrote:
>> "Richard" <riplin@azonic.co.nz> wrote in message
>> news:2db5bfab-5d42-4a26-af71-695e3cf8f93a@b5g2000pri.googlegroups.com...
>> On May 2, 10:44 pm, "Pete Dashwood"
>> <dashw...@removethis.enternet.co.nz> wrote:
>
>>
>> [Pete]
>>
>> Thanks for that Richard. Your post prompted me to survey some other 
>> friends.
>> I talked to 6. 2 of them are under thirty and would not even consider 
>> using
>> cheques, seeing them as "quaint" :-) The three over 50 all have cheque 
>> books
>> (I don't myself), but only one of them uses it regularly.
>>
>> The remaining one reckoned she never pays bills anyway and if she needs
>> anything she steals it from the Warehouse... :-) (She had had a few 
>> drinks
>> and I saw no point in pursuing a serious discussion :-))
>
> So, what did you pursue??

:-) If I ever get to the stage where I have to get them drunk... well, I 
think I'd just quit.
>
>> I think my perception may have been wrong and more people actually use
>> cheques and cash than I thought. Certainly the older folks said they
>> objected to EFTPOS because it was hard enough to remember a pin for ATMs
>> without having another one as well...
>
> I have a more fundamental objection (I can still remember 8388608), in
> that I object to any organization, especially banks, taking a
> percentage, or a charge, for all transactions. It is bad enough that
> GST does this, but at least some of that goes to good causes.

I remember that number as being the maximum in an unsigned 24 bit word (ICL 
days), but not in any other connection. Am I missing something here?
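
(Arithmetic aside, for anyone puzzling over that number: 8,388,608 is
2^23. In a 24-bit word that is the signed two's-complement boundary -
the range runs from -8,388,608 to +8,388,607 - whereas the unsigned
maximum would be 2^24 - 1 = 16,777,215.)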
>
> Banks want  to be like a Casino and rake off a small percentage of
> _everything_, just because they can (if you let them).

Yep. They call it "business"... I don't disagree with your objections, 
though.

Taking a scriptural inspiration: "Go Ye and do likewise..." :-)

(Don't know why MF runtime fees came to mind just then...funny how the mind 
works...:-))

Pete.
-- 
"I used to write COBOL...now I can do anything."


0
Reply dashwood (4370) 5/3/2008 3:51:18 AM

On Sat, 3 May 2008 11:33:43 +1200, "Pete Dashwood" <dashwood@removethis.enternet.co.nz>
wrote:
> Certainly the older folks said they 
>objected to EFTPOS because it was hard enough to remember a pin for ATMs 
>without having another one as well...

There is an easy solution -- I write the PIN on the back of the card in large numbers
using a felt-tipped marker. 

I always answer bank email inquiries asking for personal information. I first thought it
strange their websites are in former Soviet Union countries, but then I saw ALL the banks
do it. It must be important because the queries come several times a week. 

What does it mean when the bank says Insufficient Funds? It must mean the bank is
temporarily short, right? My accounts have plenty of money.
0
Reply Robert 5/3/2008 4:42:55 AM

"Robert" <no@e.mail> wrote in message 
news:0iqn1451n8e0v2hpee65imksr7c2jks1s5@4ax.com...
> On Sat, 3 May 2008 11:33:43 +1200, "Pete Dashwood" 
> <dashwood@removethis.enternet.co.nz>
> wrote:
>> Certainly the older folks said they
>>objected to EFTPOS because it was hard enough to remember a pin for ATMs
>>without having another one as well...
>
> There is an easy solution -- I write the PIN on the back of the card in 
> large numbers
> using a felt-tipped marker.
>
> I always answer bank email inquiries asking for personal information. I 
> first thought it
> strange their websites are in former Soviet Union countries, but then I 
> saw ALL the banks
> do it. It must be important because the queries come several times a week.
>
> What does it mean when the bank says Insufficient Funds? It must mean the 
> bank is
> temporarily short, right? My accounts have plenty of money.

LOL! Good stuff... made my day :-)

Thanks,

Pete.
-- 
"I used to write COBOL...now I can do anything." 


0
Reply dashwood (4370) 5/3/2008 7:36:29 AM

In article <681luhF2qqqoeU1@mid.individual.net>,
Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:
>
>
><docdwarf@panix.com> wrote in message news:fvfjh8$evc$1@reader2.panix.com...
>> In article <680v8pF2r3apqU1@mid.individual.net>,
>> Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:
>>
>> [snip]
>>
>>>Good points. I was more concerned with the concept than the details.
>>
>> How very... Managerial.  Are you a 'Big Picture' guy, too?
>>
>
>Is that really the best you can do, Doc?

It was, I believe, what the situation merited... and a bit of lagniappe.

>
>Sad.

Not every day is sunshine and bunny-rabbits, Mr Dashwood... and a good 
thing, too, as it might get monotonous.

DD

0
Reply docdwarf (6044) 5/3/2008 7:45:31 AM

In article <681mmmF2qlbguU1@mid.individual.net>,
Pete Dashwood <dashwood@removethis.enternet.co.nz> wrote:

[snip]

>Thanks for that Richard. Your post prompted me to survey some other friends. 

An actual survey?  How novel!  Perhaps The Reverend Dodgson might have put 
it as 'First conclusions, *then* data!'

>I talked to 6. 2 of them are under thirty and would not even consider using 
>cheques, seeing them as "quaint" :-) The three over 50 all have cheque books 
>(I don't myself), but only one of them uses it regularly.
>
>The remaining one reckoned she never pays bills anyway and if she needs 
>anything she steals it from the Warehouse... :-) (She had had a few drinks 
>and I saw no point in pursuing a serious discussion :-))
>
>I think my perception may have been wrong and more people actually use 
>cheques and cash than I thought.

Well, since you're asking... I still write checks/cheques; every Tuesday I 
sit down with the previous weeks' bills (and the steady payment-books for 
things like housing) and fill out Pay To The Order Of lines.  (I gave up 
using a goose-quill but I still, at times, will use a fountain/cartridge 
pen.)

My reasons for this are sentimental and aphoristic.  The sentiment comes 
from my own Oldene Dayse, when it was not as easy to get from one payday 
to the next... I'd sit down and decide what *had* to be paid, one month 
I'd pay the electric and skip the telephone, the next I'd pay the 
telephone and skip the electric... barely keeping up with the game, as it 
were.

Now I remember those days and with every signature there comes to mind a 
firmly-spoken 'You ain't shutting me off *this* month!'... along with 
'Living well is the best revenge'.

DD

0
Reply docdwarf (6044) 5/3/2008 9:29:01 AM

"Graham Hobbs" <ghobbs@cdpwise.net> wrote in message 
news:tbvm141kff7pivak67ptuf728ohtbqfmc8@4ax.com...
> On Fri, 2 May 2008 12:42:46 -0700 (PDT), Alistair
> <alistair@ld50macca.demon.co.uk> wrote:
<snip>
>>To save opening up another thread, VSAM can be emulated on the pc
>>(albeit with z/390 Assembler by Don Higgins).
> Alistair,
> I started with IBM Cobol and CICS in the early nineties and the
> whatevers for DB2 and VSAM, and it's starting to look as though that's
> where I'll have to stay. Not sure Fujies even had a product.
> Anyway, my project is entirely DB2 and VSAM KSDS so noted your
> comments posted elsewhere about ESDS and ISAM for knowledge's sake.
> Thanks.
> Graham

Graham,
  Be VERY careful about developing IBM mainframe tools with OLD versions of VA 
COBOL.  Remember that BOTH COBOL and CICS have had many releases/versions since 
that product was introduced.  In MOST cases, this will just mean that "new" 
features of the mainframe product won't be available to you for developing with 
VA COBOL.  However, in some cases, VA COBOL will be incompatible with current 
CICS and COBOL (for example, new reserved words with recent releases of 
Enterprise COBOL on the mainframe).
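
To illustrate the reserved-word trap Bill describes (the particular
word below is only an example of the category - check the reserved-word
appendix of the actual target compiler rather than this sketch): a data
name that was perfectly legal under an older compiler can stop a
program compiling on a newer one that has since claimed the word for
the language.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. RSVWORD.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      *01  ALLOCATE             PIC X(8) VALUE "PENDING".
      *    A declaration like the commented-out line above compiles on
      *    a compiler that does not know the word, but draws a
      *    reserved-word diagnostic on one that does (ALLOCATE is used
      *    here purely as an example).  The usual fix is a rename:
       01  WS-ALLOC-STATE       PIC X(8) VALUE "PENDING".
       PROCEDURE DIVISION.
           DISPLAY WS-ALLOC-STATE
           STOP RUN.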

As I stated before, I would really look at the "partnerworld" program as a 
better solution.


-- 
Bill Klein
 wmklein <at> ix.netcom.com
> 


0
Reply wmklein (2605) 5/3/2008 6:06:21 PM

Howard Brazee wrote:
> On Fri, 02 May 2008 19:04:30 GMT, "James J. Gavan"
> <jgavandeletethis@shaw.ca> wrote:
> 
> 
>>>What does the Egyptian five look like?
>>
>>I have no idea :-). It was '54 when I left Egypt so I think I can plead 
>>the onset of Alzheimer's. Our only view of 'Egypt' was in our RAF bus 
>>taking us swimming in the afternoon, and en route we passed through the 
>>outskirts of Ismalia to get to our Lagoon, an offshoot of the Suez 
>>Canal. Saw the odd civilian vehicle with Arabic number plates, but the 
>>numbers were unintelligible. The best I can remember was that zero was 
>>written at a slant as a sort of four-sided character.
> 
> 
> I like the story that our number system is based upon how many angles
> each digit has. That requires a closed 4 and a 7 with a bar that
> drops, plus only zero has a rounded shape.
> 
> One would think that the Arabic speaking countries would use Arabic
> numerals.

There you go, Howard - Google turns up all sorts of trivia such as 
"Egyptian License Plates". These two have samples :-

http://www.worldlicenseplates.com/world/AF_EGYP.html
http://www.pl8s.com/e/egyp.htm

The pictures don't ring any bells for me, but as for my reference to the 
'zero' - it looks like it was/is written at a slant, as a rectangle (the 
equivalent of an Egyptian period/full stop, perhaps?), but with no 
doughnut hole in the middle.

Jimmy, Calgary AB
0
Reply jgavandeletethis (1047) 5/4/2008 6:05:32 PM

On Fri, 2 May 2008 12:55:59 -0700 (PDT), Richard <riplin@azonic.co.nz>
wrote:

>I use cash and cheques. I have never once used EFTPOS and have used my
>credit card locally perhaps 4 times in the last 10 years. I do use it
>for overseas stuff. The wife does use credit cards but it is always
>paid off every month. When clients ask for my bank details to do
>direct credit I 'lose' the paperwork and then eventually send a
>cheque.

For those with discipline, credit cards are the cheapest way to go.
Cash in one's pocket earns nothing, and if you lose it or have it
stolen from you, isn't automatically insured.

>One complains that they have to keep a cheque printer around just for
>me, having now given up on trying to do it another way.
>
>I do use ATMs, but I distrust them.

Why do you distrust ATMs?   Have you come across anybody who has had
his trust in them betrayed?
0
Reply howard (6275) 5/5/2008 1:49:42 PM

Howard Brazee <howard@brazee.net> wrote in message
news:404u14pht2csng06qoraqhbqd9n2d2qv36@4ax.com...
> On Fri, 2 May 2008 12:55:59 -0700 (PDT), Richard <riplin@azonic.co.nz>
> wrote:
>
> >I use cash and cheques. I have never once used EFTPOS and have used my
> >credit card locally perhaps 4 times in the last 10 years. I do use it
> >for overseas stuff. The wife does use credit cards but it is always
> >paid off every month. When clients ask for my bank details to do
> >direct credit I 'lose' the paperwork and then eventually send a
> >cheque.
>
> For those with discipline, credit cards are the cheapest way to go.
> Cash in one's pocket earns nothing, and if you lose it or have it
> stolen from you, isn't automatically insured.
>


I'll tell you why I stick to paper wherever I can - which mostly means
cheques - the banks take enormous profits from "service charges" for any
conceivable transaction, and pay truly derisory interest; since I must deal
with banks I'm going to make them work for the service charge they get from
me.

My credit cards can be hacked without my knowledge; whereas money can't be
stolen from me without my knowing.  As (again) I must use credit cards
willy-nilly, mine has a limit of $1500: at least that makes it more difficult
for someone to bankrupt me!

It's true that if you pay off your credit card each month it may be the
cheapest way to go (annual charge notwithstanding?) but to state that
it's a matter of discipline is to grossly oversimplify the matter.

PL


0
Reply lacey1 (490) 5/5/2008 5:04:50 PM

In article <Z8HTj.112091$Ft5.54731@newsfe15.lga>, tlmfru <lacey@mts.net> wrote:

[snip]

>My credit cards can be hacked without my knowledge; whereas money can't be
>stolen from me without my knowing.

If a credit card is used without your consent you are (in the USA) liable 
for a maximum of US$50; I believe that Mr Trembley can give the Insider's 
View on this.

Cash is cash... so if you carry more than US$50 you stand to lose more 
were you to lose the cash (by overt theft ('This is a gun, give me your 
money') or covert theft (pick-pocket) or accident ('I must have left my 
wallet on the table.')).

DD

0
Reply docdwarf (6044) 5/5/2008 6:36:06 PM

docdwarf@panix.com wrote:
> In article <Z8HTj.112091$Ft5.54731@newsfe15.lga>, tlmfru <lacey@mts.net> wrote:
> 
> [snip]
> 
>> My credit cards can be hacked without my knowledge; whereas money can't be
>> stolen from me without my knowing.
> 
> If a credit card is used without your consent you are (in the USA) liable 
> for a maximum of US$50; I believe that Mr Trembley can give the Insider's 
> View on this.
> 
> Cash is cash... so if you carry more than US$50 you stand to lose more 
> were you to lose the cash (by overt theft ('This is a gun, give me your 
> money') or covert theft (pick-pocket) or accident ('I must have left my 
> wallet on the table.').
> 
> DD
> 

Well, I don't claim to be an expert just because I work for a credit 
card company.  I've been fortunate enough never to have had one of my 
credit cards lost or stolen in the last 35 years.  But my 
understanding is that as long as you promptly report your major credit 
card as missing, for whatever reason, you cannot be liable for more 
than $50, and most banks will not require you to pay anything.

I may be old-school, but I prefer not to use a debit card, at least 
not one that can reach into my checking account.  I have used ATM 
cards and pre-paid debit cards (sometimes called stored-value cards).

-- 
http://arnold.trembley.home.att.net/
0
Reply arnold.trembley (268) 5/6/2008 5:21:36 AM