Michael & Alan,
Thanks very much for your replies on this subject.
It turned out that arming the programmes in question was a trivial task,
with no measurable overhead. We are capturing the SAS start/end times, plus
the time it takes to run an update process over various areas of our data.
It was easier to create a separate file for each programme (each one an
update process); these have already started to provide a wealth of
information, identifying one area with performance issues which has since
been solved.
The other area where I was keen to use ARM was code developed and
maintained by users, which we run in production with our pre/post
processing. I was keen to identify which production tables and libnames
this user code was accessing - again, very easy to determine. So we now
have a reference point to refer back to when a user next provides an
update, so we can be absolutely certain of the dependencies and scheduling
flow.
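For anyone wanting to try the same thing, the arming really is that
light-weight. A minimal sketch using the SAS ARM macro interface (macro
names as per the SAS documentation; the application name, transaction name,
and the update step itself are just placeholder examples - check the
parameter lists against your own release before relying on this):

```sas
/* The _armexec macro variable gates the whole ARM macro        */
/* interface; nothing is recorded until it is set.              */
%let _armexec = 1;

/* Register the application, then define a named transaction.   */
/* APPNAME= and TXNNAME= values here are examples only.         */
%arminit(appname='Nightly update', appuser='prodbatch');
%armgtid(txnname='Customer update',
         txndet='Update process over the customer area');

/* Bracket the unit of work whose elapsed time we want to log.  */
%armstrt;

proc sql;                      /* placeholder for the real      */
   /* ... update process ... */ /* update step being measured   */
quit;

%armstop;

/* Close out the application when the job ends. */
%armend;
```

The ARM log location can be pointed at a per-programme file with the
ARMLOC= system option, which is how we ended up with one file per update
process.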
From: SAS(r) Discussion [mailto:SAS-L@LISTSERV.UGA.EDU] On Behalf Of Michael
Sent: Tuesday, 31 October 2006 4:59 a.m.
Subject: Re: Using ARM Macros in Unix
Robin Templer posted the following:
> Does anyone have any experience in using the Application
> Response Measurement (ARM) macros? In particular under HP-UX,
> but I am interested in any experiences that anyone may have.
> I am considering using this facility to track the usage of
> datasets and, more importantly, which indexes are being used,
> and also the timings of some important core programmes. I
> know the ARM facility can measure this, but my queries relate
> to where I can record this information.
> The documentation and examples show the log being written to
> a flat file, but I have hundreds of users and approx 2000 SAS
> invocations per day that I need to measure - and I am concerned
> about all of these attempting to concurrently write to the same
> file. I really do not want to create a separate file for each
> SAS invocation.
> As we have HP's MeasureWare in use and analyse it with SAS
> ITRM, can we send the ARM transactions to MeasureWare and let
> SAS ITRM analyse it?
Robin, argh! Instrumenting your SAS application with ARM and then
post-processing the logs to dig out the information is a lot of work.
I've done it in the distant past and found it to be very intrusive to
the SAS programs. Unfortunately, I don't have anything else to add
concerning ARM. I'll wait and see what other SAS-L-ers have to say.
I would, however, suggest that you also consider the merits of using a
more SAS-centric solution, which would be to use the LOGPARSE SAS macro
to measure SAS data step and PROC step usage and the RTRACE facility to
determine which data sets are being used. As it happens, I have written
papers on both of those subjects:
Programmatically Measure SAS Application Performance on Any Computer
Platform With the New LOGPARSE SAS Macro
Measuring SAS Software Usage on Shared Servers with the RTRACE Facility
Both the LOGPARSE SAS Macro and the RTRACE facility can be utilized with
minimal intrusion into the SAS application programs that you want to
measure.
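To give a flavour of the RTRACE side of that: it is switched on with SAS
invocation options rather than by touching the programs, and the resulting
log can be post-processed with a short DATA step. A rough sketch - the log
path is an example, and the assumption that the log lists one opened file
per line should be checked against your own site's output:

```sas
/* Invocation side (Unix shell), no program changes needed:     */
/*   sas -rtrace all -rtraceloc /logs/rtrace/job1.log prog.sas  */

/* Post-processing sketch: read the RTRACE log and keep the     */
/* lines that refer to SAS data sets. The path below is an      */
/* example, not a real location.                                */
data datasets_used;
   infile '/logs/rtrace/job1.log' truncover;
   input path $char200.;
   if index(path, '.sas7bdat') then output;
run;
```

With one RTRACELOC= file per invocation you end up with exactly the kind
of per-job inventory of tables and libnames that Robin describes wanting
above.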
In addition to the aforementioned papers, Ron Fehd, SAS-L's own Macro
Maven, has written a paper that cleverly combines both of those concepts:
Modifying The LOGPARSE PassInfo Macro to Provide a Link between Product
Usage in Rtrace Log and Time Used in Job Log
In addition to Ron's paper being a good read and cleverly bridging the
gaps between the RTRACE facility and the LOGPARSE SAS macro, it has the
dubious distinction of having a title that is even longer and more wordy
than any of mine, to date. And, unlike the pulp fiction writers of
yesteryear, neither of us is paid by the word. Go figure?!?!?!?
The one thing that you would have to do to fully bend the use of
LOGPARSE to your own needs would be to modify it to pick up and store
index usage messages. That could be enough of a hassle to make this
route not worthwhile. But at least you know that this tool (LOGPARSE)
exists as a starting point.
Robin, best of luck in measuring what the heck your SAS applications on
Unix are doing!
I hope that this suggestion proves helpful now, and in the future!
Of course, all of these opinions and insights are my own, and do not
reflect those of my organization or my associates. All SAS code and/or
methodologies specified in this posting are for illustrative purposes
only and no warranty is stated or implied as to their accuracy or
applicability. People deciding to use information in this posting do so
at their own risk.
Michael A. Raithel
"The man who wrote the book on performance"
Author: Tuning SAS Applications in the MVS Environment
Author: Tuning SAS Applications in the OS/390 and z/OS Environments
Author: The Complete Guide to SAS Indexes
Don't be too timid and squeamish about your actions. All life is an
experiment. The more experiments you make the better. - Ralph Waldo
Emerson