We are currently running WebLogic 5.1 with JDK 1.3.1_01 on Solaris 7.
I am trying to investigate a Java memory leak, which is reproducible
after the WebLogic server has been running for a long time.
I tried using JProbe as a profiler, but then the server slows down about
10 times and I cannot even think of reproducing the problem. Now I am
looking for a test case and script that can reproduce the leak faster.
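For reproducing leak-like growth faster, a minimal sketch of the classic retention pattern may help as a test harness. This is a generic example (class name `LeakDemo` and the 1 KB allocation size are my own choices, not from the original problem), written with raw collection types so it would also compile on a pre-generics JDK such as 1.3:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a memory leak: a long-lived static collection
// keeps accumulating objects, so the garbage collector can never
// reclaim them. This reproduces leak symptoms quickly; it is NOT
// the actual WebLogic leak.
public class LeakDemo {
    // The static reference keeps every added object reachable forever.
    static final List retained = new ArrayList();

    static void leakOnce() {
        retained.add(new byte[1024]); // retain 1 KB per call
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            leakOnce();
        }
        // About 1 MB is now unreclaimable while LeakDemo stays loaded.
        System.out.println("retained objects: " + retained.size());
    }
}
```

Running something like this under a profiler or hprof should show the retained class dominating the per-class heap breakdown, which is the signature to look for in the real server.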
I also have questions about the JDK's built-in profiling facilities.
In the official Sun documentation for the Java 2 SDK (JDK 1.2.2), they
describe some of the new features and tools in Java 2. I read the
following (note that I found this description only in the docs for
Solaris):
"Increased performance is largely due to the scalable architecture of
the JVM which has improved in bytecode execution, memory management
and thread synchronization performance as a result of the following
...A heap inspection tool
This diagnostic tool for interactively killed programs is accessible
from the SIGQUIT handler menu. It can be used to find memory leaks in
your programs. A memory leak occurs when a program inadvertently
retains objects, preventing the garbage collector from reclaiming the
memory. Heap inspection presents a per-class breakdown of the objects
in the heap, sorted by total amount of memory consumed. You can then
examine reference chains to selected objects to see what is keeping ..."
Are they talking here about the `java -Xrunhprof` (hprof) options that
you can use? But then why do I find this description only in the
Solaris section?
If yes, do you know whether it can help me? What I need is information
about the objects existing in the JVM (after WebLogic has been running
for several days). I read that hprof, when generating output in text
mode, will produce a file that becomes extremely large after the JVM
has been running for two days. Is this so?
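For reference, this is roughly how hprof could be enabled, assuming the JDK 1.2/1.3 `-Xrunhprof` syntax (check `java -Xrunhprof:help` on your JVM, since suboption names vary between releases; the output filename and `<weblogic-pid>` are placeholders):

```shell
# Start the JVM with the hprof profiler in heap mode.
# heap=sites -> per-allocation-site summary (far smaller output
#               than a full heap=dump, which helps with file size)
# format=a   -> ASCII text output
# depth=10   -> stack-trace depth recorded per allocation site
# file=...   -> where the report is written (on exit or SIGQUIT)
java -Xrunhprof:heap=sites,format=a,depth=10,file=weblogic.hprof.txt \
     weblogic.Server

# While the server is running, SIGQUIT (or Ctrl-\ on the console)
# triggers the JVM's diagnostic handler, which is where the Solaris
# docs' heap-inspection menu is reached:
kill -QUIT <weblogic-pid>
```

Using `heap=sites` instead of a full dump is one way to keep the text output manageable over a multi-day run, though the per-site tables still grow with the number of distinct allocation sites.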
Can you give me some ideas on how to proceed?
Thanks a lot
10/14/2003 11:43:18 PM