Flashback log file sync

In the thread about LOB storage, Jonathan objected that I haven't really 
shown any numbers, which is true. I decided to fix my laziness and wrote 
a Perl script to load 614 files with approximately 2.3 GB of data into a 
simple table having 3 columns: fileno, filename and content of type BLOB. 
While loading, I was also monitoring the script execution and wait 
events. The outcome is more than interesting:


  1  select event,time_waited,average_wait
  2  from v$session_event
  3  where sid=204
  4* order by time_waited
SQL> /

EVENT                          TIME_WAITED AVERAGE_WAIT
------------------------------ ----------- ------------
Disk file operations I/O                 0          .02
events in waitclass Other                0          .12
log file sync                            1          1.4
SQL*Net message to client                1            0
log file switch completion              46         9.25
SQL*Net more data from client          988            0
SQL*Net message from client           1098         2.26
db file sequential read               5617         1.49
direct path write                    18902          .67
flashback log file sync              29509         4.53
direct path read                     35703         1.29

11 rows selected.


The event which caused the most time to be spent waiting was, as is to be 
expected, "direct path read". After all, the script was loading BLOB 
objects into the database. The second most time-consuming event was, 
quite unexpectedly, "flashback log file sync". Much, much more time was 
spent on writing flashback logs than on writing the redo logs. As opposed 
to redo, flashback is a horrific monster. This data is for a user 
session, which means that "flashback log file sync" happens 
synchronously. In other words, there is a very hefty performance price to 
be paid for flashback, much higher than the price paid for redo logs and 
recoverability. The mechanisms and parameters that control flashback are 
not exposed, which means that there is no way to tune the mechanism; the 
only option is to turn it on or off. Apparently, flashback will exact a 
heavy performance penalty on DML-intensive applications.
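
For anyone who wants to check the same thing on their own database, here 
is a quick sketch (not part of the original test run) that confirms 
flashback is on and compares the flashback and redo volumes that Oracle 
itself reports:

select flashback_on from v$database;

select begin_time, end_time,
       round(flashback_data/1048576) as flashback_mb,
       round(redo_data/1048576)      as redo_mb,
       round(db_data/1048576)        as db_mb
from   v$flashback_database_stat
order  by begin_time;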

The loading script worked well:

SQL> select bytes/1048576 from dba_segments
  2  where owner='SCOTT' and 
  3        segment_type='LOBSEGMENT' and
  4        segment_name='SYS_LOB0000087552C00003$$';

BYTES/1048576
-------------
	 2304

Elapsed: 00:00:00.05
SQL> select count(*) from scott.bin_files;

  COUNT(*)
----------
       614

Elapsed: 00:00:00.00
SQL> 


This is the original:
[mgogala@medo tmp]$ du -sh ~/mp3/misc
2.3G	/home/mgogala/mp3/misc
[mgogala@medo tmp]$ ls /home/mgogala/mp3/misc|wc -l
614

This is the "business end" of the script:

my $dbh   = db_connect("scott", "tiger");
my @files = glob("/home/mgogala/mp3/misc/*");
my $cnt   = 0;
my $ins   = "insert into bin_files values(?,?,?)";
my $sth=$dbh->prepare($ins);

foreach my $f (@files) {
    my $cont=slurp($f);
    $sth->bind_param(1,$cnt++);
    $sth->bind_param(2,$f);
    $sth->bind_param(3,$cont,{ora_type=>SQLT_BIN});
    $sth->execute();
}
$dbh->commit();

This is the target table:


SQL> desc scott.bin_files
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 FILENO                                             NUMBER(4)
 FILENAME                                           VARCHAR2(256)
 CONTENT                                            BLOB
SQL> 

Version is 11.2.0.3, 64 bit on Linux, storage in row is disabled.

I will do some tests with deleting rows and re-loading the files and see 
whether the LOB segment will grow, but that will be another thread. 
This is only to show what a hefty performance penalty is exacted by 
having flashback turned on.


-- 
http://mgogala.byethost5.com
gogala.mladen, 5/12/2012 5:37:02 AM


"Mladen Gogala" <gogala.mladen@gmail.com> wrote in message 
news:pan.2012.05.12.05.37.02@gmail.com...
| In the thread about LOB storage, Jonathan objected that I haven't really
|
| The event which caused the most time to be spent waiting was, as is to be
| expected, "direct path read".

Why do you expect to read when all you're doing is writing? (I think I 
know the answer to that question.)

It would be interesting to see the session stats to find out what the reads 
were, and how big the writes were.

I think the reads may have been "for flashback new" - to modify a block, 
Oracle may have to read it and put it in the flashback log for writing 
before modifying it. In the case of very high volume the I/O for a LOB 
insert turns from "write" to "read / write (flashback) / write (lob)". (You 
may find that if you have a new tablespace the flashback activity is hugely 
reduced.)

What's the full DDL for the table (use dbms_metadata.get_ddl)? Are the 
LOBs nocache nologging, for example, and can this affect the flashback 
rate?
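
A quick sketch of the sort of query that would show it (assuming the 
loading session is still sid 204, as in the query at the top of the 
thread; filtering with LIKE avoids guessing exact statistic names):

select sn.name, st.value
from   v$sesstat st, v$statname sn
where  st.statistic# = sn.statistic#
and    st.sid = 204
and    (sn.name like '%flashback%' or sn.name like 'physical read%')
and    st.value > 0
order  by st.value desc;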

-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Oracle Core (Apress 2011)
http://www.apress.com/9781430239543



jonathan5683, 5/12/2012 6:26:46 AM
Mladen Gogala wrote, on my timestamp of 12/05/2012 3:37 PM:


> This is only to show what a hefty performance penalty is exacted by
> having flashback turned on.

A few weeks ago an Oracle "expert" insinuated I was a "bad dba" for refusing to 
use flashback in our production DW while testing new BI code releases.

I gave up on it a long time ago after watching the performance hit on some 
typical loads.  Instead, I provide an exact copy of production - down to the 
same SAN LUN types and numbers - to break as they please. Fun and games to keep 
the optimizer behaving the same in both instances, but I can live with that.

Flashback has to read and then write the data somewhere, somehow.  And when it 
is written in the flashback area, it is potentially from multiple tablespaces 
into one set of devices. So much for I/O spreading.  Not easily tunable - I 
don't think using "hidden" parameters is a reliable long term tuning idea - and 
I/O to flashback includes data and indexes.

Of course: there are special cases where it is useful and doesn't terribly harm 
performance. On standby dbs for testing DR, for example. But on a DW? And with 
large loads and data cleansing - typical ETL cycle - happening all day?

No, flashback can't possibly, ever, cause a performance hit under typical DW 
process loads!...  Ah well, how "bad dba" of me to actually care about 
performance instead of features. What can I say.
wizofoz2k, 5/12/2012 1:41:21 PM
On Sat, 12 May 2012 07:26:46 +0100, Jonathan Lewis wrote:

> What's the full DDL for the table (use dbms_metadata.get_ddl) - are the
> LOBs nocache nologging, for example, and can this affect the flashback
> rate.

Jonathan, no need for DBMS_METADATA. I have the DDL:

create table bin_files(
fileno number(4,0),
filename varchar2(256),
content  BLOB)
LOB(content) store as (tablespace test_lob disable storage in row nocache)
tablespace test_tbs
/

It was late when I started working on the test, and I forgot the NOLOGGING 
clause. I re-created the table; the DDL now looks like this:

CREATE TABLE "SCOTT"."BIN_FILES" 
   (	"FILENO" NUMBER(4,0), 
	"FILENAME" VARCHAR2(256), 
	"CONTENT" BLOB
   ) SEGMENT CREATION IMMEDIATE 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TEST_TBS" 
 LOB ("CONTENT") STORE AS BASICFILE (
  TABLESPACE "TEST_LOB" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION 
  NOCACHE NOLOGGING 
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ;

The tablespace is a standard ASSM tablespace.
-- 
http://mgogala.byethost5.com
gogala.mladen, 5/12/2012 6:08:32 PM
On Sat, 12 May 2012 07:26:46 +0100, Jonathan Lewis wrote:

> Why do you expect to read when all you're doing is writing - (I think I
> know the answer to that question):

When writing LOB data, Oracle first has to prepare the space: it has to 
read the "chunk" into the PGA. I proved this by increasing the chunk size 
to 32K. I could probably eliminate the direct reads completely by making 
the LOB cached (the blocks would then be read into the SGA). Of course, I 
would then get db file scattered reads instead.
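
The chunk and cache settings are easy to verify from the dictionary 
(just a sketch):

select column_name, chunk, cache, logging, in_row
from   dba_lobs
where  owner = 'SCOTT'
and    table_name = 'BIN_FILES';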



-- 
http://mgogala.byethost5.com
gogala.mladen, 5/12/2012 6:47:31 PM

"Mladen Gogala" <gogala.mladen@gmail.com> wrote in message 
news:pan.2012.05.12.18.08.31@gmail.com...
| On Sat, 12 May 2012 07:26:46 +0100, Jonathan Lewis wrote:
|
| > What's the full DDL for the table (use dbms_metadata.get_ddl) - are the
| > LOBs nocache nologging, for example, and can this affect the flashback
| > rate.
|
| Jonathan, no need for DBMS_METADATA. I have the DDL:
|

When puzzled I always look at the output from dbms_metadata in case 
something I had forgotten about has sneaked into the definition.

 LOB ("CONTENT") STORE AS BASICFILE (
|  TABLESPACE "TEST_LOB" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION
|  NOCACHE NOLOGGING
|  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
|  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
|  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ;
|

As a test, create the LOB as CACHE.
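
Something along these lines should do it without reloading (a sketch; as 
far as I recall CHUNK cannot be changed this way, so a rebuild is still 
needed for a different chunk size):

alter table scott.bin_files modify lob (content) (cache);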

Did you check the session stats to see what your reads were ?

-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Oracle Core (Apress 2011)
http://www.apress.com/9781430239543


jonathan5683, 5/12/2012 6:49:36 PM
On Sat, 12 May 2012 19:49:36 +0100, Jonathan Lewis wrote:


> Did you check the session stats to see what your reads were ?

I turned on 10046 tracing. Reads were from the TEST_LOB tablespace, the 
LOB segment:



=126792 block cnt=4 obj#=-1 tim=1336849142292618
WAIT #140145345192712: nam='direct path read' ela= 280 file number=7 
first dba=127180 block cnt=4 obj#=-1 tim=1336849142302190
WAIT #140145345192712: nam='direct path read' ela= 359 file number=7 
first dba=126352 block cnt=4 obj#=-1 tim=1336849142303526
WAIT #140145345192712: nam='direct path read' ela= 136 file number=7 
first dba=126416 block cnt=4 obj#=-1 tim=1336849142341101
WAIT #140145345192712: nam='direct path read' ela= 38497 file number=7 
first dba=126480 block cnt=4 obj#=-1 tim=1336849142379976
WAIT #140145345192712: nam='direct path read' ela= 122 file number=7 
first dba=126612 block cnt=4 obj#=0 tim=1336849142445429
WAIT #140145345192712: nam='direct path read' ela= 337 file number=7 
first dba=126488 block cnt=4 obj#=0 tim=1336849142450894
WAIT #140145345192712: nam='direct path read' ela= 4656 file number=7 
first dba=126552 block cnt=4 obj#=0 tim=1336849142478622
WAIT #140145345192712: nam='direct path read' ela= 2062 file number=7 
first dba=126616 block cnt=4 obj#=0 tim=1336849142529709
WAIT #140145345192712: nam='direct path read' ela= 2366 file number=7 
first dba=126680 block cnt=4 obj#=0 tim=1336849142532207
WAIT #140145345192712: nam='direct path read' ela= 239 file number=7 
first dba=126496 block cnt=4 obj#=-1 tim=1336849142636995
WAIT #140145345192712: nam='direct path read' ela= 260 file number=7 
first dba=126624 block cnt=4 obj#=-1 tim=1336849142637611

I increased the chunk size to 32K and got reads of 4 blocks (32K with an 
8K block size), which means that Oracle is reading a chunk at a time into 
the PGA.

-- 
http://mgogala.byethost5.com
gogala.mladen, 5/12/2012 7:03:30 PM
On Sat, 12 May 2012 19:49:36 +0100, Jonathan Lewis wrote:

> "Mladen Gogala" <gogala.mladen@gmail.com> wrote in message
> news:pan.2012.05.12.18.08.31@gmail.com...
> | On Sat, 12 May 2012 07:26:46 +0100, Jonathan Lewis wrote:
> |
> | > What's the full DDL for the table (use dbms_metadata.get_ddl) - are
> the | > LOBs nocache nologging, for example, and can this affect the
> flashback | > rate.
> |
> | Jonathan, no need for DBMS_METADATA. I have the DDL:
> |
> 
> When puzzled I always look at the output from dbms_metadata in case
> something I had forgotten about has sneaked into the definition.
> 
>  LOB ("CONTENT") STORE AS BASICFILE (
> |  TABLESPACE "TEST_LOB" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION | 
> NOCACHE NOLOGGING |  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1
> MAXEXTENTS 2147483645 |  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 | 
> BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ;
> |
> 
> As a test, create the LOB as CACHE.
> 
> Did you check the session stats to see what your reads were ?

The plot thickens. I re-created the LOB as cache and flashback waits went 
away:

EVENT			       TIME_WAITED AVERAGE_WAIT
------------------------------ ----------- ------------
Disk file operations I/O		 0	    .02
log buffer space			 0	      0
log file sync				 1	    .77
SQL*Net message to client		 1	      0
log file switch completion		23	   5.77
SQL*Net message from client	       523	   1.02
SQL*Net more data from client	      1323	    .01
db file sequential read 	     33915	    .14


The DDL is now this:


CREATE TABLE "SCOTT"."BIN_FILES" 
   (	"FILENO" NUMBER(4,0), 
	"FILENAME" VARCHAR2(256), 
	"CONTENT" BLOB
   ) SEGMENT CREATION IMMEDIATE 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TEST_TBS" 
 LOB ("CONTENT") STORE AS BASICFILE "MUSIC"(
  TABLESPACE "TEST_LOB" DISABLE STORAGE IN ROW CHUNK 32768 RETENTION 
  CACHE 
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ;

It seems that flashback operations are synchronous only for "direct" 
operations, which bypass the SGA. When I come to think of it, that is 
logical behavior of a sort. Also, Oracle prevents me from specifying 
"NOLOGGING" together with CACHE, which is somewhat strange. The script now 
completes in less than 9 minutes, which is excellent, after having 
started at 20 minutes.

[mgogala@medo tmp]$ time ./test_lob

real	8m59.722s
user	0m6.372s
sys	0m7.414s

Here is the entire script, if someone else wants to get involved. One 
only needs to modify the directory and the maximum LOB size in the 
connection (LongReadLen) if the files to be loaded can exceed 10M in 
size. 

#!/usr/bin/env perl
use warnings;
use strict;
use DBI;
use DBD::Oracle qw(:ora_types);

my $dbh = db_connect("scott", "tiger");
$dbh->do("alter session set events='10046 trace name context forever, 
level 12'");
my @files = glob("/home/mgogala/mp3/misc/*");
my $cnt   = 0;
my $ins   = "insert into bin_files values(?,?,?)";
my $sth=$dbh->prepare($ins);

foreach my $f (@files) {
    my $cont=slurp($f);
    $sth->bind_param(1,$cnt++);
    $sth->bind_param(2,$f);
    $sth->bind_param(3,$cont,{ora_type=>SQLT_BIN});
    $sth->execute();
}

$dbh->commit();

END {
    $dbh->disconnect() if defined($dbh);
}


sub db_connect {
    my ( $username, $passwd, $db ) = ( @_, $ENV{"TWO_TASK"} );
    if ( $username =~ m|/| ) {
        my @tmp = split( /[\/\@]+/, $username );
        $username = $tmp[0];
        $passwd   = $tmp[1];
        $db       = $tmp[2] || $ENV{"TWO_TASK"};
    }

    $db ||= "";
    my $dbh = DBI->connect( "dbi:Oracle:$db", $username, $passwd )
      || die( $DBI::errstr . " \n " );
    $dbh->{AutoCommit}           = 0;
    $dbh->{RaiseError}           = 1;
    $dbh->{ora_check_sql}        = 0;
    $dbh->{RowCacheSize}         = 1024;
    $dbh->{ora_array_chunk_size} = 1024;
    $dbh->{LongReadLen}          = 10000000;
    return ($dbh);
}

sub slurp {
    my $name=shift;
    local $/=undef;
    my $fd;
    open($fd, "<", $name) or die("cannot open $name for reading: $!\n");
    my $cont=<$fd>;
    close($fd);
    return($cont);
}


-- 
http://mgogala.byethost5.com
gogala.mladen, 5/12/2012 7:46:21 PM
On Sat, 12 May 2012 23:41:21 +1000, Noons wrote:


> No, flashback can't possibly, ever, cause a performance hit under
> typical DW process loads!...  Ah well, how "bad dba" of me to actually
> care about performance instead of features. What can I say.

Flashback records almost the same information as redo. The main 
difference between the two is that redo logs are not supposed to be read, 
except in the case of instance recovery. Redo logs are the main mechanism 
that provides the "D" in ACID.
Flashback logs, just like their redo counterparts, record change vectors, 
in order to be able to reconstruct the data up to a specific point in 
time. The default is 24 hours, controlled by the 
"db_flashback_retention_target" parameter.
If the block being changed is in the SGA, that is being done by a 
background process named "FBDA". If the block is not in the SGA, as is 
the case when a LOB column is created with the "NOCACHE" option, it must 
be done synchronously, by invoking a library function, because background 
processes cannot access the PGA of another session. In other words, one 
must wait for the flashback information to be written. When doing a direct 
load with the NOLOGGING clause in effect, one avoids waiting for the redo 
to be written to disk, but one does not avoid flashback.
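
The retention target itself is easy to check (a sketch; the default of 
1440 minutes corresponds to the 24 hours mentioned above):

select name, value
from   v$parameter
where  name = 'db_flashback_retention_target';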



-- 
http://mgogala.byethost5.com
gogala.mladen, 5/12/2012 9:10:26 PM
On Sat, 12 May 2012 21:10:26 +0000, Mladen Gogala wrote:

>  being done by a background
> process named "FBDA".

Actually, it's RVWR. FBDA is the "archiver" process, although it's not 
quite clear what is being archived and where.
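
The two background processes can be told apart from the dictionary 
(a sketch):

select name, description
from   v$bgprocess
where  name in ('RVWR', 'FBDA');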



-- 
http://mgogala.byethost5.com
gogala.mladen, 5/12/2012 11:31:22 PM

-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Oracle Core (Apress 2011)
http://www.apress.com/9781430239543


"Mladen Gogala" <gogala.mladen@gmail.com> wrote in message 
news:pan.2012.05.12.19.46.20@gmail.com...


jonathan5683, 5/13/2012 6:41:41 AM
"Mladen Gogala" <gogala.mladen@gmail.com> wrote in message 
news:pan.2012.05.12.19.03.30@gmail.com...
| On Sat, 12 May 2012 19:49:36 +0100, Jonathan Lewis wrote:
|
|
| > Did you check the session stats to see what your reads were ?
|
| I turned on 10046 tracing. Reads were from the TEST_LOB tablespace, the
| LOB segmentL
|
|

The 10046 trace tells us what they were, not why they happened.
It's important to see (for example) things like "undo blocks read for 
flashback new".

|
| =126792 block cnt=4 obj#=-1 tim=1336849142292618
| WAIT #140145345192712: nam='direct path read' ela= 280 file number=7
| first dba=127180 block cnt=4 obj#=-1 tim=1336849142302190
| WAIT #140145345192712: nam='direct path read' ela= 359 file number=7
| first dba=126352 block cnt=4 obj#=-1 tim=1336849142303526
| WAIT #140145345192712: nam='direct path read' ela= 136 file number=7
| first dba=126416 block cnt=4 obj#=-1 tim=1336849142341101
| WAIT #140145345192712: nam='direct path read' ela= 38497 file number=7
| first dba=126480 block cnt=4 obj#=-1 tim=1336849142379976
| WAIT #140145345192712: nam='direct path read' ela= 122 file number=7
| first dba=126612 block cnt=4 obj#=0 tim=1336849142445429

Interesting that your reads are often about 64 blocks apart, even though 
waiting for 4 blocks.

-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Oracle Core (Apress 2011)
http://www.apress.com/9781430239543




jonathan5683, 5/13/2012 6:45:22 AM
"Mladen Gogala" <gogala.mladen@gmail.com> wrote in message 
news:pan.2012.05.12.19.46.20@gmail.com...
| On Sat, 12 May 2012 19:49:36 +0100, Jonathan Lewis wrote:
|
|
| The plot thickens. I re-created the LOB as cache and flashback waits went
| away:
|
| EVENT        TIME_WAITED AVERAGE_WAIT
| ------------------------------ ----------- ------------
| Disk file operations I/O 0     .02
| log buffer space 0       0
| log file sync 1     .77
| SQL*Net message to client 1       0
| log file switch completion 23    5.77
| SQL*Net message from client        523    1.02
| SQL*Net more data from client       1323     .01
| db file sequential read      33915     .14
|

I'm not entirely surprised. The algorithm for deciding when a block should 
be written to the flashback log relates to when it was last changed, when 
it last went into the flashback log, and when it was last written. If your 
LOBs are direct path reads and writes all the time, it's possible that the 
default action is to write the block into the flashback log, because the 
information that is needed to minimise flashback logging doesn't get 
created on direct path actions.

When you can get LOBs over-written, though (i.e. LOB space re-used rather 
than constantly "newing" previously unused blocks), I think you may start 
to see flashback log writes re-appearing.
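
One way to test that, as a sketch only (reusing the loader script and the 
example sid 204 from earlier in the thread):

-- free the LOB space so the next load can re-use it instead of "newing" blocks
delete from scott.bin_files;
commit;

-- re-run the loader ($ time ./test_lob), then re-check the loading session:
select event, time_waited, average_wait
from   v$session_event
where  sid = 204
and    event like 'flashback%';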

Sorry about the empty previous post - finger trouble.

-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Oracle Core (Apress 2011)
http://www.apress.com/9781430239543


jonathan5683, 5/13/2012 6:49:47 AM

"Mladen Gogala" <gogala.mladen@gmail.com> wrote in message 
news:pan.2012.05.12.21.10.26@gmail.com...
|
| Flashback logs, just like their redo counterparts, record change vectors,
| in order to be able to reconstruct the data, up to specific point in
| time. The default is 24 hours, controlled by the
| "db_flashback_retention_target" parameter.
|

Flashback logs hold entire Oracle blocks (think Oracle 5 and the BI file).

However, they don't (normally) hold a copy of every version of the block 
that has ever existed; you should get something like one copy per 
flashback log file to cover all the changes that have happened to a block 
since it was most recently loaded into the buffer cache (compare redo 
logging full blocks when tablespaces are in backup mode).


-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Oracle Core (Apress 2011)
http://www.apress.com/9781430239543



jonathan5683, 5/13/2012 6:54:15 AM
On Sun, 13 May 2012 07:54:15 +0100, Jonathan Lewis wrote:


> Flashback logs hold entire Oracle blocks (think Oracle 5 and the BI
> file).

Wow! Not that I don't believe you, but that is really wild. I'll try 
checking it by doing a hex dump of some .fb files to confirm this.



-- 
http://mgogala.byethost5.com
gogala.mladen, 5/13/2012 7:43:08 AM
"Mladen Gogala" <gogala.mladen@gmail.com> wrote in message 
news:pan.2012.05.13.07.43.08@gmail.com...
| On Sun, 13 May 2012 07:54:15 +0100, Jonathan Lewis wrote:
|
| > Flashback logs hold entire Oracle blocks (think Oracle 5 and the BI
| > file).
|
| Wow! Not that I don't believe you, but that is really wild. I'll try
| checking it by doing a hex dump of some .fb files and confirm this.
|

I wouldn't want you to do otherwise.

-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Oracle Core (Apress 2011)
http://www.apress.com/9781430239543


jonathan5683, 5/13/2012 8:56:11 AM
On Sun, 13 May 2012 09:56:11 +0100, Jonathan Lewis wrote:

> | Wow! Not that I don't believe you, but that is really wild. I'll try |
> checking it by doing a hex dump of some .fb files and confirm this. |
> 
> I wouldn't want you to do otherwise.

And you are right. Current record: block image. I still don't understand 
why they did it like that.

Trace file /oracle/diag/rdbms/o11/O11/trace/O11_ora_2567.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit 
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing 
options
ORACLE_HOME = /oracle/product/11.2.0/11.2.0.3
System name:    Linux
Node name:      medo.home.com
Release:        3.3.5-2.fc16.x86_64
Version:        #1 SMP Tue May 8 11:24:50 UTC 2012
Machine:        x86_64
Instance name: O11
Redo thread mounted by this instance: 1
Oracle process number: 28
Unix process pid: 2567, image: oracle@medo.home.com


*** 2012-05-13 23:33:46.282
*** SESSION ID:(11.17) 2012-05-13 23:33:46.282
*** CLIENT ID:() 2012-05-13 23:33:46.282
*** SERVICE NAME:(O11.home.com) 2012-05-13 23:33:46.282
*** MODULE NAME:(sqlplus@medo.home.com (TNS V1-V3)) 2012-05-13 
23:33:46.282
*** ACTION NAME:() 2012-05-13 23:33:46.282
 
DUMP OF FLASHBACK LOG FILE (15) /data/oracle/flashback/O11/flashback/
o1_mf_7txdxqkn_.flb
 FILE HEADER:
        Compatibility Vsn = 186646528=0xb200000
        Db ID=1818577220=0x6c654944, Db Name='O11'
        Activation ID=1818575684=0x6c654344
        Control Seq=3492=0xda4, File size=65536=0x10000
        File Number=15, Blksiz=8192, File Type=8 FLASH BACK
  FLASHBACK HEADER:
   Flashback Block Header: 
      Seq: 16 Block: 1 Cks: 0xc586 Flag: 0x1 Lst: 0
   description:"Thread 0001, Seq# 0000000016, SCN 0x0000001bbfaf"
   thread: 1  seq: 16  version 0  nab: 65537
   reset logs count: 0x2e883e0a scn: 0x0000.000f30dc
   formatted blocks: 65536  usable blocks: 65536 
   magic: 4  previous magic: 0  flags: 0x0 
   Low scn: 0x0000.001bbfaf 05/12/2012 15:05:06
   High scn: 0x0000.001bc39c 05/12/2012 15:13:17
   Last Marker: 
      fba: (lno 0 thr 0 seq 0 bno 0 bof 0)
Flashback async buffers are set to 32

*** 2012-05-13 23:33:52.815
 Current records:
 **** Record at fba: (lno 15 thr 1 seq 16 bno 65536 bof 4852) ****
    RECORD HEADER: 
      Type: 1 (Block Image)  Size: 28 
    RECORD DATA (Block Image):
      file#: 7 rdba: 0x01c08cf8
      Next scn: 0x0000.001bc39c [0.1819548] 
      Flag: 0x1
      Block Size: 8192
    BLOCK IMAGE:
      buffer rdba: 0x01c08cf8
      scn: 0x0000.001bbca2 seq: 0x02 flg: 0x04 tail: 0xbca22802
      frmt: 0x02 chkval: 0x285c type: 0x28=PAGETABLE MANAGED LOB BLOCK
Hex dump of block: st=0, typ_found=1




-- 
http://mgogala.byethost5.com
gogala.mladen, 5/14/2012 3:40:04 AM


"Mladen Gogala" <gogala.mladen@gmail.com> wrote in message 
news:pan.2012.05.14.03.40.04@gmail.com...
| On Sun, 13 May 2012 09:56:11 +0100, Jonathan Lewis wrote:
|
| > | Wow! Not that I don't believe you, but that is really wild. I'll try 
|
| > checking it by doing a hex dump of some .fb files and confirm this. |
| >
| > I wouldn't want you to do otherwise.
|
| And you are right. Current record: block image. I still don't understand
| why did they do it like that.
|

Oracle Core: pages 155-156

-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com
Oracle Core (Apress 2011)
http://www.apress.com/9781430239543




jonathan5683, 5/14/2012 4:49:28 AM