File System Performance Issue

We have been benchmarking Ubuntu 10.04 vs. OpenSuse 11, and we notice
that the OpenSuse file system is just faster than Ubuntu's.

For example, tar -xf linux-2.6.36-rc3.tar.bz2 runs in 35 seconds on
OpenSuse and 45 seconds on Ubuntu. Both systems are using ext4,
bunzip2 1.0.5, and tar 1.22 or 1.23.

Is Ubuntu mounting its root filesystem with some journalling option
that is slower than OpenSuse's? Any other guesses?

-Thanks
-Kitplane01
Reply kitplane01 (1) 11/22/2010 5:21:30 PM

"Charles Talleyrand" <kitplane01@gmail.com> wrote in message 
news:2097f955-cb60-44b9-832a-a4dea5a2ac85@z19g2000yqb.googlegroups.com...
> We have been benchmarking Ubuntu 10.04 vs. OpenSuse 11, and we notice
> that the OpenSuse file system is just faster than Ubuntu's.
>
> For example, tar -xf linux-2.6.36-rc3.tar.bz2 runs in 35 seconds on
> OpenSuse and 45 seconds on Ubuntu. Both systems are using ext4,
> bunzip2 1.0.5, and tar 1.22 or 1.23.
>
> Is Ubuntu mounting its root filesystem with some journalling option
> that is slower than OpenSuse's? Any other guesses?


This may no longer be relevant, but a while ago OpenSuse was shipping
with, and using, a newer version of gcc than Ubuntu. Several benchmarks
(SSL, etc.) would run faster on Suse because they were built with the
newer compiler and linked against a newer version of libc.so.
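
A quick way to check whether that applies here (just a sketch; the
libc path below is the usual one on 32-bit systems and may well differ
on yours) is to compare the toolchain on both machines:

    # compiler the distro ships
    gcc --version

    # glibc version (ldd is part of glibc and reports its version)
    ldd --version

    # or ask the C library directly
    /lib/libc.so.6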



Reply Ezekiel 11/22/2010 5:40:03 PM

Charles Talleyrand pulled this Usenet face plant:

> We have been benchmarking Ubuntu 10.04 vs. OpenSuse 11, and we notice
> that the OpenSuse file system is just faster than Ubuntu's.
>
> For example, tar -xf linux-2.6.36-rc3.tar.bz2 runs in 35 seconds on
> OpenSuse and 45 seconds on Ubuntu. Both systems are using ext4,
> bunzip2 1.0.5, and tar 1.22 or 1.23.
>
> Is Ubuntu mounting its root filesystem with some journalling option
> that is slower than OpenSuse's? Any other guesses?
>
> -Thanks
> -Kitplane01

Same drive?

noatime set on SuSE?  (mount -v)
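
For example (a rough sketch; the exact device names will differ per
box):

    # how / is actually mounted right now
    mount -v | grep ' on / '

    # what the boot-time options in fstab ask for
    grep '[[:space:]]/[[:space:]]' /etc/fstab

If one side shows noatime or relatime and the other doesn't, the extra
atime metadata writes alone could matter on a metadata-heavy job like
untarring a kernel tree.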

-- 
Anthony's Law of the Workshop:
	Any tool when dropped, will roll into the least accessible
	corner of the workshop.
  
Corollary:
	On the way to the corner, any dropped tool will first strike
	your toes.
Reply Chris 11/22/2010 5:49:18 PM

Chris Ahlstrom <ahlstromc@xzoozy.com> writes:

> Charles Talleyrand pulled this Usenet face plant:
>
>> We have been benchmarking Ubuntu 10.04 vs. OpenSuse 11, and we notice
>> that the OpenSuse file system is just faster than Ubuntu's.
>>
>> For example, tar -xf linux-2.6.36-rc3.tar.bz2 runs in 35 seconds on
>> OpenSuse and 45 seconds on Ubuntu. Both systems are using ext4,
>> bunzip2 1.0.5, and tar 1.22 or 1.23.
>>
>> Is Ubuntu mounting its root filesystem with some journalling option
>> that is slower than OpenSuse's? Any other guesses?
>>
>> -Thanks
>> -Kitplane01
>
> Same drive?
>
> noatime set on SuSE?  (mount -v)


Totally different OSen with different daemons/processes running,
different RAM caches, etc.

It could be one of millions of things.
Reply Hadron 11/22/2010 6:37:31 PM

["Followup-To:" header set to comp.os.linux.development.system.]
On Mon, 2010-11-22, Hadron wrote:
> Chris Ahlstrom <ahlstromc@xzoozy.com> writes:
>
>> Charles Talleyrand pulled this Usenet face plant:
>>
>>> We have been benchmarking Ubuntu 10.04 vs. OpenSuse 11, and we notice
>>> that the OpenSuse file system is just faster than Ubuntu's.
>>>
>>> For example, tar -xf linux-2.6.36-rc3.tar.bz2 runs in 35 seconds on
>>> OpenSuse and 45 seconds on Ubuntu. Both systems are using ext4,
>>> bunzip2 1.0.5, and tar 1.22 or 1.23.
>>>
>>> Is Ubuntu mounting its root filesystem with some journalling option
>>> that is slower than OpenSuse's? Any other guesses?
>>>
>>> -Thanks
>>> -Kitplane01
>>
>> Same drive?
>>
>> noatime set on SuSE?  (mount -v)
>
>
> Totally different OSen with different daemons/processes running,
> different RAM caches, etc.
>
> It could be one of millions of things.

It *could*, but a proper investigation /can/ tell what the difference
is. Just measuring wall-clock time and deciding that "the OpenSuse
file system is just faster" is way too simplistic.

Try to at least look at the output from time(1), vmstat and mpstat.
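
Something like this, say (a sketch; the tarball name is just the one
from the original post):

    # split wall-clock time into real/user/sys
    time tar -xf linux-2.6.36-rc3.tar.bz2

    # in another terminal while the extraction runs:
    vmstat 1            # watch the b (blocked) and wa (I/O wait) columns
    mpstat -P ALL 1     # per-CPU utilization breakdown

If sys and wa dominate on Ubuntu but not on OpenSuse, the difference
really is somewhere in the kernel/filesystem path; if the user time
differs, look at the userland tools instead.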

/Jorgen

-- 
  // Jorgen Grahn <grahn@  Oo  o.   .  .
\X/     snipabacken.se>   O  o   .
Reply Jorgen 11/22/2010 7:40:56 PM

Jorgen Grahn <grahn+nntp@snipabacken.se> writes:

> ["Followup-To:" header set to comp.os.linux.development.system.]
> On Mon, 2010-11-22, Hadron wrote:
>> Chris Ahlstrom <ahlstromc@xzoozy.com> writes:
>>
>>> Charles Talleyrand pulled this Usenet face plant:
>>>
>>>> We have been benchmarking Ubuntu 10.04 vs. OpenSuse 11, and we notice
>>>> that the OpenSuse file system is just faster than Ubuntu's.
>>>>
>>>> For example, tar -xf linux-2.6.36-rc3.tar.bz2 runs in 35 seconds on
>>>> OpenSuse and 45 seconds on Ubuntu. Both systems are using ext4,
>>>> bunzip2 1.0.5, and tar 1.22 or 1.23.
>>>>
>>>> Is Ubuntu mounting its root filesystem with some journalling option
>>>> that is slower than OpenSuse's? Any other guesses?
>>>>
>>>> -Thanks
>>>> -Kitplane01
>>>
>>> Same drive?
>>>
>>> noatime set on SuSE?  (mount -v)
>>
>>
>> Totally different OSen with different daemons/processes running,
>> different RAM caches, etc.
>>
>> It could be one of millions of things.
>
> It *could*, but a proper investigation /can/ tell what the difference
> is. Just measuring wall-clock time and deciding that "the OpenSuse
> file system is just faster" is way too simplistic.

That's what I said above. There are too many unknowns.

>
> Try to at least look at the output from time(1), vmstat and mpstat.

Any timings are totally and utterly useless unless we know the operating
framework.

There can be no "proper investigation" without ensuring similar
operating conditions. It's not rocket science.

Reply Hadron 11/22/2010 11:34:10 PM

Hadron<hadronquark@gmail.com> writes:

> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>
>> ["Followup-To:" header set to comp.os.linux.development.system.]
>> On Mon, 2010-11-22, Hadron wrote:
>>> Chris Ahlstrom <ahlstromc@xzoozy.com> writes:
>>>
>>>> Charles Talleyrand pulled this Usenet face plant:
>>>>
>>>>> We have been benchmarking Ubuntu 10.04 vs. OpenSuse 11, and we notice
>>>>> that the OpenSuse file system is just faster than Ubuntu's.
>>>>>
>>>>> For example, tar -xf linux-2.6.36-rc3.tar.bz2 runs in 35 seconds on
>>>>> OpenSuse and 45 seconds on Ubuntu. Both systems are using ext4,
>>>>> bunzip2 1.0.5, and tar 1.22 or 1.23.
>>>>>
>>>>> Is Ubuntu mounting its root filesystem with some journalling option
>>>>> that is slower than OpenSuse's? Any other guesses?
>>>>>
>>>>> -Thanks
>>>>> -Kitplane01
>>>>
>>>> Same drive?
>>>>
>>>> noatime set on SuSE?  (mount -v)
>>>
>>>
>>> Totally different OSen with different daemons/processes running,
>>> different RAM caches, etc.
>>>
>>> It could be one of millions of things.
>>
>> It *could*, but a proper investigation /can/ tell what the difference
>> is. Just measuring wall-clock time and deciding that "the OpenSuse
>> file system is just faster" is way too simplistic.
>
> That's what I said above. There are too many unknowns.
>
>>
>> Try to at least look at the output from time(1), vmstat and mpstat.
>
> Any timings are totally and utterly useless unless we know the operating
> framework.
>
> There can be no "proper investigation" without ensuring similar
> operating conditions. It's not rocket science.

Except that in this case the whole question is to determine which of
the variables are the relevant ones.  People have made a couple of
suggestions up above; we can safely say, though, that it's *extremely*
unlikely that the different daemons running would make that sort of
difference.

Try using tune2fs -l to see what the filesystem parameters are.
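
For example (a sketch; substitute the actual root device on each
system for /dev/sda1):

    # dump the ext4 superblock settings for comparison
    sudo tune2fs -l /dev/sda1

    # the feature flags and default mount options are the interesting bits
    sudo tune2fs -l /dev/sda1 | grep -iE 'features|mount options'

Any difference in the feature flags or default mount options between
the two filesystems would be the first place to look.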
-- 
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)
Reply Joe 11/23/2010 1:03:54 AM

["Followup-To:" header set to comp.os.linux.development.system.]
On Mon, 2010-11-22, Hadron wrote:
> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>
>> ["Followup-To:" header set to comp.os.linux.development.system.]
>> On Mon, 2010-11-22, Hadron wrote:
>>> Chris Ahlstrom <ahlstromc@xzoozy.com> writes:
>>>
>>>> Charles Talleyrand pulled this Usenet face plant:
>>>>
>>>>> We have been benchmarking Ubuntu 10.04 vs. OpenSuse 11, and we notice
>>>>> that the OpenSuse file system is just faster than Ubuntu's.
>>>>>
>>>>> For example, tar -xf linux-2.6.36-rc3.tar.bz2 runs in 35 seconds on
>>>>> OpenSuse and 45 seconds on Ubuntu. Both systems are using ext4,
>>>>> bunzip2 1.0.5, and tar 1.22 or 1.23.
>>>>>
>>>>> Is Ubuntu mounting its root filesystem with some journalling option
>>>>> that is slower than OpenSuse's? Any other guesses?
>>>>>
>>>>> -Thanks
>>>>> -Kitplane01
>>>>
>>>> Same drive?
>>>>
>>>> noatime set on SuSE?  (mount -v)
>>>
>>>
>>> Totally different OSen with different daemons/processes running,
>>> different RAM caches, etc.
>>>
>>> It could be one of millions of things.
>>
>> It *could*, but a proper investigation /can/ tell what the difference
>> is. Just measuring wall-clock time and deciding that "the OpenSuse
>> file system is just faster" is way too simplistic.
>
> That's what I said above. There are too many unknowns.

But that's the /opposite/ of what I wrote!

Unless you mean "there's not enough information in the posting"
rather than "it's a mystery which no one can ever explain". I agree
with the former, of course, but not with the latter.

/Jorgen

-- 
  // Jorgen Grahn <grahn@  Oo  o.   .  .
\X/     snipabacken.se>   O  o   .
Reply Jorgen 11/23/2010 12:42:13 PM