Reduced I/O performance when logged into GNOME

Bug #21790 reported by Lee Willis
Affects: linux-source-2.6.15 (Ubuntu)
Status: Fix Released
Importance: Medium
Assigned to: Tollef Fog Heen

Bug Description

Several users have reported low hard drive performance on breezy. In particular,
this has become obvious where people have upgraded from hoary and the system
"feels" slower. Some people report that speeds are faster in single-user mode.

Bonnie test results:
Multi User
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
leedesktop.plu 480M 14216 62 14031 8 8400 3 15009 57 33911 7 117.4 0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
                 16 1581 98 +++++ +++ +++++ +++ 1604 97 +++++ +++ 3767 98
leedesktop.plus.net,480M,14216,62,14031,8,8400,3,15009,57,33911,7,117.4,0,16,1581,98,+++++,+++,+++++,+++,1604,97,+++++,+++,3767,98

Single User
Using uid:1000, gid:1000.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
leedesktop.plu 480M 22502 94 32769 16 15203 7 19808 72 35799 6 182.2 0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
                 16 1860 99 +++++ +++ +++++ +++ 1930 98 +++++ +++ 4599 97
leedesktop.plus.net,480M,22502,94,32769,16,15203,7,19808,72,35799,6,182.2,0,16,1860,99,+++++,+++,+++++,+++,1930,98,+++++,+++,4599,97
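
(For reference: results in this format come from bonnie++ 1.03. The exact
invocation isn't recorded in the report; the runs above would have been produced
by something along these lines, where the target directory and user are
assumptions:)

$ bonnie++ -d /home/lee/tmp -s 480 -u lee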

Full thread on ubuntu-devel:

http://lists.ubuntu.com/archives/ubuntu-devel/2005-September/010552.html

Revision history for this message
Alvin Thompson (alvint-deactivatedaccount) wrote :

to be clear, the slowness has nothing to do with upgrading; it's just that
people who used to run hoary know how fast it ran so find the difference more
noticeable.

i don't have enough hardware to be definitive, but according to the thread it
appears to affect ATA (but not SATA) drives. if i had to make a completely
unscientific wild guess, judging from the sound of the disk drive shuffling i'd
guess that the caches may not be being utilized properly.

Revision history for this message
Matt Zimmerman (mdz) wrote :

In your multi-user mode test, were you logged into a GNOME session?

Starting from single-user mode, try starting each service from /etc/rc2.d and
see when the problem begins
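
A rough sketch of that procedure (the S* script names in /etc/rc2.d vary per
install, and hdparm is just one convenient way to spot the regression):

# from single-user mode, establish a baseline first
sudo hdparm -t /dev/hda
# then bring up one service at a time and re-test after each
for svc in /etc/rc2.d/S*; do
    echo "starting $svc"
    sudo "$svc" start
    sudo hdparm -t /dev/hda
done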

Revision history for this message
Matt Zimmerman (mdz) wrote :

*** Bug 22022 has been marked as a duplicate of this bug. ***

Revision history for this message
Matt Zimmerman (mdz) wrote :

Has anyone tried to debug this further?

Revision history for this message
Lee Willis (lwillis) wrote :

I've attempted to identify culprits but have had no joy so far. If you could
advise on what I should be looking for in the bonnie output, that would help.
[Is it as simple as comparing the per-second throughput counts?]

I'll also be trying to incrementally start services to identify where the
problem starts as soon as I'm back at a machine that exhibits this problem (My
laptop doesn't show this problem :( )

Have you any other ideas of how I can identify the problem?

Revision history for this message
Matt Zimmerman (mdz) wrote :

(In reply to comment #5)
> I'll also be trying to incrementally start services to identify where the
> problem starts as soon as I'm back at a machine that exhibits this problem (My
> laptop doesn't show this problem :( )

Please do; that would help.

> Have you any other ideas of how I can identify the problem?

If it works well in single-user, then it should be straightforward to narrow
down the cause using the above procedure.

Revision history for this message
Lee Willis (lwillis) wrote :

(In reply to comment #6)
> (In reply to comment #5)
> > I'll also be trying to incrementally start services to identify where the
> > problem starts as soon as I'm back at a machine that exhibits this problem (My
> > laptop doesn't show this problem :( )
>
> Please do; that would help.
>
> > Have you any other ideas of how I can identify the problem?
>
> If it works well in single-user, then it should be straightforward to narrow
> down the cause using the above procedure.

Right. After having looked into this some more, it seems that the differentiating
factor isn't whether I'm in single-user or multi-user mode, but whether or not
I'm logged into GNOME.

If I boot normally, log into GNOME and run hdparm I get poor results (around
17 MB/s for disk reads). If I log out from GNOME back to the GDM login page and
run hdparm, I get around 45 MB/s.
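
(For reference, the figures quoted are buffered disk read numbers from something
like the following, run once inside the GNOME session and once from the GDM
login screen; /dev/hda is assumed to be the system disk:)

$ sudo hdparm -t /dev/hda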

It appears that it is something in the GNOME session that is causing the slowdown.

The difference in process lists between no GNOME session, and a GNOME session is
roughly:

bonobo-activation-server
dbus-daemon
dbus-launch
esd
gam_server
gconfd-2
gnome-cups-icon
gnome-keyring-daemon
gnome-panel
gnome-pty-helper
gnome-settings-daemon
gnome-terminal
gnome-vfs-daemon
metacity
notification-area-applet
notification-daemon
ssh-agent
wnck-applet
x-session-manager

I've shut off everything non-essential and still see "poor" speeds. I do see
some improvement: by getting rid of nautilus, clock-applet, update-notifier,
xscreensaver, gweather_applet, gnome-volume-manager and multiload-applet I get
about 10 MB/s back, but am still 10 MB/s short. The obvious candidate would be
gam_server but I can't work out how to get it to not re-spawn ...

A strace of gam_server shows:
poll([{fd=1, events=0}, {fd=2, events=POLLIN}, {fd=4, events=POLLIN}, {fd=1,
events=POLLIN}, {fd=5, events=POLLIN}, {fd=0, events=POLLIN}, {fd=3,
events=POLLIN}], 7, 0) = 0
gettimeofday({1127377987, 340687}, NULL) = 0
poll([{fd=1, events=0}, {fd=2, events=POLLIN}, {fd=4, events=POLLIN}, {fd=1,
events=POLLIN}, {fd=5, events=POLLIN}, {fd=0, events=POLLIN}, {fd=3,
events=POLLIN}], 7, 11) = 0
gettimeofday({1127377987, 352450}, NULL) = 0
gettimeofday({1127377987, 352512}, NULL) = 0
poll([{fd=1, events=0}, {fd=2, events=POLLIN}, {fd=4, events=POLLIN}, {fd=1,
events=POLLIN}, {fd=5, events=POLLIN}, {fd=0, events=POLLIN}, {fd=3,
events=POLLIN}], 7, 0) = 0
gettimeofday({1127377987, 352654}, NULL) = 0
poll([{fd=1, events=0}, {fd=2, events=POLLIN}, {fd=4, events=POLLIN}, {fd=1,
events=POLLIN}, {fd=5, events=POLLIN}, {fd=0, events=POLLIN}, {fd=3,
events=POLLIN}], 7, 0) = 0
gettimeofday({1127377987, 352796}, NULL) = 0

over and over again - but I'm not sure if this is actually a problem or not ...

Revision history for this message
Matt Zimmerman (mdz) wrote :

bug #10821 perhaps?

Please run "vmstat 5" on the console and show us the output before and after you
login to GNOME, after waiting for the system to settle down to idle. Is there
excessive CPU utilization or excessive I/O?

Revision history for this message
Alvin Thompson (alvint-deactivatedaccount) wrote :

before login:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r b swpd free buff cache si so bi bo in cs us sy id wa
 0 0 0 465204 9980 158580 0 0 780 132 1084 354 9 4 61 26
 0 0 0 465204 9988 158580 0 0 0 5 1028 56 0 0 100 0
 0 0 0 465204 9988 158580 0 0 0 10 1020 38 0 0 100 0
 0 0 0 465204 9988 158580 0 0 0 13 1026 46 0 0 100 0
 0 0 0 465204 9996 158580 0 0 0 2 1023 48 0 0 100 0
 0 0 0 465204 9996 158580 0 0 0 0 1018 39 0 0 100 0
 0 0 0 465204 10004 158580 0 0 0 2 1025 49 0 0 100 0
 0 0 0 465204 10004 158580 0 0 0 0 1020 44 0 0 100 0
 0 0 0 465204 10012 158580 0 0 0 2 1022 43 0 0 100 0
 0 0 0 465204 10012 158580 0 0 0 0 1023 45 0 0 100 0
 0 0 0 465204 10020 158580 0 0 0 2 1025 46 0 0 100 0
 0 0 0 465204 10020 158580 0 0 0 1 1030 62 0 0 100 0
 0 0 0 465204 10028 158580 0 0 0 2 1027 54 0 0 100 0
 0 0 0 465204 10028 158580 0 0 0 0 1025 55 0 0 100 0
 0 0 0 465204 10036 158580 0 0 0 3 1024 46 0 0 100 0

after login:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r b swpd free buff cache si so bi bo in cs us sy id wa
 1 1 0 283064 17224 238548 0 0 378 68 1073 388 9 2 77 12
 1 0 0 283064 17236 238548 0 0 0 12 1062 450 0 0 100 0
 1 1 0 283080 17244 238548 0 0 1 2 1020 196 0 0 100 0
 1 0 0 283080 17256 238548 0 0 0 10 1027 251 0 1 98 1
 0 0 0 283096 17264 238548 0 0 0 5 1025 212 0 0 100 0
 0 0 0 283088 17272 238548 0 0 0 138 1046 207 0 0 99 0
 0 0 0 283096 17280 238548 0 0 0 3 1022 206 0 0 100 0
 0 0 0 282964 17288 238548 0 0 0 9 1027 210 0 0 100 0
 0 0 0 282956 17296 238548 0 0 0 3 1024 209 0 0 99 0
 0 0 0 282956 17304 238548 0 0 0 20 1049 369 1 0 98 0
 0 0 0 282972 17312 238548 0 0 0 7 1026 246 1 0 99 0
 0 0 0 282964 17320 238548 0 0 0 12 1070 538 5 0 95 0
 1 0 0 282956 17328 238548 0 0 0 20 1044 350 1 0 99 0

this is probably a red herring, but i'm now also getting flooded with the dreaded:

[4295254.185000] cs: pcmcia_socket0: unable to apply power.

but only after i log in. i don't think i got that in hoary, but i'm not sure.

Revision history for this message
Fabio Marzocca (thesaltydog) wrote :

If it could help, there is another long thread about this on ubuntuforums:
http://www.ubuntuforums.org/showthread.php?t=61798&highlight=breezy+slow

Revision history for this message
Lee Willis (lwillis) wrote :

This is the output from vmstat with no GNOME session running:

$ vmstat 5 > no_session_vmstat.txt
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r b swpd free buff cache si so bi bo in cs us sy id wa
 0 0 39488 66692 4476 67852 2 2 49 16 39 169 2 0 96 1
 0 0 39488 66692 4484 67852 0 0 0 14 1011 30 0 0 100 0
 0 0 39488 66692 4492 67852 0 0 0 3 1015 27 0 0 100 0
 0 0 39488 66692 4500 67852 0 0 0 3 1012 24 0 0 100 0
 0 0 39488 66692 4508 67852 0 0 0 3 1014 26 0 0 100 0
 0 0 39488 66692 4516 67852 0 0 0 9 1014 24 0 0 100 0
 0 0 39488 66692 4524 67852 0 0 0 10 1013 24 0 0 100 0
 0 0 39488 66692 4540 67856 0 0 2 11 1015 28 0 0 99 1
 0 0 39488 66692 4548 67856 0 0 0 3 1011 23 0 0 100 0
 0 0 39488 66568 4556 67856 0 0 0 3 1015 25 0 0 100 0
 0 0 39488 66568 4564 67856 0 0 0 5 1010 24 0 0 100 0
 0 0 39488 66568 4572 67856 0 0 0 3 1015 26 0 0 100 0
 0 0 39488 66568 4580 67856 0 0 0 3 1012 23 0 0 100 0
 0 0 39488 66568 4588 67856 0 0 0 13 1017 29 0 0 100 0
 0 0 39488 66568 4596 67856 0 0 0 10 1012 26 0 0 100 0
 0 0 39488 66568 4604 67856 0 0 0 3 1016 28 0 0 100 0
 0 0 39488 66568 4612 67856 0 0 0 3 1012 25 0 0 100 0
 0 1 39488 66568 4616 67856 6 0 6 9 1016 28 0 0 99 1
 0 0 39488 66568 4628 67856 0 0 0 4 1012 25 0 0 100 0

This is the output from within a GNOME session:

$ vmstat 5 > session_vmstat.txt
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r b swpd free buff cache si so bi bo in cs us sy id wa
 0 0 38572 9232 6672 70196 2 2 49 16 41 169 2 0 96 1
 0 0 38572 9272 6788 70196 0 0 22 29 1017 190 0 0 95 4
 0 0 38572 9272 6796 70196 0 0 0 6 1015 173 0 0 100 0
 0 0 38572 9272 6804 70196 0 0 0 22 1015 174 1 0 99 0
 0 0 38572 9272 6812 70196 0 0 0 106 1034 181 0 0 99 0
 0 0 38572 9272 6820 70196 0 0 0 5 1011 173 0 0 100 0
 0 0 38572 9272 6828 70196 0 0 0 5 1014 170 0 0 99 0
 0 0 38572 9272 6836 70196 0 0 0 5 1012 169 0 0 100 0
 0 0 38572 9272 6844 70196 0 0 0 5 1015 173 0 0 99 0
 0 0 38572 9272 6852 70196 0 0 0 5 1011 170 0 0 99 0
 0 0 38572 9272 6860 70196 0 0 0 5 1014 172 0 0 100 0
 0 0 38572 9272 6868 70196 0 0 0 6 1012 170 0 0 99 0
 0 0 38572 9272 6876 70196 0 0 0 5 1015 175 1 0 99 0
 0 0 38572 9272 6884 70196 0 0 0 11 1012 174 0 0 99 0
 0 0 38572 9272 6892 70196 0 0 0 6 1019 182 0 0 99 ...

Revision history for this message
João Inácio (inacio) wrote :

I can see a large difference in context switching (+200%).

Also, i have noticed most of the reports are from laptops, in which case i guess
this could be related to 'cs: pcmcia_socket0: unable to apply power.'

Revision history for this message
Matt Zimmerman (mdz) wrote :

*** Bug 22642 has been marked as a duplicate of this bug. ***

Revision history for this message
Matt Zimmerman (mdz) wrote :

(In reply to comment #12)
> I can see a large difference in context switching (+200%).

That's normal.

> Also, i have noticed most of the reports are from laptops, in which case i guess
> this could be related to 'cs: pcmcia_socket0: unable to apply power.'

Is anyone else seeing this in association with the performance issue?

Revision history for this message
Matt Zimmerman (mdz) wrote :

(In reply to comment #11)
> I can't see any major issues myself :(

Well, it tells us that the difference isn't due to increased CPU load or
increased disk activity, which eliminates some factors.

If you log out of GNOME again, do things return to normal?

Revision history for this message
Lee Willis (lwillis) wrote :

I saw this problem originally on a desktop PC so PCMCIA doesn't seem to be
the problem there. [My laptop *doesn't* have this problem on breezy].

As for logging out of GNOME then yes, things return to normal, then slow again
when I log back in.

Revision history for this message
Matt Zimmerman (mdz) wrote :

(In reply to comment #16)
> I saw this problem originally on a desktop PC so PCMCIA doesn't seem to be
> the problem there. [My laptop *doesn't* have this problem on breezy].
>
> As for logging out of GNOME then yes, things return to normal, then slow again
> when I log back in.

Both useful data points, thanks. My current suspicion is that this is related
to inotify; Sebastien, is there an easy way to disable it so that we can test
that hypothesis?

Revision history for this message
Sebastien Bacher (seb128) wrote :

(In reply to comment #17)

> Both useful data points, thanks. My current suspicion is that this is related
> to inotify; Sebastien, is there an easy way to disable it so that we can test
> that hypothesis?

Booting with the "noinotify" option was working before hoary; I would try that.
The other option is to rebuild gamin with the right configure flag to use only
dnotify.
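
A rough sketch of the dnotify-only gamin rebuild (the exact configure switch is
an assumption - check ./configure --help in the gamin source):

$ apt-get source gamin
$ sudo apt-get build-dep gamin
$ cd gamin-*
$ ./configure --disable-inotify    # flag name is a guess; the aim is a dnotify-only build
$ make && sudo make install        # or rebuild the .deb with dpkg-buildpackage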

Revision history for this message
Matt Zimmerman (mdz) wrote :

(In reply to comment #16)
> I saw this problem originally on a desktop PC so PCMCIA doesn't seem to be
> the problem there. [My laptop *doesn't* have this problem on breezy].

What's different between your laptop and your desktop? I assume they both have
ATA disks. Same kernel? Same installed software? What chipsets?

Revision history for this message
Lee Willis (lwillis) wrote :

The desktop machine which experiences this problem is a Fujitsu/Siemens celeron
1.70GHz with 256M of memory. The disk info is:

/dev/hda:

ATA device, with non-removable media
        Model Number: Maxtor 2F040L0
        Serial Number: F137NAVE
        Firmware Revision: VAM51JJ0
Standards:
        Supported: 7 6 5 4
        Likely used: 7
Configuration:
        Logical max current
        cylinders 16383 16383
        heads 16 16
        sectors/track 63 63
        --
        CHS current addressable sectors: 16514064
        LBA user addressable sectors: 80293248
        device size with M = 1024*1024: 39205 MBytes
        device size with M = 1000*1000: 41110 MBytes (41 GB)
Capabilities:
        LBA, IORDY(can be disabled)
        Queue depth: 1
        Standby timer values: spec'd by Standard, no device specific minimum
        R/W multiple sector transfer: Max = 16 Current = 0
        Advanced power management level: unknown setting (0x0000)
        Recommended acoustic management value: 192, current value: 254
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5 udma6
             Cycle time: min=120ns recommended=120ns
        PIO: pio0 pio1 pio2 pio3 pio4
             Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
        Enabled Supported:
           * NOP cmd
           * READ BUFFER cmd
           * WRITE BUFFER cmd
           * Host Protected Area feature set
           * Look-ahead
           * Write cache
           * Power Management feature set
                Security Mode feature set
           * SMART feature set
           * FLUSH CACHE EXT command
           * Mandatory FLUSH CACHE command
           * Device Configuration Overlay feature set
           * Automatic Acoustic Management feature set
                SET MAX security extension
                Advanced Power Management feature set
           * DOWNLOAD MICROCODE cmd
           * SMART self-test
           * SMART error logging
Security:
        Master password revision code = 65534
                supported
        not enabled
        not locked
                frozen
        not expired: security count
        not supported: enhanced erase
HW reset results:
        CBLID- above Vih
        Device num = 0 determined by CSEL
Checksum: correct

All packages are up-to-date breezy (sudo apt-get dist-upgrade) [Kernel 2.6.12-9]

The laptop [which doesn't experience problems] is a Toshiba Satellite Pro A60,
with a Celeron 2.80GHz and 768M of memory. I can't post the hdparm info right now
as I don't have it on me, but I'm fairly sure it's a 40G ATA drive.

Revision history for this message
Matt Zimmerman (mdz) wrote :

(In reply to comment #20)
> The desktop machine which experiences this problem is a Fujitsu/Siemens celeron
> 1.70GHz with 256M of memory. The disk info is:

Please send lspci output so we can see the IDE chipset info

Revision history for this message
Lee Willis (lwillis) wrote :

0000:00:00.0 Host bridge: Intel Corp. 82845G/GL[Brookdale-G]/GE/PE DRAM
Controller/Host-Hub Interface (rev 01)
0000:00:02.0 VGA compatible controller: Intel Corp. 82845G/GL[Brookdale-G]/GE
Chipset Integrated Graphics Device (rev 01)
0000:00:1d.0 USB Controller: Intel Corp. 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M)
USB UHCI Controller #1 (rev 01)
0000:00:1d.1 USB Controller: Intel Corp. 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M)
USB UHCI Controller #2 (rev 01)
0000:00:1d.2 USB Controller: Intel Corp. 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M)
USB UHCI Controller #3 (rev 01)
0000:00:1d.7 USB Controller: Intel Corp. 82801DB/DBM (ICH4/ICH4-M) USB 2.0 EHCI
Controller (rev 01)
0000:00:1e.0 PCI bridge: Intel Corp. 82801 PCI Bridge (rev 81)
0000:00:1f.0 ISA bridge: Intel Corp. 82801DB/DBL (ICH4/ICH4-L) LPC Bridge (rev 01)
0000:00:1f.1 IDE interface: Intel Corp. 82801DB/DBL (ICH4/ICH4-L) UltraATA-100
IDE Controller (rev 01)
0000:00:1f.3 SMBus: Intel Corp. 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) SMBus
Controller (rev 01)
0000:00:1f.5 Multimedia audio controller: Intel Corp. 82801DB/DBL/DBM
(ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 01)
0000:02:08.0 Ethernet controller: Intel Corp. 82801BD PRO/100 VE (LOM) Ethernet
Controller (rev 81)

Revision history for this message
Tollef Fog Heen (tfheen) wrote :

I'm unable to reproduce this on an nforce2-based system. Will try later today on an
intel-based rig.

Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

No luck reproducing on my ATI/ALI based i386 laptop ... I get 35 MB/sec whatever
is running.

Revision history for this message
Javier Cabezas (javier-cabezas) wrote :

I get this poor performance before I log into GNOME:

/dev/hdb:
 Timing cached reads: 640 MB in 2.01 seconds = 319.09 MB/sec
 Timing buffered disk reads: 2 MB in 5.27 seconds = 388.67 kB/sec

but after the login, it gets even worse:

/dev/hdb:
 Timing cached reads: 604 MB in 2.01 seconds = 300.84 MB/sec
 Timing buffered disk reads: 2 MB in 9.04 seconds = 226.51 kB/sec

The info of hdparm about my disk is:

dev/hdb:

ATA device, with non-removable media
 Model Number: SAMSUNG SV1824D
 Serial Number: 0159J2FKB03618
 Firmware Revision: MD100-31
Standards:
 Used: ATA/ATAPI-4 T13 1153D revision 17
 Supported: 4 3 2 1 & some of 5
Configuration:
 Logical max current
 cylinders 16383 16383
 heads 16 16
 sectors/track 63 63
 --
 bytes/track: 34902 bytes/sector: 554
 CHS current addressable sectors: 16514064
 LBA user addressable sectors: 35606592
 device size with M = 1024*1024: 17386 MBytes
 device size with M = 1000*1000: 18230 MBytes (18 GB)
Capabilities:
 LBA, IORDY(cannot be disabled)
 Buffer size: 472.0kB bytes avail on r/w long: 4 Queue depth: 1
 Standby timer values: spec'd by Vendor
 R/W multiple sector transfer: Max = 16 Current = ?
 DMA: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4
      Cycle time: min=120ns recommended=120ns
 PIO: pio0 pio1 pio2 pio3 pio4
      Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
 Enabled Supported:
    * NOP cmd
    * READ BUFFER cmd
    * WRITE BUFFER cmd
    * Host Protected Area feature set
    * DEVICE RESET cmd
    * Look-ahead
    * Write cache
    * Power Management feature set
    * SMART feature set
HW reset results:
 CBLID- above Vih
 Device num = 1

And the lspci output is:

0000:00:00.0 Host bridge: Advanced Micro Devices [AMD] AMD-751 [Irongate] System
Controller (rev 23)
0000:00:01.0 PCI bridge: Advanced Micro Devices [AMD] AMD-751 [Irongate] AGP
Bridge (rev 01)
0000:00:04.0 ISA bridge: VIA Technologies, Inc. VT82C686 [Apollo Super South]
(rev 1b)
0000:00:04.1 IDE interface: VIA Technologies, Inc.
VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
0000:00:04.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1
Controller (rev 0e)
0000:00:04.4 SMBus: VIA Technologies, Inc. VT82C686 [Apollo Super ACPI] (rev 20)
0000:00:0e.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL-8139/8139C/8139C+ (rev 10)
0000:00:0f.0 Multimedia video controller: Brooktree Corporation Bt878 Video
Capture (rev 11)
0000:00:0f.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture
(rev 11)
0000:00:10.0 Multimedia audio controller: Ensoniq ES1371 [AudioPCI-97] (rev 08)
0000:01:05.0 VGA compatible controller: nVidia Corporation NV17 [GeForce4 MX
440] (rev a3)

Are these results normal?? I find buffered reads VERY slow.

Revision history for this message
Matt Zimmerman (mdz) wrote :

Not reproducible here, on a ThinkPad T42. Someone able to reproduce this
problem needs to help us debug it, or it won't be possible to find the cause.

potpal:[~] sudo hdparm -tT /dev/hda # before login

/dev/hda:
 Timing cached reads: 1516 MB in 2.00 seconds = 756.60 MB/sec
 Timing buffered disk reads: 66 MB in 3.07 seconds = 21.47 MB/sec
potpal:[~] sudo hdparm -tT /dev/hda # after login

/dev/hda:
 Timing cached reads: 1656 MB in 2.00 seconds = 827.30 MB/sec
 Timing buffered disk reads: 72 MB in 3.01 seconds = 23.94 MB/sec

Revision history for this message
Javier Cabezas (javier-cabezas) wrote :

What can I do to help you?

I really like Ubuntu and I don't want to switch because of this bug.

Revision history for this message
Matt Zimmerman (mdz) wrote :

(In reply to comment #27)
> What can I do to help you?
>
> I really like Ubuntu and I don't want to switch because of this bug.

For example, you can try disabling inotify as I requested in comment #17.

It is also helpful to search for common factors between systems which experience
this problem (because most do not).

Revision history for this message
Javier Cabezas (javier-cabezas) wrote :

Booting with the "noinotify" option doesn't change the hdparm numbers.

I have another Ubuntu machine (with a worse HD) but it doesn't suffer this
problem. It gets 10 MB/s in buffered disk reads.

The affected machine is a 700 MHz Athlon with 384MB RAM. I have posted other
details previously. I installed Hoary first and upgraded to Breezy via apt-get
dist-upgrade.

Revision history for this message
Lee Willis (lwillis) wrote :

(In reply to comment #28)

> For example, you can try disabling inotify as I requested in comment #17.

I've tried booting with "noinotify" appended to the grub boot string and see no
change (IO is still slow when logged into GNOME). I'm not sure if inotify *has*
actually been disabled - is there any way I could tell?

Revision history for this message
Benjamin Schindler (bschindler) wrote :

(In reply to comment #30)
> (In reply to comment #28)
>
> > For example, you can try disabling inotify as I requested in comment #17.
>
> I've tried booting with "noinotify" appended to the grub boot string and see no
> change (IO is still slow when logged into GNOME). I'm not sure if inotify *has*
> actually been disabled - is there any way I could tell?

If you boot with inotify, dmesg should show something (I guess), and there should
be an inotify device in /dev/ if inotify started.

Revision history for this message
Javier Cabezas (javier-cabezas) wrote :

No messages in dmesg here (inotify not disabled). I also don't find any inotify
device in /dev

(In reply to comment #31)
> (In reply to comment #30)
> > (In reply to comment #28)
> >
> > > For example, you can try disabling inotify as I requested in comment #17.
> >
> > I've tried booting with "noinotify" appended to the grub boot string and see no
> > change (IO is still slow when logged into GNOME). I'm not sure if inotify *has*
> > actually been disabled - is there any way I could tell?
>
> If you boot with inotify, dmesg should show something (I guess), and there should
> be an inotify device in /dev/ if inotify started

Revision history for this message
Lee Willis (lwillis) wrote :

Created an attachment (id=4321)
Test for inotify?

Revision history for this message
Lee Willis (lwillis) wrote :

Hmm - maybe that's the problem. I downloaded a small inotify "test" from
http://www-128.ibm.com/developerworks/linux/library/l-inotify.html#download

This [along with the absence of /dev/inotify and anything relating to inotify in
my dmesg] suggests that I don't have inotify - could/would that cause the problem?

My test program [attached] gives:
$ ./inotify_test /home/lee
open("/dev/inotify", O_RDONLY) = : No such file or directory
No inotify

Revision history for this message
Tollef Fog Heen (tfheen) wrote :

Modern inotify (which is in breezy) doesn't use /dev/inotify any more;
it uses a system call instead. Also, I have been informed that noinotify
doesn't disable inotify. I am rolling a set of breezy kernels without inotify
which I'd like you to test (as soon as they are finished).

Revision history for this message
Matt Zimmerman (mdz) wrote :

Has anyone experiencing this problem tried booting the Hoary kernel to see if
that has an effect? No one on the development team is experiencing the problem,
so nothing can be done unless those of you experiencing the bug can do some
investigation on your own.

Revision history for this message
Matt Zimmerman (mdz) wrote :

It would also be useful to know if anyone can reproduce this on a fresh Breezy
install using the current daily ISOs.

Revision history for this message
Tollef Fog Heen (tfheen) wrote :

I now have kernel images available on
http://people.ubuntu.com/~tfheen/noinotify/ . Please
test those and see if the problem goes away.

(To verify whether or not you have inotify available, grep for inotify in
/proc/slabinfo.)
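
For example (run the same check on the stock kernel and on the noinotify kernel;
any matching lines mean the inotify slab caches are registered, i.e. inotify is
compiled in):

$ grep inotify /proc/slabinfo || echo "no inotify in this kernel"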

Revision history for this message
Lee Willis (lwillis) wrote :

(In reply to comment #38)
> I now have kernel images available on
> http://people.ubuntu.com/~tfheen/noinotify/ . Please
> test those and see if the problem goes away.
>
> (To verify whether you have inotify or not available, grep for inotify in
> /proc/slabinfo)

I checked this with my "normal" kernel and do indeed find inotify entries, so the
standard kernel does have them. I've also installed the linux images you provided
and confirmed that inotify is not present [no matches in /proc/slabinfo]; however,
I still experience disk throughput problems when a GNOME session is running.

Revision history for this message
Javier Cabezas (javier-cabezas) wrote :

Problem resolved. It was my HD; it now always works as expected.

Revision history for this message
Manuel Lucena (mlucena) wrote :

(In reply to comment #40)
> Problem resolved. It was my HD, now always works as expected.

What was exactly the problem? I have a similar problem on my computer and I
don't know how to solve it :-(

Revision history for this message
Lee Willis (lwillis) wrote :

(In reply to comment #36)
> Has anyone experiencing this problem tried booting the Hoary kernel to see if
> that has an effect? No one on the development team is experiencing the problem,
> so nothing can be done unless those of you experiencing the bug can do some
> investigation on your own.

Would you like me to put just a hoary kernel on a breezy install and test that -
or would just booting back into a hoary live CD be OK? Or both?

Revision history for this message
Tollef Fog Heen (tfheen) wrote :

Booting the hoary kernel in a breezy userspace is what we would like you to do.

Also, if you can try to track down which component of the gnome login causes the slowdown, that'd be very useful.

Revision history for this message
Lee Willis (lwillis) wrote :

(In reply to comment #43)
> Booting the hoary kernel in a breezy userspace is what we would like you to do.
>
> Also, if you can try to track down which component of the gnome login causes
> the slowdown, that'd be very useful.

Right - I tried 2.6.10-5-386 from hoary and here are the results (Figures are
buffered disk reads from hdparm -t):

2.6.10-5-386 (No GNOME session running)
Run #1 46.0
Run #2 46.0
Run #3 43.0
Average 45.0

2.6.10-5-386 (With GNOME session)
Run #1 43.0
Run #2 42.0
Run #3 39.0
Average 41.3

As you can see no obvious slowdowns. Rebooting to the latest breezy kernel gives
the following results:

2.6.12-9-386 (No GNOME session running)
Run #1 41.2
Run #2 45.6
Run #3 45.6
Average 44.1

2.6.12-9-386 (With GNOME session)
Run #1 21.5
Run #2 23.5
Run #3 24.8
Average 23.2

Very noticeable slowdown.

Re: "Working out which GNOME component is causing the problem". I have tried a
number of things but can't nail down the problem. My speeds now (Circa 23MB/s)
are better than they were with the benefit mainly coming from killing off
various applets on my panel seems to have helped. My suspicion is that some
low-level library is having the issue rather than a particular app since I seem
to get a small incremental improvement the more apps that I kill off, but no big
obvious win ...
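
(For anyone repeating the comparison: a small loop along these lines, with
/dev/hda assumed, makes the three-run figures above easier to reproduce after
each change:)

$ for i in 1 2 3; do sudo hdparm -t /dev/hda | grep 'buffered disk reads'; done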

Revision history for this message
Alvin Thompson (alvint-deactivatedaccount) wrote :

this was obviously not a HD problem because many people had it (including me).
however, the problem does seem to have gone away for me with recent updates
(within the last week or so). i have a new laptop i'm using so i haven't
empirically verified this.

if you want, i can send you the laptop which exhibited the problem, so that you can
isolate what exactly caused the problem for future reference. you can zap the
partitions if you want since i have a new laptop and i'm going to reinstall
anyway. there are a couple of conditions:
  1. don't look at any porn or national military secrets that may be on the hard
drive currently.
  2. ship it back in a reasonable amount of time.
  3. include a t-shirt (XL) and a hat (fat head) in the return shipment. so
sissy colors!

email me if this is acceptable.

Revision history for this message
Alvin Thompson (alvint-deactivatedaccount) wrote :

err, i meant to say *no* sissy colors!

Revision history for this message
oreste villa (ore-villa-deactivatedaccount) wrote :

Hi all,
is there any news on this bug?
I still have the problem, even though I see that nobody is complaining any more.

I'm using xfce4, so I think it is not the fault of GNOME (could it be GDM?).

Here is the output of my hdparm (I have two disks here; the OS is on hda).

/dev/hda:
 Timing cached reads: 2180 MB in 2.00 seconds = 1090.17 MB/sec
 Timing buffered disk reads: 68 MB in 3.06 seconds = 22.24 MB/sec

/dev/hdb:
 Timing cached reads: 2036 MB in 2.00 seconds = 1017.14 MB/sec
 Timing buffered disk reads: 174 MB in 3.04 seconds = 57.25 MB/sec

If I install the OS on the other disk, the behaviour switches.

I think this bug should have a higher priority, as I know a lot of people are
having this problem.
On my machine it is impossible to do an 'ls' in a directory with 100 files
without waiting 3 seconds... it is slow!!!!

Thanks

Revision history for this message
Ben Collins (ben-collins) wrote :

If possible, please upgrade to Dapper's 2.6.15-7 kernel. If you do not want to
upgrade to Dapper, then you can also wait for the Dapper Flight 2 CD's, which
are due out within the next few days.

Let me know if this bug still exists with this kernel.

Revision history for this message
Lee Willis (lwillis) wrote :

(In reply to comment #48)
> If possible, please upgrade to Dapper's 2.6.15-7 kernel. If you do not want to
> upgrade to Dapper, then you can also wait for the Dapper Flight 2 CD's, which
> are due out within the next few days.

With 2.6.15-7 (I only updated the kernel and any dependencies that apt flagged -
I haven't upgraded my whole system to dapper) I get the following results [I've
included some from my "old" kernel [2.6.10] as well for comparison]:

Old kernel - Without GNOME

/dev/hda:
 Timing buffered disk reads: 130 MB in 3.03 seconds = 42.90 MB/sec
 Timing buffered disk reads: 130 MB in 3.01 seconds = 43.24 MB/sec
 Timing buffered disk reads: 128 MB in 3.04 seconds = 42.15 MB/sec

Old kernel - With Gnome

/dev/hda:
 Timing buffered disk reads: 102 MB in 3.02 seconds = 33.77 MB/sec
 Timing buffered disk reads: 96 MB in 3.06 seconds = 31.34 MB/sec
 Timing buffered disk reads: 100 MB in 3.02 seconds = 33.06 MB/sec

New kernel - With Gnome

/dev/hda:
 Timing buffered disk reads: 118 MB in 3.00 seconds = 39.28 MB/sec
 Timing buffered disk reads: 134 MB in 3.03 seconds = 44.25 MB/sec
 Timing buffered disk reads: 120 MB in 3.01 seconds = 39.89 MB/sec

New kernel - Without Gnome

/dev/hda:
 Timing buffered disk reads: 134 MB in 3.04 seconds = 44.08 MB/sec
 Timing buffered disk reads: 138 MB in 3.02 seconds = 45.75 MB/sec
 Timing buffered disk reads: 138 MB in 3.02 seconds = 45.75 MB/sec

As you can hopefully see, the new kernel appears to have improved performance
from around 32 MB/s to around 41 MB/s. This matches what I was originally seeing
on hoary, so I wouldn't hesitate to say this is fixed with the latest kernel. I'd
note, though, that disk performance while in GNOME is still about 4 MB/s slower
than without GNOME running (41 MB/s vs. 45.1 MB/s). Not sure if that's worth
discussing separately. Any clue as to what was causing the degradation?

Revision history for this message
Lee Willis (lwillis) wrote :

(In reply to comment #48)
> If possible, please upgrade to Dapper's 2.6.15-7 kernel. If you do not want to
> upgrade to Dapper, then you can also wait for the Dapper Flight 2 CD's, which
> are due out within the next few days.
>
> Let me know if this bug still exists with this kernel.

PS. Thanks! :)

Revision history for this message
Tollef Fog Heen (tfheen) wrote :

As nobody has complained about this lately and the numbers at the end of the bug report suggest we now have "good" numbers again, I'm marking this as fixed.

Changed in linux-source-2.6.15:
status: Needs Info → Fix Released