Reduced I/O performance when logged into GNOME
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| linux-source-2.6.15 (Ubuntu) | Fix Released | Medium | Tollef Fog Heen | |
Bug Description
Several users have reported low hard-drive performance on breezy. This has
become particularly obvious where people have upgraded from hoary and the system
"feels" slower. Some people report that speeds are faster in single-user mode.
Bonnie test results:
Multi User
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done.
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
leedesktop.plu 480M 14216 62 14031 8 8400 3 15009 57 33911 7 117.4 0
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1581 98 +++++ +++ +++++ +++ 1604 97 +++++ +++ 3767 98
Single User
Using uid:1000, gid:1000.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done.
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
leedesktop.plu 480M 22502 94 32769 16 15203 7 19808 72 35799 6 182.2 0
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1860 99 +++++ +++ +++++ +++ 1930 98 +++++ +++ 4599 97
Full thread on ubuntu-devel:
http://
Alvin Thompson (alvint-deactivatedaccount) wrote : | #1 |
Matt Zimmerman (mdz) wrote : | #2 |
In your multi-user mode test, were you logged into a GNOME session?
Starting from single-user mode, try starting each service from /etc/rc2.d and
see when the problem begins
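Matt's bisection procedure can be scripted. The following is a minimal sketch, not a tested tool: the device name /dev/hda, the rc2.d path, and the helper names are assumptions; run it as root starting from single-user mode.

```shell
# Sketch of the service-bisection procedure: start each /etc/rc2.d
# service in turn and benchmark the disk after each one, so the service
# that introduces the slowdown stands out. Device name is an assumption.

# Extract the MB/sec figure from `hdparm -t` output on stdin.
buffered_mbs() {
    awk '/Timing buffered/ { print $(NF-1) }'
}

bisect_services() {
    for svc in /etc/rc2.d/S*; do
        [ -x "$svc" ] || continue
        echo "starting $svc"
        "$svc" start >/dev/null 2>&1 || true
        echo "throughput: $(hdparm -t /dev/hda 2>/dev/null | buffered_mbs) MB/sec"
    done
}
# usage (as root, from single-user mode): bisect_services
```

A sharp drop in the reported MB/sec after one particular `starting ...` line would point at that service.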
Matt Zimmerman (mdz) wrote : | #3 |
*** Bug 22022 has been marked as a duplicate of this bug. ***
Matt Zimmerman (mdz) wrote : | #4 |
Has anyone tried to debug this further?
Lee Willis (lwillis) wrote : | #5 |
I've attempted to identify culprits but have had no joy so far. If you can
assist with what I should be looking for in the bonnie output, that would help.
[Is it as simple as comparing the per-second throughput counts?]
I'll also be trying to incrementally start services to identify where the
problem starts as soon as I'm back at a machine that exhibits this problem (My
laptop doesn't show this problem :( )
Have you any other ideas of how I can identify the problem?
Matt Zimmerman (mdz) wrote : | #6 |
(In reply to comment #5)
> I'll also be trying to incrementally start services to identify where the
> problem starts as soon as I'm back at a machine that exhibits this problem (My
> laptop doesn't show this problem :( )
Please do; that would help.
> Have you any other ideas of how I can identify the problem?
If it works well in single-user, then it should be straightforward to narrow
down the cause using the above procedure.
Lee Willis (lwillis) wrote : | #7 |
(In reply to comment #6)
> (In reply to comment #5)
> > I'll also be trying to incrementally start services to identify where the
> > problem starts as soon as I'm back at a machine that exhibits this problem (My
> > laptop doesn't show this problem :( )
>
> Please do; that would help.
>
> > Have you any other ideas of how I can identify the problem?
>
> If it works well in single-user, then it should be straightforward to narrow
> down the cause using the above procedure.
Right. After having looked into this some more it seems that the differentiation
isn't whether I'm in single user or multi-user mode, but whether or not I'm
logged into GNOME.
If I boot normally, log into GNOME and run hdparm, I get poor results (around
17 MB/s for disk reads). If I log out from GNOME back to the GDM login
page and run hdparm, I get around 45 MB/s.
It appears that it is something in the GNOME session that is causing the slowdown.
The difference in process lists between no GNOME session, and a GNOME session is
roughly:
bonobo-
dbus-daemon
dbus-launch
esd
gam_server
gconfd-2
gnome-cups-icon
gnome-keyring-
gnome-panel
gnome-pty-helper
gnome-settings-
gnome-terminal
gnome-vfs-daemon
metacity
notification-
notification-daemon
ssh-agent
wnck-applet
x-session-manager
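A process-list diff like the one above can be captured mechanically. This is a sketch (the helper names and temp-file paths are arbitrary inventions) using ps and comm:

```shell
# Sketch: snapshot the process list before and after logging into GNOME,
# then show what the GNOME session added. File names are arbitrary.

snapshot() {          # snapshot <outfile>
    ps -eo comm= | sort -u > "$1"
}

new_processes() {     # new_processes <before> <after>
    # comm -13 prints lines present only in the second (sorted) file
    comm -13 "$1" "$2"
}

# usage: snapshot /tmp/at_gdm.txt
#        ...log into GNOME, then:
#        snapshot /tmp/in_gnome.txt
#        new_processes /tmp/at_gdm.txt /tmp/in_gnome.txt
```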
I've shut off everything non-essential and still see "poor" speeds. I do see
some improvement from getting rid of nautilus, clock-applet, update-notifier,
xscreensaver, gweather_applet and gnome-volume-
which gets about 10 MB/s back, but I am still 10 MB/s short. The obvious candidate would be
gam_server but I can't work out how to stop it re-spawning ...
A strace of gam_server shows:
poll([{fd=1, events=0}, {fd=2, events=POLLIN}, {fd=4, events=POLLIN}, {fd=1,
events=POLLIN}, {fd=5, events=POLLIN}, {fd=0, events=POLLIN}, {fd=3,
events=POLLIN}], 7, 0) = 0
gettimeofday(
poll([{fd=1, events=0}, {fd=2, events=POLLIN}, {fd=4, events=POLLIN}, {fd=1,
events=POLLIN}, {fd=5, events=POLLIN}, {fd=0, events=POLLIN}, {fd=3,
events=POLLIN}], 7, 11) = 0
gettimeofday(
gettimeofday(
poll([{fd=1, events=0}, {fd=2, events=POLLIN}, {fd=4, events=POLLIN}, {fd=1,
events=POLLIN}, {fd=5, events=POLLIN}, {fd=0, events=POLLIN}, {fd=3,
events=POLLIN}], 7, 0) = 0
gettimeofday(
poll([{fd=1, events=0}, {fd=2, events=POLLIN}, {fd=4, events=POLLIN}, {fd=1,
events=POLLIN}, {fd=5, events=POLLIN}, {fd=0, events=POLLIN}, {fd=3,
events=POLLIN}], 7, 0) = 0
gettimeofday(
over and over again - but I'm not sure if this is actually a problem or not ...
Matt Zimmerman (mdz) wrote : | #8 |
bug #10821 perhaps?
Please run "vmstat 5" on the console and show us the output before and after you
login to GNOME, after waiting for the system to settle down to idle. Is there
excessive CPU utilization or excessive I/O?
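Matt's before/after vmstat comparison can be reduced to a few averages. A sketch follows; the column positions assume the standard `r b swpd ... wa` header that vmstat prints, and the helper name is an invention:

```shell
# Sketch: average the block-in, block-out, context-switch and I/O-wait
# columns of `vmstat 5` output. Skips the two header lines and the
# first sample (which reports averages since boot).
vmstat_summary() {
    awk 'NR > 3 { bi += $9; bo += $10; cs += $12; wa += $16; n++ }
         END { if (n) printf "bi=%.1f bo=%.1f cs=%.1f wa=%.1f\n",
                             bi/n, bo/n, cs/n, wa/n }'
}
# usage: vmstat 5 12 | vmstat_summary   (12 five-second samples, then exit)
```

Running this once at GDM and once inside GNOME gives two comparable lines instead of two screenfuls of numbers.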
Alvin Thompson (alvint-deactivatedaccount) wrote : | #9 |
before login:
procs -------
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 465204 9980 158580 0 0 780 132 1084 354 9 4 61 26
0 0 0 465204 9988 158580 0 0 0 5 1028 56 0 0 100 0
0 0 0 465204 9988 158580 0 0 0 10 1020 38 0 0 100 0
0 0 0 465204 9988 158580 0 0 0 13 1026 46 0 0 100 0
0 0 0 465204 9996 158580 0 0 0 2 1023 48 0 0 100 0
0 0 0 465204 9996 158580 0 0 0 0 1018 39 0 0 100 0
0 0 0 465204 10004 158580 0 0 0 2 1025 49 0 0 100 0
0 0 0 465204 10004 158580 0 0 0 0 1020 44 0 0 100 0
0 0 0 465204 10012 158580 0 0 0 2 1022 43 0 0 100 0
0 0 0 465204 10012 158580 0 0 0 0 1023 45 0 0 100 0
0 0 0 465204 10020 158580 0 0 0 2 1025 46 0 0 100 0
0 0 0 465204 10020 158580 0 0 0 1 1030 62 0 0 100 0
0 0 0 465204 10028 158580 0 0 0 2 1027 54 0 0 100 0
0 0 0 465204 10028 158580 0 0 0 0 1025 55 0 0 100 0
0 0 0 465204 10036 158580 0 0 0 3 1024 46 0 0 100 0
after login:
procs -------
r b swpd free buff cache si so bi bo in cs us sy id wa
1 1 0 283064 17224 238548 0 0 378 68 1073 388 9 2 77 12
1 0 0 283064 17236 238548 0 0 0 12 1062 450 0 0 100 0
1 1 0 283080 17244 238548 0 0 1 2 1020 196 0 0 100 0
1 0 0 283080 17256 238548 0 0 0 10 1027 251 0 1 98 1
0 0 0 283096 17264 238548 0 0 0 5 1025 212 0 0 100 0
0 0 0 283088 17272 238548 0 0 0 138 1046 207 0 0 99 0
0 0 0 283096 17280 238548 0 0 0 3 1022 206 0 0 100 0
0 0 0 282964 17288 238548 0 0 0 9 1027 210 0 0 100 0
0 0 0 282956 17296 238548 0 0 0 3 1024 209 0 0 99 0
0 0 0 282956 17304 238548 0 0 0 20 1049 369 1 0 98 0
0 0 0 282972 17312 238548 0 0 0 7 1026 246 1 0 99 0
0 0 0 282964 17320 238548 0 0 0 12 1070 538 5 0 95 0
1 0 0 282956 17328 238548 0 0 0 20 1044 350 1 0 99 0
this is probably a red herring, but i'm now also getting flooded with the dreaded:
[4295254.185000] cs: pcmcia_socket0: unable to apply power.
but only after i log in. i don't think i got that in hoary, but i'm not sure.
Fabio Marzocca (thesaltydog) wrote : | #10 |
If it could help, there is another long thread about this on ubuntuforums:
http://
Lee Willis (lwillis) wrote : | #11 |
This is the output from vmstat with no GNOME session running:
$ vmstat 5 > no_session_
procs -------
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 39488 66692 4476 67852 2 2 49 16 39 169 2 0 96 1
0 0 39488 66692 4484 67852 0 0 0 14 1011 30 0 0 100 0
0 0 39488 66692 4492 67852 0 0 0 3 1015 27 0 0 100 0
0 0 39488 66692 4500 67852 0 0 0 3 1012 24 0 0 100 0
0 0 39488 66692 4508 67852 0 0 0 3 1014 26 0 0 100 0
0 0 39488 66692 4516 67852 0 0 0 9 1014 24 0 0 100 0
0 0 39488 66692 4524 67852 0 0 0 10 1013 24 0 0 100 0
0 0 39488 66692 4540 67856 0 0 2 11 1015 28 0 0 99 1
0 0 39488 66692 4548 67856 0 0 0 3 1011 23 0 0 100 0
0 0 39488 66568 4556 67856 0 0 0 3 1015 25 0 0 100 0
0 0 39488 66568 4564 67856 0 0 0 5 1010 24 0 0 100 0
0 0 39488 66568 4572 67856 0 0 0 3 1015 26 0 0 100 0
0 0 39488 66568 4580 67856 0 0 0 3 1012 23 0 0 100 0
0 0 39488 66568 4588 67856 0 0 0 13 1017 29 0 0 100 0
0 0 39488 66568 4596 67856 0 0 0 10 1012 26 0 0 100 0
0 0 39488 66568 4604 67856 0 0 0 3 1016 28 0 0 100 0
0 0 39488 66568 4612 67856 0 0 0 3 1012 25 0 0 100 0
0 1 39488 66568 4616 67856 6 0 6 9 1016 28 0 0 99 1
0 0 39488 66568 4628 67856 0 0 0 4 1012 25 0 0 100 0
This is the output from within a GNOME session:
$ vmstat 5 > session_vmstat.txt
procs -------
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 38572 9232 6672 70196 2 2 49 16 41 169 2 0 96 1
0 0 38572 9272 6788 70196 0 0 22 29 1017 190 0 0 95 4
0 0 38572 9272 6796 70196 0 0 0 6 1015 173 0 0 100 0
0 0 38572 9272 6804 70196 0 0 0 22 1015 174 1 0 99 0
0 0 38572 9272 6812 70196 0 0 0 106 1034 181 0 0 99 0
0 0 38572 9272 6820 70196 0 0 0 5 1011 173 0 0 100 0
0 0 38572 9272 6828 70196 0 0 0 5 1014 170 0 0 99 0
0 0 38572 9272 6836 70196 0 0 0 5 1012 169 0 0 100 0
0 0 38572 9272 6844 70196 0 0 0 5 1015 173 0 0 99 0
0 0 38572 9272 6852 70196 0 0 0 5 1011 170 0 0 99 0
0 0 38572 9272 6860 70196 0 0 0 5 1014 172 0 0 100 0
0 0 38572 9272 6868 70196 0 0 0 6 1012 170 0 0 99 0
0 0 38572 9272 6876 70196 0 0 0 5 1015 175 1 0 99 0
0 0 38572 9272 6884 70196 0 0 0 11 1012 174 0 0 99 0
0 0 38572 9272 6892 70196 0 0 0 6 1019 182 0 0 99 ...
João Inácio (inacio) wrote : | #12 |
I can see a large difference in context switching (+200%).
Also, I have noticed most of the reports are from laptops, in which case I guess this
could be related to 'cs: pcmcia_socket0: unable to apply power.'
Matt Zimmerman (mdz) wrote : | #13 |
*** Bug 22642 has been marked as a duplicate of this bug. ***
Matt Zimmerman (mdz) wrote : | #14 |
(In reply to comment #12)
> I can see a large difference in context switching (+200%).
That's normal.
> Also, I have noticed most of the reports are from laptops, in which case I guess this
> could be related to 'cs: pcmcia_socket0: unable to apply power.'
Is anyone else seeing this in association with the performance issue?
Matt Zimmerman (mdz) wrote : | #15 |
(In reply to comment #11)
> I can't see any major issues myself :(
Well, it tells us that the difference isn't due to increased CPU load or
increased disk activity, which eliminates some factors.
If you log out of GNOME again, do things return to normal?
Lee Willis (lwillis) wrote : | #16 |
I saw this problem originally on a desktop PC so PCMCIA doesn't seem to be
the problem there. [My laptop *doesn't* have this problem on breezy].
As for logging out of GNOME then yes, things return to normal, then slow again
when I log back in.
Matt Zimmerman (mdz) wrote : | #17 |
(In reply to comment #16)
> I saw this problem originally on a desktop PC so PCMCIA doesn't seem to be
> the problem there. [My laptop *doesn't* have this problem on breezy].
>
> As for logging out of GNOME then yes, things return to normal, then slow again
> when I log back in.
Both useful data points, thanks. My current suspicion is that this is related
to inotify; Sebastien, is there an easy way to disable it so that we can test
that hypothesis?
Sebastien Bacher (seb128) wrote : | #18 |
(In reply to comment #17)
> Both useful data points, thanks. My current suspicion is that this is related
> to inotify; Sebastien, is there an easy way to disable it so that we can test
> that hypothesis?
Booting with the "noinotify" option was working before hoary, I would try that.
The other option is to rebuild gamin with the right configure flag to use only
dnotify.
Matt Zimmerman (mdz) wrote : | #19 |
(In reply to comment #16)
> I saw this problem originally on a desktop PC so PCMCIA doesn't seem to be
> the problem there. [My laptop *doesn't* have this problem on breezy].
What's different between your laptop and your desktop? I assume they both have
ATA disks. Same kernel? Same installed software? What chipsets?
Lee Willis (lwillis) wrote : | #20 |
The desktop machine which experiences this problem is a Fujitsu/Siemens celeron
1.70GHz with 256M of memory. The disk info is:
/dev/hda:
ATA device, with non-removable media
Model Number: Maxtor 2F040L0
Serial Number: F137NAVE
Firmware Revision: VAM51JJ0
Standards:
Supported: 7 6 5 4
Likely used: 7
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 80293248
device size with M = 1024*1024: 39205 MBytes
device size with M = 1000*1000: 41110 MBytes (41 GB)
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 1
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 0
Advanced power management level: unknown setting (0x0000)
Recommended acoustic management value: 192, current value: 254
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5 udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* NOP cmd
* READ BUFFER cmd
* WRITE BUFFER cmd
* Host Protected Area feature set
* Look-ahead
* Write cache
* Power Management feature set
* SMART feature set
* FLUSH CACHE EXT command
* Mandatory FLUSH CACHE command
* Device Configuration Overlay feature set
* Automatic Acoustic Management feature set
SET MAX security extension
* DOWNLOAD MICROCODE cmd
* SMART self-test
* SMART error logging
Security:
Master password revision code = 65534
not enabled
not locked
not expired: security count
not supported: enhanced erase
HW reset results:
CBLID- above Vih
Device num = 0 determined by CSEL
Checksum: correct
All packages are up-to-date on breezy (sudo apt-get dist-upgrade) [Kernel 2.6.12-9]
The laptop [which doesn't experience problems] is a Toshiba Satellite Pro A60,
with a celeron 2.80GHz and 768M of memory. I can't post the hdparm info right now
as I don't have it on me, but I'm fairly sure it's an ATA 40G drive.
Matt Zimmerman (mdz) wrote : | #21 |
(In reply to comment #20)
> The desktop machine which experiences this problem is a Fujitsu/Siemens celeron
> 1.70GHz with 256M of memory. The disk info is:
Please send lspci output so we can see the IDE chipset info
Lee Willis (lwillis) wrote : | #22 |
0000:00:00.0 Host bridge: Intel Corp. 82845G/
Controller/Host-Hub Interface (rev 01)
0000:00:02.0 VGA compatible controller: Intel Corp. 82845G/
Chipset Integrated Graphics Device (rev 01)
0000:00:1d.0 USB Controller: Intel Corp. 82801DB/DBL/DBM (ICH4/ICH4-
USB UHCI Controller #1 (rev 01)
0000:00:1d.1 USB Controller: Intel Corp. 82801DB/DBL/DBM (ICH4/ICH4-
USB UHCI Controller #2 (rev 01)
0000:00:1d.2 USB Controller: Intel Corp. 82801DB/DBL/DBM (ICH4/ICH4-
USB UHCI Controller #3 (rev 01)
0000:00:1d.7 USB Controller: Intel Corp. 82801DB/DBM (ICH4/ICH4-M) USB 2.0 EHCI
Controller (rev 01)
0000:00:1e.0 PCI bridge: Intel Corp. 82801 PCI Bridge (rev 81)
0000:00:1f.0 ISA bridge: Intel Corp. 82801DB/DBL (ICH4/ICH4-L) LPC Bridge (rev 01)
0000:00:1f.1 IDE interface: Intel Corp. 82801DB/DBL (ICH4/ICH4-L) UltraATA-100
IDE Controller (rev 01)
0000:00:1f.3 SMBus: Intel Corp. 82801DB/DBL/DBM (ICH4/ICH4-
Controller (rev 01)
0000:00:1f.5 Multimedia audio controller: Intel Corp. 82801DB/DBL/DBM
(ICH4/ICH4-
0000:02:08.0 Ethernet controller: Intel Corp. 82801BD PRO/100 VE (LOM) Ethernet
Controller (rev 81)
Tollef Fog Heen (tfheen) wrote : | #23 |
I'm unable to reproduce this on an nforce2-based system. Will try later today on an
intel-based rig.
Scott James Remnant (Canonical) (canonical-scott) wrote : | #24 |
No luck reproducing on my ATI/ALI based i386 laptop ... I get 35 MB/sec whatever
is running.
Javier Cabezas (javier-cabezas) wrote : | #25 |
I have this poor performance before I log into Gnome:
/dev/hdb:
Timing cached reads: 640 MB in 2.01 seconds = 319.09 MB/sec
Timing buffered disk reads: 2 MB in 5.27 seconds = 388.67 kB/sec
but after the login, it gets even worse:
/dev/hdb:
Timing cached reads: 604 MB in 2.01 seconds = 300.84 MB/sec
Timing buffered disk reads: 2 MB in 9.04 seconds = 226.51 kB/sec
The info of hdparm about my disk is:
dev/hdb:
ATA device, with non-removable media
Model Number: SAMSUNG SV1824D
Serial Number: 0159J2FKB03618
Firmware Revision: MD100-31
Standards:
Used: ATA/ATAPI-4 T13 1153D revision 17
Supported: 4 3 2 1 & some of 5
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
bytes/track: 34902 bytes/sector: 554
CHS current addressable sectors: 16514064
LBA user addressable sectors: 35606592
device size with M = 1024*1024: 17386 MBytes
device size with M = 1000*1000: 18230 MBytes (18 GB)
Capabilities:
LBA, IORDY(cannot be disabled)
Buffer size: 472.0kB bytes avail on r/w long: 4 Queue depth: 1
Standby timer values: spec'd by Vendor
R/W multiple sector transfer: Max = 16 Current = ?
DMA: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* NOP cmd
* READ BUFFER cmd
* WRITE BUFFER cmd
* Host Protected Area feature set
* DEVICE RESET cmd
* Look-ahead
* Write cache
* Power Management feature set
* SMART feature set
HW reset results:
CBLID- above Vih
Device num = 1
And the lspci output is:
0000:00:00.0 Host bridge: Advanced Micro Devices [AMD] AMD-751 [Irongate] System
Controller (rev 23)
0000:00:01.0 PCI bridge: Advanced Micro Devices [AMD] AMD-751 [Irongate] AGP
Bridge (rev 01)
0000:00:04.0 ISA bridge: VIA Technologies, Inc. VT82C686 [Apollo Super South]
(rev 1b)
0000:00:04.1 IDE interface: VIA Technologies, Inc.
VT82C586A/
0000:00:04.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1
Controller (rev 0e)
0000:00:04.4 SMBus: VIA Technologies, Inc. VT82C686 [Apollo Super ACPI] (rev 20)
0000:00:0e.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL-8139/
0000:00:0f.0 Multimedia video controller: Brooktree Corporation Bt878 Video
Capture (rev 11)
0000:00:0f.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture
(rev 11)
0000:00:10.0 Multimedia audio controller: Ensoniq ES1371 [AudioPCI-97] (rev 08)
0000:01:05.0 VGA compatible controller: nVidia Corporation NV17 [GeForce4 MX
440] (rev a3)
Are these results normal?? I find buffered reads VERY slow.
Matt Zimmerman (mdz) wrote : | #26 |
Not reproducible here, on a ThinkPad T42. Someone able to reproduce this
problem needs to help us debug it, or it won't be possible to find the cause.
potpal:[~] sudo hdparm -tT /dev/hda # before login
/dev/hda:
Timing cached reads: 1516 MB in 2.00 seconds = 756.60 MB/sec
Timing buffered disk reads: 66 MB in 3.07 seconds = 21.47 MB/sec
potpal:[~] sudo hdparm -tT /dev/hda # after login
/dev/hda:
Timing cached reads: 1656 MB in 2.00 seconds = 827.30 MB/sec
Timing buffered disk reads: 72 MB in 3.01 seconds = 23.94 MB/sec
Javier Cabezas (javier-cabezas) wrote : | #27 |
What can I do to help you?
I really like Ubuntu and I don't want to switch because of this bug.
Matt Zimmerman (mdz) wrote : | #28 |
(In reply to comment #27)
> What can I do to help you?
>
> I really like Ubuntu and I don't want to switch because of this bug.
For example, you can try disabling inotify as I requested in comment #17.
It is also helpful to search for common factors between systems which experience
this problem (because most do not).
Javier Cabezas (javier-cabezas) wrote : | #29 |
Booting with the "noinotify" option doesn't change the hdparm numbers.
I have another Ubuntu machine (with a worse HD) but it doesn't suffer this
problem. It gets 10 MB/s in buffered disk reads.
The affected machine is a 700 MHz Athlon with 384 MB RAM. I have posted other
details previously. I installed Hoary first and upgraded to Breezy via apt-get
dist-upgrade.
Lee Willis (lwillis) wrote : | #30 |
(In reply to comment #28)
> For example, you can try disabling inotify as I requested in comment #17.
I've tried booting with "noinotify" appended to the grub boot string and see no
change (IO is still slow when logged into GNOME). I'm not sure if inotify *has*
actually been disabled - is there any way I could tell?
Benjamin Schindler (bschindler) wrote : | #31 |
(In reply to comment #30)
> (In reply to comment #28)
>
> > For example, you can try disabling inotify as I requested in comment #17.
>
> I've tried booting with "noinotify" appended to the grub boot string and see no
> change (IO is still slow when logged into GNOME). I'm not sure if inotify *has*
> actually been disabled - is there any way I could tell?
If you boot with inotify, dmesg should show something (I guess) - and - in
/dev/, there should be an inotify device if inotify started
Javier Cabezas (javier-cabezas) wrote : | #32 |
No messages in dmesg here (inotify not disabled). I also don't find any inotify
device in /dev
(In reply to comment #31)
> (In reply to comment #30)
> > (In reply to comment #28)
> >
> > > For example, you can try disabling inotify as I requested in comment #17.
> >
> > I've tried booting with "noinotify" appended to the grub boot string and see no
> > change (IO is still slow when logged into GNOME). I'm not sure if inotify *has*
> > actually been disabled - is there any way I could tell?
>
> If you boot with inotify, dmesg should show something (I guess) - and - in
> /dev/, there should be an inotify device if inotify started
Lee Willis (lwillis) wrote : | #33 |
Lee Willis (lwillis) wrote : | #34 |
Hmm - maybe that's the problem. I downloaded a small inotify "test" from
http://
This [along with the absence of /dev/inotify and anything relating to inotify in
my dmesg] suggests that I don't have inotify - could/would that cause the problem?
My test program [attached] gives:
$ ./inotify_test /home/lee
open("/
No inotify
Tollef Fog Heen (tfheen) wrote : | #35 |
Modern inotify (which is in breezy) doesn't use /dev/inotify any more;
it uses a system call instead. Also, I have been informed that noinotify
doesn't disable inotify. I am rolling a set of breezy kernels without inotify
that I'd like you to test (as soon as they are finished).
Matt Zimmerman (mdz) wrote : | #36 |
Has anyone experiencing this problem tried booting the Hoary kernel to see if
that has an effect? No one on the development team is experiencing the problem,
so nothing can be done unless those of you experiencing the bug can do some
investigation on your own.
Matt Zimmerman (mdz) wrote : | #37 |
It would also be useful to know if anyone can reproduce this on a fresh Breezy
install using the current daily ISOs.
Tollef Fog Heen (tfheen) wrote : | #38 |
I now have kernel images available on
http://
test those and see if the problem goes away.
(To verify whether you have inotify or not available, grep for inotify in
/proc/slabinfo)
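Tollef's slabinfo check can be wrapped in a small helper. A sketch, with the caveats that the function name is an invention and reading /proc/slabinfo may require root; the optional file argument exists only so the logic can be exercised against a sample file:

```shell
# Sketch: report whether the running kernel has inotify compiled in,
# per the /proc/slabinfo hint. An alternate file may be given for testing.
has_inotify() {
    if grep -q inotify "${1:-/proc/slabinfo}" 2>/dev/null; then
        echo "inotify present"
    else
        echo "inotify absent"
    fi
}
# usage (as root): has_inotify
```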
Lee Willis (lwillis) wrote : | #39 |
(In reply to comment #38)
> I now have kernel images available on
> http://
> test those and see if the problem goes away.
>
> (To verify whether you have inotify or not available, grep for inotify in
> /proc/slabinfo)
I checked this with my "normal" kernel and do indeed find inotify entries so the
standard kernel does have them. I've also installed the linux images you
provided, confirmed that inotify is not present [No matches in /proc/slabinfo],
however I still experience disk throughput problems when a gnome session is running.
Javier Cabezas (javier-cabezas) wrote : | #40 |
Problem resolved. It was my HD; it now always works as expected.
Manuel Lucena (mlucena) wrote : | #41 |
(In reply to comment #40)
> Problem resolved. It was my HD, now always works as expected.
What was exactly the problem? I have a similar problem on my computer and I
don't know how to solve it :-(
Lee Willis (lwillis) wrote : | #42 |
(In reply to comment #36)
> Has anyone experiencing this problem tried booting the Hoary kernel to see if
> that has an effect? No one on the development team is experiencing the problem,
> so nothing can be done unless those of you experiencing the bug can do some
> investigation on your own.
Would you like me to put just a hoary kernel on a breezy install and test - or
would just booting back into a hoary live CD be OK? Or both?
Tollef Fog Heen (tfheen) wrote : | #43 |
Booting the hoary kernel in a breezy userspace is what we would like you to do.
Also, if you can try to track down which component of the gnome login causes the slowdown, that'd be very useful.
Lee Willis (lwillis) wrote : | #44 |
(In reply to comment #43)
> Booting the hoary kernel in a breezy userspace is what we would like you to do.
>
> Also, if you can try to track down which component of the gnome login which
causes the slowdown, that'd be very useful.
Right - I tried 2.6.10-5-386 from hoary and here are the results (Figures are
buffered disk reads from hdparm -t):
2.6.10-5-386 (No GNOME session running)
Run #1 46.0
Run #2 46.0
Run #3 43.0
Average 45.0
2.6.10-5-386 (With GNOME session)
Run #1 43.0
Run #2 42.0
Run #3 39.0
Average 41.3
As you can see no obvious slowdowns. Rebooting to the latest breezy kernel gives
the following results:
2.6.12-9-386 (No GNOME session running)
Run #1 41.2
Run #2 45.6
Run #3 45.6
Average 44.1
2.6.12-9-386 (With GNOME session)
Run #1 21.5
Run #2 23.5
Run #3 24.8
Average 23.2
Very noticeable slowdown.
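Multi-run averages like Lee's can be produced with a short pipeline. A sketch, assuming the device is /dev/hda and `hdparm -t` output in the format shown in this thread; the helper name is an invention:

```shell
# Sketch: average the buffered-read MB/sec figure over several
# `hdparm -t` runs fed in on stdin. Device name is an assumption.
avg_buffered() {
    awk '/Timing buffered/ { sum += $(NF-1); n++ }
         END { if (n) printf "%.1f MB/sec over %d runs\n", sum/n, n }'
}
# usage (as root): for i in 1 2 3; do hdparm -t /dev/hda; done | avg_buffered
```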
Re: "Working out which GNOME component is causing the problem" - I have tried a
number of things but can't nail down the problem. My speeds now (circa 23 MB/s)
are better than they were, with the benefit mainly coming from killing off
various applets on my panel. My suspicion is that some low-level library is
having the issue rather than a particular app, since I seem to get a small
incremental improvement the more apps I kill off, but no big obvious win ...
Alvin Thompson (alvint-deactivatedaccount) wrote : | #45 |
this was obviously not a HD problem because many people had it (including me).
however, the problem does seem to have gone away for me with recent updates
(within the last week or so). i have a new laptop i'm using so i haven't
empirically(sp) verified this.
if you want, i can send you the laptop which exhibited the problem, so that you can
isolate what exactly caused the problem for future reference. you can zap the
partitions if you want since i have a new laptop and i'm going to reinstall
anyway. there are a couple of conditions:
1. don't look at any porn or national military secrets that may be on the hard
drive currently.
2. ship it back in a reasonable amount of time.
3. include a t-shirt (XL) and a hat (fat head) in the return shipment. so
sissy colors!
email me if this is acceptable.
Alvin Thompson (alvint-deactivatedaccount) wrote : | #46 |
err, i meant to say *no* sissy colors!
oreste villa (ore-villa-deactivatedaccount) wrote : | #47 |
Hi all,
is there any news on this bug?
I still have the problem, even though I see that nobody is complaining anymore.
I'm using xfce4, so I think it is not GNOME's fault (could it be GDM?).
Here is the output of hdparm (I have two disks here; the OS is on hda).
/dev/hda:
Timing cached reads: 2180 MB in 2.00 seconds = 1090.17 MB/sec
Timing buffered disk reads: 68 MB in 3.06 seconds = 22.24 MB/sec
/dev/hdb:
Timing cached reads: 2036 MB in 2.00 seconds = 1017.14 MB/sec
Timing buffered disk reads: 174 MB in 3.04 seconds = 57.25 MB/sec
If I install the OS on the other disk, the behaviour of the disks switches.
I think this bug should have a higher priority, as I know a lot of people are
having this problem.
On my machine it is impossible to do an 'ls' in a directory with 100 files
without waiting 3 seconds... it is slow!
Thanks
Ben Collins (ben-collins) wrote : | #48 |
If possible, please upgrade to Dapper's 2.6.15-7 kernel. If you do not want to
upgrade to Dapper, then you can also wait for the Dapper Flight 2 CD's, which
are due out within the next few days.
Let me know if this bug still exists with this kernel.
Lee Willis (lwillis) wrote : | #49 |
(In reply to comment #48)
> If possible, please upgrade to Dapper's 2.6.15-7 kernel. If you do not want to
> upgrade to Dapper, then you can also wait for the Dapper Flight 2 CD's, which
> are due out within the next few days.
With 2.6.15-7 (I only updated the kernel and any dependencies that apt flagged -
I haven't upgraded my whole system to dapper) I get the following results [I've
included some from my "old" kernel [2.6.10] as well for comparison]:
Old kernel - Without GNOME
/dev/hda:
Timing buffered disk reads: 130 MB in 3.03 seconds = 42.90 MB/sec
Timing buffered disk reads: 130 MB in 3.01 seconds = 43.24 MB/sec
Timing buffered disk reads: 128 MB in 3.04 seconds = 42.15 MB/sec
Old kernel - With Gnome
/dev/hda:
Timing buffered disk reads: 102 MB in 3.02 seconds = 33.77 MB/sec
Timing buffered disk reads: 96 MB in 3.06 seconds = 31.34 MB/sec
Timing buffered disk reads: 100 MB in 3.02 seconds = 33.06 MB/sec
New kernel - With Gnome
/dev/hda:
Timing buffered disk reads: 118 MB in 3.00 seconds = 39.28 MB/sec
Timing buffered disk reads: 134 MB in 3.03 seconds = 44.25 MB/sec
Timing buffered disk reads: 120 MB in 3.01 seconds = 39.89 MB/sec
New kernel - Without Gnome
/dev/hda:
Timing buffered disk reads: 134 MB in 3.04 seconds = 44.08 MB/sec
Timing buffered disk reads: 138 MB in 3.02 seconds = 45.75 MB/sec
Timing buffered disk reads: 138 MB in 3.02 seconds = 45.75 MB/sec
As you can hopefully see, the new kernel appears to have improved performance
from around 32 MB/s to around 41 MB/s. This matches what I was originally seeing
on hoary, so I wouldn't hesitate to say this is fixed with the latest kernel.
I'd note, though, that disk performance while in GNOME is still about 4 MB/s
slower than without GNOME running (41 MB/s vs. 45.1 MB/s). Not sure if that's
worth discussing separately. Any clue as to what was causing the degradation?
Lee Willis (lwillis) wrote : | #50 |
(In reply to comment #48)
> If possible, please upgrade to Dapper's 2.6.15-7 kernel. If you do not want to
> upgrade to Dapper, then you can also wait for the Dapper Flight 2 CD's, which
> are due out within the next few days.
>
> Let me know if this bug still exists with this kernel.
PS. Thanks! :)
Tollef Fog Heen (tfheen) wrote : | #51 |
As nobody has complained about this lately, and the numbers at the end of the bug report suggest we now have "good" numbers again, I'm marking this as fixed.
Changed in linux-source-2.6.15: | |
status: | Needs Info → Fix Released |
to be clear, the slowness has nothing to do with upgrading; it's just that
people who used to run hoary know how fast it ran, so they find the difference
more noticeable.
i don't have enough hardware to be definitive, but according to the thread it
appears to affect ATA (but not SATA) drives. if i had to make a completely
unscientific wild guess, judging from the sound of the disk drive shuffling, i'd
guess that the caches may not be being utilized properly.