oneiric cluster compute (hvm) instances do not boot

Bug #791850 reported by Scott Moser
This bug affects 5 people
Affects         Status        Importance  Assigned to   Milestone
linux (Ubuntu)  Fix Released  High        Stefan Bader
Oneiric         Fix Released  High        Stefan Bader

Bug Description

I just tested
  us-east-1 ami-5cea1335 hvm/ubuntu-oneiric-daily-amd64-server-20110601

An instance of type cc1.4xlarge does not boot. I did not try cg1.4xlarge, but I would not expect a change.

For the record, I opened this bug on a t1.micro, and then edited the collected Ec2AMI, Ec2InstanceType, and related fields.

ProblemType: Bug
DistroRelease: Ubuntu 11.10
Package: linux-image-2.6.39-3-virtual 2.6.39-3.10
ProcVersionSignature: User Name 2.6.39-3.10-virtual 2.6.39
Uname: Linux 2.6.39-3-virtual x86_64
AlsaDevices:
 total 0
 crw------- 1 root root 116, 1 2011-06-02 13:17 seq
 crw------- 1 root root 116, 33 2011-06-02 13:17 timer
AplayDevices: Error: [Errno 2] No such file or directory
Architecture: amd64
ArecordDevices: Error: [Errno 2] No such file or directory
CurrentDmesg: [ 18.320030] eth0: no IPv6 routers present
Date: Thu Jun 2 13:38:14 2011
Ec2AMI: ami-5cea1335
Ec2AMIManifest: (unknown)
Ec2AvailabilityZone: us-east-1c
Ec2InstanceType: cc1.4xlarge
Ec2Kernel: unavailable
Ec2Ramdisk: unavailable
Lspci:

Lsusb: Error: command ['lsusb'] failed with exit code 1: unable to initialize libusb: -99
ProcEnviron:
 LANG=en_US.UTF-8
 LC_MESSAGES=en_US.utf8
 SHELL=/bin/bash
ProcKernelCmdLine: root=LABEL=uec-rootfs ro console=hvc0
ProcModules: acpiphp 24127 0 - Live 0xffffffffa0000000
SourcePackage: linux
UpgradeStatus: No upgrade log present (probably fresh install)

Revision history for this message
Scott Moser (smoser) wrote :
description: updated
tags: added: iso-testing
Revision history for this message
Ben Howard (darkmuggle-deactivatedaccount) wrote :

Confirmed that this is an issue with the daily build of Oneiric for cc1.4xlarge and cg1.4xlarge.
us-east-1 64-bit hvm ami-f65ca79f

Requested more information from Amazon.

Changed in linux (Ubuntu):
status: New → Confirmed
Dave Walker (davewalker)
tags: added: server-o-ors
Dave Walker (davewalker)
tags: added: server-o-rs
removed: server-o-ors
Revision history for this message
Dave Walker (davewalker) wrote : Server development status

Issue may be related to changes in AWS infrastructure, awaiting update.

Revision history for this message
Scott Moser (smoser) wrote : Re: oneiric cluster compute instances do not boot

Not likely due to infrastructure changes; 11.04 instances still boot, so at the very least there is something that we *could* do to make these boot.

Revision history for this message
Ben Howard (darkmuggle-deactivatedaccount) wrote :

Amazon is still getting us information. However, they have confirmed that it is a deadlock: the DomU just sits and spins CPU cycles, but never comes back.

What we do know:
1. The instance does not terminate
2. The instance will sit and spin with high CPU utilization
3. It is only known to affect the CG1 and CC1 HVM instance types
4. There are no stack traces, oopses, or panic messages

Changed in linux (Ubuntu):
importance: Undecided → High
status: Confirmed → Triaged
Dave Walker (davewalker)
Changed in linux (Ubuntu):
assignee: nobody → Ben Howard (utlemming)
Revision history for this message
Stefan Bader (smb) wrote :

The good news is that this can be reproduced on my CentOS test system. The bad news is that there really do not seem to be many more hints than we already got. The dom0 console only shows one message about "Bad HVM op 9", but that happens between grub and OS boot and has no impact. And vlapic reports a few write accesses to read-only registers 0x20 and 0x30, but those are not a problem either.

I am able to make a successful boot with vcpu=1 and a modified 64bit oneiric-server installation that force-loads all of the xen drivers (it's not verified yet, but I think it is mainly the blkfront driver I need). This also shows the dom0 messages above but boots successfully. However, it seems there is something weird with the IDE controller emulation as well. While the HVM BIOS and grub recognize the disk, the Oneiric kernel does not seem to find it (boot only succeeds because it finds the xvd device).
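
For reproduction, an HVM guest definition along these lines should do (an xm config sketch following the stock xmexample.hvm; paths and sizes are placeholders, vcpus=1 being the relevant bit):

  kernel  = "/usr/lib/xen/boot/hvmloader"
  builder = "hvm"
  memory  = 2048
  vcpus   = 1
  disk    = [ 'file:/var/lib/xen/images/oneiric.img,hda,w' ]
  boot    = "c"
  serial  = "pty"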

When booting with more than one CPU, the boot stops in a tight loop, right after printing out

Brought up x CPUs
Total of x processors activated (y BogoMIPS).

Neither xm dmesg nor xm log contains more useful data, and increasing the debug level within the guest has nothing more to say either. It feels like something completing the SMP/CPU init gets stuck. I will try to get a crash dump of the system and see whether that has a bit more to say, though it is sometimes hard to find the right combination of utility versions to look into that.
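
For reference, one way to capture such a dump from dom0 is the xm toolstack's dump-core command (xl has an equivalent dump-core), then inspecting the result with the crash utility against a vmlinux with matching debug symbols. The domain name and paths below are placeholders:

  xm dump-core <domain> /var/tmp/guest-core
  crash /usr/lib/debug/boot/vmlinux-<version> /var/tmp/guest-core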

Revision history for this message
Stefan Bader (smb) wrote :

Hm, ok. The missing disk issue is just my misconfiguration...

<6>[ 0.000000] Xen version 3.4.
<7>[ 0.000000] Xen Platform PCI: I/O protocol version 1
<6>[ 0.000000] Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
<6>[ 0.000000] Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
<6>[ 0.000000] You might have to change the root device
<6>[ 0.000000] from /dev/hd[a-d] to /dev/xvd[a-d]
<6>[ 0.000000] in your root= kernel command line option

For the CPU part, there is this:

<6>[ 0.212226] Booting Node 0, Processors #1
<7>[ 0.213122] smpboot cpu 1: start_ip = 9a000
<6>[ 0.020000] mce: CPU supports 0 MCE banks
<4>[ 0.370357] #2
<7>[ 0.371474] smpboot cpu 2: start_ip = 9a000
<6>[ 0.020000] mce: CPU supports 0 MCE banks
<4>[ 0.530405] #3
<7>[ 0.531518] smpboot cpu 3: start_ip = 9a000
<6>[ 0.020000] mce: CPU supports 0 MCE banks
<4>[ 0.690353] #4
<7>[ 0.691459] smpboot cpu 4: start_ip = 9a000
<6>[ 0.020000] mce: CPU supports 0 MCE banks
<4>[ 0.850385] #5
<7>[ 0.851486] smpboot cpu 5: start_ip = 9a000
<6>[ 0.020000] mce: CPU supports 0 MCE banks
<4>[ 1.010421] #6
<7>[ 1.011525] smpboot cpu 6: start_ip = 9a000
<6>[ 0.020000] mce: CPU supports 0 MCE banks
<4>[ 1.170360] #7 Ok.
<7>[ 1.171691] smpboot cpu 7: start_ip = 9a000
<6>[ 0.020000] mce: CPU supports 0 MCE banks
<6>[ 1.330279] Brought up 8 CPUs
<6>[ 1.331627] Total of 8 processors activated (32017.39 BogoMIPS).

Need to investigate more...

Revision history for this message
Stefan Bader (smb) wrote :

I hope the crash data is not misleading, but it does look like an explanation for the situation. Looking at two dumps, there is one CPU showing activity in both, and in both cases the backtrace includes the following:

 #0 [ffff8805abcfdd10] schedule at ffffffff815f9fd2
 #1 [ffff8805abcfdd38] up at ffffffff810869a2
 #2 [ffff8805abcfdd48] __assign_irq_vector at ffffffff810291f4
 #3 [ffff8805abcfde18] set_mtrr at ffffffff81022a74
 #4 [ffff8805abcfdea8] mtrr_aps_init at ffffffff81023389
 #5 [ffff8805abcfdeb8] native_smp_cpus_done at ffffffff81cf4283
 #6 [ffff8805abcfdee8] smp_init at ffffffff81d02b2f
 #7 [ffff8805abcfdf18] kernel_init at ffffffff81ce6cc9
 #8 [ffff8805abcfdf48] device_not_available at ffffffff816057a4

The thing that is unclear to me is between #4 and #3. The BP is in set_mtrr to initialize the APs, preemption should be disabled, and the address (0xffffffff81022a74) does match up with the code where the BP waits for all APs to announce that they started the rendezvous handler. And right then something interrupts the BP (some APIC init code?), which then blocks on something else that is unlikely to ever happen, as the APs are waiting for the BP to go on...

Dave Walker (davewalker)
Changed in linux (Ubuntu):
assignee: Ben Howard (utlemming) → Canonical Kernel Team (canonical-kernel-team)
tags: added: server-o-ro
removed: server-o-rs
Revision history for this message
Stefan Bader (smb) wrote :

All the data structures look ok. cpu#0 has queued the mtrr_work_handler for all other CPUs (for simplicity I only looked at vcpu=2 here) and went into

       while (atomic_read(&data.count))
                cpu_relax();

which translates into:

0xffffffff81022a6b <set_mtrr+203>: nopl 0x0(%rax,%rax,1)
/home/smb/oneiric-amd64/ubuntu-2.6/arch/x86/include/asm/processor.h: 704
0xffffffff81022a70 <set_mtrr+208>: pause
/home/smb/oneiric-amd64/ubuntu-2.6/arch/x86/include/asm/atomic.h: 25
0xffffffff81022a72 <set_mtrr+210>: mov (%rax),%edx
/home/smb/oneiric-amd64/ubuntu-2.6/arch/x86/kernel/cpu/mtrr/main.c: 274
0xffffffff81022a74 <set_mtrr+212>: test %edx,%edx
0xffffffff81022a76 <set_mtrr+214>: jne 0xffffffff81022a70 <set_mtrr+208>

There does not seem to be a sensible way for cpu#0 to end up in a deeper call chain the way it appears to. And cpu#1 should start the mtrr_work_handler via its migration task, which also does not seem to happen...

  PID PPID CPU TASK ST %MEM VSZ RSS COMM
      0 0 0 ffffffff81c0b020 RU 0.0 0 0 [swapper]
> 0 2 1 ffff8805abd38000 RU 0.0 0 0 [kworker/0:0]
> 1 0 0 ffff8805abd00000 RU 0.0 0 0 [swapper]
      2 0 0 ffff8805abd016f0 IN 0.0 0 0 [kthreadd]
      3 2 0 ffff8805abd02de0 IN 0.0 0 0 [ksoftirqd/0]
      4 2 0 ffff8805abd044d0 IN 0.0 0 0 [kworker/0:0]
      5 2 0 ffff8805abd05bc0 IN 0.0 0 0 [kworker/u:0]
      6 2 0 ffff8805abd20000 IN 0.0 0 0 [migration/0]
      7 2 1 ffff8805abd216f0 SW 0.0 0 0 [migration/1]
      8 2 1 ffff8805abd22de0 SW 0.0 0 0 [kworker/1:0]
      9 2 1 ffff8805abd244d0 SW 0.0 0 0 [ksoftirqd/1]
     10 2 0 ffff8805abd25bc0 IN 0.0 0 0 [kworker/0:1]
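
To make the suspected failure mode concrete, here is a minimal userspace model of this rendezvous pattern (C11 atomics plus pthreads; all names invented, this is not the kernel code). The BP publishes a counter and spins until every AP handler has decremented it, so if the kick that should start the handlers is never delivered, the BP spins forever at exactly the spot seen in the backtrace:

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  #define NR_APS 1                       /* model vcpu=2: one BP, one AP */

  static atomic_int count = NR_APS;      /* APs that still have to check in */
  static atomic_bool kicked = false;     /* stands in for IPI delivery */

  static void *ap_handler(void *unused)
  {
          /* the AP runs its rendezvous handler only once the kick arrives */
          while (!atomic_load(&kicked))
                  ;                      /* cpu_relax() in the kernel */
          atomic_fetch_sub(&count, 1);
          return NULL;
  }

  int main(void)
  {
          pthread_t ap;

          pthread_create(&ap, NULL, ap_handler, NULL);

          /*
           * In the broken guest this is the step that never takes effect:
           * the IPI is sent through an event channel but never delivered.
           * Remove the next line and the BP loop below never exits.
           */
          atomic_store(&kicked, true);

          /* the loop the BP was stuck in at set_mtrr+212 */
          while (atomic_load(&count))
                  ;                      /* cpu_relax() */

          pthread_join(ap, NULL);
          printf("rendezvous completed\n");
          return 0;
  }

Built with cc -std=c11 -pthread this completes; with the atomic_store removed it models the observed symptom: one CPU at full utilization and no forward progress.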

Stefan Bader (smb)
Changed in linux (Ubuntu):
assignee: Canonical Kernel Team (canonical-kernel-team) → Stefan Bader (stefan-bader-canonical)
Scott Moser (smoser)
summary: - oneiric cluster compute instances do not boot
+ oneiric cluster compute (hvm) instances do not boot
Revision history for this message
Scott Moser (smoser) wrote :

Just as a point of reference, this also failed on the alpha-3 candidate (20110802.2):
  linux-image-3.0.0-7-virtual 3.0.0-7.9
  linux-image-virtual 3.0.0.7.8

Revision history for this message
Stefan Bader (smb) wrote : Kernel 2.6.39+ hangs when running as HVM guest under Xen

Since kernel 2.6.39 we have been experiencing strange hangs when booting these kernels as HVM
guests in Xen (similar hangs, but in different places, when looking at CentOS 5.4 +
Xen 3.4.3 as well as Xen 4.1 and a 3.0-based dom0). The problem only happens
when running with more than one vcpu.

I was able to examine some dumps[1] and it always seemed to be a weird
situation. In one example (booting 3.0 HVM under a Xen 3.4.3/2.6.18 dom0) the
lockup always seemed to occur when the delayed mtrr init took place. Cpu#0
seemed to have started the rendezvous (stop_cpu) but then been interrupted,
and the other (I was using vcpu=2 for simplicity) was idling somewhere else but
had the mtrr rendezvous handler queued up (it just seemed to never get started).

Things seemed to indicate some IPI problem, but to be sure I bisected to find
when the problem started. I ended up with the following patch which, when reverted,
allows me to bring up a 3.0 HVM guest with more than one CPU without any problems.

commit 99bbb3a84a99cd04ab16b998b20f01a72cfa9f4f
Author: Stefano Stabellini <email address hidden>
Date: Thu Dec 2 17:55:10 2010 +0000

    xen: PV on HVM: support PV spinlocks and IPIs

    Initialize PV spinlocks on boot CPU right after native_smp_prepare_cpus
    (that switch to APIC mode and initialize APIC routing); on secondary
    CPUs on CPU_UP_PREPARE.

    Enable the usage of event channels to send and receive IPIs when
    running as a PV on HVM guest.

Though I have not yet really understood why exactly this happens, I thought I
would post the results so far. It feels like an IPI signalled through the
event channel either does not come through or goes to the wrong CPU. It did not
always cause exactly the same place to fail: as said, the 3.0 guest running on the
CentOS dom0 was locking up early, right after all CPUs were brought up, while
during the bisect (using a kernel between 2.6.38 and .39-rc1) the lockup was later.

Maybe someone has a clue immediately. I will dig a bit deeper into the dumps in
the meantime. Looking at the description, which sounds like using event channels
was only intended for PV on HVM guests, it may be wrong in the first place to set
the xen IPI functions on the HVM side...

-Stefan

[1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/791850

Revision history for this message
Stefan Bader (smb) wrote : Re: [Xen-devel] Kernel 2.6.39+ hangs when running as HVM guest under Xen

On 08.08.2011 21:38, Konrad Rzeszutek Wilk wrote:
> On Thu, Aug 04, 2011 at 02:59:05PM +0200, Stefan Bader wrote:
>> Since kernel 2.6.39 we were experiencing strange hangs when booting those as HVM
>> guests in Xen (similar hangs but different places when looking at CentOS 5.4 +
>> Xen 3.4.3 as well as Xen 4.1 and a 3.0 based dom0). The problem only happens
>> when running with more than one vcpu.
>>
>
> Hey Stefan,
>
> We were all at the XenSummit and I think did not get to think about this at all.
> Also the merge window opened, so that ate a good chunk of time. Anyhow..
>

Ah, right. Know the feeling. :) I am travelling this week, too.

> Is this related to this: http://marc.info/?<email address hidden> ?
>

On a quick glance it seems to be different. What I was looking at was dom0
setups which worked for HVM guests up to kernel 2.6.38, and which locked up at
some point when a later guest kernel was started in SMP mode.

>> I was able to examine some dumps[1] and it always seemed to be a weird
>> situation. In one example (booting 3.0 HVM under a Xen 3.4.3/2.6.18 dom0) the
>> lockup always seemed to occur when the delayed mtrr init took place. Cpu#0
>> seemed to have started the rendezvous (stop_cpu) but then been interrupted,
>> and the other (I was using vcpu=2 for simplicity) was idling somewhere else but
>> had the mtrr rendezvous handler queued up (it just seemed to never get started).
>>
>> Things seemed to indicate some IPI problem but to be sure I went to bisect when
>> the problem started. I ended up with the following patch which, when reverted,
>> allows me to bring up a 3.0 HVM guest with more than one CPU without any problems.
>>
>> commit 99bbb3a84a99cd04ab16b998b20f01a72cfa9f4f
>> Author: Stefano Stabellini <email address hidden>
>> Date: Thu Dec 2 17:55:10 2010 +0000
>>
>> xen: PV on HVM: support PV spinlocks and IPIs
>>
>> Initialize PV spinlocks on boot CPU right after native_smp_prepare_cpus
>> (that switch to APIC mode and initialize APIC routing); on secondary
>> CPUs on CPU_UP_PREPARE.
>>
>> Enable the usage of event channels to send and receive IPIs when
>> running as a PV on HVM guest.
>>
>> Though I have not yet really understood why exactly this happens, I thought I
>> post the results so far. It feels like either signalling an IPI through the
>> eventchannel does not come through or goes to the wrong CPU. It did not seem to
>> cause the exactly same place to fail. Like said, the 3.0 guest running in the
>> CentOS dom0 was locking up early right after all CPUs were brought up. While
>> during the bisect (using a kernel between 2.6.38 and .39-rc1) the lockup was later.
>>
>> Maybe someone has a clue immediately. I will dig a bit deeper in the dumps in
>> the meantime. Looking at the description, which sounds like using event channels
>
> Anything turned up?

From the data structures everything seems to be set up correctly.

>> only was intended for PV on HVM guests, it is wrong in the first place to set
>> the xen ipi functions on the HVM side...
>
> On true HVM - sure, but on PVonHVM it sounds right.

Though exactly that seems to be what is happening. S...


Revision history for this message
Steven Noonan (steven-valvesoftware) wrote :

I can confirm that reverting 99bbb3a8 fixes the issue for cc1.4xlarge/cg1.4xlarge.

Public HVM AMI with the cheap fix applied: ami-9dbf7ef4 (derived from ami-0db17064, with a manually built kernel)

Revision history for this message
Stefan Bader (smb) wrote :

So after a bit more help from Stefano, a fix for that could be this one:

From 8e6c2f27782859b657faef508c6b56c2068af533 Mon Sep 17 00:00:00 2001
From: Stefano Stabellini <email address hidden>
Date: Wed, 17 Aug 2011 10:10:59 +0200
Subject: [PATCH] UBUNTU: (upstream) xen: Do not enable PV IPIs when vector
callback not present

Fix regression for HVM case on older (<4.1.1) hypervisors caused by

  commit 99bbb3a84a99cd04ab16b998b20f01a72cfa9f4f
  Author: Stefano Stabellini <email address hidden>
  Date: Thu Dec 2 17:55:10 2010 +0000

    xen: PV on HVM: support PV spinlocks and IPIs

This change replaced the SMP operations with event based handlers without
taking into account that this only works when the hypervisor supports
callback vectors. This causes unexplainable hangs early on boot for
HVM guests with more than one CPU.

BugLink: http://bugs.launchpad.net/bugs/791850

Signed-off-by: Stefan Bader <email address hidden>

---
 arch/x86/xen/smp.c | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index b4533a8..e79dbb9 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -521,8 +521,6 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
         native_smp_prepare_cpus(max_cpus);
         WARN_ON(xen_smp_intr_init(0));
 
-        if (!xen_have_vector_callback)
-                return;
         xen_init_lock_cpu(0);
         xen_init_spinlocks();
 }
@@ -546,6 +544,8 @@ static void xen_hvm_cpu_die(unsigned int cpu)
 
 void __init xen_hvm_smp_init(void)
 {
+        if (!xen_have_vector_callback)
+                return;
         smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
         smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
         smp_ops.cpu_up = xen_hvm_cpu_up;
--
1.7.4.1
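
To spell out why moving the check matters: before this patch, xen_hvm_smp_init() always installed the event-channel based smp_ops and only the spinlock initialization was guarded, so on hypervisors without vector callback support IPIs went down a path that could not be delivered. Below is a small userspace model of the corrected pattern (invented names, not kernel code): when the capability is absent, the override is skipped entirely and the native path stays in place.

  #include <stdbool.h>
  #include <stdio.h>

  struct smp_ops {
          void (*send_reschedule)(int cpu);
  };

  /* the emulated-APIC path that every hypervisor can deliver */
  static void native_send_reschedule(int cpu)
  {
          printf("IPI to cpu%d via emulated APIC\n", cpu);
  }

  /* the PV path; only deliverable with per-vcpu vector callbacks */
  static void evtchn_send_reschedule(int cpu)
  {
          printf("IPI to cpu%d via event channel\n", cpu);
  }

  static struct smp_ops smp_ops = {
          .send_reschedule = native_send_reschedule,
  };

  /* mirrors the fixed xen_hvm_smp_init(): bail out before overriding */
  static void hvm_smp_init(bool have_vector_callback)
  {
          if (!have_vector_callback)
                  return;
          smp_ops.send_reschedule = evtchn_send_reschedule;
  }

  int main(void)
  {
          hvm_smp_init(false);           /* e.g. Xen 3.4, as on cc1/cg1 */
          smp_ops.send_reschedule(1);    /* stays on the deliverable path */
          return 0;
  }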

Changed in linux (Ubuntu Oneiric):
status: Triaged → Fix Committed
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package linux - 3.0.0-9.12

---------------
linux (3.0.0-9.12) oneiric; urgency=low

  [ Andy Whitcroft ]

  * [Config] standardise CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
  * [Config] move ECRYPT_FS back to =y for all architectures
    - LP: #827197
  * record the compiler in the ABI and check for inconsistent builds

  [ Leann Ogasawara ]

  * Revert "SAUCE: OMAP: DSS2: enable hsclk in dsi_pll_init for OMAP36XX"
  * Revert "SAUCE: OMAP: DSS2: check for both cpu type and revision, rather
    than just revision"
  * Revert "SAUCE: ARM: OMAP: Add macros for comparing silicon revision"
  * rebase to v3.0.2
  * rebase to v3.0.3
  * Temporarily ignore module check
  * [Config] Set CONFIG_DM_MIRROR=m on amd64, i386, and arm
  * [Config] Set CONFIG_DM_MULTIPATH=m on amd64, i386, and arm
  * [Config] Set CONFIG_DM_SNAPSHOT=m on amd64, i386, and arm
  * [Config] Enable CONFIG_EDAC_AMD8111=m on powerpc
  * [Config] Enable CONFIG_EDAC_AMD8131=m on powerpc
  * [Config] Enable CONFIG_EDAC_CPC925=m on powerpc
  * [Config] Enable CONFIG_EDAC_PASEMI=m on powerpc
  * [Config] Set CONFIG_EFI_VARS=m on amd64 and i386

  [ Stefan Bader ]

  * [Upstream] xen-blkfront: Drop name and minor adjustments for emulated
    scsi devices
    - LP: #784937
  * [Config] Force perf to use libiberty for demangling
    - LP: #783660

  [ Stefano Stabellini ]

  * [Upstream] xen: Do not enable PV IPIs when vector callback not present
    - LP: #791850

  [ Tim Gardner ]

  * [Config] updateconfigs after rebase to 3.0.2

  [ Upstream Kernel Changes ]

  * Not all systems expose a firmware or platform mechanism for changing
    the backlight intensity on i915, so add native driver support.
    - LP: #568611
  * rebase to v3.0.2
  * rebase to v3.0.3
 -- Leann Ogasawara <email address hidden> Mon, 15 Aug 2011 13:35:57 -0700

Changed in linux (Ubuntu Oneiric):
status: Fix Committed → Fix Released