reverted: --- linux-azure-4.15.0/Documentation/ABI/testing/sysfs-class-cxl +++ linux-azure-4.15.0.orig/Documentation/ABI/testing/sysfs-class-cxl @@ -69,9 +69,7 @@ Contact: linuxppc-dev@lists.ozlabs.org Description: read/write Set the mode for prefaulting in segments into the segment table + when performing the START_WORK ioctl. Possible values: - when performing the START_WORK ioctl. Only applicable when - running under hashed page table mmu. - Possible values: none: No prefaulting (default) work_element_descriptor: Treat the work element descriptor as an effective address and @@ -246,11 +244,3 @@ Returns 1 if the psl timebase register is synchronized with the core timebase register, 0 otherwise. Users: https://github.com/ibm-capi/libcxl - -What: /sys/class/cxl//tunneled_ops_supported -Date: May 2018 -Contact: linuxppc-dev@lists.ozlabs.org -Description: read only - Returns 1 if tunneled operations are supported in capi mode, - 0 otherwise. -Users: https://github.com/ibm-capi/libcxl reverted: --- linux-azure-4.15.0/Documentation/ABI/testing/sysfs-class-net +++ linux-azure-4.15.0.orig/Documentation/ABI/testing/sysfs-class-net @@ -259,27 +259,3 @@ Description: Symbolic link to the PHY device this network device is attached to. 
- -What: /sys/class/net//carrier_changes -Date: Mar 2014 -KernelVersion: 3.15 -Contact: netdev@vger.kernel.org -Description: - 32-bit unsigned integer counting the number of times the link has - seen a change from UP to DOWN and vice versa - -What: /sys/class/net//carrier_up_count -Date: Jan 2018 -KernelVersion: 4.16 -Contact: netdev@vger.kernel.org -Description: - 32-bit unsigned integer counting the number of times the link has - been up - -What: /sys/class/net//carrier_down_count -Date: Jan 2018 -KernelVersion: 4.16 -Contact: netdev@vger.kernel.org -Description: - 32-bit unsigned integer counting the number of times the link has - been down diff -u linux-azure-4.15.0/Documentation/ABI/testing/sysfs-devices-system-cpu linux-azure-4.15.0/Documentation/ABI/testing/sysfs-devices-system-cpu --- linux-azure-4.15.0/Documentation/ABI/testing/sysfs-devices-system-cpu +++ linux-azure-4.15.0/Documentation/ABI/testing/sysfs-devices-system-cpu @@ -381,6 +381,7 @@ /sys/devices/system/cpu/vulnerabilities/spectre_v1 /sys/devices/system/cpu/vulnerabilities/spectre_v2 /sys/devices/system/cpu/vulnerabilities/spec_store_bypass + /sys/devices/system/cpu/vulnerabilities/l1tf Date: January 2018 Contact: Linux kernel mailing list Description: Information about CPU vulnerabilities @@ -394,0 +396,23 @@ + + Details about the l1tf file can be found in + Documentation/admin-guide/l1tf.rst + +What: /sys/devices/system/cpu/smt + /sys/devices/system/cpu/smt/active + /sys/devices/system/cpu/smt/control +Date: June 2018 +Contact: Linux kernel mailing list +Description: Control Symmetric Multi Threading (SMT) + + active: Tells whether SMT is active (enabled and siblings online) + + control: Read/write interface to control SMT. Possible + values: + + "on" SMT is enabled + "off" SMT is disabled + "forceoff" SMT is force disabled. Cannot be changed. + "notsupported" SMT is not supported by the CPU + + If control status is "forceoff" or "notsupported" writes + are rejected. 
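The SMT sysfs interface added above is scriptable. A minimal sketch follows; the `smt_state_msg` helper and its message wording are illustrative, while the sysfs path and the four control values come from the ABI description in this hunk.

```shell
# Map a value read from /sys/devices/system/cpu/smt/control (documented
# above) to a human-readable message. The helper is hypothetical; the four
# states are the ones listed in the ABI entry.
smt_state_msg() {
  case "$1" in
    on)           echo "SMT is enabled" ;;
    off)          echo "SMT is disabled" ;;
    forceoff)     echo "SMT is force disabled; writes are rejected" ;;
    notsupported) echo "SMT is not supported by the CPU" ;;
    *)            echo "unexpected SMT state: $1" ;;
  esac
}

ctl=/sys/devices/system/cpu/smt/control
# Only consult sysfs when the file exists (kernels without this patch lack it).
if [ -r "$ctl" ]; then smt_state_msg "$(cat "$ctl")"; fi
```

Note that a write of "on" or "off" to the control file is rejected when the state is "forceoff" or "notsupported", per the description above.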
diff -u linux-azure-4.15.0/Documentation/admin-guide/kernel-parameters.txt linux-azure-4.15.0/Documentation/admin-guide/kernel-parameters.txt --- linux-azure-4.15.0/Documentation/admin-guide/kernel-parameters.txt +++ linux-azure-4.15.0/Documentation/admin-guide/kernel-parameters.txt @@ -1920,10 +1920,84 @@ (virtualized real and unpaged mode) on capable Intel chips. Default is 1 (enabled) + kvm-intel.vmentry_l1d_flush=[KVM,Intel] Mitigation for L1 Terminal Fault + CVE-2018-3620. + + Valid arguments: never, cond, always + + always: L1D cache flush on every VMENTER. + cond: Flush L1D on VMENTER only when the code between + VMEXIT and VMENTER can leak host memory. + never: Disables the mitigation + + Default is cond (do L1 cache flush in specific instances) + kvm-intel.vpid= [KVM,Intel] Disable Virtual Processor Identification feature (tagged TLBs) on capable Intel chips. Default is 1 (enabled) + l1tf= [X86] Control mitigation of the L1TF vulnerability on + affected CPUs + + The kernel PTE inversion protection is unconditionally + enabled and cannot be disabled. + + full + Provides all available mitigations for the + L1TF vulnerability. Disables SMT and + enables all mitigations in the + hypervisors, i.e. unconditional L1D flush. + + SMT control and L1D flush control via the + sysfs interface is still possible after + boot. Hypervisors will issue a warning + when the first VM is started in a + potentially insecure configuration, + i.e. SMT enabled or L1D flush disabled. + + full,force + Same as 'full', but disables SMT and L1D + flush runtime control. Implies the + 'nosmt=force' command line option. + (i.e. sysfs control of SMT is disabled.) + + flush + Leaves SMT enabled and enables the default + hypervisor mitigation, i.e. conditional + L1D flush. + + SMT control and L1D flush control via the + sysfs interface is still possible after + boot. Hypervisors will issue a warning + when the first VM is started in a + potentially insecure configuration, + i.e. 
SMT enabled or L1D flush disabled. + + flush,nosmt + + Disables SMT and enables the default + hypervisor mitigation. + + SMT control and L1D flush control via the + sysfs interface is still possible after + boot. Hypervisors will issue a warning + when the first VM is started in a + potentially insecure configuration, + i.e. SMT enabled or L1D flush disabled. + + flush,nowarn + Same as 'flush', but hypervisors will not + warn when a VM is started in a potentially + insecure configuration. + + off + Disables hypervisor mitigations and doesn't + emit any warnings. + + Default is 'flush'. + + For details see: Documentation/admin-guide/l1tf.rst + l2cr= [PPC] l3cr= [PPC] @@ -2627,6 +2701,10 @@ nosmt [KNL,S390] Disable symmetric multithreading (SMT). Equivalent to smt=1. + [KNL,x86] Disable symmetric multithreading (SMT). + nosmt=force: Force disable SMT, cannot be undone + via the sysfs control file. + nospectre_v2 [X86] Disable all mitigations for the Spectre variant 2 (indirect branch prediction) vulnerability. System may allow data leaks with this option, which is equivalent diff -u linux-azure-4.15.0/Documentation/arm64/silicon-errata.txt linux-azure-4.15.0/Documentation/arm64/silicon-errata.txt --- linux-azure-4.15.0/Documentation/arm64/silicon-errata.txt +++ linux-azure-4.15.0/Documentation/arm64/silicon-errata.txt @@ -55,7 +55,6 @@ | ARM | Cortex-A57 | #834220 | ARM64_ERRATUM_834220 | | ARM | Cortex-A72 | #853709 | N/A | | ARM | Cortex-A73 | #858921 | ARM64_ERRATUM_858921 | -| ARM | Cortex-A55 | #1024718 | ARM64_ERRATUM_1024718 | | ARM | MMU-500 | #841119,#826419 | N/A | | | | | | | Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 | diff -u linux-azure-4.15.0/Documentation/virtual/kvm/api.txt linux-azure-4.15.0/Documentation/virtual/kvm/api.txt --- linux-azure-4.15.0/Documentation/virtual/kvm/api.txt +++ linux-azure-4.15.0/Documentation/virtual/kvm/api.txt @@ -123,14 +123,15 @@ flag KVM_VM_MIPS_VZ. 
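After boot, the effect of the chosen l1tf= option can be checked from the vulnerabilities file this series adds. A short sketch under stated assumptions: the `classify_l1tf` helper and its output words are illustrative; the sysfs path is the one documented in this patch, and the "Not affected" / "Mitigation:" / "Vulnerable" prefixes follow the usual convention of the /sys/devices/system/cpu/vulnerabilities/* files.

```shell
# Classify the contents of the l1tf vulnerabilities file documented above.
classify_l1tf() {
  case "$1" in
    "Not affected") echo "not-affected" ;;
    Mitigation:*)   echo "mitigated" ;;
    Vulnerable*)    echo "vulnerable" ;;
    *)              echo "unknown" ;;
  esac
}

f=/sys/devices/system/cpu/vulnerabilities/l1tf
# Only read the file when present (kernels without this patch lack it).
if [ -r "$f" ]; then classify_l1tf "$(cat "$f")"; fi
```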
-4.3 KVM_GET_MSR_INDEX_LIST +4.3 KVM_GET_MSR_INDEX_LIST, KVM_GET_MSR_FEATURE_INDEX_LIST -Capability: basic +Capability: basic, KVM_CAP_GET_MSR_FEATURES for KVM_GET_MSR_FEATURE_INDEX_LIST Architectures: x86 -Type: system +Type: system ioctl Parameters: struct kvm_msr_list (in/out) Returns: 0 on success; -1 on error Errors: + EFAULT: the msr index list cannot be read from or written to E2BIG: the msr index list is too big to fit in the array specified by the user. @@ -139,16 +140,23 @@ __u32 indices[0]; }; -This ioctl returns the guest msrs that are supported. The list varies -by kvm version and host processor, but does not change otherwise. The -user fills in the size of the indices array in nmsrs, and in return -kvm adjusts nmsrs to reflect the actual number of msrs and fills in -the indices array with their numbers. +The user fills in the size of the indices array in nmsrs, and in return +kvm adjusts nmsrs to reflect the actual number of msrs and fills in the +indices array with their numbers. + +KVM_GET_MSR_INDEX_LIST returns the guest msrs that are supported. The list +varies by kvm version and host processor, but does not change otherwise. Note: if kvm indicates supports MCE (KVM_CAP_MCE), then the MCE bank MSRs are not returned in the MSR list, as different vcpus can have a different number of banks, as set via the KVM_X86_SETUP_MCE ioctl. +KVM_GET_MSR_FEATURE_INDEX_LIST returns the list of MSRs that can be passed +to the KVM_GET_MSRS system ioctl. This lets userspace probe host capabilities +and processor features that are exposed via MSRs (e.g., VMX capabilities). +This list also varies by kvm version and host processor, but does not change +otherwise. 
+ 4.4 KVM_CHECK_EXTENSION @@ -475,14 +483,22 @@ 4.18 KVM_GET_MSRS -Capability: basic +Capability: basic (vcpu), KVM_CAP_GET_MSR_FEATURES (system) Architectures: x86 -Type: vcpu ioctl +Type: system ioctl, vcpu ioctl Parameters: struct kvm_msrs (in/out) -Returns: 0 on success, -1 on error +Returns: number of msrs successfully returned; + -1 on error + +When used as a system ioctl: +Reads the values of MSR-based features that are available for the VM. This +is similar to KVM_GET_SUPPORTED_CPUID, but it returns MSR indices and values. +The list of msr-based features can be obtained using KVM_GET_MSR_FEATURE_INDEX_LIST +in a system ioctl. +When used as a vcpu ioctl: Reads model-specific registers from the vcpu. Supported msr indices can -be obtained using KVM_GET_MSR_INDEX_LIST. +be obtained using KVM_GET_MSR_INDEX_LIST in a system ioctl. struct kvm_msrs { __u32 nmsrs; /* number of msrs in entries */ @@ -1944,9 +1960,6 @@ ARM 64-bit FP registers have the following id bit patterns: 0x4030 0000 0012 0 -ARM firmware pseudo-registers have the following bit pattern: - 0x4030 0000 0014 - arm64 registers are mapped using the lower 32 bits. The upper 16 of that is the register group type, or coprocessor number: @@ -1963,9 +1976,6 @@ arm64 system registers have the following id bit patterns: 0x6030 0000 0013 -arm64 firmware pseudo-registers have the following bit pattern: - 0x6030 0000 0014 - MIPS registers are mapped using the lower 32 bits. The upper 16 of that is the register group type: @@ -2500,8 +2510,7 @@ and execute guest code when KVM_RUN is called. - KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode. Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only). - - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 (or a future revision - backward compatible with v0.2) for the CPU. + - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU. Depends on KVM_CAP_ARM_PSCI_0_2. - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU. Depends on KVM_CAP_ARM_PMU_V3. 
reverted: --- linux-azure-4.15.0/Documentation/virtual/kvm/arm/psci.txt +++ linux-azure-4.15.0.orig/Documentation/virtual/kvm/arm/psci.txt @@ -1,30 +0,0 @@ -KVM implements the PSCI (Power State Coordination Interface) -specification in order to provide services such as CPU on/off, reset -and power-off to the guest. - -The PSCI specification is regularly updated to provide new features, -and KVM implements these updates if they make sense from a virtualization -point of view. - -This means that a guest booted on two different versions of KVM can -observe two different "firmware" revisions. This could cause issues if -a given guest is tied to a particular PSCI revision (unlikely), or if -a migration causes a different PSCI version to be exposed out of the -blue to an unsuspecting guest. - -In order to remedy this situation, KVM exposes a set of "firmware -pseudo-registers" that can be manipulated using the GET/SET_ONE_REG -interface. These registers can be saved/restored by userspace, and set -to a convenient value if required. 
- -The following register is defined: - -* KVM_REG_ARM_PSCI_VERSION: - - - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set - (and thus has already been initialized) - - Returns the current PSCI version on GET_ONE_REG (defaulting to the - highest PSCI version implemented by KVM and compatible with v0.2) - - Allows any PSCI version implemented by KVM and compatible with - v0.2 to be set with SET_ONE_REG - - Affects the whole VM (even if the register view is per-vcpu) reverted: --- linux-azure-4.15.0/arch/arm/boot/dts/imx7d-sdb.dts +++ linux-azure-4.15.0.orig/arch/arm/boot/dts/imx7d-sdb.dts @@ -82,7 +82,7 @@ enable-active-high; }; + reg_usb_otg2_vbus: regulator-usb-otg1-vbus { - reg_usb_otg2_vbus: regulator-usb-otg2-vbus { compatible = "regulator-fixed"; regulator-name = "usb_otg2_vbus"; regulator-min-microvolt = <5000000>; reverted: --- linux-azure-4.15.0/arch/arm/configs/socfpga_defconfig +++ linux-azure-4.15.0.orig/arch/arm/configs/socfpga_defconfig @@ -57,7 +57,6 @@ CONFIG_MTD_NAND=y CONFIG_MTD_NAND_DENALI_DT=y CONFIG_MTD_SPI_NOR=y -# CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set CONFIG_SPI_CADENCE_QUADSPI=y CONFIG_OF_OVERLAY=y CONFIG_OF_CONFIGFS=y reverted: --- linux-azure-4.15.0/arch/arm/include/asm/assembler.h +++ linux-azure-4.15.0.orig/arch/arm/include/asm/assembler.h @@ -536,14 +536,4 @@ #endif .endm -#ifdef CONFIG_KPROBES -#define _ASM_NOKPROBE(entry) \ - .pushsection "_kprobe_blacklist", "aw" ; \ - .balign 4 ; \ - .long entry; \ - .popsection -#else -#define _ASM_NOKPROBE(entry) -#endif - #endif /* __ASM_ASSEMBLER_H__ */ diff -u linux-azure-4.15.0/arch/arm/include/asm/kvm_host.h linux-azure-4.15.0/arch/arm/include/asm/kvm_host.h --- linux-azure-4.15.0/arch/arm/include/asm/kvm_host.h +++ linux-azure-4.15.0/arch/arm/include/asm/kvm_host.h @@ -75,9 +75,6 @@ /* Interrupt controller */ struct vgic_dist vgic; int max_vcpus; - - /* Mandated version of PSCI */ - u32 psci_version; }; #define KVM_NR_MEM_OBJS 40 diff -u 
linux-azure-4.15.0/arch/arm/include/asm/kvm_mmu.h linux-azure-4.15.0/arch/arm/include/asm/kvm_mmu.h --- linux-azure-4.15.0/arch/arm/include/asm/kvm_mmu.h +++ linux-azure-4.15.0/arch/arm/include/asm/kvm_mmu.h @@ -221,22 +221,6 @@ return 8; } -/* - * We are not in the kvm->srcu critical section most of the time, so we take - * the SRCU read lock here. Since we copy the data from the user page, we - * can immediately drop the lock again. - */ -static inline int kvm_read_guest_lock(struct kvm *kvm, - gpa_t gpa, void *data, unsigned long len) -{ - int srcu_idx = srcu_read_lock(&kvm->srcu); - int ret = kvm_read_guest(kvm, gpa, data, len); - - srcu_read_unlock(&kvm->srcu, srcu_idx); - - return ret; -} - static inline void *kvm_get_hyp_vector(void) { return kvm_ksym_ref(__kvm_hyp_vector); reverted: --- linux-azure-4.15.0/arch/arm/include/uapi/asm/kvm.h +++ linux-azure-4.15.0.orig/arch/arm/include/uapi/asm/kvm.h @@ -186,12 +186,6 @@ #define KVM_REG_ARM_VFP_FPINST 0x1009 #define KVM_REG_ARM_VFP_FPINST2 0x100A -/* KVM-as-firmware specific pseudo-registers */ -#define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT) -#define KVM_REG_ARM_FW_REG(r) (KVM_REG_ARM | KVM_REG_SIZE_U64 | \ - KVM_REG_ARM_FW | ((r) & 0xffff)) -#define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) - /* Device Control API: ARM VGIC */ #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1 reverted: --- linux-azure-4.15.0/arch/arm/kernel/traps.c +++ linux-azure-4.15.0.orig/arch/arm/kernel/traps.c @@ -19,7 +19,6 @@ #include #include #include -#include #include #include #include @@ -418,8 +417,7 @@ raw_spin_unlock_irqrestore(&undef_lock, flags); } +static int call_undef_hook(struct pt_regs *regs, unsigned int instr) -static nokprobe_inline -int call_undef_hook(struct pt_regs *regs, unsigned int instr) { struct undef_hook *hook; unsigned long flags; @@ -492,7 +490,6 @@ arm_notify_die("Oops - undefined instruction", regs, &info, 0, 6); } -NOKPROBE_SYMBOL(do_undefinstr) /* * Handle FIQ 
similarly to NMI on x86 systems. reverted: --- linux-azure-4.15.0/arch/arm/kvm/guest.c +++ linux-azure-4.15.0.orig/arch/arm/kvm/guest.c @@ -22,7 +22,6 @@ #include #include #include -#include #include #include #include @@ -177,7 +176,6 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) { return num_core_regs() + kvm_arm_num_coproc_regs(vcpu) - + kvm_arm_get_fw_num_regs(vcpu) + NUM_TIMER_REGS; } @@ -198,11 +196,6 @@ uindices++; } - ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices); - if (ret) - return ret; - uindices += kvm_arm_get_fw_num_regs(vcpu); - ret = copy_timer_indices(vcpu, uindices); if (ret) return ret; @@ -221,9 +214,6 @@ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) return get_core_reg(vcpu, reg); - if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) - return kvm_arm_get_fw_reg(vcpu, reg); - if (is_timer_reg(reg->id)) return get_timer_reg(vcpu, reg); @@ -240,9 +230,6 @@ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) return set_core_reg(vcpu, reg); - if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) - return kvm_arm_set_fw_reg(vcpu, reg); - if (is_timer_reg(reg->id)) return set_timer_reg(vcpu, reg); reverted: --- linux-azure-4.15.0/arch/arm/lib/getuser.S +++ linux-azure-4.15.0.orig/arch/arm/lib/getuser.S @@ -38,7 +38,6 @@ mov r0, #0 ret lr ENDPROC(__get_user_1) -_ASM_NOKPROBE(__get_user_1) ENTRY(__get_user_2) check_uaccess r0, 2, r1, r2, __get_user_bad @@ -59,7 +58,6 @@ mov r0, #0 ret lr ENDPROC(__get_user_2) -_ASM_NOKPROBE(__get_user_2) ENTRY(__get_user_4) check_uaccess r0, 4, r1, r2, __get_user_bad @@ -67,7 +65,6 @@ mov r0, #0 ret lr ENDPROC(__get_user_4) -_ASM_NOKPROBE(__get_user_4) ENTRY(__get_user_8) check_uaccess r0, 8, r1, r2, __get_user_bad8 @@ -81,7 +78,6 @@ mov r0, #0 ret lr ENDPROC(__get_user_8) -_ASM_NOKPROBE(__get_user_8) #ifdef __ARMEB__ ENTRY(__get_user_32t_8) @@ -95,7 +91,6 @@ mov r0, #0 ret lr ENDPROC(__get_user_32t_8) -_ASM_NOKPROBE(__get_user_32t_8) ENTRY(__get_user_64t_1) check_uaccess 
r0, 1, r1, r2, __get_user_bad8 @@ -103,7 +98,6 @@ mov r0, #0 ret lr ENDPROC(__get_user_64t_1) -_ASM_NOKPROBE(__get_user_64t_1) ENTRY(__get_user_64t_2) check_uaccess r0, 2, r1, r2, __get_user_bad8 @@ -120,7 +114,6 @@ mov r0, #0 ret lr ENDPROC(__get_user_64t_2) -_ASM_NOKPROBE(__get_user_64t_2) ENTRY(__get_user_64t_4) check_uaccess r0, 4, r1, r2, __get_user_bad8 @@ -128,7 +121,6 @@ mov r0, #0 ret lr ENDPROC(__get_user_64t_4) -_ASM_NOKPROBE(__get_user_64t_4) #endif __get_user_bad8: @@ -139,8 +131,6 @@ ret lr ENDPROC(__get_user_bad) ENDPROC(__get_user_bad8) -_ASM_NOKPROBE(__get_user_bad) -_ASM_NOKPROBE(__get_user_bad8) .pushsection __ex_table, "a" .long 1b, __get_user_bad reverted: --- linux-azure-4.15.0/arch/arm/probes/kprobes/opt-arm.c +++ linux-azure-4.15.0.orig/arch/arm/probes/kprobes/opt-arm.c @@ -165,14 +165,13 @@ { unsigned long flags; struct kprobe *p = &op->kp; + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); - struct kprobe_ctlblk *kcb; /* Save skipped registers */ regs->ARM_pc = (unsigned long)op->kp.addr; regs->ARM_ORIG_r0 = ~0UL; local_irq_save(flags); - kcb = get_kprobe_ctlblk(); if (kprobe_running()) { kprobes_inc_nmissed_count(&op->kp); @@ -192,7 +191,6 @@ local_irq_restore(flags); } -NOKPROBE_SYMBOL(optimized_callback) int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *orig) { diff -u linux-azure-4.15.0/arch/arm64/Kconfig linux-azure-4.15.0/arch/arm64/Kconfig --- linux-azure-4.15.0/arch/arm64/Kconfig +++ linux-azure-4.15.0/arch/arm64/Kconfig @@ -462,34 +462,6 @@ If unsure, say Y. -config ARM64_ERRATUM_1024718 - bool "Cortex-A55: 1024718: Update of DBM/AP bits without break before make might result in incorrect update" - default y - help - This option adds work around for Arm Cortex-A55 Erratum 1024718. - - Affected Cortex-A55 cores (r0p0, r0p1, r1p0) could cause incorrect - update of the hardware dirty bit when the DBM/AP bits are updated - without a break-before-make. 
The work around is to disable the usage - of hardware DBM locally on the affected cores. CPUs not affected by - erratum will continue to use the feature. - - If unsure, say Y. - -config ARM64_ERRATUM_1024718 - bool "Cortex-A55: 1024718: Update of DBM/AP bits without break before make might result in incorrect update" - default y - help - This option adds work around for Arm Cortex-A55 Erratum 1024718. - - Affected Cortex-A55 cores (r0p0, r0p1, r1p0) could cause incorrect - update of the hardware dirty bit when the DBM/AP bits are updated - without a break-before-make. The work around is to disable the usage - of hardware DBM locally on the affected cores. CPUs not affected by - erratum will continue to use the feature. - - If unsure, say Y. - config CAVIUM_ERRATUM_22375 bool "Cavium erratum 22375, 24313" default y diff -u linux-azure-4.15.0/arch/arm64/include/asm/assembler.h linux-azure-4.15.0/arch/arm64/include/asm/assembler.h --- linux-azure-4.15.0/arch/arm64/include/asm/assembler.h +++ linux-azure-4.15.0/arch/arm64/include/asm/assembler.h @@ -25,7 +25,6 @@ #include #include -#include #include #include #include @@ -526,41 +525,2 @@ -/* - * Check the MIDR_EL1 of the current CPU for a given model and a range of - * variant/revision. See asm/cputype.h for the macros used below. - * - * model: MIDR_CPU_MODEL of CPU - * rv_min: Minimum of MIDR_CPU_VAR_REV() - * rv_max: Maximum of MIDR_CPU_VAR_REV() - * res: Result register. - * tmp1, tmp2, tmp3: Temporary registers - * - * Corrupts: res, tmp1, tmp2, tmp3 - * Returns: 0, if the CPU id doesn't match. Non-zero otherwise - */ - .macro cpu_midr_match model, rv_min, rv_max, res, tmp1, tmp2, tmp3 - mrs \res, midr_el1 - mov_q \tmp1, (MIDR_REVISION_MASK | MIDR_VARIANT_MASK) - mov_q \tmp2, MIDR_CPU_MODEL_MASK - and \tmp3, \res, \tmp2 // Extract model - and \tmp1, \res, \tmp1 // rev & variant - mov_q \tmp2, \model - cmp \tmp3, \tmp2 - cset \res, eq - cbz \res, .Ldone\@ // Model matches ? 
- - .if (\rv_min != 0) // Skip min check if rv_min == 0 - mov_q \tmp3, \rv_min - cmp \tmp1, \tmp3 - cset \res, ge - .endif // \rv_min != 0 - /* Skip rv_max check if rv_min == rv_max && rv_min != 0 */ - .if ((\rv_min != \rv_max) || \rv_min == 0) - mov_q \tmp2, \rv_max - cmp \tmp1, \tmp2 - cset \tmp2, le - and \res, \res, \tmp2 - .endif -.Ldone\@: - .endm - #endif /* __ASM_ASSEMBLER_H */ diff -u linux-azure-4.15.0/arch/arm64/include/asm/cputype.h linux-azure-4.15.0/arch/arm64/include/asm/cputype.h --- linux-azure-4.15.0/arch/arm64/include/asm/cputype.h +++ linux-azure-4.15.0/arch/arm64/include/asm/cputype.h @@ -78,13 +78,11 @@ #define ARM_CPU_PART_AEM_V8 0xD0F #define ARM_CPU_PART_FOUNDATION 0xD00 -#define ARM_CPU_PART_CORTEX_A55 0xD05 #define ARM_CPU_PART_CORTEX_A57 0xD07 #define ARM_CPU_PART_CORTEX_A72 0xD08 #define ARM_CPU_PART_CORTEX_A53 0xD03 #define ARM_CPU_PART_CORTEX_A73 0xD09 #define ARM_CPU_PART_CORTEX_A75 0xD0A -#define ARM_CPU_PART_CORTEX_A55 0xD05 #define APM_CPU_PART_POTENZA 0x000 @@ -100,12 +98,10 @@ #define QCOM_CPU_PART_KRYO 0x200 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53) -#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55) #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57) #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72) #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73) #define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75) -#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55) #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX) diff -u linux-azure-4.15.0/arch/arm64/include/asm/kvm_host.h 
linux-azure-4.15.0/arch/arm64/include/asm/kvm_host.h --- linux-azure-4.15.0/arch/arm64/include/asm/kvm_host.h +++ linux-azure-4.15.0/arch/arm64/include/asm/kvm_host.h @@ -73,9 +73,6 @@ /* Interrupt controller */ struct vgic_dist vgic; - - /* Mandated version of PSCI */ - u32 psci_version; }; #define KVM_NR_MEM_OBJS 40 diff -u linux-azure-4.15.0/arch/arm64/include/asm/kvm_mmu.h linux-azure-4.15.0/arch/arm64/include/asm/kvm_mmu.h --- linux-azure-4.15.0/arch/arm64/include/asm/kvm_mmu.h +++ linux-azure-4.15.0/arch/arm64/include/asm/kvm_mmu.h @@ -309,22 +309,6 @@ return (cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR1_VMIDBITS_SHIFT) == 2) ? 16 : 8; } -/* - * We are not in the kvm->srcu critical section most of the time, so we take - * the SRCU read lock here. Since we copy the data from the user page, we - * can immediately drop the lock again. - */ -static inline int kvm_read_guest_lock(struct kvm *kvm, - gpa_t gpa, void *data, unsigned long len) -{ - int srcu_idx = srcu_read_lock(&kvm->srcu); - int ret = kvm_read_guest(kvm, gpa, data, len); - - srcu_read_unlock(&kvm->srcu, srcu_idx); - - return ret; -} - #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR #include reverted: --- linux-azure-4.15.0/arch/arm64/include/uapi/asm/kvm.h +++ linux-azure-4.15.0.orig/arch/arm64/include/uapi/asm/kvm.h @@ -206,12 +206,6 @@ #define KVM_REG_ARM_TIMER_CNT ARM64_SYS_REG(3, 3, 14, 3, 2) #define KVM_REG_ARM_TIMER_CVAL ARM64_SYS_REG(3, 3, 14, 0, 2) -/* KVM-as-firmware specific pseudo-registers */ -#define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT) -#define KVM_REG_ARM_FW_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \ - KVM_REG_ARM_FW | ((r) & 0xffff)) -#define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) - /* Device Control API: ARM VGIC */ #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1 reverted: --- linux-azure-4.15.0/arch/arm64/kvm/guest.c +++ linux-azure-4.15.0.orig/arch/arm64/kvm/guest.c @@ -25,7 +25,6 @@ #include #include #include -#include #include 
#include #include @@ -206,7 +205,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) { return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu) + + NUM_TIMER_REGS; - + kvm_arm_get_fw_num_regs(vcpu) + NUM_TIMER_REGS; } /** @@ -226,11 +225,6 @@ uindices++; } - ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices); - if (ret) - return ret; - uindices += kvm_arm_get_fw_num_regs(vcpu); - ret = copy_timer_indices(vcpu, uindices); if (ret) return ret; @@ -249,9 +243,6 @@ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) return get_core_reg(vcpu, reg); - if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) - return kvm_arm_get_fw_reg(vcpu, reg); - if (is_timer_reg(reg->id)) return get_timer_reg(vcpu, reg); @@ -268,9 +259,6 @@ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) return set_core_reg(vcpu, reg); - if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) - return kvm_arm_set_fw_reg(vcpu, reg); - if (is_timer_reg(reg->id)) return set_timer_reg(vcpu, reg); diff -u linux-azure-4.15.0/arch/arm64/mm/proc.S linux-azure-4.15.0/arch/arm64/mm/proc.S --- linux-azure-4.15.0/arch/arm64/mm/proc.S +++ linux-azure-4.15.0/arch/arm64/mm/proc.S @@ -447,11 +447,6 @@ cbz x9, 2f cmp x9, #2 b.lt 1f -#ifdef CONFIG_ARM64_ERRATUM_1024718 - /* Disable hardware DBM on Cortex-A55 r0p0, r0p1 & r1p0 */ - cpu_midr_match MIDR_CORTEX_A55, MIDR_CPU_VAR_REV(0, 0), MIDR_CPU_VAR_REV(1, 0), x1, x2, x3, x4 - cbnz x1, 1f -#endif orr x10, x10, #TCR_HD // hardware Dirty flag update 1: orr x10, x10, #TCR_HA // hardware Access flag update 2: diff -u linux-azure-4.15.0/arch/powerpc/include/asm/security_features.h linux-azure-4.15.0/arch/powerpc/include/asm/security_features.h --- linux-azure-4.15.0/arch/powerpc/include/asm/security_features.h +++ linux-azure-4.15.0/arch/powerpc/include/asm/security_features.h @@ -76,10 +76,2 @@ - -// Features enabled by default -#define SEC_FTR_DEFAULT \ - (SEC_FTR_L1D_FLUSH_HV | \ - SEC_FTR_L1D_FLUSH_PR | \ - SEC_FTR_BNDS_CHK_SPEC_BAR | \ - 
SEC_FTR_FAVOUR_SECURITY) - #endif /* _ASM_POWERPC_SECURITY_FEATURES_H */ reverted: --- linux-azure-4.15.0/arch/powerpc/kernel/eeh_driver.c +++ linux-azure-4.15.0.orig/arch/powerpc/kernel/eeh_driver.c @@ -207,18 +207,18 @@ if (!dev || eeh_dev_removed(edev) || eeh_pe_passed(edev->pe)) return NULL; - - device_lock(&dev->dev); dev->error_state = pci_channel_io_frozen; driver = eeh_pcid_get(dev); + if (!driver) return NULL; - if (!driver) goto out_no_dev; eeh_disable_irq(dev); if (!driver->err_handler || + !driver->err_handler->error_detected) { + eeh_pcid_put(dev); + return NULL; + } - !driver->err_handler->error_detected) - goto out; rc = driver->err_handler->error_detected(dev, pci_channel_io_frozen); @@ -227,10 +227,7 @@ if (*res == PCI_ERS_RESULT_NONE) *res = rc; edev->in_error = true; -out: eeh_pcid_put(dev); -out_no_dev: - device_unlock(&dev->dev); return NULL; } @@ -253,14 +250,15 @@ if (!dev || eeh_dev_removed(edev) || eeh_pe_passed(edev->pe)) return NULL; - device_lock(&dev->dev); driver = eeh_pcid_get(dev); + if (!driver) return NULL; - if (!driver) goto out_no_dev; if (!driver->err_handler || !driver->err_handler->mmio_enabled || + (edev->mode & EEH_DEV_NO_HANDLER)) { + eeh_pcid_put(dev); + return NULL; + } - (edev->mode & EEH_DEV_NO_HANDLER)) - goto out; rc = driver->err_handler->mmio_enabled(dev); @@ -268,10 +266,7 @@ if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc; if (*res == PCI_ERS_RESULT_NONE) *res = rc; -out: eeh_pcid_put(dev); -out_no_dev: - device_unlock(&dev->dev); return NULL; } @@ -294,20 +289,20 @@ if (!dev || eeh_dev_removed(edev) || eeh_pe_passed(edev->pe)) return NULL; - - device_lock(&dev->dev); dev->error_state = pci_channel_io_normal; driver = eeh_pcid_get(dev); + if (!driver) return NULL; - if (!driver) goto out_no_dev; eeh_enable_irq(dev); if (!driver->err_handler || !driver->err_handler->slot_reset || (edev->mode & EEH_DEV_NO_HANDLER) || + (!edev->in_error)) { + eeh_pcid_put(dev); + return NULL; + } - (!edev->in_error)) - goto out; rc = 
driver->err_handler->slot_reset(dev); if ((*res == PCI_ERS_RESULT_NONE) || @@ -315,10 +310,7 @@ if (*res == PCI_ERS_RESULT_DISCONNECT && rc == PCI_ERS_RESULT_NEED_RESET) *res = rc; -out: eeh_pcid_put(dev); -out_no_dev: - device_unlock(&dev->dev); return NULL; } @@ -369,12 +361,10 @@ if (!dev || eeh_dev_removed(edev) || eeh_pe_passed(edev->pe)) return NULL; - - device_lock(&dev->dev); dev->error_state = pci_channel_io_normal; driver = eeh_pcid_get(dev); + if (!driver) return NULL; - if (!driver) goto out_no_dev; was_in_error = edev->in_error; edev->in_error = false; @@ -384,15 +374,13 @@ !driver->err_handler->resume || (edev->mode & EEH_DEV_NO_HANDLER) || !was_in_error) { edev->mode &= ~EEH_DEV_NO_HANDLER; + eeh_pcid_put(dev); + return NULL; - goto out; } driver->err_handler->resume(dev); -out: eeh_pcid_put(dev); -out_no_dev: - device_unlock(&dev->dev); return NULL; } @@ -412,25 +400,22 @@ if (!dev || eeh_dev_removed(edev) || eeh_pe_passed(edev->pe)) return NULL; - - device_lock(&dev->dev); dev->error_state = pci_channel_io_perm_failure; driver = eeh_pcid_get(dev); + if (!driver) return NULL; - if (!driver) goto out_no_dev; eeh_disable_irq(dev); if (!driver->err_handler || + !driver->err_handler->error_detected) { + eeh_pcid_put(dev); + return NULL; + } - !driver->err_handler->error_detected) - goto out; driver->err_handler->error_detected(dev, pci_channel_io_perm_failure); -out: eeh_pcid_put(dev); -out_no_dev: - device_unlock(&dev->dev); return NULL; } diff -u linux-azure-4.15.0/arch/powerpc/kernel/security.c linux-azure-4.15.0/arch/powerpc/kernel/security.c --- linux-azure-4.15.0/arch/powerpc/kernel/security.c +++ linux-azure-4.15.0/arch/powerpc/kernel/security.c @@ -12,7 +12,12 @@ #include -unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT; +unsigned long powerpc_security_features __read_mostly = \ + SEC_FTR_L1D_FLUSH_HV | \ + SEC_FTR_L1D_FLUSH_PR | \ + SEC_FTR_BNDS_CHK_SPEC_BAR | \ + SEC_FTR_FAVOUR_SECURITY; + ssize_t 
cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
diff -u linux-azure-4.15.0/arch/powerpc/platforms/powernv/opal-nvram.c linux-azure-4.15.0/arch/powerpc/platforms/powernv/opal-nvram.c
--- linux-azure-4.15.0/arch/powerpc/platforms/powernv/opal-nvram.c
+++ linux-azure-4.15.0/arch/powerpc/platforms/powernv/opal-nvram.c
@@ -44,10 +44,6 @@
 	return count;
 }
-/*
- * This can be called in the panic path with interrupts off, so use
- * mdelay in that case.
- */
 static ssize_t opal_nvram_write(char *buf, size_t count, loff_t *index)
 {
 	s64 rc = OPAL_BUSY;
@@ -62,16 +58,10 @@
 	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
 		rc = opal_write_nvram(__pa(buf), count, off);
 		if (rc == OPAL_BUSY_EVENT) {
-			if (in_interrupt() || irqs_disabled())
-				mdelay(OPAL_BUSY_DELAY_MS);
-			else
-				msleep(OPAL_BUSY_DELAY_MS);
+			msleep(OPAL_BUSY_DELAY_MS);
 			opal_poll_events(NULL);
 		} else if (rc == OPAL_BUSY) {
-			if (in_interrupt() || irqs_disabled())
-				mdelay(OPAL_BUSY_DELAY_MS);
-			else
-				msleep(OPAL_BUSY_DELAY_MS);
+			msleep(OPAL_BUSY_DELAY_MS);
 		}
 	}
reverted:
--- linux-azure-4.15.0/arch/s390/crypto/arch_random.c
+++ linux-azure-4.15.0.orig/arch/s390/crypto/arch_random.c
@@ -2,37 +2,14 @@
 /*
  * s390 arch random implementation.
  *
+ * Copyright IBM Corp. 2017
+ * Author(s): Harald Freudenberger
- * Copyright IBM Corp. 2017, 2018
- * Author(s): Harald Freudenberger
- *
- * The s390_arch_random_generate() function may be called from random.c
- * in interrupt context. So this implementation does the best to be very
- * fast. There is a buffer of random data which is asynchronously checked
- * and filled by a workqueue thread.
- * If there are enough bytes in the buffer the s390_arch_random_generate()
- * just delivers these bytes. Otherwise false is returned until the
- * worker thread refills the buffer.
- * The worker fills the rng buffer by pulling fresh entropy from the
- * high quality (but slow) true hardware random generator. This entropy
- * is then spread over the buffer with an pseudo random generator PRNG.
- * As the arch_get_random_seed_long() fetches 8 bytes and the calling
- * function add_interrupt_randomness() counts this as 1 bit entropy the
- * distribution needs to make sure there is in fact 1 bit entropy contained
- * in 8 bytes of the buffer. The current values pull 32 byte entropy
- * and scatter this into a 2048 byte buffer. So 8 byte in the buffer
- * will contain 1 bit of entropy.
- * The worker thread is rescheduled based on the charge level of the
- * buffer but at least with 500 ms delay to avoid too much CPU consumption.
- * So the max. amount of rng data delivered via arch_get_random_seed is
- * limited to 4k bytes per second.
 */
 #include
 #include
 #include
-#include
 #include
-#include
 #include
 
 DEFINE_STATIC_KEY_FALSE(s390_arch_random_available);
@@ -40,83 +17,11 @@
 atomic64_t s390_arch_random_counter = ATOMIC64_INIT(0);
 EXPORT_SYMBOL(s390_arch_random_counter);
 
-#define ARCH_REFILL_TICKS (HZ/2)
-#define ARCH_PRNG_SEED_SIZE 32
-#define ARCH_RNG_BUF_SIZE 2048
-
-static DEFINE_SPINLOCK(arch_rng_lock);
-static u8 *arch_rng_buf;
-static unsigned int arch_rng_buf_idx;
-
-static void arch_rng_refill_buffer(struct work_struct *);
-static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
-
-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
-{
-	/* lock rng buffer */
-	if (!spin_trylock(&arch_rng_lock))
-		return false;
-
-	/* try to resolve the requested amount of bytes from the buffer */
-	arch_rng_buf_idx -= nbytes;
-	if (arch_rng_buf_idx < ARCH_RNG_BUF_SIZE) {
-		memcpy(buf, arch_rng_buf + arch_rng_buf_idx, nbytes);
-		atomic64_add(nbytes, &s390_arch_random_counter);
-		spin_unlock(&arch_rng_lock);
-		return true;
-	}
-
-	/* not enough bytes in rng buffer, refill is done asynchronously */
-	spin_unlock(&arch_rng_lock);
-
-	return false;
-}
-EXPORT_SYMBOL(s390_arch_random_generate);
-
-static void arch_rng_refill_buffer(struct work_struct *unused)
-{
-	unsigned int delay = ARCH_REFILL_TICKS;
-
-	spin_lock(&arch_rng_lock);
-	if (arch_rng_buf_idx > ARCH_RNG_BUF_SIZE) {
-		/* buffer is exhausted and needs refill */
-		u8 seed[ARCH_PRNG_SEED_SIZE];
-		u8 prng_wa[240];
-		/* fetch ARCH_PRNG_SEED_SIZE bytes of entropy */
-		cpacf_trng(NULL, 0, seed, sizeof(seed));
-		/* blow this entropy up to ARCH_RNG_BUF_SIZE with PRNG */
-		memset(prng_wa, 0, sizeof(prng_wa));
-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
-			   &prng_wa, NULL, 0, seed, sizeof(seed));
-		cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN,
-			   &prng_wa, arch_rng_buf, ARCH_RNG_BUF_SIZE, NULL, 0);
-		arch_rng_buf_idx = ARCH_RNG_BUF_SIZE;
-	}
-	delay += (ARCH_REFILL_TICKS * arch_rng_buf_idx) / ARCH_RNG_BUF_SIZE;
-	spin_unlock(&arch_rng_lock);
-
-	/* kick next check */
-	queue_delayed_work(system_long_wq, &arch_rng_work, delay);
-}
-
 static int __init s390_arch_random_init(void)
 {
+	/* check if subfunction CPACF_PRNO_TRNG is available */
+	if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG))
-	/* all the needed PRNO subfunctions available ? */
-	if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG) &&
-	    cpacf_query_func(CPACF_PRNO, CPACF_PRNO_SHA512_DRNG_GEN)) {
-
-		/* alloc arch random working buffer */
-		arch_rng_buf = kmalloc(ARCH_RNG_BUF_SIZE, GFP_KERNEL);
-		if (!arch_rng_buf)
-			return -ENOMEM;
-
-		/* kick worker queue job to fill the random buffer */
-		queue_delayed_work(system_long_wq,
-				   &arch_rng_work, ARCH_REFILL_TICKS);
-
-		/* enable arch random to the outside world */
 		static_branch_enable(&s390_arch_random_available);
-	}
 	return 0;
 }
reverted:
--- linux-azure-4.15.0/arch/s390/crypto/crc32be-vx.S
+++ linux-azure-4.15.0.orig/arch/s390/crypto/crc32be-vx.S
@@ -13,7 +13,6 @@
 */
 #include
-#include
 #include
 
 /* Vector register range containing CRC-32 constants */
@@ -68,8 +67,6 @@
 .previous
 
-	GEN_BR_THUNK %r14
-
 .text
 /*
  * The CRC-32 function(s) use these calling conventions:
@@ -206,6 +203,6 @@
 .Ldone:
 	VLGVF %r2,%v2,3
+	br %r14
-	BR_EX %r14
 .previous
reverted:
--- linux-azure-4.15.0/arch/s390/crypto/crc32le-vx.S
+++ linux-azure-4.15.0.orig/arch/s390/crypto/crc32le-vx.S
@@ -14,7 +14,6 @@
 */
 #include
-#include
 #include
 
 /* Vector register range containing CRC-32 constants */
@@ -77,7 +76,6 @@
 .previous
 
-	GEN_BR_THUNK %r14
 
 .text
@@ -266,6 +264,6 @@
 .Ldone:
 	VLGVF %r2,%v2,2
+	br %r14
-	BR_EX %r14
 .previous
reverted:
--- linux-azure-4.15.0/arch/s390/include/asm/alternative-asm.h
+++ linux-azure-4.15.0.orig/arch/s390/include/asm/alternative-asm.h
@@ -1,108 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_S390_ALTERNATIVE_ASM_H
-#define _ASM_S390_ALTERNATIVE_ASM_H
-
-#ifdef __ASSEMBLY__
-
-/*
- * Check the length of an instruction sequence. The length may not be larger
- * than 254 bytes and it has to be divisible by 2.
- */
-.macro alt_len_check start,end
-	.if ( \end - \start ) > 254
-	.error "cpu alternatives does not support instructions blocks > 254 bytes\n"
-	.endif
-	.if ( \end - \start ) % 2
-	.error "cpu alternatives instructions length is odd\n"
-	.endif
-.endm
-
-/*
- * Issue one struct alt_instr descriptor entry (need to put it into
- * the section .altinstructions, see below). This entry contains
- * enough information for the alternatives patching code to patch an
- * instruction. See apply_alternatives().
- */
-.macro alt_entry orig_start, orig_end, alt_start, alt_end, feature
-	.long \orig_start - .
-	.long \alt_start - .
-	.word \feature
-	.byte \orig_end - \orig_start
-	.byte \alt_end - \alt_start
-.endm
-
-/*
- * Fill up @bytes with nops. The macro emits 6-byte nop instructions
- * for the bulk of the area, possibly followed by a 4-byte and/or
- * a 2-byte nop if the size of the area is not divisible by 6.
- */
-.macro alt_pad_fill bytes
-	.fill ( \bytes ) / 6, 6, 0xc0040000
-	.fill ( \bytes ) % 6 / 4, 4, 0x47000000
-	.fill ( \bytes ) % 6 % 4 / 2, 2, 0x0700
-.endm
-
-/*
- * Fill up @bytes with nops. If the number of bytes is larger
- * than 6, emit a jg instruction to branch over all nops, then
- * fill an area of size (@bytes - 6) with nop instructions.
- */
-.macro alt_pad bytes
-	.if ( \bytes > 0 )
-	.if ( \bytes > 6 )
-	jg . + \bytes
-	alt_pad_fill \bytes - 6
-	.else
-	alt_pad_fill \bytes
-	.endif
-	.endif
-.endm
-
-/*
- * Define an alternative between two instructions. If @feature is
- * present, early code in apply_alternatives() replaces @oldinstr with
- * @newinstr. ".skip" directive takes care of proper instruction padding
- * in case @newinstr is longer than @oldinstr.
- */
-.macro ALTERNATIVE oldinstr, newinstr, feature
-	.pushsection .altinstr_replacement,"ax"
-770:	\newinstr
-771:	.popsection
-772:	\oldinstr
-773:	alt_len_check 770b, 771b
-	alt_len_check 772b, 773b
-	alt_pad ( ( 771b - 770b ) - ( 773b - 772b ) )
-774:	.pushsection .altinstructions,"a"
-	alt_entry 772b, 774b, 770b, 771b, \feature
-	.popsection
-.endm
-
-/*
- * Define an alternative between two instructions. If @feature is
- * present, early code in apply_alternatives() replaces @oldinstr with
- * @newinstr. ".skip" directive takes care of proper instruction padding
- * in case @newinstr is longer than @oldinstr.
- */
-.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
-	.pushsection .altinstr_replacement,"ax"
-770:	\newinstr1
-771:	\newinstr2
-772:	.popsection
-773:	\oldinstr
-774:	alt_len_check 770b, 771b
-	alt_len_check 771b, 772b
-	alt_len_check 773b, 774b
-	.if ( 771b - 770b > 772b - 771b )
-	alt_pad ( ( 771b - 770b ) - ( 774b - 773b ) )
-	.else
-	alt_pad ( ( 772b - 771b ) - ( 774b - 773b ) )
-	.endif
-775:	.pushsection .altinstructions,"a"
-	alt_entry 773b, 775b, 770b, 771b,\feature1
-	alt_entry 773b, 775b, 771b, 772b,\feature2
-	.popsection
-.endm
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* _ASM_S390_ALTERNATIVE_ASM_H */
reverted:
--- linux-azure-4.15.0/arch/s390/include/asm/archrandom.h
+++ linux-azure-4.15.0.orig/arch/s390/include/asm/archrandom.h
@@ -15,11 +15,16 @@
 #include
 #include
+#include
 
 DECLARE_STATIC_KEY_FALSE(s390_arch_random_available);
 extern atomic64_t s390_arch_random_counter;
 
+static void s390_arch_random_generate(u8 *buf, unsigned int nbytes)
+{
+	cpacf_trng(NULL, 0, buf, nbytes);
+	atomic64_add(nbytes, &s390_arch_random_counter);
+}
-bool s390_arch_random_generate(u8 *buf, unsigned int nbytes);
 
 static inline bool arch_has_random(void)
 {
@@ -46,7 +51,8 @@
 static inline bool arch_get_random_seed_long(unsigned long *v)
 {
 	if (static_branch_likely(&s390_arch_random_available)) {
+		s390_arch_random_generate((u8 *)v, sizeof(*v));
+		return true;
-		return s390_arch_random_generate((u8 *)v, sizeof(*v));
 	}
 	return false;
 }
@@ -54,7 +60,8 @@
 static inline bool arch_get_random_seed_int(unsigned int *v)
 {
 	if (static_branch_likely(&s390_arch_random_available)) {
+		s390_arch_random_generate((u8 *)v, sizeof(*v));
+		return true;
-		return s390_arch_random_generate((u8 *)v, sizeof(*v));
 	}
 	return false;
 }
reverted:
--- linux-azure-4.15.0/arch/s390/include/asm/nospec-insn.h
+++ linux-azure-4.15.0.orig/arch/s390/include/asm/nospec-insn.h
@@ -1,194 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_S390_NOSPEC_ASM_H
-#define _ASM_S390_NOSPEC_ASM_H
-
-#include
-#include
-#ifdef __ASSEMBLY__
-
-#ifdef CONFIG_EXPOLINE
-
-_LC_BR_R1 = __LC_BR_R1
-
-/*
- * The expoline macros are used to create thunks in the same format
- * as gcc generates them. The 'comdat' section flag makes sure that
- * the various thunks are merged into a single copy.
- */
-	.macro __THUNK_PROLOG_NAME name
-	.pushsection .text.\name,"axG",@progbits,\name,comdat
-	.globl \name
-	.hidden \name
-	.type \name,@function
-\name:
-	.cfi_startproc
-	.endm
-
-	.macro __THUNK_EPILOG
-	.cfi_endproc
-	.popsection
-	.endm
-
-	.macro __THUNK_PROLOG_BR r1,r2
-	__THUNK_PROLOG_NAME __s390x_indirect_jump_r\r2\()use_r\r1
-	.endm
-
-	.macro __THUNK_PROLOG_BC d0,r1,r2
-	__THUNK_PROLOG_NAME __s390x_indirect_branch_\d0\()_\r2\()use_\r1
-	.endm
-
-	.macro __THUNK_BR r1,r2
-	jg __s390x_indirect_jump_r\r2\()use_r\r1
-	.endm
-
-	.macro __THUNK_BC d0,r1,r2
-	jg __s390x_indirect_branch_\d0\()_\r2\()use_\r1
-	.endm
-
-	.macro __THUNK_BRASL r1,r2,r3
-	brasl \r1,__s390x_indirect_jump_r\r3\()use_r\r2
-	.endm
-
-	.macro __DECODE_RR expand,reg,ruse
-	.set __decode_fail,1
-	.irp r1,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
-	.ifc \reg,%r\r1
-	.irp r2,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
-	.ifc \ruse,%r\r2
-	\expand \r1,\r2
-	.set __decode_fail,0
-	.endif
-	.endr
-	.endif
-	.endr
-	.if __decode_fail == 1
-	.error "__DECODE_RR failed"
-	.endif
-	.endm
-
-	.macro __DECODE_RRR expand,rsave,rtarget,ruse
-	.set __decode_fail,1
-	.irp r1,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
-	.ifc \rsave,%r\r1
-	.irp r2,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
-	.ifc \rtarget,%r\r2
-	.irp r3,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
-	.ifc \ruse,%r\r3
-	\expand \r1,\r2,\r3
-	.set __decode_fail,0
-	.endif
-	.endr
-	.endif
-	.endr
-	.endif
-	.endr
-	.if __decode_fail == 1
-	.error "__DECODE_RRR failed"
-	.endif
-	.endm
-
-	.macro __DECODE_DRR expand,disp,reg,ruse
-	.set __decode_fail,1
-	.irp r1,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
-	.ifc \reg,%r\r1
-	.irp r2,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
-	.ifc \ruse,%r\r2
-	\expand \disp,\r1,\r2
-	.set __decode_fail,0
-	.endif
-	.endr
-	.endif
-	.endr
-	.if __decode_fail == 1
-	.error "__DECODE_DRR failed"
-	.endif
-	.endm
-
-	.macro __THUNK_EX_BR reg,ruse
-	# Be very careful when adding instructions to this macro!
-	# The ALTERNATIVE replacement code has a .+10 which targets
-	# the "br \reg" after the code has been patched.
-#ifdef CONFIG_HAVE_MARCH_Z10_FEATURES
-	exrl 0,555f
-	j .
-#else
-	.ifc \reg,%r1
-	ALTERNATIVE "ex %r0,_LC_BR_R1", ".insn ril,0xc60000000000,0,.+10", 35
-	j .
-	.else
-	larl \ruse,555f
-	ex 0,0(\ruse)
-	j .
-	.endif
-#endif
-555:	br \reg
-	.endm
-
-	.macro __THUNK_EX_BC disp,reg,ruse
-#ifdef CONFIG_HAVE_MARCH_Z10_FEATURES
-	exrl 0,556f
-	j .
-#else
-	larl \ruse,556f
-	ex 0,0(\ruse)
-	j .
-#endif
-556:	b \disp(\reg)
-	.endm
-
-	.macro GEN_BR_THUNK reg,ruse=%r1
-	__DECODE_RR __THUNK_PROLOG_BR,\reg,\ruse
-	__THUNK_EX_BR \reg,\ruse
-	__THUNK_EPILOG
-	.endm
-
-	.macro GEN_B_THUNK disp,reg,ruse=%r1
-	__DECODE_DRR __THUNK_PROLOG_BC,\disp,\reg,\ruse
-	__THUNK_EX_BC \disp,\reg,\ruse
-	__THUNK_EPILOG
-	.endm
-
-	.macro BR_EX reg,ruse=%r1
-557:	__DECODE_RR __THUNK_BR,\reg,\ruse
-	.pushsection .s390_indirect_branches,"a",@progbits
-	.long 557b-.
-	.popsection
-	.endm
-
-	.macro B_EX disp,reg,ruse=%r1
-558:	__DECODE_DRR __THUNK_BC,\disp,\reg,\ruse
-	.pushsection .s390_indirect_branches,"a",@progbits
-	.long 558b-.
-	.popsection
-	.endm
-
-	.macro BASR_EX rsave,rtarget,ruse=%r1
-559:	__DECODE_RRR __THUNK_BRASL,\rsave,\rtarget,\ruse
-	.pushsection .s390_indirect_branches,"a",@progbits
-	.long 559b-.
-	.popsection
-	.endm
-
-#else
-	.macro GEN_BR_THUNK reg,ruse=%r1
-	.endm
-
-	.macro GEN_B_THUNK disp,reg,ruse=%r1
-	.endm
-
-	.macro BR_EX reg,ruse=%r1
-	br \reg
-	.endm
-
-	.macro B_EX disp,reg,ruse=%r1
-	b \disp(\reg)
-	.endm
-
-	.macro BASR_EX rsave,rtarget,ruse=%r1
-	basr \rsave,\rtarget
-	.endm
-#endif
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* _ASM_S390_NOSPEC_ASM_H */
diff -u linux-azure-4.15.0/arch/s390/kernel/Makefile linux-azure-4.15.0/arch/s390/kernel/Makefile
--- linux-azure-4.15.0/arch/s390/kernel/Makefile
+++ linux-azure-4.15.0/arch/s390/kernel/Makefile
@@ -65,7 +65,6 @@
 extra-y += head.o head64.o vmlinux.lds
 
-obj-$(CONFIG_SYSFS) += nospec-sysfs.o
 CFLAGS_REMOVE_nospec-branch.o += $(CC_FLAGS_EXPOLINE)
 
 obj-$(CONFIG_MODULES) += module.o
reverted:
--- linux-azure-4.15.0/arch/s390/kernel/asm-offsets.c
+++ linux-azure-4.15.0.orig/arch/s390/kernel/asm-offsets.c
@@ -179,7 +179,6 @@
 	OFFSET(__LC_MACHINE_FLAGS, lowcore, machine_flags);
 	OFFSET(__LC_PREEMPT_COUNT, lowcore, preempt_count);
 	OFFSET(__LC_GMAP, lowcore, gmap);
-	OFFSET(__LC_BR_R1, lowcore, br_r1_trampoline);
 	/* software defined ABI-relevant lowcore locations 0xe00 - 0xe20 */
 	OFFSET(__LC_DUMP_REIPL, lowcore, ipib);
 	/* hardware defined lowcore locations 0x1000 - 0x18ff */
reverted:
--- linux-azure-4.15.0/arch/s390/kernel/base.S
+++ linux-azure-4.15.0.orig/arch/s390/kernel/base.S
@@ -9,22 +9,18 @@
 #include
 #include
-#include
 #include
 #include
 
-	GEN_BR_THUNK %r9
-	GEN_BR_THUNK %r14
-
ENTRY(s390_base_mcck_handler)
 	basr %r13,0
0:	lg %r15,__LC_PANIC_STACK	# load panic stack
 	aghi %r15,-STACK_FRAME_OVERHEAD
 	larl %r1,s390_base_mcck_handler_fn
+	lg %r1,0(%r1)
+	ltgr %r1,%r1
-	lg %r9,0(%r1)
-	ltgr %r9,%r9
 	jz 1f
+	basr %r14,%r1
-	BASR_EX %r14,%r9
1:	la %r1,4095
 	lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)
 	lpswe __LC_MCK_OLD_PSW
@@ -41,10 +37,10 @@
 	basr %r13,0
0:	aghi %r15,-STACK_FRAME_OVERHEAD
 	larl %r1,s390_base_ext_handler_fn
+	lg %r1,0(%r1)
+	ltgr %r1,%r1
-	lg %r9,0(%r1)
-	ltgr %r9,%r9
 	jz 1f
+	basr %r14,%r1
-	BASR_EX %r14,%r9
1:	lmg %r0,%r15,__LC_SAVE_AREA_ASYNC
 	ni __LC_EXT_OLD_PSW+1,0xfd	# clear wait state bit
 	lpswe __LC_EXT_OLD_PSW
@@ -61,10 +57,10 @@
 	basr %r13,0
0:	aghi %r15,-STACK_FRAME_OVERHEAD
 	larl %r1,s390_base_pgm_handler_fn
+	lg %r1,0(%r1)
+	ltgr %r1,%r1
-	lg %r9,0(%r1)
-	ltgr %r9,%r9
 	jz 1f
+	basr %r14,%r1
-	BASR_EX %r14,%r9
 	lmg %r0,%r15,__LC_SAVE_AREA_SYNC
 	lpswe __LC_PGM_OLD_PSW
1:	lpswe disabled_wait_psw-0b(%r13)
@@ -121,7 +117,7 @@
 	larl %r4,.Lcontinue_psw		# Restore PSW flags
 	lpswe 0(%r4)
.Lcontinue:
+	br %r14
-	BR_EX %r14
 .align 16
.Lrestart_psw:
 	.long 0x00080000,0x80000000 + .Lrestart_part2
diff -u linux-azure-4.15.0/arch/s390/kernel/entry.S linux-azure-4.15.0/arch/s390/kernel/entry.S
--- linux-azure-4.15.0/arch/s390/kernel/entry.S
+++ linux-azure-4.15.0/arch/s390/kernel/entry.S
@@ -26,7 +26,6 @@
 #include
 #include
 #include
-#include
 
 __PT_R0 = __PT_GPRS
 __PT_R1 = __PT_GPRS + 8
@@ -223,9 +222,67 @@
 	.popsection
 	.endm
 
-	GEN_BR_THUNK %r9
-	GEN_BR_THUNK %r14
-	GEN_BR_THUNK %r14,%r11
+#ifdef CONFIG_EXPOLINE
+
+	.macro GEN_BR_THUNK name,reg,tmp
+	.section .text.\name,"axG",@progbits,\name,comdat
+	.globl \name
+	.hidden \name
+	.type \name,@function
+\name:
+	.cfi_startproc
+#ifdef CONFIG_HAVE_MARCH_Z10_FEATURES
+	exrl 0,0f
+#else
+	larl \tmp,0f
+	ex 0,0(\tmp)
+#endif
+	j .
+0:	br \reg
+	.cfi_endproc
+	.endm
+
+	GEN_BR_THUNK __s390x_indirect_jump_r1use_r9,%r9,%r1
+	GEN_BR_THUNK __s390x_indirect_jump_r1use_r14,%r14,%r1
+	GEN_BR_THUNK __s390x_indirect_jump_r11use_r14,%r14,%r11
+
+	.macro BASR_R14_R9
+0:	brasl %r14,__s390x_indirect_jump_r1use_r9
+	.pushsection .s390_indirect_branches,"a",@progbits
+	.long 0b-.
+	.popsection
+	.endm
+
+	.macro BR_R1USE_R14
+0:	jg __s390x_indirect_jump_r1use_r14
+	.pushsection .s390_indirect_branches,"a",@progbits
+	.long 0b-.
+	.popsection
+	.endm
+
+	.macro BR_R11USE_R14
+0:	jg __s390x_indirect_jump_r11use_r14
+	.pushsection .s390_indirect_branches,"a",@progbits
+	.long 0b-.
+	.popsection
+	.endm
+
+#else /* CONFIG_EXPOLINE */
+
+	.macro BASR_R14_R9
+	basr %r14,%r9
+	.endm
+
+	.macro BR_R1USE_R14
+	br %r14
+	.endm
+
+	.macro BR_R11USE_R14
+	br %r14
+	.endm
+
+#endif /* CONFIG_EXPOLINE */
+
 	.section .kprobes.text, "ax"
.Ldummy:
@@ -242,7 +299,7 @@
ENTRY(__bpon)
 	.globl __bpon
 	BPON
-	BR_EX %r14
+	BR_R1USE_R14
 
/*
 * Scheduler resume function, called by switch_to
@@ -268,7 +325,7 @@
 	TSTMSK __LC_MACHINE_FLAGS,MACHINE_FLAG_LPP
 	jz 0f
 	.insn s,0xb2800000,__LC_LPP	# set program parameter
-0:	BR_EX %r14
+0:	BR_R1USE_R14
 
.L__critical_start:
@@ -335,7 +392,7 @@
 	xgr %r5,%r5
 	lmg %r6,%r14,__SF_GPRS(%r15)	# restore kernel registers
 	lg %r2,__SF_EMPTY+16(%r15)	# return exit reason code
-	BR_EX %r14
+	BR_R1USE_R14
.Lsie_fault:
 	lghi %r14,-EFAULT
 	stg %r14,__SF_EMPTY+16(%r15)	# set exit reason code
@@ -394,7 +451,7 @@
 	lgf %r9,0(%r8,%r10)		# get system call add.
 	TSTMSK __TI_flags(%r12),_TIF_TRACE
 	jnz .Lsysc_tracesys
-	BASR_EX %r14,%r9		# call sys_xxxx
+	BASR_R14_R9			# call sys_xxxx
 	stg %r2,__PT_R2(%r11)		# store return value
 
.Lsysc_return:
@@ -579,7 +636,7 @@
 	lmg %r3,%r7,__PT_R3(%r11)
 	stg %r7,STACK_FRAME_OVERHEAD(%r15)
 	lg %r2,__PT_ORIG_GPR2(%r11)
-	BASR_EX %r14,%r9		# call sys_xxx
+	BASR_R14_R9			# call sys_xxx
 	stg %r2,__PT_R2(%r11)		# store return value
.Lsysc_tracenogo:
 	TSTMSK __TI_flags(%r12),_TIF_TRACE
@@ -603,7 +660,7 @@
 	lmg %r9,%r10,__PT_R9(%r11)	# load gprs
ENTRY(kernel_thread_starter)
 	la %r2,0(%r10)
-	BASR_EX %r14,%r9
+	BASR_R14_R9
 	j .Lsysc_tracenogo
 
/*
@@ -685,7 +742,7 @@
 	je .Lpgm_return
 	lgf %r9,0(%r10,%r1)		# load address of handler routine
 	lgr %r2,%r11			# pass pointer to pt_regs
-	BASR_EX %r14,%r9		# branch to interrupt-handler
+	BASR_R14_R9			# branch to interrupt-handler
.Lpgm_return:
 	LOCKDEP_SYS_EXIT
 	tm __PT_PSW+1(%r11),0x01	# returning to user ?
@@ -1003,7 +1060,7 @@
 	stpt __TIMER_IDLE_ENTER(%r2)
.Lpsw_idle_lpsw:
 	lpswe __SF_EMPTY(%r15)
-	BR_EX %r14
+	BR_R1USE_R14
.Lpsw_idle_end:
 
/*
@@ -1045,7 +1102,7 @@
.Lsave_fpu_regs_done:
 	oi __LC_CPU_FLAGS+7,_CIF_FPU
.Lsave_fpu_regs_exit:
-	BR_EX %r14
+	BR_R1USE_R14
.Lsave_fpu_regs_end:
EXPORT_SYMBOL(save_fpu_regs)
@@ -1091,7 +1148,7 @@
.Lload_fpu_regs_done:
 	ni __LC_CPU_FLAGS+7,255-_CIF_FPU
.Lload_fpu_regs_exit:
-	BR_EX %r14
+	BR_R1USE_R14
.Lload_fpu_regs_end:
 
.L__critical_end:
@@ -1308,7 +1365,7 @@
 	jl 0f
 	clg %r9,BASED(.Lcleanup_table+104)	# .Lload_fpu_regs_end
 	jl .Lcleanup_load_fpu_regs
-0:	BR_EX %r14
+0:	BR_R11USE_R14
 
 	.align 8
.Lcleanup_table:
@@ -1344,7 +1401,7 @@
 	ni __SIE_PROG0C+3(%r9),0xfe	# no longer in SIE
 	lctlg %c1,%c1,__LC_USER_ASCE	# load primary asce
 	larl %r9,sie_exit		# skip forward to sie_exit
-	BR_EX %r14
+	BR_R11USE_R14
#endif
 
.Lcleanup_system_call:
@@ -1398,7 +1455,7 @@
 	stg %r15,56(%r11)		# r15 stack pointer
 	# set new psw address and exit
 	larl %r9,.Lsysc_do_svc
-	BR_EX %r14,%r11
+	BR_R11USE_R14
.Lcleanup_system_call_insn:
 	.quad system_call
 	.quad .Lsysc_stmg
@@ -1410,7 +1467,7 @@
.Lcleanup_sysc_tif:
 	larl %r9,.Lsysc_tif
-	BR_EX %r14,%r11
+	BR_R11USE_R14
 
.Lcleanup_sysc_restore:
 	# check if stpt has been executed
@@ -1427,14 +1484,14 @@
 	mvc 0(64,%r11),__PT_R8(%r9)
 	lmg %r0,%r7,__PT_R0(%r9)
1:	lmg %r8,%r9,__LC_RETURN_PSW
-	BR_EX %r14,%r11
+	BR_R11USE_R14
.Lcleanup_sysc_restore_insn:
 	.quad .Lsysc_exit_timer
 	.quad .Lsysc_done - 4
 
.Lcleanup_io_tif:
 	larl %r9,.Lio_tif
-	BR_EX %r14,%r11
+	BR_R11USE_R14
 
.Lcleanup_io_restore:
 	# check if stpt has been executed
@@ -1448,7 +1505,7 @@
 	mvc 0(64,%r11),__PT_R8(%r9)
 	lmg %r0,%r7,__PT_R0(%r9)
1:	lmg %r8,%r9,__LC_RETURN_PSW
-	BR_EX %r14,%r11
+	BR_R11USE_R14
.Lcleanup_io_restore_insn:
 	.quad .Lio_exit_timer
 	.quad .Lio_done - 4
@@ -1501,17 +1558,17 @@
 	# prepare return psw
 	nihh %r8,0xfcfd			# clear irq & wait state bits
 	lg %r9,48(%r11)			# return from psw_idle
-	BR_EX %r14,%r11
+	BR_R11USE_R14
.Lcleanup_idle_insn:
 	.quad .Lpsw_idle_lpsw
 
.Lcleanup_save_fpu_regs:
 	larl %r9,save_fpu_regs
-	BR_EX %r14,%r11
+	BR_R11USE_R14
 
.Lcleanup_load_fpu_regs:
 	larl %r9,load_fpu_regs
-	BR_EX %r14,%r11
+	BR_R11USE_R14
 
/*
 * Integer constants
reverted:
--- linux-azure-4.15.0/arch/s390/kernel/irq.c
+++ linux-azure-4.15.0.orig/arch/s390/kernel/irq.c
@@ -176,9 +176,10 @@
 		new -= STACK_FRAME_OVERHEAD;
 		((struct stack_frame *) new)->back_chain = old;
 		asm volatile("	la 15,0(%0)\n"
+			     "	basr 14,%2\n"
-			     "	brasl 14,__do_softirq\n"
 			     "	la 15,0(%1)\n"
+			     : : "a" (new), "a" (old),
+				 "a" (__do_softirq)
-			     : : "a" (new), "a" (old)
			     : "0", "1", "2", "3", "4", "5", "14",
			       "cc", "memory" );
 	} else {
reverted:
--- linux-azure-4.15.0/arch/s390/kernel/mcount.S
+++ linux-azure-4.15.0.orig/arch/s390/kernel/mcount.S
@@ -9,17 +9,13 @@
 #include
 #include
 #include
-#include
 #include
 #include
 
-	GEN_BR_THUNK %r1
-	GEN_BR_THUNK %r14
-
 	.section .kprobes.text, "ax"
 
ENTRY(ftrace_stub)
+	br %r14
-	BR_EX %r14
 
#define STACK_FRAME_SIZE (STACK_FRAME_OVERHEAD + __PT_SIZE)
#define STACK_PTREGS (STACK_FRAME_OVERHEAD)
@@ -27,7 +23,7 @@
#define STACK_PTREGS_PSW (STACK_PTREGS + __PT_PSW)
ENTRY(_mcount)
+	br %r14
-	BR_EX %r14
EXPORT_SYMBOL(_mcount)
@@ -57,7 +53,7 @@
 #endif
 	lgr %r3,%r14
 	la %r5,STACK_PTREGS(%r15)
+	basr %r14,%r1
-	BASR_EX %r14,%r1
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
# The j instruction gets runtime patched to a nop instruction.
# See ftrace_enable_ftrace_graph_caller.
@@ -72,7 +68,7 @@
 #endif
 	lg %r1,(STACK_PTREGS_PSW+8)(%r15)
 	lmg %r2,%r15,(STACK_PTREGS_GPRS+2*8)(%r15)
+	br %r1
-	BR_EX %r1
 
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -85,6 +81,6 @@
 	aghi %r15,STACK_FRAME_OVERHEAD
 	lgr %r14,%r2
 	lmg %r2,%r5,32(%r15)
+	br %r14
-	BR_EX %r14
 
#endif
diff -u linux-azure-4.15.0/arch/s390/kernel/nospec-branch.c linux-azure-4.15.0/arch/s390/kernel/nospec-branch.c
--- linux-azure-4.15.0/arch/s390/kernel/nospec-branch.c
+++ linux-azure-4.15.0/arch/s390/kernel/nospec-branch.c
@@ -43,6 +43,24 @@
 }
 arch_initcall(nospec_report);
 
+#ifdef CONFIG_SYSFS
+ssize_t cpu_show_spectre_v1(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
+
+ssize_t cpu_show_spectre_v2(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable)
+		return sprintf(buf, "Mitigation: execute trampolines\n");
+	if (__test_facility(82, S390_lowcore.alt_stfle_fac_list))
+		return sprintf(buf, "Mitigation: limited branch prediction.\n");
+	return sprintf(buf, "Vulnerable\n");
+}
+#endif
+
 #ifdef CONFIG_EXPOLINE
 
 int nospec_disable = IS_ENABLED(CONFIG_EXPOLINE_OFF);
@@ -93,6 +111,7 @@
 	s32 *epo;
 
 	/* Second part of the instruction replace is always a nop */
+	memcpy(insnbuf + 2, (char[]) { 0x47, 0x00, 0x00, 0x00 }, 4);
 	for (epo = start; epo < end; epo++) {
 		instr = (u8 *) epo + *epo;
 		if (instr[0] == 0xc0 && (instr[1] & 0x0f) == 0x04)
@@ -113,34 +132,18 @@
 			br = thunk + (*(int *)(thunk + 2)) * 2;
 		else
 			continue;
-		/* Check for unconditional branch 0x07f? or 0x47f???? */
-		if ((br[0] & 0xbf) != 0x07 || (br[1] & 0xf0) != 0xf0)
+		if (br[0] != 0x07 || (br[1] & 0xf0) != 0xf0)
 			continue;
-
-		memcpy(insnbuf + 2, (char[]) { 0x47, 0x00, 0x07, 0x00 }, 4);
 		switch (type) {
 		case BRCL_EXPOLINE:
+			/* brcl to thunk, replace with br + nop */
 			insnbuf[0] = br[0];
 			insnbuf[1] = (instr[1] & 0xf0) | (br[1] & 0x0f);
-			if (br[0] == 0x47) {
-				/* brcl to b, replace with bc + nopr */
-				insnbuf[2] = br[2];
-				insnbuf[3] = br[3];
-			} else {
-				/* brcl to br, replace with bcr + nop */
-			}
 			break;
 		case BRASL_EXPOLINE:
+			/* brasl to thunk, replace with basr + nop */
+			insnbuf[0] = 0x0d;
 			insnbuf[1] = (instr[1] & 0xf0) | (br[1] & 0x0f);
-			if (br[0] == 0x47) {
-				/* brasl to b, replace with bas + nopr */
-				insnbuf[0] = 0x4d;
-				insnbuf[2] = br[2];
-				insnbuf[3] = br[3];
-			} else {
-				/* brasl to br, replace with basr + nop */
-				insnbuf[0] = 0x0d;
-			}
 			break;
 		}
reverted:
--- linux-azure-4.15.0/arch/s390/kernel/nospec-sysfs.c
+++ linux-azure-4.15.0.orig/arch/s390/kernel/nospec-sysfs.c
@@ -1,21 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include
-#include
-#include
-#include
-
-ssize_t cpu_show_spectre_v1(struct device *dev,
-			    struct device_attribute *attr, char *buf)
-{
-	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
-}
-
-ssize_t cpu_show_spectre_v2(struct device *dev,
-			    struct device_attribute *attr, char *buf)
-{
-	if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable)
-		return sprintf(buf, "Mitigation: execute trampolines\n");
-	if (__test_facility(82, S390_lowcore.alt_stfle_fac_list))
-		return sprintf(buf, "Mitigation: limited branch prediction\n");
-	return sprintf(buf, "Vulnerable\n");
-}
reverted:
--- linux-azure-4.15.0/arch/s390/kernel/reipl.S
+++ linux-azure-4.15.0.orig/arch/s390/kernel/reipl.S
@@ -7,11 +7,8 @@
 
 #include
 #include
-#include
 #include
 
-	GEN_BR_THUNK %r9
-
#
# Issue "store status" for the current CPU to its prefix page
# and call passed function afterwards
@@ -70,9 +67,9 @@
 	st %r4,0(%r1)
 	st %r5,4(%r1)
 	stg %r2,8(%r1)
+	lgr %r1,%r2
-	lgr %r9,%r2
 	lgr %r2,%r3
+	br %r1
-	BR_EX %r9
 
 .section .bss
 .align 8
reverted:
--- linux-azure-4.15.0/arch/s390/kernel/swsusp.S
+++ linux-azure-4.15.0.orig/arch/s390/kernel/swsusp.S
@@ -13,7 +13,6 @@
 #include
 #include
 #include
-#include
 #include
 
/*
@@ -25,8 +24,6 @@
 * (see below) in the resume process.
 * This function runs with disabled interrupts.
 */
-	GEN_BR_THUNK %r14
-
 	.section .text
ENTRY(swsusp_arch_suspend)
 	stmg %r6,%r15,__SF_GPRS(%r15)
@@ -106,7 +103,7 @@
 	spx 0x318(%r1)
 	lmg %r6,%r15,STACK_FRAME_OVERHEAD + __SF_GPRS(%r15)
 	lghi %r2,0
+	br %r14
-	BR_EX %r14
 
/*
 * Restore saved memory image to correct place and restore register context.
@@ -200,10 +197,11 @@
 	larl %r15,init_thread_union
 	ahi %r15,1<<(PAGE_SHIFT+THREAD_SIZE_ORDER)
 	larl %r2,.Lpanic_string
+	larl %r3,sclp_early_printk
 	lghi %r1,0
 	sam31
 	sigp %r1,%r0,SIGP_SET_ARCHITECTURE
+	basr %r14,%r3
-	brasl %r14,sclp_early_printk
 	larl %r3,.Ldisabled_wait_31
 	lpsw 0(%r3)
4:
@@ -269,7 +267,7 @@
 	/* Return 0 */
 	lmg %r6,%r15,STACK_FRAME_OVERHEAD + __SF_GPRS(%r15)
 	lghi %r2,0
+	br %r14
-	BR_EX %r14
 
 .section .data..nosave,"aw",@progbits
 .align 8
reverted:
--- linux-azure-4.15.0/arch/s390/lib/mem.S
+++ linux-azure-4.15.0.orig/arch/s390/lib/mem.S
@@ -7,9 +7,6 @@
 
 #include
 #include
-#include
-
-	GEN_BR_THUNK %r14
 
/*
 * void *memmove(void *dest, const void *src, size_t n)
@@ -36,14 +33,14 @@
.Lmemmove_forward_remainder:
 	larl %r5,.Lmemmove_mvc
 	ex %r4,0(%r5)
+	br %r14
-	BR_EX %r14
.Lmemmove_reverse:
 	ic %r0,0(%r4,%r3)
 	stc %r0,0(%r4,%r1)
 	brctg %r4,.Lmemmove_reverse
 	ic %r0,0(%r4,%r3)
 	stc %r0,0(%r4,%r1)
+	br %r14
-	BR_EX %r14
.Lmemmove_mvc:
 	mvc 0(1,%r1),0(%r3)
EXPORT_SYMBOL(memmove)
@@ -80,7 +77,7 @@
.Lmemset_clear_remainder:
 	larl %r3,.Lmemset_xc
 	ex %r4,0(%r3)
+	br %r14
-	BR_EX %r14
.Lmemset_fill:
 	cghi %r4,1
 	lgr %r1,%r2
@@ -98,10 +95,10 @@
 	stc %r3,0(%r1)
 	larl %r5,.Lmemset_mvc
 	ex %r4,0(%r5)
+	br %r14
-	BR_EX %r14
.Lmemset_fill_exit:
 	stc %r3,0(%r1)
+	br %r14
-	BR_EX %r14
.Lmemset_xc:
 	xc 0(1,%r1),0(%r1)
.Lmemset_mvc:
@@ -124,7 +121,7 @@
.Lmemcpy_remainder:
 	larl %r5,.Lmemcpy_mvc
 	ex %r4,0(%r5)
+	br %r14
-	BR_EX %r14
.Lmemcpy_loop:
 	mvc 0(256,%r1),0(%r3)
 	la %r1,256(%r1)
@@ -162,10 +159,10 @@
 	\insn %r3,0(%r1)
 	larl %r5,.L__memset_mvc\bits
 	ex %r4,0(%r5)
+	br %r14
-	BR_EX %r14
.L__memset_exit\bits:
 	\insn %r3,0(%r2)
+	br %r14
-	BR_EX %r14
.L__memset_mvc\bits:
 	mvc \bytes(1,%r1),0(%r1)
 .endm
reverted:
--- linux-azure-4.15.0/arch/s390/net/bpf_jit.S
+++ linux-azure-4.15.0.orig/arch/s390/net/bpf_jit.S
@@ -9,7 +9,6 @@
 */
 #include
-#include
 #include "bpf_jit.h"
 
/*
@@ -55,7 +54,7 @@
 	clg %r3,STK_OFF_HLEN(%r15);	/* Offset + SIZE > hlen? */ \
 	jh sk_load_##NAME##_slow; \
 	LOAD %r14,-SIZE(%r3,%r12);	/* Get data from skb */ \
+	b OFF_OK(%r6);			/* Return */ \
-	B_EX OFF_OK,%r6;		/* Return */ \
 \
sk_load_##NAME##_slow:; \
 	lgr %r2,%r7;			/* Arg1 = skb pointer */ \
@@ -65,14 +64,11 @@
 	brasl %r14,skb_copy_bits;	/* Get data from skb */ \
 	LOAD %r14,STK_OFF_TMP(%r15);	/* Load from temp bufffer */ \
 	ltgr %r2,%r2;			/* Set cc to (%r2 != 0) */ \
+	br %r6;				/* Return */
-	BR_EX %r6;			/* Return */
 
sk_load_common(word, 4, llgf)	/* r14 = *(u32 *) (skb->data+offset) */
sk_load_common(half, 2, llgh)	/* r14 = *(u16 *) (skb->data+offset) */
 
-	GEN_BR_THUNK %r6
-	GEN_B_THUNK OFF_OK,%r6
-
/*
 * Load 1 byte from SKB (optimized version)
 */
@@ -84,7 +80,7 @@
 	clg %r3,STK_OFF_HLEN(%r15)	# Offset >= hlen?
 	jnl sk_load_byte_slow
 	llgc %r14,0(%r3,%r12)		# Get byte from skb
+	b OFF_OK(%r6)			# Return OK
-	B_EX OFF_OK,%r6			# Return OK
 
sk_load_byte_slow:
 	lgr %r2,%r7			# Arg1 = skb pointer
@@ -94,7 +90,7 @@
 	brasl %r14,skb_copy_bits	# Get data from skb
 	llgc %r14,STK_OFF_TMP(%r15)	# Load result from temp buffer
 	ltgr %r2,%r2			# Set cc to (%r2 != 0)
+	br %r6				# Return cc
-	BR_EX %r6			# Return cc
 
#define sk_negative_common(NAME, SIZE, LOAD) \
sk_load_##NAME##_slow_neg:; \
@@ -108,7 +104,7 @@
 	jz bpf_error; \
 	LOAD %r14,0(%r2);		/* Get data from pointer */ \
 	xr %r3,%r3;			/* Set cc to zero */ \
+	br %r6;				/* Return cc */
-	BR_EX %r6;			/* Return cc */
 
sk_negative_common(word, 4, llgf)
sk_negative_common(half, 2, llgh)
@@ -117,4 +113,4 @@
bpf_error:
# force a return 0 from jit handler
 	ltgr %r15,%r15	# Set condition code
+	br %r6
-	BR_EX %r6
reverted:
--- linux-azure-4.15.0/arch/s390/net/bpf_jit_comp.c
+++ linux-azure-4.15.0.orig/arch/s390/net/bpf_jit_comp.c
@@ -25,8 +25,6 @@
 #include
 #include
 #include
-#include
-#include
 #include
 #include "bpf_jit.h"
@@ -45,8 +43,6 @@
 	int base_ip;		/* Base address for literal pool */
 	int ret0_ip;		/* Address of return 0 */
 	int exit_ip;		/* Address of exit */
-	int r1_thunk_ip;	/* Address of expoline thunk for 'br %r1' */
-	int r14_thunk_ip;	/* Address of expoline thunk for 'br %r14' */
 	int tail_call_start;	/* Tail call start offset */
 	int labels[1];		/* Labels for local jumps */
 };
@@ -256,19 +252,6 @@
 	REG_SET_SEEN(b2);	\
})
 
-#define EMIT6_PCREL_RILB(op, b, target)	\
-({	\
-	int rel = (target - jit->prg) / 2;	\
-	_EMIT6(op | reg_high(b) << 16 | rel >> 16, rel & 0xffff);	\
-	REG_SET_SEEN(b);	\
-})
-
-#define EMIT6_PCREL_RIL(op, target)	\
-({	\
-	int rel = (target - jit->prg) / 2;	\
-	_EMIT6(op | rel >> 16, rel & 0xffff);	\
-})
-
#define _EMIT6_IMM(op, imm)	\
({	\
	unsigned int __imm = (imm);	\
@@ -488,45 +471,8 @@
 	EMIT4(0xb9040000, REG_2, BPF_REG_0);
 	/* Restore registers */
 	save_restore_regs(jit, REGS_RESTORE, stack_depth);
-	if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable) {
-		jit->r14_thunk_ip = jit->prg;
-		/* Generate __s390_indirect_jump_r14 thunk */
-		if (test_facility(35)) {
-			/* exrl %r0,.+10 */
-			EMIT6_PCREL_RIL(0xc6000000, jit->prg + 10);
-		} else {
-			/* larl %r1,.+14 */
-			EMIT6_PCREL_RILB(0xc0000000, REG_1, jit->prg + 14);
-			/* ex 0,0(%r1) */
-			EMIT4_DISP(0x44000000, REG_0, REG_1, 0);
-		}
-		/* j . */
-		EMIT4_PCREL(0xa7f40000, 0);
-	}
 	/* br %r14 */
 	_EMIT2(0x07fe);
-
-	if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable &&
-	    (jit->seen & SEEN_FUNC)) {
-		jit->r1_thunk_ip = jit->prg;
-		/* Generate __s390_indirect_jump_r1 thunk */
-		if (test_facility(35)) {
-			/* exrl %r0,.+10 */
-			EMIT6_PCREL_RIL(0xc6000000, jit->prg + 10);
-			/* j . */
-			EMIT4_PCREL(0xa7f40000, 0);
-			/* br %r1 */
-			_EMIT2(0x07f1);
-		} else {
-			/* larl %r1,.+14 */
-			EMIT6_PCREL_RILB(0xc0000000, REG_1, jit->prg + 14);
-			/* ex 0,S390_lowcore.br_r1_tampoline */
-			EMIT4_DISP(0x44000000, REG_0, REG_0,
-				   offsetof(struct lowcore, br_r1_trampoline));
-			/* j . */
-			EMIT4_PCREL(0xa7f40000, 0);
-		}
-	}
}
 
/*
@@ -1032,13 +978,8 @@
 		/* lg %w1,(%l) */
 		EMIT6_DISP_LH(0xe3000000, 0x0004, REG_W1, REG_0, REG_L,
			      EMIT_CONST_U64(func));
+		/* basr %r14,%w1 */
+		EMIT2(0x0d00, REG_14, REG_W1);
-		if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable) {
-			/* brasl %r14,__s390_indirect_jump_r1 */
-			EMIT6_PCREL_RILB(0xc0050000, REG_14, jit->r1_thunk_ip);
-		} else {
-			/* basr %r14,%w1 */
-			EMIT2(0x0d00, REG_14, REG_W1);
-		}
 		/* lgr %b0,%r2: load return value into %b0 */
 		EMIT4(0xb9040000, BPF_REG_0, REG_2);
 		if ((jit->seen & SEEN_SKB) &&
reverted:
--- linux-azure-4.15.0/arch/sparc/kernel/vio.c
+++ linux-azure-4.15.0.orig/arch/sparc/kernel/vio.c
@@ -403,7 +403,7 @@
 	if (err) {
 		printk(KERN_ERR "VIO: Could not register device %s, err=%d\n",
		       dev_name(&vdev->dev), err);
+		kfree(vdev);
-		put_device(&vdev->dev);
 		return NULL;
 	}
 	if (vdev->dp)
diff -u linux-azure-4.15.0/arch/x86/Kconfig linux-azure-4.15.0/arch/x86/Kconfig
--- linux-azure-4.15.0/arch/x86/Kconfig
+++ linux-azure-4.15.0/arch/x86/Kconfig
@@ -51,6 +51,7 @@
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_FAST_MULTIPLIER
+	select ARCH_HAS_FILTER_PGPROT
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_KCOV			if X86_64
@@ -177,6 +178,7 @@
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
+	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select PCI_LOCKLESS_CONFIG
 	select PERF_EVENTS
@@ -268,6 +270,9 @@
 config ARCH_HAS_CACHE_LINE_SIZE
	def_bool y
 
+config ARCH_HAS_FILTER_PGPROT
+	def_bool y
+
 config HAVE_SETUP_PER_CPU_AREA
	def_bool y
 
diff -u linux-azure-4.15.0/arch/x86/boot/compressed/eboot.c linux-azure-4.15.0/arch/x86/boot/compressed/eboot.c
--- linux-azure-4.15.0/arch/x86/boot/compressed/eboot.c
+++ linux-azure-4.15.0/arch/x86/boot/compressed/eboot.c
@@ -164,8 +164,7 @@
 	if (status != EFI_SUCCESS)
 		goto free_struct;
 
-	memcpy(rom->romdata, (void *)(unsigned long)pci->romimage,
-	       pci->romsize);
+	memcpy(rom->romdata, pci->romimage, pci->romsize);
 	return status;
 
free_struct:
@@ -271,8 +270,7 @@
 	if (status != EFI_SUCCESS)
 		goto free_struct;
 
-	memcpy(rom->romdata, (void *)(unsigned long)pci->romimage,
-	       pci->romsize);
+	memcpy(rom->romdata, pci->romimage, pci->romsize);
 	return status;
 
free_struct:
reverted:
--- linux-azure-4.15.0/arch/x86/events/core.c
+++ linux-azure-4.15.0.orig/arch/x86/events/core.c
@@ -27,7 +27,6 @@
 #include
 #include
 #include
-#include
 
 #include
 #include
@@ -305,20 +304,17 @@
 	config = attr->config;
 
+	cache_type = (config >> 0) & 0xff;
-	cache_type = (config >> 0) & 0xff;
 	if (cache_type >= PERF_COUNT_HW_CACHE_MAX)
 		return -EINVAL;
-	cache_type = array_index_nospec(cache_type, PERF_COUNT_HW_CACHE_MAX);
 
 	cache_op = (config >> 8) & 0xff;
 	if (cache_op >= PERF_COUNT_HW_CACHE_OP_MAX)
 		return -EINVAL;
-	cache_op = array_index_nospec(cache_op, PERF_COUNT_HW_CACHE_OP_MAX);
 
 	cache_result = (config >> 16) & 0xff;
 	if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
 		return -EINVAL;
-	cache_result = array_index_nospec(cache_result, PERF_COUNT_HW_CACHE_RESULT_MAX);
 
 	val = hw_cache_event_ids[cache_type][cache_op][cache_result];
@@ -425,8 +421,6 @@
 	if (attr->config >= x86_pmu.max_events)
 		return -EINVAL;
 
-	attr->config = array_index_nospec((unsigned long)attr->config, x86_pmu.max_events);
-
 	/*
	 * The generic map:
	 */
reverted:
--- linux-azure-4.15.0/arch/x86/events/intel/cstate.c
+++ linux-azure-4.15.0.orig/arch/x86/events/intel/cstate.c
@@ -91,7 +91,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include "../perf_event.h"
@@ -302,7 +301,6 @@
 	} else if (event->pmu == &cstate_pkg_pmu) {
 		if (cfg >= PERF_CSTATE_PKG_EVENT_MAX)
 			return -EINVAL;
-		cfg = array_index_nospec((unsigned long)cfg, PERF_CSTATE_PKG_EVENT_MAX);
 		if (!pkg_msr[cfg].attr)
 			return -EINVAL;
 		event->hw.event_base = pkg_msr[cfg].msr;
reverted:
--- linux-azure-4.15.0/arch/x86/events/msr.c
+++ linux-azure-4.15.0.orig/arch/x86/events/msr.c
@@ -1,6 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
 #include
-#include
 #include
 
 enum perf_msr_id {
@@ -146,6 +145,9 @@
 	if (event->attr.type != event->pmu->type)
 		return -ENOENT;
 
+	if (cfg >= PERF_MSR_EVENT_MAX)
+		return -EINVAL;
+
 	/* unsupported modes and filters */
 	if (event->attr.exclude_user ||
	    event->attr.exclude_kernel ||
@@ -156,11 +158,6 @@
	    event->attr.sample_period) /* no sampling */
 		return -EINVAL;
 
-	if (cfg >= PERF_MSR_EVENT_MAX)
-		return -EINVAL;
-
-	cfg = array_index_nospec((unsigned long)cfg, PERF_MSR_EVENT_MAX);
-
 	if (!msr[cfg].attr)
 		return -EINVAL;
 
diff -u linux-azure-4.15.0/arch/x86/include/asm/apic.h linux-azure-4.15.0/arch/x86/include/asm/apic.h
--- linux-azure-4.15.0/arch/x86/include/asm/apic.h
+++ linux-azure-4.15.0/arch/x86/include/asm/apic.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 #define ARCH_APICTIMER_STOPS_ON_C3 1
 
@@ -516,12 +517,19 @@
 #endif /* CONFIG_X86_LOCAL_APIC */
 
+#ifdef CONFIG_SMP
+bool apic_id_is_primary_thread(unsigned int id);
+#else
+static inline
bool apic_id_is_primary_thread(unsigned int id) { return false; } +#endif + extern void irq_enter(void); extern void irq_exit(void); static inline void entering_irq(void) { irq_enter(); + kvm_set_cpu_l1tf_flush_l1d(); } static inline void entering_ack_irq(void) @@ -534,6 +542,7 @@ { irq_enter(); ack_APIC_irq(); + kvm_set_cpu_l1tf_flush_l1d(); } static inline void exiting_irq(void) diff -u linux-azure-4.15.0/arch/x86/include/asm/cpufeatures.h linux-azure-4.15.0/arch/x86/include/asm/cpufeatures.h --- linux-azure-4.15.0/arch/x86/include/asm/cpufeatures.h +++ linux-azure-4.15.0/arch/x86/include/asm/cpufeatures.h @@ -219,6 +219,7 @@ #define X86_FEATURE_IBPB ( 7*32+26) /* Indirect Branch Prediction Barrier */ #define X86_FEATURE_STIBP ( 7*32+27) /* Single Thread Indirect Branch Predictors */ #define X86_FEATURE_ZEN ( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */ +#define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* "" L1TF workaround PTE inversion */ /* Virtualization flags: Linux defined, word 8 */ #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */ @@ -338,6 +339,7 @@ #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */ #define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */ #define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */ +#define X86_FEATURE_FLUSH_L1D (18*32+28) /* Flush L1D cache */ #define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */ #define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* "" Speculative Store Bypass Disable */ @@ -371,4 +373,5 @@ #define X86_BUG_SPECTRE_V2 X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */ #define X86_BUG_SPEC_STORE_BYPASS X86_BUG(17) /* CPU is affected by speculative store bypass attack */ +#define X86_BUG_L1TF X86_BUG(18) /* CPU is affected by L1 Terminal Fault */ #endif /* _ASM_X86_CPUFEATURES_H */ diff -u linux-azure-4.15.0/arch/x86/include/asm/kvm_host.h 
linux-azure-4.15.0/arch/x86/include/asm/kvm_host.h --- linux-azure-4.15.0/arch/x86/include/asm/kvm_host.h +++ linux-azure-4.15.0/arch/x86/include/asm/kvm_host.h @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -706,6 +707,9 @@ /* be preempted when it's in kernel-mode(cpl=0) */ bool preempted_in_kernel; + + /* Flush the L1 Data cache for L1TF mitigation on VMENTER */ + bool l1tf_flush_l1d; }; struct kvm_lpage_info { @@ -875,6 +879,7 @@ u64 signal_exits; u64 irq_window_exits; u64 nmi_window_exits; + u64 l1d_flush; u64 halt_exits; u64 halt_successful_poll; u64 halt_attempted_poll; @@ -1079,6 +1084,8 @@ int (*pre_enter_smm)(struct kvm_vcpu *vcpu, char *smstate); int (*pre_leave_smm)(struct kvm_vcpu *vcpu, u64 smbase); int (*enable_smi_window)(struct kvm_vcpu *vcpu); + + int (*get_msr_feature)(struct kvm_msr_entry *entry); }; struct kvm_arch_async_pf { @@ -1384,6 +1391,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event); void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu); +u64 kvm_get_arch_capabilities(void); void kvm_define_shared_msr(unsigned index, u32 msr); int kvm_set_shared_msr(unsigned index, u64 val, u64 mask); diff -u linux-azure-4.15.0/arch/x86/include/asm/mmu_context.h linux-azure-4.15.0/arch/x86/include/asm/mmu_context.h --- linux-azure-4.15.0/arch/x86/include/asm/mmu_context.h +++ linux-azure-4.15.0/arch/x86/include/asm/mmu_context.h @@ -192,7 +192,7 @@ #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS if (cpu_feature_enabled(X86_FEATURE_OSPKE)) { - /* pkey 0 is the default and allocated implicitly */ + /* pkey 0 is the default and always allocated */ mm->context.pkey_allocation_map = 0x1; /* -1 means unallocated or invalid */ mm->context.execute_only_pkey = -1; diff -u linux-azure-4.15.0/arch/x86/include/asm/msr-index.h linux-azure-4.15.0/arch/x86/include/asm/msr-index.h --- linux-azure-4.15.0/arch/x86/include/asm/msr-index.h +++ linux-azure-4.15.0/arch/x86/include/asm/msr-index.h @@ -70,12 +70,19 @@ #define 
MSR_IA32_ARCH_CAPABILITIES 0x0000010a #define ARCH_CAP_RDCL_NO (1 << 0) /* Not susceptible to Meltdown */ #define ARCH_CAP_IBRS_ALL (1 << 1) /* Enhanced IBRS support */ +#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH (1 << 3) /* Skip L1D flush on vmentry */ #define ARCH_CAP_SSB_NO (1 << 4) /* * Not susceptible to Speculative Store Bypass * attack, so no Speculative Store Bypass * control required. */ +#define MSR_IA32_FLUSH_CMD 0x0000010b +#define L1D_FLUSH (1 << 0) /* + * Writeback and invalidate the + * L1 data cache. + */ + #define MSR_IA32_BBL_CR_CTL 0x00000119 #define MSR_IA32_BBL_CR_CTL3 0x0000011e diff -u linux-azure-4.15.0/arch/x86/include/asm/pgtable.h linux-azure-4.15.0/arch/x86/include/asm/pgtable.h --- linux-azure-4.15.0/arch/x86/include/asm/pgtable.h +++ linux-azure-4.15.0/arch/x86/include/asm/pgtable.h @@ -185,19 +185,29 @@ return pte_flags(pte) & _PAGE_SPECIAL; } +/* Entries that were set to PROT_NONE are inverted */ + +static inline u64 protnone_mask(u64 val); + static inline unsigned long pte_pfn(pte_t pte) { - return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT; + phys_addr_t pfn = pte_val(pte); + pfn ^= protnone_mask(pfn); + return (pfn & PTE_PFN_MASK) >> PAGE_SHIFT; } static inline unsigned long pmd_pfn(pmd_t pmd) { - return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT; + phys_addr_t pfn = pmd_val(pmd); + pfn ^= protnone_mask(pfn); + return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT; } static inline unsigned long pud_pfn(pud_t pud) { - return (pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT; + phys_addr_t pfn = pud_val(pud); + pfn ^= protnone_mask(pfn); + return (pfn & pud_pfn_mask(pud)) >> PAGE_SHIFT; } static inline unsigned long p4d_pfn(p4d_t p4d) @@ -400,11 +410,6 @@ return pmd_set_flags(pmd, _PAGE_RW); } -static inline pmd_t pmd_mknotpresent(pmd_t pmd) -{ - return pmd_clear_flags(pmd, _PAGE_PRESENT | _PAGE_PROTNONE); -} - static inline pud_t pud_set_flags(pud_t pud, pudval_t set) { pudval_t v = native_pud_val(pud); @@ -459,11 +464,6 @@ return 
pud_set_flags(pud, _PAGE_RW); } -static inline pud_t pud_mknotpresent(pud_t pud) -{ - return pud_clear_flags(pud, _PAGE_PRESENT | _PAGE_PROTNONE); -} - #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY static inline int pte_soft_dirty(pte_t pte) { @@ -526,45 +526,82 @@ return protval; } +static inline pgprotval_t check_pgprot(pgprot_t pgprot) +{ + pgprotval_t massaged_val = massage_pgprot(pgprot); + + /* mmdebug.h can not be included here because of dependencies */ +#ifdef CONFIG_DEBUG_VM + WARN_ONCE(pgprot_val(pgprot) != massaged_val, + "attempted to set unsupported pgprot: %016llx " + "bits: %016llx supported: %016llx\n", + (u64)pgprot_val(pgprot), + (u64)pgprot_val(pgprot) ^ massaged_val, + (u64)__supported_pte_mask); +#endif + + return massaged_val; +} + static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot) { - return __pte(((phys_addr_t)page_nr << PAGE_SHIFT) | - massage_pgprot(pgprot)); + phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT; + pfn ^= protnone_mask(pgprot_val(pgprot)); + pfn &= PTE_PFN_MASK; + return __pte(pfn | check_pgprot(pgprot)); } static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot) { - return __pmd(((phys_addr_t)page_nr << PAGE_SHIFT) | - massage_pgprot(pgprot)); + phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT; + pfn ^= protnone_mask(pgprot_val(pgprot)); + pfn &= PHYSICAL_PMD_PAGE_MASK; + return __pmd(pfn | check_pgprot(pgprot)); } static inline pud_t pfn_pud(unsigned long page_nr, pgprot_t pgprot) { - return __pud(((phys_addr_t)page_nr << PAGE_SHIFT) | - massage_pgprot(pgprot)); + phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT; + pfn ^= protnone_mask(pgprot_val(pgprot)); + pfn &= PHYSICAL_PUD_PAGE_MASK; + return __pud(pfn | check_pgprot(pgprot)); } +static inline pmd_t pmd_mknotpresent(pmd_t pmd) +{ + return pfn_pmd(pmd_pfn(pmd), + __pgprot(pmd_flags(pmd) & ~(_PAGE_PRESENT|_PAGE_PROTNONE))); +} + +static inline pud_t pud_mknotpresent(pud_t pud) +{ + return pfn_pud(pud_pfn(pud), + __pgprot(pud_flags(pud) & 
~(_PAGE_PRESENT|_PAGE_PROTNONE))); +} + +static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask); + static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) { - pteval_t val = pte_val(pte); + pteval_t val = pte_val(pte), oldval = val; /* * Chop off the NX bit (if present), and add the NX portion of * the newprot (if present): */ val &= _PAGE_CHG_MASK; - val |= massage_pgprot(newprot) & ~_PAGE_CHG_MASK; - + val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK; + val = flip_protnone_guard(oldval, val, PTE_PFN_MASK); return __pte(val); } static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot) { - pmdval_t val = pmd_val(pmd); + pmdval_t val = pmd_val(pmd), oldval = val; val &= _HPAGE_CHG_MASK; - val |= massage_pgprot(newprot) & ~_HPAGE_CHG_MASK; - + val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK; + val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK); return __pmd(val); } @@ -584,6 +621,11 @@ #define canon_pgprot(p) __pgprot(massage_pgprot(p)) +static inline pgprot_t arch_filter_pgprot(pgprot_t prot) +{ + return canon_pgprot(prot); +} + static inline int is_new_memtype_allowed(u64 paddr, unsigned long size, enum page_cache_mode pcm, enum page_cache_mode new_pcm) @@ -1274,6 +1316,14 @@ return __pte_access_permitted(pud_val(pud), write); } +#define __HAVE_ARCH_PFN_MODIFY_ALLOWED 1 +extern bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot); + +static inline bool arch_has_pfn_modify_check(void) +{ + return boot_cpu_has_bug(X86_BUG_L1TF); +} + #include #endif /* __ASSEMBLY__ */ diff -u linux-azure-4.15.0/arch/x86/include/asm/pgtable_64.h linux-azure-4.15.0/arch/x86/include/asm/pgtable_64.h --- linux-azure-4.15.0/arch/x86/include/asm/pgtable_64.h +++ linux-azure-4.15.0/arch/x86/include/asm/pgtable_64.h @@ -276,7 +276,7 @@ * * | ... | 11| 10| 9|8|7|6|5| 4| 3|2| 1|0| <- bit number * | ... 
|SW3|SW2|SW1|G|L|D|A|CD|WT|U| W|P| <- bit names - * | OFFSET (14->63) | TYPE (9-13) |0|0|X|X| X| X|X|SD|0| <- swp entry + * | TYPE (59-63) | ~OFFSET (9-58) |0|0|X|X| X| X|X|SD|0| <- swp entry * * G (8) is aliased and used as a PROT_NONE indicator for * !present ptes. We need to start storing swap entries above @@ -289,20 +289,34 @@ * * Bit 7 in swp entry should be 0 because pmd_present checks not only P, * but also L and G. + * + * The offset is inverted by a binary not operation to make the high + * physical bits set. */ -#define SWP_TYPE_FIRST_BIT (_PAGE_BIT_PROTNONE + 1) -#define SWP_TYPE_BITS 5 -/* Place the offset above the type: */ -#define SWP_OFFSET_FIRST_BIT (SWP_TYPE_FIRST_BIT + SWP_TYPE_BITS) +#define SWP_TYPE_BITS 5 + +#define SWP_OFFSET_FIRST_BIT (_PAGE_BIT_PROTNONE + 1) + +/* We always extract/encode the offset by shifting it all the way up, and then down again */ +#define SWP_OFFSET_SHIFT (SWP_OFFSET_FIRST_BIT+SWP_TYPE_BITS) #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS) -#define __swp_type(x) (((x).val >> (SWP_TYPE_FIRST_BIT)) \ - & ((1U << SWP_TYPE_BITS) - 1)) -#define __swp_offset(x) ((x).val >> SWP_OFFSET_FIRST_BIT) -#define __swp_entry(type, offset) ((swp_entry_t) { \ - ((type) << (SWP_TYPE_FIRST_BIT)) \ - | ((offset) << SWP_OFFSET_FIRST_BIT) }) +/* Extract the high bits for type */ +#define __swp_type(x) ((x).val >> (64 - SWP_TYPE_BITS)) + +/* Shift up (to get rid of type), then down to get value */ +#define __swp_offset(x) (~(x).val << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT) + +/* + * Shift the offset up "too far" by TYPE bits, then down again + * The offset is inverted by a binary not operation to make the high + * physical bits set. 
+ */ +#define __swp_entry(type, offset) ((swp_entry_t) { \ + (~(unsigned long)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \ + | ((unsigned long)(type) << (64-SWP_TYPE_BITS)) }) + #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val((pte)) }) #define __pmd_to_swp_entry(pmd) ((swp_entry_t) { pmd_val((pmd)) }) #define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val }) @@ -347,4 +361,6 @@ } +#include + #endif /* !__ASSEMBLY__ */ #endif /* _ASM_X86_PGTABLE_64_H */ diff -u linux-azure-4.15.0/arch/x86/include/asm/pgtable_types.h linux-azure-4.15.0/arch/x86/include/asm/pgtable_types.h --- linux-azure-4.15.0/arch/x86/include/asm/pgtable_types.h +++ linux-azure-4.15.0/arch/x86/include/asm/pgtable_types.h @@ -197,20 +197,22 @@ #define __PAGE_KERNEL_NOENC (__PAGE_KERNEL) #define __PAGE_KERNEL_NOENC_WP (__PAGE_KERNEL_WP) -#define PAGE_KERNEL __pgprot(__PAGE_KERNEL | _PAGE_ENC) -#define PAGE_KERNEL_NOENC __pgprot(__PAGE_KERNEL) -#define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO | _PAGE_ENC) -#define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC) -#define PAGE_KERNEL_EXEC_NOENC __pgprot(__PAGE_KERNEL_EXEC) -#define PAGE_KERNEL_RX __pgprot(__PAGE_KERNEL_RX | _PAGE_ENC) -#define PAGE_KERNEL_NOCACHE __pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC) -#define PAGE_KERNEL_LARGE __pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC) -#define PAGE_KERNEL_LARGE_EXEC __pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC) -#define PAGE_KERNEL_VSYSCALL __pgprot(__PAGE_KERNEL_VSYSCALL | _PAGE_ENC) -#define PAGE_KERNEL_VVAR __pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC) +#define default_pgprot(x) __pgprot((x) & __default_kernel_pte_mask) -#define PAGE_KERNEL_IO __pgprot(__PAGE_KERNEL_IO) -#define PAGE_KERNEL_IO_NOCACHE __pgprot(__PAGE_KERNEL_IO_NOCACHE) +#define PAGE_KERNEL default_pgprot(__PAGE_KERNEL | _PAGE_ENC) +#define PAGE_KERNEL_NOENC default_pgprot(__PAGE_KERNEL) +#define PAGE_KERNEL_RO default_pgprot(__PAGE_KERNEL_RO | _PAGE_ENC) +#define PAGE_KERNEL_EXEC default_pgprot(__PAGE_KERNEL_EXEC | 
_PAGE_ENC) +#define PAGE_KERNEL_EXEC_NOENC default_pgprot(__PAGE_KERNEL_EXEC) +#define PAGE_KERNEL_RX default_pgprot(__PAGE_KERNEL_RX | _PAGE_ENC) +#define PAGE_KERNEL_NOCACHE default_pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC) +#define PAGE_KERNEL_LARGE default_pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC) +#define PAGE_KERNEL_LARGE_EXEC default_pgprot(__PAGE_KERNEL_LARGE_EXEC | _PAGE_ENC) +#define PAGE_KERNEL_VSYSCALL default_pgprot(__PAGE_KERNEL_VSYSCALL | _PAGE_ENC) +#define PAGE_KERNEL_VVAR default_pgprot(__PAGE_KERNEL_VVAR | _PAGE_ENC) + +#define PAGE_KERNEL_IO default_pgprot(__PAGE_KERNEL_IO) +#define PAGE_KERNEL_IO_NOCACHE default_pgprot(__PAGE_KERNEL_IO_NOCACHE) #endif /* __ASSEMBLY__ */ @@ -485,6 +487,7 @@ typedef struct page *pgtable_t; extern pteval_t __supported_pte_mask; +extern pteval_t __default_kernel_pte_mask; extern void set_nx(void); extern int nx_enabled; reverted: --- linux-azure-4.15.0/arch/x86/include/asm/pkeys.h +++ linux-azure-4.15.0.orig/arch/x86/include/asm/pkeys.h @@ -2,8 +2,6 @@ #ifndef _ASM_X86_PKEYS_H #define _ASM_X86_PKEYS_H -#define ARCH_DEFAULT_PKEY 0 - #define arch_max_pkey() (boot_cpu_has(X86_FEATURE_OSPKE) ? 16 : 1) extern int arch_set_user_pkey_access(struct task_struct *tsk, int pkey, @@ -17,7 +15,7 @@ static inline int execute_only_pkey(struct mm_struct *mm) { if (!boot_cpu_has(X86_FEATURE_OSPKE)) + return 0; - return ARCH_DEFAULT_PKEY; return __execute_only_pkey(mm); } @@ -51,21 +49,13 @@ { /* * "Allocated" pkeys are those that have been returned + * from pkey_alloc(). pkey 0 is special, and never + * returned from pkey_alloc(). - * from pkey_alloc() or pkey 0 which is allocated - * implicitly when the mm is created. */ + if (pkey <= 0) - if (pkey < 0) return false; if (pkey >= arch_max_pkey()) return false; - /* - * The exec-only pkey is set in the allocation map, but - * is not available to any of the user interfaces like - * mprotect_pkey(). 
- */ - if (pkey == mm->context.execute_only_pkey) - return false; - return mm_pkey_allocation_map(mm) & (1U << pkey); } diff -u linux-azure-4.15.0/arch/x86/include/asm/processor.h linux-azure-4.15.0/arch/x86/include/asm/processor.h --- linux-azure-4.15.0/arch/x86/include/asm/processor.h +++ linux-azure-4.15.0/arch/x86/include/asm/processor.h @@ -181,20 +181,16 @@ extern void cpu_detect(struct cpuinfo_x86 *c); +static inline unsigned long l1tf_pfn_limit(void) +{ + return BIT(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT) - 1; +} + extern void early_cpu_init(void); extern void identify_boot_cpu(void); extern void identify_secondary_cpu(struct cpuinfo_x86 *); extern void print_cpu_info(struct cpuinfo_x86 *); void print_cpu_msr(struct cpuinfo_x86 *); -extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c); -extern u32 get_scattered_cpuid_leaf(unsigned int level, - unsigned int sub_leaf, - enum cpuid_regs_idx reg); -extern unsigned int init_intel_cacheinfo(struct cpuinfo_x86 *c); -extern void init_amd_cacheinfo(struct cpuinfo_x86 *c); - -extern void detect_extended_topology(struct cpuinfo_x86 *c); -extern void detect_ht(struct cpuinfo_x86 *c); #ifdef CONFIG_X86_32 extern int have_cpuid_p(void); @@ -972,2 +968,14 @@ void microcode_check(void); + +enum l1tf_mitigations { + L1TF_MITIGATION_OFF, + L1TF_MITIGATION_FLUSH_NOWARN, + L1TF_MITIGATION_FLUSH, + L1TF_MITIGATION_FLUSH_NOSMT, + L1TF_MITIGATION_FULL, + L1TF_MITIGATION_FULL_FORCE +}; + +extern enum l1tf_mitigations l1tf_mitigation; + #endif /* _ASM_X86_PROCESSOR_H */ diff -u linux-azure-4.15.0/arch/x86/include/asm/smp.h linux-azure-4.15.0/arch/x86/include/asm/smp.h --- linux-azure-4.15.0/arch/x86/include/asm/smp.h +++ linux-azure-4.15.0/arch/x86/include/asm/smp.h @@ -171,7 +171,6 @@ wbinvd(); return 0; } -#define smp_num_siblings 1 #endif /* CONFIG_SMP */ extern unsigned disabled_cpus; diff -u linux-azure-4.15.0/arch/x86/include/asm/vmx.h linux-azure-4.15.0/arch/x86/include/asm/vmx.h --- 
linux-azure-4.15.0/arch/x86/include/asm/vmx.h +++ linux-azure-4.15.0/arch/x86/include/asm/vmx.h @@ -573,2 +573,13 @@ +enum vmx_l1d_flush_state { + VMENTER_L1D_FLUSH_AUTO, + VMENTER_L1D_FLUSH_NEVER, + VMENTER_L1D_FLUSH_COND, + VMENTER_L1D_FLUSH_ALWAYS, + VMENTER_L1D_FLUSH_EPT_DISABLED, + VMENTER_L1D_FLUSH_NOT_REQUIRED, +}; + +extern enum vmx_l1d_flush_state l1tf_vmx_mitigation; + #endif reverted: --- linux-azure-4.15.0/arch/x86/include/uapi/asm/msgbuf.h +++ linux-azure-4.15.0.orig/arch/x86/include/uapi/asm/msgbuf.h @@ -1,32 +1 @@ -/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ -#ifndef __ASM_X64_MSGBUF_H -#define __ASM_X64_MSGBUF_H - -#if !defined(__x86_64__) || !defined(__ILP32__) #include -#else -/* - * The msqid64_ds structure for x86 architecture with x32 ABI. - * - * On x86-32 and x86-64 we can just use the generic definition, but - * x32 uses the same binary layout as x86_64, which is differnet - * from other 32-bit architectures. - */ - -struct msqid64_ds { - struct ipc64_perm msg_perm; - __kernel_time_t msg_stime; /* last msgsnd time */ - __kernel_time_t msg_rtime; /* last msgrcv time */ - __kernel_time_t msg_ctime; /* last change time */ - __kernel_ulong_t msg_cbytes; /* current number of bytes on queue */ - __kernel_ulong_t msg_qnum; /* number of messages in queue */ - __kernel_ulong_t msg_qbytes; /* max number of bytes on queue */ - __kernel_pid_t msg_lspid; /* pid of last msgsnd */ - __kernel_pid_t msg_lrpid; /* last receive pid */ - __kernel_ulong_t __unused4; - __kernel_ulong_t __unused5; -}; - -#endif - -#endif /* __ASM_GENERIC_MSGBUF_H */ reverted: --- linux-azure-4.15.0/arch/x86/include/uapi/asm/shmbuf.h +++ linux-azure-4.15.0.orig/arch/x86/include/uapi/asm/shmbuf.h @@ -1,43 +1 @@ -/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ -#ifndef __ASM_X86_SHMBUF_H -#define __ASM_X86_SHMBUF_H - -#if !defined(__x86_64__) || !defined(__ILP32__) #include -#else -/* - * The shmid64_ds structure for x86 architecture with x32 ABI. 
- * - * On x86-32 and x86-64 we can just use the generic definition, but - * x32 uses the same binary layout as x86_64, which is differnet - * from other 32-bit architectures. - */ - -struct shmid64_ds { - struct ipc64_perm shm_perm; /* operation perms */ - size_t shm_segsz; /* size of segment (bytes) */ - __kernel_time_t shm_atime; /* last attach time */ - __kernel_time_t shm_dtime; /* last detach time */ - __kernel_time_t shm_ctime; /* last change time */ - __kernel_pid_t shm_cpid; /* pid of creator */ - __kernel_pid_t shm_lpid; /* pid of last operator */ - __kernel_ulong_t shm_nattch; /* no. of current attaches */ - __kernel_ulong_t __unused4; - __kernel_ulong_t __unused5; -}; - -struct shminfo64 { - __kernel_ulong_t shmmax; - __kernel_ulong_t shmmin; - __kernel_ulong_t shmmni; - __kernel_ulong_t shmseg; - __kernel_ulong_t shmall; - __kernel_ulong_t __unused1; - __kernel_ulong_t __unused2; - __kernel_ulong_t __unused3; - __kernel_ulong_t __unused4; -}; - -#endif - -#endif /* __ASM_X86_SHMBUF_H */ diff -u linux-azure-4.15.0/arch/x86/kernel/amd_nb.c linux-azure-4.15.0/arch/x86/kernel/amd_nb.c --- linux-azure-4.15.0/arch/x86/kernel/amd_nb.c +++ linux-azure-4.15.0/arch/x86/kernel/amd_nb.c @@ -14,11 +14,8 @@ #include #define PCI_DEVICE_ID_AMD_17H_ROOT 0x1450 -#define PCI_DEVICE_ID_AMD_17H_M10H_ROOT 0x15d0 #define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463 #define PCI_DEVICE_ID_AMD_17H_DF_F4 0x1464 -#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb -#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F4 0x15ec /* Protect the PCI config register pairs used for SMN and DF indirect access. 
*/ static DEFINE_MUTEX(smn_mutex); @@ -27,7 +24,6 @@ static const struct pci_device_id amd_root_ids[] = { { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_ROOT) }, - { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_ROOT) }, {} }; @@ -43,7 +39,6 @@ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_NB_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) }, - { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F3) }, {} }; @@ -56,7 +51,6 @@ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_NB_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F4) }, - { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) }, {} }; diff -u linux-azure-4.15.0/arch/x86/kernel/apic/apic.c linux-azure-4.15.0/arch/x86/kernel/apic/apic.c --- linux-azure-4.15.0/arch/x86/kernel/apic/apic.c +++ linux-azure-4.15.0/arch/x86/kernel/apic/apic.c @@ -56,6 +56,7 @@ #include #include #include +#include unsigned int num_processors; @@ -2180,6 +2181,21 @@ [0 ... NR_CPUS - 1] = -1, }; +/** + * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread + * @id: APIC ID to check + */ +bool apic_id_is_primary_thread(unsigned int apicid) +{ + u32 mask; + + if (smp_num_siblings == 1) + return true; + /* Isolate the SMT bit(s) in the APICID and check for 0 */ + mask = (1U << (fls(smp_num_siblings) - 1)) - 1; + return !(apicid & mask); +} + /* * Should use this API to allocate logical CPU IDs to keep nr_logical_cpuids * and cpuid_to_apicid[] synchronized. 
diff -u linux-azure-4.15.0/arch/x86/kernel/apic/io_apic.c linux-azure-4.15.0/arch/x86/kernel/apic/io_apic.c --- linux-azure-4.15.0/arch/x86/kernel/apic/io_apic.c +++ linux-azure-4.15.0/arch/x86/kernel/apic/io_apic.c @@ -33,6 +33,7 @@ #include #include +#include #include #include #include diff -u linux-azure-4.15.0/arch/x86/kernel/apic/vector.c linux-azure-4.15.0/arch/x86/kernel/apic/vector.c --- linux-azure-4.15.0/arch/x86/kernel/apic/vector.c +++ linux-azure-4.15.0/arch/x86/kernel/apic/vector.c @@ -11,6 +11,7 @@ * published by the Free Software Foundation. */ #include +#include #include #include #include reverted: --- linux-azure-4.15.0/arch/x86/kernel/apic/x2apic_cluster.c +++ linux-azure-4.15.0.orig/arch/x86/kernel/apic/x2apic_cluster.c @@ -116,7 +116,6 @@ goto update; } cmsk = cluster_hotplug_mask; - cmsk->clusterid = cluster; cluster_hotplug_mask = NULL; update: this_cpu_write(cluster_masks, cmsk); diff -u linux-azure-4.15.0/arch/x86/kernel/cpu/amd.c linux-azure-4.15.0/arch/x86/kernel/cpu/amd.c --- linux-azure-4.15.0/arch/x86/kernel/cpu/amd.c +++ linux-azure-4.15.0/arch/x86/kernel/cpu/amd.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -298,7 +299,6 @@ } #endif -#ifdef CONFIG_SMP /* * Fix up cpu_core_id for pre-F17h systems to be in the * [0 .. cores_per_node - 1] range. 
Not really needed but @@ -315,6 +315,13 @@ c->cpu_core_id %= cus_per_node; } + +static void amd_get_topology_early(struct cpuinfo_x86 *c) +{ + if (cpu_has(c, X86_FEATURE_TOPOEXT)) + smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1; +} + /* * Fixup core topology information for * (1) AMD multi-node processors @@ -328,12 +335,12 @@ /* get information required for multi-node processors */ if (boot_cpu_has(X86_FEATURE_TOPOEXT)) { + int err; u32 eax, ebx, ecx, edx; cpuid(0x8000001e, &eax, &ebx, &ecx, &edx); node_id = ecx & 0xff; - smp_num_siblings = ((ebx >> 8) & 0xff) + 1; if (c->x86 == 0x15) c->cu_id = ebx & 0xff; @@ -346,21 +353,15 @@ } /* - * We may have multiple LLCs if L3 caches exist, so check if we - * have an L3 cache by looking at the L3 cache CPUID leaf. + * In case leaf B is available, use it to derive + * topology information. */ - if (cpuid_edx(0x80000006)) { - if (c->x86 == 0x17) { - /* - * LLC is at the core complex level. - * Core complex id is ApicId[3]. - */ - per_cpu(cpu_llc_id, cpu) = c->apicid >> 3; - } else { - /* LLC is at the node level. */ - per_cpu(cpu_llc_id, cpu) = node_id; - } - } + err = detect_extended_topology(c); + if (!err) + c->x86_coreid_bits = get_count_order(c->x86_max_cores); + + cacheinfo_amd_init_llc_id(c, cpu, node_id); + } else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) { u64 value; @@ -376,7 +377,6 @@ legacy_fixup_core_id(c); } } -#endif /* * On a AMD dual core setup the lower bits of the APIC id distinguish the cores. 
@@ -384,7 +384,6 @@ */ static void amd_detect_cmp(struct cpuinfo_x86 *c) { -#ifdef CONFIG_SMP unsigned bits; int cpu = smp_processor_id(); @@ -395,17 +394,11 @@ c->phys_proc_id = c->initial_apicid >> bits; /* use socket ID also for last level cache */ per_cpu(cpu_llc_id, cpu) = c->phys_proc_id; - amd_get_topology(c); -#endif } u16 amd_get_nb_id(int cpu) { - u16 id = 0; -#ifdef CONFIG_SMP - id = per_cpu(cpu_llc_id, cpu); -#endif - return id; + return per_cpu(cpu_llc_id, cpu); } EXPORT_SYMBOL_GPL(amd_get_nb_id); @@ -579,6 +572,7 @@ static void early_init_amd(struct cpuinfo_x86 *c) { + u64 value; u32 dummy; early_init_amd_mc(c); @@ -668,6 +662,22 @@ clear_cpu_cap(c, X86_FEATURE_SME); } } + + /* Re-enable TopologyExtensions if switched off by BIOS */ + if (c->x86 == 0x15 && + (c->x86_model >= 0x10 && c->x86_model <= 0x6f) && + !cpu_has(c, X86_FEATURE_TOPOEXT)) { + + if (msr_set_bit(0xc0011005, 54) > 0) { + rdmsrl(0xc0011005, value); + if (value & BIT_64(54)) { + set_cpu_cap(c, X86_FEATURE_TOPOEXT); + pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n"); + } + } + } + + amd_get_topology_early(c); } static void init_amd_k8(struct cpuinfo_x86 *c) @@ -759,19 +769,6 @@ { u64 value; - /* re-enable TopologyExtensions if switched off by BIOS */ - if ((c->x86_model >= 0x10) && (c->x86_model <= 0x6f) && - !cpu_has(c, X86_FEATURE_TOPOEXT)) { - - if (msr_set_bit(0xc0011005, 54) > 0) { - rdmsrl(0xc0011005, value); - if (value & BIT_64(54)) { - set_cpu_cap(c, X86_FEATURE_TOPOEXT); - pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n"); - } - } - } - /* * The way access filter has a performance penalty on some workloads. * Disable it on the affected CPUs. @@ -835,15 +832,9 @@ cpu_detect_cache_sizes(c); - /* Multi core CPU? 
*/ - if (c->extended_cpuid_level >= 0x80000008) { - amd_detect_cmp(c); - srat_detect_node(c); - } - -#ifdef CONFIG_X86_32 - detect_ht(c); -#endif + amd_detect_cmp(c); + amd_get_topology(c); + srat_detect_node(c); init_amd_cacheinfo(c); diff -u linux-azure-4.15.0/arch/x86/kernel/cpu/bugs.c linux-azure-4.15.0/arch/x86/kernel/cpu/bugs.c --- linux-azure-4.15.0/arch/x86/kernel/cpu/bugs.c +++ linux-azure-4.15.0/arch/x86/kernel/cpu/bugs.c @@ -22,14 +22,17 @@ #include #include #include +#include #include #include #include #include #include +#include static void __init spectre_v2_select_mitigation(void); static void __init ssb_select_mitigation(void); +static void __init l1tf_select_mitigation(void); /* * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any @@ -55,6 +58,12 @@ { identify_boot_cpu(); + /* + * identify_boot_cpu() initialized SMT support information, let the + * core code know. + */ + cpu_smt_check_topology_early(); + if (!IS_ENABLED(CONFIG_SMP)) { pr_info("CPU: "); print_cpu_info(&boot_cpu_data); @@ -81,6 +90,8 @@ */ ssb_select_mitigation(); + l1tf_select_mitigation(); + #ifdef CONFIG_X86_32 /* * Check whether we are able to run this kernel safely on SMP. 
@@ -654,8 +665,121 @@ x86_amd_ssb_disable(); } +#undef pr_fmt +#define pr_fmt(fmt) "L1TF: " fmt + +/* Default mitigation for L1TF-affected CPUs */ +enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH; +#if IS_ENABLED(CONFIG_KVM_INTEL) +EXPORT_SYMBOL_GPL(l1tf_mitigation); + +enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO; +EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation); +#endif + +static void __init l1tf_select_mitigation(void) +{ + u64 half_pa; + + if (!boot_cpu_has_bug(X86_BUG_L1TF)) + return; + + switch (l1tf_mitigation) { + case L1TF_MITIGATION_OFF: + case L1TF_MITIGATION_FLUSH_NOWARN: + case L1TF_MITIGATION_FLUSH: + break; + case L1TF_MITIGATION_FLUSH_NOSMT: + case L1TF_MITIGATION_FULL: + cpu_smt_disable(false); + break; + case L1TF_MITIGATION_FULL_FORCE: + cpu_smt_disable(true); + break; + } + +#if CONFIG_PGTABLE_LEVELS == 2 + pr_warn("Kernel not compiled for PAE. No mitigation for L1TF\n"); + return; +#endif + + /* + * This is extremely unlikely to happen because on almost all + * systems MAX_PA/2 far exceeds the amount of RAM that can be + * fitted into the DIMM slots. + */ + half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT; + if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) { + pr_warn("System has more than MAX_PA/2 memory. 
L1TF mitigation not effective.\n"); + return; + } + + setup_force_cpu_cap(X86_FEATURE_L1TF_PTEINV); +} + +static int __init l1tf_cmdline(char *str) +{ + if (!boot_cpu_has_bug(X86_BUG_L1TF)) + return 0; + + if (!str) + return -EINVAL; + + if (!strcmp(str, "off")) + l1tf_mitigation = L1TF_MITIGATION_OFF; + else if (!strcmp(str, "flush,nowarn")) + l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOWARN; + else if (!strcmp(str, "flush")) + l1tf_mitigation = L1TF_MITIGATION_FLUSH; + else if (!strcmp(str, "flush,nosmt")) + l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT; + else if (!strcmp(str, "full")) + l1tf_mitigation = L1TF_MITIGATION_FULL; + else if (!strcmp(str, "full,force")) + l1tf_mitigation = L1TF_MITIGATION_FULL_FORCE; + + return 0; +} +early_param("l1tf", l1tf_cmdline); + +#undef pr_fmt + #ifdef CONFIG_SYSFS +#define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion" + +#if IS_ENABLED(CONFIG_KVM_INTEL) +static const char *l1tf_vmx_states[] = { + [VMENTER_L1D_FLUSH_AUTO] = "auto", + [VMENTER_L1D_FLUSH_NEVER] = "vulnerable", + [VMENTER_L1D_FLUSH_COND] = "conditional cache flushes", + [VMENTER_L1D_FLUSH_ALWAYS] = "cache flushes", + [VMENTER_L1D_FLUSH_EPT_DISABLED] = "EPT disabled", + [VMENTER_L1D_FLUSH_NOT_REQUIRED] = "flush not necessary" +}; + +static ssize_t l1tf_show_state(char *buf) +{ + if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO) + return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG); + + if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED || + (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER && + cpu_smt_control == CPU_SMT_ENABLED)) + return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG, + l1tf_vmx_states[l1tf_vmx_mitigation]); + + return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG, + l1tf_vmx_states[l1tf_vmx_mitigation], + cpu_smt_control == CPU_SMT_ENABLED ? 
"vulnerable" : "disabled"); +} +#else +static ssize_t l1tf_show_state(char *buf) +{ + return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG); +} +#endif + static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, char *buf, unsigned int bug) { @@ -681,6 +805,10 @@ case X86_BUG_SPEC_STORE_BYPASS: return sprintf(buf, "%s\n", ssb_strings[ssb_mode]); + case X86_BUG_L1TF: + if (boot_cpu_has(X86_FEATURE_L1TF_PTEINV)) + return l1tf_show_state(buf); + break; default: break; } @@ -709,2 +837,7 @@ } + +ssize_t cpu_show_l1tf(struct device *dev, struct device_attribute *attr, char *buf) +{ + return cpu_show_common(dev, attr, buf, X86_BUG_L1TF); +} #endif diff -u linux-azure-4.15.0/arch/x86/kernel/cpu/centaur.c linux-azure-4.15.0/arch/x86/kernel/cpu/centaur.c --- linux-azure-4.15.0/arch/x86/kernel/cpu/centaur.c +++ linux-azure-4.15.0/arch/x86/kernel/cpu/centaur.c @@ -18,6 +18,13 @@ #define RNG_ENABLED (1 << 3) #define RNG_ENABLE (1 << 6) /* MSR_VIA_RNG */ +#define X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW 0x00200000 +#define X86_VMX_FEATURE_PROC_CTLS_VNMI 0x00400000 +#define X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS 0x80000000 +#define X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC 0x00000001 +#define X86_VMX_FEATURE_PROC_CTLS2_EPT 0x00000002 +#define X86_VMX_FEATURE_PROC_CTLS2_VPID 0x00000020 + static void init_c3(struct cpuinfo_x86 *c) { u32 lo, hi; @@ -108,6 +115,31 @@ #endif } +static void centaur_detect_vmx_virtcap(struct cpuinfo_x86 *c) +{ + u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2; + + rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high); + msr_ctl = vmx_msr_high | vmx_msr_low; + + if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW) + set_cpu_cap(c, X86_FEATURE_TPR_SHADOW); + if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_VNMI) + set_cpu_cap(c, X86_FEATURE_VNMI); + if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS) { + rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2, + vmx_msr_low, vmx_msr_high); + msr_ctl2 = vmx_msr_high | vmx_msr_low; + if ((msr_ctl2 & 
X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC) && + (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW)) + set_cpu_cap(c, X86_FEATURE_FLEXPRIORITY); + if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_EPT) + set_cpu_cap(c, X86_FEATURE_EPT); + if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID) + set_cpu_cap(c, X86_FEATURE_VPID); + } +} + static void init_centaur(struct cpuinfo_x86 *c) { #ifdef CONFIG_X86_32 @@ -124,6 +156,24 @@ clear_cpu_cap(c, 0*32+31); #endif early_init_centaur(c); + init_intel_cacheinfo(c); + detect_num_cpu_cores(c); +#ifdef CONFIG_X86_32 + detect_ht(c); +#endif + + if (c->cpuid_level > 9) { + unsigned int eax = cpuid_eax(10); + + /* + * Check for version and the number of counters + * Version(eax[7:0]) can't be 0; + * Counters(eax[15:8]) should be greater than 1; + */ + if ((eax & 0xff) && (((eax >> 8) & 0xff) > 1)) + set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON); + } + switch (c->x86) { #ifdef CONFIG_X86_32 case 5: @@ -195,6 +245,9 @@ #ifdef CONFIG_X86_64 set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC); #endif + + if (cpu_has(c, X86_FEATURE_VMX)) + centaur_detect_vmx_virtcap(c); } #ifdef CONFIG_X86_32 diff -u linux-azure-4.15.0/arch/x86/kernel/cpu/common.c linux-azure-4.15.0/arch/x86/kernel/cpu/common.c --- linux-azure-4.15.0/arch/x86/kernel/cpu/common.c +++ linux-azure-4.15.0/arch/x86/kernel/cpu/common.c @@ -66,6 +66,13 @@ /* representing cpus for which sibling maps can be computed */ cpumask_var_t cpu_sibling_setup_mask; +/* Number of siblings per CPU package */ +int smp_num_siblings = 1; +EXPORT_SYMBOL(smp_num_siblings); + +/* Last level cache ID of each logical CPU */ +DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID; + /* correctly size the local cpu masks */ void __init setup_cpu_local_masks(void) { @@ -577,6 +584,19 @@ *(s + 1) = '\0'; } +void detect_num_cpu_cores(struct cpuinfo_x86 *c) +{ + unsigned int eax, ebx, ecx, edx; + + c->x86_max_cores = 1; + if (!IS_ENABLED(CONFIG_SMP) || c->cpuid_level < 4) + return; + + cpuid_count(4, 0, &eax, &ebx, &ecx, &edx); + if 
(eax & 0x1f) + c->x86_max_cores = (eax >> 26) + 1; +} + void cpu_detect_cache_sizes(struct cpuinfo_x86 *c) { unsigned int n, dummy, ebx, ecx, edx, l2size; @@ -638,33 +658,36 @@ tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]); } -void detect_ht(struct cpuinfo_x86 *c) +int detect_ht_early(struct cpuinfo_x86 *c) { #ifdef CONFIG_SMP u32 eax, ebx, ecx, edx; - int index_msb, core_bits; - static bool printed; if (!cpu_has(c, X86_FEATURE_HT)) - return; + return -1; if (cpu_has(c, X86_FEATURE_CMP_LEGACY)) - goto out; + return -1; if (cpu_has(c, X86_FEATURE_XTOPOLOGY)) - return; + return -1; cpuid(1, &eax, &ebx, &ecx, &edx); smp_num_siblings = (ebx & 0xff0000) >> 16; - - if (smp_num_siblings == 1) { + if (smp_num_siblings == 1) pr_info_once("CPU0: Hyper-Threading is disabled\n"); - goto out; - } +#endif + return 0; +} + +void detect_ht(struct cpuinfo_x86 *c) +{ +#ifdef CONFIG_SMP + int index_msb, core_bits; - if (smp_num_siblings <= 1) - goto out; + if (detect_ht_early(c) < 0) + return; index_msb = get_count_order(smp_num_siblings); c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid, index_msb); @@ -677,15 +700,6 @@ c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid, index_msb) & ((1 << core_bits) - 1); - -out: - if (!printed && (c->x86_max_cores * smp_num_siblings) > 1) { - pr_info("CPU: Physical Processor ID: %d\n", - c->phys_proc_id); - pr_info("CPU: Processor Core ID: %d\n", - c->cpu_core_id); - printed = 1; - } #endif } @@ -957,6 +971,21 @@ {} }; +static const __initconst struct x86_cpu_id cpu_no_l1tf[] = { + /* in addition to cpu_no_speculation */ + { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT1 }, + { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT2 }, + { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_AIRMONT }, + { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_MERRIFIELD }, + { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_MOOREFIELD }, + { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_GOLDMONT }, + { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_DENVERTON }, + { X86_VENDOR_INTEL, 6, 
INTEL_FAM6_ATOM_GEMINI_LAKE }, + { X86_VENDOR_INTEL, 6, INTEL_FAM6_XEON_PHI_KNL }, + { X86_VENDOR_INTEL, 6, INTEL_FAM6_XEON_PHI_KNM }, + {} +}; + static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) { u64 ia32_cap = 0; @@ -982,6 +1011,11 @@ return; setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN); + + if (x86_match_cpu(cpu_no_l1tf)) + return; + + setup_force_cpu_bug(X86_BUG_L1TF); } /* diff -u linux-azure-4.15.0/arch/x86/kernel/cpu/cpu.h linux-azure-4.15.0/arch/x86/kernel/cpu/cpu.h --- linux-azure-4.15.0/arch/x86/kernel/cpu/cpu.h +++ linux-azure-4.15.0/arch/x86/kernel/cpu/cpu.h @@ -47,6 +47,18 @@ extern void get_cpu_cap(struct cpuinfo_x86 *c); extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c); +extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c); +extern u32 get_scattered_cpuid_leaf(unsigned int level, + unsigned int sub_leaf, + enum cpuid_regs_idx reg); +extern void init_intel_cacheinfo(struct cpuinfo_x86 *c); +extern void init_amd_cacheinfo(struct cpuinfo_x86 *c); + +extern void detect_num_cpu_cores(struct cpuinfo_x86 *c); +extern int detect_extended_topology_early(struct cpuinfo_x86 *c); +extern int detect_extended_topology(struct cpuinfo_x86 *c); +extern int detect_ht_early(struct cpuinfo_x86 *c); +extern void detect_ht(struct cpuinfo_x86 *c); unsigned int aperfmperf_get_khz(int cpu); diff -u linux-azure-4.15.0/arch/x86/kernel/cpu/intel.c linux-azure-4.15.0/arch/x86/kernel/cpu/intel.c --- linux-azure-4.15.0/arch/x86/kernel/cpu/intel.c +++ linux-azure-4.15.0/arch/x86/kernel/cpu/intel.c @@ -301,6 +301,13 @@ } check_mpx_erratum(c); + + /* + * Get the number of SMT siblings early from the extended topology + * leaf, if available. Otherwise try the legacy SMT detection. 
+ */ + if (detect_extended_topology_early(c) < 0) + detect_ht_early(c); } #ifdef CONFIG_X86_32 @@ -456,24 +463,6 @@ #endif } -/* - * find out the number of processor cores on the die - */ -static int intel_num_cpu_cores(struct cpuinfo_x86 *c) -{ - unsigned int eax, ebx, ecx, edx; - - if (!IS_ENABLED(CONFIG_SMP) || c->cpuid_level < 4) - return 1; - - /* Intel has a non-standard dependency on %ecx for this CPUID level. */ - cpuid_count(4, 0, &eax, &ebx, &ecx, &edx); - if (eax & 0x1f) - return (eax >> 26) + 1; - else - return 1; -} - static void detect_vmx_virtcap(struct cpuinfo_x86 *c) { /* Intel VMX MSR indicated features */ @@ -572,8 +561,6 @@ static void init_intel(struct cpuinfo_x86 *c) { - unsigned int l2 = 0; - early_init_intel(c); intel_workarounds(c); @@ -590,19 +577,13 @@ * let's use the legacy cpuid vector 0x1 and 0x4 for topology * detection. */ - c->x86_max_cores = intel_num_cpu_cores(c); + detect_num_cpu_cores(c); #ifdef CONFIG_X86_32 detect_ht(c); #endif } - l2 = init_intel_cacheinfo(c); - - /* Detect legacy cache sizes if init_intel_cacheinfo did not */ - if (l2 == 0) { - cpu_detect_cache_sizes(c); - l2 = c->x86_cache_size; - } + init_intel_cacheinfo(c); if (c->cpuid_level > 9) { unsigned eax = cpuid_eax(10); @@ -615,7 +596,8 @@ set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC); if (boot_cpu_has(X86_FEATURE_DS)) { - unsigned int l1; + unsigned int l1, l2; + rdmsr(MSR_IA32_MISC_ENABLE, l1, l2); if (!(l1 & (1<<11))) set_cpu_cap(c, X86_FEATURE_BTS); @@ -643,6 +625,7 @@ * Dixon is NOT a Celeron. 
*/ if (c->x86 == 6) { + unsigned int l2 = c->x86_cache_size; char *p = NULL; switch (c->x86_model) { diff -u linux-azure-4.15.0/arch/x86/kernel/cpu/microcode/core.c linux-azure-4.15.0/arch/x86/kernel/cpu/microcode/core.c --- linux-azure-4.15.0/arch/x86/kernel/cpu/microcode/core.c +++ linux-azure-4.15.0/arch/x86/kernel/cpu/microcode/core.c @@ -564,12 +564,14 @@ apply_microcode_local(&err); spin_unlock(&update_lock); - /* siblings return UCODE_OK because their engine got updated already */ if (err > UCODE_NFOUND) { pr_warn("Error reloading microcode on CPU %d\n", cpu); - ret = -1; + return -1; + /* siblings return UCODE_OK because their engine got updated already */ } else if (err == UCODE_UPDATED || err == UCODE_OK) { ret = 1; + } else { + return ret; } /* diff -u linux-azure-4.15.0/arch/x86/kernel/cpu/microcode/intel.c linux-azure-4.15.0/arch/x86/kernel/cpu/microcode/intel.c --- linux-azure-4.15.0/arch/x86/kernel/cpu/microcode/intel.c +++ linux-azure-4.15.0/arch/x86/kernel/cpu/microcode/intel.c @@ -485,6 +485,7 @@ */ static void save_mc_for_early(u8 *mc, unsigned int size) { +#ifdef CONFIG_HOTPLUG_CPU /* Synchronization during CPU hotplug. */ static DEFINE_MUTEX(x86_cpu_microcode_mutex); @@ -494,6 +495,7 @@ show_saved_mc(); mutex_unlock(&x86_cpu_microcode_mutex); +#endif } static bool load_builtin_intel_microcode(struct cpio_data *cp) diff -u linux-azure-4.15.0/arch/x86/kernel/head_64.S linux-azure-4.15.0/arch/x86/kernel/head_64.S --- linux-azure-4.15.0/arch/x86/kernel/head_64.S +++ linux-azure-4.15.0/arch/x86/kernel/head_64.S @@ -401,8 +401,13 @@ .quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC .fill 511, 8, 0 NEXT_PAGE(level2_ident_pgt) - /* Since I easily can, map the first 1G. + /* + * Since I easily can, map the first 1G. * Don't set NX because code runs from these pages. + * + * Note: This sets _PAGE_GLOBAL despite whether + * the CPU supports it or it is enabled. But, + * the CPU should ignore the bit. 
*/ PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD) #else @@ -433,6 +438,10 @@ * (NOTE: at +512MB starts the module area, see MODULES_VADDR. * If you want to increase this then increase MODULES_VADDR * too.) + * + * This table is eventually used by the kernel during normal + * runtime. Care must be taken to clear out undesired bits + * later, like _PAGE_RW or _PAGE_GLOBAL in some cases. */ PMDS(0, __PAGE_KERNEL_LARGE_EXEC, KERNEL_IMAGE_SIZE/PMD_SIZE) diff -u linux-azure-4.15.0/arch/x86/kernel/idt.c linux-azure-4.15.0/arch/x86/kernel/idt.c --- linux-azure-4.15.0/arch/x86/kernel/idt.c +++ linux-azure-4.15.0/arch/x86/kernel/idt.c @@ -8,6 +8,7 @@ #include #include #include +#include struct idt_data { unsigned int vector; diff -u linux-azure-4.15.0/arch/x86/kernel/kprobes/core.c linux-azure-4.15.0/arch/x86/kernel/kprobes/core.c --- linux-azure-4.15.0/arch/x86/kernel/kprobes/core.c +++ linux-azure-4.15.0/arch/x86/kernel/kprobes/core.c @@ -63,6 +63,7 @@ #include #include #include +#include #include "common.h" reverted: --- linux-azure-4.15.0/arch/x86/kernel/machine_kexec_32.c +++ linux-azure-4.15.0.orig/arch/x86/kernel/machine_kexec_32.c @@ -57,17 +57,12 @@ static void machine_kexec_free_page_tables(struct kimage *image) { free_page((unsigned long)image->arch.pgd); - image->arch.pgd = NULL; #ifdef CONFIG_X86_PAE free_page((unsigned long)image->arch.pmd0); - image->arch.pmd0 = NULL; free_page((unsigned long)image->arch.pmd1); - image->arch.pmd1 = NULL; #endif free_page((unsigned long)image->arch.pte0); - image->arch.pte0 = NULL; free_page((unsigned long)image->arch.pte1); - image->arch.pte1 = NULL; } static int machine_kexec_alloc_page_tables(struct kimage *image) @@ -84,6 +79,7 @@ !image->arch.pmd0 || !image->arch.pmd1 || #endif !image->arch.pte0 || !image->arch.pte1) { + machine_kexec_free_page_tables(image); return -ENOMEM; } return 0; diff -u linux-azure-4.15.0/arch/x86/kernel/machine_kexec_64.c linux-azure-4.15.0/arch/x86/kernel/machine_kexec_64.c --- 
linux-azure-4.15.0/arch/x86/kernel/machine_kexec_64.c +++ linux-azure-4.15.0/arch/x86/kernel/machine_kexec_64.c @@ -38,13 +38,9 @@ static void free_transition_pgtable(struct kimage *image) { free_page((unsigned long)image->arch.p4d); - image->arch.p4d = NULL; free_page((unsigned long)image->arch.pud); - image->arch.pud = NULL; free_page((unsigned long)image->arch.pmd); - image->arch.pmd = NULL; free_page((unsigned long)image->arch.pte); - image->arch.pte = NULL; } static int init_transition_pgtable(struct kimage *image, pgd_t *pgd) @@ -94,6 +90,7 @@ set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL_EXEC_NOENC)); return 0; err: + free_transition_pgtable(image); return result; } diff -u linux-azure-4.15.0/arch/x86/kernel/process_64.c linux-azure-4.15.0/arch/x86/kernel/process_64.c --- linux-azure-4.15.0/arch/x86/kernel/process_64.c +++ linux-azure-4.15.0/arch/x86/kernel/process_64.c @@ -528,7 +528,6 @@ clear_thread_flag(TIF_X32); /* Pretend that this comes from a 64bit execve */ task_pt_regs(current)->orig_ax = __NR_execve; - current_thread_info()->status &= ~TS_COMPAT; /* Ensure the corresponding mm is not marked. */ if (current->mm) diff -u linux-azure-4.15.0/arch/x86/kernel/setup.c linux-azure-4.15.0/arch/x86/kernel/setup.c --- linux-azure-4.15.0/arch/x86/kernel/setup.c +++ linux-azure-4.15.0/arch/x86/kernel/setup.c @@ -821,6 +821,12 @@ memblock_reserve(__pa_symbol(_text), (unsigned long)__bss_stop - (unsigned long)_text); + /* + * Make sure page 0 is always reserved because on systems with + * L1TF its contents can be leaked to user processes. 
+ */ + memblock_reserve(0, PAGE_SIZE); + early_reserve_initrd(); /* diff -u linux-azure-4.15.0/arch/x86/kernel/smpboot.c linux-azure-4.15.0/arch/x86/kernel/smpboot.c --- linux-azure-4.15.0/arch/x86/kernel/smpboot.c +++ linux-azure-4.15.0/arch/x86/kernel/smpboot.c @@ -79,13 +79,7 @@ #include #include #include - -/* Number of siblings per CPU package */ -int smp_num_siblings = 1; -EXPORT_SYMBOL(smp_num_siblings); - -/* Last level cache ID of each logical CPU */ -DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID; +#include /* representing HT siblings of each logical CPU */ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map); @@ -272,6 +266,23 @@ } /** + * topology_is_primary_thread - Check whether CPU is the primary SMT thread + * @cpu: CPU to check + */ +bool topology_is_primary_thread(unsigned int cpu) +{ + return apic_id_is_primary_thread(per_cpu(x86_cpu_to_apicid, cpu)); +} + +/** + * topology_smt_supported - Check whether SMT is supported by the CPUs + */ +bool topology_smt_supported(void) +{ + return smp_num_siblings > 1; +} + +/** * topology_phys_to_logical_pkg - Map a physical package id to a logical * * Returns logical package id or -1 if not found @@ -1541,8 +1552,6 @@ void *mwait_ptr; int i; - if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) - return; if (!this_cpu_has(X86_FEATURE_MWAIT)) return; if (!this_cpu_has(X86_FEATURE_CLFLUSH)) diff -u linux-azure-4.15.0/arch/x86/kvm/lapic.c linux-azure-4.15.0/arch/x86/kvm/lapic.c --- linux-azure-4.15.0/arch/x86/kvm/lapic.c +++ linux-azure-4.15.0/arch/x86/kvm/lapic.c @@ -1446,6 +1446,23 @@ local_irq_restore(flags); } +static void start_sw_period(struct kvm_lapic *apic) +{ + if (!apic->lapic_timer.period) + return; + + if (apic_lvtt_oneshot(apic) && + ktime_after(ktime_get(), + apic->lapic_timer.target_expiration)) { + apic_timer_expired(apic); + return; + } + + hrtimer_start(&apic->lapic_timer.timer, + apic->lapic_timer.target_expiration, + HRTIMER_MODE_ABS_PINNED); +} + static void 
update_target_expiration(struct kvm_lapic *apic, uint32_t old_divisor) { ktime_t now, remaining; @@ -1505,43 +1522,11 @@ static void advance_periodic_target_expiration(struct kvm_lapic *apic) { - ktime_t now = ktime_get(); - u64 tscl = rdtsc(); - ktime_t delta; - - /* - * Synchronize both deadlines to the same time source or - * differences in the periods (caused by differences in the - * underlying clocks or numerical approximation errors) will - * cause the two to drift apart over time as the errors - * accumulate. - */ + apic->lapic_timer.tscdeadline += + nsec_to_cycles(apic->vcpu, apic->lapic_timer.period); apic->lapic_timer.target_expiration = ktime_add_ns(apic->lapic_timer.target_expiration, apic->lapic_timer.period); - delta = ktime_sub(apic->lapic_timer.target_expiration, now); - apic->lapic_timer.tscdeadline = kvm_read_l1_tsc(apic->vcpu, tscl) + - nsec_to_cycles(apic->vcpu, delta); -} - -static void start_sw_period(struct kvm_lapic *apic) -{ - if (!apic->lapic_timer.period) - return; - - if (ktime_after(ktime_get(), - apic->lapic_timer.target_expiration)) { - apic_timer_expired(apic); - - if (apic_lvtt_oneshot(apic)) - return; - - advance_periodic_target_expiration(apic); - } - - hrtimer_start(&apic->lapic_timer.timer, - apic->lapic_timer.target_expiration, - HRTIMER_MODE_ABS_PINNED); } bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu) diff -u linux-azure-4.15.0/arch/x86/kvm/mmu.c linux-azure-4.15.0/arch/x86/kvm/mmu.c --- linux-azure-4.15.0/arch/x86/kvm/mmu.c +++ linux-azure-4.15.0/arch/x86/kvm/mmu.c @@ -3836,6 +3836,7 @@ { int r = 1; + vcpu->arch.l1tf_flush_l1d = true; switch (vcpu->arch.apf.host_apf_reason) { default: trace_kvm_page_fault(fault_address, error_code); diff -u linux-azure-4.15.0/arch/x86/kvm/svm.c linux-azure-4.15.0/arch/x86/kvm/svm.c --- linux-azure-4.15.0/arch/x86/kvm/svm.c +++ linux-azure-4.15.0/arch/x86/kvm/svm.c @@ -3571,6 +3571,11 @@ return 0; } +static int svm_get_msr_feature(struct kvm_msr_entry *msr) +{ + return 1; +} + static 
int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) { struct vcpu_svm *svm = to_svm(vcpu); @@ -5687,6 +5692,7 @@ .vcpu_unblocking = svm_vcpu_unblocking, .update_bp_intercept = update_bp_intercept, + .get_msr_feature = svm_get_msr_feature, .get_msr = svm_get_msr, .set_msr = svm_set_msr, .get_segment_base = svm_get_segment_base, diff -u linux-azure-4.15.0/arch/x86/kvm/vmx.c linux-azure-4.15.0/arch/x86/kvm/vmx.c --- linux-azure-4.15.0/arch/x86/kvm/vmx.c +++ linux-azure-4.15.0/arch/x86/kvm/vmx.c @@ -194,6 +194,150 @@ extern const ulong vmx_return; +static DEFINE_STATIC_KEY_FALSE(vmx_l1d_should_flush); +static DEFINE_STATIC_KEY_FALSE(vmx_l1d_flush_cond); +static DEFINE_MUTEX(vmx_l1d_flush_mutex); + +/* Storage for pre module init parameter parsing */ +static enum vmx_l1d_flush_state __read_mostly vmentry_l1d_flush_param = VMENTER_L1D_FLUSH_AUTO; + +static const struct { + const char *option; + enum vmx_l1d_flush_state cmd; +} vmentry_l1d_param[] = { + {"auto", VMENTER_L1D_FLUSH_AUTO}, + {"never", VMENTER_L1D_FLUSH_NEVER}, + {"cond", VMENTER_L1D_FLUSH_COND}, + {"always", VMENTER_L1D_FLUSH_ALWAYS}, +}; + +#define L1D_CACHE_ORDER 4 +static void *vmx_l1d_flush_pages; + +static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf) +{ + struct page *page; + unsigned int i; + + if (!enable_ept) { + l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_EPT_DISABLED; + return 0; + } + + if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) { + u64 msr; + + rdmsrl(MSR_IA32_ARCH_CAPABILITIES, msr); + if (msr & ARCH_CAP_SKIP_VMENTRY_L1DFLUSH) { + l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED; + return 0; + } + } + + /* If set to auto use the default l1tf mitigation method */ + if (l1tf == VMENTER_L1D_FLUSH_AUTO) { + switch (l1tf_mitigation) { + case L1TF_MITIGATION_OFF: + l1tf = VMENTER_L1D_FLUSH_NEVER; + break; + case L1TF_MITIGATION_FLUSH_NOWARN: + case L1TF_MITIGATION_FLUSH: + case L1TF_MITIGATION_FLUSH_NOSMT: + l1tf = VMENTER_L1D_FLUSH_COND; + break; + case 
L1TF_MITIGATION_FULL: + case L1TF_MITIGATION_FULL_FORCE: + l1tf = VMENTER_L1D_FLUSH_ALWAYS; + break; + } + } else if (l1tf_mitigation == L1TF_MITIGATION_FULL_FORCE) { + l1tf = VMENTER_L1D_FLUSH_ALWAYS; + } + + if (l1tf != VMENTER_L1D_FLUSH_NEVER && !vmx_l1d_flush_pages && + !boot_cpu_has(X86_FEATURE_FLUSH_L1D)) { + page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER); + if (!page) + return -ENOMEM; + vmx_l1d_flush_pages = page_address(page); + + /* + * Initialize each page with a different pattern in + * order to protect against KSM in the nested + * virtualization case. + */ + for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) { + memset(vmx_l1d_flush_pages + i * PAGE_SIZE, i + 1, + PAGE_SIZE); + } + } + + l1tf_vmx_mitigation = l1tf; + + if (l1tf != VMENTER_L1D_FLUSH_NEVER) + static_branch_enable(&vmx_l1d_should_flush); + else + static_branch_disable(&vmx_l1d_should_flush); + + if (l1tf == VMENTER_L1D_FLUSH_COND) + static_branch_enable(&vmx_l1d_flush_cond); + else + static_branch_disable(&vmx_l1d_flush_cond); + return 0; +} + +static int vmentry_l1d_flush_parse(const char *s) +{ + unsigned int i; + + if (s) { + for (i = 0; i < ARRAY_SIZE(vmentry_l1d_param); i++) { + if (sysfs_streq(s, vmentry_l1d_param[i].option)) + return vmentry_l1d_param[i].cmd; + } + } + return -EINVAL; +} + +static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp) +{ + int l1tf, ret; + + if (!boot_cpu_has(X86_BUG_L1TF)) + return 0; + + l1tf = vmentry_l1d_flush_parse(s); + if (l1tf < 0) + return l1tf; + + /* + * Has vmx_init() run already? If not then this is the pre init + * parameter parsing. In that case just store the value and let + * vmx_init() do the proper setup after enable_ept has been + * established. 
+ */ + if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO) { + vmentry_l1d_flush_param = l1tf; + return 0; + } + + mutex_lock(&vmx_l1d_flush_mutex); + ret = vmx_setup_l1d_flush(l1tf); + mutex_unlock(&vmx_l1d_flush_mutex); + return ret; +} + +static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp) +{ + return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option); +} + +static const struct kernel_param_ops vmentry_l1d_flush_ops = { + .set = vmentry_l1d_flush_set, + .get = vmentry_l1d_flush_get, +}; +module_param_cb(vmentry_l1d_flush, &vmentry_l1d_flush_ops, NULL, 0644); + #define NR_AUTOLOAD_MSRS 8 struct vmcs { @@ -578,6 +722,11 @@ (unsigned long *)&pi_desc->control); } +struct vmx_msrs { + unsigned int nr; + struct vmx_msr_entry val[NR_AUTOLOAD_MSRS]; +}; + struct vcpu_vmx { struct kvm_vcpu vcpu; unsigned long host_rsp; @@ -611,9 +760,8 @@ struct loaded_vmcs *loaded_vmcs; bool __launched; /* temporary, used in vmx_vcpu_run */ struct msr_autoload { - unsigned nr; - struct vmx_msr_entry guest[NR_AUTOLOAD_MSRS]; - struct vmx_msr_entry host[NR_AUTOLOAD_MSRS]; + struct vmx_msrs guest; + struct vmx_msrs host; } msr_autoload; struct { int loaded; @@ -1972,9 +2120,20 @@ vm_exit_controls_clearbit(vmx, exit); } +static int find_msr(struct vmx_msrs *m, unsigned int msr) +{ + unsigned int i; + + for (i = 0; i < m->nr; ++i) { + if (m->val[i].index == msr) + return i; + } + return -ENOENT; +} + static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr) { - unsigned i; + int i; struct msr_autoload *m = &vmx->msr_autoload; switch (msr) { @@ -1995,18 +2154,21 @@ } break; } - - for (i = 0; i < m->nr; ++i) - if (m->guest[i].index == msr) - break; - - if (i == m->nr) + i = find_msr(&m->guest, msr); + if (i < 0) + goto skip_guest; + --m->guest.nr; + m->guest.val[i] = m->guest.val[m->guest.nr]; + vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr); + +skip_guest: + i = find_msr(&m->host, msr); + if (i < 0) return; - --m->nr; - m->guest[i] = 
m->guest[m->nr]; - m->host[i] = m->host[m->nr]; - vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr); - vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr); + + --m->host.nr; + m->host.val[i] = m->host.val[m->host.nr]; + vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr); } static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx, @@ -2021,9 +2183,9 @@ } static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr, - u64 guest_val, u64 host_val) + u64 guest_val, u64 host_val, bool entry_only) { - unsigned i; + int i, j = 0; struct msr_autoload *m = &vmx->msr_autoload; switch (msr) { @@ -2058,24 +2220,31 @@ wrmsrl(MSR_IA32_PEBS_ENABLE, 0); } - for (i = 0; i < m->nr; ++i) - if (m->guest[i].index == msr) - break; + i = find_msr(&m->guest, msr); + if (!entry_only) + j = find_msr(&m->host, msr); - if (i == NR_AUTOLOAD_MSRS) { + if (i == NR_AUTOLOAD_MSRS || j == NR_AUTOLOAD_MSRS) { printk_once(KERN_WARNING "Not enough msr switch entries. " "Can't add msr %x\n", msr); return; - } else if (i == m->nr) { - ++m->nr; - vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr); - vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr); } + if (i < 0) { + i = m->guest.nr++; + vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr); + } + m->guest.val[i].index = msr; + m->guest.val[i].value = guest_val; - m->guest[i].index = msr; - m->guest[i].value = guest_val; - m->host[i].index = msr; - m->host[i].value = host_val; + if (entry_only) + return; + + if (j < 0) { + j = m->host.nr++; + vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr); + } + m->host.val[j].index = msr; + m->host.val[j].value = host_val; } static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset) @@ -2119,7 +2288,7 @@ guest_efer &= ~EFER_LME; if (guest_efer != host_efer) add_atomic_switch_msr(vmx, MSR_EFER, - guest_efer, host_efer); + guest_efer, host_efer, false); return false; } else { guest_efer &= ~ignore_bits; @@ -3267,6 +3436,11 @@ return !(val & ~valid_bits); } +static int vmx_get_msr_feature(struct kvm_msr_entry *msr) +{ + 
return 1; +} + /* * Reads an msr value (of 'msr_index') into 'pdata'. * Returns 0 on success, non-0 otherwise. @@ -3524,7 +3698,7 @@ vcpu->arch.ia32_xss = data; if (vcpu->arch.ia32_xss != host_xss) add_atomic_switch_msr(vmx, MSR_IA32_XSS, - vcpu->arch.ia32_xss, host_xss); + vcpu->arch.ia32_xss, host_xss, false); else clear_atomic_switch_msr(vmx, MSR_IA32_XSS); break; @@ -5717,9 +5891,9 @@ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0); vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, 0); - vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host)); + vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val)); vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, 0); - vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest)); + vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val)); if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat); @@ -5739,8 +5913,7 @@ ++vmx->nmsrs; } - if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) - rdmsrl(MSR_IA32_ARCH_CAPABILITIES, vmx->arch_capabilities); + vmx->arch_capabilities = kvm_get_arch_capabilities(); vm_exit_controls_init(vmx, vmcs_config.vmexit_ctrl); @@ -7402,12 +7575,6 @@ return 1; } - /* CPL=0 must be checked manually. 
*/ - if (vmx_get_cpl(vcpu)) { - kvm_queue_exception(vcpu, UD_VECTOR); - return 1; - } - if (vmx->nested.vmxon) { nested_vmx_failValid(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION); return kvm_skip_emulated_instruction(vcpu); @@ -7467,11 +7634,6 @@ */ static int nested_vmx_check_permission(struct kvm_vcpu *vcpu) { - if (vmx_get_cpl(vcpu)) { - kvm_queue_exception(vcpu, UD_VECTOR); - return 0; - } - if (!to_vmx(vcpu)->nested.vmxon) { kvm_queue_exception(vcpu, UD_VECTOR); return 0; @@ -7805,7 +7967,7 @@ if (get_vmx_mem_address(vcpu, exit_qualification, vmx_instruction_info, true, &gva)) return 1; - /* _system ok, nested_vmx_check_permission has verified cpl=0 */ + /* _system ok, as hardware has verified cpl=0 */ kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, gva, &field_value, (is_long_mode(vcpu) ? 8 : 4), NULL); } @@ -7948,7 +8110,7 @@ if (get_vmx_mem_address(vcpu, exit_qualification, vmx_instruction_info, true, &vmcs_gva)) return 1; - /* *_system ok, nested_vmx_check_permission has verified cpl=0 */ + /* ok to use *_system, as hardware has verified cpl=0 */ if (kvm_write_guest_virt_system(&vcpu->arch.emulate_ctxt, vmcs_gva, (void *)&to_vmx(vcpu)->nested.current_vmptr, sizeof(u64), &e)) { @@ -8992,6 +9154,79 @@ } } +/* + * Software based L1D cache flush which is used when microcode providing + * the cache control MSR is not loaded. + * + * The L1D cache is 32 KiB on Nehalem and later microarchitectures, but to + * flush it is required to read in 64 KiB because the replacement algorithm + * is not exactly LRU. This could be sized at runtime via topology + * information but as all relevant affected CPUs have 32KiB L1D cache size + * there is no point in doing so. 
+ */ +#define L1D_CACHE_ORDER 4 +static void *vmx_l1d_flush_pages; + +static void vmx_l1d_flush(struct kvm_vcpu *vcpu) +{ + int size = PAGE_SIZE << L1D_CACHE_ORDER; + + /* + * This code is only executed when the flush mode is 'cond' or + * 'always' + */ + if (static_branch_likely(&vmx_l1d_flush_cond)) { + bool flush_l1d; + + /* + * Clear the per-vcpu flush bit, it gets set again + * either from vcpu_run() or from one of the unsafe + * VMEXIT handlers. + */ + flush_l1d = vcpu->arch.l1tf_flush_l1d; + vcpu->arch.l1tf_flush_l1d = false; + + /* + * Clear the per-cpu flush bit, it gets set again from + * the interrupt handlers. + */ + flush_l1d |= kvm_get_cpu_l1tf_flush_l1d(); + kvm_clear_cpu_l1tf_flush_l1d(); + + if (!flush_l1d) + return; + } + + vcpu->stat.l1d_flush++; + + if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) { + wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH); + return; + } + + asm volatile( + /* First ensure the pages are in the TLB */ + "xorl %%eax, %%eax\n" + ".Lpopulate_tlb:\n\t" + "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t" + "addl $4096, %%eax\n\t" + "cmpl %%eax, %[size]\n\t" + "jne .Lpopulate_tlb\n\t" + "xorl %%eax, %%eax\n\t" + "cpuid\n\t" + /* Now fill the cache */ + "xorl %%eax, %%eax\n" + ".Lfill_cache:\n" + "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t" + "addl $64, %%eax\n\t" + "cmpl %%eax, %[size]\n\t" + "jne .Lfill_cache\n\t" + "lfence\n" + :: [flush_pages] "r" (vmx_l1d_flush_pages), + [size] "r" (size) + : "eax", "ebx", "ecx", "edx"); +} + static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) { struct vmcs12 *vmcs12 = get_vmcs12(vcpu); @@ -9395,7 +9630,7 @@ clear_atomic_switch_msr(vmx, msrs[i].msr); else add_atomic_switch_msr(vmx, msrs[i].msr, msrs[i].guest, - msrs[i].host); + msrs[i].host, false); } static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu) @@ -9487,6 +9722,10 @@ x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0); vmx->__launched = vmx->loaded_vmcs->launched; + + if (static_branch_unlikely(&vmx_l1d_should_flush)) +
vmx_l1d_flush(vcpu); + asm( /* Store host registers */ "push %%" _ASM_DX "; push %%" _ASM_BP ";" @@ -9836,6 +10075,37 @@ return ERR_PTR(err); } +#define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n" +#define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n" + +static int vmx_vm_init(struct kvm *kvm) +{ + if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) { + switch (l1tf_mitigation) { + case L1TF_MITIGATION_OFF: + case L1TF_MITIGATION_FLUSH_NOWARN: + /* 'I explicitly don't care' is set */ + break; + case L1TF_MITIGATION_FLUSH: + case L1TF_MITIGATION_FLUSH_NOSMT: + case L1TF_MITIGATION_FULL: + /* + * Warn upon starting the first VM in a potentially + * insecure environment. + */ + if (cpu_smt_control == CPU_SMT_ENABLED) + pr_warn_once(L1TF_MSG_SMT); + if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER) + pr_warn_once(L1TF_MSG_L1D); + break; + case L1TF_MITIGATION_FULL_FORCE: + /* Flush is enforced */ + break; + } + } + return 0; +} + static void __init vmx_check_processor_compat(void *rtn) { struct vmcs_config vmcs_conf; @@ -10764,10 +11034,10 @@ * Set the MSR load/store lists to match L0's settings. 
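The vmx_l1d_flush() asm above reads 64 KiB (PAGE_SIZE << 4) in two passes: one byte per page to populate the TLB, then one byte per 64-byte line to evict the old L1D contents (64 KiB rather than 32 KiB because the replacement policy is not exactly LRU). A user-space model of those two loops — the function name and touch counter are mine, not kernel code:

```c
#include <assert.h>
#include <stdlib.h>

#define MODEL_PAGE_SIZE 4096UL
#define L1D_CACHE_ORDER 4            /* 16 pages = 64 KiB, 2x a 32 KiB L1D */

/*
 * Host-side model of the two read loops in the vmx_l1d_flush() asm:
 * pass 1 touches one byte per page so every page is in the TLB,
 * pass 2 touches one byte per 64-byte line to displace the old L1D
 * contents. Returns how many reads the hardware would see.
 */
static unsigned long l1d_flush_touches(const unsigned char *flush_pages)
{
    unsigned long size = MODEL_PAGE_SIZE << L1D_CACHE_ORDER;
    unsigned long touches = 0;
    volatile unsigned char sink = 0;

    for (unsigned long off = 0; off < size; off += MODEL_PAGE_SIZE) {
        sink = flush_pages[off];     /* corresponds to .Lpopulate_tlb */
        touches++;
    }
    for (unsigned long off = 0; off < size; off += 64) {
        sink = flush_pages[off];     /* corresponds to .Lfill_cache */
        touches++;
    }
    (void)sink;
    return touches;
}
```

With a 64 KiB buffer this performs 16 page touches plus 1024 line touches, matching the `$4096` and `$64` strides in the asm.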
*/ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0); - vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr); - vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host)); - vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr); - vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest)); + vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); + vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val)); + vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr); + vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val)); /* * HOST_RSP is normally set correctly in vmx_vcpu_run() just before @@ -11194,6 +11464,9 @@ if (ret) return ret; + /* Hide L1D cache contents from the nested guest. */ + vmx->vcpu.arch.l1tf_flush_l1d = true; + if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT) return kvm_vcpu_halt(vcpu); @@ -11702,8 +11975,8 @@ vmx_segment_cache_clear(vmx); /* Update any VMCS fields that might have changed while L2 ran */ - vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr); - vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr); + vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); + vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr); vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset); if (vmx->hv_deadline_tsc == -1) vmcs_clear_bits(PIN_BASED_VM_EXEC_CONTROL, @@ -12265,4 +12538,6 @@ .has_emulated_msr = vmx_has_emulated_msr, + .vm_init = vmx_vm_init, + .vcpu_create = vmx_create_vcpu, .vcpu_free = vmx_free_vcpu, @@ -12273,6 +12548,7 @@ .vcpu_put = vmx_vcpu_put, .update_bp_intercept = update_exception_bitmap, + .get_msr_feature = vmx_get_msr_feature, .get_msr = vmx_get_msr, .set_msr = vmx_set_msr, .get_segment_base = vmx_get_segment_base, @@ -12385,22 +12661,17 @@ .enable_smi_window = enable_smi_window, }; -static int __init vmx_init(void) +static void vmx_cleanup_l1d_flush(void) { - int r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx), - 
__alignof__(struct vcpu_vmx), THIS_MODULE); - if (r) - return r; - -#ifdef CONFIG_KEXEC_CORE - rcu_assign_pointer(crash_vmclear_loaded_vmcss, - crash_vmclear_local_loaded_vmcss); -#endif - - return 0; + if (vmx_l1d_flush_pages) { + free_pages((unsigned long)vmx_l1d_flush_pages, L1D_CACHE_ORDER); + vmx_l1d_flush_pages = NULL; + } + /* Restore state so sysfs ignores VMX */ + l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO; } -static void __exit vmx_exit(void) +static void vmx_exit(void) { #ifdef CONFIG_KEXEC_CORE RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL); @@ -12411,4 +12682,37 @@ + + vmx_cleanup_l1d_flush(); } +module_exit(vmx_exit) +static int __init vmx_init(void) +{ + int r; + + r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx), + __alignof__(struct vcpu_vmx), THIS_MODULE); + if (r) + return r; + + /* + * Must be called after kvm_init() so enable_ept is properly set + * up. Hand the parameter mitigation value in which was stored in + * the pre module init parser. If no parameter was given, it will + * contain 'auto' which will be turned into the default 'cond' + * mitigation mode. 
+ */ + if (boot_cpu_has(X86_BUG_L1TF)) { + r = vmx_setup_l1d_flush(vmentry_l1d_flush_param); + if (r) { + vmx_exit(); + return r; + } + } + +#ifdef CONFIG_KEXEC_CORE + rcu_assign_pointer(crash_vmclear_loaded_vmcss, + crash_vmclear_local_loaded_vmcss); +#endif + + return 0; +} module_init(vmx_init) -module_exit(vmx_exit) diff -u linux-azure-4.15.0/arch/x86/kvm/x86.c linux-azure-4.15.0/arch/x86/kvm/x86.c --- linux-azure-4.15.0/arch/x86/kvm/x86.c +++ linux-azure-4.15.0/arch/x86/kvm/x86.c @@ -184,6 +184,7 @@ { "irq_injections", VCPU_STAT(irq_injections) }, { "nmi_injections", VCPU_STAT(nmi_injections) }, { "req_event", VCPU_STAT(req_event) }, + { "l1d_flush", VCPU_STAT(l1d_flush) }, { "mmu_shadow_zapped", VM_STAT(mmu_shadow_zapped) }, { "mmu_pte_write", VM_STAT(mmu_pte_write) }, { "mmu_pte_updated", VM_STAT(mmu_pte_updated) }, @@ -1047,6 +1048,66 @@ static unsigned num_emulated_msrs; +/* + * List of msr numbers which are used to expose MSR-based features that + * can be used by a hypervisor to validate requested CPU features. + */ +static u32 msr_based_features[] = { + MSR_IA32_ARCH_CAPABILITIES, +}; + +static unsigned int num_msr_based_features; + +u64 kvm_get_arch_capabilities(void) +{ + u64 data; + + rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, &data); + + /* + * If we're doing cache flushes (either "always" or "cond") + * we will do one whenever the guest does a vmlaunch/vmresume. + * If an outer hypervisor is doing the cache flush for us + * (VMENTER_L1D_FLUSH_NESTED_VM), we can safely pass that + * capability to the guest too, and if EPT is disabled we're not + * vulnerable. Overall, only VMENTER_L1D_FLUSH_NEVER will + * require a nested hypervisor to do a flush of its own. 
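The kvm_get_arch_capabilities() comment above boils down to one bit decision. A minimal C model of it — the enum values mirror the patch, but the bit position of ARCH_CAP_SKIP_VMENTRY_L1DFLUSH is illustrative here, not taken from msr-index.h:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit position; the real value lives in msr-index.h. */
#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH (1ULL << 3)

enum vmx_l1d_flush_state {
    VMENTER_L1D_FLUSH_AUTO,
    VMENTER_L1D_FLUSH_NEVER,
    VMENTER_L1D_FLUSH_COND,
    VMENTER_L1D_FLUSH_ALWAYS,
    VMENTER_L1D_FLUSH_NESTED_VM,
};

/*
 * Model of kvm_get_arch_capabilities(): start from the host's MSR value
 * and tell the guest it may skip its own VM-entry L1D flush whenever L0
 * flushes on its behalf -- i.e. any mitigation mode except 'never'.
 */
static uint64_t arch_caps_for_guest(uint64_t host_caps,
                                    enum vmx_l1d_flush_state l1tf_vmx_mitigation)
{
    uint64_t data = host_caps;

    if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
        data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
    return data;
}
```

So only VMENTER_L1D_FLUSH_NEVER leaves a nested hypervisor responsible for its own flush, exactly as the comment states.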
+ */ + if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER) + data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH; + + return data; +} +EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities); + +static int kvm_get_msr_feature(struct kvm_msr_entry *msr) +{ + switch (msr->index) { + case MSR_IA32_ARCH_CAPABILITIES: + msr->data = kvm_get_arch_capabilities(); + break; + default: + if (kvm_x86_ops->get_msr_feature(msr)) + return 1; + } + return 0; +} + +static int do_get_msr_feature(struct kvm_vcpu *vcpu, unsigned index, u64 *data) +{ + struct kvm_msr_entry msr; + int r; + + msr.index = index; + r = kvm_get_msr_feature(&msr); + if (r) + return r; + + *data = msr.data; + + return 0; +} + bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer) { if (efer & efer_reserved_bits) @@ -2617,13 +2678,11 @@ int (*do_msr)(struct kvm_vcpu *vcpu, unsigned index, u64 *data)) { - int i, idx; + int i; - idx = srcu_read_lock(&vcpu->kvm->srcu); for (i = 0; i < msrs->nmsrs; ++i) if (do_msr(vcpu, entries[i].index, &entries[i].data)) break; - srcu_read_unlock(&vcpu->kvm->srcu, idx); return i; } @@ -2722,6 +2781,7 @@ case KVM_CAP_SET_BOOT_CPU_ID: case KVM_CAP_SPLIT_IRQCHIP: case KVM_CAP_IMMEDIATE_EXIT: + case KVM_CAP_GET_MSR_FEATURES: r = 1; break; case KVM_CAP_ADJUST_CLOCK: @@ -2836,6 +2896,31 @@ goto out; r = 0; break; + case KVM_GET_MSR_FEATURE_INDEX_LIST: { + struct kvm_msr_list __user *user_msr_list = argp; + struct kvm_msr_list msr_list; + unsigned int n; + + r = -EFAULT; + if (copy_from_user(&msr_list, user_msr_list, sizeof(msr_list))) + goto out; + n = msr_list.nmsrs; + msr_list.nmsrs = num_msr_based_features; + if (copy_to_user(user_msr_list, &msr_list, sizeof(msr_list))) + goto out; + r = -E2BIG; + if (n < msr_list.nmsrs) + goto out; + r = -EFAULT; + if (copy_to_user(user_msr_list->indices, &msr_based_features, + num_msr_based_features * sizeof(u32))) + goto out; + r = 0; + break; + } + case KVM_GET_MSRS: + r = msr_io(NULL, argp, do_get_msr_feature, 1); + break; } default: r = -EINVAL; @@ -3569,12 +3654,18 
@@ r = 0; break; } - case KVM_GET_MSRS: + case KVM_GET_MSRS: { + int idx = srcu_read_lock(&vcpu->kvm->srcu); r = msr_io(vcpu, argp, do_get_msr, 1); + srcu_read_unlock(&vcpu->kvm->srcu, idx); break; - case KVM_SET_MSRS: + } + case KVM_SET_MSRS: { + int idx = srcu_read_lock(&vcpu->kvm->srcu); r = msr_io(vcpu, argp, do_set_msr, 0); + srcu_read_unlock(&vcpu->kvm->srcu, idx); break; + } case KVM_TPR_ACCESS_REPORTING: { struct kvm_tpr_access_ctl tac; @@ -4355,6 +4446,19 @@ j++; } num_emulated_msrs = j; + + for (i = j = 0; i < ARRAY_SIZE(msr_based_features); i++) { + struct kvm_msr_entry msr; + + msr.index = msr_based_features[i]; + if (kvm_get_msr_feature(&msr)) + continue; + + if (j < i) + msr_based_features[j] = msr_based_features[i]; + j++; + } + num_msr_based_features = j; } static int vcpu_mmio_write(struct kvm_vcpu *vcpu, gpa_t addr, int len, @@ -4552,6 +4656,9 @@ void *data = val; int r = X86EMUL_CONTINUE; + /* kvm_write_guest_virt_system can pull in tons of pages. */ + vcpu->arch.l1tf_flush_l1d = true; + while (bytes) { gpa_t gpa = vcpu->arch.walk_mmu->gva_to_gpa(vcpu, addr, PFERR_WRITE_MASK, @@ -5693,6 +5800,8 @@ bool writeback = true; bool write_fault_to_spt = vcpu->arch.write_fault_to_shadow_pgtable; + vcpu->arch.l1tf_flush_l1d = true; + /* * Clear write_fault_to_shadow_pgtable here to ensure it is * never reused. 
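The KVM_GET_MSR_FEATURE_INDEX_LIST hunk above follows the usual two-call list negotiation: userspace reports how many slots it allocated, the kernel always writes back the true count, and only copies the indices when they fit. A user-space sketch of that contract (helper name and the errno stand-in are mine):

```c
#include <assert.h>
#include <string.h>

#define MODEL_E2BIG 7   /* illustrative stand-in for the E2BIG errno value */

/*
 * Model of the KVM_GET_MSR_FEATURE_INDEX_LIST flow: *nmsrs carries the
 * caller's buffer size in and the kernel's real count out; the indices
 * are copied only when the buffer is large enough, otherwise the caller
 * sees -E2BIG and retries with a bigger buffer.
 */
static int get_msr_index_list(unsigned int *nmsrs, unsigned int *indices,
                              const unsigned int *kernel_list,
                              unsigned int kernel_count)
{
    unsigned int n = *nmsrs;

    *nmsrs = kernel_count;               /* always report the real count */
    if (n < kernel_count)
        return -MODEL_E2BIG;             /* too small: caller must retry */
    memcpy(indices, kernel_list, kernel_count * sizeof(*indices));
    return 0;
}
```

Calling once with nmsrs = 0 to learn the count, then again with an adequately sized array, is the intended usage pattern.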
@@ -7144,6 +7253,7 @@ struct kvm *kvm = vcpu->kvm; vcpu->srcu_idx = srcu_read_lock(&kvm->srcu); + vcpu->arch.l1tf_flush_l1d = true; for (;;) { if (kvm_vcpu_running(vcpu)) { @@ -8157,6 +8267,7 @@ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) { + vcpu->arch.l1tf_flush_l1d = true; kvm_x86_ops->sched_in(vcpu, cpu); } diff -u linux-azure-4.15.0/arch/x86/mm/init_32.c linux-azure-4.15.0/arch/x86/mm/init_32.c --- linux-azure-4.15.0/arch/x86/mm/init_32.c +++ linux-azure-4.15.0/arch/x86/mm/init_32.c @@ -558,8 +558,14 @@ permanent_kmaps_init(pgd_base); } -pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL); +#define DEFAULT_PTE_MASK ~(_PAGE_NX | _PAGE_GLOBAL) +/* Bits supported by the hardware: */ +pteval_t __supported_pte_mask __read_mostly = DEFAULT_PTE_MASK; +/* Bits allowed in normal kernel mappings: */ +pteval_t __default_kernel_pte_mask __read_mostly = DEFAULT_PTE_MASK; EXPORT_SYMBOL_GPL(__supported_pte_mask); +/* Used in PAGE_KERNEL_* macros which are reasonably used out-of-tree: */ +EXPORT_SYMBOL(__default_kernel_pte_mask); /* user-defined highmem size */ static unsigned int highmem_pages = -1; diff -u linux-azure-4.15.0/arch/x86/mm/init_64.c linux-azure-4.15.0/arch/x86/mm/init_64.c --- linux-azure-4.15.0/arch/x86/mm/init_64.c +++ linux-azure-4.15.0/arch/x86/mm/init_64.c @@ -65,8 +65,13 @@ * around without checking the pgd every time. 
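The init_32.c hunk above splits one mask into two: what the hardware supports versus what normal kernel mappings may use. The sanitize step the later hunks add before set_pte()/__native_set_fixmap() is a single AND; a small model of it, with illustrative bit positions (the real ones are in pgtable_types.h):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t pteval_t;

/* Illustrative bit positions, not the real pgtable_types.h values. */
#define _PAGE_PRESENT (1ULL << 0)
#define _PAGE_RW      (1ULL << 1)
#define _PAGE_GLOBAL  (1ULL << 8)

/*
 * Model of the split the patch introduces: __supported_pte_mask is what
 * the hardware can express, while __default_kernel_pte_mask is what
 * normal kernel mappings are allowed to use (e.g. _PAGE_GLOBAL is
 * cleared from it when PTI is active).
 */
static pteval_t __default_kernel_pte_mask = ~_PAGE_GLOBAL;

/* The sanitize step added before set_pte()/__native_set_fixmap(): */
static pteval_t sanitize_kernel_prot(pteval_t prot)
{
    return prot & __default_kernel_pte_mask;
}
```

Filtering at the set_pte boundary means out-of-tree users of the PAGE_KERNEL_* macros (hence the plain EXPORT_SYMBOL) cannot accidentally create global kernel mappings the mitigation forbids.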
*/ +/* Bits supported by the hardware: */ pteval_t __supported_pte_mask __read_mostly = ~0; +/* Bits allowed in normal kernel mappings: */ +pteval_t __default_kernel_pte_mask __read_mostly = ~0; EXPORT_SYMBOL_GPL(__supported_pte_mask); +/* Used in PAGE_KERNEL_* macros which are reasonably used out-of-tree: */ +EXPORT_SYMBOL(__default_kernel_pte_mask); int force_personality32; diff -u linux-azure-4.15.0/arch/x86/mm/ioremap.c linux-azure-4.15.0/arch/x86/mm/ioremap.c --- linux-azure-4.15.0/arch/x86/mm/ioremap.c +++ linux-azure-4.15.0/arch/x86/mm/ioremap.c @@ -816,6 +816,9 @@ } pte = early_ioremap_pte(addr); + /* Sanitize 'prot' against any unsupported bits: */ + pgprot_val(flags) &= __default_kernel_pte_mask; + if (pgprot_val(flags)) set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, flags)); else diff -u linux-azure-4.15.0/arch/x86/mm/pgtable.c linux-azure-4.15.0/arch/x86/mm/pgtable.c --- linux-azure-4.15.0/arch/x86/mm/pgtable.c +++ linux-azure-4.15.0/arch/x86/mm/pgtable.c @@ -583,6 +583,9 @@ void native_set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t flags) { + /* Sanitize 'prot' against any unsupported bits: */ + pgprot_val(flags) &= __default_kernel_pte_mask; + __native_set_fixmap(idx, pfn_pte(phys >> PAGE_SHIFT, flags)); } reverted: --- linux-azure-4.15.0/arch/x86/mm/pkeys.c +++ linux-azure-4.15.0.orig/arch/x86/mm/pkeys.c @@ -94,27 +94,26 @@ */ if (pkey != -1) return pkey; + /* + * Look for a protection-key-drive execute-only mapping + * which is now being given permissions that are not + * execute-only. Move it back to the default pkey. + */ + if (vma_is_pkey_exec_only(vma) && + (prot & (PROT_READ|PROT_WRITE))) { + return 0; + } - /* * The mapping is execute-only. Go try to get the * execute-only protection key. If we fail to do that, * fall through as if we do not have execute-only + * support. - * support in this mm. 
*/ if (prot == PROT_EXEC) { pkey = execute_only_pkey(vma->vm_mm); if (pkey > 0) return pkey; - } else if (vma_is_pkey_exec_only(vma)) { - /* - * Protections are *not* PROT_EXEC, but the mapping - * is using the exec-only pkey. This mapping was - * PROT_EXEC and will no longer be. Move back to - * the default pkey. - */ - return ARCH_DEFAULT_PKEY; } - /* * This is a vanilla, non-pkey mprotect (or we failed to * setup execute-only), inherit the pkey from the VMA we diff -u linux-azure-4.15.0/arch/x86/platform/uv/tlb_uv.c linux-azure-4.15.0/arch/x86/platform/uv/tlb_uv.c --- linux-azure-4.15.0/arch/x86/platform/uv/tlb_uv.c +++ linux-azure-4.15.0/arch/x86/platform/uv/tlb_uv.c @@ -1285,6 +1285,7 @@ struct msg_desc msgdesc; ack_APIC_irq(); + kvm_set_cpu_l1tf_flush_l1d(); time_start = get_cycles(); bcp = &per_cpu(bau_control, smp_processor_id()); diff -u linux-azure-4.15.0/arch/x86/power/hibernate_64.c linux-azure-4.15.0/arch/x86/power/hibernate_64.c --- linux-azure-4.15.0/arch/x86/power/hibernate_64.c +++ linux-azure-4.15.0/arch/x86/power/hibernate_64.c @@ -51,6 +51,12 @@ pmd_t *pmd; pud_t *pud; p4d_t *p4d; + pgprot_t pgtable_prot = __pgprot(_KERNPG_TABLE); + pgprot_t pmd_text_prot = __pgprot(__PAGE_KERNEL_LARGE_EXEC); + + /* Filter out unsupported __PAGE_KERNEL* bits: */ + pgprot_val(pmd_text_prot) &= __default_kernel_pte_mask; + pgprot_val(pgtable_prot) &= __default_kernel_pte_mask; /* * The new mapping only has to cover the page containing the image @@ -81,15 +87,19 @@ return -ENOMEM; set_pmd(pmd + pmd_index(restore_jump_address), - __pmd((jump_address_phys & PMD_MASK) | __PAGE_KERNEL_LARGE_EXEC)); + __pmd((jump_address_phys & PMD_MASK) | pgprot_val(pmd_text_prot))); set_pud(pud + pud_index(restore_jump_address), - __pud(__pa(pmd) | _KERNPG_TABLE)); + __pud(__pa(pmd) | pgprot_val(pgtable_prot))); if (IS_ENABLED(CONFIG_X86_5LEVEL)) { - set_p4d(p4d + p4d_index(restore_jump_address), __p4d(__pa(pud) | _KERNPG_TABLE)); - set_pgd(pgd + pgd_index(restore_jump_address), 
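The pkeys.c hunk above restores the ordering of the protection-key decision in mprotect. A flattened user-space model of that decision — the helper name and flat arguments are mine, the branch order follows the reverted code:

```c
#include <assert.h>

#define PROT_READ  0x1
#define PROT_WRITE 0x2
#define PROT_EXEC  0x4

/*
 * Model of the pkey choice: an explicit pkey always wins; a mapping
 * that currently holds the exec-only pkey but is gaining read/write
 * permission drops back to the default pkey 0; a pure PROT_EXEC
 * request tries the exec-only pkey; otherwise the VMA's current pkey
 * is inherited (vanilla mprotect, or exec-only setup failed).
 */
static int choose_mprotect_pkey(int explicit_pkey, int vma_has_exec_only_pkey,
                                int prot, int exec_only_pkey, int vma_pkey)
{
    if (explicit_pkey != -1)
        return explicit_pkey;
    if (vma_has_exec_only_pkey && (prot & (PROT_READ | PROT_WRITE)))
        return 0;                     /* back to the default pkey */
    if (prot == PROT_EXEC && exec_only_pkey > 0)
        return exec_only_pkey;        /* execute-only protection key */
    return vma_pkey;                  /* inherit from the VMA */
}
```

Note the exec-only-downgrade check must run before the PROT_EXEC check, or a PROT_READ|PROT_EXEC mprotect on an exec-only mapping would keep the exec-only key and fault on reads.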
__pgd(__pa(p4d) | _KERNPG_TABLE)); + p4d_t new_p4d = __p4d(__pa(pud) | pgprot_val(pgtable_prot)); + pgd_t new_pgd = __pgd(__pa(p4d) | pgprot_val(pgtable_prot)); + + set_p4d(p4d + p4d_index(restore_jump_address), new_p4d); + set_pgd(pgd + pgd_index(restore_jump_address), new_pgd); } else { /* No p4d for 4-level paging: point the pgd to the pud page table */ - set_pgd(pgd + pgd_index(restore_jump_address), __pgd(__pa(pud) | _KERNPG_TABLE)); + pgd_t new_pgd = __pgd(__pa(pud) | pgprot_val(pgtable_prot)); + set_pgd(pgd + pgd_index(restore_jump_address), new_pgd); } return 0; reverted: --- linux-azure-4.15.0/arch/x86/xen/mmu.c +++ linux-azure-4.15.0.orig/arch/x86/xen/mmu.c @@ -42,11 +42,13 @@ } EXPORT_SYMBOL_GPL(arbitrary_virt_to_machine); +static void xen_flush_tlb_all(void) -static noinline void xen_flush_tlb_all(void) { struct mmuext_op *op; struct multicall_space mcs; + trace_xen_mmu_flush_tlb_all(0); + preempt_disable(); mcs = xen_mc_entry(sizeof(*op)); diff -u linux-azure-4.15.0/arch/x86/xen/mmu_pv.c linux-azure-4.15.0/arch/x86/xen/mmu_pv.c --- linux-azure-4.15.0/arch/x86/xen/mmu_pv.c +++ linux-azure-4.15.0/arch/x86/xen/mmu_pv.c @@ -1280,11 +1280,13 @@ return this_cpu_read(xen_vcpu_info.arch.cr2); } -static noinline void xen_flush_tlb(void) +static void xen_flush_tlb(void) { struct mmuext_op *op; struct multicall_space mcs; + trace_xen_mmu_flush_tlb(0); + preempt_disable(); mcs = xen_mc_entry(sizeof(*op)); reverted: --- linux-azure-4.15.0/block/bfq-iosched.c +++ linux-azure-4.15.0.orig/block/bfq-iosched.c @@ -3630,22 +3630,20 @@ } /* + * We exploit the put_rq_private hook to decrement + * rq_in_driver, but put_rq_private will not be + * invoked on this request. So, to avoid unbalance, + * just start this request, without incrementing + * rq_in_driver. As a negative consequence, + * rq_in_driver is deceptively lower than it should be + * while this request is in service. This may cause + * bfq_schedule_dispatch to be invoked uselessly. 
- * We exploit the bfq_finish_requeue_request hook to - * decrement rq_in_driver, but - * bfq_finish_requeue_request will not be invoked on - * this request. So, to avoid unbalance, just start - * this request, without incrementing rq_in_driver. As - * a negative consequence, rq_in_driver is deceptively - * lower than it should be while this request is in - * service. This may cause bfq_schedule_dispatch to be - * invoked uselessly. * * As for implementing an exact solution, the + * put_request hook, if defined, is probably invoked + * also on this request. So, by exploiting this hook, + * we could 1) increment rq_in_driver here, and 2) + * decrement it in put_request. Such a solution would - * bfq_finish_requeue_request hook, if defined, is - * probably invoked also on this request. So, by - * exploiting this hook, we could 1) increment - * rq_in_driver here, and 2) decrement it in - * bfq_finish_requeue_request. Such a solution would * let the value of the counter be always accurate, * but it would entail using an extra interface * function. 
This cost seems higher than the benefit, @@ -3691,16 +3689,35 @@ return rq; } +static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx) +{ + struct bfq_data *bfqd = hctx->queue->elevator->elevator_data; + struct request *rq; #if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP) + struct bfq_queue *in_serv_queue, *bfqq; + bool waiting_rq, idle_timer_disabled; +#endif + + spin_lock_irq(&bfqd->lock); + +#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP) + in_serv_queue = bfqd->in_service_queue; + waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue); + + rq = __bfq_dispatch_request(hctx); + + idle_timer_disabled = + waiting_rq && !bfq_bfqq_wait_request(in_serv_queue); + +#else + rq = __bfq_dispatch_request(hctx); +#endif + spin_unlock_irq(&bfqd->lock); -static void bfq_update_dispatch_stats(struct request_queue *q, - struct request *rq, - struct bfq_queue *in_serv_queue, - bool idle_timer_disabled) -{ - struct bfq_queue *bfqq = rq ? RQ_BFQQ(rq) : NULL; +#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP) + bfqq = rq ? RQ_BFQQ(rq) : NULL; if (!idle_timer_disabled && !bfqq) + return rq; - return; /* * rq and bfqq are guaranteed to exist until this function @@ -3715,7 +3732,7 @@ * In addition, the following queue lock guarantees that * bfqq_group(bfqq) exists as well. 
*/ + spin_lock_irq(hctx->queue->queue_lock); - spin_lock_irq(q->queue_lock); if (idle_timer_disabled) /* * Since the idle timer has been disabled, @@ -3734,37 +3751,9 @@ bfqg_stats_set_start_empty_time(bfqg); bfqg_stats_update_io_remove(bfqg, rq->cmd_flags); } + spin_unlock_irq(hctx->queue->queue_lock); - spin_unlock_irq(q->queue_lock); -} -#else -static inline void bfq_update_dispatch_stats(struct request_queue *q, - struct request *rq, - struct bfq_queue *in_serv_queue, - bool idle_timer_disabled) {} #endif -static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx) -{ - struct bfq_data *bfqd = hctx->queue->elevator->elevator_data; - struct request *rq; - struct bfq_queue *in_serv_queue; - bool waiting_rq, idle_timer_disabled; - - spin_lock_irq(&bfqd->lock); - - in_serv_queue = bfqd->in_service_queue; - waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue); - - rq = __bfq_dispatch_request(hctx); - - idle_timer_disabled = - waiting_rq && !bfq_bfqq_wait_request(in_serv_queue); - - spin_unlock_irq(&bfqd->lock); - - bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue, - idle_timer_disabled); - return rq; } @@ -4287,48 +4276,16 @@ return idle_timer_disabled; } -#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP) -static void bfq_update_insert_stats(struct request_queue *q, - struct bfq_queue *bfqq, - bool idle_timer_disabled, - unsigned int cmd_flags) -{ - if (!bfqq) - return; - - /* - * bfqq still exists, because it can disappear only after - * either it is merged with another queue, or the process it - * is associated with exits. But both actions must be taken by - * the same process currently executing this flow of - * instructions. - * - * In addition, the following queue lock guarantees that - * bfqq_group(bfqq) exists as well. 
- */ - spin_lock_irq(q->queue_lock); - bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, cmd_flags); - if (idle_timer_disabled) - bfqg_stats_update_idle_time(bfqq_group(bfqq)); - spin_unlock_irq(q->queue_lock); -} -#else -static inline void bfq_update_insert_stats(struct request_queue *q, - struct bfq_queue *bfqq, - bool idle_timer_disabled, - unsigned int cmd_flags) {} -#endif - -static void bfq_prepare_request(struct request *rq, struct bio *bio); - static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, bool at_head) { struct request_queue *q = hctx->queue; struct bfq_data *bfqd = q->elevator->elevator_data; +#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP) struct bfq_queue *bfqq = RQ_BFQQ(rq); bool idle_timer_disabled = false; unsigned int cmd_flags; +#endif spin_lock_irq(&bfqd->lock); if (blk_mq_sched_try_insert_merge(q, rq)) { @@ -4347,18 +4304,7 @@ else list_add_tail(&rq->queuelist, &bfqd->dispatch); } else { +#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP) - if (WARN_ON_ONCE(!bfqq)) { - /* - * This should never happen. Most likely rq is - * a requeued regular request, being - * re-inserted without being first - * re-prepared. Do a prepare, to avoid - * failure. - */ - bfq_prepare_request(rq, rq->bio); - bfqq = RQ_BFQQ(rq); - } - idle_timer_disabled = __bfq_insert_request(bfqd, rq); /* * Update bfqq, because, if a queue merge has occurred @@ -4366,6 +4312,9 @@ * redirected into a new queue. */ bfqq = RQ_BFQQ(rq); +#else + __bfq_insert_request(bfqd, rq); +#endif if (rq_mergeable(rq)) { elv_rqhash_add(q, rq); @@ -4374,17 +4323,35 @@ } } +#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP) /* * Cache cmd_flags before releasing scheduler lock, because rq * may disappear afterwards (for example, because of a request * merge). 
*/ cmd_flags = rq->cmd_flags; +#endif - spin_unlock_irq(&bfqd->lock); +#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP) + if (!bfqq) + return; + /* + * bfqq still exists, because it can disappear only after + * either it is merged with another queue, or the process it + * is associated with exits. But both actions must be taken by + * the same process currently executing this flow of + * instruction. + * + * In addition, the following queue lock guarantees that + * bfqq_group(bfqq) exists as well. + */ + spin_lock_irq(q->queue_lock); + bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, cmd_flags); + if (idle_timer_disabled) + bfqg_stats_update_idle_time(bfqq_group(bfqq)); + spin_unlock_irq(q->queue_lock); +#endif - bfq_update_insert_stats(q, bfqq, idle_timer_disabled, - cmd_flags); } static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx, @@ -4515,44 +4482,22 @@ bfq_schedule_dispatch(bfqd); } +static void bfq_put_rq_priv_body(struct bfq_queue *bfqq) -static void bfq_finish_requeue_request_body(struct bfq_queue *bfqq) { bfqq->allocated--; bfq_put_queue(bfqq); } +static void bfq_finish_request(struct request *rq) -/* - * Handle either a requeue or a finish for rq. The things to do are - * the same in both cases: all references to rq are to be dropped. In - * particular, rq is considered completed from the point of view of - * the scheduler. - */ -static void bfq_finish_requeue_request(struct request *rq) { + struct bfq_queue *bfqq; - struct bfq_queue *bfqq = RQ_BFQQ(rq); struct bfq_data *bfqd; + if (!rq->elv.icq) - /* - * Requeue and finish hooks are invoked in blk-mq without - * checking whether the involved request is actually still - * referenced in the scheduler. To handle this fact, the - * following two checks make this function exit in case of - * spurious invocations, for which there is nothing to do. - * - * First, check whether rq has nothing to do with an elevator. 
- */ - if (unlikely(!(rq->rq_flags & RQF_ELVPRIV))) - return; - - /* - * rq either is not associated with any icq, or is an already - * requeued request that has not (yet) been re-inserted into - * a bfq_queue. - */ - if (!rq->elv.icq || !bfqq) return; + bfqq = RQ_BFQQ(rq); bfqd = bfqq->bfqd; if (rq->rq_flags & RQF_STARTED) @@ -4567,14 +4512,13 @@ spin_lock_irqsave(&bfqd->lock, flags); bfq_completed_request(bfqq, bfqd); + bfq_put_rq_priv_body(bfqq); - bfq_finish_requeue_request_body(bfqq); spin_unlock_irqrestore(&bfqd->lock, flags); } else { /* * Request rq may be still/already in the scheduler, + * in which case we need to remove it. And we cannot - * in which case we need to remove it (this should - * never happen in case of requeue). And we cannot * defer such a check and removal, to avoid * inconsistencies in the time interval from the end * of this function to the start of the deferred work. @@ -4589,26 +4533,9 @@ bfqg_stats_update_io_remove(bfqq_group(bfqq), rq->cmd_flags); } + bfq_put_rq_priv_body(bfqq); - bfq_finish_requeue_request_body(bfqq); } - /* - * Reset private fields. In case of a requeue, this allows - * this function to correctly do nothing if it is spuriously - * invoked again on this same request (see the check at the - * beginning of the function). Probably, a better general - * design would be to prevent blk-mq from invoking the requeue - * or finish hooks of an elevator, for a request that is not - * referred by that elevator. - * - * Resetting the following fields would break the - * request-insertion logic if rq is re-inserted into a bfq - * internal queue, without a re-preparation. Here we assume - * that re-insertions of requeued requests, without - * re-preparation, can happen only for pass_through or at_head - * requests (which are not re-inserted into bfq internal - * queues). 
- */ rq->elv.priv[0] = NULL; rq->elv.priv[1] = NULL; } @@ -4713,16 +4640,8 @@ bool new_queue = false; bool bfqq_already_existing = false, split = false; + if (!rq->elv.icq) - /* - * Even if we don't have an icq attached, we should still clear - * the scheduler pointers, as they might point to previously - * allocated bic/bfqq structs. - */ - if (!rq->elv.icq) { - rq->elv.priv[0] = rq->elv.priv[1] = NULL; return; - } - bic = icq_to_bic(rq->elv.icq); spin_lock_irq(&bfqd->lock); @@ -5288,8 +5207,7 @@ static struct elevator_type iosched_bfq_mq = { .ops.mq = { .prepare_request = bfq_prepare_request, + .finish_request = bfq_finish_request, - .requeue_request = bfq_finish_requeue_request, - .finish_request = bfq_finish_requeue_request, .exit_icq = bfq_exit_icq, .insert_requests = bfq_insert_requests, .dispatch_request = bfq_dispatch_request, diff -u linux-azure-4.15.0/block/blk-core.c linux-azure-4.15.0/block/blk-core.c --- linux-azure-4.15.0/block/blk-core.c +++ linux-azure-4.15.0/block/blk-core.c @@ -821,6 +821,7 @@ while (true) { bool success = false; + int ret; rcu_read_lock(); if (percpu_ref_tryget_live(&q->q_usage_counter)) { @@ -852,12 +853,14 @@ */ smp_rmb(); - wait_event(q->mq_freeze_wq, - (atomic_read(&q->mq_freeze_depth) == 0 && - (preempt || !blk_queue_preempt_only(q))) || - blk_queue_dying(q)); + ret = wait_event_interruptible(q->mq_freeze_wq, + (atomic_read(&q->mq_freeze_depth) == 0 && + (preempt || !blk_queue_preempt_only(q))) || + blk_queue_dying(q)); if (blk_queue_dying(q)) return -ENODEV; + if (ret) + return ret; } } diff -u linux-azure-4.15.0/crypto/af_alg.c linux-azure-4.15.0/crypto/af_alg.c --- linux-azure-4.15.0/crypto/af_alg.c +++ linux-azure-4.15.0/crypto/af_alg.c @@ -158,14 +158,14 @@ void *private; int err; - if (sock->state == SS_CONNECTED) + /* If caller uses non-allowed flag, return error. 
*/ + if ((sa->salg_feat & ~allowed) || (sa->salg_mask & ~allowed)) return -EINVAL; - if (addr_len < sizeof(*sa)) + if (sock->state == SS_CONNECTED) return -EINVAL; - /* If caller uses non-allowed flag, return error. */ - if ((sa->salg_feat & ~allowed) || (sa->salg_mask & ~allowed)) + if (addr_len < sizeof(*sa)) return -EINVAL; sa->salg_type[sizeof(sa->salg_type) - 1] = 0; reverted: --- linux-azure-4.15.0/crypto/drbg.c +++ linux-azure-4.15.0.orig/crypto/drbg.c @@ -1134,10 +1134,8 @@ if (!drbg) return; kzfree(drbg->Vbuf); - drbg->Vbuf = NULL; drbg->V = NULL; kzfree(drbg->Cbuf); - drbg->Cbuf = NULL; drbg->C = NULL; kzfree(drbg->scratchpadbuf); drbg->scratchpadbuf = NULL; diff -u linux-azure-4.15.0/debian.azure-xenial/changelog linux-azure-4.15.0/debian.azure-xenial/changelog --- linux-azure-4.15.0/debian.azure-xenial/changelog +++ linux-azure-4.15.0/debian.azure-xenial/changelog @@ -1,622 +1,113 @@ -linux-azure (4.15.0-1020.20~16.04.1) xenial; urgency=medium +linux-azure (4.15.0-1021.21~16.04.1) xenial; urgency=medium - * linux-azure: 4.15.0-1020.20~16.04.1 -proposed tracker (LP: #1784292) + [ Ubuntu: 4.15.0-32.34 ] - * linux-azure: 4.15.0-1020.20 -proposed tracker (LP: #1784288) + * CVE-2018-5391 + - Revert "net: increase fragment memory usage limits" + * CVE-2018-3620 // CVE-2018-3646 + - x86/Centaur: Initialize supported CPU features properly + - x86/Centaur: Report correct CPU/cache topology + - x86/CPU/AMD: Have smp_num_siblings and cpu_llc_id always be present + - perf/events/amd/uncore: Fix amd_uncore_llc ID to use pre-defined cpu_llc_id + - x86/CPU: Rename intel_cacheinfo.c to cacheinfo.c + - x86/CPU/AMD: Calculate last level cache ID from number of sharing threads + - x86/CPU: Modify detect_extended_topology() to return result + - x86/CPU/AMD: Derive CPU topology from CPUID function 0xB when available + - x86/CPU: Move cpu local function declarations to local header + - x86/CPU: Make intel_num_cpu_cores() generic + - x86/CPU: Move cpu_detect_cache_sizes() 
+      into init_intel_cacheinfo()
+    - x86/CPU: Move x86_cpuinfo::x86_max_cores assignment to
+      detect_num_cpu_cores()
+    - x86/CPU/AMD: Fix LLC ID bit-shift calculation
+    - x86/mm: Factor out pageattr _PAGE_GLOBAL setting
+    - x86/mm: Undo double _PAGE_PSE clearing
+    - x86/mm: Introduce "default" kernel PTE mask
+    - x86/espfix: Document use of _PAGE_GLOBAL
+    - x86/mm: Do not auto-massage page protections
+    - x86/mm: Remove extra filtering in pageattr code
+    - x86/mm: Comment _PAGE_GLOBAL mystery
+    - x86/mm: Do not forbid _PAGE_RW before init for __ro_after_init
+    - x86/ldt: Fix support_pte_mask filtering in map_ldt_struct()
+    - x86/power/64: Fix page-table setup for temporary text mapping
+    - x86/pti: Filter at vma->vm_page_prot population
+    - x86/boot/64/clang: Use fixup_pointer() to access '__supported_pte_mask'
+    - x86/speculation/l1tf: Increase 32bit PAE __PHYSICAL_PAGE_SHIFT
+    - x86/speculation/l1tf: Change order of offset/type in swap entry
+    - x86/speculation/l1tf: Protect swap entries against L1TF
+    - x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation
+    - x86/speculation/l1tf: Make sure the first page is always reserved
+    - x86/speculation/l1tf: Add sysfs reporting for l1tf
+    - x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
+    - x86/speculation/l1tf: Limit swap file size to MAX_PA/2
+    - x86/bugs: Move the l1tf function and define pr_fmt properly
+    - sched/smt: Update sched_smt_present at runtime
+    - x86/smp: Provide topology_is_primary_thread()
+    - x86/topology: Provide topology_smt_supported()
+    - cpu/hotplug: Make bringup/teardown of smp threads symmetric
+    - cpu/hotplug: Split do_cpu_down()
+    - cpu/hotplug: Provide knobs to control SMT
+    - x86/cpu: Remove the pointless CPU printout
+    - x86/cpu/AMD: Remove the pointless detect_ht() call
+    - x86/cpu/common: Provide detect_ht_early()
+    - x86/cpu/topology: Provide detect_extended_topology_early()
+    - x86/cpu/intel: Evaluate smp_num_siblings early
+    - x86/CPU/AMD: Do not check CPUID max ext level before parsing SMP info
+    - x86/cpu/AMD: Evaluate smp_num_siblings early
+    - x86/apic: Ignore secondary threads if nosmt=force
+    - x86/speculation/l1tf: Extend 64bit swap file size limit
+    - x86/cpufeatures: Add detection of L1D cache flush support.
+    - x86/CPU/AMD: Move TOPOEXT reenablement before reading smp_num_siblings
+    - x86/speculation/l1tf: Protect PAE swap entries against L1TF
+    - x86/speculation/l1tf: Fix up pte->pfn conversion for PAE
+    - Revert "x86/apic: Ignore secondary threads if nosmt=force"
+    - cpu/hotplug: Boot HT siblings at least once
+    - x86/KVM: Warn user if KVM is loaded SMT and L1TF CPU bug being present
+    - x86/KVM/VMX: Add module argument for L1TF mitigation
+    - x86/KVM/VMX: Add L1D flush algorithm
+    - x86/KVM/VMX: Add L1D MSR based flush
+    - x86/KVM/VMX: Add L1D flush logic
+    - x86/KVM/VMX: Split the VMX MSR LOAD structures to have an host/guest numbers
+    - x86/KVM/VMX: Add find_msr() helper function
+    - x86/KVM/VMX: Separate the VMX AUTOLOAD guest/host number accounting
+    - x86/KVM/VMX: Extend add_atomic_switch_msr() to allow VMENTER only MSRs
+    - x86/KVM/VMX: Use MSR save list for IA32_FLUSH_CMD if required
+    - cpu/hotplug: Online siblings when SMT control is turned on
+    - x86/litf: Introduce vmx status variable
+    - x86/kvm: Drop L1TF MSR list approach
+    - x86/l1tf: Handle EPT disabled state proper
+    - x86/kvm: Move l1tf setup function
+    - x86/kvm: Add static key for flush always
+    - x86/kvm: Serialize L1D flush parameter setter
+    - x86/kvm: Allow runtime control of L1D flush
+    - cpu/hotplug: Expose SMT control init function
+    - cpu/hotplug: Set CPU_SMT_NOT_SUPPORTED early
+    - x86/bugs, kvm: Introduce boot-time control of L1TF mitigations
+    - Documentation: Add section about CPU vulnerabilities
+    - x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures
+    - x86/KVM/VMX: Initialize the vmx_l1d_flush_pages' content
+    - Documentation/l1tf: Fix typos
+    - cpu/hotplug: detect SMT disabled by BIOS
+    - x86/KVM/VMX: Don't set l1tf_flush_l1d to true from vmx_l1d_flush()
+    - x86/KVM/VMX: Replace 'vmx_l1d_flush_always' with 'vmx_l1d_flush_cond'
+    - x86/KVM/VMX: Move the l1tf_flush_l1d test to vmx_l1d_flush()
+    - x86/irq: Demote irq_cpustat_t::__softirq_pending to u16
+    - x86/KVM/VMX: Introduce per-host-cpu analogue of l1tf_flush_l1d
+    - x86: Don't include linux/irq.h from asm/hardirq.h
+    - x86/irq: Let interrupt handlers set kvm_cpu_l1tf_flush_l1d
+    - x86/KVM/VMX: Don't set l1tf_flush_l1d from vmx_handle_external_intr()
+    - Documentation/l1tf: Remove Yonah processors from not vulnerable list
+    - x86/speculation: Simplify sysfs report of VMX L1TF vulnerability
+    - x86/speculation: Use ARCH_CAPABILITIES to skip L1D flush on vmentry
+    - KVM: x86: Add a framework for supporting MSR-based features
+    - KVM: X86: Introduce kvm_get_msr_feature()
+    - KVM: VMX: support MSR_IA32_ARCH_CAPABILITIES as a feature MSR
+    - KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry
+    - cpu/hotplug: Fix SMT supported evaluation
+    - x86/speculation/l1tf: Invert all not present mappings
+    - x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
+    - x86/mm/pat: Make set_memory_np() L1TF safe
-
-  [ Ubuntu: 4.15.0-31.33 ]
-
-  * linux: 4.15.0-31.33 -proposed tracker (LP: #1784281)
-  * ubuntu_bpf_jit test failed on Bionic s390x systems (LP: #1753941)
-    - test_bpf: flag tests that cannot be jited on s390
-  * HDMI/DP audio can't work on the laptop of Dell Latitude 5495 (LP: #1782689)
-    - drm/nouveau: fix nouveau_dsm_get_client_id()'s return type
-    - drm/radeon: fix radeon_atpx_get_client_id()'s return type
-    - drm/amdgpu: fix amdgpu_atpx_get_client_id()'s return type
-    - platform/x86: apple-gmux: fix gmux_get_client_id()'s return type
-    - ALSA: hda: use PCI_BASE_CLASS_DISPLAY to replace PCI_CLASS_DISPLAY_VGA
-    - vga_switcheroo: set audio client id according to bound GPU id
-  * locking sockets broken due to missing AppArmor socket mediation patches
-    (LP: #1780227)
-    - UBUNTU SAUCE:
apparmor: fix apparmor mediating locking non-fs, unix sockets - * Update2 for ocxl driver (LP: #1781436) - - ocxl: Fix page fault handler in case of fault on dying process - * RTNL assertion failure on ipvlan (LP: #1776927) - - ipvlan: drop ipv6 dependency - - ipvlan: use per device spinlock to protect addrs list updates - * netns: unable to follow an interface that moves to another netns - (LP: #1774225) - - net: core: Expose number of link up/down transitions - - dev: always advertise the new nsid when the netns iface changes - - dev: advertise the new ifindex when the netns iface changes - * [Bionic] Disk IO hangs when using BFQ as io scheduler (LP: #1780066) - - block, bfq: fix occurrences of request finish method's old name - - block, bfq: remove batches of confusing ifdefs - - block, bfq: add requeue-request hook - * HP ProBook 455 G5 needs mute-led-gpio fixup (LP: #1781763) - - ALSA: hda: add mute led support for HP ProBook 455 G5 - * [Bionic] bug fixes to improve stability of the ThunderX2 i2c driver - (LP: #1781476) - - i2c: xlp9xx: Fix issue seen when updating receive length - - i2c: xlp9xx: Make sure the transfer size is not more than - I2C_SMBUS_BLOCK_SIZE - * x86/kvm: fix LAPIC timer drift when guest uses periodic mode (LP: #1778486) - - x86/kvm: fix LAPIC timer drift when guest uses periodic mode - * Please include ax88179_178a and r8152 modules in d-i udeb (LP: #1771823) - - [Config:] d-i: Add ax88179_178a and r8152 to nic-modules - * Nvidia fails after switching its mode (LP: #1778658) - - PCI: Restore config space on runtime resume despite being unbound - * Kernel error "task zfs:pid blocked for more than 120 seconds" (LP: #1781364) - - SAUCE: (noup) zfs to 0.7.5-1ubuntu16.3 - * CVE-2018-12232 - - PATCH 1/1] socket: close race condition between sock_close() and - sockfs_setattr() - * CVE-2018-10323 - - xfs: set format back to extents if xfs_bmap_extents_to_btree - * change front mic location for more lenovo m7/8/9xx machines (LP: #1781316) - - 
ALSA: hda/realtek - Fix the problem of two front mics on more machines - - ALSA: hda/realtek - two more lenovo models need fixup of MIC_LOCATION - * Cephfs + fscache: unable to handle kernel NULL pointer dereference at - 0000000000000000 IP: jbd2__journal_start+0x22/0x1f0 (LP: #1783246) - - ceph: track read contexts in ceph_file_info - * Touchpad of ThinkPad P52 failed to work with message "lost sync at byte" - (LP: #1779802) - - Input: elantech - fix V4 report decoding for module with middle key - - Input: elantech - enable middle button of touchpads on ThinkPad P52 - * xhci_hcd 0000:00:14.0: Root hub is not suspended (LP: #1779823) - - usb: xhci: dbc: Fix lockdep warning - - usb: xhci: dbc: Don't decrement runtime PM counter if DBC is not started - * CVE-2018-13406 - - video: uvesafb: Fix integer overflow in allocation - * CVE-2018-10840 - - ext4: correctly handle a zero-length xattr with a non-zero e_value_offs - * CVE-2018-11412 - - ext4: do not allow external inodes for inline data - * CVE-2018-10881 - - ext4: clear i_data in ext4_inode_info when removing inline data - * CVE-2018-12233 - - jfs: Fix inconsistency between memory allocation and ea_buf->max_size - * CVE-2018-12904 - - kvm: nVMX: Enforce cpl=0 for VMX instructions - * Error parsing PCC subspaces from PCCT (LP: #1528684) - - mailbox: PCC: erroneous error message when parsing ACPI PCCT - * CVE-2018-13094 - - xfs: don't call xfs_da_shrink_inode with NULL bp - * other users' coredumps can be read via setgid directory and killpriv bypass - (LP: #1779923) // CVE-2018-13405 - - Fix up non-directory creation in SGID directories - * Invoking obsolete 'firmware_install' target breaks snap build (LP: #1782166) - - snapcraft.yaml: stop invoking the obsolete (and non-existing) - 'firmware_install' target - * snapcraft.yaml: missing ubuntu-retpoline-extract-one script breaks the build - (LP: #1782116) - - snapcraft.yaml: copy retpoline-extract-one to scripts before build - * Allow Raven Ridge's audio controller 
to be runtime suspended (LP: #1782540) - - ALSA: hda: Add AZX_DCAPS_PM_RUNTIME for AMD Raven Ridge - * CVE-2018-11506 - - sr: pass down correctly sized SCSI sense buffer - * Bionic update: upstream stable patchset 2018-07-24 (LP: #1783418) - - net: Fix a bug in removing queues from XPS map - - net/mlx4_core: Fix error handling in mlx4_init_port_info. - - net/sched: fix refcnt leak in the error path of tcf_vlan_init() - - net: sched: red: avoid hashing NULL child - - net/smc: check for missing nlattrs in SMC_PNETID messages - - net: test tailroom before appending to linear skb - - packet: in packet_snd start writing at link layer allocation - - sock_diag: fix use-after-free read in __sk_free - - tcp: purge write queue in tcp_connect_init() - - vmxnet3: set the DMA mask before the first DMA map operation - - vmxnet3: use DMA memory barriers where required - - hv_netvsc: empty current transmit aggregation if flow blocked - - hv_netvsc: Use the num_online_cpus() for channel limit - - hv_netvsc: avoid retry on send during shutdown - - hv_netvsc: only wake transmit queue if link is up - - hv_netvsc: fix error unwind handling if vmbus_open fails - - hv_netvsc: cancel subchannel setup before halting device - - hv_netvsc: fix race in napi poll when rescheduling - - hv_netvsc: defer queue selection to VF - - hv_netvsc: disable NAPI before channel close - - hv_netvsc: use RCU to fix concurrent rx and queue changes - - hv_netvsc: change GPAD teardown order on older versions - - hv_netvsc: common detach logic - - hv_netvsc: Use Windows version instead of NVSP version on GPAD teardown - - hv_netvsc: Split netvsc_revoke_buf() and netvsc_teardown_gpadl() - - hv_netvsc: Ensure correct teardown message sequence order - - hv_netvsc: Fix a network regression after ifdown/ifup - - sparc: vio: use put_device() instead of kfree() - - ext2: fix a block leak - - s390: add assembler macros for CPU alternatives - - s390: move expoline assembler macros to a header - - s390/crc32-vx: use 
expoline for indirect branches - - s390/lib: use expoline for indirect branches - - s390/ftrace: use expoline for indirect branches - - s390/kernel: use expoline for indirect branches - - s390: move spectre sysfs attribute code - - s390: extend expoline to BC instructions - - s390: use expoline thunks in the BPF JIT - - scsi: sg: allocate with __GFP_ZERO in sg_build_indirect() - - scsi: zfcp: fix infinite iteration on ERP ready list - - loop: don't call into filesystem while holding lo_ctl_mutex - - loop: fix LOOP_GET_STATUS lock imbalance - - cfg80211: limit wiphy names to 128 bytes - - hfsplus: stop workqueue when fill_super() failed - - x86/kexec: Avoid double free_page() upon do_kexec_load() failure - - usb: gadget: f_uac2: fix bFirstInterface in composite gadget - - usb: dwc3: Undo PHY init if soft reset fails - - usb: dwc3: omap: don't miss events during suspend/resume - - usb: gadget: core: Fix use-after-free of usb_request - - usb: gadget: fsl_udc_core: fix ep valid checks - - usb: dwc2: Fix dwc2_hsotg_core_init_disconnected() - - usb: cdc_acm: prevent race at write to acm while system resumes - - net: usbnet: fix potential deadlock on 32bit hosts - - ARM: dts: imx7d-sdb: Fix regulator-usb-otg2-vbus node name - - usb: host: xhci-plat: revert "usb: host: xhci-plat: enable clk in resume - timing" - - USB: OHCI: Fix NULL dereference in HCDs using HCD_LOCAL_MEM - - net/usb/qmi_wwan.c: Add USB id for lt4120 modem - - net-usb: add qmi_wwan if on lte modem wistron neweb d18q1 - - Bluetooth: btusb: Add USB ID 7392:a611 for Edimax EW-7611ULB - - ALSA: usb-audio: Add native DSD support for Luxman DA-06 - - usb: dwc3: Add SoftReset PHY synchonization delay - - usb: dwc3: Update DWC_usb31 GTXFIFOSIZ reg fields - - usb: dwc3: Makefile: fix link error on randconfig - - xhci: zero usb device slot_id member when disabling and freeing a xhci slot - - usb: dwc2: Fix interval type issue - - usb: dwc2: hcd: Fix host channel halt flow - - usb: dwc2: host: Fix transaction errors 
in host mode - - usb: gadget: ffs: Let setup() return USB_GADGET_DELAYED_STATUS - - usb: gadget: ffs: Execute copy_to_user() with USER_DS set - - usbip: Correct maximum value of CONFIG_USBIP_VHCI_HC_PORTS - - usb: gadget: udc: change comparison to bitshift when dealing with a mask - - usb: gadget: composite: fix incorrect handling of OS desc requests - - media: lgdt3306a: Fix module count mismatch on usb unplug - - media: em28xx: USB bulk packet size fix - - Bluetooth: btusb: Add device ID for RTL8822BE - - xhci: Show what USB release number the xHC supports from protocol capablity - - staging: bcm2835-audio: Release resources on module_exit() - - staging: lustre: fix bug in osc_enter_cache_try - - staging: fsl-dpaa2/eth: Fix incorrect casts - - staging: rtl8192u: return -ENOMEM on failed allocation of priv->oldaddr - - staging: ks7010: Use constants from ieee80211_eid instead of literal ints. - - staging: lustre: lmv: correctly iput lmo_root - - crypto: inside-secure - wait for the request to complete if in the backlog - - crypto: atmel-aes - fix the keys zeroing on errors - - crypto: ccp - don't disable interrupts while setting up debugfs - - crypto: inside-secure - do not process request if no command was issued - - crypto: inside-secure - fix the cache_len computation - - crypto: inside-secure - fix the extra cache computation - - crypto: sunxi-ss - Add MODULE_ALIAS to sun4i-ss - - crypto: inside-secure - fix the invalidation step during cra_exit - - scsi: mpt3sas: fix an out of bound write - - scsi: ufs: Enable quirk to ignore sending WRITE_SAME command - - scsi: bnx2fc: Fix check in SCSI completion handler for timed out request - - scsi: sym53c8xx_2: iterator underflow in sym_getsync() - - scsi: mptfusion: Add bounds check in mptctl_hp_targetinfo() - - scsi: qla2xxx: Avoid triggering undefined behavior in - qla2x00_mbx_completion() - - scsi: storvsc: Increase cmd_per_lun for higher speed devices - - scsi: qedi: Fix truncation of CHAP name and secret - - scsi: 
aacraid: fix shutdown crash when init fails - - scsi: qla4xxx: skip error recovery in case of register disconnect. - - scsi: qedi: Fix kernel crash during port toggle - - scsi: mpt3sas: Do not mark fw_event workqueue as WQ_MEM_RECLAIM - - scsi: sd: Keep disk read-only when re-reading partition - - scsi: iscsi_tcp: set BDI_CAP_STABLE_WRITES when data digest enabled - - scsi: aacraid: Insure command thread is not recursively stopped - - scsi: core: Make SCSI Status CONDITION MET equivalent to GOOD - - scsi: mvsas: fix wrong endianness of sgpio api - - ASoC: hdmi-codec: Fix module unloading caused kernel crash - - ASoC: rockchip: rk3288-hdmi-analog: Select needed codecs - - ASoC: samsung: odroid: Fix 32000 sample rate handling - - ASoC: topology: create TLV data for dapm widgets - - ASoC: samsung: i2s: Ensure the RCLK rate is properly determined - - clk: rockchip: Fix wrong parent for SDMMC phase clock for rk3228 - - clk: Don't show the incorrect clock phase - - clk: hisilicon: mark wdt_mux_p[] as const - - clk: tegra: Fix pll_u rate configuration - - clk: rockchip: Prevent calculating mmc phase if clock rate is zero - - clk: samsung: s3c2410: Fix PLL rates - - clk: samsung: exynos7: Fix PLL rates - - clk: samsung: exynos5260: Fix PLL rates - - clk: samsung: exynos5433: Fix PLL rates - - clk: samsung: exynos5250: Fix PLL rates - - clk: samsung: exynos3250: Fix PLL rates - - media: dmxdev: fix error code for invalid ioctls - - media: Don't let tvp5150_get_vbi() go out of vbi_ram_default array - - media: ov5645: add missing of_node_put() in error path - - media: cx23885: Override 888 ImpactVCBe crystal frequency - - media: cx23885: Set subdev host data to clk_freq pointer - - media: s3c-camif: fix out-of-bounds array access - - media: lgdt3306a: Fix a double kfree on i2c device remove - - media: em28xx: Add Hauppauge SoloHD/DualHD bulk models - - media: v4l: vsp1: Fix display stalls when requesting too many inputs - - media: i2c: adv748x: fix HDMI field heights - - 
media: vb2: Fix videobuf2 to map correct area - - media: vivid: fix incorrect capabilities for radio - - media: cx25821: prevent out-of-bounds read on array card - - serial: xuartps: Fix out-of-bounds access through DT alias - - serial: sh-sci: Fix out-of-bounds access through DT alias - - serial: samsung: Fix out-of-bounds access through serial port index - - serial: mxs-auart: Fix out-of-bounds access through serial port index - - serial: imx: Fix out-of-bounds access through serial port index - - serial: fsl_lpuart: Fix out-of-bounds access through DT alias - - serial: arc_uart: Fix out-of-bounds access through DT alias - - serial: 8250: Don't service RX FIFO if interrupts are disabled - - serial: altera: ensure port->regshift is honored consistently - - rtc: snvs: Fix usage of snvs_rtc_enable - - rtc: hctosys: Ensure system time doesn't overflow time_t - - rtc: rk808: fix possible race condition - - rtc: m41t80: fix race conditions - - rtc: tx4939: avoid unintended sign extension on a 24 bit shift - - rtc: rp5c01: fix possible race condition - - rtc: goldfish: Add missing MODULE_LICENSE - - cxgb4: Correct ntuple mask validation for hash filters - - net: dsa: bcm_sf2: Fix RX_CLS_LOC_ANY overwrite for last rule - - net: dsa: Do not register devlink for unused ports - - net: dsa: bcm_sf2: Fix IPv6 rules and chain ID - - net: dsa: bcm_sf2: Fix IPv6 rule half deletion - - 3c59x: convert to generic DMA API - - net: ip6_gre: Request headroom in __gre6_xmit() - - net: ip6_gre: Split up ip6gre_tnl_link_config() - - net: ip6_gre: Split up ip6gre_tnl_change() - - net: ip6_gre: Split up ip6gre_newlink() - - net: ip6_gre: Split up ip6gre_changelink() - - qed: LL2 flush isles when connection is closed - - qed: Fix possibility of list corruption during rmmod flows - - qed: Fix LL2 race during connection terminate - - powerpc: Move default security feature flags - - Bluetooth: btusb: Add support for Intel Bluetooth device 22560 [8087:0026] - - staging: fsl-dpaa2/eth: Fix 
incorrect kfree - - crypto: inside-secure - move the digest to the request context - - scsi: lpfc: Fix NVME Initiator FirstBurst - - serial: mvebu-uart: fix tx lost characters - * Bionic update: upstream stable patchset 2018-07-20 (LP: #1782846) - - usbip: usbip_host: refine probe and disconnect debug msgs to be useful - - usbip: usbip_host: delete device from busid_table after rebind - - usbip: usbip_host: run rebind from exit when module is removed - - usbip: usbip_host: fix NULL-ptr deref and use-after-free errors - - usbip: usbip_host: fix bad unlock balance during stub_probe() - - ALSA: usb: mixer: volume quirk for CM102-A+/102S+ - - ALSA: hda: Add Lenovo C50 All in one to the power_save blacklist - - ALSA: control: fix a redundant-copy issue - - spi: pxa2xx: Allow 64-bit DMA - - spi: bcm-qspi: Avoid setting MSPI_CDRAM_PCS for spi-nor master - - spi: bcm-qspi: Always read and set BSPI_MAST_N_BOOT_CTRL - - KVM: arm/arm64: VGIC/ITS save/restore: protect kvm_read_guest() calls - - KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock - - vfio: ccw: fix cleanup if cp_prefetch fails - - tracing/x86/xen: Remove zero data size trace events - trace_xen_mmu_flush_tlb{_all} - - tee: shm: fix use-after-free via temporarily dropped reference - - netfilter: nf_tables: free set name in error path - - netfilter: nf_tables: can't fail after linking rule into active rule list - - netfilter: nf_socket: Fix out of bounds access in nf_sk_lookup_slow_v{4,6} - - i2c: designware: fix poll-after-enable regression - - powerpc/powernv: Fix NVRAM sleep in invalid context when crashing - - drm: Match sysfs name in link removal to link creation - - lib/test_bitmap.c: fix bitmap optimisation tests to report errors correctly - - radix tree: fix multi-order iteration race - - mm: don't allow deferred pages with NEED_PER_CPU_KM - - drm/i915/gen9: Add WaClearHIZ_WM_CHICKEN3 for bxt and glk - - s390/qdio: fix access to uninitialized qdio_q fields - - s390/qdio: don't release 
memory in qdio_setup_irq() - - s390: remove indirect branch from do_softirq_own_stack - - x86/pkeys: Override pkey when moving away from PROT_EXEC - - x86/pkeys: Do not special case protection key 0 - - efi: Avoid potential crashes, fix the 'struct efi_pci_io_protocol_32' - definition for mixed mode - - ARM: 8771/1: kprobes: Prohibit kprobes on do_undefinstr - - x86/mm: Drop TS_COMPAT on 64-bit exec() syscall - - tick/broadcast: Use for_each_cpu() specially on UP kernels - - ARM: 8769/1: kprobes: Fix to use get_kprobe_ctlblk after irq-disabed - - ARM: 8770/1: kprobes: Prohibit probing on optimized_callback - - ARM: 8772/1: kprobes: Prohibit kprobes on get_user functions - - Btrfs: fix xattr loss after power failure - - Btrfs: send, fix invalid access to commit roots due to concurrent - snapshotting - - btrfs: property: Set incompat flag if lzo/zstd compression is set - - btrfs: fix crash when trying to resume balance without the resume flag - - btrfs: Split btrfs_del_delalloc_inode into 2 functions - - btrfs: Fix delalloc inodes invalidation during transaction abort - - btrfs: fix reading stale metadata blocks after degraded raid1 mounts - - xhci: Fix USB3 NULL pointer dereference at logical disconnect. 
- - KVM: arm/arm64: Properly protect VGIC locks from IRQs - - KVM: arm/arm64: VGIC/ITS: Promote irq_lock() in update_affinity - - hwmon: (k10temp) Fix reading critical temperature register - - hwmon: (k10temp) Use API function to access System Management Network - - vsprintf: Replace memory barrier with static_key for random_ptr_key update - - x86/amd_nb: Add support for Raven Ridge CPUs - - x86/apic/x2apic: Initialize cluster ID properly - * Bionic update: upstream stable patchset 2018-07-09 (LP: #1780858) - - 8139too: Use disable_irq_nosync() in rtl8139_poll_controller() - - bridge: check iface upper dev when setting master via ioctl - - dccp: fix tasklet usage - - ipv4: fix fnhe usage by non-cached routes - - ipv4: fix memory leaks in udp_sendmsg, ping_v4_sendmsg - - llc: better deal with too small mtu - - net: ethernet: sun: niu set correct packet size in skb - - net: ethernet: ti: cpsw: fix packet leaking in dual_mac mode - - net/mlx4_en: Fix an error handling path in 'mlx4_en_init_netdev()' - - net/mlx4_en: Verify coalescing parameters are in range - - net/mlx5e: Err if asked to offload TC match on frag being first - - net/mlx5: E-Switch, Include VF RDMA stats in vport statistics - - net sched actions: fix refcnt leak in skbmod - - net_sched: fq: take care of throttled flows before reuse - - net: support compat 64-bit time in {s,g}etsockopt - - net/tls: Don't recursively call push_record during tls_write_space callbacks - - net/tls: Fix connection stall on partial tls record - - openvswitch: Don't swap table in nlattr_set() after OVS_ATTR_NESTED is found - - qmi_wwan: do not steal interfaces from class drivers - - r8169: fix powering up RTL8168h - - rds: do not leak kernel memory to user land - - sctp: delay the authentication for the duplicated cookie-echo chunk - - sctp: fix the issue that the cookie-ack with auth can't get processed - - sctp: handle two v4 addrs comparison in sctp_inet6_cmp_addr - - sctp: remove sctp_chunk_put from fail_mark err path in - 
sctp_ulpevent_make_rcvmsg - - sctp: use the old asoc when making the cookie-ack chunk in dupcook_d - - tcp_bbr: fix to zero idle_restart only upon S/ACKed data - - tcp: ignore Fast Open on repair mode - - tg3: Fix vunmap() BUG_ON() triggered from tg3_free_consistent(). - - bonding: do not allow rlb updates to invalid mac - - bonding: send learning packets for vlans on slave - - net: sched: fix error path in tcf_proto_create() when modules are not - configured - - net/mlx5e: TX, Use correct counter in dma_map error flow - - net/mlx5: Avoid cleaning flow steering table twice during error flow - - hv_netvsc: set master device - - ipv6: fix uninit-value in ip6_multipath_l3_keys() - - net/mlx5e: Allow offloading ipv4 header re-write for icmp - - nsh: fix infinite loop - - udp: fix SO_BINDTODEVICE - - l2tp: revert "l2tp: fix missing print session offset info" - - proc: do not access cmdline nor environ from file-backed areas - - net/smc: restrict non-blocking connect finish - - mlxsw: spectrum_switchdev: Do not remove mrouter port from MDB's ports list - - net/mlx5e: DCBNL fix min inline header size for dscp - - net: systemport: Correclty disambiguate driver instances - - sctp: clear the new asoc's stream outcnt in sctp_stream_update - - tcp: restore autocorking - - tipc: fix one byte leak in tipc_sk_set_orig_addr() - - hv_netvsc: Fix net device attach on older Windows hosts - * Bionic update: upstream stable patchset 2018-07-06 (LP: #1780499) - - ext4: prevent right-shifting extents beyond EXT_MAX_BLOCKS - - ipvs: fix rtnl_lock lockups caused by start_sync_thread - - netfilter: ebtables: don't attempt to allocate 0-sized compat array - - kcm: Call strp_stop before strp_done in kcm_attach - - crypto: af_alg - fix possible uninit-value in alg_bind() - - netlink: fix uninit-value in netlink_sendmsg - - net: fix rtnh_ok() - - net: initialize skb->peeked when cloning - - net: fix uninit-value in __hw_addr_add_ex() - - dccp: initialize ireq->ir_mark - - ipv4: fix uninit-value 
in ip_route_output_key_hash_rcu() - - soreuseport: initialise timewait reuseport field - - inetpeer: fix uninit-value in inet_getpeer - - memcg: fix per_node_info cleanup - - perf: Remove superfluous allocation error check - - tcp: fix TCP_REPAIR_QUEUE bound checking - - bdi: wake up concurrent wb_shutdown() callers. - - bdi: Fix oops in wb_workfn() - - gpioib: do not free unrequested descriptors - - gpio: fix aspeed_gpio unmask irq - - gpio: fix error path in lineevent_create - - rfkill: gpio: fix memory leak in probe error path - - libata: Apply NOLPM quirk for SanDisk SD7UB3Q*G1001 SSDs - - dm integrity: use kvfree for kvmalloc'd memory - - tracing: Fix regex_match_front() to not over compare the test string - - z3fold: fix reclaim lock-ups - - mm: sections are not offlined during memory hotremove - - mm, oom: fix concurrent munlock and oom reaper unmap, v3 - - ceph: fix rsize/wsize capping in ceph_direct_read_write() - - can: kvaser_usb: Increase correct stats counter in kvaser_usb_rx_can_msg() - - can: hi311x: Acquire SPI lock on ->do_get_berr_counter - - can: hi311x: Work around TX complete interrupt erratum - - drm/vc4: Fix scaling of uni-planar formats - - drm/i915: Fix drm:intel_enable_lvds ERROR message in kernel log - - drm/atomic: Clean old_state/new_state in drm_atomic_state_default_clear() - - drm/atomic: Clean private obj old_state/new_state in - drm_atomic_state_default_clear() - - net: atm: Fix potential Spectre v1 - - atm: zatm: Fix potential Spectre v1 - - cpufreq: schedutil: Avoid using invalid next_freq - - Revert "Bluetooth: btusb: Fix quirk for Atheros 1525/QCA6174" - - Bluetooth: btusb: Only check needs_reset_resume DMI table for QCA rome - chipsets - - thermal: exynos: Reading temperature makes sense only when TMU is turned on - - thermal: exynos: Propagate error value from tmu_read() - - nvme: add quirk to force medium priority for SQ creation - - smb3: directory sync should not return an error - - sched/autogroup: Fix possible Spectre-v1 
indexing for sched_prio_to_weight[] - - tracing/uprobe_event: Fix strncpy corner case - - perf/x86: Fix possible Spectre-v1 indexing for hw_perf_event cache_* - - perf/x86/cstate: Fix possible Spectre-v1 indexing for pkg_msr - - perf/x86/msr: Fix possible Spectre-v1 indexing in the MSR driver - - perf/core: Fix possible Spectre-v1 indexing for ->aux_pages[] - - perf/x86: Fix possible Spectre-v1 indexing for x86_pmu::event_map() - - i2c: dev: prevent ZERO_SIZE_PTR deref in i2cdev_ioctl_rdwr() - - bdi: Fix use after free bug in debugfs_remove() - - drm/ttm: Use GFP_TRANSHUGE_LIGHT for allocating huge pages - - drm/i915: Adjust eDP's logical vco in a reliable place. - - drm/nouveau/ttm: don't dereference nvbo::cli, it can outlive client - - sched/core: Fix possible Spectre-v1 indexing for sched_prio_to_weight[] - * Bionic update: upstream stable patchset 2018-06-26 (LP: #1778759) - - percpu: include linux/sched.h for cond_resched() - - ACPI / button: make module loadable when booted in non-ACPI mode - - USB: serial: option: Add support for Quectel EP06 - - ALSA: hda - Fix incorrect usage of IS_REACHABLE() - - ALSA: pcm: Check PCM state at xfern compat ioctl - - ALSA: seq: Fix races at MIDI encoding in snd_virmidi_output_trigger() - - ALSA: dice: fix kernel NULL pointer dereference due to invalid calculation - for array index - - ALSA: aloop: Mark paused device as inactive - - ALSA: aloop: Add missing cable lock to ctl API callbacks - - tracepoint: Do not warn on ENOMEM - - scsi: target: Fix fortify_panic kernel exception - - Input: leds - fix out of bound access - - Input: atmel_mxt_ts - add touchpad button mapping for Samsung Chromebook Pro - - rtlwifi: btcoex: Add power_on_setting routine - - rtlwifi: cleanup 8723be ant_sel definition - - xfs: prevent creating negative-sized file via INSERT_RANGE - - RDMA/cxgb4: release hw resources on device removal - - RDMA/ucma: Allow resolving address w/o specifying source address - - RDMA/mlx5: Fix multiple NULL-ptr deref 
errors in rereg_mr flow - - RDMA/mlx5: Protect from shift operand overflow - - NET: usb: qmi_wwan: add support for ublox R410M PID 0x90b2 - - IB/mlx5: Use unlimited rate when static rate is not supported - - IB/hfi1: Fix handling of FECN marked multicast packet - - IB/hfi1: Fix loss of BECN with AHG - - IB/hfi1: Fix NULL pointer dereference when invalid num_vls is used - - iw_cxgb4: Atomically flush per QP HW CQEs - - drm/vmwgfx: Fix a buffer object leak - - drm/bridge: vga-dac: Fix edid memory leak - - test_firmware: fix setting old custom fw path back on exit, second try - - errseq: Always report a writeback error once - - USB: serial: visor: handle potential invalid device configuration - - usb: dwc3: gadget: Fix list_del corruption in dwc3_ep_dequeue - - USB: Accept bulk endpoints with 1024-byte maxpacket - - USB: serial: option: reimplement interface masking - - USB: serial: option: adding support for ublox R410M - - usb: musb: host: fix potential NULL pointer dereference - - usb: musb: trace: fix NULL pointer dereference in musb_g_tx() - - platform/x86: asus-wireless: Fix NULL pointer dereference - - irqchip/qcom: Fix check for spurious interrupts - - tracing: Fix bad use of igrab in trace_uprobe.c - - [Config] CONFIG_ARM64_ERRATUM_1024718=y - - arm64: Add work around for Arm Cortex-A55 Erratum 1024718 - - Input: atmel_mxt_ts - add touchpad button mapping for Samsung Chromebook Pro - - infiniband: mlx5: fix build errors when INFINIBAND_USER_ACCESS=m - - btrfs: Take trans lock before access running trans in check_delayed_ref - - drm/vc4: Make sure vc4_bo_{inc,dec}_usecnt() calls are balanced - - xhci: Fix use-after-free in xhci_free_virt_device - - platform/x86: Kconfig: Fix dell-laptop dependency chain. 
-    - KVM: x86: remove APIC Timer periodic/oneshot spikes
-    - clocksource: Allow clocksource_mark_unstable() on unregistered clocksources
-    - clocksource: Initialize cs->wd_list
-    - clocksource: Consistent de-rate when marking unstable
-  * Bionic update: upstream stable patchset 2018-06-22 (LP: #1778265)
-    - ext4: set h_journal if there is a failure starting a reserved handle
-    - ext4: add MODULE_SOFTDEP to ensure crc32c is included in the initramfs
-    - ext4: add validity checks for bitmap block numbers
-    - ext4: fix bitmap position validation
-    - random: fix possible sleeping allocation from irq context
-    - random: rate limit unseeded randomness warnings
-    - usbip: usbip_event: fix to not print kernel pointer address
-    - usbip: usbip_host: fix to hold parent lock for device_attach() calls
-    - usbip: vhci_hcd: Fix usb device and sockfd leaks
-    - usbip: vhci_hcd: check rhport before using in vhci_hub_control()
-    - Revert "xhci: plat: Register shutdown for xhci_plat"
-    - USB: serial: simple: add libtransistor console
-    - USB: serial: ftdi_sio: use jtag quirk for Arrow USB Blaster
-    - USB: serial: cp210x: add ID for NI USB serial console
-    - usb: core: Add quirk for HP v222w 16GB Mini
-    - USB: Increment wakeup count on remote wakeup.
-    - ALSA: usb-audio: Skip broken EU on Dell dock USB-audio
-    - virtio: add ability to iterate over vqs
-    - virtio_console: don't tie bufs to a vq
-    - virtio_console: free buffers after reset
-    - virtio_console: drop custom control queue cleanup
-    - virtio_console: move removal code
-    - virtio_console: reset on out of memory
-    - drm/virtio: fix vq wait_event condition
-    - tty: Don't call panic() at tty_ldisc_init()
-    - tty: n_gsm: Fix long delays with control frame timeouts in ADM mode
-    - tty: n_gsm: Fix DLCI handling for ADM mode if debug & 2 is not set
-    - tty: Avoid possible error pointer dereference at tty_ldisc_restore().
-    - tty: Use __GFP_NOFAIL for tty_ldisc_get()
-    - ALSA: dice: fix OUI for TC group
-    - ALSA: dice: fix error path to destroy initialized stream data
-    - ALSA: hda - Skip jack and others for non-existing PCM streams
-    - ALSA: opl3: Hardening for potential Spectre v1
-    - ALSA: asihpi: Hardening for potential Spectre v1
-    - ALSA: hdspm: Hardening for potential Spectre v1
-    - ALSA: rme9652: Hardening for potential Spectre v1
-    - ALSA: control: Hardening for potential Spectre v1
-    - ALSA: pcm: Return negative delays from SNDRV_PCM_IOCTL_DELAY.
-    - ALSA: core: Report audio_tstamp in snd_pcm_sync_ptr
-    - ALSA: seq: oss: Fix unbalanced use lock for synth MIDI device
-    - ALSA: seq: oss: Hardening for potential Spectre v1
-    - ALSA: hda: Hardening for potential Spectre v1
-    - ALSA: hda/realtek - Add some fixes for ALC233
-    - ALSA: hda/realtek - Update ALC255 depop optimize
-    - ALSA: hda/realtek - change the location for one of two front mics
-    - mtd: spi-nor: cadence-quadspi: Fix page fault kernel panic
-    - mtd: cfi: cmdset_0001: Do not allow read/write to suspend erase block.
-    - mtd: cfi: cmdset_0001: Workaround Micron Erase suspend bug.
-    - mtd: cfi: cmdset_0002: Do not allow read/write to suspend erase block.
-    - mtd: rawnand: tango: Fix struct clk memory leak
-    - kobject: don't use WARN for registration failures
-    - scsi: sd: Defer spinning up drive while SANITIZE is in progress
-    - bfq-iosched: ensure to clear bic/bfqq pointers when preparing request
-    - vfio: ccw: process ssch with interrupts disabled
-    - ANDROID: binder: prevent transactions into own process.
-    - PCI: aardvark: Fix logic in advk_pcie_{rd,wr}_conf()
-    - PCI: aardvark: Set PIO_ADDR_LS correctly in advk_pcie_rd_conf()
-    - PCI: aardvark: Use ISR1 instead of ISR0 interrupt in legacy irq mode
-    - PCI: aardvark: Fix PCIe Max Read Request Size setting
-    - ARM: amba: Make driver_override output consistent with other buses
-    - ARM: amba: Fix race condition with driver_override
-    - ARM: amba: Don't read past the end of sysfs "driver_override" buffer
-    - ARM: socfpga_defconfig: Remove QSPI Sector 4K size force
-    - KVM: arm/arm64: Close VMID generation race
-    - crypto: drbg - set freed buffers to NULL
-    - ASoC: fsl_esai: Fix divisor calculation failure at lower ratio
-    - libceph: un-backoff on tick when we have a authenticated session
-    - libceph: reschedule a tick in finish_hunting()
-    - libceph: validate con->state at the top of try_write()
-    - fpga-manager: altera-ps-spi: preserve nCONFIG state
-    - earlycon: Use a pointer table to fix __earlycon_table stride
-    - drm/amdgpu: set COMPUTE_PGM_RSRC1 for SGPR/VGPR clearing shaders
-    - drm/i915: Enable display WA#1183 from its correct spot
-    - objtool, perf: Fix GCC 8 -Wrestrict error
-    - tools/lib/subcmd/pager.c: do not alias select() params
-    - x86/ipc: Fix x32 version of shmid64_ds and msqid64_ds
-    - x86/smpboot: Don't use mwait_play_dead() on AMD systems
-    - x86/microcode/intel: Save microcode patch unconditionally
-    - x86/microcode: Do not exit early from __reload_late()
-    - tick/sched: Do not mess with an enqueued hrtimer
-    - arm/arm64: KVM: Add PSCI version selection API
-    - powerpc/eeh: Fix race with driver un/bind
-    - serial: mvebu-uart: Fix local flags handling on termios update
-    - block: do not use interruptible wait anywhere
-    - ASoC: dmic: Fix clock parenting
-    - PCI / PM: Do not clear state_saved in pci_pm_freeze() when smart suspend is
-      set
-    - module: Fix display of wrong module .text address
-    - drm/edid: Reset more of the display info
-    - drm/i915/fbdev: Enable late fbdev initial configuration
-    - drm/i915/audio: set minimum CD clock to twice the BCLK
-    - drm/amd/display: Fix deadlock when flushing irq
-    - drm/amd/display: Disallow enabling CRTC without primary plane with FB
-  * Bionic update: upstream stable patchset 2018-06-22 (LP: #1778265) //
-    CVE-2018-1108.
-    - random: set up the NUMA crng instances after the CRNG is fully initialized
-  * Ryzen/Raven Ridge USB ports do not work (LP: #1756700)
-    - xhci: Fix USB ports for Dell Inspiron 5775
-  * [Ubuntu 1804][boston][ixgbe] EEH causes kernel BUG at /build/linux-
-    jWa1Fv/linux-4.15.0/drivers/pci/msi.c:352 (i2S) (LP: #1776389)
-    - ixgbe/ixgbevf: Free IRQ when PCI error recovery removes the device
-  * Need fix to aacraid driver to prevent panic (LP: #1770095)
-    - scsi: aacraid: Correct hba_send to include iu_type
-  * kernel: Fix arch random implementation (LP: #1775391)
-    - s390/archrandom: Rework arch random implementation.
-  * kernel: Fix memory leak on CCA and EP11 CPRB processing. (LP: #1775390)
-    - s390/zcrypt: Fix CCA and EP11 CPRB processing failure memory leak.
-  * Various fixes for CXL kernel module (LP: #1774471)
-    - cxl: Remove function write_timebase_ctrl_psl9() for PSL9
-    - cxl: Set the PBCQ Tunnel BAR register when enabling capi mode
-    - cxl: Report the tunneled operations status
-    - cxl: Configure PSL to not use APC virtual machines
-    - cxl: Disable prefault_mode in Radix mode
-  * Bluetooth not working (LP: #1764645)
-    - Bluetooth: btusb: Apply QCA Rome patches for some ATH3012 models
-  * linux-snapdragon: wcn36xx: mac address generation on boot (LP: #1776491)
-    - [Config] arm64: snapdragon: WCN36XX_SNAPDRAGON_HACKS=y
-    - SAUCE: wcn36xx: read MAC from file or randomly generate one
-  * fscache: Fix hanging wait on page discarded by writeback (LP: #1777029)
-    - fscache: Fix hanging wait on page discarded by writeback
-
- -- Stefan Bader  Thu, 02 Aug 2018 17:10:18 +0200
+ -- Stefan Bader  Fri, 10 Aug 2018 11:22:51 +0200
 
 linux-azure (4.15.0-1019.19) bionic; urgency=medium
diff -u linux-azure-4.15.0/debian.azure-xenial/config/config.common.ubuntu linux-azure-4.15.0/debian.azure-xenial/config/config.common.ubuntu
--- linux-azure-4.15.0/debian.azure-xenial/config/config.common.ubuntu
+++ linux-azure-4.15.0/debian.azure-xenial/config/config.common.ubuntu
@@ -167,6 +167,7 @@
 CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
 CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
 CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
+CONFIG_ARCH_HAS_FILTER_PGPROT=y
 CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
 CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
 CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
@@ -1761,6 +1762,7 @@
 CONFIG_HOTPLUG_PCI_CPCI_ZT5550=m
 CONFIG_HOTPLUG_PCI_PCIE=y
 CONFIG_HOTPLUG_PCI_SHPC=m
+CONFIG_HOTPLUG_SMT=y
 CONFIG_HPET=y
 CONFIG_HPET_EMULATE_RTC=y
 CONFIG_HPET_MMAP=y
@@ -2673,7 +2675,7 @@
 CONFIG_MLX5_EN_IPSEC=y
 CONFIG_MLX5_ESWITCH=y
 CONFIG_MLX5_FPGA=y
-CONFIG_MLX5_INFINIBAND=m
+CONFIG_MLX5_INFINIBAND=y
 CONFIG_MLX5_MPFS=y
 CONFIG_MLXFW=m
 CONFIG_MLXSW_CORE=m
diff -u linux-azure-4.15.0/debian.azure/changelog linux-azure-4.15.0/debian.azure/changelog
--- linux-azure-4.15.0/debian.azure/changelog
+++ linux-azure-4.15.0/debian.azure/changelog
@@ -1,620 +1,113 @@
-linux-azure (4.15.0-1020.20) bionic; urgency=medium
+linux-azure (4.15.0-1021.21) bionic; urgency=medium
-  * linux-azure: 4.15.0-1020.20 -proposed tracker (LP: #1784288)
+  [ Ubuntu: 4.15.0-32.34 ]
-  [ Ubuntu: 4.15.0-31.33 ]
+  * CVE-2018-5391
+    - Revert "net: increase fragment memory usage limits"
+  * CVE-2018-3620 // CVE-2018-3646
+    - x86/Centaur: Initialize supported CPU features properly
+    - x86/Centaur: Report correct CPU/cache topology
+    - x86/CPU/AMD: Have smp_num_siblings and cpu_llc_id always be present
+    - perf/events/amd/uncore: Fix amd_uncore_llc ID to use pre-defined cpu_llc_id
+    - x86/CPU: Rename intel_cacheinfo.c to cacheinfo.c
+    - x86/CPU/AMD: Calculate last level cache ID from number of sharing threads
+    - x86/CPU: Modify detect_extended_topology() to return result
+    - x86/CPU/AMD: Derive CPU topology from CPUID function 0xB when available
+    - x86/CPU: Move cpu local function declarations to local header
+    - x86/CPU: Make intel_num_cpu_cores() generic
+    - x86/CPU: Move cpu_detect_cache_sizes() into init_intel_cacheinfo()
+    - x86/CPU: Move x86_cpuinfo::x86_max_cores assignment to
+      detect_num_cpu_cores()
+    - x86/CPU/AMD: Fix LLC ID bit-shift calculation
+    - x86/mm: Factor out pageattr _PAGE_GLOBAL setting
+    - x86/mm: Undo double _PAGE_PSE clearing
+    - x86/mm: Introduce "default" kernel PTE mask
+    - x86/espfix: Document use of _PAGE_GLOBAL
+    - x86/mm: Do not auto-massage page protections
+    - x86/mm: Remove extra filtering in pageattr code
+    - x86/mm: Comment _PAGE_GLOBAL mystery
+    - x86/mm: Do not forbid _PAGE_RW before init for __ro_after_init
+    - x86/ldt: Fix support_pte_mask filtering in map_ldt_struct()
+    - x86/power/64: Fix page-table setup for temporary text mapping
+    - x86/pti: Filter at vma->vm_page_prot population
+    - x86/boot/64/clang: Use fixup_pointer() to access '__supported_pte_mask'
+    - x86/speculation/l1tf: Increase 32bit PAE __PHYSICAL_PAGE_SHIFT
+    - x86/speculation/l1tf: Change order of offset/type in swap entry
+    - x86/speculation/l1tf: Protect swap entries against L1TF
+    - x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation
+    - x86/speculation/l1tf: Make sure the first page is always reserved
+    - x86/speculation/l1tf: Add sysfs reporting for l1tf
+    - x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
+    - x86/speculation/l1tf: Limit swap file size to MAX_PA/2
+    - x86/bugs: Move the l1tf function and define pr_fmt properly
+    - sched/smt: Update sched_smt_present at runtime
+    - x86/smp: Provide topology_is_primary_thread()
+    - x86/topology: Provide topology_smt_supported()
+    - cpu/hotplug: Make bringup/teardown of smp threads symmetric
+    - cpu/hotplug: Split do_cpu_down()
+    - cpu/hotplug: Provide knobs to control SMT
+    - x86/cpu: Remove the pointless CPU printout
+    - x86/cpu/AMD: Remove the pointless detect_ht() call
+    - x86/cpu/common: Provide detect_ht_early()
+    - x86/cpu/topology: Provide detect_extended_topology_early()
+    - x86/cpu/intel: Evaluate smp_num_siblings early
+    - x86/CPU/AMD: Do not check CPUID max ext level before parsing SMP info
+    - x86/cpu/AMD: Evaluate smp_num_siblings early
+    - x86/apic: Ignore secondary threads if nosmt=force
+    - x86/speculation/l1tf: Extend 64bit swap file size limit
+    - x86/cpufeatures: Add detection of L1D cache flush support.
+    - x86/CPU/AMD: Move TOPOEXT reenablement before reading smp_num_siblings
+    - x86/speculation/l1tf: Protect PAE swap entries against L1TF
+    - x86/speculation/l1tf: Fix up pte->pfn conversion for PAE
+    - Revert "x86/apic: Ignore secondary threads if nosmt=force"
+    - cpu/hotplug: Boot HT siblings at least once
+    - x86/KVM: Warn user if KVM is loaded SMT and L1TF CPU bug being present
+    - x86/KVM/VMX: Add module argument for L1TF mitigation
+    - x86/KVM/VMX: Add L1D flush algorithm
+    - x86/KVM/VMX: Add L1D MSR based flush
+    - x86/KVM/VMX: Add L1D flush logic
+    - x86/KVM/VMX: Split the VMX MSR LOAD structures to have an host/guest numbers
+    - x86/KVM/VMX: Add find_msr() helper function
+    - x86/KVM/VMX: Separate the VMX AUTOLOAD guest/host number accounting
+    - x86/KVM/VMX: Extend add_atomic_switch_msr() to allow VMENTER only MSRs
+    - x86/KVM/VMX: Use MSR save list for IA32_FLUSH_CMD if required
+    - cpu/hotplug: Online siblings when SMT control is turned on
+    - x86/litf: Introduce vmx status variable
+    - x86/kvm: Drop L1TF MSR list approach
+    - x86/l1tf: Handle EPT disabled state proper
+    - x86/kvm: Move l1tf setup function
+    - x86/kvm: Add static key for flush always
+    - x86/kvm: Serialize L1D flush parameter setter
+    - x86/kvm: Allow runtime control of L1D flush
+    - cpu/hotplug: Expose SMT control init function
+    - cpu/hotplug: Set CPU_SMT_NOT_SUPPORTED early
+    - x86/bugs, kvm: Introduce boot-time control of L1TF mitigations
+    - Documentation: Add section about CPU vulnerabilities
+    - x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures
+    - x86/KVM/VMX: Initialize the vmx_l1d_flush_pages' content
+    - Documentation/l1tf: Fix typos
+    - cpu/hotplug: detect SMT disabled by BIOS
+    - x86/KVM/VMX: Don't set l1tf_flush_l1d to true from vmx_l1d_flush()
+    - x86/KVM/VMX: Replace 'vmx_l1d_flush_always' with 'vmx_l1d_flush_cond'
+    - x86/KVM/VMX: Move the l1tf_flush_l1d test to vmx_l1d_flush()
+    - x86/irq: Demote irq_cpustat_t::__softirq_pending to u16
+    - x86/KVM/VMX: Introduce per-host-cpu analogue of l1tf_flush_l1d
+    - x86: Don't include linux/irq.h from asm/hardirq.h
+    - x86/irq: Let interrupt handlers set kvm_cpu_l1tf_flush_l1d
+    - x86/KVM/VMX: Don't set l1tf_flush_l1d from vmx_handle_external_intr()
+    - Documentation/l1tf: Remove Yonah processors from not vulnerable list
+    - x86/speculation: Simplify sysfs report of VMX L1TF vulnerability
+    - x86/speculation: Use ARCH_CAPABILITIES to skip L1D flush on vmentry
+    - KVM: x86: Add a framework for supporting MSR-based features
+    - KVM: X86: Introduce kvm_get_msr_feature()
+    - KVM: VMX: support MSR_IA32_ARCH_CAPABILITIES as a feature MSR
+    - KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry
+    - cpu/hotplug: Fix SMT supported evaluation
+    - x86/speculation/l1tf: Invert all not present mappings
+    - x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
+    - x86/mm/pat: Make set_memory_np() L1TF safe
-  * linux: 4.15.0-31.33 -proposed tracker (LP: #1784281)
-  * ubuntu_bpf_jit test failed on Bionic s390x systems (LP: #1753941)
-    - test_bpf: flag tests that cannot be jited on s390
-  * HDMI/DP audio can't work on the laptop of Dell Latitude 5495 (LP: #1782689)
-    - drm/nouveau: fix nouveau_dsm_get_client_id()'s return type
-    - drm/radeon: fix radeon_atpx_get_client_id()'s return type
-    - drm/amdgpu: fix amdgpu_atpx_get_client_id()'s return type
-    - platform/x86: apple-gmux: fix gmux_get_client_id()'s return type
-    - ALSA: hda: use PCI_BASE_CLASS_DISPLAY to replace PCI_CLASS_DISPLAY_VGA
-    - vga_switcheroo: set audio client id according to bound GPU id
-  * locking sockets broken due to missing AppArmor socket mediation patches
-    (LP: #1780227)
-    - UBUNTU SAUCE: apparmor: fix apparmor mediating locking non-fs, unix sockets
-  * Update2 for ocxl driver (LP: #1781436)
-    - ocxl: Fix page fault handler in case of fault on dying process
-  * RTNL assertion failure on ipvlan (LP: #1776927)
-    - ipvlan: drop ipv6 dependency
-    - ipvlan: use per device spinlock to protect addrs list updates
-  * netns: unable to follow an interface that moves to another netns
-    (LP: #1774225)
-    - net: core: Expose number of link up/down transitions
-    - dev: always advertise the new nsid when the netns iface changes
-    - dev: advertise the new ifindex when the netns iface changes
-  * [Bionic] Disk IO hangs when using BFQ as io scheduler (LP: #1780066)
-    - block, bfq: fix occurrences of request finish method's old name
-    - block, bfq: remove batches of confusing ifdefs
-    - block, bfq: add requeue-request hook
-  * HP ProBook 455 G5 needs mute-led-gpio fixup (LP: #1781763)
-    - ALSA: hda: add mute led support for HP ProBook 455 G5
-  * [Bionic] bug fixes to improve stability of the ThunderX2 i2c driver
-    (LP: #1781476)
-    - i2c: xlp9xx: Fix issue seen when updating receive length
-    - i2c: xlp9xx: Make sure the transfer size is not more than
-      I2C_SMBUS_BLOCK_SIZE
-  * x86/kvm: fix LAPIC timer drift when guest uses periodic mode (LP: #1778486)
-    - x86/kvm: fix LAPIC timer drift when guest uses periodic mode
-  * Please include ax88179_178a and r8152 modules in d-i udeb (LP: #1771823)
-    - [Config:] d-i: Add ax88179_178a and r8152 to nic-modules
-  * Nvidia fails after switching its mode (LP: #1778658)
-    - PCI: Restore config space on runtime resume despite being unbound
-  * Kernel error "task zfs:pid blocked for more than 120 seconds" (LP: #1781364)
-    - SAUCE: (noup) zfs to 0.7.5-1ubuntu16.3
-  * CVE-2018-12232
-    - PATCH 1/1] socket: close race condition between sock_close() and
-      sockfs_setattr()
-  * CVE-2018-10323
-    - xfs: set format back to extents if xfs_bmap_extents_to_btree
-  * change front mic location for more lenovo m7/8/9xx machines (LP: #1781316)
-    - ALSA: hda/realtek - Fix the problem of two front mics on more machines
-    - ALSA: hda/realtek - two more lenovo models need fixup of MIC_LOCATION
-  * Cephfs + fscache: unable to handle kernel NULL pointer dereference at
-    0000000000000000 IP: jbd2__journal_start+0x22/0x1f0 (LP: #1783246)
-    - ceph: track read contexts in ceph_file_info
-  * Touchpad of ThinkPad P52 failed to work with message "lost sync at byte"
-    (LP: #1779802)
-    - Input: elantech - fix V4 report decoding for module with middle key
-    - Input: elantech - enable middle button of touchpads on ThinkPad P52
-  * xhci_hcd 0000:00:14.0: Root hub is not suspended (LP: #1779823)
-    - usb: xhci: dbc: Fix lockdep warning
-    - usb: xhci: dbc: Don't decrement runtime PM counter if DBC is not started
-  * CVE-2018-13406
-    - video: uvesafb: Fix integer overflow in allocation
-  * CVE-2018-10840
-    - ext4: correctly handle a zero-length xattr with a non-zero e_value_offs
-  * CVE-2018-11412
-    - ext4: do not allow external inodes for inline data
-  * CVE-2018-10881
-    - ext4: clear i_data in ext4_inode_info when removing inline data
-  * CVE-2018-12233
-    - jfs: Fix inconsistency between memory allocation and ea_buf->max_size
-  * CVE-2018-12904
-    - kvm: nVMX: Enforce cpl=0 for VMX instructions
-  * Error parsing PCC subspaces from PCCT (LP: #1528684)
-    - mailbox: PCC: erroneous error message when parsing ACPI PCCT
-  * CVE-2018-13094
-    - xfs: don't call xfs_da_shrink_inode with NULL bp
-  * other users' coredumps can be read via setgid directory and killpriv bypass
-    (LP: #1779923) // CVE-2018-13405
-    - Fix up non-directory creation in SGID directories
-  * Invoking obsolete 'firmware_install' target breaks snap build (LP: #1782166)
-    - snapcraft.yaml: stop invoking the obsolete (and non-existing)
-      'firmware_install' target
-  * snapcraft.yaml: missing ubuntu-retpoline-extract-one script breaks the build
-    (LP: #1782116)
-    - snapcraft.yaml: copy retpoline-extract-one to scripts before build
-  * Allow Raven Ridge's audio controller to be runtime suspended (LP: #1782540)
-    - ALSA: hda: Add AZX_DCAPS_PM_RUNTIME for AMD Raven Ridge
-  * CVE-2018-11506
-    - sr: pass down correctly sized SCSI sense buffer
-  * Bionic update: upstream stable patchset 2018-07-24 (LP: #1783418)
-    - net: Fix a bug in removing queues from XPS map
-    - net/mlx4_core: Fix error handling in mlx4_init_port_info.
-    - net/sched: fix refcnt leak in the error path of tcf_vlan_init()
-    - net: sched: red: avoid hashing NULL child
-    - net/smc: check for missing nlattrs in SMC_PNETID messages
-    - net: test tailroom before appending to linear skb
-    - packet: in packet_snd start writing at link layer allocation
-    - sock_diag: fix use-after-free read in __sk_free
-    - tcp: purge write queue in tcp_connect_init()
-    - vmxnet3: set the DMA mask before the first DMA map operation
-    - vmxnet3: use DMA memory barriers where required
-    - hv_netvsc: empty current transmit aggregation if flow blocked
-    - hv_netvsc: Use the num_online_cpus() for channel limit
-    - hv_netvsc: avoid retry on send during shutdown
-    - hv_netvsc: only wake transmit queue if link is up
-    - hv_netvsc: fix error unwind handling if vmbus_open fails
-    - hv_netvsc: cancel subchannel setup before halting device
-    - hv_netvsc: fix race in napi poll when rescheduling
-    - hv_netvsc: defer queue selection to VF
-    - hv_netvsc: disable NAPI before channel close
-    - hv_netvsc: use RCU to fix concurrent rx and queue changes
-    - hv_netvsc: change GPAD teardown order on older versions
-    - hv_netvsc: common detach logic
-    - hv_netvsc: Use Windows version instead of NVSP version on GPAD teardown
-    - hv_netvsc: Split netvsc_revoke_buf() and netvsc_teardown_gpadl()
-    - hv_netvsc: Ensure correct teardown message sequence order
-    - hv_netvsc: Fix a network regression after ifdown/ifup
-    - sparc: vio: use put_device() instead of kfree()
-    - ext2: fix a block leak
-    - s390: add assembler macros for CPU alternatives
-    - s390: move expoline assembler macros to a header
-    - s390/crc32-vx: use expoline for indirect branches
-    - s390/lib: use expoline for indirect branches
-    - s390/ftrace: use expoline for indirect branches
-    - s390/kernel: use expoline for indirect branches
-    - s390: move spectre sysfs attribute code
-    - s390: extend expoline to BC instructions
-    - s390: use expoline thunks in the BPF JIT
-    - scsi: sg: allocate with __GFP_ZERO in sg_build_indirect()
-    - scsi: zfcp: fix infinite iteration on ERP ready list
-    - loop: don't call into filesystem while holding lo_ctl_mutex
-    - loop: fix LOOP_GET_STATUS lock imbalance
-    - cfg80211: limit wiphy names to 128 bytes
-    - hfsplus: stop workqueue when fill_super() failed
-    - x86/kexec: Avoid double free_page() upon do_kexec_load() failure
-    - usb: gadget: f_uac2: fix bFirstInterface in composite gadget
-    - usb: dwc3: Undo PHY init if soft reset fails
-    - usb: dwc3: omap: don't miss events during suspend/resume
-    - usb: gadget: core: Fix use-after-free of usb_request
-    - usb: gadget: fsl_udc_core: fix ep valid checks
-    - usb: dwc2: Fix dwc2_hsotg_core_init_disconnected()
-    - usb: cdc_acm: prevent race at write to acm while system resumes
-    - net: usbnet: fix potential deadlock on 32bit hosts
-    - ARM: dts: imx7d-sdb: Fix regulator-usb-otg2-vbus node name
-    - usb: host: xhci-plat: revert "usb: host: xhci-plat: enable clk in resume
-      timing"
-    - USB: OHCI: Fix NULL dereference in HCDs using HCD_LOCAL_MEM
-    - net/usb/qmi_wwan.c: Add USB id for lt4120 modem
-    - net-usb: add qmi_wwan if on lte modem wistron neweb d18q1
-    - Bluetooth: btusb: Add USB ID 7392:a611 for Edimax EW-7611ULB
-    - ALSA: usb-audio: Add native DSD support for Luxman DA-06
-    - usb: dwc3: Add SoftReset PHY synchonization delay
-    - usb: dwc3: Update DWC_usb31 GTXFIFOSIZ reg fields
-    - usb: dwc3: Makefile: fix link error on randconfig
-    - xhci: zero usb device slot_id member when disabling and freeing a xhci slot
-    - usb: dwc2: Fix interval type issue
-    - usb: dwc2: hcd: Fix host channel halt flow
-    - usb: dwc2: host: Fix transaction errors in host mode
-    - usb: gadget: ffs: Let setup() return USB_GADGET_DELAYED_STATUS
-    - usb: gadget: ffs: Execute copy_to_user() with USER_DS set
-    - usbip: Correct maximum value of CONFIG_USBIP_VHCI_HC_PORTS
-    - usb: gadget: udc: change comparison to bitshift when dealing with a mask
-    - usb: gadget: composite: fix incorrect handling of OS desc requests
-    - media: lgdt3306a: Fix module count mismatch on usb unplug
-    - media: em28xx: USB bulk packet size fix
-    - Bluetooth: btusb: Add device ID for RTL8822BE
-    - xhci: Show what USB release number the xHC supports from protocol capablity
-    - staging: bcm2835-audio: Release resources on module_exit()
-    - staging: lustre: fix bug in osc_enter_cache_try
-    - staging: fsl-dpaa2/eth: Fix incorrect casts
-    - staging: rtl8192u: return -ENOMEM on failed allocation of priv->oldaddr
-    - staging: ks7010: Use constants from ieee80211_eid instead of literal ints.
-    - staging: lustre: lmv: correctly iput lmo_root
-    - crypto: inside-secure - wait for the request to complete if in the backlog
-    - crypto: atmel-aes - fix the keys zeroing on errors
-    - crypto: ccp - don't disable interrupts while setting up debugfs
-    - crypto: inside-secure - do not process request if no command was issued
-    - crypto: inside-secure - fix the cache_len computation
-    - crypto: inside-secure - fix the extra cache computation
-    - crypto: sunxi-ss - Add MODULE_ALIAS to sun4i-ss
-    - crypto: inside-secure - fix the invalidation step during cra_exit
-    - scsi: mpt3sas: fix an out of bound write
-    - scsi: ufs: Enable quirk to ignore sending WRITE_SAME command
-    - scsi: bnx2fc: Fix check in SCSI completion handler for timed out request
-    - scsi: sym53c8xx_2: iterator underflow in sym_getsync()
-    - scsi: mptfusion: Add bounds check in mptctl_hp_targetinfo()
-    - scsi: qla2xxx: Avoid triggering undefined behavior in
-      qla2x00_mbx_completion()
-    - scsi: storvsc: Increase cmd_per_lun for higher speed devices
-    - scsi: qedi: Fix truncation of CHAP name and secret
-    - scsi: aacraid: fix shutdown crash when init fails
-    - scsi: qla4xxx: skip error recovery in case of register disconnect.
-    - scsi: qedi: Fix kernel crash during port toggle
-    - scsi: mpt3sas: Do not mark fw_event workqueue as WQ_MEM_RECLAIM
-    - scsi: sd: Keep disk read-only when re-reading partition
-    - scsi: iscsi_tcp: set BDI_CAP_STABLE_WRITES when data digest enabled
-    - scsi: aacraid: Insure command thread is not recursively stopped
-    - scsi: core: Make SCSI Status CONDITION MET equivalent to GOOD
-    - scsi: mvsas: fix wrong endianness of sgpio api
-    - ASoC: hdmi-codec: Fix module unloading caused kernel crash
-    - ASoC: rockchip: rk3288-hdmi-analog: Select needed codecs
-    - ASoC: samsung: odroid: Fix 32000 sample rate handling
-    - ASoC: topology: create TLV data for dapm widgets
-    - ASoC: samsung: i2s: Ensure the RCLK rate is properly determined
-    - clk: rockchip: Fix wrong parent for SDMMC phase clock for rk3228
-    - clk: Don't show the incorrect clock phase
-    - clk: hisilicon: mark wdt_mux_p[] as const
-    - clk: tegra: Fix pll_u rate configuration
-    - clk: rockchip: Prevent calculating mmc phase if clock rate is zero
-    - clk: samsung: s3c2410: Fix PLL rates
-    - clk: samsung: exynos7: Fix PLL rates
-    - clk: samsung: exynos5260: Fix PLL rates
-    - clk: samsung: exynos5433: Fix PLL rates
-    - clk: samsung: exynos5250: Fix PLL rates
-    - clk: samsung: exynos3250: Fix PLL rates
-    - media: dmxdev: fix error code for invalid ioctls
-    - media: Don't let tvp5150_get_vbi() go out of vbi_ram_default array
-    - media: ov5645: add missing of_node_put() in error path
-    - media: cx23885: Override 888 ImpactVCBe crystal frequency
-    - media: cx23885: Set subdev host data to clk_freq pointer
-    - media: s3c-camif: fix out-of-bounds array access
-    - media: lgdt3306a: Fix a double kfree on i2c device remove
-    - media: em28xx: Add Hauppauge SoloHD/DualHD bulk models
-    - media: v4l: vsp1: Fix display stalls when requesting too many inputs
-    - media: i2c: adv748x: fix HDMI field heights
-    - media: vb2: Fix videobuf2 to map correct area
-    - media: vivid: fix incorrect capabilities for radio
-    - media: cx25821: prevent out-of-bounds read on array card
-    - serial: xuartps: Fix out-of-bounds access through DT alias
-    - serial: sh-sci: Fix out-of-bounds access through DT alias
-    - serial: samsung: Fix out-of-bounds access through serial port index
-    - serial: mxs-auart: Fix out-of-bounds access through serial port index
-    - serial: imx: Fix out-of-bounds access through serial port index
-    - serial: fsl_lpuart: Fix out-of-bounds access through DT alias
-    - serial: arc_uart: Fix out-of-bounds access through DT alias
-    - serial: 8250: Don't service RX FIFO if interrupts are disabled
-    - serial: altera: ensure port->regshift is honored consistently
-    - rtc: snvs: Fix usage of snvs_rtc_enable
-    - rtc: hctosys: Ensure system time doesn't overflow time_t
-    - rtc: rk808: fix possible race condition
-    - rtc: m41t80: fix race conditions
-    - rtc: tx4939: avoid unintended sign extension on a 24 bit shift
-    - rtc: rp5c01: fix possible race condition
-    - rtc: goldfish: Add missing MODULE_LICENSE
-    - cxgb4: Correct ntuple mask validation for hash filters
-    - net: dsa: bcm_sf2: Fix RX_CLS_LOC_ANY overwrite for last rule
-    - net: dsa: Do not register devlink for unused ports
-    - net: dsa: bcm_sf2: Fix IPv6 rules and chain ID
-    - net: dsa: bcm_sf2: Fix IPv6 rule half deletion
-    - 3c59x: convert to generic DMA API
-    - net: ip6_gre: Request headroom in __gre6_xmit()
-    - net: ip6_gre: Split up ip6gre_tnl_link_config()
-    - net: ip6_gre: Split up ip6gre_tnl_change()
-    - net: ip6_gre: Split up ip6gre_newlink()
-    - net: ip6_gre: Split up ip6gre_changelink()
-    - qed: LL2 flush isles when connection is closed
-    - qed: Fix possibility of list corruption during rmmod flows
-    - qed: Fix LL2 race during connection terminate
-    - powerpc: Move default security feature flags
-    - Bluetooth: btusb: Add support for Intel Bluetooth device 22560 [8087:0026]
-    - staging: fsl-dpaa2/eth: Fix incorrect kfree
-    - crypto: inside-secure - move the digest to the request context
-    - scsi: lpfc: Fix NVME Initiator FirstBurst
-    - serial: mvebu-uart: fix tx lost characters
-  * Bionic update: upstream stable patchset 2018-07-20 (LP: #1782846)
-    - usbip: usbip_host: refine probe and disconnect debug msgs to be useful
-    - usbip: usbip_host: delete device from busid_table after rebind
-    - usbip: usbip_host: run rebind from exit when module is removed
-    - usbip: usbip_host: fix NULL-ptr deref and use-after-free errors
-    - usbip: usbip_host: fix bad unlock balance during stub_probe()
-    - ALSA: usb: mixer: volume quirk for CM102-A+/102S+
-    - ALSA: hda: Add Lenovo C50 All in one to the power_save blacklist
-    - ALSA: control: fix a redundant-copy issue
-    - spi: pxa2xx: Allow 64-bit DMA
-    - spi: bcm-qspi: Avoid setting MSPI_CDRAM_PCS for spi-nor master
-    - spi: bcm-qspi: Always read and set BSPI_MAST_N_BOOT_CTRL
-    - KVM: arm/arm64: VGIC/ITS save/restore: protect kvm_read_guest() calls
-    - KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock
-    - vfio: ccw: fix cleanup if cp_prefetch fails
-    - tracing/x86/xen: Remove zero data size trace events
-      trace_xen_mmu_flush_tlb{_all}
-    - tee: shm: fix use-after-free via temporarily dropped reference
-    - netfilter: nf_tables: free set name in error path
-    - netfilter: nf_tables: can't fail after linking rule into active rule list
-    - netfilter: nf_socket: Fix out of bounds access in nf_sk_lookup_slow_v{4,6}
-    - i2c: designware: fix poll-after-enable regression
-    - powerpc/powernv: Fix NVRAM sleep in invalid context when crashing
-    - drm: Match sysfs name in link removal to link creation
-    - lib/test_bitmap.c: fix bitmap optimisation tests to report errors correctly
-    - radix tree: fix multi-order iteration race
-    - mm: don't allow deferred pages with NEED_PER_CPU_KM
-    - drm/i915/gen9: Add WaClearHIZ_WM_CHICKEN3 for bxt and glk
-    - s390/qdio: fix access to uninitialized qdio_q fields
-    - s390/qdio: don't release memory in qdio_setup_irq()
-    - s390: remove indirect branch from do_softirq_own_stack
-    - x86/pkeys: Override pkey when moving away from PROT_EXEC
-    - x86/pkeys: Do not special case protection key 0
-    - efi: Avoid potential crashes, fix the 'struct efi_pci_io_protocol_32'
-      definition for mixed mode
-    - ARM: 8771/1: kprobes: Prohibit kprobes on do_undefinstr
-    - x86/mm: Drop TS_COMPAT on 64-bit exec() syscall
-    - tick/broadcast: Use for_each_cpu() specially on UP kernels
-    - ARM: 8769/1: kprobes: Fix to use get_kprobe_ctlblk after irq-disabed
-    - ARM: 8770/1: kprobes: Prohibit probing on optimized_callback
-    - ARM: 8772/1: kprobes: Prohibit kprobes on get_user functions
-    - Btrfs: fix xattr loss after power failure
-    - Btrfs: send, fix invalid access to commit roots due to concurrent
-      snapshotting
-    - btrfs: property: Set incompat flag if lzo/zstd compression is set
-    - btrfs: fix crash when trying to resume balance without the resume flag
-    - btrfs: Split btrfs_del_delalloc_inode into 2 functions
-    - btrfs: Fix delalloc inodes invalidation during transaction abort
-    - btrfs: fix reading stale metadata blocks after degraded raid1 mounts
-    - xhci: Fix USB3 NULL pointer dereference at logical disconnect.
-    - KVM: arm/arm64: Properly protect VGIC locks from IRQs
-    - KVM: arm/arm64: VGIC/ITS: Promote irq_lock() in update_affinity
-    - hwmon: (k10temp) Fix reading critical temperature register
-    - hwmon: (k10temp) Use API function to access System Management Network
-    - vsprintf: Replace memory barrier with static_key for random_ptr_key update
-    - x86/amd_nb: Add support for Raven Ridge CPUs
-    - x86/apic/x2apic: Initialize cluster ID properly
-  * Bionic update: upstream stable patchset 2018-07-09 (LP: #1780858)
-    - 8139too: Use disable_irq_nosync() in rtl8139_poll_controller()
-    - bridge: check iface upper dev when setting master via ioctl
-    - dccp: fix tasklet usage
-    - ipv4: fix fnhe usage by non-cached routes
-    - ipv4: fix memory leaks in udp_sendmsg, ping_v4_sendmsg
-    - llc: better deal with too small mtu
-    - net: ethernet: sun: niu set correct packet size in skb
-    - net: ethernet: ti: cpsw: fix packet leaking in dual_mac mode
-    - net/mlx4_en: Fix an error handling path in 'mlx4_en_init_netdev()'
-    - net/mlx4_en: Verify coalescing parameters are in range
-    - net/mlx5e: Err if asked to offload TC match on frag being first
-    - net/mlx5: E-Switch, Include VF RDMA stats in vport statistics
-    - net sched actions: fix refcnt leak in skbmod
-    - net_sched: fq: take care of throttled flows before reuse
-    - net: support compat 64-bit time in {s,g}etsockopt
-    - net/tls: Don't recursively call push_record during tls_write_space callbacks
-    - net/tls: Fix connection stall on partial tls record
-    - openvswitch: Don't swap table in nlattr_set() after OVS_ATTR_NESTED is found
-    - qmi_wwan: do not steal interfaces from class drivers
-    - r8169: fix powering up RTL8168h
-    - rds: do not leak kernel memory to user land
-    - sctp: delay the authentication for the duplicated cookie-echo chunk
-    - sctp: fix the issue that the cookie-ack with auth can't get processed
-    - sctp: handle two v4 addrs comparison in sctp_inet6_cmp_addr
-    - sctp: remove sctp_chunk_put from fail_mark err path in
-      sctp_ulpevent_make_rcvmsg
-    - sctp: use the old asoc when making the cookie-ack chunk in dupcook_d
-    - tcp_bbr: fix to zero idle_restart only upon S/ACKed data
-    - tcp: ignore Fast Open on repair mode
-    - tg3: Fix vunmap() BUG_ON() triggered from tg3_free_consistent().
-    - bonding: do not allow rlb updates to invalid mac
-    - bonding: send learning packets for vlans on slave
-    - net: sched: fix error path in tcf_proto_create() when modules are not
-      configured
-    - net/mlx5e: TX, Use correct counter in dma_map error flow
-    - net/mlx5: Avoid cleaning flow steering table twice during error flow
-    - hv_netvsc: set master device
-    - ipv6: fix uninit-value in ip6_multipath_l3_keys()
-    - net/mlx5e: Allow offloading ipv4 header re-write for icmp
-    - nsh: fix infinite loop
-    - udp: fix SO_BINDTODEVICE
-    - l2tp: revert "l2tp: fix missing print session offset info"
-    - proc: do not access cmdline nor environ from file-backed areas
-    - net/smc: restrict non-blocking connect finish
-    - mlxsw: spectrum_switchdev: Do not remove mrouter port from MDB's ports list
-    - net/mlx5e: DCBNL fix min inline header size for dscp
-    - net: systemport: Correclty disambiguate driver instances
-    - sctp: clear the new asoc's stream outcnt in sctp_stream_update
-    - tcp: restore autocorking
-    - tipc: fix one byte leak in tipc_sk_set_orig_addr()
-    - hv_netvsc: Fix net device attach on older Windows hosts
-  * Bionic update: upstream stable patchset 2018-07-06 (LP: #1780499)
-    - ext4: prevent right-shifting extents beyond EXT_MAX_BLOCKS
-    - ipvs: fix rtnl_lock lockups caused by start_sync_thread
-    - netfilter: ebtables: don't attempt to allocate 0-sized compat array
-    - kcm: Call strp_stop before strp_done in kcm_attach
-    - crypto: af_alg - fix possible uninit-value in alg_bind()
-    - netlink: fix uninit-value in netlink_sendmsg
-    - net: fix rtnh_ok()
-    - net: initialize skb->peeked when cloning
-    - net: fix uninit-value in __hw_addr_add_ex()
-    - dccp: initialize ireq->ir_mark
-    - ipv4: fix uninit-value in ip_route_output_key_hash_rcu()
-    - soreuseport: initialise timewait reuseport field
-    - inetpeer: fix uninit-value in inet_getpeer
-    - memcg: fix per_node_info cleanup
-    - perf: Remove superfluous allocation error check
-    - tcp: fix TCP_REPAIR_QUEUE bound checking
-    - bdi: wake up concurrent wb_shutdown() callers.
-    - bdi: Fix oops in wb_workfn()
-    - gpioib: do not free unrequested descriptors
-    - gpio: fix aspeed_gpio unmask irq
-    - gpio: fix error path in lineevent_create
-    - rfkill: gpio: fix memory leak in probe error path
-    - libata: Apply NOLPM quirk for SanDisk SD7UB3Q*G1001 SSDs
-    - dm integrity: use kvfree for kvmalloc'd memory
-    - tracing: Fix regex_match_front() to not over compare the test string
-    - z3fold: fix reclaim lock-ups
-    - mm: sections are not offlined during memory hotremove
-    - mm, oom: fix concurrent munlock and oom reaper unmap, v3
-    - ceph: fix rsize/wsize capping in ceph_direct_read_write()
-    - can: kvaser_usb: Increase correct stats counter in kvaser_usb_rx_can_msg()
-    - can: hi311x: Acquire SPI lock on ->do_get_berr_counter
-    - can: hi311x: Work around TX complete interrupt erratum
-    - drm/vc4: Fix scaling of uni-planar formats
-    - drm/i915: Fix drm:intel_enable_lvds ERROR message in kernel log
-    - drm/atomic: Clean old_state/new_state in drm_atomic_state_default_clear()
-    - drm/atomic: Clean private obj old_state/new_state in
-      drm_atomic_state_default_clear()
-    - net: atm: Fix potential Spectre v1
-    - atm: zatm: Fix potential Spectre v1
-    - cpufreq: schedutil: Avoid using invalid next_freq
-    - Revert "Bluetooth: btusb: Fix quirk for Atheros 1525/QCA6174"
-    - Bluetooth: btusb: Only check needs_reset_resume DMI table for QCA rome
-      chipsets
-    - thermal: exynos: Reading temperature makes sense only when TMU is turned on
-    - thermal: exynos: Propagate error value from tmu_read()
-    - nvme: add quirk to force medium priority for SQ creation
-    - smb3: directory sync should not return an error
-    - sched/autogroup: Fix possible Spectre-v1 indexing for sched_prio_to_weight[]
-    - tracing/uprobe_event: Fix strncpy corner case
-    - perf/x86: Fix possible Spectre-v1 indexing for hw_perf_event cache_*
-    - perf/x86/cstate: Fix possible Spectre-v1 indexing for pkg_msr
-    - perf/x86/msr: Fix possible Spectre-v1 indexing in the MSR driver
-    - perf/core: Fix possible Spectre-v1 indexing for ->aux_pages[]
-    - perf/x86: Fix possible Spectre-v1 indexing for x86_pmu::event_map()
-    - i2c: dev: prevent ZERO_SIZE_PTR deref in i2cdev_ioctl_rdwr()
-    - bdi: Fix use after free bug in debugfs_remove()
-    - drm/ttm: Use GFP_TRANSHUGE_LIGHT for allocating huge pages
-    - drm/i915: Adjust eDP's logical vco in a reliable place.
-    - drm/nouveau/ttm: don't dereference nvbo::cli, it can outlive client
-    - sched/core: Fix possible Spectre-v1 indexing for sched_prio_to_weight[]
-  * Bionic update: upstream stable patchset 2018-06-26 (LP: #1778759)
-    - percpu: include linux/sched.h for cond_resched()
-    - ACPI / button: make module loadable when booted in non-ACPI mode
-    - USB: serial: option: Add support for Quectel EP06
-    - ALSA: hda - Fix incorrect usage of IS_REACHABLE()
-    - ALSA: pcm: Check PCM state at xfern compat ioctl
-    - ALSA: seq: Fix races at MIDI encoding in snd_virmidi_output_trigger()
-    - ALSA: dice: fix kernel NULL pointer dereference due to invalid calculation
-      for array index
-    - ALSA: aloop: Mark paused device as inactive
-    - ALSA: aloop: Add missing cable lock to ctl API callbacks
-    - tracepoint: Do not warn on ENOMEM
-    - scsi: target: Fix fortify_panic kernel exception
-    - Input: leds - fix out of bound access
-    - Input: atmel_mxt_ts - add touchpad button mapping for Samsung Chromebook Pro
-    - rtlwifi: btcoex: Add power_on_setting routine
-    - rtlwifi: cleanup 8723be ant_sel definition
-    - xfs: prevent creating negative-sized file via INSERT_RANGE
-    - RDMA/cxgb4: release hw resources on device removal
-    - RDMA/ucma: Allow resolving address w/o specifying source address
-    - RDMA/mlx5: Fix multiple NULL-ptr deref errors in rereg_mr flow
-    - RDMA/mlx5: Protect from shift operand overflow
-    - NET: usb: qmi_wwan: add support for ublox R410M PID 0x90b2
-    - IB/mlx5: Use unlimited rate when static rate is not supported
-    - IB/hfi1: Fix handling of FECN marked multicast packet
-    - IB/hfi1: Fix loss of BECN with AHG
-    - IB/hfi1: Fix NULL pointer dereference when invalid num_vls is used
-    - iw_cxgb4: Atomically flush per QP HW CQEs
-    - drm/vmwgfx: Fix a buffer object leak
-    - drm/bridge: vga-dac: Fix edid memory leak
-    - test_firmware: fix setting old custom fw path back on exit, second try
-    - errseq: Always report a writeback error once
-    - USB: serial: visor: handle potential invalid device configuration
-    - usb: dwc3: gadget: Fix list_del corruption in dwc3_ep_dequeue
-    - USB: Accept bulk endpoints with 1024-byte maxpacket
-    - USB: serial: option: reimplement interface masking
-    - USB: serial: option: adding support for ublox R410M
-    - usb: musb: host: fix potential NULL pointer dereference
-    - usb: musb: trace: fix NULL pointer dereference in musb_g_tx()
-    - platform/x86: asus-wireless: Fix NULL pointer dereference
-    - irqchip/qcom: Fix check for spurious interrupts
-    - tracing: Fix bad use of igrab in trace_uprobe.c
-    - [Config] CONFIG_ARM64_ERRATUM_1024718=y
-    - arm64: Add work around for Arm Cortex-A55 Erratum 1024718
-    - Input: atmel_mxt_ts - add touchpad button mapping for Samsung Chromebook Pro
-    - infiniband: mlx5: fix build errors when INFINIBAND_USER_ACCESS=m
-    - btrfs: Take trans lock before access running trans in check_delayed_ref
-    - drm/vc4: Make sure vc4_bo_{inc,dec}_usecnt() calls are balanced
-    - xhci: Fix use-after-free in xhci_free_virt_device
-    - platform/x86: Kconfig: Fix dell-laptop dependency chain.
-    - KVM: x86: remove APIC Timer periodic/oneshot spikes
-    - clocksource: Allow clocksource_mark_unstable() on unregistered clocksources
-    - clocksource: Initialize cs->wd_list
-    - clocksource: Consistent de-rate when marking unstable
-  * Bionic update: upstream stable patchset 2018-06-22 (LP: #1778265)
-    - ext4: set h_journal if there is a failure starting a reserved handle
-    - ext4: add MODULE_SOFTDEP to ensure crc32c is included in the initramfs
-    - ext4: add validity checks for bitmap block numbers
-    - ext4: fix bitmap position validation
-    - random: fix possible sleeping allocation from irq context
-    - random: rate limit unseeded randomness warnings
-    - usbip: usbip_event: fix to not print kernel pointer address
-    - usbip: usbip_host: fix to hold parent lock for device_attach() calls
-    - usbip: vhci_hcd: Fix usb device and sockfd leaks
-    - usbip: vhci_hcd: check rhport before using in vhci_hub_control()
-    - Revert "xhci: plat: Register shutdown for xhci_plat"
-    - USB: serial: simple: add libtransistor console
-    - USB: serial: ftdi_sio: use jtag quirk for Arrow USB Blaster
-    - USB: serial: cp210x: add ID for NI USB serial console
-    - usb: core: Add quirk for HP v222w 16GB Mini
-    - USB: Increment wakeup count on remote wakeup.
-    - ALSA: usb-audio: Skip broken EU on Dell dock USB-audio
-    - virtio: add ability to iterate over vqs
-    - virtio_console: don't tie bufs to a vq
-    - virtio_console: free buffers after reset
-    - virtio_console: drop custom control queue cleanup
-    - virtio_console: move removal code
-    - virtio_console: reset on out of memory
-    - drm/virtio: fix vq wait_event condition
-    - tty: Don't call panic() at tty_ldisc_init()
-    - tty: n_gsm: Fix long delays with control frame timeouts in ADM mode
-    - tty: n_gsm: Fix DLCI handling for ADM mode if debug & 2 is not set
-    - tty: Avoid possible error pointer dereference at tty_ldisc_restore().
-    - tty: Use __GFP_NOFAIL for tty_ldisc_get()
-    - ALSA: dice: fix OUI for TC group
-    - ALSA: dice: fix error path to destroy initialized stream data
-    - ALSA: hda - Skip jack and others for non-existing PCM streams
-    - ALSA: opl3: Hardening for potential Spectre v1
-    - ALSA: asihpi: Hardening for potential Spectre v1
-    - ALSA: hdspm: Hardening for potential Spectre v1
-    - ALSA: rme9652: Hardening for potential Spectre v1
-    - ALSA: control: Hardening for potential Spectre v1
-    - ALSA: pcm: Return negative delays from SNDRV_PCM_IOCTL_DELAY.
-    - ALSA: core: Report audio_tstamp in snd_pcm_sync_ptr
-    - ALSA: seq: oss: Fix unbalanced use lock for synth MIDI device
-    - ALSA: seq: oss: Hardening for potential Spectre v1
-    - ALSA: hda: Hardening for potential Spectre v1
-    - ALSA: hda/realtek - Add some fixes for ALC233
-    - ALSA: hda/realtek - Update ALC255 depop optimize
-    - ALSA: hda/realtek - change the location for one of two front mics
-    - mtd: spi-nor: cadence-quadspi: Fix page fault kernel panic
-    - mtd: cfi: cmdset_0001: Do not allow read/write to suspend erase block.
-    - mtd: cfi: cmdset_0001: Workaround Micron Erase suspend bug.
-    - mtd: cfi: cmdset_0002: Do not allow read/write to suspend erase block.
-    - mtd: rawnand: tango: Fix struct clk memory leak
-    - kobject: don't use WARN for registration failures
-    - scsi: sd: Defer spinning up drive while SANITIZE is in progress
-    - bfq-iosched: ensure to clear bic/bfqq pointers when preparing request
-    - vfio: ccw: process ssch with interrupts disabled
-    - ANDROID: binder: prevent transactions into own process.
-    - PCI: aardvark: Fix logic in advk_pcie_{rd,wr}_conf()
-    - PCI: aardvark: Set PIO_ADDR_LS correctly in advk_pcie_rd_conf()
-    - PCI: aardvark: Use ISR1 instead of ISR0 interrupt in legacy irq mode
-    - PCI: aardvark: Fix PCIe Max Read Request Size setting
-    - ARM: amba: Make driver_override output consistent with other buses
-    - ARM: amba: Fix race condition with driver_override
-    - ARM: amba: Don't read past the end of sysfs "driver_override" buffer
-    - ARM: socfpga_defconfig: Remove QSPI Sector 4K size force
-    - KVM: arm/arm64: Close VMID generation race
-    - crypto: drbg - set freed buffers to NULL
-    - ASoC: fsl_esai: Fix divisor calculation failure at lower ratio
-    - libceph: un-backoff on tick when we have a authenticated session
-    - libceph: reschedule a tick in finish_hunting()
-    - libceph: validate con->state at the top of try_write()
-    - fpga-manager: altera-ps-spi: preserve nCONFIG state
-    - earlycon: Use a pointer table to fix __earlycon_table stride
-    - drm/amdgpu: set COMPUTE_PGM_RSRC1 for SGPR/VGPR clearing shaders
-    - drm/i915: Enable display WA#1183 from its correct spot
-    - objtool, perf: Fix GCC 8 -Wrestrict error
-    - tools/lib/subcmd/pager.c: do not alias select() params
-    - x86/ipc: Fix x32 version of shmid64_ds and msqid64_ds
-    - x86/smpboot: Don't use mwait_play_dead() on AMD systems
-    - x86/microcode/intel: Save microcode patch unconditionally
-    - x86/microcode: Do not exit early from __reload_late()
-    - tick/sched: Do not mess with an enqueued hrtimer
-    - arm/arm64: KVM: Add PSCI version selection API
-    - powerpc/eeh: Fix race with driver un/bind
-    - serial: mvebu-uart: Fix local flags handling on termios update
-    - block: do not use interruptible wait anywhere
-    - ASoC: dmic: Fix clock parenting
-    - PCI / PM: Do not clear state_saved in pci_pm_freeze() when smart suspend is
-      set
-    - module: Fix display of wrong module .text address
-    - drm/edid: Reset more of the display info
-    - drm/i915/fbdev: Enable late fbdev initial configuration
-    - drm/i915/audio: set minimum CD clock to twice the BCLK
-    - drm/amd/display: Fix deadlock when flushing irq
-    - drm/amd/display: Disallow enabling CRTC without primary plane with FB
-  * Bionic update: upstream stable patchset 2018-06-22 (LP: #1778265) //
-    CVE-2018-1108.
-    - random: set up the NUMA crng instances after the CRNG is fully initialized
-  * Ryzen/Raven Ridge USB ports do not work (LP: #1756700)
-    - xhci: Fix USB ports for Dell Inspiron 5775
-  * [Ubuntu 1804][boston][ixgbe] EEH causes kernel BUG at /build/linux-
-    jWa1Fv/linux-4.15.0/drivers/pci/msi.c:352 (i2S) (LP: #1776389)
-    - ixgbe/ixgbevf: Free IRQ when PCI error recovery removes the device
-  * Need fix to aacraid driver to prevent panic (LP: #1770095)
-    - scsi: aacraid: Correct hba_send to include iu_type
-  * kernel: Fix arch random implementation (LP: #1775391)
-    - s390/archrandom: Rework arch random implementation.
-  * kernel: Fix memory leak on CCA and EP11 CPRB processing. (LP: #1775390)
-    - s390/zcrypt: Fix CCA and EP11 CPRB processing failure memory leak.
-  * Various fixes for CXL kernel module (LP: #1774471)
-    - cxl: Remove function write_timebase_ctrl_psl9() for PSL9
-    - cxl: Set the PBCQ Tunnel BAR register when enabling capi mode
-    - cxl: Report the tunneled operations status
-    - cxl: Configure PSL to not use APC virtual machines
-    - cxl: Disable prefault_mode in Radix mode
-  * Bluetooth not working (LP: #1764645)
-    - Bluetooth: btusb: Apply QCA Rome patches for some ATH3012 models
-  * linux-snapdragon: wcn36xx: mac address generation on boot (LP: #1776491)
-    - [Config] arm64: snapdragon: WCN36XX_SNAPDRAGON_HACKS=y
-    - SAUCE: wcn36xx: read MAC from file or randomly generate one
-  * fscache: Fix hanging wait on page discarded by writeback (LP: #1777029)
-    - fscache: Fix hanging wait on page discarded by writeback
-
- -- Stefan Bader  Thu, 02 Aug 2018 17:10:18 +0200
+ -- Stefan Bader  Fri, 10 Aug 2018 11:22:51 +0200

 linux-azure (4.15.0-1019.19) bionic; urgency=medium

diff -u linux-azure-4.15.0/debian.master/changelog linux-azure-4.15.0/debian.master/changelog
--- linux-azure-4.15.0/debian.master/changelog
+++ linux-azure-4.15.0/debian.master/changelog
@@ -1,664 +1,112 @@
-linux (4.15.0-31.33) bionic; urgency=medium
+linux (4.15.0-32.34) bionic; urgency=medium
-  * linux: 4.15.0-31.33 -proposed tracker (LP: #1784281)
+  * CVE-2018-5391
+    - Revert "net: increase fragment memory usage limits"
-  * ubuntu_bpf_jit test failed on Bionic s390x systems (LP: #1753941)
-    - test_bpf: flag tests that cannot be jited on s390
+  * CVE-2018-3620 // CVE-2018-3646
+    - x86/Centaur: Initialize supported CPU features properly
+    - x86/Centaur: Report correct CPU/cache topology
+    - x86/CPU/AMD: Have smp_num_siblings and cpu_llc_id always be present
+    - perf/events/amd/uncore: Fix amd_uncore_llc ID to use pre-defined cpu_llc_id
+    - x86/CPU: Rename intel_cacheinfo.c to cacheinfo.c
+    - x86/CPU/AMD: Calculate last level cache ID from number of sharing threads
+    - x86/CPU: Modify detect_extended_topology() to return result
+    - x86/CPU/AMD: Derive CPU topology from CPUID function 0xB when available
+    - x86/CPU: Move cpu local function declarations to local header
+    - x86/CPU: Make intel_num_cpu_cores() generic
+    - x86/CPU: Move cpu_detect_cache_sizes() into init_intel_cacheinfo()
+    - x86/CPU: Move x86_cpuinfo::x86_max_cores assignment to
+      detect_num_cpu_cores()
+    - x86/CPU/AMD: Fix LLC ID bit-shift calculation
+    - x86/mm: Factor out pageattr _PAGE_GLOBAL setting
+    - x86/mm: Undo double _PAGE_PSE clearing
+    - x86/mm: Introduce "default" kernel PTE mask
+    - x86/espfix: Document use of _PAGE_GLOBAL
+    - x86/mm: Do not auto-massage page protections
+    - x86/mm: Remove extra filtering in pageattr code
+    - x86/mm: Comment _PAGE_GLOBAL mystery
+    - x86/mm: Do not forbid _PAGE_RW before init for __ro_after_init
+    - x86/ldt: Fix support_pte_mask filtering in map_ldt_struct()
+    - x86/power/64: Fix page-table setup for temporary text mapping
+    - x86/pti: Filter at vma->vm_page_prot population
+    - x86/boot/64/clang: Use fixup_pointer() to access '__supported_pte_mask'
+    - x86/speculation/l1tf: Increase 32bit PAE __PHYSICAL_PAGE_SHIFT
+    - x86/speculation/l1tf: Change order of offset/type in swap entry
+    - x86/speculation/l1tf: Protect swap entries against L1TF
+    - x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation
+    - x86/speculation/l1tf: Make sure the first page is always reserved
+    - x86/speculation/l1tf: Add sysfs reporting for l1tf
+    - x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
+    - x86/speculation/l1tf: Limit swap file size to MAX_PA/2
+    - x86/bugs: Move the l1tf function and define pr_fmt properly
+    - sched/smt: Update sched_smt_present at runtime
+    - x86/smp: Provide topology_is_primary_thread()
+    - x86/topology: Provide topology_smt_supported()
+    - cpu/hotplug: Make bringup/teardown of smp threads symmetric
+    - cpu/hotplug: Split do_cpu_down()
+    - cpu/hotplug: Provide knobs to control SMT
+    - x86/cpu: Remove the pointless CPU printout
+    - x86/cpu/AMD: Remove the pointless detect_ht() call
+    - x86/cpu/common: Provide detect_ht_early()
+    - x86/cpu/topology: Provide detect_extended_topology_early()
+    - x86/cpu/intel: Evaluate smp_num_siblings early
+    - x86/CPU/AMD: Do not check CPUID max ext level before parsing SMP info
+    - x86/cpu/AMD: Evaluate smp_num_siblings early
+    - x86/apic: Ignore secondary threads if nosmt=force
+    - x86/speculation/l1tf: Extend 64bit swap file size limit
+    - x86/cpufeatures: Add detection of L1D cache flush support.
+    - x86/CPU/AMD: Move TOPOEXT reenablement before reading smp_num_siblings
+    - x86/speculation/l1tf: Protect PAE swap entries against L1TF
+    - x86/speculation/l1tf: Fix up pte->pfn conversion for PAE
+    - Revert "x86/apic: Ignore secondary threads if nosmt=force"
+    - cpu/hotplug: Boot HT siblings at least once
+    - x86/KVM: Warn user if KVM is loaded SMT and L1TF CPU bug being present
+    - x86/KVM/VMX: Add module argument for L1TF mitigation
+    - x86/KVM/VMX: Add L1D flush algorithm
+    - x86/KVM/VMX: Add L1D MSR based flush
+    - x86/KVM/VMX: Add L1D flush logic
+    - x86/KVM/VMX: Split the VMX MSR LOAD structures to have an host/guest numbers
+    - x86/KVM/VMX: Add find_msr() helper function
+    - x86/KVM/VMX: Separate the VMX AUTOLOAD guest/host number accounting
+    - x86/KVM/VMX: Extend add_atomic_switch_msr() to allow VMENTER only MSRs
+    - x86/KVM/VMX: Use MSR save list for IA32_FLUSH_CMD if required
+    - cpu/hotplug: Online siblings when SMT control is turned on
+    - x86/litf: Introduce vmx status variable
+    - x86/kvm: Drop L1TF MSR list approach
+    - x86/l1tf: Handle EPT disabled state proper
+    - x86/kvm: Move l1tf setup function
+    - x86/kvm: Add static key for flush always
+    - x86/kvm: Serialize L1D flush parameter setter
+    - x86/kvm: Allow runtime control of L1D flush
+    - cpu/hotplug: Expose SMT control init function
+    - cpu/hotplug: Set CPU_SMT_NOT_SUPPORTED early
+    - x86/bugs, kvm: Introduce boot-time control of L1TF mitigations
+    - Documentation: Add section about CPU vulnerabilities
+    - x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures
+    - x86/KVM/VMX: Initialize the vmx_l1d_flush_pages' content
+    - Documentation/l1tf: Fix typos
+    - cpu/hotplug: detect SMT disabled by BIOS
+    - x86/KVM/VMX: Don't set l1tf_flush_l1d to true from vmx_l1d_flush()
+    - x86/KVM/VMX: Replace 'vmx_l1d_flush_always' with 'vmx_l1d_flush_cond'
+    - x86/KVM/VMX: Move the l1tf_flush_l1d test to vmx_l1d_flush()
+    - x86/irq: Demote irq_cpustat_t::__softirq_pending to u16
+    - x86/KVM/VMX: Introduce per-host-cpu analogue of l1tf_flush_l1d
+    - x86: Don't include linux/irq.h from asm/hardirq.h
+    - x86/irq: Let interrupt handlers set kvm_cpu_l1tf_flush_l1d
+    - x86/KVM/VMX: Don't set l1tf_flush_l1d from vmx_handle_external_intr()
+    - Documentation/l1tf: Remove Yonah processors from not vulnerable list
+    - x86/speculation: Simplify sysfs report of VMX L1TF vulnerability
+    - x86/speculation: Use ARCH_CAPABILITIES to skip L1D flush on vmentry
+    - KVM: x86: Add a framework for supporting MSR-based features
+    - KVM: X86: Introduce kvm_get_msr_feature()
+    - KVM: VMX: support MSR_IA32_ARCH_CAPABILITIES as a feature MSR
+    - KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry
+    - cpu/hotplug: Fix SMT supported evaluation
+    - x86/speculation/l1tf: Invert all not present mappings
+    - x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
+    - x86/mm/pat: Make set_memory_np() L1TF safe
-  * HDMI/DP audio can't work on the laptop of Dell Latitude 5495 (LP: #1782689)
-    - drm/nouveau: fix nouveau_dsm_get_client_id()'s return type
-    - drm/radeon: fix radeon_atpx_get_client_id()'s return type
-    - drm/amdgpu: fix amdgpu_atpx_get_client_id()'s return type
-    - platform/x86: apple-gmux: fix gmux_get_client_id()'s return type
-    - ALSA: hda: use PCI_BASE_CLASS_DISPLAY to replace PCI_CLASS_DISPLAY_VGA
-    - vga_switcheroo: set audio client id according to bound GPU id
-  * locking sockets broken due to missing AppArmor socket mediation patches
-    (LP: #1780227)
-    - UBUNTU SAUCE: apparmor: fix apparmor mediating locking non-fs, unix sockets
-  * Update2 for ocxl driver (LP: #1781436)
-    - ocxl: Fix page fault handler in case of fault on dying process
-  * RTNL assertion failure on ipvlan (LP: #1776927)
-    - ipvlan: drop ipv6 dependency
-    - ipvlan: use per device spinlock to protect addrs list updates
-  * netns: unable to follow an interface that moves to another netns
-    (LP: #1774225)
-    - net: core: Expose number of link up/down transitions
-    - dev: always advertise the new nsid when the netns iface changes
-    - dev: advertise the new ifindex when the netns iface changes
-  * [Bionic] Disk IO hangs when using BFQ as io scheduler (LP: #1780066)
-    - block, bfq: fix occurrences of request finish method's old name
-    - block, bfq: remove batches of confusing ifdefs
-    - block, bfq: add requeue-request hook
-  * HP ProBook 455 G5 needs mute-led-gpio fixup (LP: #1781763)
-    - ALSA: hda: add mute led support for HP ProBook 455 G5
-  * [Bionic] bug fixes to improve stability of the ThunderX2 i2c driver
-    (LP: #1781476)
-    - i2c: xlp9xx: Fix issue seen when updating receive length
-    - i2c: xlp9xx: Make sure the transfer size is not more than
-      I2C_SMBUS_BLOCK_SIZE
-  * x86/kvm: fix LAPIC timer drift when guest uses periodic mode (LP: #1778486)
-    - x86/kvm: fix LAPIC timer drift when guest uses periodic mode
-  * Please include ax88179_178a and r8152 modules in d-i udeb (LP: #1771823)
-    - [Config:] d-i: Add ax88179_178a and r8152 to nic-modules
-  * Nvidia fails after switching its mode (LP: #1778658)
-    - PCI: Restore config space on runtime resume despite being unbound
-  * Kernel error "task zfs:pid blocked for more than 120 seconds" (LP: #1781364)
-    - SAUCE: (noup) zfs to 0.7.5-1ubuntu16.3
-  * CVE-2018-12232
-    - PATCH 1/1] socket: close race condition between sock_close() and
-      sockfs_setattr()
-  * CVE-2018-10323
-    - xfs: set format back to extents if xfs_bmap_extents_to_btree
-  * change front mic location for more lenovo m7/8/9xx machines (LP: #1781316)
-    - ALSA: hda/realtek - Fix the problem of two front mics on more machines
-    - ALSA: hda/realtek - two more lenovo models need fixup of MIC_LOCATION
-  * Cephfs + fscache: unable to handle kernel NULL pointer dereference at
-    0000000000000000 IP: jbd2__journal_start+0x22/0x1f0 (LP: #1783246)
-    - ceph: track read contexts in ceph_file_info
-  * Touchpad of ThinkPad P52 failed to work with message "lost sync at byte"
-    (LP: #1779802)
-    - Input: elantech - fix V4 report decoding for module with middle key
-    - Input: elantech - enable middle button of touchpads on ThinkPad P52
-  * xhci_hcd 0000:00:14.0: Root hub is not suspended (LP: #1779823)
-    - usb: xhci: dbc: Fix lockdep warning
-    - usb: xhci: dbc: Don't decrement runtime PM counter if DBC is not started
-  * CVE-2018-13406
-    - video: uvesafb: Fix integer overflow in allocation
-  * CVE-2018-10840
-    - ext4: correctly handle a zero-length xattr with a non-zero e_value_offs
-  * CVE-2018-11412
-    - ext4: do not allow external inodes for inline data
-  * CVE-2018-10881
-    - ext4: clear i_data in ext4_inode_info when removing inline data
-  * CVE-2018-12233
-    - jfs: Fix inconsistency between memory allocation and ea_buf->max_size
-  * CVE-2018-12904
-    - kvm: nVMX: Enforce cpl=0 for VMX instructions
-  * Error parsing PCC subspaces from PCCT (LP: #1528684)
-    - mailbox: PCC: erroneous error message when parsing ACPI PCCT
-  * CVE-2018-13094
-    - xfs: don't call xfs_da_shrink_inode with NULL bp
-  * other users' coredumps can be read via setgid directory and killpriv bypass
-    (LP: #1779923) // CVE-2018-13405
-    - Fix up non-directory creation in SGID directories
-  * Invoking obsolete 'firmware_install' target breaks snap build (LP: #1782166)
-    - snapcraft.yaml: stop invoking the obsolete (and non-existing)
-      'firmware_install' target
-  * snapcraft.yaml: missing ubuntu-retpoline-extract-one script breaks the build
-    (LP: #1782116)
-    - snapcraft.yaml: copy retpoline-extract-one to scripts before build
-  * Allow Raven Ridge's audio controller to be runtime suspended (LP: #1782540)
-    - ALSA: hda: Add AZX_DCAPS_PM_RUNTIME for AMD Raven Ridge
-  * CVE-2018-11506
-    - sr: pass down correctly sized SCSI sense buffer
-  * Bionic update: upstream stable patchset 2018-07-24 (LP: #1783418)
-    - net: Fix a bug in removing queues from XPS map
-    - net/mlx4_core: Fix error handling in mlx4_init_port_info.
-    - net/sched: fix refcnt leak in the error path of tcf_vlan_init()
-    - net: sched: red: avoid hashing NULL child
-    - net/smc: check for missing nlattrs in SMC_PNETID messages
-    - net: test tailroom before appending to linear skb
-    - packet: in packet_snd start writing at link layer allocation
-    - sock_diag: fix use-after-free read in __sk_free
-    - tcp: purge write queue in tcp_connect_init()
-    - vmxnet3: set the DMA mask before the first DMA map operation
-    - vmxnet3: use DMA memory barriers where required
-    - hv_netvsc: empty current transmit aggregation if flow blocked
-    - hv_netvsc: Use the num_online_cpus() for channel limit
-    - hv_netvsc: avoid retry on send during shutdown
-    - hv_netvsc: only wake transmit queue if link is up
-    - hv_netvsc: fix error unwind handling if vmbus_open fails
-    - hv_netvsc: cancel subchannel setup before halting device
-    - hv_netvsc: fix race in napi poll when rescheduling
-    - hv_netvsc: defer queue selection to VF
-    - hv_netvsc: disable NAPI before channel close
-    - hv_netvsc: use RCU to fix concurrent rx and queue changes
-    - hv_netvsc: change GPAD teardown order on older versions
-    - hv_netvsc: common detach logic
-    - hv_netvsc: Use Windows version instead of NVSP version on GPAD teardown
-    - hv_netvsc: Split netvsc_revoke_buf() and netvsc_teardown_gpadl()
-    - hv_netvsc: Ensure correct teardown message sequence order
-    - hv_netvsc: Fix a network regression after ifdown/ifup
-    - sparc: vio: use put_device() instead of kfree()
-    - ext2: fix a block leak
-    - s390: add assembler macros for CPU alternatives
-    - s390: move expoline assembler macros to a header
-    - s390/crc32-vx: use expoline for indirect branches
-    - s390/lib: use expoline for indirect branches
-    - s390/ftrace: use expoline for indirect branches
-    - s390/kernel: use expoline for indirect branches
-    - s390: move spectre sysfs attribute code
-    - s390: extend expoline to BC instructions
-    - s390: use expoline thunks in the BPF JIT
-    - scsi: sg: allocate with __GFP_ZERO in sg_build_indirect()
-    - scsi: zfcp: fix infinite iteration on ERP ready list
-    - loop: don't call into filesystem while holding lo_ctl_mutex
-    - loop: fix LOOP_GET_STATUS lock imbalance
-    - cfg80211: limit wiphy names to 128 bytes
-    - hfsplus: stop workqueue when fill_super() failed
-    - x86/kexec: Avoid double free_page() upon do_kexec_load() failure
-    - usb: gadget: f_uac2: fix bFirstInterface in composite gadget
-    - usb: dwc3: Undo PHY init if soft reset fails
-    - usb: dwc3: omap: don't miss events during suspend/resume
-    - usb: gadget: core: Fix use-after-free of usb_request
-    - usb: gadget: fsl_udc_core: fix ep valid checks
-    - usb: dwc2: Fix dwc2_hsotg_core_init_disconnected()
-    - usb: cdc_acm: prevent race at write to acm while system resumes
-    - net: usbnet: fix potential deadlock on 32bit hosts
-    - ARM: dts: imx7d-sdb: Fix regulator-usb-otg2-vbus node name
-    - usb: host: xhci-plat: revert "usb: host: xhci-plat: enable clk in resume
-      timing"
-    - USB: OHCI: Fix NULL dereference in HCDs using HCD_LOCAL_MEM
-    - net/usb/qmi_wwan.c: Add USB id for lt4120 modem
-    - net-usb: add qmi_wwan if on lte modem wistron neweb d18q1
-    - Bluetooth: btusb: Add USB ID 7392:a611 for Edimax EW-7611ULB
-    - ALSA: usb-audio: Add native DSD support for Luxman DA-06
-    - usb: dwc3: Add SoftReset PHY synchonization delay
-    - usb: dwc3: Update DWC_usb31 GTXFIFOSIZ reg fields
-    - usb: dwc3: Makefile: fix link error on randconfig
-    - xhci: zero usb device slot_id member when disabling and freeing a xhci slot
-    - usb: dwc2: Fix interval type issue
-    - usb: dwc2: hcd: Fix host channel halt flow
-    - usb: dwc2: host: Fix transaction errors in host mode
-    - usb: gadget: ffs: Let setup() return USB_GADGET_DELAYED_STATUS
-    - usb: gadget: ffs: Execute copy_to_user() with USER_DS set
-    - usbip: Correct maximum value of CONFIG_USBIP_VHCI_HC_PORTS
-    - usb: gadget: udc: change comparison to bitshift when dealing with a mask
-    - usb: gadget: composite: fix incorrect handling of OS desc requests
-    - media: lgdt3306a: Fix module count mismatch on usb unplug
-    - media: em28xx: USB bulk packet size fix
-    - Bluetooth: btusb: Add device ID for RTL8822BE
-    - xhci: Show what USB release number the xHC supports from protocol capablity
-    - staging: bcm2835-audio: Release resources on module_exit()
-    - staging: lustre: fix bug in osc_enter_cache_try
-    - staging: fsl-dpaa2/eth: Fix incorrect casts
-    - staging: rtl8192u: return -ENOMEM on failed allocation of priv->oldaddr
-    - staging: ks7010: Use constants from ieee80211_eid instead of literal ints.
-    - staging: lustre: lmv: correctly iput lmo_root
-    - crypto: inside-secure - wait for the request to complete if in the backlog
-    - crypto: atmel-aes - fix the keys zeroing on errors
-    - crypto: ccp - don't disable interrupts while setting up debugfs
-    - crypto: inside-secure - do not process request if no command was issued
-    - crypto: inside-secure - fix the cache_len computation
-    - crypto: inside-secure - fix the extra cache computation
-    - crypto: sunxi-ss - Add MODULE_ALIAS to sun4i-ss
-    - crypto: inside-secure - fix the invalidation step during cra_exit
-    - scsi: mpt3sas: fix an out of bound write
-    - scsi: ufs: Enable quirk to ignore sending WRITE_SAME command
-    - scsi: bnx2fc: Fix check in SCSI completion handler for timed out request
-    - scsi: sym53c8xx_2: iterator underflow in sym_getsync()
-    - scsi: mptfusion: Add bounds check in mptctl_hp_targetinfo()
-    - scsi: qla2xxx: Avoid triggering undefined behavior in
-      qla2x00_mbx_completion()
-    - scsi: storvsc: Increase cmd_per_lun for higher speed devices
-    - scsi: qedi: Fix truncation of CHAP name and secret
-    - scsi: aacraid: fix shutdown crash when init fails
-    - scsi: qla4xxx: skip error recovery in case of register disconnect.
-    - scsi: qedi: Fix kernel crash during port toggle
-    - scsi: mpt3sas: Do not mark fw_event workqueue as WQ_MEM_RECLAIM
-    - scsi: sd: Keep disk read-only when re-reading partition
-    - scsi: iscsi_tcp: set BDI_CAP_STABLE_WRITES when data digest enabled
-    - scsi: aacraid: Insure command thread is not recursively stopped
-    - scsi: core: Make SCSI Status CONDITION MET equivalent to GOOD
-    - scsi: mvsas: fix wrong endianness of sgpio api
-    - ASoC: hdmi-codec: Fix module unloading caused kernel crash
-    - ASoC: rockchip: rk3288-hdmi-analog: Select needed codecs
-    - ASoC: samsung: odroid: Fix 32000 sample rate handling
-    - ASoC: topology: create TLV data for dapm widgets
-    - ASoC: samsung: i2s: Ensure the RCLK rate is properly determined
-    - clk: rockchip: Fix wrong parent for SDMMC phase clock for rk3228
-    - clk: Don't show the incorrect clock phase
-    - clk: hisilicon: mark wdt_mux_p[] as const
-    - clk: tegra: Fix pll_u rate configuration
-    - clk: rockchip: Prevent calculating mmc phase if clock rate is zero
-    - clk: samsung: s3c2410: Fix PLL rates
-    - clk: samsung: exynos7: Fix PLL rates
-    - clk: samsung: exynos5260: Fix PLL rates
-    - clk: samsung: exynos5433: Fix PLL rates
-    - clk: samsung: exynos5250: Fix PLL rates
-    - clk: samsung: exynos3250: Fix PLL rates
-    - media: dmxdev: fix error code for invalid ioctls
-    - media: Don't let tvp5150_get_vbi() go out of vbi_ram_default array
-    - media: ov5645: add missing of_node_put() in error path
-    - media: cx23885: Override 888 ImpactVCBe crystal frequency
-    - media: cx23885: Set subdev host data to clk_freq pointer
-    - media: s3c-camif: fix out-of-bounds array access
-    - media: lgdt3306a: Fix a double kfree on i2c device remove
-    - media: em28xx: Add Hauppauge SoloHD/DualHD bulk models
-    - media: v4l: vsp1: Fix display stalls when requesting
too many inputs - - media: i2c: adv748x: fix HDMI field heights - - media: vb2: Fix videobuf2 to map correct area - - media: vivid: fix incorrect capabilities for radio - - media: cx25821: prevent out-of-bounds read on array card - - serial: xuartps: Fix out-of-bounds access through DT alias - - serial: sh-sci: Fix out-of-bounds access through DT alias - - serial: samsung: Fix out-of-bounds access through serial port index - - serial: mxs-auart: Fix out-of-bounds access through serial port index - - serial: imx: Fix out-of-bounds access through serial port index - - serial: fsl_lpuart: Fix out-of-bounds access through DT alias - - serial: arc_uart: Fix out-of-bounds access through DT alias - - serial: 8250: Don't service RX FIFO if interrupts are disabled - - serial: altera: ensure port->regshift is honored consistently - - rtc: snvs: Fix usage of snvs_rtc_enable - - rtc: hctosys: Ensure system time doesn't overflow time_t - - rtc: rk808: fix possible race condition - - rtc: m41t80: fix race conditions - - rtc: tx4939: avoid unintended sign extension on a 24 bit shift - - rtc: rp5c01: fix possible race condition - - rtc: goldfish: Add missing MODULE_LICENSE - - cxgb4: Correct ntuple mask validation for hash filters - - net: dsa: bcm_sf2: Fix RX_CLS_LOC_ANY overwrite for last rule - - net: dsa: Do not register devlink for unused ports - - net: dsa: bcm_sf2: Fix IPv6 rules and chain ID - - net: dsa: bcm_sf2: Fix IPv6 rule half deletion - - 3c59x: convert to generic DMA API - - net: ip6_gre: Request headroom in __gre6_xmit() - - net: ip6_gre: Split up ip6gre_tnl_link_config() - - net: ip6_gre: Split up ip6gre_tnl_change() - - net: ip6_gre: Split up ip6gre_newlink() - - net: ip6_gre: Split up ip6gre_changelink() - - qed: LL2 flush isles when connection is closed - - qed: Fix possibility of list corruption during rmmod flows - - qed: Fix LL2 race during connection terminate - - powerpc: Move default security feature flags - - Bluetooth: btusb: Add support for Intel 
Bluetooth device 22560 [8087:0026] - - staging: fsl-dpaa2/eth: Fix incorrect kfree - - crypto: inside-secure - move the digest to the request context - - scsi: lpfc: Fix NVME Initiator FirstBurst - - serial: mvebu-uart: fix tx lost characters - - * Bionic update: upstream stable patchset 2018-07-20 (LP: #1782846) - - usbip: usbip_host: refine probe and disconnect debug msgs to be useful - - usbip: usbip_host: delete device from busid_table after rebind - - usbip: usbip_host: run rebind from exit when module is removed - - usbip: usbip_host: fix NULL-ptr deref and use-after-free errors - - usbip: usbip_host: fix bad unlock balance during stub_probe() - - ALSA: usb: mixer: volume quirk for CM102-A+/102S+ - - ALSA: hda: Add Lenovo C50 All in one to the power_save blacklist - - ALSA: control: fix a redundant-copy issue - - spi: pxa2xx: Allow 64-bit DMA - - spi: bcm-qspi: Avoid setting MSPI_CDRAM_PCS for spi-nor master - - spi: bcm-qspi: Always read and set BSPI_MAST_N_BOOT_CTRL - - KVM: arm/arm64: VGIC/ITS save/restore: protect kvm_read_guest() calls - - KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock - - vfio: ccw: fix cleanup if cp_prefetch fails - - tracing/x86/xen: Remove zero data size trace events - trace_xen_mmu_flush_tlb{_all} - - tee: shm: fix use-after-free via temporarily dropped reference - - netfilter: nf_tables: free set name in error path - - netfilter: nf_tables: can't fail after linking rule into active rule list - - netfilter: nf_socket: Fix out of bounds access in nf_sk_lookup_slow_v{4,6} - - i2c: designware: fix poll-after-enable regression - - powerpc/powernv: Fix NVRAM sleep in invalid context when crashing - - drm: Match sysfs name in link removal to link creation - - lib/test_bitmap.c: fix bitmap optimisation tests to report errors correctly - - radix tree: fix multi-order iteration race - - mm: don't allow deferred pages with NEED_PER_CPU_KM - - drm/i915/gen9: Add WaClearHIZ_WM_CHICKEN3 for bxt and glk - - s390/qdio: fix 
access to uninitialized qdio_q fields - - s390/qdio: don't release memory in qdio_setup_irq() - - s390: remove indirect branch from do_softirq_own_stack - - x86/pkeys: Override pkey when moving away from PROT_EXEC - - x86/pkeys: Do not special case protection key 0 - - efi: Avoid potential crashes, fix the 'struct efi_pci_io_protocol_32' - definition for mixed mode - - ARM: 8771/1: kprobes: Prohibit kprobes on do_undefinstr - - x86/mm: Drop TS_COMPAT on 64-bit exec() syscall - - tick/broadcast: Use for_each_cpu() specially on UP kernels - - ARM: 8769/1: kprobes: Fix to use get_kprobe_ctlblk after irq-disabed - - ARM: 8770/1: kprobes: Prohibit probing on optimized_callback - - ARM: 8772/1: kprobes: Prohibit kprobes on get_user functions - - Btrfs: fix xattr loss after power failure - - Btrfs: send, fix invalid access to commit roots due to concurrent - snapshotting - - btrfs: property: Set incompat flag if lzo/zstd compression is set - - btrfs: fix crash when trying to resume balance without the resume flag - - btrfs: Split btrfs_del_delalloc_inode into 2 functions - - btrfs: Fix delalloc inodes invalidation during transaction abort - - btrfs: fix reading stale metadata blocks after degraded raid1 mounts - - xhci: Fix USB3 NULL pointer dereference at logical disconnect. 
-    - KVM: arm/arm64: Properly protect VGIC locks from IRQs
-    - KVM: arm/arm64: VGIC/ITS: Promote irq_lock() in update_affinity
-    - hwmon: (k10temp) Fix reading critical temperature register
-    - hwmon: (k10temp) Use API function to access System Management Network
-    - vsprintf: Replace memory barrier with static_key for random_ptr_key update
-    - x86/amd_nb: Add support for Raven Ridge CPUs
-    - x86/apic/x2apic: Initialize cluster ID properly
-  * Bionic update: upstream stable patchset 2018-07-09 (LP: #1780858)
-    - 8139too: Use disable_irq_nosync() in rtl8139_poll_controller()
-    - bridge: check iface upper dev when setting master via ioctl
-    - dccp: fix tasklet usage
-    - ipv4: fix fnhe usage by non-cached routes
-    - ipv4: fix memory leaks in udp_sendmsg, ping_v4_sendmsg
-    - llc: better deal with too small mtu
-    - net: ethernet: sun: niu set correct packet size in skb
-    - net: ethernet: ti: cpsw: fix packet leaking in dual_mac mode
-    - net/mlx4_en: Fix an error handling path in 'mlx4_en_init_netdev()'
-    - net/mlx4_en: Verify coalescing parameters are in range
-    - net/mlx5e: Err if asked to offload TC match on frag being first
-    - net/mlx5: E-Switch, Include VF RDMA stats in vport statistics
-    - net sched actions: fix refcnt leak in skbmod
-    - net_sched: fq: take care of throttled flows before reuse
-    - net: support compat 64-bit time in {s,g}etsockopt
-    - net/tls: Don't recursively call push_record during tls_write_space callbacks
-    - net/tls: Fix connection stall on partial tls record
-    - openvswitch: Don't swap table in nlattr_set() after OVS_ATTR_NESTED is found
-    - qmi_wwan: do not steal interfaces from class drivers
-    - r8169: fix powering up RTL8168h
-    - rds: do not leak kernel memory to user land
-    - sctp: delay the authentication for the duplicated cookie-echo chunk
-    - sctp: fix the issue that the cookie-ack with auth can't get processed
-    - sctp: handle two v4 addrs comparison in sctp_inet6_cmp_addr
-    - sctp: remove sctp_chunk_put from fail_mark err path in
-      sctp_ulpevent_make_rcvmsg
-    - sctp: use the old asoc when making the cookie-ack chunk in dupcook_d
-    - tcp_bbr: fix to zero idle_restart only upon S/ACKed data
-    - tcp: ignore Fast Open on repair mode
-    - tg3: Fix vunmap() BUG_ON() triggered from tg3_free_consistent().
-    - bonding: do not allow rlb updates to invalid mac
-    - bonding: send learning packets for vlans on slave
-    - net: sched: fix error path in tcf_proto_create() when modules are not
-      configured
-    - net/mlx5e: TX, Use correct counter in dma_map error flow
-    - net/mlx5: Avoid cleaning flow steering table twice during error flow
-    - hv_netvsc: set master device
-    - ipv6: fix uninit-value in ip6_multipath_l3_keys()
-    - net/mlx5e: Allow offloading ipv4 header re-write for icmp
-    - nsh: fix infinite loop
-    - udp: fix SO_BINDTODEVICE
-    - l2tp: revert "l2tp: fix missing print session offset info"
-    - proc: do not access cmdline nor environ from file-backed areas
-    - net/smc: restrict non-blocking connect finish
-    - mlxsw: spectrum_switchdev: Do not remove mrouter port from MDB's ports list
-    - net/mlx5e: DCBNL fix min inline header size for dscp
-    - net: systemport: Correclty disambiguate driver instances
-    - sctp: clear the new asoc's stream outcnt in sctp_stream_update
-    - tcp: restore autocorking
-    - tipc: fix one byte leak in tipc_sk_set_orig_addr()
-    - hv_netvsc: Fix net device attach on older Windows hosts
-  * Bionic update: upstream stable patchset 2018-07-06 (LP: #1780499)
-    - ext4: prevent right-shifting extents beyond EXT_MAX_BLOCKS
-    - ipvs: fix rtnl_lock lockups caused by start_sync_thread
-    - netfilter: ebtables: don't attempt to allocate 0-sized compat array
-    - kcm: Call strp_stop before strp_done in kcm_attach
-    - crypto: af_alg - fix possible uninit-value in alg_bind()
-    - netlink: fix uninit-value in netlink_sendmsg
-    - net: fix rtnh_ok()
-    - net: initialize skb->peeked when cloning
-    - net: fix uninit-value in __hw_addr_add_ex()
-    - dccp: initialize ireq->ir_mark
-    - ipv4: fix uninit-value in ip_route_output_key_hash_rcu()
-    - soreuseport: initialise timewait reuseport field
-    - inetpeer: fix uninit-value in inet_getpeer
-    - memcg: fix per_node_info cleanup
-    - perf: Remove superfluous allocation error check
-    - tcp: fix TCP_REPAIR_QUEUE bound checking
-    - bdi: wake up concurrent wb_shutdown() callers.
-    - bdi: Fix oops in wb_workfn()
-    - gpioib: do not free unrequested descriptors
-    - gpio: fix aspeed_gpio unmask irq
-    - gpio: fix error path in lineevent_create
-    - rfkill: gpio: fix memory leak in probe error path
-    - libata: Apply NOLPM quirk for SanDisk SD7UB3Q*G1001 SSDs
-    - dm integrity: use kvfree for kvmalloc'd memory
-    - tracing: Fix regex_match_front() to not over compare the test string
-    - z3fold: fix reclaim lock-ups
-    - mm: sections are not offlined during memory hotremove
-    - mm, oom: fix concurrent munlock and oom reaper unmap, v3
-    - ceph: fix rsize/wsize capping in ceph_direct_read_write()
-    - can: kvaser_usb: Increase correct stats counter in kvaser_usb_rx_can_msg()
-    - can: hi311x: Acquire SPI lock on ->do_get_berr_counter
-    - can: hi311x: Work around TX complete interrupt erratum
-    - drm/vc4: Fix scaling of uni-planar formats
-    - drm/i915: Fix drm:intel_enable_lvds ERROR message in kernel log
-    - drm/atomic: Clean old_state/new_state in drm_atomic_state_default_clear()
-    - drm/atomic: Clean private obj old_state/new_state in
-      drm_atomic_state_default_clear()
-    - net: atm: Fix potential Spectre v1
-    - atm: zatm: Fix potential Spectre v1
-    - cpufreq: schedutil: Avoid using invalid next_freq
-    - Revert "Bluetooth: btusb: Fix quirk for Atheros 1525/QCA6174"
-    - Bluetooth: btusb: Only check needs_reset_resume DMI table for QCA rome
-      chipsets
-    - thermal: exynos: Reading temperature makes sense only when TMU is turned on
-    - thermal: exynos: Propagate error value from tmu_read()
-    - nvme: add quirk to force medium priority for SQ creation
-    - smb3: directory sync should not return an error
-    - sched/autogroup: Fix possible Spectre-v1 indexing for sched_prio_to_weight[]
-    - tracing/uprobe_event: Fix strncpy corner case
-    - perf/x86: Fix possible Spectre-v1 indexing for hw_perf_event cache_*
-    - perf/x86/cstate: Fix possible Spectre-v1 indexing for pkg_msr
-    - perf/x86/msr: Fix possible Spectre-v1 indexing in the MSR driver
-    - perf/core: Fix possible Spectre-v1 indexing for ->aux_pages[]
-    - perf/x86: Fix possible Spectre-v1 indexing for x86_pmu::event_map()
-    - i2c: dev: prevent ZERO_SIZE_PTR deref in i2cdev_ioctl_rdwr()
-    - bdi: Fix use after free bug in debugfs_remove()
-    - drm/ttm: Use GFP_TRANSHUGE_LIGHT for allocating huge pages
-    - drm/i915: Adjust eDP's logical vco in a reliable place.
-    - drm/nouveau/ttm: don't dereference nvbo::cli, it can outlive client
-    - sched/core: Fix possible Spectre-v1 indexing for sched_prio_to_weight[]
-  * Bionic update: upstream stable patchset 2018-06-26 (LP: #1778759)
-    - percpu: include linux/sched.h for cond_resched()
-    - ACPI / button: make module loadable when booted in non-ACPI mode
-    - USB: serial: option: Add support for Quectel EP06
-    - ALSA: hda - Fix incorrect usage of IS_REACHABLE()
-    - ALSA: pcm: Check PCM state at xfern compat ioctl
-    - ALSA: seq: Fix races at MIDI encoding in snd_virmidi_output_trigger()
-    - ALSA: dice: fix kernel NULL pointer dereference due to invalid calculation
-      for array index
-    - ALSA: aloop: Mark paused device as inactive
-    - ALSA: aloop: Add missing cable lock to ctl API callbacks
-    - tracepoint: Do not warn on ENOMEM
-    - scsi: target: Fix fortify_panic kernel exception
-    - Input: leds - fix out of bound access
-    - Input: atmel_mxt_ts - add touchpad button mapping for Samsung Chromebook Pro
-    - rtlwifi: btcoex: Add power_on_setting routine
-    - rtlwifi: cleanup 8723be ant_sel definition
-    - xfs: prevent creating negative-sized file via INSERT_RANGE
-    - RDMA/cxgb4: release hw resources on device removal
-    - RDMA/ucma: Allow resolving address w/o specifying source address
-    - RDMA/mlx5: Fix multiple NULL-ptr deref errors in rereg_mr flow
-    - RDMA/mlx5: Protect from shift operand overflow
-    - NET: usb: qmi_wwan: add support for ublox R410M PID 0x90b2
-    - IB/mlx5: Use unlimited rate when static rate is not supported
-    - IB/hfi1: Fix handling of FECN marked multicast packet
-    - IB/hfi1: Fix loss of BECN with AHG
-    - IB/hfi1: Fix NULL pointer dereference when invalid num_vls is used
-    - iw_cxgb4: Atomically flush per QP HW CQEs
-    - drm/vmwgfx: Fix a buffer object leak
-    - drm/bridge: vga-dac: Fix edid memory leak
-    - test_firmware: fix setting old custom fw path back on exit, second try
-    - errseq: Always report a writeback error once
-    - USB: serial: visor: handle potential invalid device configuration
-    - usb: dwc3: gadget: Fix list_del corruption in dwc3_ep_dequeue
-    - USB: Accept bulk endpoints with 1024-byte maxpacket
-    - USB: serial: option: reimplement interface masking
-    - USB: serial: option: adding support for ublox R410M
-    - usb: musb: host: fix potential NULL pointer dereference
-    - usb: musb: trace: fix NULL pointer dereference in musb_g_tx()
-    - platform/x86: asus-wireless: Fix NULL pointer dereference
-    - irqchip/qcom: Fix check for spurious interrupts
-    - tracing: Fix bad use of igrab in trace_uprobe.c
-    - [Config] CONFIG_ARM64_ERRATUM_1024718=y
-    - arm64: Add work around for Arm Cortex-A55 Erratum 1024718
-    - Input: atmel_mxt_ts - add touchpad button mapping for Samsung Chromebook Pro
-    - infiniband: mlx5: fix build errors when INFINIBAND_USER_ACCESS=m
-    - btrfs: Take trans lock before access running trans in check_delayed_ref
-    - drm/vc4: Make sure vc4_bo_{inc,dec}_usecnt() calls are balanced
-    - xhci: Fix use-after-free in xhci_free_virt_device
-    - platform/x86: Kconfig: Fix dell-laptop dependency chain.
-    - KVM: x86: remove APIC Timer periodic/oneshot spikes
-    - clocksource: Allow clocksource_mark_unstable() on unregistered clocksources
-    - clocksource: Initialize cs->wd_list
-    - clocksource: Consistent de-rate when marking unstable
-  * Bionic update: upstream stable patchset 2018-06-22 (LP: #1778265)
-    - ext4: set h_journal if there is a failure starting a reserved handle
-    - ext4: add MODULE_SOFTDEP to ensure crc32c is included in the initramfs
-    - ext4: add validity checks for bitmap block numbers
-    - ext4: fix bitmap position validation
-    - random: fix possible sleeping allocation from irq context
-    - random: rate limit unseeded randomness warnings
-    - usbip: usbip_event: fix to not print kernel pointer address
-    - usbip: usbip_host: fix to hold parent lock for device_attach() calls
-    - usbip: vhci_hcd: Fix usb device and sockfd leaks
-    - usbip: vhci_hcd: check rhport before using in vhci_hub_control()
-    - Revert "xhci: plat: Register shutdown for xhci_plat"
-    - USB: serial: simple: add libtransistor console
-    - USB: serial: ftdi_sio: use jtag quirk for Arrow USB Blaster
-    - USB: serial: cp210x: add ID for NI USB serial console
-    - usb: core: Add quirk for HP v222w 16GB Mini
-    - USB: Increment wakeup count on remote wakeup.
-    - ALSA: usb-audio: Skip broken EU on Dell dock USB-audio
-    - virtio: add ability to iterate over vqs
-    - virtio_console: don't tie bufs to a vq
-    - virtio_console: free buffers after reset
-    - virtio_console: drop custom control queue cleanup
-    - virtio_console: move removal code
-    - virtio_console: reset on out of memory
-    - drm/virtio: fix vq wait_event condition
-    - tty: Don't call panic() at tty_ldisc_init()
-    - tty: n_gsm: Fix long delays with control frame timeouts in ADM mode
-    - tty: n_gsm: Fix DLCI handling for ADM mode if debug & 2 is not set
-    - tty: Avoid possible error pointer dereference at tty_ldisc_restore().
-    - tty: Use __GFP_NOFAIL for tty_ldisc_get()
-    - ALSA: dice: fix OUI for TC group
-    - ALSA: dice: fix error path to destroy initialized stream data
-    - ALSA: hda - Skip jack and others for non-existing PCM streams
-    - ALSA: opl3: Hardening for potential Spectre v1
-    - ALSA: asihpi: Hardening for potential Spectre v1
-    - ALSA: hdspm: Hardening for potential Spectre v1
-    - ALSA: rme9652: Hardening for potential Spectre v1
-    - ALSA: control: Hardening for potential Spectre v1
-    - ALSA: pcm: Return negative delays from SNDRV_PCM_IOCTL_DELAY.
-    - ALSA: core: Report audio_tstamp in snd_pcm_sync_ptr
-    - ALSA: seq: oss: Fix unbalanced use lock for synth MIDI device
-    - ALSA: seq: oss: Hardening for potential Spectre v1
-    - ALSA: hda: Hardening for potential Spectre v1
-    - ALSA: hda/realtek - Add some fixes for ALC233
-    - ALSA: hda/realtek - Update ALC255 depop optimize
-    - ALSA: hda/realtek - change the location for one of two front mics
-    - mtd: spi-nor: cadence-quadspi: Fix page fault kernel panic
-    - mtd: cfi: cmdset_0001: Do not allow read/write to suspend erase block.
-    - mtd: cfi: cmdset_0001: Workaround Micron Erase suspend bug.
-    - mtd: cfi: cmdset_0002: Do not allow read/write to suspend erase block.
-    - mtd: rawnand: tango: Fix struct clk memory leak
-    - kobject: don't use WARN for registration failures
-    - scsi: sd: Defer spinning up drive while SANITIZE is in progress
-    - bfq-iosched: ensure to clear bic/bfqq pointers when preparing request
-    - vfio: ccw: process ssch with interrupts disabled
-    - ANDROID: binder: prevent transactions into own process.
-    - PCI: aardvark: Fix logic in advk_pcie_{rd,wr}_conf()
-    - PCI: aardvark: Set PIO_ADDR_LS correctly in advk_pcie_rd_conf()
-    - PCI: aardvark: Use ISR1 instead of ISR0 interrupt in legacy irq mode
-    - PCI: aardvark: Fix PCIe Max Read Request Size setting
-    - ARM: amba: Make driver_override output consistent with other buses
-    - ARM: amba: Fix race condition with driver_override
-    - ARM: amba: Don't read past the end of sysfs "driver_override" buffer
-    - ARM: socfpga_defconfig: Remove QSPI Sector 4K size force
-    - KVM: arm/arm64: Close VMID generation race
-    - crypto: drbg - set freed buffers to NULL
-    - ASoC: fsl_esai: Fix divisor calculation failure at lower ratio
-    - libceph: un-backoff on tick when we have a authenticated session
-    - libceph: reschedule a tick in finish_hunting()
-    - libceph: validate con->state at the top of try_write()
-    - fpga-manager: altera-ps-spi: preserve nCONFIG state
-    - earlycon: Use a pointer table to fix __earlycon_table stride
-    - drm/amdgpu: set COMPUTE_PGM_RSRC1 for SGPR/VGPR clearing shaders
-    - drm/i915: Enable display WA#1183 from its correct spot
-    - objtool, perf: Fix GCC 8 -Wrestrict error
-    - tools/lib/subcmd/pager.c: do not alias select() params
-    - x86/ipc: Fix x32 version of shmid64_ds and msqid64_ds
-    - x86/smpboot: Don't use mwait_play_dead() on AMD systems
-    - x86/microcode/intel: Save microcode patch unconditionally
-    - x86/microcode: Do not exit early from __reload_late()
-    - tick/sched: Do not mess with an enqueued hrtimer
-    - arm/arm64: KVM: Add PSCI version selection API
-    - powerpc/eeh: Fix race with driver un/bind
-    - serial: mvebu-uart: Fix local flags handling on termios update
-    - block: do not use interruptible wait anywhere
-    - ASoC: dmic: Fix clock parenting
-    - PCI / PM: Do not clear state_saved in pci_pm_freeze() when smart suspend is
-      set
-    - module: Fix display of wrong module .text address
-    - drm/edid: Reset more of the display info
-    - drm/i915/fbdev: Enable late fbdev initial configuration
-    - drm/i915/audio: set minimum CD clock to twice the BCLK
-    - drm/amd/display: Fix deadlock when flushing irq
-    - drm/amd/display: Disallow enabling CRTC without primary plane with FB
-  * Bionic update: upstream stable patchset 2018-06-22 (LP: #1778265) //
-    CVE-2018-1108.
-    - random: set up the NUMA crng instances after the CRNG is fully initialized
-  * Ryzen/Raven Ridge USB ports do not work (LP: #1756700)
-    - xhci: Fix USB ports for Dell Inspiron 5775
-  * [Ubuntu 1804][boston][ixgbe] EEH causes kernel BUG at /build/linux-
-    jWa1Fv/linux-4.15.0/drivers/pci/msi.c:352 (i2S) (LP: #1776389)
-    - ixgbe/ixgbevf: Free IRQ when PCI error recovery removes the device
-  * Need fix to aacraid driver to prevent panic (LP: #1770095)
-    - scsi: aacraid: Correct hba_send to include iu_type
-  * kernel: Fix arch random implementation (LP: #1775391)
-    - s390/archrandom: Rework arch random implementation.
-  * kernel: Fix memory leak on CCA and EP11 CPRB processing. (LP: #1775390)
-    - s390/zcrypt: Fix CCA and EP11 CPRB processing failure memory leak.
-  * Various fixes for CXL kernel module (LP: #1774471)
-    - cxl: Remove function write_timebase_ctrl_psl9() for PSL9
-    - cxl: Set the PBCQ Tunnel BAR register when enabling capi mode
-    - cxl: Report the tunneled operations status
-    - cxl: Configure PSL to not use APC virtual machines
-    - cxl: Disable prefault_mode in Radix mode
-  * Bluetooth not working (LP: #1764645)
-    - Bluetooth: btusb: Apply QCA Rome patches for some ATH3012 models
-  * linux-snapdragon: wcn36xx: mac address generation on boot (LP: #1776491)
-    - [Config] arm64: snapdragon: WCN36XX_SNAPDRAGON_HACKS=y
-    - SAUCE: wcn36xx: read MAC from file or randomly generate one
-  * fscache: Fix hanging wait on page discarded by writeback (LP: #1777029)
-    - fscache: Fix hanging wait on page discarded by writeback
-
- -- Stefan Bader  Thu, 02 Aug 2018 12:31:38 +0200
+ -- Stefan Bader  Wed, 08 Aug 2018 14:04:58 +0200
 
 linux (4.15.0-30.32) bionic; urgency=medium
diff -u linux-azure-4.15.0/debian.master/config/amd64/config.common.amd64 linux-azure-4.15.0/debian.master/config/amd64/config.common.amd64
--- linux-azure-4.15.0/debian.master/config/amd64/config.common.amd64
+++ linux-azure-4.15.0/debian.master/config/amd64/config.common.amd64
@@ -524,7 +524,6 @@
 CONFIG_VXFS_FS=m
 CONFIG_W1=m
 CONFIG_WAN=y
-# CONFIG_WCN36XX_SNAPDRAGON_HACKS is not set
 CONFIG_WDTPCI=m
 CONFIG_WIMAX=m
 CONFIG_X25=m
diff -u linux-azure-4.15.0/debian.master/config/arm64/config.flavour.generic linux-azure-4.15.0/debian.master/config/arm64/config.flavour.generic
--- linux-azure-4.15.0/debian.master/config/arm64/config.flavour.generic
+++ linux-azure-4.15.0/debian.master/config/arm64/config.flavour.generic
@@ -71 +70,0 @@
-# CONFIG_WCN36XX_SNAPDRAGON_HACKS is not set
diff -u linux-azure-4.15.0/debian.master/config/arm64/config.flavour.snapdragon linux-azure-4.15.0/debian.master/config/arm64/config.flavour.snapdragon
--- linux-azure-4.15.0/debian.master/config/arm64/config.flavour.snapdragon
+++ linux-azure-4.15.0/debian.master/config/arm64/config.flavour.snapdragon
@@ -71 +70,0 @@
-CONFIG_WCN36XX_SNAPDRAGON_HACKS=y
diff -u linux-azure-4.15.0/debian.master/config/armhf/config.common.armhf linux-azure-4.15.0/debian.master/config/armhf/config.common.armhf
--- linux-azure-4.15.0/debian.master/config/armhf/config.common.armhf
+++ linux-azure-4.15.0/debian.master/config/armhf/config.common.armhf
@@ -547,7 +547,6 @@
 CONFIG_VXFS_FS=m
 CONFIG_W1=m
 CONFIG_WAN=y
-# CONFIG_WCN36XX_SNAPDRAGON_HACKS is not set
 CONFIG_WDTPCI=m
 CONFIG_WIMAX=m
 CONFIG_X25=m
diff -u linux-azure-4.15.0/debian.master/config/config.common.ubuntu linux-azure-4.15.0/debian.master/config/config.common.ubuntu
--- linux-azure-4.15.0/debian.master/config/config.common.ubuntu
+++ linux-azure-4.15.0/debian.master/config/config.common.ubuntu
@@ -533,7 +533,6 @@
 CONFIG_ARM64_ACPI_PARKING_PROTOCOL=y
 CONFIG_ARM64_CONT_SHIFT=4
 CONFIG_ARM64_CRYPTO=y
-CONFIG_ARM64_ERRATUM_1024718=y
 CONFIG_ARM64_ERRATUM_819472=y
 CONFIG_ARM64_ERRATUM_824069=y
 CONFIG_ARM64_ERRATUM_826319=y
diff -u linux-azure-4.15.0/debian.master/config/i386/config.common.i386 linux-azure-4.15.0/debian.master/config/i386/config.common.i386
--- linux-azure-4.15.0/debian.master/config/i386/config.common.i386
+++ linux-azure-4.15.0/debian.master/config/i386/config.common.i386
@@ -516,7 +516,6 @@
 CONFIG_VXFS_FS=m
 CONFIG_W1=m
 CONFIG_WAN=y
-# CONFIG_WCN36XX_SNAPDRAGON_HACKS is not set
 CONFIG_WDTPCI=m
 CONFIG_WIMAX=m
 CONFIG_X25=m
diff -u linux-azure-4.15.0/debian.master/config/ppc64el/config.common.ppc64el linux-azure-4.15.0/debian.master/config/ppc64el/config.common.ppc64el
--- linux-azure-4.15.0/debian.master/config/ppc64el/config.common.ppc64el
+++ linux-azure-4.15.0/debian.master/config/ppc64el/config.common.ppc64el
@@ -521,7 +521,6 @@
 CONFIG_VXFS_FS=m
 CONFIG_W1=m
 CONFIG_WAN=y
-# CONFIG_WCN36XX_SNAPDRAGON_HACKS is not set
 CONFIG_WDTPCI=m
 CONFIG_WIMAX=m
 CONFIG_X25=m
diff -u linux-azure-4.15.0/debian.master/d-i/modules/nic-usb-modules linux-azure-4.15.0/debian.master/d-i/modules/nic-usb-modules
--- linux-azure-4.15.0/debian.master/d-i/modules/nic-usb-modules
+++ linux-azure-4.15.0/debian.master/d-i/modules/nic-usb-modules
@@ -1,4 +1,3 @@
-ax88179_178a ?
 catc ?
 kaweth ?
 pegasus ?
@@ -26,7 +25,6 @@
 net1080 ?
 plusb ?
 rndis_host ?
-r8152 ?
 smsc95xx ?
 zaurus ?
 carl9170 ?
diff -u linux-azure-4.15.0/debian/changelog linux-azure-4.15.0/debian/changelog
--- linux-azure-4.15.0/debian/changelog
+++ linux-azure-4.15.0/debian/changelog
@@ -1,622 +1,113 @@
-linux-azure (4.15.0-1020.20~16.04.1) xenial; urgency=medium
+linux-azure (4.15.0-1021.21~16.04.1) xenial; urgency=medium
 
-  * linux-azure: 4.15.0-1020.20~16.04.1 -proposed tracker (LP: #1784292)
+  [ Ubuntu: 4.15.0-32.34 ]
 
-  * linux-azure: 4.15.0-1020.20 -proposed tracker (LP: #1784288)
+  * CVE-2018-5391
+    - Revert "net: increase fragment memory usage limits"
+  * CVE-2018-3620 // CVE-2018-3646
+    - x86/Centaur: Initialize supported CPU features properly
+    - x86/Centaur: Report correct CPU/cache topology
+    - x86/CPU/AMD: Have smp_num_siblings and cpu_llc_id always be present
+    - perf/events/amd/uncore: Fix amd_uncore_llc ID to use pre-defined cpu_llc_id
+    - x86/CPU: Rename intel_cacheinfo.c to cacheinfo.c
+    - x86/CPU/AMD: Calculate last level cache ID from number of sharing threads
+    - x86/CPU: Modify detect_extended_topology() to return result
+    - x86/CPU/AMD: Derive CPU topology from CPUID function 0xB when available
+    - x86/CPU: Move cpu local function declarations to local header
+    - x86/CPU: Make intel_num_cpu_cores() generic
+    - x86/CPU: Move cpu_detect_cache_sizes() into init_intel_cacheinfo()
+    - x86/CPU: Move x86_cpuinfo::x86_max_cores assignment to
+      detect_num_cpu_cores()
+    - x86/CPU/AMD: Fix LLC ID bit-shift calculation
+    - x86/mm: Factor out pageattr _PAGE_GLOBAL setting
+    - x86/mm: Undo double _PAGE_PSE clearing
+    - x86/mm: Introduce "default" kernel PTE mask
+    - x86/espfix: Document use of _PAGE_GLOBAL
+    - x86/mm: Do not auto-massage page protections
+    - x86/mm: Remove extra filtering in pageattr code
+    - x86/mm: Comment _PAGE_GLOBAL mystery
+    - x86/mm: Do not forbid _PAGE_RW before init for __ro_after_init
+    - x86/ldt: Fix support_pte_mask filtering in map_ldt_struct()
+    - x86/power/64: Fix page-table setup for temporary text mapping
+    - x86/pti: Filter at vma->vm_page_prot population
+    - x86/boot/64/clang: Use fixup_pointer() to access '__supported_pte_mask'
+    - x86/speculation/l1tf: Increase 32bit PAE __PHYSICAL_PAGE_SHIFT
+    - x86/speculation/l1tf: Change order of offset/type in swap entry
+    - x86/speculation/l1tf: Protect swap entries against L1TF
+    - x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation
+    - x86/speculation/l1tf: Make sure the first page is always reserved
+    - x86/speculation/l1tf: Add sysfs reporting for l1tf
+    - x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
+    - x86/speculation/l1tf: Limit swap file size to MAX_PA/2
+    - x86/bugs: Move the l1tf function and define pr_fmt properly
+    - sched/smt: Update sched_smt_present at runtime
+    - x86/smp: Provide topology_is_primary_thread()
+    - x86/topology: Provide topology_smt_supported()
+    - cpu/hotplug: Make bringup/teardown of smp threads symmetric
+    - cpu/hotplug: Split do_cpu_down()
+    - cpu/hotplug: Provide knobs to control SMT
+    - x86/cpu: Remove the pointless CPU printout
+    - x86/cpu/AMD: Remove the pointless detect_ht() call
+    - x86/cpu/common: Provide detect_ht_early()
+    - x86/cpu/topology: Provide detect_extended_topology_early()
+    - x86/cpu/intel: Evaluate smp_num_siblings early
+    - x86/CPU/AMD: Do not check CPUID max ext level before parsing SMP info
+    - x86/cpu/AMD: Evaluate smp_num_siblings early
+    - x86/apic: Ignore secondary threads if nosmt=force
+    - x86/speculation/l1tf: Extend 64bit swap file size limit
+    - x86/cpufeatures: Add detection of L1D cache flush support.
+ - x86/CPU/AMD: Move TOPOEXT reenablement before reading smp_num_siblings + - x86/speculation/l1tf: Protect PAE swap entries against L1TF + - x86/speculation/l1tf: Fix up pte->pfn conversion for PAE + - Revert "x86/apic: Ignore secondary threads if nosmt=force" + - cpu/hotplug: Boot HT siblings at least once + - x86/KVM: Warn user if KVM is loaded SMT and L1TF CPU bug being present + - x86/KVM/VMX: Add module argument for L1TF mitigation + - x86/KVM/VMX: Add L1D flush algorithm + - x86/KVM/VMX: Add L1D MSR based flush + - x86/KVM/VMX: Add L1D flush logic + - x86/KVM/VMX: Split the VMX MSR LOAD structures to have an host/guest numbers + - x86/KVM/VMX: Add find_msr() helper function + - x86/KVM/VMX: Separate the VMX AUTOLOAD guest/host number accounting + - x86/KVM/VMX: Extend add_atomic_switch_msr() to allow VMENTER only MSRs + - x86/KVM/VMX: Use MSR save list for IA32_FLUSH_CMD if required + - cpu/hotplug: Online siblings when SMT control is turned on + - x86/litf: Introduce vmx status variable + - x86/kvm: Drop L1TF MSR list approach + - x86/l1tf: Handle EPT disabled state proper + - x86/kvm: Move l1tf setup function + - x86/kvm: Add static key for flush always + - x86/kvm: Serialize L1D flush parameter setter + - x86/kvm: Allow runtime control of L1D flush + - cpu/hotplug: Expose SMT control init function + - cpu/hotplug: Set CPU_SMT_NOT_SUPPORTED early + - x86/bugs, kvm: Introduce boot-time control of L1TF mitigations + - Documentation: Add section about CPU vulnerabilities + - x86/speculation/l1tf: Unbreak !__HAVE_ARCH_PFN_MODIFY_ALLOWED architectures + - x86/KVM/VMX: Initialize the vmx_l1d_flush_pages' content + - Documentation/l1tf: Fix typos + - cpu/hotplug: detect SMT disabled by BIOS + - x86/KVM/VMX: Don't set l1tf_flush_l1d to true from vmx_l1d_flush() + - x86/KVM/VMX: Replace 'vmx_l1d_flush_always' with 'vmx_l1d_flush_cond' + - x86/KVM/VMX: Move the l1tf_flush_l1d test to vmx_l1d_flush() + - x86/irq: Demote irq_cpustat_t::__softirq_pending to u16 + - 
x86/KVM/VMX: Introduce per-host-cpu analogue of l1tf_flush_l1d + - x86: Don't include linux/irq.h from asm/hardirq.h + - x86/irq: Let interrupt handlers set kvm_cpu_l1tf_flush_l1d + - x86/KVM/VMX: Don't set l1tf_flush_l1d from vmx_handle_external_intr() + - Documentation/l1tf: Remove Yonah processors from not vulnerable list + - x86/speculation: Simplify sysfs report of VMX L1TF vulnerability + - x86/speculation: Use ARCH_CAPABILITIES to skip L1D flush on vmentry + - KVM: x86: Add a framework for supporting MSR-based features + - KVM: X86: Introduce kvm_get_msr_feature() + - KVM: VMX: support MSR_IA32_ARCH_CAPABILITIES as a feature MSR + - KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry + - cpu/hotplug: Fix SMT supported evaluation + - x86/speculation/l1tf: Invert all not present mappings + - x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert + - x86/mm/pat: Make set_memory_np() L1TF safe - [ Ubuntu: 4.15.0-31.33 ] - - * linux: 4.15.0-31.33 -proposed tracker (LP: #1784281) - * ubuntu_bpf_jit test failed on Bionic s390x systems (LP: #1753941) - - test_bpf: flag tests that cannot be jited on s390 - * HDMI/DP audio can't work on the laptop of Dell Latitude 5495 (LP: #1782689) - - drm/nouveau: fix nouveau_dsm_get_client_id()'s return type - - drm/radeon: fix radeon_atpx_get_client_id()'s return type - - drm/amdgpu: fix amdgpu_atpx_get_client_id()'s return type - - platform/x86: apple-gmux: fix gmux_get_client_id()'s return type - - ALSA: hda: use PCI_BASE_CLASS_DISPLAY to replace PCI_CLASS_DISPLAY_VGA - - vga_switcheroo: set audio client id according to bound GPU id - * locking sockets broken due to missing AppArmor socket mediation patches - (LP: #1780227) - - UBUNTU SAUCE: apparmor: fix apparmor mediating locking non-fs, unix sockets - * Update2 for ocxl driver (LP: #1781436) - - ocxl: Fix page fault handler in case of fault on dying process - * RTNL assertion failure on ipvlan (LP: #1776927) - - ipvlan: drop ipv6 dependency - - ipvlan: use 
per device spinlock to protect addrs list updates - * netns: unable to follow an interface that moves to another netns - (LP: #1774225) - - net: core: Expose number of link up/down transitions - - dev: always advertise the new nsid when the netns iface changes - - dev: advertise the new ifindex when the netns iface changes - * [Bionic] Disk IO hangs when using BFQ as io scheduler (LP: #1780066) - - block, bfq: fix occurrences of request finish method's old name - - block, bfq: remove batches of confusing ifdefs - - block, bfq: add requeue-request hook - * HP ProBook 455 G5 needs mute-led-gpio fixup (LP: #1781763) - - ALSA: hda: add mute led support for HP ProBook 455 G5 - * [Bionic] bug fixes to improve stability of the ThunderX2 i2c driver - (LP: #1781476) - - i2c: xlp9xx: Fix issue seen when updating receive length - - i2c: xlp9xx: Make sure the transfer size is not more than - I2C_SMBUS_BLOCK_SIZE - * x86/kvm: fix LAPIC timer drift when guest uses periodic mode (LP: #1778486) - - x86/kvm: fix LAPIC timer drift when guest uses periodic mode - * Please include ax88179_178a and r8152 modules in d-i udeb (LP: #1771823) - - [Config:] d-i: Add ax88179_178a and r8152 to nic-modules - * Nvidia fails after switching its mode (LP: #1778658) - - PCI: Restore config space on runtime resume despite being unbound - * Kernel error "task zfs:pid blocked for more than 120 seconds" (LP: #1781364) - - SAUCE: (noup) zfs to 0.7.5-1ubuntu16.3 - * CVE-2018-12232 - - PATCH 1/1] socket: close race condition between sock_close() and - sockfs_setattr() - * CVE-2018-10323 - - xfs: set format back to extents if xfs_bmap_extents_to_btree - * change front mic location for more lenovo m7/8/9xx machines (LP: #1781316) - - ALSA: hda/realtek - Fix the problem of two front mics on more machines - - ALSA: hda/realtek - two more lenovo models need fixup of MIC_LOCATION - * Cephfs + fscache: unable to handle kernel NULL pointer dereference at - 0000000000000000 IP: jbd2__journal_start+0x22/0x1f0 (LP: 
#1783246) - - ceph: track read contexts in ceph_file_info - * Touchpad of ThinkPad P52 failed to work with message "lost sync at byte" - (LP: #1779802) - - Input: elantech - fix V4 report decoding for module with middle key - - Input: elantech - enable middle button of touchpads on ThinkPad P52 - * xhci_hcd 0000:00:14.0: Root hub is not suspended (LP: #1779823) - - usb: xhci: dbc: Fix lockdep warning - - usb: xhci: dbc: Don't decrement runtime PM counter if DBC is not started - * CVE-2018-13406 - - video: uvesafb: Fix integer overflow in allocation - * CVE-2018-10840 - - ext4: correctly handle a zero-length xattr with a non-zero e_value_offs - * CVE-2018-11412 - - ext4: do not allow external inodes for inline data - * CVE-2018-10881 - - ext4: clear i_data in ext4_inode_info when removing inline data - * CVE-2018-12233 - - jfs: Fix inconsistency between memory allocation and ea_buf->max_size - * CVE-2018-12904 - - kvm: nVMX: Enforce cpl=0 for VMX instructions - * Error parsing PCC subspaces from PCCT (LP: #1528684) - - mailbox: PCC: erroneous error message when parsing ACPI PCCT - * CVE-2018-13094 - - xfs: don't call xfs_da_shrink_inode with NULL bp - * other users' coredumps can be read via setgid directory and killpriv bypass - (LP: #1779923) // CVE-2018-13405 - - Fix up non-directory creation in SGID directories - * Invoking obsolete 'firmware_install' target breaks snap build (LP: #1782166) - - snapcraft.yaml: stop invoking the obsolete (and non-existing) - 'firmware_install' target - * snapcraft.yaml: missing ubuntu-retpoline-extract-one script breaks the build - (LP: #1782116) - - snapcraft.yaml: copy retpoline-extract-one to scripts before build - * Allow Raven Ridge's audio controller to be runtime suspended (LP: #1782540) - - ALSA: hda: Add AZX_DCAPS_PM_RUNTIME for AMD Raven Ridge - * CVE-2018-11506 - - sr: pass down correctly sized SCSI sense buffer - * Bionic update: upstream stable patchset 2018-07-24 (LP: #1783418) - - net: Fix a bug in removing queues 
from XPS map - - net/mlx4_core: Fix error handling in mlx4_init_port_info. - - net/sched: fix refcnt leak in the error path of tcf_vlan_init() - - net: sched: red: avoid hashing NULL child - - net/smc: check for missing nlattrs in SMC_PNETID messages - - net: test tailroom before appending to linear skb - - packet: in packet_snd start writing at link layer allocation - - sock_diag: fix use-after-free read in __sk_free - - tcp: purge write queue in tcp_connect_init() - - vmxnet3: set the DMA mask before the first DMA map operation - - vmxnet3: use DMA memory barriers where required - - hv_netvsc: empty current transmit aggregation if flow blocked - - hv_netvsc: Use the num_online_cpus() for channel limit - - hv_netvsc: avoid retry on send during shutdown - - hv_netvsc: only wake transmit queue if link is up - - hv_netvsc: fix error unwind handling if vmbus_open fails - - hv_netvsc: cancel subchannel setup before halting device - - hv_netvsc: fix race in napi poll when rescheduling - - hv_netvsc: defer queue selection to VF - - hv_netvsc: disable NAPI before channel close - - hv_netvsc: use RCU to fix concurrent rx and queue changes - - hv_netvsc: change GPAD teardown order on older versions - - hv_netvsc: common detach logic - - hv_netvsc: Use Windows version instead of NVSP version on GPAD teardown - - hv_netvsc: Split netvsc_revoke_buf() and netvsc_teardown_gpadl() - - hv_netvsc: Ensure correct teardown message sequence order - - hv_netvsc: Fix a network regression after ifdown/ifup - - sparc: vio: use put_device() instead of kfree() - - ext2: fix a block leak - - s390: add assembler macros for CPU alternatives - - s390: move expoline assembler macros to a header - - s390/crc32-vx: use expoline for indirect branches - - s390/lib: use expoline for indirect branches - - s390/ftrace: use expoline for indirect branches - - s390/kernel: use expoline for indirect branches - - s390: move spectre sysfs attribute code - - s390: extend expoline to BC instructions - - s390: 
use expoline thunks in the BPF JIT - - scsi: sg: allocate with __GFP_ZERO in sg_build_indirect() - - scsi: zfcp: fix infinite iteration on ERP ready list - - loop: don't call into filesystem while holding lo_ctl_mutex - - loop: fix LOOP_GET_STATUS lock imbalance - - cfg80211: limit wiphy names to 128 bytes - - hfsplus: stop workqueue when fill_super() failed - - x86/kexec: Avoid double free_page() upon do_kexec_load() failure - - usb: gadget: f_uac2: fix bFirstInterface in composite gadget - - usb: dwc3: Undo PHY init if soft reset fails - - usb: dwc3: omap: don't miss events during suspend/resume - - usb: gadget: core: Fix use-after-free of usb_request - - usb: gadget: fsl_udc_core: fix ep valid checks - - usb: dwc2: Fix dwc2_hsotg_core_init_disconnected() - - usb: cdc_acm: prevent race at write to acm while system resumes - - net: usbnet: fix potential deadlock on 32bit hosts - - ARM: dts: imx7d-sdb: Fix regulator-usb-otg2-vbus node name - - usb: host: xhci-plat: revert "usb: host: xhci-plat: enable clk in resume - timing" - - USB: OHCI: Fix NULL dereference in HCDs using HCD_LOCAL_MEM - - net/usb/qmi_wwan.c: Add USB id for lt4120 modem - - net-usb: add qmi_wwan if on lte modem wistron neweb d18q1 - - Bluetooth: btusb: Add USB ID 7392:a611 for Edimax EW-7611ULB - - ALSA: usb-audio: Add native DSD support for Luxman DA-06 - - usb: dwc3: Add SoftReset PHY synchonization delay - - usb: dwc3: Update DWC_usb31 GTXFIFOSIZ reg fields - - usb: dwc3: Makefile: fix link error on randconfig - - xhci: zero usb device slot_id member when disabling and freeing a xhci slot - - usb: dwc2: Fix interval type issue - - usb: dwc2: hcd: Fix host channel halt flow - - usb: dwc2: host: Fix transaction errors in host mode - - usb: gadget: ffs: Let setup() return USB_GADGET_DELAYED_STATUS - - usb: gadget: ffs: Execute copy_to_user() with USER_DS set - - usbip: Correct maximum value of CONFIG_USBIP_VHCI_HC_PORTS - - usb: gadget: udc: change comparison to bitshift when dealing with a mask 
- - usb: gadget: composite: fix incorrect handling of OS desc requests - - media: lgdt3306a: Fix module count mismatch on usb unplug - - media: em28xx: USB bulk packet size fix - - Bluetooth: btusb: Add device ID for RTL8822BE - - xhci: Show what USB release number the xHC supports from protocol capablity - - staging: bcm2835-audio: Release resources on module_exit() - - staging: lustre: fix bug in osc_enter_cache_try - - staging: fsl-dpaa2/eth: Fix incorrect casts - - staging: rtl8192u: return -ENOMEM on failed allocation of priv->oldaddr - - staging: ks7010: Use constants from ieee80211_eid instead of literal ints. - - staging: lustre: lmv: correctly iput lmo_root - - crypto: inside-secure - wait for the request to complete if in the backlog - - crypto: atmel-aes - fix the keys zeroing on errors - - crypto: ccp - don't disable interrupts while setting up debugfs - - crypto: inside-secure - do not process request if no command was issued - - crypto: inside-secure - fix the cache_len computation - - crypto: inside-secure - fix the extra cache computation - - crypto: sunxi-ss - Add MODULE_ALIAS to sun4i-ss - - crypto: inside-secure - fix the invalidation step during cra_exit - - scsi: mpt3sas: fix an out of bound write - - scsi: ufs: Enable quirk to ignore sending WRITE_SAME command - - scsi: bnx2fc: Fix check in SCSI completion handler for timed out request - - scsi: sym53c8xx_2: iterator underflow in sym_getsync() - - scsi: mptfusion: Add bounds check in mptctl_hp_targetinfo() - - scsi: qla2xxx: Avoid triggering undefined behavior in - qla2x00_mbx_completion() - - scsi: storvsc: Increase cmd_per_lun for higher speed devices - - scsi: qedi: Fix truncation of CHAP name and secret - - scsi: aacraid: fix shutdown crash when init fails - - scsi: qla4xxx: skip error recovery in case of register disconnect. 
- - scsi: qedi: Fix kernel crash during port toggle - - scsi: mpt3sas: Do not mark fw_event workqueue as WQ_MEM_RECLAIM - - scsi: sd: Keep disk read-only when re-reading partition - - scsi: iscsi_tcp: set BDI_CAP_STABLE_WRITES when data digest enabled - - scsi: aacraid: Insure command thread is not recursively stopped - - scsi: core: Make SCSI Status CONDITION MET equivalent to GOOD - - scsi: mvsas: fix wrong endianness of sgpio api - - ASoC: hdmi-codec: Fix module unloading caused kernel crash - - ASoC: rockchip: rk3288-hdmi-analog: Select needed codecs - - ASoC: samsung: odroid: Fix 32000 sample rate handling - - ASoC: topology: create TLV data for dapm widgets - - ASoC: samsung: i2s: Ensure the RCLK rate is properly determined - - clk: rockchip: Fix wrong parent for SDMMC phase clock for rk3228 - - clk: Don't show the incorrect clock phase - - clk: hisilicon: mark wdt_mux_p[] as const - - clk: tegra: Fix pll_u rate configuration - - clk: rockchip: Prevent calculating mmc phase if clock rate is zero - - clk: samsung: s3c2410: Fix PLL rates - - clk: samsung: exynos7: Fix PLL rates - - clk: samsung: exynos5260: Fix PLL rates - - clk: samsung: exynos5433: Fix PLL rates - - clk: samsung: exynos5250: Fix PLL rates - - clk: samsung: exynos3250: Fix PLL rates - - media: dmxdev: fix error code for invalid ioctls - - media: Don't let tvp5150_get_vbi() go out of vbi_ram_default array - - media: ov5645: add missing of_node_put() in error path - - media: cx23885: Override 888 ImpactVCBe crystal frequency - - media: cx23885: Set subdev host data to clk_freq pointer - - media: s3c-camif: fix out-of-bounds array access - - media: lgdt3306a: Fix a double kfree on i2c device remove - - media: em28xx: Add Hauppauge SoloHD/DualHD bulk models - - media: v4l: vsp1: Fix display stalls when requesting too many inputs - - media: i2c: adv748x: fix HDMI field heights - - media: vb2: Fix videobuf2 to map correct area - - media: vivid: fix incorrect capabilities for radio - - media: 
cx25821: prevent out-of-bounds read on array card - - serial: xuartps: Fix out-of-bounds access through DT alias - - serial: sh-sci: Fix out-of-bounds access through DT alias - - serial: samsung: Fix out-of-bounds access through serial port index - - serial: mxs-auart: Fix out-of-bounds access through serial port index - - serial: imx: Fix out-of-bounds access through serial port index - - serial: fsl_lpuart: Fix out-of-bounds access through DT alias - - serial: arc_uart: Fix out-of-bounds access through DT alias - - serial: 8250: Don't service RX FIFO if interrupts are disabled - - serial: altera: ensure port->regshift is honored consistently - - rtc: snvs: Fix usage of snvs_rtc_enable - - rtc: hctosys: Ensure system time doesn't overflow time_t - - rtc: rk808: fix possible race condition - - rtc: m41t80: fix race conditions - - rtc: tx4939: avoid unintended sign extension on a 24 bit shift - - rtc: rp5c01: fix possible race condition - - rtc: goldfish: Add missing MODULE_LICENSE - - cxgb4: Correct ntuple mask validation for hash filters - - net: dsa: bcm_sf2: Fix RX_CLS_LOC_ANY overwrite for last rule - - net: dsa: Do not register devlink for unused ports - - net: dsa: bcm_sf2: Fix IPv6 rules and chain ID - - net: dsa: bcm_sf2: Fix IPv6 rule half deletion - - 3c59x: convert to generic DMA API - - net: ip6_gre: Request headroom in __gre6_xmit() - - net: ip6_gre: Split up ip6gre_tnl_link_config() - - net: ip6_gre: Split up ip6gre_tnl_change() - - net: ip6_gre: Split up ip6gre_newlink() - - net: ip6_gre: Split up ip6gre_changelink() - - qed: LL2 flush isles when connection is closed - - qed: Fix possibility of list corruption during rmmod flows - - qed: Fix LL2 race during connection terminate - - powerpc: Move default security feature flags - - Bluetooth: btusb: Add support for Intel Bluetooth device 22560 [8087:0026] - - staging: fsl-dpaa2/eth: Fix incorrect kfree - - crypto: inside-secure - move the digest to the request context - - scsi: lpfc: Fix NVME Initiator 
FirstBurst - - serial: mvebu-uart: fix tx lost characters - * Bionic update: upstream stable patchset 2018-07-20 (LP: #1782846) - - usbip: usbip_host: refine probe and disconnect debug msgs to be useful - - usbip: usbip_host: delete device from busid_table after rebind - - usbip: usbip_host: run rebind from exit when module is removed - - usbip: usbip_host: fix NULL-ptr deref and use-after-free errors - - usbip: usbip_host: fix bad unlock balance during stub_probe() - - ALSA: usb: mixer: volume quirk for CM102-A+/102S+ - - ALSA: hda: Add Lenovo C50 All in one to the power_save blacklist - - ALSA: control: fix a redundant-copy issue - - spi: pxa2xx: Allow 64-bit DMA - - spi: bcm-qspi: Avoid setting MSPI_CDRAM_PCS for spi-nor master - - spi: bcm-qspi: Always read and set BSPI_MAST_N_BOOT_CTRL - - KVM: arm/arm64: VGIC/ITS save/restore: protect kvm_read_guest() calls - - KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock - - vfio: ccw: fix cleanup if cp_prefetch fails - - tracing/x86/xen: Remove zero data size trace events - trace_xen_mmu_flush_tlb{_all} - - tee: shm: fix use-after-free via temporarily dropped reference - - netfilter: nf_tables: free set name in error path - - netfilter: nf_tables: can't fail after linking rule into active rule list - - netfilter: nf_socket: Fix out of bounds access in nf_sk_lookup_slow_v{4,6} - - i2c: designware: fix poll-after-enable regression - - powerpc/powernv: Fix NVRAM sleep in invalid context when crashing - - drm: Match sysfs name in link removal to link creation - - lib/test_bitmap.c: fix bitmap optimisation tests to report errors correctly - - radix tree: fix multi-order iteration race - - mm: don't allow deferred pages with NEED_PER_CPU_KM - - drm/i915/gen9: Add WaClearHIZ_WM_CHICKEN3 for bxt and glk - - s390/qdio: fix access to uninitialized qdio_q fields - - s390/qdio: don't release memory in qdio_setup_irq() - - s390: remove indirect branch from do_softirq_own_stack - - x86/pkeys: Override pkey when 
moving away from PROT_EXEC - - x86/pkeys: Do not special case protection key 0 - - efi: Avoid potential crashes, fix the 'struct efi_pci_io_protocol_32' - definition for mixed mode - - ARM: 8771/1: kprobes: Prohibit kprobes on do_undefinstr - - x86/mm: Drop TS_COMPAT on 64-bit exec() syscall - - tick/broadcast: Use for_each_cpu() specially on UP kernels - - ARM: 8769/1: kprobes: Fix to use get_kprobe_ctlblk after irq-disabed - - ARM: 8770/1: kprobes: Prohibit probing on optimized_callback - - ARM: 8772/1: kprobes: Prohibit kprobes on get_user functions - - Btrfs: fix xattr loss after power failure - - Btrfs: send, fix invalid access to commit roots due to concurrent - snapshotting - - btrfs: property: Set incompat flag if lzo/zstd compression is set - - btrfs: fix crash when trying to resume balance without the resume flag - - btrfs: Split btrfs_del_delalloc_inode into 2 functions - - btrfs: Fix delalloc inodes invalidation during transaction abort - - btrfs: fix reading stale metadata blocks after degraded raid1 mounts - - xhci: Fix USB3 NULL pointer dereference at logical disconnect. 
- - KVM: arm/arm64: Properly protect VGIC locks from IRQs - - KVM: arm/arm64: VGIC/ITS: Promote irq_lock() in update_affinity - - hwmon: (k10temp) Fix reading critical temperature register - - hwmon: (k10temp) Use API function to access System Management Network - - vsprintf: Replace memory barrier with static_key for random_ptr_key update - - x86/amd_nb: Add support for Raven Ridge CPUs - - x86/apic/x2apic: Initialize cluster ID properly - * Bionic update: upstream stable patchset 2018-07-09 (LP: #1780858) - - 8139too: Use disable_irq_nosync() in rtl8139_poll_controller() - - bridge: check iface upper dev when setting master via ioctl - - dccp: fix tasklet usage - - ipv4: fix fnhe usage by non-cached routes - - ipv4: fix memory leaks in udp_sendmsg, ping_v4_sendmsg - - llc: better deal with too small mtu - - net: ethernet: sun: niu set correct packet size in skb - - net: ethernet: ti: cpsw: fix packet leaking in dual_mac mode - - net/mlx4_en: Fix an error handling path in 'mlx4_en_init_netdev()' - - net/mlx4_en: Verify coalescing parameters are in range - - net/mlx5e: Err if asked to offload TC match on frag being first - - net/mlx5: E-Switch, Include VF RDMA stats in vport statistics - - net sched actions: fix refcnt leak in skbmod - - net_sched: fq: take care of throttled flows before reuse - - net: support compat 64-bit time in {s,g}etsockopt - - net/tls: Don't recursively call push_record during tls_write_space callbacks - - net/tls: Fix connection stall on partial tls record - - openvswitch: Don't swap table in nlattr_set() after OVS_ATTR_NESTED is found - - qmi_wwan: do not steal interfaces from class drivers - - r8169: fix powering up RTL8168h - - rds: do not leak kernel memory to user land - - sctp: delay the authentication for the duplicated cookie-echo chunk - - sctp: fix the issue that the cookie-ack with auth can't get processed - - sctp: handle two v4 addrs comparison in sctp_inet6_cmp_addr - - sctp: remove sctp_chunk_put from fail_mark err path in - 
sctp_ulpevent_make_rcvmsg - - sctp: use the old asoc when making the cookie-ack chunk in dupcook_d - - tcp_bbr: fix to zero idle_restart only upon S/ACKed data - - tcp: ignore Fast Open on repair mode - - tg3: Fix vunmap() BUG_ON() triggered from tg3_free_consistent(). - - bonding: do not allow rlb updates to invalid mac - - bonding: send learning packets for vlans on slave - - net: sched: fix error path in tcf_proto_create() when modules are not - configured - - net/mlx5e: TX, Use correct counter in dma_map error flow - - net/mlx5: Avoid cleaning flow steering table twice during error flow - - hv_netvsc: set master device - - ipv6: fix uninit-value in ip6_multipath_l3_keys() - - net/mlx5e: Allow offloading ipv4 header re-write for icmp - - nsh: fix infinite loop - - udp: fix SO_BINDTODEVICE - - l2tp: revert "l2tp: fix missing print session offset info" - - proc: do not access cmdline nor environ from file-backed areas - - net/smc: restrict non-blocking connect finish - - mlxsw: spectrum_switchdev: Do not remove mrouter port from MDB's ports list - - net/mlx5e: DCBNL fix min inline header size for dscp - - net: systemport: Correclty disambiguate driver instances - - sctp: clear the new asoc's stream outcnt in sctp_stream_update - - tcp: restore autocorking - - tipc: fix one byte leak in tipc_sk_set_orig_addr() - - hv_netvsc: Fix net device attach on older Windows hosts - * Bionic update: upstream stable patchset 2018-07-06 (LP: #1780499) - - ext4: prevent right-shifting extents beyond EXT_MAX_BLOCKS - - ipvs: fix rtnl_lock lockups caused by start_sync_thread - - netfilter: ebtables: don't attempt to allocate 0-sized compat array - - kcm: Call strp_stop before strp_done in kcm_attach - - crypto: af_alg - fix possible uninit-value in alg_bind() - - netlink: fix uninit-value in netlink_sendmsg - - net: fix rtnh_ok() - - net: initialize skb->peeked when cloning - - net: fix uninit-value in __hw_addr_add_ex() - - dccp: initialize ireq->ir_mark - - ipv4: fix uninit-value 
in ip_route_output_key_hash_rcu() - - soreuseport: initialise timewait reuseport field - - inetpeer: fix uninit-value in inet_getpeer - - memcg: fix per_node_info cleanup - - perf: Remove superfluous allocation error check - - tcp: fix TCP_REPAIR_QUEUE bound checking - - bdi: wake up concurrent wb_shutdown() callers. - - bdi: Fix oops in wb_workfn() - - gpioib: do not free unrequested descriptors - - gpio: fix aspeed_gpio unmask irq - - gpio: fix error path in lineevent_create - - rfkill: gpio: fix memory leak in probe error path - - libata: Apply NOLPM quirk for SanDisk SD7UB3Q*G1001 SSDs - - dm integrity: use kvfree for kvmalloc'd memory - - tracing: Fix regex_match_front() to not over compare the test string - - z3fold: fix reclaim lock-ups - - mm: sections are not offlined during memory hotremove - - mm, oom: fix concurrent munlock and oom reaper unmap, v3 - - ceph: fix rsize/wsize capping in ceph_direct_read_write() - - can: kvaser_usb: Increase correct stats counter in kvaser_usb_rx_can_msg() - - can: hi311x: Acquire SPI lock on ->do_get_berr_counter - - can: hi311x: Work around TX complete interrupt erratum - - drm/vc4: Fix scaling of uni-planar formats - - drm/i915: Fix drm:intel_enable_lvds ERROR message in kernel log - - drm/atomic: Clean old_state/new_state in drm_atomic_state_default_clear() - - drm/atomic: Clean private obj old_state/new_state in - drm_atomic_state_default_clear() - - net: atm: Fix potential Spectre v1 - - atm: zatm: Fix potential Spectre v1 - - cpufreq: schedutil: Avoid using invalid next_freq - - Revert "Bluetooth: btusb: Fix quirk for Atheros 1525/QCA6174" - - Bluetooth: btusb: Only check needs_reset_resume DMI table for QCA rome - chipsets - - thermal: exynos: Reading temperature makes sense only when TMU is turned on - - thermal: exynos: Propagate error value from tmu_read() - - nvme: add quirk to force medium priority for SQ creation - - smb3: directory sync should not return an error - - sched/autogroup: Fix possible Spectre-v1 
indexing for sched_prio_to_weight[] - - tracing/uprobe_event: Fix strncpy corner case - - perf/x86: Fix possible Spectre-v1 indexing for hw_perf_event cache_* - - perf/x86/cstate: Fix possible Spectre-v1 indexing for pkg_msr - - perf/x86/msr: Fix possible Spectre-v1 indexing in the MSR driver - - perf/core: Fix possible Spectre-v1 indexing for ->aux_pages[] - - perf/x86: Fix possible Spectre-v1 indexing for x86_pmu::event_map() - - i2c: dev: prevent ZERO_SIZE_PTR deref in i2cdev_ioctl_rdwr() - - bdi: Fix use after free bug in debugfs_remove() - - drm/ttm: Use GFP_TRANSHUGE_LIGHT for allocating huge pages - - drm/i915: Adjust eDP's logical vco in a reliable place. - - drm/nouveau/ttm: don't dereference nvbo::cli, it can outlive client - - sched/core: Fix possible Spectre-v1 indexing for sched_prio_to_weight[] - * Bionic update: upstream stable patchset 2018-06-26 (LP: #1778759) - - percpu: include linux/sched.h for cond_resched() - - ACPI / button: make module loadable when booted in non-ACPI mode - - USB: serial: option: Add support for Quectel EP06 - - ALSA: hda - Fix incorrect usage of IS_REACHABLE() - - ALSA: pcm: Check PCM state at xfern compat ioctl - - ALSA: seq: Fix races at MIDI encoding in snd_virmidi_output_trigger() - - ALSA: dice: fix kernel NULL pointer dereference due to invalid calculation - for array index - - ALSA: aloop: Mark paused device as inactive - - ALSA: aloop: Add missing cable lock to ctl API callbacks - - tracepoint: Do not warn on ENOMEM - - scsi: target: Fix fortify_panic kernel exception - - Input: leds - fix out of bound access - - Input: atmel_mxt_ts - add touchpad button mapping for Samsung Chromebook Pro - - rtlwifi: btcoex: Add power_on_setting routine - - rtlwifi: cleanup 8723be ant_sel definition - - xfs: prevent creating negative-sized file via INSERT_RANGE - - RDMA/cxgb4: release hw resources on device removal - - RDMA/ucma: Allow resolving address w/o specifying source address - - RDMA/mlx5: Fix multiple NULL-ptr deref 
errors in rereg_mr flow - - RDMA/mlx5: Protect from shift operand overflow - - NET: usb: qmi_wwan: add support for ublox R410M PID 0x90b2 - - IB/mlx5: Use unlimited rate when static rate is not supported - - IB/hfi1: Fix handling of FECN marked multicast packet - - IB/hfi1: Fix loss of BECN with AHG - - IB/hfi1: Fix NULL pointer dereference when invalid num_vls is used - - iw_cxgb4: Atomically flush per QP HW CQEs - - drm/vmwgfx: Fix a buffer object leak - - drm/bridge: vga-dac: Fix edid memory leak - - test_firmware: fix setting old custom fw path back on exit, second try - - errseq: Always report a writeback error once - - USB: serial: visor: handle potential invalid device configuration - - usb: dwc3: gadget: Fix list_del corruption in dwc3_ep_dequeue - - USB: Accept bulk endpoints with 1024-byte maxpacket - - USB: serial: option: reimplement interface masking - - USB: serial: option: adding support for ublox R410M - - usb: musb: host: fix potential NULL pointer dereference - - usb: musb: trace: fix NULL pointer dereference in musb_g_tx() - - platform/x86: asus-wireless: Fix NULL pointer dereference - - irqchip/qcom: Fix check for spurious interrupts - - tracing: Fix bad use of igrab in trace_uprobe.c - - [Config] CONFIG_ARM64_ERRATUM_1024718=y - - arm64: Add work around for Arm Cortex-A55 Erratum 1024718 - - Input: atmel_mxt_ts - add touchpad button mapping for Samsung Chromebook Pro - - infiniband: mlx5: fix build errors when INFINIBAND_USER_ACCESS=m - - btrfs: Take trans lock before access running trans in check_delayed_ref - - drm/vc4: Make sure vc4_bo_{inc,dec}_usecnt() calls are balanced - - xhci: Fix use-after-free in xhci_free_virt_device - - platform/x86: Kconfig: Fix dell-laptop dependency chain. 
- - KVM: x86: remove APIC Timer periodic/oneshot spikes - - clocksource: Allow clocksource_mark_unstable() on unregistered clocksources - - clocksource: Initialize cs->wd_list - - clocksource: Consistent de-rate when marking unstable - * Bionic update: upstream stable patchset 2018-06-22 (LP: #1778265) - - ext4: set h_journal if there is a failure starting a reserved handle - - ext4: add MODULE_SOFTDEP to ensure crc32c is included in the initramfs - - ext4: add validity checks for bitmap block numbers - - ext4: fix bitmap position validation - - random: fix possible sleeping allocation from irq context - - random: rate limit unseeded randomness warnings - - usbip: usbip_event: fix to not print kernel pointer address - - usbip: usbip_host: fix to hold parent lock for device_attach() calls - - usbip: vhci_hcd: Fix usb device and sockfd leaks - - usbip: vhci_hcd: check rhport before using in vhci_hub_control() - - Revert "xhci: plat: Register shutdown for xhci_plat" - - USB: serial: simple: add libtransistor console - - USB: serial: ftdi_sio: use jtag quirk for Arrow USB Blaster - - USB: serial: cp210x: add ID for NI USB serial console - - usb: core: Add quirk for HP v222w 16GB Mini - - USB: Increment wakeup count on remote wakeup. - - ALSA: usb-audio: Skip broken EU on Dell dock USB-audio - - virtio: add ability to iterate over vqs - - virtio_console: don't tie bufs to a vq - - virtio_console: free buffers after reset - - virtio_console: drop custom control queue cleanup - - virtio_console: move removal code - - virtio_console: reset on out of memory - - drm/virtio: fix vq wait_event condition - - tty: Don't call panic() at tty_ldisc_init() - - tty: n_gsm: Fix long delays with control frame timeouts in ADM mode - - tty: n_gsm: Fix DLCI handling for ADM mode if debug & 2 is not set - - tty: Avoid possible error pointer dereference at tty_ldisc_restore(). 
-    - tty: Use __GFP_NOFAIL for tty_ldisc_get()
-    - ALSA: dice: fix OUI for TC group
-    - ALSA: dice: fix error path to destroy initialized stream data
-    - ALSA: hda - Skip jack and others for non-existing PCM streams
-    - ALSA: opl3: Hardening for potential Spectre v1
-    - ALSA: asihpi: Hardening for potential Spectre v1
-    - ALSA: hdspm: Hardening for potential Spectre v1
-    - ALSA: rme9652: Hardening for potential Spectre v1
-    - ALSA: control: Hardening for potential Spectre v1
-    - ALSA: pcm: Return negative delays from SNDRV_PCM_IOCTL_DELAY.
-    - ALSA: core: Report audio_tstamp in snd_pcm_sync_ptr
-    - ALSA: seq: oss: Fix unbalanced use lock for synth MIDI device
-    - ALSA: seq: oss: Hardening for potential Spectre v1
-    - ALSA: hda: Hardening for potential Spectre v1
-    - ALSA: hda/realtek - Add some fixes for ALC233
-    - ALSA: hda/realtek - Update ALC255 depop optimize
-    - ALSA: hda/realtek - change the location for one of two front mics
-    - mtd: spi-nor: cadence-quadspi: Fix page fault kernel panic
-    - mtd: cfi: cmdset_0001: Do not allow read/write to suspend erase block.
-    - mtd: cfi: cmdset_0001: Workaround Micron Erase suspend bug.
-    - mtd: cfi: cmdset_0002: Do not allow read/write to suspend erase block.
-    - mtd: rawnand: tango: Fix struct clk memory leak
-    - kobject: don't use WARN for registration failures
-    - scsi: sd: Defer spinning up drive while SANITIZE is in progress
-    - bfq-iosched: ensure to clear bic/bfqq pointers when preparing request
-    - vfio: ccw: process ssch with interrupts disabled
-    - ANDROID: binder: prevent transactions into own process.
-    - PCI: aardvark: Fix logic in advk_pcie_{rd,wr}_conf()
-    - PCI: aardvark: Set PIO_ADDR_LS correctly in advk_pcie_rd_conf()
-    - PCI: aardvark: Use ISR1 instead of ISR0 interrupt in legacy irq mode
-    - PCI: aardvark: Fix PCIe Max Read Request Size setting
-    - ARM: amba: Make driver_override output consistent with other buses
-    - ARM: amba: Fix race condition with driver_override
-    - ARM: amba: Don't read past the end of sysfs "driver_override" buffer
-    - ARM: socfpga_defconfig: Remove QSPI Sector 4K size force
-    - KVM: arm/arm64: Close VMID generation race
-    - crypto: drbg - set freed buffers to NULL
-    - ASoC: fsl_esai: Fix divisor calculation failure at lower ratio
-    - libceph: un-backoff on tick when we have a authenticated session
-    - libceph: reschedule a tick in finish_hunting()
-    - libceph: validate con->state at the top of try_write()
-    - fpga-manager: altera-ps-spi: preserve nCONFIG state
-    - earlycon: Use a pointer table to fix __earlycon_table stride
-    - drm/amdgpu: set COMPUTE_PGM_RSRC1 for SGPR/VGPR clearing shaders
-    - drm/i915: Enable display WA#1183 from its correct spot
-    - objtool, perf: Fix GCC 8 -Wrestrict error
-    - tools/lib/subcmd/pager.c: do not alias select() params
-    - x86/ipc: Fix x32 version of shmid64_ds and msqid64_ds
-    - x86/smpboot: Don't use mwait_play_dead() on AMD systems
-    - x86/microcode/intel: Save microcode patch unconditionally
-    - x86/microcode: Do not exit early from __reload_late()
-    - tick/sched: Do not mess with an enqueued hrtimer
-    - arm/arm64: KVM: Add PSCI version selection API
-    - powerpc/eeh: Fix race with driver un/bind
-    - serial: mvebu-uart: Fix local flags handling on termios update
-    - block: do not use interruptible wait anywhere
-    - ASoC: dmic: Fix clock parenting
-    - PCI / PM: Do not clear state_saved in pci_pm_freeze() when smart suspend is
-      set
-    - module: Fix display of wrong module .text address
-    - drm/edid: Reset more of the display info
-    - drm/i915/fbdev: Enable late fbdev initial configuration
-    - drm/i915/audio: set minimum CD clock to twice the BCLK
-    - drm/amd/display: Fix deadlock when flushing irq
-    - drm/amd/display: Disallow enabling CRTC without primary plane with FB
-  * Bionic update: upstream stable patchset 2018-06-22 (LP: #1778265) //
-    CVE-2018-1108.
-    - random: set up the NUMA crng instances after the CRNG is fully initialized
-  * Ryzen/Raven Ridge USB ports do not work (LP: #1756700)
-    - xhci: Fix USB ports for Dell Inspiron 5775
-  * [Ubuntu 1804][boston][ixgbe] EEH causes kernel BUG at /build/linux-jWa1Fv/linux-4.15.0/drivers/pci/msi.c:352 (i2S) (LP: #1776389)
-    - ixgbe/ixgbevf: Free IRQ when PCI error recovery removes the device
-  * Need fix to aacraid driver to prevent panic (LP: #1770095)
-    - scsi: aacraid: Correct hba_send to include iu_type
-  * kernel: Fix arch random implementation (LP: #1775391)
-    - s390/archrandom: Rework arch random implementation.
-  * kernel: Fix memory leak on CCA and EP11 CPRB processing. (LP: #1775390)
-    - s390/zcrypt: Fix CCA and EP11 CPRB processing failure memory leak.
-  * Various fixes for CXL kernel module (LP: #1774471)
-    - cxl: Remove function write_timebase_ctrl_psl9() for PSL9
-    - cxl: Set the PBCQ Tunnel BAR register when enabling capi mode
-    - cxl: Report the tunneled operations status
-    - cxl: Configure PSL to not use APC virtual machines
-    - cxl: Disable prefault_mode in Radix mode
-  * Bluetooth not working (LP: #1764645)
-    - Bluetooth: btusb: Apply QCA Rome patches for some ATH3012 models
-  * linux-snapdragon: wcn36xx: mac address generation on boot (LP: #1776491)
-    - [Config] arm64: snapdragon: WCN36XX_SNAPDRAGON_HACKS=y
-    - SAUCE: wcn36xx: read MAC from file or randomly generate one
-  * fscache: Fix hanging wait on page discarded by writeback (LP: #1777029)
-    - fscache: Fix hanging wait on page discarded by writeback
-
- -- Stefan Bader Thu, 02 Aug 2018 17:10:18 +0200
+ -- Stefan Bader Fri, 10 Aug 2018 11:22:51 +0200

 linux-azure (4.15.0-1019.19) bionic; urgency=medium

diff -u linux-azure-4.15.0/debian/control linux-azure-4.15.0/debian/control
--- linux-azure-4.15.0/debian/control
+++ linux-azure-4.15.0/debian/control
@@ -48,7 +48,7 @@
 XS-Testsuite: autopkgtest
 #XS-Testsuite-Depends: gcc-4.7 binutils

-Package: linux-azure-headers-4.15.0-1020
+Package: linux-azure-headers-4.15.0-1021
 Build-Profiles:
 Architecture: all
 Multi-Arch: foreign
@@ -58,44 +58,44 @@
 Description: Header files related to Linux kernel version 4.15.0
  This package provides kernel header files for version 4.15.0, for
  sites that want the latest kernel headers.
Please read - /usr/share/doc/linux-azure-headers-4.15.0-1020/debian.README.gz for details + /usr/share/doc/linux-azure-headers-4.15.0-1021/debian.README.gz for details -Package: linux-azure-tools-4.15.0-1020 +Package: linux-azure-tools-4.15.0-1021 Build-Profiles: Architecture: amd64 Section: devel Priority: optional Depends: ${misc:Depends}, ${shlibs:Depends}, linux-tools-common -Description: Linux kernel version specific tools for version 4.15.0-1020 +Description: Linux kernel version specific tools for version 4.15.0-1021 This package provides the architecture dependant parts for kernel version locked tools (such as perf and x86_energy_perf_policy) for - version 4.15.0-1020 on + version 4.15.0-1021 on 64 bit x86. - You probably want to install linux-tools-4.15.0-1020-. + You probably want to install linux-tools-4.15.0-1021-. -Package: linux-azure-cloud-tools-4.15.0-1020 +Package: linux-azure-cloud-tools-4.15.0-1021 Build-Profiles: Architecture: amd64 Section: devel Priority: optional Depends: ${misc:Depends}, ${shlibs:Depends}, linux-cloud-tools-common -Description: Linux kernel version specific cloud tools for version 4.15.0-1020 +Description: Linux kernel version specific cloud tools for version 4.15.0-1021 This package provides the architecture dependant parts for kernel - version locked tools for cloud tools for version 4.15.0-1020 on + version locked tools for cloud tools for version 4.15.0-1021 on 64 bit x86. - You probably want to install linux-cloud-tools-4.15.0-1020-. + You probably want to install linux-cloud-tools-4.15.0-1021-. 
-Package: linux-image-unsigned-4.15.0-1020-azure +Package: linux-image-unsigned-4.15.0-1021-azure Build-Profiles: Architecture: amd64 Section: kernel Priority: optional Provides: linux-image, fuse-module, aufs-dkms, kvm-api-4, redhat-cluster-modules, ivtv-modules, virtualbox-guest-modules [amd64]${linux:rprovides} -Depends: ${misc:Depends}, ${shlibs:Depends}, kmod, linux-base (>= 4.5ubuntu1~16.04.1), linux-modules-4.15.0-1020-azure +Depends: ${misc:Depends}, ${shlibs:Depends}, kmod, linux-base (>= 4.5ubuntu1~16.04.1), linux-modules-4.15.0-1021-azure Recommends: grub-pc [amd64] | grub-efi-amd64 [amd64] | grub-efi-ia32 [amd64] | grub [amd64] Breaks: flash-kernel (<< 3.0~rc.4ubuntu62.2) [arm64] -Conflicts: linux-image-4.15.0-1020-azure -Suggests: fdutils, linux-azure-doc-4.15.0 | linux-azure-source-4.15.0, linux-azure-tools, linux-headers-4.15.0-1020-azure, initramfs-tools | linux-initramfs-tool +Conflicts: linux-image-4.15.0-1021-azure +Suggests: fdutils, linux-azure-doc-4.15.0 | linux-azure-source-4.15.0, linux-azure-tools, linux-headers-4.15.0-1021-azure, initramfs-tools | linux-initramfs-tool Description: Linux kernel image for version 4.15.0 on 64 bit x86 SMP This package contains the unsigned Linux kernel image for version 4.15.0 on 64 bit x86 SMP. @@ -108,7 +108,7 @@ the linux-azure meta-package, which will ensure that upgrades work correctly, and that supporting packages are also installed. -Package: linux-modules-4.15.0-1020-azure +Package: linux-modules-4.15.0-1021-azure Build-Profiles: Architecture: amd64 Section: kernel @@ -127,12 +127,12 @@ the linux-azure meta-package, which will ensure that upgrades work correctly, and that supporting packages are also installed. 
-Package: linux-modules-extra-4.15.0-1020-azure +Package: linux-modules-extra-4.15.0-1021-azure Build-Profiles: Architecture: amd64 Section: kernel Priority: optional -Depends: ${misc:Depends}, ${shlibs:Depends}, linux-image-4.15.0-1020-azure | linux-image-unsigned-4.15.0-1020-azure, crda | wireless-crda +Depends: ${misc:Depends}, ${shlibs:Depends}, linux-image-4.15.0-1021-azure | linux-image-unsigned-4.15.0-1021-azure, crda | wireless-crda Description: Linux kernel extra modules for version 4.15.0 on 64 bit x86 SMP This package contains the Linux kernel extra modules for version 4.15.0 on 64 bit x86 SMP. @@ -149,21 +149,21 @@ the linux-azure meta-package, which will ensure that upgrades work correctly, and that supporting packages are also installed. -Package: linux-headers-4.15.0-1020-azure +Package: linux-headers-4.15.0-1021-azure Build-Profiles: Architecture: amd64 Section: devel Priority: optional -Depends: ${misc:Depends}, linux-azure-headers-4.15.0-1020, ${shlibs:Depends} +Depends: ${misc:Depends}, linux-azure-headers-4.15.0-1021, ${shlibs:Depends} Provides: linux-headers, linux-headers-3.0 Description: Linux kernel headers for version 4.15.0 on 64 bit x86 SMP This package provides kernel header files for version 4.15.0 on 64 bit x86 SMP. . This is for sites that want the latest kernel headers. Please read - /usr/share/doc/linux-headers-4.15.0-1020/debian.README.gz for details. + /usr/share/doc/linux-headers-4.15.0-1021/debian.README.gz for details. -Package: linux-image-unsigned-4.15.0-1020-azure-dbgsym +Package: linux-image-unsigned-4.15.0-1021-azure-dbgsym Build-Profiles: Architecture: amd64 Section: devel @@ -180,27 +180,27 @@ is uncompressed, and unstripped. This package also includes the unstripped modules. 
-Package: linux-tools-4.15.0-1020-azure +Package: linux-tools-4.15.0-1021-azure Build-Profiles: Architecture: amd64 Section: devel Priority: optional -Depends: ${misc:Depends}, linux-azure-tools-4.15.0-1020 -Description: Linux kernel version specific tools for version 4.15.0-1020 +Depends: ${misc:Depends}, linux-azure-tools-4.15.0-1021 +Description: Linux kernel version specific tools for version 4.15.0-1021 This package provides the architecture dependant parts for kernel version locked tools (such as perf and x86_energy_perf_policy) for - version 4.15.0-1020 on + version 4.15.0-1021 on 64 bit x86. -Package: linux-cloud-tools-4.15.0-1020-azure +Package: linux-cloud-tools-4.15.0-1021-azure Build-Profiles: Architecture: amd64 Section: devel Priority: optional -Depends: ${misc:Depends}, linux-azure-cloud-tools-4.15.0-1020 -Description: Linux kernel version specific cloud tools for version 4.15.0-1020 +Depends: ${misc:Depends}, linux-azure-cloud-tools-4.15.0-1021 +Description: Linux kernel version specific cloud tools for version 4.15.0-1021 This package provides the architecture dependant parts for kernel - version locked tools for cloud for version 4.15.0-1020 on + version locked tools for cloud for version 4.15.0-1021 on 64 bit x86. Package: linux-udebs-azure reverted: --- linux-azure-4.15.0/drivers/acpi/button.c +++ linux-azure-4.15.0.orig/drivers/acpi/button.c @@ -613,26 +613,4 @@ NULL, 0644); MODULE_PARM_DESC(lid_init_state, "Behavior for reporting LID initial state"); +module_acpi_driver(acpi_button_driver); -static int acpi_button_register_driver(struct acpi_driver *driver) -{ - /* - * Modules such as nouveau.ko and i915.ko have a link time dependency - * on acpi_lid_open(), and would therefore not be loadable on ACPI - * capable kernels booted in non-ACPI mode if the return value of - * acpi_bus_register_driver() is returned from here with ACPI disabled - * when this driver is built as a module. 
- */ - if (acpi_disabled) - return 0; - - return acpi_bus_register_driver(driver); -} - -static void acpi_button_unregister_driver(struct acpi_driver *driver) -{ - if (!acpi_disabled) - acpi_bus_unregister_driver(driver); -} - -module_driver(acpi_button_driver, acpi_button_register_driver, - acpi_button_unregister_driver); reverted: --- linux-azure-4.15.0/drivers/amba/bus.c +++ linux-azure-4.15.0.orig/drivers/amba/bus.c @@ -69,12 +69,11 @@ struct device_attribute *attr, char *buf) { struct amba_device *dev = to_amba_device(_dev); - ssize_t len; + if (!dev->driver_override) + return 0; + + return sprintf(buf, "%s\n", dev->driver_override); - device_lock(_dev); - len = sprintf(buf, "%s\n", dev->driver_override); - device_unlock(_dev); - return len; } static ssize_t driver_override_store(struct device *_dev, @@ -82,10 +81,9 @@ const char *buf, size_t count) { struct amba_device *dev = to_amba_device(_dev); + char *driver_override, *old = dev->driver_override, *cp; - char *driver_override, *old, *cp; + if (count > PATH_MAX) - /* We need to keep extra room for a newline */ - if (count >= (PAGE_SIZE - 1)) return -EINVAL; driver_override = kstrndup(buf, count, GFP_KERNEL); @@ -96,15 +94,12 @@ if (cp) *cp = '\0'; - device_lock(_dev); - old = dev->driver_override; if (strlen(driver_override)) { dev->driver_override = driver_override; } else { kfree(driver_override); dev->driver_override = NULL; } - device_unlock(_dev); kfree(old); diff -u linux-azure-4.15.0/drivers/android/binder.c linux-azure-4.15.0/drivers/android/binder.c --- linux-azure-4.15.0/drivers/android/binder.c +++ linux-azure-4.15.0/drivers/android/binder.c @@ -2785,14 +2785,6 @@ else return_error = BR_DEAD_REPLY; mutex_unlock(&context->context_mgr_node_lock); - if (target_node && target_proc == proc) { - binder_user_error("%d:%d got transaction to context manager from process owning it\n", - proc->pid, thread->pid); - return_error = BR_FAILED_REPLY; - return_error_param = -EINVAL; - return_error_line = 
__LINE__; - goto err_invalid_target_handle; - } } if (!target_node) { /* diff -u linux-azure-4.15.0/drivers/ata/libata-core.c linux-azure-4.15.0/drivers/ata/libata-core.c --- linux-azure-4.15.0/drivers/ata/libata-core.c +++ linux-azure-4.15.0/drivers/ata/libata-core.c @@ -4549,9 +4549,6 @@ ATA_HORKAGE_ZERO_AFTER_TRIM | ATA_HORKAGE_NOLPM, }, - /* Sandisk devices which are known to not handle LPM well */ - { "SanDisk SD7UB3Q*G1001", NULL, ATA_HORKAGE_NOLPM, }, - /* devices that don't properly handle queued TRIM commands */ { "Micron_M500_*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | ATA_HORKAGE_ZERO_AFTER_TRIM, }, reverted: --- linux-azure-4.15.0/drivers/atm/zatm.c +++ linux-azure-4.15.0.orig/drivers/atm/zatm.c @@ -28,7 +28,6 @@ #include #include #include -#include #include "uPD98401.h" #include "uPD98402.h" @@ -1459,8 +1458,6 @@ return -EFAULT; if (pool < 0 || pool > ZATM_LAST_POOL) return -EINVAL; - pool = array_index_nospec(pool, - ZATM_LAST_POOL + 1); spin_lock_irqsave(&zatm_dev->lock, flags); info = zatm_dev->pool_info[pool]; if (cmd == ZATM_GETPOOLZ) { diff -u linux-azure-4.15.0/drivers/base/cpu.c linux-azure-4.15.0/drivers/base/cpu.c --- linux-azure-4.15.0/drivers/base/cpu.c +++ linux-azure-4.15.0/drivers/base/cpu.c @@ -537,16 +537,24 @@ return sprintf(buf, "Not affected\n"); } +ssize_t __weak cpu_show_l1tf(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return sprintf(buf, "Not affected\n"); +} + static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL); static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL); +static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL); static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_meltdown.attr, &dev_attr_spectre_v1.attr, &dev_attr_spectre_v2.attr, &dev_attr_spec_store_bypass.attr, + &dev_attr_l1tf.attr, NULL }; diff -u 
linux-azure-4.15.0/drivers/block/loop.c linux-azure-4.15.0/drivers/block/loop.c --- linux-azure-4.15.0/drivers/block/loop.c +++ linux-azure-4.15.0/drivers/block/loop.c @@ -1224,17 +1224,21 @@ static int loop_get_status(struct loop_device *lo, struct loop_info64 *info) { - struct file *file; + struct file *file = lo->lo_backing_file; struct kstat stat; - int ret; + int error; - if (lo->lo_state != Lo_bound) { - mutex_unlock(&lo->lo_ctl_mutex); + if (lo->lo_state != Lo_bound) return -ENXIO; - } - + error = vfs_getattr(&file->f_path, &stat, + STATX_INO, AT_STATX_SYNC_AS_STAT); + if (error) + return error; memset(info, 0, sizeof(*info)); info->lo_number = lo->lo_number; + info->lo_device = huge_encode_dev(stat.dev); + info->lo_inode = stat.ino; + info->lo_rdevice = huge_encode_dev(lo->lo_device ? stat.rdev : stat.dev); info->lo_offset = lo->lo_offset; info->lo_sizelimit = lo->lo_sizelimit; info->lo_flags = lo->lo_flags; @@ -1247,19 +1251,7 @@ memcpy(info->lo_encrypt_key, lo->lo_encrypt_key, lo->lo_encrypt_key_size); } - - /* Drop lo_ctl_mutex while we call into the filesystem. 
*/ - file = get_file(lo->lo_backing_file); - mutex_unlock(&lo->lo_ctl_mutex); - ret = vfs_getattr(&file->f_path, &stat, STATX_INO, - AT_STATX_SYNC_AS_STAT); - if (!ret) { - info->lo_device = huge_encode_dev(stat.dev); - info->lo_inode = stat.ino; - info->lo_rdevice = huge_encode_dev(stat.rdev); - } - fput(file); - return ret; + return 0; } static void @@ -1340,13 +1332,12 @@ loop_get_status_old(struct loop_device *lo, struct loop_info __user *arg) { struct loop_info info; struct loop_info64 info64; - int err; + int err = 0; - if (!arg) { - mutex_unlock(&lo->lo_ctl_mutex); - return -EINVAL; - } - err = loop_get_status(lo, &info64); + if (!arg) + err = -EINVAL; + if (!err) + err = loop_get_status(lo, &info64); if (!err) err = loop_info64_to_old(&info64, &info); if (!err && copy_to_user(arg, &info, sizeof(info))) @@ -1358,13 +1349,12 @@ static int loop_get_status64(struct loop_device *lo, struct loop_info64 __user *arg) { struct loop_info64 info64; - int err; + int err = 0; - if (!arg) { - mutex_unlock(&lo->lo_ctl_mutex); - return -EINVAL; - } - err = loop_get_status(lo, &info64); + if (!arg) + err = -EINVAL; + if (!err) + err = loop_get_status(lo, &info64); if (!err && copy_to_user(arg, &info64, sizeof(info64))) err = -EFAULT; @@ -1441,8 +1431,7 @@ break; case LOOP_GET_STATUS: err = loop_get_status_old(lo, (struct loop_info __user *) arg); - /* loop_get_status() unlocks lo_ctl_mutex */ - goto out_unlocked; + break; case LOOP_SET_STATUS64: err = -EPERM; if ((mode & FMODE_WRITE) || capable(CAP_SYS_ADMIN)) @@ -1451,8 +1440,7 @@ break; case LOOP_GET_STATUS64: err = loop_get_status64(lo, (struct loop_info64 __user *) arg); - /* loop_get_status() unlocks lo_ctl_mutex */ - goto out_unlocked; + break; case LOOP_SET_CAPACITY: err = -EPERM; if ((mode & FMODE_WRITE) || capable(CAP_SYS_ADMIN)) @@ -1585,13 +1573,12 @@ struct compat_loop_info __user *arg) { struct loop_info64 info64; - int err; + int err = 0; - if (!arg) { - mutex_unlock(&lo->lo_ctl_mutex); - return -EINVAL; - } - 
err = loop_get_status(lo, &info64); + if (!arg) + err = -EINVAL; + if (!err) + err = loop_get_status(lo, &info64); if (!err) err = loop_info64_to_compat(&info64, arg); return err; @@ -1614,7 +1601,7 @@ mutex_lock(&lo->lo_ctl_mutex); err = loop_get_status_compat( lo, (struct compat_loop_info __user *) arg); - /* loop_get_status() unlocks lo_ctl_mutex */ + mutex_unlock(&lo->lo_ctl_mutex); break; case LOOP_SET_CAPACITY: case LOOP_CLR_FD: diff -u linux-azure-4.15.0/drivers/bluetooth/btusb.c linux-azure-4.15.0/drivers/bluetooth/btusb.c --- linux-azure-4.15.0/drivers/bluetooth/btusb.c +++ linux-azure-4.15.0/drivers/bluetooth/btusb.c @@ -230,7 +230,6 @@ { USB_DEVICE(0x0930, 0x0227), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 }, - { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 }, { USB_DEVICE(0x0cf3, 0x311e), .driver_info = BTUSB_ATH3012 }, @@ -263,6 +262,7 @@ { USB_DEVICE(0x0489, 0xe03c), .driver_info = BTUSB_ATH3012 }, /* QCA ROME chipset */ + { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_QCA_ROME }, { USB_DEVICE(0x0cf3, 0xe007), .driver_info = BTUSB_QCA_ROME }, { USB_DEVICE(0x0cf3, 0xe009), .driver_info = BTUSB_QCA_ROME }, { USB_DEVICE(0x0cf3, 0xe010), .driver_info = BTUSB_QCA_ROME }, @@ -339,7 +339,6 @@ /* Intel Bluetooth devices */ { USB_DEVICE(0x8087, 0x0025), .driver_info = BTUSB_INTEL_NEW }, - { USB_DEVICE(0x8087, 0x0026), .driver_info = BTUSB_INTEL_NEW }, { USB_DEVICE(0x8087, 0x07da), .driver_info = BTUSB_CSR }, { USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL }, { USB_DEVICE(0x8087, 0x0a2a), .driver_info = BTUSB_INTEL }, @@ -367,9 +366,6 @@ { USB_DEVICE(0x13d3, 0x3459), .driver_info = BTUSB_REALTEK }, { USB_DEVICE(0x13d3, 0x3494), .driver_info = BTUSB_REALTEK }, - /* Additional Realtek 8723BU Bluetooth 
devices */ - { USB_DEVICE(0x7392, 0xa611), .driver_info = BTUSB_REALTEK }, - /* Additional Realtek 8821AE Bluetooth devices */ { USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK }, { USB_DEVICE(0x13d3, 0x3414), .driver_info = BTUSB_REALTEK }, @@ -377,9 +373,6 @@ { USB_DEVICE(0x13d3, 0x3461), .driver_info = BTUSB_REALTEK }, { USB_DEVICE(0x13d3, 0x3462), .driver_info = BTUSB_REALTEK }, - /* Additional Realtek 8822BE Bluetooth devices */ - { USB_DEVICE(0x0b05, 0x185c), .driver_info = BTUSB_REALTEK }, - /* Silicon Wave based devices */ { USB_DEVICE(0x0c10, 0x0000), .driver_info = BTUSB_SWAVE }, @@ -2084,8 +2077,6 @@ case 0x0c: /* WsP */ case 0x11: /* JfP */ case 0x12: /* ThP */ - case 0x13: /* HrP */ - case 0x14: /* QnJ, IcP */ break; default: BT_ERR("%s: Unsupported Intel hardware variant (%u)", @@ -2209,8 +2200,6 @@ break; case 0x11: /* JfP */ case 0x12: /* ThP */ - case 0x13: /* HrP */ - case 0x14: /* QnJ, IcP */ snprintf(fwname, sizeof(fwname), "intel/ibt-%u-%u-%u.sfi", le16_to_cpu(ver.hw_variant), le16_to_cpu(ver.hw_revision), @@ -2243,8 +2232,6 @@ break; case 0x11: /* JfP */ case 0x12: /* ThP */ - case 0x13: /* HrP */ - case 0x14: /* QnJ, IcP */ snprintf(fwname, sizeof(fwname), "intel/ibt-%u-%u-%u.ddc", le16_to_cpu(ver.hw_variant), le16_to_cpu(ver.hw_revision), @@ -2585,9 +2572,11 @@ { 0x00000302, 28, 4, 18 }, /* Rome 3.2 */ }; -static int btusb_qca_send_vendor_req(struct usb_device *udev, u8 request, +static int btusb_qca_send_vendor_req(struct hci_dev *hdev, u8 request, void *data, u16 size) { + struct btusb_data *btdata = hci_get_drvdata(hdev); + struct usb_device *udev = btdata->udev; int pipe, err; u8 *buf; @@ -2602,7 +2591,7 @@ err = usb_control_msg(udev, pipe, request, USB_TYPE_VENDOR | USB_DIR_IN, 0, 0, buf, size, USB_CTRL_SET_TIMEOUT); if (err < 0) { - dev_err(&udev->dev, "Failed to access otp area (%d)", err); + bt_dev_err(hdev, "Failed to access otp area (%d)", err); goto done; } @@ -2752,38 +2741,20 @@ return err; } -/* identify the ROM 
version and check whether patches are needed */ -static bool btusb_qca_need_patch(struct usb_device *udev) -{ - struct qca_version ver; - - if (btusb_qca_send_vendor_req(udev, QCA_GET_TARGET_VERSION, &ver, - sizeof(ver)) < 0) - return false; - /* only low ROM versions need patches */ - return !(le32_to_cpu(ver.rom_version) & ~0xffffU); -} - static int btusb_setup_qca(struct hci_dev *hdev) { - struct btusb_data *btdata = hci_get_drvdata(hdev); - struct usb_device *udev = btdata->udev; const struct qca_device_info *info = NULL; struct qca_version ver; u32 ver_rom; u8 status; int i, err; - err = btusb_qca_send_vendor_req(udev, QCA_GET_TARGET_VERSION, &ver, + err = btusb_qca_send_vendor_req(hdev, QCA_GET_TARGET_VERSION, &ver, sizeof(ver)); if (err < 0) return err; ver_rom = le32_to_cpu(ver.rom_version); - /* Don't care about high ROM versions */ - if (ver_rom & ~0xffffU) - return 0; - for (i = 0; i < ARRAY_SIZE(qca_devices_table); i++) { if (ver_rom == qca_devices_table[i].rom_version) info = &qca_devices_table[i]; @@ -2793,7 +2764,7 @@ return -ENODEV; } - err = btusb_qca_send_vendor_req(udev, QCA_CHECK_STATUS, &status, + err = btusb_qca_send_vendor_req(hdev, QCA_CHECK_STATUS, &status, sizeof(status)); if (err < 0) return err; @@ -2963,12 +2934,6 @@ } #endif -static void btusb_check_needs_reset_resume(struct usb_interface *intf) -{ - if (dmi_check_system(btusb_needs_reset_resume_table)) - interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME; -} - static int btusb_probe(struct usb_interface *intf, const struct usb_device_id *id) { @@ -3007,8 +2972,7 @@ /* Old firmware would otherwise let ath3k driver load * patch and sysconfig files */ - if (le16_to_cpu(udev->descriptor.bcdDevice) <= 0x0001 && - !btusb_qca_need_patch(udev)) + if (le16_to_cpu(udev->descriptor.bcdDevice) <= 0x0001) return -ENODEV; } @@ -3092,6 +3056,9 @@ hdev->send = btusb_send_frame; hdev->notify = btusb_notify; + if (dmi_check_system(btusb_needs_reset_resume_table)) + 
interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME; + #ifdef CONFIG_PM err = btusb_config_oob_wake(hdev); if (err) @@ -3170,7 +3137,6 @@ } if (id->driver_info & BTUSB_ATH3012) { - data->setup_on_usb = btusb_setup_qca; hdev->set_bdaddr = btusb_set_bdaddr_ath3012; set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks); set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks); @@ -3179,7 +3145,6 @@ if (id->driver_info & BTUSB_QCA_ROME) { data->setup_on_usb = btusb_setup_qca; hdev->set_bdaddr = btusb_set_bdaddr_ath3012; - btusb_check_needs_reset_resume(intf); } #ifdef CONFIG_BT_HCIBTUSB_RTL diff -u linux-azure-4.15.0/drivers/char/random.c linux-azure-4.15.0/drivers/char/random.c --- linux-azure-4.15.0/drivers/char/random.c +++ linux-azure-4.15.0/drivers/char/random.c @@ -261,7 +261,6 @@ #include #include #include -#include #include #include #include @@ -439,16 +438,6 @@ static void process_random_ready_list(void); static void _get_random_bytes(void *buf, int nbytes); -static struct ratelimit_state unseeded_warning = - RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); -static struct ratelimit_state urandom_warning = - RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); - -static int ratelimit_disable __read_mostly; - -module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); -MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); - /********************************************************************** * * OS independent entropy store. 
Here are the functions which handle @@ -798,39 +787,6 @@ crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1; } -#ifdef CONFIG_NUMA -static void do_numa_crng_init(struct work_struct *work) -{ - int i; - struct crng_state *crng; - struct crng_state **pool; - - pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL); - for_each_online_node(i) { - crng = kmalloc_node(sizeof(struct crng_state), - GFP_KERNEL | __GFP_NOFAIL, i); - spin_lock_init(&crng->lock); - crng_initialize(crng); - pool[i] = crng; - } - mb(); - if (cmpxchg(&crng_node_pool, NULL, pool)) { - for_each_node(i) - kfree(pool[i]); - kfree(pool); - } -} - -static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init); - -static void numa_crng_init(void) -{ - schedule_work(&numa_crng_init_work); -} -#else -static void numa_crng_init(void) {} -#endif - /* * crng_fast_load() can be called by code in the interrupt service * path. So we can't afford to dilly-dally. @@ -937,23 +893,10 @@ spin_unlock_irqrestore(&crng->lock, flags); if (crng == &primary_crng && crng_init < 2) { invalidate_batched_entropy(); - numa_crng_init(); crng_init = 2; process_random_ready_list(); wake_up_interruptible(&crng_init_wait); pr_notice("random: crng init done\n"); - if (unseeded_warning.missed) { - pr_notice("random: %d get_random_xx warning(s) missed " - "due to ratelimiting\n", - unseeded_warning.missed); - unseeded_warning.missed = 0; - } - if (urandom_warning.missed) { - pr_notice("random: %d urandom warning(s) missed " - "due to ratelimiting\n", - urandom_warning.missed); - urandom_warning.missed = 0; - } } } @@ -1597,9 +1540,8 @@ #ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM print_once = true; #endif - if (__ratelimit(&unseeded_warning)) - pr_notice("random: %s called from %pS with crng_init=%d\n", - func_name, caller, crng_init); + pr_notice("random: %s called from %pS with crng_init=%d\n", + func_name, caller, crng_init); } /* @@ -1789,14 +1731,29 @@ */ static int rand_initialize(void) { +#ifdef CONFIG_NUMA + int i; + 
 	struct crng_state *crng;
+	struct crng_state **pool;
+#endif
+
 	init_std_data(&input_pool);
 	init_std_data(&blocking_pool);
 	crng_initialize(&primary_crng);
 	crng_global_init_time = jiffies;
-	if (ratelimit_disable) {
-		urandom_warning.interval = 0;
-		unseeded_warning.interval = 0;
+
+#ifdef CONFIG_NUMA
+	pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL);
+	for_each_online_node(i) {
+		crng = kmalloc_node(sizeof(struct crng_state),
+				    GFP_KERNEL | __GFP_NOFAIL, i);
+		spin_lock_init(&crng->lock);
+		crng_initialize(crng);
+		pool[i] = crng;
 	}
+	mb();
+	crng_node_pool = pool;
+#endif
 	return 0;
 }
 early_initcall(rand_initialize);
@@ -1864,10 +1821,9 @@
 	if (!crng_ready() && maxwarn > 0) {
 		maxwarn--;
-		if (__ratelimit(&urandom_warning))
-			printk(KERN_NOTICE "random: %s: uninitialized "
-			       "urandom read (%zd bytes read)\n",
-			       current->comm, nbytes);
+		printk(KERN_NOTICE "random: %s: uninitialized urandom read "
+		       "(%zd bytes read)\n",
+		       current->comm, nbytes);
 		spin_lock_irqsave(&primary_crng.lock, flags);
 		crng_init_cnt = 0;
 		spin_unlock_irqrestore(&primary_crng.lock, flags);
reverted:
--- linux-azure-4.15.0/drivers/char/virtio_console.c
+++ linux-azure-4.15.0.orig/drivers/char/virtio_console.c
@@ -422,7 +422,7 @@
 	}
 }
+static struct port_buffer *alloc_buf(struct virtqueue *vq, size_t buf_size,
-static struct port_buffer *alloc_buf(struct virtio_device *vdev, size_t buf_size,
 				     int pages)
 {
 	struct port_buffer *buf;
@@ -445,16 +445,16 @@
 		return buf;
 	}
+	if (is_rproc_serial(vq->vdev)) {
-	if (is_rproc_serial(vdev)) {
 		/*
 		 * Allocate DMA memory from ancestor. When a virtio
 		 * device is created by remoteproc, the DMA memory is
 		 * associated with the grandparent device:
 		 * vdev => rproc => platform-dev.
 		 */
+		if (!vq->vdev->dev.parent || !vq->vdev->dev.parent->parent)
-		if (!vdev->dev.parent || !vdev->dev.parent->parent)
 			goto free_buf;
+		buf->dev = vq->vdev->dev.parent->parent;
-		buf->dev = vdev->dev.parent->parent;
 		/* Increase device refcnt to avoid freeing it */
 		get_device(buf->dev);
@@ -838,7 +838,7 @@
 	count = min((size_t)(32 * 1024), count);
+	buf = alloc_buf(port->out_vq, count, 0);
-	buf = alloc_buf(port->portdev->vdev, count, 0);
 	if (!buf)
 		return -ENOMEM;
@@ -957,7 +957,7 @@
 	if (ret < 0)
 		goto error_out;
+	buf = alloc_buf(port->out_vq, 0, pipe->nrbufs);
-	buf = alloc_buf(port->portdev->vdev, 0, pipe->nrbufs);
 	if (!buf) {
 		ret = -ENOMEM;
 		goto error_out;
@@ -1374,7 +1374,7 @@
 	nr_added_bufs = 0;
 	do {
+		buf = alloc_buf(vq, PAGE_SIZE, 0);
-		buf = alloc_buf(vq->vdev, PAGE_SIZE, 0);
 		if (!buf)
 			break;
@@ -1402,6 +1402,7 @@
 {
 	char debugfs_name[16];
 	struct port *port;
+	struct port_buffer *buf;
 	dev_t devt;
 	unsigned int nr_added_bufs;
 	int err;
@@ -1512,6 +1513,8 @@
 	return 0;
 free_inbufs:
+	while ((buf = virtqueue_detach_unused_buf(port->in_vq)))
+		free_buf(buf, true);
 free_device:
 	device_destroy(pdrvdata.class, port->dev->devt);
 free_cdev:
@@ -1536,14 +1539,34 @@
 static void remove_port_data(struct port *port)
 {
+	struct port_buffer *buf;
+
 	spin_lock_irq(&port->inbuf_lock);
 	/* Remove unused data this port might have received. */
 	discard_port_data(port);
 	spin_unlock_irq(&port->inbuf_lock);
+	/* Remove buffers we queued up for the Host to send us data in. */
+	do {
+		spin_lock_irq(&port->inbuf_lock);
+		buf = virtqueue_detach_unused_buf(port->in_vq);
+		spin_unlock_irq(&port->inbuf_lock);
+		if (buf)
+			free_buf(buf, true);
+	} while (buf);
+
 	spin_lock_irq(&port->outvq_lock);
 	reclaim_consumed_buffers(port);
 	spin_unlock_irq(&port->outvq_lock);
+
+	/* Free pending buffers from the out-queue. */
+	do {
+		spin_lock_irq(&port->outvq_lock);
+		buf = virtqueue_detach_unused_buf(port->out_vq);
+		spin_unlock_irq(&port->outvq_lock);
+		if (buf)
+			free_buf(buf, true);
+	} while (buf);
 }
 /*
@@ -1768,24 +1791,13 @@
 	spin_unlock(&portdev->c_ivq_lock);
 }
-static void flush_bufs(struct virtqueue *vq, bool can_sleep)
-{
-	struct port_buffer *buf;
-	unsigned int len;
-
-	while ((buf = virtqueue_get_buf(vq, &len)))
-		free_buf(buf, can_sleep);
-}
-
 static void out_intr(struct virtqueue *vq)
 {
 	struct port *port;
 	port = find_port_by_vq(vq->vdev->priv, vq);
+	if (!port)
-	if (!port) {
-		flush_bufs(vq, false);
 		return;
-	}
 	wake_up_interruptible(&port->waitqueue);
 }
@@ -1796,10 +1808,8 @@
 	unsigned long flags;
 	port = find_port_by_vq(vq->vdev->priv, vq);
+	if (!port)
-	if (!port) {
-		flush_bufs(vq, false);
 		return;
-	}
 	spin_lock_irqsave(&port->inbuf_lock, flags);
 	port->inbuf = get_inbuf(port);
@@ -1974,54 +1984,24 @@
 static void remove_vqs(struct ports_device *portdev)
 {
-	struct virtqueue *vq;
-
-	virtio_device_for_each_vq(portdev->vdev, vq) {
-		struct port_buffer *buf;
-
-		flush_bufs(vq, true);
-		while ((buf = virtqueue_detach_unused_buf(vq)))
-			free_buf(buf, true);
-	}
 	portdev->vdev->config->del_vqs(portdev->vdev);
 	kfree(portdev->in_vqs);
 	kfree(portdev->out_vqs);
 }
+static void remove_controlq_data(struct ports_device *portdev)
-static void virtcons_remove(struct virtio_device *vdev)
 {
+	struct port_buffer *buf;
+	unsigned int len;
-	struct ports_device *portdev;
-	struct port *port, *port2;
+	if (!use_multiport(portdev))
+		return;
-	portdev = vdev->priv;
+	while ((buf = virtqueue_get_buf(portdev->c_ivq, &len)))
+		free_buf(buf, true);
-	spin_lock_irq(&pdrvdata_lock);
-	list_del(&portdev->list);
-	spin_unlock_irq(&pdrvdata_lock);
+	while ((buf = virtqueue_detach_unused_buf(portdev->c_ivq)))
+		free_buf(buf, true);
-	/* Disable interrupts for vqs */
-	vdev->config->reset(vdev);
-	/* Finish up work that's lined up */
-	if (use_multiport(portdev))
-		cancel_work_sync(&portdev->control_work);
-	else
-		cancel_work_sync(&portdev->config_work);
-
-	list_for_each_entry_safe(port, port2, &portdev->ports, list)
-		unplug_port(port);
-
-	unregister_chrdev(portdev->chr_major, "virtio-portsdev");
-
-	/*
-	 * When yanking out a device, we immediately lose the
-	 * (device-side) queues. So there's no point in keeping the
-	 * guest side around till we drop our final reference. This
-	 * also means that any ports which are in an open state will
-	 * have to just stop using the port, as the vqs are going
-	 * away.
-	 */
-	remove_vqs(portdev);
-	kfree(portdev);
 }
 /*
@@ -2090,7 +2070,6 @@
 	spin_lock_init(&portdev->ports_lock);
 	INIT_LIST_HEAD(&portdev->ports);
-	INIT_LIST_HEAD(&portdev->list);
 	virtio_device_ready(portdev->vdev);
@@ -2108,15 +2087,8 @@
 		if (!nr_added_bufs) {
 			dev_err(&vdev->dev,
 				"Error allocating buffers for control queue\n");
+			err = -ENOMEM;
+			goto free_vqs;
-			/*
-			 * The host might want to notify mgmt sw about device
-			 * add failure.
-			 */
-			__send_control_msg(portdev, VIRTIO_CONSOLE_BAD_ID,
-					   VIRTIO_CONSOLE_DEVICE_READY, 0);
-			/* Device was functional: we need full cleanup. */
-			virtcons_remove(vdev);
-			return -ENOMEM;
 		}
 	} else {
 		/*
@@ -2147,6 +2119,11 @@
 	return 0;
+free_vqs:
+	/* The host might want to notify mgmt sw about device add failure */
+	__send_control_msg(portdev, VIRTIO_CONSOLE_BAD_ID,
+			   VIRTIO_CONSOLE_DEVICE_READY, 0);
+	remove_vqs(portdev);
 free_chrdev:
 	unregister_chrdev(portdev->chr_major, "virtio-portsdev");
 free:
@@ -2155,6 +2132,43 @@
 	return err;
 }
+static void virtcons_remove(struct virtio_device *vdev)
+{
+	struct ports_device *portdev;
+	struct port *port, *port2;
+
+	portdev = vdev->priv;
+
+	spin_lock_irq(&pdrvdata_lock);
+	list_del(&portdev->list);
+	spin_unlock_irq(&pdrvdata_lock);
+
+	/* Disable interrupts for vqs */
+	vdev->config->reset(vdev);
+	/* Finish up work that's lined up */
+	if (use_multiport(portdev))
+		cancel_work_sync(&portdev->control_work);
+	else
+		cancel_work_sync(&portdev->config_work);
+
+	list_for_each_entry_safe(port, port2, &portdev->ports, list)
+		unplug_port(port);
+
+	unregister_chrdev(portdev->chr_major, "virtio-portsdev");
+
+	/*
+	 * When yanking out a device, we immediately lose the
+	 * (device-side) queues. So there's no point in keeping the
+	 * guest side around till we drop our final reference. This
+	 * also means that any ports which are in an open state will
+	 * have to just stop using the port, as the vqs are going
+	 * away.
+	 */
+	remove_controlq_data(portdev);
+	remove_vqs(portdev);
+	kfree(portdev);
+}
+
 static struct virtio_device_id id_table[] = {
 	{ VIRTIO_ID_CONSOLE, VIRTIO_DEV_ANY_ID },
 	{ 0 },
@@ -2195,6 +2209,7 @@
 	 */
 	if (use_multiport(portdev))
 		virtqueue_disable_cb(portdev->c_ivq);
+	remove_controlq_data(portdev);
 	list_for_each_entry(port, &portdev->ports, list) {
 		virtqueue_disable_cb(port->in_vq);
diff -u linux-azure-4.15.0/drivers/clk/clk.c linux-azure-4.15.0/drivers/clk/clk.c
--- linux-azure-4.15.0/drivers/clk/clk.c
+++ linux-azure-4.15.0/drivers/clk/clk.c
@@ -2048,9 +2048,6 @@
 	int ret;
 	clk_prepare_lock();
-	/* Always try to update cached phase if possible */
-	if (core->ops->get_phase)
-		core->phase = core->ops->get_phase(core->hw);
 	ret = core->phase;
 	clk_prepare_unlock();
reverted:
--- linux-azure-4.15.0/drivers/clk/hisilicon/crg-hi3516cv300.c
+++ linux-azure-4.15.0.orig/drivers/clk/hisilicon/crg-hi3516cv300.c
@@ -204,7 +204,7 @@
 /* hi3516CV300 sysctrl CRG */
 #define HI3516CV300_SYSCTRL_NR_CLKS 16
+static const char *wdt_mux_p[] __initconst = { "3m", "apb" };
-static const char *const wdt_mux_p[] __initconst = { "3m", "apb" };
 static u32 wdt_mux_table[] = {0, 1};
 static const struct hisi_mux_clock hi3516cv300_sysctrl_mux_clks[] = {
reverted:
--- linux-azure-4.15.0/drivers/clk/rockchip/clk-mmc-phase.c
+++ linux-azure-4.15.0.orig/drivers/clk/rockchip/clk-mmc-phase.c
@@ -58,12 +58,6 @@
 	u16 degrees;
 	u32 delay_num = 0;
-	/* See the comment for rockchip_mmc_set_phase below */
-	if (!rate) {
-		pr_err("%s: invalid clk rate\n", __func__);
-		return -EINVAL;
-	}
-
 	raw_value = readl(mmc_clock->reg) >> (mmc_clock->shift);
 	degrees = (raw_value & ROCKCHIP_MMC_DEGREE_MASK) * 90;
@@ -90,23 +84,6 @@
 	u32 raw_value;
 	u32 delay;
-	/*
-	 * The below calculation is based on the output clock from
-	 * MMC host to the card, which expects the phase clock inherits
-	 * the clock rate from its parent, namely the output clock
-	 * provider of MMC host. However, things may go wrong if
-	 * (1) It is orphan.
-	 * (2) It is assigned to the wrong parent.
-	 *
-	 * This check help debug the case (1), which seems to be the
-	 * most likely problem we often face and which makes it difficult
-	 * for people to debug unstable mmc tuning results.
-	 */
-	if (!rate) {
-		pr_err("%s: invalid clk rate\n", __func__);
-		return -EINVAL;
-	}
-
 	nineties = degrees / 90;
 	remainder = (degrees % 90);
reverted:
--- linux-azure-4.15.0/drivers/clk/rockchip/clk-rk3228.c
+++ linux-azure-4.15.0.orig/drivers/clk/rockchip/clk-rk3228.c
@@ -387,7 +387,7 @@
 			RK2928_CLKSEL_CON(23), 5, 2, MFLAGS, 0, 6, DFLAGS,
 			RK2928_CLKGATE_CON(2), 15, GFLAGS),
+	COMPOSITE(SCLK_SDMMC, "sclk_sdmmc0", mux_mmc_src_p, 0,
-	COMPOSITE(SCLK_SDMMC, "sclk_sdmmc", mux_mmc_src_p, 0,
 			RK2928_CLKSEL_CON(11), 8, 2, MFLAGS, 0, 8, DFLAGS,
 			RK2928_CLKGATE_CON(2), 11, GFLAGS),
reverted:
--- linux-azure-4.15.0/drivers/clk/samsung/clk-exynos3250.c
+++ linux-azure-4.15.0.orig/drivers/clk/samsung/clk-exynos3250.c
@@ -698,7 +698,7 @@
 	PLL_36XX_RATE(144000000, 96, 2, 3, 0),
 	PLL_36XX_RATE( 96000000, 128, 2, 4, 0),
 	PLL_36XX_RATE( 84000000, 112, 2, 4, 0),
+	PLL_36XX_RATE( 80000004, 106, 2, 4, 43691),
-	PLL_36XX_RATE( 80000003, 106, 2, 4, 43691),
 	PLL_36XX_RATE( 73728000, 98, 2, 4, 19923),
 	PLL_36XX_RATE( 67737598, 270, 3, 5, 62285),
 	PLL_36XX_RATE( 65535999, 174, 2, 5, 49982),
@@ -734,7 +734,7 @@
 	PLL_36XX_RATE(148352005, 98, 2, 3, 59070),
 	PLL_36XX_RATE(108000000, 144, 2, 4, 0),
 	PLL_36XX_RATE( 74250000, 99, 2, 4, 0),
+	PLL_36XX_RATE( 74176002, 98, 3, 4, 59070),
-	PLL_36XX_RATE( 74176002, 98, 2, 4, 59070),
 	PLL_36XX_RATE( 54054000, 216, 3, 5, 14156),
 	PLL_36XX_RATE( 54000000, 144, 2, 5, 0),
 	{ /* sentinel */ }
reverted:
--- linux-azure-4.15.0/drivers/clk/samsung/clk-exynos5250.c
+++ linux-azure-4.15.0.orig/drivers/clk/samsung/clk-exynos5250.c
@@ -711,13 +711,13 @@
 	/* sorted in descending order */
 	/* PLL_36XX_RATE(rate, m, p, s, k) */
 	PLL_36XX_RATE(192000000, 64, 2, 2, 0),
+	PLL_36XX_RATE(180633600, 90, 3, 2, 20762),
-	PLL_36XX_RATE(180633605, 90, 3, 2, 20762),
 	PLL_36XX_RATE(180000000, 90, 3, 2, 0),
 	PLL_36XX_RATE(73728000, 98, 2, 4, 19923),
+	PLL_36XX_RATE(67737600, 90, 2, 4, 20762),
-	PLL_36XX_RATE(67737602, 90, 2, 4, 20762),
 	PLL_36XX_RATE(49152000, 98, 3, 4, 19923),
+	PLL_36XX_RATE(45158400, 90, 3, 4, 20762),
+	PLL_36XX_RATE(32768000, 131, 3, 5, 4719),
-	PLL_36XX_RATE(45158401, 90, 3, 4, 20762),
-	PLL_36XX_RATE(32768001, 131, 3, 5, 4719),
 	{ },
 };
reverted:
--- linux-azure-4.15.0/drivers/clk/samsung/clk-exynos5260.c
+++ linux-azure-4.15.0.orig/drivers/clk/samsung/clk-exynos5260.c
@@ -65,7 +65,7 @@
 	PLL_36XX_RATE(480000000, 160, 2, 2, 0),
 	PLL_36XX_RATE(432000000, 144, 2, 2, 0),
 	PLL_36XX_RATE(400000000, 200, 3, 2, 0),
+	PLL_36XX_RATE(394073130, 459, 7, 2, 49282),
-	PLL_36XX_RATE(394073128, 459, 7, 2, 49282),
 	PLL_36XX_RATE(333000000, 111, 2, 2, 0),
 	PLL_36XX_RATE(300000000, 100, 2, 2, 0),
 	PLL_36XX_RATE(266000000, 266, 3, 3, 0),
reverted:
--- linux-azure-4.15.0/drivers/clk/samsung/clk-exynos5433.c
+++ linux-azure-4.15.0.orig/drivers/clk/samsung/clk-exynos5433.c
@@ -729,7 +729,7 @@
 	PLL_35XX_RATE(800000000U, 400, 6, 1),
 	PLL_35XX_RATE(733000000U, 733, 12, 1),
 	PLL_35XX_RATE(700000000U, 175, 3, 1),
+	PLL_35XX_RATE(667000000U, 222, 4, 1),
-	PLL_35XX_RATE(666000000U, 222, 4, 1),
 	PLL_35XX_RATE(633000000U, 211, 4, 1),
 	PLL_35XX_RATE(600000000U, 500, 5, 2),
 	PLL_35XX_RATE(552000000U, 460, 5, 2),
@@ -757,12 +757,12 @@
 /* AUD_PLL */
 static const struct samsung_pll_rate_table exynos5433_aud_pll_rates[] __initconst = {
 	PLL_36XX_RATE(400000000U, 200, 3, 2, 0),
+	PLL_36XX_RATE(393216000U, 197, 3, 2, -25690),
-	PLL_36XX_RATE(393216003U, 197, 3, 2, -25690),
 	PLL_36XX_RATE(384000000U, 128, 2, 2, 0),
+	PLL_36XX_RATE(368640000U, 246, 4, 2, -15729),
+	PLL_36XX_RATE(361507200U, 181, 3, 2, -16148),
+	PLL_36XX_RATE(338688000U, 113, 2, 2, -6816),
+	PLL_36XX_RATE(294912000U, 98, 1, 3, 19923),
-	PLL_36XX_RATE(368639991U, 246, 4, 2, -15729),
-	PLL_36XX_RATE(361507202U, 181, 3, 2, -16148),
-	PLL_36XX_RATE(338687988U, 113, 2, 2, -6816),
-	PLL_36XX_RATE(294912002U, 98, 1, 3, 19923),
 	PLL_36XX_RATE(288000000U, 96, 1, 3, 0),
 	PLL_36XX_RATE(252000000U, 84, 1, 3, 0),
 	{ /* sentinel */ }
reverted:
--- linux-azure-4.15.0/drivers/clk/samsung/clk-exynos7.c
+++ linux-azure-4.15.0.orig/drivers/clk/samsung/clk-exynos7.c
@@ -140,7 +140,7 @@
 };
 static const struct samsung_pll_rate_table pll1460x_24mhz_tbl[] __initconst = {
+	PLL_36XX_RATE(491520000, 20, 1, 0, 31457),
-	PLL_36XX_RATE(491519897, 20, 1, 0, 31457),
 	{},
 };
reverted:
--- linux-azure-4.15.0/drivers/clk/samsung/clk-s3c2410.c
+++ linux-azure-4.15.0.orig/drivers/clk/samsung/clk-s3c2410.c
@@ -168,7 +168,7 @@
 	PLL_35XX_RATE(226000000, 105, 1, 1),
 	PLL_35XX_RATE(210000000, 132, 2, 1),
 	/* 2410 common */
+	PLL_35XX_RATE(203000000, 161, 3, 1),
-	PLL_35XX_RATE(202800000, 161, 3, 1),
 	PLL_35XX_RATE(192000000, 88, 1, 1),
 	PLL_35XX_RATE(186000000, 85, 1, 1),
 	PLL_35XX_RATE(180000000, 82, 1, 1),
@@ -178,18 +178,18 @@
 	PLL_35XX_RATE(147000000, 90, 2, 1),
 	PLL_35XX_RATE(135000000, 82, 2, 1),
 	PLL_35XX_RATE(124000000, 116, 1, 2),
+	PLL_35XX_RATE(118000000, 150, 2, 2),
-	PLL_35XX_RATE(118500000, 150, 2, 2),
 	PLL_35XX_RATE(113000000, 105, 1, 2),
+	PLL_35XX_RATE(101000000, 127, 2, 2),
-	PLL_35XX_RATE(101250000, 127, 2, 2),
 	PLL_35XX_RATE(90000000, 112, 2, 2),
+	PLL_35XX_RATE(85000000, 105, 2, 2),
-	PLL_35XX_RATE(84750000, 105, 2, 2),
 	PLL_35XX_RATE(79000000, 71, 1, 2),
+	PLL_35XX_RATE(68000000, 82, 2, 2),
+	PLL_35XX_RATE(56000000, 142, 2, 3),
-	PLL_35XX_RATE(67500000, 82, 2, 2),
-	PLL_35XX_RATE(56250000, 142, 2, 3),
 	PLL_35XX_RATE(48000000, 120, 2, 3),
+	PLL_35XX_RATE(51000000, 161, 3, 3),
-	PLL_35XX_RATE(50700000, 161, 3, 3),
 	PLL_35XX_RATE(45000000, 82, 1, 3),
+	PLL_35XX_RATE(34000000, 82, 2, 3),
-	PLL_35XX_RATE(33750000, 82, 2, 3),
 	{ /* sentinel */ },
 };
reverted:
--- linux-azure-4.15.0/drivers/clk/tegra/clk-pll.c
+++ linux-azure-4.15.0.orig/drivers/clk/tegra/clk-pll.c
@@ -1151,8 +1151,6 @@
 	.enable = clk_pllu_enable,
 	.disable = clk_pll_disable,
 	.recalc_rate = clk_pll_recalc_rate,
-	.round_rate = clk_pll_round_rate,
-	.set_rate = clk_pll_set_rate,
 };
 static int _pll_fixed_mdiv(struct tegra_clk_pll_params *pll_params,
reverted:
--- linux-azure-4.15.0/drivers/crypto/atmel-aes.c
+++ linux-azure-4.15.0.orig/drivers/crypto/atmel-aes.c
@@ -2155,7 +2155,7 @@
 badkey:
 	crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+	memzero_explicit(&key, sizeof(keys));
-	memzero_explicit(&keys, sizeof(keys));
 	return -EINVAL;
 }
reverted:
--- linux-azure-4.15.0/drivers/crypto/ccp/ccp-debugfs.c
+++ linux-azure-4.15.0.orig/drivers/crypto/ccp/ccp-debugfs.c
@@ -278,7 +278,7 @@
 };
 static struct dentry *ccp_debugfs_dir;
+static DEFINE_RWLOCK(ccp_debugfs_lock);
-static DEFINE_MUTEX(ccp_debugfs_lock);
 #define MAX_NAME_LEN 20
@@ -290,15 +290,16 @@
 	struct dentry *debugfs_stats;
 	struct dentry *debugfs_q_instance;
 	struct dentry *debugfs_q_stats;
+	unsigned long flags;
 	int i;
 	if (!debugfs_initialized())
 		return;
+	write_lock_irqsave(&ccp_debugfs_lock, flags);
-	mutex_lock(&ccp_debugfs_lock);
 	if (!ccp_debugfs_dir)
 		ccp_debugfs_dir = debugfs_create_dir(KBUILD_MODNAME, NULL);
+	write_unlock_irqrestore(&ccp_debugfs_lock, flags);
-	mutex_unlock(&ccp_debugfs_lock);
 	if (!ccp_debugfs_dir)
 		return;
diff -u linux-azure-4.15.0/drivers/crypto/inside-secure/safexcel.c linux-azure-4.15.0/drivers/crypto/inside-secure/safexcel.c
--- linux-azure-4.15.0/drivers/crypto/inside-secure/safexcel.c
+++ linux-azure-4.15.0/drivers/crypto/inside-secure/safexcel.c
@@ -462,15 +462,6 @@
 	if (backlog)
 		backlog->complete(backlog, -EINPROGRESS);
-	/* In case the send() helper did not issue any command to push
-	 * to the engine because the input data was cached, continue to
-	 * dequeue other requests as this is valid and not an error.
-	 */
-	if (!commands && !results) {
-		kfree(request);
-		continue;
-	}
-
 	spin_lock_bh(&priv->ring[ring].egress_lock);
 	list_add_tail(&request->list, &priv->ring[ring].list);
 	spin_unlock_bh(&priv->ring[ring].egress_lock);
reverted:
--- linux-azure-4.15.0/drivers/crypto/inside-secure/safexcel_cipher.c
+++ linux-azure-4.15.0.orig/drivers/crypto/inside-secure/safexcel_cipher.c
@@ -446,7 +446,7 @@
 	if (!priv->ring[ring].need_dequeue)
 		safexcel_dequeue(priv, ring);
+	wait_for_completion_interruptible(&result.completion);
-	wait_for_completion(&result.completion);
 	if (result.error) {
 		dev_warn(priv->dev,
diff -u linux-azure-4.15.0/drivers/crypto/inside-secure/safexcel_hash.c linux-azure-4.15.0/drivers/crypto/inside-secure/safexcel_hash.c
--- linux-azure-4.15.0/drivers/crypto/inside-secure/safexcel_hash.c
+++ linux-azure-4.15.0/drivers/crypto/inside-secure/safexcel_hash.c
@@ -22,6 +22,7 @@
 	struct safexcel_crypto_priv *priv;
 	u32 alg;
+	u32 digest;
 	u32 ipad[SHA1_DIGEST_SIZE / sizeof(u32)];
 	u32 opad[SHA1_DIGEST_SIZE / sizeof(u32)];
@@ -35,8 +36,6 @@
 	int nents;
-	u32 digest;
-
 	u8 state_sz;	/* expected sate size, only set once */
 	u32 state[SHA256_DIGEST_SIZE / sizeof(u32)] __aligned(sizeof(u32));
@@ -51,8 +50,6 @@
 	u64 len;
 	u64 processed;
-	u32 digest;
-
 	u32 state[SHA256_DIGEST_SIZE / sizeof(u32)];
 	u8 cache[SHA256_BLOCK_SIZE];
 };
@@ -86,9 +83,9 @@
 	cdesc->control_data.control0 |= CONTEXT_CONTROL_TYPE_HASH_OUT;
 	cdesc->control_data.control0 |= ctx->alg;
-	cdesc->control_data.control0 |= req->digest;
+	cdesc->control_data.control0 |= ctx->digest;
-	if (req->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED) {
+	if (ctx->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED) {
 		if (req->processed) {
 			if (ctx->alg == CONTEXT_CONTROL_CRYPTO_ALG_SHA1)
 				cdesc->control_data.control0 |= CONTEXT_CONTROL_SIZE(6);
@@ -116,7 +113,7 @@
 			if (req->finish)
 				ctx->base.ctxr->data[i] = cpu_to_le32(req->processed / blocksize);
 		}
-	} else if (req->digest == CONTEXT_CONTROL_DIGEST_HMAC) {
+	} else if (ctx->digest == CONTEXT_CONTROL_DIGEST_HMAC) {
 		cdesc->control_data.control0 |= CONTEXT_CONTROL_SIZE(10);
 		memcpy(ctx->base.ctxr->data, ctx->ipad, digestsize);
@@ -188,7 +185,7 @@
 	int i, queued, len, cache_len, extra, n_cdesc = 0, ret = 0;
 	queued = len = req->len - req->processed;
-	if (queued <= crypto_ahash_blocksize(ahash))
+	if (queued < crypto_ahash_blocksize(ahash))
 		cache_len = queued;
 	else
 		cache_len = queued - areq->nbytes;
@@ -202,7 +199,7 @@
 		/* If this is not the last request and the queued data
 		 * is a multiple of a block, cache the last one for now.
 		 */
-		extra = crypto_ahash_blocksize(ahash);
+		extra = queued - crypto_ahash_blocksize(ahash);
 	if (extra) {
 		sg_pcopy_to_buffer(areq->src, sg_nents(areq->src),
@@ -495,7 +492,7 @@
 	if (!priv->ring[ring].need_dequeue)
 		safexcel_dequeue(priv, ring);
-	wait_for_completion(&result.completion);
+	wait_for_completion_interruptible(&result.completion);
 	if (result.error) {
 		dev_warn(priv->dev, "hash: completion error (%d)\n",
@@ -539,7 +536,7 @@
 	req->needs_inv = false;
-	if (req->processed && req->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED)
+	if (req->processed && ctx->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED)
 		ctx->base.needs_inv = safexcel_ahash_needs_inv_get(areq);
 	if (ctx->base.ctxr) {
@@ -570,6 +567,7 @@
 static int safexcel_ahash_update(struct ahash_request *areq)
 {
+	struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq));
 	struct safexcel_ahash_req *req = ahash_request_ctx(areq);
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq);
@@ -585,7 +583,7 @@
 	 * We're not doing partial updates when performing an hmac request.
 	 * Everything will be handled by the final() call.
 	 */
-	if (req->digest == CONTEXT_CONTROL_DIGEST_HMAC)
+	if (ctx->digest == CONTEXT_CONTROL_DIGEST_HMAC)
 		return 0;
 	if (req->hmac)
@@ -644,8 +642,6 @@
 	export->len = req->len;
 	export->processed = req->processed;
-	export->digest = req->digest;
-
 	memcpy(export->state, req->state, req->state_sz);
 	memset(export->cache, 0, crypto_ahash_blocksize(ahash));
 	memcpy(export->cache, req->cache, crypto_ahash_blocksize(ahash));
@@ -667,8 +663,6 @@
 	req->len = export->len;
 	req->processed = export->processed;
-	req->digest = export->digest;
-
 	memcpy(req->cache, export->cache, crypto_ahash_blocksize(ahash));
 	memcpy(req->state, export->state, req->state_sz);
@@ -705,7 +699,7 @@
 	req->state[4] = SHA1_H4;
 	ctx->alg = CONTEXT_CONTROL_CRYPTO_ALG_SHA1;
-	req->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
+	ctx->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
 	req->state_sz = SHA1_DIGEST_SIZE;
 	return 0;
@@ -767,10 +761,10 @@
 static int safexcel_hmac_sha1_init(struct ahash_request *areq)
 {
-	struct safexcel_ahash_req *req = ahash_request_ctx(areq);
+	struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq));
 	safexcel_sha1_init(areq);
-	req->digest = CONTEXT_CONTROL_DIGEST_HMAC;
+	ctx->digest = CONTEXT_CONTROL_DIGEST_HMAC;
 	return 0;
 }
@@ -823,7 +817,7 @@
 	init_completion(&result.completion);
 	ret = crypto_ahash_digest(areq);
-	if (ret == -EINPROGRESS || ret == -EBUSY) {
+	if (ret == -EINPROGRESS) {
 		wait_for_completion_interruptible(&result.completion);
 		ret = result.error;
 	}
@@ -1005,7 +999,7 @@
 	req->state[7] = SHA256_H7;
 	ctx->alg = CONTEXT_CONTROL_CRYPTO_ALG_SHA256;
-	req->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
+	ctx->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
 	req->state_sz = SHA256_DIGEST_SIZE;
 	return 0;
@@ -1067,7 +1061,7 @@
 	req->state[7] = SHA224_H7;
 	ctx->alg = CONTEXT_CONTROL_CRYPTO_ALG_SHA224;
-	req->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
+	ctx->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
 	req->state_sz = SHA256_DIGEST_SIZE;
 	return 0;
reverted:
--- linux-azure-4.15.0/drivers/crypto/sunxi-ss/sun4i-ss-core.c
+++ linux-azure-4.15.0.orig/drivers/crypto/sunxi-ss/sun4i-ss-core.c
@@ -451,7 +451,6 @@
 module_platform_driver(sun4i_ss_driver);
-MODULE_ALIAS("platform:sun4i-ss");
 MODULE_DESCRIPTION("Allwinner Security System cryptographic accelerator");
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Corentin LABBE ");
reverted:
--- linux-azure-4.15.0/drivers/fpga/altera-ps-spi.c
+++ linux-azure-4.15.0.orig/drivers/fpga/altera-ps-spi.c
@@ -249,7 +249,7 @@
 	conf->data = of_id->data;
 	conf->spi = spi;
+	conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_HIGH);
-	conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_LOW);
 	if (IS_ERR(conf->config)) {
 		dev_err(&spi->dev, "Failed to get config gpio: %ld\n",
 			PTR_ERR(conf->config));
reverted:
--- linux-azure-4.15.0/drivers/gpio/gpio-aspeed.c
+++ linux-azure-4.15.0.orig/drivers/gpio/gpio-aspeed.c
@@ -375,7 +375,7 @@
 	if (set)
 		reg |= bit;
 	else
+		reg &= bit;
-		reg &= ~bit;
 	iowrite32(reg, addr);
 	spin_unlock_irqrestore(&gpio->lock, flags);
diff -u linux-azure-4.15.0/drivers/gpio/gpiolib.c linux-azure-4.15.0/drivers/gpio/gpiolib.c
--- linux-azure-4.15.0/drivers/gpio/gpiolib.c
+++ linux-azure-4.15.0/drivers/gpio/gpiolib.c
@@ -446,7 +446,7 @@
 	struct gpiohandle_request handlereq;
 	struct linehandle_state *lh;
 	struct file *file;
-	int fd, i, count = 0, ret;
+	int fd, i, ret;
 	u32 lflags;
 	if (copy_from_user(&handlereq, ip, sizeof(handlereq)))
@@ -507,7 +507,6 @@
 		if (ret)
 			goto out_free_descs;
 		lh->descs[i] = desc;
-		count = i;
 		if (lflags & GPIOHANDLE_REQUEST_ACTIVE_LOW)
 			set_bit(FLAG_ACTIVE_LOW, &desc->flags);
@@ -574,7 +573,7 @@
 out_put_unused_fd:
 	put_unused_fd(fd);
 out_free_descs:
-	for (i = 0; i < count; i++)
+	for (; i >= 0; i--)
 		gpiod_free(lh->descs[i]);
 	kfree(lh->label);
 out_free_lh:
@@ -831,7 +830,7 @@
 	desc = &gdev->descs[offset];
 	ret = gpiod_request(desc, le->label);
 	if (ret)
-		goto out_free_label;
+		goto out_free_desc;
 	le->desc = desc;
 	le->eflags = eflags;
diff -u linux-azure-4.15.0/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c linux-azure-4.15.0/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
--- linux-azure-4.15.0/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
+++ linux-azure-4.15.0/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
@@ -550,7 +550,7 @@
 * look up whether we are the integrated or discrete GPU (all asics).
 * Returns the client id.
 */
-static enum vga_switcheroo_client_id amdgpu_atpx_get_client_id(struct pci_dev *pdev)
+static int amdgpu_atpx_get_client_id(struct pci_dev *pdev)
 {
 	if (amdgpu_atpx_priv.dhandle == ACPI_HANDLE(&pdev->dev))
 		return VGA_SWITCHEROO_IGD;
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -1459,11 +1459,10 @@
 static const u32 vgpr_init_regs[] = {
 	mmCOMPUTE_STATIC_THREAD_MGMT_SE0, 0xffffffff,
+	mmCOMPUTE_RESOURCE_LIMITS, 0,
-	mmCOMPUTE_RESOURCE_LIMITS, 0x1000000, /* CU_GROUP_COUNT=1 */
 	mmCOMPUTE_NUM_THREAD_X, 256*4,
 	mmCOMPUTE_NUM_THREAD_Y, 1,
 	mmCOMPUTE_NUM_THREAD_Z, 1,
-	mmCOMPUTE_PGM_RSRC1, 0x100004f, /* VGPRS=15 (64 logical VGPRs), SGPRS=1 (16 SGPRs), BULKY=1 */
 	mmCOMPUTE_PGM_RSRC2, 20,
 	mmCOMPUTE_USER_DATA_0, 0xedcedc00,
 	mmCOMPUTE_USER_DATA_1, 0xedcedc01,
@@ -1480,11 +1479,10 @@
 static const u32 sgpr1_init_regs[] = {
 	mmCOMPUTE_STATIC_THREAD_MGMT_SE0, 0x0f,
+	mmCOMPUTE_RESOURCE_LIMITS, 0x1000000,
-	mmCOMPUTE_RESOURCE_LIMITS, 0x1000000, /* CU_GROUP_COUNT=1 */
 	mmCOMPUTE_NUM_THREAD_X, 256*5,
 	mmCOMPUTE_NUM_THREAD_Y, 1,
 	mmCOMPUTE_NUM_THREAD_Z, 1,
-	mmCOMPUTE_PGM_RSRC1, 0x240, /* SGPRS=9 (80 GPRS) */
 	mmCOMPUTE_PGM_RSRC2, 20,
 	mmCOMPUTE_USER_DATA_0, 0xedcedc00,
 	mmCOMPUTE_USER_DATA_1, 0xedcedc01,
@@ -1505,7 +1503,6 @@
 	mmCOMPUTE_NUM_THREAD_X, 256*5,
 	mmCOMPUTE_NUM_THREAD_Y, 1,
 	mmCOMPUTE_NUM_THREAD_Z, 1,
-	mmCOMPUTE_PGM_RSRC1, 0x240, /* SGPRS=9 (80 GPRS) */
 	mmCOMPUTE_PGM_RSRC2, 20,
 	mmCOMPUTE_USER_DATA_0, 0xedcedc00,
 	mmCOMPUTE_USER_DATA_1, 0xedcedc01,
diff -u linux-azure-4.15.0/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c linux-azure-4.15.0/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
--- linux-azure-4.15.0/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ linux-azure-4.15.0/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -4424,7 +4424,6 @@
 	struct amdgpu_dm_connector *aconnector = NULL;
 	struct drm_connector_state *new_con_state = NULL;
 	struct dm_connector_state *dm_conn_state = NULL;
-	struct drm_plane_state *new_plane_state = NULL;
 	new_stream = NULL;
@@ -4432,13 +4431,6 @@
 	dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
 	acrtc = to_amdgpu_crtc(crtc);
-	new_plane_state = drm_atomic_get_new_plane_state(state, new_crtc_state->crtc->primary);
-
-	if (new_crtc_state->enable && new_plane_state && !new_plane_state->fb) {
-		ret = -EINVAL;
-		goto fail;
-	}
-
 	aconnector = amdgpu_dm_find_first_crtc_matching_connector(state, crtc);
 	/* TODO This hack should go away */
@@ -4613,7 +4605,7 @@
 	if (!dm_old_crtc_state->stream)
 		continue;
-	DRM_DEBUG_ATOMIC("Disabling DRM plane: %d on DRM crtc %d\n",
+	DRM_DEBUG_DRIVER("Disabling DRM plane: %d on DRM crtc %d\n",
			plane->base.id, old_plane_crtc->base.id);
 	if (!dc_remove_plane_from_context(
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
@@ -400,15 +400,14 @@
 {
 	int src;
 	struct irq_list_head *lh;
-	unsigned long irq_table_flags;
 	DRM_DEBUG_KMS("DM_IRQ: releasing resources.\n");
+	for (src = 0; src < DAL_IRQ_SOURCES_NUMBER; src++) {
+
-	DM_IRQ_TABLE_LOCK(adev, irq_table_flags);
 		/* The handler was removed from the table,
 		 * it means it is safe to flush all the 'work'
 		 * (because no code can schedule a new one).
 		 */
 		lh = &adev->dm.irq_handler_list_low_tab[src];
-	DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags);
 		flush_work(&lh->work);
 	}
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/bridge/dumb-vga-dac.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/bridge/dumb-vga-dac.c
@@ -55,9 +55,7 @@
 	}
 	drm_mode_connector_update_edid_property(connector, edid);
+	return drm_add_edid_modes(connector, edid);
-	ret = drm_add_edid_modes(connector, edid);
-	kfree(edid);
-	return ret;
 fallback:
 	/*
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/drm_atomic.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/drm_atomic.c
@@ -151,8 +151,6 @@
 				state->connectors[i].state);
 		state->connectors[i].ptr = NULL;
 		state->connectors[i].state = NULL;
-		state->connectors[i].old_state = NULL;
-		state->connectors[i].new_state = NULL;
 		drm_connector_put(connector);
 	}
@@ -167,8 +165,6 @@
 		state->crtcs[i].ptr = NULL;
 		state->crtcs[i].state = NULL;
-		state->crtcs[i].old_state = NULL;
-		state->crtcs[i].new_state = NULL;
 	}
 	for (i = 0; i < config->num_total_plane; i++) {
@@ -181,8 +177,6 @@
 				state->planes[i].state);
 		state->planes[i].ptr = NULL;
 		state->planes[i].state = NULL;
-		state->planes[i].old_state = NULL;
-		state->planes[i].new_state = NULL;
 	}
 	for (i = 0; i < state->num_private_objs; i++) {
@@ -192,8 +186,6 @@
 				state->private_objs[i].state);
 		state->private_objs[i].ptr = NULL;
 		state->private_objs[i].state = NULL;
-		state->private_objs[i].old_state = NULL;
-		state->private_objs[i].new_state = NULL;
 	}
 	state->num_private_objs = 0;
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/drm_drv.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/drm_drv.c
@@ -763,7 +763,7 @@
 	if (!minor)
 		return;
+	name = kasprintf(GFP_KERNEL, "controlD%d", minor->index);
-	name = kasprintf(GFP_KERNEL, "controlD%d", minor->index + 64);
 	if (!name)
 		return;
diff -u linux-azure-4.15.0/drivers/gpu/drm/drm_edid.c linux-azure-4.15.0/drivers/gpu/drm/drm_edid.c
--- linux-azure-4.15.0/drivers/gpu/drm/drm_edid.c
+++ linux-azure-4.15.0/drivers/gpu/drm/drm_edid.c
@@ -4421,7 +4421,6 @@
 	info->cea_rev = 0;
 	info->max_tmds_clock = 0;
 	info->dvi_dual = false;
-	memset(&info->hdmi, 0, sizeof(info->hdmi));
 	info->non_desktop = 0;
 }
@@ -4433,11 +4432,16 @@
 	u32 quirks = edid_get_quirks(edid);
-	drm_reset_display_info(connector);
-
 	info->width_mm = edid->width_cm * 10;
 	info->height_mm = edid->height_cm * 10;
+	/* driver figures it out in this case */
+	info->bpc = 0;
+	info->color_formats = 0;
+	info->cea_rev = 0;
+	info->max_tmds_clock = 0;
+	info->dvi_dual = false;
+
 	info->non_desktop = !!(quirks & EDID_QUIRK_NON_DESKTOP);
 	DRM_DEBUG_KMS("non_desktop set to %d\n", info->non_desktop);
diff -u linux-azure-4.15.0/drivers/gpu/drm/i915/i915_reg.h linux-azure-4.15.0/drivers/gpu/drm/i915/i915_reg.h
--- linux-azure-4.15.0/drivers/gpu/drm/i915/i915_reg.h
+++ linux-azure-4.15.0/drivers/gpu/drm/i915/i915_reg.h
@@ -7085,9 +7085,6 @@
 #define SLICE_ECO_CHICKEN0 _MMIO(0x7308)
 #define PIXEL_MASK_CAMMING_DISABLE (1 << 14)
-#define GEN9_WM_CHICKEN3 _MMIO(0x5588)
-#define GEN9_FACTOR_IN_CLR_VAL_HIZ (1 << 9)
-
 /* WaCatErrorRejectionIssue */
 #define GEN7_SQ_CHICKEN_MBCUNIT_CONFIG _MMIO(0x9030)
 #define GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB (1<<11)
diff -u linux-azure-4.15.0/drivers/gpu/drm/i915/intel_cdclk.c linux-azure-4.15.0/drivers/gpu/drm/i915/intel_cdclk.c
--- linux-azure-4.15.0/drivers/gpu/drm/i915/intel_cdclk.c
+++ linux-azure-4.15.0/drivers/gpu/drm/i915/intel_cdclk.c
@@ -1816,22 +1816,10 @@
 		}
 	}
-	/*
-	 * According to BSpec, "The CD clock frequency must be at least twice
+	/* According to BSpec, "The CD clock frequency must be at least twice
 	 * the frequency of the Azalia BCLK." and BCLK is 96 MHz by default.
-	 *
-	 * FIXME: Check the actual, not default, BCLK being used.
-	 *
-	 * FIXME: This does not depend on ->has_audio because the higher CDCLK
-	 * is required for audio probe, also when there are no audio capable
-	 * displays connected at probe time. This leads to unnecessarily high
-	 * CDCLK when audio is not required.
-	 *
-	 * FIXME: This limit is only applied when there are displays connected
-	 * at probe time. If we probe without displays, we'll still end up using
-	 * the platform minimum CDCLK, failing audio probe.
 	 */
-	if (INTEL_GEN(dev_priv) >= 9)
+	if (crtc_state->has_audio && INTEL_GEN(dev_priv) >= 9)
 		min_cdclk = max(2 * 96000, min_cdclk);
 	if (min_cdclk > dev_priv->max_cdclk_freq) {
@@ -1925,44 +1913,9 @@
 	return 0;
 }
-static int skl_dpll0_vco(struct intel_atomic_state *intel_state)
-{
-	struct drm_i915_private *dev_priv = to_i915(intel_state->base.dev);
-	struct intel_crtc *crtc;
-	struct intel_crtc_state *crtc_state;
-	int vco, i;
-
-	vco = intel_state->cdclk.logical.vco;
-	if (!vco)
-		vco = dev_priv->skl_preferred_vco_freq;
-
-	for_each_new_intel_crtc_in_state(intel_state, crtc, crtc_state, i) {
-		if (!crtc_state->base.enable)
-			continue;
-
-		if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_EDP))
-			continue;
-
-		/*
-		 * DPLL0 VCO may need to be adjusted to get the correct
-		 * clock for eDP. This will affect cdclk as well.
-		 */
-		switch (crtc_state->port_clock / 2) {
-		case 108000:
-		case 216000:
-			vco = 8640000;
-			break;
-		default:
-			vco = 8100000;
-			break;
-		}
-	}
-
-	return vco;
-}
-
 static int skl_modeset_calc_cdclk(struct drm_atomic_state *state)
 {
+	struct drm_i915_private *dev_priv = to_i915(state->dev);
 	struct intel_atomic_state *intel_state = to_intel_atomic_state(state);
 	int min_cdclk, cdclk, vco;
@@ -1970,7 +1923,9 @@
 	if (min_cdclk < 0)
 		return min_cdclk;
-	vco = skl_dpll0_vco(intel_state);
+	vco = intel_state->cdclk.logical.vco;
+	if (!vco)
+		vco = dev_priv->skl_preferred_vco_freq;
 	/*
 	 * FIXME should also account for plane ratio
diff -u linux-azure-4.15.0/drivers/gpu/drm/i915/intel_dp.c linux-azure-4.15.0/drivers/gpu/drm/i915/intel_dp.c
--- linux-azure-4.15.0/drivers/gpu/drm/i915/intel_dp.c
+++ linux-azure-4.15.0/drivers/gpu/drm/i915/intel_dp.c
@@ -1784,6 +1784,26 @@
 			reduce_m_n);
 	}
+	/*
+	 * DPLL0 VCO may need to be adjusted to get the correct
+	 * clock for eDP. This will affect cdclk as well.
+	 */
+	if (intel_dp_is_edp(intel_dp) && IS_GEN9_BC(dev_priv)) {
+		int vco;
+
+		switch (pipe_config->port_clock / 2) {
+		case 108000:
+		case 216000:
+			vco = 8640000;
+			break;
+		default:
+			vco = 8100000;
+			break;
+		}
+
+		to_intel_atomic_state(pipe_config->base.state)->cdclk.logical.vco = vco;
+	}
+
 	if (!HAS_DDI(dev_priv))
 		intel_dp_set_clock(encoder, pipe_config);
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/i915/intel_engine_cs.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1109,10 +1109,6 @@
 	WA_SET_FIELD_MASKED(GEN8_CS_CHICKEN1, GEN9_PREEMPT_GPGPU_LEVEL_MASK,
 			    GEN9_PREEMPT_GPGPU_COMMAND_LEVEL);
-	/* WaClearHIZ_WM_CHICKEN3:bxt,glk */
-	if (IS_GEN9_LP(dev_priv))
-		WA_SET_BIT_MASKED(GEN9_WM_CHICKEN3, GEN9_FACTOR_IN_CLR_VAL_HIZ);
-
 	/* WaVFEStateAfterPipeControlwithMediaStateClear:skl,bxt,glk,cfl */
 	ret = wa_ring_whitelist_reg(engine, GEN9_CTX_PREEMPT_REG);
 	if (ret)
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/i915/intel_fbdev.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/i915/intel_fbdev.c
@@ -802,7 +802,7 @@
 		return;
 	intel_fbdev_sync(ifbdev);
+	if (ifbdev->vma)
-	if (ifbdev->vma || ifbdev->helper.deferred_setup)
 		drm_fb_helper_hotplug_event(&ifbdev->helper);
 }
diff -u linux-azure-4.15.0/drivers/gpu/drm/i915/intel_lpe_audio.c linux-azure-4.15.0/drivers/gpu/drm/i915/intel_lpe_audio.c
--- linux-azure-4.15.0/drivers/gpu/drm/i915/intel_lpe_audio.c
+++ linux-azure-4.15.0/drivers/gpu/drm/i915/intel_lpe_audio.c
@@ -62,6 +62,7 @@
 #include
 #include
+#include
 #include
 #include
diff -u linux-azure-4.15.0/drivers/gpu/drm/i915/intel_lvds.c linux-azure-4.15.0/drivers/gpu/drm/i915/intel_lvds.c
--- linux-azure-4.15.0/drivers/gpu/drm/i915/intel_lvds.c
+++ linux-azure-4.15.0/drivers/gpu/drm/i915/intel_lvds.c
@@ -317,8 +317,7 @@
 	I915_WRITE(PP_CONTROL(0), I915_READ(PP_CONTROL(0)) | PANEL_POWER_ON);
 	POSTING_READ(lvds_encoder->reg);
-
-	if (intel_wait_for_register(dev_priv, PP_STATUS(0), PP_ON, PP_ON, 5000))
+	if (intel_wait_for_register(dev_priv, PP_STATUS(0), PP_ON, PP_ON, 1000))
 		DRM_ERROR("timed out waiting for panel to power on\n");
 	intel_panel_enable_backlight(pipe_config, conn_state);
diff -u linux-azure-4.15.0/drivers/gpu/drm/i915/intel_runtime_pm.c linux-azure-4.15.0/drivers/gpu/drm/i915/intel_runtime_pm.c
--- linux-azure-4.15.0/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ linux-azure-4.15.0/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -622,18 +622,19 @@
 	DRM_DEBUG_KMS("Enabling DC6\n");
-	/* Wa Display #1183: skl,kbl,cfl */
-	if (IS_GEN9_BC(dev_priv))
-		I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |
-			   SKL_SELECT_ALTERNATE_DC_EXIT);
-
 	gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6);
+
 }
 void skl_disable_dc6(struct drm_i915_private *dev_priv)
 {
 	DRM_DEBUG_KMS("Disabling DC6\n");
+	/* Wa Display #1183: skl,kbl,cfl */
+	if (IS_GEN9_BC(dev_priv))
+		I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) |
+			   SKL_SELECT_ALTERNATE_DC_EXIT);
+
 	gen9_set_dc_state(dev_priv, DC_STATE_DISABLE);
 }
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/nouveau/nouveau_acpi.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/nouveau/nouveau_acpi.c
@@ -193,7 +193,7 @@
 	return nouveau_dsm_set_discrete_state(nouveau_dsm_priv.dhandle, state);
 }
+static int nouveau_dsm_get_client_id(struct pci_dev *pdev)
-static enum vga_switcheroo_client_id nouveau_dsm_get_client_id(struct pci_dev *pdev)
 {
 	/* easy option one - intel vendor ID means Integrated */
 	if (pdev->vendor == PCI_VENDOR_ID_INTEL)
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -214,6 +214,7 @@
 	INIT_LIST_HEAD(&nvbo->entry);
 	INIT_LIST_HEAD(&nvbo->vma_list);
 	nvbo->bo.bdev = &drm->ttm.bdev;
+	nvbo->cli = cli;
 	/* This is confusing, and doesn't actually mean we want an uncached
 	 * mapping, but is what NOUVEAU_GEM_DOMAIN_COHERENT gets translated
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -26,6 +26,8 @@
 	struct list_head vma_list;
+	struct nouveau_cli *cli;
+
 	unsigned contig:1;
 	unsigned page:5;
 	unsigned kind:8;
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -63,7 +63,7 @@
 			 struct ttm_mem_reg *reg)
 {
 	struct nouveau_bo *nvbo = nouveau_bo(bo);
+	struct nouveau_drm *drm = nvbo->cli->drm;
-	struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
 	struct nouveau_mem *mem;
 	int ret;
@@ -103,7 +103,7 @@
 			 struct ttm_mem_reg *reg)
 {
 	struct nouveau_bo *nvbo = nouveau_bo(bo);
+	struct nouveau_drm *drm = nvbo->cli->drm;
-	struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
 	struct nouveau_mem *mem;
 	int ret;
@@ -131,7 +131,7 @@
 			struct ttm_mem_reg *reg)
 {
 	struct nouveau_bo *nvbo = nouveau_bo(bo);
+	struct nouveau_drm *drm = nvbo->cli->drm;
-	struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
 	struct nouveau_mem *mem;
 	int ret;
reverted:
--- linux-azure-4.15.0/drivers/gpu/drm/radeon/radeon_atpx_handler.c
+++ linux-azure-4.15.0.orig/drivers/gpu/drm/radeon/radeon_atpx_handler.c
@@ -526,7 +526,7 @@
 * look up whether we are the integrated or discrete GPU (all asics).
 * Returns the client id.
*/ +static int radeon_atpx_get_client_id(struct pci_dev *pdev) -static enum vga_switcheroo_client_id radeon_atpx_get_client_id(struct pci_dev *pdev) { if (radeon_atpx_priv.dhandle == ACPI_HANDLE(&pdev->dev)) return VGA_SWITCHEROO_IGD; reverted: --- linux-azure-4.15.0/drivers/gpu/drm/ttm/ttm_page_alloc.c +++ linux-azure-4.15.0.orig/drivers/gpu/drm/ttm/ttm_page_alloc.c @@ -904,8 +904,7 @@ while (npages >= HPAGE_PMD_NR) { gfp_t huge_flags = gfp_flags; + huge_flags |= GFP_TRANSHUGE; - huge_flags |= GFP_TRANSHUGE_LIGHT | __GFP_NORETRY | - __GFP_KSWAPD_RECLAIM; huge_flags &= ~__GFP_MOVABLE; huge_flags &= ~__GFP_COMP; p = alloc_pages(huge_flags, HPAGE_PMD_ORDER); @@ -1022,15 +1021,11 @@ GFP_USER | GFP_DMA32, "uc dma", 0); ttm_page_pool_init_locked(&_manager->wc_pool_huge, + GFP_TRANSHUGE & ~(__GFP_MOVABLE | __GFP_COMP), - (GFP_TRANSHUGE_LIGHT | __GFP_NORETRY | - __GFP_KSWAPD_RECLAIM) & - ~(__GFP_MOVABLE | __GFP_COMP), "wc huge", order); ttm_page_pool_init_locked(&_manager->uc_pool_huge, + GFP_TRANSHUGE & ~(__GFP_MOVABLE | __GFP_COMP) - (GFP_TRANSHUGE_LIGHT | __GFP_NORETRY | - __GFP_KSWAPD_RECLAIM) & - ~(__GFP_MOVABLE | __GFP_COMP) , "uc huge", order); _manager->options.max_size = max_pages; diff -u linux-azure-4.15.0/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c linux-azure-4.15.0/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c --- linux-azure-4.15.0/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c +++ linux-azure-4.15.0/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c @@ -915,8 +915,7 @@ gfp_flags |= __GFP_ZERO; if (huge) { - gfp_flags |= GFP_TRANSHUGE_LIGHT | __GFP_NORETRY | - __GFP_KSWAPD_RECLAIM; + gfp_flags |= GFP_TRANSHUGE; gfp_flags &= ~__GFP_MOVABLE; gfp_flags &= ~__GFP_COMP; } reverted: --- linux-azure-4.15.0/drivers/gpu/drm/vc4/vc4_crtc.c +++ linux-azure-4.15.0.orig/drivers/gpu/drm/vc4/vc4_crtc.c @@ -735,7 +735,6 @@ struct vc4_async_flip_state { struct drm_crtc *crtc; struct drm_framebuffer *fb; - struct drm_framebuffer *old_fb; struct drm_pending_vblank_event *event; struct vc4_seqno_cb 
cb; @@ -765,23 +764,6 @@ drm_crtc_vblank_put(crtc); drm_framebuffer_put(flip_state->fb); - - /* Decrement the BO usecnt in order to keep the inc/dec calls balanced - * when the planes are updated through the async update path. - * FIXME: we should move to generic async-page-flip when it's - * available, so that we can get rid of this hand-made cleanup_fb() - * logic. - */ - if (flip_state->old_fb) { - struct drm_gem_cma_object *cma_bo; - struct vc4_bo *bo; - - cma_bo = drm_fb_cma_get_gem_obj(flip_state->old_fb, 0); - bo = to_vc4_bo(&cma_bo->base); - vc4_bo_dec_usecnt(bo); - drm_framebuffer_put(flip_state->old_fb); - } - kfree(flip_state); up(&vc4->async_modeset); @@ -806,22 +788,9 @@ struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0); struct vc4_bo *bo = to_vc4_bo(&cma_bo->base); - /* Increment the BO usecnt here, so that we never end up with an - * unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the - * plane is later updated through the non-async path. - * FIXME: we should move to generic async-page-flip when it's - * available, so that we can get rid of this hand-made prepare_fb() - * logic. - */ - ret = vc4_bo_inc_usecnt(bo); - if (ret) - return ret; - flip_state = kzalloc(sizeof(*flip_state), GFP_KERNEL); + if (!flip_state) - if (!flip_state) { - vc4_bo_dec_usecnt(bo); return -ENOMEM; - } drm_framebuffer_get(fb); flip_state->fb = fb; @@ -832,23 +801,10 @@ ret = down_interruptible(&vc4->async_modeset); if (ret) { drm_framebuffer_put(fb); - vc4_bo_dec_usecnt(bo); kfree(flip_state); return ret; } - /* Save the current FB before it's replaced by the new one in - * drm_atomic_set_fb_for_plane(). We'll need the old FB in - * vc4_async_page_flip_complete() to decrement the BO usecnt and keep - * it consistent. - * FIXME: we should move to generic async-page-flip when it's - * available, so that we can get rid of this hand-made cleanup_fb() - * logic. 
- */ - flip_state->old_fb = plane->state->fb; - if (flip_state->old_fb) - drm_framebuffer_get(flip_state->old_fb); - WARN_ON(drm_crtc_vblank_get(crtc) != 0); /* Immediately update the plane's legacy fb pointer, so that later reverted: --- linux-azure-4.15.0/drivers/gpu/drm/vc4/vc4_plane.c +++ linux-azure-4.15.0.orig/drivers/gpu/drm/vc4/vc4_plane.c @@ -536,7 +536,7 @@ * the scl fields here. */ if (num_planes == 1) { + scl0 = vc4_get_scl_field(state, 1); - scl0 = vc4_get_scl_field(state, 0); scl1 = scl0; } else { scl0 = vc4_get_scl_field(state, 1); reverted: --- linux-azure-4.15.0/drivers/gpu/drm/virtio/virtgpu_vq.c +++ linux-azure-4.15.0.orig/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -291,7 +291,7 @@ ret = virtqueue_add_sgs(vq, sgs, outcnt, incnt, vbuf, GFP_ATOMIC); if (ret == -ENOSPC) { spin_unlock(&vgdev->ctrlq.qlock); + wait_event(vgdev->ctrlq.ack_queue, vq->num_free); - wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= outcnt + incnt); spin_lock(&vgdev->ctrlq.qlock); goto retry; } else { @@ -366,7 +366,7 @@ ret = virtqueue_add_sgs(vq, sgs, outcnt, 0, vbuf, GFP_ATOMIC); if (ret == -ENOSPC) { spin_unlock(&vgdev->cursorq.qlock); + wait_event(vgdev->cursorq.ack_queue, vq->num_free); - wait_event(vgdev->cursorq.ack_queue, vq->num_free >= outcnt); spin_lock(&vgdev->cursorq.qlock); goto retry; } else { diff -u linux-azure-4.15.0/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c linux-azure-4.15.0/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c --- linux-azure-4.15.0/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +++ linux-azure-4.15.0/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c @@ -2612,7 +2612,6 @@ vmw_kms_helper_buffer_finish(res->dev_priv, NULL, ctx->buf, out_fence, NULL); - vmw_dmabuf_unreference(&ctx->buf); vmw_resource_unreserve(res, false, NULL, 0); mutex_unlock(&res->dev_priv->cmdbuf_mutex); } reverted: --- linux-azure-4.15.0/drivers/gpu/vga/vga_switcheroo.c +++ linux-azure-4.15.0.orig/drivers/gpu/vga/vga_switcheroo.c @@ -102,11 +102,10 @@ * runtime pm. 
If true, writing ON and OFF to the vga_switcheroo debugfs * interface is a no-op so as not to interfere with runtime pm * @list: client list - * @vga_dev: pci device, indicate which GPU is bound to current audio client * * Registered client. A client can be either a GPU or an audio device on a GPU. * For audio clients, the @fb_info, @active and @driver_power_control members + * are bogus. - * are bogus. For GPU clients, the @vga_dev is bogus. */ struct vga_switcheroo_client { struct pci_dev *pdev; @@ -117,7 +116,6 @@ bool active; bool driver_power_control; struct list_head list; - struct pci_dev *vga_dev; }; /* @@ -163,8 +161,9 @@ }; #define ID_BIT_AUDIO 0x100 +#define client_is_audio(c) ((c)->id & ID_BIT_AUDIO) +#define client_is_vga(c) ((c)->id == VGA_SWITCHEROO_UNKNOWN_ID || \ + !client_is_audio(c)) -#define client_is_audio(c) ((c)->id & ID_BIT_AUDIO) -#define client_is_vga(c) (!client_is_audio(c)) #define client_id(c) ((c)->id & ~ID_BIT_AUDIO) static int vga_switcheroo_debugfs_init(struct vgasr_priv *priv); @@ -193,29 +192,14 @@ vgasr_priv.handler->init(); list_for_each_entry(client, &vgasr_priv.clients, list) { + if (client->id != VGA_SWITCHEROO_UNKNOWN_ID) - if (!client_is_vga(client) || - client_id(client) != VGA_SWITCHEROO_UNKNOWN_ID) continue; - ret = vgasr_priv.handler->get_client_id(client->pdev); if (ret < 0) return; client->id = ret; } - - list_for_each_entry(client, &vgasr_priv.clients, list) { - if (!client_is_audio(client) || - client_id(client) != VGA_SWITCHEROO_UNKNOWN_ID) - continue; - - ret = vgasr_priv.handler->get_client_id(client->vga_dev); - if (ret < 0) - return; - - client->id = ret | ID_BIT_AUDIO; - } - vga_switcheroo_debugfs_init(&vgasr_priv); vgasr_priv.active = true; } @@ -288,9 +272,7 @@ static int register_client(struct pci_dev *pdev, const struct vga_switcheroo_client_ops *ops, + enum vga_switcheroo_client_id id, bool active, - enum vga_switcheroo_client_id id, - struct pci_dev *vga_dev, - bool active, bool driver_power_control) { 
struct vga_switcheroo_client *client; @@ -305,7 +287,6 @@ client->id = id; client->active = active; client->driver_power_control = driver_power_control; - client->vga_dev = vga_dev; mutex_lock(&vgasr_mutex); list_add_tail(&client->list, &vgasr_priv.clients); @@ -338,7 +319,7 @@ const struct vga_switcheroo_client_ops *ops, bool driver_power_control) { + return register_client(pdev, ops, VGA_SWITCHEROO_UNKNOWN_ID, - return register_client(pdev, ops, VGA_SWITCHEROO_UNKNOWN_ID, NULL, pdev == vga_default_device(), driver_power_control); } @@ -348,40 +329,19 @@ * vga_switcheroo_register_audio_client - register audio client * @pdev: client pci device * @ops: client callbacks + * @id: client identifier - * @vga_dev: pci device which is bound to current audio client * * Register audio client (audio device on a GPU). The power state of the * client is assumed to be ON. Beforehand, vga_switcheroo_client_probe_defer() * shall be called to ensure that all prerequisites are met. * + * Return: 0 on success, -ENOMEM on memory allocation error. - * Return: 0 on success, -ENOMEM on memory allocation error, -EINVAL on getting - * client id error. */ int vga_switcheroo_register_audio_client(struct pci_dev *pdev, const struct vga_switcheroo_client_ops *ops, + enum vga_switcheroo_client_id id) - struct pci_dev *vga_dev) { + return register_client(pdev, ops, id | ID_BIT_AUDIO, false, false); - enum vga_switcheroo_client_id id = VGA_SWITCHEROO_UNKNOWN_ID; - - /* - * if vga_switcheroo has enabled, that mean two GPU clients and also - * handler are registered. Get audio client id from bound GPU client - * id directly, otherwise, set it as VGA_SWITCHEROO_UNKNOWN_ID, - * it will set to correct id in later when vga_switcheroo_enable() - * is called. 
- */ - mutex_lock(&vgasr_mutex); - if (vgasr_priv.active) { - id = vgasr_priv.handler->get_client_id(vga_dev); - if (id < 0) { - mutex_unlock(&vgasr_mutex); - return -EINVAL; - } - } - mutex_unlock(&vgasr_mutex); - - return register_client(pdev, ops, id | ID_BIT_AUDIO, vga_dev, - false, false); } EXPORT_SYMBOL(vga_switcheroo_register_audio_client); reverted: --- linux-azure-4.15.0/drivers/hwmon/Kconfig +++ linux-azure-4.15.0.orig/drivers/hwmon/Kconfig @@ -275,7 +275,7 @@ config SENSORS_K10TEMP tristate "AMD Family 10h+ temperature sensor" + depends on X86 && PCI - depends on X86 && PCI && AMD_NB help If you say yes here you get support for the temperature sensor(s) inside your CPU. Supported are later revisions of diff -u linux-azure-4.15.0/drivers/hwmon/k10temp.c linux-azure-4.15.0/drivers/hwmon/k10temp.c --- linux-azure-4.15.0/drivers/hwmon/k10temp.c +++ linux-azure-4.15.0/drivers/hwmon/k10temp.c @@ -23,7 +23,6 @@ #include #include #include -#include #include MODULE_DESCRIPTION("AMD Family 10h+ CPU core temperature monitor"); @@ -41,8 +40,8 @@ #define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463 #endif -#ifndef PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 -#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb +#ifndef PCI_DEVICE_ID_AMD_17H_RR_NB +#define PCI_DEVICE_ID_AMD_17H_RR_NB 0x15d0 #endif /* CPUID function 0x80000001, ebx */ @@ -64,12 +63,10 @@ #define NB_CAP_HTC 0x00000400 /* - * For F15h M60h and M70h, REG_HARDWARE_THERMAL_CONTROL - * and REG_REPORTED_TEMPERATURE have been moved to - * D0F0xBC_xD820_0C64 [Hardware Temperature Control] - * D0F0xBC_xD820_0CA4 [Reported Temperature Control] + * For F15h M60h, functionality of REG_REPORTED_TEMPERATURE + * has been moved to D0F0xBC_xD820_0CA4 [Reported Temperature + * Control] */ -#define F15H_M60H_HARDWARE_TEMP_CTRL_OFFSET 0xd8200c64 #define F15H_M60H_REPORTED_TEMP_CTRL_OFFSET 0xd8200ca4 /* F17h M01h Access througn SMN */ @@ -77,7 +74,6 @@ struct k10temp_data { struct pci_dev *pdev; - void (*read_htcreg)(struct pci_dev *pdev, u32 
*regval); void (*read_tempreg)(struct pci_dev *pdev, u32 *regval); int temp_offset; u32 temp_adjust_mask; @@ -102,11 +98,6 @@ { 0x17, "AMD Ryzen Threadripper 1910", 10000 }, }; -static void read_htcreg_pci(struct pci_dev *pdev, u32 *regval) -{ - pci_read_config_dword(pdev, REG_HARDWARE_THERMAL_CONTROL, regval); -} - static void read_tempreg_pci(struct pci_dev *pdev, u32 *regval) { pci_read_config_dword(pdev, REG_REPORTED_TEMPERATURE, regval); @@ -123,12 +114,6 @@ mutex_unlock(&nb_smu_ind_mutex); } -static void read_htcreg_nb_f15(struct pci_dev *pdev, u32 *regval) -{ - amd_nb_index_read(pdev, PCI_DEVFN(0, 0), 0xb8, - F15H_M60H_HARDWARE_TEMP_CTRL_OFFSET, regval); -} - static void read_tempreg_nb_f15(struct pci_dev *pdev, u32 *regval) { amd_nb_index_read(pdev, PCI_DEVFN(0, 0), 0xb8, @@ -137,8 +122,8 @@ static void read_tempreg_nb_f17(struct pci_dev *pdev, u32 *regval) { - amd_smn_read(amd_pci_dev_to_node_id(pdev), - F17H_M01H_REPORTED_TEMP_CTRL_OFFSET, regval); + amd_nb_index_read(pdev, PCI_DEVFN(0, 0), 0x60, + F17H_M01H_REPORTED_TEMP_CTRL_OFFSET, regval); } static ssize_t temp1_input_show(struct device *dev, @@ -175,7 +160,8 @@ u32 regval; int value; - data->read_htcreg(data->pdev, ®val); + pci_read_config_dword(data->pdev, + REG_HARDWARE_THERMAL_CONTROL, ®val); value = ((regval >> 16) & 0x7f) * 500 + 52000; if (show_hyst) value -= ((regval >> 24) & 0xf) * 500; @@ -195,18 +181,13 @@ struct pci_dev *pdev = data->pdev; if (index >= 2) { - u32 reg; - - if (!data->read_htcreg) - return 0; + u32 reg_caps, reg_htc; pci_read_config_dword(pdev, REG_NORTHBRIDGE_CAPABILITIES, - ®); - if (!(reg & NB_CAP_HTC)) - return 0; - - data->read_htcreg(data->pdev, ®); - if (!(reg & HTC_ENABLE)) + ®_caps); + pci_read_config_dword(pdev, REG_HARDWARE_THERMAL_CONTROL, + ®_htc); + if (!(reg_caps & NB_CAP_HTC) || !(reg_htc & HTC_ENABLE)) return 0; } return attr->mode; @@ -287,13 +268,11 @@ if (boot_cpu_data.x86 == 0x15 && (boot_cpu_data.x86_model == 0x60 || boot_cpu_data.x86_model == 0x70)) { 
- data->read_htcreg = read_htcreg_nb_f15; data->read_tempreg = read_tempreg_nb_f15; } else if (boot_cpu_data.x86 == 0x17) { data->temp_adjust_mask = 0x80000; data->read_tempreg = read_tempreg_nb_f17; } else { - data->read_htcreg = read_htcreg_pci; data->read_tempreg = read_tempreg_pci; } @@ -323,7 +302,7 @@ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_NB_F3) }, { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) }, { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) }, - { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F3) }, + { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_RR_NB) }, {} }; MODULE_DEVICE_TABLE(pci, k10temp_id_table); diff -u linux-azure-4.15.0/drivers/i2c/busses/i2c-designware-master.c linux-azure-4.15.0/drivers/i2c/busses/i2c-designware-master.c --- linux-azure-4.15.0/drivers/i2c/busses/i2c-designware-master.c +++ linux-azure-4.15.0/drivers/i2c/busses/i2c-designware-master.c @@ -207,10 +207,7 @@ i2c_dw_disable_int(dev); /* Enable the adapter */ - __i2c_dw_enable(dev, true); - - /* Dummy read to avoid the register getting stuck on Bay Trail */ - dw_readl(dev, DW_IC_ENABLE_STATUS); + __i2c_dw_enable_and_wait(dev, true); /* Clear and enable interrupts */ dw_readl(dev, DW_IC_CLR_INTR); diff -u linux-azure-4.15.0/drivers/i2c/busses/i2c-xlp9xx.c linux-azure-4.15.0/drivers/i2c/busses/i2c-xlp9xx.c --- linux-azure-4.15.0/drivers/i2c/busses/i2c-xlp9xx.c +++ linux-azure-4.15.0/drivers/i2c/busses/i2c-xlp9xx.c @@ -155,30 +155,9 @@ priv->msg_buf += len; } -static void xlp9xx_i2c_update_rlen(struct xlp9xx_i2c_dev *priv) -{ - u32 val, len; - - /* - * Update receive length. Re-read len to get the latest value, - * and then add 4 to have a minimum value that can be safely - * written. 
This is to account for the byte read above, the - * transfer in progress and any delays in the register I/O - */ - val = xlp9xx_read_i2c_reg(priv, XLP9XX_I2C_CTRL); - len = xlp9xx_read_i2c_reg(priv, XLP9XX_I2C_FIFOWCNT) & - XLP9XX_I2C_FIFO_WCNT_MASK; - len = max_t(u32, priv->msg_len, len + 4); - if (len >= I2C_SMBUS_BLOCK_MAX + 2) - return; - val = (val & ~XLP9XX_I2C_CTRL_MCTLEN_MASK) | - (len << XLP9XX_I2C_CTRL_MCTLEN_SHIFT); - xlp9xx_write_i2c_reg(priv, XLP9XX_I2C_CTRL, val); -} - static void xlp9xx_i2c_drain_rx_fifo(struct xlp9xx_i2c_dev *priv) { - u32 len, i; + u32 len, i, val; u8 rlen, *buf = priv->msg_buf; len = xlp9xx_read_i2c_reg(priv, XLP9XX_I2C_FIFOWCNT) & @@ -188,20 +167,21 @@ if (priv->len_recv) { /* read length byte */ rlen = xlp9xx_read_i2c_reg(priv, XLP9XX_I2C_MRXFIFO); - if (rlen > I2C_SMBUS_BLOCK_MAX || rlen == 0) { - rlen = 0; /*abort transfer */ - priv->msg_buf_remaining = 0; - priv->msg_len = 0; - } else { - *buf++ = rlen; - if (priv->client_pec) - ++rlen; /* account for error check byte */ - /* update remaining bytes and message length */ - priv->msg_buf_remaining = rlen; - priv->msg_len = rlen + 1; - } - xlp9xx_i2c_update_rlen(priv); + *buf++ = rlen; + len--; + + if (priv->client_pec) + ++rlen; + /* update remaining bytes and message length */ + priv->msg_buf_remaining = rlen; + priv->msg_len = rlen + 1; priv->len_recv = false; + + /* Update transfer length to read only actual data */ + val = xlp9xx_read_i2c_reg(priv, XLP9XX_I2C_CTRL); + val = (val & ~XLP9XX_I2C_CTRL_MCTLEN_MASK) | + ((rlen + 1) << XLP9XX_I2C_CTRL_MCTLEN_SHIFT); + xlp9xx_write_i2c_reg(priv, XLP9XX_I2C_CTRL, val); } else { len = min(priv->msg_buf_remaining, len); for (i = 0; i < len; i++, buf++) @@ -320,6 +300,10 @@ xlp9xx_write_i2c_reg(priv, XLP9XX_I2C_MFIFOCTRL, XLP9XX_I2C_MFIFOCTRL_RST); + /* set FIFO threshold if reading */ + if (priv->msg_read) + xlp9xx_i2c_update_rx_fifo_thres(priv); + /* set slave addr */ xlp9xx_write_i2c_reg(priv, XLP9XX_I2C_SLAVEADDR, (msg->addr << 
XLP9XX_I2C_SLAVEADDR_ADDR_SHIFT) | @@ -338,13 +322,9 @@ val &= ~XLP9XX_I2C_CTRL_ADDMODE; priv->len_recv = msg->flags & I2C_M_RECV_LEN; - len = priv->len_recv ? I2C_SMBUS_BLOCK_MAX + 2 : msg->len; + len = priv->len_recv ? XLP9XX_I2C_FIFO_SIZE : msg->len; priv->client_pec = msg->flags & I2C_CLIENT_PEC; - /* set FIFO threshold if reading */ - if (priv->msg_read) - xlp9xx_i2c_update_rx_fifo_thres(priv); - /* set data length to be transferred */ val = (val & ~XLP9XX_I2C_CTRL_MCTLEN_MASK) | (len << XLP9XX_I2C_CTRL_MCTLEN_SHIFT); @@ -398,11 +378,8 @@ } /* update msg->len with actual received length */ - if (msg->flags & I2C_M_RECV_LEN) { - if (!priv->msg_len) - return -EPROTO; + if (msg->flags & I2C_M_RECV_LEN) msg->len = priv->msg_len; - } return 0; } reverted: --- linux-azure-4.15.0/drivers/i2c/i2c-dev.c +++ linux-azure-4.15.0.orig/drivers/i2c/i2c-dev.c @@ -278,7 +278,7 @@ */ if (msgs[i].flags & I2C_M_RECV_LEN) { if (!(msgs[i].flags & I2C_M_RD) || + msgs[i].buf[0] < 1 || - msgs[i].len < 1 || msgs[i].buf[0] < 1 || msgs[i].len < msgs[i].buf[0] + I2C_SMBUS_BLOCK_MAX) { res = -EINVAL; diff -u linux-azure-4.15.0/drivers/infiniband/core/ucma.c linux-azure-4.15.0/drivers/infiniband/core/ucma.c --- linux-azure-4.15.0/drivers/infiniband/core/ucma.c +++ linux-azure-4.15.0/drivers/infiniband/core/ucma.c @@ -678,7 +678,7 @@ if (copy_from_user(&cmd, inbuf, sizeof(cmd))) return -EFAULT; - if ((cmd.src_addr.sin6_family && !rdma_addr_size_in6(&cmd.src_addr)) || + if (!rdma_addr_size_in6(&cmd.src_addr) || !rdma_addr_size_in6(&cmd.dst_addr)) return -EINVAL; reverted: --- linux-azure-4.15.0/drivers/infiniband/hw/cxgb4/cq.c +++ linux-azure-4.15.0.orig/drivers/infiniband/hw/cxgb4/cq.c @@ -315,7 +315,7 @@ * Deal with out-of-order and/or completions that complete * prior unsignalled WRs. 
*/ +void c4iw_flush_hw_cq(struct c4iw_cq *chp) -void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp) { struct t4_cqe *hw_cqe, *swcqe, read_cqe; struct c4iw_qp *qhp; @@ -339,13 +339,6 @@ if (qhp == NULL) goto next_cqe; - if (flush_qhp != qhp) { - spin_lock(&qhp->lock); - - if (qhp->wq.flushed == 1) - goto next_cqe; - } - if (CQE_OPCODE(hw_cqe) == FW_RI_TERMINATE) goto next_cqe; @@ -397,8 +390,6 @@ next_cqe: t4_hwcq_consume(&chp->cq); ret = t4_next_hw_cqe(&chp->cq, &hw_cqe); - if (qhp && flush_qhp != qhp) - spin_unlock(&qhp->lock); } } reverted: --- linux-azure-4.15.0/drivers/infiniband/hw/cxgb4/device.c +++ linux-azure-4.15.0.orig/drivers/infiniband/hw/cxgb4/device.c @@ -879,11 +879,6 @@ rdev->status_page->db_off = 0; - init_completion(&rdev->rqt_compl); - init_completion(&rdev->pbl_compl); - kref_init(&rdev->rqt_kref); - kref_init(&rdev->pbl_kref); - return 0; err_free_status_page_and_wr_log: if (c4iw_wr_log && rdev->wr_log) @@ -902,15 +897,13 @@ static void c4iw_rdev_close(struct c4iw_rdev *rdev) { + destroy_workqueue(rdev->free_workq); kfree(rdev->wr_log); c4iw_release_dev_ucontext(rdev, &rdev->uctx); free_page((unsigned long)rdev->status_page); c4iw_pblpool_destroy(rdev); c4iw_rqtpool_destroy(rdev); - wait_for_completion(&rdev->pbl_compl); - wait_for_completion(&rdev->rqt_compl); c4iw_ocqp_pool_destroy(rdev); - destroy_workqueue(rdev->free_workq); c4iw_destroy_resource(&rdev->resource); } reverted: --- linux-azure-4.15.0/drivers/infiniband/hw/cxgb4/iw_cxgb4.h +++ linux-azure-4.15.0.orig/drivers/infiniband/hw/cxgb4/iw_cxgb4.h @@ -185,10 +185,6 @@ struct wr_log_entry *wr_log; int wr_log_size; struct workqueue_struct *free_workq; - struct completion rqt_compl; - struct completion pbl_compl; - struct kref rqt_kref; - struct kref pbl_kref; }; static inline int c4iw_fatal_error(struct c4iw_rdev *rdev) @@ -1053,7 +1049,7 @@ void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size); u32 c4iw_ocqp_pool_alloc(struct c4iw_rdev *rdev, int size); 
void c4iw_ocqp_pool_free(struct c4iw_rdev *rdev, u32 addr, int size); +void c4iw_flush_hw_cq(struct c4iw_cq *chp); -void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp); void c4iw_count_rcqes(struct t4_cq *cq, struct t4_wq *wq, int *count); int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp); int c4iw_flush_rq(struct t4_wq *wq, struct t4_cq *cq, int count); reverted: --- linux-azure-4.15.0/drivers/infiniband/hw/cxgb4/qp.c +++ linux-azure-4.15.0.orig/drivers/infiniband/hw/cxgb4/qp.c @@ -1343,12 +1343,12 @@ qhp->wq.flushed = 1; t4_set_wq_in_error(&qhp->wq); + c4iw_flush_hw_cq(rchp); - c4iw_flush_hw_cq(rchp, qhp); c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count); rq_flushed = c4iw_flush_rq(&qhp->wq, &rchp->cq, count); if (schp != rchp) + c4iw_flush_hw_cq(schp); - c4iw_flush_hw_cq(schp, qhp); sq_flushed = c4iw_flush_sq(qhp); spin_unlock(&qhp->lock); reverted: --- linux-azure-4.15.0/drivers/infiniband/hw/cxgb4/resource.c +++ linux-azure-4.15.0.orig/drivers/infiniband/hw/cxgb4/resource.c @@ -260,22 +260,12 @@ rdev->stats.pbl.cur += roundup(size, 1 << MIN_PBL_SHIFT); if (rdev->stats.pbl.cur > rdev->stats.pbl.max) rdev->stats.pbl.max = rdev->stats.pbl.cur; - kref_get(&rdev->pbl_kref); } else rdev->stats.pbl.fail++; mutex_unlock(&rdev->stats.lock); return (u32)addr; } -static void destroy_pblpool(struct kref *kref) -{ - struct c4iw_rdev *rdev; - - rdev = container_of(kref, struct c4iw_rdev, pbl_kref); - gen_pool_destroy(rdev->pbl_pool); - complete(&rdev->pbl_compl); -} - void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size) { pr_debug("addr 0x%x size %d\n", addr, size); @@ -283,7 +273,6 @@ rdev->stats.pbl.cur -= roundup(size, 1 << MIN_PBL_SHIFT); mutex_unlock(&rdev->stats.lock); gen_pool_free(rdev->pbl_pool, (unsigned long)addr, size); - kref_put(&rdev->pbl_kref, destroy_pblpool); } int c4iw_pblpool_create(struct c4iw_rdev *rdev) @@ -321,7 +310,7 @@ void c4iw_pblpool_destroy(struct c4iw_rdev *rdev) { + 
gen_pool_destroy(rdev->pbl_pool); - kref_put(&rdev->pbl_kref, destroy_pblpool); } /* @@ -342,22 +331,12 @@ rdev->stats.rqt.cur += roundup(size << 6, 1 << MIN_RQT_SHIFT); if (rdev->stats.rqt.cur > rdev->stats.rqt.max) rdev->stats.rqt.max = rdev->stats.rqt.cur; - kref_get(&rdev->rqt_kref); } else rdev->stats.rqt.fail++; mutex_unlock(&rdev->stats.lock); return (u32)addr; } -static void destroy_rqtpool(struct kref *kref) -{ - struct c4iw_rdev *rdev; - - rdev = container_of(kref, struct c4iw_rdev, rqt_kref); - gen_pool_destroy(rdev->rqt_pool); - complete(&rdev->rqt_compl); -} - void c4iw_rqtpool_free(struct c4iw_rdev *rdev, u32 addr, int size) { pr_debug("addr 0x%x size %d\n", addr, size << 6); @@ -365,7 +344,6 @@ rdev->stats.rqt.cur -= roundup(size << 6, 1 << MIN_RQT_SHIFT); mutex_unlock(&rdev->stats.lock); gen_pool_free(rdev->rqt_pool, (unsigned long)addr, size << 6); - kref_put(&rdev->rqt_kref, destroy_rqtpool); } int c4iw_rqtpool_create(struct c4iw_rdev *rdev) @@ -402,7 +380,7 @@ void c4iw_rqtpool_destroy(struct c4iw_rdev *rdev) { + gen_pool_destroy(rdev->rqt_pool); - kref_put(&rdev->rqt_kref, destroy_rqtpool); } /* reverted: --- linux-azure-4.15.0/drivers/infiniband/hw/hfi1/driver.c +++ linux-azure-4.15.0.orig/drivers/infiniband/hw/hfi1/driver.c @@ -443,43 +443,31 @@ bool do_cnp) { struct hfi1_ibport *ibp = to_iport(qp->ibqp.device, qp->port_num); - struct hfi1_pportdata *ppd = ppd_from_ibp(ibp); struct ib_other_headers *ohdr = pkt->ohdr; struct ib_grh *grh = pkt->grh; u32 rqpn = 0, bth1; + u16 pkey, rlid, dlid = ib_get_dlid(pkt->hdr); - u16 pkey; - u32 rlid, slid, dlid = 0; u8 hdr_type, sc, svc_type; bool is_mcast = false; - /* can be called from prescan */ if (pkt->etype == RHF_RCV_TYPE_BYPASS) { is_mcast = hfi1_is_16B_mcast(dlid); pkey = hfi1_16B_get_pkey(pkt->hdr); sc = hfi1_16B_get_sc(pkt->hdr); - dlid = hfi1_16B_get_dlid(pkt->hdr); - slid = hfi1_16B_get_slid(pkt->hdr); hdr_type = HFI1_PKT_TYPE_16B; } else { is_mcast = (dlid > 
be16_to_cpu(IB_MULTICAST_LID_BASE)) &&
(dlid != be16_to_cpu(IB_LID_PERMISSIVE));
pkey = ib_bth_get_pkey(ohdr);
sc = hfi1_9B_get_sc5(pkt->hdr, pkt->rhf);
- dlid = ib_get_dlid(pkt->hdr);
- slid = ib_get_slid(pkt->hdr);
hdr_type = HFI1_PKT_TYPE_9B;
}

switch (qp->ibqp.qp_type) {
- case IB_QPT_UD:
- dlid = ppd->lid;
- rlid = slid;
- rqpn = ib_get_sqpn(pkt->ohdr);
- svc_type = IB_CC_SVCTYPE_UD;
- break;
case IB_QPT_SMI:
case IB_QPT_GSI:
+ case IB_QPT_UD:
+ rlid = ib_get_slid(pkt->hdr);
- rlid = slid;
rqpn = ib_get_sqpn(pkt->ohdr);
svc_type = IB_CC_SVCTYPE_UD;
break;
@@ -504,6 +492,7 @@
dlid, rlid, sc, grh);

if (!is_mcast && (bth1 & IB_BECN_SMASK)) {
+ struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
u32 lqpn = bth1 & RVT_QPN_MASK;
u8 sl = ibp->sc_to_sl[sc];
diff -u linux-azure-4.15.0/drivers/infiniband/hw/hfi1/hfi.h linux-azure-4.15.0/drivers/infiniband/hw/hfi1/hfi.h
--- linux-azure-4.15.0/drivers/infiniband/hw/hfi1/hfi.h
+++ linux-azure-4.15.0/drivers/infiniband/hw/hfi1/hfi.h
@@ -1531,13 +1531,13 @@
void process_becn(struct hfi1_pportdata *ppd, u8 sl, u32 rlid, u32 lqpn,
u32 rqpn, u8 svc_type);
void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn,
- u16 pkey, u32 slid, u32 dlid, u8 sc5,
+ u32 pkey, u32 slid, u32 dlid, u8 sc5,
const struct ib_grh *old_grh);
void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp,
- u32 remote_qpn, u16 pkey, u32 slid, u32 dlid,
+ u32 remote_qpn, u32 pkey, u32 slid, u32 dlid,
u8 sc5, const struct ib_grh *old_grh);
typedef void (*hfi1_handle_cnp)(struct hfi1_ibport *ibp, struct rvt_qp *qp,
- u32 remote_qpn, u16 pkey, u32 slid, u32 dlid,
+ u32 remote_qpn, u32 pkey, u32 slid, u32 dlid,
u8 sc5, const struct ib_grh *old_grh);

#define PKEY_CHECK_INVALID -1
@@ -2434,7 +2434,7 @@
((slid >> OPA_16B_SLID_SHIFT) << OPA_16B_SLID_HIGH_SHIFT);
lrh2 = (lrh2 & ~OPA_16B_DLID_MASK) |
((dlid >> OPA_16B_DLID_SHIFT) << OPA_16B_DLID_HIGH_SHIFT);
- lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | ((u32)pkey << OPA_16B_PKEY_SHIFT);
+ lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | (pkey << OPA_16B_PKEY_SHIFT);
lrh2 = (lrh2 & ~OPA_16B_L4_MASK) | l4;

hdr->lrh[0] = lrh0;
diff -u linux-azure-4.15.0/drivers/infiniband/hw/hfi1/init.c linux-azure-4.15.0/drivers/infiniband/hw/hfi1/init.c
--- linux-azure-4.15.0/drivers/infiniband/hw/hfi1/init.c
+++ linux-azure-4.15.0/drivers/infiniband/hw/hfi1/init.c
@@ -1254,8 +1254,6 @@
return ERR_PTR(-ENOMEM);
dd->num_pports = nports;
dd->pport = (struct hfi1_pportdata *)(dd + 1);
- dd->pcidev = pdev;
- pci_set_drvdata(pdev, dd);

INIT_LIST_HEAD(&dd->list);
idr_preload(GFP_KERNEL);
reverted:
--- linux-azure-4.15.0/drivers/infiniband/hw/hfi1/pcie.c
+++ linux-azure-4.15.0.orig/drivers/infiniband/hw/hfi1/pcie.c
@@ -163,6 +163,9 @@
resource_size_t addr;
int ret = 0;

+ dd->pcidev = pdev;
+ pci_set_drvdata(pdev, dd);
+
addr = pci_resource_start(pdev, 0);
len = pci_resource_len(pdev, 0);
reverted:
--- linux-azure-4.15.0/drivers/infiniband/hw/hfi1/ruc.c
+++ linux-azure-4.15.0.orig/drivers/infiniband/hw/hfi1/ruc.c
@@ -745,20 +745,6 @@
ohdr->bth[2] = cpu_to_be32(bth2);
}

-/**
- * hfi1_make_ruc_header_16B - build a 16B header
- * @qp: the queue pair
- * @ohdr: a pointer to the destination header memory
- * @bth0: bth0 passed in from the RC/UC builder
- * @bth2: bth2 passed in from the RC/UC builder
- * @middle: non zero implies indicates ahg "could" be used
- * @ps: the current packet state
- *
- * This routine may disarm ahg under these situations:
- * - packet needs a GRH
- * - BECN needed
- * - migration state not IB_MIG_MIGRATED
- */
static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,
struct ib_other_headers *ohdr,
u32 bth0, u32 bth2, int middle,
@@ -803,12 +789,6 @@
else
middle = 0;

- if (qp->s_flags & RVT_S_ECN) {
- qp->s_flags &= ~RVT_S_ECN;
- /* we recently received a FECN, so return a BECN */
- becn = true;
- middle = 0;
- }
if (middle)
build_ahg(qp, bth2);
else
@@ -816,6 +796,11 @@
bth0 |= pkey;
bth0 |= extra_bytes << 20;
+ if (qp->s_flags & RVT_S_ECN) {
+ qp->s_flags &= ~RVT_S_ECN;
+ /* we recently received a FECN, so return a BECN */
+ becn = 1;
+ }
hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2);

if (!ppd->lid)
@@ -833,20 +818,6 @@
pkey, becn, 0, l4, priv->s_sc);
}

-/**
- * hfi1_make_ruc_header_9B - build a 9B header
- * @qp: the queue pair
- * @ohdr: a pointer to the destination header memory
- * @bth0: bth0 passed in from the RC/UC builder
- * @bth2: bth2 passed in from the RC/UC builder
- * @middle: non zero implies indicates ahg "could" be used
- * @ps: the current packet state
- *
- * This routine may disarm ahg under these situations:
- * - packet needs a GRH
- * - BECN needed
- * - migration state not IB_MIG_MIGRATED
- */
static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp,
struct ib_other_headers *ohdr,
u32 bth0, u32 bth2, int middle,
@@ -880,12 +851,6 @@
else
middle = 0;

- if (qp->s_flags & RVT_S_ECN) {
- qp->s_flags &= ~RVT_S_ECN;
- /* we recently received a FECN, so return a BECN */
- bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);
- middle = 0;
- }
if (middle)
build_ahg(qp, bth2);
else
@@ -893,6 +858,11 @@
bth0 |= pkey;
bth0 |= extra_bytes << 20;
+ if (qp->s_flags & RVT_S_ECN) {
+ qp->s_flags &= ~RVT_S_ECN;
+ /* we recently received a FECN, so return a BECN */
+ bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);
+ }
hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2);
hfi1_make_ib_hdr(&ps->s_txreq->phdr.hdr.ibh, lrh0,
reverted:
--- linux-azure-4.15.0/drivers/infiniband/hw/hfi1/ud.c
+++ linux-azure-4.15.0.orig/drivers/infiniband/hw/hfi1/ud.c
@@ -630,7 +630,7 @@
}

void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp,
+ u32 remote_qpn, u32 pkey, u32 slid, u32 dlid,
- u32 remote_qpn, u16 pkey, u32 slid, u32 dlid,
u8 sc5, const struct ib_grh *old_grh)
{
u64 pbc, pbc_flags = 0;
@@ -688,7 +688,7 @@
}

void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn,
+ u32 pkey, u32 slid, u32 dlid, u8 sc5,
- u16 pkey, u32 slid, u32 dlid, u8 sc5,
const struct ib_grh *old_grh)
{
u64 pbc, pbc_flags = 0;
reverted:
--- linux-azure-4.15.0/drivers/infiniband/hw/mlx5/Kconfig
+++ linux-azure-4.15.0.orig/drivers/infiniband/hw/mlx5/Kconfig
@@ -1,7 +1,6 @@
config MLX5_INFINIBAND
tristate "Mellanox Connect-IB HCA support"
depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
- depends on INFINIBAND_USER_ACCESS || INFINIBAND_USER_ACCESS=n
---help---
This driver provides low-level InfiniBand support for
Mellanox Connect-IB PCI Express host channel adapters (HCAs).
diff -u linux-azure-4.15.0/drivers/infiniband/hw/mlx5/mr.c linux-azure-4.15.0/drivers/infiniband/hw/mlx5/mr.c
--- linux-azure-4.15.0/drivers/infiniband/hw/mlx5/mr.c
+++ linux-azure-4.15.0/drivers/infiniband/hw/mlx5/mr.c
@@ -848,28 +848,25 @@
int *order)
{
struct mlx5_ib_dev *dev = to_mdev(pd->device);
- struct ib_umem *u;
int err;

- *umem = NULL;
-
- u = ib_umem_get(pd->uobject->context, start, length, access_flags, 0);
- err = PTR_ERR_OR_ZERO(u);
+ *umem = ib_umem_get(pd->uobject->context, start, length,
+ access_flags, 0);
+ err = PTR_ERR_OR_ZERO(*umem);
if (err) {
- mlx5_ib_dbg(dev, "umem get failed (%d)\n", err);
+ *umem = NULL;
+ mlx5_ib_err(dev, "umem get failed (%d)\n", err);
return err;
}

- mlx5_ib_cont_pages(u, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages,
+ mlx5_ib_cont_pages(*umem, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages,
page_shift, ncont, order);
if (!*npages) {
mlx5_ib_warn(dev, "avoid zero region\n");
- ib_umem_release(u);
+ ib_umem_release(*umem);
return -EINVAL;
}

- *umem = u;
-
mlx5_ib_dbg(dev, "npages %d, ncont %d, order %d, page_shift %d\n",
*npages, *ncont, *order, *page_shift);
@@ -1368,12 +1365,13 @@
int access_flags = flags & IB_MR_REREG_ACCESS ?
new_access_flags :
mr->access_flags;
+ u64 addr = (flags & IB_MR_REREG_TRANS) ? virt_addr : mr->umem->address;
+ u64 len = (flags & IB_MR_REREG_TRANS) ? length : mr->umem->length;
int page_shift = 0;
int upd_flags = 0;
int npages = 0;
int ncont = 0;
int order = 0;
- u64 addr, len;
int err;

mlx5_ib_dbg(dev, "start 0x%llx, virt_addr 0x%llx, length 0x%llx, access_flags 0x%x\n",
@@ -1381,17 +1379,6 @@
atomic_sub(mr->npages, &dev->mdev->priv.reg_pages);

- if (!mr->umem)
- return -EINVAL;
-
- if (flags & IB_MR_REREG_TRANS) {
- addr = virt_addr;
- len = length;
- } else {
- addr = mr->umem->address;
- len = mr->umem->length;
- }
-
if (flags != IB_MR_REREG_PD) {
/*
* Replace umem. This needs to be done whether or not UMR is
@@ -1399,7 +1386,6 @@
*/
flags |= IB_MR_REREG_TRANS;
ib_umem_release(mr->umem);
- mr->umem = NULL;
err = mr_umem_get(pd, addr, len, access_flags, &mr->umem,
&npages, &page_shift, &ncont, &order);
if (err < 0) {
diff -u linux-azure-4.15.0/drivers/infiniband/hw/mlx5/qp.c linux-azure-4.15.0/drivers/infiniband/hw/mlx5/qp.c
--- linux-azure-4.15.0/drivers/infiniband/hw/mlx5/qp.c
+++ linux-azure-4.15.0/drivers/infiniband/hw/mlx5/qp.c
@@ -256,11 +256,7 @@
} else {
if (ucmd) {
qp->rq.wqe_cnt = ucmd->rq_wqe_count;
- if (ucmd->rq_wqe_shift > BITS_PER_BYTE * sizeof(ucmd->rq_wqe_shift))
- return -EINVAL;
qp->rq.wqe_shift = ucmd->rq_wqe_shift;
- if ((1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) < qp->wq_sig)
- return -EINVAL;
qp->rq.max_gs = (1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) - qp->wq_sig;
qp->rq.max_post = qp->rq.wqe_cnt;
} else {
@@ -2254,18 +2250,18 @@

static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
{
- if (rate == IB_RATE_PORT_CURRENT)
+ if (rate == IB_RATE_PORT_CURRENT) {
return 0;
-
- if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS)
+ } else if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) {
return -EINVAL;
+ } else {
+ while (rate != IB_RATE_2_5_GBPS &&
+ !(1 << (rate + MLX5_STAT_RATE_OFFSET) &
+ MLX5_CAP_GEN(dev->mdev, stat_rate_support)))
+ --rate;
+ }
-
- while (rate != IB_RATE_PORT_CURRENT &&
- !(1 << (rate + MLX5_STAT_RATE_OFFSET) &
- MLX5_CAP_GEN(dev->mdev, stat_rate_support)))
- --rate;
-
- return rate ? rate + MLX5_STAT_RATE_OFFSET : rate;
+ return rate + MLX5_STAT_RATE_OFFSET;
}

static int modify_raw_packet_eth_prio(struct mlx5_core_dev *dev,
reverted:
--- linux-azure-4.15.0/drivers/input/input-leds.c
+++ linux-azure-4.15.0.orig/drivers/input/input-leds.c
@@ -88,7 +88,6 @@
const struct input_device_id *id)
{
struct input_leds *leds;
- struct input_led *led;
unsigned int num_leds;
unsigned int led_code;
int led_no;
@@ -120,13 +119,14 @@
led_no = 0;
for_each_set_bit(led_code, dev->ledbit, LED_CNT) {
+ struct input_led *led = &leds->leds[led_no];
- if (!input_led_info[led_code].name)
- continue;

- led = &leds->leds[led_no];
led->handle = &leds->handle;
led->code = led_code;

+ if (!input_led_info[led_code].name)
+ continue;
+
led->cdev.name = kasprintf(GFP_KERNEL, "%s::%s",
dev_name(&dev->dev),
input_led_info[led_code].name);
reverted:
--- linux-azure-4.15.0/drivers/input/mouse/elantech.c
+++ linux-azure-4.15.0.orig/drivers/input/mouse/elantech.c
@@ -804,7 +804,7 @@
else if (ic_version == 7 && etd->samples[1] == 0x2A)
sanity_check = ((packet[3] & 0x1c) == 0x10);
else
+ sanity_check = ((packet[0] & 0x0c) == 0x04 &&
- sanity_check = ((packet[0] & 0x08) == 0x00 &&
(packet[3] & 0x1c) == 0x10);

if (!sanity_check)
@@ -1177,12 +1177,6 @@
{ }
};

-static const char * const middle_button_pnp_ids[] = {
- "LEN2131", /* ThinkPad P52 w/ NFC */
- "LEN2132", /* ThinkPad P52 */
- NULL
-};
-
/*
* Set the appropriate event bits for the input subsystem
*/
@@ -1202,8 +1196,7 @@
__clear_bit(EV_REL, dev->evbit);

__set_bit(BTN_LEFT, dev->keybit);
+ if (dmi_check_system(elantech_dmi_has_middle_button))
- if (dmi_check_system(elantech_dmi_has_middle_button) ||
- psmouse_matches_pnp_id(psmouse, middle_button_pnp_ids))
__set_bit(BTN_MIDDLE, dev->keybit);
__set_bit(BTN_RIGHT, dev->keybit);
reverted:
--- linux-azure-4.15.0/drivers/input/touchscreen/atmel_mxt_ts.c
+++ linux-azure-4.15.0.orig/drivers/input/touchscreen/atmel_mxt_ts.c
@@ -3031,24 +3031,6 @@
.driver_data = samus_platform_data,
},
{
- /* Samsung Chromebook Pro */
- .ident = "Samsung Chromebook Pro",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Google"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Caroline"),
- },
- .driver_data = samus_platform_data,
- },
- {
- /* Samsung Chromebook Pro */
- .ident = "Samsung Chromebook Pro",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Google"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Caroline"),
- },
- .driver_data = samus_platform_data,
- },
- {
/* Other Google Chromebooks */
.ident = "Chromebook",
.matches = {
reverted:
--- linux-azure-4.15.0/drivers/irqchip/qcom-irq-combiner.c
+++ linux-azure-4.15.0.orig/drivers/irqchip/qcom-irq-combiner.c
@@ -1,4 +1,4 @@
+/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
-/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -68,7 +68,7 @@
bit = readl_relaxed(combiner->regs[reg].addr);
status = bit & combiner->regs[reg].enabled;
+ if (!status)
- if (bit && !status)
pr_warn_ratelimited("Unexpected IRQ on CPU%d: (%08x %08lx %p)\n",
smp_processor_id(), bit,
combiner->regs[reg].enabled,
reverted:
--- linux-azure-4.15.0/drivers/mailbox/pcc.c
+++ linux-azure-4.15.0.orig/drivers/mailbox/pcc.c
@@ -373,24 +373,33 @@
};

/**
+ * parse_pcc_subspace - Parse the PCC table and verify PCC subspace
+ * entries. There should be one entry per PCC client.
- * parse_pcc_subspaces -- Count PCC subspaces defined
* @header: Pointer to the ACPI subtable header under the PCCT.
* @end: End of subtable entry.
*
+ * Return: 0 for Success, else errno.
- * Return: If we find a PCC subspace entry of a valid type, return 0.
- * Otherwise, return -EINVAL.
*
* This gets called for each entry in the PCC table.
*/
static int parse_pcc_subspace(struct acpi_subtable_header *header,
const unsigned long end)
{
+ struct acpi_pcct_hw_reduced *pcct_ss;
+
+ if (pcc_mbox_ctrl.num_chans <= MAX_PCC_SUBSPACES) {
+ pcct_ss = (struct acpi_pcct_hw_reduced *) header;
- struct acpi_pcct_subspace *ss = (struct acpi_pcct_subspace *) header;
+ if ((pcct_ss->header.type !=
+ ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE)
+ && (pcct_ss->header.type !=
+ ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE_TYPE2)) {
+ pr_err("Incorrect PCC Subspace type detected\n");
+ return -EINVAL;
+ }
+ }
- if (ss->header.type < ACPI_PCCT_TYPE_RESERVED)
- return 0;
+ return 0;
- return -EINVAL;
}

/**
@@ -440,8 +449,8 @@
struct acpi_table_header *pcct_tbl;
struct acpi_subtable_header *pcct_entry;
struct acpi_table_pcct *acpi_pcct_tbl;
- struct acpi_subtable_proc proc[ACPI_PCCT_TYPE_RESERVED];
int count, i, rc;
+ int sum = 0;
acpi_status status = AE_OK;

/* Search for PCCT */
@@ -450,41 +459,43 @@
if (ACPI_FAILURE(status) || !pcct_tbl)
return -ENODEV;

+ count = acpi_table_parse_entries(ACPI_SIG_PCCT,
+ sizeof(struct acpi_table_pcct),
+ ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE,
+ parse_pcc_subspace, MAX_PCC_SUBSPACES);
+ sum += (count > 0) ? count : 0;
+
+ count = acpi_table_parse_entries(ACPI_SIG_PCCT,
+ sizeof(struct acpi_table_pcct),
+ ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE_TYPE2,
+ parse_pcc_subspace, MAX_PCC_SUBSPACES);
+ sum += (count > 0) ? count : 0;
- /* Set up the subtable handlers */
- for (i = ACPI_PCCT_TYPE_GENERIC_SUBSPACE;
- i < ACPI_PCCT_TYPE_RESERVED; i++) {
- proc[i].id = i;
- proc[i].count = 0;
- proc[i].handler = parse_pcc_subspace;
- }

+ if (sum == 0 || sum >= MAX_PCC_SUBSPACES) {
+ pr_err("Error parsing PCC subspaces from PCCT\n");
- count = acpi_table_parse_entries_array(ACPI_SIG_PCCT,
- sizeof(struct acpi_table_pcct), proc,
- ACPI_PCCT_TYPE_RESERVED, MAX_PCC_SUBSPACES);
- if (count == 0 || count > MAX_PCC_SUBSPACES) {
- pr_warn("Invalid PCCT: %d PCC subspaces\n", count);
return -EINVAL;
}

+ pcc_mbox_channels = kzalloc(sizeof(struct mbox_chan) *
+ sum, GFP_KERNEL);
- pcc_mbox_channels = kzalloc(sizeof(struct mbox_chan) * count, GFP_KERNEL);
if (!pcc_mbox_channels) {
pr_err("Could not allocate space for PCC mbox channels\n");
return -ENOMEM;
}

+ pcc_doorbell_vaddr = kcalloc(sum, sizeof(void *), GFP_KERNEL);
- pcc_doorbell_vaddr = kcalloc(count, sizeof(void *), GFP_KERNEL);
if (!pcc_doorbell_vaddr) {
rc = -ENOMEM;
goto err_free_mbox;
}

+ pcc_doorbell_ack_vaddr = kcalloc(sum, sizeof(void *), GFP_KERNEL);
- pcc_doorbell_ack_vaddr = kcalloc(count, sizeof(void *), GFP_KERNEL);
if (!pcc_doorbell_ack_vaddr) {
rc = -ENOMEM;
goto err_free_db_vaddr;
}

+ pcc_doorbell_irq = kcalloc(sum, sizeof(int), GFP_KERNEL);
- pcc_doorbell_irq = kcalloc(count, sizeof(int), GFP_KERNEL);
if (!pcc_doorbell_irq) {
rc = -ENOMEM;
goto err_free_db_ack_vaddr;
@@ -498,24 +509,18 @@
if (acpi_pcct_tbl->flags & ACPI_PCCT_DOORBELL)
pcc_mbox_ctrl.txdone_irq = true;

+ for (i = 0; i < sum; i++) {
- for (i = 0; i < count; i++) {
struct acpi_generic_address *db_reg;
+ struct acpi_pcct_hw_reduced *pcct_ss;
- struct acpi_pcct_subspace *pcct_ss;
pcc_mbox_channels[i].con_priv = pcct_entry;

+ pcct_ss = (struct acpi_pcct_hw_reduced *) pcct_entry;
+
+ if (pcc_mbox_ctrl.txdone_irq) {
+ rc = pcc_parse_subspace_irq(i, pcct_ss);
+ if (rc < 0)
+ goto err;
- if (pcct_entry->type == ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE ||
- pcct_entry->type == ACPI_PCCT_TYPE_HW_REDUCED_SUBSPACE_TYPE2) {
- struct acpi_pcct_hw_reduced *pcct_hrss;
-
- pcct_hrss = (struct acpi_pcct_hw_reduced *) pcct_entry;
-
- if (pcc_mbox_ctrl.txdone_irq) {
- rc = pcc_parse_subspace_irq(i, pcct_hrss);
- if (rc < 0)
- goto err;
- }
}
- pcct_ss = (struct acpi_pcct_subspace *) pcct_entry;

/* If doorbell is in system memory cache the virt address */
db_reg = &pcct_ss->doorbell_register;
@@ -526,7 +531,7 @@
((unsigned long) pcct_entry + pcct_entry->length);
}

+ pcc_mbox_ctrl.num_chans = sum;
- pcc_mbox_ctrl.num_chans = count;

pr_info("Detected %d PCC Subspaces\n", pcc_mbox_ctrl.num_chans);
reverted:
--- linux-azure-4.15.0/drivers/md/dm-integrity.c
+++ linux-azure-4.15.0.orig/drivers/md/dm-integrity.c
@@ -2440,7 +2440,7 @@
unsigned i;

for (i = 0; i < ic->journal_sections; i++)
kvfree(sl[i]);
+ kfree(sl);
- kvfree(sl);
}

static struct scatterlist **dm_integrity_alloc_journal_scatterlist(struct dm_integrity_c *ic, struct page_list *pl)
reverted:
--- linux-azure-4.15.0/drivers/media/dvb-core/dmxdev.c
+++ linux-azure-4.15.0.orig/drivers/media/dvb-core/dmxdev.c
@@ -1053,7 +1053,7 @@
break;

default:
+ ret = -EINVAL;
- ret = -ENOTTY;
break;
}
mutex_unlock(&dmxdev->mutex);
reverted:
--- linux-azure-4.15.0/drivers/media/dvb-frontends/lgdt3306a.c
+++ linux-azure-4.15.0.orig/drivers/media/dvb-frontends/lgdt3306a.c
@@ -1768,13 +1768,7 @@
struct lgdt3306a_state *state = fe->demodulator_priv;

dbg_info("\n");
+ kfree(state);
-
- /*
- * If state->muxc is not NULL, then we are an i2c device
- * and lgdt3306a_remove will clean up state
- */
- if (!state->muxc)
- kfree(state);
}

static const struct dvb_frontend_ops lgdt3306a_ops;
@@ -2175,7 +2169,7 @@
sizeof(struct lgdt3306a_config));

config->i2c_addr = client->addr;
+ fe = lgdt3306a_attach(config, client->adapter);
- fe = dvb_attach(lgdt3306a_attach, config, client->adapter);
if (fe == NULL) {
ret = -ENODEV;
goto err_fe;
reverted:
--- linux-azure-4.15.0/drivers/media/i2c/adv748x/adv748x-hdmi.c
+++ linux-azure-4.15.0.orig/drivers/media/i2c/adv748x/adv748x-hdmi.c
@@ -105,9 +105,6 @@
fmt->width = hdmi->timings.bt.width;
fmt->height = hdmi->timings.bt.height;
-
- if (fmt->field == V4L2_FIELD_ALTERNATE)
- fmt->height /= 2;
}

static void adv748x_fill_optional_dv_timings(struct v4l2_dv_timings *timings)
diff -u linux-azure-4.15.0/drivers/media/i2c/ov5645.c linux-azure-4.15.0/drivers/media/i2c/ov5645.c
--- linux-azure-4.15.0/drivers/media/i2c/ov5645.c
+++ linux-azure-4.15.0/drivers/media/i2c/ov5645.c
@@ -1230,14 +1230,13 @@
ret = v4l2_fwnode_endpoint_parse(of_fwnode_handle(endpoint),
&ov5645->ep);
-
- of_node_put(endpoint);
-
if (ret < 0) {
dev_err(dev, "parsing endpoint node failed\n");
return ret;
}

+ of_node_put(endpoint);
+
if (ov5645->ep.bus_type != V4L2_MBUS_CSI2) {
dev_err(dev, "invalid bus type, must be CSI2\n");
return -EINVAL;
reverted:
--- linux-azure-4.15.0/drivers/media/i2c/tvp5150.c
+++ linux-azure-4.15.0.orig/drivers/media/i2c/tvp5150.c
@@ -506,77 +506,80 @@
/* FIXME: Current api doesn't handle all VBI types, those not
yet supported are placed under #if 0 */
#if 0
+ {0x010, /* Teletext, SECAM, WST System A */
- [0] = {0x010, /* Teletext, SECAM, WST System A */
{V4L2_SLICED_TELETEXT_SECAM,6,23,1},
{ 0xaa, 0xaa, 0xff, 0xff, 0xe7, 0x2e, 0x20, 0x26,
0xe6, 0xb4, 0x0e, 0x00, 0x00, 0x00, 0x10, 0x00 }
},
#endif
+ {0x030, /* Teletext, PAL, WST System B */
- [1] = {0x030, /* Teletext, PAL, WST System B */
{V4L2_SLICED_TELETEXT_B,6,22,1},
{ 0xaa, 0xaa, 0xff, 0xff, 0x27, 0x2e, 0x20, 0x2b,
0xa6, 0x72, 0x10, 0x00, 0x00, 0x00, 0x10, 0x00 }
},
#if 0
+ {0x050, /* Teletext, PAL, WST System C */
- [2] = {0x050, /* Teletext, PAL, WST System C */
{V4L2_SLICED_TELETEXT_PAL_C,6,22,1},
{ 0xaa, 0xaa, 0xff, 0xff, 0xe7, 0x2e, 0x20, 0x22,
0xa6, 0x98, 0x0d, 0x00, 0x00, 0x00, 0x10, 0x00 }
},
+ {0x070, /* Teletext, NTSC, WST System B */
- [3] = {0x070, /* Teletext, NTSC, WST System B */
{V4L2_SLICED_TELETEXT_NTSC_B,10,21,1},
{ 0xaa, 0xaa, 0xff, 0xff, 0x27, 0x2e, 0x20, 0x23,
0x69, 0x93, 0x0d, 0x00, 0x00, 0x00, 0x10, 0x00 }
},
+ {0x090, /* Tetetext, NTSC NABTS System C */
- [4] = {0x090, /* Tetetext, NTSC NABTS System C */
{V4L2_SLICED_TELETEXT_NTSC_C,10,21,1},
{ 0xaa, 0xaa, 0xff, 0xff, 0xe7, 0x2e, 0x20, 0x22,
0x69, 0x93, 0x0d, 0x00, 0x00, 0x00, 0x15, 0x00 }
},
+ {0x0b0, /* Teletext, NTSC-J, NABTS System D */
- [5] = {0x0b0, /* Teletext, NTSC-J, NABTS System D */
{V4L2_SLICED_TELETEXT_NTSC_D,10,21,1},
{ 0xaa, 0xaa, 0xff, 0xff, 0xa7, 0x2e, 0x20, 0x23,
0x69, 0x93, 0x0d, 0x00, 0x00, 0x00, 0x10, 0x00 }
},
+ {0x0d0, /* Closed Caption, PAL/SECAM */
- [6] = {0x0d0, /* Closed Caption, PAL/SECAM */
{V4L2_SLICED_CAPTION_625,22,22,1},
{ 0xaa, 0x2a, 0xff, 0x3f, 0x04, 0x51, 0x6e, 0x02,
0xa6, 0x7b, 0x09, 0x00, 0x00, 0x00, 0x27, 0x00 }
},
#endif
+ {0x0f0, /* Closed Caption, NTSC */
- [7] = {0x0f0, /* Closed Caption, NTSC */
{V4L2_SLICED_CAPTION_525,21,21,1},
{ 0xaa, 0x2a, 0xff, 0x3f, 0x04, 0x51, 0x6e, 0x02,
0x69, 0x8c, 0x09, 0x00, 0x00, 0x00, 0x27, 0x00 }
},
+ {0x110, /* Wide Screen Signal, PAL/SECAM */
- [8] = {0x110, /* Wide Screen Signal, PAL/SECAM */
{V4L2_SLICED_WSS_625,23,23,1},
{ 0x5b, 0x55, 0xc5, 0xff, 0x00, 0x71, 0x6e, 0x42,
0xa6, 0xcd, 0x0f, 0x00, 0x00, 0x00, 0x3a, 0x00 }
},
#if 0
+ {0x130, /* Wide Screen Signal, NTSC C */
- [9] = {0x130, /* Wide Screen Signal, NTSC C */
{V4L2_SLICED_WSS_525,20,20,1},
{ 0x38, 0x00, 0x3f, 0x00, 0x00, 0x71, 0x6e, 0x43,
0x69, 0x7c, 0x08, 0x00, 0x00, 0x00, 0x39, 0x00 }
},
+ {0x150, /* Vertical Interval Timecode (VITC), PAL/SECAM */
- [10] = {0x150, /* Vertical Interval Timecode (VITC), PAL/SECAM */
{V4l2_SLICED_VITC_625,6,22,0},
{ 0x00, 0x00, 0x00, 0x00, 0x00, 0x8f, 0x6d, 0x49,
0xa6, 0x85, 0x08, 0x00, 0x00, 0x00, 0x4c, 0x00 }
},
+ {0x170, /* Vertical Interval Timecode (VITC), NTSC */
- [11] = {0x170, /* Vertical Interval Timecode (VITC), NTSC */
{V4l2_SLICED_VITC_525,10,20,0},
{ 0x00, 0x00, 0x00, 0x00, 0x00, 0x8f, 0x6d, 0x49,
0x69, 0x94, 0x08, 0x00, 0x00, 0x00, 0x4c, 0x00 }
},
#endif
+ {0x190, /* Video Program System (VPS), PAL */
- [12] = {0x190, /* Video Program System (VPS), PAL */
{V4L2_SLICED_VPS,16,16,0},
{ 0xaa, 0xaa, 0xff, 0xff, 0xba, 0xce, 0x2b, 0x0d,
0xa6, 0xda, 0x0b, 0x00, 0x00, 0x00, 0x60, 0x00 }
},
/* 0x1d0 User programmable */
+
+ /* End of struct */
+ { (u16)-1 }
};

static int tvp5150_write_inittab(struct v4l2_subdev *sd,
@@ -589,10 +592,10 @@
return 0;
}

+static int tvp5150_vdp_init(struct v4l2_subdev *sd,
+ const struct i2c_vbi_ram_value *regs)
-static int tvp5150_vdp_init(struct v4l2_subdev *sd)
{
unsigned int i;
- int j;

/* Disable Full Field */
tvp5150_write(sd, TVP5150_FULL_FIELD_ENA, 0);
@@ -602,17 +605,14 @@
tvp5150_write(sd, i, 0xff);

/* Load Ram Table */
+ while (regs->reg != (u16)-1) {
- for (j = 0; j < ARRAY_SIZE(vbi_ram_default); j++) {
- const struct i2c_vbi_ram_value *regs = &vbi_ram_default[j];
-
- if (!regs->type.vbi_type)
- continue;
-
tvp5150_write(sd, TVP5150_CONF_RAM_ADDR_HIGH, regs->reg >> 8);
tvp5150_write(sd, TVP5150_CONF_RAM_ADDR_LOW, regs->reg);

for (i = 0; i < 16; i++)
tvp5150_write(sd, TVP5150_VDP_CONF_RAM_DATA, regs->values[i]);
+
+ regs++;
}
return 0;
}
@@ -621,23 +621,19 @@
static int tvp5150_g_sliced_vbi_cap(struct v4l2_subdev *sd,
struct v4l2_sliced_vbi_cap *cap)
{
+ const struct i2c_vbi_ram_value *regs = vbi_ram_default;
+ int line;
- int line, i;

dev_dbg_lvl(sd->dev, 1, debug, "g_sliced_vbi_cap\n");
memset(cap, 0, sizeof *cap);

+ while (regs->reg != (u16)-1 ) {
+ for (line=regs->type.ini_line;line<=regs->type.end_line;line++) {
- for (i = 0; i < ARRAY_SIZE(vbi_ram_default); i++) {
- const struct i2c_vbi_ram_value *regs = &vbi_ram_default[i];
-
- if (!regs->type.vbi_type)
- continue;
-
- for (line = regs->type.ini_line;
- line <= regs->type.end_line;
- line++) {
cap->service_lines[0][line] |= regs->type.vbi_type;
}
cap->service_set |= regs->type.vbi_type;
+
+ regs++;
}
return 0;
}
@@ -656,13 +652,14 @@
* MSB = field2
*/
static int tvp5150_set_vbi(struct v4l2_subdev *sd,
+ const struct i2c_vbi_ram_value *regs,
unsigned int type,u8 flags, int line,
const int fields)
{
struct tvp5150 *decoder = to_tvp5150(sd);
v4l2_std_id std = decoder->norm;
u8 reg;
+ int pos = 0;
- int i, pos = 0;

if (std == V4L2_STD_ALL) {
dev_err(sd->dev, "VBI can't be configured without knowing number of lines\n");
@@ -675,19 +672,19 @@
if (line < 6 || line > 27)
return 0;

+ while (regs->reg != (u16)-1) {
- for (i = 0; i < ARRAY_SIZE(vbi_ram_default); i++) {
- const struct i2c_vbi_ram_value *regs = &vbi_ram_default[i];
-
- if (!regs->type.vbi_type)
- continue;
-
if ((type & regs->type.vbi_type) &&
(line >= regs->type.ini_line) &&
(line <= regs->type.end_line))
break;
+
+ regs++;
pos++;
}

+ if (regs->reg == (u16)-1)
+ return 0;
+
type = pos | (flags & 0xf0);
reg = ((line - 6) << 1) + TVP5150_LINE_MODE_INI;
@@ -700,7 +697,8 @@
return type;
}

+static int tvp5150_get_vbi(struct v4l2_subdev *sd,
+ const struct i2c_vbi_ram_value *regs, int line)
-static int tvp5150_get_vbi(struct v4l2_subdev *sd, int line)
{
struct tvp5150 *decoder = to_tvp5150(sd);
v4l2_std_id std = decoder->norm;
@@ -729,8 +727,8 @@
return 0;
}
pos = ret & 0x0f;
+ if (pos < 0x0f)
+ type |= regs[pos].type.vbi_type;
- if (pos < ARRAY_SIZE(vbi_ram_default))
- type |= vbi_ram_default[pos].type.vbi_type;
}

return type;
@@ -791,7 +789,7 @@
tvp5150_write_inittab(sd, tvp5150_init_default);

/* Initializes VDP registers */
+ tvp5150_vdp_init(sd, vbi_ram_default);
- tvp5150_vdp_init(sd);

/* Selects decoder input */
tvp5150_selmux(sd);
@@ -1124,8 +1122,8 @@
for (i = 0; i <= 23; i++) {
svbi->service_lines[1][i] = 0;
svbi->service_lines[0][i] =
+ tvp5150_set_vbi(sd, vbi_ram_default,
+ svbi->service_lines[0][i], 0xf0, i, 3);
- tvp5150_set_vbi(sd, svbi->service_lines[0][i],
- 0xf0, i, 3);
}
/* Enables FIFO */
tvp5150_write(sd, TVP5150_FIFO_OUT_CTRL, 1);
@@ -1151,7 +1149,7 @@
for (i = 0; i <= 23; i++) {
svbi->service_lines[0][i] =
+ tvp5150_get_vbi(sd, vbi_ram_default, i);
- tvp5150_get_vbi(sd, i);
mask |= svbi->service_lines[0][i];
}
svbi->service_set = mask;
reverted:
--- linux-azure-4.15.0/drivers/media/pci/cx23885/cx23885-cards.c
+++ linux-azure-4.15.0.orig/drivers/media/pci/cx23885/cx23885-cards.c
@@ -2286,10 +2286,6 @@
&dev->i2c_bus[2].i2c_adap,
"cx25840", 0x88 >> 1, NULL);
if (dev->sd_cx25840) {
- /* set host data for clk_freq configuration */
- v4l2_set_subdev_hostdata(dev->sd_cx25840,
- &dev->clk_freq);
-
dev->sd_cx25840->grp_id = CX23885_HW_AV_CORE;
v4l2_subdev_call(dev->sd_cx25840, core, load_fw);
}
reverted:
--- linux-azure-4.15.0/drivers/media/pci/cx23885/cx23885-core.c
+++ linux-azure-4.15.0.orig/drivers/media/pci/cx23885/cx23885-core.c
@@ -873,16 +873,6 @@
if (cx23885_boards[dev->board].clk_freq > 0)
dev->clk_freq = cx23885_boards[dev->board].clk_freq;

- if (dev->board == CX23885_BOARD_HAUPPAUGE_IMPACTVCBE &&
- dev->pci->subsystem_device == 0x7137) {
- /* Hauppauge ImpactVCBe device ID 0x7137 is populated
- * with an 888, and a 25Mhz crystal, instead of the
- * usual third overtone 50Mhz. The default clock rate must
- * be overridden so the cx25840 is properly configured
- */
- dev->clk_freq = 25000000;
- }
-
dev->pci_bus = dev->pci->bus->number;
dev->pci_slot = PCI_SLOT(dev->pci->devfn);
cx23885_irq_add(dev, 0x001f00);
reverted:
--- linux-azure-4.15.0/drivers/media/pci/cx25821/cx25821-core.c
+++ linux-azure-4.15.0.orig/drivers/media/pci/cx25821/cx25821-core.c
@@ -867,10 +867,6 @@
dev->nr = ++cx25821_devcount;
sprintf(dev->name, "cx25821[%d]", dev->nr);

- if (dev->nr >= ARRAY_SIZE(card)) {
- CX25821_INFO("dev->nr >= %zd", ARRAY_SIZE(card));
- return -ENODEV;
- }

if (dev->pci->device != 0x8210) {
pr_info("%s(): Exiting. Incorrect Hardware device = 0x%02x\n",
__func__, dev->pci->device);
@@ -886,6 +882,9 @@
dev->channels[i].sram_channels = &cx25821_sram_channels[i];
}

+ if (dev->nr > 1)
+ CX25821_INFO("dev->nr > 1!");
+
/* board config */
dev->board = 1; /* card[dev->nr]; */
dev->_max_num_decoders = MAX_DECODERS;
reverted:
--- linux-azure-4.15.0/drivers/media/platform/s3c-camif/camif-capture.c
+++ linux-azure-4.15.0.orig/drivers/media/platform/s3c-camif/camif-capture.c
@@ -1256,17 +1256,16 @@
{
const struct s3c_camif_variant *variant = camif->variant;
const struct vp_pix_limits *pix_lim;
+ int i = ARRAY_SIZE(camif_mbus_formats);
- unsigned int i;

/* FIXME: constraints against codec or preview path ? */
pix_lim = &variant->vp_pix_limits[VP_CODEC];

+ while (i-- >= 0)
- for (i = 0; i < ARRAY_SIZE(camif_mbus_formats); i++)
if (camif_mbus_formats[i] == mf->code)
break;

+ mf->code = camif_mbus_formats[i];
- if (i == ARRAY_SIZE(camif_mbus_formats))
- mf->code = camif_mbus_formats[0];

if (pad == CAMIF_SD_PAD_SINK) {
v4l_bound_align_image(&mf->width, 8, CAMIF_MAX_PIX_WIDTH,
diff -u linux-azure-4.15.0/drivers/media/platform/vivid/vivid-ctrls.c linux-azure-4.15.0/drivers/media/platform/vivid/vivid-ctrls.c
--- linux-azure-4.15.0/drivers/media/platform/vivid/vivid-ctrls.c
+++ linux-azure-4.15.0/drivers/media/platform/vivid/vivid-ctrls.c
@@ -1208,7 +1208,6 @@
v4l2_ctrl_activate(dev->radio_rx_rds_ta, dev->radio_rx_rds_controls);
v4l2_ctrl_activate(dev->radio_rx_rds_tp, dev->radio_rx_rds_controls);
v4l2_ctrl_activate(dev->radio_rx_rds_ms, dev->radio_rx_rds_controls);
- dev->radio_rx_dev.device_caps = dev->radio_rx_caps;
break;
case V4L2_CID_RDS_RECEPTION:
dev->radio_rx_rds_enabled = ctrl->val;
@@ -1283,7 +1282,6 @@
dev->radio_tx_caps &= ~V4L2_CAP_READWRITE;
if (!dev->radio_tx_rds_controls)
dev->radio_tx_caps |= V4L2_CAP_READWRITE;
- dev->radio_tx_dev.device_caps = dev->radio_tx_caps;
break;
case V4L2_CID_RDS_TX_PTY:
if (dev->radio_rx_rds_controls)
reverted:
--- linux-azure-4.15.0/drivers/media/platform/vsp1/vsp1_drm.c
+++ linux-azure-4.15.0.orig/drivers/media/platform/vsp1/vsp1_drm.c
@@ -504,15 +504,6 @@
struct vsp1_rwpf *rpf = vsp1->rpf[i];
unsigned int j;

- /*
- * Make sure we don't accept more inputs than the hardware can
- * handle. This is a temporary fix to avoid display stall, we
- * need to instead allocate the BRU or BRS to display pipelines
- * dynamically based on the number of planes they each use.
- */
- if (pipe->num_inputs >= pipe->bru->source_pad)
- pipe->inputs[i] = NULL;
-
if (!pipe->inputs[i])
continue;
reverted:
--- linux-azure-4.15.0/drivers/media/usb/em28xx/em28xx-cards.c
+++ linux-azure-4.15.0.orig/drivers/media/usb/em28xx/em28xx-cards.c
@@ -508,10 +508,8 @@
};

/*
+ * 2040:0265 Hauppauge WinTV-dualHD DVB
+ * 2040:026d Hauppauge WinTV-dualHD ATSC/QAM
- * 2040:0265 Hauppauge WinTV-dualHD DVB Isoc
- * 2040:8265 Hauppauge WinTV-dualHD DVB Bulk
- * 2040:026d Hauppauge WinTV-dualHD ATSC/QAM Isoc
- * 2040:826d Hauppauge WinTV-dualHD ATSC/QAM Bulk
* reg 0x80/0x84:
* GPIO_0: Yellow LED tuner 1, 0=on, 1=off
* GPIO_1: Green LED tuner 1, 0=on, 1=off
@@ -2394,8 +2392,7 @@
.has_dvb = 1,
},
/*
+ * 2040:0265 Hauppauge WinTV-dualHD (DVB version).
- * 2040:0265 Hauppauge WinTV-dualHD (DVB version) Isoc.
- * 2040:8265 Hauppauge WinTV-dualHD (DVB version) Bulk.
* Empia EM28274, 2x Silicon Labs Si2168, 2x Silicon Labs Si2157
*/
[EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB] = {
@@ -2410,8 +2407,7 @@
.leds = hauppauge_dualhd_leds,
},
/*
+ * 2040:026d Hauppauge WinTV-dualHD (model 01595 - ATSC/QAM).
- * 2040:026d Hauppauge WinTV-dualHD (model 01595 - ATSC/QAM) Isoc.
- * 2040:826d Hauppauge WinTV-dualHD (model 01595 - ATSC/QAM) Bulk.
* Empia EM28274, 2x LG LGDT3306A, 2x Silicon Labs Si2157
*/
[EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_01595] = {
@@ -2552,12 +2548,8 @@
.driver_info = EM2883_BOARD_HAUPPAUGE_WINTV_HVR_850 },
{ USB_DEVICE(0x2040, 0x0265),
.driver_info = EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB },
- { USB_DEVICE(0x2040, 0x8265),
- .driver_info = EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB },
{ USB_DEVICE(0x2040, 0x026d),
.driver_info = EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_01595 },
- { USB_DEVICE(0x2040, 0x826d),
- .driver_info = EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_01595 },
{ USB_DEVICE(0x0438, 0xb002),
.driver_info = EM2880_BOARD_AMD_ATI_TV_WONDER_HD_600 },
{ USB_DEVICE(0x2001, 0xf112),
@@ -2618,11 +2610,7 @@
.driver_info = EM28178_BOARD_PCTV_461E },
{ USB_DEVICE(0x2013, 0x025f),
.driver_info = EM28178_BOARD_PCTV_292E },
+ { USB_DEVICE(0x2040, 0x0264), /* Hauppauge WinTV-soloHD */
- { USB_DEVICE(0x2040, 0x0264), /* Hauppauge WinTV-soloHD Isoc */
- .driver_info = EM28178_BOARD_PCTV_292E },
- { USB_DEVICE(0x2040, 0x8264), /* Hauppauge OEM Generic WinTV-soloHD Bulk */
- .driver_info = EM28178_BOARD_PCTV_292E },
- { USB_DEVICE(0x2040, 0x8268), /* Hauppauge Retail WinTV-soloHD Bulk */
.driver_info = EM28178_BOARD_PCTV_292E },
{ USB_DEVICE(0x0413, 0x6f07),
.driver_info = EM2861_BOARD_LEADTEK_VC100 },
reverted:
--- linux-azure-4.15.0/drivers/media/usb/em28xx/em28xx.h
+++ linux-azure-4.15.0.orig/drivers/media/usb/em28xx/em28xx.h
@@ -191,7 +191,7 @@
USB 2.0 spec says bulk packet size is always 512 bytes */
#define EM28XX_BULK_PACKET_MULTIPLIER 384
+#define EM28XX_DVB_BULK_PACKET_MULTIPLIER 384
-#define EM28XX_DVB_BULK_PACKET_MULTIPLIER 94

#define EM28XX_INTERLACED_DEFAULT 1
reverted:
--- linux-azure-4.15.0/drivers/media/v4l2-core/videobuf2-vmalloc.c
+++ linux-azure-4.15.0.orig/drivers/media/v4l2-core/videobuf2-vmalloc.c
@@ -106,7 +106,7 @@
if (nums[i-1] + 1 != nums[i])
goto fail_map;
buf->vaddr = (__force void *)
+ ioremap_nocache(nums[0] << PAGE_SHIFT, size);
- ioremap_nocache(__pfn_to_phys(nums[0]), size + offset);
} else {
buf->vaddr = vm_map_ram(frame_vector_pages(vec), n_pages, -1,
PAGE_KERNEL);
reverted:
--- linux-azure-4.15.0/drivers/message/fusion/mptctl.c
+++ linux-azure-4.15.0.orig/drivers/message/fusion/mptctl.c
@@ -2698,8 +2698,6 @@
__FILE__, __LINE__, iocnum);
return -ENODEV;
}
- if (karg.hdr.id >= MPT_MAX_FC_DEVICES)
- return -EINVAL;
dctlprintk(ioc, printk(MYIOC_s_DEBUG_FMT "mptctl_hp_targetinfo called.\n",
ioc->name));
diff -u linux-azure-4.15.0/drivers/misc/cxl/cxl.h linux-azure-4.15.0/drivers/misc/cxl/cxl.h
--- linux-azure-4.15.0/drivers/misc/cxl/cxl.h
+++ linux-azure-4.15.0/drivers/misc/cxl/cxl.h
@@ -717,7 +717,6 @@
bool perst_select_user;
bool perst_same_image;
bool psl_timebase_synced;
- bool tunneled_ops_supported;

/*
* number of contexts mapped on to this card. Possible values are:
diff -u linux-azure-4.15.0/drivers/misc/cxl/pci.c linux-azure-4.15.0/drivers/misc/cxl/pci.c
--- linux-azure-4.15.0/drivers/misc/cxl/pci.c
+++ linux-azure-4.15.0/drivers/misc/cxl/pci.c
@@ -514,9 +514,9 @@
cxl_p1_write(adapter, CXL_PSL9_FIR_CNTL, psl_fircntl);

/* Setup the PSL to transmit packets on the PCIe before the
- * CAPP is enabled. Make sure that CAPP virtual machines are disabled
+ * CAPP is enabled
*/
- cxl_p1_write(adapter, CXL_PSL9_DSNDCTL, 0x0001001000012A10ULL);
+ cxl_p1_write(adapter, CXL_PSL9_DSNDCTL, 0x0001001000002A10ULL);

/*
* A response to an ASB_Notify request is returned by the
@@ -621,6 +621,12 @@
/* For the PSL this is a multiple for 0 < n <= 7: */
#define PSL_2048_250MHZ_CYCLES 1

+static void write_timebase_ctrl_psl9(struct cxl *adapter)
+{
+ cxl_p1_write(adapter, CXL_PSL9_TB_CTLSTAT,
+ TBSYNC_CNT(2 * PSL_2048_250MHZ_CYCLES));
+}
+
static void write_timebase_ctrl_psl8(struct cxl *adapter)
{
cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT,
@@ -679,8 +685,7 @@
* Setup PSL Timebase Control and Status register
* with the recommended Timebase Sync Count value
*/
- if (adapter->native->sl_ops->write_timebase_ctrl)
- adapter->native->sl_ops->write_timebase_ctrl(adapter);
+ adapter->native->sl_ops->write_timebase_ctrl(adapter);

/* Enable PSL Timebase */
cxl_p1_write(adapter, CXL_PSL_Control, 0x0000000000000000);
@@ -1742,15 +1747,6 @@
/* Required for devices using CAPP DMA mode, harmless for others */
pci_set_master(dev);

- adapter->tunneled_ops_supported = false;
-
- if (cxl_is_power9()) {
- if (pnv_pci_set_tunnel_bar(dev, 0x00020000E0000000ull, 1))
- dev_info(&dev->dev, "Tunneled operations unsupported\n");
- else
- adapter->tunneled_ops_supported = true;
- }
-
if ((rc = pnv_phb_to_cxl_mode(dev, adapter->native->sl_ops->capi_mode)))
goto err;
@@ -1777,9 +1773,6 @@
{
struct pci_dev *pdev = to_pci_dev(adapter->dev.parent);

- if (cxl_is_power9())
- pnv_pci_set_tunnel_bar(pdev, 0x00020000E0000000ull, 0);
-
cxl_native_release_psl_err_irq(adapter);
cxl_unmap_adapter_regs(adapter);
@@ -1842,6 +1835,7 @@
.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl9,
.err_irq_dump_registers = cxl_native_err_irq_dump_regs_psl9,
.debugfs_stop_trace = cxl_stop_trace_psl9,
+ .write_timebase_ctrl = write_timebase_ctrl_psl9,
.timebase_read = timebase_read_psl9,
.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
.needs_reset_before_disable = true,
diff -u linux-azure-4.15.0/drivers/misc/cxl/sysfs.c linux-azure-4.15.0/drivers/misc/cxl/sysfs.c
--- linux-azure-4.15.0/drivers/misc/cxl/sysfs.c
+++ linux-azure-4.15.0/drivers/misc/cxl/sysfs.c
@@ -78,15 +78,6 @@
return scnprintf(buf, PAGE_SIZE, "%i\n", adapter->psl_timebase_synced);
}

-static ssize_t tunneled_ops_supported_show(struct device *device,
- struct device_attribute *attr,
- char *buf)
-{
- struct cxl *adapter = to_cxl_adapter(device);
-
- return scnprintf(buf, PAGE_SIZE, "%i\n", adapter->tunneled_ops_supported);
-}
-
static ssize_t reset_adapter_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t count)
@@ -192,7 +183,6 @@
__ATTR_RO(base_image),
__ATTR_RO(image_loaded),
__ATTR_RO(psl_timebase_synced),
- __ATTR_RO(tunneled_ops_supported),
__ATTR_RW(load_image_on_perst),
__ATTR_RW(perst_reloads_same_image),
__ATTR(reset, S_IWUSR, NULL, reset_adapter_store),
@@ -353,20 +343,12 @@
struct cxl_afu *afu = to_cxl_afu(device);
enum prefault_modes mode = -1;

+ if (!strncmp(buf, "work_element_descriptor", 23))
+ mode = CXL_PREFAULT_WED;
+ if (!strncmp(buf, "all", 3))
+ mode = CXL_PREFAULT_ALL;
if (!strncmp(buf, "none", 4))
mode = CXL_PREFAULT_NONE;
- else {
- if (!radix_enabled()) {
-
- /* only allowed when not in radix mode */
- if (!strncmp(buf, "work_element_descriptor", 23))
- mode = CXL_PREFAULT_WED;
- if (!strncmp(buf, "all", 3))
- mode = CXL_PREFAULT_ALL;
- } else {
- dev_err(device, "Cannot prefault with radix enabled\n");
- }
- }

if (mode == -1)
return -EINVAL;
diff -u linux-azure-4.15.0/drivers/misc/ocxl/link.c linux-azure-4.15.0/drivers/misc/ocxl/link.c
--- linux-azure-4.15.0/drivers/misc/ocxl/link.c
+++ linux-azure-4.15.0/drivers/misc/ocxl/link.c
@@ -136,7 +136,7 @@
int rc;

/*
- * We must release a reference on mm_users whenever exiting this
+ * We need to release a reference on the mm whenever exiting this
* function (taken in the memory fault interrupt handler)
*/
rc = copro_handle_mm_fault(fault->pe_data.mm, fault->dar, fault->dsisr,
@@ -172,7 +172,7 @@
}
r = RESTART;
ack:
- mmput(fault->pe_data.mm);
+ mmdrop(fault->pe_data.mm);
ack_irq(spa, r);
}
@@ -184,7 +184,6 @@
struct pe_data *pe_data;
struct ocxl_process_element *pe;
int lpid, pid, tid;
- bool schedule = false;

read_irq(spa, &dsisr, &dar, &pe_handle);
trace_ocxl_fault(spa->spa_mem, pe_handle, dsisr, dar, -1);
@@ -227,19 +226,14 @@
}
WARN_ON(pe_data->mm->context.id != pid);

- if (mmget_not_zero(pe_data->mm)) {
- spa->xsl_fault.pe = pe_handle;
- spa->xsl_fault.dar = dar;
- spa->xsl_fault.dsisr = dsisr;
- spa->xsl_fault.pe_data = *pe_data;
- schedule = true;
- /* mm_users count released by bottom half */
- }
+ spa->xsl_fault.pe = pe_handle;
+ spa->xsl_fault.dar = dar;
+ spa->xsl_fault.dsisr = dsisr;
+ spa->xsl_fault.pe_data = *pe_data;
+ mmgrab(pe_data->mm); /* mm count is released by bottom half */
+
rcu_read_unlock();
- if (schedule)
- schedule_work(&spa->xsl_fault.fault_work);
- else
- ack_irq(spa, ADDRESS_ERROR);
+ schedule_work(&spa->xsl_fault.fault_work);

return IRQ_HANDLED;
}
reverted:
--- linux-azure-4.15.0/drivers/mtd/chips/cfi_cmdset_0001.c
+++ linux-azure-4.15.0.orig/drivers/mtd/chips/cfi_cmdset_0001.c
@@ -45,7 +45,6 @@
#define I82802AB 0x00ad
#define I82802AC 0x00ac
#define PF38F4476 0x881c
-#define M28F00AP30 0x8963
/* STMicroelectronics chips */
#define M50LPW080 0x002F
#define M50FLW080A 0x0080
@@ -376,17 +375,6 @@
extp->MinorVersion = '1';
}

-static int cfi_is_micron_28F00AP30(struct cfi_private *cfi, struct flchip *chip)
-{
- /*
- * Micron(was Numonyx) 1Gbit bottom boot are buggy w.r.t
- * Erase Supend for their small Erase Blocks(0x8000)
- */
- if (cfi->mfr == CFI_MFR_INTEL && cfi->id == M28F00AP30)
- return 1;
- return 0;
-}
-
static inline struct cfi_pri_intelext *
read_pri_intelext(struct map_info *map, __u16 adr)
{
@@ -843,30 +831,21 @@
(mode == FL_WRITING && (cfip->SuspendCmdSupport & 1))))
goto sleep;

- /* Do not allow suspend iff read/write to EB
address */ - if ((adr & chip->in_progress_block_mask) == - chip->in_progress_block_addr) - goto sleep; - - /* do not suspend small EBs, buggy Micron Chips */ - if (cfi_is_micron_28F00AP30(cfi, chip) && - (chip->in_progress_block_mask == ~(0x8000-1))) - goto sleep; /* Erase suspend */ + map_write(map, CMD(0xB0), adr); - map_write(map, CMD(0xB0), chip->in_progress_block_addr); /* If the flash has finished erasing, then 'erase suspend' * appears to make some (28F320) flash devices switch to * 'read' mode. Make sure that we switch to 'read status' * mode so we get the right data. --rmk */ + map_write(map, CMD(0x70), adr); - map_write(map, CMD(0x70), chip->in_progress_block_addr); chip->oldstate = FL_ERASING; chip->state = FL_ERASE_SUSPENDING; chip->erase_suspended = 1; for (;;) { + status = map_read(map, adr); - status = map_read(map, chip->in_progress_block_addr); if (map_word_andequal(map, status, status_OK, status_OK)) break; @@ -1062,8 +1041,8 @@ sending the 0x70 (Read Status) command to an erasing chip and expecting it to be ignored, that's what we do. */ + map_write(map, CMD(0xd0), adr); + map_write(map, CMD(0x70), adr); - map_write(map, CMD(0xd0), chip->in_progress_block_addr); - map_write(map, CMD(0x70), chip->in_progress_block_addr); chip->oldstate = FL_READY; chip->state = FL_ERASING; break; @@ -1954,8 +1933,6 @@ map_write(map, CMD(0xD0), adr); chip->state = FL_ERASING; chip->erase_suspended = 0; - chip->in_progress_block_addr = adr; - chip->in_progress_block_mask = ~(len - 1); ret = INVAL_CACHE_AND_WAIT(map, chip, adr, adr, len, reverted: --- linux-azure-4.15.0/drivers/mtd/chips/cfi_cmdset_0002.c +++ linux-azure-4.15.0.orig/drivers/mtd/chips/cfi_cmdset_0002.c @@ -816,10 +816,9 @@ (mode == FL_WRITING && (cfip->EraseSuspend & 0x2)))) goto sleep; + /* We could check to see if we're trying to access the sector + * that is currently being erased. However, no user will try + * anything like that so we just wait for the timeout. 
*/ - /* Do not allow suspend iff read/write to EB address */ - if ((adr & chip->in_progress_block_mask) == - chip->in_progress_block_addr) - goto sleep; /* Erase suspend */ /* It's harmless to issue the Erase-Suspend and Erase-Resume @@ -2268,7 +2267,6 @@ chip->state = FL_ERASING; chip->erase_suspended = 0; chip->in_progress_block_addr = adr; - chip->in_progress_block_mask = ~(map->size - 1); INVALIDATE_CACHE_UDELAY(map, chip, adr, map->size, @@ -2358,7 +2356,6 @@ chip->state = FL_ERASING; chip->erase_suspended = 0; chip->in_progress_block_addr = adr; - chip->in_progress_block_mask = ~(len - 1); INVALIDATE_CACHE_UDELAY(map, chip, adr, len, reverted: --- linux-azure-4.15.0/drivers/mtd/nand/tango_nand.c +++ linux-azure-4.15.0.orig/drivers/mtd/nand/tango_nand.c @@ -654,7 +654,7 @@ writel_relaxed(MODE_RAW, nfc->pbus_base + PBUS_PAD_MODE); + clk = clk_get(&pdev->dev, NULL); - clk = devm_clk_get(&pdev->dev, NULL); if (IS_ERR(clk)) return PTR_ERR(clk); reverted: --- linux-azure-4.15.0/drivers/mtd/spi-nor/cadence-quadspi.c +++ linux-azure-4.15.0.orig/drivers/mtd/spi-nor/cadence-quadspi.c @@ -501,9 +501,7 @@ void __iomem *reg_base = cqspi->iobase; void __iomem *ahb_base = cqspi->ahb_base; unsigned int remaining = n_rx; - unsigned int mod_bytes = n_rx % 4; unsigned int bytes_to_read = 0; - u8 *rxbuf_end = rxbuf + n_rx; int ret = 0; writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES); @@ -531,24 +529,11 @@ } while (bytes_to_read != 0) { - unsigned int word_remain = round_down(remaining, 4); - bytes_to_read *= cqspi->fifo_width; bytes_to_read = bytes_to_read > remaining ? 
remaining : bytes_to_read; + ioread32_rep(ahb_base, rxbuf, + DIV_ROUND_UP(bytes_to_read, 4)); - bytes_to_read = round_down(bytes_to_read, 4); - /* Read 4 byte word chunks then single bytes */ - if (bytes_to_read) { - ioread32_rep(ahb_base, rxbuf, - (bytes_to_read / 4)); - } else if (!word_remain && mod_bytes) { - unsigned int temp = ioread32(ahb_base); - - bytes_to_read = mod_bytes; - memcpy(rxbuf, &temp, min((unsigned int) - (rxbuf_end - rxbuf), - bytes_to_read)); - } rxbuf += bytes_to_read; remaining -= bytes_to_read; bytes_to_read = cqspi_get_rd_sram_level(cqspi); reverted: --- linux-azure-4.15.0/drivers/net/Kconfig +++ linux-azure-4.15.0.orig/drivers/net/Kconfig @@ -149,6 +149,7 @@ config IPVLAN tristate "IP-VLAN support" depends on INET + depends on IPV6 depends on NETFILTER depends on NET_L3_MASTER_DEV ---help--- reverted: --- linux-azure-4.15.0/drivers/net/bonding/bond_alb.c +++ linux-azure-4.15.0.orig/drivers/net/bonding/bond_alb.c @@ -450,7 +450,7 @@ { int i; + if (!client_info->slave) - if (!client_info->slave || !is_valid_ether_addr(client_info->mac_dst)) return; for (i = 0; i < RLB_ARP_BURST_SIZE; i++) { @@ -943,10 +943,6 @@ skb->priority = TC_PRIO_CONTROL; skb->dev = slave->dev; - netdev_dbg(slave->bond->dev, - "Send learning packet: dev %s mac %pM vlan %d\n", - slave->dev->name, mac_addr, vid); - if (vid) __vlan_hwaccel_put_tag(skb, vlan_proto, vid); @@ -969,13 +965,14 @@ u8 *mac_addr = data->mac_addr; struct bond_vlan_tag *tags; + if (is_vlan_dev(upper) && vlan_get_encap_level(upper) == 0) { + if (strict_match && + ether_addr_equal_64bits(mac_addr, + upper->dev_addr)) { - if (is_vlan_dev(upper) && - bond->nest_level == vlan_get_encap_level(upper) - 1) { - if (upper->addr_assign_type == NET_ADDR_STOLEN) { alb_send_lp_vid(slave, mac_addr, vlan_dev_vlan_proto(upper), vlan_dev_vlan_id(upper)); + } else if (!strict_match) { - } else { alb_send_lp_vid(slave, upper->dev_addr, vlan_dev_vlan_proto(upper), vlan_dev_vlan_id(upper)); diff -u 
linux-azure-4.15.0/drivers/net/bonding/bond_main.c linux-azure-4.15.0/drivers/net/bonding/bond_main.c --- linux-azure-4.15.0/drivers/net/bonding/bond_main.c +++ linux-azure-4.15.0/drivers/net/bonding/bond_main.c @@ -1738,8 +1738,6 @@ if (bond_mode_uses_xmit_hash(bond)) bond_update_slave_arr(bond, NULL); - bond->nest_level = dev_get_nest_level(bond_dev); - netdev_info(bond_dev, "Enslaving %s as %s interface with %s link\n", slave_dev->name, bond_is_active_slave(new_slave) ? "an active" : "a backup", reverted: --- linux-azure-4.15.0/drivers/net/can/spi/hi311x.c +++ linux-azure-4.15.0.orig/drivers/net/can/spi/hi311x.c @@ -91,7 +91,6 @@ #define HI3110_STAT_BUSOFF BIT(2) #define HI3110_STAT_ERRP BIT(3) #define HI3110_STAT_ERRW BIT(4) -#define HI3110_STAT_TXMTY BIT(7) #define HI3110_BTR0_SJW_SHIFT 6 #define HI3110_BTR0_BRP_SHIFT 0 @@ -428,10 +427,8 @@ struct hi3110_priv *priv = netdev_priv(net); struct spi_device *spi = priv->spi; - mutex_lock(&priv->hi3110_lock); bec->txerr = hi3110_read(spi, HI3110_READ_TEC); bec->rxerr = hi3110_read(spi, HI3110_READ_REC); - mutex_unlock(&priv->hi3110_lock); return 0; } @@ -738,7 +735,10 @@ } } + if (intf == 0) + break; + + if (intf & HI3110_INT_TXCPLT) { - if (priv->tx_len && statf & HI3110_STAT_TXMTY) { net->stats.tx_packets++; net->stats.tx_bytes += priv->tx_len - 1; can_led_event(net, CAN_LED_EVENT_TX); @@ -748,9 +748,6 @@ } netif_wake_queue(net); } - - if (intf == 0) - break; } mutex_unlock(&priv->hi3110_lock); return IRQ_HANDLED; reverted: --- linux-azure-4.15.0/drivers/net/can/usb/kvaser_usb.c +++ linux-azure-4.15.0.orig/drivers/net/can/usb/kvaser_usb.c @@ -1179,7 +1179,7 @@ skb = alloc_can_skb(priv->netdev, &cf); if (!skb) { + stats->tx_dropped++; - stats->rx_dropped++; return; } reverted: --- linux-azure-4.15.0/drivers/net/dsa/bcm_sf2_cfp.c +++ linux-azure-4.15.0.orig/drivers/net/dsa/bcm_sf2_cfp.c @@ -354,13 +354,10 @@ /* Locate the first rule available */ if (fs->location == RX_CLS_LOC_ANY) rule_index = 
find_first_zero_bit(priv->cfp.used, + bcm_sf2_cfp_rule_size(priv)); - priv->num_cfp_rules); else rule_index = fs->location; - if (rule_index > bcm_sf2_cfp_rule_size(priv)) - return -ENOSPC; - layout = &udf_tcpip4_layout; /* We only use one UDF slice for now */ slice_num = bcm_sf2_get_slice_number(layout, 0); @@ -565,21 +562,19 @@ * first half because the HW search is by incrementing addresses. */ if (fs->location == RX_CLS_LOC_ANY) + rule_index[0] = find_first_zero_bit(priv->cfp.used, + bcm_sf2_cfp_rule_size(priv)); - rule_index[1] = find_first_zero_bit(priv->cfp.used, - priv->num_cfp_rules); else + rule_index[0] = fs->location; - rule_index[1] = fs->location; - if (rule_index[1] > bcm_sf2_cfp_rule_size(priv)) - return -ENOSPC; /* Flag it as used (cleared on error path) such that we can immediately * obtain a second one to chain from. */ + set_bit(rule_index[0], priv->cfp.used); - set_bit(rule_index[1], priv->cfp.used); + rule_index[1] = find_first_zero_bit(priv->cfp.used, + bcm_sf2_cfp_rule_size(priv)); + if (rule_index[1] > bcm_sf2_cfp_rule_size(priv)) { - rule_index[0] = find_first_zero_bit(priv->cfp.used, - priv->num_cfp_rules); - if (rule_index[0] > bcm_sf2_cfp_rule_size(priv)) { ret = -ENOSPC; goto out_err; } @@ -717,14 +712,14 @@ /* Flag the second half rule as being used now, return it as the * location, and flag it as unique while dumping rules */ + set_bit(rule_index[1], priv->cfp.used); - set_bit(rule_index[0], priv->cfp.used); set_bit(rule_index[1], priv->cfp.unique); fs->location = rule_index[1]; return ret; out_err: + clear_bit(rule_index[0], priv->cfp.used); - clear_bit(rule_index[1], priv->cfp.used); return ret; } @@ -790,6 +785,10 @@ int ret; u32 reg; + /* Refuse deletion of unused rules, and the default reserved rule */ + if (!test_bit(loc, priv->cfp.used) || loc == 0) + return -EINVAL; + /* Indicate which rule we want to read */ bcm_sf2_cfp_rule_addr_set(priv, loc); @@ -827,13 +826,6 @@ u32 next_loc = 0; int ret; - /* Refuse deleting unused 
rules, and those that are not unique since - * that could leave IPv6 rules with one of the chained rule in the - * table. - */ - if (!test_bit(loc, priv->cfp.unique) || loc == 0) - return -EINVAL; - ret = bcm_sf2_cfp_rule_del_one(priv, port, loc, &next_loc); if (ret) return ret; reverted: --- linux-azure-4.15.0/drivers/net/ethernet/3com/3c59x.c +++ linux-azure-4.15.0.orig/drivers/net/ethernet/3com/3c59x.c @@ -1212,9 +1212,9 @@ vp->mii.reg_num_mask = 0x1f; /* Makes sure rings are at least 16 byte aligned. */ + vp->rx_ring = pci_alloc_consistent(pdev, sizeof(struct boom_rx_desc) * RX_RING_SIZE - vp->rx_ring = dma_alloc_coherent(gendev, sizeof(struct boom_rx_desc) * RX_RING_SIZE + sizeof(struct boom_tx_desc) * TX_RING_SIZE, + &vp->rx_ring_dma); - &vp->rx_ring_dma, GFP_KERNEL); retval = -ENOMEM; if (!vp->rx_ring) goto free_device; @@ -1476,10 +1476,11 @@ return 0; free_ring: + pci_free_consistent(pdev, + sizeof(struct boom_rx_desc) * RX_RING_SIZE + + sizeof(struct boom_tx_desc) * TX_RING_SIZE, + vp->rx_ring, + vp->rx_ring_dma); - dma_free_coherent(&pdev->dev, - sizeof(struct boom_rx_desc) * RX_RING_SIZE + - sizeof(struct boom_tx_desc) * TX_RING_SIZE, - vp->rx_ring, vp->rx_ring_dma); free_device: free_netdev(dev); pr_err(PFX "vortex_probe1 fails. Returns %d\n", retval); @@ -1750,9 +1751,9 @@ break; /* Bad news! */ skb_reserve(skb, NET_IP_ALIGN); /* Align IP on 16 byte boundaries */ + dma = pci_map_single(VORTEX_PCI(vp), skb->data, + PKT_BUF_SZ, PCI_DMA_FROMDEVICE); + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma)) - dma = dma_map_single(vp->gendev, skb->data, - PKT_BUF_SZ, DMA_FROM_DEVICE); - if (dma_mapping_error(vp->gendev, dma)) break; vp->rx_ring[i].addr = cpu_to_le32(dma); } @@ -2066,9 +2067,9 @@ if (vp->bus_master) { /* Set the bus-master controller to transfer the packet. 
*/ int len = (skb->len + 3) & ~3; + vp->tx_skb_dma = pci_map_single(VORTEX_PCI(vp), skb->data, len, + PCI_DMA_TODEVICE); + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, vp->tx_skb_dma)) { - vp->tx_skb_dma = dma_map_single(vp->gendev, skb->data, len, - DMA_TO_DEVICE); - if (dma_mapping_error(vp->gendev, vp->tx_skb_dma)) { dev_kfree_skb_any(skb); dev->stats.tx_dropped++; return NETDEV_TX_OK; @@ -2167,9 +2168,9 @@ vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum); if (!skb_shinfo(skb)->nr_frags) { + dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, + PCI_DMA_TODEVICE); + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) - dma_addr = dma_map_single(vp->gendev, skb->data, skb->len, - DMA_TO_DEVICE); - if (dma_mapping_error(vp->gendev, dma_addr)) goto out_dma_err; vp->tx_ring[entry].frag[0].addr = cpu_to_le32(dma_addr); @@ -2177,9 +2178,9 @@ } else { int i; + dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data, + skb_headlen(skb), PCI_DMA_TODEVICE); + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) - dma_addr = dma_map_single(vp->gendev, skb->data, - skb_headlen(skb), DMA_TO_DEVICE); - if (dma_mapping_error(vp->gendev, dma_addr)) goto out_dma_err; vp->tx_ring[entry].frag[0].addr = cpu_to_le32(dma_addr); @@ -2188,21 +2189,21 @@ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + dma_addr = skb_frag_dma_map(&VORTEX_PCI(vp)->dev, frag, - dma_addr = skb_frag_dma_map(vp->gendev, frag, 0, frag->size, DMA_TO_DEVICE); + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) { - if (dma_mapping_error(vp->gendev, dma_addr)) { for(i = i-1; i >= 0; i--) + dma_unmap_page(&VORTEX_PCI(vp)->dev, - dma_unmap_page(vp->gendev, le32_to_cpu(vp->tx_ring[entry].frag[i+1].addr), le32_to_cpu(vp->tx_ring[entry].frag[i+1].length), DMA_TO_DEVICE); + pci_unmap_single(VORTEX_PCI(vp), - dma_unmap_single(vp->gendev, le32_to_cpu(vp->tx_ring[entry].frag[0].addr), 
le32_to_cpu(vp->tx_ring[entry].frag[0].length), + PCI_DMA_TODEVICE); - DMA_TO_DEVICE); goto out_dma_err; } @@ -2217,8 +2218,8 @@ } } #else + dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE); + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) - dma_addr = dma_map_single(vp->gendev, skb->data, skb->len, DMA_TO_DEVICE); - if (dma_mapping_error(vp->gendev, dma_addr)) goto out_dma_err; vp->tx_ring[entry].addr = cpu_to_le32(dma_addr); vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG); @@ -2253,7 +2254,7 @@ out: return NETDEV_TX_OK; out_dma_err: + dev_err(&VORTEX_PCI(vp)->dev, "Error mapping dma buffer\n"); - dev_err(vp->gendev, "Error mapping dma buffer\n"); goto out; } @@ -2321,7 +2322,7 @@ if (status & DMADone) { if (ioread16(ioaddr + Wn7_MasterStatus) & 0x1000) { iowrite16(0x1000, ioaddr + Wn7_MasterStatus); /* Ack the event. */ + pci_unmap_single(VORTEX_PCI(vp), vp->tx_skb_dma, (vp->tx_skb->len + 3) & ~3, PCI_DMA_TODEVICE); - dma_unmap_single(vp->gendev, vp->tx_skb_dma, (vp->tx_skb->len + 3) & ~3, DMA_TO_DEVICE); pkts_compl++; bytes_compl += vp->tx_skb->len; dev_kfree_skb_irq(vp->tx_skb); /* Release the transferred buffer */ @@ -2458,19 +2459,19 @@ struct sk_buff *skb = vp->tx_skbuff[entry]; #if DO_ZEROCOPY int i; + pci_unmap_single(VORTEX_PCI(vp), - dma_unmap_single(vp->gendev, le32_to_cpu(vp->tx_ring[entry].frag[0].addr), le32_to_cpu(vp->tx_ring[entry].frag[0].length)&0xFFF, + PCI_DMA_TODEVICE); - DMA_TO_DEVICE); for (i=1; i<=skb_shinfo(skb)->nr_frags; i++) + pci_unmap_page(VORTEX_PCI(vp), - dma_unmap_page(vp->gendev, le32_to_cpu(vp->tx_ring[entry].frag[i].addr), le32_to_cpu(vp->tx_ring[entry].frag[i].length)&0xFFF, + PCI_DMA_TODEVICE); - DMA_TO_DEVICE); #else + pci_unmap_single(VORTEX_PCI(vp), + le32_to_cpu(vp->tx_ring[entry].addr), skb->len, PCI_DMA_TODEVICE); - dma_unmap_single(vp->gendev, - le32_to_cpu(vp->tx_ring[entry].addr), skb->len, DMA_TO_DEVICE); #endif pkts_compl++; bytes_compl += skb->len; @@ -2560,14 
+2561,14 @@ /* 'skb_put()' points to the start of sk_buff data area. */ if (vp->bus_master && ! (ioread16(ioaddr + Wn7_MasterStatus) & 0x8000)) { + dma_addr_t dma = pci_map_single(VORTEX_PCI(vp), skb_put(skb, pkt_len), + pkt_len, PCI_DMA_FROMDEVICE); - dma_addr_t dma = dma_map_single(vp->gendev, skb_put(skb, pkt_len), - pkt_len, DMA_FROM_DEVICE); iowrite32(dma, ioaddr + Wn7_MasterAddr); iowrite16((skb->len + 3) & ~3, ioaddr + Wn7_MasterLen); iowrite16(StartDMAUp, ioaddr + EL3_CMD); while (ioread16(ioaddr + Wn7_MasterStatus) & 0x8000) ; + pci_unmap_single(VORTEX_PCI(vp), dma, pkt_len, PCI_DMA_FROMDEVICE); - dma_unmap_single(vp->gendev, dma, pkt_len, DMA_FROM_DEVICE); } else { ioread32_rep(ioaddr + RX_FIFO, skb_put(skb, pkt_len), @@ -2634,11 +2635,11 @@ if (pkt_len < rx_copybreak && (skb = netdev_alloc_skb(dev, pkt_len + 2)) != NULL) { skb_reserve(skb, 2); /* Align IP on 16 byte boundaries */ + pci_dma_sync_single_for_cpu(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE); - dma_sync_single_for_cpu(vp->gendev, dma, PKT_BUF_SZ, DMA_FROM_DEVICE); /* 'skb_put()' points to the start of sk_buff data area. */ skb_put_data(skb, vp->rx_skbuff[entry]->data, pkt_len); + pci_dma_sync_single_for_device(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE); - dma_sync_single_for_device(vp->gendev, dma, PKT_BUF_SZ, DMA_FROM_DEVICE); vp->rx_copy++; } else { /* Pre-allocate the replacement skb. 
If it or its @@ -2650,9 +2651,9 @@ dev->stats.rx_dropped++; goto clear_complete; } + newdma = pci_map_single(VORTEX_PCI(vp), newskb->data, + PKT_BUF_SZ, PCI_DMA_FROMDEVICE); + if (dma_mapping_error(&VORTEX_PCI(vp)->dev, newdma)) { - newdma = dma_map_single(vp->gendev, newskb->data, - PKT_BUF_SZ, DMA_FROM_DEVICE); - if (dma_mapping_error(vp->gendev, newdma)) { dev->stats.rx_dropped++; consume_skb(newskb); goto clear_complete; @@ -2663,7 +2664,7 @@ vp->rx_skbuff[entry] = newskb; vp->rx_ring[entry].addr = cpu_to_le32(newdma); skb_put(skb, pkt_len); + pci_unmap_single(VORTEX_PCI(vp), dma, PKT_BUF_SZ, PCI_DMA_FROMDEVICE); - dma_unmap_single(vp->gendev, dma, PKT_BUF_SZ, DMA_FROM_DEVICE); vp->rx_nocopy++; } skb->protocol = eth_type_trans(skb, dev); @@ -2760,8 +2761,8 @@ if (vp->full_bus_master_rx) { /* Free Boomerang bus master Rx buffers. */ for (i = 0; i < RX_RING_SIZE; i++) if (vp->rx_skbuff[i]) { + pci_unmap_single( VORTEX_PCI(vp), le32_to_cpu(vp->rx_ring[i].addr), + PKT_BUF_SZ, PCI_DMA_FROMDEVICE); - dma_unmap_single(vp->gendev, le32_to_cpu(vp->rx_ring[i].addr), - PKT_BUF_SZ, DMA_FROM_DEVICE); dev_kfree_skb(vp->rx_skbuff[i]); vp->rx_skbuff[i] = NULL; } @@ -2774,12 +2775,12 @@ int k; for (k=0; k<=skb_shinfo(skb)->nr_frags; k++) + pci_unmap_single(VORTEX_PCI(vp), - dma_unmap_single(vp->gendev, le32_to_cpu(vp->tx_ring[i].frag[k].addr), le32_to_cpu(vp->tx_ring[i].frag[k].length)&0xFFF, + PCI_DMA_TODEVICE); - DMA_TO_DEVICE); #else + pci_unmap_single(VORTEX_PCI(vp), le32_to_cpu(vp->tx_ring[i].addr), skb->len, PCI_DMA_TODEVICE); - dma_unmap_single(vp->gendev, le32_to_cpu(vp->tx_ring[i].addr), skb->len, DMA_TO_DEVICE); #endif dev_kfree_skb(skb); vp->tx_skbuff[i] = NULL; @@ -3287,10 +3288,11 @@ pci_iounmap(pdev, vp->ioaddr); + pci_free_consistent(pdev, + sizeof(struct boom_rx_desc) * RX_RING_SIZE + + sizeof(struct boom_tx_desc) * TX_RING_SIZE, + vp->rx_ring, + vp->rx_ring_dma); - dma_free_coherent(&pdev->dev, - sizeof(struct boom_rx_desc) * RX_RING_SIZE + - sizeof(struct 
boom_tx_desc) * TX_RING_SIZE, - vp->rx_ring, vp->rx_ring_dma); pci_release_regions(pdev); diff -u linux-azure-4.15.0/drivers/net/ethernet/broadcom/bcmsysport.c linux-azure-4.15.0/drivers/net/ethernet/broadcom/bcmsysport.c --- linux-azure-4.15.0/drivers/net/ethernet/broadcom/bcmsysport.c +++ linux-azure-4.15.0/drivers/net/ethernet/broadcom/bcmsysport.c @@ -2064,21 +2064,14 @@ .ndo_select_queue = bcm_sysport_select_queue, }; -static int bcm_sysport_map_queues(struct notifier_block *nb, +static int bcm_sysport_map_queues(struct net_device *dev, struct dsa_notifier_register_info *info) { + struct bcm_sysport_priv *priv = netdev_priv(dev); struct bcm_sysport_tx_ring *ring; - struct bcm_sysport_priv *priv; struct net_device *slave_dev; unsigned int num_tx_queues; unsigned int q, start, port; - struct net_device *dev; - - priv = container_of(nb, struct bcm_sysport_priv, dsa_notifier); - if (priv->netdev != info->master) - return 0; - - dev = info->master; /* We can't be setting up queue inspection for non directly attached * switches @@ -2101,7 +2094,6 @@ if (priv->is_lite) netif_set_real_num_tx_queues(slave_dev, slave_dev->num_tx_queues / 2); - num_tx_queues = slave_dev->real_num_tx_queues; if (priv->per_port_num_tx_queues && @@ -2129,7 +2121,7 @@ return 0; } -static int bcm_sysport_dsa_notifier(struct notifier_block *nb, +static int bcm_sysport_dsa_notifier(struct notifier_block *unused, unsigned long event, void *ptr) { struct dsa_notifier_register_info *info; @@ -2139,7 +2131,7 @@ info = ptr; - return notifier_from_errno(bcm_sysport_map_queues(nb, info)); + return notifier_from_errno(bcm_sysport_map_queues(info->master, info)); } #define REV_FMT "v%2x.%02x" diff -u linux-azure-4.15.0/drivers/net/ethernet/broadcom/tg3.c linux-azure-4.15.0/drivers/net/ethernet/broadcom/tg3.c --- linux-azure-4.15.0/drivers/net/ethernet/broadcom/tg3.c +++ linux-azure-4.15.0/drivers/net/ethernet/broadcom/tg3.c @@ -8733,15 +8733,14 @@ tg3_mem_rx_release(tp); tg3_mem_tx_release(tp); - /* 
tp->hw_stats can be referenced safely: - * 1. under rtnl_lock - * 2. or under tp->lock if TG3_FLAG_INIT_COMPLETE is set. - */ + /* Protect tg3_get_stats64() from reading freed tp->hw_stats. */ + tg3_full_lock(tp, 0); if (tp->hw_stats) { dma_free_coherent(&tp->pdev->dev, sizeof(struct tg3_hw_stats), tp->hw_stats, tp->stats_mapping); tp->hw_stats = NULL; } + tg3_full_unlock(tp); } /* @@ -14179,7 +14178,7 @@ struct tg3 *tp = netdev_priv(dev); spin_lock_bh(&tp->lock); - if (!tp->hw_stats || !tg3_flag(tp, INIT_COMPLETE)) { + if (!tp->hw_stats) { *stats = tp->net_stats_prev; spin_unlock_bh(&tp->lock); return; reverted: --- linux-azure-4.15.0/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c +++ linux-azure-4.15.0.orig/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c @@ -814,7 +814,7 @@ { struct tp_params *tp = &adap->params.tp; u64 hash_filter_mask = tp->hash_filter_mask; + u32 mask; - u64 ntuple_mask = 0; if (!is_hashfilter(adap)) return false; @@ -843,45 +843,73 @@ if (!fs->val.fport || fs->mask.fport != 0xffff) return false; + if (tp->fcoe_shift >= 0) { + mask = (hash_filter_mask >> tp->fcoe_shift) & FT_FCOE_W; + if (mask && !fs->mask.fcoe) + return false; + } - /* calculate tuple mask and compare with mask configured in hw */ - if (tp->fcoe_shift >= 0) - ntuple_mask |= (u64)fs->mask.fcoe << tp->fcoe_shift; + if (tp->port_shift >= 0) { + mask = (hash_filter_mask >> tp->port_shift) & FT_PORT_W; + if (mask && !fs->mask.iport) + return false; + } - if (tp->port_shift >= 0) - ntuple_mask |= (u64)fs->mask.iport << tp->port_shift; if (tp->vnic_shift >= 0) { + mask = (hash_filter_mask >> tp->vnic_shift) & FT_VNIC_ID_W; + + if ((adap->params.tp.ingress_config & VNIC_F)) { + if (mask && !fs->mask.pfvf_vld) + return false; + } else { + if (mask && !fs->mask.ovlan_vld) + return false; + } - if ((adap->params.tp.ingress_config & VNIC_F)) - ntuple_mask |= (u64)fs->mask.pfvf_vld << tp->vnic_shift; - else - ntuple_mask |= (u64)fs->mask.ovlan_vld << - tp->vnic_shift; } + if 
(tp->vlan_shift >= 0) { + mask = (hash_filter_mask >> tp->vlan_shift) & FT_VLAN_W; + if (mask && !fs->mask.ivlan) + return false; + } - if (tp->vlan_shift >= 0) - ntuple_mask |= (u64)fs->mask.ivlan << tp->vlan_shift; + if (tp->tos_shift >= 0) { + mask = (hash_filter_mask >> tp->tos_shift) & FT_TOS_W; + if (mask && !fs->mask.tos) + return false; + } - if (tp->tos_shift >= 0) - ntuple_mask |= (u64)fs->mask.tos << tp->tos_shift; + if (tp->protocol_shift >= 0) { + mask = (hash_filter_mask >> tp->protocol_shift) & FT_PROTOCOL_W; + if (mask && !fs->mask.proto) + return false; + } - if (tp->protocol_shift >= 0) - ntuple_mask |= (u64)fs->mask.proto << tp->protocol_shift; + if (tp->ethertype_shift >= 0) { + mask = (hash_filter_mask >> tp->ethertype_shift) & + FT_ETHERTYPE_W; + if (mask && !fs->mask.ethtype) + return false; + } - if (tp->ethertype_shift >= 0) - ntuple_mask |= (u64)fs->mask.ethtype << tp->ethertype_shift; + if (tp->macmatch_shift >= 0) { + mask = (hash_filter_mask >> tp->macmatch_shift) & FT_MACMATCH_W; + if (mask && !fs->mask.macidx) + return false; + } - if (tp->macmatch_shift >= 0) - ntuple_mask |= (u64)fs->mask.macidx << tp->macmatch_shift; - - if (tp->matchtype_shift >= 0) - ntuple_mask |= (u64)fs->mask.matchtype << tp->matchtype_shift; - - if (tp->frag_shift >= 0) - ntuple_mask |= (u64)fs->mask.frag << tp->frag_shift; - - if (ntuple_mask != hash_filter_mask) - return false; + if (tp->matchtype_shift >= 0) { + mask = (hash_filter_mask >> tp->matchtype_shift) & + FT_MPSHITTYPE_W; + if (mask && !fs->mask.matchtype) + return false; + } + if (tp->frag_shift >= 0) { + mask = (hash_filter_mask >> tp->frag_shift) & + FT_FRAGMENTATION_W; + if (mask && !fs->mask.frag) + return false; + } return true; } diff -u linux-azure-4.15.0/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c linux-azure-4.15.0/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c --- linux-azure-4.15.0/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ 
linux-azure-4.15.0/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -10910,14 +10910,14 @@ rtnl_lock(); netif_device_detach(netdev); - if (netif_running(netdev)) - ixgbe_close_suspend(adapter); - if (state == pci_channel_io_perm_failure) { rtnl_unlock(); return PCI_ERS_RESULT_DISCONNECT; } + if (netif_running(netdev)) + ixgbe_close_suspend(adapter); + if (!test_and_set_bit(__IXGBE_DISABLED, &adapter->state)) pci_disable_device(pdev); rtnl_unlock(); reverted: --- linux-azure-4.15.0/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ linux-azure-4.15.0.orig/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -4241,14 +4241,14 @@ rtnl_lock(); netif_device_detach(netdev); - if (netif_running(netdev)) - ixgbevf_close_suspend(adapter); - if (state == pci_channel_io_perm_failure) { rtnl_unlock(); return PCI_ERS_RESULT_DISCONNECT; } + if (netif_running(netdev)) + ixgbevf_close_suspend(adapter); + if (!test_and_set_bit(__IXGBEVF_DISABLED, &adapter->state)) pci_disable_device(pdev); rtnl_unlock(); diff -u linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c +++ linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c @@ -1013,22 +1013,6 @@ if (!coal->tx_max_coalesced_frames_irq) return -EINVAL; - if (coal->tx_coalesce_usecs > MLX4_EN_MAX_COAL_TIME || - coal->rx_coalesce_usecs > MLX4_EN_MAX_COAL_TIME || - coal->rx_coalesce_usecs_low > MLX4_EN_MAX_COAL_TIME || - coal->rx_coalesce_usecs_high > MLX4_EN_MAX_COAL_TIME) { - netdev_info(dev, "%s: maximum coalesce time supported is %d usecs\n", - __func__, MLX4_EN_MAX_COAL_TIME); - return -ERANGE; - } - - if (coal->tx_max_coalesced_frames > MLX4_EN_MAX_COAL_PKTS || - coal->rx_max_coalesced_frames > MLX4_EN_MAX_COAL_PKTS) { - netdev_info(dev, "%s: maximum coalesced frames supported is %d\n", - __func__, MLX4_EN_MAX_COAL_PKTS); - return -ERANGE; - } - priv->rx_frames = 
(coal->rx_max_coalesced_frames == MLX4_EN_AUTO_CONF) ? MLX4_EN_RX_COAL_TARGET : diff -u linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/en_netdev.c linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/en_netdev.c --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -3319,11 +3319,12 @@ MAX_TX_RINGS, GFP_KERNEL); if (!priv->tx_ring[t]) { err = -ENOMEM; - goto out; + goto err_free_tx; } priv->tx_cq[t] = kzalloc(sizeof(struct mlx4_en_cq *) * MAX_TX_RINGS, GFP_KERNEL); if (!priv->tx_cq[t]) { + kfree(priv->tx_ring[t]); err = -ENOMEM; goto out; } @@ -3576,6 +3577,11 @@ return 0; +err_free_tx: + while (t--) { + kfree(priv->tx_ring[t]); + kfree(priv->tx_cq[t]); + } out: mlx4_en_destroy_netdev(dev); return err; reverted: --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/main.c +++ linux-azure-4.15.0.orig/drivers/net/ethernet/mellanox/mlx4/main.c @@ -3007,7 +3007,6 @@ mlx4_err(dev, "Failed to create file for port %d\n", port); devlink_port_unregister(&info->devlink_port); info->port = -1; - return err; } sprintf(info->dev_mtu_name, "mlx4_port%d_mtu", port); @@ -3029,10 +3028,9 @@ &info->port_attr); devlink_port_unregister(&info->devlink_port); info->port = -1; - return err; } + return err; - return 0; } static void mlx4_cleanup_port_info(struct mlx4_port_info *info) diff -u linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h +++ linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h @@ -131,9 +131,6 @@ #define MLX4_EN_TX_COAL_PKTS 16 #define MLX4_EN_TX_COAL_TIME 0x10 -#define MLX4_EN_MAX_COAL_PKTS U16_MAX -#define MLX4_EN_MAX_COAL_TIME U16_MAX - #define MLX4_EN_RX_RATE_LOW 400000 #define MLX4_EN_RX_COAL_TIME_LOW 0 #define MLX4_EN_RX_RATE_HIGH 450000 @@ -553,8 +550,8 @@ u16 rx_usecs_low; u32 pkt_rate_high; u16 
rx_usecs_high; - u32 sample_interval; - u32 adaptive_rx_coal; + u16 sample_interval; + u16 adaptive_rx_coal; u32 msg_enable; u32 loopback_ok; u32 validate_loopback; reverted: --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c +++ linux-azure-4.15.0.orig/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c @@ -1007,14 +1007,12 @@ mutex_lock(&priv->state_lock); + if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) + goto out; + new_channels.params = priv->channels.params; mlx5e_trust_update_tx_min_inline_mode(priv, &new_channels.params); - if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { - priv->channels.params = new_channels.params; - goto out; - } - /* Skip if tx_min_inline is the same */ if (new_channels.params.tx_min_inline_mode == priv->channels.params.tx_min_inline_mode) diff -u linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -791,10 +791,6 @@ f->mask); addr_type = key->addr_type; - /* the HW doesn't support frag first/later */ - if (mask->flags & FLOW_DIS_FIRST_FRAG) - return -EOPNOTSUPP; - if (mask->flags & FLOW_DIS_IS_FRAGMENT) { MLX5_SET(fte_match_set_lyr_2_4, headers_c, frag, 1); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, @@ -1398,8 +1394,7 @@ } ip_proto = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ip_protocol); - if (modify_ip_header && ip_proto != IPPROTO_TCP && - ip_proto != IPPROTO_UDP && ip_proto != IPPROTO_ICMP) { + if (modify_ip_header && ip_proto != IPPROTO_TCP && ip_proto != IPPROTO_UDP) { pr_info("can't offload re-write of ip proto %d\n", ip_proto); return false; } diff -u linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ 
linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -255,7 +255,7 @@ dma_addr = dma_map_single(sq->pdev, skb_data, headlen, DMA_TO_DEVICE); if (unlikely(dma_mapping_error(sq->pdev, dma_addr))) - goto dma_unmap_wqe_err; + return -ENOMEM; dseg->addr = cpu_to_be64(dma_addr); dseg->lkey = sq->mkey_be; @@ -273,7 +273,7 @@ dma_addr = skb_frag_dma_map(sq->pdev, frag, 0, fsz, DMA_TO_DEVICE); if (unlikely(dma_mapping_error(sq->pdev, dma_addr))) - goto dma_unmap_wqe_err; + return -ENOMEM; dseg->addr = cpu_to_be64(dma_addr); dseg->lkey = sq->mkey_be; @@ -285,10 +285,6 @@ } return num_dma; - -dma_unmap_wqe_err: - mlx5e_dma_unmap_wqe_err(sq, num_dma); - return -ENOMEM; } static inline void @@ -384,15 +380,17 @@ num_dma = mlx5e_txwqe_build_dsegs(sq, skb, skb_data, headlen, (struct mlx5_wqe_data_seg *)cseg + ds_cnt); if (unlikely(num_dma < 0)) - goto err_drop; + goto dma_unmap_wqe_err; mlx5e_txwqe_complete(sq, skb, opcode, ds_cnt + num_dma, num_bytes, num_dma, wi, cseg); return NETDEV_TX_OK; -err_drop: +dma_unmap_wqe_err: sq->stats.dropped++; + mlx5e_dma_unmap_wqe_err(sq, wi->num_dma); + dev_kfree_skb_any(skb); return NETDEV_TX_OK; @@ -622,15 +620,17 @@ num_dma = mlx5e_txwqe_build_dsegs(sq, skb, skb_data, headlen, (struct mlx5_wqe_data_seg *)cseg + ds_cnt); if (unlikely(num_dma < 0)) - goto err_drop; + goto dma_unmap_wqe_err; mlx5e_txwqe_complete(sq, skb, opcode, ds_cnt + num_dma, num_bytes, num_dma, wi, cseg); return NETDEV_TX_OK; -err_drop: +dma_unmap_wqe_err: sq->stats.dropped++; + mlx5e_dma_unmap_wqe_err(sq, wi->num_dma); + dev_kfree_skb_any(skb); return NETDEV_TX_OK; reverted: --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c +++ linux-azure-4.15.0.orig/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c @@ -2054,35 +2054,26 @@ memset(vf_stats, 0, sizeof(*vf_stats)); vf_stats->rx_packets = MLX5_GET_CTR(out, received_eth_unicast.packets) + - MLX5_GET_CTR(out, received_ib_unicast.packets) + MLX5_GET_CTR(out, 
received_eth_multicast.packets) + - MLX5_GET_CTR(out, received_ib_multicast.packets) + MLX5_GET_CTR(out, received_eth_broadcast.packets); vf_stats->rx_bytes = MLX5_GET_CTR(out, received_eth_unicast.octets) + - MLX5_GET_CTR(out, received_ib_unicast.octets) + MLX5_GET_CTR(out, received_eth_multicast.octets) + - MLX5_GET_CTR(out, received_ib_multicast.octets) + MLX5_GET_CTR(out, received_eth_broadcast.octets); vf_stats->tx_packets = MLX5_GET_CTR(out, transmitted_eth_unicast.packets) + - MLX5_GET_CTR(out, transmitted_ib_unicast.packets) + MLX5_GET_CTR(out, transmitted_eth_multicast.packets) + - MLX5_GET_CTR(out, transmitted_ib_multicast.packets) + MLX5_GET_CTR(out, transmitted_eth_broadcast.packets); vf_stats->tx_bytes = MLX5_GET_CTR(out, transmitted_eth_unicast.octets) + - MLX5_GET_CTR(out, transmitted_ib_unicast.octets) + MLX5_GET_CTR(out, transmitted_eth_multicast.octets) + - MLX5_GET_CTR(out, transmitted_ib_multicast.octets) + MLX5_GET_CTR(out, transmitted_eth_broadcast.octets); vf_stats->multicast = + MLX5_GET_CTR(out, received_eth_multicast.packets); - MLX5_GET_CTR(out, received_eth_multicast.packets) + - MLX5_GET_CTR(out, received_ib_multicast.packets); vf_stats->broadcast = MLX5_GET_CTR(out, received_eth_broadcast.packets); diff -u linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -182,7 +182,6 @@ static void del_sw_hw_rule(struct fs_node *node); static bool mlx5_flow_dests_cmp(struct mlx5_flow_destination *d1, struct mlx5_flow_destination *d2); -static void cleanup_root_ns(struct mlx5_flow_root_namespace *root_ns); static struct mlx5_flow_rule * find_flow_rule(struct fs_fte *fte, struct mlx5_flow_destination *dest); @@ -2309,27 +2308,23 @@ static int init_root_ns(struct mlx5_flow_steering *steering) { - int err; - 
steering->root_ns = create_root_ns(steering, FS_FT_NIC_RX); if (!steering->root_ns) - return -ENOMEM; + goto cleanup; - err = init_root_tree(steering, &root_fs, &steering->root_ns->ns.node); - if (err) - goto out_err; + if (init_root_tree(steering, &root_fs, &steering->root_ns->ns.node)) + goto cleanup; set_prio_attrs(steering->root_ns); - err = create_anchor_flow_table(steering); - if (err) - goto out_err; + + if (create_anchor_flow_table(steering)) + goto cleanup; return 0; -out_err: - cleanup_root_ns(steering->root_ns); - steering->root_ns = NULL; - return err; +cleanup: + mlx5_cleanup_fs(steering->dev); + return -ENOMEM; } static void clean_tree(struct fs_node *node) diff -u linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c --- linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c +++ linux-azure-4.15.0/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c @@ -1718,11 +1718,13 @@ struct net_device *dev = mlxsw_sp_port->dev; int err; - if (bridge_port->bridge_device->multicast_enabled && - !bridge_port->mrouter) { - err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, false); - if (err) - netdev_err(dev, "Unable to remove port from SMID\n"); + if (bridge_port->bridge_device->multicast_enabled) { + if (bridge_port->bridge_device->multicast_enabled) { + err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, + false); + if (err) + netdev_err(dev, "Unable to remove port from SMID\n"); + } } err = mlxsw_sp_port_remove_from_mid(mlxsw_sp_port, mid); reverted: --- linux-azure-4.15.0/drivers/net/ethernet/qlogic/qed/qed_ll2.c +++ linux-azure-4.15.0.orig/drivers/net/ethernet/qlogic/qed/qed_ll2.c @@ -292,7 +292,6 @@ struct qed_ll2_tx_packet *p_pkt = NULL; struct qed_ll2_info *p_ll2_conn; struct qed_ll2_tx_queue *p_tx; - unsigned long flags = 0; dma_addr_t tx_frag; p_ll2_conn = qed_ll2_handle_sanity_inactive(p_hwfn, connection_handle); @@ -301,7 
+300,6 @@ p_tx = &p_ll2_conn->tx_queue; - spin_lock_irqsave(&p_tx->lock, flags); while (!list_empty(&p_tx->active_descq)) { p_pkt = list_first_entry(&p_tx->active_descq, struct qed_ll2_tx_packet, list_entry); @@ -311,7 +309,6 @@ list_del(&p_pkt->list_entry); b_last_packet = list_empty(&p_tx->active_descq); list_add_tail(&p_pkt->list_entry, &p_tx->free_descq); - spin_unlock_irqrestore(&p_tx->lock, flags); if (p_ll2_conn->input.conn_type == QED_LL2_TYPE_OOO) { struct qed_ooo_buffer *p_buffer; @@ -331,9 +328,7 @@ b_last_frag, b_last_packet); } - spin_lock_irqsave(&p_tx->lock, flags); } - spin_unlock_irqrestore(&p_tx->lock, flags); } static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie) @@ -558,7 +553,6 @@ struct qed_ll2_info *p_ll2_conn = NULL; struct qed_ll2_rx_packet *p_pkt = NULL; struct qed_ll2_rx_queue *p_rx; - unsigned long flags = 0; p_ll2_conn = qed_ll2_handle_sanity_inactive(p_hwfn, connection_handle); if (!p_ll2_conn) @@ -566,14 +560,13 @@ p_rx = &p_ll2_conn->rx_queue; - spin_lock_irqsave(&p_rx->lock, flags); while (!list_empty(&p_rx->active_descq)) { p_pkt = list_first_entry(&p_rx->active_descq, struct qed_ll2_rx_packet, list_entry); if (!p_pkt) break; + list_move_tail(&p_pkt->list_entry, &p_rx->free_descq); - spin_unlock_irqrestore(&p_rx->lock, flags); if (p_ll2_conn->input.conn_type == QED_LL2_TYPE_OOO) { struct qed_ooo_buffer *p_buffer; @@ -592,9 +585,7 @@ cookie, rx_buf_addr, b_last); } - spin_lock_irqsave(&p_rx->lock, flags); } - spin_unlock_irqrestore(&p_rx->lock, flags); } static u8 qed_ll2_convert_rx_parse_to_tx_flags(u16 parse_flags) @@ -607,27 +598,6 @@ return bd_flags; } -static bool -qed_ll2_lb_rxq_handler_slowpath(struct qed_hwfn *p_hwfn, - struct core_rx_slow_path_cqe *p_cqe) -{ - struct ooo_opaque *iscsi_ooo; - u32 cid; - - if (p_cqe->ramrod_cmd_id != CORE_RAMROD_RX_QUEUE_FLUSH) - return false; - - iscsi_ooo = (struct ooo_opaque *)&p_cqe->opaque_data; - if (iscsi_ooo->ooo_opcode != TCP_EVENT_DELETE_ISLES) - return false; 
- - /* Need to make a flush */ - cid = le32_to_cpu(iscsi_ooo->cid); - qed_ooo_release_connection_isles(p_hwfn, p_hwfn->p_ooo_info, cid); - - return true; -} - static int qed_ll2_lb_rxq_handler(struct qed_hwfn *p_hwfn, struct qed_ll2_info *p_ll2_conn) { @@ -654,11 +624,6 @@ cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain); cqe_type = cqe->rx_cqe_sp.type; - if (cqe_type == CORE_RX_CQE_TYPE_SLOW_PATH) - if (qed_ll2_lb_rxq_handler_slowpath(p_hwfn, - &cqe->rx_cqe_sp)) - continue; - if (cqe_type != CORE_RX_CQE_TYPE_REGULAR) { DP_NOTICE(p_hwfn, "Got a non-regular LB LL2 completion [type 0x%02x]\n", @@ -839,9 +804,6 @@ struct qed_ll2_info *p_ll2_conn = (struct qed_ll2_info *)p_cookie; int rc; - if (!QED_LL2_RX_REGISTERED(p_ll2_conn)) - return 0; - rc = qed_ll2_lb_rxq_handler(p_hwfn, p_ll2_conn); if (rc) return rc; @@ -862,9 +824,6 @@ u16 new_idx = 0, num_bds = 0; int rc; - if (!QED_LL2_TX_REGISTERED(p_ll2_conn)) - return 0; - new_idx = le16_to_cpu(*p_tx->p_fw_cons); num_bds = ((s16)new_idx - (s16)p_tx->bds_idx); @@ -1905,25 +1864,17 @@ /* Stop Tx & Rx of connection, if needed */ if (QED_LL2_TX_REGISTERED(p_ll2_conn)) { - p_ll2_conn->tx_queue.b_cb_registred = false; - smp_wmb(); /* Make sure this is seen by ll2_lb_rxq_completion */ rc = qed_sp_ll2_tx_queue_stop(p_hwfn, p_ll2_conn); if (rc) goto out; - qed_ll2_txq_flush(p_hwfn, connection_handle); - qed_int_unregister_cb(p_hwfn, p_ll2_conn->tx_queue.tx_sb_index); } if (QED_LL2_RX_REGISTERED(p_ll2_conn)) { - p_ll2_conn->rx_queue.b_cb_registred = false; - smp_wmb(); /* Make sure this is seen by ll2_lb_rxq_completion */ rc = qed_sp_ll2_rx_queue_stop(p_hwfn, p_ll2_conn); if (rc) goto out; - qed_ll2_rxq_flush(p_hwfn, connection_handle); - qed_int_unregister_cb(p_hwfn, p_ll2_conn->rx_queue.rx_sb_index); } if (p_ll2_conn->input.conn_type == QED_LL2_TYPE_OOO) @@ -1971,6 +1922,16 @@ if (!p_ll2_conn) return; + if (QED_LL2_RX_REGISTERED(p_ll2_conn)) { + p_ll2_conn->rx_queue.b_cb_registred = false; + qed_int_unregister_cb(p_hwfn, 
p_ll2_conn->rx_queue.rx_sb_index); + } + + if (QED_LL2_TX_REGISTERED(p_ll2_conn)) { + p_ll2_conn->tx_queue.b_cb_registred = false; + qed_int_unregister_cb(p_hwfn, p_ll2_conn->tx_queue.tx_sb_index); + } + kfree(p_ll2_conn->tx_queue.descq_mem); qed_chain_free(p_hwfn->cdev, &p_ll2_conn->tx_queue.txq_chain); reverted: --- linux-azure-4.15.0/drivers/net/ethernet/realtek/8139too.c +++ linux-azure-4.15.0.orig/drivers/net/ethernet/realtek/8139too.c @@ -2224,7 +2224,7 @@ struct rtl8139_private *tp = netdev_priv(dev); const int irq = tp->pci_dev->irq; + disable_irq(irq); - disable_irq_nosync(irq); rtl8139_interrupt(irq, dev); enable_irq(irq); } diff -u linux-azure-4.15.0/drivers/net/ethernet/realtek/r8169.c linux-azure-4.15.0/drivers/net/ethernet/realtek/r8169.c --- linux-azure-4.15.0/drivers/net/ethernet/realtek/r8169.c +++ linux-azure-4.15.0/drivers/net/ethernet/realtek/r8169.c @@ -5090,9 +5090,6 @@ static void rtl_pll_power_up(struct rtl8169_private *tp) { rtl_generic_op(tp, tp->pll_power_ops.up); - - /* give MAC/PHY some time to resume */ - msleep(20); } static void rtl_init_pll_power_ops(struct rtl8169_private *tp) reverted: --- linux-azure-4.15.0/drivers/net/ethernet/sun/niu.c +++ linux-azure-4.15.0.orig/drivers/net/ethernet/sun/niu.c @@ -3442,7 +3442,7 @@ len = (val & RCR_ENTRY_L2_LEN) >> RCR_ENTRY_L2_LEN_SHIFT; + len -= ETH_FCS_LEN; - append_size = len + ETH_HLEN + ETH_FCS_LEN; addr = (val & RCR_ENTRY_PKT_BUF_ADDR) << RCR_ENTRY_PKT_BUF_ADDR_SHIFT; @@ -3452,6 +3452,7 @@ RCR_ENTRY_PKTBUFSZ_SHIFT]; off = addr & ~PAGE_MASK; + append_size = rcr_size; if (num_rcr == 1) { int ptype; @@ -3464,7 +3465,7 @@ else skb_checksum_none_assert(skb); } else if (!(val & RCR_ENTRY_MULTI)) + append_size = len - skb->len; - append_size = append_size - skb->len; niu_rx_skb_append(skb, page, off, append_size, rcr_size); if ((page->index + rp->rbr_block_size) - rcr_size == addr) { diff -u linux-azure-4.15.0/drivers/net/ethernet/ti/cpsw.c linux-azure-4.15.0/drivers/net/ethernet/ti/cpsw.c --- 
linux-azure-4.15.0/drivers/net/ethernet/ti/cpsw.c +++ linux-azure-4.15.0/drivers/net/ethernet/ti/cpsw.c @@ -1260,8 +1260,6 @@ cpsw_ale_add_ucast(cpsw->ale, priv->mac_addr, HOST_PORT_NUM, ALE_VLAN | ALE_SECURE, slave->port_vlan); - cpsw_ale_control_set(cpsw->ale, slave_port, - ALE_PORT_DROP_UNKNOWN_VLAN, 1); } static void soft_reset_slave(struct cpsw_slave *slave) diff -u linux-azure-4.15.0/drivers/net/hyperv/hyperv_net.h linux-azure-4.15.0/drivers/net/hyperv/hyperv_net.h --- linux-azure-4.15.0/drivers/net/hyperv/hyperv_net.h +++ linux-azure-4.15.0/drivers/net/hyperv/hyperv_net.h @@ -195,7 +195,7 @@ const struct netvsc_device_info *info); int netvsc_alloc_recv_comp_ring(struct netvsc_device *net_device, u32 q_idx); void netvsc_device_remove(struct hv_device *device); -int netvsc_send(struct net_device *net, +int netvsc_send(struct net_device_context *ndc, struct hv_netvsc_packet *packet, struct rndis_message *rndis_msg, struct hv_page_buffer *page_buffer, @@ -211,6 +211,7 @@ int netvsc_poll(struct napi_struct *napi, int budget); void rndis_set_subchannel(struct work_struct *w); +bool rndis_filter_opened(const struct netvsc_device *nvdev); int rndis_filter_open(struct netvsc_device *nvdev); int rndis_filter_close(struct netvsc_device *nvdev); struct netvsc_device *rndis_filter_device_add(struct hv_device *dev, diff -u linux-azure-4.15.0/drivers/net/hyperv/netvsc.c linux-azure-4.15.0/drivers/net/hyperv/netvsc.c --- linux-azure-4.15.0/drivers/net/hyperv/netvsc.c +++ linux-azure-4.15.0/drivers/net/hyperv/netvsc.c @@ -89,11 +89,6 @@ = container_of(head, struct netvsc_device, rcu); int i; - kfree(nvdev->extension); - vfree(nvdev->recv_buf); - vfree(nvdev->send_buf); - kfree(nvdev->send_section_map); - for (i = 0; i < VRSS_CHANNEL_MAX; i++) vfree(nvdev->chan_table[i].mrc.slots); @@ -105,11 +100,11 @@ call_rcu(&nvdev->rcu, free_netvsc_device); } -static void netvsc_revoke_recv_buf(struct hv_device *device, - struct netvsc_device *net_device) +static void 
netvsc_revoke_buf(struct hv_device *device, + struct netvsc_device *net_device) { - struct net_device *ndev = hv_get_drvdata(device); struct nvsp_message *revoke_packet; + struct net_device *ndev = hv_get_drvdata(device); int ret; /* @@ -151,14 +146,6 @@ } net_device->recv_section_cnt = 0; } -} - -static void netvsc_revoke_send_buf(struct hv_device *device, - struct netvsc_device *net_device) -{ - struct net_device *ndev = hv_get_drvdata(device); - struct nvsp_message *revoke_packet; - int ret; /* Deal with the send buffer we may have setup. * If we got a send section size, it means we received a @@ -202,8 +189,8 @@ } } -static void netvsc_teardown_recv_gpadl(struct hv_device *device, - struct netvsc_device *net_device) +static void netvsc_teardown_gpadl(struct hv_device *device, + struct netvsc_device *net_device) { struct net_device *ndev = hv_get_drvdata(device); int ret; @@ -222,13 +209,12 @@ } net_device->recv_buf_gpadl_handle = 0; } -} -static void netvsc_teardown_send_gpadl(struct hv_device *device, - struct netvsc_device *net_device) -{ - struct net_device *ndev = hv_get_drvdata(device); - int ret; + if (net_device->recv_buf) { + /* Free up the receive buffer */ + vfree(net_device->recv_buf); + net_device->recv_buf = NULL; + } if (net_device->send_buf_gpadl_handle) { ret = vmbus_teardown_gpadl(device->channel, @@ -244,6 +230,12 @@ } net_device->send_buf_gpadl_handle = 0; } + if (net_device->send_buf) { + /* Free up the send buffer */ + vfree(net_device->send_buf); + net_device->send_buf = NULL; + } + kfree(net_device->send_section_map); } int netvsc_alloc_recv_comp_ring(struct netvsc_device *net_device, u32 q_idx) @@ -438,10 +430,8 @@ goto exit; cleanup: - netvsc_revoke_recv_buf(device, net_device); - netvsc_revoke_send_buf(device, net_device); - netvsc_teardown_recv_gpadl(device, net_device); - netvsc_teardown_send_gpadl(device, net_device); + netvsc_revoke_buf(device, net_device); + netvsc_teardown_gpadl(device, net_device); exit: return ret; @@ -571,23 
+561,11 @@ = rtnl_dereference(net_device_ctx->nvdev); int i; - /* - * Revoke receive buffer. If host is pre-Win2016 then tear down - * receive buffer GPADL. Do the same for send buffer. - */ - netvsc_revoke_recv_buf(device, net_device); - if (vmbus_proto_version < VERSION_WIN10) - netvsc_teardown_recv_gpadl(device, net_device); - - netvsc_revoke_send_buf(device, net_device); - if (vmbus_proto_version < VERSION_WIN10) - netvsc_teardown_send_gpadl(device, net_device); + cancel_work_sync(&net_device->subchan_work); - RCU_INIT_POINTER(net_device_ctx->nvdev, NULL); + netvsc_revoke_buf(device, net_device); - /* And disassociate NAPI context from device */ - for (i = 0; i < net_device->num_chn; i++) - netif_napi_del(&net_device->chan_table[i].napi); + RCU_INIT_POINTER(net_device_ctx->nvdev, NULL); /* * At this point, no one should be accessing net_device @@ -598,14 +576,11 @@ /* Now, we can close the channel safely */ vmbus_close(device->channel); - /* - * If host is Win2016 or higher then we do the GPADL tear down - * here after VMBus is closed. 
- */ - if (vmbus_proto_version >= VERSION_WIN10) { - netvsc_teardown_recv_gpadl(device, net_device); - netvsc_teardown_send_gpadl(device, net_device); - } + netvsc_teardown_gpadl(device, net_device); + + /* And dissassociate NAPI context from device */ + for (i = 0; i < net_device->num_chn; i++) + netif_napi_del(&net_device->chan_table[i].napi); /* Release all resources */ free_netvsc_device_rcu(net_device); @@ -672,18 +647,14 @@ queue_sends = atomic_dec_return(&net_device->chan_table[q_idx].queue_sends); - if (unlikely(net_device->destroy)) { - if (queue_sends == 0) - wake_up(&net_device->wait_drain); - } else { - struct netdev_queue *txq = netdev_get_tx_queue(ndev, q_idx); - - if (netif_tx_queue_stopped(txq) && - (hv_ringbuf_avail_percent(&channel->outbound) > RING_AVAIL_PERCENT_HIWATER || - queue_sends < 1)) { - netif_tx_wake_queue(txq); - ndev_ctx->eth_stats.wake_queue++; - } + if (net_device->destroy && queue_sends == 0) + wake_up(&net_device->wait_drain); + + if (netif_tx_queue_stopped(netdev_get_tx_queue(ndev, q_idx)) && + (hv_ringbuf_avail_percent(&channel->outbound) > RING_AVAIL_PERCENT_HIWATER || + queue_sends < 1)) { + netif_tx_wake_queue(netdev_get_tx_queue(ndev, q_idx)); + ndev_ctx->eth_stats.wake_queue++; } } @@ -732,13 +703,13 @@ return NETVSC_INVALID_INDEX; } -static void netvsc_copy_to_send_buf(struct netvsc_device *net_device, - unsigned int section_index, - u32 pend_size, - struct hv_netvsc_packet *packet, - struct rndis_message *rndis_msg, - struct hv_page_buffer *pb, - bool xmit_more) +static u32 netvsc_copy_to_send_buf(struct netvsc_device *net_device, + unsigned int section_index, + u32 pend_size, + struct hv_netvsc_packet *packet, + struct rndis_message *rndis_msg, + struct hv_page_buffer *pb, + struct sk_buff *skb) { char *start = net_device->send_buf; char *dest = start + (section_index * net_device->send_section_size) @@ -751,8 +722,7 @@ packet->page_buf_cnt; /* Add padding */ - remain = packet->total_data_buflen & (net_device->pkt_align 
- 1); - if (xmit_more && remain) { + if (skb->xmit_more && remain && !packet->cp_partial) { padding = net_device->pkt_align - remain; rndis_msg->msg_len += padding; packet->total_data_buflen += padding; @@ -772,6 +742,8 @@ memset(dest, 0, padding); msg_size += padding; } + + return msg_size; } static inline int netvsc_send_pkt( @@ -864,13 +836,12 @@ } /* RCU already held by caller */ -int netvsc_send(struct net_device *ndev, +int netvsc_send(struct net_device_context *ndev_ctx, struct hv_netvsc_packet *packet, struct rndis_message *rndis_msg, struct hv_page_buffer *pb, struct sk_buff *skb) { - struct net_device_context *ndev_ctx = netdev_priv(ndev); struct netvsc_device *net_device = rcu_dereference_bh(ndev_ctx->nvdev); struct hv_device *device = ndev_ctx->device_ctx; @@ -881,7 +852,8 @@ struct multi_send_data *msdp; struct hv_netvsc_packet *msd_send = NULL, *cur_send = NULL; struct sk_buff *msd_skb = NULL; - bool try_batch, xmit_more; + bool try_batch; + bool xmit_more = (skb != NULL) ? skb->xmit_more : false; /* If device is rescinded, return error and packet will get dropped. 
*/ if (unlikely(!net_device || net_device->destroy)) @@ -923,17 +895,10 @@ } } - /* Keep aggregating only if stack says more data is coming - * and not doing mixed modes send and not flow blocked - */ - xmit_more = skb->xmit_more && - !packet->cp_partial && - !netif_xmit_stopped(netdev_get_tx_queue(ndev, packet->q_idx)); - if (section_index != NETVSC_INVALID_INDEX) { netvsc_copy_to_send_buf(net_device, section_index, msd_len, - packet, rndis_msg, pb, xmit_more); + packet, rndis_msg, pb, skb); packet->send_buf_index = section_index; @@ -953,7 +918,7 @@ if (msdp->skb) dev_consume_skb_any(msdp->skb); - if (xmit_more) { + if (xmit_more && !packet->cp_partial) { msdp->skb = skb; msdp->pkt = packet; msdp->count++; diff -u linux-azure-4.15.0/drivers/net/hyperv/netvsc_drv.c linux-azure-4.15.0/drivers/net/hyperv/netvsc_drv.c --- linux-azure-4.15.0/drivers/net/hyperv/netvsc_drv.c +++ linux-azure-4.15.0/drivers/net/hyperv/netvsc_drv.c @@ -45,10 +45,7 @@ #include "hyperv_net.h" -#define RING_SIZE_MIN 64 -#define RETRY_US_LO 5000 -#define RETRY_US_HI 10000 -#define RETRY_MAX 2000 /* >10 sec */ +#define RING_SIZE_MIN 64 #define LINKCHANGE_INT (2 * HZ) #define VF_TAKEOVER_INT (HZ / 10) @@ -137,25 +134,36 @@ return 0; } -static int netvsc_wait_until_empty(struct netvsc_device *nvdev) +static int netvsc_close(struct net_device *net) { - unsigned int retry = 0; - int i; + struct net_device_context *net_device_ctx = netdev_priv(net); + struct net_device *vf_netdev + = rtnl_dereference(net_device_ctx->vf_netdev); + struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev); + int ret = 0; + u32 aread, i, msec = 10, retry = 0, retry_max = 20; + struct vmbus_channel *chn; - /* Ensure pending bytes in ring are read */ - for (;;) { - u32 aread = 0; + netif_tx_disable(net); - for (i = 0; i < nvdev->num_chn; i++) { - struct vmbus_channel *chn - = nvdev->chan_table[i].channel; + /* No need to close rndis filter if it is removed already */ + if (!nvdev) + goto out; + + ret = 
rndis_filter_close(nvdev); + if (ret != 0) { + netdev_err(net, "unable to close device (ret %d).\n", ret); + return ret; + } + /* Ensure pending bytes in ring are read */ + while (true) { + aread = 0; + for (i = 0; i < nvdev->num_chn; i++) { + chn = nvdev->chan_table[i].channel; if (!chn) continue; - /* make sure receive not running now */ - napi_synchronize(&nvdev->chan_table[i].napi); - aread = hv_get_bytes_to_read(&chn->inbound); if (aread) break; @@ -165,40 +173,22 @@ break; } - if (aread == 0) - return 0; + retry++; + if (retry > retry_max || aread == 0) + break; - if (++retry > RETRY_MAX) - return -ETIMEDOUT; + msleep(msec); - usleep_range(RETRY_US_LO, RETRY_US_HI); + if (msec < 1000) + msec *= 2; } -} -static int netvsc_close(struct net_device *net) -{ - struct net_device_context *net_device_ctx = netdev_priv(net); - struct net_device *vf_netdev - = rtnl_dereference(net_device_ctx->vf_netdev); - struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev); - int ret; - - netif_tx_disable(net); - - /* No need to close rndis filter if it is removed already */ - if (!nvdev) - return 0; - - ret = rndis_filter_close(nvdev); - if (ret != 0) { - netdev_err(net, "unable to close device (ret %d).\n", ret); - return ret; - } - - ret = netvsc_wait_until_empty(nvdev); - if (ret) + if (aread) { netdev_err(net, "Ring buffer not empty after closing rndis\n"); + ret = -ETIMEDOUT; + } +out: if (vf_netdev) dev_close(vf_netdev); @@ -674,7 +664,7 @@ /* timestamp packet in software */ skb_tx_timestamp(skb); - ret = netvsc_send(net, packet, rndis_msg, pb, skb); + ret = netvsc_send(net_device_ctx, packet, rndis_msg, pb, skb); if (likely(ret == 0)) return NETDEV_TX_OK; @@ -870,81 +860,16 @@ } } -static int netvsc_detach(struct net_device *ndev, - struct netvsc_device *nvdev) -{ - struct net_device_context *ndev_ctx = netdev_priv(ndev); - struct hv_device *hdev = ndev_ctx->device_ctx; - int ret; - - /* Don't try continuing to try and setup sub channels */ - if 
(cancel_work_sync(&nvdev->subchan_work)) - nvdev->num_chn = 1; - - /* If device was up (receiving) then shutdown */ - if (netif_running(ndev)) { - netif_tx_disable(ndev); - - ret = rndis_filter_close(nvdev); - if (ret) { - netdev_err(ndev, - "unable to close device (ret %d).\n", ret); - return ret; - } - - ret = netvsc_wait_until_empty(nvdev); - if (ret) { - netdev_err(ndev, - "Ring buffer not empty after closing rndis\n"); - return ret; - } - } - - netif_device_detach(ndev); - - rndis_filter_device_remove(hdev, nvdev); - - return 0; -} - -static int netvsc_attach(struct net_device *ndev, - struct netvsc_device_info *dev_info) -{ - struct net_device_context *ndev_ctx = netdev_priv(ndev); - struct hv_device *hdev = ndev_ctx->device_ctx; - struct netvsc_device *nvdev; - struct rndis_device *rdev; - int ret; - - nvdev = rndis_filter_device_add(hdev, dev_info); - if (IS_ERR(nvdev)) - return PTR_ERR(nvdev); - - /* Note: enable and attach happen when sub-channels setup */ - - netif_carrier_off(ndev); - - if (netif_running(ndev)) { - ret = rndis_filter_open(nvdev); - if (ret) - return ret; - - rdev = nvdev->extension; - if (!rdev->link_state) - netif_carrier_on(ndev); - } - - return 0; -} - static int netvsc_set_channels(struct net_device *net, struct ethtool_channels *channels) { struct net_device_context *net_device_ctx = netdev_priv(net); + struct hv_device *dev = net_device_ctx->device_ctx; struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev); unsigned int orig, count = channels->combined_count; struct netvsc_device_info device_info; - int ret; + bool was_opened; + int ret = 0; /* We do not support separate count for rx, tx, or other */ if (count == 0 || @@ -961,6 +886,9 @@ return -EINVAL; orig = nvdev->num_chn; + was_opened = rndis_filter_opened(nvdev); + if (was_opened) + rndis_filter_close(nvdev); memset(&device_info, 0, sizeof(device_info)); device_info.num_chn = count; @@ -970,17 +898,28 @@ device_info.recv_sections = nvdev->recv_section_cnt; 
device_info.recv_section_size = nvdev->recv_section_size; - ret = netvsc_detach(net, nvdev); - if (ret) - return ret; + rndis_filter_device_remove(dev, nvdev); - ret = netvsc_attach(net, &device_info); - if (ret) { + nvdev = rndis_filter_device_add(dev, &device_info); + if (IS_ERR(nvdev)) { + ret = PTR_ERR(nvdev); device_info.num_chn = orig; - if (netvsc_attach(net, &device_info)) - netdev_err(net, "restoring channel setting failed\n"); + nvdev = rndis_filter_device_add(dev, &device_info); + + if (IS_ERR(nvdev)) { + netdev_err(net, "restoring channel setting failed: %ld\n", + PTR_ERR(nvdev)); + return ret; + } } + if (was_opened) + rndis_filter_open(nvdev); + + /* We may have missed link change notifications */ + net_device_ctx->last_reconfig = 0; + schedule_delayed_work(&net_device_ctx->dwork, 0); + return ret; } @@ -1046,8 +985,10 @@ struct net_device_context *ndevctx = netdev_priv(ndev); struct net_device *vf_netdev = rtnl_dereference(ndevctx->vf_netdev); struct netvsc_device *nvdev = rtnl_dereference(ndevctx->nvdev); + struct hv_device *hdev = ndevctx->device_ctx; int orig_mtu = ndev->mtu; struct netvsc_device_info device_info; + bool was_opened; int ret = 0; if (!nvdev || nvdev->destroy) @@ -1060,6 +1001,11 @@ return ret; } + netif_device_detach(ndev); + was_opened = rndis_filter_opened(nvdev); + if (was_opened) + rndis_filter_close(nvdev); + memset(&device_info, 0, sizeof(device_info)); device_info.ring_size = ring_size; device_info.num_chn = nvdev->num_chn; @@ -1068,27 +1014,35 @@ device_info.recv_sections = nvdev->recv_section_cnt; device_info.recv_section_size = nvdev->recv_section_size; - ret = netvsc_detach(ndev, nvdev); - if (ret) - goto rollback_vf; + rndis_filter_device_remove(hdev, nvdev); ndev->mtu = mtu; - ret = netvsc_attach(ndev, &device_info); - if (ret) - goto rollback; + nvdev = rndis_filter_device_add(hdev, &device_info); + if (IS_ERR(nvdev)) { + ret = PTR_ERR(nvdev); + + /* Attempt rollback to original MTU */ + ndev->mtu = orig_mtu; + nvdev 
= rndis_filter_device_add(hdev, &device_info); + + if (vf_netdev) + dev_set_mtu(vf_netdev, orig_mtu); + + if (IS_ERR(nvdev)) { + netdev_err(ndev, "restoring mtu failed: %ld\n", + PTR_ERR(nvdev)); + return ret; + } + } - return 0; + if (was_opened) + rndis_filter_open(nvdev); -rollback: - /* Attempt rollback to original MTU */ - ndev->mtu = orig_mtu; - - if (netvsc_attach(ndev, &device_info)) - netdev_err(ndev, "restoring mtu failed\n"); -rollback_vf: - if (vf_netdev) - dev_set_mtu(vf_netdev, orig_mtu); + netif_device_attach(ndev); + + /* We may have missed link change notifications */ + schedule_delayed_work(&ndevctx->dwork, 0); return ret; } @@ -1593,9 +1547,11 @@ { struct net_device_context *ndevctx = netdev_priv(ndev); struct netvsc_device *nvdev = rtnl_dereference(ndevctx->nvdev); + struct hv_device *hdev = ndevctx->device_ctx; struct netvsc_device_info device_info; struct ethtool_ringparam orig; u32 new_tx, new_rx; + bool was_opened; int ret = 0; if (!nvdev || nvdev->destroy) @@ -1622,17 +1578,33 @@ device_info.recv_section_size = nvdev->recv_section_size; - ret = netvsc_detach(ndev, nvdev); - if (ret) - return ret; + netif_device_detach(ndev); + was_opened = rndis_filter_opened(nvdev); + if (was_opened) + rndis_filter_close(nvdev); + + rndis_filter_device_remove(hdev, nvdev); + + nvdev = rndis_filter_device_add(hdev, &device_info); + if (IS_ERR(nvdev)) { + ret = PTR_ERR(nvdev); - ret = netvsc_attach(ndev, &device_info); - if (ret) { device_info.send_sections = orig.tx_pending; device_info.recv_sections = orig.rx_pending; - - if (netvsc_attach(ndev, &device_info)) - netdev_err(ndev, "restoring ringparam failed"); + nvdev = rndis_filter_device_add(hdev, &device_info); + if (IS_ERR(nvdev)) { + netdev_err(ndev, "restoring ringparam failed: %ld\n", + PTR_ERR(nvdev)); + return ret; + } } + if (was_opened) + rndis_filter_open(nvdev); + netif_device_attach(ndev); + + /* We may have missed link change notifications */ + ndevctx->last_reconfig = 0; + 
schedule_delayed_work(&ndevctx->dwork, 0); + return ret; } @@ -1859,8 +1831,7 @@ goto rx_handler_failed; } - ret = netdev_master_upper_dev_link(vf_netdev, ndev, - NULL, NULL, NULL); + ret = netdev_upper_dev_link(vf_netdev, ndev, NULL); if (ret != 0) { netdev_err(vf_netdev, "can not set master device %s (err = %d)\n", @@ -2115,8 +2086,8 @@ static int netvsc_remove(struct hv_device *dev) { struct net_device_context *ndev_ctx; - struct net_device *vf_netdev, *net; - struct netvsc_device *nvdev; + struct net_device *vf_netdev; + struct net_device *net; net = hv_get_drvdata(dev); if (net == NULL) { @@ -2126,13 +2097,9 @@ ndev_ctx = netdev_priv(net); - cancel_delayed_work_sync(&ndev_ctx->dwork); - - rcu_read_lock(); - nvdev = rcu_dereference(ndev_ctx->nvdev); + netif_device_detach(net); - if (nvdev) - cancel_work_sync(&nvdev->subchan_work); + cancel_delayed_work_sync(&ndev_ctx->dwork); /* * Call to the vsc driver to let it know that the device is being @@ -2143,13 +2110,11 @@ if (vf_netdev) netvsc_unregister_vf(vf_netdev); - if (nvdev) - rndis_filter_device_remove(dev, nvdev); - unregister_netdevice(net); + rndis_filter_device_remove(dev, + rtnl_dereference(ndev_ctx->nvdev)); rtnl_unlock(); - rcu_read_unlock(); hv_set_drvdata(dev, NULL); diff -u linux-azure-4.15.0/drivers/net/hyperv/rndis_filter.c linux-azure-4.15.0/drivers/net/hyperv/rndis_filter.c --- linux-azure-4.15.0/drivers/net/hyperv/rndis_filter.c +++ linux-azure-4.15.0/drivers/net/hyperv/rndis_filter.c @@ -217,6 +217,7 @@ struct hv_netvsc_packet *packet; struct hv_page_buffer page_buf[2]; struct hv_page_buffer *pb = page_buf; + struct net_device_context *net_device_ctx = netdev_priv(dev->ndev); int ret; /* Setup the packet to send it */ @@ -244,7 +245,7 @@ } rcu_read_lock_bh(); - ret = netvsc_send(dev->ndev, packet, NULL, pb, NULL); + ret = netvsc_send(net_device_ctx, packet, NULL, pb, NULL); rcu_read_unlock_bh(); return ret; @@ -266,23 +267,13 @@ } } -static void rndis_filter_receive_response(struct net_device 
*ndev, - struct netvsc_device *nvdev, - const struct rndis_message *resp) +static void rndis_filter_receive_response(struct rndis_device *dev, + struct rndis_message *resp) { - struct rndis_device *dev = nvdev->extension; struct rndis_request *request = NULL; bool found = false; unsigned long flags; - - /* This should never happen, it means control message - * response received after device removed. - */ - if (dev->state == RNDIS_DEV_UNINITIALIZED) { - netdev_err(ndev, - "got rndis message uninitialized\n"); - return; - } + struct net_device *ndev = dev->ndev; spin_lock_irqsave(&dev->request_lock, flags); list_for_each_entry(request, &dev->req_list, list_ent) { @@ -363,7 +354,7 @@ } static int rndis_filter_receive_data(struct net_device *ndev, - struct netvsc_device *nvdev, + struct rndis_device *dev, struct rndis_message *msg, struct vmbus_channel *channel, void *data, u32 data_buflen) @@ -383,7 +374,7 @@ * should be the data packet size plus the trailer padding size */ if (unlikely(data_buflen < rndis_pkt->data_len)) { - netdev_err(ndev, "rndis message buffer " + netdev_err(dev->ndev, "rndis message buffer " "overflow detected (got %u, min %u)" "...dropping this message!\n", data_buflen, rndis_pkt->data_len); @@ -411,20 +402,34 @@ void *data, u32 buflen) { struct net_device_context *net_device_ctx = netdev_priv(ndev); + struct rndis_device *rndis_dev = net_dev->extension; struct rndis_message *rndis_msg = data; + /* Make sure the rndis device state is initialized */ + if (unlikely(!rndis_dev)) { + netif_dbg(net_device_ctx, rx_err, ndev, + "got rndis message but no rndis device!\n"); + return NVSP_STAT_FAIL; + } + + if (unlikely(rndis_dev->state == RNDIS_DEV_UNINITIALIZED)) { + netif_dbg(net_device_ctx, rx_err, ndev, + "got rndis message uninitialized\n"); + return NVSP_STAT_FAIL; + } + if (netif_msg_rx_status(net_device_ctx)) dump_rndis_message(dev, rndis_msg); switch (rndis_msg->ndis_msg_type) { case RNDIS_MSG_PACKET: - return rndis_filter_receive_data(ndev, 
net_dev, rndis_msg, + return rndis_filter_receive_data(ndev, rndis_dev, rndis_msg, channel, data, buflen); case RNDIS_MSG_INIT_C: case RNDIS_MSG_QUERY_C: case RNDIS_MSG_SET_C: /* completion msgs */ - rndis_filter_receive_response(ndev, net_dev, rndis_msg); + rndis_filter_receive_response(rndis_dev, rndis_msg); break; case RNDIS_MSG_INDICATE: @@ -1116,7 +1121,6 @@ for (i = 0; i < VRSS_SEND_TAB_SIZE; i++) ndev_ctx->tx_table[i] = i % nvdev->num_chn; - netif_device_attach(ndev); rtnl_unlock(); return; @@ -1127,8 +1131,6 @@ nvdev->max_chn = 1; nvdev->num_chn = 1; - - netif_device_attach(ndev); unlock: rtnl_unlock(); } @@ -1224,6 +1226,7 @@ struct ndis_recv_scale_cap rsscap; u32 rsscap_size = sizeof(struct ndis_recv_scale_cap); u32 mtu, size; + const struct cpumask *node_cpu_mask; u32 num_possible_rss_qs; int i, ret; @@ -1280,7 +1283,7 @@ rndis_device->link_state ? "down" : "up"); if (net_device->nvsp_version < NVSP_PROTOCOL_VERSION_5) - goto out; + return net_device; rndis_filter_query_link_speed(rndis_device, net_device); @@ -1292,8 +1295,14 @@ if (ret || rsscap.num_recv_que < 2) goto out; - /* This guarantees that num_possible_rss_qs <= num_online_cpus */ - num_possible_rss_qs = min_t(u32, num_online_cpus(), + /* + * We will limit the VRSS channels to the number CPUs in the NUMA node + * the primary channel is currently bound to. 
+ * + * This also guarantees that num_possible_rss_qs <= num_online_cpus + */ + node_cpu_mask = cpumask_of_node(cpu_to_node(dev->channel->target_cpu)); + num_possible_rss_qs = min_t(u32, cpumask_weight(node_cpu_mask), rsscap.num_recv_que); net_device->max_chn = min_t(u32, VRSS_CHANNEL_MAX, num_possible_rss_qs); @@ -1331,10 +1340,6 @@ net_device->num_chn = 1; } - /* No sub channels, device is ready */ - if (net_device->num_chn == 1) - netif_device_attach(net); - return net_device; err_dev_remv: @@ -1356,6 +1361,7 @@ net_dev->extension = NULL; netvsc_device_remove(dev); + kfree(rndis_dev); } int rndis_filter_open(struct netvsc_device *nvdev) @@ -1381,0 +1388,5 @@ + +bool rndis_filter_opened(const struct netvsc_device *nvdev) +{ + return atomic_read(&nvdev->open_cnt) > 0; +} reverted: --- linux-azure-4.15.0/drivers/net/ipvlan/ipvlan.h +++ linux-azure-4.15.0.orig/drivers/net/ipvlan/ipvlan.h @@ -74,7 +74,6 @@ DECLARE_BITMAP(mac_filters, IPVLAN_MAC_FILTER_SIZE); netdev_features_t sfeatures; u32 msg_enable; - spinlock_t addrs_lock; }; struct ipvl_addr { diff -u linux-azure-4.15.0/drivers/net/ipvlan/ipvlan_core.c linux-azure-4.15.0/drivers/net/ipvlan/ipvlan_core.c --- linux-azure-4.15.0/drivers/net/ipvlan/ipvlan_core.c +++ linux-azure-4.15.0/drivers/net/ipvlan/ipvlan_core.c @@ -35,7 +35,6 @@ } EXPORT_SYMBOL_GPL(ipvlan_count_rx); -#if IS_ENABLED(CONFIG_IPV6) static u8 ipvlan_get_v6_hash(const void *iaddr) { const struct in6_addr *ip6_addr = iaddr; @@ -43,12 +42,6 @@ return __ipv6_addr_jhash(ip6_addr, ipvlan_jhash_secret) & IPVLAN_HASH_MASK; } -#else -static u8 ipvlan_get_v6_hash(const void *iaddr) -{ - return 0; -} -#endif static u8 ipvlan_get_v4_hash(const void *iaddr) { @@ -58,23 +51,6 @@ IPVLAN_HASH_MASK; } -static bool addr_equal(bool is_v6, struct ipvl_addr *addr, const void *iaddr) -{ - if (!is_v6 && addr->atype == IPVL_IPV4) { - struct in_addr *i4addr = (struct in_addr *)iaddr; - - return addr->ip4addr.s_addr == i4addr->s_addr; -#if IS_ENABLED(CONFIG_IPV6) - } else 
if (is_v6 && addr->atype == IPVL_IPV6) { - struct in6_addr *i6addr = (struct in6_addr *)iaddr; - - return ipv6_addr_equal(&addr->ip6addr, i6addr); -#endif - } - - return false; -} - static struct ipvl_addr *ipvlan_ht_addr_lookup(const struct ipvl_port *port, const void *iaddr, bool is_v6) { @@ -83,9 +59,15 @@ hash = is_v6 ? ipvlan_get_v6_hash(iaddr) : ipvlan_get_v4_hash(iaddr); - hlist_for_each_entry_rcu(addr, &port->hlhead[hash], hlnode) - if (addr_equal(is_v6, addr, iaddr)) + hlist_for_each_entry_rcu(addr, &port->hlhead[hash], hlnode) { + if (is_v6 && addr->atype == IPVL_IPV6 && + ipv6_addr_equal(&addr->ip6addr, iaddr)) + return addr; + else if (!is_v6 && addr->atype == IPVL_IPV4 && + addr->ip4addr.s_addr == + ((struct in_addr *)iaddr)->s_addr) return addr; + } return NULL; } @@ -109,33 +91,29 @@ struct ipvl_addr *ipvlan_find_addr(const struct ipvl_dev *ipvlan, const void *iaddr, bool is_v6) { - struct ipvl_addr *addr, *ret = NULL; + struct ipvl_addr *addr; - rcu_read_lock(); - list_for_each_entry_rcu(addr, &ipvlan->addrs, anode) { - if (addr_equal(is_v6, addr, iaddr)) { - ret = addr; - break; - } + list_for_each_entry(addr, &ipvlan->addrs, anode) { + if ((is_v6 && addr->atype == IPVL_IPV6 && + ipv6_addr_equal(&addr->ip6addr, iaddr)) || + (!is_v6 && addr->atype == IPVL_IPV4 && + addr->ip4addr.s_addr == ((struct in_addr *)iaddr)->s_addr)) + return addr; } - rcu_read_unlock(); - return ret; + return NULL; } bool ipvlan_addr_busy(struct ipvl_port *port, void *iaddr, bool is_v6) { struct ipvl_dev *ipvlan; - bool ret = false; - rcu_read_lock(); - list_for_each_entry_rcu(ipvlan, &port->ipvlans, pnode) { - if (ipvlan_find_addr(ipvlan, iaddr, is_v6)) { - ret = true; - break; - } + ASSERT_RTNL(); + + list_for_each_entry(ipvlan, &port->ipvlans, pnode) { + if (ipvlan_find_addr(ipvlan, iaddr, is_v6)) + return true; } - rcu_read_unlock(); - return ret; + return false; } static void *ipvlan_get_L3_hdr(struct ipvl_port *port, struct sk_buff *skb, int *type) @@ -172,7 +150,6 @@ 
lyr3h = ip4h; break; } -#if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): { struct ipv6hdr *ip6h; @@ -211,7 +188,6 @@ } break; } -#endif default: return NULL; } @@ -365,18 +341,14 @@ { struct ipvl_addr *addr = NULL; - switch (addr_type) { -#if IS_ENABLED(CONFIG_IPV6) - case IPVL_IPV6: { + if (addr_type == IPVL_IPV6) { struct ipv6hdr *ip6h; struct in6_addr *i6addr; ip6h = (struct ipv6hdr *)lyr3h; i6addr = use_dest ? &ip6h->daddr : &ip6h->saddr; addr = ipvlan_ht_addr_lookup(port, i6addr, true); - break; - } - case IPVL_ICMPV6: { + } else if (addr_type == IPVL_ICMPV6) { struct nd_msg *ndmh; struct in6_addr *i6addr; @@ -388,19 +360,14 @@ i6addr = &ndmh->target; addr = ipvlan_ht_addr_lookup(port, i6addr, true); } - break; - } -#endif - case IPVL_IPV4: { + } else if (addr_type == IPVL_IPV4) { struct iphdr *ip4h; __be32 *i4addr; ip4h = (struct iphdr *)lyr3h; i4addr = use_dest ? &ip4h->daddr : &ip4h->saddr; addr = ipvlan_ht_addr_lookup(port, i4addr, false); - break; - } - case IPVL_ARP: { + } else if (addr_type == IPVL_ARP) { struct arphdr *arph; unsigned char *arp_ptr; __be32 dip; @@ -414,8 +381,6 @@ memcpy(&dip, arp_ptr, 4); addr = ipvlan_ht_addr_lookup(port, &dip, false); - break; - } } return addr; @@ -459,7 +424,6 @@ return ret; } -#if IS_ENABLED(CONFIG_IPV6) static int ipvlan_process_v6_outbound(struct sk_buff *skb) { const struct ipv6hdr *ip6h = ipv6_hdr(skb); @@ -496,12 +460,6 @@ out: return ret; } -#else -static int ipvlan_process_v6_outbound(struct sk_buff *skb) -{ - return NET_XMIT_DROP; -} -#endif static int ipvlan_process_outbound(struct sk_buff *skb) { @@ -814,7 +772,6 @@ goto out; break; } -#if IS_ENABLED(CONFIG_IPV6) case AF_INET6: { struct dst_entry *dst; @@ -834,7 +791,6 @@ skb_dst_set(skb, dst); break; } -#endif default: break; } reverted: --- linux-azure-4.15.0/drivers/net/ipvlan/ipvlan_main.c +++ linux-azure-4.15.0.orig/drivers/net/ipvlan/ipvlan_main.c @@ -22,14 +22,12 @@ .hooknum = NF_INET_LOCAL_IN, .priority = INT_MAX, }, -#if 
IS_ENABLED(CONFIG_IPV6) { .hook = ipvlan_nf_input, .pf = NFPROTO_IPV6, .hooknum = NF_INET_LOCAL_IN, .priority = INT_MAX, }, -#endif }; static const struct l3mdev_ops ipvl_l3mdev_ops = { @@ -227,10 +225,8 @@ else dev->flags &= ~IFF_NOARP; + list_for_each_entry(addr, &ipvlan->addrs, anode) - rcu_read_lock(); - list_for_each_entry_rcu(addr, &ipvlan->addrs, anode) ipvlan_ht_addr_add(ipvlan, addr); - rcu_read_unlock(); return dev_uc_add(phy_dev, phy_dev->dev_addr); } @@ -246,10 +242,8 @@ dev_uc_del(phy_dev, phy_dev->dev_addr); + list_for_each_entry(addr, &ipvlan->addrs, anode) - rcu_read_lock(); - list_for_each_entry_rcu(addr, &ipvlan->addrs, anode) ipvlan_ht_addr_del(addr); - rcu_read_unlock(); return 0; } @@ -592,7 +586,6 @@ ipvlan->sfeatures = IPVLAN_FEATURES; ipvlan_adjust_mtu(ipvlan, phy_dev); INIT_LIST_HEAD(&ipvlan->addrs); - spin_lock_init(&ipvlan->addrs_lock); /* TODO Probably put random address here to be presented to the * world but keep using the physical-dev address for the outgoing @@ -670,13 +663,11 @@ struct ipvl_dev *ipvlan = netdev_priv(dev); struct ipvl_addr *addr, *next; - spin_lock_bh(&ipvlan->addrs_lock); list_for_each_entry_safe(addr, next, &ipvlan->addrs, anode) { ipvlan_ht_addr_del(addr); + list_del(&addr->anode); - list_del_rcu(&addr->anode); kfree_rcu(addr, rcu); } - spin_unlock_bh(&ipvlan->addrs_lock); ida_simple_remove(&ipvlan->port->ida, dev->dev_id); list_del_rcu(&ipvlan->pnode); @@ -767,7 +758,8 @@ if (dev->reg_state != NETREG_UNREGISTERING) break; + list_for_each_entry_safe(ipvlan, next, &port->ipvlans, + pnode) - list_for_each_entry_safe(ipvlan, next, &port->ipvlans, pnode) ipvlan->dev->rtnl_link_ops->dellink(ipvlan->dev, &lst_kill); unregister_netdevice_many(&lst_kill); @@ -799,7 +791,6 @@ return NOTIFY_DONE; } -/* the caller must held the addrs lock */ static int ipvlan_add_addr(struct ipvl_dev *ipvlan, void *iaddr, bool is_v6) { struct ipvl_addr *addr; @@ -809,17 +800,14 @@ return -ENOMEM; addr->master = ipvlan; + if (is_v6) { + 
memcpy(&addr->ip6addr, iaddr, sizeof(struct in6_addr)); + addr->atype = IPVL_IPV6; + } else { - if (!is_v6) { memcpy(&addr->ip4addr, iaddr, sizeof(struct in_addr)); addr->atype = IPVL_IPV4; -#if IS_ENABLED(CONFIG_IPV6) - } else { - memcpy(&addr->ip6addr, iaddr, sizeof(struct in6_addr)); - addr->atype = IPVL_IPV6; -#endif } + list_add_tail(&addr->anode, &ipvlan->addrs); - - list_add_tail_rcu(&addr->anode, &ipvlan->addrs); /* If the interface is not up, the address will be added to the hash * list by ipvlan_open. @@ -834,46 +822,27 @@ { struct ipvl_addr *addr; - spin_lock_bh(&ipvlan->addrs_lock); addr = ipvlan_find_addr(ipvlan, iaddr, is_v6); + if (!addr) - if (!addr) { - spin_unlock_bh(&ipvlan->addrs_lock); return; - } ipvlan_ht_addr_del(addr); + list_del(&addr->anode); - list_del_rcu(&addr->anode); - spin_unlock_bh(&ipvlan->addrs_lock); kfree_rcu(addr, rcu); -} - -static bool ipvlan_is_valid_dev(const struct net_device *dev) -{ - struct ipvl_dev *ipvlan = netdev_priv(dev); - - if (!netif_is_ipvlan(dev)) - return false; - - if (!ipvlan || !ipvlan->port) - return false; + return; - return true; } -#if IS_ENABLED(CONFIG_IPV6) static int ipvlan_add_addr6(struct ipvl_dev *ipvlan, struct in6_addr *ip6_addr) { + if (ipvlan_addr_busy(ipvlan->port, ip6_addr, true)) { - int ret = -EINVAL; - - spin_lock_bh(&ipvlan->addrs_lock); - if (ipvlan_addr_busy(ipvlan->port, ip6_addr, true)) netif_err(ipvlan, ifup, ipvlan->dev, "Failed to add IPv6=%pI6c addr for %s intf\n", ip6_addr, ipvlan->dev->name); + return -EINVAL; + } + + return ipvlan_add_addr(ipvlan, ip6_addr, true); - else - ret = ipvlan_add_addr(ipvlan, ip6_addr, true); - spin_unlock_bh(&ipvlan->addrs_lock); - return ret; } static void ipvlan_del_addr6(struct ipvl_dev *ipvlan, struct in6_addr *ip6_addr) @@ -915,6 +884,10 @@ struct net_device *dev = (struct net_device *)i6vi->i6vi_dev->dev; struct ipvl_dev *ipvlan = netdev_priv(dev); + /* FIXME IPv6 autoconf calls us from bh without RTNL */ + if (in_softirq()) + return 
NOTIFY_DONE; + if (!netif_is_ipvlan(dev)) return NOTIFY_DONE; @@ -933,21 +906,17 @@ return NOTIFY_OK; } -#endif static int ipvlan_add_addr4(struct ipvl_dev *ipvlan, struct in_addr *ip4_addr) { + if (ipvlan_addr_busy(ipvlan->port, ip4_addr, false)) { - int ret = -EINVAL; - - spin_lock_bh(&ipvlan->addrs_lock); - if (ipvlan_addr_busy(ipvlan->port, ip4_addr, false)) netif_err(ipvlan, ifup, ipvlan->dev, "Failed to add IPv4=%pI4 on %s intf.\n", ip4_addr, ipvlan->dev->name); + return -EINVAL; + } + + return ipvlan_add_addr(ipvlan, ip4_addr, false); - else - ret = ipvlan_add_addr(ipvlan, ip4_addr, false); - spin_unlock_bh(&ipvlan->addrs_lock); - return ret; } static void ipvlan_del_addr4(struct ipvl_dev *ipvlan, struct in_addr *ip4_addr) @@ -1023,7 +992,6 @@ .notifier_call = ipvlan_device_event, }; -#if IS_ENABLED(CONFIG_IPV6) static struct notifier_block ipvlan_addr6_notifier_block __read_mostly = { .notifier_call = ipvlan_addr6_event, }; @@ -1031,7 +999,6 @@ static struct notifier_block ipvlan_addr6_vtor_notifier_block __read_mostly = { .notifier_call = ipvlan_addr6_validator_event, }; -#endif static void ipvlan_ns_exit(struct net *net) { @@ -1056,11 +1023,9 @@ ipvlan_init_secret(); register_netdevice_notifier(&ipvlan_notifier_block); -#if IS_ENABLED(CONFIG_IPV6) register_inet6addr_notifier(&ipvlan_addr6_notifier_block); register_inet6addr_validator_notifier( &ipvlan_addr6_vtor_notifier_block); -#endif register_inetaddr_notifier(&ipvlan_addr4_notifier_block); register_inetaddr_validator_notifier(&ipvlan_addr4_vtor_notifier_block); @@ -1079,11 +1044,9 @@ unregister_inetaddr_notifier(&ipvlan_addr4_notifier_block); unregister_inetaddr_validator_notifier( &ipvlan_addr4_vtor_notifier_block); -#if IS_ENABLED(CONFIG_IPV6) unregister_inet6addr_notifier(&ipvlan_addr6_notifier_block); unregister_inet6addr_validator_notifier( &ipvlan_addr6_vtor_notifier_block); -#endif unregister_netdevice_notifier(&ipvlan_notifier_block); return err; } @@ -1096,11 +1059,9 @@ 
unregister_inetaddr_notifier(&ipvlan_addr4_notifier_block); unregister_inetaddr_validator_notifier( &ipvlan_addr4_vtor_notifier_block); -#if IS_ENABLED(CONFIG_IPV6) unregister_inet6addr_notifier(&ipvlan_addr6_notifier_block); unregister_inet6addr_validator_notifier( &ipvlan_addr6_vtor_notifier_block); -#endif } module_init(ipvlan_init_module); diff -u linux-azure-4.15.0/drivers/net/usb/qmi_wwan.c linux-azure-4.15.0/drivers/net/usb/qmi_wwan.c --- linux-azure-4.15.0/drivers/net/usb/qmi_wwan.c +++ linux-azure-4.15.0/drivers/net/usb/qmi_wwan.c @@ -1098,16 +1098,12 @@ {QMI_FIXED_INTF(0x05c6, 0x9080, 8)}, {QMI_FIXED_INTF(0x05c6, 0x9083, 3)}, {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, - {QMI_FIXED_INTF(0x05c6, 0x90b2, 3)}, /* ublox R410M */ {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, {QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)}, /* YUGA CLM920-NC5 */ {QMI_FIXED_INTF(0x0846, 0x68a2, 8)}, {QMI_FIXED_INTF(0x12d1, 0x140c, 1)}, /* Huawei E173 */ {QMI_FIXED_INTF(0x12d1, 0x14ac, 1)}, /* Huawei E1820 */ - {QMI_FIXED_INTF(0x1435, 0xd181, 3)}, /* Wistron NeWeb D18Q1 */ - {QMI_FIXED_INTF(0x1435, 0xd181, 4)}, /* Wistron NeWeb D18Q1 */ - {QMI_FIXED_INTF(0x1435, 0xd181, 5)}, /* Wistron NeWeb D18Q1 */ {QMI_FIXED_INTF(0x16d8, 0x6003, 0)}, /* CMOTech 6003 */ {QMI_FIXED_INTF(0x16d8, 0x6007, 0)}, /* CMOTech CHE-628S */ {QMI_FIXED_INTF(0x16d8, 0x6008, 0)}, /* CMOTech CMU-301 */ @@ -1244,7 +1240,6 @@ {QMI_FIXED_INTF(0x413c, 0x81b6, 8)}, /* Dell Wireless 5811e */ {QMI_FIXED_INTF(0x413c, 0x81b6, 10)}, /* Dell Wireless 5811e */ {QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)}, /* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */ - {QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)}, /* HP lt4120 Snapdragon X5 LTE */ {QMI_FIXED_INTF(0x22de, 0x9061, 3)}, /* WeTelecom WPD-600N */ {QMI_FIXED_INTF(0x1e0e, 0x9001, 5)}, /* SIMCom 7230E */ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0125, 4)}, /* Quectel EC25, EC20 R2.0 Mini PCIe */ @@ -1342,18 +1337,6 @@ id->driver_info = (unsigned long)&qmi_wwan_info; } - /* There are devices where 
the same interface number can be - * configured as different functions. We should only bind to - * vendor specific functions when matching on interface number - */ - if (id->match_flags & USB_DEVICE_ID_MATCH_INT_NUMBER && - desc->bInterfaceClass != USB_CLASS_VENDOR_SPEC) { - dev_dbg(&intf->dev, - "Rejecting interface number match for class %02x\n", - desc->bInterfaceClass); - return -ENODEV; - } - /* Quectel EC20 quirk where we've QMI on interface 4 instead of 0 */ if (quectel_ec20_detected(intf) && desc->bInterfaceNumber == 0) { dev_dbg(&intf->dev, "Quectel EC20 quirk, skipping interface 0\n"); reverted: --- linux-azure-4.15.0/drivers/net/usb/usbnet.c +++ linux-azure-4.15.0.orig/drivers/net/usb/usbnet.c @@ -315,7 +315,6 @@ void usbnet_skb_return (struct usbnet *dev, struct sk_buff *skb) { struct pcpu_sw_netstats *stats64 = this_cpu_ptr(dev->stats64); - unsigned long flags; int status; if (test_bit(EVENT_RX_PAUSED, &dev->flags)) { @@ -327,10 +326,10 @@ if (skb->protocol == 0) skb->protocol = eth_type_trans (skb, dev->net); + u64_stats_update_begin(&stats64->syncp); - flags = u64_stats_update_begin_irqsave(&stats64->syncp); stats64->rx_packets++; stats64->rx_bytes += skb->len; + u64_stats_update_end(&stats64->syncp); - u64_stats_update_end_irqrestore(&stats64->syncp, flags); netif_dbg(dev, rx_status, dev->net, "< rx, len %zu, type 0x%x\n", skb->len + sizeof (struct ethhdr), skb->protocol); @@ -1249,12 +1248,11 @@ if (urb->status == 0) { struct pcpu_sw_netstats *stats64 = this_cpu_ptr(dev->stats64); - unsigned long flags; + u64_stats_update_begin(&stats64->syncp); - flags = u64_stats_update_begin_irqsave(&stats64->syncp); stats64->tx_packets += entry->packets; stats64->tx_bytes += entry->length; + u64_stats_update_end(&stats64->syncp); - u64_stats_update_end_irqrestore(&stats64->syncp, flags); } else { dev->net->stats.tx_errors++; diff -u linux-azure-4.15.0/drivers/net/vmxnet3/vmxnet3_drv.c linux-azure-4.15.0/drivers/net/vmxnet3/vmxnet3_drv.c --- 
linux-azure-4.15.0/drivers/net/vmxnet3/vmxnet3_drv.c +++ linux-azure-4.15.0/drivers/net/vmxnet3/vmxnet3_drv.c @@ -369,11 +369,6 @@ gdesc = tq->comp_ring.base + tq->comp_ring.next2proc; while (VMXNET3_TCD_GET_GEN(&gdesc->tcd) == tq->comp_ring.gen) { - /* Prevent any &gdesc->tcd field from being (speculatively) - * read before (&gdesc->tcd)->gen is read. - */ - dma_rmb(); - completed += vmxnet3_unmap_pkt(VMXNET3_TCD_GET_TXIDX( &gdesc->tcd), tq, adapter->pdev, adapter); @@ -1108,11 +1103,6 @@ gdesc->txd.tci = skb_vlan_tag_get(skb); } - /* Ensure that the write to (&gdesc->txd)->gen will be observed after - * all other writes to &gdesc->txd. - */ - dma_wmb(); - /* finally flips the GEN bit of the SOP desc. */ gdesc->dword[2] = cpu_to_le32(le32_to_cpu(gdesc->dword[2]) ^ VMXNET3_TXD_GEN); @@ -1308,12 +1298,6 @@ */ break; } - - /* Prevent any rcd field from being (speculatively) read before - * rcd->gen is read. - */ - dma_rmb(); - BUG_ON(rcd->rqID != rq->qid && rcd->rqID != rq->qid2 && rcd->rqID != rq->dataRingQid); idx = rcd->rxdIdx; @@ -1544,12 +1528,6 @@ ring->next2comp = idx; num_to_alloc = vmxnet3_cmd_ring_desc_avail(ring); ring = rq->rx_ring + ring_idx; - - /* Ensure that the writes to rxd->gen bits will be observed - * after all other writes to rxd objects. 
- */ - dma_wmb(); - while (num_to_alloc) { vmxnet3_getRxDesc(rxd, &ring->base[ring->next2fill].rxd, &rxCmdDesc); @@ -2710,7 +2688,7 @@ /* ==================== initialization and cleanup routines ============ */ static int -vmxnet3_alloc_pci_resources(struct vmxnet3_adapter *adapter) +vmxnet3_alloc_pci_resources(struct vmxnet3_adapter *adapter, bool *dma64) { int err; unsigned long mmio_start, mmio_len; @@ -2722,12 +2700,30 @@ return err; } + if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) == 0) { + if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) { + dev_err(&pdev->dev, + "pci_set_consistent_dma_mask failed\n"); + err = -EIO; + goto err_set_mask; + } + *dma64 = true; + } else { + if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) { + dev_err(&pdev->dev, + "pci_set_dma_mask failed\n"); + err = -EIO; + goto err_set_mask; + } + *dma64 = false; + } + err = pci_request_selected_regions(pdev, (1 << 2) - 1, vmxnet3_driver_name); if (err) { dev_err(&pdev->dev, "Failed to request region for adapter: error %d\n", err); - goto err_enable_device; + goto err_set_mask; } pci_set_master(pdev); @@ -2755,7 +2751,7 @@ iounmap(adapter->hw_addr0); err_ioremap: pci_release_selected_regions(pdev, (1 << 2) - 1); -err_enable_device: +err_set_mask: pci_disable_device(pdev); return err; } @@ -3260,7 +3256,7 @@ #endif }; int err; - bool dma64; + bool dma64 = false; /* stupid gcc */ u32 ver; struct net_device *netdev; struct vmxnet3_adapter *adapter; @@ -3306,24 +3302,6 @@ adapter->rx_ring_size = VMXNET3_DEF_RX_RING_SIZE; adapter->rx_ring2_size = VMXNET3_DEF_RX_RING2_SIZE; - if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) == 0) { - if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) { - dev_err(&pdev->dev, - "pci_set_consistent_dma_mask failed\n"); - err = -EIO; - goto err_set_mask; - } - dma64 = true; - } else { - if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) != 0) { - dev_err(&pdev->dev, - "pci_set_dma_mask failed\n"); - err = -EIO; - goto err_set_mask; - } - dma64 = false; 
-	}
-
 	spin_lock_init(&adapter->cmd_lock);
 	adapter->adapter_pa = dma_map_single(&adapter->pdev->dev, adapter,
 					     sizeof(struct vmxnet3_adapter),
@@ -3331,7 +3309,7 @@
 	if (dma_mapping_error(&adapter->pdev->dev, adapter->adapter_pa)) {
 		dev_err(&pdev->dev, "Failed to map dma\n");
 		err = -EFAULT;
-		goto err_set_mask;
+		goto err_dma_map;
 	}
 	adapter->shared = dma_alloc_coherent(
 				&adapter->pdev->dev,
@@ -3382,7 +3360,7 @@
 	}
 #endif /* VMXNET3_RSS */
 
-	err = vmxnet3_alloc_pci_resources(adapter);
+	err = vmxnet3_alloc_pci_resources(adapter, &dma64);
 	if (err < 0)
 		goto err_alloc_pci;
@@ -3528,7 +3506,7 @@
 err_alloc_shared:
 	dma_unmap_single(&adapter->pdev->dev, adapter->adapter_pa,
 			 sizeof(struct vmxnet3_adapter), PCI_DMA_TODEVICE);
-err_set_mask:
+err_dma_map:
 	free_netdev(netdev);
 	return err;
 }
reverted:
--- linux-azure-4.15.0/drivers/net/wireless/ath/wcn36xx/Kconfig
+++ linux-azure-4.15.0.orig/drivers/net/wireless/ath/wcn36xx/Kconfig
@@ -16,12 +16,3 @@
 	  Enabled debugfs support
 
 	  If unsure, say Y to make it easier to debug problems.
-
-config WCN36XX_SNAPDRAGON_HACKS
-	bool "Dragonboard 410c WCN36XX MAC address generation hacks"
-	default n
-	depends on WCN36XX
-	---help---
-	  Upon probe, WCN36XX will try to read its MAC address from
-	  a file located at /lib/firmware/wlan/macaddr0. If the file
-	  is not present, it will randomly generate a new MAC address.
diff -u linux-azure-4.15.0/drivers/net/wireless/ath/wcn36xx/main.c linux-azure-4.15.0/drivers/net/wireless/ath/wcn36xx/main.c --- linux-azure-4.15.0/drivers/net/wireless/ath/wcn36xx/main.c +++ linux-azure-4.15.0/drivers/net/wireless/ath/wcn36xx/main.c @@ -1265,14 +1265,6 @@ void *wcnss; int ret; const u8 *addr; -#ifdef CONFIG_WCN36XX_SNAPDRAGON_HACKS - int status; - const struct firmware *addr_file = NULL; - u8 tmp[18], _addr[ETH_ALEN]; - static const u8 qcom_oui[3] = {0x00, 0x0A, 0xF5}; - static const char *files = {"wlan/macaddr0"}; -#endif - wcn36xx_dbg(WCN36XX_DBG_MAC, "platform probe\n"); @@ -1306,35 +1298,7 @@ wcn36xx_err("invalid local-mac-address\n"); ret = -EINVAL; goto out_wq; - } -#ifdef CONFIG_WCN36XX_SNAPDRAGON_HACKS - else if (addr == NULL) { - addr = _addr; - status = request_firmware(&addr_file, files, &pdev->dev); - - if (status < 0) { - /* Assign a random mac with Qualcomm oui */ - dev_err(&pdev->dev, "Failed (%d) to read macaddress" - "file %s, using a random address instead", status, files); - memcpy(addr, qcom_oui, 3); - get_random_bytes(addr + 3, 3); - } else { - memset(tmp, 0, sizeof(tmp)); - memcpy(tmp, addr_file->data, sizeof(tmp) - 1); - sscanf(tmp, "%hhx:%hhx:%hhx:%hhx:%hhx:%hhx", - &addr[0], - &addr[1], - &addr[2], - &addr[3], - &addr[4], - &addr[5]); - - release_firmware(addr_file); - } - } -#endif - - if (addr) { + } else if (addr) { wcn36xx_info("mac address: %pM\n", addr); SET_IEEE80211_PERM_ADDR(wcn->hw, addr); } reverted: --- linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c +++ linux-azure-4.15.0.orig/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c @@ -173,6 +173,16 @@ u8 rtl_get_hwpg_single_ant_path(struct rtl_priv *rtlpriv) { + struct rtl_mod_params *mod_params = rtlpriv->cfg->mod_params; + + /* override ant_num / ant_path */ + if (mod_params->ant_sel) { + rtlpriv->btcoexist.btc_info.ant_num = + (mod_params->ant_sel == 1 ? 
ANT_X2 : ANT_X1); + + rtlpriv->btcoexist.btc_info.single_ant_path = + (mod_params->ant_sel == 1 ? 0 : 1); + } return rtlpriv->btcoexist.btc_info.single_ant_path; } @@ -183,6 +193,7 @@ u8 rtl_get_hwpg_ant_num(struct rtl_priv *rtlpriv) { + struct rtl_mod_params *mod_params = rtlpriv->cfg->mod_params; u8 num; if (rtlpriv->btcoexist.btc_info.ant_num == ANT_X2) @@ -190,6 +201,10 @@ else num = 1; + /* override ant_num / ant_path */ + if (mod_params->ant_sel) + num = (mod_params->ant_sel == 1 ? ANT_X2 : ANT_X1) + 1; + return num; } reverted: --- linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.h +++ linux-azure-4.15.0.orig/drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.h @@ -601,7 +601,6 @@ bool exhalbtc_initlize_variables(void); bool exhalbtc_bind_bt_coex_withadapter(void *adapter); -void exhalbtc_power_on_setting(struct btc_coexist *btcoexist); void exhalbtc_init_hw_config(struct btc_coexist *btcoexist, bool wifi_only); void exhalbtc_init_coex_dm(struct btc_coexist *btcoexist); void exhalbtc_ips_notify(struct btc_coexist *btcoexist, u8 type); reverted: --- linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.c +++ linux-azure-4.15.0.orig/drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.c @@ -32,7 +32,6 @@ static struct rtl_btc_ops rtl_btc_operation = { .btc_init_variables = rtl_btc_init_variables, .btc_init_hal_vars = rtl_btc_init_hal_vars, - .btc_power_on_setting = rtl_btc_power_on_setting, .btc_init_hw_config = rtl_btc_init_hw_config, .btc_ips_notify = rtl_btc_ips_notify, .btc_lps_notify = rtl_btc_lps_notify, @@ -111,11 +110,6 @@ */ } -void rtl_btc_power_on_setting(struct rtl_priv *rtlpriv) -{ - exhalbtc_power_on_setting(&gl_bt_coexist); -} - void rtl_btc_init_hw_config(struct rtl_priv *rtlpriv) { u8 bt_exist; reverted: --- linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.h +++ linux-azure-4.15.0.orig/drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.h @@ -29,7 +29,6 @@ 
 void rtl_btc_init_variables(struct rtl_priv *rtlpriv);
 void rtl_btc_init_hal_vars(struct rtl_priv *rtlpriv);
-void rtl_btc_power_on_setting(struct rtl_priv *rtlpriv);
 void rtl_btc_init_hw_config(struct rtl_priv *rtlpriv);
 void rtl_btc_ips_notify(struct rtl_priv *rtlpriv, u8 type);
 void rtl_btc_lps_notify(struct rtl_priv *rtlpriv, u8 type);
diff -u linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
--- linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+++ linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
@@ -847,9 +847,6 @@
 		return false;
 	}
 
-	if (rtlpriv->cfg->ops->get_btc_status())
-		rtlpriv->btcoexist.btc_ops->btc_power_on_setting(rtlpriv);
-
 	bytetmp = rtl_read_byte(rtlpriv, REG_MULTI_FUNC_CTRL);
 	rtl_write_byte(rtlpriv, REG_MULTI_FUNC_CTRL, bytetmp | BIT(3));
@@ -2699,21 +2696,21 @@
 		rtlpriv->btcoexist.btc_info.bt_type = BT_RTL8723B;
 		rtlpriv->btcoexist.btc_info.ant_num = (value & 0x1);
 		rtlpriv->btcoexist.btc_info.single_ant_path =
-			(value & 0x40 ? ANT_AUX : ANT_MAIN); /*0xc3[6]*/
+			(value & 0x40); /*0xc3[6]*/
 	} else {
 		rtlpriv->btcoexist.btc_info.btcoexist = 0;
 		rtlpriv->btcoexist.btc_info.bt_type = BT_RTL8723B;
 		rtlpriv->btcoexist.btc_info.ant_num = ANT_X2;
-		rtlpriv->btcoexist.btc_info.single_ant_path = ANT_MAIN;
+		rtlpriv->btcoexist.btc_info.single_ant_path = 0;
 	}
 
 	/* override ant_num / ant_path */
 	if (mod_params->ant_sel) {
 		rtlpriv->btcoexist.btc_info.ant_num =
-			(mod_params->ant_sel == 1 ? ANT_X1 : ANT_X2);
+			(mod_params->ant_sel == 1 ? ANT_X2 : ANT_X1);
 		rtlpriv->btcoexist.btc_info.single_ant_path =
-			(mod_params->ant_sel == 1 ? ANT_AUX : ANT_MAIN);
+			(mod_params->ant_sel == 1 ? 0 : 1);
 	}
 }
diff -u linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/wifi.h linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/wifi.h
--- linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/wifi.h
+++ linux-azure-4.15.0/drivers/net/wireless/realtek/rtlwifi/wifi.h
@@ -2551,7 +2551,6 @@
 struct rtl_btc_ops {
 	void (*btc_init_variables) (struct rtl_priv *rtlpriv);
 	void (*btc_init_hal_vars) (struct rtl_priv *rtlpriv);
-	void (*btc_power_on_setting)(struct rtl_priv *rtlpriv);
 	void (*btc_init_hw_config) (struct rtl_priv *rtlpriv);
 	void (*btc_ips_notify) (struct rtl_priv *rtlpriv, u8 type);
 	void (*btc_lps_notify)(struct rtl_priv *rtlpriv, u8 type);
@@ -2714,11 +2713,6 @@
 	ANT_X1 = 1,
 };
 
-enum bt_ant_path {
-	ANT_MAIN = 0,
-	ANT_AUX = 1,
-};
-
 enum bt_co_type {
 	BT_2WIRE = 0,
 	BT_ISSC_3WIRE = 1,
diff -u linux-azure-4.15.0/drivers/nvme/host/nvme.h linux-azure-4.15.0/drivers/nvme/host/nvme.h
--- linux-azure-4.15.0/drivers/nvme/host/nvme.h
+++ linux-azure-4.15.0/drivers/nvme/host/nvme.h
@@ -81,11 +81,6 @@
 	 * Supports the LighNVM command set if indicated in vs[1].
 	 */
 	NVME_QUIRK_LIGHTNVM = (1 << 6),
-
-	/*
-	 * Set MEDIUM priority on SQ creation
-	 */
-	NVME_QUIRK_MEDIUM_PRIO_SQ = (1 << 7),
 };
 
 /*
diff -u linux-azure-4.15.0/drivers/nvme/host/pci.c linux-azure-4.15.0/drivers/nvme/host/pci.c
--- linux-azure-4.15.0/drivers/nvme/host/pci.c
+++ linux-azure-4.15.0/drivers/nvme/host/pci.c
@@ -1070,19 +1070,10 @@
 static int adapter_alloc_cq(struct nvme_dev *dev, u16 qid,
 		struct nvme_queue *nvmeq)
 {
-	struct nvme_ctrl *ctrl = &dev->ctrl;
 	struct nvme_command c;
 	int flags = NVME_QUEUE_PHYS_CONTIG | NVME_CQ_IRQ_ENABLED;
 
 	/*
-	 * Some drives have a bug that auto-enables WRRU if MEDIUM isn't
-	 * set. Since URGENT priority is zeroes, it makes all queues
-	 * URGENT.
-	 */
-	if (ctrl->quirks & NVME_QUIRK_MEDIUM_PRIO_SQ)
-		flags |= NVME_SQ_PRIO_MEDIUM;
-
-	/*
 	 * Note: we (ab)use the fact that the prp fields survive if no data
 	 * is attached to the request.
 	 */
@@ -2677,8 +2668,7 @@
 		.driver_data = NVME_QUIRK_STRIPE_SIZE |
 				NVME_QUIRK_DEALLOCATE_ZEROES, },
 	{ PCI_VDEVICE(INTEL, 0xf1a5),	/* Intel 600P/P3100 */
-		.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
-				NVME_QUIRK_MEDIUM_PRIO_SQ },
+		.driver_data = NVME_QUIRK_NO_DEEPEST_PS },
 	{ PCI_VDEVICE(INTEL, 0x5845),	/* Qemu emulated controller */
 		.driver_data = NVME_QUIRK_IDENTIFY_CNS, },
 	{ PCI_DEVICE(0x1c58, 0x0003),	/* HGST adapter */
reverted:
--- linux-azure-4.15.0/Documentation/ABI/testing/sysfs-class-cxl
+++ linux-azure-4.15.0.orig/drivers/of/fdt.c
--- linux-azure-4.15.0/drivers/of/fdt.c
+++ linux-azure-4.15.0.orig/drivers/of/fdt.c
@@ -944,7 +944,7 @@
 	int offset;
 	const char *p, *q, *options = NULL;
 	int l;
+	const struct earlycon_id *match;
-	const struct earlycon_id **p_match;
 	const void *fdt = initial_boot_params;
 
 	offset = fdt_path_offset(fdt, "/chosen");
@@ -971,10 +971,7 @@
 		return 0;
 	}
 
+	for (match = __earlycon_table; match < __earlycon_table_end; match++) {
-	for (p_match = __earlycon_table; p_match < __earlycon_table_end;
-	     p_match++) {
-		const struct earlycon_id *match = *p_match;
-
 		if (!match->compatible[0])
 			continue;
reverted:
--- linux-azure-4.15.0/drivers/pci/host/pci-aardvark.c
+++ linux-azure-4.15.0.orig/drivers/pci/host/pci-aardvark.c
@@ -32,7 +32,6 @@
 #define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT	5
 #define PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE		(0 << 11)
 #define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT	12
-#define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ		0x2
 #define PCIE_CORE_LINK_CTRL_STAT_REG			0xd0
 #define PCIE_CORE_LINK_L0S_ENTRY			BIT(0)
 #define PCIE_CORE_LINK_TRAINING				BIT(5)
@@ -104,8 +103,7 @@
 #define PCIE_ISR1_MASK_REG			(CONTROL_BASE_ADDR + 0x4C)
 #define PCIE_ISR1_POWER_STATE_CHANGE		BIT(4)
 #define PCIE_ISR1_FLUSH				BIT(5)
+#define PCIE_ISR1_ALL_MASK			GENMASK(5, 4)
-#define PCIE_ISR1_INTX_ASSERT(val)		BIT(8 + (val))
-#define PCIE_ISR1_ALL_MASK			GENMASK(11, 4)
 #define PCIE_MSI_ADDR_LOW_REG			(CONTROL_BASE_ADDR + 0x50)
 #define PCIE_MSI_ADDR_HIGH_REG			(CONTROL_BASE_ADDR + 0x54)
 #define PCIE_MSI_STATUS_REG			(CONTROL_BASE_ADDR + 0x58)
@@ -177,6 +175,8 @@
 #define PCIE_CONFIG_WR_TYPE0			0xa
 #define PCIE_CONFIG_WR_TYPE1			0xb
 
+/* PCI_BDF shifts 8bit, so we need extra 4bit shift */
+#define PCIE_BDF(dev)				(dev << 4)
 #define PCIE_CONF_BUS(bus)			(((bus) & 0xff) << 20)
 #define PCIE_CONF_DEV(dev)			(((dev) & 0x1f) << 15)
 #define PCIE_CONF_FUNC(fun)			(((fun) & 0x7) << 12)
@@ -299,8 +299,7 @@
 	reg = PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE |
 		(7 << PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT) |
 		PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE |
+		PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT;
-		(PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ <<
-		 PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT);
 	advk_writel(pcie, reg, PCIE_CORE_DEV_CTRL_STATS_REG);
 
 	/* Program PCIe Control 2 to disable strict ordering */
@@ -441,7 +440,7 @@
 	u32 reg;
 	int ret;
 
+	if (PCI_SLOT(devfn) != 0) {
-	if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0) {
 		*val = 0xffffffff;
 		return PCIBIOS_DEVICE_NOT_FOUND;
 	}
@@ -460,7 +459,7 @@
 	advk_writel(pcie, reg, PIO_CTRL);
 
 	/* Program the address registers */
+	reg = PCIE_BDF(devfn) | PCIE_CONF_REG(where);
-	reg = PCIE_CONF_ADDR(bus->number, devfn, where);
 	advk_writel(pcie, reg, PIO_ADDR_LS);
 	advk_writel(pcie, 0, PIO_ADDR_MS);
@@ -495,7 +494,7 @@
 	int offset;
 	int ret;
 
+	if (PCI_SLOT(devfn) != 0)
-	if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0)
 		return PCIBIOS_DEVICE_NOT_FOUND;
 
 	if (where % size)
@@ -613,9 +612,9 @@
 	irq_hw_number_t hwirq = irqd_to_hwirq(d);
 	u32 mask;
 
+	mask = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+	mask |= PCIE_ISR0_INTX_ASSERT(hwirq);
+	advk_writel(pcie, mask, PCIE_ISR0_MASK_REG);
-	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
-	mask |= PCIE_ISR1_INTX_ASSERT(hwirq);
-	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
 }
 
 static void advk_pcie_irq_unmask(struct irq_data *d)
@@ -624,9 +623,9 @@
 	irq_hw_number_t hwirq = irqd_to_hwirq(d);
 	u32 mask;
 
+	mask = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+	mask &= ~PCIE_ISR0_INTX_ASSERT(hwirq);
+	advk_writel(pcie, mask, PCIE_ISR0_MASK_REG);
-	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
-	mask &= ~PCIE_ISR1_INTX_ASSERT(hwirq);
-	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
 }
 
 static int advk_pcie_irq_map(struct irq_domain *h,
@@ -769,35 +768,29 @@
 static void advk_pcie_handle_int(struct advk_pcie *pcie)
 {
+	u32 val, mask, status;
-	u32 isr0_val, isr0_mask, isr0_status;
-	u32 isr1_val, isr1_mask, isr1_status;
 	int i, virq;
 
+	val = advk_readl(pcie, PCIE_ISR0_REG);
+	mask = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+	status = val & ((~mask) & PCIE_ISR0_ALL_MASK);
+
+	if (!status) {
+		advk_writel(pcie, val, PCIE_ISR0_REG);
-	isr0_val = advk_readl(pcie, PCIE_ISR0_REG);
-	isr0_mask = advk_readl(pcie, PCIE_ISR0_MASK_REG);
-	isr0_status = isr0_val & ((~isr0_mask) & PCIE_ISR0_ALL_MASK);
-
-	isr1_val = advk_readl(pcie, PCIE_ISR1_REG);
-	isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
-	isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK);
-
-	if (!isr0_status && !isr1_status) {
-		advk_writel(pcie, isr0_val, PCIE_ISR0_REG);
-		advk_writel(pcie, isr1_val, PCIE_ISR1_REG);
 		return;
 	}
 
 	/* Process MSI interrupts */
+	if (status & PCIE_ISR0_MSI_INT_PENDING)
-	if (isr0_status & PCIE_ISR0_MSI_INT_PENDING)
 		advk_pcie_handle_msi(pcie);
 
 	/* Process legacy interrupts */
 	for (i = 0; i < PCI_NUM_INTX; i++) {
+		if (!(status & PCIE_ISR0_INTX_ASSERT(i)))
-		if (!(isr1_status & PCIE_ISR1_INTX_ASSERT(i)))
 			continue;
 
+		advk_writel(pcie, PCIE_ISR0_INTX_ASSERT(i),
+			    PCIE_ISR0_REG);
-		advk_writel(pcie, PCIE_ISR1_INTX_ASSERT(i),
-			    PCIE_ISR1_REG);
 
 		virq = irq_find_mapping(pcie->irq_domain, i);
 		generic_handle_irq(virq);
diff -u linux-azure-4.15.0/drivers/pci/host/pci-hyperv.c linux-azure-4.15.0/drivers/pci/host/pci-hyperv.c
--- linux-azure-4.15.0/drivers/pci/host/pci-hyperv.c
+++ linux-azure-4.15.0/drivers/pci/host/pci-hyperv.c
@@ -55,6 +55,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff -u linux-azure-4.15.0/drivers/pci/pci-driver.c linux-azure-4.15.0/drivers/pci/pci-driver.c
--- linux-azure-4.15.0/drivers/pci/pci-driver.c
+++ linux-azure-4.15.0/drivers/pci/pci-driver.c
@@ -947,11 +947,10 @@
 	 * devices should not be touched during freeze/thaw transitions,
 	 * however.
 	 */
-	if (!dev_pm_smart_suspend_and_suspended(dev)) {
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
 		pm_runtime_resume(dev);
-		pci_dev->state_saved = false;
-	}
+	pci_dev->state_saved = false;
 
 	if (pm->freeze) {
 		int error;
@@ -1227,14 +1226,11 @@
 	int error;
 
 	/*
-	 * If pci_dev->driver is not set (unbound), we leave the device in D0,
-	 * but it may go to D3cold when the bridge above it runtime suspends.
-	 * Save its config space in case that happens.
+	 * If pci_dev->driver is not set (unbound), the device should
+	 * always remain in D0 regardless of the runtime PM status
 	 */
-	if (!pci_dev->driver) {
-		pci_save_state(pci_dev);
+	if (!pci_dev->driver)
 		return 0;
-	}
 
 	if (!pm || !pm->runtime_suspend)
 		return -ENOSYS;
@@ -1282,18 +1278,16 @@
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 
 	/*
-	 * Restoring config space is necessary even if the device is not bound
-	 * to a driver because although we left it in D0, it may have gone to
-	 * D3cold when the bridge above it runtime suspended.
+	 * If pci_dev->driver is not set (unbound), the device should
+	 * always remain in D0 regardless of the runtime PM status
 	 */
-	pci_restore_standard_config(pci_dev);
-
 	if (!pci_dev->driver)
 		return 0;
 
 	if (!pm || !pm->runtime_resume)
 		return -ENOSYS;
 
+	pci_restore_standard_config(pci_dev);
 	pci_fixup_device(pci_fixup_resume_early, pci_dev);
 	pci_enable_wake(pci_dev, PCI_D0, false);
 	pci_fixup_device(pci_fixup_resume, pci_dev);
diff -u linux-azure-4.15.0/drivers/platform/x86/Kconfig linux-azure-4.15.0/drivers/platform/x86/Kconfig
--- linux-azure-4.15.0/drivers/platform/x86/Kconfig
+++ linux-azure-4.15.0/drivers/platform/x86/Kconfig
@@ -126,7 +126,7 @@
 	depends on ACPI_VIDEO || ACPI_VIDEO = n
 	depends on RFKILL || RFKILL = n
 	depends on SERIO_I8042
-	depends on DELL_SMBIOS
+	select DELL_SMBIOS
 	select POWER_SUPPLY
 	select LEDS_CLASS
 	select NEW_LEDS
diff -u linux-azure-4.15.0/drivers/platform/x86/apple-gmux.c linux-azure-4.15.0/drivers/platform/x86/apple-gmux.c
--- linux-azure-4.15.0/drivers/platform/x86/apple-gmux.c
+++ linux-azure-4.15.0/drivers/platform/x86/apple-gmux.c
@@ -495,7 +495,7 @@
 	return gmux_set_discrete_state(apple_gmux_data, state);
 }
 
-static enum vga_switcheroo_client_id gmux_get_client_id(struct pci_dev *pdev)
+static int gmux_get_client_id(struct pci_dev *pdev)
 {
 	/*
 	 * Early Macbook Pros with switchable graphics use nvidia
reverted:
--- linux-azure-4.15.0/drivers/platform/x86/asus-wireless.c
+++ linux-azure-4.15.0.orig/drivers/platform/x86/asus-wireless.c
@@ -178,10 +178,8 @@
 {
 	struct asus_wireless_data *data = acpi_driver_data(adev);
 
+	if (data->wq)
-	if (data->wq) {
-		devm_led_classdev_unregister(&adev->dev, &data->led);
 		destroy_workqueue(data->wq);
-	}
 	return 0;
 }
reverted:
--- linux-azure-4.15.0/drivers/rtc/hctosys.c
+++ linux-azure-4.15.0.orig/drivers/rtc/hctosys.c
@@ -49,11 +49,6 @@
 	tv64.tv_sec = rtc_tm_to_time64(&tm);
 
-#if BITS_PER_LONG == 32
-	if (tv64.tv_sec > INT_MAX)
-		goto err_read;
-#endif
-
 	err = do_settimeofday64(&tv64);
 
 	dev_info(rtc->dev.parent,
reverted:
--- linux-azure-4.15.0/drivers/rtc/rtc-goldfish.c
+++ linux-azure-4.15.0.orig/drivers/rtc/rtc-goldfish.c
@@ -235,5 +235,3 @@
 };
 
 module_platform_driver(goldfish_rtc);
-
-MODULE_LICENSE("GPL v2");
reverted:
--- linux-azure-4.15.0/drivers/rtc/rtc-m41t80.c
+++ linux-azure-4.15.0.orig/drivers/rtc/rtc-m41t80.c
@@ -885,6 +885,7 @@
 {
 	struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
 	int rc = 0;
+	struct rtc_device *rtc = NULL;
 	struct rtc_time tm;
 	struct m41t80_data *m41t80_data = NULL;
 	bool wakeup_source = false;
@@ -908,10 +909,6 @@
 	m41t80_data->features = id->driver_data;
 	i2c_set_clientdata(client, m41t80_data);
 
-	m41t80_data->rtc = devm_rtc_allocate_device(&client->dev);
-	if (IS_ERR(m41t80_data->rtc))
-		return PTR_ERR(m41t80_data->rtc);
-
 #ifdef CONFIG_OF
 	wakeup_source = of_property_read_bool(client->dev.of_node,
 					      "wakeup-source");
@@ -935,11 +932,15 @@
 		device_init_wakeup(&client->dev, true);
 	}
 
+	rtc = devm_rtc_device_register(&client->dev, client->name,
+				       &m41t80_rtc_ops, THIS_MODULE);
+	if (IS_ERR(rtc))
+		return PTR_ERR(rtc);
-	m41t80_data->rtc->ops = &m41t80_rtc_ops;
+	m41t80_data->rtc = rtc;
 
 	if (client->irq <= 0) {
 		/* We cannot support UIE mode if we do not have an IRQ line */
+		rtc->uie_unsupported = 1;
-		m41t80_data->rtc->uie_unsupported = 1;
 	}
 
 	/* Make sure HT (Halt Update) bit is cleared */
@@ -992,11 +993,6 @@
 	if (m41t80_data->features & M41T80_FEATURE_SQ)
 		m41t80_sqw_register_clk(m41t80_data);
 #endif
-
-	rc = rtc_register_device(m41t80_data->rtc);
-	if (rc)
-		return rc;
-
 	return 0;
 }
reverted:
--- linux-azure-4.15.0/drivers/rtc/rtc-rk808.c
+++ linux-azure-4.15.0.orig/drivers/rtc/rtc-rk808.c
@@ -416,11 +416,12 @@
 	device_init_wakeup(&pdev->dev, 1);
 
+	rk808_rtc->rtc = devm_rtc_device_register(&pdev->dev, "rk808-rtc",
+						  &rk808_rtc_ops, THIS_MODULE);
+	if (IS_ERR(rk808_rtc->rtc)) {
+		ret = PTR_ERR(rk808_rtc->rtc);
+		return ret;
+	}
-	rk808_rtc->rtc = devm_rtc_allocate_device(&pdev->dev);
-	if (IS_ERR(rk808_rtc->rtc))
-		return PTR_ERR(rk808_rtc->rtc);
-
-	rk808_rtc->rtc->ops = &rk808_rtc_ops;
 
 	rk808_rtc->irq = platform_get_irq(pdev, 0);
 	if (rk808_rtc->irq < 0) {
@@ -437,10 +438,9 @@
 	if (ret) {
 		dev_err(&pdev->dev, "Failed to request alarm IRQ %d: %d\n",
 			rk808_rtc->irq, ret);
-		return ret;
 	}
+	return ret;
-
-	return rtc_register_device(rk808_rtc->rtc);
 }
 
 static struct platform_driver rk808_rtc_driver = {
reverted:
--- linux-azure-4.15.0/drivers/rtc/rtc-rp5c01.c
+++ linux-azure-4.15.0.orig/drivers/rtc/rtc-rp5c01.c
@@ -249,24 +249,16 @@
 	platform_set_drvdata(dev, priv);
 
+	rtc = devm_rtc_device_register(&dev->dev, "rtc-rp5c01", &rp5c01_rtc_ops,
+				       THIS_MODULE);
-	rtc = devm_rtc_allocate_device(&dev->dev);
 	if (IS_ERR(rtc))
 		return PTR_ERR(rtc);
-
-	rtc->ops = &rp5c01_rtc_ops;
-
 	priv->rtc = rtc;
 
 	error = sysfs_create_bin_file(&dev->dev.kobj, &priv->nvram_attr);
 	if (error)
 		return error;
 
-	error = rtc_register_device(rtc);
-	if (error) {
-		sysfs_remove_bin_file(&dev->dev.kobj, &priv->nvram_attr);
-		return error;
-	}
-
 	return 0;
 }
reverted:
--- linux-azure-4.15.0/drivers/rtc/rtc-snvs.c
+++ linux-azure-4.15.0.orig/drivers/rtc/rtc-snvs.c
@@ -132,23 +132,20 @@
 {
 	struct snvs_rtc_data *data = dev_get_drvdata(dev);
 	unsigned long time;
-	int ret;
 
 	rtc_tm_to_time(tm, &time);
 
 	/* Disable RTC first */
+	snvs_rtc_enable(data, false);
-	ret = snvs_rtc_enable(data, false);
-	if (ret)
-		return ret;
 
 	/* Write 32-bit time to 47-bit timer, leaving 15 LSBs blank */
 	regmap_write(data->regmap, data->offset + SNVS_LPSRTCLR, time << CNTR_TO_SECS_SH);
 	regmap_write(data->regmap, data->offset + SNVS_LPSRTCMR, time >> (32 - CNTR_TO_SECS_SH));
 
 	/* Enable RTC again */
+	snvs_rtc_enable(data, true);
-	ret = snvs_rtc_enable(data, true);
 
+	return 0;
-	return ret;
 }
 
 static int snvs_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm)
@@ -291,11 +288,7 @@
 	regmap_write(data->regmap, data->offset + SNVS_LPSR, 0xffffffff);
 
 	/* Enable RTC */
+	snvs_rtc_enable(data, true);
-	ret = snvs_rtc_enable(data, true);
-	if (ret) {
-		dev_err(&pdev->dev, "failed to enable rtc %d\n", ret);
-		goto error_rtc_device_register;
-	}
 
 	device_init_wakeup(&pdev->dev, true);
reverted:
--- linux-azure-4.15.0/drivers/rtc/rtc-tx4939.c
+++ linux-azure-4.15.0.orig/drivers/rtc/rtc-tx4939.c
@@ -86,8 +86,7 @@
 	for (i = 2; i < 6; i++)
 		buf[i] = __raw_readl(&rtcreg->dat);
 	spin_unlock_irq(&pdata->lock);
+	sec = (buf[5] << 24) | (buf[4] << 16) | (buf[3] << 8) | buf[2];
-	sec = ((unsigned long)buf[5] << 24) | (buf[4] << 16) |
-		(buf[3] << 8) | buf[2];
 	rtc_time_to_tm(sec, tm);
 	return rtc_valid_tm(tm);
 }
@@ -148,8 +147,7 @@
 	alrm->enabled = (ctl & TX4939_RTCCTL_ALME) ? 1 : 0;
 	alrm->pending = (ctl & TX4939_RTCCTL_ALMD) ? 1 : 0;
 	spin_unlock_irq(&pdata->lock);
+	sec = (buf[5] << 24) | (buf[4] << 16) | (buf[3] << 8) | buf[2];
-	sec = ((unsigned long)buf[5] << 24) | (buf[4] << 16) |
-		(buf[3] << 8) | buf[2];
 	rtc_time_to_tm(sec, &alrm->time);
 	return rtc_valid_tm(&alrm->time);
 }
reverted:
--- linux-azure-4.15.0/drivers/s390/cio/qdio_setup.c
+++ linux-azure-4.15.0.orig/drivers/s390/cio/qdio_setup.c
@@ -141,7 +141,7 @@
 	int i;
 
 	for (i = 0; i < nr_queues; i++) {
+		q = kmem_cache_alloc(qdio_q_cache, GFP_KERNEL);
-		q = kmem_cache_zalloc(qdio_q_cache, GFP_KERNEL);
 		if (!q)
 			return -ENOMEM;
@@ -456,6 +456,7 @@
 {
 	struct ciw *ciw;
 	struct qdio_irq *irq_ptr = init_data->cdev->private->qdio_data;
+	int rc;
 
 	memset(&irq_ptr->qib, 0, sizeof(irq_ptr->qib));
 	memset(&irq_ptr->siga_flag, 0, sizeof(irq_ptr->siga_flag));
@@ -492,14 +493,16 @@
 	ciw = ccw_device_get_ciw(init_data->cdev, CIW_TYPE_EQUEUE);
 	if (!ciw) {
 		DBF_ERROR("%4x NO EQ", irq_ptr->schid.sch_no);
+		rc = -EINVAL;
+		goto out_err;
-		return -EINVAL;
 	}
 	irq_ptr->equeue = *ciw;
 
 	ciw = ccw_device_get_ciw(init_data->cdev, CIW_TYPE_AQUEUE);
 	if (!ciw) {
 		DBF_ERROR("%4x NO AQ", irq_ptr->schid.sch_no);
+		rc = -EINVAL;
+		goto out_err;
-		return -EINVAL;
 	}
 	irq_ptr->aqueue = *ciw;
 
@@ -507,6 +510,9 @@
 	irq_ptr->orig_handler = init_data->cdev->handler;
 	init_data->cdev->handler = qdio_int_handler;
 	return 0;
+out_err:
+	qdio_release_memory(irq_ptr);
+	return rc;
 }
 
 void qdio_print_subchannel_info(struct qdio_irq *irq_ptr,
reverted:
--- linux-azure-4.15.0/drivers/s390/cio/vfio_ccw_cp.c
+++ linux-azure-4.15.0.orig/drivers/s390/cio/vfio_ccw_cp.c
@@ -715,10 +715,6 @@
  * and stores the result to ccwchain list. @cp must have been
  * initialized by a previous call with cp_init(). Otherwise, undefined
  * behavior occurs.
- * For each chain composing the channel program:
- * - On entry ch_len holds the count of CCWs to be translated.
- * - On exit ch_len is adjusted to the count of successfully translated CCWs.
- * This allows cp_free to find in ch_len the count of CCWs to free in a chain.
 *
 * The S/390 CCW Translation APIS (prefixed by 'cp_') are introduced
 * as helpers to do ccw chain translation inside the kernel. Basically
@@ -753,18 +749,11 @@
 	for (idx = 0; idx < len; idx++) {
 		ret = ccwchain_fetch_one(chain, idx, cp);
 		if (ret)
+			return ret;
-			goto out_err;
 	}
 	}
 
 	return 0;
-out_err:
-	/* Only cleanup the chain elements that were actually translated. */
-	chain->ch_len = idx;
-	list_for_each_entry_continue(chain, &cp->ccwchain_list, next) {
-		chain->ch_len = 0;
-	}
-	return ret;
 }
 
 /**
reverted:
--- linux-azure-4.15.0/drivers/s390/cio/vfio_ccw_fsm.c
+++ linux-azure-4.15.0.orig/drivers/s390/cio/vfio_ccw_fsm.c
@@ -20,12 +20,12 @@
 	int ccode;
 	__u8 lpm;
 	unsigned long flags;
-	int ret;
 
 	sch = private->sch;
 
 	spin_lock_irqsave(sch->lock, flags);
 	private->state = VFIO_CCW_STATE_BUSY;
+	spin_unlock_irqrestore(sch->lock, flags);
 
 	orb = cp_get_orb(&private->cp, (u32)(addr_t)sch, sch->lpm);
@@ -38,12 +38,10 @@
 		 * Initialize device status information
 		 */
 		sch->schib.scsw.cmd.actl |= SCSW_ACTL_START_PEND;
+		return 0;
-		ret = 0;
-		break;
 	case 1:		/* Status pending */
 	case 2:		/* Busy */
+		return -EBUSY;
-		ret = -EBUSY;
-		break;
 	case 3:		/* Device/path not operational */
 	{
 		lpm = orb->cmd.lpm;
@@ -53,16 +51,13 @@
 			sch->lpm = 0;
 
 		if (cio_update_schib(sch))
+			return -ENODEV;
+
+		return sch->lpm ? -EACCES : -ENODEV;
-			ret = -ENODEV;
-		else
-			ret = sch->lpm ? -EACCES : -ENODEV;
-		break;
 	}
 	default:
+		return ccode;
-		ret = ccode;
 	}
-	spin_unlock_irqrestore(sch->lock, flags);
-	return ret;
 }
 
 static void fsm_notoper(struct vfio_ccw_private *private,
reverted:
--- linux-azure-4.15.0/drivers/s390/crypto/ap_bus.h
+++ linux-azure-4.15.0.orig/drivers/s390/crypto/ap_bus.h
@@ -198,18 +198,11 @@
  */
 static inline void ap_init_message(struct ap_message *ap_msg)
 {
+	ap_msg->psmid = 0;
+	ap_msg->length = 0;
+	ap_msg->rc = 0;
+	ap_msg->special = 0;
+	ap_msg->receive = NULL;
-	memset(ap_msg, 0, sizeof(*ap_msg));
-}
-
-/**
- * ap_release_message() - Release ap_message.
- * Releases all memory used internal within the ap_message struct
- * Currently this is the message and private field.
- */
-static inline void ap_release_message(struct ap_message *ap_msg)
-{
-	kzfree(ap_msg->message);
-	kzfree(ap_msg->private);
 }
 
 #define for_each_ap_card(_ac) \
reverted:
--- linux-azure-4.15.0/drivers/s390/crypto/zcrypt_api.c
+++ linux-azure-4.15.0.orig/drivers/s390/crypto/zcrypt_api.c
@@ -373,7 +373,6 @@
 
 	trace_s390_zcrypt_req(xcRB, TB_ZSECSENDCPRB);
 
-	ap_init_message(&ap_msg);
 	rc = get_cprb_fc(xcRB, &ap_msg, &func_code, &domain);
 	if (rc)
 		goto out;
@@ -428,7 +427,6 @@
 	spin_unlock(&zcrypt_list_lock);
 
 out:
-	ap_release_message(&ap_msg);
 	trace_s390_zcrypt_rep(xcRB, func_code, rc,
 			      AP_QID_CARD(qid), AP_QID_QUEUE(qid));
 	return rc;
@@ -472,8 +470,6 @@
 
 	trace_s390_zcrypt_req(xcrb, TP_ZSENDEP11CPRB);
 
-	ap_init_message(&ap_msg);
-
 	target_num = (unsigned short) xcrb->targets_num;
 
 	/* empty list indicates autoselect (all available targets) */
@@ -491,7 +487,7 @@
 		if (copy_from_user(targets, uptr,
 				   target_num * sizeof(*targets))) {
 			rc = -EFAULT;
+			goto out;
-			goto out_free;
 		}
 	}
@@ -548,7 +544,6 @@
 out_free:
 	kfree(targets);
 out:
-	ap_release_message(&ap_msg);
 	trace_s390_zcrypt_rep(xcrb, func_code, rc,
 			      AP_QID_CARD(qid), AP_QID_QUEUE(qid));
 	return rc;
@@ -566,7 +561,6 @@
 
 	trace_s390_zcrypt_req(buffer, TP_HWRNGCPRB);
 
-	ap_init_message(&ap_msg);
 	rc = get_rng_fc(&ap_msg, &func_code, &domain);
 	if (rc)
 		goto out;
@@ -597,10 +591,8 @@
 	pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, weight);
 	spin_unlock(&zcrypt_list_lock);
 
+	if (!pref_zq)
+		return -ENODEV;
-	if (!pref_zq) {
-		rc = -ENODEV;
-		goto out;
-	}
 
 	qid = pref_zq->queue->qid;
 	rc = pref_zq->ops->rng(pref_zq, buffer, &ap_msg);
@@ -610,7 +602,6 @@
 	spin_unlock(&zcrypt_list_lock);
 
 out:
-	ap_release_message(&ap_msg);
 	trace_s390_zcrypt_rep(buffer, func_code, rc,
 			      AP_QID_CARD(qid), AP_QID_QUEUE(qid));
 	return rc;
reverted:
--- linux-azure-4.15.0/drivers/s390/crypto/zcrypt_msgtype6.c
+++ linux-azure-4.15.0.orig/drivers/s390/crypto/zcrypt_msgtype6.c
@@ -1084,13 +1084,6 @@
 	return rc;
 }
 
-/**
- * Fetch function code from cprb.
- * Extracting the fc requires to copy the cprb from userspace.
- * So this function allocates memory and needs an ap_msg prepared
- * by the caller with ap_init_message(). Also the caller has to
- * make sure ap_release_message() is always called even on failure.
- */
 unsigned int get_cprb_fc(struct ica_xcRB *xcRB,
 			 struct ap_message *ap_msg,
 			 unsigned int *func_code, unsigned short **dom)
@@ -1098,7 +1091,9 @@
 	struct response_type resp_type = {
 		.type = PCIXCC_RESPONSE_TYPE_XCRB,
 	};
+	int rc;
 
+	ap_init_message(ap_msg);
 	ap_msg->message = kmalloc(MSGTYPE06_MAX_MSG_SIZE, GFP_KERNEL);
 	if (!ap_msg->message)
 		return -ENOMEM;
@@ -1106,10 +1101,17 @@
 	ap_msg->psmid = (((unsigned long long) current->pid) << 32) +
 				atomic_inc_return(&zcrypt_step);
 	ap_msg->private = kmalloc(sizeof(resp_type), GFP_KERNEL);
+	if (!ap_msg->private) {
+		kzfree(ap_msg->message);
-	if (!ap_msg->private)
 		return -ENOMEM;
+	}
 	memcpy(ap_msg->private, &resp_type, sizeof(resp_type));
+	rc = XCRB_msg_to_type6CPRB_msgX(ap_msg, xcRB, func_code, dom);
+	if (rc) {
+		kzfree(ap_msg->message);
+		kzfree(ap_msg->private);
+	}
+	return rc;
-	return XCRB_msg_to_type6CPRB_msgX(ap_msg, xcRB, func_code, dom);
 }
 
 /**
@@ -1137,16 +1139,11 @@
 		/* Signal pending. */
 		ap_cancel_message(zq->queue, ap_msg);
 
+	kzfree(ap_msg->message);
+	kzfree(ap_msg->private);
 	return rc;
 }
 
-/**
- * Fetch function code from ep11 cprb.
- * Extracting the fc requires to copy the ep11 cprb from userspace.
- * So this function allocates memory and needs an ap_msg prepared
- * by the caller with ap_init_message(). Also the caller has to
- * make sure ap_release_message() is always called even on failure.
- */
 unsigned int get_ep11cprb_fc(struct ep11_urb *xcrb,
 			     struct ap_message *ap_msg,
 			     unsigned int *func_code)
@@ -1154,7 +1151,9 @@
 	struct response_type resp_type = {
 		.type = PCIXCC_RESPONSE_TYPE_EP11,
 	};
+	int rc;
 
+	ap_init_message(ap_msg);
 	ap_msg->message = kmalloc(MSGTYPE06_MAX_MSG_SIZE, GFP_KERNEL);
 	if (!ap_msg->message)
 		return -ENOMEM;
@@ -1162,10 +1161,17 @@
 	ap_msg->psmid = (((unsigned long long) current->pid) << 32) +
 				atomic_inc_return(&zcrypt_step);
 	ap_msg->private = kmalloc(sizeof(resp_type), GFP_KERNEL);
+	if (!ap_msg->private) {
+		kzfree(ap_msg->message);
-	if (!ap_msg->private)
 		return -ENOMEM;
+	}
 	memcpy(ap_msg->private, &resp_type, sizeof(resp_type));
+	rc = xcrb_msg_to_type6_ep11cprb_msgx(ap_msg, xcrb, func_code);
+	if (rc) {
+		kzfree(ap_msg->message);
+		kzfree(ap_msg->private);
+	}
+	return rc;
-	return xcrb_msg_to_type6_ep11cprb_msgx(ap_msg, xcrb, func_code);
 }
 
 /**
@@ -1240,6 +1246,8 @@
 		/* Signal pending. */
 		ap_cancel_message(zq->queue, ap_msg);
 
+	kzfree(ap_msg->message);
+	kzfree(ap_msg->private);
 	return rc;
 }
 
@@ -1250,6 +1258,7 @@
 		.type = PCIXCC_RESPONSE_TYPE_XCRB,
 	};
 
+	ap_init_message(ap_msg);
 	ap_msg->message = kmalloc(MSGTYPE06_MAX_MSG_SIZE, GFP_KERNEL);
 	if (!ap_msg->message)
 		return -ENOMEM;
@@ -1257,8 +1266,10 @@
 	ap_msg->psmid = (((unsigned long long) current->pid) << 32) +
 				atomic_inc_return(&zcrypt_step);
 	ap_msg->private = kmalloc(sizeof(resp_type), GFP_KERNEL);
+	if (!ap_msg->private) {
+		kzfree(ap_msg->message);
-	if (!ap_msg->private)
 		return -ENOMEM;
+	}
 	memcpy(ap_msg->private, &resp_type, sizeof(resp_type));
 
 	rng_type6CPRB_msgX(ap_msg, ZCRYPT_RNG_BUFFER_SIZE, domain);
@@ -1302,6 +1313,8 @@
 		/* Signal pending. */
 		ap_cancel_message(zq->queue, ap_msg);
 
+	kzfree(ap_msg->message);
+	kzfree(ap_msg->private);
 	return rc;
 }
reverted:
--- linux-azure-4.15.0/drivers/s390/scsi/zfcp_dbf.c
+++ linux-azure-4.15.0.orig/drivers/s390/scsi/zfcp_dbf.c
@@ -4,7 +4,7 @@
  *
  * Debug traces for zfcp.
  *
+ * Copyright IBM Corp. 2002, 2017
- * Copyright IBM Corp. 2002, 2018
  */
 
 #define KMSG_COMPONENT "zfcp"
@@ -308,27 +308,6 @@
 	spin_unlock_irqrestore(&dbf->rec_lock, flags);
 }
 
-/**
- * zfcp_dbf_rec_trig_lock - trace event related to triggered recovery with lock
- * @tag: identifier for event
- * @adapter: adapter on which the erp_action should run
- * @port: remote port involved in the erp_action
- * @sdev: scsi device involved in the erp_action
- * @want: wanted erp_action
- * @need: required erp_action
- *
- * The adapter->erp_lock must not be held.
- */
-void zfcp_dbf_rec_trig_lock(char *tag, struct zfcp_adapter *adapter,
-			    struct zfcp_port *port, struct scsi_device *sdev,
-			    u8 want, u8 need)
-{
-	unsigned long flags;
-
-	read_lock_irqsave(&adapter->erp_lock, flags);
-	zfcp_dbf_rec_trig(tag, adapter, port, sdev, want, need);
-	read_unlock_irqrestore(&adapter->erp_lock, flags);
-}
 
 /**
  * zfcp_dbf_rec_run_lvl - trace event related to running recovery
reverted:
--- linux-azure-4.15.0/drivers/s390/scsi/zfcp_ext.h
+++ linux-azure-4.15.0.orig/drivers/s390/scsi/zfcp_ext.h
@@ -4,7 +4,7 @@
  *
  * External function declarations.
  *
+ * Copyright IBM Corp. 2002, 2016
- * Copyright IBM Corp. 2002, 2018
  */
 
 #ifndef ZFCP_EXT_H
@@ -35,9 +35,6 @@
 extern void zfcp_dbf_adapter_unregister(struct zfcp_adapter *);
 extern void zfcp_dbf_rec_trig(char *, struct zfcp_adapter *,
 			      struct zfcp_port *, struct scsi_device *, u8, u8);
-extern void zfcp_dbf_rec_trig_lock(char *tag, struct zfcp_adapter *adapter,
-				   struct zfcp_port *port,
-				   struct scsi_device *sdev, u8 want, u8 need);
 extern void zfcp_dbf_rec_run(char *, struct zfcp_erp_action *);
 extern void zfcp_dbf_rec_run_lvl(int level, char *tag,
 				 struct zfcp_erp_action *erp);
reverted:
--- linux-azure-4.15.0/drivers/s390/scsi/zfcp_scsi.c
+++ linux-azure-4.15.0.orig/drivers/s390/scsi/zfcp_scsi.c
@@ -4,7 +4,7 @@
  *
  * Interface to Linux SCSI midlayer.
  *
+ * Copyright IBM Corp. 2002, 2017
- * Copyright IBM Corp. 2002, 2018
  */
 
 #define KMSG_COMPONENT "zfcp"
@@ -618,9 +618,9 @@
 	ids.port_id = port->d_id;
 	ids.roles = FC_RPORT_ROLE_FCP_TARGET;
 
+	zfcp_dbf_rec_trig("scpaddy", port->adapter, port, NULL,
+			  ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD,
+			  ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD);
-	zfcp_dbf_rec_trig_lock("scpaddy", port->adapter, port, NULL,
-			       ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD,
-			       ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD);
 	rport = fc_remote_port_add(port->adapter->scsi_host, 0, &ids);
 	if (!rport) {
 		dev_err(&port->adapter->ccw_device->dev,
@@ -642,9 +642,9 @@
 	struct fc_rport *rport = port->rport;
 
 	if (rport) {
+		zfcp_dbf_rec_trig("scpdely", port->adapter, port, NULL,
+				  ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL,
+				  ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL);
-		zfcp_dbf_rec_trig_lock("scpdely", port->adapter, port, NULL,
-				       ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL,
-				       ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL);
 		fc_remote_port_delete(rport);
 		port->rport = NULL;
 	}
diff -u linux-azure-4.15.0/drivers/scsi/aacraid/commsup.c linux-azure-4.15.0/drivers/scsi/aacraid/commsup.c
--- linux-azure-4.15.0/drivers/scsi/aacraid/commsup.c
+++ linux-azure-4.15.0/drivers/scsi/aacraid/commsup.c
@@ -724,8 +724,6 @@
 	int wait;
 	unsigned long flags = 0;
 	unsigned long mflags = 0;
-	struct aac_hba_cmd_req *hbacmd = (struct aac_hba_cmd_req *)
-			fibptr->hw_fib_va;
 
 	fibptr->flags = (FIB_CONTEXT_FLAG | FIB_CONTEXT_FLAG_NATIVE_HBA);
 	if (callback) {
@@ -736,9 +734,11 @@
 		wait = 1;
 
-	hbacmd->iu_type = command;
-
 	if (command == HBA_IU_TYPE_SCSI_CMD_REQ) {
+		struct aac_hba_cmd_req *hbacmd =
+			(struct aac_hba_cmd_req *)fibptr->hw_fib_va;
+
+		hbacmd->iu_type = command;
 		/* bit1 of request_id must be 0 */
 		hbacmd->request_id =
 			cpu_to_le32((((u32)(fibptr - dev->fibs)) << 2) + 1);
@@ -1502,10 +1502,9 @@
 	host = aac->scsi_host_ptr;
 	scsi_block_requests(host);
 	aac_adapter_disable_int(aac);
-	if (aac->thread && aac->thread->pid != current->pid) {
+	if (aac->thread->pid != current->pid) {
 		spin_unlock_irq(host->host_lock);
 		kthread_stop(aac->thread);
-		aac->thread = NULL;
 		jafo = 1;
 	}
@@ -1592,7 +1591,6 @@
 					  aac->name);
 		if (IS_ERR(aac->thread)) {
 			retval = PTR_ERR(aac->thread);
-			aac->thread = NULL;
 			goto out;
 		}
 	}
diff -u linux-azure-4.15.0/drivers/scsi/aacraid/linit.c linux-azure-4.15.0/drivers/scsi/aacraid/linit.c
--- linux-azure-4.15.0/drivers/scsi/aacraid/linit.c
+++ linux-azure-4.15.0/drivers/scsi/aacraid/linit.c
@@ -1562,7 +1562,6 @@
 			up(&fib->event_wait);
 		}
 		kthread_stop(aac->thread);
-		aac->thread = NULL;
 	}
 
 	aac_send_shutdown(aac);
@@ -1694,10 +1693,8 @@
 	 *	Map in the registers from the adapter.
 	 */
 	aac->base_size = AAC_MIN_FOOTPRINT_SIZE;
-	if ((*aac_drivers[index].init)(aac)) {
-		error = -ENODEV;
+	if ((*aac_drivers[index].init)(aac))
 		goto out_unmap;
-	}
 
 	if (aac->sync_mode) {
 		if (aac_sync_mode)
reverted:
--- linux-azure-4.15.0/drivers/scsi/bnx2fc/bnx2fc_io.c
+++ linux-azure-4.15.0.orig/drivers/scsi/bnx2fc/bnx2fc_io.c
@@ -1889,7 +1889,6 @@
 		/* we will not receive ABTS response for this IO */
 		BNX2FC_IO_DBG(io_req, "Timer context finished processing "
 			      "this scsi cmd\n");
-		return;
 	}
 
 	/* Cancel the timeout_work, as we received IO completion */
reverted:
--- linux-azure-4.15.0/drivers/scsi/iscsi_tcp.c
+++ linux-azure-4.15.0.orig/drivers/scsi/iscsi_tcp.c
@@ -37,7 +37,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -953,13 +952,6 @@
 
 static int iscsi_sw_tcp_slave_configure(struct scsi_device *sdev)
 {
-	struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(sdev->host);
-	struct iscsi_session *session = tcp_sw_host->session;
-	struct iscsi_conn *conn = session->leadconn;
-
-	if (conn->datadgst_en)
-		sdev->request_queue->backing_dev_info->capabilities
-			|= BDI_CAP_STABLE_WRITES;
 	blk_queue_bounce_limit(sdev->request_queue, BLK_BOUNCE_ANY);
 	blk_queue_dma_alignment(sdev->request_queue, 0);
 	return 0;
diff -u linux-azure-4.15.0/drivers/scsi/lpfc/lpfc_nportdisc.c linux-azure-4.15.0/drivers/scsi/lpfc/lpfc_nportdisc.c
--- linux-azure-4.15.0/drivers/scsi/lpfc/lpfc_nportdisc.c
+++ linux-azure-4.15.0/drivers/scsi/lpfc/lpfc_nportdisc.c
@@ -1998,14 +1998,8 @@
 			ndlp->nlp_type |= NLP_NVME_TARGET;
 		if (bf_get_be32(prli_disc, nvpr))
 			ndlp->nlp_type |= NLP_NVME_DISCOVERY;
-
-		/*
-		 * If prli_fba is set, the Target supports FirstBurst.
-		 * If prli_fb_sz is 0, the FirstBurst size is unlimited,
-		 * otherwise it defines the actual size supported by
-		 * the NVME Target.
-		 */
 		if ((bf_get_be32(prli_fba, nvpr) == 1) &&
+		    (bf_get_be32(prli_fb_sz, nvpr) > 0) &&
 		    (phba->cfg_nvme_enable_fb) &&
 		    (!phba->nvmet_support)) {
 			/* Both sides support FB. The target's first
@@ -2014,13 +2008,6 @@
 			ndlp->nlp_flag |= NLP_FIRSTBURST;
 			ndlp->nvme_fb_size = bf_get_be32(prli_fb_sz,
 							 nvpr);
-
-			/* Expressed in units of 512 bytes */
-			if (ndlp->nvme_fb_size)
-				ndlp->nvme_fb_size <<=
-					LPFC_NVME_FB_SHIFT;
-			else
-				ndlp->nvme_fb_size = LPFC_NVME_MAX_FB;
 		}
 	}
diff -u linux-azure-4.15.0/drivers/scsi/lpfc/lpfc_nvme.h linux-azure-4.15.0/drivers/scsi/lpfc/lpfc_nvme.h
--- linux-azure-4.15.0/drivers/scsi/lpfc/lpfc_nvme.h
+++ linux-azure-4.15.0/drivers/scsi/lpfc/lpfc_nvme.h
@@ -27,8 +27,6 @@
 #define LPFC_NVME_WAIT_TMO		10
 #define LPFC_NVME_EXPEDITE_XRICNT	8
-#define LPFC_NVME_FB_SHIFT		9
-#define LPFC_NVME_MAX_FB		(1 << 20)	/* 1M */
 
 struct lpfc_nvme_qhandle {
 	uint32_t index;		/* WQ index to use */
diff -u linux-azure-4.15.0/drivers/scsi/mpt3sas/mpt3sas_base.c linux-azure-4.15.0/drivers/scsi/mpt3sas/mpt3sas_base.c
--- linux-azure-4.15.0/drivers/scsi/mpt3sas/mpt3sas_base.c
+++ linux-azure-4.15.0/drivers/scsi/mpt3sas/mpt3sas_base.c
@@ -2387,11 +2387,8 @@
 			continue;
 		}
 
-		for_each_cpu_and(cpu, mask, cpu_online_mask) {
-			if (cpu >= ioc->cpu_msix_table_sz)
-				break;
+		for_each_cpu(cpu, mask)
 			ioc->cpu_msix_table[cpu] = reply_q->msix_index;
-		}
 	}
 	return;
 }
diff -u linux-azure-4.15.0/drivers/scsi/mpt3sas/mpt3sas_scsih.c linux-azure-4.15.0/drivers/scsi/mpt3sas/mpt3sas_scsih.c
--- linux-azure-4.15.0/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+++ linux-azure-4.15.0/drivers/scsi/mpt3sas/mpt3sas_scsih.c
@@ -10720,7 +10720,7 @@
 	snprintf(ioc->firmware_event_name, sizeof(ioc->firmware_event_name),
 	    "fw_event_%s%d", ioc->driver_name, ioc->id);
 	ioc->firmware_event_thread = alloc_ordered_workqueue(
-	    ioc->firmware_event_name, 0);
+	    ioc->firmware_event_name, WQ_MEM_RECLAIM);
 	if (!ioc->firmware_event_thread) {
 		pr_err(MPT3SAS_FMT "failure at %s:%d/%s()!\n",
 		    ioc->name, __FILE__, __LINE__, __func__);
reverted:
--- linux-azure-4.15.0/drivers/scsi/mvsas/mv_94xx.c
+++ linux-azure-4.15.0.orig/drivers/scsi/mvsas/mv_94xx.c
@@ -1080,16 +1080,16 @@
 		void __iomem *regs = mvi->regs_ex - 0x10200;
 
 		int drive = (i/3) & (4-1); /* drive number on host */
+		u32 block = mr32(MVS_SGPIO_DCTRL +
-		int driveshift = drive * 8; /* bit offset of drive */
-		u32 block = ioread32be(regs + MVS_SGPIO_DCTRL +
 			MVS_SGPIO_HOST_OFFSET * mvi->id);
+
 		/*
 		 * if bit is set then create a mask with the first
 		 * bit of the drive set in the mask ...
 		 */
+		u32 bit = (write_data[i/8] & (1 << (i&(8-1)))) ?
+			1<<(24-drive*8) : 0;
-		u32 bit = get_unaligned_be32(write_data) & (1 << i) ?
-			1 << driveshift : 0;
 
 		/*
 		 * ... and then shift it to the right position based
@@ -1098,27 +1098,26 @@
 		switch (i%3) {
 		case 0: /* activity */
 			block &= ~((0x7 << MVS_SGPIO_DCTRL_ACT_SHIFT)
+					<< (24-drive*8));
-					<< driveshift);
 			/* hardwire activity bit to SOF */
 			block |= LED_BLINKA_SOF << (
 				MVS_SGPIO_DCTRL_ACT_SHIFT +
+				(24-drive*8));
-				driveshift);
 			break;
 		case 1: /* id */
 			block &= ~((0x3 << MVS_SGPIO_DCTRL_LOC_SHIFT)
+					<< (24-drive*8));
-					<< driveshift);
 			block |= bit << MVS_SGPIO_DCTRL_LOC_SHIFT;
 			break;
 		case 2: /* fail */
 			block &= ~((0x7 << MVS_SGPIO_DCTRL_ERR_SHIFT)
+					<< (24-drive*8));
-					<< driveshift);
 			block |= bit << MVS_SGPIO_DCTRL_ERR_SHIFT;
 			break;
 		}
 
+		mw32(MVS_SGPIO_DCTRL + MVS_SGPIO_HOST_OFFSET * mvi->id,
+			block);
-		iowrite32be(block,
-			regs + MVS_SGPIO_DCTRL +
-			MVS_SGPIO_HOST_OFFSET * mvi->id);
 	}
@@ -1133,7 +1132,7 @@
 		void __iomem *regs = mvi->regs_ex - 0x10200;
 
 		mw32(MVS_SGPIO_DCTRL + MVS_SGPIO_HOST_OFFSET * mvi->id,
+			be32_to_cpu(((u32 *) write_data)[i]));
-			((u32 *) write_data)[i]);
 	}
 	return reg_count;
 }
reverted:
--- linux-azure-4.15.0/drivers/scsi/qedi/qedi_fw.c
+++ linux-azure-4.15.0.orig/drivers/scsi/qedi/qedi_fw.c
@@ -761,11 +761,6 @@
 	iscsi_cid = cqe->conn_id;
 	qedi_conn = qedi->cid_que.conn_cid_tbl[iscsi_cid];
-	if (!qedi_conn) {
-		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
-			  "icid not found 0x%x\n", cqe->conn_id);
-		return;
-	}
 
 	/* Based on this itt get the corresponding qedi_cmd */
 	spin_lock_bh(&qedi_conn->tmf_work_lock);
reverted:
--- linux-azure-4.15.0/drivers/scsi/qedi/qedi_main.c
+++ linux-azure-4.15.0.orig/drivers/scsi/qedi/qedi_main.c
@@ -1840,8 +1840,8 @@
 	switch (type) {
 	case ISCSI_BOOT_INI_INITIATOR_NAME:
+		rc = snprintf(str, NVM_ISCSI_CFG_ISCSI_NAME_MAX_LEN, "%s\n",
+			      initiator->initiator_name.byte);
-		rc = sprintf(str, "%.*s\n", NVM_ISCSI_CFG_ISCSI_NAME_MAX_LEN,
-			     initiator->initiator_name.byte);
 		break;
 	default:
 		rc = 0;
@@ -1908,8 +1908,8 @@
 	switch (type) {
 	case ISCSI_BOOT_TGT_NAME:
+		rc = snprintf(str, NVM_ISCSI_CFG_ISCSI_NAME_MAX_LEN, "%s\n",
+			      block->target[idx].target_name.byte);
-		rc = sprintf(str, "%.*s\n", NVM_ISCSI_CFG_ISCSI_NAME_MAX_LEN,
-			     block->target[idx].target_name.byte);
 		break;
 	case ISCSI_BOOT_TGT_IP_ADDR:
 		if (ipv6_en)
@@ -1930,20 +1930,20 @@
 			  block->target[idx].lun.value[0]);
 		break;
 	case ISCSI_BOOT_TGT_CHAP_NAME:
+		rc = snprintf(str, NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN, "%s\n",
+			      chap_name);
-		rc = sprintf(str, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
-			     chap_name);
 		break;
 	case ISCSI_BOOT_TGT_CHAP_SECRET:
+		rc = snprintf(str, NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN, "%s\n",
+			      chap_secret);
-		rc = sprintf(str, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
-			     chap_secret);
 		break;
 	case ISCSI_BOOT_TGT_REV_CHAP_NAME:
+		rc = snprintf(str, NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN, "%s\n",
+			      mchap_name);
-		rc = sprintf(str, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
-			     mchap_name);
 		break;
 	case ISCSI_BOOT_TGT_REV_CHAP_SECRET:
+		rc = snprintf(str, NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN, "%s\n",
+			      mchap_secret);
-		rc = sprintf(str, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN,
-			     mchap_secret);
 		break;
 	case ISCSI_BOOT_TGT_FLAGS:
 		rc = snprintf(str, 3, "%hhd\n", SYSFS_FLAG_FW_SEL_BOOT);
diff -u linux-azure-4.15.0/drivers/scsi/qla2xxx/qla_isr.c linux-azure-4.15.0/drivers/scsi/qla2xxx/qla_isr.c
--- linux-azure-4.15.0/drivers/scsi/qla2xxx/qla_isr.c
+++ linux-azure-4.15.0/drivers/scsi/qla2xxx/qla_isr.c
@@ -272,8 +272,7 @@
 	struct device_reg_2xxx __iomem *reg = &ha->iobase->isp;
 
 	/* Read all mbox registers? */
-	WARN_ON_ONCE(ha->mbx_count > 32);
-	mboxes = (1ULL << ha->mbx_count) - 1;
+	mboxes = (1 << ha->mbx_count) - 1;
 	if (!ha->mcp)
 		ql_dbg(ql_dbg_async, vha, 0x5001, "MBX pointer ERROR.\n");
 	else
@@ -2848,8 +2847,7 @@
 	struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
 
 	/* Read all mbox registers?
*/ - WARN_ON_ONCE(ha->mbx_count > 32); - mboxes = (1ULL << ha->mbx_count) - 1; + mboxes = (1 << ha->mbx_count) - 1; if (!ha->mcp) ql_dbg(ql_dbg_async, vha, 0x504e, "MBX pointer ERROR.\n"); else reverted: --- linux-azure-4.15.0/drivers/scsi/qla4xxx/ql4_def.h +++ linux-azure-4.15.0.orig/drivers/scsi/qla4xxx/ql4_def.h @@ -168,8 +168,6 @@ #define DEV_DB_NON_PERSISTENT 0 #define DEV_DB_PERSISTENT 1 -#define QL4_ISP_REG_DISCONNECT 0xffffffffU - #define COPY_ISID(dst_isid, src_isid) { \ int i, j; \ for (i = 0, j = ISID_SIZE - 1; i < ISID_SIZE;) \ reverted: --- linux-azure-4.15.0/drivers/scsi/qla4xxx/ql4_os.c +++ linux-azure-4.15.0.orig/drivers/scsi/qla4xxx/ql4_os.c @@ -262,24 +262,6 @@ static struct scsi_transport_template *qla4xxx_scsi_transport; -static int qla4xxx_isp_check_reg(struct scsi_qla_host *ha) -{ - u32 reg_val = 0; - int rval = QLA_SUCCESS; - - if (is_qla8022(ha)) - reg_val = readl(&ha->qla4_82xx_reg->host_status); - else if (is_qla8032(ha) || is_qla8042(ha)) - reg_val = qla4_8xxx_rd_direct(ha, QLA8XXX_PEG_ALIVE_COUNTER); - else - reg_val = readw(&ha->reg->ctrl_status); - - if (reg_val == QL4_ISP_REG_DISCONNECT) - rval = QLA_ERROR; - - return rval; -} - static int qla4xxx_send_ping(struct Scsi_Host *shost, uint32_t iface_num, uint32_t iface_type, uint32_t payload_size, uint32_t pid, struct sockaddr *dst_addr) @@ -9206,17 +9188,10 @@ struct srb *srb = NULL; int ret = SUCCESS; int wait = 0; - int rval; ql4_printk(KERN_INFO, ha, "scsi%ld:%d:%llu: Abort command issued cmd=%p, cdb=0x%x\n", ha->host_no, id, lun, cmd, cmd->cmnd[0]); - rval = qla4xxx_isp_check_reg(ha); - if (rval != QLA_SUCCESS) { - ql4_printk(KERN_INFO, ha, "PCI/Register disconnect, exiting.\n"); - return FAILED; - } - spin_lock_irqsave(&ha->hardware_lock, flags); srb = (struct srb *) CMD_SP(cmd); if (!srb) { @@ -9268,7 +9243,6 @@ struct scsi_qla_host *ha = to_qla_host(cmd->device->host); struct ddb_entry *ddb_entry = cmd->device->hostdata; int ret = FAILED, stat; - int rval; if (!ddb_entry) return 
ret; @@ -9288,12 +9262,6 @@ cmd, jiffies, cmd->request->timeout / HZ, ha->dpc_flags, cmd->result, cmd->allowed)); - rval = qla4xxx_isp_check_reg(ha); - if (rval != QLA_SUCCESS) { - ql4_printk(KERN_INFO, ha, "PCI/Register disconnect, exiting.\n"); - return FAILED; - } - /* FIXME: wait for hba to go online */ stat = qla4xxx_reset_lun(ha, ddb_entry, cmd->device->lun); if (stat != QLA_SUCCESS) { @@ -9337,7 +9305,6 @@ struct scsi_qla_host *ha = to_qla_host(cmd->device->host); struct ddb_entry *ddb_entry = cmd->device->hostdata; int stat, ret; - int rval; if (!ddb_entry) return FAILED; @@ -9355,12 +9322,6 @@ ha->host_no, cmd, jiffies, cmd->request->timeout / HZ, ha->dpc_flags, cmd->result, cmd->allowed)); - rval = qla4xxx_isp_check_reg(ha); - if (rval != QLA_SUCCESS) { - ql4_printk(KERN_INFO, ha, "PCI/Register disconnect, exiting.\n"); - return FAILED; - } - stat = qla4xxx_reset_target(ha, ddb_entry); if (stat != QLA_SUCCESS) { starget_printk(KERN_INFO, scsi_target(cmd->device), @@ -9415,16 +9376,9 @@ { int return_status = FAILED; struct scsi_qla_host *ha; - int rval; ha = to_qla_host(cmd->device->host); - rval = qla4xxx_isp_check_reg(ha); - if (rval != QLA_SUCCESS) { - ql4_printk(KERN_INFO, ha, "PCI/Register disconnect, exiting.\n"); - return FAILED; - } - if ((is_qla8032(ha) || is_qla8042(ha)) && ql4xdontresethba) qla4_83xx_set_idc_dontreset(ha); diff -u linux-azure-4.15.0/drivers/scsi/scsi_lib.c linux-azure-4.15.0/drivers/scsi/scsi_lib.c --- linux-azure-4.15.0/drivers/scsi/scsi_lib.c +++ linux-azure-4.15.0/drivers/scsi/scsi_lib.c @@ -855,17 +855,6 @@ /* for passthrough error may be set */ error = BLK_STS_OK; } - /* - * Another corner case: the SCSI status byte is non-zero but 'good'. - * Example: PRE-FETCH command returns SAM_STAT_CONDITION_MET when - * it is able to fit nominated LBs in its cache (and SAM_STAT_GOOD - * if it can't fit). Treat SAM_STAT_CONDITION_MET and the related - * intermediate statuses (both obsolete in SAM-4) as good. 
- */ - if (status_byte(result) && scsi_status_is_good(result)) { - result = 0; - error = BLK_STS_OK; - } /* * special case: failed zero length commands always need to reverted: --- linux-azure-4.15.0/drivers/scsi/sd.c +++ linux-azure-4.15.0.orig/drivers/scsi/sd.c @@ -2152,8 +2152,6 @@ break; /* standby */ if (sshdr.asc == 4 && sshdr.ascq == 0xc) break; /* unavailable */ - if (sshdr.asc == 4 && sshdr.ascq == 0x1b) - break; /* sanitize in progress */ /* * Issue command to spin up drive when not ready */ @@ -2628,7 +2626,6 @@ int res; struct scsi_device *sdp = sdkp->device; struct scsi_mode_data data; - int disk_ro = get_disk_ro(sdkp->disk); int old_wp = sdkp->write_prot; set_disk_ro(sdkp->disk, 0); @@ -2669,7 +2666,7 @@ "Test WP failed, assume Write Enabled\n"); } else { sdkp->write_prot = ((data.device_specific & 0x80) != 0); + set_disk_ro(sdkp->disk, sdkp->write_prot); - set_disk_ro(sdkp->disk, sdkp->write_prot || disk_ro); if (sdkp->first_scan || old_wp != sdkp->write_prot) { sd_printk(KERN_NOTICE, sdkp, "Write Protect is %s\n", sdkp->write_prot ? "on" : "off"); reverted: --- linux-azure-4.15.0/drivers/scsi/sg.c +++ linux-azure-4.15.0.orig/drivers/scsi/sg.c @@ -1894,7 +1894,7 @@ num = (rem_sz > scatter_elem_sz_prev) ? 
scatter_elem_sz_prev : rem_sz; + schp->pages[k] = alloc_pages(gfp_mask, order); - schp->pages[k] = alloc_pages(gfp_mask | __GFP_ZERO, order); if (!schp->pages[k]) goto out; reverted: --- linux-azure-4.15.0/drivers/scsi/sr_ioctl.c +++ linux-azure-4.15.0.orig/drivers/scsi/sr_ioctl.c @@ -188,13 +188,9 @@ struct scsi_device *SDev; struct scsi_sense_hdr sshdr; int result, err = 0, retries = 0; - unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE], *senseptr = NULL; SDev = cd->device; - if (cgc->sense) - senseptr = sense_buffer; - retry: if (!scsi_block_when_processing_errors(SDev)) { err = -ENODEV; @@ -202,12 +198,10 @@ } result = scsi_execute(SDev, cgc->cmd, cgc->data_direction, + cgc->buffer, cgc->buflen, + (unsigned char *)cgc->sense, &sshdr, - cgc->buffer, cgc->buflen, senseptr, &sshdr, cgc->timeout, IOCTL_RETRIES, 0, 0, NULL); - if (cgc->sense) - memcpy(cgc->sense, sense_buffer, sizeof(*cgc->sense)); - /* Minimal error checking. Ignore cases we know about, and report the rest. */ if (driver_byte(result) != 0) { switch (sshdr.sense_key) { reverted: --- linux-azure-4.15.0/drivers/scsi/sym53c8xx_2/sym_hipd.c +++ linux-azure-4.15.0.orig/drivers/scsi/sym53c8xx_2/sym_hipd.c @@ -536,7 +536,7 @@ * Look for the greatest clock divisor that allows an * input speed faster than the period. 
*/ + while (div-- > 0) - while (--div > 0) if (kpc >= (div_10M[div] << 2)) break; /* reverted: --- linux-azure-4.15.0/drivers/scsi/ufs/ufshcd.c +++ linux-azure-4.15.0.orig/drivers/scsi/ufs/ufshcd.c @@ -4352,8 +4352,6 @@ /* REPORT SUPPORTED OPERATION CODES is not supported */ sdev->no_report_opcodes = 1; - /* WRITE_SAME command is not supported */ - sdev->no_write_same = 1; ufshcd_set_queue_depth(sdev); reverted: --- linux-azure-4.15.0/drivers/spi/spi-bcm-qspi.c +++ linux-azure-4.15.0.orig/drivers/spi/spi-bcm-qspi.c @@ -490,7 +490,7 @@ static void bcm_qspi_enable_bspi(struct bcm_qspi *qspi) { + if (!has_bspi(qspi) || (qspi->bspi_enabled)) - if (!has_bspi(qspi)) return; qspi->bspi_enabled = 1; @@ -505,7 +505,7 @@ static void bcm_qspi_disable_bspi(struct bcm_qspi *qspi) { + if (!has_bspi(qspi) || (!qspi->bspi_enabled)) - if (!has_bspi(qspi)) return; qspi->bspi_enabled = 0; @@ -519,19 +519,16 @@ static void bcm_qspi_chip_select(struct bcm_qspi *qspi, int cs) { + u32 data = 0; - u32 rd = 0; - u32 wr = 0; + if (qspi->curr_cs == cs) + return; if (qspi->base[CHIP_SELECT]) { + data = bcm_qspi_read(qspi, CHIP_SELECT, 0); + data = (data & ~0xff) | (1 << cs); + bcm_qspi_write(qspi, CHIP_SELECT, 0, data); - rd = bcm_qspi_read(qspi, CHIP_SELECT, 0); - wr = (rd & ~0xff) | (1 << cs); - if (rd == wr) - return; - bcm_qspi_write(qspi, CHIP_SELECT, 0, wr); usleep_range(10, 20); } - - dev_dbg(&qspi->pdev->dev, "using cs:%d\n", cs); qspi->curr_cs = cs; } @@ -758,13 +755,8 @@ dev_dbg(&qspi->pdev->dev, "WR %04x\n", val); } mspi_cdram = MSPI_CDRAM_CONT_BIT; + mspi_cdram |= (~(1 << spi->chip_select) & + MSPI_CDRAM_PCS); - - if (has_bspi(qspi)) - mspi_cdram &= ~1; - else - mspi_cdram |= (~(1 << spi->chip_select) & - MSPI_CDRAM_PCS); - mspi_cdram |= ((tp.trans->bits_per_word <= 8) ? 
0 : MSPI_CDRAM_BITSE_BIT); reverted: --- linux-azure-4.15.0/drivers/spi/spi-pxa2xx.h +++ linux-azure-4.15.0.orig/drivers/spi/spi-pxa2xx.h @@ -38,7 +38,7 @@ /* SSP register addresses */ void __iomem *ioaddr; + u32 ssdr_physical; - phys_addr_t ssdr_physical; /* SSP masks*/ u32 dma_cr1; diff -u linux-azure-4.15.0/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c linux-azure-4.15.0/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c --- linux-azure-4.15.0/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c +++ linux-azure-4.15.0/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c @@ -322,7 +322,7 @@ } fd = dpaa2_dq_fd(dq); - fq = (struct dpaa2_eth_fq *)(uintptr_t)dpaa2_dq_fqd_ctx(dq); + fq = (struct dpaa2_eth_fq *)dpaa2_dq_fqd_ctx(dq); fq->stats.frames++; fq->consume(priv, ch, fd, &ch->napi); @@ -373,12 +373,12 @@ /* Prepare the HW SGT structure */ sgt_buf_size = priv->tx_data_offset + sizeof(struct dpaa2_sg_entry) * (1 + num_dma_bufs); - sgt_buf = netdev_alloc_frag(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN); + sgt_buf = kzalloc(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN, GFP_ATOMIC); if (unlikely(!sgt_buf)) { err = -ENOMEM; goto sgt_buf_alloc_failed; } - memset(sgt_buf, 0, sgt_buf_size); + sgt_buf = PTR_ALIGN(sgt_buf, DPAA2_ETH_TX_BUF_ALIGN); /* PTA from egress side is passed as is to the confirmation side so * we need to clear some fields here in order to find consistent values @@ -430,7 +430,7 @@ return 0; dma_map_single_failed: - skb_free_frag(sgt_buf); + kfree(sgt_buf); sgt_buf_alloc_failed: dma_unmap_sg(dev, scl, num_sg, DMA_BIDIRECTIONAL); dma_map_sg_failed: @@ -550,9 +550,9 @@ if (status) *status = le32_to_cpu(fas->status); - /* Free SGT buffer allocated on tx */ + /* Free SGT buffer kmalloc'ed on tx */ if (fd_format != dpaa2_fd_single) - skb_free_frag(skbh); + kfree(skbh); /* Move on with skb release */ dev_kfree_skb(skb); @@ -1924,7 +1924,7 @@ queue.destination.id = fq->channel->dpcon_id; queue.destination.type = DPNI_DEST_DPCON; queue.destination.priority = 1; - queue.user_context = 
(u64)(uintptr_t)fq; + queue.user_context = (u64)fq; err = dpni_set_queue(priv->mc_io, 0, priv->mc_token, DPNI_QUEUE_RX, 0, fq->flowid, DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST, @@ -1976,7 +1976,7 @@ queue.destination.id = fq->channel->dpcon_id; queue.destination.type = DPNI_DEST_DPCON; queue.destination.priority = 0; - queue.user_context = (u64)(uintptr_t)fq; + queue.user_context = (u64)fq; err = dpni_set_queue(priv->mc_io, 0, priv->mc_token, DPNI_QUEUE_TX_CONFIRM, 0, fq->flowid, DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST, reverted: --- linux-azure-4.15.0/drivers/staging/ks7010/ks_hostif.c +++ linux-azure-4.15.0.orig/drivers/staging/ks7010/ks_hostif.c @@ -242,8 +242,9 @@ offset = 0; while (bsize > offset) { + /* DPRINTK(4, "Element ID=%d\n",*bp); */ + switch (*bp) { + case 0: /* ssid */ - switch (*bp) { /* Information Element ID */ - case WLAN_EID_SSID: if (*(bp + 1) <= SSID_MAX_SIZE) { ap->ssid.size = *(bp + 1); } else { @@ -253,8 +254,8 @@ } memcpy(ap->ssid.body, bp + 2, ap->ssid.size); break; + case 1: /* rate */ + case 50: /* ext rate */ - case WLAN_EID_SUPP_RATES: - case WLAN_EID_EXT_SUPP_RATES: if ((*(bp + 1) + ap->rate_set.size) <= RATE_SET_MAX_SIZE) { memcpy(&ap->rate_set.body[ap->rate_set.size], @@ -270,9 +271,9 @@ (RATE_SET_MAX_SIZE - ap->rate_set.size); } break; + case 3: /* DS parameter */ - case WLAN_EID_DS_PARAMS: break; + case 48: /* RSN(WPA2) */ - case WLAN_EID_RSN: ap->rsn_ie.id = *bp; if (*(bp + 1) <= RSN_IE_BODY_MAX) { ap->rsn_ie.size = *(bp + 1); @@ -283,8 +284,8 @@ } memcpy(ap->rsn_ie.body, bp + 2, ap->rsn_ie.size); break; + case 221: /* WPA */ + if (memcmp(bp + 2, "\x00\x50\xf2\x01", 4) == 0) { /* WPA OUI check */ - case WLAN_EID_VENDOR_SPECIFIC: /* WPA */ - if (memcmp(bp + 2, "\x00\x50\xf2\x01", 4) == 0) { /* WPA OUI check */ ap->wpa_ie.id = *bp; if (*(bp + 1) <= RSN_IE_BODY_MAX) { ap->wpa_ie.size = *(bp + 1); @@ -299,18 +300,18 @@ } break; + case 2: /* FH parameter */ + case 4: /* CF parameter */ + case 5: /* TIM */ + case 6: /* IBSS 
parameter */ + case 7: /* Country */ + case 42: /* ERP information */ + case 47: /* Reserve ID 47 Broadcom AP */ - case WLAN_EID_FH_PARAMS: - case WLAN_EID_CF_PARAMS: - case WLAN_EID_TIM: - case WLAN_EID_IBSS_PARAMS: - case WLAN_EID_COUNTRY: - case WLAN_EID_ERP_INFO: break; default: DPRINTK(4, "unknown Element ID=%d\n", *bp); break; } - offset += 2; /* id & size field */ offset += *(bp + 1); /* +size offset */ bp += (*(bp + 1) + 2); /* pointer update */ reverted: --- linux-azure-4.15.0/drivers/staging/ks7010/ks_hostif.h +++ linux-azure-4.15.0.orig/drivers/staging/ks7010/ks_hostif.h @@ -13,7 +13,6 @@ #define _KS_HOSTIF_H_ #include -#include /* * HOST-MAC I/F events reverted: --- linux-azure-4.15.0/drivers/staging/lustre/lustre/include/obd.h +++ linux-azure-4.15.0.orig/drivers/staging/lustre/lustre/include/obd.h @@ -191,7 +191,7 @@ struct sptlrpc_flavor cl_flvr_mgc; /* fixed flavor of mgc->mgs */ /* the grant values are protected by loi_list_lock below */ + unsigned long cl_dirty_pages; /* all _dirty_ in pahges */ - unsigned long cl_dirty_pages; /* all _dirty_ in pages */ unsigned long cl_dirty_max_pages; /* allowed w/o rpc */ unsigned long cl_dirty_transit; /* dirty synchronous */ unsigned long cl_avail_grant; /* bytes of credit for ost */ reverted: --- linux-azure-4.15.0/drivers/staging/lustre/lustre/lmv/lmv_obd.c +++ linux-azure-4.15.0.orig/drivers/staging/lustre/lustre/lmv/lmv_obd.c @@ -2695,7 +2695,7 @@ if (lsm && !lmm) { int i; + for (i = 1; i < lsm->lsm_md_stripe_count; i++) { - for (i = 0; i < lsm->lsm_md_stripe_count; i++) { /* * For migrating inode, the master stripe and master * object will be the same, so do not need iput, see reverted: --- linux-azure-4.15.0/drivers/staging/lustre/lustre/osc/osc_cache.c +++ linux-azure-4.15.0.orig/drivers/staging/lustre/lustre/osc/osc_cache.c @@ -1530,7 +1530,7 @@ if (rc < 0) return 0; + if (cli->cl_dirty_pages <= cli->cl_dirty_max_pages && - if (cli->cl_dirty_pages < cli->cl_dirty_max_pages && 
atomic_long_read(&obd_dirty_pages) + 1 <= obd_max_dirty_pages) { osc_consume_write_grant(cli, &oap->oap_brw_page); if (transient) { reverted: --- linux-azure-4.15.0/drivers/staging/rtl8192u/r8192U_core.c +++ linux-azure-4.15.0.orig/drivers/staging/rtl8192u/r8192U_core.c @@ -1706,8 +1706,6 @@ priv->rx_urb[16] = usb_alloc_urb(0, GFP_KERNEL); priv->oldaddr = kmalloc(16, GFP_KERNEL); - if (!priv->oldaddr) - return -ENOMEM; oldaddr = priv->oldaddr; align = ((long)oldaddr) & 3; if (align) { reverted: --- linux-azure-4.15.0/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c +++ linux-azure-4.15.0.orig/drivers/staging/vc04_services/bcm2835-audio/bcm2835.c @@ -36,10 +36,6 @@ static void snd_devm_unregister_child(struct device *dev, void *res) { struct device *childdev = *(struct device **)res; - struct bcm2835_chip *chip = dev_get_drvdata(childdev); - struct snd_card *card = chip->card; - - snd_card_free(card); device_unregister(childdev); } @@ -65,13 +61,6 @@ return 0; } -static void snd_bcm2835_release(struct device *dev) -{ - struct bcm2835_chip *chip = dev_get_drvdata(dev); - - kfree(chip); -} - static struct device * snd_create_device(struct device *parent, struct device_driver *driver, @@ -87,7 +76,6 @@ device_initialize(device); device->parent = parent; device->driver = driver; - device->release = snd_bcm2835_release; dev_set_name(device, "%s", name); @@ -98,19 +86,18 @@ return device; } +static int snd_bcm2835_free(struct bcm2835_chip *chip) +{ + kfree(chip); + return 0; +} + /* component-destructor * (see "Management of Cards and Components") */ static int snd_bcm2835_dev_free(struct snd_device *device) { + return snd_bcm2835_free(device->device_data); - struct bcm2835_chip *chip = device->device_data; - struct snd_card *card = chip->card; - - /* TODO: free pcm, ctl */ - - snd_device_free(card, chip); - - return 0; } /* chip-specific constructor @@ -135,7 +122,7 @@ err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops); if (err) { + snd_bcm2835_free(chip); - 
kfree(chip); return err; } @@ -143,14 +130,31 @@ return 0; } +static void snd_devm_card_free(struct device *dev, void *res) -static struct snd_card *snd_bcm2835_card_new(struct device *dev) { + struct snd_card *snd_card = *(struct snd_card **)res; + + snd_card_free(snd_card); +} + +static struct snd_card *snd_devm_card_new(struct device *dev) +{ + struct snd_card **dr; struct snd_card *card; int ret; + dr = devres_alloc(snd_devm_card_free, sizeof(*dr), GFP_KERNEL); + if (!dr) + return ERR_PTR(-ENOMEM); + ret = snd_card_new(dev, -1, NULL, THIS_MODULE, 0, &card); + if (ret) { + devres_free(dr); - if (ret) return ERR_PTR(ret); + } + + *dr = card; + devres_add(dev, dr); return card; } @@ -267,7 +271,7 @@ return PTR_ERR(child); } + card = snd_devm_card_new(child); - card = snd_bcm2835_card_new(child); if (IS_ERR(card)) { dev_err(child, "Failed to create card"); return PTR_ERR(card); @@ -309,7 +313,7 @@ return err; } + dev_set_drvdata(child, card); - dev_set_drvdata(child, chip); dev_info(child, "card created with %d channels\n", numchans); return 0; reverted: --- linux-azure-4.15.0/drivers/target/target_core_iblock.c +++ linux-azure-4.15.0.orig/drivers/target/target_core_iblock.c @@ -427,8 +427,8 @@ { struct se_device *dev = cmd->se_dev; struct scatterlist *sg = &cmd->t_data_sg[0]; + unsigned char *buf, zero = 0x00, *p = &zero; + int rc, ret; - unsigned char *buf, *not_zero; - int ret; buf = kmap(sg_page(sg)) + sg->offset; if (!buf) @@ -437,10 +437,10 @@ * Fall back to block_execute_write_same() slow-path if * incoming WRITE_SAME payload does not contain zeros. 
*/ + rc = memcmp(buf, p, cmd->data_length); - not_zero = memchr_inv(buf, 0x00, cmd->data_length); kunmap(sg_page(sg)); + if (rc) - if (not_zero) return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; ret = blkdev_issue_zeroout(bdev, reverted: --- linux-azure-4.15.0/drivers/tee/tee_shm.c +++ linux-azure-4.15.0.orig/drivers/tee/tee_shm.c @@ -203,10 +203,9 @@ if ((shm->flags & req_flags) != req_flags) return -EINVAL; - get_dma_buf(shm->dmabuf); fd = dma_buf_fd(shm->dmabuf, O_CLOEXEC); + if (fd >= 0) + get_dma_buf(shm->dmabuf); - if (fd < 0) - dma_buf_put(shm->dmabuf); return fd; } reverted: --- linux-azure-4.15.0/drivers/thermal/samsung/exynos_tmu.c +++ linux-azure-4.15.0.orig/drivers/thermal/samsung/exynos_tmu.c @@ -185,7 +185,6 @@ * @regulator: pointer to the TMU regulator structure. * @reg_conf: pointer to structure to register with core thermal. * @ntrip: number of supported trip points. - * @enabled: current status of TMU device * @tmu_initialize: SoC specific TMU initialization method * @tmu_control: SoC specific TMU control method * @tmu_read: SoC specific TMU temperature read method @@ -206,7 +205,6 @@ struct regulator *regulator; struct thermal_zone_device *tzd; unsigned int ntrip; - bool enabled; int (*tmu_initialize)(struct platform_device *pdev); void (*tmu_control)(struct platform_device *pdev, bool on); @@ -400,7 +398,6 @@ mutex_lock(&data->lock); clk_enable(data->clk); data->tmu_control(pdev, on); - data->enabled = on; clk_disable(data->clk); mutex_unlock(&data->lock); } @@ -892,24 +889,19 @@ static int exynos_get_temp(void *p, int *temp) { struct exynos_tmu_data *data = p; - int value, ret = 0; + if (!data || !data->tmu_read) - if (!data || !data->tmu_read || !data->enabled) return -EINVAL; mutex_lock(&data->lock); clk_enable(data->clk); + *temp = code_to_temp(data, data->tmu_read(data)) * MCELSIUS; - value = data->tmu_read(data); - if (value < 0) - ret = value; - else - *temp = code_to_temp(data, value) * MCELSIUS; clk_disable(data->clk); 
mutex_unlock(&data->lock); + return 0; - return ret; } #ifdef CONFIG_THERMAL_EMULATION diff -u linux-azure-4.15.0/drivers/tty/n_gsm.c linux-azure-4.15.0/drivers/tty/n_gsm.c --- linux-azure-4.15.0/drivers/tty/n_gsm.c +++ linux-azure-4.15.0/drivers/tty/n_gsm.c @@ -121,9 +121,6 @@ struct mutex mutex; /* Link layer */ - int mode; -#define DLCI_MODE_ABM 0 /* Normal Asynchronous Balanced Mode */ -#define DLCI_MODE_ADM 1 /* Asynchronous Disconnected Mode */ spinlock_t lock; /* Protects the internal state */ struct timer_list t1; /* Retransmit timer for SABM and UA */ int retries; @@ -1367,13 +1364,7 @@ ctrl->data = data; ctrl->len = clen; gsm->pending_cmd = ctrl; - - /* If DLCI0 is in ADM mode skip retries, it won't respond */ - if (gsm->dlci[0]->mode == DLCI_MODE_ADM) - gsm->cretries = 1; - else - gsm->cretries = gsm->n2; - + gsm->cretries = gsm->n2; mod_timer(&gsm->t2_timer, jiffies + gsm->t2 * HZ / 100); gsm_control_transmit(gsm, ctrl); spin_unlock_irqrestore(&gsm->control_lock, flags); @@ -1481,7 +1472,6 @@ if (debug & 8) pr_info("DLCI %d opening in ADM mode.\n", dlci->addr); - dlci->mode = DLCI_MODE_ADM; gsm_dlci_open(dlci); } else { gsm_dlci_close(dlci); @@ -2871,22 +2861,11 @@ static int gsm_carrier_raised(struct tty_port *port) { struct gsm_dlci *dlci = container_of(port, struct gsm_dlci, port); - struct gsm_mux *gsm = dlci->gsm; - /* Not yet open so no carrier info */ if (dlci->state != DLCI_OPEN) return 0; if (debug & 2) return 1; - - /* - * Basic mode with control channel in ADM mode may not respond - * to CMD_MSC at all and modem_rx is empty. 
- */ - if (gsm->encoding == 0 && gsm->dlci[0]->mode == DLCI_MODE_ADM && - !dlci->modem_rx) - return 1; - return dlci->modem_rx & TIOCM_CD; } diff -u linux-azure-4.15.0/drivers/tty/serial/8250/8250_port.c linux-azure-4.15.0/drivers/tty/serial/8250/8250_port.c --- linux-azure-4.15.0/drivers/tty/serial/8250/8250_port.c +++ linux-azure-4.15.0/drivers/tty/serial/8250/8250_port.c @@ -1867,8 +1867,7 @@ status = serial_port_in(port, UART_LSR); - if (status & (UART_LSR_DR | UART_LSR_BI) && - iir & UART_IIR_RDI) { + if (status & (UART_LSR_DR | UART_LSR_BI)) { if (!up->dma || handle_rx_dma(up, iir)) status = serial8250_rx_chars(up, status); } reverted: --- linux-azure-4.15.0/drivers/tty/serial/altera_uart.c +++ linux-azure-4.15.0.orig/drivers/tty/serial/altera_uart.c @@ -327,7 +327,7 @@ /* Enable RX interrupts now */ pp->imr = ALTERA_UART_CONTROL_RRDY_MSK; + writel(pp->imr, port->membase + ALTERA_UART_CONTROL_REG); - altera_uart_writel(port, pp->imr, ALTERA_UART_CONTROL_REG); spin_unlock_irqrestore(&port->lock, flags); @@ -343,7 +343,7 @@ /* Disable all interrupts now */ pp->imr = 0; + writel(pp->imr, port->membase + ALTERA_UART_CONTROL_REG); - altera_uart_writel(port, pp->imr, ALTERA_UART_CONTROL_REG); spin_unlock_irqrestore(&port->lock, flags); @@ -432,7 +432,7 @@ ALTERA_UART_STATUS_TRDY_MSK)) cpu_relax(); + writel(c, port->membase + ALTERA_UART_TXDATA_REG); - altera_uart_writel(port, c, ALTERA_UART_TXDATA_REG); } static void altera_uart_console_write(struct console *co, const char *s, @@ -502,13 +502,13 @@ return -ENODEV; /* Enable RX interrupts now */ + writel(ALTERA_UART_CONTROL_RRDY_MSK, + port->membase + ALTERA_UART_CONTROL_REG); - altera_uart_writel(port, ALTERA_UART_CONTROL_RRDY_MSK, - ALTERA_UART_CONTROL_REG); if (dev->baud) { unsigned int baudclk = port->uartclk / dev->baud; + writel(baudclk, port->membase + ALTERA_UART_DIVISOR_REG); - altera_uart_writel(port, baudclk, ALTERA_UART_DIVISOR_REG); } dev->con->write = altera_uart_earlycon_write; reverted: --- 
linux-azure-4.15.0/drivers/tty/serial/arc_uart.c +++ linux-azure-4.15.0.orig/drivers/tty/serial/arc_uart.c @@ -593,11 +593,6 @@ if (dev_id < 0) dev_id = 0; - if (dev_id >= ARRAY_SIZE(arc_uart_ports)) { - dev_err(&pdev->dev, "serial%d out of range\n", dev_id); - return -EINVAL; - } - uart = &arc_uart_ports[dev_id]; port = &uart->port; diff -u linux-azure-4.15.0/drivers/tty/serial/earlycon.c linux-azure-4.15.0/drivers/tty/serial/earlycon.c --- linux-azure-4.15.0/drivers/tty/serial/earlycon.c +++ linux-azure-4.15.0/drivers/tty/serial/earlycon.c @@ -169,7 +169,7 @@ */ int __init setup_earlycon(char *buf) { - const struct earlycon_id **p_match; + const struct earlycon_id *match; if (!buf || !buf[0]) return -EINVAL; @@ -177,9 +177,7 @@ if (early_con.flags & CON_ENABLED) return -EALREADY; - for (p_match = __earlycon_table; p_match < __earlycon_table_end; - p_match++) { - const struct earlycon_id *match = *p_match; + for (match = __earlycon_table; match < __earlycon_table_end; match++) { size_t len = strlen(match->name); if (strncmp(buf, match->name, len)) reverted: --- linux-azure-4.15.0/drivers/tty/serial/fsl_lpuart.c +++ linux-azure-4.15.0.orig/drivers/tty/serial/fsl_lpuart.c @@ -2145,10 +2145,6 @@ dev_err(&pdev->dev, "failed to get alias id, errno %d\n", ret); return ret; } - if (ret >= ARRAY_SIZE(lpuart_ports)) { - dev_err(&pdev->dev, "serial%d out of range\n", ret); - return -EINVAL; - } sport->port.line = ret; res = platform_get_resource(pdev, IORESOURCE_MEM, 0); sport->port.membase = devm_ioremap_resource(&pdev->dev, res); diff -u linux-azure-4.15.0/drivers/tty/serial/imx.c linux-azure-4.15.0/drivers/tty/serial/imx.c --- linux-azure-4.15.0/drivers/tty/serial/imx.c +++ linux-azure-4.15.0/drivers/tty/serial/imx.c @@ -2062,12 +2062,6 @@ else if (ret < 0) return ret; - if (sport->port.line >= ARRAY_SIZE(imx_ports)) { - dev_err(&pdev->dev, "serial%d out of range\n", - sport->port.line); - return -EINVAL; - } - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); base = 
devm_ioremap_resource(&pdev->dev, res); if (IS_ERR(base)) reverted: --- linux-azure-4.15.0/drivers/tty/serial/mvebu-uart.c +++ linux-azure-4.15.0.orig/drivers/tty/serial/mvebu-uart.c @@ -495,6 +495,7 @@ termios->c_iflag |= old->c_iflag & ~(INPCK | IGNPAR); termios->c_cflag &= CREAD | CBAUD; termios->c_cflag |= old->c_cflag & ~(CREAD | CBAUD); + termios->c_lflag = old->c_lflag; } spin_unlock_irqrestore(&port->lock, flags); @@ -617,7 +618,7 @@ u32 val; readl_poll_timeout_atomic(port->membase + UART_STAT, val, + (val & STAT_TX_EMP), 1, 10000); - (val & STAT_TX_RDY(port)), 1, 10000); } static void mvebu_uart_console_putchar(struct uart_port *port, int ch) reverted: --- linux-azure-4.15.0/drivers/tty/serial/mxs-auart.c +++ linux-azure-4.15.0.orig/drivers/tty/serial/mxs-auart.c @@ -1664,10 +1664,6 @@ s->port.line = pdev->id < 0 ? 0 : pdev->id; else if (ret < 0) return ret; - if (s->port.line >= ARRAY_SIZE(auart_port)) { - dev_err(&pdev->dev, "serial%d out of range\n", s->port.line); - return -EINVAL; - } if (of_id) { pdev->id_entry = of_id->data; reverted: --- linux-azure-4.15.0/drivers/tty/serial/samsung.c +++ linux-azure-4.15.0.orig/drivers/tty/serial/samsung.c @@ -1818,10 +1818,6 @@ dbg("s3c24xx_serial_probe(%p) %d\n", pdev, index); - if (index >= ARRAY_SIZE(s3c24xx_serial_ports)) { - dev_err(&pdev->dev, "serial%d out of range\n", index); - return -EINVAL; - } ourport = &s3c24xx_serial_ports[index]; ourport->drv_data = s3c24xx_get_driver_data(pdev); diff -u linux-azure-4.15.0/drivers/tty/serial/sh-sci.c linux-azure-4.15.0/drivers/tty/serial/sh-sci.c --- linux-azure-4.15.0/drivers/tty/serial/sh-sci.c +++ linux-azure-4.15.0/drivers/tty/serial/sh-sci.c @@ -3098,10 +3098,6 @@ dev_err(&pdev->dev, "failed to get alias id (%d)\n", id); return NULL; } - if (id >= ARRAY_SIZE(sci_ports)) { - dev_err(&pdev->dev, "serial%d out of range\n", id); - return NULL; - } sp = &sci_ports[id]; *dev_id = id; reverted: --- linux-azure-4.15.0/drivers/tty/serial/xilinx_uartps.c +++ 
linux-azure-4.15.0.orig/drivers/tty/serial/xilinx_uartps.c @@ -1110,7 +1110,7 @@ struct uart_port *port; /* Try the given port id if failed use default method */ + if (cdns_uart_port[id].mapbase != 0) { - if (id < CDNS_UART_NR_PORTS && cdns_uart_port[id].mapbase != 0) { /* Find the next unused port */ for (id = 0; id < CDNS_UART_NR_PORTS; id++) if (cdns_uart_port[id].mapbase == 0) diff -u linux-azure-4.15.0/drivers/tty/tty_io.c linux-azure-4.15.0/drivers/tty/tty_io.c --- linux-azure-4.15.0/drivers/tty/tty_io.c +++ linux-azure-4.15.0/drivers/tty/tty_io.c @@ -2816,10 +2816,7 @@ kref_init(&tty->kref); tty->magic = TTY_MAGIC; - if (tty_ldisc_init(tty)) { - kfree(tty); - return NULL; - } + tty_ldisc_init(tty); tty->session = NULL; tty->pgrp = NULL; mutex_init(&tty->legacy_mutex); diff -u linux-azure-4.15.0/drivers/tty/tty_ldisc.c linux-azure-4.15.0/drivers/tty/tty_ldisc.c --- linux-azure-4.15.0/drivers/tty/tty_ldisc.c +++ linux-azure-4.15.0/drivers/tty/tty_ldisc.c @@ -176,11 +176,12 @@ return ERR_CAST(ldops); } - /* - * There is no way to handle allocation failure of only 16 bytes. - * Let's simplify error handling and save more memory. 
-	 */
-	ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL | __GFP_NOFAIL);
+	ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL);
+	if (ld == NULL) {
+		put_ldops(ldops);
+		return ERR_PTR(-ENOMEM);
+	}
+
 	ld->ops = ldops;
 	ld->tty = tty;
@@ -526,16 +527,19 @@
 static void tty_ldisc_restore(struct tty_struct *tty, struct tty_ldisc *old)
 {
 	/* There is an outstanding reference here so this is safe */
-	if (tty_ldisc_failto(tty, old->ops->num) < 0) {
-		const char *name = tty_name(tty);
-
-		pr_warn("Falling back ldisc for %s.\n", name);
+	old = tty_ldisc_get(tty, old->ops->num);
+	WARN_ON(IS_ERR(old));
+	tty->ldisc = old;
+	tty_set_termios_ldisc(tty, old->ops->num);
+	if (tty_ldisc_open(tty, old) < 0) {
+		tty_ldisc_put(old);
 		/* The traditional behaviour is to fall back to N_TTY, we
 		   want to avoid falling back to N_NULL unless we have no
 		   choice to avoid the risk of breaking anything */
 		if (tty_ldisc_failto(tty, N_TTY) < 0 &&
 		    tty_ldisc_failto(tty, N_NULL) < 0)
-			panic("Couldn't open N_NULL ldisc for %s.", name);
+			panic("Couldn't open N_NULL ldisc for %s.",
+			      tty_name(tty));
 	}
 }
@@ -820,13 +824,12 @@
 * the tty structure is not completely set up when this call is made.
*/ -int tty_ldisc_init(struct tty_struct *tty) +void tty_ldisc_init(struct tty_struct *tty) { struct tty_ldisc *ld = tty_ldisc_get(tty, N_TTY); if (IS_ERR(ld)) - return PTR_ERR(ld); + panic("n_tty: init_tty"); tty->ldisc = ld; - return 0; } /** diff -u linux-azure-4.15.0/drivers/usb/class/cdc-acm.c linux-azure-4.15.0/drivers/usb/class/cdc-acm.c --- linux-azure-4.15.0/drivers/usb/class/cdc-acm.c +++ linux-azure-4.15.0/drivers/usb/class/cdc-acm.c @@ -174,7 +174,6 @@ wb = &acm->wb[wbn]; if (!wb->use) { wb->use = 1; - wb->len = 0; return wbn; } wbn = (wbn + 1) % ACM_NW; @@ -806,18 +805,16 @@ static void acm_tty_flush_chars(struct tty_struct *tty) { struct acm *acm = tty->driver_data; - struct acm_wb *cur; + struct acm_wb *cur = acm->putbuffer; int err; unsigned long flags; - spin_lock_irqsave(&acm->write_lock, flags); - - cur = acm->putbuffer; if (!cur) /* nothing to do */ - goto out; + return; acm->putbuffer = NULL; err = usb_autopm_get_interface_async(acm->control); + spin_lock_irqsave(&acm->write_lock, flags); if (err < 0) { cur->use = 0; acm->putbuffer = cur; reverted: --- linux-azure-4.15.0/drivers/usb/core/config.c +++ linux-azure-4.15.0.orig/drivers/usb/core/config.c @@ -191,9 +191,7 @@ static const unsigned short high_speed_maxpacket_maxes[4] = { [USB_ENDPOINT_XFER_CONTROL] = 64, [USB_ENDPOINT_XFER_ISOC] = 1024, + [USB_ENDPOINT_XFER_BULK] = 512, - - /* Bulk should be 512, but some devices use 1024: we will warn below */ - [USB_ENDPOINT_XFER_BULK] = 1024, [USB_ENDPOINT_XFER_INT] = 1024, }; static const unsigned short super_speed_maxpacket_maxes[4] = { reverted: --- linux-azure-4.15.0/drivers/usb/core/hcd.c +++ linux-azure-4.15.0.orig/drivers/usb/core/hcd.c @@ -2365,7 +2365,6 @@ spin_lock_irqsave (&hcd_root_hub_lock, flags); if (hcd->rh_registered) { - pm_wakeup_event(&hcd->self.root_hub->dev, 0); set_bit(HCD_FLAG_WAKEUP_PENDING, &hcd->flags); queue_work(pm_wq, &hcd->wakeup_work); } reverted: --- linux-azure-4.15.0/drivers/usb/core/hub.c +++ 
linux-azure-4.15.0.orig/drivers/usb/core/hub.c @@ -650,17 +650,12 @@ unsigned int portnum) { struct usb_hub *hub; - struct usb_port *port_dev; if (!hdev) return; hub = usb_hub_to_struct_hub(hdev); if (hub) { - port_dev = hub->ports[portnum - 1]; - if (port_dev && port_dev->child) - pm_wakeup_event(&port_dev->child->dev, 0); - set_bit(portnum, hub->wakeup_bits); kick_hub_wq(hub); } @@ -3420,11 +3415,8 @@ /* Skip the initial Clear-Suspend step for a remote wakeup */ status = hub_port_status(hub, port1, &portstatus, &portchange); + if (status == 0 && !port_is_suspended(hub, portstatus)) - if (status == 0 && !port_is_suspended(hub, portstatus)) { - if (portchange & USB_PORT_STAT_C_SUSPEND) - pm_wakeup_event(&udev->dev, 0); goto SuspendCleared; - } /* see 7.1.7.7; affects power usage, but not budgeting */ if (hub_is_superspeed(hub->hdev)) diff -u linux-azure-4.15.0/drivers/usb/core/quirks.c linux-azure-4.15.0/drivers/usb/core/quirks.c --- linux-azure-4.15.0/drivers/usb/core/quirks.c +++ linux-azure-4.15.0/drivers/usb/core/quirks.c @@ -186,9 +186,6 @@ { USB_DEVICE(0x03f0, 0x0701), .driver_info = USB_QUIRK_STRING_FETCH_255 }, - /* HP v222w 16GB Mini USB Drive */ - { USB_DEVICE(0x03f0, 0x3f40), .driver_info = USB_QUIRK_DELAY_INIT }, - /* Creative SB Audigy 2 NX */ { USB_DEVICE(0x041e, 0x3020), .driver_info = USB_QUIRK_RESET_RESUME }, reverted: --- linux-azure-4.15.0/drivers/usb/dwc2/core.h +++ linux-azure-4.15.0.orig/drivers/usb/dwc2/core.h @@ -217,7 +217,7 @@ unsigned char dir_in; unsigned char index; unsigned char mc; + unsigned char interval; - u16 interval; unsigned int halted:1; unsigned int periodic:1; reverted: --- linux-azure-4.15.0/drivers/usb/dwc2/gadget.c +++ linux-azure-4.15.0.orig/drivers/usb/dwc2/gadget.c @@ -3375,6 +3375,12 @@ dwc2_writel(dwc2_hsotg_ep0_mps(hsotg->eps_out[0]->ep.maxpacket) | DXEPCTL_USBACTEP, hsotg->regs + DIEPCTL0); + dwc2_hsotg_enqueue_setup(hsotg); + + dev_dbg(hsotg->dev, "EP0: DIEPCTL0=0x%08x, DOEPCTL0=0x%08x\n", + dwc2_readl(hsotg->regs 
+ DIEPCTL0), + dwc2_readl(hsotg->regs + DOEPCTL0)); + /* clear global NAKs */ val = DCTL_CGOUTNAK | DCTL_CGNPINNAK; if (!is_usb_reset) @@ -3385,12 +3391,6 @@ mdelay(3); hsotg->lx_state = DWC2_L0; - - dwc2_hsotg_enqueue_setup(hsotg); - - dev_dbg(hsotg->dev, "EP0: DIEPCTL0=0x%08x, DOEPCTL0=0x%08x\n", - dwc2_readl(hsotg->regs + DIEPCTL0), - dwc2_readl(hsotg->regs + DOEPCTL0)); } static void dwc2_hsotg_core_disconnect(struct dwc2_hsotg *hsotg) reverted: --- linux-azure-4.15.0/drivers/usb/dwc2/hcd.c +++ linux-azure-4.15.0.orig/drivers/usb/dwc2/hcd.c @@ -985,24 +985,6 @@ if (dbg_hc(chan)) dev_vdbg(hsotg->dev, "%s()\n", __func__); - - /* - * In buffer DMA or external DMA mode channel can't be halted - * for non-split periodic channels. At the end of the next - * uframe/frame (in the worst case), the core generates a channel - * halted and disables the channel automatically. - */ - if ((hsotg->params.g_dma && !hsotg->params.g_dma_desc) || - hsotg->hw_params.arch == GHWCFG2_EXT_DMA_ARCH) { - if (!chan->do_split && - (chan->ep_type == USB_ENDPOINT_XFER_ISOC || - chan->ep_type == USB_ENDPOINT_XFER_INT)) { - dev_err(hsotg->dev, "%s() Channel can't be halted\n", - __func__); - return; - } - } - if (halt_status == DWC2_HC_XFER_NO_HALT_STATUS) dev_err(hsotg->dev, "!!! halt_status = %d !!!\n", halt_status); @@ -2335,22 +2317,10 @@ */ static void dwc2_core_host_init(struct dwc2_hsotg *hsotg) { + u32 hcfg, hfir, otgctl; - u32 hcfg, hfir, otgctl, usbcfg; dev_dbg(hsotg->dev, "%s(%p)\n", __func__, hsotg); - /* Set HS/FS Timeout Calibration to 7 (max available value). - * The number of PHY clocks that the application programs in - * this field is added to the high/full speed interpacket timeout - * duration in the core to account for any additional delays - * introduced by the PHY. This can be required, because the delay - * introduced by the PHY in generating the linestate condition - * can vary from one PHY to another. 
- */ - usbcfg = dwc2_readl(hsotg->regs + GUSBCFG); - usbcfg |= GUSBCFG_TOUTCAL(7); - dwc2_writel(usbcfg, hsotg->regs + GUSBCFG); - /* Restart the Phy Clock */ dwc2_writel(0, hsotg->regs + PCGCTL); reverted: --- linux-azure-4.15.0/drivers/usb/dwc3/Makefile +++ linux-azure-4.15.0.orig/drivers/usb/dwc3/Makefile @@ -6,7 +6,7 @@ dwc3-y := core.o +ifneq ($(CONFIG_FTRACE),) -ifneq ($(CONFIG_TRACING),) dwc3-y += trace.o endif diff -u linux-azure-4.15.0/drivers/usb/dwc3/core.c linux-azure-4.15.0/drivers/usb/dwc3/core.c --- linux-azure-4.15.0/drivers/usb/dwc3/core.c +++ linux-azure-4.15.0/drivers/usb/dwc3/core.c @@ -231,26 +231,12 @@ do { reg = dwc3_readl(dwc->regs, DWC3_DCTL); if (!(reg & DWC3_DCTL_CSFTRST)) - goto done; + return 0; udelay(1); } while (--retries); - phy_exit(dwc->usb3_generic_phy); - phy_exit(dwc->usb2_generic_phy); - return -ETIMEDOUT; - -done: - /* - * For DWC_usb31 controller, once DWC3_DCTL_CSFTRST bit is cleared, - * we must wait at least 50ms before accessing the PHY domain - * (synchronization delay). DWC_usb31 programming guide section 1.3.2. 
- */ - if (dwc3_is_usb31(dwc)) - msleep(50); - - return 0; } /* diff -u linux-azure-4.15.0/drivers/usb/dwc3/core.h linux-azure-4.15.0/drivers/usb/dwc3/core.h --- linux-azure-4.15.0/drivers/usb/dwc3/core.h +++ linux-azure-4.15.0/drivers/usb/dwc3/core.h @@ -241,8 +241,6 @@ #define DWC3_GUSB3PIPECTL_TX_DEEPH(n) ((n) << 1) /* Global TX Fifo Size Register */ -#define DWC31_GTXFIFOSIZ_TXFRAMNUM BIT(15) /* DWC_usb31 only */ -#define DWC31_GTXFIFOSIZ_TXFDEF(n) ((n) & 0x7fff) /* DWC_usb31 only */ #define DWC3_GTXFIFOSIZ_TXFDEF(n) ((n) & 0xffff) #define DWC3_GTXFIFOSIZ_TXFSTADDR(n) ((n) & 0xffff0000) reverted: --- linux-azure-4.15.0/drivers/usb/dwc3/dwc3-omap.c +++ linux-azure-4.15.0.orig/drivers/usb/dwc3/dwc3-omap.c @@ -582,25 +582,9 @@ return 0; } -static void dwc3_omap_complete(struct device *dev) -{ - struct dwc3_omap *omap = dev_get_drvdata(dev); - - if (extcon_get_state(omap->edev, EXTCON_USB)) - dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_VALID); - else - dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_OFF); - - if (extcon_get_state(omap->edev, EXTCON_USB_HOST)) - dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_GROUND); - else - dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_FLOAT); -} - static const struct dev_pm_ops dwc3_omap_dev_pm_ops = { SET_SYSTEM_SLEEP_PM_OPS(dwc3_omap_suspend, dwc3_omap_resume) - .complete = dwc3_omap_complete, }; #define DEV_PM_OPS (&dwc3_omap_dev_pm_ops) diff -u linux-azure-4.15.0/drivers/usb/dwc3/gadget.c linux-azure-4.15.0/drivers/usb/dwc3/gadget.c --- linux-azure-4.15.0/drivers/usb/dwc3/gadget.c +++ linux-azure-4.15.0/drivers/usb/dwc3/gadget.c @@ -1424,7 +1424,7 @@ dwc->lock); if (!r->trb) - goto out0; + goto out1; if (r->num_pending_sgs) { struct dwc3_trb *trb; reverted: --- linux-azure-4.15.0/drivers/usb/gadget/composite.c +++ linux-azure-4.15.0.orig/drivers/usb/gadget/composite.c @@ -1422,7 +1422,7 @@ return res; } +static void fill_ext_compat(struct usb_configuration *c, u8 *buf) -static int fill_ext_compat(struct usb_configuration *c, u8 *buf) { int i, 
count; @@ -1449,12 +1449,10 @@ buf += 23; } count += 24; + if (count >= 4096) + return; - if (count + 24 >= USB_COMP_EP0_OS_DESC_BUFSIZ) - return count; } } - - return count; } static int count_ext_prop(struct usb_configuration *c, int interface) @@ -1499,20 +1497,25 @@ struct usb_os_desc *d; struct usb_os_desc_ext_prop *ext_prop; int j, count, n, ret; + u8 *start = buf; f = c->interface[interface]; - count = 10; /* header length */ for (j = 0; j < f->os_desc_n; ++j) { if (interface != f->os_desc_table[j].if_id) continue; d = f->os_desc_table[j].os_desc; if (d) list_for_each_entry(ext_prop, &d->ext_prop, entry) { + /* 4kB minus header length */ + n = buf - start; + if (n >= 4086) + return 0; + + count = ext_prop->data_len + - n = ext_prop->data_len + ext_prop->name_len + 14; + if (count > 4086 - n) + return -EINVAL; + usb_ext_prop_put_size(buf, count); - if (count + n >= USB_COMP_EP0_OS_DESC_BUFSIZ) - return count; - usb_ext_prop_put_size(buf, n); usb_ext_prop_put_type(buf, ext_prop->type); ret = usb_ext_prop_put_name(buf, ext_prop->name, ext_prop->name_len); @@ -1538,12 +1541,11 @@ default: return -EINVAL; } + buf += count; - buf += n; - count += n; } } + return 0; - return count; } /* @@ -1825,7 +1827,6 @@ req->complete = composite_setup_complete; buf = req->buf; os_desc_cfg = cdev->os_desc_config; - w_length = min_t(u16, w_length, USB_COMP_EP0_OS_DESC_BUFSIZ); memset(buf, 0, w_length); buf[5] = 0x01; switch (ctrl->bRequestType & USB_RECIP_MASK) { @@ -1849,8 +1850,8 @@ count += 16; /* header */ put_unaligned_le32(count, buf); buf += 16; + fill_ext_compat(os_desc_cfg, buf); + value = w_length; - value = fill_ext_compat(os_desc_cfg, buf); - value = min_t(u16, w_length, value); } break; case USB_RECIP_INTERFACE: @@ -1879,7 +1880,8 @@ interface, buf); if (value < 0) return value; + + value = w_length; - value = min_t(u16, w_length, value); } break; } @@ -2154,8 +2156,8 @@ goto end; } + /* OS feature descriptor length <= 4kB */ + cdev->os_desc_req->buf = kmalloc(4096, 
GFP_KERNEL); - cdev->os_desc_req->buf = kmalloc(USB_COMP_EP0_OS_DESC_BUFSIZ, - GFP_KERNEL); if (!cdev->os_desc_req->buf) { ret = -ENOMEM; usb_ep_free_request(ep0, cdev->os_desc_req); diff -u linux-azure-4.15.0/drivers/usb/gadget/function/f_fs.c linux-azure-4.15.0/drivers/usb/gadget/function/f_fs.c --- linux-azure-4.15.0/drivers/usb/gadget/function/f_fs.c +++ linux-azure-4.15.0/drivers/usb/gadget/function/f_fs.c @@ -755,13 +755,9 @@ bool kiocb_has_eventfd = io_data->kiocb->ki_flags & IOCB_EVENTFD; if (io_data->read && ret > 0) { - mm_segment_t oldfs = get_fs(); - - set_fs(USER_DS); use_mm(io_data->mm); ret = ffs_copy_to_iter(io_data->buf, ret, &io_data->data); unuse_mm(io_data->mm); - set_fs(oldfs); } io_data->kiocb->ki_complete(io_data->kiocb, ret, ret); @@ -3239,7 +3235,7 @@ __ffs_event_add(ffs, FUNCTIONFS_SETUP); spin_unlock_irqrestore(&ffs->ev.waitq.lock, flags); - return USB_GADGET_DELAYED_STATUS; + return 0; } static bool ffs_func_req_match(struct usb_function *f, reverted: --- linux-azure-4.15.0/drivers/usb/gadget/function/f_uac2.c +++ linux-azure-4.15.0.orig/drivers/usb/gadget/function/f_uac2.c @@ -524,8 +524,6 @@ dev_err(dev, "%s:%d Error!\n", __func__, __LINE__); return ret; } - iad_desc.bFirstInterface = ret; - std_ac_if_desc.bInterfaceNumber = ret; uac2->ac_intf = ret; uac2->ac_alt = 0; diff -u linux-azure-4.15.0/drivers/usb/gadget/udc/core.c linux-azure-4.15.0/drivers/usb/gadget/udc/core.c --- linux-azure-4.15.0/drivers/usb/gadget/udc/core.c +++ linux-azure-4.15.0/drivers/usb/gadget/udc/core.c @@ -180,8 +180,8 @@ void usb_ep_free_request(struct usb_ep *ep, struct usb_request *req) { - trace_usb_ep_free_request(ep, req, 0); ep->ops->free_request(ep, req); + trace_usb_ep_free_request(ep, req, 0); } EXPORT_SYMBOL_GPL(usb_ep_free_request); reverted: --- linux-azure-4.15.0/drivers/usb/gadget/udc/fsl_udc_core.c +++ linux-azure-4.15.0.orig/drivers/usb/gadget/udc/fsl_udc_core.c @@ -1305,7 +1305,7 @@ { struct fsl_ep *ep = get_ep_by_pipe(udc, pipe); + if 
(ep->name) - if (ep->ep.name) nuke(ep, -ESHUTDOWN); } @@ -1693,7 +1693,7 @@ curr_ep = get_ep_by_pipe(udc, i); /* If the ep is configured */ + if (curr_ep->name == NULL) { - if (!curr_ep->ep.name) { WARNING("Invalid EP?"); continue; } reverted: --- linux-azure-4.15.0/drivers/usb/gadget/udc/goku_udc.h +++ linux-azure-4.15.0.orig/drivers/usb/gadget/udc/goku_udc.h @@ -25,7 +25,7 @@ # define INT_EP1DATASET 0x00040 # define INT_EP2DATASET 0x00080 # define INT_EP3DATASET 0x00100 +#define INT_EPnNAK(n) (0x00100 < (n)) /* 0 < n < 4 */ -#define INT_EPnNAK(n) (0x00100 << (n)) /* 0 < n < 4 */ # define INT_EP1NAK 0x00200 # define INT_EP2NAK 0x00400 # define INT_EP3NAK 0x00800 diff -u linux-azure-4.15.0/drivers/usb/host/ohci-hcd.c linux-azure-4.15.0/drivers/usb/host/ohci-hcd.c --- linux-azure-4.15.0/drivers/usb/host/ohci-hcd.c +++ linux-azure-4.15.0/drivers/usb/host/ohci-hcd.c @@ -447,8 +447,7 @@ struct usb_hcd *hcd = ohci_to_hcd(ohci); /* Accept arbitrarily long scatter-gather lists */ - if (!(hcd->driver->flags & HCD_LOCAL_MEM)) - hcd->self.sg_tablesize = ~0; + hcd->self.sg_tablesize = ~0; if (distrust_firmware) ohci->flags |= OHCI_QUIRK_HUB_POWER; diff -u linux-azure-4.15.0/drivers/usb/host/xhci-dbgcap.c linux-azure-4.15.0/drivers/usb/host/xhci-dbgcap.c --- linux-azure-4.15.0/drivers/usb/host/xhci-dbgcap.c +++ linux-azure-4.15.0/drivers/usb/host/xhci-dbgcap.c @@ -328,14 +328,13 @@ int dbc_ep_queue(struct dbc_ep *dep, struct dbc_request *req, gfp_t gfp_flags) { - unsigned long flags; struct xhci_dbc *dbc = dep->dbc; int ret = -ESHUTDOWN; - spin_lock_irqsave(&dbc->lock, flags); + spin_lock(&dbc->lock); if (dbc->state == DS_CONFIGURED) ret = dbc_ep_do_queue(dep, req); - spin_unlock_irqrestore(&dbc->lock, flags); + spin_unlock(&dbc->lock); mod_delayed_work(system_wq, &dbc->event_work, 0); @@ -507,33 +506,30 @@ return 0; } -static int xhci_do_dbc_stop(struct xhci_hcd *xhci) +static void xhci_do_dbc_stop(struct xhci_hcd *xhci) { struct xhci_dbc *dbc = xhci->dbc; if (dbc->state == 
DS_DISABLED) - return -1; + return; writel(0, &dbc->regs->control); xhci_dbc_mem_cleanup(xhci); dbc->state = DS_DISABLED; - - return 0; } static int xhci_dbc_start(struct xhci_hcd *xhci) { int ret; - unsigned long flags; struct xhci_dbc *dbc = xhci->dbc; WARN_ON(!dbc); pm_runtime_get_sync(xhci_to_hcd(xhci)->self.controller); - spin_lock_irqsave(&dbc->lock, flags); + spin_lock(&dbc->lock); ret = xhci_do_dbc_start(xhci); - spin_unlock_irqrestore(&dbc->lock, flags); + spin_unlock(&dbc->lock); if (ret) { pm_runtime_put(xhci_to_hcd(xhci)->self.controller); @@ -545,8 +541,6 @@ static void xhci_dbc_stop(struct xhci_hcd *xhci) { - int ret; - unsigned long flags; struct xhci_dbc *dbc = xhci->dbc; struct dbc_port *port = &dbc->port; @@ -557,12 +551,11 @@ if (port->registered) xhci_dbc_tty_unregister_device(xhci); - spin_lock_irqsave(&dbc->lock, flags); - ret = xhci_do_dbc_stop(xhci); - spin_unlock_irqrestore(&dbc->lock, flags); + spin_lock(&dbc->lock); + xhci_do_dbc_stop(xhci); + spin_unlock(&dbc->lock); - if (!ret) - pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller); + pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller); } static void @@ -786,15 +779,14 @@ int ret; enum evtreturn evtr; struct xhci_dbc *dbc; - unsigned long flags; struct xhci_hcd *xhci; dbc = container_of(to_delayed_work(work), struct xhci_dbc, event_work); xhci = dbc->xhci; - spin_lock_irqsave(&dbc->lock, flags); + spin_lock(&dbc->lock); evtr = xhci_dbc_do_handle_events(dbc); - spin_unlock_irqrestore(&dbc->lock, flags); + spin_unlock(&dbc->lock); switch (evtr) { case EVT_GSER: diff -u linux-azure-4.15.0/drivers/usb/host/xhci-dbgtty.c linux-azure-4.15.0/drivers/usb/host/xhci-dbgtty.c --- linux-azure-4.15.0/drivers/usb/host/xhci-dbgtty.c +++ linux-azure-4.15.0/drivers/usb/host/xhci-dbgtty.c @@ -92,23 +92,21 @@ static void dbc_read_complete(struct xhci_hcd *xhci, struct dbc_request *req) { - unsigned long flags; struct xhci_dbc *dbc = xhci->dbc; struct dbc_port *port = &dbc->port; - 
spin_lock_irqsave(&port->port_lock, flags); + spin_lock(&port->port_lock); list_add_tail(&req->list_pool, &port->read_queue); tasklet_schedule(&port->push); - spin_unlock_irqrestore(&port->port_lock, flags); + spin_unlock(&port->port_lock); } static void dbc_write_complete(struct xhci_hcd *xhci, struct dbc_request *req) { - unsigned long flags; struct xhci_dbc *dbc = xhci->dbc; struct dbc_port *port = &dbc->port; - spin_lock_irqsave(&port->port_lock, flags); + spin_lock(&port->port_lock); list_add(&req->list_pool, &port->write_pool); switch (req->status) { case 0: @@ -121,7 +119,7 @@ req->status); break; } - spin_unlock_irqrestore(&port->port_lock, flags); + spin_unlock(&port->port_lock); } void xhci_dbc_free_req(struct dbc_ep *dep, struct dbc_request *req) @@ -331,13 +329,12 @@ { struct dbc_request *req; struct tty_struct *tty; - unsigned long flags; bool do_push = false; bool disconnect = false; struct dbc_port *port = (void *)_port; struct list_head *queue = &port->read_queue; - spin_lock_irqsave(&port->port_lock, flags); + spin_lock_irq(&port->port_lock); tty = port->port.tty; while (!list_empty(queue)) { req = list_first_entry(queue, struct dbc_request, list_pool); @@ -397,17 +394,16 @@ if (!disconnect) dbc_start_rx(port); - spin_unlock_irqrestore(&port->port_lock, flags); + spin_unlock_irq(&port->port_lock); } static int dbc_port_activate(struct tty_port *_port, struct tty_struct *tty) { - unsigned long flags; struct dbc_port *port = container_of(_port, struct dbc_port, port); - spin_lock_irqsave(&port->port_lock, flags); + spin_lock_irq(&port->port_lock); dbc_start_rx(port); - spin_unlock_irqrestore(&port->port_lock, flags); + spin_unlock_irq(&port->port_lock); return 0; } reverted: --- linux-azure-4.15.0/drivers/usb/host/xhci-hub.c +++ linux-azure-4.15.0.orig/drivers/usb/host/xhci-hub.c @@ -354,7 +354,7 @@ slot_id = 0; for (i = 0; i < MAX_HC_SLOTS; i++) { + if (!xhci->devs[i]) - if (!xhci->devs[i] || !xhci->devs[i]->udev) continue; speed = 
xhci->devs[i]->udev->speed; if (((speed >= USB_SPEED_SUPER) == (hcd->speed >= HCD_USB3)) diff -u linux-azure-4.15.0/drivers/usb/host/xhci-mem.c linux-azure-4.15.0/drivers/usb/host/xhci-mem.c --- linux-azure-4.15.0/drivers/usb/host/xhci-mem.c +++ linux-azure-4.15.0/drivers/usb/host/xhci-mem.c @@ -913,8 +913,6 @@ if (dev->out_ctx) xhci_free_container_ctx(xhci, dev->out_ctx); - if (dev->udev && dev->udev->slot_id) - dev->udev->slot_id = 0; kfree(xhci->devs[slot_id]); xhci->devs[slot_id] = NULL; } diff -u linux-azure-4.15.0/drivers/usb/host/xhci-pci.c linux-azure-4.15.0/drivers/usb/host/xhci-pci.c --- linux-azure-4.15.0/drivers/usb/host/xhci-pci.c +++ linux-azure-4.15.0/drivers/usb/host/xhci-pci.c @@ -122,10 +122,7 @@ if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info()) xhci->quirks |= XHCI_AMD_PLL_FIX; - if (pdev->vendor == PCI_VENDOR_ID_AMD && - (pdev->device == 0x15e0 || - pdev->device == 0x15e1 || - pdev->device == 0x43bb)) + if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x43bb) xhci->quirks |= XHCI_SUSPEND_DELAY; if (pdev->vendor == PCI_VENDOR_ID_AMD) reverted: --- linux-azure-4.15.0/drivers/usb/host/xhci-plat.c +++ linux-azure-4.15.0.orig/drivers/usb/host/xhci-plat.c @@ -355,6 +355,7 @@ { struct usb_hcd *hcd = dev_get_drvdata(dev); struct xhci_hcd *xhci = hcd_to_xhci(hcd); + int ret; /* * xhci_suspend() needs `do_wakeup` to know whether host is allowed @@ -364,7 +365,12 @@ * reconsider this when xhci_plat_suspend enlarges its scope, e.g., * also applies to runtime suspend. 
*/ + ret = xhci_suspend(xhci, device_may_wakeup(dev)); + + if (!device_may_wakeup(dev) && !IS_ERR(xhci->clk)) + clk_disable_unprepare(xhci->clk); + + return ret; - return xhci_suspend(xhci, device_may_wakeup(dev)); } static int __maybe_unused xhci_plat_resume(struct device *dev) @@ -373,6 +379,9 @@ struct xhci_hcd *xhci = hcd_to_xhci(hcd); int ret; + if (!device_may_wakeup(dev) && !IS_ERR(xhci->clk)) + clk_prepare_enable(xhci->clk); + ret = xhci_priv_resume_quirk(hcd); if (ret) return ret; @@ -414,6 +423,7 @@ static struct platform_driver usb_xhci_driver = { .probe = xhci_plat_probe, .remove = xhci_plat_remove, + .shutdown = usb_hcd_platform_shutdown, .driver = { .name = "xhci-hcd", .pm = &xhci_plat_pm_ops, diff -u linux-azure-4.15.0/drivers/usb/host/xhci.c linux-azure-4.15.0/drivers/usb/host/xhci.c --- linux-azure-4.15.0/drivers/usb/host/xhci.c +++ linux-azure-4.15.0/drivers/usb/host/xhci.c @@ -3564,7 +3564,6 @@ del_timer_sync(&virt_dev->eps[i].stop_cmd_timer); } xhci_debugfs_remove_slot(xhci, udev->slot_id); - virt_dev->udev = NULL; ret = xhci_disable_slot(xhci, udev->slot_id); if (ret) xhci_free_virt_device(xhci, udev->slot_id); @@ -4785,7 +4784,6 @@ * quirks */ struct device *dev = hcd->self.sysdev; - unsigned int minor_rev; int retval; /* Accept arbitrarily long scatter-gather lists */ @@ -4813,19 +4811,12 @@ */ hcd->has_tt = 1; } else { - /* - * Some 3.1 hosts return sbrn 0x30, use xhci supported protocol - * minor revision instead of sbrn - */ - minor_rev = xhci->usb3_rhub.min_rev; - if (minor_rev) { + /* Some 3.1 hosts return sbrn 0x30, can't rely on sbrn alone */ + if (xhci->sbrn == 0x31 || xhci->usb3_rhub.min_rev >= 1) { + xhci_info(xhci, "Host supports USB 3.1 Enhanced SuperSpeed\n"); hcd->speed = HCD_USB31; hcd->self.root_hub->speed = USB_SPEED_SUPER_PLUS; } - xhci_info(xhci, "Host supports USB 3.%x %s SuperSpeed\n", - minor_rev, - minor_rev ? "Enhanced" : ""); - /* xHCI private pointer was set in xhci_pci_probe for the second * registered roothub. 
*/ reverted: --- linux-azure-4.15.0/drivers/usb/musb/musb_gadget.c +++ linux-azure-4.15.0.orig/drivers/usb/musb/musb_gadget.c @@ -417,6 +417,7 @@ req = next_request(musb_ep); request = &req->request; + trace_musb_req_tx(req); csr = musb_readw(epio, MUSB_TXCSR); musb_dbg(musb, "<== %s, txcsr %04x", musb_ep->end_point.name, csr); @@ -455,8 +456,6 @@ u8 is_dma = 0; bool short_packet = false; - trace_musb_req_tx(req); - if (dma && (csr & MUSB_TXCSR_DMAENAB)) { is_dma = 1; csr |= MUSB_TXCSR_P_WZC_BITS; diff -u linux-azure-4.15.0/drivers/usb/musb/musb_host.c linux-azure-4.15.0/drivers/usb/musb/musb_host.c --- linux-azure-4.15.0/drivers/usb/musb/musb_host.c +++ linux-azure-4.15.0/drivers/usb/musb/musb_host.c @@ -998,9 +998,7 @@ /* set tx_reinit and schedule the next qh */ ep->tx_reinit = 1; } - - if (next_qh) - musb_start_urb(musb, is_in, next_qh); + musb_start_urb(musb, is_in, next_qh); } } diff -u linux-azure-4.15.0/drivers/usb/serial/Kconfig linux-azure-4.15.0/drivers/usb/serial/Kconfig --- linux-azure-4.15.0/drivers/usb/serial/Kconfig +++ linux-azure-4.15.0/drivers/usb/serial/Kconfig @@ -62,7 +62,6 @@ - Fundamental Software dongle. 
	- Google USB serial devices
	- HP4x calculators
-	- Libtransistor USB console
	- a number of Motorola phones
	- Motorola Tetra devices
	- Novatel Wireless GPS receivers
diff -u linux-azure-4.15.0/drivers/usb/serial/cp210x.c linux-azure-4.15.0/drivers/usb/serial/cp210x.c
--- linux-azure-4.15.0/drivers/usb/serial/cp210x.c
+++ linux-azure-4.15.0/drivers/usb/serial/cp210x.c
@@ -214,7 +214,6 @@
 	{ USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */
 	{ USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */
 	{ USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */
-	{ USB_DEVICE(0x3923, 0x7A0B) }, /* National Instruments USB Serial Console */
 	{ USB_DEVICE(0x413C, 0x9500) }, /* DW700 GPS USB interface */
 	{ } /* Terminating Entry */
 };
diff -u linux-azure-4.15.0/drivers/usb/serial/ftdi_sio.c linux-azure-4.15.0/drivers/usb/serial/ftdi_sio.c
--- linux-azure-4.15.0/drivers/usb/serial/ftdi_sio.c
+++ linux-azure-4.15.0/drivers/usb/serial/ftdi_sio.c
@@ -1898,8 +1898,7 @@
 		return ftdi_jtag_probe(serial);

 	if (udev->product &&
-	    (!strcmp(udev->product, "Arrow USB Blaster") ||
-	     !strcmp(udev->product, "BeagleBone/XDS100V2") ||
+	    (!strcmp(udev->product, "BeagleBone/XDS100V2") ||
 	     !strcmp(udev->product, "SNAP Connect E10")))
 		return ftdi_jtag_probe(serial);
diff -u linux-azure-4.15.0/drivers/usb/serial/option.c linux-azure-4.15.0/drivers/usb/serial/option.c
--- linux-azure-4.15.0/drivers/usb/serial/option.c
+++ linux-azure-4.15.0/drivers/usb/serial/option.c
@@ -233,8 +233,6 @@
 /* These Quectel products use Qualcomm's vendor ID */
 #define QUECTEL_PRODUCT_UC20		0x9003
 #define QUECTEL_PRODUCT_UC15		0x9090
-/* These u-blox products use Qualcomm's vendor ID */
-#define UBLOX_PRODUCT_R410M		0x90b2
 /* These Yuga products use Qualcomm's vendor ID */
 #define YUGA_PRODUCT_CLM920_NC5	0x9625
@@ -243,7 +241,6 @@
 #define QUECTEL_PRODUCT_EC21		0x0121
 #define QUECTEL_PRODUCT_EC25		0x0125
 #define QUECTEL_PRODUCT_BG96		0x0296
-#define QUECTEL_PRODUCT_EP06		0x0306

 #define CMOTECH_VENDOR_ID		0x16d8
 #define CMOTECH_PRODUCT_6001		0x6001
@@ -550,15 +547,147 @@
 #define WETELECOM_PRODUCT_6802		0x6802
 #define WETELECOM_PRODUCT_WMD300	0x6803
-
-/* Device flags */
-
-/* Interface does not support modem-control requests */
-#define NCTRL(ifnum)	((BIT(ifnum) & 0xff) << 8)
-
-/* Interface is reserved */
-#define RSVD(ifnum)	((BIT(ifnum) & 0xff) << 0)
-
+struct option_blacklist_info {
+	/* bitmask of interface numbers blacklisted for send_setup */
+	const unsigned long sendsetup;
+	/* bitmask of interface numbers that are reserved */
+	const unsigned long reserved;
+};
+
+static const struct option_blacklist_info four_g_w14_blacklist = {
+	.sendsetup = BIT(0) | BIT(1),
+};
+
+static const struct option_blacklist_info four_g_w100_blacklist = {
+	.sendsetup = BIT(1) | BIT(2),
+	.reserved = BIT(3),
+};
+
+static const struct option_blacklist_info alcatel_x200_blacklist = {
+	.sendsetup = BIT(0) | BIT(1),
+	.reserved = BIT(4),
+};
+
+static const struct option_blacklist_info zte_0037_blacklist = {
+	.sendsetup = BIT(0) | BIT(1),
+};
+
+static const struct option_blacklist_info zte_k3765_z_blacklist = {
+	.sendsetup = BIT(0) | BIT(1) | BIT(2),
+	.reserved = BIT(4),
+};
+
+static const struct option_blacklist_info zte_ad3812_z_blacklist = {
+	.sendsetup = BIT(0) | BIT(1) | BIT(2),
+};
+
+static const struct option_blacklist_info zte_mc2718_z_blacklist = {
+	.sendsetup = BIT(1) | BIT(2) | BIT(3) | BIT(4),
+};
+
+static const struct option_blacklist_info zte_mc2716_z_blacklist = {
+	.sendsetup = BIT(1) | BIT(2) | BIT(3),
+};
+
+static const struct option_blacklist_info zte_me3620_mbim_blacklist = {
+	.reserved = BIT(2) | BIT(3) | BIT(4),
+};
+
+static const struct option_blacklist_info zte_me3620_xl_blacklist = {
+	.reserved = BIT(3) | BIT(4) | BIT(5),
+};
+
+static const struct option_blacklist_info zte_zm8620_x_blacklist = {
+	.reserved = BIT(3) | BIT(4) | BIT(5),
+};
+
+static const struct option_blacklist_info huawei_cdc12_blacklist = {
+	.reserved = BIT(1) | BIT(2),
+};
+
+static const struct option_blacklist_info net_intf0_blacklist = {
+    .reserved = BIT(0),
+};
+
+static const struct option_blacklist_info net_intf1_blacklist = {
+    .reserved = BIT(1),
+};
+
+static const struct option_blacklist_info net_intf2_blacklist = {
+    .reserved = BIT(2),
+};
+
+static const struct option_blacklist_info net_intf3_blacklist = {
+    .reserved = BIT(3),
+};
+
+static const struct option_blacklist_info net_intf4_blacklist = {
+    .reserved = BIT(4),
+};
+
+static const struct option_blacklist_info net_intf5_blacklist = {
+    .reserved = BIT(5),
+};
+
+static const struct option_blacklist_info net_intf6_blacklist = {
+    .reserved = BIT(6),
+};
+
+static const struct option_blacklist_info zte_mf626_blacklist = {
+    .sendsetup = BIT(0) | BIT(1),
+    .reserved = BIT(4),
+};
+
+static const struct option_blacklist_info zte_1255_blacklist = {
+    .reserved = BIT(3) | BIT(4),
+};
+
+static const struct option_blacklist_info simcom_sim7100e_blacklist = {
+    .reserved = BIT(5) | BIT(6),
+};
+
+static const struct option_blacklist_info telit_me910_blacklist = {
+    .sendsetup = BIT(0),
+    .reserved = BIT(1) | BIT(3),
+};
+
+static const struct option_blacklist_info telit_me910_dual_modem_blacklist = {
+    .sendsetup = BIT(0),
+    .reserved = BIT(3),
+};
+
+static const struct option_blacklist_info telit_le910_blacklist = {
+    .sendsetup = BIT(0),
+    .reserved = BIT(1) | BIT(2),
+};
+
+static const struct option_blacklist_info telit_le920_blacklist = {
+    .sendsetup = BIT(0),
+    .reserved = BIT(1) | BIT(5),
+};
+
+static const struct option_blacklist_info telit_le920a4_blacklist_1 = {
+    .sendsetup = BIT(0),
+    .reserved = BIT(1),
+};
+
+static const struct option_blacklist_info telit_le922_blacklist_usbcfg0 = {
+    .sendsetup = BIT(2),
+    .reserved = BIT(0) | BIT(1) | BIT(3),
+};
+
+static const struct option_blacklist_info telit_le922_blacklist_usbcfg3 = {
+    .sendsetup = BIT(0),
+    .reserved = BIT(1) | BIT(2) | BIT(3),
+};
+
+static const struct option_blacklist_info cinterion_rmnet2_blacklist = {
+    .reserved = BIT(4) | BIT(5),
+};
+
+static const struct option_blacklist_info yuga_clm920_nc5_blacklist = {
+    .reserved = BIT(1) | BIT(4),
+};
 
 static const struct usb_device_id option_ids[] = {
     { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
@@ -592,26 +721,26 @@
     { USB_DEVICE(QUANTA_VENDOR_ID, QUANTA_PRODUCT_GKE) },
     { USB_DEVICE(QUANTA_VENDOR_ID, QUANTA_PRODUCT_GLE) },
     { USB_DEVICE(QUANTA_VENDOR_ID, 0xea42),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c05, USB_CLASS_COMM, 0x02, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c1f, USB_CLASS_COMM, 0x02, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c23, USB_CLASS_COMM, 0x02, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t) &net_intf1_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173S6, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t) &net_intf1_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E1750, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(2) },
+      .driver_info = (kernel_ulong_t) &net_intf2_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1441, USB_CLASS_COMM, 0x02, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1442, USB_CLASS_COMM, 0x02, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4505, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) | RSVD(2) },
+      .driver_info = (kernel_ulong_t) &huawei_cdc12_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K3765, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) | RSVD(2) },
+      .driver_info = (kernel_ulong_t) &huawei_cdc12_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x14ac, 0xff, 0xff, 0xff),	/* Huawei E1820 */
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t) &net_intf1_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4605, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) | RSVD(2) },
+      .driver_info = (kernel_ulong_t) &huawei_cdc12_blacklist },
     { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0xff, 0xff) },
     { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x01) },
     { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x01, 0x02) },
@@ -1056,70 +1185,65 @@
     { USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC680) },
     { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6000)},	/* ZTE AC8700 */
     { USB_DEVICE_AND_INTERFACE_INFO(QUALCOMM_VENDOR_ID, 0x6001, 0xff, 0xff, 0xff), /* 4G LTE usb-modem U901 */
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)},	/* Onda H600/ZTE MF330 */
     { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)},	/* ONYX 3G device */
     { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)},	/* SIMCom SIM5218 */
     /* Quectel products using Qualcomm vendor ID */
     { USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC15)},
     { USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC20),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     /* Yuga products use Qualcomm vendor ID */
     { USB_DEVICE(QUALCOMM_VENDOR_ID, YUGA_PRODUCT_CLM920_NC5),
-      .driver_info = RSVD(1) | RSVD(4) },
-    /* u-blox products using Qualcomm vendor ID */
-    { USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M),
-      .driver_info = RSVD(1) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&yuga_clm920_nc5_blacklist },
     /* Quectel products using Quectel vendor ID */
     { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96),
-      .driver_info = RSVD(4) },
-    { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06),
-      .driver_info = RSVD(4) | RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003),
-      .driver_info = RSVD(0) },
+      .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6004) },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6005) },
    { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CGU_628A) },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHE_628S),
-      .driver_info = RSVD(0) },
+      .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_301),
-      .driver_info = RSVD(0) },
+      .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_628),
-      .driver_info = RSVD(0) },
+      .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_628S) },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CDU_680) },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CDU_685A) },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_720S),
-      .driver_info = RSVD(0) },
+      .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7002),
-      .driver_info = RSVD(0) },
+      .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_629K),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7004),
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7005) },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CGU_629),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_629S),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CHU_720I),
-      .driver_info = RSVD(0) },
+      .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7212),
-      .driver_info = RSVD(0) },
+      .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7213),
-      .driver_info = RSVD(0) },
+      .driver_info = (kernel_ulong_t)&net_intf0_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7251),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7252),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
     { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_7253),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UC864E) },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UC864G) },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_DUAL) },
@@ -1127,38 +1251,38 @@
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
-      .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg0 },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
-      .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+      .driver_info = (kernel_ulong_t)&telit_le910_blacklist },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG2),
-      .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG3),
-      .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
     { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG5, 0xff),
-      .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg0 },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
-      .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&telit_me910_blacklist },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
-      .driver_info = NCTRL(0) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&telit_me910_dual_modem_blacklist },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910),
-      .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+      .driver_info = (kernel_ulong_t)&telit_le910_blacklist },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
-      .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
-      .driver_info = NCTRL(0) | RSVD(1) | RSVD(5) },
+      .driver_info = (kernel_ulong_t)&telit_le920_blacklist },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1207) },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1208),
-      .driver_info = NCTRL(0) | RSVD(1) },
+      .driver_info = (kernel_ulong_t)&telit_le920a4_blacklist_1 },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1211),
-      .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1212),
-      .driver_info = NCTRL(0) | RSVD(1) },
+      .driver_info = (kernel_ulong_t)&telit_le920a4_blacklist_1 },
     { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1213, 0xff) },
     { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1214),
-      .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0002, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0003, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0004, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0005, 0xff, 0xff, 0xff) },
@@ -1174,58 +1298,58 @@
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0010, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0011, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0012, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0013, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF628, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0016, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0017, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0018, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0019, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0020, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0021, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0022, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0023, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0024, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0025, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0028, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0029, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0030, 0xff, 0xff, 0xff) },
-    { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF626, 0xff, 0xff, 0xff),
-      .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) },
+    { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF626, 0xff,
+      0xff, 0xff), .driver_info = (kernel_ulong_t)&zte_mf626_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0032, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0033, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0034, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0037, 0xff, 0xff, 0xff),
-      .driver_info = NCTRL(0) | NCTRL(1) },
+      .driver_info = (kernel_ulong_t)&zte_0037_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0038, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0039, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0040, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0042, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0043, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0044, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0048, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0049, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0050, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0051, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0052, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0054, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0055, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0056, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0057, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0058, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0061, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0062, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0063, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0064, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0065, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0066, 0xff, 0xff, 0xff) },
@@ -1250,26 +1374,26 @@
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0096, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0097, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0104, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0105, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0106, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0108, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0113, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0117, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0118, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0121, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0122, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0123, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0124, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0125, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(6) },
+      .driver_info = (kernel_ulong_t)&net_intf6_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0126, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0128, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0135, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0136, 0xff, 0xff, 0xff) },
@@ -1285,50 +1409,50 @@
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0155, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0156, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0157, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0158, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0159, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0161, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0162, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0164, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0165, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0167, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0189, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0191, 0xff, 0xff, 0xff), /* ZTE EuFi890 */
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0196, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0197, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0199, 0xff, 0xff, 0xff), /* ZTE MF820S */
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0200, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0201, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0254, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0257, 0xff, 0xff, 0xff), /* ZTE MF821 */
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0265, 0xff, 0xff, 0xff), /* ONDA MT8205 */
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0284, 0xff, 0xff, 0xff), /* ZTE MF880 */
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0317, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0326, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0330, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0395, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0412, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G */
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0414, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0417, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1010, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1012, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1018, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1021, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(2) },
+      .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1057, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1058, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1059, 0xff, 0xff, 0xff) },
@@ -1445,23 +1569,23 @@
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1170, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1244, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1245, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1246, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1247, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1248, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1249, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1250, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1251, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1252, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1253, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1254, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1255, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(3) | RSVD(4) },
+      .driver_info = (kernel_ulong_t)&zte_1255_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1256, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1257, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1258, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1259, 0xff, 0xff, 0xff) },
@@ -1476,7 +1600,7 @@
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1268, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1269, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1270, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1271, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1272, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1273, 0xff, 0xff, 0xff) },
@@ -1512,17 +1636,17 @@
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1303, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1333, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1401, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(2) },
+      .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1402, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(2) },
+      .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1424, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(2) },
+      .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1425, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(2) },
+      .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1426, 0xff, 0xff, 0xff), /* ZTE MF91 */
-      .driver_info = RSVD(2) },
+      .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */
-      .driver_info = RSVD(2) },
+      .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) },
@@ -1540,8 +1664,8 @@
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1596, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1598, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1600, 0xff, 0xff, 0xff) },
-    { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff, 0xff, 0xff),
-      .driver_info = NCTRL(0) | NCTRL(1) | NCTRL(2) | RSVD(4) },
+    { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff,
+      0xff, 0xff), .driver_info = (kernel_ulong_t)&zte_k3765_z_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2003, 0xff, 0xff, 0xff) },
 
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0014, 0xff, 0xff, 0xff) }, /* ZTE CDMA products */
@@ -1552,20 +1676,20 @@
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0073, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0094, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0130, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(1) },
+      .driver_info = (kernel_ulong_t)&net_intf1_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0133, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0141, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0147, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0152, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0168, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0170, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0176, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0178, 0xff, 0xff, 0xff),
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff42, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff43, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff44, 0xff, 0xff, 0xff) },
@@ -1717,19 +1841,19 @@
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC8710T, 0xff, 0xff, 0xff) },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2718, 0xff, 0xff, 0xff),
-      .driver_info = NCTRL(1) | NCTRL(2) | NCTRL(3) | NCTRL(4) },
+      .driver_info = (kernel_ulong_t)&zte_mc2718_z_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AD3812, 0xff, 0xff, 0xff),
-      .driver_info = NCTRL(0) | NCTRL(1) | NCTRL(2) },
+      .driver_info = (kernel_ulong_t)&zte_ad3812_z_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2716, 0xff, 0xff, 0xff),
-      .driver_info = NCTRL(1) | NCTRL(2) | NCTRL(3) },
+      .driver_info = (kernel_ulong_t)&zte_mc2716_z_blacklist },
     { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_L),
-      .driver_info = RSVD(3) | RSVD(4) | RSVD(5) },
+      .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist },
     { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_MBIM),
-      .driver_info = RSVD(2) | RSVD(3) | RSVD(4) },
+      .driver_info = (kernel_ulong_t)&zte_me3620_mbim_blacklist },
     { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_X),
-      .driver_info = RSVD(3) | RSVD(4) | RSVD(5) },
+      .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist },
     { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ZM8620_X),
-      .driver_info = RSVD(3) | RSVD(4) | RSVD(5) },
+      .driver_info = (kernel_ulong_t)&zte_zm8620_x_blacklist },
     { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) },
     { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) },
     { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) },
@@ -1749,34 +1873,37 @@
     { USB_DEVICE(ALINK_VENDOR_ID, ALINK_PRODUCT_PH300) },
     { USB_DEVICE_AND_INTERFACE_INFO(ALINK_VENDOR_ID, ALINK_PRODUCT_3GU, 0xff, 0xff, 0xff) },
     { USB_DEVICE(ALINK_VENDOR_ID, SIMCOM_PRODUCT_SIM7100E),
-      .driver_info = RSVD(5) | RSVD(6) },
+      .driver_info = (kernel_ulong_t)&simcom_sim7100e_blacklist },
     { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X060S_X200),
-      .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) },
+      .driver_info = (kernel_ulong_t)&alcatel_x200_blacklist
+    },
     { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X220_X500D),
-      .driver_info = RSVD(6) },
+      .driver_info = (kernel_ulong_t)&net_intf6_blacklist },
     { USB_DEVICE(ALCATEL_VENDOR_ID, 0x0052),
-      .driver_info = RSVD(6) },
+      .driver_info = (kernel_ulong_t)&net_intf6_blacklist },
     { USB_DEVICE(ALCATEL_VENDOR_ID, 0x00b6),
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE(ALCATEL_VENDOR_ID, 0x00b7),
-      .driver_info = RSVD(5) },
+      .driver_info = (kernel_ulong_t)&net_intf5_blacklist },
     { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_L100V),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_L800MA),
-      .driver_info = RSVD(2) },
+      .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
     { USB_DEVICE(AIRPLUS_VENDOR_ID, AIRPLUS_PRODUCT_MCD650) },
     { USB_DEVICE(TLAYTECH_VENDOR_ID, TLAYTECH_PRODUCT_TEU800) },
     { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14),
-      .driver_info = NCTRL(0) | NCTRL(1) },
+      .driver_info = (kernel_ulong_t)&four_g_w14_blacklist
+    },
     { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W100),
-      .driver_info = NCTRL(1) | NCTRL(2) | RSVD(3) },
+      .driver_info = (kernel_ulong_t)&four_g_w100_blacklist
+    },
     {USB_DEVICE(LONGCHEER_VENDOR_ID, FUJISOFT_PRODUCT_FS040U),
-     .driver_info = RSVD(3)},
+     .driver_info = (kernel_ulong_t)&net_intf3_blacklist},
     { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, SPEEDUP_PRODUCT_SU9800, 0xff) },
     { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, 0x9801, 0xff),
-      .driver_info = RSVD(3) },
+      .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
     { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, 0x9803, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) },
     { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) },
     { USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) },
@@ -1802,14 +1929,14 @@
     { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_EU3_E) },
     { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_EU3_P) },
     { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PH8),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AHXX, 0xff) },
     { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PLXX),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PH8_2RMNET, 0xff),
-      .driver_info = RSVD(4) | RSVD(5) },
+      .driver_info = (kernel_ulong_t)&cinterion_rmnet2_blacklist },
     { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PH8_AUDIO, 0xff),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AHXX_2RMNET, 0xff) },
     { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AHXX_AUDIO, 0xff) },
     { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) },
@@ -1819,20 +1946,20 @@
     { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, /* HC28 enumerates with Siemens or Cinterion VID depending on FW revision */
     { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) },
     { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD140),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD145) },
     { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD155),
-      .driver_info = RSVD(6) },
+      .driver_info = (kernel_ulong_t)&net_intf6_blacklist },
     { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200),
-      .driver_info = RSVD(6) },
+      .driver_info = (kernel_ulong_t)&net_intf6_blacklist },
     { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD160),
-      .driver_info = RSVD(6) },
+      .driver_info = (kernel_ulong_t)&net_intf6_blacklist },
     { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD500),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */
     { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/
     { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) },
@@ -1909,9 +2036,9 @@
     { USB_DEVICE(PETATEL_VENDOR_ID, PETATEL_PRODUCT_NP10T_600E) },
     { USB_DEVICE_AND_INTERFACE_INFO(TPLINK_VENDOR_ID, TPLINK_PRODUCT_LTE, 0xff, 0x00, 0x00) },	/* TP-Link LTE Module */
     { USB_DEVICE(TPLINK_VENDOR_ID, TPLINK_PRODUCT_MA180),
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(TPLINK_VENDOR_ID, 0x9000),	/* TP-Link MA260 */
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE(CHANGHONG_VENDOR_ID, CHANGHONG_PRODUCT_CH690) },
     { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7d01, 0xff) },	/* D-Link DWM-156 (variant) */
     { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7d02, 0xff) },
@@ -1919,9 +2046,9 @@
     { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7d04, 0xff) },	/* D-Link DWM-158 */
     { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7d0e, 0xff) },	/* D-Link DWM-157 C1 */
     { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e19, 0xff),	/* D-Link DWM-221 B1 */
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e35, 0xff),	/* D-Link DWM-222 */
-      .driver_info = RSVD(4) },
+      .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
     { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
     { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
     { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/A3 */
@@ -1981,7 +2108,7 @@
     struct usb_interface_descriptor *iface_desc = &serial->interface->cur_altsetting->desc;
     struct usb_device_descriptor *dev_desc = &serial->dev->descriptor;
-    unsigned long device_flags = id->driver_info;
+    const struct option_blacklist_info *blacklist;
 
     /* Never bind to the CD-Rom emulation interface */
     if (iface_desc->bInterfaceClass == 0x08)
@@ -1992,7 +2119,9 @@
      * the same class/subclass/protocol as the serial interfaces.  Look at
      * the Windows driver .INF files for reserved interface numbers.
      */
-    if (device_flags & RSVD(iface_desc->bInterfaceNumber))
+    blacklist = (void *)id->driver_info;
+    if (blacklist && test_bit(iface_desc->bInterfaceNumber,
+                              &blacklist->reserved))
         return -ENODEV;
     /*
      * Don't bind network interface on Samsung GT-B3730, it is handled by
@@ -2003,8 +2132,8 @@
         iface_desc->bInterfaceClass != USB_CLASS_CDC_DATA)
         return -ENODEV;
 
-    /* Store the device flags so we can use them during attach. */
-    usb_set_serial_data(serial, (void *)device_flags);
+    /* Store the blacklist info so we can use it during attach. */
+    usb_set_serial_data(serial, (void *)blacklist);
 
     return 0;
 }
@@ -2012,21 +2141,22 @@
 static int option_attach(struct usb_serial *serial)
 {
     struct usb_interface_descriptor *iface_desc;
+    const struct option_blacklist_info *blacklist;
     struct usb_wwan_intf_private *data;
-    unsigned long device_flags;
 
     data = kzalloc(sizeof(struct usb_wwan_intf_private), GFP_KERNEL);
     if (!data)
         return -ENOMEM;
 
-    /* Retrieve device flags stored at probe. */
-    device_flags = (unsigned long)usb_get_serial_data(serial);
+    /* Retrieve blacklist info stored at probe. */
+    blacklist = usb_get_serial_data(serial);
 
     iface_desc = &serial->interface->cur_altsetting->desc;
 
-    if (!(device_flags & NCTRL(iface_desc->bInterfaceNumber)))
+    if (!blacklist || !test_bit(iface_desc->bInterfaceNumber,
+                                &blacklist->sendsetup)) {
         data->use_send_setup = 1;
-
+    }
     spin_lock_init(&data->susp_lock);
 
     usb_set_serial_data(serial, data);
diff -u linux-azure-4.15.0/drivers/usb/serial/usb-serial-simple.c linux-azure-4.15.0/drivers/usb/serial/usb-serial-simple.c
--- linux-azure-4.15.0/drivers/usb/serial/usb-serial-simple.c
+++ linux-azure-4.15.0/drivers/usb/serial/usb-serial-simple.c
@@ -63,11 +63,6 @@
 		0x01) }
 DEVICE(google, GOOGLE_IDS);
 
-/* Libtransistor USB console */
-#define LIBTRANSISTOR_IDS()			\
-	{ USB_DEVICE(0x1209, 0x8b00) }
-DEVICE(libtransistor, LIBTRANSISTOR_IDS);
-
 /* ViVOpay USB Serial Driver */
 #define VIVOPAY_IDS()			\
 	{ USB_DEVICE(0x1d5f, 0x1004) }	/* ViVOpay 8800 */
@@ -115,7 +110,6 @@
 	&funsoft_device,
 	&flashloader_device,
 	&google_device,
-	&libtransistor_device,
 	&vivopay_device,
 	&moto_modem_device,
 	&motorola_tetra_device,
@@ -132,7 +126,6 @@
 	FUNSOFT_IDS(),
 	FLASHLOADER_IDS(),
 	GOOGLE_IDS(),
-	LIBTRANSISTOR_IDS(),
 	VIVOPAY_IDS(),
 	MOTO_IDS(),
 	MOTOROLA_TETRA_IDS(),
reverted:
--- linux-azure-4.15.0/drivers/usb/serial/visor.c
+++ linux-azure-4.15.0.orig/drivers/usb/serial/visor.c
@@ -335,48 +335,47 @@
 		goto exit;
 	}
 
+	if (retval == sizeof(*connection_info)) {
+		connection_info = (struct visor_connection_info *)
+						transfer_buffer;
+
+		num_ports = le16_to_cpu(connection_info->num_ports);
+		for (i = 0; i < num_ports; ++i) {
+			switch (
+			   connection_info->connections[i].port_function_id) {
+			case VISOR_FUNCTION_GENERIC:
+				string = "Generic";
+				break;
+			case VISOR_FUNCTION_DEBUGGER:
+				string = "Debugger";
+				break;
+			case VISOR_FUNCTION_HOTSYNC:
+				string = "HotSync";
+				break;
+			case VISOR_FUNCTION_CONSOLE:
+				string = "Console";
+				break;
+			case VISOR_FUNCTION_REMOTE_FILE_SYS:
+				string = "Remote File System";
+				break;
+			default:
+				string = "unknown";
+				break;
+ } + dev_info(dev, "%s: port %d, is for %s use\n", + serial->type->description, + connection_info->connections[i].port, string); + } - if (retval != sizeof(*connection_info)) { - dev_err(dev, "Invalid connection information received from device\n"); - retval = -ENODEV; - goto exit; } + /* + * Handle devices that report invalid stuff here. + */ - - connection_info = (struct visor_connection_info *)transfer_buffer; - - num_ports = le16_to_cpu(connection_info->num_ports); - - /* Handle devices that report invalid stuff here. */ if (num_ports == 0 || num_ports > 2) { dev_warn(dev, "%s: No valid connect info available\n", serial->type->description); num_ports = 2; } - for (i = 0; i < num_ports; ++i) { - switch (connection_info->connections[i].port_function_id) { - case VISOR_FUNCTION_GENERIC: - string = "Generic"; - break; - case VISOR_FUNCTION_DEBUGGER: - string = "Debugger"; - break; - case VISOR_FUNCTION_HOTSYNC: - string = "HotSync"; - break; - case VISOR_FUNCTION_CONSOLE: - string = "Console"; - break; - case VISOR_FUNCTION_REMOTE_FILE_SYS: - string = "Remote File System"; - break; - default: - string = "unknown"; - break; - } - dev_info(dev, "%s: port %d, is for %s use\n", - serial->type->description, - connection_info->connections[i].port, string); - } dev_info(dev, "%s: Number of ports: %d\n", serial->type->description, num_ports); reverted: --- linux-azure-4.15.0/drivers/usb/usbip/Kconfig +++ linux-azure-4.15.0.orig/drivers/usb/usbip/Kconfig @@ -27,7 +27,7 @@ config USBIP_VHCI_HC_PORTS int "Number of ports per USB/IP virtual host controller" + range 1 31 - range 1 15 default 8 depends on USBIP_VHCI_HCD ---help--- reverted: --- linux-azure-4.15.0/drivers/usb/usbip/stub.h +++ linux-azure-4.15.0.orig/drivers/usb/usbip/stub.h @@ -73,7 +73,6 @@ struct stub_device *sdev; struct usb_device *udev; char shutdown_busid; - spinlock_t busid_lock; }; /* stub_priv is allocated from stub_priv_cache */ @@ -84,7 +83,6 @@ /* stub_main.c */ struct bus_id_priv 
*get_busid_priv(const char *busid); -void put_busid_priv(struct bus_id_priv *bid); int del_match_busid(char *busid); void stub_device_cleanup_urbs(struct stub_device *sdev); diff -u linux-azure-4.15.0/drivers/usb/usbip/stub_dev.c linux-azure-4.15.0/drivers/usb/usbip/stub_dev.c --- linux-azure-4.15.0/drivers/usb/usbip/stub_dev.c +++ linux-azure-4.15.0/drivers/usb/usbip/stub_dev.c @@ -300,9 +300,9 @@ struct stub_device *sdev = NULL; const char *udev_busid = dev_name(&udev->dev); struct bus_id_priv *busid_priv; - int rc = 0; + int rc; - dev_dbg(&udev->dev, "Enter probe\n"); + dev_dbg(&udev->dev, "Enter\n"); /* check we should claim or not by busid_table */ busid_priv = get_busid_priv(udev_busid); @@ -317,15 +317,13 @@ * other matched drivers by the driver core. * See driver_probe_device() in driver/base/dd.c */ - rc = -ENODEV; - goto call_put_busid_priv; + return -ENODEV; } if (udev->descriptor.bDeviceClass == USB_CLASS_HUB) { dev_dbg(&udev->dev, "%s is a usb hub device... skip!\n", udev_busid); - rc = -ENODEV; - goto call_put_busid_priv; + return -ENODEV; } if (!strcmp(udev->bus->bus_name, "vhci_hcd")) { @@ -333,16 +331,13 @@ "%s is attached on vhci_hcd... 
skip!\n", udev_busid); - rc = -ENODEV; - goto call_put_busid_priv; + return -ENODEV; } /* ok, this is my device */ sdev = stub_device_alloc(udev); - if (!sdev) { - rc = -ENOMEM; - goto call_put_busid_priv; - } + if (!sdev) + return -ENOMEM; dev_info(&udev->dev, "usbip-host: register new device (bus %u dev %u)\n", @@ -374,9 +369,7 @@ } busid_priv->status = STUB_BUSID_ALLOC; - rc = 0; - goto call_put_busid_priv; - + return 0; err_files: usb_hub_release_port(udev->parent, udev->portnum, (struct usb_dev_state *) udev); @@ -386,9 +379,6 @@ busid_priv->sdev = NULL; stub_device_free(sdev); - -call_put_busid_priv: - put_busid_priv(busid_priv); return rc; } @@ -414,7 +404,7 @@ struct bus_id_priv *busid_priv; int rc; - dev_dbg(&udev->dev, "Enter disconnect\n"); + dev_dbg(&udev->dev, "Enter\n"); busid_priv = get_busid_priv(udev_busid); if (!busid_priv) { @@ -427,7 +417,7 @@ /* get stub_device */ if (!sdev) { dev_err(&udev->dev, "could not get device"); - goto call_put_busid_priv; + return; } dev_set_drvdata(&udev->dev, NULL); @@ -442,12 +432,12 @@ (struct usb_dev_state *) udev); if (rc) { dev_dbg(&udev->dev, "unable to release port\n"); - goto call_put_busid_priv; + return; } /* If usb reset is called from event handler */ if (usbip_in_eh(current)) - goto call_put_busid_priv; + return; /* shutdown the current connection */ shutdown_busid(busid_priv); @@ -458,11 +448,12 @@ busid_priv->sdev = NULL; stub_device_free(sdev); - if (busid_priv->status == STUB_BUSID_ALLOC) + if (busid_priv->status == STUB_BUSID_ALLOC) { busid_priv->status = STUB_BUSID_ADDED; - -call_put_busid_priv: - put_busid_priv(busid_priv); + } else { + busid_priv->status = STUB_BUSID_OTHER; + del_match_busid((char *)udev_busid); + } } #ifdef CONFIG_PM reverted: --- linux-azure-4.15.0/drivers/usb/usbip/stub_main.c +++ linux-azure-4.15.0.orig/drivers/usb/usbip/stub_main.c @@ -14,7 +14,6 @@ #define DRIVER_DESC "USB/IP Host Driver" struct kmem_cache *stub_priv_cache; - /* * busid_tables defines matching busids that 
usbip can grab. A user can change * dynamically what device is locally used and what device is exported to a @@ -26,8 +25,6 @@ static void init_busid_table(void) { - int i; - /* * This also sets the bus_table[i].status to * STUB_BUSID_OTHER, which is 0. @@ -35,9 +32,6 @@ memset(busid_table, 0, sizeof(busid_table)); spin_lock_init(&busid_table_lock); - - for (i = 0; i < MAX_BUSID; i++) - spin_lock_init(&busid_table[i].busid_lock); } /* @@ -49,20 +43,15 @@ int i; int idx = -1; + for (i = 0; i < MAX_BUSID; i++) - for (i = 0; i < MAX_BUSID; i++) { - spin_lock(&busid_table[i].busid_lock); if (busid_table[i].name[0]) if (!strncmp(busid_table[i].name, busid, BUSID_SIZE)) { idx = i; - spin_unlock(&busid_table[i].busid_lock); break; } - spin_unlock(&busid_table[i].busid_lock); - } return idx; } -/* Returns holding busid_lock. Should call put_busid_priv() to unlock */ struct bus_id_priv *get_busid_priv(const char *busid) { int idx; @@ -70,22 +59,13 @@ spin_lock(&busid_table_lock); idx = get_busid_idx(busid); + if (idx >= 0) - if (idx >= 0) { bid = &(busid_table[idx]); - /* get busid_lock before returning */ - spin_lock(&bid->busid_lock); - } spin_unlock(&busid_table_lock); return bid; } -void put_busid_priv(struct bus_id_priv *bid) -{ - if (bid) - spin_unlock(&bid->busid_lock); -} - static int add_match_busid(char *busid) { int i; @@ -98,19 +78,15 @@ goto out; } + for (i = 0; i < MAX_BUSID; i++) - for (i = 0; i < MAX_BUSID; i++) { - spin_lock(&busid_table[i].busid_lock); if (!busid_table[i].name[0]) { strlcpy(busid_table[i].name, busid, BUSID_SIZE); if ((busid_table[i].status != STUB_BUSID_ALLOC) && (busid_table[i].status != STUB_BUSID_REMOV)) busid_table[i].status = STUB_BUSID_ADDED; ret = 0; - spin_unlock(&busid_table[i].busid_lock); break; } - spin_unlock(&busid_table[i].busid_lock); - } out: spin_unlock(&busid_table_lock); @@ -131,8 +107,6 @@ /* found */ ret = 0; - spin_lock(&busid_table[idx].busid_lock); - if (busid_table[idx].status == STUB_BUSID_OTHER) 
memset(busid_table[idx].name, 0, BUSID_SIZE); @@ -140,7 +114,6 @@ (busid_table[idx].status != STUB_BUSID_ADDED)) busid_table[idx].status = STUB_BUSID_REMOV; - spin_unlock(&busid_table[idx].busid_lock); out: spin_unlock(&busid_table_lock); @@ -153,12 +126,9 @@ char *out = buf; spin_lock(&busid_table_lock); + for (i = 0; i < MAX_BUSID; i++) - for (i = 0; i < MAX_BUSID; i++) { - spin_lock(&busid_table[i].busid_lock); if (busid_table[i].name[0]) out += sprintf(out, "%s ", busid_table[i].name); - spin_unlock(&busid_table[i].busid_lock); - } spin_unlock(&busid_table_lock); out += sprintf(out, "\n"); @@ -199,51 +169,6 @@ } static DRIVER_ATTR_RW(match_busid); -static int do_rebind(char *busid, struct bus_id_priv *busid_priv) -{ - int ret; - - /* device_attach() callers should hold parent lock for USB */ - if (busid_priv->udev->dev.parent) - device_lock(busid_priv->udev->dev.parent); - ret = device_attach(&busid_priv->udev->dev); - if (busid_priv->udev->dev.parent) - device_unlock(busid_priv->udev->dev.parent); - if (ret < 0) { - dev_err(&busid_priv->udev->dev, "rebind failed\n"); - return ret; - } - return 0; -} - -static void stub_device_rebind(void) -{ -#if IS_MODULE(CONFIG_USBIP_HOST) - struct bus_id_priv *busid_priv; - int i; - - /* update status to STUB_BUSID_OTHER so probe ignores the device */ - spin_lock(&busid_table_lock); - for (i = 0; i < MAX_BUSID; i++) { - if (busid_table[i].name[0] && - busid_table[i].shutdown_busid) { - busid_priv = &(busid_table[i]); - busid_priv->status = STUB_BUSID_OTHER; - } - } - spin_unlock(&busid_table_lock); - - /* now run rebind - no need to hold locks. 
driver files are removed */ - for (i = 0; i < MAX_BUSID; i++) { - if (busid_table[i].name[0] && - busid_table[i].shutdown_busid) { - busid_priv = &(busid_table[i]); - do_rebind(busid_table[i].name, busid_priv); - } - } -#endif -} - static ssize_t rebind_store(struct device_driver *dev, const char *buf, size_t count) { @@ -261,17 +186,11 @@ if (!bid) return -ENODEV; + ret = device_attach(&bid->udev->dev); + if (ret < 0) { + dev_err(&bid->udev->dev, "rebind failed\n"); - /* mark the device for deletion so probe ignores it during rescan */ - bid->status = STUB_BUSID_OTHER; - /* release the busid lock */ - put_busid_priv(bid); - - ret = do_rebind((char *) buf, bid); - if (ret < 0) return ret; + } - - /* delete device from busid_table */ - del_match_busid((char *) buf); return count; } @@ -393,9 +312,6 @@ */ usb_deregister_device_driver(&stub_driver); - /* initiate scan to attach devices */ - stub_device_rebind(); - kmem_cache_destroy(stub_priv_cache); } reverted: --- linux-azure-4.15.0/drivers/usb/usbip/usbip_common.h +++ linux-azure-4.15.0.orig/drivers/usb/usbip/usbip_common.h @@ -243,7 +243,7 @@ #define VUDC_EVENT_ERROR_USB (USBIP_EH_SHUTDOWN | USBIP_EH_UNUSABLE) #define VUDC_EVENT_ERROR_MALLOC (USBIP_EH_SHUTDOWN | USBIP_EH_UNUSABLE) +#define VDEV_EVENT_REMOVED (USBIP_EH_SHUTDOWN | USBIP_EH_BYE) -#define VDEV_EVENT_REMOVED (USBIP_EH_SHUTDOWN | USBIP_EH_RESET | USBIP_EH_BYE) #define VDEV_EVENT_DOWN (USBIP_EH_SHUTDOWN | USBIP_EH_RESET) #define VDEV_EVENT_ERROR_TCP (USBIP_EH_SHUTDOWN | USBIP_EH_RESET) #define VDEV_EVENT_ERROR_MALLOC (USBIP_EH_SHUTDOWN | USBIP_EH_UNUSABLE) reverted: --- linux-azure-4.15.0/drivers/usb/usbip/usbip_event.c +++ linux-azure-4.15.0.orig/drivers/usb/usbip/usbip_event.c @@ -91,6 +91,10 @@ unset_event(ud, USBIP_EH_UNUSABLE); } + /* Stop the error handler. 
*/ + if (ud->event & USBIP_EH_BYE) + usbip_dbg_eh("removed %p\n", ud); + wake_up(&ud->eh_waitq); } } diff -u linux-azure-4.15.0/drivers/usb/usbip/vhci_hcd.c linux-azure-4.15.0/drivers/usb/usbip/vhci_hcd.c --- linux-azure-4.15.0/drivers/usb/usbip/vhci_hcd.c +++ linux-azure-4.15.0/drivers/usb/usbip/vhci_hcd.c @@ -354,8 +354,6 @@ usbip_dbg_vhci_rh(" ClearHubFeature\n"); break; case ClearPortFeature: - if (rhport < 0) - goto error; switch (wValue) { case USB_PORT_FEAT_SUSPEND: if (hcd->speed == HCD_USB3) { @@ -513,16 +511,11 @@ goto error; } - if (rhport < 0) - goto error; - vhci_hcd->port_status[rhport] |= USB_PORT_STAT_SUSPEND; break; case USB_PORT_FEAT_POWER: usbip_dbg_vhci_rh( " SetPortFeature: USB_PORT_FEAT_POWER\n"); - if (rhport < 0) - goto error; if (hcd->speed == HCD_USB3) vhci_hcd->port_status[rhport] |= USB_SS_PORT_STAT_POWER; else @@ -531,8 +524,6 @@ case USB_PORT_FEAT_BH_PORT_RESET: usbip_dbg_vhci_rh( " SetPortFeature: USB_PORT_FEAT_BH_PORT_RESET\n"); - if (rhport < 0) - goto error; /* Applicable only for USB3.0 hub */ if (hcd->speed != HCD_USB3) { pr_err("USB_PORT_FEAT_BH_PORT_RESET req not " @@ -543,8 +534,6 @@ case USB_PORT_FEAT_RESET: usbip_dbg_vhci_rh( " SetPortFeature: USB_PORT_FEAT_RESET\n"); - if (rhport < 0) - goto error; /* if it's already enabled, disable */ if (hcd->speed == HCD_USB3) { vhci_hcd->port_status[rhport] = 0; @@ -565,8 +554,6 @@ default: usbip_dbg_vhci_rh(" SetPortFeature: default %d\n", wValue); - if (rhport < 0) - goto error; if (hcd->speed == HCD_USB3) { if ((vhci_hcd->port_status[rhport] & USB_SS_PORT_STAT_POWER) != 0) { reverted: --- linux-azure-4.15.0/drivers/video/fbdev/uvesafb.c +++ linux-azure-4.15.0.orig/drivers/video/fbdev/uvesafb.c @@ -1044,8 +1044,7 @@ info->cmap.len || cmap->start < info->cmap.start) return -EINVAL; + entries = kmalloc(sizeof(*entries) * cmap->len, GFP_KERNEL); - entries = kmalloc_array(cmap->len, sizeof(*entries), - GFP_KERNEL); if (!entries) return -ENOMEM; diff -u linux-azure-4.15.0/fs/btrfs/ctree.c 
linux-azure-4.15.0/fs/btrfs/ctree.c --- linux-azure-4.15.0/fs/btrfs/ctree.c +++ linux-azure-4.15.0/fs/btrfs/ctree.c @@ -2497,8 +2497,10 @@ if (p->reada != READA_NONE) reada_for_search(fs_info, p, level, slot, key->objectid); + btrfs_release_path(p); + ret = -EAGAIN; - tmp = read_tree_block(fs_info, blocknr, gen); + tmp = read_tree_block(fs_info, blocknr, 0); if (!IS_ERR(tmp)) { /* * If the read above didn't mark this buffer up to date, @@ -2512,8 +2514,6 @@ } else { ret = PTR_ERR(tmp); } - - btrfs_release_path(p); return ret; } @@ -5454,24 +5454,12 @@ down_read(&fs_info->commit_root_sem); left_level = btrfs_header_level(left_root->commit_root); left_root_level = left_level; - left_path->nodes[left_level] = - btrfs_clone_extent_buffer(left_root->commit_root); - if (!left_path->nodes[left_level]) { - up_read(&fs_info->commit_root_sem); - ret = -ENOMEM; - goto out; - } + left_path->nodes[left_level] = left_root->commit_root; extent_buffer_get(left_path->nodes[left_level]); right_level = btrfs_header_level(right_root->commit_root); right_root_level = right_level; - right_path->nodes[right_level] = - btrfs_clone_extent_buffer(right_root->commit_root); - if (!right_path->nodes[right_level]) { - up_read(&fs_info->commit_root_sem); - ret = -ENOMEM; - goto out; - } + right_path->nodes[right_level] = right_root->commit_root; extent_buffer_get(right_path->nodes[right_level]); up_read(&fs_info->commit_root_sem); reverted: --- linux-azure-4.15.0/fs/btrfs/ctree.h +++ linux-azure-4.15.0.orig/fs/btrfs/ctree.h @@ -3156,8 +3156,6 @@ u64 *orig_start, u64 *orig_block_len, u64 *ram_bytes); -void __btrfs_del_delalloc_inode(struct btrfs_root *root, - struct btrfs_inode *inode); struct inode *btrfs_lookup_dentry(struct inode *dir, struct dentry *dentry); int btrfs_set_inode_index(struct btrfs_inode *dir, u64 *index); int btrfs_unlink_inode(struct btrfs_trans_handle *trans, diff -u linux-azure-4.15.0/fs/btrfs/disk-io.c linux-azure-4.15.0/fs/btrfs/disk-io.c --- 
linux-azure-4.15.0/fs/btrfs/disk-io.c +++ linux-azure-4.15.0/fs/btrfs/disk-io.c @@ -3745,7 +3745,6 @@ set_bit(BTRFS_FS_CLOSING_DONE, &fs_info->flags); btrfs_free_qgroup_config(fs_info); - ASSERT(list_empty(&fs_info->delalloc_roots)); if (percpu_counter_sum(&fs_info->delalloc_bytes)) { btrfs_info(fs_info, "at unmount delalloc count %lld", @@ -4051,15 +4050,15 @@ static void btrfs_error_commit_super(struct btrfs_fs_info *fs_info) { - /* cleanup FS via transaction */ - btrfs_cleanup_transaction(fs_info); - mutex_lock(&fs_info->cleaner_mutex); btrfs_run_delayed_iputs(fs_info); mutex_unlock(&fs_info->cleaner_mutex); down_write(&fs_info->cleanup_work_sem); up_write(&fs_info->cleanup_work_sem); + + /* cleanup FS via transaction */ + btrfs_cleanup_transaction(fs_info); } static void btrfs_destroy_ordered_extents(struct btrfs_root *root) @@ -4184,23 +4183,19 @@ list_splice_init(&root->delalloc_inodes, &splice); while (!list_empty(&splice)) { - struct inode *inode = NULL; btrfs_inode = list_first_entry(&splice, struct btrfs_inode, delalloc_inodes); - __btrfs_del_delalloc_inode(root, btrfs_inode); + + list_del_init(&btrfs_inode->delalloc_inodes); + clear_bit(BTRFS_INODE_IN_DELALLOC_LIST, + &btrfs_inode->runtime_flags); spin_unlock(&root->delalloc_lock); - /* - * Make sure we get a live inode and that it'll not disappear - * meanwhile. 
- */ - inode = igrab(&btrfs_inode->vfs_inode); - if (inode) { - invalidate_inode_pages2(inode->i_mapping); - iput(inode); - } + btrfs_invalidate_inodes(btrfs_inode->root); + spin_lock(&root->delalloc_lock); } + spin_unlock(&root->delalloc_lock); } @@ -4216,6 +4211,7 @@ while (!list_empty(&splice)) { root = list_first_entry(&splice, struct btrfs_root, delalloc_root); + list_del_init(&root->delalloc_root); root = btrfs_grab_fs_root(root); BUG_ON(!root); spin_unlock(&fs_info->delalloc_root_lock); diff -u linux-azure-4.15.0/fs/btrfs/extent-tree.c linux-azure-4.15.0/fs/btrfs/extent-tree.c --- linux-azure-4.15.0/fs/btrfs/extent-tree.c +++ linux-azure-4.15.0/fs/btrfs/extent-tree.c @@ -3148,11 +3148,7 @@ struct rb_node *node; int ret = 0; - spin_lock(&root->fs_info->trans_lock); cur_trans = root->fs_info->running_transaction; - if (cur_trans) - refcount_inc(&cur_trans->use_count); - spin_unlock(&root->fs_info->trans_lock); if (!cur_trans) return 0; @@ -3161,7 +3157,6 @@ head = btrfs_find_delayed_ref_head(delayed_refs, bytenr); if (!head) { spin_unlock(&delayed_refs->lock); - btrfs_put_transaction(cur_trans); return 0; } @@ -3178,7 +3173,6 @@ mutex_lock(&head->mutex); mutex_unlock(&head->mutex); btrfs_put_delayed_ref_head(head); - btrfs_put_transaction(cur_trans); return -EAGAIN; } spin_unlock(&delayed_refs->lock); @@ -3211,7 +3205,6 @@ } spin_unlock(&head->lock); mutex_unlock(&head->mutex); - btrfs_put_transaction(cur_trans); return ret; } diff -u linux-azure-4.15.0/fs/btrfs/inode.c linux-azure-4.15.0/fs/btrfs/inode.c --- linux-azure-4.15.0/fs/btrfs/inode.c +++ linux-azure-4.15.0/fs/btrfs/inode.c @@ -1757,12 +1757,12 @@ spin_unlock(&root->delalloc_lock); } - -void __btrfs_del_delalloc_inode(struct btrfs_root *root, - struct btrfs_inode *inode) +static void btrfs_del_delalloc_inode(struct btrfs_root *root, + struct btrfs_inode *inode) { struct btrfs_fs_info *fs_info = btrfs_sb(inode->vfs_inode.i_sb); + spin_lock(&root->delalloc_lock); if 
(!list_empty(&inode->delalloc_inodes)) { list_del_init(&inode->delalloc_inodes); clear_bit(BTRFS_INODE_IN_DELALLOC_LIST, @@ -1775,13 +1775,6 @@ spin_unlock(&fs_info->delalloc_root_lock); } } -} - -static void btrfs_del_delalloc_inode(struct btrfs_root *root, - struct btrfs_inode *inode) -{ - spin_lock(&root->delalloc_lock); - __btrfs_del_delalloc_inode(root, inode); spin_unlock(&root->delalloc_lock); } reverted: --- linux-azure-4.15.0/fs/btrfs/props.c +++ linux-azure-4.15.0.orig/fs/btrfs/props.c @@ -400,7 +400,6 @@ const char *value, size_t len) { - struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); int type; if (len == 0) { @@ -411,17 +410,14 @@ return 0; } + if (!strncmp("lzo", value, 3)) - if (!strncmp("lzo", value, 3)) { type = BTRFS_COMPRESS_LZO; + else if (!strncmp("zlib", value, 4)) - btrfs_set_fs_incompat(fs_info, COMPRESS_LZO); - } else if (!strncmp("zlib", value, 4)) { type = BTRFS_COMPRESS_ZLIB; + else if (!strncmp("zstd", value, len)) - } else if (!strncmp("zstd", value, len)) { type = BTRFS_COMPRESS_ZSTD; + else - btrfs_set_fs_incompat(fs_info, COMPRESS_ZSTD); - } else { return -EINVAL; - } BTRFS_I(inode)->flags &= ~BTRFS_INODE_NOCOMPRESS; BTRFS_I(inode)->flags |= BTRFS_INODE_COMPRESS; diff -u linux-azure-4.15.0/fs/btrfs/tree-log.c linux-azure-4.15.0/fs/btrfs/tree-log.c --- linux-azure-4.15.0/fs/btrfs/tree-log.c +++ linux-azure-4.15.0/fs/btrfs/tree-log.c @@ -4670,7 +4670,6 @@ struct extent_map_tree *em_tree = &inode->extent_tree; u64 logged_isize = 0; bool need_log_inode_item = true; - bool xattrs_logged = false; path = btrfs_alloc_path(); if (!path) @@ -4972,7 +4971,6 @@ err = btrfs_log_all_xattrs(trans, root, inode, path, dst_path); if (err) goto out_unlock; - xattrs_logged = true; if (max_key.type >= BTRFS_EXTENT_DATA_KEY && !fast_search) { btrfs_release_path(path); btrfs_release_path(dst_path); @@ -4985,11 +4983,6 @@ btrfs_release_path(dst_path); if (need_log_inode_item) { err = log_inode_item(trans, log, dst_path, inode); - if (!err && 
!xattrs_logged) { - err = btrfs_log_all_xattrs(trans, root, inode, path, - dst_path); - btrfs_release_path(path); - } if (err) goto out_unlock; } diff -u linux-azure-4.15.0/fs/btrfs/volumes.c linux-azure-4.15.0/fs/btrfs/volumes.c --- linux-azure-4.15.0/fs/btrfs/volumes.c +++ linux-azure-4.15.0/fs/btrfs/volumes.c @@ -3968,15 +3968,6 @@ return 0; } - /* - * A ro->rw remount sequence should continue with the paused balance - * regardless of who pauses it, system or the user as of now, so set - * the resume flag. - */ - spin_lock(&fs_info->balance_lock); - fs_info->balance_ctl->flags |= BTRFS_BALANCE_RESUME; - spin_unlock(&fs_info->balance_lock); - tsk = kthread_run(balance_kthread, fs_info, "btrfs-balance"); return PTR_ERR_OR_ZERO(tsk); } reverted: --- linux-azure-4.15.0/fs/ceph/addr.c +++ linux-azure-4.15.0.orig/fs/ceph/addr.c @@ -299,8 +299,7 @@ * start an async read(ahead) operation. return nr_pages we submitted * a read for on success, or negative error code. */ +static int start_read(struct inode *inode, struct list_head *page_list, int max) -static int start_read(struct inode *inode, struct ceph_rw_context *rw_ctx, - struct list_head *page_list, int max) { struct ceph_osd_client *osdc = &ceph_inode_to_client(inode)->client->osdc; @@ -317,7 +316,7 @@ int got = 0; int ret = 0; + if (!current->journal_info) { - if (!rw_ctx) { /* caller of readpages does not hold buffer and read caps * (fadvise, madvise and readahead cases) */ int want = CEPH_CAP_FILE_CACHE; @@ -438,8 +437,6 @@ { struct inode *inode = file_inode(file); struct ceph_fs_client *fsc = ceph_inode_to_client(inode); - struct ceph_file_info *ci = file->private_data; - struct ceph_rw_context *rw_ctx; int rc = 0; int max = 0; @@ -452,12 +449,11 @@ if (rc == 0) goto out; - rw_ctx = ceph_find_rw_context(ci); max = fsc->mount_options->rsize >> PAGE_SHIFT; + dout("readpages %p file %p nr_pages %d max %d\n", + inode, file, nr_pages, max); - dout("readpages %p file %p ctx %p nr_pages %d max %d\n", - inode, file, 
rw_ctx, nr_pages, max); while (!list_empty(page_list)) { + rc = start_read(inode, page_list, max); - rc = start_read(inode, rw_ctx, page_list, max); if (rc < 0) goto out; } @@ -1454,10 +1450,9 @@ if ((got & (CEPH_CAP_FILE_CACHE | CEPH_CAP_FILE_LAZYIO)) || ci->i_inline_version == CEPH_INLINE_NONE) { + current->journal_info = vma->vm_file; - CEPH_DEFINE_RW_CONTEXT(rw_ctx, got); - ceph_add_rw_context(fi, &rw_ctx); ret = filemap_fault(vmf); + current->journal_info = NULL; - ceph_del_rw_context(fi, &rw_ctx); } else ret = -EAGAIN; diff -u linux-azure-4.15.0/fs/ceph/file.c linux-azure-4.15.0/fs/ceph/file.c --- linux-azure-4.15.0/fs/ceph/file.c +++ linux-azure-4.15.0/fs/ceph/file.c @@ -181,10 +181,6 @@ return -ENOMEM; } cf->fmode = fmode; - - spin_lock_init(&cf->rw_contexts_lock); - INIT_LIST_HEAD(&cf->rw_contexts); - cf->next_offset = 2; cf->readdir_cache_idx = -1; file->private_data = cf; @@ -468,7 +464,6 @@ ceph_mdsc_put_request(cf->last_readdir); kfree(cf->last_name); kfree(cf->dir_info); - WARN_ON(!list_empty(&cf->rw_contexts)); kmem_cache_free(ceph_file_cachep, cf); /* wake up anyone waiting for caps on this inode */ @@ -878,11 +873,6 @@ size_t start = 0; ssize_t len; - if (write) - size = min_t(u64, size, fsc->mount_options->wsize); - else - size = min_t(u64, size, fsc->mount_options->rsize); - vino = ceph_vino(inode); req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, vino, pos, &size, 0, @@ -898,6 +888,11 @@ break; } + if (write) + size = min_t(u64, size, fsc->mount_options->wsize); + else + size = min_t(u64, size, fsc->mount_options->rsize); + len = size; pages = dio_get_pages_alloc(iter, len, &start, &num_pages); if (IS_ERR(pages)) { @@ -1207,13 +1202,12 @@ retry_op = READ_INLINE; } } else { - CEPH_DEFINE_RW_CONTEXT(rw_ctx, got); dout("aio_read %p %llx.%llx %llu~%u got cap refs on %s\n", inode, ceph_vinop(inode), iocb->ki_pos, (unsigned)len, ceph_cap_string(got)); - ceph_add_rw_context(fi, &rw_ctx); + current->journal_info = filp; ret = 
generic_file_read_iter(iocb, to); - ceph_del_rw_context(fi, &rw_ctx); + current->journal_info = NULL; } dout("aio_read %p %llx.%llx dropping cap refs on %s = %d\n", inode, ceph_vinop(inode), ceph_cap_string(got), (int)ret); reverted: --- linux-azure-4.15.0/fs/ceph/super.h +++ linux-azure-4.15.0.orig/fs/ceph/super.h @@ -668,9 +668,6 @@ short fmode; /* initialized on open */ short flags; /* CEPH_F_* */ - spinlock_t rw_contexts_lock; - struct list_head rw_contexts; - /* readdir: position within the dir */ u32 frag; struct ceph_mds_request *last_readdir; @@ -687,49 +684,6 @@ int dir_info_len; }; -struct ceph_rw_context { - struct list_head list; - struct task_struct *thread; - int caps; -}; - -#define CEPH_DEFINE_RW_CONTEXT(_name, _caps) \ - struct ceph_rw_context _name = { \ - .thread = current, \ - .caps = _caps, \ - } - -static inline void ceph_add_rw_context(struct ceph_file_info *cf, - struct ceph_rw_context *ctx) -{ - spin_lock(&cf->rw_contexts_lock); - list_add(&ctx->list, &cf->rw_contexts); - spin_unlock(&cf->rw_contexts_lock); -} - -static inline void ceph_del_rw_context(struct ceph_file_info *cf, - struct ceph_rw_context *ctx) -{ - spin_lock(&cf->rw_contexts_lock); - list_del(&ctx->list); - spin_unlock(&cf->rw_contexts_lock); -} - -static inline struct ceph_rw_context* -ceph_find_rw_context(struct ceph_file_info *cf) -{ - struct ceph_rw_context *ctx, *found = NULL; - spin_lock(&cf->rw_contexts_lock); - list_for_each_entry(ctx, &cf->rw_contexts, list) { - if (ctx->thread == current) { - found = ctx; - break; - } - } - spin_unlock(&cf->rw_contexts_lock); - return found; -} - struct ceph_readdir_cache_control { struct page *page; struct dentry **dentries; diff -u linux-azure-4.15.0/fs/cifs/cifsfs.c linux-azure-4.15.0/fs/cifs/cifsfs.c --- linux-azure-4.15.0/fs/cifs/cifsfs.c +++ linux-azure-4.15.0/fs/cifs/cifsfs.c @@ -1045,18 +1045,6 @@ return rc; } -/* - * Directory operations under CIFS/SMB2/SMB3 are synchronous, so fsync() - * is a dummy operation. 
- */ -static int cifs_dir_fsync(struct file *file, loff_t start, loff_t end, int datasync) -{ - cifs_dbg(FYI, "Sync directory - name: %pD datasync: 0x%x\n", - file, datasync); - - return 0; -} - static ssize_t cifs_copy_file_range(struct file *src_file, loff_t off, struct file *dst_file, loff_t destoff, size_t len, unsigned int flags) @@ -1185,7 +1173,6 @@ .copy_file_range = cifs_copy_file_range, .clone_file_range = cifs_clone_file_range, .llseek = generic_file_llseek, - .fsync = cifs_dir_fsync, }; static void reverted: --- linux-azure-4.15.0/fs/ext2/inode.c +++ linux-azure-4.15.0.orig/fs/ext2/inode.c @@ -1261,11 +1261,21 @@ static void ext2_truncate_blocks(struct inode *inode, loff_t offset) { + /* + * XXX: it seems like a bug here that we don't allow + * IS_APPEND inode to have blocks-past-i_size trimmed off. + * review and fix this. + * + * Also would be nice to be able to handle IO errors and such, + * but that's probably too much to ask. + */ if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))) return; if (ext2_inode_is_fast_symlink(inode)) return; + if (IS_APPEND(inode) || IS_IMMUTABLE(inode)) + return; dax_sem_down_write(EXT2_I(inode)); __ext2_truncate_blocks(inode, offset); diff -u linux-azure-4.15.0/fs/ext4/balloc.c linux-azure-4.15.0/fs/ext4/balloc.c --- linux-azure-4.15.0/fs/ext4/balloc.c +++ linux-azure-4.15.0/fs/ext4/balloc.c @@ -321,7 +321,6 @@ struct ext4_sb_info *sbi = EXT4_SB(sb); ext4_grpblk_t offset; ext4_grpblk_t next_zero_bit; - ext4_grpblk_t max_bit = EXT4_CLUSTERS_PER_GROUP(sb); ext4_fsblk_t blk; ext4_fsblk_t group_first_block; @@ -339,25 +338,20 @@ /* check whether block bitmap block number is set */ blk = ext4_block_bitmap(sb, desc); offset = blk - group_first_block; - if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit || - !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) + if (!ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) /* bad block bitmap */ return blk; /* check whether the inode bitmap block number 
is set */ blk = ext4_inode_bitmap(sb, desc); offset = blk - group_first_block; - if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit || - !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) + if (!ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) /* bad block bitmap */ return blk; /* check whether the inode table block number is set */ blk = ext4_inode_table(sb, desc); offset = blk - group_first_block; - if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit || - EXT4_B2C(sbi, offset + sbi->s_itb_per_group) >= max_bit) - return blk; next_zero_bit = ext4_find_next_zero_bit(bh->b_data, EXT4_B2C(sbi, offset + EXT4_SB(sb)->s_itb_per_group), EXT4_B2C(sbi, offset)); @@ -426,7 +420,6 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group) { struct ext4_group_desc *desc; - struct ext4_sb_info *sbi = EXT4_SB(sb); struct buffer_head *bh; ext4_fsblk_t bitmap_blk; int err; @@ -435,12 +428,6 @@ if (!desc) return ERR_PTR(-EFSCORRUPTED); bitmap_blk = ext4_block_bitmap(sb, desc); - if ((bitmap_blk <= le32_to_cpu(sbi->s_es->s_first_data_block)) || - (bitmap_blk >= ext4_blocks_count(sbi->s_es))) { - ext4_error(sb, "Invalid block bitmap block %llu in " - "block_group %u", bitmap_blk, block_group); - return ERR_PTR(-EFSCORRUPTED); - } bh = sb_getblk(sb, bitmap_blk); if (unlikely(!bh)) { ext4_error(sb, "Cannot get buffer for block bitmap - " reverted: --- linux-azure-4.15.0/fs/ext4/extents.c +++ linux-azure-4.15.0.orig/fs/ext4/extents.c @@ -5346,9 +5346,8 @@ stop = le32_to_cpu(extent->ee_block); /* + * In case of left shift, Don't start shifting extents until we make + * sure the hole is big enough to accommodate the shift. - * For left shifts, make sure the hole on the left is big enough to - * accommodate the shift. For right shifts, make sure the last extent - * won't be shifted beyond EXT_MAX_BLOCKS. 
*/ if (SHIFT == SHIFT_LEFT) { path = ext4_find_extent(inode, start - 1, &path, @@ -5368,14 +5367,9 @@ if ((start == ex_start && shift > ex_start) || (shift > start - ex_end)) { + ext4_ext_drop_refs(path); + kfree(path); + return -EINVAL; - ret = -EINVAL; - goto out; - } - } else { - if (shift > EXT_MAX_BLOCKS - - (stop + ext4_ext_get_actual_len(extent))) { - ret = -EINVAL; - goto out; } } diff -u linux-azure-4.15.0/fs/ext4/ialloc.c linux-azure-4.15.0/fs/ext4/ialloc.c --- linux-azure-4.15.0/fs/ext4/ialloc.c +++ linux-azure-4.15.0/fs/ext4/ialloc.c @@ -125,7 +125,6 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group) { struct ext4_group_desc *desc; - struct ext4_sb_info *sbi = EXT4_SB(sb); struct buffer_head *bh = NULL; ext4_fsblk_t bitmap_blk; int err; @@ -135,12 +134,6 @@ return ERR_PTR(-EFSCORRUPTED); bitmap_blk = ext4_inode_bitmap(sb, desc); - if ((bitmap_blk <= le32_to_cpu(sbi->s_es->s_first_data_block)) || - (bitmap_blk >= ext4_blocks_count(sbi->s_es))) { - ext4_error(sb, "Invalid inode bitmap blk %llu in " - "block_group %u", bitmap_blk, block_group); - return ERR_PTR(-EFSCORRUPTED); - } bh = sb_getblk(sb, bitmap_blk); if (unlikely(!bh)) { ext4_error(sb, "Cannot read inode bitmap - " reverted: --- linux-azure-4.15.0/fs/ext4/inline.c +++ linux-azure-4.15.0.orig/fs/ext4/inline.c @@ -151,12 +151,6 @@ goto out; if (!is.s.not_found) { - if (is.s.here->e_value_inum) { - EXT4_ERROR_INODE(inode, "inline data xattr refers " - "to an external xattr inode"); - error = -EFSCORRUPTED; - goto out; - } EXT4_I(inode)->i_inline_off = (u16)((void *)is.s.here - (void *)ext4_raw_inode(&is.iloc)); EXT4_I(inode)->i_inline_size = EXT4_MIN_INLINE_DATA_SIZE + @@ -444,7 +438,6 @@ memset((void *)ext4_raw_inode(&is.iloc)->i_block, 0, EXT4_MIN_INLINE_DATA_SIZE); - memset(ei->i_data, 0, EXT4_MIN_INLINE_DATA_SIZE); if (ext4_has_feature_extents(inode->i_sb)) { if (S_ISDIR(inode->i_mode) || diff -u linux-azure-4.15.0/fs/ext4/super.c linux-azure-4.15.0/fs/ext4/super.c --- 
linux-azure-4.15.0/fs/ext4/super.c +++ linux-azure-4.15.0/fs/ext4/super.c @@ -5906,5 +5906,4 @@ MODULE_DESCRIPTION("Fourth Extended Filesystem"); MODULE_LICENSE("GPL"); -MODULE_SOFTDEP("pre: crc32c"); module_init(ext4_init_fs) module_exit(ext4_exit_fs) diff -u linux-azure-4.15.0/fs/ext4/xattr.c linux-azure-4.15.0/fs/ext4/xattr.c --- linux-azure-4.15.0/fs/ext4/xattr.c +++ linux-azure-4.15.0/fs/ext4/xattr.c @@ -1687,7 +1687,7 @@ /* No failures allowed past this point. */ - if (!s->not_found && here->e_value_size && here->e_value_offs) { + if (!s->not_found && here->e_value_offs) { /* Remove the old value. */ void *first_val = s->base + min_offs; size_t offs = le16_to_cpu(here->e_value_offs); diff -u linux-azure-4.15.0/fs/fs-writeback.c linux-azure-4.15.0/fs/fs-writeback.c --- linux-azure-4.15.0/fs/fs-writeback.c +++ linux-azure-4.15.0/fs/fs-writeback.c @@ -1961,7 +1961,7 @@ } if (!list_empty(&wb->work_list)) - wb_wakeup(wb); + mod_delayed_work(bdi_wq, &wb->dwork, 0); else if (wb_has_dirty_io(wb) && dirty_writeback_interval) wb_wakeup_delayed(wb); reverted: --- linux-azure-4.15.0/fs/fscache/page.c +++ linux-azure-4.15.0.orig/fs/fscache/page.c @@ -776,7 +776,6 @@ _enter("{OP%x,%d}", op->op.debug_id, atomic_read(&op->op.usage)); -again: spin_lock(&object->lock); cookie = object->cookie; @@ -817,6 +816,10 @@ goto superseded; page = results[0]; _debug("gang %d [%lx]", n, page->index); + if (page->index >= op->store_limit) { + fscache_stat(&fscache_n_store_pages_over_limit); + goto superseded; + } radix_tree_tag_set(&cookie->stores, page->index, FSCACHE_COOKIE_STORING_TAG); @@ -826,9 +829,6 @@ spin_unlock(&cookie->stores_lock); spin_unlock(&object->lock); - if (page->index >= op->store_limit) - goto discard_page; - fscache_stat(&fscache_n_store_pages); fscache_stat(&fscache_n_cop_write_page); ret = object->cache->ops->write_page(op, page); @@ -844,11 +844,6 @@ _leave(""); return; -discard_page: - fscache_stat(&fscache_n_store_pages_over_limit); - 
fscache_end_page_write(object, page); - goto again; - superseded: /* this writer is going away and there aren't any more things to * write */ reverted: --- linux-azure-4.15.0/fs/hfsplus/super.c +++ linux-azure-4.15.0.orig/fs/hfsplus/super.c @@ -588,7 +588,6 @@ return 0; out_put_hidden_dir: - cancel_delayed_work_sync(&sbi->sync_work); iput(sbi->hidden_dir); out_put_root: dput(sb->s_root); diff -u linux-azure-4.15.0/fs/inode.c linux-azure-4.15.0/fs/inode.c --- linux-azure-4.15.0/fs/inode.c +++ linux-azure-4.15.0/fs/inode.c @@ -2008,14 +2008,8 @@ inode->i_uid = current_fsuid(); if (dir && dir->i_mode & S_ISGID) { inode->i_gid = dir->i_gid; - - /* Directories are special, and always inherit S_ISGID */ if (S_ISDIR(mode)) mode |= S_ISGID; - else if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) && - !in_group_p(inode->i_gid) && - !capable_wrt_inode_uidgid(dir, CAP_FSETID)) - mode &= ~S_ISGID; } else inode->i_gid = current_fsgid(); inode->i_mode = mode; diff -u linux-azure-4.15.0/fs/jbd2/transaction.c linux-azure-4.15.0/fs/jbd2/transaction.c --- linux-azure-4.15.0/fs/jbd2/transaction.c +++ linux-azure-4.15.0/fs/jbd2/transaction.c @@ -535,7 +535,6 @@ */ ret = start_this_handle(journal, handle, GFP_NOFS); if (ret < 0) { - handle->h_journal = journal; jbd2_journal_free_reserved(handle); return ret; } reverted: --- linux-azure-4.15.0/fs/jfs/xattr.c +++ linux-azure-4.15.0.orig/fs/jfs/xattr.c @@ -491,17 +491,15 @@ if (size > PSIZE) { /* * To keep the rest of the code simple. Allocate a + * contiguous buffer to work with - * contiguous buffer to work with. Make the buffer large - * enough to make use of the whole extent. 
*/ + ea_buf->xattr = kmalloc(size, GFP_KERNEL); - ea_buf->max_size = (size + sb->s_blocksize - 1) & - ~(sb->s_blocksize - 1); - - ea_buf->xattr = kmalloc(ea_buf->max_size, GFP_KERNEL); if (ea_buf->xattr == NULL) return -ENOMEM; ea_buf->flag = EA_MALLOC; + ea_buf->max_size = (size + sb->s_blocksize - 1) & + ~(sb->s_blocksize - 1); if (ea_size == 0) return 0; diff -u linux-azure-4.15.0/fs/proc/base.c linux-azure-4.15.0/fs/proc/base.c --- linux-azure-4.15.0/fs/proc/base.c +++ linux-azure-4.15.0/fs/proc/base.c @@ -267,7 +267,7 @@ * Inherently racy -- command line shares address space * with code and data. */ - rv = access_remote_vm(mm, arg_end - 1, &c, 1, FOLL_ANON); + rv = access_remote_vm(mm, arg_end - 1, &c, 1, 0); if (rv <= 0) goto out_free_page; @@ -285,7 +285,7 @@ int nr_read; _count = min3(count, len, PAGE_SIZE); - nr_read = access_remote_vm(mm, p, page, _count, FOLL_ANON); + nr_read = access_remote_vm(mm, p, page, _count, 0); if (nr_read < 0) rv = nr_read; if (nr_read <= 0) @@ -331,7 +331,7 @@ bool final; _count = min3(count, len, PAGE_SIZE); - nr_read = access_remote_vm(mm, p, page, _count, FOLL_ANON); + nr_read = access_remote_vm(mm, p, page, _count, 0); if (nr_read < 0) rv = nr_read; if (nr_read <= 0) @@ -956,7 +956,7 @@ max_len = min_t(size_t, PAGE_SIZE, count); this_len = min(max_len, this_len); - retval = access_remote_vm(mm, (env_start + src), page, this_len, FOLL_ANON); + retval = access_remote_vm(mm, (env_start + src), page, this_len, 0); if (retval <= 0) { ret = retval; reverted: --- linux-azure-4.15.0/fs/xfs/libxfs/xfs_attr_leaf.c +++ linux-azure-4.15.0.orig/fs/xfs/libxfs/xfs_attr_leaf.c @@ -784,8 +784,9 @@ ASSERT(blkno == 0); error = xfs_attr3_leaf_create(args, blkno, &bp); if (error) { + error = xfs_da_shrink_inode(args, 0, bp); + bp = NULL; + if (error) - /* xfs_attr3_leaf_create may not have instantiated a block */ - if (bp && (xfs_da_shrink_inode(args, 0, bp) != 0)) goto out; xfs_idata_realloc(dp, size, XFS_ATTR_FORK); /* try to put */ 
memcpy(ifp->if_u1.if_data, tmpbuffer, size); /* it back */ reverted: --- linux-azure-4.15.0/fs/xfs/libxfs/xfs_bmap.c +++ linux-azure-4.15.0.orig/fs/xfs/libxfs/xfs_bmap.c @@ -725,16 +725,12 @@ *logflagsp = 0; if ((error = xfs_alloc_vextent(&args))) { xfs_iroot_realloc(ip, -1, whichfork); - ASSERT(ifp->if_broot == NULL); - XFS_IFORK_FMT_SET(ip, whichfork, XFS_DINODE_FMT_EXTENTS); xfs_btree_del_cursor(cur, XFS_BTREE_ERROR); return error; } if (WARN_ON_ONCE(args.fsbno == NULLFSBLOCK)) { xfs_iroot_realloc(ip, -1, whichfork); - ASSERT(ifp->if_broot == NULL); - XFS_IFORK_FMT_SET(ip, whichfork, XFS_DINODE_FMT_EXTENTS); xfs_btree_del_cursor(cur, XFS_BTREE_ERROR); return -ENOSPC; } reverted: --- linux-azure-4.15.0/fs/xfs/xfs_file.c +++ linux-azure-4.15.0.orig/fs/xfs/xfs_file.c @@ -812,26 +812,22 @@ if (error) goto out_unlock; } else if (mode & FALLOC_FL_INSERT_RANGE) { + unsigned int blksize_mask = i_blocksize(inode) - 1; - unsigned int blksize_mask = i_blocksize(inode) - 1; - loff_t isize = i_size_read(inode); + new_size = i_size_read(inode) + len; if (offset & blksize_mask || len & blksize_mask) { error = -EINVAL; goto out_unlock; } + /* check the new inode size does not wrap through zero */ + if (new_size > inode->i_sb->s_maxbytes) { - /* - * New inode size must not exceed ->s_maxbytes, accounting for - * possible signed overflow. 
- */ - if (inode->i_sb->s_maxbytes - isize < len) { error = -EFBIG; goto out_unlock; } - new_size = isize + len; /* Offset should be less than i_size */ + if (offset >= i_size_read(inode)) { - if (offset >= isize) { error = -EINVAL; goto out_unlock; } diff -u linux-azure-4.15.0/include/asm-generic/pgtable.h linux-azure-4.15.0/include/asm-generic/pgtable.h --- linux-azure-4.15.0/include/asm-generic/pgtable.h +++ linux-azure-4.15.0/include/asm-generic/pgtable.h @@ -1055,6 +1055,18 @@ static inline void init_espfix_bsp(void) { } #endif +#ifndef __HAVE_ARCH_PFN_MODIFY_ALLOWED +static inline bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot) +{ + return true; +} + +static inline bool arch_has_pfn_modify_check(void) +{ + return false; +} +#endif /* !_HAVE_ARCH_PFN_MODIFY_ALLOWED */ + #endif /* !__ASSEMBLY__ */ #ifndef io_remap_pfn_range reverted: --- linux-azure-4.15.0/include/asm-generic/vmlinux.lds.h +++ linux-azure-4.15.0.orig/include/asm-generic/vmlinux.lds.h @@ -170,7 +170,7 @@ #endif #ifdef CONFIG_SERIAL_EARLYCON +#define EARLYCON_TABLE() STRUCT_ALIGN(); \ -#define EARLYCON_TABLE() . = ALIGN(8); \ VMLINUX_SYMBOL(__earlycon_table) = .; \ KEEP(*(__earlycon_table)) \ VMLINUX_SYMBOL(__earlycon_table_end) = .; diff -u linux-azure-4.15.0/include/kvm/arm_psci.h linux-azure-4.15.0/include/kvm/arm_psci.h --- linux-azure-4.15.0/include/kvm/arm_psci.h +++ linux-azure-4.15.0/include/kvm/arm_psci.h @@ -37,15 +37,10 @@ * Our PSCI implementation stays the same across versions from * v0.2 onward, only adding the few mandatory functions (such * as FEATURES with 1.0) that are required by newer - * revisions. It is thus safe to return the latest, unless - * userspace has instructed us otherwise. + * revisions. It is thus safe to return the latest. 
*/ - if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) { - if (vcpu->kvm->arch.psci_version) - return vcpu->kvm->arch.psci_version; - + if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) return KVM_ARM_PSCI_LATEST; - } return KVM_ARM_PSCI_0_1; } @@ -55,9 +50,2 @@ -struct kvm_one_reg; - -int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu); -int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices); -int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); -int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); - #endif /* __KVM_ARM_PSCI_H__ */ diff -u linux-azure-4.15.0/include/linux/cpu.h linux-azure-4.15.0/include/linux/cpu.h --- linux-azure-4.15.0/include/linux/cpu.h +++ linux-azure-4.15.0/include/linux/cpu.h @@ -55,6 +55,8 @@ struct device_attribute *attr, char *buf); extern ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *attr, char *buf); +extern ssize_t cpu_show_l1tf(struct device *dev, + struct device_attribute *attr, char *buf); extern __printf(4, 5) struct device *cpu_device_create(struct device *parent, void *drvdata, @@ -168,2 +170,21 @@ +enum cpuhp_smt_control { + CPU_SMT_ENABLED, + CPU_SMT_DISABLED, + CPU_SMT_FORCE_DISABLED, + CPU_SMT_NOT_SUPPORTED, +}; + +#if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_SMT) +extern enum cpuhp_smt_control cpu_smt_control; +extern void cpu_smt_disable(bool force); +extern void cpu_smt_check_topology_early(void); +extern void cpu_smt_check_topology(void); +#else +# define cpu_smt_control (CPU_SMT_ENABLED) +static inline void cpu_smt_disable(bool force) { } +static inline void cpu_smt_check_topology_early(void) { } +static inline void cpu_smt_check_topology(void) { } +#endif + #endif /* _LINUX_CPU_H_ */ diff -u linux-azure-4.15.0/include/linux/efi.h linux-azure-4.15.0/include/linux/efi.h --- linux-azure-4.15.0/include/linux/efi.h +++ linux-azure-4.15.0/include/linux/efi.h @@ -395,8 +395,8 @@ u32 attributes; u32 
get_bar_attributes; u32 set_bar_attributes; - u64 romsize; - u32 romimage; + uint64_t romsize; + void *romimage; } efi_pci_io_protocol_32; typedef struct { @@ -415,8 +415,8 @@ u64 attributes; u64 get_bar_attributes; u64 set_bar_attributes; - u64 romsize; - u64 romimage; + uint64_t romsize; + void *romimage; } efi_pci_io_protocol_64; typedef struct { diff -u linux-azure-4.15.0/include/linux/mm.h linux-azure-4.15.0/include/linux/mm.h --- linux-azure-4.15.0/include/linux/mm.h +++ linux-azure-4.15.0/include/linux/mm.h @@ -2457,7 +2457,6 @@ #define FOLL_MLOCK 0x1000 /* lock present pages */ #define FOLL_REMOTE 0x2000 /* we are working on non-current tsk/mm */ #define FOLL_COW 0x4000 /* internal GUP flag */ -#define FOLL_ANON 0x8000 /* don't do file mappings */ static inline int vm_fault_to_errno(int vm_fault, int foll_flags) { reverted: --- linux-azure-4.15.0/include/linux/mtd/flashchip.h +++ linux-azure-4.15.0.orig/include/linux/mtd/flashchip.h @@ -85,7 +85,6 @@ unsigned int write_suspended:1; unsigned int erase_suspended:1; unsigned long in_progress_block_addr; - unsigned long in_progress_block_mask; struct mutex mutex; wait_queue_head_t wq; /* Wait on here when we're waiting for the chip diff -u linux-azure-4.15.0/include/linux/netdevice.h linux-azure-4.15.0/include/linux/netdevice.h --- linux-azure-4.15.0/include/linux/netdevice.h +++ linux-azure-4.15.0/include/linux/netdevice.h @@ -1669,6 +1669,8 @@ unsigned long base_addr; int irq; + atomic_t carrier_changes; + /* * Some hardware also needs these fields (state,dev_list, * napi_list,unreg_list,close_list) but they are not @@ -1706,10 +1708,6 @@ atomic_long_t tx_dropped; atomic_long_t rx_nohandler; - /* Stats to monitor link on/off, flapping */ - atomic_t carrier_up_count; - atomic_t carrier_down_count; - #ifdef CONFIG_WIRELESS_EXT const struct iw_handler_def *wireless_handlers; struct iw_public_data *wireless_data; reverted: --- linux-azure-4.15.0/include/linux/oom.h +++ linux-azure-4.15.0.orig/include/linux/oom.h 
@@ -95,8 +95,6 @@ return 0; } -void __oom_reap_task_mm(struct mm_struct *mm); - extern unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg, const nodemask_t *nodemask, unsigned long totalpages); reverted: --- linux-azure-4.15.0/include/linux/rtnetlink.h +++ linux-azure-4.15.0.orig/include/linux/rtnetlink.h @@ -19,11 +19,10 @@ void rtmsg_ifinfo(int type, struct net_device *dev, unsigned change, gfp_t flags); void rtmsg_ifinfo_newnet(int type, struct net_device *dev, unsigned int change, + gfp_t flags, int *new_nsid); - gfp_t flags, int *new_nsid, int new_ifindex); struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev, unsigned change, u32 event, + gfp_t flags, int *new_nsid); - gfp_t flags, int *new_nsid, - int new_ifindex); void rtmsg_ifinfo_send(struct sk_buff *skb, struct net_device *dev, gfp_t flags); reverted: --- linux-azure-4.15.0/include/linux/serial_core.h +++ linux-azure-4.15.0.orig/include/linux/serial_core.h @@ -351,10 +351,10 @@ char name[16]; char compatible[128]; int (*setup)(struct earlycon_device *, const char *options); +} __aligned(32); -}; +extern const struct earlycon_id __earlycon_table[]; +extern const struct earlycon_id __earlycon_table_end[]; -extern const struct earlycon_id *__earlycon_table[]; -extern const struct earlycon_id *__earlycon_table_end[]; #if defined(CONFIG_SERIAL_EARLYCON) && !defined(MODULE) #define EARLYCON_USED_OR_UNUSED __used @@ -362,19 +362,12 @@ #define EARLYCON_USED_OR_UNUSED __maybe_unused #endif +#define OF_EARLYCON_DECLARE(_name, compat, fn) \ + static const struct earlycon_id __UNIQUE_ID(__earlycon_##_name) \ + EARLYCON_USED_OR_UNUSED __section(__earlycon_table) \ -#define _OF_EARLYCON_DECLARE(_name, compat, fn, unique_id) \ - static const struct earlycon_id unique_id \ - EARLYCON_USED_OR_UNUSED __initconst \ = { .name = __stringify(_name), \ .compatible = compat, \ + .setup = fn } - .setup = fn }; \ - static const struct earlycon_id EARLYCON_USED_OR_UNUSED \ - 
__section(__earlycon_table) \ - * const __PASTE(__p, unique_id) = &unique_id - -#define OF_EARLYCON_DECLARE(_name, compat, fn) \ - _OF_EARLYCON_DECLARE(_name, compat, fn, \ - __UNIQUE_ID(__earlycon_##_name)) #define EARLYCON_DECLARE(_name, fn) OF_EARLYCON_DECLARE(_name, "", fn) diff -u linux-azure-4.15.0/include/linux/tty.h linux-azure-4.15.0/include/linux/tty.h --- linux-azure-4.15.0/include/linux/tty.h +++ linux-azure-4.15.0/include/linux/tty.h @@ -701,7 +701,7 @@ extern int tty_set_ldisc(struct tty_struct *tty, int disc); extern int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty); extern void tty_ldisc_release(struct tty_struct *tty); -extern int __must_check tty_ldisc_init(struct tty_struct *tty); +extern void tty_ldisc_init(struct tty_struct *tty); extern void tty_ldisc_deinit(struct tty_struct *tty); extern int tty_ldisc_receive_buf(struct tty_ldisc *ld, const unsigned char *p, char *f, int count); reverted: --- linux-azure-4.15.0/include/linux/u64_stats_sync.h +++ linux-azure-4.15.0.orig/include/linux/u64_stats_sync.h @@ -90,28 +90,6 @@ #endif } -static inline unsigned long -u64_stats_update_begin_irqsave(struct u64_stats_sync *syncp) -{ - unsigned long flags = 0; - -#if BITS_PER_LONG==32 && defined(CONFIG_SMP) - local_irq_save(flags); - write_seqcount_begin(&syncp->seq); -#endif - return flags; -} - -static inline void -u64_stats_update_end_irqrestore(struct u64_stats_sync *syncp, - unsigned long flags) -{ -#if BITS_PER_LONG==32 && defined(CONFIG_SMP) - write_seqcount_end(&syncp->seq); - local_irq_restore(flags); -#endif -} - static inline void u64_stats_update_begin_raw(struct u64_stats_sync *syncp) { #if BITS_PER_LONG==32 && defined(CONFIG_SMP) reverted: --- linux-azure-4.15.0/include/linux/usb/composite.h +++ linux-azure-4.15.0.orig/include/linux/usb/composite.h @@ -54,9 +54,6 @@ /* big enough to hold our biggest descriptor */ #define USB_COMP_EP0_BUFSIZ 1024 -/* OS feature descriptor length <= 4kB */ -#define 
USB_COMP_EP0_OS_DESC_BUFSIZ 4096 - #define USB_MS_TO_HS_INTERVAL(x) (ilog2((x * 1000 / 125)) + 1) struct usb_configuration; reverted: --- linux-azure-4.15.0/include/linux/vga_switcheroo.h +++ linux-azure-4.15.0.orig/include/linux/vga_switcheroo.h @@ -84,8 +84,8 @@ * Client identifier. Audio clients use the same identifier & 0x100. */ enum vga_switcheroo_client_id { + VGA_SWITCHEROO_UNKNOWN_ID = -1, + VGA_SWITCHEROO_IGD, - VGA_SWITCHEROO_UNKNOWN_ID = 0x1000, - VGA_SWITCHEROO_IGD = 0, VGA_SWITCHEROO_DIS, VGA_SWITCHEROO_MAX_CLIENTS, }; @@ -151,7 +151,7 @@ bool driver_power_control); int vga_switcheroo_register_audio_client(struct pci_dev *pdev, const struct vga_switcheroo_client_ops *ops, + enum vga_switcheroo_client_id id); - struct pci_dev *vga_dev); void vga_switcheroo_client_fb_set(struct pci_dev *dev, struct fb_info *info); @@ -183,7 +183,7 @@ enum vga_switcheroo_handler_flags_t handler_flags) { return 0; } static inline int vga_switcheroo_register_audio_client(struct pci_dev *pdev, const struct vga_switcheroo_client_ops *ops, + enum vga_switcheroo_client_id id) { return 0; } - struct pci_dev *vga_dev) { return 0; } static inline void vga_switcheroo_unregister_handler(void) {} static inline enum vga_switcheroo_handler_flags_t vga_switcheroo_handler_flags(void) { return 0; } static inline int vga_switcheroo_lock_ddc(struct pci_dev *pdev) { return -ENODEV; } reverted: --- linux-azure-4.15.0/include/linux/virtio.h +++ linux-azure-4.15.0.orig/include/linux/virtio.h @@ -157,9 +157,6 @@ int virtio_device_restore(struct virtio_device *dev); #endif -#define virtio_device_for_each_vq(vdev, vq) \ - list_for_each_entry(vq, &vdev->vqs, list) - /** * virtio_driver - operations for a virtio I/O driver * @driver: underlying device driver (populate name and owner). 
reverted: --- linux-azure-4.15.0/include/linux/wait_bit.h +++ linux-azure-4.15.0.orig/include/linux/wait_bit.h @@ -262,21 +262,4 @@ return out_of_line_wait_on_atomic_t(val, action, mode); } -/** - * clear_and_wake_up_bit - clear a bit and wake up anyone waiting on that bit - * - * @bit: the bit of the word being waited on - * @word: the word being waited on, a kernel virtual address - * - * You can use this helper if bitflags are manipulated atomically rather than - * non-atomically under a lock. - */ -static inline void clear_and_wake_up_bit(int bit, void *word) -{ - clear_bit_unlock(bit, word); - /* See wake_up_bit() for which memory barrier you need to use. */ - smp_mb__after_atomic(); - wake_up_bit(word, bit); -} - #endif /* _LINUX_WAIT_BIT_H */ reverted: --- linux-azure-4.15.0/include/net/bonding.h +++ linux-azure-4.15.0.orig/include/net/bonding.h @@ -198,7 +198,6 @@ struct slave __rcu *primary_slave; struct bond_up_slave __rcu *slave_arr; /* Array of usable slaves */ bool force_primary; - u32 nest_level; s32 slave_cnt; /* never change this value outside the attach/detach wrappers */ int (*recv_probe)(const struct sk_buff *, struct bonding *, struct slave *); reverted: --- linux-azure-4.15.0/include/net/inet_timewait_sock.h +++ linux-azure-4.15.0.orig/include/net/inet_timewait_sock.h @@ -43,7 +43,6 @@ #define tw_family __tw_common.skc_family #define tw_state __tw_common.skc_state #define tw_reuse __tw_common.skc_reuse -#define tw_reuseport __tw_common.skc_reuseport #define tw_ipv6only __tw_common.skc_ipv6only #define tw_bound_dev_if __tw_common.skc_bound_dev_if #define tw_node __tw_common.skc_nulls_node reverted: --- linux-azure-4.15.0/include/net/nexthop.h +++ linux-azure-4.15.0.orig/include/net/nexthop.h @@ -7,7 +7,7 @@ static inline int rtnh_ok(const struct rtnexthop *rtnh, int remaining) { + return remaining >= sizeof(*rtnh) && - return remaining >= (int)sizeof(*rtnh) && rtnh->rtnh_len >= sizeof(*rtnh) && rtnh->rtnh_len <= remaining; } reverted: --- 
linux-azure-4.15.0/include/net/tls.h +++ linux-azure-4.15.0.orig/include/net/tls.h @@ -100,7 +100,6 @@ struct scatterlist *partially_sent_record; u16 partially_sent_offset; unsigned long flags; - bool in_tcp_sendpages; u16 pending_open_record_frags; int (*push_pending_record)(struct sock *sk, int flags); reverted: --- linux-azure-4.15.0/include/scsi/scsi.h +++ linux-azure-4.15.0.orig/include/scsi/scsi.h @@ -47,8 +47,6 @@ */ status &= 0xfe; return ((status == SAM_STAT_GOOD) || - (status == SAM_STAT_CONDITION_MET) || - /* Next two "intermediate" statuses are obsolete in SAM-4 */ (status == SAM_STAT_INTERMEDIATE) || (status == SAM_STAT_INTERMEDIATE_CONDITION_MET) || /* FIXME: this is obsolete in SAM-3 */ reverted: --- linux-azure-4.15.0/include/sound/control.h +++ linux-azure-4.15.0.orig/include/sound/control.h @@ -23,7 +23,6 @@ */ #include -#include #include #define snd_kcontrol_chip(kcontrol) ((kcontrol)->private_data) @@ -149,14 +148,12 @@ static inline unsigned int snd_ctl_get_ioffnum(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id) { + return id->numid - kctl->id.numid; - unsigned int ioff = id->numid - kctl->id.numid; - return array_index_nospec(ioff, kctl->count); } static inline unsigned int snd_ctl_get_ioffidx(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id) { + return id->index - kctl->id.index; - unsigned int ioff = id->index - kctl->id.index; - return array_index_nospec(ioff, kctl->count); } static inline unsigned int snd_ctl_get_ioff(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id) diff -u linux-azure-4.15.0/include/trace/events/xen.h linux-azure-4.15.0/include/trace/events/xen.h --- linux-azure-4.15.0/include/trace/events/xen.h +++ linux-azure-4.15.0/include/trace/events/xen.h @@ -352,6 +352,22 @@ DEFINE_XEN_MMU_PGD_EVENT(xen_mmu_pgd_pin); DEFINE_XEN_MMU_PGD_EVENT(xen_mmu_pgd_unpin); +TRACE_EVENT(xen_mmu_flush_tlb_all, + TP_PROTO(int x), + TP_ARGS(x), + TP_STRUCT__entry(__array(char, x, 0)), + TP_fast_assign((void)x), + TP_printk("%s", 
"") + ); + +TRACE_EVENT(xen_mmu_flush_tlb, + TP_PROTO(int x), + TP_ARGS(x), + TP_STRUCT__entry(__array(char, x, 0)), + TP_fast_assign((void)x), + TP_printk("%s", "") + ); + TRACE_EVENT(xen_mmu_flush_tlb_one_user, TP_PROTO(unsigned long addr), TP_ARGS(addr), diff -u linux-azure-4.15.0/include/uapi/linux/if_link.h linux-azure-4.15.0/include/uapi/linux/if_link.h --- linux-azure-4.15.0/include/uapi/linux/if_link.h +++ linux-azure-4.15.0/include/uapi/linux/if_link.h @@ -161,9 +161,6 @@ IFLA_EVENT, IFLA_NEW_NETNSID, IFLA_IF_NETNSID, - IFLA_CARRIER_UP_COUNT, - IFLA_CARRIER_DOWN_COUNT, - IFLA_NEW_IFINDEX, __IFLA_MAX }; reverted: --- linux-azure-4.15.0/include/uapi/linux/nl80211.h +++ linux-azure-4.15.0.orig/include/uapi/linux/nl80211.h @@ -2618,8 +2618,6 @@ #define NL80211_ATTR_KEYS NL80211_ATTR_KEYS #define NL80211_ATTR_FEATURE_FLAGS NL80211_ATTR_FEATURE_FLAGS -#define NL80211_WIPHY_NAME_MAXLEN 128 - #define NL80211_MAX_SUPP_RATES 32 #define NL80211_MAX_SUPP_HT_RATES 77 #define NL80211_MAX_SUPP_REG_RULES 64 diff -u linux-azure-4.15.0/kernel/events/callchain.c linux-azure-4.15.0/kernel/events/callchain.c --- linux-azure-4.15.0/kernel/events/callchain.c +++ linux-azure-4.15.0/kernel/events/callchain.c @@ -131,8 +131,14 @@ goto exit; } - if (count == 1) - err = alloc_callchain_buffers(); + if (count > 1) { + /* If the allocation failed, give up */ + if (!callchain_cpus_entries) + err = -ENOMEM; + goto exit; + } + + err = alloc_callchain_buffers(); exit: if (err) atomic_dec(&nr_callchain_events); reverted: --- linux-azure-4.15.0/kernel/events/ring_buffer.c +++ linux-azure-4.15.0.orig/kernel/events/ring_buffer.c @@ -14,7 +14,6 @@ #include #include #include -#include #include "internal.h" @@ -868,10 +867,8 @@ return NULL; /* AUX space */ + if (pgoff >= rb->aux_pgoff) + return virt_to_page(rb->aux_pages[pgoff - rb->aux_pgoff]); - if (pgoff >= rb->aux_pgoff) { - int aux_pgoff = array_index_nospec(pgoff - rb->aux_pgoff, rb->aux_nr_pages); - return 
virt_to_page(rb->aux_pages[aux_pgoff]); - } } return __perf_mmap_to_page(rb, pgoff); diff -u linux-azure-4.15.0/kernel/module.c linux-azure-4.15.0/kernel/module.c --- linux-azure-4.15.0/kernel/module.c +++ linux-azure-4.15.0/kernel/module.c @@ -1472,8 +1472,7 @@ { struct module_sect_attr *sattr = container_of(mattr, struct module_sect_attr, mattr); - return sprintf(buf, "0x%px\n", kptr_restrict < 2 ? - (void *)sattr->address : NULL); + return sprintf(buf, "0x%pK\n", (void *)sattr->address); } static void free_sect_attrs(struct module_sect_attrs *sect_attrs) reverted: --- linux-azure-4.15.0/kernel/sched/autogroup.c +++ linux-azure-4.15.0.orig/kernel/sched/autogroup.c @@ -7,7 +7,6 @@ #include #include #include -#include unsigned int __read_mostly sysctl_sched_autogroup_enabled = 1; static struct autogroup autogroup_default; @@ -214,7 +213,7 @@ static unsigned long next = INITIAL_JIFFIES; struct autogroup *ag; unsigned long shares; + int err; - int err, idx; if (nice < MIN_NICE || nice > MAX_NICE) return -EINVAL; @@ -232,9 +231,7 @@ next = HZ / 10 + jiffies; ag = autogroup_task_get(p); + shares = scale_load(sched_prio_to_weight[nice + 20]); - - idx = array_index_nospec(nice + 20, 40); - shares = scale_load(sched_prio_to_weight[idx]); down_write(&ag->lock); err = sched_group_set_shares(ag->tg, shares); diff -u linux-azure-4.15.0/kernel/sched/core.c linux-azure-4.15.0/kernel/sched/core.c --- linux-azure-4.15.0/kernel/sched/core.c +++ linux-azure-4.15.0/kernel/sched/core.c @@ -23,7 +23,6 @@ #include #include #include -#include #include #include #include @@ -5649,6 +5648,18 @@ struct rq *rq = cpu_rq(cpu); struct rq_flags rf; +#ifdef CONFIG_SCHED_SMT + /* + * The sched_smt_present static key needs to be evaluated on every + * hotplug event because at boot time SMT might be disabled when + * the number of booted CPUs is limited. + * + * If then later a sibling gets hotplugged, then the key would stay + * off and SMT scheduling would never be functional. 
+ */ + if (cpumask_weight(cpu_smt_mask(cpu)) > 1) + static_branch_enable_cpuslocked(&sched_smt_present); +#endif set_cpu_active(cpu, true); if (sched_smp_initialized) { @@ -5744,22 +5755,6 @@ } #endif -#ifdef CONFIG_SCHED_SMT -DEFINE_STATIC_KEY_FALSE(sched_smt_present); - -static void sched_init_smt(void) -{ - /* - * We've enumerated all CPUs and will assume that if any CPU - * has SMT siblings, CPU0 will too. - */ - if (cpumask_weight(cpu_smt_mask(0)) > 1) - static_branch_enable(&sched_smt_present); -} -#else -static inline void sched_init_smt(void) { } -#endif - void __init sched_init_smp(void) { sched_init_numa(); @@ -5781,8 +5776,6 @@ init_sched_rt_class(); init_sched_dl_class(); - sched_init_smt(); - sched_smp_initialized = true; } @@ -6802,15 +6795,11 @@ struct cftype *cft, s64 nice) { unsigned long weight; - int idx; if (nice < MIN_NICE || nice > MAX_NICE) return -ERANGE; - idx = NICE_TO_PRIO(nice) - MAX_RT_PRIO; - idx = array_index_nospec(idx, 40); - weight = sched_prio_to_weight[idx]; - + weight = sched_prio_to_weight[NICE_TO_PRIO(nice) - MAX_RT_PRIO]; return sched_group_set_shares(css_tg(css), scale_load(weight)); } #endif reverted: --- linux-azure-4.15.0/kernel/sched/cpufreq_schedutil.c +++ linux-azure-4.15.0.orig/kernel/sched/cpufreq_schedutil.c @@ -282,8 +282,7 @@ * Do not reduce the frequency if the CPU has not been idle * recently, as the reduction is likely to be premature then. 
*/ + if (busy && next_f < sg_policy->next_freq) { - if (busy && next_f < sg_policy->next_freq && - sg_policy->next_freq != UINT_MAX) { next_f = sg_policy->next_freq; /* Reset cached freq as next_freq has changed */ reverted: --- linux-azure-4.15.0/kernel/time/clocksource.c +++ linux-azure-4.15.0.orig/kernel/time/clocksource.c @@ -119,16 +119,6 @@ static int watchdog_running; static atomic_t watchdog_reset_pending; -static void inline clocksource_watchdog_lock(unsigned long *flags) -{ - spin_lock_irqsave(&watchdog_lock, *flags); -} - -static void inline clocksource_watchdog_unlock(unsigned long *flags) -{ - spin_unlock_irqrestore(&watchdog_lock, *flags); -} - static int clocksource_watchdog_kthread(void *data); static void __clocksource_change_rating(struct clocksource *cs, int rating); @@ -152,19 +142,9 @@ cs->flags &= ~(CLOCK_SOURCE_VALID_FOR_HRES | CLOCK_SOURCE_WATCHDOG); cs->flags |= CLOCK_SOURCE_UNSTABLE; - /* - * If the clocksource is registered clocksource_watchdog_kthread() will - * re-rate and re-select. - */ - if (list_empty(&cs->list)) { - cs->rating = 0; - return; - } - if (cs->mark_unstable) cs->mark_unstable(cs); - /* kick clocksource_watchdog_kthread() */ if (finished_booting) schedule_work(&watchdog_work); } @@ -184,7 +164,7 @@ spin_lock_irqsave(&watchdog_lock, flags); if (!(cs->flags & CLOCK_SOURCE_UNSTABLE)) { + if (list_empty(&cs->wd_list)) - if (!list_empty(&cs->list) && list_empty(&cs->wd_list)) list_add(&cs->wd_list, &watchdog_list); __clocksource_unstable(cs); } @@ -339,8 +319,9 @@ static void clocksource_enqueue_watchdog(struct clocksource *cs) { + unsigned long flags; - INIT_LIST_HEAD(&cs->wd_list); + spin_lock_irqsave(&watchdog_lock, flags); if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) { /* cs is a clocksource to be watched. 
*/ list_add(&cs->wd_list, &watchdog_list); @@ -350,6 +331,7 @@ if (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS) cs->flags |= CLOCK_SOURCE_VALID_FOR_HRES; } + spin_unlock_irqrestore(&watchdog_lock, flags); } static void clocksource_select_watchdog(bool fallback) @@ -391,6 +373,9 @@ static void clocksource_dequeue_watchdog(struct clocksource *cs) { + unsigned long flags; + + spin_lock_irqsave(&watchdog_lock, flags); if (cs != watchdog) { if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) { /* cs is a watched clocksource. */ @@ -399,19 +384,21 @@ clocksource_stop_watchdog(); } } + spin_unlock_irqrestore(&watchdog_lock, flags); } static int __clocksource_watchdog_kthread(void) { struct clocksource *cs, *tmp; unsigned long flags; + LIST_HEAD(unstable); int select = 0; spin_lock_irqsave(&watchdog_lock, flags); list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) { if (cs->flags & CLOCK_SOURCE_UNSTABLE) { list_del_init(&cs->wd_list); + list_add(&cs->wd_list, &unstable); - __clocksource_change_rating(cs, 0); select = 1; } if (cs->flags & CLOCK_SOURCE_RESELECT) { @@ -423,6 +410,11 @@ clocksource_stop_watchdog(); spin_unlock_irqrestore(&watchdog_lock, flags); + /* Needs to be done outside of watchdog lock */ + list_for_each_entry_safe(cs, tmp, &unstable, wd_list) { + list_del_init(&cs->wd_list); + __clocksource_change_rating(cs, 0); + } return select; } @@ -455,9 +447,6 @@ static bool clocksource_is_watchdog(struct clocksource *cs) { return false; } void clocksource_mark_unstable(struct clocksource *cs) { } -static void inline clocksource_watchdog_lock(unsigned long *flags) { } -static void inline clocksource_watchdog_unlock(unsigned long *flags) { } - #endif /* CONFIG_CLOCKSOURCE_WATCHDOG */ /** @@ -786,19 +775,14 @@ */ int __clocksource_register_scale(struct clocksource *cs, u32 scale, u32 freq) { - unsigned long flags; /* Initialize mult/shift and max_idle_ns */ __clocksource_update_freq_scale(cs, scale, freq); /* Add clocksource to the clocksource list */ 
mutex_lock(&clocksource_mutex); - - clocksource_watchdog_lock(&flags); clocksource_enqueue(cs); clocksource_enqueue_watchdog(cs); - clocksource_watchdog_unlock(&flags); - clocksource_select(); clocksource_select_watchdog(false); mutex_unlock(&clocksource_mutex); @@ -820,13 +804,8 @@ */ void clocksource_change_rating(struct clocksource *cs, int rating) { - unsigned long flags; - mutex_lock(&clocksource_mutex); - clocksource_watchdog_lock(&flags); __clocksource_change_rating(cs, rating); - clocksource_watchdog_unlock(&flags); - clocksource_select(); clocksource_select_watchdog(false); mutex_unlock(&clocksource_mutex); @@ -838,8 +817,6 @@ */ static int clocksource_unbind(struct clocksource *cs) { - unsigned long flags; - if (clocksource_is_watchdog(cs)) { /* Select and try to install a replacement watchdog. */ clocksource_select_watchdog(true); @@ -853,12 +830,8 @@ if (curr_clocksource == cs) return -EBUSY; } - - clocksource_watchdog_lock(&flags); clocksource_dequeue_watchdog(cs); list_del_init(&cs->list); - clocksource_watchdog_unlock(&flags); - return 0; } reverted: --- linux-azure-4.15.0/kernel/time/tick-broadcast.c +++ linux-azure-4.15.0.orig/kernel/time/tick-broadcast.c @@ -612,14 +612,6 @@ now = ktime_get(); /* Find all expired events */ for_each_cpu(cpu, tick_broadcast_oneshot_mask) { - /* - * Required for !SMP because for_each_cpu() reports - * unconditionally CPU0 as set on UP kernels. 
- */ - if (!IS_ENABLED(CONFIG_SMP) && - cpumask_empty(tick_broadcast_oneshot_mask)) - break; - td = &per_cpu(tick_cpu_device, cpu); if (td->evtdev->next_event <= now) { cpumask_set_cpu(cpu, tmpmask); reverted: --- linux-azure-4.15.0/kernel/time/tick-sched.c +++ linux-azure-4.15.0.orig/kernel/time/tick-sched.c @@ -797,13 +797,12 @@ goto out; } + hrtimer_set_expires(&ts->sched_timer, tick); + + if (ts->nohz_mode == NOHZ_MODE_HIGHRES) + hrtimer_start_expires(&ts->sched_timer, HRTIMER_MODE_ABS_PINNED); + else - if (ts->nohz_mode == NOHZ_MODE_HIGHRES) { - hrtimer_start(&ts->sched_timer, tick, HRTIMER_MODE_ABS_PINNED); - } else { - hrtimer_set_expires(&ts->sched_timer, tick); tick_program_event(tick, 1); - } - out: /* * Update the estimated sleep length until the next timer diff -u linux-azure-4.15.0/kernel/trace/trace_events_filter.c linux-azure-4.15.0/kernel/trace/trace_events_filter.c --- linux-azure-4.15.0/kernel/trace/trace_events_filter.c +++ linux-azure-4.15.0/kernel/trace/trace_events_filter.c @@ -338,9 +338,6 @@ static int regex_match_front(char *str, struct regex *r, int len) { - if (len < r->len) - return 0; - if (strncmp(str, r->pattern, r->len) == 0) return 1; return 0; diff -u linux-azure-4.15.0/kernel/trace/trace_uprobe.c linux-azure-4.15.0/kernel/trace/trace_uprobe.c --- linux-azure-4.15.0/kernel/trace/trace_uprobe.c +++ linux-azure-4.15.0/kernel/trace/trace_uprobe.c @@ -55,7 +55,6 @@ struct list_head list; struct trace_uprobe_filter filter; struct uprobe_consumer consumer; - struct path path; struct inode *inode; char *filename; unsigned long offset; @@ -152,8 +151,6 @@ return; ret = strncpy_from_user(dst, src, maxlen); - if (ret == maxlen) - dst[--ret] = '\0'; if (ret < 0) { /* Failed to fetch string */ ((u8 *)get_rloc_data(dest))[0] = '\0'; @@ -290,7 +287,7 @@ for (i = 0; i < tu->tp.nr_args; i++) traceprobe_free_probe_arg(&tu->tp.args[i]); - path_put(&tu->path); + iput(tu->inode); kfree(tu->tp.call.class->system); kfree(tu->tp.call.name); 
kfree(tu->filename); @@ -364,6 +361,7 @@ static int create_trace_uprobe(int argc, char **argv) { struct trace_uprobe *tu; + struct inode *inode; char *arg, *event, *group, *filename; char buf[MAX_EVENT_NAME_LEN]; struct path path; @@ -371,6 +369,7 @@ bool is_delete, is_return; int i, ret; + inode = NULL; ret = 0; is_delete = false; is_return = false; @@ -436,16 +435,21 @@ } /* Find the last occurrence, in case the path contains ':' too. */ arg = strrchr(argv[1], ':'); - if (!arg) - return -EINVAL; + if (!arg) { + ret = -EINVAL; + goto fail_address_parse; + } *arg++ = '\0'; filename = argv[1]; ret = kern_path(filename, LOOKUP_FOLLOW, &path); if (ret) - return ret; + goto fail_address_parse; - if (!d_is_reg(path.dentry)) { + inode = igrab(d_inode(path.dentry)); + path_put(&path); + + if (!inode || !S_ISREG(inode->i_mode)) { ret = -EINVAL; goto fail_address_parse; } @@ -484,7 +488,7 @@ goto fail_address_parse; } tu->offset = offset; - tu->path = path; + tu->inode = inode; tu->filename = kstrdup(filename, GFP_KERNEL); if (!tu->filename) { @@ -552,7 +556,7 @@ return ret; fail_address_parse: - path_put(&path); + iput(inode); pr_info("Failed to parse address or file.\n"); @@ -931,7 +935,6 @@ goto err_flags; tu->consumer.filter = filter; - tu->inode = d_real_inode(tu->path.dentry); ret = uprobe_register(tu->inode, tu->offset, &tu->consumer); if (ret) goto err_buffer; @@ -977,7 +980,6 @@ WARN_ON(!uprobe_filter_is_empty(&tu->filter)); uprobe_unregister(tu->inode, tu->offset, &tu->consumer); - tu->inode = NULL; tu->tp.flags &= file ? 
~TP_FLAG_TRACE : ~TP_FLAG_PROFILE; uprobe_buffer_disable(); reverted: --- linux-azure-4.15.0/kernel/tracepoint.c +++ linux-azure-4.15.0.orig/kernel/tracepoint.c @@ -207,7 +207,7 @@ lockdep_is_held(&tracepoints_mutex)); old = func_add(&tp_funcs, func, prio); if (IS_ERR(old)) { + WARN_ON_ONCE(1); - WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM); return PTR_ERR(old); } @@ -240,7 +240,7 @@ lockdep_is_held(&tracepoints_mutex)); old = func_remove(&tp_funcs, func); if (IS_ERR(old)) { + WARN_ON_ONCE(1); - WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM); return PTR_ERR(old); } reverted: --- linux-azure-4.15.0/lib/errseq.c +++ linux-azure-4.15.0.orig/lib/errseq.c @@ -111,22 +111,25 @@ * errseq_sample - grab current errseq_t value * @eseq: pointer to errseq_t to be sampled * + * This function allows callers to sample an errseq_t value, marking it as + * "seen" if required. - * This function allows callers to initialise their errseq_t variable. - * If the error has been "seen", new callers will not see an old error. - * If there is an unseen error in @eseq, the caller of this function will - * see it the next time it checks for an error. - * - * Context: Any context. - * Return: The current errseq value. */ errseq_t errseq_sample(errseq_t *eseq) { errseq_t old = READ_ONCE(*eseq); + errseq_t new = old; + /* + * For the common case of no errors ever having been set, we can skip + * marking the SEEN bit. Once an error has been set, the value will + * never go back to zero. + */ + if (old != 0) { + new |= ERRSEQ_SEEN; + if (old != new) + cmpxchg(eseq, old, new); + } + return new; - /* If nobody has seen this error yet, then we can be the first. 
*/ - if (!(old & ERRSEQ_SEEN)) - old = 0; - return old; } EXPORT_SYMBOL(errseq_sample); reverted: --- linux-azure-4.15.0/lib/kobject.c +++ linux-azure-4.15.0.orig/lib/kobject.c @@ -234,12 +234,14 @@ /* be noisy on error issues */ if (error == -EEXIST) + WARN(1, "%s failed for %s with " + "-EEXIST, don't try to register things with " + "the same name in the same directory.\n", + __func__, kobject_name(kobj)); - pr_err("%s failed for %s with -EEXIST, don't try to register things with the same name in the same directory.\n", - __func__, kobject_name(kobj)); else + WARN(1, "%s failed for %s (error: %d parent: %s)\n", + __func__, kobject_name(kobj), error, + parent ? kobject_name(parent) : "'none'"); - pr_err("%s failed for %s (error: %d parent: %s)\n", - __func__, kobject_name(kobj), error, - parent ? kobject_name(parent) : "'none'"); } else kobj->state_in_sysfs = 1; reverted: --- linux-azure-4.15.0/lib/radix-tree.c +++ linux-azure-4.15.0.orig/lib/radix-tree.c @@ -1611,9 +1611,11 @@ static void __rcu **skip_siblings(struct radix_tree_node **nodep, void __rcu **slot, struct radix_tree_iter *iter) { + void *sib = node_to_entry(slot - 1); + while (iter->index < iter->next_index) { *nodep = rcu_dereference_raw(*slot); + if (*nodep && *nodep != sib) - if (*nodep && !is_sibling_entry(iter->node, *nodep)) return slot; slot++; iter->index = __radix_tree_iter_add(iter, 1); @@ -1628,7 +1630,7 @@ struct radix_tree_iter *iter, unsigned flags) { unsigned tag = flags & RADIX_TREE_ITER_TAG_MASK; + struct radix_tree_node *node = rcu_dereference_raw(*slot); - struct radix_tree_node *node; slot = skip_siblings(&node, slot, iter); diff -u linux-azure-4.15.0/lib/test_bitmap.c linux-azure-4.15.0/lib/test_bitmap.c --- linux-azure-4.15.0/lib/test_bitmap.c +++ linux-azure-4.15.0/lib/test_bitmap.c @@ -434,32 +434,23 @@ unsigned int start, nbits; for (start = 0; start < 1024; start += 8) { + memset(bmap1, 0x5a, sizeof(bmap1)); + memset(bmap2, 0x5a, sizeof(bmap2)); for (nbits = 0; nbits < 1024 - 
start; nbits += 8) { - memset(bmap1, 0x5a, sizeof(bmap1)); - memset(bmap2, 0x5a, sizeof(bmap2)); - bitmap_set(bmap1, start, nbits); __bitmap_set(bmap2, start, nbits); - if (!bitmap_equal(bmap1, bmap2, 1024)) { + if (!bitmap_equal(bmap1, bmap2, 1024)) printk("set not equal %d %d\n", start, nbits); - failed_tests++; - } - if (!__bitmap_equal(bmap1, bmap2, 1024)) { + if (!__bitmap_equal(bmap1, bmap2, 1024)) printk("set not __equal %d %d\n", start, nbits); - failed_tests++; - } bitmap_clear(bmap1, start, nbits); __bitmap_clear(bmap2, start, nbits); - if (!bitmap_equal(bmap1, bmap2, 1024)) { + if (!bitmap_equal(bmap1, bmap2, 1024)) printk("clear not equal %d %d\n", start, nbits); - failed_tests++; - } - if (!__bitmap_equal(bmap1, bmap2, 1024)) { + if (!__bitmap_equal(bmap1, bmap2, 1024)) printk("clear not __equal %d %d\n", start, nbits); - failed_tests++; - } } } } diff -u linux-azure-4.15.0/lib/test_bpf.c linux-azure-4.15.0/lib/test_bpf.c --- linux-azure-4.15.0/lib/test_bpf.c +++ linux-azure-4.15.0/lib/test_bpf.c @@ -5419,31 +5419,21 @@ { /* Mainly checking JIT here. */ "BPF_MAXINSNS: Ctx heavy transformations", { }, -#if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) - CLASSIC | FLAG_EXPECTED_FAIL, -#else CLASSIC, -#endif { }, { { 1, !!(SKB_VLAN_TCI & VLAN_TAG_PRESENT) }, { 10, !!(SKB_VLAN_TCI & VLAN_TAG_PRESENT) } }, .fill_helper = bpf_fill_maxinsns6, - .expected_errcode = -ENOTSUPP, }, { /* Mainly checking JIT here. */ "BPF_MAXINSNS: Call heavy transformations", { }, -#if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) - CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL, -#else CLASSIC | FLAG_NO_DATA, -#endif { }, { { 1, 0 }, { 10, 0 } }, .fill_helper = bpf_fill_maxinsns7, - .expected_errcode = -ENOTSUPP, }, { /* Mainly checking JIT here. 
*/ "BPF_MAXINSNS: Jump heavy test", @@ -5485,17 +5475,11 @@ { "BPF_MAXINSNS: ld_abs+get_processor_id", { }, -#if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) - CLASSIC | FLAG_EXPECTED_FAIL, -#else CLASSIC, -#endif { }, { { 1, 0xbee } }, .fill_helper = bpf_fill_ld_abs_get_processor_id, - .expected_errcode = -ENOTSUPP, }, -#if !(defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390)) { "BPF_MAXINSNS: ld_abs+vlan_push/pop", { }, @@ -5512,7 +5496,6 @@ { { 2, 10 } }, .fill_helper = bpf_fill_jump_around_ld_abs, }, -#endif /* * LD_IND / LD_ABS on fragmented SKBs */ diff -u linux-azure-4.15.0/lib/vsprintf.c linux-azure-4.15.0/lib/vsprintf.c --- linux-azure-4.15.0/lib/vsprintf.c +++ linux-azure-4.15.0/lib/vsprintf.c @@ -1660,22 +1660,19 @@ return number(buf, end, (unsigned long int)ptr, spec); } -static DEFINE_STATIC_KEY_TRUE(not_filled_random_ptr_key); +static bool have_filled_random_ptr_key __read_mostly; static siphash_key_t ptr_key __read_mostly; -static void enable_ptr_key_workfn(struct work_struct *work) -{ - get_random_bytes(&ptr_key, sizeof(ptr_key)); - /* Needs to run from preemptible context */ - static_branch_disable(¬_filled_random_ptr_key); -} - -static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn); - static void fill_random_ptr_key(struct random_ready_callback *unused) { - /* This may be in an interrupt handler. */ - queue_work(system_unbound_wq, &enable_ptr_key_work); + get_random_bytes(&ptr_key, sizeof(ptr_key)); + /* + * have_filled_random_ptr_key==true is dependent on get_random_bytes(). + * ptr_to_id() needs to see have_filled_random_ptr_key==true + * after get_random_bytes() returns. 
+ */ + smp_mb(); + WRITE_ONCE(have_filled_random_ptr_key, true); } static struct random_ready_callback random_ready = { @@ -1689,8 +1686,7 @@ if (!ret) { return 0; } else if (ret == -EALREADY) { - /* This is in preemptible context */ - enable_ptr_key_workfn(&enable_ptr_key_work); + fill_random_ptr_key(&random_ready); return 0; } @@ -1704,7 +1700,7 @@ unsigned long hashval; const int default_width = 2 * sizeof(ptr); - if (static_branch_unlikely(¬_filled_random_ptr_key)) { + if (unlikely(!have_filled_random_ptr_key)) { spec.field_width = default_width; /* string length must be less than default_width */ return string(buf, end, "(ptrval)", spec); reverted: --- linux-azure-4.15.0/mm/Kconfig +++ linux-azure-4.15.0.orig/mm/Kconfig @@ -649,7 +649,6 @@ depends on ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT depends on NO_BOOTMEM && MEMORY_HOTPLUG depends on !FLATMEM - depends on !NEED_PER_CPU_KM help Ordinarily all struct pages are initialised during early boot in a single thread. On very large machines this can take a considerable reverted: --- linux-azure-4.15.0/mm/backing-dev.c +++ linux-azure-4.15.0.orig/mm/backing-dev.c @@ -126,7 +126,6 @@ bdi, &bdi_debug_stats_fops); if (!bdi->debug_stats) { debugfs_remove(bdi->debug_dir); - bdi->debug_dir = NULL; return -ENOMEM; } @@ -395,7 +394,7 @@ * the barrier provided by test_and_clear_bit() above. 
*/ smp_wmb(); + clear_bit(WB_shutting_down, &wb->state); - clear_and_wake_up_bit(WB_shutting_down, &wb->state); } static void wb_exit(struct bdi_writeback *wb) diff -u linux-azure-4.15.0/mm/gup.c linux-azure-4.15.0/mm/gup.c --- linux-azure-4.15.0/mm/gup.c +++ linux-azure-4.15.0/mm/gup.c @@ -544,9 +544,6 @@ if (vm_flags & (VM_IO | VM_PFNMAP)) return -EFAULT; - if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma)) - return -EFAULT; - if (write) { if (!(vm_flags & VM_WRITE)) { if (!(gup_flags & FOLL_FORCE)) diff -u linux-azure-4.15.0/mm/memcontrol.c linux-azure-4.15.0/mm/memcontrol.c --- linux-azure-4.15.0/mm/memcontrol.c +++ linux-azure-4.15.0/mm/memcontrol.c @@ -4187,9 +4187,6 @@ { struct mem_cgroup_per_node *pn = memcg->nodeinfo[node]; - if (!pn) - return; - free_percpu(pn->lruvec_stat); kfree(pn); } diff -u linux-azure-4.15.0/mm/memory.c linux-azure-4.15.0/mm/memory.c --- linux-azure-4.15.0/mm/memory.c +++ linux-azure-4.15.0/mm/memory.c @@ -1888,6 +1888,9 @@ if (addr < vma->vm_start || addr >= vma->vm_end) return -EFAULT; + if (!pfn_modify_allowed(pfn, pgprot)) + return -EACCES; + track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV)); ret = insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot, @@ -1923,6 +1926,9 @@ track_pfn_insert(vma, &pgprot, pfn); + if (!pfn_modify_allowed(pfn_t_to_pfn(pfn), pgprot)) + return -EACCES; + /* * If we don't have pte special, then we have to use the pfn_valid() * based VM_MIXEDMAP scheme (see vm_normal_page), and thus we *must* @@ -1970,6 +1976,7 @@ { pte_t *pte; spinlock_t *ptl; + int err = 0; pte = pte_alloc_map_lock(mm, pmd, addr, &ptl); if (!pte) @@ -1977,12 +1984,16 @@ arch_enter_lazy_mmu_mode(); do { BUG_ON(!pte_none(*pte)); + if (!pfn_modify_allowed(pfn, prot)) { + err = -EACCES; + break; + } set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot))); pfn++; } while (pte++, addr += PAGE_SIZE, addr != end); arch_leave_lazy_mmu_mode(); pte_unmap_unlock(pte - 1, ptl); - return 0; + return err; } static inline int 
remap_pmd_range(struct mm_struct *mm, pud_t *pud, @@ -1991,6 +2002,7 @@ { pmd_t *pmd; unsigned long next; + int err; pfn -= addr >> PAGE_SHIFT; pmd = pmd_alloc(mm, pud, addr); @@ -1999,9 +2011,10 @@ VM_BUG_ON(pmd_trans_huge(*pmd)); do { next = pmd_addr_end(addr, end); - if (remap_pte_range(mm, pmd, addr, next, - pfn + (addr >> PAGE_SHIFT), prot)) - return -ENOMEM; + err = remap_pte_range(mm, pmd, addr, next, + pfn + (addr >> PAGE_SHIFT), prot); + if (err) + return err; } while (pmd++, addr = next, addr != end); return 0; } @@ -2012,6 +2025,7 @@ { pud_t *pud; unsigned long next; + int err; pfn -= addr >> PAGE_SHIFT; pud = pud_alloc(mm, p4d, addr); @@ -2019,9 +2033,10 @@ return -ENOMEM; do { next = pud_addr_end(addr, end); - if (remap_pmd_range(mm, pud, addr, next, - pfn + (addr >> PAGE_SHIFT), prot)) - return -ENOMEM; + err = remap_pmd_range(mm, pud, addr, next, + pfn + (addr >> PAGE_SHIFT), prot); + if (err) + return err; } while (pud++, addr = next, addr != end); return 0; } @@ -2032,6 +2047,7 @@ { p4d_t *p4d; unsigned long next; + int err; pfn -= addr >> PAGE_SHIFT; p4d = p4d_alloc(mm, pgd, addr); @@ -2039,9 +2055,10 @@ return -ENOMEM; do { next = p4d_addr_end(addr, end); - if (remap_pud_range(mm, p4d, addr, next, - pfn + (addr >> PAGE_SHIFT), prot)) - return -ENOMEM; + err = remap_pud_range(mm, p4d, addr, next, + pfn + (addr >> PAGE_SHIFT), prot); + if (err) + return err; } while (p4d++, addr = next, addr != end); return 0; } diff -u linux-azure-4.15.0/mm/mmap.c linux-azure-4.15.0/mm/mmap.c --- linux-azure-4.15.0/mm/mmap.c +++ linux-azure-4.15.0/mm/mmap.c @@ -100,11 +100,20 @@ __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111 }; +#ifndef CONFIG_ARCH_HAS_FILTER_PGPROT +static inline pgprot_t arch_filter_pgprot(pgprot_t prot) +{ + return prot; +} +#endif + pgprot_t vm_get_page_prot(unsigned long vm_flags) { - return __pgprot(pgprot_val(protection_map[vm_flags & + pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags & 
(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) | pgprot_val(arch_vm_get_page_prot(vm_flags))); + + return arch_filter_pgprot(ret); } EXPORT_SYMBOL(vm_get_page_prot); @@ -3014,32 +3023,6 @@ /* mm's last user has gone, and its about to be pulled down */ mmu_notifier_release(mm); - if (unlikely(mm_is_oom_victim(mm))) { - /* - * Manually reap the mm to free as much memory as possible. - * Then, as the oom reaper does, set MMF_OOM_SKIP to disregard - * this mm from further consideration. Taking mm->mmap_sem for - * write after setting MMF_OOM_SKIP will guarantee that the oom - * reaper will not run on this mm again after mmap_sem is - * dropped. - * - * Nothing can be holding mm->mmap_sem here and the above call - * to mmu_notifier_release(mm) ensures mmu notifier callbacks in - * __oom_reap_task_mm() will not block. - * - * This needs to be done before calling munlock_vma_pages_all(), - * which clears VM_LOCKED, otherwise the oom reaper cannot - * reliably test it. - */ - mutex_lock(&oom_lock); - __oom_reap_task_mm(mm); - mutex_unlock(&oom_lock); - - set_bit(MMF_OOM_SKIP, &mm->flags); - down_write(&mm->mmap_sem); - up_write(&mm->mmap_sem); - } - if (mm->locked_vm) { vma = mm->mmap; while (vma) { @@ -3061,6 +3044,24 @@ /* update_hiwater_rss(mm) here? but nobody should be looking */ /* Use -1 here to ensure all VMAs in the mm are unmapped */ unmap_vmas(&tlb, vma, 0, -1); + + if (unlikely(mm_is_oom_victim(mm))) { + /* + * Wait for oom_reap_task() to stop working on this + * mm. Because MMF_OOM_SKIP is already set before + * calling down_read(), oom_reap_task() will not run + * on this "mm" post up_write(). + * + * mm_is_oom_victim() cannot be set from under us + * either because victim->mm is already set to NULL + * under task_lock before calling mmput and oom_mm is + * set not NULL by the OOM killer only if victim->mm + * is found not NULL while holding the task_lock. 
+ */ + set_bit(MMF_OOM_SKIP, &mm->flags); + down_write(&mm->mmap_sem); + up_write(&mm->mmap_sem); + } free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING); tlb_finish_mmu(&tlb, 0, -1); reverted: --- linux-azure-4.15.0/mm/oom_kill.c +++ linux-azure-4.15.0.orig/mm/oom_kill.c @@ -474,6 +474,7 @@ return false; } + #ifdef CONFIG_MMU /* * OOM Reaper kernel thread which tries to reap the memory used by the OOM @@ -484,51 +485,16 @@ static struct task_struct *oom_reaper_list; static DEFINE_SPINLOCK(oom_reaper_lock); +static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) -void __oom_reap_task_mm(struct mm_struct *mm) { + struct mmu_gather tlb; struct vm_area_struct *vma; - - /* - * Tell all users of get_user/copy_from_user etc... that the content - * is no longer stable. No barriers really needed because unmapping - * should imply barriers already and the reader would hit a page fault - * if it stumbled over a reaped memory. - */ - set_bit(MMF_UNSTABLE, &mm->flags); - - for (vma = mm->mmap ; vma; vma = vma->vm_next) { - if (!can_madv_dontneed_vma(vma)) - continue; - - /* - * Only anonymous pages have a good chance to be dropped - * without additional steps which we cannot afford as we - * are OOM already. - * - * We do not even care about fs backed pages because all - * which are reclaimable have already been reclaimed and - * we do not want to block exit_mmap by keeping mm ref - * count elevated without a good reason. 
- */ - if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) { - struct mmu_gather tlb; - - tlb_gather_mmu(&tlb, mm, vma->vm_start, vma->vm_end); - unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end, - NULL); - tlb_finish_mmu(&tlb, vma->vm_start, vma->vm_end); - } - } -} - -static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) -{ bool ret = true; /* * We have to make sure to not race with the victim exit path * and cause premature new oom victim selection: + * __oom_reap_task_mm exit_mm - * oom_reap_task_mm exit_mm * mmget_not_zero * mmput * atomic_dec_and_test @@ -576,8 +542,35 @@ trace_start_task_reaping(tsk->pid); + /* + * Tell all users of get_user/copy_from_user etc... that the content + * is no longer stable. No barriers really needed because unmapping + * should imply barriers already and the reader would hit a page fault + * if it stumbled over a reaped memory. + */ + set_bit(MMF_UNSTABLE, &mm->flags); + + for (vma = mm->mmap ; vma; vma = vma->vm_next) { + if (!can_madv_dontneed_vma(vma)) + continue; - __oom_reap_task_mm(mm); + /* + * Only anonymous pages have a good chance to be dropped + * without additional steps which we cannot afford as we + * are OOM already. + * + * We do not even care about fs backed pages because all + * which are reclaimable have already been reclaimed and + * we do not want to block exit_mmap by keeping mm ref + * count elevated without a good reason. 
+ */ + if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) { + tlb_gather_mmu(&tlb, mm, vma->vm_start, vma->vm_end); + unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end, + NULL); + tlb_finish_mmu(&tlb, vma->vm_start, vma->vm_end); + } + } pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n", task_pid_nr(tsk), tsk->comm, K(get_mm_counter(mm, MM_ANONPAGES)), @@ -598,12 +591,13 @@ struct mm_struct *mm = tsk->signal->oom_mm; /* Retry the down_read_trylock(mmap_sem) a few times */ + while (attempts++ < MAX_OOM_REAP_RETRIES && !__oom_reap_task_mm(tsk, mm)) - while (attempts++ < MAX_OOM_REAP_RETRIES && !oom_reap_task_mm(tsk, mm)) schedule_timeout_idle(HZ/10); if (attempts <= MAX_OOM_REAP_RETRIES) goto done; + pr_info("oom_reaper: unable to reap pid:%d (%s)\n", task_pid_nr(tsk), tsk->comm); debug_show_all_locks(); diff -u linux-azure-4.15.0/mm/percpu.c linux-azure-4.15.0/mm/percpu.c --- linux-azure-4.15.0/mm/percpu.c +++ linux-azure-4.15.0/mm/percpu.c @@ -80,7 +80,6 @@ #include #include #include -#include #include #include reverted: --- linux-azure-4.15.0/mm/sparse.c +++ linux-azure-4.15.0.orig/mm/sparse.c @@ -661,7 +661,7 @@ unsigned long pfn; for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) { + unsigned long section_nr = pfn_to_section_nr(start_pfn); - unsigned long section_nr = pfn_to_section_nr(pfn); struct mem_section *ms; /* reverted: --- linux-azure-4.15.0/mm/z3fold.c +++ linux-azure-4.15.0.orig/mm/z3fold.c @@ -144,8 +144,7 @@ PAGE_HEADLESS = 0, MIDDLE_CHUNK_MAPPED, NEEDS_COMPACTING, + PAGE_STALE - PAGE_STALE, - UNDER_RECLAIM }; /***************** @@ -174,7 +173,6 @@ clear_bit(MIDDLE_CHUNK_MAPPED, &page->private); clear_bit(NEEDS_COMPACTING, &page->private); clear_bit(PAGE_STALE, &page->private); - clear_bit(UNDER_RECLAIM, &page->private); spin_lock_init(&zhdr->page_lock); kref_init(&zhdr->refcount); @@ -750,10 +748,6 @@ atomic64_dec(&pool->pages_nr); return; } - if (test_bit(UNDER_RECLAIM, 
&page->private)) { - z3fold_page_unlock(zhdr); - return; - } if (test_and_set_bit(NEEDS_COMPACTING, &page->private)) { z3fold_page_unlock(zhdr); return; @@ -838,8 +832,6 @@ kref_get(&zhdr->refcount); list_del_init(&zhdr->buddy); zhdr->cpu = -1; - set_bit(UNDER_RECLAIM, &page->private); - break; } list_del_init(&page->lru); @@ -887,35 +879,25 @@ goto next; } next: + spin_lock(&pool->lock); if (test_bit(PAGE_HEADLESS, &page->private)) { if (ret == 0) { + spin_unlock(&pool->lock); free_z3fold_page(page); return 0; } + } else if (kref_put(&zhdr->refcount, release_z3fold_page)) { + atomic64_dec(&pool->pages_nr); - spin_lock(&pool->lock); - list_add(&page->lru, &pool->lru); - spin_unlock(&pool->lock); - } else { - z3fold_page_lock(zhdr); - clear_bit(UNDER_RECLAIM, &page->private); - if (kref_put(&zhdr->refcount, - release_z3fold_page_locked)) { - atomic64_dec(&pool->pages_nr); - return 0; - } - /* - * if we are here, the page is still not completely - * free. Take the global pool lock then to be able - * to add it back to the lru list - */ - spin_lock(&pool->lock); - list_add(&page->lru, &pool->lru); spin_unlock(&pool->lock); + return 0; - z3fold_page_unlock(zhdr); } + /* + * Add to the beginning of LRU. 
+ * Pool lock has to be kept here to ensure the page has + * not already been released + */ + list_add(&page->lru, &pool->lru); - /* We started off locked to we need to lock the pool back */ - spin_lock(&pool->lock); } spin_unlock(&pool->lock); return -EAGAIN; reverted: --- linux-azure-4.15.0/net/atm/lec.c +++ linux-azure-4.15.0.orig/net/atm/lec.c @@ -41,9 +41,6 @@ #include #include -/* Hardening for Spectre-v1 */ -#include - #include "lec.h" #include "lec_arpc.h" #include "resources.h" @@ -690,10 +687,8 @@ bytes_left = copy_from_user(&ioc_data, arg, sizeof(struct atmlec_ioc)); if (bytes_left != 0) pr_info("copy from user failed for %d bytes\n", bytes_left); + if (ioc_data.dev_num < 0 || ioc_data.dev_num >= MAX_LEC_ITF || + !dev_lec[ioc_data.dev_num]) - if (ioc_data.dev_num < 0 || ioc_data.dev_num >= MAX_LEC_ITF) - return -EINVAL; - ioc_data.dev_num = array_index_nospec(ioc_data.dev_num, MAX_LEC_ITF); - if (!dev_lec[ioc_data.dev_num]) return -EINVAL; vpriv = kmalloc(sizeof(struct lec_vcc_priv), GFP_KERNEL); if (!vpriv) reverted: --- linux-azure-4.15.0/net/bridge/br_if.c +++ linux-azure-4.15.0.orig/net/bridge/br_if.c @@ -509,8 +509,8 @@ return -ELOOP; } + /* Device is already being bridged */ + if (br_port_exists(dev)) - /* Device has master upper dev */ - if (netdev_master_upper_dev_get(dev)) return -EBUSY; /* No bridging devices that dislike that (e.g. 
wireless) */ diff -u linux-azure-4.15.0/net/bridge/netfilter/ebtables.c linux-azure-4.15.0/net/bridge/netfilter/ebtables.c --- linux-azure-4.15.0/net/bridge/netfilter/ebtables.c +++ linux-azure-4.15.0/net/bridge/netfilter/ebtables.c @@ -1819,14 +1819,13 @@ { unsigned int size = info->entries_size; const void *entries = info->entries; + int ret; newinfo->entries_size = size; - if (info->nentries) { - int ret = xt_compat_init_offsets(NFPROTO_BRIDGE, - info->nentries); - if (ret) - return ret; - } + + ret = xt_compat_init_offsets(NFPROTO_BRIDGE, info->nentries); + if (ret) + return ret; return EBT_ENTRY_ITERATE(entries, size, compat_calc_entry, info, entries, newinfo); reverted: --- linux-azure-4.15.0/net/ceph/messenger.c +++ linux-azure-4.15.0.orig/net/ceph/messenger.c @@ -2531,11 +2531,6 @@ int ret = 1; dout("try_write start %p state %lu\n", con, con->state); - if (con->state != CON_STATE_PREOPEN && - con->state != CON_STATE_CONNECTING && - con->state != CON_STATE_NEGOTIATING && - con->state != CON_STATE_OPEN) - return 0; more: dout("try_write out_kvec_bytes %d\n", con->out_kvec_bytes); @@ -2561,8 +2556,6 @@ } more_kvec: - BUG_ON(!con->sock); - /* kvec data queued? */ if (con->out_kvec_left) { ret = write_partial_kvec(con); reverted: --- linux-azure-4.15.0/net/ceph/mon_client.c +++ linux-azure-4.15.0.orig/net/ceph/mon_client.c @@ -209,14 +209,6 @@ __open_session(monc); } -static void un_backoff(struct ceph_mon_client *monc) -{ - monc->hunt_mult /= 2; /* reduce by 50% */ - if (monc->hunt_mult < 1) - monc->hunt_mult = 1; - dout("%s hunt_mult now %d\n", __func__, monc->hunt_mult); -} - /* * Reschedule delayed work timer. 
 */
@@ -971,7 +963,6 @@
 	if (!monc->hunting) {
 		ceph_con_keepalive(&monc->con);
 		__validate_auth(monc);
-		un_backoff(monc);
 	}
 	if (is_auth &&
@@ -1132,8 +1123,9 @@
 		dout("%s found mon%d\n", __func__, monc->cur_mon);
 		monc->hunting = false;
 		monc->had_a_connection = true;
+		monc->hunt_mult /= 2; /* reduce by 50% */
+		if (monc->hunt_mult < 1)
+			monc->hunt_mult = 1;
-		un_backoff(monc);
-		__schedule_delayed(monc);
 	}
 }
reverted:
--- linux-azure-4.15.0/net/compat.c
+++ linux-azure-4.15.0.orig/net/compat.c
@@ -377,8 +377,7 @@
 	    optname == SO_ATTACH_REUSEPORT_CBPF)
 		return do_set_attach_filter(sock, level, optname,
 					    optval, optlen);
+	if (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO)
-	if (!COMPAT_USE_64BIT_TIME &&
-	    (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO))
 		return do_set_sock_timeout(sock, level, optname, optval, optlen);

 	return sock_setsockopt(sock, level, optname, optval, optlen);
@@ -443,8 +442,7 @@
 static int compat_sock_getsockopt(struct socket *sock, int level, int optname,
 				char __user *optval, int __user *optlen)
 {
+	if (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO)
-	if (!COMPAT_USE_64BIT_TIME &&
-	    (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO))
 		return do_get_sock_timeout(sock, level, optname, optval, optlen);
 	return sock_getsockopt(sock, level, optname, optval, optlen);
 }
diff -u linux-azure-4.15.0/net/core/dev.c linux-azure-4.15.0/net/core/dev.c
--- linux-azure-4.15.0/net/core/dev.c
+++ linux-azure-4.15.0/net/core/dev.c
@@ -2081,7 +2081,7 @@
 	int i, j;

 	for (i = count, j = offset; i--; j++) {
-		if (!remove_xps_queue(dev_maps, tci, j))
+		if (!remove_xps_queue(dev_maps, cpu, j))
 			break;
 	}

@@ -7283,7 +7283,7 @@
 	if (!dev->rtnl_link_ops ||
 	    dev->rtnl_link_state == RTNL_LINK_INITIALIZED)
 		skb = rtmsg_ifinfo_build_skb(RTM_DELLINK, dev, ~0U, 0,
-					     GFP_KERNEL, NULL, 0);
+					     GFP_KERNEL, NULL);

 	/*
 	 *	Flush the unicast and multicast chains
@@ -8370,7 +8370,7 @@
 int dev_change_net_namespace(struct net_device *dev, struct net *net, const char *pat)
 {
-	int err, new_nsid, new_ifindex;
+	int err, new_nsid;

 	ASSERT_RTNL();
@@ -8426,16 +8426,11 @@
 	call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
 	rcu_barrier();
 	call_netdevice_notifiers(NETDEV_UNREGISTER_FINAL, dev);
-
-	new_nsid = peernet2id_alloc(dev_net(dev), net);
-	/* If there is an ifindex conflict assign a new one */
-	if (__dev_get_by_index(net, dev->ifindex))
-		new_ifindex = dev_new_index(net);
+	if (dev->rtnl_link_ops && dev->rtnl_link_ops->get_link_net)
+		new_nsid = peernet2id_alloc(dev_net(dev), net);
 	else
-		new_ifindex = dev->ifindex;
-
-	rtmsg_ifinfo_newnet(RTM_DELLINK, dev, ~0U, GFP_KERNEL, &new_nsid,
-			    new_ifindex);
+		new_nsid = peernet2id(dev_net(dev), net);
+	rtmsg_ifinfo_newnet(RTM_DELLINK, dev, ~0U, GFP_KERNEL, &new_nsid);

 	/*
 	 *	Flush the unicast and multicast chains
@@ -8449,7 +8444,10 @@
 	/* Actually switch the network namespace */
 	dev_net_set(dev, net);
-	dev->ifindex = new_ifindex;
+
+	/* If there is an ifindex conflict assign a new one */
+	if (__dev_get_by_index(net, dev->ifindex))
+		dev->ifindex = dev_new_index(net);

 	/* Send a netdev-add uevent to the new namespace */
 	kobject_uevent(&dev->dev.kobj, KOBJ_ADD);
reverted:
--- linux-azure-4.15.0/net/core/dev_addr_lists.c
+++ linux-azure-4.15.0.orig/net/core/dev_addr_lists.c
@@ -57,8 +57,8 @@
 		return -EINVAL;

 	list_for_each_entry(ha, &list->list, list) {
+		if (!memcmp(ha->addr, addr, addr_len) &&
+		    ha->type == addr_type) {
-		if (ha->type == addr_type &&
-		    !memcmp(ha->addr, addr, addr_len)) {
 			if (global) {
 				/* check if addr is already used as global */
 				if (ha->global_use)
reverted:
--- linux-azure-4.15.0/net/core/net-sysfs.c
+++ linux-azure-4.15.0.orig/net/core/net-sysfs.c
@@ -295,31 +295,10 @@
 	struct net_device *netdev = to_net_dev(dev);

 	return sprintf(buf, fmt_dec,
+		       atomic_read(&netdev->carrier_changes));
-		       atomic_read(&netdev->carrier_up_count) +
-		       atomic_read(&netdev->carrier_down_count));
 }
 static DEVICE_ATTR_RO(carrier_changes);

-static ssize_t carrier_up_count_show(struct device *dev,
-				     struct device_attribute *attr,
-				     char *buf)
-{
-	struct net_device *netdev = to_net_dev(dev);
-
-	return sprintf(buf, fmt_dec, atomic_read(&netdev->carrier_up_count));
-}
-static DEVICE_ATTR_RO(carrier_up_count);
-
-static ssize_t carrier_down_count_show(struct device *dev,
-				       struct device_attribute *attr,
-				       char *buf)
-{
-	struct net_device *netdev = to_net_dev(dev);
-
-	return sprintf(buf, fmt_dec, atomic_read(&netdev->carrier_down_count));
-}
-static DEVICE_ATTR_RO(carrier_down_count);
-
 /* read-write attributes */

 static int change_mtu(struct net_device *dev, unsigned long new_mtu)
@@ -568,8 +547,6 @@
 	&dev_attr_phys_port_name.attr,
 	&dev_attr_phys_switch_id.attr,
 	&dev_attr_proto_down.attr,
-	&dev_attr_carrier_up_count.attr,
-	&dev_attr_carrier_down_count.attr,
 	NULL,
 };
 ATTRIBUTE_GROUPS(net_class);
diff -u linux-azure-4.15.0/net/core/rtnetlink.c linux-azure-4.15.0/net/core/rtnetlink.c
--- linux-azure-4.15.0/net/core/rtnetlink.c
+++ linux-azure-4.15.0/net/core/rtnetlink.c
@@ -920,11 +920,8 @@
 	       + rtnl_xdp_size() /* IFLA_XDP */
 	       + nla_total_size(4)  /* IFLA_EVENT */
 	       + nla_total_size(4)  /* IFLA_NEW_NETNSID */
-	       + nla_total_size(4)  /* IFLA_NEW_IFINDEX */
 	       + nla_total_size(1)  /* IFLA_PROTO_DOWN */
 	       + nla_total_size(4)  /* IFLA_IF_NETNSID */
-	       + nla_total_size(4)  /* IFLA_CARRIER_UP_COUNT */
-	       + nla_total_size(4)  /* IFLA_CARRIER_DOWN_COUNT */
 	       + 0;
 }

@@ -1436,8 +1433,7 @@
 			    struct net_device *dev, struct net *src_net,
 			    int type, u32 pid, u32 seq, u32 change,
 			    unsigned int flags, u32 ext_filter_mask,
-			    u32 event, int *new_nsid, int new_ifindex,
-			    int tgt_netnsid)
+			    u32 event, int *new_nsid, int tgt_netnsid)
 {
 	struct ifinfomsg *ifm;
 	struct nlmsghdr *nlh;
@@ -1479,13 +1475,8 @@
 	     nla_put_string(skb, IFLA_QDISC, dev->qdisc->ops->id)) ||
 	    nla_put_ifalias(skb, dev) ||
 	    nla_put_u32(skb, IFLA_CARRIER_CHANGES,
-			atomic_read(&dev->carrier_up_count) +
-			atomic_read(&dev->carrier_down_count)) ||
-	    nla_put_u8(skb, IFLA_PROTO_DOWN, dev->proto_down) ||
-	    nla_put_u32(skb, IFLA_CARRIER_UP_COUNT,
-			atomic_read(&dev->carrier_up_count)) ||
-	    nla_put_u32(skb, IFLA_CARRIER_DOWN_COUNT,
-			atomic_read(&dev->carrier_down_count)))
+			atomic_read(&dev->carrier_changes)) ||
+	    nla_put_u8(skb, IFLA_PROTO_DOWN, dev->proto_down))
 		goto nla_put_failure;

 	if (event != IFLA_EVENT_NONE) {
@@ -1534,10 +1525,6 @@
 	if (new_nsid &&
 	    nla_put_s32(skb, IFLA_NEW_NETNSID, *new_nsid) < 0)
 		goto nla_put_failure;
-	if (new_ifindex &&
-	    nla_put_s32(skb, IFLA_NEW_IFINDEX, new_ifindex) < 0)
-		goto nla_put_failure;
-
 	rcu_read_lock();
 	if (rtnl_fill_link_af(skb, dev, ext_filter_mask))
@@ -1591,8 +1578,6 @@
 	[IFLA_EVENT] = { .type = NLA_U32 },
 	[IFLA_GROUP] = { .type = NLA_U32 },
 	[IFLA_IF_NETNSID] = { .type = NLA_S32 },
-	[IFLA_CARRIER_UP_COUNT] = { .type = NLA_U32 },
-	[IFLA_CARRIER_DOWN_COUNT] = { .type = NLA_U32 },
 };

 static const struct nla_policy ifla_info_policy[IFLA_INFO_MAX+1] = {
@@ -1781,7 +1766,7 @@
 					       NETLINK_CB(cb->skb).portid,
 					       cb->nlh->nlmsg_seq, 0,
 					       flags,
-					       ext_filter_mask, 0, NULL, 0,
+					       ext_filter_mask, 0, NULL,
 					       netnsid);

 			if (err < 0) {
@@ -3025,7 +3010,7 @@
 	err = rtnl_fill_ifinfo(nskb, dev, net,
 			       RTM_NEWLINK, NETLINK_CB(skb).portid,
 			       nlh->nlmsg_seq, 0, 0, ext_filter_mask,
-			       0, NULL, 0, netnsid);
+			       0, NULL, netnsid);
 	if (err < 0) {
 		/* -EMSGSIZE implies BUG in if_nlmsg_size */
 		WARN_ON(err == -EMSGSIZE);
@@ -3113,8 +3098,7 @@
 struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
 				       unsigned int change,
-				       u32 event, gfp_t flags, int *new_nsid,
-				       int new_ifindex)
+				       u32 event, gfp_t flags, int *new_nsid)
 {
 	struct net *net = dev_net(dev);
 	struct sk_buff *skb;
@@ -3127,7 +3111,7 @@
 	err = rtnl_fill_ifinfo(skb, dev, dev_net(dev),
 			       type, 0, 0, change, 0, 0, event,
-			       new_nsid, new_ifindex, -1);
+			       new_nsid, -1);
 	if (err < 0) {
 		/* -EMSGSIZE implies BUG in if_nlmsg_size() */
 		WARN_ON(err == -EMSGSIZE);
@@ -3150,15 +3134,14 @@
 static void rtmsg_ifinfo_event(int type, struct net_device *dev,
 			       unsigned int change, u32 event,
-			       gfp_t flags, int *new_nsid, int new_ifindex)
+			       gfp_t flags, int *new_nsid)
 {
 	struct sk_buff *skb;

 	if (dev->reg_state != NETREG_REGISTERED)
 		return;

-	skb = rtmsg_ifinfo_build_skb(type, dev, change, event, flags, new_nsid,
-				     new_ifindex);
+	skb = rtmsg_ifinfo_build_skb(type, dev, change, event, flags, new_nsid);
 	if (skb)
 		rtmsg_ifinfo_send(skb, dev, flags);
 }
@@ -3166,15 +3149,14 @@
 void rtmsg_ifinfo(int type, struct net_device *dev, unsigned int change,
 		  gfp_t flags)
 {
-	rtmsg_ifinfo_event(type, dev, change, rtnl_get_event(0), flags,
-			   NULL, 0);
+	rtmsg_ifinfo_event(type, dev, change, rtnl_get_event(0), flags, NULL);
 }

 void rtmsg_ifinfo_newnet(int type, struct net_device *dev, unsigned int change,
-			 gfp_t flags, int *new_nsid, int new_ifindex)
+			 gfp_t flags, int *new_nsid)
 {
 	rtmsg_ifinfo_event(type, dev, change, rtnl_get_event(0), flags,
-			   new_nsid, new_ifindex);
+			   new_nsid);
 }

 static int nlmsg_populate_fdb_fill(struct sk_buff *skb,
@@ -4567,7 +4549,7 @@
 	case NETDEV_CHANGELOWERSTATE:
 	case NETDEV_CHANGE_TX_QUEUE_LEN:
 		rtmsg_ifinfo_event(RTM_NEWLINK, dev, 0, rtnl_get_event(event),
-				   GFP_KERNEL, NULL, 0);
+				   GFP_KERNEL, NULL);
 		break;
 	default:
 		break;
diff -u linux-azure-4.15.0/net/core/skbuff.c linux-azure-4.15.0/net/core/skbuff.c
--- linux-azure-4.15.0/net/core/skbuff.c
+++ linux-azure-4.15.0/net/core/skbuff.c
@@ -857,7 +857,6 @@
 	n->hdr_len = skb->nohdr ? skb_headroom(skb) : skb->hdr_len;
 	n->cloned = 1;
 	n->nohdr = 0;
-	n->peeked = 0;
 	n->destructor = NULL;
 	C(tail);
 	C(end);
diff -u linux-azure-4.15.0/net/core/sock.c linux-azure-4.15.0/net/core/sock.c
--- linux-azure-4.15.0/net/core/sock.c
+++ linux-azure-4.15.0/net/core/sock.c
@@ -1595,7 +1595,7 @@
 static void __sk_free(struct sock *sk)
 {
-	if (unlikely(sk->sk_net_refcnt && sock_diag_has_destroy_listeners(sk)))
+	if (unlikely(sock_diag_has_destroy_listeners(sk) && sk->sk_net_refcnt))
 		sock_diag_broadcast_destroy(sk);
 	else
 		sk_destruct(sk);
reverted:
--- linux-azure-4.15.0/net/dccp/ccids/ccid2.c
+++ linux-azure-4.15.0.orig/net/dccp/ccids/ccid2.c
@@ -126,16 +126,6 @@
 			    DCCPF_SEQ_WMAX));
 }

-static void dccp_tasklet_schedule(struct sock *sk)
-{
-	struct tasklet_struct *t = &dccp_sk(sk)->dccps_xmitlet;
-
-	if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
-		sock_hold(sk);
-		__tasklet_schedule(t);
-	}
-}
-
 static void ccid2_hc_tx_rto_expire(struct timer_list *t)
 {
 	struct ccid2_hc_tx_sock *hc = from_timer(hc, t, tx_rtotimer);
@@ -176,7 +166,7 @@
 	/* if we were blocked before, we may now send cwnd=1 packet */
 	if (sender_was_blocked)
+		tasklet_schedule(&dccp_sk(sk)->dccps_xmitlet);
-		dccp_tasklet_schedule(sk);
 	/* restart backed-off timer */
 	sk_reset_timer(sk, &hc->tx_rtotimer, jiffies + hc->tx_rto);
 out:
@@ -716,7 +706,7 @@
 done:
 	/* check if incoming Acks allow pending packets to be sent */
 	if (sender_was_blocked && !ccid2_cwnd_network_limited(hc))
+		tasklet_schedule(&dccp_sk(sk)->dccps_xmitlet);
-		dccp_tasklet_schedule(sk);
 	dccp_ackvec_parsed_cleanup(&hc->tx_av_chunks);
 }
reverted:
--- linux-azure-4.15.0/net/dccp/ipv4.c
+++ linux-azure-4.15.0.orig/net/dccp/ipv4.c
@@ -614,7 +614,6 @@
 	ireq = inet_rsk(req);
 	sk_rcv_saddr_set(req_to_sk(req), ip_hdr(skb)->daddr);
 	sk_daddr_set(req_to_sk(req), ip_hdr(skb)->saddr);
-	ireq->ir_mark = inet_request_mark(sk, skb);
 	ireq->ireq_family = AF_INET;
 	ireq->ir_iif = sk->sk_bound_dev_if;
reverted:
--- linux-azure-4.15.0/net/dccp/ipv6.c
+++ linux-azure-4.15.0.orig/net/dccp/ipv6.c
@@ -351,7 +351,6 @@
 	ireq->ir_v6_rmt_addr = ipv6_hdr(skb)->saddr;
 	ireq->ir_v6_loc_addr = ipv6_hdr(skb)->daddr;
 	ireq->ireq_family = AF_INET6;
-	ireq->ir_mark = inet_request_mark(sk, skb);

 	if (ipv6_opt_accepted(sk, skb, IP6CB(skb)) ||
 	    np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo ||
reverted:
--- linux-azure-4.15.0/net/dccp/timer.c
+++ linux-azure-4.15.0.orig/net/dccp/timer.c
@@ -232,7 +232,6 @@
 	else
 		dccp_write_xmit(sk);
 	bh_unlock_sock(sk);
-	sock_put(sk);
 }

 static void dccp_write_xmit_timer(struct timer_list *t)
@@ -241,6 +240,7 @@
 	struct sock *sk = &dp->dccps_inet_connection.icsk_inet.sk;

 	dccp_write_xmitlet((unsigned long)sk);
+	sock_put(sk);
 }

 void dccp_init_xmit_timers(struct sock *sk)
reverted:
--- linux-azure-4.15.0/net/dsa/dsa2.c
+++ linux-azure-4.15.0.orig/net/dsa/dsa2.c
@@ -258,13 +258,11 @@
 static int dsa_port_setup(struct dsa_port *dp)
 {
 	struct dsa_switch *ds = dp->ds;
+	int err;
-	int err = 0;

 	memset(&dp->devlink_port, 0, sizeof(dp->devlink_port));
+	err = devlink_port_register(ds->devlink, &dp->devlink_port, dp->index);
-	if (dp->type != DSA_PORT_TYPE_UNUSED)
-		err = devlink_port_register(ds->devlink, &dp->devlink_port,
-					    dp->index);
 	if (err)
 		return err;

@@ -296,8 +294,7 @@
 static void dsa_port_teardown(struct dsa_port *dp)
 {
+	devlink_port_unregister(&dp->devlink_port);
-	if (dp->type != DSA_PORT_TYPE_UNUSED)
-		devlink_port_unregister(&dp->devlink_port);

 	switch (dp->type) {
 	case DSA_PORT_TYPE_UNUSED:
reverted:
--- linux-azure-4.15.0/net/ipv4/inet_timewait_sock.c
+++ linux-azure-4.15.0.orig/net/ipv4/inet_timewait_sock.c
@@ -179,7 +179,6 @@
 		tw->tw_dport = inet->inet_dport;
 		tw->tw_family = sk->sk_family;
 		tw->tw_reuse = sk->sk_reuse;
-		tw->tw_reuseport = sk->sk_reuseport;
 		tw->tw_hash = sk->sk_hash;
 		tw->tw_ipv6only = 0;
 		tw->tw_transparent = inet->transparent;
reverted:
--- linux-azure-4.15.0/net/ipv4/inetpeer.c
+++ linux-azure-4.15.0.orig/net/ipv4/inetpeer.c
@@ -210,7 +210,6 @@
 	p = kmem_cache_alloc(peer_cachep, GFP_ATOMIC);
 	if (p) {
 		p->daddr = *daddr;
-		p->dtime = (__u32)jiffies;
 		refcount_set(&p->refcnt, 2);
 		atomic_set(&p->rid, 0);
 		p->metrics[RTAX_LOCK-1] = INETPEER_METRICS_NEW;
reverted:
--- linux-azure-4.15.0/net/ipv4/ip_output.c
+++ linux-azure-4.15.0.orig/net/ipv4/ip_output.c
@@ -1040,8 +1040,7 @@
 			if (copy > length)
 				copy = length;

+			if (!(rt->dst.dev->features&NETIF_F_SG)) {
-			if (!(rt->dst.dev->features&NETIF_F_SG) &&
-			    skb_tailroom(skb) >= copy) {
 				unsigned int off;

 				off = skb->len;
reverted:
--- linux-azure-4.15.0/net/ipv4/netfilter/nf_socket_ipv4.c
+++ linux-azure-4.15.0.orig/net/ipv4/netfilter/nf_socket_ipv4.c
@@ -108,12 +108,10 @@
 	int doff = 0;

 	if (iph->protocol == IPPROTO_UDP || iph->protocol == IPPROTO_TCP) {
+		struct udphdr _hdr, *hp;
-		struct tcphdr _hdr;
-		struct udphdr *hp;

 		hp = skb_header_pointer(skb, ip_hdrlen(skb),
+					sizeof(_hdr), &_hdr);
-					iph->protocol == IPPROTO_UDP ?
-					sizeof(*hp) : sizeof(_hdr), &_hdr);
 		if (hp == NULL)
 			return NULL;
reverted:
--- linux-azure-4.15.0/net/ipv4/ping.c
+++ linux-azure-4.15.0.orig/net/ipv4/ping.c
@@ -775,10 +775,8 @@
 	ipc.addr = faddr = daddr;

 	if (ipc.opt && ipc.opt->opt.srr) {
+		if (!daddr)
+			return -EINVAL;
-		if (!daddr) {
-			err = -EINVAL;
-			goto out_free;
-		}
 		faddr = ipc.opt->opt.faddr;
 	}
 	tos = get_rttos(&ipc, inet);
@@ -844,7 +842,6 @@
 out:
 	ip_rt_put(rt);
-out_free:
 	if (free)
 		kfree(ipc.opt);
 	if (!err) {
diff -u linux-azure-4.15.0/net/ipv4/route.c linux-azure-4.15.0/net/ipv4/route.c
--- linux-azure-4.15.0/net/ipv4/route.c
+++ linux-azure-4.15.0/net/ipv4/route.c
@@ -711,7 +711,7 @@
 			fnhe->fnhe_daddr = daddr;
 			fnhe->fnhe_gw = gw;
 			fnhe->fnhe_pmtu = pmtu;
-			fnhe->fnhe_expires = max(1UL, expires);
+			fnhe->fnhe_expires = expires;

 			/* Exception created; mark the cached routes for the nexthop
 			 * stale, so anyone caching it rechecks if this exception
@@ -1286,36 +1286,6 @@
 	return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
 }

-static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr)
-{
-	struct fnhe_hash_bucket *hash;
-	struct fib_nh_exception *fnhe, __rcu **fnhe_p;
-	u32 hval = fnhe_hashfun(daddr);
-
-	spin_lock_bh(&fnhe_lock);
-
-	hash = rcu_dereference_protected(nh->nh_exceptions,
-					 lockdep_is_held(&fnhe_lock));
-	hash += hval;
-
-	fnhe_p = &hash->chain;
-	fnhe = rcu_dereference_protected(*fnhe_p, lockdep_is_held(&fnhe_lock));
-	while (fnhe) {
-		if (fnhe->fnhe_daddr == daddr) {
-			rcu_assign_pointer(*fnhe_p, rcu_dereference_protected(
-				fnhe->fnhe_next, lockdep_is_held(&fnhe_lock)));
-			fnhe_flush_routes(fnhe);
-			kfree_rcu(fnhe, rcu);
-			break;
-		}
-		fnhe_p = &fnhe->fnhe_next;
-		fnhe = rcu_dereference_protected(fnhe->fnhe_next,
-						 lockdep_is_held(&fnhe_lock));
-	}
-
-	spin_unlock_bh(&fnhe_lock);
-}
-
 static struct fib_nh_exception *find_exception(struct fib_nh *nh, __be32 daddr)
 {
 	struct fnhe_hash_bucket *hash = rcu_dereference(nh->nh_exceptions);
@@ -1329,14 +1299,8 @@
 	for (fnhe = rcu_dereference(hash[hval].chain); fnhe;
 	     fnhe = rcu_dereference(fnhe->fnhe_next)) {
-		if (fnhe->fnhe_daddr == daddr) {
-			if (fnhe->fnhe_expires &&
-			    time_after(jiffies, fnhe->fnhe_expires)) {
-				ip_del_fnhe(nh, daddr);
-				break;
-			}
+		if (fnhe->fnhe_daddr == daddr)
 			return fnhe;
-		}
 	}
 	return NULL;
 }
@@ -1661,6 +1625,36 @@
 #endif
 }

+static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr)
+{
+	struct fnhe_hash_bucket *hash;
+	struct fib_nh_exception *fnhe, __rcu **fnhe_p;
+	u32 hval = fnhe_hashfun(daddr);
+
+	spin_lock_bh(&fnhe_lock);
+
+	hash = rcu_dereference_protected(nh->nh_exceptions,
+					 lockdep_is_held(&fnhe_lock));
+	hash += hval;
+
+	fnhe_p = &hash->chain;
+	fnhe = rcu_dereference_protected(*fnhe_p, lockdep_is_held(&fnhe_lock));
+	while (fnhe) {
+		if (fnhe->fnhe_daddr == daddr) {
+			rcu_assign_pointer(*fnhe_p, rcu_dereference_protected(
+				fnhe->fnhe_next, lockdep_is_held(&fnhe_lock)));
+			fnhe_flush_routes(fnhe);
+			kfree_rcu(fnhe, rcu);
+			break;
+		}
+		fnhe_p = &fnhe->fnhe_next;
+		fnhe = rcu_dereference_protected(fnhe->fnhe_next,
+						 lockdep_is_held(&fnhe_lock));
+	}
+
+	spin_unlock_bh(&fnhe_lock);
+}
+
 static void set_lwt_redirect(struct rtable *rth)
 {
 	if (lwtunnel_output_redirect(rth->dst.lwtstate)) {
@@ -1727,10 +1721,20 @@
 		fnhe = find_exception(&FIB_RES_NH(*res), daddr);
 		if (do_cache) {
-			if (fnhe)
+			if (fnhe) {
 				rth = rcu_dereference(fnhe->fnhe_rth_input);
-			else
-				rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
+				if (rth && rth->dst.expires &&
+				    time_after(jiffies, rth->dst.expires)) {
+					ip_del_fnhe(&FIB_RES_NH(*res), daddr);
+					fnhe = NULL;
+				} else {
+					goto rt_cache;
+				}
+			}
+
+			rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
+
+rt_cache:
 			if (rt_cache_valid(rth)) {
 				skb_dst_set_noref(skb, &rth->dst);
 				goto out;
@@ -2207,31 +2211,39 @@
 		 * the loopback interface and the IP_PKTINFO ipi_ifindex will
 		 * be set to the loopback interface as well.
 		 */
-		do_cache = false;
+		fi = NULL;
 	}

 	fnhe = NULL;
 	do_cache &= fi != NULL;
-	if (fi) {
+	if (do_cache) {
 		struct rtable __rcu **prth;
 		struct fib_nh *nh = &FIB_RES_NH(*res);

 		fnhe = find_exception(nh, fl4->daddr);
-		if (!do_cache)
-			goto add;
 		if (fnhe) {
 			prth = &fnhe->fnhe_rth_output;
-		} else {
-			if (unlikely(fl4->flowi4_flags &
-				     FLOWI_FLAG_KNOWN_NH &&
-				     !(nh->nh_gw &&
-				       nh->nh_scope == RT_SCOPE_LINK))) {
-				do_cache = false;
-				goto add;
+			rth = rcu_dereference(*prth);
+			if (rth && rth->dst.expires &&
+			    time_after(jiffies, rth->dst.expires)) {
+				ip_del_fnhe(nh, fl4->daddr);
+				fnhe = NULL;
+			} else {
+				goto rt_cache;
 			}
-			prth = raw_cpu_ptr(nh->nh_pcpu_rth_output);
 		}
+
+		if (unlikely(fl4->flowi4_flags &
+			     FLOWI_FLAG_KNOWN_NH &&
+			     !(nh->nh_gw &&
+			       nh->nh_scope == RT_SCOPE_LINK))) {
+			do_cache = false;
+			goto add;
+		}
+		prth = raw_cpu_ptr(nh->nh_pcpu_rth_output);
 		rth = rcu_dereference(*prth);
+
+rt_cache:
 		if (rt_cache_valid(rth) && dst_hold_safe(&rth->dst))
 			return rth;
 	}
@@ -2281,14 +2293,13 @@
 					const struct sk_buff *skb)
 {
 	__u8 tos = RT_FL_TOS(fl4);
-	struct fib_result res = {
-		.type		= RTN_UNSPEC,
-		.fi		= NULL,
-		.table		= NULL,
-		.tclassid	= 0,
-	};
+	struct fib_result res;
 	struct rtable *rth;

+	res.tclassid	= 0;
+	res.fi		= NULL;
+	res.table	= NULL;
+
 	fl4->flowi4_iif = LOOPBACK_IFINDEX;
 	fl4->flowi4_tos = tos & IPTOS_RT_MASK;
 	fl4->flowi4_scope = ((tos & RTO_ONLINK) ?
diff -u linux-azure-4.15.0/net/ipv4/tcp.c linux-azure-4.15.0/net/ipv4/tcp.c
--- linux-azure-4.15.0/net/ipv4/tcp.c
+++ linux-azure-4.15.0/net/ipv4/tcp.c
@@ -692,7 +692,7 @@
 {
 	return skb->len < size_goal &&
 	       sock_net(sk)->ipv4.sysctl_tcp_autocorking &&
-	       !tcp_rtx_queue_empty(sk) &&
+	       skb != tcp_write_queue_head(sk) &&
 	       refcount_read(&sk->sk_wmem_alloc) > skb->truesize;
 }

@@ -1210,8 +1210,7 @@
 		uarg->zerocopy = 0;
 	}

-	if (unlikely(flags & MSG_FASTOPEN || inet_sk(sk)->defer_connect) &&
-	    !tp->repair) {
+	if (unlikely(flags & MSG_FASTOPEN || inet_sk(sk)->defer_connect)) {
 		err = tcp_sendmsg_fastopen(sk, msg, &copied_syn, size);
 		if (err == -EINPROGRESS && copied_syn > 0)
 			goto out;
@@ -2667,7 +2666,7 @@
 	case TCP_REPAIR_QUEUE:
 		if (!tp->repair)
 			err = -EPERM;
-		else if ((unsigned int)val < TCP_QUEUES_NR)
+		else if (val < TCP_QUEUES_NR)
 			tp->repair_queue = val;
 		else
 			err = -EINVAL;
diff -u linux-azure-4.15.0/net/ipv4/tcp_bbr.c linux-azure-4.15.0/net/ipv4/tcp_bbr.c
--- linux-azure-4.15.0/net/ipv4/tcp_bbr.c
+++ linux-azure-4.15.0/net/ipv4/tcp_bbr.c
@@ -802,9 +802,7 @@
 			}
 		}
 	}
-	/* Restart after idle ends only once we process a new S/ACK for data */
-	if (rs->delivered > 0)
-		bbr->idle_restart = 0;
+	bbr->idle_restart = 0;
 }

 static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
diff -u linux-azure-4.15.0/net/ipv4/tcp_output.c linux-azure-4.15.0/net/ipv4/tcp_output.c
--- linux-azure-4.15.0/net/ipv4/tcp_output.c
+++ linux-azure-4.15.0/net/ipv4/tcp_output.c
@@ -2859,10 +2859,8 @@
 		return -EBUSY;

 	if (before(TCP_SKB_CB(skb)->seq, tp->snd_una)) {
-		if (unlikely(before(TCP_SKB_CB(skb)->end_seq, tp->snd_una))) {
-			WARN_ON_ONCE(1);
-			return -EINVAL;
-		}
+		if (before(TCP_SKB_CB(skb)->end_seq, tp->snd_una))
+			BUG();
 		if (tcp_trim_head(sk, skb, tp->snd_una - TCP_SKB_CB(skb)->seq))
 			return -ENOMEM;
 	}
@@ -3366,7 +3364,6 @@
 	sock_reset_flag(sk, SOCK_DONE);
 	tp->snd_wnd = 0;
 	tcp_init_wl(tp, 0);
-	tcp_write_queue_purge(sk);
 	tp->snd_una = tp->write_seq;
 	tp->snd_sml = tp->write_seq;
 	tp->snd_up = tp->write_seq;
diff -u linux-azure-4.15.0/net/ipv4/udp.c linux-azure-4.15.0/net/ipv4/udp.c
--- linux-azure-4.15.0/net/ipv4/udp.c
+++ linux-azure-4.15.0/net/ipv4/udp.c
@@ -413,9 +413,9 @@
 		bool dev_match = (sk->sk_bound_dev_if == dif ||
 				  sk->sk_bound_dev_if == sdif);

-		if (!dev_match)
+		if (exact_dif && !dev_match)
 			return -1;
-		if (sk->sk_bound_dev_if)
+		if (sk->sk_bound_dev_if && dev_match)
 			score += 4;
 	}
@@ -978,10 +978,8 @@
 	sock_tx_timestamp(sk, ipc.sockc.tsflags, &ipc.tx_flags);

 	if (ipc.opt && ipc.opt->opt.srr) {
-		if (!daddr) {
-			err = -EINVAL;
-			goto out_free;
-		}
+		if (!daddr)
+			return -EINVAL;
 		faddr = ipc.opt->opt.faddr;
 		connected = 0;
 	}
@@ -1089,7 +1087,6 @@
 out:
 	ip_rt_put(rt);
-out_free:
 	if (free)
 		kfree(ipc.opt);
 	if (!err)
diff -u linux-azure-4.15.0/net/ipv6/ip6_gre.c linux-azure-4.15.0/net/ipv6/ip6_gre.c
--- linux-azure-4.15.0/net/ipv6/ip6_gre.c
+++ linux-azure-4.15.0/net/ipv6/ip6_gre.c
@@ -518,9 +518,6 @@
 	if (tunnel->parms.o_flags & TUNNEL_SEQ)
 		tunnel->o_seqno++;

-	if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
-		return -ENOMEM;
-
 	/* Push GRE header. */
 	protocol = (dev->type == ARPHRD_ETHER) ? htons(ETH_P_TEB) : proto;
 	gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
@@ -711,11 +708,12 @@
 	return NETDEV_TX_OK;
 }

-static void ip6gre_tnl_link_config_common(struct ip6_tnl *t)
+static void ip6gre_tnl_link_config(struct ip6_tnl *t, int set_mtu)
 {
 	struct net_device *dev = t->dev;
 	struct __ip6_tnl_parm *p = &t->parms;
 	struct flowi6 *fl6 = &t->fl.u.ip6;
+	int t_hlen;

 	if (dev->type != ARPHRD_ETHER) {
 		memcpy(dev->dev_addr, &p->laddr, sizeof(struct in6_addr));
@@ -742,13 +740,12 @@
 		dev->flags |= IFF_POINTOPOINT;
 	else
 		dev->flags &= ~IFF_POINTOPOINT;
-}

-static void ip6gre_tnl_link_config_route(struct ip6_tnl *t, int set_mtu,
-					 int t_hlen)
-{
-	const struct __ip6_tnl_parm *p = &t->parms;
-	struct net_device *dev = t->dev;
+	t->tun_hlen = gre_calc_hlen(t->parms.o_flags);
+
+	t->hlen = t->encap_hlen + t->tun_hlen;
+
+	t_hlen = t->hlen + sizeof(struct ipv6hdr);

 	if (p->flags & IP6_TNL_F_CAP_XMIT) {
 		int strict = (ipv6_addr_type(&p->raddr) &
@@ -780,26 +777,8 @@
 	}
 }

-static int ip6gre_calc_hlen(struct ip6_tnl *tunnel)
-{
-	int t_hlen;
-
-	tunnel->tun_hlen = gre_calc_hlen(tunnel->parms.o_flags);
-	tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen;
-
-	t_hlen = tunnel->hlen + sizeof(struct ipv6hdr);
-	tunnel->dev->hard_header_len = LL_MAX_HEADER + t_hlen;
-	return t_hlen;
-}
-
-static void ip6gre_tnl_link_config(struct ip6_tnl *t, int set_mtu)
-{
-	ip6gre_tnl_link_config_common(t);
-	ip6gre_tnl_link_config_route(t, set_mtu, ip6gre_calc_hlen(t));
-}
-
-static void ip6gre_tnl_copy_tnl_parm(struct ip6_tnl *t,
-				     const struct __ip6_tnl_parm *p)
+static int ip6gre_tnl_change(struct ip6_tnl *t,
+			     const struct __ip6_tnl_parm *p, int set_mtu)
 {
 	t->parms.laddr = p->laddr;
 	t->parms.raddr = p->raddr;
@@ -815,12 +794,6 @@
 	t->parms.o_flags = p->o_flags;
 	t->parms.fwmark = p->fwmark;
 	dst_cache_reset(&t->dst_cache);
-}
-
-static int ip6gre_tnl_change(struct ip6_tnl *t, const struct __ip6_tnl_parm *p,
-			     int set_mtu)
-{
-	ip6gre_tnl_copy_tnl_parm(t, p);
 	ip6gre_tnl_link_config(t, set_mtu);
 	return 0;
 }
@@ -1097,7 +1070,11 @@
 		return ret;
 	}

-	t_hlen = ip6gre_calc_hlen(tunnel);
+	tunnel->tun_hlen = gre_calc_hlen(tunnel->parms.o_flags);
+	tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen;
+	t_hlen = tunnel->hlen + sizeof(struct ipv6hdr);
+
+	dev->hard_header_len = LL_MAX_HEADER + t_hlen;
 	dev->mtu = ETH_DATA_LEN - t_hlen;
 	if (dev->type == ARPHRD_ETHER)
 		dev->mtu -= ETH_HLEN;
@@ -1400,12 +1377,13 @@
 	return ret;
 }

-static int ip6gre_newlink_common(struct net *src_net, struct net_device *dev,
-				 struct nlattr *tb[], struct nlattr *data[],
-				 struct netlink_ext_ack *extack)
+static int ip6gre_newlink(struct net *src_net, struct net_device *dev,
+			  struct nlattr *tb[], struct nlattr *data[],
+			  struct netlink_ext_ack *extack)
 {
 	struct ip6_tnl *nt;
 	struct net *net = dev_net(dev);
+	struct ip6gre_net *ign = net_generic(net, ip6gre_net_id);
 	struct ip_tunnel_encap ipencap;
 	int err;
@@ -1433,76 +1411,49 @@
 	if (err)
 		goto out;

+	ip6gre_tnl_link_config(nt, !tb[IFLA_MTU]);
+
 	if (tb[IFLA_MTU])
 		ip6_tnl_change_mtu(dev, nla_get_u32(tb[IFLA_MTU]));

 	dev_hold(dev);
+	ip6gre_tunnel_link(ign, nt);

 out:
 	return err;
 }

-static int ip6gre_newlink(struct net *src_net, struct net_device *dev,
-			  struct nlattr *tb[], struct nlattr *data[],
-			  struct netlink_ext_ack *extack)
-{
-	int err = ip6gre_newlink_common(src_net, dev, tb, data, extack);
-	struct ip6_tnl *nt = netdev_priv(dev);
-	struct net *net = dev_net(dev);
-
-	if (!err) {
-		ip6gre_tnl_link_config(nt, !tb[IFLA_MTU]);
-		ip6gre_tunnel_link(net_generic(net, ip6gre_net_id), nt);
-	}
-	return err;
-}
-
-static struct ip6_tnl *
-ip6gre_changelink_common(struct net_device *dev, struct nlattr *tb[],
-			 struct nlattr *data[], struct __ip6_tnl_parm *p_p,
-			 struct netlink_ext_ack *extack)
+static int ip6gre_changelink(struct net_device *dev, struct nlattr *tb[],
+			     struct nlattr *data[],
+			     struct netlink_ext_ack *extack)
 {
 	struct ip6_tnl *t, *nt = netdev_priv(dev);
 	struct net *net = nt->net;
 	struct ip6gre_net *ign = net_generic(net, ip6gre_net_id);
+	struct __ip6_tnl_parm p;
 	struct ip_tunnel_encap ipencap;

 	if (dev == ign->fb_tunnel_dev)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;

 	if (ip6gre_netlink_encap_parms(data, &ipencap)) {
 		int err = ip6_tnl_encap_setup(nt, &ipencap);

 		if (err < 0)
-			return ERR_PTR(err);
+			return err;
 	}

-	ip6gre_netlink_parms(data, p_p);
+	ip6gre_netlink_parms(data, &p);

-	t = ip6gre_tunnel_locate(net, p_p, 0);
+	t = ip6gre_tunnel_locate(net, &p, 0);

 	if (t) {
 		if (t->dev != dev)
-			return ERR_PTR(-EEXIST);
+			return -EEXIST;
 	} else {
 		t = nt;
 	}

-	return t;
-}
-
-static int ip6gre_changelink(struct net_device *dev, struct nlattr *tb[],
-			     struct nlattr *data[],
-			     struct netlink_ext_ack *extack)
-{
-	struct ip6gre_net *ign = net_generic(dev_net(dev), ip6gre_net_id);
-	struct __ip6_tnl_parm p;
-	struct ip6_tnl *t;
-
-	t = ip6gre_changelink_common(dev, tb, data, &p, extack);
-	if (IS_ERR(t))
-		return PTR_ERR(t);
-
 	ip6gre_tunnel_unlink(ign, t);
 	ip6gre_tnl_change(t, &p, !tb[IFLA_MTU]);
 	ip6gre_tunnel_link(ign, t);
diff -u linux-azure-4.15.0/net/ipv6/ip6_output.c linux-azure-4.15.0/net/ipv6/ip6_output.c
--- linux-azure-4.15.0/net/ipv6/ip6_output.c
+++ linux-azure-4.15.0/net/ipv6/ip6_output.c
@@ -1488,8 +1488,7 @@
 			if (copy > length)
 				copy = length;

-			if (!(rt->dst.dev->features&NETIF_F_SG) &&
-			    skb_tailroom(skb) >= copy) {
+			if (!(rt->dst.dev->features&NETIF_F_SG)) {
 				unsigned int off;

 				off = skb->len;
reverted:
--- linux-azure-4.15.0/net/ipv6/netfilter/nf_socket_ipv6.c
+++ linux-azure-4.15.0.orig/net/ipv6/netfilter/nf_socket_ipv6.c
@@ -116,11 +116,9 @@
 	}

 	if (tproto == IPPROTO_UDP || tproto == IPPROTO_TCP) {
+		struct udphdr _hdr, *hp;
-		struct tcphdr _hdr;
-		struct udphdr *hp;

+		hp = skb_header_pointer(skb, thoff, sizeof(_hdr), &_hdr);
-		hp = skb_header_pointer(skb, thoff, tproto == IPPROTO_UDP ?
-					sizeof(*hp) : sizeof(_hdr), &_hdr);
 		if (hp == NULL)
 			return NULL;
diff -u linux-azure-4.15.0/net/ipv6/route.c linux-azure-4.15.0/net/ipv6/route.c
--- linux-azure-4.15.0/net/ipv6/route.c
+++ linux-azure-4.15.0/net/ipv6/route.c
@@ -1823,16 +1823,11 @@
 	const struct ipv6hdr *inner_iph;
 	const struct icmp6hdr *icmph;
 	struct ipv6hdr _inner_iph;
-	struct icmp6hdr _icmph;

 	if (likely(outer_iph->nexthdr != IPPROTO_ICMPV6))
 		goto out;

-	icmph = skb_header_pointer(skb, skb_transport_offset(skb),
-				   sizeof(_icmph), &_icmph);
-	if (!icmph)
-		goto out;
-
+	icmph = icmp6_hdr(skb);
 	if (icmph->icmp6_type != ICMPV6_DEST_UNREACH &&
 	    icmph->icmp6_type != ICMPV6_PKT_TOOBIG &&
 	    icmph->icmp6_type != ICMPV6_TIME_EXCEED &&
reverted:
--- linux-azure-4.15.0/net/ipv6/udp.c
+++ linux-azure-4.15.0.orig/net/ipv6/udp.c
@@ -164,9 +164,9 @@
 		bool dev_match = (sk->sk_bound_dev_if == dif ||
 				  sk->sk_bound_dev_if == sdif);

+		if (exact_dif && !dev_match)
-		if (!dev_match)
 			return -1;
+		if (sk->sk_bound_dev_if && dev_match)
-		if (sk->sk_bound_dev_if)
 			score++;
 	}
diff -u linux-azure-4.15.0/net/kcm/kcmsock.c linux-azure-4.15.0/net/kcm/kcmsock.c
--- linux-azure-4.15.0/net/kcm/kcmsock.c
+++ linux-azure-4.15.0/net/kcm/kcmsock.c
@@ -1425,7 +1425,6 @@
 	 */
 	if (csk->sk_user_data) {
 		write_unlock_bh(&csk->sk_callback_lock);
-		strp_stop(&psock->strp);
 		strp_done(&psock->strp);
 		kmem_cache_free(kcm_psockp, psock);
 		err = -EALREADY;
diff -u linux-azure-4.15.0/net/l2tp/l2tp_netlink.c linux-azure-4.15.0/net/l2tp/l2tp_netlink.c
--- linux-azure-4.15.0/net/l2tp/l2tp_netlink.c
+++ linux-azure-4.15.0/net/l2tp/l2tp_netlink.c
@@ -768,6 +768,8 @@
 	if ((session->ifname[0] &&
 	     nla_put_string(skb, L2TP_ATTR_IFNAME, session->ifname)) ||
+	    (session->offset &&
+	     nla_put_u16(skb, L2TP_ATTR_OFFSET, session->offset)) ||
 	    (session->cookie_len &&
 	     nla_put(skb, L2TP_ATTR_COOKIE, session->cookie_len,
 		     &session->cookie[0])) ||
diff -u linux-azure-4.15.0/net/llc/af_llc.c linux-azure-4.15.0/net/llc/af_llc.c
--- linux-azure-4.15.0/net/llc/af_llc.c
+++ linux-azure-4.15.0/net/llc/af_llc.c
@@ -930,9 +930,6 @@
 	if (size > llc->dev->mtu)
 		size = llc->dev->mtu;
 	copied = size - hdrlen;
-	rc = -EINVAL;
-	if (copied < 0)
-		goto release;
 	release_sock(sk);
 	skb = sock_alloc_send_skb(sk, size, noblock, &rc);
 	lock_sock(sk);
reverted:
--- linux-azure-4.15.0/net/netfilter/ipvs/ip_vs_ctl.c
+++ linux-azure-4.15.0.orig/net/netfilter/ipvs/ip_vs_ctl.c
@@ -2387,7 +2387,11 @@
 		strlcpy(cfg.mcast_ifn, dm->mcast_ifn, sizeof(cfg.mcast_ifn));
 		cfg.syncid = dm->syncid;
+		rtnl_lock();
+		mutex_lock(&ipvs->sync_mutex);
 		ret = start_sync_thread(ipvs, &cfg, dm->state);
+		mutex_unlock(&ipvs->sync_mutex);
+		rtnl_unlock();
 	} else {
 		mutex_lock(&ipvs->sync_mutex);
 		ret = stop_sync_thread(ipvs, dm->state);
@@ -3480,8 +3484,12 @@
 	if (ipvs->mixed_address_family_dests > 0)
 		return -EINVAL;

+	rtnl_lock();
+	mutex_lock(&ipvs->sync_mutex);
 	ret = start_sync_thread(ipvs, &c,
 				nla_get_u32(attrs[IPVS_DAEMON_ATTR_STATE]));
+	mutex_unlock(&ipvs->sync_mutex);
+	rtnl_unlock();
 	return ret;
 }
reverted:
--- linux-azure-4.15.0/net/netfilter/ipvs/ip_vs_sync.c
+++ linux-azure-4.15.0.orig/net/netfilter/ipvs/ip_vs_sync.c
@@ -49,7 +49,6 @@
 #include
 #include
 #include
-#include
 #include

 /* Used for ntoh_seq and hton_seq */
@@ -1361,9 +1360,15 @@
 /*
  *      Specifiy default interface for outgoing multicasts
  */
+static int set_mcast_if(struct sock *sk, char *ifname)
-static int set_mcast_if(struct sock *sk, struct net_device *dev)
 {
+	struct net_device *dev;
 	struct inet_sock *inet = inet_sk(sk);
+	struct net *net = sock_net(sk);
+
+	dev = __dev_get_by_name(net, ifname);
+	if (!dev)
+		return -ENODEV;

 	if (sk->sk_bound_dev_if && dev->ifindex != sk->sk_bound_dev_if)
 		return -EINVAL;
@@ -1391,14 +1396,19 @@
  *	in the in_addr structure passed in as a parameter.
  */
 static int
+join_mcast_group(struct sock *sk, struct in_addr *addr, char *ifname)
-join_mcast_group(struct sock *sk, struct in_addr *addr, struct net_device *dev)
 {
+	struct net *net = sock_net(sk);
 	struct ip_mreqn mreq;
+	struct net_device *dev;
 	int ret;

 	memset(&mreq, 0, sizeof(mreq));
 	memcpy(&mreq.imr_multiaddr, addr, sizeof(struct in_addr));

+	dev = __dev_get_by_name(net, ifname);
+	if (!dev)
+		return -ENODEV;
 	if (sk->sk_bound_dev_if && dev->ifindex != sk->sk_bound_dev_if)
 		return -EINVAL;
@@ -1413,10 +1423,15 @@
 #ifdef CONFIG_IP_VS_IPV6
 static int join_mcast_group6(struct sock *sk, struct in6_addr *addr,
+			     char *ifname)
-			     struct net_device *dev)
 {
+	struct net *net = sock_net(sk);
+	struct net_device *dev;
 	int ret;

+	dev = __dev_get_by_name(net, ifname);
+	if (!dev)
+		return -ENODEV;
 	if (sk->sk_bound_dev_if && dev->ifindex != sk->sk_bound_dev_if)
 		return -EINVAL;
@@ -1428,18 +1443,24 @@
 }
 #endif

+static int bind_mcastif_addr(struct socket *sock, char *ifname)
-static int bind_mcastif_addr(struct socket *sock, struct net_device *dev)
 {
+	struct net *net = sock_net(sock->sk);
+	struct net_device *dev;
 	__be32 addr;
 	struct sockaddr_in sin;

+	dev = __dev_get_by_name(net, ifname);
+	if (!dev)
+		return -ENODEV;
+
 	addr = inet_select_addr(dev, 0, RT_SCOPE_UNIVERSE);
 	if (!addr)
 		pr_err("You probably need to specify IP address on "
 		       "multicast interface.\n");

 	IP_VS_DBG(7, "binding socket with (%s) %pI4\n",
+		  ifname, &addr);
-		  dev->name, &addr);

 	/* Now bind the socket with the address of multicast interface */
 	sin.sin_family = AF_INET;
@@ -1472,8 +1493,7 @@
 /*
  *      Set up sending multicast socket over UDP
  */
+static struct socket *make_send_sock(struct netns_ipvs *ipvs, int id)
-static int make_send_sock(struct netns_ipvs *ipvs, int id,
-			  struct net_device *dev, struct socket **sock_ret)
 {
 	/* multicast addr */
 	union ipvs_sockaddr mcast_addr;
@@ -1485,10 +1505,9 @@
 				  IPPROTO_UDP, &sock);
 	if (result < 0) {
 		pr_err("Error during creation of socket; terminating\n");
+		return ERR_PTR(result);
-		goto error;
 	}
+	result = set_mcast_if(sock->sk, ipvs->mcfg.mcast_ifn);
-	*sock_ret = sock;
-	result = set_mcast_if(sock->sk, dev);
 	if (result < 0) {
 		pr_err("Error setting outbound mcast interface\n");
 		goto error;
@@ -1503,7 +1522,7 @@
 		set_sock_size(sock->sk, 1, result);

 	if (AF_INET == ipvs->mcfg.mcast_af)
+		result = bind_mcastif_addr(sock, ipvs->mcfg.mcast_ifn);
-		result = bind_mcastif_addr(sock, dev);
 	else
 		result = 0;
 	if (result < 0) {
@@ -1519,18 +1538,19 @@
 		goto error;
 	}

+	return sock;
-	return 0;

 error:
+	sock_release(sock);
+	return ERR_PTR(result);
-	return result;
 }

 /*
  *      Set up receiving multicast socket over UDP
  */
+static struct socket *make_receive_sock(struct netns_ipvs *ipvs, int id,
+					int ifindex)
-static int make_receive_sock(struct netns_ipvs *ipvs, int id,
-			     struct net_device *dev, struct socket **sock_ret)
 {
 	/* multicast addr */
 	union ipvs_sockaddr mcast_addr;
@@ -1542,9 +1562,8 @@
 				  IPPROTO_UDP, &sock);
 	if (result < 0) {
 		pr_err("Error during creation of socket; terminating\n");
+		return ERR_PTR(result);
-		goto error;
 	}
-	*sock_ret = sock;
 	/* it is equivalent to the REUSEADDR option in user-space */
 	sock->sk->sk_reuse = SK_CAN_REUSE;
 	result = sysctl_sync_sock_size(ipvs);
@@ -1552,7 +1571,7 @@
 		set_sock_size(sock->sk, 0, result);

 	get_mcast_sockaddr(&mcast_addr, &salen, &ipvs->bcfg, id);
+	sock->sk->sk_bound_dev_if = ifindex;
-	sock->sk->sk_bound_dev_if = dev->ifindex;
 	result = sock->ops->bind(sock, (struct sockaddr *)&mcast_addr, salen);
 	if (result < 0) {
 		pr_err("Error binding to the multicast addr\n");
@@ -1563,20 +1582,21 @@
 #ifdef CONFIG_IP_VS_IPV6
 	if (ipvs->bcfg.mcast_af == AF_INET6)
 		result = join_mcast_group6(sock->sk, &mcast_addr.in6.sin6_addr,
+					   ipvs->bcfg.mcast_ifn);
-					   dev);
 	else
 #endif
 		result = join_mcast_group(sock->sk, &mcast_addr.in.sin_addr,
+					  ipvs->bcfg.mcast_ifn);
-					  dev);
 	if (result < 0) {
 		pr_err("Error joining to the multicast group\n");
 		goto error;
 	}

+	return sock;
-	return 0;

 error:
+	sock_release(sock);
+	return ERR_PTR(result);
-	return result;
 }

@@ -1761,12 +1781,13 @@
 int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
 		      int state)
 {
+	struct ip_vs_sync_thread_data *tinfo;
-	struct ip_vs_sync_thread_data *tinfo = NULL;
 	struct task_struct **array = NULL, *task;
+	struct socket *sock;
 	struct net_device *dev;
 	char *name;
 	int (*threadfn)(void *data);
+	int id, count, hlen;
-	int id = 0, count, hlen;
 	int result = -ENOMEM;
 	u16 mtu, min_mtu;

@@ -1774,18 +1795,6 @@
 	IP_VS_DBG(7, "Each ip_vs_sync_conn entry needs %zd bytes\n",
 		  sizeof(struct ip_vs_sync_conn_v0));

-	/* Do not hold one mutex and then to block on another */
-	for (;;) {
-		rtnl_lock();
-		if (mutex_trylock(&ipvs->sync_mutex))
-			break;
-		rtnl_unlock();
-		mutex_lock(&ipvs->sync_mutex);
-		if (rtnl_trylock())
-			break;
-		mutex_unlock(&ipvs->sync_mutex);
-	}
-
 	if (!ipvs->sync_state) {
 		count = clamp(sysctl_sync_ports(ipvs), 1, IPVS_SYNC_PORTS_MAX);
 		ipvs->threads_mask = count - 1;
@@ -1804,8 +1813,7 @@
 	dev = __dev_get_by_name(ipvs->net, c->mcast_ifn);
 	if (!dev) {
 		pr_err("Unknown mcast interface: %s\n", c->mcast_ifn);
+		return -ENODEV;
-		result = -ENODEV;
-		goto out_early;
 	}
 	hlen = (AF_INET6 == c->mcast_af) ?
 	       sizeof(struct ipv6hdr) + sizeof(struct udphdr) :
@@ -1822,30 +1830,26 @@
 		c->sync_maxlen = mtu - hlen;

 	if (state == IP_VS_STATE_MASTER) {
-		result = -EEXIST;
 		if (ipvs->ms)
+			return -EEXIST;
-			goto out_early;

 		ipvs->mcfg = *c;
 		name = "ipvs-m:%d:%d";
 		threadfn = sync_thread_master;
 	} else if (state == IP_VS_STATE_BACKUP) {
-		result = -EEXIST;
 		if (ipvs->backup_threads)
+			return -EEXIST;
-			goto out_early;

 		ipvs->bcfg = *c;
 		name = "ipvs-b:%d:%d";
 		threadfn = sync_thread_backup;
 	} else {
+		return -EINVAL;
-		result = -EINVAL;
-		goto out_early;
 	}

 	if (state == IP_VS_STATE_MASTER) {
 		struct ipvs_master_sync_state *ms;

-		result = -ENOMEM;
 		ipvs->ms = kcalloc(count, sizeof(ipvs->ms[0]), GFP_KERNEL);
 		if (!ipvs->ms)
 			goto out;
@@ -1861,38 +1865,39 @@
 	} else {
 		array = kcalloc(count, sizeof(struct task_struct *),
 				GFP_KERNEL);
-		result = -ENOMEM;
 		if (!array)
 			goto out;
 	}

+	tinfo = NULL;
 	for (id = 0; id < count; id++) {
+		if (state == IP_VS_STATE_MASTER)
+			sock = make_send_sock(ipvs, id);
+		else
+			sock = make_receive_sock(ipvs, id, dev->ifindex);
+		if (IS_ERR(sock)) {
+			result = PTR_ERR(sock);
+			goto outtinfo;
+		}
-		result = -ENOMEM;
 		tinfo = kmalloc(sizeof(*tinfo), GFP_KERNEL);
 		if (!tinfo)
+			goto outsocket;
-			goto out;
 		tinfo->ipvs = ipvs;
+		tinfo->sock = sock;
-		tinfo->sock = NULL;
 		if (state == IP_VS_STATE_BACKUP) {
 			tinfo->buf = kmalloc(ipvs->bcfg.sync_maxlen,
 					     GFP_KERNEL);
 			if (!tinfo->buf)
+				goto outtinfo;
-				goto out;
 		} else {
 			tinfo->buf = NULL;
 		}
 		tinfo->id = id;
-		if (state == IP_VS_STATE_MASTER)
-			result = make_send_sock(ipvs, id, dev, &tinfo->sock);
-		else
-			result = make_receive_sock(ipvs, id, dev, &tinfo->sock);
-		if (result < 0)
-			goto out;

 		task = kthread_run(threadfn, tinfo, name, ipvs->gen, id);
 		if (IS_ERR(task)) {
 			result = PTR_ERR(task);
+			goto outtinfo;
-			goto out;
 		}
 		tinfo = NULL;
 		if (state == IP_VS_STATE_MASTER)
@@ -1909,20 +1914,20 @@
 	ipvs->sync_state |= state;
 	spin_unlock_bh(&ipvs->sync_buff_lock);

-	mutex_unlock(&ipvs->sync_mutex);
-	rtnl_unlock();
-
 	/* increase the module use
count */ ip_vs_use_count_inc(); return 0; +outsocket: + sock_release(sock); + +outtinfo: + if (tinfo) { + sock_release(tinfo->sock); + kfree(tinfo->buf); + kfree(tinfo); + } -out: - /* We do not need RTNL lock anymore, release it here so that - * sock_release below and in the kthreads can use rtnl_lock - * to leave the mcast group. - */ - rtnl_unlock(); count = id; while (count-- > 0) { if (state == IP_VS_STATE_MASTER) @@ -1930,23 +1935,13 @@ else kthread_stop(array[count]); } + kfree(array); + +out: if (!(ipvs->sync_state & IP_VS_STATE_MASTER)) { kfree(ipvs->ms); ipvs->ms = NULL; } - mutex_unlock(&ipvs->sync_mutex); - if (tinfo) { - if (tinfo->sock) - sock_release(tinfo->sock); - kfree(tinfo->buf); - kfree(tinfo); - } - kfree(array); - return result; - -out_early: - mutex_unlock(&ipvs->sync_mutex); - rtnl_unlock(); return result; } reverted: --- linux-azure-4.15.0/net/netfilter/nf_tables_api.c +++ linux-azure-4.15.0.orig/net/netfilter/nf_tables_api.c @@ -2344,46 +2344,41 @@ } if (nlh->nlmsg_flags & NLM_F_REPLACE) { + if (nft_is_active_next(net, old_rule)) { + trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE, + old_rule); + if (trans == NULL) { + err = -ENOMEM; + goto err2; + } + nft_deactivate_next(net, old_rule); + chain->use--; + list_add_tail_rcu(&rule->list, &old_rule->list); + } else { - if (!nft_is_active_next(net, old_rule)) { err = -ENOENT; goto err2; } + } else if (nlh->nlmsg_flags & NLM_F_APPEND) + if (old_rule) + list_add_rcu(&rule->list, &old_rule->list); + else + list_add_tail_rcu(&rule->list, &chain->rules); + else { + if (old_rule) + list_add_tail_rcu(&rule->list, &old_rule->list); + else + list_add_rcu(&rule->list, &chain->rules); + } - trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE, - old_rule); - if (trans == NULL) { - err = -ENOMEM; - goto err2; - } - nft_deactivate_next(net, old_rule); - chain->use--; + if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule) == NULL) { + err = -ENOMEM; + goto err3; - if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, 
rule) == NULL) { - err = -ENOMEM; - goto err2; - } - - list_add_tail_rcu(&rule->list, &old_rule->list); - } else { - if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule) == NULL) { - err = -ENOMEM; - goto err2; - } - - if (nlh->nlmsg_flags & NLM_F_APPEND) { - if (old_rule) - list_add_rcu(&rule->list, &old_rule->list); - else - list_add_tail_rcu(&rule->list, &chain->rules); - } else { - if (old_rule) - list_add_tail_rcu(&rule->list, &old_rule->list); - else - list_add_rcu(&rule->list, &chain->rules); - } } chain->use++; return 0; +err3: + list_del_rcu(&rule->list); err2: nf_tables_rule_destroy(&ctx, rule); err1: @@ -3196,20 +3191,18 @@ err = ops->init(set, &desc, nla); if (err < 0) + goto err2; - goto err3; err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set); if (err < 0) + goto err3; - goto err4; list_add_tail_rcu(&set->list, &table->sets); table->use++; return 0; +err3: -err4: ops->destroy(set); -err3: - kfree(set->name); err2: kvfree(set); err1: diff -u linux-azure-4.15.0/net/netlink/af_netlink.c linux-azure-4.15.0/net/netlink/af_netlink.c --- linux-azure-4.15.0/net/netlink/af_netlink.c +++ linux-azure-4.15.0/net/netlink/af_netlink.c @@ -1812,8 +1812,6 @@ if (msg->msg_namelen) { err = -EINVAL; - if (msg->msg_namelen < sizeof(struct sockaddr_nl)) - goto out; if (addr->nl_family != AF_NETLINK) goto out; dst_portid = addr->nl_pid; reverted: --- linux-azure-4.15.0/net/nsh/nsh.c +++ linux-azure-4.15.0.orig/net/nsh/nsh.c @@ -90,8 +90,6 @@ if (unlikely(!pskb_may_pull(skb, NSH_BASE_HDR_LEN))) goto out; nsh_len = nsh_hdr_len(nsh_hdr(skb)); - if (nsh_len < NSH_BASE_HDR_LEN) - goto out; if (unlikely(!pskb_may_pull(skb, nsh_len))) goto out; reverted: --- linux-azure-4.15.0/net/openvswitch/flow_netlink.c +++ linux-azure-4.15.0.orig/net/openvswitch/flow_netlink.c @@ -1664,10 +1664,13 @@ /* The nlattr stream should already have been validated */ nla_for_each_nested(nla, attr, rem) { + if (tbl[nla_type(nla)].len == OVS_ATTR_NESTED) { + if (tbl[nla_type(nla)].next) + tbl = 
tbl[nla_type(nla)].next; + nlattr_set(nla, val, tbl); + } else { - if (tbl[nla_type(nla)].len == OVS_ATTR_NESTED) - nlattr_set(nla, val, tbl[nla_type(nla)].next ? : tbl); - else memset(nla_data(nla), val, nla_len(nla)); + } if (nla_type(nla) == OVS_KEY_ATTR_CT_STATE) *(u32 *)nla_data(nla) &= CT_SUPPORTED_MASK; diff -u linux-azure-4.15.0/net/packet/af_packet.c linux-azure-4.15.0/net/packet/af_packet.c --- linux-azure-4.15.0/net/packet/af_packet.c +++ linux-azure-4.15.0/net/packet/af_packet.c @@ -2902,15 +2902,13 @@ if (skb == NULL) goto out_unlock; - skb_reset_network_header(skb); + skb_set_network_header(skb, reserve); err = -EINVAL; if (sock->type == SOCK_DGRAM) { offset = dev_hard_header(skb, dev, ntohs(proto), addr, NULL, len); if (unlikely(offset < 0)) goto out_free; - } else if (reserve) { - skb_push(skb, reserve); } /* Returns -EFAULT on error */ reverted: --- linux-azure-4.15.0/net/rds/recv.c +++ linux-azure-4.15.0.orig/net/rds/recv.c @@ -558,7 +558,6 @@ struct rds_cmsg_rx_trace t; int i, j; - memset(&t, 0, sizeof(t)); inc->i_rx_lat_trace[RDS_MSG_RX_CMSG] = local_clock(); t.rx_traces = rs->rs_rx_traces; for (i = 0; i < rs->rs_rx_traces; i++) { reverted: --- linux-azure-4.15.0/net/rfkill/rfkill-gpio.c +++ linux-azure-4.15.0.orig/net/rfkill/rfkill-gpio.c @@ -137,18 +137,13 @@ ret = rfkill_register(rfkill->rfkill_dev); if (ret < 0) + return ret; - goto err_destroy; platform_set_drvdata(pdev, rfkill); dev_info(&pdev->dev, "%s device registered.\n", rfkill->name); return 0; - -err_destroy: - rfkill_destroy(rfkill->rfkill_dev); - - return ret; } static int rfkill_gpio_remove(struct platform_device *pdev) diff -u linux-azure-4.15.0/net/sched/act_skbmod.c linux-azure-4.15.0/net/sched/act_skbmod.c --- linux-azure-4.15.0/net/sched/act_skbmod.c +++ linux-azure-4.15.0/net/sched/act_skbmod.c @@ -131,11 +131,8 @@ if (exists && bind) return 0; - if (!lflags) { - if (exists) - tcf_idr_release(*a, bind); + if (!lflags) return -EINVAL; - } if (!exists) { ret = 
tcf_idr_create(tn, parm->index, est, a, diff -u linux-azure-4.15.0/net/sched/act_vlan.c linux-azure-4.15.0/net/sched/act_vlan.c --- linux-azure-4.15.0/net/sched/act_vlan.c +++ linux-azure-4.15.0/net/sched/act_vlan.c @@ -161,8 +161,6 @@ case htons(ETH_P_8021AD): break; default: - if (exists) - tcf_idr_release(*a, bind); return -EPROTONOSUPPORT; } } else { diff -u linux-azure-4.15.0/net/sched/cls_api.c linux-azure-4.15.0/net/sched/cls_api.c --- linux-azure-4.15.0/net/sched/cls_api.c +++ linux-azure-4.15.0/net/sched/cls_api.c @@ -150,8 +150,8 @@ } else { err = -ENOENT; } -#endif goto errout; +#endif } tp->classify = tp->ops->classify; tp->protocol = protocol; reverted: --- linux-azure-4.15.0/net/sched/sch_fq.c +++ linux-azure-4.15.0.orig/net/sched/sch_fq.c @@ -128,28 +128,6 @@ return f->next == &detached; } -static bool fq_flow_is_throttled(const struct fq_flow *f) -{ - return f->next == &throttled; -} - -static void fq_flow_add_tail(struct fq_flow_head *head, struct fq_flow *flow) -{ - if (head->first) - head->last->next = flow; - else - head->first = flow; - head->last = flow; - flow->next = NULL; -} - -static void fq_flow_unset_throttled(struct fq_sched_data *q, struct fq_flow *f) -{ - rb_erase(&f->rate_node, &q->delayed); - q->throttled_flows--; - fq_flow_add_tail(&q->old_flows, f); -} - static void fq_flow_set_throttled(struct fq_sched_data *q, struct fq_flow *f) { struct rb_node **p = &q->delayed.rb_node, *parent = NULL; @@ -177,6 +155,15 @@ static struct kmem_cache *fq_flow_cachep __read_mostly; +static void fq_flow_add_tail(struct fq_flow_head *head, struct fq_flow *flow) +{ + if (head->first) + head->last->next = flow; + else + head->first = flow; + head->last = flow; + flow->next = NULL; +} /* limit number of collected flows per round */ #define FQ_GC_MAX 8 @@ -280,8 +267,6 @@ f->socket_hash != sk->sk_hash)) { f->credit = q->initial_quantum; f->socket_hash = sk->sk_hash; - if (fq_flow_is_throttled(f)) - fq_flow_unset_throttled(q, f); f->time_next_packet = 
0ULL; } return f; @@ -453,7 +438,9 @@ q->time_next_delayed_flow = f->time_next_packet; break; } + rb_erase(p, &q->delayed); + q->throttled_flows--; + fq_flow_add_tail(&q->old_flows, f); - fq_flow_unset_throttled(q, f); } } diff -u linux-azure-4.15.0/net/sched/sch_generic.c linux-azure-4.15.0/net/sched/sch_generic.c --- linux-azure-4.15.0/net/sched/sch_generic.c +++ linux-azure-4.15.0/net/sched/sch_generic.c @@ -369,7 +369,7 @@ if (test_and_clear_bit(__LINK_STATE_NOCARRIER, &dev->state)) { if (dev->reg_state == NETREG_UNINITIALIZED) return; - atomic_inc(&dev->carrier_up_count); + atomic_inc(&dev->carrier_changes); linkwatch_fire_event(dev); if (netif_running(dev)) __netdev_watchdog_up(dev); @@ -388,7 +388,7 @@ if (!test_and_set_bit(__LINK_STATE_NOCARRIER, &dev->state)) { if (dev->reg_state == NETREG_UNINITIALIZED) return; - atomic_inc(&dev->carrier_down_count); + atomic_inc(&dev->carrier_changes); linkwatch_fire_event(dev); } } diff -u linux-azure-4.15.0/net/sched/sch_red.c linux-azure-4.15.0/net/sched/sch_red.c --- linux-azure-4.15.0/net/sched/sch_red.c +++ linux-azure-4.15.0/net/sched/sch_red.c @@ -219,11 +219,10 @@ child = fifo_create_dflt(sch, &bfifo_qdisc_ops, ctl->limit); if (IS_ERR(child)) return PTR_ERR(child); - - /* child is fifo, no need to check for noop_qdisc */ - qdisc_hash_add(child, true); } + if (child != &noop_qdisc) + qdisc_hash_add(child, true); sch_tree_lock(sch); q->flags = ctl->flags; q->limit = ctl->limit; diff -u linux-azure-4.15.0/net/sched/sch_tbf.c linux-azure-4.15.0/net/sched/sch_tbf.c --- linux-azure-4.15.0/net/sched/sch_tbf.c +++ linux-azure-4.15.0/net/sched/sch_tbf.c @@ -378,9 +378,6 @@ err = PTR_ERR(child); goto done; } - - /* child is fifo, no need to check for noop_qdisc */ - qdisc_hash_add(child, true); } sch_tree_lock(sch); @@ -389,6 +386,8 @@ q->qdisc->qstats.backlog); qdisc_destroy(q->qdisc); q->qdisc = child; + if (child != &noop_qdisc) + qdisc_hash_add(child, true); } q->limit = qopt->limit; if (tb[TCA_TBF_PBURST]) reverted: 
--- linux-azure-4.15.0/net/sctp/associola.c +++ linux-azure-4.15.0.orig/net/sctp/associola.c @@ -1024,9 +1024,8 @@ struct sctp_endpoint *ep; struct sctp_chunk *chunk; struct sctp_inq *inqueue; + int state; - int first_time = 1; /* is this the first time through the loop */ int error = 0; - int state; /* The association should be held so we should be safe. */ ep = asoc->ep; @@ -1037,30 +1036,6 @@ state = asoc->state; subtype = SCTP_ST_CHUNK(chunk->chunk_hdr->type); - /* If the first chunk in the packet is AUTH, do special - * processing specified in Section 6.3 of SCTP-AUTH spec - */ - if (first_time && subtype.chunk == SCTP_CID_AUTH) { - struct sctp_chunkhdr *next_hdr; - - next_hdr = sctp_inq_peek(inqueue); - if (!next_hdr) - goto normal; - - /* If the next chunk is COOKIE-ECHO, skip the AUTH - * chunk while saving a pointer to it so we can do - * Authentication later (during cookie-echo - * processing). - */ - if (next_hdr->type == SCTP_CID_COOKIE_ECHO) { - chunk->auth_chunk = skb_clone(chunk->skb, - GFP_ATOMIC); - chunk->auth = 1; - continue; - } - } - -normal: /* SCTP-AUTH, Section 6.3: * The receiver has a list of chunk types which it expects * to be received only after an AUTH-chunk. This list has @@ -1099,9 +1074,6 @@ /* If there is an error on chunk, discard this packet. */ if (error && chunk) chunk->pdiscard = 1; - - if (first_time) - first_time = 0; } sctp_association_put(asoc); } reverted: --- linux-azure-4.15.0/net/sctp/inqueue.c +++ linux-azure-4.15.0.orig/net/sctp/inqueue.c @@ -217,7 +217,7 @@ skb_pull(chunk->skb, sizeof(*ch)); chunk->subh.v = NULL; /* Subheader is no longer valid. 
*/ + if (chunk->chunk_end + sizeof(*ch) < skb_tail_pointer(chunk->skb)) { - if (chunk->chunk_end + sizeof(*ch) <= skb_tail_pointer(chunk->skb)) { /* This is not a singleton */ chunk->singleton = 0; } else if (chunk->chunk_end > skb_tail_pointer(chunk->skb)) { diff -u linux-azure-4.15.0/net/sctp/ipv6.c linux-azure-4.15.0/net/sctp/ipv6.c --- linux-azure-4.15.0/net/sctp/ipv6.c +++ linux-azure-4.15.0/net/sctp/ipv6.c @@ -866,9 +866,6 @@ if (sctp_is_any(sk, addr1) || sctp_is_any(sk, addr2)) return 1; - if (addr1->sa.sa_family == AF_INET && addr2->sa.sa_family == AF_INET) - return addr1->v4.sin_addr.s_addr == addr2->v4.sin_addr.s_addr; - return __sctp_v6_cmp_addr(addr1, addr2); } reverted: --- linux-azure-4.15.0/net/sctp/sm_statefuns.c +++ linux-azure-4.15.0.orig/net/sctp/sm_statefuns.c @@ -150,7 +150,10 @@ struct sctp_cmd_seq *commands); static enum sctp_ierror sctp_sf_authenticate( + struct net *net, + const struct sctp_endpoint *ep, const struct sctp_association *asoc, + const union sctp_subtype type, struct sctp_chunk *chunk); static enum sctp_disposition __sctp_sf_do_9_1_abort( @@ -615,38 +618,6 @@ return SCTP_DISPOSITION_CONSUME; } -static bool sctp_auth_chunk_verify(struct net *net, struct sctp_chunk *chunk, - const struct sctp_association *asoc) -{ - struct sctp_chunk auth; - - if (!chunk->auth_chunk) - return true; - - /* SCTP-AUTH: auth_chunk pointer is only set when the cookie-echo - * is supposed to be authenticated and we have to do delayed - * authentication. We've just recreated the association using - * the information in the cookie and now it's much easier to - * do the authentication. 
- */ - - /* Make sure that we and the peer are AUTH capable */ - if (!net->sctp.auth_enable || !asoc->peer.auth_capable) - return false; - - /* set-up our fake chunk so that we can process it */ - auth.skb = chunk->auth_chunk; - auth.asoc = chunk->asoc; - auth.sctp_hdr = chunk->sctp_hdr; - auth.chunk_hdr = (struct sctp_chunkhdr *) - skb_push(chunk->auth_chunk, - sizeof(struct sctp_chunkhdr)); - skb_pull(chunk->auth_chunk, sizeof(struct sctp_chunkhdr)); - auth.transport = chunk->transport; - - return sctp_sf_authenticate(asoc, &auth) == SCTP_IERROR_NO_ERROR; -} - /* * Respond to a normal COOKIE ECHO chunk. * We are the side that is being asked for an association. @@ -784,9 +755,37 @@ if (error) goto nomem_init; + /* SCTP-AUTH: auth_chunk pointer is only set when the cookie-echo + * is supposed to be authenticated and we have to do delayed + * authentication. We've just recreated the association using + * the information in the cookie and now it's much easier to + * do the authentication. 
+ */ + if (chunk->auth_chunk) { + struct sctp_chunk auth; + enum sctp_ierror ret; + + /* Make sure that we and the peer are AUTH capable */ + if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) { + sctp_association_free(new_asoc); + return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); + } + + /* set-up our fake chunk so that we can process it */ + auth.skb = chunk->auth_chunk; + auth.asoc = chunk->asoc; + auth.sctp_hdr = chunk->sctp_hdr; + auth.chunk_hdr = (struct sctp_chunkhdr *) + skb_push(chunk->auth_chunk, + sizeof(struct sctp_chunkhdr)); + skb_pull(chunk->auth_chunk, sizeof(struct sctp_chunkhdr)); + auth.transport = chunk->transport; + + ret = sctp_sf_authenticate(net, ep, new_asoc, type, &auth); + if (ret != SCTP_IERROR_NO_ERROR) { + sctp_association_free(new_asoc); + return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); + } - if (!sctp_auth_chunk_verify(net, chunk, new_asoc)) { - sctp_association_free(new_asoc); - return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); } repl = sctp_make_cookie_ack(new_asoc, chunk); @@ -1756,15 +1755,13 @@ GFP_ATOMIC)) goto nomem; - if (!sctp_auth_chunk_verify(net, chunk, new_asoc)) - return SCTP_DISPOSITION_DISCARD; - /* Make sure no new addresses are being added during the * restart. Though this is a pretty complicated attack * since you'd have to get inside the cookie. */ + if (!sctp_sf_check_restart_addrs(new_asoc, asoc, chunk, commands)) { - if (!sctp_sf_check_restart_addrs(new_asoc, asoc, chunk, commands)) return SCTP_DISPOSITION_CONSUME; + } /* If the endpoint is in the SHUTDOWN-ACK-SENT state and recognizes * the peer has restarted (Action A), it MUST NOT setup a new @@ -1870,9 +1867,6 @@ GFP_ATOMIC)) goto nomem; - if (!sctp_auth_chunk_verify(net, chunk, new_asoc)) - return SCTP_DISPOSITION_DISCARD; - /* Update the content of current association. 
*/ sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc)); sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE, @@ -1967,9 +1961,6 @@ * a COOKIE ACK. */ - if (!sctp_auth_chunk_verify(net, chunk, asoc)) - return SCTP_DISPOSITION_DISCARD; - /* Don't accidentally move back into established state. */ if (asoc->state < SCTP_STATE_ESTABLISHED) { sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP, @@ -2009,7 +2000,7 @@ } } + repl = sctp_make_cookie_ack(new_asoc, chunk); - repl = sctp_make_cookie_ack(asoc, chunk); if (!repl) goto nomem; @@ -4120,7 +4111,10 @@ * The return value is the disposition of the chunk. */ static enum sctp_ierror sctp_sf_authenticate( + struct net *net, + const struct sctp_endpoint *ep, const struct sctp_association *asoc, + const union sctp_subtype type, struct sctp_chunk *chunk) { struct sctp_authhdr *auth_hdr; @@ -4218,7 +4212,7 @@ commands); auth_hdr = (struct sctp_authhdr *)chunk->skb->data; + error = sctp_sf_authenticate(net, ep, asoc, type, chunk); - error = sctp_sf_authenticate(asoc, chunk); switch (error) { case SCTP_IERROR_AUTH_BAD_HMAC: /* Generate the ERROR chunk and discard the rest reverted: --- linux-azure-4.15.0/net/sctp/stream.c +++ linux-azure-4.15.0.orig/net/sctp/stream.c @@ -237,8 +237,6 @@ new->out = NULL; new->in = NULL; - new->outcnt = 0; - new->incnt = 0; } static int sctp_send_reconf(struct sctp_association *asoc, reverted: --- linux-azure-4.15.0/net/sctp/ulpevent.c +++ linux-azure-4.15.0.orig/net/sctp/ulpevent.c @@ -717,6 +717,7 @@ return event; fail_mark: + sctp_chunk_put(chunk); kfree_skb(skb); fail: return NULL; diff -u linux-azure-4.15.0/net/smc/af_smc.c linux-azure-4.15.0/net/smc/af_smc.c --- linux-azure-4.15.0/net/smc/af_smc.c +++ linux-azure-4.15.0/net/smc/af_smc.c @@ -1138,15 +1138,13 @@ if ((sk->sk_state == SMC_INIT) || smc->use_fallback) { /* delegate to CLC child sock */ mask = smc->clcsock->ops->poll(file, smc->clcsock, wait); + /* if non-blocking connect finished ... 
*/ lock_sock(sk); - sk->sk_err = smc->clcsock->sk->sk_err; - if (sk->sk_err) { - mask |= POLLERR; - } else { - /* if non-blocking connect finished ... */ - if (sk->sk_state == SMC_INIT && - mask & POLLOUT && - smc->clcsock->sk->sk_state != TCP_CLOSE) { + if ((sk->sk_state == SMC_INIT) && (mask & POLLOUT)) { + sk->sk_err = smc->clcsock->sk->sk_err; + if (sk->sk_err) { + mask |= POLLERR; + } else { rc = smc_connect_rdma(smc); if (rc < 0) mask |= POLLERR; reverted: --- linux-azure-4.15.0/net/smc/smc_pnet.c +++ linux-azure-4.15.0.orig/net/smc/smc_pnet.c @@ -245,45 +245,40 @@ static int smc_pnet_fill_entry(struct net *net, struct smc_pnetentry *pnetelem, struct nlattr *tb[]) { + char *string, *ibname = NULL; + int rc = 0; - char *string, *ibname; - int rc; memset(pnetelem, 0, sizeof(*pnetelem)); INIT_LIST_HEAD(&pnetelem->list); + if (tb[SMC_PNETID_NAME]) { + string = (char *)nla_data(tb[SMC_PNETID_NAME]); + if (!smc_pnetid_valid(string, pnetelem->pnet_name)) { + rc = -EINVAL; + goto error; + } + } + if (tb[SMC_PNETID_ETHNAME]) { + string = (char *)nla_data(tb[SMC_PNETID_ETHNAME]); + pnetelem->ndev = dev_get_by_name(net, string); + if (!pnetelem->ndev) + return -ENOENT; + } + if (tb[SMC_PNETID_IBNAME]) { + ibname = (char *)nla_data(tb[SMC_PNETID_IBNAME]); + ibname = strim(ibname); + pnetelem->smcibdev = smc_pnet_find_ib(ibname); + if (!pnetelem->smcibdev) { + rc = -ENOENT; + goto error; + } + } + if (tb[SMC_PNETID_IBPORT]) { + pnetelem->ib_port = nla_get_u8(tb[SMC_PNETID_IBPORT]); + if (pnetelem->ib_port > SMC_MAX_PORTS) { + rc = -EINVAL; + goto error; + } + } - - rc = -EINVAL; - if (!tb[SMC_PNETID_NAME]) - goto error; - string = (char *)nla_data(tb[SMC_PNETID_NAME]); - if (!smc_pnetid_valid(string, pnetelem->pnet_name)) - goto error; - - rc = -EINVAL; - if (!tb[SMC_PNETID_ETHNAME]) - goto error; - rc = -ENOENT; - string = (char *)nla_data(tb[SMC_PNETID_ETHNAME]); - pnetelem->ndev = dev_get_by_name(net, string); - if (!pnetelem->ndev) - goto error; - - rc = -EINVAL; - if 
(!tb[SMC_PNETID_IBNAME]) - goto error; - rc = -ENOENT; - ibname = (char *)nla_data(tb[SMC_PNETID_IBNAME]); - ibname = strim(ibname); - pnetelem->smcibdev = smc_pnet_find_ib(ibname); - if (!pnetelem->smcibdev) - goto error; - - rc = -EINVAL; - if (!tb[SMC_PNETID_IBPORT]) - goto error; - pnetelem->ib_port = nla_get_u8(tb[SMC_PNETID_IBPORT]); - if (pnetelem->ib_port < 1 || pnetelem->ib_port > SMC_MAX_PORTS) - goto error; - return 0; error: @@ -312,8 +307,6 @@ void *hdr; int rc; - if (!info->attrs[SMC_PNETID_NAME]) - return -EINVAL; pnetelem = smc_pnet_find_pnetid( (char *)nla_data(info->attrs[SMC_PNETID_NAME])); if (!pnetelem) @@ -366,8 +359,6 @@ static int smc_pnet_del(struct sk_buff *skb, struct genl_info *info) { - if (!info->attrs[SMC_PNETID_NAME]) - return -EINVAL; return smc_pnet_remove_by_pnetid( (char *)nla_data(info->attrs[SMC_PNETID_NAME])); } reverted: --- linux-azure-4.15.0/net/socket.c +++ linux-azure-4.15.0.orig/net/socket.c @@ -544,10 +544,7 @@ if (!err && (iattr->ia_valid & ATTR_UID)) { struct socket *sock = SOCKET_I(d_inode(dentry)); + sock->sk->sk_uid = iattr->ia_uid; - if (sock->sk) - sock->sk->sk_uid = iattr->ia_uid; - else - err = -ENOENT; } return err; @@ -597,16 +594,12 @@ * an inode not a file. 
*/ +void sock_release(struct socket *sock) -static void __sock_release(struct socket *sock, struct inode *inode) { if (sock->ops) { struct module *owner = sock->ops->owner; - if (inode) - inode_lock(inode); sock->ops->release(sock); - if (inode) - inode_unlock(inode); sock->ops = NULL; module_put(owner); } @@ -621,11 +614,6 @@ } sock->file = NULL; } - -void sock_release(struct socket *sock) -{ - __sock_release(sock, NULL); -} EXPORT_SYMBOL(sock_release); void __sock_tx_timestamp(__u16 tsflags, __u8 *tx_flags) @@ -1140,7 +1128,7 @@ static int sock_close(struct inode *inode, struct file *filp) { + sock_release(SOCKET_I(inode)); - __sock_release(SOCKET_I(inode), inode); return 0; } reverted: --- linux-azure-4.15.0/net/tipc/socket.c +++ linux-azure-4.15.0.orig/net/tipc/socket.c @@ -1502,10 +1502,10 @@ srcaddr->sock.family = AF_TIPC; srcaddr->sock.addrtype = TIPC_ADDR_ID; - srcaddr->sock.scope = 0; srcaddr->sock.addr.id.ref = msg_origport(hdr); srcaddr->sock.addr.id.node = msg_orignode(hdr); srcaddr->sock.addr.name.domain = 0; + srcaddr->sock.scope = 0; m->msg_namelen = sizeof(struct sockaddr_tipc); if (!msg_in_group(hdr)) @@ -1514,7 +1514,6 @@ /* Group message users may also want to know sending member's id */ srcaddr->member.family = AF_TIPC; srcaddr->member.addrtype = TIPC_ADDR_NAME; - srcaddr->member.scope = 0; srcaddr->member.addr.name.name.type = msg_nametype(hdr); srcaddr->member.addr.name.name.instance = TIPC_SKB_CB(skb)->orig_member; srcaddr->member.addr.name.domain = 0; diff -u linux-azure-4.15.0/net/tls/tls_main.c linux-azure-4.15.0/net/tls/tls_main.c --- linux-azure-4.15.0/net/tls/tls_main.c +++ linux-azure-4.15.0/net/tls/tls_main.c @@ -107,7 +107,6 @@ size = sg->length - offset; offset += sg->offset; - ctx->in_tcp_sendpages = true; while (1) { if (sg_is_last(sg)) sendpage_flags = flags; @@ -128,7 +127,6 @@ offset -= sg->offset; ctx->partially_sent_offset = offset; ctx->partially_sent_record = (void *)sg; - ctx->in_tcp_sendpages = false; return ret; } @@ 
-143,8 +141,6 @@ } clear_bit(TLS_PENDING_CLOSED_RECORD, &ctx->flags); - ctx->in_tcp_sendpages = false; - ctx->sk_write_space(sk); return 0; } @@ -214,10 +210,6 @@ { struct tls_context *ctx = tls_get_ctx(sk); - /* We are already sending pages, ignore notification */ - if (ctx->in_tcp_sendpages) - return; - if (!sk->sk_write_pending && tls_is_pending_closed_record(ctx)) { gfp_t sk_allocation = sk->sk_allocation; int rc; reverted: --- linux-azure-4.15.0/net/wireless/core.c +++ linux-azure-4.15.0.orig/net/wireless/core.c @@ -95,9 +95,6 @@ ASSERT_RTNL(); - if (strlen(newname) > NL80211_WIPHY_NAME_MAXLEN) - return -EINVAL; - /* prohibit calling the thing phy%d when %d is not its number */ sscanf(newname, PHY_NAME "%d%n", &wiphy_idx, &taken); if (taken == strlen(newname) && wiphy_idx != rdev->wiphy_idx) { diff -u linux-azure-4.15.0/security/apparmor/lib.c linux-azure-4.15.0/security/apparmor/lib.c --- linux-azure-4.15.0/security/apparmor/lib.c +++ linux-azure-4.15.0/security/apparmor/lib.c @@ -327,7 +327,7 @@ /* for v5 perm mapping in the policydb, the other set is used * to extend the general perm set */ - perms->allow |= map_other(dfa_other_allow(dfa, state)) | AA_MAY_LOCK; + perms->allow |= map_other(dfa_other_allow(dfa, state)); perms->audit |= map_other(dfa_other_audit(dfa, state)); perms->quiet |= map_other(dfa_other_quiet(dfa, state)); // perms->xindex = dfa_user_xindex(dfa, state); diff -u linux-azure-4.15.0/snapcraft.yaml linux-azure-4.15.0/snapcraft.yaml --- linux-azure-4.15.0/snapcraft.yaml +++ linux-azure-4.15.0/snapcraft.yaml @@ -17,11 +17,6 @@ kconfigflavour: generic kconfigs: - CONFIG_DEBUG_INFO=n - override-build: | - cp debian/scripts/retpoline-extract-one \ - $SNAPCRAFT_PART_BUILD/scripts/ubuntu-retpoline-extract-one - snapcraftctl build - kernel-with-firmware: false firmware: plugin: nil stage-packages: reverted: --- linux-azure-4.15.0/sound/core/control_compat.c +++ linux-azure-4.15.0.orig/sound/core/control_compat.c @@ -396,7 +396,8 @@ if 
(copy_from_user(&data->id, &data32->id, sizeof(data->id)) || copy_from_user(&data->type, &data32->type, 3 * sizeof(u32))) goto error; + if (get_user(data->owner, &data32->owner) || + get_user(data->type, &data32->type)) - if (get_user(data->owner, &data32->owner)) goto error; switch (data->type) { case SNDRV_CTL_ELEM_TYPE_BOOLEAN: reverted: --- linux-azure-4.15.0/sound/core/pcm_compat.c +++ linux-azure-4.15.0.orig/sound/core/pcm_compat.c @@ -27,11 +27,10 @@ s32 __user *src) { snd_pcm_sframes_t delay; - int err; + delay = snd_pcm_delay(substream); + if (delay < 0) + return delay; - err = snd_pcm_delay(substream, &delay); - if (err) - return err; if (put_user(delay, src)) return -EFAULT; return 0; @@ -423,8 +422,6 @@ return -ENOTTY; if (substream->stream != dir) return -EINVAL; - if (substream->runtime->status->state == SNDRV_PCM_STATE_OPEN) - return -EBADFD; if ((ch = substream->runtime->channels) > 128) return -EINVAL; diff -u linux-azure-4.15.0/sound/core/pcm_native.c linux-azure-4.15.0/sound/core/pcm_native.c --- linux-azure-4.15.0/sound/core/pcm_native.c +++ linux-azure-4.15.0/sound/core/pcm_native.c @@ -2687,8 +2687,7 @@ return err; } -static int snd_pcm_delay(struct snd_pcm_substream *substream, - snd_pcm_sframes_t *delay) +static snd_pcm_sframes_t snd_pcm_delay(struct snd_pcm_substream *substream) { struct snd_pcm_runtime *runtime = substream->runtime; int err; @@ -2704,9 +2703,7 @@ n += runtime->delay; } snd_pcm_stream_unlock_irq(substream); - if (!err) - *delay = n; - return err; + return err < 0 ? 
err : n; } static int snd_pcm_sync_ptr(struct snd_pcm_substream *substream, @@ -2749,7 +2746,6 @@ sync_ptr.s.status.hw_ptr = status->hw_ptr; sync_ptr.s.status.tstamp = status->tstamp; sync_ptr.s.status.suspended_state = status->suspended_state; - sync_ptr.s.status.audio_tstamp = status->audio_tstamp; snd_pcm_stream_unlock_irq(substream); if (copy_to_user(_sync_ptr, &sync_ptr, sizeof(sync_ptr))) return -EFAULT; @@ -2915,13 +2911,11 @@ return snd_pcm_hwsync(substream); case SNDRV_PCM_IOCTL_DELAY: { - snd_pcm_sframes_t delay; + snd_pcm_sframes_t delay = snd_pcm_delay(substream); snd_pcm_sframes_t __user *res = arg; - int err; - err = snd_pcm_delay(substream, &delay); - if (err) - return err; + if (delay < 0) + return delay; if (put_user(delay, res)) return -EFAULT; return 0; @@ -3009,7 +3003,13 @@ case SNDRV_PCM_IOCTL_DROP: return snd_pcm_drop(substream); case SNDRV_PCM_IOCTL_DELAY: - return snd_pcm_delay(substream, frames); + { + result = snd_pcm_delay(substream); + if (result < 0) + return result; + *frames = result; + return 0; + } default: return -EINVAL; } reverted: --- linux-azure-4.15.0/sound/core/seq/oss/seq_oss_event.c +++ linux-azure-4.15.0.orig/sound/core/seq/oss/seq_oss_event.c @@ -26,7 +26,6 @@ #include #include "seq_oss_readq.h" #include "seq_oss_writeq.h" -#include /* @@ -288,10 +287,10 @@ { struct seq_oss_synthinfo *info; + if (!snd_seq_oss_synth_is_valid(dp, dev)) - info = snd_seq_oss_synth_info(dp, dev); - if (!info) return -ENXIO; + info = &dp->synths[dev]; switch (info->arg.event_passing) { case SNDRV_SEQ_OSS_PROCESS_EVENTS: if (! 
info->ch || ch < 0 || ch >= info->nr_voices) { @@ -299,7 +298,6 @@ return set_note_event(dp, dev, SNDRV_SEQ_EVENT_NOTEON, ch, note, vel, ev); } - ch = array_index_nospec(ch, info->nr_voices); if (note == 255 && info->ch[ch].note >= 0) { /* volume control */ int type; @@ -349,10 +347,10 @@ { struct seq_oss_synthinfo *info; + if (!snd_seq_oss_synth_is_valid(dp, dev)) - info = snd_seq_oss_synth_info(dp, dev); - if (!info) return -ENXIO; + info = &dp->synths[dev]; switch (info->arg.event_passing) { case SNDRV_SEQ_OSS_PROCESS_EVENTS: if (! info->ch || ch < 0 || ch >= info->nr_voices) { @@ -360,7 +358,6 @@ return set_note_event(dp, dev, SNDRV_SEQ_EVENT_NOTEON, ch, note, vel, ev); } - ch = array_index_nospec(ch, info->nr_voices); if (info->ch[ch].note >= 0) { note = info->ch[ch].note; info->ch[ch].vel = 0; @@ -384,7 +381,7 @@ static int set_note_event(struct seq_oss_devinfo *dp, int dev, int type, int ch, int note, int vel, struct snd_seq_event *ev) { + if (! snd_seq_oss_synth_is_valid(dp, dev)) - if (!snd_seq_oss_synth_info(dp, dev)) return -ENXIO; ev->type = type; @@ -402,7 +399,7 @@ static int set_control_event(struct seq_oss_devinfo *dp, int dev, int type, int ch, int param, int val, struct snd_seq_event *ev) { + if (! 
snd_seq_oss_synth_is_valid(dp, dev)) - if (!snd_seq_oss_synth_info(dp, dev)) return -ENXIO; ev->type = type; reverted: --- linux-azure-4.15.0/sound/core/seq/oss/seq_oss_midi.c +++ linux-azure-4.15.0.orig/sound/core/seq/oss/seq_oss_midi.c @@ -29,7 +29,6 @@ #include "../seq_lock.h" #include #include -#include /* @@ -316,7 +315,6 @@ { if (dev < 0 || dev >= dp->max_mididev) return NULL; - dev = array_index_nospec(dev, dp->max_mididev); return get_mdev(dev); } reverted: --- linux-azure-4.15.0/sound/core/seq/oss/seq_oss_synth.c +++ linux-azure-4.15.0.orig/sound/core/seq/oss/seq_oss_synth.c @@ -26,7 +26,6 @@ #include #include #include -#include /* * constants @@ -340,13 +339,17 @@ dp->max_synthdev = 0; } +/* + * check if the specified device is MIDI mapped device + */ +static int +is_midi_dev(struct seq_oss_devinfo *dp, int dev) -static struct seq_oss_synthinfo * -get_synthinfo_nospec(struct seq_oss_devinfo *dp, int dev) { if (dev < 0 || dev >= dp->max_synthdev) + return 0; + if (dp->synths[dev].is_midi) + return 1; + return 0; - return NULL; - dev = array_index_nospec(dev, SNDRV_SEQ_OSS_MAX_SYNTH_DEVS); - return &dp->synths[dev]; } /* @@ -356,20 +359,14 @@ get_synthdev(struct seq_oss_devinfo *dp, int dev) { struct seq_oss_synth *rec; + if (dev < 0 || dev >= dp->max_synthdev) + return NULL; + if (! dp->synths[dev].opened) - struct seq_oss_synthinfo *info = get_synthinfo_nospec(dp, dev); - - if (!info) return NULL; + if (dp->synths[dev].is_midi) + return &midi_synth_dev; + if ((rec = get_sdev(dev)) == NULL) - if (!info->opened) return NULL; - if (info->is_midi) { - rec = &midi_synth_dev; - snd_use_lock_use(&rec->use_lock); - } else { - rec = get_sdev(dev); - if (!rec) - return NULL; - } if (! rec->opened) { snd_use_lock_free(&rec->use_lock); return NULL; @@ -405,8 +402,10 @@ struct seq_oss_synth *rec; struct seq_oss_synthinfo *info; + if (snd_BUG_ON(dev < 0 || dev >= dp->max_synthdev)) + return; + info = &dp->synths[dev]; + if (! 
info->opened) - info = get_synthinfo_nospec(dp, dev); - if (!info || !info->opened) return; if (info->sysex) info->sysex->len = 0; /* reset sysex */ @@ -455,14 +454,12 @@ const char __user *buf, int p, int c) { struct seq_oss_synth *rec; - struct seq_oss_synthinfo *info; int rc; + if (dev < 0 || dev >= dp->max_synthdev) - info = get_synthinfo_nospec(dp, dev); - if (!info) return -ENXIO; + if (is_midi_dev(dp, dev)) - if (info->is_midi) return 0; if ((rec = get_synthdev(dp, dev)) == NULL) return -ENXIO; @@ -470,25 +467,24 @@ if (rec->oper.load_patch == NULL) rc = -ENXIO; else + rc = rec->oper.load_patch(&dp->synths[dev].arg, fmt, buf, p, c); - rc = rec->oper.load_patch(&info->arg, fmt, buf, p, c); snd_use_lock_free(&rec->use_lock); return rc; } /* + * check if the device is valid synth device - * check if the device is valid synth device and return the synth info */ +int +snd_seq_oss_synth_is_valid(struct seq_oss_devinfo *dp, int dev) -struct seq_oss_synthinfo * -snd_seq_oss_synth_info(struct seq_oss_devinfo *dp, int dev) { struct seq_oss_synth *rec; - rec = get_synthdev(dp, dev); if (rec) { snd_use_lock_free(&rec->use_lock); + return 1; - return get_synthinfo_nospec(dp, dev); } + return 0; - return NULL; } @@ -503,18 +499,16 @@ int i, send; unsigned char *dest; struct seq_oss_synth_sysex *sysex; - struct seq_oss_synthinfo *info; + if (! snd_seq_oss_synth_is_valid(dp, dev)) - info = snd_seq_oss_synth_info(dp, dev); - if (!info) return -ENXIO; + sysex = dp->synths[dev].sysex; - sysex = info->sysex; if (sysex == NULL) { sysex = kzalloc(sizeof(*sysex), GFP_KERNEL); if (sysex == NULL) return -ENOMEM; + dp->synths[dev].sysex = sysex; - info->sysex = sysex; } send = 0; @@ -559,12 +553,10 @@ int snd_seq_oss_synth_addr(struct seq_oss_devinfo *dp, int dev, struct snd_seq_event *ev) { + if (! 
snd_seq_oss_synth_is_valid(dp, dev)) - struct seq_oss_synthinfo *info = snd_seq_oss_synth_info(dp, dev); - - if (!info) return -EINVAL; + snd_seq_oss_fill_addr(dp, ev, dp->synths[dev].arg.addr.client, + dp->synths[dev].arg.addr.port); - snd_seq_oss_fill_addr(dp, ev, info->arg.addr.client, - info->arg.addr.port); return 0; } @@ -576,18 +568,16 @@ snd_seq_oss_synth_ioctl(struct seq_oss_devinfo *dp, int dev, unsigned int cmd, unsigned long addr) { struct seq_oss_synth *rec; - struct seq_oss_synthinfo *info; int rc; + if (is_midi_dev(dp, dev)) - info = get_synthinfo_nospec(dp, dev); - if (!info || info->is_midi) return -ENXIO; if ((rec = get_synthdev(dp, dev)) == NULL) return -ENXIO; if (rec->oper.ioctl == NULL) rc = -ENXIO; else + rc = rec->oper.ioctl(&dp->synths[dev].arg, cmd, addr); - rc = rec->oper.ioctl(&info->arg, cmd, addr); snd_use_lock_free(&rec->use_lock); return rc; } @@ -599,10 +589,7 @@ int snd_seq_oss_synth_raw_event(struct seq_oss_devinfo *dp, int dev, unsigned char *data, struct snd_seq_event *ev) { + if (! 
snd_seq_oss_synth_is_valid(dp, dev) || is_midi_dev(dp, dev)) - struct seq_oss_synthinfo *info; - - info = snd_seq_oss_synth_info(dp, dev); - if (!info || info->is_midi) return -ENXIO; ev->type = SNDRV_SEQ_EVENT_OSS; memcpy(ev->data.raw8.d, data, 8); reverted: --- linux-azure-4.15.0/sound/core/seq/oss/seq_oss_synth.h +++ linux-azure-4.15.0.orig/sound/core/seq/oss/seq_oss_synth.h @@ -37,8 +37,7 @@ void snd_seq_oss_synth_reset(struct seq_oss_devinfo *dp, int dev); int snd_seq_oss_synth_load_patch(struct seq_oss_devinfo *dp, int dev, int fmt, const char __user *buf, int p, int c); +int snd_seq_oss_synth_is_valid(struct seq_oss_devinfo *dp, int dev); -struct seq_oss_synthinfo *snd_seq_oss_synth_info(struct seq_oss_devinfo *dp, - int dev); int snd_seq_oss_synth_sysex(struct seq_oss_devinfo *dp, int dev, unsigned char *buf, struct snd_seq_event *ev); int snd_seq_oss_synth_addr(struct seq_oss_devinfo *dp, int dev, struct snd_seq_event *ev); reverted: --- linux-azure-4.15.0/sound/core/seq/seq_virmidi.c +++ linux-azure-4.15.0.orig/sound/core/seq/seq_virmidi.c @@ -174,12 +174,12 @@ } return; } - spin_lock_irqsave(&substream->runtime->lock, flags); if (vmidi->event.type != SNDRV_SEQ_EVENT_NONE) { if (snd_seq_kernel_client_dispatch(vmidi->client, &vmidi->event, in_atomic(), 0) < 0) + return; - goto out; vmidi->event.type = SNDRV_SEQ_EVENT_NONE; } + spin_lock_irqsave(&substream->runtime->lock, flags); while (1) { count = __snd_rawmidi_transmit_peek(substream, buf, sizeof(buf)); if (count <= 0) diff -u linux-azure-4.15.0/sound/drivers/aloop.c linux-azure-4.15.0/sound/drivers/aloop.c --- linux-azure-4.15.0/sound/drivers/aloop.c +++ linux-azure-4.15.0/sound/drivers/aloop.c @@ -296,8 +296,6 @@ cable->pause |= stream; loopback_timer_stop(dpcm); spin_unlock(&cable->lock); - if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) - loopback_active_notify(dpcm); break; case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: case SNDRV_PCM_TRIGGER_RESUME: @@ -306,8 +304,6 @@ cable->pause &= ~stream; 
loopback_timer_start(dpcm); spin_unlock(&cable->lock); - if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) - loopback_active_notify(dpcm); break; default: return -EINVAL; @@ -831,11 +827,9 @@ { struct loopback *loopback = snd_kcontrol_chip(kcontrol); - mutex_lock(&loopback->cable_lock); ucontrol->value.integer.value[0] = loopback->setup[kcontrol->id.subdevice] [kcontrol->id.device].rate_shift; - mutex_unlock(&loopback->cable_lock); return 0; } @@ -867,11 +861,9 @@ { struct loopback *loopback = snd_kcontrol_chip(kcontrol); - mutex_lock(&loopback->cable_lock); ucontrol->value.integer.value[0] = loopback->setup[kcontrol->id.subdevice] [kcontrol->id.device].notify; - mutex_unlock(&loopback->cable_lock); return 0; } @@ -883,14 +875,12 @@ int change = 0; val = ucontrol->value.integer.value[0] ? 1 : 0; - mutex_lock(&loopback->cable_lock); if (val != loopback->setup[kcontrol->id.subdevice] [kcontrol->id.device].notify) { loopback->setup[kcontrol->id.subdevice] [kcontrol->id.device].notify = val; change = 1; } - mutex_unlock(&loopback->cable_lock); return change; } @@ -898,18 +888,13 @@ struct snd_ctl_elem_value *ucontrol) { struct loopback *loopback = snd_kcontrol_chip(kcontrol); - struct loopback_cable *cable; - + struct loopback_cable *cable = loopback->cables + [kcontrol->id.subdevice][kcontrol->id.device ^ 1]; unsigned int val = 0; - mutex_lock(&loopback->cable_lock); - cable = loopback->cables[kcontrol->id.subdevice][kcontrol->id.device ^ 1]; - if (cable != NULL) { - unsigned int running = cable->running ^ cable->pause; - - val = (running & (1 << SNDRV_PCM_STREAM_PLAYBACK)) ? 1 : 0; - } - mutex_unlock(&loopback->cable_lock); + if (cable != NULL) + val = (cable->running & (1 << SNDRV_PCM_STREAM_PLAYBACK)) ? 
+ 1 : 0; ucontrol->value.integer.value[0] = val; return 0; } @@ -952,11 +937,9 @@ { struct loopback *loopback = snd_kcontrol_chip(kcontrol); - mutex_lock(&loopback->cable_lock); ucontrol->value.integer.value[0] = loopback->setup[kcontrol->id.subdevice] [kcontrol->id.device].rate; - mutex_unlock(&loopback->cable_lock); return 0; } @@ -976,11 +959,9 @@ { struct loopback *loopback = snd_kcontrol_chip(kcontrol); - mutex_lock(&loopback->cable_lock); ucontrol->value.integer.value[0] = loopback->setup[kcontrol->id.subdevice] [kcontrol->id.device].channels; - mutex_unlock(&loopback->cable_lock); return 0; } reverted: --- linux-azure-4.15.0/sound/drivers/opl3/opl3_synth.c +++ linux-azure-4.15.0.orig/sound/drivers/opl3/opl3_synth.c @@ -21,7 +21,6 @@ #include #include -#include #include #include @@ -449,7 +448,7 @@ { unsigned short reg_side; unsigned char op_offset; + unsigned char voice_offset; - unsigned char voice_offset, voice_op; unsigned short opl3_reg; unsigned char reg_val; @@ -474,9 +473,7 @@ voice_offset = voice->voice - MAX_OPL2_VOICES; } /* Get register offset of operator */ + op_offset = snd_opl3_regmap[voice_offset][voice->op]; - voice_offset = array_index_nospec(voice_offset, MAX_OPL2_VOICES); - voice_op = array_index_nospec(voice->op, 4); - op_offset = snd_opl3_regmap[voice_offset][voice_op]; reg_val = 0x00; /* Set amplitude modulation (tremolo) effect */ reverted: --- linux-azure-4.15.0/sound/firewire/amdtp-stream.c +++ linux-azure-4.15.0.orig/sound/firewire/amdtp-stream.c @@ -773,6 +773,8 @@ u32 cycle; unsigned int packets; + s->max_payload_length = amdtp_stream_get_max_payload(s); + /* * For in-stream, first packet has come. 
* For out-stream, prepared to transmit first packet @@ -877,9 +879,6 @@ amdtp_stream_update(s); - if (s->direction == AMDTP_IN_STREAM) - s->max_payload_length = amdtp_stream_get_max_payload(s); - if (s->flags & CIP_NO_HEADER) s->tag = TAG_NO_CIP_HEADER; else reverted: --- linux-azure-4.15.0/sound/firewire/dice/dice-stream.c +++ linux-azure-4.15.0.orig/sound/firewire/dice/dice-stream.c @@ -435,7 +435,7 @@ err = init_stream(dice, AMDTP_IN_STREAM, i); if (err < 0) { for (; i >= 0; i--) + destroy_stream(dice, AMDTP_OUT_STREAM, i); - destroy_stream(dice, AMDTP_IN_STREAM, i); goto end; } } reverted: --- linux-azure-4.15.0/sound/firewire/dice/dice.c +++ linux-azure-4.15.0.orig/sound/firewire/dice/dice.c @@ -14,7 +14,7 @@ #define OUI_WEISS 0x001c6a #define OUI_LOUD 0x000ff2 #define OUI_FOCUSRITE 0x00130e +#define OUI_TCELECTRONIC 0x001486 -#define OUI_TCELECTRONIC 0x000166 #define DICE_CATEGORY_ID 0x04 #define WEISS_CATEGORY_ID 0x00 reverted: --- linux-azure-4.15.0/sound/pci/asihpi/hpimsginit.c +++ linux-azure-4.15.0.orig/sound/pci/asihpi/hpimsginit.c @@ -23,7 +23,6 @@ #include "hpi_internal.h" #include "hpimsginit.h" -#include /* The actual message size for each object type */ static u16 msg_size[HPI_OBJ_MAXINDEX + 1] = HPI_MESSAGE_SIZE_BY_OBJECT; @@ -40,12 +39,10 @@ { u16 size; + if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) - if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) { - object = array_index_nospec(object, HPI_OBJ_MAXINDEX + 1); size = msg_size[object]; + else - } else { size = sizeof(*phm); - } memset(phm, 0, size); phm->size = size; @@ -69,12 +66,10 @@ { u16 size; + if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) - if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) { - object = array_index_nospec(object, HPI_OBJ_MAXINDEX + 1); size = res_size[object]; + else - } else { size = sizeof(*phr); - } memset(phr, 0, sizeof(*phr)); phr->size = size; reverted: --- linux-azure-4.15.0/sound/pci/asihpi/hpioctl.c +++ linux-azure-4.15.0.orig/sound/pci/asihpi/hpioctl.c 
@@ -33,7 +33,6 @@ #include #include #include -#include #ifdef MODULE_FIRMWARE MODULE_FIRMWARE("asihpi/dsp5000.bin"); @@ -187,8 +186,7 @@ struct hpi_adapter *pa = NULL; if (hm->h.adapter_index < ARRAY_SIZE(adapters)) + pa = &adapters[hm->h.adapter_index]; - pa = &adapters[array_index_nospec(hm->h.adapter_index, - ARRAY_SIZE(adapters))]; if (!pa || !pa->adapter || !pa->adapter->type) { hpi_init_response(&hr->r0, hm->h.object, reverted: --- linux-azure-4.15.0/sound/pci/hda/hda_hwdep.c +++ linux-azure-4.15.0.orig/sound/pci/hda/hda_hwdep.c @@ -21,7 +21,6 @@ #include #include #include -#include #include #include "hda_codec.h" #include "hda_local.h" @@ -52,16 +51,7 @@ if (get_user(verb, &arg->verb)) return -EFAULT; + res = get_wcaps(codec, verb >> 24); - /* open-code get_wcaps(verb>>24) with nospec */ - verb >>= 24; - if (verb < codec->core.start_nid || - verb >= codec->core.start_nid + codec->core.num_nodes) { - res = 0; - } else { - verb -= codec->core.start_nid; - verb = array_index_nospec(verb, codec->core.num_nodes); - res = codec->wcaps[verb]; - } if (put_user(res, &arg->res)) return -EFAULT; return 0; diff -u linux-azure-4.15.0/sound/pci/hda/hda_intel.c linux-azure-4.15.0/sound/pci/hda/hda_intel.c --- linux-azure-4.15.0/sound/pci/hda/hda_intel.c +++ linux-azure-4.15.0/sound/pci/hda/hda_intel.c @@ -1312,16 +1312,15 @@ static int register_vga_switcheroo(struct azx *chip) { struct hda_intel *hda = container_of(chip, struct hda_intel, chip); - struct pci_dev *p; int err; if (!hda->use_vga_switcheroo) return 0; - - p = get_bound_vga(chip->pci); - err = vga_switcheroo_register_audio_client(chip->pci, &azx_vs_ops, p); - pci_dev_put(p); - + /* FIXME: currently only handling DIS controller + * is there any machine with two switchable HDMI audio controllers? 
+ */ + err = vga_switcheroo_register_audio_client(chip->pci, &azx_vs_ops, + VGA_SWITCHEROO_DIS); if (err < 0) return err; hda->vga_switcheroo_registered = 1; @@ -1428,7 +1427,7 @@ p = pci_get_domain_bus_and_slot(pci_domain_nr(pci->bus), pci->bus->number, 0); if (p) { - if ((p->class >> 16) == PCI_BASE_CLASS_DISPLAY) + if ((p->class >> 8) == PCI_CLASS_DISPLAY_VGA) return p; pci_dev_put(p); } @@ -2209,8 +2208,6 @@ SND_PCI_QUIRK(0x1849, 0x0c0c, "Asrock B85M-ITX", 0), /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ SND_PCI_QUIRK(0x1043, 0x8733, "Asus Prime X370-Pro", 0), - /* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */ - SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0), /* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */ SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0), {} @@ -2511,8 +2508,7 @@ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB }, /* AMD Raven */ { PCI_DEVICE(0x1022, 0x15e3), - .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB | - AZX_DCAPS_PM_RUNTIME }, + .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB }, /* ATI HDMI */ { PCI_DEVICE(0x1002, 0x0002), .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, diff -u linux-azure-4.15.0/sound/pci/hda/patch_conexant.c linux-azure-4.15.0/sound/pci/hda/patch_conexant.c --- linux-azure-4.15.0/sound/pci/hda/patch_conexant.c +++ linux-azure-4.15.0/sound/pci/hda/patch_conexant.c @@ -963,7 +963,6 @@ SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC), SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO), SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO), - SND_PCI_QUIRK(0x103c, 0x836e, "HP ProBook 455 G5", CXT_FIXUP_MUTE_LED_GPIO), SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN), reverted: 
--- linux-azure-4.15.0/sound/pci/hda/patch_hdmi.c +++ linux-azure-4.15.0.orig/sound/pci/hda/patch_hdmi.c @@ -1383,8 +1383,6 @@ pcm = get_pcm_rec(spec, per_pin->pcm_idx); else return; - if (!pcm->pcm) - return; if (!test_bit(per_pin->pcm_idx, &spec->pcm_in_use)) return; @@ -2153,13 +2151,8 @@ int dev, err; int pin_idx, pcm_idx; + for (pcm_idx = 0; pcm_idx < spec->pcm_used; pcm_idx++) { - if (!get_pcm_rec(spec, pcm_idx)->pcm) { - /* no PCM: mark this for skipping permanently */ - set_bit(pcm_idx, &spec->pcm_bitmap); - continue; - } - err = generic_hdmi_build_jack(codec, pcm_idx); if (err < 0) return err; diff -u linux-azure-4.15.0/sound/pci/hda/patch_realtek.c linux-azure-4.15.0/sound/pci/hda/patch_realtek.c --- linux-azure-4.15.0/sound/pci/hda/patch_realtek.c +++ linux-azure-4.15.0/sound/pci/hda/patch_realtek.c @@ -331,7 +331,6 @@ /* fallthrough */ case 0x10ec0215: case 0x10ec0233: - case 0x10ec0235: case 0x10ec0236: case 0x10ec0255: case 0x10ec0256: @@ -3833,7 +3832,7 @@ } } -#if IS_REACHABLE(CONFIG_INPUT) +#if IS_REACHABLE(INPUT) static void gpio2_mic_hotkey_event(struct hda_codec *codec, struct hda_jack_callback *event) { @@ -6577,8 +6576,7 @@ SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), - SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), - SND_PCI_QUIRK(0x17aa, 0x312a, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), + SND_PCI_QUIRK(0x17aa, 0x3138, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), @@ -6756,17 +6754,6 @@ {0x14, 0x90170110}, {0x19, 0x02a11030}, 
{0x21, 0x02211020}), - SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION, - {0x14, 0x90170110}, - {0x19, 0x02a11030}, - {0x1a, 0x02a11040}, - {0x1b, 0x01014020}, - {0x21, 0x0221101f}), - SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION, - {0x14, 0x90170110}, - {0x19, 0x02a11020}, - {0x1a, 0x02a11030}, - {0x21, 0x0221101f}), SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, {0x12, 0x90a60140}, {0x14, 0x90170110}, @@ -7178,11 +7165,8 @@ case 0x10ec0298: spec->codec_variant = ALC269_TYPE_ALC298; break; - case 0x10ec0235: case 0x10ec0255: spec->codec_variant = ALC269_TYPE_ALC255; - spec->shutup = alc256_shutup; - spec->init_hook = alc256_init; break; case 0x10ec0236: case 0x10ec0256: reverted: --- linux-azure-4.15.0/sound/pci/rme9652/hdspm.c +++ linux-azure-4.15.0.orig/sound/pci/rme9652/hdspm.c @@ -137,7 +137,6 @@ #include #include #include -#include #include #include @@ -5699,43 +5698,40 @@ struct snd_pcm_channel_info *info) { struct hdspm *hdspm = snd_pcm_substream_chip(substream); - unsigned int channel = info->channel; if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + if (snd_BUG_ON(info->channel >= hdspm->max_channels_out)) { - if (snd_BUG_ON(channel >= hdspm->max_channels_out)) { dev_info(hdspm->card->dev, "snd_hdspm_channel_info: output channel out of range (%d)\n", + info->channel); - channel); return -EINVAL; } + if (hdspm->channel_map_out[info->channel] < 0) { - channel = array_index_nospec(channel, hdspm->max_channels_out); - if (hdspm->channel_map_out[channel] < 0) { dev_info(hdspm->card->dev, "snd_hdspm_channel_info: output channel %d mapped out\n", + info->channel); - channel); return -EINVAL; } + info->offset = hdspm->channel_map_out[info->channel] * - info->offset = hdspm->channel_map_out[channel] * HDSPM_CHANNEL_BUFFER_BYTES; } else { + if (snd_BUG_ON(info->channel >= hdspm->max_channels_in)) { - if (snd_BUG_ON(channel >= hdspm->max_channels_in)) { 
dev_info(hdspm->card->dev, "snd_hdspm_channel_info: input channel out of range (%d)\n", + info->channel); - channel); return -EINVAL; } + if (hdspm->channel_map_in[info->channel] < 0) { - channel = array_index_nospec(channel, hdspm->max_channels_in); - if (hdspm->channel_map_in[channel] < 0) { dev_info(hdspm->card->dev, "snd_hdspm_channel_info: input channel %d mapped out\n", + info->channel); - channel); return -EINVAL; } + info->offset = hdspm->channel_map_in[info->channel] * - info->offset = hdspm->channel_map_in[channel] * HDSPM_CHANNEL_BUFFER_BYTES; } reverted: --- linux-azure-4.15.0/sound/pci/rme9652/rme9652.c +++ linux-azure-4.15.0.orig/sound/pci/rme9652/rme9652.c @@ -26,7 +26,6 @@ #include #include #include -#include #include #include @@ -2072,10 +2071,9 @@ if (snd_BUG_ON(info->channel >= RME9652_NCHANNELS)) return -EINVAL; + if ((chn = rme9652->channel_map[info->channel]) < 0) { - chn = rme9652->channel_map[array_index_nospec(info->channel, - RME9652_NCHANNELS)]; - if (chn < 0) return -EINVAL; + } info->offset = chn * RME9652_CHANNEL_BUFFER_BYTES; info->first = 0; reverted: --- linux-azure-4.15.0/sound/soc/codecs/hdmi-codec.c +++ linux-azure-4.15.0.orig/sound/soc/codecs/hdmi-codec.c @@ -798,7 +798,12 @@ static int hdmi_codec_remove(struct platform_device *pdev) { + struct device *dev = &pdev->dev; + struct hdmi_codec_priv *hcp; + + hcp = dev_get_drvdata(dev); + kfree(hcp->chmap_info); + snd_soc_unregister_codec(dev); - snd_soc_unregister_codec(&pdev->dev); return 0; } reverted: --- linux-azure-4.15.0/sound/soc/fsl/fsl_esai.c +++ linux-azure-4.15.0.orig/sound/soc/fsl/fsl_esai.c @@ -144,13 +144,6 @@ psr = ratio <= 256 * maxfp ? ESAI_xCCR_xPSR_BYPASS : ESAI_xCCR_xPSR_DIV8; - /* Do not loop-search if PM (1 ~ 256) alone can serve the ratio */ - if (ratio <= 256) { - pm = ratio; - fp = 1; - goto out; - } - /* Set the max fluctuation -- 0.1% of the max devisor */ savesub = (psr ? 
1 : 8) * 256 * maxfp / 1000; reverted: --- linux-azure-4.15.0/sound/soc/omap/omap-dmic.c +++ linux-azure-4.15.0.orig/sound/soc/omap/omap-dmic.c @@ -281,7 +281,7 @@ static int omap_dmic_select_fclk(struct omap_dmic *dmic, int clk_id, unsigned int freq) { + struct clk *parent_clk; - struct clk *parent_clk, *mux; char *parent_clk_name; int ret = 0; @@ -329,21 +329,14 @@ return -ENODEV; } - mux = clk_get_parent(dmic->fclk); - if (IS_ERR(mux)) { - dev_err(dmic->dev, "can't get fck mux parent\n"); - clk_put(parent_clk); - return -ENODEV; - } - mutex_lock(&dmic->mutex); if (dmic->active) { /* disable clock while reparenting */ pm_runtime_put_sync(dmic->dev); + ret = clk_set_parent(dmic->fclk, parent_clk); - ret = clk_set_parent(mux, parent_clk); pm_runtime_get_sync(dmic->dev); } else { + ret = clk_set_parent(dmic->fclk, parent_clk); - ret = clk_set_parent(mux, parent_clk); } mutex_unlock(&dmic->mutex); @@ -356,7 +349,6 @@ dmic->fclk_freq = freq; err_busy: - clk_put(mux); clk_put(parent_clk); return ret; reverted: --- linux-azure-4.15.0/sound/soc/rockchip/Kconfig +++ linux-azure-4.15.0.orig/sound/soc/rockchip/Kconfig @@ -56,9 +56,6 @@ depends on SND_SOC_ROCKCHIP && I2C && GPIOLIB && CLKDEV_LOOKUP select SND_SOC_ROCKCHIP_I2S select SND_SOC_HDMI_CODEC - select SND_SOC_ES8328_I2C - select SND_SOC_ES8328_SPI if SPI_MASTER - select DRM_DW_HDMI_I2S_AUDIO if DRM_DW_HDMI help Say Y or M here if you want to add support for SoC audio on Rockchip RK3288 boards using an analog output and the built-in HDMI audio. reverted: --- linux-azure-4.15.0/sound/soc/samsung/i2s.c +++ linux-azure-4.15.0.orig/sound/soc/samsung/i2s.c @@ -656,12 +656,8 @@ tmp |= mod_slave; break; case SND_SOC_DAIFMT_CBS_CFS: + /* Set default source clock in Master mode */ + if (i2s->rclk_srcrate == 0) - /* - * Set default source clock in Master mode, only when the - * CLK_I2S_RCLK_SRC clock is not exposed so we ensure any - * clock configuration assigned in DT is not overwritten. 
- */ - if (i2s->rclk_srcrate == 0 && i2s->clk_data.clks == NULL) i2s_set_sysclk(dai, SAMSUNG_I2S_RCLKSRC_0, 0, SND_SOC_CLOCK_IN); break; @@ -885,11 +881,6 @@ return 0; if (!(i2s->quirks & QUIRK_NO_MUXPSR)) { - struct clk *rclksrc = i2s->clk_table[CLK_I2S_RCLK_SRC]; - - if (i2s->rclk_srcrate == 0 && rclksrc && !IS_ERR(rclksrc)) - i2s->rclk_srcrate = clk_get_rate(rclksrc); - psr = i2s->rclk_srcrate / i2s->frmclk / rfs; writel(((psr - 1) << 8) | PSR_PSREN, i2s->addr + I2SPSR); dev_dbg(&i2s->pdev->dev, reverted: --- linux-azure-4.15.0/sound/soc/samsung/odroid.c +++ linux-azure-4.15.0.orig/sound/soc/samsung/odroid.c @@ -36,26 +36,23 @@ { struct snd_soc_pcm_runtime *rtd = substream->private_data; struct odroid_priv *priv = snd_soc_card_get_drvdata(rtd->card); + unsigned int pll_freq, rclk_freq; - unsigned int pll_freq, rclk_freq, rfs; int ret; switch (params_rate(params)) { + case 32000: case 64000: + pll_freq = 131072006U; - pll_freq = 196608001U; - rfs = 384; break; case 44100: case 88200: case 176400: pll_freq = 180633609U; - rfs = 512; break; - case 32000: case 48000: case 96000: case 192000: pll_freq = 196608001U; - rfs = 512; break; default: return -EINVAL; @@ -70,7 +67,7 @@ * frequency values due to the EPLL output frequency not being exact * multiple of the audio sampling rate. 
*/ + rclk_freq = params_rate(params) * 256 + 1; - rclk_freq = params_rate(params) * rfs + 1; ret = clk_set_rate(priv->sclk_i2s, rclk_freq); if (ret < 0) diff -u linux-azure-4.15.0/sound/soc/soc-topology.c linux-azure-4.15.0/sound/soc/soc-topology.c --- linux-azure-4.15.0/sound/soc/soc-topology.c +++ linux-azure-4.15.0/sound/soc/soc-topology.c @@ -1276,9 +1276,6 @@ kfree(sm); continue; } - - /* create any TLV data */ - soc_tplg_create_tlv(tplg, &kc[i], &mc->hdr); } return kc; diff -u linux-azure-4.15.0/sound/usb/mixer.c linux-azure-4.15.0/sound/usb/mixer.c --- linux-azure-4.15.0/sound/usb/mixer.c +++ linux-azure-4.15.0/sound/usb/mixer.c @@ -911,14 +911,6 @@ } break; - case USB_ID(0x0d8c, 0x0103): - if (!strcmp(kctl->id.name, "PCM Playback Volume")) { - usb_audio_info(chip, - "set volume quirk for CM102-A+/102S+\n"); - cval->min = -256; - } - break; - case USB_ID(0x0471, 0x0101): case USB_ID(0x0471, 0x0104): case USB_ID(0x0471, 0x0105): reverted: --- linux-azure-4.15.0/sound/usb/mixer_maps.c +++ linux-azure-4.15.0.orig/sound/usb/mixer_maps.c @@ -353,11 +353,8 @@ /* * Dell usb dock with ALC4020 codec had a firmware problem where it got * screwed up when zero volume is passed; just skip it as a workaround - * - * Also the extension unit gives an access error, so skip it as well. 
*/ static const struct usbmix_name_map dell_alc4020_map[] = { - { 4, NULL }, /* extension unit */ { 16, NULL }, { 19, NULL }, { 0 } diff -u linux-azure-4.15.0/sound/usb/quirks.c linux-azure-4.15.0/sound/usb/quirks.c --- linux-azure-4.15.0/sound/usb/quirks.c +++ linux-azure-4.15.0/sound/usb/quirks.c @@ -1149,27 +1149,24 @@ return false; } -/* ITF-USB DSD based DACs need a vendor cmd to switch +/* Marantz/Denon USB DACs need a vendor cmd to switch * between PCM and native DSD mode - * (2 altsets version) */ -static bool is_itf_usb_dsd_2alts_dac(unsigned int id) +static bool is_marantz_denon_dac(unsigned int id) { switch (id) { case USB_ID(0x154e, 0x1003): /* Denon DA-300USB */ case USB_ID(0x154e, 0x3005): /* Marantz HD-DAC1 */ case USB_ID(0x154e, 0x3006): /* Marantz SA-14S1 */ - case USB_ID(0x1852, 0x5065): /* Luxman DA-06 */ return true; } return false; } -/* ITF-USB DSD based DACs need a vendor cmd to switch - * between PCM and native DSD mode - * (3 altsets version) +/* TEAC UD-501/UD-503/NT-503 USB DACs need a vendor cmd to switch + * between PCM/DOP and native DSD mode */ -static bool is_itf_usb_dsd_3alts_dac(unsigned int id) +static bool is_teac_dsd_dac(unsigned int id) { switch (id) { case USB_ID(0x0644, 0x8043): /* TEAC UD-501/UD-503/NT-503 */ @@ -1186,7 +1183,7 @@ struct usb_device *dev = subs->dev; int err; - if (is_itf_usb_dsd_2alts_dac(subs->stream->chip->usb_id)) { + if (is_marantz_denon_dac(subs->stream->chip->usb_id)) { /* First switch to alt set 0, otherwise the mode switch cmd * will not be accepted by the DAC */ @@ -1207,7 +1204,7 @@ break; } mdelay(20); - } else if (is_itf_usb_dsd_3alts_dac(subs->stream->chip->usb_id)) { + } else if (is_teac_dsd_dac(subs->stream->chip->usb_id)) { /* Vendor mode switch cmd is required. 
*/ switch (fmt->altsetting) { case 3: /* DSD mode (DSD_U32) requested */ @@ -1303,10 +1300,10 @@ (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS) mdelay(20); - /* ITF-USB DSD based DACs functionality need a delay + /* Marantz/Denon devices with USB DAC functionality need a delay * after each class compliant request */ - if (is_itf_usb_dsd_2alts_dac(chip->usb_id) + if (is_marantz_denon_dac(chip->usb_id) && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS) mdelay(20); @@ -1393,14 +1390,14 @@ break; } - /* ITF-USB DSD based DACs (2 altsets version) */ - if (is_itf_usb_dsd_2alts_dac(chip->usb_id)) { + /* Denon/Marantz devices with USB DAC functionality */ + if (is_marantz_denon_dac(chip->usb_id)) { if (fp->altsetting == 2) return SNDRV_PCM_FMTBIT_DSD_U32_BE; } - /* ITF-USB DSD based DACs (3 altsets version) */ - if (is_itf_usb_dsd_3alts_dac(chip->usb_id)) { + /* TEAC devices with USB DAC functionality */ + if (is_teac_dsd_dac(chip->usb_id)) { if (fp->altsetting == 3) return SNDRV_PCM_FMTBIT_DSD_U32_BE; } reverted: --- linux-azure-4.15.0/tools/lib/str_error_r.c +++ linux-azure-4.15.0.orig/tools/lib/str_error_r.c @@ -22,6 +22,6 @@ { int err = strerror_r(errnum, buf, buflen); if (err) + snprintf(buf, buflen, "INTERNAL ERROR: strerror_r(%d, %p, %zd)=%d", errnum, buf, buflen, err); - snprintf(buf, buflen, "INTERNAL ERROR: strerror_r(%d, [buf], %zd)=%d", errnum, buflen, err); return buf; } reverted: --- linux-azure-4.15.0/tools/lib/subcmd/pager.c +++ linux-azure-4.15.0.orig/tools/lib/subcmd/pager.c @@ -30,13 +30,10 @@ * have real input */ fd_set in; - fd_set exception; FD_ZERO(&in); - FD_ZERO(&exception); FD_SET(0, &in); + select(1, &in, NULL, &in, NULL); - FD_SET(0, &exception); - select(1, &in, NULL, &exception, NULL); setenv("LESS", "FRSX", 0); } diff -u linux-azure-4.15.0/tools/testing/selftests/firmware/fw_filesystem.sh linux-azure-4.15.0/tools/testing/selftests/firmware/fw_filesystem.sh --- linux-azure-4.15.0/tools/testing/selftests/firmware/fw_filesystem.sh +++ 
linux-azure-4.15.0/tools/testing/selftests/firmware/fw_filesystem.sh @@ -46,11 +46,9 @@ echo "$OLD_TIMEOUT" >/sys/class/firmware/timeout fi if [ "$OLD_FWPATH" = "" ]; then - # A zero-length write won't work; write a null byte - printf '\000' >/sys/module/firmware_class/parameters/path - else - echo -n "$OLD_FWPATH" >/sys/module/firmware_class/parameters/path + OLD_FWPATH=" " fi + echo -n "$OLD_FWPATH" >/sys/module/firmware_class/parameters/path rm -f "$FW" rmdir "$FWPATH" } diff -u linux-azure-4.15.0/virt/kvm/arm/arm.c linux-azure-4.15.0/virt/kvm/arm/arm.c --- linux-azure-4.15.0/virt/kvm/arm/arm.c +++ linux-azure-4.15.0/virt/kvm/arm/arm.c @@ -63,7 +63,7 @@ static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1); static u32 kvm_next_vmid; static unsigned int kvm_vmid_bits __read_mostly; -static DEFINE_RWLOCK(kvm_vmid_lock); +static DEFINE_SPINLOCK(kvm_vmid_lock); static bool vgic_present; @@ -465,16 +465,11 @@ { phys_addr_t pgd_phys; u64 vmid; - bool new_gen; - read_lock(&kvm_vmid_lock); - new_gen = need_new_vmid_gen(kvm); - read_unlock(&kvm_vmid_lock); - - if (!new_gen) + if (!need_new_vmid_gen(kvm)) return; - write_lock(&kvm_vmid_lock); + spin_lock(&kvm_vmid_lock); /* * We need to re-check the vmid_gen here to ensure that if another vcpu @@ -482,7 +477,7 @@ * use the same vmid. 
*/ if (!need_new_vmid_gen(kvm)) { - write_unlock(&kvm_vmid_lock); + spin_unlock(&kvm_vmid_lock); return; } @@ -516,7 +511,7 @@ vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits); kvm->arch.vttbr = pgd_phys | vmid; - write_unlock(&kvm_vmid_lock); + spin_unlock(&kvm_vmid_lock); } static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) diff -u linux-azure-4.15.0/virt/kvm/arm/psci.c linux-azure-4.15.0/virt/kvm/arm/psci.c --- linux-azure-4.15.0/virt/kvm/arm/psci.c +++ linux-azure-4.15.0/virt/kvm/arm/psci.c @@ -18,7 +18,6 @@ #include #include #include -#include #include #include @@ -431,59 +429,0 @@ - -int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu) -{ - return 1; /* PSCI version */ -} - -int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) -{ - if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices)) - return -EFAULT; - - return 0; -} - -int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) -{ - if (reg->id == KVM_REG_ARM_PSCI_VERSION) { - void __user *uaddr = (void __user *)(long)reg->addr; - u64 val; - - val = kvm_psci_version(vcpu, vcpu->kvm); - if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - return 0; - } - - return -EINVAL; -} - -int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) -{ - if (reg->id == KVM_REG_ARM_PSCI_VERSION) { - void __user *uaddr = (void __user *)(long)reg->addr; - bool wants_02; - u64 val; - - if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id))) - return -EFAULT; - - wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features); - - switch (val) { - case KVM_ARM_PSCI_0_1: - if (wants_02) - return -EINVAL; - vcpu->kvm->arch.psci_version = val; - return 0; - case KVM_ARM_PSCI_0_2: - case KVM_ARM_PSCI_1_0: - if (!wants_02) - return -EINVAL; - vcpu->kvm->arch.psci_version = val; - return 0; - } - } - - return -EINVAL; -} reverted: --- linux-azure-4.15.0/virt/kvm/arm/vgic/vgic-debug.c +++ 
linux-azure-4.15.0.orig/virt/kvm/arm/vgic/vgic-debug.c @@ -211,7 +211,6 @@ struct vgic_state_iter *iter = (struct vgic_state_iter *)v; struct vgic_irq *irq; struct kvm_vcpu *vcpu = NULL; - unsigned long flags; if (iter->dist_id == 0) { print_dist_state(s, &kvm->arch.vgic); @@ -228,9 +227,9 @@ irq = &kvm->arch.vgic.spis[iter->intid - VGIC_NR_PRIVATE_IRQS]; } + spin_lock(&irq->irq_lock); - spin_lock_irqsave(&irq->irq_lock, flags); print_irq_state(s, irq, vcpu); + spin_unlock(&irq->irq_lock); - spin_unlock_irqrestore(&irq->irq_lock, flags); return 0; } diff -u linux-azure-4.15.0/virt/kvm/arm/vgic/vgic-its.c linux-azure-4.15.0/virt/kvm/arm/vgic/vgic-its.c --- linux-azure-4.15.0/virt/kvm/arm/vgic/vgic-its.c +++ linux-azure-4.15.0/virt/kvm/arm/vgic/vgic-its.c @@ -52,7 +52,6 @@ { struct vgic_dist *dist = &kvm->arch.vgic; struct vgic_irq *irq = vgic_get_irq(kvm, NULL, intid), *oldirq; - unsigned long flags; int ret; /* In this case there is no put, since we keep the reference. */ @@ -72,7 +71,7 @@ irq->intid = intid; irq->target_vcpu = vcpu; - spin_lock_irqsave(&dist->lpi_list_lock, flags); + spin_lock(&dist->lpi_list_lock); /* * There could be a race with another vgic_add_lpi(), so we need to @@ -100,7 +99,7 @@ dist->lpi_list_count++; out_unlock: - spin_unlock_irqrestore(&dist->lpi_list_lock, flags); + spin_unlock(&dist->lpi_list_lock); /* * We "cache" the configuration table entries in our struct vgic_irq's. 
@@ -281,8 +280,8 @@ int ret; unsigned long flags; - ret = kvm_read_guest_lock(kvm, propbase + irq->intid - GIC_LPI_OFFSET, - &prop, 1); + ret = kvm_read_guest(kvm, propbase + irq->intid - GIC_LPI_OFFSET, + &prop, 1); if (ret) return ret; @@ -316,7 +315,6 @@ { struct vgic_dist *dist = &vcpu->kvm->arch.vgic; struct vgic_irq *irq; - unsigned long flags; u32 *intids; int irq_count, i = 0; @@ -332,7 +330,7 @@ if (!intids) return -ENOMEM; - spin_lock_irqsave(&dist->lpi_list_lock, flags); + spin_lock(&dist->lpi_list_lock); list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) { if (i == irq_count) break; @@ -341,7 +339,7 @@ continue; intids[i++] = irq->intid; } - spin_unlock_irqrestore(&dist->lpi_list_lock, flags); + spin_unlock(&dist->lpi_list_lock); *intid_ptr = intids; return i; @@ -350,11 +348,10 @@ static int update_affinity(struct vgic_irq *irq, struct kvm_vcpu *vcpu) { int ret = 0; - unsigned long flags; - spin_lock_irqsave(&irq->irq_lock, flags); + spin_lock(&irq->irq_lock); irq->target_vcpu = vcpu; - spin_unlock_irqrestore(&irq->irq_lock, flags); + spin_unlock(&irq->irq_lock); if (irq->hw) { struct its_vlpi_map map; @@ -444,9 +441,8 @@ * this very same byte in the last iteration. Reuse that. */ if (byte_offset != last_byte_offset) { - ret = kvm_read_guest_lock(vcpu->kvm, - pendbase + byte_offset, - &pendmask, 1); + ret = kvm_read_guest(vcpu->kvm, pendbase + byte_offset, + &pendmask, 1); if (ret) { kfree(intids); return ret; @@ -790,7 +786,7 @@ return false; /* Each 1st level entry is represented by a 64-bit value. 
*/ - if (kvm_read_guest_lock(its->dev->kvm, + if (kvm_read_guest(its->dev->kvm, BASER_ADDRESS(baser) + index * sizeof(indirect_ptr), &indirect_ptr, sizeof(indirect_ptr))) return false; @@ -1373,8 +1369,8 @@ cbaser = CBASER_ADDRESS(its->cbaser); while (its->cwriter != its->creadr) { - int ret = kvm_read_guest_lock(kvm, cbaser + its->creadr, - cmd_buf, ITS_CMD_SIZE); + int ret = kvm_read_guest(kvm, cbaser + its->creadr, + cmd_buf, ITS_CMD_SIZE); /* * If kvm_read_guest() fails, this could be due to the guest * programming a bogus value in CBASER or something else going @@ -1899,7 +1895,7 @@ int next_offset; size_t byte_offset; - ret = kvm_read_guest_lock(kvm, gpa, entry, esz); + ret = kvm_read_guest(kvm, gpa, entry, esz); if (ret) return ret; @@ -2269,7 +2265,7 @@ int ret; BUG_ON(esz > sizeof(val)); - ret = kvm_read_guest_lock(kvm, gpa, &val, esz); + ret = kvm_read_guest(kvm, gpa, &val, esz); if (ret) return ret; val = le64_to_cpu(val); diff -u linux-azure-4.15.0/virt/kvm/arm/vgic/vgic-v3.c linux-azure-4.15.0/virt/kvm/arm/vgic/vgic-v3.c --- linux-azure-4.15.0/virt/kvm/arm/vgic/vgic-v3.c +++ linux-azure-4.15.0/virt/kvm/arm/vgic/vgic-v3.c @@ -300,7 +300,7 @@ bit_nr = irq->intid % BITS_PER_BYTE; ptr = pendbase + byte_offset; - ret = kvm_read_guest_lock(kvm, ptr, &val, 1); + ret = kvm_read_guest(kvm, ptr, &val, 1); if (ret) return ret; @@ -353,7 +353,7 @@ ptr = pendbase + byte_offset; if (byte_offset != last_byte_offset) { - ret = kvm_read_guest_lock(kvm, ptr, &val, 1); + ret = kvm_read_guest(kvm, ptr, &val, 1); if (ret) return ret; last_byte_offset = byte_offset; diff -u linux-azure-4.15.0/virt/kvm/arm/vgic/vgic.c linux-azure-4.15.0/virt/kvm/arm/vgic/vgic.c --- linux-azure-4.15.0/virt/kvm/arm/vgic/vgic.c +++ linux-azure-4.15.0/virt/kvm/arm/vgic/vgic.c @@ -40,13 +40,9 @@ * kvm->lock (mutex) * its->cmd_lock (mutex) * its->its_lock (mutex) - * vgic_cpu->ap_list_lock must be taken with IRQs disabled - * kvm->lpi_list_lock must be taken with IRQs disabled - * 
vgic_irq->irq_lock must be taken with IRQs disabled - * - * As the ap_list_lock might be taken from the timer interrupt handler, - * we have to disable IRQs before taking this lock and everything lower - * than it. + * vgic_cpu->ap_list_lock + * kvm->lpi_list_lock + * vgic_irq->irq_lock * * If you need to take multiple locks, always take the upper lock first, * then the lower ones, e.g. first take the its_lock, then the irq_lock. @@ -73,9 +69,8 @@ { struct vgic_dist *dist = &kvm->arch.vgic; struct vgic_irq *irq = NULL; - unsigned long flags; - spin_lock_irqsave(&dist->lpi_list_lock, flags); + spin_lock(&dist->lpi_list_lock); list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) { if (irq->intid != intid) @@ -91,7 +86,7 @@ irq = NULL; out_unlock: - spin_unlock_irqrestore(&dist->lpi_list_lock, flags); + spin_unlock(&dist->lpi_list_lock); return irq; } @@ -132,20 +127,19 @@ void vgic_put_irq(struct kvm *kvm, struct vgic_irq *irq) { struct vgic_dist *dist = &kvm->arch.vgic; - unsigned long flags; if (irq->intid < VGIC_MIN_LPI) return; - spin_lock_irqsave(&dist->lpi_list_lock, flags); + spin_lock(&dist->lpi_list_lock); if (!kref_put(&irq->refcount, vgic_irq_release)) { - spin_unlock_irqrestore(&dist->lpi_list_lock, flags); + spin_unlock(&dist->lpi_list_lock); return; }; list_del(&irq->lpi_list); dist->lpi_list_count--; - spin_unlock_irqrestore(&dist->lpi_list_lock, flags); + spin_unlock(&dist->lpi_list_lock); kfree(irq); } diff -u linux-azure-4.15.0/zfs/META linux-azure-4.15.0/zfs/META --- linux-azure-4.15.0/zfs/META +++ linux-azure-4.15.0/zfs/META @@ -2,7 +2,7 @@ Name: zfs Branch: 1.0 Version: 0.7.5 -Release: 1ubuntu16.3 +Release: 1ubuntu15 Release-Tags: relext License: CDDL Author: OpenZFS on Linux diff -u linux-azure-4.15.0/zfs/include/sys/zfs_vfsops.h linux-azure-4.15.0/zfs/include/sys/zfs_vfsops.h --- linux-azure-4.15.0/zfs/include/sys/zfs_vfsops.h +++ linux-azure-4.15.0/zfs/include/sys/zfs_vfsops.h @@ -32,7 +32,6 @@ #include #include #include -#include #include 
#ifdef __cplusplus diff -u linux-azure-4.15.0/zfs/module/zfs/zpl_super.c linux-azure-4.15.0/zfs/module/zfs/zpl_super.c --- linux-azure-4.15.0/zfs/module/zfs/zpl_super.c +++ linux-azure-4.15.0/zfs/module/zfs/zpl_super.c @@ -271,17 +271,8 @@ if (err) return (ERR_PTR(-err)); - /* - * The dsl pool lock must be released prior to calling sget(). - * It is possible sget() may block on the lock in grab_super() - * while deactivate_super() holds that same lock and waits for - * a txg sync. If the dsl_pool lock is held over sget() - * this can prevent the pool sync and cause a deadlock. - */ - dsl_pool_rele(dmu_objset_pool(os), FTAG); s = zpl_sget(fs_type, zpl_test_super, set_anon_super, flags, os); - dsl_dataset_rele(dmu_objset_ds(os), FTAG); - + dmu_objset_rele(os, FTAG); if (IS_ERR(s)) return (ERR_CAST(s)); only in patch2: unchanged: --- linux-azure-4.15.0.orig/Documentation/admin-guide/index.rst +++ linux-azure-4.15.0/Documentation/admin-guide/index.rst @@ -17,6 +17,15 @@ kernel-parameters devices +This section describes CPU vulnerabilities and provides an overview of the +possible mitigations along with guidance for selecting mitigations if they +are configurable at compile, boot or run time. + +.. toctree:: + :maxdepth: 1 + + l1tf + Here is a set of documents aimed at users who are trying to track down problems and bugs in particular. only in patch2: unchanged: --- linux-azure-4.15.0.orig/Documentation/admin-guide/l1tf.rst +++ linux-azure-4.15.0/Documentation/admin-guide/l1tf.rst @@ -0,0 +1,610 @@ +L1TF - L1 Terminal Fault +======================== + +L1 Terminal Fault is a hardware vulnerability which allows unprivileged +speculative access to data which is available in the Level 1 Data Cache +when the page table entry controlling the virtual address, which is used +for the access, has the Present bit cleared or other reserved bits set. + +Affected processors +------------------- + +This vulnerability affects a wide range of Intel processors.
The +vulnerability is not present on: + + - Processors from AMD, Centaur and other non Intel vendors + + - Older processor models, where the CPU family is < 6 + + - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft, + Penwell, Pineview, Silvermont, Airmont, Merrifield) + + - The Intel XEON PHI family + + - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the + IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected + by the Meltdown vulnerability either. These CPUs should become + available by end of 2018. + +Whether a processor is affected or not can be read out from the L1TF +vulnerability file in sysfs. See :ref:`l1tf_sys_info`. + +Related CVEs +------------ + +The following CVE entries are related to the L1TF vulnerability: + + ============= ================= ============================== + CVE-2018-3615 L1 Terminal Fault SGX related aspects + CVE-2018-3620 L1 Terminal Fault OS, SMM related aspects + CVE-2018-3646 L1 Terminal Fault Virtualization related aspects + ============= ================= ============================== + +Problem +------- + +If an instruction accesses a virtual address for which the relevant page +table entry (PTE) has the Present bit cleared or other reserved bits set, +then speculative execution ignores the invalid PTE and loads the referenced +data if it is present in the Level 1 Data Cache, as if the page referenced +by the address bits in the PTE was still present and accessible. + +While this is a purely speculative mechanism and the instruction will raise +a page fault when it is retired eventually, the pure act of loading the +data and making it available to other speculative instructions opens up the +opportunity for side channel attacks to unprivileged malicious code, +similar to the Meltdown attack. + +While Meltdown breaks the user space to kernel space protection, L1TF +allows to attack any physical memory address in the system and the attack +works across all protection domains. 
It allows an attack of SGX and also +works from inside virtual machines because the speculation bypasses the +extended page table (EPT) protection mechanism. + + +Attack scenarios +---------------- + +1. Malicious user space +^^^^^^^^^^^^^^^^^^^^^^^ + + Operating Systems store arbitrary information in the address bits of a + PTE which is marked non present. This allows a malicious user space + application to attack the physical memory to which these PTEs resolve. + In some cases user-space can maliciously influence the information + encoded in the address bits of the PTE, thus making attacks more + deterministic and more practical. + + The Linux kernel contains a mitigation for this attack vector, PTE + inversion, which is permanently enabled and has no performance + impact. The kernel ensures that the address bits of PTEs, which are not + marked present, never point to cacheable physical memory space. + + A system with an up to date kernel is protected against attacks from + malicious user space applications. + +2. Malicious guest in a virtual machine +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + + The fact that L1TF breaks all domain protections allows malicious guest + OSes, which can control the PTEs directly, and malicious guest user + space applications, which run on an unprotected guest kernel lacking the + PTE inversion mitigation for L1TF, to attack physical host memory. + + A special aspect of L1TF in the context of virtualization is symmetric + multi threading (SMT). The Intel implementation of SMT is called + HyperThreading. The fact that Hyperthreads on the affected processors + share the L1 Data Cache (L1D) is important for this. As the flaw allows + only to attack data which is present in L1D, a malicious guest running + on one Hyperthread can attack the data which is brought into the L1D by + the context which runs on the sibling Hyperthread of the same physical + core. This context can be host OS, host user space or a different guest. 
+ + If the processor does not support Extended Page Tables, the attack is + only possible, when the hypervisor does not sanitize the content of the + effective (shadow) page tables. + + While solutions exist to mitigate these attack vectors fully, these + mitigations are not enabled by default in the Linux kernel because they + can affect performance significantly. The kernel provides several + mechanisms which can be utilized to address the problem depending on the + deployment scenario. The mitigations, their protection scope and impact + are described in the next sections. + + The default mitigations and the rationale for choosing them are explained + at the end of this document. See :ref:`default_mitigations`. + +.. _l1tf_sys_info: + +L1TF system information +----------------------- + +The Linux kernel provides a sysfs interface to enumerate the current L1TF +status of the system: whether the system is vulnerable, and which +mitigations are active. The relevant sysfs file is: + +/sys/devices/system/cpu/vulnerabilities/l1tf + +The possible values in this file are: + + =========================== =============================== + 'Not affected' The processor is not vulnerable + 'Mitigation: PTE Inversion' The host protection is active + =========================== =============================== + +If KVM/VMX is enabled and the processor is vulnerable then the following +information is appended to the 'Mitigation: PTE Inversion' part: + + - SMT status: + + ===================== ================ + 'VMX: SMT vulnerable' SMT is enabled + 'VMX: SMT disabled' SMT is disabled + ===================== ================ + + - L1D Flush mode: + + ================================ ==================================== + 'L1D vulnerable' L1D flushing is disabled + + 'L1D conditional cache flushes' L1D flush is conditionally enabled + + 'L1D cache flushes' L1D flush is unconditionally enabled + ================================ ==================================== + +The 
resulting grade of protection is discussed in the following sections. + + +Host mitigation mechanism +------------------------- + +The kernel is unconditionally protected against L1TF attacks from malicious +user space running on the host. + + +Guest mitigation mechanisms +--------------------------- + +.. _l1d_flush: + +1. L1D flush on VMENTER +^^^^^^^^^^^^^^^^^^^^^^^ + + To make sure that a guest cannot attack data which is present in the L1D + the hypervisor flushes the L1D before entering the guest. + + Flushing the L1D evicts not only the data which should not be accessed + by a potentially malicious guest, it also flushes the guest + data. Flushing the L1D has a performance impact as the processor has to + bring the flushed guest data back into the L1D. Depending on the + frequency of VMEXIT/VMENTER and the type of computations in the guest + performance degradation in the range of 1% to 50% has been observed. For + scenarios where guest VMEXIT/VMENTER are rare the performance impact is + minimal. Virtio and mechanisms like posted interrupts are designed to + confine the VMEXITs to a bare minimum, but specific configurations and + application scenarios might still suffer from a high VMEXIT rate. + + The kernel provides two L1D flush modes: + - conditional ('cond') + - unconditional ('always') + + The conditional mode avoids L1D flushing after VMEXITs which execute + only audited code paths before the corresponding VMENTER. These code + paths have been verified that they cannot expose secrets or other + interesting data to an attacker, but they can leak information about the + address space layout of the hypervisor. + + Unconditional mode flushes L1D on all VMENTER invocations and provides + maximum protection. It has a higher overhead than the conditional + mode. The overhead cannot be quantified correctly as it depends on the + workload scenario and the resulting number of VMEXITs. + + The general recommendation is to enable L1D flush on VMENTER. 
The kernel + defaults to conditional mode on affected processors. + + **Note**, that L1D flush does not prevent the SMT problem because the + sibling thread will also bring back its data into the L1D which makes it + attackable again. + + L1D flush can be controlled by the administrator via the kernel command + line and sysfs control files. See :ref:`mitigation_control_command_line` + and :ref:`mitigation_control_kvm`. + +.. _guest_confinement: + +2. Guest VCPU confinement to dedicated physical cores +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + + To address the SMT problem, it is possible to make a guest or a group of + guests affine to one or more physical cores. The proper mechanism for + that is to utilize exclusive cpusets to ensure that no other guest or + host tasks can run on these cores. + + If only a single guest or related guests run on sibling SMT threads on + the same physical core then they can only attack their own memory and + restricted parts of the host memory. + + Host memory is attackable, when one of the sibling SMT threads runs in + host OS (hypervisor) context and the other in guest context. The amount + of valuable information from the host OS context depends on the context + which the host OS executes, i.e. interrupts, soft interrupts and kernel + threads. The amount of valuable data from these contexts cannot be + declared as non-interesting for an attacker without deep inspection of + the code. + + **Note**, that assigning guests to a fixed set of physical cores affects + the ability of the scheduler to do load balancing and might have + negative effects on CPU utilization depending on the hosting + scenario. Disabling SMT might be a viable alternative for particular + scenarios. + + For further information about confining guests to a single or to a group + of cores consult the cpusets documentation: + + https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt + +.. _interrupt_isolation: + +3. 
Interrupt affinity +^^^^^^^^^^^^^^^^^^^^^ + + Interrupts can be made affine to logical CPUs. This is not universally + true because there are types of interrupts which are truly per CPU + interrupts, e.g. the local timer interrupt. Aside from that, multi queue + devices affine their interrupts to single CPUs or groups of CPUs per + queue without allowing the administrator to control the affinities. + + Moving the interrupts, which can be affinity controlled, away from CPUs + which run untrusted guests, reduces the attack vector space. + + Whether the interrupts which are affine to CPUs, which run untrusted + guests, provide interesting data for an attacker depends on the system + configuration and the scenarios which run on the system. While for some + of the interrupts it can be assumed that they won't expose interesting + information beyond exposing hints about the host OS memory layout, there + is no way to make general assumptions. + + Interrupt affinity can be controlled by the administrator via the + /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is + available at: + + https://www.kernel.org/doc/Documentation/IRQ-affinity.txt + +.. _smt_control: + +4. SMT control +^^^^^^^^^^^^^^ + + To prevent the SMT issues of L1TF it might be necessary to disable SMT + completely. Disabling SMT can have a significant performance impact, but + the impact depends on the hosting scenario and the type of workloads. + The impact of disabling SMT also needs to be weighed against the impact + of other mitigation solutions like confining guests to dedicated cores. + + The kernel provides a sysfs interface to retrieve the status of SMT and + to control it. It also provides a kernel command line interface to + control SMT. + + The kernel command line interface consists of the following options: + + =========== ========================================================== + nosmt Affects the bring up of the secondary CPUs during boot.
The + kernel tries to bring all present CPUs online during the + boot process. "nosmt" makes sure that from each physical + core only one - the so called primary (hyper) thread is + activated. Due to a design flaw of Intel processors related + to Machine Check Exceptions the non primary siblings have + to be brought up at least partially and are then shut down + again. "nosmt" can be undone via the sysfs interface. + + nosmt=force Has the same effect as "nosmt" but it does not allow to + undo the SMT disable via the sysfs interface. + =========== ========================================================== + + The sysfs interface provides two files: + + - /sys/devices/system/cpu/smt/control + - /sys/devices/system/cpu/smt/active + + /sys/devices/system/cpu/smt/control: + + This file allows to read out the SMT control state and provides the + ability to disable or (re)enable SMT. The possible states are: + + ============== =================================================== + on SMT is supported by the CPU and enabled. All + logical CPUs can be onlined and offlined without + restrictions. + + off SMT is supported by the CPU and disabled. Only + the so called primary SMT threads can be onlined + and offlined without restrictions. An attempt to + online a non-primary sibling is rejected + + forceoff Same as 'off' but the state cannot be controlled. + Attempts to write to the control file are rejected. + + notsupported The processor does not support SMT. It's therefore + not affected by the SMT implications of L1TF. + Attempts to write to the control file are rejected. + ============== =================================================== + + The possible states which can be written into this file to control SMT + state are: + + - on + - off + - forceoff + + /sys/devices/system/cpu/smt/active: + + This file reports whether SMT is enabled and active, i.e. if on any + physical core two or more sibling threads are online. 
+ + SMT control is also possible at boot time via the l1tf kernel command + line parameter in combination with L1D flush control. See + :ref:`mitigation_control_command_line`. + +5. Disabling EPT +^^^^^^^^^^^^^^^^ + + Disabling EPT for virtual machines provides full mitigation for L1TF even + with SMT enabled, because the effective page tables for guests are + managed and sanitized by the hypervisor. Though disabling EPT has a + significant performance impact especially when the Meltdown mitigation + KPTI is enabled. + + EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter. + +There is ongoing research and development for new mitigation mechanisms to +address the performance impact of disabling SMT or EPT. + +.. _mitigation_control_command_line: + +Mitigation control on the kernel command line +--------------------------------------------- + +The kernel command line allows to control the L1TF mitigations at boot +time with the option "l1tf=". The valid arguments for this option are: + + ============ ============================================================= + full Provides all available mitigations for the L1TF + vulnerability. Disables SMT and enables all mitigations in + the hypervisors, i.e. unconditional L1D flushing + + SMT control and L1D flush control via the sysfs interface + is still possible after boot. Hypervisors will issue a + warning when the first VM is started in a potentially + insecure configuration, i.e. SMT enabled or L1D flush + disabled. + + full,force Same as 'full', but disables SMT and L1D flush runtime + control. Implies the 'nosmt=force' command line option. + (i.e. sysfs control of SMT is disabled.) + + flush Leaves SMT enabled and enables the default hypervisor + mitigation, i.e. conditional L1D flushing + + SMT control and L1D flush control via the sysfs interface + is still possible after boot. Hypervisors will issue a + warning when the first VM is started in a potentially + insecure configuration, i.e. 
SMT enabled or L1D flush + disabled. + + flush,nosmt Disables SMT and enables the default hypervisor mitigation, + i.e. conditional L1D flushing. + + SMT control and L1D flush control via the sysfs interface + is still possible after boot. Hypervisors will issue a + warning when the first VM is started in a potentially + insecure configuration, i.e. SMT enabled or L1D flush + disabled. + + flush,nowarn Same as 'flush', but hypervisors will not warn when a VM is + started in a potentially insecure configuration. + + off Disables hypervisor mitigations and doesn't emit any + warnings. + ============ ============================================================= + +The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`. + + +.. _mitigation_control_kvm: + +Mitigation control for KVM - module parameter +------------------------------------------------------------- + +The KVM hypervisor mitigation mechanism, flushing the L1D cache when +entering a guest, can be controlled with a module parameter. + +The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the +following arguments: + + ============ ============================================================== + always L1D cache flush on every VMENTER. + + cond Flush L1D on VMENTER only when the code between VMEXIT and + VMENTER can leak host memory which is considered + interesting for an attacker. This still can leak host memory + which allows e.g. to determine the hosts address space layout. + + never Disables the mitigation + ============ ============================================================== + +The parameter can be provided on the kernel command line, as a module +parameter when loading the modules and at runtime modified via the sysfs +file: + +/sys/module/kvm_intel/parameters/vmentry_l1d_flush + +The default is 'cond'. 
If 'l1tf=full,force' is given on the kernel command +line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush +module parameter is ignored and writes to the sysfs file are rejected. + + +Mitigation selection guide +-------------------------- + +1. No virtualization in use +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + + The system is protected by the kernel unconditionally and no further + action is required. + +2. Virtualization with trusted guests +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + + If the guest comes from a trusted source and the guest OS kernel is + guaranteed to have the L1TF mitigations in place the system is fully + protected against L1TF and no further action is required. + + To avoid the overhead of the default L1D flushing on VMENTER the + administrator can disable the flushing via the kernel command line and + sysfs control files. See :ref:`mitigation_control_command_line` and + :ref:`mitigation_control_kvm`. + + +3. Virtualization with untrusted guests +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +3.1. SMT not supported or disabled +"""""""""""""""""""""""""""""""""" + + If SMT is not supported by the processor or disabled in the BIOS or by + the kernel, it's only required to enforce L1D flushing on VMENTER. + + Conditional L1D flushing is the default behaviour and can be tuned. See + :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`. + +3.2. EPT not supported or disabled +"""""""""""""""""""""""""""""""""" + + If EPT is not supported by the processor or disabled in the hypervisor, + the system is fully protected. SMT can stay enabled and L1D flushing on + VMENTER is not required. + + EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter. + +3.3. 
SMT and EPT supported and active +""""""""""""""""""""""""""""""""""""" + + If SMT and EPT are supported and active then various degrees of + mitigations can be employed: + + - L1D flushing on VMENTER: + + L1D flushing on VMENTER is the minimal protection requirement, but it + is only potent in combination with other mitigation methods. + + Conditional L1D flushing is the default behaviour and can be tuned. See + :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`. + + - Guest confinement: + + Confinement of guests to a single or a group of physical cores which + are not running any other processes, can reduce the attack surface + significantly, but interrupts, soft interrupts and kernel threads can + still expose valuable data to a potential attacker. See + :ref:`guest_confinement`. + + - Interrupt isolation: + + Isolating the guest CPUs from interrupts can reduce the attack surface + further, but still allows a malicious guest to explore a limited amount + of host physical memory. This can at least be used to gain knowledge + about the host address space layout. The interrupts which have a fixed + affinity to the CPUs which run the untrusted guests can, depending on + the scenario, still trigger soft interrupts and schedule kernel threads + which might expose valuable information. See + :ref:`interrupt_isolation`. + +The above three mitigation methods combined can provide protection to a +certain degree, but the risk of the remaining attack surface has to be +carefully analyzed. For full protection the following methods are +available: + + - Disabling SMT: + + Disabling SMT and enforcing the L1D flushing provides the maximum + amount of protection. This mitigation does not depend on any of the + above mitigation methods. + + SMT control and L1D flushing can be tuned by the command line + parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run + time with the matching sysfs control files.
See :ref:`smt_control`, + :ref:`mitigation_control_command_line` and + :ref:`mitigation_control_kvm`. + + - Disabling EPT: + + Disabling EPT provides the maximum amount of protection as well. It does + not depend on any of the above mitigation methods. SMT can stay + enabled and L1D flushing is not required, but the performance impact is + significant. + + EPT can be disabled in the hypervisor via the 'kvm-intel.ept' + parameter. + +3.4. Nested virtual machines +"""""""""""""""""""""""""""" + +When nested virtualization is in use, three operating systems are involved: +the bare metal hypervisor, the nested hypervisor and the nested virtual +machine. VMENTER operations from the nested hypervisor into the nested +guest will always be processed by the bare metal hypervisor. If KVM is the +bare metal hypervisor it will: + + - Flush the L1D cache on every switch from the nested hypervisor to the + nested virtual machine, so that the nested hypervisor's secrets are not + exposed to the nested virtual machine; + + - Flush the L1D cache on every switch from the nested virtual machine to + the nested hypervisor; this is a complex operation, and flushing the L1D + cache prevents the bare metal hypervisor's secrets from being exposed to the + nested virtual machine; + + - Instruct the nested hypervisor to not perform any L1D cache flush. This + is an optimization to avoid double L1D flushing. + + +.. _default_mitigations: + +Default mitigations +------------------- + + The kernel default mitigations for vulnerable processors are: + + - PTE inversion to protect against malicious user space. This is done + unconditionally and cannot be controlled. + + - L1D conditional flushing on VMENTER when EPT is enabled for + a guest. + + The kernel does not by default enforce the disabling of SMT, which leaves + SMT systems vulnerable when running untrusted guests with EPT enabled.
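Since the defaults leave the SMT-plus-EPT case to the administrator, the "potentially insecure configuration" the document warns about (SMT enabled or L1D flush disabled) can be recognized in the sysfs status string. The sketch below is hypothetical — `l1tf_needs_attention` is an illustrative helper, not part of any kernel or distro tooling — and only matches the substrings the status format quoted earlier would append:

```shell
#!/bin/sh
# Flag the potentially insecure combinations named in this document:
# the l1tf status string appends "VMX: SMT vulnerable" when SMT is
# enabled and "L1D vulnerable" when L1D flushing is disabled.
l1tf_needs_attention() {
    case "$1" in
        *"SMT vulnerable"*|*"L1D vulnerable"*) return 0 ;;
        *) return 1 ;;
    esac
}

f=/sys/devices/system/cpu/vulnerabilities/l1tf
if [ -r "$f" ]; then
    status=$(cat "$f")
    if l1tf_needs_attention "$status"; then
        echo "review guest mitigations: $status"
    else
        echo "ok: $status"
    fi
fi
```

On unaffected hardware the file simply reads "Not affected", which the helper classifies as not needing attention.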
+ + The rationale for this choice is: + + - Force disabling SMT can break existing setups, especially with + unattended updates. + + - If regular users run untrusted guests on their machine, then L1TF is + just an add-on to other malware which might be embedded in an untrusted + guest, e.g. spam-bots or attacks on the local network. + + There is no technical way to prevent a user from running untrusted code + on their machines blindly. + + - It's technically extremely unlikely and from today's knowledge even + impossible that L1TF can be exploited via the most popular attack + mechanisms like JavaScript because these mechanisms have no way to + control PTEs. If this were possible and no other mitigation were + available, then the default might be different. + + - The administrators of cloud and hosting setups have to carefully + analyze the risk for their scenarios and make the appropriate + mitigation choices, which might even vary across their deployed + machines and also result in other changes of their overall setup. + There is no way for the kernel to provide a sensible default for this + kind of scenario. only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/Kconfig +++ linux-azure-4.15.0/arch/Kconfig @@ -13,6 +13,9 @@ config HAVE_IMA_KEXEC bool +config HOTPLUG_SMT + bool + config OPROFILE tristate "OProfile system profiling" depends on PROFILING only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/boot/compressed/kaslr.c +++ linux-azure-4.15.0/arch/x86/boot/compressed/kaslr.c @@ -48,6 +48,9 @@ extern unsigned long get_cmd_line_ptr(void); +/* Used by PAGE_KERN* macros: */ +pteval_t __default_kernel_pte_mask __read_mostly = ~0; + /* Simplified build-specific string for starting entropy. 
*/ static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@" LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION; only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/events/amd/uncore.c +++ linux-azure-4.15.0/arch/x86/events/amd/uncore.c @@ -19,6 +19,7 @@ #include #include #include +#include #define NUM_COUNTERS_NB 4 #define NUM_COUNTERS_L2 4 @@ -399,26 +400,8 @@ } if (amd_uncore_llc) { - unsigned int apicid = cpu_data(cpu).apicid; - unsigned int nshared, subleaf, prev_eax = 0; - uncore = *per_cpu_ptr(amd_uncore_llc, cpu); - /* - * Iterate over Cache Topology Definition leaves until no - * more cache descriptions are available. - */ - for (subleaf = 0; subleaf < 5; subleaf++) { - cpuid_count(0x8000001d, subleaf, &eax, &ebx, &ecx, &edx); - - /* EAX[0:4] gives type of cache */ - if (!(eax & 0x1f)) - break; - - prev_eax = eax; - } - nshared = ((prev_eax >> 14) & 0xfff) + 1; - - uncore->id = apicid - (apicid % nshared); + uncore->id = per_cpu(cpu_llc_id, cpu); uncore = amd_uncore_find_online_sibling(uncore, amd_uncore_llc); *per_cpu_ptr(amd_uncore_llc, cpu) = uncore; only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/include/asm/cacheinfo.h +++ linux-azure-4.15.0/arch/x86/include/asm/cacheinfo.h @@ -0,0 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_X86_CACHEINFO_H +#define _ASM_X86_CACHEINFO_H + +void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id); + +#endif /* _ASM_X86_CACHEINFO_H */ only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/include/asm/dmi.h +++ linux-azure-4.15.0/arch/x86/include/asm/dmi.h @@ -4,8 +4,8 @@ #include #include +#include -#include #include static __always_inline __init void *dmi_alloc(unsigned len) only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/include/asm/hardirq.h +++ linux-azure-4.15.0/arch/x86/include/asm/hardirq.h @@ -3,10 +3,12 @@ #define _ASM_X86_HARDIRQ_H #include -#include typedef struct { - unsigned int __softirq_pending; + 
u16 __softirq_pending; +#if IS_ENABLED(CONFIG_KVM_INTEL) + u8 kvm_cpu_l1tf_flush_l1d; +#endif unsigned int __nmi_count; /* arch dependent */ #ifdef CONFIG_X86_LOCAL_APIC unsigned int apic_timer_irqs; /* arch dependent */ @@ -62,4 +64,24 @@ extern u64 arch_irq_stat(void); #define arch_irq_stat arch_irq_stat + +#if IS_ENABLED(CONFIG_KVM_INTEL) +static inline void kvm_set_cpu_l1tf_flush_l1d(void) +{ + __this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1); +} + +static inline void kvm_clear_cpu_l1tf_flush_l1d(void) +{ + __this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 0); +} + +static inline bool kvm_get_cpu_l1tf_flush_l1d(void) +{ + return __this_cpu_read(irq_stat.kvm_cpu_l1tf_flush_l1d); +} +#else /* !IS_ENABLED(CONFIG_KVM_INTEL) */ +static inline void kvm_set_cpu_l1tf_flush_l1d(void) { } +#endif /* IS_ENABLED(CONFIG_KVM_INTEL) */ + #endif /* _ASM_X86_HARDIRQ_H */ only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/include/asm/page_32_types.h +++ linux-azure-4.15.0/arch/x86/include/asm/page_32_types.h @@ -29,8 +29,13 @@ #define N_EXCEPTION_STACKS 1 #ifdef CONFIG_X86_PAE -/* 44=32+12, the limit we can fit into an unsigned long pfn */ -#define __PHYSICAL_MASK_SHIFT 44 +/* + * This is beyond the 44 bit limit imposed by the 32bit long pfns, + * but we need the full mask to make sure inverted PROT_NONE + * entries have all the host bits set in a guest. + * The real limit is still 44 bits. 
+ */ +#define __PHYSICAL_MASK_SHIFT 52 #define __VIRTUAL_MASK_SHIFT 32 #else /* !CONFIG_X86_PAE */ only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/include/asm/pgtable-2level.h +++ linux-azure-4.15.0/arch/x86/include/asm/pgtable-2level.h @@ -95,4 +95,21 @@ #define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_low }) #define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val }) +/* No inverted PFNs on 2 level page tables */ + +static inline u64 protnone_mask(u64 val) +{ + return 0; +} + +static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask) +{ + return val; +} + +static inline bool __pte_needs_invert(u64 val) +{ + return false; +} + #endif /* _ASM_X86_PGTABLE_2LEVEL_H */ only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/include/asm/pgtable-3level.h +++ linux-azure-4.15.0/arch/x86/include/asm/pgtable-3level.h @@ -206,12 +206,43 @@ #endif /* Encode and de-code a swap entry */ +#define SWP_TYPE_BITS 5 + +#define SWP_OFFSET_FIRST_BIT (_PAGE_BIT_PROTNONE + 1) + +/* We always extract/encode the offset by shifting it all the way up, and then down again */ +#define SWP_OFFSET_SHIFT (SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS) + #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5) #define __swp_type(x) (((x).val) & 0x1f) #define __swp_offset(x) ((x).val >> 5) #define __swp_entry(type, offset) ((swp_entry_t){(type) | (offset) << 5}) -#define __pte_to_swp_entry(pte) ((swp_entry_t){ (pte).pte_high }) -#define __swp_entry_to_pte(x) ((pte_t){ { .pte_high = (x).val } }) + +/* + * Normally, __swp_entry() converts from arch-independent swp_entry_t to + * arch-dependent swp_entry_t, and __swp_entry_to_pte() just stores the result + * to pte. But here we have 32bit swp_entry_t and 64bit pte, and need to use the + * whole 64 bits. Thus, we shift the "real" arch-dependent conversion to + * __swp_entry_to_pte() through the following helper macro based on 64bit + * __swp_entry(). 
+ */ +#define __swp_pteval_entry(type, offset) ((pteval_t) { \ + (~(pteval_t)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \ + | ((pteval_t)(type) << (64 - SWP_TYPE_BITS)) }) + +#define __swp_entry_to_pte(x) ((pte_t){ .pte = \ + __swp_pteval_entry(__swp_type(x), __swp_offset(x)) }) +/* + * Analogically, __pte_to_swp_entry() doesn't just extract the arch-dependent + * swp_entry_t, but also has to convert it from 64bit to the 32bit + * intermediate representation, using the following macros based on 64bit + * __swp_type() and __swp_offset(). + */ +#define __pteval_swp_type(x) ((unsigned long)((x).pte >> (64 - SWP_TYPE_BITS))) +#define __pteval_swp_offset(x) ((unsigned long)(~((x).pte) << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT)) + +#define __pte_to_swp_entry(pte) (__swp_entry(__pteval_swp_type(pte), \ + __pteval_swp_offset(pte))) #define gup_get_pte gup_get_pte /* @@ -260,4 +291,6 @@ return pte; } +#include + #endif /* _ASM_X86_PGTABLE_3LEVEL_H */ only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/include/asm/pgtable-invert.h +++ linux-azure-4.15.0/arch/x86/include/asm/pgtable-invert.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_PGTABLE_INVERT_H +#define _ASM_PGTABLE_INVERT_H 1 + +#ifndef __ASSEMBLY__ + +static inline bool __pte_needs_invert(u64 val) +{ + return !(val & _PAGE_PRESENT); +} + +/* Get a mask to xor with the page table entry to get the correct pfn. */ +static inline u64 protnone_mask(u64 val) +{ + return __pte_needs_invert(val) ? ~0ull : 0; +} + +static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask) +{ + /* + * When a PTE transitions from NONE to !NONE or vice-versa + * invert the PFN part to stop speculation. + * pte_pfn undoes this when needed. 
+ */ + if (__pte_needs_invert(oldval) != __pte_needs_invert(val)) + val = (val & ~mask) | (~val & mask); + return val; +} + +#endif /* __ASSEMBLY__ */ + +#endif only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/include/asm/topology.h +++ linux-azure-4.15.0/arch/x86/include/asm/topology.h @@ -123,13 +123,17 @@ } int topology_update_package_map(unsigned int apicid, unsigned int cpu); -extern int topology_phys_to_logical_pkg(unsigned int pkg); +int topology_phys_to_logical_pkg(unsigned int pkg); +bool topology_is_primary_thread(unsigned int cpu); +bool topology_smt_supported(void); #else #define topology_max_packages() (1) static inline int topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; } static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; } static inline int topology_max_smt_threads(void) { return 1; } +static inline bool topology_is_primary_thread(unsigned int cpu) { return true; } +static inline bool topology_smt_supported(void) { return false; } #endif static inline void arch_fix_phys_package_id(int num, u32 slot) only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/apic/msi.c +++ linux-azure-4.15.0/arch/x86/kernel/apic/msi.c @@ -12,6 +12,7 @@ */ #include #include +#include #include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/cpu/Makefile +++ linux-azure-4.15.0/arch/x86/kernel/cpu/Makefile @@ -17,7 +17,7 @@ nostackp := $(call cc-option, -fno-stack-protector) CFLAGS_common.o := $(nostackp) -obj-y := intel_cacheinfo.o scattered.o topology.o +obj-y := cacheinfo.o scattered.o topology.o obj-y += common.o obj-y += rdrand.o obj-y += match.o only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/cpu/cacheinfo.c +++ linux-azure-4.15.0/arch/x86/kernel/cpu/cacheinfo.c @@ -0,0 +1,1010 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Routines to identify caches on Intel CPU. 
+ * + * Changes: + * Venkatesh Pallipadi : Adding cache identification through cpuid(4) + * Ashok Raj : Work with CPU hotplug infrastructure. + * Andi Kleen / Andreas Herrmann : CPUID4 emulation on AMD. + */ + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include "cpu.h" + +#define LVL_1_INST 1 +#define LVL_1_DATA 2 +#define LVL_2 3 +#define LVL_3 4 +#define LVL_TRACE 5 + +struct _cache_table { + unsigned char descriptor; + char cache_type; + short size; +}; + +#define MB(x) ((x) * 1024) + +/* All the cache descriptor types we care about (no TLB or + trace cache entries) */ + +static const struct _cache_table cache_table[] = +{ + { 0x06, LVL_1_INST, 8 }, /* 4-way set assoc, 32 byte line size */ + { 0x08, LVL_1_INST, 16 }, /* 4-way set assoc, 32 byte line size */ + { 0x09, LVL_1_INST, 32 }, /* 4-way set assoc, 64 byte line size */ + { 0x0a, LVL_1_DATA, 8 }, /* 2 way set assoc, 32 byte line size */ + { 0x0c, LVL_1_DATA, 16 }, /* 4-way set assoc, 32 byte line size */ + { 0x0d, LVL_1_DATA, 16 }, /* 4-way set assoc, 64 byte line size */ + { 0x0e, LVL_1_DATA, 24 }, /* 6-way set assoc, 64 byte line size */ + { 0x21, LVL_2, 256 }, /* 8-way set assoc, 64 byte line size */ + { 0x22, LVL_3, 512 }, /* 4-way set assoc, sectored cache, 64 byte line size */ + { 0x23, LVL_3, MB(1) }, /* 8-way set assoc, sectored cache, 64 byte line size */ + { 0x25, LVL_3, MB(2) }, /* 8-way set assoc, sectored cache, 64 byte line size */ + { 0x29, LVL_3, MB(4) }, /* 8-way set assoc, sectored cache, 64 byte line size */ + { 0x2c, LVL_1_DATA, 32 }, /* 8-way set assoc, 64 byte line size */ + { 0x30, LVL_1_INST, 32 }, /* 8-way set assoc, 64 byte line size */ + { 0x39, LVL_2, 128 }, /* 4-way set assoc, sectored cache, 64 byte line size */ + { 0x3a, LVL_2, 192 }, /* 6-way set assoc, sectored cache, 64 byte line size */ + { 0x3b, LVL_2, 128 }, /* 2-way set assoc, sectored cache, 64 byte line size */ + { 0x3c, LVL_2, 256 }, /* 4-way set assoc, 
sectored cache, 64 byte line size */ + { 0x3d, LVL_2, 384 }, /* 6-way set assoc, sectored cache, 64 byte line size */ + { 0x3e, LVL_2, 512 }, /* 4-way set assoc, sectored cache, 64 byte line size */ + { 0x3f, LVL_2, 256 }, /* 2-way set assoc, 64 byte line size */ + { 0x41, LVL_2, 128 }, /* 4-way set assoc, 32 byte line size */ + { 0x42, LVL_2, 256 }, /* 4-way set assoc, 32 byte line size */ + { 0x43, LVL_2, 512 }, /* 4-way set assoc, 32 byte line size */ + { 0x44, LVL_2, MB(1) }, /* 4-way set assoc, 32 byte line size */ + { 0x45, LVL_2, MB(2) }, /* 4-way set assoc, 32 byte line size */ + { 0x46, LVL_3, MB(4) }, /* 4-way set assoc, 64 byte line size */ + { 0x47, LVL_3, MB(8) }, /* 8-way set assoc, 64 byte line size */ + { 0x48, LVL_2, MB(3) }, /* 12-way set assoc, 64 byte line size */ + { 0x49, LVL_3, MB(4) }, /* 16-way set assoc, 64 byte line size */ + { 0x4a, LVL_3, MB(6) }, /* 12-way set assoc, 64 byte line size */ + { 0x4b, LVL_3, MB(8) }, /* 16-way set assoc, 64 byte line size */ + { 0x4c, LVL_3, MB(12) }, /* 12-way set assoc, 64 byte line size */ + { 0x4d, LVL_3, MB(16) }, /* 16-way set assoc, 64 byte line size */ + { 0x4e, LVL_2, MB(6) }, /* 24-way set assoc, 64 byte line size */ + { 0x60, LVL_1_DATA, 16 }, /* 8-way set assoc, sectored cache, 64 byte line size */ + { 0x66, LVL_1_DATA, 8 }, /* 4-way set assoc, sectored cache, 64 byte line size */ + { 0x67, LVL_1_DATA, 16 }, /* 4-way set assoc, sectored cache, 64 byte line size */ + { 0x68, LVL_1_DATA, 32 }, /* 4-way set assoc, sectored cache, 64 byte line size */ + { 0x70, LVL_TRACE, 12 }, /* 8-way set assoc */ + { 0x71, LVL_TRACE, 16 }, /* 8-way set assoc */ + { 0x72, LVL_TRACE, 32 }, /* 8-way set assoc */ + { 0x73, LVL_TRACE, 64 }, /* 8-way set assoc */ + { 0x78, LVL_2, MB(1) }, /* 4-way set assoc, 64 byte line size */ + { 0x79, LVL_2, 128 }, /* 8-way set assoc, sectored cache, 64 byte line size */ + { 0x7a, LVL_2, 256 }, /* 8-way set assoc, sectored cache, 64 byte line size */ + { 0x7b, LVL_2, 512 }, /* 
8-way set assoc, sectored cache, 64 byte line size */ + { 0x7c, LVL_2, MB(1) }, /* 8-way set assoc, sectored cache, 64 byte line size */ + { 0x7d, LVL_2, MB(2) }, /* 8-way set assoc, 64 byte line size */ + { 0x7f, LVL_2, 512 }, /* 2-way set assoc, 64 byte line size */ + { 0x80, LVL_2, 512 }, /* 8-way set assoc, 64 byte line size */ + { 0x82, LVL_2, 256 }, /* 8-way set assoc, 32 byte line size */ + { 0x83, LVL_2, 512 }, /* 8-way set assoc, 32 byte line size */ + { 0x84, LVL_2, MB(1) }, /* 8-way set assoc, 32 byte line size */ + { 0x85, LVL_2, MB(2) }, /* 8-way set assoc, 32 byte line size */ + { 0x86, LVL_2, 512 }, /* 4-way set assoc, 64 byte line size */ + { 0x87, LVL_2, MB(1) }, /* 8-way set assoc, 64 byte line size */ + { 0xd0, LVL_3, 512 }, /* 4-way set assoc, 64 byte line size */ + { 0xd1, LVL_3, MB(1) }, /* 4-way set assoc, 64 byte line size */ + { 0xd2, LVL_3, MB(2) }, /* 4-way set assoc, 64 byte line size */ + { 0xd6, LVL_3, MB(1) }, /* 8-way set assoc, 64 byte line size */ + { 0xd7, LVL_3, MB(2) }, /* 8-way set assoc, 64 byte line size */ + { 0xd8, LVL_3, MB(4) }, /* 12-way set assoc, 64 byte line size */ + { 0xdc, LVL_3, MB(2) }, /* 12-way set assoc, 64 byte line size */ + { 0xdd, LVL_3, MB(4) }, /* 12-way set assoc, 64 byte line size */ + { 0xde, LVL_3, MB(8) }, /* 12-way set assoc, 64 byte line size */ + { 0xe2, LVL_3, MB(2) }, /* 16-way set assoc, 64 byte line size */ + { 0xe3, LVL_3, MB(4) }, /* 16-way set assoc, 64 byte line size */ + { 0xe4, LVL_3, MB(8) }, /* 16-way set assoc, 64 byte line size */ + { 0xea, LVL_3, MB(12) }, /* 24-way set assoc, 64 byte line size */ + { 0xeb, LVL_3, MB(18) }, /* 24-way set assoc, 64 byte line size */ + { 0xec, LVL_3, MB(24) }, /* 24-way set assoc, 64 byte line size */ + { 0x00, 0, 0} +}; + + +enum _cache_type { + CTYPE_NULL = 0, + CTYPE_DATA = 1, + CTYPE_INST = 2, + CTYPE_UNIFIED = 3 +}; + +union _cpuid4_leaf_eax { + struct { + enum _cache_type type:5; + unsigned int level:3; + unsigned int is_self_initializing:1; + 
unsigned int is_fully_associative:1; + unsigned int reserved:4; + unsigned int num_threads_sharing:12; + unsigned int num_cores_on_die:6; + } split; + u32 full; +}; + +union _cpuid4_leaf_ebx { + struct { + unsigned int coherency_line_size:12; + unsigned int physical_line_partition:10; + unsigned int ways_of_associativity:10; + } split; + u32 full; +}; + +union _cpuid4_leaf_ecx { + struct { + unsigned int number_of_sets:32; + } split; + u32 full; +}; + +struct _cpuid4_info_regs { + union _cpuid4_leaf_eax eax; + union _cpuid4_leaf_ebx ebx; + union _cpuid4_leaf_ecx ecx; + unsigned int id; + unsigned long size; + struct amd_northbridge *nb; +}; + +static unsigned short num_cache_leaves; + +/* AMD doesn't have CPUID4. Emulate it here to report the same + information to the user. This makes some assumptions about the machine: + L2 not shared, no SMT etc. that is currently true on AMD CPUs. + + In theory the TLBs could be reported as fake type (they are in "dummy"). + Maybe later */ +union l1_cache { + struct { + unsigned line_size:8; + unsigned lines_per_tag:8; + unsigned assoc:8; + unsigned size_in_kb:8; + }; + unsigned val; +}; + +union l2_cache { + struct { + unsigned line_size:8; + unsigned lines_per_tag:4; + unsigned assoc:4; + unsigned size_in_kb:16; + }; + unsigned val; +}; + +union l3_cache { + struct { + unsigned line_size:8; + unsigned lines_per_tag:4; + unsigned assoc:4; + unsigned res:2; + unsigned size_encoded:14; + }; + unsigned val; +}; + +static const unsigned short assocs[] = { + [1] = 1, + [2] = 2, + [4] = 4, + [6] = 8, + [8] = 16, + [0xa] = 32, + [0xb] = 48, + [0xc] = 64, + [0xd] = 96, + [0xe] = 128, + [0xf] = 0xffff /* fully associative - no way to show this currently */ +}; + +static const unsigned char levels[] = { 1, 1, 2, 3 }; +static const unsigned char types[] = { 1, 2, 3, 3 }; + +static const enum cache_type cache_type_map[] = { + [CTYPE_NULL] = CACHE_TYPE_NOCACHE, + [CTYPE_DATA] = CACHE_TYPE_DATA, + [CTYPE_INST] = CACHE_TYPE_INST, + 
[CTYPE_UNIFIED] = CACHE_TYPE_UNIFIED, +}; + +static void +amd_cpuid4(int leaf, union _cpuid4_leaf_eax *eax, + union _cpuid4_leaf_ebx *ebx, + union _cpuid4_leaf_ecx *ecx) +{ + unsigned dummy; + unsigned line_size, lines_per_tag, assoc, size_in_kb; + union l1_cache l1i, l1d; + union l2_cache l2; + union l3_cache l3; + union l1_cache *l1 = &l1d; + + eax->full = 0; + ebx->full = 0; + ecx->full = 0; + + cpuid(0x80000005, &dummy, &dummy, &l1d.val, &l1i.val); + cpuid(0x80000006, &dummy, &dummy, &l2.val, &l3.val); + + switch (leaf) { + case 1: + l1 = &l1i; + case 0: + if (!l1->val) + return; + assoc = assocs[l1->assoc]; + line_size = l1->line_size; + lines_per_tag = l1->lines_per_tag; + size_in_kb = l1->size_in_kb; + break; + case 2: + if (!l2.val) + return; + assoc = assocs[l2.assoc]; + line_size = l2.line_size; + lines_per_tag = l2.lines_per_tag; + /* cpu_data has errata corrections for K7 applied */ + size_in_kb = __this_cpu_read(cpu_info.x86_cache_size); + break; + case 3: + if (!l3.val) + return; + assoc = assocs[l3.assoc]; + line_size = l3.line_size; + lines_per_tag = l3.lines_per_tag; + size_in_kb = l3.size_encoded * 512; + if (boot_cpu_has(X86_FEATURE_AMD_DCM)) { + size_in_kb = size_in_kb >> 1; + assoc = assoc >> 1; + } + break; + default: + return; + } + + eax->split.is_self_initializing = 1; + eax->split.type = types[leaf]; + eax->split.level = levels[leaf]; + eax->split.num_threads_sharing = 0; + eax->split.num_cores_on_die = __this_cpu_read(cpu_info.x86_max_cores) - 1; + + + if (assoc == 0xffff) + eax->split.is_fully_associative = 1; + ebx->split.coherency_line_size = line_size - 1; + ebx->split.ways_of_associativity = assoc - 1; + ebx->split.physical_line_partition = lines_per_tag - 1; + ecx->split.number_of_sets = (size_in_kb * 1024) / line_size / + (ebx->split.ways_of_associativity + 1) - 1; +} + +#if defined(CONFIG_AMD_NB) && defined(CONFIG_SYSFS) + +/* + * L3 cache descriptors + */ +static void amd_calc_l3_indices(struct amd_northbridge *nb) +{ + struct 
amd_l3_cache *l3 = &nb->l3_cache; + unsigned int sc0, sc1, sc2, sc3; + u32 val = 0; + + pci_read_config_dword(nb->misc, 0x1C4, &val); + + /* calculate subcache sizes */ + l3->subcaches[0] = sc0 = !(val & BIT(0)); + l3->subcaches[1] = sc1 = !(val & BIT(4)); + + if (boot_cpu_data.x86 == 0x15) { + l3->subcaches[0] = sc0 += !(val & BIT(1)); + l3->subcaches[1] = sc1 += !(val & BIT(5)); + } + + l3->subcaches[2] = sc2 = !(val & BIT(8)) + !(val & BIT(9)); + l3->subcaches[3] = sc3 = !(val & BIT(12)) + !(val & BIT(13)); + + l3->indices = (max(max3(sc0, sc1, sc2), sc3) << 10) - 1; +} + +/* + * check whether a slot used for disabling an L3 index is occupied. + * @l3: L3 cache descriptor + * @slot: slot number (0..1) + * + * @returns: the disabled index if used or negative value if slot free. + */ +static int amd_get_l3_disable_slot(struct amd_northbridge *nb, unsigned slot) +{ + unsigned int reg = 0; + + pci_read_config_dword(nb->misc, 0x1BC + slot * 4, ®); + + /* check whether this slot is activated already */ + if (reg & (3UL << 30)) + return reg & 0xfff; + + return -1; +} + +static ssize_t show_cache_disable(struct cacheinfo *this_leaf, char *buf, + unsigned int slot) +{ + int index; + struct amd_northbridge *nb = this_leaf->priv; + + index = amd_get_l3_disable_slot(nb, slot); + if (index >= 0) + return sprintf(buf, "%d\n", index); + + return sprintf(buf, "FREE\n"); +} + +#define SHOW_CACHE_DISABLE(slot) \ +static ssize_t \ +cache_disable_##slot##_show(struct device *dev, \ + struct device_attribute *attr, char *buf) \ +{ \ + struct cacheinfo *this_leaf = dev_get_drvdata(dev); \ + return show_cache_disable(this_leaf, buf, slot); \ +} +SHOW_CACHE_DISABLE(0) +SHOW_CACHE_DISABLE(1) + +static void amd_l3_disable_index(struct amd_northbridge *nb, int cpu, + unsigned slot, unsigned long idx) +{ + int i; + + idx |= BIT(30); + + /* + * disable index in all 4 subcaches + */ + for (i = 0; i < 4; i++) { + u32 reg = idx | (i << 20); + + if (!nb->l3_cache.subcaches[i]) + continue; + + 
pci_write_config_dword(nb->misc, 0x1BC + slot * 4, reg); + + /* + * We need to WBINVD on a core on the node containing the L3 + * cache which indices we disable therefore a simple wbinvd() + * is not sufficient. + */ + wbinvd_on_cpu(cpu); + + reg |= BIT(31); + pci_write_config_dword(nb->misc, 0x1BC + slot * 4, reg); + } +} + +/* + * disable a L3 cache index by using a disable-slot + * + * @l3: L3 cache descriptor + * @cpu: A CPU on the node containing the L3 cache + * @slot: slot number (0..1) + * @index: index to disable + * + * @return: 0 on success, error status on failure + */ +static int amd_set_l3_disable_slot(struct amd_northbridge *nb, int cpu, + unsigned slot, unsigned long index) +{ + int ret = 0; + + /* check if @slot is already used or the index is already disabled */ + ret = amd_get_l3_disable_slot(nb, slot); + if (ret >= 0) + return -EEXIST; + + if (index > nb->l3_cache.indices) + return -EINVAL; + + /* check whether the other slot has disabled the same index already */ + if (index == amd_get_l3_disable_slot(nb, !slot)) + return -EEXIST; + + amd_l3_disable_index(nb, cpu, slot, index); + + return 0; +} + +static ssize_t store_cache_disable(struct cacheinfo *this_leaf, + const char *buf, size_t count, + unsigned int slot) +{ + unsigned long val = 0; + int cpu, err = 0; + struct amd_northbridge *nb = this_leaf->priv; + + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + + cpu = cpumask_first(&this_leaf->shared_cpu_map); + + if (kstrtoul(buf, 10, &val) < 0) + return -EINVAL; + + err = amd_set_l3_disable_slot(nb, cpu, slot, val); + if (err) { + if (err == -EEXIST) + pr_warn("L3 slot %d in use/index already disabled!\n", + slot); + return err; + } + return count; +} + +#define STORE_CACHE_DISABLE(slot) \ +static ssize_t \ +cache_disable_##slot##_store(struct device *dev, \ + struct device_attribute *attr, \ + const char *buf, size_t count) \ +{ \ + struct cacheinfo *this_leaf = dev_get_drvdata(dev); \ + return store_cache_disable(this_leaf, buf, count, slot); 
\ +} +STORE_CACHE_DISABLE(0) +STORE_CACHE_DISABLE(1) + +static ssize_t subcaches_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cacheinfo *this_leaf = dev_get_drvdata(dev); + int cpu = cpumask_first(&this_leaf->shared_cpu_map); + + return sprintf(buf, "%x\n", amd_get_subcaches(cpu)); +} + +static ssize_t subcaches_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct cacheinfo *this_leaf = dev_get_drvdata(dev); + int cpu = cpumask_first(&this_leaf->shared_cpu_map); + unsigned long val; + + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + + if (kstrtoul(buf, 16, &val) < 0) + return -EINVAL; + + if (amd_set_subcaches(cpu, val)) + return -EINVAL; + + return count; +} + +static DEVICE_ATTR_RW(cache_disable_0); +static DEVICE_ATTR_RW(cache_disable_1); +static DEVICE_ATTR_RW(subcaches); + +static umode_t +cache_private_attrs_is_visible(struct kobject *kobj, + struct attribute *attr, int unused) +{ + struct device *dev = kobj_to_dev(kobj); + struct cacheinfo *this_leaf = dev_get_drvdata(dev); + umode_t mode = attr->mode; + + if (!this_leaf->priv) + return 0; + + if ((attr == &dev_attr_subcaches.attr) && + amd_nb_has_feature(AMD_NB_L3_PARTITIONING)) + return mode; + + if ((attr == &dev_attr_cache_disable_0.attr || + attr == &dev_attr_cache_disable_1.attr) && + amd_nb_has_feature(AMD_NB_L3_INDEX_DISABLE)) + return mode; + + return 0; +} + +static struct attribute_group cache_private_group = { + .is_visible = cache_private_attrs_is_visible, +}; + +static void init_amd_l3_attrs(void) +{ + int n = 1; + static struct attribute **amd_l3_attrs; + + if (amd_l3_attrs) /* already initialized */ + return; + + if (amd_nb_has_feature(AMD_NB_L3_INDEX_DISABLE)) + n += 2; + if (amd_nb_has_feature(AMD_NB_L3_PARTITIONING)) + n += 1; + + amd_l3_attrs = kcalloc(n, sizeof(*amd_l3_attrs), GFP_KERNEL); + if (!amd_l3_attrs) + return; + + n = 0; + if (amd_nb_has_feature(AMD_NB_L3_INDEX_DISABLE)) { + 
amd_l3_attrs[n++] = &dev_attr_cache_disable_0.attr; + amd_l3_attrs[n++] = &dev_attr_cache_disable_1.attr; + } + if (amd_nb_has_feature(AMD_NB_L3_PARTITIONING)) + amd_l3_attrs[n++] = &dev_attr_subcaches.attr; + + cache_private_group.attrs = amd_l3_attrs; +} + +const struct attribute_group * +cache_get_priv_group(struct cacheinfo *this_leaf) +{ + struct amd_northbridge *nb = this_leaf->priv; + + if (this_leaf->level < 3 || !nb) + return NULL; + + if (nb && nb->l3_cache.indices) + init_amd_l3_attrs(); + + return &cache_private_group; +} + +static void amd_init_l3_cache(struct _cpuid4_info_regs *this_leaf, int index) +{ + int node; + + /* only for L3, and not in virtualized environments */ + if (index < 3) + return; + + node = amd_get_nb_id(smp_processor_id()); + this_leaf->nb = node_to_amd_nb(node); + if (this_leaf->nb && !this_leaf->nb->l3_cache.indices) + amd_calc_l3_indices(this_leaf->nb); +} +#else +#define amd_init_l3_cache(x, y) +#endif /* CONFIG_AMD_NB && CONFIG_SYSFS */ + +static int +cpuid4_cache_lookup_regs(int index, struct _cpuid4_info_regs *this_leaf) +{ + union _cpuid4_leaf_eax eax; + union _cpuid4_leaf_ebx ebx; + union _cpuid4_leaf_ecx ecx; + unsigned edx; + + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { + if (boot_cpu_has(X86_FEATURE_TOPOEXT)) + cpuid_count(0x8000001d, index, &eax.full, + &ebx.full, &ecx.full, &edx); + else + amd_cpuid4(index, &eax, &ebx, &ecx); + amd_init_l3_cache(this_leaf, index); + } else { + cpuid_count(4, index, &eax.full, &ebx.full, &ecx.full, &edx); + } + + if (eax.split.type == CTYPE_NULL) + return -EIO; /* better error ? 
*/ + + this_leaf->eax = eax; + this_leaf->ebx = ebx; + this_leaf->ecx = ecx; + this_leaf->size = (ecx.split.number_of_sets + 1) * + (ebx.split.coherency_line_size + 1) * + (ebx.split.physical_line_partition + 1) * + (ebx.split.ways_of_associativity + 1); + return 0; +} + +static int find_num_cache_leaves(struct cpuinfo_x86 *c) +{ + unsigned int eax, ebx, ecx, edx, op; + union _cpuid4_leaf_eax cache_eax; + int i = -1; + + if (c->x86_vendor == X86_VENDOR_AMD) + op = 0x8000001d; + else + op = 4; + + do { + ++i; + /* Do cpuid(op) loop to find out num_cache_leaves */ + cpuid_count(op, i, &eax, &ebx, &ecx, &edx); + cache_eax.full = eax; + } while (cache_eax.split.type != CTYPE_NULL); + return i; +} + +void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu, u8 node_id) +{ + /* + * We may have multiple LLCs if L3 caches exist, so check if we + * have an L3 cache by looking at the L3 cache CPUID leaf. + */ + if (!cpuid_edx(0x80000006)) + return; + + if (c->x86 < 0x17) { + /* LLC is at the node level. */ + per_cpu(cpu_llc_id, cpu) = node_id; + } else if (c->x86 == 0x17 && + c->x86_model >= 0 && c->x86_model <= 0x1F) { + /* + * LLC is at the core complex level. + * Core complex ID is ApicId[3] for these processors. + */ + per_cpu(cpu_llc_id, cpu) = c->apicid >> 3; + } else { + /* + * LLC ID is calculated from the number of threads sharing the + * cache. 
+ * */ + u32 eax, ebx, ecx, edx, num_sharing_cache = 0; + u32 llc_index = find_num_cache_leaves(c) - 1; + + cpuid_count(0x8000001d, llc_index, &eax, &ebx, &ecx, &edx); + if (eax) + num_sharing_cache = ((eax >> 14) & 0xfff) + 1; + + if (num_sharing_cache) { + int bits = get_count_order(num_sharing_cache); + + per_cpu(cpu_llc_id, cpu) = c->apicid >> bits; + } + } +} + +void init_amd_cacheinfo(struct cpuinfo_x86 *c) +{ + + if (boot_cpu_has(X86_FEATURE_TOPOEXT)) { + num_cache_leaves = find_num_cache_leaves(c); + } else if (c->extended_cpuid_level >= 0x80000006) { + if (cpuid_edx(0x80000006) & 0xf000) + num_cache_leaves = 4; + else + num_cache_leaves = 3; + } +} + +void init_intel_cacheinfo(struct cpuinfo_x86 *c) +{ + /* Cache sizes */ + unsigned int trace = 0, l1i = 0, l1d = 0, l2 = 0, l3 = 0; + unsigned int new_l1d = 0, new_l1i = 0; /* Cache sizes from cpuid(4) */ + unsigned int new_l2 = 0, new_l3 = 0, i; /* Cache sizes from cpuid(4) */ + unsigned int l2_id = 0, l3_id = 0, num_threads_sharing, index_msb; +#ifdef CONFIG_SMP + unsigned int cpu = c->cpu_index; +#endif + + if (c->cpuid_level > 3) { + static int is_initialized; + + if (is_initialized == 0) { + /* Init num_cache_leaves from boot CPU */ + num_cache_leaves = find_num_cache_leaves(c); + is_initialized++; + } + + /* + * Whenever possible use cpuid(4), deterministic cache + * parameters cpuid leaf to find the cache details + */ + for (i = 0; i < num_cache_leaves; i++) { + struct _cpuid4_info_regs this_leaf = {}; + int retval; + + retval = cpuid4_cache_lookup_regs(i, &this_leaf); + if (retval < 0) + continue; + + switch (this_leaf.eax.split.level) { + case 1: + if (this_leaf.eax.split.type == CTYPE_DATA) + new_l1d = this_leaf.size/1024; + else if (this_leaf.eax.split.type == CTYPE_INST) + new_l1i = this_leaf.size/1024; + break; + case 2: + new_l2 = this_leaf.size/1024; + num_threads_sharing = 1 + this_leaf.eax.split.num_threads_sharing; + index_msb = get_count_order(num_threads_sharing); + l2_id = c->apicid & 
~((1 << index_msb) - 1); + break; + case 3: + new_l3 = this_leaf.size/1024; + num_threads_sharing = 1 + this_leaf.eax.split.num_threads_sharing; + index_msb = get_count_order(num_threads_sharing); + l3_id = c->apicid & ~((1 << index_msb) - 1); + break; + default: + break; + } + } + } + /* + * Don't use cpuid2 if cpuid4 is supported. For P4, we use cpuid2 for + * trace cache + */ + if ((num_cache_leaves == 0 || c->x86 == 15) && c->cpuid_level > 1) { + /* supports eax=2 call */ + int j, n; + unsigned int regs[4]; + unsigned char *dp = (unsigned char *)regs; + int only_trace = 0; + + if (num_cache_leaves != 0 && c->x86 == 15) + only_trace = 1; + + /* Number of times to iterate */ + n = cpuid_eax(2) & 0xFF; + + for (i = 0 ; i < n ; i++) { + cpuid(2, ®s[0], ®s[1], ®s[2], ®s[3]); + + /* If bit 31 is set, this is an unknown format */ + for (j = 0 ; j < 3 ; j++) + if (regs[j] & (1 << 31)) + regs[j] = 0; + + /* Byte 0 is level count, not a descriptor */ + for (j = 1 ; j < 16 ; j++) { + unsigned char des = dp[j]; + unsigned char k = 0; + + /* look up this descriptor in the table */ + while (cache_table[k].descriptor != 0) { + if (cache_table[k].descriptor == des) { + if (only_trace && cache_table[k].cache_type != LVL_TRACE) + break; + switch (cache_table[k].cache_type) { + case LVL_1_INST: + l1i += cache_table[k].size; + break; + case LVL_1_DATA: + l1d += cache_table[k].size; + break; + case LVL_2: + l2 += cache_table[k].size; + break; + case LVL_3: + l3 += cache_table[k].size; + break; + case LVL_TRACE: + trace += cache_table[k].size; + break; + } + + break; + } + + k++; + } + } + } + } + + if (new_l1d) + l1d = new_l1d; + + if (new_l1i) + l1i = new_l1i; + + if (new_l2) { + l2 = new_l2; +#ifdef CONFIG_SMP + per_cpu(cpu_llc_id, cpu) = l2_id; +#endif + } + + if (new_l3) { + l3 = new_l3; +#ifdef CONFIG_SMP + per_cpu(cpu_llc_id, cpu) = l3_id; +#endif + } + +#ifdef CONFIG_SMP + /* + * If cpu_llc_id is not yet set, this means cpuid_level < 4 which in + * turns means that the only 
possibility is SMT (as indicated in + * cpuid1). Since cpuid2 doesn't specify shared caches, and we know + * that SMT shares all caches, we can unconditionally set cpu_llc_id to + * c->phys_proc_id. + */ + if (per_cpu(cpu_llc_id, cpu) == BAD_APICID) + per_cpu(cpu_llc_id, cpu) = c->phys_proc_id; +#endif + + c->x86_cache_size = l3 ? l3 : (l2 ? l2 : (l1i+l1d)); + + if (!l2) + cpu_detect_cache_sizes(c); +} + +static int __cache_amd_cpumap_setup(unsigned int cpu, int index, + struct _cpuid4_info_regs *base) +{ + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); + struct cacheinfo *this_leaf; + int i, sibling; + + /* + * For L3, always use the pre-calculated cpu_llc_shared_mask + * to derive shared_cpu_map. + */ + if (index == 3) { + for_each_cpu(i, cpu_llc_shared_mask(cpu)) { + this_cpu_ci = get_cpu_cacheinfo(i); + if (!this_cpu_ci->info_list) + continue; + this_leaf = this_cpu_ci->info_list + index; + for_each_cpu(sibling, cpu_llc_shared_mask(cpu)) { + if (!cpu_online(sibling)) + continue; + cpumask_set_cpu(sibling, + &this_leaf->shared_cpu_map); + } + } + } else if (boot_cpu_has(X86_FEATURE_TOPOEXT)) { + unsigned int apicid, nshared, first, last; + + nshared = base->eax.split.num_threads_sharing + 1; + apicid = cpu_data(cpu).apicid; + first = apicid - (apicid % nshared); + last = first + nshared - 1; + + for_each_online_cpu(i) { + this_cpu_ci = get_cpu_cacheinfo(i); + if (!this_cpu_ci->info_list) + continue; + + apicid = cpu_data(i).apicid; + if ((apicid < first) || (apicid > last)) + continue; + + this_leaf = this_cpu_ci->info_list + index; + + for_each_online_cpu(sibling) { + apicid = cpu_data(sibling).apicid; + if ((apicid < first) || (apicid > last)) + continue; + cpumask_set_cpu(sibling, + &this_leaf->shared_cpu_map); + } + } + } else + return 0; + + return 1; +} + +static void __cache_cpumap_setup(unsigned int cpu, int index, + struct _cpuid4_info_regs *base) +{ + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); + struct cacheinfo 
*this_leaf, *sibling_leaf; + unsigned long num_threads_sharing; + int index_msb, i; + struct cpuinfo_x86 *c = &cpu_data(cpu); + + if (c->x86_vendor == X86_VENDOR_AMD) { + if (__cache_amd_cpumap_setup(cpu, index, base)) + return; + } + + this_leaf = this_cpu_ci->info_list + index; + num_threads_sharing = 1 + base->eax.split.num_threads_sharing; + + cpumask_set_cpu(cpu, &this_leaf->shared_cpu_map); + if (num_threads_sharing == 1) + return; + + index_msb = get_count_order(num_threads_sharing); + + for_each_online_cpu(i) + if (cpu_data(i).apicid >> index_msb == c->apicid >> index_msb) { + struct cpu_cacheinfo *sib_cpu_ci = get_cpu_cacheinfo(i); + + if (i == cpu || !sib_cpu_ci->info_list) + continue;/* skip if itself or no cacheinfo */ + sibling_leaf = sib_cpu_ci->info_list + index; + cpumask_set_cpu(i, &this_leaf->shared_cpu_map); + cpumask_set_cpu(cpu, &sibling_leaf->shared_cpu_map); + } +} + +static void ci_leaf_init(struct cacheinfo *this_leaf, + struct _cpuid4_info_regs *base) +{ + this_leaf->id = base->id; + this_leaf->attributes = CACHE_ID; + this_leaf->level = base->eax.split.level; + this_leaf->type = cache_type_map[base->eax.split.type]; + this_leaf->coherency_line_size = + base->ebx.split.coherency_line_size + 1; + this_leaf->ways_of_associativity = + base->ebx.split.ways_of_associativity + 1; + this_leaf->size = base->size; + this_leaf->number_of_sets = base->ecx.split.number_of_sets + 1; + this_leaf->physical_line_partition = + base->ebx.split.physical_line_partition + 1; + this_leaf->priv = base->nb; +} + +static int __init_cache_level(unsigned int cpu) +{ + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); + + if (!num_cache_leaves) + return -ENOENT; + if (!this_cpu_ci) + return -EINVAL; + this_cpu_ci->num_levels = 3; + this_cpu_ci->num_leaves = num_cache_leaves; + return 0; +} + +/* + * The max shared threads number comes from CPUID.4:EAX[25-14] with input + * ECX as cache index. 
Then right shift apicid by the number's order to get + * cache id for this cache node. + */ +static void get_cache_id(int cpu, struct _cpuid4_info_regs *id4_regs) +{ + struct cpuinfo_x86 *c = &cpu_data(cpu); + unsigned long num_threads_sharing; + int index_msb; + + num_threads_sharing = 1 + id4_regs->eax.split.num_threads_sharing; + index_msb = get_count_order(num_threads_sharing); + id4_regs->id = c->apicid >> index_msb; +} + +static int __populate_cache_leaves(unsigned int cpu) +{ + unsigned int idx, ret; + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); + struct cacheinfo *this_leaf = this_cpu_ci->info_list; + struct _cpuid4_info_regs id4_regs = {}; + + for (idx = 0; idx < this_cpu_ci->num_leaves; idx++) { + ret = cpuid4_cache_lookup_regs(idx, &id4_regs); + if (ret) + return ret; + get_cache_id(cpu, &id4_regs); + ci_leaf_init(this_leaf++, &id4_regs); + __cache_cpumap_setup(cpu, idx, &id4_regs); + } + this_cpu_ci->cpu_map_populated = true; + + return 0; +} + +DEFINE_SMP_CALL_CACHE_FUNCTION(init_cache_level) +DEFINE_SMP_CALL_CACHE_FUNCTION(populate_cache_leaves) only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/cpu/topology.c +++ linux-azure-4.15.0/arch/x86/kernel/cpu/topology.c @@ -22,21 +22,13 @@ #define BITS_SHIFT_NEXT_LEVEL(eax) ((eax) & 0x1f) #define LEVEL_MAX_SIBLINGS(ebx) ((ebx) & 0xffff) -/* - * Check for extended topology enumeration cpuid leaf 0xb and if it - * exists, use it for populating initial_apicid and cpu topology - * detection. 
- */ -void detect_extended_topology(struct cpuinfo_x86 *c) +int detect_extended_topology_early(struct cpuinfo_x86 *c) { #ifdef CONFIG_SMP - unsigned int eax, ebx, ecx, edx, sub_index; - unsigned int ht_mask_width, core_plus_mask_width; - unsigned int core_select_mask, core_level_siblings; - static bool printed; + unsigned int eax, ebx, ecx, edx; if (c->cpuid_level < 0xb) - return; + return -1; cpuid_count(0xb, SMT_LEVEL, &eax, &ebx, &ecx, &edx); @@ -44,7 +36,7 @@ * check if the cpuid leaf 0xb is actually implemented. */ if (ebx == 0 || (LEAFB_SUBTYPE(ecx) != SMT_TYPE)) - return; + return -1; set_cpu_cap(c, X86_FEATURE_XTOPOLOGY); @@ -52,10 +44,30 @@ * initial apic id, which also represents 32-bit extended x2apic id. */ c->initial_apicid = edx; + smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx); +#endif + return 0; +} + +/* + * Check for extended topology enumeration cpuid leaf 0xb and if it + * exists, use it for populating initial_apicid and cpu topology + * detection. + */ +int detect_extended_topology(struct cpuinfo_x86 *c) +{ +#ifdef CONFIG_SMP + unsigned int eax, ebx, ecx, edx, sub_index; + unsigned int ht_mask_width, core_plus_mask_width; + unsigned int core_select_mask, core_level_siblings; + + if (detect_extended_topology_early(c) < 0) + return -1; /* * Populate HT related information from sub-leaf level 0. 
*/ + cpuid_count(0xb, SMT_LEVEL, &eax, &ebx, &ecx, &edx); core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx); core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax); @@ -86,15 +98,6 @@ c->apicid = apic->phys_pkg_id(c->initial_apicid, 0); c->x86_max_cores = (core_level_siblings / smp_num_siblings); - - if (!printed) { - pr_info("CPU: Physical Processor ID: %d\n", - c->phys_proc_id); - if (c->x86_max_cores > 1) - pr_info("CPU: Processor Core ID: %d\n", - c->cpu_core_id); - printed = 1; - } - return; #endif + return 0; } only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/espfix_64.c +++ linux-azure-4.15.0/arch/x86/kernel/espfix_64.c @@ -195,6 +195,10 @@ pte_p = pte_offset_kernel(&pmd, addr); stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0)); + /* + * __PAGE_KERNEL_* includes _PAGE_GLOBAL, which we want since + * this is mapped to userspace. + */ pte = __pte(__pa(stack_page) | ((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask)); for (n = 0; n < ESPFIX_PTE_CLONES; n++) set_pte(&pte_p[n*PTE_STRIDE], pte); only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/fpu/core.c +++ linux-azure-4.15.0/arch/x86/kernel/fpu/core.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/head64.c +++ linux-azure-4.15.0/arch/x86/kernel/head64.c @@ -46,6 +46,12 @@ return ptr - (void *)_text + (void *)physaddr; } +/* Code in __startup_64() can be relocated during execution, but the compiler + * doesn't have to generate PC-relative relocations when accessing globals from + * that function. Clang actually does not generate them, which leads to + * boot-time crashes. To work around this problem, every global pointer must + * be adjusted using fixup_pointer(). 
+ */ unsigned long __head __startup_64(unsigned long physaddr, struct boot_params *bp) { @@ -55,6 +61,7 @@ p4dval_t *p4d; pudval_t *pud; pmdval_t *pmd, pmd_entry; + pteval_t *mask_ptr; int i; unsigned int *next_pgt_ptr; @@ -129,6 +136,9 @@ pud[i + 1] = (pudval_t)pmd + pgtable_flags; pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL; + /* Filter out unsupported __PAGE_KERNEL_* bits: */ + mask_ptr = fixup_pointer(&__supported_pte_mask, physaddr); + pmd_entry &= *mask_ptr; pmd_entry += sme_get_me_mask(); pmd_entry += physaddr; only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/hpet.c +++ linux-azure-4.15.0/arch/x86/kernel/hpet.c @@ -1,6 +1,7 @@ #include #include #include +#include #include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/i8259.c +++ linux-azure-4.15.0/arch/x86/kernel/i8259.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/irq.c +++ linux-azure-4.15.0/arch/x86/kernel/irq.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/irq_32.c +++ linux-azure-4.15.0/arch/x86/kernel/irq_32.c @@ -11,6 +11,7 @@ #include #include +#include #include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/irq_64.c +++ linux-azure-4.15.0/arch/x86/kernel/irq_64.c @@ -11,6 +11,7 @@ #include #include +#include #include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/irqinit.c +++ linux-azure-4.15.0/arch/x86/kernel/irqinit.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/ldt.c +++ linux-azure-4.15.0/arch/x86/kernel/ldt.c @@ -145,6 +145,7 @@ unsigned long offset = i << PAGE_SHIFT; const void *src = (char *)ldt->entries + offset; unsigned long pfn; 
+ pgprot_t pte_prot; + pte_t pte, *ptep; + va = (unsigned long)ldt_slot_va(slot) + offset; @@ -163,7 +164,10 @@ * target via some kernel interface which misses a * permission check. */ - pte = pfn_pte(pfn, __pgprot(__PAGE_KERNEL_RO & ~_PAGE_GLOBAL)); + pte_prot = __pgprot(__PAGE_KERNEL_RO & ~_PAGE_GLOBAL); + /* Filter out unsupported __PAGE_KERNEL* bits: */ + pgprot_val(pte_prot) &= __supported_pte_mask; + pte = pfn_pte(pfn, pte_prot); set_pte_at(mm, va, ptep, pte); pte_unmap_unlock(ptep, ptl); } only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/smp.c +++ linux-azure-4.15.0/arch/x86/kernel/smp.c @@ -261,6 +261,7 @@ { ack_APIC_irq(); inc_irq_stat(irq_resched_count); + kvm_set_cpu_l1tf_flush_l1d(); if (trace_resched_ipi_enabled()) { /* only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/kernel/time.c +++ linux-azure-4.15.0/arch/x86/kernel/time.c @@ -12,6 +12,7 @@ #include #include +#include #include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/mm/ident_map.c +++ linux-azure-4.15.0/arch/x86/mm/ident_map.c @@ -98,6 +98,9 @@ if (!info->kernpg_flag) info->kernpg_flag = _KERNPG_TABLE; + /* Filter out unsupported __PAGE_KERNEL_* bits: */ + info->kernpg_flag &= __default_kernel_pte_mask; + for (; addr < end; addr = next) { pgd_t *pgd = pgd_page + pgd_index(addr); p4d_t *p4d; only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/mm/init.c +++ linux-azure-4.15.0/arch/x86/mm/init.c @@ -4,6 +4,8 @@ #include #include #include /* for max_low_pfn */ +#include +#include #include #include @@ -190,6 +192,12 @@ enable_global_pages(); } + /* By default, everything is supported: */ + __default_kernel_pte_mask = __supported_pte_mask; + /* Except with PTI, where the kernel is mostly non-Global: */ + if (cpu_feature_enabled(X86_FEATURE_PTI)) + __default_kernel_pte_mask &= ~_PAGE_GLOBAL; + /* Enable 1 GB linear kernel mappings if available: */ if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) { 
printk(KERN_INFO "Using GB pages for direct mapping\n"); @@ -878,3 +886,24 @@ __cachemode2pte_tbl[cache] = __cm_idx2pte(entry); __pte2cachemode_tbl[entry] = cache; } + +unsigned long max_swapfile_size(void) +{ + unsigned long pages; + + pages = generic_max_swapfile_size(); + + if (boot_cpu_has_bug(X86_BUG_L1TF)) { + /* Limit the swap file size to MAX_PA/2 for L1TF workaround */ + unsigned long l1tf_limit = l1tf_pfn_limit() + 1; + /* + * We encode swap offsets also with 3 bits below those for pfn + * which makes the usable limit higher. + */ +#if CONFIG_PGTABLE_LEVELS > 2 + l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT; +#endif + pages = min_t(unsigned long, l1tf_limit, pages); + } + return pages; +} only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/mm/iomap_32.c +++ linux-azure-4.15.0/arch/x86/mm/iomap_32.c @@ -44,6 +44,9 @@ return ret; *prot = __pgprot(__PAGE_KERNEL | cachemode2protval(pcm)); + /* Filter out unsupported __PAGE_KERNEL* bits: */ + pgprot_val(*prot) &= __default_kernel_pte_mask; + return 0; } EXPORT_SYMBOL_GPL(iomap_create_wc); @@ -88,6 +91,9 @@ prot = __pgprot(__PAGE_KERNEL | cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS)); + /* Filter out unsupported __PAGE_KERNEL* bits: */ + pgprot_val(prot) &= __default_kernel_pte_mask; + return (void __force __iomem *) kmap_atomic_prot_pfn(pfn, prot); } EXPORT_SYMBOL_GPL(iomap_atomic_prot_pfn); only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/mm/kasan_init_64.c +++ linux-azure-4.15.0/arch/x86/mm/kasan_init_64.c @@ -263,6 +263,12 @@ pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE; p4dval_t p4d_val = __pa_nodebug(kasan_zero_pud) | _KERNPG_TABLE; + /* Mask out unsupported __PAGE_KERNEL bits: */ + pte_val &= __default_kernel_pte_mask; + pmd_val &= __default_kernel_pte_mask; + pud_val &= __default_kernel_pte_mask; + p4d_val &= __default_kernel_pte_mask; + for (i = 0; i < PTRS_PER_PTE; i++) kasan_zero_pte[i] = __pte(pte_val); @@ -365,7 +371,13 @@ */ memset(kasan_zero_page, 
0, PAGE_SIZE); for (i = 0; i < PTRS_PER_PTE; i++) { - pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO | _PAGE_ENC); + pte_t pte; + pgprot_t prot; + + prot = __pgprot(__PAGE_KERNEL_RO | _PAGE_ENC); + pgprot_val(prot) &= __default_kernel_pte_mask; + + pte = __pte(__pa(kasan_zero_page) | pgprot_val(prot)); set_pte(&kasan_zero_pte[i], pte); } /* Flush TLBs again to be sure that write protection applied. */ only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/mm/mmap.c +++ linux-azure-4.15.0/arch/x86/mm/mmap.c @@ -236,3 +236,24 @@ return phys_addr_valid(addr + count - 1); } + +/* + * Only allow root to set high MMIO mappings to PROT_NONE. + * This prevents an unpriv. user to set them to PROT_NONE and invert + * them, then pointing to valid memory for L1TF speculation. + * + * Note: for locked down kernels may want to disable the root override. + */ +bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot) +{ + if (!boot_cpu_has_bug(X86_BUG_L1TF)) + return true; + if (!__pte_needs_invert(pgprot_val(prot))) + return true; + /* If it's real memory always allow */ + if (pfn_valid(pfn)) + return true; + if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN)) + return false; + return true; +} only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/mm/pageattr.c +++ linux-azure-4.15.0/arch/x86/mm/pageattr.c @@ -298,9 +298,11 @@ /* * The .rodata section needs to be read-only. Using the pfn - * catches all aliases. + * catches all aliases. This also includes __ro_after_init, + * so do not enforce until kernel_set_to_readonly is true. */ - if (within(pfn, __pa_symbol(__start_rodata) >> PAGE_SHIFT, + if (kernel_set_to_readonly && + within(pfn, __pa_symbol(__start_rodata) >> PAGE_SHIFT, __pa_symbol(__end_rodata) >> PAGE_SHIFT)) pgprot_val(forbidden) |= _PAGE_RW; @@ -512,6 +514,23 @@ #endif } +static pgprot_t pgprot_clear_protnone_bits(pgprot_t prot) +{ + /* + * _PAGE_GLOBAL means "global page" for present PTEs. 
+ * But, it is also used to indicate _PAGE_PROTNONE + * for non-present PTEs. + * + * This ensures that a _PAGE_GLOBAL PTE going from + * present to non-present is not confused as + * _PAGE_PROTNONE. + */ + if (!(pgprot_val(prot) & _PAGE_PRESENT)) + pgprot_val(prot) &= ~_PAGE_GLOBAL; + + return prot; +} + static int try_preserve_large_page(pte_t *kpte, unsigned long address, struct cpa_data *cpa) @@ -566,6 +585,7 @@ * up accordingly. */ old_pte = *kpte; + /* Clear PSE (aka _PAGE_PAT) and move PAT bit to correct position */ req_prot = pgprot_large_2_4k(old_prot); pgprot_val(req_prot) &= ~pgprot_val(cpa->mask_clr); @@ -577,19 +597,9 @@ * different bit positions in the two formats. */ req_prot = pgprot_4k_2_large(req_prot); - - /* - * Set the PSE and GLOBAL flags only if the PRESENT flag is - * set otherwise pmd_present/pmd_huge will return true even on - * a non present pmd. The canon_pgprot will clear _PAGE_GLOBAL - * for the ancient hardware that doesn't support it. - */ + req_prot = pgprot_clear_protnone_bits(req_prot); if (pgprot_val(req_prot) & _PAGE_PRESENT) - pgprot_val(req_prot) |= _PAGE_PSE | _PAGE_GLOBAL; - else - pgprot_val(req_prot) &= ~(_PAGE_PSE | _PAGE_GLOBAL); - - req_prot = canon_pgprot(req_prot); + pgprot_val(req_prot) |= _PAGE_PSE; /* * old_pfn points to the large page base pfn. So we need @@ -674,8 +684,12 @@ switch (level) { case PG_LEVEL_2M: ref_prot = pmd_pgprot(*(pmd_t *)kpte); - /* clear PSE and promote PAT bit to correct position */ + /* + * Clear PSE (aka _PAGE_PAT) and move + * PAT bit to correct position. + */ ref_prot = pgprot_large_2_4k(ref_prot); + ref_pfn = pmd_pfn(*(pmd_t *)kpte); break; @@ -698,23 +712,14 @@ return 1; } - /* - * Set the GLOBAL flags only if the PRESENT flag is set - * otherwise pmd/pte_present will return true even on a non - * present pmd/pte. The canon_pgprot will clear _PAGE_GLOBAL - * for the ancient hardware that doesn't support it. 
- */ - if (pgprot_val(ref_prot) & _PAGE_PRESENT) - pgprot_val(ref_prot) |= _PAGE_GLOBAL; - else - pgprot_val(ref_prot) &= ~_PAGE_GLOBAL; + ref_prot = pgprot_clear_protnone_bits(ref_prot); /* * Get the target pfn from the original entry: */ pfn = ref_pfn; for (i = 0; i < PTRS_PER_PTE; i++, pfn += pfninc) - set_pte(&pbase[i], pfn_pte(pfn, canon_pgprot(ref_prot))); + set_pte(&pbase[i], pfn_pte(pfn, ref_prot)); if (virt_addr_valid(address)) { unsigned long pfn = PFN_DOWN(__pa(address)); @@ -930,19 +935,7 @@ pte = pte_offset_kernel(pmd, start); - /* - * Set the GLOBAL flags only if the PRESENT flag is - * set otherwise pte_present will return true even on - * a non present pte. The canon_pgprot will clear - * _PAGE_GLOBAL for the ancient hardware that doesn't - * support it. - */ - if (pgprot_val(pgprot) & _PAGE_PRESENT) - pgprot_val(pgprot) |= _PAGE_GLOBAL; - else - pgprot_val(pgprot) &= ~_PAGE_GLOBAL; - - pgprot = canon_pgprot(pgprot); + pgprot = pgprot_clear_protnone_bits(pgprot); while (num_pages-- && start < end) { set_pte(pte, pfn_pte(cpa->pfn, pgprot)); @@ -1004,8 +997,8 @@ pmd = pmd_offset(pud, start); - set_pmd(pmd, __pmd(cpa->pfn << PAGE_SHIFT | _PAGE_PSE | - massage_pgprot(pmd_pgprot))); + set_pmd(pmd, pmd_mkhuge(pfn_pmd(cpa->pfn, + canon_pgprot(pmd_pgprot)))); start += PMD_SIZE; cpa->pfn += PMD_SIZE >> PAGE_SHIFT; @@ -1077,8 +1070,8 @@ * Map everything starting from the Gb boundary, possibly with 1G pages */ while (boot_cpu_has(X86_FEATURE_GBPAGES) && end - start >= PUD_SIZE) { - set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE | - massage_pgprot(pud_pgprot))); + set_pud(pud, pud_mkhuge(pfn_pud(cpa->pfn, + canon_pgprot(pud_pgprot)))); start += PUD_SIZE; cpa->pfn += PUD_SIZE >> PAGE_SHIFT; @@ -1234,24 +1227,14 @@ new_prot = static_protections(new_prot, address, pfn); - /* - * Set the GLOBAL flags only if the PRESENT flag is - * set otherwise pte_present will return true even on - * a non present pte. 
The canon_pgprot will clear - * _PAGE_GLOBAL for the ancient hardware that doesn't - * support it. - */ - if (pgprot_val(new_prot) & _PAGE_PRESENT) - pgprot_val(new_prot) |= _PAGE_GLOBAL; - else - pgprot_val(new_prot) &= ~_PAGE_GLOBAL; + new_prot = pgprot_clear_protnone_bits(new_prot); /* * We need to keep the pfn from the existing PTE, * after all we're only going to change it's attributes * not the memory it points to */ - new_pte = pfn_pte(pfn, canon_pgprot(new_prot)); + new_pte = pfn_pte(pfn, new_prot); cpa->pfn = pfn; /* * Do we really change anything ? only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/mm/pti.c +++ linux-azure-4.15.0/arch/x86/mm/pti.c @@ -45,6 +45,7 @@ #include #include #include +#include #undef pr_fmt #define pr_fmt(fmt) "Kernel/User page tables isolation: " fmt only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c +++ linux-azure-4.15.0/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c @@ -18,6 +18,7 @@ #include #include #include +#include #define TANGIER_EXT_TIMER0_MSI 12 only in patch2: unchanged: --- linux-azure-4.15.0.orig/arch/x86/xen/enlighten.c +++ linux-azure-4.15.0/arch/x86/xen/enlighten.c @@ -3,6 +3,7 @@ #endif #include #include +#include #include #include only in patch2: unchanged: --- linux-azure-4.15.0.orig/include/linux/swapfile.h +++ linux-azure-4.15.0/include/linux/swapfile.h @@ -10,5 +10,7 @@ extern struct plist_head swap_active_head; extern struct swap_info_struct *swap_info[]; extern int try_to_unuse(unsigned int, bool, unsigned long); +extern unsigned long generic_max_swapfile_size(void); +extern unsigned long max_swapfile_size(void); #endif /* _LINUX_SWAPFILE_H */ only in patch2: unchanged: --- linux-azure-4.15.0.orig/include/net/ipv6.h +++ linux-azure-4.15.0/include/net/ipv6.h @@ -385,8 +385,8 @@ } #endif -#define IPV6_FRAG_HIGH_THRESH (4 * 1024*1024) /* 4194304 */ -#define IPV6_FRAG_LOW_THRESH (3 * 1024*1024) /* 3145728 */ +#define 
IPV6_FRAG_HIGH_THRESH (256 * 1024) /* 262144 */ +#define IPV6_FRAG_LOW_THRESH (192 * 1024) /* 196608 */ #define IPV6_FRAG_TIMEOUT (60 * HZ) /* 60 seconds */ int __ipv6_addr_type(const struct in6_addr *addr); only in patch2: unchanged: --- linux-azure-4.15.0.orig/include/uapi/linux/kvm.h +++ linux-azure-4.15.0/include/uapi/linux/kvm.h @@ -761,6 +761,7 @@ #define KVM_TRACE_PAUSE __KVM_DEPRECATED_MAIN_0x07 #define KVM_TRACE_DISABLE __KVM_DEPRECATED_MAIN_0x08 #define KVM_GET_EMULATED_CPUID _IOWR(KVMIO, 0x09, struct kvm_cpuid2) +#define KVM_GET_MSR_FEATURE_INDEX_LIST _IOWR(KVMIO, 0x0a, struct kvm_msr_list) /* * Extension capability list. @@ -934,6 +935,7 @@ #define KVM_CAP_S390_AIS_MIGRATION 150 #define KVM_CAP_PPC_GET_CPU_CHAR 151 #define KVM_CAP_S390_BPB 152 +#define KVM_CAP_GET_MSR_FEATURES 153 #ifdef KVM_CAP_IRQ_ROUTING only in patch2: unchanged: --- linux-azure-4.15.0.orig/kernel/cpu.c +++ linux-azure-4.15.0/kernel/cpu.c @@ -60,6 +60,7 @@ bool rollback; bool single; bool bringup; + bool booted_once; struct hlist_node *node; struct hlist_node *last; enum cpuhp_state cb_state; @@ -346,6 +347,85 @@ EXPORT_SYMBOL_GPL(cpu_hotplug_enable); #endif /* CONFIG_HOTPLUG_CPU */ +#ifdef CONFIG_HOTPLUG_SMT +enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED; +EXPORT_SYMBOL_GPL(cpu_smt_control); + +static bool cpu_smt_available __read_mostly; + +void __init cpu_smt_disable(bool force) +{ + if (cpu_smt_control == CPU_SMT_FORCE_DISABLED || + cpu_smt_control == CPU_SMT_NOT_SUPPORTED) + return; + + if (force) { + pr_info("SMT: Force disabled\n"); + cpu_smt_control = CPU_SMT_FORCE_DISABLED; + } else { + cpu_smt_control = CPU_SMT_DISABLED; + } +} + +/* + * The decision whether SMT is supported can only be done after the full + * CPU identification. Called from architecture code before non boot CPUs + * are brought up. 
+ */ +void __init cpu_smt_check_topology_early(void) +{ + if (!topology_smt_supported()) + cpu_smt_control = CPU_SMT_NOT_SUPPORTED; +} + +/* + * If SMT was disabled by BIOS, detect it here, after the CPUs have been + * brought online. This ensures the smt/l1tf sysfs entries are consistent + * with reality. cpu_smt_available is set to true during the bringup of non + * boot CPUs when an SMT sibling is detected. Note, this may overwrite + * cpu_smt_control's previous setting. + */ +void __init cpu_smt_check_topology(void) +{ + if (!cpu_smt_available) + cpu_smt_control = CPU_SMT_NOT_SUPPORTED; +} + +static int __init smt_cmdline_disable(char *str) +{ + cpu_smt_disable(str && !strcmp(str, "force")); + return 0; +} +early_param("nosmt", smt_cmdline_disable); + +static inline bool cpu_smt_allowed(unsigned int cpu) +{ + if (topology_is_primary_thread(cpu)) + return true; + + /* + * If the CPU is not a 'primary' thread and the booted_once bit is + * set, then the processor has SMT support. Store this information + * for the late check of SMT support in cpu_smt_check_topology(). + */ + if (per_cpu(cpuhp_state, cpu).booted_once) + cpu_smt_available = true; + + if (cpu_smt_control == CPU_SMT_ENABLED) + return true; + + /* + * On x86 it's required to boot all logical CPUs at least once so + * that the init code can get a chance to set CR4.MCE on each + * CPU. Otherwise, a broadcast MCE observing CR4.MCE=0b on any + * core will shut down the machine. + */ + return !per_cpu(cpuhp_state, cpu).booted_once; +} +#else +static inline bool cpu_smt_allowed(unsigned int cpu) { return true; } +#endif + static inline enum cpuhp_state cpuhp_set_state(struct cpuhp_cpu_state *st, enum cpuhp_state target) { @@ -426,6 +506,16 @@ stop_machine_unpark(cpu); kthread_unpark(st->thread); + /* + * SMT soft disabling on X86 requires bringing the CPU out of the + * BIOS 'wait for SIPI' state in order to set the CR4.MCE bit. 
The + * CPU marked itself as booted_once in cpu_notify_starting() so the + * cpu_smt_allowed() check will now return false if this is not the + * primary sibling. + */ + if (!cpu_smt_allowed(cpu)) + return -ECANCELED; + if (st->target <= CPUHP_AP_ONLINE_IDLE) return 0; @@ -758,7 +848,6 @@ /* Park the smpboot threads */ kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread); - smpboot_park_threads(cpu); /* * Prevent irq alloc/free while the dying cpu reorganizes the @@ -911,20 +1000,19 @@ return ret; } +static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target) +{ + if (cpu_hotplug_disabled) + return -EBUSY; + return _cpu_down(cpu, 0, target); +} + static int do_cpu_down(unsigned int cpu, enum cpuhp_state target) { int err; cpu_maps_update_begin(); - - if (cpu_hotplug_disabled) { - err = -EBUSY; - goto out; - } - - err = _cpu_down(cpu, 0, target); - -out: + err = cpu_down_maps_locked(cpu, target); cpu_maps_update_done(); return err; } @@ -953,6 +1041,7 @@ int ret; rcu_cpu_starting(cpu); /* Enables RCU usage on this CPU. 
*/ + st->booted_once = true; while (st->state < target) { st->state++; ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL); @@ -1062,6 +1151,10 @@ err = -EBUSY; goto out; } + if (!cpu_smt_allowed(cpu)) { + err = -EPERM; + goto out; + } err = _cpu_up(cpu, 0, target); out: @@ -1344,7 +1437,7 @@ [CPUHP_AP_SMPBOOT_THREADS] = { .name = "smpboot/threads:online", .startup.single = smpboot_unpark_threads, - .teardown.single = NULL, + .teardown.single = smpboot_park_threads, }, [CPUHP_AP_IRQ_AFFINITY_ONLINE] = { .name = "irq/affinity:online", @@ -1918,10 +2011,172 @@ NULL }; +#ifdef CONFIG_HOTPLUG_SMT + +static const char *smt_states[] = { + [CPU_SMT_ENABLED] = "on", + [CPU_SMT_DISABLED] = "off", + [CPU_SMT_FORCE_DISABLED] = "forceoff", + [CPU_SMT_NOT_SUPPORTED] = "notsupported", +}; + +static ssize_t +show_smt_control(struct device *dev, struct device_attribute *attr, char *buf) +{ + return snprintf(buf, PAGE_SIZE - 2, "%s\n", smt_states[cpu_smt_control]); +} + +static void cpuhp_offline_cpu_device(unsigned int cpu) +{ + struct device *dev = get_cpu_device(cpu); + + dev->offline = true; + /* Tell user space about the state change */ + kobject_uevent(&dev->kobj, KOBJ_OFFLINE); +} + +static void cpuhp_online_cpu_device(unsigned int cpu) +{ + struct device *dev = get_cpu_device(cpu); + + dev->offline = false; + /* Tell user space about the state change */ + kobject_uevent(&dev->kobj, KOBJ_ONLINE); +} + +static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval) +{ + int cpu, ret = 0; + + cpu_maps_update_begin(); + for_each_online_cpu(cpu) { + if (topology_is_primary_thread(cpu)) + continue; + ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE); + if (ret) + break; + /* + * As this needs to hold the cpu maps lock it's impossible + * to call device_offline() because that ends up calling + * cpu_down() which takes cpu maps lock. cpu maps lock + * needs to be held as this might race against in kernel + * abusers of the hotplug machinery (thermal management). 
+	 *
+	 * So nothing would update device:offline state. That would
+	 * leave the sysfs entry stale and prevent onlining after
+	 * smt control has been changed to 'off' again. This is
+	 * called under the sysfs hotplug lock, so it is properly
+	 * serialized against the regular offline usage.
+	 */
+		cpuhp_offline_cpu_device(cpu);
+	}
+	if (!ret)
+		cpu_smt_control = ctrlval;
+	cpu_maps_update_done();
+	return ret;
+}
+
+static int cpuhp_smt_enable(void)
+{
+	int cpu, ret = 0;
+
+	cpu_maps_update_begin();
+	cpu_smt_control = CPU_SMT_ENABLED;
+	for_each_present_cpu(cpu) {
+		/* Skip online CPUs and CPUs on offline nodes */
+		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
+			continue;
+		ret = _cpu_up(cpu, 0, CPUHP_ONLINE);
+		if (ret)
+			break;
+		/* See comment in cpuhp_smt_disable() */
+		cpuhp_online_cpu_device(cpu);
+	}
+	cpu_maps_update_done();
+	return ret;
+}
+
+static ssize_t
+store_smt_control(struct device *dev, struct device_attribute *attr,
+		  const char *buf, size_t count)
+{
+	int ctrlval, ret;
+
+	if (sysfs_streq(buf, "on"))
+		ctrlval = CPU_SMT_ENABLED;
+	else if (sysfs_streq(buf, "off"))
+		ctrlval = CPU_SMT_DISABLED;
+	else if (sysfs_streq(buf, "forceoff"))
+		ctrlval = CPU_SMT_FORCE_DISABLED;
+	else
+		return -EINVAL;
+
+	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED)
+		return -EPERM;
+
+	if (cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
+		return -ENODEV;
+
+	ret = lock_device_hotplug_sysfs();
+	if (ret)
+		return ret;
+
+	if (ctrlval != cpu_smt_control) {
+		switch (ctrlval) {
+		case CPU_SMT_ENABLED:
+			ret = cpuhp_smt_enable();
+			break;
+		case CPU_SMT_DISABLED:
+		case CPU_SMT_FORCE_DISABLED:
+			ret = cpuhp_smt_disable(ctrlval);
+			break;
+		}
+	}
+
+	unlock_device_hotplug();
+	return ret ? ret : count;
+}
+static DEVICE_ATTR(control, 0644, show_smt_control, store_smt_control);
+
+static ssize_t
+show_smt_active(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	bool active = topology_max_smt_threads() > 1;
+
+	return snprintf(buf, PAGE_SIZE - 2, "%d\n", active);
+}
+static DEVICE_ATTR(active, 0444, show_smt_active, NULL);
+
+static struct attribute *cpuhp_smt_attrs[] = {
+	&dev_attr_control.attr,
+	&dev_attr_active.attr,
+	NULL
+};
+
+static const struct attribute_group cpuhp_smt_attr_group = {
+	.attrs = cpuhp_smt_attrs,
+	.name = "smt",
+	NULL
+};
+
+static int __init cpu_smt_state_init(void)
+{
+	return sysfs_create_group(&cpu_subsys.dev_root->kobj,
+				  &cpuhp_smt_attr_group);
+}
+
+#else
+static inline int cpu_smt_state_init(void) { return 0; }
+#endif
+
 static int __init cpuhp_sysfs_init(void)
 {
 	int cpu, ret;
 
+	ret = cpu_smt_state_init();
+	if (ret)
+		return ret;
+
 	ret = sysfs_create_group(&cpu_subsys.dev_root->kobj,
 				 &cpuhp_cpu_root_attr_group);
 	if (ret)
@@ -2024,5 +2279,6 @@
  */
 void __init boot_cpu_state_init(void)
 {
-	per_cpu_ptr(&cpuhp_state, smp_processor_id())->state = CPUHP_ONLINE;
+	this_cpu_write(cpuhp_state.booted_once, true);
+	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
 }
only in patch2:
unchanged:
--- linux-azure-4.15.0.orig/kernel/sched/fair.c
+++ linux-azure-4.15.0/kernel/sched/fair.c
@@ -6008,6 +6008,7 @@
 }
 
 #ifdef CONFIG_SCHED_SMT
+DEFINE_STATIC_KEY_FALSE(sched_smt_present);
 
 static inline void set_idle_cores(int cpu, int val)
 {
only in patch2:
unchanged:
--- linux-azure-4.15.0.orig/kernel/smp.c
+++ linux-azure-4.15.0/kernel/smp.c
@@ -584,6 +584,8 @@
 		num_nodes, (num_nodes > 1 ? "s" : ""),
 		num_cpus,  (num_cpus  > 1 ? "s" : ""));
 
+	/* Final decision about SMT support */
+	cpu_smt_check_topology();
 	/* Any cleanup work */
 	smp_cpus_done(setup_max_cpus);
 }
only in patch2:
unchanged:
--- linux-azure-4.15.0.orig/mm/mprotect.c
+++ linux-azure-4.15.0/mm/mprotect.c
@@ -292,6 +292,42 @@
 	return pages;
 }
 
+static int prot_none_pte_entry(pte_t *pte, unsigned long addr,
+			       unsigned long next, struct mm_walk *walk)
+{
+	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
+		0 : -EACCES;
+}
+
+static int prot_none_hugetlb_entry(pte_t *pte, unsigned long hmask,
+				   unsigned long addr, unsigned long next,
+				   struct mm_walk *walk)
+{
+	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
+		0 : -EACCES;
+}
+
+static int prot_none_test(unsigned long addr, unsigned long next,
+			  struct mm_walk *walk)
+{
+	return 0;
+}
+
+static int prot_none_walk(struct vm_area_struct *vma, unsigned long start,
+			  unsigned long end, unsigned long newflags)
+{
+	pgprot_t new_pgprot = vm_get_page_prot(newflags);
+	struct mm_walk prot_none_walk = {
+		.pte_entry = prot_none_pte_entry,
+		.hugetlb_entry = prot_none_hugetlb_entry,
+		.test_walk = prot_none_test,
+		.mm = current->mm,
+		.private = &new_pgprot,
+	};
+
+	return walk_page_range(start, end, &prot_none_walk);
+}
+
 int mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	unsigned long start, unsigned long end, unsigned long newflags)
@@ -310,6 +346,19 @@
 	}
 
 	/*
+	 * Do PROT_NONE PFN permission checks here when we can still
+	 * bail out without undoing a lot of state. This is a rather
+	 * uncommon case, so doesn't need to be very optimized.
+	 */
+	if (arch_has_pfn_modify_check() &&
+	    (vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
+	    (newflags & (VM_READ|VM_WRITE|VM_EXEC)) == 0) {
+		error = prot_none_walk(vma, start, end, newflags);
+		if (error)
+			return error;
+	}
+
+	/*
 	 * If we make a private mapping writable we increase our commit;
 	 * but (without finer accounting) cannot reduce our commit if we
 	 * make it unwritable again. hugetlb mapping were accounted for
only in patch2:
unchanged:
--- linux-azure-4.15.0.orig/mm/swapfile.c
+++ linux-azure-4.15.0/mm/swapfile.c
@@ -2909,6 +2909,35 @@
 	return 0;
 }
 
+
+/*
+ * Find out how many pages are allowed for a single swap device. There
+ * are two limiting factors:
+ * 1) the number of bits for the swap offset in the swp_entry_t type, and
+ * 2) the number of bits in the swap pte, as defined by the different
+ *    architectures.
+ *
+ * In order to find the largest possible bit mask, a swap entry with
+ * swap type 0 and swap offset ~0UL is created, encoded to a swap pte,
+ * decoded to a swp_entry_t again, and finally the swap offset is
+ * extracted.
+ *
+ * This will mask all the bits from the initial ~0UL mask that can't
+ * be encoded in either the swp_entry_t or the architecture definition
+ * of a swap pte.
+ */
+unsigned long generic_max_swapfile_size(void)
+{
+	return swp_offset(pte_to_swp_entry(
+			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
+}
+
+/* Can be overridden by an architecture for additional checks. */
+__weak unsigned long max_swapfile_size(void)
+{
+	return generic_max_swapfile_size();
+}
+
 static unsigned long read_swap_header(struct swap_info_struct *p,
 					union swap_header *swap_header,
 					struct inode *inode)
@@ -2944,22 +2973,7 @@
 	p->cluster_next = 1;
 	p->cluster_nr = 0;
 
-	/*
-	 * Find out how many pages are allowed for a single swap
-	 * device. There are two limiting factors: 1) the number
-	 * of bits for the swap offset in the swp_entry_t type, and
-	 * 2) the number of bits in the swap pte as defined by the
-	 * different architectures. In order to find the
-	 * largest possible bit mask, a swap entry with swap type 0
-	 * and swap offset ~0UL is created, encoded to a swap pte,
-	 * decoded to a swp_entry_t again, and finally the swap
-	 * offset is extracted. This will mask all the bits from
-	 * the initial ~0UL mask that can't be encoded in either
-	 * the swp_entry_t or the architecture definition of a
-	 * swap pte.
-	 */
-	maxpages = swp_offset(pte_to_swp_entry(
-			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
+	maxpages = max_swapfile_size();
 	last_page = swap_header->info.last_page;
 	if (last_page > maxpages) {
 		pr_warn("Truncating oversized swap area, only using %luk out of %luk\n",
only in patch2:
unchanged:
--- linux-azure-4.15.0.orig/net/ipv4/ip_fragment.c
+++ linux-azure-4.15.0/net/ipv4/ip_fragment.c
@@ -846,22 +846,14 @@
 
 static int __net_init ipv4_frags_init_net(struct net *net)
 {
-	/* Fragment cache limits.
-	 *
-	 * The fragment memory accounting code, (tries to) account for
-	 * the real memory usage, by measuring both the size of frag
-	 * queue struct (inet_frag_queue (ipv4:ipq/ipv6:frag_queue))
-	 * and the SKB's truesize.
-	 *
-	 * A 64K fragment consumes 129736 bytes (44*2944)+200
-	 * (1500 truesize == 2944, sizeof(struct ipq) == 200)
-	 *
-	 * We will commit 4MB at one time. Should we cross that limit
-	 * we will prune down to 3MB, making room for approx 8 big 64K
-	 * fragments 8x128k.
+	/*
+	 * Fragment cache limits. We will commit 256K at one time. Should we
+	 * cross that limit we will prune down to 192K. This should cope with
+	 * even the most extreme cases without allowing an attacker to
+	 * measurably harm machine performance.
 	 */
-	net->ipv4.frags.high_thresh = 4 * 1024 * 1024;
-	net->ipv4.frags.low_thresh = 3 * 1024 * 1024;
+	net->ipv4.frags.high_thresh = 256 * 1024;
+	net->ipv4.frags.low_thresh = 192 * 1024;
 	/*
 	 * Important NOTE! Fragment queue must be destroyed before MSL expires.
 	 * RFC791 is wrong proposing to prolongate timer each fragment arrival