- Jul 13, 2008
-
-
Greg Kroah-Hartman authored
-
Michael Karcher authored
commit 5ac37f87 upstream Fix size of LDT entries. On x86-64, ldt_desc is a double-sized descriptor. Signed-off-by:
Michael Karcher <kernel@mkarcher.dialup.fu-berlin.de> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
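The size issue above comes down to descriptor layout: in 64-bit mode the LDT (and TSS) descriptor in the GDT occupies 16 bytes instead of the usual 8. Below is a standalone sketch of the two layouts, with informal field names that are not the kernel's:

#include <stdint.h>
#include <stdio.h>

/* Ordinary 8-byte code/data segment descriptor. */
struct desc8 {
	uint16_t limit0;
	uint16_t base0;
	uint8_t  base1;
	uint8_t  type_attrs;
	uint8_t  limit1_flags;
	uint8_t  base2;
} __attribute__((packed));

/* Long-mode system descriptor (LDT/TSS): low half plus the upper 32 base bits. */
struct desc16 {
	struct desc8 low;
	uint32_t base3;		/* bits 63..32 of the base address */
	uint32_t reserved;
} __attribute__((packed));

int main(void)
{
	/* Writing an LDT entry into the GDT must cover the full 16 bytes. */
	printf("normal descriptor:        %zu bytes\n", sizeof(struct desc8));
	printf("long-mode LDT descriptor: %zu bytes\n", sizeof(struct desc16));
	return 0;
}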
-
- Jul 02, 2008
-
-
Greg Kroah-Hartman authored
-
Max Asbock authored
Commit 41aefdcc upstream x86: shift bits the right way in native_read_tscp native_read_tscp shifts the bits in the high-order value in the wrong direction; the attached patch fixes that. Signed-off-by:
Max Asbock <masbock@linux.vnet.ibm.com> Acked-by:
Glauber Costa <gcosta@redhat.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
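A minimal userspace sketch of a correct RDTSCP read; this is not the kernel's native_read_tscp(), just an illustration that the high word must be shifted left by 32 before being combined with the low word (it assumes an x86-64 CPU with RDTSCP support and a GCC-style compiler):

#include <stdint.h>
#include <stdio.h>

static uint64_t read_tscp(uint32_t *aux)
{
	uint32_t lo, hi;

	/* RDTSCP returns the low TSC bits in EAX, high bits in EDX, IA32_TSC_AUX in ECX. */
	__asm__ volatile("rdtscp" : "=a"(lo), "=d"(hi), "=c"(*aux));
	return ((uint64_t)hi << 32) | lo;	/* shift hi up, not down */
}

int main(void)
{
	uint32_t aux;
	uint64_t tsc = read_tscp(&aux);

	printf("tsc=%llu aux=%u\n", (unsigned long long)tsc, aux);
	return 0;
}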
-
Yanmin Zhang authored
Commit fcb43042 upstream x86: fix cpu hotplug crash Vegard Nossum reported crashes during cpu hotplug tests: http://marc.info/?l=linux-kernel&m=121413950227884&w=4 In _cpu_up, the panic happens on the second call to __raw_notifier_call_chain; the first call does not panic. Blaming nr_cpu_ids alone is not the right explanation. By checking the source code, I found that do_boot_cpu is the culprit. Consider the call chain: _cpu_up=>__cpu_up=>smp_ops.cpu_up=>native_cpu_up=>do_boot_cpu, so do_boot_cpu is called at the end. In do_boot_cpu, if boot_error==true, cpu_clear(cpu, cpu_possible_map) is executed. Later, when _cpu_up calls __raw_notifier_call_chain a second time to report CPU_UP_CANCELED, this cpu has already been cleared from cpu_possible_map, so get_cpu_sysdev returns NULL. Many resources are tied to cpu_possible_map, so it is better not to change it. The patch below, against 2.6.26-rc7, fixes the crash by removing the bit clearing from cpu_possible_map. Signed-off-by:
Zhang Yanmin <yanmin_zhang@linux.intel.com> Tested-by:
Vegard Nossum <vegard.nossum@gmail.com> Acked-by:
Rusty Russell <rusty@rustcorp.com.au> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
TAKADA Yoshihito authored
Commit 11dbc963 upstream ptrace GET/SET FPXREGS broken When I updated the kernel from 2.6.24 to 2.6.25, gdb stopped working. On 2.6.25, ptrace(PTRACE_GETFPXREGS, ...) returns ENODEV, but the 2.6.24 kernel's ptrace() returns EIO. This is a compatibility issue. I attached a test program as pt.c and a patch to fix it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <errno.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

struct user_fxsr_struct {
	unsigned short cwd;
	unsigned short swd;
	unsigned short twd;
	unsigned short fop;
	long fip;
	long fcs;
	long foo;
	long fos;
	long mxcsr;
	long reserved;
	long st_space[32];	/* 8*16 bytes for each FP-reg = 128 bytes */
	long xmm_space[32];	/* 8*16 bytes for each XMM-reg = 128 bytes */
	long padding[56];
};

int child(void);
int parent(pid_t pid);

int main(void)
{
	pid_t pid;

	pid = fork();
	switch (pid) {
	case -1:	/* error */
		break;
	case 0:		/* child */
		child();
		break;
	default:
		parent(pid);
		break;
	}
	return 0;
}

int child(void)
{
	ptrace(PTRACE_TRACEME);
	kill(getpid(), SIGSTOP);
	sleep(10);
	return 0;
}

int parent(pid_t pid)
{
	int ret;
	struct user_fxsr_struct fpxregs;

	ret = ptrace(PTRACE_GETFPXREGS, pid, 0, &fpxregs);
	if (ret < 0)
		printf("%d: %s.\n", errno, strerror(errno));
	kill(pid, SIGCONT);
	waitpid(pid, NULL, 0);
	return 0;
}

/* in the kernel, at kernel/i387.c get_fpxregs() */ Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Dmitry Adamushko authored
Commit 79c53799 upstream The CPU hotplug problems (crashes under high-volume unplug+replug tests) seem to be related to migrate_dead_tasks(). Firstly I added traces to see all tasks being migrated with migrate_live_tasks() and migrate_dead_tasks(). On my setup the problem pops up (the one with "se == NULL" in the loop of pick_next_task_fair()) shortly after the traces indicate that some task has been migrated with migrate_dead_tasks(). Btw., I can reproduce it much faster now with just a plain cpu down/up loop. [disclaimer] Well, unless I'm really missing something important in this late hour [/disclaimer] pick_next_task() is not something appropriate for migrate_dead_tasks() :-) The following change seems to eliminate the problem on my setup (although I kept it running only for a few minutes, long enough to get a few messages indicating that migrate_dead_tasks() does move tasks and that the system is still OK). Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Roland McGrath authored
Commit 5a4646a4 introduced a leak of task_struct refs into sys32_ptrace. This bug has already gone away for 2.6.26 in commit 562b80ba. Signed-off-by:
Roland McGrath <roland@redhat.com> Acked-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
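A generic illustration of the leak pattern being fixed, not the actual compat ptrace code: a reference taken at the top of a function has to be dropped on every exit path, including early error returns.

#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refcount;
};

static void get_obj(struct obj *o) { o->refcount++; }

static void put_obj(struct obj *o)
{
	if (--o->refcount == 0)
		free(o);
}

static int do_request(struct obj *o, int request)
{
	int ret = 0;

	get_obj(o);			/* reference taken */

	if (request < 0) {
		ret = -22;		/* EINVAL-style failure */
		goto out;		/* a buggy version would return here directly */
	}
	/* ... act on the object ... */
out:
	put_obj(o);			/* balanced on success and failure alike */
	return ret;
}

int main(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	o->refcount = 1;
	do_request(o, -1);
	do_request(o, 1);
	put_obj(o);			/* drop the caller's own reference */
	return 0;
}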
-
Jie Luo authored
commit ea7b44c8 upstream On 9xx chips, bus mastering needs to be enabled at resume time for much of the chip to function. With this patch, vblank interrupts will work as expected on resume, along with other chip functions. Fixes kernel bugzilla #10844. Signed-off-by:
Jie Luo <clotho67@gmail.com> Signed-off-by:
Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Eli Cohen authored
commit 87afd448 upstream Current memfree FW has a bug which, in some cases, assumes that ICM pages passed to it are cleared. This patch uses __GFP_ZERO to allocate all ICM pages passed to the FW. Once firmware with a fix is released, we can make the workaround conditional on the firmware version. This fixes the bug reported by Arthur Kepner <akepner@sgi.com> here: http://lists.openfabrics.org/pipermail/general/2008-May/050026.html [ Rewritten to be a one-liner using __GFP_ZERO instead of vmap()ing ICM memory and memset()ing it to 0. - Roland ] Signed-off-by:
Eli Cohen <eli@mellanox.co.il> Signed-off-by:
Roland Dreier <rolandd@cisco.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Thomas Gleixner authored
commit 1b7558e4 upstream This patch addresses a very sporadic pi-futex related failure in highly threaded java apps on large SMP systems. David Holmes reported that the pi_state consistency check in lookup_pi_state triggered with his test application. This means that the kernel-internal pi_state and the user space futex variable are out of sync. First we assumed that this is a user space data corruption, but deeper investigation revealed that the problem happened because the pi-futex code is not handling a fault in the futex_lock_pi path when the user space variable needs to be fixed up. The fault happens when a fork mapped the anon memory which contains the futex read-only for COW, or the page got swapped out exactly between the unlock of the futex and the return of either the new futex owner or the task which was the expected owner but failed to acquire the kernel-internal rtmutex. The current futex_lock_pi() code drops out with an inconsistent state in case it faults and returns -EFAULT to user space. User space has no way to fix up that state. When we wrote this code we thought that we could not drop the hash bucket lock at this point to handle the fault. After analysing the code again it turned out to be wrong because there are only two tasks involved which might modify the pi_state and the user space variable:
- the task which acquired the rtmutex
- the pending owner of the pi_state which did not get the rtmutex
Both tasks drop into the fixup_pi_state() function before returning to user space. The first task which acquired the hash bucket lock faults in the fixup of the user space variable, drops the spinlock and calls futex_handle_fault() to fault in the page. Now the second task could acquire the hash bucket lock and tries to fix up the user space variable as well. It either faults as well or it succeeds because the first task already faulted the page in. One caveat is to avoid a double fixup. After returning from the fault handling we reacquire the hash bucket lock and check whether the pi_state owner has been modified already. Reported-by:
David Holmes <david.holmes@sun.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: David Holmes <david.holmes@sun.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Alan Cox authored
This is fixed in a different way by the recent tty operations rewrite in mainline; this is a selective backport of the relevant portions to the -stable tree. Signed-off-by:
Alan Cox <alan@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
- Jun 24, 2008
-
-
Greg Kroah-Hartman authored
-
Linus Torvalds authored
commit 672ca28e upstream Commit 89f5b7da ("Reinstate ZERO_PAGE optimization in 'get_user_pages()' and fix XIP") broke vmware, as reported by Jeff Chua: "This broke vmware 6.0.4. Jun 22 14:53:03.845: vmx| NOT_IMPLEMENTED /build/mts/release/bora-93057/bora/vmx/main/vmmonPosix.c:774" and the reason seems to be that there's an old bug in how we handle FOLL_ANON on VM_SHARED areas in get_user_pages(), but since it only triggered if the whole page table was missing, nobody had apparently hit it before. The recent changes to 'follow_page()' made the FOLL_ANON logic trigger not just for whole missing page tables, but for individual pages as well, and exposed this problem. This fixes it by making the test for when FOLL_ANON is used more careful, and also makes the code easier to read and understand by moving the logic to a separate inline function. Reported-and-tested-by:
Jeff Chua <jeff.chua.linux@gmail.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Jean Delvare authored
commit ed4ec814 upstream data->max_duty_at_overheat is not updated in adt7473_update_device, so it might be used before it is initialized (if the user reads from the sysfs file max_duty_at_crit before writing to it). Signed-off-by:
Jean Delvare <khali@linux-fr.org> Acked-by:
Darrick J. Wong <djwong@us.ibm.com> Signed-off-by:
Mark M. Hoffman <mhoffman@lightlink.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Jean Delvare authored
Function RANGE_TO_REG() is broken. For a requested range of 2000 (2 degrees C), it will return an index value of 15, i.e. 80.0 degrees C, instead of the expected index value of 0. All other values are handled properly, just 2000 isn't. The bug was introduced back in November 2004 by this patch: http://git.kernel.org/?p=linux/kernel/git/tglx/history.git;a=commit;h=1c28d80f1992240373099d863e4996cdd5d646d0 In Linus' kernel I decided to rewrite the whole function in a way which was more obviously correct. But for -stable let's just do the minimal fix. Signed-off-by:
Jean Delvare <khali@linux-fr.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
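A sketch of the kind of range-to-register-index lookup involved; the table values below are hypothetical rather than the chip's real ones, the point being that an exact match on the first entry (the 2000 case) must map to index 0 instead of falling through to the last index:

#include <stdio.h>

static const int range_map[] = { 2000, 2500, 4000, 5000, 8000, 10000, 13300, 16000 };
#define RANGE_ENTRIES (sizeof(range_map) / sizeof(range_map[0]))

static unsigned int range_to_reg(int range)
{
	unsigned int i;

	for (i = 0; i < RANGE_ENTRIES; i++)
		if (range <= range_map[i])	/* "<=", so an exact match stops here */
			return i;
	return RANGE_ENTRIES - 1;		/* clamp anything larger to the top entry */
}

int main(void)
{
	printf("2000  -> %u\n", range_to_reg(2000));	/* 0, not the last index */
	printf("3000  -> %u\n", range_to_reg(3000));	/* 2 */
	printf("99999 -> %u\n", range_to_reg(99999));	/* 7 */
	return 0;
}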
-
Linus Torvalds authored
commit 1f6ef234 upstream The inline assembly in drivers/watchdog/hpwdt.c was incredibly broken, and included all the function prologue and epilogue stuff, even though it was itself then inside a C function where the compiler would add its own prologue and epilogue on top of it all. This then just _happened_ to work if you had exactly the right compiler version and exactly the right compiler flags, so that gcc just happened to not create any prologue at all (the gcc-generated epilogue wouldn't matter, since it would never be reached). But the more proper way to fix it is to simply not do this. Move the inline asm to the top level, with no surrounding function at all (the better alternative would be to remove the prologue and make it actually use proper description of the arguments to the inline asm, but that's a bigger change than the one I'm willing to make right now). Tested-by:
S.Çağlar Onur <caglar@pardus.org.tr> Acked-by:
Thomas Mingarelli <Thomas.Mingarelli@hp.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Jeremy Fitzhardinge authored
commit ad524d46 upstream When a 64-bit x86 processor runs in 32-bit PAE mode, a pte can potentially have the same number of physical address bits as the 64-bit host ("Enhanced Legacy PAE Paging"). This means, in theory, we could have up to 52 bits of physical address in a pte. The 32-bit kernel uses a 32-bit unsigned long to represent a pfn. This means that it can only represent physical addresses up to 32+12=44 bits wide. Rather than widening pfns everywhere, just set 2^44 as the Linux x86_32-PAE architectural limit for physical address size. This is a bugfix for two cases: 1. running a 32-bit PAE kernel on a machine with more than 64GB RAM. 2. running a 32-bit PAE Xen guest on a host machine with more than 64GB RAM In both cases, a pte could need to have more than 36 bits of physical, and masking it to 36-bits will cause fairly severe havoc. Signed-off-by:
Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Jan Beulich <jbeulich@novell.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
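The arithmetic behind the commit message, as a tiny standalone program: a 32-bit pfn plus the 12-bit page offset covers 32 + 12 = 44 bits of physical address (16 TB), even though a PAE pte could in principle hold up to 52 bits:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const unsigned int pfn_bits = 32, page_shift = 12;
	uint64_t limit = 1ULL << (pfn_bits + page_shift);

	printf("x86_32 PAE physical address limit: 2^%u = %llu bytes (%llu GB)\n",
	       pfn_bits + page_shift,
	       (unsigned long long)limit,
	       (unsigned long long)(limit >> 30));
	return 0;
}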
-
Bernhard Walle authored
commit d3942cff upstream This patch uses BOOTMEM_EXCLUSIVE for the crashkernel reservation on i386 as well, and prints an error message on failure. The patch is still for 2.6.26 since it is only a bug fix. The unification of reserve_crashkernel() between i386 and x86_64 should be done for 2.6.27. Signed-off-by:
Bernhard Walle <bwalle@suse.de> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Bernhard Walle authored
commit 71c2742f upstream This patch changes the function reserve_bootmem_node() from void to int, returning -ENOMEM if the allocation fails. This fixes a build problem on x86 with CONFIG_KEXEC=y and CONFIG_NEED_MULTIPLE_NODES=y Signed-off-by:
Bernhard Walle <bwalle@suse.de> Reported-by:
Adrian Bunk <bunk@kernel.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
David S. Miller authored
commit 735ce972 upstream As noticed by Gabriel Campana, the kmalloc() length arg passed in by sctp_getsockopt_local_addrs_old() can overflow if ->addr_num is large enough. Therefore, enforce an appropriate limit. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
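A sketch of the general defence, not the sctp code itself: validate a user-supplied element count against an upper bound before multiplying it by the element size, so the length handed to the allocator cannot wrap around (the element type and cap below are hypothetical):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <errno.h>

struct addr_entry { unsigned char bytes[20]; };	/* hypothetical element */

#define MAX_USER_LEN 65536			/* hypothetical cap */

static void *alloc_addr_array(uint32_t addr_num)
{
	/* Reject counts whose product would exceed (or wrap past) the cap. */
	if (addr_num > MAX_USER_LEN / sizeof(struct addr_entry)) {
		errno = EINVAL;
		return NULL;
	}
	return malloc(addr_num * sizeof(struct addr_entry));
}

int main(void)
{
	void *a = alloc_addr_array(10);
	void *b = alloc_addr_array(0xFFFFFFFFu);

	printf("small count: %s\n", a ? "allocated" : "rejected");
	printf("huge count:  %s\n", b ? "allocated" : "rejected");
	free(a);
	free(b);
	return 0;
}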
-
Linus Torvalds authored
commit 89f5b7da upstream KAMEZAWA Hiroyuki and Oleg Nesterov point out that since the commit 557ed1fa ("remove ZERO_PAGE") removed the ZERO_PAGE from the VM mappings, any users of get_user_pages() will generally now populate the VM with real empty pages needlessly. We used to get the ZERO_PAGE when we did the "handle_mm_fault()", but since fault handling no longer uses ZERO_PAGE for new anonymous pages, we now need to handle that special case in follow_page() instead. In particular, the removal of ZERO_PAGE effectively removed the core file writing optimization where we would skip writing pages that had not been populated at all, and increased memory pressure a lot by allocating all those useless newly zeroed pages. This reinstates the optimization by making the unmapped PTE case the same as for a non-existent page table, which already did this correctly. While at it, this also fixes the XIP case for follow_page(), where the caller could not differentiate between the case of a page that simply could not be used (because it had no "struct page" associated with it) and a page that just wasn't mapped. We do that by simply returning an error pointer for pages that could not be turned into a "struct page *". The error is arbitrarily picked to be EFAULT, since that was what get_user_pages() already used for the equivalent IO-mapped page case. [ Also removed an impossible test for pte_offset_map_lock() failing: that's not how that function works ] Acked-by:
Oleg Nesterov <oleg@tv-sign.ru> Acked-by:
Nick Piggin <npiggin@suse.de> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Hugh Dickins <hugh@veritas.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Roland McGrath <roland@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
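The "error pointer" return convention this commit leans on can be re-created in userspace; the following is an illustration of the ERR_PTR/IS_ERR idiom, not the actual follow_page() code, showing how a lookup can distinguish "nothing mapped" (NULL, caller may fault it in) from "mapped but unusable" (an encoded -EFAULT):

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int page_data = 42;

/* 0 = not mapped, 1 = mapped normally, 2 = mapped but has no struct page */
static void *lookup_page(int state)
{
	if (state == 0)
		return NULL;			/* caller may fault it in */
	if (state == 2)
		return ERR_PTR(-EFAULT);	/* caller must give up on this page */
	return &page_data;
}

int main(void)
{
	int s;

	for (s = 0; s < 3; s++) {
		void *p = lookup_page(s);

		if (!p)
			printf("state %d: no page\n", s);
		else if (IS_ERR(p))
			printf("state %d: error %ld\n", s, PTR_ERR(p));
		else
			printf("state %d: page %p\n", s, p);
	}
	return 0;
}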
-
Radu Cristescu authored
upstream commit: 58c7821c The atl1 driver tries to determine the MAC address thusly:
- If an EEPROM exists, read the MAC address from EEPROM and validate it.
- If an EEPROM doesn't exist, try to read a MAC address from SPI flash.
- If that fails, try to read a MAC address directly from the MAC Station Address register.
- If that fails, assign a random MAC address provided by the kernel.
We now have a report of a system fitted with an EEPROM containing all zeros where we expect the MAC address to be, and we currently handle this as an error condition. Turns out, on this system the BIOS writes a valid MAC address to the NIC's MAC Station Address register, but we never try to read it because we return an error when we find the all-zeros address in EEPROM. This patch relaxes the error check and continues looking for a MAC address even if it finds an illegal one in EEPROM. http://ubuntuforums.org/showthread.php?t=562617 [jacliburn@bellsouth.net: backport to 2.6.25.7] Signed-off-by:
Radu Cristescu <advantis@gmx.net> Signed-off-by:
Jay Cliburn <jacliburn@bellsouth.net> Signed-off-by:
Jeff Garzik <jgarzik@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
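A sketch of the fallback logic described above, with a simplified validity test rather than the driver's actual code: an all-zeros or multicast address is rejected, and the probe keeps trying the next source instead of bailing out at the first bad one.

#include <stdio.h>
#include <string.h>

static int mac_is_valid(const unsigned char mac[6])
{
	static const unsigned char zero[6];

	if (!memcmp(mac, zero, 6))	/* all zeros */
		return 0;
	if (mac[0] & 0x01)		/* multicast/broadcast bit set */
		return 0;
	return 1;
}

int main(void)
{
	unsigned char eeprom_mac[6]  = { 0 };					/* bad */
	unsigned char station_mac[6] = { 0x00, 0x1b, 0x2c, 0x3d, 0x4e, 0x5f };	/* good */
	const unsigned char *chosen = NULL;

	if (mac_is_valid(eeprom_mac))
		chosen = eeprom_mac;
	else if (mac_is_valid(station_mac))	/* keep looking instead of erroring out */
		chosen = station_mac;

	if (chosen)
		printf("using %02x:%02x:%02x:%02x:%02x:%02x\n",
		       chosen[0], chosen[1], chosen[2], chosen[3], chosen[4], chosen[5]);
	else
		printf("no valid MAC found, fall back to a random one\n");
	return 0;
}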
-
- Jun 22, 2008
-
-
Greg Kroah-Hartman authored
-
Thomas Gleixner authored
back-ported from upstream commit e9623b35 by Vegard Nossum The previous revert of 0c07ee38 left out the mwait disable condition for AMD family 10H/11H CPUs. Andreas Herrman said: It depends on the CPU. For AMD CPUs that support MWAIT this is wrong. Family 0x10 and 0x11 CPUs will enter C1 on HLT. Powersavings then depend on a clock divisor and current Pstate of the core. If all cores of a processor are in halt state (C1) the processor can enter the C1E (C1 enhanced) state. If mwait is used this will never happen. Thus HLT saves more power than MWAIT here. It might be best to switch off the mwait flag for these AMD CPU families like it was introduced with commit f039b754 (x86: Don't use MWAIT on AMD Family 10) Re-add the AMD families 10H/11H check and disable the mwait usage for those. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Vegard Nossum <vegard.nossum@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
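A hedged userspace sketch (GCC-specific, x86 only) of how the displayed CPU family referred to above can be computed from CPUID leaf 1; the real kernel check also verifies that the vendor is AMD, which is omitted here:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx, family;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return 1;

	family = (eax >> 8) & 0xf;
	if (family == 0xf)			/* extended family field kicks in */
		family += (eax >> 20) & 0xff;

	if (family == 0x10 || family == 0x11)
		printf("family 0x%x: prefer HLT over MWAIT so C1E can be entered\n", family);
	else
		printf("family 0x%x\n", family);
	return 0;
}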
-
Ingo Molnar authored
Back-ported from upstream commit a738d897 by Vegard Nossum. Vegard Nossum reports:

| powertop shows between 200-400 wakeups/second with the description
| "<kernel IPI>: Rescheduling interrupts" when all processors have load (e.g.
| I need to run two busy-loops on my 2-CPU system for this to show up).
|
| The bisect resulted in this commit:
|
| commit 0c07ee38
| Date: Wed Jan 30 13:33:16 2008 +0100
|
| x86: use the correct cpuid method to detect MWAIT support for C states

Remove the functional effects of this patch and make mwait unconditional. A future patch will turn off mwait on specific CPUs where that causes power to be wasted. Bisected-by:
Vegard Nossum <vegard.nossum@gmail.com> Tested-by:
Vegard Nossum <vegard.nossum@gmail.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Vegard Nossum <vegard.nossum@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Patrick McHardy authored
netfilter: nf_conntrack_h323: fix memory leak in module initialization error path Upstream commit 8a548868 Properly free h323_buffer when helper registration fails. Signed-off-by:
Patrick McHardy <kaber@trash.net> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
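A generic sketch of the pattern being fixed, not the h323 helper code itself: anything allocated during initialization has to be released on the error path when a later registration step fails.

#include <stdio.h>
#include <stdlib.h>

static char *helper_buffer;

static int register_helper(void)
{
	return -1;			/* pretend registration fails */
}

static int helper_init(void)
{
	int ret;

	helper_buffer = malloc(4096);
	if (!helper_buffer)
		return -12;		/* ENOMEM-style failure */

	ret = register_helper();
	if (ret < 0)
		goto err_free;		/* the missing step: free on failure */

	return 0;

err_free:
	free(helper_buffer);
	helper_buffer = NULL;
	return ret;
}

int main(void)
{
	printf("init returned %d\n", helper_init());
	return 0;
}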
-
Patrick McHardy authored
netfilter: nf_conntrack_h323: fix module unload crash Upstream commit a56b8f81 The H.245 helper is not registered/unregistered, but assigned to connections manually from the Q.931 helper. This means on unload existing expectations and connections using the helper are not cleaned up, leading to the following oops on module unload:

CPU 0 Unable to handle kernel paging request at virtual address c00a6828, epc == 802224dc, ra == 801d4e7c
Oops[#1]:
Cpu 0
$ 0 : 00000000 00000000 00000004 c00a67f0
$ 4 : 802a5ad0 81657e00 00000000 00000000
$ 8 : 00000008 801461c8 00000000 80570050
$12 : 819b0280 819b04b0 00000006 00000000
$16 : 802a5a60 80000000 80b46000 80321010
$20 : 00000000 00000004 802a5ad0 00000001
$24 : 00000000 802257a8
$28 : 802a4000 802a59e8 00000004 801d4e7c
Hi : 0000000b
Lo : 00506320
epc : 802224dc ip_conntrack_help+0x38/0x74 Tainted: P
ra : 801d4e7c nf_iterate+0xbc/0x130
Status: 1000f403 KERNEL EXL IE
Cause : 00800008
BadVA : c00a6828
PrId : 00019374
Modules linked in: ip_nat_pptp ip_conntrack_pptp ath_pktlog wlan_acl wlan_wep wlan_tkip wlan_ccmp wlan_xauth ath_pci ath_dev ath_dfs ath_rate_atheros wlan ath_hal ip_nat_tftp ip_conntrack_tftp ip_nat_ftp ip_conntrack_ftp pppoe ppp_async ppp_deflate ppp_mppe pppox ppp_generic slhc
Process swapper (pid: 0, threadinfo=802a4000, task=802a6000)
Stack : 801e7d98 00000004 802a5a60 80000000 801d4e7c 801d4e7c 802a5ad0 00000004
        00000000 00000000 801e7d98 00000000 00000004 802a5ad0 00000000 00000010
        801e7d98 80b46000 802a5a60 80320000 80000000 801d4f8c 802a5b00 00000002
        80063834 00000000 80b46000 802a5a60 801e7d98 80000000 802ba854 00000000
        81a02180 80b7e260 81a021b0 819b0000 819b0000 80570056 00000000 00000001
        ...
Call Trace:
[<801e7d98>] ip_finish_output+0x0/0x23c
[<801d4e7c>] nf_iterate+0xbc/0x130
[<801d4e7c>] nf_iterate+0xbc/0x130
[<801e7d98>] ip_finish_output+0x0/0x23c
[<801e7d98>] ip_finish_output+0x0/0x23c
[<801d4f8c>] nf_hook_slow+0x9c/0x1a4

One way to fix this would be to split helper cleanup from the unregistration function and invoke it for the H.245 helper, but since ctnetlink needs to be able to find the helper for synchronization purposes, a better fix is to register it normally and make sure it is not assigned to connections during helper lookup. The missing l3num initialization is enough for this; this patch changes it to use AF_UNSPEC to make it more explicit though. Reported-by:
liannan <liannan@twsz.com> Signed-off-by:
Patrick McHardy <kaber@trash.net> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Patrick McHardy authored
netfilter: nf_conntrack: fix ctnetlink related crash in nf_nat_setup_info() Upstream commit ceeff754 When creation of a new conntrack entry in ctnetlink fails after having set up the NAT mappings, the conntrack has an extension area allocated that is not getting properly destroyed when freeing the conntrack again. This means the NAT extension is still in the bysource hash, causing a crash when walking over the hash chain the next time:

BUG: unable to handle kernel paging request at 00120fbd
IP: [<c03d394b>] nf_nat_setup_info+0x221/0x58a
*pde = 00000000
Oops: 0000 [#1] PREEMPT SMP
Pid: 2795, comm: conntrackd Not tainted (2.6.26-rc5 #1)
EIP: 0060:[<c03d394b>] EFLAGS: 00010206 CPU: 1
EIP is at nf_nat_setup_info+0x221/0x58a
EAX: 00120fbd EBX: 00120fbd ECX: 00000001 EDX: 00000000
ESI: 0000019e EDI: e853bbb4 EBP: e853bbc8 ESP: e853bb78
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process conntrackd (pid: 2795, ti=e853a000 task=f7de10f0 task.ti=e853a000)
Stack: 00000000 e853bc2c e85672ec 00000008 c0561084 63c1db4a 00000000 00000000
       00000000 0002e109 61d2b1c3 00000000 00000000 00000000 01114e22 61d2b1c3
       00000000 00000000 f7444674 e853bc04 00000008 c038e728 0000000a f7444674
Call Trace:
[<c038e728>] nla_parse+0x5c/0xb0
[<c0397c1b>] ctnetlink_change_status+0x190/0x1c6
[<c0397eec>] ctnetlink_new_conntrack+0x189/0x61f
[<c0119aee>] update_curr+0x3d/0x52
[<c03902d1>] nfnetlink_rcv_msg+0xc1/0xd8
[<c0390228>] nfnetlink_rcv_msg+0x18/0xd8
[<c0390210>] nfnetlink_rcv_msg+0x0/0xd8
[<c038d2ce>] netlink_rcv_skb+0x2d/0x71
[<c0390205>] nfnetlink_rcv+0x19/0x24
[<c038d0f5>] netlink_unicast+0x1b3/0x216
...

Move invocation of the extension destructors to nf_conntrack_free() to fix this problem. Fixes http://bugzilla.kernel.org/show_bug.cgi?id=10875 Reported-and-Tested-by:
Krzysztof Piotr Oledzki <ole@ans.pl> Signed-off-by:
Patrick McHardy <kaber@trash.net> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
James Bottomley authored
commit: d1daeabf upstream Reported-by:
Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> If you delay 30s or more before mounting a CD after inserting it then the kernel has the wrong value for the CD size. http://marc.info/?t=121276133000001 The problem is in sr_test_unit_ready(): the function eats unit attentions without adjusting the sdev->changed status. This means that when the CD signals changed media via unit attention, we can ignore it. Fix by making sr_test_unit_ready() adjust the changed status. Reported-by:
Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Tested-by:
Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Signed-off-by:
James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Bob Moore authored
upstream bc45b1d3 Without this patch, booting with acpi_osi="!Windows 2006" is required for several machines to function properly with cpufreq, due to failure to load a Vista-specific table with a bad signature. Only "SSDT" is acceptable to the ACPI spec, but tables are seen with OEMx and null sigs. Therefore, signature validation is worthless. Apparently MS ACPI accepts such signatures; ACPICA must be compatible. http://bugzilla.kernel.org/show_bug.cgi?id=9919 http://bugzilla.kernel.org/show_bug.cgi?id=10383 http://bugzilla.kernel.org/show_bug.cgi?id=10454 https://bugzilla.novell.com/show_bug.cgi?id=396311 Signed-off-by:
Bob Moore <robert.moore@intel.com> Signed-off-by:
Lin Ming <ming.m.lin@intel.com> Signed-off-by:
Len Brown <len.brown@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Mike Christie authored
The patch is upstream as commit 3ed78972. A different backport is necessary because of the class_device to device conversion post 2.6.25.

commit 9c770108
Author: Dave Young <hidave.darkstar@gmail.com>
Date: Tue Jan 22 14:01:34 2008 +0800

    scsi: use class iteration api

isn't a correct replacement for the original hand-rolled host lookup. The problem is that class_find_child would get a reference to the host's class device which is never released. Since the host class device holds a reference to the host gendev, the host can never be freed. In 2.6.25 we started using class_find_device, and this function also gets a reference to the device, so we end up with an extra ref and the host will not get released. This patch adds a class_put_device to balance the class_find_device() get. I kept the scsi_host_get in scsi_host_lookup, because the target layer is using scsi_host_lookup and it looks like it needs the SHOST_DEL check. Signed-off-by:
Mike Christie <michaelc@cs.wisc.edu> Signed-off-by:
James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Michael Buesch authored
a cut-down version of commit 028118a5 upstream This fixes a possible NULL pointer dereference in an error path of the DMA allocation error checking code. In case the DMA allocation address is invalid, the dev pointer is dereferenced for unmapping of the buffer. Reported-by:
Miles Lane <miles.lane@gmail.com> Signed-off-by:
Michael Buesch <mb@bu3sch.de> Signed-off-by:
John W. Linville <linville@tuxdriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Michael Buesch authored
commit 98a3b2fe upstream. This removes a WARN_ON that is responsible for the following koops: http://www.kerneloops.org/searchweek.php?search=b43_generate_noise_sample The comment in the patch describes why it's safe to simply remove the check. Signed-off-by:
Michael Buesch <mb@bu3sch.de> Signed-off-by:
John W. Linville <linville@tuxdriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Mark McLoughlin authored
commit 23cde76d upstream. hdr->csum_start is the offset from the start of the ethernet header to the transport layer checksum field. skb->csum_start is the offset from skb->head. skb_partial_csum_set() assumes that skb->data points to the ethernet header - i.e. it computes skb->csum_start by adding the headroom to hdr->csum_start. Since eth_type_trans() skb_pull()s the ethernet header, skb_partial_csum_set() should be called before eth_type_trans(). (Without this patch, GSO packets from a guest to the world outside the host are corrupted). Signed-off-by:
Mark McLoughlin <markmc@redhat.com> Acked-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Rusty Russell <rusty@rustcorp.com.au> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
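The arithmetic behind the ordering bug, with illustrative numbers rather than values taken from the driver: the device header's csum_start is an offset from the start of the Ethernet header, and converting it to a buffer-absolute offset assumes the data pointer still sits on that header; once 14 bytes have been pulled, the same conversion lands 14 bytes too far.

#include <stdio.h>

#define ETH_HLEN 14

int main(void)
{
	unsigned int headroom       = 64;	/* data offset from head before any pull */
	unsigned int hdr_csum_start = 34;	/* Ethernet + IPv4 header, from the vnet header */

	/* Correct: convert while the data pointer is still on the Ethernet header. */
	unsigned int csum_start_ok  = headroom + hdr_csum_start;

	/* Wrong: convert after ETH_HLEN bytes have already been pulled. */
	unsigned int csum_start_bad = (headroom + ETH_HLEN) + hdr_csum_start;

	printf("correct csum_start = %u\n", csum_start_ok);
	printf("late    csum_start = %u (off by %u)\n",
	       csum_start_bad, csum_start_bad - csum_start_ok);
	return 0;
}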
-
Bartlomiej Zolnierkiewicz authored
commit f3610376 upstream These controllers don't support DMA. Based on a bugreport from Juergen Kosel & inspired by pata_opti.c code. Tested-by:
Juergen Kosel <juergen.kosel@gmx.de> Acked-by:
Sergei Shtylyov <sshtylyov@ru.mvista.com> Signed-off-by:
Bartlomiej Zolnierkiewicz <bzolnier@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Bartlomiej Zolnierkiewicz authored
commit 62128b2c upstream This fixes a 2.6.25 regression (kernel.org bugzilla bug #10723) caused by:

commit 912fb29a
Author: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Date: Fri Oct 19 00:30:11 2007 +0200

    opti621: always tune PIO

... Based on a bugreport from Juergen Kosel & inspired by pata_opti.c code. Bisected-by:
Juergen Kosel <juergen.kosel@gmx.de> Tested-by:
Juergen Kosel <juergen.kosel@gmx.de> Signed-off-by:
Bartlomiej Zolnierkiewicz <bzolnier@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Alan Cox authored
commit e991a2bd upstream. We try to write the correct speed back, but the serial midlayer already mangles the speed on us, which means that if we request B0 we report back B9600 when we should not. For now we'll hack around this in the drivers and serial code, pending a better long-term solution. Signed-off-by:
Alan Cox <alan@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
-
Linus Torvalds authored
commit 42a886af upstream Most users by far do not care about the exact return value (they only really care about whether the copy succeeded in its entirety or not), but a few special core routines actually care deeply about exactly how many bytes were copied from user space. And the unrolled versions of the x86-64 user copy routines would sometimes report that it had copied more bytes than it actually had. Very few uses actually have partial copies to begin with, but to make this bug even harder to trigger, most x86 CPU's use the "rep string" instructions for normal user copies, and that version didn't have this issue. To make it even harder to hit, the one user of this that really cared about the return value (and used the uncached version of the copy that doesn't use the "rep string" instructions) was the generic write routine, which pre-populated its source, once more hiding the problem by avoiding the exception case that triggers the bug. In other words, very special thanks to Bron Gondwana who not only triggered this, but created a test-program to show it, and bisected the behavior down to commit 08291429 ("mm: fix pagecache write deadlocks") which changed the access pattern just enough that you can now trigger it with 'writev()' with multiple iovec's. That commit itself was not the cause of the bug, it just allowed all the stars to align just right that you could trigger the problem. [ Side note: this is just the minimal fix to make the copy routines (with __copy_from_user_inatomic_nocache as the particular version that was involved in showing this) have the right return values. We really should improve on the exceptional case further - to make the copy do a byte-accurate copy up to the exact page limit that causes it to fail. As it is, the callers have to do extra work to handle the limit case gracefully. ] Reported-by:
Bron Gondwana <brong@fastmail.fm> Cc: Nick Piggin <npiggin@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andi Kleen <andi@firstfloor.org> Cc: Al Viro <viro@ZenIV.linux.org.uk> Acked-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
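A userspace model of the contract this commit is about, a toy stand-in rather than the x86-64 assembly being fixed: the copy routine returns the number of bytes it failed to copy (0 on full success), and callers doing partial writes rely on that number being exact.

#include <stdio.h>
#include <string.h>

/* Copy up to n bytes, but pretend a fault stops us after fault_at bytes. */
static size_t copy_from_user_model(char *dst, const char *src, size_t n,
				   size_t fault_at)
{
	size_t copied = n < fault_at ? n : fault_at;

	memcpy(dst, src, copied);
	return n - copied;		/* bytes NOT copied; must be exact */
}

int main(void)
{
	const char src[32] = "0123456789abcdef";
	char dst[32] = { 0 };
	size_t left = copy_from_user_model(dst, src, 16, 10);

	/* A caller like the generic write path only trusts the copied prefix. */
	printf("copied %zu of 16 bytes: \"%.*s\"\n",
	       16 - left, (int)(16 - left), dst);
	return 0;
}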
-
- Jun 16, 2008
-
-
Greg Kroah-Hartman authored
-