- Oct 05, 2009
-
-
Greg Kroah-Hartman authored
-
Lee Schermerhorn authored
commit 252c5f94 upstream.

We noticed very erratic behavior [throughput] with the AIM7 shared workload running on recent distro [SLES11] and mainline kernels on an 8-socket, 32-core, 256GB x86_64 platform. On the SLES11 kernel [2.6.27.19+] with Barcelona processors, as we increased the load [10s of thousands of tasks], the throughput would vary between two "plateaus"--one at ~65K jobs per minute and one at ~130K jpm. The simple patch below causes the results to smooth out at the ~130K plateau.

But wait, there's more: We do not see this behavior on smaller platforms--e.g., 4 socket/8 core. This could be the result of the larger number of cpus on the larger platform--a scalability issue--or it could be the result of the larger number of interconnect "hops" between some nodes in this platform and how the tasks for a given load end up distributed over the nodes' cpus and memories--a stochastic NUMA effect. The variability in the results is less pronounced [on the same platform] with Shanghai processors and with mainline kernels. With 31-rc6 on Shanghai processors and 288 file systems on 288 fibre attached storage volumes, the curves [jpm vs load] are both quite flat, with the patched kernel consistently producing ~3.9% better throughput [~80K jpm vs ~77K jpm] than the unpatched kernel.

Profiling indicated that the "slow" runs were incurring high[er] contention on an anon_vma lock in vma_adjust(), apparently called from the sbrk() system call.

The patch: A comment in mm/mmap.c:vma_adjust() suggests that we don't really need the anon_vma lock when we're only adjusting the end of a vma, as is the case for brk(). The comment questions whether it's worthwhile to optimize for this case. Apparently, on the newer, larger x86_64 platforms, with interesting NUMA topologies, it is worthwhile--especially considering that the patch [if correct!] is quite simple.

We can detect this condition--no overlap with the next vma--by noting a NULL "importer". The anon_vma pointer will also be NULL in this case, so simply avoid loading vma->anon_vma to avoid the lock. However, we DO need to take the anon_vma lock when we're inserting a vma ['insert' non-NULL] even when we have no overlap [NULL "importer"], so we need to check for 'insert' as well. And Hugh points out that we should also take it when adjusting vm_start (so that rmap.c can rely upon vma_address() while it holds the anon_vma lock). See the sketch after the sign-offs.

akpm: Zhang Yanmin reports a 150% throughput improvement with aim7, so it might be -stable material even though this isn't a regression: "this issue is not clear on dual socket Nehalem machine (2*4*2 cpu), but is severe on large machine (4*8*2 cpu)"

[hugh.dickins@tiscali.co.uk: test vma start too]

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Eric Whitney <eric.whitney@hp.com>
Tested-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
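A minimal sketch of the locking condition described above, modeled on the 2.6-era vma_adjust() (variable names follow that code; this is an illustration, not the verbatim patch):

    struct anon_vma *anon_vma = NULL;

    /*
     * Only take the anon_vma lock when something other than the end of
     * the vma is changing: an insert, an importer (overlap with the next
     * vma), or a new vm_start (rmap relies on vma_address() being stable
     * under this lock).  For a plain brk() extension all three are
     * false, so vma->anon_vma is never even loaded.
     */
    if (vma->anon_vma && (insert || importer || start != vma->vm_start))
            anon_vma = vma->anon_vma;
    if (anon_vma)
            spin_lock(&anon_vma->lock);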
-
Hugh Dickins authored
commit 1ac0cb5d upstream.

do_anonymous_page() has been wrong to dirty the pte regardless. If it's not going to mark the pte writable, then it won't help to mark it dirty here, and clogs up memory with pages which will need swap instead of being thrown away. Especially wrong if no overcommit is chosen, and this vma is not yet VM_ACCOUNTed - we could exceed the limit and OOM despite no overcommit.

Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
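A sketch of the change in do_anonymous_page() as described above (assumed shape, following the 2.6-era anonymous fault path):

    entry = mk_pte(page, vma->vm_page_prot);
    /*
     * Only dirty the pte if it is also being made writable: a clean,
     * read-only anonymous page can simply be discarded under memory
     * pressure instead of being written out to swap.
     */
    if (vma->vm_flags & VM_WRITE)
            entry = pte_mkwrite(pte_mkdirty(entry));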
-
Lee Schermerhorn authored
Not upstream as it is fixed differently in .32

I noticed that alloc_bootmem_huge_page() will only advance to the next node on failure to allocate a huge page. I asked about this on linux-mm and linux-numa, cc'ing the usual huge page suspects.

Mel Gorman responded: I strongly suspect that the same node being used until allocation failure instead of round-robin is an oversight and not deliberate at all. It appears to be a side-effect of a fix made way back in commit 63b4613c ["hugetlb: fix hugepage allocation with memoryless nodes"]. Prior to that patch it looked like allocations would always round-robin even when allocation was successful.

Andy Whitcroft countered that the existing behavior looked like Andi Kleen's original implementation and suggested that we ask him. We did, and Andi replied that his intention was to interleave the allocations. So, ...

This patch moves the advance of the hstate next node from which to allocate up before the test for success of the attempted allocation, as sketched below. This will unconditionally advance the next node from which to alloc, interleaving successful allocations over the nodes with sufficient contiguous memory, and skipping over nodes that fail the huge page allocation attempt. Note that alloc_bootmem_huge_page() will only be called for huge pages of order > MAX_ORDER.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: David Rientjes <rientjes@google.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Eric Whitney <eric.whitney@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
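A rough sketch of the reordering, loosely based on the 2.6.31 hugetlb code (field and helper names approximate):

    void *addr;

    while (nr_nodes--) {
            addr = __alloc_bootmem_node_nopanic(
                            NODE_DATA(h->hugetlb_next_nid),
                            huge_page_size(h), huge_page_size(h), 0);

            /*
             * Advance the next-node pointer whether or not the
             * allocation succeeded, so successful allocations are
             * interleaved across nodes instead of hammering one node
             * until it runs out of contiguous memory.
             */
            hstate_next_node(h);

            if (addr)
                    break;  /* bootmem chunk found on this node */
    }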
-
Patrick McHardy authored
netfilter: bridge: refcount fix

Upstream commit f3abc9b9: commit f216f082 ([NETFILTER]: bridge netfilter: deal with martians correctly) added a refcount leak on in_dev. Instead of using in_dev_get(), we can use __in_dev_get_rcu(), as netfilter hooks are running under rcu_read_lock(), as pointed out by Patrick.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
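A sketch of the difference between the two accessors in this context (illustrative; the netfilter hook path already holds rcu_read_lock()):

    struct in_device *in_dev;

    /*
     * Before: in_dev_get() takes a reference that this path never
     * dropped with in_dev_put(), leaking the refcount.
     */
    in_dev = in_dev_get(dev);

    /*
     * After: __in_dev_get_rcu() takes no reference, so there is nothing
     * to leak; it is safe here because netfilter hooks run under
     * rcu_read_lock().
     */
    in_dev = __in_dev_get_rcu(dev);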
-
Arjan van de Ven authored
fixed upstream in commit b7058842 in a different way

The length of the to-copy data structure is currently stored in a signed integer. However, many comparisons are done with sizeof(..), which is unsigned. It's more suitable for this variable to be unsigned, making those comparisons naturally correct.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Arjan van de Ven authored
fixed upstream in commit b7058842 in a different way

The ax25 code tried to use

    if (optlen < sizeof(int))
            return -EINVAL;

as a security check against optlen being negative (or zero) in the set socket option. Unfortunately, "sizeof(int)" is an unsigned property, with the result that the whole comparison is done in unsigned, letting negative values slip through. This patch changes this to

    if (optlen < (int)sizeof(int))
            return -EINVAL;

so that the comparison is done as signed, and negative values get properly caught.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
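A self-contained userspace demonstration of the pitfall (not kernel code; the variable name mirrors the commit text):

    #include <stdio.h>

    int main(void)
    {
            int optlen = -1;

            /*
             * sizeof yields size_t, so optlen is converted to unsigned:
             * (size_t)-1 is enormous, and the check fails to reject it.
             */
            if (optlen < sizeof(int))
                    printf("unsigned compare: caught\n");
            else
                    printf("unsigned compare: negative value slipped through\n");

            /* casting sizeof to int keeps the comparison signed */
            if (optlen < (int)sizeof(int))
                    printf("signed compare: caught\n");

            return 0;
    }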
-
Tilman Schmidt authored
bas_gigaset: correctly allocate USB interrupt transfer buffer

[ Upstream commit 170ebf85 ]

An incorrect backport to 2.6.28.10 placed some code into the probe function which used a pointer before it was initialized. Move this to the correct place (as it is upstream).

Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Tim Gardner <tim.gardner@canonical.com>
Acked-by: Steve Conklin <steve.conklin@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Cord Walter authored
commit a9d3a146 upstream.

Signed-off-by: Cord Walter <qord@cwalter.net>
Signed-off-by: Komuro <komurojun-mbn@nifty.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Baruch Siach authored
commit 22692018 upstream.

The enc28j60 driver doesn't check whether the length of the packet as reported by the hardware fits into the preallocated buffer. When stressed, the hardware may report insanely large packets even though the "Receive OK" bit is set. Fix this.

Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
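A sketch of the kind of bound check the fix adds in the receive path (the helper and field names here are assumptions, not the driver's exact code):

    len = enc28j60_read_rx_length(priv);    /* hypothetical helper */

    /*
     * The chip can report absurd lengths under stress even with
     * "Receive OK" set; drop the frame instead of overrunning the
     * preallocated buffer.
     */
    if (len > MAX_FRAMELEN) {
            ndev->stats.rx_length_errors++;
            goto drop_frame;
    }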
-
Christian Lamparter authored
commit f7f71173 upstream.

This patch adds a new usbid for Zcomax XG-705A to the device table.

Reported-by: Jari Jaakola <jari.jaakola@gmail.com>
Signed-off-by: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
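For context, USB drivers match hardware through a usb_device_id table, so supporting a new device is usually a one-line table entry. The vendor/product numbers below are placeholders; the real XG-705A IDs are in the upstream commit:

    static struct usb_device_id p54u_table[] = {
            /* ... existing entries ... */
            {USB_DEVICE(0x1234, 0x5678)},   /* Zcomax XG-705A (placeholder IDs) */
            {}                              /* terminating entry */
    };
    MODULE_DEVICE_TABLE(usb, p54u_table);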
-
Jan Kara authored
commit 580be083 upstream.

In theory it could happen that on one CPU we initialize a new inode but the clearing of I_NEW | I_LOCK gets reordered before some of the initialization. Thus on another CPU we return a not fully uptodate inode from iget_locked(). This seems to fix a corruption issue on ext3 mounted over NFS.

[akpm@linux-foundation.org: add some commentary]

Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
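A sketch of the barrier placement in unlock_new_inode() (assumed shape; the point is that a full memory barrier orders the initialization stores before the flag clearing):

    void unlock_new_inode(struct inode *inode)
    {
            /*
             * Make every store that initialized the inode visible to
             * other CPUs before they can observe I_NEW | I_LOCK cleared;
             * otherwise iget_locked() on another CPU could hand back a
             * half-initialized inode.
             */
            smp_mb();
            inode->i_state &= ~(I_LOCK | I_NEW);
            wake_up_inode(inode);
    }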
-
- Sep 24, 2009
-
-
Greg Kroah-Hartman authored
-
Wei Yongjun authored
commit a0d24b29 upstream.

nfsd: fix hang of nfs client while doing sync writes to the nfs server

Commit 'Short write in nfsd becomes a full write to the client' (31dec253) broke sync writes. Reproduce with the following commands:

    $ mount -t nfs -o sync 192.168.0.21:/nfsroot /mnt
    $ cd /mnt
    $ echo aaaa > temp.txt

The nfs client then hangs. In SYNC mode the server always returns a write count of 0 to the client. This is because the value of host_err in nfsd_vfs_write() is overwritten in SYNC mode by 'host_err=nfsd_sync(file);', and then we return host_err (which is now 0) as the write count. This patch fixes the problem.

Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
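A simplified sketch of the fix in nfsd_vfs_write() (variable names follow the commit text; the surrounding logic is abridged):

    host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &offset);
    if (host_err < 0)
            goto out_nfserr;
    *cnt = host_err;        /* save the byte count before anything can clobber it */

    if (stable && !EX_WGATHER(exp)) {
            /* use a separate variable so the write count survives the sync */
            int err2 = nfsd_sync(file);
            if (err2 < 0)
                    host_err = err2;
    }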
-
David Shaw authored
commit 31dec253 upstream.

Short write in nfsd becomes a full write to the client

If a filesystem being written to via NFS returns a short write count (as opposed to an error) to nfsd, nfsd treats that as a success for the entire write, rather than the short count that actually succeeded. For example, given an 8192 byte write, if the underlying filesystem only writes 4096 bytes, nfsd will ack back to the nfs client that all 8192 bytes were written. The nfs client does have retry logic for short writes, but this is never called as the client is told the complete write succeeded. There are probably other ways it could happen, but in my case it happened with a fuse (filesystem in userspace) filesystem, which can rather easily have a partial write.

Here is a patch to properly return the short write count to the client.

Signed-off-by: David Shaw <dshaw@jabberwocky.com>
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Clemens Ladisch authored
commit f1bc07af upstream.

When the volume is changed continuously (e.g., when the user drags a volume slider with the mouse), the driver does lots of I2C writes. Apparently, the sound chip can get confused when we poll the I2C status register too much, and fails to complete a read from it. On the PCI-E models, the PCI-E/PCI bridge gets upset by this and generates a machine check exception. To avoid this, this patch replaces the polling with an unconditional wait that is guaranteed to be long enough.

Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Tested-by: Johann Messner <johann.messner at jku.at>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
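A sketch of the before/after pattern being described (register accessors, bit names, and the timing value are illustrative assumptions, not the exact driver code):

    /*
     * Before: poll the I2C status register until the transaction
     * completes - this is the polling that confused the chip.
     */
    while (oxygen_read16(chip, OXYGEN_2WIRE_BUS_STATUS) & OXYGEN_2WIRE_BUSY)
            udelay(10);

    /*
     * After: wait unconditionally for longer than any single I2C
     * transaction can take, and never touch the status register.
     */
    msleep(1);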
-
Brian King authored
commit 46db2f86 upstream.

The SLB can change sizes across a live migration, which was not being handled, resulting in possible machine crashes during migration if migrating to a machine which has a smaller max SLB size than the source machine. Fix this by first reducing the SLB size to the minimum possible value, which is 32, prior to migration. Then during the device tree update which occurs after migration, we make the call to ensure the SLB gets updated. Also add the slb_size to the lparcfg output so that the migration tools can check to make sure the kernel has this capability before allowing migration in scenarios where the SLB size will change.

BenH: Fixed #include <asm/mmu-hash64.h> -> <asm/mmu.h> to avoid breaking ppc32 build

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Tejun Heo authored
commit ac8672ea upstream.

ata_tf_read_block() has an off-by-one error when converting a CHS address to LBA. The bug isn't very visible because ata_tf_read_block() is used only when generating sense data for a failed RW command, and CHS addressing isn't used too often these days. This problem was spotted by Atsushi Nemoto.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
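For reference, CHS sector numbers are 1-based while LBA is 0-based, which is the classic source of this off-by-one. A sketch of the corrected arithmetic:

    /*
     * LBA = (cylinder * heads + head) * sectors_per_track + (sector - 1);
     * the trailing -1 accounts for CHS sectors starting at 1, not 0.
     */
    block = (cyl * dev->heads + head) * dev->sectors + sect - 1;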
-
Sophie Hamilton authored
commit 6148b130 upstream.

Fix the minimum period size for cs46xx cards. This fixes a problem in the case where neither a period size nor a buffer size is passed to ALSA; this is the case in Audacious, OpenAL, and others.

Signed-off-by: Sophie Hamilton <kernel@theblob.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Jan Kara authored
commit 24a5d59f upstream.

Some drives report 0 as the number of written blocks when there are some blocks recorded. Use the device size in such a case so that we can automagically mount such media.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Jason Gunthorpe authored
commit ec579358 upstream.

When probing the device in tpm_tis_init, the call to request_locality uses timeout_a, which wasn't being initialized until after request_locality. This results in request_locality falsely timing out if the chip is still starting. Move the initialization to before request_locality. This probably only matters for embedded cases (i.e. mine); a BIOS likely gets the TPM into a state where this code path isn't necessary.

Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Acked-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
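A sketch of the reordering in tpm_tis_init() (constant names follow the tpm_tis driver of that era; treat the exact values as assumptions):

    /*
     * Set the default timeouts *before* the first request_locality(),
     * which waits using timeout_a; with timeout_a still zero, the
     * request times out immediately while the chip is still starting.
     */
    chip->vendor.timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
    chip->vendor.timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT);
    chip->vendor.timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
    chip->vendor.timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT);

    if (request_locality(chip, 0) != 0) {
            rc = -ENODEV;
            goto out_err;
    }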
-
Geoff Levand authored
commit bc00351e upstream.

A workaround for flash memory I/O errors when the PS3 internal hard disk has not been formatted for OtherOS use. This error condition mainly affects 'Live CD' users who have not formatted the PS3's internal hard disk for OtherOS.

Fixes errors similar to these when using the ps3-flash-util or ps3-boot-game-os programs:

    ps3flash read failed 0x2050000
    os_area_header_read: read error: os_area_header: Input/output error
    main:627: os_area_read_hp error.
    ERROR: can't change boot flag

Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Roland McGrath authored
commit 9f0ab4a3 upstream.

In fs/binfmt_elf.c, load_elf_interp() calls padzero() for .bss even if the PT_LOAD has no PROT_WRITE and no .bss. This generates EFAULT. Here is a small test case. (Yes, there are other, useful PT_INTERPs which have only .text and no .data/.bss.)

    ----- ptinterp.S
    _start: .globl _start
            nop
            int3
    -----

    $ gcc -m32 -nostartfiles -nostdlib -o ptinterp ptinterp.S
    $ gcc -m32 -Wl,--dynamic-linker=ptinterp -o hello hello.c
    $ ./hello
    Segmentation fault          # during execve() itself

    After applying the patch:

    $ ./hello
    Trace trap                  # user-mode execution after execve() finishes

If the ELF headers are actually self-inconsistent, then dying is fine. But having no PROT_WRITE segment is perfectly normal and correct if there is no segment with p_memsz > p_filesz (i.e. bss). John Reiser suggested checking for PROT_WRITE in the bss logic. I think it makes most sense to simply apply the bss logic only when there is bss.

This patch looks less trivial than it is due to some reindentation. It just moves the "if (last_bss > elf_bss) {" test up to include the partial-page bss logic as well as the more-pages bss logic.

Reported-by: John Reiser <jreiser@bitwagon.com>
Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
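An abridged sketch of the restructured bss handling in load_elf_interp(), in the shape the commit describes:

    /*
     * Only touch the bss region when the segment actually has one
     * (p_memsz > p_filesz, i.e. last_bss > elf_bss); a read-only
     * PT_LOAD with no bss must not be written to via padzero().
     */
    if (last_bss > elf_bss) {
            /* zero the tail of the last file-backed page */
            if (padzero(elf_bss)) {
                    error = -EFAULT;
                    goto out_close;
            }

            /* map anonymous zero pages for the rest of the bss */
            elf_bss = ELF_PAGESTART(elf_bss + ELF_MIN_ALIGN - 1);
            down_write(&current->mm->mmap_sem);
            error = do_brk(elf_bss, last_bss - elf_bss);
            up_write(&current->mm->mmap_sem);
            if (BAD_ADDR(error))
                    goto out_close;
    }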
-
- Sep 15, 2009
-
-
Greg Kroah-Hartman authored
-
Eric Dumazet authored
commit d76b1590 upstream.

kmem_cache_destroy() should call rcu_barrier() *after* kmem_cache_close() and *before* sysfs_slab_remove(), or risk rcu_free_slab() being called after the kmem_cache is deleted (kfreed). rmmod nf_conntrack can crash the machine because it has to kmem_cache_destroy() a SLAB_DESTROY_BY_RCU enabled cache.

Reported-by: Zdenek Kabelac <zdenek.kabelac@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
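A simplified sketch of the corrected ordering in SLUB's kmem_cache_destroy():

    void kmem_cache_destroy(struct kmem_cache *s)
    {
            down_write(&slub_lock);
            s->refcount--;
            if (!s->refcount) {
                    list_del(&s->list);
                    up_write(&slub_lock);
                    kmem_cache_close(s);
                    /*
                     * Wait for in-flight rcu_free_slab() callbacks to
                     * finish before the sysfs teardown path can kfree
                     * the kmem_cache itself.
                     */
                    if (s->flags & SLAB_DESTROY_BY_RCU)
                            rcu_barrier();
                    sysfs_slab_remove(s);
            } else
                    up_write(&slub_lock);
    }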
-
Massimo Cirillo authored
commit bc8cec0d upstream.

The function jffs2_nor_wbuf_flash_setup() doesn't allocate the verify buffer when CONFIG_JFFS2_FS_WBUF_VERIFY is defined, causing a kernel panic when that macro is enabled and the verify function is called. Similarly, jffs2_nor_wbuf_flash_cleanup() must free the buffer when CONFIG_JFFS2_FS_WBUF_VERIFY is enabled. The following patch (against the 2.6.30 kernel) fixes the problem.

Signed-off-by: Massimo Cirillo <maxcir@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
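A sketch of the setup/cleanup pair the fix implies (the buffer field name follows the jffs2 wbuf code; treat the details as assumptions):

    int jffs2_nor_wbuf_flash_setup(struct jffs2_sb_info *c)
    {
            c->wbuf = kmalloc(c->wbuf_pagesize, GFP_KERNEL);
            if (!c->wbuf)
                    return -ENOMEM;

    #ifdef CONFIG_JFFS2_FS_WBUF_VERIFY
            /*
             * The verify path reads data back through this buffer;
             * leaving it unallocated is what caused the panic.
             */
            c->wbuf_verify = kmalloc(c->wbuf_pagesize, GFP_KERNEL);
            if (!c->wbuf_verify) {
                    kfree(c->wbuf);
                    return -ENOMEM;
            }
    #endif
            return 0;
    }

    void jffs2_nor_wbuf_flash_cleanup(struct jffs2_sb_info *c)
    {
    #ifdef CONFIG_JFFS2_FS_WBUF_VERIFY
            kfree(c->wbuf_verify);
    #endif
            kfree(c->wbuf);
    }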
-
Eric Dumazet authored
[ Upstream commit 14458630 ]

memcpy() should take into account the size of the pointers, not only the number of pointers to copy.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
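The generic pattern behind this class of bug (illustrative; not the exact call site of the upstream commit):

    /* buggy: copies n *bytes*, truncating the table of pointers */
    memcpy(new_tab, old_tab, n);

    /* fixed: memcpy's length argument is in bytes, so scale by the element size */
    memcpy(new_tab, old_tab, n * sizeof(*old_tab));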
-
Krzysztof Hałasa authored
[ Upstream commit 6ff9c2e7 ]

E100 places its RX packet descriptors inside skb->data and uses them with a bidirectional streaming DMA mapping. Data in the descriptors is accessed simultaneously by the chip (writing status and size when a packet is received) and the CPU (reading to check if the packet was received). This isn't a valid usage of the PCI DMA API, which requires the use of coherent (consistent) memory for such a purpose. Unfortunately, e100 chips working in "simplified" RX mode have to store received data directly after the descriptor. Fixing the driver to conform to the API would require using the unsupported "flexible" RX mode, or receiving data into coherent memory and using the CPU to copy it to network buffers. This patch, while not yet making the driver conform to the PCI DMA API, allows it to work correctly on X86 with swiotlb (while not breaking other architectures).

Signed-off-by: Krzysztof Hałasa <khc@pm.waw.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
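For readers unfamiliar with the streaming-DMA handshake being described, the general pattern looks like this (illustrative only; the status-flag name is hypothetical and this is not the e100 patch itself):

    /* take ownership back from the device before reading what the chip wrote */
    pci_dma_sync_single_for_cpu(pdev, rx->dma_addr, sizeof(struct rfd),
                                PCI_DMA_BIDIRECTIONAL);
    rfd_status = le16_to_cpu(rx->rfd.status);

    if (!(rfd_status & RFD_STATUS_COMPLETE)) {
            /* not received yet: hand the descriptor back to the device */
            pci_dma_sync_single_for_device(pdev, rx->dma_addr,
                                           sizeof(struct rfd),
                                           PCI_DMA_BIDIRECTIONAL);
            return -ENODATA;
    }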
-
- Sep 09, 2009
-
-
Greg Kroah-Hartman authored
-
Greg Kroah-Hartman authored
Somehow a previous patch did not get committed correctly. This fixes the build. Thanks to Jayson King, Michael Tokarev, Joel Becker, and Chuck Ebbert for pointing out the problem, and the solution.

Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- Sep 08, 2009
-
-
Greg Kroah-Hartman authored
-
Sunil Mushran authored
commit 8379e7c4 upstream.

Bug introduced by mainline commit e7432675. The bug causes ocfs2_write_begin_nolock() to oops when len=0.

Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Trond Myklebust authored
This fixes a problem that was reported as Red Hat Bugzilla entry number 485339, in which rpciod starts looping on the TCP connection code, rendering the NFS client unusable for half a minute or so. It is basically a backport of commit f75e6745 (SUNRPC: Fix the problem of EADDRNOTAVAIL syslog floods on reconnect).

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Peter Jones authored
commit 96bcc722 upstream

[SCSI] sr: report more accurate drive status after closing the tray.

So, what's happening here is that the drive is reporting a sense of 2/4/1 ("logical unit is becoming ready") from sr_test_unit_ready(), and then we ask for the media event notification before checking that result at all. The check_media_event_descriptor() call isn't getting a check condition, but it's also reporting that the tray is closed and that there's no media. In actuality it doesn't yet know if there's media or not, but there's no way to express that in the media event status field. My current thought is that if it told us the device isn't yet ready, we should return that immediately, since there's nothing that'll tell us any more data than that reliably.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
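A sketch of the early return being described, based on the sr driver's drive-status path (the exact sense check is an approximation):

    the_result = sr_test_unit_ready(cd->device, &sshdr);

    /*
     * 02/04/01 - "logical unit is becoming ready": report not-ready
     * right away instead of consulting media event notification, which
     * cannot express "don't know yet".
     */
    if (the_result && scsi_sense_valid(&sshdr) &&
        sshdr.sense_key == NOT_READY &&
        sshdr.asc == 0x04 && sshdr.ascq == 0x01)
            return CDS_DRIVE_NOT_READY;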
-
Chuck Ebbert authored
commit 4d8d4d25 upstream

[ cebbert@redhat.com: backport to 2.6.27 ]

Remove the low_latency flag setting from the nozomi and mxser drivers. The kernel oopses if this flag is set. [And neither driver should set it, as they call tty_flip_buffer_push() from IRQ paths, so they have always been buggy.]

Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Oliver Neukum authored
commit 2400a2bf upstream

[ cebbert@redhat.com: backport to 2.6.27 ]

USB: removal of tty->low_latency hack dating back to the old serial code

This removes tty->low_latency from all USB serial drivers that push data into the tty layer at hard interrupt context. It's no longer needed and is actually harmful.

Signed-off-by: Oliver Neukum <oliver@neukum.org>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Alan Cox authored
commit 05ad709d upstream

parport: quickfix the proc registration bug

Ideally we should have a directory of drivers and a link to the 'active' driver. For now, just show the first device, which is effectively the existing semantics without a warning. This is an update on the original buggy patch that I then forgot to resubmit. Confusingly, it was proposed by Red Hat, written by Etched Pixels, and fixed and submitted by Intel ...

Resolves-Bug: http://bugzilla.kernel.org/show_bug.cgi?id=9749
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Takashi Iwai authored
commit 100d5eb3 upstream.

Without the initialization of the vmaster NID, the dB information got confused for the ALC269 codec.

Reference: Novell bnc#527361
https://bugzilla.novell.com/show_bug.cgi?id=527361

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Eric Dumazet authored
commit 17ac2e9c upstream.

rose_getname() can leak kernel memory to user space.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
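The standard fix for this class of getname() leak is to zero the sockaddr before filling it, so uninitialized padding never reaches user space. A sketch (fields abridged):

    static int rose_getname(struct socket *sock, struct sockaddr *uaddr,
                            int *uaddr_len, int peer)
    {
            struct sockaddr_rose *srose = (struct sockaddr_rose *)uaddr;
            struct sock *sk = sock->sk;
            struct rose_sock *rose = rose_sk(sk);

            /*
             * Clear everything, including padding bytes, so no stale
             * kernel stack contents are copied back to user space.
             */
            memset(srose, 0, sizeof(*srose));

            srose->srose_family = AF_ROSE;
            srose->srose_addr = rose->source_addr;
            srose->srose_call = rose->source_call;
            /* ... remaining fields elided ... */

            *uaddr_len = sizeof(*srose);
            return 0;
    }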
-
Sunil Mushran authored
commit e7432675 upstream.

In a non-sparse extend, we correctly allocate (and zero) the clusters between the old_i_size and pos, but we don't zero the portions of the cluster we're writing to outside of pos<->len. It handles clustersize > pagesize and blocksize < pagesize.

[Cleaned up by Joel Becker.]

Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-