- Jan 24, 2009
-
Greg Kroah-Hartman authored
-
Nicholas Piggin authored
commit 856bf4d7 upstream. s_syncing livelock avoidance was breaking the data integrity guarantee of sys_sync by allowing sys_sync to skip writing or waiting for superblocks if a concurrent sys_sync is happening. This livelock avoidance is much less important now that we don't have the get_super_to_sync() call after every sb that we sync; that was replaced by __put_super_and_need_restart.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Nicholas Piggin authored
commit 38f21977 upstream. Fix data integrity semantics required by sys_sync, by iterating over all inodes and waiting for any writeback pages after the initial writeout. Comments explain the exact problem.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Nicholas Piggin authored
commit 4f5a99d6 upstream. Remove WB_SYNC_HOLD. The primary motivation is the design of my anti-starvation code for fsync. It requires taking an inode lock over the sync operation, so we could run into lock ordering problems with multiple inodes. It is possible to take a single global lock to solve the ordering problem, but then that would prevent a future nice implementation of "sync multiple inodes" based on lock order via inode address. Seems like a backward step to remove this, but actually it is busted anyway: we can't use the inode lists for data integrity wait: an inode can be taken off the dirty lists but still be under writeback. In order to satisfy data integrity semantics, we should wait for it to finish writeback, but if we only search the dirty lists, we'll miss it. It would be possible to have a "writeback" list for sys_sync, I suppose, but why complicate things by optimising prematurely? For unmounting, we could avoid the "livelock avoidance" code, which would be easier, but again premature IMO. Fixing the existing data integrity problem will come next.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Nicholas Piggin authored
commit 48b47c56 upstream. Direct IO can invalidate and sync a lot of pagecache pages in the mapping. A 4K direct IO will actually try to sync and/or invalidate the pagecache of the entire file, for example (which might be many GB or TB large). Improve this by doing range syncs. Also, memory no longer has to be unmapped to catch the dirty bits for syncing, as dirty bits would remain coherent due to dirty mmap accounting. This fixes the immediate DM deadlocks when doing direct IO reads to a block device with a mounted filesystem, if only by papering over the problem somewhat rather than addressing the fsync starvation cases.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
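The improvement above amounts to flushing only the pages the direct IO actually touches. As a rough, stand-alone illustration (not the kernel code; the page size constant and function name are placeholders), this is the byte-range-to-page-index arithmetic involved:

    #include <stdio.h>

    #define PAGE_SIZE 4096ULL   /* illustrative page size */

    /* Illustration only: map a direct-IO byte range to the range of page
     * indexes that actually needs syncing/invalidating, instead of the
     * whole file. */
    static void dio_sync_range(unsigned long long pos, unsigned long long len)
    {
        if (!len)
            return;
        unsigned long long first = pos / PAGE_SIZE;
        unsigned long long last  = (pos + len - 1) / PAGE_SIZE;

        printf("sync pages %llu..%llu, not 0..EOF\n", first, last);
    }

    int main(void)
    {
        dio_sync_range(1ULL << 40, 4096);   /* a 4K write at a 1 TB offset */
        return 0;
    }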
-
Nicholas Piggin authored
commit ee53a891 upstream. Chris Mason noticed that do_sync_mapping_range didn't actually ask for data integrity writeout. Unfortunately, it is advertised as being usable for data integrity operations. This is a data integrity bug.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Andrew Morton authored
commit 82fd1a9a upstream. Now that we have the early-termination logic in place, it makes sense to bail out early in all other cases where done is set to 1.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Nicholas Piggin authored
commit d5482cdf upstream. Terminate the write_cache_pages loop upon encountering the first page past end, without locking the page. Pages cannot have their index changed while we hold a reference on them (truncate, e.g. truncate_inode_pages_range, performs the same check without the page lock).
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
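A minimal user-space model of the loop shape described above, with made-up types standing in for struct page and the pagevec: once a page's index is past the end of the requested range, the scan stops immediately and no lock is taken, because the index cannot change while a reference is held.

    #include <stdbool.h>
    #include <stdio.h>

    struct fake_page { unsigned long index; };

    /* Simplified model of the scan: bail out as soon as a page lies
     * beyond 'end', without "locking" it first. */
    static void scan(struct fake_page *pages, int n, unsigned long end)
    {
        bool done = false;

        for (int i = 0; i < n && !done; i++) {
            if (pages[i].index > end) {
                done = true;    /* safe: index is stable while we hold a ref */
                break;
            }
            printf("write page %lu\n", pages[i].index);
        }
    }

    int main(void)
    {
        struct fake_page p[] = { {3}, {5}, {9}, {42} };

        scan(p, 4, 10);         /* stops before page 42 */
        return 0;
    }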
-
Nicholas Piggin authored
commit 515f4a03 upstream. In write_cache_pages, if we get stuck behind another process that is cleaning pages, we will be forced to wait for them to finish, then perform our own writeout (if it was redirtied during the long wait), then wait for that. If a page under writeout is still clean, we can skip waiting for it (if we're part of a data integrity sync, we'll be waiting for all writeout pages afterwards, so we'll still be waiting for the other guy's write that's cleaned the page).
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
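Sketched below, under simplified assumptions, is the per-page decision the patch describes: a page already under writeback is waited on only for a data-integrity sync, and a page that has meanwhile become clean is skipped entirely. The types and names are illustrative stand-ins, not the kernel's code.

    #include <stdbool.h>
    #include <stdio.h>

    enum sync_mode { WB_SYNC_NONE, WB_SYNC_ALL };   /* same idea as the kernel flags */

    struct fake_page { bool dirty; bool writeback; };

    /* Decide what to do with one page during a writeback scan.
     * Returns true if this scan should write the page itself. */
    static bool consider_page(struct fake_page *p, enum sync_mode mode)
    {
        if (p->writeback) {
            if (mode != WB_SYNC_ALL)
                return false;       /* non-integrity sync: just skip it */
            /* integrity sync: wait for the in-flight write to finish */
            p->writeback = false;
        }
        if (!p->dirty)
            return false;           /* someone else already cleaned it */
        return true;
    }

    int main(void)
    {
        struct fake_page p = { .dirty = false, .writeback = true };

        printf("write? %d\n", consider_page(&p, WB_SYNC_NONE));    /* 0: skipped */
        return 0;
    }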
-
Nicholas Piggin authored
commit 5a3d5c98 upstream. Get rid of some complex expressions from flow control statements, add a comment, remove some duplicate code.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Nicholas Piggin authored
commit 05fe478d upstream. In write_cache_pages, nr_to_write is heeded even for data-integrity syncs, so the function will return success after writing out nr_to_write pages, even if that was not sufficient to guarantee data integrity. The callers tend to set it to values that could break data integrity semantics easily in practice. For example, nr_to_write can be set to mapping->nrpages * 2; however, if a file has a single dirty page, then fsync is called, subsequent pages might be concurrently added and dirtied, and write_cache_pages might write out two of those newly dirty pages while not writing out the old page that should have been written out. Fix this by ignoring nr_to_write if it is a data integrity sync. This is a data integrity bug. The reason this has been done in the past is to avoid stalling sync operations behind page dirtiers. "If a file has one dirty page at offset 1000000000000000 then someone does an fsync() and someone else gets in first and starts madly writing pages at offset 0, we want to write that page at 1000000000000000. Somehow." What we do today is return success after an arbitrary amount of pages are written, whether or not we have provided the data-integrity semantics that the caller has asked for. Even this doesn't actually fix all stall cases completely: in the above situation, if the file has a huge number of pages in pagecache (but not dirty), then mapping->nrpages is going to be huge, even if pages are being dirtied. This change does indeed make the possibility of long stalls larger, and that's not a good thing, but lying about data integrity is even worse. We have to either perform the sync, or return -ELINUXISLAME so at least the caller knows what has happened. There are subsequent competing approaches in the works to solve the stall problems properly, without compromising data integrity.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
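The heart of the fix, modelled in isolation with a hypothetical helper name (not the kernel's code): the nr_to_write budget may only terminate the scan for non-integrity writeback.

    #include <assert.h>
    #include <stdbool.h>

    enum sync_mode { WB_SYNC_NONE, WB_SYNC_ALL };

    /* Simplified: returns true when the scan may stop after this page.
     * Only background/kupdate-style writeback honours the budget; a
     * data-integrity sync must ignore it, as the patch above does. */
    static bool budget_exhausted(long *nr_to_write, enum sync_mode mode)
    {
        (*nr_to_write)--;
        return *nr_to_write <= 0 && mode == WB_SYNC_NONE;
    }

    int main(void)
    {
        long budget = 1;

        assert(!budget_exhausted(&budget, WB_SYNC_ALL));    /* keep writing */
        budget = 1;
        assert(budget_exhausted(&budget, WB_SYNC_NONE));    /* may stop */
        return 0;
    }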
-
Nicholas Piggin authored
commit 00266770 upstream. In write_cache_pages, if ret signals a real error but we still have some pages left in the pagevec, done would be set to 1, but the remaining pages would continue to be processed and ret would be overwritten in the process. It could easily be overwritten with success, and thus success will be returned even if there is an error. Thus the caller is told all writes succeeded, whereas in reality some did not. Fix this by bailing immediately if there is an error, and retaining the first error code. This is a data integrity bug.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
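A stand-alone sketch of the corrected error handling: stop submitting further pages at the first real error and return that first error, instead of letting later successes overwrite it. write_one() is a made-up stand-in for ->writepage.

    #include <stdio.h>

    /* Stand-in for writing one page; returns 0 or a negative errno. */
    static int write_one(int i) { return (i == 2) ? -5 /* -EIO */ : 0; }

    static int write_pages(int n)
    {
        int ret = 0;

        for (int i = 0; i < n; i++) {
            int err = write_one(i);
            if (err) {
                ret = err;      /* keep the *first* error... */
                break;          /* ...and stop immediately, as the fix does */
            }
        }
        return ret;
    }

    int main(void)
    {
        printf("write_pages() = %d\n", write_pages(5));     /* -5, not 0 */
        return 0;
    }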
-
Nicholas Piggin authored
commit bd19e012 upstream. We'd like to break out of the loop early in many situations; however, the existing code has been setting mapping->writeback_index past the final page in the pagevec lookup for cyclic writeback. This is a problem if we don't process all pages up to the final page. Currently the code mostly keeps writeback_index reasonable and hacks around this by not breaking out of the loop or writing pages outside the range in these cases. Keep track of a real "done index" that enables us to terminate the loop in a much more flexible manner. Needed by the subsequent patch to preserve writepage errors, and then further patches to break out of the loop early for other reasons. However there are no functional changes with this patch alone.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
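A simplified model of the "done index" bookkeeping (illustrative names only): record the index just past the last page handled, so an early exit can store an accurate resume point in writeback_index.

    #include <stdio.h>

    struct fake_page { unsigned long index; };

    /* Returns the index to resume from next time (the "done index"). */
    static unsigned long scan(struct fake_page *pages, int n, int stop_after)
    {
        unsigned long done_index = 0;

        for (int i = 0; i < n; i++) {
            done_index = pages[i].index + 1;    /* next scan resumes here */
            printf("write page %lu\n", pages[i].index);
            if (i + 1 == stop_after)
                break;                          /* early termination */
        }
        return done_index;
    }

    int main(void)
    {
        struct fake_page p[] = { {7}, {8}, {12}, {13} };

        /* stop after 2 pages: the resume index is 9, not 14 */
        printf("writeback_index = %lu\n", scan(p, 4, 2));
        return 0;
    }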
-
Nicholas Piggin authored
commit 31a12666 upstream. In write_cache_pages, scanned == 1 is supposed to mean that cyclic writeback has circled through zero, thus we should not circle again. However it gets set to 1 after the first successful pagevec lookup. This leads to cases where not enough data gets written. Counterexample: file with first 10 pages dirty, writeback_index == 5, nr_to_write == 10. Then the last 5 pages will be found and scanned will be set to 1; after writing those out, we will not cycle back to get the first 5. Rework this logic: now we'll always cycle unless we started off from index 0. When cycling, only write out as far as 1 page before the start page from the first cycle (so we don't write parts of the file twice).
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
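The reworked cycling can be pictured with a small stand-alone model (names are illustrative): scan from writeback_index to the end, and only if the scan did not start at index 0, wrap around once and stop one page before the original starting point.

    #include <stdio.h>

    /* Model of range_cyclic scanning after the fix: one optional wrap. */
    static void cyclic_scan(unsigned long writeback_index, unsigned long last_page)
    {
        unsigned long index = writeback_index;
        unsigned long end = last_page;
        int cycled = (index == 0);          /* started at 0: no wrap needed */

    again:
        for (unsigned long i = index; i <= end; i++)
            printf("consider page %lu\n", i);

        if (!cycled) {
            cycled = 1;
            index = 0;
            end = writeback_index - 1;      /* stop just before where we began */
            goto again;
        }
    }

    int main(void)
    {
        /* first 10 pages dirty, scan starts at 5: pages 5..9, then 0..4 */
        cyclic_scan(5, 9);
        return 0;
    }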
-
Alistair John Strachan authored
commit 46a5f173 upstream. When CONFIG_DMI is not enabled, dmi detection should flag that no board could be detected (err=1) rather than another error condition (err<0). This fixes the fallback to manual probing for all motherboards, even those without DMI strings, when CONFIG_DMI=n.
Signed-off-by: Alistair John Strachan <alistair@devzero.co.uk>
Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
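The return convention the fix relies on can be sketched as follows; this is a hedged illustration with hypothetical function names, not the hwmon driver's code. A positive return from the CONFIG_DMI=n stub means "no board identified, fall back to manual probing", while negative values remain hard errors.

    #include <stdio.h>

    /* Sketch of the convention (hypothetical names):
     *   > 0  no board identified via DMI -> fall back to manual probing
     *   == 0 board identified
     *   < 0  hard error -> abort probing
     */
    #ifdef CONFIG_DMI
    static int board_detect_dmi(void)
    {
        /* real DMI string matching would live here */
        return 0;
    }
    #else
    static int board_detect_dmi(void)
    {
        return 1;   /* the fix: "not found" is not an error code */
    }
    #endif

    int main(void)
    {
        int err = board_detect_dmi();

        if (err < 0)
            printf("hard error %d, give up\n", err);
        else if (err > 0)
            printf("no DMI match, fall back to manual probing\n");
        else
            printf("board identified via DMI\n");
        return 0;
    }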
-
Dave Kleikamp authored
commit 9ba0fdbf upstream. powerpc: is_hugepage_only_range() must account for both 4kB and 64kB slices. The subpage_prot syscall fails on second and subsequent calls for a given region, because is_hugepage_only_range() is mis-identifying the 4 kB slices when the process has a 64 kB page size.
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Pavel Roskin authored
commit 81156928 upstream. Reading 0 bytes from /sys/devices/platform/dell_rbu/image_type or /sys/devices/platform/dell_rbu/packet_size by an ordinary user causes an oops.
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Patrick McHardy authored
Upstream commit 71320afc: An old bug crept back into the ICMP/ICMPv6 conntrack protocols: the timeout values are defined as unsigned longs, while the sysctl's maxsize is set to sizeof(unsigned int). Use unsigned int for the timeout values, as in the other conntrack protocols.
Reported-by: Jean-Mickael Guerin <jean-mickael.guerin@6wind.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
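The size mismatch is easy to reproduce outside the kernel. The sketch below is a generic illustration, not the netfilter code: it copies sizeof(unsigned int) bytes into an unsigned long, as a handler given the smaller size would, and shows why the variable's type and the declared size must agree. The printed value assumes a 64-bit little-endian machine.

    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for a handler that copies 'maxlen' bytes. */
    static void write_value(void *data, size_t maxlen, unsigned int new_value)
    {
        memcpy(data, &new_value, maxlen);   /* copies only 'maxlen' bytes */
    }

    int main(void)
    {
        unsigned long timeout = ~0UL;       /* pretend the upper bits are stale */

        /* Bug shape: variable is unsigned long, size says sizeof(unsigned int);
         * on 64-bit only half of 'timeout' is updated. */
        write_value(&timeout, sizeof(unsigned int), 30);
        printf("timeout = %#lx (upper half untouched on 64-bit)\n", timeout);

        /* Fix shape: make the variable unsigned int so the sizes agree. */
        unsigned int timeout_fixed = 0;
        write_value(&timeout_fixed, sizeof(unsigned int), 30);
        printf("timeout_fixed = %u\n", timeout_fixed);
        return 0;
    }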
-
Patrick McHardy authored
Upstream commit d61ba9fd: Commit 8cc784ee (netfilter: change return types of match functions for ebtables extensions) broke ebtables matches by inverting the sense of match/nomatch.
Reported-by: Matt Cross <matthltc@us.ibm.com>
Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Patrick McHardy authored
Upstream commit 656caff2: Commit 55b69e91 (netfilter: implement NFPROTO_UNSPEC as a wildcard for extensions) broke revision probing for matches and targets that are registered with NFPROTO_UNSPEC. Fix by continuing the search on the NFPROTO_UNSPEC list if nothing is found on the af-specific lists.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
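The restored lookup order can be modelled with two flat arrays (purely illustrative; the real code walks per-family lists of registered extensions): search the family-specific list first and, only if nothing matches, continue on the wildcard list.

    #include <stdio.h>
    #include <string.h>

    struct ext { const char *name; int revision; };

    static const struct ext *find_in(const struct ext *list, int n,
                                     const char *name, int rev)
    {
        for (int i = 0; i < n; i++)
            if (!strcmp(list[i].name, name) && list[i].revision == rev)
                return &list[i];
        return NULL;
    }

    /* Look on the af-specific list first, then fall back to the
     * wildcard (UNSPEC) list, which is the behaviour the fix restores. */
    static const struct ext *find_ext(const struct ext *af_list, int af_n,
                                      const struct ext *unspec_list, int un_n,
                                      const char *name, int rev)
    {
        const struct ext *e = find_in(af_list, af_n, name, rev);

        return e ? e : find_in(unspec_list, un_n, name, rev);
    }

    int main(void)
    {
        const struct ext ipv4[]   = { { "conntrack", 1 } };
        const struct ext unspec[] = { { "mark", 0 } };
        const struct ext *e = find_ext(ipv4, 1, unspec, 1, "mark", 0);

        printf("found: %s\n", e ? e->name : "(none)");
        return 0;
    }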
-
Christian Lamparter authored
commit 00627f22 upstream. All p54usb devices need an explicit termination packet in order to finish the pending transfer properly. Otherwise, the firmware could freeze or simply drop the frame.
Signed-off-by: Christian Lamparter <chunkeey@web.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Alan Stern authored
commit 2caf7fcd upstream. This patch (as1197) fixes an error introduced recently. Since a significant number of devices can't handle Set-Interface requests, we no longer call usb_set_interface() when a driver unbinds from an interface, provided the interface is already in altsetting 0. However the interface still does get disabled, and the call to usb_set_interface() was the only thing re-enabling it. Since the interface doesn't get re-enabled, further attempts to use it fail. So the patch adds a call to usb_enable_interface() when a driver unbinds and the interface is in altsetting 0. For this to work right, the interface's endpoints have to be re-enabled but their toggles have to be left alone. Therefore an additional argument is added to usb_enable_endpoint() and usb_enable_interface(), a flag indicating whether or not the endpoint toggles should be reset. This is a forward-ported version of a patch which fixes Bugzilla #12301.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: David Roka <roka@dawid.hu>
Reported-by: Erik Ekman <erik@kryo.se>
Tested-by: Erik Ekman <erik@kryo.se>
Tested-by: Alon Bar-Lev <alon.barlev@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Lennert Buytenhek authored
[ Upstream commit: 4f7d54f5 ] Currently, setting SPLICE_F_NONBLOCK on splice from a TCP socket results in masking of EOF (RDHUP) and error conditions on the socket by an -EAGAIN return. Move the NONBLOCK check in tcp_splice_read() to be after the EOF and error checks to fix this.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
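A simplified model of the reordered checks (made-up types, not net/ipv4/tcp.c): when no data has been spliced yet, a pending socket error or end-of-stream must be reported before SPLICE_F_NONBLOCK is allowed to turn the situation into -EAGAIN.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct fake_sock { bool eof; int err; };

    /* Simplified: decide what a splice-from-socket call that has moved no
     * data yet should return.  Error and EOF are checked before the
     * non-blocking early return, which is the ordering the fix restores. */
    static int splice_decision(const struct fake_sock *sk, bool nonblock)
    {
        if (sk->err)
            return -sk->err;        /* report the pending socket error */
        if (sk->eof)
            return 0;               /* report end of stream (RDHUP) */
        if (nonblock)
            return -EAGAIN;         /* only now does "try again" make sense */
        return 1;                   /* a blocking caller would sleep and retry */
    }

    int main(void)
    {
        struct fake_sock closed = { .eof = true, .err = 0 };

        printf("%d\n", splice_decision(&closed, true));     /* 0, not -EAGAIN */
        return 0;
    }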
-
Florian Fainelli authored
[ Upstream commit: 4707470a ] This patch bumps the release number of the driver.
Signed-off-by: Florian Fainelli <florian@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Joe Chou authored
[ Upstream commit: 3e7c469f ] This patch saves the MIER register contents before handling interrupts, then restores them correctly at the end of the interrupt routine.
Signed-off-by: Joe Chou <Joe.Chou@rdc.com.tw>
Signed-off-by: Florian Fainelli <florian@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Joe Chou authored
[ Upstream commit: 11e5e8f5 ] This patch fixes reversed logic in the MDIO code.
Signed-off-by: Joe Chou <Joe.Chou@rdc.com.tw>
Signed-off-by: Florian Fainelli <florian@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Jarek Poplawski authored
[ Upstream commit: 6f573214 ] New nodes are inserted in u32_change() under rtnl_lock() with wmb(), so without tcf_tree_lock() like in other classifiers (e.g. cls_fw). This isn't enough without rmb() on the read side, but on the other hand adding such barriers doesn't give any savings, so the lock is added instead.
Reported-by: m0sia <m0sia@plotinka.ru>
Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Wei Yongjun authored
[ Upstream commit: 9fcb95a1 ] If a FWD-TSN chunk is received with a bad stream ID, SCTP will not do the validity check; this may cause a memory overflow when overwriting the TSN of that stream ID. The FORWARD-TSN chunk looks like this:
  FORWARD-TSN chunk
    Type = 192
    Flags = 0
    Length = 172
    NewTSN = 99
    Stream = 10000
    StreamSequence = 0xFFFF
This patch fixes the problem by discarding the chunk if the stream ID is not less than MIS.
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
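The added validity check can be shown in isolation (illustrative struct layout, not the sctp implementation): every stream ID carried in the FORWARD-TSN chunk must be below the negotiated number of inbound streams (MIS), otherwise the whole chunk is discarded rather than used to index past the per-stream array.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct fwdtsn_skip { uint16_t stream; uint16_t seq; };

    /* Illustrative check only: reject any skip entry whose stream ID is
     * >= the negotiated number of inbound streams (MIS). */
    static bool fwdtsn_valid(const struct fwdtsn_skip *skips, int n, uint16_t mis)
    {
        for (int i = 0; i < n; i++)
            if (skips[i].stream >= mis)
                return false;       /* discard the whole chunk */
        return true;
    }

    int main(void)
    {
        struct fwdtsn_skip bad[] = { { 10000, 0xffff } };   /* the example above */

        printf("accept? %d (MIS = 10)\n", fwdtsn_valid(bad, 1, 10));
        return 0;
    }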
-
Herbert Xu authored
[ Upstream commit: 7891cc81 ] When a fib6 table dump is prematurely ended, we won't unlink its walker from the list. This causes all sorts of grief for other users of the list later.
Reported-by: Chris Caputo <ccaputo@alt.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Jarek Poplawski authored
[ Upstream commit: none. This is a quick fix for -stable purposes. Upstream fixes these problems via a large set of invasive hrtimer changes. ] Most probably there is a (still unproven) race in hrtimers (before 2.6.29 kernels) which causes corruption of the hrtimers rbtree. This patch doesn't fix it, but should let HTB avoid triggering the bug.
Reported-by: Denys Fedoryschenko <denys@visp.net.lb>
Reported-by: Badalian Vyacheslav <slavon@bigtelecom.ru>
Reported-by: Chris Caputo <ccaputo@alt.net>
Tested-by: Badalian Vyacheslav <slavon@bigtelecom.ru>
Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Alan Stern authored
commit a81a81a2 upstream. This patch (as1194b) makes usb-storage set the CAPACITY_HEURISTICS flag for all devices made by Nokia, Nikon, or Motorola. These companies seem to include the READ CAPACITY bug in all of their devices. Since cell phones and digital cameras rely on flash storage, which always has an even number of sectors, setting CAPACITY_HEURISTICS shouldn't cause any problems. Not even if the companies wise up and start making devices without the bug. A large number of unusual_devs entries are now unnecessary, so the patch removes them.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Alan Stern authored
commit 25ff1c31 upstream. This patch (as1189c) adds some hacks to usb-storage for dealing with the growing problems involving bad capacity values and last-sector accesses:
A new flag, US_FL_CAPACITY_OK, is created to indicate that the device is known to report its capacity correctly. An unusual_devs entry for Linux's own File-backed Storage Gadget is added with this flag set, since g_file_storage always reports the correct capacity and since the capacity need not be even (it is determined by the size of the backing file).
An entry in unusual_devs.h which has only the CAPACITY_OK flag set shouldn't prejudice libusual, since the device will work perfectly well with either usb-storage or ub. So a new macro, COMPLIANT_DEV, is added to let libusual know about these entries.
When a last-sector access fails three times in a row and neither the FIX_CAPACITY nor the CAPACITY_OK flag is set, we assume the last-sector bug is present. We replace the existing status and sense data with values that will cause the SCSI core to fail the access immediately rather than retry indefinitely. This should fix the difficulties people have been having with Nokia phones.
This version of the patch differs from the version accepted into the mainline only in that it does not trigger a WARN() when an odd-numbered last-sector access succeeds. In a stable kernel series we don't want to go around spamming users' logs and consoles for no good reason.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Jos-Vicente Gilabert authored
commit 2950e952 upstream. Taken from http://bugzilla.kernel.org/show_bug.cgi?id=12397
We're doing an sprintf of an 11-char string into an 11-char buffer. Whoops. It breaks firmware uploading.
Reported-by: Jos-Vicente Gilabert <josevteg@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
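The bug class is easy to demonstrate in plain C (a generic example, not the driver's code): an 11-character string needs 12 bytes once the terminating NUL is counted, so sprintf into an 11-byte buffer writes one byte past the end, while snprintf with a correctly sized buffer does not.

    #include <stdio.h>

    int main(void)
    {
        /* "0123456789A" is 11 characters -> needs 12 bytes with the NUL. */
        char too_small[11];     /* one byte short: sprintf() would overflow */
        char ok[12];

        /* snprintf never writes past the buffer and reports the full length. */
        int need = snprintf(too_small, sizeof(too_small), "%s", "0123456789A");
        printf("truncated to \"%s\", needed %d chars + NUL\n", too_small, need);

        snprintf(ok, sizeof(ok), "%s", "0123456789A");
        printf("fits: \"%s\"\n", ok);
        return 0;
    }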
-
Takashi Iwai authored
commit 1725b82a upstream. The changes specific to Samsung laptops seem inapplicable to other hardware models like ASUS. The mic inputs are lost on such hardware by the change 5d5d5f43. This patch adds back the old laptop-eapd model and creates a new model, "samsung", for the behaviour specific to Samsung laptops with the automatic mic selection feature. Reference: kernel bugzilla #12070 http://bugzilla.kernel.org/show_bug.cgi?id=12070
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Cc: Daniel Drake <dsd@gentoo.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Takashi Iwai authored
commit 8317e0b0 upstream. Resetting the HP pinctl in the unplugged state may cause a sort of regression on some devices because of their wrong pin configuration. A simple workaround is to disable the pin reset. This is ugly and may not be good from the power-saving POV (if any), but damn simple.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Luke Yelavich authored
commit 3e420e78 upstream. Have the Samsung Q45 (144d:c510) select ALC262_HIPPO by default. Reference: Ubuntu bug 200210 http://launchpad.net/bugs/200210
Signed-off-by: Luke Yelavich <themuso@ubuntu.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Takashi Iwai authored
commit 1b0652eb upstream. Fix HP dv5 (103c:3603) built-in mic input. Reference: kernel bug 12440 http://bugzilla.kernel.org/show_bug.cgi?id=12440
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Giuseppe Bilotta authored
commit dafb70ce upstream. Add the model=hp-m4 quirk for another HP dv5 (103c:3603). Reference: kernel bug #12440 http://bugzilla.kernel.org/show_bug.cgi?id=12440
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Clemens Ladisch authored
commit 7e86c0e6 upstream. On the Asus Xonar D2 and D2X models, the SPI chip select signal for the fourth DAC shares its pin with the serial clock for the EEPROM that contains the PCI subdevice ID values. It appears that when DAC registers are written and some other unknown conditions occur (probably noise on the EEPROM's chip select line), the EEPROM gets overwritten with garbage, which makes it impossible to properly detect the card later. Therefore, we better avoid DAC register writes and make sure that the driver works with the DAC's registers' default values. Consequently, the sample format is now I2S instead of left-justified (no user-visible change), and the DAC's volume/mute registers cannot be used anymore (volume changes are now done by the software volume plugin).
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Tony Luck authored
commit 0773a6cf upstream. sched_clock() on ia64 is based on ar.itc, so is never completely synchronized between cpus. On some platforms (e.g. certain models of SGI Altix) it may be running at radically different frequencies. Based on a patch from Dimitri Sivanich which set this just for SN2 && GENERIC kernels ... it is needed for all ia64 machines.
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-