  1. Dec 16, 2013
    • Linux 3.8.13.13-rt26 · 97e7e161
      Steven Rostedt (Red Hat) authored
    • net: make neigh_priv_len in struct net_device 16bit instead of 8bit · 62b2c179
      Sebastian Siewior authored
      
      neigh_priv_len is defined as u8. With all debugging enabled, struct
      ipoib_neigh is 200 bytes. The largest part is sk_buff_head at 96
      bytes, and within it the spinlock at 72 bytes.
      This size still fits into the u8, leaving some room to spare.
      
      On -RT, struct ipoib_neigh puts on weight and grows to 392 bytes. The
      main reason is sk_buff_head at 288 bytes, and the heavyweight within it
      is the spinlock at 192 bytes. This no longer fits into neigh_priv_len
      and gcc complains.
      
      This patch changes neigh_priv_len from 8bit to 16bit. Since the
      following element (dev_id) is 16bit and is followed by a spinlock which
      is aligned anyway, the struct keeps a total size of 3200 (allmodconfig) /
      2048 (with as much debug off as possible) bytes on x86-64.
      On x86-32 the struct is 1856 (allmodconfig) / 1216 (with as much debug
      off as possible) bytes long. The numbers were measured with and without
      the patch to show that this change does not increase the size of the
      struct.
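
      For illustration, a hedged sketch of the affected region of struct
      net_device (include/linux/netdevice.h), assuming the 3.8-era layout in
      which dev_id is followed by an aligned spinlock; unrelated members are
      elided:

      struct net_device {
              /* ... */
              unsigned short  neigh_priv_len; /* was: unsigned char (u8) */
              unsigned short  dev_id;         /* 16bit, so widening the field
                                               * above does not grow the struct */
              spinlock_t      addr_list_lock; /* aligned spinlock following dev_id */
              /* ... */
      };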
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      62b2c179
  2. Dec 11, 2013
    • rtmutex: use a trylock for waiter lock in trylock · 418c69ea
      Sebastian Andrzej Siewior authored
      
      Mike Galbraith captured the following:
      | >#11 [ffff88017b243e90] _raw_spin_lock at ffffffff815d2596
      | >#12 [ffff88017b243e90] rt_mutex_trylock at ffffffff815d15be
      | >#13 [ffff88017b243eb0] get_next_timer_interrupt at ffffffff81063b42
      | >#14 [ffff88017b243f00] tick_nohz_stop_sched_tick at ffffffff810bd1fd
      | >#15 [ffff88017b243f70] tick_nohz_irq_exit at ffffffff810bd7d2
      | >#16 [ffff88017b243f90] irq_exit at ffffffff8105b02d
      | >#17 [ffff88017b243fb0] reschedule_interrupt at ffffffff815db3dd
      | >--- <IRQ stack> ---
      | >#18 [ffff88017a2a9bc8] reschedule_interrupt at ffffffff815db3dd
      | >    [exception RIP: task_blocks_on_rt_mutex+51]
      | >#19 [ffff88017a2a9ce0] rt_spin_lock_slowlock at ffffffff815d183c
      | >#20 [ffff88017a2a9da0] lock_timer_base.isra.35 at ffffffff81061cbf
      | >#21 [ffff88017a2a9dd0] schedule_timeout at ffffffff815cf1ce
      | >#22 [ffff88017a2a9e50] rcu_gp_kthread at ffffffff810f9bbb
      | >#23 [ffff88017a2a9ed0] kthread at ffffffff810796d5
      | >#24 [ffff88017a2a9f50] ret_from_fork at ffffffff815da04c
      
      lock_timer_base() does a try_lock() which deadlocks on the waiter lock,
      not the lock itself.
      This patch takes the waiter_lock with a trylock so that it works from
      interrupt context as well. If the fastpath fails and the waiter_lock is
      already held, then the lock itself is most likely taken as well.
      This patch also adds a "rt_spin_try_unlock" to keep lockdep happy: if we
      managed to take the wait_lock in the first place we should also be able
      to take it in the unlock path.
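
      A minimal sketch of the idea, assuming the 3.x rtmutex slowpath helpers
      (try_to_take_rt_mutex(), rt_mutex_owner()); this is not the verbatim -rt
      patch and it omits the waiter fixup details:

      static inline int rt_mutex_slowtrylock(struct rt_mutex *lock)
      {
              int ret = 0;

              /*
               * Trylock the waiter lock: if it is already held, the mutex is
               * contended anyway, so report failure instead of blocking. This
               * keeps the path safe for callers such as lock_timer_base()
               * running in interrupt-ish context.
               */
              if (!raw_spin_trylock(&lock->wait_lock))
                      return 0;

              if (likely(rt_mutex_owner(lock) != current))
                      ret = try_to_take_rt_mutex(lock, current, NULL);

              raw_spin_unlock(&lock->wait_lock);
              return ret;
      }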
      
      Cc: stable-rt@vger.kernel.org
      Reported-by: Mike Galbraith <bitbucket@online.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      418c69ea
    • lockdep: Correctly annotate hardirq context in irq_exit() · 3234f84f
      Peter Zijlstra authored
      
      There was a reported deadlock on -rt which lockdep didn't report.
      
      It turns out that in irq_exit() we tell lockdep that the hardirq
      context ends and then do all kinds of locking afterwards.
      
      To fix it, move trace_hardirq_exit() to the very end of irq_exit(), this
      ensures all locking in tick_irq_exit() and rcu_irq_exit() are properly
      recorded as happening from hardirq context.
      
      This however leads to the 'fun' little problem of running softirqs
      while in hardirq context. To cure this make the softirq code a little
      more complex (in the CONFIG_TRACE_IRQFLAGS case).
      
      Due to stack swizzling arch dependent trickery we cannot pass an
      argument to __do_softirq() to tell it if it was done from hardirq
      context or not; so use a side-band argument.
      
      When we do __do_softirq() from hardirq context, 'atomically' flip to
      softirq context and back, so that no locking goes without being in
      either hard- or soft-irq context.
      
      I didn't find any new problems in mainline using this patch, but it
      did show the -rt problem.
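
      A minimal sketch of the resulting irq_exit() ordering (kernel/softirq.c),
      assuming the 3.12-era code with the arch-specific irqs-disabled checks
      elided; the only point is that trace_hardirq_exit() moves to the very end:

      void irq_exit(void)
      {
              account_irq_exit_time(current);
              sub_preempt_count(HARDIRQ_OFFSET);
              if (!in_interrupt() && local_softirq_pending())
                      invoke_softirq();

              tick_irq_exit();
              rcu_irq_exit();
              trace_hardirq_exit();   /* moved here: all locking above is still
                                       * seen by lockdep as hardirq context */
      }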
      
      Cc: stable-rt@vger.kernel.org
      Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/n/tip-dgwc5cdksbn0jk09vbmcc9sa@git.kernel.org
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      3234f84f
    • swait: Add a few more users · 94d3587f
      Sebastian Andrzej Siewior authored
      
      The wait-simple queue is lighter weight and more efficient than the full
      wait queue, and may be used in atomic context on PREEMPT_RT.
      
      Fix up some places that needed to call the swait_*() functions instead
      of the wait_*() functions.
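
      A minimal, hedged sketch of the kind of conversion involved. The names
      below follow the later mainline swait API (struct swait_queue_head,
      swake_up_one()) rather than the exact 3.x wait-simple names, so treat
      them as illustrative:

              /* before: full waitqueue; wake_up() takes a sleeping spinlock on
               * -RT, which is not allowed from atomic context */
              wait_queue_head_t wq;
              init_waitqueue_head(&wq);
              wake_up(&wq);

              /* after: simple wait queue backed by a raw lock, usable from
               * atomic context on PREEMPT_RT */
              struct swait_queue_head swq;
              init_swait_queue_head(&swq);
              swake_up_one(&swq);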
      
      Cc: stable-rt@vger.kernel.org
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      94d3587f
  3. Dec 10, 2013
    • cpu_down: move migrate_enable() back · 257d6e61
      Tiejun Chen authored
      Commit 08c1ab68, "hotplug-use-migrate-disable.patch", intends to
      use migrate_enable()/migrate_disable() to replace the combination
      of preempt_enable() and preempt_disable(), but in the
      !CONFIG_PREEMPT_RT_FULL case migrate_enable()/migrate_disable()
      are still equivalent to preempt_enable()/preempt_disable(). So the
      following cpu_hotplug_begin()/cpu_unplug_begin(cpu) would call
      schedule() and trigger schedule_debug() like this:
      
      _cpu_down()
      	|
      	+ migrate_disable() = preempt_disable()
      	|
      	+ cpu_hotplug_begin() or cpu_unplug_begin()
      		|
      		+ schedule()
      			|
      			+ __schedule()
      				|
      				+ preempt_disable();
      				|
      				+ __schedule_bug() is true!
      
      So we should move migrate_enable() back to where it was in the original
      scheme.
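
      A rough sketch (not the actual diff) of the resulting ordering in
      _cpu_down() (kernel/cpu.c), with everything else elided; the point is
      that migrate_enable() happens before anything that may schedule():

      static int _cpu_down(unsigned int cpu, int tasks_frozen)
      {
              /* ... */
              migrate_disable();
              /* pin ourselves away from the dying CPU ... */
              migrate_enable();       /* moved back: on !RT this is
                                       * preempt_enable(), so it must come
                                       * before we may sleep */

              cpu_hotplug_begin();    /* may schedule() */
              /* ... actual CPU teardown ... */
              cpu_hotplug_done();
              return 0;
      }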
      
      Cc: stable-rt@vger.kernel.org
      Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      257d6e61
  4. Dec 02, 2013
  5. Nov 20, 2013
    • drm: remove preempt_disable() from drm_calc_vbltimestamp_from_scanoutpos() · 2d390c53
      Sebastian Andrzej Siewior authored
      
      Luis captured the following:
      
      | BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
      | in_atomic(): 1, irqs_disabled(): 0, pid: 517, name: Xorg
      | 2 locks held by Xorg/517:
      |  #0:  (&dev->vbl_lock){......}, at: [<ffffffffa0024c60>] drm_vblank_get+0x30/0x2b0 [drm]
      |  #1:  (&dev->vblank_time_lock){......}, at: [<ffffffffa0024ce1>] drm_vblank_get+0xb1/0x2b0 [drm]
      | Preemption disabled at:
      | [<ffffffffa008bc95>] i915_get_vblank_timestamp+0x45/0xa0 [i915]
      | CPU: 3 PID: 517 Comm: Xorg Not tainted 3.10.10-rt7+ #5
      | Call Trace:
      |  [<ffffffff8164b790>] dump_stack+0x19/0x1b
      |  [<ffffffff8107e62f>] __might_sleep+0xff/0x170
      |  [<ffffffff81651ac4>] rt_spin_lock+0x24/0x60
      |  [<ffffffffa0084e67>] i915_read32+0x27/0x170 [i915]
      |  [<ffffffffa008a591>] i915_pipe_enabled+0x31/0x40 [i915]
      |  [<ffffffffa008a6be>] i915_get_crtc_scanoutpos+0x3e/0x1b0 [i915]
      |  [<ffffffffa00245d4>] drm_calc_vbltimestamp_from_scanoutpos+0xf4/0x430 [drm]
      |  [<ffffffffa008bc95>] i915_get_vblank_timestamp+0x45/0xa0 [i915]
      |  [<ffffffffa0024998>] drm_get_last_vbltimestamp+0x48/0x70 [drm]
      |  [<ffffffffa0024db5>] drm_vblank_get+0x185/0x2b0 [drm]
      |  [<ffffffffa0025d03>] drm_wait_vblank+0x83/0x5d0 [drm]
      |  [<ffffffffa00212a2>] drm_ioctl+0x552/0x6a0 [drm]
      |  [<ffffffff811a0095>] do_vfs_ioctl+0x325/0x5b0
      |  [<ffffffff811a03a1>] SyS_ioctl+0x81/0xa0
      |  [<ffffffff8165a342>] tracesys+0xdd/0xe2
      
      After a longer thread it was decided to drop the preempt_disable()/
      preempt_enable() invocations, which were meant for -RT; Mario Kleiner is
      looking for a replacement.
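
      A minimal before/after sketch of what the change amounts to inside
      drm_calc_vbltimestamp_from_scanoutpos() (drivers/gpu/drm/drm_irq.c); the
      retry/timestamp code is elided and the callback invocation is only
      approximate:

              /* before: */
              preempt_disable();
              vbl_status = dev->driver->get_scanout_position(dev, crtc, &vpos, &hpos);
              preempt_enable();

              /* after: same call, with the preempt_disable()/preempt_enable()
               * bracket dropped, since the i915 callback may take sleeping
               * locks on -RT (i915_read32()) */
              vbl_status = dev->driver->get_scanout_position(dev, crtc, &vpos, &hpos);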
      
      Cc: stable-rt@vger.kernel.org
      Reported-By: Luis Claudio R. Goncalves <lclaudio@uudg.org>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2d390c53
    • mm/memcontrol: Don't call schedule_work_on in preemption disabled context · da32cb0f
      Yang Shi authored
      
      The following trace is triggered when running ltp oom test cases:
      
      BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
      in_atomic(): 1, irqs_disabled(): 0, pid: 17188, name: oom03
      Preemption disabled at:[<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
      
      CPU: 2 PID: 17188 Comm: oom03 Not tainted 3.10.10-rt3 #2
      Hardware name: Intel Corporation Calpella platform/MATXM-CORE-411-B, BIOS 4.6.3 08/18/2010
      ffff88007684d730 ffff880070df9b58 ffffffff8169918d ffff880070df9b70
      ffffffff8106db31 ffff88007688b4a0 ffff880070df9b88 ffffffff8169d9c0
      ffff88007688b4a0 ffff880070df9bc8 ffffffff81059da1 0000000170df9bb0
      Call Trace:
      [<ffffffff8169918d>] dump_stack+0x19/0x1b
      [<ffffffff8106db31>] __might_sleep+0xf1/0x170
      [<ffffffff8169d9c0>] rt_spin_lock+0x20/0x50
      [<ffffffff81059da1>] queue_work_on+0x61/0x100
      [<ffffffff8112b361>] drain_all_stock+0xe1/0x1c0
      [<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
      [<ffffffff8112beda>] __mem_cgroup_try_charge+0x41a/0xc40
      [<ffffffff810f1c91>] ? release_pages+0x1b1/0x1f0
      [<ffffffff8106f200>] ? sched_exec+0x40/0xb0
      [<ffffffff8112cc87>] mem_cgroup_charge_common+0x37/0x70
      [<ffffffff8112e2c6>] mem_cgroup_newpage_charge+0x26/0x30
      [<ffffffff8110af68>] handle_pte_fault+0x618/0x840
      [<ffffffff8103ecf6>] ? unpin_current_cpu+0x16/0x70
      [<ffffffff81070f94>] ? migrate_enable+0xd4/0x200
      [<ffffffff8110cde5>] handle_mm_fault+0x145/0x1e0
      [<ffffffff810301e1>] __do_page_fault+0x1a1/0x4c0
      [<ffffffff8169c9eb>] ? preempt_schedule_irq+0x4b/0x70
      [<ffffffff8169e3b7>] ? retint_kernel+0x37/0x40
      [<ffffffff8103053e>] do_page_fault+0xe/0x10
      [<ffffffff8169e4c2>] page_fault+0x22/0x30
      
      So, to prevent schedule_work_on() from being called in preempt-disabled
      context, replace the get_cpu()/put_cpu() pair with
      get_cpu_light()/put_cpu_light().
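
      A minimal before/after sketch of the change in drain_all_stock()
      (mm/memcontrol.c), with the loop body elided; get_cpu_light() and
      put_cpu_light() are the -rt helpers that pin the task via
      migrate_disable() instead of preempt_disable():

              /* before: preempt_disable(); schedule_work_on() below then takes
               * a sleeping lock in atomic context on -RT */
              curcpu = get_cpu();
              for_each_online_cpu(cpu) {
                      /* ... schedule_work_on(cpu, &stock->work); ... */
              }
              put_cpu();

              /* after: migrate_disable() on -RT: still pinned to this CPU, but
               * sleeping remains allowed */
              curcpu = get_cpu_light();
              for_each_online_cpu(cpu) {
                      /* ... */
              }
              put_cpu_light();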
      
      Cc: stable-rt@vger.kernel.org
      Signed-off-by: Yang Shi <yang.shi@windriver.com>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      da32cb0f
    • mm/slub: do not rely on slab_cached passed to free_delayed() · 81b9f7c6
      Sebastian Andrzej Siewior authored
      
      You can get this backtrace:
      | =============================================================================
      | BUG dentry (Not tainted): Padding overwritten. 0xf15e1ec0-0xf15e1f1f
      | -----------------------------------------------------------------------------
      |
      | Disabling lock debugging due to kernel taint
      | INFO: Slab 0xf6f10b00 objects=21 used=0 fp=0xf15e0480 flags=0x2804080
      | CPU: 6 PID: 1 Comm: systemd Tainted: G    B        3.10.17-rt12+ #197
      | Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
      |  f6f10b00 f6f10b00 f20a3be8 c149da9e f20a3c74 c110b0d6 c15e010c f6f10b00
      |  00000015 00000000 f15e0480 02804080 64646150 20676e69 7265766f 74697277
      |  2e6e6574 66783020 31653531 2d306365 31667830 66316535 00006631 00000046
      | Call Trace:
      |  [<c149da9e>] dump_stack+0x16/0x18
      |  [<c110b0d6>] slab_err+0x76/0x80
      |  [<c110c231>] ? deactivate_slab+0x3f1/0x4a0
      |  [<c110c231>] ? deactivate_slab+0x3f1/0x4a0
      |  [<c110b56f>] slab_pad_check.part.54+0xbf/0x150
      |  [<c110ba04>] __free_slab+0x124/0x130
      |  [<c149bb79>] ? __slab_alloc.constprop.69+0x27b/0x5d3
      |  [<c110ba39>] free_delayed+0x29/0x40
      |  [<c149bec5>] __slab_alloc.constprop.69+0x5c7/0x5d3
      |  [<c1126062>] ? __d_alloc+0x22/0x150
      |  [<c1126062>] ? __d_alloc+0x22/0x150
      |  [<c11265b0>] ? __d_lookup_rcu+0x160/0x160
      |  [<c110d912>] kmem_cache_alloc+0x162/0x190
      |  [<c112668b>] ? __d_lookup+0xdb/0x1d0
      |  [<c1126062>] ? __d_alloc+0x22/0x150
      |  [<c1126062>] __d_alloc+0x22/0x150
      |  [<c11261a5>] d_alloc+0x15/0x60
      |  [<c111aec1>] lookup_dcache+0x71/0xa0
      |  [<c111af0e>] __lookup_hash+0x1e/0x40
      |  [<c111b374>] lookup_slow+0x34/0x90
      |  [<c111c3c7>] link_path_walk+0x737/0x780
      |  [<c111a3d4>] ? path_get+0x24/0x40
      |  [<c111a3df>] ? path_get+0x2f/0x40
      |  [<c111bfb2>] link_path_walk+0x322/0x780
      |  [<c111e3ed>] path_openat.isra.54+0x7d/0x400
      |  [<c111f32b>] do_filp_open+0x2b/0x70
      |  [<c11110a2>] do_sys_open+0xe2/0x1b0
      |  [<c14a319f>] ? restore_all+0xf/0xf
      |  [<c102bb80>] ? vmalloc_sync_all+0x10/0x10
      |  [<c1111192>] SyS_open+0x22/0x30
      |  [<c14a393e>] sysenter_do_call+0x12/0x36
      | Padding f15e1de0: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      | Padding f15e1df0: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a  ZZZZZZZZZZZZZZZZ
      | Padding f15e1e00: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
      | Padding f15e1e10: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
      | Padding f15e1e20: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
      | Padding f15e1e30: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
      | Padding f15e1e40: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
      | Padding f15e1e50: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
      | Padding f15e1e60: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
      | Padding f15e1e70: 6b 6b 6b 6b 6b 6b 6b a5 bb bb bb bb 80 01 5e f1  kkkkkkk.......^.
      | Padding f15e1e80: 53 7e 0d c1 c3 bd 49 c1 12 d9 10 c1 53 7e 0d c1  S~....I.....S~..
      | Padding f15e1e90: 60 7f 0d c1 e0 05 14 c1 ce d1 13 c1 96 d4 13 c1  `...............
      | Padding f15e1ea0: e9 e0 13 c1 f7 48 17 c1 13 6a 17 c1 41 fb 17 c1  .....H...j..A...
      | Padding f15e1eb0: 07 a4 11 c1 22 af 11 c1 74 b3 11 c1 06 d2 11 c1  ...."...t.......
      | Padding f15e1ec0: c6 d2 11 c1 06 00 00 00 01 00 00 00 f3 dc fe ff  ................
      | Padding f15e1ed0: 73 7e 0d c1 5d b4 49 c1 ec c4 10 c1 73 7e 0d c1  s~..].I.....s~..
      | Padding f15e1ee0: 50 83 0d c1 79 09 14 c1 fd b9 13 c1 5a f2 13 c1  P...y.......Z...
      | Padding f15e1ef0: 7b 1c 28 c1 03 20 28 c1 9e 25 28 c1 b3 26 28 c1  {.(.. (..%(..&(.
      | Padding f15e1f00: f4 ab 34 c1 bc 89 30 c1 e5 0d 0a c1 c1 0f 0a c1  ..4...0.........
      | Padding f15e1f10: ae 34 0a c1 00 00 00 00 00 00 00 00 f3 dc fe ff  .4..............
      | FIX dentry: Restoring 0xf15e1de0-0xf15e1f1f=0x5a
      |
      | =============================================================================
      | BUG dentry (Tainted: G    B       ): Redzone overwritten
      | -----------------------------------------------------------------------------
      |
      | INFO: 0xf15e009c-0xf15e009f. First byte 0x96 instead of 0xbb
      | INFO: Allocated in __ext4_get_inode_loc+0x3b7/0x460 age=1054261382 cpu=3239295485 pid=-1055657382
      |  ext4_iget+0x63/0x9c0
      |  ext4_lookup+0x71/0x180
      |  lookup_real+0x17/0x40
      |  do_last.isra.53+0x72b/0xbc0
      |  path_openat.isra.54+0x9d/0x400
      |  do_filp_open+0x2b/0x70
      |  do_sys_open+0xe2/0x1b0
      |  0x7
      |  0x1
      |  0xfffedcf2
      |  mempool_free_slab+0x13/0x20
      |  __slab_free+0x3d/0x3ae
      |  kmem_cache_free+0x1bc/0x1d0
      |  mempool_free_slab+0x13/0x20
      |  mempool_free+0x40/0x90
      |  bio_put+0x59/0x70
      | INFO: Freed in blk_update_bidi_request+0x13/0x70 age=2779021993 cpu=1515870810 pid=1515870810
      |  __blk_end_bidi_request+0x1e/0x50
      |  __blk_end_request_all+0x23/0x40
      |  virtblk_done+0xf4/0x260
      |  vring_interrupt+0x2c/0x50
      |  handle_irq_event_percpu+0x45/0x1f0
      |  handle_irq_event+0x31/0x50
      |  handle_edge_irq+0x6e/0x130
      |  0x5
      | INFO: Slab 0xf6f10b00 objects=21 used=0 fp=0xf15e0480 flags=0x2804080
      | INFO: Object 0xf15e0000 @offset=0 fp=0xc113e0e9
      
      This can happen if memory is freed while irqs_disabled(): the object is
      then added to the slub_free_list. The following allocation might be from
      a different kmem_cache. If the two caches have different
      SLAB_DEBUG_FLAGS, then one may complain about bad markers which are
      actually not used.
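
      A hedged sketch of the direction of the fix in mm/slub.c (not the exact
      patch): free_delayed() derives the kmem_cache from the slab page itself
      instead of trusting the cache passed in by whoever drains the list:

      static void free_delayed(struct list_head *h)
      {
              while (!list_empty(h)) {
                      struct page *page = list_first_entry(h, struct page, lru);

                      list_del(&page->lru);
                      /* use the cache this slab page actually belongs to, not
                       * the cache of the allocation that drains the list */
                      __free_slab(page->slab_cache, page);
              }
      }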
      
      Cc: stable-rt@vger.kernel.org
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      81b9f7c6
    • hwlat-detector: Don't ignore threshold module parameter · c4819991
      Mike Galbraith authored
      
      If the user specified a threshold at module load time, use it.
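
      A hedged one-line sketch of the kind of fix; the 'threshold' parameter
      and DEFAULT_LAT_THRESHOLD names are assumed from the hwlat_detector
      module, not quoted from the patch:

              /* at init time, prefer the user's module parameter over the
               * built-in default instead of overwriting it */
              data.threshold = threshold ?: DEFAULT_LAT_THRESHOLD;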
      
      Cc: stable-rt@vger.kernel.org
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Mike Galbraith <bitbucket@online.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c4819991
    • Kind of revert "powerpc: 52xx: provide a default in mpc52xx_irqhost_map()" · f022b0f6
      Wolfram Sang authored
      This more or less reverts commit 6391f697.

      Instead of adding an unneeded 'default', mark the variable to prevent
      the false positive 'uninitialized var' warning. The other change (fixing
      the printout) needs to be reverted, too: we want to know WHICH critical
      irq failed, not which level it had.
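
      For reference, a generic, hedged example of the annotation referred to,
      the kernel's uninitialized_var() macro (the variable name is
      illustrative, not taken from mpc52xx_pic.c):

              int uninitialized_var(hwirq_level);     /* expands to
                                                       * "hwirq_level = hwirq_level",
                                                       * silencing gcc's false positive
                                                       * without adding an unreachable
                                                       * 'default:' case */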
      
      Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
      Cc: stable-rt@vger.kernel.org
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Anatolij Gustschin <agust@denx.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f022b0f6
    • genirq: Set the irq thread policy without checking CAP_SYS_NICE · 195fbb5b
      Thomas Pfaff authored
      In commit ee238713 ("genirq: Set irq thread to RT priority on
      creation") we moved the assignment of the thread's priority from the
      thread's function into __setup_irq(). That function may run in user
      context, for instance if the user opens a UART node and the driver
      requests the interrupt in its ->open() callback. That user may not have
      CAP_SYS_NICE, so the irq thread ends up running with the SCHED_OTHER
      policy instead of SCHED_FIFO.

      This patch uses sched_setscheduler_nocheck() so we omit the CAP_SYS_NICE
      check which is otherwise required to set the SCHED_FIFO policy.
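
      A minimal sketch of the change in __setup_irq() (kernel/irq/manage.c),
      assuming the 3.x code that makes the irq thread SCHED_FIFO:

              static const struct sched_param param = {
                      .sched_priority = MAX_USER_RT_PRIO / 2,
              };

              /* before: sched_setscheduler(t, SCHED_FIFO, &param);
               *   -> checks CAP_SYS_NICE against the calling user context and
               *      may fail, leaving the irq thread at SCHED_OTHER
               * after: skip the permission check; the policy is a kernel
               * decision, not the caller's */
              sched_setscheduler_nocheck(t, SCHED_FIFO, &param);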
      
      Cc: Ivo Sieben <meltedpianoman@gmail.com>
      Cc: stable@vger.kernel.org
      Cc: stable-rt@vger.kernel.org
      Signed-off-by: Thomas Pfaff <tpfaff@pcs.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      [bigeasy: rewrite the changelog]
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      195fbb5b