- Feb 12, 2016
-
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.4.1-rt6 patch set. Changes since v4.4.1-rt5: - The rtmutex wait_lock is taken with interrupts disabled again. It fixes a possible deadlock in the posix timer code. Patch by Thomas Gleixner. - Don't disable interrupts around atomic_dec_and_lock() in wb_congested_put(). - Use an RCU lock in call_step_hook() on ARM64 to avoid a sleeping-while-atomic issue. Patch by Yang Shi. - In migrate_disable() we now use the fast / atomic path if we are called with interrupts disabled. This avoids a recursion with lockdep in some cases. - The migrate_disable()/_enable() invocation has been moved from the locking macro into the underlying rt_mutex functions. This makes the kernel a tiny bit smaller. - We now try to invoke migrate_enable() before we schedule() out while waiting for a lock. This optimization should allow the scheduler to put the task on another CPU once it becomes runnable while the original CPU is busy. This does not work for nested locks. Patch by Thomas Gleixner. - stop_machine.c was converted to use raw locks. This patch was identified as causing problems during hotplug and was reverted. - The useless rcu_bh thread has been deactivated. - Manish Jaggi reported a sleeping-while-atomic issue on ARM64 with KVM. Josh Cartwright sent a patch. Known issues: - bcache stays disabled - CPU hotplug got a little better but can deadlock. - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information. Since Clark can no longer reproduce it and hasn't seen it, it will be removed from this list and moved to the bugzilla. The delta patch against 4.4.1-rt5 is appended below and can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/incr/patch-4.4.1-rt5-rt6.patch.xz You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.4.1-rt6 The RT patch against 4.4.1 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patch-4.4.1-rt6.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patches-4.4.5-rt6.tar.xz Sebastian Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
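The wb_congested_put() item above relies on a pattern worth spelling out: atomic_dec_and_lock() only acquires the lock when the reference count actually drops to zero, so there is no need to wrap the whole operation in local_irq_save()/local_irq_restore(). The sketch below is purely illustrative (the struct, lock and function names are made up for this example) and is not the real writeback code:

	/* Illustrative only -- not the actual wb_congested_put() code. */
	#include <linux/atomic.h>
	#include <linux/spinlock.h>
	#include <linux/list.h>
	#include <linux/slab.h>

	static DEFINE_SPINLOCK(obj_lock);	/* protects the object list */

	struct obj {
		atomic_t refcnt;
		struct list_head node;
	};

	static void obj_put(struct obj *o)
	{
		/*
		 * No local_irq_save() around this: the lock is only taken
		 * when the count really hits zero.
		 */
		if (atomic_dec_and_lock(&o->refcnt, &obj_lock)) {
			list_del(&o->node);
			spin_unlock(&obj_lock);
			kfree(o);
		}
	}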
-
- Feb 04, 2016
-
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.4.1-rt5 patch set. Changes since v4.4.1-rt4: - various compile fixes found by kbuild test robot. - Mike Galbraith spotted that migrate_disable() sets ->nr_cpus_allowed to one by accident. - Christoph Mathys reported that the "preemptirqsoff_hist" tracer reboots the system once enabled. The problem has been resolved. - Thomas Gleixner sent a patch to set a default affinity mask for interrupts via the kernel command line. - Yang Shi reported that some perf events were reported as "not counted" instead of the actual numbers. The problem has been resolved. Known issues: - bcache stays disabled - CPU hotplug is not better than before - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information The delta patch against 4.4.1-rt4 is appended below and can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/incr/patch-4.4.1-rt4-rt5.patch.xz You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.4.1-rt5 The RT patch against 4.4.1 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patch-4.4.1-rt5.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patches-4.4.1-rt5.tar.xz Sebastian Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
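A note on the "default affinity mask for interrupts via the kernel command line" item: as far as I can tell this refers to the irqaffinity= boot parameter (treat the exact name as an assumption; it is not spelled out in the announcement). It takes a CPU list and seeds the default affinity used for newly set up interrupts, e.g.:

	irqaffinity=0-3

added to the kernel command line routes newly requested IRQs to CPUs 0-3 by default, leaving the remaining CPUs free for RT work (individual IRQs can still be moved later via /proc/irq/*/smp_affinity).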
-
- Feb 01, 2016
-
-
Sebastian Andrzej Siewior authored
Sebastian Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
- Jan 22, 2016
-
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.4-rt3 patch set. Changes since v4.4-rt2: - various compile fixes found by kbuild test robot and Grygorii Strashko. - kbuild test robot reported that we open interrupts too early in ptrace_freeze_traced(). - dropping a GPIO patch from the OMAP queue which is no longer required (requested by Grygorii Strashko) - dropping a retry loop in the mm/anon_vma_free() which was probably just duct tape and no longer seems required. - Various people pointed out that the AT91 clocksource driver did not compile. It does now. However AT91 does not yet boot. There are two issues: - the free_irq() call from the irq-off region is not good and triggers a warning because it is invoked twice. This will be addressed later, the current patch is not bulletproof and not yet part of the series. - The PMC driver invokes request_irq() very early which leads to a NULL pointer exception (non-RT with threaded interrupts has the same problem). A longer explanation by Alexandre Belloni, and the current patch series he refers to, can be found at: http://lkml.kernel.org/r/1452997394-8554-1-git-send-email-alexandre.belloni@free-electrons.com - Using a virtual network device (like a bridge) could lead to a "Dead loop" message and the packet being dropped. This problem has been fixed. - Julia Lawall sent a patch against hwlat_detector to "move constants to the right of binary operators". - Carsten Emde sent a patch to fix the latency histogram tracer. - Mike Galbraith reported that the softirq ate about 25% of CPU time while doing nothing. The problem was fixed. - Grygorii Strashko pointed out that two RCU/ksoftirqd changes that were made to the non-RT version of the code did not make it into the RT version. This was corrected. - btrfs forgot to initialize a seqcount variable which prints a warning if used with lockdep. - A few users of napi_alloc_cache() were not protected against reentrance. - Grygorii Strashko fixed highmem on ARM. - Mike Galbraith reported that all tasks run on CPU0 even on a system with more than one CPU. Problem fixed by Thomas Gleixner. - Anders Roxell sent two patches (against coupled and vsp1) because they did not compile or printed a warning on -RT. - Mike Galbraith pointed out that we forgot to check for NEED_RESCHED_LAZY in an exit path on X86 and provided a patch. - Mike Galbraith pointed out that we don't consider the preempt_lazy_count in the common preemption check and provided a patch. With this fixed, the SCHED_OTHER performance should improve. - A high network load could lead to RCU stalls followed by the OOM killer. Say a slower ARM machine on a GBit link running RT tasks, doing network IO (at an RT prio) and getting hit with a flood ping at a high rate. NAPI does not really kick in because each time NAPI tries to defer processing it starts again in the context of the IRQ thread of the network driver. This has been fixed in two steps: - once the NAPI budget is up, we schedule ksoftirqd. This now works on -RT, too. - ksoftirqd now runs at SCHED_OTHER priority, like it does on !RT. Now the scheduler can preempt ksoftirqd and let RCU do its job. The timer and hrtimer softirq processing now happens in ktimersoftd which runs at SCHED_FIFO (as ksoftirqd used to). - Grygorii Strashko pointed out that if RCU_EXPERT is not enabled then we can't select RCU_BOOST. Therefore RCU_EXPERT is default y on RT. - Grygorii Strashko pointed out that we fail to check for NEED_RESCHED_LAZY in an exit path on ARM. This has been fixed on ARM and on ARM64 as well.
This was a lot and I hope I haven't forgotten anything important. Known issues: - bcache stays disabled - CPU hotplug is not better than before - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information The delta patch against 4.4-rt2 is appended below and can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/incr/patch-4.4-rt2-rt3.patch You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.4-rt3 The RT patch against 4.4 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patch-4.4-rt3.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patches-4.4-rt3.tar.xz Sebastian Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
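To make the ksoftirqd/ktimersoftd split described above a bit more concrete: timer and hrtimer softirq processing keeps an RT priority in the new ktimersoftd thread, while ksoftirqd drops to SCHED_OTHER so the scheduler (and therefore RCU) can preempt it under network load. A rough sketch of how such policies get assigned to kernel threads follows; the function and variable names are illustrative, this is not the actual softirq/smpboot code:

	/* Illustrative sketch -- not the real kernel/softirq.c code. */
	#include <linux/sched.h>
	#include <linux/kthread.h>

	static void set_softirq_thread_policies(struct task_struct *ksoftirqd_task,
						struct task_struct *ktimersoftd_task)
	{
		struct sched_param fifo  = { .sched_priority = 1 };
		struct sched_param other = { .sched_priority = 0 };

		/* Timer/hrtimer softirq processing keeps an RT policy ... */
		sched_setscheduler(ktimersoftd_task, SCHED_FIFO, &fifo);

		/*
		 * ... while the remaining softirqs run as a normal task that
		 * the scheduler can preempt, letting RCU make progress.
		 */
		sched_setscheduler(ksoftirqd_task, SCHED_NORMAL, &other);
	}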
-
- Jan 12, 2016
-
-
Sebastian Andrzej Siewior authored
Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
- Dec 23, 2015
-
-
Sebastian Andrzej Siewior authored
Please don't continue reading before Christmas Eve (or morning, depending on your schedule). If you don't celebrate Christmas, well, go ahead. Dear RT folks! I'm pleased to announce the v4.4-rc6-rt1 patch set. I tested it on my AMD A10, 64bit. Nothing exploded so far, the filesystem is still there. I haven't tested it on anything else. Before someone asks: this does not mean it does *not* work on ARM; I simply did not try it. If you are brave then download it, install it and have fun. If something breaks, please report it. If your machine starts blinking like a Christmas tree while using the patch then *please* send a photo. Changes since v4.1.15-rt17: - rebase to v4.4-rc6 Known issues (inherited from v4.1-RT): - bcache stays disabled - CPU hotplug is not better than before - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information - Christoph Mathys reported a stall in cgroup locking code while using Linux containers. You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.4-rc6-rt1 The RT patch against 4.4-rc6 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patch-4.4-rc6-rt1.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.4/patches-4.4-rc6-rt1.tar.xz Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
- Dec 22, 2015
-
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.1.15-rt17 patch set. Changes since v4.1.15-rt16: Axel Lin (1): gpio: omap: Fix missing raw locks conversion Grygorii Strashko (15): gpio: omap: fix omap_gpio_free to not clean up irq configuration gpio: omap: fix error handling in omap_gpio_irq_type gpio: omap: rework omap_x_irq_shutdown to touch only irqs specific registers gpio: omap: rework omap_gpio_request to touch only gpio specific registers gpio: omap: rework omap_gpio_irq_startup to handle current pin state properly gpio: omap: add missed spin_unlock_irqrestore in omap_gpio_irq_type gpio: omap: prevent module from being unloaded while in use gpio: omap: remove wrong irq_domain_remove usage in probe gpio: omap: switch to use platform_get_irq gpio: omap: fix omap2_set_gpio_debounce gpio: omap: protect regs access in omap_gpio_irq_handler gpio: omap: fix clk_prepare/unprepare usage gpio: omap: fix static checker warning gpio: omap: move pm runtime in irq_chip.irq_bus_lock/sync_unlock gpio: omap: convert to use generic irq handler Russ Dill (1): ARM: OMAP2: Drop the concept of certain power domains not being able to lose context. Sebastian Andrzej Siewior (4): Revert "x86: Do not disable preemption in int3 on 32bit" Revert "gpio: omap: use raw locks for locking" gpio: omap: use raw locks for locking v4.1.15-rt17 Tony Lindgren (3): gpio: omap: Allow building as a loadable module gpio: omap: Fix gpiochip_add() handling for deferred probe gpio: omap: Fix GPIO numbering for deferred probe Yang Shi (1): x86/signal: delay calling signals on 32bit bmouring@ni.com (1): rtmutex: Use chainwalking control enum Known issues: - bcache stays disabled - CPU hotplug is not better than before - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information - Christoph Mathys reported a stall in cgroup locking code while using Linux containers. The delta patch against 4.1.15-rt17 is appended below and can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/incr/patch-4.1.15-rt16-rt17.patch.xz You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.1.15-rt17 The RT patch against 4.1.15 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.15-rt17.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.15-rt17.tar.xz Sebastian diff --git a/arch/arm/mach-omap2/gpio.c b/arch/arm/mach-omap2/gpio.c index 7a577145b68b..689a1af47c80 100644 --- a/arch/arm/mach-omap2/gpio.c +++ b/arch/arm/mach-omap2/gpio.c @@ -130,7 +130,6 @@ static int __init omap2_gpio_dev_init(struct omap_hwmod *oh, void *unused) } pwrdm = omap_hwmod_get_pwrdm(oh); - pdata->loses_context = pwrdm_can_ever_lose_context(pwrdm); pdev = omap_device_build(name, id - 1, oh, pdata, sizeof(*pdata)); kfree(pdata); diff --git a/arch/arm/mach-omap2/powerdomain.c b/arch/arm/mach-omap2/powerdomain.c index 78af6d8cf2e2..ef4227ffa3b6 100644 --- a/arch/arm/mach-omap2/powerdomain.c +++ b/arch/arm/mach-omap2/powerdomain.c @@ -1166,43 +1166,3 @@ int pwrdm_get_context_loss_count(struct powerdomain *pwrdm) return count; } -/** - * pwrdm_can_ever_lose_context - can this powerdomain ever lose context? - * @pwrdm: struct powerdomain * - * - * Given a struct powerdomain * @pwrdm, returns 1 if the powerdomain - * can lose either memory or logic context or if @pwrdm is invalid, or - * returns 0 otherwise. 
This function is not concerned with how the - * powerdomain registers are programmed (i.e., to go off or not); it's - * concerned with whether it's ever possible for this powerdomain to - * go off while some other part of the chip is active. This function - * assumes that every powerdomain can go to either ON or INACTIVE. - */ -bool pwrdm_can_ever_lose_context(struct powerdomain *pwrdm) -{ - int i; - - if (!pwrdm) { - pr_debug("powerdomain: %s: invalid powerdomain pointer\n", - __func__); - return 1; - } - - if (pwrdm->pwrsts & PWRSTS_OFF) - return 1; - - if (pwrdm->pwrsts & PWRSTS_RET) { - if (pwrdm->pwrsts_logic_ret & PWRSTS_OFF) - return 1; - - for (i = 0; i < pwrdm->banks; i++) - if (pwrdm->pwrsts_mem_ret[i] & PWRSTS_OFF) - return 1; - } - - for (i = 0; i < pwrdm->banks; i++) - if (pwrdm->pwrsts_mem_on[i] & PWRSTS_OFF) - return 1; - - return 0; -} diff --git a/arch/arm/mach-omap2/powerdomain.h b/arch/arm/mach-omap2/powerdomain.h index 28a796ce07d7..5e0c033a21db 100644 --- a/arch/arm/mach-omap2/powerdomain.h +++ b/arch/arm/mach-omap2/powerdomain.h @@ -244,7 +244,6 @@ int pwrdm_state_switch(struct powerdomain *pwrdm); int pwrdm_pre_transition(struct powerdomain *pwrdm); int pwrdm_post_transition(struct powerdomain *pwrdm); int pwrdm_get_context_loss_count(struct powerdomain *pwrdm); -bool pwrdm_can_ever_lose_context(struct powerdomain *pwrdm); extern int omap_set_pwrdm_state(struct powerdomain *pwrdm, u8 state); diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h index b1b08a28c72a..0e7bfe98e1d1 100644 --- a/arch/x86/include/asm/signal.h +++ b/arch/x86/include/asm/signal.h @@ -32,7 +32,7 @@ typedef struct { * TIF_NOTIFY_RESUME and set up the signal to be sent on exit of the * trap. */ -#if defined(CONFIG_PREEMPT_RT_FULL) && defined(CONFIG_X86_64) +#if defined(CONFIG_PREEMPT_RT_FULL) #define ARCH_RT_DELAYS_SIGNAL_SEND #endif diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index ebae118938ef..324ab5247687 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -88,21 +88,9 @@ static inline void conditional_sti(struct pt_regs *regs) local_irq_enable(); } -static inline void conditional_sti_ist(struct pt_regs *regs) +static inline void preempt_conditional_sti(struct pt_regs *regs) { -#ifdef CONFIG_X86_64 - /* - * X86_64 uses a per CPU stack on the IST for certain traps - * like int3. The task can not be preempted when using one - * of these stacks, thus preemption must be disabled, otherwise - * the stack can be corrupted if the task is scheduled out, - * and another task comes in and uses this stack. - * - * On x86_32 the task keeps its own stack and it is OK if the - * task schedules out. - */ preempt_count_inc(); -#endif if (regs->flags & X86_EFLAGS_IF) local_irq_enable(); } @@ -113,13 +101,11 @@ static inline void conditional_cli(struct pt_regs *regs) local_irq_disable(); } -static inline void conditional_cli_ist(struct pt_regs *regs) +static inline void preempt_conditional_cli(struct pt_regs *regs) { if (regs->flags & X86_EFLAGS_IF) local_irq_disable(); -#ifdef CONFIG_X86_64 preempt_count_dec(); -#endif } enum ctx_state ist_enter(struct pt_regs *regs) @@ -550,9 +536,9 @@ dotraplinkage void notrace do_int3(struct pt_regs *regs, long error_code) * as we may switch to the interrupt stack. 
*/ debug_stack_usage_inc(); - conditional_sti_ist(regs); + preempt_conditional_sti(regs); do_trap(X86_TRAP_BP, SIGTRAP, "int3", regs, error_code, NULL); - conditional_cli_ist(regs); + preempt_conditional_cli(regs); debug_stack_usage_dec(); exit: ist_exit(regs, prev_state); @@ -682,12 +668,12 @@ dotraplinkage void do_debug(struct pt_regs *regs, long error_code) debug_stack_usage_inc(); /* It's safe to allow irq's after DR6 has been saved */ - conditional_sti_ist(regs); + preempt_conditional_sti(regs); if (v8086_mode(regs)) { handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, X86_TRAP_DB); - conditional_cli_ist(regs); + preempt_conditional_cli(regs); debug_stack_usage_dec(); goto exit; } @@ -707,7 +693,7 @@ dotraplinkage void do_debug(struct pt_regs *regs, long error_code) si_code = get_si_code(tsk->thread.debugreg6); if (tsk->thread.debugreg6 & (DR_STEP | DR_TRAP_BITS) || user_icebp) send_sigtrap(tsk, regs, error_code, si_code); - conditional_cli_ist(regs); + preempt_conditional_cli(regs); debug_stack_usage_dec(); exit: diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig index caefe806db5e..ff7df95de3bf 100644 --- a/drivers/gpio/Kconfig +++ b/drivers/gpio/Kconfig @@ -308,7 +308,7 @@ config GPIO_OCTEON family of SOCs. config GPIO_OMAP - bool "TI OMAP GPIO support" if COMPILE_TEST && !ARCH_OMAP2PLUS + tristate "TI OMAP GPIO support" if ARCH_OMAP2PLUS || COMPILE_TEST default y if ARCH_OMAP depends on ARM select GENERIC_IRQ_CHIP diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c index a0ace2758e2e..4916fd726dce 100644 --- a/drivers/gpio/gpio-omap.c +++ b/drivers/gpio/gpio-omap.c @@ -29,6 +29,7 @@ #include <linux/platform_data/gpio-omap.h> #define OFF_MODE 1 +#define OMAP4_GPIO_DEBOUNCINGTIME_MASK 0xFF static LIST_HEAD(omap_gpio_list); @@ -50,7 +51,7 @@ struct gpio_regs { struct gpio_bank { struct list_head node; void __iomem *base; - u16 irq; + int irq; u32 non_wakeup_gpios; u32 enabled_non_wakeup_gpios; struct gpio_regs context; @@ -58,6 +59,7 @@ struct gpio_bank { u32 level_mask; u32 toggle_mask; raw_spinlock_t lock; + raw_spinlock_t wa_lock; struct gpio_chip chip; struct clk *dbck; u32 mod_usage; @@ -67,7 +69,7 @@ struct gpio_bank { struct device *dev; bool is_mpuio; bool dbck_flag; - bool loses_context; + bool context_valid; int stride; u32 width; @@ -175,7 +177,7 @@ static inline void omap_gpio_rmw(void __iomem *base, u32 reg, u32 mask, bool set static inline void omap_gpio_dbck_enable(struct gpio_bank *bank) { if (bank->dbck_enable_mask && !bank->dbck_enabled) { - clk_prepare_enable(bank->dbck); + clk_enable(bank->dbck); bank->dbck_enabled = true; writel_relaxed(bank->dbck_enable_mask, @@ -193,7 +195,7 @@ static inline void omap_gpio_dbck_disable(struct gpio_bank *bank) */ writel_relaxed(0, bank->base + bank->regs->debounce_en); - clk_disable_unprepare(bank->dbck); + clk_disable(bank->dbck); bank->dbck_enabled = false; } } @@ -204,8 +206,9 @@ static inline void omap_gpio_dbck_disable(struct gpio_bank *bank) * @offset: the gpio number on this @bank * @debounce: debounce time to use * - * OMAP's debounce time is in 31us steps so we need - * to convert and round up to the closest unit. + * OMAP's debounce time is in 31us steps + * <debounce time> = (GPIO_DEBOUNCINGTIME[7:0].DEBOUNCETIME + 1) x 31 + * so we need to convert and round up to the closest unit. 
*/ static void omap2_set_gpio_debounce(struct gpio_bank *bank, unsigned offset, unsigned debounce) @@ -213,34 +216,33 @@ static void omap2_set_gpio_debounce(struct gpio_bank *bank, unsigned offset, void __iomem *reg; u32 val; u32 l; + bool enable = !!debounce; if (!bank->dbck_flag) return; - if (debounce < 32) - debounce = 0x01; - else if (debounce > 7936) - debounce = 0xff; - else - debounce = (debounce / 0x1f) - 1; + if (enable) { + debounce = DIV_ROUND_UP(debounce, 31) - 1; + debounce &= OMAP4_GPIO_DEBOUNCINGTIME_MASK; + } l = BIT(offset); - clk_prepare_enable(bank->dbck); + clk_enable(bank->dbck); reg = bank->base + bank->regs->debounce; writel_relaxed(debounce, reg); reg = bank->base + bank->regs->debounce_en; val = readl_relaxed(reg); - if (debounce) + if (enable) val |= l; else val &= ~l; bank->dbck_enable_mask = val; writel_relaxed(val, reg); - clk_disable_unprepare(bank->dbck); + clk_disable(bank->dbck); /* * Enable debounce clock per module. * This call is mandatory because in omap_gpio_request() when @@ -285,7 +287,7 @@ static void omap_clear_gpio_debounce(struct gpio_bank *bank, unsigned offset) bank->context.debounce = 0; writel_relaxed(bank->context.debounce, bank->base + bank->regs->debounce); - clk_disable_unprepare(bank->dbck); + clk_disable(bank->dbck); bank->dbck_enabled = false; } } @@ -488,9 +490,6 @@ static int omap_gpio_irq_type(struct irq_data *d, unsigned type) unsigned long flags; unsigned offset = d->hwirq; - if (!BANK_USED(bank)) - pm_runtime_get_sync(bank->dev); - if (type & ~IRQ_TYPE_SENSE_MASK) return -EINVAL; @@ -500,10 +499,15 @@ static int omap_gpio_irq_type(struct irq_data *d, unsigned type) raw_spin_lock_irqsave(&bank->lock, flags); retval = omap_set_gpio_triggering(bank, offset, type); + if (retval) { + raw_spin_unlock_irqrestore(&bank->lock, flags); + goto error; + } omap_gpio_init_irq(bank, offset); if (!omap_gpio_is_input(bank, offset)) { raw_spin_unlock_irqrestore(&bank->lock, flags); - return -EINVAL; + retval = -EINVAL; + goto error; } raw_spin_unlock_irqrestore(&bank->lock, flags); @@ -512,6 +516,9 @@ static int omap_gpio_irq_type(struct irq_data *d, unsigned type) else if (type & (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING)) __irq_set_handler_locked(d->irq, handle_edge_irq); + return 0; + +error: return retval; } @@ -638,22 +645,18 @@ static int omap_set_gpio_wakeup(struct gpio_bank *bank, unsigned offset, return 0; } -static void omap_reset_gpio(struct gpio_bank *bank, unsigned offset) -{ - omap_set_gpio_direction(bank, offset, 1); - omap_set_gpio_irqenable(bank, offset, 0); - omap_clear_gpio_irqstatus(bank, offset); - omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE); - omap_clear_gpio_debounce(bank, offset); -} - /* Use disable_irq_wake() and enable_irq_wake() functions from drivers */ static int omap_gpio_wake_enable(struct irq_data *d, unsigned int enable) { struct gpio_bank *bank = omap_irq_data_get_bank(d); unsigned offset = d->hwirq; + int ret; - return omap_set_gpio_wakeup(bank, offset, enable); + ret = omap_set_gpio_wakeup(bank, offset, enable); + if (!ret) + ret = irq_set_irq_wake(bank->irq, enable); + + return ret; } static int omap_gpio_request(struct gpio_chip *chip, unsigned offset) @@ -669,14 +672,7 @@ static int omap_gpio_request(struct gpio_chip *chip, unsigned offset) pm_runtime_get_sync(bank->dev); raw_spin_lock_irqsave(&bank->lock, flags); - /* Set trigger to none. You need to enable the desired trigger with - * request_irq() or set_irq_type(). Only do this if the IRQ line has - * not already been requested. 
- */ - if (!LINE_USED(bank->irq_usage, offset)) { - omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE); - omap_enable_gpio_module(bank, offset); - } + omap_enable_gpio_module(bank, offset); bank->mod_usage |= BIT(offset); raw_spin_unlock_irqrestore(&bank->lock, flags); @@ -690,8 +686,11 @@ static void omap_gpio_free(struct gpio_chip *chip, unsigned offset) raw_spin_lock_irqsave(&bank->lock, flags); bank->mod_usage &= ~(BIT(offset)); + if (!LINE_USED(bank->irq_usage, offset)) { + omap_set_gpio_direction(bank, offset, 1); + omap_clear_gpio_debounce(bank, offset); + } omap_disable_gpio_module(bank, offset); - omap_reset_gpio(bank, offset); raw_spin_unlock_irqrestore(&bank->lock, flags); /* @@ -711,29 +710,27 @@ static void omap_gpio_free(struct gpio_chip *chip, unsigned offset) * line's interrupt handler has been run, we may miss some nested * interrupts. */ -static void omap_gpio_irq_handler(unsigned int irq, struct irq_desc *desc) +static irqreturn_t omap_gpio_irq_handler(int irq, void *gpiobank) { void __iomem *isr_reg = NULL; u32 isr; unsigned int bit; - struct gpio_bank *bank; - int unmasked = 0; - struct irq_chip *irqchip = irq_desc_get_chip(desc); - struct gpio_chip *chip = irq_get_handler_data(irq); + struct gpio_bank *bank = gpiobank; + unsigned long wa_lock_flags; + unsigned long lock_flags; - chained_irq_enter(irqchip, desc); - - bank = container_of(chip, struct gpio_bank, chip); isr_reg = bank->base + bank->regs->irqstatus; - pm_runtime_get_sync(bank->dev); - if (WARN_ON(!isr_reg)) goto exit; + pm_runtime_get_sync(bank->dev); + while (1) { u32 isr_saved, level_mask = 0; u32 enabled; + raw_spin_lock_irqsave(&bank->lock, lock_flags); + enabled = omap_get_gpio_irqbank_mask(bank); isr_saved = isr = readl_relaxed(isr_reg) & enabled; @@ -747,12 +744,7 @@ static void omap_gpio_irq_handler(unsigned int irq, struct irq_desc *desc) omap_clear_gpio_irqbank(bank, isr_saved & ~level_mask); omap_enable_gpio_irqbank(bank, isr_saved & ~level_mask); - /* if there is only edge sensitive GPIO pin interrupts - configured, we could unmask GPIO bank interrupt immediately */ - if (!level_mask && !unmasked) { - unmasked = 1; - chained_irq_exit(irqchip, desc); - } + raw_spin_unlock_irqrestore(&bank->lock, lock_flags); if (!isr) break; @@ -761,6 +753,7 @@ static void omap_gpio_irq_handler(unsigned int irq, struct irq_desc *desc) bit = __ffs(isr); isr &= ~(BIT(bit)); + raw_spin_lock_irqsave(&bank->lock, lock_flags); /* * Some chips can't respond to both rising and falling * at the same time. 
If this irq was requested with @@ -771,18 +764,20 @@ static void omap_gpio_irq_handler(unsigned int irq, struct irq_desc *desc) if (bank->toggle_mask & (BIT(bit))) omap_toggle_gpio_edge_triggering(bank, bit); + raw_spin_unlock_irqrestore(&bank->lock, lock_flags); + + raw_spin_lock_irqsave(&bank->wa_lock, wa_lock_flags); + generic_handle_irq(irq_find_mapping(bank->chip.irqdomain, bit)); + + raw_spin_unlock_irqrestore(&bank->wa_lock, + wa_lock_flags); } } - /* if bank has any level sensitive GPIO pin interrupt - configured, we must unmask the bank interrupt only after - handler(s) are executed in order to avoid spurious bank - interrupt */ exit: - if (!unmasked) - chained_irq_exit(irqchip, desc); pm_runtime_put(bank->dev); + return IRQ_HANDLED; } static unsigned int omap_gpio_irq_startup(struct irq_data *d) @@ -791,15 +786,22 @@ static unsigned int omap_gpio_irq_startup(struct irq_data *d) unsigned long flags; unsigned offset = d->hwirq; - if (!BANK_USED(bank)) - pm_runtime_get_sync(bank->dev); - raw_spin_lock_irqsave(&bank->lock, flags); - omap_gpio_init_irq(bank, offset); + + if (!LINE_USED(bank->mod_usage, offset)) + omap_set_gpio_direction(bank, offset, 1); + else if (!omap_gpio_is_input(bank, offset)) + goto err; + omap_enable_gpio_module(bank, offset); + bank->irq_usage |= BIT(offset); + raw_spin_unlock_irqrestore(&bank->lock, flags); omap_gpio_unmask_irq(d); return 0; +err: + raw_spin_unlock_irqrestore(&bank->lock, flags); + return -EINVAL; } static void omap_gpio_irq_shutdown(struct irq_data *d) @@ -810,9 +812,26 @@ static void omap_gpio_irq_shutdown(struct irq_data *d) raw_spin_lock_irqsave(&bank->lock, flags); bank->irq_usage &= ~(BIT(offset)); + omap_set_gpio_irqenable(bank, offset, 0); + omap_clear_gpio_irqstatus(bank, offset); + omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE); + if (!LINE_USED(bank->mod_usage, offset)) + omap_clear_gpio_debounce(bank, offset); omap_disable_gpio_module(bank, offset); - omap_reset_gpio(bank, offset); raw_spin_unlock_irqrestore(&bank->lock, flags); +} + +static void omap_gpio_irq_bus_lock(struct irq_data *data) +{ + struct gpio_bank *bank = omap_irq_data_get_bank(data); + + if (!BANK_USED(bank)) + pm_runtime_get_sync(bank->dev); +} + +static void gpio_irq_bus_sync_unlock(struct irq_data *data) +{ + struct gpio_bank *bank = omap_irq_data_get_bank(data); /* * If this is the last IRQ to be freed in the bank, @@ -1048,10 +1067,6 @@ static void omap_gpio_mod_init(struct gpio_bank *bank) /* Initialize interface clk ungated, module enabled */ if (bank->regs->ctrl) writel_relaxed(0, base + bank->regs->ctrl); - - bank->dbck = clk_get(bank->dev, "dbclk"); - if (IS_ERR(bank->dbck)) - dev_err(bank->dev, "Could not get gpio dbck\n"); } static int omap_gpio_chip_init(struct gpio_bank *bank, struct irq_chip *irqc) @@ -1080,7 +1095,6 @@ static int omap_gpio_chip_init(struct gpio_bank *bank, struct irq_chip *irqc) } else { bank->chip.label = "gpio"; bank->chip.base = gpio; - gpio += bank->width; } bank->chip.ngpio = bank->width; @@ -1090,6 +1104,9 @@ static int omap_gpio_chip_init(struct gpio_bank *bank, struct irq_chip *irqc) return ret; } + if (!bank->is_mpuio) + gpio += bank->width; + #ifdef CONFIG_ARCH_OMAP1 /* * REVISIT: Once we have OMAP1 supporting SPARSE_IRQ, we can drop @@ -1112,7 +1129,7 @@ static int omap_gpio_chip_init(struct gpio_bank *bank, struct irq_chip *irqc) } ret = gpiochip_irqchip_add(&bank->chip, irqc, - irq_base, omap_gpio_irq_handler, + irq_base, handle_bad_irq, IRQ_TYPE_NONE); if (ret) { @@ -1121,10 +1138,14 @@ static int 
omap_gpio_chip_init(struct gpio_bank *bank, struct irq_chip *irqc) return -ENODEV; } - gpiochip_set_chained_irqchip(&bank->chip, irqc, - bank->irq, omap_gpio_irq_handler); + gpiochip_set_chained_irqchip(&bank->chip, irqc, bank->irq, NULL); - return 0; + ret = devm_request_irq(bank->dev, bank->irq, omap_gpio_irq_handler, + 0, dev_name(bank->dev), bank); + if (ret) + gpiochip_remove(&bank->chip); + + return ret; } static const struct of_device_id omap_gpio_match[]; @@ -1163,17 +1184,23 @@ static int omap_gpio_probe(struct platform_device *pdev) irqc->irq_unmask = omap_gpio_unmask_irq, irqc->irq_set_type = omap_gpio_irq_type, irqc->irq_set_wake = omap_gpio_wake_enable, + irqc->irq_bus_lock = omap_gpio_irq_bus_lock, + irqc->irq_bus_sync_unlock = gpio_irq_bus_sync_unlock, irqc->name = dev_name(&pdev->dev); - res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); - if (unlikely(!res)) { - dev_err(dev, "Invalid IRQ resource\n"); - return -ENODEV; + bank->irq = platform_get_irq(pdev, 0); + if (bank->irq <= 0) { + if (!bank->irq) + bank->irq = -ENXIO; + if (bank->irq != -EPROBE_DEFER) + dev_err(dev, + "can't get irq resource ret=%d\n", bank->irq); + return bank->irq; } - bank->irq = res->start; bank->dev = dev; bank->chip.dev = dev; + bank->chip.owner = THIS_MODULE; bank->dbck_flag = pdata->dbck_flag; bank->stride = pdata->bank_stride; bank->width = pdata->bank_width; @@ -1183,15 +1210,9 @@ static int omap_gpio_probe(struct platform_device *pdev) #ifdef CONFIG_OF_GPIO bank->chip.of_node = of_node_get(node); #endif - if (node) { - if (!of_property_read_bool(node, "ti,gpio-always-on")) - bank->loses_context = true; - } else { - bank->loses_context = pdata->loses_context; - - if (bank->loses_context) - bank->get_context_loss_count = - pdata->get_context_loss_count; + if (!node) { + bank->get_context_loss_count = + pdata->get_context_loss_count; } if (bank->regs->set_dataout && bank->regs->clr_dataout) @@ -1200,15 +1221,26 @@ static int omap_gpio_probe(struct platform_device *pdev) bank->set_dataout = omap_set_gpio_dataout_mask; raw_spin_lock_init(&bank->lock); + raw_spin_lock_init(&bank->wa_lock); /* Static mapping, never released */ res = platform_get_resource(pdev, IORESOURCE_MEM, 0); bank->base = devm_ioremap_resource(dev, res); if (IS_ERR(bank->base)) { - irq_domain_remove(bank->chip.irqdomain); return PTR_ERR(bank->base); } + if (bank->dbck_flag) { + bank->dbck = devm_clk_get(bank->dev, "dbclk"); + if (IS_ERR(bank->dbck)) { + dev_err(bank->dev, + "Could not get gpio dbck. Disable debounce\n"); + bank->dbck_flag = false; + } else { + clk_prepare(bank->dbck); + } + } + platform_set_drvdata(pdev, bank); pm_runtime_enable(bank->dev); @@ -1221,8 +1253,11 @@ static int omap_gpio_probe(struct platform_device *pdev) omap_gpio_mod_init(bank); ret = omap_gpio_chip_init(bank, irqc); - if (ret) + if (ret) { + pm_runtime_put_sync(bank->dev); + pm_runtime_disable(bank->dev); return ret; + } omap_gpio_show_rev(bank); @@ -1233,6 +1268,19 @@ static int omap_gpio_probe(struct platform_device *pdev) return 0; } +static int omap_gpio_remove(struct platform_device *pdev) +{ + struct gpio_bank *bank = platform_get_drvdata(pdev); + + list_del(&bank->node); + gpiochip_remove(&bank->chip); + pm_runtime_disable(bank->dev); + if (bank->dbck_flag) + clk_unprepare(bank->dbck); + + return 0; +} + #ifdef CONFIG_ARCH_OMAP2PLUS #if defined(CONFIG_PM) @@ -1321,7 +1369,7 @@ static int omap_gpio_runtime_resume(struct device *dev) * been initialised and so initialise it now. Also initialise * the context loss count. 
*/ - if (bank->loses_context && !bank->context_valid) { + if (!bank->context_valid) { omap_gpio_init_context(bank); if (bank->get_context_loss_count) @@ -1342,17 +1390,15 @@ static int omap_gpio_runtime_resume(struct device *dev) writel_relaxed(bank->context.risingdetect, bank->base + bank->regs->risingdetect); - if (bank->loses_context) { - if (!bank->get_context_loss_count) { + if (!bank->get_context_loss_count) { + omap_gpio_restore_context(bank); + } else { + c = bank->get_context_loss_count(bank->dev); + if (c != bank->context_loss_count) { omap_gpio_restore_context(bank); } else { - c = bank->get_context_loss_count(bank->dev); - if (c != bank->context_loss_count) { - omap_gpio_restore_context(bank); - } else { - raw_spin_unlock_irqrestore(&bank->lock, flags); - return 0; - } + raw_spin_unlock_irqrestore(&bank->lock, flags); + return 0; } } @@ -1418,12 +1464,13 @@ static int omap_gpio_runtime_resume(struct device *dev) } #endif /* CONFIG_PM */ +#if IS_BUILTIN(CONFIG_GPIO_OMAP) void omap2_gpio_prepare_for_idle(int pwr_mode) { struct gpio_bank *bank; list_for_each_entry(bank, &omap_gpio_list, node) { - if (!BANK_USED(bank) || !bank->loses_context) + if (!BANK_USED(bank)) continue; bank->power_mode = pwr_mode; @@ -1437,12 +1484,13 @@ void omap2_gpio_resume_after_idle(void) struct gpio_bank *bank; list_for_each_entry(bank, &omap_gpio_list, node) { - if (!BANK_USED(bank) || !bank->loses_context) + if (!BANK_USED(bank)) continue; pm_runtime_get_sync(bank->dev); } } +#endif #if defined(CONFIG_PM) static void omap_gpio_init_context(struct gpio_bank *p) @@ -1598,6 +1646,7 @@ MODULE_DEVICE_TABLE(of, omap_gpio_match); static struct platform_driver omap_gpio_driver = { .probe = omap_gpio_probe, + .remove = omap_gpio_remove, .driver = { .name = "omap_gpio", .pm = &gpio_pm_ops, @@ -1615,3 +1664,13 @@ static int __init omap_gpio_drv_reg(void) return platform_driver_register(&omap_gpio_driver); } postcore_initcall(omap_gpio_drv_reg); + +static void __exit omap_gpio_exit(void) +{ + platform_driver_unregister(&omap_gpio_driver); +} +module_exit(omap_gpio_exit); + +MODULE_DESCRIPTION("omap gpio driver"); +MODULE_ALIAS("platform:gpio-omap"); +MODULE_LICENSE("GPL v2"); diff --git a/include/linux/platform_data/gpio-omap.h b/include/linux/platform_data/gpio-omap.h index 5d50b25a73d7..ff43e01b8ca9 100644 --- a/include/linux/platform_data/gpio-omap.h +++ b/include/linux/platform_data/gpio-omap.h @@ -198,7 +198,6 @@ struct omap_gpio_platform_data { int bank_width; /* GPIO bank width */ int bank_stride; /* Only needed for omap1 MPUIO */ bool dbck_flag; /* dbck required or not - True for OMAP3&4 */ - bool loses_context; /* whether the bank would ever lose context */ bool is_mpuio; /* whether the bank is of type MPUIO */ u32 non_wakeup_gpios; @@ -208,9 +207,17 @@ struct omap_gpio_platform_data { int (*get_context_loss_count)(struct device *dev); }; +#if IS_BUILTIN(CONFIG_GPIO_OMAP) extern void omap2_gpio_prepare_for_idle(int off_mode); extern void omap2_gpio_resume_after_idle(void); -extern void omap_set_gpio_debounce(int gpio, int enable); -extern void omap_set_gpio_debounce_time(int gpio, int enable); +#else +static inline void omap2_gpio_prepare_for_idle(int off_mode) +{ +} + +static inline void omap2_gpio_resume_after_idle(void) +{ +} +#endif #endif diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c index 20267595df07..e0b0d9b419b5 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -1008,7 +1008,7 @@ static void noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock) 
__set_current_state_no_track(TASK_UNINTERRUPTIBLE); pi_unlock(&self->pi_lock); - ret = task_blocks_on_rt_mutex(lock, &waiter, self, 0); + ret = task_blocks_on_rt_mutex(lock, &waiter, self, RT_MUTEX_MIN_CHAINWALK); BUG_ON(ret); for (;;) { diff --git a/localversion-rt b/localversion-rt index 1199ebade17b..1e584b47c987 100644 --- a/localversion-rt +++ b/localversion-rt @@ -1 +1 @@ --rt16 +-rt17 Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.1.15-rt16 patch set. Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
- Nov 18, 2015
-
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.1.13-rt15 patch set. Changes since v4.1.13-rt14: Sebastian Andrzej Siewior (1): v4.1.13-rt15 Thomas Gleixner (1): irqwork: Move irq safe work to irq context Known issues: - bcache stays disabled - CPU hotplug is not better than before - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information The delta patch against 4.1.13-rt15 is appended below and can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/incr/patch-4.1.13-rt14-rt15.patch.xz You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.1.13-rt15 The RT patch against 4.1.13 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.13-rt15.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.13-rt15.tar.xz Sebastian diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h --- a/include/linux/irq_work.h +++ b/include/linux/irq_work.h @@ -52,4 +52,10 @@ static inline bool irq_work_needs_cpu(void) { return false; } static inline void irq_work_run(void) { } #endif +#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL) +void irq_work_tick_soft(void); +#else +static inline void irq_work_tick_soft(void) { } +#endif + #endif /* _LINUX_IRQ_WORK_H */ diff --git a/kernel/irq_work.c b/kernel/irq_work.c --- a/kernel/irq_work.c +++ b/kernel/irq_work.c @@ -200,8 +200,17 @@ void irq_work_tick(void) if (!llist_empty(raised) && !arch_irq_work_has_interrupt()) irq_work_run_list(raised); + + if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL)) + irq_work_run_list(this_cpu_ptr(&lazy_list)); +} + +#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL) +void irq_work_tick_soft(void) +{ irq_work_run_list(this_cpu_ptr(&lazy_list)); } +#endif /* * Synchronize against the irq_work @entry, ensures the entry is not diff --git a/kernel/time/timer.c b/kernel/time/timer.c --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1455,7 +1455,7 @@ void update_process_times(int user_tick) scheduler_tick(); run_local_timers(); rcu_check_callbacks(user_tick); -#if defined(CONFIG_IRQ_WORK) && !defined(CONFIG_PREEMPT_RT_FULL) +#if defined(CONFIG_IRQ_WORK) if (in_irq()) irq_work_tick(); #endif @@ -1471,9 +1471,7 @@ static void run_timer_softirq(struct softirq_action *h) hrtimer_run_pending(); -#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL) - irq_work_tick(); -#endif + irq_work_tick_soft(); if (time_after_eq(jiffies, base->timer_jiffies)) __run_timers(base); diff --git a/localversion-rt b/localversion-rt --- a/localversion-rt +++ b/localversion-rt @@ -1 +1 @@ --rt14 +-rt15 Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.1.13-rt14 patch set. Changes since v4.1.12-rt13: none. Known issues: You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.1.y-rt git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.1.y-rt-rebase git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.1.y-rt-queue The RT patch against 4.1.13 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.13-rt14.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.13-rt14.tar.xz Sebastian Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
- Nov 07, 2015
-
-
Thomas Gleixner authored
Dear RT folks! I'm pleased to announce the v4.1.12-rt13 patch set. v4.1.12-rt12 is a non-announced update to incorporate the linux-4.1.y stable tree. Changes since v4.1.10-rt11: Yang Shi (1): bpf: Convert hashtab lock to raw lock Thomas Gleixner (2): rtmutex: Handle non enqueued waiters gracefully v4.1.12-rt13 Known issues: - bcache stays disabled - CPU hotplug is not better than before - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information The delta patch against 4.1.12-rt12 is appended below and can be found here: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/incr/patch-4.1.12-rt12-rt13.patch.xz You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.1.12-rt13 The RT patch against 4.1.12 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.12-rt13.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.12-rt13.tar.xz Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
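The "bpf: Convert hashtab lock to raw lock" entry follows the standard -RT conversion pattern: a lock that is taken in contexts which must not sleep on PREEMPT_RT is switched from spinlock_t (which becomes a sleeping rt_mutex on -RT) to raw_spinlock_t, which always spins. A generic sketch of that pattern (the names are illustrative, not the actual BPF hashtab code):

	/* Illustrative conversion pattern only. */
	#include <linux/spinlock.h>

	static DEFINE_RAW_SPINLOCK(bucket_lock);	/* was: DEFINE_SPINLOCK(bucket_lock) */

	static void bucket_update(void)
	{
		unsigned long flags;

		/*
		 * raw_spin_lock_irqsave() stays a real spinning lock on -RT,
		 * so this section is safe to enter from atomic context.
		 */
		raw_spin_lock_irqsave(&bucket_lock, flags);
		/* ... modify the bucket ... */
		raw_spin_unlock_irqrestore(&bucket_lock, flags);
	}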
-
- Nov 04, 2015
-
-
Thomas Gleixner authored
Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
- Oct 31, 2015
-
-
Thomas Gleixner authored
Dear RT folks! I'm pleased to announce the v4.1.10-rt11 patch set. Changes since v4.1.10-rt10: Eric Dumazet (1): inet: fix potential deadlock in reqsk_queue_unlink() Josh Cartwright (1): net: Make synchronize_rcu_expedited() conditional on !RT_FULL Mathieu Desnoyers (1): latency_hist: Update sched_wakeup probe Peter Zijlstra (1): sched: Introduce the trace_sched_waking tracepoint Thomas Gleixner (2): softirq: Sanitize local_bh_[en|dis]able for RT v4.1.10-rt11 Yang Shi (1): trace: Add missing tracer macros Known issues: - bcache stays disabled - CPU hotplug is not better than before - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information The delta patch against 4.1.10-rt10 is appended below and can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/incr/patch-4.1.10-rt10-rt11.patch.xz You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.1.10-rt11 The RT patch against 4.1.10 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.10-rt11.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.10-rt11.tar.xz Enjoy! tglx Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
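The "net: Make synchronize_rcu_expedited() conditional on !RT_FULL" item comes down to a check of the following shape; this is a sketch of the idea, not a verbatim copy of the patched function:

	/* Sketch: skip the IPI-heavy expedited grace period on PREEMPT_RT_FULL. */
	#include <linux/kconfig.h>
	#include <linux/rcupdate.h>

	static void wait_for_readers(void)
	{
		if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
			synchronize_rcu();		/* plain grace period on -RT */
		else
			synchronize_rcu_expedited();	/* fast path everywhere else */
	}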
-
- Oct 17, 2015
-
-
Thomas Gleixner authored
Dear RT folks! I'm pleased to announce the v4.1.10-rt10 patch set. v4.1.10-rt9 is a non-announced update to incorporate the linux-4.1.y stable tree changes. Changes since v4.1.10-rt9: Ben Hutchings (1): work-simple: Add missing #include <linux/export.h> Grygorii Strashko (1): net/core/cpuhotplug: Drain input_pkt_queue lockless Thomas Gleixner (2): arm64/xen: Make XEN depend on !RT v4.1.10-rt10 Yang Shi (2): arm64: Convert patch_lock to raw lock arm64: Replace read_lock to rcu lock in call_break_hook Known issues: - bcache stays disabled - CPU hotplug is not better than before - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information The delta patch against 4.1.10-rt9 is appended below and can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/incr/patch-4.1.10-rt9-rt10.patch.xz You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.1.10-rt10 The RT patch against 4.1.10 can be found here: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.10-rt10.patch.xz The split quilt queue is available at: https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.10-rt10.tar.xz Enjoy! tglx Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 2cc65a6f4bbd..09a41259b984 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -601,7 +601,7 @@ config XEN_DOM0 config XEN bool "Xen guest support on ARM64" - depends on ARM64 && OF + depends on ARM64 && OF && !PREEMPT_RT_FULL select SWIOTLB_XEN help Say Y if you want to run Linux in a Virtual Machine on Xen on ARM64. diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c index b056369fd47d..70654d843d9b 100644 --- a/arch/arm64/kernel/debug-monitors.c +++ b/arch/arm64/kernel/debug-monitors.c @@ -271,20 +271,21 @@ static int single_step_handler(unsigned long addr, unsigned int esr, * Use reader/writer locks instead of plain spinlock. */ static LIST_HEAD(break_hook); -static DEFINE_RWLOCK(break_hook_lock); +static DEFINE_SPINLOCK(break_hook_lock); void register_break_hook(struct break_hook *hook) { - write_lock(&break_hook_lock); - list_add(&hook->node, &break_hook); - write_unlock(&break_hook_lock); + spin_lock(&break_hook_lock); + list_add_rcu(&hook->node, &break_hook); + spin_unlock(&break_hook_lock); } void unregister_break_hook(struct break_hook *hook) { - write_lock(&break_hook_lock); - list_del(&hook->node); - write_unlock(&break_hook_lock); + spin_lock(&break_hook_lock); + list_del_rcu(&hook->node); + spin_unlock(&break_hook_lock); + synchronize_rcu(); } static int call_break_hook(struct pt_regs *regs, unsigned int esr) @@ -292,11 +293,11 @@ static int call_break_hook(struct pt_regs *regs, unsigned int esr) struct break_hook *hook; int (*fn)(struct pt_regs *regs, unsigned int esr) = NULL; - read_lock(&break_hook_lock); - list_for_each_entry(hook, &break_hook, node) + rcu_read_lock(); + list_for_each_entry_rcu(hook, &break_hook, node) if ((esr & hook->esr_mask) == hook->esr_val) fn = hook->fn; - read_unlock(&break_hook_lock); + rcu_read_unlock(); return fn ? 
fn(regs, esr) : DBG_HOOK_ERROR; } diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c index 924902083e47..30eb88e5b896 100644 --- a/arch/arm64/kernel/insn.c +++ b/arch/arm64/kernel/insn.c @@ -77,7 +77,7 @@ bool __kprobes aarch64_insn_is_nop(u32 insn) } } -static DEFINE_SPINLOCK(patch_lock); +static DEFINE_RAW_SPINLOCK(patch_lock); static void __kprobes *patch_map(void *addr, int fixmap) { @@ -124,13 +124,13 @@ static int __kprobes __aarch64_insn_write(void *addr, u32 insn) unsigned long flags = 0; int ret; - spin_lock_irqsave(&patch_lock, flags); + raw_spin_lock_irqsave(&patch_lock, flags); waddr = patch_map(addr, FIX_TEXT_POKE0); ret = probe_kernel_write(waddr, &insn, AARCH64_INSN_SIZE); patch_unmap(FIX_TEXT_POKE0); - spin_unlock_irqrestore(&patch_lock, flags); + raw_spin_unlock_irqrestore(&patch_lock, flags); return ret; } diff --git a/kernel/sched/work-simple.c b/kernel/sched/work-simple.c index c996f755dba6..e57a0522573f 100644 --- a/kernel/sched/work-simple.c +++ b/kernel/sched/work-simple.c @@ -10,6 +10,7 @@ #include <linux/kthread.h> #include <linux/slab.h> #include <linux/spinlock.h> +#include <linux/export.h> #define SWORK_EVENT_PENDING (1 << 0) diff --git a/localversion-rt b/localversion-rt index 22746d6390a4..d79dde624aaa 100644 --- a/localversion-rt +++ b/localversion-rt @@ -1 +1 @@ --rt9 +-rt10 diff --git a/net/core/dev.c b/net/core/dev.c index 4969c0d3dd67..f8c23dee5ae9 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -7217,7 +7217,7 @@ static int dev_cpu_callback(struct notifier_block *nfb, netif_rx_ni(skb); input_queue_head_incr(oldsd); } - while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) { + while ((skb = __skb_dequeue(&oldsd->input_pkt_queue))) { netif_rx_ni(skb); input_queue_head_incr(oldsd); }
-
- Sep 21, 2015
-
-
Thomas Gleixner authored
Dear RT folks! I'm pleased to announce the v4.1.7-rt8 patch set. v4.1.6-rt6 and v4.1.7-rt7 are non-announced updates to incorporate the linux-4.1.y stable tree changes. Changes since v4.1.5-rt5: - Update to 4.1.7 - Cherry-pick an XFS lockdep annotation fix from mainline - Use preempt_xxx_nort in the generic implementation of k[un]map_atomic. - Revert d04ea10b mmc: sdhci: don't provide hard irq handler - Force threading of primary handlers of interrupts which provide both a primary and a threaded handler - Move clear_tasks_mm_cpumask() call to __cpu_die() on ARM (Grygorii) - Fix an RCU splat in the trace histogram (Philipp) Solved issues: - The high CPU usage problem reported by Nicholas turned out to be a scalability issue of the gcov instrumentation Known issues: - bcache stays disabled - CPU hotplug is not better than before - The netlink_release() OOPS, reported by Clark, is still on the list, but unsolved due to lack of information The delta patch against 4.1.7-rt7 is appended below and can be found here: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/incr/patch-4.1.7-rt7-rt8.patch.xz You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.1.7-rt8 The RT patch against 4.1.7 can be found here: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.7-rt8.patch.xz The split quilt queue is available at: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.7-rt8.tar.xz Enjoy! tglx Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
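The k[un]map_atomic item above refers to the -RT helpers preempt_disable_nort()/preempt_enable_nort(), which expand to the usual preempt_disable()/preempt_enable() on !RT kernels and to no-ops on RT. A simplified sketch of the generic (non-HIGHMEM) path with that substitution; this is not the verbatim include/linux/highmem.h code, and the my_ prefix marks it as an example:

	/* Simplified sketch -- not the verbatim generic highmem code. */
	#include <linux/mm.h>
	#include <linux/uaccess.h>
	#include <linux/preempt.h>

	static inline void *my_kmap_atomic(struct page *page)
	{
		preempt_disable_nort();		/* no-op on PREEMPT_RT_FULL */
		pagefault_disable();
		return page_address(page);
	}

	static inline void my_kunmap_atomic(void *addr)
	{
		pagefault_enable();
		preempt_enable_nort();		/* no-op on PREEMPT_RT_FULL */
	}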
-
Thomas Gleixner authored
Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
- Sep 02, 2015
-
-
Thomas Gleixner authored
Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
- Aug 16, 2015
-
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.1.5-rt5 patch set. Changes since v4.1.5-rt4: - don't disable preemption in dump_stack(). We should not see a backtrace on a production kernel, but if we do trigger one it should not increase the latency. Known issues: - bcache is disabled. - CPU hotplug works in general. Steven's test script however deadlocks usually on the second invocation. - Clark Williams reported an OOPS in netlink_release() which has not been narrowed down yet. - Nicholas Mc Guire reported high CPU usage by softirq on an idle system which seems to freeze / halt the system. The delta patch against 4.1.5-rt4 is appended below and can be found here: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/incr/patch-4.1.5-rt4-rt5.patch.xz You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.1.5-rt5 The RT patch against 4.1.5 can be found here: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.5-rt5.patch.xz The split quilt queue is available at: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.5-rt5.tar.xz Sebastian Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
Sebastian Andrzej Siewior authored
Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
- Jul 25, 2015
-
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.1.3-rt3 patch set. Changes since v4.1.3-rt2: - fix compile of locktorture. Patch by Wolfgang M. Reimer. - fix compile of pid_namespace without lockdep on ARM. Patch by Grygorii Strashko. - The annoying "cpufreq_stat_notifier_trans: No policy found" warning is finally gone. - xor / raid_pq: The max latency will increase into the ms range if the raid6_pq module is loaded. This should not matter under normal circumstances because that module should only be loaded at boot time if required (and not while a -RT task is active in production). It might also get loaded at run-time manually. Dropping the preempt_disable() might cause different results for the individual implementations. People who don't care (load it at run-time) don't need to load it at all. People who care (load it at boot time) would prefer to stick with the best implementation. Therefore I think it is enough to document this (don't load it at run time if you don't need it) and cross it off my list. Patches are welcome if someone needs / has an improvement. Known issues: - bcache is disabled. - CPU hotplug works in general. Steven's test script however deadlocks usually on the second invocation. You can get this release via the git tree at: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.1.y-rt git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.1.y-rt-rebase git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.1.y-rt-queue The RT patch against 4.1.3 can be found here: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.3-rt3.patch.xz The split quilt queue is available at: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.3-rt3.tar.xz Sebastian Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
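The xor / raid_pq paragraph above is about module-init micro-benchmarks that keep preemption disabled for the whole measurement window; on -RT that window is a direct latency hit for everything else on that CPU. A purely illustrative sketch of the problematic shape (the candidate() work function is hypothetical):

	/* Purely illustrative -- not the actual lib/raid6 or crypto/xor code. */
	#include <linux/preempt.h>
	#include <linux/jiffies.h>

	static void benchmark_one(void (*candidate)(void))
	{
		unsigned long stop;

		preempt_disable();		/* nothing else runs on this CPU ... */
		stop = jiffies + HZ / 16;
		while (time_before(jiffies, stop))
			candidate();		/* hypothetical work function */
		preempt_enable();		/* ... until here, tens of ms later */
	}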
-
Sebastian Andrzej Siewior authored
Sebastian Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
- Dec 31, 2014
-
-
Sebastian Andrzej Siewior authored
Dear RT folks! I'm pleased to announce the v4.1.2-rt1 patch set. The move from 4.0 to 4.1 was rather smooth, so we took the time for some overdue cleanups and restructuring of the patch queue. 1) Patch folding - Fold all fixlets into the proper patches - Consolidate the patches which change the same piece of code over and over (e.g. add/revert/redo). These patches were mostly kept to be easily picked up for stable. 2) Dropping obsolete patches Some patches have been superseded by different upstream changes, so the RT variant is redundant. 3) Changelogs Quite a few patches had no changelogs or useless ones. We updated them all. Each patch now has a From+Subject+Date field. That means "git quiltimport" will now produce the same commit id for each patch (as long as the commit author and date remain unchanged). 4) Reordering The patches got reordered into topics, so patches related to the same subsystem or problem space are grouped together. 5) Ability to build and boot Each step in the queue now builds with RT=n and RT=y. All steps boot with RT=n. With RT=y the functionality is obviously dependent on all patches, so boot bisectability can not be achieved. As of now we provide a git tree with the RT changes as well. The tree is structured similarly to Steven's stable RT tree. For each kernel version we provide 3 branches: linux-m.n.y-rt This branch starts when we move to a new kernel version. After the first release this branch gets only incremental updates (either from the mainline stable tree or from updates to the rt patch queue) linux-m.n.y-rt-rebase This branch is rebased when a new stable version or a new RT patch queue is available. The RT patch queue is applied on top of the latest mainline stable version. linux-m.n.y-rt-queue This branch contains the revisions of the rt patch queue - patches and series file. Known issues: - My AMD box throws a lot of "cpufreq_stat_notifier_trans: No policy found" warnings after boot. It is gone after manually setting the policy (to something else than reported). - bcache is disabled. - CPU hotplug works in general. Steven's test script however deadlocks usually on the second invocation. - xor / raid_pq: I had the max latency jumping up to 67563us on one CPU while the next lower max was 58us. I tracked it down to the module init code of xor and raid_pq. Both disable preemption while measuring the performance of the individual implementations. The git URLs for this release are git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.1.y-rt git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.1.y-rt-rebase git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.1.y-rt-queue The RT patch against 4.1.2 can be found here: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.2-rt1.patch.xz The split quilt queue is available at: https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patches-4.1.2-rt1.tar.xz Sebastian Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
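For the three branches described above, a standard way to get them locally (plain git usage; the branch names and URL are taken from the announcement):

	git clone git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git
	cd linux-rt-devel
	git checkout linux-4.1.y-rt		# incrementally updated branch
	git checkout linux-4.1.y-rt-rebase	# rebased variant
	git checkout linux-4.1.y-rt-queue	# quilt series + patches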
-