15 Apr, 2011

1 commit

  • The FLAGS_HAS_TIMEOUT flag was not getting set, causing the restart_block to
    restart futex_wait() without a timeout after a signal.

    Commit b41277dc7a18ee332d in 2.6.38 introduced the regression by accidentally
    removing the FLAGS_HAS_TIMEOUT assignment from futex_wait() during the setup
    of the restart block. Restore the original behavior.

    Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=32922

    Reported-by: Tim Smith
    Reported-by: Torsten Hilbrich
    Signed-off-by: Darren Hart
    Signed-off-by: Eric Dumazet
    Cc: Peter Zijlstra
    Cc: John Kacur
    Cc: stable@kernel.org
    Link: http://lkml.kernel.org/r/%3Cdaac0eb3af607f72b9a4d3126b2ba8fb5ed3b883.1302820917.git.dvhart%40linux.intel.com%3E
    Signed-off-by: Thomas Gleixner

    Darren Hart
     

26 Mar, 2011

1 commit


25 Mar, 2011

1 commit

  • An update of the futex code had a

    WARN_ON(!spin_is_locked(q->lock_ptr))

    But on UP, spin_is_locked() is always false, so the warning always
    triggers, and even worse, the function exits without doing the
    necessary work.

    Converting this to a WARN_ON_SMP() fixes the problem.

    Reported-by: Richard Weinberger
    Tested-by: Richard Weinberger
    Signed-off-by: Steven Rostedt
    Acked-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Darren Hart
    Cc: Lai Jiangshan
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

24 Mar, 2011

1 commit

  • CAP_IPC_OWNER and CAP_IPC_LOCK can be checked against current_user_ns(),
    because the resource comes from current's own ipc namespace.

    setuid/setgid apply to uids in the caller's own namespace, so again the
    checks can be against current_user_ns().

    Changelog:
    Jan 11: Use task_ns_capable() in place of sched_capable().
    Jan 11: Use nsown_capable() as suggested by Bastian Blank.
    Jan 11: Clarify (hopefully) some logic in futex and sched.c
    Feb 15: use ns_capable for ipc, not nsown_capable
    Feb 23: let copy_ipcs handle setting ipc_ns->user_ns
    Feb 23: pass ns down rather than taking it from current

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Serge E. Hallyn
    Acked-by: "Eric W. Biederman"
    Acked-by: Daniel Lezcano
    Acked-by: David Howells
    Cc: James Morris
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     

16 Mar, 2011

1 commit

  • …el/git/tip/linux-2.6-tip

    * 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    rtmutex: tester: Remove the remaining BKL leftovers
    lockdep/timers: Explain in detail the locking problems del_timer_sync() may cause
    rtmutex: Simplify PI algorithm and make highest prio task get lock
    rwsem: Remove redundant asmregparm annotation
    rwsem: Move duplicate function prototypes to linux/rwsem.h
    rwsem: Unify the duplicate rwsem_is_locked() inlines
    rwsem: Move duplicate init macros and functions to linux/rwsem.h
    rwsem: Move duplicate struct rwsem declaration to linux/rwsem.h
    x86: Cleanup rwsem_count_t typedef
    rwsem: Cleanup includes
    locking: Remove deprecated lock initializers
    cred: Replace deprecated spinlock initialization
    kthread: Replace deprecated spinlock initialization
    xtensa: Replace deprecated spinlock initialization
    um: Replace deprecated spinlock initialization
    sparc: Replace deprecated spinlock initialization
    mips: Replace deprecated spinlock initialization
    cris: Replace deprecated spinlock initialization
    alpha: Replace deprecated spinlock initialization
    rtmutex-tester: Remove BKL tests

    Linus Torvalds
     

15 Mar, 2011

1 commit

  • handle_futex_death() uses futex_atomic_cmpxchg_inatomic() without
    disabling page faults. That's ok, but totally non obvious.

    We don't hold locks so we actually can and want to fault here, because
    the get_user() before futex_atomic_cmpxchg_inatomic() does not
    guarantee a R/W mapping.

    We could just add a big fat comment to explain this, but actually
    changing the code so that the functionality is entirely clear is
    better.

    Use the helper function which disables page faults around the
    futex_atomic_cmpxchg_inatomic() and handle a fault with a call to
    fault_in_user_writeable() as all other places in the futex code do as
    well.

    Pointed-out-by: Linus Torvalds
    Signed-off-by: Thomas Gleixner
    Acked-by: Darren Hart
    Cc: Michel Lespinasse
    Cc: Peter Zijlstra
    Cc: Matt Turner
    Cc: Russell King
    Cc: David Howells
    Cc: Tony Luck
    Cc: Michal Simek
    Cc: Ralf Baechle
    Cc: "James E.J. Bottomley"
    Cc: Benjamin Herrenschmidt
    Cc: Martin Schwidefsky
    Cc: Paul Mundt
    Cc: "David S. Miller"
    Cc: Chris Metcalf
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

12 Mar, 2011

3 commits

  • …rostedt/linux-2.6-rt into core/futexes

    futex,plist: Pass the real head of the priority list to plist_del()
    futex,plist: Remove debug lock assignment from plist_node
    plist: Shrink struct plist_head
    plist: Add priority list test

    Thomas Gleixner
     
  • The original code uses &plist_node->plist as the fake head of
    the priority list for plist_del(), these debug locks in
    the fake head are needed for CONFIG_DEBUG_PI_LIST.

    But now we always pass the real head to plist_del(), the debug locks
    in plist_node will not be used, so we remove these assignments.

    Acked-by: Darren Hart
    Signed-off-by: Lai Jiangshan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Lai Jiangshan
     
  • Some plist_del()s in kernel/futex.c are passed a faked head of the
    priority list.

    This does not fail because the current code does not require the real
    head in plist_del(): plist_del() only uses the head for debug checking,
    so even a faked head does not cause a bad result.

    But it is undocumented usage:

    /**
    * plist_del - Remove a @node from plist.
    *
    * @node: &struct plist_node pointer - entry to be removed
    * @head: &struct plist_head pointer - list head
    */

    The documentation says that @head is the head of the priority list.

    In the futex code, several places use "plist_del(&q->list, &q->list.plist);",
    passing a fake head. We need to fix them all.

    Thanks to Darren Hart for many suggestions.

    Acked-by: Darren Hart
    Signed-off-by: Lai Jiangshan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Lai Jiangshan
     

11 Mar, 2011

3 commits

  • The cmpxchg_futex_value_locked API was odd in that it returned either
    the original, user-exposed futex value OR an error code such as -EFAULT.
    This was confusing at best, and could be a source of livelocks in places
    that retry the cmpxchg_futex_value_locked after trying to fix the issue
    by running fault_in_user_writeable().

    This change makes the cmpxchg_futex_value_locked API more similar to the
    get_futex_value_locked one, returning an error code and updating the
    original value through a reference argument.

    Signed-off-by: Michel Lespinasse
    Acked-by: Chris Metcalf [tile]
    Acked-by: Tony Luck [ia64]
    Acked-by: Thomas Gleixner
    Tested-by: Michal Simek [microblaze]
    Acked-by: David Howells [frv]
    Cc: Darren Hart
    Cc: Peter Zijlstra
    Cc: Matt Turner
    Cc: Russell King
    Cc: Ralf Baechle
    Cc: "James E.J. Bottomley"
    Cc: Benjamin Herrenschmidt
    Cc: Martin Schwidefsky
    Cc: Paul Mundt
    Cc: "David S. Miller"
    Cc: Linus Torvalds
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Michel Lespinasse
     
  • The result is not going to change under us, so no need to reevaluate
    this over and over. Seems to be a leftover from the mechanical mass
    conversion of task->pid to task_pid_vnr(tsk).

    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     
  • Reviving a cleanup I had done about a year ago as part of a larger
    futex_set_wait proposal. Over the years, the locking of the hashed
    futex queue got improved, so that some of the "rare but normal" race
    conditions described in comments can't actually happen anymore.

    Signed-off-by: Michel Lespinasse
    Cc: Linus Torvalds
    Cc: Darren Hart
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Michel Lespinasse
     

28 Jan, 2011

1 commit

  • In the current rtmutex, the pending owner may be boosted by tasks
    on the rtmutex's waitlist when the pending owner is deboosted
    or a task on the waitlist is boosted. This boosting is spurious,
    because the pending owner does not really hold the rtmutex yet,
    so it is not reasonable.

    Example.

    time1:
    A (high prio) owns the rtmutex.
    B (mid prio) and C (low prio) are on the waitlist.

    time2:
    A releases the lock and B becomes the pending owner.
    A (or another high prio task) continues to run. B's prio is lower
    than A's, so B is just queued on the runqueue.

    time3:
    A or another high prio task sleeps, but some time has passed and
    B's and C's priorities were changed in the period (time2 ~ time3)
    due to boosting or deboosting. Now C has a higher priority
    than B. ***Is it reasonable that C has to boost B and help B to
    get the rtmutex?

    NO!! This is unrelated/unneeded boosting before B really
    owns the rtmutex. We should give C a chance to beat B and
    win the rtmutex.

    This is the motivation of this patch. This patch *ensures*
    only the top waiter or higher priority task can take the lock.

    How?
    1) We don't dequeue the top waiter on unlock; if the top waiter
    has changed, the old top waiter will fail and go to sleep again.
    2) When acquiring the lock, a task gets the lock when the lock is not
    taken and: there is no waiter, OR it has higher priority than the
    waiters, OR it is the top waiter.
    3) Any time the top waiter changes, the new top waiter is woken up.

    The algorithm is much simpler than before, no pending owner, no
    boosting for pending owner.

    Other advantages of this patch:
    1) The states of an rtmutex are reduced by half, making the code easier to read.
    2) The code becomes shorter.
    3) The top waiter is not dequeued until it really takes the lock:
    waiters retain FIFO order when the lock is stolen.

    Neither advantage nor disadvantage:
    1) Even though we may wake up multiple waiters (any time the top waiter
    changes), we hardly cause a "thundering herd";
    the number of woken tasks is likely 1 or very few.
    2) Two APIs are changed:
    rt_mutex_owner() will not return a pending owner; it will return NULL when
    the top waiter is going to take the lock.
    rt_mutex_next_owner() always returns the top waiter,
    and will not return NULL if we have waiters,
    because the top waiter is not dequeued.

    I have fixed the code that uses these APIs.

    Needs updating after this patch is accepted:
    1) Documentation/*
    2) the testcase scripts/rt-tester/t4-l2-pi-deboost.tst

    Signed-off-by: Lai Jiangshan
    LKML-Reference:
    Reviewed-by: Steven Rostedt
    Signed-off-by: Steven Rostedt

    Lai Jiangshan
     

16 Jan, 2011

1 commit

  • …linus' and 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

    * 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    rcu: avoid pointless blocked-task warnings
    rcu: demote SRCU_SYNCHRONIZE_DELAY from kernel-parameter status
    rtmutex: Fix comment about why new_owner can be NULL in wake_futex_pi()

    * 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, olpc: Add missing Kconfig dependencies
    x86, mrst: Set correct APB timer IRQ affinity for secondary cpu
    x86: tsc: Fix calibration refinement conditionals to avoid divide by zero
    x86, ia64, acpi: Clean up x86-ism in drivers/acpi/numa.c

    * 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    timekeeping: Make local variables static
    time: Rename misnamed minsec argument of clocks_calc_mult_shift()

    * 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    tracing: Remove syscall_exit_fields
    tracing: Only process module tracepoints once
    perf record: Add "nodelay" mode, disabled by default
    perf sched: Fix list of events, dropping unsupported ':r' modifier
    Revert "perf tools: Emit clearer message for sys_perf_event_open ENOENT return"
    perf top: Fix annotate segv
    perf evsel: Fix order of event list deletion

    Linus Torvalds
     

14 Jan, 2011

1 commit

  • Futex code is smarter than most other gup_fast O_DIRECT code and knows
    about the compound internals. However, now doing a put_page(head_page)
    will not release the pin on the tail page taken by gup-fast, leading to
    all sorts of refcounting bugchecks. Getting a stable head_page is a little
    tricky.

    page_head = page is there because if this is not a tail page it is also
    the page_head. Only if this is a tail page is compound_head called;
    otherwise it's guaranteed unnecessary. And if it's a tail page,
    compound_head has to run atomically inside the irq-disabled section of
    __get_user_pages_fast before returning. Otherwise ->first_page won't be
    a stable pointer.

    Disabling irqs before __get_user_pages_fast and re-enabling them after
    running compound_head is needed because if __get_user_pages_fast returns
    == 1, it means the huge pmd is established and cannot go away from under us.
    pmdp_splitting_flush_notify in __split_huge_page_splitting will have to
    wait for local_irq_enable before the IPI delivery can return. This means
    __split_huge_page_refcount can't be running from under us, and in turn
    when we run compound_head(page) we're not reading a dangling pointer from
    tailpage->first_page. Then, after we get a stable head page, we are
    always safe to call compound_lock, and after taking the compound lock on
    the head page we can finally re-check whether the page returned by
    gup-fast is still a tail page, in which case we're set and we didn't need
    to split the hugepage in order to take a futex on it.

    Signed-off-by: Andrea Arcangeli
    Acked-by: Mel Gorman
    Acked-by: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     

11 Jan, 2011

1 commit

  • The comment about why rt_mutex_next_owner() can return NULL in
    wake_futex_pi() describes a case that is not the normal one.

    Tracing down why this occurs shows that, more likely, the waiter
    simply timed out. But because it originally caused contention on
    the futex, the owner will go into the kernel when it unlocks
    the lock. It then hits this code path and
    rt_mutex_next_owner() returns NULL.

    Cc: Thomas Gleixner
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

10 Nov, 2010

4 commits

  • The futex_q struct has grown considerably over the last couple years. I
    believe it now merits a static initializer to avoid uninitialized data
    errors (having spent more time than I care to admit debugging an uninitialized
    q.bitset in an experimental new op code).

    With the key initializer built in, several of the FUTEX_KEY_INIT calls can
    be removed.

    V2: use a static variable instead of an init macro.
    use a C99 initializer and don't rely on variable ordering in the struct.
    V3: make futex_q_init const

    Signed-off-by: Darren Hart
    Cc: Peter Zijlstra
    Cc: Eric Dumazet
    Cc: John Kacur
    Cc: Ingo Molnar
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Darren Hart
     
  • In the early days we passed the mmap sem around. That became the
    "int fshared" with the fast gup improvements. Then we added
    "int clockrt" in places. This patch unifies these options as "flags".

    [ tglx: Split out the stale fshared cleanup ]

    Signed-off-by: Darren Hart
    Cc: Peter Zijlstra
    Cc: Eric Dumazet
    Cc: John Kacur
    Cc: Ingo Molnar
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Darren Hart
     
  • The fast GUP changes stopped using the fshared flag in
    put_futex_keys(), but we kept the interface the same.

    Cleanup all stale users.

    This patch is split out from Darren Harts combo patch which also
    combines various flags. This way the changes are clearly separated.

    Signed-off-by: Thomas Gleixner
    Cc: Darren Hart
    LKML-Reference:

    Thomas Gleixner
     
  • Since commit 1dcc41bb (futex: Change 3rd arg of fetch_robust_entry()
    to unsigned int*) some gcc versions decided to emit the following
    warning:

    kernel/futex.c: In function ‘exit_robust_list’:
    kernel/futex.c:2492: warning: ‘next_pi’ may be used uninitialized in this function

    The commit did not introduce the warning as gcc should have warned
    before that commit as well. It's just gcc being silly.

    The code path really can't result in next_pi being uninitialized (or
    should not), but let's keep the build clean. Annotate next_pi as an
    uninitialized_var.

    [ tglx: Addressed the same issue in futex_compat.c and massaged the
    changelog ]

    Signed-off-by: Darren Hart
    Tested-by: Matt Fleming
    Tested-by: Uwe Kleine-König
    Cc: Peter Zijlstra
    Cc: Eric Dumazet
    Cc: John Kacur
    Cc: Ingo Molnar
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Darren Hart
     

26 Oct, 2010

1 commit


22 Oct, 2010

1 commit


19 Oct, 2010

1 commit

  • futex_wait() is leaking key references due to futex_wait_setup()
    acquiring an additional reference via the queue_lock() routine. The
    nested key ref-counting has been masking bugs and complicating code
    analysis. queue_lock() is only called with a previously ref-counted
    key, so remove the additional ref-counting from the queue_(un)lock()
    functions.

    Also futex_wait_requeue_pi() drops one key reference too many in
    unqueue_me_pi(). Remove the key reference handling from
    unqueue_me_pi(). This was paired with a queue_lock() in
    futex_lock_pi(), so the count remains unchanged.

    Document remaining nested key ref-counting sites.

    Signed-off-by: Darren Hart
    Reported-and-tested-by: Matthieu Fertré
    Reported-by: Louis Rilling
    Cc: Peter Zijlstra
    Cc: Eric Dumazet
    Cc: John Kacur
    Cc: Rusty Russell
    LKML-Reference:
    Signed-off-by: Thomas Gleixner
    Cc: stable@kernel.org

    Darren Hart
     

14 Oct, 2010

1 commit

  • Convert futex_requeue() function parameters to use @name
    kernel-doc notation and add @fshared & @cmpval to prevent
    kernel-doc warnings.

    Add @list to struct futex_q.

    Fix a few typos.

    Signed-off-by: Randy Dunlap
    Acked-by: Rusty Russell
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Randy Dunlap
     

18 Sep, 2010

3 commits


01 Jul, 2010

1 commit

  • futex_find_get_task is currently used (through lookup_pi_state) from two
    contexts, futex_requeue and futex_lock_pi_atomic. Neither path looks
    like it needs the credentials check, though. Different (e)uids
    shouldn't matter at all, because the only thing that is important for
    a shared futex is the accessibility of the shared memory.

    The credential check results in a glibc assert failure or process hang (if
    glibc is compiled without assert support) for a shared robust pthread
    mutex with priority inheritance if a process tries to lock an already
    held lock owned by a process with a different euid:

    pthread_mutex_lock.c:312: __pthread_mutex_lock_full: Assertion `(-(e)) != 3 || !robust' failed.

    The problem is that futex_lock_pi_atomic, which is called when we try to
    lock an already held lock, checks the current holder (whose tid is stored
    in the futex value) to get the PI state. It uses lookup_pi_state, which in
    turn gets the task struct from futex_find_get_task. ESRCH is returned
    either when the task is not found or when the credentials check fails.

    futex_lock_pi_atomic simply returns if it gets ESRCH. glibc code,
    however, doesn't expect a robust lock to return ESRCH, because it
    should get either success or owner-died.

    Signed-off-by: Michal Hocko
    Acked-by: Darren Hart
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Nick Piggin
    Cc: Alexey Kuznetsov
    Cc: Peter Zijlstra
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

03 Feb, 2010

3 commits

  • The WARN_ON in lookup_pi_state which complains about a mismatch
    between pi_state->owner->pid and the pid which we retrieved from the
    user space futex is completely bogus.

    The code just emits the warning and then continues despite the fact
    that it detected an inconsistent state of the futex. A convenient way
    for user space to spam the syslog.

    Replace the WARN_ON with a consistency check. If the values do not match,
    return -EINVAL and let user space deal with the mess it created.

    This also fixes the missing task_pid_vnr() when we compare the
    pi_state->owner pid with the futex value.

    Reported-by: Jermome Marchand
    Signed-off-by: Thomas Gleixner
    Acked-by: Darren Hart
    Acked-by: Peter Zijlstra
    Cc:

    Thomas Gleixner
     
  • If the owner of a PI futex dies we fix up the pi_state and set
    pi_state->owner to NULL. When a malicious or just sloppy programmed
    user space application sets the futex value to 0 e.g. by calling
    pthread_mutex_init(), then the futex can be acquired again. A new
    waiter manages to enqueue itself on the pi_state w/o damage, but on
    unlock the kernel dereferences pi_state->owner and oopses.

    Prevent this by checking pi_state->owner in the unlock path. If
    pi_state->owner is not current we know that user space manipulated the
    futex value. Ignore the mess and return -EINVAL.

    This catches the above case and also the case where a task hijacks the
    futex by setting the tid value and then tries to unlock it.

    Reported-by: Jermome Marchand
    Signed-off-by: Thomas Gleixner
    Acked-by: Darren Hart
    Acked-by: Peter Zijlstra
    Cc:

    Thomas Gleixner
     
  • This fixes a futex key reference count bug in futex_lock_pi(),
    where a key's reference count is incremented twice but decremented
    only once, causing the backing object to not be released.

    If the futex is created in a temporary file in an ext3 file system,
    this bug causes the file's inode to become an "undead" orphan,
    which causes an oops from a BUG_ON() in ext3_put_super() when the
    file system is unmounted. glibc's test suite is known to trigger this,
    see .

    The bug is a regression from 2.6.28-git3, namely Peter Zijlstra's
    38d47c1b7075bd7ec3881141bb3629da58f88dab "[PATCH] futex: rely on
    get_user_pages() for shared futexes". That commit made get_futex_key()
    also increment the reference count of the futex key, and updated its
    callers to decrement the key's reference count before returning.
    Unfortunately the normal exit path in futex_lock_pi() wasn't corrected:
    the reference count is incremented by get_futex_key() and queue_lock(),
    but the normal exit path only decrements once, via unqueue_me_pi().
    The fix is to put_futex_key() after unqueue_me_pi(); since 2.6.31
    this is easily done by 'goto out_put_key' rather than 'goto out'.

    Signed-off-by: Mikael Pettersson
    Acked-by: Peter Zijlstra
    Acked-by: Darren Hart
    Signed-off-by: Thomas Gleixner
    Cc:

    Mikael Pettersson
     

13 Jan, 2010

1 commit

  • Currently, futexes have two problems:

    A) The current futex code doesn't handle private file mappings properly.

    get_futex_key() uses PageAnon() to distinguish file-backed and
    anonymous pages, which can cause the following bad scenario:

    1) thread-A calls futex(private-mapping, FUTEX_WAIT) and
    sleeps on the file mapping object.
    2) thread-B writes to the variable, which triggers copy-on-write.
    3) thread-B calls futex(private-mapping, FUTEX_WAKE), which
    wakes up the blocked thread on the anonymous page (where nothing is
    sleeping).

    B) Current futex code doesn't handle zero page properly.

    Read mode get_user_pages() can return the zero page, but the current
    futex code doesn't handle it at all, so the zero page causes an
    infinite loop internally.

    The solution is to always use write mode get_user_pages() for the
    page lookup. This prevents looking up both file pages of private
    mappings and the zero page.

    Performance concerns:

    Probably very little, because glibc always initializes variables
    for a futex before calling futex(). This means glibc users never see
    the overhead of this patch.

    Compatibility concerns:

    This patch has few compatibility issues. After this patch,
    FUTEX_WAIT requires writable access to futex variables (read-only
    mappings cause EFAULT). But practically it's not a problem;
    glibc always initializes variables for futexes explicitly - nobody
    uses read-only mappings.

    Reported-by: Hugh Dickins
    Signed-off-by: KOSAKI Motohiro
    Acked-by: Peter Zijlstra
    Acked-by: Darren Hart
    Cc:
    Cc: Linus Torvalds
    Cc: KAMEZAWA Hiroyuki
    Cc: Nick Piggin
    Cc: Ulrich Drepper
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    KOSAKI Motohiro
     

15 Dec, 2009

3 commits


08 Dec, 2009

1 commit


29 Oct, 2009

1 commit

  • The requeue_pi path doesn't use unqueue_me() (and the racy lock_ptr ==
    NULL test) nor does it use the wake_list of futex_wake(), which were
    the reason for commit 41890f2 (futex: Handle spurious wake up).

    See the debugging discussion on LKML, Message-ID:

    The changes in this fix to the wait_requeue_pi path were considered
    likely unnecessary, but a harmless safety net. But it turns out that,
    because for unknown $@#!*( reasons EWOULDBLOCK is defined as EAGAIN,
    we built an endless loop in the code path which correctly returns
    EWOULDBLOCK.

    Spurious wakeups in wait_requeue_pi code path are unlikely so we do
    the easy solution and return EWOULDBLOCK^WEAGAIN to user space and let
    it deal with the spurious wakeup.

    Cc: Darren Hart
    Cc: Peter Zijlstra
    Cc: Eric Dumazet
    Cc: John Stultz
    Cc: Dinakar Guniguntala
    LKML-Reference:
    Cc: stable@kernel.org
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

16 Oct, 2009

1 commit

  • When requeuing tasks from one futex to another, the reference held
    by the requeued task to the original futex location needs to be
    dropped eventually.

    Dropping the reference may ultimately lead to a call to
    "iput_final" and subsequently call into filesystem-specific code -
    which may be non-atomic.

    It is therefore safer to defer this drop operation until after the
    futex_hash_bucket spinlock has been dropped.

    Originally-From: Helge Bahmann
    Signed-off-by: Darren Hart
    Cc:
    Cc: Peter Zijlstra
    Cc: Eric Dumazet
    Cc: Dinakar Guniguntala
    Cc: John Stultz
    Cc: Sven-Thorsten Dietrich
    Cc: John Kacur
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Darren Hart
     

15 Oct, 2009

1 commit

  • If userspace tries to perform a requeue_pi on a non-requeue_pi waiter,
    it will find the futex_q->requeue_pi_key to be NULL and OOPS.

    Check for NULL in match_futex() instead of doing explicit NULL pointer
    checks on all call sites. While match_futex(NULL, NULL) returning
    false is a little odd, it's still correct as we expect valid key
    references.

    Signed-off-by: Darren Hart
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    CC: Eric Dumazet
    CC: Dinakar Guniguntala
    CC: John Stultz
    Cc: stable@kernel.org
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Darren Hart
     

14 Oct, 2009

1 commit

  • The futex code does not handle spurious wake up in futex_wait and
    futex_wait_requeue_pi.

    The code assumes that any wake up which was not caused by futex_wake /
    requeue or by a timeout was caused by a signal wake up and returns one
    of the syscall restart error codes.

    In case of a spurious wake up the signal delivery code which deals
    with the restart error codes is not invoked and we return that error
    code to user space. That causes applications which actually check the
    return codes to fail. Blaise reported that on preempt-rt a python test
    program ran into an exception trap. -rt exposed this due to a built-in
    spurious wake up accelerator :)

    Solve this by checking signal_pending(current) in the wake up path and
    handle the spurious wake up case w/o returning to user space.

    Reported-by: Blaise Gassend
    Debugged-by: Darren Hart
    Signed-off-by: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: stable@kernel.org
    LKML-Reference:

    Thomas Gleixner