09 Sep, 2006

1 commit

  • The current implementation of futex_lock_pi returns -ERESTART_RESTARTBLOCK
    when the lock operation has been interrupted by a signal. This results in a
    return of -EINTR to userspace when there is a handler for the signal. This
    is wrong, because userspace expects that the lock function does not return
    early in any case of signal delivery.

    This was not caught by my insufficient test case, but triggered a nasty
    userspace problem in a high-load application scenario. Unfortunately,
    glibc does not check for this invalid return value either.

    Using -ERESTARTNOINTR makes sure that the interrupted syscall is restarted.
    The restart-block-related code can be safely removed, as the possible
    timeout argument is an absolute time value.
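
    A minimal sketch of the behaviour change described above (illustrative,
    not the literal patch; surrounding names are assumed from context):

        /* futex_lock_pi(), signal-interrupted case: the timeout, if any, is
         * an absolute time value, so the syscall can simply be restarted
         * with unchanged arguments - invisibly to userspace. */
        if (ret == -EINTR)
                ret = -ERESTARTNOINTR;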

    Signed-off-by: Thomas Gleixner
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Gleixner
     

28 Aug, 2006

1 commit

  • futex_find_get_task:

    if (p->state == EXIT_ZOMBIE || p->exit_state == EXIT_ZOMBIE)
            return NULL;

    I can't understand this. First, p->state can't be EXIT_ZOMBIE. The
    ->exit_state check looks strange too. Sub-threads, or tasks whose ->parent
    ignores SIGCHLD, go directly to the EXIT_DEAD state (ignoring the ptrace
    case). Why should EXIT_DEAD tasks be OK? Yes, EXIT_ZOMBIE is more
    important (a task may stay a zombie for a long time), but this doesn't mean
    we should explicitly ignore other EXIT_XXX states.

    Signed-off-by: Oleg Nesterov
    Acked-by: Ingo Molnar
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     

15 Aug, 2006

1 commit

  • We found this issue last week with the -RT kernel, but it seems the same
    issue is present in mainline as well.

    Basically it is possible for futex_unlock_pi to return without actually
    freeing the lock. This is due to buggy logic in the use of
    futex_handle_fault() and its attempt argument in a failure case.

    Looking at futex.c the logic is as follows:

    1) In futex_unlock_pi() we start with ret = 0 and go down to the first
    futex_atomic_cmpxchg_inatomic(), where we find uval == -EFAULT. We then
    jump to the pi_faulted label.

    2) From pi_faulted: We increment attempt, unlock the sem and hit the
    retry label.

    3) From the retry label, with ret still zero, we again hit EFAULT on the
    first futex_atomic_cmpxchg_inatomic(), and again goto the pi_faulted
    label.

    4) Again from pi_faulted: we increment attempt and enter the
    conditional, where we call futex_handle_fault.

    5) futex_handle_fault fails, and we goto the out_unlock_release_sem
    label.

    6) From out_unlock_release_sem we return, and since ret is still zero,
    we return without error, while never actually unlocking the lock.

    Issue #1: at the first futex_atomic_cmpxchg_inatomic() we should probably
    be setting ret = -EFAULT before jumping to pi_faulted. However, in our case
    this doesn't really affect anything, as the glibc we're using ignores the
    error value from futex_unlock_pi().

    Issue #2: Look at futex_handle_fault(): its first conditional will return
    -EFAULT if attempt is >= 2. However, from the "if (attempt++)
    futex_handle_fault(attempt)" logic above, we'll *never* call
    futex_handle_fault when attempt is less than two. So we never get a chance
    to even try to fault the page in.

    The following patch addresses these two issues by 1) Always setting ret to
    -EFAULT if futex_handle_fault fails, and 2) Removing the = in
    futex_handle_fault's (attempt >= 2) check.
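
    A rough sketch of the two changes (illustrative, not the literal diff;
    variable names are assumed from context):

        /* 1) futex_unlock_pi(): if the fault cannot be handled, report it. */
        if (futex_handle_fault((unsigned long)uaddr, attempt)) {
                ret = -EFAULT;
                goto out_unlock_release_sem;
        }

        /* 2) futex_handle_fault(): allow attempt == 2 to still try. */
        if (attempt > 2)
                return -EFAULT;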

    I'm really not sure this is the right fix, but wanted to bring it up so
    folks knew the issue is alive and well in the current -git tree. From
    looking at the git logs the logic was first introduced (then later copied
    to other places) in the following commit almost a year ago:

    http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=4732efbeb997189d9f9b04708dc26bf8613ed721;hp=5b039e681b8c5f30aac9cc04385cc94be45d0823

    Cc: Rusty Russell
    Cc: Ingo Molnar
    Acked-by: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Greg Kroah-Hartman

    john stultz
     

06 Aug, 2006

1 commit

  • This patch adds a barrier() in futex unqueue_me to avoid aliasing of two
    pointers.

    On my s390x system I saw the following oops:

    Unable to handle kernel pointer dereference at virtual kernel address
    0000000000000000
    Oops: 0004 [#1]
    CPU: 0 Not tainted
    Process mytool (pid: 13613, task: 000000003ecb6ac0, ksp: 00000000366bdbd8)
    Krnl PSW : 0704d00180000000 00000000003c9ac2 (_spin_lock+0xe/0x30)
    Krnl GPRS: 00000000ffffffff 000000003ecb6ac0 0000000000000000 0700000000000000
    0000000000000000 0000000000000000 000001fe00002028 00000000000c091f
    000001fe00002054 000001fe00002054 0000000000000000 00000000366bddc0
    00000000005ef8c0 00000000003d00e8 0000000000144f91 00000000366bdcb8
    Krnl Code: ba 4e 20 00 12 44 b9 16 00 3e a7 84 00 08 e3 e0 f0 88 00 04
    Call Trace:
    ([] unqueue_me+0x40/0xe4)
    [] do_futex+0x33c/0xc40
    [] sys_futex+0x12e/0x144
    [] sysc_noemu+0x10/0x16
    [] 0x2000003741c

    The code in question is:

    static int unqueue_me(struct futex_q *q)
    {
            int ret = 0;
            spinlock_t *lock_ptr;

            /* In the common case we don't take the spinlock, which is nice. */
    retry:
            lock_ptr = q->lock_ptr;
            if (lock_ptr != 0) {
                    spin_lock(lock_ptr);
                    /*
                     * q->lock_ptr can change between reading it and
                     * spin_lock(), causing us to take the wrong lock. This
                     * corrects the race condition.
    [...]

    and my compiler (gcc 4.1.0) makes the following out of it:

    00000000000003c8 :
    3c8: eb bf f0 70 00 24 stmg %r11,%r15,112(%r15)
    3ce: c0 d0 00 00 00 00 larl %r13,3ce
    3d0: R_390_PC32DBL .rodata+0x2a
    3d4: a7 f1 1e 00 tml %r15,7680
    3d8: a7 84 00 01 je 3da
    3dc: b9 04 00 ef lgr %r14,%r15
    3e0: a7 fb ff d0 aghi %r15,-48
    3e4: b9 04 00 b2 lgr %r11,%r2
    3e8: e3 e0 f0 98 00 24 stg %r14,152(%r15)
    3ee: e3 c0 b0 28 00 04 lg %r12,40(%r11)
    /* write q->lock_ptr in r12 */
    3f4: b9 02 00 cc ltgr %r12,%r12
    3f8: a7 84 00 4b je 48e
    /* if r12 is zero then jump over the code.... */
    3fc: e3 20 b0 28 00 04 lg %r2,40(%r11)
    /* write q->lock_ptr in r2 */
    402: c0 e5 00 00 00 00 brasl %r14,402
    404: R_390_PC32DBL _spin_lock+0x2
    /* use r2 as parameter for spin_lock */

    So the code effectively becomes

        if (q->lock_ptr != 0) spin_lock(q->lock_ptr)

    instead of

        if (lock_ptr != 0) spin_lock(lock_ptr)

    which is what caused the oops above.

    After adding a barrier, gcc generates code without this problem:
    [...] (the same)
    3ee: e3 c0 b0 28 00 04 lg %r12,40(%r11)
    3f4: b9 02 00 cc ltgr %r12,%r12
    3f8: b9 04 00 2c lgr %r2,%r12
    3fc: a7 84 00 48 je 48c
    400: c0 e5 00 00 00 00 brasl %r14,400
    402: R_390_PC32DBL _spin_lock+0x2
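
    In C terms, a sketch of where the barrier goes (the actual patch may differ
    in detail):

        retry:
                lock_ptr = q->lock_ptr;
                barrier();      /* keep gcc from re-reading q->lock_ptr */
                if (lock_ptr != 0) {
                        spin_lock(lock_ptr);
                        /* ... as before ... */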

    As a general note, the unqueue_me code seems a bit fishy. The retry logic
    of unqueue_me only works if we can guarantee that the original value of
    q->lock_ptr is always a spinlock (otherwise we would overwrite kernel
    memory). We know that q->lock_ptr can change. I don't know what happens
    with the original spinlock, as I am not an expert on the futex code.

    Cc: Martin Schwidefsky
    Cc: Rusty Russell
    Acked-by: Ingo Molnar
    Cc: Thomas Gleixner
    Signed-off-by: Christian Borntraeger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christian Borntraeger
     

29 Jul, 2006

2 commits

  • Fix robust PI-futexes to be properly unlocked on unexpected exit.

    For this to work the kernel has to know whether a futex is a PI or a
    non-PI one, because the semantics are different. Since the space in
    relevant glibc data structures is extremely scarce, the best solution is
    to encode the 'PI' information in bit 0 of the robust list pointer.
    Existing (non-PI) glibc robust futexes have this bit always zero, so the
    ABI is kept. New glibc with PI-robust-futexes will set this bit.
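
    A minimal sketch of the encoding described above (helper names are made up
    for illustration):

        #define ROBUST_LIST_PI_BIT      0x1UL

        /* Bit 0 of a robust-list entry pointer marks the futex as PI. */
        static inline int entry_is_pi(unsigned long uentry)
        {
                return uentry & ROBUST_LIST_PI_BIT;
        }

        static inline struct robust_list __user *entry_ptr(unsigned long uentry)
        {
                return (struct robust_list __user *)(uentry & ~ROBUST_LIST_PI_BIT);
        }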

    Further fixes from Thomas Gleixner

    Signed-off-by: Ingo Molnar
    Signed-off-by: Ulrich Drepper
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Fix pi_state->list handling bugs: a list handling mishap and a locking
    error. Plus add more debug checks and fix a few style issues I noticed
    while debugging this.

    (reported by Ulrich Drepper and Jakub Jelinek.)

    Signed-off-by: Ingo Molnar
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

11 Jul, 2006

1 commit

  • If futex_lock_pi() is called with a reference to a non-PI futex and waiters
    already exist, lookup_pi_state() oopses due to pi_state == NULL. Check for
    this condition and return -EINVAL to userspace.
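
    A minimal sketch of the check (illustrative):

        /* Waiters exist but the futex was never locked through
         * FUTEX_LOCK_PI, so no pi_state was ever attached. */
        if (unlikely(!pi_state))
                return -EINVAL;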

    Signed-off-by: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Jakub Jelinek
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Gleixner
     

04 Jul, 2006

1 commit

  • Teach the lock validator about the special (recursive) locking code.
    Introduces double_lock_hb() to unify double hash-bucket-lock taking.
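
    A plausible sketch of such a helper (ordering the two locks by address and
    telling the validator about the intentional nesting; not necessarily the
    literal implementation):

        static inline void double_lock_hb(struct futex_hash_bucket *hb1,
                                          struct futex_hash_bucket *hb2)
        {
                if (hb1 <= hb2) {
                        spin_lock(&hb1->lock);
                        if (hb1 < hb2)
                                spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
                } else {
                        spin_lock(&hb2->lock);
                        spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
                }
        }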

    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

02 Jul, 2006

2 commits

  • Fix futex_wake() exit condition bug when handling the robust-list with PI
    futexes on them.

    (reported by Ulrich Drepper, debugged by the lock validator.)

    Signed-off-by: Ingo Molnar
    Cc: Ulrich Drepper
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • queue_lock was getting called essentially twice in a row and was
    continually incrementing the mm_count ref count, thus causing a memory
    leak.

    Dinakar Guniguntala provided a proper fix for the problem that simply grabs
    the spinlock for the hash bucket queue rather than calling queue_lock
    again.

    The second time we do a queue_lock in futex_lock_pi, we really only need to
    take the hash bucket lock.
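
    A minimal sketch of the idea (illustrative, not the literal patch): on the
    retry path in futex_lock_pi(), take only the hash-bucket spinlock instead
    of going through queue_lock() again:

        /* hb was already set up by the earlier queue_lock(); just take its
         * lock, without taking another reference on current->mm. */
        spin_lock(&hb->lock);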

    Signed-off-by: Dinakar Guniguntala
    Signed-off-by: Vernon Mauery
    Acked-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vernon Mauery
     

28 Jun, 2006

3 commits

  • In futex_requeue(), when the two futex keys hash to the same bucket, there
    is no need to move the futex_q to the end of the bucket list.

    Signed-off-by: Sebastien Dugue
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sebastien Dugue
     
  • This adds the actual pi-futex implementation, based on rt-mutexes.

    [dino@in.ibm.com: fix an oops-causing race]
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Dinakar Guniguntala
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • We are pleased to announce "lightweight userspace priority inheritance" (PI)
    support for futexes. The following patchset and glibc patch implement it,
    on top of the robust-futexes patchset which is included in 2.6.16-mm1.

    We are calling it lightweight for 3 reasons:

    - in the user-space fastpath a PI-enabled futex involves no kernel work
    (or any other PI complexity) at all. No registration, no extra kernel
    calls - just pure fast atomic ops in userspace.

    - in the slowpath (in the lock-contention case), the system call and
    scheduling pattern is in fact better than that of normal futexes, due to
    the 'integrated' nature of FUTEX_LOCK_PI. [more about that further down]

    - the in-kernel PI implementation is streamlined around the mutex
    abstraction, with strict rules that keep the implementation relatively
    simple: only a single owner may own a lock (i.e. no read-write lock
    support), only the owner may unlock a lock, no recursive locking, etc.

    Priority Inheritance - why, oh why???
    -------------------------------------

    Many of you have heard the horror stories about the evil PI code that has
    been circling Linux for years, which makes no real sense at all and is only
    used by buggy applications and which has horrible overhead. Some of you have
    dreaded this very moment, when someone actually submits working PI code ;-)

    So why would we like to see PI support for futexes?

    We'd like to see it done purely for technological reasons. We don't think
    it's a buggy concept; we think it's useful functionality to offer to
    applications - functionality that cannot be achieved in other ways. We
    also think it's the
    right thing to do, and we think we've got the right arguments and the right
    numbers to prove that. We also believe that we can address all the
    counter-arguments as well. For these reasons (and the reasons outlined below)
    we are submitting this patch-set for upstream kernel inclusion.

    What are the benefits of PI?

    The short reply:
    ----------------

    User-space PI helps achieve and improve determinism for user-space
    applications. In the best case, it can help achieve determinism and
    well-bounded latencies. Even in the worst case, PI will improve the
    statistical distribution of locking-related application delays.

    The longer reply:
    -----------------

    Firstly, sharing locks between multiple tasks is a common programming
    technique that often cannot be replaced with lockless algorithms. As we can
    see in the kernel [which is a quite complex program in itself], lockless
    structures are rather the exception than the norm - the current ratio of
    lockless vs. lock-based code for shared data structures is somewhere between
    1:10 and 1:100. Lockless is hard, and the complexity of lockless algorithms
    often endangers the ability to do robust reviews of said code. That is,
    critical RT apps often choose lock-based structures to protect critical data
    structures instead of lockless algorithms. Furthermore, there are cases
    (like shared hardware, or other resource limits) where lockless access is
    mathematically impossible.

    Media players (such as Jack) are an example of reasonable application design
    with multiple tasks (with multiple priority levels) sharing short-held locks:
    for example, a highprio audio playback thread is combined with medium-prio
    construct-audio-data threads and low-prio display-colory-stuff threads. Add
    video and decoding to the mix and we've got even more priority levels.

    So once we accept that synchronization objects (locks) are an unavoidable fact
    of life, and once we accept that multi-task userspace apps have a very fair
    expectation of being able to use locks, we've got to think about how to offer
    the option of a deterministic locking implementation to user-space.

    Most of the technical counter-arguments against doing priority inheritance
    only apply to kernel-space locks. But user-space locks are different, there
    we cannot disable interrupts or make the task non-preemptible in a critical
    section, so the 'use spinlocks' argument does not apply (user-space spinlocks
    have the same priority inversion problems as other user-space locking
    constructs). Fact is, pretty much the only technique that currently enables
    good determinism for userspace locks (such as futex-based pthread mutexes) is
    priority inheritance:

    Currently (without PI), if a high-prio and a low-prio task share a lock [this
    is quite a common scenario for most non-trivial RT applications], even if all
    critical sections are coded carefully to be deterministic (i.e. all critical
    sections are short in duration and only execute a limited number of
    instructions), the kernel cannot guarantee any deterministic execution of the
    high-prio task: any medium-priority task could preempt the low-prio task while
    it holds the shared lock and executes the critical section, and could delay it
    indefinitely.

    Implementation:
    ---------------

    As mentioned before, the userspace fastpath of PI-enabled pthread mutexes
    involves no kernel work at all - they behave quite similarly to normal
    futex-based locks: a 0 value means unlocked, and a value==TID means locked.
    (This is the same method as used by list-based robust futexes.) Userspace uses
    atomic ops to lock/unlock these mutexes without entering the kernel.

    To handle the slowpath, we have added two new futex ops:

    FUTEX_LOCK_PI
    FUTEX_UNLOCK_PI

    If the lock-acquire fastpath fails, [i.e. an atomic transition from 0 to TID
    fails], then FUTEX_LOCK_PI is called. The kernel does all the remaining work:
    if there is no futex-queue attached to the futex address yet then the code
    looks up the task that owns the futex [it has put its own TID into the futex
    value], and attaches a 'PI state' structure to the futex-queue. The pi_state
    includes an rt-mutex, which is a PI-aware, kernel-based synchronization
    object. The 'other' task is made the owner of the rt-mutex, and the
    FUTEX_WAITERS bit is atomically set in the futex value. Then this task tries
    to lock the rt-mutex, on which it blocks. Once it returns, it has the mutex
    acquired, and it sets the futex value to its own TID and returns. Userspace
    has no other work to perform - it now owns the lock, and the futex value
    contains FUTEX_WAITERS | TID.

    If the unlock side fastpath succeeds, [i.e. userspace manages to do a TID ->
    0 atomic transition of the futex value], then no kernel work is triggered.

    If the unlock fastpath fails (because the FUTEX_WAITERS bit is set), then
    FUTEX_UNLOCK_PI is called, and the kernel unlocks the futex on behalf of
    userspace - and it also unlocks the attached pi_state->rt_mutex and thus wakes
    up any potential waiters.
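
    A hypothetical userspace sketch of the fastpath/slowpath split described
    above (illustrative only - not the glibc implementation; error handling and
    retries are omitted):

        #include <linux/futex.h>
        #include <stdatomic.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        /* 0 = unlocked, TID = locked; FUTEX_WAITERS is OR-ed in by the kernel. */
        static void pi_lock(_Atomic unsigned int *futex, unsigned int tid)
        {
                unsigned int expected = 0;

                /* Fastpath: 0 -> TID with one atomic op, no kernel entry. */
                if (atomic_compare_exchange_strong(futex, &expected, tid))
                        return;
                /* Slowpath: the kernel attaches the pi_state/rt-mutex, boosts
                 * the owner if needed and blocks us until we own the lock. */
                syscall(SYS_futex, futex, FUTEX_LOCK_PI, 0, NULL, NULL, 0);
        }

        static void pi_unlock(_Atomic unsigned int *futex, unsigned int tid)
        {
                unsigned int expected = tid;

                /* Fastpath: TID -> 0; fails if FUTEX_WAITERS got set. */
                if (atomic_compare_exchange_strong(futex, &expected, 0))
                        return;
                /* Slowpath: the kernel hands the lock to the top waiter. */
                syscall(SYS_futex, futex, FUTEX_UNLOCK_PI, 0, NULL, NULL, 0);
        }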

    Note that under this approach, contrary to other PI-futex approaches, there is
    no prior 'registration' of a PI-futex. [which is not quite possible anyway,
    due to existing ABI properties of pthread mutexes.]

    Also, under this scheme, 'robustness' and 'PI' are two orthogonal properties
    of futexes, and all four combinations are possible: futex, robust-futex,
    PI-futex, robust+PI-futex.

    glibc support:
    --------------

    Ulrich Drepper and Jakub Jelinek have written glibc support for PI-futexes
    (and robust futexes), enabling robust and PI (PTHREAD_PRIO_INHERIT) POSIX
    mutexes. (PTHREAD_PRIO_PROTECT support will be added later on too, no
    additional kernel changes are needed for that). [NOTE: The glibc patch is
    obviously unofficial and unsupported without matching upstream kernel
    functionality.]

    The patch-queue and the glibc patch can also be downloaded from:

    http://redhat.com/~mingo/PI-futex-patches/

    Many thanks go to the people who helped us create this kernel feature: Steven
    Rostedt, Esben Nielsen, Benedikt Spranger, Daniel Walker, John Cooper, Arjan
    van de Ven, Oleg Nesterov and others. Credits for related prior projects go
    to Dirk Grambow, Inaky Perez-Gonzalez, Bill Huey and many others.

    Clean up the futex code before adding more features to it:

    - use u32 as the futex field type - that's the ABI
    - use __user and pointers to u32 instead of unsigned long
    - code style / comment style cleanups
    - rename hash-bucket name from 'bh' to 'hb'.

    I checked the pre- and post-patch futex.o object files to make sure this
    patch has no effect on the generated code.

    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Arjan van de Ven
    Cc: Ulrich Drepper
    Cc: Jakub Jelinek
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

23 Jun, 2006

1 commit

  • Extend the get_sb() filesystem operation to take an extra argument that
    permits the VFS to pass in the target vfsmount that defines the mountpoint.

    The filesystem is then required to manually set the superblock and root dentry
    pointers. For most filesystems, this should be done with simple_set_mnt()
    which will set the superblock pointer and then set the root dentry to the
    superblock's s_root (as per the old default behaviour).

    The get_sb() op now returns an integer as there's now no need to return the
    superblock pointer.
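
    For a hypothetical filesystem "foo", the new-style operation roughly looks
    like this (a sketch based on the description above, not taken from any real
    filesystem):

        static int foo_get_sb(struct file_system_type *fs_type, int flags,
                              const char *dev_name, void *data, struct vfsmount *mnt)
        {
                struct super_block *sb = sget(fs_type, NULL, set_anon_super, data);

                if (IS_ERR(sb))
                        return PTR_ERR(sb);
                /* ... fill in sb and allocate its root dentry (sb->s_root) ... */

                /* Point the vfsmount at the superblock and its s_root. */
                return simple_set_mnt(mnt, sb);
        }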

    This patch permits a superblock to be implicitly shared amongst several mount
    points, such as can be done with NFS to avoid potential inode aliasing. In
    such a case, simple_set_mnt() would not be called, and instead the mnt_root
    and mnt_sb would be set directly.

    The patch also makes the following changes:

    (*) the get_sb_*() convenience functions in the core kernel now take a vfsmount
    pointer argument and return an integer, so most filesystems have to change
    very little.

    (*) If one of the convenience functions is not used, then get_sb() should
    normally call simple_set_mnt() to instantiate the vfsmount. This will
    always return 0, and so can be tail-called from get_sb().

    (*) generic_shutdown_super() now calls shrink_dcache_sb() to clean up the
    dcache upon superblock destruction rather than shrink_dcache_anon().

    This is required because the superblock may now have multiple trees that
    aren't actually bound to s_root, but that still need to be cleaned up. The
    currently called functions assume that the whole tree is rooted at s_root,
    and that anonymous dentries are not the roots of trees which results in
    dentries being left unculled.

    However, with the way NFS superblock sharing is currently set to be
    implemented, these assumptions are violated: the root of the filesystem is
    simply a dummy dentry and inode (the real inode for '/' may well be
    inaccessible), and all the vfsmounts are rooted on anonymous[*] dentries
    with child trees.

    [*] Anonymous until discovered from another tree.

    (*) The documentation has been adjusted accordingly, including changing
    ext2_* into foo_* throughout.

    [akpm@osdl.org: convert ipath_fs, do other stuff]
    Signed-off-by: David Howells
    Acked-by: Al Viro
    Cc: Nathan Scott
    Cc: Roland Dreier
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     

01 Apr, 2006

1 commit

  • The futex timeval is not checked for correctness. Adding a check does not
    break existing applications, as the timeval is supplied by glibc (and glibc
    always passes a correct value), but without the check the glibc-internal
    tests for this functionality fail.
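
    A hypothetical sketch of such a validation, applied to the user-supplied
    timeout before it is used (illustrative, not the literal patch):

        struct timespec t;

        if (copy_from_user(&t, utime, sizeof(t)) != 0)
                return -EFAULT;
        if (t.tv_nsec < 0 || t.tv_nsec >= 1000000000L)
                return -EINVAL;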

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Gleixner
     

28 Mar, 2006

2 commits

  • - fix: initialize the robust list(s) to NULL in copy_process.

    - doc update

    - cleanup: rename _inuser to _inatomic

    - __user cleanups and other small cleanups

    Signed-off-by: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Arjan van de Ven
    Cc: Ulrich Drepper
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Add the core infrastructure for robust futexes: structure definitions, the new
    syscalls and the do_exit() based cleanup mechanism.
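
    The user-visible pieces, roughly as they appear in linux/futex.h (a
    per-thread list head registered with the kernel via the new syscalls):

        struct robust_list {
                struct robust_list __user *next;
        };

        struct robust_list_head {
                /* List of locks this thread currently holds. */
                struct robust_list list;
                /* Offset from a list entry to the futex word itself. */
                long futex_offset;
                /* Lock being acquired/released right now, in case the thread
                 * dies in the middle of the operation. */
                struct robust_list __user *list_op_pending;
        };

        /* New syscalls: register / query the list head of a thread. */
        long sys_set_robust_list(struct robust_list_head __user *head, size_t len);
        long sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr,
                                 size_t __user *len_ptr);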

    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Arjan van de Ven
    Acked-by: Ulrich Drepper
    Cc: Michael Kerrisk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

07 Jan, 2006

1 commit

  • Make the futex code compilable and usable on NOMMU by making the attempt to
    handle page faults conditional on CONFIG_MMU. If this is not enabled, then
    we can assume that EFAULT returned from futex_atomic_op_inuser() is not
    recoverable, and that the address lies outside of valid memory.

    handle_mm_fault() is made to BUG if called on NOMMU, rather than attempting
    to invoke the actual handler (__handle_mm_fault).
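
    A rough sketch of the shape of the change (hypothetical; the helper name is
    made up for illustration):

        #ifdef CONFIG_MMU
                /* With an MMU the fault may be fixable: try to fault the page
                 * in and retry the atomic operation. */
                if (!futex_fault_in_page(uaddr))
                        goto retry;
        #endif
                /* Without an MMU (or when the fault cannot be fixed up), the
                 * address simply isn't valid memory. */
                ret = -EFAULT;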

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     

24 Nov, 2005

1 commit

  • The follow_page changes in get_futex_key have left it with two almost
    identical blocks, when handling the rare case of a futex in a nonlinear vma.
    get_user_pages will itself do that follow_page, and its additional
    find_extend_vma is hardly any overhead since the vma is already cached. Let's
    just delete the follow_page block and let get_user_pages do it.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

07 Nov, 2005

1 commit

  • The code for FUTEX_WAKE_OP calls an arch callback,
    futex_atomic_op_inuser(). That callback can return an error code, but
    currently the caller assumes any error is EFAULT, and will try various
    things to resolve the fault before eventually giving up with EFAULT
    (regardless of the original error code). This is not a theoretical case -
    arch callbacks currently return -ENOSYS if the opcode they are given is
    bogus.

    This patch alters the code to detect non-EFAULT errors and return them
    directly to the user.

    Of course, whether -ENOSYS is the correct return value for the bogus opcode
    case, or whether EINVAL would be more appropriate is another question.
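
    A minimal sketch of the change described above (illustrative, not the
    literal patch):

        ret = futex_atomic_op_inuser(encoded_op, uaddr2);
        if (unlikely(ret < 0)) {
                /* Only -EFAULT can be fixed up by faulting the page in;
                 * pass anything else (e.g. -ENOSYS) straight back. */
                if (ret != -EFAULT)
                        goto out;
                /* ... attempt to handle the fault and retry ... */
        }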

    Signed-off-by: David Gibson
    Cc: Rusty Russell
    Cc: Ingo Molnar
    Cc: Jamie Lokier
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Gibson
     

30 Oct, 2005

1 commit

  • Final step in pushing down common core's page_table_lock. follow_page no
    longer wants caller to hold page_table_lock, uses pte_offset_map_lock itself;
    and so no page_table_lock is taken in get_user_pages itself.

    But get_user_pages (and get_futex_key) do then need follow_page to pin the
    page for them: take Daniel's suggestion of bitflags to follow_page.

    Need one for WRITE, another for TOUCH (it was the accessed flag before:
    vanished along with check_user_page_readable, but surely get_numa_maps is
    wrong to mark every page it finds as accessed), another for GET.

    And another, ANON, to dispose of untouched_anonymous_page: it seems silly
    for that to descend a second time; let follow_page observe whether there was
    no page table and return ZERO_PAGE if so. Fix a minor bug in that: check
    VM_LOCKED - make_pages_present ought to make readonly anonymous pages
    present.
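
    Hypothetical flag definitions matching the description above (the FOLL_*
    names follow what ended up in the kernel; the exact values are illustrative):

        #define FOLL_WRITE      0x01    /* check pte is writable */
        #define FOLL_TOUCH      0x02    /* mark page accessed */
        #define FOLL_GET        0x04    /* do get_page on the page */
        #define FOLL_ANON       0x08    /* give ZERO_PAGE if no pgtable */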

    Give get_numa_maps a cond_resched while we're there.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

08 Sep, 2005

2 commits

  • This patch cleans up the error path of futex_fd() by removing duplicate
    code.

    Signed-off-by: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pekka Enberg
     
  • At the moment pthread_cond_signal is unnecessarily slow, because it wakes
    one waiter (which at least on UP usually means an immediate context switch
    to one of the waiter threads). This waiter wakes up and after a few
    instructions attempts to acquire the cv internal lock, but that lock is
    still held by the thread calling pthread_cond_signal. So it goes to sleep
    and eventually the signalling thread is scheduled in, unlocks the internal
    lock and wakes the waiter again.

    Now, before 2003-09-21 NPTL was using FUTEX_REQUEUE in pthread_cond_signal
    to avoid this performance issue, but it was removed when locks were
    redesigned to the 3 state scheme (unlocked, locked uncontended, locked
    contended).

    The following scenario shows why simply using FUTEX_REQUEUE in
    pthread_cond_signal together with using lll_mutex_unlock_force in place of
    lll_mutex_unlock is not enough, and probably why it was disabled at the
    time:

    The number is the value of cv->__data.__lock (thr1 is the waiting thread;
    thr2 and thr3 are two signalling threads).
    0  thr1: pthread_cond_wait
    1  thr1: lll_mutex_lock (cv->__data.__lock)
    0  thr1: lll_mutex_unlock (cv->__data.__lock)
    0  thr1: lll_futex_wait (&cv->__data.__futex, futexval)
    0  thr2: pthread_cond_signal
    1  thr2: lll_mutex_lock (cv->__data.__lock)
    1  thr3: pthread_cond_signal
    2  thr3: lll_mutex_lock (cv->__data.__lock)
    2  thr3: lll_futex_wait (&cv->__data.__lock, 2)
    2  thr2: lll_futex_requeue (&cv->__data.__futex, 0, 1, &cv->__data.__lock)
             # FUTEX_REQUEUE, not FUTEX_CMP_REQUEUE
    2  thr2: lll_mutex_unlock_force (cv->__data.__lock)
    0  thr2: cv->__data.__lock = 0
    0  thr2: lll_futex_wake (&cv->__data.__lock, 1)
    1  thr1: lll_mutex_lock (cv->__data.__lock)
    0  thr1: lll_mutex_unlock (cv->__data.__lock)
             # Here, lll_mutex_unlock doesn't know there are threads waiting
             # on the internal cv's lock

    Now, I believe it is possible to use FUTEX_REQUEUE in pthread_cond_signal,
    but it will cost us not one, but 2 extra syscalls and, what's worse, one of
    these extra syscalls will be done for every single waiting loop in
    pthread_cond_*wait.

    We would need to use lll_mutex_unlock_force in pthread_cond_signal after
    requeue and lll_mutex_cond_lock in pthread_cond_*wait after lll_futex_wait.

    Another alternative is to do the unlocking pthread_cond_signal needs to do
    (the lock can't be unlocked before lll_futex_wake, as that is racy) in the
    kernel.

    I have implemented both variants: futex-requeue-glibc.patch is the first
    one, and futex-wake_op{,-glibc}.patch does the unlocking inside the kernel.
    The kernel interface allows userland to specify what exactly an unlocking
    operation should look like (some atomic arithmetic operation with an
    optional constant argument, plus a comparison of the previous futex value
    with another constant).
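
    A hypothetical userspace sketch of that interface, using the FUTEX_OP()
    encoding from <linux/futex.h> the way a pthread_cond_signal implementation
    could: set *ulock = 0, wake one waiter on ucond, and wake one waiter on
    ulock only if the old value of *ulock was > 1, i.e. contended (the names
    ucond/ulock are made up for illustration):

        #include <linux/futex.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        static long cond_signal_wake_op(int *ucond, int *ulock)
        {
                /* val2 (here 1) travels in the timeout argument slot. */
                return syscall(SYS_futex, ucond, FUTEX_WAKE_OP, 1,
                               (void *)(long)1, ulock,
                               FUTEX_OP(FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1));
        }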

    It has been implemented just for ppc*, x86_64 and i?86; for other
    architectures I'm including just a stub header which can be used as a
    starting point by maintainers to write support for their arches, and which
    for now will just return -ENOSYS for FUTEX_WAKE_OP. The requeue patch has
    been (lightly) tested just on x86_64; the wake_op patch on a ppc64 kernel
    running 32-bit and 64-bit NPTL and an x86_64 kernel running 32-bit and
    64-bit NPTL.

    With the following benchmark on UP x86-64 I get:

    for i in nptl-orig nptl-requeue nptl-wake_op; do echo time elf/ld.so --library-path .:$i /tmp/bench; \
    for j in 1 2; do echo ( time elf/ld.so --library-path .:$i /tmp/bench ) 2>&1; done; done
    time elf/ld.so --library-path .:nptl-orig /tmp/bench
    real 0m0.655s user 0m0.253s sys 0m0.403s
    real 0m0.657s user 0m0.269s sys 0m0.388s
    time elf/ld.so --library-path .:nptl-requeue /tmp/bench
    real 0m0.496s user 0m0.225s sys 0m0.271s
    real 0m0.531s user 0m0.242s sys 0m0.288s
    time elf/ld.so --library-path .:nptl-wake_op /tmp/bench
    real 0m0.380s user 0m0.176s sys 0m0.204s
    real 0m0.382s user 0m0.175s sys 0m0.207s

    The benchmark is at:
    http://sourceware.org/ml/libc-alpha/2005-03/txt00001.txt
    Older futex-requeue-glibc.patch version is at:
    http://sourceware.org/ml/libc-alpha/2005-03/txt00002.txt
    Older futex-wake_op-glibc.patch version is at:
    http://sourceware.org/ml/libc-alpha/2005-03/txt00003.txt
    Will post a new version (just x86-64 fixes so that the patch
    applies against pthread_cond_signal.S) to libc-hacker ml soon.

    Attached is the kernel FUTEX_WAKE_OP patch as well as a simple-minded
    testcase that will not test the atomicity of the operation, but at least
    checks whether the threads that should have been woken up are woken up and
    whether the arithmetic operation in the kernel gave the expected results.

    Acked-by: Ingo Molnar
    Cc: Ulrich Drepper
    Cc: Jamie Lokier
    Cc: Rusty Russell
    Signed-off-by: Yoichi Yuasa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jakub Jelinek
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds