21 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version this program is distributed in the
    hope that it will be useful but without any warranty without even
    the implied warranty of merchantability or fitness for a particular
    purpose see the gnu general public license for more details you
    should have received a copy of the gnu general public license along
    with this program if not you can access it online at http www gnu
    org licenses gpl 2 0 html

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 1 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Kate Stewart
    Reviewed-by: Jilayne Lovejoy
    Reviewed-by: Steve Winslow
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190519154041.430943677@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner

03 Oct, 2018

1 commit

  • If CONFIG_WW_MUTEX_SELFTEST=y is enabled, booting an image
    in an arm64 virtual machine results in the following
    traceback if 8 CPUs are enabled:

    DEBUG_LOCKS_WARN_ON(__owner_task(owner) != current)
    WARNING: CPU: 2 PID: 537 at kernel/locking/mutex.c:1033 __mutex_unlock_slowpath+0x1a8/0x2e0
    ...
    Call trace:
    __mutex_unlock_slowpath()
    ww_mutex_unlock()
    test_cycle_work()
    process_one_work()
    worker_thread()
    kthread()
    ret_from_fork()

    If requesting b_mutex fails with -EDEADLK, the error variable
    is reassigned to the return value from calling ww_mutex_lock
    on a_mutex again. If this call fails, a_mutex is not locked.
    It is, however, unconditionally unlocked subsequently, causing
    the reported warning. Fix the problem by using two error variables.
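
    A condensed sketch of the fix as described (field and variable names
    follow the selftest's test_cycle_work(); a paraphrase, not the patch
    verbatim):

      int err, erra = 0;

      ww_mutex_lock(&cycle->a_mutex, &ctx);
      err = ww_mutex_lock(cycle->b_mutex, &ctx);
      if (err == -EDEADLK) {
              err = 0;        /* b_mutex is reacquired via the slow path */
              ww_mutex_unlock(&cycle->a_mutex);
              ww_mutex_lock_slow(cycle->b_mutex, &ctx);
              erra = ww_mutex_lock(&cycle->a_mutex, &ctx);
      }

      if (!err)
              ww_mutex_unlock(cycle->b_mutex);
      if (!erra)              /* unlock a_mutex only if it is really held */
              ww_mutex_unlock(&cycle->a_mutex);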

    With this change, the selftest still fails as follows:

    cyclic deadlock not resolved, ret[7/8] = -35

    However, the traceback is gone.

    Signed-off-by: Guenter Roeck
    Cc: Chris Wilson
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Will Deacon
    Fixes: d1b42b800e5d0 ("locking/ww_mutex: Add kselftests for resolving ww_mutex cyclic deadlocks")
    Link: http://lkml.kernel.org/r/1538516929-9734-1-git-send-email-linux@roeck-us.net
    Signed-off-by: Ingo Molnar

    Guenter Roeck

03 Jul, 2018

1 commit

  • The current Wound-Wait mutex algorithm is actually not Wound-Wait but
    Wait-Die. Also implement Wound-Wait as a per-ww-class choice. Wound-Wait
    is, contrary to Wait-Die, a preemptive algorithm and is known to generate
    fewer backoffs. Testing reveals that this is true if the number of
    simultaneously contending transactions is small. As the number of
    simultaneously contending threads increases, Wound-Wait becomes inferior
    to Wait-Die in terms of elapsed time, possibly due to the larger number
    of locks held by sleeping transactions.

    Update documentation and callers.

    Timings using git://people.freedesktop.org/~thomash/ww_mutex_test
    tag patch-18-06-15

    Each thread runs 100000 batches; each batch locks and unlocks 800 ww
    mutexes randomly chosen out of 100000. Four-core Intel x86_64:

    Algorithm    #threads   Rollbacks   Time
    Wound-Wait   4          ~100        ~17s
    Wait-Die     4          ~150000     ~19s
    Wound-Wait   16         ~360000     ~109s
    Wait-Die     16         ~450000     ~82s
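
    For callers, the algorithm is now chosen when the ww_class is defined;
    a minimal sketch (class and mutex names are illustrative, assuming the
    DEFINE_WD_CLASS helper this change adds next to the existing
    DEFINE_WW_CLASS):

      #include <linux/ww_mutex.h>

      static DEFINE_WD_CLASS(my_wd_class);    /* Wait-Die: prior behaviour */
      static DEFINE_WW_CLASS(my_ww_class);    /* Wound-Wait: preemptive    */

      static struct ww_mutex res_lock;

      static void init_example(void)
      {
              /* Locking calls are unchanged; only the class differs. */
              ww_mutex_init(&res_lock, &my_ww_class);
      }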

    Cc: Ingo Molnar
    Cc: Jonathan Corbet
    Cc: Gustavo Padovan
    Cc: Maarten Lankhorst
    Cc: Sean Paul
    Cc: David Airlie
    Cc: Davidlohr Bueso
    Cc: "Paul E. McKenney"
    Cc: Josh Triplett
    Cc: Thomas Gleixner
    Cc: Kate Stewart
    Cc: Philippe Ombredanne
    Cc: Greg Kroah-Hartman
    Cc: linux-doc@vger.kernel.org
    Cc: linux-media@vger.kernel.org
    Cc: linaro-mm-sig@lists.linaro.org
    Co-authored-by: Peter Zijlstra
    Signed-off-by: Thomas Hellstrom
    Acked-by: Peter Zijlstra (Intel)
    Acked-by: Ingo Molnar

    Thomas Hellstrom

14 Sep, 2017

1 commit

  • GFP_TEMPORARY was introduced by commit e12ba74d8ff3 ("Group short-lived
    and reclaimable kernel allocations") along with __GFP_RECLAIMABLE. Its
    primary motivation was to let users tell the allocator that an
    allocation is short-lived, so that the allocator can try to place such
    allocations close together and prevent long-term fragmentation. As much
    as this sounds like a reasonable semantic, it becomes much less clear
    when to use the high-level GFP_TEMPORARY allocation flag. How long is
    temporary? Can the context holding that memory sleep? Can it take
    locks? There seems to be no good answer to those questions.

    The current implementation of GFP_TEMPORARY is basically GFP_KERNEL |
    __GFP_RECLAIMABLE, which in itself is tricky because basically none of
    the existing callers provides a way to reclaim the allocated memory. So
    the flag is rather misleading and hard to evaluate for any benefit.
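
    For reference, a paraphrase of the two definitions from
    include/linux/gfp.h of that era, plus the mechanical conversion the
    patch applies tree-wide (buf and len are illustrative):

      #define GFP_KERNEL      (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
      #define GFP_TEMPORARY   (__GFP_RECLAIM | __GFP_IO | __GFP_FS | \
                               __GFP_RECLAIMABLE)

      buf = kmalloc(len, GFP_KERNEL);  /* was: kmalloc(len, GFP_TEMPORARY) */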

    I have checked some random users and none of them has added the flag
    with a specific justification. I suspect most of them just copied from
    other existing users, and others just thought it might be a good idea
    to use without any measurement. This suggests that GFP_TEMPORARY merely
    invites cargo-cult usage without any reasoning.

    I believe that our gfp flags are quite complex already, and especially
    those with high-level semantics should be clearly defined to prevent
    confusion and abuse. Therefore I propose dropping GFP_TEMPORARY and
    replacing all existing users with plain GFP_KERNEL. Please note that
    SLAB users with shrinkers will still get the __GFP_RECLAIMABLE
    heuristic and so will be placed properly for memory-fragmentation
    prevention.

    I can see reasons we might want some gfp flag to reflect short-term
    allocations, but I propose starting from a clear semantic definition
    and only then adding users with proper justification.

    This was brought up before LSF this year by Matthew [1], and it turned
    out that GFP_TEMPORARY really doesn't have a clear semantic. It seems
    to be a heuristic without any measured advantage for most (if not all)
    of its current users. The follow-up discussion revealed that opinions
    on what a temporary allocation might be differ a lot between
    developers. So rather than trying to tweak existing users into a
    semantic they never expected, I propose to simply remove the flag and
    start from scratch if we really need a semantic for short-term
    allocations.

    [1] http://lkml.kernel.org/r/20170118054945.GD18349@bombadil.infradead.org

    [akpm@linux-foundation.org: fix typo]
    [akpm@linux-foundation.org: coding-style fixes]
    [sfr@canb.auug.org.au: drm/i915: fix up]
    Link: http://lkml.kernel.org/r/20170816144703.378d4f4d@canb.auug.org.au
    Link: http://lkml.kernel.org/r/20170728091904.14627-1-mhocko@kernel.org
    Signed-off-by: Michal Hocko
    Signed-off-by: Stephen Rothwell
    Acked-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Cc: Matthew Wilcox
    Cc: Neil Brown
    Cc: "Theodore Ts'o"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko

30 Mar, 2017

1 commit

  • Use a timeout rather than a fixed number of loops to avoid running for
    very long periods, such as under the kbuilder VMs.
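
    A sketch of the change's shape (the loop body and the two-second
    budget are hypothetical, not the selftest's actual values):

      unsigned long timeout = jiffies + msecs_to_jiffies(2000);

      do {
              do_stress_batch();      /* hypothetical per-iteration work */
              cond_resched();
      } while (!time_after(jiffies, timeout));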

    Reported-by: kernel test robot
    Signed-off-by: Chris Wilson
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Boqun Feng
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20170310105733.6444-1-chris@chris-wilson.co.uk
    Signed-off-by: Ingo Molnar

    Chris Wilson

16 Mar, 2017

1 commit

  • Currently each thread starts an acquire context only once, and
    performs all its loop iterations under it.

    This means that the Wound/Wait relations between threads are fixed.

    To make things a little more realistic and cover more of the
    functionality with the test, open a new acquire context for each loop
    iteration.
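
    A sketch of the restructuring (names are illustrative):

      /* Before: one context pins the thread's age for every pass. */
      ww_acquire_init(&ctx, &ww_class);
      for (n = 0; n < nloops; n++)
              do_batch(&ctx);
      ww_acquire_fini(&ctx);

      /* After: a fresh context per pass, so the wound/wait ordering
       * between threads is re-decided on every iteration. */
      for (n = 0; n < nloops; n++) {
              ww_acquire_init(&ctx, &ww_class);
              do_batch(&ctx);
              ww_acquire_fini(&ctx);
      }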

    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Chris Wilson
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra

02 Mar, 2017

2 commits

  • Because there are only 12 bits in held_lock::references, we can only
    support 4095 nested holds of the same lock at a time. Adjust the lock
    number for the ww_mutex stress test to kill one lockdep splat:

    [ ] [ BUG: bad unlock balance detected! ]
    [ ] kworker/u2:0/5 is trying to release lock (ww_class_mutex) at:
    [ ] ww_mutex_unlock()
    [ ] but there are no more locks to release!
    ...
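
    The ceiling comes from lockdep's per-task bookkeeping: repeated
    acquisitions of the same lock are folded into one held_lock entry
    whose reference count is a 12-bit field (paraphrased from
    include/linux/lockdep.h):

      struct held_lock {
              /* ... other bookkeeping fields elided ... */
              unsigned int references:12;     /* (1 << 12) - 1 == 4095 */
      };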

    Signed-off-by: Boqun Feng
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Chris Wilson
    Cc: Fengguang Wu
    Cc: Linus Torvalds
    Cc: Nicolai Hähnle
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20170301150138.hdixnmafzfsox7nn@tardis.cn.ibm.com
    Signed-off-by: Ingo Molnar

    Boqun Feng
  • When busy-spinning on a ww_mutex_trylock(), we depend upon the other
    thread advancing and releasing the lock. This cannot happen on a
    single CPU unless we relinquish it:

    [ ] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:1:18]
    ...
    [ ] Call Trace:
    [ ] mutex_trylock()
    [ ] test_mutex_work+0x31/0x56
    [ ] process_one_work+0x1b4/0x2f9
    [ ] worker_thread+0x1b0/0x27c
    [ ] kthread+0xd1/0xd3
    [ ] ret_from_fork+0x19/0x30
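
    The shape of the fix, sketched (loop condensed; ww_mutex_trylock()
    is the one-argument wrapper of that era):

      /* Yield instead of busy-waiting so the lock holder can run and
       * release the mutex even on a single CPU. */
      while (!ww_mutex_trylock(&mtx))
              cond_resched();         /* was: cpu_relax() */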

    Reported-by: Fengguang Wu
    Signed-off-by: Chris Wilson
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Fixes: f2a5fec17395 ("locking/ww_mutex: Begin kselftests for ww_mutex")
    Link: http://lkml.kernel.org/r/20170228094011.2595-1-chris@chris-wilson.co.uk
    Signed-off-by: Ingo Molnar

    Chris Wilson

14 Jan, 2017

5 commits

  • Signed-off-by: Chris Wilson
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Maarten Lankhorst
    Cc: Nicolai Hähnle
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20161201114711.28697-8-chris@chris-wilson.co.uk
    Signed-off-by: Ingo Molnar

    Chris Wilson
  • Check that ww_mutexes can detect cyclic deadlocks (generalised ABBA
    cycles) and resolve them by lock reordering.
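
    A sketch of the generalised cycle: worker i takes lock[i] and then
    lock[(i + 1) % n], so n workers form a ring in the wait graph that
    only the ww_mutex -EDEADLK backoff can break (names are illustrative,
    not the selftest's):

      static int cycle_worker(struct ww_mutex *locks, int i, int n,
                              struct ww_class *class)
      {
              struct ww_acquire_ctx ctx;
              struct ww_mutex *first = &locks[i];
              struct ww_mutex *second = &locks[(i + 1) % n];
              int err;

              ww_acquire_init(&ctx, class);
              ww_mutex_lock(first, &ctx);     /* holding nothing: cannot die */
              err = ww_mutex_lock(second, &ctx);
              if (err == -EDEADLK) {
                      /* Lost the age contest: back off, sleep on the
                       * contended lock, then retake the first one. */
                      ww_mutex_unlock(first);
                      ww_mutex_lock_slow(second, &ctx);
                      err = ww_mutex_lock(first, &ctx);
              }

              if (!err)
                      ww_mutex_unlock(first);
              ww_mutex_unlock(second);        /* held on every path above */
              ww_acquire_fini(&ctx);
              return err;
      }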

    Signed-off-by: Chris Wilson
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Maarten Lankhorst
    Cc: Nicolai Hähnle
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20161201114711.28697-7-chris@chris-wilson.co.uk
    Signed-off-by: Ingo Molnar

    Chris Wilson
  • Signed-off-by: Chris Wilson
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Maarten Lankhorst
    Cc: Nicolai Hähnle
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20161201114711.28697-6-chris@chris-wilson.co.uk
    Signed-off-by: Ingo Molnar

    Chris Wilson
  • Signed-off-by: Chris Wilson
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Maarten Lankhorst
    Cc: Nicolai Hähnle
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20161201114711.28697-5-chris@chris-wilson.co.uk
    Signed-off-by: Ingo Molnar

    Chris Wilson
  • Signed-off-by: Chris Wilson
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Maarten Lankhorst
    Cc: Nicolai Hähnle
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20161201114711.28697-4-chris@chris-wilson.co.uk
    Signed-off-by: Ingo Molnar

    Chris Wilson