15 Oct, 2020

1 commit

  • Right now, seqlock.h produces kernel-doc warnings:

    ./include/linux/seqlock.h:181: error: Cannot parse typedef!

    Convert the offending kernel-doc comment into a plain comment to
    avoid confusing kernel-doc.

    Fixes: a8772dccb2ec ("seqlock: Fold seqcount_LOCKNAME_t definition")
    Signed-off-by: Mauro Carvalho Chehab
    Signed-off-by: Ingo Molnar
    Link: https://lore.kernel.org/r/a59144cdaadf7fdf1fe5d55d0e1575abbf1c0cb3.1602590106.git.mchehab+huawei@kernel.org

    Mauro Carvalho Chehab
     

08 Oct, 2020

1 commit

  • ctags creates a warning:
    |ctags: Warning: include/linux/seqlock.h:738: null expansion of name pattern "\2"

    The DEFINE_SEQLOCK() macro is passed to ctags, which is told to expect
    an argument.

    Add a dummy argument to keep ctags quiet.

    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Will Deacon
    Link: https://lkml.kernel.org/r/20200924154851.skmswuyj322yuz4g@linutronix.de

    Sebastian Andrzej Siewior
     

16 Sep, 2020

1 commit

  • seqcount_LOCKNAME_init() needs to be a macro due to the lockdep
    annotation in seqcount_init(). Since a macro cannot define another
    macro, we need to effectively revert commit: e4e9ab3f9f91 ("seqlock:
    Fold seqcount_LOCKNAME_init() definition").

    Fixes: e4e9ab3f9f91 ("seqlock: Fold seqcount_LOCKNAME_init() definition")
    Reported-by: Qian Cai
    Debugged-by: Boqun Feng
    Signed-off-by: Peter Zijlstra (Intel)
    Tested-by: Qian Cai
    Link: https://lkml.kernel.org/r/20200915143028.GB2674@hirez.programming.kicks-ass.net

    peterz@infradead.org
     

10 Sep, 2020

7 commits

  • On PREEMPT_RT, seqlock_t is transformed into a sleeping lock that does
    not disable preemption. A seqlock_t reader can thus preempt its write
    side section and spin for the entire scheduler tick. If that reader
    belongs to a real-time scheduling class, it can spin forever and the
    kernel will livelock.

    To break this livelock possibility on PREEMPT_RT, implement seqlock_t in
    terms of "seqcount_spinlock_t" instead of plain "seqcount_t".

    Besides its pure annotational value, this will leverage the existing
    seqcount_LOCKNAME_t PREEMPT_RT anti-livelock mechanisms, without adding
    any extra code.
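    The layering described above can be sketched in userspace C. This is a
    minimal model with hypothetical names, not the kernel API; a tiny
    test-and-set lock stands in for spinlock_t:

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical model: the seqlock's counter carries a pointer to the
 * writer serialization lock, mirroring how seqlock_t is built on
 * seqcount_spinlock_t rather than on a plain seqcount_t. */
typedef struct { atomic_flag locked; } spin_t;

static void spin_lock(spin_t *l)   { while (atomic_flag_test_and_set(&l->locked)) ; }
static void spin_unlock(spin_t *l) { atomic_flag_clear(&l->locked); }

struct model_seqlock {
	spin_t lock;
	struct {
		atomic_uint seq;
		spin_t *lock;	/* the association: "seqcount_spinlock_t" */
	} seqcount;
};

static void model_seqlock_init(struct model_seqlock *sl)
{
	atomic_flag_clear(&sl->lock.locked);
	atomic_init(&sl->seqcount.seq, 0);
	sl->seqcount.lock = &sl->lock;
}

static void model_write_seqlock(struct model_seqlock *sl)
{
	spin_lock(&sl->lock);
	atomic_fetch_add(&sl->seqcount.seq, 1);	/* odd: write in progress */
}

static void model_write_sequnlock(struct model_seqlock *sl)
{
	atomic_fetch_add(&sl->seqcount.seq, 1);	/* even again */
	spin_unlock(&sl->lock);
}

static unsigned model_read_seqbegin(struct model_seqlock *sl)
{
	return atomic_load(&sl->seqcount.seq);
}

static int model_read_seqretry(struct model_seqlock *sl, unsigned start)
{
	return atomic_load(&sl->seqcount.seq) != start;
}
```

    On PREEMPT_RT the stored lock pointer is what lets the read side block
    on (and priority-boost) a preempted writer instead of spinning.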

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200904153231.11994-6-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • Preemption must be disabled before entering a sequence counter write
    side critical section. Otherwise the read side section can preempt the
    write side section and spin for the entire scheduler tick. If that
    reader belongs to a real-time scheduling class, it can spin forever and
    the kernel will livelock.

    Disabling preemption cannot be done for PREEMPT_RT though: it can lead
    to higher latencies, and the write side sections will not be able to
    acquire locks which become sleeping locks (e.g. spinlock_t).

    To remain preemptible, while avoiding a possible livelock caused by the
    reader preempting the writer, use a different technique: let the reader
    detect if a seqcount_LOCKNAME_t writer is in progress. If that's the
    case, acquire then release the associated LOCKNAME writer serialization
    lock. This will allow any possibly-preempted writer to make progress
    until the end of its writer serialization lock critical section.

    Implement this lock-unlock technique for all seqcount_LOCKNAME_t with
    an associated (PREEMPT_RT) sleeping lock.
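    The reader-side lock-then-unlock technique can be sketched as follows
    (hypothetical names; a test-and-set lock stands in for the sleeping
    lock, so only the mechanics are shown, not the priority inheritance):

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch: if the reader observes an odd count, a writer is in
 * progress, so the reader acquires and immediately releases the
 * associated writer serialization lock. On PREEMPT_RT that lock is a
 * sleeping, priority-inheriting lock, so this blocks the reader until
 * the possibly-preempted writer makes progress. */
typedef struct { atomic_flag locked; } spin_t;

static void spin_lock(spin_t *l)   { while (atomic_flag_test_and_set(&l->locked)) ; }
static void spin_unlock(spin_t *l) { atomic_flag_clear(&l->locked); }

struct seqcount_lock_model {
	atomic_uint seq;
	spin_t *lock;		/* associated writer lock */
};

static unsigned model_read_begin(struct seqcount_lock_model *s)
{
	unsigned seq;

	while ((seq = atomic_load(&s->seq)) & 1) {
		/* Writer in progress: let it finish, then re-sample. */
		spin_lock(s->lock);
		spin_unlock(s->lock);
	}
	return seq;
}
```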

    References: 55f3560df975 ("seqlock: Extend seqcount API with associated locks")
    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200519214547.352050-1-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • The sequence counters read APIs are implemented as CPP macros, so they
    can take either seqcount_t or any of the seqcount_LOCKNAME_t variants.
    Such macros then get *directly* transformed to internal C functions that
    only take plain seqcount_t.

    Further commits need access to seqcount_LOCKNAME_t inside of the actual
    read APIs code. Thus transform all of the seqcount read APIs to pure GCC
    statement expressions instead.

    This will not break type-safety: all of the transformed APIs resolve to
    a _Generic() selection that does not have a "default" case.

    This will also not affect the transformed APIs readability: previously
    added kernel-doc above all of seqlock.h functions makes the expectations
    quite clear for call-site developers.
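    The statement-expression plus _Generic() combination can be illustrated
    with a sketch (made-up names, not the kernel's actual macros):

```c
#include <assert.h>

/* One read-API macro accepting several counter types: the GCC
 * statement expression keeps the typed counter in scope, and
 * _Generic() dispatches on its static type. */
typedef struct { unsigned seq; } seqcount_t;
typedef struct { seqcount_t s; /* + associated lock in the kernel */ } seqcount_spinlock_t;

static unsigned begin_plain(const seqcount_t *s)             { return s->seq; }
static unsigned begin_spinlock(const seqcount_spinlock_t *s) { return s->s.seq; }

/* No "default:" case: any other argument type fails to compile,
 * which is how type-safety is preserved. */
#define model_read_begin(s)						\
({									\
	unsigned __seq = _Generic(*(s),					\
		seqcount_t:          begin_plain,			\
		seqcount_spinlock_t: begin_spinlock)(s);		\
	__seq;								\
})
```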

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200904153231.11994-4-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • At seqlock.h, the following set of functions:

    - __seqcount_ptr()
    - __seqcount_preemptible()
    - __seqcount_assert()

    act as plain seqcount_t "property" accessors. Meanwhile, the following
    group:

    - __seqcount_ptr()
    - __seqcount_lock_preemptible()
    - __seqcount_assert_lock_held()

    act as the equivalent set, but in the generic form, taking either
    seqcount_t or any of the seqcount_LOCKNAME_t variants.

    This is quite confusing, especially since the first member of each
    group has exactly the same name.

    Differentiate the first group by using "__seqprop" as prefix, and also
    use that same prefix for all of seqcount_LOCKNAME_t property accessors.

    While at it, constify the property accessors first parameter when
    appropriate.

    References: 55f3560df975 ("seqlock: Extend seqcount API with associated locks")
    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200904153231.11994-3-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • At seqlock.h, sequence counters with associated locks are either called
    seqcount_LOCKNAME_t, seqcount_LOCKTYPE_t, or seqcount_locktype_t.

    Standardize on seqcount_LOCKNAME_t for all instances in comments,
    kernel-doc, and SEQCOUNT_LOCKNAME() generative macro parameters.

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200904153231.11994-2-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • All latch sequence counter call-sites have now been converted from plain
    seqcount_t to the new seqcount_latch_t data type.

    Enforce type-safety by modifying seqlock.h latch APIs to only accept
    seqcount_latch_t.

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200827114044.11173-9-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • Latch sequence counters are a multiversion concurrency control mechanism
    where the seqcount_t counter even/odd value is used to switch between
    two copies of protected data. This allows the seqcount_t read path to
    safely interrupt its write side critical section (e.g. from NMIs).

    Initially, latch sequence counters were implemented as a single write
    function above plain seqcount_t: raw_write_seqcount_latch(). The read
    side was expected to use plain seqcount_t raw_read_seqcount().

    A specialized latch read function, raw_read_seqcount_latch(), was later
    added. It became the standardized way for latch read paths. Due to the
    dependent load, it has one read memory barrier less than the plain
    seqcount_t raw_read_seqcount() API.

    Only raw_write_seqcount_latch() and raw_read_seqcount_latch() should be
    used with latch sequence counters. Having *unique* read and write path
    APIs means that latch sequence counters are actually a data type of
    their own -- just inappropriately overloading plain seqcount_t.

    Introduce seqcount_latch_t. This adds type-safety and ensures that only
    the correct latch-safe APIs are to be used.

    Not to break bisection, let the latch APIs also accept plain seqcount_t
    or seqcount_raw_spinlock_t. After converting all call sites to
    seqcount_latch_t, only that new data type will be allowed.
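    A toy model of the latch scheme reads as follows (made-up names, with
    simplified sequentially-consistent atomics in place of the kernel's
    explicit write barriers):

```c
#include <assert.h>
#include <stdatomic.h>

/* The counter's LSB selects which of two data copies readers use, so
 * a reader that interrupts the writer (e.g. from NMI context) always
 * finds one fully consistent copy. */
typedef struct {
	atomic_uint seq;
	int data[2];		/* two versions of the protected data */
} latch_model_t;

static void latch_write(latch_model_t *l, int val)
{
	atomic_fetch_add(&l->seq, 1);	/* odd: readers now use data[1] */
	l->data[0] = val;
	atomic_fetch_add(&l->seq, 1);	/* even: readers now use data[0] */
	l->data[1] = val;
}

static int latch_read(latch_model_t *l)
{
	unsigned seq, idx;
	int val;

	do {
		seq = atomic_load(&l->seq);
		idx = seq & 1;		/* pick the copy not being written */
		val = l->data[idx];
	} while (atomic_load(&l->seq) != seq);	/* raced with a writer? */

	return val;
}
```

    Note that, unlike a regular seqcount read section, the latch read side
    never waits for an even counter; it only retries if the counter moved,
    which is why it needs one barrier less.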

    References: 9b0fd802e8c0 ("seqcount: Add raw_write_seqcount_latch()")
    References: 7fc26327b756 ("seqlock: Introduce raw_read_seqcount_latch()")
    References: aadd6e5caaac ("time/sched_clock: Use raw_read_seqcount_latch()")
    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200827114044.11173-4-a.darwish@linutronix.de

    Ahmed S. Darwish
     

26 Aug, 2020

1 commit

  • Fix kernel-doc warnings in <linux/seqlock.h>.

    ../include/linux/seqlock.h:152: warning: Incorrect use of kernel-doc format: * seqcount_LOCKNAME_init() - runtime initializer for seqcount_LOCKNAME_t
    ../include/linux/seqlock.h:164: warning: Incorrect use of kernel-doc format: * SEQCOUNT_LOCKTYPE() - Instantiate seqcount_LOCKNAME_t and helpers
    ../include/linux/seqlock.h:229: warning: Function parameter or member 'seq_name' not described in 'SEQCOUNT_LOCKTYPE_ZERO'
    ../include/linux/seqlock.h:229: warning: Function parameter or member 'assoc_lock' not described in 'SEQCOUNT_LOCKTYPE_ZERO'
    ../include/linux/seqlock.h:229: warning: Excess function parameter 'name' description in 'SEQCOUNT_LOCKTYPE_ZERO'
    ../include/linux/seqlock.h:229: warning: Excess function parameter 'lock' description in 'SEQCOUNT_LOCKTYPE_ZERO'
    ../include/linux/seqlock.h:695: warning: duplicate section name 'NOTE'

    Demote kernel-doc notation for the macros "seqcount_LOCKNAME_init()" and
    "SEQCOUNT_LOCKTYPE()"; scripts/kernel-doc does not handle them correctly.

    Rename function parameters in SEQCNT_LOCKNAME_ZERO() documentation
    to match the macro's argument names. Change the macro name in the
    documentation to SEQCOUNT_LOCKTYPE_ZERO() to match the macro's name.

    For raw_write_seqcount_latch(), rename the second NOTE: to NOTE2:
    to prevent a kernel-doc warning. However, the generated output is not
    quite as nice as it could be for this.

    Fix a typo: s/LOCKTYPR/LOCKTYPE/

    Fixes: 0efc94c5d15c ("seqcount: Compress SEQCNT_LOCKNAME_ZERO()")
    Fixes: e4e9ab3f9f91 ("seqlock: Fold seqcount_LOCKNAME_init() definition")
    Fixes: a8772dccb2ec ("seqlock: Fold seqcount_LOCKNAME_t definition")
    Reported-by: kernel test robot
    Signed-off-by: Randy Dunlap
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200817000200.20993-1-rdunlap@infradead.org

    Randy Dunlap
     

06 Aug, 2020

1 commit

  • By using lockdep_assert_*() from seqlock.h, the spaghetti monster
    attacked.

    Attack back by reducing seqlock.h dependencies from two key high level headers:

    - : -Remove
    - : -Remove
    - : +Add

    The price was to add it to sched.h ...

    Core header fallout, we add direct header dependencies instead of gaining them
    parasitically from higher level headers:

    - : +Add
    - : +Add
    - : +Add
    - : +Add
    - : +Add
    - : +Add

    Arch headers fallout:

    - PARISC: : +Add
    - SH: : +Add
    - SPARC: : +Add
    - SPARC: : +Add , -Remove
    - X86: : +Add , -Remove

    There's also a bunch of parasitic header dependency fallout in .c files, not listed
    separately.

    [ mingo: Extended the changelog, split up & fixed the original patch. ]

    Co-developed-by: Ingo Molnar
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Ingo Molnar
    Link: https://lore.kernel.org/r/20200804133438.GK2674@hirez.programming.kicks-ass.net

    Peter Zijlstra
     

29 Jul, 2020

14 commits

  • Attempt uniformity and brevity.

    Signed-off-by: Peter Zijlstra (Intel)

    Peter Zijlstra
     
  • Less is more.

    Signed-off-by: Peter Zijlstra (Intel)

    Peter Zijlstra
     
  • Manual repetition is boring and error prone.

    Signed-off-by: Peter Zijlstra (Intel)

    Peter Zijlstra
     
  • Manual repetition is boring and error prone.

    Signed-off-by: Peter Zijlstra (Intel)

    Peter Zijlstra
     
  • __SEQ_LOCKDEP() is an expression gate for the
    seqcount_LOCKNAME_t::lock member. Rename it to be about the member,
    not the gate condition.

    Later (PREEMPT_RT) patches will make the member available for !LOCKDEP
    configs.

    Signed-off-by: Peter Zijlstra (Intel)

    Peter Zijlstra
     
  • Parent commit, "seqlock: Extend seqcount API with associated locks",
    introduced a large number of multi-line macros that are newline-escaped
    at 72 columns.

    For overall cohesion, align the earlier-existing macros similarly.

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200720155530.1173732-11-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • A sequence counter write side critical section must be protected by some
    form of locking to serialize writers. If the serialization primitive is
    not disabling preemption implicitly, preemption has to be explicitly
    disabled before entering the write side critical section.

    There is no built-in debugging mechanism to verify that the lock used
    for writer serialization is held and preemption is disabled. Some usage
    sites like dma-buf have explicit lockdep checks for the writer-side
    lock, but this covers only a small portion of the sequence counter usage
    in the kernel.

    Add new sequence counter types which allow associating a lock to the
    sequence counter at initialization time. The seqcount API functions are
    extended to provide appropriate lockdep assertions depending on the
    seqcount/lock type.

    For sequence counters with associated locks that do not implicitly
    disable preemption, preemption protection is enforced in the sequence
    counter write side functions. This removes the need to explicitly add
    preempt_disable/enable() around the write side critical sections: the
    write_begin/end() functions for these new sequence counter types
    automatically do this.

    Introduce the following seqcount types with associated locks:

    seqcount_spinlock_t
    seqcount_raw_spinlock_t
    seqcount_rwlock_t
    seqcount_mutex_t
    seqcount_ww_mutex_t

    Extend the seqcount read and write functions to branch out to the
    specific seqcount_LOCKTYPE_t implementation at compile-time. This avoids
    kernel API explosion per each new seqcount_LOCKTYPE_t added. Add such
    compile-time type detection logic into a new, internal, seqlock header.

    Document the proper seqcount_LOCKTYPE_t usage, and rationale, at
    Documentation/locking/seqlock.rst.

    If lockdep is disabled, this lock association is compiled out and has
    neither storage size nor runtime overhead.
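    The lock association and its write-side assertion can be sketched in
    userspace C (hypothetical names; an assert on a test-and-set lock
    stands in for the lockdep check):

```c
#include <assert.h>
#include <stdatomic.h>

/* The counter records its serialization lock at init time, so the
 * write path can assert the lock is actually held before entering the
 * write side critical section. */
typedef struct { atomic_flag locked; } spin_t;

static void spin_lock(spin_t *l)   { while (atomic_flag_test_and_set(&l->locked)) ; }
static void spin_unlock(spin_t *l) { atomic_flag_clear(&l->locked); }

typedef struct {
	unsigned seq;
	spin_t *lock;		/* associated writer lock */
} seqcount_spinlock_model_t;

static void seqcount_model_init(seqcount_spinlock_model_t *s, spin_t *lock)
{
	s->seq = 0;
	s->lock = lock;		/* the init-time association */
}

static void model_write_begin(seqcount_spinlock_model_t *s)
{
	/* "lockdep" stand-in: test_and_set on a held lock returns true
	 * and leaves it unchanged; on an unheld lock the assert fires. */
	assert(atomic_flag_test_and_set(&s->lock->locked));
	s->seq++;		/* counter goes odd: write in progress */
}

static void model_write_end(seqcount_spinlock_model_t *s)
{
	s->seq++;		/* counter even again */
}
```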

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200720155530.1173732-10-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • Preemption must be disabled before entering a sequence count write side
    critical section. Failing to do so, the seqcount read side can preempt
    the write side section and spin for the entire scheduler tick. If that
    reader belongs to a real-time scheduling class, it can spin forever and
    the kernel will livelock.

    Assert through lockdep that preemption is disabled for seqcount writers.

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200720155530.1173732-9-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • raw_seqcount_begin() has the same code as raw_read_seqcount(), with the
    exception of masking the sequence counter's LSB before returning it to
    the caller.

    Note, raw_seqcount_begin() masks the counter's LSB before returning it
    to the caller so that read_seqcount_retry() can fail if the counter is
    odd -- without the overhead of an extra branching instruction.
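    Why masking the LSB removes the extra branch can be shown with two
    pure functions (illustrative only, not the kernel code):

```c
#include <assert.h>

/* An odd counter value means a write is in progress. Masking the LSB
 * makes the returned snapshot even, so it can never equal the
 * still-odd counter: the ordinary "did the counter change?" retry
 * check then forces a retry, with no explicit "is it odd?" branch. */
static unsigned model_raw_seqcount_begin(unsigned seq)
{
	return seq & ~1u;	/* odd (write in progress) becomes even */
}

static int model_read_seqcount_retry(unsigned seq_now, unsigned start)
{
	return seq_now != start;
}
```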

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200720155530.1173732-7-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • seqlock.h is now included by the kernel's RST documentation, but only
    a small number of the exported seqlock.h functions are kernel-doc
    annotated.
    Add kernel-doc for all seqlock.h exported APIs.

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200720155530.1173732-6-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • The seqlock.h seqcount_t and seqlock_t API definitions are presented in
    the chronological order of their development rather than the order that
    makes most sense to readers. This makes it hard to follow and understand
    the header file code.

    Group and reorder all of the exported seqlock.h functions according to
    their function.

    First, group together the seqcount_t standard read path functions:

    - __read_seqcount_begin()
    - raw_read_seqcount_begin()
    - read_seqcount_begin()

    since each function is implemented exactly in terms of the one above
    it. Then, group the special-case seqcount_t readers on their own as:

    - raw_read_seqcount()
    - raw_seqcount_begin()

    since the only difference between the two functions is that the second
    one masks the sequence counter LSB while the first one does not. Note
    that raw_seqcount_begin() can actually be implemented in terms of
    raw_read_seqcount(), which will be done in a follow-up commit.

    Then, group the seqcount_t write path functions, instead of injecting
    unrelated seqcount_t latch functions between them, and order them as:

    - raw_write_seqcount_begin()
    - raw_write_seqcount_end()
    - write_seqcount_begin_nested()
    - write_seqcount_begin()
    - write_seqcount_end()
    - raw_write_seqcount_barrier()
    - write_seqcount_invalidate()

    which is the expected natural order. This also isolates the seqcount_t
    latch functions into their own area, at the end of the sequence counters
    section, and before jumping to the next one: sequential locks
    (seqlock_t).

    Do a similar grouping and reordering for seqlock_t "locking" readers vs.
    the "conditionally locking or lockless" ones.

    No implementation code was changed in any of the reordering above.

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200720155530.1173732-5-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • The seqcount_t latch reader example at the raw_write_seqcount_latch()
    kernel-doc comment ends the latch read section with a manual smp memory
    barrier and sequence counter comparison.

    This is technically correct, but it is suboptimal: read_seqcount_retry()
    already contains the same logic of an smp memory barrier and sequence
    counter comparison.

    End the latch read critical section example with read_seqcount_retry().

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200720155530.1173732-4-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • Align the code samples and note sections inside kernel-doc comments with
    tabs. This way they can be properly parsed and rendered by Sphinx. It
    also makes the code samples easier to read from text editors.

    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200720155530.1173732-3-a.darwish@linutronix.de

    Ahmed S. Darwish
     
  • Proper documentation for the design and usage of sequence counters and
    sequential locks does not exist. Complete the seqlock.h documentation as
    follows:

    - Divide all documentation on a seqcount_t vs. seqlock_t basis. The
    description for both mechanisms was intermingled, which is incorrect
    since the usage constraints for each type are vastly different.

    - Add an introductory paragraph describing the internal design of, and
    rationale for, sequence counters.

    - Document seqcount_t writer non-preemptibility requirement, which was
    not previously documented anywhere, and provide a clear rationale.

    - Provide template code for seqcount_t and seqlock_t initialization
    and reader/writer critical sections.

    - Recommend using seqlock_t by default. It implicitly handles the
    serialization and non-preemptibility requirements of writers.

    At seqlock.h:

    - Remove references to brlocks as they've long been removed from the
    kernel.

    - Remove references to gcc-3.x since the kernel's minimum supported
    gcc version is 4.9.

    References: 0f6ed63b1707 ("no need to keep brlock macros anymore...")
    References: 6ec4476ac825 ("Raise gcc version requirement to 4.9")
    Signed-off-by: Ahmed S. Darwish
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200720155530.1173732-2-a.darwish@linutronix.de

    Ahmed S. Darwish
     

20 Nov, 2019

1 commit

  • Tidy up a few bits:

    - Fix typos and grammar, improve wording.

    - Remove spurious newlines that are col80 warning artifacts where the
    resulting line-break is worse than the disease it's curing.

    - Use core kernel coding style to improve readability and reduce
    spurious code pattern variations.

    - Use better vertical alignment for structure definitions and initialization
    sequences.

    - Misc other small details.

    No change in functionality intended.

    Cc: linux-kernel@vger.kernel.org
    Cc: Marco Elver
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Cc: Thomas Gleixner
    Cc: Paul E. McKenney
    Cc: Will Deacon
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

16 Nov, 2019

2 commits

  • This patch proposes to require marked atomic accesses surrounding
    raw_write_seqcount_barrier. We reason that otherwise there is no way to
    guarantee propagation nor atomicity of writes before/after the barrier
    [1]. For example, consider the case where the compiler tears stores
    either before or after the barrier; readers may then observe a partial
    value, and because readers are unaware that writes are going on (the
    writes are not in a seq-writer critical section), they will complete
    the seq-reader critical section having observed some partial state.
    [1] https://lwn.net/Articles/793253/

    This came up when designing and implementing KCSAN, because KCSAN would
    flag these accesses as data races. After careful analysis, our reasoning
    as above led us to conclude that the best thing to do is to propose an
    amendment to the raw_write_seqcount_barrier usage.

    Signed-off-by: Marco Elver
    Acked-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Since seqlocks in the Linux kernel do not require the use of marked
    atomic accesses in critical sections, we teach KCSAN to assume such
    accesses are atomic. KCSAN also pretends that writes to `sequence` are
    atomic, although plain writes are currently used (their corresponding
    reads are READ_ONCE).

    Further, to avoid false positives in the absence of clear ending of a
    seqlock reader critical section (only when using the raw interface),
    KCSAN assumes a fixed number of accesses after start of a seqlock
    critical section are atomic.

    === Commentary on design around absence of clear begin/end markings ===
    Seqlock usage via seqlock_t follows a predictable usage pattern, where
    clear critical section begin/end is enforced, with subtle special cases
    for readers that need to be flat atomic regions, e.g. because of usage
    such as in:
    - fs/namespace.c:__legitimize_mnt - unbalanced read_seqretry
    - fs/dcache.c:d_walk - unbalanced need_seqretry

    But, anything directly accessing seqcount_t seems to be unpredictable.
    Filtering for usage of read_seqcount_retry not following 'do { .. }
    while (read_seqcount_retry(..));':

    $ git grep 'read_seqcount_retry' | grep -Ev 'while \(|seqlock.h|Doc|\* '
    => about 1/3 of the total read_seqcount_retry usage.

    Just looking at fs/namei.c, we conclude that it is non-trivial to
    prescribe and migrate to an interface that would force clear begin/end
    seqlock markings for critical sections.

    As such, we concluded that the best design currently, is to simply
    ensure that KCSAN works well with the existing code.

    Signed-off-by: Marco Elver
    Acked-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Marco Elver
     

09 Oct, 2019

1 commit

  • Since the following commit:

    b4adfe8e05f1 ("locking/lockdep: Remove unused argument in __lock_release")

    @nested is no longer used in lock_release(), so remove it from all
    lock_release() calls and friends.

    Signed-off-by: Qian Cai
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Will Deacon
    Acked-by: Daniel Vetter
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: airlied@linux.ie
    Cc: akpm@linux-foundation.org
    Cc: alexander.levin@microsoft.com
    Cc: daniel@iogearbox.net
    Cc: davem@davemloft.net
    Cc: dri-devel@lists.freedesktop.org
    Cc: duyuyang@gmail.com
    Cc: gregkh@linuxfoundation.org
    Cc: hannes@cmpxchg.org
    Cc: intel-gfx@lists.freedesktop.org
    Cc: jack@suse.com
    Cc: jlbec@evilplan.or
    Cc: joonas.lahtinen@linux.intel.com
    Cc: joseph.qi@linux.alibaba.com
    Cc: jslaby@suse.com
    Cc: juri.lelli@redhat.com
    Cc: maarten.lankhorst@linux.intel.com
    Cc: mark@fasheh.com
    Cc: mhocko@kernel.org
    Cc: mripard@kernel.org
    Cc: ocfs2-devel@oss.oracle.com
    Cc: rodrigo.vivi@intel.com
    Cc: sean@poorly.run
    Cc: st@kernel.org
    Cc: tj@kernel.org
    Cc: tytso@mit.edu
    Cc: vdavydov.dev@gmail.com
    Cc: vincent.guittot@linaro.org
    Cc: viro@zeniv.linux.org.uk
    Link: https://lkml.kernel.org/r/1568909380-32199-1-git-send-email-cai@lca.pw
    Signed-off-by: Ingo Molnar

    Qian Cai
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boiler plate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it.
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information,

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied to
    a file was done in a spreadsheet of side by side results from of the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging was:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source
    - File already had some variant of a license header in it (even if
    expressed in a non-standard format).
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

03 Jun, 2016

1 commit

  • Commit 50755bc1c305 ("seqlock: fix raw_read_seqcount_latch()") broke
    raw_read_seqcount_latch().

    If you look at the comment that was modified; the thing that changes is
    the seq count, not the latch pointer.

    * void latch_modify(struct latch_struct *latch, ...)
    * {
    *     smp_wmb();  <- Ensure that the last data[1] update is visible
    *     latch->seq++;
    *     smp_wmb();  <- Ensure that the seqcount update is visible
    *
    *     modify(latch->data[0], ...);
    *
    *     smp_wmb();  <- Ensure that the data[0] update is visible
    *     latch->seq++;
    *     smp_wmb();  <- Ensure that the seqcount update is visible
    *
    *     modify(latch->data[1], ...);
    * }
    *
    * The query will have a form like:
    *
    * struct entry *latch_query(struct latch_struct *latch, ...)
    * {
    *     struct entry *entry;
    *     unsigned seq, idx;
    *
    *     do {
    *         seq = lockless_dereference(latch->seq);

    So here we have:

    seq = READ_ONCE(latch->seq);
    smp_read_barrier_depends();

    Which is exactly what we want; the new code:

    seq = ({ p = READ_ONCE(latch);
    smp_read_barrier_depends(); p })->seq;

    is just wrong; because it loses the volatile read on seq, which can now
    be torn or worse 'optimized'. And the read_depend barrier is also placed
    wrong: we want it after the load of seq, to match the data[] up-to-date
    wmb()s above.

    Such that when we dereference latch->data[] below, we're guaranteed to
    observe the right data.

    *
    *         idx = seq & 0x01;
    *         entry = data_query(latch->data[idx], ...);
    *
    *         smp_rmb();
    *     } while (seq != latch->seq);
    *
    *     return entry;
    * }

    So yes, not passing a pointer is not pretty, but the code was correct,
    and isn't anymore now.

    Change to explicit READ_ONCE()+smp_read_barrier_depends() to avoid
    confusion and allow strict lockless_dereference() checking.

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Paul McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Fixes: 50755bc1c305 ("seqlock: fix raw_read_seqcount_latch()")
    Link: http://lkml.kernel.org/r/20160527111117.GL3192@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

02 Jul, 2015

1 commit

  • Pull module updates from Rusty Russell:
    "Main excitement here is Peter Zijlstra's lockless rbtree optimization
    to speed module address lookup. He found some abusers of the module
    lock doing that too.

    A little bit of parameter work here too; including Dan Streetman's
    breaking up the big param mutex so writing a parameter can load
    another module (yeah, really). Unfortunately that broke the usual
    suspects, !CONFIG_MODULES and !CONFIG_SYSFS, so those fixes were
    appended too"

    * tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (26 commits)
    modules: only use mod->param_lock if CONFIG_MODULES
    param: fix module param locks when !CONFIG_SYSFS.
    rcu: merge fix for Convert ACCESS_ONCE() to READ_ONCE() and WRITE_ONCE()
    module: add per-module param_lock
    module: make perm const
    params: suppress unused variable error, warn once just in case code changes.
    modules: clarify CONFIG_MODULE_COMPRESS help, suggest 'N'.
    kernel/module.c: avoid ifdefs for sig_enforce declaration
    kernel/workqueue.c: remove ifdefs over wq_power_efficient
    kernel/params.c: export param_ops_bool_enable_only
    kernel/params.c: generalize bool_enable_only
    kernel/module.c: use generic module param operaters for sig_enforce
    kernel/params: constify struct kernel_param_ops uses
    sysfs: tightened sysfs permission checks
    module: Rework module_addr_{min,max}
    module: Use __module_address() for module_address_lookup()
    module: Make the mod_tree stuff conditional on PERF_EVENTS || TRACING
    module: Optimize __module_address() using a latched RB-tree
    rbtree: Implement generic latch_tree
    seqlock: Introduce raw_read_seqcount_latch()
    ...

    Linus Torvalds
     

19 Jun, 2015

2 commits

  • Introduce raw_write_seqcount_barrier(), a new construct that can be
    used to provide write barrier semantics in seqcount read loops instead
    of the usual consistency guarantee.

    raw_write_seqcount_barrier() is equivalent to:

    raw_write_seqcount_begin();
    raw_write_seqcount_end();

    But avoids issuing two back-to-back smp_wmb() instructions.

    This construct works because the read side will 'stall' when observing
    odd values. This means that -- referring to the example in the comment
    below -- even though there is no (matching) read barrier between the
    loads of X and Y, we cannot observe !x && !y, because:

    - if we observe Y == false we must observe the first sequence
    increment, which makes us loop, until

    - we observe !(seq & 1) -- the second sequence increment -- at which
    time we must also observe Y == true.
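    Adapted into C11 atomics, the X/Y example reads as follows (a sketch;
    the kernel comment uses plain variables plus smp_wmb()):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* The reader can never see !x && !y: observing Y == false forces a
 * loop while the counter is odd, and once the counter is even again
 * Y is already true. */
static atomic_uint seq;
static atomic_bool X = true, Y = false;

static void writer(void)
{
	atomic_store(&Y, true);

	/* raw_write_seqcount_barrier(): two increments; readers stall
	 * while the counter is odd */
	atomic_fetch_add(&seq, 1);
	atomic_fetch_add(&seq, 1);

	atomic_store(&X, false);
}

static void reader(bool *x, bool *y)
{
	unsigned s;

	do {
		while ((s = atomic_load(&seq)) & 1)
			;	/* odd: writer between increments, stall */
		*x = atomic_load(&X);
		*y = atomic_load(&Y);
	} while (atomic_load(&seq) != s);

	assert(*x || *y);	/* !x && !y is impossible */
}
```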

    Suggested-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: umgwanakikbuti@gmail.com
    Cc: ktkhai@parallels.com
    Cc: rostedt@goodmis.org
    Cc: juri.lelli@gmail.com
    Cc: pang.xunlei@linaro.org
    Cc: oleg@redhat.com
    Cc: wanpeng.li@linux.intel.com
    Cc: Al Viro
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Link: http://lkml.kernel.org/r/20150617122924.GP3644@twins.programming.kicks-ass.net
    Signed-off-by: Thomas Gleixner

    Peter Zijlstra
     
  • I'll shortly be introducing another seqcount primitive that's useful
    to provide ordering semantics and would like to use the
    write_seqcount_barrier() name for that.

    Seeing how there's only one user of the current primitive, let's rename
    it to invalidate, as that appears to be what it's doing.

    While there, employ lockdep_assert_held() instead of
    assert_spin_locked() to not generate debug code for regular kernels.

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: ktkhai@parallels.com
    Cc: rostedt@goodmis.org
    Cc: juri.lelli@gmail.com
    Cc: pang.xunlei@linaro.org
    Cc: Oleg Nesterov
    Cc: wanpeng.li@linux.intel.com
    Cc: Paul McKenney
    Cc: Al Viro
    Cc: Linus Torvalds
    Cc: umgwanakikbuti@gmail.com
    Link: http://lkml.kernel.org/r/20150611124743.279926217@infradead.org
    Signed-off-by: Thomas Gleixner

    Peter Zijlstra
     

28 May, 2015

1 commit