02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text (a
    minimal example of the tag format is shown after this entry).

    This patch is based on work done by Thomas Gleixner, Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be applied
    to a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files, created by Philippe Ombredanne. Philippe prepared the
    base worksheet and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file-by-file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to each file. She confirmed any determination that was
    not immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
      lines of source.
    - File already had some variant of a license header in it (even if <5
      lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
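
    The identifier is a single comment line at the very top of each file. As
    an illustration of the tag format only (not a specific hunk from this
    patch), a tagged C source file begins with:

        // SPDX-License-Identifier: GPL-2.0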
     

03 Jun, 2016

1 commit

  • Commit 50755bc1c305 ("seqlock: fix raw_read_seqcount_latch()") broke
    raw_read_seqcount_latch().

    If you look at the comment that was modified, the thing that changes is
    the seq count, not the latch pointer:

    * void latch_modify(struct latch_struct *latch, ...)
    * {
    * smp_wmb(); seq++;
    * smp_wmb(); modify(data[0], ...);
    *
    * smp_wmb(); seq++;
    * smp_wmb(); modify(data[1], ...);
    * }
    *
    * The query will have a form like:
    *
    * struct entry *latch_query(struct latch_struct *latch, ...)
    * {
    * struct entry *entry;
    * unsigned seq, idx;
    *
    * do {
    * seq = lockless_dereference(latch->seq);

    So here we have:

    seq = READ_ONCE(latch->seq);
    smp_read_barrier_depends();

    Which is exactly what we want; the new code:

    seq = ({ p = READ_ONCE(latch);
    smp_read_barrier_depends(); p })->seq;

    is just wrong, because it loses the volatile read on seq, which can now
    be torn or worse 'optimized'. And the read_depend barrier is also placed
    wrongly; we want it after the load of seq, to match the above data[]
    up-to-date wmb()s.

    Such that when we dereference latch->data[] below, we're guaranteed to
    observe the right data.

    *
    * idx = seq & 0x01;
    * entry = data_query(latch->data[idx], ...);
    *
    * smp_rmb();
    * } while (seq != latch->seq);
    *
    * return entry;
    * }

    So yes, not passing a pointer is not pretty, but the code was correct,
    and now it no longer is.

    Change to an explicit READ_ONCE()+smp_read_barrier_depends() to avoid
    confusion and allow strict lockless_dereference() checking (a sketch of
    the resulting reader is shown after this entry).

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Paul McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Fixes: 50755bc1c305 ("seqlock: fix raw_read_seqcount_latch()")
    Link: http://lkml.kernel.org/r/20160527111117.GL3192@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
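
    A minimal sketch of the reader as described above (field and function
    names follow the kernel's seqcount_t; this is an illustration, not
    necessarily the exact patch):

        /* Read the latch sequence count with an explicit dependent-read barrier. */
        static inline int raw_read_seqcount_latch(seqcount_t *s)
        {
                int seq = READ_ONCE(s->sequence);       /* volatile read; cannot be torn */
                smp_read_barrier_depends();             /* order subsequent data[] loads */
                return seq;
        }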
     

27 May, 2016

1 commit


02 Jul, 2015

1 commit

  • Pull module updates from Rusty Russell:
    "Main excitement here is Peter Zijlstra's lockless rbtree optimization
    to speed module address lookup. He found some abusers of the module
    lock doing that too.

    A little bit of parameter work here too; including Dan Streetman's
    breaking up the big param mutex so writing a parameter can load
    another module (yeah, really). Unfortunately that broke the usual
    suspects, !CONFIG_MODULES and !CONFIG_SYSFS, so those fixes were
    appended too"

    * tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (26 commits)
    modules: only use mod->param_lock if CONFIG_MODULES
    param: fix module param locks when !CONFIG_SYSFS.
    rcu: merge fix for Convert ACCESS_ONCE() to READ_ONCE() and WRITE_ONCE()
    module: add per-module param_lock
    module: make perm const
    params: suppress unused variable error, warn once just in case code changes.
    modules: clarify CONFIG_MODULE_COMPRESS help, suggest 'N'.
    kernel/module.c: avoid ifdefs for sig_enforce declaration
    kernel/workqueue.c: remove ifdefs over wq_power_efficient
    kernel/params.c: export param_ops_bool_enable_only
    kernel/params.c: generalize bool_enable_only
    kernel/module.c: use generic module param operaters for sig_enforce
    kernel/params: constify struct kernel_param_ops uses
    sysfs: tightened sysfs permission checks
    module: Rework module_addr_{min,max}
    module: Use __module_address() for module_address_lookup()
    module: Make the mod_tree stuff conditional on PERF_EVENTS || TRACING
    module: Optimize __module_address() using a latched RB-tree
    rbtree: Implement generic latch_tree
    seqlock: Introduce raw_read_seqcount_latch()
    ...

    Linus Torvalds
     

19 Jun, 2015

2 commits

  • Introduce raw_write_seqcount_barrier(), a new construct that can be
    used to provide write barrier semantics in seqcount read loops instead
    of the usual consistency guarantee.

    raw_write_seqcount_barrier() is equivalent to:

    raw_write_seqcount_begin();
    raw_write_seqcount_end();

    but avoids issuing two back-to-back smp_wmb() instructions.

    This construct works because the read side will 'stall' when observing
    odd values. This means that -- referring to the example in the comment
    below -- even though there is no (matching) read barrier between the
    loads of X and Y, we cannot observe !x && !y, because:

    - if we observe Y == false we must observe the first sequence
    increment, which makes us loop, until

    - we observe !(seq & 1) -- the second sequence increment -- at which
    time we must also observe X == true.

    Suggested-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: umgwanakikbuti@gmail.com
    Cc: ktkhai@parallels.com
    Cc: rostedt@goodmis.org
    Cc: juri.lelli@gmail.com
    Cc: pang.xunlei@linaro.org
    Cc: oleg@redhat.com
    Cc: wanpeng.li@linux.intel.com
    Cc: Al Viro
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Link: http://lkml.kernel.org/r/20150617122924.GP3644@twins.programming.kicks-ass.net
    Signed-off-by: Thomas Gleixner

    Peter Zijlstra
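
    A minimal sketch of the helper as described (collapsing the two wmb()s
    into one; an illustration, not necessarily the exact patch):

        static inline void raw_write_seqcount_barrier(seqcount_t *s)
        {
                s->sequence++;  /* readers that started earlier now see an odd count */
                smp_wmb();      /* order the increments against the surrounding stores */
                s->sequence++;  /* count is even again */
        }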
     
  • I'll shortly be introducing another seqcount primitive that's useful
    to provide ordering semantics and would like to use the
    write_seqcount_barrier() name for that.

    Seeing how there's only one user of the current primitive, let's rename
    it to invalidate, as that appears to be what it is doing.

    While there, employ lockdep_assert_held() instead of
    assert_spin_locked() to not generate debug code for regular kernels.

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: ktkhai@parallels.com
    Cc: rostedt@goodmis.org
    Cc: juri.lelli@gmail.com
    Cc: pang.xunlei@linaro.org
    Cc: Oleg Nesterov
    Cc: wanpeng.li@linux.intel.com
    Cc: Paul McKenney
    Cc: Al Viro
    Cc: Linus Torvalds
    Cc: umgwanakikbuti@gmail.com
    Link: http://lkml.kernel.org/r/20150611124743.279926217@infradead.org
    Signed-off-by: Thomas Gleixner

    Peter Zijlstra
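
    As an illustration of the renamed helper (not necessarily the exact
    patch), bumping the count by two forces any in-progress read section to
    retry without ever exposing an odd value:

        static inline void write_seqcount_invalidate(seqcount_t *s)
        {
                smp_wmb();              /* make the prior data stores visible first */
                s->sequence += 2;       /* count stays even, but differs from any snapshot */
        }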
     

28 May, 2015

2 commits

  • Because with latches there is a strict data dependency on the seq load
    we can avoid the rmb in favour of a read_barrier_depends.

    Suggested-by: Ingo Molnar
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Rusty Russell

    Peter Zijlstra
     
  • Improve the documentation of the latch technique as used in the
    current timekeeping code, such that it can be readily employed
    elsewhere.

    Borrow from the comments in timekeeping and replace those with a
    reference to this more generic comment.

    Cc: Andrea Arcangeli
    Cc: David Woodhouse
    Cc: Rik van Riel
    Cc: "Paul E. McKenney"
    Cc: Oleg Nesterov
    Reviewed-by: Mathieu Desnoyers
    Acked-by: Michel Lespinasse
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Rusty Russell

    Peter Zijlstra
     

24 Feb, 2015

1 commit

  • With the new standardized functions, we can replace all
    ACCESS_ONCE() calls across the relevant locking code; this includes
    lockref and seqlock as well.

    ACCESS_ONCE() does not work reliably on non-scalar types.
    For example gcc 4.6 and 4.7 might remove the volatile tag
    for such accesses during the SRA (scalar replacement of
    aggregates) step:

    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145

    Update the calls regardless of whether the type is scalar; this is
    cleaner than having three alternatives.

    Signed-off-by: Davidlohr Bueso
    Cc: Peter Zijlstra
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Cc: Thomas Gleixner
    Cc: Paul E. McKenney
    Link: http://lkml.kernel.org/r/1424662301.6539.18.camel@stgolabs.net
    Signed-off-by: Ingo Molnar

    Davidlohr Bueso
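
    The conversion itself is mechanical; a hypothetical snippet (not a hunk
    from the patch) showing the shape of the change:

        /* before: ACCESS_ONCE() may lose its volatile cast on non-scalar types */
        ret = ACCESS_ONCE(s->sequence);

        /* after: READ_ONCE()/WRITE_ONCE() handle scalar and aggregate types alike */
        ret = READ_ONCE(s->sequence);
        WRITE_ONCE(s->sequence, ret + 2);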
     

19 Sep, 2014

1 commit

  • There are cases where read_seqbegin_or_lock() needs to block irqs,
    because the seqlock in question nests inside a lock that is also
    taken from irq context.

    Add read_seqbegin_or_lock_irqsave() and done_seqretry_irqrestore(), which
    are almost identical to read_seqbegin_or_lock() and done_seqretry().

    Signed-off-by: Rik van Riel
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: prarit@redhat.com
    Cc: oleg@redhat.com
    Cc: sgruszka@redhat.com
    Cc: Al Viro
    Cc: John Stultz
    Cc: Linus Torvalds
    Cc: Mathieu Desnoyers
    Cc: Stephen Boyd
    Cc: Trond Myklebust
    Link: http://lkml.kernel.org/r/1410527535-9814-2-git-send-email-riel@redhat.com
    [ Improved the readability of the code a bit. ]
    Signed-off-by: Ingo Molnar

    Rik van Riel
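
    A usage sketch under stated assumptions (my_lock and the loop body are
    placeholders; the API names are the ones introduced here):

        unsigned long flags;
        int seq, nextseq = 0;                   /* even: first pass is lockless */

        do {
                seq = nextseq;
                flags = read_seqbegin_or_lock_irqsave(&my_lock, &seq);
                /* ... read the protected data ... */
                nextseq = 1;                    /* odd: retry with the lock held */
        } while (need_seqretry(&my_lock, seq));
        done_seqretry_irqrestore(&my_lock, seq, flags);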
     

06 Aug, 2014

1 commit

  • Pull timer and time updates from Thomas Gleixner:
    "A rather large update of timers, timekeeping & co

    - Core timekeeping code is year-2038 safe now for 32bit machines.
    Now we just need to fix all in kernel users and the gazillion of
    user space interfaces which rely on timespec/timeval :)

    - Better cache layout for the timekeeping internal data structures.

    - Proper nanosecond based interfaces for in kernel users.

    - Tree wide cleanup of code which wants nanoseconds but does hoops
    and loops to convert back and forth from timespecs. Some of it
    definitely belongs into the ugly code museum.

    - Consolidation of the timekeeping interface zoo.

    - A fast NMI safe accessor to clock monotonic for tracing. This is a
    long standing request to support correlated user/kernel space
    traces. With proper NTP frequency correction it's also suitable
    for correlation of traces across separate machines.

    - Checkpoint/restart support for timerfd.

    - A few NOHZ[_FULL] improvements in the [hr]timer code.

    - Code move from kernel to kernel/time of all time* related code.

    - New clocksource/event drivers from the ARM universe. I'm really
    impressed that despite an architected timer in the newer chips SoC
    manufacturers insist on inventing new and differently broken SoC
    specific timers.

    [ Ed. "Impressed"? I don't think that word means what you think it means ]

    - Another round of code move from arch to drivers. Looks like most
    of the legacy mess in ARM regarding timers is sorted out except for
    a few obnoxious strongholds.

    - The usual updates and fixlets all over the place"

    * 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (114 commits)
    timekeeping: Fixup typo in update_vsyscall_old definition
    clocksource: document some basic timekeeping concepts
    timekeeping: Use cached ntp_tick_length when accumulating error
    timekeeping: Rework frequency adjustments to work better w/ nohz
    timekeeping: Minor fixup for timespec64->timespec assignment
    ftrace: Provide trace clocks monotonic
    timekeeping: Provide fast and NMI safe access to CLOCK_MONOTONIC
    seqcount: Add raw_write_seqcount_latch()
    seqcount: Provide raw_read_seqcount()
    timekeeping: Use tk_read_base as argument for timekeeping_get_ns()
    timekeeping: Create struct tk_read_base and use it in struct timekeeper
    timekeeping: Restructure the timekeeper some more
    clocksource: Get rid of cycle_last
    clocksource: Move cycle_last validation to core code
    clocksource: Make delta calculation a function
    wireless: ath9k: Get rid of timespec conversions
    drm: vmwgfx: Use nsec based interfaces
    drm: i915: Use nsec based interfaces
    timekeeping: Provide ktime_get_raw()
    hangcheck-timer: Use ktime_get_ns()
    ...

    Linus Torvalds
     

24 Jul, 2014

2 commits

  • For NMI safe access to clock monotonic we use the seqcount LSB as
    index of a timekeeper array. The update sequence interleaves smp_wmb()
    barriers with the sequence count increments and the updates of the two
    array slots (see the sketch after this entry).

    Cc: John Stultz
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    Signed-off-by: Thomas Gleixner
    Signed-off-by: John Stultz

    Mathieu Desnoyers
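
    A sketch of the helper this entry adds and of the latch update pattern it
    encapsulates (seqcount_t naming; the two-slot array and update() are
    illustrative placeholders):

        static inline void raw_write_seqcount_latch(seqcount_t *s)
        {
                smp_wmb();      /* prior stores before incrementing "sequence" */
                s->sequence++;
                smp_wmb();      /* increment "sequence" before following stores */
        }

        /* Writer side: the seqcount LSB selects which copy readers use. */
        raw_write_seqcount_latch(&seq);         /* switch readers to data[1] */
        update(&data[0]);
        raw_write_seqcount_latch(&seq);         /* switch readers back to data[0] */
        update(&data[1]);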
     
  • raw_read_seqcount opens a read critical section of the given seqcount
    without any lockdep checking and without checking or masking the
    LSB. Calling code is responsible for handling that.

    Preparatory patch to provide an NMI safe clock monotonic accessor
    function.

    Signed-off-by: Thomas Gleixner
    Cc: John Stultz
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Mathieu Desnoyers
    Signed-off-by: John Stultz

    Thomas Gleixner
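
    A minimal sketch of the accessor as described (an illustration, not
    necessarily the exact patch; ACCESS_ONCE() was the idiom of the time):

        static inline unsigned raw_read_seqcount(const seqcount_t *s)
        {
                unsigned ret = ACCESS_ONCE(s->sequence);        /* no lockdep, LSB not masked */
                smp_rmb();                                      /* order the data reads that follow */
                return ret;
        }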
     

19 Jun, 2014

1 commit

  • This commit reverts the addition of lockdep checking to raw_seqcount_begin
    for the following reasons:

    1) It violates the naming convention that raw_* functions should not
    do lockdep checks (a convention that is also followed by the other
    raw_*_seqcount_begin functions).

    2) raw_seqcount_begin does not spin, so it can only be part of an ABBA
    deadlock in very special circumstances (for instance if a lock
    is held across the entire raw_seqcount_begin()+read_seqcount_retry()
    loop while also being taken inside the write_seqcount protected area).

    3) It is causing false positives with some existing callers, and there
    is no non-lockdep alternative for those callers to use.

    None of the three existing callers (__d_lookup_rcu, netdev_get_name, and
    the NFS state code) appear to use the function in a manner that is ABBA
    deadlock prone.

    Fixes: 1ca7d67cf5d5 ("seqcount: Add lockdep functionality to seqcount/seqlock")
    Signed-off-by: Trond Myklebust
    Signed-off-by: Peter Zijlstra
    Cc: John Stultz
    Cc: "David S. Miller"
    Cc: Al Viro
    Cc: Waiman Long
    Cc: Stephen Boyd
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/CAHQdGtRR6SvEhXiqWo24hoUh9AU9cL82Z8Z-d8-7u951F_d+5g@mail.gmail.com
    Signed-off-by: Ingo Molnar

    Trond Myklebust
     

12 Jan, 2014

1 commit

  • Linus disliked the _no_lockdep() naming, so instead
    use the more-consistent raw_* prefix to the non-lockdep
    enabled seqcount methods.

    This also adds raw_ methods for the write operations, which will be
    utilized in a following patch.

    Acked-by: Linus Torvalds
    Reviewed-by: Stephen Boyd
    Signed-off-by: John Stultz
    Signed-off-by: Peter Zijlstra
    Cc: Krzysztof Hałasa
    Cc: Uwe Kleine-König
    Cc: Willy Tarreau
    Link: http://lkml.kernel.org/r/1388704274-5278-1-git-send-email-john.stultz@linaro.org
    Signed-off-by: Ingo Molnar

    John Stultz
     

16 Nov, 2013

1 commit


06 Nov, 2013

1 commit

  • Currently seqlocks and seqcounts don't support lockdep.

    After running across a seqcount related deadlock in the timekeeping
    code, I used a less-refined and more focused variant of this patch
    to narrow down the cause of the issue.

    This is a first-pass attempt to properly enable lockdep functionality
    on seqlocks and seqcounts.

    Since seqcounts are used in the vdso gettimeofday code, I've provided
    non-lockdep accessors for those needs.

    I've also handled one case where there were nested seqlock writers, and
    there may be more edge cases.

    Comments and feedback would be appreciated!

    Signed-off-by: John Stultz
    Signed-off-by: Peter Zijlstra
    Cc: Eric Dumazet
    Cc: Li Zefan
    Cc: Mathieu Desnoyers
    Cc: Steven Rostedt
    Cc: "David S. Miller"
    Cc: netdev@vger.kernel.org
    Link: http://lkml.kernel.org/r/1381186321-4906-3-git-send-email-john.stultz@linaro.org
    Signed-off-by: Ingo Molnar

    John Stultz
     

13 Sep, 2013

1 commit

  • The sequence lock (seqlock) was originally designed for the cases where
    the readers do not need to block the writers by making the readers retry
    the read operation when the data change.

    Since then, the use cases have been expanded to include situations where
    a thread does not need to change the data (effectively a reader) at all
    but has to take the writer lock because it can't tolerate changes to
    the protected structure. Some examples are the d_path() function and
    the getcwd() syscall in fs/dcache.c where the functions take the writer
    lock on rename_lock even though they don't need to change anything in
    the protected data structure at all. This is inefficient as a reader is
    now blocking other sequence number reading readers from moving forward
    by pretending to be a writer.

    This patch tries to eliminate this inefficiency by introducing a new
    type of locking reader to the seqlock locking mechanism. This new
    locking reader will try to take an exclusive lock preventing other
    writers and locking readers from going forward. However, it won't
    affect the progress of the other sequence number reading readers as the
    sequence number won't be changed.

    Signed-off-by: Waiman Long
    Cc: Alexander Viro
    Signed-off-by: Linus Torvalds

    Waiman Long
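
    A usage sketch of the new locking-reader API under stated assumptions
    (my_seqlock is a placeholder; the function names are the ones this entry
    introduces):

        read_seqlock_excl(&my_seqlock);         /* blocks writers and other locking readers */
        /* ... walk the protected structure without modifying it ... */
        read_sequnlock_excl(&my_seqlock);       /* the sequence count is never bumped */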
     

19 Feb, 2013

2 commits


05 May, 2012

2 commits

  • The normal read_seqcount_begin() function will wait for any current
    writers to exit their critical region by looping until the sequence
    count is even.

    That "wait for sequence count to stabilize" is the right thing to do if
    the read-locker will just retry the whole operation on contention: no
    point in doing a potentially expensive reader sequence if we know at the
    beginning that we'll just end up re-doing it all.

    HOWEVER. Some users don't actually retry the operation, but instead
    will abort and do the operation with proper locking. So the sequence
    count case may be the optimistic quick case, but in the presence of
    writers you may want to do full locking in order to guarantee forward
    progress. The prime example of this would be the RCU name lookup.

    And in that case, you may well be better off without the "retry early",
    and are in a rush to instead get to the failure handling. Thus this
    "raw" interface that just returns the sequence number without testing it
    - it just forces the low bit to zero so that read_seqcount_retry() will
    always fail in such an "active concurrent writer" scenario.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
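
    A minimal sketch of the raw variant as described (an illustration, not
    necessarily the exact patch):

        static inline unsigned raw_seqcount_begin(const seqcount_t *s)
        {
                unsigned ret = ACCESS_ONCE(s->sequence);
                smp_rmb();
                return ret & ~1;        /* force even: a live writer makes the retry fail */
        }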
     
  • We really need to use an ACCESS_ONCE() on the sequence value read in
    __read_seqcount_begin(), because otherwise the compiler might end up
    reloading the value in between the test and the return of it. As a
    result, it might end up returning an odd value (which means that a write
    is in progress).

    If the reader is then fast enough that that odd value is still the
    current one when the read_seqcount_retry() is done, we might end up with
    a "successful" read sequence, even despite the concurrent write being
    active.

    In practice this probably never really happens - there just isn't
    anything else going on around the read of the sequence count, and the
    common case is that we end up having a read barrier immediately
    afterwards.

    So the code sequence in which gcc might decide to reload from memory is
    small, and there's no reason to believe it would ever actually do the
    reload. But if the compiler ever were to decide to do so, it would be
    incredibly annoying to debug. Let's just make sure.

    Cc: stable@kernel.org
    Signed-off-by: Linus Torvalds

    Linus Torvalds
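
    A minimal sketch of the fixed reader, consistent with the description
    above (not necessarily the exact patch):

        static inline unsigned __read_seqcount_begin(const seqcount_t *s)
        {
                unsigned ret;

        repeat:
                ret = ACCESS_ONCE(s->sequence);         /* one load; no later reload possible */
                if (unlikely(ret & 1)) {                /* odd: a write is in progress */
                        cpu_relax();
                        goto repeat;
                }
                return ret;
        }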
     

12 Jun, 2011

1 commit

  • <linux/seqlock.h> uses cpu_relax(), and so needs <asm/processor.h> to be
    included.

    Without this patch, I see:

    CC arch/mn10300/kernel/asm-offsets.s
    In file included from include/linux/time.h:8,
    from include/linux/timex.h:56,
    from include/linux/sched.h:57,
    from arch/mn10300/kernel/asm-offsets.c:7:
    include/linux/seqlock.h: In function 'read_seqbegin':
    include/linux/seqlock.h:91: error: implicit declaration of function 'cpu_relax'

    whilst building asb2364_defconfig on MN10300.

    Signed-off-by: David Howells
    Signed-off-by: Linus Torvalds

    David Howells
     

24 May, 2011

1 commit

  • All static seqlocks should be initialized with the lockdep friendly
    __SEQLOCK_UNLOCKED() macro.

    Remove legacy SEQLOCK_UNLOCKED() macro.

    Signed-off-by: Eric Dumazet
    Cc: David Miller
    Link: http://lkml.kernel.org/r/%3C1306238888.3026.31.camel%40edumazet-laptop%3E
    Signed-off-by: Thomas Gleixner

    Eric Dumazet
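
    As an illustration (foo_lock and bar_lock are placeholder names):

        seqlock_t foo_lock = __SEQLOCK_UNLOCKED(foo_lock);      /* lockdep-friendly */
        static DEFINE_SEQLOCK(bar_lock);                        /* equivalent convenience macro */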
     

12 May, 2011

1 commit

  • Move the smp_rmb after cpu_relax loop in read_seqlock and add
    ACCESS_ONCE to make sure the test and return are consistent.

    A multi-threaded core in the lab didn't like the update
    from 2.6.35 to 2.6.36, to the point it would hang during
    boot when multiple threads were active. Bisection showed
    af5ab277ded04bd9bc6b048c5a2f0e7d70ef0867 (clockevents:
    Remove the per cpu tick skew) as the culprit and it is
    supported with stack traces showing xtime_lock waits including
    tick_do_update_jiffies64 and/or update_vsyscall.

    Experimentation showed the combination of cpu_relax and smp_rmb
    was significantly slowing the progress of other threads sharing
    the core, and this patch is effective in avoiding the hang.

    A theory is the rmb is affecting the whole core while the
    cpu_relax is causing a resource rebalance flush; together they
    cause an interference cadence that is unbroken when the seqlock
    reader has interrupts disabled.

    At first I was confused why the refactor in
    3c22cd5709e8143444a6d08682a87f4c57902df3 (kernel: optimise
    seqlock) didn't affect this patch application, but after some
    study I found it affected seqcount, not seqlock. The new seqcount was
    not factored back into the seqlock. I defer that to the future.

    While the removal of the timer interrupt offset created
    contention for the xtime lock while a cpu does the
    additional work to update the system clock, the seqlock
    implementation with the tight rmb spin loop goes back much
    further, and is just waiting for the right trigger.

    Signed-off-by: Milton Miller
    Cc: Linus Torvalds
    Cc: Andi Kleen
    Cc: Nick Piggin
    Cc: Benjamin Herrenschmidt
    Cc: Anton Blanchard
    Cc: Paul McKenney
    Acked-by: Eric Dumazet
    Link: http://lkml.kernel.org/r/%3Cseqlock-rmb%40mdm.bga.com%3E
    Signed-off-by: Thomas Gleixner

    Milton Miller
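
    A sketch of the reader after this change (an illustration, not
    necessarily the exact patch): the smp_rmb() is issued once, after the
    spin loop, instead of on every iteration.

        static __always_inline unsigned read_seqbegin(const seqlock_t *sl)
        {
                unsigned ret;

        repeat:
                ret = ACCESS_ONCE(sl->sequence);        /* test and return the same value */
                if (unlikely(ret & 1)) {
                        cpu_relax();
                        goto repeat;
                }
                smp_rmb();                              /* only once, outside the spin loop */
                return ret;
        }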
     

07 Jan, 2011

1 commit

  • Add branch annotations for the seqlock read fastpath, and introduce
    __read_seqcount_begin and __read_seqcount_end functions, which can avoid
    the smp_rmb() if used carefully. These will be used by the store-free
    path walking algorithm, where performance is critical and seqlocks are
    in use.

    Signed-off-by: Nick Piggin

    Nick Piggin
     

25 Apr, 2008

1 commit

  • Thomas Gleixner debugged a particularly ugly seqlock related livelock:
    do not process the seq-read section if we know beforehand that the
    test at the end of the section will fail ...

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

28 Apr, 2007

1 commit


18 Feb, 2007

1 commit


13 Dec, 2006

1 commit

  • seqlock_init() needs to use spin_lock_init() for dynamic locks, so that
    lockdep is notified about the presence of a new lock.

    (this is a fallout of the recent networking merge, which started using
    the so-far unused seqlock_init() API.)

    This fix solves the following lockdep-internal warning on current -git:

    INFO: trying to register non-static key.
    the code is fine but needs lockdep annotation.
    turning off the locking correctness validator.
    __lock_acquire+0x10c/0x9f9
    lock_acquire+0x56/0x72
    _spin_lock+0x35/0x42
    neigh_destroy+0x9d/0x12e
    neigh_periodic_timer+0x10a/0x15c
    run_timer_softirq+0x126/0x18e
    __do_softirq+0x6b/0xe6
    do_softirq+0x64/0xd2
    ksoftirqd+0x82/0x138

    Signed-off-by: Ingo Molnar
    Signed-off-by: Linus Torvalds

    Ingo Molnar
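
    A sketch of the fix as described (an illustration, not necessarily the
    exact patch): routing the dynamic initializer through spin_lock_init()
    lets lockdep register a key for the new lock.

        #define seqlock_init(x)                                 \
                do {                                            \
                        (x)->sequence = 0;                      \
                        spin_lock_init(&(x)->lock);             \
                } while (0)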
     

04 Jul, 2006

1 commit


26 Apr, 2006

1 commit


11 Apr, 2006

1 commit

  • In the vsyscall function do_vgettimeofday(), some functions are declared
    as inline, which is a hint for gcc to inline the function but does not
    force it. Sometimes the compiler does not inline the function, so here
    inline is replaced by the __always_inline prefix.

    This does not actually happen with current gcc, but it possibly could.

    Signed-off-by: bibo mao
    Signed-off-by: Linus Torvalds

    mao, bibo
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds