31 Aug, 2016

1 commit

  • There are three usercopy warnings which are currently being silenced for
    gcc 4.6 and newer:

    1) "copy_from_user() buffer size is too small" compile warning/error

    This is a static warning which happens when object size and copy size
    are both const, and copy size > object size. I didn't see any false
    positives for this one. So the function warning attribute seems to
    be working fine here.

    Note this scenario is always a bug and so I think it should be
    changed to *always* be an error, regardless of
    CONFIG_DEBUG_STRICT_USER_COPY_CHECKS.

    2) "copy_from_user() buffer size is not provably correct" compile warning

    This is another static warning which happens when I enable
    __compiletime_object_size() for new compilers (and
    CONFIG_DEBUG_STRICT_USER_COPY_CHECKS). It happens when object size
    is const, but copy size is *not*. In this case there's no way to
    compare the two at build time, so it gives the warning. (Note the
    warning is a byproduct of the fact that gcc has no way of knowing
    whether the overflow function will be called, so the call isn't dead
    code and the warning attribute is activated.)
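
    A minimal sketch of the mechanism (illustrative only, with
    hypothetical names; not the kernel's exact code): gcc evaluates
    __builtin_object_size() at compile time, and the warning attribute
    fires whenever the call to the overflow helper survives
    optimization:

    extern void copy_overflow(void)
            __attribute__((warning("copy size is not provably correct")));

    static inline void check_copy(void *to, unsigned long n)
    {
            unsigned long sz = __builtin_object_size(to, 0);

            /*
             * When n is not a compile-time constant, gcc cannot prove
             * this branch dead, so the warning attribute fires even
             * when the copy is fine at runtime.
             */
            if (sz != (unsigned long)-1 && n > sz)
                    copy_overflow();
    }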

    So this warning seems to only indicate "this is an unusual pattern,
    maybe you should check it out" rather than "this is a bug".

    I get 102(!) of these warnings with allyesconfig and the
    __compiletime_object_size() gcc check removed. I don't know if there
    are any real bugs hiding in there, but from looking at a small
    sample, I didn't see any. According to Kees, it does sometimes find
    real bugs. But the false positive rate seems high.

    3) "Buffer overflow detected" runtime warning

    This is a runtime warning where object size is const, and copy size >
    object size.

    All three warnings (static and runtime alike) were completely
    disabled for gcc 4.6 and newer by the following commit:

    2fb0815c9ee6 ("gcc4: disable __compiletime_object_size for GCC 4.6+")

    That commit mistakenly assumed that the false positives were caused by a
    gcc bug in __compiletime_object_size(). But in fact,
    __compiletime_object_size() seems to be working fine. The false
    positives were instead triggered by #2 above. (Though I don't have an
    explanation for why the warnings supposedly only started showing up in
    gcc 4.6.)

    So remove warning #2 to get rid of all the false positives, and re-enable
    warnings #1 and #3 by reverting the above commit.

    Furthermore, since #1 is a real bug which is detected at compile time,
    upgrade it to always be an error.

    Having done all that, CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is no longer
    needed.

    Signed-off-by: Josh Poimboeuf
    Cc: Kees Cook
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H . Peter Anvin"
    Cc: Andy Lutomirski
    Cc: Steven Rostedt
    Cc: Brian Gerst
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Cc: Byungchul Park
    Cc: Nilay Vaish
    Signed-off-by: Linus Torvalds

    Josh Poimboeuf
     

04 Aug, 2016

2 commits

  • Previously, all the __exit sections were simply dropped at the link
    phase. However, if static_key (jump label) constructs are used in the
    __exit sections of code that is not built as a module, the link fails
    with the message:

    `.exit.text' referenced in section `__jump_table' of xxx.o:
    defined in discarded section `.exit.text' of xxx.o

    Support this usage by keeping the .exit.text sections in the final image
    if JUMP_LABEL is defined, then discarding them once initialization is
    complete.
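
    For illustration, a hypothetical built-in (non-modular) driver that
    would trigger the failure looks like this:

    #include <linux/jump_label.h>
    #include <linux/module.h>

    static DEFINE_STATIC_KEY_FALSE(my_feature_key);

    static void __exit my_driver_exit(void)
    {
            /*
             * The branch below emits a __jump_table entry pointing into
             * .exit.text; if the linker discards .exit.text, that entry
             * dangles and the link fails as shown above.
             */
            if (static_branch_unlikely(&my_feature_key))
                    pr_info("feature was enabled\n");
    }
    module_exit(my_driver_exit);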

    Link: http://lkml.kernel.org/r/bfd7c107c610c30e992868ebfe2a5d796a097464.1467837322.git.jbaron@akamai.com
    Signed-off-by: Jason Baron
    Signed-off-by: Chris Metcalf
    Cc: "David S. Miller"
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Heiko Carstens
    Cc: Joe Perches
    Cc: Martin Schwidefsky
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chris Metcalf
     
  • The dma-mapping core and the implementations do not change the DMA
    attributes passed by pointer, so the pointer could point to const
    data. However, the attributes do not have to be a bitfield at all;
    a plain unsigned long will do fine:

    1. This is simpler, both for reading the code and for setting
    attributes. Instead of initializing local attributes on the stack
    and passing a pointer to them to dma_set_attr(), callers just set
    the bits.

    2. It brings safety and const correctness, because the attributes
    are passed by value.
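
    Concretely, a typical call site changes roughly like this (a sketch
    of the old and new dma_alloc_attrs() calling conventions):

    /* Before: attributes in a struct, passed by pointer. */
    DEFINE_DMA_ATTRS(attrs);
    dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
    buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);

    /* After: attributes are a plain bitmask, passed by value. */
    buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                          DMA_ATTR_WRITE_COMBINE);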

    Semantic patches for this change (at least most of them):

    virtual patch
    virtual context

    @r@
    identifier f, attrs;

    @@
    f(...,
    - struct dma_attrs *attrs
    + unsigned long attrs
    , ...)
    {
    ...
    }

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

    and

    // Options: --all-includes
    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    type t;

    @@
    t f(..., struct dma_attrs *attrs);

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

    Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
    Signed-off-by: Krzysztof Kozlowski
    Acked-by: Vineet Gupta
    Acked-by: Robin Murphy
    Acked-by: Hans-Christian Noren Egtvedt
    Acked-by: Mark Salter [c6x]
    Acked-by: Jesper Nilsson [cris]
    Acked-by: Daniel Vetter [drm]
    Reviewed-by: Bart Van Assche
    Acked-by: Joerg Roedel [iommu]
    Acked-by: Fabien Dessenne [bdisp]
    Reviewed-by: Marek Szyprowski [vb2-core]
    Acked-by: David Vrabel [xen]
    Acked-by: Konrad Rzeszutek Wilk [xen swiotlb]
    Acked-by: Richard Kuo [hexagon]
    Acked-by: Geert Uytterhoeven [m68k]
    Acked-by: Gerald Schaefer [s390]
    Acked-by: Bjorn Andersson
    Acked-by: Hans-Christian Noren Egtvedt [avr32]
    Acked-by: Vineet Gupta [arc]
    Acked-by: Robin Murphy [arm64 and dma-iommu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Krzysztof Kozlowski
     

03 Aug, 2016

1 commit

  • In general, there's no need for the "restore sigmask" flag to live in
    ti->flags. alpha, ia64, microblaze, powerpc, sh, sparc (64-bit only),
    tile, and x86 use essentially identical alternative implementations,
    placing the flag in ti->status.

    Replace those optimized implementations with an equally good common
    implementation that stores the flag in a bitfield in struct
    task_struct, and drop the custom per-arch variants.

    Additional architectures can opt in by removing their
    TIF_RESTORE_SIGMASK defines.
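
    The common implementation boils down to something like this
    (simplified sketch of the generic field and helpers):

    /* In struct task_struct, used when the arch defines no
     * TIF_RESTORE_SIGMASK: */
    unsigned restore_sigmask:1;

    static inline void set_restore_sigmask(void)
    {
            current->restore_sigmask = true;
            WARN_ON(!test_thread_flag(TIF_SIGPENDING));
    }

    static inline bool test_restore_sigmask(void)
    {
            return current->restore_sigmask;
    }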

    Link: http://lkml.kernel.org/r/8a14321d64a28e40adfddc90e18a96c086a6d6f9.1468522723.git.luto@kernel.org
    Signed-off-by: Andy Lutomirski
    Tested-by: Michael Ellerman [powerpc]
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Matt Turner
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: Michal Simek
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Yoshinori Sato
    Cc: Rich Felker
    Cc: "David S. Miller"
    Cc: Chris Metcalf
    Cc: Peter Zijlstra
    Cc: Borislav Petkov
    Cc: Brian Gerst
    Cc: Dmitry Safonov
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andy Lutomirski
     

30 Jul, 2016

1 commit

  • Pull security subsystem updates from James Morris:
    "Highlights:

    - TPM core and driver updates/fixes
    - IPv6 security labeling (CALIPSO)
    - Lots of Apparmor fixes
    - Seccomp: remove 2-phase API, close hole where ptrace can change
    syscall #"

    * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (156 commits)
    apparmor: fix SECURITY_APPARMOR_HASH_DEFAULT parameter handling
    tpm: Add TPM 2.0 support to the Nuvoton i2c driver (NPCT6xx family)
    tpm: Factor out common startup code
    tpm: use devm_add_action_or_reset
    tpm2_i2c_nuvoton: add irq validity check
    tpm: read burstcount from TPM_STS in one 32-bit transaction
    tpm: fix byte-order for the value read by tpm2_get_tpm_pt
    tpm_tis_core: convert max timeouts from msec to jiffies
    apparmor: fix arg_size computation for when setprocattr is null terminated
    apparmor: fix oops, validate buffer size in apparmor_setprocattr()
    apparmor: do not expose kernel stack
    apparmor: fix module parameters can be changed after policy is locked
    apparmor: fix oops in profile_unpack() when policy_db is not present
    apparmor: don't check for vmalloc_addr if kvzalloc() failed
    apparmor: add missing id bounds check on dfa verification
    apparmor: allow SYS_CAP_RESOURCE to be sufficient to prlimit another task
    apparmor: use list_next_entry instead of list_entry_next
    apparmor: fix refcount race when finding a child profile
    apparmor: fix ref count leak when profile sha1 hash is read
    apparmor: check that xindex is in trans_table bounds
    ...

    Linus Torvalds
     

29 Jul, 2016

3 commits

  • There are now a number of accounting oddities, such as mapped file
    pages being accounted for on the node while the total number of file
    pages is accounted on the zone. This can be coped with to some
    extent, but it's confusing, so this patch moves the relevant
    file-based accounting to the node. Due to throttling logic in the
    page allocator for reliable OOM detection, it is still necessary to
    track dirty and writeback pages on a per-zone basis.

    [mgorman@techsingularity.net: fix NR_ZONE_WRITE_PENDING accounting]
    Link: http://lkml.kernel.org/r/1468404004-5085-5-git-send-email-mgorman@techsingularity.net
    Link: http://lkml.kernel.org/r/1467970510-21195-20-git-send-email-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Acked-by: Michal Hocko
    Cc: Hillf Danton
    Acked-by: Johannes Weiner
    Cc: Joonsoo Kim
    Cc: Minchan Kim
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Reclaim makes decisions based on the number of pages that are mapped but
    it's mixing node and zone information. Account NR_FILE_MAPPED and
    NR_ANON_PAGES pages on the node.

    Link: http://lkml.kernel.org/r/1467970510-21195-18-git-send-email-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Vlastimil Babka
    Acked-by: Michal Hocko
    Cc: Hillf Danton
    Acked-by: Johannes Weiner
    Cc: Joonsoo Kim
    Cc: Minchan Kim
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • This moves the LRU lists from the zone to the node and related data such
    as counters, tracing, congestion tracking and writeback tracking.

    Unfortunately, due to reclaim and compaction retry logic, it is
    necessary to account for the number of LRU pages at both the zone
    and the node level. Most reclaim logic is based on the node
    counters, but the retry logic uses the zone counters, which do not
    distinguish inactive and active sizes. It would be possible to keep
    the LRU counters on a per-zone basis, but that is a heavier
    calculation across multiple cache lines and is performed much more
    frequently than the retry checks.

    Other than the LRU counters, this is mostly a mechanical patch, but
    note that it introduces a number of anomalies. For example, the
    scans are per-zone but use per-node counters. We also mark a node as
    congested when a zone is congested. This causes weird problems that
    are fixed later, but splitting the series this way is easier to
    review.

    In the event that there is excessive overhead on 32-bit systems due
    to the LRU lists being node-based, there are two potential solutions:

    1. Long-term isolation of highmem pages when reclaim is lowmem

    When pages are skipped, they are immediately added back onto the LRU
    list. If lowmem reclaim persisted for long periods of time, the same
    highmem pages get continually scanned. The idea would be that lowmem
    keeps those pages on a separate list until a reclaim for highmem pages
    arrives that splices the highmem pages back onto the LRU. It potentially
    could be implemented similarly to the UNEVICTABLE list.

    That would reduce the skip rate, with the potential corner case
    being that highmem pages have to be scanned and reclaimed to free
    lowmem slab pages.

    2. Linear scan lowmem pages if the initial LRU shrink fails

    This will break LRU ordering but may be preferable and faster during
    memory pressure than skipping LRU pages.

    Link: http://lkml.kernel.org/r/1467970510-21195-4-git-send-email-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman
    Acked-by: Johannes Weiner
    Acked-by: Vlastimil Babka
    Cc: Hillf Danton
    Cc: Joonsoo Kim
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

28 Jul, 2016

1 commit


27 Jul, 2016

1 commit


26 Jul, 2016

2 commits

  • Pull locking updates from Ingo Molnar:
    "The locking tree was busier in this cycle than the usual pattern - a
    couple of major projects happened to coincide.

    The main changes are:

    - implement the atomic_fetch_{add,sub,and,or,xor}() API natively
    across all SMP architectures (Peter Zijlstra)

    - add atomic_fetch_{inc/dec}() as well, using the generic primitives
    (Davidlohr Bueso)

    - optimize various aspects of rwsems (Jason Low, Davidlohr Bueso,
    Waiman Long)

    - optimize smp_cond_load_acquire() on arm64 and implement LSE based
    atomic{,64}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()
    on arm64 (Will Deacon)

    - introduce smp_acquire__after_ctrl_dep() and fix various barrier
    mis-uses and bugs (Peter Zijlstra)

    - after discovering ancient spin_unlock_wait() barrier bugs in its
    implementation and usage, strengthen its semantics and update/fix
    usage sites (Peter Zijlstra)

    - optimize mutex_trylock() fastpath (Peter Zijlstra)

    - ... misc fixes and cleanups"

    * 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (67 commits)
    locking/atomic: Introduce inc/dec variants for the atomic_fetch_$op() API
    locking/barriers, arch/arm64: Implement LDXR+WFE based smp_cond_load_acquire()
    locking/static_keys: Fix non static symbol Sparse warning
    locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec()
    locking/atomic, arch/tile: Fix tilepro build
    locking/atomic, arch/m68k: Remove comment
    locking/atomic, arch/arc: Fix build
    locking/Documentation: Clarify limited control-dependency scope
    locking/atomic, arch/rwsem: Employ atomic_long_fetch_add()
    locking/atomic, arch/qrwlock: Employ atomic_fetch_add_acquire()
    locking/atomic, arch/mips: Convert to _relaxed atomics
    locking/atomic, arch/alpha: Convert to _relaxed atomics
    locking/atomic: Remove the deprecated atomic_{set,clear}_mask() functions
    locking/atomic: Remove linux/atomic.h:atomic_fetch_or()
    locking/atomic: Implement atomic{,64,_long}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()
    locking/atomic: Fix atomic64_relaxed() bits
    locking/atomic, arch/xtensa: Implement atomic_fetch_{add,sub,and,or,xor}()
    locking/atomic, arch/x86: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
    locking/atomic, arch/tile: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
    locking/atomic, arch/sparc: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
    ...

    Linus Torvalds
     
  • AT_VECTOR_SIZE_ARCH should be defined with the maximum number of
    NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined
    for tile at all even though ARCH_DLINFO will contain one NEW_AUX_ENT for
    the VDSO address.

    This shouldn't be a problem, as AT_VECTOR_SIZE_BASE includes space
    for AT_BASE_PLATFORM (which tile doesn't use), but let's define it
    now, and add the comment above ARCH_DLINFO found in several other
    architectures to remind future modifiers of ARCH_DLINFO to keep
    AT_VECTOR_SIZE_ARCH up to date.
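
    The essence of the fix, per the commit text (one NEW_AUX_ENT, for
    the vDSO address):

    /* arch/tile/include/uapi/asm/auxvec.h */
    #define AT_VECTOR_SIZE_ARCH 1 /* entries in ARCH_DLINFO */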

    Fixes: 4a556f4f56da ("tile: implement gettimeofday() via vDSO")
    Signed-off-by: James Hogan
    Cc: Chris Metcalf
    Signed-off-by: Chris Metcalf

    James Hogan
     

25 Jun, 2016

4 commits

  • Merge misc fixes from Andrew Morton:
    "Two weeks worth of fixes here"

    * emailed patches from Andrew Morton : (41 commits)
    init/main.c: fix initcall_blacklisted on ia64, ppc64 and parisc64
    autofs: don't get stuck in a loop if vfs_write() returns an error
    mm/page_owner: avoid null pointer dereference
    tools/vm/slabinfo: fix spelling mistake: "Ocurrences" -> "Occurrences"
    fs/nilfs2: fix potential underflow in call to crc32_le
    oom, suspend: fix oom_reaper vs. oom_killer_disable race
    ocfs2: disable BUG assertions in reading blocks
    mm, compaction: abort free scanner if split fails
    mm: prevent KASAN false positives in kmemleak
    mm/hugetlb: clear compound_mapcount when freeing gigantic pages
    mm/swap.c: flush lru pvecs on compound page arrival
    memcg: css_alloc should return an ERR_PTR value on error
    memcg: mem_cgroup_migrate() may be called with irq disabled
    hugetlb: fix nr_pmds accounting with shared page tables
    Revert "mm: disable fault around on emulated access bit architecture"
    Revert "mm: make faultaround produce old ptes"
    mailmap: add Boris Brezillon's email
    mailmap: add Antoine Tenart's email
    mm, sl[au]b: add __GFP_ATOMIC to the GFP reclaim mask
    mm: mempool: kasan: don't poot mempool objects in quarantine
    ...

    Linus Torvalds
     
  • __GFP_REPEAT has a rather weak semantic, and since its introduction
    around 2.6.12 it has been ignored for low order allocations.

    pgtable_alloc_one uses the __GFP_REPEAT flag for
    L2_USER_PGTABLE_ORDER, but the order is either 0, or 3 when
    L2_KERNEL_PGTABLE_SHIFT is used for HPAGE_SHIFT. This means the flag
    has never actually been useful here, because it only takes effect
    for orders above PAGE_ALLOC_COSTLY_ORDER.

    Link: http://lkml.kernel.org/r/1464599699-30131-16-git-send-email-mhocko@kernel.org
    Signed-off-by: Michal Hocko
    Acked-by: Chris Metcalf [for tile]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     
  • We've had the thread info allocated together with the thread stack for
    most architectures for a long time (since the thread_info was split off
    from the task struct), but that is about to change.

    But the patches that move the thread info to be off-stack (and a part of
    the task struct instead) made it clear how confused the allocator and
    freeing functions are.

    Because the common case was that we share an allocation with the thread
    stack and the thread_info, the two pointers were identical. That
    identity then meant that we would have things like

    ti = alloc_thread_info_node(tsk, node);
    ...
    tsk->stack = ti;

    which certainly _worked_ (since stack and thread_info have the same
    value), but is rather confusing: why are we assigning a thread_info to
    the stack? And if we move the thread_info away, the "confusing" code
    just gets to be entirely bogus.

    So remove all this confusion, and make it clear that we are doing the
    stack allocation by renaming and clarifying the function names to be
    about the stack. The fact that the thread_info then shares the
    allocation is an implementation detail, and not really about the
    allocation itself.

    This is a pure renaming and type fix: we pass in the same pointer, it's
    just that we clarify what the pointer means.
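
    After the rename, the same sequence reads as expected (sketch):

    stack = alloc_thread_stack_node(tsk, node);
    ...
    tsk->stack = stack;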

    The ia64 code that actually only has one single allocation (for all of
    task_struct, thread_info and kernel thread stack) now looks a bit odd,
    but since "tsk->stack" is actually not even used there, that oddity
    doesn't matter. It would be a separate thing to clean that up, I
    intentionally left the ia64 changes as a pure brute-force renaming and
    type change.

    Acked-by: Andy Lutomirski
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • Tip-of-tree gcc includes an optimization mode that converts 64-bit
    divides into 128-bit multiplies using __multi3. Export the symbol
    so that modules can find it. We just export it unconditionally,
    without worrying about the gcc version, since the symbol has been
    in libgcc forever and the function is less than 300 bytes.
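
    The export amounts to a few lines (a sketch, assuming libgcc's
    TImode signature for __multi3):

    /* arch/tile/lib/exports.c */
    typedef int TItype __attribute__((mode(TI)));
    TItype __multi3(TItype a, TItype b);
    EXPORT_SYMBOL(__multi3);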

    Signed-off-by: Chris Metcalf

    Chris Metcalf
     

24 Jun, 2016

1 commit

  • The tilepro change apparently was never compiled (the 0day build bot
    also doesn't have a toolchain for it).

    Make it work.

    What makes the patch bigger than desired is a namespace collision
    with the C11 __atomic builtin functions, so the tilepro functions
    are renamed to __atomic32.

    Reported-by: Sudip Mukherjee
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Chris Metcalf
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Stephen Rothwell
    Cc: Thomas Gleixner
    Fixes: 1af5de9af138 ("locking/atomic, arch/tile: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()")
    Link: http://lkml.kernel.org/r/20160622091649.GB30154@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

16 Jun, 2016

3 commits

  • Since all architectures now implement this natively, remove the dead
    code.

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-arch@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Implement FETCH-OP atomic primitives; these are very similar to the
    existing OP-RETURN primitives we already have, except they return
    the value of the atomic variable _before_ modification.

    This is especially useful for irreversible operations -- such as
    bitops -- because there the state prior to modification cannot be
    reconstructed from the result.
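
    The difference in a nutshell (illustrative):

    atomic_t v = ATOMIC_INIT(5);

    int new = atomic_add_return(3, &v); /* v == 8,  new == 8 (after)  */
    int old = atomic_fetch_add(3, &v);  /* v == 11, old == 8 (before) */

    /* For bitops, the old value is the only record of the prior state: */
    int was = atomic_fetch_or(0x1, &v); /* test (was & 0x1) afterwards */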

    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Chris Metcalf
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-arch@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • The glibc __LONG_LONG_PAIR passes arguments in a different
    order for BE platforms vs LE platforms. Adjust the expectations
    in the kernel 32-bit syscall routines for BE to match.
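
    glibc's macro looks roughly like this; the kernel's 32-bit syscall
    routines must agree on the same ordering (sketch):

    #include <endian.h>

    #if __BYTE_ORDER == __BIG_ENDIAN
    # define __LONG_LONG_PAIR(HI, LO) HI, LO
    #else
    # define __LONG_LONG_PAIR(HI, LO) LO, HI
    #endif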

    Signed-off-by: Chris Metcalf

    Chris Metcalf
     

15 Jun, 2016

2 commits


14 Jun, 2016

2 commits

  • This patch updates/fixes all spin_unlock_wait() implementations.

    The update is in semantics; where it previously was only a control
    dependency, we now upgrade to a full load-acquire to match the
    store-release from the spin_unlock() we waited on. This ensures that
    when spin_unlock_wait() returns, we're guaranteed to observe the full
    critical section we waited on.
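
    Schematically, the upgrade looks like this (a sketch using the
    smp_cond_load_acquire() helper introduced in the same series, for a
    simplified lock with a "locked" byte):

    /* Before: spinning provides only a control dependency. */
    while (READ_ONCE(lock->locked))
            cpu_relax();

    /* After: pairs with the store-release in spin_unlock(), so the
     * waiter observes the full critical section it waited on. */
    smp_cond_load_acquire(&lock->locked, !VAL);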

    This fixes a number of spin_unlock_wait() users that (not
    unreasonably) rely on this.

    I also fixed a number of ticket lock versions to only wait on the
    current lock holder, instead of for a full unlock, as this is
    sufficient.

    Furthermore, again for ticket locks, I added an smp_rmb() between
    the initial ticket load and the spin loop testing the current value,
    because I could not convince myself the address dependency is
    sufficient, especially if the loads are of different sizes.

    I'm more than happy to remove this smp_rmb() again if people are
    certain the address dependency does indeed work as expected.

    Note: PPC32 will be fixed independently

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: chris@zankel.net
    Cc: cmetcalf@mellanox.com
    Cc: davem@davemloft.net
    Cc: dhowells@redhat.com
    Cc: james.hogan@imgtec.com
    Cc: jejb@parisc-linux.org
    Cc: linux@armlinux.org.uk
    Cc: mpe@ellerman.id.au
    Cc: ralf@linux-mips.org
    Cc: realmz6@gmail.com
    Cc: rkuo@codeaurora.org
    Cc: rth@twiddle.net
    Cc: schwidefsky@de.ibm.com
    Cc: tony.luck@intel.com
    Cc: vgupta@synopsys.com
    Cc: ysato@users.sourceforge.jp
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Since TILE doesn't do read speculation, its control dependencies also
    guarantee LOAD->LOAD order and we don't need the additional RMB
    otherwise required to provide ACQUIRE semantics.
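
    Hence, on TILE the acquire upgrade reduces to a compiler barrier
    (sketch of the arch override):

    #define smp_acquire__after_ctrl_dep()   barrier()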

    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Chris Metcalf
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

08 Jun, 2016

1 commit


26 May, 2016

1 commit

  • Pull perf updates from Ingo Molnar:
    "Mostly tooling and PMU driver fixes, but also a number of late updates
    such as the reworking of the call-chain size limiting logic to make
    call-graph recording more robust, plus tooling side changes for the
    new 'backwards ring-buffer' extension to the perf ring-buffer"

    * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (34 commits)
    perf record: Read from backward ring buffer
    perf record: Rename variable to make code clear
    perf record: Prevent reading invalid data in record__mmap_read
    perf evlist: Add API to pause/resume
    perf trace: Use the ptr->name beautifier as default for "filename" args
    perf trace: Use the fd->name beautifier as default for "fd" args
    perf report: Add srcline_from/to branch sort keys
    perf evsel: Record fd into perf_mmap
    perf evsel: Add overwrite attribute and check write_backward
    perf tools: Set buildid dir under symfs when --symfs is provided
    perf trace: Only auto set call-graph to "dwarf" when syscalls are being traced
    perf annotate: Sort list of recognised instructions
    perf annotate: Fix identification of ARM blt and bls instructions
    perf tools: Fix usage of max_stack sysctl
    perf callchain: Stop validating callchains by the max_stack sysctl
    perf trace: Fix exit_group() formatting
    perf top: Use machine->kptr_restrict_warned
    perf trace: Warn when trying to resolve kernel addresses with kptr_restrict=1
    perf machine: Do not bail out if not managing to read ref reloc symbol
    perf/x86/intel/p4: Trival indentation fix, remove space
    ...

    Linus Torvalds
     

25 May, 2016

1 commit


24 May, 2016

6 commits

  • Merge yet more updates from Andrew Morton:

    - Oleg's "wait/ptrace: assume __WALL if the child is traced". It's a
    kernel-based workaround for existing userspace issues.

    - A few hotfixes

    - befs cleanups

    - nilfs2 updates

    - sys_wait() changes

    - kexec updates

    - kdump

    - scripts/gdb updates

    - the last of the MM queue

    - a few other misc things

    * emailed patches from Andrew Morton : (84 commits)
    kgdb: depends on VT
    drm/amdgpu: make amdgpu_mn_get wait for mmap_sem killable
    drm/radeon: make radeon_mn_get wait for mmap_sem killable
    drm/i915: make i915_gem_mmap_ioctl wait for mmap_sem killable
    uprobes: wait for mmap_sem for write killable
    prctl: make PR_SET_THP_DISABLE wait for mmap_sem killable
    exec: make exec path waiting for mmap_sem killable
    aio: make aio_setup_ring killable
    coredump: make coredump_wait wait for mmap_sem for write killable
    vdso: make arch_setup_additional_pages wait for mmap_sem for write killable
    ipc, shm: make shmem attach/detach wait for mmap_sem killable
    mm, fork: make dup_mmap wait for mmap_sem for write killable
    mm, proc: make clear_refs killable
    mm: make vm_brk killable
    mm, elf: handle vm_brk error
    mm, aout: handle vm_brk failures
    mm: make vm_munmap killable
    mm: make vm_mmap killable
    mm: make mmap_sem for write waits killable for mm syscalls
    MAINTAINERS: add co-maintainer for scripts/gdb
    ...

    Linus Torvalds
     
  • Pull arch/tile updates from Chris Metcalf:
    "This is an even quieter cycle than usual"

    * git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
    Fix typo
    Fix typo
    Fix typo
    tile: sort the "select" lines in the TILE/TILEGX configs
    tile: clarify barrier semantics of atomic_add_return
    tile/defconfigs: Remove CONFIG_IPV6_PRIVACY

    Linus Torvalds
     
  • This option was replaced by PAGE_COUNTER which is selected by MEMCG.

    Signed-off-by: Konstantin Khlebnikov
    Acked-by: Arnd Bergmann
    Acked-by: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Konstantin Khlebnikov
     
  • Signed-off-by: Andrea Gelmini
    Signed-off-by: Chris Metcalf

    Andrea Gelmini
     
  • Signed-off-by: Andrea Gelmini
    Signed-off-by: Chris Metcalf

    Andrea Gelmini
     
  • Signed-off-by: Andrea Gelmini
    Signed-off-by: Chris Metcalf

    Andrea Gelmini
     

21 May, 2016

3 commits

  • printk() takes some locks and cannot be used safely in NMI context.

    The chance of a deadlock is real, especially when printing stacks
    from all CPUs. This particular problem has been addressed on x86 by
    the commit a9edc8809328 ("x86/nmi: Perform a safe NMI stack trace on
    all CPUs").

    The patchset brings two big advantages. First, it makes the NMI
    backtraces safe on all architectures for free. Second, it makes all
    NMI messages almost safe on all architectures (the temporary buffer
    is limited, so we still should keep the number of messages in NMI
    context to a minimum).

    Note that there already are several messages printed in NMI context:
    WARN_ON(in_nmi()), BUG_ON(in_nmi()), anything being printed out from MCE
    handlers. These are not easy to avoid.

    This patch reuses most of the code and makes it generic. It is useful
    for all messages and architectures that support NMI.

    The alternative printk_func is set when entering NMI context and
    restored when leaving it. It queues IRQ work to copy the messages
    into the main ring buffer in a safe context.
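
    The entry/exit hooks are essentially a per-CPU pointer swap
    (simplified sketch of the mechanism):

    void printk_nmi_enter(void)
    {
            this_cpu_write(printk_func, vprintk_nmi);
    }

    void printk_nmi_exit(void)
    {
            this_cpu_write(printk_func, vprintk_default);
    }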

    __printk_nmi_flush() copies all available messages and resets the
    buffer. Simple cmpxchg operations are then enough to stay
    synchronized with writers, and a spinlock is used to synchronize
    with other flushers.

    We no longer use seq_buf because it depends on an external lock. It
    would be hard to make all supported operations safe for lockless
    use, and it would be confusing and error prone to make only some
    operations safe.

    The code is put into a separate printk/nmi.c, as suggested by Steven
    Rostedt. It needs a per-CPU buffer and is compiled only on
    architectures that call nmi_enter(). This is achieved by the new
    HAVE_NMI Kconfig flag.

    The exceptions are the MN10300 and Xtensa architectures. We need to
    clean up NMI handling there first; let's do it separately.

    The patch is heavily based on the draft from Peter Zijlstra, see

    https://lkml.org/lkml/2015/6/10/327

    [arnd@arndb.de: printk-nmi: use %zu format string for size_t]
    [akpm@linux-foundation.org: min_t->min - all types are size_t here]
    Signed-off-by: Petr Mladek
    Suggested-by: Peter Zijlstra
    Suggested-by: Steven Rostedt
    Cc: Jan Kara
    Acked-by: Russell King [arm part]
    Cc: Daniel Thompson
    Cc: Jiri Kosina
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Ralf Baechle
    Cc: Benjamin Herrenschmidt
    Cc: Martin Schwidefsky
    Cc: David Miller
    Cc: Daniel Thompson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Petr Mladek
     
  • We need to call exit_thread() from copy_process() in a failure path,
    so make it accept a task_struct parameter.

    [v2]
    * s390: exit_thread_runtime_instr doesn't make sense to be called for
    non-current tasks.
    * arm: fix the comment in vfp_thread_copy
    * change 'me' to 'tsk' for task_struct
    * now we can change only archs that actually have exit_thread
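
    The resulting interface change (sketch):

    /* Before: always operated on current. */
    void exit_thread(void);

    /* After: the task is explicit, so the copy_process() failure path
     * can call it on the not-yet-running child. */
    void exit_thread(struct task_struct *tsk);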

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Jiri Slaby
    Cc: "David S. Miller"
    Cc: "H. Peter Anvin"
    Cc: "James E.J. Bottomley"
    Cc: Aurelien Jacquiot
    Cc: Benjamin Herrenschmidt
    Cc: Catalin Marinas
    Cc: Chen Liqin
    Cc: Chris Metcalf
    Cc: Chris Zankel
    Cc: David Howells
    Cc: Fenghua Yu
    Cc: Geert Uytterhoeven
    Cc: Guan Xuetao
    Cc: Haavard Skinnemoen
    Cc: Hans-Christian Egtvedt
    Cc: Heiko Carstens
    Cc: Helge Deller
    Cc: Ingo Molnar
    Cc: Ivan Kokshaysky
    Cc: James Hogan
    Cc: Jeff Dike
    Cc: Jesper Nilsson
    Cc: Jiri Slaby
    Cc: Jonas Bonn
    Cc: Koichi Yasutake
    Cc: Lennox Wu
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Martin Schwidefsky
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Mikael Starvik
    Cc: Paul Mackerras
    Cc: Peter Zijlstra
    Cc: Ralf Baechle
    Cc: Rich Felker
    Cc: Richard Henderson
    Cc: Richard Kuo
    Cc: Richard Weinberger
    Cc: Russell King
    Cc: Steven Miao
    Cc: Thomas Gleixner
    Cc: Tony Luck
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jiri Slaby
     
  • Define HAVE_EXIT_THREAD for archs which want to do something in
    exit_thread. For others, let's define exit_thread as an empty inline.

    This is a cleanup before we change the prototype of exit_thread to
    accept a task parameter.
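
    In sched.h terms, the cleanup looks like this (sketch):

    #ifdef CONFIG_HAVE_EXIT_THREAD
    extern void exit_thread(void);
    #else
    static inline void exit_thread(void)
    {
    }
    #endif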

    [akpm@linux-foundation.org: fix mips]
    Signed-off-by: Jiri Slaby
    Cc: "David S. Miller"
    Cc: "H. Peter Anvin"
    Cc: "James E.J. Bottomley"
    Cc: Aurelien Jacquiot
    Cc: Benjamin Herrenschmidt
    Cc: Catalin Marinas
    Cc: Chen Liqin
    Cc: Chris Metcalf
    Cc: Chris Zankel
    Cc: David Howells
    Cc: Fenghua Yu
    Cc: Geert Uytterhoeven
    Cc: Guan Xuetao
    Cc: Haavard Skinnemoen
    Cc: Hans-Christian Egtvedt
    Cc: Heiko Carstens
    Cc: Helge Deller
    Cc: Ingo Molnar
    Cc: Ivan Kokshaysky
    Cc: James Hogan
    Cc: Jeff Dike
    Cc: Jesper Nilsson
    Cc: Jiri Slaby
    Cc: Jonas Bonn
    Cc: Koichi Yasutake
    Cc: Lennox Wu
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Martin Schwidefsky
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Mikael Starvik
    Cc: Paul Mackerras
    Cc: Peter Zijlstra
    Cc: Ralf Baechle
    Cc: Rich Felker
    Cc: Richard Henderson
    Cc: Richard Kuo
    Cc: Richard Weinberger
    Cc: Russell King
    Cc: Steven Miao
    Cc: Thomas Gleixner
    Cc: Tony Luck
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jiri Slaby
     

20 May, 2016

4 commits

  • I've just discovered that the useful-sounding has_transparent_hugepage()
    is actually an architecture-dependent minefield: on some arches it only
    builds if CONFIG_TRANSPARENT_HUGEPAGE=y, on others it's also there when
    not, but on some of those (arm and arm64) it then gives the wrong
    answer; and on mips alone it's marked __init, which would crash if
    called later (but so far it has not been called later).

    Straighten this out: make it available to all configs, with a sensible
    default in asm-generic/pgtable.h, removing its definitions from those
    arches (arc, arm, arm64, sparc, tile) which are served by the default,
    adding #define has_transparent_hugepage has_transparent_hugepage to
    those (mips, powerpc, s390, x86) which need to override the default at
    runtime, and removing the __init from mips (but maybe that kind of code
    should be avoided after init: set a static variable the first time it's
    called).
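
    The shape of the fix (a sketch of the asm-generic default plus a
    runtime override; the x86 body shown is an assumption for
    illustration):

    /* asm-generic/pgtable.h: sensible default for all configs. */
    #ifndef has_transparent_hugepage
    #ifdef CONFIG_TRANSPARENT_HUGEPAGE
    #define has_transparent_hugepage() 1
    #else
    #define has_transparent_hugepage() 0
    #endif
    #endif

    /* An arch that must decide at runtime (e.g. x86) overrides it: */
    #define has_transparent_hugepage has_transparent_hugepage
    static inline int has_transparent_hugepage(void)
    {
            return boot_cpu_has(X86_FEATURE_PSE);
    }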

    Signed-off-by: Hugh Dickins
    Cc: "Kirill A. Shutemov"
    Cc: Andrea Arcangeli
    Cc: Andres Lagar-Cavilla
    Cc: Yang Shi
    Cc: Ning Qu
    Cc: Mel Gorman
    Cc: Konstantin Khlebnikov
    Acked-by: David S. Miller
    Acked-by: Vineet Gupta [arch/arc]
    Acked-by: Gerald Schaefer [arch/s390]
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Update setup_hugepagesz() to call hugetlb_bad_size() when an
    unsupported hugepage size is found.
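
    On x86, for instance, the pattern becomes roughly the following
    (simplified sketch; the real function also handles PUD-sized pages):

    static __init int setup_hugepagesz(char *opt)
    {
            unsigned long ps = memparse(opt, &opt);

            if (ps == PMD_SIZE) {
                    hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
            } else {
                    hugetlb_bad_size();     /* record the invalid size */
                    pr_err("hugepagesz: Unsupported page size %lu M\n",
                           ps >> 20);
                    return 0;
            }
            return 1;
    }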

    Signed-off-by: Vaishali Thakkar
    Reviewed-by: Mike Kravetz
    Reviewed-by: Naoya Horiguchi
    Acked-by: Michal Hocko
    Cc: Hillf Danton
    Cc: Yaowei Bai
    Cc: Dominik Dingel
    Cc: Kirill A. Shutemov
    Cc: Paul Gortmaker
    Cc: Dave Hansen
    Cc: James Hogan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vaishali Thakkar
     
  • Lots of code does

    node = next_node(node, XXX);
    if (node == MAX_NUMNODES)
            node = first_node(XXX);

    so create next_node_in() to do this and use it in various places.
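
    With the new helper, the wrap-around is hidden:

    node = next_node_in(node, XXX);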

    [mhocko@suse.com: use next_node_in() helper]
    Acked-by: Vlastimil Babka
    Acked-by: Michal Hocko
    Signed-off-by: Michal Hocko
    Cc: Xishi Qiu
    Cc: Joonsoo Kim
    Cc: David Rientjes
    Cc: Naoya Horiguchi
    Cc: Laura Abbott
    Cc: Hui Zhu
    Cc: Wang Xiaoqiang
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Many developers already know that the reference count field of
    struct page is _count and that it is an atomic type. They may try to
    handle it directly, and that would break the purpose of the page
    reference count tracepoint. To prevent direct _count modification,
    this patch renames it to _refcount and adds a warning comment in the
    code. After that, developers who need to handle the reference count
    will find that the field should not be accessed directly.
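
    Code that needs the reference count should go through the
    page_ref_*() accessors rather than the field itself (illustrative):

    int refs = page_ref_count(page); /* not atomic_read(&page->_refcount) */
    page_ref_inc(page);              /* not atomic_inc(&page->_refcount)  */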

    [akpm@linux-foundation.org: fix comments, per Vlastimil]
    [akpm@linux-foundation.org: Documentation/vm/transhuge.txt too]
    [sfr@canb.auug.org.au: sync ethernet driver changes]
    Signed-off-by: Joonsoo Kim
    Signed-off-by: Stephen Rothwell
    Cc: Vlastimil Babka
    Cc: Hugh Dickins
    Cc: Johannes Berg
    Cc: "David S. Miller"
    Cc: Sunil Goutham
    Cc: Chris Metcalf
    Cc: Manish Chopra
    Cc: Yuval Mintz
    Cc: Tariq Toukan
    Cc: Saeed Mahameed
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim