18 Mar, 2016

2 commits

  • Most of the mm subsystem uses pr_<level> so make it consistent.

    Miscellanea:

    - Realign arguments
    - Add missing newline to format
    - kmemleak-test.c has a "kmemleak: " prefix added to the
      "Kmemleak testing" logging message via pr_fmt

    Signed-off-by: Joe Perches
    Acked-by: Tejun Heo [percpu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • Kernel style prefers a single string over split strings when the string is
    'user-visible'.

    Miscellanea:

    - Add a missing newline
    - Realign arguments

    Signed-off-by: Joe Perches
    Acked-by: Tejun Heo [percpu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

07 Nov, 2015

1 commit

  • …d avoiding waking kswapd

    __GFP_WAIT has been used to identify atomic context in callers that hold
    spinlocks or are in interrupts. They are expected to be high priority and
    have access to one of two watermarks lower than "min", which can be
    referred to as the "atomic reserve". __GFP_HIGH users get access to the
    first lower watermark and can be called the "high priority reserve".

    Over time, callers had a requirement to not block when fallback options
    were available. Some have abused __GFP_WAIT, leading to a situation where
    an optimistic allocation with a fallback option can access atomic
    reserves.

    This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
    cannot sleep and have no alternative. High priority users continue to use
    __GFP_HIGH. __GFP_DIRECT_RECLAIM identifies callers that can sleep and
    are willing to enter direct reclaim. __GFP_KSWAPD_RECLAIM identifies
    callers that want to wake kswapd for background reclaim. __GFP_WAIT is
    redefined as a caller that is willing to enter direct reclaim and wake
    kswapd for background reclaim.

    This patch then converts a number of sites:

    o __GFP_ATOMIC is used by callers that are high priority and have memory
    pools for those requests. GFP_ATOMIC uses this flag.

    o Callers that have a limited mempool to guarantee forward progress clear
    __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
    into this category where kswapd will still be woken but atomic reserves
    are not used as there is a one-entry mempool to guarantee progress.

    o Callers that are checking whether they are non-blocking should use the
    helper gfpflags_allow_blocking() where possible, because checking for
    __GFP_WAIT as was done historically can now trigger false positives.
    Some exceptions, like dm-crypt.c, exist where the code intent is clearer
    if __GFP_DIRECT_RECLAIM is used instead of the helper due to flag
    manipulations.

    o Callers that built their own GFP flags instead of starting with GFP_KERNEL
    and friends now also need to specify __GFP_KSWAPD_RECLAIM.

    The first key hazard to watch out for is callers that removed __GFP_WAIT
    and were depending on access to atomic reserves for inconspicuous
    reasons. In some cases it may be appropriate for them to use __GFP_HIGH.

    The second key hazard is callers that assembled their own combination of
    GFP flags instead of starting with something like GFP_KERNEL. They may
    now wish to specify __GFP_KSWAPD_RECLAIM. It's almost certainly harmless
    if it's missed in most cases as other activity will wake kswapd.

    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Vitaly Wool <vitalywool@gmail.com>
    Cc: Rik van Riel <riel@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

    Mel Gorman
     

02 Oct, 2015

1 commit

  • If a DMA pool lies at the very top of the dma_addr_t range (as may
    happen with an IOMMU involved), the calculated end address of the pool
    wraps around to zero, and page lookup always fails.

    Tweak the relevant calculation to be overflow-proof.

    Signed-off-by: Robin Murphy
    Cc: Arnd Bergmann
    Cc: Marek Szyprowski
    Cc: Sumit Semwal
    Cc: Sakari Ailus
    Cc: Russell King
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robin Murphy
     

09 Sep, 2015

2 commits

  • Currently a call to dma_pool_alloc() with a ___GFP_ZERO flag returns a
    non-zeroed memory region.

    This patchset adds support for the __GFP_ZERO flag to dma_pool_alloc(),
    adds 2 wrapper functions for allocating zeroed memory from a pool, and
    provides a coccinelle script for finding & replacing instances of
    dma_pool_alloc() followed by memset(0) with a single dma_pool_zalloc()
    call.

    There was some concern that this always calls memset() to zero, instead
    of passing __GFP_ZERO into the page allocator.
    [https://lkml.org/lkml/2015/7/15/881]

    I ran a test on my system to get an idea of how often dma_pool_alloc()
    calls into pool_alloc_page().

    After Boot: [ 30.119863] alloc_calls:541, page_allocs:7
    After an hour: [ 3600.951031] alloc_calls:9566, page_allocs:12
    After copying 1GB file onto a USB drive:
    [ 4260.657148] alloc_calls:17225, page_allocs:12

    It doesn't look like dma_pool_alloc() calls down to the page allocator
    very often (at least on my system).

    This patch (of 4):

    Currently the __GFP_ZERO flag is ignored by dma_pool_alloc().
    Make dma_pool_alloc() zero the memory if this flag is set.

    Signed-off-by: Sean O. Stalley
    Acked-by: David Rientjes
    Cc: Vinod Koul
    Cc: Bjorn Helgaas
    Cc: Gilles Muller
    Cc: Nicolas Palix
    Cc: Michal Marek
    Cc: Sebastian Andrzej Siewior
    Cc: Jonathan Corbet
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sean O. Stalley
     
  • dma_pool_destroy() does not tolerate a NULL dma_pool pointer argument
    and performs a NULL-pointer dereference. This requires additional
    attention and effort from developers/reviewers and forces all
    dma_pool_destroy() callers to do a NULL check:

        if (pool)
                dma_pool_destroy(pool);

    or otherwise be invalid dma_pool_destroy() users.

    Tweak dma_pool_destroy() and NULL-check the pointer there.

    Proposed by Andrew Morton.

    Link: https://lkml.org/lkml/2015/6/8/583
    Signed-off-by: Sergey Senozhatsky
    Acked-by: David Rientjes
    Cc: Julia Lawall
    Cc: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sergey Senozhatsky
     

05 Sep, 2015

1 commit


10 Oct, 2014

2 commits

  • Remove braces per the coding-style rule that "braces {} are not
    necessary for any arm of this statement".

    Signed-off-by: Paul McQuade
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul McQuade
     
  • cat /sys/.../pools followed by removal of the device leads to:

    |======================================================
    |[ INFO: possible circular locking dependency detected ]
    |3.17.0-rc4+ #1498 Not tainted
    |-------------------------------------------------------
    |rmmod/2505 is trying to acquire lock:
    | (s_active#28){++++.+}, at: [] kernfs_remove_by_name_ns+0x3c/0x88
    |
    |but task is already holding lock:
    | (pools_lock){+.+.+.}, at: [] dma_pool_destroy+0x18/0x17c
    |
    |which lock already depends on the new lock.
    |the existing dependency chain (in reverse order) is:
    |
    |-> #1 (pools_lock){+.+.+.}:
    | [] show_pools+0x30/0xf8
    | [] dev_attr_show+0x1c/0x48
    | [] sysfs_kf_seq_show+0x88/0x10c
    | [] kernfs_seq_show+0x24/0x28
    | [] seq_read+0x1b8/0x480
    | [] vfs_read+0x8c/0x148
    | [] SyS_read+0x40/0x8c
    | [] ret_fast_syscall+0x0/0x48
    |
    |-> #0 (s_active#28){++++.+}:
    | [] __kernfs_remove+0x258/0x2ec
    | [] kernfs_remove_by_name_ns+0x3c/0x88
    | [] dma_pool_destroy+0x148/0x17c
    | [] hcd_buffer_destroy+0x20/0x34
    | [] usb_remove_hcd+0x110/0x1a4

    The problem is the lock order of pools_lock and kernfs_mutex in
    dma_pool_destroy() vs show_pools() call path.

    This patch moves the creation of the sysfs file outside of the
    pools_lock mutex. The newly added pools_reg_lock ensures that there is
    no race between the create and destroy code paths over whether the
    sysfs file has to be deleted (and whether it was deleted before we try
    to create a new one), and what to do if device_create_file() fails.

    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sebastian Andrzej Siewior
     

19 Sep, 2014

1 commit

  • dma_pool_create() needs to unlock the mutex in the error case. The bug
    was introduced in 3.16 by commit cc6b664aa26d ("mm/dmapool.c: remove
    redundant NULL check for dev in dma_pool_create()").

    Signed-off-by: Krzysztof Hałasa
    Cc: stable@vger.kernel.org # v3.16
    Signed-off-by: Linus Torvalds

    Krzysztof Hałasa
     

05 Jun, 2014

2 commits


05 May, 2014

1 commit


12 Dec, 2012

1 commit


11 Dec, 2012

1 commit

  • dmapool always calls dma_alloc_coherent() with the GFP_ATOMIC flag,
    regardless of the flags provided by the caller. This causes excessive
    pruning of emergency memory pools without any good reason. Additionally,
    on the ARM architecture, any driver which is using dmapools will sooner
    or later trigger the following error:

        "ERROR: 256 KiB atomic DMA coherent pool is too small!
        Please increase it with coherent_pool= kernel parameter!"

    Increasing the coherent pool size usually doesn't help much and only
    delays such an error, because all GFP_ATOMIC DMA allocations are always
    served from the special, very limited memory pool.

    This patch changes the dmapool code to correctly use gfp flags provided
    by the dmapool caller.

    Reported-by: Soeren Moch
    Reported-by: Thomas Petazzoni
    Signed-off-by: Marek Szyprowski
    Tested-by: Andrew Lunn
    Tested-by: Soeren Moch
    Cc: stable@vger.kernel.org

    Marek Szyprowski
     

31 Oct, 2011

2 commits


26 Jul, 2011

1 commit

  • devres uses the pointer value as a key after it's freed, which is safe
    but triggers spurious use-after-free warnings in some static analysis
    tools. Rearrange the code to avoid such warnings.

    Signed-off-by: Maxin B. John
    Reviewed-by: Rolf Eike Beer
    Acked-by: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Maxin B John
     

14 Jan, 2011

2 commits

  • As it stands, this code will degenerate into a busy-wait if the calling
    task has signal_pending().

    Cc: Rolf Eike Beer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • dma_pool_free() scans for the page to free in the pool list holding the
    pool lock. Then it releases the lock basically to acquire it immediately
    again. Modify the code to only take the lock once.

    This will do some additional loops and computations with the lock held
    if memory debugging is activated. If it is not activated, the only new
    operations under this lock are one if and one subtraction.

    Signed-off-by: Rolf Eike Beer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rolf Eike Beer
     

27 Oct, 2010

1 commit

  • Buggy drivers (e.g. fsl_udc) could call dma_pool_alloc() from atomic
    context with GFP_KERNEL. In most instances, the first pool_alloc_page()
    call would succeed and the sleeping functions would never be called.
    This allowed the buggy drivers to slip through the cracks.

    Add a might_sleep_if() check for __GFP_WAIT in flags.

    Signed-off-by: Dima Zavin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dima Zavin
     

01 Jul, 2009

1 commit


28 Apr, 2008

1 commit

  • Previously it was only enabled for CONFIG_DEBUG_SLAB.

    It is not hooked into the slub runtime debug configuration, so you
    currently only get it with CONFIG_SLUB_DEBUG_ON, not plain
    CONFIG_SLUB_DEBUG.

    Acked-by: Matthew Wilcox
    Signed-off-by: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andi Kleen
     

04 Dec, 2007

7 commits