26 Mar, 2016

1 commit

  • Add GFP flags to KASAN hooks for future patches to use.

    This patch is based on the "mm: kasan: unified support for SLUB and SLAB
    allocators" patch originally prepared by Dmitry Chernenkov.

    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

18 Mar, 2016

1 commit

  • If an oom killed thread calls mempool_alloc(), it is possible that it'll
    loop forever if there are no elements on the freelist since
    __GFP_NOMEMALLOC prevents it from accessing needed memory reserves in
    oom conditions.

    Only set __GFP_NOMEMALLOC if there are elements on the freelist. If
    there are no free elements, allow allocations without the bit set so
    that memory reserves can be accessed if needed.

    Additionally, using mempool_alloc() with __GFP_NOMEMALLOC is not
    supported since the implementation can loop forever without accessing
    memory reserves when needed.

    Signed-off-by: David Rientjes
    Cc: Greg Thelen
    Cc: Michal Hocko
    Cc: Tetsuo Handa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
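
    A minimal sketch of the resulting allocation attempt, assuming the
    mempool_t field names used in mm/mempool.c (illustrative, not the verbatim
    patch):

        gfp_t gfp_temp = gfp_mask | __GFP_NORETRY | __GFP_NOWARN;

        if (likely(pool->curr_nr))
                gfp_temp |= __GFP_NOMEMALLOC;   /* free elements exist: stay away
                                                   from emergency reserves */

        element = pool->alloc(gfp_temp, pool->pool_data);
        /* if this fails and the pool is empty, a later attempt may run
           without __GFP_NOMEMALLOC so memory reserves can be tapped */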
     

12 Mar, 2016

1 commit


07 Nov, 2015

1 commit

  • …d avoiding waking kswapd

    __GFP_WAIT has been used to identify atomic context in callers that hold
    spinlocks or are in interrupts. They are expected to be high priority and
    have access to one of two watermarks lower than "min", which can be referred
    to as the "atomic reserve". __GFP_HIGH users get access to the first
    lower watermark and can be called the "high priority reserve".

    Over time, callers had a requirement to not block when fallback options
    were available. Some have abused __GFP_WAIT leading to a situation where
    an optimistic allocation with a fallback option can access atomic
    reserves.

    This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
    cannot sleep and have no alternative. High priority users continue to use
    __GFP_HIGH. __GFP_DIRECT_RECLAIM identifies callers that can sleep and
    are willing to enter direct reclaim. __GFP_KSWAPD_RECLAIM identifies
    callers that want to wake kswapd for background reclaim. __GFP_WAIT is
    redefined as a caller that is willing to enter direct reclaim and wake
    kswapd for background reclaim.

    This patch then converts a number of sites:

    o __GFP_ATOMIC is used by callers that are high priority and have memory
    pools for those requests. GFP_ATOMIC uses this flag.

    o Callers that have a limited mempool to guarantee forward progress clear
    __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
    into this category where kswapd will still be woken but atomic reserves
    are not used as there is a one-entry mempool to guarantee progress.

    o Callers that are checking if they are non-blocking should use the
    helper gfpflags_allow_blocking() where possible. This is because
    checking for __GFP_WAIT as was done historically now can trigger false
    positives. Some exceptions like dm-crypt.c exist where the code intent
    is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
    flag manipulations.

    o Callers that built their own GFP flags instead of starting with GFP_KERNEL
    and friends now also need to specify __GFP_KSWAPD_RECLAIM.

    The first key hazard to watch out for is callers that removed __GFP_WAIT
    and were depending on access to atomic reserves for inconspicuous reasons.
    In some cases it may be appropriate for them to use __GFP_HIGH.

    The second key hazard is callers that assembled their own combination of
    GFP flags instead of starting with something like GFP_KERNEL. They may
    now wish to specify __GFP_KSWAPD_RECLAIM. It's almost certainly harmless
    if it's missed in most cases as other activity will wake kswapd.

    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Vitaly Wool <vitalywool@gmail.com>
    Cc: Rik van Riel <riel@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

    Mel Gorman
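
    A small usage sketch for the helper mentioned above; get_buffer() is an
    illustrative caller, not an in-tree function:

        #include <linux/gfp.h>
        #include <linux/slab.h>

        static void *get_buffer(size_t len, gfp_t gfp_mask)
        {
                if (gfpflags_allow_blocking(gfp_mask))
                        return kmalloc(len, gfp_mask);  /* may enter direct reclaim */

                /* non-blocking context: wake kswapd but never sleep */
                return kmalloc(len, GFP_NOWAIT);
        }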
     

09 Sep, 2015

1 commit

  • mempool_destroy() does not tolerate a NULL mempool_t pointer argument and
    performs a NULL-pointer dereference. This requires additional attention
    and effort from developers/reviewers and forces all mempool_destroy()
    callers to do a NULL check:

    if (pool)
            mempool_destroy(pool);

    Or, otherwise, be invalid mempool_destroy() users.

    Tweak mempool_destroy() and NULL-check the pointer there.

    Proposed by Andrew Morton.

    Link: https://lkml.org/lkml/2015/6/8/583
    Signed-off-by: Sergey Senozhatsky
    Acked-by: David Rientjes
    Cc: Julia Lawall
    Cc: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sergey Senozhatsky
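
    A simplified sketch of the tweak (the real function lives in mm/mempool.c
    and tears the pool down via the static free_pool() helper):

        void mempool_destroy(mempool_t *pool)
        {
                if (unlikely(!pool))    /* now a no-op for NULL, like kfree(NULL) */
                        return;

                free_pool(pool);
        }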
     

16 Apr, 2015

3 commits

  • Mempools keep allocated objects in reserve for situations when ordinary
    allocations may not be possible to satisfy. These objects shouldn't be
    accessed before they leave the pool.

    This patch poisons elements when they enter the pool and unpoisons them
    when they leave it. This lets KASan detect use-after-free of mempool
    elements (a sketch of the idea follows this entry).

    Signed-off-by: Andrey Ryabinin
    Tested-by: David Rientjes
    Cc: Catalin Marinas
    Cc: Dmitry Chernenkov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
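
    A hedged sketch of the idea: add_element()/remove_element() mirror the
    static helpers in mm/mempool.c, and the kasan_*_element() hook names are
    assumed from this series rather than quoted from it:

        static void add_element(mempool_t *pool, void *element)
        {
                BUG_ON(pool->curr_nr >= pool->min_nr);
                kasan_poison_element(pool, element);    /* assumed hook name */
                pool->elements[pool->curr_nr++] = element;
        }

        static void *remove_element(mempool_t *pool)
        {
                void *element = pool->elements[--pool->curr_nr];

                kasan_unpoison_element(pool, element);  /* assumed hook name */
                return element;
        }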
     
  • Mempools keep elements in a reserved pool for contexts in which allocation
    may not be possible. When an element is allocated from the reserved pool,
    its memory contents are the same as when it was added to the reserved pool.

    Because of this, elements lack any free poisoning to detect use-after-free
    errors.

    This patch adds free poisoning for elements backed by the slab allocator.
    This is possible because the mempool layer knows the object size of each
    element.

    When an element is added to the reserved pool, it is poisoned with
    POISON_FREE. When it is removed from the reserved pool, the contents are
    checked for POISON_FREE. If there is a mismatch, a warning is emitted to
    the kernel log.

    This is only effective for configs with CONFIG_DEBUG_SLAB or
    CONFIG_SLUB_DEBUG_ON.

    [fabio.estevam@freescale.com: use '%zu' for printing 'size_t' variable]
    [arnd@arndb.de: add missing include]
    Signed-off-by: David Rientjes
    Cc: Dave Kleikamp
    Cc: Christoph Hellwig
    Cc: Sebastian Ott
    Cc: Mikulas Patocka
    Cc: Catalin Marinas
    Signed-off-by: Fabio Estevam
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
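
    A minimal sketch of the poisoning side; mempool_alloc_slab, mempool_kmalloc
    and ksize() are existing kernel symbols, while poison_element() is an
    illustrative helper name:

        #include <linux/poison.h>

        /* called when an element is returned to the reserved pool */
        static void poison_element(mempool_t *pool, void *element)
        {
                /* only slab/kmalloc-backed pools know the object size */
                if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
                        memset(element, POISON_FREE, ksize(element));
        }

        /* the matching check on removal scans for bytes != POISON_FREE and
           logs a warning on any mismatch */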
     
  • All occurrences of mempools based on slab caches with object constructors
    have been removed from the tree, so disallow creating them.

    We can only dereference mem->ctor in mm/mempool.c without including
    mm/slab.h in include/linux/mempool.h. So simply note the restriction,
    just like the comment restricting usage of __GFP_ZERO, and warn on kernels
    with CONFIG_DEBUG_VM if such a mempool is allocated from.

    We don't want to incur this check on every element allocation, so use
    VM_BUG_ON().

    Signed-off-by: David Rientjes
    Cc: Dave Kleikamp
    Cc: Christoph Hellwig
    Cc: Sebastian Ott
    Cc: Mikulas Patocka
    Cc: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
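
    A sketch of where the check lands, assuming the existing slab-backed
    callback mempool_alloc_slab(); VM_BUG_ON() compiles away unless
    CONFIG_DEBUG_VM is set:

        void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data)
        {
                struct kmem_cache *mem = pool_data;

                VM_BUG_ON(mem->ctor);   /* ctor-backed mempools are disallowed */
                return kmem_cache_alloc(mem, gfp_mask);
        }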
     

15 Apr, 2015

1 commit

  • Allocating a large number of elements in atomic context could quickly
    deplete memory reserves, so just disallow atomic resizing entirely.

    Nothing currently uses mempool_resize() with anything other than
    GFP_KERNEL, so convert existing callers to drop the gfp_mask.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: David Rientjes
    Acked-by: Steffen Maier [zfcp]
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: Steve French
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
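
    The resulting interface change, sketched as prototypes; the function now
    assumes GFP_KERNEL internally and may sleep:

        int mempool_resize(mempool_t *pool, int new_min_nr, gfp_t gfp_mask);  /* before */
        int mempool_resize(mempool_t *pool, int new_min_nr);                  /* after  */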
     

07 Jun, 2014

1 commit


05 Jun, 2014

1 commit

  • Memory obtained via mempool_alloc is not always zeroed even when
    called with __GFP_ZERO. Add a note and VM_BUG_ON statement to make
    that clear.

    [akpm@linux-foundation.org: use VM_WARN_ON_ONCE]
    Signed-off-by: Sebastian Ott
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sebastian Ott
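
    A one-line sketch of the added check at the top of mempool_alloc(), per the
    VM_WARN_ON_ONCE note above:

        VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);  /* __GFP_ZERO is not honoured */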
     

08 Apr, 2014

1 commit

  • Add unlikely and likely hints to the function mempool_free. It lays out
    the code in such a way that the common path runs straight through and
    saves a cache line (a simplified sketch follows this entry).

    Signed-off-by: Mikulas Patocka
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mikulas Patocka
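
    A simplified sketch of the hinted layout (the slow path is elided):

        void mempool_free(void *element, mempool_t *pool)
        {
                if (unlikely(element == NULL))
                        return;

                if (unlikely(pool->curr_nr < pool->min_nr)) {
                        /* rare slow path: hand the element back to the
                           reserved pool under pool->lock and wake waiters */
                        return;
                }

                /* common path: straight back to the backing allocator */
                pool->free(element, pool->pool_data);
        }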
     

12 Sep, 2013

1 commit


25 Jun, 2012

1 commit

  • mempool_create_node() currently assumes %GFP_KERNEL. Its only user,
    blk_init_free_list(), is about to be updated to use other allocation
    flags - add @gfp_mask argument to the function.

    Signed-off-by: Tejun Heo
    Cc: Andrew Morton
    Cc: Hugh Dickins
    Signed-off-by: Jens Axboe

    Tejun Heo
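
    The updated prototype, sketched with the new @gfp_mask argument:

        mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
                                       mempool_free_t *free_fn, void *pool_data,
                                       gfp_t gfp_mask, int node_id);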
     

11 Jan, 2012

3 commits

  • mempool modifies gfp_mask so that the backing allocator doesn't try too
    hard or trigger a warning message when there's a pool to fall back on. In
    addition, for the first try, it removes __GFP_WAIT and __GFP_IO, so that it
    doesn't trigger reclaim or wait when the allocation can be fulfilled from
    the pool; however, when that allocation fails and the pool is empty too, it
    waits for the pool to be replenished before retrying.

    Allocation which could have succeeded after a bit of reclaim has to wait
    on the reserved items and it's not like mempool doesn't retry with
    __GFP_WAIT and IO. It just does that *after* someone returns an element,
    pointlessly delaying things.

    Fix it by retrying immediately if the first round of allocation attempts
    without __GFP_WAIT and __GFP_IO fails.

    [akpm@linux-foundation.org: shorten the lock hold time]
    Signed-off-by: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tejun Heo
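
    A simplified sketch of the retry flow; gfp_temp is the first-round mask
    with __GFP_WAIT and __GFP_IO cleared, and the real code also holds
    pool->lock around the curr_nr check:

        repeat_alloc:
                element = pool->alloc(gfp_temp, pool->pool_data);
                if (likely(element != NULL))
                        return element;

                if (!pool->curr_nr && gfp_temp != gfp_mask) {
                        /* pool empty and the relaxed attempt failed: retry
                           immediately with the original flags instead of
                           sleeping until an element is returned */
                        gfp_temp = gfp_mask;
                        goto repeat_alloc;
                }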
     
  • mempool_destroy() is a thin wrapper around free_pool(). The only thing it
    adds is BUG_ON(pool->curr_nr != pool->min_nr). The intention seems to be
    to enforce that all allocated elements are freed; however, the BUG_ON()
    can't achieve that (it doesn't know anything about objects above min_nr)
    and is incorrect, as mempool_resize() is allowed to leave the pool extended
    but not filled. Furthermore, panicking is way worse than any memory leak
    and there are better debug tools to track memory leaks.

    Drop the BUG_ON() from mempool_destroy() and, as that leaves the function
    identical to free_pool(), replace it.

    Signed-off-by: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tejun Heo
     
  • mempool_alloc/free() use undocumented smp_mb()'s. The code is slightly
    broken and misleading.

    The lockless part is in mempool_free(). It wants to determine whether the
    item being freed needs to be returned to the pool or backing allocator
    without grabbing pool->lock. Two things need to be guaranteed for correct
    operation.

    1. pool->curr_nr + #allocated should never dip below pool->min_nr.
    2. Waiters shouldn't be left dangling.

    For #1, the only necessary condition is that curr_nr visible at free is
    from after the allocation of the element being freed (details in the
    comment). For most cases, this is true without any barrier but there can
    be fringe cases where the allocated pointer is passed to the freeing task
    without going through memory barriers. To cover this case, wmb is
    necessary before returning from allocation and rmb is necessary before
    reading curr_nr. IOW,

    ALLOCATING TASK                          FREEING TASK

    update pool state after alloc;
    wmb();
    pass pointer to freeing task;
                                             read pointer;
                                             rmb();
                                             read pool state to free;

    The current code doesn't have wmb after pool update during allocation and
    may theoretically, on machines where unlock doesn't behave as full wmb,
    lead to pool depletion and deadlock. smp_wmb() needs to be added after
    successful allocation from reserved elements and smp_mb() in
    mempool_free() can be replaced with smp_rmb().

    For #2, the waiter needs to add itself to waitqueue and then check the
    wait condition and the waker needs to update the wait condition and then
    wake up. Because waitqueue operations always go through full spinlock
    synchronization, there is no need for extra memory barriers.

    Furthermore, mempool_alloc() is already holding pool->lock when it decides
    that it needs to wait. There is no reason to do unlock - add waitqueue -
    test condition again. It can simply add itself to waitqueue while holding
    pool->lock and then unlock and sleep.

    This patch adds smp_wmb() after successful allocation from the reserved
    pool, replaces smp_mb() in mempool_free() with smp_rmb() and extends
    pool->lock
    over waitqueue addition. More importantly, it explains what memory
    barriers do and how the lockless testing is correct.

    -v2: Oleg pointed out that unlock doesn't imply wmb. Added explicit
    smp_wmb() after successful allocation from reserved pool and
    updated comments accordingly.

    Signed-off-by: Tejun Heo
    Cc: Oleg Nesterov
    Cc: "Paul E. McKenney"
    Cc: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tejun Heo
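
    A condensed sketch of the resulting barrier pairing, loosely following the
    description above rather than the exact diff:

        /* allocating task: taking an element from the reserved pool */
        element = pool->elements[--pool->curr_nr];
        smp_wmb();      /* pool state update visible before the pointer
                           reaches the freeing task */

        /* freeing task, in mempool_free() */
        smp_rmb();      /* pairs with the smp_wmb() above */
        if (unlikely(pool->curr_nr < pool->min_nr)) {
                /* refill the reserved pool under pool->lock */
        }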
     

31 Oct, 2011

1 commit


22 Sep, 2009

1 commit

  • The kzalloc mempool zeros items when they are initially allocated, but
    does not rezero used items that are returned to the pool. Consequently
    mempool_alloc()s may return non-zeroed memory.

    Since there are/were only two in-tree users for
    mempool_create_kzalloc_pool(), and 'fixing' this in a way that will
    re-zero used (but not new) items before first use is non-trivial, just
    remove it.

    Signed-off-by: Sage Weil
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sage Weil
     

10 Aug, 2009

1 commit

  • Clean up type-casting twice. "size_t" is typedef'd as "unsigned long" on
    64-bit systems and as "unsigned int" on 32-bit systems, so the intermediate
    cast to 'long' is pointless.

    Signed-off-by: Figo.zhang
    Signed-off-by: Linus Torvalds

    Figo.zhang
     

20 Oct, 2007

1 commit


18 Jul, 2007

1 commit

  • kmalloc_node() and kmem_cache_alloc_node() were not available in a zeroing
    variant in the past. But with __GFP_ZERO it is possible now to do zeroing
    while allocating.

    Use __GFP_ZERO to remove the explicit clearing of memory via memset
    wherever we can.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
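
    The shape of the conversion, sketched on kmalloc_node(); buf, size,
    gfp_flags and node are illustrative:

        /* before */
        buf = kmalloc_node(size, gfp_flags, node);
        if (buf)
                memset(buf, 0, size);

        /* after */
        buf = kmalloc_node(size, gfp_flags | __GFP_ZERO, node);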
     

17 Jul, 2007

1 commit

  • Christian Borntraeger points out that mempool_free() doesn't noop when
    handed NULL. This is inconsistent with the other free-like functions
    in the kernel.

    Signed-off-by: Rusty Russell
    Cc: Christian Borntraeger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rusty Russell
     

12 Feb, 2007

1 commit

  • A variety of (mostly) innocuous fixes to the embedded kernel-doc content in
    source files, including:

    * make multi-line initial descriptions single line
    * denote some function names, constants and structs as such
    * change erroneous opening '/*' to '/**' in a few places
    * reword some text for clarity

    Signed-off-by: Robert P. J. Day
    Cc: "Randy.Dunlap"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robert P. J. Day
     

02 Sep, 2006

1 commit

  • This patch works around a complex dm-related deadlock/livelock down in the
    mempool allocator.

    Alasdair said:

    Several dm targets suffer from this.

    Mempools are not yet used correctly everywhere in device-mapper: they can
    get shared when devices are stacked, and some targets share them across
    multiple instances. I made fixing this one of the prerequisites for this
    patch:

    md-dm-reduce-stack-usage-with-stacked-block-devices.patch

    which in some cases makes people more likely to hit the problem.

    There's been some progress on this recently with (unfinished) dm-crypt
    patches at:

    http://www.kernel.org/pub/linux/kernel/people/agk/patches/2.6/editing/
    (dm-crypt-move-io-to-workqueue.patch plus dependencies)

    and:

    I've no problems with a temporary workaround like that, but Milan Broz (a
    new Redhat developer in the Czech Republic) has started reviewing all the
    mempool usage in device-mapper so I'm expecting we'll soon have a proper fix
    for this and associated problems. [He's back from holiday at the start of next
    week.]

    For now, this sad-but-safe little patch will allow the machine to recover.

    [akpm@osdl.org: rewrote changelog]
    Cc: Alasdair G Kergon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Mironchik
     

27 Mar, 2006

5 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial:
    drivers/char/ftape/lowlevel/fdc-io.c: Correct a comment
    Kconfig help: MTD_JEDECPROBE already supports Intel
    Remove ugly debugging stuff
    do_mounts.c: Minor ROOT_DEV comment cleanup
    BUG_ON() Conversion in drivers/s390/block/dasd_devmap.c
    BUG_ON() Conversion in mm/mempool.c
    BUG_ON() Conversion in mm/memory.c
    BUG_ON() Conversion in kernel/fork.c
    BUG_ON() Conversion in ipc/sem.c
    BUG_ON() Conversion in fs/ext2/
    BUG_ON() Conversion in fs/hfs/
    BUG_ON() Conversion in fs/dcache.c
    BUG_ON() Conversion in fs/buffer.c
    BUG_ON() Conversion in input/serio/hp_sdc_mlc.c
    BUG_ON() Conversion in md/dm-table.c
    BUG_ON() Conversion in md/dm-path-selector.c
    BUG_ON() Conversion in drivers/isdn
    BUG_ON() Conversion in drivers/char
    BUG_ON() Conversion in drivers/mtd/

    Linus Torvalds
     
  • Add another allocator to the common mempool code: a kzalloc/kfree allocator

    This will be used by the next patch in the series to replace a mempool-backed
    kzalloc allocator. It is also very likely that there will be more users in
    the future.

    Signed-off-by: Matthew Dobson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Dobson
     
  • Add another allocator to the common mempool code: a kmalloc/kfree allocator

    This will be used by the next patch in the series to replace duplicate
    mempool-backed kmalloc allocators in several places in the kernel. It is also
    very likely that there will be more users in the future.

    Signed-off-by: Matthew Dobson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Dobson
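
    A usage sketch for the new allocator; mempool_kmalloc()/mempool_kfree()
    receive the object size through the pool_data cookie, and 16 and 256 are
    illustrative values:

        mempool_t *pool = mempool_create(16, mempool_kmalloc, mempool_kfree,
                                         (void *)(size_t)256);
        void *obj = mempool_alloc(pool, GFP_NOIO);
        /* ... use obj ... */
        mempool_free(obj, pool);
        mempool_destroy(pool);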
     
  • Add another allocator to the common mempool code: a simple page allocator.

    This will be used by the next patch in the series to replace duplicate
    mempool-backed page allocators in 2 places in the kernel. It is also likely
    that there will be more users in the future.

    Signed-off-by: Matthew Dobson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Dobson
     
  • This changes if () BUG(); constructs to BUG_ON(), which is
    cleaner, contains unlikely() and can be better optimized away.

    Signed-off-by: Eric Sesterhenn
    Signed-off-by: Adrian Bunk

    Eric Sesterhenn
     

22 Mar, 2006

1 commit


28 Oct, 2005

1 commit


09 Oct, 2005

1 commit

  • - added typedef unsigned int __nocast gfp_t;

    - replaced __nocast uses for gfp flags with gfp_t - it gives exactly
    the same warnings as far as sparse is concerned, doesn't change
    generated code (from gcc point of view we replaced unsigned int with
    typedef) and documents what's going on far better.

    Signed-off-by: Al Viro
    Signed-off-by: Linus Torvalds

    Al Viro
     

08 Jul, 2005

1 commit


24 Jun, 2005

2 commits

  • Here's a small patch to improve the performance of mempool_alloc by only
    initializing the wait queue when we're about to wait.

    Signed-off-by: Benjamin LaHaise
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Benjamin LaHaise
     
  • Patch to allocate the control structures for ide devices on the node of
    the device itself (for NUMA systems). The patch depends on the Slab API
    change patch by Manfred and me (in mm) and the pcidev_to_node patch that I
    posted today.

    Does some realignment too.

    Signed-off-by: Justin M. Forbes
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pravin Shelar
    Signed-off-by: Shobhit Dayal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

01 May, 2005

3 commits

  • Replace a number of memory barriers with smp_ variants. This means we won't
    take the unnecessary hit on UP machines.

    Signed-off-by: Anton Blanchard
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    akpm@osdl.org
     
  • Mempool is pretty clever. Looks too clever for its own good :) It
    shouldn't really know so much about page reclaim internals.

    - don't guess about what effective page reclaim might involve.

    - don't randomly flush out all dirty data if some unlikely thing
    happens (alloc returns NULL). page reclaim can (sort of :P) handle
    it.

    I think the main motivation is trying to avoid pool->lock at all costs.
    However the first allocation is attempted with __GFP_WAIT cleared, so it
    will be 'can_try_harder' if it hits the page allocator. So if allocation
    still fails, then we can probably afford to hit the pool->lock - and what's
    the alternative? Try page reclaim and hit zone->lru_lock?

    A nice upshot is that we don't need to do any fancy memory barriers or do
    (intentionally) racy access to pool-> fields outside the lock.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Mempools have 2 problems.

    The first is that mempool_alloc can possibly get stuck in __alloc_pages
    when it should opt to fail and take an element from its reserved pool.

    The second is that it will happily eat emergency PF_MEMALLOC reserves
    instead of going to its reserved pool.

    Fix the first by passing __GFP_NORETRY in the allocation calls in
    mempool_alloc. Fix the second by introducing a __GFP_MEMPOOL flag which
    directs the page allocator not to allocate from the reserve pool.

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds