11 Jan, 2012

3 commits

  • mempool modifies gfp_mask so that the backing allocator doesn't try too
    hard or trigger warning messages when there's a pool to fall back on. In
    addition, for the first try, it removes __GFP_WAIT and IO, so that it
    doesn't trigger reclaim or wait when the allocation can be fulfilled from
    the pool; however, when that allocation fails and the pool is empty too,
    it waits for the pool to be replenished before retrying.

    An allocation which could have succeeded after a bit of reclaim instead
    has to wait on the reserved items, and it's not as if mempool never
    retries with __GFP_WAIT and IO; it just does that *after* someone
    returns an element, pointlessly delaying things.

    Fix it by retrying immediately if the first round of allocation attempts
    without __GFP_WAIT and IO fails (see the sketch below).

    [akpm@linux-foundation.org: shorten the lock hold time]
    Signed-off-by: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tejun Heo
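
    A condensed sketch of the allocation path after this change, simplified
    from mm/mempool.c (the waitqueue slow path is elided; remove_element()
    is mempool's internal pop helper):

        gfp_t gfp_temp = gfp_mask & ~(__GFP_WAIT | __GFP_IO);

    repeat_alloc:
        element = pool->alloc(gfp_temp, pool->pool_data);
        if (likely(element != NULL))
                return element;

        spin_lock_irqsave(&pool->lock, flags);
        if (likely(pool->curr_nr)) {
                element = remove_element(pool);   /* take a reserved item */
                spin_unlock_irqrestore(&pool->lock, flags);
                return element;
        }
        spin_unlock_irqrestore(&pool->lock, flags);

        /* the fix: retry at once with reclaim and IO allowed, instead of
         * sleeping until someone returns an element */
        if (gfp_temp != gfp_mask) {
                gfp_temp = gfp_mask;
                goto repeat_alloc;
        }
        /* ...only now sleep on pool->wait (when __GFP_WAIT is set)... */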
     
  • mempool_destroy() is a thin wrapper around free_pool(). The only thing it
    adds is BUG_ON(pool->curr_nr != pool->min_nr). The intention seems to be
    to enforce that all allocated elements are freed; however, the BUG_ON()
    can't achieve that (it doesn't know anything about objects above min_nr)
    and is incorrect, as mempool_resize() is allowed to leave the pool
    extended but not filled. Furthermore, panicking is way worse than any
    memory leak and there are better debug tools to track memory leaks.

    Drop the BUG_ON() from mempool_destroy() and, as that leaves the function
    identical to free_pool(), replace it (see the sketch below).

    Signed-off-by: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tejun Heo
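
    A sketch of the resulting single function (remove_element() again being
    mempool's internal pop helper; this is what free_pool() already did):

        void mempool_destroy(mempool_t *pool)
        {
                while (pool->curr_nr) {
                        void *element = remove_element(pool);

                        pool->free(element, pool->pool_data);
                }
                kfree(pool->elements);
                kfree(pool);
        }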
     
  • mempool_alloc/free() use undocumented smp_mb()'s. The code is slightly
    broken and misleading.

    The lockless part is in mempool_free(). It wants to determine whether the
    item being freed needs to be returned to the pool or backing allocator
    without grabbing pool->lock. Two things need to be guaranteed for correct
    operation.

    1. pool->curr_nr + #allocated should never dip below pool->min_nr.
    2. Waiters shouldn't be left dangling.

    For #1, the only necessary condition is that the curr_nr visible at free is
    from after the allocation of the element being freed (details in the
    comment). For most cases, this is true without any barrier but there can
    be fringe cases where the allocated pointer is passed to the freeing task
    without going through memory barriers. To cover this case, wmb is
    necessary before returning from allocation and rmb is necessary before
    reading curr_nr. IOW,

    ALLOCATING TASK                         FREEING TASK

    update pool state after alloc;
    wmb();
    pass pointer to freeing task;
                                            read pointer;
                                            rmb();
                                            read pool state to free;

    The current code doesn't have a wmb after the pool update during
    allocation and may theoretically, on machines where unlock doesn't
    behave as a full wmb, lead to pool depletion and deadlock. An smp_wmb()
    needs to be added after successful allocation from the reserved elements
    and the smp_mb() in mempool_free() can be replaced with smp_rmb().

    For #2, the waiter needs to add itself to the waitqueue and then check
    the wait condition, and the waker needs to update the wait condition and
    then wake up. Because waitqueue operations always go through full
    spinlock synchronization, there is no need for extra memory barriers.

    Furthermore, mempool_alloc() is already holding pool->lock when it
    decides that it needs to wait. There is no reason to unlock, add to the
    waitqueue, and test the condition again. It can simply add itself to the
    waitqueue while holding pool->lock and then unlock and sleep.

    This patch adds smp_wmb() after successful allocation from the reserved
    pool, replaces smp_mb() in mempool_free() with smp_rmb(), and extends
    pool->lock over the waitqueue addition. More importantly, it explains
    what the memory barriers do and why the lockless testing is correct (see
    the sketch below).

    -v2: Oleg pointed out that unlock doesn't imply wmb. Added explicit
    smp_wmb() after successful allocation from reserved pool and
    updated comments accordingly.

    Signed-off-by: Tejun Heo
    Cc: Oleg Nesterov
    Cc: "Paul E. McKenney"
    Cc: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tejun Heo
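
    A condensed sketch of the barrier pairing described above, simplified
    from mm/mempool.c (add_element()/remove_element() are mempool's internal
    helpers; error paths are elided):

        /* mempool_alloc(), after taking an element from the reserve: */
        element = pool->elements[--pool->curr_nr];
        spin_unlock_irqrestore(&pool->lock, flags);
        smp_wmb();           /* pairs with the smp_rmb() in mempool_free() */
        return element;

        /* mempool_free(), testing pool state without pool->lock: */
        smp_rmb();           /* pairs with the smp_wmb() above */
        if (unlikely(pool->curr_nr < pool->min_nr)) {
                spin_lock_irqsave(&pool->lock, flags);
                if (likely(pool->curr_nr < pool->min_nr)) {
                        add_element(pool, element);
                        spin_unlock_irqrestore(&pool->lock, flags);
                        wake_up(&pool->wait);
                        return;
                }
                spin_unlock_irqrestore(&pool->lock, flags);
        }
        pool->free(element, pool->pool_data);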
     

31 Oct, 2011

1 commit


22 Sep, 2009

1 commit

  • The kzalloc mempool zeros items when they are initially allocated, but
    does not rezero used items that are returned to the pool. Consequently
    mempool_alloc()s may return non-zeroed memory.

    Since there are/were only two in-tree users for
    mempool_create_kzalloc_pool(), and 'fixing' this in a way that will
    re-zero used (but not new) items before first use is non-trivial, just
    remove it.

    Signed-off-by: Sage Weil
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sage Weil
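
    The underlying issue, sketched: a pool's element allocator only runs
    when a fresh element has to be created, so zeroing there covers first
    use only (mempool_kzalloc() was the allocator backing these pools):

        static void *mempool_kzalloc(gfp_t gfp_mask, void *pool_data)
        {
                return kzalloc((size_t)pool_data, gfp_mask); /* zeroed once */
        }

        /* mempool_free() pushes a used element straight back onto
         * pool->elements[]; a later mempool_alloc() can pop it without
         * calling mempool_kzalloc() again, so it comes back non-zeroed. */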
     

10 Aug, 2009

1 commit

  • Clean up a double type-cast. "size_t" is typedef'd as "unsigned long" on
    64-bit systems and as "unsigned int" on 32-bit systems, so the
    intermediate cast to 'long' is pointless (see the sketch below).

    Signed-off-by: Figo.zhang
    Signed-off-by: Linus Torvalds

    Figo.zhang
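
    Illustrating the cleanup (assuming the mempool_kmalloc()-style helpers,
    where the object size travels through the void *pool_data cookie, were
    the call sites in question):

        /* before: the intermediate cast buys nothing */
        return kmalloc((size_t)(long)pool_data, gfp_mask);

        /* after */
        return kmalloc((size_t)pool_data, gfp_mask);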
     

20 Oct, 2007

1 commit


18 Jul, 2007

1 commit

  • kmalloc_node() and kmem_cache_alloc_node() were not available in zeroing
    variants in the past. But with __GFP_ZERO it is now possible to zero
    while allocating.

    Use __GFP_ZERO to remove the explicit clearing of memory via memset
    wherever we can (see the sketch below).

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
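
    The pattern being replaced, sketched:

        /* before: allocate, then clear by hand */
        element = kmalloc_node(size, gfp_mask, node_id);
        if (element)
                memset(element, 0, size);

        /* after: let the allocator do the zeroing */
        element = kmalloc_node(size, gfp_mask | __GFP_ZERO, node_id);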
     

17 Jul, 2007

1 commit

  • Christian Borntraeger points out that mempool_free() is not a no-op when
    handed NULL. This is inconsistent with the other free-like functions in
    the kernel (see the sketch below).

    Signed-off-by: Rusty Russell
    Cc: Christian Borntraeger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rusty Russell
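
    The fix amounts to an early return, mirroring kfree(NULL):

        void mempool_free(void *element, mempool_t *pool)
        {
                if (unlikely(element == NULL))
                        return;         /* freeing NULL is now a no-op */
                /* ... */
        }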
     

12 Feb, 2007

1 commit

  • A variety of (mostly) innocuous fixes to the embedded kernel-doc content in
    source files, including:

    * make multi-line initial descriptions single line
    * denote some function names, constants and structs as such
    * change erroneous opening '/*' to '/**' in a few places
    * reword some text for clarity

    Signed-off-by: Robert P. J. Day
    Cc: "Randy.Dunlap"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robert P. J. Day
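
    For reference, the kernel-doc shape these fixes converge on; only
    comments opened with the double-star marker are picked up by the
    tooling (the descriptions below are illustrative, not the exact text):

        /**
         * mempool_free - return an element to the pool
         * @element: the element, as obtained from mempool_alloc()
         * @pool:    the pool it was allocated from
         */
        void mempool_free(void *element, mempool_t *pool);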
     

02 Sep, 2006

1 commit

  • This patch works around a complex dm-related deadlock/livelock down in the
    mempool allocator.

    Alasdair said:

    Several dm targets suffer from this.

    Mempools are not yet used correctly everywhere in device-mapper: they can
    get shared when devices are stacked, and some targets share them across
    multiple instances. I made fixing this one of the prerequisites for this
    patch:

    md-dm-reduce-stack-usage-with-stacked-block-devices.patch

    which in some cases makes people more likely to hit the problem.

    There's been some progress on this recently with (unfinished) dm-crypt
    patches at:

    http://www.kernel.org/pub/linux/kernel/people/agk/patches/2.6/editing/
    (dm-crypt-move-io-to-workqueue.patch plus dependencies)

    and:

    I've no problems with a temporary workaround like that, but Milan Broz
    (a new Red Hat developer in the Czech Republic) has started reviewing all
    the mempool usage in device-mapper, so I'm expecting we'll soon have a
    proper fix for this and the associated problems. [He's back from holiday
    at the start of next week.]

    For now, this sad-but-safe little patch will allow the machine to recover.

    [akpm@osdl.org: rewrote changelog]
    Cc: Alasdair G Kergon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Mironchik
     

27 Mar, 2006

5 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial:
    drivers/char/ftape/lowlevel/fdc-io.c: Correct a comment
    Kconfig help: MTD_JEDECPROBE already supports Intel
    Remove ugly debugging stuff
    do_mounts.c: Minor ROOT_DEV comment cleanup
    BUG_ON() Conversion in drivers/s390/block/dasd_devmap.c
    BUG_ON() Conversion in mm/mempool.c
    BUG_ON() Conversion in mm/memory.c
    BUG_ON() Conversion in kernel/fork.c
    BUG_ON() Conversion in ipc/sem.c
    BUG_ON() Conversion in fs/ext2/
    BUG_ON() Conversion in fs/hfs/
    BUG_ON() Conversion in fs/dcache.c
    BUG_ON() Conversion in fs/buffer.c
    BUG_ON() Conversion in input/serio/hp_sdc_mlc.c
    BUG_ON() Conversion in md/dm-table.c
    BUG_ON() Conversion in md/dm-path-selector.c
    BUG_ON() Conversion in drivers/isdn
    BUG_ON() Conversion in drivers/char
    BUG_ON() Conversion in drivers/mtd/

    Linus Torvalds
     
  • Add another allocator to the common mempool code: a kzalloc/kfree allocator

    This will be used by the next patch in the series to replace a mempool-backed
    kzalloc allocator. It is also very likely that there will be more users in
    the future.

    Signed-off-by: Matthew Dobson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Dobson
     
  • Add another allocator to the common mempool code: a kmalloc/kfree allocator

    This will be used by the next patch in the series to replace duplicate
    mempool-backed kmalloc allocators in several places in the kernel. It is also
    very likely that there will be more users in the future.

    Signed-off-by: Matthew Dobson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Dobson
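
    A sketch of such an allocator pair; the object size travels through
    mempool's void *pool_data cookie:

        void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data)
        {
                return kmalloc((size_t)pool_data, gfp_mask);
        }

        void mempool_kfree(void *element, void *pool_data)
        {
                kfree(element);
        }

        /* usage: mempool_create(min_nr, mempool_kmalloc, mempool_kfree,
         *                       (void *)size); */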
     
  • This will be used by the next patch in the series to replace duplicate
    mempool-backed page allocators in 2 places in the kernel. It is also likely
    that there will be more users in the future.

    Signed-off-by: Matthew Dobson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Dobson
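
    Same idea for whole pages, with pool_data carrying the allocation order
    instead of a size:

        void *mempool_alloc_pages(gfp_t gfp_mask, void *pool_data)
        {
                int order = (int)(long)pool_data;

                return alloc_pages(gfp_mask, order);
        }

        void mempool_free_pages(void *element, void *pool_data)
        {
                int order = (int)(long)pool_data;

                __free_pages(element, order);
        }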
     
  • This changes if() BUG(); constructs to BUG_ON(), which is cleaner,
    contains unlikely(), and can be better optimized away (see the sketch
    below).

    Signed-off-by: Eric Sesterhenn
    Signed-off-by: Adrian Bunk

    Eric Sesterhenn
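
    The shape of the conversion (the condition shown is illustrative):

        /* before */
        if (pool->curr_nr >= pool->min_nr)
                BUG();

        /* after: shorter, and BUG_ON() wraps its test in unlikely() */
        BUG_ON(pool->curr_nr >= pool->min_nr);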
     

22 Mar, 2006

1 commit


28 Oct, 2005

1 commit


09 Oct, 2005

1 commit

    - added typedef unsigned int __nocast gfp_t;

    - replaced __nocast uses for gfp flags with gfp_t - it gives exactly the
      same warnings as far as sparse is concerned, doesn't change the
      generated code (from gcc's point of view we replaced unsigned int with
      a typedef) and documents what's going on far better (see the sketch
      below).

    Signed-off-by: Al Viro
    Signed-off-by: Linus Torvalds

    Al Viro
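
    The effect on a prototype, sketched:

        /* before: the sparse annotation open-coded at every use */
        extern void *mempool_alloc(mempool_t *pool,
                                   unsigned int __nocast gfp_mask);

        /* after: same sparse warnings, far clearer intent */
        extern void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask);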
     

08 Jul, 2005

1 commit


24 Jun, 2005

2 commits

  • Here's a small patch to improve the performance of mempool_alloc by only
    initializing the wait queue when we're about to wait.

    Signed-off-by: Benjamin LaHaise
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Benjamin LaHaise
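
    A sketch of the slow path, assuming the usual prepare_to_wait() idiom;
    the wait queue entry is now only set up after both the backing allocator
    and the reserve have come up empty:

        DEFINE_WAIT(wait);

        /* ...allocation failed, pool->curr_nr == 0, __GFP_WAIT set... */
        prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
        spin_unlock_irqrestore(&pool->lock, flags);
        io_schedule();                  /* woken by mempool_free() */
        finish_wait(&pool->wait, &wait);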
     
  • Patch to allocate the control structures for ide devices on the node of
    the device itself (for NUMA systems). The patch depends on the Slab API
    change patch by Manfred and me (in mm) and the pcidev_to_node patch that I
    posted today.

    Does some realignment too.

    Signed-off-by: Justin M. Forbes
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pravin Shelar
    Signed-off-by: Shobhit Dayal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
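
    On the mempool side this series added a node-aware constructor; a usage
    sketch (pcidev_to_node() is the assumed helper from the companion patch
    mentioned above, and later kernels added a gfp_mask argument to this
    signature):

        mempool_t *mempool_create_node(int min_nr,
                        mempool_alloc_t *alloc_fn, mempool_free_t *free_fn,
                        void *pool_data, int node_id);

        /* e.g. keep a pool's elements on the device's home node: */
        pool = mempool_create_node(1, mempool_alloc_slab,
                                   mempool_free_slab, cachep,
                                   pcidev_to_node(pdev));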
     

01 May, 2005

3 commits

  • Replace a number of memory barriers with smp_ variants. This means we won't
    take the unnecessary hit on UP machines.

    Signed-off-by: Anton Blanchard
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    akpm@osdl.org
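
    The substitution, sketched; the smp_ variants compile down to at most a
    compiler barrier on uniprocessor builds:

        /* before: a full barrier even on UP, where it is never needed */
        mb();

        /* after: no-op (bar a compiler barrier) on !CONFIG_SMP */
        smp_mb();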
     
  • Mempool is pretty clever. Looks too clever for its own good :) It
    shouldn't really know so much about page reclaim internals.

    - don't guess about what effective page reclaim might involve.

    - don't randomly flush out all dirty data if some unlikely thing
    happens (alloc returns NULL). page reclaim can (sort of :P) handle
    it.

    I think the main motivation is trying to avoid pool->lock at all costs.
    However the first allocation is attempted with __GFP_WAIT cleared, so it
    will be 'can_try_harder' if it hits the page allocator. So if allocation
    still fails, then we can probably afford to hit the pool->lock - and what's
    the alternative? Try page reclaim and hit zone->lru_lock?

    A nice upshot is that we don't need to do any fancy memory barriers or do
    (intentionally) racy access to pool-> fields outside the lock.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Mempools have 2 problems.

    The first is that mempool_alloc can possibly get stuck in __alloc_pages
    when they should opt to fail, and take an element from their reserved pool.

    The second is that it will happily eat emergency PF_MEMALLOC reserves
    instead of going to their reserved pools.

    Fix the first by passing __GFP_NORETRY in the allocation calls in
    mempool_alloc. Fix the second by introducing a __GFP_MEMPOOL flag which
    directs the page allocator not to allocate from the reserve pool.

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
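
    The gfp adjustment, sketched; note that the flag landed in mainline
    spelled __GFP_NOMEMALLOC:

        /* don't dip into the emergency PF_MEMALLOC reserves - the
         * pool's own reserve is the fallback of last resort */
        gfp_mask |= __GFP_NOMEMALLOC;
        /* fail fast instead of retrying inside __alloc_pages() */
        gfp_mask |= __GFP_NORETRY;

        element = pool->alloc(gfp_mask, pool->pool_data);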
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds