29 Jul, 2016

1 commit

  • This reverts commit f9054c70d28b ("mm, mempool: only set __GFP_NOMEMALLOC
    if there are free elements").

    There has been a report about the OOM killer being invoked when swapping
    out to a dm-crypt device. The primary reason seems to be that the swapout
    IO managed to completely deplete the memory reserves. Ondrej was able to
    bisect and explained the issue by pointing to f9054c70d28b ("mm,
    mempool: only set __GFP_NOMEMALLOC if there are free elements").

    The reason is that the swapout path is not throttled properly because
    the md-raid layer needs to allocate from the generic_make_request path,
    which means it allocates from the PF_MEMALLOC context. The dm layer uses
    mempool_alloc in order to guarantee forward progress, which used to
    inhibit access to memory reserves when using the page allocator. This
    changed with f9054c70d28b ("mm, mempool: only set __GFP_NOMEMALLOC if
    there are free elements"), which dropped the __GFP_NOMEMALLOC protection
    when the memory pool is depleted.

    If we are running out of memory and the only way to free memory is to
    perform swapout, we just keep consuming memory reserves rather than
    throttling the mempool allocations and allowing the pending IO to
    complete, up to the moment when memory is depleted completely and there
    is no way forward but to invoke the OOM killer. This is less than
    optimal.

    The original intention of f9054c70d28b was to help with OOM situations
    where the OOM victim depends on mempool allocations to make forward
    progress. David has mentioned the following backtrace:

    schedule
    schedule_timeout
    io_schedule_timeout
    mempool_alloc
    __split_and_process_bio
    dm_request
    generic_make_request
    submit_bio
    mpage_readpages
    ext4_readpages
    __do_page_cache_readahead
    ra_submit
    filemap_fault
    handle_mm_fault
    __do_page_fault
    do_page_fault
    page_fault

    We do not know more about why the mempool is depleted without being
    replenished in time, though. In any case the dm layer shouldn't depend
    on any allocations outside of the dedicated pools, so forward progress
    should be guaranteed. If this is not the case then dm should be fixed
    rather than papering over the problem and postponing it by accessing
    more memory reserves.

    mempools are a mechanism to maintain dedicated memory reserves to
    guarantee forward progress. Allowing them unbounded access to the page
    allocator's memory reserves goes against the whole purpose of this
    mechanism.

    Bisected by Ondrej Kozina.
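
    For reference, a minimal sketch (not the verbatim kernel code; the helper
    name is made up) of the gfp handling this revert restores in
    mempool_alloc(): reserves are never tapped on behalf of a pool, whether or
    not free elements remain.

    static gfp_t mempool_gfp_adjust(gfp_t gfp_mask)
    {
        gfp_mask |= __GFP_NOMEMALLOC;   /* never dip into emergency reserves */
        gfp_mask |= __GFP_NORETRY;      /* let the backing allocator fail fast */
        gfp_mask |= __GFP_NOWARN;       /* failures are expected and handled */
        return gfp_mask;
    }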

    [akpm@linux-foundation.org: coding-style fixes]
    Link: http://lkml.kernel.org/r/20160721145309.GR26379@dhcp22.suse.cz
    Signed-off-by: Michal Hocko
    Reported-by: Ondrej Kozina
    Reviewed-by: Johannes Weiner
    Acked-by: NeilBrown
    Cc: David Rientjes
    Cc: Mikulas Patocka
    Cc: Ondrej Kozina
    Cc: Tetsuo Handa
    Cc: Mel Gorman
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

25 Jun, 2016

1 commit

  • Currently we may put elements reserved by a mempool into quarantine via
    kasan_kfree(). This is wrong, since the quarantine may really free these
    objects. So when the mempool later tries to use such an element, a
    use-after-free will happen. Or the mempool may decide that it no longer
    needs that element and double-free it.

    So don't put the object into quarantine in kasan_kfree(), just poison it.
    Rename kasan_kfree() to kasan_poison_kfree() to reflect that.

    Also, we shouldn't use kasan_slab_alloc()/kasan_krealloc() in
    kasan_unpoison_element(), because those functions may update the
    allocation stacktrace. This would be wrong for most of the
    remove_element() call sites.

    (The only call site where we may want to update alloc stacktrace is
    in mempool_alloc(). Kmemleak solves this by calling
    kmemleak_update_trace(), so we could make something like that too.
    But this is out of scope of this patch).
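
    A conceptual sketch of the rule this fix enforces (the two wrappers below
    are hypothetical; kasan_poison_kfree() is the function named above, while
    kasan_unpoison_shadow() and ksize() are existing kernel helpers): an
    element returning to the reserved pool is poisoned in place, never pushed
    into the quarantine, because the pool still owns it.

    static void mempool_hide_from_kasan(void *element)
    {
        /* poison the shadow only; do not queue the object for real freeing */
        kasan_poison_kfree(element);
    }

    static void mempool_show_to_kasan(void *element)
    {
        /* make the element usable again without refreshing the allocation
         * stacktrace, matching the behaviour described above */
        kasan_unpoison_shadow(element, ksize(element));
    }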

    Fixes: 55834c59098d ("mm: kasan: initial memory quarantine implementation")
    Link: http://lkml.kernel.org/r/575977C3.1010905@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Reported-by: Kuthonuzo Luruo
    Acked-by: Alexander Potapenko
    Cc: Dmitriy Vyukov
    Cc: Kostya Serebryany
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

21 May, 2016

1 commit

  • Quarantine isolates freed objects in a separate queue. The objects are
    returned to the allocator later, which helps to detect use-after-free
    errors.

    When the object is freed, its state changes from KASAN_STATE_ALLOC to
    KASAN_STATE_QUARANTINE. The object is poisoned and put into quarantine
    instead of being returned to the allocator, therefore every subsequent
    access to that object triggers a KASAN error, and the error handler is
    able to say where the object has been allocated and deallocated.

    When it's time for the object to leave quarantine, its state becomes
    KASAN_STATE_FREE and it's returned to the allocator. From now on the
    allocator may reuse it for another allocation. Before that happens,
    it's still possible to detect a use-after-free on that object (it
    retains the allocation/deallocation stacks).

    When the allocator reuses this object, the shadow is unpoisoned and the
    old allocation/deallocation stacks are wiped. Therefore a use of this
    object, even an incorrect one, won't trigger a KASAN warning.

    Without the quarantine, it's not guaranteed that objects aren't reused
    immediately; that's why the probability of catching a use-after-free is
    lower than with the quarantine in place.

    Freed objects are first added to per-cpu quarantine queues. When a
    cache is destroyed or memory shrinking is requested, the objects are
    moved into the global quarantine queue. Whenever a kmalloc call allows
    memory reclaiming, the oldest objects are popped out of the global queue
    until the total size of objects in quarantine is less than 3/4 of the
    maximum quarantine size (which is a fraction of installed physical
    memory).
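
    A much-simplified sketch of that drain policy (names, sizing and the list
    plumbing below are illustrative; the real code also has per-cpu queues and
    locking):

    struct qobject {
        struct list_head link;
        size_t size;
    };

    static LIST_HEAD(global_quarantine);
    static size_t quarantine_size;      /* bytes currently quarantined */
    static size_t quarantine_max;       /* fraction of installed memory */

    static void quarantine_put(struct qobject *qobj)
    {
        list_add_tail(&qobj->link, &global_quarantine);
        quarantine_size += qobj->size;
    }

    /* called when a kmalloc() is allowed to reclaim memory */
    static void quarantine_reduce(void)
    {
        while (quarantine_size > quarantine_max / 4 * 3) {
            struct qobject *qobj = list_first_entry(&global_quarantine,
                                                    struct qobject, link);

            list_del(&qobj->link);
            quarantine_size -= qobj->size;
            /* here the object would really go back to the slab allocator */
        }
    }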

    As long as an object remains in the quarantine, KASAN is able to report
    accesses to it, so the chance of reporting a use-after-free is
    increased. Once the object leaves quarantine, the allocator may reuse
    it, in which case the object is unpoisoned and KASAN can't detect
    incorrect accesses to it.

    Right now quarantine support is only enabled in SLAB allocator.
    Unification of KASAN features in SLAB and SLUB will be done later.

    This patch is based on the "mm: kasan: quarantine" patch originally
    prepared by Dmitry Chernenkov. A number of improvements have been
    suggested by Andrey Ryabinin.

    [glider@google.com: v9]
    Link: http://lkml.kernel.org/r/1462987130-144092-1-git-send-email-glider@google.com
    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

26 Mar, 2016

1 commit

  • Add GFP flags to KASAN hooks for future patches to use.

    This patch is based on the "mm: kasan: unified support for SLUB and SLAB
    allocators" patch originally prepared by Dmitry Chernenkov.

    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

18 Mar, 2016

1 commit

  • If an oom killed thread calls mempool_alloc(), it is possible that it'll
    loop forever if there are no elements on the freelist since
    __GFP_NOMEMALLOC prevents it from accessing needed memory reserves in
    oom conditions.

    Only set __GFP_NOMEMALLOC if there are elements on the freelist. If
    there are no free elements, allow allocations without the bit set so
    that memory reserves can be accessed if needed.

    Additionally, using mempool_alloc() with __GFP_NOMEMALLOC is not
    supported since the implementation can loop forever without accessing
    memory reserves when needed.
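
    Roughly, the behaviour this commit introduced (and which the 29 Jul, 2016
    revert above removes again), sketched rather than quoted:

    static gfp_t mempool_gfp_adjust(mempool_t *pool, gfp_t gfp_mask)
    {
        gfp_mask |= __GFP_NORETRY | __GFP_NOWARN;
        if (likely(pool->curr_nr))
            gfp_mask |= __GFP_NOMEMALLOC;   /* spares exist: stay off reserves */
        return gfp_mask;                    /* pool empty: reserves allowed */
    }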

    Signed-off-by: David Rientjes
    Cc: Greg Thelen
    Cc: Michal Hocko
    Cc: Tetsuo Handa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

07 Nov, 2015

1 commit

  • …d avoiding waking kswapd

    __GFP_WAIT has been used to identify atomic context in callers that hold
    spinlocks or are in interrupts. They are expected to be high priority and
    have access to one of two watermarks lower than "min", which can be
    referred to as the "atomic reserve". __GFP_HIGH users get access to the
    first lower watermark and can be called the "high priority reserve".

    Over time, callers had a requirement to not block when fallback options
    were available. Some have abused __GFP_WAIT, leading to a situation where
    an optimistic allocation with a fallback option can access atomic
    reserves.

    This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
    cannot sleep and have no alternative. High priority users continue to use
    __GFP_HIGH. __GFP_DIRECT_RECLAIM identifies callers that can sleep and
    are willing to enter direct reclaim. __GFP_KSWAPD_RECLAIM identifies
    callers that want to wake kswapd for background reclaim. __GFP_WAIT is
    redefined as a caller that is willing to enter direct reclaim and wake
    kswapd for background reclaim.

    This patch then converts a number of sites

    o __GFP_ATOMIC is used by callers that are high priority and have memory
    pools for those requests. GFP_ATOMIC uses this flag.

    o Callers that have a limited mempool to guarantee forward progress clear
    __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
    into this category where kswapd will still be woken but atomic reserves
    are not used as there is a one-entry mempool to guarantee progress.

    o Callers that are checking if they are non-blocking should use the
    helper gfpflags_allow_blocking() where possible. This is because
    checking for __GFP_WAIT as was done historically now can trigger false
    positives. Some exceptions like dm-crypt.c exist where the code intent
    is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
    flag manipulations.

    o Callers that built their own GFP flags instead of starting with GFP_KERNEL
    and friends now also need to specify __GFP_KSWAPD_RECLAIM.

    The first key hazard to watch out for is callers that removed __GFP_WAIT
    and were depending on access to atomic reserves for inconspicuous reasons.
    In some cases it may be appropriate for them to use __GFP_HIGH.

    The second key hazard is callers that assembled their own combination of
    GFP flags instead of starting with something like GFP_KERNEL. They may
    now wish to specify __GFP_KSWAPD_RECLAIM. It's almost certainly harmless
    if it's missed in most cases as other activity will wake kswapd.
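
    A hedged example of the caller-side pattern this encourages (the function
    and buffer size below are made up): test whether blocking is allowed via
    the helper instead of poking at the old __GFP_WAIT bit.

    #include <linux/gfp.h>
    #include <linux/slab.h>

    static void *fetch_buffer(gfp_t gfp_mask)
    {
        if (gfpflags_allow_blocking(gfp_mask))
            return kmalloc(512, gfp_mask);              /* may direct-reclaim */
        return kmalloc(512, gfp_mask | __GFP_NOWARN);   /* cannot sleep */
    }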

    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Vitaly Wool <vitalywool@gmail.com>
    Cc: Rik van Riel <riel@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

    Mel Gorman
     

09 Sep, 2015

1 commit

  • mempool_destroy() does not tolerate a NULL mempool_t pointer argument and
    performs a NULL-pointer dereference. This requires additional attention
    and effort from developers/reviewers and forces all mempool_destroy()
    callers to do a NULL check:

    if (pool)
        mempool_destroy(pool);

    or otherwise be invalid mempool_destroy() users.

    Tweak mempool_destroy() and NULL-check the pointer there.

    Proposed by Andrew Morton.
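
    A sketch of the tweak (free_pool() is the existing internal helper; the
    actual diff may differ in detail):

    void mempool_destroy(mempool_t *pool)
    {
        if (unlikely(!pool))    /* tolerate NULL, like kfree() and friends */
            return;
        free_pool(pool);
    }

    Callers can then drop their open-coded "if (pool)" guards.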

    Link: https://lkml.org/lkml/2015/6/8/583
    Signed-off-by: Sergey Senozhatsky
    Acked-by: David Rientjes
    Cc: Julia Lawall
    Cc: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sergey Senozhatsky
     

16 Apr, 2015

3 commits

  • Mempools keep allocated objects in reserve for situations when an ordinary
    allocation may not succeed. These objects shouldn't be accessed before
    they leave the pool.

    This patch poisons elements when they enter the pool and unpoisons them
    when they leave it. This lets KASAN detect use-after-free of mempool
    elements.
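
    Illustrative only (simplified from the add/remove helpers in mm/mempool.c;
    sanity checks and the slab poisoning from the next entry are omitted):

    static void add_element(mempool_t *pool, void *element)
    {
        kasan_poison_element(pool, element);    /* off-limits while pooled */
        pool->elements[pool->curr_nr++] = element;
    }

    static void *remove_element(mempool_t *pool)
    {
        void *element = pool->elements[--pool->curr_nr];

        kasan_unpoison_element(pool, element);  /* about to be handed out */
        return element;
    }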

    Signed-off-by: Andrey Ryabinin
    Tested-by: David Rientjes
    Cc: Catalin Marinas
    Cc: Dmitry Chernenkov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Mempools keep elements in a reserved pool for contexts in which allocation
    may not be possible. When an element is allocated from the reserved pool,
    its memory contents are the same as when it was added to the reserved pool.

    Because of this, elements lack any free poisoning to detect use-after-free
    errors.

    This patch adds free poisoning for elements backed by the slab allocator.
    This is possible because the mempool layer knows the object size of each
    element.

    When an element is added to the reserved pool, it is poisoned with
    POISON_FREE. When it is removed from the reserved pool, the contents are
    checked for POISON_FREE. If there is a mismatch, a warning is emitted to
    the kernel log.

    This is only effective for configs with CONFIG_DEBUG_SLAB or
    CONFIG_SLUB_DEBUG_ON.
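
    A rough sketch of the debug-only check described above (the real helpers
    in mm/mempool.c cover more element types and print more detail):

    #include <linux/poison.h>

    static void poison_slab_element(void *obj, size_t size)
    {
        memset(obj, POISON_FREE, size - 1);
        ((u8 *)obj)[size - 1] = POISON_END;
    }

    static void check_slab_element(void *obj, size_t size)
    {
        const u8 *p = obj;
        size_t i;

        for (i = 0; i < size - 1; i++)
            if (p[i] != POISON_FREE)
                pr_err("mempool: poison overwritten at offset %zu\n", i);
    }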

    [fabio.estevam@freescale.com: use '%zu' for printing 'size_t' variable]
    [arnd@arndb.de: add missing include]
    Signed-off-by: David Rientjes
    Cc: Dave Kleikamp
    Cc: Christoph Hellwig
    Cc: Sebastian Ott
    Cc: Mikulas Patocka
    Cc: Catalin Marinas
    Signed-off-by: Fabio Estevam
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • All occurrences of mempools based on slab caches with object constructors
    have been removed from the tree, so disallow creating them.

    We can only dereference mem->ctor in mm/mempool.c without including
    mm/slab.h in include/linux/mempool.h. So simply note the restriction,
    just like the comment restricting usage of __GFP_ZERO, and warn on kernels
    with CONFIG_DEBUG_VM() if such a mempool is allocated from.

    We don't want to incur this check on every element allocation, so use
    VM_BUG_ON().
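
    A sketch of the check (the "_checked" suffix is made up; the real test
    sits in the slab-backed allocation path):

    static void *mempool_alloc_slab_checked(gfp_t gfp_mask, void *pool_data)
    {
        struct kmem_cache *mem = pool_data;

        VM_BUG_ON(mem->ctor);   /* ctor-backed mempools are not supported */
        return kmem_cache_alloc(mem, gfp_mask);
    }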

    Signed-off-by: David Rientjes
    Cc: Dave Kleikamp
    Cc: Christoph Hellwig
    Cc: Sebastian Ott
    Cc: Mikulas Patocka
    Cc: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

15 Apr, 2015

1 commit

  • Allocating a large number of elements in atomic context could quickly
    deplete memory reserves, so just disallow atomic resizing entirely.

    Nothing currently uses mempool_resize() with anything other than
    GFP_KERNEL, so convert existing callers to drop the gfp_mask.
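
    The resulting interface change for callers, sketched as a before/after
    (pool and new_min_nr are placeholders):

    /* before */
    mempool_resize(pool, new_min_nr, GFP_KERNEL);

    /* after: the mask is gone, GFP_KERNEL is implied */
    mempool_resize(pool, new_min_nr);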

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: David Rientjes
    Acked-by: Steffen Maier [zfcp]
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: Steve French
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

05 Jun, 2014

1 commit

  • Memory obtained via mempool_alloc is not always zeroed even when
    called with __GFP_ZERO. Add a note and VM_BUG_ON statement to make
    that clear.
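
    The guard amounts to a one-line check in mempool_alloc() (sketched; the
    surrounding code is elided):

    /* mempool_alloc() cannot guarantee zeroed memory for recycled elements */
    VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);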

    [akpm@linux-foundation.org: use VM_WARN_ON_ONCE]
    Signed-off-by: Sebastian Ott
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sebastian Ott
     

08 Apr, 2014

1 commit

  • Add unlikely and likely hints to the function mempool_free(). It lays out
    the code in such a way that the common path executes straight through and
    saves a cache line.
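
    The shape of the hinted fast path, heavily simplified (locking, barriers
    and the waiter wake-up are omitted; see the 11 Jan, 2012 entries below):

    void mempool_free(void *element, mempool_t *pool)
    {
        if (unlikely(element == NULL))
            return;

        if (unlikely(pool->curr_nr < pool->min_nr)) {
            /* rare path: return the element to the reserved pool */
            return;
        }

        /* common case: straight back to the backing allocator */
        pool->free(element, pool->pool_data);
    }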

    Signed-off-by: Mikulas Patocka
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mikulas Patocka
     

25 Jun, 2012

1 commit

  • mempool_create_node() currently assumes %GFP_KERNEL. Its only user,
    blk_init_free_list(), is about to be updated to use other allocation
    flags - add @gfp_mask argument to the function.
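
    The resulting prototype (parameter names illustrative):

    mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
                                   mempool_free_t *free_fn, void *pool_data,
                                   gfp_t gfp_mask, int node_id);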

    Signed-off-by: Tejun Heo
    Cc: Andrew Morton
    Cc: Hugh Dickins
    Signed-off-by: Jens Axboe

    Tejun Heo
     

11 Jan, 2012

3 commits

  • mempool modifies gfp_mask so that the backing allocator doesn't try too
    hard or trigger a warning message when there's a pool to fall back on. In
    addition, for the first try, it removes __GFP_WAIT and __GFP_IO, so that it
    doesn't trigger reclaim or wait when the allocation can be fulfilled from
    the pool; however, when that allocation fails and the pool is empty too, it
    waits for the pool to be replenished before retrying.

    An allocation which could have succeeded after a bit of reclaim now has to
    wait on the reserved items, and it's not like mempool doesn't retry with
    __GFP_WAIT and __GFP_IO. It just does that *after* someone returns an
    element, pointlessly delaying things.

    Fix it by retrying immediately if the first round of allocation attempts
    w/o __GFP_WAIT and IO fails.
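
    A simplified sketch of the allocation order after this change (locking,
    barriers and the waitqueue handling are omitted):

    gfp_t gfp_temp = gfp_mask & ~(__GFP_WAIT | __GFP_IO);

    for (;;) {
        void *element = pool->alloc(gfp_temp, pool->pool_data);

        if (element)
            return element;
        if (pool->curr_nr)
            return remove_element(pool);
        if (gfp_temp != gfp_mask) {
            /* new: retry at full strength before going to sleep */
            gfp_temp = gfp_mask;
            continue;
        }
        break;  /* pool empty and reclaim failed: wait for a free */
    }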

    [akpm@linux-foundation.org: shorten the lock hold time]
    Signed-off-by: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tejun Heo
     
  • mempool_destroy() is a thin wrapper around free_pool(). The only thing it
    adds is BUG_ON(pool->curr_nr != pool->min_nr). The intention seems to be
    to enforce that all allocated elements are freed; however, the BUG_ON()
    can't achieve that (it doesn't know anything about objects above min_nr)
    and is incorrect, as mempool_resize() is allowed to leave the pool extended
    but not filled. Furthermore, panicking is way worse than any memory leak,
    and there are better debug tools to track memory leaks.

    Drop the BUG_ON() from mempool_destroy() and, as that leaves the function
    identical to free_pool(), replace the latter with it.

    Signed-off-by: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tejun Heo
     
  • mempool_alloc/free() use undocumented smp_mb()'s. The code is slightly
    broken and misleading.

    The lockless part is in mempool_free(). It wants to determine whether the
    item being freed needs to be returned to the pool or backing allocator
    without grabbing pool->lock. Two things need to be guaranteed for correct
    operation.

    1. pool->curr_nr + #allocated should never dip below pool->min_nr.
    2. Waiters shouldn't be left dangling.

    For #1, the only necessary condition is that the curr_nr visible at free is
    from after the allocation of the element being freed (details in the
    comment). For most cases this is true without any barrier, but there can
    be fringe cases where the allocated pointer is passed to the freeing task
    without going through memory barriers. To cover this case, a wmb is
    necessary before returning from allocation and an rmb is necessary before
    reading curr_nr. IOW,

    ALLOCATING TASK                        FREEING TASK

    update pool state after alloc;
    wmb();
    pass pointer to freeing task;
                                           read pointer;
                                           rmb();
                                           read pool state to free;

    The current code doesn't have wmb after pool update during allocation and
    may theoretically, on machines where unlock doesn't behave as full wmb,
    lead to pool depletion and deadlock. smp_wmb() needs to be added after
    successful allocation from reserved elements and smp_mb() in
    mempool_free() can be replaced with smp_rmb().

    For #2, the waiter needs to add itself to waitqueue and then check the
    wait condition and the waker needs to update the wait condition and then
    wake up. Because waitqueue operations always go through full spinlock
    synchronization, there is no need for extra memory barriers.

    Furthermore, mempool_alloc() is already holding pool->lock when it decides
    that it needs to wait. There is no reason to do unlock - add waitqueue -
    test condition again. It can simply add itself to waitqueue while holding
    pool->lock and then unlock and sleep.

    This patch adds smp_wmb() after successful allocation from the reserved
    pool, replaces smp_mb() in mempool_free() with smp_rmb() and extends
    pool->lock over the waitqueue addition. More importantly, it explains what
    the memory barriers do and how the lockless testing is correct.

    -v2: Oleg pointed out that unlock doesn't imply wmb. Added explicit
    smp_wmb() after successful allocation from reserved pool and
    updated comments accordingly.
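
    Sketched placement of the barriers (simplified fragments, not the full
    functions):

    /* allocation side, after taking an element from the reserved pool */
    element = remove_element(pool);
    spin_unlock_irqrestore(&pool->lock, flags);
    /* paired with smp_rmb() in mempool_free(): make the curr_nr update
     * visible before the element pointer can reach a freeing task */
    smp_wmb();
    return element;

    /* freeing side, in mempool_free() */
    smp_rmb();  /* paired with the smp_wmb() above before reading curr_nr */
    if (unlikely(pool->curr_nr < pool->min_nr)) {
        /* return the element to the reserved pool under pool->lock */
    }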

    Signed-off-by: Tejun Heo
    Cc: Oleg Nesterov
    Cc: "Paul E. McKenney"
    Cc: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tejun Heo
     

22 Sep, 2009

1 commit

  • The kzalloc mempool zeros items when they are initially allocated, but
    does not rezero used items that are returned to the pool. Consequently
    mempool_alloc()s may return non-zeroed memory.

    Since there are/were only two in-tree users for
    mempool_create_kzalloc_pool(), and 'fixing' this in a way that will
    re-zero used (but not new) items before first use is non-trivial, just
    remove it.

    Signed-off-by: Sage Weil
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sage Weil
     

10 Aug, 2009

1 commit

  • Clean up a double type-cast: size_t is typedef'd as unsigned long on
    64-bit systems and as unsigned int on 32-bit systems, so the intermediate
    cast to 'long' is pointless.

    Signed-off-by: Figo.zhang
    Signed-off-by: Linus Torvalds

    Figo.zhang
     

18 Jul, 2007

1 commit

  • kmalloc_node() and kmem_cache_alloc_node() were not available in a zeroing
    variant in the past. But with __GFP_ZERO it is possible now to do zeroing
    while allocating.

    Use __GFP_ZERO to remove the explicit clearing of memory via memset
    wherever we can.
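
    The typical conversion looks like this (illustrative; ptr, size and node
    are placeholders):

    -       ptr = kmalloc_node(size, GFP_KERNEL, node);
    -       memset(ptr, 0, size);
    +       ptr = kmalloc_node(size, GFP_KERNEL | __GFP_ZERO, node);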

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

17 Jul, 2007

1 commit

  • Christian Borntraeger points out that mempool_free() doesn't noop when
    handed NULL. This is inconsistent with the other free-like functions
    in the kernel.

    Signed-off-by: Rusty Russell
    Cc: Christian Borntraeger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rusty Russell
     

12 Feb, 2007

1 commit

  • A variety of (mostly) innocuous fixes to the embedded kernel-doc content in
    source files, including:

    * make multi-line initial descriptions single line
    * denote some function names, constants and structs as such
    * change erroneous opening '/*' to '/**' in a few places
    * reword some text for clarity

    Signed-off-by: Robert P. J. Day
    Cc: "Randy.Dunlap"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robert P. J. Day
     

02 Sep, 2006

1 commit

  • This patch works around a complex dm-related deadlock/livelock down in the
    mempool allocator.

    Alasdair said:

    Several dm targets suffer from this.

    Mempools are not yet used correctly everywhere in device-mapper: they can
    get shared when devices are stacked, and some targets share them across
    multiple instances. I made fixing this one of the prerequisites for this
    patch:

    md-dm-reduce-stack-usage-with-stacked-block-devices.patch

    which in some cases makes people more likely to hit the problem.

    There's been some progress on this recently with (unfinished) dm-crypt
    patches at:

    http://www.kernel.org/pub/linux/kernel/people/agk/patches/2.6/editing/
    (dm-crypt-move-io-to-workqueue.patch plus dependencies)

    and:

    I've no problems with a temporary workaround like that, but Milan Broz (a
    new Redhat developer in the Czech Republic) has started reviewing all the
    mempool usage in device-mapper, so I'm expecting we'll soon have a proper
    fix for this and the associated problems. [He's back from holiday at the
    start of next week.]

    For now, this sad-but-safe little patch will allow the machine to recover.

    [akpm@osdl.org: rewrote changelog]
    Cc: Alasdair G Kergon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Mironchik
     

27 Mar, 2006

5 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial:
    drivers/char/ftape/lowlevel/fdc-io.c: Correct a comment
    Kconfig help: MTD_JEDECPROBE already supports Intel
    Remove ugly debugging stuff
    do_mounts.c: Minor ROOT_DEV comment cleanup
    BUG_ON() Conversion in drivers/s390/block/dasd_devmap.c
    BUG_ON() Conversion in mm/mempool.c
    BUG_ON() Conversion in mm/memory.c
    BUG_ON() Conversion in kernel/fork.c
    BUG_ON() Conversion in ipc/sem.c
    BUG_ON() Conversion in fs/ext2/
    BUG_ON() Conversion in fs/hfs/
    BUG_ON() Conversion in fs/dcache.c
    BUG_ON() Conversion in fs/buffer.c
    BUG_ON() Conversion in input/serio/hp_sdc_mlc.c
    BUG_ON() Conversion in md/dm-table.c
    BUG_ON() Conversion in md/dm-path-selector.c
    BUG_ON() Conversion in drivers/isdn
    BUG_ON() Conversion in drivers/char
    BUG_ON() Conversion in drivers/mtd/

    Linus Torvalds
     
  • Add another allocator to the common mempool code: a kzalloc/kfree allocator

    This will be used by the next patch in the series to replace a mempool-backed
    kzalloc allocator. It is also very likely that there will be more users in
    the future.

    Signed-off-by: Matthew Dobson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Dobson
     
  • Add another allocator to the common mempool code: a kmalloc/kfree allocator

    This will be used by the next patch in the series to replace duplicate
    mempool-backed kmalloc allocators in several places in the kernel. It is also
    very likely that there will be more users in the future.

    Signed-off-by: Matthew Dobson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Dobson
     
  • Add another allocator to the common mempool code: a simple page allocator.

    This will be used by the next patch in the series to replace duplicate
    mempool-backed page allocators in 2 places in the kernel. It is also likely
    that there will be more users in the future.

    Signed-off-by: Matthew Dobson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Dobson
     
  • This changes if () BUG(); constructs to BUG_ON(), which is cleaner,
    contains unlikely() and can be better optimized away.
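
    The pattern of the conversion (generic example; the condition name is a
    placeholder):

    -       if (something_impossible)
    -               BUG();
    +       BUG_ON(something_impossible);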

    Signed-off-by: Eric Sesterhenn
    Signed-off-by: Adrian Bunk

    Eric Sesterhenn
     

09 Oct, 2005

1 commit

  • - added typedef unsigned int __nocast gfp_t;

    - replaced __nocast uses for gfp flags with gfp_t - it gives exactly
    the same warnings as far as sparse is concerned, doesn't change
    generated code (from gcc point of view we replaced unsigned int with
    typedef) and documents what's going on far better.

    Signed-off-by: Al Viro
    Signed-off-by: Linus Torvalds

    Al Viro
     

24 Jun, 2005

2 commits

  • Here's a small patch to improve the performance of mempool_alloc by only
    initializing the wait queue when we're about to wait.
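
    The idea, sketched (only the blocking slow path pays for the waitqueue
    setup; surrounding logic and the retry loop are omitted):

    if (!(gfp_mask & __GFP_WAIT))
        return NULL;                    /* caller cannot sleep */

    {
        DEFINE_WAIT(wait);

        prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
        spin_unlock_irqrestore(&pool->lock, flags);
        io_schedule();                  /* wait for an element to be freed */
        finish_wait(&pool->wait, &wait);
    }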

    Signed-off-by: Benjamin LaHaise
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Benjamin LaHaise
     
  • Patch to allocate the control structures for ide devices on the node of
    the device itself (for NUMA systems). The patch depends on the Slab API
    change patch by Manfred and me (in mm) and the pcidev_to_node patch that I
    posted today.

    Does some realignment too.

    Signed-off-by: Justin M. Forbes
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pravin Shelar
    Signed-off-by: Shobhit Dayal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
