17 Jun, 2011

1 commit

  • Every slab allocator has its own alignment definition in include/linux/sl?b_def.h. Extract
    those and define a common set in include/linux/slab.h.

    SLOB: As noted, we sometimes need double word alignment on 32-bit systems. This gives all
    structures allocated by SLOB an unsigned long long alignment, like the other allocators.

    SLAB: If ARCH_SLAB_MINALIGN is not set, SLAB would set ARCH_SLAB_MINALIGN to
    zero, meaning no alignment at all. Give it the default unsigned long long alignment.
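
    A minimal sketch of the common fallback this describes, assuming the
    defaults land in include/linux/slab.h as stated above:

        /* If an architecture defines no minimum alignment, fall back to
         * unsigned long long alignment instead of zero (no alignment). */
        #ifndef ARCH_KMALLOC_MINALIGN
        #define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
        #endif

        #ifndef ARCH_SLAB_MINALIGN
        #define ARCH_SLAB_MINALIGN __alignof__(unsigned long long)
        #endif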

    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg


11 Aug, 2010

1 commit

  • Now each architecture has its own dma_get_cache_alignment implementation.

    dma_get_cache_alignment returns the minimum DMA alignment. Architectures
    define it as ARCH_KMALLOC_MINALIGN (it's used to make sure that a kmalloc'ed
    buffer is DMA-safe; the buffer doesn't share a cache line with others). So
    we can unify the dma_get_cache_alignment implementations.

    This patch:

    dma_get_cache_alignment() needs to know whether an architecture defines
    ARCH_KMALLOC_MINALIGN or not (that is, whether the architecture has a DMA
    alignment restriction). However, slab.h defines ARCH_KMALLOC_MINALIGN when
    an architecture doesn't define it.

    Let's rename ARCH_KMALLOC_MINALIGN to ARCH_DMA_MINALIGN.
    ARCH_KMALLOC_MINALIGN is used only in the internals of slab/slob/slub
    (except for crypto).
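
    A hedged sketch of the unified helper this rename enables; the shape
    follows the description above, with the generic fallback of byte
    alignment when no restriction exists:

        /* Only architectures with a DMA alignment restriction define
         * ARCH_DMA_MINALIGN, so one generic helper can serve them all. */
        static inline int dma_get_cache_alignment(void)
        {
        #ifdef ARCH_DMA_MINALIGN
                return ARCH_DMA_MINALIGN;
        #else
                return 1;       /* no restriction: byte alignment is fine */
        #endif
        }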

    Signed-off-by: FUJITA Tomonori
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


12 Jun, 2009

1 commit

  • As explained by Benjamin Herrenschmidt:

    Oh and btw, your patch alone doesn't fix powerpc, because it's missing
    a whole bunch of GFP_KERNEL's in the arch code... You would have to
    grep the entire kernel for things that check slab_is_available() and
    even then you'll be missing some.

    For example, slab_is_available() didn't always exist, and so in the
    early days on powerpc, we used a mem_init_done global that is set from
    mem_init() (not perfect, but it works in practice). And we still have code
    using that to do the test.

    Therefore, mask out __GFP_WAIT, __GFP_IO, and __GFP_FS in the slab allocators
    in early boot code to avoid enabling interrupts.
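
    A minimal sketch of that masking; the flag names follow the commit
    text, while slab_gfp_mask, boot_safe_gfp(), and mark_slab_boot_done()
    are illustrative names, not necessarily the symbols mainline uses:

        #include <linux/gfp.h>

        /* Start restricted; widen the mask once early boot is over. */
        static gfp_t slab_gfp_mask = ~(__GFP_WAIT | __GFP_IO | __GFP_FS);

        static gfp_t boot_safe_gfp(gfp_t flags)
        {
                return flags & slab_gfp_mask;   /* applied inside the allocators */
        }

        static void mark_slab_boot_done(void)
        {
                slab_gfp_mask = (gfp_t)~0;      /* allow everything from now on */
        }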

    Signed-off-by: Pekka Enberg


29 Dec, 2008

1 commit

  • This adds hooks for the SLOB allocator, to allow tracing with kmemtrace.

    We also convert some inline functions to __always_inline to make sure
    _RET_IP_, which expands to __builtin_return_address(0), always works
    as expected.
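
    Why the __always_inline conversion matters, as a hedged sketch:
    traced_alloc() and my_kmalloc() are stand-ins for the kmemtrace hooks,
    not real symbols; _RET_IP_ comes from linux/kernel.h:

        #include <linux/kernel.h>       /* _RET_IP_ */
        #include <linux/slab.h>

        /* Stand-in for a kmemtrace-instrumented allocation path. */
        static void *traced_alloc(size_t size, unsigned long caller)
        {
                (void)caller;           /* a real tracer would record this */
                return kmalloc(size, GFP_KERNEL);
        }

        /* _RET_IP_ expands to __builtin_return_address(0). If this wrapper
         * were emitted out of line, the recorded caller would be the wrapper
         * itself; __always_inline makes it the real allocation site. */
        static __always_inline void *my_kmalloc(size_t size)
        {
                return traced_alloc(size, _RET_IP_);
        }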

    Acked-by: Matt Mackall
    Signed-off-by: Eduard - Gabriel Munteanu
    Signed-off-by: Pekka Enberg


18 Jul, 2007

1 commit

  • With the slab allocation zeroing cleanups, Christoph stubbed in a generic
    kzalloc(), which was missed on SLOB. Follow the SLAB/SLUB changes and
    kill off the __kzalloc() wrapper that SLOB was using.
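
    For reference, the generic kzalloc() that makes the wrapper redundant
    boils down to kmalloc() with __GFP_ZERO set; a sketch of the shape in
    include/linux/slab.h:

        static inline void *kzalloc(size_t size, gfp_t flags)
        {
                return kmalloc(size, flags | __GFP_ZERO);
        }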

    Reported-by: Jan Engelhardt
    Signed-off-by: Paul Mundt
    Signed-off-by: Linus Torvalds


17 Jul, 2007

1 commit

  • This adds preliminary NUMA support to SLOB, primarily aimed at systems with
    small nodes (tested all the way down to a 128kB SRAM block), whether
    asymmetric or otherwise.

    We follow the same conventions as SLAB/SLUB, preferring current-node
    placement for new pages, or explicit placement if a node has been
    specified. Presently on UP NUMA this has the side effect of preferring
    node#0 allocations (since numa_node_id() == 0, though this could be
    reworked if we could hand off a pfn to determine node placement), so
    single-CPU NUMA systems will want to place smaller nodes further out in
    terms of node id. Once a page has been bound to a node (via an explicit
    node id at allocation time), we only do block allocations from partial
    free pages that have a matching node id in the page flags.
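
    A hedged sketch of that node check: slob_page_matches_node() is a
    hypothetical helper, while page_to_nid() is the real accessor for the
    node id kept in the page flags:

        #include <linux/mm.h>

        /* Once a page is bound to a node, only allocate blocks from it
         * when the caller's node id matches; -1 means "no preference". */
        static int slob_page_matches_node(struct page *page, int node)
        {
                return node == -1 || page_to_nid(page) == node;
        }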

    The current implementation does have some scalability problems, in that all
    partial free pages are tracked in the global freelist (with contention due
    to the single spinlock). However, these are things that are being reworked
    for SMP scalability first, while things like per-node freelists can easily
    be built on top of this sort of functionality once it's been added.

    More background can be found in:

    http://marc.info/?l=linux-mm&m=118117916022379&w=2
    http://marc.info/?l=linux-mm&m=118170446306199&w=2
    http://marc.info/?l=linux-mm&m=118187859420048&w=2

    and subsequent threads.

    Acked-by: Christoph Lameter
    Acked-by: Matt Mackall
    Signed-off-by: Paul Mundt
    Acked-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
