26 Jul, 2011

1 commit

  • We presently define all kinds of notifiers in notifier.h. This is not
    necessary at all: different subsystems use different notifiers, and
    they are almost unrelated to each other.

    This also saves a good deal of build time. If I add a new netdevice
    event, only the network-related sources need to be recompiled; without
    this patch, all the sources get recompiled.

    I move the notifier events next to their subsystems' notifier
    registration functions, so that they can be found more easily.

    This patch:

    It is not necessary to share the same notifier.h.
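
    To illustrate the idea (simplified, not the actual patch hunks): the
    netdevice events end up next to the netdev notifier registration API
    instead of in the shared notifier.h.

        /* include/linux/netdevice.h (illustrative excerpt) */
        #define NETDEV_UP    0x0001
        #define NETDEV_DOWN  0x0002

        int register_netdevice_notifier(struct notifier_block *nb);
        int unregister_netdevice_notifier(struct notifier_block *nb);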

    Signed-off-by: WANG Cong
    Cc: David Miller
    Cc: "Rafael J. Wysocki"
    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Amerigo Wang
     

16 Oct, 2010

1 commit

  • Don't try to "optimize" rds_page_copy_user() by using kmap_atomic() and
    the unsafe atomic user mode accessor functions. It's actually slower
    than the straightforward code on any reasonable modern CPU.

    Back when the code was written (although probably not by the time it
    was actually merged), 32-bit x86 may have been the dominant
    architecture. And there kmap_atomic() can be a lot faster than kmap()
    (unless you have very good locality, in which case the virtual address
    caching by kmap() can overcome all the downsides).

    These days x86-64 may not yet be more populous, but it's getting there
    (and if you care about performance, it's definitely already there -
    you'd have upgraded your CPUs in the last few years). And on
    x86-64, the non-kmap_atomic() version is faster, simply because the code
    is simpler and doesn't have the "re-try page fault" case.

    People with old hardware are not likely to care about RDS anyway, and
    the optimization for the 32-bit case is simply buggy, since it doesn't
    verify the user addresses properly.
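
    A rough sketch of the two shapes (simplified; the helpers here are
    made up, not the exact kernel code):

        /* kmap_atomic() variant: the mapping must not sleep, so the
         * copy uses the in-atomic accessor, which skips the access_ok()
         * check (the reported bug) and needs a retry path whenever the
         * user page isn't resident. */
        static int frag_copy_atomic(struct page *page, unsigned long off,
                                    void __user *uptr, unsigned long n)
        {
                void *kaddr = kmap_atomic(page, KM_USER0);
                unsigned long left;

                left = __copy_to_user_inatomic(uptr, kaddr + off, n);
                kunmap_atomic(kaddr, KM_USER0);
                if (left)
                        return -EFAULT; /* real code faults in and retries */
                return 0;
        }

        /* Straightforward variant: kmap() may sleep, and copy_to_user()
         * both validates the user address and handles faults itself, so
         * there is no retry case and the code is simpler. */
        static int frag_copy_plain(struct page *page, unsigned long off,
                                   void __user *uptr, unsigned long n)
        {
                void *kaddr = kmap(page);
                int ret = copy_to_user(uptr, kaddr + off, n) ? -EFAULT : 0;

                kunmap(page);
                return ret;
        }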

    Reported-by: Dan Rosenberg
    Acked-by: Andrew Morton
    Cc: stable@kernel.org
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

09 Sep, 2010

2 commits

  • Instead of splitting up a page into RDS_FRAG_SIZE chunks
    ourselves, ask rds_page_remainder_alloc() to do it. While
    PAGE_SIZE > FRAG_SIZE is possible, on x86 it isn't, so having
    duplicate "carve up a page into buffers" code seems excessive.

    The other modification this spawns is the use of a single
    struct scatterlist in rds_page_frag instead of a bare page ptr.
    This causes verbosity to increase in some places, and decrease
    in others.

    Finally, I decided to unify the lifetimes and alloc/free of
    rds_page_frag and its page. This is a nice simplification in itself,
    but will be extra-nice once we come to adding cmason's recycling
    patch.
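
    As a sketch of the data-structure change (field names illustrative,
    not the exact RDS definitions):

        /* Before: a bare page pointer plus separate bookkeeping. */
        struct rds_page_frag_before {
                struct list_head f_item;
                struct page     *f_page;
                unsigned long    f_offset;
        };

        /* After: one scatterlist entry carries page, offset and length,
         * and can be handed to the DMA-mapping helpers directly. */
        struct rds_page_frag_after {
                struct list_head   f_item;
                struct scatterlist f_sg;
        };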

    Signed-off-by: Andy Grover

    Andy Grover
     
  • Favor "if (foo)" style over "if (foo != NULL)".

    Signed-off-by: Andy Grover

    Andy Grover
     

30 Mar, 2010

1 commit

  • …implicit slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    The percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming availability. As this
    conversion needs to touch a large number of source files, the
    following script is used as the basis of the conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following:

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include so that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree - or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints
    out an error message indicating which .h file needs to be added to
    the file.
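
    For illustration, a hypothetical before/after of one converted .c file
    (the file contents here are made up):

        /* Before: kmalloc() compiled only via the implicit include
         * chain module.h -> percpu.h -> slab.h. */
        #include <linux/module.h>

        /* After the sweep: the dependency is explicit, so removing
         * slab.h from percpu.h can no longer break this file's build. */
        #include <linux/module.h>
        #include <linux/slab.h>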

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, and for others it was more appropriate
    to add it to an implementation .h or embedding .c file. This step
    added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed,
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs, requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that they could be applied
    as a separate patch and serve as a bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers, which should be easily discoverable on most builds of the
    specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

16 Sep, 2009

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (46 commits)
    powerpc64: convert to dynamic percpu allocator
    sparc64: use embedding percpu first chunk allocator
    percpu: kill lpage first chunk allocator
    x86,percpu: use embedding for 64bit NUMA and page for 32bit NUMA
    percpu: update embedding first chunk allocator to handle sparse units
    percpu: use group information to allocate vmap areas sparsely
    vmalloc: implement pcpu_get_vm_areas()
    vmalloc: separate out insert_vmalloc_vm()
    percpu: add chunk->base_addr
    percpu: add pcpu_unit_offsets[]
    percpu: introduce pcpu_alloc_info and pcpu_group_info
    percpu: move pcpu_lpage_build_unit_map() and pcpul_lpage_dump_cfg() upward
    percpu: add @align to pcpu_fc_alloc_fn_t
    percpu: make @dyn_size mandatory for pcpu_setup_first_chunk()
    percpu: drop @static_size from first chunk allocators
    percpu: generalize first chunk allocator selection
    percpu: build first chunk allocators selectively
    percpu: rename 4k first chunk allocator to page
    percpu: improve boot messages
    percpu: fix pcpu_reclaim() locking
    ...

    Fix trivial conflict, as pointed out by Tejun Heo, in kernel/sched.c

    Linus Torvalds
     

24 Jun, 2009

1 commit

  • There are a few places where ____cacheline_aligned* is used with
    DEFINE_PER_CPU(). Use DEFINE_PER_CPU_SHARED_ALIGNED() instead.

    DEFINE_PER_CPU_SHARED_ALIGNED() applies alignment only on SMP. While
    all other converted places used the _in_smp variant or only get
    compiled for SMP, net/rds used the unconditional ____cacheline_aligned.
    I don't see any reason these data structures should be aligned on UP,
    so they are converted together.
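
    A minimal sketch of the conversion (the variable is made up, not the
    actual net/rds declaration):

        #include <linux/percpu.h>
        #include <linux/cache.h>

        /* Before: aligned unconditionally, padding even UP builds. */
        static DEFINE_PER_CPU(unsigned long, example_counter)
                ____cacheline_aligned;

        /* After: alignment only on SMP, where avoiding false sharing
         * between CPUs actually matters. */
        static DEFINE_PER_CPU_SHARED_ALIGNED(unsigned long, example_counter2);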

    Signed-off-by: Tejun Heo
    Cc: Mike Frysinger
    Cc: Tony Luck
    Cc: Andy Grover

    Tejun Heo
     

27 Feb, 2009

1 commit

  • Parsing of newly-received RDS message headers (including ext.
    headers) and copy-to/from-user routines.

    page.c implements a per-cpu page remainder cache, to reduce the
    number of allocations needed for small datagrams.
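
    A minimal sketch of the page-remainder idea (names and details are
    illustrative; the real rds_page_remainder_alloc() differs):

        struct page_remainder {
                struct page  *r_page;   /* partially used page, if any */
                unsigned long r_offset; /* first unused byte within it */
        };

        static DEFINE_PER_CPU(struct page_remainder, page_remainders);

        /* Point sg at 'bytes' of memory, carving from this CPU's
         * leftover page when it has room, else starting a fresh page. */
        static int remainder_alloc(struct scatterlist *sg,
                                   unsigned long bytes)
        {
                struct page_remainder *rem;
                int ret = 0;

                local_irq_disable();    /* serialize on this CPU */
                rem = &__get_cpu_var(page_remainders);

                if (!rem->r_page || rem->r_offset + bytes > PAGE_SIZE) {
                        if (rem->r_page)
                                put_page(rem->r_page); /* drop our ref */
                        rem->r_page = alloc_page(GFP_ATOMIC);
                        rem->r_offset = 0;
                        if (!rem->r_page) {
                                ret = -ENOMEM;
                                goto out;
                        }
                }

                get_page(rem->r_page);  /* reference for the caller */
                sg_set_page(sg, rem->r_page, bytes, rem->r_offset);
                rem->r_offset += bytes;
        out:
                local_irq_enable();
                return ret;
        }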

    Signed-off-by: Andy Grover
    Signed-off-by: David S. Miller

    Andy Grover