30 Aug, 2014

1 commit

  • To avoid potential format string expansion via module parameters, do not
    use the zpool type directly in request_module() without a format string.
    Additionally, to avoid arbitrary modules being loaded via zpool API
    (e.g. via the zswap_zpool_type module parameter) add a "zpool-" prefix
    to the requested module, as well as module aliases for the existing
    zpool types (zbud and zsmalloc), as sketched below.

    Signed-off-by: Kees Cook
    Cc: Seth Jennings
    Cc: Minchan Kim
    Cc: Nitin Gupta
    Acked-by: Dan Streetman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kees Cook
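
    A minimal sketch of the resulting pattern, not the literal diff; the
    zpool_request_backend() helper is hypothetical and shown only to
    illustrate the prefixed, format-string-based module request:

    #include <linux/kmod.h>
    #include <linux/module.h>

    /*
     * In the zpool core: request a backend module by type name. The
     * format string keeps the user-controlled type (e.g. the
     * zswap_zpool_type module parameter) from being expanded, and the
     * "zpool-" prefix limits what can be loaded to zpool backends.
     */
    static void zpool_request_backend(const char *type)
    {
            request_module("zpool-%s", type);  /* not request_module(type) */
    }

    /*
     * In each backend (here zbud as an example): advertise the prefixed
     * alias so the request above resolves to the right module.
     */
    MODULE_ALIAS("zpool-zbud");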
     

07 Aug, 2014

3 commits

  • Update zbud and zsmalloc to implement the zpool API; a registration
    sketch follows this entry.

    [fengguang.wu@intel.com: make functions static]
    Signed-off-by: Dan Streetman
    Tested-by: Seth Jennings
    Cc: Minchan Kim
    Cc: Nitin Gupta
    Cc: Weijie Yang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Streetman
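
    A hedged sketch of what implementing the zpool API in a backend
    involves, using zbud as the example; the callback bodies are
    placeholders, and the exact struct zpool_driver fields and signatures
    vary by kernel version:

    #include <linux/module.h>
    #include <linux/zpool.h>

    /* Placeholder callbacks standing in for the backend's real ones. */
    static void *zbud_zpool_create(gfp_t gfp, struct zpool_ops *ops)
    {
            return NULL;
    }

    static void zbud_zpool_destroy(void *pool)
    {
    }

    /*
     * Each backend describes itself to the zpool core with a driver
     * structure and registers it at module init time.
     */
    static struct zpool_driver zbud_zpool_driver = {
            .type    = "zbud",
            .owner   = THIS_MODULE,
            .create  = zbud_zpool_create,
            .destroy = zbud_zpool_destroy,
            /* .malloc, .free, .map, .unmap, .total_size, ... */
    };

    static int __init zbud_zpool_init(void)
    {
            zpool_register_driver(&zbud_zpool_driver);
            return 0;
    }
    module_init(zbud_zpool_init);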
     
  • Add zpool api.

    zpool provides an interface for memory storage, typically of compressed
    memory. Users can select which backend to use; currently the only
    implementations are zbud, a low-density implementation with up to two
    compressed pages per storage page, and zsmalloc, a higher-density
    implementation with multiple compressed pages per storage page. A usage
    sketch follows this entry.

    Signed-off-by: Dan Streetman
    Tested-by: Seth Jennings
    Cc: Minchan Kim
    Cc: Nitin Gupta
    Cc: Weijie Yang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Streetman
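
    A minimal, hedged sketch of how a zpool user (zswap being the intended
    one) might drive this API; signatures are approximate, have changed
    across kernel versions, and the error handling is abbreviated:

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/string.h>
    #include <linux/zpool.h>

    /* Allocate space in a backend-agnostic pool, copy data in, free it. */
    static int zpool_demo(char *type, const void *src, size_t len)
    {
            struct zpool *pool;
            unsigned long handle;
            void *dst;

            /* type is e.g. "zbud" or "zsmalloc" */
            pool = zpool_create_pool(type, GFP_KERNEL, NULL);
            if (!pool)
                    return -EINVAL;

            if (zpool_malloc(pool, len, GFP_KERNEL, &handle)) {
                    zpool_destroy_pool(pool);
                    return -ENOMEM;
            }

            /* Objects are accessed through a temporary mapping. */
            dst = zpool_map_handle(pool, handle, ZPOOL_MM_WO);
            memcpy(dst, src, len);
            zpool_unmap_handle(pool, handle);

            zpool_free(pool, handle);
            zpool_destroy_pool(pool);
            return 0;
    }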
     
  • Currently map_vm_area() takes (struct page *** pages) as its third
    argument, and after mapping it moves (*pages) to point to (*pages +
    nr_mapped_pages).

    This increment is useless to today's callers: they do not care about
    it, and in fact they work around it by passing a separate copy of the
    pointer to map_vm_area().

    The caller can always guarantee that all the pages can be mapped into
    the vm_area given as the first argument; it only cares about whether
    map_vm_area() fails or not.

    This patch cleans up the pointer movement in map_vm_area() and updates
    its callers accordingly. The resulting calling convention is sketched
    after this entry.

    Signed-off-by: WANG Chao
    Cc: Zhang Yanfei
    Acked-by: Greg Kroah-Hartman
    Cc: Minchan Kim
    Cc: Nitin Gupta
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    WANG Chao
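
    A hedged sketch of the interface change, with the prototypes
    paraphrased from the description above (the real declarations live in
    include/linux/vmalloc.h and may differ in detail); demo_vmap() is a
    hypothetical caller:

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /*
     * Before: the third argument was a struct page ***, and map_vm_area()
     * advanced *pages by the number of pages it mapped:
     *
     *     int map_vm_area(struct vm_struct *area, pgprot_t prot,
     *                     struct page ***pages);
     *
     * After: callers pass the page array directly and only check the
     * return value:
     *
     *     int map_vm_area(struct vm_struct *area, pgprot_t prot,
     *                     struct page **pages);
     */

    /* Typical caller after the cleanup: no throwaway copy of the pointer. */
    static void *demo_vmap(struct page **pages, unsigned int count)
    {
            struct vm_struct *area;

            area = get_vm_area((unsigned long)count << PAGE_SHIFT, VM_MAP);
            if (!area)
                    return NULL;

            if (map_vm_area(area, PAGE_KERNEL, pages)) {
                    free_vm_area(area);
                    return NULL;
            }
            return area->addr;
    }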
     

05 Jun, 2014

2 commits


20 Mar, 2014

1 commit

  • Subsystems that want to register CPU hotplug callbacks, as well as perform
    initialization for the CPUs that are already online, often do it as shown
    below:

    get_online_cpus();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    register_cpu_notifier(&foobar_cpu_notifier);

    put_online_cpus();

    This is wrong, since it is prone to ABBA deadlocks involving the
    cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
    with CPU hotplug operations).

    Instead, the correct and race-free way of performing the callback
    registration is:

    cpu_notifier_register_begin();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    /* Note the use of the double underscored version of the API */
    __register_cpu_notifier(&foobar_cpu_notifier);

    cpu_notifier_register_done();

    Fix the zsmalloc code by using this latter form of callback
    registration; a sketch of the resulting init path follows this entry.

    Cc: Nitin Gupta
    Cc: Ingo Molnar
    Signed-off-by: Srivatsa S. Bhat
    Acked-by: Minchan Kim
    Signed-off-by: Rafael J. Wysocki

    Srivatsa S. Bhat
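
    A hedged sketch of what zsmalloc's init path looks like with the
    race-free form; zs_cpu_notifier() stands in for zsmalloc's existing
    per-CPU callback, and its body here is a placeholder:

    #include <linux/cpu.h>
    #include <linux/init.h>
    #include <linux/notifier.h>

    /* Placeholder for zsmalloc's existing per-CPU setup callback. */
    static int zs_cpu_notifier(struct notifier_block *nb,
                               unsigned long action, void *pcpu)
    {
            return NOTIFY_OK;
    }

    static struct notifier_block zs_cpu_nb = {
            .notifier_call = zs_cpu_notifier,
    };

    static int __init zs_init(void)
    {
            int cpu;

            cpu_notifier_register_begin();

            for_each_online_cpu(cpu)
                    zs_cpu_notifier(&zs_cpu_nb, CPU_UP_PREPARE,
                                    (void *)(long)cpu);

            /*
             * Double-underscore variant: cpu_notifier_register_begin()
             * already holds the registration lock.
             */
            __register_cpu_notifier(&zs_cpu_nb);

            cpu_notifier_register_done();
            return 0;
    }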
     

31 Jan, 2014

2 commits

  • Add my copyright to the zsmalloc source code which I maintain.

    Signed-off-by: Minchan Kim
    Cc: Nitin Gupta
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
  • This patch moves zsmalloc under the mm directory.

    Before doing that, this description explains why we have needed a
    custom allocator.

    Zsmalloc is a new slab-based memory allocator for storing compressed
    pages. It is designed for low fragmentation and a high allocation
    success rate for large, but <= PAGE_SIZE, allocations. A sketch of its
    user-facing API follows this entry.

    Acked-by: Nitin Gupta
    Reviewed-by: Konrad Rzeszutek Wilk
    Cc: Bob Liu
    Cc: Greg Kroah-Hartman
    Cc: Hugh Dickins
    Cc: Jens Axboe
    Cc: Luigi Semenzato
    Cc: Mel Gorman
    Cc: Pekka Enberg
    Cc: Rik van Riel
    Cc: Seth Jennings
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
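
    As a hedged illustration of zsmalloc's user-facing interface around
    this time (prototypes approximate; they have changed since, e.g.
    zs_create_pool() later gained a name argument), zsmalloc_demo() is a
    hypothetical caller:

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/string.h>
    #include <linux/zsmalloc.h>

    /* Store one compressed buffer in a zsmalloc pool and free it again. */
    static int zsmalloc_demo(const void *src, size_t len)
    {
            struct zs_pool *pool;
            unsigned long handle;
            void *dst;

            pool = zs_create_pool(GFP_KERNEL);
            if (!pool)
                    return -ENOMEM;

            handle = zs_malloc(pool, len);          /* 0 on failure */
            if (!handle) {
                    zs_destroy_pool(pool);
                    return -ENOMEM;
            }

            /*
             * Objects may span two physical pages, so they are accessed
             * through a temporary mapping rather than a direct pointer.
             */
            dst = zs_map_object(pool, handle, ZS_MM_WO);
            memcpy(dst, src, len);
            zs_unmap_object(pool, handle);

            zs_free(pool, handle);
            zs_destroy_pool(pool);
            return 0;
    }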