17 Jan, 2012

1 commit

  • Since a32618d2 (ARM: pgtable: switch to use pgtable-nopud.h), assabet
    warns as follows:

    arch/arm/mach-sa1100/assabet.c: In function 'map_sa1100_gpio_regs':
    arch/arm/mach-sa1100/assabet.c:264: warning: passing argument 1 of 'pmd_offset' from incompatible pointer type

    Fix this by adding the necessary pud_offset() macro.

    Signed-off-by: Russell King

    Russell King
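
    For the fix above, a minimal sketch of the page-table walk involved
    (illustrative kernel C, not the actual map_sa1100_gpio_regs() code):
    with pgtable-nopud.h the pud level is folded away, but pmd_offset()
    still expects a pud_t *, so the walk needs an explicit pud_offset()
    step.

    #include <linux/mm.h>
    #include <asm/pgtable.h>

    static pmd_t *example_pmd_lookup(struct mm_struct *mm, unsigned long addr)
    {
            pgd_t *pgd = pgd_offset(mm, addr);
            pud_t *pud = pud_offset(pgd, addr);  /* folded level, effectively the pgd */
            pmd_t *pmd = pmd_offset(pud, addr);  /* correct pointer type, no warning */

            return pmd;
    }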
     

13 Jan, 2012

39 commits

  • Several platforms are now using the memblock_alloc+memblock_free+
    memblock_remove trick to obtain memory which won't be mapped in the
    kernel's page tables. Most platforms do this (correctly) in the
    ->reserve callback. However, OMAP has started to call these functions
    outside of this callback, and this is extremely unsafe - memory will
    not be unmapped, and could well be given out after memblock is no
    longer responsible for its management.

    So, provide arm_memblock_steal() to perform this function, and ensure
    that it panic()s if it is used inappropriately. Convert everyone
    over, including OMAP.

    As a result, OMAP with OMAP4_ERRATA_I688 enabled will panic on boot
    with this change. Mark this option as BROKEN and make it depend on
    BROKEN. OMAP needs to be fixed, or 137d105d50 (ARM: OMAP4: Fix
    errata i688 with MPU interconnect barriers.) reverted until such
    time as it can be fixed correctly.

    Signed-off-by: Russell King

    Russell King
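
    For the entry above, a minimal sketch of the alloc/free/remove trick
    that arm_memblock_steal() centralizes, using the memblock API of that
    era; the function name and the missing "is this still allowed?" check
    are simplifications, not the actual arch/arm/mm/init.c code.

    #include <linux/init.h>
    #include <linux/memblock.h>

    static phys_addr_t __init example_steal(phys_addr_t size, phys_addr_t align)
    {
            phys_addr_t phys;

            phys = memblock_alloc(size, align);   /* reserve a chunk of RAM */
            memblock_free(phys, size);            /* drop the reservation...     */
            memblock_remove(phys, size);          /* ...and remove it from the
                                                   * memory map entirely, so the
                                                   * kernel never maps it        */
            return phys;
    }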
     
  • Russell King
     
  • This patch adds a check for the presence of the LPAE feature during the
    CPU initialisation. If not present, it reports an error when
    CONFIG_DEBUG_LL is enabled.

    Signed-off-by: Catalin Marinas
    Acked-by: Nicolas Pitre
    Signed-off-by: Russell King

    Catalin Marinas
     
    This change ensures the platform device name matches the nuc900-ac97
    platform driver name.

    Signed-off-by: Axel Lin
    Acked-by: Wan Zongshun
    Signed-off-by: Russell King

    Axel Lin
     
  • Andrew explains:

    - various misc stuff

    - Most of the rest of MM: memcg, threaded hugepages, others.

    - cpumask

    - kexec

    - kdump

    - some direct-io performance tweaking

    - radix-tree optimisations

    - new selftests code

    A note on this: often people will develop a new userspace-visible
    feature and will develop userspace code to exercise/test that
    feature. Then they merge the patch and the selftest code dies.
    Sometimes we paste it into the changelog. Sometimes the code gets
    thrown into Documentation/(!).

    This saddens me. So this patch creates a bare-bones framework which
    will henceforth allow me to ask people to include their test apps in
    the kernel tree so we can keep them alive. Then when people enhance
    or fix the feature, I can ask them to update the test app too.

    The infrastructure is terribly trivial at present - let's see how it
    evolves.

    - checkpoint/restart feature work.

    A note on this: this is a project by various mad Russians to perform
    c/r mainly from userspace, with various oddball helper code added
    into the kernel where the need is demonstrated.

    So rather than some large central lump of code, what we have is
    little bits and pieces popping up in various places which either
    expose something new or which permit something which is normally
    kernel-private to be modified.

    The overall project is an ongoing thing. I've judged that the size
    and scope of the thing means that we're more likely to be successful
    with it if we integrate the support into mainline piecemeal rather
    than allowing it all to develop out-of-tree.

    However I'm less confident than the developers that it will all
    eventually work! So what I'm asking them to do is to wrap each piece
    of new code inside CONFIG_CHECKPOINT_RESTORE. So if it all
    eventually comes to tears and the project as a whole fails, it should
    be a simple matter to go through and delete all trace of it.

    This lot pretty much wraps up the -rc1 merge for me.

    * akpm: (96 commits)
    unlzo: fix input buffer free
    ramoops: update parameters only after successful init
    ramoops: fix use of rounddown_pow_of_two()
    c/r: prctl: add PR_SET_MM codes to set up mm_struct entries
    c/r: procfs: add start_data, end_data, start_brk members to /proc/$pid/stat v4
    c/r: introduce CHECKPOINT_RESTORE symbol
    selftests: new x86 breakpoints selftest
    selftests: new very basic kernel selftests directory
    radix_tree: take radix_tree_path off stack
    radix_tree: remove radix_tree_indirect_to_ptr()
    dio: optimize cache misses in the submission path
    vfs: cache request_queue in struct block_device
    fs/direct-io.c: calculate fs_count correctly in get_more_blocks()
    drivers/parport/parport_pc.c: fix warnings
    panic: don't print redundant backtraces on oops
    sysctl: add the kernel.ns_last_pid control
    kdump: add udev events for memory online/offline
    include/linux/crash_dump.h needs elf.h
    kdump: fix crash_kexec()/smp_send_stop() race in panic()
    kdump: crashk_res init check for /sys/kernel/kexec_crash_size
    ...

    Linus Torvalds
     
  • * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (69 commits)
    pptp: Accept packet with seq zero
    RDS: Remove some unused iWARP code
    net: fsl: fec: handle 10Mbps speed in RMII mode
    drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c: add missing iounmap
    drivers/net/ethernet/tundra/tsi108_eth.c: add missing iounmap
    ksz884x: fix mtu for VLAN
    net_sched: sfq: add optional RED on top of SFQ
    dp83640: Fix NOHZ local_softirq_pending 08 warning
    gianfar: Fix invalid TX frames returned on error queue when time stamping
    gianfar: Fix missing sock reference when processing TX time stamps
    phylib: introduce mdiobus_alloc_size()
    net: decrement memcg jump label when limit, not usage, is changed
    net: reintroduce missing rcu_assign_pointer() calls
    inet_diag: Rename inet_diag_req_compat into inet_diag_req
    inet_diag: Rename inet_diag_req into inet_diag_req_v2
    bond_alb: don't disable softirq under bond_alb_xmit
    mac80211: fix rx->key NULL pointer dereference in promiscuous mode
    nl80211: fix old station flags compatibility
    mdio-octeon: use an unique MDIO bus name.
    mdio-gpio: use an unique MDIO bus name.
    ...

    Linus Torvalds
     
  • unlzo modifies the pointer to in_buf, so we have to free the original
    buffer, not the modified pointer.

    Signed-off-by: Sascha Hauer
    Cc: Lasse Collin
    Cc: Namhyung Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sascha Hauer
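
    A generic illustration of the bug pattern described above, in plain
    userspace C rather than the actual lib/decompress_unlzo.c code: when a
    callee advances the input pointer, the caller must keep and free the
    original.

    #include <stdlib.h>

    /* hypothetical decompressor that consumes input by advancing *in */
    static void consume(unsigned char **in, size_t n)
    {
            *in += n;
    }

    int main(void)
    {
            unsigned char *in_buf = malloc(4096);
            unsigned char *in_buf_save = in_buf;    /* remember the original */

            consume(&in_buf, 512);

            /* free(in_buf) here would pass a pointer malloc() never returned */
            free(in_buf_save);
            return 0;
    }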
     
  • If a platform device exists on the system, but ramoops fails to attach to
    it, the module parameters are overridden before ramoops can fall back and
    try to use passed module parameters. Move update to end of init routine.

    Signed-off-by: Kees Cook
    Cc: Marco Stornelli
    Cc: Sergiu Iordache
    Cc: Seiji Aguchi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kees Cook
     
  • The return value of rounddown_pow_of_two wasn't evaluated, so the
    operation was a no-op.

    Signed-off-by: Marco Stornelli
    Reported-by: Andrew Morton
    Reviewed-by: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Stornelli
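
    The no-op has the general shape below (illustrative fragment; the
    variable name is a placeholder, not the actual ramoops field):
    rounddown_pow_of_two() is a pure function, so discarding its return
    value changes nothing.

            /* rounddown_pow_of_two() comes from <linux/log2.h> and is pure */

            rounddown_pow_of_two(size);             /* buggy: result ignored, size unchanged */

            size = rounddown_pow_of_two(size);      /* fixed: keep the rounded value */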
     
    When we restore a task we need to set up its text, data and data heap
    sizes from userspace to the values the task had at checkpoint time. This
    patch adds auxiliary prctl codes for that.

    While most of them have a statistical nature (their values are involved
    in the calculation of the /proc/<pid>/statm output), the start_brk and
    brk values are used to compute the allowed size of program data segment
    expansion, which means that arbitrary changes to these values might be a
    dangerous operation. So to restrict access, the following requirements
    apply to the prctl calls:

    - The process has to have the CAP_SYS_ADMIN capability granted.
    - For all opcodes except start_brk/brk, an appropriate VMA area must
      exist and fit certain VMA flags, such as:
      - the code segment must be executable but not writable;
      - the data segment must not be executable.

    start_brk/brk values must not intersect with the data segment and must
    not exceed the RLIMIT_DATA resource limit.

    Still, the main guard is the CAP_SYS_ADMIN capability check.

    Note the kernel must be compiled with CONFIG_CHECKPOINT_RESTORE support,
    otherwise these prctl calls will return -EINVAL.

    [akpm@linux-foundation.org: cache current->mm in a local, saving 200 bytes text]
    Signed-off-by: Cyrill Gorcunov
    Reviewed-by: Kees Cook
    Cc: Tejun Heo
    Cc: Andrew Vagin
    Cc: Serge Hallyn
    Cc: Pavel Emelyanov
    Cc: Vasiliy Kulikov
    Cc: KAMEZAWA Hiroyuki
    Cc: Michael Kerrisk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
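
    A hedged userspace sketch of how a restore tool might use the new
    codes; the addresses are placeholders from an imaginary checkpoint
    image, and the #ifndef fallback values are taken from <linux/prctl.h>
    rather than quoted from this patch, so check them against your headers.

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_MM
    #define PR_SET_MM               35
    #define PR_SET_MM_START_BRK     6
    #define PR_SET_MM_BRK           7
    #endif

    int main(void)
    {
            unsigned long start_brk = 0x01a00000;   /* placeholder values */
            unsigned long brk_addr  = 0x01a42000;

            /* needs CAP_SYS_ADMIN and CONFIG_CHECKPOINT_RESTORE=y */
            if (prctl(PR_SET_MM, PR_SET_MM_START_BRK, start_brk, 0, 0))
                    perror("PR_SET_MM_START_BRK");
            if (prctl(PR_SET_MM, PR_SET_MM_BRK, brk_addr, 0, 0))
                    perror("PR_SET_MM_BRK");
            return 0;
    }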
     
    The mm->start_code/end_code, mm->start_data/end_data and mm->start_brk
    values are involved in the calculation of the program text/data segment
    sizes (which can be seen in /proc/<pid>/statm) and in the final address
    of the brk() call.

    For restore we need to know all these values. While
    mm->start_code/end_code are already present in /proc/$pid/stat, the
    remaining members are not, so this patch brings them in.

    The restore procedure of these members is addressed in another patch using
    prctl().

    Signed-off-by: Cyrill Gorcunov
    Acked-by: Serge Hallyn
    Reviewed-by: Kees Cook
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Alexey Dobriyan
    Cc: Tejun Heo
    Cc: Andrew Vagin
    Cc: Vasiliy Kulikov
    Cc: Alexey Dobriyan
    Cc: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
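
    A small userspace reader for the new fields. It assumes they are the
    last three fields of /proc/<pid>/stat, which is true as of this patch
    (later kernels append further fields, so a field-number based parse
    would be more robust there); taking the trailing tokens also sidesteps
    the space-containing comm field.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char buf[4096], *tok, *fields[64];
            int n = 0;
            FILE *f = fopen("/proc/self/stat", "r");

            if (!f || !fgets(buf, sizeof(buf), f))
                    return 1;
            fclose(f);

            for (tok = strtok(buf, " \n"); tok && n < 64; tok = strtok(NULL, " \n"))
                    fields[n++] = tok;

            if (n >= 3)     /* ... start_data end_data start_brk */
                    printf("start_data %s end_data %s start_brk %s\n",
                           fields[n - 3], fields[n - 2], fields[n - 1]);
            return 0;
    }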
     
    For checkpoint/restore we need auxiliary features compiled into the
    kernel, such as additional prctl codes, /proc/<pid>/map_files and so on,
    but at the same time these features are not mandatory for a regular
    kernel, so the CHECKPOINT_RESTORE config symbol provides a way to
    disable them all at once if one wishes to get rid of the additional
    functionality.

    Signed-off-by: Cyrill Gorcunov
    Cc: Tejun Heo
    Cc: Andrew Vagin
    Cc: Serge Hallyn
    Cc: Vasiliy Kulikov
    Reviewed-by: Kees Cook
    Cc: KAMEZAWA Hiroyuki
    Cc: Alexey Dobriyan
    Cc: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • Bring a first selftest in the relevant directory. This tests several
    combinations of breakpoints and watchpoints in x86, as well as icebp traps
    and int3 traps. Given the amount of breakpoint regressions we raised
    after we merged the generic breakpoint infrastructure, such selftest
    became necessary and can still serve today as a basis for new patches that
    touch the do_debug() path.

    Signed-off-by: Frederic Weisbecker
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: H. Peter Anvin
    Cc: Jason Wessel
    Cc: Will Deacon
    Cc: Michal Marek
    Cc: Sam Ravnborg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Frederic Weisbecker
     
  • Bring a new kernel selftests directory in tools/testing/selftests. To
    add a new selftest, create a subdirectory with the sources and a
    makefile that creates a target named "run_test" then add the
    subdirectory name to the TARGET var in tools/testing/selftests/Makefile
    and tools/testing/selftests/run_tests script.

    This can help centralize and maintain any useful selftest that
    developers usually tend to let rust in peace on some random server.

    Suggested-by: Andrew Morton
    Signed-off-by: Frederic Weisbecker
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Jason Wessel
    Cc: Will Deacon
    Cc: Steven Rostedt
    Cc: Michal Marek
    Cc: Sam Ravnborg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Frederic Weisbecker
     
  • Down, down in the deepest depths of GFP_NOIO page reclaim, we have
    shrink_page_list() calling __remove_mapping() calling __delete_from_
    swap_cache() or __delete_from_page_cache().

    You would not expect those to need much stack, but in fact they call
    radix_tree_delete(): which declares a 192-byte radix_tree_path array on
    its stack (to record the node,offsets it visits when descending, in case
    it needs to ascend to update them). And if any tag is still set [1],
    that calls radix_tree_tag_clear(), which declares a further such
    192-byte radix_tree_path array on the stack. (At least we have
    interrupts disabled here, so won't then be pushing registers too.)

    That was probably a good choice when most users were 32-bit (array of
    half the size), and adding fields to radix_tree_node would have bloated
    it unnecessarily. But nowadays many are 64-bit, and each
    radix_tree_node contains a struct rcu_head, which is only used when
    freeing; whereas the radix_tree_path info is only used for updating the
    tree (deleting, clearing tags or setting tags if tagged) when a lock
    must be held, of no interest when accessing the tree locklessly.

    So add a parent pointer to the radix_tree_node, in union with the
    rcu_head, and remove all uses of the radix_tree_path. There would be
    space in that union to save the offset when descending as before (we can
    argue that a lock must already be held to exclude other users), but
    recalculating it when ascending is both easy (a constant shift and a
    constant mask) and uncommon, so it seems better just to do that.

    Two little optimizations: no need to decrement height when descending,
    adjusting shift is enough; and once radix_tree_tag_if_tagged() has set
    tag on a node and its ancestors, it need not ascend from that node
    again.

    perf on the radix tree test harness reports radix_tree_insert() as 2%
    slower (now having to set parent), but radix_tree_delete() 24% faster.
    Surely that's an exaggeration from rtth's artificially low map shift 3,
    but forcing it back to 6 still rates radix_tree_delete() 8% faster.

    [1] Can a pagecache tag (dirty, writeback or towrite) actually still be
    set at the time of radix_tree_delete()? Perhaps not if the filesystem is
    well-behaved. But although I've not tracked any stack overflow down to
    this cause, I have observed a curious case in which a dirty tag is set
    and left set on tmpfs: page migration's migrate_page_copy() happens to
    use __set_page_dirty_nobuffers() to set PageDirty on the newpage, and
    that sets PAGECACHE_TAG_DIRTY as a side-effect - harmless to a
    filesystem which doesn't use tags, except for this stack depth issue.

    Signed-off-by: Hugh Dickins
    Cc: Jan Kara
    Cc: Dave Chinner
    Cc: Mel Gorman
    Cc: Nai Xia
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
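
    A schematic of the layout change described above; field names follow
    lib/radix-tree.c of that era, trimmed to the parts relevant here rather
    than quoted verbatim.

    struct radix_tree_node {
            unsigned int    height;         /* height from the bottom */
            unsigned int    count;          /* occupied slots */
            union {
                    struct radix_tree_node *parent; /* valid while the node is in the tree */
                    struct rcu_head rcu_head;       /* valid once the node is being freed */
            };
            void            *slots[RADIX_TREE_MAP_SIZE];
            unsigned long   tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
    };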
     
    It is not used anymore, so remove it.

    Signed-off-by: Xiao Guangrong
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Xiao Guangrong
     
  • Some investigation of a transaction processing workload showed that a
    major consumer of cycles in __blockdev_direct_IO is the cache miss while
    accessing the block size. This is because it has to walk the chain from
    block_dev to gendisk to queue.

    The block size is needed early on to check alignment and sizes. It's only
    done if the check for the inode block size fails. But the costly block
    device state is unconditionally fetched.

    - Reorganize the code to only fetch block dev state when actually
    needed.

    Then do a prefetch on the block dev early on in the direct IO path. This
    is worth it, because there is substantial code run before we actually
    touch the block dev now.

    - I also added some unlikelies to make it clear to the compiler that the
    block device fetch code is not normally executed.

    This gave a small, but measurable improvement on a large database
    benchmark (about 0.3%).

    [akpm@linux-foundation.org: coding-style fixes]
    [sfr@canb.auug.org.au: using prefetch requires including prefetch.h]
    Signed-off-by: Andi Kleen
    Cc: Jeff Moyer
    Cc: Jens Axboe
    Cc: Christoph Hellwig
    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andi Kleen
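
    Not the actual fs/direct-io.c code, just the two techniques named above
    in miniature: issue the prefetch well before the block device state is
    dereferenced, and mark the misaligned case as the cold path. The
    function and its arguments are made up for the example.

    #include <linux/blkdev.h>
    #include <linux/errno.h>
    #include <linux/prefetch.h>

    static int example_check(struct block_device *bdev, loff_t offset,
                             unsigned int blkbits)
    {
            unsigned long mask = (1UL << blkbits) - 1;

            prefetch(bdev->bd_disk);        /* warm the cache lines early */

            /* ... plenty of other submission-path work runs here ... */

            if (unlikely(offset & mask)) {  /* cold path: recheck against the
                                             * device's logical block size */
                    blkbits = blksize_bits(bdev_logical_block_size(bdev));
                    mask = (1UL << blkbits) - 1;
                    if (offset & mask)
                            return -EINVAL;
            }
            return 0;
    }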
     
    This makes it possible to get from the inode to the request_queue with one
    less cache miss. It is used in a follow-on optimization.

    The lifetime of the pointer is the same as that of the gendisk.

    This assumes that the queue will always stay the same in the gendisk while
    it's visible to block_devices. I think that's safe, correct?

    Signed-off-by: Andi Kleen
    Acked-by: Jeff Moyer
    Cc: Jens Axboe
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andi Kleen
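
    A schematic of the caching described above (heavily trimmed, with the
    cached field shown under the bd_queue name this patch uses): the
    request_queue pointer is copied from the gendisk into the block_device,
    so bdev_get_queue() needs one dereference instead of walking
    bdev -> bd_disk -> queue.

    struct block_device {
            /* ... */
            struct gendisk          *bd_disk;
            struct request_queue    *bd_queue;      /* cached bd_disk->queue */
            /* ... */
    };

    static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
    {
            return bdev->bd_queue;  /* one likely cache miss instead of two */
    }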
     
    In get_more_blocks(), we use dio_count to calculate fs_count and do some
    tricky things to increase fs_count if dio_count isn't aligned. But
    actually it still has some corner cases that can't be covered. See the
    following example:

    dio_write foo -s 1024 -w 4096

    (direct write 4096 bytes at offset 1024). The same goes if the offset
    isn't aligned to fs_blocksize.

    In this case, the old calculation counts fs_count to be 1, but actually we
    will write into 2 different blocks (if fs_blocksize=4096). The old code
    just works, since it will call get_block twice (and may have to allocate
    and create extents twice for filesystems like ext4). So we'd better call
    get_block just once with the proper fs_count.

    Signed-off-by: Tao Ma
    Cc: "Theodore Ts'o"
    Cc: Christoph Hellwig
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tao Ma
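
    Working the example above through: with offset 1024, a 4096-byte write
    and fs_blocksize 4096 (blkbits = 12), the I/O touches file blocks 0 and
    1, so fs_count must be 2. A block-span computation that gets this right,
    as a sketch of the idea rather than the literal fs/direct-io.c hunk:

            /* offset = 1024, dio_count = 4096, blkbits = 12 */
            unsigned long long fs_startblk = offset >> blkbits;
                    /* 1024 >> 12 = 0 */
            unsigned long long fs_endblk = (offset + dio_count - 1) >> blkbits;
                    /* 5119 >> 12 = 1 */
            unsigned long long fs_count = fs_endblk - fs_startblk + 1;
                    /* blocks 0..1 -> 2, so get_block() is asked for both */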
     
  • drivers/parport/parport_pc.c: In function '__check_irq':
    drivers/parport/parport_pc.c:3415: warning: return from incompatible pointer type
    drivers/parport/parport_pc.c: In function '__check_dma':
    drivers/parport/parport_pc.c:3417: warning: return from incompatible pointer type

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
    When an oops causes a panic and panic prints another backtrace, it's pretty
    common to have the original oops data scrolled away on an 80x50 screen.

    The second backtrace is quite redundant and not needed anyway.

    So don't print the panic backtrace when oops_in_progress is true.

    [akpm@linux-foundation.org: add comment]
    Signed-off-by: Andi Kleen
    Cc: Michael Holzheu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
  • The sysctl works on the current task's pid namespace, getting and setting
    its last_pid field.

    Writing is allowed for CAP_SYS_ADMIN-capable tasks thus making it possible
    to create a task with desired pid value. This ability is required badly
    for the checkpoint/restore in userspace.

    This approach suits all the parties for now.

    Signed-off-by: Pavel Emelyanov
    Acked-by: Tejun Heo
    Cc: Oleg Nesterov
    Cc: Cyrill Gorcunov
    Cc: "Eric W. Biederman"
    Cc: Serge Hallyn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Emelyanov
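
    A hedged userspace sketch of the intended use: write (desired pid - 1)
    to the new sysctl, then fork; the child gets the desired pid as long as
    nothing else allocates a pid in between. Requires CAP_SYS_ADMIN and is
    inherently racy without further coordination.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            pid_t want = 10000, got;
            FILE *f = fopen("/proc/sys/kernel/ns_last_pid", "w");

            if (!f)
                    return 1;
            fprintf(f, "%d", (int)want - 1);        /* next pid will be last_pid + 1 */
            fclose(f);

            got = fork();
            if (got == 0)
                    _exit(0);                       /* child */
            printf("wanted %d, got %d\n", (int)want, (int)got);
            waitpid(got, NULL, 0);
            return 0;
    }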
     
  • Currently no udev events for memory hotplug "online" and "offline" are
    generated:

    # udevadm monitor
    # echo offline > /sys/devices/system/memory/memory4/state
    ==> No event

    When kdump is loaded, kexec detects the current memory configuration and
    stores it in the pre-allocated ELF core header. Therefore, for kdump it
    is necessary to reload the kdump kernel with kexec when the memory
    configuration changes (e.g. for online/offline hotplug memory).

    In order to do this automatically, udev rules should be used. This kernel
    patch adds udev events for "online" and "offline". Together with this
    kernel patch, the following udev rules for online/offline have to be added
    to "/etc/udev/rules.d/98-kexec.rules":

    SUBSYSTEM=="memory", ACTION=="online", PROGRAM="/etc/init.d/kdump restart"
    SUBSYSTEM=="memory", ACTION=="offline", PROGRAM="/etc/init.d/kdump restart"

    [sfr@canb.auug.org.au: fixups for class to subsystem conversion]
    Signed-off-by: Michael Holzheu
    Cc: Heiko Carstens
    Cc: Vivek Goyal
    Cc: "Eric W. Biederman"
    Cc: Kay Sievers
    Cc: Dave Hansen
    Cc: Martin Schwidefsky
    Cc: Greg KH
    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Holzheu
     
  • Building an ARM target we get the following warnings:

    CC arch/arm/kernel/setup.o
    In file included from arch/arm/kernel/setup.c:39:
    arch/arm/include/asm/elf.h:102:1: warning: "vmcore_elf64_check_arch" redefined
    In file included from arch/arm/kernel/setup.c:24:
    include/linux/crash_dump.h:30:1: warning: this is the location of the previous definition

    Quoting Russell King:

    "linux/crash_dump.h makes no attempt to include asm/elf.h, but it depends
    on stuff in asm/elf.h to determine how stuff inside this file is defined
    at parse time.

    So, if asm/elf.h is included after linux/crash_dump.h or not at all, you
    get a different result from the situation where asm/elf.h is included
    before."

    So add the elf.h header to crash_dump.h to avoid this problem.

    The original discussion about this can be found at:
    http://www.spinics.net/lists/arm-kernel/msg154113.html

    Signed-off-by: Fabio Estevam
    Cc: Russell King
    Cc: [3.2.1]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fabio Estevam
     
  • When two CPUs call panic at the same time there is a possible race
    condition that can stop kdump. The first CPU calls crash_kexec() and the
    second CPU calls smp_send_stop() in panic() before crash_kexec() finished
    on the first CPU. So the second CPU stops the first CPU and therefore
    kdump fails:

    1st CPU:
    panic()->crash_kexec()->mutex_trylock(&kexec_mutex)-> do kdump

    2nd CPU:
    panic()->crash_kexec()->kexec_mutex already held by 1st CPU
    ->smp_send_stop()-> stop 1st CPU (stop kdump)

    This patch fixes the problem by introducing a spinlock in panic that
    allows only one CPU to process crash_kexec() and the subsequent panic
    code.

    All other CPUs call the weak function panic_smp_self_stop(), which stops
    the CPU itself. This function can be overridden by architecture code; for
    example, "tile" can use its lower-power "nap" instruction for that.

    Signed-off-by: Michael Holzheu
    Acked-by: Chris Metcalf
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Holzheu
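
    A condensed sketch of the scheme described above, not the exact
    kernel/panic.c diff: a trylock serializes entry into panic(), and the
    losing CPUs park themselves in a weak, architecture-overridable
    self-stop hook instead of stopping the CPU that is doing the dump.

    #include <linux/irqflags.h>
    #include <linux/spinlock.h>
    #include <asm/processor.h>

    /* architectures may override this, e.g. tile with its "nap" instruction */
    void __weak panic_smp_self_stop(void)
    {
            while (1)
                    cpu_relax();
    }

    static void example_panic_entry(void)
    {
            static DEFINE_SPINLOCK(panic_lock);

            local_irq_disable();

            /* only one CPU runs crash_kexec() and the rest of panic();
             * concurrent panickers stop themselves instead */
            if (!spin_trylock(&panic_lock))
                    panic_smp_self_stop();

            /* ... crash_kexec(), smp_send_stop(), ... */
    }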
     
  • Currently it is possible to set the crash_size via the sysfs
    /sys/kernel/kexec_crash_size even if no crash kernel memory has been
    defined with the "crashkernel" parameter. In this case "crashk_res" is
    not initialized and crashk_res.start = crashk_res.end = 0. Unfortunately
    resource_size(&crashk_res) returns 1 in this case. This breaks the s390
    implementation of crash_(un)map_reserved_pages().

    To fix the problem the correct "old_size" is now calculated in
    crash_shrink_memory(). "old_size" is set to "0" if crashk_res is not
    initialized. With this change crash_shrink_memory() will do nothing when
    "crashk_res" is not initialized. It will return "0" for "echo 0 >
    /sys/kernel/kexec_crash_size" and -EINVAL for "echo [not zero] >
    /sys/kernel/kexec_crash_size".

    In addition to that this patch also simplifies the "ret = -EINVAL" vs.
    "ret = 0" logic as suggested by Simon Horman.

    Signed-off-by: Michael Holzheu
    Reviewed-by: Dave Young
    Reviewed-by: WANG Cong
    Reviewed-by: Simon Horman
    Cc: Vivek Goyal
    Cc: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Holzheu
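
    For the arithmetic above: with crashk_res.start == crashk_res.end == 0,
    resource_size() returns end - start + 1 = 1, which is why the
    uninitialized case needs special treatment. A sketch of the check
    described, simplified rather than the literal kernel/kexec.c hunk:

            /* start == end == 0 means no crashkernel area was ever reserved;
             * resource_size() would misleadingly report 1 here */
            unsigned long old_size;

            if (crashk_res.start == 0 && crashk_res.end == 0)
                    old_size = 0;
            else
                    old_size = resource_size(&crashk_res);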
     
    When shrinking crashkernel memory using /sys/kernel/kexec_crash_size, no
    RAM resource is created for the newly added memory at the moment.

    Example:

    $ cat /proc/iomem
    00000000-bfffffff : System RAM
    00000000-005b7ac3 : Kernel code
    005b7ac4-009743bf : Kernel data
    009bb000-00a85c33 : Kernel bss
    c0000000-cfffffff : Crash kernel
    d0000000-ffffffff : System RAM

    $ echo 0 > /sys/kernel/kexec_crash_size
    $ cat /proc/iomem
    00000000-bfffffff : System RAM
    00000000-005b7ac3 : Kernel code
    005b7ac4-009743bf : Kernel data
    009bb000-00a85c33 : Kernel bss
    d0000000-ffffffff : System RAM

    Note that no "System RAM" resource is created for the freed
    c0000000-cfffffff range. With this patch the released crash kernel
    memory is registered as "System RAM" again:

    $ echo 0 > /sys/kernel/kexec_crash_size
    $ cat /proc/iomem
    00000000-bfffffff : System RAM
    00000000-005b7ac3 : Kernel code
    005b7ac4-009743bf : Kernel data
    009bb000-00a85c33 : Kernel bss
    c0000000-cfffffff : System RAM
    d0000000-ffffffff : System RAM

    Signed-off-by: Michael Holzheu
    Cc: Vivek Goyal
    Cc: "Eric W. Biederman"
    Cc: Heiko Carstens
    Cc: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Holzheu
     
    KMSG_DUMP_KEXEC is useless because we already save kernel messages inside
    /proc/vmcore, and it is unsafe to allow modules to do other stuff in a
    crash dump scenario.

    [akpm@linux-foundation.org: fix powerpc build]
    Signed-off-by: WANG Cong
    Reported-by: Vivek Goyal
    Acked-by: Vivek Goyal
    Acked-by: Jarod Wilson
    Cc: "Eric W. Biederman"
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    WANG Cong
     
  • node_to_cpumask() has been replaced by cpumask_of_node(), and wholly
    removed since commit 29c337a0 ("cpumask: remove obsolete node_to_cpumask
    now everyone uses cpumask_of_node").

    So update the comments for setup_node_to_cpumask_map().

    Signed-off-by: Wanlong Gao
    Acked-by: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wanlong Gao
     
  • If either of the vas or vms arrays are not properly kzalloced, then the
    code jumps to the err_free label.

    The err_free label runs a loop to check and free each of the array members
    of the vas and vms arrays which is not required for this situation as none
    of the array members have been allocated till this point.

    Eliminate the extra loop we have to go through by introducing a new label
    err_free2 and then jumping to it.

    [akpm@linux-foundation.org: remove now-unneeded tests]
    Signed-off-by: Kautuk Consul
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kautuk Consul
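
    A sketch of the label split described above in its generic shape, not
    the literal mm/vmalloc.c code: when the array allocations themselves
    fail, jump past the per-element cleanup loop, which only makes sense
    once elements exist.

            vas = kzalloc(sizeof(vas[0]) * nr_vms, GFP_KERNEL);
            vms = kzalloc(sizeof(vms[0]) * nr_vms, GFP_KERNEL);
            if (!vas || !vms)
                    goto err_free2;         /* nothing in the arrays yet */

            /* ... per-element allocations; failures go to err_free ... */

            /* (the success path returns before reaching the labels) */
    err_free:
            for (area = 0; area < nr_vms; area++) {
                    kfree(vas[area]);
                    kfree(vms[area]);
            }
    err_free2:
            kfree(vas);
            kfree(vms);
            return NULL;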
     
  • There is sometimes confusion between the global putback_lru_pages() in
    migrate.c and the static putback_lru_pages() in vmscan.c: rename the
    latter putback_inactive_pages(): it helps shrink_inactive_list() rather as
    move_active_pages_to_lru() helps shrink_active_list().

    Remove unused scan_control arg from putback_inactive_pages() and from
    update_isolated_counts(). Move clear_active_flags() inside
    update_isolated_counts(). Move NR_ISOLATED accounting up into
    shrink_inactive_list() itself, so the balance is clearer.

    Do the spin_lock_irq() before calling putback_inactive_pages() and
    spin_unlock_irq() after return from it, so that it better matches
    update_isolated_counts() and move_active_pages_to_lru().

    Signed-off-by: Hugh Dickins
    Cc: Johannes Weiner
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • The isolate_pages() level in vmscan.c offers little but indirection: merge
    it into isolate_lru_pages() as the compiler does, and use the names
    nr_to_scan and nr_scanned in each case.

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • del_page_from_lru() repeats del_page_from_lru_list(), also working out
    which LRU the page was on, clearing the relevant bits. Decouple those
    functions: remove del_page_from_lru() and add page_off_lru().

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Mostly we use "enum lru_list lru": change those few "l"s to "lru"s.

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • checkpatch rightly protests

    WARNING: EXPORT_SYMBOL(foo); should immediately follow its function/variable

    so fix the five offenders in mm/swap.c.

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • What's so special about ____pagevec_lru_add() that it needs four leading
    underscores? Nothing, it just helped to distinguish from
    __pagevec_lru_add() in 2.6.28 development. Cut two leading underscores.

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Replace pagevecs in putback_lru_pages() and move_active_pages_to_lru()
    by lists of pages_to_free: then apply Konstantin Khlebnikov's
    free_hot_cold_page_list() to them instead of pagevec_release().

    Which simplifies the flow (no need to drop and retake lock whenever
    pagevec fills up) and reduces stale addresses in stack backtraces
    (which often showed through the pagevecs); but more importantly,
    removes another 120 bytes from the deepest stacks in page reclaim.
    Although I've not recently seen an actual stack overflow here with
    a vanilla kernel, move_active_pages_to_lru() has often featured in
    deep backtraces.

    However, free_hot_cold_page_list() does not handle compound pages
    (nor need it: a Transparent HugePage would have been split by the
    time it reaches the call in shrink_page_list()), but it is possible
    for putback_lru_pages() or move_active_pages_to_lru() to be left
    holding the last reference on a THP, so must exclude the unlikely
    compound case before putting on pages_to_free.

    Remove pagevec_strip(), its work now done in move_active_pages_to_lru().
    The pagevec in scan_mapping_unevictable_pages() remains in mm/vmscan.c,
    but that is never on the reclaim path, and cannot be replaced by a list.

    Signed-off-by: Hugh Dickins
    Reviewed-by: KOSAKI Motohiro
    Reviewed-by: Konstantin Khlebnikov
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • If DEBUG_VM, mem_cgroup_print_bad_page() is called whenever bad_page()
    shows a "Bad page state" message, removes page from circulation, adds a
    taint and continues. This is at a very low level, often when a spinlock
    is held (sometimes when page table lock is held, for example).

    We want to recover from this badness, not make it worse: we must not
    kmalloc memory here, we must not do a cgroup path lookup via dubious
    pointers. No doubt that code was useful to debug a particular case at one
    time, and may be again, but take it out of the mainline kernel.

    Signed-off-by: Hugh Dickins
    Cc: Daisuke Nishimura
    Cc: KAMEZAWA Hiroyuki
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • This patch started off as a cleanup: __split_huge_page_refcounts() has to
    cope with two scenarios, when the hugepage being split is already on LRU,
    and when it is not; but why does it have to split that accounting across
    three different sites? Consolidate it in lru_add_page_tail(), handling
    evictable and unevictable alike, and use standard add_page_to_lru_list()
    when accounting is needed (when the head is not yet on LRU).

    But a recent regression in -next (I guess the removal of the
    PageCgroupAcctLRU test from mem_cgroup_split_huge_fixup()) now makes this
    a necessary fix: under load, the MEM_CGROUP_ZSTAT count was wrapping to a
    huge number, messing up reclaim calculations and causing a freeze at
    rmdir of cgroup.

    Add a VM_BUG_ON to mem_cgroup_lru_del_list() when we're about to wrap that
    count - this has not been the only such incident. Document that
    lru_add_page_tail() is for Transparent HugePages by #ifdef around it.

    Signed-off-by: Hugh Dickins
    Cc: Daisuke Nishimura
    Cc: KAMEZAWA Hiroyuki
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins