29 Jul, 2008

1 commit

  • With KVM/GRU/XPMEM there isn't just the primary CPU MMU pointing to pages.
    There are secondary MMUs (with secondary sptes and secondary tlbs) too.
    sptes in the kvm case are shadow pagetables, but when I say spte in
    mmu-notifier context, I mean "secondary pte". In the GRU case there's no
    actual secondary pte and there's only a secondary tlb, because the GRU
    secondary MMU has no knowledge of sptes and every secondary tlb miss
    event in the MMU always generates a page fault that has to be resolved by
    the CPU (this is not the case with KVM, where a secondary tlb miss will
    walk sptes in hardware and will refill the secondary tlb transparently
    to software if the corresponding spte is present). The same way
    zap_page_range has to invalidate the pte before freeing the page, the spte
    (and secondary tlb) must also be invalidated before any page is freed and
    reused.

    Currently we take a page_count pin on every page mapped by sptes, but that
    means the pages can't be swapped while they're mapped by any spte, because
    they're treated as part of the guest working set. Furthermore a spte unmap
    event can immediately lead to a page being freed when the pin is released
    (thus requiring the same complex and relatively slow tlb_gather smp-safe
    logic we have in zap_page_range, which can be avoided completely if the
    spte unmap event doesn't require an unpin of the page previously mapped in
    the secondary MMU).

    The mmu notifiers allow kvm/GRU/XPMEM to attach to the tsk->mm and know
    when the VM is swapping or freeing or doing anything on the primary MMU,
    so that the secondary MMU code can drop sptes before the pages are freed,
    avoiding all page pinning and allowing 100% reliable swapping of guest
    physical address space. Furthermore it saves the code that tears down the
    secondary MMU mappings from implementing tlb_gather-like logic as in
    zap_page_range, which would require many IPIs to flush the other cpus'
    tlbs for each fixed number of sptes unmapped.

    To give an example: if what happens on the primary MMU is a protection
    downgrade (from writeable to wrprotect), the secondary MMU mappings will
    be invalidated, and the next secondary-mmu page fault will call
    get_user_pages; if it calls get_user_pages with write=1 it will trigger
    a do_wp_page, and it'll re-establish an updated spte or
    secondary-tlb-mapping on the copied page. Or it will set up a readonly
    spte or readonly tlb mapping if it's a guest read, i.e. if it calls
    get_user_pages with write=0. This is just an example.
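
    To illustrate that fault path, here is a minimal sketch of how a
    GRU-like secondary tlb miss handler could resolve a fault through the
    primary MMU. get_user_pages is the real API of this era; the handler
    itself and secondary_tlb_insert() are assumptions for the example:

        /* hypothetical secondary-MMU fault handler (sketch, not kernel code) */
        static int secondary_mmu_fault(struct mm_struct *mm,
                                       unsigned long address, int write)
        {
                struct page *page;
                int ret;

                down_read(&mm->mmap_sem);
                /* resolve through the primary MMU; write=1 may trigger
                 * do_wp_page and hand back the copied page */
                ret = get_user_pages(current, mm, address, 1, write,
                                     0 /* force */, &page, NULL);
                up_read(&mm->mmap_sem);
                if (ret != 1)
                        return -EFAULT;

                /* insert the translation into the secondary tlb (assumed
                 * hook); no long-term pin is kept, since mmu notifiers
                 * will report any later invalidation */
                secondary_tlb_insert(mm, address, page, write);
                put_page(page);
                return 0;
        }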

    This allows mapping any page pointed to by any pte (and in turn visible
    in the primary CPU MMU) into a secondary MMU (be it a pure tlb like GRU,
    or a full MMU with both sptes and secondary tlbs like the
    shadow-pagetable layer of kvm), or a software remote DMA like XPMEM
    (hence the need to schedule in XPMEM code to send the invalidate to the
    remote node, while there's no need to schedule in kvm/gru as it's an
    immediate event, like invalidating a primary-mmu pte).

    At least for KVM, without this patch it's impossible to swap guests
    reliably. And having this feature and removing the page pin allows
    several other optimizations that simplify life considerably.

    Dependencies:

    1) mm_take_all_locks() to register the mmu notifier when the whole VM
    isn't doing anything with "mm". This allows mmu notifier users to keep
    track of whether the VM is in the middle of the
    invalidate_range_begin/end critical section with an atomic counter
    increased in range_begin and decreased in range_end (a sketch of this
    pattern follows below). No secondary MMU page fault is allowed to map
    any spte or secondary tlb reference while the VM is in the middle of
    range_begin/end, as any page returned by get_user_pages in that critical
    section could later be freed immediately without any further
    ->invalidate_page notification (invalidate_range_begin/end works on
    ranges and ->invalidate_page isn't called immediately before freeing
    the page). To stop all page freeing and pagetable overwrites, the
    mmap_sem must be taken in write mode and all other anon_vma/i_mmap
    locks must be taken too.
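
    A minimal sketch of that counter pattern, using the notifier hook
    signatures added by this patch (the my_* driver-side names are
    assumptions):

        /* serialize secondary-MMU faults against range invalidates */
        static atomic_t invalidate_count = ATOMIC_INIT(0);

        static void my_invalidate_range_start(struct mmu_notifier *mn,
                                              struct mm_struct *mm,
                                              unsigned long start,
                                              unsigned long end)
        {
                atomic_inc(&invalidate_count);
                my_drop_sptes(mm, start, end);  /* assumed driver helper */
        }

        static void my_invalidate_range_end(struct mmu_notifier *mn,
                                            struct mm_struct *mm,
                                            unsigned long start,
                                            unsigned long end)
        {
                atomic_dec(&invalidate_count);
        }

        /* fault side: refuse to instantiate sptes while an invalidate
         * is in flight, since the page could be freed right after */
        static int my_fault_allowed(void)
        {
                return atomic_read(&invalidate_count) == 0;
        }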

    2) It'd be a waste to add branches in the VM if nobody could possibly
    run KVM/GRU/XPMEM on the kernel, so mmu notifiers will only be enabled
    if CONFIG_KVM=m/y. In the current kernel kvm won't yet take advantage
    of mmu notifiers, but this already allows compiling a KVM external
    module against a kernel with mmu notifiers enabled, and from the next
    pull from kvm.git we'll start using them. And GRU/XPMEM will also be
    able to continue development by enabling KVM=m in their config, until
    they submit all GRU/XPMEM GPLv2 code to the mainline kernel. Then they
    can also enable MMU_NOTIFIER in the same way KVM does it (even if
    KVM=n). This guarantees nobody selects MMU_NOTIFIER=y if KVM, GRU and
    XPMEM are all =n.

    The mmu_notifier_register call can fail because mm_take_all_locks may
    be interrupted by a signal and return -EINTR. Because
    mmu_notifier_register is used when a driver starts up, a failure can be
    gracefully handled. Here is an example of the change applied to kvm to
    register the mmu notifiers. Usually when a driver starts up, other
    allocations are required anyway and -ENOMEM failure paths already
    exist.

    struct kvm *kvm_arch_create_vm(void)
    {
            struct kvm *kvm = kzalloc(sizeof(struct kvm), GFP_KERNEL);
    +       int err;

            if (!kvm)
                    return ERR_PTR(-ENOMEM);

            INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);

    +       kvm->arch.mmu_notifier.ops = &kvm_mmu_notifier_ops;
    +       err = mmu_notifier_register(&kvm->arch.mmu_notifier, current->mm);
    +       if (err) {
    +               kfree(kvm);
    +               return ERR_PTR(err);
    +       }
    +
            return kvm;
    }

    mmu_notifier_unregister returns void and it's reliable.

    The patch also adds a few needed but missing includes that would
    prevent the kernel from compiling after these changes on non-x86 archs
    (x86 didn't need them by luck).

    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: fix mm/filemap_xip.c build]
    [akpm@linux-foundation.org: fix mm/mmu_notifier.c build]
    Signed-off-by: Andrea Arcangeli
    Signed-off-by: Nick Piggin
    Signed-off-by: Christoph Lameter
    Cc: Jack Steiner
    Cc: Robin Holt
    Cc: Nick Piggin
    Cc: Peter Zijlstra
    Cc: Kanoj Sarcar
    Cc: Roland Dreier
    Cc: Steve Wise
    Cc: Avi Kivity
    Cc: Hugh Dickins
    Cc: Rusty Russell
    Cc: Anthony Liguori
    Cc: Chris Wright
    Cc: Marcelo Tosatti
    Cc: Eric Dumazet
    Cc: "Paul E. McKenney"
    Cc: Izik Eidus
    Cc: Anthony Liguori
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     

28 Jul, 2008

1 commit

  • Put all i/o statistics in struct proc_io_accounting and use inline
    functions to initialize and increment the statistics, removing a lot of
    single-variable assignments.

    This also reduces the kernel size as following (with CONFIG_TASK_XACCT=y and
    CONFIG_TASK_IO_ACCOUNTING=y).

       text    data     bss     dec    hex filename
      11651       0       0   11651   2d83 kernel/exit.o.before
      11619       0       0   11619   2d63 kernel/exit.o.after
      10886     132     136   11154   2b92 kernel/fork.o.before
      10758     132     136   11026   2b12 kernel/fork.o.after

    3082029  807968 4818600 8708597 84e1f5 vmlinux.o.before
    3081869  807968 4818600 8708437 84e155 vmlinux.o.after
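
    A hedged sketch of the pattern (field set abbreviated; the names are
    illustrative of what the patch adds, not an exact copy):

        struct proc_io_accounting {
                u64 rchar, wchar;       /* bytes read/written via syscalls */
                u64 syscr, syscw;       /* read/write syscall counts */
        };

        /* one inline replaces many open-coded single-variable assignments */
        static inline void task_io_accounting_add(struct proc_io_accounting *dst,
                                                  struct proc_io_accounting *src)
        {
                dst->rchar += src->rchar;
                dst->wchar += src->wchar;
                dst->syscr += src->syscr;
                dst->syscw += src->syscw;
        }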

    Signed-off-by: Andrea Righi
    Acked-by: Oleg Nesterov
    Signed-off-by: Linus Torvalds

    Andrea Righi
     

27 Jul, 2008

4 commits

  • long overdue...

    Signed-off-by: Al Viro

    Al Viro
     
  • This moves the PTRACE_EVENT_VFORK_DONE tracing into a tracehook.h inline,
    tracehook_report_vfork_done(). The change has no effect, just clean-up.

    Signed-off-by: Roland McGrath
    Cc: Oleg Nesterov
    Reviewed-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roland McGrath
     
  • This moves all the ptrace initialization and tracing logic for task
    creation into tracehook.h and ptrace.h inlines. It reorganizes the code
    slightly, but should not change any behavior.

    There are four tracehook entry points, at each important stage of task
    creation. This keeps the interface from the core fork.c code fairly
    clean, while supporting the complex setup required for ptrace or something
    like it.

    Signed-off-by: Roland McGrath
    Cc: Oleg Nesterov
    Reviewed-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roland McGrath
     
  • The kmem cache passed to a constructor is only needed for constructors
    that are themselves multiplexers. Nobody uses this "feature", nor does
    anybody use the passed kmem cache in a non-trivial way, so pass only a
    pointer to the object.

    Non-trivial places are:
    arch/powerpc/mm/init_64.c
    arch/powerpc/mm/hugetlbpage.c

    This is flag day, yes.
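
    For reference, a before/after sketch of the constructor prototype this
    changes (my_ctor, my_cache and struct my_obj are hypothetical):

        /* before: the cache was passed to every constructor */
        void my_ctor(struct kmem_cache *cachep, void *obj);

        /* after this patch: only the object pointer */
        void my_ctor(void *obj);

        kmem_cache_create("my_cache", sizeof(struct my_obj), 0,
                          SLAB_HWCACHE_ALIGN, my_ctor);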

    Signed-off-by: Alexey Dobriyan
    Acked-by: Pekka Enberg
    Acked-by: Christoph Lameter
    Cc: Jon Tollefson
    Cc: Nick Piggin
    Cc: Matt Mackall
    [akpm@linux-foundation.org: fix arch/powerpc/mm/hugetlbpage.c]
    [akpm@linux-foundation.org: fix mm/slab.c]
    [akpm@linux-foundation.org: fix ubifs]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

26 Jul, 2008

5 commits

  • Report per-thread I/O statistics in /proc/pid/task/tid/io and aggregate
    parent I/O statistics in /proc/pid/io. This approach follows the same
    model used to account per-process and per-thread CPU times.

    As a practical application, this allows for example quickly finding the
    top I/O consumer when a process spawns many child threads that perform
    the actual I/O work, because the aggregated I/O statistics can always
    be found in /proc/pid/io.

    [ Oleg Nesterov points out that we should check that the task is still
    alive before we iterate over the threads, but also says that we can do
    that fixup on top of this later. - Linus ]
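
    For illustration, the aggregated file carries the usual
    task_io_accounting fields (the values here are made up):

        $ cat /proc/1234/io
        rchar: 323934931
        wchar: 323929600
        syscr: 632687
        syscw: 632675
        read_bytes: 0
        write_bytes: 323932160
        cancelled_write_bytes: 0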

    Acked-by: Balbir Singh
    Signed-off-by: Andrea Righi
    Cc: Matt Heaton
    Cc: Shailabh Nagar
    Acked-by-with-comments: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Righi
     
  • Move mm->core_waiters into "struct core_state" allocated on stack. This
    shrinks mm_struct a little bit and allows further changes.

    This patch mostly does s/core_waiters/core_state. The only essential
    change is that coredump_wait() must clear mm->core_state before
    returning.

    coredump_wait()'s path is uglified and .text grows by 30 bytes; this
    is fixed by the next patch.

    Signed-off-by: Oleg Nesterov
    Cc: Roland McGrath
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Kill PF_BORROWED_MM. Change use_mm/unuse_mm to not play with ->flags, and
    do s/PF_BORROWED_MM/PF_KTHREAD/ for a couple of other users.

    No functional changes yet. But this allows us to do further
    fixes/cleanups.

    oom_kill/ptrace/etc often check "p->mm != NULL" to filter out
    kthreads; this is wrong because of use_mm(). The problem with
    PF_BORROWED_MM is that we need task_lock() to avoid races. With this
    patch we can check PF_KTHREAD directly, or use a simple lockless helper:

    /* The result must not be dereferenced !!! */
    struct mm_struct *__get_task_mm(struct task_struct *tsk)
    {
            if (tsk->flags & PF_KTHREAD)
                    return NULL;
            return tsk->mm;
    }

    Note also ecard_task(). It runs with ->mm != NULL, but it's a kernel
    thread without PF_BORROWED_MM.

    Signed-off-by: Oleg Nesterov
    Cc: Roland McGrath
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • cgroup_clone creates a new cgroup with the pid of the task. This works
    correctly for unshare, but for clone cgroup_clone is called from
    copy_namespaces inside copy_process, which happens before the new pid is
    created. As a result, the new cgroup was created with current's pid.
    This patch:

    1. Moves the call inside copy_process to after the new pid
    is created
    2. Passes the struct pid into ns_cgroup_clone (as it is not
    yet attached to the task)
    3. Passes a name from ns_cgroup_clone() into cgroup_clone()
    so as to keep cgroup_clone() itself simpler
    4. Uses pid_vnr() to get the process id value, so that the
    pid used to name the new cgroup is always the pid as it
    would be known to the task which did the cloning or
    unsharing. I think that is the most intuitive thing to
    do. This way, task t1 does clone(CLONE_NEWPID) to get
    t2, which does clone(CLONE_NEWPID) to get t3, then the
    cgroup for t3 will be named for the pid by which t2 knows
    t3.

    (Thanks to Dan Smith for finding the main bug)

    Changelog:
    June 11: Incorporate Paul Menage's feedback: don't pass
    NULL to ns_cgroup_clone from unshare, and reduce
    patch size by using 'nodename' in cgroup_clone.
    June 10: Original version

    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Serge Hallyn
    Acked-by: Paul Menage
    Tested-by: Dan Smith
    Cc: Balbir Singh
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     
  • We duplicate the alloc/free_thread_info definitions on many platforms
    (the majority use __get_free_pages/free_pages). This patch introduces
    common definitions and removes the duplicates.
    __HAVE_ARCH_THREAD_INFO_ALLOCATOR is introduced for platforms that do
    something different.
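
    A hedged sketch of the common definitions (the exact bodies may differ,
    e.g. zeroing the stack under CONFIG_DEBUG_STACK_USAGE):

        #ifndef __HAVE_ARCH_THREAD_INFO_ALLOCATOR
        static inline struct thread_info *alloc_thread_info(struct task_struct *tsk)
        {
                return (struct thread_info *)
                        __get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
        }

        static inline void free_thread_info(struct thread_info *ti)
        {
                free_pages((unsigned long)ti, THREAD_SIZE_ORDER);
        }
        #endif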

    Signed-off-by: FUJITA Tomonori
    Acked-by: Russell King
    Cc: Pekka Enberg
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     

25 Jul, 2008

1 commit

  • This patch reserves huge pages at mmap() time for MAP_PRIVATE mappings in
    a similar manner to the reservations taken for MAP_SHARED mappings. The
    reserve count is accounted both globally and on a per-VMA basis for
    private mappings. This guarantees that a process that successfully calls
    mmap() will successfully fault all pages in the future unless fork() is
    called.

    The characteristics of private mappings of hugetlbfs files after this
    patch are:

    1. The process calling mmap() is guaranteed that all future faults will
    succeed until it calls fork().
    2. On fork(), the parent may die due to SIGKILL on writes to the private
    mapping if enough pages are not available for the COW. For reasonably
    reliable behaviour in the face of a small huge page pool, children of
    hugepage-aware processes should not reference the mappings; such as
    might occur when fork()ing to exec().
    3. On fork(), the child VMAs inherit no reserves. Reads on pages already
    faulted by the parent will succeed. Successful writes will depend on enough
    huge pages being free in the pool.
    4. Quotas of the hugetlbfs mount are checked at reserve time for the mapper
    and at fault time otherwise.

    Before this patch, any read or write in the child potentially needed
    page allocations that could later lead to the death of the parent. This
    applied to reads and writes of uninstantiated pages as well as COW.
    After the patch it is only a write to an instantiated page that causes
    problems, as the user-space illustration below shows.
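
    As a user-space illustration of guarantee (1) above (the hugetlbfs
    mount path and the 2MB page size are assumptions for the example):

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define LEN (2UL * 1024 * 1024)    /* one huge page (arch-dependent) */

        int main(void)
        {
                int fd = open("/mnt/huge/demo", O_CREAT | O_RDWR, 0600);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE, fd, 0);
                if (p == MAP_FAILED) {
                        perror("mmap");    /* reservation failed up front */
                        return 1;
                }
                memset(p, 0, LEN);         /* guaranteed to fault in: reserved */
                munmap(p, LEN);
                close(fd);
                return 0;
        }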

    Signed-off-by: Mel Gorman
    Acked-by: Adam Litke
    Cc: Andy Whitcroft
    Cc: William Lee Irwin III
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

17 Jul, 2008

1 commit

  • ptrace no longer fiddles with the children/sibling links, and the
    old ptrace_children list is gone. Now ptrace, whether of one's own
    children or another's via PTRACE_ATTACH, just uses the new ptraced
    list instead.

    There should be no user-visible difference that matters. The only
    change is the order in which do_wait() sees multiple stopped
    children and stopped ptrace attachees. Since wait_task_stopped()
    was changed earlier so it no longer reorders the children list, we
    already know this won't cause any new problems.
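
    For reference, a minimal sketch of walking one's ptrace attachees
    through the new list (the list field names are per this change;
    do_something() is hypothetical):

        struct task_struct *child;

        /* iterate this task's tracees via the new ptraced list */
        list_for_each_entry(child, &current->ptraced, ptrace_entry)
                do_something(child);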

    Signed-off-by: Roland McGrath

    Roland McGrath
     

15 Jul, 2008

2 commits

  • * 'core/locking' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    lockdep: fix kernel/fork.c warning
    lockdep: fix ftrace irq tracing false positive
    lockdep: remove duplicate definition of STATIC_LOCKDEP_MAP_INIT
    lockdep: add lock_class information to lock_chain and output it
    lockdep: add lock_class information to lock_chain and output it
    lockdep: output lock_class key instead of address for forward dependency output
    __mutex_lock_common: use signal_pending_state()
    mutex-debug: check mutex magic before owner

    Fixed up conflict in kernel/fork.c manually

    Linus Torvalds
     
  • * 'tracing/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (228 commits)
    ftrace: build fix for ftraced_suspend
    ftrace: separate out the function enabled variable
    ftrace: add ftrace_kill_atomic
    ftrace: use current CPU for function startup
    ftrace: start wakeup tracing after setting function tracer
    ftrace: check proper config for preempt type
    ftrace: trace schedule
    ftrace: define function trace nop
    ftrace: move sched_switch enable after markers
    ftrace: prevent ftrace modifications while being kprobe'd, v2
    fix "ftrace: store mcount address in rec->ip"
    mmiotrace broken in linux-next (8-bit writes only)
    ftrace: avoid modifying kprobe'd records
    ftrace: freeze kprobe'd records
    kprobes: enable clean usage of get_kprobe
    ftrace: store mcount address in rec->ip
    ftrace: build fix with gcc 4.3
    namespacecheck: fixes
    ftrace: fix "notrace" filtering priority
    ftrace: fix printout
    ...

    Linus Torvalds
     

14 Jul, 2008

1 commit

  • fix:

    [ 0.184011] ------------[ cut here ]------------
    [ 0.188011] WARNING: at kernel/fork.c:918 copy_process+0x1c0/0x1084()
    [ 0.192011] Pid: 0, comm: swapper Not tainted 2.6.26-tip-00351-g01d4a50-dirty #14521
    [ 0.196011] [] warn_on_slowpath+0x3c/0x60
    [ 0.200012] [] ? __alloc_pages_internal+0x92/0x36b
    [ 0.208012] [] ? __spin_lock_init+0x24/0x4a
    [ 0.212012] [] copy_process+0x1c0/0x1084
    [ 0.216013] [] do_fork+0xb8/0x1ad
    [ 0.220013] [] ? acpi_os_release_lock+0x8/0xa
    [ 0.228013] [] ? acpi_os_vprintf+0x20/0x24
    [ 0.232014] [] kernel_thread+0x75/0x7d
    [ 0.236014] [] ? kernel_init+0x0/0x24a
    [ 0.240014] [] ? kernel_init+0x0/0x24a
    [ 0.244014] [] ? kernel_thread_helper+0x0/0x10
    [ 0.252015] [] rest_init+0x14/0x50
    [ 0.256015] [] start_kernel+0x2b9/0x2c0
    [ 0.260015] [] __init_begin+0x4f/0x57
    [ 0.264016] =======================
    [ 0.268016] ---[ end trace 4eaa2a86a8e2da22 ]---
    [ 0.272016] enabled ExtINT on CPU#0

    which occurs if CONFIG_TRACE_IRQFLAGS=y, CONFIG_DEBUG_LOCKDEP=y,
    but CONFIG_PROVE_LOCKING is disabled.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

03 Jul, 2008

1 commit


24 May, 2008

1 commit

  • This patch adds latency tracing for critical timings
    (how long interrupts are disabled for).

    "irqsoff" is added to /debugfs/tracing/available_tracers

    Note:
    tracing_max_latency
    also holds the max latency for irqsoff (in usecs).
    (defaults to a large number, so one must start latency tracing)

    tracing_thresh
    threshold (in usecs) to always print out if irqs off
    is detected to be longer than stated here.
    If tracing_thresh is non-zero, then tracing_max_latency
    is ignored.

    Here's an example of a trace with ftrace_enabled = 0

    =======
    preemption latency trace v1.1.5 on 2.6.24-rc7
    --------------------------------------------------------------------
    latency: 100 us, #3/3, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
    => started at: _spin_lock_irqsave+0x2a/0xb7
    => ended at: _spin_unlock_irqrestore+0x32/0x5f

    _------=> CPU#
    / _-----=> irqs-off
    | / _----=> need-resched
    || / _---=> hardirq/softirq
    ||| / _--=> preempt-depth
    |||| /
    ||||| delay
    cmd pid ||||| time | caller
    \ / ||||| \ | /
    swapper-0 1d.s3 0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
    swapper-0 1d.s3 100us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
    swapper-0 1d.s3 100us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)

    vim:ft=help
    =======

    And this is a trace with ftrace_enabled == 1

    =======
    preemption latency trace v1.1.5 on 2.6.24-rc7
    --------------------------------------------------------------------
    latency: 102 us, #12/12, CPU#1 | (M:rt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
    => started at: _spin_lock_irqsave+0x2a/0xb7
    => ended at: _spin_unlock_irqrestore+0x32/0x5f

    _------=> CPU#
    / _-----=> irqs-off
    | / _----=> need-resched
    || / _---=> hardirq/softirq
    ||| / _--=> preempt-depth
    |||| /
    ||||| delay
    cmd pid ||||| time | caller
    \ / ||||| \ | /
    swapper-0 1dNs3 0us+: _spin_lock_irqsave+0x2a/0xb7 (e1000_update_stats+0x47/0x64c [e1000])
    swapper-0 1dNs3 46us : e1000_read_phy_reg+0x16/0x225 [e1000] (e1000_update_stats+0x5e2/0x64c [e1000])
    swapper-0 1dNs3 46us : e1000_swfw_sync_acquire+0x10/0x99 [e1000] (e1000_read_phy_reg+0x49/0x225 [e1000])
    swapper-0 1dNs3 46us : e1000_get_hw_eeprom_semaphore+0x12/0xa6 [e1000] (e1000_swfw_sync_acquire+0x36/0x99 [e1000])
    swapper-0 1dNs3 47us : __const_udelay+0x9/0x47 (e1000_read_phy_reg+0x116/0x225 [e1000])
    swapper-0 1dNs3 47us+: __delay+0x9/0x50 (__const_udelay+0x45/0x47)
    swapper-0 1dNs3 97us : preempt_schedule+0xc/0x84 (__delay+0x4e/0x50)
    swapper-0 1dNs3 98us : e1000_swfw_sync_release+0xc/0x55 [e1000] (e1000_read_phy_reg+0x211/0x225 [e1000])
    swapper-0 1dNs3 99us+: e1000_put_hw_eeprom_semaphore+0x9/0x35 [e1000] (e1000_swfw_sync_release+0x50/0x55 [e1000])
    swapper-0 1dNs3 101us : _spin_unlock_irqrestore+0xe/0x5f (e1000_update_stats+0x641/0x64c [e1000])
    swapper-0 1dNs3 102us : _spin_unlock_irqrestore+0x32/0x5f (e1000_update_stats+0x641/0x64c [e1000])
    swapper-0 1dNs3 102us : trace_hardirqs_on_caller+0x75/0x89 (_spin_unlock_irqrestore+0x32/0x5f)

    vim:ft=help
    =======

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Steven Rostedt
     

17 May, 2008

1 commit


02 May, 2008

1 commit


30 Apr, 2008

1 commit

  • Suggested by Roland McGrath.

    Initialize signal->curr_target in copy_signal(). This way ->curr_target
    is never NULL, and we can kill the check in __group_complete_signal's
    hot path.

    Signed-off-by: Oleg Nesterov
    Cc: Roland McGrath
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     

29 Apr, 2008

4 commits

  • The kernel implements readlink of /proc/pid/exe by getting the file from
    the first executable VMA. Then the path to the file is reconstructed and
    reported as the result.

    Because of the VMA walk the code is slightly different on nommu systems.
    This patch avoids separate /proc/pid/exe code on nommu systems. Instead of
    walking the VMAs to find the first executable file-backed VMA we store a
    reference to the exec'd file in the mm_struct.

    That reference would prevent the filesystem holding the executable file
    from being unmounted even after the VMAs are unmapped. So we track the
    number of VM_EXECUTABLE VMAs and drop the new reference when the last
    one is unmapped. This avoids pinning the mounted filesystem.
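
    A hedged sketch of the resulting bookkeeping, assuming the fields are
    named exe_file and num_exe_file_vmas (the comments are mine):

        struct mm_struct {
                ...
                /* store a ref to the file that /proc/<pid>/exe points to */
                struct file *exe_file;
                /* count of live VM_EXECUTABLE file-backed vmas; the
                 * exe_file reference is dropped when this hits zero */
                unsigned long num_exe_file_vmas;
        };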

    [akpm@linux-foundation.org: improve comments]
    [yamamoto@valinux.co.jp: fix dup_mmap]
    Signed-off-by: Matt Helsley
    Cc: Oleg Nesterov
    Cc: David Howells
    Cc:"Eric W. Biederman"
    Cc: Christoph Hellwig
    Cc: Al Viro
    Cc: Hugh Dickins
    Signed-off-by: YAMAMOTO Takashi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matt Helsley
     
  • sys_unshare(CLONE_NEWIPC) doesn't handle the undo lists properly; this
    can cause kernel memory corruption. CLONE_NEWIPC must detach from the
    existing undo lists.

    Fix, part 2: perform an implicit CLONE_SYSVSEM in CLONE_NEWIPC. Since
    CLONE_NEWIPC creates a new IPC namespace, the task cannot access the
    existing semaphore arrays after the unshare syscall. Thus the task
    can/must detach from the existing undo list entries, too.

    This fixes the kernel corruption, because it makes it impossible for
    undo records from two different namespaces to end up in
    sysvsem.undo_list.

    Signed-off-by: Manfred Spraul
    Signed-off-by: Serge E. Hallyn
    Acked-by: "Eric W. Biederman"
    Cc: Pavel Emelyanov
    Cc: Michael Kerrisk
    Cc: Pierre Peiffer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Manfred Spraul
     
  • sys_unshare(CLONE_NEWIPC) doesn't handle the undo lists properly; this
    can cause kernel memory corruption. CLONE_NEWIPC must detach from the
    existing undo lists.

    Fix, part 1: add support for sys_unshare(CLONE_SYSVSEM)

    The original reason to not support it was the potential (inevitable?)
    confusion due to the fact that sys_unshare(CLONE_SYSVSEM) has the
    inverse meaning of clone(CLONE_SYSVSEM).

    Our two most reasonable options then appear to be (1) fully support
    CLONE_SYSVSEM, or (2) continue to refuse explicit CLONE_SYSVSEM,
    but always do it anyway on unshare(CLONE_SYSVSEM). This patch does
    (1).

    Changelog:
    Apr 16: SEH: switch to Manfred's alternative patch which
    removes the unshare_semundo() function which
    always refused CLONE_SYSVSEM.

    Signed-off-by: Manfred Spraul
    Signed-off-by: Serge E. Hallyn
    Acked-by: "Eric W. Biederman"
    Cc: Pavel Emelyanov
    Cc: Michael Kerrisk
    Cc: Pierre Peiffer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Manfred Spraul
     
  • Remove the mem_cgroup member from mm_struct and instead add an owner.

    This approach was suggested by Paul Menage. The advantage of this
    approach is that, once the mm->owner is known, the cgroup can be
    determined using the subsystem id. It also allows several control
    groups that are virtually grouped by mm_struct to exist independently
    of the memory controller, i.e., without adding a mem_cgroup pointer for
    each controller to mm_struct.

    A new config option CONFIG_MM_OWNER is added and the memory resource
    controller selects this config option.

    This patch also adds cgroup callbacks to notify subsystems when mm->owner
    changes. The mm_cgroup_changed callback is called with the task_lock() of
    the new task held and is called just prior to changing the mm->owner.

    I am indebted to Paul Menage for the several reviews of this patchset and
    helping me make it lighter and simpler.

    This patch was tested on a powerpc box, it was compiled with both the
    MM_OWNER config turned on and off.

    After the thread group leader exits, it's moved to init_css_set by
    cgroup_exit(), thus all future charges from running threads would be
    redirected to the init_css_set's subsystem.
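
    A minimal sketch of the resulting layout (the comment is mine):

        struct mm_struct {
                ...
        #ifdef CONFIG_MM_OWNER
                /* the canonical task using this mm; the cgroup (and thus
                 * any mem_cgroup) is derived from the owner's css_set, so
                 * no per-controller pointer is needed in mm_struct */
                struct task_struct *owner;
        #endif
        };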

    Signed-off-by: Balbir Singh
    Cc: Pavel Emelianov
    Cc: Hugh Dickins
    Cc: Sudhir Kumar
    Cc: YAMAMOTO Takashi
    Cc: Hirokazu Takahashi
    Cc: David Rientjes ,
    Cc: Balbir Singh
    Acked-by: KAMEZAWA Hiroyuki
    Acked-by: Pekka Enberg
    Reviewed-by: Paul Menage
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Balbir Singh
     

28 Apr, 2008

2 commits

  • This patch renames mpol_copy() to mpol_dup() because, well, that's what it
    does. Like, e.g., strdup() for strings, mpol_dup() takes a pointer to an
    existing mempolicy, allocates a new one and copies the contents.

    In a later patch, I want to use the name mpol_copy() to copy the contents from
    one mempolicy to another like, e.g., strcpy() does for strings.

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • This is a change that was requested some time ago by Mel Gorman. Makes sense
    to me, so here it is.

    Note: I retain the name "mpol_free_shared_policy()" because it actually does
    free the shared_policy, which is NOT a reference counted object. However, ...

    The mempolicy object[s] referenced by the shared_policy are reference counted,
    so mpol_put() is used to release the reference held by the shared_policy. The
    mempolicy might not be freed at this time, because some task attached to the
    shared object associated with the shared policy may be in the process of
    allocating a page based on the mempolicy. In that case, the task performing
    the allocation will hold a reference on the mempolicy, obtained via
    mpol_shared_policy_lookup(). The mempolicy will be freed when all tasks
    holding such a reference have called mpol_put() for the mempolicy.
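
    A hedged sketch of the put-side pattern being described (illustrative
    only, not the exact kernel helper; policy_cache is assumed):

        /* drop one reference; free only when the last holder (the
         * shared_policy teardown or an in-flight allocation that did
         * mpol_shared_policy_lookup()) lets go */
        void mpol_put(struct mempolicy *pol)
        {
                if (pol && atomic_dec_and_test(&pol->refcnt))
                        kmem_cache_free(policy_cache, pol);
        }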

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     

27 Apr, 2008

2 commits

  • The SIE instruction on s390 uses the 2nd half of the page table page to
    virtualize the storage keys of a guest. This patch offers the
    s390_enable_sie function, which reorganizes the page tables of a
    single-threaded process to reserve space in the page table:
    s390_enable_sie makes sure that the process is single-threaded and then
    uses dup_mm to create a new mm with reorganized page tables. The old mm
    is freed and the process now has a page status extended field after
    every page table.

    Code that wants to exploit pgstes should select CONFIG_PGSTE.

    This patch has a small common code impact, namely making dup_mm
    non-static.

    Edit (Carsten): I've modified Martin's patch, following Jeremy
    Fitzhardinge's review feedback. Now we do have the prototype for dup_mm
    in include/linux/sched.h. Following Martin's suggestion,
    s390_enable_sie() now calls task_lock() to prevent races against ptrace
    modification of mm_users.
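
    A simplified sketch of that flow (not the exact s390 code; error
    handling and the pgste details are elided):

        /* swap in a fresh mm whose page tables carry pgste space */
        int s390_enable_sie(void)
        {
                struct task_struct *tsk = current;
                struct mm_struct *new_mm, *old_mm;

                task_lock(tsk);         /* guard mm_users against ptrace */
                if (atomic_read(&tsk->mm->mm_users) > 1) {
                        task_unlock(tsk);
                        return -EINVAL; /* must be single threaded */
                }
                new_mm = dup_mm(tsk);   /* rebuilt with pgste space */
                if (!new_mm) {
                        task_unlock(tsk);
                        return -ENOMEM;
                }
                old_mm = tsk->mm;
                tsk->mm = tsk->active_mm = new_mm;
                task_unlock(tsk);
                mmput(old_mm);          /* free the old address space */
                return 0;
        }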

    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Carsten Otte
    Acked-by: Andrew Morton
    Signed-off-by: Avi Kivity

    Carsten Otte
     
  • Arrgghhh...

    Sorry about that, I'd been sure I'd folded that one, but it actually got
    lost. Please apply - that breaks execve().

    Signed-off-by: Al Viro
    Tested-by: Ingo Molnar
    Signed-off-by: Linus Torvalds

    Al Viro
     

25 Apr, 2008

3 commits

  • * let unshare_files() give caller the displaced files_struct
    * don't bother with grabbing reference only to drop it in the
    caller if it hadn't been shared in the first place
    * in that form unshare_files() is trivially implemented via
    unshare_fd(), so we eliminate the duplicate logics in fork.c
    * reset_files_struct() is only ever called for current;
    it would break the system if somebody called it for anything
    else (we can't modify ->files of somebody else). Lose the
    task_struct * argument.

    Signed-off-by: Al Viro

    Al Viro
     
  • * unshare_files() can fail; doing it after irreversible actions is wrong
    and de_thread() is certainly irreversible.
    * since we do it unconditionally anyway, we might as well do it in do_execve()
    and save ourselves the PITA in binfmt handlers, etc.
    * while we are at it, binfmt_som actually leaked files_struct on failure.

    As a side benefit, unshare_files(), put_files_struct() and reset_files_struct()
    become unexported.

    Signed-off-by: Al Viro

    Al Viro
     
  • updating current->files requires task_lock

    Signed-off-by: Al Viro

    Al Viro
     

20 Apr, 2008

2 commits

  • Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Suresh Siddha
     
  • Split the FPU save area from the task struct. This allows easy
    migration of FPU context, and it's generally cleaner. It also allows
    the following two optimizations:

    1) only allocate when the application actually uses FPU, i.e. in the
    first lazy FPU trap. This could save memory for non-FPU-using apps.
    The next patch does this lazy allocation.

    2) allocate the right size for the actual cpu rather than always 512
    bytes. Patches enabling xsave/xrstor support (coming shortly) will
    take advantage of this.
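
    A sketch of the resulting layout on x86 (member names as introduced by
    this series, to the best of my knowledge; the comment is mine):

        union thread_xstate {
                struct i387_fsave_struct        fsave;
                struct i387_fxsave_struct       fxsave;
                struct i387_soft_struct         soft;
        };

        struct thread_struct {
                ...
                /* FPU state now lives in its own allocation, sized for
                 * the running cpu instead of a fixed 512 bytes embedded
                 * in task_struct */
                union thread_xstate *xstate;
        };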

    Signed-off-by: Suresh Siddha
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Suresh Siddha
     

29 Mar, 2008

1 commit


15 Feb, 2008

1 commit


09 Feb, 2008

3 commits

  • [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Harvey Harrison
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Harvey Harrison
     
  • Some time ago the xxx_vnr() calls (e.g. pid_vnr or find_task_by_vpid) were
    _all_ converted to operate on the current pid namespace. After this each call
    like xxx_nr_ns(foo, current->nsproxy->pid_ns) is nothing but a xxx_vnr(foo)
    one.

    Switch all the xxx_nr_ns() callers to use the xxx_vnr() calls where
    appropriate.

    Signed-off-by: Pavel Emelyanov
    Reviewed-by: Oleg Nesterov
    Cc: "Eric W. Biederman"
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Emelyanov
     
  • signal_struct->tsk points to the ->group_leader and thus we have the nasty
    code in de_thread() which has to change it and restart ->real_timer if the
    leader is changed.

    Use "struct pid *leader_pid" instead. This also allows us to kill now
    unneeded send_group_sig_info().

    Signed-off-by: Oleg Nesterov
    Acked-by: "Eric W. Biederman"
    Cc: Davide Libenzi
    Cc: Pavel Emelyanov
    Acked-by: Roland McGrath
    Acked-by: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     

08 Feb, 2008

1 commit

  • Basic setup routines: the mm_struct has a pointer to the cgroup that
    it belongs to, and the page has a page_cgroup associated with it.

    Signed-off-by: Pavel Emelianov
    Signed-off-by: Balbir Singh
    Cc: Paul Menage
    Cc: Peter Zijlstra
    Cc: "Eric W. Biederman"
    Cc: Nick Piggin
    Cc: Kirill Korotaev
    Cc: Herbert Poetzl
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Emelianov