17 Oct, 2007

8 commits

  • There's no reason to sleep in try_set_zone_oom() or clear_zonelist_oom() if
    the lock can't be acquired; it will be available soon enough once the zonelist
    scanning is done. All other threads waiting for the OOM killer are also
    contingent on the exiting task being able to acquire the lock in
    clear_zonelist_oom() so it doesn't make sense to put it to sleep.
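
    A plausible shape of the change (the names zone_scan_mutex/zone_scan_lock
    and the mutex-to-spinlock conversion are assumptions inferred from the
    description, not taken verbatim from this log):

        -static DEFINE_MUTEX(zone_scan_mutex);   /* sleeps on contention */
        +static DEFINE_SPINLOCK(zone_scan_lock); /* spins briefly instead */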

    Cc: Andrea Arcangeli
    Cc: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Since no task descriptor's 'cpuset' field is dereferenced in the execution of
    the OOM killer anymore, it is no longer necessary to take callback_mutex.

    [akpm@linux-foundation.org: restore cpuset_lock for other patches]
    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Instead of testing for overlap in the memory nodes of the nearest
    exclusive ancestor of both current and the candidate task, it is better
    to simply test for intersection between the two tasks' mems_allowed in
    their task descriptors. This does not require taking callback_mutex,
    since it is only used as a hint in the badness scoring.

    Tasks that do not have an intersection in their mems_allowed with the current
    task are not explicitly restricted from being OOM killed because it is quite
    possible that the candidate task has allocated memory there before and has
    since changed its mems_allowed.
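
    A minimal sketch of the hint, assuming it is open-coded in badness() and
    that the divisor 8 is illustrative:

        /*
         * If the candidate shares no memory nodes with current, killing it
         * is less likely to free memory we can use: reduce its badness
         * score rather than skipping it outright.
         */
        if (!nodes_intersects(current->mems_allowed, p->mems_allowed))
                points /= 8;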

    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Suppresses the extraneous stack and memory dump when a parallel OOM
    killing has been detected. There's no need to fill the ring buffer with
    this information if it has already been printed and the condition that
    triggered the previous OOM killer has not yet been alleviated.

    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Adds a new sysctl, 'oom_kill_allocating_task', which will automatically kill
    the OOM-triggering task instead of scanning through the tasklist to find a
    memory-hogging target. This is helpful for systems with an insanely large
    number of tasks where scanning the tasklist significantly degrades
    performance.
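
    Usage is presumably the standard sysctl interface (the path is inferred
    from the sysctl name):

        echo 1 > /proc/sys/vm/oom_kill_allocating_task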

    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • OOM killer synchronization should be done with zone granularity so that
    memory policy and cpuset allocations may have their corresponding zones
    locked, allowing parallel kills for other OOM conditions that may exist
    elsewhere in the system. DMA allocations can be targeted at the zone
    level, which would not be possible if locking were done at the node level
    or globally.

    Synchronization is done with a variation of "trylocks": if the trylock
    fails, the current task is put to sleep and the failed allocation attempt
    is restarted later. Otherwise, the OOM killer is invoked.

    Each zone in the zonelist that __alloc_pages() was called with is checked for
    the newly-introduced ZONE_OOM_LOCKED flag. If any zone has this flag present,
    the "trylock" to serialize the OOM killer fails and returns zero. Otherwise,
    all the zones have ZONE_OOM_LOCKED set and the try_set_zone_oom() function
    returns non-zero.
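
    A condensed sketch of the "trylock", assuming a zone_scan_lock spinlock
    guards the scan and that zone_is_oom_locked()/zone_set_flag() are helpers
    for testing and setting ZONE_OOM_LOCKED in zone->flags:

        int try_set_zone_oom(struct zonelist *zonelist)
        {
                struct zone **z = zonelist->zones;
                int ret = 1;

                spin_lock(&zone_scan_lock);
                do {
                        /* another OOM kill is already in flight here */
                        if (zone_is_oom_locked(*z)) {
                                ret = 0;
                                goto out;
                        }
                } while (*(++z) != NULL);

                /* no conflict: mark every zone in the zonelist locked */
                for (z = zonelist->zones; *z != NULL; z++)
                        zone_set_flag(*z, ZONE_OOM_LOCKED);
        out:
                spin_unlock(&zone_scan_lock);
                return ret;
        }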

    Cc: Andrea Arcangeli
    Cc: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The OOM killer's CONSTRAINT definitions are really more appropriate in an
    enum, so define them in include/linux/oom.h.
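
    The resulting enum presumably reads as follows (constant names as used by
    the OOM killer of this era):

        enum oom_constraint {
                CONSTRAINT_NONE,
                CONSTRAINT_CPUSET,
                CONSTRAINT_MEMORY_POLICY,
        };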

    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • constrained_alloc() builds its own map of the nodes with memory. That
    information is now available in N_HIGH_MEMORY, so simplify the code.
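
    A one-line sketch of the simplification, assuming the node_states[] array
    from the memoryless-node work:

        /* was: walk all online nodes and test each one for memory */
        nodemask_t nodes = node_states[N_HIGH_MEMORY];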

    Signed-off-by: Christoph Lameter
    Acked-by: Nishanth Aravamudan
    Acked-by: Lee Schermerhorn
    Acked-by: Bob Picco
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

30 Jul, 2007

1 commit

  • Remove fs.h from mm.h. For this,
    1) Uninline vma_wants_writenotify(). It's pretty huge anyway.
    2) Add back fs.h or less bloated headers (err.h) to files that need it.

    As a result, on x86_64 allyesconfig, the number of files rebuilt due to
    fs.h dependencies drops from 3929 to 3444 (-12.3%).

    Cross-compile tested without regressions on my two usual configs and (sigh):

    alpha arm-mx1ads mips-bigsur powerpc-ebony
    alpha-allnoconfig arm-neponset mips-capcella powerpc-g5
    alpha-defconfig arm-netwinder mips-cobalt powerpc-holly
    alpha-up arm-netx mips-db1000 powerpc-iseries
    arm arm-ns9xxx mips-db1100 powerpc-linkstation
    arm-assabet arm-omap_h2_1610 mips-db1200 powerpc-lite5200
    arm-at91rm9200dk arm-onearm mips-db1500 powerpc-maple
    arm-at91rm9200ek arm-picotux200 mips-db1550 powerpc-mpc7448_hpc2
    arm-at91sam9260ek arm-pleb mips-ddb5477 powerpc-mpc8272_ads
    arm-at91sam9261ek arm-pnx4008 mips-decstation powerpc-mpc8313_rdb
    arm-at91sam9263ek arm-pxa255-idp mips-e55 powerpc-mpc832x_mds
    arm-at91sam9rlek arm-realview mips-emma2rh powerpc-mpc832x_rdb
    arm-ateb9200 arm-realview-smp mips-excite powerpc-mpc834x_itx
    arm-badge4 arm-rpc mips-fulong powerpc-mpc834x_itxgp
    arm-carmeva arm-s3c2410 mips-ip22 powerpc-mpc834x_mds
    arm-cerfcube arm-shannon mips-ip27 powerpc-mpc836x_mds
    arm-clps7500 arm-shark mips-ip32 powerpc-mpc8540_ads
    arm-collie arm-simpad mips-jazz powerpc-mpc8544_ds
    arm-corgi arm-spitz mips-jmr3927 powerpc-mpc8560_ads
    arm-csb337 arm-trizeps4 mips-malta powerpc-mpc8568mds
    arm-csb637 arm-versatile mips-mipssim powerpc-mpc85xx_cds
    arm-ebsa110 i386 mips-mpc30x powerpc-mpc8641_hpcn
    arm-edb7211 i386-allnoconfig mips-msp71xx powerpc-mpc866_ads
    arm-em_x270 i386-defconfig mips-ocelot powerpc-mpc885_ads
    arm-ep93xx i386-up mips-pb1100 powerpc-pasemi
    arm-footbridge ia64 mips-pb1500 powerpc-pmac32
    arm-fortunet ia64-allnoconfig mips-pb1550 powerpc-ppc64
    arm-h3600 ia64-bigsur mips-pnx8550-jbs powerpc-prpmc2800
    arm-h7201 ia64-defconfig mips-pnx8550-stb810 powerpc-ps3
    arm-h7202 ia64-gensparse mips-qemu powerpc-pseries
    arm-hackkit ia64-sim mips-rbhma4200 powerpc-up
    arm-integrator ia64-sn2 mips-rbhma4500 s390
    arm-iop13xx ia64-tiger mips-rm200 s390-allnoconfig
    arm-iop32x ia64-up mips-sb1250-swarm s390-defconfig
    arm-iop33x ia64-zx1 mips-sead s390-up
    arm-ixp2000 m68k mips-tb0219 sparc
    arm-ixp23xx m68k-amiga mips-tb0226 sparc-allnoconfig
    arm-ixp4xx m68k-apollo mips-tb0287 sparc-defconfig
    arm-jornada720 m68k-atari mips-workpad sparc-up
    arm-kafa m68k-bvme6000 mips-wrppmc sparc64
    arm-kb9202 m68k-hp300 mips-yosemite sparc64-allnoconfig
    arm-ks8695 m68k-mac parisc sparc64-defconfig
    arm-lart m68k-mvme147 parisc-allnoconfig sparc64-up
    arm-lpd270 m68k-mvme16x parisc-defconfig um-x86_64
    arm-lpd7a400 m68k-q40 parisc-up x86_64
    arm-lpd7a404 m68k-sun3 powerpc x86_64-allnoconfig
    arm-lubbock m68k-sun3x powerpc-cell x86_64-defconfig
    arm-lusl7200 mips powerpc-celleb x86_64-up
    arm-mainstone mips-atlas powerpc-chrp32

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

08 May, 2007

3 commits

  • Fixes a deadlock in the OOM killer for allocations that are not
    __GFP_HARDWALL.

    The OOM killer takes callback_mutex before it checks the allocation
    constraint.

    constrained_alloc() iterates through each zone in the allocation zonelist
    and calls cpuset_zone_allowed_softwall() to determine whether an allocation
    for gfp_mask is possible. If a zone's node is not in the OOM-triggering
    task's mems_allowed, the task is not exiting, and the failed allocation was
    not __GFP_HARDWALL, cpuset_zone_allowed_softwall() attempts to take
    callback_mutex to check the nearest exclusive ancestor of current's cpuset.
    Since callback_mutex is already held, this deadlocks.

    We now take callback_mutex only after iterating through the zonelist, since
    it is not needed before then.
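
    A sketch of the reordering in out_of_memory() (cpuset_lock() is the
    wrapper that takes callback_mutex; the exact structure is assumed):

        /* determine the constraint without holding callback_mutex */
        constraint = constrained_alloc(zonelist, gfp_mask);

        /* only now take the cpuset lock for the kill itself */
        cpuset_lock();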

    Cc: Andi Kleen
    Cc: Nick Piggin
    Cc: Christoph Lameter
    Cc: Martin J. Bligh
    Signed-off-by: David Rientjes
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The current panic_on_oom may not work as intended if a process uses
    cpusets or mempolicy, because memory may still remain on other nodes.
    But some people want a panic ASAP for fast failover even in those cases.
    This patch adds a new setting for that purpose.

    This is tested on my ia64 box, which has 3 nodes.
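
    The new mode is presumably selected through the existing sysctl (the
    value 2 is an assumption not stated in this log):

        echo 2 > /proc/sys/vm/panic_on_oom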

    Signed-off-by: Yasunori Goto
    Signed-off-by: Benjamin LaHaise
    Cc: Christoph Lameter
    Cc: Paul Jackson
    Cc: Ethan Solomita
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yasunori Goto
     
  • If the badness of a process is zero then oom_adj>0 has no effect. This
    patch makes sure that the oom_adj shift actually increases badness points
    appropriately.
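
    A sketch of the fix in badness(), using the in-kernel field name
    oomkilladj (assumed):

        if (p->oomkilladj > 0) {
                /* a zero score would be unaffected by the shift */
                if (!points)
                        points = 1;
                points <<= p->oomkilladj;
        }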

    Signed-off-by: Joshua N. Pritikin
    Cc: Andrea Arcangeli
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joshua N Pritikin
     

24 Apr, 2007

2 commits

  • I only have CONFIG_NUMA=y for build testing, and was surprised, when
    trying a memhog, to see lots of other processes killed with "No available
    memory (MPOL_BIND)". memhog is killed correctly once we initialize the
    nodemask in constrained_alloc().

    Signed-off-by: Hugh Dickins
    Acked-by: Christoph Lameter
    Acked-by: William Irwin
    Acked-by: KAMEZAWA Hiroyuki
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • oom_kill_task() calls __oom_kill_task() to OOM kill a selected task.
    When finding other threads that share an mm with that task, we need to
    kill those individual threads and not the same one.

    (Bug introduced by f2a2a7108aa0039ba7a5fe7a0d2ecef2219a7584)

    Acked-by: William Irwin
    Acked-by: Christoph Lameter
    Cc: Nick Piggin
    Cc: Andrew Morton
    Cc: Andi Kleen
    Signed-off-by: David Rientjes
    Signed-off-by: Linus Torvalds

    David Rientjes
     

06 Jan, 2007

1 commit

  • These days, if you swapoff when there isn't enough memory, OOM killer gives
    "BUG: scheduling while atomic" and the machine hangs: badness() needs to do
    its PF_SWAPOFF return after the task_unlock (tasklist_lock is also held
    here, so p isn't going to be freed: PF_SWAPOFF might get turned off at any
    moment, but that doesn't really matter).

    Signed-off-by: Hugh Dickins
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

31 Dec, 2006

1 commit

  • constrained_alloc(), which is called to detect where the OOM condition
    came from, checks the zonelist passed to it. If the zonelist doesn't
    include all nodes, it concludes that the OOM was caused by mempolicy.

    But memory-less nodes exist, and a memory-less node's zones are never
    included in zonelist[].

    constrained_alloc() should take memory-less nodes into account. Otherwise,
    it always decides "oom is from mempolicy", which means the current process
    dies every time. This patch fixes it.

    Signed-off-by: KAMEZAWA Hiroyuki
    Cc: Paul Jackson
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

14 Dec, 2006

1 commit

  • Elaborate the API for calling cpuset_zone_allowed(), so that users have to
    explicitly choose between the two variants:

    cpuset_zone_allowed_hardwall()
    cpuset_zone_allowed_softwall()

    Until now, whether or not you got the hardwall flavor depended solely on
    whether or not you or'd the __GFP_HARDWALL gfp flag into the gfp_mask
    argument.

    If you didn't specify __GFP_HARDWALL, you implicitly got the softwall
    version.

    Unfortunately, this meant that users would end up with the softwall version
    without thinking about it. Since only the softwall version might sleep,
    this led to bugs with possible sleeping in interrupt context on more than
    one occasion.

    The hardwall version requires that the current task's mems_allowed allows
    the node of the specified zone (or that you're in interrupt, or that
    __GFP_THISNODE is set, or that you're on a one-cpuset system).

    The softwall version, depending on the gfp_mask, might allow a node if it
    was allowed in the nearest enclosing cpuset marked mem_exclusive (which
    requires taking the cpuset lock 'callback_mutex' to evaluate).

    This patch removes the cpuset_zone_allowed() call, and forces the caller to
    explicitly choose between the hardwall and the softwall case.

    If the caller wants the gfp_mask to determine this choice, they should (1)
    be sure they can sleep or that __GFP_HARDWALL is set, and (2) invoke the
    cpuset_zone_allowed_softwall() routine.
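
    Illustrative call sites (the zone variable and surrounding loop context
    are assumed):

        /* atomic context must not sleep: use the hardwall variant */
        if (!cpuset_zone_allowed_hardwall(zone, GFP_ATOMIC))
                continue;

        /* a GFP_KERNEL caller that may sleep can use the softwall variant */
        if (!cpuset_zone_allowed_softwall(zone, GFP_KERNEL))
                continue;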

    This adds another 100 or 200 bytes to the kernel text space, due to the few
    lines of nearly duplicate code at the top of both cpuset_zone_allowed_*
    routines. It should save a few instructions executed for the calls that
    turned into calls of cpuset_zone_allowed_hardwall, thanks to not having to
    set (before the call) then check (within the call) the __GFP_HARDWALL flag.

    For the most critical call, from get_page_from_freelist(), the same
    instructions are executed as before -- the old cpuset_zone_allowed()
    routine it used to call is the same code as the
    cpuset_zone_allowed_softwall() routine that it calls now.

    Not a perfect win, but it seems worth it to reduce the chance of hitting
    a sleeping-with-irqs-off complaint again.

    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     

08 Dec, 2006

3 commits

  • Don't cause all threads in all other thread groups to gain TIF_MEMDIE;
    otherwise we'll get a thundering herd eating our memory reserve. This may
    not be the optimal scheme, but it fits our policy of allowing just one
    TIF_MEMDIE in the system at once.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Clean up the OOM killer messages to be more consistent.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Abort the kill if any of our threads have OOM_DISABLE set. Having this
    test here also prevents any OOM_DISABLE child of the "selected" process
    from being killed.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

21 Oct, 2006

1 commit

  • Although mm.h is not an exported header, it does contain one thing that
    is part of the userspace ABI -- the value disabling the OOM killer for a
    given process. So,
    a) create and export include/linux/oom.h
    b) move the OOM_DISABLE define there.
    c) turn the bounding values of /proc/$PID/oom_adj into defines and export
    them too.

    Note: mass __KERNEL__ removal will be done later.
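
    The exported values presumably read as follows (the numbers match the
    /proc/$PID/oom_adj range of this era; treat them as illustrative):

        #define OOM_DISABLE     (-17)
        /* inclusive */
        #define OOM_ADJUST_MIN  (-16)
        #define OOM_ADJUST_MAX  15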

    Signed-off-by: Alexey Dobriyan
    Cc: Nick Piggin
    Cc: David Woodhouse
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

30 Sep, 2006

7 commits

  • A previous patch to allow an exiting task to OOM kill itself (and thereby
    avoid a little deadlock) introduced a problem. We don't want the
    PF_EXITING task, even if it is 'current', to access mem reserves if there
    is already a TIF_MEMDIE process in the system sucking up reserves.

    Also make the commenting a little bit clearer, and note that our current
    scheme of effectively single threading the OOM killer is not itself
    perfect.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • - It is not possible to have task->mm == &init_mm.

    - task_lock() buys nothing for the 'if (!p->mm)' check.

    Signed-off-by: Oleg Nesterov
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • No logic changes, but imho easier to read.

    Signed-off-by: Oleg Nesterov
    Acked-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • The only use of TASK_DEAD outside of the last schedule path is in
    select_bad_process():

        for_each_task(p) {
            if (!p->mm)
                continue;
            ...
            if (p->state == TASK_DEAD)
                continue;
            ...

    The TASK_DEAD state is set at the end of do_exit(); this means that p->mm
    was already set to NULL by exit_mm(), so this task was already rejected by
    the 'if (!p->mm)' check above.

    Note also that the caller holds tasklist_lock; this means that p can't
    pass exit_notify() and then set TASK_DEAD while p->mm != NULL.

    Also, remove open-coded is_init().

    Signed-off-by: Oleg Nesterov
    Cc: Ingo Molnar
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • I am not sure about this patch; I am asking Ingo to take a decision.

    task_struct->state == EXIT_DEAD is a very special case; to avoid confusion
    it makes sense to introduce a new state, TASK_DEAD, while EXIT_DEAD should
    live only in ->exit_state, as documented in sched.h.

    Note that this state is not visible to user-space, get_task_state() masks off
    unsuitable states.

    Signed-off-by: Oleg Nesterov
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • After the previous change, (->flags & PF_DEAD) is equivalent to
    (->state == EXIT_DEAD), so we don't need PF_DEAD any longer.

    Signed-off-by: Oleg Nesterov
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • This is an updated version of Eric Biederman's is_init() patch.
    (http://lkml.org/lkml/2006/2/6/280). It applies cleanly to 2.6.18-rc3 and
    replaces a few more instances of ->pid == 1 with is_init().

    Further, is_init() checks pid and thus removes dependency on Eric's other
    patches for now.

    Eric's original description:

    There are a lot of places in the kernel where we test for init
    because we give it special properties. Most significantly, init
    must not die. This results in code all over the kernel testing
    ->pid == 1.

    Introduce is_init to capture this case.

    With multiple pid spaces, for all of the cases affected we are
    looking only for the first process on the system, not some other
    process that has pid == 1.
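
    The helper itself is a one-liner, per the description above:

        static inline int is_init(struct task_struct *tsk)
        {
                return tsk->pid == 1;
        }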

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Sukadev Bhattiprolu
    Cc: Dave Hansen
    Cc: Serge Hallyn
    Cc: Cedric Le Goater
    Cc:
    Acked-by: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sukadev Bhattiprolu
     

26 Sep, 2006

9 commits

  • There are many places where we need to determine the node of a zone.
    Currently we use a difficult-to-read sequence of pointer dereferences.
    Put that into an inline function and use it throughout the VM. Maybe we
    can find a way to optimize the lookup in the future.
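
    A sketch of the helper, wrapping the dereference chain it replaces (the
    name zone_to_nid is an assumption, as this log does not name it):

        static inline int zone_to_nid(struct zone *zone)
        {
                return zone->zone_pgdat->node_id;
        }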

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Update the comments for __oom_kill_task() to reflect the code changes.

    Signed-off-by: Ram Gupta
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ram Gupta
     
  • Print the name of the task invoking the OOM killer. Could make debugging
    easier.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Skip kernel threads, rather than having them return 0 from badness.
    Theoretically, badness might truncate all results to 0, thus a kernel thread
    might be picked first, causing an infinite loop.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • PF_SWAPOFF processes currently cause select_bad_process() to return
    straight away. Instead, give them high priority so that we kill them
    first; however, we also first ensure no parallel OOM kills are happening
    at the same time.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Having the oomkilladj == OOM_DISABLE check before the releasing check means
    that oomkilladj == OOM_DISABLE tasks exiting will not stop the OOM killer.

    Moving the test down will give the desired behaviour. Also: it will allow
    them to "OOM-kill" themselves if they are exiting. As per the previous patch,
    this is required to prevent OOM killer deadlocks (and they don't actually get
    killed, because they're already exiting -- they're simply allowed access to
    memory reserves).
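
    A sketch of the reordered checks inside the select_bad_process() task
    loop (the surrounding structure is assumed):

        /* an exiting or already-OOM-killed task will release memory soon */
        releasing = test_tsk_thread_flag(p, TIF_MEMDIE) ||
                        (p->flags & PF_EXITING);
        if (releasing)
                return ERR_PTR(-1UL);   /* abort: no new kill is needed */

        /* OOM_DISABLE is honoured only after the releasing check */
        if (p->oomkilladj == OOM_DISABLE)
                continue;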

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • If current *is* exiting, it should actually be allowed to access reserved
    memory rather than OOM kill something else. Can't do this via a straight
    check in page_alloc.c because that would allow multiple tasks to use up
    reserves. Instead cause current to OOM-kill itself which will mark it as
    TIF_MEMDIE.

    The current procedure of simply aborting the OOM-kill if a task is exiting can
    lead to OOM deadlocks.

    In the case of killing a PF_EXITING task, don't make a lot of noise about it.
    This becomes more important in future patches, where we can "kill" OOM_DISABLE
    tasks.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • cpuset_excl_nodes_overlap() does not always indicate that killing a task
    will not free any memory for us. For example, we may be asking for an
    allocation from _anywhere_ in the machine, or the task in question may be
    pinning memory that is outside its cpuset. Fix this by just having
    cpuset_excl_nodes_overlap() reduce the badness rather than disallow the
    kill.

    Signed-off-by: Nick Piggin
    Acked-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Add a notifer chain to the out of memory killer. If one of the registered
    callbacks could release some memory, do not kill the process but return and
    retry the allocation that forced the oom killer to run.

    The purpose of the notifier is to add a safety net in the presence of
    memory ballooners. If the resource manager inflated the balloon to a size
    where memory allocations can not be satisfied anymore, it is better to
    deflate the balloon a bit instead of killing processes.

    The implementation for the s390 ballooner is included.
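
    A sketch of how a ballooner might hook the chain (the callback signature
    follows the standard notifier API; balloon_shrink() is a hypothetical
    helper returning the number of pages it released):

        static int balloon_oom_notify(struct notifier_block *self,
                                      unsigned long dummy, void *parm)
        {
                unsigned long *freed = parm;

                *freed += balloon_shrink();  /* hypothetical: deflate a bit */
                return NOTIFY_OK;
        }

        static struct notifier_block balloon_oom_nb = {
                .notifier_call = balloon_oom_notify,
        };

        /* at driver init */
        register_oom_notifier(&balloon_oom_nb);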

    [akpm@osdl.org: cleanups]
    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Martin Schwidefsky
     

04 Jul, 2006

1 commit