30 Jan, 2008

3 commits

  • Segfault messages now look like:

    hal-resmgr[13791]: segfault at 3c rip 2b9c8caec182 rsp 7fff1e825d30 error 4 in libacl.so.1.1.0[2b9c8caea000+6000]

    This makes it easier to pinpoint bugs to specific libraries.

    Printing the offset into the mapping also makes it possible to find the
    correct fault point inside a library even with randomized mappings.
    Previously there was no way to locate the correct code address inside
    the randomized mapping.

    This relies on an earlier patch to shorten the printk formats.

    The messages are now often longer than 80 characters, but I think that's worth it.

    [includes fix from Eric Dumazet to check d_path error value]

    Signed-off-by: Andi Kleen
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Andi Kleen
     
  • The break_lock data structure and code for spinlocks is quite nasty.
    Not only does it double the size of a spinlock but it changes locking to
    a potentially less optimal trylock.

    Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
    __raw_spin_is_contended that uses the lock data itself to determine whether
    there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
    not set.

    Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
    decouple it from the spinlock implementation, and make it typesafe (rwlocks
    do not have any need_lockbreak sites -- why do they even get bloated up
    with that break_lock then?).

    Signed-off-by: Nick Piggin
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Nick Piggin
     
  • Randomize the location of the heap (brk) for i386 and x86_64. The brk
    start is randomized within the range from the current brk location up
    to a 0x02000000 offset, on both architectures. This, together with
    pie-executable-randomization.patch and
    pie-executable-randomization-fix.patch, should make the address space
    randomization on i386 and x86_64 complete.

    Arjan says:

    This is known to break older versions of some emacs variants, whose dumper
    code assumed that the last variable declared in the program is equal to the
    start of the dynamically allocated memory region.

    (The dumper is the code where emacs effectively dumps core at the end of its
    compilation stage; this coredump is then loaded as the main program during
    normal use)

    iirc this was 5 years or so ago; we found this way back when I was at RH and we
    first did the security stuff there (including this brk randomization). It
    wasn't all variants of emacs, and it got fixed as a result (I vaguely remember
    that emacs already had code to deal with it for other archs/oses, just
    ifdeffed wrongly).

    It's a rare and wrong assumption as a general thing; on x86 it just
    mostly happened to be true (but to be honest, it'll break too if gcc
    does something fancy or if the linker uses a non-standard order).
    Still, it's something we should at least document.

    Note 2: afaik it only broke the emacs *build*. I'm not 100% sure about that
    (it IS 5 years ago) though.

    [ akpm@linux-foundation.org: deuglification ]

    Signed-off-by: Jiri Kosina
    Cc: Arjan van de Ven
    Cc: Roland McGrath
    Cc: Jakub Jelinek
    Signed-off-by: Andrew Morton
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Jiri Kosina
     

28 Jan, 2008

1 commit


26 Jan, 2008

5 commits

  • Move the task_struct members specific to rt scheduling together.
    A future optimization could be to put sched_entity and sched_rt_entity
    into a union.

    Signed-off-by: Peter Zijlstra
    CC: Srivatsa Vaddagiri
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • This patch converts the known per-subsystem mutexes to use
    get_online_cpus()/put_online_cpus(). It also eliminates the
    CPU_LOCK_ACQUIRE and CPU_LOCK_RELEASE hotplug notification events.

    Signed-off-by: Gautham R Shenoy
    Signed-off-by: Ingo Molnar

    Gautham R Shenoy
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/selinux-2.6:
    selinux: make mls_compute_sid always polyinstantiate
    security/selinux: constify function pointer tables and fields
    security: add a secctx_to_secid() hook
    security: call security_file_permission from rw_verify_area
    security: remove security_sb_post_mountroot hook
    Security: remove security.h include from mm.h
    Security: remove security_file_mmap hook sparse-warnings (NULL as 0).
    Security: add get, set, and cloning of superblock security information
    security/selinux: Add missing "space"

    Linus Torvalds
     
  • This can be broken down into these major areas:
    - Documentation updates (language translations and fixes, as
    well as kobject and kset documentation updates.)
    - major kset/kobject/ktype rework and fixes. This cleans up the
    kset and kobject and ktype relationship and architecture,
    making sense of things now, and good documentation and samples
    are provided for others to use. Also the attributes for
    kobjects are much easier to handle now. This cleaned up a LOT
    of code all through the kernel, making kobjects easier to use
    if you want to.
    - struct bus_type has been reworked to now handle the lifetime
    rules properly, as the kobject is properly dynamic.
    - struct driver has also been reworked, and now the lifetime
    issues are resolved.
    - the block subsystem has been converted to use struct device
    now, and not "raw" kobjects. This patch has been in the -mm
    tree for over a year now, and finally all the issues are
    worked out with it. Older distros now properly work with new
    kernels, and no userspace updates are needed at all.
    - nozomi driver is added. This has also been in -mm for a long
    time, and many people have asked for it to go in. It is now
    in good enough shape to do so.
    - lots of class_device conversions to use struct device instead.
    The tree is almost all cleaned up now; only SCSI and IB remain
    to be fixed up...

    * git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-2.6: (196 commits)
    Driver core: coding style fixes
    Kobject: fix coding style issues in kobject c files
    Kobject: fix coding style issues in kobject.h
    Driver core: fix coding style issues in device.h
    spi: use class iteration api
    scsi: use class iteration api
    rtc: use class iteration api
    power supply : use class iteration api
    ieee1394: use class iteration api
    Driver Core: add class iteration api
    Driver core: Cleanup get_device_parent() in device_add() and device_move()
    UIO: constify function pointer tables
    Driver Core: constify the name passed to platform_device_register_simple
    driver core: fix build with SYSFS=n
    sysfs: make SYSFS_DEPRECATED depend on SYSFS
    Driver core: use LIST_HEAD instead of call to INIT_LIST_HEAD in __init
    kobject: add sample code for how to use ksets/ktypes/kobjects
    kobject: add sample code for how to use kobjects in a simple manner.
    kobject: update the kobject/kset documentation
    kobject: remove old, outdated documentation.
    ...

    Linus Torvalds
     
  • If the node we're booting on doesn't have memory, bootstrapping kmalloc()
    caches resorts to fallback_alloc() which requires ->nodelists set for all
    nodes. Fix that by calling set_up_list3s() for CACHE_CACHE in
    kmem_cache_init().

    As kmem_getpages() is called with GFP_THISNODE set, this used to work
    before because of breakage in 2.6.22 and earlier, where GFP_THISNODE
    returned pages from the wrong node if a node had no memory. So it may
    have worked accidentally and in an unsafe manner, because the pages
    would have been associated with the wrong node, which could trigger
    BUG_ONs and locking trouble.

    Tested-by: Mel Gorman
    Tested-by: Olaf Hering
    Reviewed-by: Christoph Lameter
    Signed-off-by: Pekka Enberg
    [ With additional one-liner by Olaf Hering - Linus ]
    Signed-off-by: Linus Torvalds

    Pekka Enberg
     

25 Jan, 2008

8 commits

  • This converts the code to use the new kobject functions, cleaning up the
    logic in doing so.

    Cc: Christoph Lameter
    Cc: Kay Sievers
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     
  • kernel_kset does not need to be a kset, but a much simpler kobject now
    that we have kobj_attributes.

    We also rename kernel_kset to kernel_kobj to catch all users of this
    symbol with a build error instead of an easy-to-ignore build warning.

    Cc: Kay Sievers
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     
  • /sys/kernel is where these things should go.
    Also updated the documentation and tool that used this directory.

    Cc: Kay Sievers
    Acked-by: Christoph Lameter
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     
  • Dynamically create the kset instead of declaring it statically.

    Cc: Kay Sievers
    Cc: Christoph Lameter
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     
  • We don't need a "default" ktype for a kset. We should set this
    explicitly every time for each kset. This change is needed so that we
    can make ksets dynamic, and cleans up one of the odd, undocumented
    assumptions that the kset/kobject/ktype model has.

    This patch is based on a lot of help from Kay Sievers.

    A nasty bug in the block code was found by Dave Young.

    Cc: Kay Sievers
    Cc: Dave Young
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     
  • Fixing:
    CHECK mm/mmap.c
    mm/mmap.c:1623:29: warning: Using plain integer as NULL pointer
    mm/mmap.c:1623:29: warning: Using plain integer as NULL pointer
    mm/mmap.c:1944:29: warning: Using plain integer as NULL pointer

    Signed-off-by: Richard Knutsson
    Signed-off-by: James Morris

    Richard Knutsson
     
  • Partially revert the changes made by 04231b3002ac53f8a64a7bd142fde3fa4b6808c6
    to the kmem_list3 management. On a machine with a memoryless node, this
    BUG_ON was triggering:

    static void *____cache_alloc_node(struct kmem_cache *cachep,
                                      gfp_t flags, int nodeid)
    {
            struct list_head *entry;
            struct slab *slabp;
            struct kmem_list3 *l3;
            void *obj;
            int x;

            l3 = cachep->nodelists[nodeid];
            BUG_ON(!l3);

    Signed-off-by: Mel Gorman
    Cc: Pekka Enberg
    Acked-by: Christoph Lameter
    Cc: "Aneesh Kumar K.V"
    Cc: Nishanth Aravamudan
    Cc: KAMEZAWA Hiroyuki
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • The shared page table code for hugetlb memory on x86 and x86_64
    is causing a leak. When a process that uses hugepages through this
    code exits, the system leaks some of the hugepages.

    -------------------------------------------------------
    Part of /proc/meminfo just before database startup:
    HugePages_Total: 5500
    HugePages_Free: 5500
    HugePages_Rsvd: 0
    Hugepagesize: 2048 kB

    Just before shutdown:
    HugePages_Total: 5500
    HugePages_Free: 4475
    HugePages_Rsvd: 0
    Hugepagesize: 2048 kB

    After shutdown:
    HugePages_Total: 5500
    HugePages_Free: 4988
    HugePages_Rsvd: 0
    Hugepagesize: 2048 kB
    ----------------------------------------------------------

    The problem occurs during a fork, in copy_hugetlb_page_range(). It
    locates the dst_pte using huge_pte_alloc(). Since huge_pte_alloc()
    calls huge_pmd_share(), it will share the pmd page if it can, yet the
    main loop in copy_hugetlb_page_range() does a get_page() on every
    hugepage. This violates the shared hugepmd pagetable protocol and
    creates additional references to the hugepages, causing a leak when
    the VMA is unmapped. We can skip the entire replication of the ptes
    when the hugepage pagetables are shared. The attached patch skips
    copying the ptes and the get_page() calls if the hugetlbpage pagetable
    is shared.

    [akpm@linux-foundation.org: coding-style cleanups]
    Signed-off-by: Larry Woodman
    Signed-off-by: Adam Litke
    Cc: Badari Pulavarty
    Cc: Ken Chen
    Cc: David Gibson
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Larry Woodman
     

24 Jan, 2008

1 commit


18 Jan, 2008

2 commits

  • This patch puts #ifdef CONFIG_DEBUG_VM around a check in vm_normal_page
    that verifies that a pfn is valid. This patch increases performance of the
    page fault microbenchmark in lmbench by 13% and overall dbench performance
    by 7% on s390x. pfn_valid() is an expensive operation on s390 that
    costs a high double-digit number of CPU cycles. Nick Piggin suggested
    that
    pfn_valid() involves an array lookup on systems with sparsemem, and
    therefore is an expensive operation there too.

    The check looks like a clear debug thing to me; it should never trigger on
    regular kernels. And if a pte is created for an invalid pfn, we'll find
    out once the memory gets accessed later on anyway. Please consider
    inclusion of this patch into mm.

    Signed-off-by: Carsten Otte
    Acked-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Carsten Otte
     
  • With CONFIG_HOTPLUG=n and CONFIG_HOTPLUG_CPU=y we saw the
    following warning:
    WARNING: mm/built-in.o(.text+0x6864): Section mismatch: reference to .init.text: (between 'process_zones' and 'pageset_cpuup_callback')

    The culprit was zone_batchsize(), which was annotated __devinit but
    used from process_zones(), which is annotated __cpuinit.
    zone_batchsize() is also used from another function annotated
    __meminit, so the only valid option is to drop the annotation of
    zone_batchsize() so we know it is always valid to use it.

    Signed-off-by: Sam Ravnborg
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sam Ravnborg
     

15 Jan, 2008

3 commits

  • This reverts commit 2e6883bdf49abd0e7f0d9b6297fc3be7ebb2250b, as
    requested by Fengguang Wu. It's not quite fully baked yet, and while
    there are patches around to fix the problems it caused, they should get
    more testing. Says Fengguang: "I'll resend them both for -mm later on,
    in a more complete patchset".

    See

    http://bugzilla.kernel.org/show_bug.cgi?id=9738

    for some of this discussion.

    Requested-by: Fengguang Wu
    Cc: Andrew Morton
    Cc: Peter Zijlstra
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • In the error path of both shared and private hugetlb page allocation,
    the filesystem quota is never undone, leading to an fs quota leak. Fix
    them up.

    [akpm@linux-foundation.org: cleanup, micro-optimise]
    Signed-off-by: Ken Chen
    Acked-by: Adam Litke
    Cc: David Gibson
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ken Chen
     
  • The quicklist code calculates the size of the quicklists based on the number of
    free pages. This must be the number of free pages that can be allocated
    with GFP_KERNEL. node_page_state() includes the pages in ZONE_HIGHMEM and
    ZONE_MOVABLE which may lead the quicklists to become too large causing OOM.

    Signed-off-by: Christoph Lameter
    Tested-by: Dhaval Giani
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

09 Jan, 2008

2 commits

  • When using FLAT_MEMORY and ARCH_PFN_OFFSET is not 0, the kernel crashes in
    memmap_init_zone(). This bug was introduced by commit
    c713216deebd95d2b0ab38fef8bb2361c0180c2d.

    Signed-off-by: Thomas Bogendoerfer
    Acked-by: Mel Gorman
    Cc: Bob Picco
    Cc: Dave Hansen
    Cc: Andy Whitcroft
    Cc: Andi Kleen
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: "Keith Mannthey"
    Cc: "Luck, Tony"
    Cc: KAMEZAWA Hiroyuki
    Cc: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Bogendoerfer
     
  • The use of get_zeroed_page() with __GFP_HIGHMEM is invalid. Use
    alloc_page() with __GFP_ZERO instead.

    (This patch is only compile tested)

    Cc: Carsten Otte
    Signed-off-by: Akinobu Mita
    Acked-by: Hugh Dickins
    Acked-by: Carsten Otte
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

03 Jan, 2008

1 commit

  • Both SLUB and SLAB really did almost exactly the same thing for
    /proc/slabinfo setup, using duplicate code and per-allocator #ifdef's.

    This just creates a common CONFIG_SLABINFO that is enabled by both SLUB
    and SLAB, and shares all the setup code. Maybe SLOB will want this some
    day too.

    Reviewed-by: Pekka Enberg
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

02 Jan, 2008

1 commit

  • This adds a read-only /proc/slabinfo file on SLUB, that makes slabtop work.

    [ mingo@elte.hu: build fix. ]

    Cc: Andi Kleen
    Cc: Christoph Lameter
    Cc: Peter Zijlstra
    Signed-off-by: Pekka Enberg
    Signed-off-by: Ingo Molnar
    Signed-off-by: Linus Torvalds

    Pekka J Enberg
     

22 Dec, 2007

1 commit

  • Increase the minimum number of partial slabs to keep around, and put
    partial slabs at the end of the partial queue so that more objects can
    be added to them.

    Signed-off-by: Christoph Lameter
    Reviewed-by: Pekka Enberg
    Acked-by: Ingo Molnar
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

20 Dec, 2007

1 commit

  • Krzysztof Oledzki noticed a dirty page accounting leak on some of his
    machines, causing the machine to eventually lock up when the kernel
    decided that there was too much dirty data, but nobody could actually
    write anything out to fix it.

    The culprit turns out to be filesystems (cough ext3 with data=journal
    cough) that re-dirty the page when the "->invalidatepage()" callback is
    called.

    Fix it up by doing a final dirty page accounting check when we actually
    remove the page from the page cache.

    This fixes bugzilla entry 9182:

    http://bugzilla.kernel.org/show_bug.cgi?id=9182

    Tested-by: Ingo Molnar
    Tested-by: Krzysztof Oledzki
    Cc: Andrew Morton
    Cc: Nick Piggin
    Cc: Peter Zijlstra
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

18 Dec, 2007

7 commits

  • Remove a recently added useless masking of GFP_ZERO. GFP_ZERO is already
    masked out in new_slab() (See how it calls allocate_slab). No need to do
    it twice.

    This reverts the SLUB parts of 7fd272550bd43cc1d7289ef0ab2fa50de137e767.

    Cc: Matt Mackall
    Reviewed-by: Pekka Enberg
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • This reverts commit 54f9f80d6543fb7b157d3b11e2e7911dc1379790 ("hugetlb:
    Add hugetlb_dynamic_pool sysctl")

    Given the new sysctl nr_overcommit_hugepages, the boolean dynamic pool
    sysctl is not needed, as its semantics can be expressed by 0 in the
    overcommit sysctl (no dynamic pool) and non-0 in the overcommit sysctl
    (pool enabled).

    (Needed in 2.6.24 since it reverts a post-2.6.23 userspace-visible change)

    Signed-off-by: Nishanth Aravamudan
    Acked-by: Adam Litke
    Cc: William Lee Irwin III
    Cc: Dave Hansen
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     
  • hugetlb: introduce nr_overcommit_hugepages sysctl

    While examining the code to support /proc/sys/vm/hugetlb_dynamic_pool, I
    became convinced that having a boolean sysctl was insufficient:

    1) To support per-node control of hugepages, I have previously submitted
    patches to add a sysfs attribute related to nr_hugepages. However, with
    a boolean global value and per-mount quota enforcement constraining the
    dynamic pool, adding corresponding control of the dynamic pool on a
    per-node basis seems inconsistent to me.

    2) Administration of the hugetlb dynamic pool with multiple hugetlbfs
    mount points is, arguably, more arduous than it needs to be. Each quota
    would need to be set separately, and the sum would need to be monitored.

    To ease the administration, and to help make the way for per-node
    control of the static & dynamic hugepage pool, I added a separate
    sysctl, nr_overcommit_hugepages. This value serves as a high watermark
    for the overall hugepage pool, while nr_hugepages serves as a low
    watermark. The boolean sysctl can then be removed, as the condition

    nr_overcommit_hugepages > 0

    indicates the same administrative setting as

    hugetlb_dynamic_pool == 1

    Quotas still serve as local enforcement of the size of the pool on a
    per-mount basis.

    A few caveats:

    1) There is a race whereby the global surplus huge page counter is
    incremented before a hugepage has been allocated. Another process could
    then try to grow the pool, fail to convert a surplus huge page to a
    normal huge page, and instead allocate a fresh huge page. I believe
    this is benign, as no memory is leaked (the actual pages are still
    tracked correctly) and the counters won't go out of sync.

    2) Shrinking the static pool while a surplus is in effect will allow the
    number of surplus huge pages to exceed the overcommit value. As long as
    this condition holds, however, no more surplus huge pages will be
    allowed on the system until one of the two sysctls is increased
    sufficiently, or the surplus huge pages go out of use and are freed.

    Successfully tested on x86_64 with the current libhugetlbfs snapshot,
    modified to use the new sysctl.

    Signed-off-by: Nishanth Aravamudan
    Acked-by: Adam Litke
    Cc: William Lee Irwin III
    Cc: Dave Hansen
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     
  • In some cases the IO subsystem is able to merge requests if the pages are
    adjacent in physical memory. This was achieved in the allocator by having
    expand() return pages in physically contiguous order in situations where a
    large buddy was split. However, list-based anti-fragmentation changed the
    order pages were returned in to avoid searching in buffered_rmqueue() for a
    page of the appropriate migrate type.

    This patch restores behaviour of rmqueue_bulk() preserving the physical
    order of pages returned by the allocator without incurring increased search
    costs for anti-fragmentation.

    Signed-off-by: Mel Gorman
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Mark Lord
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Improve the error handling for mm/sparse.c::sparse_add_one_section().
    Also, there is no reason to check 'usemap' until the
    'pgdat_resize_lock' is held.

    [geoffrey.levand@am.sony.com: sparse_index_init() returns -EEXIST]
    Cc: Christoph Lameter
    Acked-by: Dave Hansen
    Cc: Rik van Riel
    Acked-by: Yasunori Goto
    Cc: Andy Whitcroft
    Signed-off-by: WANG Cong
    Signed-off-by: Geoff Levand
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    WANG Cong
     
  • Since sparse_index_alloc() can return NULL on memory allocation failure,
    we must deal with the failure condition when calling it.

    Signed-off-by: WANG Cong
    Cc: Christoph Lameter
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    WANG Cong
     
  • SPARSEMEM_VMEMMAP needs to be a selectable config option to support
    building the kernel both with and without sparsemem vmemmap support. This
    selection is desirable for platforms which could be configured one way for
    platform specific builds and the other for multi-platform builds.

    Signed-off-by: Miguel Botón
    Signed-off-by: Geoff Levand
    Acked-by: Yasunori Goto
    Cc: Christoph Lameter
    Cc: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Geoff Levand
     

11 Dec, 2007

1 commit

  • The follow_hugetlb_page() fix I posted (merged as git commit
    5b23dbe8173c212d6a326e35347b038705603d39) missed one case. If the pte is
    present, but not writable and write access is requested by the caller to
    get_user_pages(), the code will do the wrong thing. Rather than calling
    hugetlb_fault to make the pte writable, it notes the presence of the pte
    and continues.

    This simple one-liner makes sure we also fault on the pte for this case.
    Please apply.

    Signed-off-by: Adam Litke
    Acked-by: Dave Kleikamp
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     

10 Dec, 2007

1 commit

  • Both slob and slub react to __GFP_ZERO by clearing the allocation, which
    means that passing the GFP_ZERO bit down to the page allocator is just
    wasteful and pointless.

    Acked-by: Matt Mackall
    Reviewed-by: Pekka Enberg
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

06 Dec, 2007

2 commits

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/selinux-2.6:
    VM/Security: add security hook to do_brk
    Security: round mmap hint address above mmap_min_addr
    security: protect from stack expantion into low vm addresses
    Security: allow capable check to permit mmap or low vm space
    SELinux: detect dead booleans
    SELinux: do not clear f_op when removing entries

    Linus Torvalds
     
  • Given a specifically crafted binary, do_brk() can be used to get low pages
    available in userspace virtual memory and can thus be used to circumvent
    the mmap_min_addr low memory protection. Add security checks in do_brk().

    Signed-off-by: Eric Paris
    Acked-by: Alan Cox
    Cc: Stephen Smalley
    Cc: James Morris
    Cc: Chris Wright
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric Paris