15 Aug, 2008

1 commit

  • Highmem code can leave PTEs and TLB entries around for a given page even
    after kunmap, and even after the page has been freed.

    From what I can gather, the PAT code may change the cache attributes of
    arbitrary physical addresses (i.e. including highmem pages), which would
    result in aliases if it operates on one of these lazy TLB highmem pages.

    Flushing kmaps should solve the problem.

    I've also added code for conditional flushing: if there are no dangling
    highmem aliases, the flush is skipped. This should help performance if we
    change page attributes frequently, or on systems that aren't using many
    highmem pages (e.g. with < 4G RAM). This should be split into two patches,
    but it is posted as one for RFC...
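
    As a rough standalone sketch (all names here are made up, not the
    kernel's), the conditional flush boils down to a dirty flag that lets
    the expensive global flush be skipped when nothing is dangling:

        #include <stdbool.h>
        #include <stdio.h>

        static bool kmap_dirty; /* any lazily torn-down highmem mapping? */

        static void kunmap_lazy(void)
        {
            kmap_dirty = true; /* pte/tlb entry left in place for speed */
        }

        static void flush_kmaps_if_needed(void)
        {
            if (!kmap_dirty) {
                puts("no dangling aliases, skipping flush");
                return; /* fast path: little or no highmem in use */
            }
            puts("flushing stale kmap ptes/tlb entries");
            kmap_dirty = false;
        }

        int main(void)
        {
            flush_kmaps_if_needed(); /* cheap: nothing to do */
            kunmap_lazy();
            flush_kmaps_if_needed(); /* pays the flush cost once */
            return 0;
        }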

    Signed-off-by: Ingo Molnar

    Nick Piggin
     

18 Jul, 2007

1 commit

  • The following 8 patches against 2.6.20-mm2 create a zone called ZONE_MOVABLE
    that is only usable by allocations that specify both __GFP_HIGHMEM and
    __GFP_MOVABLE. This has the effect of keeping all non-movable pages within a
    single memory partition while allowing movable allocations to be satisfied
    from either partition. The patches may be applied with the list-based
    anti-fragmentation patches that group pages together based on mobility.

    The size of the zone is determined by the kernelcore= parameter specified
    at boot time. This specifies how much memory is usable by non-movable
    allocations; the remainder is used for ZONE_MOVABLE. Any range of pages
    within ZONE_MOVABLE can be released by migrating the pages or by reclaiming
    them.

    When selecting a zone to take pages from for ZONE_MOVABLE, there are two
    things to consider. First, only memory from the highest populated zone is
    used for ZONE_MOVABLE. On x86, this is probably going to be ZONE_HIGHMEM,
    but it would be ZONE_DMA on ppc64 or possibly ZONE_DMA32 on x86_64. Second,
    the amount of memory usable by the kernel will be spread evenly throughout
    NUMA nodes where possible. If the nodes are not of equal size, the amount
    of memory usable by the kernel on some nodes may be greater than on others.
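
    For illustration only (the function is hypothetical; the real calculation
    also handles rounding, memory holes, and nodes smaller than their share),
    the per-node split works out to roughly:

        #include <stdio.h>

        /* With kernelcore=K pages and N populated nodes, each node keeps
         * about K/N pages for the kernel; the rest of its highest zone
         * is donated to ZONE_MOVABLE. */
        static unsigned long movable_pages_on_node(unsigned long node_pages,
                                                   unsigned long kernelcore,
                                                   unsigned int nr_nodes)
        {
            unsigned long per_node = kernelcore / nr_nodes;

            return node_pages > per_node ? node_pages - per_node : 0;
        }

        int main(void)
        {
            /* two 1G nodes (262144 4K pages), kernelcore of 128M (32768) */
            printf("%lu movable pages on node 0\n",
                   movable_pages_on_node(262144, 32768, 2));
            return 0;
        }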

    By default, the zone is not as useful for hugetlb allocations, because
    they are pinned and non-migratable (currently, at least). A sysctl is
    provided that allows huge pages to be allocated from that zone. This means
    that the huge page pool can be resized to the size of ZONE_MOVABLE during
    the lifetime of the system, assuming that pages are not mlocked. Despite
    huge pages being non-movable, we do not introduce additional external
    fragmentation of note, as huge pages are always the largest contiguous
    block we care about.

    Credit goes to Andy Whitcroft for catching a large variety of problems during
    review of the patches.

    This patch creates an additional zone, ZONE_MOVABLE. This zone is only
    usable by allocations which specify both __GFP_HIGHMEM and __GFP_MOVABLE.
    Hot-added memory continues to be placed in its existing destination, as
    there is no mechanism to redirect it to a specific zone.
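
    A toy version of the gating rule described above (the flag values are
    invented for the demo; the real test lives in the kernel's zone fallback
    logic):

        #include <stdbool.h>
        #include <stdio.h>

        #define EX_GFP_HIGHMEM 0x01u /* made-up stand-ins for gfp flags */
        #define EX_GFP_MOVABLE 0x02u

        static bool can_use_zone_movable(unsigned int gfp_flags)
        {
            unsigned int need = EX_GFP_HIGHMEM | EX_GFP_MOVABLE;

            return (gfp_flags & need) == need; /* both flags required */
        }

        int main(void)
        {
            printf("%d\n", can_use_zone_movable(EX_GFP_HIGHMEM)); /* 0 */
            printf("%d\n", can_use_zone_movable(EX_GFP_HIGHMEM |
                                                EX_GFP_MOVABLE)); /* 1 */
            return 0;
        }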

    [y-goto@jp.fujitsu.com: Fix section mismatch of memory hotplug related code]
    [akpm@linux-foundation.org: various fixes]
    Signed-off-by: Mel Gorman
    Cc: Andy Whitcroft
    Signed-off-by: Yasunori Goto
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

03 May, 2007

1 commit

  • Xen and VMI both have special requirements when mapping a highmem pte
    page into the kernel address space. These can be dealt with by adding
    a new kmap_atomic_pte() function for mapping highptes, and hooking it
    into the paravirt_ops infrastructure.

    Xen specifically wants to map the pte page RO, so this patch exposes a
    helper function, kmap_atomic_prot, which maps the page with the
    specified page protections.

    This also adds a kmap_flush_unused() function to clear out the cached
    kmap mappings. Xen needs this to clear out any potential stray RW
    mappings of pages which will become part of a pagetable.
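
    A minimal user-space model of the idea (the types, names, and address
    arithmetic are illustrative; the real function returns a fixmap address
    and takes a kernel pgprot_t):

        #include <stdio.h>

        typedef unsigned int pgprot_ex; /* stand-in for pgprot_t */
        #define PROT_EX_RW 0x3u
        #define PROT_EX_RO 0x1u

        struct page_ex { unsigned long pfn; };

        /* Like kmap_atomic, but the caller picks the protections */
        static void *kmap_atomic_prot_ex(struct page_ex *pg, pgprot_ex prot)
        {
            printf("map pfn %lu with prot %#x\n", pg->pfn, prot);
            return (void *)(pg->pfn << 12); /* pretend fixmap address */
        }

        int main(void)
        {
            struct page_ex pte_page = { .pfn = 42 };

            /* Xen-style use: map a pagetable page read-only */
            void *va = kmap_atomic_prot_ex(&pte_page, PROT_EX_RO);

            printf("mapped at %p\n", va);
            return 0;
        }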

    [ Zach - vmi.c will need some attention after this patch. It wasn't
    immediately obvious to me what needs to be done. ]

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Andi Kleen
    Cc: Zachary Amsden

    Jeremy Fitzhardinge
     

12 Feb, 2007

1 commit

  • This again simplifies some of the VM counter calculations through the use
    of the ZVC consolidated counters.

    [michal.k.k.piotrowski@gmail.com: build fix]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Michal Piotrowski
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

01 Jul, 2006

1 commit

  • Conversion of nr_bounce to a per-zone counter

    nr_bounce is only used for proc output, so it could be left as an event
    counter. However, the event counters may not be accurate, and nr_bounce
    categorizes types of pages in a zone, so we really need this to be a
    per-zone counter as well.
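
    The distinction, as a standalone toy (names invented): an event counter
    only ever increases and may be sampled imprecisely, while a per-zone
    counter tracks the current population of a page type exactly:

        #include <stdio.h>

        struct zone_ex {
            long nr_bounce; /* per zone: bounce pages in use right now */
        };

        static unsigned long bounce_events; /* events: total ever seen */

        static void account_bounce(struct zone_ex *z, int pages)
        {
            z->nr_bounce += pages; /* negative on free, so it can drop */
            if (pages > 0)
                bounce_events += pages;
        }

        int main(void)
        {
            struct zone_ex z = { 0 };

            account_bounce(&z, 4);
            account_bounce(&z, -4);
            printf("in use: %ld, ever allocated: %lu\n",
                   z.nr_bounce, bounce_events); /* 0 vs 4 */
            return 0;
        }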

    [akpm@osdl.org: bugfix]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

09 Oct, 2005

1 commit

  • - added typedef unsigned int __nocast gfp_t;

    - replaced __nocast uses for gfp flags with gfp_t - it gives exactly the
    same warnings as far as sparse is concerned, doesn't change the generated
    code (from gcc's point of view we replaced unsigned int with a typedef)
    and documents what's going on far better.
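
    A standalone illustration (the flag value and allocator are invented for
    the demo; __nocast itself is the sparse annotation as the kernel's
    compiler.h defines it, and plain gcc sees nothing):

        #include <stddef.h>
        #include <stdlib.h>

        #ifdef __CHECKER__ /* defined by sparse, not by gcc */
        #define __nocast __attribute__((nocast))
        #else
        #define __nocast
        #endif

        typedef unsigned int __nocast gfp_t;

        #define EX_GFP_KERNEL 0x10u /* invented flag value for the demo */

        /* The signature now says "gfp flags", not just "unsigned int" */
        static void *example_alloc(size_t size, gfp_t flags)
        {
            (void)flags; /* a user-space stand-in has no use for them */
            return malloc(size);
        }

        int main(void)
        {
            void *p = example_alloc(64, EX_GFP_KERNEL);

            free(p);
            return 0;
        }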

    Signed-off-by: Al Viro
    Signed-off-by: Linus Torvalds

    Al Viro
     

01 May, 2005

1 commit

  • This is a patch for counting the number of pages used for bounce buffers.
    The count is shown in /proc/vmstat.

    Currently, the number of bounce pages is not counted anywhere, so if there
    are many bounce pages it looks as though pages are being leaked. It is also
    difficult for a user to gauge how many bounce pages are in use, so it is
    worthwhile to show the number of bounce pages.

    Signed-off-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds