07 Nov, 2011

1 commit

  • * 'modsplit-Oct31_2011' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux: (230 commits)
    Revert "tracing: Include module.h in define_trace.h"
    irq: don't put module.h into irq.h for tracking irqgen modules.
    bluetooth: macroize two small inlines to avoid module.h
    ip_vs.h: fix implicit use of module_get/module_put from module.h
    nf_conntrack.h: fix up fallout from implicit moduleparam.h presence
    include: replace linux/module.h with "struct module" wherever possible
    include: convert various register fcns to macros to avoid include chaining
    crypto.h: remove unused crypto_tfm_alg_modname() inline
    uwb.h: fix implicit use of asm/page.h for PAGE_SIZE
    pm_runtime.h: explicitly requires notifier.h
    linux/dmaengine.h: fix implicit use of bitmap.h and asm/page.h
    miscdevice.h: fix up implicit use of lists and types
    stop_machine.h: fix implicit use of smp.h for smp_processor_id
    of: fix implicit use of errno.h in include/linux/of.h
    of_platform.h: delete needless include
    acpi: remove module.h include from platform/aclinux.h
    miscdevice.h: delete unnecessary inclusion of module.h
    device_cgroup.h: delete needless include
    net: sch_generic remove redundant use of <string.h>
    net: inet_timewait_sock doesnt need <module.h>
    ...

    Fix up trivial conflicts (other header files, and removal of the ab3550 mfd driver) in
    - drivers/media/dvb/frontends/dibx000_common.c
    - drivers/media/video/{mt9m111.c,ov6650.c}
    - drivers/mfd/ab3550-core.c
    - include/linux/dmaengine.h
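
    The recurring technique in this series -- replacing a full
    <linux/module.h> include with a bare forward declaration wherever a
    header only needs a pointer to a module -- looks roughly like this
    minimal sketch (struct my_device_driver is a hypothetical example,
    not a kernel type):

        /* A header that only stores a pointer to a module does not
         * need the full <linux/module.h> definition, only a forward
         * declaration; this breaks the module.h include chain. */
        struct module;                          /* forward declaration */

        struct my_device_driver {               /* hypothetical example */
                const char      *name;
                struct module   *owner;         /* pointer use only: OK */
        };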

    Linus Torvalds
     

01 Nov, 2011

1 commit


31 Oct, 2011

1 commit


18 Aug, 2011

1 commit


28 Oct, 2010

1 commit

  • It appears i386 uses the kmap_atomic infrastructure regardless of
    CONFIG_HIGHMEM, which results in a compile error when highmem is
    disabled.

    Cure this by providing the needed few bits for both CONFIG_HIGHMEM and
    CONFIG_X86_32.
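
    A minimal sketch of the shape of the fix, guarding the shared
    kmap_atomic bits on either option rather than on CONFIG_HIGHMEM
    alone (illustrative, not the literal patch):

        /* Make the shared pieces available whenever either option is
         * set, so 32-bit x86 still compiles with highmem disabled. */
        #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
        DECLARE_PER_CPU(int, __kmap_atomic_idx);
        /* ... kmap_atomic_idx_push()/pop() and friends live here ... */
        #endif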

    Signed-off-by: Peter Zijlstra
    Reported-by: Chris Wilson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

27 Oct, 2010

1 commit

  • Keep the current interface but ignore the KM_type argument and use a
    stack-based approach.

    The advantage is that we get rid of crappy code like:

    #define __KM_PTE \
        (in_nmi() ? KM_NMI_PTE : \
         in_irq() ? KM_IRQ_PTE : \
         KM_PTE0)

    and in general can stop worrying about what context we're in and what kmap
    slots might be appropriate for that.

    The downside is that FRV kmap_atomic() gets more expensive.

    For now we use a CPP trick suggested by Andrew:

    #define kmap_atomic(page, args...) __kmap_atomic(page)

    to avoid having to touch all kmap_atomic() users in a single patch.
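
    Under the hood, the stack-based approach amounts to a per-CPU slot
    index that is pushed on map and popped on unmap in strict LIFO
    order; roughly (a sketch of the idea, close to but not necessarily
    the literal helpers):

        /* Per-CPU stack of atomic kmap slots: kmap_atomic() pushes the
         * next free slot, kunmap_atomic() pops it, so nested contexts
         * (task -> irq -> nmi) naturally get distinct slots. */
        DECLARE_PER_CPU(int, __kmap_atomic_idx);

        static inline int kmap_atomic_idx_push(void)
        {
                return __this_cpu_inc_return(__kmap_atomic_idx) - 1;
        }

        static inline int kmap_atomic_idx_pop(void)
        {
                return __this_cpu_dec_return(__kmap_atomic_idx);
        }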

    [ not compiled on:
    - mn10300: the arch doesn't actually build with highmem to begin with ]

    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
    Acked-by: Rik van Riel
    Signed-off-by: Peter Zijlstra
    Acked-by: Chris Metcalf
    Cc: David Howells
    Cc: Hugh Dickins
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Steven Rostedt
    Cc: Russell King
    Cc: Ralf Baechle
    Cc: David Miller
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: Dave Airlie
    Cc: Li Zefan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

05 Aug, 2010

1 commit


25 May, 2010

1 commit

  • In f4112de6b679d84bd9b9681c7504be7bdfb7c7d5 ("mm: introduce
    debug_kmap_atomic") I said that debug_kmap_atomic() needs
    CONFIG_TRACE_IRQFLAGS_SUPPORT.

    It was wrong. (I thought irqs_disabled() was only available when the
    architecture has CONFIG_TRACE_IRQFLAGS_SUPPORT.)

    Remove the #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT check to enable
    kmap_atomic() debugging for the architectures which do not have
    CONFIG_TRACE_IRQFLAGS_SUPPORT.
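
    In other words, the change reduces to dropping one condition from
    the guard around the debug hook; a sketch of the before/after shape,
    assuming the hook is also gated on CONFIG_DEBUG_HIGHMEM as in
    mainline at the time:

        /* Before: debugging required both options. */
        #if defined(CONFIG_DEBUG_HIGHMEM) && defined(CONFIG_TRACE_IRQFLAGS_SUPPORT)
        void debug_kmap_atomic(enum km_type type);
        #else
        static inline void debug_kmap_atomic(enum km_type type) { }
        #endif

        /* After: irqs_disabled() is available everywhere, so
         * CONFIG_DEBUG_HIGHMEM alone is enough. */
        #ifdef CONFIG_DEBUG_HIGHMEM
        void debug_kmap_atomic(enum km_type type);
        #else
        static inline void debug_kmap_atomic(enum km_type type) { }
        #endif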

    Reported-by: Andrew Morton
    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

05 Feb, 2010

1 commit


10 Nov, 2009

2 commits


16 Jun, 2009

1 commit


01 Apr, 2009

1 commit

  • x86 has debug_kmap_atomic_prot(), which is an error-checking function
    for kmap_atomic. It is useful for the other architectures, although
    it needs CONFIG_TRACE_IRQFLAGS_SUPPORT.

    This patch exposes it to the other architectures.
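
    The check itself is conceptually simple: compare the KM_* slot the
    caller picked against the context the CPU is actually in. A
    condensed sketch of the idea (the real function covers more slots
    and rate-limits its warnings):

        /* Warn when the chosen kmap slot contradicts the context. */
        static void debug_kmap_atomic(enum km_type type)
        {
                if (in_irq()) {
                        /* hardirq context must use the KM_IRQ* slots */
                        WARN_ON(type != KM_IRQ0 && type != KM_IRQ1);
                } else if (!irqs_disabled()) {
                        /* ordinary context must not use them */
                        WARN_ON(type == KM_IRQ0 || type == KM_IRQ1);
                }
        }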

    Signed-off-by: Akinobu Mita
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

16 Mar, 2009

1 commit

  • Most ARM machines have a non-IO-coherent cache, meaning that the
    dma_map_*() set of functions must clean and/or invalidate the
    affected memory manually before DMA occurs. And because the majority
    of those machines have a VIVT cache, the cache maintenance
    operations must be performed using virtual addresses.

    When a highmem page is kunmap'd, its mapping (and cache) remains in place
    in case it is kmap'd again. However if dma_map_page() is then called with
    such a page, some cache maintenance on the remaining mapping must be
    performed. In that case, page_address(page) is non-null and we can use
    that to synchronize the cache.

    It is unlikely but still possible for kmap() to race and recycle the
    virtual address obtained above, and use it for another page before some
    on-going cache invalidation loop in dma_map_page() is done. In that case,
    the new mapping could end up with dirty cache lines for another page,
    and the unsuspecting cache invalidation loop in dma_map_page() might
    simply discard those dirty cache lines resulting in data loss.

    For example, let's consider this sequence of events:

    - dma_map_page(..., DMA_FROM_DEVICE) is called on a highmem page.

      --> vaddr = page_address(page) is non-null. In this case it is
          likely that the page has valid cache lines associated with
          vaddr. Remember that the cache is VIVT.

      --> for (i = vaddr; i < vaddr + PAGE_SIZE; i += 32)
                  invalidate_cache_line(i);

      *** preemption occurs in the middle of the loop above ***

    - kmap_high() is called for a different page.

      --> last_pkmap_nr wraps to zero and flush_all_zero_pkmaps() is
          called. The pkmap_count value for the page passed to
          dma_map_page() above happens to be 1, so the page is
          unmapped. But prior to that, flush_cache_kmaps() cleared the
          cache for it. So far so good.

      --> A fresh pkmap entry is assigned for this kmap request.
          Murphy's law says this pkmap entry will eventually happen to
          use the same vaddr as the one which used to belong to the
          other page being processed by dma_map_page() in the
          preempted thread above.

    - The kmap_high() caller starts dirtying the cache using the
      just-assigned virtual mapping for its page.

    *** the first thread is rescheduled ***

    - The for(...) loop is resumed, but now the cached data belonging
      to a different physical page is being discarded!

    And this is not only a preemption issue: ARM can be SMP as well,
    making the above scenario just as likely. Hence the need for some kind
    of pkmap page pinning which can be used in any context, primarily for
    the benefit of dma_map_page() on ARM.

    This provides the necessary interface to cope with the above issue if
    ARCH_NEEDS_KMAP_HIGH_GET is defined, otherwise the resulting code is
    unchanged.
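
    The interface boils down to an atomic "pin this page if it is
    currently kmapped" operation, roughly (a sketch close to the
    mainline implementation; lock_kmap_any()/unlock_kmap_any() stand for
    the interrupt-safe variant of the kmap lock used when
    ARCH_NEEDS_KMAP_HIGH_GET is defined):

        /* Atomically bump the pkmap refcount if (and only if) the page
         * already has a mapping, so that mapping cannot be recycled
         * while dma_map_page() is still using its virtual address. */
        void *kmap_high_get(struct page *page)
        {
                unsigned long vaddr, flags;

                lock_kmap_any(flags);
                vaddr = (unsigned long)page_address(page);
                if (vaddr) {
                        BUG_ON(pkmap_count[PKMAP_NR(vaddr)] < 1);
                        pkmap_count[PKMAP_NR(vaddr)]++; /* pin it */
                }
                unlock_kmap_any(flags);
                return (void *)vaddr;
        }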

    Signed-off-by: Nicolas Pitre
    Reviewed-by: MinChan Kim
    Acked-by: Andrew Morton

    Nicolas Pitre
     

15 Aug, 2008

1 commit

  • Highmem code can leave ptes and tlb entries around for a given page
    even after kunmap, and even after the page has been freed.

    From what I can gather, the PAT code may change the cache attributes
    of arbitrary physical addresses (i.e. including highmem pages),
    which would result in aliases if it operates on one of these lazy
    tlb highmem pages.

    Flushing kmaps should solve the problem.

    I've also added code for conditional flushing: the flush is skipped
    when there are no dangling highmem aliases. This should help
    performance if we change page attributes frequently or on systems
    that aren't using many highmem pages (e.g. with < 4G RAM). This
    should be turned into 2 patches, but it is just an RFC for now...
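
    The conditional part can be as simple as a counter of
    potentially-dirty kmap slots that is consulted before paying for a
    full flush; a hypothetical sketch (kmap_dirty_count is an
    illustrative name, not a kernel symbol):

        /* Only pay for the full kmap flush when some kmap slot may
         * still hold a stale alias of a highmem page. */
        static atomic_t kmap_dirty_count = ATOMIC_INIT(0);

        static void flush_kmaps_if_needed(void)
        {
                if (atomic_read(&kmap_dirty_count)) {
                        kmap_flush_unused();    /* drop dangling aliases */
                        atomic_set(&kmap_dirty_count, 0);
                }
        }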

    Signed-off-by: Ingo Molnar

    Nick Piggin
     

20 Jul, 2008

1 commit


20 Mar, 2008

1 commit


06 Feb, 2008

1 commit


18 Jul, 2007

1 commit

  • The following 8 patches against 2.6.20-mm2 create a zone called ZONE_MOVABLE
    that is only usable by allocations that specify both __GFP_HIGHMEM and
    __GFP_MOVABLE. This has the effect of keeping all non-movable pages within a
    single memory partition while allowing movable allocations to be satisfied
    from either partition. The patches may be applied with the list-based
    anti-fragmentation patches that group pages together based on mobility.

    The size of the zone is determined by a kernelcore= parameter specified at
    boot-time. This specifies how much memory is usable by non-movable
    allocations and the remainder is used for ZONE_MOVABLE. Any range of pages
    within ZONE_MOVABLE can be released by migrating the pages or by reclaiming.

    When selecting a zone to take pages from for ZONE_MOVABLE, there are two
    things to consider. First, only memory from the highest populated zone is
    used for ZONE_MOVABLE. On the x86, this is probably going to be ZONE_HIGHMEM
    but it would be ZONE_DMA on ppc64 or possibly ZONE_DMA32 on x86_64. Second,
    the amount of memory usable by the kernel will be spread evenly throughout
    NUMA nodes where possible. If the nodes are not of equal size, the amount of
    memory usable by the kernel on some nodes may be greater than others.

    By default, the zone is not as useful for hugetlb allocations because they are
    pinned and non-migratable (currently at least). A sysctl is provided that
    allows huge pages to be allocated from that zone. This means that the huge
    page pool can be resized to the size of ZONE_MOVABLE during the lifetime of
    the system assuming that pages are not mlocked. Despite huge pages being
    non-movable, we do not introduce additional external fragmentation of note as
    huge pages are always the largest contiguous block we care about.

    Credit goes to Andy Whitcroft for catching a large variety of problems during
    review of the patches.

    This patch creates an additional zone, ZONE_MOVABLE. This zone is only usable
    by allocations which specify both __GFP_HIGHMEM and __GFP_MOVABLE. Hot-added
    memory continues to be placed in its existing destination as there is
    no mechanism to redirect it to a specific zone.
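
    The gating on the flag pair is essentially a single test in the zone
    selection path; a simplified sketch of the idea (not the literal
    gfp_zone() of the time):

        /* Only when BOTH flags are present may the allocation fall
         * into ZONE_MOVABLE; everything else is steered to the
         * conventional zones. */
        static inline enum zone_type gfp_zone(gfp_t flags)
        {
                if ((flags & (__GFP_HIGHMEM | __GFP_MOVABLE)) ==
                                (__GFP_HIGHMEM | __GFP_MOVABLE))
                        return ZONE_MOVABLE;
                if (flags & __GFP_DMA)
                        return ZONE_DMA;
                if (flags & __GFP_HIGHMEM)
                        return ZONE_HIGHMEM;
                return ZONE_NORMAL;
        }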

    [y-goto@jp.fujitsu.com: Fix section mismatch of memory hotplug related code]
    [akpm@linux-foundation.org: various fixes]
    Signed-off-by: Mel Gorman
    Cc: Andy Whitcroft
    Signed-off-by: Yasunori Goto
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

03 May, 2007

1 commit

  • Xen and VMI both have special requirements when mapping a highmem pte
    page into the kernel address space. These can be dealt with by adding
    a new kmap_atomic_pte() function for mapping highptes, and hooking it
    into the paravirt_ops infrastructure.

    Xen specifically wants to map the pte page RO, so this patch exposes a
    helper function, kmap_atomic_prot, which maps the page with the
    specified page protections.

    This also adds a kmap_flush_unused() function to clear out the cached
    kmap mappings. Xen needs this to clear out any potential stray RW
    mappings of pages which will become part of a pagetable.
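
    A sketch of how a hypervisor-aware caller might use the new hook to
    map a pte page read-only (the call shape follows the description
    above; map_pte_page_ro is a hypothetical helper):

        /* Map a highmem pte page with explicit (read-only) protections
         * instead of the default PAGE_KERNEL of plain kmap_atomic(). */
        static void *map_pte_page_ro(struct page *ptepage, enum km_type type)
        {
                return kmap_atomic_prot(ptepage, type, PAGE_KERNEL_RO);
        }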

    [ Zach - vmi.c will need some attention after this patch. It wasn't
    immediately obvious to me what needs to be done. ]

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Andi Kleen
    Cc: Zachary Amsden

    Jeremy Fitzhardinge
     

12 Feb, 2007

1 commit

  • This again simplifies some of the VM counter calculations through
    the use of the ZVC consolidated counters.
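
    The pattern is that a derived quantity which used to be maintained
    by hand can now be read straight out of the consolidated per-zone
    counters; a simplified single-node sketch (not the literal patch):

        /* Read the free-highmem count directly from the ZVC counter
         * instead of keeping a separate running total. */
        static unsigned long free_highpages_sketch(void)
        {
                struct zone *z = &NODE_DATA(0)->node_zones[ZONE_HIGHMEM];

                return zone_page_state(z, NR_FREE_PAGES);
        }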

    [michal.k.k.piotrowski@gmail.com: build fix]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Michal Piotrowski
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

01 Oct, 2006

1 commit


26 Sep, 2006

1 commit


01 Jul, 2006

1 commit

  • Conversion of nr_bounce to a per-zone counter.

    nr_bounce is only used for /proc output, so it could be left as an
    event counter. However, the event counters may not be accurate, and
    nr_bounce categorizes types of pages in a zone, so we really need it
    to be a per-zone counter as well.
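
    With the per-zone machinery, accounting a bounce page becomes a pair
    of symmetric calls charged against the zone the page actually lives
    in; a minimal sketch of the usage (helper names are illustrative):

        /* Charge/uncharge the bounce page against its own zone; a bare
         * event counter could not attribute the page to a zone. */
        static void bounce_account_alloc(struct page *bounce_page)
        {
                inc_zone_page_state(bounce_page, NR_BOUNCE);
        }

        static void bounce_account_free(struct page *bounce_page)
        {
                dec_zone_page_state(bounce_page, NR_BOUNCE);
        }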

    [akpm@osdl.org: bugfix]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

02 Apr, 2006

1 commit


27 Mar, 2006

1 commit


24 Mar, 2006

1 commit


28 Oct, 2005

1 commit


09 Oct, 2005

1 commit

  • - added typedef unsigned int __nocast gfp_t;

    - replaced __nocast uses for gfp flags with gfp_t - it gives exactly
    the same warnings as far as sparse is concerned, doesn't change the
    generated code (from gcc's point of view we replaced unsigned int
    with a typedef) and documents what's going on far better.
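
    The before/after shape, using kmalloc() as the obvious example (a
    sketch; the series touched many such prototypes):

        /* Before: the sparse annotation is repeated at each use site
         * and reads as a bare unsigned int. */
        void *kmalloc(size_t size, unsigned int __nocast flags);

        /* After: the typedef carries the annotation once and documents
         * the parameter; gcc still sees the same unsigned int. */
        typedef unsigned int __nocast gfp_t;
        void *kmalloc(size_t size, gfp_t flags);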

    Signed-off-by: Al Viro
    Signed-off-by: Linus Torvalds

    Al Viro
     

01 May, 2005

1 commit

  • This is a patch for counting the number of pages for bounce buffers. It's
    shown in /proc/vmstat.

    Currently, the number of bounce pages is not counted anywhere, so if
    there are many bounce pages, it looks as though pages are being
    leaked. It is also difficult for a user to gauge the usage of bounce
    pages, so it's meaningful to show the number of bounce pages.

    Signed-off-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds