02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be applied
    to a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files, created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file-by-file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to each file. She confirmed any determination that was
    not immediately clear with lawyers working with the Linux Foundation.

    The criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
      lines of source.
    - File already had some variant of a license header in it (even if <5
      lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

10 Nov, 2015

1 commit


19 May, 2015

1 commit

  • The existing code relies on pagefault_disable() implicitly disabling
    preemption, so that no schedule will happen between kmap_atomic() and
    kunmap_atomic().

    Let's make this explicit, to prepare for pagefault_disable() not
    touching preemption anymore.
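
    For the !CONFIG_HIGHMEM stub in this header the change amounts to
    roughly the following (a sketch, not the literal diff):

    static inline void *kmap_atomic(struct page *page)
    {
            preempt_disable();      /* now explicit, no longer implied by */
            pagefault_disable();    /* pagefault_disable()                */
            return page_address(page);
    }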

    Reviewed-and-tested-by: Thomas Gleixner
    Signed-off-by: David Hildenbrand
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: David.Laight@ACULAB.COM
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: airlied@linux.ie
    Cc: akpm@linux-foundation.org
    Cc: benh@kernel.crashing.org
    Cc: bigeasy@linutronix.de
    Cc: borntraeger@de.ibm.com
    Cc: daniel.vetter@intel.com
    Cc: heiko.carstens@de.ibm.com
    Cc: herbert@gondor.apana.org.au
    Cc: hocko@suse.cz
    Cc: hughd@google.com
    Cc: mst@redhat.com
    Cc: paulus@samba.org
    Cc: ralf@linux-mips.org
    Cc: schwidefsky@de.ibm.com
    Cc: yang.shi@windriver.com
    Link: http://lkml.kernel.org/r/1431359540-32227-5-git-send-email-dahi@linux.vnet.ibm.com
    Signed-off-by: Ingo Molnar

    David Hildenbrand
     

07 Aug, 2014

1 commit

  • __kmap_atomic_idx is a per_cpu variable. Each CPU can use KM_TYPE_NR
    entries from FIXMAP, i.e. from 0 to KM_TYPE_NR - 1. Allowing
    __kmap_atomic_idx to overshoot to KM_TYPE_NR can clobber the next
    CPU's 0th entry, which is a bug. Hence BUG_ON if __kmap_atomic_idx >=
    KM_TYPE_NR.

    Fix the off-by-one in this test.
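
    A sketch of the corrected bounds check (illustrative only; the
    debug-only in_irq() warning is elided):

    static inline int kmap_atomic_idx_push(void)
    {
            int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;

    #ifdef CONFIG_DEBUG_HIGHMEM
            BUG_ON(idx >= KM_TYPE_NR);      /* was ">", i.e. off by one */
    #endif
            return idx;
    }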

    Signed-off-by: Chintan Pandya
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chintan Pandya
     

24 Feb, 2013

1 commit


01 Aug, 2012

1 commit

  • The patch "mm: add support for a filesystem to activate swap files and use
    direct_IO for writing swap pages" added support for using direct_IO to
    write swap pages but it is insufficient for highmem pages.

    To support highmem pages, this patch kmap()s the page before calling the
    direct_IO() handler. As direct_IO deals with virtual addresses, an
    additional helper is necessary for get_kernel_pages() to look up the
    struct page for a kmap virtual address.
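
    Conceptually the helper only has to map a kernel virtual address --
    possibly a kmap address -- back to its struct page; a rough sketch,
    with hypothetical naming:

    /* Sketch: a lowmem address resolves via the direct mapping, a kmap
     * address via the pkmap page table. */
    static struct page *kernel_addr_to_page(const void *addr)
    {
            unsigned long a = (unsigned long)addr;

            if (a >= PKMAP_ADDR(0) && a < PKMAP_ADDR(LAST_PKMAP))
                    return pte_page(pkmap_page_table[PKMAP_NR(a)]);

            return virt_to_page(addr);      /* direct-mapped lowmem */
    }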

    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Cc: Christoph Hellwig
    Cc: David S. Miller
    Cc: Eric B Munson
    Cc: Eric Paris
    Cc: James Morris
    Cc: Mel Gorman
    Cc: Mike Christie
    Cc: Neil Brown
    Cc: Peter Zijlstra
    Cc: Sebastian Andrzej Siewior
    Cc: Trond Myklebust
    Cc: Xiaotian Feng
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

24 Jul, 2012

1 commit


25 Mar, 2012

1 commit

  • Pull <linux/bug.h> cleanup from Paul Gortmaker:
    "The changes shown here are to unify linux's BUG support under the one
    <linux/bug.h> file. Due to historical reasons, we have some BUG code
    in bug.h and some in kernel.h -- i.e. the support for BUILD_BUG in
    linux/kernel.h predates the addition of linux/bug.h, but old code in
    kernel.h wasn't moved to bug.h at that time. As a band-aid, kernel.h
    was including <asm/bug.h> to pseudo-link them.

    This has caused confusion[1] and general yuck/WTF[2] reactions. Here
    is an example that violates the principle of least surprise:

    CC lib/string.o
    lib/string.c: In function 'strlcat':
    lib/string.c:225:2: error: implicit declaration of function 'BUILD_BUG_ON'
    make[2]: *** [lib/string.o] Error 1
    $
    $ grep linux/bug.h lib/string.c
    #include <linux/bug.h>
    $

    We've included <linux/bug.h> for the BUG infrastructure and yet we
    still get a compile fail! [We've not included <linux/kernel.h> for BUILD_BUG_ON.] Ugh -
    very confusing for someone who is new to kernel development.

    With the above in mind, the goals of this changeset are:

    1) find and fix any include/*.h files that were relying on the
    implicit presence of BUG code.
    2) find and fix any C files that were consuming kernel.h and hence
    relying on implicitly getting some/all BUG code.
    3) Move the BUG related code living in kernel.h to <linux/bug.h>
    4) remove the asm/bug.h include from kernel.h to finally break the chain.

    During development, the order was more like 3-4, build-test, 1-2. But
    to ensure that git history for bisect doesn't get needless build
    failures introduced, the commits have been reordered to fix the problem
    areas in advance.

    [1] https://lkml.org/lkml/2012/1/3/90
    [2] https://lkml.org/lkml/2012/1/17/414"

    Fix up conflicts (new radeon file, reiserfs header cleanups) as per Paul
    and linux-next.

    * tag 'bug-for-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux:
    kernel.h: doesn't explicitly use bug.h, so don't include it.
    bug: consolidate BUILD_BUG_ON with other bug code
    BUG: headers with BUG/BUG_ON etc. need linux/bug.h
    bug.h: add include of it to various implicit C users
    lib: fix implicit users of kernel.h for TAINT_WARN
    spinlock: macroize assert_spin_locked to avoid bug.h dependency
    x86: relocate get/set debugreg fcns to include/asm/debugreg.

    Linus Torvalds
     

20 Mar, 2012

3 commits


05 Mar, 2012

1 commit

  • If a header file is making use of BUG, BUG_ON, BUILD_BUG_ON, or any
    other BUG variant in a static inline (i.e. not in a #define), then
    that header really should be including <linux/bug.h> itself and not just
    expecting it to be implicitly present.

    We can make this change risk-free, since if the files using these
    headers didn't have exposure to linux/bug.h already, they would have
    been causing compile failures/warnings.
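
    In practice the rule looks like this (a hypothetical header, purely
    for illustration):

    /* some_subsys.h -- hypothetical header illustrating the rule */
    #ifndef _SOME_SUBSYS_H
    #define _SOME_SUBSYS_H

    #include <linux/bug.h>          /* explicit, because BUG_ON is used below */

    static inline void some_subsys_assert(int cond)
    {
            BUG_ON(!cond);          /* would not build if bug.h were only implicit */
    }

    #endif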

    Signed-off-by: Paul Gortmaker

    Paul Gortmaker
     

17 Dec, 2010

1 commit

  • Use this_cpu operations to optimize access primitives for highmem.

    The main effect is the avoidance of address calculations through the
    use of a segment prefix.

    V3->V4
    - kmap_atomic_idx: Do not return a value.
    - Use __this_cpu_dec without HIGHMEM_DEBUG
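
    The resulting accessors are essentially bare __this_cpu operations on
    the per-CPU index; a sketch with the CONFIG_DEBUG_HIGHMEM checks elided:

    DECLARE_PER_CPU(int, __kmap_atomic_idx);

    static inline int kmap_atomic_idx_push(void)
    {
            /* one segment-prefixed instruction, no per_cpu address math */
            return __this_cpu_inc_return(__kmap_atomic_idx) - 1;
    }

    static inline void kmap_atomic_idx_pop(void)
    {
            __this_cpu_dec(__kmap_atomic_idx);      /* no return value (V4) */
    }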

    Cc: Peter Zijlstra
    Cc: Catalin Marinas
    Acked-by: H. Peter Anvin
    Signed-off-by: Christoph Lameter
    Signed-off-by: Tejun Heo

    Christoph Lameter
     

12 Nov, 2010

1 commit

  • Commit 3e4d3af501cc ("mm: stack based kmap_atomic()") introduced the
    kmap_atomic_idx_push() function which warns on in_irq() with
    CONFIG_DEBUG_HIGHMEM enabled. This patch includes linux/hardirq.h for
    the in_irq definition.

    Signed-off-by: Catalin Marinas
    Acked-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     

28 Oct, 2010

2 commits

  • Christoph reported a nice splat which illustrated a race in the new stack
    based kmap_atomic implementation.

    The problem is that we pop our stack slot before we're completely done
    resetting its state -- in particular clearing the PTE (sometimes that is
    only done with CONFIG_DEBUG_HIGHMEM). If an interrupt happens before we actually clear
    the PTE used for the last slot, that interrupt can reuse the slot in a
    dirty state, which triggers a BUG in kmap_atomic().

    Fix this by introducing kmap_atomic_idx() which reports the current slot
    index without actually releasing it and use that to find the PTE and delay
    the _pop() until after we're completely done.
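
    Schematically, the fixed teardown order in __kunmap_atomic() becomes
    (an x86-flavoured sketch, not the literal diff):

    int type = kmap_atomic_idx();                   /* peek at the slot, don't release it */
    int idx  = type + KM_TYPE_NR * smp_processor_id();

    kpte_clear_flush(kmap_pte - idx, vaddr);        /* the slot is still ours here */
    kmap_atomic_idx_pop();                          /* only now may an interrupt reuse it */
    pagefault_enable();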

    Signed-off-by: Peter Zijlstra
    Reported-by: Christoph Hellwig
    Acked-by: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • It appears i386 uses kmap_atomic infrastructure regardless of
    CONFIG_HIGHMEM which results in a compile error when highmem is disabled.

    Cure this by providing the needed few bits for both CONFIG_HIGHMEM and
    CONFIG_X86_32.

    Signed-off-by: Peter Zijlstra
    Reported-by: Chris Wilson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

27 Oct, 2010

2 commits

  • Keep the current interface but ignore the KM_type and use a stack based
    approach.

    The advantage is that we get rid of crappy code like:

    #define __KM_PTE \
    (in_nmi() ? KM_NMI_PTE : \
    in_irq() ? KM_IRQ_PTE : \
    KM_PTE0)

    and in general can stop worrying about what context we're in and what kmap
    slots might be appropriate for that.

    The downside is that FRV kmap_atomic() gets more expensive.

    For now we use a CPP trick suggested by Andrew:

    #define kmap_atomic(page, args...) __kmap_atomic(page)

    to avoid having to touch all kmap_atomic() users in a single patch.

    [ not compiled on:
    - mn10300: the arch doesn't actually build with highmem to begin with ]

    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
    Acked-by: Rik van Riel
    Signed-off-by: Peter Zijlstra
    Acked-by: Chris Metcalf
    Cc: David Howells
    Cc: Hugh Dickins
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Steven Rostedt
    Cc: Russell King
    Cc: Ralf Baechle
    Cc: David Miller
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: Dave Airlie
    Cc: Li Zefan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Ensure kmap_atomic() usage is strictly nested

    Signed-off-by: Peter Zijlstra
    Reviewed-by: Rik van Riel
    Acked-by: Chris Metcalf
    Cc: David Howells
    Cc: Hugh Dickins
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Steven Rostedt
    Cc: Russell King
    Cc: Ralf Baechle
    Cc: David Miller
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

10 Aug, 2010

2 commits

  • No real bugs, just some dead code and some fixups.

    Signed-off-by: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
  • kunmap_atomic() is currently at level -4 on Rusty's "Hard To Misuse"
    list[1] ("Follow common convention and you'll get it wrong"), except in
    some architectures when CONFIG_DEBUG_HIGHMEM is set[2][3].

    kunmap() takes a pointer to a struct page; kunmap_atomic(), however,
    takes a pointer to within the page itself. This seems to once in a while
    trip people up (the convention they are following is the one from
    kunmap()).

    Make it much harder to misuse, by moving it to level 9 on Rusty's list[4]
    ("The compiler/linker won't let you get it wrong"). This is done by
    refusing to build if the type of its first argument is a pointer to a
    struct page.

    The real kunmap_atomic() is renamed to kunmap_atomic_notypecheck()
    (which is what you would call if, for some strange reason, calling it
    with a pointer to a struct page is not incorrect in your code).
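
    The compile-time refusal is a one-line type check wrapped around the
    renamed helper; roughly:

    /* Sketch: reject a struct page * at compile time, then forward to
     * the renamed, untyped helper. */
    #define kunmap_atomic(addr, idx)                                \
    do {                                                            \
            BUILD_BUG_ON(__same_type((addr), struct page *));       \
            kunmap_atomic_notypecheck((addr), (idx));               \
    } while (0)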

    The previous version of this patch was compile tested on x86-64.

    [1] http://ozlabs.org/~rusty/index.cgi/tech/2008-04-01.html
    [2] In these cases, it is at level 5, "Do it right or it will always
    break at runtime."
    [3] At least mips and powerpc look very similar, and sparc also seems to
    share a common ancestor with both; there seems to be quite some
    degree of copy-and-paste coding here. The include/asm/highmem.h files
    for these three archs mention x86 CPUs at their top.
    [4] http://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html
    [5] As an aside, could someone tell me why mn10300 uses unsigned long as
    the first parameter of kunmap_atomic() instead of void *?

    Signed-off-by: Cesar Eduardo Barros
    Cc: Russell King (arch/arm)
    Cc: Ralf Baechle (arch/mips)
    Cc: David Howells (arch/frv, arch/mn10300)
    Cc: Koichi Yasutake (arch/mn10300)
    Cc: Kyle McMartin (arch/parisc)
    Cc: Helge Deller (arch/parisc)
    Cc: "James E.J. Bottomley" (arch/parisc)
    Cc: Benjamin Herrenschmidt (arch/powerpc)
    Cc: Paul Mackerras (arch/powerpc)
    Cc: "David S. Miller" (arch/sparc)
    Cc: Thomas Gleixner (arch/x86)
    Cc: Ingo Molnar (arch/x86)
    Cc: "H. Peter Anvin" (arch/x86)
    Cc: Arnd Bergmann (include/asm-generic)
    Cc: Rusty Russell ("Hard To Misuse" list)
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     

25 May, 2010

1 commit

  • In f4112de6b679d84bd9b9681c7504be7bdfb7c7d5 ("mm: introduce
    debug_kmap_atomic") I said that debug_kmap_atomic() needs
    CONFIG_TRACE_IRQFLAGS_SUPPORT.

    It was wrong. (I thought irqs_disabled() is only available when the
    architecture has CONFIG_TRACE_IRQFLAGS_SUPPORT)

    Remove the #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT check to enable
    kmap_atomic() debugging for the architectures which do not have
    CONFIG_TRACE_IRQFLAGS_SUPPORT.

    Reported-by: Andrew Morton
    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

27 Feb, 2010

1 commit


26 Jan, 2010

1 commit

  • On Virtually Indexed architectures (which don't do automatic alias
    resolution in their caches), we have to flush via the correct
    virtual address to prepare pages for DMA. On some architectures
    (like arm) we cannot prevent the CPU from doing data move-in along
    the alias (and thus returning stale read data), so we not only have to
    introduce a flush API to push dirty cache lines out, but also an
    invalidate API to kill inconsistent cache lines that may have moved in
    before DMA changed the data.
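
    Usage then brackets the DMA transfer with the two calls; a sketch,
    assuming a pair along the lines of flush_kernel_vmap_range() and
    invalidate_kernel_vmap_range():

    /* Sketch: DMA to/from a vmalloc/vmap buffer on a VIPT-aliasing cache. */
    flush_kernel_vmap_range(vaddr, len);      /* push dirty lines out before the device looks */
    /* ... set up and run the DMA transfer, wait for completion ... */
    invalidate_kernel_vmap_range(vaddr, len); /* kill lines that moved in via the alias */
    /* the CPU can now safely read the DMA'd data through vaddr */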

    Signed-off-by: James Bottomley

    James Bottomley
     

12 Jan, 2010

1 commit

  • Makes it consistent with the extern declaration used when CONFIG_HIGHMEM
    is set. Removes redundant casts in printout messages.

    Signed-off-by: Andreas Fenkart
    Acked-by: Russell King
    Cc: Ralf Baechle
    Cc: David Howells
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Chen Liqin
    Cc: Lennox Wu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andreas Fenkart
     

17 Jun, 2009

1 commit


04 Apr, 2009

1 commit

  • Commit f4112de6b679d84bd9b9681c7504be7bdfb7c7d5 ("mm: introduce
    debug_kmap_atomic") broke PPC builds with CONFIG_HIGHMEM=y:

    CC init/main.o
    In file included from include/linux/highmem.h:25,
    from include/linux/pagemap.h:11,
    from include/linux/mempolicy.h:63,
    from init/main.c:53:
    arch/powerpc/include/asm/highmem.h: In function 'kmap_atomic_prot':
    arch/powerpc/include/asm/highmem.h:98: error: implicit declaration of function 'debug_kmap_atomic'
    In file included from include/linux/pagemap.h:11,
    from include/linux/mempolicy.h:63,
    from init/main.c:53:
    include/linux/highmem.h: At top level:
    include/linux/highmem.h:196: warning: conflicting types for 'debug_kmap_atomic'
    include/linux/highmem.h:196: error: static declaration of 'debug_kmap_atomic' follows non-static declaration
    include/asm/highmem.h:98: error: previous implicit declaration of 'debug_kmap_atomic' was here
    make[1]: *** [init/main.o] Error 1
    make: *** [init] Error 2

    Signed-off-by: Kumar Gala
    Acked-by: Akinobu Mita
    Signed-off-by: Linus Torvalds

    Kumar Gala
     

01 Apr, 2009

1 commit

  • x86 has debug_kmap_atomic_prot(), an error-checking function for
    kmap_atomic. It is useful for the other architectures, although it
    needs CONFIG_TRACE_IRQFLAGS_SUPPORT.

    This patch exposes it to the other architectures.

    Signed-off-by: Akinobu Mita
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

28 Nov, 2008

1 commit

  • With aliasing VIPT cache support, the ARM implementation of
    clear_user_page() and copy_user_page() sets up a temporary kernel space
    mapping such that we have the same cache colour as the userspace page.
    This avoids having to consider any userspace aliases from this operation.

    However, when highmem is enabled, kmap_atomic() has to set up mappings.
    The copy_user_highpage() and clear_user_highpage() functions call
    kmap_atomic() themselves before delegating the copies to copy_user_page()
    and clear_user_page().

    The effect of this is that each of the *_user_highpage() functions sets
    up its own kmap mapping, followed by the *_user_page() functions setting
    up another mapping. This is rather wasteful.

    Thankfully, copy_user_highpage() can be overridden by architectures by
    defining __HAVE_ARCH_COPY_USER_HIGHPAGE. However, replacement of
    clear_user_highpage() is more difficult because its inline definition
    is not conditional. It seems that you're expected to define
    __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE and provide a replacement
    __alloc_zeroed_user_highpage() implementation instead.

    The allocation itself is fine, so we don't want to override that. What
    we really want to do is to override clear_user_highpage() with our own
    version which doesn't kmap_atomic() unnecessarily.

    Other VIPT architectures (PARISC and SH) would also like to override
    this function as well.
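
    For the copy side the opt-in already exists; an architecture does
    roughly the following (illustrative sketch, header location and exact
    form are per-arch), and the goal here is an equivalent hook for
    clear_user_highpage():

    /* Illustrative arch override for the copy side. */
    #define __HAVE_ARCH_COPY_USER_HIGHPAGE
    extern void copy_user_highpage(struct page *to, struct page *from,
                                   unsigned long vaddr, struct vm_area_struct *vma);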

    Acked-by: Hugh Dickins
    Acked-by: James Bottomley
    Acked-by: Paul Mundt
    Signed-off-by: Russell King

    Russell King
     

06 Feb, 2008

2 commits

  • After running SetPageUptodate, preceding stores to the page contents to
    actually bring it uptodate may not be ordered with the store to set the
    page uptodate.

    Therefore, another CPU which checks PageUptodate is true, then reads the
    page contents can get stale data.

    Fix this by having an smp_wmb before SetPageUptodate, and smp_rmb after
    PageUptodate.

    Many places that test PageUptodate, do so with the page locked, and this
    would be enough to ensure memory ordering in those places if
    SetPageUptodate were only called while the page is locked. Unfortunately
    that is not always the case for some filesystems, but it could be an idea
    for the future.

    Also bring the handling of anonymous page uptodateness in line with that of
    file backed page management, by marking anon pages as uptodate when they
    _are_ uptodate, rather than when our implementation requires that they be
    marked as such. Doing so allows us to get rid of the smp_wmb's in the page
    copying functions, which were especially added for anonymous pages for an
    analogous memory ordering problem. Both file and anonymous pages are
    handled with the same barriers.
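
    The pairing is the usual publish/consume pattern; schematically:

    /* Writer side (sketch): populate the page, then publish it. */
    memcpy(kaddr, src, PAGE_SIZE);          /* bring the contents uptodate   */
    smp_wmb();                              /* order contents before the bit */
    SetPageUptodate(page);

    /* Reader side (sketch): observe the bit, then read the contents. */
    if (PageUptodate(page)) {
            smp_rmb();                      /* order the bit before the contents */
            /* safe to read the page contents here */
    }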

    FAQ:
    Q. Why not do this in flush_dcache_page?
    A. Firstly, flush_dcache_page handles only one side (the smp_wmb side) of the
    ordering protocol; we'd still need the smp_rmb somewhere. Secondly, hiding away
    memory barriers in a completely unrelated function is nasty; at least in the
    PageUptodate macros, they are located together with (half) the operations
    involved in the ordering. Thirdly, the smp_wmb is only required when first
    bringing the page uptodate, whereas flush_dcache_page should be called each time
    it is written to through the kernel mapping. It is logically the wrong place to
    put it.

    Q. Why does this increase my text size / reduce my performance / etc.
    A. Because it is adding the necessary instructions to eliminate the data-race.

    Q. Can it be improved?
    A. Yes, eg. if you were to create a rule that all SetPageUptodate operations
    run under the page lock, we could avoid the smp_rmb places where PageUptodate
    is queried under the page lock. Requires audit of all filesystems and at least
    some would need reworking. That's great you're interested, I'm eagerly awaiting
    your patches.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Simplify page cache zeroing of segments of pages through 3 functions

    zero_user_segments(page, start1, end1, start2, end2)

    Zeros two segments of the page. It takes the positions at which to
    start and end the zeroing, which avoids length calculations and
    makes the code clearer.

    zero_user_segment(page, start, end)

    Same for a single segment.

    zero_user(page, start, length)

    Length variant for the case where we know the length.

    We remove the zero_user_page macro. Issues:

    1. It's a macro. Inline functions are preferable.

    2. The KM_USER0 macro is only defined for HIGHMEM.

    Having to treat this special case everywhere makes the
    code needlessly complex. The parameter for zeroing is always
    KM_USER0 except in one single case that we open code.

    Avoiding KM_USER0 means a lot of code no longer has to deal
    with the special casing for HIGHMEM. Dealing with
    kmap is only necessary for HIGHMEM configurations. In those
    configurations we use KM_USER0 like we do for a series of other
    functions defined in highmem.h.

    Since KM_USER0 depends on HIGHMEM, the existing zero_user_page
    function could not be a macro. The zero_user_* functions introduced
    here can be inline because that constant is not used when these
    functions are called.

    Also extract the flushing of the caches to be outside of the kmap.
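
    Typical call sites then look like this (sketch):

    zero_user_segments(page, 0, from, to, PAGE_SIZE); /* zero everything outside [from, to) */
    zero_user_segment(page, offset, PAGE_SIZE);       /* zero a single tail segment          */
    zero_user(page, offset, length);                  /* zero a known length at an offset    */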

    [akpm@linux-foundation.org: fix nfs and ntfs build]
    [akpm@linux-foundation.org: fix ntfs build some more]
    Signed-off-by: Christoph Lameter
    Cc: Steven French
    Cc: Michael Halcrow
    Cc: Steven Whitehouse
    Cc: Trond Myklebust
    Cc: "J. Bruce Fields"
    Cc: Anton Altaparmakov
    Cc: Mark Fasheh
    Cc: David Chinner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

20 Jul, 2007

1 commit

  • alloc_zeroed_user_highpage() has no in-tree users and it is not exported.
    As it is not exported, it can simply be removed.

    Signed-off-by: Mel Gorman
    Acked-by: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

18 Jul, 2007

1 commit

  • It is often known at allocation time whether a page may be migrated or not.
    This patch adds a flag called __GFP_MOVABLE and a new mask called
    GFP_HIGH_MOVABLE. Allocations using __GFP_MOVABLE can either be migrated
    using the page migration mechanism or reclaimed by syncing with backing
    storage and discarding.

    An API function very similar to alloc_zeroed_user_highpage() is added for
    __GFP_MOVABLE allocations called alloc_zeroed_user_highpage_movable(). The
    flags used by alloc_zeroed_user_highpage() are not changed because it would
    change the semantics of an existing API. After this patch is applied there
    are no in-kernel users of alloc_zeroed_user_highpage() so it probably should
    be marked deprecated if this patch is merged.

    Note that this patch includes a minor cleanup to the use of __GFP_ZERO in
    shmem.c to keep all flag modifications to inode->mapping in the
    shmem_dir_alloc() helper function. This clean-up suggestion is courtesy of
    Hugh Dickins.

    Additional credit goes to Christoph Lameter and Linus Torvalds for shaping the
    concept. Credit to Hugh Dickins for catching issues with shmem swap vector
    and ramfs allocations.

    [akpm@linux-foundation.org: build fix]
    [hugh@veritas.com: __GFP_ZERO cleanup]
    Signed-off-by: Mel Gorman
    Cc: Andy Whitcroft
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

10 May, 2007

2 commits

  • Now that all the in-tree users are converted over to zero_user_page(),
    deprecate the old memclear_highpage_flush() call.

    Signed-off-by: Nate Diller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nate Diller
     
  • It's very common for file systems to need to zero part or all of a page,
    and the simplest way is just to use kmap_atomic() and memset(). There's
    actually a library function in include/linux/highmem.h that does exactly
    that, but it's confusingly named memclear_highpage_flush(), which is
    descriptive of *how* it does the work rather than what the *purpose* is.
    So this patchset renames the function to zero_user_page(), and calls it
    from the various places that currently open code it.

    This first patch introduces the new function call, and converts all the
    core kernel callsites, both the open-coded ones and the old
    memclear_highpage_flush() ones. Following this patch is a series of
    conversions for each file system individually, per AKPM, and finally a
    patch deprecating the old call. The diffstat below shows the entire
    patchset.

    [akpm@linux-foundation.org: fix a few things]
    Signed-off-by: Nate Diller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nate Diller
     

06 May, 2007

1 commit

  • * 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6: (231 commits)
    [PATCH] i386: Don't delete cpu_devs data to identify different x86 types in late_initcall
    [PATCH] i386: type may be unused
    [PATCH] i386: Some additional chipset register values validation.
    [PATCH] i386: Add missing !X86_PAE dependincy to the 2G/2G split.
    [PATCH] x86-64: Don't exclude asm-offsets.c in Documentation/dontdiff
    [PATCH] i386: avoid redundant preempt_disable in __unlazy_fpu
    [PATCH] i386: white space fixes in i387.h
    [PATCH] i386: Drop noisy e820 debugging printks
    [PATCH] x86-64: Fix allnoconfig error in genapic_flat.c
    [PATCH] x86-64: Shut up warnings for vfat compat ioctls on other file systems
    [PATCH] x86-64: Share identical video.S between i386 and x86-64
    [PATCH] x86-64: Remove CONFIG_REORDER
    [PATCH] x86-64: Print type and size correctly for unknown compat ioctls
    [PATCH] i386: Remove copy_*_user BUG_ONs for (size < 0)
    [PATCH] i386: Little cleanups in smpboot.c
    [PATCH] x86-64: Don't enable NUMA for a single node in K8 NUMA scanning
    [PATCH] x86: Use RDTSCP for synchronous get_cycles if possible
    [PATCH] i386: Add X86_FEATURE_RDTSCP
    [PATCH] i386: Implement X86_FEATURE_SYNC_RDTSC on i386
    [PATCH] i386: Implement alternative_io for i386
    ...

    Fix up trivial conflict in include/linux/highmem.h manually.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

05 May, 2007

1 commit


03 May, 2007

1 commit

  • Xen and VMI both have special requirements when mapping a highmem pte
    page into the kernel address space. These can be dealt with by adding
    a new kmap_atomic_pte() function for mapping highptes, and hooking it
    into the paravirt_ops infrastructure.

    Xen specifically wants to map the pte page RO, so this patch exposes a
    helper function, kmap_atomic_prot, which maps the page with the
    specified page protections.

    This also adds a kmap_flush_unused() function to clear out the cached
    kmap mappings. Xen needs this to clear out any potential stray RW
    mappings of pages which will become part of a pagetable.
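
    A sketch of the intended use, with the km_type-era calling convention
    assumed:

    /* Sketch: map a highmem pte page read-only, as Xen wants for pages
     * that are (about to become) pagetables. */
    pte_t *ptep = kmap_atomic_prot(page, KM_PTE0, PAGE_KERNEL_RO);
    /* ... read the pte page; any write through this mapping would fault ... */
    kunmap_atomic(ptep, KM_PTE0);

    /* Before pages are handed over as pagetables, drop stray RW kmaps: */
    kmap_flush_unused();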

    [ Zach - vmi.c will need some attention after this patch. It wasn't
    immediately obvious to me what needs to be done. ]

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Andi Kleen
    Cc: Zachary Amsden

    Jeremy Fitzhardinge
     

09 Jan, 2007

1 commit

  • Since get_user_pages() may be used with processes other than the
    current process and calls flush_anon_page(), flush_anon_page() has to
    cope in some way with non-current processes.

    It may not be appropriate, or even desirable to flush a region of
    virtual memory cache in the current process when that is different to
    the process that we want the flush to occur for.

    Therefore, pass the vma into flush_anon_page() so that the architecture
    can work out whether the 'vmaddr' is for the current process or not.
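
    With the vma in hand, the generic hook (and its no-op default) looks
    roughly like:

    /* Sketch of the generic no-op; an architecture with aliasing caches
     * overrides it and can compare vma->vm_mm with current->mm to decide
     * whether 'vmaddr' belongs to the current process. */
    #ifndef ARCH_HAS_FLUSH_ANON_PAGE
    static inline void flush_anon_page(struct vm_area_struct *vma,
                                       struct page *page, unsigned long vmaddr)
    {
    }
    #endif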

    Signed-off-by: Russell King

    Russell King
     

14 Dec, 2006

2 commits

  • To allow a more effective copy_user_highpage() on certain architectures,
    a vma argument is added to the function and to cow_user_page(), allowing
    the implementation of these functions to check for the VM_EXEC bit.

    The main part of this patch was originally written by Ralf Baechle;
    Atsushi Nemoto did the debugging.

    Signed-off-by: Atsushi Nemoto
    Signed-off-by: Ralf Baechle
    Signed-off-by: Linus Torvalds

    Atsushi Nemoto
     
  • Problem:

    1. There is a process containing two threads (T1 and T2). The
    thread T1 calls fork(). Then the dup_mmap() function is called in T1's context.

    static inline int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
    ...
    flush_cache_mm(current->mm);
    ... /* A */
    (write-protect all Copy-On-Write pages)
    ... /* B */
    flush_tlb_mm(current->mm);
    ...

    2. When preemption happens between A and B (or on SMP kernel), the
    thread T2 can run and modify data on COW pages without page fault
    (modified data will stay in cache).

    3. Some time after fork() completed, the thread T2 may cause a page
    fault by write-protect on a COW page.

    4. Then data of the COW page will be copied to newly allocated
    physical page (copy_cow_page()). It reads the data via the kernel mapping.
    The kernel mapping can have a different 'color' from the user space
    mapping of thread T2 (dcache aliasing). Therefore
    copy_cow_page() will copy stale data. Then the modified data in
    the cache will be lost.

    In order to allow architecture code to deal with this problem, allow
    architecture code to override copy_user_highpage() by defining
    __HAVE_ARCH_COPY_USER_HIGHPAGE.

    The main part of this patch was originally written by Ralf Baechle;
    Atsushi Nemoto did the debugging.

    Signed-off-by: Atsushi Nemoto
    Signed-off-by: Ralf Baechle
    Signed-off-by: Linus Torvalds

    Atsushi Nemoto