02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.
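In practice the identifier is a one-line comment at the very top of each file. The comment style shown here follows the kernel's documented convention (license-rules.rst) and is illustrative, not quoted from this patch:

```c
// SPDX-License-Identifier: GPL-2.0
/* ^ first line of a .c file; headers and assembly use the
 * block-comment form instead:
 * SPDX-License-Identifier: GPL-2.0 */
```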

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX license identifier should be
    applied to a file was done in a spreadsheet of side-by-side results from
    the output of two independent scanners (ScanCode & Windriver) producing
    SPDX tag:value files, created by Philippe Ombredanne. Philippe prepared
    the base worksheet and did an initial spot review of a few thousand
    files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
      lines of source.
    - File already had some variant of a license header in it (even if <1 line).
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

11 May, 2017

1 commit

  • Pull metag updates from James Hogan:
    "These patches primarily make some usercopy improvements (following on
    from the recent usercopy fixes):

    - reformat and simplify rapf copy loops

    - add 64-bit get_user support

    And fix a couple more uaccess issues, partly pointed out by Al:

    - fix serious shortcomings in access_ok()

    - fix strncpy_from_user() address validation

    Also included is a trivial removal of a redundant increment"

    * tag 'metag-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag:
    metag/mm: Drop pointless increment
    metag/uaccess: Check access_ok in strncpy_from_user
    metag/uaccess: Fix access_ok()
    metag/usercopy: Add 64-bit get_user support
    metag/usercopy: Simplify rapf loop fixup corner case
    metag/usercopy: Reformat rapf loop inline asm

    Linus Torvalds
     

05 Apr, 2017

11 commits

  • Switch to using raw user copy instead of providing metag specific
    [__]copy_{to,from}_user[_inatomic](). This simplifies the metag
    uaccess.h and allows us to take advantage of extra checking in the
    generic versions.

    Signed-off-by: James Hogan
    Cc: Al Viro
    Cc: linux-metag@vger.kernel.org
    Signed-off-by: Al Viro

    James Hogan
     
  • Metag already supports 64-bit put_user, so add support for 64-bit
    get_user too so that the test_user_copy module can test both.

    Signed-off-by: James Hogan
    Cc: linux-metag@vger.kernel.org

    James Hogan
     
  • The final fixup in the rapf loops must handle a corner case due to the
    intermediate decrementing of the destination pointer before writing the
    last element to it again and re-incrementing it. This decrement (and the
    associated increment in the fixup code) can be easily avoided by using
    SETL/SETD with an offset of -8/-4.

    Signed-off-by: James Hogan
    Cc: linux-metag@vger.kernel.org

    James Hogan
     
  • Reformat rapf loop inline assembly to make it more readable and easier
    to modify in future.

    Signed-off-by: James Hogan
    Cc: linux-metag@vger.kernel.org

    James Hogan
     
    The rapf copy loops in the Meta usercopy code are missing some extable
    entries for HTP cores with unaligned access checking enabled, where
    faults occur on the instruction immediately after the faulting access.

    Add the fixup labels and extable entries for these cases so that corner
    case user copy failures don't cause kernel crashes.

    Fixes: 373cd784d0fc ("metag: Memory handling")
    Signed-off-by: James Hogan
    Cc: linux-metag@vger.kernel.org
    Cc: stable@vger.kernel.org

    James Hogan
     
  • The fixup code to rewind the source pointer in
    __asm_copy_from_user_{32,64}bit_rapf_loop() always rewound the source by
    a single unit (4 or 8 bytes), however this is insufficient if the fault
    didn't occur on the first load in the loop, as the source pointer will
    have been incremented but nothing will have been stored until all 4
    register [pairs] are loaded.

    Read the LSM_STEP field of TXSTATUS (which is already loaded into a
    register), a bit like the copy_to_user versions, to determine how many
    iterations of MGET[DL] have taken place, all of which need rewinding.

    Fixes: 373cd784d0fc ("metag: Memory handling")
    Signed-off-by: James Hogan
    Cc: linux-metag@vger.kernel.org
    Cc: stable@vger.kernel.org

    James Hogan
     
  • The fixup code for the copy_to_user rapf loops reads TXStatus.LSM_STEP
    to decide how far to rewind the source pointer. There is a special case
    for the last execution of an MGETL/MGETD, since it leaves LSM_STEP=0
    even though the number of MGETLs/MGETDs attempted was 4. This uses ADDZ
    which is conditional upon the Z condition flag, but the AND instruction
    which masked the TXStatus.LSM_STEP field didn't set the condition flags
    based on the result.

    Fix that now by using ANDS which does set the flags, and also marking
    the condition codes as clobbered by the inline assembly.

    Fixes: 373cd784d0fc ("metag: Memory handling")
    Signed-off-by: James Hogan
    Cc: linux-metag@vger.kernel.org
    Cc: stable@vger.kernel.org

    James Hogan
     
  • Currently we try to zero the destination for a failed read from userland
    in fixup code in the usercopy.c macros. The rest of the destination
    buffer is then zeroed from __copy_user_zeroing(), which is used for both
    copy_from_user() and __copy_from_user().

    Unfortunately we fail to zero in the fixup code as D1Ar1 is set to 0
    before the fixup code entry labels, and __copy_from_user() shouldn't even
    be zeroing the rest of the buffer.

    Move the zeroing out into copy_from_user() and rename
    __copy_user_zeroing() to raw_copy_from_user() since it no longer does
    any zeroing. This also conveniently matches the name needed for
    RAW_COPY_USER support in a later patch.

    Fixes: 373cd784d0fc ("metag: Memory handling")
    Reported-by: Al Viro
    Signed-off-by: James Hogan
    Cc: linux-metag@vger.kernel.org
    Cc: stable@vger.kernel.org

    James Hogan
     
  • When copying to userland on Meta, if any faults are encountered
    immediately abort the copy instead of continuing on and repeatedly
    faulting, and worse potentially copying further bytes successfully to
    subsequent valid pages.

    Fixes: 373cd784d0fc ("metag: Memory handling")
    Reported-by: Al Viro
    Signed-off-by: James Hogan
    Cc: linux-metag@vger.kernel.org
    Cc: stable@vger.kernel.org

    James Hogan
     
  • Fix the error checking of the alignment adjustment code in
    raw_copy_from_user(), which mistakenly considers it safe to skip the
    error check when aligning the source buffer on a 2 or 4 byte boundary.

    If the destination buffer was unaligned it may have started to copy
    using byte or word accesses, which could well be at the start of a new
    (valid) source page. This would result in it appearing to have copied 1
    or 2 bytes at the end of the first (invalid) page rather than none at
    all.

    Fixes: 373cd784d0fc ("metag: Memory handling")
    Signed-off-by: James Hogan
    Cc: linux-metag@vger.kernel.org
    Cc: stable@vger.kernel.org

    James Hogan
     
  • Metag's lib/usercopy.c has a bunch of copy_from_user macros for larger
    copies between 5 and 16 bytes which are completely unused. Before fixing
    the zeroing, let's drop these macros so there is less to fix.

    Signed-off-by: James Hogan
    Cc: Al Viro
    Cc: linux-metag@vger.kernel.org
    Cc: stable@vger.kernel.org

    James Hogan
     

04 Jul, 2013

1 commit

  • Move EXPORT_SYMBOL(csum_partial) from lib/checksum.c into metag_ksyms.c
    so that it doesn't get omitted by the static linker if it's not used by
    any other statically linked code, which can result in undefined symbols
    when building modules.

    For example a randconfig caused the following error:
    ERROR: "csum_partial" [fs/reiserfs/reiserfs.ko] undefined!

    Signed-off-by: James Hogan

    James Hogan
     

03 Mar, 2013

4 commits

    It's less error-prone to have function symbols exported immediately
    after the function rather than in metag_ksyms.c. Move each EXPORT_SYMBOL
    in metag_ksyms.c for symbols defined in usercopy.c into usercopy.c.
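The resulting pattern looks like this (a sketch with an illustrative function name, not the exact usercopy.c contents):

```c
/* usercopy.c (sketch): the export sits directly under the definition
 * it belongs to, so the two cannot drift apart. */
unsigned long __copy_user(void *pdst, const void *psrc, unsigned long n)
{
        /* ... */
}
EXPORT_SYMBOL(__copy_user);
```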

    Signed-off-by: James Hogan

    James Hogan
     
  • Add metag build infrastructure.

    Signed-off-by: James Hogan

    James Hogan
     
  • Add optimised library functions for metag.

    Signed-off-by: James Hogan

    James Hogan
     
  • Meta has instructions for accessing:
    - bytes - GETB (1 byte)
    - words - GETW (2 bytes)
    - doublewords - GETD (4 bytes)
    - longwords - GETL (8 bytes)

    All accesses must be aligned. Unaligned accesses can be detected and
    made to fault on Meta2, however it isn't possible to fix up unaligned
    writes so we don't bother fixing up reads either.

    This patch adds metag memory handling code including:
    - I/O memory (io.h, ioremap.c): Actually any virtual memory can be
    accessed with these helpers. A part of the non-MMUable address space
    is used for memory-mapped I/O. The ioremap() function is implemented
    one-to-one for non-MMUable addresses.
    - User memory (uaccess.h, usercopy.c): User memory is directly
    accessible from privileged code.
    - Kernel memory (maccess.c): probe_kernel_write() needs to be
    overridden to use the I/O functions when doing a simple aligned
    write to non-writecombined memory, otherwise the write may be split
    by the generic version.

    Note that due to the fact that a portion of the virtual address space is
    non-MMUable, and therefore always maps directly to the physical address
    space, metag specific I/O functions are made available (metag_in32,
    metag_out32 etc). These cast the address argument to a pointer so that
    they can be used with raw physical addresses. These accessors are only
    to be used for accessing fixed core Meta architecture registers in the
    non-MMU region, and not for any SoC/peripheral registers.

    Signed-off-by: James Hogan

    James Hogan