23 Aug, 2018

5 commits

  • The variable names had become a mess, so standardize them again:

    id: user space id. Called semid, shmid, msgid if the type is known.
        Most functions use "id" already.
    idx: "index" for the idr lookup. Right now, some functions use lid;
        ipc_addid() already uses idx as the variable name.
    seq: sequence number, to avoid quick collisions of the user space id.
    key: user space key, used for the rhash tree.
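
    For orientation, a rough sketch of how these names fit together in
    ipc/util.h of this era (macro names shown for illustration):

        /* a user space id packs the idr index and the sequence number;
         * SEQ_MULTIPLIER is IPCMNI here */
        id  = SEQ_MULTIPLIER * seq + idx;    /* built by ipc_buildid()   */
        idx = id % SEQ_MULTIPLIER;           /* ipcid_to_idx(id)         */
        seq = id / SEQ_MULTIPLIER;           /* ipcid_to_seqx(id)        */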

    Link: http://lkml.kernel.org/r/20180712185241.4017-12-manfred@colorfullife.com
    Signed-off-by: Manfred Spraul
    Cc: Dmitry Vyukov
    Cc: Davidlohr Bueso
    Cc: Herbert Xu
    Cc: Kees Cook
    Cc: Michael Kerrisk
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Manfred Spraul
     
  • Now that we know that rhashtable_init() will not fail, we can get rid of a
    lot of the cleanup paths that only existed to handle a failing call.
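
    A rough sketch of the resulting initialization once the error paths are
    gone (field list abridged and illustrative, not the exact function body):

        void ipc_init_ids(struct ipc_ids *ids)
        {
                ids->in_use = 0;
                ids->seq = 0;
                init_rwsem(&ids->rwsem);
                /* cannot fail for the minimal initial table size */
                rhashtable_init(&ids->key_ht, &ipc_kht_params);
                idr_init(&ids->ipcs_idr);
        }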

    [manfred@colorfullife.com: variable name added to util.h to resolve checkpatch warning]
    Link: http://lkml.kernel.org/r/20180712185241.4017-11-manfred@colorfullife.com
    Signed-off-by: Davidlohr Bueso
    Signed-off-by: Manfred Spraul
    Cc: Dmitry Vyukov
    Cc: Herbert Xu
    Cc: Kees Cook
    Cc: Michael Kerrisk
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • Both the comment and the name of ipcctl_pre_down_nolock() are misleading:
    the function must be called while holding the rw semaphore.

    Therefore the patch renames the function to ipcctl_obtain_check(). This
    name matches the other names used in util.c:

    - "obtain" functions look up a pointer in the idr, without
      acquiring the object lock.
    - The caller is responsible for locking.
    - "_check" means that the sequence number is checked.
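
    A hedged sketch of the intended calling pattern, with the signature
    paraphrased from ipc/util.c of this time:

        down_write(&sem_ids(ns).rwsem);
        rcu_read_lock();
        ipcp = ipcctl_obtain_check(ns, &sem_ids(ns), semid, cmd,
                                   &semid64->sem_perm, 0);
        /* ipcp is looked up and sequence-checked, but not locked:
         * the caller takes the object lock itself when it needs it */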

    Link: http://lkml.kernel.org/r/20180712185241.4017-5-manfred@colorfullife.com
    Signed-off-by: Manfred Spraul
    Reviewed-by: Davidlohr Bueso
    Cc: Davidlohr Bueso
    Cc: Dmitry Vyukov
    Cc: Herbert Xu
    Cc: Kees Cook
    Cc: Michael Kerrisk
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Manfred Spraul
     
  • ipc_addid() is hard to use correctly:
    - for certain failures, the caller must not use ipc_rcu_putref(),
      because the reference counter is not yet initialized.
    - for other failures, the caller must use ipc_rcu_putref(),
      because parallel operations could already be ongoing.

    The patch cleans that up, by initializing the refcount early, and by
    modifying all callers.
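
    A minimal sketch of the resulting rule (placement and caller code
    paraphrased, not an exact diff):

        /* in ipc_addid(), before any failure exit can be taken: */
        refcount_set(&new->refcount, 1);

        /* so every caller can clean up the same way on failure: */
        retval = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
        if (retval < 0) {
                ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
                return retval;
        }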

    The issue is related to the finding of
    syzbot+2827ef6b3385deb07eaf@syzkaller.appspotmail.com: syzbot found an
    issue with reading kern_ipc_perm.seq; here both reads and writes to
    already-released memory could happen.

    Link: http://lkml.kernel.org/r/20180712185241.4017-4-manfred@colorfullife.com
    Signed-off-by: Manfred Spraul
    Cc: Dmitry Vyukov
    Cc: Kees Cook
    Cc: Davidlohr Bueso
    Cc: Herbert Xu
    Cc: Michael Kerrisk
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Manfred Spraul
     
  • ipc_addid() initializes kern_ipc_perm.id after having called
    ipc_idr_alloc().

    Thus a parallel semctl() or msgctl() that uses e.g. MSG_STAT may use this
    uninitialized value as the return code.

    The patch moves all accesses to kern_ipc_perm.id under the spin_lock().
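
    A sketch of the ordering after the fix (helper and constant names
    paraphrased):

        /* inside the idr allocation helper, with the object spinlock taken
         * by ipc_addid() still held: */
        idx = idr_alloc(&ids->ipcs_idr, new, 0, 0, GFP_NOWAIT);
        if (idx >= 0)
                new->id = (new->seq * SEQ_MULTIPLIER) + idx;
        /* a concurrent *_STAT can now only observe a fully built id */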

    The issue is related to the finding of
    syzbot+2827ef6b3385deb07eaf@syzkaller.appspotmail.com: syzbot found an
    issue with kern_ipc_perm.seq.

    Link: http://lkml.kernel.org/r/20180712185241.4017-2-manfred@colorfullife.com
    Signed-off-by: Manfred Spraul
    Reviewed-by: Davidlohr Bueso
    Cc: Dmitry Vyukov
    Cc: Kees Cook
    Cc: Davidlohr Bueso
    Cc: Herbert Xu
    Cc: Michael Kerrisk
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Manfred Spraul
     

03 Aug, 2018

1 commit


27 Jul, 2018

1 commit

  • In order for load/store tearing prevention to work, _all_ accesses to
    the variable in question need to go through the READ_ONCE() and
    WRITE_ONCE() macros. Ensure everyone does so for the q->status variable
    in semtimedop().
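
    The pattern being enforced, roughly (names from ipc/sem.c):

        /* writer, when waking a sleeping semop(): */
        WRITE_ONCE(q->status, error);

        /* reader, in do_semtimedop() after sleeping: */
        error = READ_ONCE(queue.status);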

    Link: http://lkml.kernel.org/r/20180717052654.676-1-dave@stgolabs.net
    Signed-off-by: Davidlohr Bueso
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     

22 Jun, 2018

1 commit

  • Due to the use of rhashtables in net namespaces,
    rhashtable.h is included in large parts of the kernel,
    so a small change can require a large recompilation.
    This makes development painful.

    This patch splits out rhashtable-types.h which just includes
    the major type declarations, and does not include (non-trivial)
    inline code. rhashtable.h is no longer included by anything
    in the include/ directory.
    Common include files only include rhashtable-types.h so a large
    recompilation is only triggered when that changes.
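
    The intended include split, roughly (illustrative):

        /* a header that only embeds the table, e.g. ipc/util.h: */
        #include <linux/rhashtable-types.h>

        struct ipc_ids {
                struct rhashtable key_ht;
                /* ... */
        };

        /* only the .c file that calls the rhashtable API pulls in the
         * heavyweight header: */
        #include <linux/rhashtable.h>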

    Acked-by: Herbert Xu
    Signed-off-by: NeilBrown
    Signed-off-by: David S. Miller

    NeilBrown
     

15 Jun, 2018

1 commit

  • Both smatch and coverity are reporting potential issues with spectre
    variant 1 with the 'semnum' index within the sma->sems array, ie:

    ipc/sem.c:388 sem_lock() warn: potential spectre issue 'sma->sems'
    ipc/sem.c:641 perform_atomic_semop_slow() warn: potential spectre issue 'sma->sems'
    ipc/sem.c:721 perform_atomic_semop() warn: potential spectre issue 'sma->sems'

    Avoid any possible speculation by using array_index_nospec() thus
    ensuring the semnum value is bounded to [0, sma->sem_nsems). With the
    exception of sem_lock() all of these are slowpaths.
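
    The fix pattern, roughly:

        #include <linux/nospec.h>

        /* clamp the index before it can be used under speculation */
        idx = array_index_nospec(semnum, sma->sem_nsems);
        curr = &sma->sems[idx];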

    Link: http://lkml.kernel.org/r/20180423171131.njs4rfm2yzyeg6do@linux-n805
    Signed-off-by: Davidlohr Bueso
    Reported-by: Dan Carpenter
    Cc: Peter Zijlstra
    Cc: "Gustavo A. R. Silva"
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     

13 Jun, 2018

1 commit

  • The kvmalloc() function has a 2-factor argument form, kvmalloc_array(). This
    patch replaces cases of:

    kvmalloc(a * b, gfp)

    with:
    kvmalloc_array(a, b, gfp)

    as well as handling cases of:

    kvmalloc(a * b * c, gfp)

    with:

    kvmalloc(array3_size(a, b, c), gfp)

    as it's slightly less ugly than:

    kvmalloc_array(array_size(a, b), c, gfp)

    This does, however, attempt to ignore constant size factors like:

    kvmalloc(4 * 1024, gfp)

    though any constants defined via macros get caught up in the conversion.

    Any factors with a sizeof() of "unsigned char", "char", and "u8" were
    dropped, since they're redundant.
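
    For ipc/sem.c specifically, the conversion amounts to a change of this
    shape (illustrative):

        /* before */
        sops = kvmalloc(sizeof(*sops) * nsops, GFP_KERNEL);

        /* after */
        sops = kvmalloc_array(nsops, sizeof(*sops), GFP_KERNEL);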

    The Coccinelle script used for this was:

    // Fix redundant parens around sizeof().
    @@
    type TYPE;
    expression THING, E;
    @@

    (
    kvmalloc(
    - (sizeof(TYPE)) * E
    + sizeof(TYPE) * E
    , ...)
    |
    kvmalloc(
    - (sizeof(THING)) * E
    + sizeof(THING) * E
    , ...)
    )

    // Drop single-byte sizes and redundant parens.
    @@
    expression COUNT;
    typedef u8;
    typedef __u8;
    @@

    (
    kvmalloc(
    - sizeof(u8) * (COUNT)
    + COUNT
    , ...)
    |
    kvmalloc(
    - sizeof(__u8) * (COUNT)
    + COUNT
    , ...)
    |
    kvmalloc(
    - sizeof(char) * (COUNT)
    + COUNT
    , ...)
    |
    kvmalloc(
    - sizeof(unsigned char) * (COUNT)
    + COUNT
    , ...)
    |
    kvmalloc(
    - sizeof(u8) * COUNT
    + COUNT
    , ...)
    |
    kvmalloc(
    - sizeof(__u8) * COUNT
    + COUNT
    , ...)
    |
    kvmalloc(
    - sizeof(char) * COUNT
    + COUNT
    , ...)
    |
    kvmalloc(
    - sizeof(unsigned char) * COUNT
    + COUNT
    , ...)
    )

    // 2-factor product with sizeof(type/expression) and identifier or constant.
    @@
    type TYPE;
    expression THING;
    identifier COUNT_ID;
    constant COUNT_CONST;
    @@

    (
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(TYPE) * (COUNT_ID)
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(TYPE) * COUNT_ID
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(TYPE) * (COUNT_CONST)
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(TYPE) * COUNT_CONST
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(THING) * (COUNT_ID)
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(THING) * COUNT_ID
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(THING) * (COUNT_CONST)
    + COUNT_CONST, sizeof(THING)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(THING) * COUNT_CONST
    + COUNT_CONST, sizeof(THING)
    , ...)
    )

    // 2-factor product, only identifiers.
    @@
    identifier SIZE, COUNT;
    @@

    - kvmalloc
    + kvmalloc_array
    (
    - SIZE * COUNT
    + COUNT, SIZE
    , ...)

    // 3-factor product with 1 sizeof(type) or sizeof(expression), with
    // redundant parens removed.
    @@
    expression THING;
    identifier STRIDE, COUNT;
    type TYPE;
    @@

    (
    kvmalloc(
    - sizeof(TYPE) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kvmalloc(
    - sizeof(TYPE) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kvmalloc(
    - sizeof(TYPE) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kvmalloc(
    - sizeof(TYPE) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kvmalloc(
    - sizeof(THING) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kvmalloc(
    - sizeof(THING) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kvmalloc(
    - sizeof(THING) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kvmalloc(
    - sizeof(THING) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    )

    // 3-factor product with 2 sizeof(variable), with redundant parens removed.
    @@
    expression THING1, THING2;
    identifier COUNT;
    type TYPE1, TYPE2;
    @@

    (
    kvmalloc(
    - sizeof(TYPE1) * sizeof(TYPE2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kvmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kvmalloc(
    - sizeof(THING1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kvmalloc(
    - sizeof(THING1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kvmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    |
    kvmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    )

    // 3-factor product, only identifiers, with redundant parens removed.
    @@
    identifier STRIDE, SIZE, COUNT;
    @@

    (
    kvmalloc(
    - (COUNT) * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kvmalloc(
    - COUNT * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kvmalloc(
    - COUNT * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kvmalloc(
    - (COUNT) * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kvmalloc(
    - COUNT * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kvmalloc(
    - (COUNT) * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kvmalloc(
    - (COUNT) * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kvmalloc(
    - COUNT * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    )

    // Any remaining multi-factor products, first at least 3-factor products,
    // when they're not all constants...
    @@
    expression E1, E2, E3;
    constant C1, C2, C3;
    @@

    (
    kvmalloc(C1 * C2 * C3, ...)
    |
    kvmalloc(
    - (E1) * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kvmalloc(
    - (E1) * (E2) * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kvmalloc(
    - (E1) * (E2) * (E3)
    + array3_size(E1, E2, E3)
    , ...)
    |
    kvmalloc(
    - E1 * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    )

    // And then all remaining 2 factors products when they're not all constants,
    // keeping sizeof() as the second factor argument.
    @@
    expression THING, E1, E2;
    type TYPE;
    constant C1, C2, C3;
    @@

    (
    kvmalloc(sizeof(THING) * C2, ...)
    |
    kvmalloc(sizeof(TYPE) * C2, ...)
    |
    kvmalloc(C1 * C2 * C3, ...)
    |
    kvmalloc(C1 * C2, ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(TYPE) * (E2)
    + E2, sizeof(TYPE)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(TYPE) * E2
    + E2, sizeof(TYPE)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(THING) * (E2)
    + E2, sizeof(THING)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - sizeof(THING) * E2
    + E2, sizeof(THING)
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - (E1) * E2
    + E1, E2
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - (E1) * (E2)
    + E1, E2
    , ...)
    |
    - kvmalloc
    + kvmalloc_array
    (
    - E1 * E2
    + E1, E2
    , ...)
    )

    Signed-off-by: Kees Cook

    Kees Cook
     

20 Apr, 2018

4 commits

  • Three ipc syscalls (mq_timedsend, mq_timedreceive and semtimedop)
    take a timespec argument. After we move 32-bit architectures over to
    using 64-bit time_t based syscalls, we need separate entry points for
    the old 32-bit based interfaces.

    This changes the #ifdef guards for the existing 32-bit compat syscalls
    to check for CONFIG_COMPAT_32BIT_TIME instead, which will then be
    enabled on all existing 32-bit architectures.
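
    A hedged sketch of the guard change for semtimedop()'s compat entry
    point (prototype paraphrased):

        #ifdef CONFIG_COMPAT_32BIT_TIME
        COMPAT_SYSCALL_DEFINE4(semtimedop, int, semid,
                               struct sembuf __user *, tsems,
                               unsigned int, nsops,
                               const struct compat_timespec __user *, timeout)
        {
                return compat_ksys_semtimedop(semid, tsems, nsops, timeout);
        }
        #endif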

    Signed-off-by: Arnd Bergmann

    Arnd Bergmann
     
  • This is a preparation for changing over __kernel_timespec to 64-bit
    times, which involves assigning new system call numbers for mq_timedsend(),
    mq_timedreceive() and semtimedop() for compatibility with future y2038
    proof user space.

    The existing ABIs will remain available through compat code.

    Signed-off-by: Arnd Bergmann

    Arnd Bergmann
     
  • The shmid64_ds/semid64_ds/msqid64_ds data structures have been extended
    to contain extra fields for storing the upper bits of the time stamps;
    this patch does the other half of the job and fills the new fields on
    32-bit architectures, as well as for 32-bit tasks running on a 64-bit
    kernel in compat mode.

    There should be no change for native 64-bit tasks.
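
    The copy-out side then fills both halves, along these lines (field names
    taken from the extended uapi structs; illustrative only):

        /* 32-bit semid64_ds copy-out */
        v.sem_otime      = in->sem_otime;        /* low 32 bits          */
        v.sem_otime_high = in->sem_otime >> 32;  /* new upper-bits field */
        v.sem_ctime      = in->sem_ctime;
        v.sem_ctime_high = in->sem_ctime >> 32;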

    Signed-off-by: Arnd Bergmann

    Arnd Bergmann
     
  • In some places, we still used get_seconds() instead of
    ktime_get_real_seconds(), and I'm changing the remaining ones now to
    all use ktime_get_real_seconds() so we use the full available range for
    timestamps instead of overflowing the 'unsigned long' return value in
    year 2106 on 32-bit kernels.
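
    The change is mechanical; e.g. when stamping a change time (field name
    illustrative):

        sma->sem_ctime = ktime_get_real_seconds();  /* returns time64_t */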

    Signed-off-by: Arnd Bergmann

    Arnd Bergmann
     

12 Apr, 2018

1 commit

  • There is a permission discrepancy when consulting sem ipc object
    metadata between /proc/sysvipc/sem (0444) and the SEM_STAT semctl
    command. The latter does permission checks for the object vs S_IRUGO.
    As such there can be cases where EACCES is returned via syscall but the
    info is displayed anyway in the procfs files.

    While this might have security implications via info leaking (albeit no
    writing to the sma metadata), this behavior goes way back and showing
    all the objects regardless of the permissions was most likely an
    oversight - so we are stuck with it. Furthermore, modifying either the
    syscall or the procfs file can cause userspace programs to break (e.g.
    ipcs). Some applications require getting the procfs info (without root
    privileges) and can be rather slow in comparison with a syscall -- up to
    500x in some reported cases for shm.

    This patch introduces a new SEM_STAT_ANY command such that the sem ipc
    object permissions are ignored, and only audited instead. In addition,
    I've left the lsm security hook checks in place: if some policy blocks
    the call, the user has no choice other than parsing the procfs file.
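
    From user space the new command is used like SEM_STAT, i.e. with a
    kernel slot index rather than a semid (sketch; SEM_STAT_ANY may need to
    be defined by hand with older libc headers):

        #include <sys/sem.h>

        union semun { int val; struct semid_ds *buf; unsigned short *array; } arg;
        struct semid_ds ds;

        arg.buf = &ds;
        int semid = semctl(idx, 0, SEM_STAT_ANY, arg);  /* idx: slot index */
        if (semid < 0)
                perror("semctl(SEM_STAT_ANY)");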

    Link: http://lkml.kernel.org/r/20180215162458.10059-3-dave@stgolabs.net
    Signed-off-by: Davidlohr Bueso
    Reported-by: Robert Kettler
    Cc: Eric W. Biederman
    Cc: Kees Cook
    Cc: Manfred Spraul
    Cc: Michael Kerrisk
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     

04 Apr, 2018

1 commit

  • Pull namespace updates from Eric Biederman:
    "There was a lot of work this cycle fixing bugs that were discovered
    after the merge window and getting everything ready where we can
    reasonably support fully unprivileged fuse. The bug fixes you already
    have and much of the unprivileged fuse work is coming in via other
    trees.

    Still left for fully unprivileged fuse is figuring out how to cleanly
    handle .set_acl and .get_acl in the legacy case, and how to properly
    handle evm xattrs on unprivileged mounts.

    Included in the tree is a cleanup from Alexey that replaced a linked
    list with a statically allocated fixed-size array for the pid caches,
    which simplifies and speeds things up.

    Then there are some cleanups and fixes for the ipc namespace. The
    motivation was that in reviewing other code it was discovered that
    accessing ipc objects from different pid namespaces recorded pids in
    such a way that the wrong pids were returned when queried. In the worst
    case there has been a measured 30% performance impact for sysvipc
    semaphores. Other test cases showed no measurable performance impact.
    Manfred Spraul and Davidlohr Bueso who tend to work on sysvipc
    performance both gave the nod that this is good enough.

    Casey Schaufler and James Morris have given their approval to the LSM
    side of the changes.

    I simplified the types and the code dealing with sysvipc to pass just
    kern_ipc_perm for all three types of ipc, which reduced the header
    dependencies throughout the kernel and simplified the lsm code.

    That let me work on the pid fixes without having to worry about
    trivial changes causing complete kernel recompiles"

    * 'userns-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
    ipc/shm: Fix pid freeing.
    ipc/shm: fix up for struct file no longer being available in shm.h
    ipc/smack: Tidy up from the change in type of the ipc security hooks
    ipc: Directly call the security hook in ipc_ops.associate
    ipc/sem: Fix semctl(..., GETPID, ...) between pid namespaces
    ipc/msg: Fix msgctl(..., IPC_STAT, ...) between pid namespaces
    ipc/shm: Fix shmctl(..., IPC_STAT, ...) between pid namespaces.
    ipc/util: Helpers for making the sysvipc operations pid namespace aware
    ipc: Move IPCMNI from include/ipc.h into ipc/util.h
    msg: Move struct msg_queue into ipc/msg.c
    shm: Move struct shmid_kernel into ipc/shm.c
    sem: Move struct sem and struct sem_array into ipc/sem.c
    msg/security: Pass kern_ipc_perm not msg_queue into the msg_queue security hooks
    shm/security: Pass kern_ipc_perm not shmid_kernel into the shm security hooks
    sem/security: Pass kern_ipc_perm not sem_array into the sem security hooks
    pidns: simpler allocation of pid_* caches

    Linus Torvalds
     

03 Apr, 2018

3 commits

  • Provide ksys_semctl() and compat_ksys_semctl() wrappers to avoid in-kernel
    calls to these syscalls. The ksys_ prefix denotes that these functions are
    meant as a drop-in replacement for the syscalls. In particular, they use
    the same calling convention as sys_semctl() and compat_sys_semctl().

    This patch is part of a series which removes in-kernel calls to syscalls.
    On this basis, the syscall entry path can be streamlined. For details, see
    http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net
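
    The resulting shape, roughly (the same pattern is used for ksys_semget()
    and ksys_semtimedop() in the next two entries):

        long ksys_semctl(int semid, int semnum, int cmd, unsigned long arg);

        SYSCALL_DEFINE4(semctl, int, semid, int, semnum, int, cmd,
                        unsigned long, arg)
        {
                return ksys_semctl(semid, semnum, cmd, arg);
        }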

    Cc: Al Viro
    Cc: Andrew Morton
    Signed-off-by: Dominik Brodowski

    Dominik Brodowski
     
  • Provide ksys_semget() wrapper to avoid in-kernel calls to this syscall.
    The ksys_ prefix denotes that this function is meant as a drop-in
    replacement for the syscall. In particular, it uses the same calling
    convention as sys_semget().

    This patch is part of a series which removes in-kernel calls to syscalls.
    On this basis, the syscall entry path can be streamlined. For details, see
    http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

    Cc: Al Viro
    Cc: Andrew Morton
    Signed-off-by: Dominik Brodowski

    Dominik Brodowski
     
  • Provide ksys_semtimedop() and compat_ksys_semtimedop() wrappers to avoid
    in-kernel calls to these syscalls. The ksys_ prefix denotes that these
    functions are meant as a drop-in replacement for the syscalls. In
    particular, they use the same calling convention as sys_semtimedop() and
    compat_sys_semtimedop().

    This patch is part of a series which removes in-kernel calls to syscalls.
    On this basis, the syscall entry path can be streamlined. For details, see
    http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

    Cc: Al Viro
    Cc: Andrew Morton
    Signed-off-by: Dominik Brodowski

    Dominik Brodowski
     

28 Mar, 2018

2 commits

  • After the last round of cleanups the shm, sem, and msg associate
    operations just became trivial wrappers around the appropriate security
    method. Simplify things further by just calling the security method
    directly.
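
    After the cleanup the ops table can point straight at the hook, roughly:

        static const struct ipc_ops sem_ops = {
                .getnew      = newary,
                .associate   = security_sem_associate,
                .more_checks = sem_more_checks,
        };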

    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     
  • Today the last process to update a semaphore is remembered and
    reported in the pid namespace of that process. If there are processes
    in any other pid namespace querying that process id with GETPID, the
    result will be unusable nonsense, as the value does not mean anything
    in the caller's pid namespace.

    Due to ipc_update_pid I don't think you will be able to get System V
    ipc semaphores into a troublesome cache line ping-pong. Using struct
    pids from separate processes is not a problem, because they do not
    share a cache line. Using the struct pid from different threads of the
    same process is unlikely to be a problem, as the reference count update
    can be avoided.

    Further, Linux futexes are a much better tool for the job of mutual
    exclusion between processes than System V semaphores. So I expect
    programs that are performance limited by their interprocess mutual
    exclusion primitive will be using futexes.

    So while it is possible that changing the storage of the last
    process of a System V semaphore from an integer to a struct pid
    will cause a performance regression because of the cost
    of frequently updating the pid reference count, I don't expect
    that to happen in practice.

    This change updates semctl(..., GETPID, ...) to return the
    process id of the last process to update a semaphore in the
    pid namespace of the calling process.
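
    Concretely, with sempid now holding a struct pid pointer, GETPID
    translates it into the caller's namespace, roughly:

        case GETPID:
                err = pid_vnr(curr->sempid);  /* pid as seen by the caller */
                goto out_unlock;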

    Fixes: b488893a390e ("pid namespaces: changes to show virtual ids to user")
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     

23 Mar, 2018

2 commits


07 Feb, 2018

1 commit

  • As described in the title, this patch fixes id_ds inconsistency when
    ctl_stat executes concurrently with some ds-changing function, e.g.
    shmat, msgsnd or whatever.

    For instance, if shmctl(IPC_STAT) is running concurrently
    with shmat, the following data structure can be returned:
    {... shm_lpid = 0, shm_nattch = 1, ...}
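
    The fix is to take the object lock around the whole stat snapshot, along
    these lines (sketch, not the exact diff):

        ipc_lock_object(&shp->shm_perm);
        /* copy shm_lpid, shm_nattch, ... into the stat buffer here, so the
         * snapshot is internally consistent */
        ipc_unlock_object(&shp->shm_perm);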

    Link: http://lkml.kernel.org/r/20171202153456.6514-1-philippe.mikoyan@skat.systems
    Signed-off-by: Philippe Mikoyan
    Reviewed-by: Davidlohr Bueso
    Cc: Al Viro
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Philippe Mikoyan
     

18 Nov, 2017

3 commits

  • Merge more updates from Andrew Morton:

    - a bit more MM

    - procfs updates

    - dynamic-debug fixes

    - lib/ updates

    - checkpatch

    - epoll

    - nilfs2

    - signals

    - rapidio

    - PID management cleanup and optimization

    - kcov updates

    - sysvipc updates

    - quite a few misc things all over the place

    * emailed patches from Andrew Morton : (94 commits)
    EXPERT Kconfig menu: fix broken EXPERT menu
    include/asm-generic/topology.h: remove unused parent_node() macro
    arch/tile/include/asm/topology.h: remove unused parent_node() macro
    arch/sparc/include/asm/topology_64.h: remove unused parent_node() macro
    arch/sh/include/asm/topology.h: remove unused parent_node() macro
    arch/ia64/include/asm/topology.h: remove unused parent_node() macro
    drivers/pcmcia/sa1111_badge4.c: avoid unused function warning
    mm: add infrastructure for get_user_pages_fast() benchmarking
    sysvipc: make get_maxid O(1) again
    sysvipc: properly name ipc_addid() limit parameter
    sysvipc: duplicate lock comments wrt ipc_addid()
    sysvipc: unteach ids->next_id for !CHECKPOINT_RESTORE
    initramfs: use time64_t timestamps
    drivers/watchdog: make use of devm_register_reboot_notifier()
    kernel/reboot.c: add devm_register_reboot_notifier()
    kcov: update documentation
    Makefile: support flag -fsanitizer-coverage=trace-cmp
    kcov: support comparison operands collection
    kcov: remove pointless current != NULL check
    kernel/panic.c: add TAINT_AUX
    ...

    Linus Torvalds
     
  • The comment in msgqueues when using ipc_addid() is quite useful imo.
    Duplicate it for shm and semaphores.

    Link: http://lkml.kernel.org/r/20170831172049.14576-3-dave@stgolabs.net
    Signed-off-by: Davidlohr Bueso
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • Pull misc vfs updates from Al Viro:
    "Assorted stuff, really no common topic here"

    * 'work.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    vfs: grab the lock instead of blocking in __fd_install during resizing
    vfs: stop clearing close on exec when closing a fd
    include/linux/fs.h: fix comment about struct address_space
    fs: make fiemap work from compat_ioctl
    coda: fix 'kernel memory exposure attempt' in fsync
    pstore: remove unneeded unlikely()
    vfs: remove unneeded unlikely()
    stubs for mount_bdev() and kill_block_super() in !CONFIG_BLOCK case
    make vfs_ustat() static
    do_handle_open() should be static
    elf_fdpic: fix unused variable warning
    fold destroy_super() into __put_super()
    new helper: destroy_unused_super()
    fix address space warnings in ipc/
    acct.h: get rid of detritus

    Linus Torvalds
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boiler plate text.
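
    For a C source file that previously carried no license text, the added
    line is simply:

        // SPDX-License-Identifier: GPL-2.0

    (headers use the /* SPDX-License-Identifier: GPL-2.0 */ comment form
    instead.)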

    This patch is based on work done by Thomas Gleixner, Kate Stewart, and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information,

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be applied
    to a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
      lines of source
    - File already had some variant of a license header in it (even if
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

12 Oct, 2017

1 commit


15 Sep, 2017

1 commit

  • Pull ipc compat cleanup and 64-bit time_t from Al Viro:
    "IPC copyin/copyout sanitizing, including 64bit time_t work from Deepa
    Dinamani"

    * 'work.ipc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    utimes: Make utimes y2038 safe
    ipc: shm: Make shmid_kernel timestamps y2038 safe
    ipc: sem: Make sem_array timestamps y2038 safe
    ipc: msg: Make msg_queue timestamps y2038 safe
    ipc: mqueue: Replace timespec with timespec64
    ipc: Make sys_semtimedop() y2038 safe
    get rid of SYSVIPC_COMPAT on ia64
    semtimedop(): move compat to native
    shmat(2): move compat to native
    msgrcv(2), msgsnd(2): move compat to native
    ipc(2): move compat to native
    ipc: make use of compat ipc_perm helpers
    semctl(): move compat to native
    semctl(): separate all layout-dependent copyin/copyout
    msgctl(): move compat to native
    msgctl(): split the actual work from copyin/copyout
    ipc: move compat shmctl to native
    shmctl: split the work from copyin/copyout

    Linus Torvalds
     

09 Sep, 2017

4 commits

  • ipc_findkey() used to scan all objects to look for the wanted key. This
    is slow when using a high number of keys. This change adds an rhashtable
    of kern_ipc_perm objects in ipc_ids, so that one lookup ceases to be O(n).

    This change gives an 865% improvement in the reaim.jobs_per_min benchmark
    on a 56-thread Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G
    memory [1].
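
    With the rhashtable in place, the key lookup in ipc_findkey() becomes a
    single hash lookup, roughly:

        ipcp = rhashtable_lookup_fast(&ids->key_ht, &key, ipc_kht_params);
        if (!ipcp)
                return NULL;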

    Other (more micro) benchmark results, by the author: On an i5 laptop, the
    following loop executed right after a reboot took, without and with this
    change:

        for (int i = 0, k = 0x424242; i < KEYS; ++i)
                semget(k++, 1, IPC_CREAT | 0600);

                         total           total   max single call   max single call
     KEYS              without            with           without              with
        1                  3.5             4.9               3.5               4.9
       10                  7.6             8.6               3.7               4.7
       32                 16.2            15.9               4.3               5.3
      100                 72.9            41.8               3.7               4.7
     1000              5,630.0           502.0                 *                 *
    10000          1,340,000.0         7,240.0                 *                 *
    31900         17,600,000.0        22,200.0                 *                 *
    (all durations in µs)

    *: unreliable measure: high variance

    The duration for a lookup-only usage was obtained by the same loop once
    the keys are present:

                         total           total   max single call   max single call
     KEYS              without            with           without              with
        1                  2.1             2.5               2.1               2.5
       10                  4.5             4.8               2.2               2.3
       32                 13.0            10.8               2.3               2.8
      100                 82.9            25.1                 *               2.3
     1000              5,780.0           217.0                 *                 *
    10000          1,470,000.0         2,520.0                 *                 *
    31900         17,400,000.0         7,810.0                 *                 *
    (all durations in µs)

    Finally, executing each semget() in a new process gave, when still
    summing only the durations of these syscalls:

    creation:
                         total           total
     KEYS              without            with
        1                  3.7             5.0
       10                 32.9            36.7
       32                125.0           109.0
      100                523.0           353.0
     1000             20,300.0         3,280.0
    10000          2,470,000.0        46,700.0
    31900         27,800,000.0       219,000.0
    (all durations in µs)

    lookup-only:
                         total           total
     KEYS              without            with
        1                  2.5             2.7
       10                 25.4            24.4
       32                106.0            72.6
      100                591.0           352.0
     1000             22,400.0         2,250.0
    10000          2,510,000.0        25,700.0
    31900         28,200,000.0       115,000.0
    (all durations in µs)

    [1] http://lkml.kernel.org/r/20170814060507.GE23258@yexl-desktop

    Link: http://lkml.kernel.org/r/20170815194954.ck32ta2z35yuzpwp@debix
    Signed-off-by: Guillaume Knispel
    Reviewed-by: Marc Pardo
    Cc: Davidlohr Bueso
    Cc: Kees Cook
    Cc: Manfred Spraul
    Cc: Alexey Dobriyan
    Cc: "Eric W. Biederman"
    Cc: "Peter Zijlstra (Intel)"
    Cc: Ingo Molnar
    Cc: Sebastian Andrzej Siewior
    Cc: Serge Hallyn
    Cc: Andrey Vagin
    Cc: Guillaume Knispel
    Cc: Marc Pardo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Guillaume Knispel
     
  • Replacing semop()'s kmalloc with kvmalloc was originally proposed by
    Manfred on the premise that it can be called for larger (than order-1)
    sizes. For example, while Oracle recommends setting SEMOPM to a _minimum_
    of 100, some distros[1] encourage the setting to be a factor of the amount
    of db tasks (PROCESSES), which can get fishy for large systems (easily
    going beyond 1000).

    [1] An Example of Semaphore Settings
    https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Tuning_and_Optimizing_Red_Hat_Enterprise_Linux_for_Oracle_9i_and_10g_Databases/sect-Oracle_9i_and_10g_Tuning_Guide-Setting_Semaphores-An_Example_of_Semaphore_Settings.html

    So let's just convert this to kvmalloc, just like the rest of the
    allocations we do in ipc. While the fallback vmalloc obviously involves
    more overhead, this is by far the uncommon path, and it's better for the
    user than just erroring out with kmalloc.

    Link: http://lkml.kernel.org/r/20170803184136.13855-2-dave@stgolabs.net
    Signed-off-by: Davidlohr Bueso
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • ... 'tis not used.

    Link: http://lkml.kernel.org/r/20170803184136.13855-1-dave@stgolabs.net
    Signed-off-by: Davidlohr Bueso
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • refcount_t type and corresponding API should be used instead of atomic_t
    when the variable is used as a reference counter. This allows avoiding
    accidental refcounter overflows that might lead to use-after-free
    situations.
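
    For ipc/sem.c this covers the semaphore undo list's reference count; the
    resulting pattern is roughly:

        refcount_set(&undo_list->refcnt, 1);           /* at creation        */
        refcount_inc(&undo_list->refcnt);              /* taking a reference */
        if (refcount_dec_and_test(&undo_list->refcnt))
                kfree(undo_list);                      /* last reference     */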

    Link: http://lkml.kernel.org/r/1499417992-3238-3-git-send-email-elena.reshetova@intel.com
    Signed-off-by: Elena Reshetova
    Signed-off-by: Hans Liljestrand
    Signed-off-by: Kees Cook
    Signed-off-by: David Windsor
    Cc: Peter Zijlstra
    Cc: Greg Kroah-Hartman
    Cc: "Eric W. Biederman"
    Cc: Ingo Molnar
    Cc: Alexey Dobriyan
    Cc: Serge Hallyn
    Cc: Davidlohr Bueso
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Elena Reshetova
     

04 Sep, 2017

2 commits

  • time_t is not y2038 safe. Replace all uses of
    time_t by y2038 safe time64_t.

    Similarly, replace the calls to get_seconds() with
    y2038 safe ktime_get_real_seconds().
    Note that this preserves fast access on 64 bit systems,
    but 32 bit systems need sequence counters.

    The syscall interfaces themselves are not changed as part of
    the patch. They will be part of a different series.

    Signed-off-by: Deepa Dinamani
    Reviewed-by: Arnd Bergmann
    Signed-off-by: Al Viro

    Deepa Dinamani
     
  • struct timespec is not y2038 safe on 32 bit machines.
    Replace timespec with y2038 safe struct timespec64.

    Note that the patch only changes the internals without
    modifying the syscall interface. This will be part
    of a separate series.

    Signed-off-by: Deepa Dinamani
    Reviewed-by: Arnd Bergmann
    Signed-off-by: Al Viro

    Deepa Dinamani
     

21 Aug, 2017

1 commit


17 Aug, 2017

1 commit

  • There is no agreed-upon definition of spin_unlock_wait()'s semantics,
    and it appears that all callers could do just as well with a lock/unlock
    pair. This commit therefore replaces the spin_unlock_wait() call in
    exit_sem() with spin_lock() followed immediately by spin_unlock().
    This should be safe from a performance perspective because exit_sem()
    is rarely invoked in production.
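
    The replacement boils down to (the lock here is the per-task semaphore
    undo-list lock in exit_sem(); paraphrased):

        /* before */
        spin_unlock_wait(&ulp->lock);

        /* after: acquire and immediately release instead */
        spin_lock(&ulp->lock);
        spin_unlock(&ulp->lock);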

    Signed-off-by: Paul E. McKenney
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Will Deacon
    Cc: Peter Zijlstra
    Cc: Alan Stern
    Cc: Andrea Parri
    Cc: Linus Torvalds
    Acked-by: Manfred Spraul

    Paul E. McKenney
     

03 Aug, 2017

1 commit

  • When building with the randstruct gcc plugin, the layout of the IPC
    structs will be randomized, which requires any sub-structure accesses to
    use container_of(). The proc display handlers were missing the needed
    container_of()s since the iterator is passing in the top-level struct
    kern_ipc_perm.

    This would lead to crashes when running the "lsipc" program after the
    system had IPC registered (e.g. after starting up Gnome):

    general protection fault: 0000 [#1] PREEMPT SMP
    ...
    RIP: 0010:shm_add_rss_swap.isra.1+0x13/0xa0
    ...
    Call Trace:
    sysvipc_shm_proc_show+0x5e/0x150
    sysvipc_proc_show+0x1a/0x30
    seq_read+0x2e9/0x3f0
    ...
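
    The fix is to recover the containing object explicitly in the show
    handlers, e.g. for shm (sketch):

        static int sysvipc_shm_proc_show(struct seq_file *s, void *it)
        {
                struct kern_ipc_perm *ipcp = it;
                struct shmid_kernel *shp =
                        container_of(ipcp, struct shmid_kernel, shm_perm);

                /* ... use shp instead of casting ipcp directly ... */
        }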

    Link: http://lkml.kernel.org/r/20170730205950.GA55841@beast
    Fixes: 3859a271a003 ("randstruct: Mark various structs for randomization")
    Signed-off-by: Kees Cook
    Reported-by: Dominik Brodowski
    Acked-by: Davidlohr Bueso
    Acked-by: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kees Cook
     

16 Jul, 2017

1 commit