09 Jun, 2020

1 commit

  • The reason is to avoid a delay caused by the synchronize_rcu() call in
    kern_umount() when the mqueue mount is freed.

    The code:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <error.h>
    #include <errno.h>
    #include <stdlib.h>

    int main()
    {
            int i;

            for (i = 0; i < 1000; i++)
                    if (unshare(CLONE_NEWIPC) < 0)
                            error(EXIT_FAILURE, errno, "unshare");
    }

    goes from

    Command being timed: "./ipc-namespace"
    User time (seconds): 0.00
    System time (seconds): 0.06
    Percent of CPU this job got: 0%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:08.05

    to

    Command being timed: "./ipc-namespace"
    User time (seconds): 0.00
    System time (seconds): 0.02
    Percent of CPU this job got: 96%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.03

    Signed-off-by: Giuseppe Scrivano
    Signed-off-by: Andrew Morton
    Reviewed-by: Paul E. McKenney
    Reviewed-by: Waiman Long
    Cc: Davidlohr Bueso
    Cc: Manfred Spraul
    Link: http://lkml.kernel.org/r/20200225145419.527994-1-gscrivan@redhat.com
    Signed-off-by: Linus Torvalds

    Giuseppe Scrivano
     

15 May, 2019

1 commit

  • Rewrite, based on the patch from Waiman Long:

    The mixing in of a sequence number into the IPC IDs is probably to avoid
    ID reuse in userspace as much as possible. With ipcmni_extend mode, the
    number of usable sequence numbers is greatly reduced leading to higher
    chance of ID reuse.

    To address this issue, we need to conserve the sequence number space as
    much as possible. Right now, the sequence number is incremented for
    every new ID created. In reality, we only need to increment the
    sequence number when the newly allocated ID is not greater than the
    last one allocated; only in that case may the new ID collide with an
    existing one. This is done irrespective of the ipcmni mode.

    In order to avoid any races, the index is first allocated and then the
    pointer is replaced.
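
    As a rough sketch of the idea (illustrative only, not the exact kernel
    code; ipcmni_seq_shift()/ipcmni_seq_max() are the helpers named above):

    idx = idr_alloc(&ids->ipcs_idr, NULL, 0, 0, GFP_NOWAIT);
    if (idx >= 0) {
            /* only bump seq when the new index could collide with a
             * recently freed id, i.e. it is not above the last index */
            if (idx <= ids->last_idx) {
                    ids->seq++;
                    if (ids->seq >= ipcmni_seq_max())
                            ids->seq = 0;
            }
            ids->last_idx = idx;

            new->seq = ids->seq;
            new->id = (new->seq << ipcmni_seq_shift()) + idx;
            /* publish the fully initialised object only now */
            idr_replace(&ids->ipcs_idr, new, idx);
    }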

    Changes compared to the initial patch:
    - Handle failures from idr_alloc().
    - Avoid that concurrent operations can see the wrong sequence number.
    (This is achieved by using idr_replace()).
    - IPCMNI_SEQ_SHIFT is not a constant, thus renamed to
    ipcmni_seq_shift().
    - IPCMNI_SEQ_MAX is not a constant, thus renamed to ipcmni_seq_max().

    Link: http://lkml.kernel.org/r/20190329204930.21620-2-longman@redhat.com
    Signed-off-by: Manfred Spraul
    Signed-off-by: Waiman Long
    Suggested-by: Matthew Wilcox
    Acked-by: Waiman Long
    Cc: Al Viro
    Cc: Davidlohr Bueso
    Cc: "Eric W . Biederman"
    Cc: Jonathan Corbet
    Cc: Kees Cook
    Cc: "Luis R. Rodriguez"
    Cc: Takashi Iwai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Manfred Spraul
     

23 Aug, 2018

2 commits

  • The variable names have become a mess, so standardize them again:

    id: user space id. Called semid, shmid, msgid if the type is known.
    Most functions use "id" already.
    idx: "index" for the idr lookup. Right now, some functions use lid;
    ipc_addid() already uses idx as the variable name.
    seq: sequence number, to avoid quick collisions of the user space id
    key: user space key, used for the rhash tree

    Link: http://lkml.kernel.org/r/20180712185241.4017-12-manfred@colorfullife.com
    Signed-off-by: Manfred Spraul
    Cc: Dmitry Vyukov
    Cc: Davidlohr Bueso
    Cc: Davidlohr Bueso
    Cc: Herbert Xu
    Cc: Kees Cook
    Cc: Michael Kerrisk
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Manfred Spraul
     
  • In sysvipc we have an ids->tables_initialized regarding the rhashtable,
    introduced in 0cfb6aee70bd ("ipc: optimize semget/shmget/msgget for lots
    of keys")

    It's there, specifically, to prevent nil pointer dereferences from using
    an uninitialized api. Considering how rhashtable_init() can fail
    (probably due to ENOMEM, if anything), this made the overall ipc
    initialization capable of failure as well. That alone is ugly, but fine;
    however, I've spotted a few issues regarding the semantics of
    tables_initialized (however unlikely they may be):

    - There is inconsistency in what we return to userspace: ipc_addid()
    returns ENOSPC which is certainly _wrong_, while ipc_obtain_object_idr()
    returns EINVAL.

    - After we started using rhashtables, ipc_findkey() can return nil upon
    !tables_initialized, but the caller expects nil for when the ipc
    structure isn't found, and can therefore call into ipcget() callbacks.

    Now that rhashtable initialization cannot fail, we can properly get rid of
    the hack altogether.

    [manfred@colorfullife.com: commit id extended to 12 digits]
    Link: http://lkml.kernel.org/r/20180712185241.4017-10-manfred@colorfullife.com
    Signed-off-by: Davidlohr Bueso
    Signed-off-by: Manfred Spraul
    Cc: Dmitry Vyukov
    Cc: Herbert Xu
    Cc: Kees Cook
    Cc: Michael Kerrisk
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     

22 Jun, 2018

1 commit

  • Due to the use of rhashtables in net namespaces,
    rhashtable.h is included in much of the kernel,
    so a small change can require a large recompilation.
    This makes development painful.

    This patch splits out rhashtable-types.h which just includes
    the major type declarations, and does not include (non-trivial)
    inline code. rhashtable.h is no longer included by anything
    in the include/ directory.
    Common include files only include rhashtable-types.h so a large
    recompilation is only triggered when that changes.
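
    As an illustrative example (hypothetical header and struct names), a
    header that only embeds a table can use the lightweight include, while
    code that calls the inline helpers still includes rhashtable.h:

    /* some_ids.h: only needs the struct layout, not the inline code */
    #include <linux/rhashtable-types.h>

    struct some_ids {
            struct rhashtable key_ht;       /* layout only */
    };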

    Acked-by: Herbert Xu
    Signed-off-by: NeilBrown
    Signed-off-by: David S. Miller

    NeilBrown
     

18 Nov, 2017

2 commits

  • For a custom microbenchmark on a 3.30GHz Xeon SandyBridge, which calls
    IPC_STAT over and over, it was calculated that, on average, the cost of
    ipc_get_maxid() for increasing amounts of keys was:

    10 keys: ~900 cycles
    100 keys: ~15000 cycles
    1000 keys: ~150000 cycles
    10000 keys: ~2100000 cycles

    This is unsurprising as maxid is currently O(n).

    By having the max_id available in O(1) we save all those cycles for each
    semctl(_STAT) command; the idr_find can be expensive -- and some real
    (customer) workloads actually poll on this.

    Note that this used to be the case, until commit 7ca7e564e04 ("ipc:
    store ipcs into IDRs"). The cost is the extra idr_find when doing
    RMIDs, but we simply go backwards, and should not take too many
    iterations to find the new value.
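
    A rough sketch of that backwards walk (illustrative only; field names may
    differ from the actual kernel code):

    /* the removed entry held the highest index: walk the idr backwards
     * until the next live entry, which becomes the new maximum */
    if (idx == ids->max_id) {
            do {
                    ids->max_id--;
                    if (ids->max_id < 0)
                            break;
            } while (!idr_find(&ids->ipcs_idr, ids->max_id));
    }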

    [akpm@linux-foundation.org: coding-style fixes]
    Link: http://lkml.kernel.org/r/20170831172049.14576-5-dave@stgolabs.net
    Signed-off-by: Davidlohr Bueso
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     
  • Patch series "sysvipc: ipc-key management improvements".

    Here are a few improvements I spotted while eyeballing Guillaume's
    rhashtable implementation for ipc keys. The first and fourth patches
    are the interesting ones, the middle two are trivial.

    This patch (of 4):

    The next_id object-allocation functionality was introduced in commit
    03f595668017 ("ipc: add sysctl to specify desired next object id").

    Given that these new entries are _only_ exported under the
    CONFIG_CHECKPOINT_RESTORE option, there is no point for the common case
    to even know about ->next_id. As such rewrite ipc_buildid() such that
    it can do away with the field as well as unnecessary branches when
    adding a new identifier. The end result also better differentiates both
    cases, so the code ends up being cleaner, albeit with a small duplication
    in the default case.
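
    A hedged sketch of the resulting shape (macro and helper names assumed;
    the real function may differ in detail):

    #ifdef CONFIG_CHECKPOINT_RESTORE
            if (ids->next_id < 0) {         /* default behaviour */
                    new->seq = ids->seq++;
                    if (ids->seq > IPCID_SEQ_MAX)
                            ids->seq = 0;
            } else {                        /* c/r requested a specific id */
                    new->seq = ipcid_to_seqx(ids->next_id);
                    ids->next_id = -1;
            }
    #else
            /* common case: no ->next_id field, no extra branch */
            new->seq = ids->seq++;
            if (ids->seq > IPCID_SEQ_MAX)
                    ids->seq = 0;
    #endif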

    [akpm@linux-foundation.org: coding-style fixes]
    Link: http://lkml.kernel.org/r/20170831172049.14576-2-dave@stgolabs.net
    Signed-off-by: Davidlohr Bueso
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boiler plate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it.
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information,

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied to
    a file was done in a spreadsheet of side by side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging was:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source
    - File already had some variant of a license header in it (even if [...])

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

09 Sep, 2017

2 commits

  • ipc_findkey() used to scan all objects to look for the wanted key. This
    is slow when using a high number of keys. This change adds an rhashtable
    of kern_ipc_perm objects in ipc_ids, so that a key lookup ceases to be O(n).
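
    Conceptually (a sketch; the names ids->key_ht and ipc_kht_params are
    assumed to match ipc/util.c), the key lookup becomes a hash lookup:

    struct kern_ipc_perm *ipcp;

    ipcp = rhashtable_lookup_fast(&ids->key_ht, &key, ipc_kht_params);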

    This change gives an 865% improvement in the reaim.jobs_per_min benchmark on a
    56-thread Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory [1].

    Other (more micro) benchmark results, by the author: On an i5 laptop, the
    following loop executed right after a reboot took, without and with this
    change:

    for (int i = 0, k = 0x424242; i < KEYS; ++i)
            semget(k++, 1, IPC_CREAT | 0600);

    KEYS        total without      total with   max single call without   max single call with

    1                     3.5          4.9 µs                       3.5                    4.9
    10                    7.6          8.6 µs                       3.7                    4.7
    32                   16.2         15.9 µs                       4.3                    5.3
    100                  72.9         41.8 µs                       3.7                    4.7
    1000              5,630.0        502.0 µs                         *                      *
    10000         1,340,000.0      7,240.0 µs                         *                      *
    31900        17,600,000.0     22,200.0 µs                         *                      *

    *: unreliable measure: high variance

    The duration for a lookup-only usage was obtained by the same loop once
    the keys are present:

    KEYS        total without      total with   max single call without   max single call with

    1                     2.1          2.5 µs                       2.1                    2.5
    10                    4.5          4.8 µs                       2.2                    2.3
    32                   13.0         10.8 µs                       2.3                    2.8
    100                  82.9         25.1 µs                         *                    2.3
    1000              5,780.0        217.0 µs                         *                      *
    10000         1,470,000.0      2,520.0 µs                         *                      *
    31900        17,400,000.0      7,810.0 µs                         *                      *

    Finally, executing each semget() in a new process gave, when still
    summing only the durations of these syscalls:

    creation:

    KEYS        total without      total with

    1                     3.7          5.0 µs
    10                   32.9         36.7 µs
    32                  125.0        109.0 µs
    100                 523.0        353.0 µs
    1000             20,300.0      3,280.0 µs
    10000         2,470,000.0     46,700.0 µs
    31900        27,800,000.0    219,000.0 µs

    lookup-only:

    KEYS        total without      total with

    1                     2.5          2.7 µs
    10                   25.4         24.4 µs
    32                  106.0         72.6 µs
    100                 591.0        352.0 µs
    1000             22,400.0      2,250.0 µs
    10000         2,510,000.0     25,700.0 µs
    31900        28,200,000.0    115,000.0 µs

    [1] http://lkml.kernel.org/r/20170814060507.GE23258@yexl-desktop

    Link: http://lkml.kernel.org/r/20170815194954.ck32ta2z35yuzpwp@debix
    Signed-off-by: Guillaume Knispel
    Reviewed-by: Marc Pardo
    Cc: Davidlohr Bueso
    Cc: Kees Cook
    Cc: Manfred Spraul
    Cc: Alexey Dobriyan
    Cc: "Eric W. Biederman"
    Cc: "Peter Zijlstra (Intel)"
    Cc: Ingo Molnar
    Cc: Sebastian Andrzej Siewior
    Cc: Serge Hallyn
    Cc: Andrey Vagin
    Cc: Guillaume Knispel
    Cc: Marc Pardo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Guillaume Knispel
     
  • The refcount_t type and corresponding API should be used instead of atomic_t
    when the variable is used as a reference counter. This avoids
    accidental refcounter overflows that might lead to use-after-free
    situations.
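
    For illustration, the conversion pattern looks roughly like this (a
    sketch with a hypothetical structure, not the exact ipc code):

    #include <linux/refcount.h>

    struct some_object {
            refcount_t refcount;            /* was: atomic_t refcount */
    };

    /* take a reference */
    refcount_inc(&obj->refcount);

    /* drop a reference; refcount_* warns and saturates on overflow or
     * underflow instead of silently wrapping like atomic_t */
    if (refcount_dec_and_test(&obj->refcount))
            kfree(obj);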

    Link: http://lkml.kernel.org/r/1499417992-3238-2-git-send-email-elena.reshetova@intel.com
    Signed-off-by: Elena Reshetova
    Signed-off-by: Hans Liljestrand
    Signed-off-by: Kees Cook
    Signed-off-by: David Windsor
    Cc: Peter Zijlstra
    Cc: Greg Kroah-Hartman
    Cc: "Eric W. Biederman"
    Cc: Ingo Molnar
    Cc: Alexey Dobriyan
    Cc: Serge Hallyn
    Cc:
    Cc: Davidlohr Bueso
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Elena Reshetova
     

01 Jul, 2017

1 commit

  • This marks many critical kernel structures for randomization. These are
    structures that have been targeted in the past in security exploits, or
    contain functions pointers, pointers to function pointer tables, lists,
    workqueues, ref-counters, credentials, permissions, or are otherwise
    sensitive. This initial list was extracted from Brad Spengler/PaX Team's
    code in the last public patch of grsecurity/PaX based on my understanding
    of the code. Changes or omissions from the original code are mine and
    don't reflect the original grsecurity/PaX code.

    Left out of this list is task_struct, which requires special handling
    and will be covered in a subsequent patch.

    Signed-off-by: Kees Cook

    Kees Cook
     

09 Aug, 2016

1 commit


03 Aug, 2016

1 commit


17 Dec, 2014

1 commit

  • Pull vfs pile #2 from Al Viro:
    "Next pile (and there'll be one or two more).

    The large piece in this one is getting rid of /proc/*/ns/* weirdness;
    among other things, it allows to (finally) make nameidata completely
    opaque outside of fs/namei.c, making for easier further cleanups in
    there"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    coda_venus_readdir(): use file_inode()
    fs/namei.c: fold link_path_walk() call into path_init()
    path_init(): don't bother with LOOKUP_PARENT in argument
    fs/namei.c: new helper (path_cleanup())
    path_init(): store the "base" pointer to file in nameidata itself
    make default ->i_fop have ->open() fail with ENXIO
    make nameidata completely opaque outside of fs/namei.c
    kill proc_ns completely
    take the targets of /proc/*/ns/* symlinks to separate fs
    bury struct proc_ns in fs/proc
    copy address of proc_ns_ops into ns_common
    new helpers: ns_alloc_inum/ns_free_inum
    make proc_ns_operations work with struct ns_common * instead of void *
    switch the rest of proc_ns_operations to working with &...->ns
    netns: switch ->get()/->put()/->install()/->inum() to working with &net->ns
    make mntns ->get()/->put()/->install()/->inum() work with &mnt_ns->ns
    common object embedded into various struct ....ns

    Linus Torvalds
     

14 Dec, 2014

1 commit

  • SysV can be abused to allocate locked kernel memory. For most systems, a
    small limit doesn't make sense, see the discussion with regards to SHMMAX.

    Therefore: increase MSGMNI to the maximum supported.

    And: If we ignore the risk of locking too much memory, then an automatic
    scaling of MSGMNI doesn't make sense. Therefore the logic can be removed.

    The code preserves auto_msgmni to avoid breaking any user space applications
    that expect that the value exists.

    Notes:
    1) If an administrator must limit the memory allocations, then he can set
    MSGMNI as necessary.

    Or he can disable sysv entirely (as e.g. done by Android).

    2) MSGMAX and MSGMNB are intentionally not increased, as these values are used
    to control latency vs. throughput:
    If MSGMNB is large, then msgsnd() just returns and more messages can be queued
    before a task switch to a task that calls msgrcv() is forced.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Manfred Spraul
    Cc: Davidlohr Bueso
    Cc: Rafael Aquini
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Manfred Spraul
     

05 Dec, 2014

1 commit


26 Feb, 2014

1 commit

  • Commit 93e6f119c0ce ("ipc/mqueue: cleanup definition names and
    locations") added global hardcoded limits to the amount of message
    queues that can be created. While these limits are per-namespace,
    reality is that it ends up breaking userspace applications.
    Historically users have, at least in theory, been able to create up to
    INT_MAX queues, and limiting it to just 1024 is way too low and dramatic
    for some workloads and use cases. For instance, Madars reports:

    "This update imposes bad limits on our multi-process application. As
    our app uses approaches that each process opens its own set of queues
    (usually something about 3-5 queues per process). In some scenarios
    we might run up to 3000 processes or more (which of-course for linux
    is not a problem). Thus we might need up to 9000 queues or more. All
    processes run under one user."

    Other affected users can be found in launchpad bug #1155695:
    https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695

    Instead of increasing this limit, revert it entirely and fall back to the
    original way of dealing with queue limits -- where once a user's resource
    limit is reached, and all memory is used, new queues cannot be created.

    Signed-off-by: Davidlohr Bueso
    Reported-by: Madars Vitolins
    Acked-by: Doug Ledford
    Cc: Manfred Spraul
    Cc: [3.5+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     

28 Jan, 2014

1 commit

  • This field is only used to reset the ids seq number if it exceeds the
    smaller of INT_MAX/SEQ_MULTIPLIER and USHRT_MAX, and can therefore be
    moved out of the structure and into its own macro. Since each
    ipc_namespace contains a table of 3 pointers to struct ipc_ids we can
    save space in instruction text:

     text   data   bss     dec    hex  filename
    56232   2348    24   58604   e4ec  ipc/built-in.o
    56216   2348    24   58588   e4dc  ipc/built-in.o-after

    Signed-off-by: Davidlohr Bueso
    Reviewed-by: Jonathan Gonzalez
    Cc: Aswin Chandramouleeswaran
    Cc: Rik van Riel
    Acked-by: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     

04 Nov, 2013

1 commit

  • Negative message lengths make no sense -- so don't do negative queue
    lengths or identifier counts. Prevent them from getting negative.

    Also change the underlying data types to be unsigned to avoid hairy
    surprises with sign extensions in cases where those variables get
    evaluated in unsigned expressions with bigger data types, e.g. size_t.

    In case a user still wants to have "unlimited" sizes she could just use
    INT_MAX instead.

    Signed-off-by: Mathias Krause
    Cc: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mathias Krause
     

12 Sep, 2013

1 commit

  • Since in some situations the lock can be shared for readers, we shouldn't
    be calling it a mutex; rename it to rwsem.

    Signed-off-by: Davidlohr Bueso
    Tested-by: Sedat Dilek
    Cc: Rik van Riel
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davidlohr Bueso
     

01 May, 2013

1 commit

  • Trying to run an application which put data into half of memory using
    shmget(), we found that having a shmall value below 8EiB-8TiB would
    prevent us from using anything more than 8TiB. Setting kernel.shmall
    greater than 8EiB-8TiB made the job work.

    In the newseg() function, ns->shm_tot, which counts pages, is already
    INT_MAX at 8TiB:

    ipc/shm.c:
    458 static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
    459 {
    ...
    465         int numpages = (size + PAGE_SIZE -1) >> PAGE_SHIFT;
    ...
    474         if (ns->shm_tot + numpages > ns->shm_ctlall)
    475                 return -ENOSPC;

    [akpm@linux-foundation.org: make ipc/shm.c:newseg()'s numpages size_t, not int]
    Signed-off-by: Robin Holt
    Reported-by: Alex Thorlton
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robin Holt
     

05 Jan, 2013

1 commit

  • Add 3 new variables and sysctls to tune them (one "next_id" variable each
    for messages, semaphores and shared memory). This variable can be used to
    set the desired id for the next allocated IPC object. By default it's
    equal to -1 and the old behaviour is preserved. If this variable is
    non-negative, then the desired id will be extracted from it and used as a
    start value to search for a free IDR slot.

    Notes:

    1) This patch doesn't guarantee that the new object will have the desired
    id, so it's up to user space to handle a new object with the wrong id.

    2) After a successful id allocation attempt, "next_id" will be set back
    to -1 (if it was non-negative).
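
    A hypothetical user space illustration (assumes CONFIG_CHECKPOINT_RESTORE
    and sufficient privileges; not part of the patch itself):

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/kernel/sem_next_id", "w");
            int id;

            if (!f)
                    return 1;
            fprintf(f, "4096\n");           /* desired id; not guaranteed */
            fclose(f);

            id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
            printf("requested id 4096, got %d\n", id);
            return 0;
    }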

    [akpm@linux-foundation.org: checkpatch fixes]
    Signed-off-by: Stanislav Kinsbursky
    Cc: Serge Hallyn
    Cc: "Eric W. Biederman"
    Cc: Pavel Emelyanov
    Cc: Al Viro
    Cc: KOSAKI Motohiro
    Cc: Michael Kerrisk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stanislav Kinsbursky
     

20 Nov, 2012

2 commits

  • Assign a unique proc inode to each namespace, and use that
    inode number to ensure we only allocate at most one proc
    inode for every namespace in proc.

    A single proc inode per namespace allows userspace to test
    to see if two processes are in the same namespace.
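
    For example, user space can compare the inode numbers of the ns files
    (an illustrative sketch, assuming procfs is mounted at /proc):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* 1 = same ipc namespace, 0 = different, -1 = error */
    static int same_ipc_ns(pid_t a, pid_t b)
    {
            char pa[64], pb[64];
            struct stat sa, sb;

            snprintf(pa, sizeof(pa), "/proc/%d/ns/ipc", (int)a);
            snprintf(pb, sizeof(pb), "/proc/%d/ns/ipc", (int)b);
            if (stat(pa, &sa) || stat(pb, &sb))
                    return -1;
            return sa.st_ino == sb.st_ino;
    }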

    This has been a long requested feature and only blocked because
    a naive implementation would put the id in a global space and
    would ultimately require having a namespace for the names of
    namespaces, making migration and certain virtualization tricks
    impossible.

    We still don't have per superblock inode numbers for proc, which
    appears necessary for application unaware checkpoint/restart and
    migrations (if the application is using namespace file descriptors),
    but that is now allowed by the design if it becomes important.

    I have preallocated the ipc and uts initial proc inode numbers so
    their structures can be statically initialized.

    Signed-off-by: Eric W. Biederman

    Eric W. Biederman
     
  • Modify create_new_namespaces to explicitly take a user namespace
    parameter, instead of implicitly through the task_struct.

    This allows an implementation of unshare(CLONE_NEWUSER) where
    the new user namespace is not stored onto the current task_struct
    until after all of the namespaces are created.

    Acked-by: Serge Hallyn
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     

01 Jun, 2012

5 commits

  • Commit b231cca4381e ("message queues: increase range limits") changed the
    mqueue default values used when the attr parameter is NULL from hard
    coded values to the fs.mqueue.{msg,msgsize}_max sysctl values.

    This had a large side effect. When a user needs to run two mqueue
    applications, 1) one using a non-NULL attr parameter that requires a big
    message size, and 2) one using a NULL attr parameter that only needs small
    messages, app (1) requires raising fs.mqueue.msgsize_max, and app (2) then
    consumes a large amount of memory even though it doesn't need it.

    Doug Ledford proposed switching it back to a static hard coded value.
    However, that also has a compatibility problem: some applications might
    have started to depend on the default value being tunable.

    The solution is to separate the default value from the maximum value.

    Signed-off-by: KOSAKI Motohiro
    Signed-off-by: Doug Ledford
    Acked-by: Doug Ledford
    Acked-by: Joe Korty
    Cc: Amerigo Wang
    Acked-by: Serge E. Hallyn
    Cc: Jiri Slaby
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • The mqueue limits are, like the other ipc limits, slightly naive parameters,
    because an unprivileged user can consume kernel memory through ipcs.

    Thus, raising them too aggressively creates a security issue. For example,
    the current setting allows a malicious unprivileged user to use 256GB
    (= 256 * 1024 * 1024 * 1024), which is large enough to make the system
    unresponsive. Don't do that.

    Instead, every admin should adjust the knobs for their own systems.

    Signed-off-by: KOSAKI Motohiro
    Acked-by: Doug Ledford
    Acked-by: Joe Korty
    Cc: Amerigo Wang
    Acked-by: Serge E. Hallyn
    Cc: Jiri Slaby
    Cc: Manfred Spraul
    Cc: Dave Hansen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KOSAKI Motohiro
     
  • Commit b231cca4381e ("message queues: increase range limits") changed the
    maximum size of a message in a message queue from INT_MAX to 8192*128.
    Unfortunately, we had customers that relied on a size much larger than
    8192*128 on their production systems. After reviewing POSIX, we found
    that it is silent on the maximum message size. We did find a couple other
    areas in which it was not silent. Fix up the mqueue maximums so that the
    customer's system can continue to work, and document both the POSIX and
    real world requirements in ipc_namespace.h so that we don't have this
    issue crop back up.

    Also, commit 9cf18e1dd74cd0 ("ipc: HARD_MSGMAX should be higher not lower
    on 64bit") fiddled with HARD_MSGMAX without realizing that the number was
    intentionally in place to limit the msg queue depth to one that was small
    enough to kmalloc an array of pointers (hence why we divided 128k by
    sizeof(long)). If we wish to meet POSIX requirements, we have no choice
    but to change our allocation to a vmalloc instead (at least for the large
    queue size case). With that, it's possible to increase our allowed
    maximum to the POSIX requirements (or more if we choose).

    [sfr@canb.auug.org.au: using vmalloc requires including vmalloc.h]
    Signed-off-by: Doug Ledford
    Cc: Serge E. Hallyn
    Cc: Amerigo Wang
    Cc: Joe Korty
    Cc: Jiri Slaby
    Acked-by: KOSAKI Motohiro
    Cc: Manfred Spraul
    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Doug Ledford
     
  • Commit b231cca4381e ("message queues: increase range limits") changed
    how we create a queue that does not include an attr struct passed to
    open so that it creates the queue with whatever the maximum values are.
    However, if the admin has set the maximums to allow flexibility in
    creating a queue (aka, both a large size and large queue are allowed,
    but combined they create a queue too large for the RLIMIT_MSGQUEUE of
    the user), then attempts to create a queue without an attr struct will
    fail. Switch back to using acceptable defaults regardless of what the
    maximums are.

    Note: so far, we only know of a few applications that rely on this
    behavior (specifically, set the maximums in /proc, then run the
    application which calls mq_open() without passing in an attr struct, and
    the application expects the newly created message queue to have the
    maximum sizes that were set in /proc used on the mq_open() call, and all
    of those applications that we know of are actually part of regression
    test suites that were coded to do something like this:

    for size in 4096 65536 $((1024 * 1024)) $((16 * 1024 * 1024)); do
            echo $size > /proc/sys/fs/mqueue/msgsize_max
            mq_open || echo "Error opening mq with size $size"
    done

    These test suites that depend on any behavior like this are broken. The
    concept that programs should rely upon the system wide maximum in order
    to get their desired results instead of simply using an attr struct to
    specify what they want is fundamentally unfriendly programming practice
    for any multi-tasking OS.

    Fixing this will break those few apps that we know of (and those app
    authors recognize the brokenness of their code and the need to fix it).
    However, the following patch "mqueue: separate mqueue default value"
    allows a workaround in the form of new knobs for the default msg queue
    creation parameters for any software out there that we don't already
    know about that might rely on this behavior at the moment.

    Signed-off-by: Doug Ledford
    Cc: Serge E. Hallyn
    Cc: Amerigo Wang
    Cc: Joe Korty
    Cc: Jiri Slaby
    Acked-by: KOSAKI Motohiro
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Doug Ledford
     
  • Since commit b231cca4381e ("message queues: increase range limits") on
    Oct 18, 2008, calls to mq_open() that did not pass in an attribute
    struct and expected to get default values for the size of the queue and
    the max message size now get the system wide maximums instead of
    hardwired defaults like they used to get.

    This was uncovered when one of the earlier patches in this patch set
    increased the default system wide maximums at the same time it increased
    the hard ceiling on the system wide maximums (a customer specifically
    needed the hard ceiling brought back up, the new ceiling that commit
    b231cca4381e introduced was too low for their production systems). By
    increasing the default maximums and not realising they were tied to any
    attempt to create a message queue without an attribute struct, I had
    inadvertently made it such that all message queue creation attempts
    without an attribute struct were failing because the new default
    maximums would create a queue that exceeded the default rlimit for
    message queue bytes.

    As a result, the system wide defaults were brought back down to their
    previous levels, and the system wide ceilings on the maximums were
    raised to meet the customer's needs. However, the fact that the no
    attribute struct behavior of mq_open() could be broken by changing the
    system wide maximums for message queues was seen as fundamentally broken
    itself. So we hardwired the no attribute case back like it used to be.
    But, then we realized that on the very off chance that some piece of
    software in the wild depended on that behavior, we could work around
    that issue by adding two new knobs to /proc that allowed setting the
    defaults for message queues created without an attr struct separately
    from the system wide maximums.

    What is not an option IMO is to leave the current behavior in place. No
    piece of software should ever rely on setting the system wide maximums
    in order to get a desired message queue. Such a reliance would be so
    fundamentally multitasking OS unfriendly as to not really be tolerable.
    Fortunately, we don't know of any software in the wild that uses this
    except for a regression test program that caught the issue in the first
    place. If there is though, we have made accommodations with the two new
    /proc knobs (and that's all the accommodations such fundamentally broken
    software can be allowed).

    This patch:

    The various defines for minimums and maximums of the sysctl controllable
    mqueue values are scattered amongst different files and named
    inconsistently. Move them all into ipc_namespace.h and make them have
    consistent names. Additionally, make the number of queues per namespace
    also have a minimum and maximum and use the same sysctl function as the
    other two settable variables.

    Signed-off-by: Doug Ledford
    Acked-by: Serge E. Hallyn
    Cc: Amerigo Wang
    Cc: Joe Korty
    Cc: Jiri Slaby
    Acked-by: KOSAKI Motohiro
    Cc: Manfred Spraul
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Doug Ledford
     

27 Jul, 2011

1 commit

  • Add support for the shm_rmid_forced sysctl. If set to 1, all shared
    memory objects in current ipc namespace will be automatically forced to
    use IPC_RMID.

    The POSIX way of handling shmem allows one to create shm objects and
    call shmdt(), leaving shm object associated with no process, thus
    consuming memory not counted via rlimits.
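
    A minimal illustration of such an "orphaned" segment (hypothetical
    example, not from the patch):

    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
            /* create a 1 MiB segment, attach, touch, detach - never IPC_RMID */
            int id = shmget(IPC_PRIVATE, 1 << 20, IPC_CREAT | 0600);
            char *p = shmat(id, NULL, 0);

            if (p == (void *)-1)
                    return 1;
            p[0] = 1;
            shmdt(p);
            /* with shm_rmid_forced=0 the segment now persists with nothing
             * attached; with shm_rmid_forced=1 it is destroyed at this point */
            return 0;
    }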

    With shm_rmid_forced=1 the shared memory object is counted at least for
    one process, so OOM killer may effectively kill the fat process holding
    the shared memory.

    It obviously breaks POSIX - some programs relying on the feature would
    stop working. So set shm_rmid_forced=1 only if you're sure nobody uses
    "orphaned" memory. Use shm_rmid_forced=0 by default for compatability
    reasons.

    The feature was previously implemented in -ow as a configure option.

    [akpm@linux-foundation.org: fix documentation, per Randy]
    [akpm@linux-foundation.org: fix warning]
    [akpm@linux-foundation.org: readability/conventionality tweaks]
    [akpm@linux-foundation.org: fix shm_rmid_forced/shm_forced_rmid confusion, use standard comment layout]
    Signed-off-by: Vasiliy Kulikov
    Cc: Randy Dunlap
    Cc: "Eric W. Biederman"
    Cc: "Serge E. Hallyn"
    Cc: Daniel Lezcano
    Cc: Oleg Nesterov
    Cc: Tejun Heo
    Cc: Ingo Molnar
    Cc: Alan Cox
    Cc: Solar Designer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vasiliy Kulikov
     

24 Mar, 2011

2 commits

  • CAP_IPC_OWNER and CAP_IPC_LOCK can be checked against current_user_ns(),
    because the resource comes from current's own ipc namespace.

    setuid/setgid are to uids in own namespace, so again checks can be against
    current_user_ns().

    Changelog:
    Jan 11: Use task_ns_capable() in place of sched_capable().
    Jan 11: Use nsown_capable() as suggested by Bastian Blank.
    Jan 11: Clarify (hopefully) some logic in futex and sched.c
    Feb 15: use ns_capable for ipc, not nsown_capable
    Feb 23: let copy_ipcs handle setting ipc_ns->user_ns
    Feb 23: pass ns down rather than taking it from current

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Serge E. Hallyn
    Acked-by: "Eric W. Biederman"
    Acked-by: Daniel Lezcano
    Acked-by: David Howells
    Cc: James Morris
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     
  • Changelog:
    Feb 15: Don't set new ipc->user_ns if we didn't create a new
    ipc_ns.
    Feb 23: Move extern declaration to ipc_namespace.h, and group
    fwd declarations at top.

    Signed-off-by: Serge E. Hallyn
    Acked-by: "Eric W. Biederman"
    Acked-by: Daniel Lezcano
    Acked-by: David Howells
    Cc: James Morris
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     

13 Mar, 2010

1 commit

  • Remove INIT_NSPROXY(), use C99 initializer.
    Remove INIT_IPC_NS(), INIT_NET_NS() while I'm at it.

    Note: headers trim will be done later, now it's quite pointless because
    results will be invalidated by merge window.

    Signed-off-by: Alexey Dobriyan
    Acked-by: Serge Hallyn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

16 Dec, 2009

1 commit

  • We have HARD_MSGMAX lower on 64bit than on 32bit, even though 64bit
    machines usually have more memory than 32bit machines.

    Making it higher on 64bit seems reasonable, while keeping the original
    number on 32bit.

    Acked-by: Serge E. Hallyn
    Cc: Cedric Le Goater
    Signed-off-by: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Amerigo Wang
     

19 Jun, 2009

2 commits


07 Apr, 2009

3 commits

  • Largely inspired from ipc/ipc_sysctl.c. This patch isolates the mqueue
    sysctl stuff in its own file.

    [akpm@linux-foundation.org: build fix]
    Signed-off-by: Cedric Le Goater
    Signed-off-by: Nadia Derbey
    Signed-off-by: Serge E. Hallyn
    Cc: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     
  • Implement multiple mounts of the mqueue file system, and link it to usage
    of CLONE_NEWIPC.

    Each ipc ns has a corresponding mqueuefs superblock. When a user does
    clone(CLONE_NEWIPC) or unshare(CLONE_NEWIPC), the unshare will cause an
    internal mount of a new mqueuefs sb linked to the new ipc ns.

    When a user does 'mount -t mqueue mqueue /dev/mqueue', he mounts the
    mqueuefs superblock.

    Posix message queues can be worked with both through the mq_* system calls
    (see mq_overview(7)), and through the VFS through the mqueue mount. Any
    usage of mq_open() and friends will work with the acting task's ipc
    namespace. Any actions through the VFS will work with the mqueuefs in
    which the file was created. So if a user doesn't remount mqueuefs after
    unshare(CLONE_NEWIPC), mq_open("/ab") will not be reflected in "ls
    /dev/mqueue".

    If task a mounts mqueue for ipc_ns:1, then clones task b with a new ipcns,
    ipcns:2, and then task a is the last task in ipc_ns:1 to exit, then (1)
    ipc_ns:1 will be freed, (2) its superblock will live on until task b
    umounts the corresponding mqueuefs, and vfs actions will continue to
    succeed, but (3) sb->s_fs_info will be NULL for the sb corresponding to
    the deceased ipc_ns:1.

    To make this happen, we must protect the ipc reference count when

    a) a task exits and drops its ipcns->count, since it might be dropping
    it to 0 and freeing the ipcns

    b) a task accesses the ipcns through its mqueuefs interface, since it
    bumps the ipcns refcount and might race with the last task in the ipcns
    exiting.

    So the kref is changed to an atomic_t so we can use
    atomic_dec_and_lock(&ns->count,mq_lock), and every access to the ipcns
    through ns = mqueuefs_sb->s_fs_info is protected by the same lock.
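
    Roughly, the release path then follows this pattern (a sketch of what is
    described above; the helper name is illustrative):

    if (atomic_dec_and_lock(&ns->count, &mq_lock)) {
            mq_clear_sbinfo(ns);    /* e.g. set sb->s_fs_info to NULL */
            spin_unlock(&mq_lock);
            free_ipc_ns(ns);
    }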

    Signed-off-by: Cedric Le Goater
    Signed-off-by: Serge E. Hallyn
    Cc: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     
  • Move mqueue vfsmount plus a few tunables into the ipc_namespace struct.
    The CONFIG_IPC_NS boolean and the ipc_namespace struct will serve both the
    posix message queue namespaces and the SYSV ipc namespaces.

    The sysctl code will be fixed separately in patch 3. After just this
    patch, making a change to posix mqueue tunables always changes the values
    in the initial ipc namespace.

    Signed-off-by: Cedric Le Goater
    Signed-off-by: Serge E. Hallyn
    Cc: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Serge E. Hallyn
     

26 Jul, 2008

1 commit

  • This patch proposes an alternative to the "magical
    positive-versus-negative number trick" Andrew complained about last week
    in http://lkml.org/lkml/2008/6/24/418.

    This had been introduced with the patches that scale msgmni to the amount
    of lowmem. With these patches, msgmni has a registered notification
    routine that recomputes the msgmni value upon memory add/remove or ipc
    namespace creation/removal.

    When msgmni is changed from user space (i.e. value written to the proc
    file), that notification routine is unregistered, and the way to make it
    registered back is to write a negative value into the proc file. This is
    the "magical positive-versus-negative number trick".

    To fix this, a new proc file is introduced: /proc/sys/kernel/auto_msgmni.
    This file acts as ON/OFF for msgmni automatic recomputing.

    With this patch, the process is the following:
    1) kernel boots in "automatic recomputing mode"
    /proc/sys/kernel/msgmni contains the value that has been computed (depends
    on lowmem)
    /proc/sys/kernel/auto_msgmni contains "1"

    2) echo <value> > /proc/sys/kernel/msgmni
    . sets msg_ctlmni to that value
    . de-activates automatic recomputing (i.e. if, say, some memory is added
    msgmni won't be recomputed anymore)
    . /proc/sys/kernel/auto_msgmni now contains "0"

    3) echo "0" > /proc/sys/kernel/auto_msgmni
    . de-activates msgmni automatic recomputing
    (this has the same effect as 2), except that msg_ctlmni's value stays
    blocked at its current value)

    4) echo "1" > /proc/sys/kernel/auto_msgmni
    . recomputes msgmni's value based on the current available memory size
    and number of ipc namespaces
    . re-activates automatic recomputing for msgmni.

    Signed-off-by: Nadia Derbey
    Cc: Solofo Ramangalahy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nadia Derbey