17 Feb, 2018

2 commits

  • commit 9903a91c763ecdae333a04a9d89d79d2b8966503 upstream.

    With pipe-user-pages-hard set to 'N', users were actually only allowed up
    to 'N - 1' buffers; and likewise for pipe-user-pages-soft.

    Fix this to allow up to 'N' buffers, as would be expected.
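
    A minimal sketch of the resulting check (modeled on the
    too_many_pipe_buffers_*() helpers in fs/pipe.c; exact context may differ
    between kernel versions): the comparison becomes strict, so that usage of
    exactly 'N' pages is still permitted:

        static bool too_many_pipe_buffers_soft(unsigned long user_bufs)
        {
                unsigned long soft_limit = READ_ONCE(pipe_user_pages_soft);

                /* '>' rather than '>=': exactly the configured limit is OK */
                return soft_limit && user_bufs > soft_limit;
        }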

    Link: http://lkml.kernel.org/r/20180111052902.14409-5-ebiggers3@gmail.com
    Fixes: b0b91d18e2e9 ("pipe: fix limit checking in pipe_set_size()")
    Signed-off-by: Eric Biggers
    Acked-by: Willy Tarreau
    Acked-by: Kees Cook
    Acked-by: Joe Lawrence
    Cc: Alexander Viro
    Cc: "Luis R . Rodriguez"
    Cc: Michael Kerrisk
    Cc: Mikulas Patocka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 85c2dd5473b2718b4b63e74bfeb1ca876868e11f upstream.

    pipe-user-pages-hard and pipe-user-pages-soft are only supposed to apply
    to unprivileged users, as documented in both Documentation/sysctl/fs.txt
    and the pipe(7) man page.

    However, the capabilities are actually only checked when increasing a
    pipe's size using F_SETPIPE_SZ, not when creating a new pipe. Therefore,
    if pipe-user-pages-hard has been set, the root user can run into it and be
    unable to create pipes. Similarly, if pipe-user-pages-soft has been set,
    the root user can run into it and have their pipes limited to 1 page each.

    Fix this by allowing the privileged override in both cases.
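
    The fix boils down to applying the limits only to unprivileged users. A
    rough sketch of the alloc_pipe_info() logic after the change (condensed,
    not the verbatim diff; details vary by kernel version):

        static bool is_unprivileged_user(void)
        {
                return !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN);
        }

        ...
        if (too_many_pipe_buffers_soft(user_bufs) && is_unprivileged_user()) {
                user_bufs = account_pipe_buffers(user, pipe_bufs, 1);
                pipe_bufs = 1;          /* throttle unprivileged users only */
        }

        if (too_many_pipe_buffers_hard(user_bufs) && is_unprivileged_user())
                goto out_revert_acct;   /* root is no longer blocked here */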

    Link: http://lkml.kernel.org/r/20180111052902.14409-4-ebiggers3@gmail.com
    Fixes: 759c01142a5d ("pipe: limit the per-user amount of pages allocated in pipes")
    Signed-off-by: Eric Biggers
    Acked-by: Kees Cook
    Acked-by: Joe Lawrence
    Cc: Alexander Viro
    Cc: "Luis R . Rodriguez"
    Cc: Michael Kerrisk
    Cc: Mikulas Patocka
    Cc: Willy Tarreau
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

24 Jan, 2018

1 commit

  • commit d3f14c485867cfb2e0c48aa88c41d0ef4bf5209c upstream.

    round_pipe_size() contains a right-bit-shift expression which may
    overflow, which would cause undefined results in a subsequent
    roundup_pow_of_two() call.

    static inline unsigned int round_pipe_size(unsigned int size)
    {
            unsigned long nr_pages;

            nr_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
            return roundup_pow_of_two(nr_pages) << PAGE_SHIFT;
    }

    PAGE_SIZE is defined as (1UL << PAGE_SHIFT), so:
    - 4 bytes wide on 32-bit (0 to 0xffffffff)
    - 8 bytes wide on 64-bit (0 to 0xffffffffffffffff)

    That means that in 32-bit round_pipe_size(), nr_pages may overflow to 0:

    size=0x00000000 nr_pages=0x0
    size=0x00000001 nr_pages=0x1
    size=0xfffff000 nr_pages=0xfffff
    size=0xfffff001 nr_pages=0x0 << !
    size=0xffffffff nr_pages=0x0 << !

    This is bad because roundup_pow_of_two(n) is undefined when n == 0!

    64-bit is not a problem as the unsigned int size is 4 bytes wide
    (similar to 32-bit) and the larger, 8 byte wide unsigned long, is
    sufficient to handle the largest value of the bit shift expression:

    size=0xffffffff nr_pages=0x100000

    Modify round_pipe_size() to return 0 when nr_pages would be 0, and update
    its callers to handle this accordingly.
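
    A sketch of the reworked function and a caller (condensed from the patch
    description, not the verbatim diff):

        static inline unsigned int round_pipe_size(unsigned int size)
        {
                unsigned long nr_pages;

                nr_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
                if (nr_pages == 0)
                        return 0;       /* callers must reject this value */

                return roundup_pow_of_two(nr_pages) << PAGE_SHIFT;
        }

        /* e.g. in pipe_set_size(): */
        size = round_pipe_size(arg);
        if (size == 0)
                return -EINVAL;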

    Link: http://lkml.kernel.org/r/1507658689-11669-3-git-send-email-joe.lawrence@redhat.com
    Signed-off-by: Joe Lawrence
    Reported-by: Mikulas Patocka
    Reviewed-by: Mikulas Patocka
    Cc: Al Viro
    Cc: Jens Axboe
    Cc: Michael Kerrisk
    Cc: Randy Dunlap
    Cc: Josh Poimboeuf
    Signed-off-by: Andrew Morton
    Signed-off-by: Dong Jinguang
    Signed-off-by: Linus Torvalds

    Signed-off-by: Greg Kroah-Hartman

    Joe Lawrence
     

14 Dec, 2017

1 commit

  • [ Upstream commit 98159d977f71c3b3dee898d1c34e56f520b094e7 ]

    Patch series "A few round_pipe_size() and pipe-max-size fixups", v3.

    While backporting Michael's "pipe: fix limit handling" patchset to a
    distro-kernel, Mikulas noticed that current upstream pipe limit handling
    contains a few problems:

    1 - procfs signed wrap: echo'ing a large number into
    /proc/sys/fs/pipe-max-size and then cat'ing it back out shows a
    negative value.

    2 - round_pipe_size() nr_pages overflow on 32bit: this would
    subsequently try roundup_pow_of_two(0), which is undefined.

    3 - visible non-rounded pipe-max-size value: there is no mutual
    exclusion or protection between the time pipe_max_size is assigned
    a raw value from proc_dointvec_minmax() and when it is rounded.

    4 - unsigned long -> unsigned int conversion makes for potential odd
    return errors from do_proc_douintvec_minmax_conv() and
    do_proc_dopipe_max_size_conv().

    This version underwent the same testing as v1:
    https://marc.info/?l=linux-kernel&m=150643571406022&w=2

    This patch (of 4):

    pipe_max_size is defined as an unsigned int:

    unsigned int pipe_max_size = 1048576;

    but its procfs/sysctl representation is an integer:

    static struct ctl_table fs_table[] = {
            ...
            {
                    .procname       = "pipe-max-size",
                    .data           = &pipe_max_size,
                    .maxlen         = sizeof(int),
                    .mode           = 0644,
                    .proc_handler   = &pipe_proc_fn,
                    .extra1         = &pipe_min_size,
            },
            ...

    that is signed:

    int pipe_proc_fn(struct ctl_table *table, int write, void __user *buf,
                     size_t *lenp, loff_t *ppos)
    {
            ...
            ret = proc_dointvec_minmax(table, write, buf, lenp, ppos);

    This leads to signed results via procfs for large values of pipe_max_size:

    % echo 2147483647 >/proc/sys/fs/pipe-max-size
    % cat /proc/sys/fs/pipe-max-size
    -2147483648

    Use unsigned operations on this variable to avoid such negative values.
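
    A minimal sketch of the fix (the series ultimately adds a dedicated
    proc_dopipe_max_size handler in a later patch; this shows only the
    unsigned parsing idea):

        int pipe_proc_fn(struct ctl_table *table, int write, void __user *buf,
                         size_t *lenp, loff_t *ppos)
        {
                ...
                /* before: proc_dointvec_minmax() -- signed, wraps at 2^31 */
                ret = proc_douintvec_minmax(table, write, buf, lenp, ppos);
                ...
        }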

    Link: http://lkml.kernel.org/r/1507658689-11669-2-git-send-email-joe.lawrence@redhat.com
    Signed-off-by: Joe Lawrence
    Reported-by: Mikulas Patocka
    Reviewed-by: Mikulas Patocka
    Cc: Michael Kerrisk
    Cc: Randy Dunlap
    Cc: Al Viro
    Cc: Jens Axboe
    Cc: Josh Poimboeuf
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Joe Lawrence
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied to
    a file was done in a spreadsheet of side by side results from of the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging was:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source
    - File already had some variant of a license header in it (even if <5
      lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

06 Jul, 2017

1 commit


25 Dec, 2016

1 commit


12 Oct, 2016

8 commits

  • This is a patch that provides behavior that is more consistent, and
    probably less surprising to users. I consider the change optional, and
    welcome opinions about whether it should be applied.

    By default, pipes are created with a capacity of 64 kiB. However,
    /proc/sys/fs/pipe-max-size may be set smaller than this value. In this
    scenario, an unprivileged user could thus create a pipe whose initial
    capacity exceeds the limit. Therefore, it seems logical to cap the
    initial pipe capacity according to the value of pipe-max-size.
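
    Concretely, the change amounts to a clamp in alloc_pipe_info(), roughly
    (sketch, not the verbatim diff):

        unsigned long pipe_bufs = PIPE_DEF_BUFFERS;     /* 16 bufs = 64 kiB */

        if (pipe_bufs * PAGE_SIZE > pipe_max_size && !capable(CAP_SYS_RESOURCE))
                pipe_bufs = pipe_max_size >> PAGE_SHIFT;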

    The test program shown earlier in this patch series can be used to
    demonstrate the effect of the change brought about with this patch:

    # cat /proc/sys/fs/pipe-max-size
    1048576
    # sudo -u mtk ./test_F_SETPIPE_SZ 1
    Initial pipe capacity: 65536
    # echo 10000 > /proc/sys/fs/pipe-max-size
    # cat /proc/sys/fs/pipe-max-size
    16384
    # sudo -u mtk ./test_F_SETPIPE_SZ 1
    Initial pipe capacity: 16384
    # ./test_F_SETPIPE_SZ 1
    Initial pipe capacity: 65536

    The last two executions of 'test_F_SETPIPE_SZ' show that pipe-max-size
    caps the initial allocation for a new pipe for unprivileged users, but
    not for privileged users.

    Link: http://lkml.kernel.org/r/31dc7064-2a17-9c5b-1df1-4e3012ee992c@gmail.com
    Signed-off-by: Michael Kerrisk
    Reviewed-by: Vegard Nossum
    Cc: Willy Tarreau
    Cc: Tetsuo Handa
    Cc: Jens Axboe
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Kerrisk (man-pages)
     
    This is an optional patch that provides a small performance
    improvement. Alter account_pipe_buffers() so that it returns the
    new value of user->pipe_bufs. This means that we can refactor
    too_many_pipe_buffers_soft() and too_many_pipe_buffers_hard() to
    avoid the cost of repeatedly using atomic_long_read() to fetch
    user->pipe_bufs.
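
    Roughly (a sketch of the refactored helper):

        static unsigned long account_pipe_buffers(struct user_struct *user,
                                                  unsigned long old, unsigned long new)
        {
                /* return the updated count so callers need not re-read it */
                return atomic_long_add_return(new - old, &user->pipe_bufs);
        }

    The too_many_pipe_buffers_{soft,hard}() helpers can then take this
    returned value as a plain unsigned long instead of performing their own
    atomic_long_read().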

    Link: http://lkml.kernel.org/r/93e5f193-1e5e-3e1f-3a20-eae79b7e1310@gmail.com
    Signed-off-by: Michael Kerrisk
    Reviewed-by: Vegard Nossum
    Cc: Willy Tarreau
    Cc: Tetsuo Handa
    Cc: Jens Axboe
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Kerrisk (man-pages)
     
  • The limit checking in alloc_pipe_info() (used by pipe(2) and when
    opening a FIFO) has the following problems:

    (1) When checking capacity required for the new pipe, the checks against
    the limit in /proc/sys/fs/pipe-user-pages-{soft,hard} are made
    against existing consumption, and exclude the memory required for
    the new pipe capacity. As a consequence: (1) the memory allocation
    throttling provided by the soft limit does not kick in quite as
    early as it should, and (2) the user can overrun the hard limit.

    (2) As currently implemented, accounting and checking against the limits
    is done as follows:

    (a) Test whether the user has exceeded the limit.
    (b) Make new pipe buffer allocation.
    (c) Account new allocation against the limits.

    This is racey. Multiple processes may pass point (a) simultaneously,
    and then allocate pipe buffers that are accounted for only in step
    (c). The race means that the user's pipe buffer allocation could be
    pushed over the limit (by an arbitrary amount, depending on how
    unlucky we were in the race). [Thanks to Vegard Nossum for spotting
    this point, which I had missed.]

    This patch addresses the above problems as follows:

    * Alter the checks against limits to include the memory required for the
    new pipe.
    * Re-order the accounting step so that it precedes the buffer allocation.
    If the accounting step determines that a limit has been reached, revert
    the accounting and cause the operation to fail.
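
    Sketched against the helper names in fs/pipe.c (condensed, not the
    verbatim diff), the reworked allocation path looks roughly like:

        user_bufs = account_pipe_buffers(user, 0, pipe_bufs);  /* account first */

        if (too_many_pipe_buffers_soft(user_bufs)) {
                user_bufs = account_pipe_buffers(user, pipe_bufs, 1);
                pipe_bufs = 1;          /* soft limit: throttle to one page */
        }

        if (too_many_pipe_buffers_hard(user_bufs))
                goto out_revert_acct;

        pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer), GFP_KERNEL);
        if (pipe->bufs == NULL)
                goto out_revert_acct;
        ...
    out_revert_acct:
        (void) account_pipe_buffers(user, pipe_bufs, 0);       /* undo */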

    Link: http://lkml.kernel.org/r/8ff3e9f9-23f6-510c-644f-8e70cd1c0bd9@gmail.com
    Signed-off-by: Michael Kerrisk
    Reviewed-by: Vegard Nossum
    Cc: Willy Tarreau
    Cc: Tetsuo Handa
    Cc: Jens Axboe
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Kerrisk (man-pages)
     
    Replace an 'if' block that covers most of the code in this function
    with a 'goto'. This makes the code a little simpler to read, and also
    simplifies the next patch (fixing the limit checking in alloc_pipe_info()).

    Link: http://lkml.kernel.org/r/aef030c1-0257-98a9-4988-186efa48530c@gmail.com
    Signed-off-by: Michael Kerrisk
    Reviewed-by: Vegard Nossum
    Cc: Willy Tarreau
    Cc: Tetsuo Handa
    Cc: Jens Axboe
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Kerrisk (man-pages)
     
  • The limit checking in pipe_set_size() (used by fcntl(F_SETPIPE_SZ))
    has the following problems:

    (1) When increasing the pipe capacity, the checks against the limits in
    /proc/sys/fs/pipe-user-pages-{soft,hard} are made against existing
    consumption, and exclude the memory required for the increased pipe
    capacity. The new increase in pipe capacity can then push the total
    memory used by the user for pipes (possibly far) over a limit. This
    can also trigger the problem described next.

    (2) The limit checks are performed even when the new pipe capacity is
    less than the existing pipe capacity. This can lead to problems if a
    user sets a large pipe capacity, and then the limits are lowered,
    with the result that the user will no longer be able to decrease the
    pipe capacity.

    (3) As currently implemented, accounting and checking against the
    limits is done as follows:

    (a) Test whether the user has exceeded the limit.
    (b) Make new pipe buffer allocation.
    (c) Account new allocation against the limits.

    This is racey. Multiple processes may pass point (a)
    simultaneously, and then allocate pipe buffers that are accounted
    for only in step (c). The race means that the user's pipe buffer
    allocation could be pushed over the limit (by an arbitrary amount,
    depending on how unlucky we were in the race). [Thanks to Vegard
    Nossum for spotting this point, which I had missed.]

    This patch addresses the above problems as follows:

    * Perform checks against the limits only when increasing a pipe's
    capacity; an unprivileged user can always decrease a pipe's capacity.
    * Alter the checks against limits to include the memory required for
    the new pipe capacity.
    * Re-order the accounting step so that it precedes the buffer
    allocation. If the accounting step determines that a limit has
    been reached, revert the accounting and cause the operation to fail.
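
    The reworked pipe_set_size() checks look roughly like this (a sketch
    using the helper names from fs/pipe.c, condensed from the actual patch):

        nr_pages = size >> PAGE_SHIFT;
        ...
        user_bufs = account_pipe_buffers(pipe->user, pipe->buffers, nr_pages);

        if (nr_pages > pipe->buffers &&         /* only check when growing */
            (too_many_pipe_buffers_hard(user_bufs) ||
             too_many_pipe_buffers_soft(user_bufs)) &&
            !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) {
                ret = -EPERM;
                goto out_revert_acct;           /* undo the accounting */
        }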

    The program below can be used to demonstrate problems 1 and 2, and the
    effect of the fix. The program takes one or more command-line arguments.
    The first argument specifies the number of pipes that the program should
    create. The remaining arguments are, alternately, pipe capacities that
    should be set using fcntl(F_SETPIPE_SZ), and sleep intervals (in
    seconds) between the fcntl() operations. (The sleep intervals allow the
    possibility to change the limits between fcntl() operations.)

    Problem 1
    =========

    Using the test program on an unpatched kernel, we first set some
    limits:

    # echo 0 > /proc/sys/fs/pipe-user-pages-soft
    # echo 1000000000 > /proc/sys/fs/pipe-max-size
    # echo 10000 > /proc/sys/fs/pipe-user-pages-hard # 40.96 MB

    Then show that we can set a pipe with capacity (100MB) that is
    over the hard limit

    # sudo -u mtk ./test_F_SETPIPE_SZ 1 100000000
    Initial pipe capacity: 65536
    Loop 1: set pipe capacity to 100000000 bytes
    F_SETPIPE_SZ returned 134217728

    Now set the capacity to 100MB twice. The second call fails (which is
    probably surprising to most users, since it seems like a no-op):

    # sudo -u mtk ./test_F_SETPIPE_SZ 1 100000000 0 100000000
    Initial pipe capacity: 65536
    Loop 1: set pipe capacity to 100000000 bytes
    F_SETPIPE_SZ returned 134217728
    Loop 2: set pipe capacity to 100000000 bytes
    Loop 2, pipe 0: F_SETPIPE_SZ failed: fcntl: Operation not permitted

    With a patched kernel, setting a capacity over the limit fails at the
    first attempt:

    # echo 0 > /proc/sys/fs/pipe-user-pages-soft
    # echo 1000000000 > /proc/sys/fs/pipe-max-size
    # echo 10000 > /proc/sys/fs/pipe-user-pages-hard
    # sudo -u mtk ./test_F_SETPIPE_SZ 1 100000000
    Initial pipe capacity: 65536
    Loop 1: set pipe capacity to 100000000 bytes
    Loop 1, pipe 0: F_SETPIPE_SZ failed: fcntl: Operation not permitted

    There is a small chance that the change to fix this problem could
    break user-space, since there are cases where fcntl(F_SETPIPE_SZ)
    calls that previously succeeded might fail. However, the chances are
    small, since (a) the pipe-user-pages-{soft,hard} limits are new (in
    4.5), and the default soft/hard limits are high/unlimited. Therefore,
    it seems warranted to make these limits operate more precisely (and
    behave more like what users probably expect).

    Problem 2
    =========

    Running the test program on an unpatched kernel, we first set some limits:

    # getconf PAGESIZE
    4096
    # echo 0 > /proc/sys/fs/pipe-user-pages-soft
    # echo 1000000000 > /proc/sys/fs/pipe-max-size
    # echo 10000 > /proc/sys/fs/pipe-user-pages-hard # 40.96 MB

    Now perform two fcntl(F_SETPIPE_SZ) operations on a single pipe,
    first setting a pipe capacity (10MB), sleeping for a few seconds,
    during which time the hard limit is lowered, and then set pipe
    capacity to a smaller amount (5MB):

    # sudo -u mtk ./test_F_SETPIPE_SZ 1 10000000 15 5000000 &
    [1] 748
    # Initial pipe capacity: 65536
    Loop 1: set pipe capacity to 10000000 bytes
    F_SETPIPE_SZ returned 16777216
    Sleeping 15 seconds

    # echo 1000 > /proc/sys/fs/pipe-user-pages-hard # 4.096 MB
    # Loop 2: set pipe capacity to 5000000 bytes
    Loop 2, pipe 0: F_SETPIPE_SZ failed: fcntl: Operation not permitted

    In this case, the user should be able to lower the limit.

    With a kernel that has the patch below, the second fcntl()
    succeeds:

    # echo 0 > /proc/sys/fs/pipe-user-pages-soft
    # echo 1000000000 > /proc/sys/fs/pipe-max-size
    # echo 10000 > /proc/sys/fs/pipe-user-pages-hard
    # sudo -u mtk ./test_F_SETPIPE_SZ 1 10000000 15 5000000 &
    [1] 3215
    # Initial pipe capacity: 65536
    # Loop 1: set pipe capacity to 10000000 bytes
    F_SETPIPE_SZ returned 16777216
    Sleeping 15 seconds

    # echo 1000 > /proc/sys/fs/pipe-user-pages-hard

    # Loop 2: set pipe capacity to 5000000 bytes
    F_SETPIPE_SZ returned 8388608

    8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---

    /* test_F_SETPIPE_SZ.c

       (C) 2016, Michael Kerrisk; licensed under GNU GPL version 2 or later

       Test operation of fcntl(F_SETPIPE_SZ) for setting pipe capacity
       and interactions with limits defined by /proc/sys/fs/pipe-* files.
    */

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(int argc, char *argv[])
    {
        int (*pfd)[2];
        int npipes;
        int pcap, rcap;
        int j, p, s, stime, loop;

        if (argc < 2) {
            fprintf(stderr, "Usage: %s num-pipes "
                    "[pipe-capacity sleep-time]...\n", argv[0]);
            exit(EXIT_FAILURE);
        }

        npipes = atoi(argv[1]);

        pfd = calloc(npipes, sizeof (int [2]));
        if (pfd == NULL) {
            perror("calloc");
            exit(EXIT_FAILURE);
        }

        for (j = 0; j < npipes; j++) {
            if (pipe(pfd[j]) == -1) {
                fprintf(stderr, "Loop %d: pipe() failed: ", j);
                perror("pipe");
                exit(EXIT_FAILURE);
            }
        }

        printf("Initial pipe capacity: %d\n", fcntl(pfd[0][0], F_GETPIPE_SZ));

        for (j = 2; j < argc; j += 2) {
            loop = j / 2;
            pcap = atoi(argv[j]);
            printf("  Loop %d: set pipe capacity to %d bytes\n", loop, pcap);

            for (p = 0; p < npipes; p++) {
                s = fcntl(pfd[p][0], F_SETPIPE_SZ, pcap);
                if (s == -1) {
                    fprintf(stderr, "  Loop %d, pipe %d: F_SETPIPE_SZ "
                            "failed: ", loop, p);
                    perror("fcntl");
                    exit(EXIT_FAILURE);
                }

                if (p == 0) {
                    printf("  F_SETPIPE_SZ returned %d\n", s);
                    rcap = s;
                } else {
                    if (s != rcap) {
                        fprintf(stderr, "  Loop %d, pipe %d: F_SETPIPE_SZ "
                                "unexpected return: %d\n", loop, p, s);
                        exit(EXIT_FAILURE);
                    }
                }

                stime = (j + 1 < argc) ? atoi(argv[j + 1]) : 0;
                if (stime > 0) {
                    printf("  Sleeping %d seconds\n", stime);
                    sleep(stime);
                }
            }
        }

        exit(EXIT_SUCCESS);
    }

    8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---8x---

    Patch history:

    v2
    * Switch order of test in 'if' statement to avoid function call
    (to capable()) in normal path. [This is a fix to a preexisting
    wart in the code. Thanks to Willy Tarreau]
    * Perform (size > pipe_max_size) check before calling
    account_pipe_buffers(). [Thanks to Vegard Nossum]
    Quoting Vegard:

    The potential problem happens if the user passes a very large number
    which will overflow pipe->user->pipe_bufs.

    On 32-bit, sizeof(int) == sizeof(long), so if they pass arg = INT_MAX
    then round_pipe_size() returns INT_MAX. Although it's true that the
    accounting is done in terms of pages and not bytes, so you'd need on
    the order of (1 << 13) = 8192 processes hitting the limit at the same
    time in order to make it overflow, which seems a bit unlikely.

    (See https://lkml.org/lkml/2016/8/12/215 for another discussion on the
    limit checking)

    Link: http://lkml.kernel.org/r/1e464945-536b-2420-798b-e77b9c7e8593@gmail.com
    Signed-off-by: Michael Kerrisk
    Reviewed-by: Vegard Nossum
    Cc: Willy Tarreau
    Cc: Tetsuo Handa
    Cc: Jens Axboe
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Kerrisk (man-pages)
     
  • This is a preparatory patch for following work. account_pipe_buffers()
    performs accounting in the 'user_struct'. There is no need to pass a
    pointer to a 'pipe_inode_info' struct (which is then dereferenced to
    obtain a pointer to the 'user' field). Instead, pass a pointer directly
    to the 'user_struct'. This change is needed in preparation for a
    subsequent patch that fixes the limit checking in alloc_pipe_info()
    (and the resulting code is a little more logical).
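
    In other words (a sketch of the signature change):

        /* before */
        static void account_pipe_buffers(struct pipe_inode_info *pipe,
                                         unsigned long old, unsigned long new)
        {
                atomic_long_add(new - old, &pipe->user->pipe_bufs);
        }

        /* after */
        static void account_pipe_buffers(struct user_struct *user,
                                         unsigned long old, unsigned long new)
        {
                atomic_long_add(new - old, &user->pipe_bufs);
        }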

    Link: http://lkml.kernel.org/r/7277bf8c-a6fc-4a7d-659c-f5b145c981ab@gmail.com
    Signed-off-by: Michael Kerrisk
    Reviewed-by: Vegard Nossum
    Cc: Willy Tarreau
    Cc: Tetsuo Handa
    Cc: Jens Axboe
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Kerrisk (man-pages)
     
  • This is a preparatory patch for following work. Move the F_SETPIPE_SZ
    limit-checking logic from pipe_fcntl() into pipe_set_size(). This
    simplifies the code a little, and allows for reworking required in
    a later patch that fixes the limit checking in pipe_set_size().

    Link: http://lkml.kernel.org/r/3701b2c5-2c52-2c3e-226d-29b9deb29b50@gmail.com
    Signed-off-by: Michael Kerrisk
    Reviewed-by: Vegard Nossum
    Cc: Willy Tarreau
    Cc: Tetsuo Handa
    Cc: Jens Axboe
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Kerrisk (man-pages)
     
  • Patch series "pipe: fix limit handling", v2.

    When changing a pipe's capacity with fcntl(F_SETPIPE_SZ), various limits
    defined by /proc/sys/fs/pipe-* files are checked to see if unprivileged
    users are exceeding limits on memory consumption.

    While documenting and testing the operation of these limits I noticed
    that, as currently implemented, these checks have a number of problems:

    (1) When increasing the pipe capacity, the checks against the limits
    in /proc/sys/fs/pipe-user-pages-{soft,hard} are made against
    existing consumption, and exclude the memory required for the
    increased pipe capacity. The new increase in pipe capacity can then
    push the total memory used by the user for pipes (possibly far) over
    a limit. This can also trigger the problem described next.

    (2) The limit checks are performed even when the new pipe capacity
    is less than the existing pipe capacity. This can lead to problems
    if a user sets a large pipe capacity, and then the limits are
    lowered, with the result that the user will no longer be able to
    decrease the pipe capacity.

    (3) As currently implemented, accounting and checking against the
    limits is done as follows:

    (a) Test whether the user has exceeded the limit.
    (b) Make new pipe buffer allocation.
    (c) Account new allocation against the limits.

    This is racey. Multiple processes may pass point (a) simultaneously,
    and then allocate pipe buffers that are accounted for only in step
    (c). The race means that the user's pipe buffer allocation could be
    pushed over the limit (by an arbitrary amount, depending on how
    unlucky we were in the race). [Thanks to Vegard Nossum for spotting
    this point, which I had missed.]

    This patch series addresses these three problems.

    This patch (of 8):

    This is a minor preparatory patch. After subsequent patches,
    round_pipe_size() will be called from pipe_set_size(), so place
    round_pipe_size() above pipe_set_size().

    Link: http://lkml.kernel.org/r/91a91fdb-a959-ba7f-b551-b62477cc98a1@gmail.com
    Signed-off-by: Michael Kerrisk
    Reviewed-by: Vegard Nossum
    Cc: Willy Tarreau
    Cc: Tetsuo Handa
    Cc: Jens Axboe
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Kerrisk (man-pages)
     

11 Oct, 2016

1 commit

  • Pull more vfs updates from Al Viro:
    ">rename2() work from Miklos + current_time() from Deepa"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    fs: Replace current_fs_time() with current_time()
    fs: Replace CURRENT_TIME_SEC with current_time() for inode timestamps
    fs: Replace CURRENT_TIME with current_time() for inode timestamps
    fs: proc: Delete inode time initializations in proc_alloc_inode()
    vfs: Add current_time() api
    vfs: add note about i_op->rename changes to porting
    fs: rename "rename2" i_op to "rename"
    vfs: remove unused i_op->rename
    fs: make remaining filesystems use .rename2
    libfs: support RENAME_NOREPLACE in simple_rename()
    fs: support RENAME_NOREPLACE for local filesystems
    ncpfs: fix unused variable warning

    Linus Torvalds
     

06 Oct, 2016

2 commits


28 Sep, 2016

1 commit

  • CURRENT_TIME macro is not appropriate for filesystems as it
    doesn't use the right granularity for filesystem timestamps.
    Use current_time() instead.

    CURRENT_TIME is also not y2038 safe.

    This is also in preparation for the patch that transitions
    vfs timestamps to use 64 bit time and hence make them
    y2038 safe. As part of the effort current_time() will be
    extended to do range checks. Hence, it is necessary for all
    file system timestamps to use current_time(). Also,
    current_time() will be transitioned along with vfs to be
    y2038 safe.

    Note that whenever a single call to current_time() is used
    to change timestamps in different inodes, it is because they
    share the same time granularity.
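
    The typical replacement in a filesystem looks like this (a sketch;
    current_time() takes the inode so it can pick the superblock's
    timestamp granularity):

        /* e.g. after a successful pipe_write() or any data modification */
        inode->i_mtime = inode->i_ctime = current_time(inode);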

    Signed-off-by: Deepa Dinamani
    Reviewed-by: Arnd Bergmann
    Acked-by: Felipe Balbi
    Acked-by: Steven Whitehouse
    Acked-by: Ryusuke Konishi
    Acked-by: David Sterba
    Signed-off-by: Al Viro

    Deepa Dinamani
     

10 Aug, 2016

1 commit

  • To distinguish non-slab pages charged to kmemcg we mark them PageKmemcg,
    which sets page->_mapcount to -512. Currently, we set/clear PageKmemcg
    in __alloc_pages_nodemask()/free_pages_prepare() for any page allocated
    with __GFP_ACCOUNT, including those that aren't actually charged to any
    cgroup, i.e. allocated from the root cgroup context. To avoid overhead
    in case cgroups are not used, we only do that if memcg_kmem_enabled() is
    true. The latter is set iff there are kmem-enabled memory cgroups
    (online or offline). The root cgroup is not considered kmem-enabled.

    As a result, if a page is allocated with __GFP_ACCOUNT for the root
    cgroup when there are kmem-enabled memory cgroups and is freed after all
    kmem-enabled memory cgroups were removed, e.g.

    # no memory cgroups has been created yet, create one
    mkdir /sys/fs/cgroup/memory/test
    # run something allocating pages with __GFP_ACCOUNT, e.g.
    # a program using pipe
    dmesg | tail
    # remove the memory cgroup
    rmdir /sys/fs/cgroup/memory/test

    we'll get bad page state bug complaining about page->_mapcount != -1:

    BUG: Bad page state in process swapper/0 pfn:1fd945c
    page:ffffea007f651700 count:0 mapcount:-511 mapping: (null) index:0x0
    flags: 0x1000000000000000()

    To avoid that, let's mark with PageKmemcg only those pages that are
    actually charged to and hence pin a non-root memory cgroup.

    Fixes: 4949148ad433 ("mm: charge/uncharge kmemcg from generic page allocator paths")
    Reported-and-tested-by: Eric Dumazet
    Signed-off-by: Vladimir Davydov
    Signed-off-by: Linus Torvalds

    Vladimir Davydov
     

27 Jul, 2016

1 commit

  • Pipes can consume a significant amount of system memory, hence they
    should be accounted to kmemcg.

    This patch marks pipe_inode_info and anonymous pipe buffer page
    allocations as __GFP_ACCOUNT so that they would be charged to kmemcg.
    Note, since a pipe buffer page can be "stolen" and get reused for other
    purposes, including mapping to userspace, we clear PageKmemcg thus
    resetting page->_mapcount and uncharge it in anon_pipe_buf_steal, which
    is introduced by this patch.

    A note regarding anon_pipe_buf_steal implementation. We allow to steal
    the page if its ref count equals 1. It looks racy, but it is correct
    for anonymous pipe buffer pages, because:

    - We lock out all other pipe users, because ->steal is called with
    pipe_lock held, so the page can't be spliced to another pipe from
    under us.

    - The page is not on LRU and it never was.

    - Thus a parallel thread can access it only by PFN. Although this is
    quite possible (e.g. see page_idle_get_page and balloon_page_isolate)
    this is not dangerous, because all such functions do is increase page
    ref count, check if the page is the one they are looking for, and
    decrease ref count if it isn't. Since our page is clean except for
    PageKmemcg mark, which doesn't conflict with other _mapcount users,
    the worst that can happen is we see page_count > 2 due to a transient
    ref, in which case we false-positively abort ->steal, which is still
    fine, because ->steal is not guaranteed to succeed.
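
    A sketch of the resulting steal callback (condensed; see fs/pipe.c for
    the exact upstream version):

        static int anon_pipe_buf_steal(struct pipe_inode_info *pipe,
                                       struct pipe_buffer *buf)
        {
                struct page *page = buf->page;

                if (page_count(page) == 1) {    /* we hold the only real ref */
                        if (memcg_kmem_enabled())
                                memcg_kmem_uncharge(page, 0);  /* clear PageKmemcg */
                        __SetPageLocked(page);
                        return 0;
                }
                return 1;       /* transient PFN-based ref seen: refuse to steal */
        }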

    Link: http://lkml.kernel.org/r/20160527150313.GD26059@esperanza
    Signed-off-by: Vladimir Davydov
    Cc: Alexander Viro
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Eric Dumazet
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vladimir Davydov
     

05 Apr, 2016

1 commit

    PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
    ago with the promise that one day it would be possible to implement the
    page cache with bigger chunks than PAGE_SIZE.

    This promise never materialized, and it is unlikely it ever will.

    We have many places where PAGE_CACHE_SIZE is assumed to be equal to
    PAGE_SIZE, and it is a constant source of confusion whether
    PAGE_CACHE_* or PAGE_* constants should be used in a particular case,
    especially on the border between fs and mm.

    Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
    breakage to be doable.

    Let's stop pretending that pages in page cache are special. They are
    not.

    The changes are pretty straight-forward:

    - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> E;

    - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> E;

    - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};

    - page_cache_get() -> get_page();

    - page_cache_release() -> put_page();

    This patch contains automated changes generated with coccinelle using
    script below. For some reason, coccinelle doesn't patch header files.
    I've called spatch for them manually.

    The only adjustment after coccinelle is a revert of the changes to the
    PAGE_CACHE_ALIGN definition: we are going to drop it later.

    There are a few places in the code that coccinelle didn't reach. I'll
    fix them manually in a separate patch. Comments and documentation will
    also be addressed in a separate patch.

    virtual patch

    @@
    expression E;
    @@
    - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    expression E;
    @@
    - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    @@
    - PAGE_CACHE_SHIFT
    + PAGE_SHIFT

    @@
    @@
    - PAGE_CACHE_SIZE
    + PAGE_SIZE

    @@
    @@
    - PAGE_CACHE_MASK
    + PAGE_MASK

    @@
    expression E;
    @@
    - PAGE_CACHE_ALIGN(E)
    + PAGE_ALIGN(E)

    @@
    expression E;
    @@
    - page_cache_get(E)
    + get_page(E)

    @@
    expression E;
    @@
    - page_cache_release(E)
    + put_page(E)

    Signed-off-by: Kirill A. Shutemov
    Acked-by: Michal Hocko
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
     

20 Jan, 2016

1 commit

    On not-so-small systems, it is possible for a single process to cause an
    OOM condition by filling large pipes with data that are never read. A
    typical process filling 4000 pipes with 1 MB of data will use 4 GB of
    memory. On small systems it may be tricky to set the pipe max size to
    prevent this from happening.

    This patch makes it possible to enforce a per-user soft limit above
    which new pipes will be limited to a single page, effectively limiting
    them to 4 kB each, as well as a hard limit above which no new pipes may
    be created for this user. This has the effect of protecting the system
    against memory abuse without hurting other users, and still allowing
    pipes to work correctly though with less data at once.

    The limits are controlled by two new sysctls: pipe-user-pages-soft and
    pipe-user-pages-hard. Both may be disabled by setting them to zero. The
    default soft limit allows the default number of FDs per process (1024)
    to create pipes of the default size (64kB), thus reaching a limit of 64MB
    before starting to create only smaller pipes. With 256 processes limited
    to 1024 FDs each, this results in 1024*64kB + (256*1024 - 1024) * 4kB =
    1084 MB of memory allocated for a user. The hard limit is disabled by
    default to avoid breaking existing applications that make intensive use
    of pipes (eg: for splicing).
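
    The accounting and limit checks are small helpers in fs/pipe.c; roughly
    (a sketch of the helpers as first introduced here -- the later fixes
    listed earlier in this log rework them):

        static void account_pipe_buffers(struct pipe_inode_info *pipe,
                                         unsigned long old, unsigned long new)
        {
                atomic_long_add(new - old, &pipe->user->pipe_bufs);
        }

        static bool too_many_pipe_buffers_soft(struct user_struct *user)
        {
                return pipe_user_pages_soft &&
                       atomic_long_read(&user->pipe_bufs) >= pipe_user_pages_soft;
        }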

    Reported-by: socketpair@gmail.com
    Reported-by: Tetsuo Handa
    Mitigates: CVE-2013-4312 (Linux 2.0+)
    Suggested-by: Linus Torvalds
    Signed-off-by: Willy Tarreau
    Signed-off-by: Al Viro

    Willy Tarreau
     

11 Nov, 2015

2 commits

  • pipe_write() would return 0 if it failed to merge the beginning of the
    data to write with the last, partially filled pipe buffer. It should
    return an error code instead. Userspace programs could be confused by
    write() returning 0 when called with a nonzero 'count'.

    The EFAULT error case was a regression from f0d1bec9d5 ("new helper:
    copy_page_from_iter()"), while the ops->confirm() error case was a much
    older bug.

    Test program:

    #include <assert.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char data[1] = {0};

        assert(0 == pipe(fd));
        assert(1 == write(fd[1], data, 1));

        /* prior to this patch, write() returned 0 here */
        assert(-1 == write(fd[1], NULL, 1));
        assert(errno == EFAULT);
    }

    Cc: stable@vger.kernel.org # at least v3.15+
    Signed-off-by: Eric Biggers
    Signed-off-by: Al Viro

    Eric Biggers
     
  • If sys_pipe() was unable to allocate a 'struct file', it always failed
    with ENFILE, which means "The number of simultaneously open files in the
    system would exceed a system-imposed limit." However, alloc_file()
    actually returns an ERR_PTR value and might fail with other error codes.
    Currently, in addition to ENFILE, it can fail with ENOMEM, potentially
    when there are few open files in the system. Update sys_pipe() to
    preserve this error code.
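
    The change is essentially to propagate the ERR_PTR value instead of
    hard-coding ENFILE; roughly (a sketch with identifiers from fs/pipe.c):

        f = alloc_file(&path, FMODE_WRITE, &pipefifo_fops);
        if (IS_ERR(f)) {
                err = PTR_ERR(f);       /* may be -ENOMEM, not only -ENFILE */
                goto err_dentry;
        }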

    In a prior submission of a similar patch (1) some concern was raised
    about introducing a new error code for sys_pipe(). However, for most
    system calls, programs cannot assume that new error codes will never be
    introduced. In addition, ENOMEM was, in fact, already a possible error
    code for sys_pipe(), in the case where the file descriptor table could
    not be expanded due to insufficient memory.

    (1) http://comments.gmane.org/gmane.linux.kernel/1357942

    Signed-off-by: Eric Biggers
    Signed-off-by: Al Viro

    Eric Biggers
     

16 Apr, 2015

1 commit


12 Apr, 2015

1 commit

  • All places outside of core VFS that checked ->read and ->write for being NULL or
    called the methods directly are gone now, so NULL {read,write} with non-NULL
    {read,write}_iter will do the right thing in all cases.
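
    For pipes this means pipefifo_fops can carry only the iter methods,
    e.g. (a trimmed sketch; the real table has more entries):

        const struct file_operations pipefifo_fops = {
                .open           = fifo_open,
                .read_iter      = pipe_read,
                .write_iter     = pipe_write,
                /* no .read/.write: the VFS falls back to the _iter methods */
                ...
        };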

    Signed-off-by: Al Viro

    Al Viro
     

26 Mar, 2015

1 commit


07 May, 2014

3 commits


02 Apr, 2014

2 commits


24 Jan, 2014

1 commit

    A pipe has no data associated with the filesystem, so it is not a good
    idea to block pipe_write() when the FS is frozen; but we cannot update
    the file's times on such a filesystem either. Let's use the same idea
    as we use in touch_time().

    Addresses https://bugzilla.kernel.org/show_bug.cgi?id=65701

    Signed-off-by: Dmitry Monakhov
    Reviewed-by: Jan Kara
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Monakhov
     

03 Dec, 2013

1 commit

  • The pipe code was trying (and failing) to be very careful about freeing
    the pipe info only after the last access, with a pattern like:

    spin_lock(&inode->i_lock);
    if (!--pipe->files) {
            inode->i_pipe = NULL;
            kill = 1;
    }
    spin_unlock(&inode->i_lock);
    __pipe_unlock(pipe);
    if (kill)
            free_pipe_info(pipe);

    where the final freeing is done last.

    HOWEVER. The above is actually broken, because while the freeing is
    done at the end, if we have two racing processes releasing the pipe
    inode info, the one that *doesn't* free it will decrement the ->files
    count, and unlock the inode i_lock, but then still use the
    "pipe_inode_info" afterwards when it does the "__pipe_unlock(pipe)".

    This is *very* hard to trigger in practice, since the race window is
    very small, and adding debug options seems to just hide it by slowing
    things down.

    Simon originally reported this way back in July as an Oops in
    kmem_cache_allocate due to a single-bit corruption (caused by the final
    "spin_unlock(pipe->mutex.wait_lock)" incrementing a field in a different
    allocation that had re-used the freed pipe-info); it's taken this long
    to figure out.

    Since the 'pipe->files' accesses aren't even protected by the pipe lock
    (we very much use the inode lock for that), the simple solution is to
    just drop the pipe lock early. And since there were two users of this
    pattern, create a helper function for it.
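
    The resulting helper looks roughly like this (a sketch of the upstream
    put_pipe_info(); note the caller drops the pipe lock *before* calling it):

        static void put_pipe_info(struct inode *inode, struct pipe_inode_info *pipe)
        {
                int kill = 0;

                spin_lock(&inode->i_lock);
                if (!--pipe->files) {
                        inode->i_pipe = NULL;
                        kill = 1;
                }
                spin_unlock(&inode->i_lock);

                if (kill)
                        free_pipe_info(pipe);
        }

        /* caller: */
        __pipe_unlock(pipe);            /* last touch of the pipe itself */
        put_pipe_info(inode, pipe);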

    Introduced by commit ba5bb147330a ("pipe: take allocation and freeing of
    pipe_inode_info out of ->i_mutex").

    Reported-by: Simon Kirby
    Reported-by: Ian Applegate
    Acked-by: Al Viro
    Cc: stable@kernel.org # v3.10+
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

08 May, 2013

1 commit

  • Faster kernel compiles by way of fewer unnecessary includes.

    [akpm@linux-foundation.org: fix fallout]
    [akpm@linux-foundation.org: fix build]
    Signed-off-by: Kent Overstreet
    Cc: Zach Brown
    Cc: Felipe Balbi
    Cc: Greg Kroah-Hartman
    Cc: Mark Fasheh
    Cc: Joel Becker
    Cc: Rusty Russell
    Cc: Jens Axboe
    Cc: Asai Thambi S P
    Cc: Selvan Mani
    Cc: Sam Bradshaw
    Cc: Jeff Moyer
    Cc: Al Viro
    Cc: Benjamin LaHaise
    Reviewed-by: "Theodore Ts'o"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kent Overstreet
     

10 Apr, 2013

4 commits