01 Oct, 2016

1 commit

  • CAI Qian pointed out that the semantics
    of shared subtrees make it possible to create an exponentially
    increasing number of mounts in a mount namespace.

    # With / recursively shared, each bind below propagates, roughly
    # doubling the number of mounts per iteration, ending near 2^20.
    mkdir /tmp/1 /tmp/2
    mount --make-rshared /
    for i in $(seq 1 20) ; do mount --bind /tmp/1 /tmp/2 ; done

    This will create 2^20 or 1048576 mounts, which is a practical
    problem, as some people have managed to hit this by accident.

    As such CVE-2016-6213 was assigned.

    Ian Kent described the situation for autofs users
    as follows:

    > The number of mounts for direct mount maps is usually not very large because of
    > the way they are implemented, large direct mount maps can have performance
    > problems. There can be anywhere from a few (likely case a few hundred) to less
    > than 10000, plus mounts that have been triggered and not yet expired.
    >
    > Indirect mounts have one autofs mount at the root plus the number of mounts that
    > have been triggered and not yet expired.
    >
    > The number of autofs indirect map entries can range from a few to the common
    > case of several thousand and in rare cases up to between 30000 and 50000. I've
    > not heard of people with maps larger than 50000 entries.
    >
    > The larger the number of map entries the greater the possibility for a large
    > number of active mounts so it's not hard to expect cases of a 1000 or somewhat
    > more active mounts.

    So I am setting the default number of mounts allowed per mount
    namespace at 100,000. This is more than enough for any use case I
    know of, but small enough to quickly stop an exponential increase
    in mounts, which should be perfect for catching misconfigurations
    and malfunctioning programs.

    For anyone who needs a higher limit this can be changed by writing
    to the new /proc/sys/fs/mount-max sysctl.
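
    A minimal user-space sketch of reading and raising the limit,
    assuming only the /proc/sys/fs/mount-max path named above (the
    200000 value is an arbitrary example, and writing requires
    privilege over /proc/sys):

    #include <stdio.h>

    int main(void)
    {
        long cur;
        FILE *f = fopen("/proc/sys/fs/mount-max", "r");

        if (!f || fscanf(f, "%ld", &cur) != 1) {
            perror("fs.mount-max");
            return 1;
        }
        fclose(f);
        printf("current limit: %ld mounts per namespace\n", cur);

        /* Raise the limit; needs privilege to write under /proc/sys. */
        f = fopen("/proc/sys/fs/mount-max", "w");
        if (!f || fprintf(f, "%d\n", 200000) < 0) {
            perror("fs.mount-max");
            return 1;
        }
        return fclose(f) ? 1 : 0;
    }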

    Tested-by: CAI Qian
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     

05 May, 2016

1 commit

  • When the first propagated copy was a slave, the following oops would result:
    > BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
    > IP: [] propagate_one+0xbe/0x1c0
    > PGD bacd4067 PUD bac66067 PMD 0
    > Oops: 0000 [#1] SMP
    > Modules linked in:
    > CPU: 1 PID: 824 Comm: mount Not tainted 4.6.0-rc5userns+ #1523
    > Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
    > task: ffff8800bb0a8000 ti: ffff8800bac3c000 task.ti: ffff8800bac3c000
    > RIP: 0010:[] [] propagate_one+0xbe/0x1c0
    > RSP: 0018:ffff8800bac3fd38 EFLAGS: 00010283
    > RAX: 0000000000000000 RBX: ffff8800bb77ec00 RCX: 0000000000000010
    > RDX: 0000000000000000 RSI: ffff8800bb58c000 RDI: ffff8800bb58c480
    > RBP: ffff8800bac3fd48 R08: 0000000000000001 R09: 0000000000000000
    > R10: 0000000000001ca1 R11: 0000000000001c9d R12: 0000000000000000
    > R13: ffff8800ba713800 R14: ffff8800bac3fda0 R15: ffff8800bb77ec00
    > FS: 00007f3c0cd9b7e0(0000) GS:ffff8800bfb00000(0000) knlGS:0000000000000000
    > CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    > CR2: 0000000000000010 CR3: 00000000bb79d000 CR4: 00000000000006e0
    > Stack:
    > ffff8800bb77ec00 0000000000000000 ffff8800bac3fd88 ffffffff811fbf85
    > ffff8800bac3fd98 ffff8800bb77f080 ffff8800ba713800 ffff8800bb262b40
    > 0000000000000000 0000000000000000 ffff8800bac3fdd8 ffffffff811f1da0
    > Call Trace:
    > [] propagate_mnt+0x105/0x140
    > [] attach_recursive_mnt+0x120/0x1e0
    > [] graft_tree+0x63/0x70
    > [] do_add_mount+0x9b/0x100
    > [] do_mount+0x2aa/0xdf0
    > [] ? strndup_user+0x4e/0x70
    > [] SyS_mount+0x75/0xc0
    > [] do_syscall_64+0x4b/0xa0
    > [] entry_SYSCALL64_slow_path+0x25/0x25
    > Code: 00 00 75 ec 48 89 0d 02 22 22 01 8b 89 10 01 00 00 48 89 05 fd 21 22 01 39 8e 10 01 00 00 0f 84 e0 00 00 00 48 8b 80 d8 00 00 00 8b 50 10 48 89 05 df 21 22 01 48 89 15 d0 21 22 01 8b 53 30
    > RIP [] propagate_one+0xbe/0x1c0
    > RSP
    > CR2: 0000000000000010
    > ---[ end trace 2725ecd95164f217 ]---

    This oops happens with the namespace_sem held and can be triggered
    by non-root users - an all around unpleasant experience.

    To avoid this scenario, when finding the appropriate source mount
    to copy, stop the walk up the mnt_master chain when the first
    source mount is encountered.

    Further, rewrite the walk up the last_source mnt_master chain so
    that it is clear what is going on.

    The reason the first source mount is special is that its
    mnt_parent is not a mount in the dest_mnt propagation tree, so
    termination conditions based upon the dest_mnt mount propagation
    tree do not make sense.

    To avoid other kinds of confusion, last_dest is not changed when
    computing last_source. last_dest is only used once in
    propagate_one, and that is above the point of the code being
    modified, so changing the global variable would be meaningless and
    confusing.
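
    For illustration only, a sketch of the kind of topology described
    above (not the reporter's reproducer, and not claimed to oops a
    fixed kernel): a shared mount whose only propagation target is a
    slave, so the first propagated copy of anything mounted beneath it
    is created in that slave. The /tmp paths are arbitrary and the
    program needs CAP_SYS_ADMIN:

    #include <stdio.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    int main(void)
    {
        /* A tmpfs that is its own (single-member) shared peer group. */
        mkdir("/tmp/master", 0755);
        if (mount("none", "/tmp/master", "tmpfs", 0, NULL) ||
            mount(NULL, "/tmp/master", NULL, MS_SHARED, NULL)) {
            perror("master");
            return 1;
        }
        mkdir("/tmp/master/sub", 0755);

        /* Bind it elsewhere, then turn the bind into a slave. */
        mkdir("/tmp/slave", 0755);
        if (mount("/tmp/master", "/tmp/slave", NULL, MS_BIND, NULL) ||
            mount(NULL, "/tmp/slave", NULL, MS_SLAVE, NULL)) {
            perror("slave");
            return 1;
        }

        /* Mounting under the shared master now propagates only to the
         * slave, so the first propagated copy is a slave copy. */
        if (mount("none", "/tmp/master/sub", "tmpfs", 0, NULL)) {
            perror("propagated mount");
            return 1;
        }
        return 0;
    }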

    Cc: stable@vger.kernel.org
    Fixes: f2ebb3a921c1ca1e2ddd9242e95a1989a50c4c68 ("smarter propagate_mnt()")
    Reported-by: Tycho Andersen
    Reviewed-by: Seth Forshee
    Tested-by: Seth Forshee
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     

20 Feb, 2016

1 commit

  • propagate_one(m) calculates the "type" argument for copy_tree() like this:

    > if (m->mnt_group_id == last_dest->mnt_group_id) {
    >         type = CL_MAKE_SHARED;
    > } else {
    >         type = CL_SLAVE;
    >         if (IS_MNT_SHARED(m))
    >                 type |= CL_MAKE_SHARED;
    > }

    The "type" argument then governs clone_mnt() behavior with respect to flags
    and mnt_master of new mount. When we iterate through a slave group, it is
    possible that both current "m" and "last_dest" are not shared (although,
    both are slaves, i.e. have non-NULL mnt_master-s). Then the comparison
    above erroneously makes new mount shared and sets its mnt_master to
    last_source->mnt_master. The patch fixes the problem by handling zero
    mnt_group_id-s as though they are unequal.

    A similar problem exists in the implementation of the "else"
    clause above, when we have to ascend upward in the master/slave
    tree by calling:

    > last_source = last_source->mnt_master;
    > last_dest = last_source->mnt_parent;

    the proper number of times. The last step is governed by the
    "n->mnt_group_id != last_dest->mnt_group_id" condition, which may
    lie if both are zero. The patch fixes this case in the same way as
    the former one.

    [AV: don't open-code an obvious helper...]
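
    A self-contained sketch of the comparison the patch switches to;
    the helper name peers() and the pared-down structure are
    assumptions for illustration, not the kernel's code:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the kernel's struct mount, reduced to the one
     * field that matters here. */
    struct mount { int mnt_group_id; };

    /* Mounts are peers only if they share a non-zero group id; two
     * group-less slaves (id == 0) never compare as peers. */
    static bool peers(const struct mount *a, const struct mount *b)
    {
        return a->mnt_group_id == b->mnt_group_id && a->mnt_group_id != 0;
    }

    int main(void)
    {
        struct mount m = { 0 }, last_dest = { 0 };

        /* With the naive "==" test these two slaves would wrongly be
         * treated as peers and the new mount made shared. */
        printf("naive: %d, fixed: %d\n",
               m.mnt_group_id == last_dest.mnt_group_id,
               peers(&m, &last_dest));
        return 0;
    }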

    Signed-off-by: Maxim Patlasov
    Signed-off-by: Al Viro

    Maxim Patlasov
     

03 Apr, 2015

5 commits

  • If the first mount in a shared subtree is locked, don't unmount
    the shared subtree.

    This is ensured by walking through the mounts, parents before
    children, and marking a mount as unmountable if it is not locked,
    or if it is locked but its parent is marked.

    This allows a recursive mount detach to propagate through a set of
    mounts when unmounting them would not reveal what is under any
    locked mount.
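
    A toy version of that marking pass, with illustrative names and
    types (not the kernel's): mounts are visited parents before
    children, and a mount is marked unmountable when it is unlocked,
    or locked but sitting on an already-marked parent:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct mnt {
        struct mnt *parent;   /* NULL for the root of the subtree */
        bool locked;
        bool can_umount;      /* the "mark" described above */
    };

    /* mounts[] is assumed ordered so every parent precedes its
     * children (e.g. a preorder walk of the subtree being detached). */
    static void mark_umountable(struct mnt **mounts, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            struct mnt *m = mounts[i];

            m->can_umount = !m->locked ||
                            (m->parent && m->parent->can_umount);
        }
    }

    int main(void)
    {
        struct mnt root = { NULL, false, false };   /* unlocked root     */
        struct mnt mid  = { &root, true, false };   /* locked child      */
        struct mnt leaf = { &mid, true, false };    /* locked grandchild */
        struct mnt *order[] = { &root, &mid, &leaf };

        mark_umountable(order, 3);
        /* Prints "1 1 1": detaching reveals nothing hidden under a
         * locked mount, so the whole subtree may go. */
        printf("%d %d %d\n", root.can_umount, mid.can_umount,
               leaf.can_umount);
        return 0;
    }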

    Cc: stable@vger.kernel.org
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     
  • A prerequisite of calling umount_tree is that the point where the
    tree is mounted is valid to unmount.

    If we are propagating the effect of the unmount, clear MNT_LOCKED
    in every instance where the same filesystem is mounted on the same
    mountpoint in the mount tree, as we know (by virtue of the fact
    that umount_tree was called) that it is safe to reveal what is at
    that mountpoint.

    Cc: stable@vger.kernel.org
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     
  • - Modify __lookup_mnt_hash_last to ignore mounts that have MNT_UMOUNTED set.
    - Don't remove mounts from the mount hash table in propogate_umount.
    - Don't remove mounts from the mount hash table in umount_tree before
      the entire list of mounts to be umounted is selected.
    - Remove mounts from the mount hash table as the last thing that
      happens, in the case where a mount has a parent in umount_tree.
      Mounts without parents are not hashed (by definition).

    This paves the way for delaying removal from the mount hash table
    even further and fixing the MNT_LOCKED vs MNT_DETACH issue.

    Cc: stable@vger.kernel.org
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     
  • In some instances it is necessary to know if the unmounting
    process has begun on a mount. Add MNT_UMOUNT to make that reliably
    testable.

    This flag gets used in fixing locked mounts with MNT_DETACH.

    Cc: stable@vger.kernel.org
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     
  • umount_tree builds a list of mounts that need to be unmounted.
    Utilize mnt_list for this purpose instead of mnt_hash. This begins to
    allow keeping a mount on the mnt_hash after it is unmounted, which is
    necessary for a properly functioning MNT_LOCKED implementation.

    The fact that mnt_list is an ordinary list, making list_move
    available, is a nice bonus.

    Cc: stable@vger.kernel.org
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     

03 Dec, 2014

1 commit

  • Clear MNT_LOCKED in the callers of copy_tree except copy_mnt_ns
    and collect_mounts. In copy_mnt_ns it is necessary to create an
    exact copy of a mount tree, so not clearing MNT_LOCKED is
    important. Similarly, collect_mounts is used to take a snapshot of
    the mount tree for audit logging purposes, and auditing using a
    faithful copy of the tree is important.

    This becomes particularly significant when we start setting MNT_LOCKED
    on rootfs to prevent it from being unmounted.

    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     

31 Aug, 2014

1 commit


02 Apr, 2014

1 commit

  • The current mainline has copies propagated to *all* nodes, then
    tears down the copies we made for nodes that do not contain
    counterparts of the desired mountpoint. That sets the right
    propagation graph for the copies (at teardown time we move
    the slaves of removed node to a surviving peer or directly
    to master), but we end up paying a fairly steep price in
    useless allocations. It's fairly easy to create a situation where
    N calls of mount(2) create exactly N bindings, with O(N^2)
    vfsmounts allocated and freed in the process.

    Fortunately, it is possible to avoid those allocations/freeings.
    The trick is to create copies in the right order and find which
    one would've eventually become a master with the current algorithm.
    It turns out to be possible in O(nodes getting propagation) time
    and with no extra allocations at all.

    One part is that we need to make sure that the eventual master
    will be created before its slaves, so we need to walk the
    propagation tree in a different order - by peer groups, iterating
    through the peers before dealing with the next group.

    Another thing is finding the (earlier) copy that will be the
    master of the one we are about to create; to do that we
    (temporarily) mark the masters of the mountpoints we are attaching
    the copies to.

    Either we are in a peer of the last mountpoint we'd dealt with, or
    we have the following situation: we are attaching to mountpoint M,
    the last copy S_0 had been attached to M_0, and there are
    sequences S_0...S_n, M_0...M_n such that S_{i+1} is a master of
    S_{i}, S_{i} is mounted on M_{i}, and we need to create a slave of
    the first S_{k} such that M is getting propagation from M_{k}. It
    means that the master of M_{k} will be among the sequence of
    masters of M. On the other hand, the nearest marked node in that
    sequence will either be the master of M_{k} or the master of
    M_{k-1} (the latter in the case where M_{k-1} is a slave of
    something M gets propagation from, but in the wrong peer group).

    So we go through the sequence of masters of M until we find a
    marked one (P). Let N be the one before it. Then we go through the
    sequence of masters of S_0 until we find one (say, S) mounted on a
    node D that has P as its master, and check whether D is a peer of
    N. If it is, S will be the master of the new copy; if not, the
    master of S will be.
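
    As a hedged aid to reading, here is a literal transcription of
    that search into toy C; the types, field names, and helpers are
    stand-ins (this is not the kernel's propagate_one()), and it
    assumes a marked master of M exists as described:

    #include <stdbool.h>
    #include <stddef.h>

    struct node {
        struct node *master;   /* whom this node is a slave of       */
        struct node *mnt_at;   /* node a copy is mounted on (copies) */
        int group_id;          /* peer group id, 0 if not shared     */
        bool marked;
    };

    static bool peers(const struct node *a, const struct node *b)
    {
        return a->group_id == b->group_id && a->group_id != 0;
    }

    /* M: mountpoint being attached to; s0: the last copy created. */
    static struct node *master_for_new_copy(struct node *M, struct node *s0)
    {
        struct node *N = M, *P = M->master;

        while (P && !P->marked) {      /* masters of M, up to a marked one */
            N = P;
            P = P->master;
        }

        for (struct node *S = s0; S; S = S->master) {
            struct node *D = S->mnt_at;

            if (D && D->master == P)   /* first S sitting on a slave of P */
                return peers(D, N) ? S : S->master;
        }
        return NULL;                   /* not reached in the case above */
    }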

    That's it for the hard part; the rest is fairly simple. Iterator
    is in next_group(), handling of one prospective mountpoint is
    propagate_one().

    It seems to survive all tests and gives noticeably better
    performance than the current mainline for setups that are
    seriously using shared subtrees.

    Cc: stable@vger.kernel.org
    Signed-off-by: Al Viro

    Al Viro
     

31 Mar, 2014

1 commit

  • Fixes an RCU bug - walking through an hlist is safe in the face of
    element moves, since it is self-terminating. Cyclic lists are not -
    if we end up jumping to another hash chain, we'll loop infinitely
    without ever hitting the original list head.

    [fix for dumb braino folded]
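
    A user-space illustration of that termination argument, with
    simplified list types (not the kernel's hlist/list
    implementations): the hlist-style walk stops at NULL no matter
    which chain the cursor ends up on, while the circular walk only
    stops when it returns to the head it started from:

    #include <stdio.h>

    struct hnode { struct hnode *next; };   /* NULL-terminated chain */
    struct cnode { struct cnode *next; };   /* circular ring         */

    static size_t count_hlist(const struct hnode *head)
    {
        size_t n = 0;

        /* Ends on any chain: the last node's next is NULL, so even if
         * the element we were on moved elsewhere we still hit an end. */
        for (const struct hnode *p = head; p; p = p->next)
            n++;
        return n;
    }

    static size_t count_cyclic(const struct cnode *head)
    {
        size_t n = 0;

        /* Only ends when we get back to the original head; after a
         * jump onto a different ring that never happens. */
        for (const struct cnode *p = head->next; p != head; p = p->next)
            n++;
        return n;
    }

    int main(void)
    {
        struct hnode h[3] = { { &h[1] }, { &h[2] }, { NULL } };
        struct cnode c[3] = { { &c[1] }, { &c[2] }, { &c[0] } };

        printf("%zu %zu\n", count_hlist(&h[0]), count_cyclic(&c[0]));
        return 0;
    }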

    Spotted by: Max Kellermann
    Cc: stable@vger.kernel.org
    Signed-off-by: Al Viro

    Al Viro
     

25 Oct, 2013

3 commits


01 Jun, 2013

1 commit

  • When the group id of a shared mount is not allocated, the umount still
    tries to call mnt_release_group_id(), which eventually hits a kernel
    warning at ida_remove() spewing a message like:
    ida_remove called for id=0 which is not allocated.

    This patch fixes the bug by simply checking the group id in the caller.

    Reported-by: Cristian Rodríguez
    Signed-off-by: Takashi Iwai
    Signed-off-by: Al Viro

    Takashi Iwai
     

02 May, 2013

1 commit

  • Pull VFS updates from Al Viro:

    Misc cleanups all over the place, mainly wrt /proc interfaces (switch
    create_proc_entry to proc_create(), get rid of the deprecated
    create_proc_read_entry() in favor of using proc_create_data() and
    seq_file etc).

    7kloc removed.

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (204 commits)
    don't bother with deferred freeing of fdtables
    proc: Move non-public stuff from linux/proc_fs.h to fs/proc/internal.h
    proc: Make the PROC_I() and PDE() macros internal to procfs
    proc: Supply a function to remove a proc entry by PDE
    take cgroup_open() and cpuset_open() to fs/proc/base.c
    ppc: Clean up scanlog
    ppc: Clean up rtas_flash driver somewhat
    hostap: proc: Use remove_proc_subtree()
    drm: proc: Use remove_proc_subtree()
    drm: proc: Use minor->index to label things, not PDE->name
    drm: Constify drm_proc_list[]
    zoran: Don't print proc_dir_entry data in debug
    reiserfs: Don't access the proc_dir_entry in r_open(), r_start() r_show()
    proc: Supply an accessor for getting the data from a PDE's parent
    airo: Use remove_proc_subtree()
    rtl8192u: Don't need to save device proc dir PDE
    rtl8187se: Use a dir under /proc/net/r8180/
    proc: Add proc_mkdir_data()
    proc: Move some bits from linux/proc_fs.h to linux/{of.h,signal.h,tty.h}
    proc: Move PDE_NET() to fs/proc/proc_net.c
    ...

    Linus Torvalds
     

10 Apr, 2013

2 commits


27 Mar, 2013

1 commit

  • As a matter of policy MNT_READONLY should not be changeable if the
    original mounter had more privileges than the creator of the mount
    namespace.

    Add the flag CL_UNPRIVILEGED to note when we are copying a mount from
    a mount namespace that requires more privileges to a mount namespace
    that requires fewer privileges.

    When the CL_UNPRIVILEGED flag is set, have clone_mnt set
    MNT_NO_REMOUNT if any of the mnt flags that should never be
    changed are set.

    This protects both mount propagation and the initial creation of a less
    privileged mount namespace.
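
    A hypothetical sketch of that rule, with stand-in flag values and
    a pared-down mount structure (none of this is the kernel's
    clone_mnt()): when a copy is made into a less privileged namespace
    and carries a flag that must not be changed there, the copy is
    marked so a remount cannot alter it:

    #include <stdio.h>

    /* Illustrative values only. */
    #define MNT_READONLY    0x01
    #define MNT_NO_REMOUNT  0x02
    #define CL_UNPRIVILEGED 0x100

    /* Flags the less privileged namespace must never be able to change. */
    #define MNT_LOCKED_FLAGS MNT_READONLY

    struct mnt { unsigned int mnt_flags; };

    static void mark_unprivileged_copy(struct mnt *copy, int clone_flags)
    {
        if ((clone_flags & CL_UNPRIVILEGED) &&
            (copy->mnt_flags & MNT_LOCKED_FLAGS))
            copy->mnt_flags |= MNT_NO_REMOUNT;
    }

    int main(void)
    {
        struct mnt copy = { MNT_READONLY };

        mark_unprivileged_copy(&copy, CL_UNPRIVILEGED);
        printf("no-remount set: %d\n",
               !!(copy.mnt_flags & MNT_NO_REMOUNT));
        return 0;
    }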

    Cc: stable@vger.kernel.org
    Acked-by: Serge Hallyn
    Reported-by: Andy Lutomirski
    Signed-off-by: "Eric W. Biederman"

    Eric W. Biederman
     

14 Jul, 2012

1 commit

  • copy_tree() can theoretically fail in a case other than ENOMEM, but always
    returns NULL which is interpreted by callers as -ENOMEM. Change it to return
    an explicit error.

    Also change clone_mnt() for consistency and because union mounts will add new
    error cases.

    Thanks to Andreas Gruenbacher for a bug fix.
    [AV: folded braino fix by Dan Carpenter]
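
    A user-space sketch of the convention the change moves to, using
    miniature stand-ins for the kernel's ERR_PTR()/PTR_ERR()/IS_ERR()
    helpers and a toy return type (the real copy_tree() diff is not
    shown): encode the specific errno in the returned pointer instead
    of collapsing every failure to NULL:

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Miniature stand-ins for the kernel's error-pointer helpers. */
    static inline void *ERR_PTR(long err)      { return (void *)err; }
    static inline long  PTR_ERR(const void *p) { return (long)p; }
    static inline int   IS_ERR(const void *p)
    {
        return (uintptr_t)p >= (uintptr_t)-4095L;
    }

    struct tree { int id; };

    /* Toy copy routine returning an explicit error instead of NULL. */
    static struct tree *copy_tree_sketch(int err)
    {
        static struct tree t = { 1 };

        if (err)
            return ERR_PTR(-(long)err);   /* e.g. -EPERM, not just NULL */
        return &t;
    }

    int main(void)
    {
        struct tree *t = copy_tree_sketch(EPERM);

        if (IS_ERR(t))                    /* callers can tell errors apart */
            printf("copy failed: %ld\n", PTR_ERR(t));
        return 0;
    }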

    Original-author: Valerie Aurora
    Signed-off-by: David Howells
    Cc: Valerie Aurora
    Cc: Andreas Gruenbacher
    Signed-off-by: Al Viro

    David Howells
     

30 May, 2012

1 commit

  • lglocks and brlocks are currently generated with some complicated macros
    in lglock.h. But there's no reason to not just use common utility
    functions and put all the data into a common data structure.

    In preparation, this patch changes the API to look more like normal
    function calls with pointers, not magic macros.

    The patch is rather large because I move over all users in one go
    to keep it bisectable. This impacts the VFS somewhat in terms of
    lines changed, but there is no actual behaviour change.

    [akpm@linux-foundation.org: checkpatch fixes]
    Signed-off-by: Andi Kleen
    Cc: Al Viro
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Rusty Russell
    Signed-off-by: Al Viro

    Andi Kleen
     

04 Jan, 2012

18 commits