15 Jun, 2010

4 commits

  • Helps find racy users of call_rcu(), which result in hangs because list
    entries are overwritten and/or skipped.

    Changelog since v4:
    - Bisectability is now OK
    - Now generates a WARN_ON_ONCE() for a non-initialized rcu_head passed to
    call_rcu(). Statically initialized objects are detected with
    object_is_static().
    - Rename rcu_head_init_on_stack to init_rcu_head_on_stack.
    - Remove init_rcu_head() completely.

    Changelog since v3:
    - Include comments from Lai Jiangshan

    This new patch version is based on the debugobjects infrastructure with the
    newly introduced "active state" tracker.

    Non-initialized entries are all considered 'statically initialized'. An
    activation fixup (triggered by call_rcu()) takes care of performing the debug
    object initialization without issuing any warning. Since we cannot increase the
    size of struct rcu_head, I don't see much room to put an identifier for
    statically initialized rcu_head structures. So for now, we have to live without
    "activation without explicit init" detection. But the main purpose of this debug
    option is to detect double activations (double call_rcu() use of an rcu_head
    before the callback is executed), and that case is correctly addressed here.

    This also detects potential internal RCU callback corruption, which would cause
    the callbacks to be executed twice.
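
    As a minimal sketch (struct and function names are hypothetical, not from
    the patch), the double-call_rcu() bug class this option catches looks like
    this:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct foo {                            /* hypothetical object */
            struct rcu_head rcu;
    };

    static void foo_free_rcu(struct rcu_head *head)
    {
            kfree(container_of(head, struct foo, rcu));
    }

    /* Buggy pattern: if two racing paths both release the same object, the
     * second call_rcu() overwrites the rcu_head's list linkage and corrupts
     * the RCU callback list. With this debug option, the second activation
     * triggers a warning instead of a silent hang. */
    static void foo_release(struct foo *f)
    {
            call_rcu(&f->rcu, foo_free_rcu);
    }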

    Signed-off-by: Mathieu Desnoyers
    CC: David S. Miller
    CC: "Paul E. McKenney"
    CC: akpm@linux-foundation.org
    CC: mingo@elte.hu
    CC: laijs@cn.fujitsu.com
    CC: dipankar@in.ibm.com
    CC: josh@joshtriplett.org
    CC: dvhltc@us.ibm.com
    CC: niv@us.ibm.com
    CC: tglx@linutronix.de
    CC: peterz@infradead.org
    CC: rostedt@goodmis.org
    CC: Valdis.Kletnieks@vt.edu
    CC: dhowells@redhat.com
    CC: eric.dumazet@gmail.com
    CC: Alexey Dobriyan
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Lai Jiangshan

    Mathieu Desnoyers
     
  • Remove all rcu head inits. We don't care about the RCU head state before passing
    it to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can
    keep track of objects on the stack.
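
    As a hedged sketch of the surviving on-stack API (the waiter type and
    callback below are illustrative, not kernel code):

    #include <linux/rcupdate.h>
    #include <linux/completion.h>

    struct stack_waiter {                   /* hypothetical type */
            struct rcu_head head;
            struct completion done;
    };

    static void stack_waiter_wake(struct rcu_head *head)
    {
            complete(&container_of(head, struct stack_waiter, head)->done);
    }

    static void wait_one_grace_period(void)
    {
            struct stack_waiter w;

            init_completion(&w.done);
            init_rcu_head_on_stack(&w.head);  /* tell debugobjects it lives on the stack */
            call_rcu(&w.head, stack_waiter_wake);
            wait_for_completion(&w.done);
            destroy_rcu_head_on_stack(&w.head);
    }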

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Mathieu Desnoyers
    Signed-off-by: Paul E. McKenney
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Matt Mackall
    Cc: Andrew Morton

    Paul E. McKenney
     
  • Remove all rcu head inits. We don't care about the RCU head state before passing
    it to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can
    keep track of objects on the stack.

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Mathieu Desnoyers
    Signed-off-by: Paul E. McKenney
    Cc: Alexander Viro
    Cc: Andries Brouwer

    Paul E. McKenney
     
  • Remove all rcu head inits. We don't care about the RCU head state before passing
    it to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can
    keep track of objects on the stack.

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Mathieu Desnoyers
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

12 Jun, 2010

27 commits


11 Jun, 2010

9 commits

  • When we use remap_file_pages() to remap a file on btrfs, it always returns an
    error, because btrfs didn't set VM_CAN_NONLINEAR for the vma.
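
    A minimal sketch of the kind of fix this implies, assuming the usual
    ->mmap pattern (the vm_ops name is illustrative):

    #include <linux/fs.h>
    #include <linux/mm.h>

    static int btrfs_file_mmap(struct file *filp, struct vm_area_struct *vma)
    {
            vma->vm_ops = &btrfs_file_vm_ops;   /* existing vm_ops (illustrative) */
            vma->vm_flags |= VM_CAN_NONLINEAR;  /* lets remap_file_pages() succeed */
            return 0;
    }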

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
     
  • refs can be used uninitialized if btrfs_lookup_extent_info() fails on the
    first pass through the loop. In the original code, if that happens,
    check_path_shared() probably returns 1 anyway; this patch makes it return 1
    explicitly, for safety.
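
    The shape of the defensive fix, sketched inside check_path_shared()'s loop
    (argument names are illustrative):

    u64 refs = 0;
    int ret;

    ret = btrfs_lookup_extent_info(trans, root, bytenr, num_bytes,
                                   &refs, NULL);
    if (ret)
            return 1;   /* lookup failed: report "shared" (the safe answer)
                           rather than reading uninitialized refs */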

    Signed-off-by: Dan Carpenter
    Signed-off-by: Chris Mason

    Dan Carpenter
     
  • It seems that when btrfs_fallocate was converted to use the new ENOSPC stuff,
    we dropped passing the mode to the function that actually does the
    preallocation. This breaks anybody who wants to use FALLOC_FL_KEEP_SIZE.
    Thanks,
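
    For reference, the userspace behavior this restores, as a minimal example
    (the wrapper function is made up; error handling is left to the caller):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>

    /* Preallocate 'len' bytes without changing the file size; the regression
     * silently dropped the KEEP_SIZE mode flag on the way down. */
    int preallocate_keep_size(int fd, off_t len)
    {
            return fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, len);
    }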

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
     
  • We cannot use a loop device that has been connected to a file on btrfs.

    The steps to reproduce are as follows:
    # dd if=/dev/zero of=vdev0 bs=1M count=1024
    # losetup /dev/loop0 vdev0
    # mkfs.btrfs /dev/loop0
    ...
    failed to zero device start -5

    The reason is that btrfs doesn't implement either ->write_begin or ->write of
    the VFS API, so we fix it by setting ->write to do_sync_write().
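
    The shape of the fix, sketched against the file_operations of that era
    (entries abridged; only the .write line is the point here):

    const struct file_operations btrfs_file_operations = {
            .read           = do_sync_read,
            .aio_read       = generic_file_aio_read,
            .write          = do_sync_write,    /* the fix: was previously unset */
            .aio_write      = btrfs_file_aio_write,
            /* ... remaining entries unchanged ... */
    };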

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
     
  • Fix a race at the end of NAPI completion processing: __napi_complete() should
    be called before re-enabling interrupts.
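
    A sketch of the corrected ordering in a NAPI poll handler (driver helper
    names are hypothetical):

    static int my_poll(struct napi_struct *napi, int budget)
    {
            int work = my_rx_process(napi, budget);     /* hypothetical RX work */

            if (work < budget) {
                    __napi_complete(napi);      /* leave polling state first... */
                    my_enable_rx_irq(napi);     /* ...then re-enable the interrupt */
            }
            return work;
    }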

    Signed-off-by: Figo.zhang

    Signed-off-by: David S. Miller

    Figo.zhang
     
  • This patch corrects a bug in pktgen's delay handling.
    It makes sure the inter-packet interval is accurate.

    Signed-off-by: Daniel Turull
    Signed-off-by: Robert Olsson
    Signed-off-by: David S. Miller

    Daniel Turull
     
  • gen_kill_estimator() / gen_new_estimator() are not always called with
    RTNL held.

    net/netfilter/xt_RATEEST.c is one user of these APIs that does not hold
    RTNL, so random corruption can occur between "tc" and "iptables".

    Add a new fine-grained lock instead of trying to take RTNL in netfilter.
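
    A minimal sketch of the fine-grained-lock pattern (names are hypothetical;
    the real lock guards the estimator list in net/core/gen_estimator.c):

    #include <linux/spinlock.h>
    #include <linux/rculist.h>

    static DEFINE_SPINLOCK(est_lock);       /* replaces reliance on RTNL */
    static LIST_HEAD(est_list);

    struct my_estimator {
            struct list_head list;
    };

    static void my_est_register(struct my_estimator *e)
    {
            spin_lock_bh(&est_lock);        /* _bh: updates may race softirq readers */
            list_add_rcu(&e->list, &est_list);
            spin_unlock_bh(&est_lock);
    }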

    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller

    Eric Dumazet
     
  • Currently, the accelerated receive path for VLAN's will
    drop packets if the real device is an inactive slave and
    is not one of the special pkts tested for in
    skb_bond_should_drop(). This behavior is different from
    the non-accelerated path and from pkts over a bonded vlan.

    For example,

    vlanx -> bond0 -> ethx

    will be dropped in the vlan path and not delivered to any
    packet handlers at all. However,

    bond0 -> vlanx -> ethx

    and

    bond0 -> ethx

    will be delivered to handlers that match the exact dev,
    because the VLAN path checks the real_dev, which is not a
    slave, and netif_receive_skb() doesn't drop frames but only
    delivers them to exact matches.

    This patch adds a sk_buff flag which is used for tagging
    skbs that would previously have been dropped, and allows the
    skb to continue on to netif_receive_skb(). Here we add
    logic to check for the deliver_no_wcard flag and, if it
    is set, only deliver to handlers that match exactly. This
    makes both paths above consistent and gives pkt handlers
    a way to identify skbs that come from inactive slaves.
    Without this patch, in some configurations skbs will be
    delivered to handlers with exact matches and in others
    be dropped outright in the vlan path.
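
    A simplified sketch of the exact-match check in the receive path (the real
    logic lives in netif_receive_skb(); this condenses it):

    /* Wildcard handlers register with ptype->dev == NULL. For skbs tagged
     * deliver_no_wcard, skip them and deliver only to handlers bound to
     * this exact device. */
    list_for_each_entry_rcu(ptype, &ptype_all, list) {
            if (skb->deliver_no_wcard && !ptype->dev)
                    continue;
            if (!ptype->dev || ptype->dev == skb->dev)
                    deliver_skb(skb, ptype, orig_dev);
    }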

    I have tested the following 4 configurations in failover modes
    and load balancing modes.

    # bond0 -> ethx

    # vlanx -> bond0 -> ethx

    # bond0 -> vlanx -> ethx

    # bond0 -> ethx
               |
      vlanx -> --

    Signed-off-by: John Fastabend
    Signed-off-by: David S. Miller

    John Fastabend
     
  • If we have enough memory to allocate a new cap release message, do so, so
    that we can send a partial release message immediately. This keeps us from
    making the MDS wait when the cap release it needs is in a partially full
    release message.

    If we fail because of ENOMEM, oh well, they'll just have to wait a bit
    longer.
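
    The opportunistic-allocation pattern described above, as a hedged sketch
    (all helper and type names here are made up, not the ceph API):

    static void try_flush_partial_release(struct mds_session *s)
    {
            struct release_msg *m = alloc_release_msg(GFP_NOFS);  /* may fail */

            if (!m)
                    return;         /* ENOMEM: the MDS simply waits longer */
            install_fresh_msg(s, m);        /* swap in the new message... */
            send_partial_release(s);        /* ...and send the full one now */
    }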

    Signed-off-by: Sage Weil

    Sage Weil