20 Mar, 2012

21 commits

  • [ Upstream commit 3f2010b2ad3d66d5291497c9b274315e7b807ecd ]

    As part of the big network driver reorg, each vendor directory defaults to
    yes, so that older configs can migrate correctly. Looks like this one
    got missed.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Stephen Hemminger
     
  • [ Upstream commit efead8710aad9e384730ecf25eae0287878840d7 ]

    Fix transport header size

    Fix the transport header size for UDP packets.

    Signed-off-by: Shreyas N Bhatewara
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Shreyas Bhatewara
     
  • [ Upstream commit 4c90d3b30334833450ccbb02f452d4972a3c3c3f ]

    When tcp_shifted_skb() shifts bytes from the skb that is currently
    pointed to by 'highest_sack' then the increment of
    TCP_SKB_CB(skb)->seq implicitly advances tcp_highest_sack_seq(). This
    implicit advancement, combined with the recent fix to pass the correct
    SACKed range into tcp_sacktag_one(), caused tcp_sacktag_one() to think
    that the newly SACKed range was before the tcp_highest_sack_seq(),
    leading to a call to tcp_update_reordering() with a degree of
    reordering matching the size of the newly SACKed range (typically just
    1 packet, which is a NOP, but potentially larger).

    This commit fixes this by simply calling tcp_sacktag_one() before the
    TCP_SKB_CB(skb)->seq advancement that can advance our notion of the
    highest SACKed sequence.

    Correspondingly, we can simplify the code a little now that
    tcp_shifted_skb() should update the lost_cnt_hint in all cases where
    skb == tp->lost_skb_hint.
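
    A minimal userspace illustration of the ordering issue (not the kernel
    code; fake_skb, sacktag_one() and highest_sack_seq() are hypothetical
    stand-ins for the structures and helpers named above): when the shifted
    skb is also the highest-SACKed skb, advancing its seq before tagging
    makes the newly SACKed range look older than the highest SACKed sequence.

    #include <stdio.h>
    #include <stdint.h>

    struct fake_skb { uint32_t seq; uint32_t end_seq; };

    /* Stand-in for tcp_highest_sack_seq(): derived from the highest-SACKed skb. */
    static uint32_t highest_sack_seq(const struct fake_skb *highest_sack)
    {
        return highest_sack->seq;
    }

    /* Stand-in for the comparison inside tcp_sacktag_one(). */
    static void sacktag_one(uint32_t start_seq, const struct fake_skb *highest_sack)
    {
        if (start_seq < highest_sack_seq(highest_sack))
            printf("start_seq %u < highest SACK %u: looks old, reordering recorded\n",
                   start_seq, highest_sack_seq(highest_sack));
        else
            printf("start_seq %u >= highest SACK %u: handled normally\n",
                   start_seq, highest_sack_seq(highest_sack));
    }

    int main(void)
    {
        struct fake_skb skb = { .seq = 1000, .end_seq = 2000 };
        const struct fake_skb *highest_sack = &skb; /* skb is the highest-SACKed skb */
        uint32_t shifted = 500;                     /* bytes shifted out of skb */

        /* Old order: advancing skb.seq also advances the "highest SACKed seq". */
        skb.seq += shifted;
        sacktag_one(1000, highest_sack);

        /* Fixed order: tag the range first, then advance skb.seq. */
        skb.seq = 1000;
        sacktag_one(1000, highest_sack);
        skb.seq += shifted;

        return 0;
    }

    Compiled with gcc, the first call reports spurious reordering while the
    second (tag-before-advance) order does not.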

    Signed-off-by: Neal Cardwell
    Acked-by: Yuchung Cheng
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Neal Cardwell
     
  • [ Upstream commit ff3bc1e7527504a93710535611b2f812f3bb89bf ]

    When pre-allocating skbs for received packets, we set ip_summed =
    CHECKSUM_UNNECESSARY. We used to change it back to CHECKSUM_NONE when
    the received packet had an incorrect checksum or unhandled protocol.

    Commit bc8acf2c8c3e43fcc192762a9f964b3e9a17748b ('drivers/net: avoid
    some skb->ip_summed initializations') mistakenly replaced the latter
    assignment with a DEBUG-only assertion that ip_summed ==
    CHECKSUM_NONE. This assertion is always false, but it seems no-one
    has exercised this code path in a DEBUG build.

    Fix this by moving our assignment of CHECKSUM_UNNECESSARY into
    efx_rx_packet_gro().

    Signed-off-by: Ben Hutchings
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Ben Hutchings
     
  • [ Upstream commit 8a49ad6e89feb5015e77ce6efeb2678947117e20 ]

    This patch fixes a (mostly cosmetic) bug introduced by the patch
    'ppp: Use SKB queue abstraction interfaces in fragment processing'
    found here: http://www.spinics.net/lists/netdev/msg153312.html

    The above patch rewrote and moved the code responsible for cleaning
    up discarded fragments but the new code does not catch every case
    where this is necessary. This results in some discarded fragments
    remaining in the queue, triggering a 'bad seq' error on the
    subsequent call to ppp_mp_reconstruct. Fragments are discarded
    whenever other fragments of the same frame have been lost.
    This can generate a lot of unwanted and misleading log messages.

    This patch also adds additional detail to the debug logging to
    make it clearer which fragments were lost and which other fragments
    were discarded as a result of losses. (Run pppd with 'kdebug 1'
    option to enable debug logging.)

    Signed-off-by: Ben McKeegan
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Ben McKeegan
     
  • [ Upstream commit 03606895cd98c0a628b17324fd7b5ff15db7e3cd ]

    Niccolò Belli reported ipsec crashes in case we handle a frame without
    a mac header (ATM in his case).

    Before copying mac header, better make sure it is present.

    Bugzilla reference: https://bugzilla.kernel.org/show_bug.cgi?id=42809

    Reported-by: Niccolò Belli
    Tested-by: Niccolò Belli
    Signed-off-by: Eric Dumazet
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Eric Dumazet
     
  • [ Upstream commit 84338a6c9dbb6ff3de4749864020f8f25d86fc81 ]

    The race condition being fixed unfolds as follows (see the sketch after
    the list):

    1. While function neigh_periodic_work scans the neighbor hash table
    pointed by field tbl->nht, it unlocks and locks tbl->lock between
    buckets in order to call cond_resched.

    2. Assume that function neigh_periodic_work calls cond_resched, that is,
    the lock tbl->lock is available, and function neigh_hash_grow runs.

    3. Once function neigh_hash_grow finishes, and RCU calls
    neigh_hash_free_rcu, the original struct neigh_hash_table that function
    neigh_periodic_work was using doesn't exist anymore.

    4. Once back at neigh_periodic_work, whenever the old struct
    neigh_hash_table is accessed, things can go badly.
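
    A rough userspace analogue of the fix (pthreads instead of RCU and the
    table's lock; all names are illustrative, not the kernel code): after the
    lock is dropped and re-taken around the resched point, the scanner
    re-reads the shared table pointer instead of keeping a cached copy that a
    concurrent grow may have replaced and freed.

    #include <pthread.h>
    #include <stdlib.h>

    struct hash_table { int nbuckets; /* buckets would live here */ };

    static pthread_mutex_t tbl_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct hash_table *nht;          /* analogue of tbl->nht */

    static void hash_grow(void)             /* analogue of neigh_hash_grow() */
    {
        struct hash_table *new_nht = malloc(sizeof(*new_nht));
        new_nht->nbuckets = nht->nbuckets * 2;

        pthread_mutex_lock(&tbl_lock);
        struct hash_table *old = nht;
        nht = new_nht;
        pthread_mutex_unlock(&tbl_lock);

        free(old);                          /* the kernel defers this via RCU */
    }

    static void periodic_work(void)         /* analogue of neigh_periodic_work() */
    {
        pthread_mutex_lock(&tbl_lock);
        struct hash_table *tbl = nht;

        for (int i = 0; i < tbl->nbuckets; i++) {
            /* ... scan bucket i ... */

            pthread_mutex_unlock(&tbl_lock);  /* window where hash_grow() may run */
            pthread_mutex_lock(&tbl_lock);
            tbl = nht;                        /* the fix: re-read under the lock */
        }
        pthread_mutex_unlock(&tbl_lock);
    }

    int main(void)
    {
        nht = malloc(sizeof(*nht));
        nht->nbuckets = 8;
        periodic_work();
        hash_grow();
        free(nht);
        return 0;
    }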

    Signed-off-by: Michel Machado
    CC: "David S. Miller"
    CC: Eric Dumazet
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Michel Machado
     
  • [ Upstream commit 11aad99af6ef629ff3b05d1c9f0936589b204316 ]

    This driver attempts to use two TX rings but lacks proper support:

    1) IRQ handler only takes care of TX completion on the first TX ring
    2) the stop/start logic uses the legacy functions (for non-multiqueue
    drivers)

    This means all packets with skb mark set to 1 are sent through the high
    queue but are never cleaned, so the queue eventually fills and blocks the
    device, triggering the infamous "NETDEV WATCHDOG" message.

    Let's use a single TX ring to fix the problem; this driver is not a real
    multiqueue one yet.

    Minimal fix for stable kernels.

    Reported-by: Thomas Meyer
    Tested-by: Thomas Meyer
    Signed-off-by: Eric Dumazet
    Cc: Jay Cliburn
    Cc: Chris Snook
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Eric Dumazet
     
  • commit 461e74377cfcfc2c0d6bbdfa8fc5fbc21b052c2a upstream.

    We have several reports saying acer-wmi is loaded on ideapads and
    registers an rfkill for wifi which cannot be unblocked.

    Since ideapad-laptop also registers an rfkill for wifi and it works
    reliably, it is fine for acer-wmi not to register an rfkill for wifi
    once VPC2004 is found.

    Also put IBM0068/LEN0068 in the list. Though thinkpad_acpi has no
    wifi rfkill capability, there are reports saying acer-wmi also
    blocks wireless on ThinkPad E520/E420.

    Signed-off-by: Ike Panhc
    Signed-off-by: Matthew Garrett
    Cc: Jonathan Nieder
    Signed-off-by: Greg Kroah-Hartman

    Ike Panhc
     
  • commit 097b180ca09b581ef0dc24fbcfc1b227de3875df upstream.

    complete_walk() already puts nd->path, no need to do it again at cleanup time.

    This would result in Oopses if triggered; apparently the codepath is not
    too well exercised.

    Signed-off-by: Miklos Szeredi
    Signed-off-by: Al Viro
    Signed-off-by: Greg Kroah-Hartman

    Miklos Szeredi
     
  • commit 7f6c7e62fcc123e6bd9206da99a2163fe3facc31 upstream.

    complete_walk() returns either ECHILD or ESTALE. do_last() turns this into
    ECHILD unconditionally. If not in RCU mode, this error will reach
    userspace, which is complete nonsense.

    Signed-off-by: Miklos Szeredi
    Signed-off-by: Al Viro
    Signed-off-by: Greg Kroah-Hartman

    Miklos Szeredi
     
  • commit d5751469f210d2149cc2159ffff66cbeef6da3f2 upstream.

    Reorganize the code so that the memory is already allocated before the
    spinlocked loop.
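
    A userspace sketch of the pattern behind this reorganization (illustrative
    names and sizes, not the cifs code): everything is allocated before the
    lock is taken, so the spinlocked loop itself never allocates (in the
    kernel, allocation may sleep, which is not allowed under a spinlock).

    #include <pthread.h>
    #include <stdlib.h>

    #define NR_ITEMS 8

    static pthread_spinlock_t list_lock;
    static void *pending[NR_ITEMS];         /* list protected by list_lock */

    static int queue_items(void)
    {
        void *bufs[NR_ITEMS];
        int i;

        /* Allocate everything up front, outside the lock. */
        for (i = 0; i < NR_ITEMS; i++) {
            bufs[i] = malloc(64);
            if (!bufs[i])
                goto undo;
        }

        /* The spinlocked loop only links the preallocated buffers. */
        pthread_spin_lock(&list_lock);
        for (i = 0; i < NR_ITEMS; i++)
            pending[i] = bufs[i];
        pthread_spin_unlock(&list_lock);
        return 0;

    undo:
        while (i--)
            free(bufs[i]);
        return -1;
    }

    int main(void)
    {
        pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
        int r = queue_items();
        for (int i = 0; r == 0 && i < NR_ITEMS; i++)
            free(pending[i]);
        pthread_spin_destroy(&list_lock);
        return r ? 1 : 0;
    }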

    Reviewed-by: Jeff Layton
    Signed-off-by: Pavel Shilovsky
    Signed-off-by: Steve French
    Signed-off-by: Greg Kroah-Hartman

    Pavel Shilovsky
     
  • commit 87e24f4b67e68d9fd8df16e0bf9c66d1ad2a2533 upstream.

    Verified using the proglet below. Before:

    [root@westmere ~]# perf stat -e node-stores -e node-store-misses ./numa 0
    remote write

    Performance counter stats for './numa 0':

    2,101,554 node-stores
    2,096,931 node-store-misses

    5.021546079 seconds time elapsed

    [root@westmere ~]# perf stat -e node-stores -e node-store-misses ./numa 1
    local write

    Performance counter stats for './numa 1':

    501,137 node-stores
    199 node-store-misses

    5.124451068 seconds time elapsed

    After:

    [root@westmere ~]# perf stat -e node-stores -e node-store-misses ./numa 0
    remote write

    Performance counter stats for './numa 0':

    2,107,516 node-stores
    2,097,187 node-store-misses

    5.012755149 seconds time elapsed

    [root@westmere ~]# perf stat -e node-stores -e node-store-misses ./numa 1
    local write

    Performance counter stats for './numa 1':

    2,063,355 node-stores
    165 node-store-misses

    5.082091494 seconds time elapsed

    #define _GNU_SOURCE

    /* Include list reconstructed from what the code below uses; the header
     * names were lost in this rendering of the original commit message. */
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <dirent.h>
    #include <signal.h>
    #include <sys/mman.h>
    #include <numaif.h>

    #define SIZE (32*1024*1024)

    volatile int done;

    void sig_done(int sig)
    {
        done = 1;
    }

    int main(int argc, char **argv)
    {
        cpu_set_t *mask, *mask2;
        size_t size;
        int i, err, t;
        int nrcpus = 1024;
        char *mem;
        unsigned long nodemask = 0x01; /* node 0 */
        DIR *node;
        struct dirent *de;
        int read = 0;
        int local = 0;

        if (argc < 2) {
            printf("usage: %s [0-3]\n", argv[0]);
            printf(" bit0 - local/remote\n");
            printf(" bit1 - read/write\n");
            exit(0);
        }

        switch (atoi(argv[1])) {
        case 0:
            printf("remote write\n");
            break;
        case 1:
            printf("local write\n");
            local = 1;
            break;
        case 2:
            printf("remote read\n");
            read = 1;
            break;
        case 3:
            printf("local read\n");
            local = 1;
            read = 1;
            break;
        }

        mask = CPU_ALLOC(nrcpus);
        size = CPU_ALLOC_SIZE(nrcpus);
        CPU_ZERO_S(size, mask);

        node = opendir("/sys/devices/system/node/node0/");
        if (!node)
            perror("opendir");
        while ((de = readdir(node))) {
            int cpu;

            if (sscanf(de->d_name, "cpu%d", &cpu) == 1)
                CPU_SET_S(cpu, size, mask);
        }
        closedir(node);

        mask2 = CPU_ALLOC(nrcpus);
        CPU_ZERO_S(size, mask2);
        for (i = 0; i < size; i++)
            CPU_SET_S(i, size, mask2);
        CPU_XOR_S(size, mask2, mask2, mask); // invert

        if (!local)
            mask = mask2;

        err = sched_setaffinity(0, size, mask);
        if (err)
            perror("sched_setaffinity");

        mem = mmap(0, SIZE, PROT_READ|PROT_WRITE,
                   MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        err = mbind(mem, SIZE, MPOL_BIND, &nodemask, 8*sizeof(nodemask), MPOL_MF_MOVE);
        if (err)
            perror("mbind");

        signal(SIGALRM, sig_done);
        alarm(5);

        if (!read) {
            while (!done) {
                for (i = 0; i < SIZE; i++)
                    mem[i] = 0x01;
            }
        } else {
            while (!done) {
                for (i = 0; i < SIZE; i++)
                    t += *(volatile char *)(mem + i);
            }
        }

        return 0;
    }

    Signed-off-by: Peter Zijlstra
    Cc: Stephane Eranian
    Link: http://lkml.kernel.org/n/tip-tq73sxus35xmqpojf7ootxgs@git.kernel.org
    Signed-off-by: Ingo Molnar
    Signed-off-by: Greg Kroah-Hartman

    Peter Zijlstra
     
  • commit 3780d038fdf4b5ef26ead10b0604ab1f46dd9510 upstream.

    It is possible that we stop a queue and then never wake it up again,
    especially when packets are transmitted fast. That can be easily
    reproduced with the tx queue entry_num modified to some small value, e.g. 16.

    If mac80211 already holds local->queue_stop_reason_lock, then we can wait
    on that lock in both rt2x00queue_pause_queue() and
    rt2x00queue_unpause_queue(). After dropping ->queue_stop_reason_lock it
    is possible that __ieee80211_wake_queue() will be performed before
    __ieee80211_stop_queue(), hence we stop the queue and never wake it up
    again.

    Another race condition is possible when, between the rt2x00queue_threshold()
    check and rt2x00queue_pause_queue(), we process all pending tx
    buffers on a different CPU. This might happen if, for example, an
    interrupt is triggered on the CPU performing rt2x00mac_tx().

    To prevent these race conditions, serialize pause/unpause with
    queue->tx_lock.
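
    A userspace analogue of the serialization added here (a pthread mutex in
    place of queue->tx_lock; names are illustrative, not the rt2x00 code):
    the "queue is almost full, pause" decision and the "entries completed,
    unpause" decision are taken under the same lock, so they cannot
    interleave in the way described above.

    #include <pthread.h>
    #include <stdbool.h>

    struct txq {
        pthread_mutex_t tx_lock;
        int pending;
        int limit;
        bool stopped;
    };

    static void tx_enqueue(struct txq *q)
    {
        pthread_mutex_lock(&q->tx_lock);
        q->pending++;
        if (q->pending >= q->limit)     /* threshold check ...            */
            q->stopped = true;          /* ... and pause, taken together  */
        pthread_mutex_unlock(&q->tx_lock);
    }

    static void tx_complete(struct txq *q)
    {
        pthread_mutex_lock(&q->tx_lock);
        q->pending--;
        if (q->stopped && q->pending < q->limit)
            q->stopped = false;         /* unpause under the same lock */
        pthread_mutex_unlock(&q->tx_lock);
    }

    int main(void)
    {
        struct txq q = { .pending = 0, .limit = 16, .stopped = false };

        pthread_mutex_init(&q.tx_lock, NULL);
        tx_enqueue(&q);
        tx_complete(&q);
        pthread_mutex_destroy(&q.tx_lock);
        return q.stopped;
    }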

    Signed-off-by: Stanislaw Gruszka
    Acked-by: Gertjan van Wingerde
    Signed-off-by: John W. Linville
    Signed-off-by: Greg Kroah-Hartman

    Stanislaw Gruszka
     
  • commit bd0f2e6da7ea9e225cb2dbd3229e25584b0e9538 upstream.

    The HS/VS interrupt handler needs to access the pipeline object. It
    erroneously tries to get it from the CCDC output video node, which isn't
    necessarily included in the pipeline. This leads to a NULL pointer
    dereference.

    Fix the bug by getting the pipeline object from the CCDC subdev entity.

    Reported-by: Gary Thomas
    Signed-off-by: Laurent Pinchart
    Acked-by: Sakari Ailus
    Signed-off-by: Mauro Carvalho Chehab
    Signed-off-by: Greg Kroah-Hartman

    Laurent Pinchart
     
  • commit 4949be16822e92a18ea0cc1616319926628092ee upstream.

    Right now we won't touch ASPM state if ASPM is disabled, except in the case
    where we find a device that appears to be too old to reliably support ASPM.
    Right now we'll clear it in that case, which is almost certainly the wrong
    thing to do. The easiest way around this is just to disable the blacklisting
    when ASPM is disabled.

    Signed-off-by: Matthew Garrett
    Signed-off-by: Jesse Barnes
    Signed-off-by: Greg Kroah-Hartman

    Matthew Garrett
     
  • commit a7f4255f906f60f72e00aad2fb000939449ff32e upstream.

    Commit f0fbf0abc093 ("x86: integrate delay functions") converted
    delay_tsc() into a random delay generator for 64 bit. The reason is
    that it merged the mostly identical versions of delay_32.c and
    delay_64.c. However, one subtle difference remained in the result:

    static void delay_tsc(unsigned long loops)
    {
    - unsigned bclock, now;
    + unsigned long bclock, now;

    Now the function uses rdtscl() which returns the lower 32bit of the
    TSC. On 32bit that's not problematic as unsigned long is 32bit. On 64
    bit this fails when the lower 32bit are close to wrap around when
    bclock is read, because the following check

    if ((now - bclock) >= loops)
    break;

    evaluates to true on 64 bit for e.g. bclock = 0xffffffff and now = 0,
    because the unsigned long (now - bclock) of these values results in
    0xffffffff00000001, which is definitely larger than the loops
    value. That explains Tvrtko's observation:

    "Because I am seeing udelay(500) (_occasionally_) being short, and
    that by delaying for some duration between 0us (yep) and 491us."

    Make those variables explicitly u32 again, so this works for both 32
    and 64 bit.
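
    A minimal userspace demonstration of the arithmetic described above, on a
    64-bit (LP64) build:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        unsigned long bclock = 0xffffffffUL;  /* lower 32 bits of TSC just before wrap */
        unsigned long now = 0;                /* lower 32 bits after the wrap */
        unsigned long loops = 1000;

        /* Buggy 64-bit variant: the difference is huge, so ">= loops" fires. */
        printf("unsigned long: now - bclock = %#lx, >= loops: %d\n",
               now - bclock, (now - bclock) >= loops);

        /* Fixed u32 variant: the subtraction wraps to 1, the loop keeps spinning. */
        uint32_t b32 = (uint32_t)bclock, n32 = (uint32_t)now;
        printf("u32:           now - bclock = %#x, >= loops: %d\n",
               (unsigned int)(n32 - b32), (uint32_t)(n32 - b32) >= (uint32_t)loops);

        return 0;
    }

    With unsigned long the difference is 0xffffffff00000001 and the exit
    condition fires immediately; with u32 it wraps to 1, as the loop expects.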

    Reported-by: Tvrtko Ursulin
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     
  • commit c7b285550544c22bc005ec20978472c9ac7138c6 upstream.

    Current code has put_ioctx() called asynchronously from aio_fput_routine();
    that's done *after* we have killed the request that used to pin ioctx,
    so there's nothing to stop io_destroy() waiting in wait_for_all_aios()
    from progressing. As a result, we can end up with the async call of
    put_ioctx() being the last one and possibly happening during exit_mmap()
    or elf_core_dump(), neither of which expects stray munmap() being done
    to them...

    We do need to prevent _freeing_ ioctx until aio_fput_routine() is done
    with that, but that's all we care about - neither io_destroy() nor
    exit_aio() will progress past wait_for_all_aios() until aio_fput_routine()
    does really_put_req(), so the ioctx teardown won't be done until then
    and we don't care about the contents of ioctx past that point.

    Since actual freeing of these suckers is RCU-delayed, we don't need to
    bump ioctx refcount when request goes into list for async removal.
    All we need is rcu_read_lock held just over the ->ctx_lock-protected
    area in aio_fput_routine().

    Signed-off-by: Al Viro
    Reviewed-by: Jeff Moyer
    Acked-by: Benjamin LaHaise
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Al Viro
     
  • commit 86b62a2cb4fc09037bbce2959d2992962396fd7f upstream.

    Have ioctx_alloc() return an extra reference, so that caller would drop it
    on success and not bother with re-grabbing it on failure exit. The current
    code is obviously broken - io_destroy() from another thread that managed
    to guess the address io_setup() would've returned would free ioctx right
    under us; it gets especially interesting if the aio_context_t * we pass to
    io_setup() points to a PROT_READ mapping, so put_user() fails and we end
    up doing io_destroy() on a kioctx another thread has just freed...

    Signed-off-by: Al Viro
    Acked-by: Benjamin LaHaise
    Reviewed-by: Jeff Moyer
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Al Viro
     
  • commit 526af6eb4dc71302f59806e2ccac7793963a7fe0 upstream.

    The coef setup in alc269_fill_coef() was designed only for ALC269VB
    model, and this has some bad effects for other ALC269 variants, such
    as turning off the external mic input. Apply it only to ALC269VB.

    Signed-off-by: Kailang Yang
    Signed-off-by: Takashi Iwai
    Signed-off-by: Greg Kroah-Hartman

    Kailang Yang
     
  • commit b2ccf065f7b23147ed135a41b01d05a332ca6b7e upstream.

    The neo1973 driver had the wrong codec name, which prevented the "sound
    card" from appearing.

    Signed-off-by: Denis 'GNUtoo' Carikli
    Signed-off-by: Mark Brown
    Signed-off-by: Greg Kroah-Hartman

    Denis 'GNUtoo' Carikli
     

14 Mar, 2012

2 commits


13 Mar, 2012

17 commits

  • Greg Kroah-Hartman
     
  • commit 134d12fae0bb8f3d60dc7440a9e1950bb5427167 upstream.

    For some weird (freudian?) reason, commit 435792d "ARM: OMAP: make
    iommu subsys_initcall to fix builtin omap3isp" unintentionally changed
    the mailbox's initcall instead of the iommu's.

    Fix that.

    Reported-by: Fernando Guzman Lugo
    Signed-off-by: Ohad Ben-Cohen
    Cc: Laurent Pinchart
    Cc: Joerg Roedel
    Cc: Tony Lindgren
    Signed-off-by: Joerg Roedel
    Signed-off-by: Greg Kroah-Hartman

    Ohad Ben-Cohen
     
  • commit c88db233251b026fda775428f0250c760553e216 upstream.

    Rename static struct pci_driver pch_spi_pcidev to
    pch_spi_pcidev_driver to get rid of warnings from modpost checks.

    Signed-off-by: Danny Kukawka
    Signed-off-by: Grant Likely
    Signed-off-by: Greg Kroah-Hartman

    Danny Kukawka
     
  • commit 97e43c983c721a47546e6db3b7711dcd912a6481 upstream.

    Silence following warnings:
    WARNING: drivers/mfd/cs5535-mfd.o(.data+0x20): Section mismatch in
    reference from the variable cs5535_mfd_drv to the function
    .devinit.text:cs5535_mfd_probe()
    The variable cs5535_mfd_drv references
    the function __devinit cs5535_mfd_probe()
    If the reference is valid then annotate the
    variable with __init* or __refdata (see linux/init.h) or name the variable:
    *driver, *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console

    WARNING: drivers/mfd/cs5535-mfd.o(.data+0x28): Section mismatch in
    reference from the variable cs5535_mfd_drv to the function
    .devexit.text:cs5535_mfd_remove()
    The variable cs5535_mfd_drv references
    the function __devexit cs5535_mfd_remove()
    If the reference is valid then annotate the
    variable with __exit* (see linux/init.h) or name the variable:
    *driver, *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console

    Rename the variable from *_drv to *_driver so that
    modpost ignores the OK references to __devinit/__devexit
    functions.

    Signed-off-by: Christian Gmeiner
    Acked-by: Andres Salomon
    Signed-off-by: Samuel Ortiz
    Signed-off-by: Greg Kroah-Hartman

    Christian Gmeiner
     
  • commit 474de3bbadd9cb75ffc32cc759c40d868343d46c upstream.

    Fix scan_timers() to be __devinit and not __init since
    the function gets called from cs5535_mfgpt_probe(), which is
    __devinit.

    Signed-off-by: Danny Kukawka
    Signed-off-by: Greg Kroah-Hartman

    Danny Kukawka
     
  • commit 0ca93de9b789e0eb05e103f0c04de72df13da73a upstream.

    Fix dm-raid flush support.

    Both md and dm have support for flush, but the dm-raid target
    forgot to set the flag to indicate that flushes should be
    passed on. (Important for data integrity e.g. with writeback cache
    enabled.)

    Signed-off-by: Jonathan Brassow
    Acked-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon
    Signed-off-by: Greg Kroah-Hartman

    Jonathan E Brassow
     
  • commit 3aa3b2b2b1edb813dc5342d0108befc39541542d upstream.

    The 'rebuild' parameter is used to rebuild individual devices in an
    array (e.g. resynchronize a RAID1 device or recalculate a parity device
    in higher RAID). The MD_CHANGE_DEVS flag must be set when this
    parameter is given in order to write out the superblocks and make the
    change take immediate effect. The code that handles new devices in
    super_load already sets MD_CHANGE_DEVS and 'FirstUse'. (The 'FirstUse'
    flag was being set as a special case for rebuilds in
    super_init_validation.)

    Add a condition for rebuilds in super_load to take care of both flags
    without the special case in 'super_init_validation'.

    Signed-off-by: Jonathan Brassow
    Signed-off-by: Alasdair G Kergon
    Signed-off-by: Greg Kroah-Hartman

    Jonathan E Brassow
     
  • commit af63bcb817cf708f53bcae6edc2e3fb7dd7d8051 upstream.

    Correct the number of mapped sectors shown on a thin device's
    status line by decrementing td->mapped_blocks in __remove() each time
    a block is removed.

    Signed-off-by: Joe Thornber
    Acked-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon
    Signed-off-by: Greg Kroah-Hartman

    Joe Thornber
     
  • commit 4469a5f387fdde956894137751a41473618a4a52 upstream.

    If dm_sm_disk_create() fails the superblock must be unlocked.

    Signed-off-by: Joe Thornber
    Acked-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon
    Signed-off-by: Greg Kroah-Hartman

    Joe Thornber
     
  • commit 1f3db25d8be4ac50b897b39609802183ea68a514 upstream.

    The __open_device() error paths in __create_thin() and __create_snap()
    incorrectly call __close_device() even if td was not initialized by
    __open_device(). Remove this.

    Also document __open_device() return values, remove a redundant
    td->changed = 1 in __create_thin(), and insert an additional
    safeguard against creating an already-existing device.

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon
    Signed-off-by: Greg Kroah-Hartman

    Mike Snitzer
     
  • commit 1212268fd9816e3b8801e57b896fceaec71969ad upstream.

    The following BUG is hit on the first read that is submitted to a dm
    flakey test device while the device is "down" if the corrupt_bio_byte
    feature wasn't requested when the device's table was loaded.

    Example DM table that will hit this BUG:
    0 2097152 flakey 8:0 2048 0 30

    This bug was introduced by commit a3998799fb4df0b0af8271a7d50c4269032397aa
    (dm flakey: add corrupt_bio_byte feature) in v3.1-rc1.

    BUG: unable to handle kernel paging request at ffff8801cfce3fff
    IP: [] corrupt_bio_data+0x6e/0xae [dm_flakey]
    PGD 1606063 PUD 0
    Oops: 0002 [#1] SMP
    ...
    Call Trace:

    [] flakey_end_io+0x42/0x48 [dm_flakey]
    [] clone_endio+0x54/0xb6 [dm_mod]
    [] bio_endio+0x2d/0x2f
    [] req_bio_endio+0x96/0x9f
    [] blk_update_request+0x1dc/0x3a9
    [] ? rcu_read_unlock+0x21/0x23
    [] blk_update_bidi_request+0x20/0x6e
    [] blk_end_bidi_request+0x1f/0x5d
    [] blk_end_request+0x10/0x12
    [] scsi_io_completion+0x1e5/0x4b1
    [] scsi_finish_command+0xec/0xf5
    [] scsi_softirq_done+0xff/0x108
    [] blk_done_softirq+0x84/0x98
    [] __do_softirq+0xe3/0x1d5
    [] ? _raw_spin_lock+0x62/0x69
    [] ? handle_irq_event+0x4c/0x61
    [] call_softirq+0x1c/0x30
    [] do_softirq+0x4b/0xa3
    [] irq_exit+0x53/0xca
    [] do_IRQ+0x9d/0xb4
    [] common_interrupt+0x73/0x73
    ...

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon
    Signed-off-by: Greg Kroah-Hartman

    Mike Snitzer
     
  • commit 0c535e0d6f463365c29623350dbd91642363c39b upstream.

    This patch fixes a crash by recognising discards in dm_io.

    Currently dm_mirror can send REQ_DISCARD bios if running over a
    discard-enabled device, and without support in dm_io the system
    crashes badly.

    BUG: unable to handle kernel paging request at 00800000
    IP: __bio_add_page.part.17+0xf5/0x1e0
    ...
    bio_add_page+0x56/0x70
    dispatch_io+0x1cf/0x240 [dm_mod]
    ? km_get_page+0x50/0x50 [dm_mod]
    ? vm_next_page+0x20/0x20 [dm_mod]
    ? mirror_flush+0x130/0x130 [dm_mirror]
    dm_io+0xdc/0x2b0 [dm_mod]
    ...

    Introduced in 2.6.38-rc1 by commit 5fc2ffeabb9ee0fc0e71ff16b49f34f0ed3d05b4
    (dm raid1: support discard).

    Signed-off-by: Milan Broz
    Acked-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon
    Signed-off-by: Greg Kroah-Hartman

    Milan Broz
     
  • commit 902c6a96a7cb9c50d2a8aed1788efad0a5d8f04c upstream.

    If 'argc' is zero we jump to the 'out:' label, but this leaks the
    (unused) memory that 'dm_split_args()' allocated for 'argv' if the
    string being split consisted entirely of whitespace. Jump to the
    'out_argv:' label instead to free up that memory.
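
    A userspace sketch of the error-path pattern (split_args() and
    parse_target() are hypothetical stand-ins, not the dm code): the argv
    buffer is allocated even when the input splits into zero arguments, so
    the argc == 0 bail-out has to jump to the label that frees it rather
    than past it.

    #include <stdlib.h>
    #include <stdio.h>

    /* Stand-in for dm_split_args(): always allocates an argv array. */
    static int split_args(const char *input, int *argc, char ***argv)
    {
        *argv = calloc(16, sizeof(char *)); /* allocated even for all-whitespace input */
        if (!*argv)
            return -1;
        *argc = 0;                          /* pretend input was only whitespace */
        (void)input;
        return 0;
    }

    static int parse_target(const char *params)
    {
        char **argv;
        int argc, r;

        r = split_args(params, &argc, &argv);
        if (r)
            goto out;                       /* nothing was allocated */

        if (!argc) {
            fprintf(stderr, "no target arguments\n");
            r = -1;
            goto out_argv;                  /* the fix: free argv instead of leaking it */
        }

        /* ... use argv[0..argc-1] ... */

    out_argv:
        free(argv);
    out:
        return r;
    }

    int main(void)
    {
        return parse_target("   ") ? 1 : 0;
    }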

    Signed-off-by: Jesper Juhl
    Signed-off-by: Alasdair G Kergon
    Signed-off-by: Greg Kroah-Hartman

    Jesper Juhl
     
  • commit 6b7f000eb6a0b81d7a809833edb7a457eedf8512 upstream.

    This function is called from enable_iommus(), which in turn is used
    from amd_iommu_resume().

    Signed-off-by: Jan Beulich
    Signed-off-by: Joerg Roedel
    Signed-off-by: Greg Kroah-Hartman

    Jan Beulich
     
  • commit 4231d47e6fe69f061f96c98c30eaf9fb4c14b96d upstream.

    |kernel BUG at kernel/rtmutex.c:724!
    |[] (rt_spin_lock_slowlock+0x108/0x2bc) from [] (defer_bh+0x1c/0xb4)
    |[] (defer_bh+0x1c/0xb4) from [] (rx_complete+0x14c/0x194)
    |[] (rx_complete+0x14c/0x194) from [] (usb_hcd_giveback_urb+0xa0/0xf0)
    |[] (usb_hcd_giveback_urb+0xa0/0xf0) from [] (musb_giveback+0x34/0x40)
    |[] (musb_giveback+0x34/0x40) from [] (musb_advance_schedule+0xb4/0x1c0)
    |[] (musb_advance_schedule+0xb4/0x1c0) from [] (musb_cleanup_urb.isra.9+0x80/0x8c)
    |[] (musb_cleanup_urb.isra.9+0x80/0x8c) from [] (musb_urb_dequeue+0xec/0x108)
    |[] (musb_urb_dequeue+0xec/0x108) from [] (unlink1+0xbc/0xcc)
    |[] (unlink1+0xbc/0xcc) from [] (usb_hcd_unlink_urb+0x54/0xa8)
    |[] (usb_hcd_unlink_urb+0x54/0xa8) from [] (unlink_urbs.isra.17+0x2c/0x58)
    |[] (unlink_urbs.isra.17+0x2c/0x58) from [] (usbnet_terminate_urbs+0x94/0x10c)
    |[] (usbnet_terminate_urbs+0x94/0x10c) from [] (usbnet_stop+0x100/0x15c)
    |[] (usbnet_stop+0x100/0x15c) from [] (__dev_close_many+0x94/0xc8)

    defer_bh() takes the lock which is held during unlink_urbs(). The safe
    walk suggests that the skb will be removed from the list, and this is done
    by defer_bh(), so it seems to be okay to drop the lock here.

    Reported-by: Aníbal Almeida Pinto
    Signed-off-by: Sebastian Andrzej Siewior
    Acked-by: Oliver Neukum
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Sebastian Siewior
     
  • commit cf00790dea6f210ddd01a6656da58c7c9a4ea0e4 upstream.

    Mesa may set it to 1, causing all primitives to be killed.

    v2: also update the r7xx code

    Signed-off-by: Marek Olšák
    Reviewed-by: Alex Deucher
    Signed-off-by: Dave Airlie
    Signed-off-by: Greg Kroah-Hartman

    Marek Olšák
     
  • commit 9926a67557532acb6cddb1c1add02952175b5c72 upstream.

    Nicolas Cavallari discovered that carl9170 has some
    serious problems delivering data to sleeping stations.

    It turns out that the driver was not honoring two
    important flags (IEEE80211_TX_CTL_POLL_RESPONSE and
    IEEE80211_TX_CTL_CLEAR_PS_FILT) which are set on
    frames that should be sent although the receiving
    station is still in powersave mode.

    Reported-by: Nicolas Cavallari
    Signed-off-by: Christian Lamparter
    Signed-off-by: John W. Linville
    Signed-off-by: Greg Kroah-Hartman

    Christian Lamparter