18 Oct, 2013
5 commits
-
Pull char/misc driver fixes from Greg KH:
"Here are some small iio and w1 driver fixes for 3.12-rc6.There is also a hyper-v fix in here, which turned out to be incorrect,
so it was reverted. That will probably have to wait unto 3.13-rc1 to
get accepted as it's still being discussed"* tag 'char-misc-3.12-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
Revert "Drivers: hv: vmbus: Fix a bug in channel rescind code"
Drivers: hv: vmbus: Fix a bug in channel rescind code
iio:buffer: Free active scan mask in iio_disable_all_buffers()
iio: frequency: adf4350: add missing clk_disable_unprepare() on error in adf4350_probe()
w1 - call request_module with w1 master mutex unlocked
w1 - fix fops in w1_bus_notify -
Pull sound fixes from Takashi Iwai:
"All reasonably small fixes as rc6: a HD-audio mic fix, a us122l mmap
regression fix, and kernel memory leak fix in hdsp driver. Hopefully
this will be the last pull request for 3.12..."

* tag 'sound-3.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
ALSA: hdsp - info leak in snd_hdsp_hwdep_ioctl()
ALSA: us122l: Fix pcm_usb_stream mmapping regression
ALSA: hda - Fix inverted internal mic not indicated on some machines -
Pull apparmor fixes from James Morris:
"A couple more regressions fixed"* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
apparmor: fix bad lock balance when introspecting policy
apparmor: fix memleak of the profile hash -
…/jic23/iio into char-misc-linus
Jonathan writes:
Third set of IIO fixes for the 3.12 cycle.
Two little ones this time:
1) A missing clk_unprepare in adf4350.
2) A missing free of the active_scan_mask when iio_disable_all_buffers() is
called during an unexpected device removal. This leak was introduced by
commit a87c82e454f184a9473f8cdfd4d304205f585f65 ("iio: Stop sampling when
the device is removed") and hence is a regression fix. -
This reverts commit 90d33f3ec519db19d785216299a4ee85ef58ec97 as it's not
the correct fix for this issue, and it causes a build warning to be
added to the kernel tree.

Cc: K. Y. Srinivasan
Cc:
Signed-off-by: Greg Kroah-Hartman
17 Oct, 2013
26 commits
-
Merge misc fixes from Andrew Morton.
* emailed patches from Andrew Morton: (21 commits)
mm: revert mremap pud_free anti-fix
mm: fix BUG in __split_huge_page_pmd
swap: fix set_blocksize race during swapon/swapoff
procfs: call default get_unmapped_area on MMU-present architectures
procfs: fix unintended truncation of returned mapped address
writeback: fix negative bdi max pause
percpu_refcount: export symbols
fs: buffer: move allocation failure loop into the allocator
mm: memcg: handle non-error OOM situations more gracefully
tools/testing/selftests: fix uninitialized variable
block/partitions/efi.c: treat size mismatch as a warning, not an error
mm: hugetlb: initialize PG_reserved for tail pages of gigantic compound pages
mm/zswap: bugfix: memory leak when re-swapon
mm: /proc/pid/pagemap: inspect _PAGE_SOFT_DIRTY only on present pages
mm: migration: do not lose soft dirty bit if page is in migration state
gcov: MAINTAINERS: Add an entry for gcov
mm/hugetlb.c: correct missing private flag clearing
mm/vmscan.c: don't forget to free shrinker->nr_deferred
ipc/sem.c: synchronize semop and semctl with IPC_RMID
ipc: update locking scheme comments
... -
Revert commit 1ecfd533f4c5 ("mm/mremap.c: call pud_free() after fail
calling pmd_alloc()").The original code was correct: pud_alloc(), pmd_alloc(), pte_alloc_map()
ensure that the pud, pmd, pt is already allocated, and seldom do they
need to allocate; on failure, upper levels are freed if appropriate by
the subsequent do_munmap(). Whereas commit 1ecfd533f4c5 did an
unconditional pud_free() of a most-likely still-in-use pud: saved only
by the near-impossibility of pmd_alloc() failing.

Signed-off-by: Hugh Dickins
Cc: Chen Gang
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Occasionally we hit the BUG_ON(pmd_trans_huge(*pmd)) at the end of
__split_huge_page_pmd(): seen when doing madvise(,,MADV_DONTNEED).

It's invalid: we don't always have down_write of mmap_sem there: a racing
do_huge_pmd_wp_page() might have copied-on-write to another huge page
before our split_huge_page() got the anon_vma lock.

Forget the BUG_ON, just go back and try again if this happens.
Signed-off-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
Cc: Andrea Arcangeli
Cc: Naoya Horiguchi
Cc: David Rientjes
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Fix a race between swapoff and swapon. Swapoff used old_block_size from
swap_info outside of swapon_mutex, so it could be overwritten by a
concurrent swapon.

The race has a visible effect only if more than one swap block device
exists with different block sizes (e.g. /dev/sda1 with block size 4096
and /dev/sdb1 with 512). In such a case it leads to setting the block
size of the swapped-off device to the wrong value.

The bug can be triggered with multiple concurrent swapoff and swapon:
0. Swap for some device is on.
1. swapoff:
   First swapoff is called on this device and "struct swap_info_struct
   *p" is assigned. This is done under swap_lock, however the lock is
   released for the call to try_to_unuse().
2. swapon:
   After the assignment above (and before swapoff reacquires swapon_mutex
   and swap_lock) swapon is called on the same device. p->old_block_size
   is assigned the block size of the device. This block size should be the
   same as before, but sometimes it is not. The swapon ends successfully.
3. swapoff:
   Swapoff resumes, grabs the locks and mutex, and continues to disable
   this swap device. Now it sets the block size to the value taken from
   swap_info, which was overwritten by swapon in step 2.

Signed-off-by: Krzysztof Kozlowski
Reported-by: Weijie Yang
Cc: Bob Liu
Cc: Konrad Rzeszutek Wilk
Cc: Shaohua Li
Cc: Minchan Kim
Acked-by: Hugh Dickins
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
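
The locking idea behind the fix, as a generic userspace sketch with made-up
names (this is not the kernel's swapfile code): read shared state only while
the mutex that protects it is held, rather than relying on a copy taken
before the lock was dropped and reacquired.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t swapon_mutex = PTHREAD_MUTEX_INITIALIZER;
    static unsigned int old_block_size = 4096; /* shared, protected by swapon_mutex */

    static void swapoff_like_path(void)
    {
        unsigned int bs;

        pthread_mutex_lock(&swapon_mutex);
        bs = old_block_size;    /* sample the value under the lock that guards it,
                                 * not from a copy a concurrent swapon-like path
                                 * may have overwritten */
        printf("restoring block size %u\n", bs);
        pthread_mutex_unlock(&swapon_mutex);
    }

    int main(void)
    {
        swapoff_like_path();
        return 0;
    }

-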
Commit c4fe24485729 ("sparc: fix PCI device proc file mmap(2)") added
proc_reg_get_unmapped_area in proc_reg_file_ops and
proc_reg_file_ops_no_compat, by which now mmap always returns EIO if
get_unmapped_area method is not defined for the target procfs file,
which causes a regression of mmap on /proc/vmcore.

To address this issue, like get_unmapped_area(), call the default
current->mm->get_unmapped_area on MMU-present architectures if
pde->proc_fops->get_unmapped_area, i.e. the one in actual file
operation in the procfs file, is not defined.

Reported-by: Michael Holzheu
Signed-off-by: HATAYAMA Daisuke
Cc: Alexey Dobriyan
Cc: David S. Miller
Tested-by: Michael Holzheu
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
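
A minimal userspace sketch of the fallback described above, with invented
names (it is not the procfs source): if the file supplies no
get_unmapped_area hook of its own, fall back to a default instead of
failing the mmap.

    #include <stdio.h>

    typedef unsigned long (*get_area_fn)(unsigned long len);

    /* Stand-in for the default current->mm->get_unmapped_area policy. */
    static unsigned long default_get_area(unsigned long len)
    {
        (void)len;
        return 0x7f0000000000UL;    /* pretend 64-bit mapping address */
    }

    /* Per-file hook; NULL means the file provides no implementation. */
    static get_area_fn file_get_area; /* left NULL, like most procfs files */

    static unsigned long proc_like_get_area(unsigned long len)
    {
        /* Fall back to the default rather than failing outright. */
        if (file_get_area)
            return file_get_area(len);
        return default_get_area(len);
    }

    int main(void)
    {
        printf("%#lx\n", proc_like_get_area(4096));
        return 0;
    }

-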
Currently, proc_reg_get_unmapped_area truncates upper 32-bit of the
mapped virtual address returned from get_unmapped_area method in
pde->proc_fops because the variable rv is a signed integer on x86_64. This
is too small to hold a virtual address of type unsigned long on x86_64,
since there a signed integer is 4 bytes while unsigned long is 8 bytes.
To fix this issue, use unsigned long instead.

Fixes a regression added in commit c4fe24485729 ("sparc: fix PCI device
proc file mmap(2)").

Signed-off-by: HATAYAMA Daisuke
Cc: Alexey Dobriyan
Cc: David S. Miller
Tested-by: Michael Holzheu
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
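
A standalone illustration of the narrowing described above (plain userspace
C, not the procfs code): assigning a 64-bit address to a signed 32-bit
variable discards the upper bits and can even sign-extend into a bogus
"negative" value when cast back.

    #include <stdio.h>

    int main(void)
    {
        unsigned long addr = 0x7f00deadb000UL; /* a typical 64-bit mmap address */
        int rv = addr;  /* implementation-defined narrowing: keeps only the low
                         * 32 bits (0xdeadb000), which is negative as an int */

        printf("full address: %#lx\n", addr);
        printf("through int : %#lx\n", (unsigned long)rv); /* sign-extended junk */
        return 0;
    }

-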
Toralf runs trinity on UML/i386. After some time it hangs and the last
message line is:

    BUG: soft lockup - CPU#0 stuck for 22s! [trinity-child0:1521]

It's found that pages_dirtied becomes very large. More than 1000000000
pages in this case:

    period = HZ * pages_dirtied / task_ratelimit;
    BUG_ON(pages_dirtied > 2000000000);
    BUG_ON(pages_dirtied > 1000000000);

A debug hunk catches the negative pause that leads to this:

     if (pause < 0) {
    +    extern int printf(char *, ...);
    +    printf("ick : pause : %li\n", pause);
    +    printf("ick: pages_dirtied : %lu\n", pages_dirtied);
    +    printf("ick: task_ratelimit: %lu\n", task_ratelimit);
    +    BUG_ON(1);
    +    }
     trace_balance_dirty_pages(bdi,

Since pause is bounded by [min_pause, max_pause], where min_pause is also
bounded by max_pause, it's suspected and demonstrated that the max_pause
calculation goes wrong:

    ick: pause         : -717
    ick: min_pause     : -177
    ick: max_pause     : -717
    ick: pages_dirtied : 14
    ick: task_ratelimit: 0

The problem lies in the two "long = unsigned long" assignments in
bdi_max_pause() which might go negative if the highest bit is 1, and the
min_t(long, ...) check failed to protect it falling under 0. Fix all of
them by using "unsigned long" throughout the function.

Signed-off-by: Fengguang Wu
Reported-by: Toralf Förster
Tested-by: Toralf Förster
Reviewed-by: Jan Kara
Cc: Richard Weinberger
Cc: Geert Uytterhoeven
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
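
The failure mode is easy to reproduce in a standalone program (illustrative
only, not bdi_max_pause() itself; min_t below is a simplified stand-in for
the kernel macro): an unsigned long with the top bit set goes negative when
assigned to a long, and min_t(long, ...) then happily picks the negative
value instead of clamping it.

    #include <stdio.h>

    /* simplified stand-in for the kernel's min_t() */
    #define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

    int main(void)
    {
        unsigned long bw = 0xfffffffffffffd33UL; /* top bit set */
        long t = bw;                          /* "long = unsigned long" wraps negative */
        long max_pause = min_t(long, t, 200); /* yields -717, not 200 */

        printf("t = %ld, max_pause = %ld\n", t, max_pause);
        return 0;
    }

-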
Export the interface to be used within modules.
Signed-off-by: Matias Bjorling
Acked-by: Tejun Heo
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Buffer allocation has a very crude indefinite loop around waking the
flusher threads and performing global NOFS direct reclaim because it can
not handle allocation failures.

The most immediate problem with this is that the allocation may fail due
to a memory cgroup limit, where flushers + direct reclaim might not make
any progress towards resolving the situation at all. Because unlike the
global case, a memory cgroup may not have any cache at all, only
anonymous pages but no swap. This situation will lead to a reclaim
livelock with insane IO from waking the flushers and thrashing unrelated
filesystem cache in a tight loop.

Use __GFP_NOFAIL allocations for buffers for now. This makes sure that
any looping happens in the page allocator, which knows how to
orchestrate kswapd, direct reclaim, and the flushers sensibly. It also
allows memory cgroups to detect allocations that can't handle failure
and will allow them to ultimately bypass the limit if reclaim can not
make progress.

Reported-by: azurIt
Signed-off-by: Johannes Weiner
Cc: Michal Hocko
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
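
The __GFP_NOFAIL change can be pictured with a small userspace analogy (the
helper below is invented, not a kernel or libc API): instead of every caller
spinning in its own crude loop around a failing allocation, the loop lives
in one place that owns the retry policy.

    #include <stdlib.h>
    #include <sched.h>

    /* "nofail" allocation: the allocator wrapper owns the retry loop. */
    static void *alloc_nofail(size_t size)
    {
        void *p;

        while ((p = malloc(size)) == NULL)
            sched_yield();  /* give other work (reclaim, flushers) a chance */
        return p;
    }

    int main(void)
    {
        void *buf = alloc_nofail(4096);

        free(buf);
        return 0;
    }

-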
Commit 3812c8c8f395 ("mm: memcg: do not trap chargers with full
callstack on OOM") assumed that only a few places that can trigger a
memcg OOM situation do not return VM_FAULT_OOM, like optional page cache
readahead. But there are many more and it's impractical to annotate
them all.

First of all, we don't want to invoke the OOM killer when the failed
allocation is gracefully handled, so defer the actual kill to the end of
the fault handling as well. This simplifies the code quite a bit as an
added bonus.

Second, since a failed allocation might not be the abrupt end of the
fault, the memcg OOM handler needs to be re-entrant until the fault
finishes for subsequent allocation attempts. If an allocation is
attempted after the task already OOMed, allow it to bypass the limit so
that it can quickly finish the fault and invoke the OOM killer.

Reported-by: azurIt
Signed-off-by: Johannes Weiner
Cc: Michal Hocko
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
The err variable is intended to receive the timer_create() return value
before checking it.

Signed-off-by: Felipe Pena
Cc: Frederic Weisbecker
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
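
The intended pattern, as a standalone sketch (not the selftest source
itself): assign the timer_create() return value to err and only then test
it, so the check never reads an uninitialized variable.

    #include <stdio.h>
    #include <signal.h>
    #include <time.h>

    int main(void)
    {
        timer_t id;
        struct sigevent sev = { .sigev_notify = SIGEV_NONE };

        int err = timer_create(CLOCK_MONOTONIC, &sev, &id); /* capture the result */
        if (err != 0) {                                      /* ...then check it */
            perror("timer_create");
            return 1;
        }
        timer_delete(id);
        return 0;
    }

-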
In commit 27a7c642174e ("partitions/efi: account for pmbr size in lba")
we started treating bad sizes in lba field of the partition that has the
0xEE (GPT protective) type as errors.

However, we may run into these "bad sizes" in the real world if someone
uses dd to copy an image from a smaller disk to a bigger disk. Since
this case used to work (even without using force_gpt), keep it working
and treat the size mismatch as a warning instead of an error.

Reported-by: Josh Triplett
Reported-by: Sean Paul
Signed-off-by: Doug Anderson
Reviewed-by: Josh Triplett
Acked-by: Davidlohr Bueso
Tested-by: Artem Bityutskiy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Commit 11feeb498086 ("kvm: optimize away THP checks in
kvm_is_mmio_pfn()") introduced a memory leak when KVM is run on gigantic
compound pages.

That commit depends on the assumption that PG_reserved is identical for
all head and tail pages of a compound page. So that if get_user_pages
returns a tail page, we don't need to check the head page in order to
know if we deal with a reserved page that requires different
refcounting.

The assumption that PG_reserved is the same for head and tail pages is
certainly correct for THP and regular hugepages, but gigantic hugepages
allocated through bootmem don't clear the PG_reserved on the tail pages
(the clearing of PG_reserved is done later only if the gigantic hugepage
is freed).

This patch corrects the gigantic compound page initialization so that we
can retain the optimization in 11feeb498086. The cacheline was already
modified in order to set PG_tail so this won't affect the boot time of
large memory systems.

[akpm@linux-foundation.org: tweak comment layout and grammar]
Signed-off-by: Andrea Arcangeli
Reported-by: andy123
Acked-by: Rik van Riel
Cc: Gleb Natapov
Cc: Mel Gorman
Cc: Hugh Dickins
Acked-by: Rafael Aquini
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
zswap_tree is not freed when swapoff, and it got re-kmalloced in swapon,
so a memory leak occurs.

Free the memory of zswap_tree in zswap_frontswap_invalidate_area().
Signed-off-by: Weijie Yang
Reviewed-by: Bob Liu
Cc: Minchan Kim
Reviewed-by: Minchan Kim
Cc:
From: Weijie Yang
Subject: mm/zswap: bugfix: memory leak when invalidate and reclaim occur concurrently

Consider the following scenario:
thread 0: reclaim entry x (get refcount, but not call zswap_get_swap_cache_page)
thread 1: call zswap_frontswap_invalidate_page to invalidate entry x.
finished, entry x and its zbud is not freed as its refcount != 0
now, the swap_map[x] = 0
thread 0: now call zswap_get_swap_cache_page
swapcache_prepare return -ENOENT because entry x is not used any more
zswap_get_swap_cache_page return ZSWAP_SWAPCACHE_NOMEM
zswap_writeback_entry do nothing except put refcount
Now, the memory of zswap_entry x and its zpage leak.

Modify:
- check the refcount in the fail path and free the memory if it is not
  referenced.
- use ZSWAP_SWAPCACHE_FAIL instead of ZSWAP_SWAPCACHE_NOMEM, as the fail
  path can be caused not only by nomem but also by invalidate.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Weijie Yang
Reviewed-by: Bob Liu
Cc: Minchan Kim
Cc:
Acked-by: Seth Jennings
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
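
A minimal sketch of the reference-count rule behind the first change, with
invented types (this is not the zswap code): on a failure path, drop the
reference and free the object once nothing references it anymore, instead
of leaking it.

    #include <stdlib.h>
    #include <stdio.h>

    struct entry {
        int refcount;
        void *payload;
    };

    /* Drop a reference; free the entry when the last reference goes away. */
    static void entry_put(struct entry *e)
    {
        if (--e->refcount == 0) {
            free(e->payload);
            free(e);
            printf("entry freed\n");
        }
    }

    int main(void)
    {
        struct entry *e = malloc(sizeof(*e));

        if (!e)
            return 1;
        e->refcount = 1;        /* reference taken by the "reclaim" path */
        e->payload = malloc(64);
        /* ... the lookup fails (the swap entry is gone) ... */
        entry_put(e);           /* failure path: put instead of leaking */
        return 0;
    }

-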
If a page we are inspecting is in swap, we may occasionally report it as
having the soft dirty bit set (even if it is clean). The pte_soft_dirty()
helper should be called on a present pte only.

Signed-off-by: Cyrill Gorcunov
Cc: Pavel Emelyanov
Cc: Andy Lutomirski
Cc: Matt Mackall
Cc: Xiao Guangrong
Cc: Marcelo Tosatti
Cc: KOSAKI Motohiro
Cc: Stephen Rothwell
Cc: Peter Zijlstra
Cc: "Aneesh Kumar K.V"
Reviewed-by: Naoya Horiguchi
Cc: Mel Gorman
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
If page migration is turned on in the config and the page is migrating, we
may lose the soft dirty bit. If fork and mprotect are called on migrating
pages, the pages (once migration is complete) do not obtain the soft dirty
bit in the corresponding pte entries. Fix it by adding an appropriate test
on swap entries.

Signed-off-by: Cyrill Gorcunov
Cc: Pavel Emelyanov
Cc: Andy Lutomirski
Cc: Matt Mackall
Cc: Xiao Guangrong
Cc: Marcelo Tosatti
Cc: KOSAKI Motohiro
Cc: Stephen Rothwell
Cc: Peter Zijlstra
Cc: "Aneesh Kumar K.V"
Cc: Naoya Horiguchi
Cc: Mel Gorman
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Signed-off-by: Peter Oberparleiter
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
We should clear the page's private flag when returning the page to the
hugepage pool. Otherwise, a marked hugepage can be allocated to a user
who tries to allocate a non-reserved hugepage. If this user fails to
map the hugepage, he would try to return the page to the hugepage pool.
Since this page has a private flag, resv_huge_pages would mistakenly
increase. This patch fixes this situation.

Signed-off-by: Joonsoo Kim
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Michal Hocko
Cc: "Aneesh Kumar K.V"
Cc: KAMEZAWA Hiroyuki
Cc: Hugh Dickins
Cc: Davidlohr Bueso
Cc: David Gibson
Cc: Wanpeng Li
Cc: Naoya Horiguchi
Cc: Hillf Danton
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
This leak was added by commit 1d3d4437eae1 ("vmscan: per-node deferred
work").

unreferenced object 0xffff88006ada3bd0 (size 8):
comm "criu", pid 14781, jiffies 4295238251 (age 105.641s)
hex dump (first 8 bytes):
00 00 00 00 00 00 00 00 ........
backtrace:
[] kmemleak_alloc+0x5e/0xc0
[] __kmalloc+0x247/0x310
[] register_shrinker+0x3c/0xa0
[] sget+0x5ab/0x670
[] proc_mount+0x54/0x170
[] mount_fs+0x43/0x1b0
[] vfs_kern_mount+0x72/0x110
[] kern_mount_data+0x19/0x30
[] pid_ns_prepare_proc+0x20/0x40
[] alloc_pid+0x466/0x4a0
[] copy_process+0xc6a/0x1860
[] do_fork+0x8b/0x370
[] SyS_clone+0x16/0x20
[] stub_clone+0x69/0x90
[] 0xffffffffffffffff

Signed-off-by: Andrew Vagin
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Johannes Weiner
Cc: Glauber Costa
Cc: Chuck Lever
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
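
The shape of the fix, as a generic userspace sketch with hypothetical names
(not mm/vmscan.c itself): whatever the register path allocates, the
unregister path must free.

    #include <stdlib.h>

    struct shrinker_like {
        long *nr_deferred;  /* per-node counters, allocated at register time */
    };

    static int register_shrinker_like(struct shrinker_like *s, int nr_nodes)
    {
        s->nr_deferred = calloc(nr_nodes, sizeof(*s->nr_deferred));
        return s->nr_deferred ? 0 : -1;
    }

    static void unregister_shrinker_like(struct shrinker_like *s)
    {
        free(s->nr_deferred);  /* the missing counterpart that caused the leak */
        s->nr_deferred = NULL;
    }

    int main(void)
    {
        struct shrinker_like s;

        if (register_shrinker_like(&s, 4) == 0)
            unregister_shrinker_like(&s);
        return 0;
    }

-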
After acquiring the semlock spinlock, operations must test that the
array is still valid. Otherwise:

- semctl() and exit_sem() would walk stale linked lists (ugly, but
  should be ok: all lists are empty)
- semtimedop() would sleep forever, and if woken up due to a signal,
  access memory after free.

The patch also:
- standardizes the tests for .deleted, so that all tests in one
  function leave the function with the same approach.
- unconditionally tests for .deleted immediately after every call to
  sem_lock, even if it means that for semctl(GETALL), .deleted will be
  tested twice.

Both changes make the review simpler: after every sem_lock, there must
be a test of .deleted, followed by a goto to the cleanup code (if the
function uses "goto cleanup").

The only exception is semctl_down(): if sem_ids().rwsem is locked, then
the presence in ids->ipcs_idr is equivalent to !.deleted, thus no
additional test is required.

Signed-off-by: Manfred Spraul
Cc: Mike Galbraith
Acked-by: Davidlohr Bueso
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
The initial documentation was a bit incomplete, update accordingly.
[akpm@linux-foundation.org: make it more readable in 80 columns]
Signed-off-by: Davidlohr Bueso
Acked-by: Manfred Spraul
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
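
The rule the patch standardizes, shown as a self-contained pthread sketch
with invented names (not the ipc/sem.c code): immediately after taking the
lock, check the object's deleted flag and jump to the cleanup path if the
object was torn down while we waited.

    #include <pthread.h>
    #include <stdio.h>

    struct sem_array_like {
        pthread_mutex_t lock;
        int deleted;    /* set by an IPC_RMID-like teardown */
        int value;
    };

    static int do_op(struct sem_array_like *sma)
    {
        int err = 0;

        pthread_mutex_lock(&sma->lock);
        if (sma->deleted) {     /* revalidate right after every lock */
            err = -1;           /* stands in for -EIDRM */
            goto out_unlock;
        }
        sma->value++;           /* safe: the array is still alive */
    out_unlock:
        pthread_mutex_unlock(&sma->lock);
        return err;
    }

    int main(void)
    {
        struct sem_array_like sma = {
            .lock = PTHREAD_MUTEX_INITIALIZER, .deleted = 0, .value = 0
        };

        printf("op: %d\n", do_op(&sma));
        sma.deleted = 1;
        printf("op after delete: %d\n", do_op(&sma));
        return 0;
    }

-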
for_each_online_cpu() needs the protection of {get,put}_online_cpus() so
cpu_online_mask doesn't change during the iteration.

cpu_hotplug.lock is held while a cpu is going down; it's a coarse lock
that is used kernel-wide to synchronize cpu hotplug activity. Memcg has
a cpu hotplug notifier, called while there may not be any cpu hotplug
refcounts, which drains per-cpu event counts to memcg->nocpu_base.events
to maintain a cumulative event count as cpus disappear. Without
get_online_cpus() in mem_cgroup_read_events(), it's possible to account
for the event count on a dying cpu twice, and this value may be
significantly large.

In fact, all memcg->pcp_counter_lock use should be nested by
{get,put}_online_cpus().

This fixes that issue and ensures the reported statistics are not vastly
over-reported during cpu hotplug.

Signed-off-by: David Rientjes
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: KAMEZAWA Hiroyuki
Acked-by: KOSAKI Motohiro
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Pull tmpfile fix from Al Viro:
"A fix for double iput() in ->tmpfile() on ext3 and ext4; I'd fucked it
up, Miklos has caught it"* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
ext[34]: fix double put in tmpfile -
Pull device-mapper fix from Alasdair Kergon:
"A patch to avoid data corruption in a device-mapper snapshot.This is primarily a data corruption bug that all users of
device-mapper snapshots will want to fix. The CVE is due to a data
leak under specific circumstances if, for example, the snapshot is
presented to a virtual machine: a block written as data inside the VM
can get interpreted incorrectly on the host outside the VM as
metadata, causing the host to provide the VM with access to blocks it
would not otherwise see. This is likely to affect few, if any,
people"* tag 'dm-3.12-fix-cve' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm snapshot: fix data corruption -
Pull gpio fixes from Linus Walleij:
"Three GPIO fixes for the v3.12 series:
- A fix to the Lynxpoint IRQ handler
- Two late fixes to fallout from the gpiod refactoring"

* tag 'gpio-v3.12-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
gpiolib: let gpiod_request() return -EPROBE_DEFER
gpiolib: safer implementation of desc_to_gpio()
gpio/lynxpoint: check if the interrupt is enabled in IRQ handler -
Rescind of subchannels was not being correctly handled. Fix the bug.
Signed-off-by: K. Y. Srinivasan
Cc: [3.11+]
Signed-off-by: Greg Kroah-Hartman
16 Oct, 2013
9 commits
-
In GCC, sizeof(hdsp_version) is 8 because there is a 2-byte hole at the
end of the struct after ->firmware_rev.

Signed-off-by: Dan Carpenter
Signed-off-by: Takashi Iwai
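
The subject line calls this an info leak: copying sizeof(struct) bytes also
copies tail padding that was never initialized. A standalone illustration
with a made-up layout (not the driver's actual hdsp_version struct); the
usual remedy is to zero the structure before filling it in and copying it
out.

    #include <stdio.h>
    #include <stdint.h>

    struct version_info {
        uint32_t card_type;     /* 4 bytes */
        uint16_t firmware_rev;  /* 2 bytes, then 2 bytes of tail padding */
    };

    int main(void)
    {
        printf("sum of members: %zu bytes\n",
               sizeof(uint32_t) + sizeof(uint16_t)); /* 6 */
        printf("sizeof(struct): %zu bytes\n",
               sizeof(struct version_info));         /* 8 on common ABIs */
        return 0;
    }

-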
This patch fixes a particular type of data corruption that has been
encountered when loading a snapshot's metadata from disk.

When we allocate a new chunk in persistent_prepare, we increment
ps->next_free and we make sure that it doesn't point to a metadata area
by further incrementing it if necessary.

When we load metadata from disk on device activation, ps->next_free is
positioned after the last used data chunk. However, if this last used
data chunk is followed by a metadata area, ps->next_free is positioned
erroneously in the metadata area. A newly allocated chunk is then placed
at the same location as the metadata area, resulting in data or metadata
corruption.

This patch changes the code so that ps->next_free skips the metadata
area when metadata are loaded in the function read_exceptions.

The patch also moves a piece of code from persistent_prepare_exception
to a separate function, skip_metadata, to avoid code duplication.

CVE-2013-4299
Signed-off-by: Mikulas Patocka
Cc: stable@vger.kernel.org
Cc: Mike Snitzer
Signed-off-by: Alasdair G Kergon -
BugLink: http://bugs.launchpad.net/bugs/1235977
The profile introspection seq file has a locking bug when policy is viewed
from a virtual root (a task in a policy namespace); introspection from the
real root is not affected.

The test for the root

    while (parent) {

is correct for the real root, but incorrect for tasks in a policy
namespace. This allows the task to walk back up the policy tree past its
virtual root, causing it to be unlocked before the virtual root should be
in the p_stop fn.

This results in the following lockdep back trace:
[ 78.479744] [ BUG: bad unlock balance detected! ]
[ 78.479792] 3.11.0-11-generic #17 Not tainted
[ 78.479838] -------------------------------------
[ 78.479885] grep/2223 is trying to release lock (&ns->lock) at:
[ 78.479952] [] mutex_unlock+0xe/0x10
[ 78.480002] but there are no more locks to release!
[ 78.480037]
[ 78.480037] other info that might help us debug this:
[ 78.480037] 1 lock held by grep/2223:
[ 78.480037] #0: (&p->lock){+.+.+.}, at: [] seq_read+0x3d/0x3d0
[ 78.480037]
[ 78.480037] stack backtrace:
[ 78.480037] CPU: 0 PID: 2223 Comm: grep Not tainted 3.11.0-11-generic #17
[ 78.480037] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 78.480037] ffffffff817bf3be ffff880007763d60 ffffffff817b97ef ffff8800189d2190
[ 78.480037] ffff880007763d88 ffffffff810e1c6e ffff88001f044730 ffff8800189d2190
[ 78.480037] ffffffff817bf3be ffff880007763e00 ffffffff810e5bd6 0000000724fe56b7
[ 78.480037] Call Trace:
[ 78.480037] [] ? mutex_unlock+0xe/0x10
[ 78.480037] [] dump_stack+0x54/0x74
[ 78.480037] [] print_unlock_imbalance_bug+0xee/0x100
[ 78.480037] [] ? mutex_unlock+0xe/0x10
[ 78.480037] [] lock_release_non_nested+0x226/0x300
[ 78.480037] [] ? __mutex_unlock_slowpath+0xce/0x180
[ 78.480037] [] ? mutex_unlock+0xe/0x10
[ 78.480037] [] lock_release+0xac/0x310
[ 78.480037] [] __mutex_unlock_slowpath+0x83/0x180
[ 78.480037] [] mutex_unlock+0xe/0x10
[ 78.480037] [] p_stop+0x51/0x90
[ 78.480037] [] seq_read+0x288/0x3d0
[ 78.480037] [] vfs_read+0x9e/0x170
[ 78.480037] [] SyS_read+0x4c/0xa0
[ 78.480037] [] system_call_fastpath+0x1a/0x1f

Signed-off-by: John Johansen
Signed-off-by: James Morris
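
A small standalone sketch of the logic error (illustrative names, not the
apparmor code): the upward walk must stop at the viewer's virtual root,
whereas a plain "while (parent)" loop walks all the way to the real root
and unlocks nodes it should not.

    #include <stdio.h>

    struct profile {
        struct profile *parent;
        const char *name;
    };

    /* Walk from n up to, but not including, the viewer's root. */
    static void unlock_up_to(struct profile *n, struct profile *view_root)
    {
        while (n != view_root && n->parent) { /* "while (n->parent)" alone
                                               * would walk past view_root */
            printf("unlock %s\n", n->name);
            n = n->parent;
        }
    }

    int main(void)
    {
        struct profile real_root = { NULL, "real root" };
        struct profile ns_root   = { &real_root, "virtual root" };
        struct profile leaf      = { &ns_root, "leaf" };

        unlock_up_to(&leaf, &ns_root); /* touches only "leaf" */
        return 0;
    }

-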
BugLink: http://bugs.launchpad.net/bugs/1235523
This fixes the following kmemleak trace:
unreferenced object 0xffff8801e8c35680 (size 32):
comm "apparmor_parser", pid 691, jiffies 4294895667 (age 13230.876s)
hex dump (first 32 bytes):
e0 d3 4e b5 ac 6d f4 ed 3f cb ee 48 1c fd 40 cf ..N..m..?..H..@.
5b cc e9 93 00 00 00 00 00 00 00 00 00 00 00 00 [...............
backtrace:
[] kmemleak_alloc+0x4e/0xb0
[] __kmalloc+0x103/0x290
[] aa_calc_profile_hash+0x6c/0x150
[] aa_unpack+0x39d/0xd50
[] aa_replace_profiles+0x3d/0xd80
[] profile_replace+0x37/0x50
[] vfs_write+0xbd/0x1e0
[] SyS_write+0x4c/0xa0
[] system_call_fastpath+0x1a/0x1f
[] 0xffffffffffffffff

Signed-off-by: John Johansen
Signed-off-by: James Morris -
Pull device tree fixes and reverts from Grant Likely:
"One bug fix and three reverts. The reverts back out the slightly
controversial feeding the entire device tree into the random pool and
the reserved-memory binding which isn't fully baked yet. Expect the
reserved-memory patches at least to resurface for v3.13.

The bug fix removes a scary but harmless warning on SPARC that was
introduced in the v3.12 merge window. v3.13 will contain a proper fix
that makes the new code work on SPARC.

On the plus side, the diffstat looks *awesome*. I love removing lines
of code"

* tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux:
Revert "drivers: of: add initialization code for dma reserved memory"
Revert "ARM: init: add support for reserved memory defined by device tree"
Revert "of: Feed entire flattened device tree into the random pool"
of: fix unnecessary warning on missing /cpus node -
Pull DMA-mapping fix from Marek Szyprowski:
"A bugfix for the IOMMU-based implementation of dma-mapping subsystem
for ARM architecture"* 'fixes-for-v3.12' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
ARM: dma-mapping: Always pass proper prot flags to iommu_map() -
Pull kvm fix from Gleb Natapov.
* git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: Enable pvspinlock after jump_label_init() to avoid VM hang -
Pull Xen fixes from Stefano Stabellini:
"A small fix for Xen on x86_32 and a build fix for xen-tpmfront on
arm64"* tag 'stable/for-linus-3.12-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen: Fix possible user space selector corruption
tpm: xen-tpmfront: fix missing declaration of xen_domain -
Usually the active scan mask is freed in __iio_update_buffers() when the buffer
is disabled. But if the device is still sampling when it is removed, we'll
end up disabling the buffers in iio_disable_all_buffers(). So we also need
to free the active scan mask here, otherwise it will be leaked.

Signed-off-by: Lars-Peter Clausen
Signed-off-by: Jonathan Cameron