01 Jul, 2009
1 commit
-
show_pools() walks the page_list of a pool without protection against
list modifications in alloc/free. Take pool->lock to avoid stomping into
nirvana.
Signed-off-by: Thomas Gleixner
Signed-off-by: Matthew Wilcox
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
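A minimal user-space sketch of the pattern this fix applies (struct and
function names here are illustrative, not the kernel's): whoever walks the
shared list must take the same lock that the alloc/free paths use when
modifying it.

#include <pthread.h>
#include <stdio.h>

struct page_entry {
    struct page_entry *next;
    unsigned int in_use;            /* blocks handed out from this page */
};

struct pool {
    pthread_mutex_t lock;           /* stands in for pool->lock */
    struct page_entry *page_list;   /* modified by alloc/free under lock */
};

/* Before the fix, a stats dump like this walked the list with no locking
 * and could follow a pointer being freed by a concurrent alloc/free.
 * Holding the pool lock for the duration of the walk is the whole fix. */
static void show_pool(struct pool *pool)
{
    struct page_entry *page;

    pthread_mutex_lock(&pool->lock);
    for (page = pool->page_list; page; page = page->next)
        printf("page %p: %u blocks in use\n", (void *)page, page->in_use);
    pthread_mutex_unlock(&pool->lock);
}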
28 Apr, 2008
1 commit
-
Previously dmapool debugging was only enabled for CONFIG_DEBUG_SLAB.
It is not hooked into the SLUB runtime debug configuration, so you
currently only get it with CONFIG_SLUB_DEBUG_ON, not plain
CONFIG_SLUB_DEBUG.
Acked-by: Matthew Wilcox
Signed-off-by: Andi Kleen
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
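The compile-time gate being described amounts to something like the
following sketch (the real check lives in mm/dmapool.c; the macro name is
assumed from that file):

/* Poison-on-alloc/free debug code is compiled in only under these slab
 * debug options, so a plain CONFIG_SLUB_DEBUG build does not get it. */
#if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB_DEBUG_ON)
#define DMAPOOL_DEBUG 1
#endif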
04 Dec, 2007
7 commits
-
The previous implementation simply refused to allocate more than a
boundary's worth of data from an entire page. Some users didn't know
this, so specified things like SMP_CACHE_BYTES, not realising the
horrible waste of memory that this was. It's fairly easy to correct
this problem, just by ensuring we don't cross a boundary within a page.
This even helps drivers like EHCI (which can't cross a 4k boundary)
on machines with larger page sizes.
Signed-off-by: Matthew Wilcox
Acked-by: David S. Miller
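A small illustrative helper (not the mm/dmapool.c code) showing the
placement rule: if a block would straddle the boundary, its start is simply
bumped to the next boundary instead of refusing to use the rest of the page.

#include <stddef.h>

/* Illustrative only. 'boundary' is a power of two (e.g. 4096 for EHCI)
 * and 'size' is assumed to be at most 'boundary'. Returns where a block
 * of 'size' bytes proposed at offset 'off' should really start so that
 * it never crosses a boundary within the page. */
static size_t place_block(size_t off, size_t size, size_t boundary)
{
    size_t next_boundary = (off + boundary) & ~(boundary - 1);

    if (off + size > next_boundary)   /* block would straddle it */
        off = next_boundary;          /* skip ahead to the boundary */
    return off;
}

With a 16K page and a 4K boundary, for example, the old behaviour handed
out only the first boundary's worth of each page, while this rule lets the
remaining space be used as well.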
-
Use a list of free blocks within a page instead of using a bitmap.
Update documentation to reflect this. As well as being a slight
reduction in memory allocation, locked ops and lines of code, it speeds
up a transaction processing benchmark by 0.4%.
Signed-off-by: Matthew Wilcox
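A user-space sketch of the in-band free list this refers to (names
invented): each free block's first bytes store the offset of the next free
block, so no separate bitmap is needed.

#include <stddef.h>

struct page_state {
    unsigned char *vaddr;   /* start of the page's memory */
    unsigned int offset;    /* offset of the first free block */
};

/* Chain every block to its successor; block_size must be at least
 * sizeof(unsigned int). The free list lives inside the free blocks
 * themselves instead of in a separate bitmap. */
static void init_page(struct page_state *pg, unsigned int block_size,
                      unsigned int page_size)
{
    unsigned int off;

    for (off = 0; off < page_size; off += block_size)
        *(unsigned int *)(pg->vaddr + off) = off + block_size;
    pg->offset = 0;
}

static void *alloc_block(struct page_state *pg, unsigned int page_size)
{
    void *block;

    if (pg->offset >= page_size)
        return NULL;                          /* page exhausted */
    block = pg->vaddr + pg->offset;
    pg->offset = *(unsigned int *)block;      /* pop the free list */
    return block;
}

static void free_block(struct page_state *pg, void *block)
{
    *(unsigned int *)block = pg->offset;      /* push onto free list */
    pg->offset = (unsigned char *)block - pg->vaddr;
}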
-
We were missing a copyright statement and license, so add GPLv2, David
Brownell's copyright and my copyright.
The asm/io.h include was superfluous, but we were missing a few other
necessary includes.Signed-off-by: Matthew Wilcox
-
Check that 'align' is a power of two, like the API specifies.
Align 'size' to 'align' correctly -- the current code has an off-by-one.
The ALIGN macro in kernel.h doesn't have this off-by-one.
Signed-off-by: Matthew Wilcox
Acked-by: David S. Miller
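A sketch of the two checks described here (helper names are made up; the
kernel's ALIGN() in kernel.h does this rounding correctly):

#include <stddef.h>

/* Round x up to a multiple of a; a must be a power of two. */
#define ALIGN_UP(x, a)  (((x) + ((a) - 1)) & ~((a) - 1))

static int is_power_of_2(size_t n)
{
    return n != 0 && (n & (n - 1)) == 0;
}

/* Reject a non-power-of-two 'align', as the API specifies, then round
 * 'size' up to a multiple of 'align'. */
static int check_and_align(size_t *size, size_t align)
{
    if (!is_power_of_2(align))
        return -1;
    *size = ALIGN_UP(*size, align);
    return 0;
}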
-
With one trivial change (taking the lock slightly earlier on wakeup
from schedule), all uses of the waitq are under the pool lock, so we
can use the locked (or __) versions of the wait queue functions, and
avoid the extra spinlock.
Signed-off-by: Matthew Wilcox
Acked-by: David S. Miller
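A user-space analogue of the argument (names invented): because every
waiter-list operation already runs with the pool lock held, the list needs
no lock of its own, which is the role the __ (caller-locked) wait queue
helpers play in the kernel.

#include <pthread.h>
#include <stddef.h>

struct waiter {
    struct waiter *next;
};

struct pool {
    pthread_mutex_t lock;    /* the one lock everything runs under */
    struct waiter *waiters;  /* only ever touched with 'lock' held */
};

/* Caller must hold pool->lock, so no second lock is taken here. */
static void add_waiter_locked(struct pool *pool, struct waiter *w)
{
    w->next = pool->waiters;
    pool->waiters = w;
}

/* Caller must hold pool->lock. */
static struct waiter *pop_waiter_locked(struct pool *pool)
{
    struct waiter *w = pool->waiters;

    if (w)
        pool->waiters = w->next;
    return w;
}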
-
Run Lindent and fix all issues reported by checkpatch.pl.
Signed-off-by: Matthew Wilcox
-
Signed-off-by: Matthew Wilcox