08 Jul, 2013
3 commits
-
While doing some code inspection, I noticed that the slob constructor
method can be called with a NULL pointer. If memory is tight and slob
fails to allocate with slob_alloc() or slob_new_pages(), it still calls
the ctor() method with a NULL pointer. Looking at the first ctor()
method I found, I noticed that it cannot handle a NULL pointer (I'm
sure others probably can't either):

static void sighand_ctor(void *data)
{
    struct sighand_struct *sighand = data;

    spin_lock_init(&sighand->siglock);
    init_waitqueue_head(&sighand->signalfd_wqh);
}

The solution is to only call the ctor() method if allocation succeeded.
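
A minimal sketch of that fix, following the description above (the
surrounding slob allocation function is abbreviated and approximate):

b = slob_alloc(c->size, flags, c->align, node);
...
/* Call the constructor only when the allocation succeeded. */
if (b && c->ctor)
    c->ctor(b);
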
Acked-by: Christoph Lameter
Signed-off-by: Steven Rostedt
Signed-off-by: Pekka Enberg
-
CPU partial support can introduce a level of indeterminism that is not
wanted in certain contexts (like a realtime kernel). Make it
configurable.

This patch is based on Christoph Lameter's "slub: Make cpu partial slab
support configurable V2".
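
A sketch of the resulting knob, assuming a CONFIG_SLUB_CPU_PARTIAL
Kconfig symbol and a helper along these lines (names approximate):

static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
{
#ifdef CONFIG_SLUB_CPU_PARTIAL
    /* Per-cpu partial lists are compiled in and enabled unless the
     * cache is being debugged. */
    return !kmem_cache_debug(s);
#else
    return false;
#endif
}
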
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
Signed-off-by: Pekka Enberg
-
At the moment, kmalloc() isn't even listed in the kernel API
documentation (DocBook/kernel-api.html after running "make htmldocs").

Another issue is that the documentation for kmalloc_node() refers to
kcalloc()'s documentation to describe its 'flags' parameter, while
kcalloc() referred to kmalloc()'s documentation, which doesn't exist!

This patch is a proposed fix for this. It also removes the documentation
for kmalloc() in include/linux/slob_def.h, which isn't included when
generating the documentation anyway. This way, kmalloc() is described
in only one place.

Acked-by: Christoph Lameter
Acked-by: Randy Dunlap
Signed-off-by: Michael Opdenacker
Signed-off-by: Pekka Enberg
07 Jul, 2013
8 commits
-
Some architectures (e.g. powerpc built with CONFIG_PPC_256K_PAGES=y
CONFIG_FORCE_MAX_ZONEORDER=11) get PAGE_SHIFT + MAX_ORDER > 26.

In 3.10 kernels, CONFIG_LOCKDEP=y with PAGE_SHIFT + MAX_ORDER > 26 makes
init_lock_keys() dereference beyond kmalloc_caches[26].

This leads to an unbootable system (kernel panic at initializing SLAB)
if one of kmalloc_caches[26...PAGE_SHIFT+MAX_ORDER-1] is not NULL.

Fix this by making sure that init_lock_keys() does not dereference
beyond kmalloc_caches[26].
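
A sketch of the bounds fix (the loop body is abbreviated; the
array-size reasoning is from the description above):

/* kmalloc_caches[] only has KMALLOC_SHIFT_HIGH + 1 entries; iterating
 * up to PAGE_SHIFT + MAX_ORDER can run past the end of the array. */
for (i = 1; i <= KMALLOC_SHIFT_HIGH; i++) {
    struct kmem_cache *cache = kmalloc_caches[i];

    if (!cache)
        continue;
    /* ... set up the lockdep classes for this cache ... */
}
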
Signed-off-by: Christoph Lameter
Reported-by: Tetsuo Handa
Cc: Pekka Enberg
Cc: [3.10.x]
Signed-off-by: Andrew Morton
Signed-off-by: Pekka Enberg
-
Acked-by: Christoph Lameter
Signed-off-by: Sasha Levin
Signed-off-by: Pekka Enberg
-
In the free path, we don't check the cpu_partial count, so a slab can
be linked into the cpu partial list even if cpu_partial is 0. To
prevent this, we should check cpu_partial in put_cpu_partial().
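
A minimal sketch of such a guard (its exact placement within
put_cpu_partial() is illustrative):

static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
{
    /* With the cpu_partial tunable set to 0, nothing should ever be
     * queued on the per-cpu partial list (placement illustrative). */
    if (!s->cpu_partial)
        return;
    /* ... existing queueing logic ... */
}
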
Acked-by: Christoph Lameter
Reviewed-by: Wanpeng Li
Signed-off-by: Joonsoo Kim
Signed-off-by: Pekka Enberg
-
Use the existing interfaces node_nr_slabs and node_nr_objs to get
nr_slabs and nr_objs.

Acked-by: Christoph Lameter
Signed-off-by: Wanpeng Li
Signed-off-by: Pekka Enberg
-
This patch removes the unused nr_partials variable.
Acked-by: Christoph Lameter
Signed-off-by: Wanpeng Li
Signed-off-by: Pekka Enberg
-
Slab has some tunables, like limit, batchcount, and sharedfactor, that
can be tuned through the function slabinfo_write. Commit b7454ad3
("mm/sl[au]b: Move slabinfo processing to slab_common.c") incorrectly
made /proc/slabinfo unwriteable for slab; this patch fixes it by
reverting to the original mode.
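
A sketch of the mode fix, assuming a SLABINFO_RIGHTS macro along the
lines of the mainline change (treat the exact names as approximate):

#ifdef CONFIG_SLAB
#define SLABINFO_RIGHTS (S_IWUSR | S_IRUSR)    /* slab tunables are writable */
#else
#define SLABINFO_RIGHTS S_IRUSR
#endif

    proc_create("slabinfo", SLABINFO_RIGHTS, NULL,
                &proc_slabinfo_operations);
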
Acked-by: Christoph Lameter
Signed-off-by: Wanpeng Li
Signed-off-by: Pekka Enberg
-
This patch shares s_next and s_stop between slab and slub.
Acked-by: Christoph Lameter
Signed-off-by: Wanpeng Li
Signed-off-by: Pekka Enberg
-
The drain_freelist is called to drain slabs_free lists for cache reap,
cache shrink, memory hotplug callbacks, etc. The tofree parameter
should be the number of slabs to free instead of the number of slab
objects to free.

This patch fixes the callers that pass a number of objects, making
sure they pass a number of slabs.
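
A sketch of the conversion, assuming a small helper that rounds the
free-object count up to whole slabs (the helper name follows the
description but is approximate):

/* Convert the node's free-object count into a count of whole slabs. */
static int slabs_tofree(struct kmem_cache *cachep, struct kmem_cache_node *n)
{
    return (n->free_objects + cachep->num - 1) / cachep->num;
}

    /* Callers then pass slabs rather than objects: */
    drain_freelist(cachep, n, slabs_tofree(cachep, n));
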
Acked-by: Christoph Lameter
Signed-off-by: Wanpeng Li
Signed-off-by: Pekka Enberg
18 Jun, 2013
1 commit
-
Make the SLOB-specific stuff harmonize more with the way the other
allocators do it. Create the typical kmalloc constants for that
purpose. SLOB does not support them, but the constants help us avoid
#ifdefs.
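
Illustrative values only: a sketch of what the SLOB branch of those
constants might look like (SLOB serves anything up to a page from its
free lists, so nothing larger is cached):

#define KMALLOC_SHIFT_HIGH    PAGE_SHIFT    /* nothing larger is cached */
#define KMALLOC_SHIFT_MAX     30
#ifndef KMALLOC_SHIFT_LOW
#define KMALLOC_SHIFT_LOW     3
#endif
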
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
08 Jun, 2013
1 commit
-
After several fixes to kmem_cache_alloc_node(), its comment was split
up. This patch moves it back on top of the kmem_cache_alloc_node()
definition.

Signed-off-by: Zhouping Liu
Signed-off-by: Pekka Enberg
07 May, 2013
1 commit
-
For SLAB, the kmalloc caches must be created in ascending sizes in
order for the OFF_SLAB sub-slab cache to work properly.

Create the non-power-of-two caches immediately after the prior
power-of-two kmalloc cache. Do not create the non-power-of-two caches
before all other caches.
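
A sketch of the resulting creation loop (indices 1 and 2 hold the 96-
and 192-byte caches; details approximate):

for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
    if (!kmalloc_caches[i])
        kmalloc_caches[i] = create_kmalloc_cache(NULL, 1 << i, flags);

    /* Slot the 96- and 192-byte caches in right after the 64- and
     * 128-byte ones, so creation stays in ascending size order. */
    if (KMALLOC_MIN_SIZE <= 32 && !kmalloc_caches[1] && i == 6)
        kmalloc_caches[1] = create_kmalloc_cache(NULL, 96, flags);
    if (KMALLOC_MIN_SIZE <= 64 && !kmalloc_caches[2] && i == 7)
        kmalloc_caches[2] = create_kmalloc_cache(NULL, 192, flags);
}
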
Reported-and-tested-by: Tetsuo Handa
Signed-off-by: Christoph Lameter
Link: http://lkml.kernel.org/r/201305040348.CIF81716.OStQOHFJMFLOVF@I-love.SAKURA.ne.jp
Signed-off-by: Pekka Enberg
06 May, 2013
1 commit
-
The inline path seems to have changed the SLAB behavior for very large
kmalloc allocations with commit e3366016 ("slab: Use common
kmalloc_index/kmalloc_size functions"). This patch restores the old
behavior but also adds diagnostics so that we can figure out where in
the code these large allocations occur.
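
A sketch of the diagnostic in SLAB's inline kmalloc() path
(abbreviated; the exact bound is approximate):

static __always_inline void *kmalloc(size_t size, gfp_t flags)
{
    if (__builtin_constant_p(size)) {
        ...
        /* SLAB's kmalloc caches cannot serve this; warn once so the
         * offending caller can be tracked down. */
        if (WARN_ON_ONCE(size > KMALLOC_MAX_SIZE))
            return NULL;
        ...
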
Reported-and-tested-by: Tetsuo Handa
Signed-off-by: Christoph Lameter
Link: http://lkml.kernel.org/r/201305040348.CIF81716.OStQOHFJMFLOVF@I-love.SAKURA.ne.jp
[ penberg@kernel.org: use WARN_ON_ONCE ]
Signed-off-by: Pekka Enberg
01 May, 2013
1 commit
-
If the nodeid is > num_online_nodes() this can cause an Oops and a
panic(). The purpose of this patch is to assert if this condition is
true to aid debugging efforts rather than some random NULL pointer
dereference or page fault.

This patch is in response to BZ#42967 [1]. VM_BUG_ON is used so the
check applies only when CONFIG_DEBUG_VM is set, given that
____cache_alloc_node() is a hot code path.

[1]: https://bugzilla.kernel.org/show_bug.cgi?id=42967
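
A sketch of the assertion (the exact bound checked is illustrative):

static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
                                  int nodeid)
{
    /* Catch a bogus node id here, under CONFIG_DEBUG_VM only, rather
     * than oopsing on a random dereference later. */
    VM_BUG_ON(nodeid < 0 || nodeid >= MAX_NUMNODES);
    ...
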
Signed-off-by: Aaron Tomlin
Reviewed-by: Rik van Riel
Acked-by: Christoph Lameter
Acked-by: Rafael Aquini
Acked-by: David Rientjes
Signed-off-by: Pekka Enberg
05 Apr, 2013
2 commits
-
As Steven Rostedt has pointed out: rescheduling could occur on a
different processor after the determination of the per cpu pointer and
before the tid is retrieved. This could result in allocation from the
wrong node in slab_alloc().

The effect is much more severe in slab_free(), where we could free to
the freelist of the wrong page.

The window for something like that occurring is pretty small, but it
is possible.
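
A sketch of the fix: take both reads under disabled preemption so they
cannot be split across processors (context abbreviated):

    /*
     * Disable preemption so the per-cpu pointer and the tid are read
     * on the same processor; a migration between the two reads could
     * otherwise pair a tid with the wrong cpu_slab.
     */
    preempt_disable();
    c = __this_cpu_ptr(s->cpu_slab);
    tid = c->tid;
    preempt_enable();
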
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
The variables accessed in slab_alloc are volatile, and therefore
the page pointer passed to node_match can be NULL. The processing
of data in slab_alloc is tentative until either the cmpxchg
succeeds or the __slab_alloc slowpath is invoked. Both are
able to perform the same allocation from the freelist.

Check for the NULL pointer in node_match.
A false positive will lead to a retry of the loop in __slab_alloc.
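
A sketch of the check (shape follows the description; treat details
as approximate):

static inline int node_match(struct page *page, int node)
{
#ifdef CONFIG_NUMA
    /* page may be NULL here; report a mismatch and let the
     * __slab_alloc slowpath sort it out. */
    if (!page || (node != NUMA_NO_NODE && page_to_nid(page) != node))
        return 0;
#endif
    return 1;
}
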
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
02 Apr, 2013
2 commits
-
After the boot phase, 'n' always exists,
so add the 'likely' macro to help the compiler.

Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
Signed-off-by: Pekka Enberg
-
There is a subtle bug when calculating the number of acquired objects.

Currently, we calculate "available = page->objects - page->inuse"
after acquire_slab() is called in get_partial_node(). But in
acquire_slab() with mode = 1, we always set new.inuse = page->objects.
So,

    t = acquire_slab(s, n, page, object == NULL);

    if (!object) {
        c->page = page;
        stat(s, ALLOC_FROM_PARTIAL);
        object = t;
        available = page->objects - page->inuse;

        !!! available is always 0 !!!
    ...

Therefore, "available > s->cpu_partial / 2" is always false and
we always go to the second iteration.

This patch corrects the problem. After that, we don't need the return
value of put_cpu_partial(), so remove it.
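
A sketch of the correction: have acquire_slab() report the object
count through an out parameter, computed before new.inuse is
overwritten (names approximate):

    /* Inside acquire_slab(), before the cmpxchg installs new.inuse: */
    *objects = new.objects - new.inuse;

    /* ...and in get_partial_node(): */
    t = acquire_slab(s, n, page, object == NULL, &objects);
    if (!t)
        break;
    available += objects;
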
Reviewed-by: Wanpeng Li
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
Signed-off-by: Pekka Enberg
28 Feb, 2013
1 commit
-
After we create a boot cache, we may allocate from it until it is
bootstrapped. This will move the page from the partial list to the cpu
slab list. If this happens, the loop:

    list_for_each_entry(p, &n->partial, lru)

that we use to scan for all partial pages will yield nothing, and the
pages will keep pointing to the boot cpu cache, which is of course
invalid. To fix that, we should flush the cache to make sure that the
cpu slab is back on the partial list.
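
A sketch of the flush during bootstrap (context abbreviated; the
helper name follows slub's existing per-cpu flushing):

static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
{
    ...
    /* Push any cpu slab acquired during early allocations back onto
     * the partial list before walking n->partial below. */
    __flush_cpu_slab(s, smp_processor_id());
    ...
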
Signed-off-by: Glauber Costa
Reported-by: Steffen Michalke
Tested-by: KAMEZAWA Hiroyuki
Acked-by: Christoph Lameter
Cc: Andrew Morton
Cc: Tejun Heo
Signed-off-by: Pekka Enberg
07 Feb, 2013
3 commits
-
Commit "slab: Common Kmalloc cache determination" made a mistake
in kmalloc_slab(). SLAB_CACHE_DMA is for kmem_cache creation,
not for allocation. For allocation, we should use GFP_XXX to identify
the type of allocation. So, change SLAB_CACHE_DMA to GFP_DMA.
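
A sketch of the corrected test in kmalloc_slab() (abbreviated):

struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
{
    int index = kmalloc_index(size);
    ...
#ifdef CONFIG_ZONE_DMA
    /* Callers pass GFP_DMA at allocation time; SLAB_CACHE_DMA is a
     * cache-creation flag and never appears in these flags. */
    if (unlikely(flags & GFP_DMA))
        return kmalloc_dma_caches[index];
#endif
    return kmalloc_caches[index];
}
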
Acked-by: Christoph Lameter
Reported-by: Fengguang Wu
Signed-off-by: Joonsoo Kim
Signed-off-by: Pekka Enberg
-
Variables were not properly converted and the conversion caused
a naming conflict.

Signed-off-by: Tetsuo Handa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
James Hogan hit boot problems in next-20130204 on Meta:
META213-Thread0 DSP [LogF] kobject (4fc03980): tried to init an initialized object, something is seriously wrong.
META213-Thread0 DSP [LogF]
META213-Thread0 DSP [LogF] Call trace:
META213-Thread0 DSP [LogF] [] _show_stack+0x68/0x7c
META213-Thread0 DSP [LogF] [] _dump_stack+0x14/0x28
META213-Thread0 DSP [LogF] [] _kobject_init+0x58/0x9c
META213-Thread0 DSP [LogF] [] _kobject_create+0x38/0x64
META213-Thread0 DSP [LogF] [] _kobject_create_and_add+0x14/0x8c
META213-Thread0 DSP [LogF] [] _mnt_init+0xd8/0x220
META213-Thread0 DSP [LogF] [] _vfs_caches_init+0xb0/0x160
META213-Thread0 DSP [LogF] [] _start_kernel+0x274/0x340
META213-Thread0 DSP [LogF] [] _metag_start_kernel+0x58/0x6c
META213-Thread0 DSP [LogF] [] __start+0x44/0x48
META213-Thread0 DSP [LogF]
META213-Thread0 DSP [LogF] devtmpfs: initialized
META213-Thread0 DSP [LogF] L2 Cache: Not present
META213-Thread0 DSP [LogF] BUG: failure at fs/sysfs/dir.c:736/sysfs_read_ns_type()!
META213-Thread0 DSP [LogF] Kernel panic - not syncing: BUG!
META213-Thread0 DSP [Thread Exit] Thread has exited - return code = 4294967295

And bisected the problem to commit 95a05b4 ("slab: Common constants for
kmalloc boundaries").

As it turns out, a fixed KMALLOC_SHIFT_LOW does not work for arches with
higher alignment requirements.

Determine KMALLOC_SHIFT_LOW from ARCH_DMA_MINALIGN instead.
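
A sketch of the derivation (the exact guards are approximate):

#if defined(ARCH_DMA_MINALIGN) && ARCH_DMA_MINALIGN > 8
/* Arches with stricter DMA alignment need a larger minimum object. */
#define KMALLOC_MIN_SIZE    ARCH_DMA_MINALIGN
#define KMALLOC_SHIFT_LOW   ilog2(ARCH_DMA_MINALIGN)
#else
#define KMALLOC_MIN_SIZE    (1 << KMALLOC_SHIFT_LOW)
#endif
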
Reported-and-tested-by: James Hogan
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
01 Feb, 2013
13 commits
-
Put the definitions for the kmem_cache_node structures together so that
we have one structure. That will allow us to create more common fields
in the future, which could yield more opportunities to share code.
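
A sketch of the unified structure, with allocator-specific fields kept
behind #ifdefs (fields abbreviated):

struct kmem_cache_node {
    spinlock_t list_lock;

#ifdef CONFIG_SLAB
    struct list_head slabs_partial;
    struct list_head slabs_full;
    struct list_head slabs_free;
    ...
#endif

#ifdef CONFIG_SLUB
    unsigned long nr_partial;
    struct list_head partial;
    ...
#endif
};
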
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
The list3 or l3 pointers are pointing to per node structures. Reflect
that in the names of variables used.

Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Extract the optimized lookup functions from slub and put them into
slab_common.c. Then make slab use these functions as well.

Joonsoo notes that this fixes some issues with constant folding which
also reduces the code size for slub.

https://lkml.org/lkml/2012/10/20/82
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
On some platforms (such as IA64) the large page size may result in
slab allocations being allowed for sizes that do not fit in 32 bits.

Acked-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
The kmalloc array is created in similar ways in both SLAB
and SLUB. Create a common function and have both allocators
call that function.

V1->V2:
Whitespace cleanup

Reviewed-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Have a common definition of the kmalloc cache arrays in
SLAB and SLUB.

Acked-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Standardize the constants that describe the smallest and largest
object kept in the kmalloc arrays for SLAB and SLUB.

Differentiate between the maximum size for which a slab cache is used
(KMALLOC_MAX_CACHE_SIZE) and the maximum allocatable size
(KMALLOC_MAX_SIZE, KMALLOC_MAX_ORDER).
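
A sketch of how the constants relate (definitions approximate):

/* Largest size served from a kmalloc slab cache: */
#define KMALLOC_MAX_CACHE_SIZE    (1UL << KMALLOC_SHIFT_HIGH)
/* Largest size kmalloc() can hand out at all: */
#define KMALLOC_MAX_SIZE          (1UL << KMALLOC_SHIFT_MAX)
/* ...and the corresponding maximum page order: */
#define KMALLOC_MAX_ORDER         (KMALLOC_SHIFT_MAX - PAGE_SHIFT)
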
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Have a common naming between both slab caches for future changes.
Acked-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Rename the structure used for the per node structures in slab
to have a name that expresses that fact.

Acked-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Make slab use the common functions. We can get rid of a lot
of old ugly stuff as a result, among it the sizes
array, the weird include/linux/kmalloc_sizes file, and
some pretty bad #include statements in slab_def.h.

The one thing that is different in slab is that the 32 byte
cache will also be created for arches that have page sizes
larger than 4K. There are numerous smaller allocations that
SLOB and SLUB can handle better because of their support for
smaller allocation sizes, so let's keep the 32 byte slab also
for arches with > 4K pages.

Reviewed-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Extract the function to determine the index of the slab within
the array of kmalloc caches, as well as a function to determine
the maximum object size from the nr of the kmalloc slab.

This is used here only to simplify slub bootstrap but will
be used later also for SLAB.
Acked-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Move these functions higher up in slab.h so that they are grouped with
other generic kmalloc related definitions.

Acked-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
22 Dec, 2012
3 commits
-
Pull watchdog updates from Wim Van Sebroeck:
"This includes some fixes and code improvements (like
clk_prepare_enable and clk_disable_unprepare), conversion from the
omap_wdt and twl4030_wdt drivers to the watchdog framework, addition
of the SB8x0 chipset support and the DA9055 Watchdog driver and some
OF support for the davinci_wdt driver."

* git://www.linux-watchdog.org/linux-watchdog: (22 commits)
watchdog: mei: avoid oops in watchdog unregister code path
watchdog: Orion: Fix possible null-deference in orion_wdt_probe
watchdog: sp5100_tco: Add SB8x0 chipset support
watchdog: davinci_wdt: add OF support
watchdog: da9052: Fix invalid free of devm_ allocated data
watchdog: twl4030_wdt: Change TWL4030_MODULE_PM_RECEIVER to TWL_MODULE_PM_RECEIVER
watchdog: remove depends on CONFIG_EXPERIMENTAL
watchdog: Convert dev_printk(KERN_&lt;LEVEL&gt; to dev_&lt;level&gt;(
watchdog: DA9055 Watchdog driver
watchdog: omap_wdt: eliminate goto
watchdog: omap_wdt: delete redundant platform_set_drvdata() calls
watchdog: omap_wdt: convert to devm_ functions
watchdog: omap_wdt: convert to new watchdog core
watchdog: WatchDog Timer Driver Core: fix comment
watchdog: s3c2410_wdt: use clk_prepare_enable and clk_disable_unprepare
watchdog: imx2_wdt: Select the driver via ARCH_MXC
watchdog: cpu5wdt.c: add missing del_timer call
watchdog: hpwdt.c: Increase version string
watchdog: Convert twl4030_wdt to watchdog core
davinci_wdt: preparation for switch to common clock framework
...
-
Pull CIFS fixes from Steve French:
"Misc small cifs fixes"

* 'for-next' of git://git.samba.org/sfrench/cifs-2.6:
cifs: eliminate cifsERROR variable
cifs: don't compare uniqueids in cifs_prime_dcache unless server inode numbers are in use
cifs: fix double-free of "string" in cifs_parse_mount_options