24 Sep, 2009
40 commits
-
Add two helpers that allow access to the seq_file's own buffer, but
hide the internal details of seq_files. This allows easier implementation
of special purpose filling functions. It also cleans up some existing
functions which duplicated the seq_file logic.

Make these inline functions in seq_file.h, as suggested by Al.

Signed-off-by: Miklos Szeredi
Acked-by: Hugh Dickins
Signed-off-by: Al Viro
-
As Johannes Weiner pointed out, one of the range checks in do_sendfile
is redundant and is already checked in rw_verify_area.

Signed-off-by: Jeff Layton
Reviewed-by: Johannes Weiner
Cc: Christoph Hellwig
Cc: Al Viro
Cc: Robert Love
Cc: Mandeep Singh Baines
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
sb->s_maxbytes is supposed to indicate the maximum size of a file that can
exist on the filesystem. It's declared as an unsigned long long.

Even if a filesystem has no inherent limit that prevents it from using
every bit in that unsigned long long, it's still problematic to set it to
anything larger than MAX_LFS_FILESIZE. There are places in the kernel
that cast s_maxbytes to a signed value. If it's set too large then this
cast makes it a negative number and generally breaks the comparison.

Change s_maxbytes to be loff_t instead. That should help eliminate the
temptation to set it too large by making it a signed value.

Also, add a warning for a couple of releases to help catch filesystems that
set s_maxbytes too large. Eventually we can either convert this to a
BUG() or just remove it, in the hope that no one will get it wrong now
that it's a signed value.

Signed-off-by: Jeff Layton
Cc: Johannes Weiner
Cc: Christoph Hellwig
Cc: Al Viro
Cc: Robert Love
Cc: Mandeep Singh Baines
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
If fiemap_check_ranges is passed a large enough value, then it's
possible that the value would be cast to a signed value for comparison
against s_maxbytes when we change it to loff_t. Make sure that doesn't
happen by explicitly casting s_maxbytes to an unsigned value for the
purposes of comparison.

Signed-off-by: Jeff Layton
Cc: Christoph Hellwig
Cc: Robert Love
Cc: Al Viro
Cc: Johannes Weiner
Cc: Mandeep Singh Baines
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
Currently all simple_attr.set handlers return 0 on success and negative
codes on error. Fix simple_attr_write() to return these error codes.

Signed-off-by: Wu Fengguang
Cc: Theodore Ts'o
Cc: Al Viro
Cc: Christoph Hellwig
Cc: Nick Piggin
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
seq_path_root() is returning the return value of a successful __d_path()
instead of returning a negative value when mangle_path() failed.

This is not a bug so far because nobody is using the return value of
seq_path_root().

Signed-off-by: Tetsuo Handa
Cc: Al Viro
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
Do a similar optimization as earlier for touch_atime. Getting the lock in
mnt_want_write() is relatively costly, so try all avenues to avoid it first.

This patch is careful to still only update inode fields inside the lock
region.

This didn't show up in benchmarks, but it's easy enough to do.

[akpm@linux-foundation.org: fix typo in comment]
[hugh.dickins@tiscali.co.uk: fix inverted test of mnt_want_write_file()]
Signed-off-by: Andi Kleen
Cc: Christoph Hellwig
Cc: Valerie Aurora
Cc: Al Viro
Cc: Dave Hansen
Signed-off-by: Hugh Dickins
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
Some benchmark testing shows touch_atime to be high up in profile logs for
IO intensive workloads. Most likely that's due to the lock in
mnt_want_write(). Unfortunately touch_atime first takes the lock, and
then does all the other tests that could avoid atime updates (like noatime
or relatime).

Do it the other way round -- first try to avoid the update and only then,
if that didn't succeed, take the lock. That works because none of the
atime avoidance tests rely on locking.

This also eliminates a goto.
Signed-off-by: Andi Kleen
Cc: Christoph Hellwig
Reviewed-by: Valerie Aurora
Cc: Al Viro
Cc: Dave Hansen
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
Hugetlbfs needs to do special things instead of truncate_inode_pages().
Currently, it copies generic_forget_inode() except for the
truncate_inode_pages() call, which is asking for trouble (the code there
isn't trivial). So create a separate function, generic_detach_inode(),
which does all the list magic done in generic_forget_inode(), and call
it from hugetlbfs_forget_inode().

Signed-off-by: Jan Kara
Cc: Al Viro
Cc: Christoph Hellwig
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
Add device-id and inode number for better debugging. This was suggested
by Andreas in one of the threads:
http://article.gmane.org/gmane.comp.file-systems.ext4/12062

"If anyone has a chance, fixing this error message to be not-useless would
be good... Including the device name and the inode number would help
track down the source of the problem."

Signed-off-by: Manish Katiyar
Cc: Andreas Dilger
Cc: Al Viro
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
Impact: have simple_read_from_buffer conform to standards

It was brought to my attention by Andrew Morton, Theodore Ts'o, and H.
Peter Anvin that a read from userspace should only return -EFAULT if
nothing was actually read.

Looking at simple_read_from_buffer I noticed that this function does
not conform to that rule. This patch fixes that function.

[akpm@linux-foundation.org: simplification suggested by hpa]
[hpa@zytor.com: fix count==0 handling]
Signed-off-by: Steven Rostedt
Cc: Al Viro
Cc: Theodore Ts'o
Cc: Ingo Molnar
Signed-off-by: H. Peter Anvin
Signed-off-by: Andrew Morton
Signed-off-by: Al Viro
-
* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus: (39 commits)
cpumask: Move deprecated functions to end of header.
cpumask: remove unused deprecated functions, avoid accusations of insanity
cpumask: use new-style cpumask ops in mm/quicklist.
cpumask: use mm_cpumask() wrapper: x86
cpumask: use mm_cpumask() wrapper: um
cpumask: use mm_cpumask() wrapper: mips
cpumask: use mm_cpumask() wrapper: mn10300
cpumask: use mm_cpumask() wrapper: m32r
cpumask: use mm_cpumask() wrapper: arm
cpumask: Use accessors for cpu_*_mask: um
cpumask: Use accessors for cpu_*_mask: powerpc
cpumask: Use accessors for cpu_*_mask: mips
cpumask: Use accessors for cpu_*_mask: m32r
cpumask: remove arch_send_call_function_ipi
cpumask: arch_send_call_function_ipi_mask: s390
cpumask: arch_send_call_function_ipi_mask: powerpc
cpumask: arch_send_call_function_ipi_mask: mips
cpumask: arch_send_call_function_ipi_mask: m32r
cpumask: arch_send_call_function_ipi_mask: alpha
cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: ia64
...
-
* remove asm/atomic.h inclusion from linux/utsname.h --
  not needed after kref conversion
* remove linux/utsname.h inclusion from files which do not need it

NOTE: it looks like fs/binfmt_elf.c does not need utsname.h, however
due to some personality stuff it _is_ needed -- cowardly leave ELF-related
headers and files alone.

Signed-off-by: Alexey Dobriyan
Signed-off-by: Linus Torvalds
-
This reverts commit c02e3f361c7 ("kmod: fix race in usermodehelper code").

The patch is wrong. UMH_WAIT_EXEC is called with VFORK, which ensures
that the child finishes prior to returning back to the parent. No race.

In fact, the patch makes it even worse because it does the thing it
claims not to do:

- It calls ->complete() on UMH_WAIT_EXEC
- the complete() callback may de-allocate subinfo, as seen in the
  following call chain:

[] (__link_path_walk+0x20/0xeb4) from [] (path_walk+0x48/0x94)
[] (path_walk+0x48/0x94) from [] (do_path_lookup+0x24/0x4c)
[] (do_path_lookup+0x24/0x4c) from [] (do_filp_open+0xa4/0x83c)
[] (do_filp_open+0xa4/0x83c) from [] (open_exec+0x24/0xe0)
[] (open_exec+0x24/0xe0) from [] (do_execve+0x7c/0x2e4)
[] (do_execve+0x7c/0x2e4) from [] (kernel_execve+0x34/0x80)
[] (kernel_execve+0x34/0x80) from [] (____call_usermodehelper+0x130/0x148)
[] (____call_usermodehelper+0x130/0x148) from [] (kernel_thread_exit+0x0/0x8)

and the path pointer was NULL. Good that ARM's kernel_execve()
doesn't check the pointer for NULL, or else I wouldn't have noticed it.

The only race there might be is with UMH_NO_WAIT, but it is too late for
me to investigate it now. UMH_WAIT_PROC could probably also use VFORK,
and we could save one exec. So the only race I see is with UMH_NO_WAIT,
and recent scheduler changes where the child does not always run first
might have triggered something here, but as I said, it is late...

Signed-off-by: Sebastian Andrzej Siewior
Acked-by: Neil Horman
Signed-off-by: Linus Torvalds
-
The new ones have pretty kerneldoc. Move the old ones to the end to
avoid confusing people.

Signed-off-by: Rusty Russell
Cc: benh@kernel.crashing.org
-
We're not forcing removal of the old cpu_ functions, but we might as
well delete the now-unused ones.

Especially CPUMASK_ALLOC and friends. I actually got a phone call (!)
from a hacker who thought I had introduced them as the new cpumask
API. He seemed bewildered that I had lost all taste.

Signed-off-by: Rusty Russell
Cc: benh@kernel.crashing.org
-
This slipped past the previous sweeps.
Signed-off-by: Rusty Russell
Acked-by: Christoph Lameter
-
Makes code futureproof against the impending change to mm->cpu_vm_mask (to be a pointer).
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell
-
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell
-
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell
-
Makes code futureproof against the impending change to mm->cpu_vm_mask
(to be a pointer).

It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).

Also change the actual arg name here to "mm" (which it is), not "task".

Signed-off-by: Rusty Russell
Signed-off-by: Rusty Russell
-
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell
Acked-by: Hirokazu Takata (fixes)
-
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell
-
Use the accessors rather than frobbing bits directly (the new versions
are const).

Signed-off-by: Rusty Russell
Signed-off-by: Mike Travis
-
Use the accessors rather than frobbing bits directly (the new versions
are const).

Signed-off-by: Rusty Russell
Signed-off-by: Mike Travis
-
Use the accessors rather than frobbing bits directly (the new versions
are const).

Signed-off-by: Rusty Russell
Signed-off-by: Mike Travis
-
Use the accessors rather than frobbing bits directly (the new versions
are const).

Signed-off-by: Rusty Russell
Signed-off-by: Mike Travis
-
Now everyone is converted to arch_send_call_function_ipi_mask, remove
the shim and the #defines.

Signed-off-by: Rusty Russell
-
We're weaning the core code off handing cpumask's around on-stack.
This introduces arch_send_call_function_ipi_mask().

Signed-off-by: Rusty Russell
-
We're weaning the core code off handing cpumask's around on-stack.
This introduces arch_send_call_function_ipi_mask(), and by defining
it, the old arch_send_call_function_ipi is defined by the core code.

Signed-off-by: Rusty Russell
-
We're weaning the core code off handing cpumask's around on-stack.
This introduces arch_send_call_function_ipi_mask(), and by defining
it, the old arch_send_call_function_ipi is defined by the core code.

We also take the chance to wean the implementations off the
obsolescent for_each_cpu_mask(): making send_ipi_mask take the pointer
seemed the most natural way to ensure all implementations used
for_each_cpu.

Signed-off-by: Rusty Russell
-
We're weaning the core code off handing cpumask's around on-stack.
This introduces arch_send_call_function_ipi_mask(), and by defining
it, the old arch_send_call_function_ipi is defined by the core code.

We also take the chance to wean the implementations off the
obsolescent for_each_cpu_mask(): making send_ipi_mask take the pointer
seemed the most natural way to ensure all implementations used
for_each_cpu.

Signed-off-by: Rusty Russell
-
We're weaning the core code off handing cpumask's around on-stack.
This introduces arch_send_call_function_ipi_mask().

We also take the chance to wean the send_ipi_message off the
obsolescent for_each_cpu_mask(): making it take a pointer seemed the
most natural way to do this.

Signed-off-by: Rusty Russell
-
These were replaced by topology_core_cpumask and topology_thread_cpumask.
Signed-off-by: Rusty Russell
-
These were replaced by topology_core_cpumask and topology_thread_cpumask.
Signed-off-by: Rusty Russell
-
These were replaced by topology_core_cpumask and topology_thread_cpumask.
Signed-off-by: Rusty Russell
-
These were replaced by topology_core_cpumask and topology_thread_cpumask.
Signed-off-by: Rusty Russell
-
These were replaced by topology_core_cpumask and topology_thread_cpumask.
Signed-off-by: Rusty Russell
-
Everyone is now using smp_call_function_many().
Signed-off-by: Rusty Russell
-
smp_call_function_many is the new version: it takes a pointer. Also,
use the mm accessor macro while we're changing this.

Signed-off-by: Rusty Russell