04 Jan, 2019
1 commit
-
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument
of the user address range verification function since we got rid of the
old racy i386-only code to walk page tables by hand.

It existed because the original 80386 would not honor the write protect
bit when in kernel mode, so you had to do COW by hand before doing any
user access. But we haven't supported that in a long time, and these
days the 'type' argument is a purely historical artifact.

A discussion about extending 'user_access_begin()' to do the range
checking resulted in this patch, because there is no way we're going to
move the old VERIFY_xyz interface to that model. And it's best done at
the end of the merge window when I've done most of my merges, so let's
just get this done once and for all.

This patch was mostly done with a sed-script, with manual fix-ups for
the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.

There were a couple of notable cases:
- csky still had the old "verify_area()" name as an alias.
- the iter_iov code had magical hardcoded knowledge of the actual
values of VERIFY_{READ,WRITE} (not that they mattered, since nothing
really used it)
- microblaze used the type argument for a debug printout

but other than those oddities this should be a total no-op patch.
I tried to fix up all architectures, did fairly extensive grepping for
access_ok() uses, and the changes are trivial, but I may have missed
something. Any missed conversion should be trivially fixable, though.

Signed-off-by: Linus Torvalds
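Concretely, the conversion amounts to dropping the now-dead first
argument at every call site; a hypothetical before/after:

    /* Before: the type argument was required but ignored. */
    if (!access_ok(VERIFY_WRITE, buf, len))
            return -EFAULT;

    /* After: a pure range check against the user address limit. */
    if (!access_ok(buf, len))
            return -EFAULT;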
13 Jun, 2018
1 commit
-
Alpha provides a custom implementation of dec_and_lock(). The function
is split into two parts:
- atomic_add_unless() + return 0 (fast path in assembly)
- remaining part including locking (slow path in C)

Comparing the result of the alpha implementation with the generic
implementation compiled by gcc, it looks like the fast path is optimized
by avoiding a stack frame (and a reload of the GP), register stores, and
so on; all of that happens only in the slow path.
After marking the slowpath (atomic_dec_and_lock_1()) as "noinline" and
doing the fast path in C (the atomic_add_unless(atomic, -1, 1) part) I
noticed differences in the resulting assembly:
- the GP is still reloaded
- atomic_add_unless() adds more memory barriers compared to the custom
assembly
- the custom assembly here does "load, sub, beq" while
atomic_add_unless() does "load, cmpeq, add, bne". This is okay because
it compares against zero after subtraction while the generic code
compares against 1 before.

I'm not sure if avoiding the stack frame (and GP reloading) brings a lot
in terms of performance. Regarding the different barriers, Peter
Zijlstra says:

|refcount decrement needs to be a RELEASE operation, such that all the
|load/stores to the object happen before we decrement the refcount.
|
|Otherwise things like:
|
| obj->foo = 5;
| refcnt_dec(&obj->ref);
|
|can be re-ordered, which then allows fun scenarios like:
|
|	CPU0				CPU1
|
|	refcnt_dec(&obj->ref);
|					if (dec_and_test(&obj->ref))
|						free(obj);
|	obj->foo = 5; // oops UaF
|
|
|This means (for alpha) that there should be a memory barrier _before_
|the decrement, however the dec_and_lock asm thing only has one _after_,
|which, per the above, is too late.
|
|The generic version using add_unless will result in memory barrier
|before and after (because that is the rule for atomic ops with a return
|value) which is strictly too many barriers for the refcount story, but
|who knows what other ordering requirements code has.

Remove the custom alpha implementation of dec_and_lock(); if it is an
issue (performance wise) then the fast path could still be inlined.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Acked-by: Peter Zijlstra (Intel)
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: linux-alpha@vger.kernel.org
Link: https://lkml.kernel.org/r/20180606115918.GG12198@hirez.programming.kicks-ass.net
Link: https://lkml.kernel.org/r/20180612161621.22645-2-bigeasy@linutronix.de
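For reference, the generic dec_and_lock() that Alpha now shares looks
essentially like this (a sketch of lib/dec_and_lock.c; fast path first,
slow path under the lock):

    int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
    {
            /* Fast path: decrement without the lock while the counter
               is anything other than 1. */
            if (atomic_add_unless(atomic, -1, 1))
                    return 0;

            /* Slow path: the counter may reach 0, so re-check while
               holding the lock. */
            spin_lock(lock);
            if (atomic_dec_and_test(atomic))
                    return 1;
            spin_unlock(lock);
            return 0;
    }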
17 Jan, 2018
1 commit
-
Commit 92ce4c3ea7c4, "alpha: add support for memset16", renamed
the function memsetw() to be memset16() but neglected to do this for
the EV6 optimised version, thus when building a kernel optimised
for EV6 (or later) link errors result. This extends the memset16
support to EV6.

Signed-off-by: Michael Cree
Signed-off-by: Matt Turner
02 Nov, 2017
1 commit
-
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.

By default all files without license information are under the default
license of the kernel, which is GPL version 2.

Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.

This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.

How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,

Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.

The analysis to determine which SPDX License Identifier to be applied to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.

The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.

Criteria used to select files for SPDX license identifier tagging was:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
  lines).

Reviewed-by: Philippe Ombredanne
Reviewed-by: Thomas Gleixner
Signed-off-by: Greg Kroah-Hartman
09 Sep, 2017
1 commit
-
Alpha already had an optimised fill-memory-with-16-bit-quantity
assembler routine called memsetw(). It has a slightly different calling
convention from memset16() in that it takes a byte count, not a count of
words. That's the same convention used by ARM's __memset routines, so
rename Alpha's routine to match and add a memset16() wrapper around it.
Then convert Alpha's scr_memsetw() to call memset16() instead of
memsetw().

Link: http://lkml.kernel.org/r/20170720184539.31609-6-willy@infradead.org
Signed-off-by: Matthew Wilcox
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: "H. Peter Anvin"
Cc: "James E.J. Bottomley"
Cc: "Martin K. Petersen"
Cc: David Miller
Cc: Ingo Molnar
Cc: Michael Ellerman
Cc: Minchan Kim
Cc: Ralf Baechle
Cc: Russell King
Cc: Sam Ravnborg
Cc: Sergey Senozhatsky
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
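The wrapper itself is just a byte-count conversion; a sketch of the
resulting declarations (exact prototypes are assumptions):

    /* Renamed assembler routine: takes a byte count, as ARM's
       __memset routines do. */
    extern void *__memset16(void *s, unsigned short c, size_t n);

    static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
    {
            /* memset16() counts 16-bit words; convert to bytes. */
            return __memset16(s, v, n * 2);
    }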
30 Aug, 2017
2 commits
-
Patch 8525023121de4848b5f0a7d867ffeadbc477774d introduced a typo.
That said, the identity AND insns added by that patch are more
clearly written as MOV. At the same time, re-schedule the ev6
version so that the first dispatch can execute in parallel.

Signed-off-by: Richard Henderson
Signed-off-by: Matt Turner
-
There are direct branches between {str*cpy,str*cat} and stx*cpy.
Ensure the branches are within range by merging these objects.

Signed-off-by: Richard Henderson
Signed-off-by: Matt Turner
11 May, 2017
1 commit
-
Pull Kbuild updates from Masahiro Yamada:
- improve Clang support
- clean up various Makefiles
- improve build log visibility (objtool, alpha, ia64)
- improve compiler flag evaluation for better build performance
- fix GCC version-dependent warning
- fix genksyms
* tag 'kbuild-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (23 commits)
kbuild: dtbinst: remove unnecessary __dtbs_install_prep target
ia64: beatify build log for gate.so and gate-syms.o
alpha: make short build log available for division routines
alpha: merge build rules of division routines
alpha: add $(src)/ rather than $(obj)/ to make source file path
Makefile: evaluate LDFLAGS_BUILD_ID only once
objtool: make it visible in make V=1 output
kbuild: clang: add -no-integrated-as to KBUILD_[AC]FLAGS
kbuild: Add support to generate LLVM assembly files
kbuild: Add better clang cross build support
kbuild: drop -Wno-unknown-warning-option from clang options
kbuild: fix asm-offset generation to work with clang
kbuild: consolidate redundant sed script ASM offset generation
frv: Use OFFSET macro in DEF_*REG()
kbuild: avoid conflict between -ffunction-sections and -pg on gcc-4.7
kbuild: Consolidate header generation from ASM offset information
kbuild: use -Oz instead of -Os when using clang
kbuild, LLVMLinux: Add -Werror to cc-option to support clang
Kbuild: make designated_init attribute fatal
kbuild: drop unneeded patterns '.*.orig' and '.*.rej' from distclean
...
03 May, 2017
3 commits
-
This enables the Kbuild standard log style as follows:
  AS      arch/alpha/lib/__divlu.o
  AS      arch/alpha/lib/__divqu.o
  AS      arch/alpha/lib/__remlu.o
  AS      arch/alpha/lib/__remqu.o

Signed-off-by: Masahiro Yamada
-
These four objects are generated by the same build rule, with
different compile options. The build rules can be merged.

Signed-off-by: Masahiro Yamada
-
$(ev6-y)divide.S is a source file, not a build-time generated file.
So, it should be prefixed with $(src)/ rather than $(obj)/.

Signed-off-by: Masahiro Yamada
29 Mar, 2017
2 commits
-
Signed-off-by: Al Viro
-
They used to need odd calling conventions due to the old exception
handling mechanism, the last remnants of which disappeared back in 2002.

Signed-off-by: Al Viro
25 Dec, 2016
1 commit
-
This was entirely automated, using the script by Al:

  PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
  sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
        $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.
Requested-by: Al Viro
Signed-off-by: Linus Torvalds
15 Oct, 2016
1 commit
-
Pull more misc uaccess and vfs updates from Al Viro:
"The rest of the stuff from -next (more uaccess work) + assorted fixes"* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
score: traps: Add missing include file to fix build error
fs/super.c: don't fool lockdep in freeze_super() and thaw_super() paths
fs/super.c: fix race between freeze_super() and thaw_super()
overlayfs: Fix setting IOP_XATTR flag
iov_iter: kernel-doc import_iovec() and rw_copy_check_uvector()
blackfin: no access_ok() for __copy_{to,from}_user()
arm64: don't zero in __copy_from_user{,_inatomic}
arm: don't zero in __copy_from_user_inatomic()/__copy_from_user()
arc: don't leak bits of kernel stack into coredump
alpha: get rid of tail-zeroing in __copy_user()
16 Sep, 2016
1 commit
-
... and adjust copy_from_user() accordingly
Signed-off-by: Al Viro
08 Aug, 2016
1 commit
-
Signed-off-by: Al Viro
14 Mar, 2016
1 commit
-
This patch updates all instances of csum_tcpudp_magic and
csum_tcpudp_nofold to reflect the types that are usually used as the source
inputs. For example the protocol field is populated based on nexthdr which
is actually an unsigned 8 bit value. The length is usually populated based
on skb->len which is an unsigned integer.

This addresses an issue in which the IPv6 function csum_ipv6_magic was
generating a checksum using the full 32b of skb->len while
csum_tcpudp_magic was only using the lower 16 bits. As a result we could
run into issues when attempting to adjust the checksum as there was no
protocol-agnostic way to update it.

With this change the value is still truncated as many architectures use
"(len + proto) << 8", however this truncation only occurs for values
greater than 16776960 in length and as such is unlikely to occur as we stop
the inner headers at ~64K in size.

I did have to make a few minor changes in the arm, mn10300, nios2, and
score versions of the function in order to support these changes as they
were either using things such as an OR to combine the protocol and length,
or were using ntohs to convert the length which would have truncated the
value.

I also updated a few spots in terms of whitespace and type differences for
the addresses. Most of this was just to make sure all of the definitions
were in sync going forward.

Signed-off-by: Alexander Duyck
Signed-off-by: David S. Miller
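The updated prototypes come out as follows (a sketch of the common
declaration; some architectures implement these inline):

    __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
                              __u32 len, __u8 proto, __wsum sum);
    __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
                              __u32 len, __u8 proto, __wsum sum);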
18 Sep, 2015
1 commit
-
__delay was not exported; as a result, while building with allmodconfig
we were getting a build error about the undefined symbol. __delay is
used by drivers/net/phy/mdio-octeon.c.

Signed-off-by: Sudip Mukherjee
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
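The fix itself is a one-line export next to the definition (sketch):

    /* Make __delay available to modules such as mdio-octeon. */
    EXPORT_SYMBOL(__delay);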
01 Feb, 2014
1 commit
-
The patch 3ddc5b46a8e90f3c9251338b60191d0a804b0d92 breaks networking on
alpha (there is a follow-up fix 5cfe8f1ba5eebe6f4b6e5858cdb1a5be4f3272a6,
but networking is still broken even with the second patch).

The patch 3ddc5b46a8e90f3c9251338b60191d0a804b0d92 makes
csum_partial_copy_from_user check the pointer with access_ok. However,
csum_partial_copy_from_user is called also from csum_partial_copy_nocheck
and csum_partial_copy_nocheck is called on kernel pointers and it is
supposed not to check pointer validity.

This bug results in ssh session hangs if the system is loaded and bulk
data are printed to the ssh terminal.

This patch fixes csum_partial_copy_nocheck to call set_fs(KERNEL_DS), so
that access_ok in csum_partial_copy_from_user accepts kernel-space
addresses.

Cc: stable@vger.kernel.org
Signed-off-by: Mikulas Patocka
Signed-off-by: Matt Turner
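A sketch of the shape of the fix: temporarily widen the address limit so
the access_ok() check inside csum_partial_copy_from_user() passes for
kernel pointers.

    __wsum
    csum_partial_copy_nocheck(const void *src, void *dst, int len, __wsum sum)
    {
            mm_segment_t oldfs = get_fs();
            __wsum checksum;

            /* Let access_ok() accept kernel-space addresses. */
            set_fs(KERNEL_DS);
            checksum = csum_partial_copy_from_user(
                    (__force const void __user *)src, dst, len, sum, NULL);
            set_fs(oldfs);
            return checksum;
    }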
17 Nov, 2013
2 commits
-
Introduced by 3ddc5b46a8e90f3c92 ("kernel-wide: fix missing validations
on __get/__put/__copy_to/__copy_from_user()").

Also fix some other places which could be problematic in a similar way,
although they hadn't been proved so, as far as I can tell.

Cc: Michael Cree
Signed-off-by: Matt Turner
-
Compiling with GCC 4.8 yields several instances of
crypto/vmac.c: In function ‘vmac_final’:
crypto/vmac.c:616:9: warning: value computed is not used [-Wunused-value]
memset(&mac, 0, sizeof(vmac_t));
^
arch/alpha/include/asm/string.h:31:25: note: in definition of macro ‘memset’
? __builtin_memset((s),0,(n)) \
^
Converting the macro to an inline function eliminates this problem.

However, doing only that causes problems with the GCC 3.x series. The
inline function cannot be named "memset", as otherwise we wind up with
recursion via __builtin_memset. Solve this by adjusting the symbols
such that __memset is the inline, and ___memset is the real function.

Signed-off-by: Richard Henderson
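A sketch of the resulting arrangement (the constant-zero guard is
illustrative, not the exact kernel code):

    /* ___memset is the real, out-of-line implementation. */
    extern void *___memset(void *s, int c, size_t n);

    /* __memset is the inline; because it is not named "memset",
       __builtin_memset cannot resolve back to it and recurse. */
    static inline void *__memset(void *s, int c, size_t n)
    {
            if (__builtin_constant_p(c) && c == 0)
                    return __builtin_memset(s, 0, n);
            return ___memset(s, c, n);
    }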
12 Sep, 2013
1 commit
-
I found the following pattern that leads in to interesting findings:
grep -r "ret.*|=.*__put_user" *
grep -r "ret.*|=.*__get_user" *
grep -r "ret.*|=.*__copy" *The __put_user() calls in compat_ioctl.c, ptrace compat, signal compat,
since those appear in compat code, we could probably expect the kernel
addresses not to be reachable in the lower 32-bit range, so I think they
might not be exploitable.For the "__get_user" cases, I don't think those are exploitable: the worse
that can happen is that the kernel will copy kernel memory into in-kernel
buffers, and will fail immediately afterward.The alpha csum_partial_copy_from_user() seems to be missing the
access_ok() check entirely. The fix is inspired by x86. This could
lead to information leak on alpha. I also noticed that many architectures
map csum_partial_copy_from_user() to csum_partial_copy_generic(), but I
wonder if the latter is performing the access checks on every
architecture.

Signed-off-by: Mathieu Desnoyers
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: Jens Axboe
Cc: Oleg Nesterov
Cc: David Miller
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
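A sketch of the shape of the x86-inspired fix, with the user range
validated up front (csum_partial_copy_inner() is a made-up stand-in for
the pre-existing copy-and-checksum body):

    __wsum
    csum_partial_copy_from_user(const void __user *src, void *dst,
                                int len, __wsum sum, int *errp)
    {
            /* Reject bad user ranges before touching the data. */
            if (!access_ok(VERIFY_READ, src, len)) {
                    if (errp)
                            *errp = -EFAULT;
                    memset(dst, 0, len);
                    return sum;
            }
            return csum_partial_copy_inner(src, dst, len, sum, errp);
    }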
19 Aug, 2012
1 commit
-
Similar to x86/sparc/powerpc implementations except:
1) we implement an extremely efficient has_zero()/find_zero()
sequence with both prep_zero_mask() and create_zero_mask()
no-operations.
2) Our output from prep_zero_mask() differs in that only the
lowest eight bits are used to represent the zero bytes;
nevertheless it can be safely ORed with other similar masks
from prep_zero_mask() and forms input to create_zero_mask(),
the two fundamental properties prep_zero_mask() must satisfy.

Tests on EV67 and EV68 CPUs revealed that the generic code is
essentially as fast (to within 0.5% of CPU cycles) as the old
Alpha-specific code for large quadword-aligned strings, despite
the 30% extra CPU instructions executed. In contrast, the
generic code for unaligned strings is substantially slower (by
more than a factor of 3) than the old Alpha-specific code.

Signed-off-by: Michael Cree
Acked-by: Matt Turner
Signed-off-by: Linus Torvalds
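On Alpha this falls out almost for free, because cmpbge against zero
already produces a byte mask in the low eight bits; a sketch of the
resulting helpers (assuming the __kernel_cmpbge/__kernel_cttz compiler
helpers from asm/compiler.h):

    static inline unsigned long has_zero(unsigned long val, unsigned long *bits,
                                         const struct word_at_a_time *c)
    {
            /* cmpbge 0,val sets bit i iff byte i of val is zero. */
            unsigned long zero_locations = __kernel_cmpbge(0, val);
            *bits = zero_locations;
            return zero_locations;
    }

    /* Both mask-preparation steps are no-operations. */
    #define prep_zero_mask(a, bits, c)  (bits)
    #define create_zero_mask(bits)      (bits)

    /* Index of the first zero byte: count trailing zero bits. */
    #define find_zero(bits)             (__kernel_cttz(bits))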
29 Mar, 2012
1 commit
-
Disintegrate asm/system.h for Alpha.
Signed-off-by: David Howells
cc: linux-alpha@vger.kernel.org
27 Jul, 2011
1 commit
-
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>.

Signed-off-by: Arun Sharma
Reviewed-by: Eric Dumazet
Cc: Ingo Molnar
Cc: David Miller
Cc: Eric Dumazet
Acked-by: Mike Frysinger
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
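The duplicated helper is expressible once in terms of
atomic_add_unless(); a sketch of the shared definition:

    /* Returns non-zero iff v was incremented, i.e. was not zero. */
    #define atomic_inc_not_zero(v)      atomic_add_unless((v), 1, 0)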
31 Mar, 2011
1 commit
-
Fixes generated by 'codespell' and manually reviewed.
Signed-off-by: Lucas De Marchi
17 Jan, 2011
1 commit
-
Signed-off-by: matt mooney
Signed-off-by: Matt Turner
29 Jan, 2008
1 commit
-
Remove the deprecated __attribute_used__.
[Introduce __section in a few places to silence checkpatch /sam]
Signed-off-by: Adrian Bunk
Signed-off-by: Sam Ravnborg
18 Dec, 2007
1 commit
-
First of all, thanks to Bob Tracy and
Michael Cree for testing.
Especially to Bob, as he has done titanic multi-day git-bisect
work that finally helped to reproduce and nail down the bug
(http://bugzilla.kernel.org/show_bug.cgi?id=9457).

[ev6-]stxncpy.S: it's t12, not t2, that is supposed to contain
the last byte offset upon return. As a result of wrong register use
(which was my fault back in 2003, IIRC), under some circumstances extra
terminating zero bytes were added to destination string. This particularly
led to incorrect DEVPATH strings generated in uevent and therefore to udev
problems.

strncpy.S: an unrelated bug I found while testing the above fix - the
destination is not properly zero-padded when the byte count exceeds the
source length. Actually this is an addition to the strncpy fix from last
year.

Signed-off-by: Ivan Kokshaysky
Cc: Richard Henderson
Cc: Bob Tracy
Cc: Michael Cree
Cc: Kay Sievers
Cc: "Rafael J. Wysocki"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
20 Oct, 2007
2 commits
-
Spelling fixes in arch/alpha/.
Signed-off-by: Simon Arlott
Signed-off-by: Adrian Bunk
-
remove asm/bitops.h includes
Including asm/bitops.h directly may cause compile errors. Don't include
it; include linux/bitops.h instead. The next patch will deny including
the asm header directly.

Cc: Adrian Bunk
Signed-off-by: Jiri Slaby
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
15 Oct, 2007
1 commit
-
The variable CFLAGS is a well-known variable and its usage by
kbuild may result in unexpected behaviour.
On top of that, several people over time have asked for a way to
pass in additional flags to gcc.

This patch replaces the use of CFLAGS with KBUILD_CFLAGS all over the
tree, enabling one to use:
make CFLAGS=...
to specify additional gcc command-line options.

One use case is when trying to find gcc bugs, but other
use cases have been requested too.

Patch was tested on the following architectures:
alpha, arm, i386, x86_64, mips, sparc, sparc64, ia64, m68k

The test was simply to do a defconfig build, apply the patch, and check
that nothing got rebuilt.

Signed-off-by: Sam Ravnborg
18 Jul, 2007
1 commit
-
Signed-off-by: Al Viro
Acked-by: David S. Miller
Acked-by: Geert Uytterhoeven
Signed-off-by: Linus Torvalds
24 Jun, 2007
1 commit
-
Hopefully this fixes http://bugzilla.kernel.org/show_bug.cgi?id=8635
The struct in6_addr passed to csum_ipv6_magic() is 4 byte aligned, so we
can't use the regular 64-bit loads. Since the cost of handling of 4 byte
and 1 byte aligned 64-bit data is roughly the same, this code can cope with
any src/dst [mis]alignment.

Signed-off-by: Ivan Kokshaysky
Cc: Richard Henderson
Cc: Dustin Marquess
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
31 May, 2007
1 commit
-
Remove 2 functions private to the alpha implementation,
in favor of similar functions in .

Provide a more efficient version of the fls64 function
for pre-ev67 alphas.

Signed-off-by: Richard Henderson
Signed-off-by: Linus Torvalds
26 Apr, 2007
1 commit
-
We have several platforms using local copies of identical
code.

Signed-off-by: David S. Miller
03 Dec, 2006
1 commit
-
* sanitize prototypes and annotate
* kill useless access_ok() in csum_partial_copy_from_user() (the only
caller checks it already).
* do_csum_partial_copy_from_user() is not needed now
* replace htons(len) with len << 8 - they are the same wrt checksums
on little-endian (a short derivation follows below).

Signed-off-by: Al Viro
Signed-off-by: David S. Miller
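A worked sketch of the htons(len) / len << 8 equivalence (len < 0x10000):

    /* As a 32-bit checksum input:
     *   len << 8 = ((len >> 8) << 16) + ((len & 0xff) << 8)
     * Ones'-complement folding adds the two 16-bit halves:
     *   fold(len << 8) = (len >> 8) + ((len & 0xff) << 8)
     * which is exactly what htons(len) yields on little-endian,
     * modulo the end-around carry the checksum absorbs anyway. */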
04 Oct, 2006
1 commit
-
Many files include the filename at the beginning; several used a wrong one.
Signed-off-by: Uwe Zeisberger
Signed-off-by: Adrian Bunk
01 Jul, 2006
1 commit
-
Signed-off-by: Jörn Engel
Signed-off-by: Adrian Bunk