23 May, 2013
1 commit
-
Make the SHA1 asm code ABI conformant by making sure all stack
accesses occur above the stack pointer.

Origin: http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=1a9d60d2

Signed-off-by: Ard Biesheuvel
Acked-by: Nicolas Pitre
Cc: stable@vger.kernel.org
Signed-off-by: Russell King
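
The nature of the fix can be shown with a minimal, hypothetical sketch (GNU as syntax; the register, the 4-byte slot and the function names are illustrative only, not the actual patch). Under the ARM AAPCS, memory below the stack pointer is not guaranteed to be preserved, so a spill must reserve stack space before using it:

        .syntax unified
        .arm
        .text

        @ Non-conformant: the temporary sits below sp, where an interrupt
        @ or signal handler running on the same stack may overwrite it
        @ before it is read back.
        .globl bad_spill
    bad_spill:
        str     r4, [sp, #-4]
        @ ... other work that needs the saved value later ...
        ldr     r4, [sp, #-4]
        bx      lr

        @ Conformant: reserve the slot first so every access is at or
        @ above the stack pointer.
        .globl good_spill
    good_spill:
        sub     sp, sp, #4
        str     r4, [sp]
        @ ... other work ...
        ldr     r4, [sp]
        add     sp, sp, #4
        bx      lr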
13 Jan, 2013
1 commit
-
This patch fixes aes-armv4.S and sha1-armv4-large.S to work
natively in Thumb. This allows ARM/Thumb interworking workarounds
to be removed.

I also take the opportunity to convert some explicit assembler
directives for exported functions to the standard
ENTRY()/ENDPROC().

For the code itself:

  * In sha1_block_data_order, use of TEQ with sp is deprecated in
    ARMv7 and not supported in Thumb. For the branches back to
    .L_00_15 and .L_40_59, the TEQ is converted to a CMP, under the
    assumption that clobbering the C flag here will not cause
    incorrect behaviour.

    For the first branch back to .L_20_39_or_60_79 the C flag is
    important, so sp is moved temporarily into another register so
    that TEQ can be used for the comparison.

  * In the AES code, most forms of register-indexed addressing with
    shifts and rotates are not permitted for loads and stores in
    Thumb, so the address calculation is done using a separate
    instruction for the Thumb case (both forms are sketched below).

The resulting code is unlikely to be optimally scheduled, but it
should not have a large impact given the overall size of the code.
I haven't run any benchmarks.

Signed-off-by: Dave Martin
Tested-by: David McCullough (ARM only)
Acked-by: David McCullough
Acked-by: Nicolas Pitre
Signed-off-by: Russell King
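
Both points can be illustrated with a small, hypothetical fragment (GNU as unified syntax; the register choices, labels and function names are illustrative only, not the actual patch):

        .syntax unified
        .text

        @ Original ARM-only forms.
        .arm
        .globl arm_forms
    arm_forms:
        teq     r14, sp                  @ deprecated in ARMv7, no Thumb encoding
        bne     .L_loop_arm
        ldrb    r4, [r10, r7, lsr #24]   @ shifted register offset: ARM only
    .L_loop_arm:
        bx      lr

        @ Thumb-capable replacements.
        .globl thumb_forms
        .thumb
        .thumb_func
    thumb_forms:
        cmp     sp, r14                  @ fine where the C flag may be clobbered
        bne     .L_loop_thumb
        mov     r12, sp                  @ where carry must survive, compare
        teq     r14, r12                 @ against a copy of sp instead
        add     r4, r10, r7, lsr #24     @ compute the address separately,
        ldrb    r4, [r4]                 @ then do a plain load
    .L_loop_thumb:
        bx      lr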
07 Sep, 2012
1 commit
-
Add assembler versions of AES and SHA1 for ARM platforms. This has provided
up to a 50% improvement in IPsec/TCP throughput for tunnels using AES128/SHA1.

Platform   CPU Speed   Endian   Before (bps)   After (bps)   Improvement
IXP425     533 MHz     big        11217042       15566294      ~38%
KS8695     166 MHz     little      3828549        5795373      ~51%

Signed-off-by: David McCullough
Signed-off-by: Herbert Xu