13 Nov, 2016

4 commits

  • By using the unaligned access helpers, we drastically improve
    performance on small MIPS routers, which otherwise have to go through
    the exception fix-up handler for these unaligned accesses (a short
    sketch of the helper pattern follows this entry).

    Signed-off-by: Jason A. Donenfeld
    Reviewed-by: Eric Biggers
    Acked-by: Martin Willi
    Signed-off-by: Herbert Xu

    Jason A. Donenfeld
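
    A minimal sketch of the unaligned-access-helper pattern referred to
    above, assuming a caller that reads 32-bit little-endian words from a
    byte buffer; the function name is illustrative, not taken from the
    patch:

    #include <linux/types.h>
    #include <asm/unaligned.h>

    /* A plain cast-and-dereference of a misaligned pointer traps on
     * strict-alignment CPUs (e.g. many MIPS cores) and is emulated in the
     * kernel's exception fix-up handler.  The helper compiles down to
     * byte loads where the CPU needs them and a single load elsewhere. */
    static u32 read_le32(const u8 *buf)
    {
            /* instead of: return le32_to_cpup((const __le32 *)buf); */
            return get_unaligned_le32(buf);
    }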
     
  • Switch to the resource-managed function devm_kzalloc instead
    of kzalloc and remove the unneeded kfree.

    Also, remove the kfree in the probe function and remove the
    mv_remove function, as it now has nothing left to do (a generic
    before/after sketch of this conversion follows this entry).
    The Coccinelle semantic patch used to make this change is as follows:
    //<smpl>
    @platform@
    identifier p, probefn, removefn;
    @@
    struct platform_driver p = {
      .probe = probefn,
      .remove = removefn,
    };

    @prb@
    identifier platform.probefn, pdev;
    expression e, e1, e2;
    @@
    probefn(struct platform_device *pdev, ...) {
      <+...
    - e = kzalloc(e1, e2)
    + e = devm_kzalloc(&pdev->dev, e1, e2)
      ...
    ?-kfree(e);
      ...+>
    }

    @rem depends on prb@
    identifier platform.removefn;
    expression prb.e;
    @@
    removefn(...) {
      <...
    - kfree(e);
      ...>
    }
    //</smpl>

    Signed-off-by: Nadim Almas
    Signed-off-by: Herbert Xu

    Nadim Almas
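
    Roughly what the semantic patch above does to a driver, shown as a
    hedged before/after sketch with made-up names (my_priv, my_probe)
    rather than the actual mv_cesa code:

    #include <linux/platform_device.h>
    #include <linux/slab.h>

    struct my_priv {
            int foo;
    };

    static int my_probe(struct platform_device *pdev)
    {
            struct my_priv *p;

            /* before: p = kzalloc(sizeof(*p), GFP_KERNEL); paired with
             * kfree(p) in the error paths and in the remove callback */
            p = devm_kzalloc(&pdev->dev, sizeof(*p), GFP_KERNEL);
            if (!p)
                    return -ENOMEM;

            platform_set_drvdata(pdev, p);
            return 0;
    }

    /* The allocation is released automatically when the device is
     * unbound, so a remove callback whose only job was kfree() can be
     * deleted, as the commit does for mv_remove. */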
     
  • Trivial fix to spelling mistake "pointeur" to "pointer"
    in dev_err message

    Signed-off-by: Colin Ian King
    Signed-off-by: Herbert Xu

    Colin Ian King
     
  • The exponent size in the ccp_op structure is in bits. A v5
    CCP requires the exponent size to be in bytes, so convert
    the size from bits to bytes when populating the descriptor
    (a one-line sketch of the conversion follows this entry).

    The current code references the exponent in memory, but
    these fields have not been set, since the exponent is
    actually stored in the LSB. Populate the descriptor with
    the LSB location (address).

    Signed-off-by: Gary R Hook
    Signed-off-by: Herbert Xu

    Gary R Hook
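
    A hedged sketch of the bits-to-bytes conversion mentioned above; the
    helper name and parameter are illustrative, not the actual ccp_op or
    descriptor fields:

    #include <linux/kernel.h>

    /* The request carries the exponent length in bits; a v5 CCP
     * descriptor expects bytes, so round up to whole bytes. */
    static unsigned int exp_len_bytes(unsigned int exp_len_bits)
    {
            return DIV_ROUND_UP(exp_len_bits, 8);
    }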
     

01 Nov, 2016

9 commits


25 Oct, 2016

19 commits


21 Oct, 2016

8 commits

  • The lsb field uses a value of -1 to indicate that it
    is unassigned. Therefore the type must be a signed int
    (a small illustration follows this entry).

    Signed-off-by: Gary R Hook
    Signed-off-by: Herbert Xu

    Gary R Hook
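
    A small illustration of why the sentinel forces a signed type; the
    struct below is a stand-in, not the real ccp structure:

    #include <linux/types.h>

    struct example {
            int lsb;        /* -1 means no LSB has been assigned yet */
    };

    static bool lsb_assigned(const struct example *e)
    {
            /* If lsb were unsigned, -1 would wrap to a huge positive
             * value and this check could never detect "unassigned". */
            return e->lsb >= 0;
    }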
     
  • The AES key schedule generation is mostly endian agnostic, with the
    exception of the rotation and the incorporation of the round constant
    at the start of each round. So implement a big endian specific version
    of that part to make the whole routine big endian compatible (a C
    sketch of the endian-sensitive step follows this entry).

    Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions")
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
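
    The endian-sensitive step described above, sketched in plain C as an
    assumption-laden stand-in for the actual ARMv8 Crypto Extensions
    assembly; SubWord (the S-box substitution) is omitted and the
    function name is made up:

    #include <asm/byteorder.h>
    #include <linux/bitops.h>
    #include <linux/types.h>

    /* RotWord plus round-constant XOR on a round-key word held as a
     * CPU-native u32.  Which byte of the u32 comes first in memory
     * depends on endianness, so the rotation direction and the rcon
     * position have to flip for big endian builds. */
    static u32 rot_rcon(u32 prev, u32 rcon)
    {
    #ifdef __LITTLE_ENDIAN
            return ror32(prev, 8) ^ rcon;
    #else
            return rol32(prev, 8) ^ (rcon << 24);
    #endif
    }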
     
  • Emit the XTS tweak literal constants in the appropriate order for a
    single 128-bit scalar literal load.

    Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions")
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • The AES implementation using pure NEON instructions relies on the generic
    AES key schedule generation routines, which store the round keys as arrays
    of 32-bit quantities stored in memory using native endianness. This means
    we should refer to these round keys using 4x4 loads rather than 16x1 loads
    (see the byte-versus-word sketch after this entry).
    In addition, the ShiftRows tables are loaded using a single scalar load,
    which is also affected by endianness, so emit these tables in the correct
    order depending on whether we are building for big endian or not.

    Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions")
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
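
    A plain-C illustration of the load-width point above (the same
    consideration drives the SHA-1, SHA-256 and GHASH fixes below): the
    data is an array of native-endian words, so it has to be moved as
    words, not as a string of bytes. The helpers are illustrative only:

    #include <linux/string.h>
    #include <linux/types.h>

    /* For a round-key word holding 0x03020100, the u32 value is the same
     * on LE and BE builds, but its bytes in memory are 00 01 02 03 on LE
     * and 03 02 01 00 on BE.  Copying word by word (the 4x4 load) is
     * therefore endian-agnostic ... */
    static void copy_as_words(u32 dst[4], const u32 src[4])
    {
            int i;

            for (i = 0; i < 4; i++)
                    dst[i] = src[i];
    }

    /* ... while copying the same data as 16 raw bytes (the 16x1 load)
     * reproduces the in-memory byte order and only matches the layout
     * the assembly expects on little endian. */
    static void copy_as_bytes(u8 dst[16], const u32 src[4])
    {
            memcpy(dst, src, 16);
    }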
     
  • The AES-CCM implementation that uses ARMv8 Crypto Extensions instructions
    refers to the AES round keys as pairs of 64-bit quantities, which causes
    failures when building the code for big endian. In addition, it byte swaps
    the input counter unconditionally, while this is only required for little
    endian builds. So fix both issues (a small counter-handling sketch
    follows this entry).

    Fixes: 12ac3efe74f8 ("arm64/crypto: use crypto instructions to generate AES key schedule")
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
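
    A hedged C sketch of the counter-handling point above: CCM keeps its
    counter big-endian inside the counter block, so the byte swap should
    follow the build's endianness instead of being done unconditionally.
    The helper below is illustrative, not the kernel's actual code:

    #include <linux/types.h>
    #include <asm/unaligned.h>

    /* Increment the 32-bit big-endian counter in the last four bytes of
     * a 16-byte CTR block.  The *_be32 helpers swap bytes only on
     * little-endian builds, so no explicit unconditional swap is needed. */
    static void ctr_inc(u8 block[16])
    {
            put_unaligned_be32(get_unaligned_be32(block + 12) + 1,
                               block + 12);
    }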
     
  • The SHA256 digest is an array of 8 32-bit quantities, so we should refer
    to them as such in order for this code to work correctly when built for
    big endian. So replace 16 byte scalar loads and stores with 4x32 vector
    ones where appropriate.

    Fixes: 6ba6c74dfc6b ("arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions")
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • The SHA1 digest is an array of 5 32-bit quantities, so we should refer
    to them as such in order for this code to work correctly when built for
    big endian. So replace 16 byte scalar loads and stores with 4x4 vector
    ones where appropriate.

    Fixes: 2c98833a42cd ("arm64/crypto: SHA-1 using ARMv8 Crypto Extensions")
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • The GHASH key and digest are both pairs of 64-bit quantities, but the
    GHASH code does not always refer to them as such, causing failures when
    built for big endian. So replace the 16x1 loads and stores with 2x8 ones.

    Fixes: b913a6404ce2 ("arm64/crypto: improve performance of GHASH algorithm")
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel