08 May, 2019

1 commit

  • Pull networking updates from David Miller:
    "Highlights:

    1) Support AES128-CCM ciphers in kTLS, from Vakul Garg.

    2) Add fib_sync_mem to control the amount of dirty memory we allow to
    queue up between synchronize RCU calls, from David Ahern.

    3) Make flow classifier more lockless, from Vlad Buslov.

    4) Add PHY downshift support to aquantia driver, from Heiner
    Kallweit.

    5) Add SKB cache for TCP rx and tx, from Eric Dumazet. This reduces
    contention on SLAB spinlocks in heavy RPC workloads.

    6) Partial GSO offload support in XFRM, from Boris Pismenny.

    7) Add fast link down support to ethtool, from Heiner Kallweit.

    8) Use siphash for IP ID generator, from Eric Dumazet.

    9) Pull nexthops even further out from ipv4/ipv6 routes and FIB
    entries, from David Ahern.

    10) Move skb->xmit_more into a per-cpu variable, from Florian
    Westphal.

    11) Improve eBPF verifier speed and increase maximum program size,
    from Alexei Starovoitov.

    12) Eliminate per-bucket spinlocks in rhashtable, and instead use bit
    spinlocks. From Neil Brown.

    13) Allow tunneling with GUE encap in ipvs, from Jacky Hu.

    14) Improve link partner cap detection in generic PHY code, from
    Heiner Kallweit.

    15) Add layer 2 encap support to bpf_skb_adjust_room(), from Alan
    Maguire.

    16) Remove SKB list implementation assumptions in SCTP, from yours truly.

    17) Various cleanups, optimizations, and simplifications in r8169
    driver. From Heiner Kallweit.

    18) Add memory accounting on TX and RX path of SCTP, from Xin Long.

    19) Switch PHY drivers over to use dynamic feature detection, from
    Heiner Kallweit.

    20) Support flow steering without masking in dpaa2-eth, from Ioana
    Ciocoi.

    21) Implement ndo_get_devlink_port in netdevsim driver, from Jiri
    Pirko.

    22) Increase the strictness of parsing of current and future netlink
    attributes, and also export such policies to userspace. From Johannes
    Berg.

    23) Allow DSA tag drivers to be modular, from Andrew Lunn.

    24) Remove legacy DSA probing support, also from Andrew Lunn.

    25) Allow ll_temac driver to be used on non-x86 platforms, from Esben
    Haabendal.

    26) Add a generic tracepoint for TX queue timeouts to ease debugging,
    from Cong Wang.

    27) More indirect call optimizations, from Paolo Abeni"

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1763 commits)
    cxgb4: Fix error path in cxgb4_init_module
    net: phy: improve pause mode reporting in phy_print_status
    dt-bindings: net: Fix a typo in the phy-mode list for ethernet bindings
    net: macb: Change interrupt and napi enable order in open
    net: ll_temac: Improve error message on error IRQ
    net/sched: remove block pointer from common offload structure
    net: ethernet: support of_get_mac_address new ERR_PTR error
    net: usb: smsc: fix warning reported by kbuild test robot
    staging: octeon-ethernet: Fix of_get_mac_address ERR_PTR check
    net: dsa: support of_get_mac_address new ERR_PTR error
    net: dsa: sja1105: Fix status initialization in sja1105_get_ethtool_stats
    vrf: sit mtu should not be updated when vrf netdev is the link
    net: dsa: Fix error cleanup path in dsa_init_module
    l2tp: Fix possible NULL pointer dereference
    taprio: add null check on sched_nest to avoid potential null pointer dereference
    net: mvpp2: cls: fix less than zero check on a u32 variable
    net_sched: sch_fq: handle non connected flows
    net_sched: sch_fq: do not assume EDT packets are ordered
    net: hns3: use devm_kcalloc when allocating desc_cb
    net: hns3: some cleanup for struct hns3_enet_ring
    ...

    Linus Torvalds
     

07 May, 2019

1 commit

  • Pull crypto update from Herbert Xu:
    "API:
    - Add support for AEAD in simd
    - Add fuzz testing to testmgr
    - Add panic_on_fail module parameter to testmgr
    - Use a per-CPU struct instead of multiple variables in scompress
    - Change verify API for akcipher

    Algorithms:
    - Convert x86 AEAD algorithms over to simd
    - Forbid 2-key 3DES in FIPS mode
    - Add EC-RDSA (GOST 34.10) algorithm

    Drivers:
    - Set output IV with ctr-aes in crypto4xx
    - Set output IV in rockchip
    - Fix potential length overflow with hashing in sun4i-ss
    - Fix computation error with ctr in vmx
    - Add SM4 protected keys support in ccree
    - Remove long-broken mxc-scc driver
    - Add rfc4106(gcm(aes)) cipher support in cavium/nitrox"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (179 commits)
    crypto: ccree - use a proper le32 type for le32 val
    crypto: ccree - remove set but not used variable 'du_size'
    crypto: ccree - Make cc_sec_disable static
    crypto: ccree - fix spelling mistake "protedcted" -> "protected"
    crypto: caam/qi2 - generate hash keys in-place
    crypto: caam/qi2 - fix DMA mapping of stack memory
    crypto: caam/qi2 - fix zero-length buffer DMA mapping
    crypto: stm32/cryp - update to return iv_out
    crypto: stm32/cryp - remove request mutex protection
    crypto: stm32/cryp - add weak key check for DES
    crypto: atmel - remove set but not used variable 'alg_name'
    crypto: picoxcell - Use dev_get_drvdata()
    crypto: crypto4xx - get rid of redundant using_sd variable
    crypto: crypto4xx - use sync skcipher for fallback
    crypto: crypto4xx - fix cfb and ofb "overran dst buffer" issues
    crypto: crypto4xx - fix ctr-aes missing output IV
    crypto: ecrdsa - select ASN1 and OID_REGISTRY for EC-RDSA
    crypto: ux500 - use ccflags-y instead of CFLAGS_.o
    crypto: ccree - handle tee fips error during power management resume
    crypto: ccree - add function to handle cryptocell tee fips error
    ...

    Linus Torvalds
     

28 Apr, 2019

1 commit

  • We currently have two levels of strict validation:

    1) liberal (default)
    - undefined (type >= max) & NLA_UNSPEC attributes accepted
    - attribute length >= expected accepted
    - garbage at end of message accepted
    2) strict (opt-in)
    - NLA_UNSPEC attributes accepted
    - attribute length >= expected accepted

    Split out parsing strictness into four different options:
    * TRAILING - check that there's no trailing data after parsing
    attributes (in message or nested)
    * MAXTYPE - reject attrs > max known type
    * UNSPEC - reject attributes with NLA_UNSPEC policy entries
    * STRICT_ATTRS - strictly validate attribute size

    The default for future things should be *everything*.
    The current *_strict() is a combination of TRAILING and MAXTYPE,
    and is renamed to _deprecated_strict().
    The current regular parsing has none of this, and is renamed to
    *_parse_deprecated().

    Additionally, this allows us to selectively set one of the new flags
    even on old policies. Notably, the UNSPEC flag could be useful here,
    since it can be arranged (by filling in the policy) not to be an
    incompatible userspace ABI change, but it would then, going forward,
    prevent forgetting attribute entries. The same can apply to the
    POLICY flag.

    We end up with the following renames:
    * nla_parse -> nla_parse_deprecated
    * nla_parse_strict -> nla_parse_deprecated_strict
    * nlmsg_parse -> nlmsg_parse_deprecated
    * nlmsg_parse_strict -> nlmsg_parse_deprecated_strict
    * nla_parse_nested -> nla_parse_nested_deprecated
    * nla_validate_nested -> nla_validate_nested_deprecated
    * nlmsg_validate -> nlmsg_validate_deprecated

    Using spatch, of course:
    @@
    expression TB, MAX, HEAD, LEN, POL, EXT;
    @@
    -nla_parse(TB, MAX, HEAD, LEN, POL, EXT)
    +nla_parse_deprecated(TB, MAX, HEAD, LEN, POL, EXT)

    @@
    expression NLH, HDRLEN, TB, MAX, POL, EXT;
    @@
    -nlmsg_parse(NLH, HDRLEN, TB, MAX, POL, EXT)
    +nlmsg_parse_deprecated(NLH, HDRLEN, TB, MAX, POL, EXT)

    @@
    expression NLH, HDRLEN, TB, MAX, POL, EXT;
    @@
    -nlmsg_parse_strict(NLH, HDRLEN, TB, MAX, POL, EXT)
    +nlmsg_parse_deprecated_strict(NLH, HDRLEN, TB, MAX, POL, EXT)

    @@
    expression TB, MAX, NLA, POL, EXT;
    @@
    -nla_parse_nested(TB, MAX, NLA, POL, EXT)
    +nla_parse_nested_deprecated(TB, MAX, NLA, POL, EXT)

    @@
    expression START, MAX, POL, EXT;
    @@
    -nla_validate_nested(START, MAX, POL, EXT)
    +nla_validate_nested_deprecated(START, MAX, POL, EXT)

    @@
    expression NLH, HDRLEN, MAX, POL, EXT;
    @@
    -nlmsg_validate(NLH, HDRLEN, MAX, POL, EXT)
    +nlmsg_validate_deprecated(NLH, HDRLEN, MAX, POL, EXT)

    For this patch, don't actually add the strict, non-renamed versions
    yet, so that the build breaks if I get it wrong.

    Also, while at it, make nla_validate and nla_parse go down to a
    common __nla_validate_parse() function to avoid code duplication.

    Ultimately, this allows us to have very strict validation for every
    new caller of nla_parse()/nlmsg_parse() etc as re-introduced in the
    next patch, while existing things will continue to work as is.

    In effect then, this adds fully strict validation for any new command.
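
    As a reference for the four options described above, a minimal sketch
    of how they might be expressed as validation flags (the NL_VALIDATE_*
    names and values here are assumptions for illustration):

        /* hypothetical flag split; names/values are assumptions */
        enum netlink_validation {
                NL_VALIDATE_LIBERAL      = 0,
                NL_VALIDATE_TRAILING     = 1 << 0, /* no trailing data */
                NL_VALIDATE_MAXTYPE      = 1 << 1, /* reject attrs > maxtype */
                NL_VALIDATE_UNSPEC       = 1 << 2, /* reject NLA_UNSPEC attrs */
                NL_VALIDATE_STRICT_ATTRS = 1 << 3, /* strict length checks */
        };

        /* the old *_strict() behaviour, per the text above */
        #define NL_VALIDATE_DEPRECATED_STRICT \
                (NL_VALIDATE_TRAILING | NL_VALIDATE_MAXTYPE)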

    Signed-off-by: Johannes Berg
    Signed-off-by: David S. Miller

    Johannes Berg
     

25 Apr, 2019

4 commits

  • Fix undefined symbol issue in ecrdsa_generic module when ASN1
    or OID_REGISTRY aren't enabled in the config by selecting these
    options for CRYPTO_ECRDSA.

    ERROR: "asn1_ber_decoder" [crypto/ecrdsa_generic.ko] undefined!
    ERROR: "look_up_OID" [crypto/ecrdsa_generic.ko] undefined!

    Reported-by: Randy Dunlap
    Cc: Stephen Rothwell
    Signed-off-by: Vitaly Chikunov
    Acked-by: Randy Dunlap # build-tested
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
    Mark the sm4, and the missing aes, algorithms using protected keys,
    which are identical to the same algorithms with no HW protected keys,
    as tested.

    Signed-off-by: Gilad Ben-Yossef
    Signed-off-by: Herbert Xu

    Gilad Ben-Yossef
     
  • The flags field in 'struct shash_desc' never actually does anything.
    The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP.
    However, no shash algorithm ever sleeps, making this flag a no-op.

    With this being the case, inevitably some users who can't sleep wrongly
    pass MAY_SLEEP. These would all need to be fixed if any shash algorithm
    actually started sleeping. For example, the shash_ahash_*() functions,
    which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP
    from the ahash API to the shash API. However, the shash functions are
    called under kmap_atomic(), so actually they're assumed to never sleep.

    Even if it turns out that some users do need preemption points while
    hashing large buffers, we could easily provide a helper function
    crypto_shash_update_large() which divides the data into smaller chunks
    and calls crypto_shash_update() and cond_resched() for each chunk. It's
    not necessary to have a flag in 'struct shash_desc', nor is it necessary
    to make individual shash algorithms aware of this at all.

    Therefore, remove shash_desc::flags, and document that the
    crypto_shash_*() functions can be called from any context.
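
    As an illustration of the helper suggested above, a minimal sketch
    (crypto_shash_update_large() does not exist in the tree, and the chunk
    size is an arbitrary assumption):

        /* hypothetical helper: hash a large buffer with preemption points */
        static int crypto_shash_update_large(struct shash_desc *desc,
                                             const u8 *data, unsigned int len)
        {
                while (len) {
                        unsigned int n = min(len, 4096U);
                        int err = crypto_shash_update(desc, data, n);

                        if (err)
                                return err;
                        data += n;
                        len -= n;
                        cond_resched();  /* preemption point between chunks */
                }
                return 0;
        }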

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The crypto_yield() in shash_ahash_digest() occurs after the entire
    digest operation already happened, so there's no real point. Remove it.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

19 Apr, 2019

2 commits

  • CCM instances can be created by either the "ccm" template, which only
    allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base",
    which allows choosing the ctr and cbcmac implementations, e.g.
    "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".

    However, a "ccm_base" instance prevents a "ccm" instance from being
    registered using the same implementations. Nor will the instance be
    found by lookups of "ccm". This can be used as a denial of service.
    Moreover, "ccm_base" instances are never tested by the crypto
    self-tests, even if there are compatible "ccm" tests.

    The root cause of these problems is that instances of the two templates
    use different cra_names. Therefore, fix these problems by making
    "ccm_base" instances set the same cra_name as "ccm" instances, e.g.
    "ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".

    This requires extracting the block cipher name from the name of the ctr
    and cbcmac algorithms. It also requires starting to verify that the
    algorithms are really ctr and cbcmac using the same block cipher, not
    something else entirely. But it would be bizarre if anyone were
    actually using non-ccm-compatible algorithms with ccm_base, so this
    shouldn't break anyone in practice.
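
    To illustrate the renaming, a rough sketch of the cra_name derivation
    (a simplified assumption of what the patch needs to do, not the patch
    itself):

        /* hypothetical: derive "ccm(aes)" from ctr's cra_name "ctr(aes)" */
        char ccm_name[CRYPTO_MAX_ALG_NAME];

        if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0)
                return -EINVAL;  /* not actually a ctr instance */
        /* ctr->base.cra_name is "ctr(<cipher>)"; reuse "<cipher>)" */
        if (snprintf(ccm_name, CRYPTO_MAX_ALG_NAME, "ccm(%s",
                     ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
                return -ENAMETOOLONG;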

    Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode")
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • GCM instances can be created by either the "gcm" template, which only
    allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base",
    which allows choosing the ctr and ghash implementations, e.g.
    "gcm_base(ctr(aes-generic),ghash-generic)".

    However, a "gcm_base" instance prevents a "gcm" instance from being
    registered using the same implementations. Nor will the instance be
    found by lookups of "gcm". This can be used as a denial of service.
    Moreover, "gcm_base" instances are never tested by the crypto
    self-tests, even if there are compatible "gcm" tests.

    The root cause of these problems is that instances of the two templates
    use different cra_names. Therefore, fix these problems by making
    "gcm_base" instances set the same cra_name as "gcm" instances, e.g.
    "gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)".

    This requires extracting the block cipher name from the name of the ctr
    algorithm. It also requires starting to verify that the algorithms are
    really ctr and ghash, not something else entirely. But it would be
    bizarre if anyone were actually using non-gcm-compatible algorithms with
    gcm_base, so this shouldn't break anyone in practice.

    Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter")
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

18 Apr, 2019

24 commits

  • shash_ahash_digest(), which is the ->digest() method for ahash tfms that
    use an shash algorithm, has an optimization where crypto_shash_digest()
    is called if the data is in a single page. But an off-by-one error
    prevented this path from being taken unless the user happened to provide
    extra data in the scatterlist. Fix it.
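
    A sketch of the kind of off-by-one involved (the exact condition shown
    is an assumption based on the description above):

        /* fast path: the whole request lies within one page of the first
         * scatterlist entry */
        if (nbytes &&
            nbytes <= min(sg->length, ((unsigned int)PAGE_SIZE) - offset))
                /* the comparison was "<" before, so a request that exactly
                 * filled the page fell through to the slow path */
                err = crypto_shash_digest(desc, data, nbytes, result);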

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Remove cryptd_alloc_ablkcipher() and the ability of cryptd to create
    algorithms with the deprecated "ablkcipher" type.

    This has been unused since commit 0e145b477dea ("crypto: ablk_helper -
    remove ablk_helper"). Instead, cryptd_alloc_skcipher() is used.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
    In commit 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead
    multiple variables") I accidentally initialized the memory on a random
    CPU multiple times. I should have initialized the memory on every CPU,
    as was done earlier. I didn't notice this because the scheduler didn't
    move the task to another CPU.
    Guenter managed to do that, and the code crashed as expected.

    Allocate / free per-CPU memory on each CPU.
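
    A minimal sketch of the corrected pattern (names and the scratch size
    are assumptions; the actual scratch structure in scompress differs):

        /* allocate the scratch buffer on every possible CPU, not just one */
        static void * __percpu *scratches;
        int cpu;

        for_each_possible_cpu(cpu) {
                void *buf = vmalloc_node(SCOMP_SCRATCH_SIZE, /* assumed */
                                         cpu_to_node(cpu));

                if (!buf)
                        return -ENOMEM;  /* caller unwinds prior allocations */
                *per_cpu_ptr(scratches, cpu) = buf;
        }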

    Fixes: 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead multiple variables")
    Reported-by: Guenter Roeck
    Signed-off-by: Sebastian Andrzej Siewior
    Tested-by: Guenter Roeck
    Signed-off-by: Herbert Xu

    Sebastian Andrzej Siewior
     
  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.
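
    The mechanical change is tiny; a representative sketch in the diff
    style used elsewhere in this digest (the generic module's init
    function name is an example, not taken from this patch):

        /* generic implementations: register earlier in boot */
        -module_init(sha256_generic_mod_init);
        +subsys_initcall(sha256_generic_mod_init);

        /* cryptomgr: move before all subsys_initcalls */
        -module_init(cryptomgr_init);
        +arch_initcall(cryptomgr_init);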

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • When the extra crypto self-tests are enabled, test each AEAD algorithm
    against its generic implementation when one is available. This
    involves: checking the algorithm properties for consistency, then
    randomly generating test vectors using the generic implementation and
    running them against the implementation under test. Both good and bad
    inputs are tested.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • When the extra crypto self-tests are enabled, test each skcipher
    algorithm against its generic implementation when one is available.
    This involves: checking the algorithm properties for consistency, then
    randomly generating test vectors using the generic implementation and
    running them against the implementation under test. Both good and bad
    inputs are tested.

    This has already detected a bug in the skcipher_walk API, a bug in the
    LRW template, and an inconsistency in the cts implementations.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • When the extra crypto self-tests are enabled, test each hash algorithm
    against its generic implementation when one is available. This
    involves: checking the algorithm properties for consistency, then
    randomly generating test vectors using the generic implementation and
    running them against the implementation under test. Both good and bad
    inputs are tested.

    This has already detected a bug in the x86 implementation of poly1305,
    bugs in crct10dif, and an inconsistency in cbcmac.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Add some helper functions in preparation for fuzz testing algorithms
    against their generic implementation.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • In preparation for fuzz testing algorithms against their generic
    implementation, make error messages in testmgr identify test vectors by
    name rather than index. Built-in test vectors are simply "named" by
    their index in testmgr.h, as before. But (in later patches) generated
    test vectors will be given more descriptive names to help developers
    debug problems detected with them.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Update testmgr to support testing for specific errors from setkey() and
    digest() for hashes; setkey() and encrypt()/decrypt() for skciphers and
    ciphers; and setkey(), setauthsize(), and encrypt()/decrypt() for AEADs.
    This is useful because algorithms usually restrict the lengths or format
    of the message, key, and/or authentication tag in some way. And bad
    inputs should be tested too, not just good inputs.

    As part of this change, remove the ambiguously-named 'fail' flag and
    replace it with 'setkey_error = -EINVAL' for the only test vector that
    used it -- the DES weak key test vector. Note that this tightens the
    test to require -EINVAL rather than any error code, but AFAICS this
    won't cause any test failure.

    Other than that, these new fields aren't set on any test vectors yet.
    Later patches will do so.
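
    A sketch of what the new expected-error fields might look like on a
    test vector (an illustration assuming the field layout described
    above):

        /* hypothetical vector: expect setkey() to fail with -EINVAL on
         * the DES weak key (replaces the old ambiguous .fail flag) */
        static const struct cipher_testvec des_weak_key_tv = {
                .key          = "\x01\x01\x01\x01\x01\x01\x01\x01",
                .klen         = 8,
                .setkey_error = -EINVAL,
        };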

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
    Add testmgr test vectors for the EC-RDSA algorithm for each of the five
    supported parameter sets (curves). Because there are no officially
    published test vectors for the curves, the vectors were generated by
    gost-engine.

    Signed-off-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
    Add the Elliptic Curve Russian Digital Signature Algorithm (GOST R
    34.10-2012, RFC 7091, ISO/IEC 14888-3), one of the Russian (and since
    2018 the CIS countries') cryptographic standard algorithms (called GOST
    algorithms). Only signature verification is supported, with the intent
    of being used in IMA.

    Summary of the changes:

    * crypto/Kconfig:
    - EC-RDSA is added into Public-key cryptography section.

    * crypto/Makefile:
    - ecrdsa objects are added.

    * crypto/asymmetric_keys/x509_cert_parser.c:
    - Recognize EC-RDSA and Streebog OIDs.

    * include/linux/oid_registry.h:
    - EC-RDSA OIDs are added to the enum. Also, two currently
    unimplemented curve OIDs are added for possible later extension (to
    avoid changing the numbering and grouping).

    * crypto/ecc.c:
    - Kenneth MacKay's copyright date is updated to 2014, because
    vli_mmod_slow, ecc_point_add, and ecc_point_mult_shamir are based on
    his code from micro-ecc.
    - Functions needed for ecrdsa are EXPORT_SYMBOL'ed.
    - New functions:
    vli_is_negative - helper to determine the sign of a vli;
    vli_from_be64 - unpack a big-endian array into a vli (used for
    a signature);
    vli_from_le64 - unpack a little-endian array into a vli (used for
    a public key);
    vli_uadd, vli_usub - add/subtract a u64 value to/from a vli (used for
    increment/decrement);
    mul_64_64 - optimized to use __int128 where appropriate; this speeds
    up point multiplication (and, as a consequence, signature
    verification) by a factor of 1.5-2;
    vli_umult - multiply a vli by a small value (speeds up point
    multiplication by another factor of 1.5-2, depending on vli sizes);
    vli_mmod_special - modular reduction for one form of pseudo-Mersenne
    primes (used for the A curves);
    vli_mmod_special2 - modular reduction for another form of
    pseudo-Mersenne primes (used for the B curves);
    vli_mmod_barrett - modular reduction using a pre-computed value (used
    for the C curve);
    vli_mmod_slow - more general modular reduction which is much slower
    (used when the modulus is the subgroup order);
    vli_mod_mult_slow - modular multiplication;
    ecc_point_add - add two points;
    ecc_point_mult_shamir - add two points multiplied by scalars in one
    combined multiplication, giving a speedup of another factor of 2
    compared to two separate multiplications (see the sketch after this
    list);
    ecc_is_pubkey_valid_partial - an additional sanity check is added.
    - Updated vli_mmod_fast with a non-strict heuristic to call the
    optimal modular reduction function depending on the prime value;
    - All computations for the previously defined (two NIST) curves should
    be unaffected.

    * crypto/ecc.h:
    - Newly exported functions are documented.

    * crypto/ecrdsa_defs.h
    - Five curves are defined.

    * crypto/ecrdsa.c:
    - Signature verification is implemented.

    * crypto/ecrdsa_params.asn1, crypto/ecrdsa_pub_key.asn1:
    - Templates for BER decoder for EC-RDSA parameters and public key.
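
    Since ecc_point_mult_shamir may be the least obvious item above, here
    is a simplified sketch of the idea (pseudo-C for illustration, not the
    patch's code):

        /* compute R = k1*P + k2*Q in one shared double-and-add pass;
         * T = P + Q is precomputed, roughly halving the point additions */
        R = POINT_AT_INFINITY;
        for (i = num_bits - 1; i >= 0; i--) {
                R = point_double(R);
                if (test_bit(i, k1) && test_bit(i, k2))
                        R = point_add(R, T);
                else if (test_bit(i, k1))
                        R = point_add(R, P);
                else if (test_bit(i, k2))
                        R = point_add(R, Q);
        }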

    Cc: linux-integrity@vger.kernel.org
    Signed-off-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
    ecc.c has algorithms that could be used together by ecdh and ecrdsa.
    Make it a separate module. Add CRYPTO_ECC to Kconfig. EXPORT_SYMBOL
    and document what seems appropriate. Move the structs ecc_point and
    ecc_curve from ecc_curve_defs.h into ecc.h.

    No code changes.

    Signed-off-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
  • Group RSA, DH, and ECDH into Public-key cryptography config section.

    Signed-off-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
    Some public key algorithms (like EC-DSA) keep important data, such as
    the digest and curve OIDs, in the parameters field (possibly more for
    different EC-DSA variants). Thus, just setting a public key (as for
    RSA) is not enough.

    Append parameters into the key stream for akcipher_set_{pub,priv}_key.
    Appended data is: (u32) algo OID, (u32) parameters length, parameters
    data.

    This does not affect the current akcipher API or RSA ciphers (they can
    ignore it). The idea of appending parameters to the key stream is
    Herbert Xu's.
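
    An illustration of the appended layout as a packed struct (this struct
    does not exist in the patch; it is just a visualization of the
    "(u32) algo OID, (u32) parameters length, parameters data" layout):

        /* key stream: [raw key][algo OID][params length][params data] */
        struct akcipher_appended_params {
                u32 algo;      /* algorithm OID, as an enum OID value */
                u32 paramlen;  /* number of parameter bytes that follow */
                u8  params[];  /* e.g. BER-encoded curve/digest OIDs */
        } __packed;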

    Cc: David Howells
    Cc: Denis Kenzior
    Cc: keyrings@vger.kernel.org
    Signed-off-by: Vitaly Chikunov
    Reviewed-by: Denis Kenzior
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
    Treat (struct public_key_signature)'s digest the same as its signature
    (s). Since the digest should already be in kmalloc'd memory, do not
    kmemdup the digest value before calling
    {public,tpm}_key_verify_signature.

    This patch is split from the previous one, as suggested by Herbert Xu.

    Suggested-by: David Howells
    Cc: David Howells
    Cc: keyrings@vger.kernel.org
    Signed-off-by: Vitaly Chikunov
    Reviewed-by: Denis Kenzior
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
    Previously, akcipher .verify() just `decrypted' the signature (using
    RSA encrypt, which uses the public key) to uncover the message hash,
    which was then compared in the upper-level
    public_key_verify_signature() with the expected hash value, which
    itself was never passed into verify().

    This approach was incompatible with the EC-DSA family of algorithms,
    because, to verify a signature, an EC-DSA algorithm also needs the
    hash value as input; it is then used (together with a signature
    divided into halves `r||s') to produce a witness value, which is
    compared with `r' to determine whether the signature is correct.
    Thus, for EC-DSA, neither the requirements of .verify() itself nor
    its output expectations in public_key_verify_signature() were
    sufficient.

    Make the improved .verify() call take the hash value as input and
    perform the complete signature check, with no output besides the
    status.

    Now, for top-level verification, only crypto_akcipher_verify() needs
    to be called and its return value inspected.
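
    A sketch of the assumed new calling convention, with the signature and
    digest both passed in via the source scatterlist and no destination
    buffer (the details are an assumption based on the description above):

        /* verify: src sg = signature || digest, dst = NULL;
         * a return value of 0 means the signature checked out */
        sg_init_table(src, 2);
        sg_set_buf(&src[0], sig->s, sig->s_size);
        sg_set_buf(&src[1], sig->digest, sig->digest_size);
        akcipher_request_set_crypt(req, src, NULL,
                                   sig->s_size, sig->digest_size);
        err = crypto_akcipher_verify(req);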

    Make sure that `digest' is in kmalloc'd memory (in place of `output')
    in {public,tpm}_key_verify_signature(), as insisted on by Herbert Xu;
    this will be changed in the following commit.

    Cc: David Howells
    Cc: keyrings@vger.kernel.org
    Signed-off-by: Vitaly Chikunov
    Reviewed-by: Denis Kenzior
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
  • In preparation for new akcipher verify call remove sign/verify callbacks
    from RSA backends and make PKCS1 driver call encrypt/decrypt instead.

    This also complies with the well-known idea that raw RSA should never be
    used for sign/verify. It only should be used with proper padding scheme
    such as PKCS1 driver provides.

    Cc: Giovanni Cabiddu
    Cc: qat-linux@intel.com
    Cc: Tom Lendacky
    Cc: Gary Hook
    Cc: Horia Geantă
    Cc: Aymen Sghaier
    Signed-off-by: Vitaly Chikunov
    Reviewed-by: Horia Geantă
    Acked-by: Gary R Hook
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
    With the introduction of EC-RDSA, and the change in how RSA handles
    sign/verify, an akcipher may not have all callbacks defined. Check for
    the presence of the callbacks in crypto_register_akcipher() and
    provide a default implementation if a callback is not implemented.

    This was suggested by Herbert Xu instead of checking for the presence
    of the callback on every request.
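
    A minimal sketch of the registration-time defaulting (the stub's name,
    its error code, and the simplified registration tail are assumptions):

        /* hypothetical stub for operations an algorithm doesn't provide */
        static int akcipher_default_op(struct akcipher_request *req)
        {
                return -ENOSYS;
        }

        int crypto_register_akcipher(struct akcipher_alg *alg)
        {
                if (!alg->sign)
                        alg->sign = akcipher_default_op;
                if (!alg->verify)
                        alg->verify = akcipher_default_op;
                if (!alg->encrypt)
                        alg->encrypt = akcipher_default_op;
                if (!alg->decrypt)
                        alg->decrypt = akcipher_default_op;
                return crypto_register_alg(&alg->base);  /* simplified */
        }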

    Signed-off-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
  • This patch adds a requirement to the generic 3DES implementation
    such that 2-key 3DES (K1 == K3) is no longer allowed in FIPS mode.

    We will also provide helpers that may be used by drivers that
    implement 3DES to make the same check.
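
    A sketch of what such a helper check might look like (the helper name
    is an assumption; 2-key 3DES means K1 == K3):

        /* hypothetical helper: reject 2-key 3DES (K1 == K3) in FIPS mode */
        static int des3_fips_check_key(const u8 *key)
        {
                if (fips_enabled &&
                    !memcmp(key, key + 2 * DES_KEY_SIZE, DES_KEY_SIZE))
                        return -EINVAL;
                return 0;
        }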

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • If the user-provided IV needs to be aligned to the algorithm's
    alignmask, then skcipher_walk_virt() copies the IV into a new aligned
    buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
    if the caller unconditionally accesses walk.iv, it's a use-after-free.

    salsa20-generic doesn't set an alignmask, so currently it isn't affected
    by this despite unconditionally accessing walk.iv. However this is more
    subtle than desired, and it was actually broken prior to the alignmask
    being removed by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup
    and convert to skcipher API").

    Since salsa20-generic does not update the IV and does not need any IV
    alignment, update it to use req->iv instead of walk.iv.
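
    In code terms, the change amounts to the following (a simplified
    sketch in the diff style used elsewhere in this digest; the init
    function name is abbreviated):

        err = skcipher_walk_virt(&walk, req, false);

        -salsa20_init(state, ctx, walk.iv);  /* freed if the walk failed */
        +salsa20_init(state, ctx, req->iv);  /* caller's IV, always valid */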

    Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher")
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • If the user-provided IV needs to be aligned to the algorithm's
    alignmask, then skcipher_walk_virt() copies the IV into a new aligned
    buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
    if the caller unconditionally accesses walk.iv, it's a use-after-free.

    Fix this in the LRW template by checking the return value of
    skcipher_walk_virt().
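
    I.e., the shape of the fix is (simplified diff-style sketch):

        err = skcipher_walk_virt(&walk, req, false);
        +if (err)
        +        return err;

        /* only now is walk.iv known to be valid */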

    This bug was detected by my patches that improve testmgr to fuzz
    algorithms against their generic implementation. When the extra
    self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
    splat occurred during lrw(aes) testing.

    Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
    Cc: # v4.20+
    Cc: Ondrej Mosnacek
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • When we perform a walk in the completion function, we need to ensure
    that it is atomic.

    Fixes: ac3c8f36c31d ("crypto: lrw - Do not use auxiliary buffer")
    Cc:
    Signed-off-by: Herbert Xu
    Acked-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • When we perform a walk in the completion function, we need to ensure
    that it is atomic.

    Reported-by: syzbot+6f72c20560060c98b566@syzkaller.appspotmail.com
    Fixes: 78105c7e769b ("crypto: xts - Drop use of auxiliary buffer")
    Cc:
    Signed-off-by: Herbert Xu
    Acked-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Herbert Xu
     

08 Apr, 2019

6 commits

  • The x86_64 implementation of Poly1305 produces the wrong result on some
    inputs because poly1305_4block_avx2() incorrectly assumes that when
    partially reducing the accumulator, the bits carried from limb 'd4' to
    limb 'h0' fit in a 32-bit integer. This is true for poly1305-generic
    which processes only one block at a time. However, it's not true for
    the AVX2 implementation, which processes 4 blocks at a time and
    therefore can produce intermediate limbs about 4x larger.

    Fix it by making the relevant calculations use 64-bit arithmetic rather
    than 32-bit. Note that most of the carries already used 64-bit
    arithmetic, but the d4 -> h0 carry was different for some reason.
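
    A sketch of the arithmetic in generic-C flavour, purely as an
    illustration (the actual fix is in the x86_64 assembly):

        /* partial reduction: fold the top bits of d4 back into h0,
         * multiplied by 5 (since 2^130 == 5 mod 2^130 - 5) */
        h0 += (u32)(d4 >> 26) * 5;  /* BAD: with 4 blocks accumulated,
                                     * the carry can exceed 32 bits */
        h0 += (d4 >> 26) * 5;       /* OK: do the carry in 64 bits */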

    To be safe I also made the same change to the corresponding SSE2 code,
    though that only operates on 1 or 2 blocks at a time. I don't think
    it's really needed for poly1305_block_sse2(), but it doesn't hurt
    because it's already x86_64 code. It *might* be needed for
    poly1305_2block_sse2(), but overflows aren't easy to reproduce there.

    This bug was originally detected by my patches that improve testmgr to
    fuzz algorithms against their generic implementation. But also add a
    test vector which reproduces it directly (in the AVX2 case).

    Fixes: b1ccc8f4b631 ("crypto: poly1305 - Add a four block AVX2 variant for x86_64")
    Fixes: c70f4abef07a ("crypto: poly1305 - Add a SSE2 SIMD variant for x86_64")
    Cc: # v4.3+
    Cc: Martin Willi
    Cc: Jason A. Donenfeld
    Signed-off-by: Eric Biggers
    Reviewed-by: Martin Willi
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Add a module parameter cryptomgr.panic_on_fail which causes the kernel
    to panic if any crypto self-tests fail.

    Use cases:

    - More easily detect crypto self-test failures by boot testing,
    e.g. on KernelCI.
    - Get a bug report if syzkaller manages to use the template system to
    instantiate an algorithm that fails its self-tests.

    The command-line option "fips=1" already does this, but it also makes
    other changes not wanted for general testing, such as disabling
    "unapproved" algorithms. panic_on_fail just does what it says.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • My patches to make testmgr fuzz algorithms against their generic
    implementation detected that the arm64 implementations of
    "cts(cbc(aes))" handle empty messages differently from the cts template.
    Namely, the arm64 implementations forbids (with -EINVAL) all messages
    shorter than the block size, including the empty message; but the cts
    template permits empty messages as a special case.

    No user should be CTS-encrypting/decrypting empty messages, but we need
    to keep the behavior consistent. Unfortunately, as noted in the source
    of OpenSSL's CTS implementation [1], there's no common specification for
    CTS. This makes it somewhat debatable what the behavior should be.

    However, all CTS specifications seem to agree that messages shorter than
    the block size are not allowed, and OpenSSL follows this in both CTS
    conventions it implements. It would also simplify the user-visible
    semantics to have empty messages no longer be a special case.

    Therefore, make the cts template return -EINVAL on *all* messages
    shorter than the block size, including the empty message.

    [1] https://github.com/openssl/openssl/blob/master/crypto/modes/cts128.c

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Don't cast the data buffer directly to streebog_uint512, as this
    violates alignment rules.

    Fixes: fe18957e8e87 ("crypto: streebog - add Streebog hash function")
    Cc: Vitaly Chikunov
    Signed-off-by: Eric Biggers
    Reviewed-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • If the rfc7539 template is instantiated with specific implementations,
    e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than
    "rfc7539(chacha20,poly1305)", then the implementation names end up
    included in the instance's cra_name. This is incorrect because it then
    prevents all users from allocating "rfc7539(chacha20,poly1305)", if the
    highest priority implementations of chacha20 and poly1305 were selected.
    Also, the self-tests aren't run on an instance allocated in this way.

    Fix it by setting the instance's cra_name from the underlying
    algorithms' actual cra_names, rather than from the requested names.
    This matches what other templates do.

    Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
    Cc: # v4.2+
    Cc: Martin Willi
    Signed-off-by: Eric Biggers
    Reviewed-by: Martin Willi
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • skcipher_walk_done() assumes it's a bug if, after the "slow" path is
    executed where the next chunk of data is processed via a bounce buffer,
    the algorithm says it didn't process all bytes. Thus it WARNs on this.

    However, this can happen legitimately when the message needs to be
    evenly divisible into "blocks" but isn't, and the algorithm has a
    'walksize' greater than the block size. For example, ecb-aes-neonbs
    sets 'walksize' to 128 bytes and only supports messages evenly divisible
    into 16-byte blocks. If, say, 17 message bytes remain but they straddle
    scatterlist elements, the skcipher_walk code will take the "slow" path
    and pass the algorithm all 17 bytes in the bounce buffer. But the
    algorithm will only be able to process 16 bytes, triggering the WARN.

    Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code
    already does, is the right behavior.

    This bug was detected by my patches that improve testmgr to fuzz
    algorithms against their generic implementation.

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers