02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.

    This patch is based on work done by Thomas Gleixner, Kate Stewart, and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be
    applied to a file was done in a spreadsheet of side-by-side results
    from the output of two independent scanners (ScanCode & Windriver)
    producing SPDX tag:value files, created by Philippe Ombredanne.
    Philippe prepared the base worksheet and did an initial spot review
    of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file-by-file comparison of the scanner
    results in the spreadsheet to determine which SPDX license
    identifier(s) should be applied to each file. She confirmed any
    determination that was not immediately clear with lawyers working
    with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source.
    - File already had some variant of a license header in it (even if
    <5 lines).
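    As an illustration of the convention (this snippet is not part of the
    patch itself), the identifier is placed on the first possible line of
    a file, using that file type's comment style:

    ```
    // SPDX-License-Identifier: GPL-2.0      (C source files)
    /* SPDX-License-Identifier: GPL-2.0 */   (C headers)
    # SPDX-License-Identifier: GPL-2.0       (scripts, Makefiles, Kconfig)
    ```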
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

19 Oct, 2017

1 commit


18 Oct, 2017

3 commits


11 Oct, 2017

1 commit

  • The shash ahash digest adaptor function may crash if given a
    zero-length input together with a null SG list. This is because
    it tries to read the SG list before looking at the length.

    This patch fixes it by checking the length first.
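    A minimal userspace sketch of the ordering the fix establishes
    (illustrative names, not the kernel's shash code): the length is
    inspected before the SG list pointer is ever dereferenced.

    ```c
    #include <stddef.h>
    #include <string.h>

    /* Illustrative sketch: the length is checked before the SG list is
     * touched, so a zero-length request with a NULL list cannot cause
     * a dereference. */
    struct fake_sg { const unsigned char *buf; size_t len; };

    static int digest_update(unsigned char *state,
                             const struct fake_sg *sg, size_t nbytes)
    {
        if (nbytes == 0)                 /* length first: NULL sg is fine */
            return 0;
        memcpy(state, sg->buf, nbytes);  /* only now walk the SG list */
        return 0;
    }
    ```
    
    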

    Cc:
    Reported-by: Stephan Müller
    Signed-off-by: Herbert Xu
    Tested-by: Stephan Müller

    Herbert Xu
     

07 Oct, 2017

3 commits

  • The skcipher walk interface doesn't handle zero-length input
    properly, as the old blkcipher walk interface did. This is because
    the length check is done too late.

    This patch moves the length check forward so that it does the
    right thing.

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk...")
    Cc:
    Reported-by: Stephan Müller
    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The SCTP program may sleep under a spinlock, and the function call path is:
    sctp_generate_t3_rtx_event (acquire the spinlock)
    sctp_do_sm
    sctp_side_effects
    sctp_cmd_interpreter
    sctp_make_init_ack
    sctp_pack_cookie
    crypto_shash_setkey
    shash_setkey_unaligned
    kmalloc(GFP_KERNEL)

    For the same reason, the orinoco driver may sleep in interrupt handler,
    and the function call path is:
    orinoco_rx_isr_tasklet
    orinoco_rx
    orinoco_mic
    crypto_shash_setkey
    shash_setkey_unaligned
    kmalloc(GFP_KERNEL)

    To fix it, GFP_KERNEL is replaced with GFP_ATOMIC.
    This bug was found by my static analysis tool and by code review.

    Signed-off-by: Jia-Ju Bai
    Signed-off-by: Herbert Xu

    Jia-Ju Bai
     
  • All other error handling paths 'goto err_drop_spawn'. To avoid a
    resource leak, we should do the same here.
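    The idiom being restored can be sketched in plain C (names and error
    values are stand-ins for the real xts instance-creation code): every
    post-allocation failure funnels through one label that releases what
    was acquired earlier.

    ```c
    #include <stdlib.h>

    /* Illustrative sketch of the single-error-path idiom. */
    static int spawn_dropped;                 /* instrumentation only */

    static int init_instance(int later_check_fails)
    {
        int err;
        void *spawn = malloc(16);
        if (!spawn)
            return -12;                       /* -ENOMEM */

        err = later_check_fails ? -22 : 0;    /* -EINVAL */
        if (err)
            goto err_drop_spawn;              /* not a bare 'return err' */

        return 0;                             /* success: spawn is kept */

    err_drop_spawn:
        free(spawn);
        spawn_dropped = 1;
        return err;
    }
    ```
    
    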

    Fixes: f1c131b45410 ("crypto: xts - Convert to skcipher")
    Signed-off-by: Christophe JAILLET
    Signed-off-by: Herbert Xu

    Christophe Jaillet
     

20 Sep, 2017

2 commits

  • When two adjacent TX SGLs are processed and parts of both TX SGLs
    are pulled into the per-request TX SGL, the wrong per-request
    TX SGL entries are updated.

    This fixes a NULL pointer dereference when a cipher implementation walks
    the TX SGL where some of the SGL entries were NULL.

    Fixes: e870456d8e7c ("crypto: algif_skcipher - overhaul memory...")
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • During the change to use aligned buffers, the deallocation code path was
    not updated correctly. The current code tries to free the aligned buffer
    pointer and not the original buffer pointer as it is supposed to.

    Thus, the code is updated to free the original buffer pointer and set
    the aligned buffer pointer that is used throughout the code to NULL.

    Fixes: 3cfc3b9721123 ("crypto: drbg - use aligned buffers")
    CC:
    CC: Herbert Xu
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

22 Aug, 2017

7 commits

  • When a page is assigned to a TX SGL, call get_page to increment the
    reference counter. It is possible that one page is referenced in
    multiple SGLs:

    - in the global TX SGL in case a previous af_alg_pull_tsgl only
    reassigned parts of a page to a per-request TX SGL

    - in the per-request TX SGL as assigned by af_alg_pull_tsgl

    Note, multiple requests can be active at the same time whose TX SGLs all
    point to different parts of the same page.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • There are already helpers to (un)register multiple normal
    and AEAD algos. Add one for ahashes too.

    Signed-off-by: Lars Persson
    Signed-off-by: Rabin Vincent
    Signed-off-by: Herbert Xu

    Rabin Vincent
     
  • Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • Merge the crypto tree to resolve the conflict between the temporary
    and long-term fixes in algif_skcipher.

    Herbert Xu
     
  • For asynchronous operation, SGs are allocated without a page mapped to
    them or with a page that is not used (ref-counted). If the SGL is freed,
    the code must only call put_page for an SG if there was a page assigned
    and ref-counted in the first place.

    This fixes a kernel crash when using io_submit with more than one iocb
    using the sendmsg and sendpage (vmsplice/splice) interface.

    Cc:
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • We failed to catch a bug in the chacha20 code after porting it to the
    skcipher API. We would have caught it if any chunked tests had been
    defined, so define some now so we will catch future regressions.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Commit 9ae433bc79f9 ("crypto: chacha20 - convert generic and x86 versions
    to skcipher") ported the existing chacha20 code to use the new skcipher
    API, and introduced a bug along the way. Unfortunately, the tcrypt tests
    did not catch the error, and it was only found recently by Tobias.

    Stefan kindly diagnosed the error, and proposed a fix which is similar
    to the one below, with the exception that 'walk.stride' is used rather
    than the hardcoded block size. This does not actually matter in this
    case, but it's a better example of how to use the skcipher walk API.

    Fixes: 9ae433bc79f9 ("crypto: chacha20 - convert generic and x86 ...")
    Cc: # v4.11+
    Cc: Steffen Klassert
    Reported-by: Tobias Brunner
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

09 Aug, 2017

4 commits

  • Consolidate the following data structures:

    skcipher_async_req, aead_async_req -> af_alg_async_req
    skcipher_rsgl, aead_rsgl -> af_alg_rsgl
    skcipher_tsgl, aead_tsgl -> af_alg_tsgl
    skcipher_ctx, aead_ctx -> af_alg_ctx

    Consolidate the following functions:

    skcipher_sndbuf, aead_sndbuf -> af_alg_sndbuf
    skcipher_writable, aead_writable -> af_alg_writable
    skcipher_rcvbuf, aead_rcvbuf -> af_alg_rcvbuf
    skcipher_readable, aead_readable -> af_alg_readable
    aead_alloc_tsgl, skcipher_alloc_tsgl -> af_alg_alloc_tsgl
    aead_count_tsgl, skcipher_count_tsgl -> af_alg_count_tsgl
    aead_pull_tsgl, skcipher_pull_tsgl -> af_alg_pull_tsgl
    aead_free_areq_sgls, skcipher_free_areq_sgls -> af_alg_free_areq_sgls
    aead_wait_for_wmem, skcipher_wait_for_wmem -> af_alg_wait_for_wmem
    aead_wmem_wakeup, skcipher_wmem_wakeup -> af_alg_wmem_wakeup
    aead_wait_for_data, skcipher_wait_for_data -> af_alg_wait_for_data
    aead_data_wakeup, skcipher_data_wakeup -> af_alg_data_wakeup
    aead_sendmsg, skcipher_sendmsg -> af_alg_sendmsg
    aead_sendpage, skcipher_sendpage -> af_alg_sendpage
    aead_async_cb, skcipher_async_cb -> af_alg_async_cb
    aead_poll, skcipher_poll -> af_alg_poll

    Split out the following common code from recvmsg:

    af_alg_alloc_areq: allocation of the request data structure for the
    cipher operation

    af_alg_get_rsgl: creation of the RX SGL anchored in the request data
    structure

    The following implementation changes, which do not affect
    functionality, have been applied to synchronize the slightly
    different code bases in algif_skcipher and algif_aead:

    The wakeup in af_alg_wait_for_data is triggered when either more data
    is received or the indicator that more data is to be expected is
    released. The first is triggered by user space, the second is
    triggered by the kernel upon finishing the processing of data
    (i.e. the kernel is ready for more).

    af_alg_sendmsg uses size_t in min_t calculation for obtaining len.
    Return code determination is consistent with algif_skcipher. The
    scope of the variable i is reduced to match algif_aead. The type of the
    variable i is switched from int to unsigned int to match algif_aead.

    af_alg_sendpage does not contain the superfluous err = 0 from
    aead_sendpage.

    af_alg_async_cb requires the number of output bytes to be stored in
    areq->outlen before the AIO callback is triggered.

    POLLIN / POLLRDNORM is now set when either no more data is to be
    expected or the kernel is supplied with data. This is consistent
    with the wakeup from sleep when the kernel waits for data.

    The request data structure is extended by the field last_rsgl, which
    points to the last RX SGL list entry. This helps the recvmsg
    implementation chain the RX SGL to other SG(L)s if needed. It is
    currently used by algif_aead which chains the tag SGL to the RX SGL
    during decryption.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • When UBSAN is enabled, we get a very large stack frame for
    __serpent_setkey, when the register allocator ends up using more registers
    than it has, and has to spill temporary values to the stack. The code
    was originally optimized for in-order x86-32 CPU implementations using
    older compilers, but it now runs into a highly suboptimal case on all
    CPU architectures, as seen by this warning:

    crypto/serpent_generic.c: In function '__serpent_setkey':
    crypto/serpent_generic.c:436:1: error: the frame size of 2720 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]

    Disabling -fsanitize=alignment would avoid that warning; presumably
    the option turns off an optimization step that is required for
    getting the register allocation right, but there is no easy way to
    do that on gcc-7 (gcc-8 introduces a function attribute for this).

    I tried to figure out a way to modify the source code instead, and noticed
    that the two stages of the setkey() function (keyiter and sbox) each are
    fine by themselves, but not when combined into one function. Splitting
    out the entire sbox into a separate function also happens to work fine
    with all compilers I tried (arm, arm64 and x86).

    The setkey function uses a strange way to handle offsets into the key
    array, using both negative and positive index values, as well as adjusting
    the array pointer back and forth. I have checked that this actually
    makes no difference to modern compilers, but I left that untouched
    to make the patch easier to review and to keep the code closer to
    the reference implementation.

    Link: https://patchwork.kernel.org/patch/9189575/
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Herbert Xu

    Arnd Bergmann
     
  • Use the NULL cipher to copy the AAD and PT/CT from the TX SGL
    to the RX SGL. This allows an in-place crypto operation on the
    RX SGL for encryption, because the TX data is always smaller than
    or equal to the RX data (the RX data will hold the tag).

    For decryption, a per-request TX SGL is created which will only hold
    the tag value. As the RX SGL will have no space for the tag value and
    an in-place operation will not write the tag buffer, the TX SGL with the
    tag value is chained to the RX SGL. This now allows an in-place
    crypto operation.

    For example:

    * without the patch:
    kcapi -x 2 -e -c "gcm(aes)" -p 89154d0d4129d322e4487bafaa4f6b46 -k c0ece3e63198af382b5603331cc23fa8 -i 7e489b83622e7228314d878d -a afcd7202d621e06ca53b70c2bdff7fb2 -l 16 -u -s
    00000000000000000000000000000000f4a3eacfbdadd3b1a17117b1d67ffc1f1e21efbbc6d83724a8c296e3bb8cda0c

    * with the patch:
    kcapi -x 2 -e -c "gcm(aes)" -p 89154d0d4129d322e4487bafaa4f6b46 -k c0ece3e63198af382b5603331cc23fa8 -i 7e489b83622e7228314d878d -a afcd7202d621e06ca53b70c2bdff7fb2 -l 16 -u -s
    afcd7202d621e06ca53b70c2bdff7fb2f4a3eacfbdadd3b1a17117b1d67ffc1f1e21efbbc6d83724a8c296e3bb8cda0c

    Tests covering this functionality have been added to libkcapi.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • If no data has been processed during recvmsg, return the error code.
    This covers all errors received during non-AIO operations.

    If an error other than -EIOCBQUEUED or -EBADMSG (such as -ENOMEM)
    occurs during a synchronous operation, it should be relayed to the
    caller.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

04 Aug, 2017

2 commits

  • There are quite a number of occurrences in the kernel of the pattern

    if (dst != src)
    memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
    crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

    or

    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

    where crypto_xor() is preceded or followed by a memcpy() invocation
    that is only there because crypto_xor() uses its output parameter as
    one of the inputs. To avoid having to add new instances of this pattern
    in the arm64 code, which will be refactored to implement non-SIMD
    fallbacks, add an alternative implementation called crypto_xor_cpy(),
    taking separate input and output arguments. This removes the need for
    the separate memcpy().
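    A userspace sketch of the difference between the two helpers
    (simplified byte-at-a-time loops; the kernel versions operate on
    machine words for speed):

    ```c
    #include <stddef.h>

    /* Like crypto_xor(): XOR src into dst in place. */
    static void xor_inplace(unsigned char *dst, const unsigned char *src,
                            size_t n)
    {
        while (n--)
            *dst++ ^= *src++;
    }

    /* Like crypto_xor_cpy(): separate input and output operands. */
    static void xor_cpy(unsigned char *dst, const unsigned char *src1,
                        const unsigned char *src2, size_t n)
    {
        while (n--)
            *dst++ = *src1++ ^ *src2++;
    }
    ```

    With the second form, the memcpy() in the quoted pattern disappears:
    the XOR result is written straight to the destination buffer.
    
    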

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • In preparation for introducing crypto_xor_cpy(), which will take
    separate input and output operands, modify the __crypto_xor()
    implementation that it will share with the existing crypto_xor(),
    which provides the actual functionality when the inline version is
    not used.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

03 Aug, 2017

5 commits

  • The scompress code allocates 2 x 128 KB of scratch buffers for each CPU,
    so that clients of the async API can use synchronous implementations
    even from atomic context. However, on systems such as Cavium Thunderx
    (which has 96 cores), this adds up to a non-negligible 24 MB. Also,
    32-bit systems may prefer to use their precious vmalloc space for
    other things, especially since there don't appear to be any clients
    for the async compression API yet.

    So let's defer allocation of the scratch buffers until the first time
    we allocate an acompress cipher based on an scompress implementation.
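    The deferred-allocation idea can be sketched as a first-use check
    (illustrative names; the real code allocates per-CPU buffers and
    must serialize concurrent first users, which is omitted here):

    ```c
    #include <stdlib.h>

    /* Scratch space is not reserved up front but on first use. */
    static void *scratch;

    static int scratch_get(size_t size)
    {
        if (!scratch) {              /* first user pays the cost */
            scratch = malloc(size);
            if (!scratch)
                return -12;          /* -ENOMEM */
        }
        return 0;                    /* later users reuse the buffer */
    }
    ```
    
    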

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • When allocating the per-CPU scratch buffers, we allocate the source
    and destination buffers separately, but bail immediately if the second
    allocation fails, without freeing the first one. Fix that.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Due to the use of per-CPU buffers, scomp_acomp_comp_decomp() executes
    with preemption disabled, and so whether the CRYPTO_TFM_REQ_MAY_SLEEP
    flag is set is irrelevant, since we cannot sleep anyway. So disregard
    the flag, and use GFP_ATOMIC unconditionally.

    Cc: # v4.10+
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • ecdh_ctx contained statically allocated data for the shared secret
    and public key.

    The shared secret and the public key were doomed to concurrency
    issues because they could be shared by multiple crypto requests.

    The concurrency is fixed by replacing per-tfm shared secret and
    public key with per-request dynamically allocated shared secret
    and public key.

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu

    Tudor-Dan Ambarus
     
  • Remove xts(aes) speed tests with 2 x 192-bit keys, since implementations
    adhering strictly to the IEEE 1619-2007 standard cannot cope with key
    sizes other than 2 x 128 and 2 x 256 bits - i.e. AES-XTS-{128,256}:
    [...]
    tcrypt: test 5 (384 bit key, 16 byte blocks):
    caam_jr 8020000.jr: key size mismatch
    tcrypt: setkey() failed flags=200000
    [...]
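    The constraint can be expressed as a simple key-length check
    (function name is illustrative): XTS splits its key into two equal
    AES keys, and IEEE 1619-2007 only defines XTS-AES-128 and
    XTS-AES-256, so the combined key must be 32 or 64 bytes.

    ```c
    #include <stddef.h>

    /* Reject any combined key other than 2 x 128 or 2 x 256 bits. */
    static int xts_keylen_ok(size_t keylen)
    {
        return keylen == 32 || keylen == 64;
    }
    ```
    
    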

    Signed-off-by: Horia Geantă
    Signed-off-by: Herbert Xu

    Horia Geantă
     

28 Jul, 2017

3 commits

  • Otherwise, we might be seeding the RNG using bad randomness, which is
    dangerous. The one use of this function from within the kernel -- not
    from userspace -- is being removed (keys/big_key), so that call site
    isn't relevant in assessing this.

    Cc: Herbert Xu
    Signed-off-by: Jason A. Donenfeld
    Signed-off-by: Herbert Xu

    Jason A. Donenfeld
     
  • The updated memory management is described in the top part of the code.
    As one benefit of the changed memory management, the AIO and synchronous
    operation is now implemented in one common function. The AF_ALG
    operation uses the async kernel crypto API interface for each cipher
    operation. Thus, the only difference between the AIO and sync operation
    types visible from user space is:

    1. the callback function to be invoked when the asynchronous operation
    is completed

    2. whether to wait for the completion of the kernel crypto API operation
    or not

    The change includes the overhaul of the TX and RX SGL handling. The TX
    SGL holding the data sent from user space to the kernel is now dynamic
    similar to algif_skcipher. This dynamic nature allows continuous
    operation, with one thread sending data and a second thread
    receiving the data. These threads do not need to synchronize, as
    the kernel processes as much data from the TX SGL as is needed to
    fill the RX SGL.

    The caller reading the data from the kernel defines the amount of data
    to be processed. Considering that the interface covers AEAD
    authenticating ciphers, the reader must provide a buffer of the
    correct size; thus the reader defines the encryption size.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • The updated memory management is described in the top part of the code.
    As one benefit of the changed memory management, the AIO and synchronous
    operation is now implemented in one common function. The AF_ALG
    operation uses the async kernel crypto API interface for each cipher
    operation. Thus, the only difference between the AIO and sync operation
    types visible from user space is:

    1. the callback function to be invoked when the asynchronous operation
    is completed

    2. whether to wait for the completion of the kernel crypto API operation
    or not

    In addition, the code structure is adjusted to match the structure of
    algif_aead for easier code assessment.

    The user space interface changed slightly as follows: the old AIO
    operation returned zero upon success and < 0 in case of an error to user
    space. As all other AF_ALG interfaces (including the sync skcipher
    interface) returned the number of processed bytes upon success and < 0
    in case of an error, the new skcipher interface (regardless of AIO or
    sync) returns the number of processed bytes in case of success.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

18 Jul, 2017

1 commit

  • When authencesn is used together with digest_null a crash will
    occur on the decrypt path. This is because normally we perform
    a special setup to preserve the ESN, but this is skipped if there
    is no authentication. However, on the post-authentication path
    it always expects the preservation to be in place, thus causing
    a crash when digest_null is used.

    This patch fixes this by also skipping the post-processing when
    there is no authentication.

    Fixes: 104880a6b470 ("crypto: authencesn - Convert to new AEAD...")
    Cc:
    Reported-by: Jan Tluka
    Signed-off-by: Herbert Xu

    Herbert Xu
     

15 Jul, 2017

1 commit

  • Pull crypto fixes from Herbert Xu:

    - fix new compiler warnings in cavium

    - set post-op IV properly in caam (this fixes chaining)

    - fix potential use-after-free in atmel in case of EBUSY

    - fix sleeping in softirq path in chcr

    - disable buggy sha1-avx2 driver (may overread and page fault)

    - fix use-after-free on signals in caam

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
    crypto: cavium - make several functions static
    crypto: chcr - Avoid algo allocation in softirq.
    crypto: caam - properly set IV after {en,de}crypt
    crypto: atmel - only treat EBUSY as transient if backlog
    crypto: af_alg - Avoid sock_graft call warning
    crypto: caam - fix signals handling
    crypto: sha1-ssse3 - Disable avx2

    Linus Torvalds
     

12 Jul, 2017

1 commit

  • crypto: af_alg - Avoid sock_graft call warning

    The newly added sock_graft warning triggers in af_alg_accept.
    It's harmless as we're essentially doing sock->sk = sock->sk.

    The sock_graft call is actually redundant because all the work
    it does is subsumed by sock_init_data. However, it was added
    to placate SELinux as it uses it to initialise its internal state.

    This patch avoids the warning by making the SELinux call directly.

    Reported-by: Linus Torvalds
    Signed-off-by: Herbert Xu
    Acked-by: David S. Miller

    Herbert Xu
     

09 Jul, 2017

1 commit

  • Pull dmaengine updates from Vinod Koul:

    - removal of AVR32 support in dw driver as AVR32 is gone

    - new driver for Broadcom stream buffer accelerator (SBA) RAID driver

    - add support for Faraday Technology FTDMAC020 in amba-pl08x driver

    - IOMMU support in pl330 driver

    - updates to bunch of drivers

    * tag 'dmaengine-4.13-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (36 commits)
    dmaengine: qcom_hidma: correct API violation for submit
    dmaengine: zynqmp_dma: Remove max len check in zynqmp_dma_prep_memcpy
    dmaengine: tegra-apb: Really fix runtime-pm usage
    dmaengine: fsl_raid: make of_device_ids const.
    dmaengine: qcom_hidma: allow ACPI/DT parameters to be overridden
    dmaengine: fsldma: set BWC, DAHTS and SAHTS values correctly
    dmaengine: Kconfig: Simplify the help text for MXS_DMA
    dmaengine: pl330: Delete unused functions
    dmaengine: Replace WARN_TAINT_ONCE() with pr_warn_once()
    dmaengine: Kconfig: Extend the dependency for MXS_DMA
    dmaengine: mxs: Use %zu for printing a size_t variable
    dmaengine: ste_dma40: Cleanup scatterlist layering violations
    dmaengine: imx-dma: cleanup scatterlist layering violations
    dmaengine: use proper name for the R-Car SoC
    dmaengine: imx-sdma: Fix compilation warning.
    dmaengine: imx-sdma: Handle return value of clk_prepare_enable
    dmaengine: pl330: Add IOMMU support to slave tranfers
    dmaengine: DW DMAC: Handle return value of clk_prepare_enable
    dmaengine: pl08x: use GENMASK() to create bitmasks
    dmaengine: pl08x: Add support for Faraday Technology FTDMAC020
    ...

    Linus Torvalds
     

06 Jul, 2017

2 commits

  • Pull networking updates from David Miller:
    "Reasonably busy this cycle, but perhaps not as busy as in the 4.12
    merge window:

    1) Several optimizations for UDP processing under high load from
    Paolo Abeni.

    2) Support pacing internally in TCP when using the sch_fq packet
    scheduler for this is not practical. From Eric Dumazet.

    3) Support multiple filter chains per qdisc, from Jiri Pirko.

    4) Move to 1ms TCP timestamp clock, from Eric Dumazet.

    5) Add batch dequeueing to vhost_net, from Jason Wang.

    6) Flesh out more completely SCTP checksum offload support, from
    Davide Caratti.

    7) More plumbing of extended netlink ACKs, from David Ahern, Pablo
    Neira Ayuso, and Matthias Schiffer.

    8) Add devlink support to nfp driver, from Simon Horman.

    9) Add RTM_F_FIB_MATCH flag to RTM_GETROUTE queries, from Roopa
    Prabhu.

    10) Add stack depth tracking to BPF verifier and use this information
    in the various eBPF JITs. From Alexei Starovoitov.

    11) Support XDP on qed device VFs, from Yuval Mintz.

    12) Introduce BPF PROG ID for better introspection of installed BPF
    programs. From Martin KaFai Lau.

    13) Add bpf_set_hash helper for TC bpf programs, from Daniel Borkmann.

    14) For loads, allow narrower accesses in bpf verifier checking, from
    Yonghong Song.

    15) Support MIPS in the BPF selftests and samples infrastructure, the
    MIPS eBPF JIT will be merged in via the MIPS GIT tree. From David
    Daney.

    16) Support kernel based TLS, from Dave Watson and others.

    17) Remove completely DST garbage collection, from Wei Wang.

    18) Allow installing TCP MD5 rules using prefixes, from Ivan
    Delalande.

    19) Add XDP support to Intel i40e driver, from Björn Töpel

    20) Add support for TC flower offload in nfp driver, from Simon
    Horman, Pieter Jansen van Vuuren, Benjamin LaHaise, Jakub
    Kicinski, and Bert van Leeuwen.

    21) IPSEC offloading support in mlx5, from Ilan Tayari.

    22) Add HW PTP support to macb driver, from Rafal Ozieblo.

    23) Networking refcount_t conversions, From Elena Reshetova.

    24) Add sock_ops support to BPF, from Lawrence Brakmo. This is useful
    for tuning the TCP sockopt settings of a group of applications,
    currently via CGROUPs"

    * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1899 commits)
    net: phy: dp83867: add workaround for incorrect RX_CTRL pin strap
    dt-bindings: phy: dp83867: provide a workaround for incorrect RX_CTRL pin strap
    cxgb4: Support for get_ts_info ethtool method
    cxgb4: Add PTP Hardware Clock (PHC) support
    cxgb4: time stamping interface for PTP
    nfp: default to chained metadata prepend format
    nfp: remove legacy MAC address lookup
    nfp: improve order of interfaces in breakout mode
    net: macb: remove extraneous return when MACB_EXT_DESC is defined
    bpf: add missing break in for the TCP_BPF_SNDCWND_CLAMP case
    bpf: fix return in load_bpf_file
    mpls: fix rtm policy in mpls_getroute
    net, ax25: convert ax25_cb.refcount from atomic_t to refcount_t
    net, ax25: convert ax25_route.refcount from atomic_t to refcount_t
    net, ax25: convert ax25_uid_assoc.refcount from atomic_t to refcount_t
    net, sctp: convert sctp_ep_common.refcnt from atomic_t to refcount_t
    net, sctp: convert sctp_transport.refcnt from atomic_t to refcount_t
    net, sctp: convert sctp_chunk.refcnt from atomic_t to refcount_t
    net, sctp: convert sctp_datamsg.refcnt from atomic_t to refcount_t
    net, sctp: convert sctp_auth_bytes.refcnt from atomic_t to refcount_t
    ...

    Linus Torvalds
     
  • Pull crypto updates from Herbert Xu:
    "Algorithms:
    - add private key generation to ecdh

    Drivers:
    - add generic gcm(aes) to aesni-intel
    - add SafeXcel EIP197 crypto engine driver
    - add ecb(aes), cfb(aes) and ecb(des3_ede) to cavium
    - add support for CNN55XX adapters in cavium
    - add ctr mode to chcr
    - add support for gcm(aes) to omap"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (140 commits)
    crypto: testmgr - Reenable sha1/aes in FIPS mode
    crypto: ccp - Release locks before returning
    crypto: cavium/nitrox - dma_mapping_error() returns bool
    crypto: doc - fix typo in docs
    Documentation/bindings: Document the SafeXel cryptographic engine driver
    crypto: caam - fix gfp allocation flags (part II)
    crypto: caam - fix gfp allocation flags (part I)
    crypto: drbg - Fixes panic in wait_for_completion call
    crypto: caam - make of_device_ids const.
    crypto: vmx - remove unnecessary check
    crypto: n2 - make of_device_ids const
    crypto: inside-secure - use the base_end pointer in ring rollback
    crypto: inside-secure - increase the batch size
    crypto: inside-secure - only dequeue when needed
    crypto: inside-secure - get the backlog before dequeueing the request
    crypto: inside-secure - stop requeueing failed requests
    crypto: inside-secure - use one queue per hw ring
    crypto: inside-secure - update the context and request later
    crypto: inside-secure - align the cipher and hash send functions
    crypto: inside-secure - optimize DSE bufferability control
    ...

    Linus Torvalds
     

05 Jul, 2017

1 commit


01 Jul, 2017

1 commit

  • The refcount_t type and corresponding API should be
    used instead of atomic_t when the variable is used as
    a reference counter. This helps avoid accidental
    refcounter overflows that might lead to use-after-free
    situations.

    This patch uses refcount_inc_not_zero() instead of
    atomic_inc_not_zero_hint() due to the absence of a _hint()
    version of the refcount API. If the _hint() version must
    be used, we might need to revisit the API.
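    The contract of refcount_inc_not_zero() can be illustrated with a
    single-threaded sketch (the real refcount_t API is atomic and
    saturates instead of overflowing; names here are simplified): an
    increment succeeds only while the count is non-zero, so a dying
    object can never be resurrected.

    ```c
    struct obj { int refcnt; };

    static int ref_inc_not_zero(struct obj *o)
    {
        if (o->refcnt == 0)
            return 0;                  /* last reference already gone */
        o->refcnt++;
        return 1;
    }

    static int ref_dec_and_test(struct obj *o)
    {
        return --o->refcnt == 0;       /* 1 => caller must free */
    }
    ```
    
    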

    Signed-off-by: Elena Reshetova
    Signed-off-by: Hans Liljestrand
    Signed-off-by: Kees Cook
    Signed-off-by: David Windsor
    Signed-off-by: David S. Miller

    Reshetova, Elena