15 Feb, 2018

1 commit


12 Jan, 2018

2 commits

  • We need to consistently enforce that keyed hashes cannot be used without
    setting the key. To do this we need a reliable way to determine whether
    a given hash algorithm is keyed or not. AF_ALG currently does this by
    checking for the presence of a ->setkey() method. However, this is
    actually slightly broken because the CRC-32 algorithms implement
    ->setkey() but can also be used without a key. (The CRC-32 "key" is not
    actually a cryptographic key but rather represents the initial state.
    If not overridden, then a default initial state is used.)

    Prepare to fix this by introducing a flag CRYPTO_ALG_OPTIONAL_KEY which
    indicates that the algorithm has a ->setkey() method, but it is not
    required to be called. Then set it on all the CRC-32 algorithms.

    The same also applies to the Adler-32 implementation in Lustre.

    Also, the cryptd and mcryptd templates have to pass the flag through
    from their underlying algorithm (the sketch after this entry shows the
    flag being set).

    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
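
    A minimal sketch of the flag in use, assuming a generic CRC-32 shash
    driver; the setkey helper and driver name below are illustrative, not
    the exact upstream patch:

        /* needs <crypto/internal/hash.h>, <asm/unaligned.h>, <linux/module.h> */
        static int crc32_setkey_sketch(struct crypto_shash *tfm, const u8 *key,
                        unsigned int keylen)
        {
                u32 *mctx = crypto_shash_ctx(tfm);

                if (keylen != sizeof(u32))
                        return -EINVAL;
                *mctx = get_unaligned_le32(key);  /* "key" = initial CRC state */
                return 0;
        }

        static struct shash_alg crc32_alg_sketch = {
                .digestsize     = 4,
                .setkey         = crc32_setkey_sketch,
                /* .init/.update/.final omitted for brevity */
                .descsize       = sizeof(u32),
                .base           = {
                        .cra_name        = "crc32",
                        .cra_driver_name = "crc32-sketch",
                        .cra_priority    = 100,
                        /* the key only overrides the initial state, so it
                         * may be skipped: declare it optional */
                        .cra_flags       = CRYPTO_ALG_OPTIONAL_KEY,
                        .cra_blocksize   = 1,
                        .cra_ctxsize     = sizeof(u32),
                        .cra_module      = THIS_MODULE,
                },
        };

    With the flag set, code that enforces keying can treat a missing
    setkey call as acceptable for these algorithms.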
     
  • When the mcryptd template is used to wrap an unkeyed hash algorithm,
    don't install a ->setkey() method on the mcryptd instance. This change
    is necessary for mcryptd to keep working with unkeyed hash algorithms
    once we start enforcing that ->setkey() is called when present (a
    sketch of the idea follows this entry).

    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
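
    A minimal sketch of the idea with simplified surroundings; the helper
    below is illustrative, not the literal upstream diff
    (mcryptd_hash_setkey is the template's existing setkey handler):

        /* during mcryptd instance creation, only wire up ->setkey() when
         * the wrapped hash actually implements one */
        static void mcryptd_install_setkey_sketch(struct ahash_instance *inst,
                        struct ahash_alg *alg)
        {
                if (alg->setkey)
                        inst->alg.setkey = mcryptd_hash_setkey;
                /* otherwise leave inst->alg.setkey NULL, so the mcryptd
                 * instance is unkeyed just like the wrapped hash */
        }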
     

22 Dec, 2017

1 commit


11 Dec, 2017

1 commit

  • mcryptd_enqueue_request() grabs the per-CPU queue struct and protects
    access to it with disabled preemption. Then it schedules a worker on the
    same CPU. The worker in mcryptd_queue_worker() guards access to the same
    per-CPU variable with disabled preemption.

    If we take CPU-hotplug into account then it is possible that between
    queue_work_on() and the actual invocation of the worker the CPU goes
    down and the worker will be scheduled on _another_ CPU. And here the
    preempt_disable() protection does not work anymore. The easiest thing is
    to add a spin_lock() to guard access to the list.

    Another detail: mcryptd_queue_worker() does not process more than
    MCRYPTD_BATCH invocations in a row. If there are still items left, it
    will invoke queue_work() to proceed with more later. *I* would suggest
    simply dropping that check, because it does not use a system workqueue
    and the workqueue is already marked as "CPU_INTENSIVE"; if preemption
    is required, the scheduler should take care of it.
    However, if queue_work() is used, the work item is marked as CPU
    unbound. That means it will try to run on the local CPU but may run on
    another CPU as well, especially with CONFIG_DEBUG_WQ_FORCE_RR_CPU=y.
    Again, preempt_disable() won't work here, but the lock which was
    introduced will help.
    In order to keep the work item on the local CPU (and avoid the
    round-robin behaviour), I changed it to queue_work_on() (see the
    sketch after this entry).

    Cc: stable@vger.kernel.org
    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Herbert Xu

    Sebastian Andrzej Siewior
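
    A sketch of the enqueue side after the change, assuming a q_lock
    spinlock has been added to the per-CPU queue struct (the worker takes
    the same lock instead of relying on preempt_disable()); simplified
    from the description above rather than the literal upstream diff:

        static int mcryptd_enqueue_request(struct mcryptd_queue *queue,
                        struct crypto_async_request *request,
                        struct mcryptd_hash_request_ctx *rctx)
        {
                int cpu, err;
                struct mcryptd_cpu_queue *cpu_queue;

                cpu = get_cpu();
                cpu_queue = this_cpu_ptr(queue->cpu_queue);
                rctx->tag.cpu = cpu;

                /* unlike preempt_disable(), the lock still protects the
                 * list if the worker ends up on another CPU after hotplug */
                spin_lock(&cpu_queue->q_lock);
                err = crypto_enqueue_request(&cpu_queue->queue, request);
                spin_unlock(&cpu_queue->q_lock);

                /* queue_work_on() keeps the work item on this CPU instead
                 * of letting an unbound item migrate (e.g. with
                 * CONFIG_DEBUG_WQ_FORCE_RR_CPU=y) */
                queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
                put_cpu();

                return err;
        }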
     

29 Nov, 2017

1 commit

  • The preempt counter APIs have been split out; currently, hardirq.h just
    provides the irq_enter/exit APIs, which are not used by crypto at all.

    So, remove the unused hardirq.h.

    Signed-off-by: Yang Shi
    Cc: Herbert Xu
    Cc: "David S. Miller"
    Cc: linux-crypto@vger.kernel.org
    Signed-off-by: Herbert Xu

    Yang Shi
     

02 Mar, 2017

1 commit


07 Dec, 2016

1 commit

  • Algorithms not compatible with mcryptd could be spawned through a
    direct crypto_alloc_tfm() invocation using an "mcryptd(alg)" name
    construct. This can crash the kernel if the arbitrary "alg" is
    incompatible with and not intended to be used under mcryptd. It is an
    issue in particular if AF_ALG tries to spawn mcryptd(alg) to expose it
    externally, since such algorithms must only be used internally and
    never be exposed.

    We added a check to enforce that only internal algorithms are allowed
    with mcryptd at the time mcryptd spawns an algorithm (a sketch of the
    check follows this entry).

    Link: http://marc.info/?l=linux-crypto-vger&m=148063683310477&w=2
    Cc: stable@vger.kernel.org
    Reported-by: Mikulas Patocka
    Signed-off-by: Tim Chen
    Signed-off-by: Herbert Xu

    Tim
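
    A hedged sketch of the check reduced to its core; the function name is
    illustrative, and the real code derives type/mask from the template
    attributes:

        /* only algorithms flagged CRYPTO_ALG_INTERNAL may be wrapped,
         * since the multi-buffer helpers are the only algorithms designed
         * to run under mcryptd */
        static int mcryptd_check_internal_sketch(u32 type, u32 mask)
        {
                if (type & mask & CRYPTO_ALG_INTERNAL)
                        return 0;       /* internal-only algorithm: OK */
                return -EINVAL;         /* reject mcryptd(arbitrary-alg) */
        }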
     

13 Sep, 2016

1 commit


23 Jun, 2016

1 commit

  • Herbert wants the sha1-mb algorithm to have an async implementation:
    https://lkml.org/lkml/2016/4/5/286.
    Currently, sha1-mb uses an async interface for the outer algorithm
    and a sync interface for the inner algorithm. This patch introduces
    an async interface for the inner algorithm as well (sketched after
    this entry).

    Signed-off-by: Megha Dey
    Signed-off-by: Tim Chen
    Signed-off-by: Herbert Xu

    Megha Dey
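
    A hedged sketch of the resulting shape of the inner algorithm: an
    asynchronous ahash that is also marked internal, so it can only be
    reached through mcryptd. The names and priority are placeholders
    rather than the exact sha1-mb driver:

        static struct ahash_alg sha1_mb_inner_alg_sketch = {
                /* .init/.update/.final/.finup/.export/.import point at the
                 * driver's asynchronous entry points (omitted here) */
                .halg = {
                        .digestsize = SHA1_DIGEST_SIZE,
                        .base = {
                                .cra_name        = "__sha1-mb",
                                .cra_driver_name = "__intel_sha1-mb",
                                .cra_priority    = 100,
                                /* async interface + internal-only marking */
                                .cra_flags       = CRYPTO_ALG_ASYNC |
                                                   CRYPTO_ALG_INTERNAL,
                                .cra_blocksize   = SHA1_BLOCK_SIZE,
                                .cra_module      = THIS_MODULE,
                        },
                },
        };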
     

30 Jan, 2016

1 commit


23 Nov, 2015

1 commit


31 Mar, 2015

1 commit


26 Nov, 2014

1 commit

  • This adds the module loading prefix "crypto-" to the template lookup
    as well (the prefixed lookup is sketched after this entry).

    For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly
    includes the "crypto-" prefix at every level, correctly rejecting "vfat":

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat

    Reported-by: Mathias Krause
    Signed-off-by: Kees Cook
    Acked-by: Mathias Krause
    Signed-off-by: Herbert Xu

    Kees Cook
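
    A hedged sketch of the prefixed lookup, simplified from the algorithm
    lookup path; the wrapper function is illustrative:

        /* module requests generated from an algorithm name always carry
         * the "crypto-" prefix, so AF_ALG strings such as
         * "vfat(blowfish)" can only load modules that declare a matching
         * MODULE_ALIAS_CRYPTO() alias */
        static void crypto_request_modules_sketch(const char *name)
        {
                request_module("crypto-%s", name);      /* crypto-vfat(blowfish) */
                request_module("crypto-%s-all", name);  /* crypto-vfat(blowfish)-all */
        }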
     

26 Aug, 2014

1 commit


25 Aug, 2014

1 commit

  • This patch introduces the multi-buffer crypto daemon, which is responsible
    for submitting crypto jobs in a work queue to the corresponding multi-buffer
    crypto algorithm. The idea of the multi-buffer algorithm is to put
    data streams from multiple jobs in a wide (AVX2) register and then
    take advantage of SIMD instructions to do crypto computation on several
    buffers simultaneously.

    The multi-buffer crypto daemon is also responsible for flushing the
    remaining buffers to complete the computation if no new buffers arrive
    for a while (the flushing idea is sketched after this entry).

    Signed-off-by: Tim Chen
    Signed-off-by: Herbert Xu

    Tim Chen
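
    A conceptual sketch of the flushing behaviour with hypothetical
    structure and field names (the real per-CPU state in mcryptd differs
    in detail):

        #define MB_FLUSH_DELAY  (HZ / 100)              /* illustrative */

        struct mb_lane_state {                          /* hypothetical */
                struct delayed_work flush;
                bool flusher_engaged;
                int pending_jobs;
                void (*flusher)(struct mb_lane_state *state);
        };

        static void mb_flush_sketch(struct work_struct *w)
        {
                struct mb_lane_state *state =
                        container_of(to_delayed_work(w), struct mb_lane_state,
                                     flush);

                if (state->pending_jobs)
                        state->flusher(state);  /* force the SIMD lanes out */
                else
                        state->flusher_engaged = false;

                /* re-arm so partially filled lanes are flushed even when
                 * no new buffers arrive for a while */
                if (state->flusher_engaged)
                        schedule_delayed_work(&state->flush, MB_FLUSH_DELAY);
        }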