20 Aug, 2020

1 commit

  • The header file algapi.h includes skbuff.h unnecessarily since
    all we need is a forward declaration for struct sk_buff. This
    patch removes that inclusion.

    Unfortunately skbuff.h pulls in a lot of things, and drivers over
    the years have come to rely on that, so this patch adds the many
    missing inclusions that result from its removal.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

15 Jun, 2020

1 commit

  • Because SCHED_FIFO is a broken scheduler model (see previous
    patches), take away the priority field; the kernel can't possibly
    make an informed decision about what priority to use.

    Use sched_set_fifo() to request SCHED_FIFO and delegate
    actual priority selection to userspace. Effectively no change in
    behaviour.

    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Ingo Molnar
    Acked-by: Herbert Xu

    Peter Zijlstra
     

28 May, 2020

1 commit

  • Now, in crypto-engine, if the hardware queue is full (-ENOSPC),
    the request is requeued regardless of the MAY_BACKLOG flag.
    If the hardware returns any other error code (like -EIO, -EINVAL,
    -ENOMEM, etc.), only MAY_BACKLOG requests are enqueued back into
    crypto-engine's queue, since the others can be dropped.
    The latter errors can be fatal and cannot be recovered from. For
    example, in the CAAM driver, -EIO is returned when the job
    descriptor is broken, so there is no way to fix it. Such errors
    might be fatal, so we shouldn't requeue the request; it would
    just be passed back and forth between crypto-engine and hardware.

    Fixes: 6a89f492f8e5 ("crypto: engine - support for parallel requests based on retry mechanism")
    Signed-off-by: Iuliana Prodan
    Reported-by: Horia Geantă
    Reviewed-by: Horia Geantă
    Signed-off-by: Herbert Xu

    Iuliana Prodan
     

08 May, 2020

2 commits

  • Added support for batch requests, per crypto engine.
    A new callback, do_batch_requests, is added; it executes a
    batch of requests. It takes the crypto_engine structure as an
    argument (for cases when more than one crypto-engine is used).
    The crypto_engine_alloc_init_and_set function initializes the
    crypto-engine and also sets the do_batch_requests callback.
    On crypto_pump_requests, if the do_batch_requests callback is
    implemented in a driver, it will be executed. The linking of
    the requests is done in the driver, if possible.
    do_batch_requests is available only if the hardware has support
    for multiple requests.

    Signed-off-by: Iuliana Prodan
    Signed-off-by: Herbert Xu

    Iuliana Prodan
     
  • Added support for executing multiple requests, in parallel,
    for crypto engine, based on a retry mechanism.
    If the hardware was unable to execute a backlog request, it is
    enqueued back at the front of the crypto-engine queue, to keep
    the order of requests.

    A new variable, retry_support, is added (to keep the backward
    compatibility of crypto-engine); it tracks whether the hardware
    supports the retry mechanism and can run multiple requests.

    If do_one_request() returns:
    >= 0: the hardware executed the request successfully;
    < 0: this is the old error path. If the hardware has support for
    the retry mechanism, the request is put back at the front of the
    crypto-engine queue. For backwards compatibility, if retry
    support is not available, crypto-engine works as before.
    If the hardware queue is full (-ENOSPC), the request is requeued
    regardless of the MAY_BACKLOG flag.
    If the hardware returns any other error code (like -EIO, -EINVAL,
    -ENOMEM, etc.), only MAY_BACKLOG requests are enqueued back into
    crypto-engine's queue, since the others can be dropped.

    The new crypto_engine_alloc_init_and_set function initializes the
    crypto-engine and sets the maximum size of the crypto-engine
    software queue (no longer hardcoded); the retry_support variable
    defaults to false.
    On crypto_pump_requests(), while do_one_request() returns >= 0,
    new requests are sent to the hardware, until there is no space
    and do_one_request() returns < 0.

    By default, retry_support is false and crypto-engine works as
    before - it sends requests to the hardware one-by-one, on
    crypto_pump_requests(), completes them on
    crypto_finalize_request(), and so on.

    To support multiple requests, each driver must set retry_support
    to true, and if do_one_request() returns an error the request
    must not be freed, since it will be enqueued back into
    crypto-engine's queue.

    Once all drivers that currently use crypto-engine are updated for
    the retry mechanism, the retry_support variable can be removed.

    Signed-off-by: Iuliana Prodan
    Signed-off-by: Herbert Xu

    Iuliana Prodan
     

09 Aug, 2019

1 commit

  • The crypto engine initializes its kworker thread to FIFO-99 (when
    requesting RT priority), reduce this to FIFO-50.

    FIFO-99 is the very highest priority available to SCHED_FIFO and
    is not a suitable default; it would indicate the crypto work is
    the most important work on the machine.

    Cc: Herbert Xu
    Cc: "David S. Miller"
    Cc: linux-crypto@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: Thomas Gleixner
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Herbert Xu

    Peter Zijlstra
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

25 Oct, 2016

1 commit

  • Use the new API to create and destroy the crypto engine kthread
    worker. The API hides some implementation details.

    In particular, kthread_create_worker() allocates and initializes
    struct kthread_worker. It runs the kthread the right way
    and stores task_struct into the worker structure.

    kthread_destroy_worker() flushes all pending works, stops
    the kthread and frees the structure.

    This patch does not change the existing behavior, except that
    struct kthread_worker is now dynamically allocated and only a
    pointer to it is stored.

    It is compile-tested only, because I did not find an easy way
    to run the code. Still, it should be pretty safe given the
    nature of the change.

    Signed-off-by: Petr Mladek
    Signed-off-by: Herbert Xu

    Petr Mladek
     

12 Oct, 2016

1 commit

  • A good practice is to prefix the names of functions with the name
    of the subsystem.

    The kthread worker API is a mix of classic kthreads and workqueues. Each
    worker has a dedicated kthread. It runs a generic function that
    processes queued works. It is implemented as part of the kthread
    subsystem.

    This patch renames the existing kthread worker API to use
    the corresponding name from the workqueues API prefixed by
    kthread_:

    __init_kthread_worker() -> __kthread_init_worker()
    init_kthread_worker() -> kthread_init_worker()
    init_kthread_work() -> kthread_init_work()
    insert_kthread_work() -> kthread_insert_work()
    queue_kthread_work() -> kthread_queue_work()
    flush_kthread_work() -> kthread_flush_work()
    flush_kthread_worker() -> kthread_flush_worker()

    Note that the names of DEFINE_KTHREAD_WORK*() macros stay
    as they are. It is common that the "DEFINE_" prefix has
    precedence over the subsystem names.

    Note that the INIT() macros and init() functions use different
    naming schemes. There is no perfect solution; there are several
    reasons for this choice:

    + "init" in the function names stands for the verb "initialize"
    aka "initialize worker". While "INIT" in the macro names
    stands for the noun "INITIALIZER" aka "worker initializer".

    + INIT() macros are used only in DEFINE() macros

    + init() functions are used close to the other kthread()
    functions. It looks much better if all the functions
    use the same scheme.

    + There will be also kthread_destroy_worker() that will
    be used close to kthread_cancel_work(). It is related
    to the init() function. Again it looks better if all
    functions use the same naming scheme.

    + There are several precedents for such init() function
    names, e.g. amd_iommu_init_device(), free_area_init_node(),
    jump_label_init_type(), regmap_init_mmio_clk().

    + It is not a strong argument, but the naming was inconsistent
    even before.

    [arnd@arndb.de: fix linux-next merge conflict]
    Link: http://lkml.kernel.org/r/20160908135724.1311726-1-arnd@arndb.de
    Link: http://lkml.kernel.org/r/1470754545-17632-3-git-send-email-pmladek@suse.com
    Suggested-by: Andrew Morton
    Signed-off-by: Petr Mladek
    Cc: Oleg Nesterov
    Cc: Tejun Heo
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: "Paul E. McKenney"
    Cc: Josh Triplett
    Cc: Thomas Gleixner
    Cc: Jiri Kosina
    Cc: Borislav Petkov
    Cc: Michal Hocko
    Cc: Vlastimil Babka
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Petr Mladek
     

01 Feb, 2016

1 commit

  • Currently block cipher engines need to implement and maintain
    their own queue/thread for processing requests. Moreover, helpers
    are provided only for the queue itself (crypto_enqueue_request()
    and crypto_dequeue_request()); they don't help with the mechanics
    of driving the hardware (things like running the request
    immediately, DMA-mapping it, or providing a thread to process the
    queue), even though a lot of that code really shouldn't vary much
    from device to device.

    Thus this patch provides a mechanism, which drivers can use, for
    pushing requests to the hardware as it becomes free. The framework
    is patterned on the SPI code, where it has worked out well.
    (https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/
    drivers/spi/spi.c?id=ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0)

    Signed-off-by: Baolin Wang
    Signed-off-by: Herbert Xu

    Baolin Wang