20 Mar, 2020

1 commit

  • Previously, passing in chunks of 2, 3, or 4, followed by any
    additional chunks, would result in the ChaCha state counter getting
    out of sync, resulting in incorrect encryption/decryption, which is
    a pretty nasty crypto vuln: "why do images look weird on webpages?"
    WireGuard users never experienced this before, because out of tree
    we have always used a different crypto library, until the recent
    Frankenzinc addition. This commit fixes the issue by advancing the
    pointers and state counter by the actual size processed. It also
    fixes a bug in the (optional, costly) stride test that prevented it
    from running on arm64.

    Fixes: b3aad5bad26a ("crypto: arm64/chacha - expose arm64 ChaCha routine as library function")
    Reported-and-tested-by: Emil Renner Berthing
    Cc: Ard Biesheuvel
    Cc: stable@vger.kernel.org # v5.5+
    Signed-off-by: Jason A. Donenfeld
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu
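
    As a rough illustration of the fix described above, here is a toy
    userspace model (all names here are hypothetical, not the kernel
    code): a streaming loop must advance its source/destination pointers
    and its block counter by the number of bytes actually processed in
    each call, otherwise later chunks are crypted against the wrong
    keystream position.

```c
#include <assert.h>
#include <stddef.h>

#define CHACHA_BLOCK_SIZE 64

/* Toy model of the bug class: the state counter counts 64-byte
 * keystream blocks and must stay in sync with the data pointers. */
struct toy_state {
	unsigned long counter;	/* keystream block counter */
};

/* Process up to `len` bytes; returns bytes actually handled this call
 * (a SIMD path might only accept a bounded amount per call). */
static size_t toy_crypt_step(struct toy_state *st, const unsigned char **src,
			     unsigned char **dst, size_t len)
{
	size_t todo = len > 4 * CHACHA_BLOCK_SIZE ? 4 * CHACHA_BLOCK_SIZE : len;

	/* ... keystream generation and XOR elided ... */

	/* The fix pattern: advance by what was processed, keeping the
	 * counter consistent with the pointers across calls. */
	*src += todo;
	*dst += todo;
	st->counter += (todo + CHACHA_BLOCK_SIZE - 1) / CHACHA_BLOCK_SIZE;
	return todo;
}
```

    Calling this twice on a 300-byte buffer advances the pointers by 256
    and then 44 bytes, and the counter by 4 and then 1 blocks, so the
    second call continues exactly where the first left off.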


01 Mar, 2020

1 commit

  • Alexei Starovoitov says:

    ====================
    pull-request: bpf-next 2020-02-28

    The following pull-request contains BPF updates for your *net-next* tree.

    We've added 41 non-merge commits during the last 7 day(s) which contain
    a total of 49 files changed, 1383 insertions(+), 499 deletions(-).

    The main changes are:

    1) BPF and Real-Time nicely co-exist.

    2) bpftool feature improvements.

    3) retrieve bpf_sk_storage via INET_DIAG.
    ====================

    Signed-off-by: David S. Miller


27 Feb, 2020

1 commit

  • Pull tracing and bootconfig updates:
    "Fixes and changes to bootconfig before it goes live in a release.

    Change in API of bootconfig (before it comes live in a release):
    - Have a magic value "BOOTCONFIG" in initrd to know a bootconfig
    exists
    - Set CONFIG_BOOT_CONFIG to 'n' by default
    - Show error if "bootconfig" on cmdline but not compiled in
    - Prevent redefining the same value
    - Have a way to append values
    - Added a SELECT BLK_DEV_INITRD to fix a build failure

    Synthetic event fixes:
    - Switch to raw_smp_processor_id() for recording CPU value in preempt
    section. (No care for what the value actually is)
    - Fix samples always recording u64 values
    - Fix endianness
    - Check number of values matches number of fields
    - Fix a printing bug

    Fix of trace_printk() breaking postponed start up tests

    Make a function static that is only used in a single file"

    * tag 'trace-v5.6-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
    bootconfig: Fix CONFIG_BOOTTIME_TRACING dependency issue
    bootconfig: Add append value operator support
    bootconfig: Prohibit re-defining value on same key
    bootconfig: Print array as multiple commands for legacy command line
    bootconfig: Reject subkey and value on same parent key
    tools/bootconfig: Remove unneeded error message silencer
    bootconfig: Add bootconfig magic word for indicating bootconfig explicitly
    bootconfig: Set CONFIG_BOOT_CONFIG=n by default
    tracing: Clear trace_state when starting trace
    bootconfig: Mark boot_config_checksum() static
    tracing: Disable trace_printk() on post poned tests
    tracing: Have synthetic event test use raw_smp_processor_id()
    tracing: Fix number printing bug in print_synth_event()
    tracing: Check that number of vals matches number of synth event fields
    tracing: Make synth_event trace functions endian-correct
    tracing: Make sure synth_event_trace() example always uses u64


22 Feb, 2020

3 commits

  • Conflict resolution of ice_virtchnl_pf.c based upon work by
    Stephen Rothwell.

    Signed-off-by: David S. Miller

  • Walter Wu has reported a potential case in which init_stack_slab() is
    called after stack_slabs[STACK_ALLOC_MAX_SLABS - 1] has already been
    initialized. In that case init_stack_slab() will overwrite
    stack_slabs[STACK_ALLOC_MAX_SLABS], which may result in a memory
    corruption.

    Link: http://lkml.kernel.org/r/20200218102950.260263-1-glider@google.com
    Fixes: cd11016e5f521 ("mm, kasan: stackdepot implementation. Enable stackdepot for SLAB")
    Signed-off-by: Alexander Potapenko
    Reported-by: Walter Wu
    Cc: Dmitry Vyukov
    Cc: Matthias Brugger
    Cc: Thomas Gleixner
    Cc: Josh Poimboeuf
    Cc: Kate Stewart
    Cc: Greg Kroah-Hartman
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
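
    A minimal userspace sketch of the overflow described above (names
    are simplified, not the exact stackdepot code): when the current
    slab index is already the last valid one, initializing "the next"
    slab would write one element past the end of the array, so the
    illustrative fix is to refuse in that case.

```c
#include <assert.h>
#include <stddef.h>

#define STACK_ALLOC_MAX_SLABS 8

/* Toy model: stack_slabs has STACK_ALLOC_MAX_SLABS valid entries, so
 * depot_index + 1 must stay strictly below that bound before being
 * used as an index. */
static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
static int depot_index;

static int init_next_slab(void *page)
{
	if (depot_index + 1 >= STACK_ALLOC_MAX_SLABS)
		return 0;	/* would write out of bounds: refuse */
	if (!stack_slabs[depot_index + 1])
		stack_slabs[depot_index + 1] = page;
	return 1;
}
```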

  • There were a few attempts at changing the behavior of the
    match_string() helpers (i.e. 'match_string()' & 'sysfs_match_string()')
    to extend it according to the doc-string.

    But the simplest approach is to just fix the doc-strings. The current
    behavior is fine as-is, and some bugs were introduced trying to fix it.

    As for extending the behavior, new helpers can always be introduced if
    needed.

    The match_string() helpers behave more like 'strncmp()' in the sense
    that they go up to n elements or until the first NULL element in the
    array of strings.

    This change updates the doc-strings with this info.

    Link: http://lkml.kernel.org/r/20200213072722.8249-1-alexandru.ardelean@analog.com
    Signed-off-by: Alexandru Ardelean
    Acked-by: Andy Shevchenko
    Cc: Kees Cook
    Cc: "Tobin C . Harding"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
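
    The documented behavior can be illustrated with a toy
    re-implementation (names and the -1 error value are hypothetical;
    the kernel helper returns -EINVAL): the search covers at most n
    entries and stops early at the first NULL entry, much as strncmp()
    stops at a length limit.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative model of the documented match_string() semantics:
 * scan up to n entries, treating a NULL entry as the end of the
 * array, and return the index of the first match or -1. */
static int toy_match_string(const char * const *array, size_t n,
			    const char *string)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (!array[i])
			break;		/* NULL terminates the search */
		if (!strcmp(array[i], string))
			return (int)i;
	}
	return -1;
}
```

    Note that an entry placed after a NULL is never found, even when it
    is within the first n elements; that is the subtlety the doc-string
    update spells out.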


21 Feb, 2020

3 commits

  • Add append value operator "+=" support to bootconfig syntax.
    With this operator, the user can add a new value to a key as an
    entry of an array instead of overwriting it.
    For example,

    foo = bar
    ...
    foo += baz

    Then the key "foo" has "bar" and "baz" values as an array.

    Link: http://lkml.kernel.org/r/158227283195.12842.8310503105963275584.stgit@devnote2

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

  • Currently, bootconfig adds a new value on the existing key to the tail of an
    array. But this looks a bit confusing because an admin can easily rewrite
    the original value in the same config file.

    This rejects the following value re-definition.

    key = value1
    ...
    key = value2

    You should rewrite value1 to value2 in this case.

    Link: http://lkml.kernel.org/r/158227282199.12842.10110929876059658601.stgit@devnote2

    Suggested-by: Steven Rostedt (VMware)
    Signed-off-by: Masami Hiramatsu
    [ Fixed spelling of arraies to arrays ]
    Signed-off-by: Steven Rostedt (VMware)

  • Reject the case where a value node is mixed with a subkey node on
    the same parent key node.

    A value node can not co-exist with a subkey node under the same key
    node, e.g.

    key = value
    key.subkey = another-value

    This is not allowed, because the bootconfig API is not designed
    to handle such a case.

    Link: http://lkml.kernel.org/r/158220115232.26565.7792340045009731803.stgit@devnote2

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)


17 Feb, 2020

1 commit

  • The current codebase makes use of the zero-length array language
    extension to the C90 standard, but the preferred mechanism to declare
    variable-length types such as these ones is a flexible array member[1][2],
    introduced in C99:

    struct foo {
            int stuff;
            struct boo array[];
    };

    By making use of the mechanism above, we will get a compiler warning
    in case the flexible array does not occur last in the structure, which
    will help us prevent some kinds of undefined behavior bugs from being
    inadvertently introduced[3] into the codebase from now on.

    This issue was found with the help of Coccinelle.

    [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
    [2] https://github.com/KSPP/linux/issues/21
    [3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

    Signed-off-by: Gustavo A. R. Silva
    Acked-by: Jiri Pirko
    Signed-off-by: David S. Miller
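
    For illustration, here is a self-contained userspace sketch of how a
    flexible array member is sized and allocated (in the kernel this
    would typically use struct_size() and k*alloc; plain malloc and the
    names below are stand-ins):

```c
#include <assert.h>
#include <stdlib.h>

struct boo { int x; };

struct foo {
	int stuff;
	struct boo array[];	/* flexible array member: must come last */
};

/* Allocate a foo with room for n trailing boo elements. The flexible
 * array contributes no size of its own; the trailing storage is added
 * explicitly at allocation time. */
static struct foo *foo_alloc(size_t n)
{
	struct foo *f = malloc(sizeof(*f) + n * sizeof(f->array[0]));

	if (f)
		f->stuff = (int)n;
	return f;
}
```

    If `array` were declared anywhere but last, the compiler would
    reject it, which is exactly the safety property the conversion from
    zero-length arrays buys.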


14 Feb, 2020

1 commit

  • This code assigns src_len (size_t) to sl (int), which causes problems
    when src_len is very large. Probably nobody in the kernel should be
    passing this much data to chacha20poly1305 all in one go anyway, so I
    don't think we need to change the algorithm or introduce larger types
    or anything. But we should at least error out early in this case and
    print a warning so that we get reports if this does happen and can look
    into why anybody is possibly passing it that much data or if they're
    accidentally passing -1 or similar.

    Fixes: d95312a3ccc0 ("crypto: lib/chacha20poly1305 - reimplement crypt_from_sg() routine")
    Cc: Ard Biesheuvel
    Cc: stable@vger.kernel.org # 5.5+
    Signed-off-by: Jason A. Donenfeld
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
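
    The truncation hazard and the guard described above can be sketched
    in isolation (the function name here is hypothetical, not the real
    crypt_from_sg() signature): assigning a size_t to an int silently
    truncates or goes negative for huge values, so the fix rejects such
    lengths up front.

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Toy model of the guard: refuse lengths that would not survive the
 * size_t -> int assignment, instead of proceeding with a wrapped or
 * negative value. */
static int toy_crypt_from_sg(size_t src_len)
{
	if (src_len > INT_MAX)
		return 0;	/* error out early, as the fix does */

	int sl = (int)src_len;	/* now a provably safe assignment */
	return sl >= 0;
}
```

    Passing (size_t)-1, as the log speculates a caller might, trips the
    early check rather than silently becoming a negative int.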


12 Feb, 2020

1 commit

  • Pull tracing fixes from Steven Rostedt:
    "Various fixes:

    - Fix an uninitialized variable

    - Fix compile bug in the bootconfig userspace tool (in tools directory)

    - Suppress some error messages of the bootconfig userspace tool

    - Remove unneeded CONFIG_LIBXBC from bootconfig

    - Allocate bootconfig xbc_nodes dynamically, to ease complaints about
    taking up static memory at boot up

    - Use parse_args() to parse bootconfig instead of strstr() usage.
    Prevents issues with double quotes containing the string of interest

    - Fix missing ring_buffer_nest_end() on synthetic event error path

    - Return zero not -EINVAL on soft disabled synthetic event (soft
    disabling must be the same as hard disabling, which returns zero)

    - Consolidate synthetic event code (remove duplicate code)"

    * tag 'trace-v5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
    tracing: Consolidate trace() functions
    tracing: Don't return -EINVAL when tracing soft disabled synth events
    tracing: Add missing nest end to synth_event_trace_start() error case
    tools/bootconfig: Suppress non-error messages
    bootconfig: Allocate xbc_nodes array dynamically
    bootconfig: Use parse_args() to find bootconfig and '--'
    tracing/kprobe: Fix uninitialized variable bug
    bootconfig: Remove unneeded CONFIG_LIBXBC
    tools/bootconfig: Fix wrong __VA_ARGS__ usage


11 Feb, 2020

2 commits

  • To reduce the large static array in kernel data, allocate the
    xbc_nodes array dynamically, and only if the kernel loads a
    bootconfig.

    Note that this also adds a dummy memblock.h for the user-space
    bootconfig tool.

    Link: http://lkml.kernel.org/r/158108569699.3187.6512834527603883707.stgit@devnote2

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

  • Since there is no user except CONFIG_BOOT_CONFIG and no plan
    to use it from other functions, CONFIG_LIBXBC can be removed
    and we can use CONFIG_BOOT_CONFIG directly.

    Link: http://lkml.kernel.org/r/158098769281.939.16293492056419481105.stgit@devnote2

    Suggested-by: Geert Uytterhoeven
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)


10 Feb, 2020

1 commit

  • Pull more Kbuild updates from Masahiro Yamada:

    - fix randconfig to generate a sane .config

    - rename hostprogs-y / always to hostprogs / always-y, which is a
    more natural syntax.

    - optimize scripts/kallsyms

    - fix yes2modconfig and mod2yesconfig

    - make multiple directory targets ('make foo/ bar/') work

    * tag 'kbuild-v5.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
    kbuild: make multiple directory targets work
    kconfig: Invalidate all symbols after changing to y or m.
    kallsyms: fix type of kallsyms_token_table[]
    scripts/kallsyms: change table to store (struct sym_entry *)
    scripts/kallsyms: rename local variables in read_symbol()
    kbuild: rename hostprogs-y/always to hostprogs/always-y
    kbuild: fix the document to use extra-y for vmlinux.lds
    kconfig: fix broken dependency in randconfig-generated .config


06 Feb, 2020

3 commits

  • Pull tracing updates from Steven Rostedt:

    - Added new "bootconfig".

    This looks for a file appended to initrd to add boot config options,
    and has been discussed thoroughly at Linux Plumbers.

    Very useful for adding kprobes at bootup.

    Only enabled if "bootconfig" is on the real kernel command line.

    - Created dynamic event creation.

    Merges common code between creating synthetic events and kprobe
    events.

    - Rename perf "ring_buffer" structure to "perf_buffer"

    - Rename ftrace "ring_buffer" structure to "trace_buffer"

    Had to rename existing "trace_buffer" to "array_buffer"

    - Allow trace_printk() to work within (some) tracing code.

    - Sort of tracing configs to be a little better organized

    - Fixed bug where ftrace_graph hash was not being protected properly

    - Various other small fixes and clean ups

    * tag 'trace-v5.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (88 commits)
    bootconfig: Show the number of nodes on boot message
    tools/bootconfig: Show the number of bootconfig nodes
    bootconfig: Add more parse error messages
    bootconfig: Use bootconfig instead of boot config
    ftrace: Protect ftrace_graph_hash with ftrace_sync
    ftrace: Add comment to why rcu_dereference_sched() is open coded
    tracing: Annotate ftrace_graph_notrace_hash pointer with __rcu
    tracing: Annotate ftrace_graph_hash pointer with __rcu
    bootconfig: Only load bootconfig if "bootconfig" is on the kernel cmdline
    tracing: Use seq_buf for building dynevent_cmd string
    tracing: Remove useless code in dynevent_arg_pair_add()
    tracing: Remove check_arg() callbacks from dynevent args
    tracing: Consolidate some synth_event_trace code
    tracing: Fix now invalid var_ref_vals assumption in trace action
    tracing: Change trace_boot to use synth_event interface
    tracing: Move tracing selftests to bottom of menu
    tracing: Move mmio tracer config up with the other tracers
    tracing: Move tracing test module configs together
    tracing: Move all function tracing configs together
    tracing: Documentation for in-kernel synthetic event API
    ...

  • Show the number of bootconfig nodes when applying new bootconfig to
    initrd.

    Since there are limitations on bootconfig not only in its file size,
    but also in the number of nodes, the number should be shown when
    applying so that the user can get a feeling for the scale of the
    current bootconfig.

    Link: http://lkml.kernel.org/r/158091061337.27924.10886706631693823982.stgit@devnote2

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

  • Add more error messages for the following cases.
    - Exceeding max number of nodes
    - Config tree data is empty (e.g. comment only)
    - Config data is empty or exceeding max size
    - bootconfig is already initialized

    Link: http://lkml.kernel.org/r/158091060401.27924.9024818742827122764.stgit@devnote2

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)


04 Feb, 2020

6 commits

  • The new version of bitmap_parse() is unified with bitmap_parselist(),
    and therefore:

    - weakens rules on whitespaces and commas between hex chunks;

    - in addition, allows passing UINT_MAX or any other big number as the
    length of the input string instead of the actual string length.

    The patch covers these cases.

    Link: http://lkml.kernel.org/r/20200102043031.30357-7-yury.norov@gmail.com
    Signed-off-by: Yury Norov
    Reviewed-by: Andy Shevchenko
    Cc: Amritha Nambiar
    Cc: Arnaldo Carvalho de Melo
    Cc: Chris Wilson
    Cc: Kees Cook
    Cc: Matthew Wilcox
    Cc: Miklos Szeredi
    Cc: Rasmus Villemoes
    Cc: Steffen Klassert
    Cc: "Tobin C . Harding"
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Willem de Bruijn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • bitmap_parse() is ineffective and full of opaque variables and
    opencoded parts, which makes it hard to understand and use. This
    rework includes:

    - remove the bitmap_shift_left() call from the loop; it currently
    makes the complexity of the algorithm O(nbits^2). In the suggested
    approach the input string is parsed in reverse direction, so no
    shifts are needed;

    - relax the requirement of a single comma and no white spaces between
    chunks. It is considered useful in scripting, and it aligns with
    bitmap_parselist();

    - split bitmap_parse() into small readable helpers;

    - make an explicit calculation of the end of the input line at the
    beginning, so users of bitmap_parse() won't bother doing this.

    Link: http://lkml.kernel.org/r/20200102043031.30357-6-yury.norov@gmail.com
    Signed-off-by: Yury Norov
    Cc: Amritha Nambiar
    Cc: Andy Shevchenko
    Cc: Arnaldo Carvalho de Melo
    Cc: Chris Wilson
    Cc: Kees Cook
    Cc: Matthew Wilcox
    Cc: Miklos Szeredi
    Cc: Rasmus Villemoes
    Cc: Steffen Klassert
    Cc: "Tobin C . Harding"
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Willem de Bruijn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Currently we parse user data byte after byte, which leads to
    overcomplication of the parsing algorithm. There are no
    performance-critical users of bitmap_parse_user(), and so we can
    duplicate the user data into a kernel buffer and simply call
    bitmap_parselist(). This rework lets us unify and simplify
    bitmap_parse() and bitmap_parse_user(), which is done in the
    following patch.

    Link: http://lkml.kernel.org/r/20200102043031.30357-5-yury.norov@gmail.com
    Signed-off-by: Yury Norov
    Reviewed-by: Andy Shevchenko
    Cc: Amritha Nambiar
    Cc: Arnaldo Carvalho de Melo
    Cc: Chris Wilson
    Cc: Kees Cook
    Cc: Matthew Wilcox
    Cc: Miklos Szeredi
    Cc: Rasmus Villemoes
    Cc: Steffen Klassert
    Cc: "Tobin C . Harding"
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Willem de Bruijn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • The test is derived from bitmap_parselist(). NO_LEN is reserved for
    use in following patches.

    [yury.norov@gmail.com: fix rebase issue]
    Link: http://lkml.kernel.org/r/20200102182659.6685-1-yury.norov@gmail.com
    [andriy.shevchenko@linux.intel.com: fix address space when test user buffer]
    Link: http://lkml.kernel.org/r/20200109103601.45929-2-andriy.shevchenko@linux.intel.com
    Link: http://lkml.kernel.org/r/20200102043031.30357-4-yury.norov@gmail.com
    Signed-off-by: Yury Norov
    Signed-off-by: Andy Shevchenko
    Reviewed-by: Andy Shevchenko
    Cc: Amritha Nambiar
    Cc: Arnaldo Carvalho de Melo
    Cc: Chris Wilson
    Cc: Kees Cook
    Cc: Matthew Wilcox
    Cc: Miklos Szeredi
    Cc: Rasmus Villemoes
    Cc: Steffen Klassert
    Cc: "Tobin C . Harding"
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Willem de Bruijn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Patch series "lib: rework bitmap_parse", v5.

    Similarly to the recently revisited bitmap_parselist(), bitmap_parse()
    is ineffective and overcomplicated. This series reworks it, aligns its
    interface with bitmap_parselist() and makes it simpler to use.

    The series also adds a test for the function and fixes usage of it in
    cpumask_parse() according to the new design - drops the calculating of
    length of an input string.

    bitmap_parse() takes the array of numbers to be put into the map in BE
    order, which is reversed relative to the natural LE order for bitmaps.
    For example, to construct a bitmap containing a bit at position 42, we
    have to pass the line '400,0'. The current implementation reads chunks
    one by one from the beginning ('400' before '0') and shifts the bitmap
    after each successful parse. That makes the complexity of the whole
    process O(n^2). We can do it in the reverse direction ('0' before
    '400') and avoid shifting, but that requires reverse parsing helpers.

    This patch (of 7):

    New function works like strchrnul() with a length limited string.

    Link: http://lkml.kernel.org/r/20200102043031.30357-2-yury.norov@gmail.com
    Signed-off-by: Yury Norov
    Reviewed-by: Andy Shevchenko
    Cc: Rasmus Villemoes
    Cc: Amritha Nambiar
    Cc: Willem de Bruijn
    Cc: Kees Cook
    Cc: Matthew Wilcox
    Cc: "Tobin C . Harding"
    Cc: Will Deacon
    Cc: Miklos Szeredi
    Cc: Vineet Gupta
    Cc: Chris Wilson
    Cc: Arnaldo Carvalho de Melo
    Cc: Steffen Klassert
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
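
    The '400,0' example and the reverse-direction idea above can be
    demonstrated with a toy parser (a deliberately simplified sketch:
    single-long maps, lowercase hex, 32-bit chunks; not the kernel
    helpers): walking the chunks from the end of the string lets each
    chunk be OR-ed into its final position without any bitmap shifts.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Parse comma-separated 32-bit hex chunks given in BE order (leftmost
 * chunk is most significant), scanning from the end of the string so
 * no shifting of previously parsed data is ever needed. */
static unsigned long long toy_parse_be_chunks(const char *s)
{
	unsigned long long map = 0;
	size_t end = strlen(s);
	int chunk = 0;		/* 0 = least significant 32-bit chunk */

	while (end > 0) {
		size_t start = end;
		unsigned long long val = 0;

		while (start > 0 && s[start - 1] != ',')
			start--;	/* find start of the last chunk */
		for (size_t i = start; i < end; i++) {
			char c = s[i];
			int d = (c >= '0' && c <= '9') ? c - '0' :
				(c >= 'a' && c <= 'f') ? c - 'a' + 10 : 0;
			val = val * 16 + (unsigned)d;
		}
		map |= val << (32 * chunk++);
		end = start ? start - 1 : 0;	/* skip the comma */
	}
	return map;
}
```

    With "400,0", the trailing "0" lands in chunk 0 and "400" (hex, i.e.
    bit 10) lands in chunk 1, producing a map with only bit 42 set, as
    the series overview describes.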

  • In the old days, the "host-progs" syntax was used for specifying
    host programs. It was renamed to the current "hostprogs-y" in 2004.

    It is typically useful in scripts/Makefile because it allows Kbuild to
    selectively compile host programs based on the kernel configuration.

    This commit renames as follows:

    always -> always-y
    hostprogs-y -> hostprogs

    So, scripts/Makefile will look like this:

    always-$(CONFIG_BUILD_BIN2C) += ...
    always-$(CONFIG_KALLSYMS) += ...
    ...
    hostprogs := $(always-y) $(always-m)

    I think this makes more sense because a host program is always a host
    program, irrespective of the kernel configuration. We want to specify
    which ones to compile by CONFIG options, so always-y will be handier.

    The "always", "hostprogs-y", "hostprogs-m" will be kept for backward
    compatibility for a while.

    Signed-off-by: Masahiro Yamada


01 Feb, 2020

11 commits

  • Don't instrument 3 more files that contain debugging facilities and
    produce large amounts of uninteresting coverage for every syscall.

    The following snippets are sprinkled all over the place in kcov traces
    in a debugging kernel. We already try to disable instrumentation of
    stack unwinding code and of most debug facilities. I guess we did not
    use fault-inject.c at the time, and stacktrace.c was somehow missed (or
    something has changed in kernel/configs). This change both speeds up
    kcov (kernel doesn't need to store these PCs, user-space doesn't need to
    process them) and frees trace buffer capacity for more useful coverage.

    should_fail
    lib/fault-inject.c:149
    fail_dump
    lib/fault-inject.c:45

    stack_trace_save
    kernel/stacktrace.c:124
    stack_trace_consume_entry
    kernel/stacktrace.c:86
    stack_trace_consume_entry
    kernel/stacktrace.c:89
    ... a hundred frames skipped ...
    stack_trace_consume_entry
    kernel/stacktrace.c:93
    stack_trace_consume_entry
    kernel/stacktrace.c:86

    Link: http://lkml.kernel.org/r/20200116111449.217744-1-dvyukov@gmail.com
    Signed-off-by: Dmitry Vyukov
    Reviewed-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • It saves 25% of .text for arm64, and more for BE architectures.

    Before:
    $ size lib/find_bit.o
    text data bss dec hex filename
    1012 56 0 1068 42c lib/find_bit.o

    After:
    $ size lib/find_bit.o
    text data bss dec hex filename
    776 56 0 832 340 lib/find_bit.o

    Link: http://lkml.kernel.org/r/20200103202846.21616-3-yury.norov@gmail.com
    Signed-off-by: Yury Norov
    Cc: Thomas Gleixner
    Cc: Allison Randal
    Cc: William Breathitt Gray
    Cc: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • _find_next_bit and _find_next_bit_le are very similar functions. It's
    possible to join them by adding one parameter and a couple of simple
    checks. This simplifies maintenance and makes it possible to shrink
    the size of .text by un-inlining the unified function (in the
    following patch).

    Link: http://lkml.kernel.org/r/20200103202846.21616-2-yury.norov@gmail.com
    Signed-off-by: Yury Norov
    Cc: Allison Randal
    Cc: Joe Perches
    Cc: Thomas Gleixner
    Cc: William Breathitt Gray
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • ext2_swab() is defined locally in lib/find_bit.c. However, it is
    specific neither to ext2 nor to bitmaps.

    There are many potential users of it, so rename it to just swab() and
    move it to include/uapi/linux/swab.h.

    The ABI guarantees that the size of unsigned long corresponds to
    BITS_PER_LONG; therefore, drop the unneeded cast.

    Link: http://lkml.kernel.org/r/20200103202846.21616-1-yury.norov@gmail.com
    Signed-off-by: Yury Norov
    Cc: Allison Randal
    Cc: Joe Perches
    Cc: Thomas Gleixner
    Cc: William Breathitt Gray
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
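
    The operation being generalized is a full byte-swap of an unsigned
    long; a portable toy version (not the kernel's swab(), which
    dispatches on BITS_PER_LONG) looks like this:

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Reverse the byte order of an unsigned long, one byte at a time.
 * Works for both 32- and 64-bit longs, mirroring the ABI guarantee
 * that sizeof(unsigned long) matches BITS_PER_LONG. */
static unsigned long toy_swab(unsigned long x)
{
	unsigned long r = 0;

	for (size_t i = 0; i < sizeof(x); i++) {
		r = (r << 8) | (x & 0xff);	/* move lowest byte up */
		x >>= 8;
	}
	return r;
}
```

    Swapping twice is the identity, and the low byte always ends up in
    the topmost byte position, regardless of the word width.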

  • Clang warns:

    ../lib/scatterlist.c:314:5: warning: misleading indentation; statement
    is not part of the previous 'if' [-Wmisleading-indentation]
    return -ENOMEM;
    ^
    ../lib/scatterlist.c:311:4: note: previous statement is here
    if (prv)
    ^
    1 warning generated.

    This warning occurs because there is a space before the tab on this
    line. Remove it so that the indentation is consistent with the Linux
    kernel coding style and clang no longer warns.

    Link: http://lkml.kernel.org/r/20191218033606.11942-1-natechancellor@gmail.com
    Link: https://github.com/ClangBuiltLinux/linux/issues/830
    Fixes: edce6820a9fd ("scatterlist: prevent invalid free when alloc fails")
    Signed-off-by: Nathan Chancellor
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Add a new function to zlib.h checking if s390 Deflate-Conversion
    facility is installed and enabled.

    Link: http://lkml.kernel.org/r/20200103223334.20669-6-zaslonko@linux.ibm.com
    Signed-off-by: Mikhail Zaslonko
    Cc: Chris Mason
    Cc: Christian Borntraeger
    Cc: David Sterba
    Cc: Eduard Shishkin
    Cc: Heiko Carstens
    Cc: Ilya Leoshkevich
    Cc: Josef Bacik
    Cc: Richard Purdie
    Cc: Vasily Gorbik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Add the new kernel command line parameter 'dfltcc=' to configure s390
    zlib hardware support.

    Format: { on | off | def_only | inf_only | always }
    on: s390 zlib hardware support for compression on
    level 1 and decompression (default)
    off: No s390 zlib hardware support
    def_only: s390 zlib hardware support for deflate
    only (compression on level 1)
    inf_only: s390 zlib hardware support for inflate
    only (decompression)
    always: Same as 'on' but ignores the selected compression
    level always using hardware support (used for debugging)

    Link: http://lkml.kernel.org/r/20200103223334.20669-5-zaslonko@linux.ibm.com
    Signed-off-by: Mikhail Zaslonko
    Cc: Chris Mason
    Cc: Christian Borntraeger
    Cc: David Sterba
    Cc: Eduard Shishkin
    Cc: Heiko Carstens
    Cc: Ilya Leoshkevich
    Cc: Josef Bacik
    Cc: Richard Purdie
    Cc: Vasily Gorbik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Add decompression functions to zlib_dfltcc library. Update zlib_inflate
    functions with the hooks for s390 hardware support and adjust workspace
    structures with extra parameter lists required for hardware inflate
    decompression.

    Link: http://lkml.kernel.org/r/20200103223334.20669-4-zaslonko@linux.ibm.com
    Signed-off-by: Ilya Leoshkevich
    Signed-off-by: Mikhail Zaslonko
    Co-developed-by: Ilya Leoshkevich
    Cc: Chris Mason
    Cc: Christian Borntraeger
    Cc: David Sterba
    Cc: Eduard Shishkin
    Cc: Heiko Carstens
    Cc: Josef Bacik
    Cc: Richard Purdie
    Cc: Vasily Gorbik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Patch series "S390 hardware support for kernel zlib", v3.

    With IBM z15 mainframe the new DFLTCC instruction is available. It
    implements deflate algorithm in hardware (Nest Acceleration Unit - NXU)
    with estimated compression and decompression performance orders of
    magnitude faster than the current zlib.

    This patchset adds s390 hardware compression support to kernel zlib.
    The code is based on the userspace zlib implementation:

    https://github.com/madler/zlib/pull/410

    The coding style is also preserved for future maintainability. There
    is only a limited set of userspace zlib functions represented in the
    kernel.
    Apart from that, all the memory allocation should be performed in
    advance. Thus, the workarea structures are extended with the parameter
    lists required for the DEFLATE CONVENTION CALL instruction.

    Since kernel zlib itself does not support gzip headers, only the
    Adler-32 checksum is processed (it can also be produced by the DFLTCC
    facility). As in the userspace implementation, kernel zlib will
    compress in hardware on level 1, and in software on all other levels.
    Decompression will always happen in hardware (when enabled).

    Two DFLTCC compression calls produce the same results only when they
    both are made on machines of the same generation, and when the
    respective buffers have the same offset relative to the start of the
    page. Therefore care should be taken when using hardware compression
    if reproducible results are desired. However, it always produces
    standard-conforming output which can be inflated anyway.

    The new kernel command line parameter 'dfltcc' is introduced to
    configure s390 zlib hardware support:

    Format: { on | off | def_only | inf_only | always }
    on: s390 zlib hardware support for compression on
    level 1 and decompression (default)
    off: No s390 zlib hardware support
    def_only: s390 zlib hardware support for deflate
    only (compression on level 1)
    inf_only: s390 zlib hardware support for inflate
    only (decompression)
    always: Same as 'on' but ignores the selected compression
    level always using hardware support (used for debugging)

    The main purpose of the integration of the NXU support into the kernel
    zlib is the use of hardware deflate in btrfs filesystem with on-the-fly
    compression enabled. Apart from that, hardware support can also be used
    during boot for decompressing the kernel or the ramdisk image.

    With the patch for btrfs expanding zlib buffer from 1 to 4 pages (patch
    6) the following performance results have been achieved using the
    ramdisk with btrfs. These are relative numbers based on throughput rate
    and compression ratio for zlib level 1:

    Input data              Deflate rate   Inflate rate   Compression ratio
                            NXU/Software   NXU/Software   NXU/Software
    stream of zeroes         1.46           1.02           1.00
    random ASCII data       10.44           3.00           0.96
    ASCII text (dickens)     6.21           3.33           0.94
    binary data (vmlinux)    8.37           3.90           1.02

    This means that s390 hardware deflate can provide up to 10 times faster
    compression (on level 1) and up to 4 times faster decompression (refers
    to all compression levels) for btrfs zlib.

    Disclaimer: Performance results are based on IBM internal tests using DD
    command-line utility on btrfs on a Fedora 30 based internal driver in
    native LPAR on a z15 system. Results may vary based on individual
    workload, configuration and software levels.

    This patch (of 9):

    Create zlib_dfltcc library with the s390 DEFLATE CONVERSION CALL
    implementation and related compression functions. Update zlib_deflate
    functions with the hooks for s390 hardware support and adjust workspace
    structures with extra parameter lists required for hardware deflate.

    Link: http://lkml.kernel.org/r/20200103223334.20669-2-zaslonko@linux.ibm.com
    Signed-off-by: Ilya Leoshkevich
    Signed-off-by: Mikhail Zaslonko
    Co-developed-by: Ilya Leoshkevich
    Cc: Chris Mason
    Cc: Christian Borntraeger
    Cc: David Sterba
    Cc: Eduard Shishkin
    Cc: Heiko Carstens
    Cc: Josef Bacik
    Cc: Richard Purdie
    Cc: Vasily Gorbik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
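
    The level-dispatch policy for the default 'on' mode described above
    (hardware deflate only on level 1, software otherwise; hardware
    inflate whenever enabled) can be sketched as a toy decision helper.
    All names here are illustrative, not the actual zlib_dfltcc API:

```c
#include <assert.h>

enum path { SW, HW };

/* 'on' mode policy: DFLTCC deflate is used only for compression
 * level 1; every other level falls back to software zlib. (The
 * 'always' mode would ignore the level; it is not modeled here.) */
static enum path pick_deflate_path(int level, int dfltcc_enabled)
{
	return (dfltcc_enabled && level == 1) ? HW : SW;
}

/* Inflate always goes to hardware when the facility is enabled. */
static enum path pick_inflate_path(int dfltcc_enabled)
{
	return dfltcc_enabled ? HW : SW;
}
```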

  • In case memory resources for _ptr2_ were allocated, release them before
    return.

    Notice that in case _ptr1_ happens to be NULL, krealloc() behaves
    exactly like kmalloc().

    Addresses-Coverity-ID: 1490594 ("Resource leak")
    Link: http://lkml.kernel.org/r/20200123160115.GA4202@embeddedor
    Fixes: 3f15801cdc23 ("lib: add kasan test module")
    Signed-off-by: Gustavo A. R. Silva
    Reviewed-by: Dmitry Vyukov
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
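
    The leak pattern the Coverity fix addresses can be shown with a
    userspace sketch (realloc stands in for krealloc; as the log notes,
    (k)realloc with a NULL first argument behaves like a plain
    allocation): on the error path, every buffer that was successfully
    allocated must be freed before returning.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the fixed error path: if either allocation failed,
 * release whatever *was* allocated before bailing out, instead of
 * leaking it. free(NULL) is a safe no-op, so both calls are fine. */
static int toy_test(void)
{
	char *ptr1 = realloc(NULL, 16);	/* behaves like malloc(16) */
	char *ptr2 = malloc(32);

	if (!ptr1 || !ptr2) {
		free(ptr1);
		free(ptr2);	/* the fix: don't leak the survivor */
		return -1;
	}

	/* ... the actual test body would use the buffers here ... */
	free(ptr1);
	free(ptr2);
	return 0;
}
```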

  • On 32-bit platforms the size of long is only 32 bits, which yields
    a wrong offset into the array of 64-bit values.

    Calculate the offset based on BITS_PER_LONG.

    Link: http://lkml.kernel.org/r/20200109103601.45929-1-andriy.shevchenko@linux.intel.com
    Fixes: 30544ed5de43 ("lib/bitmap: introduce bitmap_replace() helper")
    Signed-off-by: Andy Shevchenko
    Reported-by: Guenter Roeck
    Cc: Rasmus Villemoes
    Cc: Yury Norov
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
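
    The portable indexing the fix calls for can be illustrated with a
    small helper (a sketch, not the kernel's test_bit()): both the word
    index and the in-word shift must be derived from BITS_PER_LONG
    rather than from an assumed 64-bit long.

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Read bit `bit` from a bitmap stored as unsigned longs. On a 64-bit
 * platform bit 70 lives in word 1; on a 32-bit platform it lives in
 * word 2, which is why the divisor must be BITS_PER_LONG. */
static int toy_test_bit(const unsigned long *map, size_t bit)
{
	return (int)((map[bit / BITS_PER_LONG] >> (bit % BITS_PER_LONG)) & 1UL);
}
```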
