29 Jul, 2022
1 commit
-
Issue:
While servicing an interrupt, the driver tries to access the variable
rng_op_done, which is not yet initialized, causing the kernel to crash
while booting.

Fix:
Move the initialization of rng_op_done before request_irq().

Fixes: 1d5449445bd0 ("hwrng: mx-rngc - add a driver for Freescale RNGC")
Signed-off-by: Kshitiz Varshney
Reviewed-by: Horia Geantă
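A minimal sketch of the fix's shape (the completion field name comes from
the commit; the surrounding probe code is illustrative, not the actual
driver source):

    #include <linux/completion.h>
    #include <linux/interrupt.h>
    #include <linux/platform_device.h>

    struct imx_rngc {
        struct completion rng_op_done;  /* signaled by the IRQ handler */
        int irq;
    };

    static irqreturn_t imx_rngc_irq(int irq, void *priv)
    {
        struct imx_rngc *rngc = priv;

        /* Firing before init_completion() means touching an
         * uninitialized completion -- the boot crash being fixed. */
        complete(&rngc->rng_op_done);
        return IRQ_HANDLED;
    }

    static int imx_rngc_probe_fragment(struct platform_device *pdev,
                                       struct imx_rngc *rngc)
    {
        /* The fix: initialize before the handler can possibly run. */
        init_completion(&rngc->rng_op_done);

        return devm_request_irq(&pdev->dev, rngc->irq, imx_rngc_irq, 0,
                                pdev->name, rngc);
    }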
30 Jun, 2022
3 commits
-
This is the 5.15.51 stable release
* tag 'v5.15.51': (136 commits)
Linux 5.15.51
powerpc/pseries: wire up rng during setup_arch()
kbuild: link vmlinux only once for CONFIG_TRIM_UNUSED_KSYMS (2nd attempt)
...
Signed-off-by: Jason Liu
-
This is the 5.15.50 stable release
* tag 'v5.15.50': (1395 commits)
Linux 5.15.50
arm64: mm: Don't invalidate FROM_DEVICE buffers at start of DMA transfer
serial: core: Initialize rs485 RTS polarity already on probe
...
Signed-off-by: Jason Liu
Conflicts:
drivers/bus/fsl-mc/fsl-mc-bus.c
drivers/crypto/caam/ctrl.c
drivers/pci/controller/dwc/pci-imx6.c
drivers/spi/spi-fsl-qspi.c
drivers/tty/serial/fsl_lpuart.c
include/uapi/linux/dma-buf.h
-
This is the 5.15.41 stable release
* tag 'v5.15.41': (1977 commits)
Linux 5.15.41
usb: gadget: uvc: allow for application to cleanly shutdown
usb: gadget: uvc: rename function to be more consistent
...
Signed-off-by: Jason Liu
Conflicts:
arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
arch/arm64/configs/defconfig
drivers/clk/imx/clk-imx8qxp-lpcg.c
drivers/dma/imx-sdma.c
drivers/gpu/drm/bridge/nwl-dsi.c
drivers/mailbox/imx-mailbox.c
drivers/net/phy/at803x.c
drivers/tty/serial/fsl_lpuart.c
security/keys/trusted-keys/trusted_core.c
29 Jun, 2022
3 commits
-
commit 63b8ea5e4f1a87dea4d3114293fc8e96a8f193d7 upstream.
This comment wasn't updated when we moved from read() to read_iter(), so
this patch makes the trivial fix.

Fixes: 1b388e7765f2 ("random: convert to using fops->read_iter()")
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit c01d4d0a82b71857be7449380338bc53dde2da92 upstream.
random.c ratelimits how much it warns about uninitialized urandom reads
using __ratelimit(). When the RNG is finally initialized, it prints the
number of missed messages due to ratelimiting.

It has been this way since that functionality was introduced back in
2018. Recently, cc1e127bfa95 ("random: remove ratelimiting for in-kernel
unseeded randomness") put a bit more stress on the urandom ratelimiting,
which teased out a bug in the implementation.

Specifically, when under pressure, __ratelimit() will print its own
message and reset the count back to 0, making the final message at the
end less useful. Secondly, it does so as a pr_warn(), which apparently
is undesirable for people's CI.

Fortunately, __ratelimit() has the RATELIMIT_MSG_ON_RELEASE flag exactly
for this purpose, so we set the flag.

Fixes: 4e00b339e264 ("random: rate limit unseeded randomness warnings")
Cc: stable@vger.kernel.org
Reported-by: Jon Hunter
Reported-by: Ron Economos
Tested-by: Ron Economos
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
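As a sketch, the flag can be applied to the urandom_warning state like
this (whether the commit sets it at definition time or via a helper call
is a detail from memory):

    #include <linux/ratelimit.h>

    static DEFINE_RATELIMIT_STATE(urandom_warning, HZ, 3);

    static void __init urandom_warning_init(void)
    {
        /* Print one summary of missed messages when the ratelimit
         * state is released, instead of pr_warn() noise meanwhile. */
        ratelimit_set_flags(&urandom_warning, RATELIMIT_MSG_ON_RELEASE);
    }
-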
commit 534d2eaf1970274150596fdd2bf552721e65d6b2 upstream.
It used to be that mix_interrupt_randomness() would credit 1 bit each
time it ran, and so add_interrupt_randomness() would schedule mix() to
run every 64 interrupts, a fairly arbitrary number, but nonetheless
considered to be a decent enough conservative estimate.

Since e3e33fc2ea7f ("random: do not use input pool from hard IRQs"),
mix() is now able to credit multiple bits, depending on the number of
calls to add(). This was done for reasons separate from this commit, but
it has the nice side effect of enabling this patch to schedule mix()
less often.

Currently the rules are:
a) Credit 1 bit for every 64 calls to add().
b) Schedule mix() once a second that add() is called.
c) Schedule mix() once every 64 calls to add().

Rules (a) and (c) no longer need to be coupled. It's still important to
have _some_ value in (c), so that we don't "over-saturate" the fast
pool, but the once per second we get from rule (b) is a plenty enough
baseline. So, by increasing the 64 in rule (c) to something larger, we
avoid calling queue_work_on() as frequently during irq storms.

This commit changes that 64 in rule (c) to be 1024, which means we
schedule mix() 16 times less often. And it does *not* need to change the
64 in rule (a).

Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Cc: stable@vger.kernel.org
Cc: Dominik Brodowski
Acked-by: Sebastian Andrzej Siewior
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
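A simplified sketch of the resulting condition (the real
add_interrupt_randomness() also packs a MIX_INFLIGHT flag into the same
counter; names follow the commit text):

    #include <linux/jiffies.h>
    #include <linux/workqueue.h>

    struct fast_pool {
        struct work_struct mix;
        unsigned long last;     /* jiffies when mix() was last scheduled */
        unsigned int count;     /* calls to add() since then */
    };

    static bool should_schedule_mix(const struct fast_pool *fast_pool,
                                    unsigned int new_count)
    {
        /* Rule (c): once every 1024 calls (previously 64), or
         * rule (b): once the last mix is more than a second old. */
        return new_count >= 1024 ||
               time_is_before_jiffies(fast_pool->last + HZ);
    }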
22 Jun, 2022
1 commit
-
[ Upstream commit 846bb97e131d7938847963cca00657c995b1fce1 ]
This commit changes the default Kconfig values of RANDOM_TRUST_CPU and
RANDOM_TRUST_BOOTLOADER to be Y by default. It does not change any
existing configs or change any kernel behavior. The reason for this is
several fold.

As background, I recently had an email thread with the kernel
maintainers of Fedora/RHEL, Debian, Ubuntu, Gentoo, Arch, NixOS, Alpine,
SUSE, and Void as recipients. I noted that some distros trust RDRAND,
some trust EFI, and some trust both, and I asked why or why not. There
wasn't really much of a "debate" but rather an interesting discussion of
what the historical reasons have been for this, and it came up that some
distros just missed the introduction of the bootloader Kconfig knob,
while another didn't want to enable it until there was a boot time
switch to turn it off for more concerned users (which has since been
added). The result of the rather uneventful discussion is that every
major Linux distro enables these two options by default.

While I didn't have really too strong of an opinion going into this
thread -- and I mostly wanted to learn what the distros' thinking was
one way or another -- ultimately I think their choice was a decent
enough one for a default option (which can be disabled at boot time).
I'll try to summarize the pros and cons:

Pros:

- The RNG machinery gets initialized super quickly, and there's no
  messing around with subsequent blocking behavior.

- The bootloader mechanism is used by kexec in order for the prior
  kernel to initialize the RNG of the next kernel, which increases
  the entropy available to early boot daemons of the next kernel.

- Previous objections related to backdoors centered around
  Dual_EC_DRBG-like kleptographic systems, in which observing some
  amount of the output stream enables an adversary holding the right key
  to determine the entire output stream.

  This used to be a partially justified concern, because RDRAND output
  was mixed into the output stream in varying ways, some of which may
  have lacked pre-image resistance (e.g. XOR or an LFSR).

  But this is no longer the case. Now, all usage of RDRAND and
  bootloader seeds go through a cryptographic hash function. This means
  that the CPU would have to compute a hash pre-image, which is not
  considered to be feasible (otherwise the hash function would be
  terribly broken).

- More generally, if the CPU is backdoored, the RNG is probably not the
  realistic vector of choice for an attacker.

- These CPU or bootloader seeds are far from being the only source of
  entropy. Rather, there is generally a pretty huge amount of entropy,
  not all of which is credited, especially on CPUs that support
  instructions like RDRAND. In other words, assuming RDRAND outputs all
  zeros, an attacker would *still* have to accurately model every single
  other entropy source also in use.

- The RNG now reseeds itself quite rapidly during boot, starting at 2
  seconds, then 4, then 8, then 16, and so forth, so that other sources
  of entropy get used without much delay.

- Paranoid users can set random.trust_{cpu,bootloader}=no in the kernel
  command line, and paranoid system builders can set the Kconfig options
  to N, so there's no reduction or restriction of optionality.

- It's a practical default.

- All the distros have it set this way. Microsoft and Apple trust it
  too. Bandwagon.

Cons:

- RDRAND *could* still be backdoored with something like a fixed key or
  limited space serial number seed or another indexable scheme like
  that. (However, it's hard to imagine threat models where the CPU is
  backdoored like this, yet people are still okay making *any*
  computations with it or connecting it to networks, etc.)

- RDRAND *could* be defective, rather than backdoored, and produce
  garbage that is in one way or another insufficient for crypto.

- Suggesting a *reduction* in paranoia, as this commit effectively does,
  may cause some to question my personal integrity as a "security
  person".

- Bootloader seeds and RDRAND are generally very difficult if not all
  together impossible to audit.

Keep in mind that this doesn't actually change any behavior. This
is just a change in the default Kconfig value. The distros already are
shipping kernels that set things this way.

Ard made an additional argument in [1]:

    We're at the mercy of firmware and micro-architecture anyway, given
    that we are also relying on it to ensure that every instruction in
    the kernel's executable image has been faithfully copied to memory,
    and that the CPU implements those instructions as documented. So I
    don't think firmware or ISA bugs related to RNGs deserve special
    treatment - if they are broken, we should quirk around them like we
    usually do. So enabling these by default is a step in the right
    direction IMHO.

In [2], Phil pointed out that having this disabled masked a bug that CI
otherwise would have caught:

    A clean 5.15.45 boots cleanly, whereas a downstream kernel shows the
    static key warning (but it does go on to boot). The significant
    difference is that our defconfigs set CONFIG_RANDOM_TRUST_BOOTLOADER=y
    defining that on top of multi_v7_defconfig demonstrates the issue on
    a clean 5.15.45. Conversely, not setting that option in a
    downstream kernel build avoids the warning.

[1] https://lore.kernel.org/lkml/CAMj1kXGi+ieviFjXv9zQBSaGyyzeGW_VpMpTLJK8PJb2QHEQ-w@mail.gmail.com/
[2] https://lore.kernel.org/lkml/c47c42e3-1d56-5859-a6ad-976a1a3381c6@raspberrypi.com/

Cc: Theodore Ts'o
Reviewed-by: Ard Biesheuvel
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Sasha Levin
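For reference, a sketch of how the boot-time override mentioned above is
wired up (close to random.c's handling, reproduced from memory, so
details may differ):

    static bool trust_cpu __initdata = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
    static bool trust_bootloader __initdata =
            IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER);

    static int __init parse_trust_cpu(char *arg)
    {
        return kstrtobool(arg, &trust_cpu);
    }
    static int __init parse_trust_bootloader(char *arg)
    {
        return kstrtobool(arg, &trust_bootloader);
    }
    /* random.trust_cpu={on,off} and random.trust_bootloader={on,off} */
    early_param("random.trust_cpu", parse_trust_cpu);
    early_param("random.trust_bootloader", parse_trust_bootloader);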
15 Jun, 2022
4 commits
-
commit 77fc95f8c0dc9e1f8e620ec14d2fb65028fb7adc upstream.
Rather than accounting in bytes and multiplying (shifting), we can just
account in bits and avoid the shift. The main motivation for this is
there are other patches in flux that expand this code a bit, and
avoiding the duplication of "* 8" everywhere makes things a bit clearer.

Cc: stable@vger.kernel.org
Fixes: 12e45a2a6308 ("random: credit architectural init the exact amount")
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
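As a tiny sketch of the idea (function names as used elsewhere in these
commits; the loop structure of the real code is not shown):

    unsigned long seed;
    unsigned int arch_bits = 0;

    /* Account in bits from the start, so no "* 8" appears later. */
    if (arch_get_random_seed_long(&seed))
        arch_bits += BITS_PER_LONG;
    credit_init_bits(arch_bits);
-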
commit 39e0f991a62ed5efabd20711a7b6e7da92603170 upstream.
add_bootloader_randomness() and the variables it touches are only used
during __init and not after, so mark these as __init. At the same time,
unexport this, since it's only called by other __init code that's
built-in.

Cc: stable@vger.kernel.org
Fixes: 428826f5358c ("fdt: add support for rng-seed")
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
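The shape of the change, as a sketch (body simplified from random.c of
that era):

    /* Only called from other __init code, so it can live in .init.text,
     * and its EXPORT_SYMBOL_GPL() can be dropped. */
    void __init add_bootloader_randomness(const void *buf, size_t len)
    {
        mix_pool_bytes(buf, len);
        if (trust_bootloader)
            credit_init_bits(len * 8);
    }
-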
commit 9b29b6b20376ab64e1b043df6301d8a92378e631 upstream.
The current flow expands to:

    if (crng_ready())
        ...
    else if (...)
        if (!crng_ready())
            ...

The second crng_ready() call is redundant, but can't so easily be
optimized out by the compiler.

This commit simplifies that to:

    if (crng_ready())
        ...
    else if (...)
        ...

Fixes: 560181c27b58 ("random: move initialization functions out of hot pages")
Cc: stable@vger.kernel.org
Cc: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
[ Upstream commit b67d19662fdee275c479d21853bc1239600a798f ]
usb_get_dev is called in xillyusb_probe. So it is better to call
usb_put_dev before xdev is released.

Acked-by: Eli Billauer
Signed-off-by: Hangyu Hua
Link: https://lore.kernel.org/r/20220406075703.23464-1-hbh25y@gmail.com
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin
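A sketch of the refcount pairing being restored (the structure and the
setup_fails() failure point are illustrative, not the actual xillyusb
code):

    #include <linux/slab.h>
    #include <linux/usb.h>

    struct xdev_sketch {
        struct usb_device *udev;
    };

    static bool setup_fails(struct xdev_sketch *xdev) { return false; }

    static int probe_sketch(struct usb_interface *interface)
    {
        struct xdev_sketch *xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);

        if (!xdev)
            return -ENOMEM;

        xdev->udev = usb_get_dev(interface_to_usbdev(interface)); /* +1 ref */

        if (setup_fails(xdev)) {
            usb_put_dev(xdev->udev);    /* the fix: drop the ref */
            kfree(xdev);                /* before releasing xdev */
            return -EIO;
        }
        return 0;
    }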
09 Jun, 2022
5 commits
-
This reverts upstream commit f5bda35fba615ace70a656d4700423fa6c9bebee
from stable. It's not essential and will take some time during 5.19 to
work out properly.

Signed-off-by: Jason A. Donenfeld
-
[ Upstream commit e4e62bbc6aba49a5edb3156ec65f6698ff37d228 ]
'ddata->clk' is enabled by clk_prepare_enable(), it should be disabled
by clk_disable_unprepare().

Fixes: 8d9d4bdc495f ("hwrng: omap3-rom - Use runtime PM instead of custom functions")
Signed-off-by: Yang Yingliang
Signed-off-by: Herbert Xu
Signed-off-by: Sasha Levin
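The pairing rule the fix restores, as a sketch (ddata->clk is the clock
named in the commit):

    /* clk_prepare_enable() is clk_prepare() + clk_enable(), so the
     * teardown must be clk_disable_unprepare(), not clk_disable(). */
    ret = clk_prepare_enable(ddata->clk);
    if (ret)
        return ret;

    /* hardware is used here */

    clk_disable_unprepare(ddata->clk);
-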
[ Upstream commit e0687fe958f763f1790f22ed5483025b7624e744 ]
Returning an error value in an i2c remove callback results in an error
message being emitted by the i2c core, but otherwise it doesn't make a
difference. The device goes away anyhow and the devm cleanups are
called.

As tpm_cr50_i2c_remove() emits an error message already and the
additional error message by the i2c core doesn't add any useful
information, change the return value to zero to suppress this error
message.

Note that if i2c_clientdata is NULL, there is something really fishy.
Assuming no memory corruption happened (then all bets are lost anyhow),
tpm_cr50_i2c_remove() is only called after tpm_cr50_i2c_probe() returned
successfully. So there was a tpm chip registered before and after
tpm_cr50_i2c_remove() its privdata is freed but the associated character
device isn't removed. If after that happened userspace accesses the
character device it's likely that the freed memory is accessed. For that
reason the warning message is made a bit more frightening.

Signed-off-by: Uwe Kleine-König
Signed-off-by: Jarkko Sakkinen
Signed-off-by: Sasha Levin
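A sketch of the resulting remove callback (simplified; the real function
also releases the TPM locality before returning):

    static int tpm_cr50_i2c_remove(struct i2c_client *client)
    {
        struct tpm_chip *chip = i2c_get_clientdata(client);

        if (!chip) {
            /* Really fishy, per the note above: privdata was freed
             * while the character device stayed around. */
            dev_crit(&client->dev,
                     "Could not get client data at remove, memory corruption ahead\n");
            return 0;   /* was -ENODEV; the i2c core would only log it */
        }

        tpm_chip_unregister(chip);
        return 0;
    }
-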
[ Upstream commit 2ebaf18a0b7fb764bba6c806af99fe868cee93de ]
The way it was wouldn't work in some situations; simplify it. What was
there was unnecessary complexity.

Reported-by: kernel test robot
Signed-off-by: Corey Minyard
Signed-off-by: Sasha Levin
-
[ Upstream commit 7602b957e2404e5f98d9a40b68f1fd27f0028712 ]
Even though it's not possible to get into the SSIF_GETTING_MESSAGES and
SSIF_GETTING_EVENTS states without a valid message in the msg field,
it's probably best to be defensive here and check and print a log, since
that means something else went wrong.

Also add a default clause to that switch statement to release the lock
and print a log, in case the state variable gets messed up somehow.

Reported-by: Haowen Bai
Signed-off-by: Corey Minyard
Signed-off-by: Sasha Levin
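A sketch of the defensive structure described above (simplified; the
unlock helper and message delivery are stand-ins for the ipmi_ssif.c
internals):

    switch (ssif_info->ssif_state) {
    case SSIF_GETTING_MESSAGES:
    case SSIF_GETTING_EVENTS:
        if (!msg) {
            /* Should be impossible, but don't deref NULL if not. */
            dev_warn(dev, "Event/message state with no message\n");
            break;
        }
        deliver_recv_msg(ssif_info, msg);
        break;
    default:
        /* Messed-up state variable: log it rather than wedge. */
        dev_warn(dev, "Invalid state in message done handling: %d\n",
                 ssif_info->ssif_state);
        break;
    }
    ipmi_ssif_unlock_cond(ssif_info, flags);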
06 Jun, 2022
3 commits
-
commit d0dc1a7100f19121f6e7450f9cdda11926aa3838 upstream.
Currently it returns zero when the CRQ response times out; it should return
an error code instead.

Fixes: d8d74ea3c002 ("tpm: ibmvtpm: Wait for buffer to be set before proceeding")
Signed-off-by: Xiu Jianfeng
Reviewed-by: Stefan Berger
Acked-by: Jarkko Sakkinen
Signed-off-by: Jarkko Sakkinen
Signed-off-by: Greg Kroah-Hartman
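A sketch of the fix's shape (field names follow the driver; the exact
error code is an assumption):

    if (!wait_event_timeout(ibmvtpm->crq_queue.wq,
                            ibmvtpm->rtce_buf != NULL, HZ)) {
        dev_err(dev, "CRQ response timed out\n");
        rc = -ENODEV;   /* previously fell through, returning 0 */
        goto init_irq_cleanup;
    }
-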
commit e57b2523bd37e6434f4e64c7a685e3715ad21e9a upstream.
Under certain conditions uninitialized memory will be accessed.
As described by TCG Trusted Platform Module Library Specification,
rev. 1.59 (Part 3: Commands), if a TPM2_GetCapability is received,
requesting a capability, the TPM in field upgrade mode may return a
zero length list.
Check the property count in tpm2_get_tpm_pt().

Fixes: 2ab3241161b3 ("tpm: migrate tpm2_get_tpm_pt() to use struct tpm_buf")
Cc: stable@vger.kernel.org
Signed-off-by: Stefan Mahnke-Hartmann
Reviewed-by: Jarkko Sakkinen
Signed-off-by: Jarkko Sakkinen
Signed-off-by: Greg Kroah-Hartman
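A sketch of the added check (struct and field names approximate the
tpm2-cmd.c internals):

    out = (struct tpm2_get_cap_out *)&buf.data[TPM_HEADER_SIZE];
    /* A TPM in field upgrade mode may return a zero-length list. */
    if (be32_to_cpu(out->property_cnt) > 0)
        *value = be32_to_cpu(out->value);
    else
        rc = -ENODATA;
-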
commit 074bcd4000e0d812bc253f86fedc40f81ed59ccc upstream.
get_random_bytes() usually hasn't full entropy available by the time DRBG
instances are first getting seeded from it during boot. Thus, the DRBG
implementation registers random_ready_callbacks which would in turn
schedule some work for reseeding the DRBGs once get_random_bytes() has
sufficient entropy available.

For reference, the relevant history around handling DRBG (re)seeding in
the context of a not yet fully seeded get_random_bytes() is:

commit 16b369a91d0d ("random: Blocking API for accessing
nonblocking_pool")
commit 4c7879907edd ("crypto: drbg - add async seeding operation")
commit 205a525c3342 ("random: Add callback API for random pool
readiness")
commit 57225e679788 ("crypto: drbg - Use callback API for random
readiness")
commit c2719503f5e1 ("random: Remove kernel blocking API")

However, some time later, the initialization state of get_random_bytes()
has been made queryable via rng_is_initialized() introduced with commit
9a47249d444d ("random: Make crng state queryable"). This primitive now
allows for streamlining the DRBG reseeding from get_random_bytes() by
replacing that aforementioned asynchronous work scheduling from
random_ready_callbacks with some simpler, synchronous code in
drbg_generate() next to the related logic already present therein. Apart
from improving overall code readability, this change will also enable DRBG
users to rely on wait_for_random_bytes() for ensuring that the initial
seeding has completed, if desired.

The previous patches already laid the grounds by making drbg_seed() to
record at each DRBG instance whether it was being seeded at a time when
rng_is_initialized() still had been false as indicated by
->seeded == DRBG_SEED_STATE_PARTIAL.

All that remains to be done now is to make drbg_generate() check for this
condition, determine whether rng_is_initialized() has flipped to true in
the meanwhile and invoke a reseed from get_random_bytes() if so.

Make this move:
- rename the former drbg_async_seed() work handler, i.e. the one in charge
  of reseeding a DRBG instance from get_random_bytes(), to
  "drbg_seed_from_random()",
- change its signature as appropriate, i.e. make it take a struct
  drbg_state rather than a work_struct and change its return type from
  "void" to "int" in order to allow for passing error information from
  e.g. its __drbg_seed() invocation onwards to callers,
- make drbg_generate() invoke this drbg_seed_from_random() once it
  encounters a DRBG instance with ->seeded == DRBG_SEED_STATE_PARTIAL by
  the time rng_is_initialized() has flipped to true and
- prune everything related to the former, random_ready_callback based
  mechanism.

As drbg_seed_from_random() is now getting invoked from drbg_generate() with
the ->drbg_mutex being held, it must not attempt to recursively grab it
once again. Remove the corresponding mutex operations from what is now
drbg_seed_from_random(). Furthermore, as drbg_seed_from_random() can now
report errors directly to its caller, there's no need for it to temporarily
switch the DRBG's ->seeded state to DRBG_SEED_STATE_UNSEEDED so that a
failure of the subsequently invoked __drbg_seed() will get signaled to
drbg_generate(). Don't do it then.

Signed-off-by: Nicolai Stange
Signed-off-by: Herbert Xu
[Jason: for stable, undid the modifications for the backport of 5acd3548.]
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
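A minimal sketch of the check drbg_generate() now performs (the wrapper
function is hypothetical; drbg_seed_from_random() is the renamed handler
described above):

    /* Called with drbg->drbg_mutex already held, as in drbg_generate(). */
    static int drbg_maybe_reseed_from_random(struct drbg_state *drbg)
    {
        if (drbg->seeded == DRBG_SEED_STATE_PARTIAL &&
            rng_is_initialized())
            return drbg_seed_from_random(drbg);
        return 0;
    }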
30 May, 2022
20 commits
-
commit 1ce6c8d68f8ac587f54d0a271ac594d3d51f3efb upstream.
get_random_bytes_user() checks for signals after producing a PAGE_SIZE
worth of output, just like /dev/zero does. write_pool() is doing
basically the same work (actually, slightly more expensive), and so
should stop to check for signals in the same way. Let's also name it
write_pool_user() to match get_random_bytes_user(), so this won't be
misused in the future.

Before this patch, massive writes to /dev/urandom would tie up the
process for an extremely long time and make it unterminatable. After, it
can be successfully interrupted. The following test program can be used
to see this works as intended:

    #include <unistd.h>
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>

    static unsigned char x[~0U];

    static void handle(int) { }

    int main(int argc, char *argv[])
    {
        pid_t pid = getpid(), child;
        int fd;

        signal(SIGUSR1, handle);
        if (!(child = fork())) {
            for (;;)
                kill(pid, SIGUSR1);
        }
        fd = open("/dev/urandom", O_WRONLY);
        pause();
        printf("interrupted after writing %zd bytes\n", write(fd, x, sizeof(x)));
        close(fd);
        kill(child, SIGTERM);
        return 0;
    }

Result before: "interrupted after writing 2147479552 bytes"
Result after: "interrupted after writing 4096 bytes"

Cc: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit 79025e727a846be6fd215ae9cdb654368ac3f9a6 upstream.
Now that random/urandom is using {read,write}_iter, we can wire it up to
using the generic splice handlers.

Fixes: 36e2c7421f02 ("fs: don't allow splice read/write without explicit ops")
Signed-off-by: Jens Axboe
[Jason: added the splice_write path. Note that sendfile() and such still
does not work for read, though it does for write, because of a file
type restriction in splice_direct_to_actor(), which I'll address
separately.]
Cc: Al Viro
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
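The resulting wiring looks roughly like this in the fops table (handler
names follow the read_iter/write_iter conversion; remaining ops elided to
a comment):

    const struct file_operations random_fops = {
        .read_iter = random_read_iter,
        .write_iter = random_write_iter,
        .splice_read = generic_file_splice_read,
        .splice_write = iter_file_splice_write,
        /* poll, unlocked_ioctl, compat_ioctl, fasync, llseek as before */
    };
-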
commit 22b0a222af4df8ee9bb8e07013ab44da9511b047 upstream.
Now that the read side has been converted to fix a regression with
splice, convert the write side as well to have some symmetry in the
interface used (and help deprecate ->write()).

Signed-off-by: Jens Axboe
[Jason: cleaned up random_ioctl a bit, require full writes in
RNDADDENTROPY since it's crediting entropy, simplify control flow of
write_pool(), and incorporate suggestions from Al.]
Cc: Al Viro
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit 1b388e7765f2eaa137cf5d92b47ef5925ad83ced upstream.
This is a pre-requisite to wiring up splice() again for the random
and urandom drivers. It also allows us to remove the INT_MAX check in
getrandom(), because import_single_range() applies capping internally.

Signed-off-by: Jens Axboe
[Jason: rewrote get_random_bytes_user() to simplify and also incorporate
additional suggestions from Al.]
Cc: Al Viro
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit 3092adcef3ffd2ef59634998297ca8358461ebce upstream.
There are currently two separate batched entropy implementations, for
u32 and u64, with nearly identical code, with the goal of avoiding
unaligned memory accesses and letting the buffers be used more
efficiently. Having to maintain these two functions independently is a
bit of a hassle though, considering that they always need to be kept in
sync.

This commit factors them out into a type-generic macro, so that the
expansion produces the same code as before, such that diffing the
assembly shows no differences. This will also make it easier in the
future to add u16 and u8 batches.

This was initially tested using an always_inline function and letting
gcc constant fold the type size in, but the code gen was less efficient,
and in general it was more verbose and harder to follow. So this patch
goes with the boring macro solution, similar to what's already done for
the _wait functions in random.h.

Cc: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
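A heavily simplified sketch of the token-pasting structure (the real
macro is per-cpu, takes a local lock, and invalidates batches with a
generation counter; this toy version only shows the shape):

    #define DEFINE_BATCHED_ENTROPY(type)                                \
    struct batch_ ## type {                                             \
        type entropy[48 / sizeof(type)];                                \
        unsigned int position;                                          \
    };                                                                  \
    static struct batch_ ## type batch_ ## type = {                     \
        .position = UINT_MAX                                            \
    };                                                                  \
                                                                        \
    type get_random_ ## type(void)                                      \
    {                                                                   \
        struct batch_ ## type *batch = &batch_ ## type;                 \
                                                                        \
        if (batch->position >= ARRAY_SIZE(batch->entropy)) {            \
            get_random_bytes(batch->entropy, sizeof(batch->entropy));   \
            batch->position = 0;                                        \
        }                                                               \
        return batch->entropy[batch->position++];                       \
    }

    DEFINE_BATCHED_ENTROPY(u32)
    DEFINE_BATCHED_ENTROPY(u64)
-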
commit 5ad7dd882e45d7fe432c32e896e2aaa0b21746ea upstream.
randomize_page is an mm function. It is documented like one. It contains
the history of one. It has the naming convention of one. It looks
just like another very similar function in mm, randomize_stack_top().
And it has always been maintained and updated by mm people. There is no
need for it to be in random.c. In the "which shape does not look like
the other ones" test, pointing to randomize_page() is correct.So move randomize_page() into mm/util.c, right next to the similar
randomize_stack_top() function.This commit contains no actual code changes.
Cc: Andrew Morton
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit 560181c27b582557d633ecb608110075433383af upstream.
Much of random.c is devoted to initializing the rng and accounting for
when a sufficient amount of entropy has been added. In a perfect world,
this would all happen during init, and so we could mark these functions
as __init. But in reality, this isn't the case: sometimes the rng only
finishes initializing some seconds after system init is finished.

For this reason, at the moment, a whole host of functions that are only
used relatively close to system init and then never again are intermixed
with functions that are used in hot code all the time. This creates more
cache misses than necessary.

In order to pack the hot code closer together, this commit moves the
initialization functions that can't be marked as __init into
.text.unlikely by way of the __cold attribute.

Of particular note is moving credit_init_bits() into a macro wrapper
that inlines the crng_ready() static branch check. This avoids a
function call to a nop+ret, and most notably prevents extra entropy
arithmetic from being computed in mix_interrupt_randomness().

Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
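The macro wrapper mentioned above has roughly this shape (reproduced
from memory):

    static void __cold _credit_init_bits(size_t bits);

    /* crng_ready()'s static branch is inlined at every call site, so
     * once the RNG is initialized the __cold call is skipped entirely. */
    #define credit_init_bits(bits) do {             \
            if (!crng_ready())                      \
                    _credit_init_bits(bits);        \
    } while (0)
-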
commit a19402634c435a4eae226df53c141cdbb9922e7b upstream.
The current code was a mix of "nbytes", "count", "size", "buffer", "in",
and so forth. Instead, let's clean this up by naming input parameters
"buf" (or "ubuf") and "len", so that you always understand that you're
reading this variety of function argument.

Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit f5bda35fba615ace70a656d4700423fa6c9bebee upstream.
Since crng_ready() is only false briefly during initialization and then
forever after becomes true, we don't need to evaluate it after, making
it a prime candidate for a static branch.

One complication, however, is that it changes state in a particular call
to credit_init_bits(), which might be made from atomic context, which
means we must kick off a workqueue to change the static key. Further
complicating things, credit_init_bits() may be called sufficiently early
on in system initialization such that system_wq is NULL.

Fortunately, there exists the nice function execute_in_process_context(),
which will immediately execute the function if !in_interrupt(), and
otherwise defer it to a workqueue. During early init, before workqueues
are available, in_interrupt() is always false, because interrupts
haven't even been enabled yet, which means the function in that case
executes immediately. Later on, after workqueues are available,
in_interrupt() might be true, but in that case, the work is queued in
system_wq and all goes well.

Cc: Theodore Ts'o
Cc: Sultan Alsawaf
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
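A sketch of the mechanism (close to the upstream random.c code, from
memory):

    static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
    #define crng_ready() \
        (static_branch_likely(&crng_is_ready) || crng_init >= CRNG_READY)

    static struct execute_work set_ready;

    static void crng_set_ready(struct work_struct *work)
    {
        static_branch_enable(&crng_is_ready);
    }

    /* From credit_init_bits(), once the threshold is crossed: runs
     * immediately if !in_interrupt(), otherwise via system_wq. */
    execute_in_process_context(crng_set_ready, &set_ready);
-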
commit 12e45a2a6308105469968951e6d563e8f4fea187 upstream.
RDRAND and RDSEED can fail sometimes, which is fine. We currently
initialize the RNG with 512 bits of RDRAND/RDSEED. We only need 256 bits
of those to succeed in order to initialize the RNG. Instead of the
current "all or nothing" approach, actually credit these contributions
the amount that is actually contributed.

Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit 2f14062bb14b0fcfcc21e6dc7d5b5c0d25966164 upstream.
Currently, start_kernel() adds latent entropy and the command line to
the entropy bool *after* the RNG has been initialized, deferring when
it's actually used by things like stack canaries until the next time
the pool is seeded. This surely is not intended.Rather than splitting up which entropy gets added where and when between
start_kernel() and random_init(), just do everything in random_init(),
which should eliminate these kinds of bugs in the future.While we're at it, rename the awkwardly titled "rand_initialize()" to
the more standard "random_init()" nomenclature.Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit 8a5b8a4a4ceb353b4dd5bafd09e2b15751bcdb51 upstream.
This expands to exactly the same code that it replaces, but makes things
consistent by using the same macro for jiffy comparisons throughout.

Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit cc1e127bfa95b5fb2f9307e7168bf8b2b45b4c5e upstream.
The CONFIG_WARN_ALL_UNSEEDED_RANDOM debug option controls whether the
kernel warns about all unseeded randomness or just the first instance.
There's some complicated rate limiting and comparison to the previous
caller, such that even with CONFIG_WARN_ALL_UNSEEDED_RANDOM enabled,
developers still don't see all the messages or even an accurate count of
how many were missed. This is the result of basically parallel
mechanisms aimed at accomplishing more or less the same thing, added at
different points in random.c history, which sort of compete with the
first-instance-only limiting we have now.

It turns out, however, that nobody cares about the first unseeded
randomness instance of in-kernel users. The same first user has been
there for ages now, and nobody is doing anything about it. It isn't even
clear that anybody _can_ do anything about it. Most places that can do
something about it have switched over to using get_random_bytes_wait()
or wait_for_random_bytes(), which is the right thing to do, but there is
still much code that needs randomness sometimes during init, and as a
general rule, if you're not using one of the _wait functions or the
readiness notifier callback, you're bound to be doing it wrong just
based on that fact alone.

So warning about this same first user that can't easily change is simply
not an effective mechanism for anything at all. Users can't do anything
about it, as the Kconfig text points out -- the problem isn't in
userspace code -- and kernel developers don't or more often can't react
to it.

Instead, show the warning for all instances when
CONFIG_WARN_ALL_UNSEEDED_RANDOM is set, so that developers can debug
things as need be, or if it isn't set, don't show a warning at all.

At the same time, CONFIG_WARN_ALL_UNSEEDED_RANDOM now implies setting
random.ratelimit_disable=1 on by default, since if you care about one
you probably care about the other too. And we can clean up usage around
the related urandom_warning ratelimiter as well (whose behavior isn't
changing), so that it properly counts missed messages after the 10
message threshold is reached.

Cc: Theodore Ts'o
Cc: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit 68c9c8b192c6dae9be6278e98ee44029d5da2d31 upstream.
Initialization happens once -- by way of credit_init_bits() -- and then
it never happens again. Therefore, it doesn't need to be in
crng_reseed(), which is a hot path that is called multiple times. It
also doesn't make sense to have there, as initialization activity is
better associated with initialization routines.

After the prior commit, crng_reseed() now won't be called by multiple
concurrent callers, which means that we can safely move the
"finalize_init" logic into credit_init_bits() unconditionally.

Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit fed7ef061686cc813b1f3d8d0edc6c35b4d3537b upstream.
Since all changes of crng_init now go through credit_init_bits(), we can
fix a long standing race in which two concurrent callers of
credit_init_bits() have the new bit count >= some threshold, but are
doing so with crng_init as a lower threshold, checked outside of a lock,
resulting in crng_reseed() or similar being called twice.

In order to fix this, we can use the original cmpxchg value of the bit
count, and only change crng_init when the bit count transitions from
below a threshold to meeting the threshold.

Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
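A sketch of the race-free transition (close to the upstream code):

    unsigned int orig, new;

    do {
        orig = READ_ONCE(input_pool.init_bits);
        new = min_t(unsigned int, orig + nbits, POOL_BITS);
    } while (cmpxchg(&input_pool.init_bits, orig, new) != orig);

    /* Exactly one caller observes the threshold transition. */
    if (orig < POOL_READY_BITS && new >= POOL_READY_BITS)
        crng_reseed();
-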
commit e3d2c5e79a999aa4e7d6f0127e16d3da5a4ff70d upstream.
crng_init represents a state machine, with three states, and various
rules for transitions. For the longest time, we've been managing these
with "0", "1", and "2", and expecting people to figure it out. To make
the code more obvious, replace these with proper enum values
representing the transition, and then redocument what each of these
states mean.

Reviewed-by: Dominik Brodowski
Cc: Joe Perches
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
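The resulting states (reproduced from memory; the comments paraphrase
the redocumentation):

    enum {
        CRNG_EMPTY = 0, /* little to no entropy collected */
        CRNG_EARLY = 1, /* at least POOL_EARLY_BITS collected */
        CRNG_READY = 2  /* fully initialized, POOL_READY_BITS collected */
    };
-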
commit e73aaae2fa9024832e1f42e30c787c7baf61d014 upstream.
The SipHash family of permutations is currently used in three places:
- siphash.c itself, used in the ordinary way it was intended.
- random32.c, in a construction from an anonymous contributor.
- random.c, as part of its fast_mix function.

Each one of these places reinvents the wheel with the same C code, same
rotation constants, and same symmetry-breaking constants.

This commit tidies things up a bit by placing macros for the
permutations and constants into siphash.h, where each of the three .c
users can access them. It also leaves a note dissuading more users of
them from emerging.

Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
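The shared round, roughly as it lands in siphash.h (one SipHash
permutation over the four state words):

    #define SIPHASH_PERMUTATION(a, b, c, d) ( \
        (a) += (b), (b) = rol64((b), 13), (b) ^= (a), (a) = rol64((a), 32), \
        (c) += (d), (d) = rol64((d), 16), (d) ^= (c), \
        (a) += (d), (d) = rol64((d), 21), (d) ^= (a), \
        (c) += (b), (b) = rol64((b), 17), (b) ^= (c), (c) = rol64((c), 32))
-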
commit 791332b3cbb080510954a4c152ce02af8832eac9 upstream.
Now that fast_mix() has more than one caller, gcc no longer inlines it.
That's fine. But it also doesn't handle the compound literal argument we
pass it very efficiently, nor does it handle the loop as well as it
could. So just expand the code to spell out this function so that it
generates the same code as it did before. Performance-wise, this now
behaves as it did before the last commit. The difference in actual code
size on x86 is 45 bytes, which is less than a cache line.

Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
-
commit e3e33fc2ea7fcefd0d761db9d6219f83b4248f5c upstream.
Years ago, a separate fast pool was added for interrupts, so that the
cost associated with taking the input pool spinlocks and mixing into it
would be avoided in places where latency is critical. However, one
oversight was that add_input_randomness() and add_disk_randomness()
still sometimes are called directly from the interrupt handler, rather
than being deferred to a thread. This means that some unlucky interrupts
will be caught doing a blake2s_compress() call and potentially spinning
on input_pool.lock, which can also be taken by unprivileged users by
writing into /dev/urandom.

In order to fix this, add_timer_randomness() now checks whether it is
being called from a hard IRQ and if so, just mixes into the per-cpu IRQ
fast pool using fast_mix(), which is much faster and can be done
lock-free. A nice consequence of this, as well, is that it means hard
IRQ context FPU support is likely no longer useful.

The entropy estimation algorithm used by add_timer_randomness() is also
somewhat different than the one used for add_interrupt_randomness(). The
former looks at deltas of deltas of deltas, while the latter just waits
for 64 interrupts for one bit or for one second since the last bit. In
order to bridge these, and since add_interrupt_randomness() runs after
an add_timer_randomness() that's called from hard IRQ, we add to the
fast pool credit the related amount, and then subtract one to account
for add_interrupt_randomness()'s contribution.

A downside of this, however, is that the num argument is potentially
attacker controlled, which puts a bit more pressure on the fast_mix()
sponge to do more than it's really intended to do. As a mitigating
factor, the first 96 bits of input aren't attacker controlled (a cycle
counter followed by zeros), which means it's essentially two rounds of
siphash rather than one, which is somewhat better. It's also not that
much different from add_interrupt_randomness()'s use of the irq stack
instruction pointer register.

Cc: Thomas Gleixner
Cc: Filipe Manana
Cc: Peter Zijlstra
Cc: Borislav Petkov
Cc: Theodore Ts'o
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
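A sketch of the branch described above (simplified from the resulting
add_timer_randomness()):

    if (in_hardirq()) {
        /* Lock-free: mix into this CPU's IRQ fast pool. */
        fast_mix(this_cpu_ptr(&irq_randomness)->pool, entropy, num);
    } else {
        spin_lock_irqsave(&input_pool.lock, flags);
        _mix_pool_bytes(&entropy, sizeof(entropy));
        _mix_pool_bytes(&num, sizeof(num));
        spin_unlock_irqrestore(&input_pool.lock, flags);
    }
-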
commit a4b5c26b79ffdfcfb816c198f2fc2b1e7b5b580f upstream.
There are no code changes here; this is just a reordering of functions,
so that in subsequent commits, the timer entropy functions can call into
the interrupt ones.

Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman