25 Oct, 2020

3 commits

  • Given that this code is new, let's add a selftest for it as well.
    It doesn't rely on fixed sets; instead, it picks 1024 numbers and
    verifies that they're not more correlated than desired.

    Link: https://lore.kernel.org/netdev/20200808152628.GA27941@SDF.ORG/
    Cc: George Spelvin
    Cc: Amit Klein
    Cc: Eric Dumazet
    Cc: "Jason A. Donenfeld"
    Cc: Andy Lutomirski
    Cc: Kees Cook
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Linus Torvalds
    Cc: tytso@mit.edu
    Cc: Florian Westphal
    Cc: Marc Plumb
    Signed-off-by: Willy Tarreau

    Willy Tarreau
     
  • With the removal of the interrupt perturbations in previous random32
    change (random32: make prandom_u32() output unpredictable), the PRNG
    has become 100% deterministic again. While SipHash is expected to be
    way more robust against brute force than the previous Tausworthe LFSR,
    there's still the risk that whoever has even one temporary access to
    the PRNG's internal state is able to predict all subsequent draws till
    the next reseed (roughly every minute). This may happen through a side
    channel attack or any data leak.

    This patch restores the spirit of commit f227e3ec3b5c ("random32: update
    the net random state on interrupt and activity") in that it will perturb
    the internal PRNG's state using externally collected noise, except that
    it will not pick that noise from the random pool's bits nor upon
    interrupt, but will rather combine a few elements along the Tx path
    that are collectively hard to predict, such as dev, skb and txq
    pointers, packet length and jiffies values. These are combined
    using a single round of SipHash into a single long variable that is
    mixed with the net_rand_state upon each invocation.

    The operation was inlined because it produces very small and efficient
    code, typically 3 xor, 2 add and 2 rol. The performance was measured
    to be the same as (or even very slightly better than) before the switch
    to SipHash; on a 6-core 12-thread Core i7-8700k equipped with a 40G NIC
    (i40e), the connection rate dropped from 556k/s to 555k/s while the
    SYN cookie rate grew from 5.38 Mpps to 5.45 Mpps.

    Link: https://lore.kernel.org/netdev/20200808152628.GA27941@SDF.ORG/
    Cc: George Spelvin
    Cc: Amit Klein
    Cc: Eric Dumazet
    Cc: "Jason A. Donenfeld"
    Cc: Andy Lutomirski
    Cc: Kees Cook
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Linus Torvalds
    Cc: tytso@mit.edu
    Cc: Florian Westphal
    Cc: Marc Plumb
    Tested-by: Sedat Dilek
    Signed-off-by: Willy Tarreau

    Willy Tarreau
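
    A minimal sketch of the kind of mixing helper described above, assuming
    the SIPROUND definitions that this series moved into prandom.h; the
    helper and macro names and the per-cpu noise word are assumptions, not a
    quote of the patch:

    #include <linux/prandom.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, net_rand_noise);

    /* Fold four hard-to-predict Tx-path values (e.g. dev, skb, txq and
     * len + jiffies) into one word with a single SipHash round. */
    static inline void prandom_u32_add_noise(unsigned long a, unsigned long b,
                                             unsigned long c, unsigned long d)
    {
            a ^= raw_cpu_read(net_rand_noise);
            PRND_SIPROUND(a, b, c, d);
            raw_cpu_write(net_rand_noise, d);
    }

    The generator would then fold net_rand_noise into net_rand_state on each
    invocation, as the text above describes.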
     
  • Non-cryptographic PRNGs may have great statistical properties, but
    are usually trivially predictable to someone who knows the algorithm,
    given a small sample of their output. An LFSR like prandom_u32() is
    particularly simple, even if the sample is widely scattered bits.

    It turns out the network stack uses prandom_u32() for some things like
    random port numbers which it would prefer are *not* trivially predictable.
    Predictability led to a practical DNS spoofing attack. Oops.

    This patch replaces the LFSR with a homebrew cryptographic PRNG based
    on the SipHash round function, which is in turn seeded with 128 bits
    of strong random key. (The authors of SipHash have *not* been consulted
    about this abuse of their algorithm.) Speed is prioritized over security;
    attacks are rare, while performance is always wanted.

    Replacing all callers of prandom_u32() is the quick fix.
    Whether to reinstate a weaker PRNG for uses which can tolerate it
    is an open question.

    Commit f227e3ec3b5c ("random32: update the net random state on interrupt
    and activity") was an earlier attempt at a solution. This patch replaces
    it.

    Reported-by: Amit Klein
    Cc: Willy Tarreau
    Cc: Eric Dumazet
    Cc: "Jason A. Donenfeld"
    Cc: Andy Lutomirski
    Cc: Kees Cook
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Linus Torvalds
    Cc: tytso@mit.edu
    Cc: Florian Westphal
    Cc: Marc Plumb
    Fixes: f227e3ec3b5c ("random32: update the net random state on interrupt and activity")
    Signed-off-by: George Spelvin
    Link: https://lore.kernel.org/netdev/20200808152628.GA27941@SDF.ORG/
    [ willy: partial reversal of f227e3ec3b5c; moved SIPROUND definitions
    to prandom.h for later use; merged George's prandom_seed() proposal;
    inlined siprand_u32(); replaced the net_rand_state[] array with 4
    members to fix a build issue; cosmetic cleanups to make checkpatch
    happy; fixed RANDOM32_SELFTEST build ]
    Signed-off-by: Willy Tarreau

    George Spelvin
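
    A rough sketch of the SipHash-round based generator described above,
    using the SIPROUND definitions from prandom.h; the structure name, the
    number of rounds and the output combination are assumptions, not a
    quote of the patch:

    #include <linux/prandom.h>
    #include <linux/percpu.h>

    struct siprand_state {
            unsigned long v0, v1, v2, v3;   /* seeded from strong randomness */
    };

    static DEFINE_PER_CPU(struct siprand_state, net_rand_state);

    static inline u32 siprand_u32(struct siprand_state *s)
    {
            unsigned long v0 = s->v0, v1 = s->v1, v2 = s->v2, v3 = s->v3;

            /* a couple of SipHash rounds per output word */
            PRND_SIPROUND(v0, v1, v2, v3);
            PRND_SIPROUND(v0, v1, v2, v3);
            s->v0 = v0;  s->v1 = v1;  s->v2 = v2;  s->v3 = v3;
            return v1 + v3;
    }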
     

03 Oct, 2020

1 commit

  • Commit f227e3ec3b5c ("random32: update the net random state on interrupt
    and activity") broke compilation and was temporarily fixed by Linus in
    83bdc7275e62 ("random32: remove net_rand_state from the latent entropy
    gcc plugin") by entirely moving net_rand_state out of the things handled
    by the latent_entropy GCC plugin.

    From what I understand when reading the plugin code, using the
    __latent_entropy attribute on a declaration was the wrong part and
    simply keeping the __latent_entropy attribute on the variable definition
    was the correct fix.

    Fixes: 83bdc7275e62 ("random32: remove net_rand_state from the latent entropy gcc plugin")
    Acked-by: Willy Tarreau
    Cc: Emese Revfy
    Signed-off-by: Thibaut Sautereau
    Signed-off-by: Linus Torvalds

    Thibaut Sautereau
     

14 Aug, 2020

1 commit

  • There has been some heat around prandom_u32() lately, and some people
    were wondering if there was a simple way to determine how often
    it was used, before considering making it maybe 10 times more expensive.

    This tracepoint exports the generated pseudo random value.

    Tested:

    perf list | grep prandom_u32
    random:prandom_u32 [Tracepoint event]

    perf record -a [-g] [-C1] -e random:prandom_u32 sleep 1
    [ perf record: Woken up 0 times to write data ]
    [ perf record: Captured and wrote 259.748 MB perf.data (924087 samples) ]

    perf report --no-children
    ...
        97.67%  ksoftirqd/1  [kernel.vmlinux]  [k] prandom_u32
                |
                ---prandom_u32
                   prandom_u32
                   |
                   |--48.86%--tcp_v4_syn_recv_sock
                   |          tcp_check_req
                   |          tcp_v4_rcv
                   |          ...
                    --48.81%--tcp_conn_request
                              tcp_v4_conn_request
                              tcp_rcv_state_process
                              ...
    perf script

    Signed-off-by: Eric Dumazet
    Cc: Willy Tarreau
    Cc: Sedat Dilek
    Tested-by: Sedat Dilek
    Signed-off-by: David S. Miller

    Eric Dumazet
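
    A minimal sketch of what such a tracepoint definition could look like,
    omitting the usual TRACE_SYSTEM/CREATE_TRACE_POINTS boilerplate; the
    field layout is an assumption, not a quote of the patch:

    #include <linux/tracepoint.h>

    TRACE_EVENT(prandom_u32,
            TP_PROTO(u32 ret),

            TP_ARGS(ret),

            TP_STRUCT__entry(
                    __field(u32, ret)
            ),

            TP_fast_assign(
                    __entry->ret = ret;
            ),

            TP_printk("ret=%u", __entry->ret)
    );

    prandom_u32() would then simply call trace_prandom_u32() on its result
    just before returning it.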
     

30 Jul, 2020

2 commits

  • It turns out that the plugin right now ends up being really unhappy
    about the change from 'static' to 'extern' storage that happened in
    commit f227e3ec3b5c ("random32: update the net random state on interrupt
    and activity").

    This is probably a trivial fix for the latent_entropy plugin, but for
    now, just remove net_rand_state from the list of things the plugin
    worries about.

    Reported-by: Stephen Rothwell
    Cc: Emese Revfy
    Cc: Kees Cook
    Cc: Willy Tarreau
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • This modifies the first 32 bits out of the 128 bits of a random CPU's
    net_rand_state on interrupt or CPU activity to complicate remote
    observations that could lead to guessing the network RNG's internal
    state.

    Note that depending on some network devices' interrupt rate moderation
    or binding, this re-seeding might happen on every packet or even almost
    never.

    In addition, with NOHZ some CPUs might not even get timer interrupts,
    leaving their local state rarely updated, while they are running
    networked processes making use of the random state. For this reason, we
    also perform this update in update_process_times() in order to at least
    update the state when there is user or system activity, since it's the
    only case we care about.

    Reported-by: Amit Klein
    Suggested-by: Linus Torvalds
    Cc: Eric Dumazet
    Cc: "Jason A. Donenfeld"
    Cc: Andy Lutomirski
    Cc: Kees Cook
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc:
    Signed-off-by: Willy Tarreau
    Signed-off-by: Linus Torvalds

    Willy Tarreau
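
    A sketch of the activity-based update described above, assuming it sits
    at the end of update_process_times(); the exact value mixed in is an
    assumption, not a quote of the patch:

    /* kernel/time/timer.c (sketch) */
    void update_process_times(int user_tick)
    {
            /* ... existing tick accounting, scheduler_tick(), etc. ... */

            /* This CPU might use net randoms without receiving interrupts
             * often enough to refresh them, so fold an activity-dependent,
             * non-constant value into the first word of its state. */
            this_cpu_add(net_rand_state.s1, rol32(jiffies, 24) + user_tick);
    }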
     

18 May, 2019

1 commit

  • Variable 'entropy' was wrongly documented as 'seed'; the comment was
    changed to reflect the actual variable name.

    ../lib/random32.c:179: warning: Function parameter or member 'entropy' not described in 'prandom_seed'
    ../lib/random32.c:179: warning: Excess function parameter 'seed' description in 'prandom_seed'

    Signed-off-by: Philippe Mazenauer
    Acked-by: Lee Jones
    Signed-off-by: David S. Miller

    Philippe Mazenauer
     

22 Nov, 2017

1 commit

  • This changes all DEFINE_TIMER() callbacks to use a struct timer_list
    pointer instead of unsigned long. Since the data argument has already been
    removed, none of these callbacks are using their argument currently, so
    this renames the argument to "unused".

    Done using the following semantic patch:

    @match_define_timer@
    declarer name DEFINE_TIMER;
    identifier _timer, _callback;
    @@

    DEFINE_TIMER(_timer, _callback);

    @change_callback depends on match_define_timer@
    identifier match_define_timer._callback;
    type _origtype;
    identifier _origarg;
    @@

    void
    -_callback(_origtype _origarg)
    +_callback(struct timer_list *unused)
    { ... }

    Signed-off-by: Kees Cook

    Kees Cook
     

14 Nov, 2017

1 commit

  • Pull timer updates from Thomas Gleixner:
    "Yet another big pile of changes:

    - More year 2038 work from Arnd slowly reaching the point where we
    need to think about the syscalls themselves.

    - A new timer function which allows conditionally (re)arming a timer
    only when it's either not running or the new expiry time is sooner
    than the armed expiry time. This allows using a single timer for
    multiple timeout requirements w/o caring about the first expiry
    time at the call site.

    - A new NMI safe accessor to clock real time for the printk timestamp
    work. Can be used by tracing, perf as well if required.

    - A large number of timer setup conversions from Kees which got
    collected here because either maintainers requested so or they
    simply got ignored. As Kees pointed out already there are a few
    trivial merge conflicts and some redundant commits which was
    unavoidable due to the size of this conversion effort.

    - Avoid a redundant iteration in the timer wheel softirq processing.

    - Provide a mechanism to treat RTC implementations depending on their
    hardware properties, i.e. don't inflict the write at the 0.5
    seconds boundary which originates from the PC CMOS RTC to all RTCs.
    No functional change as drivers need to be updated separately.

    - The usual small updates to core code clocksource drivers. Nothing
    really exciting"

    * 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (111 commits)
    timers: Add a function to start/reduce a timer
    pstore: Use ktime_get_real_fast_ns() instead of __getnstimeofday()
    timer: Prepare to change all DEFINE_TIMER() callbacks
    netfilter: ipvs: Convert timers to use timer_setup()
    scsi: qla2xxx: Convert timers to use timer_setup()
    block/aoe: discover_timer: Convert timers to use timer_setup()
    ide: Convert timers to use timer_setup()
    drbd: Convert timers to use timer_setup()
    mailbox: Convert timers to use timer_setup()
    crypto: Convert timers to use timer_setup()
    drivers/pcmcia: omap1: Fix error in automated timer conversion
    ARM: footbridge: Fix typo in timer conversion
    drivers/sgi-xp: Convert timers to use timer_setup()
    drivers/pcmcia: Convert timers to use timer_setup()
    drivers/memstick: Convert timers to use timer_setup()
    drivers/macintosh: Convert timers to use timer_setup()
    hwrng/xgene-rng: Convert timers to use timer_setup()
    auxdisplay: Convert timers to use timer_setup()
    sparc/led: Convert timers to use timer_setup()
    mips: ip22/32: Convert timers to use timer_setup()
    ...

    Linus Torvalds
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it.
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information,

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied to
    a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging was:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source
    - File already had some variant of a license header in it (even if <5 lines).
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

05 Oct, 2017

1 commit

  • Drop the arguments from the macro and adjust all callers with the
    following script:

    perl -pi -e 's/DEFINE_TIMER\((.*), 0, 0\);/DEFINE_TIMER($1);/g;' \
    $(git grep DEFINE_TIMER | cut -d: -f1 | sort -u | grep -v timer.h)

    Signed-off-by: Kees Cook
    Acked-by: Geert Uytterhoeven # for m68k parts
    Acked-by: Guenter Roeck # for watchdog parts
    Acked-by: David S. Miller # for networking parts
    Acked-by: Greg Kroah-Hartman
    Acked-by: Kalle Valo # for wireless parts
    Acked-by: Arnd Bergmann
    Cc: linux-mips@linux-mips.org
    Cc: Petr Mladek
    Cc: Benjamin Herrenschmidt
    Cc: Lai Jiangshan
    Cc: Sebastian Reichel
    Cc: Kalle Valo
    Cc: Paul Mackerras
    Cc: Pavel Machek
    Cc: linux1394-devel@lists.sourceforge.net
    Cc: Chris Metcalf
    Cc: linux-s390@vger.kernel.org
    Cc: linux-wireless@vger.kernel.org
    Cc: "James E.J. Bottomley"
    Cc: Wim Van Sebroeck
    Cc: Michael Ellerman
    Cc: Ursula Braun
    Cc: Viresh Kumar
    Cc: Harish Patil
    Cc: Stephen Boyd
    Cc: Michael Reed
    Cc: Manish Chopra
    Cc: Len Brown
    Cc: Arnd Bergmann
    Cc: linux-pm@vger.kernel.org
    Cc: Heiko Carstens
    Cc: Tejun Heo
    Cc: Julian Wiedmann
    Cc: John Stultz
    Cc: Mark Gross
    Cc: linux-watchdog@vger.kernel.org
    Cc: linux-scsi@vger.kernel.org
    Cc: "Martin K. Petersen"
    Cc: Greg Kroah-Hartman
    Cc: "Rafael J. Wysocki"
    Cc: Oleg Nesterov
    Cc: Ralf Baechle
    Cc: Stefan Richter
    Cc: Guenter Roeck
    Cc: netdev@vger.kernel.org
    Cc: Martin Schwidefsky
    Cc: Andrew Morton
    Cc: linuxppc-dev@lists.ozlabs.org
    Cc: Sudip Mukherjee
    Link: https://lkml.kernel.org/r/1507159627-127660-11-git-send-email-keescook@chromium.org
    Signed-off-by: Thomas Gleixner

    Kees Cook
     

16 Oct, 2016

1 commit

  • Pull gcc plugins update from Kees Cook:
    "This adds a new gcc plugin named "latent_entropy". It is designed to
    extract as much uncertainty as possible from a running system at boot
    time, hoping to capitalize on any possible variation in
    CPU operation (due to runtime data differences, hardware differences,
    SMP ordering, thermal timing variation, cache behavior, etc).

    At the very least, this plugin is a much more comprehensive example
    for how to manipulate kernel code using the gcc plugin internals"

    * tag 'gcc-plugins-v4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
    latent_entropy: Mark functions with __latent_entropy
    gcc-plugins: Add latent_entropy plugin

    Linus Torvalds
     

11 Oct, 2016

1 commit

  • The __latent_entropy gcc attribute can be used only on functions and
    variables. If it is on a function then the plugin will instrument it for
    gathering control-flow entropy. If the attribute is on a variable then
    the plugin will initialize it with random contents. The variable must
    be an integer, an integer array type or a structure with integer fields.

    These specific functions have been selected because they are init
    functions (to help gather boot-time entropy), are called at unpredictable
    times, or they have variable loops, each of which provide some level of
    latent entropy.

    Signed-off-by: Emese Revfy
    [kees: expanded commit message]
    Signed-off-by: Kees Cook

    Emese Revfy
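
    Hedged examples of how the attribute is applied, per the description
    above (the identifiers are made up for illustration):

    /* On an integer variable or array: the plugin initializes it with
     * random contents. */
    static unsigned long example_pool[4] __latent_entropy;

    /* On a function (typically an init or otherwise unpredictably timed
     * one): the plugin instruments it to gather control-flow entropy. */
    static int __init __latent_entropy example_init(void)
    {
            return 0;
    }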
     

28 Sep, 2016

1 commit

  • put_cpu_var takes the percpu data, not the data returned from
    get_cpu_var.

    This doesn't change the behavior.

    Cc: Tejun Heo
    Signed-off-by: Shaohua Li
    Acked-by: Tejun Heo
    Signed-off-by: David S. Miller

    Shaohua Li
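
    Roughly what the fixed pattern looks like in prandom_u32(): the per-cpu
    variable itself goes to put_cpu_var(), not the pointer returned by
    get_cpu_var() (a sketch, not a quote of the patch):

    static DEFINE_PER_CPU(struct rnd_state, net_rand_state);

    u32 prandom_u32(void)
    {
            struct rnd_state *state = &get_cpu_var(net_rand_state);
            u32 res = prandom_u32_state(state);

            put_cpu_var(net_rand_state);    /* the variable, not 'state' */
            return res;
    }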
     

07 Jul, 2016

1 commit

  • We now have implicit batching in the timer wheel. The slack API is no longer
    used, so remove it.

    Signed-off-by: Thomas Gleixner
    Cc: Alan Stern
    Cc: Andrew F. Davis
    Cc: Arjan van de Ven
    Cc: Chris Mason
    Cc: David S. Miller
    Cc: David Woodhouse
    Cc: Dmitry Eremin-Solenikov
    Cc: Eric Dumazet
    Cc: Frederic Weisbecker
    Cc: George Spelvin
    Cc: Greg Kroah-Hartman
    Cc: Jaehoon Chung
    Cc: Jens Axboe
    Cc: John Stultz
    Cc: Josh Triplett
    Cc: Len Brown
    Cc: Linus Torvalds
    Cc: Mathias Nyman
    Cc: Pali Rohár
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Rik van Riel
    Cc: Sebastian Reichel
    Cc: Ulf Hansson
    Cc: linux-block@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: linux-mmc@vger.kernel.org
    Cc: linux-pm@vger.kernel.org
    Cc: linux-usb@vger.kernel.org
    Cc: netdev@vger.kernel.org
    Cc: rt@linutronix.de
    Link: http://lkml.kernel.org/r/20160704094342.189813118@linutronix.de
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

29 Feb, 2016

1 commit

  • Can be used to randomly match packets, e.g. for statistical traffic sampling.

    See commit 3ad0040573b0c00f8848
    ("bpf: split state from prandom_u32() and consolidate {c, e}BPF prngs")
    for more info on why this doesn't use prandom_u32 directly.

    Unlike bpf nft_meta can be built as a module, so add an EXPORT_SYMBOL
    for prandom_seed_full_state too.

    Cc: Daniel Borkmann
    Signed-off-by: Florian Westphal
    Signed-off-by: Pablo Neira Ayuso

    Florian Westphal
     

08 Oct, 2015

2 commits

  • Add a prandom_init_once() facility that works on the rnd_state, so that
    users that are keeping their own state independent from prandom_u32() can
    initialize their taus113 per cpu states.

    The motivation here is similar to net_get_random_once(): initialize the
    state as late as possible in the hope that enough entropy has been
    collected for the seeding. prandom_init_once() makes use of the recently
    introduced prandom_seed_full_state() helper and is generic enough so that
    it could also be used on fast-paths due to the DO_ONCE().

    Signed-off-by: Daniel Borkmann
    Acked-by: Hannes Frederic Sowa
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
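
    A usage sketch for the above; the caller and its per-cpu state are
    hypothetical:

    static DEFINE_PER_CPU(struct rnd_state, frob_rnd_state);

    static u32 frob_random(void)
    {
            struct rnd_state *state;
            u32 res;

            /* Seeds the whole per-cpu state exactly once, as late as
             * possible, via prandom_seed_full_state() under DO_ONCE(). */
            prandom_init_once(&frob_rnd_state);

            state = &get_cpu_var(frob_rnd_state);
            res = prandom_u32_state(state);
            put_cpu_var(frob_rnd_state);
            return res;
    }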
     
  • Factor out the full reseed handling code that populates the state
    through get_random_bytes() and runs prandom_warmup(). The resulting
    prandom_seed_full_state() will be used later on in more than the
    current __prandom_reseed() user. Fix also two minor whitespace
    issues along the way.

    Signed-off-by: Daniel Borkmann
    Acked-by: Hannes Frederic Sowa
    Acked-by: Alexei Starovoitov
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

25 Aug, 2014

1 commit

  • This patch addresses a couple of minor items, mostly concerning
    prandom_bytes(): 1) prandom_bytes{,_state}() should use size_t
    for length arguments, 2) We can use put_unaligned() when filling
    the array instead of open coding it [ perhaps some archs will
    further benefit from their own arch specific implementation when
    GCC cannot make up for it ], 3) Fix a typo, 4) Better use unsigned
    int as type for getting the arch seed, 5) Make use of
    prandom_u32_max() for timer slack.

    Regarding the change to put_unaligned(), callers of prandom_bytes()
    which internally invoke prandom_bytes_state(), don't bother as
    they expect the array to be filled randomly and don't have any
    control of the internal state whatsoever (that's also why we
    have periodic reseeding there, etc), so they really don't care.

    Now for the direct callers of prandom_bytes_state(), which
    are solely located in test cases for MTD devices, that is,
    drivers/mtd/tests/{oobtest.c,pagetest.c,subpagetest.c}:

    These tests basically fill a test write-vector through
    prandom_bytes_state() with an a-priori defined seed each time
    and write that to an MTD device. Later on, they set up a read-vector
    and read those blocks back from the device. So in the verification
    phase, the write-vector is being re-setup [ so same seed and
    prandom_bytes_state() called ], and then memcmp()'ed against the
    read-vector to check if the data is the same.

    Akinobu, Lothar and I also tested this patch and it runs through
    the 3 relevant MTD test cases w/o any errors on the nandsim device
    (simulator for MTD devs) for x86_64, ppc64, ARM (i.MX28, i.MX53
    and i.MX6):

    # modprobe nandsim first_id_byte=0x20 second_id_byte=0xac \
    third_id_byte=0x00 fourth_id_byte=0x15
    # modprobe mtd_oobtest dev=0
    # modprobe mtd_pagetest dev=0
    # modprobe mtd_subpagetest dev=0

    We also don't have any users depending directly on a particular
    result of the PRNG (except the PRNG self-test itself), and that's
    just fine as it e.g. allowed us easily to do things like upgrading
    from taus88 to taus113.

    Signed-off-by: Daniel Borkmann
    Tested-by: Akinobu Mita
    Tested-by: Lothar Waßmann
    Cc: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Daniel Borkmann
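
    A sketch of the filling loop after these changes, assuming the usual
    put_unaligned() helper from asm/unaligned.h; details of the real
    function may differ:

    void prandom_bytes_state(struct rnd_state *state, void *buf, size_t bytes)
    {
            u8 *ptr = buf;

            /* Whole 32-bit words: put_unaligned() copes with alignment. */
            while (bytes >= sizeof(u32)) {
                    put_unaligned(prandom_u32_state(state), (u32 *)ptr);
                    ptr += sizeof(u32);
                    bytes -= sizeof(u32);
            }

            /* Any trailing bytes. */
            if (bytes > 0) {
                    u32 rem = prandom_u32_state(state);

                    do {
                            *ptr++ = (u8)rem;
                            rem >>= BITS_PER_BYTE;
                    } while (--bytes > 0);
            }
    }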
     

31 Jul, 2014

1 commit

  • Currently, we have a 3-stage seeding process in prandom():

    Phase 1 is from the early actual initialization of the prandom()
    subsystem, which happens during core_initcall() and remains
    most likely until the beginning of late_initcall() phase.
    Here, the system might not have enough entropy available
    for seeding with strong randomness from the random driver.
    That means, we currently have a 32bit weak LCG() seeding
    the PRNG status register 1 and mixing that successively
    into the other 3 registers just to get it up and running.

    Phase 2 starts with the late_initcall() phase, i.e. when the
    random driver has initialized its non-blocking pool with
    enough entropy. At that time, we throw away *all* inner
    state from its 4 registers and do a full reseed with strong
    randomness.

    Phase 3 starts right after that and does a periodic reseed
    with random slack of status register 1 by a strong random
    source again.

    A problem in phase 1 is that during bootup data structures
    can be initialized, e.g. at module load time, and thus access
    a weakly seeded prandom and are never changed for the rest
    of their lifetime, thus carrying along the results from a
    weak seed. Let's make sure that current but also future users
    access a possibly better early seeded prandom.

    This patch therefore improves phase 1 by trying to make it
    more 'unpredictable' through mixing in seed from a possible
    hardware source. Now, the mix-in xors inner state with the
    outcome of either of the two functions arch_get_random_{,seed}_int(),
    preferably arch_get_random_seed_int() as it likely represents
    a non-deterministic random bit generator in hw rather than
    a cryptographically secure PRNG in hw. However, not all might
    have the first one, so we use the PRNG as a fallback if
    available. As we xor the seed into the current state, the
    worst case would be that a hardware source could be unverifiably
    compromised or backdoored. In that case it would nevertheless
    be as good as our original early seeding function
    prandom_seed_very_weak() since we mix through xor which is
    entropy preserving.

    Joint work with Daniel Borkmann.

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Hannes Frederic Sowa
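
    A sketch of the phase 1 mix-in described above: prefer
    arch_get_random_seed_int(), fall back to arch_get_random_int(), and xor
    the result into each register before the usual LCG-based spreading (the
    exact wiring is an assumption, not a quote of the patch):

    static u32 __extract_hwseed(void)
    {
            unsigned int val = 0;

            (void)(arch_get_random_seed_int(&val) ||
                   arch_get_random_int(&val));
            return val;
    }

    static void prandom_seed_early(struct rnd_state *state, u32 seed,
                                   bool mix_with_hwseed)
    {
    #define LCG(x)   ((x) * 69069U)
    #define HWSEED() (mix_with_hwseed ? __extract_hwseed() : 0)
            state->s1 = __seed(HWSEED() ^ LCG(seed),        2U);
            state->s2 = __seed(HWSEED() ^ LCG(state->s1),   8U);
            state->s3 = __seed(HWSEED() ^ LCG(state->s2),  16U);
            state->s4 = __seed(HWSEED() ^ LCG(state->s3), 128U);
    #undef HWSEED
    #undef LCG
    }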
     

04 Apr, 2014

1 commit

  • These are just some very minor and misc cleanups in the PRNG. In
    prandom_u32() we store the result in an unsigned long, which is
    unnecessary since it should be the u32 we get from
    prandom_u32_state(). prandom_bytes_state()'s comment is converted to
    kdoc format, as is done everywhere else. Also, use the normal comment
    style for the header comment. Last but not least, for readability,
    add some newlines.

    Signed-off-by: Daniel Borkmann
    Cc: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Borkmann
     

29 Mar, 2014

1 commit

  • Commit 4af712e8df ("random32: add prandom_reseed_late() and call when
    nonblocking pool becomes initialized") has added a late reseed stage
    that happens as soon as the nonblocking pool is marked as initialized.

    This fails in the case that the nonblocking pool gets initialized
    during __prandom_reseed()'s call to get_random_bytes(). In that case
    we'd double back into __prandom_reseed() in an attempt to do a late
    reseed - deadlocking on 'lock' early on in the boot process.

    Instead, just avoid even waiting to do a reseed if a reseed is already
    occurring.

    Fixes: 4af712e8df99 ("random32: add prandom_reseed_late() and call when nonblocking pool becomes initialized")
    Signed-off-by: Sasha Levin
    Acked-by: Hannes Frederic Sowa
    Signed-off-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Sasha Levin
     

15 Nov, 2013

2 commits

  • Use msecs_to_jiffies for these calculations, as it takes the different
    HZ considerations into account when converting the timer shot, and it
    also makes the code more readable.

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • We only call that in functions annotated with __init, so add the __init
    prefix to prandom_start_seed_timer() as well, so that the kernel can
    make use of this hint and we can possibly free up resources after its
    use. And since it's an internal function, rename it to
    __prandom_start_seed_timer().

    Signed-off-by: Daniel Borkmann
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Daniel Borkmann
     

12 Nov, 2013

5 commits

  • We generated a battery of 100 test cases from the GSL taus113
    implementation and compared the results from a particular seed and a
    particular iteration with our implementation in the kernel. We have
    verified on 32 and 64 bit machines that our taus113 kernel
    implementation gives the same results as the GSL taus113 implementation:

    [ 0.147370] prandom: seed boundary self test passed
    [ 0.148078] prandom: 100 self tests passed

    This is a Kconfig option that is disabled by default, just like the
    crc32 init selftests, in order not to unnecessarily slow down the boot
    process. We also refactored out prandom_seed_very_weak() as it's now
    used in multiple places, in order to reduce redundant code.

    GSL code we used for generating test cases:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <gsl/gsl_rng.h>

    int main(void)
    {
            int i, j;

            srand(time(NULL));
            for (i = 0; i < 100; ++i) {
                    int iteration = 500 + (rand() % 500);

                    gsl_rng_default_seed = rand() + 1;
                    gsl_rng *r = gsl_rng_alloc(gsl_rng_taus113);
                    printf("\t{ %lu, ", gsl_rng_default_seed);
                    for (j = 0; j < iteration - 1; ++j)
                            gsl_rng_get(r);
                    printf("%u, %lu },\n", iteration, gsl_rng_get(r));
                    gsl_rng_free(r);
            }
            return 0;
    }

    Joint work with Hannes Frederic Sowa.

    Cc: Florian Weimer
    Cc: Theodore Ts'o
    Signed-off-by: Daniel Borkmann
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Daniel Borkmann
     
  • Since we use prandom*() functions quite often in networking code
    i.e. in UDP port selection, netfilter code, etc, upgrade the PRNG
    from Pierre L'Ecuyer's original paper "Maximally Equidistributed
    Combined Tausworthe Generators", Mathematics of Computation, 65,
    213 (1996), 203--213 to the version published in his errata paper [1].

    The Tausworthe generator is a maximally-equidistributed generator,
    that is fast and has good statistical properties [1].

    The version presented there upgrades the 3 state LFSR to a 4 state
    LFSR with increased periodicity from about 2^88 to 2^113. The
    algorithm is presented in [1] by the very same author who also
    designed the original algorithm in [2].

    Also, by increasing the state, we make it a bit harder for attackers
    to "guess" the PRNGs internal state. See also discussion in [3].

    Now, as we use this sort of weak initialization discussed in [3]
    only between core_initcall() and late_initcall() time [*] for
    prandom32*() users, namely in prandom_init(), it is less relevant
    from late_initcall() onwards as we overwrite seeds through
    prandom_reseed() anyway with a seed source of higher entropy, that
    is, get_random_bytes(). In other words, an exhaustive key search of
    96 bits would be needed. Now, with the help of this patch, this
    state search increases further to 128 bits. Initialization needs
    to make sure that s1 > 1, s2 > 7, s3 > 15, s4 > 127.

    taus88 and taus113 algorithm is also part of GSL. I added a test
    case in the next patch to verify internal behaviour of this patch
    with GSL and ran tests with the dieharder 3.31.1 RNG test suite:

    $ dieharder -g 052 -a -m 10 -s 1 -S 4137730333 #taus88
    $ dieharder -g 054 -a -m 10 -s 1 -S 4137730333 #taus113

    With this seed configuration, in order to compare both, we get
    the following differences:

    algorithm                  taus88      taus113
    rands/second [**]          1.61e+08    1.37e+08
    sts_serial(4, 1st run)     WEAK        PASSED
    sts_serial(9, 2nd run)     WEAK        PASSED
    rgb_lagged_sum(31)         WEAK        PASSED

    We took out diehard_sums test as according to the authors it is
    considered broken and unusable [4]. Despite that and the slight
    decrease in performance (which is acceptable), taus113 here passes
    all 113 tests (only rgb_minimum_distance_5 in WEAK, the rest PASSED).
    In general, taus/taus113 is considered "very good" by the authors
    of dieharder [5].

    The papers [1][2] state that a single warm-up step is sufficient by
    running quicktaus once on each state to ensure proper initialization
    of ~s_{0}:

    Our selection of (s) according to Table 1 of [1] row 1 holds the
    condition L - k
    Cc: Theodore Ts'o
    Signed-off-by: Daniel Borkmann
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Daniel Borkmann
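
    For reference, a sketch of the four-register taus113 step with the
    shift/mask parameters from the errata paper (treat the exact constants
    as assumptions to be checked against lib/random32.c):

    #define TAUSWORTHE(s, a, b, c, d) (((s & c) << d) ^ (((s << a) ^ s) >> b))

    static u32 taus113_next(struct rnd_state *state)
    {
            state->s1 = TAUSWORTHE(state->s1,  6U, 13U, 4294967294U, 18U);
            state->s2 = TAUSWORTHE(state->s2,  2U, 27U, 4294967288U,  2U);
            state->s3 = TAUSWORTHE(state->s3, 13U, 21U, 4294967280U,  7U);
            state->s4 = TAUSWORTHE(state->s4,  3U, 12U, 4294967168U, 13U);

            return state->s1 ^ state->s2 ^ state->s3 ^ state->s4;
    }

    In the kernel this logic lives in prandom_u32_state().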
     
  • The Tausworthe PRNG is initialized at late_initcall time. At that time the
    entropy pool serving get_random_bytes is not filled sufficiently. This
    patch adds an additional reseeding step as soon as the nonblocking pool
    gets marked as initialized.

    On some machines it might be possible that late_initcall gets called after
    the pool has been initialized. In this situation we won't reseed again.

    (A call to prandom_reseed_late blocks later invocations of early reseed
    attempts.)

    Joint work with Daniel Borkmann.

    Cc: Eric Dumazet
    Cc: Theodore Ts'o
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: Daniel Borkmann
    Acked-by: "Theodore Ts'o"
    Signed-off-by: David S. Miller

    Hannes Frederic Sowa
     
  • The current Tausworthe PRNG is never reseeded with truly random data after
    the first attempt in late_initcall. As this PRNG is used for some critical
    random data as e.g. UDP port randomization we should try better and reseed
    the PRNG once in a while with truly random data from get_random_bytes().

    When we reseed with prandom_seed we now also make sure to throw the first
    output away. This suffices for the reseeding procedure.

    The delay calculation is based on a proposal from Eric Dumazet.

    Joint work with Daniel Borkmann.

    Cc: Eric Dumazet
    Cc: Theodore Ts'o
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: Daniel Borkmann
    Signed-off-by: David S. Miller

    Hannes Frederic Sowa
     
  • For properly initialising the Tausworthe generator [1], we have
    a strict seeding requirement, that is, s1 > 1, s2 > 7, s3 > 15.

    Commit 697f8d0348 ("random32: seeding improvement") introduced
    a __seed() function that imposes boundary checks proposed by the
    errata paper [2] to properly ensure above conditions.

    However, we're off by one, as the function is implemented as:
    "return (x < m) ? x + m : x;", and called with __seed(X, 1),
    __seed(X, 7), __seed(X, 15). Thus, an unwanted seed of 1, 7, 15
    would be possible, whereas the lower boundary should actually
    be at least 2, 8, 16, just as GSL does. Fix this, as otherwise
    an initialization with an unwanted seed could have the effect
    that Tausworthe's PRNG properties cannot be ensured.

    Note that this PRNG is *not* used for cryptography in the kernel.

    [1] http://www.iro.umontreal.ca/~lecuyer/myftp/papers/tausme.ps
    [2] http://www.iro.umontreal.ca/~lecuyer/myftp/papers/tausme2.ps

    Joint work with Hannes Frederic Sowa.

    Fixes: 697f8d0348a6 ("random32: seeding improvement")
    Cc: Stephen Hemminger
    Cc: Florian Weimer
    Cc: Theodore Ts'o
    Signed-off-by: Daniel Borkmann
    Signed-off-by: Hannes Frederic Sowa
    Signed-off-by: David S. Miller

    Daniel Borkmann
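
    A sketch of the boundary issue: __seed() only bumps values strictly
    below the threshold, so the threshold passed in must be the smallest
    allowed value rather than the largest forbidden one (the seed-folding
    expression is illustrative only):

    static inline u32 __seed(u32 x, u32 m)
    {
            return (x < m) ? x + m : x;
    }

    static void example_seed_state(struct rnd_state *state, u64 seed)
    {
            u32 i = (u32)((seed >> 32) ^ (seed << 10) ^ seed);

            /* Off by one: __seed(i, 1) can still return the forbidden
             * value 1 (e.g. for i == 1); likewise for 7 and 15. */

            /* Fixed: pass 2, 8, 16 so that s1 > 1, s2 > 7, s3 > 15 hold. */
            state->s1 = __seed(i,  2U);
            state->s2 = __seed(i,  8U);
            state->s3 = __seed(i, 16U);
    }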
     

18 Dec, 2012

2 commits

  • Add functions to get the requested number of pseudo-random bytes.

    The difference from get_random_bytes() is that it generates pseudo-random
    numbers by prandom_u32(). It doesn't consume the entropy pool, and the
    sequence is reproducible if the same rnd_state is used. So it is suitable
    for generating random bytes for testing.

    Signed-off-by: Akinobu Mita
    Cc: "Theodore Ts'o"
    Cc: Artem Bityutskiy
    Cc: Adrian Hunter
    Cc: David Woodhouse
    Cc: Eilon Greenstein
    Cc: David Laight
    Cc: Michel Lespinasse
    Cc: Robert Love
    Cc: Valdis Kletnieks
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
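
    A hypothetical usage sketch of the reproducibility property: two states
    seeded identically produce identical byte streams, without draining the
    entropy pool:

    static void prandom_bytes_repro_demo(void)
    {
            struct rnd_state a, b;
            u8 buf1[64], buf2[64];

            prandom_seed_state(&a, 0x1234abcdULL);
            prandom_seed_state(&b, 0x1234abcdULL);

            prandom_bytes_state(&a, buf1, sizeof(buf1));
            prandom_bytes_state(&b, buf2, sizeof(buf2));

            /* Same seed, same algorithm => identical, reproducible bytes. */
            WARN_ON(memcmp(buf1, buf2, sizeof(buf1)) != 0);
    }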
     
  • This renames all random32 functions to have 'prandom_' prefix as follows:

    void prandom_seed(u32 seed); /* rename from srandom32() */
    u32 prandom_u32(void); /* rename from random32() */
    void prandom_seed_state(struct rnd_state *state, u64 seed);
    /* rename from prandom32_seed() */
    u32 prandom_u32_state(struct rnd_state *state);
    /* rename from prandom32() */

    The purpose of this renaming is to prevent kernel developers from
    assuming, because of prandom32's "p", that only prandom32() uses a
    pseudo-random number generator while random32() does not, a
    misunderstanding that could result in a very embarrassing security
    exposure. This concern was expressed by Theodore Ts'o.

    And furthermore, I'm going to introduce new functions for getting the
    requested number of pseudo-random bytes. If I continue to use both
    prandom32 and random32 prefixes for these functions, the confusion
    is getting worse.

    As a result of this renaming, "prandom_" is the common prefix for
    the pseudo-random number library.

    Currently, srandom32() and random32() are preserved because it is
    difficult to rename too many users at once.

    Signed-off-by: Akinobu Mita
    Cc: "Theodore Ts'o"
    Cc: Robert Love
    Cc: Michel Lespinasse
    Cc: Valdis Kletnieks
    Cc: David Laight
    Cc: Adrian Hunter
    Cc: Artem Bityutskiy
    Cc: David Woodhouse
    Cc: Eilon Greenstein
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

08 Mar, 2012

1 commit


17 Jun, 2010

2 commits


28 May, 2010

1 commit

  • This patch moves the definition of struct rnd_state and the inline
    __seed() function to linux/random.h. It renames the static __random32()
    function to prandom32() and exports it for use in modules.

    prandom32() is useful as a privately-seeded pseudo random number generator
    that can give the same result every time it is initialized.

    For FCoE FC-BB-6 VN2VN mode self-selected unique FC address generation, we
    need a pseudo-random number generator seeded with the 64-bit world-wide
    port name. A truly random generator or one seeded with randomness won't
    do because the same sequence of numbers should be generated each time we
    boot or the link comes up.

    A prandom32_seed() inline function is added to the header file. It is
    inlined not for speed, but so the function won't be expanded in the base
    kernel, but only in the module that uses it.

    Signed-off-by: Joe Eykholt
    Acked-by: Matt Mackall
    Cc: Theodore Ts'o
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Eykholt
     

31 Jul, 2008

1 commit

  • The rationale is:
    * use u32 consistently
    * no need to do LCG on values from (better) get_random_bytes
    * use more data from get_random_bytes for secondary seeding
    * don't reduce state space on srandom32()
    * enforce state variable initialization restrictions

    Note: the second paper has a version of random32() with even longer period
    and a version of random64() if needed.

    Signed-off-by: Stephen Hemminger
    Signed-off-by: Andrew Morton

    Stephen Hemminger
     

04 Apr, 2008

1 commit

  • - Let it update the state of all CPUs. The network stack goes to
    great pains to feed the current IP addresses in, but it is not very
    effective if that is only done for some random CPU instead of all.
    So change it to feed bits into all CPUs. I decided to do that lockless
    because, well, somewhat random results are OK.

    v2: Drop rename so that this patch doesn't depend on x86 maintainers

    Signed-off-by: Andi Kleen
    Signed-off-by: David S. Miller

    Andi Kleen
     

04 Dec, 2006

1 commit


17 Oct, 2006

1 commit

  • Make net_random() more widely available by calling it random32().

    akpm: hopefully this will permit the removal of carta_random32. That needs
    confirmation from Stephane - this code looks somewhat more computationally
    expensive, and has a different (ie: callee-stateful) interface.

    [akpm@osdl.org: lots of build fixes, cleanups]
    Signed-off-by: Stephen Hemminger
    Signed-off-by: David S. Miller
    Cc: Stephane Eranian
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stephen Hemminger