30 Dec, 2017

1 commit

  • Pull x86 page table isolation updates from Thomas Gleixner:
    "This is the final set of enabling page table isolation on x86:

    - Infrastructure patches for handling the extra page tables.

    - Patches which map the various bits and pieces required to get in
    and out of user space into the user-space-visible page tables.

    - The required changes to have CR3 switching in the entry/exit code.

    - Optimizations for the CR3 switching, along with documentation of how
    the ASID/PCID mechanism works.

    - Updates to dump pagetables to cover the user space page tables for
    W+X scans, and extra debugfs files to analyze both the kernel and
    the user space visible page tables.

    The whole functionality is compile time controlled via a config switch
    and can be turned on/off on the command line as well"
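
    To make the CR3 switching concrete: with PTI the kernel and user page
    tables are allocated as adjacent 4K PGDs, so the entry code can derive
    one CR3 value from the other by flipping bits instead of loading a new
    pointer. The bit positions below match the upstream layout, but the
    helper is an illustrative sketch, not the kernel's actual code (which
    lives in assembly in arch/x86/entry/calling.h):

    #define PTI_USER_PGTABLE_BIT   12  /* user PGD sits at kernel PGD + 4K */
    #define PTI_USER_PCID_BIT      11  /* user ASIDs get their own PCID space */

    static inline unsigned long kernel_to_user_cr3(unsigned long cr3)
    {
            /* same PGD allocation, different page and different TLB tag */
            return cr3 | (1UL << PTI_USER_PGTABLE_BIT) |
                         (1UL << PTI_USER_PCID_BIT);
    }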

    * 'x86-pti-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (32 commits)
    x86/ldt: Make the LDT mapping RO
    x86/mm/dump_pagetables: Allow dumping current pagetables
    x86/mm/dump_pagetables: Check user space page table for WX pages
    x86/mm/dump_pagetables: Add page table directory to the debugfs VFS hierarchy
    x86/mm/pti: Add Kconfig
    x86/dumpstack: Indicate in Oops whether PTI is configured and enabled
    x86/mm: Clarify the whole ASID/kernel PCID/user PCID naming
    x86/mm: Use INVPCID for __native_flush_tlb_single()
    x86/mm: Optimize RESTORE_CR3
    x86/mm: Use/Fix PCID to optimize user/kernel switches
    x86/mm: Abstract switching CR3
    x86/mm: Allow flushing for future ASID switches
    x86/pti: Map the vsyscall page if needed
    x86/pti: Put the LDT in its own PGD if PTI is on
    x86/mm/64: Make a full PGD-entry size hole in the memory map
    x86/events/intel/ds: Map debug buffers in cpu_entry_area
    x86/cpu_entry_area: Add debugstore entries to cpu_entry_area
    x86/mm/pti: Map ESPFIX into user space
    x86/mm/pti: Share entry text PMD
    x86/entry: Align entry text section to PMD boundary
    ...

    Linus Torvalds
     

24 Dec, 2017

2 commits

  • Add the initial files for kernel page table isolation, with a minimal init
    function and the boot time detection for this misfeature.
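
    As a hedged sketch of what such a minimal init function with boot time
    detection can look like (cmdline_find_option_bool() and the
    X86_BUG_CPU_INSECURE bug flag are real interfaces of that era; the
    body below is illustrative, not the literal patch):

    void __init pti_init(void)
    {
            if (!boot_cpu_has_bug(X86_BUG_CPU_INSECURE))
                    return;         /* CPU not affected, nothing to isolate */

            if (cmdline_find_option_bool(boot_command_line, "nopti")) {
                    pr_info("pti: disabled on command line\n");
                    return;
            }

            pr_info("pti: enabled\n");
            /* ... populate the user-space visible page tables ... */
    }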

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Borislav Petkov
    Cc: Andy Lutomirski
    Cc: Boris Ostrovsky
    Cc: Borislav Petkov
    Cc: Brian Gerst
    Cc: Dave Hansen
    Cc: David Laight
    Cc: Denys Vlasenko
    Cc: Eduardo Valentin
    Cc: Greg KH
    Cc: H. Peter Anvin
    Cc: Josh Poimboeuf
    Cc: Juergen Gross
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Will Deacon
    Cc: aliguori@amazon.com
    Cc: daniel.gruss@iaik.tugraz.at
    Cc: hughd@google.com
    Cc: keescook@google.com
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • Pull x86 PTI preparatory patches from Thomas Gleixner:
    "Todays Advent calendar window contains twentyfour easy to digest
    patches. The original plan was to have twenty three matching the date,
    but a late fixup made that moot.

    - Move the cpu_entry_area mapping out of the fixmap into a separate
    address space. That's necessary because the fixmap becomes too big
    with NR_CPUS=8192 and this already caused subtle and hard to
    diagnose failures.

    The topmost patch is fresh from today and cures a brain slip of
    that tall grumpy German greybeard, who ignored the intricacies of
    32bit wraparounds.

    - Limit the number of CPUs on 32bit to 64. That's insanely big already,
    but at least it's small enough to prevent address space issues with
    the cpu_entry_area map, which have been observed and debugged with
    the fixmap code.

    - A few TLB flush fixes in various places plus documentation which of
    the TLB functions should be used for what.

    - Rename the SYSENTER stack to CPU_ENTRY_AREA stack as it is used for
    more than sysenter now and keeping the name makes backtraces
    confusing.

    - Prevent LDT inheritance on exec() by moving it to arch_dup_mmap(),
    which is only invoked on fork().

    - Make vsyscall more robust.

    - A few fixes and cleanups of the debug_pagetables code: check
    PAGE_PRESENT instead of checking the PTE for 0, and clean up the C89
    initialization of the address hint array, which was already out of
    sync with the index enums.

    - Move the ESPFIX init to a different place to prepare for PTI.

    - Several code moves with no functional change to make PTI
    integration simpler and header files less convoluted.

    - Documentation fixes and clarifications"

    * 'x86-pti-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
    x86/cpu_entry_area: Prevent wraparound in setup_cpu_entry_area_ptes() on 32bit
    init: Invoke init_espfix_bsp() from mm_init()
    x86/cpu_entry_area: Move it out of the fixmap
    x86/cpu_entry_area: Move it to a separate unit
    x86/mm: Create asm/invpcid.h
    x86/mm: Put MMU to hardware ASID translation in one place
    x86/mm: Remove hard-coded ASID limit checks
    x86/mm: Move the CR3 construction functions to tlbflush.h
    x86/mm: Add comments to clarify which TLB-flush functions are supposed to flush what
    x86/mm: Remove superfluous barriers
    x86/mm: Use __flush_tlb_one() for kernel memory
    x86/microcode: Dont abuse the TLB-flush interface
    x86/uv: Use the right TLB-flush API
    x86/entry: Rename SYSENTER_stack to CPU_ENTRY_AREA_entry_stack
    x86/doc: Remove obvious weirdnesses from the x86 MM layout documentation
    x86/mm/64: Improve the memory map documentation
    x86/ldt: Prevent LDT inheritance on exec
    x86/ldt: Rework locking
    arch, mm: Allow arch_dup_mmap() to fail
    x86/vsyscall/64: Warn and fail vsyscall emulation in NATIVE mode
    ...

    Linus Torvalds
     

23 Dec, 2017

1 commit

  • init_espfix_bsp() needs to be invoked before the page table isolation
    initialization. Move it into mm_init() which is the place where pti_init()
    will be added.

    While at it, get rid of the #ifdeffery and provide proper stub functions.
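
    The stub pattern referred to is the usual kernel idiom, sketched here
    (CONFIG_X86_ESPFIX64 is the real config symbol; the header placement is
    abbreviated):

    #ifdef CONFIG_X86_ESPFIX64
    extern void init_espfix_bsp(void);
    #else
    /* stub lets mm_init() call this unconditionally, no #ifdef at the call site */
    static inline void init_espfix_bsp(void) { }
    #endif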

    Signed-off-by: Thomas Gleixner
    Cc: Andy Lutomirski
    Cc: Borislav Petkov
    Cc: Dave Hansen
    Cc: H. Peter Anvin
    Cc: Josh Poimboeuf
    Cc: Juergen Gross
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

18 Nov, 2017

2 commits

  • pidhash is no longer required as all the information can be looked up
    from the IDR tree. nr_hashed represented the number of pids that had been
    hashed. Since nr_hashed and PIDNS_HASH_ADDING are no longer relevant,
    they have been renamed to pid_allocated and PIDNS_ADDING respectively.

    [gs051095@gmail.com: v6]
    Link: http://lkml.kernel.org/r/1507760379-21662-3-git-send-email-gs051095@gmail.com
    Link: http://lkml.kernel.org/r/1507583624-22146-3-git-send-email-gs051095@gmail.com
    Signed-off-by: Gargi Sharma
    Reviewed-by: Rik van Riel
    Tested-by: Tony Luck [ia64]
    Cc: Julia Lawall
    Cc: Ingo Molnar
    Cc: Pavel Tatashin
    Cc: Kirill Tkhai
    Cc: Oleg Nesterov
    Cc: Eric W. Biederman
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gargi Sharma
     
  • Patch series "Replacing PID bitmap implementation with IDR API", v4.

    This series replaces the kernel's bitmap implementation of PID allocation
    with the IDR API. These patches are written to simplify the kernel by
    replacing custom code with calls to generic code.

    The following are the stats for the pid and pid_namespace object files
    before and after the replacement. There is a noteworthy change between
    the IDR and bitmap implementations.

    Before:
    text    data   bss   dec     hex    filename
    8447    3894   64    12405   3075   kernel/pid.o

    After:
    text    data   bss   dec     hex    filename
    3397    304    0     3701    e75    kernel/pid.o

    Before:
    text    data   bss   dec     hex    filename
    5692    1842   192   7726    1e2e   kernel/pid_namespace.o

    After:
    text    data   bss   dec     hex    filename
    2854    216    16    3086    c0e    kernel/pid_namespace.o

    The following are the stats for ps, pstree and calling readdir on /proc
    for 10,000 processes.

    ps:
             With IDR API   With bitmap
    real     0m1.479s       0m2.319s
    user     0m0.070s       0m0.060s
    sys      0m0.289s       0m0.516s

    pstree:
             With IDR API   With bitmap
    real     0m1.024s       0m1.794s
    user     0m0.348s       0m0.612s
    sys      0m0.184s       0m0.264s

    proc:
             With IDR API   With bitmap
    real     0m0.059s       0m0.074s
    user     0m0.000s       0m0.004s
    sys      0m0.016s       0m0.016s

    This patch (of 2):

    Replace the current bitmap implementation for Process ID allocation.
    Functions that are no longer required, for example, free_pidmap(),
    alloc_pidmap(), etc. are removed. The rest of the functions are
    modified to use the IDR API. The change was made to make PID
    allocation less complex by replacing custom code with calls to a
    generic API.
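
    As a hedged sketch of the IDR-based allocation (idr_preload() and
    idr_alloc_cyclic() are the real API; the wrapper function and its
    parameters are illustrative, not the kernel's exact code):

    #include <linux/idr.h>

    static DEFINE_IDR(pid_idr);

    /* cyclic allocation keeps the old bitmap behaviour of handing out
     * increasing PID numbers before wrapping around */
    static int toy_alloc_pid_nr(void *pid, int last_pid, int pid_max)
    {
            int nr;

            idr_preload(GFP_KERNEL);
            nr = idr_alloc_cyclic(&pid_idr, pid, last_pid + 1,
                                  pid_max, GFP_ATOMIC);
            idr_preload_end();
            return nr;      /* the new PID, or a negative errno */
    }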

    [gs051095@gmail.com: v6]
    Link: http://lkml.kernel.org/r/1507760379-21662-2-git-send-email-gs051095@gmail.com
    [avagin@openvz.org: restore the old behaviour of the ns_last_pid sysctl]
    Link: http://lkml.kernel.org/r/20171106183144.16368-1-avagin@openvz.org
    Link: http://lkml.kernel.org/r/1507583624-22146-2-git-send-email-gs051095@gmail.com
    Signed-off-by: Gargi Sharma
    Reviewed-by: Rik van Riel
    Acked-by: Oleg Nesterov
    Cc: Julia Lawall
    Cc: Ingo Molnar
    Cc: Pavel Tatashin
    Cc: Kirill Tkhai
    Cc: Eric W. Biederman
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gargi Sharma
     

16 Nov, 2017

1 commit

  • Patch series "kmemcheck: kill kmemcheck", v2.

    As discussed at LSF/MM, kill kmemcheck.

    KASan is a replacement that is able to work without kmemcheck's
    limitations (single CPU, slow). KASan is already upstream.

    We are also not aware of any users of kmemcheck (or users who don't
    consider KASan as a suitable replacement).

    The only objection was that since KASAN wasn't supported by all GCC
    versions provided by distros at that time we should hold off for 2
    years, and try again.

    Now that 2 years have passed, and all distros provide gcc that supports
    KASAN, kill kmemcheck again for the very same reasons.

    This patch (of 4):

    Remove kmemcheck annotations, and calls to kmemcheck from the kernel.

    [alexander.levin@verizon.com: correctly remove kmemcheck call from dma_map_sg_attrs]
    Link: http://lkml.kernel.org/r/20171012192151.26531-1-alexander.levin@verizon.com
    Link: http://lkml.kernel.org/r/20171007030159.22241-2-alexander.levin@verizon.com
    Signed-off-by: Sasha Levin
    Cc: Alexander Potapenko
    Cc: Eric W. Biederman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Steven Rostedt
    Cc: Tim Hansen
    Cc: Vegard Nossum
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Levin, Alexander (Sasha Levin)
     

14 Nov, 2017

1 commit

  • Pull x86 APIC updates from Thomas Gleixner:
    "This update provides a major overhaul of the APIC initialization and
    vector allocation code:

    - Unification of the APIC and interrupt mode setup, which was
    scattered all over the place and was hard to follow. This also
    disentangles the timer setup from the APIC initialization, which
    brings a clear separation of functionality.

    Great detective work from Dou Liyang!

    - Refactoring of the x86 vector allocation mechanism. The existing
    code was based on nested loops and rather convoluted APIC callbacks
    which had a horrible worst case behaviour and tried to serve all
    different use cases in one go. This led to quite odd hacks when
    supporting the new managed interrupt facility for multiqueue devices
    and made it more or less impossible to deal with the vector space
    exhaustion which was a major roadblock for server hibernation.

    Aside from that, the code dealing with cpu hotplug and the system
    vectors was disconnected from the actual vector management and
    allocation code, which made it hard to follow and maintain.

    Utilizing the new bitmap matrix allocator core mechanism, the new
    allocator and management code consolidates the handling of system
    vectors, legacy vectors, cpu hotplug mechanisms and the actual
    allocation which needs to be aware of system and legacy vectors and
    hotplug constraints into a single consistent entity.

    This has one visible change: The support for multi CPU targets of
    interrupts, which is only available on a certain subset of
    CPUs/APIC variants, has been removed in favour of single interrupt
    targets. A proper analysis of the multi CPU target feature revealed
    that there is no real advantage as the vast majority of interrupts
    end up on the CPU with the lowest APIC id in the set of target CPUs
    anyway. That change was agreed on by the relevant folks and allowed
    us to simplify the implementation significantly and to replace rather
    fragile constructs like the vector cleanup IPI with straightforward
    and solid code.

    Furthermore this allowed the allocation details for legacy, normal
    and managed interrupts to be cleanly separated:

    * Legacy interrupts no longer waste 16 vectors
    unconditionally

    * Managed interrupts now have a guaranteed vector reservation, but
    the actual vector assignment happens when the interrupt is
    requested. It's guaranteed not to fail.

    * Normal interrupts no longer allocate vectors unconditionally
    when the interrupt is set up (IO/APIC init or MSI(X) enable).
    The mechanism has been switched to a best effort reservation
    mode. The actual allocation happens when the interrupt is
    requested. Contrary to managed interrupts the request can fail
    due to vector space exhaustion, but drivers must handle a failure
    of request_irq() anyway. When the interrupt is freed, the vector
    is handed back as well.

    This solves a long standing problem with large unconditional
    vector allocations for a certain class of enterprise devices
    which prevented server hibernation due to vector space
    exhaustion when the unused allocated vectors had to be migrated
    to CPU0 while unplugging all non boot CPUs.

    The code has been equipped with trace points and detailed debugfs
    information to aid analysis of the vector space"
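
    The driver-visible contract is unchanged by reservation mode:
    request_irq() has always been allowed to fail, it can now simply fail
    for vector exhaustion as well. A sketch of the pattern drivers must
    already follow (all names here are illustrative):

    #include <linux/interrupt.h>

    static irqreturn_t example_handler(int irq, void *data)
    {
            return IRQ_HANDLED;
    }

    static int example_setup_irq(int irq, void *data)
    {
            /* the vector is assigned here, not at MSI(-X) enable time;
             * an error return can now also mean vector space exhaustion */
            return request_irq(irq, example_handler, 0, "example", data);
    }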

    * 'x86-apic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (60 commits)
    x86/vector/msi: Select CONFIG_GENERIC_IRQ_RESERVATION_MODE
    PCI/MSI: Set MSI_FLAG_MUST_REACTIVATE in core code
    genirq: Add config option for reservation mode
    x86/vector: Use correct per cpu variable in free_moved_vector()
    x86/apic/vector: Ignore set_affinity call for inactive interrupts
    x86/apic: Fix spelling mistake: "symmectic" -> "symmetric"
    x86/apic: Use dead_cpu instead of current CPU when cleaning up
    ACPI/init: Invoke early ACPI initialization earlier
    x86/vector: Respect affinity mask in irq descriptor
    x86/irq: Simplify hotplug vector accounting
    x86/vector: Switch IOAPIC to global reservation mode
    x86/vector/msi: Switch to global reservation mode
    x86/vector: Handle managed interrupts proper
    x86/io_apic: Reevaluate vector configuration on activate()
    iommu/amd: Reevaluate vector configuration on activate()
    iommu/vt-d: Reevaluate vector configuration on activate()
    x86/apic/msi: Force reactivation of interrupts at startup time
    x86/vector: Untangle internal state from irq_cfg
    x86/vector: Compile SMP only code conditionally
    x86/apic: Remove unused callbacks
    ...

    Linus Torvalds
     

27 Oct, 2017

1 commit

  • The housekeeping code is currently tied to the NOHZ code. As we are
    planning to make housekeeping independent from it, start with moving
    the relevant code to its own file.

    Signed-off-by: Frederic Weisbecker
    Acked-by: Thomas Gleixner
    Acked-by: Paul E. McKenney
    Cc: Chris Metcalf
    Cc: Christoph Lameter
    Cc: Linus Torvalds
    Cc: Luiz Capitulino
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Rik van Riel
    Cc: Wanpeng Li
    Link: http://lkml.kernel.org/r/1509072159-31808-2-git-send-email-frederic@kernel.org
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

27 Sep, 2017

1 commit

  • acpi_early_init() unmaps the temporary ACPI Table mappings which are used
    in the early startup code and prepares for permanent table mappings.

    Before the consolidation of the x86 APIC setup code the invocation of
    acpi_early_init() happened before the interrupt remapping unit was
    initialized. With the rework the remapping unit initialization moved in
    front of acpi_early_init() which causes an ACPI warning when the ACPI root
    tables get reallocated afterwards.

    Invoke acpi_early_init() before late_time_init() which is before the access
    to the DMAR tables happens.
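
    In terms of init/main.c ordering, the fix amounts to roughly the
    following (abbreviated sketch of start_kernel(), omitting everything
    in between):

    asmlinkage __visible void __init start_kernel(void)
    {
            /* ... */
            acpi_early_init();      /* moved up: runs before DMAR is touched */
            if (late_time_init)
                    late_time_init();
            /* ... */
    }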

    Fixes: 935356cecda8 ("x86/apic: Initialize interrupt mode after timer init")
    Reported-by: Xiaolong Ye
    Signed-off-by: Dou Liyang
    Cc: Tony Luck
    Cc: linux-ia64@vger.kernel.org
    Cc: bhe@redhat.com
    Cc: Fenghua Yu
    Cc: Michael Ellerman
    Cc: "Rafael J. Wysocki"
    Cc: Will Deacon
    Cc: linux-acpi@vger.kernel.org
    Cc: bp@alien8.de
    Cc: Lv"
    Cc: yinghai@kernel.org
    Cc: linux-arm-kernel@lists.infradead.org
    Link: https://lkml.kernel.org/r/1505294274-441-1-git-send-email-douly.fnst@cn.fujitsu.com
    Signed-off-by: Thomas Gleixner

    Dou Liyang
     

09 Sep, 2017

2 commits

  • Feed the boot command line to the /dev/random entropy pool.

    Existing Android bootloaders usually pass data on the kernel command
    line which may not be known to an external attacker. The same may be
    true on other embedded systems. Sample command line from a Google Pixel
    running CopperheadOS:

    console=ttyHSL0,115200,n8 androidboot.console=ttyHSL0
    androidboot.hardware=sailfish user_debug=31 ehci-hcd.park=3
    lpm_levels.sleep_disabled=1 cma=32M@0-0xffffffff buildvariant=user
    veritykeyid=id:dfcb9db0089e5b3b4090a592415c28e1cb4545ab
    androidboot.bootdevice=624000.ufshc androidboot.verifiedbootstate=yellow
    androidboot.veritymode=enforcing androidboot.keymaster=1
    androidboot.serialno=FA6CE0305299 androidboot.baseband=msm
    mdss_mdp.panel=1:dsi:0:qcom,mdss_dsi_samsung_ea8064tg_1080p_cmd:1:none:cfg:single_dsi
    androidboot.slot_suffix=_b fpsimd.fpsimd_settings=0
    app_setting.use_app_setting=0 kernelflag=0x00000000 debugflag=0x00000000
    androidboot.hardware.revision=PVT radioflag=0x00000000
    radioflagex1=0x00000000 radioflagex2=0x00000000 cpumask=0x00000000
    androidboot.hardware.ddr=4096MB,Hynix,LPDDR4 androidboot.ddrinfo=00000006
    androidboot.ddrsize=4GB androidboot.hardware.color=GRA00
    androidboot.hardware.ufs=32GB,Samsung androidboot.msm.hw_ver_id=268824801
    androidboot.qf.st=2 androidboot.cid=11111111 androidboot.mid=G-2PW4100
    androidboot.bootloader=8996-012001-1704121145
    androidboot.oem_unlock_support=1 androidboot.fp_src=1
    androidboot.htc.hrdump=detected androidboot.ramdump.opt=mem@2g:2g,mem@4g:2g
    androidboot.bootreason=reboot androidboot.ramdump_enable=0 ro
    root=/dev/dm-0 dm="system none ro,0 1 android-verity /dev/sda34"
    rootwait skip_initramfs init=/init androidboot.wificountrycode=US
    androidboot.boottime=1BLL:85,1BLE:669,2BLL:0,2BLE:1777,SW:6,KL:8136

    Among other things, it contains a value unique to the device
    (androidboot.serialno=FA6CE0305299), unique to the OS builds for the
    device variant (veritykeyid=id:dfcb9db0089e5b3b4090a592415c28e1cb4545ab)
    and timings from the bootloader stages in milliseconds
    (androidboot.boottime=1BLL:85,1BLE:669,2BLL:0,2BLE:1777,SW:6,KL:8136).
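
    The change itself is a one-liner in start_kernel();
    add_device_randomness() is the real interface (it mixes the bytes in
    without crediting any entropy), and the placement shown is abbreviated:

    /* mix the (partly device-unique) command line into the entropy pool */
    add_device_randomness(command_line, strlen(command_line));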

    [tytso@mit.edu: changelog tweak]
    [labbott@redhat.com: line-wrapped command line]
    Link: http://lkml.kernel.org/r/20170816231458.2299-3-labbott@redhat.com
    Signed-off-by: Daniel Micay
    Signed-off-by: Laura Abbott
    Acked-by: Kees Cook
    Cc: "Theodore Ts'o"
    Cc: Laura Abbott
    Cc: Nick Kralevich
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Micay
     
  • Patch series "Command line randomness", v3.

    A series to add the kernel command line as a source of randomness.

    This patch (of 2):

    Stack canary initialization involves getting a random number. Getting
    this random number may involve accessing caches or other
    architecture-specific features which are not available until after the
    architecture is set up. Move the stack canary initialization later to
    accommodate this.
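
    Sketched against start_kernel(), the reordering looks roughly like this
    (boot_init_stack_canary() and add_latent_entropy() are the real calls;
    the sequence is abbreviated):

    asmlinkage __visible void __init start_kernel(void)
    {
            /* ... */
            setup_arch(&command_line);
            /* canary init now runs after the arch is up, so the random
             * number generation may use architecture-specific sources */
            add_latent_entropy();
            boot_init_stack_canary();
            /* ... */
    }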

    Link: http://lkml.kernel.org/r/20170816231458.2299-2-labbott@redhat.com
    Signed-off-by: Laura Abbott
    Signed-off-by: Laura Abbott
    Acked-by: Kees Cook
    Cc: "Theodore Ts'o"
    Cc: Daniel Micay
    Cc: Nick Kralevich
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Laura Abbott
     

07 Sep, 2017

2 commits

  • Pull percpu updates from Tejun Heo:
    "A lot of changes for percpu this time around. percpu inherited the
    same area allocator from the original pre-virtual-address-mapped
    implementation. This was from the time when the percpu allocator wasn't
    used all that much and the implementation was focused on simplicity,
    with the unfortunate computational complexity of O(number of areas
    allocated from the chunk) per alloc / free.

    With the increase in percpu usage, we're hitting cases where the lack
    of scalability is hurting. The most prominent one right now is bpf
    percpu map creation / destruction, which may allocate and free a lot of
    entries consecutively, and it's likely that the problem will become
    more prominent in the future.

    To address the issue, Dennis replaced the area allocator with a hinted
    bitmap allocator which is more consistent. While the new allocator
    does perform a bit worse in some cases, it outperforms the old
    allocator by more than an order of magnitude in other more common
    scenarios, while staying mostly flat in CPU overhead and completely
    flat in memory consumption"

    * 'for-4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (27 commits)
    percpu: update header to contain bitmap allocator explanation.
    percpu: update pcpu_find_block_fit to use an iterator
    percpu: use metadata blocks to update the chunk contig hint
    percpu: update free path to take advantage of contig hints
    percpu: update alloc path to only scan if contig hints are broken
    percpu: keep track of the best offset for contig hints
    percpu: skip chunks if the alloc does not fit in the contig hint
    percpu: add first_bit to keep track of the first free in the bitmap
    percpu: introduce bitmap metadata blocks
    percpu: replace area map allocator with bitmap
    percpu: generalize bitmap (un)populated iterators
    percpu: increase minimum percpu allocation size and align first regions
    percpu: introduce nr_empty_pop_pages to help empty page accounting
    percpu: change the number of pages marked in the first_chunk pop bitmap
    percpu: combine percpu address checks
    percpu: modify base_addr to be region specific
    percpu: setup_first_chunk rename schunk/dchunk to chunk
    percpu: end chunk area maps page aligned for the populated bitmap
    percpu: unify allocation of schunk and dchunk
    percpu: setup_first_chunk remove dyn_size and consolidate logic
    ...

    Linus Torvalds
     
  • build_all_zonelists gets a zone parameter to initialize the zone's
    pagesets. There is only a single user which gives a non-NULL zone
    parameter, and that one doesn't really need the rest of
    build_all_zonelists (see commit 6dcd73d7011b ("memory-hotplug:
    allocate zone's pcp before onlining pages")).

    Therefore remove setup_zone_pageset from build_all_zonelists and call it
    from its only user directly. This also removes a pointless zonelist
    rebuild, which is always good.
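
    A sketch of the resulting call site in the memory-hotplug online path
    (abbreviated; setup_zone_pageset() and populated_zone() are the
    upstream names):

    /* in online_pages(): the one caller that needs fresh pagesets for a
     * newly populated zone now sets them up directly */
    if (!populated_zone(zone)) {
            need_zonelists_rebuild = 1;
            setup_zone_pageset(zone);
    }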

    Link: http://lkml.kernel.org/r/20170721143915.14161-5-mhocko@kernel.org
    Signed-off-by: Michal Hocko
    Acked-by: Vlastimil Babka
    Cc: Johannes Weiner
    Cc: Joonsoo Kim
    Cc: Mel Gorman
    Cc: Shaohua Li
    Cc: Toshi Kani
    Cc: Wen Congyang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

05 Sep, 2017

2 commits

  • Pull x86 mm changes from Ingo Molnar:
    "PCID support, 5-level paging support, Secure Memory Encryption support

    The main changes in this cycle are support for three new, complex
    hardware features of x86 CPUs:

    - Add 5-level paging support, which is a new hardware feature on
    upcoming Intel CPUs allowing up to 128 PB of virtual address space
    and 4 PB of physical RAM space - a 512-fold increase over the old
    limits. (Supercomputers of the future forecasting hurricanes on an
    ever warming planet can certainly make good use of more RAM.)

    Many of the necessary changes went upstream in previous cycles,
    v4.14 is the first kernel that can enable 5-level paging.

    This feature is activated via CONFIG_X86_5LEVEL=y - disabled by
    default.

    (By Kirill A. Shutemov)

    - Add 'encrypted memory' support, which is a new hardware feature on
    upcoming AMD CPUs ('Secure Memory Encryption', SME) allowing system
    RAM to be encrypted and decrypted (mostly) transparently by the
    CPU, with a little help from the kernel to transition to/from
    encrypted RAM. Such RAM should be more secure against various
    attacks like RAM access via the memory bus and should make the
    radio signature of memory bus traffic harder to intercept (and
    decrypt) as well.

    This feature is activated via CONFIG_AMD_MEM_ENCRYPT=y - disabled
    by default.

    (By Tom Lendacky)

    - Enable PCID optimized TLB flushing on newer Intel CPUs: PCID is a
    hardware feature that attaches an address space tag to TLB entries
    and thus allows TLB flushes to be skipped in many cases, even when
    we switch mm's.

    (By Andy Lutomirski)

    All three of these features were in the works for a long time, and
    it's a coincidence of the three independent development paths that they
    are all enabled in v4.14 at once"

    * 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (65 commits)
    x86/mm: Enable RCU based page table freeing (CONFIG_HAVE_RCU_TABLE_FREE=y)
    x86/mm: Use pr_cont() in dump_pagetable()
    x86/mm: Fix SME encryption stack ptr handling
    kvm/x86: Avoid clearing the C-bit in rsvd_bits()
    x86/CPU: Align CR3 defines
    x86/mm, mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages
    acpi, x86/mm: Remove encryption mask from ACPI page protection type
    x86/mm, kexec: Fix memory corruption with SME on successive kexecs
    x86/mm/pkeys: Fix typo in Documentation/x86/protection-keys.txt
    x86/mm/dump_pagetables: Speed up page tables dump for CONFIG_KASAN=y
    x86/mm: Implement PCID based optimization: try to preserve old TLB entries using PCID
    x86: Enable 5-level paging support via CONFIG_X86_5LEVEL=y
    x86/mm: Allow userspace have mappings above 47-bit
    x86/mm: Prepare to expose larger address space to userspace
    x86/mpx: Do not allow MPX if we have mappings above 47-bit
    x86/mm: Rename tasksize_32bit/64bit to task_size_32bit/64bit()
    x86/xen: Redefine XEN_ELFNOTE_INIT_P2M using PUD_SIZE * PTRS_PER_PUD
    x86/mm/dump_pagetables: Fix printout of p4d level
    x86/mm/dump_pagetables: Generalize address normalization
    x86/boot: Fix memremap() related build failure
    ...

    Linus Torvalds
     
  • Pull scheduler updates from Ingo Molnar:
    "The main changes in this cycle were:

    - fix affine wakeups (Peter Zijlstra)

    - improve CPU onlining (and general bootup) scalability on systems
    with ridiculous number (thousands) of CPUs (Peter Zijlstra)

    - sched/numa updates (Rik van Riel)

    - sched/deadline updates (Byungchul Park)

    - sched/cpufreq enhancements and related cleanups (Viresh Kumar)

    - sched/debug enhancements (Xie XiuQi)

    - various fixes"

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits)
    sched/debug: Optimize sched_domain sysctl generation
    sched/topology: Avoid pointless rebuild
    sched/topology, cpuset: Avoid spurious/wrong domain rebuilds
    sched/topology: Improve comments
    sched/topology: Fix memory leak in __sdt_alloc()
    sched/completion: Document that reinit_completion() must be called after complete_all()
    sched/autogroup: Fix error reporting printk text in autogroup_create()
    sched/fair: Fix wake_affine() for !NUMA_BALANCING
    sched/debug: Intruduce task_state_to_char() helper function
    sched/debug: Show task state in /proc/sched_debug
    sched/debug: Use task_pid_nr_ns in /proc/$pid/sched
    sched/core: Remove unnecessary initialization init_idle_bootup_task()
    sched/deadline: Change return value of cpudl_find()
    sched/deadline: Make find_later_rq() choose a closer CPU in topology
    sched/numa: Scale scan period with tasks in group and shared/private
    sched/numa: Slow down scan rate if shared faults dominate
    sched/pelt: Fix false running accounting
    sched: Mark pick_next_task_dl() and build_sched_domain() as static
    sched/cpupri: Don't re-initialize 'struct cpupri'
    sched/deadline: Don't re-initialize 'struct cpudl'
    ...

    Linus Torvalds
     

14 Aug, 2017

1 commit

  • The allocated debug objects are either on the free list or in the
    hashed bucket lists. So they won't get lost. However if both debug
    objects and kmemleak are enabled, and kmemleak scanning is done
    while some of the debug objects are transitioning from one list to
    the other, false positive reports of memory leaks may happen for
    those objects. For example,

    [38687.275678] kmemleak: 12 new suspected memory leaks (see
    /sys/kernel/debug/kmemleak)
    unreferenced object 0xffff92e98aabeb68 (size 40):
    comm "ksmtuned", pid 4344, jiffies 4298403600 (age 906.430s)
    hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 d0 bc db 92 e9 92 ff ff ................
    01 00 00 00 00 00 00 00 38 36 8a 61 e9 92 ff ff ........86.a....
    backtrace:
    [] kmemleak_alloc+0x4a/0xa0
    [] kmem_cache_alloc+0xe9/0x320
    [] __debug_object_init+0x3e6/0x400
    [] debug_object_activate+0x131/0x210
    [] __call_rcu+0x3f/0x400
    [] call_rcu_sched+0x1d/0x20
    [] put_object+0x2c/0x40
    [] __delete_object+0x3c/0x50
    [] delete_object_full+0x1d/0x20
    [] kmemleak_free+0x32/0x80
    [] kmem_cache_free+0x77/0x350
    [] unlink_anon_vmas+0x82/0x1e0
    [] free_pgtables+0xa1/0x110
    [] exit_mmap+0xc1/0x170
    [] mmput+0x80/0x150
    [] do_exit+0x2a9/0xd20

    The references in the debug objects may also hide a real memory leak.

    As there is no point in having kmemleak track debug object
    allocations, kmemleak checking is now disabled for debug objects.
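
    The straightforward way to exempt a dedicated cache from kmemleak
    tracking is the SLAB_NOLEAKTRACE flag; a hedged sketch of what the
    debug objects cache creation then looks like (the actual patch may
    differ in detail):

    obj_cache = kmem_cache_create("debug_objects_cache",
                                  sizeof(struct debug_obj), 0,
                                  SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE,
                                  NULL);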

    Signed-off-by: Waiman Long
    Signed-off-by: Thomas Gleixner
    Cc: Andrew Morton
    Link: http://lkml.kernel.org/r/1502718733-8527-1-git-send-email-longman@redhat.com

    Waiman Long
     

10 Aug, 2017

1 commit

  • init_idle_bootup_task() is called in rest_init() to switch
    the scheduling class of the boot thread to the idle class.

    The function only sets:

    idle->sched_class = &idle_sched_class;

    which has already been set in init_idle(), called by sched_init():

    /*
    * The idle tasks have their own, simple scheduling class:
    */
    idle->sched_class = &idle_sched_class;

    We've already set the boot thread to the idle class in
    start_kernel()->sched_init()->init_idle(),
    so it's unnecessary to set it again in
    start_kernel()->rest_init()->init_idle_bootup_task().

    Signed-off-by: Cheng Jian
    Signed-off-by: Xie XiuQi
    Signed-off-by: Peter Zijlstra (Intel)
    Cc:
    Cc:
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1501838377-109720-1-git-send-email-cj.chengjian@huawei.com
    Signed-off-by: Ingo Molnar

    Cheng Jian
     

27 Jul, 2017

1 commit

  • The percpu memory allocator is experiencing scalability issues when
    allocating and freeing large numbers of counters as in BPF.
    Additionally, there is a corner case where iteration is triggered over
    all chunks if the contig_hint is the right size, but wrong alignment.

    This patch replaces the area map allocator with a basic bitmap allocator
    implementation. Each subsequent patch will introduce new features and
    replace full scanning functions with faster non-scanning options when
    possible.

    Implementation:
    This patchset removes the area map allocator in favor of a bitmap
    allocator backed by metadata blocks. The primary goal is to provide
    consistency in performance and memory footprint with a focus on small
    allocations (< 64 bytes). The bitmap removes the heavy memmove from the
    freeing critical path and provides a consistent memory footprint. The
    metadata blocks provide a bound on the amount of scanning required by
    maintaining a set of hints.

    In an effort to make freeing fast, the metadata is updated on the free
    path if the new free area makes a page free, a block free, or spans
    across blocks. This causes the chunk's contig hint to potentially be
    smaller than what it could allocate by up to the smaller of a page or a
    block. If the chunk's contig hint is contained within a block, a check
    occurs and the hint is kept accurate. Metadata is always kept accurate
    on allocation, so there will not be a situation where a chunk has a
    larger contig hint than available.
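
    To make the contrast with the area map concrete, here is a toy sketch
    of bitmap-backed allocation (illustrative names; the real allocator
    layers per-block hint metadata on top of this to bound the scanning):

    #include <linux/bitmap.h>
    #include <linux/errno.h>

    #define TOY_CHUNK_BITS  1024
    static unsigned long toy_map[BITS_TO_LONGS(TOY_CHUNK_BITS)];

    static int toy_alloc(unsigned int bits, unsigned int align)
    {
            unsigned long start;

            start = bitmap_find_next_zero_area(toy_map, TOY_CHUNK_BITS,
                                               0, bits, align - 1);
            if (start >= TOY_CHUNK_BITS)
                    return -ENOSPC;
            bitmap_set(toy_map, start, bits);
            return start;
    }

    static void toy_free(unsigned int start, unsigned int bits)
    {
            /* freeing is just clearing bits -- no memmove of an area map,
             * which is what makes the free path cost consistent */
            bitmap_clear(toy_map, start, bits);
    }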

    Evaluation:
    I have primarily done testing against a simple workload of allocating
    1 million objects (2^20) of varying size. Deallocation was done in
    order, alternating, and in reverse. These numbers were collected after
    rebasing on top of a80099a152. I present the worst-case numbers here:

    Area Map Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 310 | 4770
    16B | 557 | 1325
    64B | 436 | 273
    256B | 776 | 131
    1024B | 3280 | 122

    Bitmap Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 490 | 70
    16B | 515 | 75
    64B | 610 | 80
    256B | 950 | 100
    1024B | 3520 | 200

    This data demonstrates the area map allocator's inability to handle
    less than ideal situations. In the best case of reverse
    deallocation, the area map allocator was able to perform within range
    of the bitmap allocator. In the worst case situation, freeing took
    nearly 5 seconds for 1 million 4-byte objects. The bitmap allocator
    dramatically improves the consistency of the free path. The small
    allocations performed nearly identically regardless of the freeing
    pattern.

    While it does add to the allocation latency, the allocation scenario
    here is optimal for the area map allocator. The area map allocator runs
    into trouble when it is allocating in chunks where the latter half is
    full. It is difficult to replicate this, so I present a variant where
    the second half of the pages is filled. Freeing was done sequentially.
    Below are the numbers for this scenario:

    Area Map Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 4118 | 4892
    16B | 1651 | 1163
    64B | 598 | 285
    256B | 771 | 158
    1024B | 3034 | 160

    Bitmap Allocator:

    Object Size | Alloc Time (ms) | Free Time (ms)
    ----------------------------------------------
    4B | 481 | 67
    16B | 506 | 69
    64B | 636 | 75
    256B | 892 | 90
    1024B | 3262 | 147

    The data shows a parabolic curve of performance for the area map
    allocator. This is due to the memmove operation being the dominant cost
    at the lower object sizes, as more objects are packed in a chunk, while
    at higher object sizes the traversal of the chunk slots is the
    dominating cost. The bitmap allocator suffers this problem as well. The
    above data shows the area map allocator's inability to scale on the
    allocation path, while the bitmap allocator demonstrates consistent
    performance in general.

    The second problem of additional scanning can result in the area map
    allocator completing in 52 minutes when trying to allocate 1 million
    4-byte objects with 8-byte alignment. The same workload takes
    approximately 16 seconds to complete for the bitmap allocator.

    V2:
    Fixed a bug in pcpu_alloc_first_chunk: end_offset was setting the bitmap
    using bytes instead of bits.

    Added a comment to pcpu_cnt_pop_pages to explain bitmap_weight.

    Signed-off-by: Dennis Zhou
    Reviewed-by: Josef Bacik
    Signed-off-by: Tejun Heo

    Dennis Zhou (Facebook)
     

18 Jul, 2017

1 commit

  • Since DMA addresses will effectively look like 48-bit addresses when the
    memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
    device performing the DMA does not support 48 bits. SWIOTLB will be
    initialized to create decrypted bounce buffers for use by these devices.
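
    A hedged sketch of the underlying test (sme_active() was the interface
    of that era; the helper itself is illustrative, not the upstream code):

    #include <linux/dma-mapping.h>

    /* with SME the encryption bit makes DMA addresses effectively
     * 48 bits wide; devices with a smaller DMA mask must bounce
     * through decrypted SWIOTLB buffers */
    static bool device_needs_swiotlb_bounce(struct device *dev)
    {
            return sme_active() &&
                   min(*dev->dma_mask, dev->coherent_dma_mask) <
                   DMA_BIT_MASK(48);
    }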

    Signed-off-by: Tom Lendacky
    Reviewed-by: Thomas Gleixner
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: Andy Lutomirski
    Cc: Arnd Bergmann
    Cc: Borislav Petkov
    Cc: Brijesh Singh
    Cc: Dave Young
    Cc: Dmitry Vyukov
    Cc: Jonathan Corbet
    Cc: Konrad Rzeszutek Wilk
    Cc: Larry Woodman
    Cc: Linus Torvalds
    Cc: Matt Fleming
    Cc: Michael S. Tsirkin
    Cc: Paolo Bonzini
    Cc: Peter Zijlstra
    Cc: Radim Krčmář
    Cc: Rik van Riel
    Cc: Toshimitsu Kani
    Cc: kasan-dev@googlegroups.com
    Cc: kvm@vger.kernel.org
    Cc: linux-arch@vger.kernel.org
    Cc: linux-doc@vger.kernel.org
    Cc: linux-efi@vger.kernel.org
    Cc: linux-mm@kvack.org
    Link: http://lkml.kernel.org/r/aa2d29b78ae7d508db8881e46a3215231b9327a7.1500319216.git.thomas.lendacky@amd.com
    Signed-off-by: Ingo Molnar

    Tom Lendacky
     

13 Jul, 2017

1 commit

  • The add_device_randomness() function would ignore incoming bytes if the
    crng wasn't ready. This additionally makes sure to make an early enough
    call to add_latent_entropy() to influence the initial stack canary,
    which is especially important on non-x86 systems where it stays the same
    through the life of the boot.

    Link: http://lkml.kernel.org/r/20170626233038.GA48751@beast
    Signed-off-by: Kees Cook
    Cc: "Theodore Ts'o"
    Cc: Arnd Bergmann
    Cc: Greg Kroah-Hartman
    Cc: Ingo Molnar
    Cc: Jessica Yu
    Cc: Steven Rostedt (VMware)
    Cc: Viresh Kumar
    Cc: Tejun Heo
    Cc: Prarit Bhargava
    Cc: Lokesh Vutla
    Cc: Nicholas Piggin
    Cc: AKASHI Takahiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kees Cook
     

23 May, 2017

2 commits

  • might_sleep() and smp_processor_id() checks are enabled after the boot
    process is done. That hides bugs in the SMP bringup and driver
    initialization code.

    Enable it right when the scheduler starts working, i.e. when init task and
    kthreadd have been created and right before the idle task enables
    preemption.

    Tested-by: Mark Rutland
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Mark Rutland
    Cc: Greg Kroah-Hartman
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Link: http://lkml.kernel.org/r/20170516184736.272225698@linutronix.de
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • Some of the boot code in kernel_init_freeable() which runs before SMP
    bringup assumes (rightfully) that it runs on the boot CPU and therefore can
    use smp_processor_id() in preemptible context.

    That works so far because the smp_processor_id() check starts to be
    effective after smp bringup. That's just wrong. Starting with SMP bringup
    and the ability to move threads around, smp_processor_id() in preemptible
    context is broken.

    Aside from that, it does not make sense to allow init to run on all CPUs
    before sched_init_smp() has been run.

    Pin the init to the boot CPU so the existing code can continue to use
    smp_processor_id() without triggering the checks when the enabling of those
    checks starts earlier.
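
    A sketch of the change in kernel_init_freeable() (abbreviated; the
    affinity is widened again once SMP scheduling is up, from
    sched_init_smp()):

    static noinline void __init kernel_init_freeable(void)
    {
            /* wait until kthreadd is all set up */
            wait_for_completion(&kthreadd_done);
            /*
             * Was: set_cpus_allowed_ptr(current, cpu_all_mask);
             * pin init to the boot CPU so early smp_processor_id()
             * users stay valid.
             */
            set_cpus_allowed_ptr(current, cpumask_of(smp_processor_id()));
            /* ... */
    }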

    Tested-by: Mark Rutland
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Greg Kroah-Hartman
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Link: http://lkml.kernel.org/r/20170516184734.943149935@linutronix.de
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

09 May, 2017

1 commit

  • Pull tty/serial updates from Greg KH:
    "Here is the "big" TTY/Serial patch updates for 4.12-rc1

    Not a lot of new things here, the normal number of serial driver
    updates and additions, tiny bugs fixed, and some core files split up
    to make future changes a bit easier for Nicolas's "tiny-tty" work.

    All of these have been in linux-next for a while"

    * tag 'tty-4.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (62 commits)
    serial: small Makefile reordering
    tty: split job control support into a file of its own
    tty: move baudrate handling code to a file of its own
    console: move console_init() out of tty_io.c
    serial: 8250_early: Add earlycon support for Palmchip UART
    tty: pl011: use "qdf2400_e44" as the earlycon name for QDF2400 E44
    vt: make mouse selection of non-ASCII consistent
    vt: set mouse selection word-chars to gpm's default
    imx-serial: Reduce RX DMA startup latency when opening for reading
    serial: omap: suspend device on probe errors
    serial: omap: fix runtime-pm handling on unbind
    tty: serial: omap: add UPF_BOOT_AUTOCONF flag for DT init
    serial: samsung: Remove useless spinlock
    serial: samsung: Add missing checks for dma_map_single failure
    serial: samsung: Use right device for DMA-mapping calls
    serial: imx: setup DCEDTE early and ensure DCD and RI irqs to be off
    tty: fix comment typo s/repsonsible/responsible/
    tty: amba-pl011: Fix spurious TX interrupts
    serial: xuartps: Enable clocks in the pm disable case also
    serial: core: Re-use struct uart_port {name} field
    ...

    Linus Torvalds
     

04 May, 2017

1 commit

  • Pull tracing updates from Steven Rostedt:
    "New features for this release:

    - Pretty much a full rewrite of the processing of function plugins,
    i.e. echo do_IRQ:stacktrace > set_ftrace_filter

    - The rewrite was needed to allow plugins to be unique to tracing
    instances, i.e. mkdir instance/foo; cd instances/foo; echo
    do_IRQ:stacktrace > set_ftrace_filter. The old implementation was
    very hacky; this removes a lot of those hacks.

    - New "function-fork" tracing option. When set, processes whose pids
    are listed in the set_ftrace_pid file will have their children's
    pids added to the file when they fork.

    - Exposure of "maxactive" for kretprobe in kprobe_events

    - Allow for builtin init functions to be traced by the function
    tracer (via the kernel command line). Module init function tracing
    will come in the next release.

    - Added more selftests, and have selftests also test in an instance"

    * tag 'trace-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (60 commits)
    ring-buffer: Return reader page back into existing ring buffer
    selftests: ftrace: Allow some event trigger tests to run in an instance
    selftests: ftrace: Have some basic tests run in a tracing instance too
    selftests: ftrace: Have event tests also run in an tracing instance
    selftests: ftrace: Make func_event_triggers and func_traceonoff_triggers tests do instances
    selftests: ftrace: Allow some tests to be run in a tracing instance
    tracing/ftrace: Allow for instances to trigger their own stacktrace probes
    tracing/ftrace: Allow for the traceonoff probe be unique to instances
    tracing/ftrace: Enable snapshot function trigger to work with instances
    tracing/ftrace: Allow instances to have their own function probes
    tracing/ftrace: Add a better way to pass data via the probe functions
    ftrace: Dynamically create the probe ftrace_ops for the trace_array
    tracing: Pass the trace_array into ftrace_probe_ops functions
    tracing: Have the trace_array hold the list of registered func probes
    ftrace: If the hash for a probe fails to update then free what was initialized
    ftrace: Have the function probes call their own function
    ftrace: Have each function probe use its own ftrace_ops
    ftrace: Have unregister_ftrace_function_probe_func() return a value
    ftrace: Add helper function ftrace_hash_move_and_update_ops()
    ftrace: Remove data field from ftrace_func_probe structure
    ...

    Linus Torvalds
     

03 May, 2017

1 commit

  • Pull trivial tree updates from Jiri Kosina.

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial:
    tty: fix comment for __tty_alloc_driver()
    init/main: properly align the multi-line comment
    init/main: Fix double "the" in comment
    Fix dead URLs to ftp.kernel.org
    drivers: Clean up duplicated email address
    treewide: Fix typo in xml/driver-api/basics.xml
    tools/testing/selftests/powerpc: remove redundant CFLAGS in Makefile: "-Wall -O2 -Wall" -> "-O2 -Wall"
    selftests/timers: Spelling s/privledges/privileges/
    HID: picoLCD: Spelling s/REPORT_WRTIE_MEMORY/REPORT_WRITE_MEMORY/
    net: phy: dp83848: Fix Typo
    UBI: Fix typos
    Documentation: ftrace.txt: Correct nice value of 120 priority
    net: fec: Fix typo in error msg and comment
    treewide: Fix typos in printk

    Linus Torvalds
     

04 Apr, 2017

1 commit

  • Relying on free_reserved_area() to call ftrace to free init memory proved
    not to be sufficient. The issue is that on x86, when debug_pagealloc is
    enabled, the init memory is not freed, but simply set as not present. Since
    ftrace was uninformed of this, starting function tracing still tries to
    update pages that are not present according to the page tables, causing
    ftrace to trigger a BUG(), as well as killing the kernel itself.

    Instead of relying on free_reserved_area(), have init/main.c call ftrace
    directly just before it frees the init memory. Then it needs to use
    __init_begin and __init_end to know where the init memory is located.
    Looking at all archs (and testing what I can), it appears that this should
    work for each of them.
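
    Sketched, the result is an explicit hand-off in kernel_init() just
    before the init memory is released (ftrace_free_init_mem() consults
    __init_begin/__init_end internally; the sequence is abbreviated):

    static int __ref kernel_init(void *unused)
    {
            /* ... */
            ftrace_free_init_mem();  /* drop ftrace records for init text */
            free_initmem();
            /* ... */
    }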

    Reported-by: kernel test robot
    Reported-by: Fengguang Wu
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

01 Apr, 2017

1 commit

  • Yang Li has reported that drain_all_pages triggers a WARN_ON, which means
    that this function is called before the mm_percpu_wq is
    initialized on arm64 with CMA configured:

    WARNING: CPU: 2 PID: 1 at mm/page_alloc.c:2423 drain_all_pages+0x244/0x25c
    Modules linked in:
    CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.11.0-rc1-next-20170310-00027-g64dfbc5 #127
    Hardware name: Freescale Layerscape 2088A RDB Board (DT)
    task: ffffffc07c4a6d00 task.stack: ffffffc07c4a8000
    PC is at drain_all_pages+0x244/0x25c
    LR is at start_isolate_page_range+0x14c/0x1f0
    [...]
    drain_all_pages+0x244/0x25c
    start_isolate_page_range+0x14c/0x1f0
    alloc_contig_range+0xec/0x354
    cma_alloc+0x100/0x1fc
    dma_alloc_from_contiguous+0x3c/0x44
    atomic_pool_init+0x7c/0x208
    arm64_dma_init+0x44/0x4c
    do_one_initcall+0x38/0x128
    kernel_init_freeable+0x1a0/0x240
    kernel_init+0x10/0xfc
    ret_from_fork+0x10/0x20

    Fix this by moving the whole of setup_vmstat, which is currently an
    initcall, to init_mm_internals, which will be called right after the WQ
    subsystem is initialized.
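
    The resulting ordering in kernel_init_freeable() is roughly (abbreviated
    sketch):

    workqueue_init();       /* the WQ subsystem is up ... */
    init_mm_internals();    /* ... so this can allocate mm_percpu_wq safely */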

    Link: http://lkml.kernel.org/r/20170315164021.28532-1-mhocko@kernel.org
    Signed-off-by: Michal Hocko
    Reported-by: Yang Li
    Tested-by: Yang Li
    Tested-by: Xiaolong Ye
    Cc: Mel Gorman
    Cc: Vlastimil Babka
    Cc: Tetsuo Handa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

12 Mar, 2017

1 commit

  • Pull random updates from Ted Ts'o:
    "Change get_random_{int,log} to use the CRNG used by /dev/urandom and
    getrandom(2). It's faster and arguably more secure than cut-down MD5
    that we had been using.

    Also do some code cleanup"

    * tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
    random: move random_min_urandom_seed into CONFIG_SYSCTL ifdef block
    random: convert get_random_int/long into get_random_u32/u64
    random: use chacha20 for get_random_int/long
    random: fix comment for unused random_min_urandom_seed
    random: remove variable limit
    random: remove stale urandom_init_wait
    random: remove stale maybe_reseed_primary_crng

    Linus Torvalds
     

02 Mar, 2017

5 commits