10 Jun, 2020

1 commit

  • This change converts the existing mmap_sem rwsem calls to the new mmap
    locking API.

    The change is generated using coccinelle with the following rule:

    // spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir .

    @@
    expression mm;
    @@
    (
    -init_rwsem
    +mmap_init_lock
    |
    -down_write
    +mmap_write_lock
    |
    -down_write_killable
    +mmap_write_lock_killable
    |
    -down_write_trylock
    +mmap_write_trylock
    |
    -up_write
    +mmap_write_unlock
    |
    -downgrade_write
    +mmap_write_downgrade
    |
    -down_read
    +mmap_read_lock
    |
    -down_read_killable
    +mmap_read_lock_killable
    |
    -down_read_trylock
    +mmap_read_trylock
    |
    -up_read
    +mmap_read_unlock
    )
    -(&mm->mmap_sem)
    +(mm)
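
    As an illustration (a hand-written before/after for this summary, not
    taken from the patch itself), a typical read-side caller changes like
    this:

        /* Before: raw rwsem operations on the mmap_sem field. */
        down_read(&mm->mmap_sem);
        vma = find_vma(mm, addr);
        up_read(&mm->mmap_sem);

        /* After: the equivalent calls through the mmap locking API. */
        mmap_read_lock(mm);
        vma = find_vma(mm, addr);
        mmap_read_unlock(mm);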

    Signed-off-by: Michel Lespinasse
    Signed-off-by: Andrew Morton
    Reviewed-by: Daniel Jordan
    Reviewed-by: Laurent Dufour
    Reviewed-by: Vlastimil Babka
    Cc: Davidlohr Bueso
    Cc: David Rientjes
    Cc: Hugh Dickins
    Cc: Jason Gunthorpe
    Cc: Jerome Glisse
    Cc: John Hubbard
    Cc: Liam Howlett
    Cc: Matthew Wilcox
    Cc: Peter Zijlstra
    Cc: Ying Han
    Link: http://lkml.kernel.org/r/20200520052908.204642-5-walken@google.com
    Signed-off-by: Linus Torvalds

    Michel Lespinasse

03 Jun, 2020

1 commit

  • Patch series "Fix W+X debug feature on x86"

    Jan alerted me[1] that the W+X detection debug feature was broken on x86
    by my change[2] switching x86 to the generic ptdump infrastructure.

    Fundamentally, the approach of moving the calculation of effective
    permissions into note_page() was broken: note_page() is only called for
    'leaf' entries, while the effective permissions are passed down via the
    internal nodes of the page tree. The solution I've taken here is to
    create a new (optional) callback which is called for all nodes of the
    page tree and can therefore calculate the effective permissions.

    Secondly, on some configurations (32-bit with PAE) "unsigned long" is
    not large enough to store the table entries. The fix here is simple:
    just use a u64.
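
    To see why, here is a minimal standalone sketch (userspace C, not kernel
    code): with PAE enabled, page table entries are 64 bits wide and the NX
    bit sits at bit 63, so a 32-bit "unsigned long" silently drops exactly
    the bit the W+X check cares about.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                /* PAE-style entry: NX (bit 63) + page frame + low flags. */
                uint64_t entry = (1ULL << 63) | 0x1000 | 0x3;

                /* "unsigned long" is 32 bits on a 32-bit kernel. */
                uint32_t truncated = (uint32_t)entry;

                printf("full entry: 0x%016llx\n",
                       (unsigned long long)entry);
                printf("truncated:  0x%08x\n", truncated); /* NX lost */
                return 0;
        }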

    [1] https://lore.kernel.org/lkml/d573dc7e-e742-84de-473d-f971142fa319@suse.com/
    [2] 2ae27137b2db ("x86: mm: convert dump_pagetables to use walk_page_range")

    This patch (of 2):

    By switching the x86 page table dump code to the generic code, the
    effective permissions are no longer calculated correctly, because the
    note_page() function is only called for *leaf* entries. Calculating the
    actual effective permissions requires observing the full hierarchy of
    the page tree.

    Introduce a new callback for ptdump which is called for every entry and
    can therefore update the prot_levels array correctly. note_page() can
    then simply access the appropriate element in the array.
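
    The shape of the change, as a sketch (names follow the mainline ptdump
    API, but treat the details as illustrative rather than a verbatim diff):

        struct ptdump_state {
                /* Called for leaf entries and holes, as before. */
                void (*note_page)(struct ptdump_state *st, unsigned long addr,
                                  int level, u64 val);
                /* New, optional: called at every level of the walk, so
                 * permissions can be accumulated top-down per level. */
                void (*effective_prot)(struct ptdump_state *st, int level,
                                       u64 val);
                const struct ptdump_range *range;
        };

    The x86 implementation can then combine each entry's permission bits
    with the accumulated value of its parent level and store the result in
    prot_levels[level], which note_page() reads back for the leaf.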

    [steven.price@arm.com: make the assignment conditional on val != 0]
    Link: http://lkml.kernel.org/r/430c8ab4-e7cd-6933-dde6-087fac6db872@arm.com
    Fixes: 2ae27137b2db ("x86: mm: convert dump_pagetables to use walk_page_range")
    Reported-by: Jan Beulich
    Signed-off-by: Steven Price
    Signed-off-by: Andrew Morton
    Cc: Qian Cai
    Cc: Andy Lutomirski
    Cc: Borislav Petkov
    Cc: Dave Hansen
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20200521152308.33096-1-steven.price@arm.com
    Link: http://lkml.kernel.org/r/20200521152308.33096-2-steven.price@arm.com
    Signed-off-by: Linus Torvalds

    Steven Price

04 Feb, 2020

3 commits

  • struct mm_struct is quite large (~1664 bytes), so allocating one on the
    stack may cause problems because the kernel stack is small.

    Since ptdump_walk_pgd_level_core() was only allocating the structure so
    that it could modify the pgd argument, we can instead introduce a pgd
    override in struct mm_walk and pass it down the call stack to where it
    is needed.

    Since the correct mm_struct is now passed down, it is also unnecessary
    to take the mmap_sem semaphore: ptdump_walk_pgd() now takes the
    semaphore on the real mm.
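
    Sketched out (field and function names follow the mainline API; the body
    is illustrative and walks a single range for brevity):

        struct mm_walk {
                const struct mm_walk_ops *ops;
                struct mm_struct *mm;
                pgd_t *pgd;     /* if set, walk this instead of mm->pgd */
                /* ... */
        };

        void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm,
                             pgd_t *pgd)
        {
                down_read(&mm->mmap_sem);      /* semaphore on the real mm */
                walk_page_range_novma(mm, st->range->start, st->range->end,
                                      &ptdump_ops, pgd, st);
                up_read(&mm->mmap_sem);
        }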

    [steven.price@arm.com: restore missed arm64 changes]
    Link: http://lkml.kernel.org/r/20200108145710.34314-1-steven.price@arm.com
    Signed-off-by: Steven Price
    Reported-by: Stephen Rothwell
    Cc: Catalin Marinas
    Cc: Albert Ou
    Cc: Alexandre Ghiti
    Cc: Andy Lutomirski
    Cc: Ard Biesheuvel
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: David S. Miller
    Cc: Heiko Carstens
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: James Hogan
    Cc: James Morse
    Cc: Jerome Glisse
    Cc: "Liang, Kan"
    Cc: Mark Rutland
    Cc: Michael Ellerman
    Cc: Paul Burton
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Peter Zijlstra
    Cc: Ralf Baechle
    Cc: Russell King
    Cc: Thomas Gleixner
    Cc: Vasily Gorbik
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Zong Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Steven Price
  • Rather than having to increment the 'depth' number by 1 in ptdump_hole(),
    let's change the meaning of 'level' in note_page(), since that makes the
    code simpler.

    Note that for x86 the level numbers were previously increased by 1 in
    commit 45dcd2091363 ("x86/mm/dump_pagetables: Fix printout of p4d level")
    and the comment "Bit 7 has a different meaning" was not updated, so this
    change also makes the code match the comment again.
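
    For reference, the resulting numbering (a summary of the convention,
    which now matches the walker's depth directly):

        /*
         * 'level' as passed to note_page() after this change:
         *
         *   -1  unknown
         *    0  PGD
         *    1  P4D
         *    2  PUD
         *    3  PMD
         *    4  PTE
         *
         * ptdump_hole() can therefore forward the walker's 'depth'
         * unchanged instead of passing depth + 1.
         */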

    Link: http://lkml.kernel.org/r/20191218162402.45610-24-steven.price@arm.com
    Signed-off-by: Steven Price
    Reviewed-by: Catalin Marinas
    Cc: Albert Ou
    Cc: Alexandre Ghiti
    Cc: Andy Lutomirski
    Cc: Ard Biesheuvel
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: David S. Miller
    Cc: Heiko Carstens
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: James Hogan
    Cc: James Morse
    Cc: Jerome Glisse
    Cc: "Liang, Kan"
    Cc: Mark Rutland
    Cc: Michael Ellerman
    Cc: Paul Burton
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Peter Zijlstra
    Cc: Ralf Baechle
    Cc: Russell King
    Cc: Thomas Gleixner
    Cc: Vasily Gorbik
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Zong Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Steven Price
  • Add a generic version of page table dumping that architectures can opt
    in to.
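
    A sketch of what opting in looks like for an architecture (names follow
    the generic ptdump API as it appears in the commits above; the arch-side
    details, including the range values and the entry decoding, are
    placeholders):

        static void my_note_page(struct ptdump_state *st, unsigned long addr,
                                 int level, u64 val)
        {
                /* Decode 'val' and print arch-specific attributes. */
        }

        static const struct ptdump_range kernel_ranges[] = {
                { PAGE_OFFSET, ~0UL },  /* kernel mappings (placeholder) */
                { 0, 0 }                /* terminator */
        };

        void my_arch_ptdump(struct mm_struct *mm)
        {
                struct ptdump_state st = {
                        .note_page = my_note_page,
                        .range     = kernel_ranges,
                };

                /* NULL pgd: walk mm->pgd (three-argument form from the
                 * stack-allocation fix above). */
                ptdump_walk_pgd(&st, mm, NULL);
        }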

    Link: http://lkml.kernel.org/r/20191218162402.45610-20-steven.price@arm.com
    Signed-off-by: Steven Price
    Cc: Albert Ou
    Cc: Alexandre Ghiti
    Cc: Andy Lutomirski
    Cc: Ard Biesheuvel
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Catalin Marinas
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: David S. Miller
    Cc: Heiko Carstens
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: James Hogan
    Cc: James Morse
    Cc: Jerome Glisse
    Cc: "Liang, Kan"
    Cc: Mark Rutland
    Cc: Michael Ellerman
    Cc: Paul Burton
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Peter Zijlstra
    Cc: Ralf Baechle
    Cc: Russell King
    Cc: Thomas Gleixner
    Cc: Vasily Gorbik
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Zong Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Steven Price