04 Feb, 2020

4 commits

  • struct mm_struct is quite large (~1664 bytes) and so allocating one on the
    stack may cause problems, as the kernel stack size is small.

    Since ptdump_walk_pgd_level_core() was only allocating the structure so
    that it could modify the pgd argument, we can instead introduce a pgd
    override in struct mm_walk and pass it down the call stack to where it is
    needed (see the sketch at the end of this entry).

    Since the correct mm_struct is now passed down, it is also unnecessary to
    take the mmap_sem semaphore, because ptdump_walk_pgd() takes the semaphore
    on the real mm.

    [steven.price@arm.com: restore missed arm64 changes]
    Link: http://lkml.kernel.org/r/20200108145710.34314-1-steven.price@arm.com
    Signed-off-by: Steven Price
    Reported-by: Stephen Rothwell
    Cc: Catalin Marinas
    Cc: Albert Ou
    Cc: Alexandre Ghiti
    Cc: Andy Lutomirski
    Cc: Ard Biesheuvel
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: David S. Miller
    Cc: Heiko Carstens
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: James Hogan
    Cc: James Morse
    Cc: Jerome Glisse
    Cc: "Liang, Kan"
    Cc: Mark Rutland
    Cc: Michael Ellerman
    Cc: Paul Burton
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Peter Zijlstra
    Cc: Ralf Baechle
    Cc: Russell King
    Cc: Thomas Gleixner
    Cc: Vasily Gorbik
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Zong Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Steven Price
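
    A minimal sketch of the resulting call pattern (illustrative only: the
    callback, the address range and the idea of dumping an alternative set of
    page tables are placeholders; the walk_page_range_novma() prototype shown
    is the one this series ends up with):

      #include <linux/pagewalk.h>

      static int note_pte(pte_t *pte, unsigned long addr, unsigned long next,
                          struct mm_walk *walk)
      {
              pr_info("pte at %#lx\n", addr);
              return 0;
      }

      static const struct mm_walk_ops dump_ops = {
              .pte_entry = note_pte,
      };

      /*
       * Dump a set of page tables that is not init_mm's own (e.g. the EFI
       * tables) without putting a ~1664-byte fake mm_struct on the stack:
       * pass the real mm for locking and the pgd to walk as the override.
       */
      static void dump_other_pgd(pgd_t *pgd, unsigned long start,
                                 unsigned long end)
      {
              down_read(&init_mm.mmap_sem);   /* semaphore of the real mm */
              walk_page_range_novma(&init_mm, start, end, &dump_ops,
                                    pgd /* pgd override */, NULL);
              up_read(&init_mm.mmap_sem);
      }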
     
  • The pte_hole() callback is called at multiple levels of the page tables.
    Code dumping the kernel page tables needs to know at what depth the
    missing entry is. Add this as an extra parameter to pte_hole(). When the
    depth isn't known (e.g. when processing a vma) then -1 is passed (see the
    sketch at the end of this entry).

    The depth that is reported is the actual level where the entry is missing
    (ignoring any folding that is in place), i.e. any levels where
    PTRS_PER_P?D is set to 1 are ignored.

    Note that depth starts at 0 for a PGD so that PUD/PMD/PTE retain their
    natural numbers as levels 2/3/4.

    Link: http://lkml.kernel.org/r/20191218162402.45610-16-steven.price@arm.com
    Signed-off-by: Steven Price
    Tested-by: Zong Li
    Cc: Albert Ou
    Cc: Alexandre Ghiti
    Cc: Andy Lutomirski
    Cc: Ard Biesheuvel
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Catalin Marinas
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: David S. Miller
    Cc: Heiko Carstens
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: James Hogan
    Cc: James Morse
    Cc: Jerome Glisse
    Cc: "Liang, Kan"
    Cc: Mark Rutland
    Cc: Michael Ellerman
    Cc: Paul Burton
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Peter Zijlstra
    Cc: Ralf Baechle
    Cc: Russell King
    Cc: Thomas Gleixner
    Cc: Vasily Gorbik
    Cc: Vineet Gupta
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Steven Price
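
    A sketch of a pte_hole() callback under the extended prototype (what the
    callback does with the depth is made up for illustration):

      /* depth: 0 = PGD, 1 = P4D, 2 = PUD, 3 = PMD, 4 = PTE; -1 = unknown */
      static int report_hole(unsigned long addr, unsigned long next, int depth,
                             struct mm_walk *walk)
      {
              static const char * const level[] =
                      { "pgd", "p4d", "pud", "pmd", "pte" };

              if (depth >= 0)
                      pr_info("hole %#lx-%#lx at %s level\n", addr, next,
                              level[depth]);
              else
                      pr_info("hole %#lx-%#lx, depth unknown\n", addr, next);
              return 0;
      }

      static const struct mm_walk_ops hole_ops = {
              .pte_hole = report_hole,
      };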
     
  • Since 48684a65b4e3 ("mm: pagewalk: fix misbehavior of walk_page_range for
    vma(VM_PFNMAP)"), walk_page_range() will report any kernel area as a hole,
    because it lacks a vma.

    This means each arch has re-implemented page table walking when needed,
    for example in the per-arch ptdump walker.

    Remove the requirement to have a vma in the generic code and add a new
    function, walk_page_range_novma(), which ignores the VMAs and simply walks
    the page tables (see the sketch at the end of this entry).

    Link: http://lkml.kernel.org/r/20191218162402.45610-13-steven.price@arm.com
    Signed-off-by: Steven Price
    Cc: Albert Ou
    Cc: Alexandre Ghiti
    Cc: Andy Lutomirski
    Cc: Ard Biesheuvel
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Catalin Marinas
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: David S. Miller
    Cc: Heiko Carstens
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: James Hogan
    Cc: James Morse
    Cc: Jerome Glisse
    Cc: "Liang, Kan"
    Cc: Mark Rutland
    Cc: Michael Ellerman
    Cc: Paul Burton
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Peter Zijlstra
    Cc: Ralf Baechle
    Cc: Russell King
    Cc: Thomas Gleixner
    Cc: Vasily Gorbik
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Zong Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Steven Price
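
    A sketch of the new entry point (the callback is a placeholder, and the
    prototype shown is the one introduced here, before the pgd override
    argument that a later patch in this series adds):

      #include <linux/pagewalk.h>

      static int count_pmd(pmd_t *pmd, unsigned long addr, unsigned long next,
                           struct mm_walk *walk)
      {
              if (!pmd_none(*pmd))
                      (*(unsigned long *)walk->private)++;
              return 0;
      }

      static const struct mm_walk_ops count_ops = {
              .pmd_entry = count_pmd,
      };

      /*
       * Walk a kernel address range: no vma covers it, so a vma-based
       * walk_page_range() would have treated it all as a hole.
       */
      static unsigned long count_kernel_pmds(unsigned long start,
                                             unsigned long end)
      {
              unsigned long count = 0;

              down_read(&init_mm.mmap_sem);
              walk_page_range_novma(&init_mm, start, end, &count_ops, &count);
              up_read(&init_mm.mmap_sem);
              return count;
      }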
     
  • pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410
    ("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were no
    users. We're about to add users, so reintroduce them, along with
    p4d_entry(), as we now have 5 levels of tables.

    Note that commit a00cc7d9dd93d66a ("mm, x86: add support for PUD-sized
    transparent hugepages") already re-added pud_entry(), but with different
    semantics from the other callbacks. This commit reverts the semantics to
    match the other callbacks.

    To support hmm.c, which now uses the new semantics of pud_entry(), a new
    member ('action') is added to struct mm_walk. It allows a callback to
    either descend into the subtree (ACTION_SUBTREE, the default), skip the
    subtree (ACTION_CONTINUE) or have itself repeated for the entry
    (ACTION_AGAIN); see the sketch at the end of this entry. hmm.c is then
    updated to call pud_trans_huge_lock() itself and make use of the
    splitting/retry logic of the core code.

    After this change pud_entry() is called for all entries, not just
    transparent huge pages.

    [arnd@arndb.de: fix unused variable warning]
    Link: http://lkml.kernel.org/r/20200107204607.1533842-1-arnd@arndb.de
    Link: http://lkml.kernel.org/r/20191218162402.45610-12-steven.price@arm.com
    Signed-off-by: Steven Price
    Signed-off-by: Arnd Bergmann
    Cc: Albert Ou
    Cc: Alexandre Ghiti
    Cc: Andy Lutomirski
    Cc: Ard Biesheuvel
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Catalin Marinas
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: David S. Miller
    Cc: Heiko Carstens
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: James Hogan
    Cc: James Morse
    Cc: Jerome Glisse
    Cc: "Liang, Kan"
    Cc: Mark Rutland
    Cc: Michael Ellerman
    Cc: Paul Burton
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Peter Zijlstra
    Cc: Ralf Baechle
    Cc: Russell King
    Cc: Thomas Gleixner
    Cc: Vasily Gorbik
    Cc: Vineet Gupta
    Cc: Will Deacon
    Cc: Zong Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Steven Price
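
    A sketch of a pud_entry() callback using the new 'action' field (the
    huge-PUD handling is only illustrative, loosely modelled on what a user
    such as hmm.c would need):

      static int my_pud_entry(pud_t *pud, unsigned long addr,
                              unsigned long next, struct mm_walk *walk)
      {
              pud_t val = READ_ONCE(*pud);

              if (pud_none(val))
                      return 0;

              if (pud_trans_huge(val)) {
                      /* handle the whole huge entry here ... */
                      walk->action = ACTION_CONTINUE; /* don't descend to PMDs */
                      return 0;
              }

              /*
               * ACTION_SUBTREE (the default) walks the PMD/PTE levels below;
               * ACTION_AGAIN makes the core re-read the entry and call the
               * callback again, e.g. if the entry changed under a lock.
               */
              return 0;
      }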
     

06 Nov, 2019

1 commit

  • For users that want to traverse all page table entries pointing into a
    region of a struct address_space mapping, introduce a walk_page_mapping()
    function (see the sketch at the end of this entry).

    The walk_page_mapping() function will initially be used for dirty tracking
    in virtual graphics drivers.

    Cc: Andrew Morton
    Cc: Matthew Wilcox
    Cc: Will Deacon
    Cc: Peter Zijlstra
    Cc: Rik van Riel
    Cc: Minchan Kim
    Cc: Michal Hocko
    Cc: Huang Ying
    Cc: Jérôme Glisse
    Cc: Kirill A. Shutemov
    Signed-off-by: Thomas Hellstrom
    Reviewed-by: Andrew Morton

    Thomas Hellstrom
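
    A sketch of the intended call pattern (the callback and the idea of
    counting writable ptes stand in for what a driver's dirty tracking would
    do; the sketch assumes the caller serializes by holding the mapping's
    i_mmap lock across the walk):

      #include <linux/fs.h>
      #include <linux/pagewalk.h>

      static int note_writable_pte(pte_t *pte, unsigned long addr,
                                   unsigned long next, struct mm_walk *walk)
      {
              if (pte_write(*pte))    /* a real driver would write-protect here */
                      (*(unsigned long *)walk->private)++;
              return 0;
      }

      static const struct mm_walk_ops wp_ops = {
              .pte_entry = note_writable_pte,
      };

      /* Visit every pte, in every mm, that maps the given file range. */
      static unsigned long count_writable(struct address_space *mapping,
                                          pgoff_t first, pgoff_t nr)
      {
              unsigned long count = 0;

              i_mmap_lock_read(mapping);
              walk_page_mapping(mapping, first, nr, &wp_ops, &count);
              i_mmap_unlock_read(mapping);
              return count;
      }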
     

07 Sep, 2019

2 commits

  • The mm_walk structure currently mixes data and code. Split out the
    operations vectors into a new mm_walk_ops structure, and while we are
    changing the API also declare the mm_walk structure inside the
    walk_page_range and walk_page_vma functions (see the sketch at the end of
    this entry).

    Based on a patch from Linus Torvalds.

    Link: https://lore.kernel.org/r/20190828141955.22210-3-hch@lst.de
    Signed-off-by: Christoph Hellwig
    Reviewed-by: Thomas Hellstrom
    Reviewed-by: Steven Price
    Reviewed-by: Jason Gunthorpe
    Signed-off-by: Jason Gunthorpe

    Christoph Hellwig
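
    A sketch of the resulting caller shape (the callback and private data are
    placeholders):

      #include <linux/pagewalk.h>

      static int my_pte_entry(pte_t *pte, unsigned long addr,
                              unsigned long next, struct mm_walk *walk)
      {
              return 0;       /* placeholder */
      }

      /* Callbacks now live in a const operations structure... */
      static const struct mm_walk_ops my_ops = {
              .pte_entry = my_pte_entry,
      };

      /*
       * ...and callers no longer fill in a struct mm_walk themselves; the
       * walk functions declare it internally.  The caller holds mm->mmap_sem.
       */
      static int walk_task_range(struct mm_struct *mm, unsigned long start,
                                 unsigned long end, void *priv)
      {
              return walk_page_range(mm, start, end, &my_ops, priv);
      }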
     
  • Add a new header for the two handfuls of users of the walk_page_range /
    walk_page_vma interface instead of polluting all users of mm.h with it.

    Link: https://lore.kernel.org/r/20190828141955.22210-2-hch@lst.de
    Signed-off-by: Christoph Hellwig
    Reviewed-by: Thomas Hellstrom
    Reviewed-by: Steven Price
    Reviewed-by: Jason Gunthorpe
    Signed-off-by: Jason Gunthorpe

    Christoph Hellwig