11 Jul, 2017

1 commit

  • For fast flash disks, async IO can introduce overhead because of
    context switches. block-mq now supports IO polling, which improves
    performance and latency a lot. Swapin is a good place to use this
    technique, because the task is waiting for the swapped-in page before
    it can continue execution.

    In my virtual machine, directly reading 4k of data from an NVMe device
    with iopoll is about 60% faster than reading it without polling. With
    iopoll support in the swapin path, my microbenchmark (a task doing
    random memory writes) is about 10%~25% faster. CPU utilization
    increases a lot though, 2x and even 3x. This will depend on disk speed.

    While iopoll in swapin isn't intended for all use cases, it's a win for
    latency-sensitive workloads with a high-speed swap disk. The block
    layer has a knob to control polling at runtime. If polling isn't
    enabled in the block layer, there should be no noticeable change in
    swapin.

    I got a chance to run the same test on an NVMe device with DRAM as the
    media. In a simple fio IO test, blkpoll boosts performance by 50% in
    the single-thread test and by ~20% in the 8-thread test, so that is the
    baseline. In the swap test above, blkpoll boosts performance by ~27% in
    the single-thread test, but uses 2x the CPU time.

    If we enable hybrid polling, the performance gain drops very slightly
    but the CPU time is only 50% worse than without blkpoll. We can also
    adjust the hybrid poll parameters, which reduces the CPU time penalty
    further. In the 8-thread test, blkpoll doesn't help though: the
    performance is similar to that without blkpoll, but so is the CPU
    utilization, because there is lock contention in the swap path and the
    CPU time spent on blkpoll itself isn't high. So overall, blkpoll swapin
    isn't worse than swapin without it.

    Swapin readahead might read several pages at the same time and form a
    big IO request. Since that IO will take a longer time, it doesn't make
    sense to poll for it, so the patch only does iopoll for single-page
    swapin.
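
    The kind of polled read used above can also be issued from userspace
    with preadv2() and the RWF_HIPRI flag, provided polling is enabled on
    the queue (e.g. /sys/block/nvme0n1/queue/io_poll) and glibc is new
    enough to expose preadv2(). A minimal, self-contained sketch; the
    device path is only an example:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
            struct iovec iov;
            void *buf;
            int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);

            if (fd < 0 || posix_memalign(&buf, 4096, 4096))
                    return 1;
            iov.iov_base = buf;
            iov.iov_len = 4096;
            /* RWF_HIPRI asks the block layer to poll for this read's completion */
            if (preadv2(fd, &iov, 1, 0, RWF_HIPRI) != 4096)
                    perror("preadv2");
            free(buf);
            close(fd);
            return 0;
    }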

    [akpm@linux-foundation.org: coding-style fixes]
    Link: http://lkml.kernel.org/r/070c3c3e40b711e7b1390002c991e86a-b5408f0@7511894063d3764ff01ea8111f5a004d7dd700ed078797c204a24e620ddb965c
    Signed-off-by: Shaohua Li
    Cc: Tim Chen
    Cc: Huang Ying
    Cc: Jens Axboe
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Shaohua Li
     

07 Jul, 2017

3 commits

  • To reduce the contention on swap_info_struct->lock when freeing swap
    entries, the freed swap entries are first collected in a per-CPU
    buffer and really freed later in a batch. During the batch freeing, if
    consecutive swap entries in the per-CPU buffer belong to the same swap
    device, swap_info_struct->lock needs to be acquired/released only
    once, so the lock contention can be reduced greatly. But if there are
    multiple swap devices, the lock may be released/acquired
    unnecessarily, because the swap entries belonging to the same swap
    device may be non-consecutive in the per-CPU buffer.

    To solve the issue, the per-CPU buffer is sorted according to the swap
    device before freeing the swap entries.
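
    For illustration, the sort comes down to something like the following
    (the comparison helper is a sketch; only swp_type(), i.e. the swap
    device index, matters for the ordering, and "entries"/"n" stand for
    the per-CPU buffer and its fill level):

    static int swp_entry_cmp(const void *ent1, const void *ent2)
    {
            const swp_entry_t *e1 = ent1, *e2 = ent2;

            return (int)swp_type(*e1) - (int)swp_type(*e2);
    }

    /* sort so that entries of one swap device become consecutive */
    sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);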

    With the patch, the time to free the memory (some of it swapped out)
    is reduced by 11.6% (from 2.65s to 2.35s) in the vm-scalability
    swap-w-rand test case with 16 processes. The test is done on a Xeon E5
    v3 system. The swap device used is a RAM-simulated PMEM (persistent
    memory) device. To test swapping, the test case creates 16 processes,
    which allocate and write to anonymous pages until the RAM and part of
    the swap device are used up; finally, the memory (some of it swapped
    out) is freed before exit.

    [akpm@linux-foundation.org: tweak comment]
    Link: http://lkml.kernel.org/r/20170525005916.25249-1-ying.huang@intel.com
    Signed-off-by: Huang Ying
    Acked-by: Tim Chen
    Cc: Hugh Dickins
    Cc: Shaohua Li
    Cc: Minchan Kim
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     
  • Now that get_swap_page takes a struct page and allocates swap space
    according to the page size (i.e., normal or THP), it is cleaner to
    introduce put_swap_page as the counterpart of get_swap_page. It calls
    the right swap slot free function depending on the page's size.
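
    A rough sketch of the new counterpart (the names of the slot-freeing
    helpers are assumptions, not necessarily what the patch uses):

    void put_swap_page(struct page *page, swp_entry_t entry)
    {
            if (PageTransHuge(page))
                    swapcache_free_cluster(entry);  /* THP: free a whole cluster */
            else
                    swapcache_free(entry);          /* normal page: free one slot */
    }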

    [ying.huang@intel.com: minor cleanup and fix]
    Link: http://lkml.kernel.org/r/20170515112522.32457-3-ying.huang@intel.com
    Signed-off-by: Minchan Kim
    Signed-off-by: "Huang, Ying"
    Acked-by: Johannes Weiner
    Cc: Andrea Arcangeli
    Cc: Ebru Akagunduz
    Cc: Hugh Dickins
    Cc: Kirill A. Shutemov
    Cc: Michal Hocko
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
  • Patch series "THP swap: Delay splitting THP during swapping out", v11.

    This patchset is to optimize the performance of Transparent Huge Page
    (THP) swap.

    Recently, the performance of storage devices has improved so fast that
    we cannot saturate the disk bandwidth with a single logical CPU when
    doing page swap-out, even on a high-end server machine, because
    storage device performance has improved faster than that of a single
    logical CPU, and it seems that this trend will not change in the near
    future. On the other hand, THP is becoming more and more popular
    because of increased memory sizes. So it becomes necessary to optimize
    THP swap performance.

    The advantages of the THP swap support include:

    - Batch the swap operations for the THP to reduce lock
    acquiring/releasing, including allocating/freeing the swap space,
    adding/deleting to/from the swap cache, and writing/reading the swap
    space, etc. This will help improve the performance of the THP swap.

    - The THP swap space read/write will be 2M sequential IO. It is
    particularly helpful for the swap read, which is usually 4k random
    IO. This will improve the performance of the THP swap too.

    - It will help with memory fragmentation, especially when THP is
    heavily used by the applications. The 2M of contiguous pages will be
    freed up after the THP is swapped out.

    - It will improve THP utilization on systems with swap turned on,
    because khugepaged collapses normal pages into a THP quite slowly.
    After a THP is split during swap-out, it takes quite a long time for
    the normal pages to collapse back into a THP after being swapped in.
    High THP utilization also helps the efficiency of page-based memory
    management.

    There are some concerns regarding THP swap-in, mainly because the
    possibly enlarged read/write IO size (for swap in/out) may put more
    overhead on the storage device. To deal with that, THP swap-in should
    be turned on only when necessary. For example, it can be selected via
    "always/never/madvise" logic, to be turned on globally, turned off
    globally, or turned on only for VMAs with MADV_HUGEPAGE, etc.

    This patchset is the first step of THP swap support. The plan is to
    delay splitting the THP step by step, and finally avoid splitting the
    THP during swap-out and swap the THP out/in as a whole.

    As the first step, in this patchset, splitting the huge page is
    delayed from almost the first step of swapping out to after allocating
    the swap space for the THP and adding the THP into the swap cache.
    This will reduce the acquiring/releasing of the locks used for swap
    cache management.

    With the patchset, the swap-out throughput improves by 15.5% (from
    about 3.73GB/s to about 4.31GB/s) in the vm-scalability swap-w-seq
    test case with 8 processes. The test is done on a Xeon E5 v3 system.
    The swap device used is a RAM-simulated PMEM (persistent memory)
    device. To test the sequential swapping out, the test case creates 8
    processes, which sequentially allocate and write to anonymous pages
    until the RAM and part of the swap device are used up.

    This patch (of 5):

    In this patch, splitting the huge page is delayed from almost the
    first step of swapping out to after allocating the swap space for the
    THP (Transparent Huge Page) and adding the THP into the swap cache.
    This will batch the corresponding operations, thus improving THP
    swap-out throughput.

    This is the first step of the THP swap optimization. The plan is to
    delay splitting the THP step by step and finally avoid splitting the
    THP at all.

    In this patch, one swap cluster is used to hold the contents of each
    THP swapped out. So, the size of the swap cluster is changed to that
    of the THP (Transparent Huge Page) on the x86_64 architecture (512
    pages). For other architectures which want this THP swap optimization,
    ARCH_USES_THP_SWAP_CLUSTER needs to be selected in the Kconfig file
    for the architecture. In effect, this enlarges the swap cluster size
    by 2 times on x86_64, which may make it harder to find a free cluster
    when the swap space becomes fragmented, so this may reduce continuous
    swap space allocation and sequential writes in theory. The performance
    test in 0day shows no regressions caused by this.
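
    In other words, the cluster size selection might look roughly like
    this (a sketch using the names from this description; 256 is the
    historical cluster size implied by the "2 times" figure above):

    #ifdef CONFIG_ARCH_USES_THP_SWAP_CLUSTER
    #define SWAPFILE_CLUSTER        HPAGE_PMD_NR    /* 512 pages on x86_64 */
    #else
    #define SWAPFILE_CLUSTER        256
    #endif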

    In future steps of the THP swap optimization, some information about
    the swapped-out THP (such as the compound map count) will be recorded
    in the swap_cluster_info data structure.

    The mem cgroup swap accounting functions are enhanced to support
    charging or uncharging a swap cluster backing a THP as a whole.

    Swap cluster allocate/free functions are added to allocate/free a swap
    cluster for a THP. A fairly simple algorithm is used for swap cluster
    allocation: only the first swap device in the priority list is tried
    when allocating the swap cluster. The function fails if that attempt
    is not successful, and the caller falls back to allocating a single
    swap slot instead. This works well enough for normal cases. If the
    difference in the number of free swap clusters among multiple swap
    devices is significant, it is possible that some THPs are split
    earlier than necessary; for example, this could be caused by a big
    size difference among the swap devices.

    The swap cache functions are enhanced to support adding/deleting a THP
    to/from the swap cache as a set of (HPAGE_PMD_NR) sub-pages. This may
    be enhanced in the future with a multi-order radix tree. But because
    we will split the THP soon during swap-out anyway, that optimization
    doesn't make much sense for this first step.

    The THP splitting functions are enhanced to support splitting a THP in
    the swap cache during swap-out. The page lock is held while allocating
    the swap cluster, adding the THP into the swap cache and splitting the
    THP. So in code paths other than swap-out, if the THP needs to be
    split, PageSwapCache(THP) will always be false.

    Swap clusters are only used for SSDs, so the THP swap optimization in
    this patchset has no effect for HDDs.

    [ying.huang@intel.com: fix two issues in THP optimize patch]
    Link: http://lkml.kernel.org/r/87k25ed8zo.fsf@yhuang-dev.intel.com
    [hannes@cmpxchg.org: extensive cleanups and simplifications, reduce code size]
    Link: http://lkml.kernel.org/r/20170515112522.32457-2-ying.huang@intel.com
    Signed-off-by: "Huang, Ying"
    Signed-off-by: Johannes Weiner
    Suggested-by: Andrew Morton [for config option]
    Acked-by: Kirill A. Shutemov [for changes in huge_memory.c and huge_mm.h]
    Cc: Andrea Arcangeli
    Cc: Ebru Akagunduz
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Tejun Heo
    Cc: Hugh Dickins
    Cc: Shaohua Li
    Cc: Minchan Kim
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     

09 May, 2017

1 commit

  • Now vzalloc() is used in the swap code to allocate various data
    structures, such as the swap cache, the swap slots cache, cluster
    info, etc., because the size may be too large on some systems, so that
    a normal kzalloc() may fail. But using kzalloc() has some advantages,
    for example less memory fragmentation, less TLB pressure, etc. So
    change the data structure allocation in the swap code to use
    kvzalloc(), which tries kzalloc() first and falls back to vzalloc() if
    kzalloc() fails.
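
    The conversion pattern is simply the following (kvzalloc()/kvfree()
    are the stock helpers; the size and pointer names are placeholders):

    /* before: always allocated from the vmalloc area */
    p = vzalloc(size);
    ...
    vfree(p);

    /* after: try kzalloc() first, fall back to vzalloc() for large sizes */
    p = kvzalloc(size, GFP_KERNEL);
    ...
    kvfree(p);      /* frees either kind of allocation */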

    In general, although kmalloc() will reduce the number of high-order
    pages in the short term, vmalloc() will cause more pain for memory
    fragmentation in the long term. And the swap data structure
    allocations that are changed in this patch are expected to be
    long-term allocations.

    From Dave Hansen:
    "for example, we have a two-page data structure. vmalloc() takes two
    effectively random order-0 pages, probably from two different 2M pages
    and pins them. That "kills" two 2M pages. kmalloc(), allocating two
    *contiguous* pages, will not cross a 2M boundary. That means it will
    only "kill" the possibility of a single 2M page. More 2M pages == less
    fragmentation."

    The allocations in this patch occur at swapon time, which usually
    happens during system boot, so we usually have a good chance of
    allocating the contiguous pages successfully.

    The allocation for swap_map[] in struct swap_info_struct is not changed,
    because that is usually quite large and vmalloc_to_page() is used for
    it. That makes it a little harder to change.

    Link: http://lkml.kernel.org/r/20170407064911.25447-1-ying.huang@intel.com
    Signed-off-by: Huang Ying
    Acked-by: Tim Chen
    Acked-by: Michal Hocko
    Acked-by: Rik van Riel
    Cc: Dave Hansen
    Cc: Hugh Dickins
    Cc: Shaohua Li
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     

04 May, 2017

4 commits

  • In swapcache_free_entries(), if swap_info_get_cont() returns NULL,
    something is wrong with that swap entry. But we should still continue
    to free the following swap entries in the array instead of skipping
    them, to avoid a swap space leak. This is only a problem in an error
    path, where the system may be in an inconsistent state, but it is
    still good to fix it.
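
    A sketch of the fixed loop (the freeing helper's name and the exact
    locking are simplified assumptions here):

    prev = NULL;
    for (i = 0; i < n; i++) {
            p = swap_info_get_cont(entries[i], prev);
            if (p)
                    swap_entry_free(p, entries[i]);
            /* p == NULL means a bad entry; keep walking the array anyway */
            prev = p;
    }
    if (p)
            spin_unlock(&p->lock);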

    Link: http://lkml.kernel.org/r/20170421124739.24534-1-ying.huang@intel.com
    Signed-off-by: "Huang, Ying"
    Acked-by: Tim Chen
    Cc: Rik van Riel
    Cc: Hugh Dickins
    Cc: Shaohua Li
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     
  • The cluster lock is used to protect swap_cluster_info and the
    corresponding elements in swap_info_struct->swap_map[]. But it turns
    out that in scan_swap_map_slots(), swap_avail_lock may be acquired
    while the cluster lock is held. This does no good except making the
    locking more complex and increasing the potential locking contention,
    because swap_info_struct->lock already protects the data structures
    operated on in that code. Fix this by moving the corresponding
    operations in scan_swap_map_slots() out of the cluster lock.

    Link: http://lkml.kernel.org/r/20170317064635.12792-3-ying.huang@intel.com
    Signed-off-by: "Huang, Ying"
    Acked-by: Tim Chen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     
  • This is just a cleanup patch, no functionality change.

    In cluster_list_add_tail(), spin_lock_nested() is used to lock the
    cluster, while unlock_cluster() is used to unlock the cluster. To
    improve the code readability, use spin_unlock() directly to unlock the
    cluster.

    Link: http://lkml.kernel.org/r/20170317064635.12792-2-ying.huang@intel.com
    Signed-off-by: "Huang, Ying"
    Acked-by: Tim Chen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     
  • Before the cluster lock was used in free_swap_and_cache(),
    swap_info_struct->lock was held while freeing the swap entry and
    acquiring the page lock, so the page swap count could not change when
    testing the page information later. But now that the cluster lock is
    used, the cluster lock (or swap_info_struct->lock) is held only while
    freeing the swap entry. So before acquiring the page lock, the page
    swap count may be changed by another thread. If the page swap count is
    not 0, we should not delete the page from the swap cache. This is
    fixed by checking the page swap count again after acquiring the page
    lock.
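
    Schematically, the fix amounts to re-checking under the page lock; a
    simplified sketch (the lookup and cleanup details are assumptions):

    page = find_get_page(swap_address_space(entry), swp_offset(entry));
    if (page) {
            lock_page(page);
            /*
             * Only the cluster lock was held while the entry was freed,
             * so another task may have added a swap reference since then;
             * re-check the swap count under the page lock before dropping
             * the page from the swap cache.
             */
            if (page_swapcount(page) == 0)
                    delete_from_swap_cache(page);
            unlock_page(page);
            put_page(page);
    }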

    I found the race while reviewing the code, so I didn't trigger it via
    a test program. If the race occurs for an anonymous page shared by
    multiple processes via fork, multiple pages will be allocated and
    swapped in from the swap device for the previously shared single page.
    That is, the user-visible runtime effect is that more memory is used
    and the access latency for the page is higher, i.e. a performance
    regression.

    Link: http://lkml.kernel.org/r/20170301143905.12846-1-ying.huang@intel.com
    Signed-off-by: "Huang, Ying"
    Cc: Hugh Dickins
    Cc: Shaohua Li
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Tim Chen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     

02 Mar, 2017

2 commits

  • We are going to split a new header out of <linux/sched.h>, which will
    have to be picked up from other headers and a couple of .c files.

    Create a trivial placeholder file that just maps back to
    <linux/sched.h>, to make this patch obviously correct and bisectable.

    Include the new header in the files that are going to need it.

    Acked-by: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • We are going to split <linux/sched/mm.h> out of <linux/sched.h>, which
    will have to be picked up from other headers and a couple of .c files.

    Create a trivial placeholder <linux/sched/mm.h> file that just maps to
    <linux/sched.h> to make this patch obviously correct and bisectable.

    The APIs that are going to be moved first are:

    mm_alloc()
    __mmdrop()
    mmdrop()
    mmdrop_async_fn()
    mmdrop_async()
    mmget_not_zero()
    mmput()
    mmput_async()
    get_task_mm()
    mm_access()
    mm_release()

    Include the new header in the files that are going to need it.

    Acked-by: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

28 Feb, 2017

2 commits

  • We already have the helper; we can convert the rest of the kernel
    mechanically using:

    git grep -l 'atomic_inc_not_zero.*mm_users' | xargs sed -i 's/atomic_inc_not_zero(&\(.*\)->mm_users)/mmget_not_zero\(\1\)/'

    This is needed for a later patch that hooks into the helper, but might
    be a worthwhile cleanup on its own.
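
    Consistent with the sed pattern, the helper being converted to boils
    down to:

    static inline bool mmget_not_zero(struct mm_struct *mm)
    {
            /* pin mm_users only if the mm hasn't already been torn down */
            return atomic_inc_not_zero(&mm->mm_users);
    }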

    Link: http://lkml.kernel.org/r/20161218123229.22952-3-vegard.nossum@oracle.com
    Signed-off-by: Vegard Nossum
    Acked-by: Michal Hocko
    Acked-by: Peter Zijlstra (Intel)
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vegard Nossum
     
  • Apart from adding the helper function itself, the rest of the kernel is
    converted mechanically using:

    git grep -l 'atomic_inc.*mm_users' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_users);/mmget\(\1\);/'
    git grep -l 'atomic_inc.*mm_users' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_users);/mmget\(\&\1\);/'

    This is needed for a later patch that hooks into the helper, but might
    be a worthwhile cleanup on its own.
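
    As the sed patterns suggest, the helper itself is just:

    static inline void mmget(struct mm_struct *mm)
    {
            atomic_inc(&mm->mm_users);
    }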

    (Michal Hocko provided most of the kerneldoc comment.)

    Link: http://lkml.kernel.org/r/20161218123229.22952-2-vegard.nossum@oracle.com
    Signed-off-by: Vegard Nossum
    Acked-by: Michal Hocko
    Acked-by: Peter Zijlstra (Intel)
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vegard Nossum
     

23 Feb, 2017

8 commits

  • Initialize the swap slots cache and enable it on swapon. Drain all
    swap slots on swapoff.

    Link: http://lkml.kernel.org/r/07cbc94882fa95d4ac3cfc50b8dce0b1ec231b93.1484082593.git.tim.c.chen@linux.intel.com
    Signed-off-by: Tim Chen
    Cc: "Huang, Ying"
    Cc: Aaron Lu
    Cc: Andi Kleen
    Cc: Andrea Arcangeli
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: Hillf Danton
    Cc: Huang Ying
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Jonathan Corbet
    Cc: Kirill A. Shutemov
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Chen
     
  • We add per-CPU caches for swap slots, so that slots can be allocated
    and freed quickly without the need to touch the swap info lock.

    Two separate caches are maintained, one for allocated swap slots and
    one for returned swap slots. This allows the returned swap slots to be
    given back to the global pool in a batch, so they have a chance to be
    coalesced with other slots in a cluster. We do not reuse the returned
    slots right away, as that may increase fragmentation of the slots.

    The swap allocation cache is protected by a mutex, as we may sleep
    when searching for empty slots in the cache. The swap free cache is
    protected by a spinlock, as we cannot sleep in the free path.

    We refill the swap slots cache when we run out of slots, and we
    disable the swap slots cache and drain the slots if the global number
    of slots falls below a low watermark threshold. We re-enable the cache
    again when the available slots rise above a high watermark.
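
    A sketch of the per-CPU cache described above (field names are
    illustrative, following this description rather than the actual
    header):

    struct swap_slots_cache {
            struct mutex    alloc_lock; /* refill may sleep while scanning */
            swp_entry_t     *slots;     /* slots handed out by get_swap_page() */
            int             nr;
            int             cur;
            spinlock_t      free_lock;  /* the free path must not sleep */
            swp_entry_t     *slots_ret; /* freed slots, returned in a batch */
            int             n_ret;
    };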

    [ying.huang@intel.com: use raw_cpu_ptr over this_cpu_ptr for swap slots access]
    [tim.c.chen@linux.intel.com: add comments on locks in swap_slots.h]
    Link: http://lkml.kernel.org/r/20170118180327.GA24225@linux.intel.com
    Link: http://lkml.kernel.org/r/35de301a4eaa8daa2977de6e987f2c154385eb66.1484082593.git.tim.c.chen@linux.intel.com
    Signed-off-by: Tim Chen
    Signed-off-by: "Huang, Ying"
    Reviewed-by: Michal Hocko
    Cc: Aaron Lu
    Cc: Andi Kleen
    Cc: Andrea Arcangeli
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: Hillf Danton
    Cc: Huang Ying
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Jonathan Corbet
    Cc: Kirill A. Shutemov
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Chen
     
  • Add new functions that free unused swap slots in batches without the
    need to reacquire the swap info lock. This improves scalability and
    reduces lock contention.

    Link: http://lkml.kernel.org/r/c25e0fcdfd237ec4ca7db91631d3b9f6ed23824e.1484082593.git.tim.c.chen@linux.intel.com
    Signed-off-by: Tim Chen
    Signed-off-by: "Huang, Ying"
    Cc: Aaron Lu
    Cc: Andi Kleen
    Cc: Andrea Arcangeli
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: Hillf Danton
    Cc: Huang Ying
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Jonathan Corbet
    Cc: Kirill A. Shutemov
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Chen
     
  • Currently, swap slots are allocated one page at a time, causing
    contention on the swap_info lock protecting the swap partition for
    every page being swapped out.

    This patch adds new functions get_swap_pages and scan_swap_map_slots
    to request multiple swap slots at once. This reduces the lock
    contention on the swap_info lock. Also, scan_swap_map_slots can
    operate more efficiently, as swap slots often occur in clusters close
    to each other on a swap device and it is quicker to allocate them
    together.

    Link: http://lkml.kernel.org/r/9fec2845544371f62c3763d43510045e33d286a6.1484082593.git.tim.c.chen@linux.intel.com
    Signed-off-by: Tim Chen
    Signed-off-by: "Huang, Ying"
    Cc: Aaron Lu
    Cc: Andi Kleen
    Cc: Andrea Arcangeli
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: Hillf Danton
    Cc: Huang Ying
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Jonathan Corbet
    Cc: Kirill A. Shutemov
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Chen
     
  • We can avoid needlessly allocating a page for swap slots that are not
    used by anyone. No pages have to be read in for these slots.

    Link: http://lkml.kernel.org/r/0784b3f20b9bd3aa5552219624cb78dc4ae710c9.1484082593.git.tim.c.chen@linux.intel.com
    Signed-off-by: Tim Chen
    Signed-off-by: "Huang, Ying"
    Cc: Aaron Lu
    Cc: Andi Kleen
    Cc: Andrea Arcangeli
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: Hillf Danton
    Cc: Huang Ying
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Jonathan Corbet
    Cc: Kirill A. Shutemov
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Chen
     
  • This patch improves the scalability of swap out/in by using
    fine-grained locks for the swap cache. In the current kernel, one
    address space is used for each swap device, and in the common
    configuration the number of swap devices is very small (one is
    typical). This causes heavy lock contention on the radix tree of the
    address space if multiple tasks swap out/in concurrently.

    But in fact, there is no dependency between pages in the swap cache,
    so we can split the one address space shared per swap device into
    several address spaces to reduce the lock contention. In this patch,
    the shared address space is split into 64MB chunks. 64MB is chosen to
    balance memory space usage against the effect of the lock contention
    reduction.
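
    Concretely, the lookup becomes an indexed array of address spaces per
    device; a sketch (the constants follow from 64MB of 4kB pages, the
    names from the swap cache code as described here):

    #define SWAP_ADDRESS_SPACE_SHIFT        14      /* 2^14 pages * 4kB = 64MB */
    #define SWAP_ADDRESS_SPACE_PAGES        (1UL << SWAP_ADDRESS_SPACE_SHIFT)

    static inline struct address_space *swap_address_space(swp_entry_t entry)
    {
            return &swapper_spaces[swp_type(entry)]
                    [swp_offset(entry) >> SWAP_ADDRESS_SPACE_SHIFT];
    }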

    The size of struct address_space on x86_64 architecture is 408B, so with
    the patch, 6528B more memory will be used for every 1GB swap space on
    x86_64 architecture.

    One address space is still shared by all the swap entries in the same
    64MB chunk. To avoid lock contention for the first round of swap space
    allocation, the order of the swap clusters in the initial free
    clusters list is changed so that the swap space distance between
    consecutive swap clusters in the free cluster list is at least 64MB.
    After the first round of allocation, the swap clusters are expected to
    be freed randomly, so the lock contention should be reduced
    effectively.

    Link: http://lkml.kernel.org/r/735bab895e64c930581ffb0a05b661e01da82bc5.1484082593.git.tim.c.chen@linux.intel.com
    Signed-off-by: "Huang, Ying"
    Signed-off-by: Tim Chen
    Cc: Aaron Lu
    Cc: Andi Kleen
    Cc: Andrea Arcangeli
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: Hillf Danton
    Cc: Huang Ying
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Jonathan Corbet
    Cc: Kirill A. Shutemov
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang, Ying
     
  • This patch reduces the contention on swap_info_struct->lock by using
    a more fine-grained lock in swap_cluster_info for some swap
    operations. swap_info_struct->lock is heavily contended if multiple
    processes reclaim pages simultaneously, because there is only one lock
    for each swap device, a common configuration has only one or a few
    swap devices in the system, and the lock protects almost all
    swap-related operations.

    In fact, many swap operations only access one element of the
    swap_info_struct->swap_map array, and there is no dependency between
    different elements of swap_info_struct->swap_map. So a fine-grained
    lock can be used to allow parallel access to the different elements of
    swap_info_struct->swap_map.

    In this patch, a spinlock is added to swap_cluster_info to protect the
    elements of swap_info_struct->swap_map in the swap cluster as well as
    the fields of swap_cluster_info itself. This greatly reduces the
    locking contention for swap_info_struct->swap_map access.

    Because of the added spinlock, the size of swap_cluster_info increases
    from 4 bytes to 8 bytes on both 64-bit and 32-bit systems. This uses
    an additional 4kB of RAM for every 1GB of swap space.
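
    The growth comes from the lock itself; a sketch of the resulting
    structure (field comments are illustrative):

    struct swap_cluster_info {
            spinlock_t lock;        /* protects this cluster's swap_map[]
                                     * range and the fields below */
            unsigned int data:24;   /* usage count / next free cluster */
            unsigned int flags:8;   /* CLUSTER_FLAG_* */
    };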

    Because the size of swap_cluster_info is much smaller than a cache
    line (8 vs 64 bytes on the x86_64 architecture), there may be false
    cache line sharing between the spinlocks in swap_cluster_info. To
    avoid this false sharing during the first round of swap cluster
    allocation, the order of the swap clusters in the free clusters list
    is changed, so that swap_cluster_info structures sharing the same
    cache line are placed as far apart as possible in the list. After the
    first round of allocation, the order of the clusters in the free
    clusters list is expected to be random, so the false sharing should
    not be serious.

    Compared with a previous implementation using bit_spin_lock, the
    sequential swap-out throughput improved by about 3.2%. The test was
    done on a Xeon E5 v3 system. The swap device used is a RAM-simulated
    PMEM (persistent memory) device. To test the sequential swapping out,
    the test case created 32 processes, which sequentially allocate and
    write to anonymous pages until the RAM and part of the swap device
    are used up.

    [ying.huang@intel.com: v5]
    Link: http://lkml.kernel.org/r/878tqeuuic.fsf_-_@yhuang-dev.intel.com
    [minchan@kernel.org: initialize spinlock for swap_cluster_info]
    Link: http://lkml.kernel.org/r/1486434945-29753-1-git-send-email-minchan@kernel.org
    [hughd@google.com: annotate nested locking for cluster lock]
    Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1702161050540.21773@eggly.anvils
    Link: http://lkml.kernel.org/r/dbb860bbd825b1aaba18988015e8963f263c3f0d.1484082593.git.tim.c.chen@linux.intel.com
    Signed-off-by: "Huang, Ying"
    Signed-off-by: Tim Chen
    Signed-off-by: Minchan Kim
    Signed-off-by: Hugh Dickins
    Cc: Aaron Lu
    Cc: Andi Kleen
    Cc: Andrea Arcangeli
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: Hillf Danton
    Cc: Huang Ying
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Jonathan Corbet
    Cc: Kirill A. Shutemov
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Rik van Riel
    Cc: Shaohua Li
    Cc: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang, Ying
     
  • Patch series "mm/swap: Regular page swap optimizations", v5.

    Times have changed. The coming generation of solid-state block devices
    has latencies getting down to sub-100 usec, which is within an order
    of magnitude of DRAM, and their performance is orders of magnitude
    higher than the single-spindle rotational media we've swapped to
    historically.

    This could benefit many usage scenarios, for example cloud providers
    who overcommit their memory (as VMs don't use all the memory
    provisioned). Having fast swap will allow them to be more aggressive
    in memory overcommit and fit more VMs onto a platform.

    In our testing [see footnote], the median latency that the kernel adds
    to a page fault is 15 usec, which comes quite close to the amount that
    will be contributed by the underlying I/O devices.

    The software latency comes mostly from contention on the locks
    protecting the radix tree of the swap cache and on the locks
    protecting the individual swap devices. The lock contention already
    consumed 35% of CPU cycles in our test. In the very near future,
    software latency will become the bottleneck for swap performance as
    block device I/O latency gets within shouting distance of DRAM speed.

    This patch set reduced the median page fault latency from 15 usec to 4
    usec (a 3.75x improvement) for a DRAM-based pmem block device.

    This patch (of 9):

    swap_info_get() is used not only in the swap free code path but also
    in page_swapcount(), etc. So the original kernel message in
    swap_info_get() is not correct any more. Fix it by replacing
    "swap_free" with "swap_info_get" in the message.

    Link: http://lkml.kernel.org/r/9b5f8bd6266f9da978c373f2384c8044df5e262c.1484082593.git.tim.c.chen@linux.intel.com
    Signed-off-by: "Huang, Ying"
    Signed-off-by: Tim Chen
    Reviewed-by: Rik van Riel
    Cc: Aaron Lu
    Cc: Andi Kleen
    Cc: Andrea Arcangeli
    Cc: Christian Borntraeger
    Cc: Dave Hansen
    Cc: Hillf Danton
    Cc: Huang Ying
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Jonathan Corbet
    Cc: Kirill A. Shutemov
    Cc: Michal Hocko
    Cc: Minchan Kim
    Cc: Shaohua Li
    Cc: Vladimir Davydov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang, Ying
     

11 Jan, 2017

1 commit

  • During development of zram-swap asynchronous writeback, I found
    strange corruption of a compressed page, resulting in:

    Modules linked in: zram(E)
    CPU: 3 PID: 1520 Comm: zramd-1 Tainted: G E 4.8.0-mm1-00320-ge0d4894c9c38-dirty #3274
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
    task: ffff88007620b840 task.stack: ffff880078090000
    RIP: set_freeobj.part.43+0x1c/0x1f
    RSP: 0018:ffff880078093ca8 EFLAGS: 00010246
    RAX: 0000000000000018 RBX: ffff880076798d88 RCX: ffffffff81c408c8
    RDX: 0000000000000018 RSI: 0000000000000000 RDI: 0000000000000246
    RBP: ffff880078093cb0 R08: 0000000000000000 R09: 0000000000000000
    R10: ffff88005bc43030 R11: 0000000000001df3 R12: ffff880076798d88
    R13: 000000000005bc43 R14: ffff88007819d1b8 R15: 0000000000000001
    FS: 0000000000000000(0000) GS:ffff88007e380000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 00007fc934048f20 CR3: 0000000077b01000 CR4: 00000000000406e0
    Call Trace:
    obj_malloc+0x22b/0x260
    zs_malloc+0x1e4/0x580
    zram_bvec_rw+0x4cd/0x830 [zram]
    page_requests_rw+0x9c/0x130 [zram]
    zram_thread+0xe6/0x173 [zram]
    kthread+0xca/0xe0
    ret_from_fork+0x25/0x30

    Investigation reveals that stable pages currently don't cover
    anonymous pages. IOW, reuse_swap_page can reuse the page without
    waiting for writeback completion, so it can overwrite a page zram is
    compressing.

    Unfortunately, zram has used the per-cpu stream feature since v4.7.
    It aims to increase the cache hit ratio of the scratch buffer used
    for compression. The downside of that approach is that zram has to
    ask for memory space for the compressed page in per-cpu context,
    which requires a stricter gfp flag and can fail. If so, it retries
    the allocation outside of per-cpu context so it can get memory this
    time, compresses the data again, and copies it into that memory
    space.

    In this scenario, zram assumes the data never changes, but that is
    not true without stable page support. So, if the data is changed
    under us, zram can cause a buffer overrun, because the second
    compression size could be bigger than the one we got in the previous
    trial, and it blindly copies the bigger object into the smaller
    buffer. The overrun breaks zsmalloc's free object chaining, so the
    system crashes as shown above.

    I think the report below is the same problem:
    https://bugzilla.suse.com/show_bug.cgi?id=997574

    Unfortunately, reuse_swap_page must be atomic, so we cannot wait on
    writeback there; the approach in this patch is to simply return false
    if we find the page needs stable-page treatment. Although this
    temporarily increases the memory footprint, it happens rarely and the
    extra memory should be easily reclaimed when it does. It is also
    better than waiting for IO completion, which is on the critical path
    for application latency.

    Fixes: da9556a2367c ("zram: user per-cpu compression streams")
    Link: http://lkml.kernel.org/r/20161120233015.GA14113@bbox
    Link: http://lkml.kernel.org/r/1482366980-3782-2-git-send-email-minchan@kernel.org
    Signed-off-by: Minchan Kim
    Acked-by: Hugh Dickins
    Cc: Sergey Senozhatsky
    Cc: Darrick J. Wong
    Cc: Takashi Iwai
    Cc: Hyeoncheol Lee
    Cc: Sangseok Lee
    Cc: [4.7+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     

13 Dec, 2016

1 commit

  • Add a cond_resched() in the unuse_pmd_range() loop (so as to call it
    even when pmd none or trans_huge, like zap_pmd_range() does); and in the
    unuse_mm() loop (since that might skip over many vmas). shmem_unuse()
    and radix_tree_locate_item() look good enough already.

    Those were the obvious places, but in fact the stalls came from
    find_next_to_unuse(), which sometimes scans through many unused entries.
    Apply scan_swap_map()'s LATENCY_LIMIT of 256 there too; and only go off
    to test frontswap_map when a used entry is found.

    Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1612052155140.13021@eggly.anvils
    Signed-off-by: Hugh Dickins
    Reported-by: Eric Dumazet
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

12 Nov, 2016

1 commit

  • When root activates a swap partition whose header has the wrong
    endianness, nr_badpages elements of badpages are swabbed before
    nr_badpages has been checked, leading to a buffer overrun of up to 8GB.

    This normally is not a security issue because it can only be exploited
    by root (more specifically, a process with CAP_SYS_ADMIN or the ability
    to modify a swap file/partition), and such a process can already e.g.
    modify swapped-out memory of any other userspace process on the system.

    Link: http://lkml.kernel.org/r/1477949533-2509-1-git-send-email-jann@thejh.net
    Signed-off-by: Jann Horn
    Acked-by: Kees Cook
    Acked-by: Jerome Marchand
    Acked-by: Johannes Weiner
    Cc: "Kirill A. Shutemov"
    Cc: Vlastimil Babka
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jann Horn
     

08 Oct, 2016

2 commits

  • This patch improves the performance of swap cache operations when the
    type of the swap device is not 0. Originally, the whole swap entry
    value is used as the key of the swap cache, even though there is one
    radix tree for each swap device. If the type of the swap device is not
    0, the height of the radix tree of the swap cache is increased
    unnecessarily, especially on 64-bit architectures. For example, for a
    1GB swap device on the x86_64 architecture, the height of the radix
    tree of the swap cache is 11. But if the offset of the swap entry is
    used as the key of the swap cache, the height of the radix tree of the
    swap cache is 4. The increased height causes unnecessary radix tree
    descents and an increased cache footprint.

    This patch reduces the height of the radix tree of the swap cache by
    using the offset of the swap entry instead of the whole swap entry
    value as the key of the swap cache. In a 32-process sequential
    swap-out test case on a Xeon E5 v3 system with a RAM disk as swap, the
    lock contention on the spinlock of the swap cache is reduced from
    20.15% to 12.19% when the type of the swap device is 1.

    Use the whole swap entry as key,

    perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.__add_to_swap_cache.add_to_swap_cache.add_to_swap.shrink_page_list: 10.37,
    perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg: 9.78,

    Use the swap offset as key,

    perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.__add_to_swap_cache.add_to_swap_cache.add_to_swap.shrink_page_list: 6.25,
    perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg: 5.94,
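
    The key change itself is small; schematically (page_tree was the radix
    tree root in struct address_space at the time):

    /* before: the whole swap entry value indexes the radix tree */
    error = radix_tree_insert(&address_space->page_tree, entry.val, page);

    /* after: only the swap offset is used, keeping the tree shallow */
    error = radix_tree_insert(&address_space->page_tree,
                              swp_offset(entry), page);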

    Link: http://lkml.kernel.org/r/1473270649-27229-1-git-send-email-ying.huang@intel.com
    Signed-off-by: "Huang, Ying"
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Vladimir Davydov
    Cc: "Kirill A. Shutemov"
    Cc: Dave Hansen
    Cc: Dan Williams
    Cc: Joonsoo Kim
    Cc: Hugh Dickins
    Cc: Mel Gorman
    Cc: Minchan Kim
    Cc: Aaron Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     
  • This is a code cleanup patch without functionality changes. The
    swap_cluster_list data structure and its operations are introduced to
    provide better encapsulation for the free cluster and discard cluster
    list operations. This avoids some code duplication, improves the code
    readability, and reduces the total line count.

    [akpm@linux-foundation.org: coding-style fixes]
    Link: http://lkml.kernel.org/r/1472067356-16004-1-git-send-email-ying.huang@intel.com
    Signed-off-by: "Huang, Ying"
    Acked-by: Minchan Kim
    Acked-by: Rik van Riel
    Cc: Tim Chen
    Cc: Hugh Dickins
    Cc: Shaohua Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Ying
     

20 Sep, 2016

1 commit

  • Commit 62c230bc1790 ("mm: add support for a filesystem to activate
    swap files and use direct_IO for writing swap pages") replaced the
    swap_aops dirty hook from __set_page_dirty_no_writeback() with
    swap_set_page_dirty().

    For normal cases without these special SWP flags, the code path falls
    back to __set_page_dirty_no_writeback(), so the behaviour is expected
    to be the same as before.

    But swap_set_page_dirty() makes use of the page_swap_info() helper to
    get the swap_info_struct and check for flags like SWP_FILE,
    SWP_BLKDEV, etc. as needed by those features. This helper has a
    BUG_ON(!PageSwapCache(page)) which is racy and safe only for the
    set_page_dirty_lock() path.

    For the set_page_dirty() path, which often needs to be callable from
    irq context, kswapd() can toggle the flag behind our back while the
    call is executing, when the system is low on memory and heavy swapping
    is ongoing.

    This ends up in an undesired kernel panic.

    This patch just moves the check out of the helper and into its callers
    as appropriate, to fix the kernel panic for the described path. A
    couple of users of the helper already take care of the SwapCache
    condition, so I skipped them.

    Link: http://lkml.kernel.org/r/1473460718-31013-1-git-send-email-santosh.shilimkar@oracle.com
    Signed-off-by: Santosh Shilimkar
    Cc: Mel Gorman
    Cc: Joe Perches
    Cc: Peter Zijlstra
    Cc: Rik van Riel
    Cc: David S. Miller
    Cc: Jens Axboe
    Cc: Michal Hocko
    Cc: Hugh Dickins
    Cc: Al Viro
    Cc: [4.7.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Santosh Shilimkar
     

27 Jul, 2016

1 commit

  • I have noticed that frontswap.h first declares "frontswap_enabled" as
    an extern bool variable, and then overrides it with "#define
    frontswap_enabled (1)" for CONFIG_FRONTSWAP=Y, or (0) when disabled.
    The bool variable isn't actually instantiated anywhere.

    This all looks like an unfinished attempt to make frontswap_enabled
    reflect whether a backend is instantiated. But in the current state,
    all frontswap hooks call unconditionally into frontswap.c just to
    check whether frontswap_ops is non-NULL. This should at least be
    checked inline, but we can further eliminate the overhead when
    CONFIG_FRONTSWAP is enabled and no backend is registered, by using a
    static key that is initially disabled and gets enabled only upon the
    first backend registration.

    Thus, checks for "frontswap_enabled" are replaced with
    "frontswap_enabled()" wrapping the static key check. There are two
    exceptions:

    - xen's selfballoon_process() was testing frontswap_enabled in code guarded
    by #ifdef CONFIG_FRONTSWAP, which was effectively always true when reachable.
    The patch just removes this check. Using frontswap_enabled() does not sound
    correct here, as this can be true even without xen's own backend being
    registered.

    - in SYSCALL_DEFINE2(swapon), change the check to IS_ENABLED(CONFIG_FRONTSWAP)
    as it seems the bitmap allocation cannot currently be postponed until a
    backend is registered. This means that frontswap will still have some
    memory overhead by being configured, but without a backend.

    After the patch, we can expect that some functions in frontswap.c are
    called only when frontswap_ops is non-NULL. Change the checks there to
    VM_BUG_ONs. While at it, convert other BUG_ONs to VM_BUG_ONs as
    frontswap has been stable for some time.
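
    For illustration, the wrapper described above comes down to a static
    key check along these lines (the key's name is assumed to match the
    description):

    extern struct static_key_false frontswap_enabled_key;

    static inline bool frontswap_enabled(void)
    {
            return static_branch_unlikely(&frontswap_enabled_key);
    }

    /* flipped on when the first backend registers its frontswap_ops */
    static_branch_inc(&frontswap_enabled_key);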

    [akpm@linux-foundation.org: coding-style fixes]
    Link: http://lkml.kernel.org/r/1463152235-9717-1-git-send-email-vbabka@suse.cz
    Signed-off-by: Vlastimil Babka
    Cc: Konrad Rzeszutek Wilk
    Cc: Boris Ostrovsky
    Cc: David Vrabel
    Cc: Juergen Gross
    Cc: "Kirill A. Shutemov"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vlastimil Babka
     

13 May, 2016

1 commit

  • This provides full accuracy for the mapcount calculation in
    write-protect faults, so page pinning will not get broken by
    false-positive copy-on-writes.

    total_mapcount() isn't the right calculation needed in
    reuse_swap_page(), so this introduces a page_trans_huge_mapcount()
    that is effectively the fully accurate return value of page_mapcount()
    when dealing with Transparent Hugepages. However, we only use
    page_trans_huge_mapcount() during COW faults, where it is strictly
    needed, because of its higher runtime cost.

    This also provides, at practically zero cost, the total_mapcount
    information, which is needed to know whether we can still relocate the
    page anon_vma to the local vma. If page_trans_huge_mapcount() returns
    1, we can reuse the page no matter whether it is a pte or a
    pmd_trans_huge triggering the fault, but we can only relocate the page
    anon_vma to the local vma->anon_vma if we are sure it is only this
    "vma" mapping the whole THP physical range.

    Kirill A. Shutemov discovered the problem with moving the page
    anon_vma to the local vma->anon_vma in a previous version of this
    patch and another problem in the way page_move_anon_rmap() was called.

    Andrew Morton discovered that CONFIG_SWAP=n wouldn't build in a
    previous version, because reuse_swap_page must be a macro to call
    page_trans_huge_mapcount from swap.h, so this uses a macro again
    instead of an inline function. With this change at least it's a less
    dangerous usage than it was before, because "page" is used only once
    now, while with the previous code reuse_swap_page(page++) would have
    called page_mapcount on page+1 and it would have increased page twice
    instead of just once.

    Dean Luick noticed an uninitialized variable that could result in a
    rmap inefficiency for the non-THP case in a previous version.

    Mike Marciniszyn said:

    : Our RDMA tests are seeing an issue with memory locking that bisects to
    : commit 61f5d698cc97 ("mm: re-enable THP")
    :
    : The test program registers two rather large MRs (512M) and RDMA
    : writes data to a passive peer using the first and RDMA reads it back
    : into the second MR and compares that data. The sizes are chosen randomly
    : between 0 and 1024 bytes.
    :
    : The test will get through a few (
    Reviewed-by: "Kirill A. Shutemov"
    Reviewed-by: Dean Luick
    Tested-by: Alex Williamson
    Tested-by: Mike Marciniszyn
    Tested-by: Josh Collier
    Cc: Marc Haber
    Cc: [4.5]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrea Arcangeli
     

05 Apr, 2016

1 commit

  • PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced *long* time
    ago with promise that one day it will be possible to implement page
    cache with bigger chunks than PAGE_SIZE.

    This promise never materialized, and it is unlikely that it ever will.

    We have many places where PAGE_CACHE_SIZE is assumed to be equal to
    PAGE_SIZE, and it is a constant source of confusion whether a
    PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
    especially on the border between fs and mm.

    Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too
    much breakage to be doable.

    Let's stop pretending that pages in page cache are special. They are
    not.

    The changes are pretty straight-forward:

    - << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <nothing>;

    - >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <nothing>;

    - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};

    - page_cache_get() -> get_page();

    - page_cache_release() -> put_page();

    This patch contains automated changes generated with coccinelle using
    the script below. For some reason, coccinelle doesn't patch header
    files; I've called spatch for them manually.

    The only adjustment after coccinelle is reverting the changes to the
    PAGE_CACHE_ALIGN definition: we are going to drop it later.

    There are a few places in the code that coccinelle didn't reach. I'll
    fix them manually in a separate patch. Comments and documentation will
    also be addressed in a separate patch.

    virtual patch

    @@
    expression E;
    @@
    - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    expression E;
    @@
    - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    @@
    - PAGE_CACHE_SHIFT
    + PAGE_SHIFT

    @@
    @@
    - PAGE_CACHE_SIZE
    + PAGE_SIZE

    @@
    @@
    - PAGE_CACHE_MASK
    + PAGE_MASK

    @@
    expression E;
    @@
    - PAGE_CACHE_ALIGN(E)
    + PAGE_ALIGN(E)

    @@
    expression E;
    @@
    - page_cache_get(E)
    + get_page(E)

    @@
    expression E;
    @@
    - page_cache_release(E)
    + put_page(E)

    Signed-off-by: Kirill A. Shutemov
    Acked-by: Michal Hocko
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
     

22 Mar, 2016

1 commit

  • Pull drm updates from Dave Airlie:
    "This is the main drm pull request for 4.6 kernel.

    Overall the coolest thing here for me is the nouveau maxwell signed
    firmware support from NVidia, it's taken a long while to extract this
    from them.

    I also wish the ARM vendors just designed one set of display IP, ARM
    display block proliferation is definitely increasing.

    Core:
    - drm_event cleanups
    - Internal API cleanup making mode_fixup optional.
    - Apple GMUX vga switcheroo support.
    - DP AUX testing interface

    Panel:
    - Refactoring of DSI core for use over more transports.

    New driver:
    - ARM hdlcd driver

    i915:
    - FBC/PSR (framebuffer compression, panel self refresh) enabled by default.
    - Ongoing atomic display support work
    - Ongoing runtime PM work
    - Pixel clock limit checks
    - VBT DSI description support
    - GEM fixes
    - GuC firmware scheduler enhancements

    amdkfd:
    - Deferred probing fixes to avoid make file or link ordering.

    amdgpu/radeon:
    - ACP support for i2s audio support.
    - Command Submission/GPU scheduler/GPUVM optimisations
    - Initial GPU reset support for amdgpu

    vmwgfx:
    - Support for DX10 gen mipmaps
    - Pageflipping and other fixes.

    exynos:
    - Exynos5420 SoC support for FIMD
    - Exynos5422 SoC support for MIPI-DSI

    nouveau:
    - GM20x secure boot support - adds acceleration for Maxwell GPUs.
    - GM200 support
    - GM20B clock driver support
    - Power sensors work

    etnaviv:
    - Correctness fixes for GPU cache flushing
    - Better support for i.MX6 systems.

    imx-drm:
    - VBlank IRQ support
    - Fence support
    - OF endpoint support

    msm:
    - HDMI support for 8996 (snapdragon 820)
    - Adreno 430 support
    - Timestamp queries support

    virtio-gpu:
    - Fixes for Android support.

    rockchip:
    - Add support for Innosilicon HDMI

    rcar-du:
    - Support for 4 crtcs
    - R8A7795 support
    - RCar Gen 3 support

    omapdrm:
    - HDMI interlace output support
    - dma-buf import support
    - Refactoring to remove a lot of legacy code.

    tilcdc:
    - Rewrite of pageflipping code
    - dma-buf support
    - pinctrl support

    vc4:
    - HDMI modesetting bug fixes
    - Significant 3D performance improvement.

    fsl-dcu (FreeScale):
    - Lots of fixes

    tegra:
    - Two small fixes

    sti:
    - Atomic support for planes
    - Improved HDMI support"

    * 'drm-next' of git://people.freedesktop.org/~airlied/linux: (1063 commits)
    drm/amdgpu: release_pages requires linux/pagemap.h
    drm/sti: restore mode_fixup callback
    drm/amdgpu/gfx7: add MTYPE definition
    drm/amdgpu: removing BO_VAs shouldn't be interruptible
    drm/amd/powerplay: show uvd/vce power gate enablement for tonga.
    drm/amd/powerplay: show uvd/vce power gate info for fiji
    drm/amdgpu: use sched fence if possible
    drm/amdgpu: move ib.fence to job.fence
    drm/amdgpu: give a fence param to ib_free
    drm/amdgpu: include the right version of gmc header files for iceland
    drm/radeon: fix indentation.
    drm/amd/powerplay: add uvd/vce dpm enabling flag to fix the performance issue for CZ
    drm/amdgpu: switch back to 32bit hw fences v2
    drm/amdgpu: remove amdgpu_fence_is_signaled
    drm/amdgpu: drop the extra fence range check v2
    drm/amdgpu: signal fences directly in amdgpu_fence_process
    drm/amdgpu: cleanup amdgpu_fence_wait_empty v2
    drm/amdgpu: keep all fences in an RCU protected array v2
    drm/amdgpu: add number of hardware submissions to amdgpu_fence_driver_init_ring
    drm/amdgpu: RCU protected amd_sched_fence_release
    ...

    Linus Torvalds
     

18 Mar, 2016

1 commit

  • Kernel style prefers a single string over split strings when the string is
    'user-visible'.

    Miscellanea:

    - Add a missing newline
    - Realign arguments

    Signed-off-by: Joe Perches
    Acked-by: Tejun Heo [percpu]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

09 Feb, 2016

1 commit

  • - support for v3 vbt dsi blocks (Jani)
    - improve mmio debug checks (Mika Kuoppala)
    - reorg the ddi port translation table entries and related code (Ville)
    - reorg gen8 interrupt handling for future platforms (Tvrtko)
    - refactor tile width/height computations for framebuffers (Ville)
    - kerneldoc integration for intel_pm.c (Jani)
    - move default context from engines to device-global dev_priv (Dave Gordon)
    - make seqno/irq ordering coherent with execlist (Chris)
    - decouple internal engine number from UABI (Chris&Tvrtko)
    - tons of small fixes all over, as usual

    * tag 'drm-intel-next-2016-01-24' of git://anongit.freedesktop.org/drm-intel: (148 commits)
    drm/i915: Update DRIVER_DATE to 20160124
    drm/i915: Seal busy-ioctl uABI and prevent leaking of internal ids
    drm/i915: Decouple execbuf uAPI from internal implementation
    drm/i915: Use ordered seqno write interrupt generation on gen8+ execlists
    drm/i915: Limit the auto arming of mmio debugs on vlv/chv
    drm/i915: Tune down "GT register while GT waking disabled" message
    drm/i915: tidy up a few leftovers
    drm/i915: abolish separate per-ring default_context pointers
    drm/i915: simplify allocation of driver-internal requests
    drm/i915: Fix NULL plane->fb oops on SKL
    drm/i915: Do not put big intel_crtc_state on the stack
    Revert "drm/i915: Add two-stage ILK-style watermark programming (v10)"
    drm/i915: add DOC: headline to RC6 kernel-doc
    drm/i915: turn some bogus kernel-doc comments to normal comments
    drm/i915/sdvo: revert bogus kernel-doc comments to normal comments
    drm/i915/gen9: Correct max save/restore register count during gpu reset with GuC
    drm/i915: Demote user facing DMC firmware load failure message
    drm/i915: use hlist_for_each_entry
    drm/i915: skl_update_scaler() wants a rotation bitmask instead of bit number
    drm/i915: Don't reject primary plane windowing with color keying enabled on SKL+
    ...

    Dave Airlie
     

23 Jan, 2016

1 commit

  • parallel to mutex_{lock,unlock,trylock,is_locked,lock_nested},
    inode_foo(inode) being mutex_foo(&inode->i_mutex).

    Please, use those for access to ->i_mutex; over the coming cycle
    ->i_mutex will become rwsem, with ->lookup() done with it held
    only shared.

    Signed-off-by: Al Viro

    Al Viro
     

21 Jan, 2016

2 commits

  • Swap cache pages are freed aggressively if swap is nearly full (>50%
    currently), because otherwise we are likely to stop scanning anonymous
    pages when we near the swap limit, even if there are plenty of
    freeable swap cache pages. We should follow the same approach in the
    case of a memory cgroup, which has its own swap limit.

    Signed-off-by: Vladimir Davydov
    Acked-by: Johannes Weiner
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vladimir Davydov
     
  • This patchset introduces swap accounting to cgroup2.

    This patch (of 7):

    In the legacy hierarchy we charge memsw, which is dubious, because:

    - memsw.limit must be >= memory.limit, so it is impossible to limit
    swap usage less than memory usage. Taking into account the fact that
    the primary limiting mechanism in the unified hierarchy is
    memory.high while memory.limit is either left unset or set to a very
    large value, moving memsw.limit knob to the unified hierarchy would
    effectively make it impossible to limit swap usage according to the
    user preference.

    - memsw.usage != memory.usage + swap.usage, because a page occupying
    both swap entry and a swap cache page is charged only once to memsw
    counter. As a result, it is possible to effectively eat up to
    memory.limit of memory pages *and* memsw.limit of swap entries, which
    looks unexpected.

    That said, we should provide a different swap limiting mechanism for
    cgroup2.

    This patch adds mem_cgroup->swap counter, which charges the actual number
    of swap entries used by a cgroup. It is only charged in the unified
    hierarchy, while the legacy hierarchy memsw logic is left intact.

    The swap usage can be monitored using new memory.swap.current file and
    limited using memory.swap.max.

    Note, to charge swap resource properly in the unified hierarchy, we have
    to make swap_entry_free uncharge swap only when ->usage reaches zero, not
    just ->count, i.e. when all references to a swap entry, including the one
    taken by swap cache, are gone. This is necessary, because otherwise
    swap-in could result in uncharging swap even if the page is still in swap
    cache and hence still occupies a swap entry. At the same time, this
    shouldn't break memsw counter logic, where a page is never charged twice
    for using both memory and swap, because in case of legacy hierarchy we
    uncharge swap on commit (see mem_cgroup_commit_charge).

    Signed-off-by: Vladimir Davydov
    Acked-by: Johannes Weiner
    Cc: Michal Hocko
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vladimir Davydov
     

16 Jan, 2016

3 commits

  • Both s390 and powerpc have hit the issue of swapoff hanging, when
    CONFIG_HAVE_ARCH_SOFT_DIRTY and CONFIG_MEM_SOFT_DIRTY ifdefs were not
    quite as x86_64 had them. I think it would be much clearer if
    HAVE_ARCH_SOFT_DIRTY was just a Kconfig option set by architectures to
    determine whether the MEM_SOFT_DIRTY option should be offered, and the
    actual code depend upon CONFIG_MEM_SOFT_DIRTY alone.

    But I won't embark on that change myself: instead, make swapoff more
    robust by using pte_swp_clear_soft_dirty() on each pte it encounters,
    without an explicit #ifdef CONFIG_MEM_SOFT_DIRTY. That is a no-op,
    whether the bit in question is defined as 0 or the asm-generic
    fallback is used, unless soft dirty is fully turned on.

    Why "maybe" in maybe_same_pte()? Rename it pte_same_as_swp().
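
    The renamed helper then reads roughly as follows (a sketch of the
    comparison described above):

    /*
     * The pte found in the page table may carry the swap soft-dirty bit
     * while the swap pte we are matching does not, so clear it before
     * comparing.
     */
    static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
    {
            return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
    }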

    Signed-off-by: Hugh Dickins
    Reviewed-by: Aneesh Kumar K.V
    Acked-by: Cyrill Gorcunov
    Cc: Laurent Dufour
    Cc: Michael Ellerman
    Cc: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • With the new refcounting we will be able to map the same compound
    page with both PTEs and PMDs. This requires adjusting the conditions
    under which we can reuse the page on a write-protection fault.

    For a PTE fault we can't reuse the page if it's part of a huge page.

    For a PMD we can only reuse the page if nobody else maps the huge page
    or any part of it. We could do that by checking page_mapcount() on
    each sub-page, but that's expensive.

    The cheaper way is to check that page_count() is equal to 1: every
    mapcount takes a page reference, so this way we can guarantee that the
    PMD is the only mapping.

    This approach can give a false negative if somebody has pinned the
    page, but that doesn't affect correctness.

    Signed-off-by: Kirill A. Shutemov
    Tested-by: Sasha Levin
    Tested-by: Aneesh Kumar K.V
    Acked-by: Jerome Marchand
    Acked-by: Vlastimil Babka
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Cc: Dave Hansen
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Naoya Horiguchi
    Cc: Steve Capper
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Christoph Lameter
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
     
  • As with rmap, with the new refcounting we cannot rely on
    PageTransHuge() to check whether we need to charge the size of a huge
    page to the cgroup. We need information from the caller to know
    whether it was mapped with a PMD or a PTE.

    We uncharge when the last reference to the page is gone. At that
    point, if we see PageTransHuge() it means we need to uncharge the
    whole huge page.

    The tricky part is partial unmap -- when we try to unmap part of a
    huge page. We don't do any special handling of this situation, meaning
    we don't uncharge the part of the huge page unless the last user is
    gone or split_huge_page() is triggered. If cgroup memory pressure
    happens, the partially unmapped page will be split through the
    shrinker. This should be good enough.

    Signed-off-by: Kirill A. Shutemov
    Tested-by: Sasha Levin
    Tested-by: Aneesh Kumar K.V
    Acked-by: Vlastimil Babka
    Acked-by: Jerome Marchand
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Cc: Dave Hansen
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Naoya Horiguchi
    Cc: Steve Capper
    Cc: Johannes Weiner
    Cc: Michal Hocko
    Cc: Christoph Lameter
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov