13 Jun, 2013

1 commit

  • The bitmap accessed by bitops must be large enough to hold the
    required number of bits rounded up to a multiple of BITS_PER_LONG.
    And the bitmap must not be zeroed by memset() if the number of bits
    being cleared is not a multiple of BITS_PER_LONG.

    This fixes incorrect zeroing and an incorrect allocation size for
    frontswap_map. The incorrect zeroing doesn't cause any problems,
    because frontswap_map is freed just after being zeroed. But the
    wrongly calculated allocation size can cause real problems.

    For 32-bit systems, the allocation size of frontswap_map is about
    twice the required size. For 64-bit systems, the allocation size is
    smaller than required if the number of bits is not a multiple of
    BITS_PER_LONG.

    Signed-off-by: Akinobu Mita
    Cc: Konrad Rzeszutek Wilk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

01 May, 2013

4 commits

  • The frontswap initialization routine depends on swap_lock, which is
    meant to make frontswap's first appearance atomic: either frontswap
    is not present and all calls fail, or frontswap is fully functional.
    But until the new swap_info_struct is registered by
    enable_swap_info(), the swap subsystem doesn't start I/O on it, so
    there is no race between the init procedure and page I/O working on
    frontswap.

    So let's remove the unnecessary swap_lock dependency.

    Cc: Dan Magenheimer
    Signed-off-by: Minchan Kim
    [v1: Rebased on my branch, reworked to work with backends loading late]
    [v2: Added a check for !map]
    [v3: Made the invalidate path follow the init path]
    [v4: Address comments by Wanpeng Li ]
    Signed-off-by: Konrad Rzeszutek Wilk
    Signed-off-by: Bob Liu
    Cc: Wanpeng Li
    Cc: Andor Daam
    Cc: Florian Schmaus
    Cc: Stefan Hengelein
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
  • After allowing tmem backends to be built/run as modules,
    frontswap_enabled is always true when CONFIG_FRONTSWAP is defined.
    But frontswap_test() depends on whether a backend is registered, so
    move it into frontswap.c and use frontswap_ops to make the decision.

    frontswap_set/clear are not used outside frontswap, so don't export
    them.

    Signed-off-by: Bob Liu
    Cc: Wanpeng Li
    Cc: Andor Daam
    Cc: Dan Magenheimer
    Cc: Florian Schmaus
    Cc: Konrad Rzeszutek Wilk
    Cc: Minchan Kim
    Cc: Stefan Hengelein
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bob Liu
     
  • This simplifies the code in the frontswap - we can get rid of the
    'backend_registered' test and instead check against frontswap_ops.

    [v1: Rebase on top of 703ba7fe5e0 (ramster->zcache move)]
    Signed-off-by: Konrad Rzeszutek Wilk
    Signed-off-by: Bob Liu
    Cc: Wanpeng Li
    Cc: Andor Daam
    Cc: Dan Magenheimer
    Cc: Florian Schmaus
    Cc: Minchan Kim
    Cc: Stefan Hengelein
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Konrad Rzeszutek Wilk
     
  • With the goal of allowing tmem backends (zcache, ramster, Xen tmem) to
    be built/loaded as modules rather than built-in and enabled by a boot
    parameter, this patch provides "lazy initialization", allowing backends
    to register to frontswap even after swapon was run. Before a backend
    registers all calls to init are recorded and the creation of tmem_pools
    delayed until a backend registers or until a frontswap store is
    attempted.

    Signed-off-by: Stefan Hengelein
    Signed-off-by: Florian Schmaus
    Signed-off-by: Andor Daam
    Signed-off-by: Dan Magenheimer
    [v1: Fixes per Seth Jennings suggestions]
    [v2: Removed FRONTSWAP_HAS_.. ]
    [v3: Fix up per Bob Liu recommendations]
    [v4: Fix up per Andrew's comments]
    Signed-off-by: Konrad Rzeszutek Wilk
    Signed-off-by: Bob Liu
    Cc: Wanpeng Li
    Cc: Dan Magenheimer
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Magenheimer
     

21 Sep, 2012

2 commits

  • Tmem, as originally specified, assumes that "get" operations
    performed on persistent pools never flush the page of data out
    of tmem on a successful get, waiting instead for a flush
    operation. This is intended to mimic the model of a swap
    disk, where a disk read is non-destructive. Unlike a
    disk, however, freeing up the RAM can be valuable. Over
    the years that frontswap was in the review process, several
    reviewers (and notably Hugh Dickins in 2010) pointed out that
    this would result, at least temporarily, in two copies of the
    data in RAM: one (compressed for zcache) copy in tmem,
    and one copy in the swap cache. We wondered if this could
    be done differently, at least optionally.

    This patch allows tmem backends to instruct the frontswap
    code that this backend performs exclusive gets. Zcache2
    already contains hooks to support this feature. Other
    backends are completely unaffected unless/until they are
    updated to support this feature.

    While it is not clear that exclusive gets are a performance
    win on all workloads at all times, this small patch allows for
    experimentation by backends.

    P.S. Let's not quibble about the naming of "get" vs "read" vs
    "load" etc. The naming is currently horribly inconsistent between
    cleancache and frontswap and existing tmem backends, so will need
    to be straightened out as a separate patch. "Get" is used
    by the tmem architecture spec, existing backends, and
    all documentation and presentation material so I am
    using it in this patch.

    Signed-off-by: Dan Magenheimer
    Signed-off-by: Konrad Rzeszutek Wilk

    Dan Magenheimer
     
  • pages_to_unuse is set to 0 to unuse all frontswap pages, but that
    doesn't happen, because a wrong condition in frontswap_shrink()
    cancels it.

    -v2: Add comment to explain return value of __frontswap_shrink,
    as suggested by Dan Carpenter, thanks

    Signed-off-by: Zhenzhong Duan
    Signed-off-by: Konrad Rzeszutek Wilk

    Zhenzhong Duan
     

14 Aug, 2012

1 commit

  • Fixes an uninitialized variable warning on 'type' in
    frontswap_shrink(). In fact, type is set before use: it is assigned
    in __frontswap_unuse_pages(), called by __frontswap_shrink() from
    frontswap_shrink(), before try_to_unuse() reads it.

    Signed-off-by: Seth Jennings
    Signed-off-by: Konrad Rzeszutek Wilk

    Seth Jennings
     

23 Jul, 2012

2 commits


20 Jul, 2012

1 commit


12 Jun, 2012

7 commits


15 May, 2012

2 commits

  • Sounds so much more natural.

    Suggested-by: Andrea Arcangeli
    Signed-off-by: Konrad Rzeszutek Wilk

    Konrad Rzeszutek Wilk
     
  • This patch, 3 of 4, provides the core frontswap code that interfaces
    between the hooks in the swap subsystem and a frontswap backend via
    frontswap_ops.

    ---
    New file added: mm/frontswap.c

    [v14: add support for writethrough, per suggestion by aarcange@redhat.com]
    [v11: sjenning@linux.vnet.ibm.com: s/puts/failed_puts/]
    [v10: sjenning@linux.vnet.ibm.com: fix debugfs calls on 32-bit]
    [v9: akpm@linux-foundation.org: change "flush" to "invalidate", part 1]
    [v9: akpm@linux-foundation.org: mark some statics __read_mostly]
    [v9: akpm@linux-foundation.org: add clarifying comments]
    [v9: akpm@linux-foundation.org: no need to loop repeating try_to_unuse]
    [v9: error27@gmail.com: remove superfluous check for NULL]
    [v8: rebase to 3.0-rc4]
    [v8: kamezawa.hiroyu@jp.fujitsu.com: add comment to clarify find_next_to_unuse]
    [v7: rebase to 3.0-rc3]
    [v7: JBeulich@novell.com: use new static inlines, no-ops if not config'd]
    [v6: rebase to 3.1-rc1]
    [v6: lliubbo@gmail.com: use vzalloc]
    [v6: lliubbo@gmail.com: fix null pointer deref if vzalloc fails]
    [v6: konrad.wilk@oracle.com: various checks and code clarifications/comments]
    [v4: rebase to 2.6.39]
    Signed-off-by: Dan Magenheimer
    Acked-by: Jan Beulich
    Acked-by: Seth Jennings
    Cc: Jeremy Fitzhardinge
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Nitin Gupta
    Cc: Matthew Wilcox
    Cc: Chris Mason
    Cc: Rik Riel
    Cc: Andrew Morton
    [v12: Squashed s/flush/invalidate/ in]
    [v15: A bit of cleanup and separate DEBUGFS]
    Signed-off-by: Konrad Rzeszutek Wilk

    Dan Magenheimer