24 Mar, 2011

12 commits

  • In struct page_cgroup, we have a full word for flags, but only a few
    bits are actually used. Use the remaining upper bits to encode,
    depending on configuration, the node or the section, enabling
    page_cgroup-to-page lookups without a direct pointer.

    This saves a full word for every page in a system with memory cgroups
    enabled.
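
    A minimal userspace sketch of the bit-packing idea (helper names and
    bit positions here are illustrative, not the kernel's):

    #include <assert.h>
    #include <stdio.h>

    /* Pretend flag bits 0..3 are status flags; everything from
     * NODE_SHIFT upward is free to hold the node (or section) id. */
    #define NODE_SHIFT 4

    static unsigned long encode_node(unsigned long flags, unsigned long nid)
    {
            return (flags & ((1UL << NODE_SHIFT) - 1)) | (nid << NODE_SHIFT);
    }

    static unsigned long decode_node(unsigned long flags)
    {
            return flags >> NODE_SHIFT;
    }

    int main(void)
    {
            unsigned long flags = 0x5;      /* some status bits set */

            flags = encode_node(flags, 3);  /* store node id 3 above them */
            assert(decode_node(flags) == 3);
            assert((flags & 0xf) == 0x5);   /* status bits untouched */
            printf("flags=0x%lx node=%lu\n", flags, decode_node(flags));
            return 0;
    }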

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Daisuke Nishimura
    Cc: Balbir Singh
    Cc: Minchan Kim
    Cc: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • The per-cgroup LRU lists string up 'struct page_cgroup's. To get from
    those structures to the page they represent, a lookup is required.
    Currently, the lookup is done through a direct pointer in struct
    page_cgroup, so a lot of functions down the callchain do this lookup by
    themselves instead of receiving the page pointer from their callers.

    The next patch removes this pointer, however, and the lookup is no
    longer as straightforward. In preparation for that, this patch keeps
    only the non-optional lookups (those coming directly from the LRU
    list) and passes the page down the stack.
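
    A toy before/after of this refactoring (types and function names are
    stand-ins, not the kernel's):

    #include <stdio.h>

    struct page { unsigned long pfn; };
    struct page_cgroup { struct page *page; unsigned long flags; };

    /* Before: every helper re-derives the page via the backpointer. */
    static void helper_old(struct page_cgroup *pc)
    {
            struct page *page = pc->page;

            printf("old: pfn %lu\n", page->pfn);
    }

    /* After: the LRU walker already knows the page and passes it down,
     * so the pc->page backpointer can eventually be removed. */
    static void helper_new(struct page_cgroup *pc, struct page *page)
    {
            (void)pc;
            printf("new: pfn %lu\n", page->pfn);
    }

    int main(void)
    {
            struct page p = { 42 };
            struct page_cgroup pc = { &p, 0 };

            helper_old(&pc);
            helper_new(&pc, &p);
            return 0;
    }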

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Daisuke Nishimura
    Cc: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • It is one logical function; there is no need to have it split up.

    Also, get rid of some checks from the inner function that ensured the
    sanity of the outer function.

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Acked-by: Daisuke Nishimura
    Cc: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Instead of passing a whole struct page_cgroup to this function, let it
    take only what it really needs from it: the struct mem_cgroup and the
    page.

    This has the advantage that reading pc->mem_cgroup is now done at the same
    place where the ordering rules for this pointer are enforced and
    explained.

    It is also in preparation for removing the pc->page backpointer.

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Daisuke Nishimura
    Cc: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • This patch series removes the direct page pointer from struct page_cgroup,
    which saves 20% of per-page memcg memory overhead (Fedora and Ubuntu
    enable memcg by default, and openSUSE apparently does too).

    The node id or section number is encoded in the remaining free bits of
    pc->flags which allows calculating the corresponding page without the
    extra pointer.

    I ran what I think is a worst-case microbenchmark that just cats a
    large sparse file to /dev/null, because it means that walking the LRU
    list on behalf of per-cgroup reclaim and looking up pages from
    page_cgroups is happening constantly and at a high rate. But it made
    no measurable difference. A profile reported a 0.11% share for the
    new lookup_cgroup_page() function in this benchmark.
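
    A userspace sketch of the lookup direction (the flat per-node array
    and all names are illustrative; the real lookup_cgroup_page() also
    handles the sparse-memory-section layout):

    #include <assert.h>

    struct page_cgroup { unsigned long flags; };

    #define NODE_PAGES 1024

    static struct page_cgroup node_array[NODE_PAGES];
    static unsigned long node_start_pfn = 0x10000;

    /* The offset of pc inside its node's array, plus the node's start
     * pfn, recovers the page frame number without a pc->page pointer. */
    static unsigned long pc_to_pfn(struct page_cgroup *pc)
    {
            return node_start_pfn + (unsigned long)(pc - node_array);
    }

    int main(void)
    {
            assert(pc_to_pfn(&node_array[7]) == 0x10000 + 7);
            return 0;
    }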

    This patch:

    All callsites check PCG_USED before passing pc->mem_cgroup, so the latter
    is never NULL.

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Daisuke Nishimura
    Acked-by: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • When allocating or freeing a page, check whether the page is in use
    (in other words, charged) from the memcg point of view.

    This check may be useful in debugging a problem, and we did similar
    checks before commit 52d4b9ac (memcg: allocate all page_cgroup at
    boot).

    This patch adds some overhead to allocating and freeing memory, so it
    is enabled only when CONFIG_DEBUG_VM is set.
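
    A toy userspace model of such a check (struct and flag names are
    simplified stand-ins); like the kernel change, it costs nothing
    unless CONFIG_DEBUG_VM is defined:

    #include <stdio.h>

    struct page_cgroup { unsigned long flags; };

    #define PCG_USED (1UL << 0)

    static void debug_check_free(struct page_cgroup *pc)
    {
    #ifdef CONFIG_DEBUG_VM
            if (pc->flags & PCG_USED)
                    fprintf(stderr, "BUG: freeing a still-charged page\n");
    #else
            (void)pc;
    #endif
    }

    int main(void)
    {
            struct page_cgroup pc = { PCG_USED };

            /* Warns only when built with -DCONFIG_DEBUG_VM. */
            debug_check_free(&pc);
            return 0;
    }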

    Signed-off-by: Daisuke Nishimura
    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daisuke Nishimura
     
  • The page_cgroup array is set up before even fork is initialized. I
    seriously doubt that this code executes before the array is alloc'd.

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Daisuke Nishimura
    Cc: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • No callsite ever passes a NULL pointer for a struct mem_cgroup * to the
    committing function. There is no need to check for it.

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Daisuke Nishimura
    Cc: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • These definitions have been unused since '4b3bde4 memcg: remove the
    overhead associated with the root cgroup'.

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Daisuke Nishimura
    Cc: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Since the introduction of transparent huge pages, checking whether
    memory cgroups are below their limits is no longer enough; what
    matters is the actual amount of chargeable space.

    To not have more than one limit-checking interface, replace
    memory_cgroup_check_under_limit() and memory_cgroup_check_margin() with a
    single memory_cgroup_margin() that returns the chargeable space and leaves
    the comparison to the callsite.

    Soft limits are now checked the other way round, by using the already
    existing function that returns the amount by which soft limits are
    exceeded: res_counter_soft_limit_excess().

    Also remove all the corresponding functions on the res_counter side that
    are now no longer used.
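
    A simplified userspace sketch of the single-margin interface (the
    struct and helper here are reduced stand-ins for the res_counter
    versions):

    #include <stdio.h>

    struct res_counter { unsigned long long usage, limit; };

    /* Return the chargeable space; callers compare it themselves. */
    static unsigned long long margin(const struct res_counter *rc)
    {
            return rc->usage >= rc->limit ? 0 : rc->limit - rc->usage;
    }

    int main(void)
    {
            struct res_counter rc = { .usage = 96, .limit = 128 };

            /* old check_under_limit():      margin(&rc) > 0      */
            /* old check_margin(rc, bytes):  margin(&rc) >= bytes */
            printf("chargeable: %llu\n", margin(&rc));      /* 32 */
            return 0;
    }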

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Daisuke Nishimura
    Acked-by: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Soft limit reclaim currently continues until the usage is strictly
    below the current soft limit, but the documented semantics are that
    soft limit reclaim only pushes usage back until the soft limits are
    met again.
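
    A toy model of the resulting loop shape (heavily simplified;
    res_counter_soft_limit_excess() is the real helper, everything else
    is a stand-in):

    #include <stdio.h>

    static unsigned long long usage = 140, soft_limit = 128;

    /* Stand-in for res_counter_soft_limit_excess(). */
    static unsigned long long soft_limit_excess(void)
    {
            return usage > soft_limit ? usage - soft_limit : 0;
    }

    int main(void)
    {
            /* Reclaim only while usage still exceeds the soft limit. */
            while (soft_limit_excess() > 0)
                    usage -= 4;     /* stand-in for reclaiming pages */

            printf("usage=%llu limit=%llu\n", usage, soft_limit);
            return 0;
    }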

    Signed-off-by: Johannes Weiner
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Daisuke Nishimura
    Acked-by: Balbir Singh
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Remove the initialization of a variable in callers of memory cgroup
    functions. It is actually the return value of the memcg function,
    but it was being initialized in the caller.

    Some memory cgroup code uses the following style to carry the result
    of a start function over to the end function, to avoid races:

    mem_cgroup_start_A(&(*ptr))
    /* Something very complicated can happen here. */
    mem_cgroup_end_A(*ptr)

    In some calls, *ptr had to be initialized to NULL by the caller,
    which is ugly. With this patch, *ptr is initialized by the _start
    function instead.
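
    A toy model of the pattern after the change (the function names
    follow the example above; the bodies are stand-ins):

    #include <stdio.h>

    struct mem_cgroup;

    static void mem_cgroup_start_A(struct mem_cgroup **ptr)
    {
            *ptr = NULL;    /* initialized here, not by the caller */
            /* ... may set *ptr to a charged memcg ... */
    }

    static void mem_cgroup_end_A(struct mem_cgroup *memcg)
    {
            if (memcg) {
                    /* ... uncharge ... */
            }
    }

    int main(void)
    {
            struct mem_cgroup *memcg;       /* no "= NULL" needed */

            mem_cgroup_start_A(&memcg);
            /* Something very complicated can happen here. */
            mem_cgroup_end_A(memcg);
            puts("ok");
            return 0;
    }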

    Signed-off-by: KAMEZAWA Hiroyuki
    Acked-by: Johannes Weiner
    Acked-by: Daisuke Nishimura
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     

23 Mar, 2011

28 commits

  • A conflict between 52c50567d8ab ("mm: swap: unlock swapfile inode mutex
    before closing file on bad swapfiles") and 83ef99befc32 ("sys_swapon:
    remove did_down variable") caused a double unlock of the inode mutex
    (once in bad_swap: before the filp_close, once at the end just before
    returning).

    The patch which added the extra unlock cleared did_down to avoid
    unlocking twice, but the other patch removed the did_down variable.

    To fix, set inode to NULL after the first unlock, since it will be used
    after that point only for the final unlock.

    While checking this patch, I found a path which could unlock without
    locking, in case the same inode was added as a swapfile twice. To fix,
    move the setting of the inode variable further down, to just before
    claim_swapfile, which will lock the inode before doing anything else.
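
    A toy userspace model of the double-unlock fix (a pthread mutex
    stands in for the inode mutex): once the error path has unlocked,
    clearing the pointer turns the common exit path's conditional unlock
    into a no-op instead of a second unlock.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t i_mutex = PTHREAD_MUTEX_INITIALIZER;

    int main(void)
    {
            pthread_mutex_t *inode = &i_mutex;      /* stand-in pointer */

            pthread_mutex_lock(inode);

            /* Error path ("bad_swap:"): unlock early, then forget the
             * inode so it cannot be unlocked a second time below. */
            pthread_mutex_unlock(inode);
            inode = NULL;                           /* the fix */

            /* Common exit path: unlock only if still held. */
            if (inode)
                    pthread_mutex_unlock(inode);

            puts("single unlock only");
            return 0;
    }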

    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: Eric B Munson
    Cc: KAMEZAWA Hiroyuki
    Cc: Andrew Morton
    Signed-off-by: Cesar Eduardo Barros
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Clean up and remove duplicate code. The next patch will also use the
    pagevec_lru_move_fn() introduced here.

    Signed-off-by: Shaohua Li
    Cc: KOSAKI Motohiro
    Cc: Hiroyuki Kamezawa
    Cc: Andi Kleen
    Reviewed-by: Minchan Kim
    Cc: Rik van Riel
    Cc: Mel Gorman
    Cc: Johannes Weiner
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Shaohua Li
     
  • Up to 2.6.22, you could use remap_file_pages(2) on a tmpfs file or a
    shared mapping of /dev/zero or a shared anonymous mapping. In 2.6.23 we
    disabled it by default, but set VM_CAN_NONLINEAR to enable it on safe
    mappings. We made sure to set it in shmem_mmap() for tmpfs files, but
    missed it in shmem_zero_setup() for the others. Fix that at last.

    Reported-by: Kenny Simpson
    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Currently memblock_reserve() or memblock_free() don't handle overlaps of
    any kind. There is some special casing for coalescing exactly adjacent
    regions but that's about it.

    This is annoying because typically memblock_reserve() is used to mark
    regions passed by the firmware as reserved and we all know how much we can
    trust our firmwares...

    Also, with the current code, if we do something it doesn't handle right
    such as trying to memblock_reserve() a large range spanning multiple
    existing smaller reserved regions for example, or doing overlapping
    reservations, it can silently corrupt the internal region array, causing
    odd errors much later on, such as allocations returning reserved regions
    etc...

    This patch rewrites the underlying functions that add or remove a region
    to the arrays. The new code is a lot more robust as it fully handles
    overlapping regions. It's also, imho, simpler than the previous
    implementation.

    In addition, while doing so, I found a bug where if we fail to double the
    array while adding a region, we would remove the last region of the array
    rather than the region we just allocated. This fixes it too.
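
    A userspace sketch of overlap-safe insertion into a sorted region
    array (heavily simplified next to the memblock code, but the same
    idea: absorb every existing region the new range touches, then
    insert the merged range):

    #include <stdio.h>

    struct region { unsigned long base, size; };

    static struct region regs[16];
    static int nregs;

    static void add_region(unsigned long base, unsigned long size)
    {
            unsigned long end = base + size;
            int i = 0, j;

            /* Absorb every region that overlaps or abuts [base, end). */
            while (i < nregs) {
                    unsigned long rend = regs[i].base + regs[i].size;

                    if (rend < base || regs[i].base > end) {
                            i++;
                            continue;
                    }
                    if (regs[i].base < base)
                            base = regs[i].base;
                    if (rend > end)
                            end = rend;
                    for (j = i; j < nregs - 1; j++)
                            regs[j] = regs[j + 1];
                    nregs--;
            }

            /* Insert the merged range, keeping the array sorted. */
            for (i = 0; i < nregs && regs[i].base < base; i++)
                    ;
            for (j = nregs; j > i; j--)
                    regs[j] = regs[j - 1];
            regs[i].base = base;
            regs[i].size = end - base;
            nregs++;
    }

    int main(void)
    {
            int i;

            add_region(10, 10);     /* [10,20) */
            add_region(30, 10);     /* [30,40) */
            add_region(15, 20);     /* overlaps both: merges to [10,40) */
            for (i = 0; i < nregs; i++)
                    printf("[%lu,%lu)\n", regs[i].base,
                           regs[i].base + regs[i].size);
            return 0;
    }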

    Signed-off-by: Benjamin Herrenschmidt
    Acked-by: Yinghai Lu
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Benjamin Herrenschmidt
     
  • Signed-off-by: Kirill A. Shutemov
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: KAMEZAWA Hiroyuki
    Reviewed-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kirill A. Shutemov
     
  • KM_USER1 is never used in the vwrite() path, so the caller does not
    need to guarantee that it is unused. The only slot the caller must
    guarantee is KM_USER0, and that is already documented in a comment.

    Signed-off-by: Namhyung Kim
    Acked-by: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Namhyung Kim
     
  • For range-cyclic writeback (e.g. kupdate), the writeback code sets a
    continuation point for the next writeback in
    mapping->writeback_index, which is set to the page after the last
    written page. This happens so that we evenly write the whole file
    even if pages in it get continuously redirtied.

    However, in some cases a sequential writer is writing in the middle
    of a page and just redirties the last written page by continuing
    from it. For example, with an application that uses a file as a big
    ring buffer we see:

    [1st writeback session]
    ...
    flush-8:0-2743 4571: block_bio_queue: 8,0 W 94898514 + 8
    flush-8:0-2743 4571: block_bio_queue: 8,0 W 94898522 + 8
    flush-8:0-2743 4571: block_bio_queue: 8,0 W 94898530 + 8
    flush-8:0-2743 4571: block_bio_queue: 8,0 W 94898538 + 8
    flush-8:0-2743 4571: block_bio_queue: 8,0 W 94898546 + 8
    kworker/0:1-11 4571: block_rq_issue: 8,0 W 0 () 94898514 + 40
    >> flush-8:0-2743 4571: block_bio_queue: 8,0 W 94898554 + 8
    >> flush-8:0-2743 4571: block_rq_issue: 8,0 W 0 () 94898554 + 8

    [2nd writeback session after 35sec]
    flush-8:0-2743 4606: block_bio_queue: 8,0 W 94898562 + 8
    flush-8:0-2743 4606: block_bio_queue: 8,0 W 94898570 + 8
    flush-8:0-2743 4606: block_bio_queue: 8,0 W 94898578 + 8
    ...
    kworker/0:1-11 4606: block_rq_issue: 8,0 W 0 () 94898562 + 640
    kworker/0:1-11 4606: block_rq_issue: 8,0 W 0 () 94899202 + 72
    ...
    flush-8:0-2743 4606: block_bio_queue: 8,0 W 94899962 + 8
    flush-8:0-2743 4606: block_bio_queue: 8,0 W 94899970 + 8
    flush-8:0-2743 4606: block_bio_queue: 8,0 W 94899978 + 8
    flush-8:0-2743 4606: block_bio_queue: 8,0 W 94899986 + 8
    flush-8:0-2743 4606: block_bio_queue: 8,0 W 94899994 + 8
    kworker/0:1-11 4606: block_rq_issue: 8,0 W 0 () 94899962 + 40
    >> flush-8:0-2743 4606: block_bio_queue: 8,0 W 94898554 + 8
    >> flush-8:0-2743 4606: block_rq_issue: 8,0 W 0 () 94898554 + 8

    So we sought back to 94898554 after we wrote all the pages at the
    end of the file.

    This extra seek seems unnecessary. If we continue writeback from the
    last written page, we can avoid it without causing harm to other
    cases. The original intent of even writeout over the whole file is
    preserved, and if the page does not get redirtied,
    pagevec_lookup_tag() just skips it.

    As an exceptional case, when I/O error happens, set done_index to the next
    page as the comment in the code suggests.
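
    A simplified model of the new resume-point logic (a toy type stands
    in for struct address_space; the real change is in
    write_cache_pages()):

    #include <stdio.h>

    struct toy_mapping { unsigned long writeback_index; };

    static void wrote_page(struct toy_mapping *m, unsigned long index,
                           int error)
    {
            if (error)
                    m->writeback_index = index + 1; /* skip the bad page */
            else
                    m->writeback_index = index;     /* was: index + 1 */
    }

    int main(void)
    {
            struct toy_mapping m = { 0 };

            wrote_page(&m, 123, 0);
            printf("resume at page %lu\n", m.writeback_index);
            return 0;
    }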

    Tested-by: Wu Fengguang
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jan Kara
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jun'ichi Nomura
     
  • scan_swap_map() is a large function (224 lines), with several loops and a
    complex control flow involving several gotos.

    Given all that, it is a bit silly that it is marked as inline. The
    compiler agrees with me: on an x86-64 compile, it did not inline the
    function.

    Remove the "inline" and let the compiler decide instead.

    Signed-off-by: Cesar Eduardo Barros
    Reviewed-by: Pekka Enberg
    Reviewed-by: KOSAKI Motohiro
    Reviewed-by: KAMEZAWA Hiroyuki
    Reviewed-by: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • The block in sys_swapon which does the final adjustments to the
    swap_info_struct and to swap_list is the same as the block which
    re-inserts it again at sys_swapoff on failure of try_to_unuse(). Move
    this code to a separate function, and use it both in sys_swapon and
    sys_swapoff.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • The block in sys_swapon which does the final adjustments to the
    swap_info_struct and to swap_list is the same as the block which
    re-inserts it again at sys_swapoff on failure of try_to_unuse(), except
    for the order of the operations within the lock. Since the order should
    not matter, arbitrarily change sys_swapoff to match sys_swapon, in
    preparation to making both share the same code.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • The block in sys_swapon which does the final adjustments to the
    swap_info_struct and to swap_list is the same as the block which
    re-inserts it again at sys_swapoff on failure of try_to_unuse(). To be
    able to make both share the same code, move the printk() call in the
    middle of it to just after it.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • The nr_good_pages variable still exists within
    setup_swap_map_and_extents(), but after it returns,
    nr_good_pages == p->pages.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Since there is no cleanup to do, there is no reason to jump to a label.
    Return directly instead.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Move the code which parses the bad block list and the extents to a
    separate function. Only code movement, no functional changes.

    This change uses the fact that, after the success path, nr_good_pages ==
    p->pages.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • The call to swap_cgroup_swapon is in the middle of loading the swap map
    and extents. As it only does memory allocation and does not depend on
    the swapfile layout (map/extents), it can be called earlier (or later).

    Move it to just after the allocation of swap_map, since it is
    conceptually similar (allocates a map).

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Since there is no cleanup to do, there is no reason to jump to a label.
    Return directly instead.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Move the code which parses and checks the swapfile header (except for
    the bad block list) to a separate function. Only code movement, no
    functional changes.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • There is no reason I can see to read inode->i_size long before it is
    needed. Move its read to just before it is needed, to reduce the
    variable lifetime.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Jesper Juhl
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Since there is no cleanup to do, there is no reason to jump to a label.
    Return directly instead.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Move the code which claims the bdev (S_ISBLK) or locks the inode
    (S_ISREG) to a separate function. Only code movement, no functional
    changes.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • sys_swapon currently has two error labels, bad_swap and bad_swap_2.
    bad_swap does the same as bad_swap_2 plus destroy_swap_extents() and
    swap_cgroup_swapoff(); both are noops in the places where bad_swap_2 is
    jumped to. With a single extra test for inode (matching the one in the
    S_ISREG case below), all the error paths in the function can go to
    bad_swap.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • The only way error is 0 in the cleanup blocks is when the function is
    returning successfully. In this case, the cleanup blocks were setting
    S_SWAPFILE in the S_ISREG case. But this is not a cleanup.

    Move the setting of S_SWAPFILE to just before the "goto out;" to make
    this more clear. At this point, we do not need to test for inode because
    it will never be NULL.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • The bdev variable is always equivalent to (S_ISBLK(inode->i_mode) ?
    p->bdev : NULL), as long as its assignment is moved a bit earlier.
    Use this fact to remove the bdev variable.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Move the setting of the error variable nearer the goto in a few places.

    Avoids calling PTR_ERR() if not IS_ERR() in two places, and makes the
    error condition more explicit in two other places.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Jesper Juhl
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Since mutex_lock(&inode->i_mutex) is called just after setting inode,
    did_down is always equivalent to (inode && S_ISREG(inode->i_mode)).

    Use this fact to remove the did_down variable.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Now there is nothing which jumps to the cleanup blocks before the name
    variable is set. There is no need to set it initially to NULL anymore.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • Since there is no cleanup to do, there is no reason to jump to a label.
    Return directly instead.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros
     
  • At this point in sys_swapon, there is nothing to free. Return directly
    instead of jumping to the cleanup block at the end of the function.

    Signed-off-by: Cesar Eduardo Barros
    Tested-by: Eric B Munson
    Acked-by: Eric B Munson
    Reviewed-by: Pekka Enberg
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cesar Eduardo Barros