11 Jan, 2012

1 commit

  • Rename mm_page_free_direct into mm_page_free and mm_pagevec_free into
    mm_page_free_batched

    Since v2.6.33-5426-gc475dab the kernel triggers mm_page_free_direct for
    all freed pages, not only for directly freed ones, so let's name it
    properly. For pages freed via a page list we also trigger the
    mm_page_free_batched event.

    Signed-off-by: Konstantin Khlebnikov
    Cc: Mel Gorman
    Cc: KOSAKI Motohiro
    Reviewed-by: Minchan Kim
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


10 Aug, 2010

1 commit

  • Add two trace events for kswapd waking up and going to sleep, for the
    purposes of tracking kswapd activity, and two trace events for direct
    reclaim beginning and ending. The information can be used to work out how
    much time a process or the system is spending on the reclamation of pages
    and, in the case of direct reclaim, how many pages were reclaimed for that
    process. High-frequency triggering of these events could point to
    memory-pressure problems.

    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Acked-by: Larry Woodman
    Cc: Dave Chinner
    Cc: Chris Mason
    Cc: Nick Piggin
    Cc: Rik van Riel
    Cc: Johannes Weiner
    Cc: Christoph Hellwig
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Andrea Arcangeli
    Cc: Michael Rubin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


26 Nov, 2009

1 commit

  • Use DECLARE_EVENT_CLASS to remove duplicate code:

      text    data    bss     dec    hex  filename
    333987   69800  27228  431015  693a7  mm/built-in.o.old
    330030   69800  27228  427058  68432  mm/built-in.o

    8 events are converted:

    kmem_alloc: kmalloc, kmem_cache_alloc
    kmem_alloc_node: kmalloc_node, kmem_cache_alloc_node
    kmem_free: kfree, kmem_cache_free
    mm_page: mm_page_alloc_zone_locked, mm_page_pcpu_drain

    No change in functionality.

    Signed-off-by: Li Zefan
    Acked-by: Pekka Enberg
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Mel Gorman
    LKML-Reference:
    Signed-off-by: Ingo Molnar


22 Sep, 2009

3 commits

  • The page allocation trace event reports that a page was successfully
    allocated but it does not specify where it came from. When analysing
    performance, it can be important to distinguish between pages coming from
    the per-cpu allocator and pages coming from the buddy lists, as the latter
    requires the zone lock to be taken and more data structures to be
    examined.

    This patch adds a trace event for __rmqueue reporting when a page is being
    allocated from the buddy lists. It distinguishes between being called to
    refill the per-cpu lists and being called for a high-order allocation.
    Similarly, this patch adds an event to catch when the PCP lists are being
    drained a little and pages are going back to the buddy lists.

    This is trickier to draw conclusions from, but high activity on those
    events could explain why there were a large number of cache misses on a
    page-allocator-intensive workload. The coalescing and splitting of
    buddies involves a lot of writing of page metadata and cache line bounces,
    not to mention the acquisition of an interrupt-safe lock necessary to
    enter this path.

    [akpm@linux-foundation.org: fix build]
    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Reviewed-by: Ingo Molnar
    Cc: Larry Woodman
    Cc: Peter Zijlstra
    Cc: Li Ming Chun
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • Fragmentation avoidance depends on being able to use free pages from lists
    of the appropriate migrate type. In the event this is not possible,
    __rmqueue_fallback() selects a different list and in some circumstances
    changes the migratetype of the pageblock. Simplistically, the more times
    this event occurs, the more likely it is that fragmentation will be a
    problem later, at least for hugepage allocation, but there are other
    considerations such as the order of the page being split to satisfy the
    allocation.

    This patch adds a trace event for __rmqueue_fallback() that reports what
    page is being used for the fallback, the orders of relevant pages, the
    desired migratetype and the migratetype of the lists being used, whether
    the pageblock changed type and whether this event is important with
    respect to fragmentation avoidance or not. This information can be used
    to help analyse fragmentation avoidance and help decide whether
    min_free_kbytes should be increased or not.

    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Reviewed-by: Ingo Molnar
    Cc: Larry Woodman
    Cc: Peter Zijlstra
    Cc: Li Ming Chun
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

  • This patch adds trace events for the allocation and freeing of pages,
    including the freeing of pagevecs. Using the events, it will be known
    which struct pages and pfns are being allocated and freed and, in many
    cases, what the call site was.

    The page alloc tracepoints can be used as an indicator of whether the
    workload was heavily dependent on the page allocator or not. You can make
    a guess based on vmstat, but you can't get a per-process breakdown.
    Depending on the call path, the call_site for page allocation may be
    __get_free_pages() instead of a useful callsite. Instead of passing down
    a return address similar to slab debugging, the user should enable the
    stacktrace and sym-addr options to get a proper stack trace.

    The pagevec free tracepoint has a different use case. It can be used to
    get an idea of how many pages are being dumped off the LRU and whether it
    is kswapd doing the work or a process doing direct reclaim.

    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Reviewed-by: Ingo Molnar
    Cc: Larry Woodman
    Cc: Peter Zijlstra
    Cc: Li Ming Chun
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


13 Jul, 2009

1 commit

  • If TRACE_INCLUDE_FILE is defined, <trace/events/TRACE_INCLUDE_FILE.h>
    will be included and compiled; otherwise it will be
    <trace/events/TRACE_SYSTEM.h>.

    So TRACE_SYSTEM should be defined outside of the #if protection,
    just like TRACE_INCLUDE_FILE.

    Imagine this scenario:

    #include <trace/events/foo.h>
        -> TRACE_SYSTEM == foo
    ...
    #include <trace/events/bar.h>
        -> TRACE_SYSTEM == bar
    ...
    #define CREATE_TRACE_POINTS
    #include <trace/events/foo.h>
        -> TRACE_SYSTEM == bar !!!

    and then bar.h will be included and compiled.
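    The resulting layout can be sketched as the following header skeleton.
    This is a simplified, non-compilable fragment; the guard name and the
    system name "foo" are illustrative, not taken from a specific kernel
    header:

```c
/* Sketch of a trace header after the fix: TRACE_SYSTEM sits outside
 * the #if protection, so re-inclusion (as done when CREATE_TRACE_POINTS
 * is defined) always sees this header's own system name. */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM foo

#if !defined(_TRACE_FOO_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_FOO_H

#include <linux/tracepoint.h>

/* TRACE_EVENT() definitions go here, inside the protection */

#endif /* _TRACE_FOO_H */

/* This part must stay outside the protection */
#include <trace/define_trace.h>
```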

    Signed-off-by: Li Zefan
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar


27 May, 2009

1 commit

  • This patch changes the output for gfp_flags from being a simple hex value
    to the actual names.

    gfp_flags=GFP_ATOMIC instead of gfp_flags=00000020

    And even

    gfp_flags=GFP_KERNEL instead of gfp_flags=000000d0

    (Thanks to Frederic Weisbecker for pointing out that the first version
    had a bad order of GFP masks)

    [ Impact: more human readable output from tracer ]

    Acked-by: Eduard - Gabriel Munteanu
    Signed-off-by: Steven Rostedt
    Signed-off-by: Frederic Weisbecker


15 Apr, 2009

1 commit

  • Impact: clean up

    Create a subdirectory in include/trace called events to keep the
    tracepoint headers in their own separate directory. Only headers that
    declare tracepoints should be defined in this directory.

    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Neil Horman
    Cc: Zhao Lei
    Cc: Eduard - Gabriel Munteanu
    Cc: Pekka Enberg
    Signed-off-by: Steven Rostedt
