Commit e0fff1bd12469c45dab088e353d8882761387bb6

Authored by Mel Gorman
Committed by Linus Torvalds
1 parent 4b4f278c03

tracing, page-allocator: add trace events for anti-fragmentation falling back to other migratetypes

Fragmentation avoidance depends on being able to use free pages from lists
of the appropriate migrate type.  In the event this is not possible,
__rmqueue_fallback() selects a list of a different migratetype and in some
circumstances changes the migratetype of the pageblock.  Simplistically,
the more often this event occurs, the more likely it is that fragmentation
will later be a problem, at least for hugepage allocations, but there are
other considerations such as the order of the page being split to satisfy
the allocation.

This patch adds a trace event for __rmqueue_fallback() that reports which
page is being used for the fallback, the orders of the relevant pages, the
desired migratetype and the migratetype of the list actually used, whether
the pageblock changed type and whether the event is significant with
respect to fragmentation avoidance.  This information can be used to help
analyse fragmentation avoidance and to decide whether min_free_kbytes
should be increased.
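
For illustration only, the new tracepoint can be consumed through the
standard ftrace interface once the patch is applied; the paths below are
the usual debugfs mount point and the event values shown are made up,
purely to indicate the shape of the output:

  # enable the event (mount point may differ, e.g. /sys/kernel/tracing)
  echo 1 > /sys/kernel/debug/tracing/events/kmem/mm_page_alloc_extfrag/enable
  cat /sys/kernel/debug/tracing/trace

  <...>: mm_page_alloc_extfrag: page=ffffea0001234560 pfn=1193046
         alloc_order=0 fallback_order=3 pageblock_order=9
         alloc_migratetype=1 fallback_migratetype=0
         fragmenting=1 change_ownership=0

Here fragmenting is derived from fallback_order < pageblock_order and
change_ownership from comparing alloc_migratetype with
fallback_migratetype, as in the TP_printk() below.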

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Li Ming Chun <macli@brc.ubc.ca>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 2 changed files with 42 additions and 0 deletions

include/trace/events/kmem.h
... ... @@ -299,6 +299,44 @@
299 299 show_gfp_flags(__entry->gfp_flags))
300 300 );
301 301  
  302 +TRACE_EVENT(mm_page_alloc_extfrag,
  303 +
  304 + TP_PROTO(struct page *page,
  305 + int alloc_order, int fallback_order,
  306 + int alloc_migratetype, int fallback_migratetype),
  307 +
  308 + TP_ARGS(page,
  309 + alloc_order, fallback_order,
  310 + alloc_migratetype, fallback_migratetype),
  311 +
  312 + TP_STRUCT__entry(
  313 + __field( struct page *, page )
  314 + __field( int, alloc_order )
  315 + __field( int, fallback_order )
  316 + __field( int, alloc_migratetype )
  317 + __field( int, fallback_migratetype )
  318 + ),
  319 +
  320 + TP_fast_assign(
  321 + __entry->page = page;
  322 + __entry->alloc_order = alloc_order;
  323 + __entry->fallback_order = fallback_order;
  324 + __entry->alloc_migratetype = alloc_migratetype;
  325 + __entry->fallback_migratetype = fallback_migratetype;
  326 + ),
  327 +
  328 + TP_printk("page=%p pfn=%lu alloc_order=%d fallback_order=%d pageblock_order=%d alloc_migratetype=%d fallback_migratetype=%d fragmenting=%d change_ownership=%d",
  329 + __entry->page,
  330 + page_to_pfn(__entry->page),
  331 + __entry->alloc_order,
  332 + __entry->fallback_order,
  333 + pageblock_order,
  334 + __entry->alloc_migratetype,
  335 + __entry->fallback_migratetype,
  336 + __entry->fallback_order < pageblock_order,
  337 + __entry->alloc_migratetype == __entry->fallback_migratetype)
  338 +);
  339 +
302 340 #endif /* _TRACE_KMEM_H */
303 341  
304 342 /* This part must be outside protection */
mm/page_alloc.c
... ... @@ -853,6 +853,10 @@
853 853 start_migratetype);
854 854  
855 855 expand(zone, page, order, current_order, area, migratetype);
  856 +
  857 + trace_mm_page_alloc_extfrag(page, order, current_order,
  858 + start_migratetype, migratetype);
  859 +
856 860 return page;
857 861 }
858 862 }