10 Mar, 2016

1 commit

  • Given that we have uninitialized list_heads being passed to list_add(),
    it will always be the case that those uninitialized values randomly
    trigger the poison value, especially since a list_add() operation seeds
    the stack with the poison value for later stack allocations to trip
    over (a minimal sketch of the scenario follows this entry).

    For example, see these two false positive reports:

    list_add attempted on force-poisoned entry
    WARNING: at lib/list_debug.c:34
    [..]
    NIP [c00000000043c390] __list_add+0xb0/0x150
    LR [c00000000043c38c] __list_add+0xac/0x150
    Call Trace:
    __list_add+0xac/0x150 (unreliable)
    __down+0x4c/0xf8
    down+0x68/0x70
    xfs_buf_lock+0x4c/0x150 [xfs]

    list_add attempted on force-poisoned entry(0000000000000500),
    new->next == d0000000059ecdb0, new->prev == 0000000000000500
    WARNING: at lib/list_debug.c:33
    [..]
    NIP [c00000000042db78] __list_add+0xa8/0x140
    LR [c00000000042db74] __list_add+0xa4/0x140
    Call Trace:
    __list_add+0xa4/0x140 (unreliable)
    rwsem_down_read_failed+0x6c/0x1a0
    down_read+0x58/0x60
    xfs_log_commit_cil+0x7c/0x600 [xfs]

    Fixes: commit 5c2c2587b132 ("mm, dax, pmem: introduce {get|put}_dev_pagemap() for dax-gup")
    Signed-off-by: Dan Williams
    Reported-by: Eryu Guan
    Tested-by: Eryu Guan
    Cc: Ross Zwisler
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Williams
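
    To make the false positive concrete, here is a minimal sketch of the
    scenario, loosely modelled on the semaphore slow path seen in the first
    trace (simplified, not the exact kernel code):

        /*
         * Sketch only: loosely based on kernel/locking/semaphore.c.
         * The waiter lives on the sleeper's stack and its list_head is
         * deliberately left uninitialized, because list_add_tail() will
         * overwrite both of its pointers anyway.
         */
        struct semaphore_waiter {
                struct list_head list;          /* on-stack, uninitialized */
                struct task_struct *task;
                bool up;
        };

        static void __down_sketch(struct semaphore *sem)
        {
                struct semaphore_waiter waiter;

                /*
                 * A debug check that inspects waiter.list.next/.prev
                 * *before* the add sees whatever garbage the stack held,
                 * which can randomly match a poison value -- hence the
                 * false positives quoted above.
                 */
                list_add_tail(&waiter.list, &sem->wait_list);
                /* ... sleep, then list_del(&waiter.list) on wakeup ... */
        }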
     

16 Jan, 2016

1 commit

  • get_dev_pagemap() enables paths like get_user_pages() to pin a dynamically
    mapped pfn-range (devm_memremap_pages()) while the resulting struct page
    objects are in use. Unlike get_page() it may fail if the device is, or
    is in the process of being, disabled. While the initial lookup of the
    range may be an expensive list walk, the result is cached to speed up
    subsequent lookups which are likely to be in the same mapped range.

    devm_memremap_pages() now requires a reference counter to be specified
    at init time. For pmem this means moving request_queue allocation into
    pmem_alloc() so the existing queue usage counter can track "device
    pages".

    ZONE_DEVICE pages always have an elevated count and will never be on an
    lru reclaim list. That space in 'struct page' can be redirected for
    other uses, but for safety, introduce a poison value that will always
    trip the __list_add() sanity check. This allows half of the struct
    list_head storage to be reclaimed, with some assurance to back up the
    assumption that the page count never goes to zero and a list_add() is
    never attempted (the check is sketched after this entry).

    Signed-off-by: Dan Williams
    Tested-by: Logan Gunthorpe
    Cc: Dave Hansen
    Cc: Matthew Wilcox
    Cc: Ross Zwisler
    Cc: Alexander Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Williams
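
    A rough sketch of the force-poison mechanism described above, assuming
    the implementation looks approximately like this (names and message
    text may not be verbatim):

        #include <linux/list.h>
        #include <linux/bug.h>

        static struct list_head force_poison;  /* its address is the sentinel */

        /* Called on the ZONE_DEVICE page's lru before that storage is reused. */
        void list_force_poison(struct list_head *entry)
        {
                entry->next = &force_poison;
                entry->prev = &force_poison;
        }

        /* __list_add() then refuses to link a force-poisoned entry: */
        static bool check_force_poison(struct list_head *new)
        {
                return !WARN(new->next == &force_poison ||
                             new->prev == &force_poison,
                             "list_add attempted on force-poisoned entry\n");
        }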
     

24 Nov, 2015

1 commit

  • Code that does lockless emptiness testing of non-RCU lists is relying
    on the list-addition code to write the list head's ->next pointer
    atomically. This commit therefore adds WRITE_ONCE() to list-addition
    pointer stores that could affect the head's ->next pointer.

    Reported-by: Dmitry Vyukov
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
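
    The pattern in question, roughly as __list_add() reads after this
    change (debug checks omitted): the final store to prev->next is what
    publishes the entry, so that is the store a lockless reader races with.

        static inline void __list_add(struct list_head *new,
                                      struct list_head *prev,
                                      struct list_head *next)
        {
                next->prev = new;
                new->next = next;
                new->prev = prev;
                /*
                 * WRITE_ONCE() keeps the compiler from tearing or fusing
                 * this store, so a lockless list_empty() reader sees either
                 * the old or the new head->next, never a partial value.
                 */
                WRITE_ONCE(prev->next, new);
        }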
     

06 Jul, 2012

1 commit

  • Although the C language allows you to break strings across lines, doing
    this makes it hard for people to find the Linux kernel code corresponding
    to a given console message. This commit therefore fixes broken strings
    throughout RCU's source code.

    Suggested-by: Josh Triplett
    Suggested-by: Ingo Molnar
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
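
    A small illustration of what "fixing broken strings" means in practice
    (made-up example, not a specific line from RCU):

        /* Hard to grep for, because the message is split across lines: */
        pr_err("rcu_sched detected stalls "
               "on CPUs/tasks\n");

        /* Easy to grep for, even if the line runs past 80 columns: */
        pr_err("rcu_sched detected stalls on CPUs/tasks\n");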
     

30 May, 2012

1 commit


25 Apr, 2012

1 commit

  • * Make __list_add_rcu check the next->prev and prev->next pointers
    just like __list_add does.
    * Make list_del_rcu use __list_del_entry, which does the same checking
    at deletion time.

    Has been running for a week here without anything being tripped up,
    but it seems worth adding for completeness just in case something
    ever does corrupt those lists.

    Signed-off-by: Dave Jones
    Signed-off-by: Paul E. McKenney

    Dave Jones
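
    A sketch of the first change, assuming the new checks mirror the ones
    __list_add() already performs (message text approximate):

        void __list_add_rcu(struct list_head *new,
                            struct list_head *prev, struct list_head *next)
        {
                WARN(next->prev != prev,
                     "list_add_rcu corruption. next->prev should be prev (%p), but was %p\n",
                     prev, next->prev);
                WARN(prev->next != next,
                     "list_add_rcu corruption. prev->next should be next (%p), but was %p\n",
                     next, prev->next);
                new->next = next;
                new->prev = prev;
                /* publish the entry with a store-release */
                rcu_assign_pointer(list_next_rcu(prev), new);
                next->prev = new;
        }

    The second change simply routes list_del_rcu() through
    __list_del_entry(), which performs the corresponding poison and
    consistency checks at deletion time.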
     

25 Mar, 2012

1 commit

  • Pull cleanup of fs/ and lib/ users of module.h from Paul Gortmaker:
    "Fix up files in fs/ and lib/ dirs to only use module.h if they really
    need it.

    These are trivial in scope vs the work done previously. We now have
    things where any few remaining cleanups can be farmed out to arch or
    subsystem maintainers, and I have done so when possible. What is
    remaining here represents the bits that don't clearly lie within a
    single arch/subsystem boundary, like the fs dir and the lib dir.

    Some duplicate includes arising from overlapping fixes from
    independent subsystem maintainer submissions are also quashed."

    Fix up trivial conflicts due to clashes with other include file cleanups
    (including some due to the previous bug.h cleanup pull).

    * tag 'module-for-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux:
    lib: reduce the use of module.h wherever possible
    fs: reduce the use of module.h wherever possible
    includecheck: delete any duplicate instances of module.h

    Linus Torvalds
     

08 Mar, 2012

1 commit


01 Mar, 2012

1 commit


29 Feb, 2012

1 commit

  • A pending header cleanup will cause this to show up as:

    lib/average.c:38: error: 'TAINT_WARN' undeclared (first use in this function)
    lib/list_debug.c:24: error: 'TAINT_WARN' undeclared (first use in this function)

    and TAINT_WARN comes from include/linux/kernel.h.

    Signed-off-by: Paul Gortmaker

    Paul Gortmaker
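
    The fix is simply to state the dependency explicitly instead of relying
    on another header pulling kernel.h in, i.e. something along the lines
    of:

        #include <linux/kernel.h>  /* TAINT_WARN, referenced by the WARN() expansion */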
     

19 Feb, 2011

1 commit

  • When list debugging is enabled, we aim to readably show list corruption
    errors, and the basic list_add/list_del operations end up having extra
    debugging code in them to do some basic validation of the list entries.

    However, "list_del_init()" and "list_move[_tail]()" ended up avoiding
    the debug code due to how they were written. This fixes that.

    So the _next_ time we have list_move() problems with stale list entries,
    we'll hopefully have an easier time finding them.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
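
    The shape of the fix, abridged from include/linux/list.h: the helpers
    now funnel through __list_del_entry(), which carries the debug checks,
    instead of calling the unchecked __list_del() directly.

        static inline void list_del_init(struct list_head *entry)
        {
                __list_del_entry(entry);  /* checked when CONFIG_DEBUG_LIST=y */
                INIT_LIST_HEAD(entry);
        }

        static inline void list_move(struct list_head *list,
                                     struct list_head *head)
        {
                __list_del_entry(list);
                list_add(list, head);
        }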
     

10 Aug, 2010

1 commit

  • Use the magic LIST_POISON* values to detect incorrect use of list_del()
    on an already-deleted entry. This DEBUG_LIST-specific warning is easier
    to understand than the generic Oops message caused by a LIST_POISON
    dereference.

    Signed-off-by: Baruch Siach
    Cc: Dave Jones
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Baruch Siach
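
    A sketch of the added checks, assuming they take roughly the shape of
    the list_del() debug path in lib/list_debug.c (message text
    approximate):

        void list_del(struct list_head *entry)
        {
                WARN(entry->next == LIST_POISON1,
                     "list_del corruption, %p->next is LIST_POISON1 (%p)\n",
                     entry, LIST_POISON1);
                WARN(entry->prev == LIST_POISON2,
                     "list_del corruption, %p->prev is LIST_POISON2 (%p)\n",
                     entry, LIST_POISON2);
                __list_del(entry->prev, entry->next);
                /* re-poison so a second list_del() trips the checks above */
                entry->next = LIST_POISON1;
                entry->prev = LIST_POISON2;
        }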
     

26 Jul, 2008

2 commits

  • Arjan noted that the list_head debugging is BUG'ing when it detects
    corruption. By causing the box to panic immediately, we're possibly
    losing some bug reports. Changing this to a WARN() should mean we at
    least start seeing reports collected at kerneloops.org.

    Signed-off-by: Dave Jones
    Cc: Matthew Wilcox
    Cc: Arjan van de Ven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Jones
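
    Conceptually the change looks like this (exact message wording aside):

        /* Before: report the corruption, then take the whole box down. */
        if (unlikely(next->prev != prev)) {
                printk(KERN_ERR "list_add corruption. next->prev should be %p, but was %p\n",
                       prev, next->prev);
                BUG();
        }

        /* After: warn (which also taints the kernel) and keep running, so
         * the report has a chance of reaching kerneloops.org. */
        WARN(next->prev != prev,
             "list_add corruption. next->prev should be prev (%p), but was %p\n",
             prev, next->prev);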
     
  • Remove the conditional surrounding the definition of list_add() from
    list.h: if CONFIG_DEBUG_LIST is defined, the definition subsequently
    picked up from lib/list_debug.c is absolutely identical, at which point
    that redundant definition can be removed from list_debug.c as well.

    Signed-off-by: Robert P. J. Day
    Cc: Dave Jones
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robert P. J. Day
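
    In other words, list.h used to read roughly like this, and the
    conditional can go because the out-of-line copy in lib/list_debug.c had
    the exact same body:

        /* Before (roughly): */
        #ifndef CONFIG_DEBUG_LIST
        static inline void list_add(struct list_head *new, struct list_head *head)
        {
                __list_add(new, head, head->next);
        }
        #else
        extern void list_add(struct list_head *new, struct list_head *head);
        #endif

        /* After: the inline is kept unconditionally; only __list_add()
         * differs between the unchecked inline and the checked version in
         * lib/list_debug.c. */
        static inline void list_add(struct list_head *new, struct list_head *head)
        {
                __list_add(new, head, head->next);
        }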
     

08 Dec, 2006

1 commit

  • Print the other (hopefully) known-good pointer in the list_head
    debugging messages too, which may yield additional clues.

    Also fix up the 80-column wrapping to win akpm brownie points.

    Signed-off-by: Dave Jones
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Jones
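
    Illustratively, the corruption message now names both the expected
    pointer and the one actually found, plus the entry it was read from,
    so both ends of the bad link show up in the log (wording approximate):

        printk(KERN_ERR
               "list_add corruption. next->prev should be prev (%p), but was %p. (next=%p).\n",
               prev, next->prev, next);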
     

01 Oct, 2006

1 commit

  • These two BUG_ON()s are redundant and undesired: we're checking for this
    condition further on in the function, only better.

    Cc: Dave Jones
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
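
    Roughly, the redundancy looks like this: the bare assertion adds
    nothing, because the check a few lines later catches the same
    corruption and reports which pointers were involved before stopping.

        BUG_ON(next->prev != prev);     /* redundant, removed */

        if (unlikely(next->prev != prev)) {
                printk(KERN_ERR "list_add corruption. next->prev should be %p, but was %p\n",
                       prev, next->prev);
                BUG();
        }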
     

30 Sep, 2006

2 commits