12 Jan, 2013

1 commit

  • [Issue]

    When pstore runs in the panic and emergency-restart paths, it may block
    in those paths because it unconditionally takes a spin_lock.

    Here is an example scenario in which pstore may hang in the panic path:

    - cpuA grabs psinfo->buf_lock
    - cpuB panics and calls smp_send_stop
    - smp_send_stop sends IRQ to cpuA
    - after 1 second, cpuB gives up on cpuA and sends an NMI instead
    - cpuA is now in an NMI handler while still holding buf_lock
    - cpuB is deadlocked

    This can happen if the firmware has a bug and cpuA is stuck talking
    to it for more than one second.

    A similar scenario exists in the emergency-restart path:

    - cpuA grabs psinfo->buf_lock and gets stuck in firmware
    - cpuB triggers an emergency restart via either sysrq-b or the hangcheck
    timer, and then deadlocks by taking psinfo->buf_lock again.

    [Solution]

    This patch avoids the deadlocks in both the panic and emergency_restart
    paths by introducing a function, is_non_blocking_path(), which checks
    whether the current cpu is allowed to block in the current path.

    With this patch, pstore no longer blocks in those paths even if another
    cpu already holds the spin_lock, because spin_lock_irqsave is replaced
    with spin_trylock_irqsave there.

    In addition, according to the comment on emergency_restart() in
    kernel/sys.c, no spin_lock should be taken in the emergency-restart path,
    to avoid deadlock. This patch follows that comment, quoted below.

    /**
     * emergency_restart - reboot the system
     *
     * Without shutting down any hardware or taking any locks
     * reboot the system. This is called when we know we are in
     * trouble so this is our best effort to reboot. This is
     * safe to call in interrupt context.
     */
    void emergency_restart(void)
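
    A rough sketch of the resulting locking pattern (not the patch itself;
    the check inside is_non_blocking_path() and the surrounding code are
    simplified assumptions for illustration):

    static bool is_non_blocking_path(void)
    {
            /* assumed check: NMI context or an oops already in progress */
            return in_nmi() || oops_in_progress;
    }

    static void pstore_dump_sketch(struct pstore_info *psinfo)
    {
            unsigned long flags;

            if (is_non_blocking_path()) {
                    /* bail out instead of deadlocking if another cpu holds the lock */
                    if (!spin_trylock_irqsave(&psinfo->buf_lock, flags))
                            return;
            } else {
                    spin_lock_irqsave(&psinfo->buf_lock, flags);
            }

            /* ... fill psinfo->buf and call psinfo->write() ... */

            spin_unlock_irqrestore(&psinfo->buf_lock, flags);
    }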

    Signed-off-by: Seiji Aguchi
    Acked-by: Don Zickus
    Signed-off-by: Tony Luck

    Seiji Aguchi
     

27 Nov, 2012

1 commit

  • [Issue]

    Currently, the variable name that identifies each entry consists of type, id and ctime.
    But if multiple events happen in a short time, the second/third event may fail to be
    logged because efi_pstore can't distinguish the events with the current variable name.

    [Solution]

    A reasonable way to identify all events precisely is to add a sequence counter to
    the variable name.

    The sequence counter is already supported in the pstore layer as "oopscount",
    so this patch adds it to the variable name.
    It is also passed to the read/erase callbacks of the platform drivers, in
    accordance with the modified variable name.


    variable name of the first event: dump-type0-1-12345678
    variable name of the second event: dump-type0-1-12345678

    type: 0
    id: 1
    ctime: 12345678

    If multiple events happen in a short time, efi_pstore can't distinguish them because
    their variable names are identical.

    They become distinguishable by adding a sequence counter, as follows:

    variable name of the first event: dump-type0-1-1-12345678
    variable name of the second event: dump-type0-1-2-12345678

    type: 0
    id: 1
    sequence counter: 1 (first event), 2 (second event)
    ctime: 12345678

    In the case of the write callback executed from pstore_console_write(), "0" is passed
    as the counter argument because console logging just records all kernel messages and
    doesn't need to care about multiple events.
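
    As a minimal sketch of the naming scheme above (the helper name and exact
    format string are illustrative assumptions, not quoted from the patch):

    /* builds e.g. "dump-type0-1-2-12345678" for the second event */
    static void efi_pstore_sketch_name(char *name, size_t len, unsigned int type,
                                       unsigned long long id, unsigned int count,
                                       unsigned long ctime)
    {
            scnprintf(name, len, "dump-type%u-%llu-%u-%lu", type, id, count, ctime);
    }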

    Signed-off-by: Seiji Aguchi
    Acked-by: Rafael J. Wysocki
    Acked-by: Mike Waychison
    Signed-off-by: Tony Luck

    Seiji Aguchi
     

15 Nov, 2012

1 commit

  • Passing a NULL id causes a NULL pointer dereference in writers such as
    erst_writer and efi_pstore_write because they expect to update this id.
    Pass a dummy id instead.

    This avoids a cascade of oopses: the initial pstore_console_write passes
    a NULL, the resulting oops writes to the console, and that triggers further
    oopses in subsequent pstore_console_write calls.

    Signed-off-by: Colin Ian King
    Acked-by: Kees Cook
    Cc: stable@vger.kernel.org
    Signed-off-by: Anton Vorontsov

    Colin Ian King
     

21 Sep, 2012

1 commit


07 Sep, 2012

1 commit

  • With this patch we no longer reuse the function tracer infrastructure;
    instead, we register our own tracer back-end via a debugfs knob (a rough
    sketch of the registration follows the list below).

    It's a bit more code, but that is the only downside. On the bright side we
    have:

    - Ability to make the persistent_ram module removable (when needed, we can
    move the ftrace_ops struct into a module). Note that persistent_ram is still
    not removable for other reasons, but with this patch it's one less thing
    to worry about;

    - The pstore part is more isolated from the generic function tracer. We
    already tried registering our own tracer in available_tracers, but that
    way we lose the ability to see the traces while we record them to
    pstore. This solution is somewhere in the middle: we only register an
    "internal ftracer" back-end, but not the "front-end";

    - When only pstore tracing is enabled, the kernel writes only to the
    pstore buffer, bypassing the function tracer buffer (which, of course,
    can still be enabled via 'echo function > current_tracer').
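
    A rough sketch of what registering such an internal back-end looks like
    (the two-argument callback signature is the one from that kernel era, and
    the function names here are illustrative assumptions):

    static void notrace pstore_ftrace_call(unsigned long ip, unsigned long parent_ip)
    {
            /* record ip/parent_ip into the persistent ram buffer */
    }

    static struct ftrace_ops pstore_ftrace_ops = {
            .func = pstore_ftrace_call,
    };

    /* toggled from a debugfs file */
    static int pstore_ftrace_enable(void)
    {
            return register_ftrace_function(&pstore_ftrace_ops);
    }

    static int pstore_ftrace_disable(void)
    {
            return unregister_ftrace_function(&pstore_ftrace_ops);
    }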

    Suggested-by: Steven Rostedt
    Signed-off-by: Anton Vorontsov

    Anton Vorontsov
     

18 Jul, 2012

2 commits

  • With this support, the kernel can save a function call chain log into a
    persistent RAM buffer that can be decoded and dumped after reboot
    through the pstore filesystem. It can be used to determine which function
    was last called before a reset or panic.

    We store the log in a binary format and then decode it at read time.

    P.S.
    Most of the code comes from the trace_persistent.c driver found in the
    Android git tree, written by Colin Cross
    (according to the sign-off history). I reworked the driver a little
    and ported it to pstore.

    Signed-off-by: Anton Vorontsov
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     
  • For function tracing we need to stop using pstore.buf directly, since
    we can't use spinlocks in a tracing callback, and thus we can't safely
    use the global buffer.

    With the write_buf callback, backends no longer need to access pstore.buf
    directly, so we can pass them any buffer (e.g. one allocated on the stack).
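
    A minimal sketch of the shape of such a callback (the argument list here
    is deliberately abbreviated and assumed; the real callback also carries
    record type, reason and id information):

    static int example_write_buf(const char *buf, size_t size,
                                 struct pstore_info *psi)
    {
            /* copy 'size' bytes from 'buf' into the backend's own storage;
             * no global buffer access, so no locking constraints here */
            return 0;
    }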

    Signed-off-by: Anton Vorontsov
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     

26 Jun, 2012

1 commit


16 Jun, 2012

1 commit

  • Provide an iterator to receive the log buffer content, and convert all
    kmsg_dump() users to it.

    The structured data in the kmsg buffer now contains binary data, which
    should no longer be copied verbatim to the kmsg_dump() users.

    The iterator provides reliable access to the buffer data and also
    supports proper log line-aware chunking of data while iterating.
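
    A sketch of a dumper using the iterator (the callback shape and the
    kmsg_dump_get_buffer() arguments reflect that era's API, which later
    changed; the body is illustrative):

    static void example_dump(struct kmsg_dumper *dumper,
                             enum kmsg_dump_reason reason)
    {
            char buf[1024];
            size_t len;

            /* each call hands back the next properly line-chunked piece */
            while (kmsg_dump_get_buffer(dumper, true, buf, sizeof(buf), &len)) {
                    /* persist 'len' bytes of log text */
            }
    }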

    Signed-off-by: Kay Sievers
    Tested-by: Tony Luck
    Reported-by: Anton Vorontsov
    Tested-by: Anton Vorontsov
    Signed-off-by: Greg Kroah-Hartman

    Kay Sievers
     

14 Jun, 2012

3 commits

  • Having automatic updates seems pointless for a production system, and
    even dangerous and thus counter-productive:

    1. If we can mount pstore, or read files, we can just as well read
    /proc/kmsg. So there's little point in duplicating the
    functionality and presenting the same information via another
    userland ABI;

    2. Expecting the kernel to behave sanely after an oops/panic is naive.
    It might work, but you'd rather not rely on it. A screwed-up kernel
    can do rather bad things, like recursive faults[1]; and pstore
    rather provokes bad things to happen. It uses:

    1. Timers (which assume a sane interrupt state);
    2. Workqueues and mutexes (which assume the scheduler is in a sane state);
    3. kzalloc (which assumes a working slab allocator).

    That's too much for a dead kernel, so the debugging facility
    itself might just make debugging harder, which is not what
    we want.

    Maybe for non-oops message types it would make sense to re-enable
    automatic updates, but so far I don't see any use case for this.
    Even tracing has its own normal run-time ABI, so we're only
    interested in pstore upon the next boot, to retrieve what has
    gone wrong with the HW or SW.

    So, let's disable the updates by default.

    [1]
    BUG: unable to handle kernel paging request at fffffffffffffff8
    IP: [] kthread_data+0xb/0x20
    [...]
    Process kworker/0:1 (pid: 14, threadinfo ffff8800072c0000, task ffff88000725b100)
    [...]
    Call Trace:
    [] wq_worker_sleeping+0x10/0xa0
    [] __schedule+0x568/0x7d0
    [] ? trace_hardirqs_on+0xd/0x10
    [] ? call_rcu_sched+0x12/0x20
    [] ? release_task+0x156/0x2d0
    [] ? release_task+0x1e/0x2d0
    [] ? trace_hardirqs_on+0xd/0x10
    [] schedule+0x24/0x70
    [] do_exit+0x1f8/0x370
    [] oops_end+0x77/0xb0
    [] no_context+0x1a6/0x1b5
    [] __bad_area_nosemaphore+0x1ce/0x1ed
    [] ? ttwu_queue+0xc6/0xe0
    [] bad_area_nosemaphore+0xe/0x10
    [] do_page_fault+0x2c7/0x450
    [] ? __lock_release+0x6b/0xe0
    [] ? mark_held_locks+0x61/0x140
    [] ? __wake_up+0x4e/0x70
    [] ? trace_hardirqs_off_thunk+0x3a/0x3c
    [] ? pstore_register+0x120/0x120
    [] page_fault+0x1f/0x30
    [] ? pstore_register+0x120/0x120
    [] ? memcpy+0x68/0x110
    [] ? pstore_get_records+0x3a/0x130
    [] ? persistent_ram_copy_old+0x64/0x90
    [] ramoops_pstore_read+0x84/0x130
    [] pstore_get_records+0x79/0x130
    [] ? process_one_work+0x116/0x450
    [] ? pstore_register+0x120/0x120
    [] pstore_dowork+0xe/0x10
    [] process_one_work+0x174/0x450
    [] ? process_one_work+0x116/0x450
    [] worker_thread+0x123/0x2d0
    [] ? manage_workers.isra.28+0x120/0x120
    [] kthread+0x8e/0xa0
    [] kernel_thread_helper+0x4/0x10
    [] ? retint_restore_args+0xe/0xe
    [] ? __init_kthread_worker+0x70/0x70
    [] ? gs_change+0xb/0xb
    Code: be e2 00 00 00 48 c7 c7 d1 2a 4e 81 e8 bf fb fd ff 48 8b 5d f0 4c 8b 65 f8 c9 c3 0f 1f 44 00 00 48 8b 87 08 02 00 00 55 48 89 e5 8b 40 f8 5d c3 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00
    RIP [] kthread_data+0xb/0x20
    RSP
    CR2: fffffffffffffff8
    ---[ end trace 996a332dc399111d ]---
    Fixing recursive fault but reboot is needed!

    Signed-off-by: Anton Vorontsov
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     
  • There is no behavioural change; the default value is still 60 seconds.

    Signed-off-by: Anton Vorontsov
    Acked-by: Kees Cook
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     
  • Pstore doesn't support logging kernel messages at run time; it only
    dumps dmesg when the kernel oopses/panics. This makes pstore useless for
    debugging hangs caused by HW issues or improper use of HW (e.g. a
    weird device is inserted -> the driver writes to reserved bits ->
    the SoC hangs). In that case we don't get any messages in pstore.

    Therefore, let's add runtime logging support: PSTORE_TYPE_CONSOLE.
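
    A minimal sketch of how such a console-type logger can be hooked up (the
    struct contents and names here are illustrative, not the exact patch):

    static void pstore_console_write_sketch(struct console *con, const char *s,
                                            unsigned int count)
    {
            /* hand 's'/'count' to the backend as a PSTORE_TYPE_CONSOLE record */
    }

    static struct console pstore_console_sketch = {
            .name   = "pstore",
            .write  = pstore_console_write_sketch,
            .flags  = CON_PRINTBUFFER | CON_ENABLED,
            .index  = -1,
    };

    /* and during backend registration: register_console(&pstore_console_sketch); */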

    Signed-off-by: Anton Vorontsov
    Acked-by: Kees Cook
    Acked-by: Colin Cross
    Signed-off-by: Greg Kroah-Hartman

    Anton Vorontsov
     

17 Mar, 2012

1 commit

  • Recently, there have been some changes to kmsg_dump(), listed below, which have been applied to Linus' tree.
    (1) kmsg_dump(KMSG_DUMP_KEXEC) was removed.
    http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commitdiff;h=a3dd3323058d281abd584b15ad4c5b65064d7a61

    (2) The order of "enum kmsg_dump_reason" was modified.
    http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commitdiff;h=c22ab332902333f83766017478c1ef6607ace681

    Replace the fragile reason_str array with a more robust solution that
    will not be broken by future re-arrangements of the enum values.
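
    The robust mapping can be done with a switch over the enum values, for
    example (a sketch; the helper name is illustrative):

    static const char *get_reason_str(enum kmsg_dump_reason reason)
    {
            switch (reason) {
            case KMSG_DUMP_PANIC:
                    return "Panic";
            case KMSG_DUMP_OOPS:
                    return "Oops";
            case KMSG_DUMP_EMERG:
                    return "Emergency";
            case KMSG_DUMP_RESTART:
                    return "Restart";
            case KMSG_DUMP_HALT:
                    return "Halt";
            case KMSG_DUMP_POWEROFF:
                    return "Poweroff";
            default:
                    return "Unknown";
            }
    }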

    Signed-off-by: Seiji Aguchi
    Signed-off-by: Adrian Hunter
    Acked-by: Don Zickus
    Link: https://lkml.org/lkml/2012/3/16/417
    Signed-off-by: Tony Luck

    Seiji Aguchi
     

19 Nov, 2011

1 commit


18 Nov, 2011

2 commits

  • This allows a backend to filter on the dmesg reason as well as the pstore
    reason. When ramoops is switched to pstore, this is needed since it has
    no interest in storing non-crash dmesg details.

    Drop pstore_write(), as it has no users and there is no obviously
    correct value for handling the "reason" there.

    Signed-off-by: Kees Cook
    Signed-off-by: Tony Luck

    Kees Cook
     
  • The buf_lock cannot be held while populating the inodes, so make the backend
    pass forward an allocated and filled buffer instead. This fixes the backtrace
    below. The effect is that "buf" is only ever used to notify the backends
    that something was written to it, and shouldn't be used in the read path.

    To replace the buf_lock during the read path, isolate the open/read/close
    loop with a separate mutex to maintain serialized access to the backend.

    Note that it is up to the pstore backend to cope if the (*write)() path is
    called in the middle of the read path.
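
    A rough sketch of the serialized read loop (field names and the read()
    and pstore_mkfile() argument lists are simplified assumptions):

    static void pstore_get_records_sketch(struct pstore_info *psi)
    {
            char *buf = NULL;
            ssize_t size;
            u64 id;
            enum pstore_type_id type;
            struct timespec time;

            mutex_lock(&psi->read_mutex);
            if (psi->open && psi->open(psi))
                    goto out;
            /* each read() hands back a freshly allocated, filled buffer */
            while ((size = psi->read(&id, &type, &time, &buf, psi)) > 0) {
                    pstore_mkfile(type, psi->name, id, buf, size, time, psi);
                    kfree(buf);
            }
            if (psi->close)
                    psi->close(psi);
    out:
            mutex_unlock(&psi->read_mutex);
    }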

    [ 59.691019] BUG: sleeping function called from invalid context at .../mm/slub.c:847
    [ 59.691019] in_atomic(): 0, irqs_disabled(): 1, pid: 1819, name: mount
    [ 59.691019] Pid: 1819, comm: mount Not tainted 3.0.8 #1
    [ 59.691019] Call Trace:
    [ 59.691019] [] __might_sleep+0xc3/0xca
    [ 59.691019] [] kmem_cache_alloc+0x32/0xf3
    [ 59.691019] [] ? __d_lookup_rcu+0x6f/0xf4
    [ 59.691019] [] alloc_inode+0x2a/0x64
    [ 59.691019] [] new_inode+0x18/0x43
    [ 59.691019] [] pstore_get_inode.isra.1+0x11/0x98
    [ 59.691019] [] pstore_mkfile+0xae/0x26f
    [ 59.691019] [] ? kmem_cache_free+0x19/0xb1
    [ 59.691019] [] ? ida_get_new_above+0x140/0x158
    [ 59.691019] [] ? __init_rwsem+0x1e/0x2c
    [ 59.691019] [] ? inode_init_always+0x111/0x1b0
    [ 59.691019] [] ? should_resched+0xd/0x27
    [ 59.691019] [] ? _cond_resched+0xd/0x21
    [ 59.691019] [] pstore_get_records+0x52/0xa7
    [ 59.691019] [] pstore_fill_super+0x7d/0x91
    [ 59.691019] [] mount_single+0x46/0x82
    [ 59.691019] [] pstore_mount+0x15/0x17
    [ 59.691019] [] ? pstore_get_inode.isra.1+0x98/0x98
    [ 59.691019] [] mount_fs+0x5a/0x12d
    [ 59.691019] [] ? alloc_vfsmnt+0xa4/0x14a
    [ 59.691019] [] vfs_kern_mount+0x4f/0x7d
    [ 59.691019] [] do_kern_mount+0x34/0xb2
    [ 59.691019] [] do_mount+0x5fc/0x64a
    [ 59.691019] [] ? strndup_user+0x2e/0x3f
    [ 59.691019] [] sys_mount+0x66/0x99
    [ 59.691019] [] sysenter_do_call+0x12/0x26

    Signed-off-by: Kees Cook
    Signed-off-by: Tony Luck

    Kees Cook
     

13 Oct, 2011

1 commit


17 Aug, 2011

2 commits

  • pstore was using mutex locking to protect read/write access to the
    backend plug-ins. This causes problems when pstore is executed in
    an NMI context through panic() -> kmsg_dump().

    This patch changes the mutex to a spin_lock_irqsave and also checks
    whether we are in an NMI context. If we are in an NMI and can't get the
    lock, just print a message stating that and blow past the locking.

    All this is probably a hack around the bigger locking problem but it
    solves my current situation of trying to sleep in an NMI context.

    Tested by loading the lkdtm module and executing a HARDLOCKUP, which
    causes the machine to panic inside the NMI handler.

    Signed-off-by: Don Zickus
    Acked-by: Matthew Garrett
    Signed-off-by: Tony Luck

    Don Zickus
     
  • Life is simple for all the kernel-terminating types of kmsg_dump
    callbacks - pstore just saves the tail end of the console log. But
    for "oops" the situation is more complex - the kernel may carry on
    running (possibly forever). So we'd like to make the logged copy
    of the oops appear in the pstore filesystem - so that the user has
    a handle to clear the entry from the persistent backing store (if
    we don't, the store may fill with "oops" entries that are also
    safely stashed in /var/log/messages, leaving no space for real
    errors).

    Current code calls pstore_mkfile() immediately. But this may
    not be safe. The oops could have happened with arbitrary locks
    held, or in interrupt or NMI context. So allocating memory and
    calling into generic filesystem code seems unwise.

    This patch defers making the entry appear. At the time
    of the oops, we merely set a flag, "pstore_new_entry", noting that
    a new entry has been added. A periodic timer checks once a minute
    to see whether the flag is set - if so, it schedules a workqueue item to
    rescan the backing store and make all new entries appear in the
    pstore filesystem.
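
    A sketch of that deferral mechanism (names follow the description above;
    the unsigned-long timer callback is the API form of that kernel era):

    static int pstore_new_entry;            /* set from the dump path */

    static void pstore_dowork(struct work_struct *work)
    {
            /* rescan the backing store and create the new pstore files */
    }
    static DECLARE_WORK(pstore_work, pstore_dowork);

    static struct timer_list pstore_timer;

    static void pstore_timefunc(unsigned long dummy)
    {
            if (pstore_new_entry) {
                    pstore_new_entry = 0;
                    schedule_work(&pstore_work);
            }
            /* check again in a minute */
            mod_timer(&pstore_timer, jiffies + 60 * HZ);
    }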

    Signed-off-by: Tony Luck

    Luck, Tony
     

23 Jul, 2011

4 commits


17 May, 2011

2 commits

  • Currently, after a mount/remount operation on the pstore filesystem,
    the content of pstore is lost. This is because the current ERST
    implementation doesn't support multi-user usage; it moves its
    internal pointer to the end after accessing it. Add
    multi-user support for pstore usage.

    Signed-off-by: Chen Gong
    Signed-off-by: Tony Luck

    Chen Gong
     
  • The return type of the _read_ function in pstore is size_t,
    but the logic in the _read_ callbacks doesn't take this into
    account: if a negative value (meaning an error) is returned, it
    is converted to a large positive one by the type cast. ssize_t is
    the right type for this function.
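
    A two-line illustration of the sign problem (EINVAL is just an example
    error code):

    size_t  bad  = -EINVAL;   /* wraps to a huge positive count, looks valid */
    ssize_t good = -EINVAL;   /* stays negative, so callers can detect the error */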

    Signed-off-by: Chen Gong
    Signed-off-by: Tony Luck

    Chen Gong
     

23 Mar, 2011

1 commit

  • pstore_dump() can be called with many different "reason" codes. Save
    the name of the code in the persistent store record.

    Also, it is only worthwhile calling pstore_mkfile() for KMSG_DUMP_OOPS -
    that is the only case in which the kernel will continue running.

    Reviewed-by: Seiji Aguchi
    Signed-off-by: Tony Luck

    Tony Luck
     

22 Mar, 2011

1 commit


29 Dec, 2010

1 commit

  • Some platforms have a small amount of non-volatile storage that
    can be used to store information useful to diagnose the cause of
    a system crash. This is the generic part of a file system interface
    that presents information from the crash as a series of files in
    /dev/pstore. Once the information has been seen, the underlying
    storage is freed by deleting the files.

    Signed-off-by: Tony Luck

    Tony Luck