25 Jan, 2013

15 commits

  • global_cwq is now nothing but a container for per-cpu standard
    worker_pools. Declare the worker pools directly as
    cpu/unbound_std_worker_pools[] and remove global_cwq.

    * ____cacheline_aligned_in_smp moved from global_cwq to worker_pool.
    This probably would have made sense even before this change as we
    want each pool to be aligned.

    * get_gcwq() is replaced with std_worker_pools() which returns the
    pointer to the standard pool array for a given CPU.

    * __alloc_workqueue_key() updated to use get_std_worker_pool() instead
    of open-coding pool determination.

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    v2: Joonsoo pointed out that it'd be better to align struct worker_pool
    rather than the array so that every pool is aligned.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan
    Cc: Joonsoo Kim

    Tejun Heo
     
  • The only remaining user of pool->gcwq is std_worker_pool_pri().
    Reimplement it using get_gcwq() and remove worker_pool->gcwq.

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • for_each_std_worker_pool() takes @cpu instead of @gcwq.

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • Instead of holding locks from both pools and then processing the pools
    together, make freezing/thawing per-pool - grab the locks of one pool,
    process it, release them and then proceed to the next pool.

    While this patch changes processing order across pools, order within
    each pool remains the same. As each pool is independent, this
    shouldn't break anything.

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • Instead of holding locks from both pools and then processing the pools
    together, make hotplug processing per-pool - grab locks of one pool,
    process it, release it and then proceed to the next pool.

    rebind_workers() is updated to take and process @pool instead of @gcwq
    which results in a lot of de-indentation. gcwq_claim_assoc_and_lock()
    and its counterpart are replaced with in-line per-pool locking.

    While this patch changes processing order across pools, order within
    each pool remains the same. As each pool is independent, this
    shouldn't break anything.

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • Move gcwq->lock to pool->lock. The conversion is mostly
    straight-forward. Things worth noting are:

    * In many places, this removes the need to use gcwq completely. pool
    is used directly instead. get_std_worker_pool() is added to help
    some of these conversions. This also leaves get_work_gcwq() without
    any user. Removed.

    * In hotplug and freezer paths, the pools belonging to a CPU are often
    processed together. This patch makes those paths hold locks of all
    pools, with highpri lock nested inside, to keep the conversion
    straight-forward. These nested lockings will be removed by
    following patches.

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • Move gcwq->cpu to pool->cpu. This introduces a couple places where
    gcwq->pools[0].cpu is used. These will soon go away as gcwq is
    further reduced.

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • There's no functional necessity for the two pools on the same CPU to
    share the busy hash table. It's also likely to be a bottleneck when
    implementing pools with user-specified attributes.

    This patch makes busy_hash per-pool. The conversion is mostly
    straight-forward. Changes worth noting are,

    * The large block of changes in rebind_workers() moves the block
    inside for_each_worker_pool() as there now are separate hash tables
    for each pool. This changes the order of operations but doesn't
    break anything.

    * The three for_each_worker_pool() loops in gcwq_unbind_fn() are
    combined into one. This again changes the order of operations but
    doesn't break anything.

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • Currently, when a work item is off-queue, work->data records the CPU
    it was last on, which is used to locate the last executing instance
    for non-reentrance, flushing, etc.

    We're in the process of removing global_cwq and making worker_pool the
    top level abstraction. This patch makes work->data point to the pool
    it was last associated with instead of CPU.

    After the previous WORK_OFFQ_POOL_CPU and worker_pool->id additions,
    the conversion is fairly straight-forward. WORK_OFFQ constants and
    functions are modified to record and read back pool ID instead.
    worker_pool_by_id() is added to allow looking up pool from ID.
    get_work_pool() replaces get_work_gcwq(), which is reimplemented using
    get_work_pool(). get_work_pool_id() replaces work_cpu().

    This patch shouldn't introduce any observable behavior changes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • Add worker_pool->id which is allocated from worker_pool_idr. This
    will be used to record the last associated worker_pool in work->data.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • Currently, when a work item is off-queue, the high bits of its data
    encode the last CPU it was on. This is scheduled to be changed to
    pool ID, which will make it impossible to use WORK_CPU_NONE to
    indicate no association.

    This patch limits the number of bits which are used for off-queue cpu
    number to 31 (so that the max fits in an int) and uses the highest
    possible value - WORK_OFFQ_CPU_NONE - to indicate no association.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • Make GCWQ_FREEZING a pool flag, POOL_FREEZING. This patch doesn't
    change locking - FREEZING on both pools of a CPU is set or cleared
    together while holding gcwq->lock. It shouldn't cause any functional
    difference.

    This leaves gcwq->flags w/o any flags. Removed.

    While at it, convert BUG_ON()s in freeze_workqueue_begin() and
    thaw_workqueues() to WARN_ON_ONCE().

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • Make GCWQ_DISASSOCIATED a pool flag, POOL_DISASSOCIATED. This patch
    doesn't change locking - DISASSOCIATED on both pools of a CPU is set
    or cleared together while holding gcwq->lock. It shouldn't cause any
    functional difference.

    This is part of an effort to remove global_cwq and make worker_pool
    the top level abstraction, which in turn will help implementing worker
    pools with user-specified attributes.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • There are currently two worker pools per CPU (including the unbound
    CPU) and they are the only pools in use. A new class of pools is
    scheduled to be added, and some pool-related APIs will be added in
    between. Call the existing pools the standard pools and prefix them
    with std_. Do this early so that new APIs can use the std_ prefix
    from the beginning.

    This patch doesn't introduce any functional difference.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     
  • This function no longer has any external users. Unexport it. It will
    be removed later on.

    Signed-off-by: Tejun Heo
    Reviewed-by: Lai Jiangshan

    Tejun Heo
     

19 Jan, 2013

3 commits


18 Jan, 2013

1 commit

  • PF_WQ_WORKER is used to tell scheduler that the task is a workqueue
    worker and needs wq_worker_sleeping/waking_up() invoked on it for
    concurrency management. As rescuers never participate in concurrency
    management, PF_WQ_WORKER wasn't set on them.

    There's a need for an interface which can query whether %current is
    executing a work item and if so which. Such interface requires a way
    to identify all tasks which may execute work items and PF_WQ_WORKER
    will be used for that. As all normal workers always have PF_WQ_WORKER
    set, we only need to add it to rescuers.

    As rescuers start with WORKER_PREP but never clear it, they are
    always NOT_RUNNING and there's no need to worry about them
    interfering with concurrency management even if PF_WQ_WORKER is set;
    however, unlike normal workers, rescuers currently don't have their
    worker struct as kthread_data(). They use the associated
    workqueue_struct instead. This is problematic as
    wq_worker_sleeping/waking_up() expect a struct worker at
    kthread_data().

    This patch adds worker->rescue_wq, starts rescuer kthreads with the
    worker struct as kthread_data() and sets PF_WQ_WORKER on rescuers.

    Signed-off-by: Tejun Heo
    Cc: Linus Torvalds

    Tejun Heo
     

20 Dec, 2012

1 commit

  • 42f8570f43 ("workqueue: use new hashtable implementation") incorrectly
    made busy workers hashed by the pointer value of worker instead of
    work. This broke find_worker_executing_work() which in turn broke a
    lot of fundamental operations of workqueue - non-reentrancy and
    flushing among others. The flush malfunction triggered warning in
    disk event code in Fengguang's automated test.

    write_dev_root_ (3265) used greatest stack depth: 2704 bytes left
    ------------[ cut here ]------------
    WARNING: at /c/kernel-tests/src/stable/block/genhd.c:1574 disk_clear_events+0xcf/0x108()
    Hardware name: Bochs
    Modules linked in:
    Pid: 3328, comm: ata_id Not tainted 3.7.0-01930-gbff6343 #1167
    Call Trace:
    [] warn_slowpath_common+0x83/0x9c
    [] warn_slowpath_null+0x1a/0x1c
    [] disk_clear_events+0xcf/0x108
    [] check_disk_change+0x27/0x59
    [] cdrom_open+0x49/0x68b
    [] idecd_open+0x88/0xb7
    [] __blkdev_get+0x102/0x3ec
    [] blkdev_get+0x18f/0x30f
    [] blkdev_open+0x75/0x80
    [] do_dentry_open+0x1ea/0x295
    [] finish_open+0x35/0x41
    [] do_last+0x878/0xa25
    [] path_openat+0xc6/0x333
    [] do_filp_open+0x38/0x86
    [] do_sys_open+0x6c/0xf9
    [] sys_open+0x21/0x23
    [] system_call_fastpath+0x16/0x1b

    Signed-off-by: Tejun Heo
    Reported-by: Fengguang Wu
    Cc: Sasha Levin

    Tejun Heo
     

19 Dec, 2012

2 commits

  • To avoid executing the same work item concurrently, workqueue hashes
    currently busy workers according to their current work items and looks
    up the table when it wants to execute a new work item. If there
    already is a worker which is executing the new work item, the new item
    is queued to the found worker so that it gets executed only after the
    current execution finishes.

    Unfortunately, a work item may be freed while being executed and thus
    recycled for different purposes. If it gets recycled for a different
    work item and queued while the previous execution is still in
    progress, workqueue may make the new work item wait for the old one
    although the two aren't really related in any way.

    In extreme cases, this false dependency may lead to deadlock although
    it's extremely unlikely given that there aren't too many self-freeing
    work item users and they usually don't wait for other work items.

    To alleviate the problem, record the current work function in each
    busy worker and match it together with the work item address in
    find_worker_executing_work(). While this isn't complete, it ensures
    that unrelated work items don't interact with each other and in the
    very unlikely case where a twisted wq user triggers it, it's always
    onto itself making the culprit easy to spot.

    Signed-off-by: Tejun Heo
    Reported-by: Andrey Isakov
    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=51701
    Cc: stable@vger.kernel.org

    Tejun Heo
     
  • Switch workqueues to use the new hashtable implementation. This reduces the
    amount of generic unrelated code in the workqueues.

    This patch depends on d9b482c ("hashtable: introduce a small and naive
    hashtable") which was merged in v3.6.

    Acked-by: Tejun Heo
    Signed-off-by: Sasha Levin
    Signed-off-by: Tejun Heo

    Sasha Levin
     

18 Dec, 2012

18 commits

  • Merge misc patches from Andrew Morton:
    "Incoming:

    - lots of misc stuff

    - backlight tree updates

    - lib/ updates

    - Oleg's percpu-rwsem changes

    - checkpatch

    - rtc

    - aoe

    - more checkpoint/restart support

    I still have a pile of MM stuff pending - Pekka should be merging
    later today after which that is good to go. A number of other things
    are twiddling thumbs awaiting maintainer merges."

    * emailed patches from Andrew Morton : (180 commits)
    scatterlist: don't BUG when we can trivially return a proper error.
    docs: update documentation about /proc/pid/fdinfo/ fanotify output
    fs, fanotify: add @mflags field to fanotify output
    docs: add documentation about /proc/pid/fdinfo/ output
    fs, notify: add procfs fdinfo helper
    fs, exportfs: add exportfs_encode_inode_fh() helper
    fs, exportfs: escape nil dereference if no s_export_op present
    fs, epoll: add procfs fdinfo helper
    fs, eventfd: add procfs fdinfo helper
    procfs: add ability to plug in auxiliary fdinfo providers
    tools/testing/selftests/kcmp/kcmp_test.c: print reason for failure in kcmp_test
    breakpoint selftests: print failure status instead of cause make error
    kcmp selftests: print fail status instead of cause make error
    kcmp selftests: make run_tests fix
    mem-hotplug selftests: print failure status instead of cause make error
    cpu-hotplug selftests: print failure status instead of cause make error
    mqueue selftests: print failure status instead of cause make error
    vm selftests: print failure status instead of cause make error
    ubifs: use prandom_bytes
    mtd: nandsim: use prandom_bytes
    ...

    Linus Torvalds
     
  • When compiling efivars.c the build fails with:

    CC drivers/firmware/efivars.o
    drivers/firmware/efivars.c: In function ‘efivarfs_get_inode’:
    drivers/firmware/efivars.c:886:31: error: incompatible types when assigning to type ‘kgid_t’ from type ‘int’
    make[2]: *** [drivers/firmware/efivars.o] Error 1
    make[1]: *** [drivers/firmware/efivars.o] Error 2

    Fix the build error by removing the duplicate initialization of i_uid
    and i_gid; inode_init_always() has already initialized them to 0.

    Signed-off-by: "Eric W. Biederman"
    Signed-off-by: Linus Torvalds

    Eric W. Biederman
     
  • This build error is currently hidden by the fact that the x86
    implementation of 'update_mmu_cache_pmd()' is a macro that doesn't use
    its last argument, but commit b32967ff101a ("mm: numa: Add THP migration
    for the NUMA working set scanning fault case") introduced a call with
    the wrong third argument.

    In the akpm tree, it causes this build error:

    mm/migrate.c: In function 'migrate_misplaced_transhuge_page_put':
    mm/migrate.c:1666:2: error: incompatible type for argument 3 of 'update_mmu_cache_pmd'
    arch/x86/include/asm/pgtable.h:792:20: note: expected 'struct pmd_t *' but argument is of type 'pmd_t'

    Fix it.

    Signed-off-by: Stephen Rothwell
    Signed-off-by: Linus Torvalds

    Stephen Rothwell
     
  • There is absolutely no reason to crash the kernel when we have a
    perfectly good return value already available to use for conveying
    failure status.

    Let's return an error code instead of crashing the kernel: that sounds
    like a much better plan.

    [akpm@linux-foundation.org: s/E2BIG/EINVAL/]
    Signed-off-by: Nick Bowler
    Cc: Maxim Levitsky
    Cc: Tejun Heo
    Cc: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Bowler
     
  • Signed-off-by: Cyrill Gorcunov
    Cc: Pavel Emelyanov
    Cc: Oleg Nesterov
    Cc: Andrey Vagin
    Cc: Al Viro
    Cc: Alexey Dobriyan
    Cc: James Bottomley
    Cc: "Aneesh Kumar K.V"
    Cc: Alexey Dobriyan
    Cc: Matthew Helsley
    Cc: "J. Bruce Fields"
    Cc: "Aneesh Kumar K.V"
    Cc: Tvrtko Ursulin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • The kernel keeps the FAN_MARK_IGNORED_SURV_MODIFY bit separately from
    fsnotify_mark::mask|ignored_mask, thus put it in the @mflags (mark
    flags) field so a user-space reader will be able to detect whether
    such a bit was used in the mark creation procedure.

    | pos: 0
    | flags: 04002
    | fanotify flags:10 event-flags:0
    | fanotify mnt_id:12 mflags:40 mask:38 ignored_mask:40000003
    | fanotify ino:4f969 sdev:800013 mflags:0 mask:3b ignored_mask:40000000 fhandle-bytes:8 fhandle-type:1 f_handle:69f90400c275b5b4

    Signed-off-by: Cyrill Gorcunov
    Cc: Pavel Emelyanov
    Cc: Oleg Nesterov
    Cc: Andrey Vagin
    Cc: Al Viro
    Cc: Alexey Dobriyan
    Cc: James Bottomley
    Cc: "Aneesh Kumar K.V"
    Cc: Matthew Helsley
    Cc: "J. Bruce Fields"
    Cc: Tvrtko Ursulin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • [akpm@linux-foundation.org: tweak documentation]
    Signed-off-by: Cyrill Gorcunov
    Cc: Pavel Emelyanov
    Cc: Oleg Nesterov
    Cc: Andrey Vagin
    Cc: Al Viro
    Cc: Alexey Dobriyan
    Cc: James Bottomley
    Cc: "Aneesh Kumar K.V"
    Cc: Alexey Dobriyan
    Cc: Matthew Helsley
    Cc: "J. Bruce Fields"
    Cc: "Aneesh Kumar K.V"
    Cc: Tvrtko Ursulin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • This allows us to print out fsnotify details such as the watchee
    inode, device, mask and, optionally, a file handle.

    For inotify objects, if the kernel is compiled with exportfs support,
    the output will be

    | pos: 0
    | flags: 02000000
    | inotify wd:3 ino:9e7e sdev:800013 mask:800afce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:7e9e0000640d1b6d
    | inotify wd:2 ino:a111 sdev:800013 mask:800afce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:11a1000020542153
    | inotify wd:1 ino:6b149 sdev:800013 mask:800afce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:49b1060023552153

    If the kernel is compiled without exportfs support, the file handle
    won't be provided, only the inode and device.

    | pos: 0
    | flags: 02000000
    | inotify wd:3 ino:9e7e sdev:800013 mask:800afce ignored_mask:0
    | inotify wd:2 ino:a111 sdev:800013 mask:800afce ignored_mask:0
    | inotify wd:1 ino:6b149 sdev:800013 mask:800afce ignored_mask:0

    For fanotify the output is like

    | pos: 0
    | flags: 04002
    | fanotify flags:10 event-flags:0
    | fanotify mnt_id:12 mask:3b ignored_mask:0
    | fanotify ino:50205 sdev:800013 mask:3b ignored_mask:40000000 fhandle-bytes:8 fhandle-type:1 f_handle:05020500fb1d47e7

    To minimize impact on general fsnotify code the new functionality
    is gathered in fs/notify/fdinfo.c file.

    Signed-off-by: Cyrill Gorcunov
    Acked-by: Pavel Emelyanov
    Cc: Oleg Nesterov
    Cc: Andrey Vagin
    Cc: Al Viro
    Cc: Alexey Dobriyan
    Cc: James Bottomley
    Cc: "Aneesh Kumar K.V"
    Cc: Alexey Dobriyan
    Cc: Matthew Helsley
    Cc: "J. Bruce Fields"
    Cc: "Aneesh Kumar K.V"
    Cc: Tvrtko Ursulin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • We will need this helper in the next patch to provide a file handle
    for inotify marks in /proc/pid/fdinfo output.

    More precisely, the patch provides a way to use inodes directly when
    a dentry is not available (as is the case with inotify).

    Signed-off-by: Cyrill Gorcunov
    Acked-by: Pavel Emelyanov
    Cc: Oleg Nesterov
    Cc: Andrey Vagin
    Cc: Al Viro
    Cc: Alexey Dobriyan
    Cc: James Bottomley
    Cc: "Aneesh Kumar K.V"
    Cc: Alexey Dobriyan
    Cc: Matthew Helsley
    Cc: "J. Bruce Fields"
    Cc: "Aneesh Kumar K.V"
    Cc: Tvrtko Ursulin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • This routine will be used to generate a file handle in fdinfo output
    for the inotify subsystem, where the generic export_encode_fh should
    be used if no s_export_op is present. Thus, add a test for the
    presence of s_export_op inside exportfs_encode_fh itself.

    Signed-off-by: Cyrill Gorcunov
    Acked-by: Pavel Emelyanov
    Cc: Oleg Nesterov
    Cc: Andrey Vagin
    Cc: Al Viro
    Cc: Alexey Dobriyan
    Cc: James Bottomley
    Cc: "Aneesh Kumar K.V"
    Cc: Alexey Dobriyan
    Cc: Matthew Helsley
    Cc: "J. Bruce Fields"
    Cc: "Aneesh Kumar K.V"
    Cc: Tvrtko Ursulin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • This allows us to print out the eventpoll target file descriptor,
    events and data. The /proc/pid/fdinfo/fd output consists of

    | pos: 0
    | flags: 02
    | tfd: 5 events: 1d data: ffffffffffffffff enabled: 1

    [avagin@: fix for uninitialized ret variable]

    Signed-off-by: Cyrill Gorcunov
    Acked-by: Pavel Emelyanov
    Cc: Oleg Nesterov
    Cc: Andrey Vagin
    Cc: Al Viro
    Cc: Alexey Dobriyan
    Cc: James Bottomley
    Cc: "Aneesh Kumar K.V"
    Cc: Alexey Dobriyan
    Cc: Matthew Helsley
    Cc: "J. Bruce Fields"
    Cc: "Aneesh Kumar K.V"
    Cc: Tvrtko Ursulin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • This allows us to print out the raw counter value. The
    /proc/pid/fdinfo/fd output is

    | pos: 0
    | flags: 04002
    | eventfd-count: 5a

    Signed-off-by: Cyrill Gorcunov
    Acked-by: Pavel Emelyanov
    Cc: Oleg Nesterov
    Cc: Andrey Vagin
    Cc: Al Viro
    Cc: Alexey Dobriyan
    Cc: James Bottomley
    Cc: "Aneesh Kumar K.V"
    Cc: Alexey Dobriyan
    Cc: Matthew Helsley
    Cc: "J. Bruce Fields"
    Cc: "Aneesh Kumar K.V"
    Cc: Tvrtko Ursulin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • This patch brings the ability to print out auxiliary data associated
    with a file in the procfs interface /proc/pid/fdinfo/fd.

    In particular, further patches make eventfd, eventpoll, signalfd and
    fsnotify print additional information, complete enough to restore
    these objects after checkpoint.

    To simplify the code we add show_fdinfo callback inside struct
    file_operations (as Al and Pavel are proposing).

    Signed-off-by: Cyrill Gorcunov
    Acked-by: Pavel Emelyanov
    Cc: Oleg Nesterov
    Cc: Andrey Vagin
    Cc: Al Viro
    Cc: Alexey Dobriyan
    Cc: James Bottomley
    Cc: "Aneesh Kumar K.V"
    Cc: Alexey Dobriyan
    Cc: Matthew Helsley
    Cc: "J. Bruce Fields"
    Cc: "Aneesh Kumar K.V"
    Cc: Tvrtko Ursulin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cyrill Gorcunov
     
  • I was curious why sys_kcmp wasn't working, which led me to the testcase.
    It turned out I hadn't enabled CHECKPOINT_RESTORE in the kernel I was
    testing. Add a decoding of errno to the testcase to make that obvious.

    Signed-off-by: Dave Jones
    Acked-by: Cyrill Gorcunov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Jones
     
  • In case the breakpoint test exits with a non-zero value it will
    cause a make error. A better way is to just print the test failure
    status.

    Signed-off-by: Dave Young
    Reviewed-by: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Young
     
  • In case kcmp_test exits with a non-zero value it will cause a make
    error. A better way is to just print the test failure status.

    Signed-off-by: Dave Young
    Reviewed-by: Pekka Enberg
    Cc: Cyrill Gorcunov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Young
     
  • make run_tests needs the target to be run_tests instead of run-tests.
    Also, the gcc output should be kcmp_test. Fix these two issues.

    Signed-off-by: Dave Young
    Cc: Pekka Enberg
    Cc: Cyrill Gorcunov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Young
     
  • Original behavior:
    bash-4.1$ make -C memory-hotplug run_tests
    make: Entering directory `/home/dave/git/linux-2.6/tools/testing/selftests/memory-hotplug'
    ./on-off-test.sh
    make: execvp: ./on-off-test.sh: Permission denied
    make: *** [run_tests] Error 127
    make: Leaving directory `/home/dave/git/linux-2.6/tools/testing/selftests/memory-hotplug'

    After applying the patch:
    bash-4.1$ make -C memory-hotplug run_tests
    make: Entering directory `/home/dave/git/linux-2.6/tools/testing/selftests/memory-hotplug'
    /bin/sh: ./on-off-test.sh: Permission denied
    memory-hotplug selftests: [FAIL]
    make: Leaving directory `/home/dave/git/linux-2.6/tools/testing/selftests/memory-hotplug'

    Signed-off-by: Dave Young
    Reviewed-by: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Young