15 Dec, 2011

1 commit


26 Oct, 2011

1 commit

  • * 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits)
    rtmutex: Add missing rcu_read_unlock() in debug_rt_mutex_print_deadlock()
    lockdep: Comment all warnings
    lib: atomic64: Change the type of local lock to raw_spinlock_t
    locking, lib/atomic64: Annotate atomic64_lock::lock as raw
    locking, x86, iommu: Annotate qi->q_lock as raw
    locking, x86, iommu: Annotate irq_2_ir_lock as raw
    locking, x86, iommu: Annotate iommu->register_lock as raw
    locking, dma, ipu: Annotate bank_lock as raw
    locking, ARM: Annotate low level hw locks as raw
    locking, drivers/dca: Annotate dca_lock as raw
    locking, powerpc: Annotate uic->lock as raw
    locking, x86: mce: Annotate cmci_discover_lock as raw
    locking, ACPI: Annotate c3_lock as raw
    locking, oprofile: Annotate oprofilefs lock as raw
    locking, video: Annotate vga console lock as raw
    locking, latencytop: Annotate latency_lock as raw
    locking, timer_stats: Annotate table_lock as raw
    locking, rwsem: Annotate inner lock as raw
    locking, semaphores: Annotate inner lock as raw
    locking, sched: Annotate thread_group_cputimer as raw
    ...

    Fix up conflicts in kernel/posix-cpu-timers.c manually: making
    cputimer->cputime a raw lock conflicted with the ABBA fix in commit
    bcd5cff7216f ("cputimer: Cure lock inversion").

    Linus Torvalds
     

18 Oct, 2011

1 commit

  • There's a lock inversion between the cputimer->lock and rq->lock;
    notably the two callchains involved are:

    update_rlimit_cpu()
      sighand->siglock
      set_process_cpu_timer()
        cpu_timer_sample_group()
          thread_group_cputimer()
            cputimer->lock
            thread_group_cputime()
              task_sched_runtime()
                ->pi_lock
                  rq->lock

    scheduler_tick()
      rq->lock
      task_tick_fair()
        update_curr()
          account_group_exec()
            cputimer->lock

    Where the first one is enabling a CLOCK_PROCESS_CPUTIME_ID timer, and
    the second one is keeping up-to-date.

    This problem was introduced by e8abccb7193 ("posix-cpu-timers: Cure
    SMP accounting oddities").

    Cure the problem by removing the cputimer->lock and rq->lock nesting.
    This leaves concurrent enablers doing duplicate work, but the time
    wasted should be of the same order as would otherwise be wasted
    spinning on the lock, and the greater-than assignment filter should
    ensure we preserve monotonicity.

    Reported-by: Dave Jones
    Reported-by: Simon Kirby
    Signed-off-by: Peter Zijlstra
    Cc: stable@kernel.org
    Cc: Linus Torvalds
    Cc: Martin Schwidefsky
    Link: http://lkml.kernel.org/r/1318928713.21167.4.camel@twins
    Signed-off-by: Thomas Gleixner

    Peter Zijlstra
     

30 Sep, 2011

1 commit

  • David reported:

    Attached below is a watered-down version of rt/tst-cpuclock2.c from
    GLIBC. Just build it with "gcc -o test test.c -lpthread -lrt" or
    similar.

    Run it several times, and you will see cases where the main thread
    will measure a process clock difference before and after the nanosleep
    which is smaller than the cpu-burner thread's individual thread clock
    difference. This doesn't make any sense since the cpu-burner thread
    is part of the top-level process's thread group.

    I've reproduced this on both x86-64 and sparc64 (using both 32-bit and
    64-bit binaries).

    For example:

    [davem@boricha build-x86_64-linux]$ ./test
    process: before(0.001221967) after(0.498624371) diff(497402404)
    thread: before(0.000081692) after(0.498316431) diff(498234739)
    self: before(0.001223521) after(0.001240219) diff(16698)
    [davem@boricha build-x86_64-linux]$

    The diff of 'process' should always be >= the diff of 'thread'.

    I make sure to wrap the 'thread' clock measurements the most tightly
    around the nanosleep() call, and that the 'process' clock measurements
    are the outer-most ones.

    ---
    /* include list reconstructed; the original header names were lost
       when the angle brackets were stripped */
    #include <errno.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static pthread_barrier_t barrier;

    static void *chew_cpu(void *arg)
    {
            pthread_barrier_wait(&barrier);
            while (1)
                    __asm__ __volatile__("" : : : "memory");
            return NULL;
    }

    int main(void)
    {
            clockid_t process_clock, my_thread_clock, th_clock;
            struct timespec process_before, process_after;
            struct timespec me_before, me_after;
            struct timespec th_before, th_after;
            struct timespec sleeptime;
            unsigned long diff;
            pthread_t th;
            int err;

            err = clock_getcpuclockid(0, &process_clock);
            if (err)
                    return 1;

            err = pthread_getcpuclockid(pthread_self(), &my_thread_clock);
            if (err)
                    return 1;

            pthread_barrier_init(&barrier, NULL, 2);
            err = pthread_create(&th, NULL, chew_cpu, NULL);
            if (err)
                    return 1;

            err = pthread_getcpuclockid(th, &th_clock);
            if (err)
                    return 1;

            pthread_barrier_wait(&barrier);

            err = clock_gettime(process_clock, &process_before);
            if (err)
                    return 1;

            err = clock_gettime(my_thread_clock, &me_before);
            if (err)
                    return 1;

            err = clock_gettime(th_clock, &th_before);
            if (err)
                    return 1;

            sleeptime.tv_sec = 0;
            sleeptime.tv_nsec = 500000000;
            nanosleep(&sleeptime, NULL);

            err = clock_gettime(th_clock, &th_after);
            if (err)
                    return 1;

            err = clock_gettime(my_thread_clock, &me_after);
            if (err)
                    return 1;

            err = clock_gettime(process_clock, &process_after);
            if (err)
                    return 1;

            diff = process_after.tv_nsec - process_before.tv_nsec;
            printf("process: before(%lu.%.9lu) after(%lu.%.9lu) diff(%lu)\n",
                   process_before.tv_sec, process_before.tv_nsec,
                   process_after.tv_sec, process_after.tv_nsec, diff);
            diff = th_after.tv_nsec - th_before.tv_nsec;
            printf("thread: before(%lu.%.9lu) after(%lu.%.9lu) diff(%lu)\n",
                   th_before.tv_sec, th_before.tv_nsec,
                   th_after.tv_sec, th_after.tv_nsec, diff);
            diff = me_after.tv_nsec - me_before.tv_nsec;
            printf("self: before(%lu.%.9lu) after(%lu.%.9lu) diff(%lu)\n",
                   me_before.tv_sec, me_before.tv_nsec,
                   me_after.tv_sec, me_after.tv_nsec, diff);

            return 0;
    }

    This is due to us using p->se.sum_exec_runtime in
    thread_group_cputime() where we iterate the thread group and sum all
    data. This does not take time since the last schedule operation (tick
    or otherwise) into account. We can cure this by using
    task_sched_runtime() at the cost of having to take locks.

    This also means we can (and must) do away with
    thread_group_sched_runtime() since the modified thread_group_cputime()
    is now more accurate and would deadlock when called from
    thread_group_sched_runtime().

    Aside from that, it makes the function safe on 32-bit systems. The old
    code added t->se.sum_exec_runtime unprotected; sum_exec_runtime is a
    64-bit value and could be changed on another cpu at the same time.

    Reported-by: David Miller
    Signed-off-by: Peter Zijlstra
    Cc: stable@kernel.org
    Link: http://lkml.kernel.org/r/1314874459.7945.22.camel@twins
    Tested-by: David Miller
    Signed-off-by: Thomas Gleixner

    Peter Zijlstra
     

13 Sep, 2011

1 commit

  • The thread_group_cputimer lock can be taken in atomic context and therefore
    cannot be preempted on -rt - annotate it.

    In mainline this change documents the low level nature of
    the lock - otherwise there's no functional difference. Lockdep
    and Sparse checking will work as usual.

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

23 May, 2011

1 commit


31 Mar, 2011

1 commit


02 Feb, 2011

7 commits


10 Nov, 2010

1 commit

    Commit 4221a9918e38b7494cee341dda7b7b4bb8c04bde "Add RCU check for
    find_task_by_vpid()" introduced rcu_lockdep_assert() in
    find_task_by_pid_ns(). Add rcu_read_lock()/rcu_read_unlock() around
    the call to find_task_by_vpid().

    Tetsuo Handa wrote:
    | Quoting from one of the posts in that thread
    | http://kerneltrap.org/mailarchive/linux-kernel/2010/2/8/4536388
    |
    || Usually tasklist gives enough protection, but if copy_process() fails
    || it calls free_pid() lockless and does call_rcu(delayed_put_pid()).
    || This means, without the rcu lock, find_pid_ns() can't scan the hash
    || table safely.

    Thomas Gleixner wrote:
    | We can remove the tasklist_lock while at it. rcu_read_lock is enough.

    The patch also replaces thread_group_leader with has_group_leader_pid
    in accordance with the comment by Oleg Nesterov:

    | ... thread_group_leader() check is not reliable without
    | tasklist. If we race with de_thread() find_task_by_vpid() can find
    | the new leader before it updates its ->group_leader.
    |
    | perhaps it makes sense to change posix_cpu_timer_create() to use
    | has_group_leader_pid() instead, just to make this code not look racy
    | and avoid adding new problems.

    Signed-off-by: Sergey Senozhatsky
    Cc: Peter Zijlstra
    Cc: Stanislaw Gruszka
    Reviewed-by: Oleg Nesterov
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Sergey Senozhatsky
     

11 Aug, 2010

1 commit

  • * 'writable_limits' of git://decibel.fi.muni.cz/~xslaby/linux:
    unistd: add __NR_prlimit64 syscall numbers
    rlimits: implement prlimit64 syscall
    rlimits: switch more rlimit syscalls to do_prlimit
    rlimits: redo do_setrlimit to more generic do_prlimit
    rlimits: add rlimit64 structure
    rlimits: do security check under task_lock
    rlimits: allow setrlimit to non-current tasks
    rlimits: split sys_setrlimit
    rlimits: selinux, do rlimits changes under task_lock
    rlimits: make sure ->rlim_max never grows in sys_setrlimit
    rlimits: add task_struct to update_rlimit_cpu
    rlimits: security, add task_struct to setrlimit

    Fix up various system call number conflicts. We not only added fanotify
    system calls in the meantime, but asm-generic/unistd.h also added a wait4
    along with a range of reserved per-architecture system calls.

    Linus Torvalds
     

16 Jul, 2010

1 commit


18 Jun, 2010

3 commits

  • fastpath_timer_check()->thread_group_cputimer() is racy and
    unneeded.

    It is racy because another thread can clear ->running before
    thread_group_cputimer() takes cputimer->lock. In this case
    thread_group_cputimer() will set ->running = true again and call
    thread_group_cputime(). But since we do not hold tasklist or
    siglock, we can race with fork/exit and copy the wrong results
    into cputimer->cputime.

    It is unneeded because if ->running == true we can just use
    the numbers in cputimer->cputime we already have.

    Change fastpath_timer_check() to copy cputimer->cputime into
    the local variable under cputimer->lock. We do not re-check
    ->running under cputimer->lock, run_posix_cpu_timers() does
    this check later.

    Note: we can add more optimizations on top of this change.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Oleg Nesterov
     
  • run_posix_cpu_timers() doesn't work if current has already passed
    exit_notify(). This was needed to prevent the races with do_wait().

    Since ea6d290c ->signal is always valid and can't go away. We can
    remove the "tsk->exit_state == 0" in fastpath_timer_check() and
    convert run_posix_cpu_timers() to use lock_task_sighand().

    Note: it makes sense to take group_leader's sighand instead, the
    sub-thread still uses CPU after release_task(). But we need more
    changes to do this.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Oleg Nesterov
     
  • thread_group_cputime() looks as if it is rcu-safe, but in fact this
    was wrong until ea6d290c which pins task->signal to task_struct.
    It checks ->sighand != NULL under rcu, but this can't help if ->signal
    can go away. Fortunately the caller either holds ->siglock, or it is
    fastpath_timer_check() which uses current and checks exit_state == 0.

    - Since commit ea6d290c tsk->signal is stable, so we can read it first
    and avoid the initialization from INIT_CPUTIME.

    - Even if tsk->signal is always valid, we still have to check it
    is safe to use next_thread() under rcu_read_lock(). Currently
    the code checks ->sighand != NULL, change it to use pid_alive()
    which is commonly used to ensure the task wasn't unhashed before
    we take rcu_read_lock().

    Add the comment to explain this check.

    - Change the main loop to use the while_each_thread() helper.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Oleg Nesterov
     

28 May, 2010

1 commit

  • Preparation to make task->signal immutable, no functional changes.

    posix-cpu-timers.c checks task->signal != NULL to ensure this task is
    alive and didn't pass __exit_signal(). This is correct but we are going
    to change the lifetime rules for ->signal and never reset this pointer.

    Change the code to check ->sighand instead, it doesn't matter which
    pointer we check under tasklist, they both are cleared simultaneously.

    As Roland pointed out, some of these changes are not strictly needed and
    probably it makes sense to revert them later, when ->signal will be pinned
    to task_struct. But this patch tries to ensure the subsequent changes in
    fork/exit can't make any visible impact on posix cpu timers.

    Signed-off-by: Oleg Nesterov
    Cc: Fenghua Yu
    Acked-by: Roland McGrath
    Cc: Stanislaw Gruszka
    Cc: Tony Luck
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     

10 May, 2010

2 commits

    We can optimize and simplify things by taking into account that
    signal->cputimer is always running when we have configured any process
    wide cpu timer.

    In check_process_timers(), we don't have to check whether the newly
    updated value of signal->cputime_expires is smaller, since we maintain
    the new first expiration time ({prof,virt,sched}_expires) in the code
    flow, and all other writes to the expiration cache are protected by
    sighand->siglock.

    Signed-off-by: Stanislaw Gruszka
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Hidetoshi Seto
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Thomas Gleixner

    Stanislaw Gruszka
     
  • Reason: Further posix_cpu_timer patches depend on mainline changes

    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

27 Mar, 2010

1 commit


13 Mar, 2010

6 commits

    Spread the p->sighand->siglock locking scope to make sure that
    fastpath_timer_check() never iterates over all threads. Without the
    locking there is a small possibility that signal->cputimer will stop
    running while we write values to signal->cputime_expires.

    Calling thread_group_cputime() from fastpath_timer_check() is not only
    bad because it is slow; it is also racy with __exit_signal(), which can
    lead to invalid signal->{s,u}time values.

    Signed-off-by: Stanislaw Gruszka
    Cc: Ingo Molnar
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Hidetoshi Seto
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Thomas Gleixner

    Stanislaw Gruszka
     
    When a user sets up a timer without an associated signal and the
    process does not use any other cpu timers and does not exit,
    tsk->signal->cputimer is enabled and running forever.

    Avoid running the timer for no reason.

    I used the program below to check that the patch does not break the
    current user-space visible behavior.

    /* include list reconstructed; the original header names were lost
       when the angle brackets were stripped */
    #include <assert.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    void consume_cpu(void)
    {
            int i = 0;
            int count = 0;

            /* busy loop; the original loop body was lost in this copy of
               the changelog, any code burning process CPU time will do */
            for (count = 0; count < 100000000; count++)
                    i += count;
            (void) i;
    }

    int main(void)
    {
            timer_t tid;
            struct itimerspec spec;
            struct sigevent evt = { };
            int i;

            /* timer setup was lost in this copy of the changelog; a
               signal-less CLOCK_PROCESS_CPUTIME_ID timer is assumed */
            evt.sigev_notify = SIGEV_NONE;
            assert(timer_create(CLOCK_PROCESS_CPUTIME_ID, &evt, &tid) == 0);
            memset(&spec, 0, sizeof(spec));
            spec.it_value.tv_sec = 10;
            assert(timer_settime(tid, 0, &spec, NULL) == 0);

            for (i = 0; i < 30; i++) {
                    consume_cpu();
                    memset(&spec, 0, sizeof(spec));
                    assert(timer_gettime(tid, &spec) == 0);
                    printf("%lu.%09lu\n",
                           (unsigned long) spec.it_value.tv_sec,
                           (unsigned long) spec.it_value.tv_nsec);
            }

            assert(timer_delete(tid) == 0);
            return 0;
    }

    Signed-off-by: Stanislaw Gruszka
    Cc: Ingo Molnar
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Hidetoshi Seto
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Thomas Gleixner

    Stanislaw Gruszka
     
    According to POSIX we need to correctly set the old timer's it_interval
    value when the user requests it in timer_settime(). Tested using the
    program below.

    /* include list reconstructed; the original header names were lost
       when the angle brackets were stripped */
    #include <assert.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
            struct sigaction act;
            struct sigevent evt = { };
            timer_t tid;
            struct itimerspec spec, u_spec, k_spec;

            evt.sigev_notify = SIGEV_SIGNAL;
            evt.sigev_signo = SIGPROF;
            assert(timer_create(CLOCK_PROCESS_CPUTIME_ID, &evt, &tid) == 0);

            spec.it_value.tv_sec = 1;
            spec.it_value.tv_nsec = 2;
            spec.it_interval.tv_sec = 3;
            spec.it_interval.tv_nsec = 4;
            u_spec = spec;
            assert(timer_settime(tid, 0, &spec, NULL) == 0);

            spec.it_value.tv_sec = 5;
            spec.it_value.tv_nsec = 6;
            spec.it_interval.tv_sec = 7;
            spec.it_interval.tv_nsec = 8;
            assert(timer_settime(tid, 0, &spec, &k_spec) == 0);

    #define PRT(val) printf(#val ":\t%d/%d\n", (int) u_spec.val, (int) k_spec.val)
            PRT(it_value.tv_sec);
            PRT(it_value.tv_nsec);
            PRT(it_interval.tv_sec);
            PRT(it_interval.tv_nsec);

            return 0;
    }

    Signed-off-by: Stanislaw Gruszka
    Cc: Ingo Molnar
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Hidetoshi Seto
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Thomas Gleixner

    Stanislaw Gruszka
     
  • Signed-off-by: Stanislaw Gruszka
    Cc: Ingo Molnar
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Hidetoshi Seto
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Thomas Gleixner

    Stanislaw Gruszka
     
    Always set the signal->cputime_expires expiration cache when setting a
    new itimer, a POSIX 1.b timer, or RLIMIT_CPU. Since we initialize the
    prof_exp expiration cache during fork(), this allows us to remove the
    "RLIMIT_CPU != inf" check from fastpath_timer_check() and do some
    other cleanups.

    Checked against regression using test cases from:
    http://marc.info/?l=linux-kernel&m=123749066504641&w=4
    http://marc.info/?l=linux-kernel&m=123811277916642&w=2

    Signed-off-by: Stanislaw Gruszka
    Cc: Ingo Molnar
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Hidetoshi Seto
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Thomas Gleixner

    Stanislaw Gruszka
     
    When a process deletes a cpu timer or a timer expires, we do not clear
    the expiration cache sig->cputime_expires.

    As a result fastpath_timer_check(), which prevents us from looping
    over all threads when no timer is active, does not work, and we run
    the slow path needlessly on every tick.

    Zero sig->cputime_expires in stop_process_timers().

    Signed-off-by: Stanislaw Gruszka
    Cc: Ingo Molnar
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Hidetoshi Seto
    Cc: Spencer Candland
    Signed-off-by: Andrew Morton
    Signed-off-by: Thomas Gleixner

    Stanislaw Gruszka
     

07 Mar, 2010

2 commits

    Make sure the compiler won't do weird things with the limits. E.g.
    fetching them twice may return 2 different values after writable
    limits are implemented.

    I.e. either use the rlimit helpers added in commit 3e10e716abf3
    ("resource: add helpers for fetching rlimits") or ACCESS_ONCE() if
    they are not applicable.

    Signed-off-by: Jiri Slaby
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: john stultz
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jiri Slaby
     
    Fetch the rlimit (both hard and soft) values only once and work on
    them. This removes many accesses through the sig structure and makes
    the code cleaner.

    Mostly a preparation for writable resource limits support.

    Signed-off-by: Jiri Slaby
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: john stultz
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jiri Slaby
     

18 Nov, 2009

1 commit


29 Aug, 2009

2 commits

  • Add tracepoints for all itimer variants: ITIMER_REAL, ITIMER_VIRTUAL
    and ITIMER_PROF.

    [ tglx: Fixed comments and made the output more readable, parseable
    and consistent. Replaced pid_vnr by pid_nr because the hrtimer
    callback can happen in any namespace ]

    Signed-off-by: Xiao Guangrong
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Mathieu Desnoyers
    Cc: Anton Blanchard
    Cc: Peter Zijlstra
    Cc: KOSAKI Motohiro
    Cc: Zhaolei
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Xiao Guangrong
     
  • Merge reason: timer tracepoint patches depend on both branches

    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

09 Aug, 2009

1 commit

    When the process exits we don't have to run a new cputimer nor use the
    running one (as it does not account when tsk->exit_state != 0) to get
    the process CPU times. As there is only one thread we can just use the
    CPU time fields from the task and signal structs.

    Signed-off-by: Stanislaw Gruszka
    Cc: Peter Zijlstra
    Cc: Roland McGrath
    Cc: Vitaly Mayatskikh
    Signed-off-by: Andrew Morton
    Signed-off-by: Ingo Molnar

    Stanislaw Gruszka
     

03 Aug, 2009

4 commits

    For powerpc with CONFIG_VIRT_CPU_ACCOUNTING,
    jiffies_to_cputime(1) is not a compile time constant and the run time
    calculations are quite expensive. To optimize, we use a
    precomputed value. For all other architectures it is a
    preprocessor definition.

    Signed-off-by: Stanislaw Gruszka
    Acked-by: Peter Zijlstra
    Acked-by: Thomas Gleixner
    Cc: Oleg Nesterov
    Cc: Andrew Morton
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Stanislaw Gruszka
     
    Don't update values in the expiration cache when the new ones are
    equal. Add the expire_le() and expire_gt() helpers to simplify the
    code.

    Signed-off-by: Stanislaw Gruszka
    Acked-by: Peter Zijlstra
    Acked-by: Thomas Gleixner
    Cc: Oleg Nesterov
    Cc: Andrew Morton
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Stanislaw Gruszka
     
    Measure the ITIMER_PROF and ITIMER_VIRT timers' interval error
    between the real ticks and what the user requested, and take it into
    account when scheduling the next tick.

    This patch introduces the possibility that the time between two
    consecutive ticks is smaller than the requested interval; it
    preserves, however, the guarantee that the n-th tick is generated no
    earlier than n*interval, counting from the beginning of periodic
    signal generation.

    Signed-off-by: Stanislaw Gruszka
    Acked-by: Peter Zijlstra
    Acked-by: Thomas Gleixner
    Cc: Oleg Nesterov
    Cc: Andrew Morton
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Stanislaw Gruszka
     
    Both cpu itimers have the same data flow in a few places; this patch
    unifies the code related to the VIRT and PROF itimers.

    Signed-off-by: Stanislaw Gruszka
    Acked-by: Peter Zijlstra
    Acked-by: Thomas Gleixner
    Cc: Oleg Nesterov
    Cc: Andrew Morton
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Stanislaw Gruszka