13 Oct, 2007

3 commits


12 Oct, 2007

3 commits

  • * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (408 commits)
    [POWERPC] Add memchr() to the bootwrapper
    [POWERPC] Implement logging of unhandled signals
    [POWERPC] Add legacy serial support for OPB with flattened device tree
    [POWERPC] Use 1TB segments
    [POWERPC] XilinxFB: Allow fixed framebuffer base address
    [POWERPC] XilinxFB: Add support for custom screen resolution
    [POWERPC] XilinxFB: Use pdata to pass around framebuffer parameters
    [POWERPC] PCI: Add 64-bit physical address support to setup_indirect_pci
    [POWERPC] 4xx: Kilauea defconfig file
    [POWERPC] 4xx: Kilauea DTS
    [POWERPC] 4xx: Add AMCC Kilauea eval board support to platforms/40x
    [POWERPC] 4xx: Add AMCC 405EX support to cputable.c
    [POWERPC] Adjust TASK_SIZE on ppc32 systems to 3GB that are capable
    [POWERPC] Use PAGE_OFFSET to tell if an address is user/kernel in SW TLB handlers
    [POWERPC] 85xx: Enable FP emulation in MPC8560 ADS defconfig
    [POWERPC] 85xx: Killed
    [POWERPC] 85xx: Add cpm nodes for 8541/8555 CDS
    [POWERPC] 85xx: Convert mpc8560ads to the new CPM binding.
    [POWERPC] mpc8272ads: Remove muram from the CPM reg property.
    [POWERPC] Make clockevents work on PPC601 processors
    ...

    Fixed up conflict in Documentation/powerpc/booting-without-of.txt manually.

    Linus Torvalds
     
  • Implement the show_unhandled_signals sysctl plus support to print when a process
    is killed due to an unhandled signal, just as i386 and x86_64 do.

    Default to having it off, unlike x86, which defaults to on.

    Signed-off-by: Olof Johansson
    Signed-off-by: Paul Mackerras

    Olof Johansson
     
  • * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (867 commits)
    [SKY2]: status polling loop (post merge)
    [NET]: Fix NAPI completion handling in some drivers.
    [TCP]: Limit processing lost_retrans loop to work-to-do cases
    [TCP]: Fix lost_retrans loop vs fastpath problems
    [TCP]: No need to re-count fackets_out/sacked_out at RTO
    [TCP]: Extract tcp_match_queue_to_sack from sacktag code
    [TCP]: Kill almost unused variable pcount from sacktag
    [TCP]: Fix mark_head_lost to ignore R-bit when trying to mark L
    [TCP]: Add bytes_acked (ABC) clearing to FRTO too
    [IPv6]: Update setsockopt(IPV6_MULTICAST_IF) to support RFC 3493, try2
    [NETFILTER]: x_tables: add missing ip6t_modulename aliases
    [NETFILTER]: nf_conntrack_tcp: fix connection reopening
    [QETH]: fix qeth_main.c
    [NETLINK]: fib_frontend build fixes
    [IPv6]: Export userland ND options through netlink (RDNSS support)
    [9P]: build fix with !CONFIG_SYSCTL
    [NET]: Fix dev_put() and dev_hold() comments
    [NET]: make netlink user -> kernel interface synchronious
    [NET]: unify netlink kernel socket recognition
    [NET]: cleanup 3rd argument in netlink_sendskb
    ...

    Fix up conflicts manually in Documentation/feature-removal-schedule.txt
    and my new least favourite crap, the "mod_devicetable" support in the
    files include/linux/mod_devicetable.h and scripts/mod/file2alias.c.

    (The latter files seem to be explicitly _designed_ to get conflicts when
    different subsystems work with them - they have an absolutely horrid
    lack of subsystem separation!)

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

11 Oct, 2007

6 commits

  • This patch makes netlink user -> kernel message processing synchronous.
    This change was inspired by a talk with Alexey Kuznetsov about the current
    netlink message processing. He says that he was badly wrong when he
    introduced asynchronous user -> kernel communication.

    The netlink_unicast call is the only path for sending messages to the
    kernel netlink socket. But, unfortunately, it is also used to send data
    to the user.

    Before this change the user message was attached to the socket queue
    and sk->sk_data_ready was called. The process was blocked until all
    pending messages were processed. The bad thing is that this processing
    could occur in arbitrary process context.

    This patch changes the nlk->data_ready callback to take one skb and
    forces packet processing right in netlink_unicast.

    The kernel -> user path in netlink_unicast remains untouched.

    The EINTR processing in netlink_run_queue was changed. It forces an
    rtnl_lock drop, but the process remains in the loop until the message
    has been fully processed, so there is no need for these kludges now.

    Signed-off-by: Denis V. Lunev
    Acked-by: Alexey Kuznetsov
    Signed-off-by: David S. Miller

    Denis V. Lunev
     
  • This patch allows you to create a new network namespace
    using sys_clone or sys_unshare.

    As the network namespace is still experimental and under development,
    clone and unshare support is only made available when CONFIG_NET_NS is
    selected at compile time.

    As this patch introduces network namespace support into code paths
    that exist even when CONFIG_NET is not selected, a few additions are
    made to net_namespace.h to allow a few more functions to be used when
    the networking stack is not compiled in.

    Signed-off-by: Eric W. Biederman
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • raise_softirq_irqoff no longer has any modular user.

    Signed-off-by: Adrian Bunk
    Signed-off-by: David S. Miller

    Adrian Bunk
     
  • Each netlink socket will live in exactly one network namespace;
    this includes the controlling kernel sockets.

    This patch updates all of the existing netlink protocols to only
    support the initial network namespace. Requests by clients in other
    namespaces will get -ECONNREFUSED, just as they would if the kernel
    did not have support for that netlink protocol compiled in.

    As each netlink protocol is updated to be safe for multiple network
    namespaces, it can register multiple kernel sockets to acquire a
    presence in the rest of the network namespaces.

    The implementation in af_netlink is a simple filter applied at hash
    table insertion and hash table lookup time.

    Signed-off-by: Eric W. Biederman
    Signed-off-by: David S. Miller

    Eric W. Biederman
     
  • As noted by Christoph Hellwig, pktgen was the only user so
    it can now be removed.

    [ Add missing cases caught by Adrian Bunk. -DaveM ]

    Signed-off-by: Robert Olsson
    Signed-off-by: David S. Miller

    Robert Olsson
     
  • First user will be the DCCP transport networking protocol.

    Signed-off-by: Arnaldo Carvalho de Melo
    Signed-off-by: David S. Miller

    Arnaldo Carvalho de Melo
     

10 Oct, 2007

1 commit

  • Hide everything in blkdev.h when CONFIG_BLOCK isn't set, and fix up
    the (few) files that fail to build because they were relying on blkdev.h
    pulling in extra includes for them.

    Signed-off-by: Jens Axboe

    Jens Axboe
     

08 Oct, 2007

2 commits

  • Async signals should not be reported as sent by current in audit log. As
    it is, we call audit_signal_info() too early in check_kill_permission().
    Note that check_kill_permission() has that test already - it needs to know
    if it should apply current-based permission checks. So the solution is to
    move the call of audit_signal_info() between those.

    Bogosity in question is easily reproduced - add a rule watching for e.g.
    kill(2) from specific process (so that audit_signal_info() would not
    short-circuit to nothing), say load_policy, watch the bogus OBJ_PID entry
    in audit logs claiming that write(2) on selinuxfs file issued by
    load_policy(8) had somehow managed to send a signal to syslogd...

    Signed-off-by: Al Viro
    Acked-by: Steve Grubb
    Acked-by: Eric Paris
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Al Viro
     
  • When using /proc/timer_stats on ppc64 I noticed the events/sec field wasn't
    accurate. Sometimes the integer part was incorrect due to rounding (we
    weren't taking the fractional seconds into consideration).

    The fractional part was also wrong; we need to pad the printf statement
    and take the bottom three digits of 1000 times the value.

    Signed-off-by: Anton Blanchard
    Acked-by: Ingo Molnar
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anton Blanchard
     

02 Oct, 2007

1 commit


01 Oct, 2007

2 commits

  • Calling handle_futex_death in exit_robust_list for the different robust
    mutexes of a thread basically frees the mutex. Another thread might grab
    the lock immediately, which updates the next pointer of the mutex.
    fetch_robust_entry over the next pointer might therefore branch into the
    robust mutex list of a different thread. This can cause two problems: 1)
    some mutexes held by the dead thread are not getting freed and 2) some
    mutexes held by a different thread are freed.

    The next pointer needs to be read before calling handle_futex_death.

    Signed-off-by: Martin Schwidefsky
    Acked-by: Ingo Molnar
    Acked-by: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Martin Schwidefsky
     
  • We need to disable all CPUs other than the boot CPU (usually 0) before
    attempting to power-off modern SMP machines. This fixes the
    hang-on-poweroff issue on my MythTV SMP box, and also on Thomas Gleixner's
    new toybox.

    Signed-off-by: Mark Lord
    Acked-by: Thomas Gleixner
    Cc: "Rafael J. Wysocki"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Lord
     

27 Sep, 2007

1 commit


23 Sep, 2007

1 commit

  • In a desperate attempt to fix the suspend/resume problem on Andrew's
    VAIO I added a workaround which enforced the broadcast of the oneshot
    timer on resume. This was actually resolving the problem on the VAIO,
    but it was just a stupid workaround which did not tackle the root
    cause: the assignment of lower idle C-states in the ACPI processor_idle
    code. The cpuidle patches, which utilize the dynamic tick feature and
    go faster into deeper C-states, exposed the problem again. The correct
    solution is the previous patch, which prevents lower C-states across
    the suspend/resume.

    Remove the enforcement code, including the conditional broadcast timer
    arming, which helped to paper over the real problem for quite a while.
    The oneshot broadcast flag for the CPU which runs the resume code can
    never be set at the time when this code is executed. It only gets set
    when the CPU is entering a lower idle C-state.

    Signed-off-by: Thomas Gleixner
    Tested-by: Andrew Morton
    Cc: Len Brown
    Cc: Venkatesh Pallipadi
    Cc: Rafael J. Wysocki
    Signed-off-by: Linus Torvalds

    Thomas Gleixner
     

21 Sep, 2007

1 commit

  • This simplifies the signalfd code by no longer keeping the signalfd
    attached to the sighand for its whole lifetime.

    In this way, the signalfd remains attached to the sighand only during
    poll(2) (and select and epoll) and read(2). This also allows removing
    all the custom "tsk == current" checks in kernel/signal.c, since
    dequeue_signal() will only be called by "current".

    I think this is also what Ben was suggesting some time ago.

    The external effect of this is that a thread can extract only its own
    private signals and the group ones. I think this is acceptable
    behaviour, in that those are the signals the thread would be able to
    fetch without signalfd.

    Signed-off-by: Davide Libenzi
    Signed-off-by: Linus Torvalds

    Davide Libenzi
     

20 Sep, 2007

6 commits

  • When using rt_mutex, a NULL pointer dereference occurs at
    enqueue_task_rt. Here is a scenario:
    1) There are two threads; thread A is fair_sched_class and
    thread B is rt_sched_class.
    2) Thread A is boosted up to rt_sched_class, because thread A
    holds a rt_mutex lock and thread B is waiting for the lock.
    3) At this time, when thread A creates a new thread C, thread
    C gets rt_sched_class.
    4) When doing wake_up_new_task() for thread C, the priority
    of thread C is out of the RT priority range, because the
    normal priority of thread A is not an RT priority. This causes
    data corruption by overflowing the rt_prio_array.
    The new thread C should be fair_sched_class.

    The new thread should have a valid scheduler class before being queued.
    This patch fixes the code to set a suitable scheduler class.

    Signed-off-by: Hiroshi Shimamoto
    Signed-off-by: Ingo Molnar
    Signed-off-by: Peter Zijlstra

    Hiroshi Shimamoto
     
  • add /proc/sys/kernel/sched_compat_yield to make sys_sched_yield()
    more aggressive, by moving the yielding task to the last position
    in the rbtree.

    with sched_compat_yield=0:

    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    2539 mingo 20 0 1576 252 204 R 50 0.0 0:02.03 loop_yield
    2541 mingo 20 0 1576 244 196 R 50 0.0 0:02.05 loop

    with sched_compat_yield=1:

    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    2584 mingo 20 0 1576 248 196 R 99 0.0 0:52.45 loop
    2582 mingo 20 0 1576 256 204 R 0 0.0 0:00.00 loop_yield

    Signed-off-by: Ingo Molnar
    Signed-off-by: Peter Zijlstra

    Ingo Molnar
     
  • It turned out that the user namespace is released during the do_exit() in
    exit_task_namespaces(), but the struct user_struct is released only during the
    put_task_struct(), i.e. MUCH later.

    On debug kernels with poisoned slabs this will cause the oops in
    uid_hash_remove() because the head of the chain, which resides inside the
    struct user_namespace, will be already freed and poisoned.

    Since the uid hash itself is required only when someone can search it, i.e.
    when the namespace is alive, we can safely unhash all the user_struct-s from
    it during the namespace exiting. The subsequent free_uid() will complete the
    user_struct destruction.

    For example, this simple program

    #define _GNU_SOURCE
    #include <sched.h>

    char stack[2 * 1024 * 1024];

    int f(void *foo)
    {
            return 0;
    }

    int main(void)
    {
            clone(f, stack + 1 * 1024 * 1024, 0x10000000, 0);
            return 0;
    }

    run on a kernel with CONFIG_USER_NS turned on will oops the
    kernel immediately.

    This was spotted during OpenVZ kernel testing.

    Signed-off-by: Pavel Emelyanov
    Signed-off-by: Alexey Dobriyan
    Acked-by: "Serge E. Hallyn"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Emelyanov
     
  • Surprisingly (as spotted by Alexey Dobriyan), the uid hash still uses
    list_heads, thus occupying twice as much space as it needs to. Convert
    it to hlist_heads.

    Signed-off-by: Pavel Emelyanov
    Signed-off-by: Alexey Dobriyan
    Acked-by: Serge Hallyn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Emelyanov
     
  • kernel/user.c: Convert list_for_each to list_for_each_entry in
    uid_hash_find()

    Signed-off-by: Matthias Kaehlcke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthias Kaehlcke
     
  • struct utsname is copied from master one without any exclusion.

    Here is sample output from one proggie doing

    sethostname("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa");
    sethostname("bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb");

    and another

    clone(,, CLONE_NEWUTS, ...)
    uname()

    hostname = 'aaaaaaaaaaaaaaaaaaaaaaaaabbbbb'
    hostname = 'bbbaaaaaaaaaaaaaaaaaaaaaaaaaaa'
    hostname = 'aaaaaaaabbbbbbbbbbbbbbbbbbbbbb'
    hostname = 'aaaaaaaaaaaaaaaaaaaaaaaaaabbbb'
    hostname = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaabb'
    hostname = 'aaabbbbbbbbbbbbbbbbbbbbbbbbbbb'
    hostname = 'bbbbbbbbbbbbbbbbaaaaaaaaaaaaaa'

    Hostname is sometimes corrupted.

    Yes, even _the_ simplest namespace activity had a bug in it. :-(

    Signed-off-by: Alexey Dobriyan
    Acked-by: Serge Hallyn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

16 Sep, 2007

5 commits

  • Taking a cpu offline removes the cpu from the online mask before the
    CPU_DEAD notification is done. The clock events layer does the cleanup
    of the dead CPU from the CPU_DEAD notifier chain. tick_do_timer_cpu is
    used to avoid xtime lock contention by assigning the task of jiffies
    xtime updates to one CPU. If a CPU is taken offline, then this
    assignment becomes stale. This went unnoticed because most of the time
    the offline CPU went dead before the online CPU reached __cpu_die(),
    where the CPU_DEAD state is checked. In the case that the offline CPU did
    not reach the DEAD state before we reach __cpu_die(), the code in there
    goes to sleep for 100ms. Due to the stale time update assignment, the
    system is stuck forever.

    Take the assignment away when a cpu is no longer in the cpu_online_mask.
    We do this in the last call to tick_nohz_stop_sched_tick() when the offline
    CPU is on the way to the final play_dead() idle entry.

    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     
  • When a cpu goes offline it is removed from the broadcast masks. If the
    mask becomes empty the code shuts down the broadcast device. This is
    wrong, because the broadcast device needs to be ready for the online
    cpu going idle (into a c-state, which stops the local apic timer).

    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     
  • The jinxed VAIO refuses to resume without hitting keys on the keyboard
    when this is not enforced. It is unclear why the cpu ends up in a lower
    C State without notifying the clock events layer, but enforcing the
    oneshot broadcast here is safe.

    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     
  • Timekeeping resume adjusts xtime by adding the slept time in seconds and
    resets the reference value of the clock source (clock->cycle_last).
    clock->cycle_last is used to calculate the delta between the last xtime
    update and the readout of the clock source in __get_nsec_offset(). xtime
    plus the offset is the current time. The resume code ignores the delta
    which had already elapsed between the last xtime update and the actual
    time of suspend. If the suspend time is short, then we can see time
    going backwards on resume.

    Suspend:
    offs_s = clock->read() - clock->cycle_last;
    now = xtime + offs_s;
    timekeeping_suspend_time = read_rtc();

    Resume:
    sleep_time = read_rtc() - timekeeping_suspend_time;
    xtime.tv_sec += sleep_time;
    clock->cycle_last = clock->read();
    offs_r = clock->read() - clock->cycle_last;
    now = xtime + offs_r;

    if sleep_time_seconds == 0 and offs_r < offs_s, then time goes
    backwards.

    Fix this by storing the offset from the last xtime update and add it to
    xtime during resume, when we reset clock->cycle_last:

    sleep_time = read_rtc() - timekeeping_suspend_time;
    xtime.tv_sec += sleep_time;
    xtime += offs_s; /* Fixup xtime offset at suspend time */
    clock->cycle_last = clock->read();
    offs_r = clock->read() - clock->cycle_last;
    now = xtime + offs_r;

    Thanks to Marcelo for tracking this down on the OLPC and providing the
    necessary details to analyze the root cause.

    Signed-off-by: Thomas Gleixner
    Cc: John Stultz
    Cc: Tosatti

    Thomas Gleixner
     
  • Lockdep complains about the access of rtc in timekeeping_suspend
    inside the interrupt disabled region of the write locked xtime lock.
    Move the access outside.

    Signed-off-by: Thomas Gleixner
    Cc: John Stultz

    Thomas Gleixner
     

12 Sep, 2007

3 commits

  • Seems to me that this timer will only get started on platforms that say
    they don't want it?

    Signed-off-by: Tony Breeds
    Cc: Paul Mackerras
    Cc: Gabriel Paubert
    Cc: Zachary Amsden
    Acked-by: Thomas Gleixner
    Cc: John Stultz
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tony Breeds
     
  • The semantics of call_usermodehelper_pipe() used to be that it would fork
    the helper, and wait for the kernel thread to be started. This was
    implemented by setting sub_info.wait to 0 (implicitly), and doing a
    wait_for_completion().

    As part of the cleanup done in 0ab4dc92278a0f3816e486d6350c6652a72e06c8,
    call_usermodehelper_pipe() was changed to pass 1 as the value for wait to
    call_usermodehelper_exec().

    This is equivalent to setting sub_info.wait to 1, which is a change from
    the previous behaviour. Using 1 instead of 0 causes
    __call_usermodehelper() to start the kernel thread running
    wait_for_helper(), rather than directly calling ____call_usermodehelper().

    The end result is that the calling kernel code blocks until the user mode
    helper finishes. As the helper is expecting input on stdin, and now no one
    is writing anything, everything locks up (observed in do_coredump).

    The fix is to change the 1 to UMH_WAIT_EXEC (aka 0), indicating that we
    want to wait for the kernel thread to be started, but not for the helper to
    finish.

    Signed-off-by: Michael Ellerman
    Acked-by: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Ellerman
     
  • The futex list traversal on the compat side appears to have
    a bug.

    Its loop termination condition compares:

    while (compat_ptr(uentry) != &head->list)

    But that can't be right because "uentry" has the special
    "pi" indicator bit still potentially set at bit 0. This
    is cleared by fetch_robust_entry() into the "entry"
    return value.

    What this seems to mean is that the list won't terminate
    when list iteration gets back to the head. And we'll
    also process the list head like a normal entry, which could
    cause all kinds of problems.

    So we should check for equality with "entry". That pointer
    is of the non-compat type so we have to do a little casting
    to keep the compiler and sparse happy.

    The same problem can in theory occur with the 'pending'
    variable, although that has not been reported from users
    so far.

    Based on the original patch from David Miller.

    Acked-by: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: David Miller
    Signed-off-by: Arnd Bergmann
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     

11 Sep, 2007

1 commit

  • When PTRACE_SYSCALL was used and then PTRACE_DETACH is used, the
    TIF_SYSCALL_TRACE flag is left set on the formerly-traced task. This
    means that when a new tracer comes along and does PTRACE_ATTACH, it's
    possible he gets a syscall tracing stop even though he's never used
    PTRACE_SYSCALL. This happens if the task was in the middle of a system
    call when the second PTRACE_ATTACH was done. The symptom is an
    unexpected SIGTRAP when the tracer thinks that only SIGSTOP should have
    been provoked by his ptrace calls so far.

    A few machines already fixed this in ptrace_disable (i386, ia64, m68k).
    But all other machines do not, and still have this bug. On x86_64, this
    constitutes a regression in IA32 compatibility support.

    Since all machines now use TIF_SYSCALL_TRACE for this, I put the
    clearing of TIF_SYSCALL_TRACE in the generic ptrace_detach code rather
    than adding it to every other machine's ptrace_disable.

    Signed-off-by: Roland McGrath
    Signed-off-by: Linus Torvalds

    Roland McGrath
     

05 Sep, 2007

4 commits

  • fix ideal_runtime:

    - do not scale it using niced_granularity(),
    as it is measured against sum_exec_delta, so it's wall-time, not fair-time.

    - move the whole check into __check_preempt_curr_fair()
    so that wakeup preemption can also benefit from the new logic.

    this also results in code size reduction:

    text data bss dec hex filename
    13391 228 1204 14823 39e7 sched.o.before
    13369 228 1204 14801 39d1 sched.o.after

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Second preparatory patch for fix-ideal runtime:

    Mark prev_sum_exec_runtime at the beginning of our run, the same spot
    that adds our wait period to wait_runtime. This seems a more natural
    location to do this, and it also reduces the code a bit:

    text data bss dec hex filename
    13397 228 1204 14829 39ed sched.o.before
    13391 228 1204 14823 39e7 sched.o.after

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Preparatory patch for fix-ideal-runtime:

    simplify __check_preempt_curr_fair(): get rid of the integer return.

    text data bss dec hex filename
    13404 228 1204 14836 39f4 sched.o.before
    13393 228 1204 14825 39e9 sched.o.after

    functionality is unchanged.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • rename RSR to SRR - 'RSR' is already defined on xtensa.

    found by Adrian Bunk.

    Signed-off-by: Ingo Molnar

    Ingo Molnar