05 Dec, 2011

2 commits

  • This essentially reverts:

    2b666859ec32: x86: Default to vsyscall=native for now

    The ABI breakage should now be fixed by:

    commit 48c4206f5b02f28c4c78a1f5b491d3772fb64fb9
    Author: Andy Lutomirski
    Date: Thu Oct 20 08:48:19 2011 -0700

    x86-64: Set siginfo and context on vsyscall emulation faults

    Signed-off-by: Andy Lutomirski
    Cc: richard -rw- weinberger
    Cc: Adrian Bunk
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/93154af3b2b6d208906ae02d80d92cf60c6fa94f.1320712291.git.luto@amacapital.net
    Signed-off-by: Ingo Molnar

    Andy Lutomirski
     
  • To make this work, we teach the page fault handler how to send
    signals on failed uaccess. This only works for user addresses
    (kernel addresses will never hit the page fault handler in the
    first place), so we need to generate signals for those
    separately.

    This gets the tricky case right: if the user buffer spans
    multiple pages and only the second page is invalid, we set
    cr2 and si_addr correctly. UML relies on this behavior to
    "fault in" pages as needed.

    We steal a bit from thread_info.uaccess_err to enable this.
    Before this change, uaccess_err was a 32-bit boolean value.

    This fixes issues with UML when vsyscall=emulate.

    Reported-by: Adrian Bunk
    Signed-off-by: Andy Lutomirski
    Cc: richard -rw- weinberger
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/4c8f91de7ec5cd2ef0f59521a04e1015f11e42b4.1320712291.git.luto@amacapital.net
    Signed-off-by: Ingo Molnar

    Andy Lutomirski
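
    A minimal sketch of the signal-delivery idea described above, using
    a hypothetical helper name (the exact upstream fixup plumbing
    differs): the uaccess fixup path records the failure, and a helper
    raises SIGSEGV with si_addr set to the first failing user address.

    /* Sketch only; send_uaccess_sig() is an illustrative name, not
     * the upstream API. */
    static void send_uaccess_sig(struct task_struct *tsk, int si_code,
                                 unsigned long address)
    {
        struct siginfo info;

        info.si_signo = SIGSEGV;
        info.si_errno = 0;
        info.si_code  = si_code;
        /* First failing address, even mid-buffer: matches cr2. */
        info.si_addr  = (void __user *)address;

        force_sig_info(SIGSEGV, &info, tsk);
    }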
     

01 Nov, 2011

1 commit

  • In removing the presence of <linux/module.h> from some of the
    more common files, this implicit include of <linux/topology.h>
    was uncovered.

    CC arch/x86/kernel/vsyscall_64.o
    arch/x86/kernel/vsyscall_64.c: In function ‘vsyscall_set_cpu’:
    arch/x86/kernel/vsyscall_64.c:259: error: implicit declaration of function ‘cpu_to_node’

    Explicitly call it out so the cleanup can take place.

    Signed-off-by: Paul Gortmaker

    Paul Gortmaker
     

11 Oct, 2011

1 commit

  • This UML breakage:

    linux-2.6.30.1[3800] vsyscall fault (exploit attempt?) ip:ffffffffff600000 cs:33 sp:7fbfb9c498 ax:ffffffffff600000 si:0 di:606790
    linux-2.6.30.1[3856] vsyscall fault (exploit attempt?) ip:ffffffffff600000 cs:33 sp:7fbfb13168 ax:ffffffffff600000 si:0 di:606790

    Is caused by commit 3ae36655 ("x86-64: Rework vsyscall emulation and add
    vsyscall= parameter") - the vsyscall emulation code is not fully cooked
    yet as UML relies on some rather fragile SIGSEGV semantics.

    Linus suggested in https://lkml.org/lkml/2011/8/9/376 defaulting
    to vsyscall=native for now; this patch implements that.

    Signed-off-by: Adrian Bunk
    Acked-by: Andrew Lutomirski
    Cc: H. Peter Anvin
    Link: http://lkml.kernel.org/r/20111005214047.GE14406@localhost.pp.htv.fi
    Signed-off-by: Ingo Molnar

    Adrian Bunk
     

11 Aug, 2011

2 commits

  • There are three choices:

    vsyscall=native: Vsyscalls are native code that issues the
    corresponding syscalls.

    vsyscall=emulate (default): Vsyscalls are emulated by instruction
    fault traps, tested in the bad_area path. The actual contents of
    the vsyscall page are the same as in the vsyscall=native case, except
    that it's marked NX. This way programs that make assumptions about
    what the code in the page does will not be confused when they read
    that code.

    vsyscall=none: Trying to execute a vsyscall will segfault.

    Signed-off-by: Andy Lutomirski
    Link: http://lkml.kernel.org/r/8449fb3abf89851fd6b2260972666a6f82542284.1312988155.git.luto@mit.edu
    Signed-off-by: H. Peter Anvin

    Andy Lutomirski
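
    The three modes are selected by an early boot parameter; the
    plumbing in arch/x86/kernel/vsyscall_64.c is close to this sketch:

    static enum { EMULATE, NATIVE, NONE } vsyscall_mode = EMULATE;

    static int __init vsyscall_setup(char *str)
    {
        if (!str)
            return -EINVAL;

        if (!strcmp("emulate", str))
            vsyscall_mode = EMULATE;
        else if (!strcmp("native", str))
            vsyscall_mode = NATIVE;
        else if (!strcmp("none", str))
            vsyscall_mode = NONE;
        else
            return -EINVAL;

        return 0;
    }
    early_param("vsyscall", vsyscall_setup);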
     
  • As of commit 98d0ac38ca7b1b7a552c9a2359174ff84decb600
    Author: Andy Lutomirski
    Date: Thu Jul 14 06:47:22 2011 -0400

    x86-64: Move vread_tsc and vread_hpet into the vDSO

    user code no longer directly calls into code in arch/x86/kernel/, so
    we don't need compile flag hacks to make it safe. All vdso code is
    in the vdso directory now.

    Signed-off-by: Andy Lutomirski
    Link: http://lkml.kernel.org/r/835cd05a4c7740544d09723d6ba48f4406f9826c.1312988155.git.luto@mit.edu
    Signed-off-by: H. Peter Anvin

    Andy Lutomirski
     

05 Aug, 2011

2 commits

  • Vsyscall emulation is slow, so make it easy to track down.

    Signed-off-by: Andy Lutomirski
    Link: http://lkml.kernel.org/r/cdaad7da946a80b200df16647c1700db3e1171e9.1312378163.git.luto@mit.edu
    Signed-off-by: H. Peter Anvin

    Andy Lutomirski
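
    The tracking hook this refers to is, assuming it is the upstream
    vsyscall:emulate_vsyscall tracepoint, a standard TRACE_EVENT;
    roughly:

    TRACE_EVENT(emulate_vsyscall,
        TP_PROTO(int nr),                    /* which legacy vsyscall */
        TP_ARGS(nr),
        TP_STRUCT__entry(__field(int, nr)),
        TP_fast_assign(__entry->nr = nr;),
        TP_printk("nr = %d", __entry->nr)
    );

    Hot paths that fire it constantly are then easy to spot with
    ftrace or perf.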
     
  • Three places in the kernel assume that the only long mode CPL 3
    selector is __USER_CS. This is not true on Xen -- Xen's sysretq
    changes cs to the magic value 0xe033.

    Two of the places are corner cases, but as of "x86-64: Improve
    vsyscall emulation CS and RIP handling"
    (c9712944b2a12373cb6ff8059afcfb7e826a6c54), vsyscalls will segfault
    if called with Xen's extra CS selector. This causes a panic when
    older init builds die.

    It seems impossible to make Xen use __USER_CS reliably without
    taking a performance hit on every system call, so this fixes the
    tests instead with a new paravirt op. It's a little ugly because
    ptrace.h can't include paravirt.h.

    Signed-off-by: Andy Lutomirski
    Link: http://lkml.kernel.org/r/f4fcb3947340d9e96ce1054a432f183f9da9db83.1312378163.git.luto@mit.edu
    Reported-by: Konrad Rzeszutek Wilk
    Signed-off-by: H. Peter Anvin

    Andy Lutomirski
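
    The resulting helper accepts Xen's extra selector only when
    paravirt is enabled; a sketch along the lines of the upstream
    user_64bit_mode():

    static inline bool user_64bit_mode(struct pt_regs *regs)
    {
    #ifndef CONFIG_PARAVIRT
        /* On native, the only long mode CPL 3 selector is __USER_CS. */
        return regs->cs == __USER_CS;
    #else
        /* Xen's sysretq returns with a magic CS value, so paravirt
         * publishes one extra legal 64-bit CPL 3 selector to test. */
        return regs->cs == __USER_CS ||
               regs->cs == pv_info.extra_user_64bit_cs;
    #endif
    }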
     

15 Jul, 2011

1 commit


14 Jul, 2011

2 commits

  • The vread field was bloating struct clocksource everywhere except
    x86_64, and I want to change the way this works on x86_64, so let's
    split it out into per-arch data.

    Cc: x86@kernel.org
    Cc: Clemens Ladisch
    Cc: linux-ia64@vger.kernel.org
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: John Stultz
    Cc: Thomas Gleixner
    Signed-off-by: Andy Lutomirski
    Link: http://lkml.kernel.org/r/3ae5ec76a168eaaae63f08a2a1060b91aa0b7759.1310563276.git.luto@mit.edu
    Signed-off-by: H. Peter Anvin

    Andy Lutomirski
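
    A reduced sketch of the split, assuming the
    CONFIG_ARCH_CLOCKSOURCE_DATA mechanism: architectures that don't
    need a user-mode clock read no longer carry the pointer.

    typedef u64 cycle_t;    /* as in linux/clocksource.h of that era */

    /* Per-arch blob; on x86-64 this is where vread now lives. */
    struct arch_clocksource_data {
        cycle_t (*vread)(void);    /* vsyscall/vDSO-safe read */
    };

    /* Reduced sketch of struct clocksource after the split. */
    struct clocksource {
        cycle_t (*read)(struct clocksource *cs);
    #ifdef CONFIG_ARCH_CLOCKSOURCE_DATA
        struct arch_clocksource_data archdata;
    #endif
    };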
     
  • Three fixes here:
    - Send SIGSEGV if called from compat code or with a funny CS.
    - Don't BUG on impossible addresses.
    - Add a missing local_irq_disable.

    This patch also removes an unused variable.

    Signed-off-by: Andy Lutomirski
    Link: http://lkml.kernel.org/r/6fb2b13ab39b743d1e4f466eef13425854912f7f.1310563276.git.luto@mit.edu
    Signed-off-by: H. Peter Anvin

    Andy Lutomirski
     

07 Jun, 2011

1 commit

  • There's a fair amount of code in the vsyscall page. It contains
    a syscall instruction (in the gettimeofday fallback) and who
    knows what will happen if an exploit jumps into the middle of
    some other code.

    Reduce the risk by replacing the vsyscalls with short magic
    incantations that cause the kernel to emulate the real
    vsyscalls. These incantations are useless if entered in the
    middle.

    This causes vsyscalls to be a little more expensive than real
    syscalls. Fortunately sensible programs don't use them.
    The only exception is time() which is still called by glibc
    through the vsyscall - but calling time() millions of times
    per second is not sensible. glibc has this fixed in the
    development tree.

    This patch is not perfect: the vread_tsc and vread_hpet
    functions are still at a fixed address. Fixing that might
    involve making alternative patching work in the vDSO.

    Signed-off-by: Andy Lutomirski
    Acked-by: Linus Torvalds
    Cc: Jesper Juhl
    Cc: Borislav Petkov
    Cc: Arjan van de Ven
    Cc: Jan Beulich
    Cc: richard -rw- weinberger
    Cc: Mikael Pettersson
    Cc: Andi Kleen
    Cc: Brian Gerst
    Cc: Louis Rilling
    Cc: Valdis.Kletnieks@vt.edu
    Cc: pageexec@freemail.hu
    Link: http://lkml.kernel.org/r/e64e1b3c64858820d12c48fa739efbd1485e79d5.1307292171.git.luto@mit.edu
    [ Removed the CONFIG option - it's simpler to just do it unconditionally. Tidied up the code as well. ]
    Signed-off-by: Ingo Molnar

    Andy Lutomirski
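
    Each legacy vsyscall sits in its own 1024-byte slot at a fixed
    address, so the kernel can map a faulting instruction pointer back
    to a vsyscall number; close to the upstream helper:

    /* VSYSCALL_START is 0xffffffffff600000. */
    static int addr_to_vsyscall_nr(unsigned long addr)
    {
        int nr;

        if ((addr & ~0xc00UL) != VSYSCALL_START)
            return -EINVAL;    /* jumped into the middle: useless */

        nr = (addr & 0xc00UL) >> 10;
        if (nr >= 3)
            return -EINVAL;

        return nr;
    }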
     

06 Jun, 2011

3 commits

  • It just segfaults since April 2008 (a4928cff), so I'm pretty
    sure that nothing uses it. And having an empty section makes
    the linker script a bit fragile.

    Signed-off-by: Andy Lutomirski
    Cc: Jesper Juhl
    Cc: Borislav Petkov
    Cc: Linus Torvalds
    Cc: Arjan van de Ven
    Cc: Jan Beulich
    Cc: richard -rw- weinberger
    Cc: Mikael Pettersson
    Cc: Andi Kleen
    Cc: Brian Gerst
    Cc: Louis Rilling
    Cc: Valdis.Kletnieks@vt.edu
    Cc: pageexec@freemail.hu
    Link: http://lkml.kernel.org/r/4a4abcf47ecadc269f2391a313576fe6d06acef7.1307292171.git.luto@mit.edu
    Signed-off-by: Ingo Molnar

    Andy Lutomirski
     
  • It's unnecessary overhead in code that's supposed to be highly
    optimized. Removing it allows us to remove one of the two
    syscall instructions in the vsyscall page.

    The only sensible use for it is for UML users, and it doesn't
    fully address inconsistent vsyscall results on UML. The real
    fix for UML is to stop using vsyscalls entirely.

    Signed-off-by: Andy Lutomirski
    Cc: Jesper Juhl
    Cc: Borislav Petkov
    Cc: Linus Torvalds
    Cc: Arjan van de Ven
    Cc: Jan Beulich
    Cc: richard -rw- weinberger
    Cc: Mikael Pettersson
    Cc: Andi Kleen
    Cc: Brian Gerst
    Cc: Louis Rilling
    Cc: Valdis.Kletnieks@vt.edu
    Cc: pageexec@freemail.hu
    Link: http://lkml.kernel.org/r/973ae803fe76f712da4b2740e66dccf452d3b1e4.1307292171.git.luto@mit.edu
    Signed-off-by: Ingo Molnar

    Andy Lutomirski
     
  • Move vvars out of the vsyscall page into their own page and mark
    it NX.

    Without this patch, an attacker who can force a daemon to call
    some fixed address could wait until the time contains, say,
    0xCD80, and then execute the current time.

    Signed-off-by: Andy Lutomirski
    Cc: Jesper Juhl
    Cc: Borislav Petkov
    Cc: Linus Torvalds
    Cc: Arjan van de Ven
    Cc: Jan Beulich
    Cc: richard -rw- weinberger
    Cc: Mikael Pettersson
    Cc: Andi Kleen
    Cc: Brian Gerst
    Cc: Louis Rilling
    Cc: Valdis.Kletnieks@vt.edu
    Cc: pageexec@freemail.hu
    Link: http://lkml.kernel.org/r/b1460f81dc4463d66ea3f2b5ce240f58d48effec.1307292171.git.luto@mit.edu
    Signed-off-by: Ingo Molnar

    Andy Lutomirski
     

27 May, 2011

1 commit

  • * 'x86-vdso-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86: vdso: Remove unused variable
    x86-64: Optimize vDSO time()
    x86-64: Add time to vDSO
    x86-64: Turn off -pg and turn on -foptimize-sibling-calls for vDSO
    x86-64: Move vread_tsc into a new file with sensible options
    x86-64: Vclock_gettime(CLOCK_MONOTONIC) can't ever see nsec < 0
    x86-64: Don't generate cmov in vread_tsc
    x86-64: Remove unnecessary barrier in vread_tsc
    x86-64: Clean up vdso/kernel shared variables

    Linus Torvalds
     

24 May, 2011

2 commits

  • All static seqlocks should be initialized with the lockdep-friendly
    __SEQLOCK_UNLOCKED() macro.

    Remove legacy SEQLOCK_UNLOCKED() macro.

    Signed-off-by: Eric Dumazet
    Cc: David Miller
    Link: http://lkml.kernel.org/r/%3C1306238888.3026.31.camel%40edumazet-laptop%3E
    Signed-off-by: Thomas Gleixner

    Eric Dumazet
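
    For reference, the before/after for a static seqlock definition:

    /* Legacy initializer removed by this patch (no lockdep class): */
    /* static seqlock_t foo_lock = SEQLOCK_UNLOCKED; */

    /* Lockdep-friendly replacements: */
    static seqlock_t foo_lock = __SEQLOCK_UNLOCKED(foo_lock);
    static DEFINE_SEQLOCK(bar_lock);    /* shorthand for the same */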
     
  • Variables that are shared between the vdso and the kernel are
    currently a bit of a mess. They are each defined with their own
    magic, they are accessed differently in the kernel, the vsyscall page,
    and the vdso, and one of them (vsyscall_clock) doesn't even really
    exist.

    This changes them all to use a common mechanism. All of them are
    declared in vvar.h with a fixed address (validated by the linker
    script). In the kernel (as before), they look like ordinary
    read-write variables. In the vsyscall page and the vdso, they are
    accessed through a new macro VVAR, which gives read-only access.

    The vdso is now loaded verbatim into memory without any fixups. As a
    side bonus, access from the vdso is faster because a level of
    indirection is removed.

    While we're at it, pack jiffies and vgetcpu_mode into the same
    cacheline.

    Signed-off-by: Andy Lutomirski
    Cc: Andi Kleen
    Cc: Linus Torvalds
    Cc: "David S. Miller"
    Cc: Eric Dumazet
    Cc: Peter Zijlstra
    Cc: Borislav Petkov
    Link: http://lkml.kernel.org/r/%3C7357882fbb51fa30491636a7b6528747301b7ee9.1306156808.git.luto%40mit.edu%3E
    Signed-off-by: Thomas Gleixner

    Andy Lutomirski
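
    A sketch close to the vvar.h this introduces: every vvar is
    declared once with a fixed offset from a single base address, and
    non-kernel code reads it through VVAR().

    #define VVAR_ADDRESS    (-10 * 1024 * 1024 - 4096)

    #define DECLARE_VVAR(offset, type, name)                \
        static type const *const vvaraddr_##name =          \
            (void *)(VVAR_ADDRESS + (offset));

    #define VVAR(name) (*vvaraddr_##name)

    /* jiffies and vgetcpu_mode packed into one cacheline: */
    DECLARE_VVAR(0, volatile unsigned long, jiffies)
    DECLARE_VVAR(8, int, vgetcpu_mode)
    DECLARE_VVAR(128, struct vsyscall_gtod_data, vsyscall_gtod_data)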
     

27 Jul, 2010

2 commits

  • update_vsyscall() did not provide the wall_to_monotonic offset,
    so arch-specific implementations tend to reference wall_to_monotonic
    directly. This limits future cleanups in the timekeeping core, so
    this patch fixes the update_vsyscall interface to provide
    wall_to_monotonic, allowing wall_to_monotonic to be made static
    as planned in Documentation/feature-removal-schedule.txt

    Signed-off-by: John Stultz
    Cc: Martin Schwidefsky
    Cc: Anton Blanchard
    Cc: Paul Mackerras
    Cc: Tony Luck
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    John Stultz
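
    The interface change amounts to one extra parameter; roughly:

    /* Before: implementations reached for wall_to_monotonic
     * themselves. */
    void update_vsyscall(struct timespec *wall_time,
                         struct clocksource *clock, u32 mult);

    /* After: the offset is handed in, so wall_to_monotonic can
     * become static in the timekeeping core. */
    void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
                         struct clocksource *clock, u32 mult);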
     
  • Due to vtime calling vgettimeofday(), it's possible that an
    application could call time(); create("stuff", O_RDWR); only to
    see the file's creation timestamp be before the value returned by
    time().

    A similar way to reproduce the issue is to compare the vsyscall
    time() with the syscall time(), and observe ordering issues.

    The modified test case from Oleg Nesterov below can illustrate
    this (the loop exits only once the vsyscall result is ahead of the
    syscall result):

    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        time_t sec1, sec2;

        do {
            sec1 = time(&sec2);               /* vsyscall-backed time() */
            sec2 = syscall(__NR_time, NULL);  /* direct syscall */
        } while (sec1 <= sec2);

        return 0;  /* reached only when sec1 > sec2 */
    }

    Signed-off-by: John Stultz
    Cc: Jiri Olsa
    Cc: Oleg Nesterov
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    John Stultz
     

01 Mar, 2010

1 commit


09 Dec, 2009

1 commit


17 Nov, 2009

1 commit

  • Since commit 0a544198 "timekeeping: Move NTP adjusted clock multiplier
    to struct timekeeper" the clock multiplier of vsyscall is updated with
    the unmodified clock multiplier of the clock source and not with the
    NTP adjusted multiplier of the timekeeper.

    This causes user space observable time warps:
    new CLOCK-warp maximum: 120 nsecs, 00000025c337c537 -> 00000025c337c4bf

    Add a new argument "mult" to update_vsyscall() and hand in the
    timekeeping internal NTP adjusted multiplier.

    Signed-off-by: Lin Ming
    Cc: "Zhang Yanmin"
    Cc: Martin Schwidefsky
    Cc: Benjamin Herrenschmidt
    Cc: Tony Luck
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    Lin Ming
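
    The signature change, for reference:

    /* Before: the vsyscall copy used the clocksource's raw mult. */
    void update_vsyscall(struct timespec *wall_time,
                         struct clocksource *clock);

    /* After: the timekeeper's NTP-adjusted multiplier is passed
     * explicitly. */
    void update_vsyscall(struct timespec *wall_time,
                         struct clocksource *clock, u32 mult);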
     

12 Nov, 2009

1 commit


24 Sep, 2009

1 commit

  • It's unused.

    It isn't needed -- read or write flag is already passed and sysctl
    shouldn't care about the rest.

    It _was_ used in two places at arch/frv for some reason.

    Signed-off-by: Alexey Dobriyan
    Cc: David Howells
    Cc: "Eric W. Biederman"
    Cc: Al Viro
    Cc: Ralf Baechle
    Cc: Martin Schwidefsky
    Cc: Ingo Molnar
    Cc: "David S. Miller"
    Cc: James Morris
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

22 Aug, 2009

1 commit

  • After talking with some application writers who want very fast,
    but not fine-grained timestamps, I decided to try to implement
    new clock_ids for clock_gettime(): CLOCK_REALTIME_COARSE and
    CLOCK_MONOTONIC_COARSE, which return the time at the last tick.
    This is very fast as we don't have to access any hardware (which
    can be very painful if you're using something like the acpi_pm
    clocksource), and we can even use the vdso clock_gettime() method
    to avoid the syscall. The only trade-off is that you only get
    low-res, tick-grained time resolution.

    This isn't a new idea; I know Ingo has a patch in the -rt tree
    that made the vsyscall gettimeofday() return coarse-grained time
    when the vsyscall64 sysctl was set to 2. However, that affects
    all applications on a system.

    With this method, applications can choose the proper speed/granularity
    trade-off for themselves.

    Signed-off-by: John Stultz
    Cc: Andi Kleen
    Cc: nikolag@ca.ibm.com
    Cc: Darren Hart
    Cc: arjan@infradead.org
    Cc: jonathan@jonmasters.org
    LKML-Reference:
    Signed-off-by: Thomas Gleixner

    john stultz
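
    A minimal userspace example of the trade-off (link with -lrt on
    older glibc):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec coarse, fine;

        /* Tick-granularity time: no hardware clock access, and
         * serviceable from the vDSO without a syscall. */
        clock_gettime(CLOCK_REALTIME_COARSE, &coarse);

        /* Full-resolution time, for comparison. */
        clock_gettime(CLOCK_REALTIME, &fine);

        printf("coarse: %ld.%09ld\n", (long)coarse.tv_sec, coarse.tv_nsec);
        printf("fine:   %ld.%09ld\n", (long)fine.tv_sec, fine.tv_nsec);
        return 0;
    }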
     

28 May, 2009

1 commit

  • The *fence instructions were moved to vsyscall_64.c by commit
    cb9e35dce94a1b9c59d46224e8a94377d673e204. But this breaks the
    vDSO, because vread methods are also called from there.

    Besides, the synchronization might be unnecessary for time
    sources other than the TSC.

    [ Impact: fix potential time warp in VDSO ]

    Signed-off-by: Petr Tesarik
    LKML-Reference:
    Signed-off-by: Thomas Gleixner
    Cc:

    Petr Tesarik
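
    A sketch of why the fences belong with the vread method itself
    (names as in the x86 code of that era):

    static __always_inline cycle_t vread_tsc_sketch(void)
    {
        cycle_t ret;

        /* rdtsc can be executed speculatively; without fences a
         * reader may see the TSC out of order with the gtod data
         * and observe time going backwards. */
        rdtsc_barrier();
        ret = (cycle_t)native_read_tsc();
        rdtsc_barrier();

        return ret;
    }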
     

29 Dec, 2008

1 commit

  • …el/git/tip/linux-2.6-tip

    * 'tracing-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (241 commits)
    sched, trace: update trace_sched_wakeup()
    tracing/ftrace: don't trace on early stage of a secondary cpu boot, v3
    Revert "x86: disable X86_PTRACE_BTS"
    ring-buffer: prevent false positive warning
    ring-buffer: fix dangling commit race
    ftrace: enable format arguments checking
    x86, bts: memory accounting
    x86, bts: add fork and exit handling
    ftrace: introduce tracing_reset_online_cpus() helper
    tracing: fix warnings in kernel/trace/trace_sched_switch.c
    tracing: fix warning in kernel/trace/trace.c
    tracing/ring-buffer: remove unused ring_buffer size
    trace: fix task state printout
    ftrace: add not to regex on filtering functions
    trace: better use of stack_trace_enabled for boot up code
    trace: add a way to enable or disable the stack tracer
    x86: entry_64 - introduce FTRACE_ frame macro v2
    tracing/ftrace: add the printk-msg-only option
    tracing/ftrace: use preempt_enable_no_resched_notrace in ring_buffer_time_stamp()
    x86, bts: correctly report invalid bts records
    ...

    Fixed up trivial conflict in scripts/recordmcount.pl due to SH bits
    being already partly merged by the SH merge.

    Linus Torvalds
     

13 Nov, 2008

1 commit

  • Impact: name change of unlikely tracer and profiler

    Ingo Molnar suggested changing the config from UNLIKELY_PROFILE
    to BRANCH_PROFILING. I never did like the "unlikely" name so I
    went one step farther, and renamed all the unlikely configurations
    to a "BRANCH" variant.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

12 Nov, 2008

2 commits

  • Impact: fix bootup crash

    The branch tracer missed disabling tracing in
    arch/x86/vdso/vclock_gettime.c, which caused bootup crashes
    such as:

    [ 201.840097] init[1]: segfault at 7fffed3fe7c0 ip 00007fffed3fea2e sp 000077

    Also clean up the ugly ifdefs in arch/x86/kernel/vsyscall_64.c by
    creating a DISABLE_UNLIKELY_PROFILE facility for code to turn off
    instrumentation on a per-file basis.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
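
    The per-file opt-out is a define before the first include, so the
    instrumented likely()/unlikely() never get pulled in; a sketch of
    the top of arch/x86/vdso/vclock_gettime.c after the fix (the macro
    was later renamed DISABLE_BRANCH_PROFILING):

    #define DISABLE_UNLIKELY_PROFILE

    #include <linux/kernel.h>
    #include <linux/time.h>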
     
  • Impact: new unlikely/likely profiler

    Andrew Morton recently suggested having an in-kernel way to profile
    likely and unlikely macros. This patch achieves that goal.

    When configured, every(*) likely and unlikely macro gets a counter attached
    to it. When the condition is hit, the hits and misses of that condition
    are recorded. These numbers can later be retrieved by:

    /debugfs/tracing/profile_likely - All likely markers
    /debugfs/tracing/profile_unlikely - All unlikely markers.

    # cat /debug/tracing/profile_unlikely | head
     correct incorrect   %  Function                    File            Line
     ------- ---------   -  --------                    ----            ----
        2167         0   0  do_arch_prctl               process_64.c     832
           0         0   0  do_arch_prctl               process_64.c     804
        2670         0   0  IS_ERR                      err.h             34
       71230      5693   7  __switch_to                 process_64.c     673
       76919         0   0  __switch_to                 process_64.c     639
       43184     33743  43  __switch_to                 process_64.c     624
       12740     64181  83  __switch_to                 process_64.c     594
       12740     64174  83  __switch_to                 process_64.c     590

    # cat /debug/tracing/profile_unlikely | \
      awk '{ if ($3 > 25) print $0; }' | head -20
       44963     35259  43  __switch_to                 process_64.c     624
       12762     67454  84  __switch_to                 process_64.c     594
       12762     67447  84  __switch_to                 process_64.c     590
        1478       595  28  syscall_get_error           syscall.h         51
           0      2821 100  syscall_trace_leave         ptrace.c        1567
           0         1 100  native_smp_prepare_cpus     smpboot.c       1237
       86338    265881  75  calc_delta_fair             sched_fair.c     408
      210410    108540  34  calc_delta_mine             sched.c         1267
           0     54550 100  sched_info_queued           sched_stats.h    222
       51899     66435  56  pick_next_task_fair         sched_fair.c    1422
           6        10  62  yield_task_fair             sched_fair.c     982
        7325      2692  26  rt_policy                   sched.c          144
           0      1270 100  pre_schedule_rt             sched_rt.c      1261
        1268     48073  97  pick_next_task_rt           sched_rt.c       884
           0     45181 100  sched_info_dequeued         sched_stats.h    177
           0        15 100  sched_move_task             sched.c         8700
           0        15 100  sched_move_task             sched.c         8690
       53167     33217  38  schedule                    sched.c         4457
           0     80208 100  sched_info_switch           sched_stats.h    270
       30585     49631  61  context_switch              sched.c         2619

    # cat /debug/tracing/profile_likely | awk '{ if ($3 > 25) print $0; }'
       39900     36577  47  pick_next_task              sched.c               4397
       20824     15233  42  switch_mm                   mmu_context_64.h        18
           0         7 100  __cancel_work_timer         workqueue.c            560
         617     66484  99  clocksource_adjust          timekeeping.c          456
           0    346340 100  audit_syscall_exit          auditsc.c             1570
          38    347350  99  audit_get_context           auditsc.c              732
           0    345244 100  audit_syscall_entry         auditsc.c             1541
          38      1017  96  audit_free                  auditsc.c             1446
           0      1090 100  audit_alloc                 auditsc.c              862
        2618      1090  29  audit_alloc                 auditsc.c              858
           0         6 100  move_masked_irq             migration.c              9
           1       198  99  probe_sched_wakeup          trace_sched_switch.c    58
           2         2  50  probe_wakeup                trace_sched_wakeup.c   227
           0         2 100  probe_wakeup_sched_switch   trace_sched_wakeup.c   144
        4514      2090  31  __grab_cache_page           filemap.c             2149
       12882    228786  94  mapping_unevictable         pagemap.h               50
           4        11  73  __flush_cpu_slab            slub.c                1466
      627757    330451  34  slab_free                   slub.c                1731
        2959     61245  95  dentry_lru_del_init         dcache.c               153
         946      1217  56  load_elf_binary             binfmt_elf.c           904
         102        82  44  disk_put_part               genhd.h                206
           1         1  50  dst_gc_task                 dst.c                   82
           0        19 100  tcp_mss_split_point         tcp_output.c          1126

    As you can see by the above, there's a bit of work to do in rethinking
    the use of some unlikelys and likelys. Note: the unlikely case had 71 hits
    that were more than 25%.

    Note: After submitting my first version of this patch, Andrew Morton
    showed me a version written by Daniel Walker, where I picked up
    the following ideas from:

    1) Using __builtin_constant_p to avoid profiling fixed values.
    2) Using __FILE__ instead of instruction pointers.
    3) Using the preprocessor to stop all profiling of likely
    annotations from vsyscall_64.c.

    Thanks to Andrew Morton, Arjan van de Ven, Theodore Tso and Ingo Molnar
    for their feed back on this patch.

    (*) Not every unlikely is recorded; those that are used by vsyscalls
    (a few of them) had to have profiling disabled.

    Signed-off-by: Steven Rostedt
    Cc: Andrew Morton
    Cc: Frederic Weisbecker
    Cc: Theodore Tso
    Cc: Arjan van de Ven
    Cc: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
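
    A self-contained sketch of the counting idea with hypothetical
    names (the real annotation lives in compiler.h, places its records
    in a dedicated section for the debugfs files, and skips
    __builtin_constant_p() conditions, per idea 1 above):

    struct branch_record {
        const char *file;
        unsigned int line;
        unsigned long correct, incorrect;
    };

    /* One static record per call site; right guesses bump ->correct,
     * wrong guesses bump ->incorrect. profile_unlikely is symmetric
     * with the expectation inverted. */
    #define profiled_likely(cond) ({                    \
        static struct branch_record __rec = {           \
            .file = __FILE__,                           \
            .line = __LINE__,                           \
        };                                              \
        int __c = !!(cond);                             \
        if (__c)                                        \
            __rec.correct++;                            \
        else                                            \
            __rec.incorrect++;                          \
        __builtin_expect(__c, 1);                       \
    })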
     

09 Nov, 2008

1 commit


16 Jul, 2008

1 commit

  • Conflicts:

    arch/powerpc/Kconfig
    arch/s390/kernel/time.c
    arch/x86/kernel/apic_32.c
    arch/x86/kernel/cpu/perfctr-watchdog.c
    arch/x86/kernel/i8259_64.c
    arch/x86/kernel/ldt.c
    arch/x86/kernel/nmi_64.c
    arch/x86/kernel/smpboot.c
    arch/x86/xen/smp.c
    include/asm-x86/hw_irq_32.h
    include/asm-x86/hw_irq_64.h
    include/asm-x86/mach-default/irq_vectors.h
    include/asm-x86/mach-voyager/irq_vectors.h
    include/asm-x86/smp.h
    kernel/Makefile

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

14 Jul, 2008

1 commit


08 Jul, 2008

1 commit


26 Jun, 2008

2 commits


24 May, 2008

1 commit


25 Apr, 2008

1 commit


01 Mar, 2008

1 commit

  • A recent fix:

    commit ce28b9864b853803320c3f1d8de1b81aa4120b14
    Author: Thomas Gleixner
    Date: Wed Feb 20 23:57:30 2008 +0100

    x86: fix vsyscall wreckage

    removed the broken /kernel/vsyscall64 handler completely.
    This triggers the following debug check:

    sysctl table check failed: /kernel/vsyscall64 No proc_handler

    Restore the sane part of the proc handler.

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Thomas Gleixner