27 Jan, 2007

1 commit

  • I wouldn't mind if CONFIG_COMPAT_VDSO went away entirely. But if it's there,
    it should work properly. Currently it's quite haphazard: both the real vma
    and the fixmap are mapped, both are put in the two different AT_* slots,
    sysenter returns to the vma address rather than the fixmap address, and
    core dumps are yet another story.

    This patch makes CONFIG_COMPAT_VDSO disable the real vma and use the fixmap
    area consistently. This makes it actually compatible with what the old vdso
    implementation did.

    Signed-off-by: Roland McGrath
    Cc: Ingo Molnar
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roland McGrath
     

16 Dec, 2006

1 commit

  • It has caused more problems than it ever really solved, and is
    apparently not getting cleaned up and fixed. We can put it back when
    it's stable and isn't likely to make warning or bug events worse.

    In the meantime, enable frame pointers for more readable stack traces.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

07 Dec, 2006

7 commits

  • It turns out that the most called ops, by several orders of magnitude,
    are the interrupt manipulation ops. These are obvious candidates for
    patching, so mark them up and create infrastructure for it.

    The method used is that the ops structure has a patch function, which
    is called for each place which needs to be patched: this returns a
    number of instructions (the rest are NOP-padded).

    Usually we can spare a register (%eax) for the binary patched code to
    use, but in a couple of critical places in entry.S we can't: we make
    the clobbers explicit at the call site, and manually clobber the
    allowed registers in debug mode as an extra check.

    And:

    Don't abuse CONFIG_DEBUG_KERNEL, add CONFIG_DEBUG_PARAVIRT.

    And:

    AK: Fix warnings in x86-64 alternative.c build

    And:

    AK: Fix compilation with defconfig

    And:

    From: Andrew Morton

    Some binutils versions still like to emit references to
    __stop_parainstructions and __start_parainstructions.

    And:

    AK: Fix warnings about unused variables when PARAVIRT is disabled.

    Signed-off-by: Rusty Russell
    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Chris Wright
    Signed-off-by: Zachary Amsden
    Signed-off-by: Andi Kleen
    Signed-off-by: Andrew Morton

    Rusty Russell
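    The patching scheme above can be sketched in plain C (illustrative only;
    the names, slot size, and callback signature are assumptions, not the
    kernel's actual interface):

    ```c
    #include <assert.h>
    #include <string.h>
    #include <stddef.h>

    #define SLOT_SIZE 8     /* bytes reserved at each patchable call site */
    #define NOP 0x90        /* x86 one-byte nop */

    /* Hypothetical per-op patch callback: emits native code into the slot
       and returns the number of bytes it used. */
    typedef size_t (*patch_fn)(unsigned char *slot, size_t maxlen);

    /* "Native" interrupt-disable is a single cli (0xfa). */
    static size_t patch_native_irq_disable(unsigned char *slot, size_t maxlen)
    {
        (void)maxlen;
        slot[0] = 0xfa; /* cli */
        return 1;
    }

    /* Patch one call site: run the op's patch function, then NOP-pad the
       rest of the slot, as the commit describes. */
    static void apply_patch(unsigned char *slot, patch_fn fn)
    {
        size_t used = fn(slot, SLOT_SIZE);
        memset(slot + used, NOP, SLOT_SIZE - used);
    }
    ```

    A hypervisor would supply its own patch function per op; sites whose op
    cannot be inlined simply keep the indirect call.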
     
  • Create a paravirt.h header for all the critical operations which need to be
    replaced with hypervisor calls, and include that instead of defining native
    operations, when CONFIG_PARAVIRT.

    This patch does the dumbest possible replacement of paravirtualized
    instructions: calls through a "paravirt_ops" structure. Currently these are
    function implementations of native hardware: hypervisors will override the ops
    structure with their own variants.

    All the pv-ops functions are declared "fastcall" so that a specific
    register-based ABI is used, to make inlining assembler easier.

    And:

    From: Andy Whitcroft

    The paravirt ops introduce a 'weak' attribute onto memory_setup().
    Code ordering leads to the following warnings on x86:

    arch/i386/kernel/setup.c:651: warning: weak declaration of
    `memory_setup' after first use results in unspecified behavior

    Move memory_setup() to avoid this.

    Signed-off-by: Rusty Russell
    Signed-off-by: Chris Wright
    Signed-off-by: Andi Kleen
    Cc: Jeremy Fitzhardinge
    Cc: Zachary Amsden
    Signed-off-by: Andrew Morton
    Signed-off-by: Andy Whitcroft

    Rusty Russell
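    The indirect-call scheme can be sketched like this (field and function
    names are illustrative, not the kernel's exact paravirt_ops layout):

    ```c
    #include <assert.h>

    /* Simulated EFLAGS with the interrupt-enable bit (IF, bit 9). */
    static unsigned long fake_eflags = 0x200;

    static void native_irq_disable(void) { fake_eflags &= ~0x200UL; }
    static void native_irq_enable(void)  { fake_eflags |=  0x200UL; }

    /* Ops structure of function pointers, defaulting to the native
       implementations; a hypervisor overrides entries with its own. */
    struct pv_ops {
        void (*irq_disable)(void);
        void (*irq_enable)(void);
    };

    static struct pv_ops paravirt_ops = {
        .irq_disable = native_irq_disable,
        .irq_enable  = native_irq_enable,
    };

    /* All callers go through the structure. */
    #define raw_local_irq_disable() paravirt_ops.irq_disable()
    #define raw_local_irq_enable()  paravirt_ops.irq_enable()
    ```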
     
  • The entry.S code at work_notifysig is surely wrong. It drops into unrelated
    code if the branch to work_notifysig_v86 is taken, and CONFIG_VM86=n.

    [PATCH] Make vm86 support optional
    tree 9b5daef5280800a0006343a17f63072658d91a1d
    pushed to git Jan 8, 2006, and first appears in 2.6.16

    The 'fix' here is to also compile out the vm86 test & branch when
    CONFIG_VM86=n.

    Signed-off-by: Joe Korty
    Signed-off-by: Andi Kleen

    Joe Korty
     
  • Use the cpu_number in the PDA to implement raw_smp_processor_id. This is a
    little simpler than using thread_info, though the cpu field in thread_info
    cannot be removed since it is used for things other than getting the current
    CPU in common code.

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Andi Kleen
    Cc: Chuck Ebbert
    Cc: Zachary Amsden
    Cc: Jan Beulich
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton

    Jeremy Fitzhardinge
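    In sketch form (a plain pointer stands in for the %gs-based PDA access,
    and the names are illustrative):

    ```c
    #include <assert.h>

    struct i386_pda {
        struct i386_pda *_self;  /* self-pointer kept in the real PDA */
        int cpu_number;          /* this CPU's index */
    };

    static struct i386_pda cpu0_pda = { &cpu0_pda, 0 };

    /* Stand-in for the %gs-relative access the kernel actually does. */
    static struct i386_pda *current_pda = &cpu0_pda;

    #define raw_smp_processor_id() (current_pda->cpu_number)
    ```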
     
  • This patch is the meat of the PDA change. This patch makes several related
    changes:

    1: Most significantly, %gs is now used in the kernel. This means that on
    entry, the old value of %gs is saved away, and it is reloaded with
    __KERNEL_PDA.

    2: entry.S constructs the stack in the shape of struct pt_regs, and this
    is passed around the kernel so that the process's saved register
    state can be accessed.

    Unfortunately struct pt_regs doesn't currently have space for %gs
    (or %fs). This patch extends pt_regs to add space for gs (no space
    is allocated for %fs, since it won't be used, and it would just
    complicate the code in entry.S to work around the space).

    3: Because %gs is now saved on the stack like %ds, %es and the integer
    registers, there are a number of places where it no longer needs to
    be handled specially; namely context switch, and saving/restoring the
    register state in a signal context.

    4: And since kernel threads run in kernel space and call normal kernel
    code, they need to be created with their %gs == __KERNEL_PDA.

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Andi Kleen
    Cc: Chuck Ebbert
    Cc: Zachary Amsden
    Cc: Jan Beulich
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton

    Jeremy Fitzhardinge
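    The pt_regs extension can be sketched as follows (layout illustrative,
    not byte-accurate to any particular kernel version): a slot for the
    saved %gs joins %ds/%es, and no slot is added for %fs.

    ```c
    #include <assert.h>
    #include <stddef.h>

    struct pt_regs {
        long ebx, ecx, edx, esi, edi, ebp, eax;
        int  xds;
        int  xes;
        int  xgs;        /* new: saved %gs; no slot for %fs */
        long orig_eax;
        long eip;
        int  xcs;
        long eflags, esp;
        int  xss;
    };
    ```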
     
  • Use asm-offsets for the offsets of registers into the pt_regs struct, rather
    than having hard-coded constants

    I left the constants in the comments of entry.S because they're useful for
    reference; the code in entry.S is very dependent on the layout of pt_regs,
    even when using asm-offsets.

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Andi Kleen
    Cc: Keith Owens
    Signed-off-by: Andrew Morton

    Jeremy Fitzhardinge
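    The asm-offsets technique works roughly like this: a small C program
    computes offsetof() for each member, and the build turns its output into
    assembler constants (names and the struct here are illustrative):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stddef.h>

    struct pt_regs { long ebx, ecx, edx, esi, edi, ebp, eax; };

    /* Emit one named constant; the real kernel emits these as special
       asm comments that the build turns into a generated header. */
    #define DEFINE(sym, val) printf("#define %s %ld\n", sym, (long)(val))

    static void emit_offsets(void)
    {
        DEFINE("PT_EBX", offsetof(struct pt_regs, ebx));
        DEFINE("PT_EAX", offsetof(struct pt_regs, eax));
    }
    ```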
     
  • Clean up the espfix code:

    - Introduced PER_CPU() macro to be used from asm
    - Introduced GET_DESC_BASE() macro to be used from asm
    - Rewrote the fixup code in asm, as calling C code with the altered %ss
      appeared to be unsafe
    - No longer altering the stack from a .fixup section
    - 16-bit per-cpu stack is no longer used; instead the stack segment base
      is patched so that the high word of the kernel and user %esp are the
      same.
    - Added the limit-patching for the espfix segment. (Chuck Ebbert)

    [jeremy@goop.org: use the x86 scaling addressing mode rather than shifting]
    Signed-off-by: Stas Sergeev
    Signed-off-by: Andi Kleen
    Acked-by: Zachary Amsden
    Acked-by: Chuck Ebbert
    Acked-by: Jan Beulich
    Cc: Andi Kleen
    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton

    Stas Sergeev
     

26 Sep, 2006

7 commits

  • Current gcc generates calls, not jumps, to noreturn functions. When that
    happens the return address can point to the next function, which confuses
    the unwinder.

    This patch works around it by marking asynchronous exception frames, in
    contrast to normal call frames, in the unwind information, and teaching
    the unwinder to decode this.

    For normal call frames the unwinder now subtracts one from the address,
    which avoids this problem. The standard libgcc unwinder uses the same
    trick.

    It doesn't include adjustment of the printed address (i.e. for the
    original example, it'd still be kernel_math_error+0 that gets displayed,
    but the unwinder wouldn't get confused anymore).

    This only works with binutils 2.6.17+ and some versions of H.J.Lu's
    2.6.16, unfortunately, because earlier binutils don't support
    .cfi_signal_frame.

    [AK: added automatic detection of the new binutils and wrote description]

    Signed-off-by: Jan Beulich
    Signed-off-by: Andi Kleen

    Jan Beulich
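    The subtract-one trick can be sketched as: for a normal call frame, look
    up (return address - 1), so a call sitting at the very end of a noreturn
    caller still resolves to that caller; signal/exception frames keep the
    address as-is. (The function table and names below are made up for
    illustration.)

    ```c
    #include <assert.h>
    #include <string.h>

    struct frame {
        unsigned long ret_addr;
        int signal_frame;   /* marked via .cfi_signal_frame in real DWARF */
    };

    /* Toy symbol lookup: two adjacent functions. A call at the very end of
       "caller" leaves a return address equal to next_func's start. */
    static const char *lookup(unsigned long addr)
    {
        if (addr >= 0x1000 && addr < 0x1100) return "caller";
        if (addr >= 0x1100 && addr < 0x1200) return "next_func";
        return "?";
    }

    static const char *frame_function(const struct frame *f)
    {
        unsigned long addr = f->ret_addr;
        if (!f->signal_frame)
            addr -= 1;      /* point back inside the calling function */
        return lookup(addr);
    }
    ```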
     
  • We allow for the fact that the guest kernel may not run in ring 0. This
    requires some abstraction in a few places when setting %cs or checking
    privilege level (user vs kernel).

    This is Chris' [RFC PATCH 15/33] move segment checks to subarch, except rather
    than using #define USER_MODE_MASK which depends on a config option, we use
    Zach's more flexible approach of assuming ring 3 == userspace. I also used
    "get_kernel_rpl()" over "get_kernel_cs()" because I think it reads better in
    the code...

    1) Remove the hardcoded 3 and introduce #define SEGMENT_RPL_MASK 3
    2) Add a get_kernel_rpl() macro, and don't assume it's zero.

    And:

    Clean up of patch for letting kernel run other than ring 0:

    a. Add some comments about the SEGMENT_IS_*_CODE() macros.
    b. Add a USER_RPL macro. (Code was comparing a value to a mask
    in some places and to the magic number 3 in other places.)
    c. Add macros for table indicator field and use them.
    d. Change the entry.S tests for LDT stack segment to use the macros

    Signed-off-by: Rusty Russell
    Signed-off-by: Zachary Amsden
    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Andi Kleen

    Rusty Russell
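    The selector bit layout behind these macros (standard i386 selector
    encoding; the macro names follow the commit, the helper function is
    illustrative):

    ```c
    #include <assert.h>

    #define SEGMENT_RPL_MASK 0x3   /* low two bits: requested privilege level */
    #define SEGMENT_TI_MASK  0x4   /* table indicator bit */
    #define SEGMENT_LDT      0x4   /* selector refers to the LDT */
    #define SEGMENT_GDT      0x0   /* selector refers to the GDT */
    #define USER_RPL         0x3   /* ring 3 == userspace */

    /* With paravirt the kernel may not run at ring 0, so the kernel RPL is
       a variable rather than an assumed 0. */
    static int kernel_rpl = 0;
    #define get_kernel_rpl() (kernel_rpl)

    static int selector_is_user(unsigned short cs)
    {
        return (cs & SEGMENT_RPL_MASK) == USER_RPL;
    }
    ```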
     
  • Abstract sensitive instructions in assembler code, replacing them with macros
    (which currently are #defined to the native versions). We use long names:
    assembler is case-insensitive, so if something goes wrong and macros do not
    expand, it would assemble anyway.

    Resulting object files are exactly the same as before.

    Signed-off-by: Rusty Russell
    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Andi Kleen

    Rusty Russell
     
  • In i386's entry.S, FIX_STACK() needs annotation because it
    replaces the stack pointer. And the rest of nmi() needs
    annotation in order to compile with these new annotations.

    Signed-off-by: Chuck Ebbert
    Signed-off-by: Andi Kleen

    Chuck Ebbert
     
  • A kprobe executes IRET early and that could cause NMI recursion and stack
    corruption.

    Note: This problem was originally spotted and solved by Andi Kleen in the
    x86_64 architecture. This patch is an adaption of his patch for i386.

    AK: Merged with current code which was a bit different.
    AK: Removed printk in nmi handler that shouldn't be there in the first place
    AK: Added missing include.
    AK: added KPROBES_END

    Signed-off-by: Fernando Vazquez
    Signed-off-by: Andi Kleen

    Fernando Luis Vázquez Cao
     
  • And add proper CFI annotation to it which was previously
    impossible. This prevents "stuck" messages by the dwarf2 unwinder
    when reaching the top of a kernel stack.

    Includes feedback from Jan Beulich

    Cc: jbeulich@novell.com
    Signed-off-by: Andi Kleen

    Andi Kleen
     
  • This patch moves entry.S::error_entry to the .kprobes.text section: since
    code marked unsafe for kprobes jumps directly to entry.S::error_entry, it
    must be marked unsafe as well.
    This patch also moves all the ".previous.text" asm directives to
    ".previous" for the kprobes section.

    AK: Following a similar i386 patch from Chuck Ebbert
    AK: Also merged Jeremy's fix in.

    From: Jeremy Fitzhardinge

    KPROBE_ENTRY does a .section .kprobes.text, and expects its users to
    do a .previous at the end of the function.

    Unfortunately, if any code within the function switches sections, for
    example .fixup, then the .previous ends up putting all subsequent code
    into .fixup. Worse, any subsequent .fixup code gets intermingled with
    the code it's supposed to be fixing (which is also in .fixup). It's
    surprising this didn't cause more havoc.

    The fix is to use .pushsection/.popsection, so this stuff nests
    properly. A further cleanup would be to get rid of all
    .section/.previous pairs, since they're inherently fragile.

    From: Chuck Ebbert

    Because code marked unsafe for kprobes jumps directly to
    entry.S::error_code, that must be marked unsafe as well.
    The easiest way to do that is to move the page fault entry
    point to just before error_code and let it inherit the same
    section.

    Also moved all the ".previous" asm directives for kprobes
    sections to column 1 and removed ".text" from them.

    Signed-off-by: Chuck Ebbert
    Signed-off-by: Andi Kleen

    Prasanna S.P
     

19 Sep, 2006

1 commit

  • (And reset it on new thread creation)

    It turns out that eflags is important to save and restore not just
    because of iopl, but due to the magic bits like the NT bit, which we
    don't want leaking between different threads.

    Tested-by: Mike Galbraith
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

01 Aug, 2006

1 commit

  • CFA needs to be adjusted upwards for push, and downwards for pop.
    arch/i386/kernel/entry.S gets it wrong in one place.

    Signed-off-by: Markus Armbruster
    Acked-by: Jan Beulich
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Markus Armbruster
     

04 Jul, 2006

1 commit


01 Jul, 2006

1 commit


28 Jun, 2006

3 commits

  • Move the i386 VDSO down into a vma and thus randomize it.

    Besides the security implications, this feature also helps debuggers, which
    can COW a vma-backed VDSO just like a normal DSO and can thus do
    single-stepping and other debugging features.

    It's good for hypervisors (Xen, VMWare) too, which typically live in the same
    high-mapped address space as the VDSO, hence whenever the VDSO is used, they
    get lots of guest pagefaults and have to fix such guest accesses up - which
    slows things down instead of speeding things up (the primary purpose of the
    VDSO).

    There's a new CONFIG_COMPAT_VDSO (default=y) option, which provides support
    for older glibcs that still rely on a prelinked high-mapped VDSO. Newer
    distributions (using glibc 2.3.3 or later) can turn this option off. Turning
    it off is also recommended for security reasons: attackers cannot use the
    predictable high-mapped VDSO page as syscall trampoline anymore.

    There is a new vdso=[0|1] boot option as well, and a runtime
    /proc/sys/vm/vdso_enabled sysctl switch, that allows the VDSO to be turned
    on/off.

    (This version of the VDSO-randomization patch also has working ELF
    coredumping, the previous patch crashed in the coredumping code.)

    This code is a combined work of the exec-shield VDSO randomization
    code and Gerd Hoffmann's hypervisor-centric VDSO patch. Rusty Russell
    started this patch and I completed it.

    [akpm@osdl.org: cleanups]
    [akpm@osdl.org: compile fix]
    [akpm@osdl.org: compile fix 2]
    [akpm@osdl.org: compile fix 3]
    [akpm@osdl.org: revert MAXMEM change]
    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Cc: Gerd Hoffmann
    Cc: Rusty Russell
    Cc: Zachary Amsden
    Cc: Andi Kleen
    Cc: Jan Beulich
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     
  • Commit c3ff8ec31c1249d268cd11390649768a12bec1b9 ("[PATCH] i386: Don't
    miss pending signals returning to user mode after signal processing")
    meant that vm86 interrupt/signal handling got broken for the case when
    vm86 is called from kernel space.

    In this scenario, if a signal is pending because of a vm86 interrupt,
    do_notify_resume/do_signal exits immediately due to the user_mode() check,
    without processing any signals. Thus, the resume_userspace handler spins
    in a tight loop with a signal pending and TIF_SIGPENDING set. Previously
    everything worked OK.

    No in-tree usage of vm86() from kernel space exists, but I've heard
    about a number of projects out there which use vm86 calls from kernel,
    one of them being this, for instance:

    http://dev.gentoo.org/~spock/projects/vesafb-tng/

    The following patch fixes the issue.

    Signed-off-by: Aleksey Gorelov
    Cc: Atsushi Nemoto
    Cc: Roland McGrath
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Aleksey Gorelov
     
  • Remove the limit of 256 interrupt vectors by changing the value stored in
    orig_{e,r}ax to be the complemented interrupt vector. The orig_{e,r}ax
    needs to be < 0 to allow the signal code to distinguish between return from
    interrupt and return from syscall. With this change applied, NR_IRQS can
    be > 256.

    Xen extends the IRQ numbering space to include room for dynamically
    allocated virtual interrupts (in the range 256-511), which requires a more
    permissive interface to do_IRQ.

    Signed-off-by: Ian Pratt
    Signed-off-by: Christian Limpach
    Signed-off-by: Chris Wright
    Signed-off-by: Rusty Russell
    Cc: "Protasevich, Natalie"
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rusty Russell
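    The encoding can be sketched as: store the one's complement of the vector
    in orig_eax. Any complemented non-negative vector is negative, so the
    "return from interrupt vs. return from syscall" sign test keeps working
    while vectors above 255 fit. (Helper names below are illustrative.)

    ```c
    #include <assert.h>

    /* Store the complemented vector in orig_eax. */
    static long encode_vector(int vector)
    {
        return ~(long)vector;     /* always < 0 for vector >= 0 */
    }

    /* Recover the vector inside the interrupt dispatcher. */
    static int decode_orig_eax(long orig_eax)
    {
        return (int)~orig_eax;
    }

    /* Signal code: negative orig_eax means "return from interrupt",
       non-negative means "return from syscall" (the syscall number). */
    static int is_interrupt_frame(long orig_eax)
    {
        return orig_eax < 0;
    }
    ```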
     

27 Jun, 2006

2 commits


23 Mar, 2006

1 commit

  • Using PTRACE_SINGLESTEP on a child that does an int80 syscall misses the
    SIGTRAP that should be delivered upon syscall exit. Fix that by setting
    TIF_SINGLESTEP when entering the kernel via int80 with TF set.

    /* Test whether singlestep through an int80 syscall works.
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    static int child, status;
    static struct user_regs_struct regs;

    static void do_child(void)
    {
            ptrace(PTRACE_TRACEME, 0, 0, 0);
            kill(getpid(), SIGUSR1);
            asm ("int $0x80" : : "a" (20)); /* getpid */
    }

    static void do_parent(void)
    {
            unsigned long eip, expected = 0;
    again:
            waitpid(child, &status, 0);
            if (WIFEXITED(status) || WIFSIGNALED(status))
                    return;

            if (WIFSTOPPED(status)) {
                    ptrace(PTRACE_GETREGS, child, 0, &regs);
                    eip = regs.eip;
                    if (expected)
                            fprintf(stderr, "child stop @ %08lx, expected %08lx %s\n",
                                    eip, expected,
                                    eip == expected ? "" : "<== ERROR");

                    if (*(unsigned short *)eip == 0x80cd) { /* "int $0x80" */
                            expected = eip + 2;
                    } else
                            expected = 0;

                    ptrace(PTRACE_SINGLESTEP, child, 0, 0);
            }
            goto again;
    }

    int main(void)
    {
            child = fork();
            if (child)
                    do_parent();
            else
                    do_child();
            return 0;
    }
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chuck Ebbert
     

09 Jan, 2006

1 commit

  • This adds an option to remove vm86 support under CONFIG_EMBEDDED. Saves
    about 5k.

    This version eliminates most of the #ifdefs of the previous version and
    instead uses function stubs in vm86.h. Also, release_vm86_irqs is moved
    from asm-i386/irq.h to a more appropriate home in vm86.h so that the stubs
    can live together.

    $ size vmlinux-baseline vmlinux-novm86
       text   data    bss     dec    hex filename
    2920821 523232 190652 3634705 377611 vmlinux-baseline
    2916268 523100 190492 3629860 376324 vmlinux-novm86

    Signed-off-by: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matt Mackall
     

07 Jan, 2006

1 commit


14 Nov, 2005

1 commit

  • Instruction pointer comparisons for the NMI on debug stack check/fixup
    were incorrect.

    From: Jan Beulich
    Cc: "Eric W. Biederman"
    Cc: Zwane Mwaikambo
    Acked-by: "Seth, Rohit"
    Cc: Zachary Amsden
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
     

12 Sep, 2005

1 commit


08 Sep, 2005

1 commit


05 Sep, 2005

3 commits

  • As a follow-up to "UML Support - Ptrace: adds the host SYSEMU support, for
    UML and general usage" (i.e. uml-support-* in current mm).

    Avoid unconditionally jumping to work_pending and code copying, just reuse
    the already existing resume_userspace path.

    One interesting note: Charles P. Wright suggested that the API could be
    improved with no downsides for UML (except that it would have to support
    yet another host API, since dropping support for the current API is not
    reasonable from UML users' point of view).

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    CC: Charles P. Wright
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • With this patch, we change the way we handle switching from PTRACE_SYSEMU to
    PTRACE_{SINGLESTEP,SYSCALL}, to free TIF_SYSCALL_EMU from double use as a
    preparation for PTRACE_SYSEMU_SINGLESTEP extension, without changing the
    behavior of the host kernel.

    Signed-off-by: Bodo Stroesser
    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bodo Stroesser
     
  • Jeff Dike,
    Paolo 'Blaisorblade' Giarrusso,
    Bodo Stroesser

    Adds a new ptrace(2) mode, called PTRACE_SYSEMU, resembling PTRACE_SYSCALL
    except that the kernel does not execute the requested syscall; this is useful
    to improve performance for virtual environments, like UML, which want to run
    the syscall on their own.

    In fact, using PTRACE_SYSCALL means stopping child execution twice, on entry
    and on exit, and each time you also have two context switches; with SYSEMU you
    avoid the 2nd stop and so save two context switches per syscall.

    Also, some architectures don't have support in the host for changing the
    syscall number via ptrace(), which is currently needed to skip syscall
    execution (UML turns any syscall into getpid() to avoid it being executed on
    the host). Fixing that is hard, while SYSEMU is easier to implement.

    * This version of the patch includes some suggestions of Jeff Dike to avoid
    adding any instructions to the syscall fast path, plus some other little
    changes, by myself, to make it work even when the syscall is executed with
    SYSENTER (but I'm unsure about them). It has been widely tested for quite a
    lot of time.

    * Various fixes were included to handle the switches between the various
    states, i.e. when for instance a syscall entry is traced with one of
    PT_SYSCALL / _SYSEMU / _SINGLESTEP and another one is used on exit.
    Basically, this is done by remembering which one of them was used even after
    the call to ptrace_notify().

    * We're combining TIF_SYSCALL_EMU with TIF_SYSCALL_TRACE or TIF_SINGLESTEP
    to make do_syscall_trace() notice that the current syscall was started with
    SYSEMU on entry, so that no notification ought to be done in the exit path;
    this is a bit of a hack, so this problem is solved in another way in next
    patches.

    * Also, the effects of the patch:
    "Ptrace - i386: fix Syscall Audit interaction with singlestep"
    are cancelled; they are restored back in the last patch of this series.

    Detailed descriptions of the patches doing this kind of processing follow (but
    I've already summed everything up).

    * Fix behaviour when changing interception kind #1.

    In do_syscall_trace(), we check the status of the TIF_SYSCALL_EMU flag
    only after doing the debugger notification; but the debugger might have
    changed the status of this flag because it continued execution with
    PTRACE_SYSCALL, so this is wrong. This patch fixes it by saving the flag
    status before calling ptrace_notify().

    * Fix behaviour when changing interception kind #2:
    avoid intercepting syscall on return when using SYSCALL again.

    A guest process switching from using PTRACE_SYSEMU to PTRACE_SYSCALL
    crashes.

    The problem is in arch/i386/kernel/entry.S. The current SYSEMU patch
    prevents the syscall handler from being called, but does not prevent
    do_syscall_trace() from being called after this for syscall completion
    interception.

    The appended patch fixes this. It reuses the flag TIF_SYSCALL_EMU to
    remember "we come from PTRACE_SYSEMU and now are in PTRACE_SYSCALL", since
    the flag is unused in the depicted situation.

    * Fix behaviour when changing interception kind #3:
    avoid intercepting syscall on return when using SINGLESTEP.

    While testing 2.6.9 and the skas3.v6 patch with my latest patch, I had
    problems with singlestepping on UML in SKAS with SYSEMU. It looped,
    receiving SIGTRAPs without moving forward. The EIP of the traced process
    was the same for all SIGTRAPs.

    What's missing is to handle switching from PTRACE_SYSCALL_EMU to
    PTRACE_SINGLESTEP in a way very similar to what is done for the change from
    PTRACE_SYSCALL_EMU to PTRACE_SYSCALL_TRACE.

    I.e., after calling ptrace(PTRACE_SYSEMU), on the return path the debugger
    is notified and then wakes up the process; the syscall is executed (or
    skipped, when do_syscall_trace() returns 0, i.e. when using PTRACE_SYSEMU),
    and do_syscall_trace() is called again. Since we are on the return path of
    a SYSEMU'd syscall, if the wake-up is performed through
    ptrace(PTRACE_SYSCALL), we must still avoid notifying the parent of the
    syscall exit. Now this behaviour is extended even to resuming with
    PTRACE_SINGLESTEP.

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Laurent Vivier
     

01 May, 2005

2 commits

  • Split the i386 entry.S files into entry.S and syscall_table.S which is
    included in the previous one (so actually there is no difference between them)
    and use the syscall_table.S in the UML build, instead of tracking by hand the
    syscall table changes (which is inherently error-prone).

    We must only insert the right #defines to inject the changes we need from
    the i386 syscall table (for instance some different function names); also,
    we don't implement some i386 syscalls, such as ioperm(), nor some
    TLS-related ones (yet to be provided).

    Signed-off-by: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paolo 'Blaisorblade' Giarrusso
     
  • do_debug() and do_int3() return void.

    This patch fixes the CONFIG_KPROBES variant of do_int3() to return void too
    and adjusts entry.S accordingly.

    Signed-off-by: Stas Sergeev
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stas Sergeev
     

30 Apr, 2005

1 commit

  • This makes a trap on the 'iret' that returns us to user space
    cause a nice clean SIGSEGV, instead of just a hard (and silent)
    exit.

    That way a debugger can actually try to see what happened, and
    we also properly notify everybody who might be interested about
    us being gone.

    This loses the error code, but tells the debugger what happened
    with ILL_BADSTK in the siginfo.

    Linus Torvalds
     

17 Apr, 2005

2 commits

  • Fix the access-above-bottom-of-stack crash.

    1. Allows us to preserve the valuable optimization

    2. Works for NMIs

    3. Doesn't care whether or not there are more instances like this
    where the stack is left empty.

    4. Seems to work for me without the crashes :)

    (akpm: this is still under discussion, although I _think_ it's OK. You might
    want to hold off)

    Signed-off-by: Stas Sergeev
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stas Sergeev
     
  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds