13 Jan, 2006

4 commits

  • Signed-off-by: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Al Viro
     
  • )

    From: Al Viro

    task_pt_regs() needs the same offset-by-8 to match copy_thread()

    Signed-off-by: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    akpm@osdl.org
     
  • Signed-off-by: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Al Viro
     
  • )

    From: Ingo Molnar

    This is the latest version of the scheduler cache-hot-auto-tune patch.

    The first problem was that detection time scaled with O(N^2), which is
    unacceptable on larger SMP and NUMA systems. To solve this:

    - I've added a 'domain distance' function, which is used to cache
    measurement results. Each distance is only measured once. This means
    that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT
    distances 0 and 1, and on SMP distance 0 is measured. The code walks
    the domain tree to determine the distance, so it automatically follows
    whatever hierarchy an architecture sets up. This cuts down on the boot
    time significantly and removes the O(N^2) limit. The only assumption
    is that migration costs can be expressed as a function of domain
    distance - this covers the overwhelming majority of existing systems,
    and is a good guess even for more asymmetric systems.

    [ People hacking systems that have asymmetries that break this
    assumption (e.g. different CPU speeds) should experiment a bit with
    the cpu_distance() function. Adding a ->migration_distance factor to
    the domain structure would be one possible solution - but let's first
    see the problem systems, if they exist at all. Let's not overdesign. ]
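    The distance-caching idea above can be modeled in plain C: measure a cost
    only once per domain distance and reuse it for every CPU pair at that
    distance. This is a userspace sketch, not the kernel code; measure_cost(),
    the cost model inside it and the array sizes are illustrative assumptions.

```c
#include <assert.h>

#define MAX_DOMAIN_DISTANCE 8
#define COST_UNKNOWN (-1L)

/* Cache of per-distance migration costs; each slot is measured at most once. */
static long cost_cache[MAX_DOMAIN_DISTANCE];
static int measurements;            /* how many real measurements we performed */

static void cost_cache_init(void)
{
        for (int i = 0; i < MAX_DOMAIN_DISTANCE; i++)
                cost_cache[i] = COST_UNKNOWN;
        measurements = 0;
}

/* Stand-in for the expensive benchmark between two concrete CPUs. */
static long measure_cost(int distance)
{
        measurements++;
        return 1000L * (distance + 1);   /* arbitrary model: cost grows with distance */
}

/*
 * Cost for a CPU pair at a given domain distance: O(distances)
 * measurements in total, instead of O(N^2) over all CPU pairs.
 */
static long migration_cost(int distance)
{
        if (cost_cache[distance] == COST_UNKNOWN)
                cost_cache[distance] = measure_cost(distance);
        return cost_cache[distance];
}
```

    With 8 CPUs all at distance 1 from each other, the 28 CPU pairs trigger a
    single real measurement; that is the whole boot-time saving.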

    Another problem was that only a single cache-size was used for measuring
    the cost of migration, and most architectures didn't set that variable
    up. Furthermore, a single cache-size does not fit NUMA hierarchies with
    L3 caches and does not fit HT setups, where different CPUs will often
    have different 'effective cache sizes'. To solve this problem:

    - Instead of relying on a single cache-size provided by the platform and
    sticking to it, the code now auto-detects the 'effective migration
    cost' between two measured CPUs, by iterating through a wide range of
    cache sizes. The code searches for the maximum migration cost, which
    occurs when the working set of the test-workload falls just below the
    'effective cache size'. I.e., a real-life optimized search is done for
    the maximum migration cost between two real CPUs.

    This, amongst other things, has the positive effect that if e.g. two
    CPUs share an L2/L3 cache, a different (and accurate) migration cost
    will be found than between two CPUs on the same system that don't share
    any caches.

    (The reliable measurement of migration costs is tricky - see the source
    for details.)
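    The search described above - sweep working-set sizes and keep the maximum
    observed cost - can be sketched as follows. This is a userspace model;
    probe_cost() stands in for the real two-CPU benchmark and its exact shape
    is an assumption, but it captures the "cost peaks just below the effective
    cache size" behavior the text describes.

```c
#include <assert.h>

/*
 * Model of measured migration cost as a function of working-set size:
 * it rises while the working set still fits in cache, and collapses once
 * the working set spills out of the cache.
 */
static long probe_cost(long size, long effective_cache_size)
{
        if (size <= effective_cache_size)
                return size;             /* more cached data -> more to re-fetch */
        return effective_cache_size / 8; /* cache-cold either way: cheap */
}

/* Sweep a range of sizes, doubling each step, and keep the maximum cost. */
static long find_max_cost(long min_size, long max_size, long effective_cache_size)
{
        long max_cost = 0;

        for (long size = min_size; size <= max_size; size *= 2) {
                long cost = probe_cost(size, effective_cache_size);
                if (cost > max_cost)
                        max_cost = cost;
        }
        return max_cost;
}
```

    The returned maximum directly tracks the effective cache size, which is
    why CPUs sharing a cache end up with a different (smaller) cost than CPUs
    that share nothing.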

    Furthermore, I've added various boot-time options to override/tune
    migration behavior.

    Firstly, there's a blanket override for autodetection:

    migration_cost=1000,2000,3000

    will override the depth 0/1/2 values with 1msec/2msec/3msec values.

    Secondly, there's a global factor that can be used to increase (or
    decrease) the autodetected values:

    migration_factor=120

    will increase the autodetected values by 20%. This option is useful to
    tune things in a workload-dependent way - e.g. if a workload is
    cache-insensitive then CPU utilization can be maximized by specifying
    migration_factor=0.
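    The two boot options combine in the obvious way: an explicit
    migration_cost= value wins outright, and migration_factor= scales whatever
    was autodetected. A sketch of that precedence (the function name and the
    COST_UNSET sentinel are illustrative, not the kernel's):

```c
#include <assert.h>

#define COST_UNSET (-1L)

/*
 * Resolve the migration cost for one domain depth:
 * - an explicit migration_cost= override is used as-is;
 * - otherwise the autodetected value is scaled by migration_factor (percent).
 */
static long resolve_cost(long autodetected, long override, long factor_percent)
{
        if (override != COST_UNSET)
                return override;
        return autodetected * factor_percent / 100;
}
```

    So migration_factor=120 turns an autodetected 1.7 msec (1700 usec) into
    ~2.0 msec, and migration_factor=0 zeroes it, making migration free from
    the scheduler's point of view.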

    I've tested the autodetection code quite extensively on x86, on three
    systems (dual Celeron, dual HT P4 and an 8-way P3/Xeon/2MB), and the
    autodetected values look pretty good:

    Dual Celeron (128K L2 cache):

    ---------------------
    migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
    ---------------------
    [00] [01]
    [00]: - 1.7(1)
    [01]: 1.7(1) -
    ---------------------
    cacheflush times [2]: 0.0 (0) 1.7 (1784008)
    ---------------------

    Here the slow memory subsystem dominates system performance, and even
    though caches are small, the migration cost is 1.7 msecs.

    Dual HT P4 (512K L2 cache):

    ---------------------
    migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
    ---------------------
    [00] [01] [02] [03]
    [00]: - 0.4(1) 0.0(0) 0.4(1)
    [01]: 0.4(1) - 0.4(1) 0.0(0)
    [02]: 0.0(0) 0.4(1) - 0.4(1)
    [03]: 0.4(1) 0.0(0) 0.4(1) -
    ---------------------
    cacheflush times [2]: 0.0 (33900) 0.4 (448514)
    ---------------------

    Here it can be seen that there is no migration cost between two HT
    siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory
    system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.

    8-way P3/Xeon [2MB L2 cache]:

    ---------------------
    migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
    ---------------------
    [00] [01] [02] [03] [04] [05] [06] [07]
    [00]: - 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
    [01]: 19.2(1) - 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
    [02]: 19.2(1) 19.2(1) - 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
    [03]: 19.2(1) 19.2(1) 19.2(1) - 19.2(1) 19.2(1) 19.2(1) 19.2(1)
    [04]: 19.2(1) 19.2(1) 19.2(1) 19.2(1) - 19.2(1) 19.2(1) 19.2(1)
    [05]: 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) - 19.2(1) 19.2(1)
    [06]: 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) - 19.2(1)
    [07]: 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) -
    ---------------------
    cacheflush times [2]: 0.0 (0) 19.2 (19281756)
    ---------------------

    This one has huge caches and a relatively slow memory subsystem - so the
    migration cost is 19 msecs.

    Signed-off-by: Ingo Molnar
    Signed-off-by: Ashok Raj
    Signed-off-by: Ken Chen
    Cc:
    Signed-off-by: John Hawkes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    akpm@osdl.org
     

12 Jan, 2006

16 commits

  • The explicit and implicit calls to setup_early_printk() were passing
    inconsistent arguments.

    Signed-Off-By: Jan Beulich

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Jan Beulich
     
  • It has no business being elsewhere and x86-64 doesn't need/want it.

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
  • early_cpu_detect only runs on the BP, but this code needs to run
    on all CPUs.

    Looks like a mismerge somewhere. Also add a warning comment.

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
    Currently we attempt to restore virtual wire mode on reboot, which only
    works if we can figure out where the i8259 is connected. This is very
    useful when we kexec another kernel, and likely helpful to a peculiar
    BIOS that makes assumptions about how the system is set up.

    Since the ACPI MADT table does not provide the location where the i8259 is
    connected, we have to look at the hardware to figure it out.

    Most systems have the i8259 connected to the local APIC of the CPU, so they
    won't be affected, but people running Opteron and some ServerWorks chipsets
    should be able to use kexec now.

    In addition this patch removes the hard-coded assumption that the io_apic
    that delivers ISA interrupts is always known to the kernel as io_apic 0.
    There does not appear to be anything to guarantee that assumption is true.

    And From: Vivek Goyal

    A minor fix to the patch which remembers the location where the i8259 is
    connected. The counter i has now been replaced by apic; i was holding a
    junk value, which was leading to non-detection of the i8259 connected to
    an IOAPIC.

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Vivek Goyal
    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Eric W. Biederman
     
    Passing random input values in eax to cpuid is not a good idea
    because the CPU will GPF for unknown ones.
    Use the correct version, which has existed on x86-64 for a longer time.
    This also adds a memory barrier to prevent the optimizer from
    reordering.

    Cc: tigran@veritas.com

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
    Whenever we see that a CPU is capable of C3 (during ACPI cstate init), we
    disable the local APIC timer and switch to using broadcasts from the
    external timer interrupt (IRQ 0). This is needed because Intel CPUs stop
    the local APIC timer in C3. This is currently only enabled for Intel CPUs.

    Patch below adds the code for i386 and also the ACPI hunk.
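    The decision being added can be modeled as a tiny selector: once a C-state
    as deep as C3 is available on a CPU whose local APIC timer stops in C3,
    timer duty moves to the IRQ 0 broadcast. This is a model only; the enum
    and function are illustrative, not kernel interfaces.

```c
#include <assert.h>

enum timer_source { TIMER_LAPIC, TIMER_IRQ0_BROADCAST };

/*
 * Pick the timer source for a CPU: the local APIC timer is fine unless
 * the CPU can enter C3 and its local APIC timer stops there.
 */
static enum timer_source pick_timer(int deepest_cstate, int lapic_stops_in_c3)
{
        if (deepest_cstate >= 3 && lapic_stops_in_c3)
                return TIMER_IRQ0_BROADCAST;
        return TIMER_LAPIC;
}
```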

    Signed-off-by: Venkatesh Pallipadi
    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Venkatesh Pallipadi
     
    Remove the finer control of the local APIC timer. We cannot provide
    sub-jiffy control like this when we use broadcasts from the external timer
    in place of the local APIC. Instead of removing this only on systems that
    may end up using broadcasts from the external timer (due to C3), I am going
    the "I'm feeling lucky" way and removing it fully. Basically, I am not sure
    about the usefulness of this code today; few other architectures seem to
    support it either.

    If you are using profiling with this fine-grained control and don't like it
    going away in the normal case, yell at me right now.

    Signed-off-by: Venkatesh Pallipadi
    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Venkatesh Pallipadi
     
  • And fix the test to include the size

    Noticed by Vivek Goyal

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
    Some people need it now on 64-bit, so reuse the i386 code for
    x86-64. This will also be useful for future bug workarounds.

    It is a bit simplified there because there is no need
    to do it very early on x86-64, which means it doesn't need
    early ioremap et al. We run it as a core initcall right now.

    I hope it's not needed for early setup.

    I added a general CONFIG_DMI symbol in case IA64 or someone
    else wants to reuse the code later too.

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
  • So why are we calling smp_send_stop from machine_halt?

    We don't.

    Looking more closely at the bug report the problem here
    is that halt -p is called which triggers not a halt but
    an attempt to power off.

    machine_power_off calls machine_shutdown which calls smp_send_stop.

    If pm_power_off is set we should never make it out of machine_power_off
    to the call of do_exit, so pm_power_off must not be set in this case.
    When pm_power_off is not set, we expect machine_power_off to devolve
    into machine_halt.

    So how do we fix this?

    Playing too much with smp_send_stop is dangerous because it
    must also be safe to be called from panic.

    It looks like the obviously correct fix is to only call
    machine_shutdown when pm_power_off is defined. Doing
    that will make Andi's assumption about not scheduling
    true and generally simplify what must be supported.

    This turns machine_power_off into a noop like machine_halt
    when pm_power_off is not defined.

    If the expected behavior is that sys_reboot(LINUX_REBOOT_CMD_POWER_OFF)
    becomes sys_reboot(LINUX_REBOOT_CMD_HALT) if pm_power_off is NULL
    this is not quite a comprehensive fix as we pass a different parameter
    to the reboot notifier and we set system_state to a different value
    before calling device_shutdown().

    Unfortunately any more comprehensive fix I can think of is not
    obviously correct. The core problem is that there is no
    architecture-independent way to detect whether machine_power_off will
    become a noop, without calling it.
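    The fix described above reduces to one guard in machine_power_off(): only
    shut the other CPUs down when there is actually a pm_power_off hook to
    run. A userspace model of that control flow (the stub bodies and the two
    flags are illustrative):

```c
#include <assert.h>
#include <stddef.h>

static int smp_stopped;                 /* did we call smp_send_stop()? */
static int powered_off;                 /* did the power-off hook run?  */

static void machine_shutdown(void) { smp_stopped = 1; }
static void do_power_off(void)     { powered_off = 1; }

/* NULL means the platform has no way to power off. */
static void (*pm_power_off)(void);

static void machine_power_off(void)
{
        if (pm_power_off) {             /* the fix: guard the shutdown */
                machine_shutdown();
                pm_power_off();
        }
        /* otherwise: a noop, like machine_halt */
}
```

    With pm_power_off NULL, nothing happens and the caller proceeds to do_exit
    exactly as machine_halt would; with it set, the other CPUs are stopped
    first and the hook runs.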

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Eric W. Biederman
     
  • Print bits for RDTSCP, SVM, CR8-LEGACY.

    Also now print power flags on i386 like x86-64 always did.
    This will add a new line to the i386 cpuinfo, but that shouldn't
    be an issue - we did that in the past too, and I haven't heard
    of any breakage.

    I shrunk some of the fields in the i386 cpuinfo_x86 to chars
    to make up for the new int "x86_power" field. Overall it's
    smaller than before.

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
  • They previously tried to figure this out on their own.

    Suggested by Venkatesh.

    Cc: venkatesh.pallipadi@intel.com
    Cc: davej@redhat.com
    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
  • Define it for i386 too.

    This is a synthetic flag that signifies that the CPU's TSC runs
    at a constant, P-state-invariant frequency.

    Fix up the logic on x86-64/i386 to set it on all known CPUs.
    Use the AMD defined bit to set it on future AMD CPUs.
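    A synthetic flag like this is a bit the kernel derives from several
    independent conditions rather than reading it from one CPUID leaf. A
    sketch of that derivation; the vendor/family condition used here is
    illustrative, not the actual list of CPUs with a constant TSC, and the
    bit position is arbitrary.

```c
#include <assert.h>

#define X86_FEATURE_CONSTANT_TSC (1u << 0)  /* synthetic: bit position arbitrary */

enum vendor { VENDOR_INTEL, VENDOR_AMD };

/*
 * Derive the synthetic constant-TSC flag: either a known CPU model, or
 * (for future AMD CPUs) the vendor-defined invariant-TSC CPUID bit.
 */
static unsigned int derive_flags(enum vendor v, int family, int amd_inv_tsc_bit)
{
        unsigned int flags = 0;

        if (v == VENDOR_INTEL && family >= 15)     /* illustrative condition */
                flags |= X86_FEATURE_CONSTANT_TSC;
        if (v == VENDOR_AMD && amd_inv_tsc_bit)    /* AMD-defined bit */
                flags |= X86_FEATURE_CONSTANT_TSC;
        return flags;
}
```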

    Cc: venkatesh.pallipadi@intel.com

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
    arch: Use <linux/capability.h> where capable() is used.

    Signed-off-by: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     
    There is a window where a probe gets removed right after the probe is hit
    on some different cpu. In this case the probe handlers can't find a
    matching probe instance for the break address, and we need to read the
    original instruction at the break address to see whether it is not a
    break/int3 instruction, and recover safely.

    The previous code had a bug where we were not checking for the above race
    in the case of reentrant probes; the patch below fixes this race.

    Tested on IA64, Powerpc, x86_64.
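    The recovery described above hinges on one byte compare: if the
    instruction at the trap address is no longer the breakpoint opcode, the
    probe was removed underneath us and the trap is stale. A model of that
    check; 0xCC is the real x86 int3 opcode, everything else here is
    illustrative.

```c
#include <assert.h>

#define BREAKPOINT_INSN 0xCC    /* x86 int3 */

enum trap_action { TRAP_HANDLE_PROBE, TRAP_STALE_RECOVER };

/*
 * Called when no matching probe instance is found for the trap address:
 * re-read the original instruction byte. If it is no longer int3, the
 * probe was removed by another CPU and we can recover by resuming.
 */
static enum trap_action classify_trap(unsigned char insn_at_addr)
{
        if (insn_at_addr != BREAKPOINT_INSN)
                return TRAP_STALE_RECOVER;
        return TRAP_HANDLE_PROBE;
}
```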

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Keshavamurthy Anil S
     
  • Let's switch mutex_debug_check_no_locks_freed() to take (addr, len) as
    arguments instead, since all its callers were just calculating the 'to'
    address for themselves anyway... (and sometimes doing so badly).
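    The interface change above, plus the range check the callers were getting
    wrong, can be sketched like this. The helper names mirror the description
    in the text but are illustrative; the bodies are a model, not the mutex
    debugging code itself.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Does [lock, lock+lock_len) overlap the freed range [addr, addr+len)? */
static int lock_in_freed_range(uintptr_t lock, size_t lock_len,
                               uintptr_t addr, size_t len)
{
        return lock < addr + len && addr < lock + lock_len;
}

/*
 * New-style interface: callers pass (addr, len) and the 'to' address is
 * computed in one place, instead of by every caller (sometimes badly).
 */
static int check_no_locks_freed(uintptr_t lock, size_t lock_len,
                                uintptr_t addr, size_t len)
{
        return !lock_in_freed_range(lock, lock_len, addr, len);
}
```

    Centralizing the end-of-range arithmetic is the whole point of the change:
    an off-by-one in one caller no longer silently weakens the check.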

    Signed-off-by: David Woodhouse
    Acked-by: Ingo Molnar
    Signed-off-by: Linus Torvalds

    David Woodhouse
     

11 Jan, 2006

13 commits

  • Removing the dependency on the boot image build was good, but it also
    meant that the $< expansion by make needed to be done explicitly.

    Noted by Stephen Hemminger.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • Fix up some trivial conflicts in {i386|ia64}/Makefile

    Linus Torvalds
     
  • From: Bugzilla Bug 5351

    "After resuming from S3 (suspended while in X), the LCD panel stays black.
    However, the laptop is up again, and I can SSH into it from another
    machine.

    I can get the panel working again, when I first direct video output to the
    CRT output of the laptop, and then back to LCD (done by repeatedly hitting
    the Fn+F5 buttons on the Toshiba, which direct output to either LCD, CRT or
    TV). None of this ever happened with older kernels."

    This bug is due to the recently added vesafb_blank() method in vesafb. It
    works with CRT displays, but has a high incidence of problems among laptop
    users. Since CRT users don't really get that much benefit from hardware
    blanking, drop support for this.

    Signed-off-by: Antonino Daplas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Antonino A. Daplas
     
    Currently arch_remove_kprobe() is only implemented/required for x86_64 and
    powerpc. All other architectures, like IA64, i386 and sparc64, implement a
    dummy function which is called from the arch-independent kprobes.c file.

    This patch removes the dummy functions and replaces them with
    #define arch_remove_kprobe(p, s) do { } while(0)

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anil S Keshavamurthy
     
    Since the kprobes runtime exception handlers are now lock-free (this code
    path now uses RCU to walk through the list), there is no need for
    register/unregister_kprobe to use spin_{lock/unlock}_irq{save/restore}.
    Serialization during registration/unregistration is now possible using
    just a mutex.

    In the above process, this patch also fixes a minor memory leak for x86_64 and
    powerpc.

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anil S Keshavamurthy
     
  • - introduce ktime_t: nanosecond-resolution time format.

    - eliminate the plain s64 scalar type, and always use the union.
    This simplifies the arithmetic. Idea from Roman Zippel.
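    The union described above can be sketched as follows. The field names
    follow the text's description, but the layout is a simplification: in the
    real definition the sec/nsec member order depends on endianness and the
    struct view only exists on 32-bit builds.

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000LL

/* Nanosecond-resolution time: one s64 scalar view, one sec/nsec view. */
typedef union {
        int64_t tv64;                   /* scalar nanoseconds: easy arithmetic */
        struct {
                int32_t nsec, sec;      /* order is endianness-dependent in reality */
        } tv;
} ktime;

static ktime ktime_set(int64_t secs, int64_t nsecs)
{
        ktime kt;

        kt.tv64 = secs * NSEC_PER_SEC + nsecs;
        return kt;
}

/* All arithmetic goes through the scalar view, as the commit describes. */
static ktime ktime_add_ns(ktime kt, int64_t nsecs)
{
        kt.tv64 += nsecs;
        return kt;
}
```

    Keeping arithmetic on the single s64 view is what "eliminate the plain s64
    scalar type, and always use the union" buys: one representation to add,
    subtract and compare, with the split view available only where needed.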

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Gleixner
     
    I have heard some complaints about people not finding the CONFIG_CRASH_DUMP
    option, and also some objections about its dependency on CONFIG_EMBEDDED.
    The following patch ends that dependency. I thought of hiding it under
    CONFIG_KEXEC, but CONFIG_PHYSICAL_START could also be used for reasons
    other than kexec/kdump, and hence I left it visible. I will also update the
    documentation accordingly.

    o The following patch removes the config dependency of CONFIG_PHYSICAL_START
    on CONFIG_EMBEDDED. The reason is that the CONFIG_CRASH_DUMP option for
    kdump needs CONFIG_PHYSICAL_START, which makes CONFIG_CRASH_DUMP depend
    on CONFIG_EMBEDDED. It is not always obvious for kdump users to choose
    CONFIG_EMBEDDED.

    o It also shifts the place where this option appears, to make it closer
    to the kexec and kdump options.

    Signed-off-by: Maneesh Soni
    Cc: "Eric W. Biederman"
    Cc: Haren Myneni
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Maneesh Soni
     
  • - Moving the crash_dump.c file to arch dependent part as kmap_atomic_pfn is
    specific to i386 and highmem may not exist in other archs.

    - Use ioremap for x86_64 to map the previous kernel memory.

    - In copy_oldmem_page(), we now directly copy to the user/kernel buffer and
    avoid the unnecessary copy to a kmalloc'd page.

    Signed-off-by: Rachita Kothiyal
    Signed-off-by: Vivek Goyal
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vivek Goyal
     
    - elfcorehdr= specifies the location of the ELF core header stored by the
    crashed kernel. This command line option will be passed by kexec-tools
    to the capture kernel.

    Changes in this version :

    - Added more comments in kernel-parameters.txt and in code.

    Signed-off-by: Murali M Chakravarthy
    Signed-off-by: Vivek Goyal
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vivek Goyal
     
    - If the system panics, cpu register states are captured through the
    function crash_get_current_regs(). This is not an inline function, hence a
    stack frame is pushed onto the stack and then the cpu register state is
    captured. Later this frame is popped and new frames are pushed
    (machine_kexec).

    - In theory this is not quite right, as we are capturing register states
    for a frame that is no longer valid. This seems to have created back trace
    problems for ppc64.

    - This patch fixes it up. The very first thing it does after entering
    crash_kexec() is to capture the register states. Anyway we don't want the
    back trace beyond crash_kexec(). crash_get_current_regs() has been made
    inline.

    - crash_setup_regs() is the top architecture-dependent function, which
    should be responsible for capturing the register states as well as doing
    some architecture-dependent tricks, for example fixing up ss and esp for
    i386. crash_setup_regs() has also been made inline to ensure no new call
    frame is pushed onto the stack.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vivek Goyal
     
  • - In case of system crash, current state of cpu registers is saved in memory
    in elf note format. So far memory for storing elf notes was being allocated
    statically for NR_CPUS.

    - This patch introduces dynamic allocation of memory for storing elf notes.
    It uses alloc_percpu() interface. This should lead to better memory usage.

    - Introduced based on Andi Kleen's and Eric W. Biederman's suggestions.

    - This patch also moves memory allocation for elf notes from the
    architecture-dependent portion to the architecture-independent portion.
    Now crash_notes is architecture independent. The whole idea is that the
    size of the memory to be allocated per cpu (MAX_NOTE_BYTES) can be
    architecture dependent, while the allocation of this memory is
    architecture independent.
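    The memory saving is simple arithmetic: static allocation reserves
    MAX_NOTE_BYTES for every CPU the kernel was built for (NR_CPUS), while a
    per-cpu allocation only backs the CPUs that are actually there. A model of
    the two footprints; the MAX_NOTE_BYTES value here is a placeholder, since
    the real size is architecture dependent.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_NOTE_BYTES 1024     /* placeholder; arch-dependent in reality */

/* Old scheme: one static buffer per configured CPU, used or not. */
static size_t static_footprint(int nr_cpus_configured)
{
        return (size_t)nr_cpus_configured * MAX_NOTE_BYTES;
}

/* New scheme: per-cpu allocation only backs CPUs actually present. */
static size_t percpu_footprint(int nr_cpus_present)
{
        return (size_t)nr_cpus_present * MAX_NOTE_BYTES;
}
```

    A 4-CPU box running a kernel built with NR_CPUS=128 is the case this
    change targets: the reserved note memory shrinks by a factor of 32.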

    Signed-off-by: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vivek Goyal
     
  • )

    From: Vivek Goyal

    This patch fixes a minor bug based on Andi Kleen's suggestion. asm's can't be
    broken in this particular case, hence merging them.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    akpm@osdl.org
     
  • Especially useful when users have booted with 'quiet'. In the regular 'oops'
    path, we set the console_loglevel before we start spewing debug info, but we
    can call the backtrace code from other places now too, such as the spinlock
    debugging code.

    Signed-off-by: Dave Jones
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Jones
     

10 Jan, 2006

7 commits