09 Jan, 2012

1 commit

  • * 'pm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (76 commits)
    PM / Hibernate: Implement compat_ioctl for /dev/snapshot
    PM / Freezer: fix return value of freezable_schedule_timeout_killable()
    PM / shmobile: Allow the A4R domain to be turned off at run time
    PM / input / touchscreen: Make st1232 use device PM QoS constraints
    PM / QoS: Introduce dev_pm_qos_add_ancestor_request()
    PM / shmobile: Remove the stay_on flag from SH7372's PM domains
    PM / shmobile: Don't include SH7372's INTCS in syscore suspend/resume
    PM / shmobile: Add support for the sh7372 A4S power domain / sleep mode
    PM: Drop generic_subsys_pm_ops
    PM / Sleep: Remove forward-only callbacks from AMBA bus type
    PM / Sleep: Remove forward-only callbacks from platform bus type
    PM: Run the driver callback directly if the subsystem one is not there
    PM / Sleep: Make pm_op() and pm_noirq_op() return callback pointers
    PM/Devfreq: Add Exynos4-bus device DVFS driver for Exynos4210/4212/4412.
    PM / Sleep: Merge internal functions in generic_ops.c
    PM / Sleep: Simplify generic system suspend callbacks
    PM / Hibernate: Remove deprecated hibernation snapshot ioctls
    PM / Sleep: Fix freezer failures due to racy usermodehelper_is_disabled()
    ARM: S3C64XX: Implement basic power domain support
    PM / shmobile: Use common always on power domain governor
    ...

    Fix up trivial conflict in fs/xfs/xfs_buf.c due to removal of unused
    XBT_FORCE_SLEEP bit

    Linus Torvalds
     

07 Jan, 2012

1 commit

  • * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (40 commits)
    sched/tracing: Add a new tracepoint for sleeptime
    sched: Disable scheduler warnings during oopses
    sched: Fix cgroup movement of waking process
    sched: Fix cgroup movement of newly created process
    sched: Fix cgroup movement of forking process
    sched: Remove cfs bandwidth period check in tg_set_cfs_period()
    sched: Fix load-balance lock-breaking
    sched: Replace all_pinned with a generic flags field
    sched: Only queue remote wakeups when crossing cache boundaries
    sched: Add missing rcu_dereference() around ->real_parent usage
    [S390] fix cputime overflow in uptime_proc_show
    [S390] cputime: add sparse checking and cleanup
    sched: Mark parent and real_parent as __rcu
    sched, nohz: Fix missing RCU read lock
    sched, nohz: Set the NOHZ_BALANCE_KICK flag for idle load balancer
    sched, nohz: Fix the idle cpu check in nohz_idle_balance
    sched: Use jump_labels for sched_feat
    sched/accounting: Fix parameter passing in task_group_account_field
    sched/accounting: Fix user/system tick double accounting
    sched/accounting: Re-use scheduler statistics for the root cgroup
    ...

    Fix up conflicts in
    - arch/ia64/include/asm/cputime.h, include/asm-generic/cputime.h
    usecs_to_cputime64() vs the sparse cleanups
    - kernel/sched/fair.c, kernel/time/tick-sched.c
    scheduler changes in multiple branches

    Linus Torvalds
     

07 Nov, 2011

1 commit

  • * 'modsplit-Oct31_2011' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux: (230 commits)
    Revert "tracing: Include module.h in define_trace.h"
    irq: don't put module.h into irq.h for tracking irqgen modules.
    bluetooth: macroize two small inlines to avoid module.h
    ip_vs.h: fix implicit use of module_get/module_put from module.h
    nf_conntrack.h: fix up fallout from implicit moduleparam.h presence
    include: replace linux/module.h with "struct module" wherever possible
    include: convert various register fcns to macros to avoid include chaining
    crypto.h: remove unused crypto_tfm_alg_modname() inline
    uwb.h: fix implicit use of asm/page.h for PAGE_SIZE
    pm_runtime.h: explicitly requires notifier.h
    linux/dmaengine.h: fix implicit use of bitmap.h and asm/page.h
    miscdevice.h: fix up implicit use of lists and types
    stop_machine.h: fix implicit use of smp.h for smp_processor_id
    of: fix implicit use of errno.h in include/linux/of.h
    of_platform.h: delete needless include
    acpi: remove module.h include from platform/aclinux.h
    miscdevice.h: delete unnecessary inclusion of module.h
    device_cgroup.h: delete needless include
    net: sch_generic remove redundant use of
    net: inet_timewait_sock doesnt need
    ...

    Fix up trivial conflicts (other header files, and removal of the ab3550 mfd driver) in
    - drivers/media/dvb/frontends/dibx000_common.c
    - drivers/media/video/{mt9m111.c,ov6650.c}
    - drivers/mfd/ab3550-core.c
    - include/linux/dmaengine.h

    Linus Torvalds
     

05 Nov, 2011

1 commit

  • The CPU hotplug notifications sent out by the _cpu_up() and _cpu_down()
    functions depend on the value of the 'tasks_frozen' argument passed to them
    (which indicates whether tasks have been frozen or not).
    (Examples of such CPU hotplug notifications: CPU_ONLINE, CPU_ONLINE_FROZEN,
    CPU_DEAD, CPU_DEAD_FROZEN.)

    Thus, it is essential that while the callbacks for those notifications are
    running, the state of the system with respect to the tasks being frozen or
    not remains unchanged, *throughout that duration*. Hence there is a need for
    synchronizing the CPU hotplug code with the freezer subsystem.

    Since the freezer is involved only in the Suspend/Hibernate call paths, this
    patch hooks the CPU hotplug code to the suspend/hibernate notifiers
    PM_[SUSPEND|HIBERNATE]_PREPARE and PM_POST_[SUSPEND|HIBERNATE] to prevent
    the race between CPU hotplug and freezer, thus ensuring that CPU hotplug
    notifications will always be run with the state of the system really being
    what the notifications indicate, _throughout_ their execution time.

    Signed-off-by: Srivatsa S. Bhat
    Signed-off-by: Rafael J. Wysocki

    Srivatsa S. Bhat
     

31 Oct, 2011

1 commit

  • The changed files were only including linux/module.h for the
    EXPORT_SYMBOL infrastructure, and nothing else. Revector them
    onto the isolated export header for faster compile times.

    Nothing to see here but a whole lot of instances of:

    -#include <linux/module.h>
    +#include <linux/export.h>

    This commit is only changing the kernel dir; next targets
    will probably be mm, fs, the arch dirs, etc.

    Signed-off-by: Paul Gortmaker

    Paul Gortmaker
     

23 Mar, 2011

1 commit

  • Change the printk() calls to have the KERN_INFO/KERN_ERROR stuff, and
    fixes other coding style errors. Not _all_ of them are gone, though.

    [akpm@linux-foundation.org: revert the bits I disagree with]
    Signed-off-by: Michael Rodriguez
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Rodriguez
     

07 Jan, 2011

1 commit

  • …-linus', 'x86-paravirt-for-linus', 'core-locking-for-linus' and 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

    * 'x86-alternatives-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, suspend: Avoid unnecessary smp alternatives switch during suspend/resume

    * 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86-64, asm: Use fxsaveq/fxrestorq in more places

    * 'x86-hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, hwmon: Add core threshold notification to therm_throt.c

    * 'x86-paravirt-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, paravirt: Use native_halt on a halt, not native_safe_halt

    * 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    locking, lockdep: Convert sprintf_symbol to %pS

    * 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    irq: Better struct irqaction layout

    Linus Torvalds
     

14 Dec, 2010

1 commit

  • During suspend, we disable all the non-boot cpus, and during resume we bring
    them all back again. So there is no need to do alternatives_smp_switch() in between.

    On my core 2 based laptop, this speeds up the suspend path by 15msec and the
    resume path by 5 msec (suspend/resume speed up differences can be attributed
    to the different P-states that the cpu is in during suspend/resume).

    Signed-off-by: Suresh Siddha
    LKML-Reference:
    Cc: Rafael J. Wysocki
    Signed-off-by: H. Peter Anvin

    Suresh Siddha
     

23 Nov, 2010

2 commits

  • Oleg mentioned that there is no actual guarantee the dying cpu's
    migration thread is actually finished running when we get there, so
    replace the BUG_ON() with a spinloop waiting for it.

    Reported-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • GCC warns us about:

    kernel/cpu.c: In function ‘take_cpu_down’:
    kernel/cpu.c:200:15: warning: unused variable ‘cpu’

    This variable is unused since param->hcpu is directly
    used later on in cpu_notify.

    Signed-off-by: Dhaval Giani
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Dhaval Giani
     

18 Nov, 2010

1 commit

  • While discussing the need for sched_idle_next(), Oleg remarked that
    since try_to_wake_up() ensures sleeping tasks will end up running on a
    sane cpu, we can do away with migrate_live_tasks().

    If we then extend the existing hack of migrating current from
    CPU_DYING to migrating the full rq worth of tasks from CPU_DYING, the
    need for the sched_idle_next() abomination disappears as well, since
    idle will be the only possible thread left after the migration thread
    stops.

    This greatly simplifies the hot-unplug task migration path, as can be
    seen from the resulting code reduction (and about half the new lines
    are comments).

    Suggested-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

09 Jun, 2010

1 commit

  • Currently, when a cpu goes down, cpu_active is cleared before
    CPU_DOWN_PREPARE starts and cpuset configuration is updated from a
    default priority cpu notifier. When a cpu is coming up, it's set
    before CPU_ONLINE but cpuset configuration again is updated from the
    same cpu notifier.

    For cpu notifiers, this presents an inconsistent state. Threads which
    a CPU_DOWN_PREPARE notifier expects to be bound to the CPU can be
    migrated to other cpus because the cpu is no longer active.

    Fix it by updating cpu_active in the highest priority cpu notifier and
    cpuset configuration in the second highest when a cpu is coming up.
    Down path is updated similarly. This guarantees that all other cpu
    notifiers see consistent cpu_active and cpuset configuration.

    cpuset_track_online_cpus() notifier is converted to
    cpuset_update_active_cpus() which just updates the configuration and
    now called from cpuset_cpu_[in]active() notifiers registered from
    sched_init_smp(). If cpuset is disabled, cpuset_update_active_cpus()
    degenerates into partition_sched_domains() making separate notifier
    for !CONFIG_CPUSETS unnecessary.

    This problem is triggered by cmwq. During CPU_DOWN_PREPARE, hotplug
    callback creates a kthread and kthread_bind()s it to the target cpu,
    and the thread is expected to run on that cpu.

    * Ingo's test discovered __cpuinit/exit markups were incorrect.
    Fixed.

    Signed-off-by: Tejun Heo
    Acked-by: Peter Zijlstra
    Cc: Rusty Russell
    Cc: Ingo Molnar
    Cc: Paul Menage

    Tejun Heo
     

02 Jun, 2010

1 commit

  • In commit e9fb7631ebcd ("cpu-hotplug: introduce cpu_notify(),
    __cpu_notify(), cpu_notify_nofail()") the new helper functions access
    cpu_chain. As a result, cpu_chain shouldn't be marked __cpuinitdata
    (the problem shows up as a section mismatch warning).

    Alternatively, the helper functions should be forced inline, or marked
    __ref or __cpuinit. In the meantime, this patch silences the warning
    the trivial way.

    Signed-off-by: Daniel J Blueman
    Signed-off-by: Linus Torvalds

    Daniel J Blueman
     

28 May, 2010

4 commits

  • Commit e9fb7631ebcd ("cpu-hotplug: introduce cpu_notify(),
    __cpu_notify(), cpu_notify_nofail()") also introduced this annoying
    warning:

    kernel/cpu.c:157: warning: 'cpu_notify_nofail' defined but not used

    when CONFIG_HOTPLUG_CPU wasn't set.

    So move that helper inside the #ifdef CONFIG_HOTPLUG_CPU region, and
    simplify it while at it.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • When CONFIG_HOTPLUG_CPU=n, get_online_cpus() does nothing, so we don't
    need cpu_hotplug_begin() either.

    This patch moves cpu_hotplug_begin()/cpu_hotplug_done() into the code
    block of CONFIG_HOTPLUG_CPU=y.

    Signed-off-by: Lai Jiangshan
    Cc: Gautham R Shenoy
    Cc: Ingo Molnar

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lai Jiangshan
     
  • Currently, when onlining or offlining a CPU fails because one of the cpu
    notifiers returns an error, the caller always gets -EINVAL (i.e. writing 0
    or 1 to /sys/devices/system/cpu/cpuX/online gets EINVAL).

    To get better error reporting than a blanket -EINVAL, this changes
    cpu_notify() to return an -errno value via notifier_to_errno() and fixes
    up the callers, so that cpu notifiers can return an encapsulated errno
    value.

    Currently, all cpu hotplug notifiers return NOTIFY_OK, NOTIFY_BAD, or
    NOTIFY_DONE, so for now cpu_notify() can only return 0 or -EPERM with
    this change.

    (notifier_to_errno(NOTIFY_OK) == 0, notifier_to_errno(NOTIFY_DONE) == 0,
    notifier_to_errno(NOTIFY_BAD) == -EPERM)

    Forthcoming patches convert several cpu notifiers to return encapsulated
    errno values with notifier_from_errno().

    Signed-off-by: Akinobu Mita
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • No functional change. These are just wrappers of
    raw_cpu_notifier_call_chain.

    Signed-off-by: Akinobu Mita
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

25 May, 2010

3 commits

  • Add global mutex zonelists_mutex to fix the possible race:

    CPU0                                     CPU1                    CPU2
    (1) zone->present_pages += online_pages;
    (2)                                      build_all_zonelists();
    (3)                                                              alloc_page();
    (4)                                                              free_page();
    (5) build_all_zonelists();
    (6)   __build_all_zonelists();
    (7)     zone->pageset = alloc_percpu();

    In step (3,4), zone->pageset still points to boot_pageset, so bad
    things may happen if 2+ nodes are in this state. Even if only 1 node
    is accessing the boot_pageset, (3) may still consume too much memory
    to fail the memory allocations in step (7).

    Besides, making these operations atomic ensures that alloc_percpu() in
    step (7) will never fail, since a fresh new memory block was added in
    step (6).

    [haicheng.li@linux.intel.com: hold zonelists_mutex when build_all_zonelists]
    Signed-off-by: Haicheng Li
    Signed-off-by: Wu Fengguang
    Reviewed-by: Andi Kleen
    Cc: Christoph Lameter
    Cc: Mel Gorman
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Haicheng Li
     
  • For each newly populated zone of a hotadded node, we need to update its
    pagesets with dynamically allocated per_cpu_pageset structs for all
    possible CPUs:

    1) Detach zone->pageset from the shared boot_pageset
    at end of __build_all_zonelists().

    2) Use mutex to protect zone->pageset when it's still
    shared in onlined_pages()

    Otherwise, multiple zones of different nodes would share the same
    bootstrapping boot_pageset for the same CPU, which will finally cause the
    kernel panic below:

    ------------[ cut here ]------------
    kernel BUG at mm/page_alloc.c:1239!
    invalid opcode: 0000 [#1] SMP
    ...
    Call Trace:
    [] __alloc_pages_nodemask+0x131/0x7b0
    [] alloc_pages_current+0x87/0xd0
    [] __page_cache_alloc+0x67/0x70
    [] __do_page_cache_readahead+0x120/0x260
    [] ra_submit+0x21/0x30
    [] ondemand_readahead+0x166/0x2c0
    [] page_cache_async_readahead+0x80/0xa0
    [] generic_file_aio_read+0x364/0x670
    [] nfs_file_read+0xca/0x130
    [] do_sync_read+0xfa/0x140
    [] vfs_read+0xb5/0x1a0
    [] sys_read+0x51/0x80
    [] system_call_fastpath+0x16/0x1b
    RIP [] get_page_from_freelist+0x883/0x900
    RSP
    ---[ end trace 4bda28328b9990db ]

    [akpm@linux-foundation.org: merge fix]
    Signed-off-by: Haicheng Li
    Signed-off-by: Wu Fengguang
    Reviewed-by: Andi Kleen
    Reviewed-by: Christoph Lameter
    Cc: Mel Gorman
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Haicheng Li
     
  • Enable users to online CPUs even if the CPUs belong to a numa node which
    doesn't have onlined local memory.

    The zonelists (pg_data_t.node_zonelists[]) of a numa node are created
    either in the system boot/init period, or at the time of local memory
    online. For a numa node without onlined local memory, its zonelists are
    not initialized at present. As a result, any memory allocation operations
    executed by CPUs within this node will fail. In fact, an out-of-memory
    error is triggered when attempting to online CPUs before memory comes
    online.

    This patch tries to create zonelists for such numa nodes, so that memory
    allocations for this node can fall back to other nodes.

    [akpm@linux-foundation.org: remove unneeded export]
    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: minskey guo
    Cc: Minchan Kim
    Cc: Yasunori Goto
    Cc: Andi Kleen
    Cc: Christoph Lameter
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    minskey guo
     

07 May, 2010

1 commit

  • Reimplement stop_machine using cpu_stop. As cpu stoppers are
    guaranteed to be available for all online cpus,
    stop_machine_create/destroy() are no longer necessary and removed.

    With resource management and synchronization handled by cpu_stop, the
    new implementation is much simpler. Asking the cpu_stop to execute
    the stop_cpu() state machine on all online cpus with cpu hotplug
    disabled is enough.

    stop_machine itself doesn't need to manage any global resources
    anymore, so all per-instance information is rolled into struct
    stop_machine_data and the mutex and all static data variables are
    removed.

    The previous implementation created and destroyed RT workqueues as
    necessary which made stop_machine() calls highly expensive on very
    large machines. According to Dimitri Sivanich, preventing the dynamic
    creation/destruction makes booting faster more than twice on very
    large machines. cpu_stop resources are preallocated for all online
    cpus and should have the same effect.

    Signed-off-by: Tejun Heo
    Acked-by: Rusty Russell
    Acked-by: Peter Zijlstra
    Cc: Oleg Nesterov
    Cc: Dimitri Sivanich

    Tejun Heo
     

03 Apr, 2010

1 commit

  • _cpu_down() changes the current task's affinity and then recovers it at
    the end. The problems are well known: we can't restore old_allowed if it
    was bound to the now-dead cpu, and we can race with userspace, which can
    change cpu affinity during unplug.

    _cpu_down() should not play with current->cpus_allowed at all. Instead,
    take_cpu_down() can migrate the caller of _cpu_down() after __cpu_disable()
    removes the dying cpu from cpu_online_mask.

    Signed-off-by: Oleg Nesterov
    Acked-by: Rafael J. Wysocki
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Oleg Nesterov
     

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities include those
    headers directly instead of assuming availability. As this conversion
    needs to touch large number of source files, the following script is
    used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there. ie. if only gfp is used,
    gfp.h, if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition while adding it to implementation .h or
    embedding .c file was more appropriate for others. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    wildly available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build test were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from the tests after
    step 6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers, which should be easily discoverable on most builds of the
    specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

28 Jan, 2010

2 commits

  • Due to an incorrect line break the output currently contains tabs.
    Also remove trailing space.

    The actual output that logcheck sent me looked like this:
    Task events/1 (pid = 10) is on cpu 1^I^I^I^I(state = 1, flags = 84208040)

    After this patch it becomes:
    Task events/1 (pid = 10) is on cpu 1 (state = 1, flags = 84208040)

    Signed-off-by: Frans Pop
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Frans Pop
     
  • We moved to migrate on wakeup, which means that sleeping tasks could
    still be present on offline cpus. Amend the check to only test running
    tasks.

    Reported-by: Heiko Carstens
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

17 Dec, 2009

1 commit

  • Sachin found cpu hotplug test failures on powerpc, which made
    the kernel hang on his POWER box.

    The problem is that we fail to re-activate a cpu when a
    hot-unplug fails. Fix this by moving the de-activation into
    _cpu_down after doing the initial checks.

    Remove the synchronize_sched() calls and rely on those implied
    by rebuilding the sched domains using the new mask.

    Reported-by: Sachin Sant
    Signed-off-by: Xiaotian Feng
    Tested-by: Sachin Sant
    Signed-off-by: Peter Zijlstra
    Cc: Mike Galbraith
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Xiaotian Feng
     

13 Dec, 2009

1 commit

  • …l/git/tip/linux-2.6-tip

    * 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (21 commits)
    sched: Remove forced2_migrations stats
    sched: Fix memory leak in two error corner cases
    sched: Fix build warning in get_update_sysctl_factor()
    sched: Update normalized values on user updates via proc
    sched: Make tunable scaling style configurable
    sched: Fix missing sched tunable recalculation on cpu add/remove
    sched: Fix task priority bug
    sched: cgroup: Implement different treatment for idle shares
    sched: Remove unnecessary RCU exclusion
    sched: Discard some old bits
    sched: Clean up check_preempt_wakeup()
    sched: Move update_curr() in check_preempt_wakeup() to avoid redundant call
    sched: Sanitize fork() handling
    sched: Clean up ttwu() rq locking
    sched: Remove rq->clock coupling from set_task_cpu()
    sched: Consolidate select_task_rq() callers
    sched: Remove sysctl.sched_features
    sched: Protect sched_rr_get_param() access to task->sched_class
    sched: Protect task->cpus_allowed access in sched_getaffinity()
    sched: Fix balance vs hotplug race
    ...

    Fixed up conflicts in kernel/sysctl.c (due to sysctl cleanup)

    Linus Torvalds
     

07 Dec, 2009

1 commit

  • Since e761b77 ("cpu hotplug, sched: Introduce cpu_active_map and redo
    sched domain managment") we have cpu_active_mask, which is supposed to
    rule scheduler migration and load-balancing, except it never (fully) did.

    The particular problem being solved here is a crash in try_to_wake_up()
    where select_task_rq() ends up selecting an offline cpu because
    select_task_rq_fair() trusts the sched_domain tree to reflect the
    current state of affairs, similarly select_task_rq_rt() trusts the
    root_domain.

    However, the sched_domains are updated from CPU_DEAD, which is after the
    cpu is taken offline and after stop_machine is done. Therefore it can
    race perfectly well with code assuming the domains are right.

    Cure this by building the domains from cpu_active_mask on
    CPU_DOWN_PREPARE.

    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

26 Nov, 2009

1 commit

  • Limit the number of per cpu calibration messages by only
    printing out results for the first cpu to boot.

    Also, don't print "CPUx is down" as this is expected, and we
    don't need 4096 reminders... ;-)

    Signed-off-by: Mike Travis
    Cc: Heiko Carstens
    Cc: Roland Dreier
    Cc: Randy Dunlap
    Cc: Tejun Heo
    Cc: Andi Kleen
    Cc: Greg Kroah-Hartman
    Cc: Yinghai Lu
    Cc: David Rientjes
    Cc: Steven Rostedt
    Cc: Rusty Russell
    Cc: Hidetoshi Seto
    Cc: Jack Steiner
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Mike Travis
     

16 Sep, 2009

1 commit

  • * 'x86-pat-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86, pat: Fix cacheflush address in change_page_attr_set_clr()
    mm: remove !NUMA condition from PAGEFLAGS_EXTENDED condition set
    x86: Fix earlyprintk=dbgp for machines without NX
    x86, pat: Sanity check remap_pfn_range for RAM region
    x86, pat: Lookup the protection from memtype list on vm_insert_pfn()
    x86, pat: Add lookup_memtype to get the current memtype of a paddr
    x86, pat: Use page flags to track memtypes of RAM pages
    x86, pat: Generalize the use of page flag PG_uncached
    x86, pat: Add rbtree to do quick lookup in memtype tracking
    x86, pat: Add PAT reserve free to io_mapping* APIs
    x86, pat: New i/f for driver to request memtype for IO regions
    x86, pat: ioremap to follow same PAT restrictions as other PAT users
    x86, pat: Keep identity maps consistent with mmaps even when pat_disabled
    x86, mtrr: make mtrr_aps_delayed_init static bool
    x86, pat/mtrr: Rendezvous all the cpus for MTRR/PAT init
    generic-ipi: Allow cpus not yet online to call smp_call_function with irqs disabled
    x86: Fix an incorrect argument of reserve_bootmem()
    x86: Fix system crash when loading with "reservetop" parameter

    Linus Torvalds
     

22 Aug, 2009

1 commit

  • SDM Vol 3a section titled "MTRR considerations in MP systems" specifies
    the need for synchronizing the logical cpu's while initializing/updating
    MTRR.

    Currently the Linux kernel does this synchronization of all cpus only
    when a single MTRR register is programmed/updated. During an AP online
    (during boot/cpu-online/resume), where we initialize all the MTRR/PAT
    registers, we don't follow this synchronization algorithm.

    This can lead to scenarios where during a dynamic cpu online, that logical cpu
    is initializing MTRR/PAT with cache disabled (cr0.cd=1) etc while other logical
    HT sibling continue to run (also with cache disabled because of cr0.cd=1
    on its sibling).

    Starting from Westmere, VMX transitions with cr0.cd=1 don't work properly
    (because of some VMX performance optimizations) and the above scenario
    (with one logical cpu doing VMX activity and another logical cpu coming online)
    can result in system crash.

    Fix the MTRR initialization by doing a rendezvous of all the cpus. During
    boot and resume, we delay the MTRR/PAT init for APs till all the logical
    cpus come online; the rendezvous process at the end of AP bringup will
    then initialize the MTRR/PAT for all APs.

    For dynamic single cpu online, we synchronize all the logical cpus and
    do the MTRR/PAT init on the AP that is coming online.

    Signed-off-by: Suresh Siddha
    Signed-off-by: H. Peter Anvin

    Suresh Siddha
     

22 Jul, 2009

1 commit

  • Support for graceful handling of sleep states (S3/S4/S5) after an Intel(R) TXT launch.

    Without this patch, attempting to place the system in one of the ACPI sleep
    states (S3/S4/S5) will cause the TXT hardware to treat this as an attack and
    will cause a system reset, with memory locked. Not only may the subsequent
    memory scrub take some time, but the platform will be unable to enter the
    requested power state.

    This patch calls back into tboot so that it may properly and securely
    clean up system state and clear the secrets-in-memory flag, after which
    it will place the system into the requested sleep state using ACPI
    information passed by the kernel.

    arch/x86/kernel/smpboot.c | 2 ++
    drivers/acpi/acpica/hwsleep.c | 3 +++
    kernel/cpu.c | 7 ++++++-
    3 files changed, 11 insertions(+), 1 deletion(-)

    Signed-off-by: Joseph Cihula
    Signed-off-by: Shane Wang
    Signed-off-by: H. Peter Anvin

    Joseph Cihula