12 Oct, 2006

1 commit

  • Switch the memory policy of the kevent threads to MPOL_DEFAULT while
    leaving the kzalloc of the workqueue structure interleaved. This means
    that all code executed in the context of a kevent thread allocates
    node-locally.

    Signed-off-by: Christoph Lameter
    Cc: Christoph Lameter
    Cc: Alok Kataria
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

04 Oct, 2006

1 commit


15 Aug, 2006

1 commit

  • Use a private lock instead. It protects all per-cpu data structures in
    workqueue.c, including the workqueues list.

    Fix a bug in schedule_on_each_cpu(): it was forgetting to lock down the
    per-cpu resources.

    Unfixed long-standing bug: if someone unplugs the CPU identified by
    `singlethread_cpu' the kernel will get very sick.

    Cc: Dave Jones
    Signed-off-by: Andrew Morton
    Signed-off-by: Greg Kroah-Hartman

    Andrew Morton
     

01 Aug, 2006

1 commit

  • kernel/workqueue.c was omitted from kernel documentation generation. This
    adds a new section, "Workqueues and Kevents", and adds documentation for
    some of the functions.

    Some functions in this file already had DocBook-style comments, now they
    finally become visible.

    Signed-off-by: Rolf Eike Beer
    Cc: "Randy.Dunlap"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rolf Eike Beer
     

05 Jul, 2006

1 commit

  • * master.kernel.org:/pub/scm/linux/kernel/git/davej/cpufreq:
    Move workqueue exports to where the functions are defined.
    [CPUFREQ] Misc cleanups in ondemand.
    [CPUFREQ] Make ondemand sampling per CPU and remove the mutex usage in sampling path.
    [CPUFREQ] Add queue_delayed_work_on() interface for workqueues.
    [CPUFREQ] Remove slowdown from ondemand sampling path.

    Linus Torvalds
     

04 Jul, 2006

1 commit


30 Jun, 2006

2 commits


28 Jun, 2006

1 commit

  • In 2.6.17, there was a problem with cpu_notifiers and XFS. I provided a
    band-aid solution to solve that problem. In the process, I undid all the
    changes you both were making to ensure that these notifiers were available
    only at init time (unless CONFIG_HOTPLUG_CPU is defined).

    We deferred the real fix to 2.6.18. Here is a set of patches that fixes the
    XFS problem cleanly and makes the cpu notifiers available only at init time
    (unless CONFIG_HOTPLUG_CPU is defined).

    If CONFIG_HOTPLUG_CPU is defined then cpu notifiers are available at run
    time.

    This patch reverts the notifier_call changes made in 2.6.17.

    Signed-off-by: Chandra Seetharaman
    Cc: Ashok Raj
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chandra Seetharaman
     

26 Jun, 2006

2 commits

  • If a CPU hotplug callback fails on CPU_UP_PREPARE, all callbacks will be
    called with CPU_UP_CANCELED. A few of these callbacks assume that on
    CPU_UP_PREPARE a pointer to the task has been stored in a percpu array.
    This assumption does not hold if CPU_UP_PREPARE fails, and the subsequent
    calls to kthread_bind() in CPU_UP_CANCELED will cause an addressing
    exception because a NULL pointer is passed.

    Signed-off-by: Heiko Carstens
    Cc: Ashok Raj
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
     
  • schedule_on_each_cpu() presently does a large kmalloc: 96 kbytes on a
    1024-CPU 64-bit system.

    Rework it so that we do one 8192-byte allocation and then a pile of tiny
    ones, via alloc_percpu(). This has a much higher chance of success (100%
    in the current VM).

    This also has the effect of reducing the memory requirements from NR_CPUS*n to
    num_possible_cpus()*n.

    Cc: Christoph Lameter
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
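    The reworked allocation shape can be sketched in plain userspace C; the
    struct and function names below are illustrative stand-ins for the
    kernel's alloc_percpu() machinery, not the actual patch:

```c
#include <stdlib.h>

/* Userspace sketch of the alloc_percpu()-style rework: a small pointer
 * table plus one tiny zeroed allocation per possible CPU, instead of a
 * single num_cpus * sizeof(struct work) kmalloc. Names are illustrative. */
struct work { long pending; };

static struct work **alloc_works(int num_possible_cpus)
{
    struct work **tbl = calloc(num_possible_cpus, sizeof(*tbl));
    if (!tbl)
        return NULL;
    for (int cpu = 0; cpu < num_possible_cpus; cpu++) {
        tbl[cpu] = calloc(1, sizeof(struct work));
        if (!tbl[cpu]) {
            /* Unwind partial allocations on failure. */
            while (cpu-- > 0)
                free(tbl[cpu]);
            free(tbl);
            return NULL;
        }
    }
    return tbl;
}

static void free_works(struct work **tbl, int num_possible_cpus)
{
    for (int cpu = 0; cpu < num_possible_cpus; cpu++)
        free(tbl[cpu]);
    free(tbl);
}
```

    Each small request is far more likely to succeed than one ~96-kbyte
    contiguous one, and sizing by the number of possible CPUs rather than
    NR_CPUS shrinks the footprint on sparsely populated configurations.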
     

23 Jun, 2006

1 commit


26 Apr, 2006

1 commit


28 Feb, 2006

1 commit

  • We have several points in the SCSI stack (primarily for our device
    functions) where we need to guarantee process context, but (given the
    place where the last reference was released) we cannot guarantee this.

    This API gets around the issue by executing the function directly if
    the caller has process context, but scheduling a workqueue to execute
    in process context if the caller doesn't have it.

    Signed-off-by: James Bottomley

    James Bottomley
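    The pattern this API implements can be sketched as follows; the context
    flag and the deferral helper are hypothetical simplifications standing in
    for in_interrupt() and schedule_work():

```c
/* Sketch of the "execute in process context" pattern: call fn directly
 * when the caller already has process context, otherwise hand it to a
 * workqueue so it runs in process context later. The deferral below is
 * a stand-in for schedule_work(); all names are illustrative. */
typedef void (*ctx_fn)(void *data);

static int deferred_calls;           /* how many calls were handed off */

static void fake_schedule_work(ctx_fn fn, void *data)
{
    deferred_calls++;
    fn(data);                        /* keventd would run this later */
}

static void exec_in_process_ctx(ctx_fn fn, void *data, int have_process_ctx)
{
    if (have_process_ctx)
        fn(data);                    /* may sleep: direct call is safe */
    else
        fake_schedule_work(fn, data);
}

/* Small helper used to observe that fn ran exactly once per call. */
static void bump(void *data)
{
    (*(int *)data)++;
}
```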
     

15 Jan, 2006

1 commit


09 Jan, 2006

3 commits

  • Use first_cpu(cpu_possible_map) for the single-thread workqueue case. We
    used to hardcode 0, but that broke on systems where !cpu_possible(0) when
    workqueue_struct->cpu_workqueue_struct was changed from a static array to
    alloc_percpu.

    Commit id bce61dd49d6ba7799be2de17c772e4c701558f14 ("Fix hardcoded cpu=0 in
    workqueue for per_cpu_ptr() calls") fixed that for Ben's funky sparc64
    system, but it regressed my Power5. Offlining cpu 0 oopses upon the next
    call to queue_work for a single-thread workqueue, because now we try to
    manipulate per_cpu_ptr(wq->cpu_wq, 1), which is uninitialized.

    So we need to establish an unchanging "slot" for single-thread workqueues
    which will have a valid percpu allocation. Since alloc_percpu keys off of
    cpu_possible_map, which must not change after initialization, make this
    slot == first_cpu(cpu_possible_map).

    Signed-off-by: Nathan Lynch
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nathan Lynch
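    A minimal sketch of the selection logic, with a plain bitmask standing in
    for cpumask_t (names are illustrative):

```c
/* Sketch of the fix: key the single-thread workqueue's percpu "slot"
 * off the possible-CPU map, which never changes after boot, rather
 * than hardcoding CPU 0 or following the (mutable) online map. */
static int first_possible_cpu(unsigned long possible_map)
{
    for (unsigned cpu = 0; cpu < 8 * sizeof(possible_map); cpu++)
        if (possible_map & (1UL << cpu))
            return (int)cpu;
    return -1;                      /* empty map: no possible CPUs */
}
```

    On a machine whose populated CPU ids start at 6, this yields slot 6
    rather than the nonexistent CPU 0, and the slot stays valid even if
    that CPU is later taken offline, since the possible map never shrinks.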
     
  • __create_workqueue() not checking return of alloc_percpu()

    NULL dereference was possible.

    Signed-off-by: Ben Collins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ben Collins
     
  • swap migration's isolate_lru_page() currently uses an IPI to notify other
    processors that the lru caches need to be drained if the page cannot be
    found on the LRU. The IPI may interrupt a processor that is just
    processing lru requests and cause a race condition.

    This patch introduces a new function, schedule_on_each_cpu(), that uses
    keventd to run the LRU draining on each processor. Processors disable
    preemption when dealing with the LRU caches (these are per processor), so
    executing LRU draining from another process is safe.

    Thanks to Lee Schermerhorn for finding this race
    condition.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
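    The shape of the helper can be sketched in userspace C; direct calls
    stand in for the per-CPU keventd threads, and all names are illustrative:

```c
/* Userspace sketch of the schedule_on_each_cpu() idea: queue one work
 * item per CPU and have each run in that CPU's worker-thread (process)
 * context, instead of interrupting every CPU with an IPI. Direct calls
 * stand in for the keventd threads here. */
typedef void (*drain_fn)(int cpu, void *data);

static int run_on_cpus(drain_fn fn, void *data, int num_cpus)
{
    for (int cpu = 0; cpu < num_cpus; cpu++)
        fn(cpu, data);               /* keventd on `cpu` would run this */
    return 0;                        /* the real API can also fail */
}

/* Helper used to observe one invocation per CPU. */
static void count_cpu(int cpu, void *data)
{
    (void)cpu;
    (*(int *)data)++;
}
```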
     

29 Nov, 2005

1 commit

  • Tracked this down on an Ultra Enterprise 3000. It's a 6-way machine. The
    odd thing about this machine (and it's good for finding bugs like this)
    is that the CPU ids are not 0-based. For instance, on my machine the CPUs
    are 6/7/10/11/14/15.

    This caused a NULL pointer dereference in kernel/workqueue.c because, for
    single-threaded workqueues, it hardcoded the cpu to 0.

    I changed the 0's to any_online_cpu(cpu_online_map), which cpumask.h
    claims is "first cpu in mask", so this fits the same usage.

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ben Collins
     

07 Nov, 2005

1 commit

  • Replace smp_processor_id() with any_online_cpu(cpu_online_map) in order to
    avoid lots of "BUG: using smp_processor_id() in preemptible [00000001]
    code:..." messages in case taking a cpu online fails.

    All the traces start at the last notifier_call_chain(...) in kernel/cpu.c.
    Since we hold the cpu_control semaphore it shouldn't be any problem to access
    cpu_online_map.

    The reason why cpu_up failed is simply that the cpu that was supposed to be
    taken online wasn't even there. That is because on s390 we never know when a
    new cpu comes and therefore cpu_possible_map consists of only ones and doesn't
    reflect reality.

    Signed-off-by: Heiko Carstens
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
     

31 Oct, 2005

1 commit

  • This patch makes the workqueues use alloc_percpu instead of an array. The
    workqueues are placed on nodes local to each processor.

    The workqueue structure can grow to a significant size on a system with
    lots of processors if this patch is not applied. 64 bit architectures with
    all debugging features enabled and configured for 512 processors will not
    be able to boot without this patch.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

08 Sep, 2005

2 commits

  • This patch introduces a kzalloc wrapper and converts kernel/ to use it. It
    saves a little program text.

    Signed-off-by: Pekka Enberg
    Signed-off-by: Adrian Bunk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pekka J Enberg
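    A userspace analogue of the wrapper (illustrative, not the kernel's
    implementation):

```c
#include <stdlib.h>
#include <string.h>

/* Userspace analogue of the kzalloc() wrapper: allocate and zero in one
 * step, replacing open-coded kmalloc() + memset() pairs at call sites. */
static void *kzalloc_demo(size_t size)
{
    void *p = malloc(size);
    if (p)
        memset(p, 0, size);
    return p;
}
```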
     
  • With "-W -Wno-unused -Wno-sign-compare" I get the following compile warning:

    CC kernel/workqueue.o
    kernel/workqueue.c: In function `workqueue_cpu_callback':
    kernel/workqueue.c:504: warning: ordered comparison of pointer with integer zero

    On error create_workqueue_thread() returns NULL, not a negative pointer,
    so the following trivial patch suggests itself.

    Signed-off-by: Mika Kukkonen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mika Kukkonen
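    The failure-handling contract can be sketched in userspace C; the
    stand-in types and helpers below are illustrative:

```c
#include <stdlib.h>

/* create_workqueue_thread() reports failure with NULL, not a negative
 * error value, so the caller must compare against NULL; writing
 * "ptr < 0" is an ordered comparison of a pointer with integer zero,
 * exactly the warning the commit quotes. */
struct wq_thread { int cpu; };

static struct wq_thread *create_thread_demo(int cpu, int simulate_oom)
{
    if (simulate_oom)
        return NULL;                 /* failure path returns NULL */
    struct wq_thread *t = malloc(sizeof(*t));
    if (t)
        t->cpu = cpu;
    return t;
}

static int start_thread_demo(int cpu, int simulate_oom)
{
    struct wq_thread *t = create_thread_demo(cpu, simulate_oom);
    if (t == NULL)                   /* the correct check */
        return -1;
    free(t);
    return 0;
}
```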
     

11 Aug, 2005

1 commit

  • We have a check in there to make sure that the name won't overflow
    task_struct.comm[], but it's triggering for scsi with lots of HBAs, even
    though scsi uses single-threaded workqueues, which don't append the "/%d"
    anyway.

    All too hard. Just kill the BUG_ON.

    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton

    [ kthread_create() uses vsnprintf() and limits the thing, so no
    actual overflow can actually happen regardless ]

    Signed-off-by: Linus Torvalds

    James Bottomley
     

17 Apr, 2005

2 commits

  • This was unexported by Arjan because we have no current users.

    However, during a conversion from tasklets to workqueues of the parisc led
    functions, we ran across a case where this was needed. In particular, the
    open coded equivalent of cancel_rearming_delayed_workqueue was implemented
    incorrectly, which is, I think, all the evidence necessary that this is a
    useful API.

    Signed-off-by: James Bottomley
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    James Bottomley
     
  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds