11 May, 2008

2 commits

  • The generic semaphore rewrite had a huge performance regression on AIM7
    (and potentially other BKL-heavy benchmarks) because the generic
    semaphores had been rewritten to be simple to understand and fair. The
    latter, in particular, turns a semaphore-based BKL implementation into a
    mess of scheduling.

    The attempt to fix the performance regression failed miserably (see the
    previous commit 00b41ec2611dc98f87f30753ee00a53db648d662 'Revert
    "semaphore: fix"'), and so for now the simple and sane approach is to
    instead just go back to the old spinlock-based BKL implementation that
    never had any issues like this.

    This patch also has the advantage of being reported to fix the
    regression completely according to Yanmin Zhang, unlike the semaphore
    hack, which still left a couple of percentage points of regression.

    As a spinlock, the BKL obviously has the potential to be a latency
    issue, but it's not really any different from any other spinlock in that
    respect. We do want to get rid of the BKL asap, but that has been the
    plan for several years.

    These days, the biggest users are in the tty layer (open/release in
    particular) and Alan holds out some hope:

    "tty release is probably a few months away from getting cured - I'm
    afraid it will almost certainly be the very last user of the BKL in
    tty to get fixed as it depends on everything else being sanely locked."

    so while we're not there yet, we do have a plan of action.

    Tested-by: Yanmin Zhang
    Cc: Ingo Molnar
    Cc: Andi Kleen
    Cc: Matthew Wilcox
    Cc: Alexander Viro
    Cc: Andrew Morton
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • This reverts commit bf726eab3711cf192405d21688a4b21e07b6188a, as it has
    been reported to cause a regression with processes stuck in __down(),
    apparently because of a missing wakeup.

    Quoth Sven Wegener:
    "I'm currently investigating a regression that has showed up with my
    last git pull yesterday. Bisecting the commits showed bf726e
    "semaphore: fix" to be the culprit, reverting it fixed the issue.

    Symptoms: During heavy filesystem usage (e.g. a kernel compile) I get
    several compiler processes in uninterruptible sleep, blocking all i/o
    on the filesystem. System is an Intel Core 2 Quad running a 64bit
    kernel and userspace. Filesystem is xfs on top of lvm. See below for
    the output of sysrq-w."

    See

    http://lkml.org/lkml/2008/5/10/45

    for full report.

    In the meantime, we can just fix the BKL performance regression by
    reverting back to the good old BKL spinlock implementation instead,
    since any sleeping lock will generally perform badly, especially if it
    tries to be fair.

    Reported-by: Sven Wegener
    Cc: Andrew Morton
    Cc: Ingo Molnar
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

09 May, 2008

5 commits

  • Linus found a logic bug: we ignore the version number in a module's
    vermagic string if we have CONFIG_MODVERSIONS set, but modversions
    also lets through a module with no __versions section for modprobe
    --force (with tainting, but still).

    We should only ignore the start of the vermagic string if the module
    actually *has* crcs to check, rather than (say) having an
    entertaining hissy fit and creating a config option to work around
    the buggy code.

    Signed-off-by: Rusty Russell
    Signed-off-by: Linus Torvalds

    Rusty Russell
     
  • We allow a missing __versions section, because modprobe --force strips
    it. It makes less sense to allow a section where there's no version
    for a specific symbol the module uses, so disallow that.

    Signed-off-by: Rusty Russell
    Signed-off-by: Linus Torvalds

    Rusty Russell
     
  • …linux-2.6-sched-fixes

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-fixes:
    sched: fix weight calculations
    semaphore: fix

    Linus Torvalds
     
  • * 'for-linus' of git://git.kernel.dk/linux-2.6-block:
    Revert "relay: fix splice problem"
    docbook: fix bio missing parameter
    block: use unitialized_var() in bio_alloc_bioset()
    block: avoid duplicate calls to get_part() in disk stat code
    cfq-iosched: make io priorities inherit CPU scheduling class as well as nice
    block: optimize generic_unplug_device()
    block: get rid of likely/unlikely predictions in merge logic
    vfs: splice remove_suid() cleanup
    cfq-iosched: fix RCU race in the cfq io_context destructor handling
    block: adjust tagging function queue bit locking
    block: sysfs store function needs to grab queue_lock and use queue_flag_*()

    Linus Torvalds
     
  • Due to a merge conflict, the sched_relax_domain_level control file was marked
    as being handled by cpuset_read/write_u64, but the code to handle it was
    actually in cpuset_common_file_read/write.

    Since the value being written/read is in fact a signed integer, it should be
    treated as such; this patch adds cpuset_read/write_s64 functions, and uses
    them to handle the sched_relax_domain_level file.

    With this patch, the sched_relax_domain_level can be read and written, and the
    correct contents seen/updated.

    Signed-off-by: Paul Menage
    Cc: Hidetoshi Seto
    Cc: Paul Jackson
    Cc: Ingo Molnar
    Reviewed-by: Li Zefan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Menage
     

08 May, 2008

3 commits

  • The conversion between virtual and real time is as follows:

    dvt = rw/w * dt        <=>        dt = w/rw * dvt

    Since we want the fair sleeper granularity to be in real time, we actually
    need to do:

    dvt = - rw/w * l

    This bug could be related to the regression reported by Yanmin Zhang:

    | Comparing with kernel 2.6.25, sysbench+mysql(oltp, readonly) has lots
    | of regressions with 2.6.26-rc1:
    |
    | 1) 8-core stoakley: 28%;
    | 2) 16-core tigerton: 20%;
    | 3) Itanium Montvale: 50%.

    Reported-by: "Zhang, Yanmin"
    Signed-off-by: Mike Galbraith
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Mike Galbraith
     
  • Yanmin Zhang reported:

    | Comparing with kernel 2.6.25, AIM7 (use tmpfs) has more than 40%
    | regression under 2.6.26-rc1 on my 8-core stoakley, 16-core tigerton,
    | and Itanium Montecito. Bisect located the patch below:
    |
    | 64ac24e738823161693bf791f87adc802cf529ff is first bad commit
    | commit 64ac24e738823161693bf791f87adc802cf529ff
    | Author: Matthew Wilcox
    | Date: Fri Mar 7 21:55:58 2008 -0500
    |
    | Generic semaphore implementation
    |
    | After I manually reverted the patch against 2.6.26-rc1 while fixing
    | lots of conflicts/errors, aim7 regression became less than 2%.

    i reproduced the AIM7 workload and can confirm Yanmin's findings that
    .26-rc1 regresses over .25 - by over 67% here.

    Looking at the workload i found and fixed what i believe to be the real
    bug causing the AIM7 regression: it was inefficient wakeup / scheduling
    / locking behavior of the new generic semaphore code, causing suboptimal
    performance.

    The problem comes from the following code. The new semaphore code does
    this on down():

    spin_lock_irqsave(&sem->lock, flags);
    if (likely(sem->count > 0))
            sem->count--;
    else
            __down(sem);
    spin_unlock_irqrestore(&sem->lock, flags);

    and this on up():

    spin_lock_irqsave(&sem->lock, flags);
    if (likely(list_empty(&sem->wait_list)))
            sem->count++;
    else
            __up(sem);
    spin_unlock_irqrestore(&sem->lock, flags);

    where __up() does:

    list_del(&waiter->list);
    waiter->up = 1;
    wake_up_process(waiter->task);

    and where __down() does this in essence:

    list_add_tail(&waiter.list, &sem->wait_list);
    waiter.task = task;
    waiter.up = 0;

    for (;;) {
            [...]
            spin_unlock_irq(&sem->lock);
            timeout = schedule_timeout(timeout);
            spin_lock_irq(&sem->lock);
            if (waiter.up)
                    return 0;
    }

    the fastpath looks good and obvious, but note the following property of
    the contended path: if there's a task on the ->wait_list, the up() of
    the current owner will "pass over" ownership to that waiting task, in a
    wake-one manner, via the waiter->up flag and by removing the waiter from
    the wait list.

    That is all and fine in principle, but as implemented in
    kernel/semaphore.c it also creates a nasty, hidden source of contention!

    The contention comes from the following property of the new semaphore
    code: the new owner owns the semaphore exclusively, even if it is not
    running yet.

    So if the old owner, even if just a few instructions later, does a
    down() [lock_kernel()] again, it will be blocked and will have to wait
    on the new owner to eventually be scheduled (possibly on another CPU)!
    Or if another task gets to lock_kernel() sooner than the "new owner"
    scheduled, it will be blocked unnecessarily and for a very long time
    when there are 2000 tasks running.

    I.e. the implementation of the new semaphores code does wake-one and
    lock ownership in a very restrictive way - it does not allow
    opportunistic re-locking of the lock at all and keeps the scheduler from
    picking task order intelligently.

    This kind of scheduling, with 2000 AIM7 processes running, creates awful
    cross-scheduling between those 2000 tasks, causes reduced parallelism, a
    throttled runqueue length and a lot of idle time. With increasing number
    of CPUs it causes an exponentially worse behavior in AIM7, as the chance
    for a newly woken new-owner task to actually run anytime soon is less
    and less likely.

    Note that it takes just a tiny bit of contention for the 'new-semaphore
    catastrophe' to happen: the wakeup latencies get added to whatever small
    contention there is, and quickly snowball out of control!

    I believe Yanmin's findings and numbers support this analysis too.

    The best fix for this problem is to use the same scheduling logic that
    the kernel/mutex.c code uses: keep the wake-one behavior (that is OK and
    wanted because we do not want to over-schedule), but also allow
    opportunistic locking of the lock even if a wakee is already "in
    flight".

    The patch below implements this new logic. With this patch applied the
    AIM7 regression is largely fixed on my quad testbox:

    # v2.6.25 vanilla:
    ..................
    Tasks Jobs/Min JTI Real CPU Jobs/sec/task
    2000 56096.4 91 207.5 789.7 0.4675
    2000 55894.4 94 208.2 792.7 0.4658

    # v2.6.26-rc1-166-gc0a1811 vanilla:
    ...................................
    Tasks Jobs/Min JTI Real CPU Jobs/sec/task
    2000 33230.6 83 350.3 784.5 0.2769
    2000 31778.1 86 366.3 783.6 0.2648

    # v2.6.26-rc1-166-gc0a1811 + semaphore-speedup:
    ...............................................
    Tasks Jobs/Min JTI Real CPU Jobs/sec/task
    2000 55707.1 92 209.0 795.6 0.4642
    2000 55704.4 96 209.0 796.0 0.4642

    i.e. a 67% speedup. We are now back to within 1% of the v2.6.25
    performance levels and have zero idle time during the test, as expected.

    Btw., interactivity also improved dramatically with the fix - for
    example console-switching became almost instantaneous during this
    workload (which after all is running 2000 tasks at once!), without the
    patch it was stuck for a minute at times.

    There's another nice side-effect of this speedup patch, the new generic
    semaphore code got even smaller:

    text data bss dec hex filename
    1241 0 0 1241 4d9 semaphore.o.before
    1207 0 0 1207 4b7 semaphore.o.after

    (because the waiter.up complication got removed.)

    Longer-term we should look into using the mutex code for the generic
    semaphore code as well - but it's not easy due to legacies and it's
    outside of the scope of v2.6.26 and outside the scope of this patch as
    well.

    Bisected-by: "Zhang, Yanmin"
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • This reverts commit c3270e577c18b3d0e984c3371493205a4807db9d.

    Jens Axboe
     

06 May, 2008

16 commits

  • this replaces the rq->clock stuff (and possibly cpu_clock()).

    - architectures that have an 'imperfect' hardware clock can set
    CONFIG_HAVE_UNSTABLE_SCHED_CLOCK

    - the 'jiffie' window might be superfluous when we update tick_gtod
    before the __update_sched_clock() call in sched_clock_tick()

    - cpu_clock() might be implemented as:

    sched_clock_cpu(smp_processor_id())

    if the accuracy proves good enough - how far can TSC drift in a
    single jiffie when considering the filtering and idle hooks?

    [ mingo@elte.hu: various fixes and cleanups ]

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • David Miller pointed out that nothing in cpu_clock() sets
    prev_cpu_time. This caused __sync_cpu_clock() to be called
    all the time - against the intention of this code.

    The result was that in practice we hit a global spinlock every
    time cpu_clock() is called, which, even though cpu_clock() is
    used for tracing and debugging, is suboptimal.

    While at it, also:

    - move the irq disabling to the outermost layer,
    this should make cpu_clock() warp-free when called with irqs
    enabled.

    - use long long instead of cycles_t - for platforms where cycles_t
    is 32-bit.

    Reported-by: David Miller
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • When I echoed 0 into the "cpu.shares" file, a Div0 error occurred.

    We found it is caused by the following call chain:

    sched_group_set_shares(tg, shares)
      set_se_shares(tg->se[i], shares/nr_cpu_ids)
        __set_se_shares(se, shares)
          div64_64((1ULL<<32), shares)
    Acked-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Miao Xie
     
  • Concurrent calls to detach_destroy_domains and arch_init_sched_domains
    were prevented by the old scheduler subsystem cpu hotplug mutex. When
    this got converted to get_online_cpus() the locking got broken.
    Unlike before now several processes can concurrently enter the critical
    sections that were protected by the old lock.

    So use the already present doms_cur_mutex to protect these sections again.

    Cc: Gautham R Shenoy
    Cc: Paul Jackson
    Signed-off-by: Heiko Carstens
    Signed-off-by: Ingo Molnar

    Heiko Carstens
     
  • Make time_sync_thresh tunable by architecture code.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • Revert debugging commit 7ba2e74ab5a0518bc953042952dd165724bc70c9.
    print_cfs_rq_tasks() can induce live-lock if a task is dequeued
    during list traversal.

    Signed-off-by: Mike Galbraith
    Signed-off-by: Ingo Molnar

    Mike Galbraith
     
  • http://bugzilla.kernel.org/show_bug.cgi?id=10545

    sched_stats.h says that __sched_info_switch is "called when prev !=
    next" in the comment. sched.c should therefore do that.

    Signed-off-by: Ingo Molnar

    David Simner
     
  • Gautham R Shenoy reported:

    > While running the usual CPU-Hotplug stress tests on linux-2.6.25,
    > I noticed the following in the console logs.
    >
    > This is a wee bit difficult to reproduce. In the past 10 runs I hit this
    > only once.
    >
    > ------------[ cut here ]------------
    >
    > WARNING: at kernel/sched.c:962 hrtick+0x2e/0x65()
    >
    > Just wondering if we are doing a good job at handling the cancellation
    > of any per-cpu scheduler timers during CPU-Hotplug.

    This looks like the timer is indeed not cancelled at all and simply
    migrates to another cpu. Fix it via a proper hotplug notifier
    mechanism.

    Reported-by: Gautham R Shenoy
    Signed-off-by: Peter Zijlstra
    Cc: stable@kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • We currently use an optimization to skip the overhead of wake-idle
    processing if more than one task is assigned to a run-queue. The
    assumption is that the system must already be load-balanced or we
    wouldn't be overloaded to begin with.

    The problem is that we are looking at rq->nr_running, which may include
    RT tasks in addition to CFS tasks. Since the presence of RT tasks
    really has no bearing on the balance status of CFS tasks, this throws
    the calculation off.

    This patch changes the logic to only consider the number of CFS tasks
    when making the decision to optimize the wake-idle.

    Signed-off-by: Gregory Haskins
    CC: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • Dmitry Adamushko pointed out a logic error in task_wake_up_rt() where
    the check will always evaluate to "true". You can find the thread here:

    http://lkml.org/lkml/2008/4/22/296

    In reality, we only want to try to push tasks away when a wake up request is
    not going to preempt the current task. So lets fix it.

    Note: We introduce test_tsk_need_resched() instead of open-coding the flag
    check so that the merge-conflict with -rt should help remind us that we
    may need to support NEEDS_RESCHED_DELAYED in the future, too.

    Signed-off-by: Gregory Haskins
    CC: Dmitry Adamushko
    CC: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • Noticed by sparse:
    kernel/sched.c:760:20: warning: symbol 'sched_feat_names' was not declared. Should it be static?
    kernel/sched.c:767:5: warning: symbol 'sched_feat_open' was not declared. Should it be static?
    kernel/sched_fair.c:845:3: warning: returning void-valued expression
    kernel/sched.c:4386:3: warning: returning void-valued expression

    Signed-off-by: Harvey Harrison
    Signed-off-by: Ingo Molnar

    Harvey Harrison
     
  • Cc: Venkatesh Pallipadi
    Cc: "Justin Mattock"
    Signed-off-by: Andrew Morton
    Signed-off-by: Ingo Molnar

    Andrew Morton
     
  • The C files are included directly in sched.c, so they are
    effectively static.

    Signed-off-by: Harvey Harrison
    Acked-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Harvey Harrison
     
  • Joel noticed that the !lw->inv_weight condition isn't unlikely anymore, so
    remove the unlikely annotation. Also, remove the two div64_u64() inv_weight
    calculations, which makes them rely on the calc_delta_mine() path as well.

    Signed-off-by: Peter Zijlstra
    CC: Joel Schopp
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Normalized sleeper uses calc_delta*() which requires that the rq load is
    already updated, so move account_entity_enqueue() before place_entity().

    Tested-by: Frans Pop
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/linux-2.6-kgdb:
    kgdb: kconfig fix xconfig/menuconfig element
    kgdb: fix signedness mixmatches, add statics, add declaration to header
    kgdb: 1000 loops for the single step test in kgdbts
    kgdb: trivial sparse fixes in kgdb test-suite
    kgdb: minor documentation fixes

    Linus Torvalds
     

05 May, 2008

3 commits

  • Since FUTEX_FD was scheduled for removal in June 2007, let's remove it.

    Google Code search found no users for it, and NGPT was abandoned in 2003
    according to IBM. futex.h is left untouched to make sure the id does
    not get reassigned. Since queue_me() has no users left, it is commented
    out to avoid a warning; I didn't remove it completely since it is part
    of the internal api (matching unqueue_me()).

    Signed-off-by: Eric Sesterhenn
    Signed-off-by: Rusty Russell (removed rest)
    Acked-by: Thomas Gleixner
    Signed-off-by: Linus Torvalds

    Eric Sesterhenn
     
  • Noticed by sparse:
    arch/x86/kernel/kgdb.c:556:15: warning: symbol 'kgdb_arch_pc' was not declared. Should it be static?
    kernel/kgdb.c:149:8: warning: symbol 'kgdb_do_roundup' was not declared. Should it be static?
    kernel/kgdb.c:193:22: warning: symbol 'kgdb_arch_pc' was not declared. Should it be static?
    kernel/kgdb.c:712:5: warning: symbol 'remove_all_break' was not declared. Should it be static?

    Related to kgdb_hex2long:
    arch/x86/kernel/kgdb.c:371:28: warning: incorrect type in argument 2 (different signedness)
    arch/x86/kernel/kgdb.c:371:28: expected long *long_val
    arch/x86/kernel/kgdb.c:371:28: got unsigned long *
    kernel/kgdb.c:469:27: warning: incorrect type in argument 2 (different signedness)
    kernel/kgdb.c:469:27: expected long *long_val
    kernel/kgdb.c:469:27: got unsigned long *
    kernel/kgdb.c:470:27: warning: incorrect type in argument 2 (different signedness)
    kernel/kgdb.c:470:27: expected long *long_val
    kernel/kgdb.c:470:27: got unsigned long *
    kernel/kgdb.c:894:27: warning: incorrect type in argument 2 (different signedness)
    kernel/kgdb.c:894:27: expected long *long_val
    kernel/kgdb.c:894:27: got unsigned long *
    kernel/kgdb.c:895:27: warning: incorrect type in argument 2 (different signedness)
    kernel/kgdb.c:895:27: expected long *long_val
    kernel/kgdb.c:895:27: got unsigned long *
    kernel/kgdb.c:1127:28: warning: incorrect type in argument 2 (different signedness)
    kernel/kgdb.c:1127:28: expected long *long_val
    kernel/kgdb.c:1127:28: got unsigned long *
    kernel/kgdb.c:1132:25: warning: incorrect type in argument 2 (different signedness)
    kernel/kgdb.c:1132:25: expected long *long_val
    kernel/kgdb.c:1132:25: got unsigned long *

    Signed-off-by: Harvey Harrison
    Signed-off-by: Jason Wessel

    Harvey Harrison
     
  • The kernel module loader used to be much too happy to allow loading of
    modules for the wrong kernel version by default. For example, if you
    had MODVERSIONS enabled, but tried to load a module with no version
    info, it would happily load it and taint the kernel - whether it was
    likely to actually work or not!

    Generally, such forced module loading should be considered a really
    really bad idea, so make it conditional on a new config option
    (MODULE_FORCE_LOAD), and make it default to off.

    If somebody really wants to force module loads, that's their problem,
    but we should not encourage it. Especially as it happened to me by
    mistake (ie regular unversioned Fedora modules getting loaded) causing
    lots of strange behavior.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

04 May, 2008

4 commits


03 May, 2008

2 commits

  • Force constants in kernel/timeconst.h (except shift counts) to be 64 bits,
    using U64_C() constructor macros, and eliminate constants that cannot
    be represented at all in 64 bits. This avoids warnings with some gcc
    versions.

    Drop generating 64-bit constants, since we have no real hope of
    getting a full set (operation on 64-bit values requires a 128-bit
    intermediate result, which gcc only supports on 64-bit platforms, and
    only with libgcc support on some.) Note that the use of these
    constants does not depend on if we are on a 32- or 64-bit architecture.

    This resolves Bugzilla 10153.

    Signed-off-by: H. Peter Anvin

    H. Peter Anvin
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6:
    [PATCH] fix sysctl_nr_open bugs
    [PATCH] sanitize anon_inode_getfd()
    [PATCH] split linux/file.h
    [PATCH] make osf_select() use core_sys_select()
    [PATCH] remove horrors with irix tty ioctls handling
    [PATCH] fix file and descriptor handling in perfmon

    Linus Torvalds
     

02 May, 2008

3 commits

  • Uwe Kleine-Koenig has some strange hardware where one of the shared
    interrupts can be asserted during boot before the appropriate driver
    loads. Requesting the shared irq line from another driver results in a
    spurious interrupt storm which finally disables the interrupt line.

    I have seen similar behaviour on resume before (the hardware does not
    work anymore so I can not verify).

    Change the spurious disable logic to increment the disable depth and
    mark the interrupt with an extra flag which allows us to reenable the
    interrupt when a new driver arrives which requests the same irq
    line. In the worst case this will disable the irq again via the
    spurious trap, but there is a decent chance that the new driver is the
    one which can handle the already asserted interrupt and makes the box
    usable again.

    Eric Biederman said further: This case also happens on a regular basis
    in kdump kernels where we deliberately don't shutdown the hardware
    before starting the new kernel. This patch should reduce the need for
    using irqpoll in that situation by a small amount.

    Signed-off-by: Thomas Gleixner
    Tested-and-Acked-by: Uwe Kleine-König

    Thomas Gleixner
     
  • With s390 the last arch switched to the generic sys_ptrace yesterday, so
    we can now kill the ifdef around it and force every new port to use it
    instead of introducing new weirdo versions.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Linus Torvalds

    Christoph Hellwig
     
  • Initial splitoff of the low-level stuff; taken to fdtable.h

    Signed-off-by: Al Viro

    Al Viro
     

01 May, 2008

2 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus:
    module: add MODULE_STATE_GOING notifier call
    module: Enhance verify_export_symbols
    module: set unused_gpl_crcs instead of overwriting unused_crcs
    module: neaten __find_symbol, rename to find_symbol
    module: reduce module image and resident size
    module: make module_sect_attrs private to kernel/module.c

    Linus Torvalds
     
  • timer_stats_timer_set_start_info() is invoked twice; additionally, the
    invocation of this function can be moved to where it is only called
    when a delay is really required.

    Signed-off-by: Andrew Liu
    Cc: Pavel Machek
    Cc: Ingo Molnar
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Liu