26 Jan, 2008

31 commits

  • The current wake-up code path tries to determine if it can optimize the
    wake-up to "this_cpu" by computing load calculations. The problem is that
    these calculations are only relevant to SCHED_OTHER tasks where load is king.
    For RT tasks, priority is king. So the load calculation is completely wasted
    bandwidth.

    Therefore, we create a new sched_class interface to help with
    pre-wakeup routing decisions, and move the load calculation into the
    CFS class where it belongs.
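
    As a rough illustration of the idea, the new interface could look
    something like the sketch below (hook and helper names here are
    illustrative, not necessarily the ones used in the patch):

        /* Per-class pre-wakeup routing hook: the core wake-up path asks
         * the task's class where the task should run, so only CFS pays
         * for the load calculations. */
        struct sched_class {
                /* ... existing methods ... */
                int (*select_task_rq)(struct task_struct *p, int sync);
        };

        static int select_task_rq_fair(struct task_struct *p, int sync)
        {
                /* CFS: load is king; keep the this_cpu optimization */
                return wake_affine_cpu(p, sync);        /* illustrative */
        }

        static int select_task_rq_rt(struct task_struct *p, int sync)
        {
                /* RT: priority is king; skip the load math entirely */
                return task_cpu(p);
        }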

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • "this_rq" is normally used to denote the RQ on the current cpu
    (i.e. "cpu_rq(this_cpu)"). So clean up the usage of this_rq to be
    more consistent with the rest of the code.

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • Some RT tasks (particularly kthreads) are bound to one specific CPU.
    It is fairly common for two or more bound tasks to get queued up at the
    same time. Consider, for instance, softirq_timer and softirq_sched. A
    timer goes off in an ISR which schedules softirq_timer to run at RT50.
    Then the timer handler determines that it's time to smp-rebalance the
    system so it schedules softirq_sched to run. So we are in a situation
    where we have two RT50 tasks queued, and the system will go into
    rt-overload condition to request other CPUs for help.

    This causes two problems in the current code:

    1) If a high-priority bound task and a low-priority unbound task queue
    up behind the running task, we will fail to ever relocate the unbound
    task, because we terminate the search at the first unmovable task.

    2) We waste precious cycles in the fast path trying to pull
    overloaded tasks over. It is therefore optimal to avoid the
    overhead altogether if we can cheaply detect the condition before
    overload even occurs.

    This patch tries to achieve this optimization by utilizing the Hamming
    weight of the task->cpus_allowed mask. A weight of 1 indicates that
    the task cannot be migrated. We then utilize this information to
    skip non-migratable tasks and to eliminate unnecessary rebalance attempts.

    We introduce a per-rq variable to count the number of migratable tasks
    that are currently running. We only go into overload if we have more
    than one RT task, AND at least one of them is migratable.

    In addition, we introduce a per-task variable to cache the cpus_allowed
    weight, since the Hamming-weight calculation is relatively expensive.
    We only update the cached value when the mask is updated, which should be
    relatively infrequent, especially compared to scheduling frequency
    in the fast path.
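
    A minimal sketch of the two pieces described above (field and helper
    names are illustrative):

        /* Cache the Hamming weight whenever the affinity mask changes,
         * so the fast path never has to recount mask bits. */
        static void set_cpus_allowed_cached(struct task_struct *p,
                                            cpumask_t new_mask)
        {
                p->cpus_allowed = new_mask;
                p->nr_cpus_allowed = cpus_weight(new_mask);
        }

        /* Overload needs more than one RT task queued AND at least one
         * of them migratable (cached weight > 1). */
        static inline int rt_overloaded(struct rq *rq)
        {
                return rq->rt.rt_nr_running > 1 &&
                       rq->rt.rt_nr_migratory > 0;
        }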

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
    Since we now take an active approach to load balancing, we don't need to
    balance RT tasks via the normal task balancer. In fact, this code was
    found to pull RT tasks away from CPUs to which the active movement had
    just pushed them, resulting in large latencies.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
    This patch adds pushing of overloaded RT tasks from a runqueue that is
    having tasks (most likely RT tasks) added to it.

    TODO: We don't cover the case of waking new RT tasks (yet).

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch adds the algorithm to pull tasks from RT overloaded runqueues.

    When a pull is initiated, all overloaded runqueues are examined for
    an RT task that is higher in priority than the highest-priority task
    queued on the target runqueue. If such a task is found, it is pulled
    to the target runqueue.
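
    A rough sketch of that decision (locking and the actual task movement
    are omitted; helper and variable names are illustrative):

        static int pull_rt_task(struct rq *this_rq)
        {
                int cpu, pulled = 0;

                for_each_cpu_mask(cpu, rt_overload_mask) {
                        struct rq *src_rq = cpu_rq(cpu);
                        struct task_struct *p;

                        if (src_rq == this_rq)
                                continue;
                        p = pick_next_highest_rt(src_rq);
                        /* lower number == higher RT priority */
                        if (p && p->prio < this_rq->rt.highest_prio) {
                                migrate_task_to(p, this_rq);
                                pulled = 1;
                        }
                }
                return pulled;
        }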

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
    This patch adds an RT overload accounting system. When a runqueue has
    more than one RT task queued, it is marked as overloaded; that is, it
    becomes a candidate to have RT tasks pulled from it.
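
    In sketch form (the global mask name is illustrative):

        /* Mark a runqueue as a pull candidate once it holds more than
         * one queued RT task, and unmark it when it drops back down. */
        static inline void update_rt_overload(struct rq *rq)
        {
                if (rq->rt.rt_nr_running > 1)
                        cpu_set(rq->cpu, rt_overload_mask);
                else
                        cpu_clear(rq->cpu, rt_overload_mask);
        }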

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch adds an algorithm to push extra RT tasks off a run queue to
    other CPU runqueues.

    When more than one RT task is added to a run queue, this algorithm takes
    an assertive approach to push the RT tasks that are not running onto other
    run queues that have lower priority. It works as follows: the
    highest-priority RT task that is not running is picked, and we examine the
    runqueues of the CPUs in that task's affinity mask. We find the runqueue
    with the lowest priority among those CPUs, and if its priority is lower
    than the picked task's, we push the task onto that CPU's runqueue.

    We continue pushing RT tasks off the current runqueue until we don't push
    any more. The algorithm stops when the next-highest RT task can't preempt
    any other process on another CPU.
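
    A condensed sketch of that loop (helper names are illustrative and
    locking is omitted):

        static void push_rt_tasks(struct rq *rq)
        {
                struct task_struct *p;
                struct rq *lowest_rq;

                for (;;) {
                        /* highest-priority queued-but-not-running RT task */
                        p = pick_next_highest_rt(rq);
                        if (!p)
                                break;
                        /* lowest-priority runqueue within p->cpus_allowed */
                        lowest_rq = find_lowest_rq(p);
                        /* lower number == higher RT priority */
                        if (!lowest_rq || lowest_rq->rt.highest_prio <= p->prio)
                                break;  /* nothing left we can preempt */
                        migrate_task_to(p, lowest_rq);
                }
        }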

    TODO: The algorithm may stop while there are still RT tasks that could be
    migrated. Specifically, if the highest non-running RT task's CPU affinity
    is restricted to CPUs that are running higher-priority tasks, there may
    be a lower-priority task queued whose affinity includes a CPU that is
    running a still lower-priority task it could be migrated to. This
    patch set does not address that case.

    Note: checkpatch reveals two over 80 character instances. I'm not sure
    that breaking them up will help visually, so I left them as is.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch adds accounting to each runqueue to keep track of the
    highest prio task queued on the run queue. We only care about
    RT tasks, so if the run queue does not contain any active RT tasks
    its priority will be considered MAX_RT_PRIO.
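
    In sketch form (simplified; rescanning for the new highest priority
    on dequeue is omitted):

        /* Track the highest-priority queued RT task per runqueue.
         * Lower number == higher RT priority; with no RT tasks queued
         * the value rests at MAX_RT_PRIO. */
        static inline void inc_rt_prio(struct rq *rq, int prio)
        {
                if (prio < rq->rt.highest_prio)
                        rq->rt.highest_prio = prio;
        }

        static inline void dec_rt_prio(struct rq *rq)
        {
                if (!rq->rt.rt_nr_running)
                        rq->rt.highest_prio = MAX_RT_PRIO;
        }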

    This information will be used for later patches.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch adds accounting to keep track of the number of RT tasks running
    on a runqueue. This information will be used in later patches.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • this patch extends the soft-lockup detector to automatically
    detect hung TASK_UNINTERRUPTIBLE tasks. Such hung tasks are
    printed the following way:

    ------------------>
    INFO: task prctl:3042 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
    prctl D fd5e3793 0 3042 2997
    f6050f38 00000046 00000001 fd5e3793 00000009 c06d8264 c06dae80 00000286
    f6050f40 f6050f00 f7d34d90 f7d34fc8 c1e1be80 00000001 f6050000 00000000
    f7e92d00 00000286 f6050f18 c0489d1a f6050f40 00006605 00000000 c0133a5b
    Call Trace:
    [] schedule_timeout+0x6d/0x8b
    [] schedule_timeout_uninterruptible+0x15/0x17
    [] msleep+0x10/0x16
    [] sys_prctl+0x30/0x1e2
    [] sysenter_past_esp+0x5f/0xa5
    =======================
    2 locks held by prctl/3042:
    #0: (&sb->s_type->i_mutex_key#5){--..}, at: [] do_fsync+0x38/0x7a
    #1: (jbd_handle){--..}, at: [] journal_start+0xc7/0xe9
    [ CPU hotplug fixes. ]
    [ Andrew Morton : build warning fix. ]

    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven

    Ingo Molnar
     
  • fix build on !CONFIG_SMP.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
    This patch converts the known per-subsystem mutexes to
    get_online_cpus/put_online_cpus. It also eliminates the CPU_LOCK_ACQUIRE
    and CPU_LOCK_RELEASE hotplug notification events.

    Signed-off-by: Gautham R Shenoy
    Signed-off-by: Ingo Molnar

    Gautham R Shenoy
     
  • Replace all lock_cpu_hotplug/unlock_cpu_hotplug from the kernel and use
    get_online_cpus and put_online_cpus instead as it highlights the
    refcount semantics in these operations.

    The new API guarantees protection against the cpu-hotplug operation, but
    it doesn't guarantee serialized access to any of the local data
    structures. Hence the changes need to be reviewed.

    In case of pseries_add_processor/pseries_remove_processor, use
    cpu_maps_update_begin()/cpu_maps_update_done() as we're modifying the
    cpu_present_map there.
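
    The typical read-side pattern with the new API looks like this
    (do_something() is a placeholder):

        static void walk_online_cpus(void)
        {
                int cpu;

                get_online_cpus();      /* hotplug blocked from here on */
                for_each_online_cpu(cpu) {
                        /* cpu cannot go away inside this section, but
                         * your own per-cpu data still needs its own lock */
                        do_something(cpu);
                }
                put_online_cpus();
        }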

    Signed-off-by: Gautham R Shenoy
    Signed-off-by: Ingo Molnar

    Gautham R Shenoy
     
  • This patch implements a Refcount + Waitqueue based model for
    cpu-hotplug.

    Now, a thread which wants to prevent cpu-hotplug, will bump up a global
    refcount and the thread which wants to perform a cpu-hotplug operation
    will block till the global refcount goes to zero.

    The readers, if any, during an ongoing cpu-hotplug operation are blocked
    until the cpu-hotplug operation is over.
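
    A very simplified sketch of the scheme (writer re-entrancy and
    initialization are omitted; names are illustrative):

        static struct {
                struct mutex lock;
                int refcount;                   /* active readers */
                wait_queue_head_t writer_wait;  /* hotplug writer sleeps here */
        } hotplug;

        void get_online_cpus(void)
        {
                mutex_lock(&hotplug.lock);      /* writer holds this throughout */
                hotplug.refcount++;
                mutex_unlock(&hotplug.lock);
        }

        void put_online_cpus(void)
        {
                mutex_lock(&hotplug.lock);
                if (--hotplug.refcount == 0)
                        wake_up(&hotplug.writer_wait);
                mutex_unlock(&hotplug.lock);
        }

        static void cpu_hotplug_begin(void)     /* writer side */
        {
                mutex_lock(&hotplug.lock);
                while (hotplug.refcount) {      /* readers still active */
                        mutex_unlock(&hotplug.lock);
                        wait_event(hotplug.writer_wait,
                                   hotplug.refcount == 0);
                        mutex_lock(&hotplug.lock);      /* re-check held */
                }
                /* lock stays held until the hotplug operation is done,
                 * so new readers block in get_online_cpus() */
        }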

    Signed-off-by: Gautham R Shenoy
    Signed-off-by: Paul Jackson [For !CONFIG_HOTPLUG_CPU ]
    Signed-off-by: Ingo Molnar

    Gautham R Shenoy
     
  • The current load balancing scheme isn't good enough for precise
    group fairness.

    For example: on an 8-cpu system, I created 3 groups as under:

    a = 8 tasks (cpu.shares = 1024)
    b = 4 tasks (cpu.shares = 1024)
    c = 3 tasks (cpu.shares = 1024)

    a, b and c are task groups that have equal weight. We would expect each
    of the groups to receive 33.33% of cpu bandwidth under a fair scheduler.

    This is what I get with the latest scheduler git tree:

    --------------------------------------------------------------------------------
    Col1  | Col2    | Col3  | Col4
    ------|---------|-------|-------------------------------------------------------
    a     | 277.676 | 57.8% | 54.1% 54.1% 54.1% 54.2% 56.7% 62.2% 62.8% 64.5%
    b     | 116.108 | 24.2% | 47.4% 48.1% 48.7% 49.3%
    c     |  86.326 | 18.0% | 47.5% 47.9% 48.5%
    --------------------------------------------------------------------------------

    Explanation of o/p:

    Col1 -> Group name
    Col2 -> Cumulative execution time (in seconds) received by all tasks of that
    group in a 60sec window across 8 cpus
    Col3 -> CPU bandwidth received by the group in the 60sec window, expressed in
    percentage. Col3 data is derived as:
    Col3 = 100 * Col2 / (NR_CPUS * 60)
    Col4 -> CPU bandwidth received by each individual task of the group.
    Col4 = 100 * cpu_time_recd_by_task / 60

    [I can share the test case that produces a similar o/p if reqd]

    The deviation from desired group fairness is as below:

    a = +24.47%
    b = -9.13%
    c = -15.33%

    which is quite high.

    After the patch below is applied, here are the results:

    --------------------------------------------------------------------------------
    Col1  | Col2    | Col3  | Col4
    ------|---------|-------|-------------------------------------------------------
    a     | 163.112 | 34.0% | 33.2% 33.4% 33.5% 33.5% 33.7% 34.4% 34.8% 35.3%
    b     | 156.220 | 32.5% | 63.3% 64.5% 66.1% 66.5%
    c     | 160.653 | 33.5% | 85.8% 90.6% 91.4%
    --------------------------------------------------------------------------------

    Deviation from desired group fairness is as below:

    a = +0.67%
    b = -0.83%
    c = +0.17%

    which is far better IMO. Most other runs have yielded a deviation within
    +-2% at the most, which is good.

    Why do we see bad (group) fairness with the current scheduler?
    ==============================================================

    Currently cpu's weight is just the summation of individual task weights.
    This can yield incorrect results. For ex: consider three groups as below
    on a 2-cpu system:

    CPU0        CPU1
    ---------------------------
    A (10)      B (5)
                C (5)
    ---------------------------

    Group A has 10 tasks, all on CPU0; groups B and C have 5 tasks each,
    all of which are on CPU1. Each task has the same weight (NICE_0_LOAD =
    1024).

    The current scheme would yield a cpu weight of 10240 (10*1024) for each cpu and
    the load balancer will think both CPUs are perfectly balanced and won't
    move around any tasks. This, however, would yield this bandwidth:

    A = 50%
    B = 25%
    C = 25%

    which is not the desired result.

    What's changing in the patch?
    =============================

    - How cpu weights are calculated when CONFIG_FAIR_GROUP_SCHED is
    defined (see below)
    - API Change
    - Two tunables introduced in sysfs (under SCHED_DEBUG) to
    control the frequency at which the load balance monitor
    thread runs.

    The basic change made in this patch is how cpu weight (rq->load.weight) is
    calculated. It is now calculated as the summation of group weights on a cpu,
    rather than the summation of task weights. Weight exerted by a group on a
    cpu is dependent on the shares allocated to it and also the number of
    tasks the group has on that cpu compared to the total number of
    (runnable) tasks the group has in the system.

    Let,
    W(K,i) = Weight of group K on cpu i
    T(K,i) = Task load present in group K's cfs_rq on cpu i
    T(K) = Total task load of group K across various cpus
    S(K) = Shares allocated to group K
    NRCPUS = Number of online cpus in the scheduler domain to
    which group K is assigned.

    Then,
    W(K,i) = S(K) * NRCPUS * T(K,i) / T(K)
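
    Plugging the earlier 2-cpu A/B/C example into this formula makes the
    effect concrete (arithmetic added here for illustration):

        S(K) = 1024 and NRCPUS = 2 for all three groups. All of group A's
        load sits on CPU0, and all of B's and C's on CPU1, so:

        W(A,0) = 1024 * 2 * T(A,0)/T(A) = 2048
        W(B,1) = 1024 * 2 * T(B,1)/T(B) = 2048
        W(C,1) = 1024 * 2 * T(C,1)/T(C) = 2048

        CPU0's weight is now 2048 while CPU1's is 4096, so the load
        balancer sees the imbalance (instead of the 10240 vs 10240 it got
        from summing task weights) and migrates B/C tasks toward CPU0.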

    A load balance monitor thread is created at bootup, which periodically
    runs and adjusts group's weight on each cpu. To avoid its overhead, two
    min/max tunables are introduced (under SCHED_DEBUG) to control the rate
    at which it runs.

    Fixes from: Peter Zijlstra

    - don't start the load_balance_monitor when there is only a single cpu.
    - rename the kthread, because its name is currently longer than TASK_COMM_LEN

    Signed-off-by: Srivatsa Vaddagiri
    Signed-off-by: Ingo Molnar

    Srivatsa Vaddagiri
     
  • doms_cur[] array represents various scheduling domains which are
    mutually exclusive. Currently cpusets code can modify this array (by
    calling partition_sched_domains()) as a result of user modifying
    sched_load_balance flag for various cpusets.

    This patch introduces a mutex and corresponding API (only when
    CONFIG_FAIR_GROUP_SCHED is defined) which allows a reader to safely read
    the doms_cur[] array without worrying about concurrent modifications to
    the array.

    The fair group scheduler code (introduced in the next patch of this series)
    makes use of this mutex to walk through the doms_cur[] array while
    rebalancing shares of task groups across cpus.
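
    The resulting read-side pattern is straightforward (API names here
    are illustrative):

        static DEFINE_MUTEX(doms_cur_mutex);

        void lock_doms_cur(void)   { mutex_lock(&doms_cur_mutex); }
        void unlock_doms_cur(void) { mutex_unlock(&doms_cur_mutex); }

        /* e.g. the group scheduler's share-rebalancing walk: */
        static void rebalance_shares_across_domains(void)
        {
                int i;

                lock_doms_cur();
                for (i = 0; i < ndoms_cur; i++)
                        rebalance_domain(&doms_cur[i]);  /* illustrative */
                unlock_doms_cur();
        }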

    Signed-off-by: Srivatsa Vaddagiri
    Signed-off-by: Ingo Molnar

    Srivatsa Vaddagiri
     
  • This patch changes how the cpu load exerted by fair_sched_class tasks
    is calculated. Load exerted by fair_sched_class tasks on a cpu is now
    a summation of the group weights, rather than summation of task weights.
    Weight exerted by a group on a cpu is dependent on the shares allocated
    to it.

    This version of patch has a minor impact on code size, but should have
    no runtime/functional impact for !CONFIG_FAIR_GROUP_SCHED.

    Signed-off-by: Srivatsa Vaddagiri
    Signed-off-by: Ingo Molnar

    Srivatsa Vaddagiri
     
  • Minor bug fixes for the group scheduler:

    - Use a mutex to serialize add/remove of task groups and also when
    changing shares of a task group. Use the same mutex when printing
    cfs_rq debugging stats for various task groups.

    - Use list_for_each_entry_rcu in for_each_leaf_cfs_rq macro (when
    walking task group list)

    Signed-off-by: Srivatsa Vaddagiri
    Signed-off-by: Ingo Molnar

    Srivatsa Vaddagiri
     
  • Minor cleanups:

    - Fix coding style
    - remove obsolete comment

    Signed-off-by: Srivatsa Vaddagiri
    Signed-off-by: Ingo Molnar

    Srivatsa Vaddagiri
     
  • remove remaining printk_clock references from ia64.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • printk_clock() is obsolete - it has been replaced with cpu_clock().

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • Stefano Brivio reported weird printk timestamp behavior during
    CPU frequency changes:

    http://bugzilla.kernel.org/show_bug.cgi?id=9475

    fix CONFIG_PRINTK_TIME's reliance on sched_clock() and use cpu_clock()
    instead.

    Reported-and-bisected-by: Stefano Brivio
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • make printk more robust by allowing recursion only if there's a crash
    going on. Also add recursion detection.

    I've tested it with an artificially injected printk recursion - instead
    of a lockup or spontaneous reboot or other crash, the output was a well
    controlled:

    [ 41.057335] SysRq : BUG: recent printk recursion!
    [ 41.057335] loglevel0-8 reBoot Crashdump show-all-locks(D) tErm Full kIll saK showMem Nice powerOff showPc show-all-timers(Q) unRaw Sync showTasks Unmount shoW-blocked-tasks

    also do all this printk-debug logic with irqs disabled.

    Signed-off-by: Ingo Molnar
    Reviewed-by: Nick Piggin

    Ingo Molnar
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/selinux-2.6:
    selinux: make mls_compute_sid always polyinstantiate
    security/selinux: constify function pointer tables and fields
    security: add a secctx_to_secid() hook
    security: call security_file_permission from rw_verify_area
    security: remove security_sb_post_mountroot hook
    Security: remove security.h include from mm.h
    Security: remove security_file_mmap hook sparse-warnings (NULL as 0).
    Security: add get, set, and cloning of superblock security information
    security/selinux: Add missing "space"

    Linus Torvalds
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6:
    [AVR32] extint: Set initial irq type to low level
    [AVR32] extint: change set_irq_type() handling
    [AVR32] NMI debugging
    [AVR32] constify function pointer tables
    [AVR32] ATNGW100: Update defconfig
    [AVR32] ATSTK1002: Update defconfig
    [AVR32] Kconfig: Choose daughterboard instead of CPU
    [AVR32] Add support for ATSTK1003 and ATSTK1004
    [AVR32] Clean up external DAC setup code
    [AVR32] ATSTK1000: Move gpio-leds setup to setup.c
    [AVR32] Add support for AT32AP7001 and AT32AP7002
    [AVR32] Provide more CPU information in /proc/cpuinfo and dmesg
    [AVR32] Oprofile support
    [AVR32] Include instrumentation menu
    Disable VGA text console for AVR32 architecture
    [AVR32] Enable debugging only when needed
    ptrace: Call arch_ptrace_attach() when request=PTRACE_TRACEME
    [AVR32] Remove redundant try_to_freeze() call from do_signal()
    [AVR32] Drop GFP_COMP for DMA memory allocations

    Linus Torvalds
     
  • * git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6-nmw: (56 commits)
    [GFS2] Allow journal recovery on read-only mount
    [GFS2] Lockup on error
    [GFS2] Fix page_mkwrite truncation race path
    [GFS2] Fix typo
    [GFS2] Fix write alloc required shortcut calculation
    [GFS2] gfs2_alloc_required performance
    [GFS2] Remove unneeded i_spin
    [GFS2] Reduce inode size by moving i_alloc out of line
    [GFS2] Fix assert in log code
    [GFS2] Fix problems relating to execution of files on GFS2
    [GFS2] Initialize extent_list earlier
    [GFS2] Allow page migration for writeback and ordered pages
    [GFS2] Remove unused variable
    [GFS2] Fix log block mapper
    [GFS2] Minor correction
    [GFS2] Eliminate the no longer needed sd_statfs_mutex
    [GFS2] Incremental patch to fix compiler warning
    [GFS2] Function meta_read optimization
    [GFS2] Only fetch the dinode once in block_map
    [GFS2] Reorganize function gfs2_glmutex_lock
    ...

    Linus Torvalds
     
  • * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (125 commits)
    [CRYPTO] twofish: Merge common glue code
    [CRYPTO] hifn_795x: Fixup container_of() usage
    [CRYPTO] cast6: inline bloat--
    [CRYPTO] api: Set default CRYPTO_MINALIGN to unsigned long long
    [CRYPTO] tcrypt: Make xcbc available as a standalone test
    [CRYPTO] xcbc: Remove bogus hash/cipher test
    [CRYPTO] xcbc: Fix algorithm leak when block size check fails
    [CRYPTO] tcrypt: Zero axbuf in the right function
    [CRYPTO] padlock: Only reset the key once for each CBC and ECB operation
    [CRYPTO] api: Include sched.h for cond_resched in scatterwalk.h
    [CRYPTO] salsa20-asm: Remove unnecessary dependency on CRYPTO_SALSA20
    [CRYPTO] tcrypt: Add select of AEAD
    [CRYPTO] salsa20: Add x86-64 assembly version
    [CRYPTO] salsa20_i586: Salsa20 stream cipher algorithm (i586 version)
    [CRYPTO] gcm: Introduce rfc4106
    [CRYPTO] api: Show async type
    [CRYPTO] chainiv: Avoid lock spinning where possible
    [CRYPTO] seqiv: Add select AEAD in Kconfig
    [CRYPTO] scatterwalk: Handle zero nbytes in scatterwalk_map_and_copy
    [CRYPTO] null: Allow setkey on digest_null
    ...

    Linus Torvalds
     
  • This can be broken down into these major areas:
    - Documentation updates (language translations and fixes, as
    well as kobject and kset documentation updates.)
    - major kset/kobject/ktype rework and fixes. This cleans up the
    kset, kobject and ktype relationship and architecture,
    making sense of things now, and good documentation and samples
    are provided for others to use. Also the attributes for
    kobjects are much easier to handle now. This cleaned up a LOT
    of code all through the kernel, making kobjects easier to use
    if you want to.
    - struct bus_type has been reworked to now handle the lifetime
    rules properly, as the kobject is properly dynamic.
    - struct driver has also been reworked, and now the lifetime
    issues are resolved.
    - the block subsystem has been converted to use struct device
    now, and not "raw" kobjects. This patch has been in the -mm
    tree for over a year now, and finally all the issues are
    worked out with it. Older distros now properly work with new
    kernels, and no userspace updates are needed at all.
    - nozomi driver is added. This has also been in -mm for a long
    time, and many people have asked for it to go in. It is now
    in good enough shape to do so.
    - lots of class_device conversions to use struct device instead.
    The tree is almost all cleaned up now; only SCSI and IB are the
    remaining code to fix up...

    * git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-2.6: (196 commits)
    Driver core: coding style fixes
    Kobject: fix coding style issues in kobject c files
    Kobject: fix coding style issues in kobject.h
    Driver core: fix coding style issues in device.h
    spi: use class iteration api
    scsi: use class iteration api
    rtc: use class iteration api
    power supply : use class iteration api
    ieee1394: use class iteration api
    Driver Core: add class iteration api
    Driver core: Cleanup get_device_parent() in device_add() and device_move()
    UIO: constify function pointer tables
    Driver Core: constify the name passed to platform_device_register_simple
    driver core: fix build with SYSFS=n
    sysfs: make SYSFS_DEPRECATED depend on SYSFS
    Driver core: use LIST_HEAD instead of call to INIT_LIST_HEAD in __init
    kobject: add sample code for how to use ksets/ktypes/kobjects
    kobject: add sample code for how to use kobjects in a simple manner.
    kobject: update the kobject/kset documentation
    kobject: remove old, outdated documentation.
    ...

    Linus Torvalds
     
  • If the node we're booting on doesn't have memory, bootstrapping kmalloc()
    caches resorts to fallback_alloc() which requires ->nodelists set for all
    nodes. Fix that by calling set_up_list3s() for CACHE_CACHE in
    kmem_cache_init().

    As kmem_getpages() is called with GFP_THISNODE set, this used to work
    before because of breakage in 2.6.22 and earlier, where GFP_THISNODE
    returned pages from the wrong node if a node had no memory. So it may have
    worked accidentally, and in an unsafe manner, because the pages would have
    been associated with the wrong node, which could trigger BUG_ONs and
    locking trouble.

    Tested-by: Mel Gorman
    Tested-by: Olaf Hering
    Reviewed-by: Christoph Lameter
    Signed-off-by: Pekka Enberg
    [ With additional one-liner by Olaf Hering - Linus ]
    Signed-off-by: Linus Torvalds

    Pekka Enberg
     
  • Fix overwriting the stack with the version string
    (it is currently 10 bytes + zero) when unloading the
    capidrv module. Safeguard against overwriting it
    should the version string grow in the future.

    Should fix Kernel Bug Tracker Bug 9696.
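
    The safeguard amounts to the usual bounded-copy pattern, roughly
    (buffer size and names are illustrative):

        static void capidrv_copy_revision(char *dest, size_t destlen,
                                          const char *version_string)
        {
                /* truncates rather than running past the buffer if the
                 * version string ever grows */
                strlcpy(dest, version_string, destlen);
        }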

    Signed-off-by: Gerd v. Egidy
    Acked-by: Karsten Keil
    Signed-off-by: Linus Torvalds

    Karsten Keil
     

25 Jan, 2008

9 commits

  • This patch allows gfs2 to perform journal recovery even if it is mounted
    read-only. Strictly speaking, a read-only mount should not be writing to
    the filesystem, but we do this only to perform journal recovery. A
    read-only mount will fail if we don't recover the dirty journal. Also,
    when gfs2 is used as a root filesystem, it will be mounted read-only
    before being mounted read-write during the boot sequence. A failed
    read-only mount will panic the machine during bootup.

    Signed-off-by: Abhijith Das
    Signed-off-by: Steven Whitehouse

    Abhijith Das
     
  • I spotted this bug while I was digging around. Looks like it could cause
    a lockup in some rare error condition.

    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Bob Peterson
     
  • There was a bug in the truncation/invalidation race path for
    ->page_mkwrite for gfs2. It ought to return 0 so that the effect is the
    same as if the page was truncated at any of the other points at which
    the page_lock is dropped. This will result in the restart of the whole
    page fault path. If it was due to a real truncation (as opposed to an
    invalidate because we let a glock go) then the ->fault path will pick
    that up when it gets called again.

    Signed-off-by: Steven Whitehouse

    Steven Whitehouse
     
  • This patch fixes a minor typo. Surprisingly, it still compiled.

    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Bob Peterson
     
  • The comparison was being made against the wrong quantity.

    Signed-off-by: Steven Whitehouse

    Steven Whitehouse
     
    This is a small I/O performance enhancement to gfs2. (Actually, it is a
    rework of an earlier version I got wrong.) The idea here is to check
    whether the write extends past the last block in the file. If so, the
    function can save itself a lot of time and trouble because it knows an
    allocation will be required. Benchmarks like iozone should see better
    performance.
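
    The shortcut itself is tiny, roughly (names are illustrative):

        static int write_alloc_required(struct inode *inode,
                                        loff_t offset, unsigned int len)
        {
                /* writing past end of file always needs an allocation,
                 * so skip the expensive block-by-block check */
                if (offset + len > i_size_read(inode))
                        return 1;
                return scan_blocks(inode, offset, len);  /* slow path */
        }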

    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Bob Peterson
     
    This patch removes a vestigial variable "i_spin" from the gfs2_inode
    structure. This not only saves us memory (>300000 of these in memory
    for the oom test), it also saves us time because we don't have to
    spend time initializing it (i.e. slightly better performance).

    Signed-off-by: Bob Peterson
    Signed-off-by: Steven Whitehouse

    Bob Peterson
     
    It is possible to reduce the size of GFS2 inodes by taking the i_alloc
    structure out of the gfs2_inode. This patch allocates the i_alloc
    structure whenever it's needed and frees it afterwards. This decreases
    the amount of low memory we use, at the expense of requiring a memory
    allocation for each page or partial page that we write. A quick test
    with postmark shows that the overhead is not measurable, and I also note
    that OCFS2 uses the same approach.
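
    The on-demand pattern looks roughly like this (function names are
    illustrative):

        struct gfs2_alloc *alloc_get(struct gfs2_inode *ip)
        {
                BUG_ON(ip->i_alloc != NULL);
                ip->i_alloc = kzalloc(sizeof(struct gfs2_alloc), GFP_NOFS);
                return ip->i_alloc;
        }

        void alloc_put(struct gfs2_inode *ip)
        {
                kfree(ip->i_alloc);     /* freed again after each write */
                ip->i_alloc = NULL;
        }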

    In the future I'd like to solve the problem by shrinking down the size
    of the members of the i_alloc structure, but for now, this reduces the
    immediate problem of using too much low-memory on x86 and doesn't add
    too much overhead.

    Signed-off-by: Steven Whitehouse

    Steven Whitehouse
     
  • Although the values were all being calculated correctly, there was a
    race in the assert due to the way it was using atomic variables. This
    changes the value we assert on so that we get the same effect by testing
    a different variable. This prevents the assert triggering when it shouldn't.

    Signed-off-by: Steven Whitehouse

    Steven Whitehouse