31 Mar, 2006

7 commits


29 Mar, 2006

1 commit


28 Mar, 2006

2 commits


27 Mar, 2006

2 commits

  • - remove __{,test_and_}{set,clear,change}_bit() and test_bit()
    - remove ffz()
    - remove generic_fls64()
    - remove generic_hweight{32,16,8}()
    - remove generic_hweight64()
    - remove sched_find_first_bit()
    - remove find_{next,first}{,_zero}_bit()
    - remove ext2_{set,clear,test,find_first_zero,find_next_zero}_bit()

    Signed-off-by: Akinobu Mita
    Cc: Kyle McMartin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • Add missing ()-pair in the __ffz() macro for parisc.

    Noticed by Michael Tokarev.

    Signed-off-by: Akinobu Mita
    Cc: Kyle McMartin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

26 Mar, 2006

1 commit

  • Implement the half-closed devices notification, by adding a new POLLRDHUP
    (and its alias EPOLLRDHUP) bit to the existing poll/select sets. Since
    changing the existing POLLHUP handling, which does not correctly report
    half-closed devices, was considered too risky, this implementation leaves
    the current POLLHUP reporting unchanged and simply adds a new bit that is
    set in the few places where it makes sense. The same thing was discussed
    and conceptually agreed quite some time ago:

    http://lkml.org/lkml/2003/7/12/116

    Since this new event bit is added to the existing Linux poll infrastructure,
    even the existing poll/select system calls will be able to use it. As for
    the existing POLLHUP handling, the patch leaves it as is. The
    pollrdhup-2.6.16.rc5-0.10.diff defines the POLLRDHUP for all the existing
    archs and sets the bit in the six relevant files. The other attached diff
    is the simple change required to sys/epoll.h to add the EPOLLRDHUP
    definition.

    There is "a stupid program" to test POLLRDHUP delivery here:

    http://www.xmailserver.org/pollrdhup-test.c

    It tests poll(2), but since the delivery is the same, epoll(2) will work
    equally well.

    Signed-off-by: Davide Libenzi
    Cc: "David S. Miller"
    Cc: Michael Kerrisk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Davide Libenzi
     

24 Mar, 2006

1 commit


23 Mar, 2006

1 commit

  • Seems like needless clutter having a bunch of #if defined(CONFIG_$ARCH) in
    include/linux/cache.h. Move the per architecture section definition to
    asm/cache.h, and keep the if-not-defined dummy case in linux/cache.h to
    catch architectures which don't implement the section.

    Verified that symbols still go in .data.read_mostly on parisc,
    and the compile doesn't break.

    Signed-off-by: Kyle McMartin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kyle McMartin
     

16 Feb, 2006

1 commit

  • Make new MADV_REMOVE, MADV_DONTFORK, MADV_DOFORK consistent across all
    arches. The idea is to make it possible to use them portably even before
    distros include them in libc headers.

    Move common flags to asm-generic/mman.h

    Signed-off-by: Michael S. Tsirkin
    Cc: Roland Dreier
    Cc: Badari Pulavarty
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael S. Tsirkin
     

15 Feb, 2006

1 commit

  • Currently, copy-on-write may change the physical address of a page even if the
    user requested that the page be pinned in memory (either by mlock or by
    get_user_pages). This happens if the process forks in the meantime and the
    parent writes to that page. As a result, the page is orphaned: in the case of
    get_user_pages, the application will never see any data the hardware DMAs into
    this page after the COW. In the case of mlock'd memory, the parent is not getting
    the realtime/security benefits of mlock.

    In particular, this affects the Infiniband modules which do DMA from and into
    user pages all the time.

    This patch adds madvise options to control whether a memory range is inherited
    across fork. Useful e.g. when hardware is doing DMA from/into these
    pages. Could also be useful to an application wanting to speed up its forks
    by cutting large areas out of consideration.

    Signed-off-by: Michael S. Tsirkin
    Acked-by: Hugh Dickins
    Cc: Michael Kerrisk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael S. Tsirkin
     

30 Jan, 2006

1 commit


23 Jan, 2006

6 commits


13 Jan, 2006

2 commits


11 Jan, 2006

5 commits


10 Jan, 2006

2 commits


09 Jan, 2006

2 commits

  • Most of the architectures have the same asm/futex.h. This consolidates them
    into asm-generic, with the arches including it from their own asm/futex.h.

    In the case of UML, this reverts the old broken futex.h and goes back to using
    the same one as almost everyone else.

    Signed-off-by: Jeff Dike
    Cc: Paolo 'Blaisorblade' Giarrusso
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jeff Dike
     
  • Kill L1_CACHE_SHIFT from all arches. Since L1_CACHE_SHIFT_MAX is not used
    anymore with the introduction of INTERNODE_CACHE, kill L1_CACHE_SHIFT_MAX.

    Signed-off-by: Ravikiran Thirumalai
    Signed-off-by: Shai Fultheim
    Signed-off-by: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ravikiran G Thirumalai
     

07 Jan, 2006

2 commits

  • Several counters already have the need to use 64 bit atomic variables on
    64 bit platforms (see mm_counter_t in sched.h). We have to do ugly ifdefs
    to fall back to 32 bit atomics on 32 bit platforms.

    The VM statistics patch that I am working on will also make more extensive
    use of atomic64.

    This patch introduces a new type, atomic_long_t, by providing definitions in
    asm-generic/atomic.h that work similarly to the C "long" type. It is 32 bits
    on 32 bit platforms and 64 bits on 64 bit platforms.

    Also cleans up the determination of the mm_counter_t in sched.h.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Here is the patch to implement madvise(MADV_REMOVE) - which frees up a
    given range of pages and its associated backing store. The current
    implementation supports only shmfs/tmpfs; other filesystems return
    -ENOSYS.

    "Some app allocates large tmpfs files, then when some task quits and some
    client disconnects, some memory can be released. However the only way to
    release tmpfs-swap is to MADV_REMOVE." - Andrea Arcangeli

    Databases want to use this feature to drop a section of their bufferpool
    (shared memory segments) - without writing back to disk/swap space.

    This feature is also useful for supporting hot-plug memory on UML.

    Concerns raised by Andrew Morton:

    - "We have no plan for holepunching! If we _do_ have such a plan (or
    might in the future) then what would the API look like? I think
    sys_holepunch(fd, start, len), so we should start out with that."

    - Using madvise is very weird, because people will ask "why do I need to
    mmap my file before I can stick a hole in it?"

    - None of the other madvise operations call into the filesystem in this
    manner. A broad question is: is this capability an MM operation or a
    filesystem operation? truncate, for example, is a filesystem operation
    which sometimes has MM side-effects. madvise is an mm operation and with
    this patch, it gains FS side-effects, only they're really, really
    significant ones.

    Comments:

    - Andrea suggested the fs operation too, but it's more efficient to
    have it as an mm operation with fs side effects, because userland doesn't
    immediately know the fd and physical offset of the range. It's possible to
    fix up in userland and use the fs operation, but it's more expensive;
    the vmas are already in the kernel and we can use them.

    Short term plan & Future Direction:

    - We seem to need this interface only for shmfs/tmpfs files in the short
    term. We have to add hooks into the filesystem for correctness and
    completeness. This is what this patch does.

    - In the future, the plan is to support both the fs and mmap APIs. This
    also involves (other) filesystem-specific functions to be implemented.

    - Current patch doesn't support VM_NONLINEAR - which can be addressed in
    the future.

    Signed-off-by: Badari Pulavarty
    Cc: Hugh Dickins
    Cc: Andrea Arcangeli
    Cc: Michael Kerrisk
    Cc: Ulrich Drepper
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Badari Pulavarty
     

04 Jan, 2006

1 commit


18 Nov, 2005

2 commits

  • Since taking a spinlock disables preempt, and we need to spinlock tlb flush
    on SMP for N class, we might as well just spinlock on uniprocessor machines
    too.

    Signed-off-by: Matthew Wilcox
    Signed-off-by: Kyle McMartin

    Matthew Wilcox
     
  • We actually have two separate bad bugs:

    1. The read_lock implementation spins with disabled interrupts. This is
    completely wrong.
    2. Our spin_lock_irqsave should check to see if interrupts were enabled
    before the call and re-enable interrupts around the inner spin loop.

    The problem is that if we spin with interrupts off, we can't receive
    IPIs. This has resulted in a bug where SMP machines suddenly spit
    smp_call_function timeout messages and hang.

    The scenario I've caught is:

    CPU0 does a flush_tlb_all holding the vmlist_lock for write.
    CPU1 tries a cat of /proc/meminfo, which tries to acquire vmlist_lock for
    read.
    CPU1 is now spinning with interrupts disabled.
    CPU0 tries to execute a smp_call_function to flush the local tlb caches.

    This is now a deadlock because CPU1 is spinning with interrupts disabled
    and can never receive the IPI.

    Signed-off-by: James Bottomley
    Signed-off-by: Kyle McMartin

    James Bottomley
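    The fix described in point 2 can be sketched as pseudocode (the arch
    hooks below are hypothetical, not the actual parisc implementation):
    spin with interrupts restored so IPIs can still be delivered, and only
    hold them off once the lock is actually taken.

```c
/* Pseudocode sketch of an IPI-safe spin_lock_irqsave. */
void spin_lock_irqsave_sketch(spinlock_t *lock, unsigned long *flags)
{
    local_irq_save(*flags);               /* remember prior irq state    */
    while (!try_acquire(lock)) {
        local_irq_restore(*flags);        /* re-enable irqs (if they     */
        while (lock_is_held(lock))        /* were on) around the inner   */
            cpu_relax();                  /* spin loop, so IPIs arrive   */
        local_irq_save(*flags);           /* disable again, then retry   */
    }
}
```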