27 May, 2011

2 commits

  • The style that we normally use in asm-generic is to test the macro itself
    for existence, so in asm-generic, do:

    #ifndef find_next_zero_bit_le
    extern unsigned long find_next_zero_bit_le(const void *addr,
            unsigned long size, unsigned long offset);
    #endif

    and in the architectures, write

    static inline unsigned long find_next_zero_bit_le(const void *addr,
            unsigned long size, unsigned long offset)
    #define find_next_zero_bit_le find_next_zero_bit_le

    This adds the #ifndef for each of the find bitops in the generic header
    and source files.

    Suggested-by: Arnd Bergmann
    Signed-off-by: Akinobu Mita
    Acked-by: Russell King
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: Greg Ungerer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
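
    A minimal illustration of why the self-referencing #define works (the
    file layout and the arch body are illustrative, not the exact kernel
    code):

    /* asm-generic side: only declare the generic version if no macro of
     * that name exists yet */
    #ifndef find_next_zero_bit_le
    extern unsigned long find_next_zero_bit_le(const void *addr,
            unsigned long size, unsigned long offset);
    #endif

    /* arch side: provide an inline version, then define the macro to its
     * own name so the #ifndef above becomes false and the generic
     * declaration is skipped */
    static inline unsigned long find_next_zero_bit_le(const void *addr,
            unsigned long size, unsigned long offset)
    {
            /* arch-optimized search would go here */
            return size;    /* placeholder: "not found" */
    }
    #define find_next_zero_bit_le find_next_zero_bit_le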
     
  • This is a series of low level ptrace unification steps to make it easier
    for common code (like KGDB) to poke at register state. This also avoids
    having to duplicate higher level operations for most ports which don't
    have special needs for accessing things.

    This patch:

    This implements a bunch of helper funcs for poking the registers of a
    ptrace structure. Now common code should be able to portably update
    specific registers (like kgdb updating the PC).

    Signed-off-by: Mike Frysinger
    Cc: Oleg Nesterov
    Cc: Jason Wessel
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Paul Mundt
    Cc: Sergei Shtylyov
    Cc: Dongdong Deng
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Frysinger
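
    A rough sketch of the helper pattern described above, assuming the arch
    supplies a GET_IP()/SET_IP() accessor pair for its own pt_regs layout
    (the names follow the common kernel convention, but treat the details as
    illustrative):

    /* arch header: map the accessors onto this arch's pt_regs fields */
    #define GET_IP(regs)            ((regs)->pc)
    #define SET_IP(regs, val)       ((regs)->pc = (val))

    /* generic header: common code uses these instead of touching
     * pt_regs fields directly */
    static inline unsigned long instruction_pointer(struct pt_regs *regs)
    {
            return GET_IP(regs);
    }

    static inline void instruction_pointer_set(struct pt_regs *regs,
                                               unsigned long val)
    {
            SET_IP(regs, val);
    }

    /* e.g. kgdb adjusting the PC: instruction_pointer_set(regs, new_pc); */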
     

26 May, 2011

2 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (89 commits)
    bonding: documentation and code cleanup for resend_igmp
    bonding: prevent deadlock on slave store with alb mode (v3)
    net: hold rtnl again in dump callbacks
    Add Fujitsu 1000base-SX PCI ID to tg3
    bnx2x: protect sequence increment with mutex
    sch_sfq: fix peek() implementation
    isdn: netjet - blacklist Digium TDM400P
    via-velocity: don't annotate MAC registers as packed
    xen: netfront: hold RTNL when updating features.
    sctp: fix memory leak of the ASCONF queue when free asoc
    net: make dev_disable_lro use physical device if passed a vlan dev (v2)
    net: move is_vlan_dev into public header file (v2)
    bug.h: Fix build with CONFIG_PRINTK disabled.
    wireless: fix fatal kernel-doc error + warning in mac80211.h
    wireless: fix cfg80211.h new kernel-doc warnings
    iwlagn: dbg_fixed_rate only used when CONFIG_MAC80211_DEBUGFS enabled
    dst: catch uninitialized metrics
    be2net: hash key for rss-config cmd not set
    bridge: initialize fake_rtable metrics
    net: fix __dst_destroy_metrics_generic()
    ...

    Fix up trivial conflicts in drivers/staging/brcm80211/brcmfmac/wl_cfg80211.c

    Linus Torvalds
     
  • * git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile: (26 commits)
    arch/tile: prefer "tilepro" as the name of the 32-bit architecture
    compat: include aio_abi.h for aio_context_t
    arch/tile: cleanups for tilegx compat mode
    arch/tile: allocate PCI IRQs later in boot
    arch/tile: support signal "exception-trace" hook
    arch/tile: use better definitions of xchg() and cmpxchg()
    include/linux/compat.h: coding-style fixes
    tile: add an RTC driver for the Tilera hypervisor
    arch/tile: finish enabling support for TILE-Gx 64-bit chip
    compat: fixes to allow working with tile arch
    arch/tile: update defconfig file to something more useful
    tile: do_hardwall_trap: do not play with task->sighand
    tile: replace mm->cpu_vm_mask with mm_cpumask()
    tile,mn10300: add device parameter to dma_cache_sync()
    audit: support the "standard" <asm-generic/unistd.h>
    arch/tile: clarify flush_buffer()/finv_buffer() function names
    arch/tile: kernel-related cleanups from removing static page size
    arch/tile: various header improvements for building drivers
    arch/tile: disable GX prefetcher during cache flush
    arch/tile: tolerate disabling CONFIG_BLK_DEV_INITRD
    ...

    Linus Torvalds
     

25 May, 2011

9 commits

  • Apps are increasingly using more than 1024 file descriptors. See
    discussion in several distro bug trackers, e.g. BugLink:
    http://bugs.launchpad.net/bugs/663090
    https://issues.rpath.com/browse/RPL-2054

    You don't want to raise the default soft limit, since that might break
    apps that use select(), but it's safe to raise the default hard limit;
    that way, apps that know they need lots of file descriptors can raise
    their soft limit without needing root, and without user intervention.

    Ubuntu is doing this with a kernel change because they have a policy of
    not changing kernel defaults in userland.

    While 4096 might not be enough for *all* apps, it seems to be plenty for
    the apps I've seen lately that are unhappy with 1024.

    Signed-off-by: Tim Gardner
    Cc: Dan Kegel
    Cc: Al Viro
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Gardner
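
    For example, with a 4096 hard limit an unprivileged application that
    knows it needs more descriptors can raise its own soft limit; a minimal
    userspace sketch:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
                    perror("getrlimit");
                    return 1;
            }
            rl.rlim_cur = rl.rlim_max;      /* e.g. 1024 -> 4096 */
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
                    perror("setrlimit");
                    return 1;
            }
            printf("soft fd limit raised to %llu\n",
                   (unsigned long long)rl.rlim_cur);
            return 0;
    }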
     
  • The copy_to_user_page() function is supposed to flush the icache on the
    memory that was written, but the current asm-generic version lacks that
    logic. While normally this isn't a big deal, since the asm-generic version
    of icache flushing is a stub, it is a big deal for ports that want to use
    the asm-generic version as a baseline and then overlay their own specific
    parts (like icache flushing).

    Signed-off-by: Mike Frysinger
    Cc: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Frysinger
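
    A minimal sketch of what the fixed asm-generic definition looks like
    (the flush helper named here is the conventional one for this path;
    treat the exact macro body as illustrative):

    #define copy_to_user_page(vma, page, vaddr, dst, src, len)     \
            do {                                                    \
                    memcpy(dst, src, len);                          \
                    flush_icache_user_range(vma, page, vaddr, len); \
            } while (0)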
     
  • Some of these functions have grown beyond inline sanity, move them
    out-of-line.

    Signed-off-by: Peter Zijlstra
    Requested-by: Andrew Morton
    Requested-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Instead of using a single batch (the small on-stack, or an allocated
    page), try and extend the batch every time it runs out and only flush once
    either the extend fails or we're done.

    Signed-off-by: Peter Zijlstra
    Requested-by: Nick Piggin
    Reviewed-by: KAMEZAWA Hiroyuki
    Acked-by: Hugh Dickins
    Cc: Benjamin Herrenschmidt
    Cc: David Miller
    Cc: Martin Schwidefsky
    Cc: Russell King
    Cc: Paul Mundt
    Cc: Jeff Dike
    Cc: Richard Weinberger
    Cc: Tony Luck
    Cc: Mel Gorman
    Cc: KOSAKI Motohiro
    Cc: Nick Piggin
    Cc: Namhyung Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
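
    Conceptually, the gather keeps a chain of page-sized batches and only
    flushes when the chain can no longer be grown; a simplified sketch of
    that idea (structure layout and helper names are illustrative, not the
    exact kernel code):

    struct mmu_gather_batch {
            struct mmu_gather_batch *next;
            unsigned int            nr;     /* pages queued in this batch */
            unsigned int            max;    /* pages that fit in this batch */
            struct page             *pages[];
    };

    /* try to chain another batch; if this fails the caller flushes now */
    static bool tlb_next_batch(struct mmu_gather *tlb)
    {
            struct mmu_gather_batch *batch;

            batch = (void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
            if (!batch)
                    return false;

            batch->next = NULL;
            batch->nr   = 0;
            batch->max  = (PAGE_SIZE - sizeof(*batch)) / sizeof(struct page *);

            tlb->active->next = batch;
            tlb->active       = batch;
            return true;
    }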
     
  • In case other architectures require RCU freed page-tables to implement
    gup_fast() and software filled hashes and similar things, provide the
    means to do so by moving the logic into generic code.

    Signed-off-by: Peter Zijlstra
    Requested-by: David Miller
    Cc: Benjamin Herrenschmidt
    Cc: Martin Schwidefsky
    Cc: Russell King
    Cc: Paul Mundt
    Cc: Jeff Dike
    Cc: Richard Weinberger
    Cc: Tony Luck
    Cc: KAMEZAWA Hiroyuki
    Cc: Hugh Dickins
    Cc: Mel Gorman
    Cc: KOSAKI Motohiro
    Cc: Nick Piggin
    Cc: Namhyung Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Rework the existing mmu_gather infrastructure.

    The direct purpose of these patches was to allow preemptible mmu_gather,
    but even without that I think these patches provide an improvement to the
    status quo.

    The first 9 patches rework the mmu_gather infrastructure. For review
    purpose I've split them into generic and per-arch patches with the last of
    those a generic cleanup.

    The next patch provides generic RCU page-table freeing, and the followup
    is a patch converting s390 to use this. I've also got 4 patches from
    DaveM lined up (not included in this series) that use this to implement
    gup_fast() for sparc64.

    Then there is one patch that extends the generic mmu_gather batching.

    After that follow the mm preemptibility patches, these make part of the mm
    a lot more preemptible. It converts i_mmap_lock and anon_vma->lock to
    mutexes which together with the mmu_gather rework makes mmu_gather
    preemptible as well.

    Making i_mmap_lock a mutex also enables a clean-up of the truncate code.

    This also allows for preemptible mmu_notifiers, something that XPMEM I
    think wants.

    Furthermore, it removes the new and universally detested unmap_mutex.

    This patch:

    Remove the first obstacle towards a fully preemptible mmu_gather.

    The current scheme assumes mmu_gather is always done with preemption
    disabled and uses per-cpu storage for the page batches. Change this to
    try and allocate a page for batching and in case of failure, use a small
    on-stack array to make some progress.

    Preemptible mmu_gather is desired in general and usable once i_mmap_lock
    becomes a mutex. Doing it before the mutex conversion saves us from
    having to rework the code by moving the mmu_gather bits inside the
    pte_lock.

    Also avoid flushing the tlb batches from under the pte lock, this is
    useful even without the i_mmap_lock conversion as it significantly reduces
    pte lock hold times.

    [akpm@linux-foundation.org: fix comment tpyo]
    Signed-off-by: Peter Zijlstra
    Cc: Benjamin Herrenschmidt
    Cc: David Miller
    Cc: Martin Schwidefsky
    Cc: Russell King
    Cc: Paul Mundt
    Cc: Jeff Dike
    Cc: Richard Weinberger
    Cc: Tony Luck
    Reviewed-by: KAMEZAWA Hiroyuki
    Acked-by: Hugh Dickins
    Acked-by: Mel Gorman
    Cc: KOSAKI Motohiro
    Cc: Nick Piggin
    Cc: Namhyung Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
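
    A simplified sketch of the allocate-or-fall-back idea this patch
    introduces (field and constant names are illustrative):

    #define MMU_GATHER_BUNDLE 8             /* small on-stack fallback */

    struct mmu_gather {
            unsigned int    nr;             /* pages queued so far */
            unsigned int    max;            /* capacity of ->pages */
            struct page     **pages;
            struct page     *local[MMU_GATHER_BUNDLE];
            /* ... */
    };

    static inline void tlb_alloc_batch(struct mmu_gather *tlb)
    {
            unsigned long addr = __get_free_page(GFP_NOWAIT | __GFP_NOWARN);

            if (addr) {
                    tlb->pages = (void *)addr;
                    tlb->max   = PAGE_SIZE / sizeof(struct page *);
            } else {
                    /* allocation failed: use the on-stack array and simply
                     * flush more often */
                    tlb->pages = tlb->local;
                    tlb->max   = MMU_GATHER_BUNDLE;
            }
            tlb->nr = 0;
    }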
     
  • Based upon an email by Joe Perches.

    Reported-by: Randy Dunlap
    Signed-off-by: David S. Miller
    Acked-by: Randy Dunlap

    David S. Miller
     
  • * 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6: (29 commits)
    [S390] cpu hotplug: fix external interrupt subclass mask handling
    [S390] oprofile: dont access lowcore
    [S390] oprofile: add missing irq stats counter
    [S390] Ignore sendmmsg system call note wired up warning
    [S390] s390,oprofile: fix compile error for !CONFIG_SMP
    [S390] s390,oprofile: fix alert counter increment
    [S390] Remove unused includes in process.c
    [S390] get CPC image name
    [S390] sclp: event buffer dissection
    [S390] chsc: process channel-path-availability information
    [S390] refactor page table functions for better pgste support
    [S390] merge page_test_dirty and page_clear_dirty
    [S390] qdio: prevent compile warning
    [S390] sclp: remove unnecessary sendmask check
    [S390] convert old cpumask API into new one
    [S390] pfault: cleanup code
    [S390] pfault: cpu hotplug vs missing completion interrupts
    [S390] smp: add __noreturn attribute to cpu_die()
    [S390] percpu: implement arch specific irqsafe_cpu_ops
    [S390] vdso: disable gcov profiling
    ...

    Linus Torvalds
     
  • * 'for-2.6.40' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
    percpu: Unify input section names
    percpu: Avoid extra NOP in percpu_cmpxchg16b_double
    percpu: Cast away printk format warning
    percpu: Always align percpu output section to PAGE_SIZE

    Fix up fairly trivial conflict in arch/x86/include/asm/percpu.h as per Tejun

    Linus Torvalds
     

24 May, 2011

2 commits


23 May, 2011

1 commit

  • The page_clear_dirty primitive always sets the default storage key
    which resets the access control bits and the fetch protection bit.
    That will surprise a KVM guest that sets non-zero access control
    bits or the fetch protection bit. Merge page_test_dirty and
    page_clear_dirty back to a single function and only clear the
    dirty bit from the storage key.

    In addition, move the functions page_test_and_clear_dirty and
    page_test_and_clear_young to page.h where they belong. This
    requires changing the parameter from a struct page * to a page
    frame number.

    Signed-off-by: Martin Schwidefsky

    Martin Schwidefsky
     

21 May, 2011

1 commit

  • Commit e66eed651fd1 ("list: remove prefetching from regular list
    iterators") removed the include of prefetch.h from list.h, which
    uncovered several cases that had apparently relied on that rather
    obscure header file dependency.

    So this fixes things up a bit, using

    grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetchw*(' -- '*.[ch]')
    grep -L 'prefetchw*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]')

    to guide us in finding files that either need the <linux/prefetch.h>
    inclusion, or have it despite not needing it.

    There are more of them around (mostly network drivers), but this gets
    many core ones.

    Reported-by: Stephen Rothwell
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

20 May, 2011

1 commit

  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (107 commits)
    perf stat: Add more cache-miss percentage printouts
    perf stat: Add -d -d and -d -d -d options to show more CPU events
    ftrace/kbuild: Add recordmcount files to force full build
    ftrace: Add self-tests for multiple function trace users
    ftrace: Modify ftrace_set_filter/notrace to take ops
    ftrace: Allow dynamically allocated function tracers
    ftrace: Implement separate user function filtering
    ftrace: Free hash with call_rcu_sched()
    ftrace: Have global_ops store the functions that are to be traced
    ftrace: Add ops parameter to ftrace_startup/shutdown functions
    ftrace: Add enabled_functions file
    ftrace: Use counters to enable functions to trace
    ftrace: Separate hash allocation and assignment
    ftrace: Create a global_ops to hold the filter and notrace hashes
    ftrace: Use hash instead for FTRACE_FL_FILTER
    ftrace: Replace FTRACE_FL_NOTRACE flag with a hash of ignored functions
    perf bench, x86: Add alternatives-asm.h wrapper
    x86, 64-bit: Fix copy_[to/from]_user() checks for the userspace address limit
    x86, mem: memset_64.S: Optimize memset by enhanced REP MOVSB/STOSB
    x86, mem: memmove_64.S: Optimize memmove by enhanced REP MOVSB/STOSB
    ...

    Linus Torvalds
     

19 May, 2011

1 commit

  • This patch places every exported symbol in its own section
    (i.e. "___ksymtab+printk"). Thus the linker will use its SORT() directive
    to sort and finally merge all symbols into the right and final section
    (i.e. "__ksymtab").

    Symbol-prefixed archs use an underscore as a prefix for symbols.
    To avoid collisions we use a different character to create the temporary
    section names.

    This work was supported by a hardware donation from the CE Linux Forum.

    Signed-off-by: Alessio Igor Bogani
    Signed-off-by: Rusty Russell (folded in '+' fixup)
    Tested-by: Dirk Behme

    Alessio Igor Bogani
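
    Roughly, each EXPORT_SYMBOL() entry is emitted into a section whose name
    embeds the symbol, and the linker script collects those sections with
    SORT(); a hedged sketch of the idea (the struct is simplified and the
    __EXPORT_SYMBOL_SKETCH name is made up for illustration):

    struct kernel_symbol {                  /* simplified */
            unsigned long value;
            const char *name;
    };

    #define __EXPORT_SYMBOL_SKETCH(sym)                                    \
            static const struct kernel_symbol __ksymtab_##sym              \
            __attribute__((used, section("___ksymtab" "+" #sym)))          \
            = { (unsigned long)&sym, #sym }

    /* the linker script side then sorts and merges the per-symbol
     * sections into the final __ksymtab section, along the lines of:
     *
     *      __start___ksymtab = .;
     *      *(SORT(___ksymtab+*))
     *      __stop___ksymtab = .;
     */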
     

13 May, 2011

1 commit

  • The existing mechanism doesn't really provide
    enough to create the 64-bit "compat" ABI properly in a generic way,
    since the compat ABI is a mix of things where you can re-use the 64-bit
    versions of syscalls and things where you need a compat wrapper.

    To provide this in the most direct way possible, I added two new macros
    to go along with the existing __SYSCALL and __SC_3264 macros: __SC_COMP
    and __SC_COMP_3264. These macros take an additional argument, typically a
    "compat_sys_xxx" function, which is passed to __SYSCALL if you define
    __SYSCALL_COMPAT when including the header, resulting in a pointer to
    the compat function being placed in the generated syscall table.

    The change also adds some missing definitions to <linux/compat.h> so that
    it actually has declarations for all the compat syscalls, since the
    "[nr] = ##call" approach requires proper C declarations for all the
    functions included in the syscall table.

    Finally, compat.c defines compat_sys_sigpending() and
    compat_sys_sigprocmask() even if the underlying architecture doesn't
    request it, which tries to pull in undefined compat_old_sigset_t defines.
    We need to guard those compat syscall definitions with appropriate
    __ARCH_WANT_SYS_xxx ifdefs.

    Acked-by: Arnd Bergmann
    Signed-off-by: Chris Metcalf

    Chris Metcalf
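
    The mechanism is easiest to see from how the macros expand; a sketch of
    the pattern described above (close to, but not necessarily verbatim, the
    asm-generic/unistd.h definitions):

    #ifdef __SYSCALL_COMPAT
    #define __SC_COMP(nr, sys, comp)            __SYSCALL(nr, comp)
    #define __SC_COMP_3264(nr, s32, s64, comp)  __SYSCALL(nr, comp)
    #else
    #define __SC_COMP(nr, sys, comp)            __SYSCALL(nr, sys)
    #define __SC_COMP_3264(nr, s32, s64, comp)  __SC_3264(nr, s32, s64)
    #endif

    /* a syscall-table builder defines __SYSCALL before including the
     * header, e.g.:
     *      #define __SYSCALL(nr, call)  [nr] = (call),
     * so a compat table picks up compat_sys_xxx and a native table sys_xxx */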
     

05 May, 2011

1 commit


27 Apr, 2011

1 commit


08 Apr, 2011

1 commit


05 Apr, 2011

1 commit

  • Introduce:

    static __always_inline bool static_branch(struct jump_label_key *key);

    instead of the old JUMP_LABEL(key, label) macro.

    In this way, jump labels become really easy to use:

    Define:

    struct jump_label_key jump_key;

    Can be used as:

    if (static_branch(&jump_key))
    do unlikely code

    enable/disable via:

    jump_label_inc(&jump_key);
    jump_label_dec(&jump_key);

    that's it!

    For the jump labels disabled case, the static_branch() becomes an
    atomic_read(), and jump_label_inc()/dec() are simply atomic_inc(),
    atomic_dec() operations. We show testing results for this change below.

    Thanks to H. Peter Anvin for suggesting the 'static_branch()' construct.

    Since we now require a 'struct jump_label_key *key', we can store a pointer into
    the jump table addresses. In this way, we can enable/disable jump labels, in
    basically constant time. This change allows us to completely remove the previous
    hashtable scheme. Thanks to Peter Zijlstra for this re-write.

    Testing:

    I ran a series of 'tbench 20' runs 5 times (with reboots) for 3
    configurations, where tracepoints were disabled.

    jump label configured in
    avg: 815.6

    jump label *not* configured in (using atomic reads)
    avg: 800.1

    jump label *not* configured in (regular reads)
    avg: 803.4

    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Jason Baron
    Suggested-by: H. Peter Anvin
    Tested-by: David Daney
    Acked-by: Ralf Baechle
    Acked-by: David S. Miller
    Acked-by: Mathieu Desnoyers
    Signed-off-by: Steven Rostedt

    Jason Baron
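
    Putting the pieces above together, usage looks roughly like this
    (2.6.39-era API; the feature name and toggle helpers are illustrative):

    #include <linux/jump_label.h>

    extern void do_expensive_tracing(void);        /* illustrative */

    static struct jump_label_key my_feature_key;   /* starts disabled */

    void hot_path(void)
    {
            if (static_branch(&my_feature_key)) {
                    /* unlikely code, normally patched out */
                    do_expensive_tracing();
            }
    }

    /* toggled elsewhere, e.g. from a debugfs knob */
    void enable_feature(void)  { jump_label_inc(&my_feature_key); }
    void disable_feature(void) { jump_label_dec(&my_feature_key); }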
     

04 Apr, 2011

1 commit

  • The two percpu helper macros have the section names duplicated. So create
    a new define to merge the two. This also allows arches who need to link
    things more directly themselves to avoid duplicating the input sections in
    their linker script.

    Signed-off-by: Mike Frysinger
    Signed-off-by: Tejun Heo

    Mike Frysinger
     

31 Mar, 2011

1 commit


28 Mar, 2011

1 commit

    The define to use ({0;}) for the !CONFIG_SMP case of WARN_ON_SMP()
    can be confusing. As WARN_ON_SMP() needs to be a nop when
    CONFIG_SMP is not set, must not evaluate its parameters, and must
    work both as a stand-alone statement and inside an if condition,
    we define it to a funky ({0;}).

    A simple "0" will not work as it causes gcc to give the warning that
    the statement has no effect.

    As this strange definition has raised a few eyebrows from some
    major kernel developers, it is wise to document why we create such
    a work of art.

    Cc: Linus Torvalds
    Cc: Alexey Dobriyan
    Signed-off-by: Steven Rostedt

    Steven Rostedt
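
    For reference, the resulting definition in asm-generic/bug.h has this
    shape (comment paraphrased):

    #ifdef CONFIG_SMP
    # define WARN_ON_SMP(x)                 WARN_ON(x)
    #else
    /* a nop that does not evaluate x, produces no "statement has no
     * effect" warning, and still works both stand-alone and inside
     * if (WARN_ON_SMP(x)) */
    # define WARN_ON_SMP(x)                 ({0;})
    #endif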
     

26 Mar, 2011

1 commit


25 Mar, 2011

2 commits

  • Both WARN_ON() and WARN_ON_SMP() should be able to be used in
    an if statement.

    if (WARN_ON_SMP(foo)) { ... }

    Because WARN_ON_SMP() is defined as a do { } while (0) on UP,
    it can not be used this way.

    Convert it to the same form that WARN_ON() is, even when
    CONFIG_SMP is off.

    Signed-off-by: Steven Rostedt
    Acked-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Darren Hart
    Cc: Lai Jiangshan
    Cc: Linus Torvalds
    Cc: Andrew Morton
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
    Percpu allocator honors alignment requests up to PAGE_SIZE, and both the
    percpu addresses in the percpu address space and the translated kernel
    addresses should be aligned accordingly. The calculation of the
    former depends on the alignment of percpu output section in the kernel
    image.

    The linker script macros PERCPU_VADDR() and PERCPU() are used to
    define this output section and the latter takes @align parameter.
    Several architectures are using @align smaller than PAGE_SIZE breaking
    percpu memory alignment.

    This patch removes @align parameter from PERCPU(), renames it to
    PERCPU_SECTION() and makes it always align to PAGE_SIZE. While at it,
    add PCPU_SETUP_BUG_ON() checks such that alignment problems are
    reliably detected and remove percpu alignment comment recently added
    in workqueue.c as the condition would trigger BUG way before reaching
    there.

    For um, this patch raises the alignment of percpu area. As the area
    is in .init, there shouldn't be any noticeable difference.

    This problem was discovered by David Howells while debugging boot
    failure on mn10300.

    Signed-off-by: Tejun Heo
    Acked-by: Mike Frysinger
    Cc: uclinux-dist-devel@blackfin.uclinux.org
    Cc: David Howells
    Cc: Jeff Dike
    Cc: user-mode-linux-devel@lists.sourceforge.net

    Tejun Heo
     

24 Mar, 2011

7 commits

    minix bit operations are only used by the minix filesystem and are of no
    use to other modules, because the byte order of the inode and block
    bitmaps differs from one architecture to another, as shown below:

    m68k:
    big-endian 16bit indexed bitmaps

    h8300, microblaze, s390, sparc, m68knommu:
    big-endian 32 or 64bit indexed bitmaps

    m32r, mips, sh, xtensa:
    big-endian 32 or 64bit indexed bitmaps for big-endian mode
    little-endian bitmaps for little-endian mode

    Others:
    little-endian bitmaps

    In order to move minix bit operations from asm/bitops.h to architecture
    independent code in minix filesystem, this provides two config options.

    CONFIG_MINIX_FS_BIG_ENDIAN_16BIT_INDEXED is only selected by m68k.
    CONFIG_MINIX_FS_NATIVE_ENDIAN is selected by the architectures which use
    native byte order bitmaps (h8300, microblaze, s390, sparc, m68knommu,
    m32r, mips, sh, xtensa). The architectures which always use little-endian
    bitmaps do not select these options.

    Finally, we can remove minix bit operations from asm/bitops.h for all
    architectures.

    Signed-off-by: Akinobu Mita
    Acked-by: Arnd Bergmann
    Acked-by: Greg Ungerer
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Cc: Andreas Schwab
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: Yoshinori Sato
    Cc: Michal Simek
    Cc: "David S. Miller"
    Cc: Hirokazu Takata
    Acked-by: Ralf Baechle
    Acked-by: Paul Mundt
    Cc: Chris Zankel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • As the result of conversions, there are no users of ext2 non-atomic bit
    operations except for ext2 filesystem itself. Now we can put them into
    architecture independent code in ext2 filesystem, and remove from
    asm/bitops.h for all architectures.

    Signed-off-by: Akinobu Mita
    Cc: Jan Kara
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
    As preparation for removing ext2 non-atomic bit operations from
    asm/bitops.h, this converts ext2 non-atomic bit operations to
    little-endian bit operations.

    Signed-off-by: Akinobu Mita
    Acked-by: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • Introduce little-endian bit operations to the big-endian architectures
    which do not have native little-endian bit operations and the
    little-endian architectures. (alpha, avr32, blackfin, cris, frv, h8300,
    ia64, m32r, mips, mn10300, parisc, sh, sparc, tile, x86, xtensa)

    These architectures can just include generic implementation
    (asm-generic/bitops/le.h).

    Signed-off-by: Akinobu Mita
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Mikael Starvik
    Cc: David Howells
    Cc: Yoshinori Sato
    Cc: "Luck, Tony"
    Cc: Ralf Baechle
    Cc: Kyle McMartin
    Cc: Matthew Wilcox
    Cc: Grant Grundler
    Cc: Paul Mundt
    Cc: Kazumoto Kojima
    Cc: Hirokazu Takata
    Cc: "David S. Miller"
    Cc: Chris Zankel
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Acked-by: Hans-Christian Egtvedt
    Acked-by: "H. Peter Anvin"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • This makes the little-endian bitops take any pointer types by changing the
    prototypes and adding casts in the preprocessor macros.

    That would seem to at least make all the filesystem code happier, and they
    can continue to do just something like

    #define ext2_set_bit __test_and_set_bit_le

    (or whatever the exact sequence ends up being).

    Signed-off-by: Akinobu Mita
    Cc: Arnd Bergmann
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Mikael Starvik
    Cc: David Howells
    Cc: Yoshinori Sato
    Cc: "Luck, Tony"
    Cc: Ralf Baechle
    Cc: Kyle McMartin
    Cc: Matthew Wilcox
    Cc: Grant Grundler
    Cc: Paul Mundt
    Cc: Kazumoto Kojima
    Cc: Hirokazu Takata
    Cc: "David S. Miller"
    Cc: Chris Zankel
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Hans-Christian Egtvedt
    Cc: "H. Peter Anvin"
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
    As preparation for providing little-endian bitops for all architectures,
    this renames the generic implementation of little-endian bitops
    (removing the "generic_" prefix and the "_le" postfix):

    s/generic_find_next_le_bit/find_next_bit_le/
    s/generic_find_next_zero_le_bit/find_next_zero_bit_le/
    s/generic_find_first_zero_le_bit/find_first_zero_bit_le/
    s/generic___test_and_set_le_bit/__test_and_set_bit_le/
    s/generic___test_and_clear_le_bit/__test_and_clear_bit_le/
    s/generic_test_le_bit/test_bit_le/
    s/generic___set_le_bit/__set_bit_le/
    s/generic___clear_le_bit/__clear_bit_le/
    s/generic_test_and_set_le_bit/test_and_set_bit_le/
    s/generic_test_and_clear_le_bit/test_and_clear_bit_le/

    Signed-off-by: Akinobu Mita
    Acked-by: Arnd Bergmann
    Acked-by: Hans-Christian Egtvedt
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Cc: Andreas Schwab
    Cc: Greg Ungerer
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • This patch series introduces little-endian bit operations in asm/bitops.h
    for all architectures and converts all ext2 non-atomic and minix bit
    operations to use little-endian bit operations. It enables us to remove
    ext2 non-atomic and minix bit operations from asm/bitops.h. The reason
    they should be removed from asm/bitops.h is as follows:

    For ext2 non-atomic bit operations, they are used for little-endian byte
    order bitmap access by some filesystems and modules. But using ext2_*()
    functions on a module other than ext2 filesystem makes some feel strange.

    For minix bit operations, they are only used by the minix filesystem and
    are of no use to other modules, because the byte order of the inode and
    block bitmaps is different on each architecture.
    This patch:

    In order to make the forthcoming changes smaller, this merges macro
    definitions in asm-generic/bitops/le.h for big-endian and little-endian as
    much as possible.

    This also removes unused BITOP_WORD macro.

    Signed-off-by: Akinobu Mita
    Acked-by: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

23 Mar, 2011

2 commits


21 Mar, 2011

1 commit

  • It is frequently useful to sync a single file system, instead of all
    mounted file systems via sync(2):

    - On machines with many mounts, it is not at all uncommon for some of
    them to hang (e.g. unresponsive NFS server). sync(2) will get stuck on
    those and may never get to the one you do care about (e.g., /).
    - Some applications write lots of data to the file system and then
    want to make sure it is flushed to disk. Calling fsync(2) on each
    file introduces unnecessary ordering constraints that result in a large
    amount of sub-optimal writeback/flush/commit behavior by the file
    system.

    There are currently two ways (that I know of) to sync a single super_block:

    - BLKFLSBUF ioctl on the block device: That also invalidates the bdev
    mapping, which isn't usually desirable, and doesn't work for non-block
    file systems.
    - 'mount -o remount,rw' will call sync_filesystem as an artifact of the
    current implementation. Relying on this little-known side effect for
    something like data safety sounds foolish.

    Both of these approaches require root privileges, which some applications
    do not have (nor should they need?) given that sync(2) is an unprivileged
    operation.

    This patch introduces a new system call syncfs(2) that takes an fd and
    syncs only the file system it references. Maybe someday we can

    $ sync /some/path

    and not get

    sync: ignoring all arguments

    The syscall is motivated by comments by Al and Christoph at the last LSF.
    syncfs(2) seems like an appropriate name given statfs(2).

    A similar ioctl was also proposed a while back, see
    http://marc.info/?l=linux-fsdevel&m=127970513829285&w=2

    Signed-off-by: Sage Weil
    Signed-off-by: Al Viro

    Sage Weil
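
    From userspace the new call is a one-liner; a minimal sketch using the
    raw syscall number (glibc only grew a syncfs() wrapper later, and
    __NR_syncfs needs kernel headers that know about the syscall):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(int argc, char **argv)
    {
            int fd = open(argc > 1 ? argv[1] : "/", O_RDONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (syscall(__NR_syncfs, fd) != 0) {
                    perror("syncfs");
                    return 1;
            }
            return 0;
    }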