07 Jan, 2009

2 commits

  • …/git/tip/linux-2.6-tip

    * 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    rcu: fix rcutorture bug
    rcu: eliminate synchronize_rcu_xxx macro
    rcu: make treercu safe for suspend and resume
    rcu: fix rcutree grace-period-latency bug on small systems
    futex: catch certain assymetric (get|put)_futex_key calls
    futex: make futex_(get|put)_key() calls symmetric
    locking, percpu counters: introduce separate lock classes
    swiotlb: clean up EXPORT_SYMBOL usage
    swiotlb: remove unnecessary declaration
    swiotlb: replace architecture-specific swiotlb.h with linux/swiotlb.h
    swiotlb: add support for systems with highmem
    swiotlb: store phys address in io_tlb_orig_addr array
    swiotlb: add hwdev to swiotlb_phys_to_bus() / swiotlb_sg_to_bus()

    Linus Torvalds
     
  • Signed-off-by: Kay Sievers
    Signed-off-by: Greg Kroah-Hartman

    Kay Sievers
     

06 Jan, 2009

2 commits

  • This adds swiotlb_map_page and swiotlb_unmap_page to lib/swiotlb.c and
    removes the IA64 and X86 versions of swiotlb_map_page and
    swiotlb_unmap_page (a sketch of the page-based interface follows after
    these entries).

    This also removes unnecessary swiotlb_map_single, swiotlb_map_single_attrs,
    swiotlb_unmap_single and swiotlb_unmap_single_attrs.

    Signed-off-by: FUJITA Tomonori
    Acked-by: Tony Luck
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
     
  • This converts X86 and IA64 to use include/linux/dma-mapping.h.

    It's a bit large but pretty boring. The major change for X86 is
    converting 'int dir' to 'enum dma_data_direction dir' in DMA mapping
    operations. The major change for IA64 is using map_page and
    unmap_page instead of map_single and unmap_single.

    Signed-off-by: FUJITA Tomonori
    Acked-by: Tony Luck
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
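
    A minimal sketch of where these two changes point, assuming the
    swiotlb_map_page() prototype follows the dma_map_page() convention of
    this period; the prototype and the map_single_via_page() helper are
    illustrative, not code from the patches:

        #include <linux/dma-mapping.h>
        #include <linux/mm.h>            /* virt_to_page(), offset_in_page() */

        struct dma_attrs;

        /* Assumed prototype of the new page-based entry point. */
        extern dma_addr_t swiotlb_map_page(struct device *dev,
                                           struct page *page,
                                           unsigned long offset, size_t size,
                                           enum dma_data_direction dir,
                                           struct dma_attrs *attrs);

        /*
         * Hypothetical helper: the old map_single semantics expressed via
         * the page-based interface.  Note that 'dir' is the typed
         * enum dma_data_direction rather than a bare 'int dir'.
         */
        static dma_addr_t map_single_via_page(struct device *dev, void *ptr,
                                              size_t size,
                                              enum dma_data_direction dir)
        {
                return swiotlb_map_page(dev, virt_to_page(ptr),
                                        offset_in_page(ptr), size, dir, NULL);
        }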
     

03 Jan, 2009

1 commit

  • Impact: cleanup, reduce kernel size a bit

    The current kernel build warns:

    WARNING: vmlinux.o(.text+0x11458): Section mismatch in reference from the function swiotlb_alloc_boot() to the function .init.text:__alloc_bootmem_low()
    The function swiotlb_alloc_boot() references
    the function __init __alloc_bootmem_low().
    This is often because swiotlb_alloc_boot lacks a __init
    annotation or the annotation of __alloc_bootmem_low is wrong.

    WARNING: vmlinux.o(.text+0x1011f2): Section mismatch in reference from the function swiotlb_late_init_with_default_size() to the function .init.text:__alloc_bootmem_low()
    The function swiotlb_late_init_with_default_size() references
    the function __init __alloc_bootmem_low().
    This is often because swiotlb_late_init_with_default_size lacks a __init
    annotation or the annotation of __alloc_bootmem_low is wrong.

    and indeed the functions calling __alloc_bootmem_low() can be marked
    __init as well.

    Signed-off-by: Roland Dreier
    Signed-off-by: Ingo Molnar

    Roland Dreier
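
    The shape of the fix is to mark the callers __init so they live in
    .init.text alongside __alloc_bootmem_low(); a minimal sketch, with the
    signature and body assumed for illustration:

        #include <linux/init.h>
        #include <linux/bootmem.h>

        /*
         * __alloc_bootmem_low() is __init, so any caller that survives past
         * boot triggers the section-mismatch warnings quoted above.  Marking
         * the caller __init resolves the mismatch and lets the code be
         * discarded after boot, hence the small size reduction.
         */
        void * __init swiotlb_alloc_boot(size_t size, unsigned long nslabs)
        {
                return __alloc_bootmem_low(size, PAGE_SIZE, 0);
        }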
     

17 Nov, 2008

1 commit

  • Impact: fix DMA buffer allocation coherency bug in certain configs

    This patch fixes swiotlb to use dev->coherent_dma_mask in
    swiotlb_alloc_coherent().

    coherent_dma_mask is a subset of dma_mask (equal to it most of
    the time), enumerating the address range that a given device
    is able to DMA to/from in a cache-coherent way.

    Currently, however, swiotlb uses dev->dma_mask in alloc_coherent()
    implicitly via address_needs_mapping(), even though alloc_coherent is
    really supposed to use coherent_dma_mask (see the sketch after this
    entry).

    This bug could break drivers that use a smaller coherent_dma_mask than
    dma_mask (though the current code works for the majority that use the
    same mask for coherent_dma_mask and dma_mask).

    Signed-off-by: FUJITA Tomonori
    Cc: tony.luck@intel.com
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
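
    A minimal sketch of the mask selection described above; the helper name
    and the 32-bit fallback are assumptions for illustration:

        #include <linux/dma-mapping.h>

        /*
         * alloc_coherent must validate the buffer against the device's
         * coherent_dma_mask, not dma_mask.  The two are usually equal, but
         * when coherent_dma_mask is smaller, checking dma_mask can hand the
         * device a buffer it cannot reach coherently.
         */
        static u64 coherent_alloc_mask(struct device *hwdev)
        {
                u64 mask = DMA_BIT_MASK(32);   /* assumed swiotlb default */

                if (hwdev && hwdev->coherent_dma_mask)
                        mask = hwdev->coherent_dma_mask;

                return mask;
        }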
     

24 Oct, 2008

1 commit

  • swiotlb_alloc_coherent calls panic() when the allocated swiotlb pages do
    not fit a device's dma mask. However, an alloc_coherent failure is not a
    disaster at all. AFAIK, none of the other x86 and IA64 IOMMU
    implementations crash in case of alloc_coherent failure.

    There are some drivers that don't check for alloc_coherent failure, but
    not many (about ten, and I've already started to fix some of them).
    alloc_coherent returns NULL in case of failure, so these guilty drivers
    most likely crash immediately anyway; swiotlb doesn't need to call
    panic() just for them (a sketch of the failure path follows this entry).

    Reported-by: Takashi Iwai
    Signed-off-by: FUJITA Tomonori
    Tested-by: Takashi Iwai
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
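
    In sketch form, the change amounts to replacing the panic() with a
    warning and a NULL return so callers see an ordinary allocation failure;
    the helper below and its names are assumptions:

        #include <linux/device.h>
        #include <linux/dma-mapping.h>

        /*
         * Returns true if the allocated buffer is usable under the device's
         * mask.  On failure the caller should warn, free the buffer and
         * return NULL from alloc_coherent instead of calling panic().
         */
        static bool coherent_buffer_ok(struct device *hwdev, u64 dma_mask,
                                       dma_addr_t dev_addr, size_t size)
        {
                if (dev_addr + size - 1 <= dma_mask)
                        return true;

                dev_warn(hwdev,
                         "coherent allocation not addressable: mask %#llx, addr %#llx\n",
                         (unsigned long long)dma_mask,
                         (unsigned long long)dev_addr);
                return false;
        }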
     

27 Jul, 2008

1 commit

  • Add per-device dma_mapping_ops support for CONFIG_X86_64, as the POWER
    architecture does:

    This enables us to cleanly fix the Calgary IOMMU issue where some
    devices are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).

    I think that per-device dma_mapping_ops support would also be helpful
    for the KVM people to support PCI passthrough, but Andi thinks that it
    makes PCI passthrough more difficult to support (see the above thread),
    so I CC'ed this to the KVM camp. Comments are appreciated.

    A pointer to dma_mapping_ops is added to struct dev_archdata. If the
    pointer is non-NULL, the DMA operations in asm/dma-mapping.h use it; if
    it's NULL, the system-wide dma_ops pointer is used as before (see the
    sketch after this entry).

    If it's useful for the KVM people, I plan to implement a mechanism to
    register a hook that is called when a new PCI (or DMA-capable) device is
    created (it works with hotplugging). That would enable IOMMUs to set up
    an appropriate dma_mapping_ops per device.

    The major obstacle is that dma_mapping_error doesn't take a pointer to
    the device, unlike the other DMA operations, so x86 can't have
    dma_mapping_ops per device. Note that all the POWER IOMMUs use the same
    dma_mapping_error function, so this is not a problem for POWER, but x86
    IOMMUs use different dma_mapping_error functions.

    The first patch adds the device argument to dma_mapping_error. The
    patch is trivial but large, since it touches lots of drivers and
    dma-mapping.h in every architecture.

    This patch:

    dma_mapping_error() doesn't take a pointer to the device, unlike the
    other DMA operations, so we can't have dma_mapping_ops per device.

    Note that POWER already has dma_mapping_ops per device, but all the
    POWER IOMMUs use the same dma_mapping_error function; x86 IOMMUs use
    different dma_mapping_error functions and therefore need the device
    argument.

    [akpm@linux-foundation.org: fix sge]
    [akpm@linux-foundation.org: fix svc_rdma]
    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: fix bnx2x]
    [akpm@linux-foundation.org: fix s2io]
    [akpm@linux-foundation.org: fix pasemi_mac]
    [akpm@linux-foundation.org: fix sdhci]
    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: fix sparc]
    [akpm@linux-foundation.org: fix ibmvscsi]
    Signed-off-by: FUJITA Tomonori
    Cc: Muli Ben-Yehuda
    Cc: Andi Kleen
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Avi Kivity
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
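
    A schematic sketch of the lookup described above: a per-device ops
    pointer in dev_archdata with a fallback to the global table, plus
    dma_mapping_error() taking the device argument so it can dispatch the
    same way. Structure layouts and names are simplified assumptions (in
    the kernel, dev_archdata lives in the arch headers and is embedded in
    the real struct device), not the exact x86 code:

        #include <linux/types.h>                 /* dma_addr_t */

        struct device;

        struct dma_mapping_ops {
                int (*mapping_error)(struct device *dev, dma_addr_t dma_addr);
                /* map/unmap/sync operations elided */
        };

        extern struct dma_mapping_ops *dma_ops;  /* system-wide default */

        struct dev_archdata {
                struct dma_mapping_ops *dma_ops; /* non-NULL: per-device ops */
        };

        struct device {
                struct dev_archdata archdata;
                /* rest of struct device elided */
        };

        static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
        {
                if (!dev || !dev->archdata.dma_ops)
                        return dma_ops;          /* fall back to global ops */
                return dev->archdata.dma_ops;
        }

        /* With a device argument, error checking can dispatch per device. */
        static inline int dma_mapping_error(struct device *dev,
                                            dma_addr_t dma_addr)
        {
                struct dma_mapping_ops *ops = get_dma_ops(dev);

                if (ops->mapping_error)
                        return ops->mapping_error(dev, dma_addr);
                return dma_addr == 0;            /* placeholder bad address */
        }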
     

29 Apr, 2008

3 commits

  • Change all ia64 machvecs to use the new dma_*map*_attrs() interfaces.
    Implement the old dma_*map_*() interfaces in terms of the corresponding
    new interfaces (a sketch of that wrapper pattern follows at the end of
    this section). For ia64/sn, make use of one dma attribute,
    DMA_ATTR_WRITE_BARRIER. Introduce swiotlb_*map*_attrs() functions.

    Signed-off-by: Arthur Kepner
    Cc: Tony Luck
    Cc: Jesse Barnes
    Cc: Jes Sorensen
    Cc: Randy Dunlap
    Cc: Roland Dreier
    Cc: James Bottomley
    Cc: David Miller
    Cc: Benjamin Herrenschmidt
    Cc: Grant Grundler
    Cc: Michael Ellerman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arthur Kepner
     
  • iommu_is_span_boundary in lib/iommu-helper.c was exported for PARISC IOMMUs
    (commit 3715863aa142c4f4c5208f5f3e5e9bac06006d2f). SWIOTLB can use it instead
    of the homegrown function.

    Signed-off-by: FUJITA Tomonori
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: H. Peter Anvin
    Cc: Tony Luck
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • There's a pointlessly braced block of code in there. Remove the braces and
    save a tabstop.

    Cc: Andi Kleen
    Cc: FUJITA Tomonori
    Cc: Jan Beulich
    Cc: Tony Luck
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
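
    Two short sketches for the first two entries above. First, the "old
    interface in terms of the new" pattern from the dma_*map*_attrs()
    conversion; the wrapper name and the *_attrs() signature are
    assumptions for illustration:

        #include <linux/dma-mapping.h>

        /* The old call becomes a thin wrapper passing a NULL attribute set. */
        static inline dma_addr_t my_dma_map_single(struct device *dev,
                                                   void *ptr, size_t size,
                                                   enum dma_data_direction dir)
        {
                return dma_map_single_attrs(dev, ptr, size, dir, NULL);
        }

    Second, iommu_is_span_boundary(): its logic is essentially the same as
    the swiotlb-local check sketched under the 06 Feb, 2008 entry below.
    Given a starting slot index, a slot count and the offset of slot 0
    relative to a power-of-two boundary, it reports whether the range would
    cross that boundary, so swiotlb can simply call it instead of keeping
    its own copy.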
     

14 Mar, 2008

1 commit

  • Commit 681cc5cd3efbeafca6386114070e0bfb5012e249 ("iommu sg merging:
    swiotlb: respect the segment boundary limits") introduced two
    possibilities for entering an endless loop in lib/swiotlb.c:

    - if max_slots is zero (possible if mask is ~0UL)
    - if the number of slots requested fits into a swiotlb segment, but is
      too large for the part of the segment which remains after considering
      offset_slots

    This patch fixes both cases (a sketch of the first follows this entry).

    Signed-off-by: Jan Beulich
    Cc: FUJITA Tomonori
    Cc: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
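
    A sketch of the guard for the first case: when the boundary mask is
    ~0UL, mask + 1 wraps to zero and a naive conversion would make
    max_slots zero, so the slot search can never terminate. Names follow
    the swiotlb code of the time, but the helper itself is an assumption:

        #include <linux/kernel.h>        /* ALIGN() */

        #define IO_TLB_SHIFT 11          /* swiotlb slot size: 1 << 11 = 2 KB */

        /*
         * Convert a segment boundary mask into a count of swiotlb slots
         * without letting mask == ~0UL wrap max_slots around to zero.
         */
        static unsigned long slots_for_mask(unsigned long mask)
        {
                if (mask + 1 == 0)       /* ~0UL: effectively no boundary */
                        return 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);

                return ALIGN(mask + 1, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
        }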
     

06 Feb, 2008

1 commit

  • This patch makes swiotlb not allocate a memory area that spans the
    LLD's segment boundary.

    is_span_boundary() judges whether a memory area spans the LLD's segment
    boundary. If map_single finds such an area, it tries to find the next
    available memory area instead (see the sketch after this entry).

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: Jens Axboe
    Cc: Greg KH
    Cc: Jeff Garzik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
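
    A minimal sketch of the check described above, with the segment
    boundary expressed as a power-of-two number of swiotlb slots
    (max_slots) and offset_slots giving the offset of slot 0 within a
    segment; the names mirror the description, but the body is
    illustrative:

        /*
         * True if nr_slots slots starting at slot 'index' would cross a
         * segment boundary, i.e. would not fit in what remains of the
         * current max_slots-sized segment.  max_slots must be a power of
         * two; offset_slots is the offset of slot 0 within a segment.
         */
        static int is_span_boundary(unsigned int index, unsigned int nr_slots,
                                    unsigned long offset_slots,
                                    unsigned long max_slots)
        {
                unsigned long offset = (offset_slots + index) & (max_slots - 1);

                return offset + nr_slots > max_slots;
        }

    When map_single sees such a span, it skips forward and retries with the
    next candidate area, as described above.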
     
