28 Sep, 2011

1 commit

  • There are numerous broken references to Documentation files (in other
    Documentation files, in comments, etc.). These broken references are
    caused by typos in the references and by renames or removals of the
    Documentation files. Some broken references are simply odd.

    Fix these broken references, sometimes by dropping the irrelevant text
    they were part of.

    Signed-off-by: Paul Bolle
    Signed-off-by: Jiri Kosina

    Paul Bolle
     

11 Aug, 2010

2 commits

  • Architectures implement dma_is_consistent() in different ways (some
    misinterpret the definition of the API in DMA-API.txt), so it hasn't been
    very useful for drivers. We have only one in-tree user of the API, and it
    is unlikely that out-of-tree drivers use it.

    Even if we fixed dma_is_consistent() in some architectures, it wouldn't
    look very useful: it was invented long ago for old systems that can't
    allocate coherent memory at all. It's better to export only APIs that are
    definitely necessary for drivers.

    Let's remove this API.

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Reviewed-by: Konrad Rzeszutek Wilk
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • dma_get_cache_alignment returns the minimum DMA alignment. Architectures
    define it as ARCH_DMA_MINALIGN (formerly ARCH_KMALLOC_MINALIGN), so we
    can unify the dma_get_cache_alignment implementations.

    Note that some architectures implement dma_get_cache_alignment wrongly:
    dma_get_cache_alignment() should return the minimum DMA alignment, so
    fully coherent architectures should return 1. This patch also fixes that
    issue (a sketch follows this entry).

    Signed-off-by: FUJITA Tomonori
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
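
    A minimal sketch of the unified helper described above, assuming that
    only architectures needing a larger minimum DMA alignment define
    ARCH_DMA_MINALIGN (an illustration, not necessarily the exact patch):

        /*
         * Unified dma_get_cache_alignment(): architectures with a larger
         * minimum DMA alignment define ARCH_DMA_MINALIGN; fully coherent
         * architectures leave it undefined and report an alignment of 1.
         */
        static inline int dma_get_cache_alignment(void)
        {
        #ifdef ARCH_DMA_MINALIGN
                return ARCH_DMA_MINALIGN;
        #else
                return 1;
        #endif
        }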
     

16 Dec, 2009

1 commit

  • dma_mask is, when interpreted as an address, the last valid byte, and
    hence the comparison must also be done against the last valid byte of the
    buffer in question (see the sketch below).

    Also fix the open-coded instances in lib/swiotlb.c.

    Signed-off-by: Jan Beulich
    Cc: FUJITA Tomonori
    Cc: Becky Bruce
    Cc: "Luck, Tony"
    Cc: Benjamin Herrenschmidt
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
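
    A hedged sketch of the kind of check described above; the helper name and
    types are illustrative stand-ins, not the ones used in the patch:

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        typedef uint64_t dma_addr_t;

        /*
         * A buffer fits under a DMA mask only if its last byte
         * (addr + size - 1) is covered by the mask, because the mask itself
         * denotes the last valid byte, not the buffer length.
         */
        static bool buffer_fits_dma_mask(dma_addr_t addr, size_t size,
                                         uint64_t dma_mask)
        {
                return addr + size - 1 <= dma_mask;
        }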
     

17 Nov, 2009

1 commit

  • This kills the bad_dma_address variable, the old mechanism that lets
    IOMMU drivers make dma_mapping_error() work in an IOMMU-specific
    way.

    The bad_dma_address variable was introduced to enable IOMMU drivers
    to make dma_mapping_error() work in an IOMMU-specific way.
    However, it can't handle systems that use both swiotlb and a HW
    IOMMU, so we introduced dma_map_ops->mapping_error to solve that
    case.

    Intel VT-d, GART, and swiotlb already use
    dma_map_ops->mapping_error. Calgary, AMD IOMMU, and nommu use
    zero for an error dma address. This adds DMA_ERROR_CODE and
    converts them to use it (as SPARC and POWER do). A sketch of the
    resulting error check follows this entry.

    Signed-off-by: FUJITA Tomonori
    Acked-by: Jesse Barnes
    Cc: muli@il.ibm.com
    Cc: joerg.roedel@amd.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
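
    A hedged sketch of how such an error check can be layered, using a
    simplified ops structure and sentinel (the names and signatures here are
    illustrative, not the kernel's exact ones):

        #include <stdbool.h>
        #include <stdint.h>

        typedef uint64_t dma_addr_t;

        /* Shared sentinel for implementations that report failure with a
         * fixed "error" address, as DMA_ERROR_CODE does. */
        #define SKETCH_DMA_ERROR_CODE ((dma_addr_t)0)

        struct sketch_dma_map_ops {
                /* Optional hook for IOMMUs with their own error encoding. */
                int (*mapping_error)(dma_addr_t dma_addr);
        };

        static bool sketch_mapping_failed(const struct sketch_dma_map_ops *ops,
                                          dma_addr_t dma_addr)
        {
                if (ops->mapping_error)
                        return ops->mapping_error(dma_addr) != 0;
                return dma_addr == SKETCH_DMA_ERROR_CODE;
        }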
     

08 Nov, 2009

1 commit

  • Rather than forcing GFP flags and the DMA mask to be inconsistent,
    GFP flags should be determined even for the fallback device
    through dma_alloc_coherent_mask()/dma_alloc_coherent_gfp_flags()
    (see the sketch below).

    This restores 64-bit behavior as it was prior to commits
    8965eb19386fdf5ccd0ef8b02593eb8560aa3416 and
    4a367f3a9dbf2e7ffcee4702203479809236ee6e (not sure why there are
    two of them), where GFP_DMA was forced on for 32-bit, but not
    for 64-bit, with the slight adjustment that afaict even 32-bit
    doesn't need this without CONFIG_ISA.

    Signed-off-by: Jan Beulich
    Acked-by: Takashi Iwai
    LKML-Reference:
    Signed-off-by: Ingo Molnar
    Signed-off-by: Jesse Barnes

    Jan Beulich
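
    A hedged sketch of the idea: the GFP zone flags are derived from the
    device's coherent DMA mask rather than hard-coded. The helper and flag
    values below are illustrative, not the kernel's actual
    dma_alloc_coherent_gfp_flags():

        #include <stdint.h>

        #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

        /* Illustrative stand-ins for the kernel's GFP zone modifiers. */
        #define SKETCH_GFP_DMA   0x01u
        #define SKETCH_GFP_DMA32 0x02u

        /* Pick zone flags so the allocation can satisfy the coherent mask. */
        static unsigned int sketch_coherent_gfp_flags(uint64_t coherent_dma_mask)
        {
                if (coherent_dma_mask <= DMA_BIT_MASK(24))
                        return SKETCH_GFP_DMA;
                if (coherent_dma_mask <= DMA_BIT_MASK(32))
                        return SKETCH_GFP_DMA32;
                return 0;       /* any zone is fine for wider masks */
        }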
     

28 Jul, 2009

2 commits


19 Jun, 2009

1 commit


15 Jun, 2009

2 commits


07 Apr, 2009

2 commits


17 Mar, 2009

1 commit


05 Mar, 2009

1 commit


30 Jan, 2009

1 commit

  • Move DMA-mapping.txt to Documentation/PCI/.

    DMA-mapping.txt was supposed to be moved from Documentation/ to
    Documentation/PCI/. The 00-INDEX files in those two directories
    were updated, along with a few other text files, but the file
    itself somehow escaped being moved, so move it and update more
    text files and source files with its new location.

    Signed-off-by: Randy Dunlap
    Acked-by: Greg Kroah-Hartman
    cc: Jesse Barnes
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     

06 Jan, 2009

3 commits

  • This converts X86 and IA64 to use include/linux/dma-mapping.h.

    It's a bit large but pretty boring. The major change for X86 is
    converting 'int dir' to 'enum dma_data_direction dir' in the DMA mapping
    operations. The major change for IA64 is using map_page and unmap_page
    instead of map_single and unmap_single.

    Signed-off-by: FUJITA Tomonori
    Acked-by: Tony Luck
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
     
  • This patch converts dma_map_single and dma_unmap_single to use
    map_page and unmap_page respectively, and removes the now-unnecessary
    map_single and unmap_single hooks in struct dma_mapping_ops (see the
    sketch below).

    This leaves intel-iommu's dma_map_single and dma_unmap_single, since
    IA64 uses them. They will be removed after the unification.

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
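
    A hedged sketch of the core of such a conversion: a single-buffer mapping
    is expressed in terms of the page-based hook. The types, helpers, and
    signatures below are simplified stand-ins, not the kernel's exact ones:

        #include <stddef.h>
        #include <stdint.h>

        typedef uint64_t dma_addr_t;
        enum dma_data_direction {
                DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_NONE
        };
        struct device;                  /* opaque in this sketch */
        struct page;                    /* opaque in this sketch */

        #define SKETCH_PAGE_SIZE 4096UL

        /* Illustrative page-based hook and virt-to-page lookup. */
        extern dma_addr_t sketch_map_page(struct device *dev, struct page *page,
                                          unsigned long offset, size_t size,
                                          enum dma_data_direction dir);
        extern struct page *sketch_virt_to_page(const void *addr);

        /*
         * dma_map_single expressed via map_page: split the virtual address
         * into its page and the offset within that page, then forward the
         * request to the page-based hook.
         */
        dma_addr_t sketch_dma_map_single(struct device *dev, void *ptr,
                                         size_t size,
                                         enum dma_data_direction dir)
        {
                unsigned long offset = (uintptr_t)ptr % SKETCH_PAGE_SIZE;

                return sketch_map_page(dev, sketch_virt_to_page(ptr), offset,
                                       size, dir);
        }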
     
  • This patch adds map_page and unmap_page to struct dma_mapping_ops.

    This is a preparation for the struct dma_mapping_ops unification. We use
    map_page and unmap_page instead of map_single and unmap_single.

    We will remove the map_single and unmap_single hooks in the last patch in
    this patchset.

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
     

31 Dec, 2008

1 commit

  • * 'core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (63 commits)
    stacktrace: provide save_stack_trace_tsk() weak alias
    rcu: provide RCU options on non-preempt architectures too
    printk: fix discarding message when recursion_bug
    futex: clean up futex_(un)lock_pi fault handling
    "Tree RCU": scalable classic RCU implementation
    futex: rename field in futex_q to clarify single waiter semantics
    x86/swiotlb: add default swiotlb_arch_range_needs_mapping
    x86/swiotlb: add default phys<->bus conversion
    x86: unify pci iommu setup and allow swiotlb to compile for 32 bit
    x86: add swiotlb allocation functions
    swiotlb: consolidate swiotlb info message printing
    swiotlb: support bouncing of HighMem pages
    swiotlb: factor out copy to/from device
    swiotlb: add arch hook to force mapping
    swiotlb: allow architectures to override phys<->bus<->phys conversions
    swiotlb: add comment where we handle the overflow of a dma mask on 32 bit
    rcu: fix rcutorture behavior during reboot
    resources: skip sanity check of busy resources
    swiotlb: move some definitions to header
    swiotlb: allow architectures to override swiotlb pool allocation
    ...

    Fix up trivial conflicts in
    arch/x86/kernel/Makefile
    arch/x86/mm/init_32.c
    include/linux/hardirq.h
    as per Ingo's suggestions.

    Linus Torvalds
     

18 Dec, 2008

1 commit


02 Dec, 2008

1 commit

  • This removes the ifdef CONFIG_X86_64 in dma_mapping_error():

    1) Xen people plan to use swiotlb on X86_32 for Dom0 support. swiotlb
    uses ops->mapping_error, so X86_32 also needs to check
    ops->mapping_error.

    2) Removing an #ifdef hack is almost always a good thing.

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
     

30 Nov, 2008

1 commit

  • Devices like b44 ethernet can't DMA from addresses above 1GB. The driver
    handles this case by falling back to a GFP_DMA allocation, but to detect
    the problem it needs an indication from dma_mapping_error.
    The bug is triggered by using a VMSPLIT option of 2G/2G. A sketch of the
    driver-side pattern follows this entry.

    Signed-off-by: Thomas Bogendoerfer
    Acked-by: FUJITA Tomonori
    Signed-off-by: Ingo Molnar

    Thomas Bogendoerfer
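
    A hedged sketch of the driver-side pattern described above; the types and
    helper names are simplified stand-ins, not the b44 driver's actual code:

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        typedef uint64_t dma_addr_t;
        struct device;

        /* Illustrative stand-ins for the mapping and allocation primitives. */
        extern dma_addr_t sketch_dma_map(struct device *dev, void *buf,
                                         size_t len);
        extern bool sketch_dma_mapping_error(struct device *dev,
                                             dma_addr_t addr);
        extern void *sketch_alloc_low_memory(size_t len); /* e.g. GFP_DMA */

        /*
         * Map a buffer; if the mapping layer signals through the error check
         * that the address is unusable for this device, retry with a bounce
         * buffer from low memory. Returns the buffer actually mapped, or
         * NULL if even the fallback allocation failed.
         */
        void *sketch_map_or_bounce(struct device *dev, void *buf, size_t len,
                                   dma_addr_t *out_addr)
        {
                *out_addr = sketch_dma_map(dev, buf, len);
                if (!sketch_dma_mapping_error(dev, *out_addr))
                        return buf;             /* original buffer is fine */

                buf = sketch_alloc_low_memory(len);
                if (buf)
                        *out_addr = sketch_dma_map(dev, buf, len);
                return buf;
        }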
     

24 Oct, 2008

1 commit

  • dma_alloc_coherent (include/asm-x86/dma-mapping.h) first tries an
    allocation without GFP_DMA, and if the allocated address does not fit
    the device's coherent_dma_mask, it retries with a GFP_DMA allocation.
    This is because dma_alloc_coherent avoids the precious GFP_DMA zone if
    possible. This is also how the old dma_alloc_coherent
    (arch/x86/kernel/pci-dma.c) works.

    However, if the coherent_dma_mask of a device is 24 bits, there is no
    point in going through the GFP_DMA retry mechanism above; we had better
    use GFP_DMA in the first place (a sketch follows this entry).

    Signed-off-by: FUJITA Tomonori
    Tested-by: Takashi Iwai
    Signed-off-by: Ingo Molnar

    FUJITA Tomonori
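
    A hedged sketch of that flow, with the 24-bit short-circuit in front of
    the retry. The allocator, flag value, and names are illustrative, not the
    actual x86 implementation:

        #include <stddef.h>
        #include <stdint.h>

        typedef uint64_t dma_addr_t;

        #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
        #define SKETCH_GFP_DMA 0x01u    /* illustrative zone flag */

        /* Illustrative allocator: returns a buffer and its bus address. */
        extern void *sketch_zone_alloc(size_t size, unsigned int gfp,
                                       dma_addr_t *addr);
        extern void sketch_zone_free(void *buf, size_t size);

        /*
         * Try a normal allocation first to spare the small GFP_DMA zone and
         * retry with GFP_DMA only if the result does not fit the coherent
         * mask. For a 24-bit mask the retry is pointless, so GFP_DMA is used
         * from the start.
         */
        void *sketch_alloc_coherent(uint64_t coherent_dma_mask, size_t size,
                                    dma_addr_t *addr)
        {
                unsigned int gfp = 0;
                void *buf;

                if (coherent_dma_mask <= DMA_BIT_MASK(24))
                        gfp |= SKETCH_GFP_DMA;

                buf = sketch_zone_alloc(size, gfp, addr);
                if (buf && *addr + size - 1 > coherent_dma_mask &&
                    !(gfp & SKETCH_GFP_DMA)) {
                        /* landed too high: retry from the GFP_DMA zone */
                        sketch_zone_free(buf, size);
                        buf = sketch_zone_alloc(size, gfp | SKETCH_GFP_DMA,
                                                addr);
                }
                return buf;
        }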
     

23 Oct, 2008

2 commits