13 Mar, 2019

3 commits

  • As all the memblock allocation functions return NULL in case of error
    rather than panic(), the duplicates with _nopanic suffix can be removed.

    Link: http://lkml.kernel.org/r/1548057848-15136-22-git-send-email-rppt@linux.ibm.com
    Signed-off-by: Mike Rapoport
    Acked-by: Greg Kroah-Hartman
    Reviewed-by: Petr Mladek [printk]
    Cc: Catalin Marinas
    Cc: Christophe Leroy
    Cc: Christoph Hellwig
    Cc: "David S. Miller"
    Cc: Dennis Zhou
    Cc: Geert Uytterhoeven
    Cc: Greentime Hu
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Guo Ren [c-sky]
    Cc: Heiko Carstens
    Cc: Juergen Gross [Xen]
    Cc: Mark Salter
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Paul Burton
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Rob Herring
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Tony Luck
    Cc: Vineet Gupta
    Cc: Yoshinori Sato
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Rapoport
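    With the _nopanic variants gone, every call site decides for itself how
    to handle a NULL return. A minimal userspace sketch of the resulting
    pattern; memblock_alloc(), panic(), and setup_table() here are mocks
    for illustration, not the kernel implementations:

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for the kernel allocator: returns NULL on failure instead
     * of panicking, matching the post-cleanup behaviour. */
    static void *memblock_alloc(size_t size, size_t align)
    {
        (void)align;               /* alignment is ignored in this mock */
        void *ptr = malloc(size);
        if (ptr)
            memset(ptr, 0, size);  /* memblock_alloc hands back zeroed memory */
        return ptr;
    }

    /* Stand-in for panic(): report and abort. */
    static void panic(const char *msg)
    {
        fputs(msg, stderr);
        abort();
    }

    /* Typical call site after the cleanup: the caller, not the allocator,
     * decides whether a failed allocation is fatal. */
    static void *setup_table(size_t entries, size_t entry_size)
    {
        void *table = memblock_alloc(entries * entry_size, 64);

        if (!table)  /* explicit check replaces the old panicking variant */
            panic("setup_table: failed to allocate table\n");
        return table;
    }
    ```

    Callers that can recover (as sparse_buffer_init does in the series
    below) simply branch on NULL instead of calling panic().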
     
  • Add a check for the return value of the memblock_alloc*() functions
    and call panic() in case of error. The panic message repeats the one
    used by the panicking memblock allocators, with the parameters
    adjusted to include only the relevant ones.

    The replacement was mostly automated with semantic patches like the one
    below with manual massaging of format strings.

    @@
    expression ptr, size, align;
    @@
    ptr = memblock_alloc(size, align);
    + if (!ptr)
    + 	panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
    + 	      __func__, size, align);

    [anders.roxell@linaro.org: use '%pa' with 'phys_addr_t' type]
    Link: http://lkml.kernel.org/r/20190131161046.21886-1-anders.roxell@linaro.org
    [rppt@linux.ibm.com: fix format strings for panics after memblock_alloc]
    Link: http://lkml.kernel.org/r/1548950940-15145-1-git-send-email-rppt@linux.ibm.com
    [rppt@linux.ibm.com: don't panic if the allocation in sparse_buffer_init fails]
    Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx
    [akpm@linux-foundation.org: fix xtensa printk warning]
    Link: http://lkml.kernel.org/r/1548057848-15136-20-git-send-email-rppt@linux.ibm.com
    Signed-off-by: Mike Rapoport
    Signed-off-by: Anders Roxell
    Reviewed-by: Guo Ren [c-sky]
    Acked-by: Paul Burton [MIPS]
    Acked-by: Heiko Carstens [s390]
    Reviewed-by: Juergen Gross [Xen]
    Reviewed-by: Geert Uytterhoeven [m68k]
    Acked-by: Max Filippov [xtensa]
    Cc: Catalin Marinas
    Cc: Christophe Leroy
    Cc: Christoph Hellwig
    Cc: "David S. Miller"
    Cc: Dennis Zhou
    Cc: Greentime Hu
    Cc: Greg Kroah-Hartman
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Mark Salter
    Cc: Matt Turner
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Petr Mladek
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Rob Herring
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Tony Luck
    Cc: Vineet Gupta
    Cc: Yoshinori Sato
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     
  • Add panic() calls if memblock_alloc() returns NULL.

    The panic() format duplicates the one used by memblock itself; to
    avoid an explosion of long parameter lists, open-coded allocation
    size calculations are replaced with a local variable.

    Link: http://lkml.kernel.org/r/1548057848-15136-19-git-send-email-rppt@linux.ibm.com
    Signed-off-by: Mike Rapoport
    Cc: Catalin Marinas
    Cc: Christophe Leroy
    Cc: Christoph Hellwig
    Cc: "David S. Miller"
    Cc: Dennis Zhou
    Cc: Geert Uytterhoeven
    Cc: Greentime Hu
    Cc: Greg Kroah-Hartman
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Guo Ren [c-sky]
    Cc: Heiko Carstens
    Cc: Juergen Gross [Xen]
    Cc: Mark Salter
    Cc: Matt Turner
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Paul Burton
    Cc: Petr Mladek
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Rob Herring
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Tony Luck
    Cc: Vineet Gupta
    Cc: Yoshinori Sato
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     

11 Mar, 2019

2 commits

  • Pull virtio updates from Michael Tsirkin:
    "Several fixes, most notably fix for virtio on swiotlb systems"

    * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
    vhost: silence an unused-variable warning
    virtio: hint if callbacks surprisingly might sleep
    virtio-ccw: wire up ->bus_name callback
    s390/virtio: handle find on invalid queue gracefully
    virtio-ccw: diag 500 may return a negative cookie
    virtio_balloon: remove the unnecessary 0-initialization
    virtio-balloon: improve update_balloon_size_func
    virtio-blk: Consider virtio_max_dma_size() for maximum segment size
    virtio: Introduce virtio_max_dma_size()
    dma: Introduce dma_max_mapping_size()
    swiotlb: Add is_swiotlb_active() function
    swiotlb: Introduce swiotlb_max_mapping_size()

    Linus Torvalds
     
  • Pull DMA mapping updates from Christoph Hellwig:

    - add debugfs support for dumping dma-debug information (Corentin
    Labbe)

    - Kconfig cleanups (Andy Shevchenko and me)

    - debugfs cleanups (Greg Kroah-Hartman)

    - improve dma_map_resource and use it in the media code

    - arch_setup_dma_ops / arch_teardown_dma_ops cleanups

    - various small cleanups and improvements for the per-device coherent
    allocator

    - make the DMA mask an upper bound and don't fail "too large" DMA
    masks in the remaining two architectures - this will allow big
    driver cleanups in the following merge windows

    * tag 'dma-mapping-5.1' of git://git.infradead.org/users/hch/dma-mapping: (21 commits)
    Documentation/DMA-API-HOWTO: update dma_mask sections
    sparc64/pci_sun4v: allow large DMA masks
    sparc64/iommu: allow large DMA masks
    sparc64: refactor the ali DMA quirk
    ccio: allow large DMA masks
    dma-mapping: remove the DMA_MEMORY_EXCLUSIVE flag
    dma-mapping: remove dma_mark_declared_memory_occupied
    dma-mapping: move CONFIG_DMA_CMA to kernel/dma/Kconfig
    dma-mapping: improve selection of dma_declare_coherent availability
    dma-mapping: remove an incorrect __iommem annotation
    of: select OF_RESERVED_MEM automatically
    device.h: dma_mem is only needed for HAVE_GENERIC_DMA_COHERENT
    mfd/sm501: depend on HAS_DMA
    dma-mapping: add a kconfig symbol for arch_teardown_dma_ops availability
    dma-mapping: add a kconfig symbol for arch_setup_dma_ops availability
    dma-mapping: move debug configuration options to kernel/dma
    dma-debug: add dumping facility via debugfs
    dma: debug: no need to check return value of debugfs_create functions
    videobuf2: replace a layering violation with dma_map_resource
    dma-mapping: don't BUG when calling dma_map_resource on RAM
    ...

    Linus Torvalds
     


08 Mar, 2019

1 commit

  • Pull powerpc updates from Michael Ellerman:
    "Notable changes:

    - Enable THREAD_INFO_IN_TASK to move thread_info off the stack.

    - A big series from Christoph reworking our DMA code to use more of
    the generic infrastructure, as he said:
    "This series switches the powerpc port to use the generic swiotlb
    and noncoherent dma ops, and to use more generic code for the
    coherent direct mapping, as well as removing a lot of dead
    code."

    - Increase our vmalloc space to 512T with the Hash MMU on modern
    CPUs, allowing us to support machines with larger amounts of total
    RAM or distance between nodes.

    - Two series from Christophe, one to optimise TLB miss handlers on
    6xx, and another to optimise the way STRICT_KERNEL_RWX is
    implemented on some 32-bit CPUs.

    - Support for KCOV coverage instrumentation which means we can run
    syzkaller and discover even more bugs in our code.

    And as always many clean-ups, reworks and minor fixes etc.

    Thanks to: Alan Modra, Alexey Kardashevskiy, Alistair Popple, Andrea
    Arcangeli, Andrew Donnellan, Aneesh Kumar K.V, Aravinda Prasad, Balbir
    Singh, Brajeswar Ghosh, Breno Leitao, Christian Lamparter, Christian
    Zigotzky, Christophe Leroy, Christoph Hellwig, Corentin Labbe, Daniel
    Axtens, David Gibson, Diana Craciun, Firoz Khan, Gustavo A. R. Silva,
    Igor Stoppa, Joe Lawrence, Joel Stanley, Jonathan Neuschäfer, Jordan
    Niethe, Laurent Dufour, Madhavan Srinivasan, Mahesh Salgaonkar, Mark
    Cave-Ayland, Masahiro Yamada, Mathieu Malaterre, Matteo Croce, Meelis
    Roos, Michael W. Bringmann, Nathan Chancellor, Nathan Fontenot,
    Nicholas Piggin, Nick Desaulniers, Nicolai Stange, Oliver O'Halloran,
    Paul Mackerras, Peter Xu, PrasannaKumar Muralidharan, Qian Cai,
    Rashmica Gupta, Reza Arbab, Robert P. J. Day, Russell Currey,
    Sabyasachi Gupta, Sam Bobroff, Sandipan Das, Sergey Senozhatsky,
    Souptick Joarder, Stewart Smith, Tyrel Datwyler, Vaibhav Jain,
    YueHaibing"

    * tag 'powerpc-5.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (200 commits)
    powerpc/32: Clear on-stack exception marker upon exception return
    powerpc: Remove export of save_stack_trace_tsk_reliable()
    powerpc/mm: fix "section_base" set but not used
    powerpc/mm: Fix "sz" set but not used warning
    powerpc/mm: Check secondary hash page table
    powerpc: remove nargs from __SYSCALL
    powerpc/64s: Fix unrelocated interrupt trampoline address test
    powerpc/powernv/ioda: Fix locked_vm counting for memory used by IOMMU tables
    powerpc/fsl: Fix the flush of branch predictor.
    powerpc/powernv: Make opal log only readable by root
    powerpc/xmon: Fix opcode being uninitialized in print_insn_powerpc
    powerpc/powernv: move OPAL call wrapper tracing and interrupt handling to C
    powerpc/64s: Fix data interrupts vs d-side MCE reentrancy
    powerpc/64s: Prepare to handle data interrupts vs d-side MCE reentrancy
    powerpc/64s: system reset interrupt preserve HSRRs
    powerpc/64s: Fix HV NMI vs HV interrupt recoverability test
    powerpc/mm/hash: Handle mmap_min_addr correctly in get_unmapped_area topdown search
    powerpc/hugetlb: Handle mmap_min_addr correctly in get_unmapped_area callback
    selftests/powerpc: Remove duplicate header
    powerpc sstep: Add support for modsd, modud instructions
    ...

    Linus Torvalds
     

07 Mar, 2019

3 commits

  • The function returns the maximum size that can be mapped
    using DMA-API functions. The patch also adds the
    implementation for direct DMA and a new dma_map_ops pointer
    so that other implementations can expose their limit.

    Cc: stable@vger.kernel.org
    Reviewed-by: Konrad Rzeszutek Wilk
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Joerg Roedel
    Signed-off-by: Michael S. Tsirkin

    Joerg Roedel
     
  • This function will be used from dma_direct code to determine
    the maximum segment size of a dma mapping.

    Cc: stable@vger.kernel.org
    Reviewed-by: Konrad Rzeszutek Wilk
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Joerg Roedel
    Signed-off-by: Michael S. Tsirkin

    Joerg Roedel
     
  • The function returns the maximum size that can be remapped
    by the SWIOTLB implementation. This function will be later
    exposed to users through the DMA-API.

    Cc: stable@vger.kernel.org
    Reviewed-by: Konrad Rzeszutek Wilk
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Joerg Roedel
    Signed-off-by: Michael S. Tsirkin

    Joerg Roedel
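    The three helpers in this series stack on each other: virtio asks the
    DMA layer for its limit, and the DMA layer is capped by SWIOTLB when
    bounce buffering is active. A userspace mock of that layering; the
    256 KiB figure (128 slots of 2 KiB) and the stubs are assumptions for
    illustration:

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Mock: whether bounce buffering (SWIOTLB) is in use for the device. */
    static bool swiotlb_active = true;

    /* SWIOTLB can bounce at most one contiguous segment; 128 slots of
     * 2 KiB (256 KiB total) is assumed here for illustration. */
    static size_t swiotlb_max_mapping_size(void)
    {
        return 128 * 2048;
    }

    /* dma_max_mapping_size(): the DMA layer's limit, capped by SWIOTLB
     * when it is active, otherwise effectively unlimited. */
    static size_t dma_max_mapping_size(void)
    {
        return swiotlb_active ? swiotlb_max_mapping_size() : SIZE_MAX;
    }

    /* virtio_max_dma_size(): what virtio-blk uses to cap segment size. */
    static size_t virtio_max_dma_size(void)
    {
        return dma_max_mapping_size();
    }
    ```

    This is why virtio-blk on a swiotlb system ends up with a smaller
    maximum segment size than on bare metal.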
     

13 Feb, 2019

3 commits

  • This patch uses io_tlb_used to help check whether the swiotlb buffer
    is full. io_tlb_used is no longer used only for debugfs; it also
    helps optimize swiotlb_tbl_map_single().

    Suggested-by: Joe Jin
    Signed-off-by: Dongli Zhang
    Signed-off-by: Konrad Rzeszutek Wilk

    Dongli Zhang
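    Tracking io_tlb_used lets the mapping path bail out immediately when
    the pool is exhausted instead of scanning every slot. A simplified
    userspace sketch of that early exit; the slot count and linear search
    are stand-ins, not the kernel's actual allocator:

    ```c
    #include <assert.h>

    #define NSLABS 8
    static unsigned long io_tlb_used;
    static int io_tlb_slot_busy[NSLABS];

    /* Simplified swiotlb_tbl_map_single(): returns a slot index,
     * or -1 when the bounce buffer is full. */
    static int swiotlb_map_slot(void)
    {
        /* Early exit: no point scanning when every slot is in use. */
        if (io_tlb_used == NSLABS)
            return -1;

        for (int i = 0; i < NSLABS; i++) {
            if (!io_tlb_slot_busy[i]) {
                io_tlb_slot_busy[i] = 1;
                io_tlb_used++;
                return i;
            }
        }
        return -1;
    }
    ```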
     
  • A device driver will not be able to do DMA operations once the
    swiotlb buffer is full, either because the driver has too many IO TLB
    blocks in flight, or because of a memory leak in the device driver.
    Exporting the swiotlb buffer usage via debugfs helps the user
    estimate the size of the swiotlb buffer to pre-allocate, or analyze a
    device driver memory leak.

    Signed-off-by: Dongli Zhang
    Signed-off-by: Konrad Rzeszutek Wilk

    Dongli Zhang
     
  • Fix the comment: swiotlb_bounce() is used to copy from the original
    DMA location to the swiotlb buffer during swiotlb_tbl_map_single(),
    and to copy from the swiotlb buffer back to the original DMA location
    during swiotlb_tbl_unmap_single().

    Signed-off-by: Dongli Zhang
    Signed-off-by: Konrad Rzeszutek Wilk

    Dongli Zhang
     

01 Feb, 2019

3 commits

  • While debugging a DMA mapping leak, I needed to access
    debug_dma_dump_mappings(), but easily and from user space.

    This patch adds a /sys/kernel/debug/dma-api/dump file which contains
    all current DMA mappings.

    Signed-off-by: Corentin Labbe
    Signed-off-by: Christoph Hellwig

    Corentin Labbe
     
  • When calling debugfs functions, there is no need to ever check the
    return value. The function can work or not, but the code logic should
    never do something different based on this.

    Also delete the variables for the file dentries for the debugfs entries
    as they are never used at all once they are created.

    Signed-off-by: Greg Kroah-Hartman
    Reviewed-by: Robin Murphy
    [hch: moved dma_debug_dent to function scope and renamed it]
    Signed-off-by: Christoph Hellwig

    Greg Kroah-Hartman
     
  • Instead provide a proper implementation in the direct mapping code,
    and also wire it up for arm and powerpc, leaving an error return for
    all the IOMMU or virtual mapping instances for which we'd have to
    wire up an actual implementation.

    Signed-off-by: Christoph Hellwig
    Tested-by: Marek Szyprowski

    Christoph Hellwig
     

16 Jan, 2019

1 commit

  • Otherwise is_swiotlb_buffer will return false positives when
    we first initialize a swiotlb buffer, but then free it because
    we have an IOMMU available.

    Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")
    Reported-by: Sibren Vasse
    Signed-off-by: Christoph Hellwig
    Tested-by: Sibren Vasse
    Signed-off-by: Konrad Rzeszutek Wilk

    Christoph Hellwig
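    The false positive comes from the range check still seeing the stale
    buffer bounds after the early swiotlb pool is freed; clearing them on
    free makes the check fail closed. A reduced userspace model of the
    idea, with illustrative addresses and a mock free path:

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Physical bounds of the bounce buffer pool (illustrative values). */
    static unsigned long io_tlb_start = 0x1000;
    static unsigned long io_tlb_end   = 0x2000;

    /* is_swiotlb_buffer(): a simple range check against the pool bounds. */
    static bool is_swiotlb_buffer(unsigned long paddr)
    {
        return paddr >= io_tlb_start && paddr < io_tlb_end;
    }

    /* The fix in spirit: when the early pool is freed (e.g. because an
     * IOMMU took over), reset the bounds so the range check can't match. */
    static void swiotlb_free_pool(void)
    {
        io_tlb_start = 0;
        io_tlb_end = 0;
    }
    ```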
     


29 Dec, 2018

1 commit

  • Pull DMA mapping updates from Christoph Hellwig:
    "A huge update this time, but a lot of that is just consolidating or
    removing code:

    - provide a common DMA_MAPPING_ERROR definition and avoid indirect
    calls for dma_map_* error checking

    - use direct calls for the DMA direct mapping case, avoiding huge
    retpoline overhead for high performance workloads

    - merge the swiotlb dma_map_ops into dma-direct

    - provide a generic remapping DMA consistent allocator for
    architectures that have devices that perform DMA that is not cache
    coherent. Based on the existing arm64 implementation and also used
    for csky now.

    - improve the dma-debug infrastructure, including dynamic allocation
    of entries (Robin Murphy)

    - default to providing chaining scatterlist everywhere, with opt-outs
    for the few architectures (alpha, parisc, most arm32 variants) that
    can't cope with it

    - misc sparc32 dma-related cleanups

    - remove the dma_mark_clean arch hook used by swiotlb on ia64 and
    replace it with the generic noncoherent infrastructure

    - fix the return type of dma_set_max_seg_size (Niklas Söderlund)

    - move the dummy dma ops for not DMA capable devices from arm64 to
    common code (Robin Murphy)

    - ensure dma_alloc_coherent returns zeroed memory to avoid kernel
    data leaks through userspace. We already did this for most common
    architectures, but this ensures we do it everywhere.
    dma_zalloc_coherent has been deprecated and can hopefully be
    removed after -rc1 with a coccinelle script"

    * tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping: (73 commits)
    dma-mapping: fix inverted logic in dma_supported
    dma-mapping: deprecate dma_zalloc_coherent
    dma-mapping: zero memory returned from dma_alloc_*
    sparc/iommu: fix ->map_sg return value
    sparc/io-unit: fix ->map_sg return value
    arm64: default to the direct mapping in get_arch_dma_ops
    PCI: Remove unused attr variable in pci_dma_configure
    ia64: only select ARCH_HAS_DMA_COHERENT_TO_PFN if swiotlb is enabled
    dma-mapping: bypass indirect calls for dma-direct
    vmd: use the proper dma_* APIs instead of direct methods calls
    dma-direct: merge swiotlb_dma_ops into the dma_direct code
    dma-direct: use dma_direct_map_page to implement dma_direct_map_sg
    dma-direct: improve addressability error reporting
    swiotlb: remove dma_mark_clean
    swiotlb: remove SWIOTLB_MAP_ERROR
    ACPI / scan: Refactor _CCA enforcement
    dma-mapping: factor out dummy DMA ops
    dma-mapping: always build the direct mapping code
    dma-mapping: move dma_cache_sync out of line
    dma-mapping: move various slow path functions out of line
    ...

    Linus Torvalds
     

21 Dec, 2018

1 commit

  • The cleanup in commit 356da6d0cde3 ("dma-mapping: bypass indirect calls
    for dma-direct") accidentally inverted the logic in the check for the
    presence of a ->dma_supported() callback. Switch this back to the way it
    was to prevent a crash on boot.

    Fixes: 356da6d0cde3 ("dma-mapping: bypass indirect calls for dma-direct")
    Signed-off-by: Thierry Reding
    Signed-off-by: Christoph Hellwig

    Thierry Reding
     

20 Dec, 2018

1 commit

  • If we want to map memory from the DMA allocator to userspace, it must
    be zeroed at allocation time to prevent stale data leaks. We already
    do this on most common architectures, but some architectures don't do
    this yet; fix them up, either by passing GFP_ZERO when we use the
    normal page allocator or by doing a manual memset otherwise.

    Signed-off-by: Christoph Hellwig
    Acked-by: Geert Uytterhoeven [m68k]
    Acked-by: Sam Ravnborg [sparc]

    Christoph Hellwig
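    The fix amounts to making every architecture behave like a zeroing
    allocator: either ask the page allocator for zeroed pages (the
    GFP_ZERO path) or memset the buffer manually. A userspace analogue of
    the two options, with calloc standing in for GFP_ZERO:

    ```c
    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    /* Option 1: the allocator zeroes for us (the GFP_ZERO path). */
    static void *alloc_coherent_zeroed(size_t size)
    {
        return calloc(1, size);
    }

    /* Option 2: plain allocation followed by a manual memset, for
     * allocators that have no zeroing flag. */
    static void *alloc_coherent_memset(size_t size)
    {
        void *ptr = malloc(size);
        if (ptr)
            memset(ptr, 0, size);
        return ptr;
    }
    ```

    Either way, memory handed to userspace never carries stale kernel
    data.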
     

18 Dec, 2018

1 commit

  • The dma_direct_supported() function intends to check the DMA mask
    against specific values. However, the phys_to_dma() function includes
    the SME encryption mask, which defeats the intended purpose of the
    check. This prevents drivers that support less than 48-bit DMA (the
    SME encryption mask is bit 47) from setting the DMA mask successfully
    when SME is active, which results in the driver failing to
    initialize.

    Change the function used to check the mask from phys_to_dma() to
    __phys_to_dma() so that the SME encryption mask is not part of the check.

    Fixes: c1d0af1a1d5d ("kernel/dma/direct: take DMA offset into account in dma_direct_supported")
    Signed-off-by: Tom Lendacky
    Signed-off-by: Christoph Hellwig

    Lendacky, Thomas
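    The bug reduces to which translation the mask comparison uses: the one
    that includes the encryption bit, or the raw one. A userspace sketch
    with an assumed C-bit at bit 47; the addresses, masks, and helper
    names dma_supported_buggy/dma_supported_fixed are illustrative:

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define SME_MASK (1ULL << 47)  /* assumed: SME C-bit at bit 47 */

    /* __phys_to_dma(): raw translation, no encryption bit. */
    static uint64_t __phys_to_dma(uint64_t paddr) { return paddr; }

    /* phys_to_dma(): includes the SME encryption mask when SME is on. */
    static uint64_t phys_to_dma(uint64_t paddr) { return paddr | SME_MASK; }

    /* Can a device with this dma_mask reach the given physical address? */
    static bool dma_supported_buggy(uint64_t dma_mask, uint64_t paddr)
    {
        return phys_to_dma(paddr) <= dma_mask;    /* C-bit defeats the check */
    }

    static bool dma_supported_fixed(uint64_t dma_mask, uint64_t paddr)
    {
        return __phys_to_dma(paddr) <= dma_mask;  /* mask-only comparison */
    }
    ```

    With a 40-bit device mask and SME active, only the fixed variant lets
    the driver's mask pass for addresses the device can actually reach.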
     

14 Dec, 2018

1 commit