27 Feb, 2021

1 commit

  • Pull swiotlb updates from Konrad Rzeszutek Wilk:
    "Two memory encryption related patches (SWIOTLB is enabled by default
    for AMD-SEV):

    - Add support for alignment so that NVMe can work properly

    - Keep track of the requested DMA buffer length, as underlying hardware
    devices can otherwise trip SWIOTLB into bouncing too much and crashing
    the kernel

    And a tiny fix to use proper APIs in drivers"

    * 'stable/for-linus-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
    swiotlb: Validate bounce size in the sync/unmap path
    nvme-pci: set min_align_mask
    swiotlb: respect min_align_mask
    swiotlb: don't modify orig_addr in swiotlb_tbl_sync_single
    swiotlb: refactor swiotlb_tbl_map_single
    swiotlb: clean up swiotlb_tbl_unmap_single
    swiotlb: factor out a nr_slots helper
    swiotlb: factor out an io_tlb_offset helper
    swiotlb: add a IO_TLB_SIZE define
    driver core: add a min_align_mask field to struct device_dma_parameters
    sdhci: stop poking into swiotlb internals

    Linus Torvalds
     

26 Feb, 2021

2 commits

  • The size of the buffer being bounced is not checked if it happens
    to be larger than the size of the mapped buffer. Because the size
    can be controlled by a device, as is the case with virtio devices,
    this can lead to memory corruption.

    This patch saves the remaining buffer memory for each slab and uses
    that information for validation in the sync/unmap paths before
    swiotlb_bounce is called.

    Validating this argument is important under the threat models of
    AMD SEV-SNP and Intel TDX, where the HV is considered untrusted.

    Signed-off-by: Martin Radev
    Signed-off-by: Konrad Rzeszutek Wilk

    Martin Radev
     
  • Respect the min_align_mask in struct device_dma_parameters in swiotlb.

    There are two parts to it:
    1) for the lower bits of the alignment inside the io tlb slot, just
    extend the size of the allocation and leave the start of the slot
    empty
    2) for the high bits ensure we find a slot that matches the high bits
    of the alignment to avoid wasting too much memory

    Based on an earlier patch from Jianxiong Gao.

    Signed-off-by: Christoph Hellwig
    Acked-by: Jianxiong Gao
    Tested-by: Jianxiong Gao
    Signed-off-by: Konrad Rzeszutek Wilk

    Christoph Hellwig
     

25 Feb, 2021

1 commit

  • Pull dma-mapping updates from Christoph Hellwig:

    - add support to emulate processing delays in the DMA API benchmark
    selftest (Barry Song)

    - remove support for non-contiguous noncoherent allocations, which
    aren't used and will be replaced by a different API

    * tag 'dma-mapping-5.12' of git://git.infradead.org/users/hch/dma-mapping:
    dma-mapping: remove the {alloc,free}_noncoherent methods
    dma-mapping: benchmark: pretend DMA is transmitting

    Linus Torvalds
     

22 Feb, 2021

1 commit

  • Pull MIPS updates from Thomas Bogendoerfer:

    - added support for Nintendo N64

    - added support for Realtek RTL83XX SoCs

    - kaslr support for Loongson64

    - first steps to get rid of set_fs()

    - DMA runtime coherent/non-coherent selection cleanup

    - cleanups and fixes

    * tag 'mips_5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (98 commits)
    Revert "MIPS: Add basic support for ptrace single step"
    vmlinux.lds.h: catch more UBSAN symbols into .data
    MIPS: kernel: Drop kgdb_call_nmi_hook
    MAINTAINERS: Add git tree for KVM/mips
    MIPS: Use common way to parse elfcorehdr
    MIPS: Simplify EVA cache handling
    Revert "MIPS: kernel: {ftrace,kgdb}: Set correct address limit for cache flushes"
    MIPS: remove CONFIG_DMA_PERDEV_COHERENT
    MIPS: remove CONFIG_DMA_MAYBE_COHERENT
    driver core: lift dma_default_coherent into common code
    MIPS: refactor the runtime coherent vs noncoherent DMA indicators
    MIPS/alchemy: factor out the DMA coherent setup
    MIPS/malta: simplify plat_setup_iocoherency
    MIPS: Add basic support for ptrace single step
    MAINTAINERS: replace non-matching patterns for loongson{2,3}
    MIPS: Make check condition for SDBBP consistent with EJTAG spec
    mips: Replace lkml.org links with lore
    Revert "MIPS: microMIPS: Fix the judgment of mm_jr16_op and mm_jalr_op"
    MIPS: crash_dump.c: Simplify copy_oldmem_page()
    Revert "mips: Manually call fdt_init_reserved_mem() method"
    ...

    Linus Torvalds
     

13 Feb, 2021

1 commit

  • Lift the dma_default_coherent variable from the mips architecture code
    to the driver core. This allows an architecture to default all devices
    to be DMA coherent at run time, even if the kernel is built with support
    for DMA noncoherent devices. By allowing device_initialize to set the
    ->dma_coherent field to this default, the amount of arch hooks required
    for this behavior can be greatly reduced.

    Signed-off-by: Christoph Hellwig
    Acked-by: Greg Kroah-Hartman
    Signed-off-by: Thomas Bogendoerfer

    Christoph Hellwig
     

10 Feb, 2021

1 commit

  • It turns out allowing non-contiguous allocations here was a rather bad
    idea, as we'll now need to define ways to get the pages for mmapping
    or dma_buf sharing. Revert this change and stick to the original
    concept. A different API for the use case of non-contiguous allocations
    will be added back later.

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Tomasz Figa
    Tested-by: Ricardo Ribalda

    Christoph Hellwig
     

05 Feb, 2021

2 commits

  • In a real dma mapping use case, after dma_map is done, data will be
    transmitted. Thus, in a multi-threaded user scenario, IOMMU contention
    should not be that severe. For example, if users enable multiple
    threads to send network packets through a 1G/10G/100Gbps NIC, usually
    the steps will be: map -> transmission -> unmap. The transmission delay
    reduces the contention on the IOMMU.

    Here a delay is added to simulate the transmission between map and unmap
    so that the tested result could be more accurate for TX and simple RX.
    A typical TX transmission for a NIC would be: map -> TX -> unmap,
    since the socket buffers come from the OS. A simple RX model, e.g. a
    disk driver, is also map -> RX -> unmap, but a real RX model in a NIC
    could be more complicated considering packets can arrive spontaneously
    and many drivers are using pre-mapped buffer pools. This is in the TBD
    list.

    Signed-off-by: Barry Song
    Signed-off-by: Christoph Hellwig

    Barry Song
     
  • The original code put five u32 fields before a u64 expansion[10] array.
    Five is an odd count, which will cause alignment trouble when the
    structure is extended with new features. This patch moves to a u8
    reserved field to avoid future alignment risk.
    Meanwhile, it also clears the memory of struct map_benchmark in the
    tools; otherwise, if users run an old version on a newer kernel, the
    random expansion values will cause side effects on the newer kernel.

    Signed-off-by: Barry Song
    Signed-off-by: Christoph Hellwig

    Barry Song
     

28 Jan, 2021

1 commit

  • If dma_map_single() fails, the kernel will give the below oops, since
    the task_struct has been destroyed and we run into memory corruption
    due to a use-after-free in kthread_stop():

    [ 48.095310] Unable to handle kernel paging request at virtual address 000000c473548040
    [ 48.095736] Mem abort info:
    [ 48.095864] ESR = 0x96000004
    [ 48.096025] EC = 0x25: DABT (current EL), IL = 32 bits
    [ 48.096268] SET = 0, FnV = 0
    [ 48.096401] EA = 0, S1PTW = 0
    [ 48.096538] Data abort info:
    [ 48.096659] ISV = 0, ISS = 0x00000004
    [ 48.096820] CM = 0, WnR = 0
    [ 48.097079] user pgtable: 4k pages, 48-bit VAs, pgdp=0000000104639000
    [ 48.098099] [000000c473548040] pgd=0000000000000000, p4d=0000000000000000
    [ 48.098832] Internal error: Oops: 96000004 [#1] PREEMPT SMP
    [ 48.099232] Modules linked in:
    [ 48.099387] CPU: 0 PID: 2 Comm: kthreadd Tainted: G W
    [ 48.099887] Hardware name: linux,dummy-virt (DT)
    [ 48.100078] pstate: 60000005 (nZCv daif -PAN -UAO -TCO BTYPE=--)
    [ 48.100516] pc : __kmalloc_node+0x214/0x368
    [ 48.100944] lr : __kmalloc_node+0x1f4/0x368
    [ 48.101458] sp : ffff800011f0bb80
    [ 48.101843] x29: ffff800011f0bb80 x28: ffff0000c0098ec0
    [ 48.102330] x27: 0000000000000000 x26: 00000000001d4600
    [ 48.102648] x25: ffff0000c0098ec0 x24: ffff800011b6a000
    [ 48.102988] x23: 00000000ffffffff x22: ffff0000c0098ec0
    [ 48.103333] x21: ffff8000101d7a54 x20: 0000000000000dc0
    [ 48.103657] x19: ffff0000c0001e00 x18: 0000000000000000
    [ 48.104069] x17: 0000000000000000 x16: 0000000000000000
    [ 48.105449] x15: 000001aa0304e7b9 x14: 00000000000003b1
    [ 48.106401] x13: ffff8000122d5000 x12: ffff80001228d000
    [ 48.107296] x11: ffff0000c0154340 x10: 0000000000000000
    [ 48.107862] x9 : ffff80000fffffff x8 : ffff0000c473527f
    [ 48.108326] x7 : ffff800011e62f58 x6 : ffff0000c01c8ed8
    [ 48.108778] x5 : ffff0000c0098ec0 x4 : 0000000000000000
    [ 48.109223] x3 : 00000000001d4600 x2 : 0000000000000040
    [ 48.109656] x1 : 0000000000000001 x0 : ff0000c473548000
    [ 48.110104] Call trace:
    [ 48.110287] __kmalloc_node+0x214/0x368
    [ 48.110493] __vmalloc_node_range+0xc4/0x298
    [ 48.110805] copy_process+0x2c8/0x15c8
    [ 48.111133] kernel_clone+0x5c/0x3c0
    [ 48.111373] kernel_thread+0x64/0x90
    [ 48.111604] kthreadd+0x158/0x368
    [ 48.111810] ret_from_fork+0x10/0x30
    [ 48.112336] Code: 17ffffe9 b9402a62 b94008a1 11000421 (f8626802)
    [ 48.112884] ---[ end trace d4890e21e75419d5 ]---

    Signed-off-by: Barry Song
    Signed-off-by: Christoph Hellwig

    Barry Song
     

23 Dec, 2020

1 commit

  • Pull dma-mapping updates from Christoph Hellwig:

    - support for a partial IOMMU bypass (Alexey Kardashevskiy)

    - add a DMA API benchmark (Barry Song)

    - misc fixes (Tiezhu Yang, tangjianqiang)

    * tag 'dma-mapping-5.11' of git://git.infradead.org/users/hch/dma-mapping:
    selftests/dma: add test application for DMA_MAP_BENCHMARK
    dma-mapping: add benchmark support for streaming DMA APIs
    dma-contiguous: fix a typo error in a comment
    dma-pool: no need to check return value of debugfs_create functions
    powerpc/dma: Fallback to dma_ops when persistent memory present
    dma-mapping: Allow mixing bypass and mapped DMA operation

    Linus Torvalds
     

17 Dec, 2020

2 commits

  • Pull ARM SoC driver updates from Arnd Bergmann:
    "There are a couple of subsystems maintained by other people that merge
    their drivers through the SoC tree, those changes include:

    - The SCMI firmware framework gains support for sensor notifications
    and for controlling voltage domains.

    - A large update for the Tegra memory controller driver, integrating
    it better with the interconnect framework

    - The memory controller subsystem gains support for Mediatek MT8192

    - The reset controller framework gains support for sharing pulsed
    resets

    For SoC specific drivers in drivers/soc, the main changes are

    - The Allwinner/sunxi MBUS gets a rework for the way it handles
    dma_map_ops and offsets between physical and dma address spaces.

    - An errata fix plus some cleanups for Freescale Layerscape SoCs

    - A cleanup for renesas drivers regarding MMIO accesses.

    - New SoC specific drivers for Mediatek MT8192 and MT8183 power
    domains

    - New SoC specific drivers for Aspeed AST2600 LPC bus control and SoC
    identification.

    - Core Power Domain support for Qualcomm MSM8916, MSM8939, SDM660 and
    SDX55.

    - A rework of the TI AM33xx 'genpd' power domain support to use
    information from DT instead of platform data

    - Support for TI AM64x SoCs

    - Allow building some Amlogic drivers as modules instead of built-in

    Finally, there are numerous cleanups and smaller bug fixes for
    Mediatek, Tegra, Samsung, Qualcomm, TI OMAP, Amlogic, Rockchips,
    Renesas, and Xilinx SoCs"

    * tag 'arm-soc-drivers-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (222 commits)
    soc: mediatek: mmsys: Specify HAS_IOMEM dependency for MTK_MMSYS
    firmware: xilinx: Properly align function parameter
    firmware: xilinx: Add a blank line after function declaration
    firmware: xilinx: Remove additional newline
    firmware: xilinx: Fix kernel-doc warnings
    firmware: xlnx-zynqmp: fix compilation warning
    soc: xilinx: vcu: add missing register NUM_CORE
    soc: xilinx: vcu: use vcu-settings syscon registers
    dt-bindings: soc: xlnx: extract xlnx,vcu-settings to separate binding
    soc: xilinx: vcu: drop useless success message
    clk: samsung: mark PM functions as __maybe_unused
    soc: samsung: exynos-chipid: initialize later - with arch_initcall
    soc: samsung: exynos-chipid: order list of SoCs by name
    memory: jz4780_nemc: Fix potential NULL dereference in jz4780_nemc_probe()
    memory: ti-emif-sram: only build for ARMv7
    memory: tegra30: Support interconnect framework
    memory: tegra20: Support hardware versioning and clean up OPP table initialization
    dt-bindings: memory: tegra20-emc: Document opp-supported-hw property
    soc: rockchip: io-domain: Fix error return code in rockchip_iodomain_probe()
    reset-controller: ti: force the write operation when assert or deassert
    ...

    Linus Torvalds
     
  • Pull swiotlb update from Konrad Rzeszutek Wilk:
    "A generic (but for right now engaged only with AMD SEV) mechanism to
    adjust a larger size SWIOTLB based on the total memory of the SEV
    guests which right now require the bounce buffer for interacting with
    the outside world.

    Normal knobs (swiotlb=XYZ) still work"

    * 'stable/for-linus-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
    x86,swiotlb: Adjust SWIOTLB bounce buffer size for SEV guests

    Linus Torvalds
     

12 Dec, 2020

1 commit

  • For SEV, all DMA to and from the guest has to use shared (un-encrypted)
    pages. SEV uses SWIOTLB to make this happen without requiring changes
    to device drivers. However, depending on the workload being run, the
    default 64MB of it might not be enough and it may run out of buffers
    to use for DMA, resulting in I/O errors and/or performance degradation
    for high I/O workloads.

    Adjust the default size of SWIOTLB for SEV guests using a percentage
    of the total memory available to the guest for the SWIOTLB buffers.

    Add a new sev_setup_arch() function which is invoked from setup_arch()
    and calls into a new generic swiotlb function, swiotlb_adjust_size(),
    to do the SWIOTLB buffer adjustment.

    v5 fixed build errors and warnings as
    Reported-by: kbuild test robot

    Signed-off-by: Ashish Kalra
    Co-developed-by: Borislav Petkov
    Signed-off-by: Borislav Petkov
    Signed-off-by: Konrad Rzeszutek Wilk

    Ashish Kalra
     

27 Nov, 2020

4 commits

  • Nowadays, there are increasing requirements to benchmark the performance
    of dma_map and dma_unmap, particularly while the device is attached to
    an IOMMU.

    This patch enables that support. Users can run a specified number of
    threads to do dma_map_page and dma_unmap_page on a specific NUMA node
    for the specified duration. Then dma_map_benchmark will calculate the
    average latency for map and unmap.

    A difficulty for this benchmark is that dma_map/unmap APIs must run on
    a particular device. Each device might have a different backend, IOMMU
    or non-IOMMU.

    So we use driver_override to bind dma_map_benchmark to a particular
    device by:
    For platform devices:
    echo dma_map_benchmark > /sys/bus/platform/devices/xxx/driver_override
    echo xxx > /sys/bus/platform/drivers/xxx/unbind
    echo xxx > /sys/bus/platform/drivers/dma_map_benchmark/bind

    For PCI devices:
    echo dma_map_benchmark > /sys/bus/pci/devices/0000:00:01.0/driver_override
    echo 0000:00:01.0 > /sys/bus/pci/drivers/xxx/unbind
    echo 0000:00:01.0 > /sys/bus/pci/drivers/dma_map_benchmark/bind

    Cc: Will Deacon
    Cc: Shuah Khan
    Cc: Christoph Hellwig
    Cc: Marek Szyprowski
    Cc: Robin Murphy
    Signed-off-by: Barry Song
    [hch: folded in two fixes from Colin Ian King ]
    Signed-off-by: Christoph Hellwig

    Barry Song
     
  • Fix a typo in the cma description comment: "then" -> "than".

    Signed-off-by: tangjianqiang
    Signed-off-by: Christoph Hellwig

    tangjianqiang
     
  • When calling debugfs functions, there is no need to ever check the
    return value. The function can work or not, but the code logic should
    never do something different based on this.

    Signed-off-by: Tiezhu Yang
    Reviewed-by: Robin Murphy
    Signed-off-by: Christoph Hellwig

    Tiezhu Yang
     
  • At the moment we allow bypassing DMA ops only when we can do this for
    the entire RAM. However there are configs with mixed-type memory
    where we could still allow bypassing the IOMMU in most cases;
    POWERPC with persistent memory is one example.

    This adds an arch hook to determine where bypass can still work, in
    which case we invoke the direct DMA API. The following patch checks
    the bus limit on POWERPC to allow or disallow direct mapping.

    This adds an ARCH_HAS_DMA_MAP_DIRECT config option to make the arch_xxxx
    hooks no-ops by default.

    Signed-off-by: Alexey Kardashevskiy
    Signed-off-by: Christoph Hellwig

    Alexey Kardashevskiy
     

02 Nov, 2020

2 commits

  • The tbl_dma_addr argument is used to check the DMA boundary for the
    allocations, and thus needs to be a dma_addr_t. swiotlb-xen instead
    passed a physical address, which could lead to incorrect results for
    strange offsets. Fix this by removing the parameter entirely and
    hard-coding the DMA address of io_tlb_start instead.

    Fixes: 91ffe4ad534a ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
    Signed-off-by: Christoph Hellwig
    Reviewed-by: Stefano Stabellini
    Signed-off-by: Konrad Rzeszutek Wilk

    Christoph Hellwig
     
  • kernel/dma/swiotlb.c:swiotlb_init gets called first and tries to
    allocate a buffer for the swiotlb. It does so by calling

    memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);

    If the allocation fails, no_iotlb_memory is set.

    Later during initialization swiotlb-xen comes in
    (drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and given that io_tlb_start
    is != 0, it thinks the memory is ready to use when actually it is not.

    When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
    and since no_iotlb_memory is set the kernel panics.

    Instead, if swiotlb-xen.c:xen_swiotlb_init knew the swiotlb hadn't been
    initialized, it would do the initialization itself, which might still
    succeed.

    Fix the panic by setting io_tlb_start to 0 on swiotlb initialization
    failure, and also by setting no_iotlb_memory to false on swiotlb
    initialization success.

    Fixes: ac2cbab21f31 ("x86: Don't panic if can not alloc buffer for swiotlb")

    Reported-by: Elliott Mitchell
    Tested-by: Elliott Mitchell
    Signed-off-by: Stefano Stabellini
    Reviewed-by: Christoph Hellwig
    Cc: stable@vger.kernel.org
    Signed-off-by: Konrad Rzeszutek Wilk

    Stefano Stabellini
     

17 Oct, 2020

1 commit

  • Fix multiple occurrences of duplicated words in kernel/.

    Fix one typo/spello on the same line as a duplicate word. Change one
    instance of "the the" to "that the". Otherwise just drop one of the
    repeated words.

    Signed-off-by: Randy Dunlap
    Signed-off-by: Andrew Morton
    Link: https://lkml.kernel.org/r/98202fa6-8919-ef63-9efe-c0fad5ca7af1@infradead.org
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     

16 Oct, 2020

1 commit

  • Pull dma-mapping updates from Christoph Hellwig:

    - rework the non-coherent DMA allocator

    - move private definitions out of

    - lower CMA_ALIGNMENT (Paul Cercueil)

    - remove the omap1 dma address translation in favor of the common code

    - make dma-direct aware of multiple dma offset ranges (Jim Quinlan)

    - support per-node DMA CMA areas (Barry Song)

    - increase the default seg boundary limit (Nicolin Chen)

    - misc fixes (Robin Murphy, Thomas Tai, Xu Wang)

    - various cleanups

    * tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
    ARM/ixp4xx: add a missing include of dma-map-ops.h
    dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
    dma-direct: factor out a dma_direct_alloc_from_pool helper
    dma-direct: check for highmem pages in dma_direct_alloc_pages
    dma-mapping: merge into
    dma-mapping: move large parts of to kernel/dma
    dma-mapping: move dma-debug.h to kernel/dma/
    dma-mapping: remove
    dma-mapping: merge into
    dma-contiguous: remove dma_contiguous_set_default
    dma-contiguous: remove dev_set_cma_area
    dma-contiguous: remove dma_declare_contiguous
    dma-mapping: split
    cma: decrease CMA_ALIGNMENT lower limit to 2
    firewire-ohci: use dma_alloc_pages
    dma-iommu: implement ->alloc_noncoherent
    dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
    dma-mapping: add a new dma_alloc_pages API
    dma-mapping: remove dma_cache_sync
    53c700: convert to dma_alloc_noncoherent
    ...

    Linus Torvalds
     

14 Oct, 2020

1 commit

  • The memory size calculation in cma_early_percent_memory() traverses
    memblock.memory rather than simply calling memblock_phys_mem_size().
    The comment in that function suggests that at some point there should
    have been a call to memblock_analyze() before memblock_phys_mem_size()
    could be used. As of now, there is no memblock_analyze() at all and
    memblock_phys_mem_size() can be used as soon as cold-plug memory is
    registered with memblock.

    Replace loop over memblock.memory with a call to memblock_phys_mem_size().

    Signed-off-by: Mike Rapoport
    Signed-off-by: Andrew Morton
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Baoquan He
    Cc: Andy Lutomirski
    Cc: Benjamin Herrenschmidt
    Cc: Borislav Petkov
    Cc: Catalin Marinas
    Cc: Daniel Axtens
    Cc: Dave Hansen
    Cc: Emil Renner Berthing
    Cc: Hari Bathini
    Cc: Ingo Molnar
    Cc: Ingo Molnar
    Cc: Jonathan Cameron
    Cc: Marek Szyprowski
    Cc: Max Filippov
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Miguel Ojeda
    Cc: Palmer Dabbelt
    Cc: Paul Mackerras
    Cc: Paul Walmsley
    Cc: Peter Zijlstra
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Thomas Bogendoerfer
    Cc: Thomas Gleixner
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Link: https://lkml.kernel.org/r/20200818151634.14343-3-rppt@kernel.org
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     

06 Oct, 2020

3 commits