20 Jan, 2021
1 commit
-
commit ef3a575baf53571dc405ee4028e26f50856898e7 upstream.
Allow issuing an IOCTL_PRIVCMD_MMAP_RESOURCE ioctl with num = 0 and
addr = 0 in order to fetch the size of a specific resource.

Add a shortcut to the default map resource path, since fetching the
size requires no address to be passed in, and thus no VMA to set up.

This is missing from the initial implementation, and causes issues
when mapping resources that don't have fixed or known sizes.

Signed-off-by: Roger Pau Monné
Reviewed-by: Juergen Gross
Tested-by: Andrew Cooper
Cc: stable@vger.kernel.org # >= 4.18
Link: https://lore.kernel.org/r/20210112115358.23346-1-roger.pau@citrix.com
Signed-off-by: Juergen Gross
Signed-off-by: Greg Kroah-Hartman
30 Dec, 2020
5 commits
-
commit 9996bd494794a2fe393e97e7a982388c6249aa76 upstream.
'xenbus_backend' watches the 'state' of devices, which is writable by
guests. Hence, if guests update it intensively, dom0 will have lots of
pending events that exhaust the memory of dom0. In other words, guests
can trigger dom0 memory pressure. This is known as XSA-349. However,
the watch callback for it, 'frontend_changed()', reads only 'state', so
it doesn't need the pending events.

To avoid the problem, this commit disallows pending watch messages for
'xenbus_backend' using the 'will_handle()' watch callback.

This is part of XSA-349.
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park
Reported-by: Michael Kurth
Reported-by: Pawel Wieczorkiewicz
Reviewed-by: Juergen Gross
Signed-off-by: Juergen Gross
Signed-off-by: Greg Kroah-Hartman
-
commit 3dc86ca6b4c8cfcba9da7996189d1b5a358a94fc upstream.
This commit adds a counter of pending messages for each watch in the
struct. It is used to skip unnecessary pending-messages lookups in
'unregister_xenbus_watch()'. It could also be used in the 'will_handle'
callback.

This is part of XSA-349.
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park
Reported-by: Michael Kurth
Reported-by: Pawel Wieczorkiewicz
Reviewed-by: Juergen Gross
Signed-off-by: Juergen Gross
Signed-off-by: Greg Kroah-Hartman
-
commit be987200fbaceaef340872841d4f7af2c5ee8dc3 upstream.
This commit adds support for the 'will_handle' watch callback for
'xen_bus_type' users.

This is part of XSA-349.
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park
Reported-by: Michael Kurth
Reported-by: Pawel Wieczorkiewicz
Reviewed-by: Juergen Gross
Signed-off-by: Juergen Gross
Signed-off-by: Greg Kroah-Hartman
-
commit 2e85d32b1c865bec703ce0c962221a5e955c52c2 upstream.
Some code does not directly create a 'xenbus_watch' object and call
'register_xenbus_watch()', but uses 'xenbus_watch_path()' instead. This
commit adds support for the 'will_handle' callback in
'xenbus_watch_path()' and its wrapper, 'xenbus_watch_pathfmt()'.

This is part of XSA-349.
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park
Reported-by: Michael Kurth
Reported-by: Pawel Wieczorkiewicz
Reviewed-by: Juergen Gross
Signed-off-by: Juergen Gross
Signed-off-by: Greg Kroah-Hartman
-
commit fed1755b118147721f2c87b37b9d66e62c39b668 upstream.
If the handling logic for watch events is slower than the enqueue
logic and the events can be created by the guests, the guests could
trigger memory pressure by intensively inducing the events, because
this will create a huge number of pending events that exhaust memory.

Fortunately, some watch events could be ignored, depending on the
handler callback. For example, if the callback has interest in only a
single path, the watch wouldn't want multiple pending events. Or, some
watches could ignore events for the same path.

To let such watches voluntarily help avoid the memory pressure
situation, this commit introduces a new watch callback, 'will_handle'.
If it is not NULL, it will be called for each new event just before
enqueuing it. Then, if the callback returns false, the event will be
discarded. No watch is using the callback for now, though.

This is part of XSA-349.
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park
Reported-by: Michael Kurth
Reported-by: Pawel Wieczorkiewicz
Reviewed-by: Juergen Gross
Signed-off-by: Juergen Gross
Signed-off-by: Greg Kroah-Hartman
09 Dec, 2020
2 commits
-
Commit 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated
memory") introduced usage of ZONE_DEVICE memory for foreign memory
mappings.

Unfortunately this collides with using page->lru for Xen backend
private page caches.

Fix that by using page->zone_device_data instead.
Cc: # 5.9
Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory")
Signed-off-by: Juergen Gross
Reviewed-by: Boris Ostrovsky
Reviewed-by: Jason Andryuk
Signed-off-by: Juergen Gross
-
Instead of having similar helpers in multiple backend drivers, use
common helpers for caching pages allocated via gnttab_alloc_pages().

Make use of those helpers in blkback and scsiback.
Cc: # 5.9
Signed-off-by: Juergen Gross
Reviewed-by: Boris Ostrovsky
Signed-off-by: Juergen Gross
02 Nov, 2020
1 commit
-
The tbl_dma_addr argument is used to check the DMA boundary for the
allocations, and thus needs to be a dma_addr_t. swiotlb-xen instead
passed a physical address, which could lead to incorrect results for
strange offsets. Fix this by removing the parameter entirely and
hard-coding the DMA address for io_tlb_start instead.

Fixes: 91ffe4ad534a ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
Signed-off-by: Konrad Rzeszutek Wilk
26 Oct, 2020
1 commit
-
Pull more xen updates from Juergen Gross:
- a series for the Xen pv block drivers adding module parameters for
better control of resource usage
- a cleanup series for the Xen event driver

* tag 'for-linus-5.10b-rc1c-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
Documentation: add xen.fifo_events kernel parameter description
xen/events: unmask a fifo event channel only if it was masked
xen/events: only register debug interrupt for 2-level events
xen/events: make struct irq_info private to events_base.c
xen: remove no longer used functions
xen-blkfront: Apply changed parameter name to the document
xen-blkfront: add a parameter for disabling of persistent grants
xen-blkback: add a parameter for disabling of persistent grants
23 Oct, 2020
4 commits
-
Unmasking an event channel with fifo event channels being used can
require a hypercall to be made, so try to avoid that by checking
whether the event channel was really masked.

Suggested-by: Jan Beulich
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Link: https://lore.kernel.org/r/20201022094907.28560-5-jgross@suse.com
Signed-off-by: Boris Ostrovsky
-
xen_debug_interrupt() is specific to 2-level event handling. So don't
register it with fifo event handling being active.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Link: https://lore.kernel.org/r/20201022094907.28560-4-jgross@suse.com
Signed-off-by: Boris Ostrovsky
-
The struct irq_info of Xen's event handling is used only by two
evtchn_ops functions outside of events_base.c. Those two functions
can easily be switched to avoid that usage.

This allows making struct irq_info and its related access functions
private to events_base.c.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Link: https://lore.kernel.org/r/20201022094907.28560-3-jgross@suse.com
Signed-off-by: Boris Ostrovsky
-
With the switch to the lateeoi model for interdomain event channels
some functions are no longer in use. Remove them.

Suggested-by: Jan Beulich
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Link: https://lore.kernel.org/r/20201022094907.28560-2-jgross@suse.com
Signed-off-by: Boris Ostrovsky
21 Oct, 2020
1 commit
-
Pull more xen updates from Juergen Gross:
- A single patch to fix the Xen security issue XSA-331 (malicious
guests can DoS dom0 by triggering NULL-pointer dereferences or access
to stale data).
- A larger series to fix the Xen security issue XSA-332 (malicious
guests can DoS dom0 by sending events at high frequency leading to
dom0's vcpus being busy in IRQ handling for elongated times).

* tag 'for-linus-5.10b-rc1b-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen/events: block rogue events for some time
xen/events: defer eoi in case of excessive number of events
xen/events: use a common cpu hotplug hook for event channels
xen/events: switch user event channels to lateeoi model
xen/pciback: use lateeoi irq binding
xen/pvcallsback: use lateeoi irq binding
xen/scsiback: use lateeoi irq binding
xen/netback: use lateeoi irq binding
xen/blkback: use lateeoi irq binding
xen/events: add a new "late EOI" evtchn framework
xen/events: fix race in evtchn_fifo_unmask()
xen/events: add a proper barrier to 2-level uevent unmasking
xen/events: avoid removing an event channel while handling it
20 Oct, 2020
11 commits
-
In order to avoid high dom0 load due to rogue guests sending events at
high frequency, block those events in case there was no action needed
in dom0 to handle the events.

This is done by adding a per-event counter, which is set to zero in case
an EOI without the XEN_EOI_FLAG_SPURIOUS is received from a backend
driver, and incremented when this flag has been set. In case the
counter is 2 or higher, delay the EOI by 1 << (cnt - 2) jiffies, but
not more than 1 second.

In order not to waste memory, shorten the per-event refcnt to two bytes
(it should normally never exceed a value of 2). Add an overflow check
to evtchn_get() to make sure the 2 bytes really won't overflow.

This is part of XSA-332.
Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Stefano Stabellini
Reviewed-by: Wei Liu
-
In case rogue guests are sending events at high frequency it might
happen that xen_evtchn_do_upcall() won't stop processing events in
dom0. As this is done in irq handling, a crash might be the result.

In order to avoid that, delay further inter-domain events after some
time in xen_evtchn_do_upcall() by forcing eoi processing into a
worker on the same cpu, thus inhibiting new events coming in.

The time after which eoi processing is to be delayed is configurable
via a new module parameter "event_loop_timeout" which specifies the
maximum event loop time in jiffies (default: 2, the value was chosen
after some tests showing that a value of 2 was the lowest with only a
slight drop of dom0 network throughput while multiple guests
performed an event storm).

How long eoi processing will be delayed can be specified via another
parameter "event_eoi_delay" (again in jiffies, default 10, again the
value was chosen after testing with different delay values).

This is part of XSA-332.
Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Stefano Stabellini
Reviewed-by: Wei Liu
-
Today only fifo event channels have a cpu hotplug callback. In order
to prepare for more percpu (de)init work, move that callback into
events_base.c and add percpu_init() and percpu_deinit() hooks to
struct evtchn_ops.

This is part of XSA-332.
Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Wei Liu
-
Instead of disabling the irq when an event is received and enabling
it again when handled by the user process, use the lateeoi model.

This is part of XSA-332.
Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Tested-by: Stefano Stabellini
Reviewed-by: Stefano Stabellini
Reviewed-by: Jan Beulich
Reviewed-by: Wei Liu
-
In order to reduce the chance of the system becoming unresponsive due
to event storms triggered by a misbehaving pcifront, use the lateeoi irq
binding for pciback and unmask the event channel only just before
leaving the event handling function.

Restructure the handling to support that scheme. Basically an event can
come in for two reasons: either a normal request for a pciback action,
which is handled in a worker, or in case the guest has finished an AER
request which was requested by pciback.

When an AER request is issued to the guest and a normal pciback action
is currently active, issue an EOI early in order to be able to receive
another event when the AER request has been finished by the guest.

Let the worker processing the normal requests run until no further
request is pending, instead of starting a new worker in that case.
Issue the EOI only just before leaving the worker.

This scheme allows dropping the call to the generic function
xen_pcibk_test_and_schedule_op() after processing of any request, as
the handling of both request types is now separated more cleanly.

This is part of XSA-332.
Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Wei Liu
-
In order to reduce the chance of the system becoming unresponsive due
to event storms triggered by a misbehaving pvcallsfront, use the lateeoi
irq binding for pvcallsback and unmask the event channel only after
handling all write requests, which are the ones coming in via an irq.

This requires modifying the logic a little bit to not require an event
for each write request, but to keep the ioworker running until no
further data is found on the ring page to be processed.

This is part of XSA-332.
Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Stefano Stabellini
Reviewed-by: Wei Liu
-
In order to reduce the chance of the system becoming unresponsive due
to event storms triggered by a misbehaving scsifront, use the lateeoi
irq binding for scsiback and unmask the event channel only just before
leaving the event handling function.

In case of a ring protocol error don't issue an EOI, in order to avoid
the possibility of using that for producing an event storm. This at once
will result in no further calls of scsiback_irq_fn(), so the ring_error
struct member can be dropped and scsiback_do_cmd_fn() can signal the
protocol error via a negative return value.

This is part of XSA-332.
Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Wei Liu
-
In order to avoid tight event channel related IRQ loops, add a new
framework for "late EOI" handling: the IRQ the event channel is bound
to will be masked until the event has been handled and the related
driver is capable of handling another event. The driver is responsible
for unmasking the event channel via the new function xen_irq_lateeoi().

This is similar to binding an event channel to a threaded IRQ, but
without having to structure the driver accordingly.

In order to support future special handling in case a rogue guest
is sending lots of unsolicited events, add a flag to xen_irq_lateeoi()
which can be set by the caller to indicate the event was a spurious
one.

This is part of XSA-332.
Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Stefano Stabellini
Reviewed-by: Wei Liu
-
Unmasking a fifo event channel can result in unmasking it twice, once
directly in the kernel and once via a hypercall in case the event was
pending.

Fix that by doing the local unmask only if the event is not pending.

This is part of XSA-332.
Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
-
A follow-up patch will require a certain write to happen before an event
channel is unmasked.

While the memory barrier is not strictly necessary for all the callers,
the main one will need it. In order to avoid an extra memory barrier
when using fifo event channels, mandate evtchn_unmask() to provide
write ordering.

The 2-level event handling unmask operation is missing an appropriate
barrier, so add it. Fifo event channels are fine in this regard due to
using sync_cmpxchg().

This is part of XSA-332.
Cc: stable@vger.kernel.org
Suggested-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Julien Grall
Reviewed-by: Wei Liu
-
Today it can happen that an event channel is being removed from the
system while the event handling loop is active. This can lead to a
race resulting in crashes or WARN() splats when trying to access the
irq_info structure related to the event channel.

Fix this problem by using a rwlock taken as reader in the event
handling loop and as writer when deallocating the irq_info structure.

As the observed problem was a NULL dereference in evtchn_from_irq(),
make this function more robust against races by testing that the
irq_info pointer is not NULL before dereferencing it.

And finally make all accesses to evtchn_to_irq[row][col] atomic ones
in order to avoid seeing partial updates of an array element in irq
handling. Note that irq handling can be entered only for event channels
which have been valid before, so any not populated row isn't a problem
in this regard, as rows are only ever added and never removed.

This is XSA-331.
Cc: stable@vger.kernel.org
Reported-by: Marek Marczykowski-Górecki
Reported-by: Jinoh Kang
Signed-off-by: Juergen Gross
Reviewed-by: Stefano Stabellini
Reviewed-by: Wei Liu
19 Oct, 2020
1 commit
-
Replacing alloc_vm_area with get_vm_area_caller + apply_to_page_range
allows filling in the phys_addr values directly instead of doing another
loop over all addresses.

Signed-off-by: Christoph Hellwig
Signed-off-by: Andrew Morton
Reviewed-by: Boris Ostrovsky
Cc: Chris Wilson
Cc: Jani Nikula
Cc: Joonas Lahtinen
Cc: Juergen Gross
Cc: Matthew Auld
Cc: "Matthew Wilcox (Oracle)"
Cc: Minchan Kim
Cc: Nitin Gupta
Cc: Peter Zijlstra
Cc: Rodrigo Vivi
Cc: Stefano Stabellini
Cc: Tvrtko Ursulin
Cc: Uladzislau Rezki (Sony)
Link: https://lkml.kernel.org/r/20201002122204.1534411-10-hch@lst.de
Signed-off-by: Linus Torvalds
17 Oct, 2020
2 commits
-
Let's try to merge system ram resources we add, to minimize the number of
resources in /proc/iomem. We don't care about the boundaries of
individual chunks we added.

Signed-off-by: David Hildenbrand
Signed-off-by: Andrew Morton
Reviewed-by: Juergen Gross
Cc: Michal Hocko
Cc: Boris Ostrovsky
Cc: Stefano Stabellini
Cc: Roger Pau Monné
Cc: Julien Grall
Cc: Pankaj Gupta
Cc: Baoquan He
Cc: Wei Yang
Cc: Anton Blanchard
Cc: Ard Biesheuvel
Cc: Benjamin Herrenschmidt
Cc: Christian Borntraeger
Cc: Dan Williams
Cc: Dave Jiang
Cc: Eric Biederman
Cc: Greg Kroah-Hartman
Cc: Haiyang Zhang
Cc: Heiko Carstens
Cc: Jason Gunthorpe
Cc: Jason Wang
Cc: Kees Cook
Cc: "K. Y. Srinivasan"
Cc: Len Brown
Cc: Leonardo Bras
Cc: Libor Pechacek
Cc: Michael Ellerman
Cc: "Michael S. Tsirkin"
Cc: Nathan Lynch
Cc: "Oliver O'Halloran"
Cc: Paul Mackerras
Cc: Pingfan Liu
Cc: "Rafael J. Wysocki"
Cc: Stephen Hemminger
Cc: Thomas Gleixner
Cc: Vasily Gorbik
Cc: Vishal Verma
Cc: Wei Liu
Link: https://lkml.kernel.org/r/20200911103459.10306-8-david@redhat.com
Signed-off-by: Linus Torvalds
-
We soon want to pass flags, e.g., to mark added System RAM resources
mergeable. Prepare for that.

This patch is based on a similar patch by Oscar Salvador:
https://lkml.kernel.org/r/20190625075227.15193-3-osalvador@suse.de
Signed-off-by: David Hildenbrand
Signed-off-by: Andrew Morton
Reviewed-by: Juergen Gross # Xen related part
Reviewed-by: Pankaj Gupta
Acked-by: Wei Liu
Cc: Michal Hocko
Cc: Dan Williams
Cc: Jason Gunthorpe
Cc: Baoquan He
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: "Rafael J. Wysocki"
Cc: Len Brown
Cc: Greg Kroah-Hartman
Cc: Vishal Verma
Cc: Dave Jiang
Cc: "K. Y. Srinivasan"
Cc: Haiyang Zhang
Cc: Stephen Hemminger
Cc: Wei Liu
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
Cc: David Hildenbrand
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Boris Ostrovsky
Cc: Stefano Stabellini
Cc: "Oliver O'Halloran"
Cc: Pingfan Liu
Cc: Nathan Lynch
Cc: Libor Pechacek
Cc: Anton Blanchard
Cc: Leonardo Bras
Cc: Ard Biesheuvel
Cc: Eric Biederman
Cc: Julien Grall
Cc: Kees Cook
Cc: Roger Pau Monné
Cc: Thomas Gleixner
Cc: Wei Yang
Link: https://lkml.kernel.org/r/20200911103459.10306-5-david@redhat.com
Signed-off-by: Linus Torvalds
16 Oct, 2020
2 commits
-
Pull dma-mapping updates from Christoph Hellwig:
- rework the non-coherent DMA allocator
- move private definitions out of
- lower CMA_ALIGNMENT (Paul Cercueil)
- remove the omap1 dma address translation in favor of the common code
- make dma-direct aware of multiple dma offset ranges (Jim Quinlan)
- support per-node DMA CMA areas (Barry Song)
- increase the default seg boundary limit (Nicolin Chen)
- misc fixes (Robin Murphy, Thomas Tai, Xu Wang)
- various cleanups
* tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
ARM/ixp4xx: add a missing include of dma-map-ops.h
dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
dma-direct: factor out a dma_direct_alloc_from_pool helper
dma-direct: check for highmem pages in dma_direct_alloc_pages
dma-mapping: merge into
dma-mapping: move large parts of to kernel/dma
dma-mapping: move dma-debug.h to kernel/dma/
dma-mapping: remove
dma-mapping: merge into
dma-contiguous: remove dma_contiguous_set_default
dma-contiguous: remove dev_set_cma_area
dma-contiguous: remove dma_declare_contiguous
dma-mapping: split
cma: decrease CMA_ALIGNMENT lower limit to 2
firewire-ohci: use dma_alloc_pages
dma-iommu: implement ->alloc_noncoherent
dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
dma-mapping: add a new dma_alloc_pages API
dma-mapping: remove dma_cache_sync
53c700: convert to dma_alloc_noncoherent
...
-
Pull drm updates from Dave Airlie:
"Not a major amount of change, the i915 trees got split into display
and gt trees to better facilitate higher level review, and there's a
major refactoring of i915 GEM locking to use more core kernel concepts
(like ww-mutexes). msm gets per-process pagetables, older AMD SI cards
get DC support, nouveau got a bump in displayport support with common
code extraction from i915.

Outside of drm this contains a couple of patches for hexint
moduleparams which you've acked, and a virtio common code tree that
you should also get via its regular path.

New driver:
- Cadence MHDP8546 DisplayPort bridge driver

core:
- cross-driver scatterlist cleanups
- devm_drm conversions
- remove drm_dev_init
- devm_drm_dev_alloc conversion

ttm:
- lots of refactoring and cleanups

bridges:
- chained bridge support in more drivers

panel:
- misc new panels

scheduler:
- cleanup priority levels

displayport:
- refactor i915 code into helpers for nouveau

i915:
- split into display and GT trees
- WW locking refactoring in GEM
- execbuf2 extension mechanism
- syncobj timeline support
- GEN 12 HOBL display powersaving
- Rocket Lake display additions
- Disable FBC on Tigerlake
- Tigerlake Type-C + DP improvements
- Hotplug interrupt refactoring

amdgpu:
- Sienna Cichlid updates
- Navy Flounder updates
- DCE6 (SI) support for DC
- Plane rotation enabled
- TMZ state info ioctl
- PCIe DPC recovery support
- DC interrupt handling refactor
- OLED panel fixes

amdkfd:
- add SMI events for thermal throttling
- SMI interface events ioctl update
- process eviction counters

radeon:
- move to dma_ for allocations
- expose sclk via sysfs

msm:
- DSI support for sm8150/sm8250
- per-process GPU pagetable support
- Displayport support

mediatek:
- move HDMI phy driver to PHY
- convert mtk-dpi to bridge API
- disable mt2701 tmds

tegra:
- bridge support

exynos:
- misc cleanups

vc4:
- dual display cleanups

ast:
- cleanups

gma500:
- conversion to GPIOd API

hisilicon:
- misc reworks

ingenic:
- clock handling and format improvements

mcde:
- DSI support

mgag200:
- desktop g200 support

mxsfb:
- i.MX7 + i.MX8M
- alpha plane support

panfrost:
- devfreq support
- amlogic SoC support

ps8640:
- EDID from eDP retrieval

tidss:
- AM65xx YUV workaround

virtio:
- virtio-gpu exported resources

rcar-du:
- R8A7742, R8A774E1 and R8A77961 support
- YUV planar format fixes
- non-visible plane handling
- VSP device reference count fix
- Kconfig fix to avoid displaying disabled options in .config"

* tag 'drm-next-2020-10-15' of git://anongit.freedesktop.org/drm/drm: (1494 commits)
drm/ingenic: Fix bad revert
drm/amdgpu: Fix invalid number of character '{' in amdgpu_acpi_init
drm/amdgpu: Remove warning for virtual_display
drm/amdgpu: kfd_initialized can be static
drm/amd/pm: setup APU dpm clock table in SMU HW initialization
drm/amdgpu: prevent spurious warning
drm/amdgpu/swsmu: fix ARC build errors
drm/amd/display: Fix OPTC_DATA_FORMAT programming
drm/amd/display: Don't allow pstate if no support in blank
drm/panfrost: increase readl_relaxed_poll_timeout values
MAINTAINERS: Update entry for st7703 driver after the rename
Revert "gpu/drm: ingenic: Add option to mmap GEM buffers cached"
drm/amd/display: HDMI remote sink need mode validation for Linux
drm/amd/display: Change to correct unit on audio rate
drm/amd/display: Avoid set zero in the requested clk
drm/amdgpu: align frag_end to covered address space
drm/amdgpu: fix NULL pointer dereference for Renoir
drm/vmwgfx: fix regression in thp code due to ttm init refactor.
drm/amdgpu/swsmu: add interrupt work handler for smu11 parts
drm/amdgpu/swsmu: add interrupt work function
...
15 Oct, 2020
1 commit
-
Pull xen updates from Juergen Gross:
- two small cleanup patches
- avoid error messages when initializing MCA banks in a Xen dom0
- a small series for converting the Xen gntdev driver to use
pin_user_pages*() instead of get_user_pages*()
- intermediate fix for running as a Xen guest on Arm with KPTI enabled
(the final solution will need new Xen functionality)

* tag 'for-linus-5.10b-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
x86/xen: Fix typo in xen_pagetable_p2m_free()
x86/xen: disable Firmware First mode for correctable memory errors
xen/arm: do not setup the runstate info page if kpti is enabled
xen: remove redundant initialization of variable ret
xen/gntdev.c: Convert get_user_pages*() to pin_user_pages*()
xen/gntdev.c: Mark pages as dirty
14 Oct, 2020
2 commits
-
In support of device-dax growing the ability to front physically
dis-contiguous ranges of memory, update devm_memremap_pages() to track
multiple ranges with a single reference counter and devm instance.

Convert all [devm_]memremap_pages() users to specify the number of ranges
they are mapping in their 'struct dev_pagemap' instance.

Signed-off-by: Dan Williams
Signed-off-by: Andrew Morton
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Vishal Verma
Cc: Vivek Goyal
Cc: Dave Jiang
Cc: Ben Skeggs
Cc: David Airlie
Cc: Daniel Vetter
Cc: Ira Weiny
Cc: Bjorn Helgaas
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: Stefano Stabellini
Cc: "Jérôme Glisse"
Cc: Ard Biesheuvel
Cc: Ard Biesheuvel
Cc: Borislav Petkov
Cc: Brice Goglin
Cc: Catalin Marinas
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Greg Kroah-Hartman
Cc: "H. Peter Anvin"
Cc: Hulk Robot
Cc: Ingo Molnar
Cc: Jason Gunthorpe
Cc: Jason Yan
Cc: Jeff Moyer
Cc: "Jérôme Glisse"
Cc: Jia He
Cc: Joao Martins
Cc: Jonathan Cameron
Cc: kernel test robot
Cc: Mike Rapoport
Cc: Pavel Tatashin
Cc: Peter Zijlstra
Cc: "Rafael J. Wysocki"
Cc: Randy Dunlap
Cc: Thomas Gleixner
Cc: Tom Lendacky
Cc: Wei Yang
Cc: Will Deacon
Link: https://lkml.kernel.org/r/159643103789.4062302.18426128170217903785.stgit@dwillia2-desk3.amr.corp.intel.com
Link: https://lkml.kernel.org/r/160106116293.30709.13350662794915396198.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Linus Torvalds
-
The 'struct resource' in 'struct dev_pagemap' is only used for holding
resource span information. The other fields, 'name', 'flags', 'desc',
'parent', 'sibling', and 'child' are all unused wasted space.

This is in preparation for introducing a multi-range extension of
devm_memremap_pages().

The bulk of this change is unwinding all the places internal to libnvdimm
that used 'struct resource' unnecessarily, and replacing instances of
'struct dev_pagemap'.res with 'struct dev_pagemap'.range.

P2PDMA had a minor usage of the resource flags field, but only to report
failures with "%pR". That is replaced with an open coded print of the
range.

[dan.carpenter@oracle.com: mm/hmm/test: use after free in dmirror_allocate_chunk()]
Link: https://lkml.kernel.org/r/20200926121402.GA7467@kadam

Signed-off-by: Dan Williams
Signed-off-by: Dan Carpenter
Signed-off-by: Andrew Morton
Reviewed-by: Boris Ostrovsky [xen]
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Vishal Verma
Cc: Vivek Goyal
Cc: Dave Jiang
Cc: Ben Skeggs
Cc: David Airlie
Cc: Daniel Vetter
Cc: Ira Weiny
Cc: Bjorn Helgaas
Cc: Juergen Gross
Cc: Stefano Stabellini
Cc: "Jérôme Glisse"
Cc: Andy Lutomirski
Cc: Ard Biesheuvel
Cc: Ard Biesheuvel
Cc: Borislav Petkov
Cc: Brice Goglin
Cc: Catalin Marinas
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Greg Kroah-Hartman
Cc: "H. Peter Anvin"
Cc: Hulk Robot
Cc: Ingo Molnar
Cc: Jason Gunthorpe
Cc: Jason Yan
Cc: Jeff Moyer
Cc: Jia He
Cc: Joao Martins
Cc: Jonathan Cameron
Cc: kernel test robot
Cc: Mike Rapoport
Cc: Pavel Tatashin
Cc: Peter Zijlstra
Cc: "Rafael J. Wysocki"
Cc: Randy Dunlap
Cc: Thomas Gleixner
Cc: Tom Lendacky
Cc: Wei Yang
Cc: Will Deacon
Link: https://lkml.kernel.org/r/159643103173.4062302.768998885691711532.stgit@dwillia2-desk3.amr.corp.intel.com
Link: https://lkml.kernel.org/r/160106115761.30709.13539840236873663620.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Linus Torvalds
06 Oct, 2020
2 commits
-
Move more nitty gritty DMA implementation details into the common
internal header.

Signed-off-by: Christoph Hellwig
-
Split out all the bits that are purely for dma_map_ops implementations
and related code into a new header so that they
don't get pulled into all the drivers. That also means the architecture
specific is not pulled in by
any more, which leads to missing includes that were pulled in by the
x86 or arm versions in a few not overly portable drivers.

Signed-off-by: Christoph Hellwig
05 Oct, 2020
3 commits
-
After commit 9f51c05dc41a ("pvcalls-front: Avoid
get_free_pages(GFP_KERNEL) under spinlock"), the variable ret is being
initialized with '-ENOMEM', which is meaningless. So remove it.

Signed-off-by: Jing Xiangfeng
Link: https://lore.kernel.org/r/20200919031702.32192-1-jingxiangfeng@huawei.com
Reviewed-by: Juergen Gross
Signed-off-by: Boris Ostrovsky
-
In 2019, we introduced pin_user_pages*() and now we are converting
get_user_pages*() to the new API as appropriate. [1] & [2] could
be referred to for more information. This is case 5 as per document [1].

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

Signed-off-by: Souptick Joarder
Cc: John Hubbard
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: David Vrabel
Link: https://lore.kernel.org/r/1599375114-32360-2-git-send-email-jrdr.linux@gmail.com
Reviewed-by: Boris Ostrovsky
Signed-off-by: Boris Ostrovsky
-
There seems to be a bug in the original code: when gntdev_get_page()
is called with writeable=true, the page needs to be marked dirty
before being put.

To address this, a bool writeable is added in gntdev_copy_batch, set
in gntdev_grant_copy_seg() (and the `writeable` argument to
gntdev_get_page() is dropped), and then, based on batch->writeable,
set_page_dirty_lock() is used.

Fixes: a4cdb556cae0 ("xen/gntdev: add ioctl for grant copy")
Suggested-by: Boris Ostrovsky
Signed-off-by: Souptick Joarder
Cc: John Hubbard
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: David Vrabel
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/1599375114-32360-1-git-send-email-jrdr.linux@gmail.com
Reviewed-by: Boris Ostrovsky
Signed-off-by: Boris Ostrovsky
01 Oct, 2020
1 commit
-
Since commit c330fb1ddc0a ("XEN uses irqdesc::irq_data_common::handler_data to store a per interrupt XEN data pointer which contains XEN specific information.")
Xen is using the chip_data pointer for storing IRQ specific data. When
running as a HVM domain this can result in problems for legacy IRQs, as
those might use chip_data for their own purposes.

Use a local array for this purpose in case of legacy IRQs, avoiding the
double use.

Cc: stable@vger.kernel.org
Fixes: c330fb1ddc0a ("XEN uses irqdesc::irq_data_common::handler_data to store a per interrupt XEN data pointer which contains XEN specific information.")
Signed-off-by: Juergen Gross
Tested-by: Stefan Bader
Reviewed-by: Boris Ostrovsky
Link: https://lore.kernel.org/r/20200930091614.13660-1-jgross@suse.com
Signed-off-by: Juergen Gross