author     Jason Gunthorpe <jgg@nvidia.com>    2025-02-19 17:31:38 -0800
committer  Jason Gunthorpe <jgg@nvidia.com>    2025-02-21 10:04:12 -0400
commit     288683c92b1abc32277c83819bea287af614d239
tree       27061a7ce97c331eee0dec21bed266b2a3fc815a
parent     9349887e93009331e751854843f73a086bef4018
iommu: Make iommu_dma_prepare_msi() into a generic operation
SW_MSI means the IOMMU translates an MSI message before the MSI message is
delivered to the interrupt controller. On such systems an iommu_domain must
have a translation for the MSI message for interrupts to work.
The IRQ subsystem calls into the IOMMU to request that a physical page be
set up to receive MSI messages, and the IOMMU then establishes an IOVA that
maps to that physical page. Ultimately the IOVA is programmed into the
device via the msi_msg.
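
As a rough sketch of that flow from an MSI irqchip driver's perspective
(the call sites below are hypothetical; iommu_dma_prepare_msi() and
iommu_dma_compose_msi_msg() are the pre-existing dma-iommu helpers this
patch builds on):

    /* Hypothetical irqchip call sites, for illustration only. */
    static int example_msi_prepare(struct msi_desc *desc, phys_addr_t doorbell)
    {
            /* Ask the IOMMU layer to map the doorbell page into the domain. */
            return iommu_dma_prepare_msi(desc, doorbell);
    }

    static void example_write_msg(struct msi_desc *desc, struct msi_msg *msg)
    {
            /* Rewrite msg's address from the physical doorbell to the IOVA. */
            iommu_dma_compose_msi_msg(desc, msg);
            /* ...then program msg into the device as usual. */
    }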
Generalize this by allowing iommu_domain owners to provide their own
implementation of this mapping: add a function pointer to struct
iommu_domain that the domain owner fills in with its implementation.
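
A minimal sketch of the hook, assuming its signature mirrors
iommu_dma_prepare_msi() (member name and placement are illustrative):

    struct iommu_domain {
            /* ... existing members ... */
    #if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
            /*
             * Installed by the domain owner; maps the MSI doorbell page at
             * msi_addr into the domain for this descriptor.
             */
            int (*sw_msi)(struct iommu_domain *domain, struct msi_desc *desc,
                          phys_addr_t msi_addr);
    #endif
    };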
Have dma-iommu supply its implementation for IOMMU_DOMAIN_DMA types during
the iommu_get_dma_cookie() path. For the IOMMU_DOMAIN_UNMANAGED types used
by VFIO (and iommufd for now), set the same iommu_dma_sw_msi in the
iommu_get_msi_cookie() path.
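
A sketch of that wiring, with the existing cookie allocation elided
(iommu_dma_sw_msi is the implementation named above; the exact code is
illustrative):

    int iommu_get_dma_cookie(struct iommu_domain *domain)
    {
            /* ... existing IOVA cookie allocation ... */
            domain->sw_msi = iommu_dma_sw_msi;      /* DMA domains */
            return 0;
    }

    int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
    {
            /* ... existing MSI cookie allocation ... */
            domain->sw_msi = iommu_dma_sw_msi;      /* UNMANAGED (VFIO/iommufd) */
            return 0;
    }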
Hold the group mutex while in iommu_dma_prepare_msi() to ensure the domain
doesn't change or get freed while it is running; races with IRQ operations
from VFIO and domain changes from iommufd are possible here.
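
A minimal sketch of the resulting generic dispatch, assuming the group and
its mutex are reachable from the device here (details illustrative):

    int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
    {
            struct device *dev = msi_desc_to_dev(desc);
            struct iommu_group *group = dev->iommu_group;
            int ret = 0;

            if (!group)
                    return 0;

            /* Pin the domain against concurrent change or free. */
            mutex_lock(&group->mutex);
            if (group->domain && group->domain->sw_msi)
                    ret = group->domain->sw_msi(group->domain, desc, msi_addr);
            mutex_unlock(&group->mutex);
            return ret;
    }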
Replace the msi_prepare_lock with a lockdep assertion on the group mutex as
documentation. For dma-iommu.c each iommu_domain is unique to a group.
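
In dma-iommu's implementation that might look like the following (a sketch;
it assumes the existing iommu_group_mutex_assert() helper and elides the
actual doorbell page setup):

    static int iommu_dma_sw_msi(struct iommu_domain *domain,
                                struct msi_desc *desc, phys_addr_t msi_addr)
    {
            struct device *dev = msi_desc_to_dev(desc);

            /* Documents the locking contract that replaced msi_prepare_lock. */
            iommu_group_mutex_assert(dev);

            /* ... look up or create the MSI doorbell mapping ... */
            return 0;
    }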
Link: https://patch.msgid.link/r/4ca696150d2baee03af27c4ddefdb7b0b0280e7b.1740014950.git.nicolinc@nvidia.com
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>