If we gate the fence destruction with a check telling us whether there are
any valid pointers in there, we can eliminate the need for two basically
identical exit paths.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit bea29bb0dd29012949cd44fdb122465a9fd5cf91)
Huge input values in amdgpu_userq_wait_ioctl can lead to an OOM and
could be exploited.
So check these input values against AMDGPU_USERQ_MAX_HANDLES, which is
a value big enough for genuine use cases, to avoid the potential OOM.
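For illustration, the check boils down to something like this at the top of
the ioctl (field names here are assumptions, not the literal patch):

    if (wait_info->num_syncobj_handles > AMDGPU_USERQ_MAX_HANDLES ||
        wait_info->num_bo_read_handles > AMDGPU_USERQ_MAX_HANDLES ||
        wait_info->num_bo_write_handles > AMDGPU_USERQ_MAX_HANDLES)
            return -EINVAL;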
v2: squash in Srini's fix
Signed-off-by: Sunil Khatri <sunil.khatri@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit fcec012c664247531aed3e662f4280ff804d1476)
Cc: stable@vger.kernel.org
Huge input values in amdgpu_userq_signal_ioctl can lead to an OOM and
could be exploited.
So check these input values against AMDGPU_USERQ_MAX_HANDLES, which is
a value big enough for genuine use cases, to avoid the potential OOM.
Signed-off-by: Sunil Khatri <sunil.khatri@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit be267e15f99bc97cbe202cd556717797cdcf79a5)
Cc: stable@vger.kernel.org
Userspace can either deliberately pass in a too small num_fences, or the
required number can legitimately grow between the two calls to the userq
wait ioctl. In both cases we do not want to emit the kernel warning
backtrace, since nothing is wrong with the kernel and userspace will simply
get an errno reported back. So let's simply drop the WARN_ONs.
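In other words, the shape of the change is roughly (illustrative, not the
exact diff):

    -       if (WARN_ON(num_fences >= wait_info->num_fences))
    -               return -EINVAL;
    +       if (num_fences >= wait_info->num_fences)
    +               return -EINVAL;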
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Fixes: a292fdecd7 ("drm/amdgpu: Implement userqueue signal/wait IOCTL")
Cc: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit 2c333ea579de6cc20ea7bc50e9595ef72863e65c)
This was done entirely with mindless brute force, using
git grep -l '\<k[vmz]*alloc_objs*(.*, GFP_KERNEL)' |
xargs sed -i 's/\(alloc_objs*(.*\), GFP_KERNEL)/\1)/'
to convert the new alloc_obj() users that had a simple GFP_KERNEL
argument to just drop that argument.
Note that due to the extreme simplicity of the scripting, any slightly
more complex cases spread over multiple lines would not be triggered:
they definitely exist, but this covers the vast bulk of the cases, and
the resulting diff is also then easier to check automatically.
For the same reason the 'flex' versions will be done as a separate
conversion.
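For illustration (the type and variable names are just an example), the
script turns

    fences = kvmalloc_objs(struct dma_fence *, num_fences, GFP_KERNEL);

into

    fences = kvmalloc_objs(struct dma_fence *, num_fences);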
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is the result of running the Coccinelle script from
scripts/coccinelle/api/kmalloc_objs.cocci. The script is designed to
avoid scalar types (which need careful case-by-case checking), and
instead replace kmalloc-family calls that allocate struct or union
object instances:
  Single allocations:     kmalloc(sizeof(TYPE), ...)
  are replaced with:      kmalloc_obj(TYPE, ...)

  Array allocations:      kmalloc_array(COUNT, sizeof(TYPE), ...)
  are replaced with:      kmalloc_objs(TYPE, COUNT, ...)

  Flex array allocations: kmalloc(struct_size(PTR, FAM, COUNT), ...)
  are replaced with:      kmalloc_flex(*PTR, FAM, COUNT, ...)

  (where TYPE may also be *VAR)
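For example (illustrative only, the type name is just an example), an
allocation such as

    fence = kmalloc(sizeof(*fence), GFP_KERNEL);

becomes

    fence = kmalloc_obj(*fence, GFP_KERNEL);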
The resulting allocations no longer return "void *", instead returning
"TYPE *".
Signed-off-by: Kees Cook <kees@kernel.org>
In error scenarios (e.g., malformed commands), user queue fences may never
be signaled, causing processes to wait indefinitely. To address this while
preserving the requirement of infinite fence waits, implement an independent
timeout detection mechanism:
1. Initialize a hang detect work when creating a user queue (one-time setup)
2. Start the work with queue-type-specific timeout (gfx/compute/sdma) when
the last fence is created via amdgpu_userq_signal_ioctl (per-fence timing)
3. Trigger queue reset logic if the timer expires before the fence is signaled
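A rough sketch of the flow (field and function names here are assumptions,
not the actual patch):

    /* 1. one-time setup when the user queue is created */
    INIT_DELAYED_WORK(&queue->hang_detect_work, amdgpu_userq_hang_detect_worker);

    /* 2. (re)arm whenever the latest fence is created; the timeout is the
     *    gfx/compute/sdma specific one (hypothetical helper, value in jiffies)
     */
    mod_delayed_work(system_wq, &queue->hang_detect_work,
                     amdgpu_userq_get_timeout(queue));

    /* 3. worker: if queue->last_fence is still unsignaled here, trigger the
     *    queue reset logic
     */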
v2: make timeout per queue type (adev->gfx_timeout vs adev->compute_timeout vs adev->sdma_timeout) to be consistent with kernel queues. (Alex)
v3: The timeout detection must be independent from the fence, e.g. you don't wait for a timeout on the fence
but rather have the timeout start as soon as the fence is initialized. (Christian)
v4: replace the timer with the `hang_detect_work` delayed work.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Jesse Zhang <jesse.zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The user mode queue keeps a pointer to the most recent fence in
userq->last_fence. This pointer holds an extra dma_fence reference.
When the queue is destroyed, we free the fence driver and its xarray,
but we forgot to drop the last_fence reference.
Because of the missing dma_fence_put(), the last fence object can stay
alive when the driver unloads. This leaves an allocated object in the
amdgpu_userq_fence slab cache and triggers a warning when the cache is
destroyed. This is visible during driver unload as:
BUG amdgpu_userq_fence: Objects remaining on __kmem_cache_shutdown()
kmem_cache_destroy amdgpu_userq_fence: Slab cache still has objects
Call Trace:
kmem_cache_destroy
amdgpu_userq_fence_slab_fini
amdgpu_exit
__do_sys_delete_module
Fix this by putting userq->last_fence and clearing the pointer during
amdgpu_userq_fence_driver_free().
This makes sure the fence reference is released and the slab cache is
empty when the module exits.
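The fix boils down to the following in amdgpu_userq_fence_driver_free()
(sketch):

    dma_fence_put(userq->last_fence);
    userq->last_fence = NULL;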
v2: Update to only release userq->last_fence with dma_fence_put()
(Christian)
Fixes: edc762a51c ("drm/amdgpu/userq: move some code around")
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Srinivasan Shanmugam <srinivasan.shanmugam@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
These IOCTLs shouldn't be called when userqs are not
enabled. Make sure they are enabled before executing
the IOCTLs.
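i.e. an early-out at the top of each IOCTL, roughly (the helper name here is
hypothetical):

    if (!amdgpu_userq_enabled(adev))
            return -ENOTSUPP;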
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Change the gmc macros AMDGPU_GMC_HOLE_START/END/MASK to 57 bit if the vm
root level is PDB3 for 5-level page tables.
The macros access adev without taking it as a parameter in order to
minimize the code changes needed to support 57 bit; as a result, an adev
variable has to be added in several places that use the macros.
Because an adev definition is not available in all amdgpu C files which
include amdgpu_gmc.h, change the inline function amdgpu_gmc_sign_extend
into a macro.
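The inline-to-macro change looks roughly like this (sketch; the real macro
may differ in detail):

    #define amdgpu_gmc_sign_extend(addr) \
            ((addr) >= AMDGPU_GMC_HOLE_START ? (addr) | AMDGPU_GMC_HOLE_END : (addr))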
Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Acked-by: Felix Kuehling <felix.kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Fix a potential deadlock caused by inconsistent spinlock usage
between interrupt and process contexts in the userq fence driver.
The issue occurs when amdgpu_userq_fence_driver_process() is called
from both:
- Interrupt context: gfx_v11_0_eop_irq() -> amdgpu_userq_fence_driver_process()
- Process context: amdgpu_eviction_fence_suspend_worker() ->
amdgpu_userq_fence_driver_force_completion() -> amdgpu_userq_fence_driver_process()
In interrupt context, the spinlock was acquired without disabling
interrupts, leaving it in {IN-HARDIRQ-W} state. When the same lock
is acquired in process context, the kernel detects inconsistent
locking since the process context acquisition would enable interrupts
while holding a lock previously acquired in interrupt context.
Kernel log shows:
[ 4039.310790] inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
[ 4039.310804] kworker/7:2/409 [HC0[0]:SC0[0]:HE1:SE1] takes:
[ 4039.310818] ffff9284e1bed000 (&fence_drv->fence_list_lock){?...}-{3:3},
[ 4039.310993] {IN-HARDIRQ-W} state was registered at:
[ 4039.311004] lock_acquire+0xc6/0x300
[ 4039.311018] _raw_spin_lock+0x39/0x80
[ 4039.311031] amdgpu_userq_fence_driver_process.part.0+0x30/0x180 [amdgpu]
[ 4039.311146] amdgpu_userq_fence_driver_process+0x17/0x30 [amdgpu]
[ 4039.311257] gfx_v11_0_eop_irq+0x132/0x170 [amdgpu]
Fix by using spin_lock_irqsave()/spin_unlock_irqrestore() to properly
manage interrupt state regardless of calling context.
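i.e. the fence list walk uses the irqsave variants regardless of context
(sketch):

    unsigned long flags;

    spin_lock_irqsave(&fence_drv->fence_list_lock, flags);
    /* walk the list and signal completed fences */
    spin_unlock_irqrestore(&fence_drv->fence_list_lock, flags);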
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Jesse Zhang <Jesse.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This commit refactors the AMDGPU userqueue management subsystem to replace
IDR (ID Allocation) with XArray for improved performance, scalability, and
maintainability. The changes address several issues with the previous IDR
implementation and provide better locking semantics.
Key changes:
1. **Global XArray Introduction**:
- Added `userq_doorbell_xa` to `struct amdgpu_device` for global queue tracking
- Uses doorbell_index as key for efficient global lookup
- Replaces the previous `userq_mgr_list` linked list approach
2. **Per-process XArray Conversion**:
- Replaced `userq_idr` with `userq_mgr_xa` in `struct amdgpu_userq_mgr`
- Maintains per-process queue tracking with queue_id as key
- Uses XA_FLAGS_ALLOC for automatic ID allocation
3. **Locking Improvements**:
- Removed global `userq_mutex` from `struct amdgpu_device`
- Replaced with fine-grained XArray locking using XArray's internal spinlocks
4. **Runtime Idle Check Optimization**:
- Updated `amdgpu_runtime_idle_check_userq()` to use xa_empty
5. **Queue Management Functions**:
- Converted all IDR operations to equivalent XArray functions:
- `idr_alloc()` → `xa_alloc()`
- `idr_find()` → `xa_load()`
- `idr_remove()` → `xa_erase()`
- `idr_for_each()` → `xa_for_each()`
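For example, installing a new queue into the per-process tracking now looks
roughly like this (variable names are assumptions):

    r = xa_alloc(&uq_mgr->userq_mgr_xa, &queue_id, queue,
                 xa_limit_32b, GFP_KERNEL);

while the global doorbell tracking stores the queue with xa_store_irq()
keyed on the doorbell index (see the v4 note below).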
Benefits:
- **Performance**: XArray provides better scalability for large numbers of queues
- **Memory Efficiency**: Reduced memory overhead compared to IDR
- **Thread Safety**: Improved locking semantics with XArray's internal spinlocks
v2: rename userq_global_xa/userq_xa to userq_doorbell_xa/userq_mgr_xa
Remove xa_lock and use its own lock.
v3: Set queue->userq_mgr = uq_mgr in amdgpu_userq_create()
v4: use xa_store_irq (Christian)
hold the read side of the reset lock while creating/destroying queues and the manager data structure. (Christian)
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Suggested-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Jesse Zhang <Jesse.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This commit fixes a potential race condition in the userqueue fence
signaling mechanism by replacing dma_fence_is_signaled_locked() with
dma_fence_is_signaled().
The issue occurred because:
1. dma_fence_is_signaled_locked() should only be used when holding
the fence's individual lock, not just the fence list lock
2. Using the locked variant without the proper fence lock could lead
to double-signaling scenarios:
- Hardware completion signals the fence
- Software path also tries to signal the same fence
By using dma_fence_is_signaled() instead, we properly handle the
locking hierarchy and avoid the race condition while still maintaining
the necessary synchronization through the fence_list_lock.
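The change itself is just the substitution, roughly (illustrative shape,
not the exact diff):

    -       if (dma_fence_is_signaled_locked(fence))
    +       if (dma_fence_is_signaled(fence))
                    continue;       /* already signaled, nothing to do */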
v2: drop the comment (Christian)
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Jesse Zhang <Jesse.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add support for forcing completion of userq fences.
This is needed for userq resets and asic resets so that we
can set the error on the fence and force completion.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch only affects 32bit systems. There are several integer
overflow bugs here but only the "sizeof(u32) * num_syncobj"
multiplication is a problem at runtime. (The last lines of this patch).
These variables are u32 variables that come from the user. The issue
is the multiplications can overflow leading to us allocating a smaller
buffer than intended. For the first couple integer overflows, the
syncobj_handles = memdup_user() allocation is immediately followed by
a kmalloc_array():
syncobj = kmalloc_array(num_syncobj_handles, sizeof(*syncobj), GFP_KERNEL);
In that situation the kmalloc_array() works as a bounds check, and we
haven't accessed the syncobj_handles[] array yet, so the integer overflow
is harmless.
But the "num_syncobj" multiplication doesn't have that and the integer
overflow could lead to an out of bounds access.
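One way to close that hole (a sketch of the approach; names are
illustrative) is to let an overflow-checked helper do the multiplication:

    -       points = memdup_user(uptr, sizeof(u32) * num_syncobj);
    +       points = memdup_array_user(uptr, num_syncobj, sizeof(u32));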
Fixes: a292fdecd7 ("drm/amdgpu: Implement userqueue signal/wait IOCTL")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
With the goal of reducing the need for drivers to touch (and dereference)
fence->ops, we move the 64-bit seqnos flag from struct dma_fence_ops to
the fence->flags.
Drivers which were setting this flag are changed to use the new
dma_fence_init64() instead of dma_fence_init().
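Fence producers that want 64-bit seqnos now simply do something like
(sketch; the arguments shown are illustrative):

    dma_fence_init64(&userq_fence->base, &amdgpu_userq_fence_ops,
                     &userq_fence->lock, fence_drv->context, wptr);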
v2:
* Streamlined init and added kerneldoc.
* Rebase for amdgpu userq which landed since.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Reviewed-by: Christian König <christian.koenig@amd.com> # v1
Signed-off-by: Tvrtko Ursulin <tursulin@ursulin.net>
Link: https://lore.kernel.org/r/20250515095004.28318-3-tvrtko.ursulin@igalia.com
Keep only the latest fences to reduce the number of values
given back to userspace.
v2: - Export this code from dma-fence-unwrap.c (by Christian).
v3: - Split this into a dma_buf patch and an amd userq patch (by Sunil).
- No need to add a new function, just re-use the existing one (by Christian).
v4: Export the dma_fence_dedup_array function and use it (by Christian).
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Sunil Khatri <sunil.khatri@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Arvind Yadav <Arvind.Yadav@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This error path should call amdgpu_bo_unreserve() before returning.
Fixes: d8675102ba ("drm/amdgpu: add vm root BO lock before accessing the vm")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
s/userqueue/userq/
1. remove the mix of amdgpu_userqueue and amdgpu_userq
2. to be consistent with other files such as amdgpu_userq_fence.c
3. it's shorter
Reviewed-by: Prike Liang <Prike.Liang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
1) Checkpatch complains if we print an error message for kzalloc()
failure. The kzalloc() failure already has its own error messages
built in. Also this allocation is small enough that it is guaranteed
to succeed.
2) Return directly instead of doing a goto free_fence_drv. The
"fence_drv" is already NULL so no cleanup is necessary.
Reviewed-by: Arvind Yadav <arvind.yadav@amd.com>
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The goto frees "fence_drv" so this is a double free bug. There is no
need to call amdgpu_seq64_free(adev, fence_drv->va) since the seq64
allocation failed so change the goto to goto free_fence_drv. Also
propagate the error code from amdgpu_seq64_alloc() instead of hard coding
it to -ENOMEM.
Fixes: e7cf21fbb2 ("drm/amdgpu: Few optimization and fixes for userq fence driver")
Reviewed-by: Arvind Yadav <arvind.yadav@amd.com>
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Move some userq fence handling code into amdgpu_userq_fence.c.
This matches the other code in that file.
Reviewed-by: Sunil Khatri <sunil.khatri@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add the correct fence count variable [num_fences] to the fences
array iteration to handle the userq / non-userq fences.
v2:(Christian)
- All fences in the array either come from some reservation object
or drm_syncobj. If any of those are NULL then there is a bug
somewhere else.
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The basic idea in this redesign is to add an eviction fence only in the UQ
resume path. When no userqueue is present, keep ev_fence as NULL.
Main changes are:
- do not create the eviction fence during evf_mgr_init, keeping
evf_mgr->ev_fence=NULL until UQ get active.
- do not replace the ev_fence in evf_resume path, but replace it only in
uq_resume path, so remove all the unnecessary code from ev_fence_resume.
- add a new helper function (amdgpu_userqueue_ensure_ev_fence) which
will do the following:
- flush any pending uq_resume work, so that it could create an
eviction_fence
- if there is no pending uq_resume_work, add a uq_resume work and
wait for it to execute so that we always have a valid ev_fence
- call this helper function from two places, to ensure we have a valid
ev_fence:
- when a new uq is created
- when a new uq completion fence is created
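A rough sketch of the helper described above (the internals here are
assumptions based on this description, not the actual patch):

    struct amdgpu_eviction_fence *
    amdgpu_userqueue_ensure_ev_fence(struct amdgpu_userq_mgr *uq_mgr,
                                     struct amdgpu_eviction_fence_mgr *evf_mgr)
    {
            /* let any already queued resume work run and create the fence */
            flush_delayed_work(&uq_mgr->resume_work);

            /* nothing was pending: queue the resume work ourselves and wait */
            if (!evf_mgr->ev_fence) {
                    schedule_delayed_work(&uq_mgr->resume_work, 0);
                    flush_delayed_work(&uq_mgr->resume_work);
            }

            return evf_mgr->ev_fence;
    }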
v2: Worked on review comments by Christian.
v3: Addressed few more review comments by Christian.
v4: Move mutex lock outside of the amdgpu_userqueue_suspend()
function (Christian).
v5: squash in build fix (Alex)
Cc: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Arvind Yadav <arvind.yadav@amd.com>
Signed-off-by: Shashank Sharma <shashank.sharma@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds suspend support for gfx userqueues. It typically does
the following:
- adds an enable_signaling function for the eviction fence, so that it
can trigger the userqueue suspend,
- adds a delayed work to handle suspending of the eviction_fence
- adds a suspend function to handle suspending of userqueues which
suspends all the queues under this userq manager and signals the
eviction fence,
- adds a function to replace the old eviction fence with a new one and
attach it to each of the objects,
- adds reference of userq manager in the eviction fence container so
that it can be used in the suspend function.
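The eviction fence's enable_signaling callback then becomes the trigger
point for the suspend, roughly (sketch; the container_of helper and field
names are assumptions):

    static bool amdgpu_eviction_fence_enable_signaling(struct dma_fence *f)
    {
            struct amdgpu_eviction_fence *ev_fence = to_amdgpu_eviction_fence(f);

            /* kick the delayed work that suspends all userqueues of this manager */
            schedule_delayed_work(&ev_fence->evf_mgr->suspend_work, 0);

            return true;
    }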
V2: Addressed Christian's review comments:
- schedule suspend work immediately
V4: Addressed Christian's review comments:
- wait for pending uq fences before starting suspend, added
queue->last_fence for the same
- accommodate ev_fence_mgr into existing code
- some bug fixes and NULL checks
V5: Addressed Christian's review comments (gitlab)
- Wait for eviction fence to get signaled in destroy,
don't signal it
- Wait for eviction fence to get signaled in replace fence,
don't signal it
V6: Addressed Christian's review comments
- Do not destroy the old eviction fence until we have it replaced
- Change the sequence of fence replacement sub-tasks
- reusing the ev_fence delayed work for userqueue suspend as well
(Shashank).
V7: Addressed Christian's review comments
- give evf_mgr as argument (instead of fpriv) to replace_fence()
- save ptr to evf_mgr in ev_fence (instead of uq_mgr)
- modify suspend_all_queues logic to reflect error properly
- remove the garbage drm_exec_lock section in wait_for_signal
- grab the userqueue mutex before starting the wait for fence
- remove the unrelated gobj check from signal_ioctl
V8: Added race condition fixes
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Acked-by: Christian Koenig <christian.koenig@amd.com>
Signed-off-by: Shashank Sharma <shashank.sharma@amd.com>
Signed-off-by: Arvind Yadav <arvind.yadav@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Keep the user queue fence signal and wait IOCTLs in the
kernel config CONFIG_DRM_AMDGPU_NAVI3X_USERQ.
v2(Christian):
- Remove the userq specific config added for kernel queues fence init
function.
v3(Alex):
- It is better to return an error (-ENOTSUPP) in these cases.
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Drop AMDGPU_USERQ_BO_WRITE as this should not be a global option
of the IOCTL; it should be an option per buffer. Hence add separate
arrays for read and write BO handles.
v2(Marek):
- Internal kernel details shouldn't be here. This file should only
document the observed behavior, not the implementation.
v3:
- Fix DAL CI clang issue.
v4:
- Added Alex RB to merge the kernel UAPI changes since he has
already approved the amdgpu_drm.h changes.
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Suggested-by: Marek Olšák <marek.olsak@amd.com>
Suggested-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add a vm root BO lock before accessing the userqueue VM.
v1:(Christian)
- Keep the VM locked until you are done with the mapping.
- Grab a temporary BO reference, drop the VM lock and acquire the BO.
When you are done with everything just drop the BO lock and
then the temporary BO reference.
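Roughly, the access pattern becomes (sketch; error handling omitted):

    struct amdgpu_bo *root = amdgpu_bo_ref(vm->root.bo);   /* temporary reference */

    amdgpu_bo_reserve(root, false);
    /* look up and use the userqueue VM mapping */
    amdgpu_bo_unreserve(root);
    amdgpu_bo_unref(&root);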
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
A few optimizations and fixes for the userq fence driver.
v1:(Christian):
- Remove unnecessary comments.
- In the drm_exec_init() call, pass num_bo_handles as the last parameter;
it makes allocation of the array more efficient.
- Handle the return value of __xa_store() and improve the error handling of
amdgpu_userq_fence_driver_alloc().
v2:(Christian):
- Revert userq_xa xarray init to XA_FLAGS_LOCK_IRQ.
- move the xa_unlock before the error check of the xa_err(__xa_store())
call, and move this change to a separate patch as it adds missing error
handling.
- Removed the unnecessary comments.
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add support to handle the userqueue protected fence signal hardware
interrupt.
Create an xarray which maps the doorbell index to the fence driver address.
This helps retrieve the fence driver information when a userq fence
interrupt is triggered. Firmware sends the doorbell offset value and
this info is compared with the queue's mqd doorbell offset value.
If they are the same, we process the userq fence interrupt.
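On the interrupt side the lookup then becomes roughly (sketch):

    xa_lock(&adev->userq_xa);
    fence_drv = xa_load(&adev->userq_xa, doorbell_index);
    if (fence_drv)
            amdgpu_userq_fence_driver_process(fence_drv);
    xa_unlock(&adev->userq_xa);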
v1:(Christian):
- use xa_load to extract the fence driver.
- move the amdgpu_userq_fence_driver_process call within the xa_lock
as there is a chance that fence_drv might be freed.
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add user fence wait IOCTL timeline syncobj support.
v2:(Christian)
- handle dma_fence_wait() return value.
- shorten the variable name syncobj_timeline_points a bit.
- move num_points up to avoid padding issues.
v3:(Christian)
- Handle timeline drm_syncobj_find_fence() call error
handling
- Use dma_fence_unwrap_for_each() in timeline fence as
there could be more than one fence.
v4:(Christian)
- Drop the first num_fences since fence is always included in
the dma_fence_unwrap_for_each() iteration, when fence != f
then fence is most likely just a container.
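i.e. the timeline fence is walked with something like (sketch):

    struct dma_fence_unwrap iter;
    struct dma_fence *f;

    dma_fence_unwrap_for_each(f, &iter, fence)
            num_fences++;   /* each underlying fence; containers are unwrapped */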
v5: Added Alex RB to merge the kernel UAPI changes since he has
already approved the amdgpu_drm.h changes.
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch introduces new IOCTLs for the userqueue secure semaphore.
For the signal IOCTL, the userspace application creates a drm syncobj and
an array of BO GEM handles and passes them in as parameters; the driver
installs the fence into them.
The wait IOCTL gets an array of drm syncobjs, finds the fences
attached to the drm syncobjs and obtains an array of
memory_address/fence_value combinations which is returned to
userspace.
v2: (Christian)
- Install fence into GEM BO object.
- Lock all BO's using the dma resv subsystem
- Reorder the sequence in signal IOCTL function.
- Get write pointer from the shadow wptr
- use userq_fence to fetch the va/value in wait IOCTL.
v3: (Christian)
- Use drm_exec helper for the proper BO drm reserve and avoid BO
lock/unlock issues.
- fence/fence driver reference count logic for signal/wait IOCTLs.
v4: (Christian)
- Fixed the drm_exec calling sequence
- use dma_resv_for_each_fence_unlock if BO's are not locked
- Modified the fence_info array storing logic.
v5: (Christian)
- Keep fence_drv until wait queue execution.
- Add dma_fence_wait for other fences.
- Lock BO's using drm_exec as the number of fences in them could
change.
- Install signaled fences as well into BO/Syncobj.
- Move Syncobj fence installation code after the drm_exec_prepare_array.
- Directly add dma_resv_usage_rw(args->bo_flags....
- remove unnecessary dma_fence_put.
v6: (Christian)
- Add xarray stuff to store the fence_drv
- Implement a function to iterate over the xarray and drop
the fence_drv references.
- Add drm_exec_until_all_locked() wrapper
- Add a check that if we haven't exceeded the user allocated num_fences
before adding dma_fence to the fences array.
v7: (Christian)
- Use memdup_user() for kmalloc_array + copy_from_user
- Move the fence_drv references from the xarray into the newly created fence
and drop the fence_drv references when we signal this fence.
- Move this locking of BOs before the "if (!wait_info->num_fences)",
this way you need this code block only once.
- Merge the error handling code and the cleanup + return 0 code.
- Initializing the xa should probably be done in the userq code.
- Remove the userq back pointer stored in fence_drv.
- Pass xarray as parameter in amdgpu_userq_walk_and_drop_fence_drv()
v8: (Christian)
- Taking the fence_drv references must come before adding the fence to the list.
- Use xa_lock_irqsave_nested for nested spinlock operations.
- userq_mgr should be per fpriv and not one per device.
- Restructure the interrupt process code for the early exit of the loop.
- The reference acquired in the syncobj fence replace code needs to be
kept around.
- Modify the dma_fence acquire placement in wait IOCTL.
- Move USERQ_BO_WRITE flag to UAPI header file.
- drop the fence drv reference after telling the hw to stop accessing it.
- Add multi sync object support to userq signal IOCTL.
V9: (Christian)
- Store all the fence_drv ref to other drivers and not ourself.
- Remove the userq fence xa implementation and replace with
kvmalloc_array.
v10: (Christian)
- Add a comment for the userq_xa xarray
- drop the if check of userq_fence->fence_drv_array
- use the i variable to initialize userq_fence->fence_drv_array_count
- drop the fence reference before you free the array in the error handling,
otherwise it could be that some references leaked.
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Suggested-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Fix a screen freeze and userq fence driver crash while playing Xonotic.
v2: (Christian)
- There is a chance that the fence might signal in between testing
and grabbing the lock. Hence we can move the lock above the
if..else check and use dma_fence_is_signaled_locked().
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Developed a userqueue fence driver for the userqueue process shared
BO synchronization.
Create a dma fence with the write pointer as the seqno, allocate a
seq64 memory region for each user queue process and feed this memory
address into the firmware/hardware; the firmware then writes the read
pointer into the given address when the process completes its execution.
Compare wptr and rptr; if rptr >= wptr, signal the fences so the waiting
process can consume the buffers.
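Conceptually the signalling path is (sketch; names are illustrative and
list locking is omitted):

    rptr = amdgpu_userq_fence_read(fence_drv);      /* value written by the firmware */
    list_for_each_entry_safe(fence, tmp, &fence_drv->fences, link) {
            if (fence->base.seqno > rptr)
                    break;                          /* not consumed yet */
            dma_fence_signal(&fence->base);
            list_del(&fence->link);
            dma_fence_put(&fence->base);
    }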
v2: Worked on review comments from Christian for the following
modifications
- Add wptr as sequence number into the fence
- Add a reference count for the fence driver
- Add dma_fence_put below the list_del as it might
free the userq fence.
- Trim unnecessary code in interrupt handler.
- Check dma fence signaled state in dma fence creation
function for a potential problem of hardware completing
the job processing beforehand.
- Add necessary locks.
- Create a list and process all the unsignaled fences.
- clean up fences in destroy function.
- implement .signaled callback function
v3: Worked on review comments from Christian
- Modify naming convention for reference counted objects
- Fix fence driver reference drop issue
- Drop amdgpu_userq_fence_driver_process() function return value
v4: Worked on review comments from Christian
- Moved fence driver allocation into amdgpu_userq_fence_driver_alloc()
- Added a detailed doc mentioning the differences between
the two spinlocks declared.
v5: Worked on review comments from Christian
- Check before upcast and remove local variable
- Add error handling in fence_drv alloc function.
- Move rptr read fn outside of the loop and remove WARN_ON in
destroy function.
v6:
- clear the seq64 memory in the user fence driver (Christian)
- fix for the wptr va bo mapping (Christian)
- move the fence_drv xa entry erase code from the interrupt handler
into user fence destroy function
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Suggested-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>