From faf6e1a87e07423a729e04fb2e8188742e89ea4c Mon Sep 17 00:00:00 2001
From: Andrey Grodzovsky
Date: Thu, 18 Oct 2018 12:32:46 -0400
Subject: drm/sched: Add boolean to mark if sched is ready to work v5
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Problem: A particular scheduler may become unusable (underlying HW)
after some event (e.g. GPU reset). If it is later chosen by the
get-free-sched policy, command submission will fail.

Fix: Add a driver-specific callback to report the sched status, so an
rq with a bad sched can be avoided in favor of a working one, or none,
in which case job init will fail.

v2: Switch from driver callback to a flag in the scheduler.

v3: Rebase.

v4: Remove the ready parameter from drm_sched_init(), set it
unconditionally to true once init is done.

v5: Fix missed change in v3d in v4. (Alex)

Signed-off-by: Andrey Grodzovsky
Reviewed-by: Christian König
Signed-off-by: Alex Deucher
---
 drivers/gpu/drm/scheduler/sched_entity.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

(limited to 'drivers/gpu/drm/scheduler/sched_entity.c')

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 3e22a54a99c2..ba54c30a466e 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -130,7 +130,14 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
 	int i;
 
 	for (i = 0; i < entity->num_rq_list; ++i) {
-		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
+		struct drm_gpu_scheduler *sched = entity->rq_list[i]->sched;
+
+		if (!entity->rq_list[i]->sched->ready) {
+			DRM_WARN("sched%s is not ready, skipping", sched->name);
+			continue;
+		}
+
+		num_jobs = atomic_read(&sched->num_jobs);
 		if (num_jobs < min_jobs) {
 			min_jobs = num_jobs;
 			rq = entity->rq_list[i];
--
cgit

From 26efecf9558895a89c2920d258601b4afba10fd0 Mon Sep 17 00:00:00 2001
From: Sharat Masetty
Date: Mon, 29 Oct 2018 15:02:28 +0530
Subject: drm/scheduler: Add drm_sched_job_cleanup
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch adds a new API to clean up the scheduler job resources. This
is primarily needed in cases where the job was created but never queued
to the scheduler queue. Additionally, with this change the layer which
creates the scheduler job also gets to free up the job's resources,
which entails moving the dma_fence_put(finished_fence) into the
drivers' free_job ops handlers.

Signed-off-by: Sharat Masetty
Reviewed-by: Christian König
Acked-by: Andrey Grodzovsky
Signed-off-by: Alex Deucher
---
 drivers/gpu/drm/scheduler/sched_entity.c | 1 -
 1 file changed, 1 deletion(-)

(limited to 'drivers/gpu/drm/scheduler/sched_entity.c')

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index ba54c30a466e..4463d3826ecb 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -211,7 +211,6 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
 
 	drm_sched_fence_finished(job->s_fence);
 	WARN_ON(job->s_fence->parent);
-	dma_fence_put(&job->s_fence->finished);
 	job->sched->ops->free_job(job);
 }
 
--
cgit
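
The sched->ready flag added by the first patch only helps if drivers
actually clear it when the hardware becomes unusable. The sketch below
is illustrative only, not part of either patch: struct my_device,
struct my_ring and my_ring_test() are made-up driver names (amdgpu, for
instance, tracks readiness per ring); only the drm_gpu_scheduler.ready
field comes from the series itself.

#include <drm/gpu_scheduler.h>

/* Hypothetical driver structures, for illustration only. */
struct my_ring {
	struct drm_gpu_scheduler sched;
};

struct my_device {
	struct my_ring rings[8];
	int num_rings;
};

/* Assumed helper that test-submits to a ring; returns 0 on success. */
int my_ring_test(struct my_ring *ring);

/*
 * After a GPU reset, mark the schedulers of rings that failed their
 * test as not ready, so drm_sched_entity_get_free_sched() skips them
 * instead of load-balancing new jobs onto dead hardware.
 */
static void my_driver_post_reset(struct my_device *dev)
{
	int i;

	for (i = 0; i < dev->num_rings; ++i) {
		struct my_ring *ring = &dev->rings[i];

		if (my_ring_test(ring))
			ring->sched.ready = false;
	}
}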
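
For the drm_sched_job_cleanup() change, the intended usage is roughly
the following sketch. struct my_job, my_fences_attach() and
my_free_job() are invented driver-side names, and the
drm_sched_job_init()/drm_sched_entity_push_job() signatures are the
ones from this era of the API (they have changed since), so treat this
as an assumption-laden example rather than a reference implementation.

#include <linux/kernel.h>
#include <linux/slab.h>
#include <drm/gpu_scheduler.h>

/* Hypothetical driver job wrapping the scheduler job. */
struct my_job {
	struct drm_sched_job base;
	/* driver-private state ... */
};

/* Assumed helper that can fail after the job has been initialized. */
int my_fences_attach(struct my_job *job);

static int my_submit(struct my_job *job, struct drm_sched_entity *entity,
		     void *owner)
{
	int r;

	r = drm_sched_job_init(&job->base, entity, owner);
	if (r)
		return r;

	r = my_fences_attach(job);
	if (r) {
		/*
		 * The job was created but never queued: the new helper
		 * lets the driver drop the scheduler fences it would
		 * otherwise leak on this error path.
		 */
		drm_sched_job_cleanup(&job->base);
		return r;
	}

	drm_sched_entity_push_job(&job->base, entity);
	return 0;
}

/*
 * The ->free_job() callback now also calls the helper, since the core
 * no longer drops the finished fence in drm_sched_entity_kill_jobs_cb().
 */
static void my_free_job(struct drm_sched_job *sched_job)
{
	struct my_job *job = container_of(sched_job, struct my_job, base);

	drm_sched_job_cleanup(sched_job);
	kfree(job);
}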