/*
 *
 * (C) COPYRIGHT 2011-2015 ARM Limited. All rights reserved.
 *
 * This program is free software and is provided to you under the terms of the
 * GNU General Public License version 2 as published by the Free Software
 * Foundation, and any use by you of this program is subject to the terms
 * of such GNU licence.
 *
 * A copy of the licence is included with the program, and can also be obtained
 * from Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
 * Boston, MA 02110-1301, USA.
 *
 */
/**
 * @file mali_kbase_js_policy.h
 * Job Scheduler Policy APIs.
 */

#ifndef _KBASE_JS_POLICY_H_
#define _KBASE_JS_POLICY_H_
/**
 * @page page_kbase_js_policy Job Scheduling Policies
 *
 * The Job Scheduling system is described in the following:
 * - @subpage page_kbase_js_policy_overview
 * - @subpage page_kbase_js_policy_operation
 *
 * The API details are as follows:
 * - @ref kbase_js_policy
 */

/**
 * @page page_kbase_js_policy_overview Overview of the Policy System
 *
 * The Job Scheduler Policy manages:
 * - The assigning of KBase Contexts to GPU Address Spaces (\em ASs)
 * - The choosing of Job Chains (\em Jobs) from a KBase context, to run on the
 * GPU's Job Slots (\em JSs).
 * - The amount of \em time a context is assigned to (<em>scheduled on</em>) an
 * Address Space
 * - The amount of \em time a Job spends running on the GPU
 *
 * The Policy implements this management via 2 components:
 * - A Policy Queue, which manages a set of contexts that are ready to run,
 * but not currently running.
 * - A Policy Run Pool, which manages the currently running contexts (one per Address
 * Space) and the jobs to run on the Job Slots.
 *
 * Each Graphics Process in the system has at least one KBase Context. Therefore,
 * the Policy Queue can be seen as a queue of Processes waiting to run Jobs on
 * the GPU.
 *
 * <!-- The following needs to be all on one line, due to doxygen's parser -->
 * @dotfile policy_overview.dot "Diagram showing a very simplified overview of the Policy System. IRQ handling, soft/hard-stopping, contexts re-entering the system and Policy details are omitted"
 *
 * The main operations on the queue are:
 * - Enqueuing a Context to it
 * - Dequeuing a Context from it, to run it.
 * - Note: requeuing a context is much the same as enqueuing a context, but
 * occurs when a context is scheduled out of the system to allow other contexts
 * to run.
 *
 * These operations have much the same meaning for the Run Pool - Jobs are
 * dequeued to run on a Job Slot, and requeued when they are scheduled out of
 * the GPU.
 *
 * @note This is an over-simplification of the Policy APIs - there are more
 * operations than 'Enqueue'/'Dequeue', and a Dequeue from the Policy Queue
 * takes at least two function calls: one to Dequeue from the Queue, one to add
 * to the Run Pool.
 *
 * As indicated on the diagram, Jobs permanently leave the scheduling system
 * when they are completed, otherwise they get dequeued/requeued until this
 * happens. Similarly, Contexts leave the scheduling system when their jobs
 * have all completed. However, Contexts may later return to the scheduling
 * system (not shown on the diagram) if more Bags of Jobs are submitted to
 * them.
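 *
 * To make the queue half of this concrete, here is a minimal sketch of how a
 * policy might back its Policy Queue with a Linux list. All example_* names
 * and the layout are invented for illustration; real policies are free to use
 * any structure, and the Run Pool side is omitted:
 *
 * @code
 * #include <linux/list.h>
 *
 * // Hypothetical per-context policy info, embedded in each context
 * struct example_ctx_info {
 *         struct list_head queue_link; // links the context into the Policy Queue
 * };
 *
 * // Hypothetical policy: a FIFO Policy Queue
 * struct example_policy {
 *         struct list_head queue_head; // contexts ready to run, oldest first
 * };
 *
 * // Enqueue (and requeue): append to the tail of the queue
 * static void example_enqueue_ctx(struct example_policy *policy,
 *                                 struct example_ctx_info *info)
 * {
 *         list_add_tail(&info->queue_link, &policy->queue_head);
 * }
 *
 * // Dequeue-head: remove the context that should run next
 * static struct example_ctx_info *example_dequeue_head_ctx(struct example_policy *policy)
 * {
 *         struct example_ctx_info *info;
 *
 *         if (list_empty(&policy->queue_head))
 *                 return NULL;
 *         info = list_first_entry(&policy->queue_head, struct example_ctx_info, queue_link);
 *         list_del(&info->queue_link);
 *         return info;
 * }
 * @endcode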
 */

/**
 * @page page_kbase_js_policy_operation Policy Operation
 *
 * We describe the actions that the Job Scheduler Core takes on the Policy in
 * the following cases:
 * - The IRQ Path
 * - The Job Submission Path
 * - The High Priority Job Submission Path
 *
 * This shows how the Policy APIs will be used by the Job Scheduler core.
 *
 * The following diagram shows an example Policy that contains a Low Priority
 * queue, and a Real-time (High Priority) Queue. The RT queue is examined
 * before the LowP one on dequeuing from the head. The Low Priority Queue is
 * ordered by time, and the RT queue is ordered by time weighted by
 * RT-priority. In addition, it shows that the Job Scheduler Core will start a
 * Soft-Stop Timer (SS-Timer) when it dequeues and submits a job. The
 * Soft-Stop time is set by a global configuration value, and must be a value
 * appropriate for the policy. For example, this could include "don't run a
 * soft-stop timer" for a First-Come-First-Served (FCFS) policy.
 *
 * <!-- The following needs to be all on one line, due to doxygen's parser -->
 * @dotfile policy_operation_diagram.dot "Diagram showing the objects managed by an Example Policy, and the operations made upon these objects by the Job Scheduler Core."
 *
 * @section sec_kbase_js_policy_operation_prio Dealing with Priority
 *
 * Priority applies separately to a context as a whole, and to the jobs within
 * a context. The jobs specify a priority in the base_jd_atom::prio member, but
 * it is independent of the context priority. That is, it only affects
 * scheduling of atoms within a context. Refer to @ref base_jd_prio for more
 * details. The meaning of the context's priority value is up to the policy
 * itself, and could be a logarithmic scale instead of a linear scale (e.g. the
 * policy could arrange that an increase/decrease in priority of 1 results in an
 * increase/decrease in \em proportion of time spent scheduled in by 25%, an
 * effective change in timeslice of about 11%).
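 *
 * To make the logarithmic example concrete (the numbers below just restate
 * the 25%/11% figures above, and are not mandated by any policy): with two
 * contexts of equal priority, each receives 1/2 of the scheduled time. If one
 * context's priority is raised by 1, its weight becomes 1.25, so it now
 * receives 1.25 / (1.25 + 1) = 55.6% of the time - roughly an 11% relative
 * increase over its previous 50% share. A hypothetical fixed-point helper for
 * such a weight:
 *
 * @code
 * // Illustrative only: compute 1.25^prio in 1/256ths fixed point,
 * // for a non-negative priority value
 * static u32 example_ctx_weight(unsigned int prio)
 * {
 *         u32 weight = 256; // 1.0 in fixed point
 *
 *         while (prio--)
 *                 weight += weight / 4; // multiply by 1.25
 *         return weight;
 * }
 * @endcode
 *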
 * It is up to the policy whether a boost in priority boosts the priority of
 * the entire context (e.g. to such an extent that it may pre-empt other
 * running contexts). If it chooses to do this, the Policy must make sure that
 * only jobs from high-priority contexts are run, and that the context is
 * scheduled out once only jobs from low priority contexts remain. This ensures
 * that the low priority contexts do not gain from the priority boost, yet they
 * still get scheduled correctly with respect to other low priority contexts.
 *
 * @section sec_kbase_js_policy_operation_irq IRQ Path
 *
 * The following happens on the IRQ path from the Job Scheduler Core:
 * - Note the slot that completed (for later)
 * - Log the time spent by the job (and implicitly, the time spent by the
 * job's context)
 *  - call kbasep_js_policy_log_job_result() <em>in the context of the irq
 * handler</em>
 *  - This must happen regardless of whether the job completed successfully or
 * not (otherwise the context gets away with DoS'ing the system with faulty jobs)
 * - What was the result of the job?
 *  - If Completed: job is just removed from the system
 *  - If Hard-stop or failure: job is removed from the system
 *  - If Soft-stop: queue the book-keeping work onto a work-queue: have a
 * work-queue call kbasep_js_policy_enqueue_job()
 * - Check the timeslice used by the owning context
 *  - call kbasep_js_policy_should_remove_ctx() <em>in the context of the irq
 * handler</em>
 *  - If this returns true, clear the "allowed" flag.
 * - Check the ctx's flags for "allowed", "has jobs to run" and "is running
 * jobs"
 * - And so, should the context stay scheduled in?
 *  - If No, push onto a work-queue the work of scheduling out the old context,
 * and getting a new one. That is:
 *   - kbasep_js_policy_runpool_remove_ctx() on old_ctx
 *   - kbasep_js_policy_enqueue_ctx() on old_ctx
 *   - kbasep_js_policy_dequeue_head_ctx() to get new_ctx
 *   - kbasep_js_policy_runpool_add_ctx() on new_ctx
 *   - (all of this work is deferred on a work-queue to keep the IRQ handler quick)
 * - If there is space in the completed job slot's HEAD/NEXT registers, run the next job:
 *  - kbasep_js_policy_dequeue_job() <em>in the context of the irq
 * handler</em> with core_req set to that of the completing slot
 *  - if this returned true, submit the job to the completed slot.
 *  - This is repeated until kbasep_js_policy_dequeue_job() returns
 * false, or the job slot has a job queued on both the HEAD and NEXT registers.
 *  - If kbasep_js_policy_dequeue_job() returned false, submit some work to
 * the work-queue to retry from outside of IRQ context (calling
 * kbasep_js_policy_dequeue_job() from a work-queue).
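 *
 * Put together, the IRQ path might be outlined as below. This is a hedged
 * sketch only: the example_* helpers, the work-queue items and the
 * kbdev->js_data.policy layout are assumptions made for illustration, and
 * locking/throttling details are omitted:
 *
 * @code
 * // Sketch of the IRQ-path sequence described above
 * static void example_job_irq(struct kbase_device *kbdev, int slot,
 *                             struct kbase_jd_atom *katom, u64 time_us)
 * {
 *         union kbasep_js_policy *policy = &kbdev->js_data.policy;
 *         struct kbase_jd_atom *next;
 *
 *         // Log time in IRQ context, whether the job passed or failed
 *         kbasep_js_policy_log_job_result(policy, katom, time_us);
 *
 *         // Check the owning context's timeslice in IRQ context
 *         if (kbasep_js_policy_should_remove_ctx(policy, katom->kctx))
 *                 example_mark_ctx_for_schedule_out(katom->kctx); // deferred to a work-queue
 *
 *         // Try to keep the completed slot's HEAD/NEXT registers full
 *         while (example_slot_has_space(kbdev, slot)) {
 *                 if (!kbasep_js_policy_dequeue_job(kbdev, slot, &next)) {
 *                         example_queue_retry_work(kbdev, slot); // retry outside IRQ
 *                         break;
 *                 }
 *                 example_submit_to_slot(kbdev, slot, next);
 *         }
 * }
 * @endcode
 *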
 * Since the IRQ handler submits new jobs \em and re-checks the IRQ_RAWSTAT,
 * this sequence could loop a large number of times: this could happen if
 * the jobs submitted completed on the GPU very quickly (in a few cycles), such
 * as GPU NULL jobs. Then, the HEAD/NEXT registers will always be free to take
 * more jobs, causing us to loop until we run out of jobs.
 *
 * To mitigate this, we must limit the number of jobs submitted per slot during
 * the IRQ handler - for example, no more than 2 jobs per slot per IRQ should
 * be sufficient (to fill up the HEAD + NEXT registers in normal cases). For
 * Mali-T600 with 3 job slots, this means that up to 6 jobs could be submitted
 * in total per IRQ. Note that IRQ Throttling can make this situation commonplace:
 * 6 jobs could complete but the IRQ for each of them is delayed by the throttling.
 * By the time you get the IRQ, all 6 jobs could've completed, meaning you can
 * submit jobs to fill all 6 HEAD+NEXT registers again.
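 *
 * A sketch of such a cap (the constant and the example_* helpers are
 * assumptions, not driver definitions):
 *
 * @code
 * // At most 2 submissions per slot per IRQ: enough to fill HEAD + NEXT
 * #define EXAMPLE_MAX_SUBMITS_PER_SLOT_PER_IRQ 2
 *
 * int submitted = 0;
 * struct kbase_jd_atom *next;
 *
 * while (submitted < EXAMPLE_MAX_SUBMITS_PER_SLOT_PER_IRQ &&
 *        example_slot_has_space(kbdev, slot) &&
 *        kbasep_js_policy_dequeue_job(kbdev, slot, &next)) {
 *         example_submit_to_slot(kbdev, slot, next);
 *         submitted++;
 * }
 * @endcode
 *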
 * @note As much work is deferred as possible, which includes the scheduling
 * out of a context and scheduling in a new context. However, we can still make
 * starting a single high-priority context quick despite this:
 * - On the Mali-T600 family, there is one more AS than JSs.
 * - This means we can very quickly schedule out one AS, no matter what the
 * situation (because there will always be one AS that's not currently running
 * on the job slot - it can only have a job in the NEXT register).
 *  - Even with this scheduling out, fair-share can still be guaranteed e.g. by
 * a timeline-based Completely Fair Scheduler.
 * - When our high-priority context comes in, we can do this quick-scheduling
 * out immediately, and then schedule in the high-priority context without having to block.
 * - This all assumes that the context to schedule out is of lower
 * priority. Otherwise, we will have to block waiting for some other low
 * priority context to finish its jobs. Note that it's likely (but not
 * impossible) that the high-priority context \b is running jobs, by virtue of
 * it being high priority.
 * - Therefore, we can give a high likelihood that on Mali-T600 at least one
 * high-priority context can be started very quickly. For the general case, we
 * can guarantee starting (no. ASs) - (no. JSs) high priority contexts
 * quickly. In any case, there is a high likelihood that we're able to start
 * more than one high priority context quickly.
 *
 * In terms of the functions used in the IRQ handler directly, these are the
 * performance considerations:
 * - kbasep_js_policy_log_job_result():
 *  - This is just adding to a 64-bit value (possibly even a 32-bit value if we
 * only store the time the job's recently spent - see below on 'priority weighting')
 *  - For priority weighting, a divide operation ('div') could happen, but
 * this can happen in a deferred context (outside of IRQ) when scheduling out
 * the ctx; as per our Engineering Specification, the contexts of different
 * priority still stay scheduled in for the same timeslice, but higher priority
 * ones are scheduled back in more often.
 *  - That is, the weighted and unweighted times must be stored separately, and
 * the weighted time is only updated \em outside of IRQ context.
 *  - Of course, this divide is more likely to be a 'multiply by inverse of the
 * weight', assuming that the weight (priority) doesn't change.
 * - kbasep_js_policy_should_remove_ctx():
 *  - This is usually just a comparison of the stored time value against some
 * maximum value.
 *
 * @note all deferred work can be wrapped up into one call - we usually need to
 * indicate that a job/bag is done outside of IRQ context anyway.
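 *
 * A sketch of this split between IRQ-time and deferred accounting (all field,
 * constant and function names here are invented for illustration):
 *
 * @code
 * #define EXAMPLE_WEIGHT_SHIFT 8
 *
 * struct example_ctx_times {
 *         u64 unweighted_time_us; // updated in IRQ context only
 *         u64 weighted_time_us;   // updated outside IRQ context only
 *         u32 inv_weight;         // (1/weight) << EXAMPLE_WEIGHT_SHIFT, precomputed
 * };
 *
 * // IRQ context: only a cheap addition
 * static void example_log_time(struct example_ctx_times *t, u64 time_us)
 * {
 *         t->unweighted_time_us += time_us;
 * }
 *
 * // Deferred context (e.g. scheduling the ctx out): fold the unweighted
 * // time into the weighted total, using multiply-by-inverse instead of div
 * static void example_update_weighted_time(struct example_ctx_times *t)
 * {
 *         t->weighted_time_us +=
 *                 (t->unweighted_time_us * t->inv_weight) >> EXAMPLE_WEIGHT_SHIFT;
 *         t->unweighted_time_us = 0;
 * }
 * @endcode
 *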
 * @section sec_kbase_js_policy_operation_submit Submission path
 *
 * Start with a Context with no jobs present, and assume equal priority of all
 * contexts in the system. The following work all happens outside of IRQ
 * context:
 * - As soon as a job is made 'ready to run', it must be registered with the Job
 * Scheduler Policy:
 *  - 'Ready to run' means it has satisfied its dependencies in the
 * Kernel-side Job Dispatch system.
 *  - Call kbasep_js_policy_enqueue_job()
 *   - This indicates that the job should be scheduled (it is ready to run).
 * - As soon as a ctx changes from having 0 jobs 'ready to run' to >0 jobs
 * 'ready to run', we enqueue the context on the policy queue:
 *  - Call kbasep_js_policy_enqueue_ctx()
 *   - This indicates that the \em ctx should be scheduled (it is ready to run)
 *
 * Next, we need to handle adding a context to the Run Pool - if it's sensible
 * to do so. This can happen due to two reasons:
 * -# A context is enqueued as above, and there are ASs free for it to run on
 * (e.g. it is the first context to be run, in which case it can be added to
 * the Run Pool immediately after enqueuing on the Policy Queue)
 * -# A previous IRQ caused another ctx to be scheduled out, requiring that the
 * context at the head of the queue be scheduled in. Such steps would happen in
 * a work queue (work deferred from the IRQ context).
 *
 * In both cases, we'd handle it as follows (see the sketch after this list):
 * - Get the context at the Head of the Policy Queue:
 *  - Call kbasep_js_policy_dequeue_head_ctx()
 * - Assign the Context an Address Space (Assert that there will be one free,
 * given the above two reasons)
 * - Add this context to the Run Pool:
 *  - Call kbasep_js_policy_runpool_add_ctx()
 * - Now see if a job should be run:
 *  - Mostly, this will be done in the IRQ handler at the completion of a
 * previous job.
 *  - However, there are two cases where this cannot be done: a) The first job
 * enqueued to the system (there is no previous IRQ to act upon) b) When jobs
 * are submitted at a low enough rate to not fill up all Job Slots (or, not to
 * fill both the 'HEAD' and 'NEXT' registers in the job-slots)
 *  - Hence, on each ctx <b>and job</b> submission we should try to see if we
 * can run jobs:
 *   - For each job slot that has free space (in NEXT or HEAD+NEXT registers):
 *    - Call kbasep_js_policy_dequeue_job() with core_req set to that of the
 * job slot
 *    - If we got one, submit it to the job slot.
 *    - This is repeated until kbasep_js_policy_dequeue_job() returns
 * false, or the job slot has a job queued on both the HEAD and NEXT registers.
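 *
 * A sketch of this deferred schedule-in step, as it might run from a work
 * item (the example_* helpers are assumptions, and all locking is omitted):
 *
 * @code
 * // Hedged outline of "dequeue head ctx, assign an AS, add to Run Pool"
 * static void example_schedule_in_head_ctx(struct kbase_device *kbdev)
 * {
 *         union kbasep_js_policy *policy = &kbdev->js_data.policy;
 *         struct kbase_context *kctx;
 *
 *         if (!kbasep_js_policy_dequeue_head_ctx(policy, &kctx))
 *                 return; // nothing is waiting to run
 *
 *         example_assign_free_address_space(kbdev, kctx); // asserted to exist
 *         kbasep_js_policy_runpool_add_ctx(policy, kctx);
 *
 *         example_try_run_jobs(kbdev); // fill any free NEXT/HEAD+NEXT registers
 * }
 * @endcode
 *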
 * The above case shows that we should attempt to run jobs in cases where a) a ctx
 * has been added to the Run Pool, and b) new jobs have been added to a context:
 * - In the latter case, the context is in the runpool because it's got a job
 * ready to run, or is already running a job
 * - We could just wait until the IRQ handler fires, but for certain types of
 * jobs this can take a comparatively long time to complete, e.g. GLES FS jobs
 * generally take much longer to run than GLES CS jobs, which are vertex shader
 * jobs.
 * - Therefore, when a new job appears in the ctx, we must check the job-slots
 * to see if they're free, and run the jobs as before.
 *
 * @section sec_kbase_js_policy_operation_submit_hipri Submission path for High Priority Contexts
 *
 * For High Priority Contexts on Mali-T600, we can make sure that at least 1 of
 * them can be scheduled in immediately to start high priority jobs. In general,
 * (no. ASs) - (no. JSs) high priority contexts may be started immediately. The
 * following describes how this happens:
 *
 * Similar to the previous section, consider what happens with a high-priority
 * context (a context with a priority higher than that of any in the Run Pool)
 * that starts out with no jobs:
 * - A job becomes ready to run on the context, and so we enqueue the context
 * on the Policy's Queue.
 * - However, we'd like to schedule in this context immediately, instead of
 * waiting for one of the Run Pool contexts' timeslice to expire.
 * - The policy's Enqueue function must detect this (because it is the policy
 * that embodies the concept of priority), and take appropriate action.
 *  - That is, kbasep_js_policy_enqueue_ctx() should check the Policy's Run
 * Pool to see if a lower priority context should be scheduled out, and then
 * schedule in the High Priority context.
 *  - For Mali-T600, we can always pick a context to schedule out immediately
 * (because there are more ASs than JSs), and so scheduling out a victim context
 * and scheduling in the high priority context can happen immediately.
 *   - If a policy implements fair-sharing, then this can still ensure the
 * victim later on gets a fair share of the GPU.
 *  - As a note, consider whether the victim can be of equal/higher priority
 * than the incoming context:
 *   - Usually, higher priority contexts will be the ones currently running
 * jobs, and so the context with the lowest priority is usually not running
 * jobs.
 *   - This makes it likely that the victim context is low priority, but
 * it's not impossible for it to be a high priority one:
 *    - Suppose 3 high priority contexts are submitting only FS jobs, and one low
 * priority context is submitting CS jobs. Then, the context not running jobs will
 * be one of the high priority contexts (because only 2 FS jobs can be
 * queued/running on the GPU HW for Mali-T600).
 *   - The problem can be mitigated by extra action, but it's questionable
 * whether we need to: we already have a high likelihood that there's at least
 * one high priority context - that should be good enough.
 *  - And so, this method makes sure that at least one high priority context
 * can be started very quickly, but more than one high priority context could be
 * delayed (up to one timeslice).
 *  - To improve this, use a GPU with a higher number of Address Spaces vs Job
 * Slots.
 * - At this point, let's assume this high priority context has been scheduled
 * in immediately. The next step is to ensure it can start some jobs quickly.
 *  - It must do this by Soft-Stopping jobs on any of the Job Slots that it can
 * submit to.
 *  - The rest of the logic for starting the jobs is taken care of by the IRQ
 * handler. All the policy needs to do is ensure that
 * kbasep_js_policy_dequeue_job() will return the jobs from the high priority
 * context.
 *
 * @note In SS state, we currently only use 2 job-slots (even for T608, but
 * this might change in future). In this case, it's always possible to schedule
 * out 2 ASs quickly (their jobs won't be in the HEAD registers). At the same
 * time, this maximizes usage of the job-slots (only 2 are in use), because you
 * can guarantee starting of the jobs from the High Priority contexts immediately too.
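 *
 * A sketch of the preemption decision inside a policy's enqueue function (the
 * example_* helpers are invented; kbasep_js_policy_ctx_has_priority() is the
 * real API point for the priority comparison):
 *
 * @code
 * // Hedged sketch: detect a high-priority arrival at enqueue time
 * static void example_policy_enqueue_ctx(union kbasep_js_policy *js_policy,
 *                                        struct kbase_context *kctx)
 * {
 *         struct kbase_context *victim = example_lowest_prio_runpool_ctx(js_policy);
 *
 *         if (victim && kbasep_js_policy_ctx_has_priority(js_policy, victim, kctx)) {
 *                 // Schedule the victim out; under a fair-share policy it
 *                 // still receives its share of the GPU later on
 *                 example_defer_schedule_out(victim);
 *         }
 *         example_add_to_queue(js_policy, kctx);
 * }
 * @endcode
 *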
 * @section sec_kbase_js_policy_operation_notes Notes
 *
 * - In this design, a separate 'init' is needed from dequeue/requeue, so that
 * information can be retained between the dequeue/requeue calls. For example,
 * the total time spent for a context/job could be logged between
 * dequeue/requeuing, to implement Fair Sharing. In this case, 'init' just
 * initializes that information to some known state.
 */

/**
 * @addtogroup base_api
 * @{
 */

/**
 * @addtogroup base_kbase_api
 * @{
 */

/**
 * @addtogroup kbase_js_policy Job Scheduler Policy APIs
 * @{
 *
 * <b>Refer to @ref page_kbase_js_policy for an overview and detailed operation of
 * the Job Scheduler Policy and its use from the Job Scheduler Core</b>.
 */
/**
 * @brief Job Scheduler Policy structure
 */
union kbasep_js_policy;

/**
 * @brief Initialize the Job Scheduler Policy
 */
int kbasep_js_policy_init(struct kbase_device *kbdev);

/**
 * @brief Terminate the Job Scheduler Policy
 */
void kbasep_js_policy_term(union kbasep_js_policy *js_policy);
/**
 * @addtogroup kbase_js_policy_ctx Job Scheduler Policy, Context Management API
 * @{
 *
 * <b>Refer to @ref page_kbase_js_policy for an overview and detailed operation of
 * the Job Scheduler Policy and its use from the Job Scheduler Core</b>.
 */
/**
 * @brief Job Scheduler Policy Ctx Info structure
 *
 * This structure is embedded in the struct kbase_context structure. It is used to:
 * - track information needed for the policy to schedule the context (e.g. time
 * used, OS priority etc.)
 * - link together kbase_contexts into a queue, so that a struct kbase_context can be
 * obtained as the container of the policy ctx info. This allows the API to
 * return what "the next context" should be.
 * - obtain other information already stored in the struct kbase_context for
 * scheduling purposes (e.g. process ID to get the priority of the originating
 * process)
 */
union kbasep_js_policy_ctx_info;

/**
 * @brief Initialize a ctx for use with the Job Scheduler Policy
 *
 * This effectively initializes the union kbasep_js_policy_ctx_info structure within
 * the struct kbase_context (itself located within the kctx->jctx.sched_info structure).
 */
int kbasep_js_policy_init_ctx(struct kbase_device *kbdev, struct kbase_context *kctx);
/**
 * @brief Terminate resources associated with using a ctx in the Job Scheduler
 * Policy
 */
void kbasep_js_policy_term_ctx(union kbasep_js_policy *js_policy, struct kbase_context *kctx);
/**
 * @brief Enqueue a context onto the Job Scheduler Policy Queue
 *
 * If the context enqueued has a priority higher than any in the Run Pool, then
 * it is the Policy's responsibility to decide whether to schedule out a low
 * priority context from the Run Pool to allow the high priority context to be
 * scheduled in.
 *
 * If the context has the privileged flag set, it will always be kept at the
 * head of the queue.
 *
 * The caller will be holding kbasep_js_kctx_info::ctx::jsctx_mutex.
 * The caller will be holding kbasep_js_device_data::queue_mutex.
 */
void kbasep_js_policy_enqueue_ctx(union kbasep_js_policy *js_policy, struct kbase_context *kctx);
/**
 * @brief Dequeue a context from the Head of the Job Scheduler Policy Queue
 *
 * The caller will be holding kbasep_js_device_data::queue_mutex.
 *
 * @return true if a context was available, and *kctx_ptr points to
 * the kctx dequeued.
 * @return false if no contexts were available.
 */
bool kbasep_js_policy_dequeue_head_ctx(union kbasep_js_policy *js_policy, struct kbase_context ** const kctx_ptr);
/**
 * @brief Evict a context from the Job Scheduler Policy Queue
 *
 * This is only called as part of destroying a kbase_context.
 *
 * There are many reasons why this might fail during the lifetime of a
 * context. For example, the context is in the process of being scheduled. In
 * that case a thread doing the scheduling might have a pointer to it, but the
 * context is neither in the Policy Queue, nor is it in the Run
 * Pool. Crucially, neither the Policy Queue, the Run Pool, nor the Context itself
 * are locked.
 *
 * Hence, to find out where in the system the context is, it is important to do
 * more than just check the kbasep_js_kctx_info::ctx::is_scheduled member.
 *
 * The caller will be holding kbasep_js_device_data::queue_mutex.
 *
 * @return true if the context was evicted from the Policy Queue
 * @return false if the context was not found in the Policy Queue
 */
bool kbasep_js_policy_try_evict_ctx(union kbasep_js_policy *js_policy, struct kbase_context *kctx);
/**
 * @brief Call a function on all jobs belonging to a non-queued, non-running
 * context, optionally detaching the jobs from the context as it goes.
 *
 * At the time of the call, the context is guaranteed to be not-currently
 * scheduled on the Run Pool (is_scheduled == false), and not present in
 * the Policy Queue. This is because one of the following functions was used
 * recently on the context:
 * - kbasep_js_policy_try_evict_ctx()
 * - kbasep_js_policy_runpool_remove_ctx()
 *
 * In both cases, no subsequent call was made on the context to any of:
 * - kbasep_js_policy_runpool_add_ctx()
 * - kbasep_js_policy_enqueue_ctx()
 *
 * Due to the locks that might be held at the time of the call, the callback
 * may need to defer work on a workqueue to complete its actions (e.g. when
 * cancelling jobs).
 *
 * \a detach_jobs must only be set when cancelling jobs (which occurs as part
 * of context destruction).
 *
 * The locking conditions on the caller are as follows:
 * - it will be holding kbasep_js_kctx_info::ctx::jsctx_mutex.
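 *
 * As a usage sketch (the callback body and helper below are illustrative
 * assumptions, on the basis that the callback receives the device and the
 * atom), cancelling all jobs of a context being destroyed might look like:
 *
 * @code
 * // Hypothetical callback: called once per job still held by the policy
 * static void example_cancel_job_cb(struct kbase_device *kbdev,
 *                                   struct kbase_jd_atom *katom)
 * {
 *         // Defer the completion work if the locks held by the caller require it
 *         example_queue_cancel_work(kbdev, katom);
 * }
 *
 * // detach_jobs == true: the jobs are being cancelled as part of
 * // context destruction
 * kbasep_js_policy_foreach_ctx_job(js_policy, kctx, example_cancel_job_cb, true);
 * @endcode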
 */
void kbasep_js_policy_foreach_ctx_job(union kbasep_js_policy *js_policy, struct kbase_context *kctx,
	kbasep_js_policy_ctx_job_cb callback, bool detach_jobs);
/**
 * @brief Add a context to the Job Scheduler Policy's Run Pool
 *
 * If the context enqueued has a priority higher than any in the Run Pool, then
 * it is the Policy's responsibility to decide whether to schedule out low
 * priority jobs that are currently running on the GPU.
 *
 * The number of contexts present in the Run Pool will never be more than the
 * number of Address Spaces.
 *
 * The following guarantees are made about the state of the system when this
 * is called:
 * - kctx->as_nr member is valid
 * - the context has its submit_allowed flag set
 * - kbasep_js_device_data::runpool_irq::per_as_data[kctx->as_nr] is valid
 * - The refcount of the context is guaranteed to be zero.
 * - kbasep_js_kctx_info::ctx::is_scheduled will be true.
 *
 * The locking conditions on the caller are as follows:
 * - it will be holding kbasep_js_kctx_info::ctx::jsctx_mutex.
 * - it will be holding kbasep_js_device_data::runpool_mutex.
 * - it will be holding kbasep_js_device_data::runpool_irq::lock (a spinlock)
 *
 * Due to a spinlock being held, this function must not call any APIs that sleep.
 */
void kbasep_js_policy_runpool_add_ctx(union kbasep_js_policy *js_policy, struct kbase_context *kctx);
/**
 * @brief Remove a context from the Job Scheduler Policy's Run Pool
 *
 * The kctx->as_nr member is valid and the context has its submit_allowed flag
 * set when this is called. The state of
 * kbasep_js_device_data::runpool_irq::per_as_data[kctx->as_nr] is also
 * valid. The refcount of the context is guaranteed to be zero.
 *
 * The locking conditions on the caller are as follows:
 * - it will be holding kbasep_js_kctx_info::ctx::jsctx_mutex.
 * - it will be holding kbasep_js_device_data::runpool_mutex.
 * - it will be holding kbasep_js_device_data::runpool_irq::lock (a spinlock)
 *
 * Due to a spinlock being held, this function must not call any APIs that sleep.
 */
void kbasep_js_policy_runpool_remove_ctx(union kbasep_js_policy *js_policy, struct kbase_context *kctx);
/**
 * @brief Indicate whether a context should be removed from the Run Pool
 * (should be scheduled out).
 *
 * The kbasep_js_device_data::runpool_irq::lock will be held by the caller.
 *
 * @note This API is called from IRQ context.
 */
bool kbasep_js_policy_should_remove_ctx(union kbasep_js_policy *js_policy, struct kbase_context *kctx);
/**
 * @brief Synchronize with any timers acting upon the runpool
 *
 * The policy should check whether any timers it owns should be running. If
 * they should not, the policy must cancel such timers and ensure they are not
 * re-run by the time this function finishes.
 *
 * In particular, the timers must not be running when there are no more contexts
 * on the runpool, because the GPU could be powered off soon after this call.
 *
 * The locking conditions on the caller are as follows:
 * - it will be holding kbasep_js_kctx_info::ctx::jsctx_mutex.
 * - it will be holding kbasep_js_device_data::runpool_mutex.
 */
void kbasep_js_policy_runpool_timers_sync(union kbasep_js_policy *js_policy);
/**
 * @brief Indicate whether a new context has a higher priority than the current context.
 *
 * The caller has the following conditions on locking:
 * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held for \a new_ctx
 *
 * This function must not sleep, because an IRQ spinlock might be held whilst
 * it is called.
 *
 * @note There is nothing to stop the priority of \a current_ctx changing
 * during or immediately after this function is called (because its jsctx_mutex
 * cannot be held). Therefore, this function should only be seen as a heuristic
 * guide as to whether \a new_ctx is higher priority than \a current_ctx.
 */
bool kbasep_js_policy_ctx_has_priority(union kbasep_js_policy *js_policy, struct kbase_context *current_ctx, struct kbase_context *new_ctx);
/** @} *//* end group kbase_js_policy_ctx */
/**
 * @addtogroup kbase_js_policy_job Job Scheduler Policy, Job Chain Management API
 * @{
 *
 * <b>Refer to @ref page_kbase_js_policy for an overview and detailed operation of
 * the Job Scheduler Policy and its use from the Job Scheduler Core</b>.
 */
/**
 * @brief Job Scheduler Policy Job Info structure
 *
 * This structure is embedded in the struct kbase_jd_atom structure. It is used to:
 * - track information needed for the policy to schedule the job (e.g. time
 * used)
 * - link together jobs into a queue/buffer, so that a struct kbase_jd_atom can be
 * obtained as the container of the policy job info. This allows the API to
 * return what "the next job" should be.
 */
union kbasep_js_policy_job_info;
/**
 * @brief Initialize a job for use with the Job Scheduler Policy
 *
 * This function initializes the union kbasep_js_policy_job_info structure within the
 * kbase_jd_atom. It will only initialize/allocate resources that are specific
 * to the job.
 *
 * That is, this function makes \b no attempt to:
 * - initialize any context/policy-wide information
 * - enqueue the job on the policy.
 *
 * At some later point, the following functions must be called on the job, in this order:
 * - kbasep_js_policy_register_job() to register the job and initialize policy/context wide data.
 * - kbasep_js_policy_enqueue_job() to enqueue the job
 *
 * A job must only ever be initialized on the Policy once, and must be
 * terminated on the Policy before the job is freed.
 *
 * The caller will not be holding any locks, and so this function will not
 * modify any information in \a kctx or \a js_policy.
 *
 * @return 0 if initialization was correct.
 */
int kbasep_js_policy_init_job(const union kbasep_js_policy *js_policy, const struct kbase_context *kctx, struct kbase_jd_atom *katom);
/**
 * @brief Register context/policy-wide information for a job on the Job Scheduler Policy.
 *
 * Registers the job with the policy. This is used to track the job before it
 * has been enqueued/requeued by kbasep_js_policy_enqueue_job(). Specifically,
 * it is used to update information under a lock that could not be updated at
 * kbasep_js_policy_init_job() time (such as context/policy-wide data).
 *
 * @note This function will not fail, and hence does not allocate any
 * resources. Any failures that could occur on registration will be caught
 * during kbasep_js_policy_init_job() instead.
 *
 * A job must only ever be registered on the Policy once, and must be
 * deregistered on the Policy on completion (whether or not that completion was
 * a success or failure).
 *
 * The caller has the following conditions on locking:
 * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held.
 */
void kbasep_js_policy_register_job(union kbasep_js_policy *js_policy, struct kbase_context *kctx, struct kbase_jd_atom *katom);
/**
 * @brief De-register context/policy-wide information for a job on the Job Scheduler Policy.
 *
 * This must be used before terminating the resources associated with using a
 * job in the Job Scheduler Policy. This function does not itself terminate any
 * resources, at most it just updates information in the policy and context.
 *
 * The caller has the following conditions on locking:
 * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held.
 */
void kbasep_js_policy_deregister_job(union kbasep_js_policy *js_policy, struct kbase_context *kctx, struct kbase_jd_atom *katom);
/**
 * @brief Dequeue a Job for a job slot from the Job Scheduler Policy Run Pool
 *
 * The job returned by the policy will match at least one of the bits in the
 * job slot's core requirements (but it may match more than one, or all @ref
 * base_jd_core_req bits supported by the job slot).
 *
 * In addition, the requirements of the job returned will be a subset of those
 * requested - the job returned will not have requirements that \a job_slot_idx
 * cannot satisfy.
 *
 * The caller will submit the job to the GPU as soon as the GPU's NEXT register
 * for the corresponding slot is empty. Of course, the GPU will then only run
 * this new job when the currently executing job (in the jobslot's HEAD
 * register) has completed.
 *
 * @return true if a job was available, and *katom_ptr points to
 * the kbase_jd_atom dequeued.
 * @return false if no jobs were available among all ctxs in the Run Pool.
 *
 * @note base_jd_core_req is currently a u8 - beware of type conversion.
 *
 * The caller has the following conditions on locking:
 * - kbasep_js_device_data::runpool_irq::lock will be held.
 * - kbasep_js_device_data::runpool_mutex will be held.
 * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held.
 */
bool kbasep_js_policy_dequeue_job(struct kbase_device *kbdev, int job_slot_idx, struct kbase_jd_atom ** const katom_ptr);
/**
 * @brief Requeue a Job back into the Job Scheduler Policy Run Pool
 *
 * This will be used to enqueue a job after its creation and also to requeue
 * a job into the Run Pool that was previously dequeued (running). It notifies
 * the policy that the job should be run again at some point later.
 *
 * The caller has the following conditions on locking:
 * - kbasep_js_device_data::runpool_irq::lock (a spinlock) will be held.
 * - kbasep_js_device_data::runpool_mutex will be held.
 * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held.
 */
void kbasep_js_policy_enqueue_job(union kbasep_js_policy *js_policy, struct kbase_jd_atom *katom);
/**
 * @brief Log the result of a job: the time spent on a job/context, and whether
 * the job failed or not.
 *
 * Since a struct kbase_jd_atom contains a pointer to the struct kbase_context owning it,
 * this can also be used to log time on either/both the job and the
 * containing context.
 *
 * The completion state of the job can be found by examining \a katom->event.event_code
 *
 * If the Job failed and the policy is implementing fair-sharing, then the
 * policy must penalize the failing job/context:
 * - At the very least, it should penalize the time taken by the amount of
 * time spent processing the IRQ in SW. This is because a job in the NEXT slot
 * waiting to run will be delayed until the failing job has had the IRQ
 * cleared.
 * - \b Optionally, the policy could apply other penalties. For example, based
 * on a threshold of a number of failing jobs, after which a large penalty is
 * applied.
 *
 * The kbasep_js_device_data::runpool_mutex will be held by the caller.
 *
 * @note This API is called from IRQ context.
 *
 * The caller has the following conditions on locking:
 * - kbasep_js_device_data::runpool_irq::lock will be held.
 *
 * @param js_policy     job scheduler policy
 * @param katom         job dispatch atom
 * @param time_spent_us the time spent by the job, in microseconds (10^-6 seconds).
 */
void kbasep_js_policy_log_job_result(union kbasep_js_policy *js_policy, struct kbase_jd_atom *katom, u64 time_spent_us);
/** @} *//* end group kbase_js_policy_job */

/** @} *//* end group kbase_js_policy */
/** @} *//* end group base_kbase_api */
/** @} *//* end group base_api */
#endif /* _KBASE_JS_POLICY_H_ */