The high-level job execution flow for a queue consists of the following steps:
1. Dequeue a job specification (the job type and its parameters) from the underlying storage medium.
2. Deduplicate the job according to deduplication rules (optional).
3. Marshal the job into the corresponding Job subclass and run it via Job::execute().
4. Run Job::tearDown().
5. If the Job failed (as described below), attempt to retry it up to the configured retry limit.
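
As a rough illustration of these steps (and not the actual JobRunner implementation), the following sketch drives a single queue through the core JobQueue API; the `exampleJob` type is hypothetical, and deduplication and retry bookkeeping are reduced to comments:

```php
use MediaWiki\MediaWikiServices;

// Simplified, illustrative runner loop body for one hypothetical job type.
$queue = MediaWikiServices::getInstance()->getJobQueueGroup()->get( 'exampleJob' );

// Step 1: dequeue the next job specification; pop() returns it already
// marshalled into the corresponding Job subclass (step 3), or false if the
// queue is empty. Deduplication (step 2) is not shown here.
$job = $queue->pop();
if ( $job ) {
	try {
		// Step 3: run the job; an exception or a `false` return value
		// counts as a failure.
		$success = $job->run();
	} catch ( Throwable $e ) {
		$success = false;
	}
	// Step 4: tear-down logic would run here.
	if ( $success ) {
		// Remove the completed job from the queue.
		$queue->ack( $job );
	} else {
		// Step 5: a full runner would retry the job up to the configured
		// retry limit, unless Job::allowRetries() returns false.
	}
}
```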
An exception thrown by Job::run(), or Job::run() returning `false`, will cause
the job runner to retry the job up to the configured retry limit, unless Job::allowRetries() returns `false`.
As of MediaWiki 1.43, no job runner implementation makes a distinction between transient errors
(which are retry-safe) and non-transient errors (which are not retry-safe).
A Job implementation that can encounter both transient and non-transient errors
should therefore catch and handle non-transient errors internally and return `true`
from Job::run() in such cases. This avoids unwanted retries for non-transient errors
while still benefiting from the automated retry logic for transient errors.
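
For example, a job might treat a missing target page as a non-transient condition and a brief connectivity problem as transient. The sketch below assumes a hypothetical `purgeExample` job type with hypothetical helper methods; a job that is never safe to retry would instead override Job::allowRetries() to return `false`.

```php
// Hypothetical sketch: non-transient errors are handled inside run() and
// reported as success so the runner does not retry them, while transient
// errors return `false` (or throw) and are retried automatically.
class PurgeExampleJob extends Job implements GenericParameterJob {
	public function __construct( array $params ) {
		parent::__construct( 'purgeExample', $params );
	}

	public function run() {
		$target = $this->lookUpTarget( $this->params['pageId'] );
		if ( $target === null ) {
			// Non-transient: the target is gone and retrying cannot help,
			// so record the error but report success to avoid a retry.
			$this->setLastError( 'Target page no longer exists' );
			return true;
		}
		try {
			$this->purge( $target );
		} catch ( RuntimeException $e ) {
			// Transient (e.g. a brief connectivity problem): returning
			// false lets the runner retry up to the configured limit.
			$this->setLastError( $e->getMessage() );
			return false;
		}
		return true;
	}

	/** Hypothetical helper: resolve the target, or null if it no longer exists. */
	private function lookUpTarget( int $pageId ): ?object {
		return null;
	}

	/** Hypothetical helper: perform the purge; may throw on transient failures. */
	private function purge( object $target ): void {
	}
}
```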
Note that in a distributed job runner implementation, the above steps
may be split between different infrastructure components, as is the case with
the changeprop-based system used by the Wikimedia Foundation. This may require
additional configuration beyond overriding Job::allowRetries() to ensure that
other job runner components do not attempt to retry a job that is not retry-safe (T358939).
Since job runner implementations may vary in reliability, job classes should be
idempotent, to maintain correctness even if the job happens to run more than once.
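
To illustrate the idempotency guideline: a job that recomputes a derived value and overwrites it is safe to run twice, while one that applies an increment is not. The job type and helpers below are hypothetical.

```php
// Hypothetical sketch: running this job twice leaves the same end state as
// running it once, because it overwrites the stored value with a freshly
// recomputed one rather than applying an increment.
class RecountExampleJob extends Job implements GenericParameterJob {
	public function __construct( array $params ) {
		parent::__construct( 'recountExample', $params );
	}

	public function run() {
		$pageId = $this->params['pageId'];
		$count = $this->recomputeCount( $pageId );
		$this->storeCount( $pageId, $count ); // overwrite, don't increment
		return true;
	}

	/** Hypothetical helper: recompute the value from the authoritative source. */
	private function recomputeCount( int $pageId ): int {
		return 0;
	}

	/** Hypothetical helper: overwrite the stored value for the page. */
	private function storeCount( int $pageId, int $count ): void {
	}
}
```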
## Job deduplication
A Job subclass may override Job::getDeduplicationInfo() and Job::ignoreDuplicates() to allow jobs to be deduplicated if the job runner in use supports it.
If Job::ignoreDuplicates() returns `true`, the deduplication logic must consider the job to be a duplicate if a job of the same type with identical deduplication info has been executed after the job's enqueue timestamp.
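
A common pattern, sketched here for a hypothetical job type, is to opt in via Job::ignoreDuplicates() and to strip parameters that vary between otherwise identical jobs (the `triggeredBy` parameter below is a hypothetical example) out of the deduplication info:

```php
// Hypothetical sketch of a deduplicatable job type.
class RefreshExampleJob extends Job implements GenericParameterJob {
	public function __construct( array $params ) {
		parent::__construct( 'refreshExample', $params );
	}

	public function ignoreDuplicates() {
		// Opt this job type in to deduplication.
		return true;
	}

	public function getDeduplicationInfo() {
		$info = parent::getDeduplicationInfo();
		// Exclude a volatile parameter that differs between otherwise
		// identical jobs, so it does not defeat deduplication.
		unset( $info['params']['triggeredBy'] );
		return $info;
	}

	public function run() {
		// Refresh whatever this job type is responsible for.
		return true;
	}
}
```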
Jobs that spawn many smaller jobs (so-called "root" and "leaf" jobs) may enable additional deduplication logic
that turns in-flight leaf jobs into no-ops when a newer root job with identical parameters is enqueued.
This is done by passing two special parameters, `rootJobTimestamp` and `rootJobSignature`,
which hold the MediaWiki timestamp at which the root job was enqueued, and an SHA-1 checksum uniquely identifying the root job, respectively.
The Job::newRootJobParams() convenience method facilitates adding these parameters to a preexisting parameter set.
When deduplicating leaf jobs, the job runner must consider a leaf job to be a duplicate
if a root job with an identical signature has been executed by the runner later than the
`rootJobTimestamp` of the leaf job.
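
For example, a root job for the hypothetical RefreshExampleJob above could be enqueued as follows; each leaf job it spawns would then carry the same two parameters (the root job's copy is available via Job::getRootJobParams()) so the runner can match them against the root job's signature. The key passed to Job::newRootJobParams() only needs to identify the root job uniquely; the format used here is illustrative.

```php
use MediaWiki\MediaWikiServices;

// Hypothetical sketch: enqueue a root job whose parameters include the
// rootJobSignature / rootJobTimestamp pair produced by newRootJobParams().
$pageId = 123;
$rootParams = Job::newRootJobParams( "refreshExample:{$pageId}" );
$job = new RefreshExampleJob( [ 'pageId' => $pageId ] + $rootParams );
MediaWikiServices::getInstance()->getJobQueueGroup()->lazyPush( $job );
```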
## Enqueueing jobs
For enqueueing jobs, JobQueue and JobQueueGroup offer the JobQueue::push() and
JobQueue::lazyPush() methods. The former synchronously enqueues the job and propagates
a JobQueueError exception to the caller in case of failure, while the latter, when running
in a web request context, defers enqueueing the job until after the response has been flushed to the client.
Callers should prefer using `lazyPush` unless it is necessary to surface enqueue failures.
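
A sketch of both patterns, reusing the hypothetical RefreshExampleJob from above:

```php
use MediaWiki\MediaWikiServices;

$jobQueueGroup = MediaWikiServices::getInstance()->getJobQueueGroup();
$job = new RefreshExampleJob( [ 'pageId' => 123 ] );

// Preferred: enqueueing is deferred until after the response is flushed.
$jobQueueGroup->lazyPush( $job );

// Only when enqueue failures must be surfaced to the caller:
try {
	$jobQueueGroup->push( $job );
} catch ( JobQueueError $e ) {
	// Handle or report the enqueue failure.
}
```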