While RefreshLinksJob is de-duplicated by page ID, it is still possible for two jobs to run for the same page ID if the second one was queued after the first one started running. In that case, the newer job must not be skipped or ignored, because it has newer information to record to the database; but it also has no way to stop the older job, and the two cannot run concurrently. Instead of letting the lock exception mark the job as failed, which makes it retry implicitly, do the retry explicitly, which avoids log spam.

Bug: T170596
Co-Authored-By: Aaron Schulz <aschulz@wikimedia.org>
Change-Id: Id2852d73d00daf83f72cf5ff778c638083f5fc73
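A minimal sketch of the explicit-retry pattern described above, assuming MediaWiki's Job base class (setLastError()) and the LinksUpdate::acquirePageLock() helper; the class body, parameter names, and lock message are illustrative, not the actual patch:

```php
<?php
// Sketch only: shows returning false with setLastError() on lock contention
// so the job runner re-queues the job, instead of letting a lock exception
// bubble up and spam the error logs.

class RefreshLinksJobSketch extends Job {
	public function run() {
		$dbw = wfGetDB( DB_MASTER );

		// Try to take the per-page lock held by any job already updating
		// this page. Failure means an older job, queued for a prior
		// revision, is still running; we cannot run alongside it.
		$scopedLock = LinksUpdate::acquirePageLock( $dbw, $this->params['pageId'], 'job' );
		if ( !$scopedLock ) {
			// Do not throw: record the reason and return false so the
			// job is retried later rather than logged as an error.
			$this->setLastError( 'LinksUpdate::acquirePageLock() failed; page is busy' );

			return false;
		}

		// ... perform the links update while holding the lock ...

		return true;
	}
}
```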