SiteStatsTest::testJobsCountGetCached() is somewhat flaky: if it runs
after a test that adds a page (thereby producing htmlCacheUpdate and
recentChangesUpdate jobs) but does not have the CI framework reset the
`page` table (which has the side effect of clearing all such jobs), it
fails.
This change manually clears those jobs (see the sketch below) so the
test no longer depends on test ordering.
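A minimal sketch of that cleanup, assuming the
JobQueueGroup::singleton() entry point and JobQueue::delete(), which
drops all unclaimed jobs of a given type:

    // Clear the job types a prior page-creating test may have left
    // behind, so the job count assertions start from a known state.
    $jobq = JobQueueGroup::singleton();
    $jobq->get( 'htmlCacheUpdate' )->delete();
    $jobq->get( 'recentChangesUpdate' )->delete();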
Change-Id: I1277e633c81b29bda7564fa12d23f13ded7298c7

SiteStats::jobs() hits the jobrunner backend to find out how many jobs
are enqueued across the job queues. The count is publicly exposed via
the MediaWiki API request:
/w/api.php?action=query&meta=siteinfo&siprop=statistics
That request is often issued by bots, for example when querying recent
changes, and fast bots cause useless queries toward the jobrunner
backend.
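For context, the uncached code path amounts to asking every registered
queue for its size on each call; roughly, assuming
JobQueueGroup::getQueueSizes():

    // Every call goes to the jobrunner backend and sums the sizes
    // reported by each registered job queue.
    $jobs = array_sum( JobQueueGroup::singleton()->getQueueSizes() );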
Wrap SiteStats::jobs() with a WAN cache under the key
SiteStats:jobscount. Drop the SiteStats::$jobs private variable that
was used as an in-process cache; the WAN cache does that for us via
'pcTTL'. This is similar to SiteStats::numberingroup().
Set the TTL to one minute, which should still give fresh enough results
for public use.
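A minimal sketch of the wrapper, assuming
WANObjectCache::getWithSetCallback() and its TTL_MINUTE /
TTL_PROC_LONG constants; the fetch logic inside the callback is
illustrative:

    public static function jobs() {
        $cache = ObjectCache::getMainWANInstance();
        return $cache->getWithSetCallback(
            $cache->makeKey( 'SiteStats', 'jobscount' ),
            $cache::TTL_MINUTE,
            function () {
                // Sum the sizes reported by every registered queue.
                return array_sum(
                    JobQueueGroup::singleton()->getQueueSizes()
                );
            },
            // Let the WAN cache also act as the in-process cache,
            // replacing the dropped SiteStats::$jobs member.
            [ 'pcTTL' => $cache::TTL_PROC_LONG ]
        );
    }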
Cover that behavior with a test.
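A sketch of such a test, assuming NullJob is available to enqueue
trivial jobs:

    public function testJobsCountGetCached() {
        $jobq = JobQueueGroup::singleton();

        $jobq->push( new NullJob( Title::newMainPage(), [] ) );
        $this->assertEquals( 1, SiteStats::jobs(),
            'A single queued job is reported' );

        $jobq->push( new NullJob( Title::newMainPage(), [] ) );
        $this->assertEquals( 1, SiteStats::jobs(),
            'The second job is not seen yet: the count is cached' );
    }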
While writing the test I noticed that MediaWikiTestCase generates a few
jobs due to the creation of the UTPage page:
* an HTMLCacheUpdateJob to refresh backlinks (e.g. history)
* a RecentChangesUpdateJob, which is enqueued randomly
Pass EDIT_SUPPRESS_RC to doEditContent() to prevent the first, and
blindly delete entries in the recentChangesUpdate job queue for the
second, as sketched below.
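A sketch of both workarounds; $page and $content stand for the test
page being created, and WikiPage::doEditContent() accepts the edit
flags as its third argument:

    // Suppress recent-changes handling on the test page creation so
    // the HTMLCacheUpdateJob is not enqueued.
    $page->doEditContent( $content, 'UTPage creation', EDIT_SUPPRESS_RC );

    // RecentChangesUpdateJob is enqueued randomly, so blindly drop
    // whatever ended up in that queue.
    JobQueueGroup::singleton()->get( 'recentChangesUpdate' )->delete();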
Change-Id: I95a272d0691d779bfee9e7a671cbab66a113dfa1