'isParserCacheUsed' implies that the parser cache usage has already occurred,
and obscures the true purpose of this method, which is to determine whether or
not the requested page *should* be looked up in the parser cache.
The only usage in extensions is in TextExtracts, which I changed to be both
backward- and forward-compatible in If5d5da8eab13.
Change-Id: I7de67937f0e57b1dffb466319192e4d400b867de
* This should not happen as doEditContent() saves the parser cache,
so only the rare case of incompatible options should cause misses
* The bug could also cause post-save misses with edit stashing
* Avoid the second page parse post-redirect by making sure cache
timestamps match up instead of calling time() at several points
(sketched below)
* Likewise for null edits, which used a different code path
* Removed redundant purge in onArticleCreate() as the new row sets _touched
* Removed pointless purge in onArticleDelete() as there is no row to update
(the method no-ops in that case to avoid contention already)
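The timestamp idea, as a minimal sketch (variable names illustrative,
not the exact patch):

    // Reuse one timestamp so the parser cache entry and page_touched
    // agree, rather than calling time() separately at each point.
    $now = wfTimestampNow();
    $parserOutput->setCacheTime( $now );
    $dbw->update(
        'page',
        [ 'page_touched' => $dbw->timestamp( $now ) ],
        [ 'page_id' => $page->getId() ],
        __METHOD__
    );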
Change-Id: I178fe334a3f8691ffd9452bec30561a0c5d37c6c
Graphite expects name components to be dot-separated, so our habit of using
dashes doesn't really make sense. Change metric names to be more compatible
with Graphite, except the job queue's, since that will require a gdash
dashboard definition migration.
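For example (the metric name here is illustrative):

    // Before: wfIncrStats( 'edit-failures-conflict' );
    // After, dot-separated to match Graphite's name hierarchy:
    wfIncrStats( 'edit.failures.conflict' );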
Change-Id: I77d0ff7606a8fc88434e4352d23415a9a8f4725a
* These updates add to editing time and can be done
after sending the HTTP response, for performance (sketched below)
* Also improved the active users job insertion logic
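A sketch of the pattern, assuming DeferredUpdates (the callback runs
after the response has been flushed where the SAPI supports that):

    DeferredUpdates::addCallableUpdate( function () use ( $title ) {
        // e.g. insert the active-users job or run other secondary
        // updates here, off the critical path of the edit request
    } );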
Change-Id: I5b25217c4f08db7fa9a05eac046283f02d45865e
* Made use of this in triggerOpportunisticLinksUpdate()
* This will defer and better batch job insertion (sketched below)
* Lazy job insertion and other deferred updates
make use of register_postsend_function if present
* Also cleaned up some return types and exceptions
in JobQueueGroup
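A sketch of the lazy path (job type and params illustrative):

    // Queued in memory now; actually pushed post-send via
    // register_postsend_function when available.
    JobQueueGroup::singleton()->lazyPush(
        new JobSpecification( 'refreshLinks', $params, [], $title )
    );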
Bug: T99302
Change-Id: I3a3968d75cb37563f970be08e63f31a090e0e037
* On Wikipedia, for example, these jobs make up a good percentage of
all refreshLinks jobs; skipping the parse step should avoid
runner CPU overhead
* Also fixed a bad TS_MW/TS_UNIX comparison (see the sketch below)
* Moved the fudge factor to a constant and raised it a bit
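The comparison fix and fudge factor, roughly (names hypothetical):

    // Comparing a TS_MW string ("20150501123456") against epoch
    // seconds is meaningless; normalize both to TS_UNIX first.
    $touched = wfTimestamp( TS_UNIX, $page->getTouched() );
    $rootTs = wfTimestamp( TS_UNIX, $params['rootJobTimestamp'] );
    if ( $touched >= $rootTs + self::CLOCK_FUDGE ) {
        return; // already rendered after the root job; skip the parse
    }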
Bug: T98621
Change-Id: Id6d64972739df4b26847e4374f30ddcc7f93b54a
* They get deduplicated on final insertion, but de-duplicate them
on initial insertion (EnqueueJob) to avoid any build-up there.
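One way to sketch it (opts array per JobSpecification):

    // Mark the wrapped job so the queue can collapse duplicates on
    // the initial push as well, not just at final insertion.
    $spec = new JobSpecification(
        'refreshLinks',
        $params,
        [ 'removeDuplicates' => true ],
        $title
    );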
Change-Id: Ia06f2bdf59a7e57fddb22890aa0b39420c0bfa7d
* This avoids writes on view and is more reliable
* Also made the wfWaitForSlaves() there actually work
Bug: T95501
Bug: T92357
Bug: T89027
Change-Id: I0a006fc92a9268feb185c9d88aa04002ea51ecd3
* Just rely on chronology protection and edit conflict handling.
The time a user spends looking at and editing pages is larger
than any normal slave lag anyway.
* However make sure that pages just made in the request are visible.
* In "master" datacenters, the slave lag will low anyway, and
callers make use of $flags when needed. In other datacenters,
the cache will itself be subject to lag anyway.
* Logging (DBPerformance log) shows this case is very rarely
hit anyway.
Change-Id: If34d67c02f9a7bf0a506ee8f3990697eb403a710
Somehow, revisions are getting added to the database without issue but
page_latest is being set to 0 rather than the newly-added revision ID.
Grepping through the code, the only places page_latest gets set are
WikiPage::insertOn() (which isn't relevant for an edit of an existing
page) and WikiPage::updateRevisionOn(). And the only relevant-looking
place WikiPage::updateRevisionOn() gets called seems to be
WikiPage::doEditContent(), which calls Revision::insertOn() just before
which *should* be setting the mId on the revision object.
Since there's no obvious bug in the code, let's add some checks to make
sure that the revision ID isn't 0. If we see exceptions being thrown, at
least we'll have narrowed down the places we need to look more deeply.
And if not (and the bug continues to be reported), we'll at least know
this part is working right.
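The checks are essentially of this shape (message text illustrative):

    // Fail loudly rather than letting page_latest be set to 0.
    if ( !$revision->getId() ) {
        throw new MWException(
            'Revision has no ID after insertOn(); not updating page_latest'
        );
    }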
Bug: T92046
Change-Id: I8cc60593fafb5702e29186ec14cb9d87f1767ef4
* This avoids master queries on view. It could use local jobs, but nothing
was using this by default anyway.
Bug: T92357
Change-Id: Id6353942215a3c704848d3bcc31c2b76225c78be
We review documentation all the time. Even if this was a big, notable
review, it was 5 years ago. It's probably outdated again, e.g. because
methods changed but the corresponding documentation did not. In my
opinion the fact that a review happened 5 years ago is not useful any
more.
Change-Id: I6f4fb88ea790520bf2443aae4144cdde394b5e78
There's a bunch of stuff that probably only works because the database
representation of infinity is actually 'infinity' on all databases
besides Oracle, and Oracle in general isn't maintained.
Generally, we should probably use 'infinity' everywhere except where
directly dealing with the database.
* Many extension callers of Language::formatExpiry() with $format !==
true are assuming it'll return 'infinity'; none are checking for
$db->getInfinity().
* And Language::formatExpiry() would choke if passed 'infinity', despite
callers doing this.
* And Language::formatExpiry() could be more useful for the API if we
can override the string returned for infinity.
* As for core, Title is using Language::formatExpiry() with TS_MW which
is going to be changing anyway. Extension callers mostly don't exist.
* Block already normalizes its mExpiry field (and ->getExpiry()),
but some stuff is comparing it with $db->getInfinity() anyway. A few
external users set mExpiry to $db->getInfinity(), but this is mostly
because SpecialBlock::parseExpiryInput() returns $db->getInfinity()
while most callers (including all extensions) are assuming 'infinity'.
* And for that matter, Block should use $db->decodeExpiry() instead of
manually doing it, once we make that safe to call with 'infinity' for
all the extensions passing $db->getInfinity() to Block's constructor.
* WikiPage::doUpdateRestrictions() and some of its callers are using
$db->getInfinity(), when all the inserts using that value are using
$db->encodeExpiry() which will convert 'infinity'.
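The intended division of labor, sketched (the third formatExpiry()
argument is the proposed infinity override):

    // 'infinity' everywhere in PHP; encode/decode only at the DB boundary.
    $enc = $dbw->encodeExpiry( 'infinity' ); // DB-specific sentinel
    $exp = $dbw->decodeExpiry( $enc );       // back to 'infinity'
    // API output could then override the string used for infinity:
    $out = $wgContLang->formatExpiry( $exp, TS_ISO_8601, 'infinite' );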
This also cleans up a slave-lag issue I noticed in ApiBlock while
testing.
Bug: T92550
Change-Id: I5eb68c1fb6029da8289276ecf7c81330575029ef
The prepareSave function expects the latest revision ID of the article
being replaced. Instead, we were passing an ID only used for rollbacks
and other special effects.
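In doEditContent() terms, roughly (assuming the
prepareSave( $page, $flags, $parentRevId, $user ) signature):

    // Pass the ID of the revision actually being replaced.
    $status = $content->prepareSave( $this, $flags, $this->getLatest(), $user );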
Change-Id: I4647930566b9370052a820ae3a46e10a6bba65ce
* Use special prioritized refreshLinksJobs instead, which triggers when
transcluded pages are changed
* Also added a triggerOpportunisticLinksUpdate() method to handle
dynamic transcludes
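A sketch of the queued path (assuming the
RefreshLinksJob::newPrioritized() factory):

    // Instead of parsing on view, queue a prioritized links refresh.
    JobQueueGroup::singleton()->lazyPush(
        RefreshLinksJob::newPrioritized( $this->getTitle(), [] )
    );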
Bug: T89389
Change-Id: Iea952d4d2e660b7957eafb5f73fc87fab347dbe7
At the moment, when $wgArticleCountMethod = 'link' (as it is on the WMF
cluster), we are querying the slave database before each individual
revision is imported, in order to find out whether the page is countable
at that time. This is not sensible, as (1) the slave lags behind the
master, but (2) even the master may not be up to date, since page link
updates take place through the job queue.
This change sets up a cache to hold countable values for pages where import
activity has already occurred. That way, we aren't hitting the DB on every
revision, only to get an incorrect response back.
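Conceptually the cache is just a per-run memo (key choice illustrative):

    // Remember countability per title for the duration of the import,
    // instead of asking a possibly-lagged slave for every revision.
    $key = $title->getPrefixedDBkey();
    if ( !isset( $this->countableCache[$key] ) ) {
        $this->countableCache[$key] = $page->isCountable();
    }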
Bug: T42009
Change-Id: I99189c82672d7790cda5036b6aa9883ce6e566b0
When vary-revision is set, use a currentRevisionCallback to ensure that
the newly-saved revision will always be used by the parser. This keeps
slave lag from making vary-revision not do its job.
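Roughly, as a sketch using ParserOptions::setCurrentRevisionCallback():

    // Make the parser see the just-saved revision for self-transclusion,
    // even if the slaves have not caught up yet.
    $options->setCurrentRevisionCallback(
        function ( Title $title ) use ( $revision ) {
            return $title->equals( $revision->getTitle() )
                ? $revision
                : Revision::newFromTitle( $title );
        }
    );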
Bug: T78237
Change-Id: I92ec928203a67f1236c3ecf6dd5002f66a75c38c
Revision->getRawUser()
=> Revision->getUser( Revision::RAW )
Revision->getRawUserText()
=> Revision->getUserText( Revision::RAW )
Revision->getRawComment()
=> Revision->getComment( Revision::RAW )
The body of Revision->getRawUserText() has been moved
into Revision->getUserText().
Every usage has been replaced.
Change-Id: Ic6fbfbc0507dcf88072fcb2a2e2364ae1436dce7