This adds a method to LinkFilter that builds the query conditions
needed to use the filter properly, and adjusts callers to use it.
This also takes the opportunity to clean up the calculation of el_index:
IPs are handled more sensibly and IDNs are canonicalized.
Also, weird edge cases for invalid hosts like "http://.example.com" and
corresponding searches like "http://*..example.com" are now handled more
consistently instead of being treated as if the extra dot were omitted,
while explicit specification of the DNS root like "http://example.com./"
is canonicalized to the usual implicit specification.
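For illustration, the rough shape of the host canonicalization in
plain PHP (assuming the intl extension; LinkFilter's actual code
differs in detail and also deals with the protocol and path):

    // Canonicalize a host roughly the way el_index wants it: drop an
    // explicit DNS root dot, leave IPs alone, convert IDNs to punycode,
    // and reverse the labels so that prefix searches work.
    function canonicalizeHostForIndex( $host ) {
        $host = rtrim( $host, '.' ); // "example.com." => "example.com"
        if ( filter_var( $host, FILTER_VALIDATE_IP ) ) {
            return $host; // IPs are not reversed
        }
        $ascii = idn_to_ascii( $host, IDNA_DEFAULT, INTL_IDNA_VARIANT_UTS46 );
        if ( $ascii !== false ) {
            $host = $ascii; // canonical punycode form for IDNs
        }
        return implode( '.', array_reverse( explode( '.', $host ) ) ) . '.';
    }
    // canonicalizeHostForIndex( 'www.example.com.' ) => 'com.example.www.'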
Note that this patch will break link searches for links where the host
is an IP or IDN until refreshExternallinksIndex.php is run.
Bug: T59176
Bug: T130482
Change-Id: I84d224ef23de22dfe179009ec3a11fd0e4b5f56d
While RefreshLinksJob is de-duplicated by page-id, it is possible
for two jobs to run for the same page ID if the second one was queued
after the first one started running. In that case the newer one
must not be skipped or ignored, because it will have newer
information to record to the database; but it also has no way to
stop the old one, and we can't run them concurrently.
Instead of letting the lock exception mark the job as failed,
making it retry implicitly, do the retry explicitly, which avoids
logspam.
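Roughly the shape of the change (a sketch; the actual error text and
control flow in the patch may differ):

    // In RefreshLinksJob::runForTitle(): if another job holds the page
    // lock, fail cleanly so the queue retries, instead of letting the
    // lock exception bubble up and spam the logs.
    $scopedLock = LinksUpdate::acquirePageLock(
        wfGetDB( DB_MASTER ), $title->getArticleID(), 'job' );
    if ( !$scopedLock ) {
        $this->setLastError( 'LinksUpdate already running for this page, try again later' );
        return false; // marked as failed and retried later
    }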
Bug: T170596
Co-Authored-By: Aaron Schulz <aschulz@wikimedia.org>
Change-Id: Id2852d73d00daf83f72cf5ff778c638083f5fc73
In some functions MediaWikiServices::getInstance() was called twice or
in loops. Extract it into a variable to reduce the calls.
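For example (the service and loop here are illustrative):

    // Before: the service locator is hit on every iteration
    foreach ( $titles as $title ) {
        MediaWikiServices::getInstance()->getLinkCache()->addBadLinkObj( $title );
    }

    // After: resolve the service once, outside the loop
    $linkCache = MediaWikiServices::getInstance()->getLinkCache();
    foreach ( $titles as $title ) {
        $linkCache->addBadLinkObj( $title );
    }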
Change-Id: I2705db11d7a9ea73efb9b5a5c40747ab0b3ea36f
The invalid UTF-8 could cause incorrect sorting of affected pages in
category lists on wikis using UCA collations. On my local testing
wiki, the generated cl_sortkey was just 0x30 regardless of the value
of cl_sortkey_prefix.
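A sketch of the kind of fix involved, assuming the bundled utfnormal
library is used to scrub the prefix before the collation sees it (the
exact call site in the patch may differ):

    // Replace invalid byte sequences so the UCA collation gets
    // well-formed UTF-8 to build the sortkey from.
    $prefix = UtfNormal\Validator::cleanUp( $prefix );
    $sortkey = $collation->getSortKey( $title->getCategorySortkey( $prefix ) );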
This doesn't fix existing bad data in the database. It will only be
updated when the affected page is edited (or null-edited).
The cl_timestamp field will also be updated when that happens, which
apparently may affect Wikinews' DynamicPageList extension, according
to comments on T27254. This is not easily avoidable.
Bug: T200623
Change-Id: I4baa9ea3c7f831ff3c9c51e6b8e5d66e7da42a91
This method returns the value used as cl_type for category links that
are "from" pages within the namespace, and is added to avoid duplication
of code across a few classes.
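The behaviour amounts to something like this (a sketch; the method
lives alongside the other namespace helpers):

    // Value stored in cl_type for category links coming from pages
    // in the given namespace.
    public static function getCategoryLinkType( $index ) {
        if ( $index === NS_CATEGORY ) {
            return 'subcat';
        } elseif ( $index === NS_FILE ) {
            return 'file';
        }
        return 'page';
    }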
Change-Id: I4e55932a5a27858cfedb12009b455fcd02f9b5df
Adds a maintenance script to populate the field, has it run
automatically during update.php, and drops the no-longer-needed
default value on the column (where possible: mssql has some kind of
constraint involved, and I have no idea how it works).
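The script follows the usual LoggedUpdateMaintenance pattern so that
update.php only runs it once; the class and update-key names below are
illustrative:

    class PopulateNewField extends LoggedUpdateMaintenance {
        protected function getUpdateKey() {
            return 'populate new field'; // recorded in updatelog
        }

        protected function doDBUpdates() {
            $dbw = $this->getDB( DB_MASTER );
            // ... batched UPDATEs on $dbw, waiting for replication
            // between batches ...
            return true; // true => the update key is logged as done
        }
    }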
Bug: T59176
Change-Id: I971edf013a1a39466aca3b6e34c915cb24fd3aa7
The hook handlers are likely to write to secondary databases, in which
case it is better to wrap the callback in its own transaction round.
This lowers the chance of pending write warnings happening in
runMasterTransactionIdleCallbacks() as well as DBTransactionError
exceptions in LBFactory due to recursion during commit.
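The pattern is roughly (a sketch; the exact hook call and captured
variables in the patch may differ):

    // Run the handlers as their own deferred update so they get a
    // separate transaction round instead of piggybacking on ours.
    DeferredUpdates::addCallableUpdate( function () use ( $linksUpdate, $ticket ) {
        Hooks::run( 'LinksUpdateComplete', [ &$linksUpdate, $ticket ] );
    } );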
Bug: T191282
Bug: T193668
Change-Id: Ie207ca312888b6bb076f783d41f05b701f70a52e
Having such comments is worse than not having them. They add zero
information, but you still have to read the text to realize it tells
you nothing you don't already know from the class and the method name.
This is similar to I994d11e, only even more trivial, because these are
comments that say nothing but "constructor".
Change-Id: I474dcdb5997bea3aafd11c0760ee072dfaff124c
It's unreasonable to expect newbies to know that "bug 12345" means "Task T14345"
except where it doesn't, so let's just standardise on the real numbers.
Change-Id: I6f59febaf8fc96e80f8cfc11f4356283f461142a
Use of &$this doesn't work in PHP 7.1. For callbacks to methods like
array_map() it's completely unnecessary, while for hooks we still need
to pass a reference and so we need to copy $this into a local variable.
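For example (the callback method name is illustrative):

    // Callbacks never needed the reference in the first place:
    $ids = array_map( [ $this, 'pageIdFromRow' ], $rows ); // was array( &$this, ... )

    // Hooks that declare a reference parameter get a local copy instead:
    $linksUpdate = $this;
    Hooks::run( 'LinksUpdateComplete', [ &$linksUpdate ] );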
Bug: T153505
Change-Id: I8bbb26e248cd6f213fd0e7460d6d6935a3f9e468
* Add the lag checks to LinksUpdate (see the sketch after this list).
Previously, only LinksDeletionUpdate had any such checks.
* Remove the transaction hook usage, since the only two callers are
already lag/contention aware. Deferring them just makes the wait
checks pointless and they might end up happening all at once.
* Also set the visibility on some neighboring methods.
* Clean up LinksUpdate $existing variables in passing. Instead of
overriding the same variable, use a differently named variable
to avoid mistakes.
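A sketch of the per-batch wait pattern (variable and member names here
are approximate, not the exact ones in the patch):

    // Insert link rows in batches, committing and waiting for
    // replication between batches so replicas can keep up.
    foreach ( array_chunk( $insertions, $batchSize ) as $insertBatch ) {
        $this->getDB()->insert( 'pagelinks', $insertBatch, __METHOD__ );
        $lbFactory->commitAndWaitForReplication( __METHOD__, $this->ticket );
    }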
Bug: T95501
Change-Id: I43e3af17399417cbf0ab4e5e7d1f2bd518fa7e90
* Also make ErrorPageError exceptions display themselves
in PRESEND mode. Before they were always suppressed.
* Make DataUpdate::runUpdates() simply wrap
DeferredUpdates::execute().
* Remove unused installDBListener() method, which was
basically moved to Maintenance.
* Enable DBO_TRX for DeferredUpdates::execute() in CLI mode
* Also perform sub-DeferrableUpdate jobs right after their
parent for better transaction locality.
* Made rollbackMasterChangesAndLog() clear all master
transactions/rounds, even if there are no changes yet.
This keeps the state cleaner for continuing.
* For sanity, avoid calling acquirePageLock() in link updates
unless the transaction ticket is set. These locks are
already redundant and weaker in range than the locks the
Job classes that run them get. This helps guard against
DBTransactionError.
* Renamed $type to $stage to be more clear about the order.
Change-Id: I1e90b56cc80041d70fb9158ac4f027285ad0f2c9
* Avoid using deprecated functions.
* Switch to DataUpdate as the direct parent class, since
no benefit was provided by SqlDataUpdate (which
should be deprecated soon).
Change-Id: I0f1c77128f3df658e6a0eaf4471ca48ac536c643
This adds getAddedProperties and getRemovedProperties functions
to LinksUpdate. They are available only after the update, so for
extensions in the LinksUpdateComplete hook. This is useful for
example if an extension caches a page property; if the property
gets changed it may want to purge the cache.
This is similar to the getAddedLinks and getRemovedLinks
functions.
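A hypothetical LinksUpdateComplete handler using the new accessors
(the property name is made up):

    public static function onLinksUpdateComplete( LinksUpdate $linksUpdate ) {
        // Added and removed page properties, keyed by property name
        $changed = $linksUpdate->getAddedProperties() + $linksUpdate->getRemovedProperties();
        if ( isset( $changed['my_cached_prop'] ) ) {
            // ... purge whatever cache was derived from that property ...
        }
    }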
Change-Id: I0c73b3d181f32502da75687857ae9aeff731f559
* Do not commit() inside masterPosWait(). This could happen
inside JobRunner::commitMasterChanges, resulting in
one DB committing while the others may or may not later commit.
* Migrate some commit() callers to commitMasterChanges().
* Removed unsafe upload class commit() which could break
outer transactions.
* Also cleaned up the "flush" flag to make it harder to misuse.
Change-Id: I75193baaed979100c5338abe0c0899abba3444cb
* Removed the lockAndGetLatest() call which caused contention problems.
Previously, job #2 could block on job #1 in that method, then job #1
yields the row lock to job #2 in LinksUpdate::acquirePageLock() by
committing, then job #1 blocks on job #2 in updateLinksTimestamp().
This caused timeout errors. It also has not been fully safe ever
since batching and acquirePageLock() were added.
* Add an outer getScopedLockAndFlush() call to runForTitle(), sketched
after this list, which avoids this contention (as well as contention
with page edits)
but still prevents older jobs from clobbering newer jobs. Edits
can happen concurrently, since they will enqueue a job post-commit
that will block on the lock.
* Use the same lock in DeleteLinksJob to avoid edit/deletion races.
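The outer lock is roughly (a sketch; the actual lock key and timeout
may differ):

    // In runForTitle(): serialize per-page work; the lock is released
    // (and the DB session flushed) when $scopedLock goes out of scope.
    $scopedLock = wfGetDB( DB_MASTER )->getScopedLockAndFlush(
        'LinksUpdate:' . $title->getArticleID(),
        __METHOD__,
        15 // seconds to wait before giving up
    );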
Change-Id: I9e2d1eefd7cbb3d2f333c595361d070527d6f0c5
Since LinksUpdate::doUpdate() already flushes the transaction,
go ahead and flush before other DataUpdates might run (e.g.
from RefreshLinksJob). Also release the lock before running
the LinksUpdateComplete handlers, as the lock is just to keep
LinksUpdate instances from racing with each other.
Change-Id: Ied97fa36fbca0203123e9fc966d2e23bfd621c0e
This should avoid erratic lag spikes that happen as many links are
added and removed via new pages (sometimes bot generated) and edits
that blank pages as well as their reversions.
In the common case of a modest number of link changes, the entire
update will still happen in one transaction. In any case, link updates
now use a lock to avoid clobbering each other on the same page.
Bug: T109943
Change-Id: Icd453fcc3d28342065893260ad327eae11870245
* Recursive link updates no longer mention any category changes.
It's hard to avoid either duplicate mentioning of changes or
confusing explicit and automatic category changes.
* LinksUpdate no longer handles this logic; rather, WikiPage decides
to spawn this update when needed in doEditUpdates() (see the sketch
after this list).
* Fix race conditions with calculating category deltas. Do not
rely on the link tables for the read used to determine these
writes, as they may be out-of-date due to slave lag. Using the
master would still not be good enough since that would assume
FIFO and serialized job execution, which is not guaranteed.
Use the parser output of the relevant revisions to determine
the RC rows. If 3 users quickly edit a page's categories, the
old way could misattribute who actually changed what.
* Make sure RC rows are inserted in an order that matches that
of the corresponding revisions.
* Better avoid mentioning time-based (parser functions) category
changes so they don't get attributed to the next editor.
* Also wait for slaves between RC row insertions if there were
many category changes (in theory it could be well over 10K rows).
* Using a separate job better separates concerns as LinksUpdate
should not have to care about recent changes updates.
* Added more docs to $wgRCWatchCategoryMembership.
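A sketch of the spawning described above (the job parameters here are
approximate):

    // In WikiPage::doEditUpdates(): enqueue the dedicated job when the
    // feature is enabled, instead of LinksUpdate writing RC rows itself.
    if ( $wgRCWatchCategoryMembership ) {
        JobQueueGroup::singleton()->lazyPush( new JobSpecification(
            'categoryMembershipChange',
            [ 'pageId' => $this->getId(), 'revTimestamp' => $revision->getTimestamp() ],
            [],
            $this->getTitle()
        ) );
    }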
Bug: T95501
Change-Id: I5863e7d7483a4fd1fa633597af66a0088ace4c68
Do the LinksUpdateComplete hook updates in a separate
transaction as they may do slow SELECTs and updates.
A large number of DBPerformance warnings were triggered
by such cases.
Bug: T95501
Change-Id: Ie4e6b7f6aefc21bafba270282c55571ff5385fe0
So extensions like Echo are able to attribute post-edit link updates
to the specific users who triggered them.
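For example, a hook handler could read the attribution like this (a
sketch; accessor names approximate):

    public static function onLinksUpdateComplete( LinksUpdate $linksUpdate ) {
        $user = $linksUpdate->getTriggeringUser();
        // ... attribute any resulting notifications to $user ...
    }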
Bug: T116485
Change-Id: I083736a174b6bc15e3ce60b2b107c697d0ac13da
* Focus on updating links that would *not* already be updated
by jobs, not those that already *will* be updated.
* Place the jobs into a dedicated queue so they don't wait
behind jobs that actually have to parse every time (see the sketch
after this list). This helps avoid queue buildup.
* Make Job::factory() set the command field to match the value
it had when enqueued. This makes it easier to have the same
job class used for multiple queues.
* Given the above, remove the RefreshLinksJob 'prioritize' flag.
This worked by overriding getType() so that the job went to a
different queue. This required both the special type *and* the
flag to be set when using JobSpecification; otherwise either ack()
would route to the wrong queue and fail, or the job would go into
the regular queue. This was too messy and error-prone. Cirrus jobs
using the same pattern also had ack() failures, for example.
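A sketch of the same-job-class, dedicated-queue pattern described
above (factory name and params are approximate):

    // Build the job with a dedicated command/type so these updates,
    // which can often reuse a cached parse, don't queue behind jobs
    // that must re-parse every time.
    $job = RefreshLinksJob::newDynamic( $title, [ 'isOpportunistic' => true ] );
    JobQueueGroup::singleton()->lazyPush( $job );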
Change-Id: I5941cb62cdafde203fdee7e106894322ba87b48a