This means that enqueueable updates (LinksUpdate, LinksDeletionUpdate)
will run immediately at this point rather than being enqueued as jobs.
This only affects ApiPurge, since the other callers use either POSTSEND
or "false".
Change-Id: I8b6ff6c9a68730374e6d83682e774e4f4bfbf52f
This will make it easier to create redirects where $subpage is the title,
e.g. "Special:Example/Foo?x=y" to "index.php?title=Foo&x=y".
To do that conveniently, getRedirectQuery() needs access to $subpage.
The alternative is to do Title-parsing inside getRedirect(), which
complicates things significantly, as one has to deal with the absence
of a title (null) and with invalid titles (illegal characters etc.).
By using it plainly as a query parameter (defaulting to null/omitted),
this is all deferred to index.php, which seems like a better separation
of concerns.
Motivated by SpecialMobileHistory in MobileFrontend (Ic0aea7ee340a).
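A minimal sketch of the resulting pattern (class context hypothetical):

  public function getRedirectQuery( $subpage ) {
      // Forward the subpage as a plain "title" query parameter (null,
      // i.e. omitted, when there is no subpage); index.php then does
      // the title parsing and validation.
      return [ 'title' => $subpage ?: null ];
  }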
Change-Id: I9fe78f479053fb55952ba78850d2fc281a039fe3
I benchmarked this again. The runtime of an unlimited explode() can be
quite high. This is not really a DoS attack vector, as it would require
posting megabytes worth of input to the code, which would hit many other
limits first. I still consider it good practice to use an unlimited
explode() only when it is actually allowed to return an unlimited number
of elements.
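For illustration:

  // Unbounded: one array element per separator, however long the input.
  $parts = explode( '|', $text );
  // Bounded: at most two elements; any further separators remain
  // untouched inside $parts[1].
  $parts = explode( '|', $text, 2 );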
Change-Id: I30f8ca5dba7b317bb4a046b9740fd736b4eea291
This is inspired by Ib117e05.
As far as I can tell this is functionally identical. Even arrays should
behave the same, as both the getVal() and getCheck() methods have a
special case that returns the `null` default when the user tried to
pass multiple values instead of a single scalar.
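Schematically, the replacement is:

  // Before:
  $provided = $request->getVal( 'param' ) !== null;
  // After, behaviorally identical (including when an array of values
  // was passed for 'param', where both methods see the null default):
  $provided = $request->getCheck( 'param' );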
Change-Id: Id4e4ec91f39d3c39461bd41673bdafc3bde11737
Some of the callers of setExpectations() actually need to reset the old
expectations to avoid erroneous warnings.
Change-Id: I63c01c0f6cd748bdc849f1a5264e17bd377b9d11
Use these in place of various wfWikiID() calls.
Also clean up UserRightsProxy wiki ID variable names and remove the
unused and poorly named getDBname() method.
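For example (assuming one of the new helpers is
WikiMap::getCurrentWikiId()):

  // Before:
  $wikiId = wfWikiID();
  // After:
  $wikiId = WikiMap::getCurrentWikiId();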
Change-Id: Ib28889663989382d845511f8d34712b08317f60e
This ensures that MergeableUpdate tasks that lazy-push jobs will
actually have those jobs run, instead of the jobs being added only
after the lone callback update that calls JobQueueGroup::pushLazyJobs()
has already run.
This also makes it more obvious that the push will happen, since a
mergeable update is added each time lazyPush() is called and a job is
buffered, rather than relying on some magic callback enqueued into
DeferredUpdates at just the right point in multiple entry points.
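Roughly, the new flow (update class and constructor illustrative):

  // Each call buffers the job and registers a mergeable deferred update;
  // repeated calls merge into one update that flushes the whole buffer.
  JobQueueGroup::singleton()->lazyPush( $job );
  // Internally, approximately:
  DeferredUpdates::addUpdate( new JobQueueEnqueueUpdate( $domain, $job ) );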
Bug: T207809
Change-Id: I13382ef4a17a9ba0fd3f9964b8c62f564e47e42d
Should be "string" not "String" and "array" not "Array" in
@param, @return and @var use cases. Also, minor typo fixes.
Change-Id: I9d5ebc5b741c6560907b95f7c0c4039da2861f4a
If a user creates a redirect that points to a [[Media:example.jpg]]
page, an exception is thrown because NS_MEDIA is a virtual namespace.
This change catches that case and switches the namespace to NS_FILE,
so the redirect works correctly. The change only applies when dealing
with a redirect, so other uses of the NS_MEDIA namespace shouldn't be
affected.
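A simplified sketch of the fix:

  if ( $target->getNamespace() === NS_MEDIA ) {
      // NS_MEDIA is virtual; point the redirect at the file
      // description page instead.
      $target = Title::makeTitle( NS_FILE, $target->getDBkey() );
  }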
Bug: T203942
Change-Id: Ia744059650e16510732a65d51b138b11cbd43eb4
When jobs are being run synchronously post-send, we don't want to allow
bugs to result in a job somehow setting cookies or headers that
interfere with those that were intended to be set in the request.
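Illustratively, assuming the guard is a response-level switch such as
WebResponse::disableForPostSend():

  // Once post-send processing starts, header/cookie setters become
  // no-ops that only log a warning instead of mutating the response.
  WebResponse::disableForPostSend();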
Bug: T191537
Change-Id: Ib5714a17af417797140f99e41eaacbba1bfd20f4
Previously, if an internal service forwarded the cookies for a
user (e.g. for permissions) but not the User-Agent header or not
the IP address (e.g. XFF), ChronologyProtector could time out
waiting for a matching writeIndex to appear for the wrong key.
The cookie now tethers the client to the key that holds the
DB positions from their last state-changing request.
Bug: T194403
Bug: T190082
Change-Id: I84f2cbea82532d911cdfed14644008894498813a
Find: /isset\(\s*([^()]+?)\s*\)\s*\?\s*\1\s*:\s*/
Replace with: '\1 ?? '
(Everywhere except includes/PHPVersionCheck.php)
(Then, manually fix some line length and indentation issues)
Then manually reviewed the replacements for cases where confusing
operator precedence would result in incorrect results
(fixing those in I478db046a1cc162c6767003ce45c9b56270f3372).
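The shape of the replacement, for illustration:

  // Before:
  $limit = isset( $params['limit'] ) ? $params['limit'] : 10;
  // After:
  $limit = $params['limit'] ?? 10;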
Change-Id: I33b421c8cb11cdd4ce896488c9ff5313f03a38cf
Since it takes time for the agent to receive the response and set the
cookie, and since the point in a request at which a LoadBalancer is
initialized can vary by many seconds (while cookies are loaded from the
start), give the cookie a much lower TTL than the DB positions in the
stash. This avoids having to wait for a position with a given cpPosIndex
value when that position has already expired from the store, which is a
waste of time.
Also include the timestamp in "cpPosIndex" cookies to implement
logical expiration in case clients do not expire them correctly.
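Illustratively, the cookie value pairs the write index with a UNIX
timestamp (the "@" encoding here is an assumption):

  // e.g. "3@1526522031": write index 3 issued at the given time; stale
  // values can be ignored even if the client never expires the cookie.
  $cookieValue = $writeIndex . '@' . time();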
Bug: T194403
Bug: T190082
Change-Id: I97d8f108dec59c5ccead66432a097cda8ef4a178
This avoids triggering other exceptions down the road such as
"Got COMMIT while atomic sections FileDeleteForm::doDelete,
LocalFile::lockingTransaction are still open". Those would
happen in LBFactory::__destruct(), when it tries to commit any
dangling transactions (firing attached callbacks too). Just like
with the Exception case, the DBs all need to be rolled back.
Also make LoadBalancer::rollbackMasterChanges() roll back any
explicit transactions even if they have no pending changes.
This clears up the state to avoid later atomic section errors.
Change-Id: Ic0b6b12c1edc1eec239f4f048359b3bbb497d3ff
This handles multi-DB transactions properly, instead of causing wait
timeouts in the WaitConditionLoop. It is also more correct, as it
uses a counter instead of relying on wall clocks.
In addition:
* Refactor related code in MediaWiki.php to be comprehensible.
* Always send the cookie, even in the "remote wiki redirect" case.
* Rename ChronologyProtector field and constant to avoid any
confusion of "wait for server X to reach Y" with "wait for Y
to show up in the position store".
* Add an "asOfTime" field to the position keys for debugging.
Bug: T182322
Change-Id: I5c73cd07eaf664f02ba00c38fab9f49b609f4284
This helps to avoid OOMs from buffer build-ups in the statsd
factory object. It piggybacks on the same checks used for
deferred update runs. In addition, the output() method checks
whether the data size is getting large and emits if needed.
Bug: T181385
Change-Id: I598be98a5770f8358975815e51380c4b8f63a79e
Renamed and deprecated in MediaWiki in f606fd8d since 1.27.
Only six uses in Wikimedia-hosted git repositories, marked as
dependencies. Also one use snuck back into MediaWiki itself,
fixed in this patch.
Depends-On: Ie8c13a6b1dc1b7861f6c27bbba996099375f066b
Depends-On: Ic2ea90343efda6533c06ca1325bc85d9aa776078
Depends-On: Ibba2f486f0ecb684ded7efb09f9942f5e0f5fd7a
Depends-On: Id27a48e10fd127e00f68e1020e8f40e30ba9a251
Depends-On: Ifd6db7910a71bb700484d6b588327424f11c00e0
Depends-On: I6523059941eb5f86274e364a8d5cc74b849655a4
Change-Id: I2cdfcd60fc7934830e3e6ec132958aa2aa1fe486
The user is not waiting at this point, so there is not much reason
to enqueue a job instead of just doing the work now. Running the
update now also gives more immediate results.
This has the effect of making LinksUpdate run post-send for
forward link updates, since the addUpdate() call in WikiPage uses
the default POSTSEND mode. These updates used to be synchronous
in the past, before proper post-send update support. With post-send
updates, there is not much benefit to using the job queue here.
If post-send updates are not supported, this will continue to
use the job queue.
If a caller needs such updates to enqueue post-send to avoid DB
updates on HTTP GET or if the update is too big to run outside of
JobRunner, it can always just use JobQueueGroup::lazyPush() with
a direct job object or JobSpecification.
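For example (job type purely illustrative):

  JobQueueGroup::singleton()->lazyPush(
      new JobSpecification( 'refreshLinks', [], [], $title )
  );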
Change-Id: Ibc4b1e17538cc8b1fba7d13759e1ebb83abed869
This previously only worked if $wgLocalVirtualHosts was set, which
was too specific a check and is not used by WMF. Use the more
generic WikiMap class instead.
Two methods have been added there to do the work of enumerating
canonical wiki farm URLs and checking them against a given URL.
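Sketch of the intended use (assuming the lookup helper is
WikiMap::getWikiFromUrl()):

  $wikiId = WikiMap::getWikiFromUrl( $url );
  if ( $wikiId !== false ) {
      // $url points at a wiki in this farm.
  }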
Bug: T172357
Change-Id: Id2415bab5d7f5a08b9f536858c32d329138384a2
Some entry points stream output and flush their own headers.
This avoids "headers already sent" warnings in some cases.
Change-Id: Ifb232d4575486749bbbccba88f3f688972fe9c20
Remove the exit(1), which does not seem to be needed by any callers.
Doing so means that post-send updates can still happen, such as the
pushing of lazy jobs.
Better to avoid showing exceptions in doPostOutputShutdown(), given
that an error may have already been shown. By the post-send stage,
it's too late to show errors anyway.
Bug: T100085
Change-Id: Ib1c75323f222a0e02603d6415626a4b233e8e1c7
Previously, tryOpportunisticExecute() tried to nest transaction rounds,
which would fail. Added LBFactory::hasTransactionRound() as needed.
Also cleaned up some unqualified class names in callbacks and set the
PRESEND flag for the JobQueueDB AutoCommitUpdate callback. Use the
proper getMasterDB() method while at it. These follow up 24842cfac.
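Illustrative use of the new check:

  // Only start an explicit round if no outer round is already active,
  // avoiding the nested-round failure described above.
  if ( !$lbFactory->hasTransactionRound() ) {
      $lbFactory->beginMasterChanges( __METHOD__ );
  }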
Bug: T154425
Change-Id: Ib1d38f68bd217903d1a7d46fb15b7d7d9620daa6
This is needed for the deferred updates LinksDeletionUpdate and
LinksUpdate; otherwise, callbacks registered with onTransactionIdle
prevent other transactions from being executed, at least in this case.
Bug: T154425
Bug: T154438
Bug: T157679
Change-Id: Iecd396d584a62ac936cd963915339159467b44cd