I searched for /\$(\S+) = (.+?\(.*?\);)\n.*?\$\1\[/, ignored
everything involving isset(), unset() or array assignments, then
skimmed through the remaining results and changed things where they
made sense. These changes were not automated, so please review them.
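For illustration, the shape being matched and the presumable rewrite
(hypothetical names; PHP >= 5.4 allows dereferencing call results):

    // Before: a temporary variable used only for one array access
    $parts = explode( '/', $path );
    $first = $parts[0];

    // After: dereference the call result directly
    $first = explode( '/', $path )[0];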
Change-Id: Ib37b4c66fc57648470f151ad412210b3629c2538
Using either the action=upload API or Special:Upload. (No user
interface is provided for the latter; this is meant to be used by
on-wiki scripts/gadgets enhancing the upload process.)
Modelled after how ae3ab9eef0
implemented tagging of regular edits.
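A rough sketch of an API-based tagged upload (parameter names are
assumptions based on this description, not a verified API contract):

    // Hypothetical action=upload request parameters; 'tags' lists
    // the change tags to apply to the upload.
    $params = [
        'action'   => 'upload',
        'filename' => 'Example.png',
        'tags'     => 'gadget-assisted-upload',
        'token'    => $csrfToken,
    ];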
Bug: T121876
Change-Id: Ia3e0dbd895b2f8bc66985b24db35f112b6f9a22d
This patch switches to using a slave but immediately
waits for the slave to catch up with the master
(so as not to miss things).
This may result in more delay between an edit and
category changes being inserted.
It may be possible to instead wait for the timestamp
that is passed in $this->params['revTimestamp']
which could result in slightly less delay.
I can't see any uses of waitForReplication in quite
this way, but see no immediate reason this would not work.
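A minimal sketch of the idea, assuming that era's load-balancer API
(not the exact patch):

    // Block until slaves catch up with the master, then it is safe
    // to read recent writes from a slave.
    wfGetLBFactory()->waitForReplication();
    $dbr = wfGetDB( DB_SLAVE );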
Bug: T125147
Change-Id: Ia0aa722c97f41a3959bcd3cb4210b39db0c3bc45
This method is less manual and avoids the usual pitfalls of
not unlocking for a return statement or not flushing out any
prior transaction.
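For context, the manual pattern being replaced looks roughly like
this (illustrative only):

    $dbw->lock( $key, __METHOD__ );
    if ( $somethingFailed ) {
        return false; // pitfall: the lock is never released
    }
    // ... work that assumes no prior transaction is pending ...
    $dbw->unlock( $key, __METHOD__ );

A scoped lock that flushes first and releases itself when its handle
goes out of scope avoids both pitfalls.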
Change-Id: Ib1681244767de860105a68210e181e2f024ee525
None of this works and it has long been begging for a mercy kill.
All it does is waste contributor time on updating deprecations
in the dead code. I imagine we wouldn't reuse much of this
code if we're ever going to reimplement it.
Bug: T119336
Change-Id: Ibd26a4bea621857aac77823017e9be9b7dc52cca
It's easily possible for SessionManager::getSessionById() to be
unable to load the specified session and unable to create an empty
one with that ID, for example if the user's token changed. So change
this from an exceptional condition to an expected one, and adjust
callers to deal with it appropriately.
Let's also make the checks for invalid data structure when loading the
session from the store delete the bogus data entirely.
At the same time, let's change the silly "$noEmpty" parameter to
"$create" and make the default behavior be not to create an empty
session.
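The resulting caller pattern is presumably along these lines (sketch;
the $create parameter defaulting to false as described):

    $session = SessionManager::singleton()->getSessionById( $id );
    if ( $session === null ) {
        // Expected case: the session could not be loaded or
        // recreated (e.g. the user's token changed).
    }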
Bug: T124126
Change-Id: I085d2026d1b366b1af9fd0e8ca3d815fd8288030
The missing "bool" should be obvious.
I'm also changing type hints from the implementation to the interface.
All public methods from the JobSpecification class are also in the
interface, except for two: toSerializableArray and newFromArray.
These two are not used here.
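The kind of change involved, sketched against MediaWiki's job classes
(the method name here is illustrative):

    // Before: hinting against the implementation
    public function enqueue( JobSpecification $spec ) { /* ... */ }

    // After: hinting against the interface
    public function enqueue( IJobSpecification $spec ) { /* ... */ }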
Change-Id: I36867cdfdf012a4f3233ac4730ab46dac1edc0ab
I created a basic test yesterday to cover two bugs. Now the test covers
all public methods. I was also able to get rid of the test double.
Change-Id: I53110280e3ef7b7a72d175b11b7fc4ccf1d648b3
SessionManager is a general-purpose session management framework, rather
than the cookie-based sessions that PHP wants to provide us.
While fallback is provided for using $_SESSION and other PHP session
management functions, they should be avoided in favor of using
SessionManager directly.
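For example, rather than touching $_SESSION, code would do something
like this (sketch based on the description; exact method names may
differ):

    $session = MediaWiki\Session\SessionManager::getGlobalSession();
    $session->set( 'exampleKey', $value ); // instead of $_SESSION
    $session->persist(); // keep the session across requests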
For proof-of-concept extensions, see OAuth change Ib40b221 and
CentralAuth change I27ccabdb.
Bug: T111296
Change-Id: Ic1ffea74f3ccc8f93c8a23b795ecab6f06abca72
This can happen in sub-second cases with clock skew. It makes
Grafana tend to see -1 as the min for some time values.
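Presumably the fix clamps such intervals, along these lines:

    // Clock skew can make $now - $then slightly negative for
    // sub-second gaps; clamp to zero before sending the stat.
    $interval = max( 0, $now - $then );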
Change-Id: I4e39d8ac29f515fd76548f1a7b64d71a03064407
This partially reverts 22476baa85, as the setTriggeringUser()
call that was removed was being used by Echo to be able to determine
which user caused a LinksUpdate to be triggered.
Bug: T121780
Change-Id: I62732032a6b74f17b5ae6a2497fa519f9ff38d4f
This class should manage the escaping it uses, rather than use some
random BagOStuff that has nothing to do with the job queue.
Change-Id: Ie716dc4a3429754a99c5f0670555e5e049b61aa1
* Track queues with non-abandoned jobs per partition server.
The s-queuesWithJobs key can easily be queried to see which
queues need to have periodic tasks run (or for debugging); see
the sketch after this list.
* This is a requirement for the redis jobchron service to be able to
avoid hitting N=(no. types X no. wikis) queues for periodic tasks
when only a tiny fraction of those actually have any jobs. For WMF,
there are over 30K queues, most of them empty, so doing that can help
lower redis-server CPU (or at least make jobchron more responsive).
* This also allows for jobchron to manage the aggregator by taking the
per-server aggregator sets and merging them. This scales much better
as there are only a modest number of these daemons (18 for WMF) but
vastly more web threads pushing jobs. This cuts down on the connections
to the active aggregator server (the one with the hash table).
* Use Lua unpack() more for stylistic consistency.
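A rough sketch of how a consumer could use that set (key naming
illustrative, via phpredis):

    // Only visit queues that actually have non-abandoned jobs,
    // instead of all (no. types X no. wikis) queues.
    $queues = $redis->sMembers( "{$prefix}:s-queuesWithJobs" );
    foreach ( $queues as $queue ) {
        // ... run periodic tasks for this queue only ...
    }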
Change-Id: I1549f0edc78cc4004dd887b475dec4c0ebd306c6
* Do not de-duplicate jobs with "masterPos" (see the sketch after
  this list). It either does not catch anything or is not correct.
  Previously, it was the latter, by making getDuplicationInfo()
  ignore the position. That made the oldest DB position win among
  "duplicate" jobs, which is unsafe.
* From graphite, deduplication only applies 0.5-2% of the time for
"refreshLinks", so there should not be much more duplicated
effort. Dynamic and Prioritized refreshLinks jobs remain
de-duplicated on push() and root job de-duplication still applies
as it did before. Also, getLinksTimestamp() is still checked to
avoid excess work.
* Document the class constants.
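A rough sketch of the first point (field/parameter names as used
above; the actual patch differs):

    // Jobs carrying a replication position are not flagged as
    // removable duplicates on push().
    $this->removeDuplicates = !isset( $params['masterPos'] );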
Change-Id: Ie9a10aa58f14fa76917501065dfe65083afb985c
* Using addUpdate() makes sure purges are coalesced and
  de-duplicated (see the sketch after this list).
* Also removed inconsistent $wgUseSquid checks. If CDN caching
is not used, then $wgSquidServers will just be empty anyway.
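A minimal sketch of the pattern, assuming that era's deferred-update
and CDN purge classes:

    // Queued purges get coalesced and de-duplicated rather than
    // being sent immediately.
    DeferredUpdates::addUpdate( SquidUpdate::newFromTitles( [ $title ] ) );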
Bug: T119016
Change-Id: I8b448366f037f668385d252f9d68289b71d1a707
Title::newFromText will use the given namespace as the default, but
when parsing a title that begins with a namespace prefix, the method
will not use the default and will instead use the namespace from the
given text.
Use Title::makeTitle to create a title that always belongs to the
given namespace.
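For illustration (hypothetical values):

    // The explicit prefix in the text wins over the default:
    $a = Title::newFromText( 'Help:Foo', NS_CATEGORY ); // NS_HELP

    // makeTitle always uses the given namespace:
    $b = Title::makeTitle( NS_CATEGORY, 'Help:Foo' ); // NS_CATEGORY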
Bug: T119763
Change-Id: Ic96671e1c33c6572b07f0f859d97c85a7a15bd39
* Recursive link updates no longer mention any category changes.
It's hard to avoid either duplicate mentioning of changes or
confusing explicit and automatic category changes.
* LinksUpdate no longer handles this logic, but rather WikiPage
decides to spawn this update when needed in doEditUpdates().
* Fix race conditions with calculating category deltas. Do not
  rely on the link tables for the read used to determine these
  writes, as they may be out-of-date due to slave lag. Using the
  master would still not be good enough since that would assume
  FIFO and serialized job execution, which is not guaranteed.
  Use the parser output of the relevant revisions to determine
  the RC rows (see the sketch after this list). If 3 users quickly
  edit a page's categories, the old way could misattribute who
  actually changed what.
* Make sure RC rows are inserted in an order that matches that
of the corresponding revisions.
* Better avoid mentioning time-based (parser functions) category
changes so they don't get attributed to the next editor.
* Also wait for slaves between RC row insertions if there were
  many category changes (in theory it could be well over 10K rows).
* Using a separate job better separates concerns as LinksUpdate
should not have to care about recent changes updates.
* Added more docs to $wgRCWatchCategoryMembership.
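A sketch of the delta computation mentioned above (variable names
illustrative):

    // Diff categories between the relevant revisions' parser
    // output rather than reading possibly-lagged link tables.
    $added   = array_diff( $newCategories, $oldCategories );
    $removed = array_diff( $oldCategories, $newCategories );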
Bug: T95501
Change-Id: I5863e7d7483a4fd1fa633597af66a0088ace4c68