This reverts commit 7f63d5250e,
re-applying commit 82da9cf14b.
It can be re-applied safely now that T354361 has been fixed.
Most of the incidental changes from the original patch are
no longer needed, as they were made unnecessary by other work,
or were applied in I4cb2f29cf890af90f295624c586d9e1eb1939b95.
Change-Id: I1ff9a7c94244bffffe5574c0b99379ed1121a86d
(cherry picked from commit 09703c2c774a65dd9ee57ec83154aa1eab5a9d03)
This is more robust and secure than the regular expression previously
used to extract the <meta> tag.
We also improve HtmlHelper slightly by adding the ability to replace
an element with an 'outerHTML' string.
Because our output is being run through Remex, there is a slightly
larger degree of HTML normalization in the output than previously,
which is visible in some small tweaks to test case outputs.
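To illustrate the difference (a minimal Python sketch for illustration only; the actual change uses the Remex HTML parser in PHP): a real parser finds the <meta> tag regardless of attribute order, quoting style, or self-closing syntax, where a regular expression breaks easily.

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collect the attributes of every <meta> tag with a real parser."""

    def __init__(self):
        super().__init__()
        self.metas = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            self.metas.append(dict(attrs))

def extract_meta(html, name):
    """Return the content= value of <meta name=...>, or None."""
    parser = MetaExtractor()
    parser.feed(html)
    for attrs in parser.metas:
        if attrs.get("name") == name:
            return attrs.get("content")
    return None
```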
Bug: T381617
Depends-On: I2712e0fa9272106e8cd686980f847ee7f6385b6f
Change-Id: I4cb2f29cf890af90f295624c586d9e1eb1939b95
(cherry picked from commit 7ebd8034b54495f28f4c5583d4fa55071634b593)
Fixed the query for imported actors and some other potential edge cases.
Unsetting the 'target' field in SpecialDeletedContributions alone should
be sufficient, but I would rather keep the behaviour consistent with
ContribsPager, which is used by more users, and using
`$this->targetUser->getName()` is known to be OK so far.
Also, renamed some variables to match the parent class method signature
to avoid confusion.
Bug: T372444
Bug: T404230
Change-Id: I318ec7f30174087f988536f5196ff81e99241c9b
(cherry picked from commit dda0d4dfcd712b976e542cd688a3ab1c45051e7d)
This cleans up a FIXME left over from
I9e6b924d62ccc3312f5c70989477da1e2f21c86b.
SimpleParsoidOutputStashTest was temporarily changed from a unit test to
an integration test, since the serialization/deserialization mechanism
for Content relies on ContentHandlerFactory in a way which is
difficult to unit test. This will be restored in
I0cc1fc1b7403674467d85618b38a3b5a4718b66e once native JSON
serialization for Content is landed.
Follows-Up: I9e6b924d62ccc3312f5c70989477da1e2f21c86b
Change-Id: If985e99f9ca9596d0fe40f0a5ef2cdb72286627d
(cherry picked from commit 2ebf7e12df28f9861bb204ff4134871089a1c771)
By default this uses the existing ContentHandler::serializeContent() and
::unserializeContent() methods. But for cases where the existing PHP
serialization preserved fields that ::serializeContent() did not,
additional ContentHandler::serializeContentToJsonArray() and
ContentHandler::deserializeContentFromJsonArray() methods are
provided. Use these in WikitextContentHandler to preserve the
PST flags.
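A Python sketch of the idea (the class and field names here are hypothetical stand-ins, not the actual MediaWiki API): the legacy text round-trip loses the PST flag, while the JSON-array round-trip preserves it.

```python
import json

class WikitextContent:
    """Hypothetical stand-in: body text plus a PST bookkeeping flag."""

    def __init__(self, text, pst_applied=False):
        self.text = text
        self.pst_applied = pst_applied

def serialize_content(content):
    # Legacy path: only the wikitext survives; the PST flag is lost.
    return content.text

def serialize_content_to_json_array(content):
    # JSON path: extra fields round-trip alongside the text.
    return {"text": content.text, "pstApplied": content.pst_applied}

def deserialize_content_from_json_array(data):
    return WikitextContent(data["text"], data["pstApplied"])

original = WikitextContent("Hello", pst_applied=True)
wire = json.dumps(serialize_content_to_json_array(original))
restored = deserialize_content_from_json_array(json.loads(wire))
```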
Added test cases and a ContentSerializationTestTrait to make it
easy to ensure forward- and backward-compatibility in accord with
https://www.mediawiki.org/wiki/Manual:Parser_cache/Serialization_compatibility
The new JsonCodecable codec will be used to improve PageEditStashContent
serialization, which no longer has to PHP-serialize its Content object.
New test case added demonstrating compatibility.
Bug: T264389
Bug: T161647
Change-Id: I544625136088164561b9169a63aed7450cce82f5
(cherry picked from commit 21576d6c1893079777a1a51d0f81c4941c58e376)
In WebP lossless chunks (identified by VP8L), the width-minus-one and
height-minus-one of the canvas are encoded sequentially as 14-bit
integers. (spec: https://developers.google.com/speed/webp/docs/webp_lossless_bitstream_specification#3_riff_header)
When decoding the canvas height, WebPHandler has been skipping the two
most-significant bits. This resulted in bogus values being read from
larger losslessly-encoded files.
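A minimal Python sketch of the relevant bit layout (the encoder helper exists only for the demonstration; see the spec linked above): after the one-byte 0x2F signature, width-minus-one and height-minus-one are consecutive 14-bit LSB-first fields, so the height read must keep all 14 bits.

```python
def read_bits(data, bit_offset, nbits):
    """Read nbits LSB-first starting at bit_offset within data."""
    value = 0
    for i in range(nbits):
        byte = data[(bit_offset + i) // 8]
        value |= ((byte >> ((bit_offset + i) % 8)) & 1) << i
    return value

def vp8l_dimensions(chunk):
    """Decode canvas width/height from a VP8L chunk payload."""
    assert chunk[0] == 0x2F  # VP8L signature byte
    width = read_bits(chunk, 8, 14) + 1
    height = read_bits(chunk, 8 + 14, 14) + 1  # all 14 bits, not 12
    return width, height

def encode_vp8l_header(width, height):
    """Pack the signature plus the two 14-bit fields (demo only)."""
    bits = (width - 1) | ((height - 1) << 14)
    return bytes([0x2F]) + bits.to_bytes(4, "little")
```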
Change-Id: Ib5b26f36a15fa65e7990da2ebd94157faccc70c2
(cherry picked from commit 442b73cebbea6db7b7fc945189d5776602fabc8a)
Update the default Reauthenticate time to 3600 seconds,
moving from a 1 minute timeout to 1 hour to improve the
user experience
Bug: T402037
Change-Id: Ic9a4585afcfe72f795868cbf7d5281a809e6a7c5
(cherry picked from commit fa04ae9ab260082b859876bee7b162b8c833c85b)
Why:
* ServiceWiring.php is documented to say that "Services MUST NOT
vary their behaviour on the global state, especially not ...
RequestContext ... or ... "current" user"
** However, the constructor of the CommentParserFactory calls
`RequestContext::getMain()->getLanguage()` which is in
violation of this rule by both using the RequestContext
and being controlled by the state of the "current" user.
* This has caused issues with premature access to the session
user as demonstrated in T397900.
** Specifically, the call to ::getLanguage will load the request
user's preferences and then, as part of this, check whether the
user is named (which will load the User object).
* Instead of using the incorrect method of getting the user's
language, it should instead be fetched in
CommentParserFactory::create.
** This will also allow the Language associated with the main
request to change without leaving the service with a stale
copy of the user's Language object.
What:
* Update CommentParserFactory to call `RequestContext::getMain()
->getLanguage()` in the ::create method instead of getting it
from the constructor.
* Remove the call to `RequestContext::getMain()->getLanguage()`
in ServiceWiring.php as no longer needed.
* Update the unit test to instead be an integration test due to
::create now calling code which uses the service container.
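The before/after difference can be sketched in Python (hypothetical stand-ins for the PHP classes): capturing the request language in the constructor at wiring time freezes it, while fetching it inside create() always sees the current value.

```python
class RequestContext:
    """Hypothetical stand-in for the global request context."""
    _language = "en"

    @classmethod
    def get_main(cls):
        return cls

    @classmethod
    def get_language(cls):
        return cls._language

class CommentParser:
    def __init__(self, language):
        self.language = language

class CommentParserFactory:
    # Before: the language was captured once, at service-wiring time.
    # After: it is looked up per call, so each parser sees the
    # language of the current request.
    def create(self):
        return CommentParser(RequestContext.get_main().get_language())

factory = CommentParserFactory()
first = factory.create()
RequestContext._language = "fr"  # the request language changes later
second = factory.create()
```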
Bug: T397900
Change-Id: I36c9d8650eb5040f94626fa50f90b8026d3c3fe9
(cherry picked from commit 536f41bce51ca67733c4879d17992ee0b0db1de8)
These requests are usually sent to a wiki operated by a different
organization so UA etiquette is important.
* Add the site's URL (the URL of the main page, more specifically)
as a contact address.
* Add the site's URL as a referer as well.
Considered but not done:
* Use $wgEmergencyContact as the contact part of the UA. It's not
guaranteed to be set correctly, while the main page URL always
exists and will usually be enough to pinpoint the wiki (except
maybe in some intranet scenarios).
* Include information about the user making the request. Would
be a privacy risk + probably useless due to caching.
* Include information about the page the request is for. Would
require lots of refactoring (making the patch harder to
backport) or relying on the context title (which might be
fragile), and in any case probably unreliable due to caching,
and doesn't seem very relevant to the operator of the foreign
site.
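A hedged sketch of the resulting headers (the exact User-Agent format used by the patch may differ):

```python
def build_request_headers(mw_version, main_page_url):
    """Identify the requesting wiki to the foreign site's operators."""
    # The main page URL doubles as a contact address: it always exists
    # and usually pinpoints the wiki, unlike $wgEmergencyContact.
    user_agent = f"MediaWiki/{mw_version} (contact: {main_page_url})"
    return {"User-Agent": user_agent, "Referer": main_page_url}

headers = build_request_headers("1.43.0",
                                "https://example.org/wiki/Main_Page")
```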
Bug: T400881
Change-Id: I968fac6ee0ebbc5a2bd3244f57851eb64125c93d
Running "SELECT @@GLOBAL.read_only" on MariaDB 12.0.2 returns "OFF"
instead of "0", which appears as "true" when cast to boolean in PHP. We
fix that by adding a specific check.
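The needed normalization amounts to the following (Python sketch; the actual fix lives in MediaWiki's Rdbms layer):

```python
def parse_read_only(value):
    """Normalize @@GLOBAL.read_only across server versions.

    Older servers return '0'/'1'; MariaDB 12.0.2 returns 'OFF'/'ON'.
    A naive boolean cast treats the non-empty string 'OFF' as true.
    """
    return str(value).upper() not in ("", "0", "OFF")
```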
Discord thread where the bug is discussed:
https://discord.com/channels/178359708581625856/1404036592527741049
JIRA ticket in MariaDB: https://jira.mariadb.org/browse/MDEV-37429
Bug: T401570
Change-Id: Ifb04e8b7d04403b6f3dd8517c20c9c0070bd57ac
(cherry picked from commit 54d2416fbcb3a7d0e2a197ca58a755134bd18866)
Why:
- From MediaWiki 1.36 to MediaWiki 1.44 (inclusive),
`PostgresUpdater.php` contains a typo in the instruction to rename
the `sites_group` index to `site_group`.
- This typo means that - on Postgres wikis - the MediaWiki update
script will not currently rename this index as intended, as the index
which the updater is told to rename (i.e., containing the typo)
doesn't exist.
- From MediaWiki 1.42 onwards, this typo indirectly causes `update.php`
on Postgres wikis to throw an error on its first run:
- From MW 1.42 onwards, the update script included an instruction to
drop multiple indexes on the `sites` table, including this index
that was previously intended to be renamed.
- However, as this typo meant that the `sites_group` index was never
renamed on Postgres wikis, the database is unable to find the
renamed index in order to drop it; and consequently throws an
error (reported on Phabricator as T374042).
- This only affects the first run of `update.php` due to the fact
that - when deciding whether to apply the patch containing _all_ of
the index-drops for the `sites` table - the `dropIndex` instruction
only checks for the existence of the `site_type` index (and, if the
`site_type` index doesn't exist, the patch as a whole isn't applied).
However, as - within `patch-sites-drop_indexes.sql` - the statement
to drop the `site_type` index is located _before_ the instruction to
drop the `site_group` index, the `site_type` index will have been
dropped on the first run of `update.php`.
- This also means that - on any future runs of `update.php` - the
indexes listed after (and including) `site_group` in that SQL file
will currently remain un-dropped.
What:
- Fix the typo in the PostgresUpdater index renaming instruction:
`'sites_group, '` -> `'sites_group'`
- Update PostgresUpdater to individually re-attempt to drop the indexes
listed after & including `site_group` in
`patch-sites-drop_indexes.sql`, to ensure that they're dropped on
Postgres wikis that have already (1) upgraded to MW 1.42+, & (2) run
`update.php`.
(These could theoretically have all been combined within one extra
SQL patch, rather than one for each index; but I thought it might be
best for the updater to check for the existence of each of these
indexes individually before it attempts to drop each one.)
Follows-up 9907b56c9b, 616744db1d
Bug: T374042
Change-Id: Ie6ffa92153e64ca653f726a35a5a6b5d95d093f5
Reason for backport:
This is also a Debian 13 support issue: some MW installations may
have had `$wgLocaltimezone` set to deprecated values[1] like `PRC`
by the installer or manually.
After upgrading to Debian 13, the `tzdata` package no longer
provides these timezones, and the `tzdata-legacy` package is not
installed by default.
[1]: https://www.php.net/manual/en/timezones.others.php
Bug: T380423
Change-Id: Ie2001796442ee6ba973fdb4b7b1dc7312f802e8d
(cherry picked from commit 45dc435d897d7716ddc8215cb841b07f1c7a2f9c)
- Handle GPS tags that store a decimal rational number instead of an
  array of DMS rationals
- Wrap (mod) the decimal values
- Increase validation of the GPS tag format
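A hedged Python sketch of the two accepted shapes (the exact validation and wrapping in the patch may differ):

```python
from fractions import Fraction

def gps_to_decimal(components):
    """Convert an EXIF GPS coordinate to decimal degrees.

    The usual encoding is three rationals (degrees, minutes,
    seconds); some files instead store a single decimal rational.
    Out-of-range decimals are wrapped with mod for this demo.
    """
    if len(components) == 3:
        d, m, s = (float(Fraction(*c)) for c in components)
        value = d + m / 60 + s / 3600
    elif len(components) == 1:
        value = float(Fraction(*components[0]))
    else:
        raise ValueError("unexpected GPS component count")
    return value % 360

coord = gps_to_decimal([(40, 1), (26, 1), (4572, 100)])  # 40d 26' 45.72"
```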
Bug: T386208
Change-Id: Ief823af317bbb01b4a05e34b1d189ce1deaa1f33
(cherry picked from commit 55ffc43a596c0547986322ffe679d37daa921be7)
Use the null coalescing operator to check whether the array key exists
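For illustration, PHP's `$options[$key] ?? $default` corresponds to Python's dict.get (the helper name here is hypothetical):

```python
def get_option(options, key, default=None):
    # PHP: $options[$key] ?? $default -- no "undefined index"
    # notice when the key is absent.
    return options.get(key, default)
```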
Change-Id: Icf24e208a487bafe3d1983536870aac19cfc4b5e
(cherry picked from commit f0ad539b4e613216639b04386f56d6bb1b656d14)
In a future patch (Ia690f10ccbf4f60f9febca98915155c2df58f0d4) we will
use native JsonCodec serialization of the TOCData object. But first
we will add forward-compatibility code to deserialize TOCData, so that
if we need to rollback the future release we won't break the parser
cache.
New serialization test cases added, as per
https://www.mediawiki.org/wiki/Manual:Parser_cache/Serialization_compatibility
Bug: T327439
Change-Id: I4652b2709afd33ff5e469e36960391e993bc7bae
(cherry picked from commit bf61f6bc0eaf5013167e4b80860b0a610559c661)
Something changed in WMF CI config that causes this warning to be
emitted, perhaps T397429#11035011.
Change-Id: Ib477c1812c48a96b252a4f687e09f1ca5c30c2f3
(cherry picked from commit 4b5fc06c5e34b0a9332c9228ac3c28fd0f750c6c)
WHAT:
- Return the GTID style from `MySQLPrimaryPos::parseGTID`, which already identifies the style during parse.
- Rely on `parseGTID`'s detection in `MySQLPrimaryPos::init`.
WHY:
- When GTID-based replication is enabled and MySQL is used for the database, MediaWiki misidentifies the engine as MariaDB.
- This causes position waits to fail with "No active GTIDs in $1 share a domain with those in $1".
- This is a regression caused by I232274feb12c0ce4826be2c46a35315b425f6673:
- Before that change, parseGTID returned the domain ID as an integer for MariaDB and as a string for MySQL.
- The `init` method used this fact (`is_int`) when determining the GTID style.
- After the change, parseGTID always returns the domain ID as a string.
- The check in `init` was incorrectly updated to expect a string for MariaDB, but did not account for MySQL's source ID also being a string.
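The two GTID shapes and the style detection can be sketched in Python (simplified; real GTID sets may contain several positions and interval ranges):

```python
import re

STYLE_MARIADB = "mariadb"
STYLE_MYSQL = "mysql"

def parse_gtid(gtid):
    """Return (style, domain_or_source_id, sequence) or None.

    MariaDB GTIDs look like 'domain-server-sequence' (e.g. '0-1-100');
    MySQL GTIDs look like 'source_uuid:txn_id'. Both IDs are strings,
    so the style must come from the parse itself, not an is_int check.
    """
    m = re.fullmatch(r"(\d+)-(\d+)-(\d+)", gtid)
    if m:
        return STYLE_MARIADB, m.group(1), int(m.group(3))
    m = re.fullmatch(
        r"([0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}):(\d+)",
        gtid)
    if m:
        return STYLE_MYSQL, m.group(1), int(m.group(2))
    return None
```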
Change-Id: I4951e7967a45bae10d26b06ee236a55279fa8fb9
(cherry picked from commit 54154c87c084543fd659f24ae6b4c276184259cc)
This regression was introduced in I6670a58fe1.
Bug: T399793
Co-Authored-By: Jonathan Lee <cookmeplox@weirdgloop.org>
Change-Id: I26b61e2a08b51aaca5d2740dcaf20b509be380eb
(cherry picked from commit fa05279424e0688a7b34f1186050dca1e2ec5f4b)
We were getting PREG_BACKTRACK_LIMIT_ERROR in production from certain
inputs to Parser::extractBody(). Use possessive matchers and a
once-only subpattern to ensure that we don't backtrack unnecessarily
once a <body> tag is found.
Bug: T399064
Follows-Up: I59abad3a58ccd6edc6517b13a56d8253ba0e0928
Change-Id: If6860ca268236cf428d574f6bb21c2070f5aa6a3
(cherry picked from commit 2c56237235a5603a1757982f02d3e542bdafaf06)
Add a check for regex failure in the extractBody method and throw
a RuntimeException with the error details if preg_replace returns null.
Bug: T388729
Change-Id: I59abad3a58ccd6edc6517b13a56d8253ba0e0928
(cherry picked from commit 3b297d37dd368d1d66f7afd78851bbb7a47cab0b)
Add missing namespace prefix to the constant
Change-Id: I3ba37863b1e4de9d64d1c09045c0e5b1da678425
(cherry picked from commit ec02426638f0732a345bd8376f55819ec777741a)
A non-existing field may return null when trying to drop the default.
Avoid a fatal error in this situation.
There is no real issue yet, but it is good coding practice to check
for null.
Change-Id: I1041f24361febb52fd7fb20c42348b712dd70fe9
* Fix test failures
* Cherry-pick message cache change I957b6fb2bc0d9d4b1aae6e
* Cherry-pick part of I638d6d6d23f9624ba1dff0f4fcc to change cache from
static to non-static.
Change-Id: I77a2facf9923d38269538e48c79365fa117af9af
Follows-Up: Id5462b942f5e916c2f1dc725739615d54a1070de
Follows-Up: I5471fe615d222b936c6668bf3089dd8b5931cc75
Follows-Up: I7bbd6ae36a11840ed6b4620b5d07fa5158ff139e
CVE-2025-6927
In BlockListPager, restore the bl_deleted=0 condition removed in the
previous commit. Add tests.
Bug: T397595
Change-Id: I5471fe615d222b936c6668bf3089dd8b5931cc75