Save the backtrace when locking, so that if some code tries to lock again,
we can print the lock owner's backtrace for easier debugging.
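A minimal sketch of the idea in isolation (class and member names here
are illustrative, not the actual patch):

    class ExampleLock {
        private $locked = false;
        private $ownerTrace = null;

        public function lock() {
            if ( $this->locked ) {
                // Second lock attempt: show who took the lock first
                trigger_error(
                    "Lock already held, owner's backtrace:\n" . $this->ownerTrace
                );
                return false;
            }
            $this->locked = true;
            // Remember the acquirer's backtrace for later debugging
            $this->ownerTrace = ( new Exception() )->getTraceAsString();
            return true;
        }
    }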
Change-Id: I6e352b4aa5e7cb35825a66592f6c066d9e8b95c9
This was the only addModules() call ever to be inside Parser.
Introduced in a54ef1a203. Prior to that, mediawiki.toc had always been loaded
by OutputPage (via mediawiki.util; and before that, via wikibits).
This patch restores that, and also fixes T130632 by making OutputPage get
it from the Skin instead of hardcoding it in addParserOutput().
* Remove deprecated method OutputPage::enableTOC().
* Move the mEnableTOC check to addParserOutputText().
Bug: T130632
Change-Id: Iaad84d241a4c4348c712ac1087a664b8c9c46da4
This will allow CSS to target just the parser output, without also
accidentally targeting the edit form, diff tables, and so on.
Bug: T37247
Change-Id: If4eb5bf71f94fa366ec4eddb6964e8f4df6b824a
Depends-On: I330c6aa4aaee045614b1801ed34bc9e03be69650
Depends-On: I52a518fa44e017841fe78474012cd69823e0a41d
Move link normalization directly into the addExternalLink() method,
since it always needs to be done; keeping it separate just invites
people to forget to normalize a link.
Additionally, links weren't properly registered for <gallery>. This
went largely unnoticed, since the call to recursiveTagParse() would
register free links, but it didn't work with, for example,
protocol-relative links.
Issue originally reported by MZMcBride.
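A rough sketch of the shape of the fix (simplified; details of the real
method differ):

    // Normalize at the single point of registration, so no caller
    // can forget to do it first.
    public function addExternalLink( $url ) {
        $url = Parser::normalizeLinkUrl( $url );
        $this->mExternalLinks[$url] = 1;
    }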
Bug: T48143
Change-Id: I557fb3b433ef9d618097b6ba4eacc6bada250ca2
* This was introduced in 4d3446a8e3 when galleries were tables.
However, 05579cf0e6 switched galleries to <ul>s but missed updating
the sanitization.
* As an example, the test shows that the summary attribute (valid on
tables, but not on lists) is currently wrongly permitted.
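The fix, in essence (a sketch; surrounding code omitted):

    // Galleries render as <ul> now, so validate attributes against
    // 'ul', not 'table'; table-only attributes like summary are then
    // rejected.
    $attribs = Sanitizer::validateTagAttributes( $attribs, 'ul' );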
Change-Id: I8c52477dc65499d0c8a1ee5cc661a5f9ae78cc07
U+0000 is not allowed in HTML5, and there's no reason to allow it in
wikitext. It simplifies our code if we can just strip it at the start.
Strip it in the PST as well so it doesn't sneak into our database either.
Tweaked the EXT_LINK URLs to account for the fact that invalid characters
get transformed into U+FFFD when using Preprocessor_DOM. See 73649741ed
(r65967) for context on that change.
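The stripping itself is trivial (a sketch, not the literal patch):

    // U+0000 is a single NUL byte in UTF-8; drop it before any parsing
    $text = str_replace( "\x00", '', $text );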
Bug: T159174
Change-Id: I3f67e92b61aacc87a40c3662085c84d1dac08bfb
I was bored. What? Don't look at me that way.
I mostly targeted mixed tabs and spaces, but others were not spared.
Note that some of the whitespace changes are inside HTML output,
extended regexps or SQL snippets.
Change-Id: Ie206cc946459f6befcfc2d520e35ad3ea3c0f1e0
$index is definitely not an int here; see the big switch( $index )
statement below. It switches on strings, not numbers. Also note that
it is lowercase; one might expect it to be uppercase, as that is how
magic words are written in wikitext.
Bug: T96633
Change-Id: Iea93c3796fdee4ed7abbb7608e89b627ca95aead
Use of &$this doesn't work in PHP 7.1. For callbacks to functions like
array_map() it's completely unnecessary, while for hooks we still need
to pass a reference, so we copy $this into a local variable.
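For illustration (the hook and method names here are hypothetical):

    // Callbacks never needed the reference:
    $escaped = array_map( [ $this, 'escapeItem' ], $items );

    // Hooks declaring a reference parameter still need one, so copy
    // $this into a local variable and pass that by reference:
    $parser = $this;
    Hooks::run( 'ExampleHook', [ &$parser ] );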
Bug: T153505
Change-Id: I8bbb26e248cd6f213fd0e7460d6d6935a3f9e468
The leading spaces on the link only cause us problems, such as for the
$noforce check 20 lines later.
Bug: T129218
Change-Id: I93a8da1f73b38fa3da362f8f27479b3039ed3f13
This also protects naked external links, which
LanguageConverter::markNoConversion internally wraps in `-{R|...}-`.
Originally found in failed tests in I7fa2d85d6.
Bug: T54190
Change-Id: I9b099273203482ffb570a5654d8ba50c833e526d
A delimiter-protected version of explode() is factored out as
`StringUtils::delimiterExplode`, since it will be used in follow-up
patches in this series. The `delimiterExplode` implementation creates
an intermediate array of the exploded results, which is reasonable as
the number of image options is small; but since an Iterator is
returned the implementation can be upgraded in the future (at the cost
of additional complexity) to avoid this. The additional code in that
case would be similar to ExplodeIterator.
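Usage sketch (splitting image options on '|' while protecting
-{ ... }- constructs):

    $parts = StringUtils::delimiterExplode( '-{', '}-', '|', $text );
    foreach ( $parts as $part ) {
        // $parts is an Iterator, currently backed by a plain array
    }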
Bug: T146305
Change-Id: I1327685e9e8c07ef476dceaa6f6dae4ba40989ef
Changes:
- uses int instead of number as param and return value type
- uses stdClass instead of stdObject
- fixes the ResourceLoaderClientHtml constructor's $target param type:
  it is string|null, not an array (previously misspelled as "aray")
- changes the type of references to the XML parser in the XMP lib to
  resource instead of the nonexistent XMLParser
Change-Id: I98c363ebc6658d1f4dcabad97a9a92f3fcd7ea8c
This is a pure documentation change. It mostly removes empty lines from
comments (and entirely empty comments), adds a few missing documentation
blocks, and fixes a minor mistake. I hope it's OK to have this in one
patch; I can split it if needed, please tell me.
Change-Id: I9668338602ac77b903ab6b02ff56bd52743c37c4
This change resulted in unreasonable feature loss (the human-readable
limit report was gone). Three months and multiple follow-ups later,
the functionality is still not completely restored. Given the lack
of response from the original author, I think it is time to revert
and reconsider, especially since the 1.28 release is soon.
A machine-readable limit report would be a very useful feature,
but not at the cost of losing the human-readable one.
This reverts the following commits:
* Move NewPP limit report HTML comments to JS variables
b7c4c8717f
* Only pretty-print the parser report JS vars
28adc4d7ee
* Show wgPageParseReport on page previews too
1255654ed5
* Re-add human readable parser limit report
0051f108b9
* Restore hooks.txt for ParserLimitReportFormat
4663e7a737
Resolved minor merge conflicts in OutputPage (with 80e5b160)
and release notes.
Bug: T110763
Bug: T142210
Change-Id: Id88c8066fae3f369e8977b4b7488f67071bdeeb7
Use HTTPS instead of HTTP where the HTTP link is a redirect to the
HTTPS link. Also update some broken links.
Change-Id: Ic3a5eac910d098ed5c2a21e9f47c9b6ee06b2643
This adds 3 tracking categories, one for each type of magic link (ISBN,
RFC, PMID). This will allow wikis to gauge usage and identify pages that
need migrating.
These will only show up if the respective magic links are enabled via
$wgEnableMagicLinks.
Change-Id: Ic483f0c493112bf6373e1b37961e1241c20c3582
This means we can't check whether a parser limit was exceeded while
trying to expand the content of a tag, but that's probably not a huge
loss. It'll just result in potentially strange output rather than an
exception.
Bug: T149622
Change-Id: I7910dfa0f61b1cc9168c7ed1498b2bda27c47f0e
The most critical case is a bad marker name, since that causes
StripState to throw an exception as of I798d31af. But we may as well
check the other expand calls in this function too, to avoid outputting
broken wikitext.
Bug: T136401
Change-Id: I1cb353d74f9a46168055e1abeb22cf569fe9354a
Apparently it is possible for Parser::mParserOptions
to not be set in some cases. I'll try again later.
This reverts commit bda74bff6e.
Bug: T146433
Change-Id: Idb6d1b20995d5f86b712abb386ab987356c4f560
wfEscapeWikiText() used $wgEnableMagicLinks, but that could result in
an inconsistency when something modifies the magic-link-related
ParserOptions.
In general, most uses of wfEscapeWikiText() are in parser functions or
when message parsing, so the Parser is a logical place for it.
A future patch will make it easy to use Parser::escapeWikitext() in
message parameters.
Change-Id: I0fd4d5c135541971b1384a20328f1302b03d715f
The magic link functionality is "old backwards-compatibility baggage"
that we probably want to get rid of eventually. The first step to doing
so would be making it configurable and allowing it to be turned off on
wikis that don't use it.
This adds each of the 3 magic link types as individual parser options,
which can be controlled by the $wgEnableMagicLinks setting.
Additionally, wfEscapeWikiText() was updated to only escape enabled
magic link types.
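For example, a wiki could keep ISBN links while disabling the other two
(values here are illustrative):

    $wgEnableMagicLinks = [
        'ISBN' => true,
        'RFC' => false,
        'PMID' => false,
    ];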
Bug: T47942
Change-Id: If63965f31d17da4b864510146e0018da1cae188c
Since git's file rename detection failed in several cases with an
all-in-one commit, I split the renames out into their own commit to
make review easier. Some changes here won't make complete sense without
the following commit.
* Moved TestsAutoLoader to tests/common/. It will be joined by a friend.
* Renamed ParserTest to ParserTestRunner, since the former name was
overly generic.
* Renamed TestFileIterator to TestFileReader. Please see the subsequent
commit for rationale.
* Moved parserTests.php to tests/parser/. It was the only file left in
tests/, and it should have been moved to tests/parser years ago,
analogous to phpunit.php.
* Renamed NewParserTest to ParserIntegrationTest. This was a tricky one:
apparently the name has to end in "Test" or else the structure test
will fail. Analogous to ParserMethodsTest etc. Rationale: it's not
new anymore.
* Renamed MediaWikiParserTest to ParserTestTopLevelSuite and moved it to
the suites directory. A more descriptive name. Being in suites/
shields it from StructureTests, and is correct anyway.
Change-Id: Iddc6eaf815fdd64b3addb8570b4b6303ab99d634
This is more consistent with LoadBalancer terminology, more modern, and
inclusive of master/master MySQL, NDB cluster, and MariaDB Galera
cluster. The old constant remains as an alias.
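Typical call sites just swap the constant (DB_SLAVE keeps working as an
alias):

    $dbr = wfGetDB( DB_REPLICA ); // was: wfGetDB( DB_SLAVE )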
Change-Id: I0b37299ecb439cc446ffbe8c341365d1eef45849
Inverse flame graphs show revision lookups as one of the
big three queries (Revision, LinkCache, and getTitleInfo of
ResourceLoaderWikiModule).
This works via a new Revision::newKnownCurrent() method, which
needs both page and revision IDs from the DB (to avoid invalidation)
and fetches the user name and rev_deleted only if needed (again
to avoid invalidation). The Parser does not care about those fields
anyway in the template path.
Also improved cross-wiki support a bit, and fixed up some
docs and IDEA errors.
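Usage sketch (assuming the caller already has both IDs, e.g. from a
cached page row):

    $dbr = wfGetDB( DB_REPLICA );
    // Safe to cache without explicit invalidation: both IDs are known
    $rev = Revision::newKnownCurrent( $dbr, $pageId, $revId );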
Change-Id: Icad602dba5de18c7758b77fd23b0a450ff21d09f
For simple pages that transclude special pages, like user pages
including Special:PrefixIndex, the TTL is allowed to drop to 15
seconds if the page parses fast enough.
Bug: T139893
Change-Id: If41885ded648d68352fe3d06336d98aa0ab53966
The code that normalizes line endings ("\r\n" and "\r" to "\n") and
trims trailing whitespace is buried in Parser::preSaveTransform(), and
was duplicated into TextContent in 96b6afb31d, since non-wikitext
content models should still normalize line endings.
This splits the duplicated code out into
TextContent::normalizeLineEndings() and uses it in the Parser.
Additionally, expands the documentation of
TextContent::preSaveTransform() to note that subclasses should make
sure they normalize line endings during the PST stage.
Also removes a useless rtrim() call from WikitextContent that did
nothing.
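Roughly, the shared helper amounts to (sketch):

    public static function normalizeLineEndings( $text ) {
        // Convert CRLF and bare CR to LF, and trim trailing whitespace
        return str_replace( [ "\r\n", "\r" ], "\n", rtrim( $text ) );
    }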
Change-Id: I9094c671d4bbd23d75436f8f1d682d6dd6e6d2fc
rawurldecode was being run on unclosed internal links,
which could allow an attacker to insert arbitrary
HTML into the page.
See also related: r13302
Bug: T137264
Change-Id: I4e112a9e918df9fe78b62c311939239b483a21f5
This does the same normalization of newlines that
Parser::preSaveTransform() does. This should be appropriate for any text
content type, especially considering that EditPage uses
WebRequest::getText(), which does a less strict version of this same
transformation.
This also cleans up the code for doing that newline replacement
to be a bit less verbose.
Bug: T142805
Change-Id: I462afcda502f031a8b0360d982ce2398a0383a96