Two micro-optimizations are done in this patch:
1. We know exactly how these placeholders are built in the makeHolder()
method. In »<!--IWLINK'" 1-->« it's guaranteed to be a single number
and in »<!--LINK'" 1:2-->« it's two numbers.
The most extreme synthetic micro-benchmark I ran cuts the runtime of
these regular expressions down to about 25% of what it was. It won't
make much of a difference in real-world scenarios but is still worth
it, I believe. It also makes the code more specific and less
confusing (see below).
2. We don't need to use the full string »<!--LINK'" 1:2-->« as the
array key when the only thing that matters is the part »1:2« (see
the sketch below). Note the same is done just a few lines below in
the replaceInterwiki() method.
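A sketch of both ideas combined; the patterns below are illustrative,
not the exact code:
  // Before: generic pattern, with the full placeholder as the array key
  preg_match_all( '/<!--LINK\'" (.*?)-->/', $text, $m );
  // After: »1:2« is known to be exactly two numbers, and only that
  // part is needed as the array key (»<!--IWLINK'" 1-->« is a single
  // number, analogously)
  preg_match_all( '/<!--LINK\'" ([0-9]+:[0-9]+)-->/', $text, $m );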
This code does have outstanding test coverage via all the parser tests,
I believe. Any change here that doesn't make a test fail should be safe.
Note the unit tests were written many years later via I2c12cc7,
using "dummy" strings and such instead of the expected numeric
namespace and link IDs. Most of this was already fixed via previous
patches. The last mistake addressed in this patch is that
getPrefixedDBkey() is supposed to return a title; it can't contain
one of these placeholders.
Follow-Up: I2c12cc76a9bf01eb527db3ea038e4adc59446cac
Change-Id: Ie994059092df8861ddb97c098acd082698d45c53
This follows Message. It is approved as part of RFC T166010.
Also namespace it, but doing it properly with PSR-4 would require
namespacing every class under languages/, and that will take some
time.
Bug: T321882
Change-Id: I195cf4c67bd51410556c2dd1e33cc9c1033d5d18
Parser::nextLinkID cannot return a string. It returns a positive
integer.
Note a very similar mistake was already fixed before via I7e71ffc.
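The corrected signature, as a sketch (the method body is
illustrative):
  public function nextLinkID(): int {
      return $this->mLinkID++;
  }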
Change-Id: Ifce71d0f4db31787bf0eb84e621cfdeb07c674ef
* Most of the files were generated from the validate* script.
* Post-processing of these generated files to fix problems:
- Some of the files were binary-edited via "vi -b" to fix some
issues with bad property names used in the prior step.
The 1.36, 1.38, and 1.39 files were all fixed up this way.
- In addition, the 1.36 file had bad data (not sure if the wrong
PHP version was used), but I fixed this by splicing in data
from the 1.38 file to revert incorrect changes to the "Categories"
and "IndexPolicy" properties.
- The 1.35 data file was binary-edited by splicing in data from
the now-fixed 1.36 version.
Change-Id: I4e22b94ce30c2ad9b1f544c15e1c3cd0dd0bce6b
* Generate data files for 1.40 only, since the new formats first
showed up in 1.40 and won't be present in the parser cache for
older MW versions.
Change-Id: I6f297e3091ec2faab7c2203c138800551b01e32a
Allow the causeAction that triggers page rendering to be passed
through to ParserCache, so we can count what causes writes to the
cache.
Change-Id: I6ad8e105a3ce457e3ab4f85cd154f47a32085e0d
Having pig-latin enabled by default in dev environments is convenient
for manual testing. More importantly, it will allow us to write
end-to-end tests for variant conversion.
Depends-On: I9dc2f743ac487b0f7cfb667150c0f6950d5e7fce
Depends-On: I85b66c85be3959d48a048733af17197bc4cf70af
Change-Id: Ia80ad33cbf5e311fa8b84bd765a8df8d156f4c38
Make parser test discovery in core work the same way as it does in
extensions: any file ending in .txt under tests/parser is run
as a parser test file.
This search is recursive, which is motivation to also move some
unrelated files under tests/parser/preprocess over to
tests/phpunit/data/preprocess where they belong; they are used
by tests/phpunit/includes/parser/PreprocessorTest.php and are
unrelated to the parser test infrastructure.
Change-Id: I8c84b4b853e1309929dceb700aab1e79a598d8ab
The anchor property comes from Sanitizer::escapeIdForAttribute() and
should be used if you want to (e.g.) look up an element by ID using
document.getElementById(). The linkAnchor property comes from
Sanitizer::escapeIdForLink() and contains additional escaping
appropriate for use in a URL fragment; it should be used (e.g.) if
you are creating the href attribute of an <a> tag.
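A minimal sketch of the intended usage ($headingText and the
surrounding code are illustrative):
  $id = Sanitizer::escapeIdForAttribute( $headingText );
  $frag = Sanitizer::escapeIdForLink( $headingText );
  $html = "<h2 id=\"$id\">…</h2>";    // findable via getElementById()
  $link = "<a href=\"#$frag\">…</a>"; // safe as a URL fragment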
Bug: T315222
Change-Id: Icecf9640a62117c2729dca04af343fb1ddaaf8f8
* Lua modules have been written to inspect nowiki strip state markers
and extract nowiki content to further process it. Callers might have
used nowikis in arguments for any number of reasons, including needing
to have the argument be treated as raw text instead of wikitext.
While we might add first-class typing features to wikitext, templates,
extensions, and the like in the future which would let Parsoid process
template arguments based on type info (rather than as wikitext always),
we need a solution now to enable modules to work properly with Parsoid.
* The core issue is the decoupled model used by Parsoid where
transclusions are preprocessed before further processing. Since
nowikis cannot be processed and stripped during preprocessing,
Lua modules don't have access to nowiki strip markers in this model.
* In this patch, we change extension tag processing for nowikis.
When generating HTML, nowikis are replaced with a 'nowiki' strip
marker holding the nowiki's "innerXML" (only the tag contents).
During preprocessing, instead of adding a 'general' strip marker
with the "outerXML" (tag contents and the tag wrapper), we now add
a 'nowiki' strip marker with its "outerXML".
* Since Parsoid (and any clients using the preprocessed output) will
unstrip all strip markers, the shift from a general to nowiki
strip marker won't make a difference.
* To support unstrip usage by Scribunto and Lua modules, this patch
adds new functionality to StripState to replace the (preprocessing)
nowiki strip markers with whatever its users want. For example,
Scribunto could pass in a callback that replaces these with the
"innerXML" by stripping out the tag wrapper (see the sketch after
this list).
* Hat tip to Tim Starling for recommending this strategy.
* Updated strip state tests.
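A sketch of how a caller like Scribunto could use this; the method
name replaceNoWikis() is an assumption, not necessarily the exact
API:
  $text = $stripState->replaceNoWikis( $text,
      static function ( $outerXml ) {
          // Replace the marker with the "innerXML" by stripping the
          // <nowiki> tag wrapper from the "outerXML".
          return preg_replace(
              '!^<nowiki[^>]*>(.*)</nowiki>$!s', '$1', $outerXml );
      }
  );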
Bug: T272507
Bug: T299103
Depends-On: Id6ea611549e98893f53094116a3851e9c42b8dc8
Change-Id: Ied0295feab06027a8df885b3215435e596f0353b
Pages outside of the main namespace now have the following markup in
their <h1> page titles, using 'Talk:Hello' as an example:
<h1>
<span class="mw-page-title-namespace">Talk</span>
<span class="mw-page-title-separator">:</span>
<span class="mw-page-title-main">Hello</span>
</h1>
(line breaks and spaces added for readability)
Pages in the main namespace only have the last part, e.g. for 'Hello':
<h1>
<span class="mw-page-title-main">Hello</span>
</h1>
The change is motivated by a desire to style the titles differently on
talk pages in the DiscussionTools extension (T313636), but it could
also be used for other things:
* Language-specific tweaks (e.g. adding typographically-correct spaces
around the colon separator: T249149, or replacing it with a
different character: T36295)
* Site-specific tweaks (e.g. de-emphasize or emphasize specific
namespaces like 'Draft': T62973 / T236215)
The markup is also added to automatically language-converted titles.
It is not added when the title is overridden using the wikitext
`{{DISPLAYTITLE:…}}` or `-{T|…}-` forms. I think this is a small
limitation, as those forms are mostly used in the main namespace,
where the extra markup isn't very helpful anyway. This may be
improved in the future. As a workaround, users could also just add
the same HTML markup to their wikitext (as those forms accept it).
It is also not added when the title is overridden by an extension
like Translate. Maybe we'll have a better API before anyone wants
to do that. If not, one could un-mark Parser::formatPageTitle()
as @internal and use that method to add the markup themselves.
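For illustration, assuming the method has a signature like
Parser::formatPageTitle( $nsText, $nsSeparator, $mainText ) (an
assumption; only the method name appears above):
  $displayTitle = Parser::formatPageTitle( 'Talk', ':', 'Hello' );
  // produces the three <span> elements shown in the first example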
Bug: T306440
Change-Id: I62b17ef22de3606d736e6c261e542a34b58b5a05
Split out from the I44045b3b9e78e change.
This is consistent with what Parsoid will use for the TOC marker.
Bug: T287767
Bug: T270199
Bug: T311502
Depends-On: I1f607cf1ef1b61fb4d2e1880de756fb94d5a6b22
Change-Id: Ie63eed07b9bca1bfa07d4c256aba3728cedd8f93
Split out from the I44045b3b9e78e and Ie63eed07b9bca changes. We
first add code to handle the new tag as well as the old tag in
ParserCache contents. This will allow us to safely rollback if needed
when deploying the follow-on patch which actually changes the tag
used.
Bug: T287767
Bug: T270199
Bug: T311502
Change-Id: Ib3e5e010b9f5ca2c4ea7c4fe28080170b6a88812
createMock() does the same, but is much easier to read.
A small difference is that some of the replacements made in this
patch didn't use disableOriginalConstructor() before. In case this
was relevant, we should see the respective tests fail. If not, we
can save some CPU cycles and skip these constructors.
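For example, a typical replacement made by this patch (the mocked
class is illustrative):
  // Before:
  $parser = $this->getMockBuilder( Parser::class )
      ->disableOriginalConstructor()
      ->getMock();
  // After (createMock() also disables the original constructor):
  $parser = $this->createMock( Parser::class );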
Change-Id: Ib98fb06e0fe753b7a53cb087a47e1159515a8ad5
This is a quick find & replace of calls to the deprecated method
ParserOptions::newCanonical() where the context is the string literal
'canonical'. These can be safely replaced by calling newFromAnon().
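The replacement pattern, for illustration:
  // Before:
  $parserOptions = ParserOptions::newCanonical( 'canonical' );
  // After:
  $parserOptions = ParserOptions::newFromAnon();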
Change-Id: If7bb68459b11e0c5f5de188f10fdae85ad1a78bf
When JSON support was introduced into ParserCache in 1.36, it was
controlled by a feature flag, $wgParserCacheUseJson. The feature flag
was "born deprecated" in 1.36. It can now be removed.
This means that ParserCache will always store entries as JSON.
Support for reading old non-JSON entries remains intact.
This is needed when updating wikis from a version older than 1.36
to the current version.
Change-Id: Id04e42bfb458d98414bac50e0d6c505e8878e5c0
New trait for the PageBundle class to serialize a PageBundle object
to JSON before stashing and deserialize it after unstashing.
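A minimal sketch of what such a trait could look like; the method
names and the PageBundle construction are assumptions, not the
actual API:
  trait PageBundleJsonTrait {
      // Serialize before stashing.
      public function encodeJson( PageBundle $pb ): string {
          return json_encode( get_object_vars( $pb ) );
      }
      // Deserialize after unstashing.
      public function decodeJson( string $json ): PageBundle {
          $data = json_decode( $json, true );
          return new PageBundle( ...array_values( $data ) );
      }
  }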
Change-Id: I486fab5b3d01bcef2b535af579cd9672403b2102
Previously:
* It was unclear that generate-html is an optional optimization
* Most of MediaWiki core was doing $parserOutput->setText('') if
HTML wasn't generated. However, this is wrong: it will cause
$parserOutput->hasText() to return true and can also cause
cache pollution if a content handler both does that and supports
the parser cache (like MassMessage; see T299896)
* The default value of mText in the constructor was '', and most
of the time MW used that default. This doesn't seem right. If
setText() is never called, the ParserOutput should not be considered
to have text
* It was impossible to set mText to null, as $parserOutput->setText(null)
was a no-op. Docs implied you were supposed to do this, so it was very
confusing.
This patch clarifies the docs, changes the default value of
ParserOutput::$mText from '' to null, and makes
$parserOutput->setText(null) do what you expect it to. The last two
are arguably breaking changes, although the previous behaviours were
unexpected, mostly undocumented, and, based on a code search, do not
appear to be relied on.
It seems like the main reason this only broke MassMessage is most
content handlers either don't support generateHtml, or they don't
support parser cache.
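A sketch of the behaviour after this patch:
  $po = new ParserOutput(); // $mText now defaults to null
  $po->hasText();           // false
  $po->setText( null );     // no longer a no-op: clears any text
  $po->hasText();           // still false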
Bug: T306591
Change-Id: I49cdf21411c6b02ac9a221a13393bebe17c7871e
Depends-On: I68ad491735b2df13951399312a4f9c37b63a08fa
* Allow EditPage to create a user on page save. This has to be enabled
in config and then activated by the UI/API caller.
* Add an autocreate source for temporary users.
* Allow editing by anonymous users via automatic account creation when
$wgGroupPermissions['*']['edit'] = false. On an edit GET request, use
an unsaved placeholder user to stand in for post-create permissions.
* On preview or aborted save, the username to be created is stashed in a
session and restored on subsequent requests.
* On a (likely) successful page save, create the account.
* Put regular non-temporary users in a "named" group so that they can be
given additional permissions.
* Use a different "~~~" signature for temporary users.
* Show account creation warnings on edit and preview.
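A sketch of the relevant configuration; the $wgAutoCreateTempUser
structure is an assumption based on the description above:
  $wgGroupPermissions['*']['edit'] = false; // anons can't edit directly
  $wgAutoCreateTempUser = [
      'enabled' => true, // auto-create a temporary account on page save
  ];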
Change-Id: I67b23abf73cc371280bfb2b6c43b3ce0e077bfe5
All revision-related classes have been namespaced MediaWiki\Revision
instead of MediaWiki\Storage since 1.32. The old namespaced class
names are deprecated and only kept for backwards compatibility.
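For example (RevisionRecord used for illustration):
  // Deprecated alias:
  use MediaWiki\Storage\RevisionRecord;
  // Replacement:
  use MediaWiki\Revision\RevisionRecord;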
Bug: T305784
Change-Id: I34e492d84d9fc4bc78481667202716d93b3c43cb
Parser is using the service container to get a SignatureValidator
because, as noted in Gerrit comments on the relevant commit, there is a
circular dependency Parser -> SignatureValidatorFactory -> Parser.
So, have SignatureValidatorFactory::__construct() take a closure which
returns a Parser, instead of an actual Parser or ParserFactory.
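A sketch of the resulting wiring (the service accessors are
illustrative):
  return new SignatureValidatorFactory(
      // The closure defers Parser construction, breaking the
      // Parser -> SignatureValidatorFactory -> Parser cycle:
      static function () use ( $services ): Parser {
          return $services->getParserFactory()->create();
      }
  );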
Change-Id: I7bf4660f84ec8c8fb1d5b3b8581fe5d82bc3156e
All revision-related classes have been namespaced MediaWiki\Revision
instead of MediaWiki\Storage since 1.32. The old namespaced class
names are deprecated and only kept for backwards compatibility.
Bug: T305784
Change-Id: Ia0030814ce2176d06e2898acffe533d31633fccb
We changed this to operate on an int internally in I92daeb0f7be8a0.
Let's cast it back to a string for the API in order to prevent a
breaking change that is not really necessary.
Bug: T304171
Change-Id: I5f5a9203b4dd085cb5defba72c6650532bc9e8d1
PHP internal functions like floor/round/ceil are documented to
return float, but in most cases the result is used as an int, so
casts were added.
Found by Phan's strict checks.
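For example ($offset and $chunkSize are illustrative):
  $chunk = (int)floor( $offset / $chunkSize ); // floor() returns float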
Change-Id: I92daeb0f7be8a0566fd9258f66ed3aced9a7b792
Rename Sanitizer::removeHTMLtags() to an @internal method named
::internalRemoveHtmlTags() so that we can deprecate external use.
Code search:
https://codesearch.wmcloud.org/deployed/?q=removeHTMLtags&i=nope&files=&excludeFiles=&repos=
Followup-To: Ic864c01471c292f11799c4fbdac4d7d30b8bc50f
Depends-On: Iaca83ed06e9c61d8366579cd2283cba653c82319
Depends-On: I1963bfe9a99198ea02ca482a5769467ce806cd58
Depends-On: I83923d8b38d33f3638cd53958dd10f257ec21f7c
Depends-On: I018b34bb5f6e113056da9b04cc72d4318422adce
Change-Id: I202826f8b27519f7be89643e24eda47a6e3fc9f6
The existing Sanitizer::removeHTMLtags() method, in addition to having
dodgy capitalization, uses regular expressions to parse the HTML.
That produces corner cases like T298401 and T67747 and is not guaranteed
to yield balanced or well-formed HTML.
Instead, introduce and use a new Sanitizer::removeSomeTags() method
which is guaranteed to always return balanced and well-formed HTML.
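For example (input illustrative; the exact output depends on the
allowed-tag configuration):
  Sanitizer::removeHTMLtags( '<b>unclosed' );  // may yield unbalanced HTML
  Sanitizer::removeSomeTags( '<b>unclosed' );  // e.g. '<b>unclosed</b>'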
Note that Sanitizer::removeHTMLtags()/::removeSomeTags() take a callback
argument which (as far as I can tell) is never used outside core. Mark
that argument as @internal, and clean up the version used by
::removeSomeTags().
Use the new ::removeSomeTags() method in the two places where
DISPLAYTITLE is handled (following up on T67747). The use by the
legacy parser is more difficult to replace (and would have a
performance cost), so leave the old ::removeHTMLtags() method in
place for that call site for now: when the legacy parser is replaced
by Parsoid, the need for the old ::removeHTMLtags() will go away. In
a follow-up patch we'll rename ::removeHTMLtags() and mark it
@internal so that we can deprecate it for external use.
Some benchmarking code was added. On my machine, with PHP 7.4, the
new method tidies short 30-character title strings at a rate of about
6764/s, while the tidy-based method being replaced here managed
6384/s. Sanitizer::removeHTMLtags() blazes through short strings 20x
faster (120,915/s); some of this difference is due to the set-up cost
of creating the tag whitelist and the Remex pipeline, so further
optimizations could doubtless be made if Sanitizer::removeSomeTags()
is more widely used.
Bug: T299722
Bug: T67747
Change-Id: Ic864c01471c292f11799c4fbdac4d7d30b8bc50f