This is more consistent with LoadBalancer, modern, and inclusive
of master/master MySQL, NDB Cluster, and MariaDB Galera Cluster.
The old constant is an alias now.
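For instance, a trivial usage sketch (wfGetDB() is the usual
accessor; both spellings now resolve to the same value):

    $dbr = wfGetDB( DB_REPLICA ); // preferred
    $dbr = wfGetDB( DB_SLAVE );   // still works; DB_SLAVE aliases DB_REPLICA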
Change-Id: I0b37299ecb439cc446ffbe8c341365d1eef45849
Inverse flame graphs show revision lookups as one of the
big three queries (Revision, LinkCache, getTitleInfo of
ResourceLoaderWikiModule).
This works via a new Revision::newKnownCurrent() method, which
needs both the page and revision IDs from the DB (to avoid
invalidation) and fetches the user name and rev_deleted only if
needed (again to avoid invalidation). The Parser does not care
about those fields anyway in the template path.
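A rough usage sketch, assuming the signature takes a DB handle plus
the page and rev IDs per the description above ($dbr, $pageId and
$revId are illustrative):

    $rev = Revision::newKnownCurrent( $dbr, $pageId, $revId );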
Also improved cross-wiki support a bit, and fixed up some
docs and IDEA errors.
Change-Id: Icad602dba5de18c7758b77fd23b0a450ff21d09f
For simple pages that transclude special pages, like user pages
including Special:PrefixIndex, the TTL is allowed to drop to 15
seconds if the page parses fast enough.
Bug: T139893
Change-Id: If41885ded648d68352fe3d06336d98aa0ab53966
The code that normalizes line endings ("\r\n" and "\r" to "\n") and
trims trailing whitespace is buried in Parser::preSaveTransform(), and
was duplicated to TextContent in 96b6afb31d, as non-wikitext content
models should still be normalizing line endings.
This splits the duplicated code out into
TextContent::normalizeLineEndings() and utilizes it in the Parser.
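The shared helper boils down to something like this (a sketch, not
necessarily the exact code):

    public static function normalizeLineEndings( $text ) {
        // "\r\n" and "\r" become "\n"; trailing whitespace is trimmed.
        return str_replace( [ "\r\n", "\r" ], "\n", rtrim( $text ) );
    }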
Additionally, expand the documentation of
TextContent::preSaveTransform() to document that subclasses should make
sure they normalize line endings during the PST stage.
Also remove a no-op rtrim() call from WikitextContent.
Change-Id: I9094c671d4bbd23d75436f8f1d682d6dd6e6d2fc
rawurldecode() was being run on unclosed internal links,
which could allow an attacker to insert arbitrary
HTML into the page.
See also related: r13302
Bug: T137264
Change-Id: I4e112a9e918df9fe78b62c311939239b483a21f5
This does the same normalization of newlines that
Parser::preSaveTransform() does. This should be appropriate for any text
content type, especially considering that EditPage uses
WebRequest::getText(), which does a less-strict version of this same
transformation.
This also cleans up the code for doing that newline replacement
to be a bit less verbose.
Bug: T142805
Change-Id: I462afcda502f031a8b0360d982ce2398a0383a96
Doxygen requires the fully qualified name of the class in a comment
or in the @param/@return annotation, otherwise the class isn't linked
in the resulting output[1]. This commit changes the LinkRenderer
annotations in SpecialPage and Parser to \MediaWiki\Linker\LinkRenderer.
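For example (hypothetical method, showing the fully qualified form):

    /**
     * @param \MediaWiki\Linker\LinkRenderer $linkRenderer
     * @return \MediaWiki\Linker\LinkRenderer
     */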
[1] https://doc.wikimedia.org/mediawiki-core/master/php/classSpecialPage.html#a3560214f63fc2f20c63b4025db5cd81d
Change-Id: I74cedcd764a6053cc5a0c6d2eedbedb72651f57c
We have two hacks which are used when Tidy is not available: one in
Sanitizer::removeHTMLtags(), and the second here as a late Parser pass
equivalent to Tidy itself. But the Sanitizer one was enabled only if
MWTidy::isEnabled() returned false, whereas the Parser one was also
enabled when tidy was disabled in ParserOptions. This patch makes them
consistent: it enables the bug 2702 hack only when MWTidy::isEnabled()
returns false, and when Tidy is disabled in parser options, the output
is simply passed through.
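The resulting behaviour, roughly (a simplified sketch; the hack's
helper name is illustrative):

    if ( !MWTidy::isEnabled() ) {
        $text = $this->doBug2702Hack( $text ); // hypothetical name
    } elseif ( !$this->mOptions->getTidy() ) {
        // Tidy disabled via ParserOptions: pass output through unchanged.
    } else {
        $text = MWTidy::tidy( $text );
    }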
This allows tidying to be done separately on the ParserOutput, as is
required by the proposed ParserMigration extension (I24d0776a933fa3f).
Eventually the bug 2702 hack will be removed in favour of a pure-PHP
HTML 5 parser, but it looks like it is too early for that.
Change-Id: I94be6c9dec531c23ef80cb36732243bd6858bf22
* Instead of having messy code to create a hidden HTML
comment of English strings at the bottom of the page,
expose the structured data of the parse information
to JS so tools can use it.
* Make makeConfigSetScript() use pretty output so these
variables are also easy to read in "view source".
* Remove the ParserLimitReportFormat hook, since the data
  is no longer formatted to HTML; extensions keep supplying
  structured entries, as sketched below.
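On the PHP side, for example (the key and values here are
illustrative):

    $parserOutput->setLimitReportData( 'myext-limitreport-foo', [ $used, $max ] );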
Bug: T110763
Change-Id: I2783c46c6d80f828f9ecf5e71fc8f35910454582
A convenient factory function to eliminate code duplication in
ParserMigration's MigrationEditPage::tidyParserOutput().
Change-Id: I058912885025e7a9402912236c65c44e32ef036e
No uses of 'modulemessages', getModuleMessages() or addModuleMessages()
anywhere in Wikimedia Git.
Change-Id: I59420880f3545d1aabf9bcbea1e34b1475697d26
The singly-linked list data structure of Preprocessor_Hash was causing
stack exhaustion due to the need for a recursion depth proportional to
the number of children of a given PPNode, in serialize() and on
object destruction. So, switch to array-based storage. PPNode_* becomes
a temporary proxy around the underlying storage, which avoids circular
references and keeps the storage very compact. Preprocessor_DOM uses
similar temporary PPNode objects, so the fact that
$node->getFirstChild() !== $node->getFirstChild()
should not cause any new problems.
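Concretely:

    // Each call constructs a fresh proxy over the same underlying
    // array, so the proxies are equal in content but not identical:
    $a = $node->getFirstChild();
    $b = $node->getFirstChild();
    var_dump( $a === $b ); // bool(false)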
* Increment cache version
* Use JSON serialization of the store array instead of serialize(),
since JSON is more compact, even after gzipping.
* For efficiency, make $accum a plain array, and use it as an array
where possible, instead of using helper functions.
Performance and memory usage for typical input are slightly improved:
something like 4% faster for the whole parse, and 20% less memory for
the tree.
Bug: T73486
Change-Id: I0d6c162b790d6dc1ddb0352aba6e4753854f4c56
We originally imagined rolling out the display of empty elements
simultaneously with Html5Depurate, but now we have added support for
marking empty elements to Html5Depurate and plan on having some sort of
longer migration period. So, move the relevant CSS to content.css, and
remove the concept of CSS dependent on the tidy driver.
Add a body class which will allow the effect to be toggled in a gadget or
extension. Actual toggling in the CSS will be in the stage 2 patch, to be
deployed after the varnish cache and parser cache have expired.
I originally imagined that there would be a gadget that overrides the
rule with an !important declaration, but that method does not allow you
to recover the original display property, which is often overridden by
the style attribute or site CSS to be "inline".
Also, in RaggettWrapper, switch to the new class mw-empty-elt, following
Html5Depurate, instead of mw-empty-li. The old class will be removed in
the stage 2 patch.
Change-Id: Ic0f432c43a006629ca5a1a7c2dda3552ceb4dc4f
* Have TidySupport provide $wgTidyConfig instead of the legacy globals
* Add --use-tidy-config option to parserTests.php. This tells
TidySupport to use the tidy configuration from LocalSettings.php
instead of the traditional safe defaults.
* Add a way for TidySupport to disable tidy via $wgTidyConfig, using
driver=>disabled
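  For example, in LocalSettings.php:

      $wgTidyConfig = [ 'driver' => 'disabled' ];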
Change-Id: Ie76e68e2d5238d0a1aef49a1a815c0d1cd8bfdae
This is an HTML5-compliant parse/serialize tidy implementation, with
well-delineated hacks to support the <p>-wrapping done by legacy tidy.
Change-Id: I4fd433fd6f1847061b0bf4b3e249c918720d4fae
Some pages use constructs like `<b/>` or `<span/>` to protect spaces or
special characters at the beginning/end of templates. This syntax is
incompatible with HTML5 parsing rules, which dictate that these should
be treated as open tags, and instead rely on an unusual quirk of the
`tidy` program that removes invalid constructs.
This syntax is deprecated as part of the process of reconciling `tidy`
with modern HTML5 parsing semantics. Authors can use ` ` or `<nowiki/>`
as valid replacements.
In order to provide time to transition existing content, pages using
self-closing tags in violation of the HTML5 parsing specification
will have their templates/pages added to a new tracking category.
After these uses are fixed, we will change the sanitizer to treat these
as normal open tags, to be consistent with the HTML5 parsing spec.
Note that this construct is already disallowed if tidy is disabled; it
is rendered as `<b/>`. We add a tracking category in the no-tidy
case as well, in preparation for eventually making the no-tidy and
with-tidy behaviors consistent.
Bug: T134423
Change-Id: Ie1cf3aa40d5483bf395ece539f0240b694ff04ab
During both the edit stash and the first parse on page save,
guess what the rev_id will be and use that instead of null.
Only reparse if the guess turns out to be wrong (see the sketch
after the list). This avoids extra parsing on wikis that have
low-to-medium traffic, and does not cost much. The parsing that
can be avoided is:
a) in doEditContent() by using the stash
b) in doEditUpdates() by using the doEditContent() result,
whether that was able to use the stash or not itself
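A conceptual sketch of the guess-and-reparse pattern (not the actual
code; the guess strategy and flag check are illustrative):

    $guessId = $title->getLatestRevID() + 1; // hypothetical guess
    $output = $content->getParserOutput( $title, $guessId, $popts );
    if ( $savedRevId !== $guessId && $output->getFlag( 'vary-revision' ) ) {
        // Guess was wrong and the output depended on the rev ID: reparse.
        $output = $content->getParserOutput( $title, $savedRevId, $popts );
    }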
Also improved the parse operation logging in save paths.
Bug: T137900
Change-Id: Ic6faae70a78b4e223e4d3585cefd482c0fa00677
And SpecialPage::setLinkRenderer(), so the Parser can pass on its
LinkRenderer instance for when special pages are being included in a
page.
Change-Id: If9a9c648ab670b824ce534e7cf0d20d41e1bfd12
In galleries, bad images are rendered as links. This change causes the
same behavior in wikitext, rather than the current behavior of not
rendering anything.
Change-Id: I1a074bff7cb661b5b4e6db9503eb6a5de702ee2f
Few maintained extensions still rely on this, and it is
bad practice to use it for handling cache correctness.
Change-Id: I2de481198bbff5c4f3dd81fc6d1b137e4c37b93f
Previously, {{Special:Foo}} would cause the parser cache to be
disabled; now there is a method in SpecialPage to control this
behaviour and set arbitrary caching times.
Note: this does not affect caching of direct views to the special page.
The new default disables the cache when not in miser mode, and
otherwise sets it to 1 hour, except for Special:RecentChanges
and Special:NewPages, which are set to 5 minutes. These values are
possibly really low, but for now I think it is best to stay close
to the old behaviour. We had 0 caching for these things for years,
and afaik it hasn't caused any big issues. Part of me wonders if
Special:RecentChanges should stay at 0, but that sounds crazy.
This change also causes transcluded special pages to not be
"per-user" if they are being cached (specifically, $wgUser et al.
become 127.0.0.1).
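A sketch of opting a special page into a custom transclusion TTL
(assuming the new method is named maxIncludeCacheTime(); the
subclass is hypothetical):

    class SpecialMyReport extends SpecialPage {
        public function maxIncludeCacheTime() {
            return 300; // transclusions cacheable for up to 5 minutes
        }
    }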
Bug: 60561
Change-Id: Id9ce987adeaa69d886eb1c5cd74c01072583e84d
Previously, no TTL at all was used, which was quite harsh on
performance and had downstream effects like disabling edit
stashing for affected pages.
Bug: T136678
Change-Id: I2462057aa189cfb05fe65d0b3c081a9fd10066a2
* Do not change the result to a null editing user anymore.
* Use a new vary-user flag instead of vary-revision. This
will only cause a reparse on null edits. Normal edits
can still use the prepared output now.
* Edit stashing now applies for pages with this magic word.
* Fixed bug where the second prepareContentForEdit() call
(due to vary-X flags) would still check the edit stash.
Bug: T135261
Bug: T136678
Change-Id: Id1733443ac3bf053ca61e5ae25db3fbf4499e9f9
Just always use the input size for new revisions. If they are
saved, then that should be the revision size. If they are just
null edits, then the size must have matched the current revision.
This also enables edit stashing for this case.
Change-Id: I428c0cc87750eeddd1d7dcebd1a2b03817cec441
* Rename to getLinkClasses() since it's not really returning colours,
  but CSS classes (usage sketched below).
* Dependency inject LinkCache into LinkRenderer
* Update all callers of Linker::getLinkColour(), and mark it as
deprecated (no other uses in Gerrit)
* Update a bunch of tests for new dependency
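Usage after the rename (a sketch; $title is any LinkTarget):

    $classes = $linkRenderer->getLinkClasses( $title );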
Change-Id: Id178e2dcc60b833ce2dbad4920896b93cabba1bf
Extensions shouldn't be calling this, just the Parser, so make it
protected. And since the only caller passes an empty array for $query,
we can just remove it entirely.
Change-Id: I3adbcaabbb40870eb3df1495c3c2743ff21f0c64
Revision::getSize() might return null when the revision.rev_len field
is null. That should never happen normally (the field should get
backfilled as part of the update process), but we've also had a bug
where rev_len was not being recorded for empty pages (see T135414 for
details). It's saner to return a number here rather than empty string,
and 0 should actually be correct for all pages affected by that issue.
Bug: T20998
Change-Id: Ie12f0be24f00aaf8b90b25c4921a97df3b789369
The warning about an ignored restricted DISPLAYTITLE isn't really
relevant to the casual reader, so don't show it in the page output.
Instead, show it above the edit box.
Bug: T135949
Change-Id: I009dd865bec7b6e3a7492c49db97074483f93ee4
Added to "Pages with ignored display titles" category
(message key: "restricted-displaytitle-ignored")
Follow up to I6ae6d5d0e567ba9c86e46c32240ee51a2ca5d8d1
Bug: T135949
Change-Id: I9e0f8b1e3d39a62c13191bea6734fb136e976e0c
noreferrer is used because support for noopener is still very limited.
This prevents the attack detailed at
https://mathiasbynens.github.io/rel-noopener/, where the opened page
can navigate the parent window even if it is cross-origin.
Bug: T133507
Change-Id: I6e4ab938861e246ff44048077b94847e303f1859
Signed-off-by: Chad Horohoe <chadh@wikimedia.org>
Strip markers get substituted with general HTML, which means the
substitution text generally does not escape quote characters. If
someone can convince MediaWiki to put a strip marker in an attribute,
they can get around escaping requirements that way. This patch
adds the characters `"' to the strip marker text. At least one
of these characters should be escaped inside attributes (regardless
of which quote character you use for attributes), so normal HTML
escaping will deactivate the strip markers, preventing the
vulnerability.
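For illustration (the marker text here is hypothetical; escaping
with ENT_QUOTES mangles the quotes and defuses the marker):

    $marker = "\x7f'\"`UNIQ--ref-00000001-QINU`\"'\x7f";
    echo htmlspecialchars( $marker, ENT_QUOTES );
    // \x7f&#039;&quot;`UNIQ--ref-00000001-QINU`&quot;&#039;\x7f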
This will break any extension that escapes input with
htmlspecialchars() to add to HTML/half-parsed HTML output while
assuming that strip markers stay unmangled. I don't think it's very
common to do this. The primary example I found was some core usages
of Xml::escapeTagsOnly(). (And even in that case, it only affected
the corner case of being called via {{#tag:..}}.)
Based on MatmaRex's suggestion.
Change-Id: If887065e12026530f36e5f35dd7ab0831d313561
Signed-off-by: Chad Horohoe <chadh@wikimedia.org>