- call them "pipe tricks" (plural), as there's more than one
  (examples below)
- mention the double-width comma as well
- use one tab character for alignment, due to double-width chars
- document the reverse pipe trick
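For reference, the behaviours in question (illustrative examples, not
part of the diff):

    [[Help:Pipe trick|]]        saved as [[Help:Pipe trick|Pipe trick]]
    [[Foo (bar)|]]              saved as [[Foo (bar)|Foo]]
    [[Boston, Massachusetts|]]  text after the comma is stripped; the
                                double-width comma ， behaves the same

If memory serves, the reverse pipe trick goes the other way: on a page
named "Baz (qux)", [[|Foo]] is saved as [[Foo (qux)|Foo]].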
Change-Id: I27a1d04362eb3988fc1318fa1f73f69877019439
I've implemented Parser::getExternalLinkRel, which gives the 'rel'
attribute for a given link in a given namespace. This follows Tim's
suggestion, as it's currently impossible to invoke the logic in
Parser::getExternalLinkAttribs externally.
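A minimal usage sketch (the exact signature is from memory, so treat
it as an assumption):

    // Returns e.g. 'nofollow', or null, based on $wgNoFollowLinks,
    // $wgNoFollowNsExceptions and $wgNoFollowDomainExceptions.
    $rel = Parser::getExternalLinkRel( $url, $title );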
Change-Id: Id0bfed81e2afd6730d820b6c9a4a09155a557f37
Additionally, remove the creation of a bogus title in transformMsg.
The only place preprocess uses the title is in startParse, which
explicitly allows null.
Change-Id: I33d090bf250092fc541e284eb19dbd4053f40ae5
* IRIs are getting more and more widely used these days, so Chinese
  characters in the text of external links also need to be prevented
  from being converted.
* All markNoConversion() functions in languages with variants now do
  the same thing, so merge them into a single function in the
  Language class and drop the implementations in individual languages
  (see the sketch below).
* Also rephrase the phpdoc of that function, and (bug 24798) fix the
  link detection regex to use wfUrlProtocolsWithoutProtRel(). The
  protocol-relative pattern is excluded to avoid false positives.
* Add parser test for it.
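The merged Language method is roughly this (a sketch, not the
verbatim hunk):

    public function markNoConversion( $text, $noParse = false ) {
        // Excluding the protocol-relative pattern avoids many false
        // positives, e.g. plain text that merely starts with "//".
        if ( $noParse || preg_match(
            '/^(?:' . wfUrlProtocolsWithoutProtRel() . ')/', $text )
        ) {
            return $this->mConverter->markNoConversion( $text );
        }
        return $text;
    }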
Change-Id: I2ec0ac2b9b11221584adb72555168498de209d57
We store various bits of data as "expando" properties on the Parser
object, to pass information from one stage of the parser to another. If
the parser is cloned, however, we can run into trouble because two
different Parser objects are now manipulating the same extension data
structure; this often shows up when ParserClearState is called on one
clone and clears the state of the other as well.
Since a deep clone might be too expensive and still might be wrong in
some cases, it seems most useful to simply provide a ParserCloned hook
so extensions can just do The Right Thing.
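An extension can then do something like this ('myExtData' is a
hypothetical expando property):

    $wgHooks['ParserCloned'][] = function ( Parser $parser ) {
        // Give the clone its own copy of our state, so that
        // ParserClearState on one object no longer clears the other.
        if ( isset( $parser->myExtData ) ) {
            $parser->myExtData = clone $parser->myExtData;
        }
        return true;
    };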
Change-Id: Ieec65c908d71e89b9a66f83b9a626f842aadacbb
Before the introduction of the content handler, missing content was
signified by getText() returning null instead of a string. null
behaves much like an empty string in most contexts, so in many places
there was no explicit check for whether the content was null.
Now that getContent() returns null instead, this often caused a fatal
error, because the code would access whatever getContent() returned
as an object, without checking whether it was null (no such check was
performed previously, when the content was represented as a string).
This change introduces explicit checks for getContent() returning
null in the most essential core classes.
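The added checks follow this pattern (illustrative, not a verbatim
hunk):

    $content = $revision->getContent();
    if ( $content === null ) {
        // Missing content: bail out explicitly instead of fataling
        // on a method call on null further down.
        return false;
    }
    $text = $content->getNativeData();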
Change-Id: I551a90b0b67b8edc7570ca5d252ecc1de903f097
- Actually mention "pipe trick" so the code is searchable
- Use spaces rather than tabs for vertical alignment
- Clarify the comment for double-width brackets and mention the
  revision in which they were added
Change-Id: Iaf365e313144e378133fb16c64efa5b7e47d4a6a
* Fixes bug 11748 (Parser issue for HTML definition list) and similar
issues for nested unordered / ordered lists
* Stops wrapping HTML-syntax definition lists into paragraphs
  for consistency with their wikitext variants (see the example below)
* Enables one previously disabled test and adds another for nested
definition lists with HTML syntax
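For illustration, markup along these lines is no longer wrapped in
<p> tags:

    <dl>
      <dt>term</dt>
      <dd>definition
        <ul><li>nested item</li></ul>
      </dd>
    </dl>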
Change-Id: If75ed54e11452dbcf5e6213cc20923064f811715
(bug 24502) Resolve the various issues with this accidental feature
by removing it. I think it could be done properly, along the lines of
my comment #5, but I don't think just changing the DB schema to make
langlinks non-unique is a good direction to take. A comment on
I4e1e08a3 from Daniel Kinzler indicates that duplicate language links
won't be possible with Wikidata anyway, so there's not much value in
I4e1e08a3 for WMF wikis.
Change-Id: Iba5f3f29e20f5119d4414b1e87ce5eee674701a8
To prevent large template DOM caches from sending servers into swap,
throw an exception when more than some number of DOM elements are
parsed. Unfortunately, it wasn't possible to return a normal error
message, because it broke PST and extractSections and corrupted the
article text. It's safer to refuse to save the edit, and we don't
have decent ways to do that short of throwing an exception.
Ideally we would like an upstream patch that hooks libxml to allocate
memory from PHP's request pool; then a fatal error would be raised
instead of swapping.
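Conceptually the guard is (names here are illustrative, not the
actual ones):

    // While expanding the preprocessor DOM:
    if ( $this->nodeCount > $this->maxNodeCount ) {
        // Returning an error message here broke PST and
        // extractSections, so refuse to continue outright.
        throw new MWException( 'Node count limit exceeded' );
    }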
Change-Id: I4cb4f6fd313e1e0940b56cc5e586afd1bea9267a
This patch marks the regex matching the URL protocol as
case-insensitive. From now on we will render links like [HTTP://ww].
Tests added.
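Conceptually the change just adds the /i modifier where the protocol
list is matched (a sketch, not the exact hunk):

    // Case-insensitive, so HTTP:// now matches as well as http://
    $isLink = (bool)preg_match(
        '/^(?:' . wfUrlProtocols() . ')/i', $url );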
Change-Id: I706acb7a0ae194b50d2318763beae4e5e83671f3
* Also normalized 0 => false for the rev ID parameter in some places.
* Broke some long lines and shortened a variable name in Skin.php.
Change-Id: I6645315699ec7670ae22aa1dbf787d75d6e6b7ec
Replacing wfMsgExt() with wfMessage() in 4e1ccf0 causes an exception on
parse when the defaults are used for $current and $max. I don't know if
there are other similar fatal errors caused by that set of commits.
Change-Id: I84cfdede844bb2dd3c106721b972ed1cd8bfe480
If PHP's PCRE is not compiled with Unicode property support, the
regexes used by the parser fail to compile, causing the parser to
output gibberish. It's been reported that the default PHP package for
CentOS ships PCRE in such a configuration.
As a result the installer will output total gibberish, and the user
has no idea what went wrong because there is no meaningful output.
To counter that, make the Parser throw an exception in that case.
It seemed easier than figuring out how to convince the installer
not to parse the environment check. For completeness' sake, though,
I fixed the PCRE environment check to adequately check for PCRE
not having Unicode support.
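The check amounts to something like this (a sketch):

    // \p{L} only compiles when PCRE has Unicode property support;
    // without it, preg_match() fails and returns false.
    if ( preg_match( '/\p{L}/u', 'a' ) !== 1 ) {
        throw new MWException(
            'PCRE lacks UTF-8 / Unicode property support; ' .
            'recompile it with --enable-unicode-properties.' );
    }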
This should be backported to 1.19 since there are quite a few
complaints about the issue on project:Support_desk. /me has
no idea what the procedure for that is in our new git world
Change-Id: Idb1658be4ee6203a55740450e335f570a616671c
* Replaced WikiPage::DATA_FROM_* constants with IDBAccessObject ones.
* Renamed IDBAccessObject constants a bit for visual consistency.
* Removed AVOID_MASTER parameter and replaced calling instances with READ_NORMAL.
Instead of getting page_latest from the master and the revision from a
slave, just get it all from the master in one RTT. Most callers used
AVOID_MASTER (and now READ_NORMAL), so this case is barely hit anymore.
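After this change, callers look roughly like this (constant names as
I recall them: READ_NORMAL, READ_LATEST, READ_LOCKING):

    $page = WikiPage::newFromID( $pageId );
    // One master round trip for both page_latest and the revision.
    $page->loadPageData( WikiPage::READ_LATEST );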
Change-Id: Ifbefdcd4490094b38e49bbb46c95fdb71b5c9e1a
* Made refreshLinksJob2 always spawn smaller jobs. This can reduce
  the problem of all runners doing the same refresh jobs by increasing
  the granularity of the work to single-page parses per job.
* Avoid master queries when fetching the latest revision for refresh
  links jobs. Also avoid the master for template fetching on parse; a
  LoadBalancer waitFor() call is used instead (see the sketch below).
  The main reason for hitting the master to fetch templates was this
  job itself.
* Fixed bug in refreshLinksJob2 where one missing page would cause all the
remaining updates for pages to be aborted.
* Factored out some code duplication between the two refresh links job classes.
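A sketch of the waitFor() approach (the 'masterPos' parameter name is
an assumption):

    // Wait for the slave to catch up to the master position recorded
    // when the job was enqueued, then read from the slave as usual.
    wfGetLB()->waitFor( $this->params['masterPos'] );
    $rev = Revision::newFromTitle( $title );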
Change-Id: Ieca51567a888f50a6f15b6c2606323da80d6584b
The alignment of image thumbs should follow the page content language
instead of the wiki content language. For this it needs the parser
context, and because it makes sense to have that as the first
parameter, I renamed makeImageLink2() to makeImageLink(); the 2
seemed redundant anyway.
The old function name keeps the old behaviour, but it can be removed
quite soon since almost no extensions use it.
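Call sites change along these lines (trailing parameters elided):

    // Old: Linker::makeImageLink2( $title, $file, ... )
    // New: the parser comes first, so the page content language
    // is available for alignment.
    $html = Linker::makeImageLink( $parser, $title, $file,
        $frameParams, $handlerParams );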
Change-Id: I0c35b06a85528dcc43fdd0578dc9b327c495cf4a
Uses the same regex as for [[File:|]].
With a height given, the width inside the thumb link can be
calculated when the height does not fit within the width.
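The width computation is essentially this arithmetic (variable names
are hypothetical):

    // If scaling to the requested width would overshoot the requested
    // height, derive the rendered width from the height instead.
    if ( $fullHeight * $reqWidth > $fullWidth * $reqHeight ) {
        $width = round( $reqHeight * $fullWidth / $fullHeight );
    } else {
        $width = $reqWidth;
    }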
Change-Id: If188d923d6cd25ea6a5118098f3a513ca5135d43