Use the LoadBalancer id in flushPrimarySessions(), not the LBFactory one,
and use assertOwnership() to check $owner, similar to other methods.
In DatabaseMysqlBase::doFlushSession(), change RELEASE_ALL_LOCKS() query
to use RELEASE_LOCK(), since only newer MariaDB versions (>=10.5.2) support
it. No errors were thrown in the method since they are suppressed, but the
syntax error would cause the transaction to be placed in an error state.
Add assertion to testTransactionCallbackChains() that would otherwise fail.
Randomize lock names in lock() tests to avoid contention.
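A minimal standalone sketch of the portable pattern, assuming a plain PDO
connection and a list of tracked lock names (the real change lives in
DatabaseMysqlBase::doFlushSession() and goes through the Database query layer):
  <?php
  // Sketch only: release each named lock individually with RELEASE_LOCK(),
  // which all MySQL/MariaDB versions support, instead of issuing
  // RELEASE_ALL_LOCKS(), which MariaDB only added in 10.5.2.
  function releaseNamedLocks( PDO $pdo, array $lockNames ): void {
      foreach ( $lockNames as $name ) {
          $stmt = $pdo->prepare( 'SELECT RELEASE_LOCK(?)' );
          $stmt->execute( [ $name ] );
      }
  }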
Bug: T292239
Bug: T303887
Follow-Up: ee3c65d541
Follow-Up: 4cac31de4e
Change-Id: I414d737028338cfd5369eee24576df4aa26a2f6f
This ensures that assertions work in a uniform way,
and provides meaningful messages in case of failure.
Change-Id: Ic01715b9a55444d3df6b5d4097e78cb8ac082b3e
The policy allows this and since 1.39 is going to be the next LTS
release, I think it is fine to do this now.
Change-Id: If426e0ee349252ccc0ba9c4222c7d6865ab57fa2
The testUpgrades() method is slow and is easier to
parallelize via paratest if it lives in a separate class/file.
Also make the test use a data provider.
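A rough sketch of the intended shape using PHPUnit conventions (class,
provider, and fixture names here are illustrative, not the actual test code):
  <?php
  use PHPUnit\Framework\TestCase;
  // Splitting the slow upgrade checks into their own class lets paratest
  // schedule the file in parallel, and the data provider turns one long
  // test into many independently reported cases.
  class UpgradesTest extends TestCase {
      public static function provideUpgrades(): iterable {
          yield 'old schema' => [ 'old.json', 'expected.json' ];
          yield 'newer schema' => [ 'newer.json', 'expected.json' ];
      }
      /** @dataProvider provideUpgrades */
      public function testUpgrades( string $from, string $expected ): void {
          // ... run the upgrade from $from and compare against $expected
          $this->assertIsString( $expected );
      }
  }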
Bug: T50217
Change-Id: I5c8df6ae0ff34941f4cf7275680b309e79565925
Rename canRecoverFromDisconnect() in order to better describe
its function. Make it use the transaction ID and query walltime
as arguments and return an ERR_* class constant instead of a bool.
Avoid retries of slow queries that yield lost connection errors.
Add methods and class constants to track session state errors
caused by the loss of named locks or temp tables. Such errors can
be resolved by a "session flush" method.
Make assertQueryIsCurrentlyAllowed() better distinguish ROLLBACK
queries from ROLLBACK TO SAVEPOINT queries. For some scenarios,
only full transaction ROLLBACK queries should be allowed.
Add flushSession() method to Database and flushPrimarySessions()
methods to LBFactory/LoadBalancer.
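A hedged sketch of the error classification idea (the constant names below
mirror the description but are declared locally here; the real logic in
Database differs in detail):
  <?php
  // Illustrative only: classify a lost-connection error by what state was
  // lost, instead of answering a yes/no "can we recover?" question.
  const ERR_RETRY_QUERY = 1;   // safe to silently retry the statement
  const ERR_ABORT_QUERY = 2;   // drop the statement; session still usable
  const ERR_ABORT_SESSION = 3; // named locks/temp tables lost; needs session flush
  const ERR_ABORT_TRX = 4;     // an explicit transaction must be rolled back
  function classifyConnectionLoss( ?string $trxId, float $queryWalltime ): int {
      if ( $trxId !== null ) {
          return ERR_ABORT_TRX; // mid-transaction writes were lost
      }
      if ( $queryWalltime > 1.0 ) {
          // A slow query may have timed out or even completed server-side,
          // so retrying it is not known to be harmless.
          return ERR_ABORT_QUERY;
      }
      return ERR_RETRY_QUERY;
  }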
Also:
* Rename wasKnownStatementRollbackError() and make it take the
error number as an argument, similar to wasConnectionError().
Add MySQL error codes for query timeouts since they only cause
statement rollbacks.
* Rename wasConnectionError() and mark it as protected. This is an
internal method with no outside callers.
* Rename wasQueryTimeout(), remove some HHVM-specific code, and
simplify the arguments.
* Make executeQuery() use a for loop for the query retry logic
to reduce code duplication.
* Move the error state setting logic in executeQueryAttempt() up
in order to reduce code duplication.
* Move the beginIfImplied() call in executeQueryAttempt() up to the
retry loop in executeQuery(). This narrows the executeQueryAttempt()
concerns to sending a single query and updating tracking fields.
* Make closeConnection() and doHandleSessionLossPreconnect() in
DatabaseSqlite more consistent with the base class by releasing named locks.
* Mark trxStatus() as @internal.
Bug: T281451
Bug: T293859
Change-Id: I200f90e413b8a725828745f81925b54985c72180
This is not testing anything since tables.sql was emptied
in I9f33132.
It's probably also no longer necessary, since the schema
is now generated by Doctrine per platform, but I am not sure
about that, so I am just fixing the test and simplifying it.
Change-Id: Icb4d04e6057958771eb00a31b967a42cc948d6a1
There is a common and reasonable need for longer lines in tests.
The nudge for shorter lines doesn't seem valuable here. The natural
breaks will likely still fall in 80-100 given the enforced practice
for non-test code, e.g. whether through habit, or 80-100 column markers
in text editors, or the finite width of diff and code review
interfaces.
Change-Id: I879479e13551789a67624ce66f0946d2f185e6ee
Existence of global userpages (or similar nonlocal pages) can only be
known if the relevant title hook is involved, but LinkBatch is caching
these pages as bad links immediately after querying the local database.
CommentParser is then relying on this information to treat them as
always bad; thus preempting any further checks that might be done by
LinkRenderer to properly account for their magical existence.
Now the title's always-known status will be checked, so that such pages
are not preemptively treated as bad links.
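A hedged sketch of the intended check, assuming Title::isAlwaysKnown()
(which consults the relevant hook); the actual wiring in
LinkBatch/CommentParser differs:
  <?php
  // Illustrative only: before caching a page as a bad (red) link, ask
  // whether a hook declares the title "always known" (e.g. a global user
  // page); if so, it must be treated as existing.
  function shouldCacheAsBadLink( Title $title, bool $existsLocally ): bool {
      if ( $existsLocally ) {
          return false;
      }
      return !$title->isAlwaysKnown();
  }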
Bug: T293665
Change-Id: Ica8733fb4a890fd2d2fc37eb85657c3715805133
LightnCandy resolves recursively-defined partials twice, which leads to
the template file being hashed twice. While the cost of hashing the
partial the second time is minimised due to
FileContentsHasher::getFileContentsHash() caching the hash of a file in
APCu, it need not be paid.
Bug: T300210
Change-Id: Id3f62bf32c47f21181b1ec6d77a5ae9a703952b1
* In LinkBatch::addObj(), reject interwiki links with a warning, as
sketched after this list. Otherwise the link is added to the batch by
ns/title and later reconstructed as if it were a local link without an
interwiki prefix.
* In CommentParser, treat interwiki links as always good, don't defer
the existence check.
* In LinkBatch, inject a logger instance instead of calling LoggerFactory
in four places.
* Add a regression test, and some general tests for known links.
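A rough sketch of the interwiki guard from the first bullet (standalone for
illustration; the real check lives inside LinkBatch::addObj()):
  <?php
  use MediaWiki\Linker\LinkTarget;
  use Psr\Log\LoggerInterface;
  // Illustrative only: interwiki links cannot be resolved against the
  // local page table, so adding them to the batch would later
  // misrepresent them as local titles.
  function rejectInterwiki( LinkTarget $link, LoggerInterface $logger ): bool {
      if ( $link->isExternal() ) {
          $logger->warning( 'LinkBatch given an interwiki link: {prefix}',
              [ 'prefix' => $link->getInterwiki() ] );
          return true; // caller skips adding it to the batch
      }
      return false;
  }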
Bug: T300311
Change-Id: I0e5825eb48a6ba2932aea69a4d0fff3439c50ff5
In PHP 7, "some unlikely string 2" was a valid value for the $text
parameter to XmlDumpWriter::__construct(), because non-strict comparison
was used, so "some unlikely string 2" was coerced to integer 0 which is
equal to WRITE_CONTENT. In PHP 8.0, non-strict comparison has changed such
that "some unlikely string 2" != 0.
Pass strict=true to in_array() so that the test fails on both versions.
Please review for potential production impact.
Fix the test so that it passes after this change.
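A plain-PHP illustration of the behaviour difference (the array of allowed
modes below is hypothetical; per the message above, the string used to coerce
to 0, which equals WRITE_CONTENT):
  <?php
  $value = 'some unlikely string 2';
  var_dump( $value == 0 );                         // PHP 7: true, PHP 8: false
  var_dump( in_array( $value, [ 0, 1 ], false ) ); // PHP 7: true, PHP 8: false
  var_dump( in_array( $value, [ 0, 1 ], true ) );  // false on both versions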
Bug: T283275
Change-Id: I5a84fb70db9d02ea10f87299eb41f80ba8fc787c
Currently this is implemented internally as a second method call. In the
future, it might be possible to optimize the implementation, e.g. to
reduce the amount of DB queries/jobs etc. without changing anything for
callers.
Since the implementation of e.g. the result getters is generic, a future
commit may add an option to delete subpages with relatively low effort.
The revision limit for big deletions is now using the total number of
revisions (base page + talk page). This is because both deletions would
happen in the same main transaction, and we presumably want to optimize
the deletion as a whole, not as two separate steps. This would become
even more important if the aforementioned improvements are ever
implemented. Note, the whole concept of "big deletion" might have been
superseded by batched deletions in the author's opinion, but as long as
such a limit exists, it should serve its purpose.
The current interpretation of the associated talk option is that the
request is validated, and an exception is raised if we cannot delete the
associated talk, as opposed to silently ignoring the parameter if the
talk cannot be deleted. The only exception to this is that it will not
fail hard if the talk page turns out not to exist by the time the
deletion is attempted, as that's hard to foresee due to race conditions.
Note that the summary for the talk deletion is prefixed with the
delete-talk-summary-prefix message. The main reason behind this is that
the delete UI can sometimes offer an auto-generated summary like "the
only contributor was XXX" or "the content was YYY", and since this may
not apply to the talk page as well, the log entry might look confusing.
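A hedged usage sketch of the new option (service and method names such as
setDeleteAssociatedTalk() follow the description above and may not match the
final API exactly):
  <?php
  use MediaWiki\MediaWikiServices;
  // Illustrative only: request deletion of a page together with its
  // associated talk page, and expect an error status rather than a
  // silent no-op when the talk page cannot be deleted.
  $deletePage = MediaWikiServices::getInstance()
      ->getDeletePageFactory()
      ->newDeletePage( $page, $deleter );
  $status = $deletePage
      ->setDeleteAssociatedTalk( true )
      ->deleteIfAllowed( 'Housekeeping' );
  if ( !$status->isGood() ) {
      // e.g. the associated talk page exists but is not deletable
  }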
Bug: T263209
Bug: T27471
Change-Id: Ife1f4e8258a69528dd1ce8fef8ae809761aa6f1d
In retrospect, I rushed the previous patch: we really need a way to tell
which deletion an ID belongs to (and whether it was scheduled). So make
both result getters return an array with known keys that can be used
programmatically. Right now the class can only delete a single page, and
thus there's a single constant and this change is effectively a noop.
The deletionWasScheduled() method, introduced in 1.37, was
hard-deprecated out of an abundance of caution. There are no known uses
on codesearch, so it can probably just be removed in the next release.
Also reorder constructor params to DeletePage -- BacklinkCacheFactory is
a service injected by the factory, hence it shouldn't be grouped
together with value objects injected by the caller.
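A hedged sketch of how callers would consume the keyed results (getter and
constant names follow the description above; the exact API may differ):
  <?php
  // Illustrative only: the getters now return arrays keyed by which
  // deletion they refer to, e.g. [ DeletePage::PAGE_BASE => 1234 ],
  // instead of a single scalar value.
  $logIds = $deletePage->getSuccessfulDeletionsIDs();
  $scheduled = $deletePage->deletionsWereScheduled();
  $baseLogId = $logIds[ DeletePage::PAGE_BASE ] ?? null;
  $baseWasQueued = $scheduled[ DeletePage::PAGE_BASE ] ?? false;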
Change-Id: I32679b7cacc638ec3e9dc5b8dfe9bcc794b22ecf
Instead of putting logID/false in the status' value, add getters that
can be used to retrieve this information. Also, make it return an array
of log IDs, which is easier to expand later on to account for deletion
of related pages (e.g. associated talk page).
Note: this change will be backported to 1.37.
Bug: T288758
Change-Id: I7ef64242ae0cb7018a4b1e8eb004a5563925b9a4
Update/create override classes of ContentHandler.
Soft-deprecate and remove methods from Content and from the classes that
override them.
Bug: T287158
Change-Id: Idfcfbfe1a196cd69a04ca357281d08bb3d097ce2
CommentParser:
* Move comment formatting backend from Linker to a CommentParser service.
Allow link existence and file existence to be batched.
* Rename $local to $samePage since I think that is clearer.
* Rename $title to $selfLinkTarget since it was unclear what the title
was used for.
* Rename the "autocomment" concept to "section link" in public
interfaces, although the old term remains in CSS classes.
* Keep unsafe HTML pass-through in separate "unsafe" methods, for easier
static analysis and code review.
CommentFormatter:
* Add CommentFormatter and RowCommentFormatter services as a usable
frontend for comment batches, and to replace the Linker static methods.
* Provide fluent and parametric interfaces.
Linker:
* Remove Linker::makeCommentLink() without deprecation -- nothing calls
it and it is obviously an internal helper.
* Soft-deprecate Linker methods formatComment(), formatLinksInComment(),
commentBlock() and revComment().
Caller migration:
* CommentFormatter single: Linker, RollbackAction, ApiComparePages,
ApiParse
* CommentFormatter parametric batch: ImageHistoryPseudoPager
* CommentFormatter fluent batch: ApiQueryFilearchive
* RowCommentFormatter sequential: History feed, BlocklistPager,
ProtectedPagesPager, ApiQueryProtectedTitles
* RowCommentFormatter with index: ChangesFeed, ChangesList,
ApiQueryDeletedrevs, ApiQueryLogEvents, ApiQueryRecentChanges
* RevisionCommentBatch: HistoryPager, ContribsPager
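A hedged sketch of the fluent batch frontend described under
"CommentFormatter:" above (method names are indicative; the parametric and
row-based variants work analogously):
  <?php
  // Illustrative only: collecting comments into one batch lets link and
  // file existence checks be issued as batched queries instead of one
  // lookup per comment.
  $formatted = $commentFormatter
      ->createBatch()
      ->comments( $commentItems ) // iterable of comment + self-link target
      ->useBlock( true )          // wrap output like Linker::commentBlock()
      ->execute();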
Bug: T285917
Change-Id: Ia3fd50a4a13138ba5003d884962da24746d562d0
Since the branch cut has happened, we can bump and get rid of legacy
cruft. According to the policy we can go up to 1.31 but let's keep it
that way to avoid major disruptions.
Change-Id: I9d697445a3bb5047726c8b2a7f808edb8403cdda
Some methods in the PageUpdater class implement the fluent interface
design pattern. Use the fluent interface where applicable.
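For instance, a chained call along these lines (a sketch; assumes the usual
WikiPage::newPageUpdater() entry point and SlotRecord/CommentStoreComment
helpers):
  <?php
  use MediaWiki\Revision\SlotRecord;
  // Illustrative only: the setters return $this, so no intermediate
  // variables are needed when preparing and saving a revision.
  $wikiPage->newPageUpdater( $performer )
      ->setContent( SlotRecord::MAIN, $content )
      ->saveRevision(
          CommentStoreComment::newUnsavedComment( 'Example edit summary' )
      );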
Change-Id: If76a4b8c5070c20ed40038a4ee78e2d677de5180
This patch adds a service as a replacement for MWGrants. This is done
as it allows proper dependency injection of used services and
configuration settings.
This was previously committed as Iac52dba15f and reverted because it
introduced recursive service instantiation.
To avoid this recursive service instantiation, all UI-related methods
are moved to a new GrantsLocalization service instead of the GrantsInfo
service.
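A hedged sketch of the resulting split (class shapes only; method names are
illustrative): GrantsInfo answers data questions from configuration, while
GrantsLocalization owns everything that needs message/UI services.
  <?php
  // Illustrative only: the data service depends on configuration alone,
  // so instantiating it cannot recurse into UI-related services.
  class GrantsInfo {
      /** @var array[] Map of grant name => list of rights */
      private $grantRights;
      public function __construct( array $grantRights ) {
          $this->grantRights = $grantRights;
      }
      public function getValidGrants(): array {
          return array_keys( $this->grantRights );
      }
  }
  // Localized names, links and other UI helpers move to a separate
  // service that wraps GrantsInfo, which is what breaks the recursion.
  class GrantsLocalization {
      /** @var GrantsInfo */
      private $grantsInfo;
      public function __construct( GrantsInfo $grantsInfo ) {
          $this->grantsInfo = $grantsInfo;
      }
  }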
Bug: T253077
Change-Id: Ib900bc424fc272ec709d272dcaff71398fa856f8