These functions can return null, or the class property is explicitly set to null.
Found by phan strict checks
Change-Id: I4a271093fb6526564d8083a08249c64cb21f2453
Enabling this setting will cause post-send deferred updates to be run
before a response is sent to the client, so the client can observe all
effects of their last request immediately.
This resolves a problem with some end-to-end tests that were failing
because the updates caused by one request had not landed in the database
by the time the subsequent request was made.
This patch re-enables some e2e tests that were disabled because of this
problem. If $wgForceDeferredUpdatesPreSend works as intended, the tests
should again pass reliably.
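For reference, a minimal LocalSettings.php sketch of a test setup using this
setting (the value shown is an assumption for a test environment, not a
recommended production default):

```php
// LocalSettings.php (test-environment sketch):
// Run post-send deferred updates before the response is flushed,
// so a follow-up request can observe all effects of the previous one.
$wgForceDeferredUpdatesPreSend = true;
```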
Bug: T230211
Bug: T301100
Change-Id: I0e30fdb6acba85cec4bb1499f7063ba6bfb0ffb2
As a security hardening measure to limit exposure on private wikis from
actions on $wgWhitelistRead pages, require an explicit 'read' right on
actions by default. Currently only ViewAction disables this check since
it does its own permissions checking.
This is somewhat duplicative of the permissions check in
MediaWiki::performRequest() but we'll call it defense in depth. It also
matches similar logic in the Action and REST APIs.
Bug: T34716
Bug: T297416
Change-Id: Ib2a6c08dc50c69c3ed6e5708ab72441a90fcd3e1
This reverts commit ef458e8948.
Reason for revert: Causes page tabs to disappear on Special:WhatLinksHere.
Bug: T297744
Change-Id: I0ee282a9f7a5a9b2cfdc3261d800d9e27eaf977e
For web requests, this was attempting to inject the current client's
session user name or XFF-resolved IP address into the SQL query.
However, this has been broken for five years (since around
commit 16b4e3a9f1 / Ibb4f1c0dafea071a) because the relevant objects
are already constructed by the time MediaWiki::main() runs, and so
MediaWiki::setDBProfilingAgent isn't doing much other than changing
the LBFactory state, which rarely gets another chance to pass it down
after that.
This breakage is actually a good thing, as otherwise Tendril and
performance_schema tools would not have been able to aggregate slow
queries very well, the injected values being too dynamic/variable
(these tools can't ignore comments, per T291420 and T291419, and the
comments we do have for fname are actually useful to aggregate).
As of I6e9939e34287d27430, this lack of dynamic variability (apart
from standard SQL syntax conditions that can vary) is now documented
as desirable for wikimedia/rdbms. To avoid confusion, and to keep this
code from accidentally becoming undead, let's remove it.
While at it, remove it for CLI in MWLBFactory as well. This one did
still work, but as I understand it, it was not very useful on its
own; rather, it filled in data to keep a consistent shape with the
(broken) web format. In particular, as far as I know, the db user
and the sending webserver's hostname are already known to MySQL for
all queries and are present in the processlist and other tooling.
Bug: T193050
Change-Id: I033140ddbb04df97de3391a247d1ca937b3bc918
I intend to remove Profiler::getContext/setContext after a week,
without deprecation. I consider these methods internal (they
predate the stable interface policy, and we forgot to triage this
class; it has neither `@stable` nor `@internal`).
The hard-deprecation in this commit is to detect any use in WMF
production that may have gone unnoticed by Codesearch analysis alone,
which found no usages.
Bug: T292269
Change-Id: Id40679f21cc7a3f77a1b96a4bbd55daeaea16892
* Document that Maintenance::shutdown is the CLI equivalent of
MediaWiki::restInPeace.
* Centrally document in the emitStats method why we flush stats
  regularly, and clarify that these OOM concerns are specific to
  CLI processes. That isn't to say it could never happen on a
  web request, but all our early flush handling (even DB trx hooks)
  is explicitly limited to command-line mode today and always has
  been.
* Ref T253547. It is now clear why --profiler=text doesn't work on
the CLI (it is missing the non-external profiler output handling),
which I'll fix in a follow-up.
* Ref T292269. The WebRequest-dependency in Profiler is now much more
clearly problematic. Previously this was masked by wfLogProfilingData
effectively silencing the warning on the CLI without it being so
obviously wrong. I'll fix that in a follow-up.
* Ref T292253. All this is already post-send, and flame graphs confirm
that we don't have any calls to emitBufferedStatsdData nor
StatsdClient::send apart from the post-send one via restInPeace.
Bug: T253547
Bug: T292269
Bug: T292253
Change-Id: If78c37046cf8651c7a8d6690e01d38c3ca29d8d8
Per docs added in I18767cd809f67b, these don't need normalization
as they are only compared against predefined strings, and besides
are generally entered manually in a form, and even then would not
require the kinds of Unicode chars that have multiple/non-normalized
forms.
Also fix some trivial cases in nearby areas:
* getVal('title') obviously needs normalization.
Use getText() to make this more obvious.
* Values from getVal() that are compared against simple string literals
  within the code obviously don't need normalization (e.g. printable === 'no').
* Change hot code in MediaWiki that checks whether 'diff' or 'oldid'
  are set, to use getCheck() (which uses getRawVal) instead of getVal().
As a bonus this means it now handles values like "0" correctly,
which could theoretically have caused bad behaviour before.
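To illustrate why a presence check handles "0" correctly, here is a
plain-PHP sketch (standalone, not MediaWiki code; the array stands in
for the request parameters):

```php
<?php
// Request parameters, stand-in for a WebRequest (hypothetical example).
$params = [ 'oldid' => '0' ];

// A truthiness test, as a getVal()-based check might do:
// in PHP the string "0" is falsy, so the parameter looks unset.
var_dump( (bool)( $params['oldid'] ?? null ) ); // bool(false)

// A presence test, as getCheck() does: "0" is handled correctly.
var_dump( array_key_exists( 'oldid', $params ) ); // bool(true)
```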
Change-Id: Ied721cfdf59c7ba11d1afa6f4cc59ede1381238e
So that we can see how slow history pages are.
Details are by analogy with API action timing.
Bug: T284274
Change-Id: I8a679b8bc94fe2a062b9d62ecff0a7584e65a4db
Add a helper method for the common use case of temporarily silencing
transaction profiler warnings.
Change-Id: I40de4daf8756da693de969e5526b471b624b2cee
All hard-deprecated in 1.35:
* BeforeHttpsRedirect
* CanIPUseHTTPS
* UserRequiresHTTPS
Also soft-deprecate the wfCanIPUseHTTPS function, which always
returns true. It will be hard-deprecated once callers are updated.
Change-Id: Ie6d71809d09edf6be9b8280debeb152ef7fce398
Trap the extra profiling output via a buffer and append it to the
payload string parameter. This way, the Content-Length will be set
correctly with the text profiler.
Update other entry points to call logDataPageOutputOnly().
Follow-up to f4f0ad970e.
Bug: T235554
Change-Id: I4915d1096801a063d493443a3606fd3851e771a6
This fixes problems that arise with apache2/mod_php due to pending
deferred updates, and reduces the use of output buffer and HTTP
header tricks:
* Do not send the unnecessary and invalid "Content-Encoding: identity" header
* Do not send "Connection: close" if HTTP/2 is detected (per the HTTP spec)
* Make sure that no output is emitted in doPostOutputShutdown() from any
deferred updates since the response will have already been flushed to
the client by that point
* Make the Content-Length header logic in outputResponsePayload() account
for cases where there is a non-empty output buffer, cases where there
are several output buffers (bail out), and limit the use of the header
to HTTP 200/404 responses (avoids violation of the HTTP spec)
* Make sure OutputHandler::handle() does not send payloads for responses
that must not have one (e.g. "204 No Content")
* If an output buffer using OutputHandler::handle is active, then let it
handle the setting of Content-Length rather than outputResponsePayload()
* Do not bother trying to disable zlib.output_compression, since that did
not actually stop the client from getting blocked
* Set "no-gzip" via apache_setenv() unconditionally
Bug: T235554
Change-Id: I26f16457698c2c45e561b0c79c78a74e7f47126c
Before this patch, if 'search' is in the request params then we always
go to Special:Search. Also, the 'title' param on the top-right search
form is always set to Special:Search, which means that form always goes
to Special:Search too.
In order to allow the search form to go to a different page, this
patch:
1. moves the hardcoded redirect to Special:Search based on 'search' in
the request params, so that it only happens if we cannot determine
the page title in the usual way
2. adds a setter for the default search page title in \Skin, so that
it can be set in a hook
Bug: T273879
Change-Id: If62573d19ca76ed1db53a5117182172233e514ab
Remove WRITE_SYNC flag from ChronologyProtector since the current
plan is to simply use a datacenter-local storage cluster.
Move the touched timestamps into the same stash key that holds the
replication positions. Update the ChronologyProtector::getTouched()
comments.
Also:
* Use $wgMainCacheType as a $wgChronologyProtectorStash fallback
since the main stash will be 'db-replicated' for most sites.
* Remove the HashBagOStuff default for the position store, since that
  can result in timeouts waiting for a write position index to appear,
  as the data does not actually persist across requests.
* Rename ChronologyProtector::saveSessionReplicationPosition()
since it does not actually save replication positions to storage.
* Make ChronologyProtector::getTouched() check the "enabled" field.
* Allow mocking the current time in ChronologyProtector.
* Mark some internal methods with @internal.
* Migrate various comments from $wgMainStash to BagOStuff.
* Update some other ObjectCache related comments.
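For reference, a LocalSettings.php sketch of the new fallback behavior
(the cache backend shown is an assumed example value, not a requirement):

```php
// LocalSettings.php sketch: with no explicit stash configured,
// ChronologyProtector falls back to the main cache type rather than
// the 'db-replicated' main stash.
$wgMainCacheType = CACHE_MEMCACHED; // assumed example backend
// Optional explicit override:
// $wgChronologyProtectorStash = CACHE_MEMCACHED;
```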
Bug: T254634
Change-Id: I0456f5d40a558122a1b50baf4ab400c5cf0b623d
This is a micro-optimization of closure code to avoid binding the closure
to $this where it is not needed.
Created by I25a17fb22b6b669e817317a0f45051ae9c608208
Change-Id: I0ffc6200f6c6693d78a3151cb8cea7dce7c21653
This is sent at the end of the LBFactory::shutdown wrapper, so will
still happen at the same logical point in time.
Use LBFactory->replLogger since it is also the logger used
by ChronologyProtector.
Bug: T254634
Change-Id: Ic4a9573e6cd3ea00f77b2f44c03453c5b96fa486