iconv() can still emit notices even when the '//IGNORE'
flag is passed.
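A hedged sketch of the behaviour being worked around (input bytes are illustrative): even with `//IGNORE`, iconv() may raise a "Detected an illegal character in input string" notice for bytes invalid in the source encoding, so the caller still has to suppress or handle it.

```php
<?php
// "\xB5" is not valid UTF-8, so iconv() may emit a notice here even
// though //IGNORE is passed; the @ keeps the notice from surfacing.
$input = "caf\xC3\xA9 \xB5";
$out = @iconv( 'UTF-8', 'ASCII//IGNORE', $input );
if ( $out === false ) {
    // On some PHP versions the conversion can still fail outright.
    $out = '';
}
```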
Bug: T387690
Change-Id: I16f1e99f7c25457aa0b35cb428391c42dec7b91d
(cherry picked from commit 357f2b61e815e071147583e07b388801189462bf)
Changes to the use statements done automatically via script
Addition of missing use statement done manually
Change-Id: I73fb416573f5af600e529d224b5beb5d2e3d27d3
Implicitly marking parameter $... as nullable is deprecated in PHP 8.4;
the explicit nullable type must be used instead.
Created with autofix from Ide15839e98a6229c22584d1c1c88c690982e1d7a
Break one long line in SpecialPage.php
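A minimal before/after sketch of the deprecation (the class and function names are illustrative, not the autofixed code):

```php
<?php
class LinkTarget {}

// Deprecated in PHP 8.4 (implicitly nullable):
//   function setTarget( LinkTarget $target = null ): bool { ... }
// Explicit nullable type, as applied by the autofix:
function setTarget( ?LinkTarget $target = null ): bool {
    // Behaviour is unchanged; only the signature spells out the null.
    return $target !== null;
}
```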
Bug: T376276
Change-Id: I807257b2ba1ab2744ab74d9572c9c3d3ac2a968e
There is already UploadBase::$mTempPath
Added in baf83f74b8 (r82507), but unneeded.
Also remove the error check of $mTempPath.
Since e4009c7367 (I8bafc3a6e6) the property $mTempPath can no longer
be false. Previously, the false was from tempnam().
Since f9c6af781c (Icd05956608) it fatals when no temp file can be
created, because TempFSFile::factory can return null and bind() is
called unconditionally.
Change-Id: Ia1302c9c0528691436a0411ca62b651471811c98
wfGetUrlUtils() is also deprecated, but less so, so we can do this first
and then properly replace the individual uses with dependency injection
in local pieces of work.
Also:
* Switching Parser::getExternalLinkRel to UrlUtils::matchesDomainList
exposed a type error in media.txt where $wgNoFollowDomainExceptions
was set to a string (which is invalid) instead of an array.
Bug: T319340
Change-Id: Icb512d7241954ee155b64c57f3782b86acfd9a4c
Add doc-typehints to class properties found by the PropertyDocumentation
sniff to improve the documentation.
Once the sniff is enabled, it prevents new code from being added
without type declarations. This is focused on documentation and does
not change code.
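An illustrative sketch (class and property names are made up) of what the sniff asks for: a documented type on every class property, with no runtime change.

```php
<?php
class UploadLogEntry {
    /** @var string|null Doc-typehint added for the sniff; no code change */
    private $comment;

    /** @var int[] Chunk offsets, documented rather than natively typed */
    private $offsets = [];
}
```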
Change-Id: I07ce1f37d1bfb18d6e73dd008a712b3ca60a80e9
This makes the code more compact and more readable while doing the
same as before.
One relevant note: I'm also removing a null check for a variable
that will be used as an array index. This is fine because of the ??
operator. What actually happens is a $maxUploadSize[''] array
access, which never exists and falls back to what comes after the ??.
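A minimal sketch of the pattern described above (the array keys and values are illustrative). A null index is coerced to '', `$maxUploadSize['']` is never set, and `??` falls through to the fallback without emitting a notice, which is why the separate null check is redundant:

```php
<?php
$maxUploadSize = [ 'file' => 104857600, 'url' => 52428800, '*' => 209715200 ];
$forType = null;
// Null index becomes ''; that key never exists, so ?? takes over.
$limit = $maxUploadSize[$forType] ?? $maxUploadSize['*'];
```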
Change-Id: I7fc82fd179c9594ce5755327523ceec4f502d14f
In change I625a48a6ecd3fad5c2ed76b23343a0fef91e1b83 I am planning to
make Wikimedia\Message\MessageValue use it, and we try to pretend that
it is a library separate from MediaWiki, so it makes sense to move
MessageSpecifier to the same namespace under Wikimedia\.
Bug: T353458
Change-Id: I9ff4ff7beb098b60c92f564591937c7d789c6684
And deprecated aliases for the non-namespaced classes.
ReplicatedBagOStuff, which is already deprecated, is not moved.
Bug: T353458
Change-Id: Ie01962517e5b53e59b9721e9996d4f1ea95abb51
Changes to the use statements done automatically via script
Addition of missing use statement done manually
Change-Id: Id9f3e775e143d1a17b6b96812a8230cfba14d9d3
This patch introduces the Wikimedia\FileBackend namespace for
FileBackend and establishes a class alias marked as deprecated
since version 1.43.
Bug: T353458
Change-Id: Id897687b1d679fd7d179e3a32e617aae10ebff33
This commit replaces some of the uses of getErrorsArray(),
getWarningsArray(), getErrorsByType(), and getErrors().
In many cases the code becomes shorter and clearer.
Follow-up to Ibc4ce11594cf36ce7b2495d2636ee080d3443b04.
Change-Id: Id0ebeac26ae62231edb48458dbd2e13ddcbd0a9e
Remove the check for false from IDatabase::select, as this is not possible.
A DBQueryError is thrown (documented since efda8cd3 / I056b7148)
Use IResultWrapper::numRows to check for empty IResultWrapper
This ignores includes/libs/rdbms, as QUERY_SILENCE_ERRORS is an internal
option to get false from this function
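A self-contained sketch of the pattern with a stand-in wrapper; in MediaWiki the real types are IDatabase / IResultWrapper, and a failed query throws DBQueryError instead of returning false.

```php
<?php
class FakeResultWrapper {
    /** @param array[] $rows */
    public function __construct( private array $rows ) {}

    public function numRows(): int {
        return count( $this->rows );
    }
}

function summarize( FakeResultWrapper $res ): string {
    // Before: if ( $res === false ) { ... }  -- dead code, errors throw.
    // After: only emptiness needs an explicit check.
    if ( !$res->numRows() ) {
        return 'empty';
    }
    return $res->numRows() . ' row(s)';
}
```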
Change-Id: I4b2fc26ca0e68612f6beadc01e68097a74962c84
With this change, when async uploads are enabled, upload-by-url
will spawn a job and a form with a button to check the status of the
process is shown to the user.
In the process, add processing of warnings in the remote jobs spawned
by the API or the Special page. This is done by adding checks to
UploadJobTrait::verifyUpload. In order to manage warnings serialized
in the job status, a method to unserialize the result of
UploadBase::makeWarningsSerializable is added.
Things that we might want to fix:
* The form's UI is abysmal; we should probably use Codex
* While it's not a huge deal, I'd like to figure out why I need to
purge the page cache if I want the file to show up. And more
interestingly, why this doesn't happen when uploading via the API
Bug: T295007
Bug: T118887
Change-Id: I49181d93901f064815808380285fc4abae755341
To this end, if 'async' is passed with 'url' to the API:
* Avoid downloading the file synchronously, but verify early
whether the upload is allowed, by adding a canFetchFile() method to
UploadBase, overridden in UploadFromUrl
* Spawn an UploadFromUrlJob
* When checking for the status of the job, do it by fetching the data
from the main stash.
Bug: T295007
Change-Id: If95ccf376cfa9fbe9b3cd058e8e334a6bdd2eb44
During a stashed upload, the SHA1 has already been calculated and
is populated based on data saved in the DB. Reuse that value in
verifyPartialFile() instead of recalculating as SHA1 can take a
long time to calculate for large files.
This should improve the speed of PublishStashedFile jobs.
Bug: T200820
Change-Id: Ie2967c636b2f942921a125ef62d1a466c6035ca0
This will be useful in making the upload async, allowing us
to retrieve the status of an upload remotely.
Change-Id: I7279185b3c5ece5f4177c0550ca0852810c8f052
AssembleUploadChunks was calculating the SHA1 hash of the same
file 5 times in a row. Calculating SHA1 hashes can be somewhat
expensive for multi-GB files, making the job slow, possibly to
the point of a timeout. This change ensures that the SHA1 value
is kept and reused when applicable so that the job will only
calculate it once.
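A sketch of the fix's shape (the class and method names are illustrative, not the job's real API): the hash is computed at most once and reused by later callers.

```php
<?php
class AssembledFile {
    private ?string $sha1 = null;

    public function __construct( private string $path ) {}

    public function getSha1(): string {
        // sha1_file() over a multi-GB file is expensive; compute once
        // and keep the result for every subsequent caller.
        if ( $this->sha1 === null ) {
            $this->sha1 = sha1_file( $this->path ) ?: '';
        }
        return $this->sha1;
    }
}
```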
Bug: T200820
Change-Id: I842814c7a2b7dc6e427e040c8dd4d43e7c7cabb4
I believe deleting chunks of large uploads may have been taking
a fair bit of time, potentially triggering a timeout causing the
whole job to fail (which is unfortunate, as they will just be
deleted by a cron job later). Putting them in a deferred job
after transactions commit should ensure that the upload will
still go through.
Additionally, this ties it to AutoCommitUpdate which should cause
the deletes to be cancelled if something causes the transaction
to abort, which is a step towards making the Assemble job retriable
on failure.
Bug: T200820
Change-Id: I49b8eb23adcd8960783f4f90707faa21760ce078
One possible theory for why chunked upload is unreliable is that
there are multiple jobs racing each other. Add some logging warnings
that should detect such a situation, and abort the job if it appears
it is already in progress.
Bug: T200820
Change-Id: Ifaf217bc288dfaa1f7f4ca7e58f8210df232db1b
Users often complain that chunked upload is unreliable. However, it
is often difficult to see what happened when it failed. Add
additional debug logging so we can better determine how often
chunked upload fails, and hopefully have a better idea what the
causes are.
This only adds logging and should not change any behaviour.
Change-Id: I45b710fa57c7d05bb27a7b00a3303e78f5d2ff2a
This is used to (among other things) detect lang tags in multilingual
SVGs. Users have complained that lang tags are often missed in large
SVG files.
The cut-off is used for two things during upload:
* Run some (simple) regexes to detect <?xml header
* Use XMLReader (with entity substitution enabled!) to look for specific tags.
The first check doesn't need a configurable cut-off. Change it to
look at the first 4096 bytes only. The <?xml header is required to be
the first thing in the file other than a BOM, so this should be more
than sufficient; XML parsers give a fatal error if there is whitespace
before the <?xml declaration.
It seems unlikely to be problematic to use XMLReader on up to 5MB of the file,
since that is a "pull" XML parser, and won't load the entire file at once.
The code that cuts off the SVG at the 5MB mark likely uses more memory
than parsing the file does. In fact, we separately use XMLReader to do
security checks with no such cut-off, so potentially it could even make sense
to remove the cut-off entirely, since clearly parsing the full file is not
causing problems.
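A rough sketch of the two checks described above (the function names are illustrative, not MediaWiki's): a fixed 4096-byte prefix scan for the `<?xml` declaration, and a streaming XMLReader pass that never loads the whole document into memory.

```php
<?php
function hasXmlDeclaration( string $contents ): bool {
    $head = substr( $contents, 0, 4096 );
    // Only a BOM may legally precede the declaration.
    $head = preg_replace( '/^\xEF\xBB\xBF/', '', $head );
    return (bool)preg_match( '/^<\?xml\b/', $head );
}

function rootElementIs( string $contents, string $name ): bool {
    $reader = new XMLReader();
    if ( !$reader->XML( $contents ) ) {
        return false;
    }
    // Pull parser: nodes are visited one at a time, so even a large
    // document is not held in memory all at once.
    while ( @$reader->read() ) {
        if ( $reader->nodeType === XMLReader::ELEMENT ) {
            return $reader->localName === $name;
        }
    }
    return false;
}
```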
Bug: T270889
Change-Id: I7350918647d92c40934a7c86e906b6bfb8a40ada
Updating the name & email address for Brooke Vibber.
Also re-ran updateCredits.php, so there are some new entries in there.
There are a couple of files in resources/libs that will have to
be changed upstream to keep tests happy, I will do patches
later. :D
Change-Id: I2f2e75d3fa42e8cf6de19a8fbb615bac28efcd54
The array spread operator is documented to behave identically to
array_merge. The syntax is just much shorter and easier to read in
situations like this, in my opinion.
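The equivalence being relied on, sketched with illustrative values (string keys in spreads additionally require PHP >= 8.1):

```php
<?php
$core = [ 'png', 'gif' ];
$extra = [ 'svg', 'webp' ];

// Same result, two spellings:
$viaMerge = array_merge( $core, $extra );
$viaSpread = [ ...$core, ...$extra ];
```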
Change-Id: I3b016e896e552af53d87d5e72436dc4e29070ce1
Just use the result from one call in both places.
I'm also re-arranging the code for readability. This is quite
critical here: a file name like ".htaccess", where the very first
character is a dot but no other dot follows, must be considered a
filename without an extension. I hope this is more visible with
the `> 0`.
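A sketch of the edge case called out above (the helper name is illustrative): a dot at position 0 is not an extension separator, hence the `> 0` comparison.

```php
<?php
function splitFilenameExtension( string $name ): array {
    $dotPos = strrpos( $name, '.' );
    // strrpos() returns false when there is no dot at all; both that
    // and a leading dot (position 0) mean "no extension", and neither
    // satisfies `> 0`.
    if ( $dotPos > 0 ) {
        return [ substr( $name, 0, $dotPos ), substr( $name, $dotPos + 1 ) ];
    }
    return [ $name, '' ];
}
```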
Change-Id: I24179de62c3f4443effe8a4ebd089a3f77fd84e3
Found via (?<!IDBAccessObject)::READ_
We are planning to deprecate and remove implementing IDBAccessObject
interface just to use the constants.
Bug: T354194
Change-Id: I89d442fa493b8e5332ce118e5bf13f13b8dd3477
MediaWiki's MWException class is deprecated since version 1.40.
Accordingly, this patch replaces the MWException base class with
PHP's built-in Exception class in UploadChunkFileException to
conform to current standards.
Change-Id: Iaf18520576a237d909e02c3238eb75070bcd5a6e
API modules are high-level request handlers; lower-level code should
not depend on them.
This patch solves the problem only partially, since it leaves references
to ApiUpload in AssembleUploadChunksJob and PublishStashedFileJob. These
jobs were already accessing ApiMain, so while this does not fully resolve
the problem, it reduces it.
Change-Id: I39c9e30cfb2860c573eed8a791f1a292a83cbd76
This method is now redundant since rate limit checks are implicit in
permission checks. verifyPermissions() calls authorizeWrite( 'upload' ),
which will enforce any limits on the upload action.
Change-Id: I2ab3c646b8246411df501b548f652eaf11d0bc8e
Log all the errors from a status object and not only the first one in
the exception. This assumes errors or warnings from the FileBackend
always indicate an infrastructure error and never a user-input-related
error (which should not be logged).
FileRepo::quickImportBatch now merges in the status of the init
operations and the real operations, showing all failures in the log.
Seeing only backend-fail-internal is not helpful.
Bug: T228292
Change-Id: I3f03b93de835f6f5497f4b5904b37d62d40eb9f2