Follow-up to I4c7d826c7ec654b, I1287f3979aba1bf1.
We lose useful coverage and spend valuable time keeping these accurate
through refactors (or worse, forget to do so). The theoretically "bad"
accidental coverage is almost never actually bad.
Having said that, I'm not removing them wholesale (yet). I've audited
each of these specific files to confirm that it is a general test of
the specified subject class, and kept the annotation limited to those
specified classes. That's imho more than 100% of the benefit for less
than 1% of the cost (more, because a class-level `@covers` is more
valuable than the fragile and corrosive per-method tracking in tests,
which inevitably gets out of date with no local incentive to keep it
up to date).
Cases like structure tests keep `@coversNothing` etc., and we still
don't count coverage of other classes. There may be a handful of large
legacy classes where some methods are effectively class-like in
complexity; that's why it's good for PHPUnit to offer this precision
instrument, but it doesn't mean we have to use it by default for
everything.
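For illustration, the kept annotations are of this class-level form
(the class and namespace names here are made up):
  /**
   * @covers \MediaWiki\Foo\FooManager
   */
  class FooManagerTest extends MediaWikiUnitTestCase {
      // Individual test methods carry no per-method @covers tags;
      // incidental coverage of other classes is still not counted.
  }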
I think best practice is to write good, narrow unit tests that reflect
how the code should be used in practice; not to write bad tests and
hide part of their coverage within the same class or even namespace.
Fortunately, that's generally what we do already; it's just that in
many cases we also kept these annotations around.
Keeping these per-method tags wastes:
* time keeping the method lists in sync,
* time to realize (and fix) when other people inevitably didn't keep
  them in sync,
* time to find uncovered code only to realize it is already covered,
* time for a less experienced engineer to feel obligated to write a
  low-quality test covering the "missing" branch in an unrealistic way,
* time wasted during onboarding when such "bad" tests serve as examples
  of how to use the code, only to be unlearned months or years later,
* loss of telemetry about which code actually isn't properly tested,
  because it is masked by a bad test,
* and lost opportunities to find unused/unreachable code and to think
  about how to restructure the code so that it can perhaps be removed.
------
Cases like LBFactoryTest.php especially were getting out of hand, and
in GlobalIdGeneratorTest.php we even resorted to inline comments
reminding people to keep the tags in sync.
Change-Id: I69b5385868cc6b451e5f2ebec9539694968bf58c
The global function wfWikiID() has been deprecated since 1.35, and its
uses should be replaced with WikiMap::getCurrentWikiId().
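For illustration, the mechanical replacement looks like:
  // Before
  $wikiId = wfWikiID();
  // After
  $wikiId = WikiMap::getCurrentWikiId();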
Bug: T298059
Change-Id: I22d96b7aec17323d15a9bc401d4511ad2ee14165
* parent::setUp() should be called first, and parent::tearDown()
  should be called last (see the sketch after this list)
* Move tests that directly extend PHPUnit\Framework\TestCase
to /unit
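A minimal sketch of the intended ordering (the class names here are
illustrative only):
  class FooTest extends MediaWikiTestCase {
      protected function setUp() : void {
          parent::setUp(); // first
          // ... test-specific setup ...
      }
      protected function tearDown() : void {
          // ... test-specific teardown ...
          parent::tearDown(); // last
      }
  }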
Change-Id: I1172855c58f4f52a8f624e6d596ec43beb8c93ff
The name change happened some time ago, and I think it's about time to
start using the new name!
(Done with a find and replace)
My personal motivation for doing this is that I have started trying
out VS Code as an IDE for MediaWiki development, and right now it
doesn't appear to handle PHP class aliases very well, or at all.
Change-Id: I412235d91ae26e4c1c6a62e0dbb7e7cf3c5ed4a6
Done with `composer fix` and suppressing the rest (i.e. sniffs for
global variables, which for core should be suppressed anyway).
Additionally, add `-p` to `phpcbf`, as otherwise it shows no progress
and just appears to be stuck.
Change-Id: Ide8d6cdd083655891b6d654e78440fbda81ab2bc
Add public, protected or private to functions missing a visibility
keyword. Enable the tests folder for the phpcs sniff.
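For illustration:
  // Before: implicit public visibility
  function getName() {
      return $this->name;
  }
  // After: explicit visibility keyword
  public function getName() {
      return $this->name;
  }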
Change-Id: Ibefce76ea9984c47e08c94889ea2eafca7565e2c
assertSame() is guaranteed to not do any type conversion. This can be
critical when accidentally comparing, for example, 0 to 0.0.
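For illustration:
  // Passes: assertEquals() uses loose (==) comparison
  $this->assertEquals( 0, 0.0 );
  // Fails: assertSame() uses strict (===) comparison, and int !== float
  $this->assertSame( 0, 0.0 );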
Change-Id: Iffcc9bda69573623ba14af655dcd697d0fcce525
This reduces confidence in the test. There is no guarantee that it
won't return the same value twice over the course of a full PHPUnit
run of all test suites, whether twice in a row or 20 minutes apart.
For a test that needs a string of any kind, use an explicit, consistent
and cheap literal value.
For a test that specifically needs some kind of uniqueness compared to
something else within the same test case, make that explicit.
Tests that require something globally unique (for some undefined/vague
definition of "global") were not found, and should not exist anyway.
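For illustration (the literal values are made up):
  // A test that just needs "some string": use a cheap literal
  $value = 'test-value';
  // A test that needs two values that differ: make that explicit
  $first = 'test-value-1';
  $second = 'test-value-2';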
Also, in libs/objectcache tests, fix the order of parameters in some
assertions (expected first, then actual), and use assertFalse/assertSame
instead of assertEquals for cases where false is expected, to remove
tolerance of other loosely equal values.
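For illustration (the cache variable and key are hypothetical):
  // Before: arguments swapped, and loose comparison tolerates 0, '', null
  $this->assertEquals( $cache->get( 'nonexistent-key' ), false );
  // After: expected value comes first, and only boolean false passes
  $this->assertFalse( $cache->get( 'nonexistent-key' ) );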
Change-Id: Ifc60e88178da471330b94bfbf12e2731d2efc77d
Simplify the code of jobs that do not care about titles and remove
the direct Title dependency from JobQueue. Remove getTitle() from
IJobSpecification itself. Move all the Job::factory() calls into a
single JobQueue::factoryJob() method.
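A rough sketch of the shape of the new helper (the body and signature
here are assumptions, not necessarily the exact implementation):
  class JobQueue {
      /**
       * One place that turns a type/params specification into a Job
       * object, replacing the scattered Job::factory() calls.
       */
      protected function factoryJob( $type, $params ) {
          return Job::factory( $type, $params );
      }
  }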
Depends-on: Iee78f4baeca0c0b4d6db073f2fbcc56855114ab0
Change-Id: I9c9d0726d4066bb0aa937665847ad6042ade13ec
Also moved some WikiMap/$wgJobClasses checks to JobQueueGroup::pop(),
which is the method callers are supposed to use.
Change-Id: I2ab82d8adc4ae1f54697d2935afa2053539cf2db
This already requires a DB domain ID, so there is no reason to have
hacks for trying to handle a wiki ID being passed in instead. If the
provided domain has a schema, it should not simply be ignored in the
comparison.
Change-Id: I9ced7a46fa05f32843a9a7d17391c5d0576b099c
Using domains means that JobQueueDB has the right value to use for calls
like LoadBalancer::getConnection(). The full domain includes the schema in
the case of Postgres. This makes calls to getConnection() less awkward by
not relying on the fallback logic in reallyOpenConnection() for null schemas.
Make getWikiIdFromDomain/isCurrentWikiDomain account for the schema if it
is defined and not simply the generic "mediawiki" schema that MediaWiki
uses by default. If all wikis use the default schema, the wiki IDs can get
by with DB/prefix alone, which various config and methods may be built around.
Otherwise, the config callbacks must account for schema and the config must
include it in various wiki domain ID lists to properly disambiguate wikis.
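A rough sketch of the intended schema check (an illustration, not the
exact implementation):
  $schema = $domain->getSchema();
  // The schema only matters if it is set and is not the generic
  // default "mediawiki" schema; otherwise DB name/prefix alone
  // identify the wiki.
  $schemaMatters = ( $schema !== null && $schema !== 'mediawiki' );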
Also, clean up SiteConfiguration::siteFromDB() since it is not meant
to handle schemas unless the callback method was tailored to do so.
Finally, add more comments to DefaultSettings.php about already existing
limitations of wiki domain IDs and their components.
Change-Id: I8d94a650e5c99a19ee50551c5be9544318eb05b1
The "AUTO" means AUTOCOMMIT, not "automatic transactions"/DBO_TRX,
which is basically the opposite concept. The new name does not
suffer from that ambiguity.
Keep the old constant as an alias for backwards compatibility.
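Illustration of the pattern (the constant names here are assumed, not
taken verbatim from this change):
  interface ILoadBalancer {
      /** New, unambiguous name */
      public const CONN_TRX_AUTOCOMMIT = 1;
      /** @deprecated Alias kept for backwards compatibility */
      public const CONN_TRX_AUTO = self::CONN_TRX_AUTOCOMMIT;
  }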
Also remove a LoadBalancer comment about a non-existent field
Change-Id: I63beeb061fc9be73f320308e4d6393b58628b8c8
When LoadBalancer opens new local-domain connections, it currently
assumes that the domain specified by the server info array is the
same as the local domain. For sanity, make sure that the handle is
set to the local domain.
The main LBFactory/LoadBalancer use $wgDBname/$wgDBprefix as the
local domain, corresponding with wfWikiID(). This relation is set
automatically in MWLBFactory. If $wgLBFactoryConf/$wgDBservers is
manually configured in a way that breaks this correspondence, then it
is misconfigured.
Fixes made to avoid test failure:
* Make sure LoadBalancer::setDomainPrefix() updates the local
domain alias member. Also do not bother changing the domain of
foreign connections.
* Use the right domain ID for the connection array key names in
LoadBalancer::openForeignConnection().
* Now that JobQueueTest no longer mistakenly uses the non-test
tables, force it to use the main DB_MASTER handle so that it can
see the unit test tables even if they are TEMPORARY; such tables
are tied to the TCP connection, so separate handles see different
temporary tables.
Change-Id: I56f8b32fe957f984b8c9753e6db3b20abe96b038
* Track queues with non-abandoned jobs per partition server.
The s-queuesWithJobs key can easily be queried to see which
queues need to have periodic tasks run (or for debugging).
* This is a requirement for the redis jobchron service to be able to
  avoid hitting N = (number of job types x number of wikis) queues for
  periodic tasks when only a tiny fraction of those actually have any
  jobs. For WMF, there are over 30K queues, most of them empty, so doing
  that can help lower redis-server CPU (or at least make jobchron more
  responsive).
* This also allows jobchron to manage the aggregator by taking the
  per-server aggregator sets and merging them. This scales much better,
  as there are only a modest number of these daemons (18 for WMF) but
  vastly more web threads pushing jobs. This cuts down on the connections
  to the active aggregator server (the one with the hash table).
* Use Lua unpack() more for stylistic consistency.
Change-Id: I1549f0edc78cc4004dd887b475dec4c0ebd306c6
If we really need this, we can do it in MediaWikiTestCase, next
to the setting of wgMainCacheType. But from what I can see, the
code being tested here already doesn't use the old $wgMemc.
Change-Id: I9e4b2109b2f3c18d8d5551bbadae5711c1d4c0a6
* Remove some getAcquiredCount() assertions when claimTTL=0,
  as this is not well-defined enough (queues may take a few
  minutes to garbage-collect the failed jobs).
* Add some tests to make sure push() only de-duplicates
  among unclaimed jobs.
Change-Id: Ie0a5e539095c245dfcc8c160417e12824eb7ab83
I noticed JobQueueTest::testRootDeduplication takes ~6.5 seconds, which
is due to the test method using sleep(1) and being passed the data
provider provider_queueLists, which yields six items.
The reason for the sleep is to have the array returned by
Job::newRootJobParams() contain an increased value for
'rootJobTimestamp'. Instead, just copy the previous array of
parameters, increment the UNIX timestamp, and convert it back to the
TS_MW format.
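A minimal sketch of the replacement (variable names are illustrative):
  $params = $lastParams;
  // Bump the root job timestamp by one second without sleeping
  $params['rootJobTimestamp'] = wfTimestamp(
      TS_MW,
      wfTimestamp( TS_UNIX, $lastParams['rootJobTimestamp'] ) + 1
  );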
Change-Id: I75066df73f9f92e56b89eb6d928c41e949a2d6a9
Also fix some earlier inconsistencies pointed out by Krinkle in change IDs:
* Ide20743a2e84ff68549286120e6cff9d9f396f54
* I811ca957b6588085d67606ebc0cd4033a1e53839
Change-Id: Ife33b931870d0d7e04fcb40974997436d27f528f
Change some tests to use setMwGlobals so that globals are restored
after the test.
This also removes some manual save/restore code, which is not needed
because setMwGlobals automatically restores the values on tearDown.
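For illustration (wgFoo is a made-up global):
  // Before: manual save/restore around the test
  $old = $GLOBALS['wgFoo'];
  $GLOBALS['wgFoo'] = 'test value';
  // ... assertions ...
  $GLOBALS['wgFoo'] = $old;
  // After: restored automatically in tearDown()
  $this->setMwGlobals( 'wgFoo', 'test value' );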
Change-Id: I8d2ac9f6cc14f0bd4ee8eb851c09f2e71babc6e0
* Cleaned up some data structures into hashes, which get better
compression and play well with the KEYS parameter in Lua scripts.
The claim list is now a sorted set with O(logN) removal in ack()
and O(log(N)+M) searching in recycleAndDeleteStaleJobs().
* Made the class itself control object serialization, so that Lua
  scripts have an easy time. Only the job data itself needs to be
  serialized, whereas other things would just get bloated.
* Used Lua scripts to get push(), pop() and ack() down to 1 RTT.
* Likewise rewrote recycleAndDeleteStaleJobs() to use a script.
* Fixed a bug where claimed duplicate jobs removed the data on ack(),
  which meant that claimed duplicate jobs could no-op newer ones.
  De-duplication should only apply to unclaimed jobs, as with the
  JobQueueDB class, so that unfinished jobs don't no-op new ones.
* Removed locking in recycleAndDeleteStaleJobs(), which would not do
  much since the exclusive set request would serialize on the Lua
  script anyway. The Lua script will finish quickly on subsequent runs
  if invoked more than once in a row, due to the sorted set usage.
  Also made recycleAndDeleteStaleJobs() run randomly to reduce the
  chance of a single caller tying up the server.
* Removed useless hDel() call in getJobFromUidInternal().
* Changed unit tests to handle the different supported orders better.
Added tests for the 'timestamp' ordering.
Change-Id: Ib2d7aff18753195248ab856afd4a46e18b301db9
* Cleaned up 'server' option to not fragment the pool.
Also made it actually match the documentation.
* Made it use doGetPeriodicTasks() for job recycling.
* Made it so that other job queue classes can be tested.
* Renamed "redisConf" => "redisConfig".
* Tweaked comments about the "random" order option.
Change-Id: I7823d90010e6bc9d581435c3be92830c5ba68480