wiki.techinc.nl/includes/poolcounter/PoolCounterWork.php

<?php
/**
 * Provides semaphore semantics for restricting the number
 * of workers that may be concurrently performing the same task.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 * http://www.gnu.org/copyleft/gpl.html
 *
 * @file
 */
/**
 * Class for dealing with PoolCounters using class members
 */
abstract class PoolCounterWork {
	/** @var string */
	protected $type = 'generic';
	/** @var bool */
	protected $cacheable = false; // does this override getCachedWork()?
	/** @var PoolCounter */
	private $poolCounter;

	/**
	 * @param string $type The class of actions to limit concurrency for (task type)
	 * @param string $key Key that identifies the queue this work is placed on
	 * @param PoolCounter|null $poolCounter
	 */
	public function __construct( string $type, string $key, ?PoolCounter $poolCounter = null ) {
		$this->type = $type;
		// MW >= 1.35
		$this->poolCounter = $poolCounter ?? PoolCounter::factory( $type, $key );
	}

	/**
	 * Actually perform the work, caching it if needed
	 *
	 * @return mixed Work result or false
	 */
	abstract public function doWork();

	/**
	 * Retrieve the work from cache
	 *
	 * @return mixed Work result or false
	 */
	public function getCachedWork() {
		return false;
	}

	/**
	 * Retrieve a not-so-good version of the work (e.g. a stale or expired one),
	 * which is still better than showing an error message.
	 *
	 * @param bool $fast True if PoolCounter is requesting a fast stale response (pre-wait)
	 * @return mixed Work result or false
	 */
	public function fallback( $fast ) {
		return false;
	}

	/**
	 * Do something with the error, like showing it to the user.
	 *
	 * @param Status $status
	 * @return bool
	 */
	public function error( $status ) {
		return false;
	}

	/**
	 * Should fast stale mode be used?
	 *
	 * @return bool
	 */
	protected function isFastStaleEnabled() {
		return $this->poolCounter->isFastStaleEnabled();
	}

	/**
	 * Log an error
	 *
	 * @param Status $status
	 * @return void
	 */
	public function logError( $status ) {
		$key = $this->poolCounter->getKey();
		wfDebugLog( 'poolcounter', "Pool key '$key' ({$this->type}): "
			. $status->getMessage()->inLanguage( 'en' )->useDatabase( false )->text() );
	}

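	/**
	 * @par Example
	 * The methods above form the subclass contract. The following is a
	 * hypothetical sketch (not part of the original file): the helpers
	 * renderPage(), getRenderCache() and getStaleRender(), and the
	 * 'ArticleView' pool type, are illustrative assumptions; the pool type
	 * must match a key configured in $wgPoolCounterConf.
	 *
	 * @code
	 * class ExpensiveRenderWork extends PoolCounterWork {
	 *     protected $cacheable = true;
	 *     private $page;
	 *
	 *     public function __construct( string $page ) {
	 *         $this->page = $page;
	 *         parent::__construct( 'ArticleView', 'render:' . $page );
	 *     }
	 *
	 *     public function doWork() {
	 *         // Expensive computation, run by a limited number of workers per key
	 *         return renderPage( $this->page );
	 *     }
	 *
	 *     public function getCachedWork() {
	 *         // Result produced by another worker, or false if none
	 *         return getRenderCache( $this->page );
	 *     }
	 *
	 *     public function fallback( $fast ) {
	 *         // Serve stale content rather than an error, if available
	 *         return getStaleRender( $this->page );
	 *     }
	 * }
	 *
	 * $result = ( new ExpensiveRenderWork( 'Main_Page' ) )->execute();
	 * @endcode
	 */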
	/**
	 * Get the result of the work (whatever it is), or the result of the error() function.
	 * This returns the result of one of the following methods:
	 * - a) doWork() : Applies if the work is exclusive or no other process
	 *      is doing it, and on the condition that either this process
	 *      successfully entered the pool or the pool counter is down.
	 * - b) getCachedWork() : Applies if the work is cacheable and this blocked on another
	 *      process which finished the work.
	 * - c) fallback() : Applies for all remaining cases.
	 * If these all fall through (by returning false), then the result of error() is returned.
	 *
	 * In slow stale mode, these three methods are called in the sequence given above, and
	 * the first non-false response is used.
	 *
	 * In fast stale mode, fallback() is called first if the lock acquisition would block.
	 * If fallback() returns false, the lock is waited on, then the three methods are
	 * called in the same sequence as for slow stale mode, including potentially calling
	 * fallback() a second time.
	 *
	 * @param bool $skipcache
	 * @return mixed
	 */
	public function execute( $skipcache = false ) {
if ( $this->cacheable && !$skipcache ) {
if ( $this->isFastStaleEnabled() ) {
// In fast stale mode, do not wait if fallback() would succeed.
// Try to acquire the lock with timeout=0
$status = $this->poolCounter->acquireForAnyone( 0 );
if ( $status->isOK() && $status->value === PoolCounter::TIMEOUT ) {
// Lock acquisition would block: try fallback
$staleResult = $this->fallback( true );
if ( $staleResult !== false ) {
return $staleResult;
}
// No fallback available, so wait for the lock
$status = $this->poolCounter->acquireForAnyone();
} // else behave as if $status were returned in slow mode
} else {
$status = $this->poolCounter->acquireForAnyone();
}
} else {
$status = $this->poolCounter->acquireForMe();
}
if ( !$status->isOK() ) {
// Respond gracefully to complete server breakage: just log it and do the work
$this->logError( $status );
return $this->doWork();
}
switch ( $status->value ) {
case PoolCounter::LOCK_HELD:
// Better to ignore nesting pool counter limits than to fail.
// Assume that the outer pool limiting is reasonable enough.
/* no break */
case PoolCounter::LOCKED:
try {
return $this->doWork();
} finally {
$this->poolCounter->release();
}
// no fall-through, because try returns or throws
case PoolCounter::DONE:
$result = $this->getCachedWork();
if ( $result === false ) {
					/* The other worker's cached result was not usable.
					 * Retry, acquiring the lock for ourselves this time.
					 */
return $this->execute( true );
}
return $result;
case PoolCounter::QUEUE_FULL:
case PoolCounter::TIMEOUT:
$result = $this->fallback( false );
if ( $result !== false ) {
return $result;
}
/* no break */
/* These two cases should never be hit... */
case PoolCounter::ERROR:
default:
$errors = [
PoolCounter::QUEUE_FULL => 'pool-queuefull',
PoolCounter::TIMEOUT => 'pool-timeout' ];
$status = Status::newFatal( $errors[$status->value] ?? 'pool-errorunknown' );
$this->logError( $status );
return $this->error( $status );
}
}
}
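
/*
 * A minimal sketch (not part of this file) of how execute() above is meant
 * to be driven from a subclass. The class name, pool type ('ArticleView')
 * and key are illustrative assumptions; the real constructor signature and
 * configured pool types depend on the MediaWiki version and
 * $wgPoolCounterConf in use.
 *
 * ```php
 * class HypotheticalExpensiveRender extends PoolCounterWork {
 *     private $renderCallback;
 *
 *     public function __construct( $key, callable $renderCallback ) {
 *         // 'ArticleView' is an assumed pool type from $wgPoolCounterConf
 *         parent::__construct( 'ArticleView', $key );
 *         $this->renderCallback = $renderCallback;
 *     }
 *
 *     public function doWork() {
 *         // Runs only while this worker holds a slot in the pool
 *         return ( $this->renderCallback )();
 *     }
 *
 *     public function error( $status ) {
 *         // Reached when the pool is full or timed out and no
 *         // fallback result was available
 *         return false;
 *     }
 * }
 *
 * // All workers using the same key serialize through the pool counter:
 * // $work = new HypotheticalExpensiveRender( 'article:12345', $renderFn );
 * // $result = $work->execute();
 * ```
 */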