wiki.techinc.nl/includes/poolcounter/PoolCounterWork.php


<?php
/**
* Provides semaphore semantics for restricting the number
* of workers that may be concurrently performing the same task.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
* http://www.gnu.org/copyleft/gpl.html
*
* @file
*/
/**
* Class for dealing with PoolCounters using class members
*/
abstract class PoolCounterWork {
/** @var string */
protected $type = 'generic';
/** @var bool */
protected $cacheable = false; // does this subclass override getCachedWork()?
/** @var PoolCounter */
protected $poolCounter;
/**
* @param string $type The type of PoolCounter to use
* @param string $key Key that identifies the queue this work is placed on
*/
public function __construct( $type, $key ) {
$this->type = $type;
$this->poolCounter = PoolCounter::factory( $type, $key );
}
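/*
 * For context: PoolCounter::factory() looks up the behaviour of $type in
 * the $wgPoolCounterConf configuration global. A minimal sketch of such an
 * entry (the 'ArticleView' type name and the exact numbers here are
 * illustrative, not defaults):
 *
 *   $wgPoolCounterConf = array( 'ArticleView' => array(
 *       'class' => 'PoolCounter_Client', // client for the poolcounterd daemon
 *       'timeout' => 15,   // seconds to wait before failing with TIMEOUT
 *       'workers' => 2,    // maximum concurrent workers per key
 *       'maxqueue' => 50,  // fail with QUEUE_FULL beyond this many waiters
 *   ) );
 */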
/**
* Actually perform the work, caching it if needed
* @return mixed Work result or false
*/
abstract public function doWork();
/**
* Retrieve the work from cache
* @return mixed Work result or false
*/
public function getCachedWork() {
return false;
}
/**
 * A result that is not quite as good (e.g. an expired one), but still
 * better than an error message.
 * @return mixed Work result or false
 */
public function fallback() {
return false;
}
/**
 * Do something with the error, like showing it to the user.
 *
 * @param Status $status
 * @return bool
 */
public function error( $status ) {
return false;
}
/**
 * Log an error
 *
 * @param Status $status
 * @return void
 */
public function logError( $status ) {
$key = $this->poolCounter->getKey();
wfDebugLog( 'poolcounter', "Pool key '$key' ({$this->type}): "
. $status->getMessage()->inLanguage( 'en' )->useDatabase( false )->text() );
}
/**
 * Get the result of the work (whatever it is), or the result of the error() function.
 * This returns the result of the first applicable method that returns a non-false value,
 * where the methods are checked in the following order:
 * - a) doWork() : Applies if the work is exclusive, or no other process
 * is doing it, and on the condition that either this process
 * successfully entered the pool or the pool counter is down.
 * - b) getCachedWork() : Applies if the work is cacheable and this blocked on another
 * process which finished the work.
 * - c) fallback() : Applies for all remaining cases.
 * If these all fall through (by returning false), then the result of error() is returned.
 *
 * @param bool $skipcache
 * @return mixed
 */
public function execute( $skipcache = false ) {
if ( $this->cacheable && !$skipcache ) {
$status = $this->poolCounter->acquireForAnyone();
} else {
$status = $this->poolCounter->acquireForMe();
}
if ( !$status->isOK() ) {
// Respond gracefully to complete server breakage: just log it and do the work
$this->logError( $status );
return $this->doWork();
}
switch ( $status->value ) {
case PoolCounter::LOCKED:
$result = $this->doWork();
$this->poolCounter->release();
return $result;
case PoolCounter::DONE:
$result = $this->getCachedWork();
if ( $result === false ) {
/* The work of the other process did not serve us.
 * Acquire the lock for ourselves.
 */
return $this->execute( true );
}
return $result;
case PoolCounter::QUEUE_FULL:
case PoolCounter::TIMEOUT:
$result = $this->fallback();
if ( $result !== false ) {
return $result;
}
/* no break */
/* These two cases should never be hit... */
case PoolCounter::ERROR:
default:
$errors = array(
PoolCounter::QUEUE_FULL => 'pool-queuefull',
PoolCounter::TIMEOUT => 'pool-timeout' );
$status = Status::newFatal( isset( $errors[$status->value] )
? $errors[$status->value]
: 'pool-errorunknown' );
$this->logError( $status );
return $this->error( $status );
}
}
}
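/*
 * A minimal sketch of a concrete subclass (the names PoolWorkExample and
 * ExampleCache are illustrative, not part of MediaWiki): it performs an
 * expensive parse, reuses the result of whichever worker held the lock,
 * and degrades to a stale copy under contention.
 *
 *   class PoolWorkExample extends PoolCounterWork {
 *       protected $cacheable = true;
 *       private $page;
 *
 *       public function __construct( $page ) {
 *           parent::__construct( 'ArticleView', 'example:' . $page );
 *           $this->page = $page;
 *       }
 *
 *       public function doWork() {
 *           // The expensive operation; at most 'workers' processes run
 *           // this concurrently for the same key.
 *           return ExampleCache::parseAndSave( $this->page );
 *       }
 *
 *       public function getCachedWork() {
 *           // DONE: another worker finished while we waited; reuse its result.
 *           return ExampleCache::getCurrent( $this->page );
 *       }
 *
 *       public function fallback() {
 *           // QUEUE_FULL / TIMEOUT: a stale entry beats an error message.
 *           return ExampleCache::getEvenIfExpired( $this->page );
 *       }
 *   }
 */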
/**
* Convenience class for dealing with PoolCounters using callbacks
* @since 1.22
*/
class PoolCounterWorkViaCallback extends PoolCounterWork {
/** @var callable */
protected $doWork;
/** @var callable|null */
protected $doCachedWork;
/** @var callable|null */
protected $fallback;
/** @var callable|null */
protected $error;
/**
* Build a PoolCounterWork class from a type, key, and callback map.
*
* The callback map must at least have a callback for the 'doWork' method.
* Additionally, callbacks can be provided for the 'doCachedWork', 'fallback',
* and 'error' methods. Methods without callbacks will be no-ops that return false.
* If a 'doCachedWork' callback is provided, then execute() may wait for any prior
* process in the pool to finish and reuse its cached result.
*
* @param string $type
* @param string $key
* @param array $callbacks Map of callbacks
* @throws MWException
*/
public function __construct( $type, $key, array $callbacks ) {
parent::__construct( $type, $key );
foreach ( array( 'doWork', 'doCachedWork', 'fallback', 'error' ) as $name ) {
if ( isset( $callbacks[$name] ) ) {
if ( !is_callable( $callbacks[$name] ) ) {
throw new MWException( "Invalid callback provided for '$name' function." );
}
$this->$name = $callbacks[$name];
}
}
if ( !isset( $this->doWork ) ) {
throw new MWException( "No callback provided for 'doWork' function." );
}
$this->cacheable = isset( $this->doCachedWork );
}
public function doWork() {
return call_user_func_array( $this->doWork, array() );
}
public function getCachedWork() {
if ( $this->doCachedWork ) {
return call_user_func_array( $this->doCachedWork, array() );
}
return false;
}
public function fallback() {
if ( $this->fallback ) {
return call_user_func_array( $this->fallback, array() );
}
return false;
}
public function error( $status ) {
if ( $this->error ) {
return call_user_func_array( $this->error, array( $status ) );
}
return false;
}
}
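/*
 * Usage sketch for PoolCounterWorkViaCallback (the 'FileRender' type and
 * the $file object are hypothetical):
 *
 *   $work = new PoolCounterWorkViaCallback( 'FileRender', $file->getKey(), array(
 *       'doWork' => function () use ( $file ) {
 *           return $file->render();
 *       },
 *       'error' => function ( $status ) {
 *           return $status; // surface the failure to the caller
 *       },
 *   ) );
 *   $result = $work->execute();
 */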