Produce monolog messages through kafka+avro
This allows a logging channel to be configured to write
directly to Kafka. Logs can be serialized either to JSON
blobs or to the more compact Apache Avro format.
The Kafka handler for Monolog needs a list of one or more
Kafka servers to query cluster metadata from. It should work
with any Monolog formatter, although some, such as
JsonFormatter, require formatBatch to be disabled because the
Kafka protocol encodes each record independently. This
requires the nmred/kafka-php library, version >= 1.3.0.
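For illustration only (not part of this change), wiring the
handler straight into a Monolog logger might look like the
sketch below, assuming the handler exposes the usual Monolog
setFormatter(). The factory argument mirrors the configuration
further down; the JsonFormatter batch mode is plain Monolog
and is shown only to make the formatBatch point concrete:
  use MediaWiki\Logger\Monolog\KafkaHandler;
  use Monolog\Formatter\JsonFormatter;
  use Monolog\Logger;
  // Hypothetical direct wiring; in MediaWiki this would
  // normally be set up through the MonologSpi config below.
  $handler = KafkaHandler::factory( 'localhost:9092' );
  // Format each record as its own JSON object rather than
  // one batched document.
  $handler->setFormatter( new JsonFormatter( JsonFormatter::BATCH_MODE_NEWLINES ) );
  $logger = new Logger( 'CirrusSearchRequests' );
  $logger->pushHandler( $handler );
  $logger->info( 'search request issued', array( 'query' => 'example' ) );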
Adds a new formatter which serializes to the Apache Avro
format, a compact binary format that uses pre-defined schemas.
This initial implementation is deliberately simple and takes
the plain schemas as a constructor argument.
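A plain schema here is just the decoded Avro schema as a PHP
array. A hypothetical record schema (the record name and
fields below are made up purely for illustration) would look
like:
  array(
      'type' => 'record',
      'name' => 'ExampleLog',
      'fields' => array(
          array( 'name' => 'query', 'type' => 'string' ),
          array( 'name' => 'hits', 'type' => 'int' ),
          array( 'name' => 'payload', 'type' => array( 'null', 'string' ) ),
      ),
  )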
Adds a new option to MonologSpi to wrap handlers in a
BufferHandler. The buffer doesn't flush until the request
shuts down, which prevents network requests made by the
logger from adding latency to web requests.
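Conceptually (sketch only, continuing the example above;
BufferHandler itself is standard Monolog), 'buffer' => true
behaves roughly like wrapping the handler by hand:
  use Monolog\Handler\BufferHandler;
  // Records accumulate in memory and are only pushed to
  // Kafka when the buffered handler is closed at the end of
  // the request.
  $logger->pushHandler( new BufferHandler( $handler ) );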
Related mediawiki/vendor update: Ibfe4bd2036ae8e998e2973f07bd9a6f057691578
The necessary config is something like:
array(
	'loggers' => array(
		'CirrusSearchRequests' => array(
			'handlers' => array( 'kafka' ),
		),
	),
	'handlers' => array(
		'kafka' => array(
			'factory' => '\\MediaWiki\\Logger\\Monolog\\KafkaHandler::factory',
			'args' => array( 'localhost:9092' ),
			'formatter' => 'avro',
			'buffer' => true,
		),
	),
	'formatters' => array(
		'avro' => array(
			'class' => '\\MediaWiki\\Logger\\Monolog\\AvroFormatter',
			'args' => array(
				array(
					'CirrusSearchRequests' => array(
						'type' => 'record',
						'name' => 'CirrusSearchRequests',
						'fields' => array( ... ),
					),
				),
			),
		),
	),
)
Bug: T106256
Change-Id: I6ee744b3e5306af0bed70811b558a543eed22840
<?php

/**
 * Tests for the AvroValidator class.
 */
class AvroValidatorTest extends PHPUnit_Framework_TestCase {
	public function setUp() {
		if ( !class_exists( 'AvroSchema' ) ) {
			$this->markTestSkipped( 'Avro is required to run the AvroValidatorTest' );
		}
		parent::setUp();
	}

	public function getErrorsProvider() {
		$stringSchema = AvroSchema::parse( json_encode( [ 'type' => 'string' ] ) );
		$stringArraySchema = AvroSchema::parse( json_encode( [
			'type' => 'array',
			'items' => 'string',
		] ) );
		$recordSchema = AvroSchema::parse( json_encode( [
			'type' => 'record',
			'name' => 'ut',
			'fields' => [
				[ 'name' => 'id', 'type' => 'int', 'required' => true ],
			],
		] ) );
		$enumSchema = AvroSchema::parse( json_encode( [
			'type' => 'record',
			'name' => 'ut',
			'fields' => [
				[ 'name' => 'count', 'type' => [ 'int', 'null' ] ],
			],
		] ) );

		return [
			[
				'No errors with a simple string serialization',
				$stringSchema, 'foobar', [],
			],
			[
				'Cannot serialize integer into string',
				$stringSchema, 5, 'Expected string, but recieved integer',
			],
			[
				'Cannot serialize array into string',
				$stringSchema, [], 'Expected string, but recieved array',
			],
			[
				'allows and ignores extra fields',
				$recordSchema, [ 'id' => 4, 'foo' => 'bar' ], [],
			],
			[
				'detects missing fields',
				$recordSchema, [], [ 'id' => 'Missing expected field' ],
			],
			[
				'handles first element in enum',
				$enumSchema, [ 'count' => 4 ], [],
			],
			[
				'handles second element in enum',
				$enumSchema, [ 'count' => null ], [],
			],
			[
				'rejects element not in union',
				$enumSchema, [ 'count' => 'invalid' ], [ 'count' => [
					'Expected any one of these to be true',
					[
						'Expected integer, but recieved string',
						'Expected null, but recieved string',
					]
				] ]
			],
			[
				'Empty array is accepted',
				$stringArraySchema, [], []
			],
			[
				'correct array element accepted',
				$stringArraySchema, [ 'fizzbuzz' ], []
			],
			[
				'incorrect array element rejected',
				$stringArraySchema, [ '12', 34 ], [ 'Expected string, but recieved integer' ]
			],
		];
	}

	/**
	 * @dataProvider getErrorsProvider
	 */
	public function testGetErrors( $message, $schema, $datum, $expected ) {
		$this->assertEquals(
			$expected,
			AvroValidator::getErrors( $schema, $datum ),
			$message
		);
	}
}