wiki.techinc.nl/tests/phpunit/includes/utils/AvroValidatorTest.php


Produce monolog messages through kafka+avro

This allows a logging channel to be configured to write directly to Kafka. Logs can be serialized either to JSON blobs or to the more compact Apache Avro format. The Kafka handler for Monolog needs a list of one or more Kafka servers to query cluster metadata from. It should work with any Monolog formatter, although some, like JsonFormatter, require formatBatch to be disabled because the Kafka protocol prefers to encode each record independently. This requires the nmred/kafka-php library, version >= 1.3.0.

Adds a new formatter which serializes to the Apache Avro format. This is a compact binary format which uses pre-defined schemas. This initial implementation is very simple and takes the plain schemas as a constructor argument.

Adds a new option to MonologSpi to wrap handlers in a BufferHandler. This doesn't flush until the request shuts down and prevents any network requests in the logger from adding latency to web requests.

Related mediawiki/vendor update: Ibfe4bd2036ae8e998e2973f07bd9a6f057691578

The necessary config is something like:

    array(
        'loggers' => array(
            'CirrusSearchRequests' => array(
                'handlers' => array( 'kafka' ),
            ),
        ),
        'handlers' => array(
            'kafka' => array(
                'factory' => '\\MediaWiki\\Logger\\Monolog\\KafkaHandler::factory',
                'args' => array( 'localhost:9092' ),
                'formatter' => 'avro',
                'buffer' => true,
            ),
        ),
        'formatters' => array(
            'avro' => array(
                'class' => '\\MediaWiki\\Logger\\Monolog\\AvroFormatter',
                'args' => array(
                    array(
                        'CirrusSearchRequests' => array(
                            'type' => 'record',
                            'name' => 'CirrusSearchRequests',
                            'fields' => array( ... )
                        ),
                    ),
                ),
            ),
        ),
    )

Bug: T106256
Change-Id: I6ee744b3e5306af0bed70811b558a543eed22840
2015-08-04 18:02:47 +00:00
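For orientation, here is a minimal sketch of how application code would feed such a channel. The channel name matches the config above and LoggerFactory is MediaWiki's standard logger accessor, but the message and context fields are invented placeholders rather than the real CirrusSearchRequests schema:

    use MediaWiki\Logger\LoggerFactory;

    // Obtain the PSR-3 logger for the channel routed to the 'kafka' handler.
    $logger = LoggerFactory::getInstance( 'CirrusSearchRequests' );

    // Roughly speaking, the context array is what the Avro formatter encodes
    // against the channel's schema; these field names are illustrative only.
    $logger->info( 'search request', array(
        'query' => 'some search terms',
        'hits' => 17,
    ) );

With 'buffer' => true the record is only shipped to Kafka when the request shuts down, so the logging call itself adds no network latency to the web request.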
<?php
/**
 * Tests for AvroValidator, which checks PHP values against Avro schemas
 * and reports human-readable validation errors.
 *
 * @covers AvroValidator
 */
class AvroValidatorTest extends PHPUnit_Framework_TestCase {
	public function setUp() {
		if ( !class_exists( 'AvroSchema' ) ) {
			$this->markTestSkipped( 'Avro is required to run the AvroValidatorTest' );
		}
		parent::setUp();
	}
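
	/*
	 * Illustrative usage of the class under test (a minimal sketch; the schema
	 * and datum below are invented for documentation, but AvroSchema::parse()
	 * and AvroValidator::getErrors() are exactly the calls exercised by the
	 * provider and test method that follow):
	 *
	 *   $schema = AvroSchema::parse( json_encode( [ 'type' => 'string' ] ) );
	 *   $errors = AvroValidator::getErrors( $schema, 'a log message' );
	 *   // $errors is [] when the datum matches the schema, otherwise an array
	 *   // (possibly nested, for unions) of error strings.
	 */
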
	public function getErrorsProvider() {
		$stringSchema = AvroSchema::parse( json_encode( [ 'type' => 'string' ] ) );
		$stringArraySchema = AvroSchema::parse( json_encode( [
			'type' => 'array',
			'items' => 'string',
		] ) );
		// Record with a single required int field.
		$recordSchema = AvroSchema::parse( json_encode( [
			'type' => 'record',
			'name' => 'ut',
			'fields' => [
				[ 'name' => 'id', 'type' => 'int', 'required' => true ],
			],
		] ) );
		// Record whose only field is a union of int and null.
		$unionSchema = AvroSchema::parse( json_encode( [
			'type' => 'record',
			'name' => 'ut',
			'fields' => [
				[ 'name' => 'count', 'type' => [ 'int', 'null' ] ],
			],
		] ) );

		// Note: the expected messages deliberately reproduce the exact wording
		// (including the 'recieved' spelling) emitted by AvroValidator.
		return [
			[
				'No errors with a simple string serialization',
				$stringSchema, 'foobar', [],
			],
			[
				'Cannot serialize integer into string',
				$stringSchema, 5, 'Expected string, but recieved integer',
			],
			[
				'Cannot serialize array into string',
				$stringSchema, [], 'Expected string, but recieved array',
			],
			[
				'Allows and ignores extra fields',
				$recordSchema, [ 'id' => 4, 'foo' => 'bar' ], [],
			],
			[
				'Detects missing fields',
				$recordSchema, [], [ 'id' => 'Missing expected field' ],
			],
			[
				'Handles the first branch of a union',
				$unionSchema, [ 'count' => 4 ], [],
			],
			[
				'Handles the second branch of a union',
				$unionSchema, [ 'count' => null ], [],
			],
			[
				'Rejects a value matching no branch of the union',
				$unionSchema, [ 'count' => 'invalid' ], [ 'count' => [
					'Expected any one of these to be true',
					[
						'Expected integer, but recieved string',
						'Expected null, but recieved string',
					]
				] ]
			],
			[
				'Empty array is accepted',
				$stringArraySchema, [], []
			],
			[
				'Correct array element accepted',
				$stringArraySchema, [ 'fizzbuzz' ], []
			],
			[
				'Incorrect array element rejected',
				$stringArraySchema, [ '12', 34 ], [ 'Expected string, but recieved integer' ]
			],
		];
	}

	/**
	 * @dataProvider getErrorsProvider
	 */
	public function testGetErrors( $message, $schema, $datum, $expected ) {
		$this->assertEquals(
			$expected,
			AvroValidator::getErrors( $schema, $datum ),
			$message
		);
	}
}