Configuration values can be overridden with environment variables prefixed with TYK_PMP_. The environment variables will take precedence over the values in the configuration file.
purge_delay
ENV: TYK_PMP_PURGEDELAY
Type: int
The number of seconds the Pump waits between checking for analytics data and purging it from Redis.
purge_chunk
ENV: TYK_PMP_PURGECHUNK
Type: int64
The maximum number of records to pull from Redis at a time. If it's unset or 0, all the analytics records in Redis are pulled. If it's set, storage_expiration_time is used to reset the analytics record TTL.
storage_expiration_time
ENV: TYK_PMP_STORAGEEXPIRATIONTIME
Type: int64
The number of seconds for the analytics records TTL. It only works if purge_chunk is enabled. Defaults to 60 seconds.
dont_purge_uptime_data
ENV: TYK_PMP_DONTPURGEUPTIMEDATA
Type: bool
Setting this to false will create a pump that pushes uptime data to the Uptime Pump, so the Dashboard can read it. Disable by setting to true.
uptime_pump_config
To debug the Uptime Pump, set the log_level field in the uptime_pump_config to debug, info or warning. By default, the SQL logger verbosity is silent.
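As a sketch, an uptime_pump_config using a SQL uptime pump with debug logging might look like this (the connection values are placeholders, not a working setup):

```json
{
  "uptime_pump_config": {
    "uptime_type": "sql",
    "type": "postgres",
    "connection_string": "host=localhost port=5432 user=postgres dbname=tyk password=secret",
    "log_level": "debug"
  }
}
```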
uptime_pump_config.EnvPrefix
ENV: TYK_PMP_UPTIMEPUMPCONFIG_ENVPREFIX
Type: string
The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_SQL_META.
uptime_pump_config.mongo_url
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOURL
Type: string
The full URL to your MongoDB instance. This can be a clustered instance if necessary and should include the database and username/password data.
uptime_pump_config.mongo_use_ssl
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOUSESSL
Type: bool
Set to true to enable Mongo SSL connection.
uptime_pump_config.mongo_ssl_insecure_skip_verify
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOSSLINSECURESKIPVERIFY
Type: bool
Allows the use of self-signed certificates when connecting to an encrypted MongoDB database.
uptime_pump_config.mongo_ssl_allow_invalid_hostnames
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOSSLALLOWINVALIDHOSTNAMES
Type: bool
Ignores the hostname check when it differs from the original (for example with SSH tunneling). The rest of the TLS verification will still be performed.
uptime_pump_config.mongo_ssl_ca_file
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOSSLCAFILE
Type: string
Path to the PEM file with trusted root certificates.
uptime_pump_config.mongo_ssl_pem_keyfile
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOSSLPEMKEYFILE
Type: string
Path to the PEM file which contains both the client certificate and the private key. This is required for Mutual TLS.
uptime_pump_config.mongo_db_type
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGODBTYPE
Type: int
Specifies the Mongo DB type. If it's 0, you are using standard MongoDB; if it's 1, AWS DocumentDB; if it's 2, CosmosDB. Defaults to standard MongoDB (0).
uptime_pump_config.omit_index_creation
ENV: TYK_PMP_UPTIMEPUMPCONFIG_OMITINDEXCREATION
Type: bool
Set to true to disable the default Tyk index creation.
uptime_pump_config.mongo_session_consistency
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGOSESSIONCONSISTENCY
Type: string
Sets the consistency mode for the session. Defaults to strong. The valid values are: strong, monotonic, eventual.
uptime_pump_config.driver
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGODRIVERTYPE
Type: string
MongoDriverType is the type of the driver (library) to use. The valid values are: "mongo-go" and "mgo". Since v1.9, the default driver is "mongo-go". Check out this guide to learn about MongoDB drivers supported by Tyk Pump.
uptime_pump_config.mongo_direct_connection
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MONGODIRECTCONNECTION
Type: bool
MongoDirectConnection informs whether to establish connections only with the specified seed servers, or to obtain information for the whole cluster and establish connections with further servers too. If true, the client will only connect to the host provided in the ConnectionString and won't attempt to discover other hosts in the cluster. Useful when network restrictions prevent discovery, such as with SSH tunneling. Default is false.
uptime_pump_config.collection_name
ENV: TYK_PMP_UPTIMEPUMPCONFIG_COLLECTIONNAME
Type: string
Specifies the Mongo collection name.
uptime_pump_config.max_insert_batch_size_bytes
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MAXINSERTBATCHSIZEBYTES
Type: int
Maximum insert batch size for the Mongo selective pump. If the batch being written surpasses this value, it will be sent in multiple batches. Defaults to 10MB.
uptime_pump_config.max_document_size_bytes
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MAXDOCUMENTSIZEBYTES
Type: int
Maximum document size. If a document exceeds this value, it will be skipped. Defaults to 10MB.
uptime_pump_config.collection_cap_max_size_bytes
ENV: TYK_PMP_UPTIMEPUMPCONFIG_COLLECTIONCAPMAXSIZEBYTES
Type: int
Size in bytes of the capped collection on 64-bit architectures. Defaults to 5GB.
uptime_pump_config.collection_cap_enable
ENV: TYK_PMP_UPTIMEPUMPCONFIG_COLLECTIONCAPENABLE
Type: bool
Enables collection capping. It's used to set a maximum size for the collection.
uptime_pump_config.type
ENV: TYK_PMP_UPTIMEPUMPCONFIG_TYPE
Type: string
The supported and tested types are sqlite and postgres.
uptime_pump_config.connection_string
ENV: TYK_PMP_UPTIMEPUMPCONFIG_CONNECTIONSTRING
Type: string
Specifies the connection string to the database.
uptime_pump_config.postgres
ENV: TYK_PMP_UPTIMEPUMPCONFIG_POSTGRES
Type: PostgresConfig
Postgres configurations.
uptime_pump_config.postgres.prefer_simple_protocol
ENV: TYK_PMP_UPTIMEPUMPCONFIG_POSTGRES_PREFERSIMPLEPROTOCOL
Type: bool
Disables implicit prepared statement usage.
uptime_pump_config.mysql
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL
Type: MysqlConfig
Mysql configurations.
uptime_pump_config.mysql.default_string_size
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_DEFAULTSTRINGSIZE
Type: uint
Default size for string fields. Defaults to 256.
uptime_pump_config.mysql.disable_datetime_precision
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_DISABLEDATETIMEPRECISION
Type: bool
Disables datetime precision, which is not supported before MySQL 5.6.
uptime_pump_config.mysql.dont_support_rename_index
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_DONTSUPPORTRENAMEINDEX
Type: bool
Drops and recreates an index when renaming it; renaming indexes is not supported before MySQL 5.7 and in MariaDB.
uptime_pump_config.mysql.dont_support_rename_column
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_DONTSUPPORTRENAMECOLUMN
Type: bool
Uses CHANGE when renaming a column; renaming columns is not supported before MySQL 8 and in MariaDB.
uptime_pump_config.mysql.skip_initialize_with_version
ENV: TYK_PMP_UPTIMEPUMPCONFIG_MYSQL_SKIPINITIALIZEWITHVERSION
Type: bool
Skips the automatic configuration based on the current MySQL version.
uptime_pump_config.table_sharding
ENV: TYK_PMP_UPTIMEPUMPCONFIG_TABLESHARDING
Type: bool
Specifies whether all the analytics records are stored in one table or in multiple tables (one per day). Defaults to false. If false, all the records are stored in the tyk_aggregated table. If true, the records for each day are stored in a tyk_aggregated_YYYYMMDD table, where YYYYMMDD changes depending on the date.
uptime_pump_config.log_level
ENV: TYK_PMP_UPTIMEPUMPCONFIG_LOGLEVEL
Type: string
Specifies the SQL log verbosity. The possible values are: info, error and warning. By default, the value is silent, which means that it won't log any SQL query.
uptime_pump_config.batch_size
ENV: TYK_PMP_UPTIMEPUMPCONFIG_BATCHSIZE
Type: int
Specifies the number of records to be written in each batch. By default, it writes a maximum of 1000 records per batch.
uptime_pump_config.uptime_type
ENV: TYK_PMP_UPTIMEPUMPCONFIG_UPTIMETYPE
Type: string
Determines the uptime type. Options are mongo and sql. Defaults to mongo.
pumps
The default environment variable prefix for each pump follows this format: TYK_PMP_PUMPS_{PUMP-NAME}_, for example TYK_PMP_PUMPS_KAFKA_.
You can also set custom names for each pump by specifying the pump type. For example, if you want a Kafka pump which is called PROD you need to create TYK_PMP_PUMPS_PROD_TYPE=kafka and configure it using the TYK_PMP_PUMPS_PROD_ prefix.
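As a minimal sketch, a custom-named Kafka pump could be declared purely through environment variables like this (the broker address is a placeholder, not a setting taken from this document):

```shell
# Declare a pump named PROD of type kafka; all of its other settings
# are then read from variables using the TYK_PMP_PUMPS_PROD_ prefix.
export TYK_PMP_PUMPS_PROD_TYPE=kafka
export TYK_PMP_PUMPS_PROD_META_BROKER="localhost:9092"   # placeholder broker address

echo "$TYK_PMP_PUMPS_PROD_TYPE"
```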
pumps.csv.name
ENV: TYK_PMP_PUMPS_CSV_NAME
Type: string
The name of the pump. This is used to identify the pump in the logs. Deprecated; use type instead.
pumps.csv.type
ENV: TYK_PMP_PUMPS_CSV_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type. The current valid types are: mongo, mongo-pump-selective, mongo-pump-aggregate, csv, elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus, logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph, sql-graph, sql-graph-aggregate, resurfaceio.
pumps.csv.filters
This feature adds a configuration field in each pump called filters, which can be used to restrict the records the pump writes.
pumps.csv.filters.org_ids
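A sketch of the filters structure inside a pump entry (all values are illustrative):

```json
{
  "csv": {
    "type": "csv",
    "filters": {
      "org_ids": ["org1", "org2"],
      "api_ids": ["api123"],
      "response_codes": [200, 201, 500],
      "skip_api_ids": [],
      "skip_org_ids": [],
      "skip_response_codes": [404]
    }
  }
}
```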
ENV: TYK_PMP_PUMPS_CSV_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.csv.filters.api_ids
ENV: TYK_PMP_PUMPS_CSV_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.csv.filters.response_codes
ENV: TYK_PMP_PUMPS_CSV_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.csv.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_CSV_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.csv.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_CSV_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.csv.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_CSV_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.csv.timeout
ENV: TYK_PMP_PUMPS_CSV_TIMEOUT
Type: int
By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0. The timeout is configured within the main pump config.
You should not configure a timeout that exceeds the purging loop frequency (configured with purge_delay), as this will mean that data is purged before being written to the target data sink.
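For example, a 5 second timeout on a CSV pump might be configured like this (the purge_delay and csv_dir values are placeholders):

```json
{
  "purge_delay": 10,
  "pumps": {
    "csv": {
      "type": "csv",
      "timeout": 5,
      "meta": {
        "csv_dir": "/tmp/csv-data"
      }
    }
  }
}
```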
If there is no timeout configured and the pump's write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If there is a timeout configured, but the pump's write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.csv.omit_detailed_recording
ENV: TYK_PMP_PUMPS_CSV_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in this pump. Defaults to false.
pumps.csv.max_record_size
ENV: TYK_PMP_PUMPS_CSV_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
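As a sketch, max_record_size can be set globally, at the pump level, or both (the byte values here are illustrative):

```json
{
  "max_record_size": 2000,
  "pumps": {
    "csv": {
      "type": "csv",
      "max_record_size": 1000,
      "meta": {
        "csv_dir": "/tmp/csv-data"
      }
    }
  }
}
```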
pumps.csv.ignore_fields
ENV: TYK_PMP_PUMPS_CSV_IGNOREFIELDS
Type: []string
IgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the database, or data that you don't really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example: ["api_key", "api_version"].
pumps.csv.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_CSV_META_ENVPREFIX
Type: string
The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_CSV_META.
pumps.csv.meta.csv_dir
ENV: TYK_PMP_PUMPS_CSV_META_CSVDIR
Type: string
The directory and the filename where the CSV data will be stored.
pumps.dogstatsd.name
ENV: TYK_PMP_PUMPS_DOGSTATSD_NAME
Type: string
The name of the pump. This is used to identify the pump in the logs. Deprecated; use type instead.
pumps.dogstatsd.type
ENV: TYK_PMP_PUMPS_DOGSTATSD_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type. The current valid types are: mongo, mongo-pump-selective, mongo-pump-aggregate, csv, elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus, logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph, sql-graph, sql-graph-aggregate, resurfaceio.
pumps.dogstatsd.filters
This feature adds a configuration field in each pump called filters, which can be used to restrict the records the pump writes.
pumps.dogstatsd.filters.org_ids
ENV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.dogstatsd.filters.api_ids
ENV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.dogstatsd.filters.response_codes
ENV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.dogstatsd.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.dogstatsd.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.dogstatsd.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_DOGSTATSD_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.dogstatsd.timeout
ENV: TYK_PMP_PUMPS_DOGSTATSD_TIMEOUT
Type: int
By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0. The timeout is configured within the main pump config.
You should not configure a timeout that exceeds the purging loop frequency (configured with purge_delay), as this will mean that data is purged before being written to the target data sink.
If there is no timeout configured and the pump's write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If there is a timeout configured, but the pump's write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.dogstatsd.omit_detailed_recording
ENV: TYK_PMP_PUMPS_DOGSTATSD_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in this pump. Defaults to false.
pumps.dogstatsd.max_record_size
ENV: TYK_PMP_PUMPS_DOGSTATSD_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.dogstatsd.ignore_fields
ENV: TYK_PMP_PUMPS_DOGSTATSD_IGNOREFIELDS
Type: []string
IgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the database, or data that you don't really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example: ["api_key", "api_version"].
pumps.dogstatsd.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_DOGSTATSD_META_ENVPREFIX
Type: string
The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_DOGSTATSD_META.
pumps.dogstatsd.meta.namespace
ENV: TYK_PMP_PUMPS_DOGSTATSD_META_NAMESPACE
Type: string
Prefix for the metrics sent to Datadog.
pumps.dogstatsd.meta.address
ENV: TYK_PMP_PUMPS_DOGSTATSD_META_ADDRESS
Type: string
Address of the Datadog agent, including host and port.
pumps.dogstatsd.meta.sample_rate
ENV: TYK_PMP_PUMPS_DOGSTATSD_META_SAMPLERATE
Type: float64
Defaults to 1, which equates to 100% of requests. To sample at 50%, set to 0.5.
pumps.dogstatsd.meta.async_uds
ENV: TYK_PMP_PUMPS_DOGSTATSD_META_ASYNCUDS
Type: bool
Enable the async UDS over UDP client: https://github.com/Datadog/datadog-go#unix-domain-sockets-client.
pumps.dogstatsd.meta.async_uds_write_timeout_seconds
ENV: TYK_PMP_PUMPS_DOGSTATSD_META_ASYNCUDSWRITETIMEOUT
Type: int
Integer write timeout in seconds if async_uds: true.
pumps.dogstatsd.meta.buffered
ENV: TYK_PMP_PUMPS_DOGSTATSD_META_BUFFERED
Type: bool
Enable buffering of messages.
pumps.dogstatsd.meta.buffered_max_messages
ENV: TYK_PMP_PUMPS_DOGSTATSD_META_BUFFEREDMAXMESSAGES
Type: int
Max messages in a single datagram if buffered: true. Defaults to 16.
pumps.dogstatsd.meta.tags
ENV: TYK_PMP_PUMPS_DOGSTATSD_META_TAGS
Type: []string
List of tags to be added to the metric. If no tag is specified, the fallback behavior is to use the following tags: path, method, response_code, api_version, api_name, api_id, org_id, tracked, oauth_id.
Note that this configuration can generate significant charges due to the unbound nature of the path tag.
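Putting the meta fields above together, a dogstatsd pump entry might look like this sketch (the address and values are placeholders):

```json
{
  "dogstatsd": {
    "type": "dogstatsd",
    "meta": {
      "address": "localhost:8125",
      "namespace": "pump",
      "sample_rate": 0.5,
      "buffered": true,
      "buffered_max_messages": 32,
      "tags": ["method", "response_code", "api_name", "org_id"]
    }
  }
}
```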
pumps.elasticsearch.name
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_NAME
Type: string
The name of the pump. This is used to identify the pump in the logs. Deprecated; use type instead.
pumps.elasticsearch.type
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type. The current valid types are: mongo, mongo-pump-selective, mongo-pump-aggregate, csv, elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus, logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph, sql-graph, sql-graph-aggregate, resurfaceio.
pumps.elasticsearch.filters
This feature adds a configuration field in each pump called filters, which can be used to restrict the records the pump writes.
pumps.elasticsearch.filters.org_ids
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.elasticsearch.filters.api_ids
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.elasticsearch.filters.response_codes
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.elasticsearch.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.elasticsearch.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.elasticsearch.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.elasticsearch.timeout
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_TIMEOUT
Type: int
By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0. The timeout is configured within the main pump config.
You should not configure a timeout that exceeds the purging loop frequency (configured with purge_delay), as this will mean that data is purged before being written to the target data sink.
If there is no timeout configured and the pump's write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If there is a timeout configured, but the pump's write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.elasticsearch.omit_detailed_recording
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in this pump. Defaults to false.
pumps.elasticsearch.max_record_size
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.elasticsearch.ignore_fields
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_IGNOREFIELDS
Type: []string
IgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the database, or data that you don't really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example: ["api_key", "api_version"].
pumps.elasticsearch.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_ENVPREFIX
Type: string
The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_ELASTICSEARCH_META.
pumps.elasticsearch.meta.index_name
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_INDEXNAME
Type: string
The name of the index that all the analytics data will be placed in. Defaults to "tyk_analytics".
pumps.elasticsearch.meta.elasticsearch_url
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_ELASTICSEARCHURL
Type: string
If sniffing is disabled, the URL that all data will be sent to. Defaults to "http://localhost:9200".
pumps.elasticsearch.meta.use_sniffing
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_ENABLESNIFFING
Type: bool
If sniffing is enabled, the elasticsearch_url will be used to make a request to get a list of all the nodes in the cluster, and the returned addresses will then be used. Defaults to false.
pumps.elasticsearch.meta.document_type
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_DOCUMENTTYPE
Type: string
The type of the document that is created in Elasticsearch. Defaults to "tyk_analytics".
pumps.elasticsearch.meta.rolling_index
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_ROLLINGINDEX
Type: bool
Appends the date to the end of the index name, so each day's data is split into a different index name, e.g. tyk_analytics-2016.02.28. Defaults to false.
pumps.elasticsearch.meta.extended_stats
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_EXTENDEDSTATISTICS
Type: bool
If set to true, will include the following additional fields: Raw Request, Raw Response and User Agent.
pumps.elasticsearch.meta.generate_id
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_GENERATEID
Type: bool
When enabled, generates an _id for outgoing records. This prevents duplicate records when retrying writes to Elasticsearch.
pumps.elasticsearch.meta.decode_base64
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_DECODEBASE64
Type: bool
Allows base64-encoded fields to be decoded before being passed to Elasticsearch.
pumps.elasticsearch.meta.version
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_VERSION
Type: string
Specifies the Elasticsearch version. Use "3" for ES 3.X, "5" for ES 5.X, "6" for ES 6.X, "7" for ES 7.X. Defaults to "3".
pumps.elasticsearch.meta.disable_bulk
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_DISABLEBULK
Type: bool
Disables batch writing. Defaults to false.
pumps.elasticsearch.meta.bulk_config
Batch writing trigger configuration. Each option is OR'd with the others.
pumps.elasticsearch.meta.bulk_config.workers
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_WORKERS
Type: int
Number of workers. Defaults to 1.
pumps.elasticsearch.meta.bulk_config.flush_interval
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_FLUSHINTERVAL
Type: int
Specifies the time in seconds to flush the data and send it to Elasticsearch. Disabled by default.
pumps.elasticsearch.meta.bulk_config.bulk_actions
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_BULKACTIONS
Type: int
Specifies the number of requests needed to flush the data and send it to Elasticsearch. Defaults to 1000 requests. If needed, it can be disabled with -1.
pumps.elasticsearch.meta.bulk_config.bulk_size
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_BULKSIZE
Type: int
Specifies the size (in bytes) needed to flush the data and send it to Elasticsearch. Defaults to 5MB. If needed, it can be disabled with -1.
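Since the triggers are OR'd, a bulk_config like the following sketch would flush every 60 seconds, every 500 requests, or every 5MB, whichever happens first (values are illustrative):

```json
{
  "bulk_config": {
    "workers": 2,
    "flush_interval": 60,
    "bulk_actions": 500,
    "bulk_size": 5242880
  }
}
```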
pumps.elasticsearch.meta.auth_api_key_id
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_AUTHAPIKEYID
Type: string
API Key ID used for API key auth in Elasticsearch. It's sent to Elasticsearch in the Authorization header as ApiKey base64(auth_api_key_id:auth_api_key).
pumps.elasticsearch.meta.auth_api_key
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_AUTHAPIKEY
Type: string
API Key used for API key auth in Elasticsearch. It's sent to Elasticsearch in the Authorization header as ApiKey base64(auth_api_key_id:auth_api_key).
pumps.elasticsearch.meta.auth_basic_username
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_USERNAME
Type: string
Basic auth username. It's sent to Elasticsearch in the Authorization header as username:password encoded in base64.
pumps.elasticsearch.meta.auth_basic_password
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_PASSWORD
Type: string
Basic auth password. It's sent to Elasticsearch in the Authorization header as username:password encoded in base64.
pumps.elasticsearch.meta.use_ssl
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_USESSL
Type: bool
Enables SSL connection.
pumps.elasticsearch.meta.ssl_insecure_skip_verify
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_SSLINSECURESKIPVERIFY
Type: bool
Controls whether the pump client verifies the Elasticsearch server's certificate chain and hostname.
pumps.elasticsearch.meta.ssl_cert_file
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_SSLCERTFILE
Type: string
Can be used to set a custom certificate file for authentication with Elasticsearch.
pumps.elasticsearch.meta.ssl_key_file
ENV: TYK_PMP_PUMPS_ELASTICSEARCH_META_SSLKEYFILE
Type: string
Can be used to set a custom key file for authentication with Elasticsearch.
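Combining the meta fields above, an elasticsearch pump entry might look like this sketch (the URL is a placeholder):

```json
{
  "elasticsearch": {
    "type": "elasticsearch",
    "meta": {
      "index_name": "tyk_analytics",
      "elasticsearch_url": "http://localhost:9200",
      "use_sniffing": false,
      "document_type": "tyk_analytics",
      "rolling_index": true,
      "version": "7",
      "disable_bulk": false
    }
  }
}
```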
pumps.graylog.name
ENV: TYK_PMP_PUMPS_GRAYLOG_NAME
Type: string
The name of the pump. This is used to identify the pump in the logs. Deprecated; use type instead.
pumps.graylog.type
ENV: TYK_PMP_PUMPS_GRAYLOG_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type. The current valid types are: mongo, mongo-pump-selective, mongo-pump-aggregate, csv, elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus, logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph, sql-graph, sql-graph-aggregate, resurfaceio.
pumps.graylog.filters
This feature adds a configuration field in each pump called filters, which can be used to restrict the records the pump writes.
pumps.graylog.filters.org_ids
ENV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.graylog.filters.api_ids
ENV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.graylog.filters.response_codes
ENV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.graylog.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.graylog.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.graylog.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_GRAYLOG_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.graylog.timeout
ENV: TYK_PMP_PUMPS_GRAYLOG_TIMEOUT
Type: int
By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0. The timeout is configured within the main pump config.
You should not configure a timeout that exceeds the purging loop frequency (configured with purge_delay), as this will mean that data is purged before being written to the target data sink.
If there is no timeout configured and the pump's write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If there is a timeout configured, but the pump's write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.graylog.omit_detailed_recording
ENV: TYK_PMP_PUMPS_GRAYLOG_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in this pump. Defaults to false.
pumps.graylog.max_record_size
ENV: TYK_PMP_PUMPS_GRAYLOG_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for the Raw Request and Raw Response logs. Defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.graylog.ignore_fields
ENV: TYK_PMP_PUMPS_GRAYLOG_IGNOREFIELDS
Type: []string
IgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the database, or data that you don't really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example: ["api_key", "api_version"].
pumps.graylog.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_GRAYLOG_META_ENVPREFIX
Type: string
The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_GRAYLOG_META.
pumps.graylog.meta.host
ENV: TYK_PMP_PUMPS_GRAYLOG_META_GRAYLOGHOST
Type: string
Graylog host.
pumps.graylog.meta.port
ENV: TYK_PMP_PUMPS_GRAYLOG_META_GRAYLOGPORT
Type: int
Graylog port.
pumps.graylog.meta.tags
ENV: TYK_PMP_PUMPS_GRAYLOG_META_TAGS
Type: []string
List of tags to be added to the metric. If no tag is specified, the fallback behaviour is to not send anything. The possible values are: path, method, response_code, api_version, api_name, api_id, org_id, tracked, oauth_id, raw_request, raw_response, request_time, ip_address.
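A sketch of a graylog pump entry using a subset of the possible tags (the host and port are placeholders):

```json
{
  "graylog": {
    "type": "graylog",
    "meta": {
      "host": "10.60.6.136",
      "port": 12216,
      "tags": ["method", "path", "response_code", "api_id"]
    }
  }
}
```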
pumps.hybrid.name
ENV: TYK_PMP_PUMPS_HYBRID_NAME
Type: string
The name of the pump. This is used to identify the pump in the logs. Deprecated; use type instead.
pumps.hybrid.type
ENV: TYK_PMP_PUMPS_HYBRID_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type. The current valid types are: mongo, mongo-pump-selective, mongo-pump-aggregate, csv, elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus, logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph, sql-graph, sql-graph-aggregate, resurfaceio.
pumps.hybrid.filters
This feature adds a configuration field in each pump called filters, which can be used to restrict the records the pump writes.
pumps.hybrid.filters.org_ids
ENV: TYK_PMP_PUMPS_HYBRID_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.hybrid.filters.api_ids
ENV: TYK_PMP_PUMPS_HYBRID_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.hybrid.filters.response_codes
ENV: TYK_PMP_PUMPS_HYBRID_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.hybrid.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_HYBRID_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.hybrid.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_HYBRID_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.hybrid.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_HYBRID_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.hybrid.timeout
ENV: TYK_PMP_PUMPS_HYBRID_TIMEOUTType:
intBy default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option
timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config as shown here; note that this example would configure a 5 second timeout:
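A five-second timeout on this pump might look like the following (sketch; sibling keys omitted):

```json
{
  "pumps": {
    "hybrid": {
      "type": "hybrid",
      "timeout": 5
    }
  }
}
```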
Note that you should not configure a timeout greater than the purge interval (purge_delay), as this would mean that data is purged before being written to the target data sink.
If no timeout is configured and the pump’s write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump’s write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.hybrid.omit_detailed_recording
ENV: TYK_PMP_PUMPS_HYBRID_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.hybrid.max_record_size
ENV: TYK_PMP_PUMPS_HYBRID_MAXRECORDSIZEType:
intDefines the maximum size (in bytes) for Raw Request and Raw Response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
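A pump-level cap of 1000 bytes (value illustrative) might be sketched as:

```json
{
  "pumps": {
    "hybrid": {
      "type": "hybrid",
      "max_record_size": 1000
    }
  }
}
```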
pumps.hybrid.ignore_fields
ENV: TYK_PMP_PUMPS_HYBRID_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.hybrid.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_HYBRID_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_HYBRID_META
pumps.hybrid.meta.ConnectionString
ENV: TYK_PMP_PUMPS_HYBRID_META_CONNECTIONSTRINGType:
stringMDCB URL connection string
pumps.hybrid.meta.RPCKey
ENV: TYK_PMP_PUMPS_HYBRID_META_RPCKEYType:
stringYour organization ID to connect to the MDCB installation.
pumps.hybrid.meta.APIKey
ENV: TYK_PMP_PUMPS_HYBRID_META_APIKEYType:
stringThis is the API key of a user used to authenticate and authorize the Hybrid Pump’s access through MDCB. The user should be a standard Dashboard user with minimal privileges, so as to reduce risk if the user is compromised.
pumps.hybrid.meta.ignore_tag_prefix_list
ENV: TYK_PMP_PUMPS_HYBRID_META_IGNORETAGPREFIXLISTType:
[]stringSpecifies prefixes of tags that should be ignored if
aggregated is set to true.
pumps.hybrid.meta.CallTimeout
ENV: TYK_PMP_PUMPS_HYBRID_META_CALLTIMEOUTType:
intHybrid pump RPC calls timeout in seconds. Defaults to
10 seconds.
pumps.hybrid.meta.RPCPoolSize
ENV: TYK_PMP_PUMPS_HYBRID_META_RPCPOOLSIZEType:
intHybrid pump connection pool size. Defaults to
5.
pumps.hybrid.meta.aggregationTime
ENV: TYK_PMP_PUMPS_HYBRID_META_AGGREGATIONTIMEType:
intSpecifies the frequency of the aggregation, in minutes, if
aggregated is set to true.
pumps.hybrid.meta.Aggregated
ENV: TYK_PMP_PUMPS_HYBRID_META_AGGREGATEDType:
boolSend aggregated analytics data to Tyk MDCB
pumps.hybrid.meta.TrackAllPaths
ENV: TYK_PMP_PUMPS_HYBRID_META_TRACKALLPATHSType:
boolSpecifies whether to store aggregated data for all the endpoints if
aggregated is set to true. Defaults to false,
which means that aggregated data is only stored for tracked endpoints.
pumps.hybrid.meta.store_analytics_per_minute
ENV: TYK_PMP_PUMPS_HYBRID_META_STOREANALYTICSPERMINUTEType:
boolDetermines if the aggregations should be made per minute (true) or per hour (false) if
aggregated is set to true.
pumps.hybrid.meta.UseSSL
ENV: TYK_PMP_PUMPS_HYBRID_META_USESSLType:
boolUse SSL to connect to Tyk MDCB
pumps.hybrid.meta.SSLInsecureSkipVerify
ENV: TYK_PMP_PUMPS_HYBRID_META_SSLINSECURESKIPVERIFYType:
boolSkip SSL verification
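Putting the fields above together, a minimal Hybrid pump configuration might look like this (all values illustrative, not real credentials):

```json
{
  "pumps": {
    "hybrid": {
      "type": "hybrid",
      "meta": {
        "ConnectionString": "mdcb.example.com:9091",
        "RPCKey": "your-org-id",
        "APIKey": "your-dashboard-user-api-key",
        "UseSSL": true,
        "SSLInsecureSkipVerify": false,
        "Aggregated": false
      }
    }
  }
}
```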
pumps.influx.name
ENV: TYK_PMP_PUMPS_INFLUX_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.influx.type
ENV: TYK_PMP_PUMPS_INFLUX_TYPEType:
stringSets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.influx.filters
This feature adds a new configuration field in each pump called filters, with the following structure:
pumps.influx.filters.org_ids
ENV: TYK_PMP_PUMPS_INFLUX_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.influx.filters.api_ids
ENV: TYK_PMP_PUMPS_INFLUX_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.influx.filters.response_codes
ENV: TYK_PMP_PUMPS_INFLUX_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.influx.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_INFLUX_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.influx.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_INFLUX_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.influx.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_INFLUX_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.influx.timeout
ENV: TYK_PMP_PUMPS_INFLUX_TIMEOUTType:
intBy default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option
timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config as shown here; note that this example would configure a 5 second timeout:
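A five-second timeout on this pump might look like the following (sketch; sibling keys omitted):

```json
{
  "pumps": {
    "influx": {
      "type": "influx",
      "timeout": 5
    }
  }
}
```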
Note that you should not configure a timeout greater than the purge interval (purge_delay), as this would mean that data is purged before being written to the target data sink.
If no timeout is configured and the pump’s write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump’s write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.influx.omit_detailed_recording
ENV: TYK_PMP_PUMPS_INFLUX_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.influx.max_record_size
ENV: TYK_PMP_PUMPS_INFLUX_MAXRECORDSIZEType:
intDefines the maximum size (in bytes) for Raw Request and Raw Response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
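A pump-level cap of 1000 bytes (value illustrative) might be sketched as:

```json
{
  "pumps": {
    "influx": {
      "type": "influx",
      "max_record_size": 1000
    }
  }
}
```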
pumps.influx.ignore_fields
ENV: TYK_PMP_PUMPS_INFLUX_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.influx.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_INFLUX_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_INFLUX_META
pumps.influx.meta.database_name
ENV: TYK_PMP_PUMPS_INFLUX_META_DATABASENAMEType:
stringInfluxDB pump database name.
pumps.influx.meta.address
ENV: TYK_PMP_PUMPS_INFLUX_META_ADDRType:
stringInfluxDB pump host.
pumps.influx.meta.username
ENV: TYK_PMP_PUMPS_INFLUX_META_USERNAMEType:
stringInfluxDB pump database username.
pumps.influx.meta.password
ENV: TYK_PMP_PUMPS_INFLUX_META_PASSWORDType:
stringInfluxDB pump database password.
pumps.influx.meta.fields
ENV: TYK_PMP_PUMPS_INFLUX_META_FIELDSType:
[]stringDefines which analytics fields should be sent to InfluxDB. Default value is
["method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id", "raw_request", "request_time", "raw_response", "ip_address"].
pumps.influx.meta.tags
ENV: TYK_PMP_PUMPS_INFLUX_META_TAGSType:
[]stringList of tags to be added to the metric.
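Combining the fields above, an InfluxDB pump entry might look like this (host and credentials illustrative):

```json
{
  "pumps": {
    "influx": {
      "type": "influx",
      "meta": {
        "database_name": "tyk_analytics",
        "address": "http://localhost:8086",
        "username": "root",
        "password": "changeme",
        "fields": ["method", "path", "response_code", "api_key", "time_stamp"],
        "tags": ["path", "method", "response_code", "api_name"]
      }
    }
  }
}
```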
pumps.kafka.name
ENV: TYK_PMP_PUMPS_KAFKA_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.kafka.type
ENV: TYK_PMP_PUMPS_KAFKA_TYPEType:
stringSets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.kafka.filters
This feature adds a new configuration field in each pump called filters, with the following structure:
pumps.kafka.filters.org_ids
ENV: TYK_PMP_PUMPS_KAFKA_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.kafka.filters.api_ids
ENV: TYK_PMP_PUMPS_KAFKA_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.kafka.filters.response_codes
ENV: TYK_PMP_PUMPS_KAFKA_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.kafka.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_KAFKA_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.kafka.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_KAFKA_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.kafka.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_KAFKA_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.kafka.timeout
ENV: TYK_PMP_PUMPS_KAFKA_TIMEOUTType:
intBy default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option
timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config as shown here; note that this example would configure a 5 second timeout:
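A five-second timeout on this pump might look like the following (sketch; sibling keys omitted):

```json
{
  "pumps": {
    "kafka": {
      "type": "kafka",
      "timeout": 5
    }
  }
}
```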
Note that you should not configure a timeout greater than the purge interval (purge_delay), as this would mean that data is purged before being written to the target data sink.
If no timeout is configured and the pump’s write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump’s write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.kafka.omit_detailed_recording
ENV: TYK_PMP_PUMPS_KAFKA_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.kafka.max_record_size
ENV: TYK_PMP_PUMPS_KAFKA_MAXRECORDSIZEType:
intDefines the maximum size (in bytes) for Raw Request and Raw Response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
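A pump-level cap of 1000 bytes (value illustrative) might be sketched as:

```json
{
  "pumps": {
    "kafka": {
      "type": "kafka",
      "max_record_size": 1000
    }
  }
}
```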
pumps.kafka.ignore_fields
ENV: TYK_PMP_PUMPS_KAFKA_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.kafka.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_KAFKA_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_KAFKA_META
pumps.kafka.meta.broker
ENV: TYK_PMP_PUMPS_KAFKA_META_BROKERType:
[]stringThe list of brokers used to discover the partitions available on the kafka cluster. E.g. “localhost:9092”.
pumps.kafka.meta.client_id
ENV: TYK_PMP_PUMPS_KAFKA_META_CLIENTIDType:
stringUnique identifier for client connections established with Kafka.
pumps.kafka.meta.topic
ENV: TYK_PMP_PUMPS_KAFKA_META_TOPICType:
stringThe topic that the writer will produce messages to.
pumps.kafka.meta.timeout
ENV: TYK_PMP_PUMPS_KAFKA_META_TIMEOUTType:
interface{}Timeout is the maximum number of seconds to wait for a connect or write to complete.
pumps.kafka.meta.compressed
ENV: TYK_PMP_PUMPS_KAFKA_META_COMPRESSEDType:
boolEnables the “github.com/golang/snappy” codec to compress Kafka messages. Defaults to
false.
pumps.kafka.meta.meta_data
ENV: TYK_PMP_PUMPS_KAFKA_META_METADATAType:
map[string]stringCan be used to set custom metadata inside the kafka message.
pumps.kafka.meta.use_ssl
ENV: TYK_PMP_PUMPS_KAFKA_META_USESSLType:
boolEnables SSL connection.
pumps.kafka.meta.ssl_insecure_skip_verify
ENV: TYK_PMP_PUMPS_KAFKA_META_SSLINSECURESKIPVERIFYType:
boolControls whether the pump client verifies the kafka server’s certificate chain and host name.
pumps.kafka.meta.ssl_cert_file
ENV: TYK_PMP_PUMPS_KAFKA_META_SSLCERTFILEType:
stringCan be used to set custom certificate file for authentication with kafka.
pumps.kafka.meta.ssl_key_file
ENV: TYK_PMP_PUMPS_KAFKA_META_SSLKEYFILEType:
stringCan be used to set custom key file for authentication with kafka.
pumps.kafka.meta.sasl_mechanism
ENV: TYK_PMP_PUMPS_KAFKA_META_SASLMECHANISMType:
stringSASL mechanism configuration. Only “plain” and “scram” are supported.
pumps.kafka.meta.sasl_username
ENV: TYK_PMP_PUMPS_KAFKA_META_USERNAMEType:
stringSASL username.
pumps.kafka.meta.sasl_password
ENV: TYK_PMP_PUMPS_KAFKA_META_PASSWORDType:
stringSASL password.
pumps.kafka.meta.sasl_algorithm
ENV: TYK_PMP_PUMPS_KAFKA_META_ALGORITHMType:
stringSASL algorithm: the algorithm used for the scram mechanism, either sha-512 or sha-256. Defaults to “sha-256”.
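Combining the fields above, a Kafka pump entry might look like this (broker, topic, and metadata values illustrative):

```json
{
  "pumps": {
    "kafka": {
      "type": "kafka",
      "meta": {
        "broker": ["localhost:9092"],
        "client_id": "tyk-pump",
        "topic": "tyk-analytics",
        "timeout": 10,
        "compressed": true,
        "use_ssl": false,
        "meta_data": {"environment": "production"}
      }
    }
  }
}
```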
pumps.kinesis.name
ENV: TYK_PMP_PUMPS_KINESIS_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.kinesis.type
ENV: TYK_PMP_PUMPS_KINESIS_TYPEType:
stringSets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.kinesis.filters
This feature adds a new configuration field in each pump called filters, with the following structure:
pumps.kinesis.filters.org_ids
ENV: TYK_PMP_PUMPS_KINESIS_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.kinesis.filters.api_ids
ENV: TYK_PMP_PUMPS_KINESIS_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.kinesis.filters.response_codes
ENV: TYK_PMP_PUMPS_KINESIS_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.kinesis.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_KINESIS_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.kinesis.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_KINESIS_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.kinesis.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_KINESIS_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.kinesis.timeout
ENV: TYK_PMP_PUMPS_KINESIS_TIMEOUTType:
intBy default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option
timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config as shown here; note that this example would configure a 5 second timeout:
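A five-second timeout on this pump might look like the following (sketch; sibling keys omitted):

```json
{
  "pumps": {
    "kinesis": {
      "type": "kinesis",
      "timeout": 5
    }
  }
}
```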
Note that you should not configure a timeout greater than the purge interval (purge_delay), as this would mean that data is purged before being written to the target data sink.
If no timeout is configured and the pump’s write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump’s write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.kinesis.omit_detailed_recording
ENV: TYK_PMP_PUMPS_KINESIS_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.kinesis.max_record_size
ENV: TYK_PMP_PUMPS_KINESIS_MAXRECORDSIZEType:
intDefines the maximum size (in bytes) for Raw Request and Raw Response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
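A pump-level cap of 1000 bytes (value illustrative) might be sketched as:

```json
{
  "pumps": {
    "kinesis": {
      "type": "kinesis",
      "max_record_size": 1000
    }
  }
}
```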
pumps.kinesis.ignore_fields
ENV: TYK_PMP_PUMPS_KINESIS_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.kinesis.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_KINESIS_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_KINESIS_META
pumps.kinesis.meta.StreamName
ENV: TYK_PMP_PUMPS_KINESIS_META_STREAMNAMEType:
stringA name to identify the stream. The stream name is scoped to the AWS account used by the application that creates the stream. It is also scoped by AWS Region. That is, two streams in two different AWS accounts can have the same name. Two streams in the same AWS account but in two different Regions can also have the same name.
pumps.kinesis.meta.Region
ENV: TYK_PMP_PUMPS_KINESIS_META_REGIONType:
stringAWS Region the Kinesis stream targets
pumps.kinesis.meta.BatchSize
ENV: TYK_PMP_PUMPS_KINESIS_META_BATCHSIZEType:
intEach PutRecords request (the function used in this pump) can support up to 500 records. Each record in the request can be as large as 1 MiB, up to a limit of 5 MiB for the entire request, including partition keys. Each shard can support writes of up to 1,000 records per second, up to a maximum data write total of 1 MiB per second.
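Putting the fields above together, a Kinesis pump entry might look like this (stream name, region, and batch size illustrative):

```json
{
  "pumps": {
    "kinesis": {
      "type": "kinesis",
      "meta": {
        "StreamName": "tyk-analytics-stream",
        "Region": "us-east-1",
        "BatchSize": 100
      }
    }
  }
}
```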
pumps.logzio.name
ENV: TYK_PMP_PUMPS_LOGZIO_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.logzio.type
ENV: TYK_PMP_PUMPS_LOGZIO_TYPEType:
stringSets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.logzio.filters
This feature adds a new configuration field in each pump called filters, with the following structure:
pumps.logzio.filters.org_ids
ENV: TYK_PMP_PUMPS_LOGZIO_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.logzio.filters.api_ids
ENV: TYK_PMP_PUMPS_LOGZIO_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.logzio.filters.response_codes
ENV: TYK_PMP_PUMPS_LOGZIO_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.logzio.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_LOGZIO_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.logzio.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_LOGZIO_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.logzio.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_LOGZIO_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.logzio.timeout
ENV: TYK_PMP_PUMPS_LOGZIO_TIMEOUTType:
intBy default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option
timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config as shown here; note that this example would configure a 5 second timeout:
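A five-second timeout on this pump might look like the following (sketch; sibling keys omitted):

```json
{
  "pumps": {
    "logzio": {
      "type": "logzio",
      "timeout": 5
    }
  }
}
```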
Note that you should not configure a timeout greater than the purge interval (purge_delay), as this would mean that data is purged before being written to the target data sink.
If no timeout is configured and the pump’s write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump’s write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.logzio.omit_detailed_recording
ENV: TYK_PMP_PUMPS_LOGZIO_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.logzio.max_record_size
ENV: TYK_PMP_PUMPS_LOGZIO_MAXRECORDSIZEType:
intDefines the maximum size (in bytes) for Raw Request and Raw Response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
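A pump-level cap of 1000 bytes (value illustrative) might be sketched as:

```json
{
  "pumps": {
    "logzio": {
      "type": "logzio",
      "max_record_size": 1000
    }
  }
}
```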
pumps.logzio.ignore_fields
ENV: TYK_PMP_PUMPS_LOGZIO_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.logzio.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_LOGZIO_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_LOGZIO_META
pumps.logzio.meta.check_disk_space
ENV: TYK_PMP_PUMPS_LOGZIO_META_CHECKDISKSPACEType:
boolConfigures the sender to check whether it crosses the maximum allowed disk usage. Default value is
true.
pumps.logzio.meta.disk_threshold
ENV: TYK_PMP_PUMPS_LOGZIO_META_DISKTHRESHOLDType:
intSet the disk queue threshold; once the threshold is crossed, the sender will not enqueue the received logs. Default value is
98 (percentage of disk).
pumps.logzio.meta.drain_duration
ENV: TYK_PMP_PUMPS_LOGZIO_META_DRAINDURATIONType:
stringSet drain duration (flush logs on disk). Default value is
3s.
pumps.logzio.meta.queue_dir
ENV: TYK_PMP_PUMPS_LOGZIO_META_QUEUEDIRType:
stringThe directory for the queue.
pumps.logzio.meta.token
ENV: TYK_PMP_PUMPS_LOGZIO_META_TOKENType:
stringToken for sending data to your logzio account.
pumps.logzio.meta.url
ENV: TYK_PMP_PUMPS_LOGZIO_META_URLType:
stringUse this if you do not want to use the default Logz.io URL, e.g. when using a proxy. Default is
https://listener.logz.io:8071.
pumps.moesif.name
ENV: TYK_PMP_PUMPS_MOESIF_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.moesif.type
ENV: TYK_PMP_PUMPS_MOESIF_TYPEType:
stringSets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.moesif.filters
This feature adds a new configuration field in each pump called filters, with the following structure:
pumps.moesif.filters.org_ids
ENV: TYK_PMP_PUMPS_MOESIF_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.moesif.filters.api_ids
ENV: TYK_PMP_PUMPS_MOESIF_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.moesif.filters.response_codes
ENV: TYK_PMP_PUMPS_MOESIF_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.moesif.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_MOESIF_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.moesif.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_MOESIF_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.moesif.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_MOESIF_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.moesif.timeout
ENV: TYK_PMP_PUMPS_MOESIF_TIMEOUTType:
intBy default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option
timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config as shown here; note that this example would configure a 5 second timeout:
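A five-second timeout on this pump might look like the following (sketch; sibling keys omitted):

```json
{
  "pumps": {
    "moesif": {
      "type": "moesif",
      "timeout": 5
    }
  }
}
```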
Note that you should not configure a timeout greater than the purge interval (purge_delay), as this would mean that data is purged before being written to the target data sink.
If no timeout is configured and the pump’s write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump’s write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.moesif.omit_detailed_recording
ENV: TYK_PMP_PUMPS_MOESIF_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.moesif.max_record_size
ENV: TYK_PMP_PUMPS_MOESIF_MAXRECORDSIZEType:
intDefines the maximum size (in bytes) for Raw Request and Raw Response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
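A pump-level cap of 1000 bytes (value illustrative) might be sketched as:

```json
{
  "pumps": {
    "moesif": {
      "type": "moesif",
      "max_record_size": 1000
    }
  }
}
```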
pumps.moesif.ignore_fields
ENV: TYK_PMP_PUMPS_MOESIF_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.moesif.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_MOESIF_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_MOESIF_META
pumps.moesif.meta.application_id
ENV: TYK_PMP_PUMPS_MOESIF_META_APPLICATIONIDType:
stringMoesif Application Id. You can find your Moesif Application Id in the Moesif Dashboard -> Top Right Menu -> API Keys. Moesif recommends creating separate Application Ids for each environment, such as Production, Staging, and Development, to keep data isolated.
pumps.moesif.meta.request_header_masks
ENV: TYK_PMP_PUMPS_MOESIF_META_REQUESTHEADERMASKSType:
[]stringAn option to mask a specific request header field.
pumps.moesif.meta.response_header_masks
ENV: TYK_PMP_PUMPS_MOESIF_META_RESPONSEHEADERMASKSType:
[]stringAn option to mask a specific response header field.
pumps.moesif.meta.request_body_masks
ENV: TYK_PMP_PUMPS_MOESIF_META_REQUESTBODYMASKSType:
[]stringAn option to mask a specific request body field.
pumps.moesif.meta.response_body_masks
ENV: TYK_PMP_PUMPS_MOESIF_META_RESPONSEBODYMASKSType:
[]stringAn option to mask a specific response body field.
pumps.moesif.meta.disable_capture_request_body
ENV: TYK_PMP_PUMPS_MOESIF_META_DISABLECAPTUREREQUESTBODYType:
boolAn option to disable logging of request body. Default value is
false.
pumps.moesif.meta.disable_capture_response_body
ENV: TYK_PMP_PUMPS_MOESIF_META_DISABLECAPTURERESPONSEBODYType:
boolAn option to disable logging of response body. Default value is
false.
pumps.moesif.meta.user_id_header
ENV: TYK_PMP_PUMPS_MOESIF_META_USERIDHEADERType:
stringAn optional field name to identify the User from a request or response header.
pumps.moesif.meta.company_id_header
ENV: TYK_PMP_PUMPS_MOESIF_META_COMPANYIDHEADERType:
stringAn optional field name to identify the Company (Account) from a request or response header.
pumps.moesif.meta.enable_bulk
ENV: TYK_PMP_PUMPS_MOESIF_META_ENABLEBULKType:
boolSet this to
true to enable bulk_config.
pumps.moesif.meta.bulk_config
ENV: TYK_PMP_PUMPS_MOESIF_META_BULKCONFIGType:
map[string]interface{}Batch writing trigger configuration.
"event_queue_size"- (optional) An optional field name which specify the maximum number of events to hold in queue before sending to Moesif. In case of network issues when not able to connect/send event to Moesif, skips adding new events to the queue to prevent memory overflow. Type: int. Default value is10000."batch_size"- (optional) An optional field name which specify the maximum batch size when sending to Moesif. Type: int. Default value is200."timer_wake_up_seconds"- (optional) An optional field which specifies a time (every n seconds) how often background thread runs to send events to moesif. Type: int. Default value is2seconds.
pumps.moesif.meta.authorization_header_name
ENV: TYK_PMP_PUMPS_MOESIF_META_AUTHORIZATIONHEADERNAMEType:
stringAn optional request header field name used to identify the User in Moesif. Default value is
authorization.
pumps.moesif.meta.authorization_user_id_field
ENV: TYK_PMP_PUMPS_MOESIF_META_AUTHORIZATIONUSERIDFIELDType:
stringAn optional field name used to parse the User from the authorization header in Moesif. Default value is
sub.
pumps.mongo.name
ENV: TYK_PMP_PUMPS_MONGO_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.mongo.type
ENV: TYK_PMP_PUMPS_MONGO_TYPEType:
stringSets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.mongo.filters
This feature adds a new configuration field in each pump called filters. Its structure is described by the following fields:
pumps.mongo.filters.org_ids
ENV: TYK_PMP_PUMPS_MONGO_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.mongo.filters.api_ids
ENV: TYK_PMP_PUMPS_MONGO_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.mongo.filters.response_codes
ENV: TYK_PMP_PUMPS_MONGO_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.mongo.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_MONGO_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.mongo.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_MONGO_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.mongo.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_MONGO_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.mongo.timeout
ENV: TYK_PMP_PUMPS_MONGO_TIMEOUT
Type: int
By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config; for example, "timeout": 5 would configure a 5 second timeout.
Take care not to configure a timeout greater than the purging loop frequency (purge_delay), as this will mean that data is purged before being written to the target data sink.
If there is no timeout configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If there is a timeout configured, but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
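As a sketch, the 5 second timeout described above could be configured on the Mongo pump like this (the collection name and connection URL are illustrative placeholders):

```json
{
  "pumps": {
    "mongo": {
      "type": "mongo",
      "timeout": 5,
      "meta": {
        "collection_name": "tyk_analytics",
        "mongo_url": "mongodb://localhost:27017/tyk_analytics"
      }
    }
  }
}
```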
pumps.mongo.omit_detailed_recording
ENV: TYK_PMP_PUMPS_MONGO_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.mongo.max_record_size
ENV: TYK_PMP_PUMPS_MONGO_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for raw request and raw response logs. Defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.mongo.ignore_fields
ENV: TYK_PMP_PUMPS_MONGO_IGNOREFIELDS
Type: []string
Defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the database, or data that you don't really need. The field names must be the same as the JSON tags of the analytics record fields. For example: ["api_key", "api_version"].
pumps.mongo.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_MONGO_META_ENVPREFIX
Type: string
Prefix for the environment variables that will be used to override the configuration. Defaults to:
TYK_PMP_PUMPS_MONGO_META for Mongo Pump
TYK_PMP_PUMPS_UPTIME_META for Uptime Pump
TYK_PMP_PUMPS_MONGOAGGREGATE_META for Mongo Aggregate Pump
TYK_PMP_PUMPS_MONGOSELECTIVE_META for Mongo Selective Pump
TYK_PMP_PUMPS_MONGOGRAPH_META for Mongo Graph Pump
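As a sketch of the override mechanism, any meta field can be set from the environment using the prefix above instead of editing the config file (the connection URL here is a placeholder):

```shell
# Environment variables take precedence over the values in the configuration file.
# TYK_PMP_PUMPS_MONGO_META_ is the default prefix for the Mongo pump.
export TYK_PMP_PUMPS_MONGO_META_MONGOURL="mongodb://user:pass@mongo.internal:27017/tyk_analytics"
export TYK_PMP_PUMPS_MONGO_META_MONGOUSESSL=true
```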
pumps.mongo.meta.mongo_url
ENV: TYK_PMP_PUMPS_MONGO_META_MONGOURL
Type: string
The full URL to your MongoDB instance. This can be a clustered instance if necessary and should include the database and username/password data.
pumps.mongo.meta.mongo_use_ssl
ENV: TYK_PMP_PUMPS_MONGO_META_MONGOUSESSL
Type: bool
Set to true to enable Mongo SSL connection.
pumps.mongo.meta.mongo_ssl_insecure_skip_verify
ENV: TYK_PMP_PUMPS_MONGO_META_MONGOSSLINSECURESKIPVERIFY
Type: bool
Allows the use of self-signed certificates when connecting to an encrypted MongoDB database.
pumps.mongo.meta.mongo_ssl_allow_invalid_hostnames
ENV: TYK_PMP_PUMPS_MONGO_META_MONGOSSLALLOWINVALIDHOSTNAMES
Type: bool
Ignores the hostname check when it differs from the original (for example with SSH tunneling). The rest of the TLS verification will still be performed.
pumps.mongo.meta.mongo_ssl_ca_file
ENV: TYK_PMP_PUMPS_MONGO_META_MONGOSSLCAFILE
Type: string
Path to the PEM file with trusted root certificates.
pumps.mongo.meta.mongo_ssl_pem_keyfile
ENV: TYK_PMP_PUMPS_MONGO_META_MONGOSSLPEMKEYFILE
Type: string
Path to the PEM file which contains both the client certificate and the private key. This is required for Mutual TLS.
pumps.mongo.meta.mongo_db_type
ENV: TYK_PMP_PUMPS_MONGO_META_MONGODBTYPE
Type: int
Specifies the Mongo DB type: 0 means standard MongoDB, 1 means AWS DocumentDB, and 2 means CosmosDB. Defaults to standard MongoDB (0).
pumps.mongo.meta.omit_index_creation
ENV: TYK_PMP_PUMPS_MONGO_META_OMITINDEXCREATION
Type: bool
Set to true to disable the default Tyk index creation.
pumps.mongo.meta.mongo_session_consistency
ENV: TYK_PMP_PUMPS_MONGO_META_MONGOSESSIONCONSISTENCY
Type: string
Sets the consistency mode for the session. The valid values are: strong, monotonic, eventual. Defaults to strong.
pumps.mongo.meta.driver
ENV: TYK_PMP_PUMPS_MONGO_META_MONGODRIVERTYPE
Type: string
The type of the driver (library) to use. The valid values are: mongo-go and mgo. Since v1.9, the default driver is mongo-go. Check out this guide to learn about MongoDB drivers supported by Tyk Pump.
pumps.mongo.meta.mongo_direct_connection
ENV: TYK_PMP_PUMPS_MONGO_META_MONGODIRECTCONNECTION
Type: bool
Informs whether to establish connections only with the specified seed servers, or to obtain information for the whole cluster and establish connections with further servers too. If true, the client will only connect to the host provided in the connection string and won't attempt to discover other hosts in the cluster. Useful when network restrictions prevent discovery, such as with SSH tunneling. Defaults to false.
pumps.mongo.meta.collection_name
ENV: TYK_PMP_PUMPS_MONGO_META_COLLECTIONNAME
Type: string
Specifies the Mongo collection name.
pumps.mongo.meta.max_insert_batch_size_bytes
ENV: TYK_PMP_PUMPS_MONGO_META_MAXINSERTBATCHSIZEBYTES
Type: int
Maximum insert batch size for the Mongo Selective pump. If the batch being written surpasses this value, it will be sent in multiple batches. Defaults to 10MB.
pumps.mongo.meta.max_document_size_bytes
ENV: TYK_PMP_PUMPS_MONGO_META_MAXDOCUMENTSIZEBYTES
Type: int
Maximum document size. If a document exceeds this value, it will be skipped. Defaults to 10MB.
pumps.mongo.meta.collection_cap_max_size_bytes
ENV: TYK_PMP_PUMPS_MONGO_META_COLLECTIONCAPMAXSIZEBYTES
Type: int
Maximum size, in bytes, of the capped collection on 64-bit architectures. Defaults to 5GB.
pumps.mongo.meta.collection_cap_enable
ENV: TYK_PMP_PUMPS_MONGO_META_COLLECTIONCAPENABLE
Type: bool
Enables collection capping. This is used to set a maximum size for the collection.
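Drawing the fields above together, a sketch of a Mongo pump with TLS and a capped collection might look like this (all values are placeholders or illustrative; the 1073741824 cap is 1GB):

```json
{
  "pumps": {
    "mongo": {
      "type": "mongo",
      "meta": {
        "collection_name": "tyk_analytics",
        "mongo_url": "mongodb://user:pass@mongo.internal:27017/tyk_analytics",
        "mongo_use_ssl": true,
        "mongo_ssl_ca_file": "/etc/ssl/mongo-ca.pem",
        "mongo_db_type": 0,
        "driver": "mongo-go",
        "collection_cap_enable": true,
        "collection_cap_max_size_bytes": 1073741824
      }
    }
  }
}
```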
pumps.mongoaggregate.name
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_NAME
Type: string
The name of the pump. This is used to identify the pump in the logs. Deprecated, use type instead.
pumps.mongoaggregate.type
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv, elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus, logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph, sql-graph, sql-graph-aggregate, resurfaceio.
pumps.mongoaggregate.filters
This feature adds a new configuration field in each pump called filters. Its structure is described by the following fields:
pumps.mongoaggregate.filters.org_ids
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.mongoaggregate.filters.api_ids
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.mongoaggregate.filters.response_codes
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.mongoaggregate.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.mongoaggregate.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.mongoaggregate.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.mongoaggregate.timeout
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_TIMEOUT
Type: int
By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config; for example, "timeout": 5 would configure a 5 second timeout.
Take care not to configure a timeout greater than the purging loop frequency (purge_delay), as this will mean that data is purged before being written to the target data sink.
If there is no timeout configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If there is a timeout configured, but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.mongoaggregate.omit_detailed_recording
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.mongoaggregate.max_record_size
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for raw request and raw response logs. Defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.mongoaggregate.ignore_fields
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_IGNOREFIELDS
Type: []string
Defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the database, or data that you don't really need. The field names must be the same as the JSON tags of the analytics record fields. For example: ["api_key", "api_version"].
pumps.mongoaggregate.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_ENVPREFIX
Type: string
Prefix for the environment variables that will be used to override the configuration. Defaults to:
TYK_PMP_PUMPS_MONGO_META for Mongo Pump
TYK_PMP_PUMPS_UPTIME_META for Uptime Pump
TYK_PMP_PUMPS_MONGOAGGREGATE_META for Mongo Aggregate Pump
TYK_PMP_PUMPS_MONGOSELECTIVE_META for Mongo Selective Pump
TYK_PMP_PUMPS_MONGOGRAPH_META for Mongo Graph Pump
pumps.mongoaggregate.meta.mongo_url
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOURL
Type: string
The full URL to your MongoDB instance. This can be a clustered instance if necessary and should include the database and username/password data.
pumps.mongoaggregate.meta.mongo_use_ssl
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOUSESSL
Type: bool
Set to true to enable Mongo SSL connection.
pumps.mongoaggregate.meta.mongo_ssl_insecure_skip_verify
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOSSLINSECURESKIPVERIFY
Type: bool
Allows the use of self-signed certificates when connecting to an encrypted MongoDB database.
pumps.mongoaggregate.meta.mongo_ssl_allow_invalid_hostnames
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOSSLALLOWINVALIDHOSTNAMES
Type: bool
Ignores the hostname check when it differs from the original (for example with SSH tunneling). The rest of the TLS verification will still be performed.
pumps.mongoaggregate.meta.mongo_ssl_ca_file
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOSSLCAFILE
Type: string
Path to the PEM file with trusted root certificates.
pumps.mongoaggregate.meta.mongo_ssl_pem_keyfile
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOSSLPEMKEYFILE
Type: string
Path to the PEM file which contains both the client certificate and the private key. This is required for Mutual TLS.
pumps.mongoaggregate.meta.mongo_db_type
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGODBTYPE
Type: int
Specifies the Mongo DB type: 0 means standard MongoDB, 1 means AWS DocumentDB, and 2 means CosmosDB. Defaults to standard MongoDB (0).
pumps.mongoaggregate.meta.omit_index_creation
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_OMITINDEXCREATION
Type: bool
Set to true to disable the default Tyk index creation.
pumps.mongoaggregate.meta.mongo_session_consistency
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGOSESSIONCONSISTENCY
Type: string
Sets the consistency mode for the session. The valid values are: strong, monotonic, eventual. Defaults to strong.
pumps.mongoaggregate.meta.driver
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGODRIVERTYPE
Type: string
The type of the driver (library) to use. The valid values are: mongo-go and mgo. Since v1.9, the default driver is mongo-go. Check out this guide to learn about MongoDB drivers supported by Tyk Pump.
pumps.mongoaggregate.meta.mongo_direct_connection
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_MONGODIRECTCONNECTION
Type: bool
Informs whether to establish connections only with the specified seed servers, or to obtain information for the whole cluster and establish connections with further servers too. If true, the client will only connect to the host provided in the connection string and won't attempt to discover other hosts in the cluster. Useful when network restrictions prevent discovery, such as with SSH tunneling. Defaults to false.
pumps.mongoaggregate.meta.use_mixed_collection
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_USEMIXEDCOLLECTION
Type: bool
If set to true, your pump will store analytics in both your organization-defined collections z_tyk_analyticz_aggregate_{ORG ID} and your org-less tyk_analytics_aggregates collection. When set to false, your pump will only store analytics in your org-defined collection.
pumps.mongoaggregate.meta.track_all_paths
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_TRACKALLPATHS
Type: bool
Specifies whether aggregated data should be stored for all endpoints. Defaults to false, which means aggregated data is only stored for tracked endpoints.
pumps.mongoaggregate.meta.ignore_tag_prefix_list
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_IGNORETAGPREFIXLIST
Type: []string
Specifies prefixes of tags that should be ignored.
pumps.mongoaggregate.meta.threshold_len_tag_list
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_THRESHOLDLENTAGLIST
Type: int
Determines the threshold for the number of tags in an aggregation. If the number of tags exceeds the threshold, an alert is printed. Defaults to 1000.
pumps.mongoaggregate.meta.store_analytics_per_minute
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_STOREANALYTICSPERMINUTE
Type: bool
Determines whether the aggregations should be made per minute (true) or per hour (false).
pumps.mongoaggregate.meta.aggregation_time
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_AGGREGATIONTIME
Type: int
Determines the period over which aggregations are made, in minutes. Defaults to 60 (the maximum); the minimum is 1. If store_analytics_per_minute is set to true, this field is ignored.
pumps.mongoaggregate.meta.enable_aggregate_self_healing
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_ENABLEAGGREGATESELFHEALING
Type: bool
Determines whether self-healing is activated. Self-healing allows the pump to handle Mongo's document max-size errors by creating a new document when the maximum size is reached. It also halves the aggregation_time value to avoid the same error in the future.
pumps.mongoaggregate.meta.ignore_aggregations
ENV: TYK_PMP_PUMPS_MONGOAGGREGATE_META_IGNOREAGGREGATIONSLIST
Type: []string
This list determines which aggregations will be dropped and not stored in the collection. Possible values are: APIID, errors, versions, apikeys, oauthids, geo, tags, endpoints, keyendpoints, oauthendpoints, and apiendpoints.
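A sketch of a Mongo Aggregate pump using the fields above (the pump key name, connection URL, and tag prefixes are illustrative; per-hour aggregation over the default 60 minute window):

```json
{
  "pumps": {
    "mongo-pump-aggregate": {
      "type": "mongo-pump-aggregate",
      "meta": {
        "mongo_url": "mongodb://user:pass@mongo.internal:27017/tyk_analytics",
        "use_mixed_collection": true,
        "ignore_tag_prefix_list": ["key-", "internal-"],
        "store_analytics_per_minute": false,
        "aggregation_time": 60,
        "ignore_aggregations": ["oauthids", "geo"]
      }
    }
  }
}
```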
pumps.mongoselective.name
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_NAME
Type: string
The name of the pump. This is used to identify the pump in the logs. Deprecated, use type instead.
pumps.mongoselective.type
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv, elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus, logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph, sql-graph, sql-graph-aggregate, resurfaceio.
pumps.mongoselective.filters
This feature adds a new configuration field in each pump called filters. Its structure is described by the following fields:
pumps.mongoselective.filters.org_ids
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.mongoselective.filters.api_ids
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.mongoselective.filters.response_codes
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.mongoselective.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.mongoselective.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.mongoselective.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.mongoselective.timeout
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_TIMEOUT
Type: int
By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config; for example, "timeout": 5 would configure a 5 second timeout.
Take care not to configure a timeout greater than the purging loop frequency (purge_delay), as this will mean that data is purged before being written to the target data sink.
If there is no timeout configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If there is a timeout configured, but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.mongoselective.omit_detailed_recording
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.mongoselective.max_record_size
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for raw request and raw response logs. Defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.mongoselective.ignore_fields
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_IGNOREFIELDS
Type: []string
Defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the database, or data that you don't really need. The field names must be the same as the JSON tags of the analytics record fields. For example: ["api_key", "api_version"].
pumps.mongoselective.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_ENVPREFIX
Type: string
Prefix for the environment variables that will be used to override the configuration. Defaults to:
TYK_PMP_PUMPS_MONGO_META for Mongo Pump
TYK_PMP_PUMPS_UPTIME_META for Uptime Pump
TYK_PMP_PUMPS_MONGOAGGREGATE_META for Mongo Aggregate Pump
TYK_PMP_PUMPS_MONGOSELECTIVE_META for Mongo Selective Pump
TYK_PMP_PUMPS_MONGOGRAPH_META for Mongo Graph Pump
pumps.mongoselective.meta.mongo_url
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOURL
Type: string
The full URL to your MongoDB instance. This can be a clustered instance if necessary and should include the database and username/password data.
pumps.mongoselective.meta.mongo_use_ssl
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOUSESSL
Type: bool
Set to true to enable Mongo SSL connection.
pumps.mongoselective.meta.mongo_ssl_insecure_skip_verify
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOSSLINSECURESKIPVERIFY
Type: bool
Allows the use of self-signed certificates when connecting to an encrypted MongoDB database.
pumps.mongoselective.meta.mongo_ssl_allow_invalid_hostnames
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOSSLALLOWINVALIDHOSTNAMES
Type: bool
Ignores the hostname check when it differs from the original (for example with SSH tunneling). The rest of the TLS verification will still be performed.
pumps.mongoselective.meta.mongo_ssl_ca_file
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOSSLCAFILE
Type: string
Path to the PEM file with trusted root certificates.
pumps.mongoselective.meta.mongo_ssl_pem_keyfile
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOSSLPEMKEYFILE
Type: string
Path to the PEM file which contains both the client certificate and the private key. This is required for Mutual TLS.
pumps.mongoselective.meta.mongo_db_type
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGODBTYPE
Type: int
Specifies the Mongo DB type: 0 means standard MongoDB, 1 means AWS DocumentDB, and 2 means CosmosDB. Defaults to standard MongoDB (0).
pumps.mongoselective.meta.omit_index_creation
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_OMITINDEXCREATION
Type: bool
Set to true to disable the default Tyk index creation.
pumps.mongoselective.meta.mongo_session_consistency
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGOSESSIONCONSISTENCY
Type: string
Sets the consistency mode for the session. The valid values are: strong, monotonic, eventual. Defaults to strong.
pumps.mongoselective.meta.driver
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGODRIVERTYPE
Type: string
The type of the driver (library) to use. The valid values are: mongo-go and mgo. Since v1.9, the default driver is mongo-go. Check out this guide to learn about MongoDB drivers supported by Tyk Pump.
pumps.mongoselective.meta.mongo_direct_connection
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MONGODIRECTCONNECTION
Type: bool
Informs whether to establish connections only with the specified seed servers, or to obtain information for the whole cluster and establish connections with further servers too. If true, the client will only connect to the host provided in the connection string and won't attempt to discover other hosts in the cluster. Useful when network restrictions prevent discovery, such as with SSH tunneling. Defaults to false.
pumps.mongoselective.meta.max_insert_batch_size_bytes
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MAXINSERTBATCHSIZEBYTES
Type: int
Maximum insert batch size for the Mongo Selective pump. If the batch being written surpasses this value, it will be sent in multiple batches. Defaults to 10MB.
pumps.mongoselective.meta.max_document_size_bytes
ENV: TYK_PMP_PUMPS_MONGOSELECTIVE_META_MAXDOCUMENTSIZEBYTES
Type: int
Maximum document size. If a document exceeds this value, it will be skipped. Defaults to 10MB.
pumps.prometheus.name
ENV: TYK_PMP_PUMPS_PROMETHEUS_NAME
Type: string
The name of the pump. This is used to identify the pump in the logs. Deprecated, use type instead.
pumps.prometheus.type
ENV: TYK_PMP_PUMPS_PROMETHEUS_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv, elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus, logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph, sql-graph, sql-graph-aggregate, resurfaceio.
pumps.prometheus.filters
This feature adds a new configuration field in each pump called filters. Its structure is described by the following fields:
pumps.prometheus.filters.org_ids
ENV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.prometheus.filters.api_ids
ENV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.prometheus.filters.response_codes
ENV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.prometheus.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.prometheus.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.prometheus.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_PROMETHEUS_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.prometheus.timeout
ENV: TYK_PMP_PUMPS_PROMETHEUS_TIMEOUT
Type: int
By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config; for example, "timeout": 5 would configure a 5 second timeout.
Take care not to configure a timeout greater than the purging loop frequency (purge_delay), as this will mean that data is purged before being written to the target data sink.
If there is no timeout configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If there is a timeout configured, but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.prometheus.omit_detailed_recording
ENV: TYK_PMP_PUMPS_PROMETHEUS_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing the raw_request and raw_response fields for each request in pumps. Defaults to false.
pumps.prometheus.max_record_size
ENV: TYK_PMP_PUMPS_PROMETHEUS_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for raw request and raw response logs. Defaults to 0; if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.prometheus.ignore_fields
ENV: TYK_PMP_PUMPS_PROMETHEUS_IGNOREFIELDS
Type: []string
Defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the database, or data that you don't really need. The field names must be the same as the JSON tags of the analytics record fields. For example: ["api_key", "api_version"].
pumps.prometheus.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_PROMETHEUS_META_ENVPREFIX
Type: string
Prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_PROMETHEUS_META.
pumps.prometheus.meta.listen_address
ENV: TYK_PMP_PUMPS_PROMETHEUS_META_ADDR
Type: string
The address that the Prometheus exporter will listen on, in host:port format. For example, localhost:9090.
pumps.prometheus.meta.path
ENV: TYK_PMP_PUMPS_PROMETHEUS_META_PATH
Type: string
The path to the Prometheus collection endpoint. For example, /metrics.
pumps.prometheus.meta.aggregate_observations
ENV: TYK_PMP_PUMPS_PROMETHEUS_META_AGGREGATEOBSERVATIONS
Type: bool
Enables an experimental feature that aggregates the histogram metrics request time values before exposing them to Prometheus. Enabling this will reduce the CPU usage of your Prometheus pump, but you will lose histogram precision. Experimental.
pumps.prometheus.meta.disabled_metrics
ENV: TYK_PMP_PUMPS_PROMETHEUS_META_DISABLEDMETRICS
Type: []string
Metrics to exclude from exposition. Currently, this excludes only the base metrics.
pumps.prometheus.meta.track_all_paths
ENV: TYK_PMP_PUMPS_PROMETHEUS_META_TRACKALLPATHS
Type: bool
Specifies whether aggregated metrics should be exposed for all endpoints. Defaults to false, which means all API endpoints will be counted as 'unknown' unless the API uses the track endpoint plugin.
pumps.prometheus.meta.custom_metrics
ENV: TYK_PMP_PUMPS_PROMETHEUS_META_CUSTOMMETRICS
Type: CustomMetrics
Custom Prometheus metrics.
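A sketch of a Prometheus pump exposing metrics on port 9090 follows. The custom_metrics field is omitted because its structure is not described in this section:

```json
{
  "pumps": {
    "prometheus": {
      "type": "prometheus",
      "meta": {
        "listen_address": "localhost:9090",
        "path": "/metrics",
        "aggregate_observations": true
      }
    }
  }
}
```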
pumps.splunk.name
ENV: TYK_PMP_PUMPS_SPLUNK_NAME
Type: string
The name of the pump. This is used to identify the pump in the logs. Deprecated, use type instead.
pumps.splunk.type
ENV: TYK_PMP_PUMPS_SPLUNK_TYPE
Type: string
Sets the pump type. This is needed when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv, elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus, logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph, sql-graph, sql-graph-aggregate, resurfaceio.
pumps.splunk.filters
This feature adds a new configuration field in each pump called filters. Its structure is described by the following fields:
pumps.splunk.filters.org_ids
ENV: TYK_PMP_PUMPS_SPLUNK_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.splunk.filters.api_ids
ENV: TYK_PMP_PUMPS_SPLUNK_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.splunk.filters.response_codes
ENV: TYK_PMP_PUMPS_SPLUNK_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.splunk.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_SPLUNK_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.splunk.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_SPLUNK_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.splunk.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_SPLUNK_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
pumps.splunk.timeout
ENV: TYK_PMP_PUMPS_SPLUNK_TIMEOUTType:
intBy default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option
timeout.
If you have deployed multiple pumps, then you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config as shown here; note that this example would configure a 5 second timeout:
Note that the timeout should be lower than your purging loop frequency (purge_delay), as otherwise data may be purged before being written to the target data sink.
If no timeout is configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.splunk.omit_detailed_recording
ENV: TYK_PMP_PUMPS_SPLUNK_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.splunk.max_record_size
ENV: TYK_PMP_PUMPS_SPLUNK_MAXRECORDSIZEType:
int. Defines the maximum size (in bytes) for raw_request and raw_response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:
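A sketch of setting this at pump level (pump name and values illustrative):

```json
{
  "pumps": {
    "splunk": {
      "type": "splunk",
      "max_record_size": 1000,
      "meta": {
        "collector_token": "<token>"
      }
    }
  }
}
```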
pumps.splunk.ignore_fields
ENV: TYK_PMP_PUMPS_SPLUNK_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.splunk.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_SPLUNK_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_SPLUNK_META
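Since every key under pumps.splunk.meta maps to an environment variable with this prefix, an override can be sketched as follows (token value illustrative):

```shell
# Override Splunk meta fields without editing the configuration file;
# environment variables take precedence over file values.
export TYK_PMP_PUMPS_SPLUNK_META_COLLECTORTOKEN="example-token"
export TYK_PMP_PUMPS_SPLUNK_META_COLLECTORURL="https://splunk:8088/services/collector/event"
echo "$TYK_PMP_PUMPS_SPLUNK_META_COLLECTORTOKEN"
```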
pumps.splunk.meta.collector_token
ENV: TYK_PMP_PUMPS_SPLUNK_META_COLLECTORTOKENType:
string. The token used to authorize requests to the Splunk HTTP Event Collector.
pumps.splunk.meta.collector_url
ENV: TYK_PMP_PUMPS_SPLUNK_META_COLLECTORURLType:
string. Endpoint the Pump will send analytics to. Should look something like:
https://splunk:8088/services/collector/event.
pumps.splunk.meta.ssl_insecure_skip_verify
ENV: TYK_PMP_PUMPS_SPLUNK_META_SSLINSECURESKIPVERIFYType:
boolControls whether the pump client verifies the Splunk server’s certificate chain and host name.
pumps.splunk.meta.ssl_cert_file
ENV: TYK_PMP_PUMPS_SPLUNK_META_SSLCERTFILEType:
stringSSL cert file location.
pumps.splunk.meta.ssl_key_file
ENV: TYK_PMP_PUMPS_SPLUNK_META_SSLKEYFILEType:
stringSSL cert key location.
pumps.splunk.meta.ssl_server_name
ENV: TYK_PMP_PUMPS_SPLUNK_META_SSLSERVERNAMEType:
stringSSL Server name used in the TLS connection.
pumps.splunk.meta.obfuscate_api_keys
ENV: TYK_PMP_PUMPS_SPLUNK_META_OBFUSCATEAPIKEYSType:
bool. Controls whether the pump client should hide the API key. If you still need a substring of the value, check the next option. Default value is
false.
pumps.splunk.meta.obfuscate_api_keys_length
ENV: TYK_PMP_PUMPS_SPLUNK_META_OBFUSCATEAPIKEYSLENGTHType:
int. Defines the number of characters to keep from the end of the API key. The
obfuscate_api_keys
option should be set to true. Default value is 0.
pumps.splunk.meta.fields
ENV: TYK_PMP_PUMPS_SPLUNK_META_FIELDSType:
[]stringDefine which Analytics fields should participate in the Splunk event. Check the available fields in the example below. Default value is
["method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id", "raw_request", "request_time", "raw_response", "ip_address"].
pumps.splunk.meta.ignore_tag_prefix_list
ENV: TYK_PMP_PUMPS_SPLUNK_META_IGNORETAGPREFIXLISTType:
[]stringChoose which tags to be ignored by the Splunk Pump. Keep in mind that the tag name and value are hyphenated. Default value is
[].
pumps.splunk.meta.enable_batch
ENV: TYK_PMP_PUMPS_SPLUNK_META_ENABLEBATCHType:
bool. If this is set to
true, the pump will send the analytics records to Splunk in batches.
Default value is false.
pumps.splunk.meta.max_retries
ENV: TYK_PMP_PUMPS_SPLUNK_META_MAXRETRIESType:
uint64. MaxRetries represents the maximum number of retries to attempt if sending requests to the Splunk HEC fails. Default value is
0.
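Putting the Splunk meta fields above together, a minimal configuration sketch (token and URL illustrative):

```json
{
  "pumps": {
    "splunk": {
      "type": "splunk",
      "meta": {
        "collector_token": "<your-collector-token>",
        "collector_url": "https://splunk:8088/services/collector/event",
        "ssl_insecure_skip_verify": false,
        "enable_batch": true,
        "fields": ["method", "path", "response_code", "api_id", "org_id"]
      }
    }
  }
}
```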
pumps.sql.name
ENV: TYK_PMP_PUMPS_SQL_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.sql.type
ENV: TYK_PMP_PUMPS_SQL_TYPEType:
string. Sets the pump type. This is required when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.sql.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
pumps.sql.filters.org_ids
ENV: TYK_PMP_PUMPS_SQL_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.sql.filters.api_ids
ENV: TYK_PMP_PUMPS_SQL_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.sql.filters.response_codes
ENV: TYK_PMP_PUMPS_SQL_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.sql.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_SQL_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.sql.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_SQL_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.sql.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_SQL_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.sql.timeout
ENV: TYK_PMP_PUMPS_SQL_TIMEOUTType:
int. By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config; for example, a value of 5 would configure a 5 second timeout.
Note that the timeout should be lower than your purging loop frequency (purge_delay), as otherwise data may be purged before being written to the target data sink.
If no timeout is configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.sql.omit_detailed_recording
ENV: TYK_PMP_PUMPS_SQL_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.sql.max_record_size
ENV: TYK_PMP_PUMPS_SQL_MAXRECORDSIZEType:
int. Defines the maximum size (in bytes) for raw_request and raw_response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.sql.ignore_fields
ENV: TYK_PMP_PUMPS_SQL_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.sql.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_SQL_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_SQL_META
pumps.sql.meta.type
ENV: TYK_PMP_PUMPS_SQL_META_TYPEType:
stringThe supported and tested types are
sqlite and postgres.
pumps.sql.meta.connection_string
ENV: TYK_PMP_PUMPS_SQL_META_CONNECTIONSTRINGType:
stringSpecifies the connection string to the database.
pumps.sql.meta.postgres
Postgres configurations.
pumps.sql.meta.postgres.prefer_simple_protocol
ENV: TYK_PMP_PUMPS_SQL_META_POSTGRES_PREFERSIMPLEPROTOCOLType:
boolDisables implicit prepared statement usage.
pumps.sql.meta.mysql
Mysql configurations.
pumps.sql.meta.mysql.default_string_size
ENV: TYK_PMP_PUMPS_SQL_META_MYSQL_DEFAULTSTRINGSIZEType:
uintDefault size for string fields. Defaults to
256.
pumps.sql.meta.mysql.disable_datetime_precision
ENV: TYK_PMP_PUMPS_SQL_META_MYSQL_DISABLEDATETIMEPRECISIONType:
bool. Disables datetime precision, which is not supported before MySQL 5.6.
pumps.sql.meta.mysql.dont_support_rename_index
ENV: TYK_PMP_PUMPS_SQL_META_MYSQL_DONTSUPPORTRENAMEINDEXType:
bool. Drops and re-creates the index when renaming it, since renaming an index is not supported before MySQL 5.7 and in MariaDB.
pumps.sql.meta.mysql.dont_support_rename_column
ENV: TYK_PMP_PUMPS_SQL_META_MYSQL_DONTSUPPORTRENAMECOLUMNType:
bool. Uses a change statement when renaming a column, since renaming a column is not supported before MySQL 8 and in MariaDB.
pumps.sql.meta.mysql.skip_initialize_with_version
ENV: TYK_PMP_PUMPS_SQL_META_MYSQL_SKIPINITIALIZEWITHVERSIONType:
bool. Set to true to skip auto-configuration based on the current MySQL version.
pumps.sql.meta.table_sharding
ENV: TYK_PMP_PUMPS_SQL_META_TABLESHARDINGType:
bool. Specifies whether all the analytics records are going to be stored in one table or in multiple tables (one per day). By default,
false. If false, all the records are going to be
stored in the tyk_analytics table. Instead, if it's true, all the records of the day are
going to be stored in the tyk_analytics_YYYYMMDD table, where YYYYMMDD is going to change
depending on the date.
pumps.sql.meta.log_level
ENV: TYK_PMP_PUMPS_SQL_META_LOGLEVELType:
string. Specifies the SQL log verbosity. The possible values are:
info, error and warning. By
default, the value is silent, which means that it won't log any SQL query.
pumps.sql.meta.batch_size
ENV: TYK_PMP_PUMPS_SQL_META_BATCHSIZEType:
int. Specifies the number of records written in each batch. By default, it writes at most 1000 records per batch.
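The SQL meta options above can be combined as in this sketch (connection string and values illustrative):

```json
{
  "pumps": {
    "sql": {
      "type": "sql",
      "meta": {
        "type": "postgres",
        "connection_string": "host=localhost port=5432 user=admin dbname=tyk_analytics password=secret",
        "table_sharding": false,
        "log_level": "warning",
        "batch_size": 1000
      }
    }
  }
}
```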
pumps.sqlaggregate.name
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.sqlaggregate.type
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_TYPEType:
string. Sets the pump type. This is required when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.sqlaggregate.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
pumps.sqlaggregate.filters.org_ids
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.sqlaggregate.filters.api_ids
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.sqlaggregate.filters.response_codes
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.sqlaggregate.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.sqlaggregate.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.sqlaggregate.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.sqlaggregate.timeout
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_TIMEOUTType:
int. By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config; for example, a value of 5 would configure a 5 second timeout.
Note that the timeout should be lower than your purging loop frequency (purge_delay), as otherwise data may be purged before being written to the target data sink.
If no timeout is configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.sqlaggregate.omit_detailed_recording
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.sqlaggregate.max_record_size
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_MAXRECORDSIZEType:
int. Defines the maximum size (in bytes) for raw_request and raw_response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.sqlaggregate.ignore_fields
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.sqlaggregate.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_SQLAGGREGATE_META
pumps.sqlaggregate.meta.type
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_TYPEType:
stringThe supported and tested types are
sqlite and postgres.
pumps.sqlaggregate.meta.connection_string
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_CONNECTIONSTRINGType:
stringSpecifies the connection string to the database.
pumps.sqlaggregate.meta.postgres
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_POSTGRESType:
PostgresConfigPostgres configurations.
pumps.sqlaggregate.meta.postgres.prefer_simple_protocol
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_POSTGRES_PREFERSIMPLEPROTOCOLType:
boolDisables implicit prepared statement usage.
pumps.sqlaggregate.meta.mysql
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQLType:
MysqlConfigMysql configurations.
pumps.sqlaggregate.meta.mysql.default_string_size
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_DEFAULTSTRINGSIZEType:
uintDefault size for string fields. Defaults to
256.
pumps.sqlaggregate.meta.mysql.disable_datetime_precision
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_DISABLEDATETIMEPRECISIONType:
bool. Disables datetime precision, which is not supported before MySQL 5.6.
pumps.sqlaggregate.meta.mysql.dont_support_rename_index
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_DONTSUPPORTRENAMEINDEXType:
bool. Drops and re-creates the index when renaming it, since renaming an index is not supported before MySQL 5.7 and in MariaDB.
pumps.sqlaggregate.meta.mysql.dont_support_rename_column
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_DONTSUPPORTRENAMECOLUMNType:
bool. Uses a change statement when renaming a column, since renaming a column is not supported before MySQL 8 and in MariaDB.
pumps.sqlaggregate.meta.mysql.skip_initialize_with_version
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_MYSQL_SKIPINITIALIZEWITHVERSIONType:
bool. Set to true to skip auto-configuration based on the current MySQL version.
pumps.sqlaggregate.meta.table_sharding
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_TABLESHARDINGType:
bool. Specifies whether all the analytics records are going to be stored in one table or in multiple tables (one per day). By default,
false. If false, all the records are going to be
stored in the tyk_aggregated table. Instead, if it's true, all the records of the day are
going to be stored in the tyk_aggregated_YYYYMMDD table, where YYYYMMDD is going to change
depending on the date.
pumps.sqlaggregate.meta.log_level
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_LOGLEVELType:
string. Specifies the SQL log verbosity. The possible values are:
info, error and warning. By
default, the value is silent, which means that it won't log any SQL query.
pumps.sqlaggregate.meta.batch_size
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_BATCHSIZEType:
int. Specifies the number of records written in each batch. By default, it writes at most 1000 records per batch.
pumps.sqlaggregate.meta.track_all_paths
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_TRACKALLPATHSType:
bool. Specifies whether it should store aggregated data for all the endpoints. By default,
false,
which means that aggregated data is stored only for tracked endpoints.
pumps.sqlaggregate.meta.ignore_tag_prefix_list
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_IGNORETAGPREFIXLISTType:
[]stringSpecifies prefixes of tags that should be ignored.
pumps.sqlaggregate.meta.store_analytics_per_minute
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_STOREANALYTICSPERMINUTEType:
boolDetermines if the aggregations should be made per minute instead of per hour.
pumps.sqlaggregate.meta.omit_index_creation
ENV: TYK_PMP_PUMPS_SQLAGGREGATE_META_OMITINDEXCREATIONType:
boolSet to true to disable the default tyk index creation.
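A sketch combining the sql_aggregate meta options above (connection string and values illustrative):

```json
{
  "pumps": {
    "sql_aggregate": {
      "type": "sql_aggregate",
      "meta": {
        "type": "postgres",
        "connection_string": "host=localhost port=5432 user=admin dbname=tyk_analytics password=secret",
        "track_all_paths": true,
        "store_analytics_per_minute": false,
        "table_sharding": true
      }
    }
  }
}
```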
pumps.statsd.name
ENV: TYK_PMP_PUMPS_STATSD_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.statsd.type
ENV: TYK_PMP_PUMPS_STATSD_TYPEType:
string. Sets the pump type. This is required when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.statsd.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
pumps.statsd.filters.org_ids
ENV: TYK_PMP_PUMPS_STATSD_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.statsd.filters.api_ids
ENV: TYK_PMP_PUMPS_STATSD_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.statsd.filters.response_codes
ENV: TYK_PMP_PUMPS_STATSD_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.statsd.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_STATSD_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.statsd.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_STATSD_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.statsd.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_STATSD_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.statsd.timeout
ENV: TYK_PMP_PUMPS_STATSD_TIMEOUTType:
int. By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config; for example, a value of 5 would configure a 5 second timeout.
Note that the timeout should be lower than your purging loop frequency (purge_delay), as otherwise data may be purged before being written to the target data sink.
If no timeout is configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.statsd.omit_detailed_recording
ENV: TYK_PMP_PUMPS_STATSD_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.statsd.max_record_size
ENV: TYK_PMP_PUMPS_STATSD_MAXRECORDSIZEType:
int. Defines the maximum size (in bytes) for raw_request and raw_response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.statsd.ignore_fields
ENV: TYK_PMP_PUMPS_STATSD_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.statsd.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_STATSD_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_STATSD_META
pumps.statsd.meta.address
ENV: TYK_PMP_PUMPS_STATSD_META_ADDRESSType:
stringAddress of statsd including host & port.
pumps.statsd.meta.fields
ENV: TYK_PMP_PUMPS_STATSD_META_FIELDSType:
[]string. Defines which analytics fields should have their own metric calculation.
pumps.statsd.meta.tags
ENV: TYK_PMP_PUMPS_STATSD_META_TAGSType:
[]stringList of tags to be added to the metric.
pumps.statsd.meta.separated_method
ENV: TYK_PMP_PUMPS_STATSD_META_SEPARATEDMETHODType:
bool. Allows having a separate method field instead of embedding it in the path field.
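A statsd pump sketch using the meta fields above (address and field names illustrative):

```json
{
  "pumps": {
    "statsd": {
      "type": "statsd",
      "meta": {
        "address": "localhost:8125",
        "fields": ["request_time"],
        "tags": ["method", "response_code", "api_name"],
        "separated_method": true
      }
    }
  }
}
```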
pumps.stdout.name
ENV: TYK_PMP_PUMPS_STDOUT_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.stdout.type
ENV: TYK_PMP_PUMPS_STDOUT_TYPEType:
string. Sets the pump type. This is required when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.stdout.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
pumps.stdout.filters.org_ids
ENV: TYK_PMP_PUMPS_STDOUT_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.stdout.filters.api_ids
ENV: TYK_PMP_PUMPS_STDOUT_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.stdout.filters.response_codes
ENV: TYK_PMP_PUMPS_STDOUT_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.stdout.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_STDOUT_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.stdout.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_STDOUT_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.stdout.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_STDOUT_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.stdout.timeout
ENV: TYK_PMP_PUMPS_STDOUT_TIMEOUTType:
int. By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config; for example, a value of 5 would configure a 5 second timeout.
Note that the timeout should be lower than your purging loop frequency (purge_delay), as otherwise data may be purged before being written to the target data sink.
If no timeout is configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.stdout.omit_detailed_recording
ENV: TYK_PMP_PUMPS_STDOUT_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.stdout.max_record_size
ENV: TYK_PMP_PUMPS_STDOUT_MAXRECORDSIZEType:
int. Defines the maximum size (in bytes) for raw_request and raw_response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.stdout.ignore_fields
ENV: TYK_PMP_PUMPS_STDOUT_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.stdout.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_STDOUT_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_STDOUT_META
pumps.stdout.meta.format
ENV: TYK_PMP_PUMPS_STDOUT_META_FORMATType:
string. Format of the analytics logs. Default is
text if json is not explicitly specified. When
JSON logging is used, all pump logs to stdout will be JSON.
pumps.stdout.meta.log_field_name
ENV: TYK_PMP_PUMPS_STDOUT_META_LOGFIELDNAMEType:
stringRoot name of the JSON object the analytics record is nested in.
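A stdout pump sketch using the format and log_field_name options above (field name illustrative):

```json
{
  "pumps": {
    "stdout": {
      "type": "stdout",
      "meta": {
        "format": "json",
        "log_field_name": "tyk-analytics-record"
      }
    }
  }
}
```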
pumps.syslog.name
ENV: TYK_PMP_PUMPS_SYSLOG_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.syslog.type
ENV: TYK_PMP_PUMPS_SYSLOG_TYPEType:
string. Sets the pump type. This is required when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.syslog.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
pumps.syslog.filters.org_ids
ENV: TYK_PMP_PUMPS_SYSLOG_FILTERS_ORGSIDSType:
[]stringFilters pump data by an allow list of org_ids.
pumps.syslog.filters.api_ids
ENV: TYK_PMP_PUMPS_SYSLOG_FILTERS_APIIDSType:
[]stringFilters pump data by an allow list of api_ids.
pumps.syslog.filters.response_codes
ENV: TYK_PMP_PUMPS_SYSLOG_FILTERS_RESPONSECODESType:
[]intFilters pump data by an allow list of response_codes.
pumps.syslog.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_SYSLOG_FILTERS_SKIPPEDORGSIDSType:
[]stringFilters pump data by a block list of org_ids.
pumps.syslog.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_SYSLOG_FILTERS_SKIPPEDAPIIDSType:
[]stringFilters pump data by a block list of api_ids.
pumps.syslog.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_SYSLOG_FILTERS_SKIPPEDRESPONSECODESType:
[]intFilters pump data by a block list of response_codes.
pumps.syslog.timeout
ENV: TYK_PMP_PUMPS_SYSLOG_TIMEOUTType:
int. By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the configuration option timeout.
If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0.
The timeout is configured within the main pump config; for example, a value of 5 would configure a 5 second timeout.
Note that the timeout should be lower than your purging loop frequency (purge_delay), as otherwise data may be purged before being written to the target data sink.
If no timeout is configured and the pump's write operation takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If a timeout is configured but the pump's write operation still takes longer than the purging loop, the following warning log is generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.syslog.omit_detailed_recording
ENV: TYK_PMP_PUMPS_SYSLOG_OMITDETAILEDRECORDINGType:
boolSetting this to true will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to
false.
pumps.syslog.max_record_size
ENV: TYK_PMP_PUMPS_SYSLOG_MAXRECORDSIZEType:
int. Defines the maximum size (in bytes) for raw_request and raw_response logs; this value defaults to 0. If it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
pumps.syslog.ignore_fields
ENV: TYK_PMP_PUMPS_SYSLOG_IGNOREFIELDSType:
[]stringIgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don’t really need to have. The field names must be the same as the JSON tags of the analytics record fields. For example:
["api_key", "api_version"].
pumps.syslog.meta.meta_env_prefix
ENV: TYK_PMP_PUMPS_SYSLOG_META_ENVPREFIXType:
stringThe prefix for the environment variables that will be used to override the configuration. Defaults to
TYK_PMP_PUMPS_SYSLOG_META
pumps.syslog.meta.transport
ENV: TYK_PMP_PUMPS_SYSLOG_META_TRANSPORTType:
stringPossible values are
udp, tcp, tls in string form.
pumps.syslog.meta.network_addr
ENV: TYK_PMP_PUMPS_SYSLOG_META_NETWORKADDRType:
stringHost & Port combination of your syslog daemon ie:
"localhost:5140".
pumps.syslog.meta.log_level
ENV: TYK_PMP_PUMPS_SYSLOG_META_LOGLEVELType:
int. The severity level, an integer from 0 to 7, based on the standard syslog severity levels.
pumps.syslog.meta.tag
ENV: TYK_PMP_PUMPS_SYSLOG_META_TAGType:
string. Prefix tag. When working with FluentD, you should provide a FluentD parser based on the OS you are using so that FluentD can correctly read the logs.
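A syslog pump sketch combining the transport options above (address and tag illustrative):

```json
{
  "pumps": {
    "syslog": {
      "type": "syslog",
      "meta": {
        "transport": "udp",
        "network_addr": "localhost:5140",
        "log_level": 6,
        "tag": "syslog-pump"
      }
    }
  }
}
```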
pumps.timestream.name
ENV: TYK_PMP_PUMPS_TIMESTREAM_NAMEType:
stringThe name of the pump. This is used to identify the pump in the logs. Deprecated, use
type instead.
pumps.timestream.type
ENV: TYK_PMP_PUMPS_TIMESTREAM_TYPEType:
string. Sets the pump type. This is required when the pump key does not match the pump type. Current valid types are:
mongo, mongo-pump-selective, mongo-pump-aggregate, csv,
elasticsearch, influx, influx2, moesif, statsd, segment, graylog, splunk, hybrid, prometheus,
logzio, dogstatsd, kafka, syslog, sql, sql_aggregate, stdout, timestream, mongo-graph,
sql-graph, sql-graph-aggregate, resurfaceio.
pumps.timestream.filters
This feature adds a new configuration field in each pump called filters and its structure is the following:
pumps.timestream.filters.org_ids
ENV: TYK_PMP_PUMPS_TIMESTREAM_FILTERS_ORGSIDS
Type: []string
Filters pump data by an allow list of org_ids.
pumps.timestream.filters.api_ids
ENV: TYK_PMP_PUMPS_TIMESTREAM_FILTERS_APIIDS
Type: []string
Filters pump data by an allow list of api_ids.
pumps.timestream.filters.response_codes
ENV: TYK_PMP_PUMPS_TIMESTREAM_FILTERS_RESPONSECODES
Type: []int
Filters pump data by an allow list of response_codes.
pumps.timestream.filters.skip_org_ids
ENV: TYK_PMP_PUMPS_TIMESTREAM_FILTERS_SKIPPEDORGSIDS
Type: []string
Filters pump data by a block list of org_ids.
pumps.timestream.filters.skip_api_ids
ENV: TYK_PMP_PUMPS_TIMESTREAM_FILTERS_SKIPPEDAPIIDS
Type: []string
Filters pump data by a block list of api_ids.
pumps.timestream.filters.skip_response_codes
ENV: TYK_PMP_PUMPS_TIMESTREAM_FILTERS_SKIPPEDRESPONSECODES
Type: []int
Filters pump data by a block list of response_codes.
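Combining the fields above, a filters block might look like this (a sketch; the id and code values are illustrative):

```json
"timestream": {
  "type": "timestream",
  "filters": {
    "api_ids": ["api-1", "api-2"],
    "skip_response_codes": [200, 204]
  }
}
```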
pumps.timestream.timeout
ENV: TYK_PMP_PUMPS_TIMESTREAM_TIMEOUT
Type: int
By default, a pump will wait forever for each write operation to complete; you can configure an optional timeout by setting the timeout option within the main pump config. If you have deployed multiple pumps, you can configure each timeout independently. The timeout is in seconds and defaults to 0 (no timeout).
Avoid configuring a timeout greater than the purge cycle interval (purge_delay), as this will mean that data is purged before being written to the target data sink.
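For instance, a 5 second timeout would be set alongside the pump's other options, within the main pump config (a sketch; meta omitted):

```json
"timestream": {
  "type": "timestream",
  "timeout": 5
}
```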
If there is no timeout configured and the pump's write operation takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump.
If there is a timeout configured, but the pump's write operation still takes longer than the purging loop, the following warning log will be generated:
Pump {pump_name} is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump.
pumps.timestream.omit_detailed_recording
ENV: TYK_PMP_PUMPS_TIMESTREAM_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing raw_request and raw_response fields for each request in this pump. Defaults to false.
pumps.timestream.max_record_size
ENV: TYK_PMP_PUMPS_TIMESTREAM_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for raw request and raw response logs. Defaults to 0: if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
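As an example of setting this at a pump level, trimming raw request/response logs to 1000 bytes might look like this (a sketch; meta omitted):

```json
"timestream": {
  "type": "timestream",
  "max_record_size": 1000
}
```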
pumps.timestream.ignore_fields
ENV: TYK_PMP_PUMPS_TIMESTREAM_IGNOREFIELDS
Type: []string
IgnoreFields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the database, or data that you don't really need. The field names must match the JSON tags of the analytics record fields, for example: ["api_key", "api_version"].
pumps.timestream.meta.EnvPrefix
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_ENVPREFIX
Type: string
The prefix for the environment variables that will be used to override the configuration. Defaults to TYK_PMP_PUMPS_TIMESTREAM_META.
pumps.timestream.meta.AWSRegion
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_AWSREGION
Type: string
The AWS region that contains the Timestream database.
pumps.timestream.meta.TableName
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_TABLENAME
Type: string
The table name where the data is going to be written.
pumps.timestream.meta.DatabaseName
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_DATABASENAME
Type: string
The Timestream database name that contains the table being written to.
pumps.timestream.meta.Dimensions
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_DIMENSIONS
Type: []string
A filter of all the dimensions that will be written to the table. The possible options are ["Method", "Host", "Path", "RawPath", "APIKey", "APIVersion", "APIName", "APIID", "OrgID", "OauthID"].
pumps.timestream.meta.Measures
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_MEASURES
Type: []string
A filter of all the measures that will be written to the table. The possible options are ["ContentLength", "ResponseCode", "RequestTime", "NetworkStats.OpenConnections", "NetworkStats.ClosedConnection", "NetworkStats.BytesIn", "NetworkStats.BytesOut", "Latency.Total", "Latency.Upstream", "GeoData.City.GeoNameID", "IPAddress", "GeoData.Location.Latitude", "GeoData.Location.Longitude", "UserAgent", "RawRequest", "RawResponse", "RateLimit.Limit", "Ratelimit.Remaining", "Ratelimit.Reset", "GeoData.Country.ISOCode", "GeoData.City.Names", "GeoData.Location.TimeZone"].
pumps.timestream.meta.WriteRateLimit
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_WRITERATELIMIT
Type: bool
Set to true in order to save any of the RateLimit measures. Default value is false.
pumps.timestream.meta.ReadGeoFromRequest
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_READGEOFROMREQUEST
Type: bool
If set to true, the pump will try to read geo information from the request headers when values aren't found on the analytics record. Default value is false.
pumps.timestream.meta.WriteZeroValues
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_WRITEZEROVALUES
Type: bool
Set to true in order to save numerical values with value zero. Default value is false.
pumps.timestream.meta.NameMappings
ENV: TYK_PMP_PUMPS_TIMESTREAM_META_NAMEMAPPINGS
Type: map[string]string
An optional name mapping for both Dimensions and Measures names.
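All of the meta fields above can also be supplied through their environment variables; for example (a sketch, with region, database and table names as illustrative values):

```shell
export TYK_PMP_PUMPS_TIMESTREAM_TYPE=timestream
export TYK_PMP_PUMPS_TIMESTREAM_META_AWSREGION=us-east-1
export TYK_PMP_PUMPS_TIMESTREAM_META_DATABASENAME=tyk-analytics
export TYK_PMP_PUMPS_TIMESTREAM_META_TABLENAME=tyk-requests
```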
analytics_storage_type
ENV: TYK_PMP_ANALYTICSSTORAGETYPE
Type: string
Sets the analytics storage type from which the pump will fetch data. Currently, only the redis option is supported.
analytics_storage_config
Example Temporal storage configuration:
analytics_storage_config.type
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_TYPE
Type: string
Deprecated.
analytics_storage_config.host
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_HOST
Type: string
Host value. For example: "localhost".
analytics_storage_config.port
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_PORT
Type: int
Port value. For example: 6379.
analytics_storage_config.hosts
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_HOSTS
Type: map[string]string
Deprecated: use Addrs instead.
analytics_storage_config.addrs
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_ADDRS
Type: []string
Use instead of the host value if you're running a Redis cluster with multiple instances.
analytics_storage_config.master_name
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_MASTERNAME
Type: string
Sentinel master name.
analytics_storage_config.sentinel_password
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_SENTINELPASSWORD
Type: string
Sentinel password.
analytics_storage_config.username
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_USERNAME
Type: string
Database username.
analytics_storage_config.password
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_PASSWORD
Type: string
Database password.
analytics_storage_config.database
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_DATABASE
Type: int
Database index.
analytics_storage_config.timeout
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_TIMEOUT
Type: int
How long to allow for new connections to be established (in milliseconds). Defaults to 5 seconds (5000 ms).
analytics_storage_config.optimisation_max_idle
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_MAXIDLE
Type: int
Maximum number of idle connections in the pool.
analytics_storage_config.optimisation_max_active
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_MAXACTIVE
Type: int
Maximum number of connections allocated by the pool at a given time. When zero, there is no limit on the number of connections in the pool. Defaults to 500.
analytics_storage_config.enable_cluster
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_ENABLECLUSTER
Type: bool
Enable this option if you are using a cluster instance. Default is false.
analytics_storage_config.redis_key_prefix
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_REDISKEYPREFIX
Type: string
Prefix for the key names. Defaults to "analytics-". Deprecated: use KeyPrefix instead.
analytics_storage_config.key_prefix
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_KEYPREFIX
Type: string
Prefix for the key names. Defaults to "analytics-".
analytics_storage_config.use_ssl
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_USESSL
Type: bool
Set this to true to use SSL when connecting to the database.
analytics_storage_config.ssl_insecure_skip_verify
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_SSLINSECURESKIPVERIFY
Type: bool
Set this to true to tell Pump to ignore the database's certificate validation.
analytics_storage_config.ssl_ca_file
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_SSLCAFILE
Type: string
Path to the CA file.
analytics_storage_config.ssl_cert_file
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_SSLCERTFILE
Type: string
Path to the cert file.
analytics_storage_config.ssl_key_file
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_SSLKEYFILE
Type: string
Path to the key file.
analytics_storage_config.ssl_max_version
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_SSLMAXVERSION
Type: string
Maximum supported TLS version. Defaults to TLS 1.3; valid values are 1.0, 1.1, 1.2 and 1.3.
analytics_storage_config.ssl_min_version
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_SSLMINVERSION
Type: string
Minimum supported TLS version. Defaults to TLS 1.2; valid values are 1.0, 1.1, 1.2 and 1.3.
analytics_storage_config.redis_use_ssl
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_REDISUSESSL
Type: bool
Set this to true to use SSL when connecting to the database. Deprecated: use UseSSL instead.
analytics_storage_config.redis_ssl_insecure_skip_verify
ENV: TYK_PMP_ANALYTICSSTORAGECONFIG_REDISSSLINSECURESKIPVERIFY
Type: bool
Set this to true to tell Pump to ignore the database's certificate validation. Deprecated: use SSLInsecureSkipVerify instead.
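Bringing the fields above together, a minimal Redis storage configuration might look like this (a sketch; host and credentials are illustrative):

```json
"analytics_storage_type": "redis",
"analytics_storage_config": {
  "type": "redis",
  "host": "localhost",
  "port": 6379,
  "username": "",
  "password": "",
  "database": 0,
  "enable_cluster": false,
  "use_ssl": false
}
```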
statsd_connection_string
ENV: TYK_PMP_STATSDCONNECTIONSTRING
Type: string
Connection string for StatsD monitoring. For more information, please see the Instrumentation docs.
statsd_prefix
ENV: TYK_PMP_STATSDPREFIX
Type: string
Custom prefix value, for example to keep separate settings for production and staging.
log_level
ENV: TYK_PMP_LOGLEVEL
Type: string
Set the logger verbosity for tyk-pump. The possible values are: info, debug, error and warn. By default, the log level is info.
log_format
ENV: TYK_PMP_LOGFORMAT
Type: string
Set the logger format. The possible values are: text and json. By default, the log format is text.
Health Check
From v2.9.4, we have introduced a /health endpoint to confirm the Pump is running. You need to configure the following settings. This endpoint returns an HTTP 200 OK response if the Pump is running.
health_check_endpoint_name
ENV: TYK_PMP_HEALTHCHECKENDPOINTNAME
Type: string
The default is "health".
health_check_endpoint_port
ENV: TYK_PMP_HEALTHCHECKENDPOINTPORT
Type: int
The default port is 8083.
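With the default name and port, the endpoint can be checked as follows (assuming the Pump runs locally):

```shell
curl -i http://localhost:8083/health
```

A running Pump responds with HTTP 200 OK.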
omit_detailed_recording
ENV: TYK_PMP_OMITDETAILEDRECORDING
Type: bool
Setting this to true will avoid writing raw_request and raw_response fields for each request in all pumps. Defaults to false.
max_record_size
ENV: TYK_PMP_MAXRECORDSIZE
Type: int
Defines the maximum size (in bytes) for raw request and raw response logs. Defaults to 0: if it is not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level.
omit_config_file
ENV: TYK_PMP_OMITCONFIGFILE
Type: bool
Defines whether tyk-pump should ignore all the values in the configuration file. Especially useful when setting all configuration via environment variables.
enable_http_profiler
ENV: TYK_PMP_HTTPPROFILE
Type: bool
Enable debugging of Tyk Pump by exposing profiling information, in the same way as the Gateway.
raw_request_decoded
ENV: TYK_PMP_DECODERAWREQUEST
Type: bool
Setting this to true allows the raw request to be decoded from base64 for all pumps. This is set to false by default.
raw_response_decoded
ENV: TYK_PMP_DECODERAWRESPONSE
Type: bool
Setting this to true allows the raw response to be decoded from base64 for all pumps. This is set to false by default.