The backend uses a StatsD client to log stats. These stats can be collected by any StatsD-compatible server such as Graphite, CloudWatch, etc. For example, the CloudWatch agent can be used to ship stats to Amazon CloudWatch.
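StatsD is a simple, line-oriented text protocol sent over UDP, which is why any compatible server can ingest these stats. As a rough illustration (the metric names and the tag syntax below are hypothetical; tag support depends on the server), a counter or timer line can be built and sent like this:

```python
import socket

def format_metric(name, value, metric_type, tags=None):
    """Build a StatsD line such as 'gateway.response_time:12|ms'.

    metric_type is 'c' (counter), 'ms' (timer), or 'g' (gauge).
    Tags use the common DogStatsD-style '|#key:value' extension,
    which not every StatsD server supports.
    """
    line = f"{name}:{value}|{metric_type}"
    if tags:
        line += "|#" + ",".join(f"{k}:{v}" for k, v in sorted(tags.items()))
    return line

def send_metric(line, host="127.0.0.1", port=8125):
    """Fire-and-forget UDP send; StatsD clients do not wait for a reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(line.encode("ascii"), (host, port))
    sock.close()
```

Because the transport is UDP, emitting stats is cheap and cannot block request handling even if no StatsD server is listening.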
Note that the collection of stats can be disabled through configuration. Each stat carries an instanceName label that can be used to filter metrics, which is helpful in multi-node deployments.
The backend usually runs in normal mode. If the backend crashes and restarts multiple times within a short span, it starts in either degraded or maintenance mode. In degraded mode, events are collected and stored by the backend gateway but are not sent to destinations. In maintenance mode, the existing database is set aside for further inspection and a new database is used. It is therefore important to monitor the recovery mode and take appropriate action when the backend enters degraded or maintenance mode.
The corresponding stat has a value of:

- 1 when running in normal mode
- 0 when running in degraded or maintenance mode
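The crash-loop behavior described above can be sketched roughly as follows. The window and crash-count thresholds here are made-up values for illustration, not the backend's actual configuration:

```python
def choose_mode(crash_times, now, window=300.0,
                degraded_after=3, maintenance_after=6):
    """Pick a startup mode from recent crash timestamps (in seconds).

    Hypothetical thresholds: repeated crashes inside `window` first
    degrade the backend, then push it into maintenance mode.
    """
    recent = [t for t in crash_times if now - t <= window]
    if len(recent) >= maintenance_after:
        return "maintenance"
    if len(recent) >= degraded_after:
        return "degraded"
    return "normal"

def mode_gauge(mode):
    """Gauge reported for monitoring: 1 in normal mode, else 0."""
    return 1 if mode == "normal" else 0
```

An alert on the gauge dropping to 0 is usually enough to catch both non-normal modes.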
- Response time of each request.
- Size of each internal batch of requests (requests are grouped together internally for processing).
- Time taken to process each batch of requests.
- Number of requests received per write key.
- Number of successful requests per write key.
- Number of failed requests per write key.*
* Requests fail in cases such as an oversized request payload, an invalid write key, badly formatted events, etc.
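The failure cases above can be illustrated with a simplified validation routine. The size limit and the expected batch envelope are assumptions for illustration, not the backend's exact rules:

```python
import json

MAX_REQUEST_BYTES = 4 * 1024 * 1024  # hypothetical size limit

def validate_request(body: bytes, write_key: str, valid_keys: set):
    """Return 'ok' or a failure reason mirroring the failed-request stat."""
    if len(body) > MAX_REQUEST_BYTES:
        return "request_size_exceeded"
    if write_key not in valid_keys:
        return "invalid_write_key"
    try:
        payload = json.loads(body)
    except ValueError:
        return "invalid_json"
    if not isinstance(payload, dict) or "batch" not in payload:
        return "invalid_format"
    return "ok"
```

Each non-"ok" outcome would increment the failed-requests counter for that write key.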
- Number of active users, based on the most recent events received. Useful for monitoring real-time traffic.
- Number of events read from the database for processing.
- Number of events whose status is updated in the gateway database after processing.
- Number of events written to the router database.
- Number of events written to the batch router database. The batch router database handles batch-dump destinations such as S3, MinIO, etc.
- Number of events sent to the transformer.
- Number of events received from the transformer. Note that this may not always match transformer_sent even when there are no failures, since a transformation can drop an event or emit multiple events per input.
- Number of events that received error responses from the transformer.
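The mismatch between events sent to and received from the transformer follows from the fact that a transformation maps one input event to zero or more output events. A minimal sketch, with illustrative counter names:

```python
def run_transformer(events, transform):
    """Apply `transform`, which maps one event to a list of zero or
    more events, and track the sent/received counts."""
    stats = {"transformer_sent": 0, "transformer_received": 0}
    out = []
    for event in events:
        stats["transformer_sent"] += 1
        results = transform(event)          # may return [], [e], or [e1, e2, ...]
        stats["transformer_received"] += len(results)
        out.extend(results)
    return out, stats
```

A transformation that filters out certain event types, for example, produces a received count lower than the sent count with no errors at all.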
- Time taken to send each event to a specific destination.
- Time taken by a routing worker for each iteration. Multiple events are sent in each iteration, so this is equivalent to the interval at which a worker picks up a new batch of events to send.**
- Number of retries made for a specific destination.
- Total number of events delivered to all destinations.
* These metrics are emitted per destination type, such as GA, AMP, etc. All the different Google Analytics destinations, for instance, are grouped under a single metric (e.g., router.GA_worker_network). Useful for monitoring failures or delays in delivering to a particular destination type.
** The number of events picked up in each iteration is configurable.
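The worker iteration described above can be sketched as a loop that picks up a configurable batch, delivers it, and records the iteration time. The batch size and timing mechanism here are illustrative:

```python
import time

def worker_iteration(queue, deliver, batch_size=100):
    """Pick up to `batch_size` events from the queue, deliver them,
    and return the batch plus the iteration time in milliseconds."""
    start = time.monotonic()
    batch = queue[:batch_size]
    del queue[:batch_size]
    for event in batch:
        deliver(event)
    elapsed_ms = (time.monotonic() - start) * 1000.0
    return batch, elapsed_ms
```

The per-iteration timing is what the worker-iteration stat captures; slow destinations show up as longer iterations for that destination's worker.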
Destinations such as S3 and MinIO, where raw events are dumped, are handled by the batch router.
- Number of events successfully sent to a specific destination.
- Number of failed attempts per destination. An increase in this metric means that destination is unreachable, usually due to invalid authorization or an invalid endpoint.
- Time taken to upload events to a specific destination (S3, MinIO, etc.).
- Total number of errors when sending events to destinations.
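A generic way to produce such timing and error stats is to wrap the upload call. The stat names and the in-memory stats dict below are placeholders standing in for a real StatsD client:

```python
import time
from contextlib import contextmanager

stats = {"timings_ms": {}, "counters": {}}

@contextmanager
def timed(name):
    """Record the wall-clock duration of a block under `name`."""
    start = time.monotonic()
    try:
        yield
    finally:
        stats["timings_ms"][name] = (time.monotonic() - start) * 1000.0

def count(name, n=1):
    stats["counters"][name] = stats["counters"].get(name, 0) + n

def upload_with_stats(upload, payload, destination):
    """Time an upload and count its success or failure per destination."""
    with timed(f"batch_router.{destination}.upload_time"):
        try:
            upload(payload)
            count(f"batch_router.{destination}.success")
        except OSError:
            count(f"batch_router.{destination}.errors")
```

Recording the timing in a `finally` block ensures failed uploads still contribute to the upload-time stat.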
These are the backend's implementation-specific metrics, useful for analyzing performance under traffic. JobsDB maintains active events and their statuses. To optimize database operations, new tables are periodically added to the database and rows are migrated from older tables.
- Number of gateway tables in JobsDB.
- Number of router tables in JobsDB.
- Number of batch router tables in JobsDB.
Growing table counts can:

- Indicate that events are not being processed and delivered in time.
- Indicate that the load has exceeded what the current setup can handle and it is time to scale.
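Under assumed table-name prefixes (gw_jobs_, rt_jobs_, batch_rt_jobs_ are illustrative and may not match the backend's actual naming), the three table-count gauges and a simple scaling check can be sketched as:

```python
# Hypothetical JobsDB table-name prefixes per component.
PREFIXES = {
    "gateway_tables": "gw_jobs_",
    "router_tables": "rt_jobs_",
    "batch_router_tables": "batch_rt_jobs_",
}

def table_counts(table_names):
    """Count JobsDB tables per component from a list of table names."""
    return {
        stat: sum(1 for t in table_names if t.startswith(prefix))
        for stat, prefix in PREFIXES.items()
    }

def needs_scaling(counts, threshold=10):
    """A steadily growing table count suggests processing is falling
    behind; `threshold` is an arbitrary alerting cut-off."""
    return any(c > threshold for c in counts.values())
```

In practice the table counts stay small and stable when processing keeps up, so an alert on sustained growth is a reasonable early-warning signal.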
All the events from the gateway tables are periodically dumped to S3/MinIO as a backup and to facilitate event replay. These stats monitor delays or errors in the dumping process.
- Time taken to dump gateway tables to a JSON file.
- Time taken to compress and upload the generated JSON files.
- Total time taken for the whole process of dumping tables to S3.
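The two timed stages, serializing to JSON and compressing/uploading, can be sketched as follows. The JSON-lines format, gzip compression, and injected uploader here are illustrative; the actual dump format and uploader may differ:

```python
import gzip
import json
import time

def dump_table(rows):
    """Stage 1: serialize rows to JSON lines; return bytes and elapsed ms."""
    start = time.monotonic()
    data = "\n".join(json.dumps(r) for r in rows).encode("utf-8")
    return data, (time.monotonic() - start) * 1000.0

def compress_and_upload(data, upload):
    """Stage 2: gzip the dump and hand it to an uploader; return elapsed ms."""
    start = time.monotonic()
    compressed = gzip.compress(data)
    upload(compressed)
    return (time.monotonic() - start) * 1000.0
```

Timing the two stages separately makes it easy to tell a slow serialization (database-side problem) apart from a slow upload (object-store or network problem).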
The configuration of the sources and their corresponding destinations is polled from the config backend. Any errors in fetching this config can be monitored using config_backend_errors.
- Number of errors in fetching or processing config from the control plane's backend.
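A polling step that feeds the config_backend_errors counter can be sketched like this. The fetch function is injected for illustration, and the real backend's polling interval and retry behavior are not shown:

```python
counters = {"config_backend_errors": 0}

def poll_config(fetch, current_config=None):
    """Fetch the workspace config; on any failure, keep the last good
    config and bump the error counter."""
    try:
        return fetch()
    except Exception:
        counters["config_backend_errors"] += 1
        return current_config
```

Keeping the last good config on failure lets event delivery continue while the counter surfaces the control-plane connectivity problem.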