From 791b84830db20a20d407684093db64e23af5bb35 Mon Sep 17 00:00:00 2001
From: showuon <43372967+showuon@users.noreply.github.com>
Date: Mon, 31 Aug 2020 16:50:42 +0800
Subject: [PATCH] Update the storage.md to fix typo, and some grammar/format
 issue (#7871)

* Update the storage.md to fix typo, and some grammar/format issue

Signed-off-by: Luke Chen
---
 docs/storage.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/storage.md b/docs/storage.md
index 5c79a3c25..57b878e11 100644
--- a/docs/storage.md
+++ b/docs/storage.md
@@ -15,7 +15,7 @@ Prometheus's local time series database stores time series data in a custom form
 
 Ingested samples are grouped into blocks of two hours. Each two-hour block consists of a directory containing one or more chunk files that contain all time series samples for that window of time, as well as a metadata file and index file (which indexes metric names and labels to time series in the chunk files). When series are deleted via the API, deletion records are stored in separate tombstone files (instead of deleting the data immediately from the chunk files).
 
-The block for currently incoming samples is kept in memory and not fully persisted yet. It is secured against crashes by a write-ahead-log (WAL) that can be replayed when the Prometheus server restarts after a crash. Write-ahead log files are stored in the `wal` directory in 128MB segments. These files contain raw data that has not been compacted yet, so they are significantly larger than regular block files. Prometheus will keep a minimum of 3 write-ahead log files, however high-traffic servers may see more than three WAL files since it needs to keep at least two hours worth of raw data.
+The block for currently incoming samples is kept in memory and not fully persisted yet. It is secured against crashes by a write-ahead log (WAL) that can be replayed when the Prometheus server restarts after a crash. Write-ahead log files are stored in the `wal` directory in 128MB segments. These files contain raw data that has not been compacted yet, so they are significantly larger than regular block files. Prometheus will keep a minimum of 3 write-ahead log files, however high-traffic servers may see more than three WAL files since it needs to keep at least two hours worth of raw data.
 
 The directory structure of a Prometheus server's data directory will look something like this:
 
@@ -46,7 +46,7 @@ The directory structure of a Prometheus server's data directory will look someth
 ```
 
-Note that a limitation of the local storage is that it is not clustered or replicated. Thus, it is not arbitrarily scalable or durable in the face of disk or node outages and should be treated as you would any other kind of single node database. Using RAID for disk availability, [snapshots](querying/api.md#snapshot) for backups, capacity planning, etc, is recommended for improved durability. With proper storage durability and planning storing years of data in the local storage is possible.
+Note that a limitation of the local storage is that it is not clustered or replicated. Thus, it is not arbitrarily scalable or durable in the face of disk or node outages and should be treated as you would any other kind of single node database. Using RAID for disk availability, [snapshots](querying/api.md#snapshot) for backups, capacity planning, etc, is recommended for improved durability. With proper storage durability and planning, storing years of data in the local storage is possible.
 
 Alternatively, external storage may be used via the [remote read/write APIs](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage). Careful evaluation is required for these systems as they vary greatly in durability, performance, and efficiency.
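
For context on the final paragraph touched by this patch: the remote read/write APIs it links to are wired up in `prometheus.yml`. A minimal sketch, assuming a hypothetical backend at `remote-storage.example.org` (real integrations typically also need authentication and queue tuning):

```yaml
# Minimal remote storage wiring; the endpoint URLs below are
# placeholders for illustration, not a real backend.
remote_write:
  - url: "https://remote-storage.example.org/api/v1/write"

remote_read:
  - url: "https://remote-storage.example.org/api/v1/read"
```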