mirror of https://github.com/prometheus/prometheus
Commit: b02d900e61
Also, clean up some things in the code (especially the introduction of the chunkLenWithHeader constant to avoid repeating the same expression all over the place).

Benchmark results:

BEFORE
BenchmarkLoadChunksSequentially     5000    283580 ns/op   152143 B/op   312 allocs/op
BenchmarkLoadChunksRandomly        20000     82936 ns/op    39310 B/op    99 allocs/op
BenchmarkLoadChunkDescs            10000    110833 ns/op    15092 B/op   345 allocs/op

AFTER
BenchmarkLoadChunksSequentially    10000    146785 ns/op   152285 B/op   315 allocs/op
BenchmarkLoadChunksRandomly        20000     67598 ns/op    39438 B/op   103 allocs/op
BenchmarkLoadChunkDescs            20000     99631 ns/op    12636 B/op   192 allocs/op

Note that everything is obviously loaded from the page cache (as the benchmark runs thousands of times with very small series files). In a real-world scenario, I expect a larger impact, as the disk operations will more often actually hit the disk. To load ~50 sequential chunks, this reduces the iops from 100 seeks and 100 reads to 1 seek and 1 read.
Directories:

- codable
- fixtures/b0
- flock
- index

Files:

- chunk.go
- crashrecovery.go
- delta.go
- delta_helpers.go
- doubledelta.go
- instrumentation.go
- interface.go
- locker.go
- locker_test.go
- persistence.go
- persistence_test.go
- preload.go
- series.go
- storage.go
- storage_test.go
- test_helpers.go