author    Florian Westphal <fw@strlen.de>    2024-12-03 12:08:30 +0100
committer Pablo Neira Ayuso <pablo@netfilter.org>    2025-01-05 18:41:31 +0100
commit    178883fd039d38a708cc56555489533d9a9c07df
tree      3ac4a2d98883cd9c6ff766ec29d524aa284fe4d1
parent    da0a090a3c6220772801b791845e408ae7579914
ipvs: speed up reads from ip_vs_conn proc file
Reading is very slow because ->start() performs a linear re-scan of the
entire hash table until it finds the successor to the last dumped
element: the current implementation treats 'pos' as the number of
elements to skip, then iterates linearly until it has skipped 'pos'
entries.
Store the last bucket and the number of elements to skip in that
bucket instead, so we can resume from bucket b directly.
Before this patch, it is possible to read ~35k entries in one second,
but each read() gets slower as the number of entries to skip grows:
time timeout 60 cat /proc/net/ip_vs_conn > /tmp/all; wc -l /tmp/all
real 1m0.007s
user 0m0.003s
sys 0m59.956s
140386 /tmp/all
Only ~100k more entries got read in the remaining 59s, nowhere near
the 1m entries that were stored at the time.
After this patch, the dump completes very quickly:
time cat /proc/net/ip_vs_conn > /tmp/all; wc -l /tmp/all
real 0m2.286s
user 0m0.004s
sys 0m2.281s
1000001 /tmp/all
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>