author     Stuart Hayes <stuart.w.hayes@gmail.com>    2024-07-17 13:55:50 -0500
committer  Keith Busch <kbusch@kernel.org>            2024-08-22 10:05:22 -0700
commit     4e893ca8117022de68ce1b61c0309e3d17bb8a25
tree       f9276d87582631a623f256449eb652c24739b19a
parent     b2261de75212910e2ca01fa673c8855a535d8c60
nvme_core: scan namespaces asynchronously
Use async function calls to make namespace scanning happen in parallel.
Without this patch, NVMe namespaces are scanned serially, so it can take
a long time for all of a controller's namespaces to become available,
especially over a slower (TCP) transport with a large number of
namespaces. It is not uncommon for NVMe-oF storage servers to export
large numbers (hundreds or thousands) of namespaces.
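
For context, here is a minimal sketch of the approach using the kernel's
async infrastructure (async_schedule_domain() / async_synchronize_full_domain()
from include/linux/async.h): each async callback claims the next unscanned
NSID from a shared list and scans it, and the caller waits on the domain for
all of them to finish. The struct and helper names below (demo_*) are
illustrative, not the exact ones used in drivers/nvme/host/core.c.

#include <linux/async.h>
#include <linux/atomic.h>
#include <linux/printk.h>
#include <linux/types.h>

/* Illustrative per-scan context; names here are hypothetical. */
struct demo_scan_info {
	__le32 *ns_list;	/* NSIDs from the Identify active-NSID list */
	atomic_t next_idx;	/* next unclaimed slot in ns_list */
};

/* Assumed stand-in for the driver's per-namespace scan. */
static void demo_scan_one_ns(u32 nsid)
{
	pr_info("scanning nsid %u\n", nsid);
}

/* Async callback: claim the next NSID and scan it. */
static void demo_scan_ns_async(void *data, async_cookie_t cookie)
{
	struct demo_scan_info *info = data;
	int idx = atomic_fetch_inc(&info->next_idx);
	u32 nsid = le32_to_cpu(info->ns_list[idx]);

	if (nsid)
		demo_scan_one_ns(nsid);
}

/* Kick off one async scan per list entry, then wait for all of them. */
static void demo_scan_ns_list(struct demo_scan_info *info, unsigned int nr)
{
	ASYNC_DOMAIN(domain);
	unsigned int i;

	for (i = 0; i < nr; i++)
		async_schedule_domain(demo_scan_ns_async, info, &domain);

	async_synchronize_full_domain(&domain);
}

Using a dedicated async domain keeps the synchronization scoped to this
scan: the wait covers only the callbacks scheduled here, while the async
subsystem's shared worker pool bounds how many scans run concurrently.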
The time it took for all namespaces to show up after connecting (via
TCP) to a controller with 1002 namespaces was measured on one system:
	network latency    without patch    with patch
	      0                  6s              1s
	     50ms              210s             10s
	    100ms              417s             18s
Measurements taken on another system show the effect of the patch on the
time nvme_scan_work() took to complete when connecting to a Linux
nvme-of target with varying numbers of namespaces, over a network with
400us of latency:
	namespaces    without patch    with patch
	      1            16ms            14ms
	      2            24ms            16ms
	      4            49ms            22ms
	      8           101ms            33ms
	     16           207ms            56ms
	    100            1.4s            0.6s
	   1000           12.9s            2.0s
On the same system, connecting to a local PCIe NVMe drive (a Samsung
PM1733) instead of a network target:
	namespaces    without patch    with patch
	      1            13ms            12ms
	      2            41ms            13ms
Signed-off-by: Stuart Hayes <stuart.w.hayes@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>