author     Christoph Hellwig <hch@lst.de>    2025-01-16 07:01:42 +0100
committer  Carlos Maiolino <cem@kernel.org>  2025-01-16 10:19:59 +0100
commit     ee10f6fcdb961e810d7b16be1285319c15c78ef6 (patch)
tree       922781843c3071785b84fdb05fc6171c5010b0f9 /fs/xfs/xfs_buf.h
parent     07eae0fa67ca4bbb199ad85645e0f9dfaef931cd (diff)
xfs: fix buffer lookup vs release race
Since commit 298f34224506 ("xfs: lockless buffer lookup") the buffer
lookup fastpath is done without a hash-wide lock (then pag_buf_lock, now
bc_lock) and only under RCU protection. But this means that nothing
serializes lookups against the temporary 0 reference count for buffers
that are added to the LRU after dropping the last regular reference,
and a concurrent lookup would fail to find them.
Fix this by doing all b_hold modifications under b_lock. We're already
doing this for release, so this "only" roughly doubles the b_lock round
trips. We'll later look into the lockref infrastructure to optimize the
number of lock round trips again.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Carlos Maiolino <cem@kernel.org>
Diffstat (limited to 'fs/xfs/xfs_buf.h')
-rw-r--r-- | fs/xfs/xfs_buf.h | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
index 10bf66e074a0..7e73663c5d4a 100644
--- a/fs/xfs/xfs_buf.h
+++ b/fs/xfs/xfs_buf.h
@@ -168,7 +168,7 @@ struct xfs_buf {
 	xfs_daddr_t		b_rhash_key;	/* buffer cache index */
 	int			b_length;	/* size of buffer in BBs */
-	atomic_t		b_hold;		/* reference count */
+	unsigned int		b_hold;		/* reference count */
 	atomic_t		b_lru_ref;	/* lru reclaim ref count */
 	xfs_buf_flags_t		b_flags;	/* status flags */
 	struct semaphore	b_sema;		/* semaphore for lockables */