author	Alexander Lobakin <aleksander.lobakin@intel.com>	2025-06-12 18:02:26 +0200
committer	Tony Nguyen <anthony.l.nguyen@intel.com>	2025-06-16 11:40:14 -0700
commit	3ef2b0192e8ba133f597919632bd9cf196076f0b (patch)
tree	84df4706fd848182caa5916e635cc3c77e1bbaf0 /scripts/lib/kdoc/kdoc_output.py
parent	819bbaefeded93df36d71d58d9963d706e6e99e1 (diff)
libeth: xdp: add helpers for preparing/processing &libeth_xdp_buff
Add convenience helpers to build an &xdp_buff. This means: general
initialization before the NAPI loop, adding the head, adding frags, etc.

libeth_xdp_process_buff() is the same as what everybody has in their
drivers:

	dma_sync_for_cpu();

	if (!frag) {
		add_head();
		prefetch();
	} else {
		add_frag();
	}

Note that I don't use net_prefetch(), sticking to the original
prefetch(). In none of my tests did prefetching 128 bytes yield better
perf than 64 bytes. That might differ if the headers are huge enough,
but then additional tunneling etc. overhead comes into play, so you
won't win a lot either way.

&libeth_xdp_stash is for cases when you exit the polling loop without
finishing building the buff. If that happens, you need to store the
buffer in the queue structure until the next loop and then restore it.
It makes no sense to place a whole &xdp_buff there. Define a minimal
structure, which stores only the fields essential to restore it. I was
able to pack it into 16 bytes, which is only 8 bytes bigger than
`struct sk_buff *skb` on x64.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
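Below is a minimal sketch of the two pieces described above: the 16-byte
stash and the per-buffer processing flow. Field and helper names here
are assumptions made for illustration and may not match the actual
libeth API exactly.

	/* Stash sketch: only what is needed to rebuild the xdp_buff on
	 * the next NAPI poll. A pointer plus three small fields pack
	 * into 16 bytes on a 64-bit arch (vs. 8 bytes for a plain
	 * `struct sk_buff *skb`). Names are illustrative.
	 */
	struct libeth_xdp_buff_stash {
		void	*data;		/* buffer address incl. headroom */
		u16	headroom;	/* offset of data from the head */
		u16	len;		/* bytes already placed in the buff */
		u32	frame_sz:24;	/* truesize of the buffer */
		u32	flags:8;	/* xdp_buff flags to restore */
	} __aligned_largest;

	/* Simplified per-buffer processing as described in the text:
	 * sync for CPU, then either start a new buff (head + prefetch)
	 * or attach a fragment. Helper names are assumptions.
	 */
	static bool libeth_xdp_process_buff(struct libeth_xdp_buff *xdp,
					    const struct libeth_fqe *fqe,
					    u32 len)
	{
		if (!len)
			return false;

		libeth_rx_sync_for_cpu(fqe, len);

		if (!xdp->data) {
			/* First descriptor of the frame: add the head
			 * and prefetch the first 64 bytes only.
			 */
			libeth_xdp_prepare_buff(xdp, fqe, len);
			prefetch(xdp->data);
		} else {
			/* Follow-up descriptor: attach it as a frag. */
			if (!libeth_xdp_buff_add_frag(xdp, fqe, len))
				return false;
		}

		return true;
	}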
Diffstat (limited to 'scripts/lib/kdoc/kdoc_output.py')
0 files changed, 0 insertions, 0 deletions