2017-12-03net: xdp: make the stack take care of the tear downJakub Kicinski
Since day one of XDP, drivers had to remember to free the program on the remove path. This leads to code duplication and is error prone. Make the stack query the installed programs on unregister and, if something is installed, remove the program. Freeing of the program attached to XDP generic is moved out of free_netdev() as well. Because the removal will now be called before notifiers are invoked, the BPF offload state of the program will not get destroyed before uninstall. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
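On unregister the stack can now do roughly the following (a hedged sketch built on the ndo_bpf query/setup commands; not the exact net-next code):

    #include <linux/netdevice.h>

    /* Sketch: query the driver for an installed XDP program and, if one
     * is attached, install NULL to remove it, so drivers no longer have
     * to free the program in their own remove paths.
     */
    static void xdp_auto_uninstall(struct net_device *dev)
    {
        struct netdev_bpf xdp = { .command = XDP_QUERY_PROG };

        if (!dev->netdev_ops->ndo_bpf)
            return;
        if (dev->netdev_ops->ndo_bpf(dev, &xdp))
            return;
        if (xdp.prog_attached) {
            memset(&xdp, 0, sizeof(xdp));
            xdp.command = XDP_SETUP_PROG;
            xdp.prog = NULL;    /* NULL program means "remove" */
            dev->netdev_ops->ndo_bpf(dev, &xdp);
        }
    }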
2017-12-03net: xdp: report flags program was installed with on queryJakub Kicinski
Some drivers enforce that flags on program replacement and removal must match the flags passed on install. This leaves the possibility open to enable simultaneous loading of XDP programs both to HW and DRV. Allow such drivers to report the flags back to the stack. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-03net: xdp: avoid output parameters when querying XDP progJakub Kicinski
The output parameters will get unwieldy if we want to add more information about the program. Simply pass the entire struct netdev_bpf in. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
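With the whole struct passed in, a query looks roughly like this (a sketch; prog_flags is the query field added by the previous patch):

    struct netdev_bpf xdp = { .command = XDP_QUERY_PROG };
    int err;

    err = dev->netdev_ops->ndo_bpf(dev, &xdp);
    if (!err) {
        /* the driver filled the struct instead of output parameters */
        u32 prog_id    = xdp.prog_id;     /* installed program's ID   */
        u32 prog_flags = xdp.prog_flags;  /* flags it was loaded with */
    }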
2017-12-01Merge branch 'cpsw-ale-cleanups'David S. Miller
Grygorii Strashko says: ==================== net: ethernet: ti: cpsw/ale clean up and optimization This is a set of non-critical clean-ups and optimizations for the TI CPSW and ALE drivers. Rebased on top of net-next. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: ethernet: ti: ale: fix port check in cpsw_ale_control_set/getGrygorii Strashko
The ALE port count includes the Host port and external ports, and ALE port numbering starts from 0, so correct the corresponding port checks in cpsw_ale_control_set/get(). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
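In other words, valid indices span 0 .. ale_ports - 1 with the host port included, so the bounds check becomes (a simplified sketch of the corrected condition):

    /* ports are numbered from 0 and the count includes the host port */
    if (port < 0 || port >= ale->params.ale_ports)
        return -EINVAL;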
2017-12-01net: ethernet: ti: ale: use devm_kzalloc in cpsw_ale_create()Grygorii Strashko
Use devm_kzalloc() in cpsw_ale_create(). This also makes cpsw_ale_destroy() a nop, so remove it. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: ethernet: ti: ale: move static initialization in cpsw_ale_create()Grygorii Strashko
Move static initialization from cpsw_ale_start() to cpsw_ale_create(), as it does not make much sense to perform static initialization in cpsw_ale_start(), which is called every time the netif[s] is opened. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: ethernet: ti: ale: optimize ale entry mask bits configurationGrygorii Strashko
The ale->params.ale_ports parameter can be used to derive values for all ale entry mask bits: port_mask_bits, vlan_field_bits and port_num_bits. Hence, calculate the above values and drop all hardcoded values. For the port_num_bits calculation use the order_base_2() API. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
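For example (a sketch; order_base_2() comes from <linux/log2.h>, and the exact field assignments are an assumption based on the description above):

    #include <linux/log2.h>

    ale->port_mask_bits = ale->params.ale_ports;          /* one bit per port */
    ale->port_num_bits  = order_base_2(ale->params.ale_ports);
    /* e.g. ale_ports == 3 -> order_base_2(3) == 2 bits for a port number */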
2017-12-01net: ethernet: ti: ale: disable ale from stop()Grygorii Strashko
ALE is now enabled from cpsw_ale_start(), but disabled only from cpsw_ale_destroy(), which introduces an inconsistency, as cpsw_ale_start() is called when the netif[s] is opened while cpsw_ale_destroy() is called when the driver is removed. Hence, move the ALE disabling into cpsw_ale_stop(). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: ethernet: ti: ale: use proper io apisGrygorii Strashko
Switch to the writel_relaxed()/readl_relaxed() IO API instead of the raw version, as is recommended. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
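A minimal before/after sketch (the accessor shape is illustrative, not the exact driver code):

    /* before: raw accessors, no byte-order handling */
    __raw_writel(val, ale->params.ale_regs + offset);
    val = __raw_readl(ale->params.ale_regs + offset);

    /* after: relaxed accessors handle endianness but still skip the
     * expensive memory barriers of plain writel()/readl()
     */
    writel_relaxed(val, ale->params.ale_regs + offset);
    val = readl_relaxed(ale->params.ale_regs + offset);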
2017-12-01net: ethernet: ti: cpsw: fix ale port numbersGrygorii Strashko
TI OMAP/Sitara SoCs have a fixed number of ALE ports (3), which includes the Host port. Hence, use the fixed value instead of a value calculated from DT, which can be set by the user and might not reflect the actual HW configuration. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: ethernet: ti: cpsw: move mac_hi/lo defines in cpsw.hGrygorii Strashko
Move the mac_hi/lo defines into the common header cpsw.h and re-use them in netcp_ethss.c. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: ethernet: ti: cpsw: move platform data struct to .c fileGrygorii Strashko
The CPSW platform data structs cpsw_platform_data and cpsw_slave_data are used only inside the cpsw.c module, so move their definitions there. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: ethernet: ti: cpsw: use proper io apisGrygorii Strashko
Switch to the writel_relaxed()/readl_relaxed() IO API instead of the raw version, as is recommended. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: ethernet: ti: cpsw: drop unused var poll from cpsw_update_channels_resGrygorii Strashko
Drop unused variable "poll" from cpsw_update_channels_res(). Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: phy: remove generic settings for callbacks config_aneg and read_status from driversHeiner Kallweit
Remove generic settings for the config_aneg and read_status callbacks from drivers. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: phy: core: use genphy version of callbacks read_status and config_aneg per defaultHeiner Kallweit
read_status and config_aneg are the only mandatory callbacks, and most of the time the generic implementation is what drivers use. So make the core fall back to the generic version if a driver doesn't implement the respective callback. Also, currently the core doesn't verify that drivers implement the mandatory callbacks; if a driver doesn't, we'd just get a NULL pointer dereference. With this patch this potential issue no longer exists. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
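The fallback amounts to something like this in the phy core (a sketch; the exact call sites differ):

    static int phy_read_status(struct phy_device *phydev)
    {
        if (phydev->drv->read_status)
            return phydev->drv->read_status(phydev);
        return genphy_read_status(phydev);  /* generic default */
    }

    static int phy_config_aneg(struct phy_device *phydev)
    {
        if (phydev->drv->config_aneg)
            return phydev->drv->config_aneg(phydev);
        return genphy_config_aneg(phydev);  /* generic default */
    }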
2017-12-01Merge branch 'ip6_gre-add-erspan-native-tunnel-for-ipv6'David S. Miller
William Tu says: ==================== ip6_gre: add erspan native tunnel for ipv6 The patch series adds support for ERSPAN tunnel over ipv6. The first patch refactors the existing ipv4 gre implementation and the second refactors the ipv6 gre xmit code. Finally, the last patch introduces the erspan protocol.
change in v5: - add cover-letter description
change in v4: - rebase on top of net-next - use log_ecn_error in ip6_tnl_rcv
change in v3: - add inline for functions in header - rebase on top of net-next
change in v2: - remove inline - fix some indent - fix errors reported by clang and scan-build
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01ip6_gre: Add ERSPAN native tunnel supportWilliam Tu
The patch adds support for ERSPAN tunnel over ipv6. Signed-off-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01ip6_gre: Refactor ip6gre xmit codesWilliam Tu
This patch refactors ip6gre_xmit_{ipv4, ipv6}. It is prep work for adding the ip6erspan tunnel. Signed-off-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01ip_gre: Refactor the erspan tunnel code.William Tu
Move two erspan functions to the header file erspan.h, so the ipv6 erspan implementation can use them. Signed-off-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01Merge branch 'ethtool-reset-AP'David S. Miller
Scott Branden says: ==================== net: ethtool: add support for ETH_RESET_AP Add support for resetting application processors inside SmartNICs by defining a new ETH_RESET_AP bit, and use the new bit in the bnxt ethernet driver. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01bnxt_en: Add ETH_RESET_AP supportScott Branden
Add ETH_RESET_AP handling to reset the internal Application Processor(s) of the SmartNIC card. Signed-off-by: Scott Branden <scott.branden@broadcom.com> Acked-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01net: ethtool: add support for reset of AP inside NIC interface.Scott Branden
Add ETH_RESET_AP to reset the application processor(s) inside the NIC interface. The existing ETH_RESET_MGMT covers the management processor inside the NIC, typically used for remote NIC management. Application processors exist inside some SmartNICs to run various applications on the NIC processor, from a simple algorithm without an OS to something as complex as hosting multiple VMs. Signed-off-by: Scott Branden <scott.branden@broadcom.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
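From userspace the new flag goes through the standard ETHTOOL_RESET ioctl; a hedged sketch (error handling trimmed, "eth0" is just an example name):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct ethtool_value ev = { .cmd = ETHTOOL_RESET,
                                    .data = ETH_RESET_AP };
        struct ifreq ifr;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ev;
        ioctl(fd, SIOCETHTOOL, &ifr);
        /* on success ev.data holds the reset bits the driver did NOT clear */
        return 0;
    }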
2017-12-01Merge branch 'rds-tcp-netns-delete-related-fixes'David S. Miller
Sowmini Varadhan says: ==================== rds-tcp netns delete related fixes Patchset contains cleanup and bug fixes. Patch 1 is the removal of some redundant code/functions. Patch 2 and 3 are fixes for corner cases identified by syzkaller. I've not been able to reproduce the actual use-after-free race flagged in the syzkaller reports, thus these fixes are based on code inspection plus manual testing to make sure the modified code paths are executed without problems in the commonly encountered timing cases. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01rds: tcp: atomically purge entries from rds_tcp_conn_list during netns deleteSowmini Varadhan
The rds_tcp_kill_sock() function parses the rds_tcp_conn_list to find the rds_connection entries marked for deletion as part of the netns deletion, under the protection of the rds_tcp_conn_lock. Since the rds_tcp_conn_list tracks rds_tcp_connections (which have a 1:1 mapping with rds_conn_path), multiple tc entries in the rds_tcp_conn_list will map to a single rds_connection and will be deleted as part of the rds_conn_destroy() operation that is done outside the rds_tcp_conn_lock. The rds_tcp_conn_list traversal done under the protection of rds_tcp_conn_lock should not leave any doomed tc entries in the list after the rds_tcp_conn_lock is released, else another concurrently executing netns-delete thread (for a different netns) may trip on these entries. Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
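The fix follows the classic pattern of unlinking doomed entries while the lock is still held, so a concurrent walker can never see them (a generic sketch; names approximate the rds-tcp code and the netns check is hypothetical):

    LIST_HEAD(tmp_list);

    spin_lock_irq(&rds_tcp_conn_lock);
    list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
        if (!tc_belongs_to_dying_netns(tc, net))  /* hypothetical check */
            continue;
        /* unlink under the lock so another netns-delete thread
         * never trips over an entry that is about to be freed
         */
        list_move_tail(&tc->t_tcp_node, &tmp_list);
    }
    spin_unlock_irq(&rds_tcp_conn_lock);
    /* rds_conn_destroy() on the tmp_list entries runs outside the lock */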
2017-12-01rds: tcp: correctly sequence cleanup on netns deletion.Sowmini Varadhan
Commit 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net") introduces a regression in rds-tcp netns cleanup. The cleanup_net() (and thus the rds_tcp_dev_event notification) is only called from put_net() when all netns refcounts go to 0, but this cannot happen if the rds_connection itself is holding a c_net ref that it expects to release in rds_tcp_kill_sock. Instead, the rds_tcp_kill_sock callback should make sure to tear down state carefully, ensuring that the socket teardown is only done after all data structures and workqs that depend on it are quiesced. The original motivation for commit 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net") was to resolve a race condition reported by syzkaller where workqs for tx/rx/connect were triggered after the namespace was deleted. Those worker threads should have been cancelled/flushed before socket teardown and indeed, rds_conn_path_destroy() does try to sequence this by doing:
    /* cancel cp_send_w */
    /* cancel cp_recv_w */
    /* flush cp_down_w */
    /* free data structures */
Here the "flush cp_down_w" will trigger rds_conn_shutdown and thus invoke rds_tcp_conn_path_shutdown() to close the tcp socket, so that we ought to have satisfied the requirement that "socket-close is done after all other dependent state is quiesced". However, rds_conn_shutdown has a bug in that it *always* triggers the reconnect workq (and if the connection is successful, we always restart the tx/rx workqs, so with the right timing we risk the race conditions reported by syzkaller). Netns deletion is like module teardown: no need to restart a reconnect in this case. We can use the c_destroy_in_prog bit to avoid restarting the reconnect. Fixes: 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net") Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01rds: tcp: remove redundant function rds_tcp_conn_paths_destroy()Sowmini Varadhan
A side-effect of commit c14b0366813a ("rds: tcp: set linger to 1 when unloading a rds-tcp") is that we always send a RST on the tcp connection for rds_conn_destroy(), so rds_tcp_conn_paths_destroy() is no longer needed and is removed in this patch. Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01tipc: fall back to smaller MTU if allocation of local send skb failsJon Maloy
When sending node-local messages the code uses an 'mtu' of 66060 bytes to avoid unnecessary fragmentation. During situations of low memory tipc_msg_build() may sometimes fail to allocate such large buffers, resulting in unnecessary send failures. This can easily be remedied by falling back to a smaller MTU, and then reassembling the buffer chain as if the message were arriving from a remote node. At the same time, we change the initial MTU setting of the broadcast link to a lower value, so that large messages are always fragmented into smaller buffers even when we run in single-node mode. Apart from obtaining the same advantage as the 'fallback' solution above, this turns out to give a significant performance improvement, which can probably be explained by the __pskb_copy() operation performed on the buffer for each recipient during reception. We found the optimal value for this, considering the most relevant skb pool, to be 3744 bytes. Acked-by: Ying Xue <ying.xue@ericsson.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-01Merge branch 'bpf-nfp-jmp-memcpy-improvements'Daniel Borkmann
Jiong Wang says: ==================== Currently, the compiler will lower a memcpy function call in an XDP/eBPF C program into a sequence of eBPF load/store pairs in some scenarios. The compiler considers this "inline" optimization beneficial, as it avoids a function call and also increases code locality. However, the Netronome NPU is not a traditional load/store architecture, so executing a sequence of individual load/store actions is not efficient. This patch set tries to identify the load/store sequences composed of pairs that come from memcpy lowering, then accelerates them through the NPU's Command Push Pull (CPP) instruction. The patch set registers a new optimization pass before doing the actual JIT work. It traverses the eBPF IR and, once a candidate sequence is found, records the memory copy source, destination and length information in the first load instruction starting the sequence and marks all remaining instructions in the sequence as skippable. Later, when JITing the first load instruction, optimal instructions are generated using the recorded information. For the safety of this transformation: - a jump into the middle of the sequence cancels the optimization. - overlapped memory access cancels the optimization. - the load destination register still contains the same value as before the transformation. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: bpf: detect load/store sequences lowered from memory copyJiong Wang
This patch adds the optimization frontend by adding a new eBPF IR scan pass, "nfp_bpf_opt_ldst_gather". The pass traverses the IR to recognize the load/store pair sequences that come from lowering of memory copy builtins. The gathered memory copy information is kept in the meta info structure of the first load instruction in the sequence and is consumed by the optimization backend added in the previous patches. NOTE: a sequence with cross memory access doesn't qualify for this optimization, i.e. if one load in the sequence loads from a place that has been written by a previous store. This is because when we turn the sequence into a single CPP operation, we read all contents at once into NFP transfer registers, then write them out as a whole; this is not identical to what the original load/store sequence was doing. Detecting cross memory access for two random pointers would be difficult; fortunately, under XDP/eBPF's restricted runtime environment the copy normally happens among map, packet data and stack, which do not overlap with each other. And for the cases supported by NFP, cross memory access will only happen on PTR_TO_PACKET, for which there is ID information that lets us do an accurate memory alias check. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
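Conceptually, the scan does something like this for each recognized pair (a simplified sketch; the meta fields mirror the ones introduced earlier in the series, and the surrounding matching logic is elided):

    if (is_mbpf_load(ld_meta) && is_mbpf_store(st_meta) &&
        BPF_SIZE(ld->code) == BPF_SIZE(st->code) &&
        ld->dst_reg == st->src_reg) {
        /* the head load remembers the whole gathered copy... */
        head_ld_meta->paired_st = st;
        head_ld_meta->ldst_gather_len += BPF_LDST_BYTES(ld);
        /* ...and the rest of the sequence is marked skippable */
        ld_meta->skip = true;
        st_meta->skip = true;
    }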
2017-12-01nfp: bpf: implement memory bulk copy for length bigger than 32-bytesJiong Wang
When the gathered copy length is bigger than 32 bytes and within 128 bytes (the maximum length a single CPP Pull/Push request can finish), the read/write strategy changes to:
* Read.
  - use direct reference mode when the length is within 32 bytes.
  - use indirect mode when the length is bigger than 32 bytes.
* Write.
  - length <= 8 bytes: use write8 (direct_ref).
  - length <= 32 bytes and 4-byte aligned: use write32 (direct_ref).
  - length <= 32 bytes but not 4-byte aligned: use write8 (indirect_ref).
  - length > 32 bytes and 4-byte aligned: use write32 (indirect_ref).
  - length > 32 bytes, not 4-byte aligned and <= 40 bytes: use write32 (direct_ref) to finish the first 32 bytes, then write8 (direct_ref) for the remaining hanging part.
  - length > 32 bytes and not 4-byte aligned: use write32 (indirect_ref) for the 4-byte-aligned parts, then write8 (direct_ref) for the remaining hanging part.
Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: bpf: implement memory bulk copy for length within 32-bytesJiong Wang
For NFP, we want to re-group a sequence of load/store pairs lowered from memcpy/memmove into a single memory bulk operation which can then be accelerated using the NFP CPP bus. This patch extends the existing load/store auxiliary information by adding two new fields:
    struct bpf_insn *paired_st;
    s16 ldst_gather_len;
Both fields are carried by the load instruction at the head of the sequence: "paired_st" is the corresponding store instruction at the head and "ldst_gather_len" is the gathered length. If "ldst_gather_len" is negative, the sequence does the memory load/store in descending order, otherwise in ascending order. We need this information to detect overlapped memory accesses. This patch then optimizes the memory bulk copy when the copy length is within 32 bytes. The read/write strategy is:
* Read. Use read32 (direct_ref), always.
* Write.
  - length <= 8 bytes: write8 (direct_ref).
  - length <= 32 bytes and 4-byte aligned: write32 (direct_ref).
  - length <= 32 bytes but not 4-byte aligned: write8 (indirect_ref).
NOTE: the optimization must not change program semantics. The destination register of the last load instruction should contain the same value before and after this optimization. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: bpf: factor out is_mbpf_load & is_mbpf_storeJiong Wang
We often need to check whether one BPF insn is loading/storing data from/to memory, so it makes sense to factor out the related code into common helper functions. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: bpf: encode indirect commandsJakub Kicinski
Add support for emitting commands with field overwrites. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: bpf: correct the encoding for No-Dest immedJiong Wang
When immed is used with No-Dest, the emitter should use reg.dst instead of reg.areg for the destination; using the latter will actually encode register zero. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: bpf: relax source operands checkJiong Wang
The NFP normally requires the source operands to be different addressing modes, but we should rule out the very special NN_REG_NONE type. There are instructions that ignore both A/B operands, for example: local_csr_rd. For these instructions, we might pass the same operand type, NN_REG_NONE, for both A/B operands. NOTE: in the current NFP ISA, it is only possible for instructions with unrestricted operands to take none operands, but in case a new and similar instruction appears in restricted form, it would follow similar rules, so swreg_to_restricted is updated as well. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: bpf: don't do ld/shifts combination if shifts are jump destinationJiong Wang
If any of the shift insns in the ld/shift sequence is a jump destination, don't do the combination. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: bpf: don't do ld/mask combination if mask is jump destinationJiong Wang
If the mask insn in the ld/mask pair is a jump destination, don't do the combination. Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: bpf: flag jump destination to guide insn combine optimizationsJiong Wang
The NFP eBPF offload JIT engine does some instruction-combine based optimizations which, however, are not safe if the combined sequences cross basic block borders. Currently, there are post checks while fixing up jump destinations: if a jump destination is found to be an eBPF insn that has been combined into another one, the JIT engine raises an error and aborts. This is not optimal; the JIT engine ought to disable the optimization for such cross-bb-border sequences instead of aborting. As there is no control flow information in the eBPF infrastructure, we can't do basic-block based optimizations, so this patch extends the existing jump destination record pass to also flag the jump destinations; the instruction combine passes can then skip the optimizations if insns in the sequence are jump targets. Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
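The guard in each combine pass then reduces to an early bail-out (a sketch; the flag and meta naming approximate the nfp insn-meta fields this patch touches):

    /* pre-scan: mark every jump target */
    jmp_dst_meta->flags |= FLAG_INSN_IS_JUMP_DST;

    /* combine passes: refuse to merge across a jump target */
    if (meta2->flags & FLAG_INSN_IS_JUMP_DST)
        continue;  /* control flow may enter mid-sequence */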
2017-12-01nfp: bpf: record jump destination to simplify jump fixupJiong Wang
eBPF insns are internally organized as a dual-list inside the NFP offload JIT. Random access to an insn needs to be done by either forward or backward traversal along the list. One place we need such traversal is nfp_fixup_branches, where one traversal is needed for each jump insn to find the destination. Such traversals can be avoided if jump destinations are collected through a single traversal in a pre-scan pass, and such information is also useful in other places where jump destination info is needed. This patch adds this jump destination collection in nfp_prog_prepare. Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
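The collection itself is one linear walk (a sketch; helper and field names approximate the nfp JIT, exit/call handling elided):

    list_for_each_entry(meta, &nfp_prog->insns, l) {
        if (BPF_CLASS(meta->insn.code) != BPF_JMP ||
            BPF_OP(meta->insn.code) == BPF_EXIT)
            continue;
        dst_idx  = meta->n + 1 + meta->insn.off;   /* jump target index */
        dst_meta = nfp_bpf_goto_meta(nfp_prog, meta, dst_idx, cnt);
        dst_meta->flags |= FLAG_INSN_IS_JUMP_DST;
    }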
2017-12-01nfp: bpf: support backward jumpJiong Wang
This patch adds support for backward jump on NFP:
- restrictions on backward jump in various functions have been removed.
- nfp_fixup_branches now supports backward jump.
There is one thing to note: currently an input eBPF JMP insn may generate several NFP insns, for example:
    NFP imm move insn A \
    NFP compare insn  B --> 3 NFP insns jited from eBPF JMP insn M
    NFP branch insn   C /
    ---
    NFP insn X          --> 1 NFP insn jited from eBPF insn N
    ---
    ...
therefore, we do a sanity check to make sure the last jited insn from an eBPF JMP is an NFP branch instruction. Once backward jump is allowed, it is possible for an eBPF JMP insn to be at the end of the program. This, however, causes trouble for the sanity check: the check requires the end index of the NFP insns jited from one eBPF insn, while only the start index was recorded before this patch, so the end index can only be derived as:
    start_index_of_the_next_eBPF_insn - 1
or, for the above example:
    start_index_of_eBPF_insn_N (which is the index of NFP insn X) - 1
nfp_fixup_branches was using nfp_for_each_insn_walk2 to expose the *next* insn to each iteration of the traversal, so the last index could be calculated from it. Now, it needs some extra code to handle the last insn. Meanwhile, the use of walk2 is actually unnecessary: we can simply use the generic single-instruction walk, with the next insn easily obtained via list_next_entry. So, this patch migrates the jump fixup traversal to *list_for_each_entry*, which simplifies the code logic a little bit. The other thing to note is that a new state variable "last_bpf_off" is introduced to track the index of the last jited NFP insn. This is necessary because NFP generates special-purpose epilogue sequences, so the index of the last jited NFP insn is *not* always nfp_prog->prog_len - 1. Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01nfp: fix old kdoc issuesJakub Kicinski
Since commit 3a025e1d1c2e ("Add optional check for bad kernel-doc comments"), builds with W=1 complain about kdoc errors. Fix the kdoc issues we have. kdoc is still confused by defines in nfp_net_ctrl.h, but those are not really errors. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01Merge branch 'bpf-verifier-misc-improvements'Daniel Borkmann
Alexei Starovoitov says: ==================== A small set of verifier improvements and cleanups which are necessary for the bigger patch set of bpf-to-bpf calls coming later. See individual patches for details. Tested on x86 and arm64 hw. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01selftests/bpf: adjust test_align expected outputAlexei Starovoitov
Since the verifier started to print the liveness state of registers, adjust the expected output of test_align. Now this test checks both proper alignment handling by the verifier and the correctness of liveness marks. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01bpf: cleanup register_is_null()Alexei Starovoitov
Don't pass the large struct bpf_reg_state by value; pass it by pointer instead. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
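The change boils down to the following signature switch (a sketch; the body shown illustrates the NULL-ness check the helper performs):

    /* before: the whole register state is copied on every call */
    static bool register_is_null(struct bpf_reg_state reg)
    {
        return reg.type == SCALAR_VALUE &&
               tnum_equals_const(reg.var_off, 0);
    }

    /* after: only a pointer crosses the call boundary */
    static bool register_is_null(const struct bpf_reg_state *reg)
    {
        return reg->type == SCALAR_VALUE &&
               tnum_equals_const(reg->var_off, 0);
    }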
2017-12-01bpf: improve JEQ/JNE path walkingAlexei Starovoitov
The verifier knows how to trim paths that are known not to be taken at run-time when a register containing a run-time constant is compared with another constant. This was done only for the JEQ comparison; extend it to include JNE as well. More cases can be added in the future.
                         before   after
    bpf_lb-DLB_L3.o        2270    2051
    bpf_lb-DLB_L4.o        3682    3287
    bpf_lb-DUNKNOWN.o      1110    1080
    bpf_lxc-DDROP_ALL.o   27876   24980
    bpf_lxc-DUNKNOWN.o    38780   34308
    bpf_netdev.o          16937   15404
    bpf_overlay.o          7929    7191
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
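In check_cond_jmp_op() terms the extension is roughly the following (a sketch, not the verifier's exact code):

    if (BPF_SRC(insn->code) == BPF_K &&
        dst_reg->type == SCALAR_VALUE &&
        tnum_is_const(dst_reg->var_off)) {
        u64 val = dst_reg->var_off.value;

        if ((opcode == BPF_JEQ && val == insn->imm) ||
            (opcode == BPF_JNE && val != insn->imm)) {
            /* branch always taken: don't explore the fall-through */
            *insn_idx += insn->off;
            return 0;
        } else if ((opcode == BPF_JEQ && val != insn->imm) ||
                   (opcode == BPF_JNE && val == insn->imm)) {
            /* branch never taken: only the fall-through survives */
            return 0;
        }
    }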
2017-12-01bpf: improve verifier liveness marksAlexei Starovoitov
Registers with pointers filled from the stack were missing live_written marks, which caused liveness propagation to unnecessarily mark more registers as live_read and miss state-pruning opportunities later on.
                         before   after
    bpf_lb-DLB_L3.o        2285    2270
    bpf_lb-DLB_L4.o        3723    3682
    bpf_lb-DUNKNOWN.o      1110    1110
    bpf_lxc-DDROP_ALL.o   27954   27876
    bpf_lxc-DUNKNOWN.o    38954   38780
    bpf_netdev.o          16943   16937
    bpf_overlay.o          7929    7929
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01bpf: don't mark FP reg as uninitAlexei Starovoitov
When the verifier hits an internal bug, don't mark register R10 (FP) as uninit, since it's a read-only register, and it's not technically correct to let the verifier run further, since it may assume that R10 still has valid auxiliary state. This issue was discovered while developing subsequent patches; though the code eventually changed so that aux reg state doesn't have pointers any more, it is still safer to avoid clearing a read-only register. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-01bpf: print liveness info to verifier logAlexei Starovoitov
Let the verifier print register and stack liveness information into the verifier log. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>