2019-04-12rhashtable: reorder some inline functions and macros.NeilBrown
This patch only moves some code around; it doesn't change the code at all. A subsequent patch will benefit from this as it needs to add calls to functions which are now defined before the call-site, but weren't before. Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12rhashtable: fix some __rcu annotation errorsNeilBrown
With these annotations, the rhashtable now gets no warnings when compiled with "C=1" for sparse checking. Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: David S. Miller <davem@davemloft.net>
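For illustration, this is roughly the kind of annotation sparse checks here: a minimal sketch with hypothetical names, not code from the rhashtable patch itself.

    #include <linux/rcupdate.h>

    /* Hypothetical example: the __rcu annotation lets sparse (make C=1) flag
     * any access that bypasses the RCU accessors. */
    struct entry {
            struct entry __rcu *next;
            int key;
    };

    struct bucket {
            struct entry __rcu *head;
    };

    static struct entry *bucket_first(struct bucket *b)
    {
            /* read side, under rcu_read_lock() */
            return rcu_dereference(b->head);
    }

    static void bucket_set_head(struct bucket *b, struct entry *e)
    {
            /* update side, under the writer's lock */
            rcu_assign_pointer(b->head, e);
    }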
2019-04-12rhashtable: use struct_size() in kvzalloc()Gustavo A. R. Silva
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example:

    struct foo {
        int stuff;
        struct boo entry[];
    };

    size = sizeof(struct foo) + count * sizeof(struct boo);
    instance = kvzalloc(size, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper:

    instance = kvzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: David S. Miller <davem@davemloft.net>
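A minimal sketch of why the helper is safer than the open-coded sum, assuming the key property that struct_size() saturates instead of wrapping on overflow (the real macro lives in <linux/overflow.h>; struct_size_sketch below is a hypothetical userspace stand-in):

    #include <stddef.h>
    #include <stdint.h>

    /* Saturate to SIZE_MAX on overflow so the subsequent allocation fails
     * cleanly instead of silently under-allocating. */
    static inline size_t struct_size_sketch(size_t base, size_t elem, size_t count)
    {
            size_t array;

            if (count != 0 && elem > SIZE_MAX / count)
                    return SIZE_MAX;
            array = elem * count;
            if (array > SIZE_MAX - base)
                    return SIZE_MAX;
            return base + array;
    }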
2019-04-12Merge branch 'nfp-update-to-control-structures'David S. Miller
Jakub Kicinski says: ==================== nfp: update to control structures This series prepares NFP control structures for crypto offloads. So far we mostly dealt with configuration requests under rtnl lock. This will no longer be the case with crypto. Additionally we will try to reuse the BPF control message format, so we move common code out of BPF. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12nfp: split out common control message handling codeJakub Kicinski
BPF's control message handler seems like a good base to build on for request-reply control messages. Split it out to allow for reuse. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12nfp: move vNIC reset before netdev initJakub Kicinski
During probe we clear vNIC configuration in case the device wasn't closed cleanly by previous driver. Move that code before netdev init, so netdev init can already try to apply its config parameters. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12nfp: add a mutex lock for the vNIC ctrl BARJakub Kicinski
Soon we will try to write to the vNIC mailbox without RTNL held. Add a new mutex to protect access to specific parts of the PCI control BAR. Move the mailbox size checking to the mailbox lock() helper, where it can be more effective (happen prior to potential overwrite of other data). Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
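A hedged sketch of the locking shape described above; the struct and function names are hypothetical, not the actual nfp symbols.

    #include <linux/errno.h>
    #include <linux/mutex.h>

    /* Hypothetical names: a driver-private mutex guards the mailbox part of
     * the control BAR, and the size check happens inside the lock helper,
     * before anything else in the BAR can be overwritten. */
    struct nn_sketch {
            struct mutex bar_lock;
            unsigned int mbox_size;
    };

    static int nn_mbox_lock_sketch(struct nn_sketch *nn, unsigned int data_len)
    {
            mutex_lock(&nn->bar_lock);
            if (nn->mbox_size < data_len) {
                    mutex_unlock(&nn->bar_lock);
                    return -EIO;
            }
            return 0;       /* caller writes the mailbox, then unlocks */
    }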
2019-04-12nfp: opportunistically poll for reconfig resultDirk van der Merwe
If the reconfig was a quick update, we could have results available from firmware within 200us. Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12ipv6: Remove flowi6_oif compare from __ip6_route_redirectDavid Ahern
In the review of 0b34eb004347 ("ipv6: Refactor __ip6_route_redirect"), Martin noted that the flowi6_oif compare is moved to the new helper and should be removed from __ip6_route_redirect. Fix the oversight. Fixes: 0b34eb004347 ("ipv6: Refactor __ip6_route_redirect") Reported-by: Martin Lau <kafai@fb.com> Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12Merge branch 'netdevsim-Mostly-cleanup-in-sdev-bpf-iface-area'David S. Miller
Jiri Pirko says: ==================== netdevsim: Mostly cleanup in sdev/bpf iface area These patches are mainly an internal netdevsim code shuffle. Nothing serious, just small changes to help readability and preparations for future work. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12netdevsim: move sdev-specific init/uninit code into separate functionsJiri Pirko
In order to improve readability and prepare for future code changes, move sdev specific init/uninit code into separate functions. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12netdevsim: make bpf_offload_dev_create() per-sdev instead of first nsJiri Pirko
The offload dev is stored in the sdev struct. However, the first netdevsim instance is used as the priv. Change this to be sdev, as it is shared among multiple netdevsim instances. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12netdevsim: move sdev specific bpf debugfs files to sdev dirJiri Pirko
Some netdevsim bpf debugfs files are per-sdev, yet they are defined per netdevsim instance. Move them under sdev directory. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12netdevsim: move shared dev creation and destruction into separate fileJiri Pirko
To make code easier to read, move shared dev bits into a separate file. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12Documentation: net: dsa: transition to the rst formatIoana Ciornei
This patch also performs some minor adjustments such as numbering for the receive path sequence, conversion of keywords to inline literals and adding an index page so it looks better in the output of 'make htmldocs'. Signed-off-by: Ioana Ciornei <ciorneiioana@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net: veth: use generic helper to report timestamping infoJulian Wiedmann
For reporting the common set of SW timestamping capabilities, use ethtool_op_get_ts_info() instead of re-implementing it. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net: loopback: use generic helper to report timestamping infoJulian Wiedmann
For reporting the common set of SW timestamping capabilities, use ethtool_op_get_ts_info() instead of re-implementing it. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net: dummy: use generic helper to report timestamping infoJulian Wiedmann
For reporting the common set of SW timestamping capabilities, use ethtool_op_get_ts_info() instead of re-implementing it. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
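The shape of the change in the three drivers above is simply to point the ethtool op at the existing generic helper; the ops struct below is illustrative, not the actual veth/loopback/dummy code.

    #include <linux/ethtool.h>

    /* ethtool_op_get_ts_info() reports the generic software timestamping
     * capabilities (SOF_TIMESTAMPING_TX_SOFTWARE, _RX_SOFTWARE, _SOFTWARE). */
    static const struct ethtool_ops example_ethtool_ops = {
            .get_ts_info    = ethtool_op_get_ts_info,
    };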
2019-04-12Merge branch 'smc-next'David S. Miller
Ursula Braun says: ==================== net/smc: patches 2019-04-12 here are patches for SMC: * patch 1 improves behavior of non-blocking connect * patches 2, 3, 5, 7, and 8 improve connecting return codes * patches 4 and 6 are a cleanups without functional change ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net/smc: improve smc_conn_create reason codesKarsten Graul
Rework smc_conn_create() to always return a valid DECLINE reason code. This removes the need to translate the return codes in 4 different places and makes it easy to add more detailed return codes by changing smc_conn_create() only. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net/smc: improve smc_listen_work reason codesKarsten Graul
Rework smc_listen_work() to provide improved reason codes when an SMC connection is declined. This allows better debugging on user side. This also adds 3 more detailed reason codes in smc_clc.h to indicate what type of device was not found (ism or rdma or both), or if ism cannot talk to the peer. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net/smc: code cleanup smc_listen_workKarsten Graul
In smc_listen_work() the variables rc and reason_code are defined which have the same meaning. Eliminate reason_code in favor of the shorter name rc. No functional changes. Rename the functions smc_check_ism() and smc_check_rdma() into smc_find_ism_device() and smc_find_rdma_device() to make their purpose clearer. No functional changes. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net/smc: cleanup of get vlan idKarsten Graul
The vlan_id of the underlying CLC socket was retrieved two times during processing of the listen handshaking. Change this to get the vlan id one time in connect and in listen processing, and reuse the id. And add a new CLC DECLINE return code for the case when the retrieval of the vlan id failed. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net/smc: consolidate function parametersKarsten Graul
During initialization of an SMC socket a lot of function parameters need to get passed down the function call path. Consolidate the parameters in a helper struct so there are few enough parameters for them all to be passed by register. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
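A minimal sketch of the consolidation idea, with hypothetical struct and field names (the actual helper struct in the patch may differ):

    #include <linux/types.h>

    struct smc_ib_device;
    struct smcd_dev;

    /* Values previously threaded through the connect/listen call chain as
     * individual arguments travel as one pointer instead. */
    struct smc_init_params_sketch {
            bool                    is_smcd;        /* SMC-D (ISM) vs. SMC-R (RDMA) */
            u8                      ib_port;        /* chosen RoCE port */
            struct smc_ib_device    *ib_dev;        /* chosen RDMA device */
            struct smcd_dev         *ism_dev;       /* chosen ISM device */
            unsigned short          vlan_id;        /* VLAN of the CLC socket */
    };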
2019-04-12net/smc: check for ip prefix and subnetKarsten Graul
The check for a matching ip prefix and subnet was only done for SMC-R in smc_listen_rdma_check() but not when an SMC-D connection was possible. Rename the function into smc_listen_prfx_check() and move its call to a place where it is called for both SMC variants. And add a new CLC DECLINE reason for the case when the IP prefix or subnet check fails so the reason for the failing SMC connection can be found out more easily. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net/smc: fallback to TCP after connect problemsKarsten Graul
Correct the CLC decline reason codes for internal problems to not have the sign bit set; negative reason codes are interpreted as not eligible for TCP fallback. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12net/smc: nonblocking connect reworkUrsula Braun
For nonblocking sockets move the kernel_connect() from the connect worker into the initial smc_connect part to return kernel_connect() errors other than -EINPROGRESS to user space. Reviewed-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12xen-netback: add reference from xenvif to backend_info to facilitate coredump analysisDongli Zhang
During coredump analysis, it is not easy to obtain the address of backend_info in xen-netback. So far there are two ways to obtain backend_info: 1. Do what xenbus_device_find() does for vmcore to find the xenbus_device and then derive it from dev_get_drvdata(). 2. Extract backend_info from the callstack of xenwatch (e.g., netback_remove() or frontend_changed()). This patch adds a reference from xenvif to backend_info so that it is much easier to obtain backend_info during coredump analysis. Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
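The shape of the addition can be sketched as a back pointer on the vif; the field name below is hypothetical.

    /* With a back pointer, a vmcore reader can reach backend_info directly
     * from the xenvif instead of replaying xenbus_device_find() or walking a
     * xenwatch call stack. */
    struct backend_info;

    struct xenvif_sketch {
            /* existing xenvif state elided */
            struct backend_info *be;        /* reference for coredump analysis */
    };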
2019-04-11Merge branch 'sctp-skb-list'David S. Miller
David Miller says: ==================== SCTP: Event skb list overhaul. This patch series eliminates the explicit reference to the skb list implementation via skb->prev dereferences. The approach used is to pass a non-empty skb list around instead of an event skb object which may or may not be on a list. I'd like to thank Marcelo Leitner, Xin Long, and Neil Horman for reviewing previous versions of this series. Testing would be very much appreciated, in addition to the review of course.

v4 --> v5: Rebase to net-next
v3 --> v4: Fix the logic in patch #4 so that we don't miss cases where we should add event to the on-stack temp list.

==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11sctp: Pass sk_buff_head explicitly to sctp_ulpq_tail_event().David Miller
Now the SKB list implementation assumption can be removed. And now that we know that the list head is always non-NULL we can remove the code blocks dealing with that as well. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11sctp: Make sctp_enqueue_event take an skb list.David Miller
Pass this, instead of an event. Then everything trickles down and we always have events on a non-empty list. We then need a list-creating stub to place into .enqueue_event for sctp_stream_interleave_1. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11sctp: Use helper for sctp_ulpq_tail_event() when hooked up to ->enqueue_eventDavid Miller
This way we can make sure events sent this way to sctp_ulpq_tail_event() are on a list as well. Now all such code paths are fully covered. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11sctp: Always pass skbs on a list to sctp_ulpq_tail_event().David Miller
This way we can simplify the logic and remove assumptions about the implementation of skb lists. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
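A sketch of the calling pattern the series converges on, assuming the list-taking signature of sctp_ulpq_tail_event(); the wrapper name is hypothetical.

    #include <linux/skbuff.h>
    #include <net/sctp/ulpqueue.h>

    /* A single event skb is handed over on a one-element sk_buff_head rather
     * than as a bare skb whose list linkage is inspected directly. */
    static int enqueue_single_event_sketch(struct sctp_ulpq *ulpq,
                                           struct sk_buff *event_skb)
    {
            struct sk_buff_head temp;

            skb_queue_head_init(&temp);
            __skb_queue_tail(&temp, event_skb);
            return sctp_ulpq_tail_event(ulpq, &temp);
    }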
2019-04-11sctp: Remove superfluous test in sctp_ulpq_reasm_drain().David Miller
Inside the loop, we always start with event non-NULL. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11net: sched: flower: fix filter net reference countingVlad Buslov
Fix net reference counting in fl_change() and remove the redundant call to tcf_exts_get_net() from __fl_delete(). __fl_put() already tries to get net before releasing exts and deallocating a filter, so this code caused the flower classifier to obtain net twice per filter being deleted. The implementation of __fl_delete() called tcf_exts_get_net() to pass its result as the 'async' flag to fl_mask_put(). However, the 'async' flag is redundant and only complicates the fl_mask_put() implementation. This functionality seems to have been copied from the filter cleanup code, where it was added by Cong with the following explanation: This patchset tries to fix the race between call_rcu() and cleanup_net() again. Without holding the netns refcnt the tc_action_net_exit() in netns workqueue could be called before filter destroy works in tc filter workqueue. This patchset moves the netns refcnt from tc actions to tcf_exts, without breaking per-netns tc actions. This doesn't apply to the flower mask, which doesn't call any tc action code during cleanup. Simplify fl_mask_put() by removing the flag parameter and always using tcf_queue_work() to free mask objects. Fixes: 061775583e35 ("net: sched: flower: introduce reference counting for filters") Fixes: 1f17f7742eeb ("net: sched: flower: insert filter to ht before offloading it to hw") Fixes: 05cd271fd61a ("cls_flower: Support multiple masks per priority") Reported-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11selftests: Add debugging options to pmtu.shDavid Ahern
pmtu.sh script runs a number of tests and dumps a summary of pass/fail. If a test fails, it is near impossible to debug why. For example: TEST: ipv6: PMTU exceptions [FAIL] There are a lot of commands run behind the scenes for this test. Which one is failing? Add a VERBOSE option to show commands that are run and any output from those commands. Add a PAUSE_ON_FAIL option to halt the script if a test fails allowing users to poke around with the setup in the failed state. In the process, rename tracing to TRACING and move declaration to top with the new variables. Signed-off-by: David Ahern <dsahern@gmail.com> Reviewed-by: Stefano Brivio <sbrivio@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-nextDavid S. Miller
Daniel Borkmann says: ==================== pull-request: bpf-next 2019-04-12 The following pull-request contains BPF updates for your *net-next* tree. The main changes are:

1) Improve BPF verifier scalability for large programs through two optimizations: i) remove verifier states that are not useful in pruning, ii) stop walking parentage chain once first LIVE_READ is seen. Combined gives approx 20x speedup. Increase limits for accepting large programs under root, and add various stress tests, from Alexei.

2) Implement global data support in BPF. This enables static global variables for .data, .rodata and .bss sections to be properly handled which allows for more natural program development. This also opens up the possibility to optimize program workflow by compiling ELFs only once and later only rewriting section data before reload, from Daniel and with test cases and libbpf refactoring from Joe.

3) Add config option to generate BTF type info for vmlinux as part of the kernel build process. DWARF debug info is converted via pahole to BTF. Latter relies on libbpf and makes use of BTF deduplication algorithm which results in 100x savings compared to DWARF data. Resulting .BTF section is typically about 2MB in size, from Andrii.

4) Add BPF verifier support for stack access with variable offset from helpers and add various test cases along with it, from Andrey.

5) Extend bpf_skb_adjust_room() growth BPF helper to mark inner MAC header so that L2 encapsulation can be used for tc tunnels, from Alan.

6) Add support for input __sk_buff context in BPF_PROG_TEST_RUN so that users can define a subset of allowed __sk_buff fields that get fed into the test program, from Stanislav.

7) Add bpf fs multi-dimensional array tests for BTF test suite and fix up various UBSAN warnings in bpftool, from Yonghong.

8) Generate a pkg-config file for libbpf, from Luca.

9) Dump program's BTF id in bpftool, from Prashant.

10) libbpf fix to use smaller BPF log buffer size for AF_XDP's XDP program, from Magnus.

11) kallsyms related fixes for the case when symbols are not present in BPF selftests and samples, from Daniel ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-12bridge: broute: make broute a real ebtables tableFlorian Westphal
This makes broute a normal ebtables table, hooking at PREROUTING. The broute hook is removed. It uses skb->cb to signal to the bridge rx handler that the skb should be routed instead of being bridged. This change is backwards compatible with ebtables as no userspace visible parts are changed. This means we can also remove the !ops test in ebt_register_table; it was only there for the broute table's sake. Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2019-04-12bridge: netfilter: unroll NF_HOOK helper in bridge input pathFlorian Westphal
Replace NF_HOOK() based invocation of the netfilter hooks with a private copy of nf_hook_slow(). This copy has one difference: it can return the rx handler value expected by the stack, i.e. RX_HANDLER_CONSUMED or RX_HANDLER_PASS. This is needed by the next patch to invoke the ebtables "broute" table via the standard netfilter hooks rather than the custom "br_should_route_hook" indirection that is used now. When the skb is to be "brouted", we must return RX_HANDLER_PASS from the bridge rx input handler, but there is no way to indicate this via NF_HOOK(), unless perhaps by some hack such as exposing bridge_cb in the netfilter core or a percpu flag.

    text    data    bss    dec   filename
    3369      56      0   3425   net/bridge/br_input.o.before
    3458      40      0   3498   net/bridge/br_input.o.after

This allows removal of the "br_should_route_hook" in the next patch. Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2019-04-12bridge: reduce size of input cb to 16 bytesFlorian Westphal
Reduce size of br_input_skb_cb from 24 to 16 bytes by using a bitfield for those values that can only be 0 or 1. igmp is the igmp type value, so it needs to be at least u8. Furthermore, the bridge currently relies on step-by-step initialization of br_input_skb_cb fields as the skb passes through the stack. Explicitly zero out the bridge input cb instead; this avoids having to review/validate that no BR_INPUT_SKB_CB(skb)->foo test can see a 'random' value from a previous protocol cb. AFAICS all current fields are always set up before they are read again, so this is not a bug fix. Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
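A hedged sketch of the technique, not the exact br_input_skb_cb layout:

    #include <linux/types.h>

    /* Flags that are only ever 0 or 1 become one-bit bitfields; igmp stays a
     * full u8 because it carries the IGMP type. The cb is then zeroed once on
     * ingress rather than initialised field by field. */
    struct br_input_cb_sketch {
            u8 igmp;
            u8 mrouters_only:1;
            u8 proxyarp_replied:1;
            u8 src_port_isolated:1;
            u8 vlan_filtered:1;
    };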
2019-04-12selftests: netfilter: add ebtables broute test caseFlorian Westphal
ebtables -t broute allows to redirect packets in a way that they get pushed up the stack, even if the interface is part of a bridge. In case of IP packets to a non-local address, this means those IP packets are routed instead of bridge-forwarded, just as if the bridge did not exist. Expected test output is:

    PASS: netns connectivity: ns1 and ns2 can reach each other
    PASS: ns1/ns2 connectivity with active broute rule
    PASS: ns1/ns2 connectivity with active broute rule and bridge forward drop

Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2019-04-12bpf: explicitly prohibit ctx_{in, out} in non-skb BPF_PROG_TEST_RUNStanislav Fomichev
This should allow us later to extend BPF_PROG_TEST_RUN for non-skb case and be sure that nobody is erroneously setting ctx_{in,out}. Fixes: b0b9395d865e ("bpf: support input __sk_buff context in BPF_PROG_TEST_RUN") Reported-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
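The intent can be sketched as a guard in the non-skb test-run paths (simplified, with a hypothetical helper name; not the exact bpf syscall code):

    #include <linux/bpf.h>
    #include <linux/errno.h>

    /* Non-skb BPF_PROG_TEST_RUN handlers reject requests that supply the
     * __sk_buff context fields added for the skb case. */
    static int check_ctx_unused_sketch(const union bpf_attr *kattr)
    {
            if (kattr->test.ctx_in || kattr->test.ctx_out ||
                kattr->test.ctx_size_in || kattr->test.ctx_size_out)
                    return -EINVAL;
            return 0;
    }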
2019-04-11tools: add smp_* barrier variants to include infrastructureDaniel Borkmann
Add the definitions for smp_rmb(), smp_wmb(), and smp_mb() to the tools include infrastructure: this patch adds the implementation for x86-64 and arm64, and has it fall back, as it currently does, for other archs which do not have it implemented at this point. The x86-64 one uses a lock + add combination for smp_mb() with an address below the red zone. This is on top of 09d62154f613 ("tools, perf: add and use optimized ring_buffer_{read_head, write_tail} helpers"), which didn't touch smp_* barrier implementations. Magnus recently rightfully reported however that the latter on x86-64 still wrongly falls back to sfence, lfence and mfence respectively, thus fix that for applications under tools making use of these to avoid such ugly surprises. The main header under tools (include/asm/barrier.h) will in that case not select the fallback implementation. Reported-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
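A hedged sketch of the x86-64 approach described above; the exact operand in tools' barrier.h may differ.

    /* A lock-prefixed add to a dummy stack slot below the 128-byte red zone
     * acts as a full barrier and is cheaper than mfence on current x86-64. */
    #if defined(__x86_64__)
    #define smp_mb_sketch() \
            asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
    #endif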
2019-04-11Merge branch 'ipv6-Refactor-nexthop-selection-helpers-during-a-fib-lookup'David S. Miller
David Ahern says: ==================== ipv6: Refactor nexthop selection helpers during a fib lookup IPv6 has a fib6_nh embedded within each fib6_info and a separate fib6_info for each path in a multipath route. A side effect is that a fib6_info is passed all the way down the stack when selecting a path on a fib lookup. Refactor the fib lookup functions and associated helper functions to take a fib6_nh when appropriate to enable IPv6 to work with nexthop objects where the fib6_nh is not directly part of a fib entry. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11ipv6: Refactor __ip6_route_redirectDavid Ahern
Move the nexthop evaluation of a fib entry to a helper that can be leveraged for each fib6_nh in a multipath nexthop object. In the move, 'continue' statements mean the helper returns false (loop should continue) and 'break' means return true (found the entry of interest). Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11ipv6: Refactor rt6_device_matchDavid Ahern
Move the device and gateway checks in the fib6_next loop to a helper that can be called per fib6_nh entry. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11ipv6: Move fib6_multipath_select down in ip6_pol_routeDavid Ahern
Move the siblings and fib6_multipath_select after the null entry check since a null entry can not have siblings. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11ipv6: Be smarter with null_entry handling in ip6_pol_route_lookupDavid Ahern
Clean up the fib6_null_entry handling in ip6_pol_route_lookup. rt6_device_match can return fib6_null_entry, but fib6_multipath_select can not. Consolidate the fib6_null_entry handling and on the final null_entry check set rt and goto out - no need to defer to a second check after rt6_find_cached_rt. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11ipv6: Refactor find_rr_leafDavid Ahern
find_rr_leaf has 3 loops over fib_entries calling find_match. The loops are very similar, with differences in start point and whether the metric is evaluated:

1. start at rr_head, no extra loop compare, check fib metric
2. start at leaf, compare rt against rr_head, check metric
3. start at cont (potential saved point from earlier loops), no extra loop compare, no metric check

Create 1 loop that is called 3 different times. This will make a later change with multipath nexthop objects much simpler. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-11ipv6: Refactor find_matchDavid Ahern
find_match primarily needs a fib6_nh (and fib6_flags which it passes through to rt6_score_route). Move fib6_check_expired up to the call sites so find_match is only called for relevant entries. Remove the match argument which is mostly a pass through and use the return boolean to decide if match gets set in the call sites. The end result is a helper that can be called per fib6_nh struct which is needed once fib entries reference nexthop objects that have more than one fib6_nh. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>