path: root/drivers/net/ipa
Age  Commit message  Author
2022-06-13  net: ipa: derive channel from transaction  (Alex Elder)
In gsi_channel_tx_queued(), we report when a transaction gets passed to hardware. Change that function so it takes a transaction rather than a channel as its argument, and derive the channel from the transaction. Rename the function accordingly. Delete the header comments above the function definition; the ones above the declaration in "gsi_private.h" should suffice. In addition, the comments above gsi_channel_tx_update() do a fine job of explaining what's going on. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-13  net: ipa: determine channel from event  (Alex Elder)
Each event in an event ring describes the TRE whose completion caused the event. Currently, every event ring is dedicated to a single channel, so the channel is easily derived from the event ring. An event ring can actually be shared by more than one channel though, and to distinguish events for one channel from another, the event structure contains a field indicating which channel the event is associated with. In gsi_event_trans(), use the channel ID in an event to determine which channel the event is for. This makes the channel pointer now passed to that function irrelevant; pass the GSI pointer to that function instead. And although it shouldn't happen, warn if an event arrives that records a channel ID that's not in use, or if the event does not have a transaction associated with it. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
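A minimal sketch of the lookup this entry describes; the event field, channel array, and lookup helper named here are assumptions for illustration rather than the driver's exact definitions:

    /* Resolve a completion event to its channel via the channel ID the
     * event itself records (event->chid, gsi->channel[], and
     * gsi_event_trans_lookup() are hypothetical names).
     */
    static struct gsi_trans *gsi_event_trans(struct gsi *gsi,
                                             struct gsi_event *event)
    {
            u32 channel_id = event->chid;
            struct gsi_channel *channel;

            channel = &gsi->channel[channel_id];
            if (WARN(!channel->gsi, "event has bad channel %u\n", channel_id))
                    return NULL;    /* channel ID not in use */

            /* Map the TRE the event refers to back to its transaction */
            return gsi_event_trans_lookup(channel, event);
    }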
2022-06-13  net: ipa: simplify endpoint transaction completion  (Alex Elder)
When a GSI transaction completes, ipa_endpoint_trans_complete() is eventually called. That handles TX and RX completions separately, but ipa_endpoint_tx_complete() is a no-op. Instead, have ipa_endpoint_trans_complete() return immediately for a TX transaction, and incorporate code from ipa_endpoint_rx_complete() to handle RX transactions. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-13  net: ipa: rename endpoint->trans_tre_max  (Alex Elder)
The trans_tre_max field of the IPA endpoint structure is only used to limit the number of fragments allowed for an SKB being prepared for transmission. Recognizing that, rename the field skb_frag_max, and reduce its value by 1. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
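A hedged illustration of how such a limit is typically enforced on the transmit path (skb_shinfo() and skb_linearize() are standard kernel helpers; the surrounding logic and error path are assumptions):

    /* If the SKB has more fragments than one transaction can describe
     * (one TRE is needed for the SKB's linear data, hence the value
     * reduced by 1), flatten it before building the transaction.
     */
    if (skb_shinfo(skb)->nr_frags > endpoint->skb_frag_max) {
            if (skb_linearize(skb))
                    goto err_drop;  /* hypothetical drop path */
    }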
2022-06-13  net: ipa: rename channel->tlv_count  (Alex Elder)
Each GSI channel has a TLV FIFO of a certain size, specified in the configuration data for an AP channel. That size dictates the maximum number of TREs that are allowed in a single transaction. The only way that value is used after initialization is as a limit on the number of TREs in a transaction; calling it "tlv_count" isn't helpful, and in fact gsi_channel_trans_tre_max() exists to sort of abstract it. Instead, rename the channel->tlv_count field trans_tre_max, and get rid of the helper function. Update a couple of comments as well. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-06-13  net: ipa: verify command channel TLV count  (Alex Elder)
In commit 8797972afff3d ("net: ipa: remove command info pool"), the maximum number of IPA commands that would be sent in a single transaction was defined. That number can't exceed the size of the TLV FIFO on the command channel, and we can check that at runtime. To add this check, pass a new flag to gsi_channel_data_valid() to indicate the channel being checked is being used for IPA commands. Knowing that we can also verify the channel direction is correct. Use a new local variable that refers to the command-specific portion of the data being checked. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-27  net: ipa: fix page free in ipa_endpoint_replenish_one()  (Alex Elder)
Currently the (possibly compound) pages used for receive buffers are freed using __free_pages(). But according to this comment above the definition of that function, that's wrong: If you want to use the page's reference count to decide when to free the allocation, you should allocate a compound page, and use put_page() instead of __free_pages(). Convert the call to __free_pages() in ipa_endpoint_replenish_one() to use put_page() instead. Fixes: 6a606b90153b8 ("net: ipa: allocate transaction in replenish loop") Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-05-27  net: ipa: fix page free in ipa_endpoint_trans_release()  (Alex Elder)
Currently the (possibly compound) pages used for receive buffers are freed using __free_pages(). But according to this comment above the definition of that function, that's wrong: If you want to use the page's reference count to decide when to free the allocation, you should allocate a compound page, and use put_page() instead of __free_pages(). Convert the call to __free_pages() in ipa_endpoint_trans_release() to use put_page() instead. Fixes: ed23f02680caa ("net: ipa: define per-endpoint receive buffer size") Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
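The conversion described in these two fixes amounts to the following, where buffer_size stands in for the endpoint's receive buffer size:

    /* Before: frees the pages outright, ignoring the page reference count */
    __free_pages(page, get_order(buffer_size));

    /* After: drop this reference; the compound page is freed only when the
     * last reference (for example one held by an SKB built over the
     * buffer) is dropped.
     */
    put_page(page);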
2022-05-22  net: ipa: use data space for command opcodes  (Alex Elder)
The 64-bit data field in a transaction is not used for commands. And the opcode array is *only* used for commands. They're (currently) the same size; save a little space in the transaction structure by enclosing the two fields in a union. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
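A sketch of the union this entry describes; the layout is illustrative, and the array size of 8 corresponds to the IPA_COMMAND_TRANS_TRE_MAX limit discussed in the following entry:

    struct gsi_trans {
            /* ... */
            union {                         /* a transaction carries either */
                    void *data;             /*  data (non-command), or */
                    u8 cmd_opcode[8];       /*  per-TRE command opcodes */
            };
            /* ... */
    };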
2022-05-22  net: ipa: remove command info pool  (Alex Elder)
The ipa_cmd_info structure now contains only one field, and it's an enumerated type whose values all fit in 8 bits. Currently we'll never use more than 8 TREs in a command transaction, and we can represent that number of command opcodes in the same space as a 64 bit pointer to an ipa_cmd_info structure. Define IPA_COMMAND_TRANS_TRE_MAX as the maximum number of TREs that can be in a command transaction. Replace the info pointer in a transaction with a fixed-size array named cmd_opcode[] of that many bytes. Store the opcode in this array when adding a command TRE to a transaction, as was done previously for the info array. This makes the ipa_cmd_info unused, so get rid of it. When committing an immediate command transaction, use the channel's Boolean command flag to determine whether to fill in the opcode, which will be taken (as before) from the array in the transaction. This makes the command info pool unnecessary, so get rid of it. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22  net: ipa: remove command direction argument  (Alex Elder)
We no longer use the direction argument for gsi_trans_cmd_add(), so get rid of it in its definition, and in its seven callers. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22  net: ipa: get rid of ipa_cmd_info->direction  (Alex Elder)
The direction field of the ipa_cmd_info structure is set, but never used. It seems it might have been used for the DMA_SHARED_MEM immediate command, but the DIRECTION flag is set based on the value of the passed-in direction flag there. Anyway, remove this unused field from the ipa_cmd_info structure. This is done as a separate patch to make it very obvious that it's not required. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22  net: ipa: count the number of modem TX endpoints  (Alex Elder)
In ipa_endpoint_modem_exception_reset_all(), a high estimate was made of the number of endpoints that need their status register updated. We only used what was needed, so the high estimate didn't matter much. However the next few patches are going to limit the number of commands in a single transaction, and the overestimate would exceed that. So count the number of modem TX endpoints at initialization time, and use it in ipa_endpoint_modem_exception_reset_all(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22  net: ipa: kill gsi_trans_commit_wait_timeout()  (Alex Elder)
Since the beginning gsi_trans_commit_wait_timeout() has existed to provide a way to allow waiting a limited time for a transaction to complete. But that function has never been used. In fact, there is no use for this function, because a transaction committed to hardware should *always* complete. The only reason it might not complete is if there were a hardware failure, or perhaps a system configuration error. Furthermore, if a timeout ever did occur, the IPA hardware would be in an indeterminate state, from which there is no recovery. It would require some sort of complete IPA reset, and would require the participation of the modem, and at this time there is no such sequence defined. So get rid of the definition of gsi_trans_commit_wait_timeout(), and update a few comments accordingly. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22  net: ipa: specify RX aggregation time limit in config data  (Alex Elder)
Don't assume that a 500 microsecond time limit should be used for all receive endpoints that support aggregation. Instead, specify the time limit to use in the configuration data. Set a 500 microsecond limit for all existing RX endpoints, as before. Checking for overflow for the time limit field is a bit complicated. Rather than duplicate a lot of code in ipa_endpoint_data_valid_one(), call WARN() if any value is found to be too large when encoding it. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
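The "call WARN() if any value is found to be too large when encoding it" approach looks roughly like this sketch; TIME_LIMIT_FMASK and the granularity conversion are assumptions, while field_max() and u32_encode_bits() come from <linux/bitfield.h>:

    /* Convert a microsecond limit to hardware units, warning (and clamping)
     * if it doesn't fit in the register field, rather than duplicating
     * range checks in ipa_endpoint_data_valid_one().
     */
    static u32 aggr_time_limit_encode(u32 microseconds)
    {
            u32 ticks = microseconds / IPA_AGGR_GRANULARITY;        /* assumed */

            if (WARN(ticks > field_max(TIME_LIMIT_FMASK),
                     "aggregation time limit too large (%u usec)\n",
                     microseconds))
                    ticks = field_max(TIME_LIMIT_FMASK);

            return u32_encode_bits(ticks, TIME_LIMIT_FMASK);
    }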
2022-05-22  net: ipa: support hard aggregation limits  (Alex Elder)
Add a new flag for AP receive endpoints that indicates whether a "hard limit" is used as a criterion for closing aggregation. Add comments explaining the difference between "hard" and "soft" aggregation limits. Pass a flag to ipa_aggr_size_kb() so it computes the proper aggregation size value whether using hard or soft limits. Move that function earlier in "ipa_endpoint.c" so it can be used without a forward-reference. Update ipa_endpoint_data_valid_one() so it validates endpoints whose data indicate a hard aggregation limit is used, and so it reports an error if any aggregation flags are set for an endpoint that does not have aggregation enabled. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-22  net: ipa: make endpoint HOLB drop configurable  (Alex Elder)
Add a new Boolean flag for RX endpoints defining whether HOLB drop is initially enabled or disabled for the endpoint. All existing AP endpoints should have HOLB drop disabled. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20  net: ipa: save a copy of endpoint default config  (Alex Elder)
All elements of the default endpoint configuration are used in the code when programming an endpoint for use. But none of the other configuration data is ever needed once things are initialized. So rather than saving a pointer to *all* of the configuration data, save a copy of only the endpoint configuration portion. This will eventually allow endpoint configuration to be modifiable at runtime. But even before that, it means we won't keep a pointer to configuration data after it is no longer needed. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20  net: ipa: rename a few endpoint config data types  (Alex Elder)
Rename the just-moved data structure types to drop the "_data" suffix, to make it more obvious they are no longer meant to be used just as read-only initialization data. Rename the fields and variables of these types to use "config" instead of "data" in the name. This is another small step meant to facilitate review. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20  net: ipa: move endpoint configuration data definitions  (Alex Elder)
Move the definitions of the structures defining endpoint-specific configuration data out of "ipa_data.h" and into "ipa_endpoint.h". This is a trivial movement of code without any other change, to prepare for the next few patches. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20  net: ipa: open-code ether_setup()  (Alex Elder)
About half of the fields set by the call in ipa_modem_netdev_setup() are overwritten after the call. Instead, just skip the call, and open-code the (other) assignments it makes to the net_device structure fields. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20  net: ipa: ignore endianness if there is no header  (Alex Elder)
If we program an RX endpoint to have no header (header length is 0), header-related endpoint configuration values are meaningless and are ignored. The only case we support that defines a header is QMAP endpoints. In ipa_endpoint_init_hdr_ext() we set the endianness mask value unconditionally, but it should not be done if there is no header (meaning it is not configured for QMAP). Set the endianness conditionally, and rearrange the logic in that function slightly to avoid testing the qmap flag twice. Delete an incorrect comment in ipa_endpoint_init_aggr(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20  net: ipa: rename a GSI error code  (Alex Elder)
The CHANNEL_NOT_RUNNING error condition has been generalized, so rename it to be INCORRECT_CHANNEL_STATE. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-20  net: ipa: drop an unneeded transaction reference  (Alex Elder)
In gsi_channel_update(), a reference count is taken on the last completed transaction "to keep it from completing" before we give the event back to the hardware. Completion processing for that transaction (and any other "new" ones) will not occur until after this function returns, so there's no risk of it completing early, and thus no need to take and drop the additional transaction reference. Use local variables in the call to gsi_evt_ring_doorbell(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-19  net: ipa: don't proceed to out-of-bound write  (Jakub Kicinski)
GCC 12 seems upset that we check ipa_irq against the array bound but then proceed anyway:

    drivers/net/ipa/ipa_interrupt.c: In function ‘ipa_interrupt_add’:
    drivers/net/ipa/ipa_interrupt.c:196:27: warning: array subscript 30 is above array bounds of ‘void (*[30])(struct ipa *, enum ipa_irq_id)’ [-Warray-bounds]
      196 |         interrupt->handler[ipa_irq] = handler;
          |         ~~~~~~~~~~~~~~~~~~^~~~~~~~~
    drivers/net/ipa/ipa_interrupt.c:42:27: note: while referencing ‘handler’
       42 |         ipa_irq_handler_t handler[IPA_IRQ_COUNT];
          |                           ^~~~~~~

Reviewed-by: Alex Elder <elder@linaro.org> Link: https://lore.kernel.org/r/20220519004417.2109886-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
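The fix takes the shape of a defensive early return before the array write; this sketch is close to, but not necessarily identical to, the actual patch:

    void ipa_interrupt_add(struct ipa_interrupt *interrupt,
                           enum ipa_irq_id ipa_irq, ipa_irq_handler_t handler)
    {
            if (WARN_ON(ipa_irq >= IPA_IRQ_COUNT))
                    return;         /* don't proceed to the out-of-bounds write */

            interrupt->handler[ipa_irq] = handler;
            /* ... enable the interrupt ... */
    }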
2022-05-19  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
drivers/net/ethernet/mellanox/mlx5/core/main.c
  b33886971dbc ("net/mlx5: Initialize flow steering during driver probe")
  40379a0084c2 ("net/mlx5_fpga: Drop INNOVA TLS support")
  f2b41b32cde8 ("net/mlx5: Remove ipsec_ops function table")
https://lore.kernel.org/all/20220519040345.6yrjromcdistu7vh@sx1/
  16d42d313350 ("net/mlx5: Drain fw_reset when removing device")
  8324a02c342a ("net/mlx5: Add exit route when waiting for FW")
https://lore.kernel.org/all/20220519114119.060ce014@canb.auug.org.au/

tools/testing/selftests/net/mptcp/mptcp_join.sh
  e274f7154008 ("selftests: mptcp: add subflow limits test-cases")
  b6e074e171bc ("selftests: mptcp: add infinite map testcase")
  5ac1d2d63451 ("selftests: mptcp: Add tests for userspace PM type")
https://lore.kernel.org/all/20220516111918.366d747f@canb.auug.org.au/

net/mptcp/options.c
  ba2c89e0ea74 ("mptcp: fix checksum byte order")
  1e39e5a32ad7 ("mptcp: infinite mapping sending")
  ea66758c1795 ("tcp: allow MPTCP to update the announced window")
https://lore.kernel.org/all/20220519115146.751c3a37@canb.auug.org.au/

net/mptcp/pm.c
  95d686517884 ("mptcp: fix subflow accounting on close")
  4d25247d3ae4 ("mptcp: bypass in-kernel PM restrictions for non-kernel PMs")
https://lore.kernel.org/all/20220516111435.72f35dca@canb.auug.org.au/

net/mptcp/subflow.c
  ae66fb2ba6c3 ("mptcp: Do TCP fallback on early DSS checksum failure")
  0348c690ed37 ("mptcp: add the fallback check")
  f8d4bcacff3b ("mptcp: infinite mapping receiving")
https://lore.kernel.org/all/20220519115837.380bb8d4@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-05-13  net: ipa: get rid of a duplicate initialization  (Alex Elder)
In ipa_qmi_ready(), the "ipa" local variable is set when initialized, but then set again just before it's first used. One or the other is enough, so get rid of the first one. References: https://lore.kernel.org/lkml/200de1bd-0f01-c334-ca18-43eed783dfac@intel.com/ Reported-by: kernel test robot <lkp@intel.com> Fixes: 530f9216a953 ("soc: qcom: ipa: AP/modem communications") Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-05-13  net: ipa: record proper RX transaction count  (Alex Elder)
Each time we are notified that some number of transactions on an RX channel has completed, we record the number of bytes that have been transferred since the previous notification. We also track the number of transactions completed, but that is not currently being calculated correctly; we're currently counting the number of such notifications, but each notification can represent many transaction completions. Fix this. Fixes: 650d1603825d8 ("soc: qcom: ipa: the generic software interface") Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
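A hedged sketch of the corrected accounting (structure and field names are hypothetical); both statistics are accumulated over the completed transactions instead of counting one transaction per notification:

    u32 trans_count = 0;
    u32 byte_count = 0;

    /* Walk every transaction completed since the previous notification */
    list_for_each_entry(trans, &completed_list, links) {
            trans_count++;
            byte_count += trans->len;
    }

    channel->byte_count += byte_count;
    channel->trans_count += trans_count;    /* was: channel->trans_count++ */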
2022-05-13  net: ipa: certain dropped packets aren't accounted for  (Alex Elder)
If an RX endpoint receives packets containing status headers, and a packet in the buffer is not dropped, ipa_endpoint_skb_copy() is responsible for wrapping the packet data in an SKB and forwarding it to ipa_modem_skb_rx() for further processing. If ipa_endpoint_skb_copy() gets a null pointer from build_skb(), it just returns early. But in the process it doesn't record that as a dropped packet in the network device statistics. Instead, call ipa_modem_skb_rx() whether or not the SKB pointer is NULL; that function ensures the statistics are properly updated. Fixes: 1b65bbcc9a710 ("net: ipa: skip SKB copy if no netdev") Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
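A sketch of the fix as described; the helper names follow the commit text, and the SKB setup details are assumptions:

    skb = build_skb(data, frag_size);
    if (skb) {
            skb_reserve(skb, NET_SKB_PAD);  /* assumed headroom handling */
            skb_put(skb, len);
    }

    /* Pass the SKB along even when it's NULL, so ipa_modem_skb_rx() can
     * update rx_packets/rx_bytes or rx_dropped as appropriate.
     */
    ipa_modem_skb_rx(endpoint->netdev, skb);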
2022-05-05  net: switch to netif_napi_add_tx()  (Jakub Kicinski)
Switch net callers to the new API not requiring the NAPI_POLL_WEIGHT argument. Acked-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Alex Elder <elder@linaro.org> Acked-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Acked-by: Alexandra Winter <wintera@linux.ibm.com> Link: https://lore.kernel.org/r/20220504163725.550782-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
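For a GSI TX channel the conversion looks like the following; the poll function and napi field names are taken from context and may not match the driver exactly:

    /* Before: generic registration with an explicit poll weight */
    netif_napi_add(dev, &channel->napi, gsi_channel_poll, NAPI_POLL_WEIGHT);

    /* After: TX-specific helper, no weight argument */
    netif_napi_add_tx(dev, &channel->napi, gsi_channel_poll);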
2022-04-25  net: ipa: compute proper aggregation limit  (Alex Elder)
The aggregation byte limit for an endpoint is currently computed based on the endpoint's receive buffer size. However, some bytes at the front of each receive buffer are reserved on the assumption that--as with SKBs--it might be useful to insert data (such as headers) before what lands in the buffer. The aggregation byte limit currently doesn't take into account that reserved space, and as a result, aggregation could require space past that which is available in the buffer. Fix this by reducing the size used to compute the aggregation byte limit by the NET_SKB_PAD offset reserved for each receive buffer. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
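A sketch of the idea, not the literal patch; buffer_size and the helper name stand in for the driver's values, while NET_SKB_PAD is the standard kernel definition:

    /* Compute the aggregation byte limit from the space actually available
     * for received data: the buffer size minus the NET_SKB_PAD headroom
     * reserved at the front of each receive buffer.
     */
    u32 limit_kb = ipa_aggr_size_kb(buffer_size - NET_SKB_PAD);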
2022-03-11  net: ipa: use struct_size() for the interconnect array  (Alex Elder)
In review for commit 8ee7ec4890e2b ("net: ipa: embed interconnect array in the power structure"), Jakub Kicinski suggested that a follow-up patch use struct_size() when computing the size of the IPA power structure, which ends with a flexible array member. Do that. Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Alex Elder <elder@linaro.org> Link: https://lore.kernel.org/r/20220311162423.872645-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
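The entries below describe embedding the interconnect array in the power structure; combined with this change, the result is the usual flexible-array-plus-struct_size() idiom. A sketch with assumed field names (struct_size() is from <linux/overflow.h>):

    struct ipa_power {
            struct device *dev;
            /* ... */
            u32 interconnect_count;                 /* assumed field */
            struct icc_bulk_data interconnect[];    /* flexible array member */
    };

    power = kzalloc(struct_size(power, interconnect, interconnect_count),
                    GFP_KERNEL);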
2022-03-10  net: ipa: use IPA power device pointer  (Alex Elder)
The ipa_power structure contains a copy of the IPA device pointer, so there's no need to pass it to ipa_interconnect_init(). We can also use that pointer for an error message in ipa_power_enable(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10  net: ipa: embed interconnect array in the power structure  (Alex Elder)
Rather than allocating the interconnect array dynamically, represent the interconnects with a variable-length array at the end of the ipa_power structure. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10  net: ipa: use bulk interconnect initialization  (Alex Elder)
The previous patch used bulk interconnect operations to initialize IPA interconnects one at a time. This rearranges things to use the bulk interfaces as intended--on all interconnects together. As a result ipa_interconnect_init_one() and ipa_interconnect_exit_one() are no longer needed. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10  net: ipa: use bulk operations to set up interconnects  (Alex Elder)
Use of_icc_bulk_get(), icc_bulk_put(), icc_bulk_set_bw(), icc_bulk_enable(), and icc_bulk_disable() to initialize individual IPA interconnects. Those functions already log messages in the event of error, so we don't need to. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
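A sketch of bulk initialization with the interconnect framework's bulk API; the ipa_power fields, and the assumption that each entry's name and bandwidth values were filled in beforehand, are illustrative:

    static int ipa_interconnect_init(struct ipa_power *power, struct device *dev)
    {
            struct icc_bulk_data *interconnect = power->interconnect;
            u32 count = power->interconnect_count;
            int ret;

            /* Each entry's name must already be set; this looks up the paths */
            ret = of_icc_bulk_get(dev, count, interconnect);
            if (ret)
                    return ret;

            /* Set each path's bandwidth from its avg_bw/peak_bw values */
            ret = icc_bulk_set_bw(count, interconnect);
            if (ret)
                    icc_bulk_put(count, interconnect);

            return ret;
    }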
2022-03-10  net: ipa: use interconnect bulk enable/disable operations  (Alex Elder)
The power interconnect array is now an array of icc_bulk_data structures, which is what the interconnect bulk enable and disable functions require. Get rid of ipa_interconnect_enable() and ipa_interconnect_disable(), and just call icc_bulk_enable() and icc_bulk_disable() instead. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-10  net: ipa: use icc_enable() and icc_disable()  (Alex Elder)
The interconnect framework now provides the ability to enable and disable interconnects without having to change their recorded "enabled" bandwidth value. Use this mechanism, rather than setting the bandwidth values to zero and non-zero respectively to disable and enable the IPA interconnects. Disable each interconnect before setting its "enabled" average and peak bandwidth values. Thereafter, enable and disable interconnects when required rather than setting their bandwidths. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
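The difference in sketch form; average_bandwidth and peak_bandwidth stand in for the configured values:

    int ret;

    /* Old approach: "disable" by writing zero bandwidth, then "enable" by
     * writing the configured values back again.
     */
    ret = icc_set_bw(path, 0, 0);

    /* New approach: disable the path, record its bandwidth once, and from
     * then on just toggle it; icc_enable() reapplies the recorded values.
     */
    ret = icc_disable(path);
    if (!ret)
            ret = icc_set_bw(path, average_bandwidth, peak_bandwidth);
    /* ...later, when powering up or down... */
    ret = icc_enable(path);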
2022-03-10  net: ipa: kill struct ipa_interconnect  (Alex Elder)
The ipa_interconnect structure contains an icc_path pointer, plus an average and peak bandwidth value. Other than the interconnect name, this matches the icc_bulk_data structure exactly. Use the icc_bulk_data structure in place of the ipa_interconnect structure, and add an initialization of its name field. Then get rid of the now unnecessary ipa_interconnect structure definition. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-03  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
net/batman-adv/hard-interface.c
  commit 690bb6fb64f5 ("batman-adv: Request iflink once in batadv-on-batadv check")
  commit 6ee3c393eeb7 ("batman-adv: Demote batadv-on-batadv skip error message")
https://lore.kernel.org/all/20220302163049.101957-1-sw@simonwunderlich.de/

net/smc/af_smc.c
  commit 4d08b7b57ece ("net/smc: Fix cleanup when register ULP fails")
  commit 462791bbfa35 ("net/smc: add sysctl interface for SMC")
https://lore.kernel.org/all/20220302112209.355def40@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-02  net: ipa: add an interconnect dependency  (Alex Elder)
In order to function, the IPA driver very clearly requires the interconnect framework to be enabled in the kernel configuration. State that dependency in the Kconfig file. This became a problem when CONFIG_COMPILE_TEST support was added. Non-Qualcomm platforms won't necessarily enable CONFIG_INTERCONNECT. Reported-by: kernel test robot <lkp@intel.com> Fixes: 38a4066f593c5 ("net: ipa: support COMPILE_TEST") Signed-off-by: Alex Elder <elder@linaro.org> Link: https://lore.kernel.org/r/20220301113440.257916-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-28  net: ipa: fix a build dependency  (Alex Elder)
An IPA build problem arose in the linux-next tree the other day. The problem is that a recent commit adds a new dependency on some code, and the Kconfig file for IPA doesn't reflect that dependency. As a result, some configurations can fail to build (particularly when COMPILE_TEST is enabled). The recent patch adds calls to qmp_get(), qmp_put(), and qmp_send(), and those are built based on the QCOM_AOSS_QMP config option. If that symbol is not defined, stubs are defined, so we just need to ensure QCOM_AOSS_QMP is compatible with QCOM_IPA, or it's not defined. Reported-by: Randy Dunlap <rdunlap@infradead.org> Fixes: 34a081761e4e3 ("net: ipa: request IPA register values be retained") Signed-off-by: Alex Elder <elder@linaro.org> Tested-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04  net: ipa: determine replenish doorbell differently  (Alex Elder)
Rather than tracking the number of receive buffer transactions that have been submitted without a doorbell, just track the total number of transactions that have been issued. Then ring the doorbell when that number modulo the replenish batch size is 0. The effect is roughly the same, but the new count is slightly more interesting, and this approach will someday allow the replenish batch size to be tuned at runtime. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
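A hedged sketch of the modulo-based doorbell decision; the counter and constant names are assumptions:

    /* Ring the doorbell once every IPA_REPLENISH_BATCH transactions, based
     * on a running total of all replenish transactions ever issued.
     */
    bool doorbell;

    doorbell = !(++endpoint->replenish_count % IPA_REPLENISH_BATCH);
    gsi_trans_commit(trans, doorbell);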
2022-02-04  net: ipa: replenish after delivering payload  (Alex Elder)
Replenishing is now solely driven by whether transactions are available for a channel, and it doesn't really matter whether we replenish before or after we deliver received packets to the network stack. Replenishing before delivering the payload adds a little latency. Eliminate that by requesting a replenish after the payload is delivered. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04  net: ipa: kill replenish_backlog  (Alex Elder)
We no longer use the replenish_backlog atomic variable to decide when we've got work to do providing receive buffers to hardware. Basically, we try to keep the hardware as full as possible, all the time. We keep supplying buffers until the hardware has no more space for them. As a result, we can get rid of the replenish_backlog field and the atomic operations performed on it. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04  net: ipa: introduce gsi_channel_trans_idle()  (Alex Elder)
Create a new function that returns true if all transactions for a channel are available for use. Use it in ipa_endpoint_replenish_enable() to see whether to start replenishing, and in ipa_endpoint_replenish() to determine whether it's necessary after a failure to schedule delayed work to ensure a future replenish attempt occurs. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04  net: ipa: don't use replenish_backlog  (Alex Elder)
Rather than determining when to stop replenishing using the replenish backlog, just stop when we have exhausted all available transactions. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04  net: ipa: allocate transaction in replenish loop  (Alex Elder)
When replenishing, have ipa_endpoint_replenish() allocate a transaction, and pass that to ipa_endpoint_replenish_one() to fill. Then, if that produces no error, commit the transaction within the replenish loop as well. In this way we can distinguish between transaction failures and buffer allocation/mapping failures. Failure to allocate a transaction simply means the hardware already has as many receive buffers as it can hold. In that case we can break out of the replenish loop because there's nothing more to do. If we fail to allocate or map pages for the receive buffer, just try again later. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
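A sketch of the reworked loop; the helper names follow the commit text, and the retry path at the end is an assumption:

    static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint)
    {
            struct gsi_trans *trans;

            /* A failed allocation means the hardware already holds as many
             * receive buffers as it can, so there is nothing more to do.
             */
            while ((trans = ipa_endpoint_trans_alloc(endpoint, 1))) {
                    if (ipa_endpoint_replenish_one(endpoint, trans))
                            goto try_again_later;   /* page alloc/map failed */

                    gsi_trans_commit(trans, false); /* doorbell choice elided */
            }

            return;

    try_again_later:
            gsi_trans_free(trans);
            /* The failure is likely transient; try again a little later */
            schedule_delayed_work(&endpoint->replenish_work,
                                  msecs_to_jiffies(1));
    }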
2022-02-04  net: ipa: decide on doorbell in replenish loop  (Alex Elder)
Decide whether the doorbell should be signaled when committing a replenish transaction in the main replenish loop, rather than in ipa_endpoint_replenish_one(). This is a step to facilitate the next patch. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-04  net: ipa: increment backlog in replenish caller  (Alex Elder)
Three spots call ipa_endpoint_replenish(), and just one of those requests that the backlog be incremented after completing the replenish operation. Instead, have the caller increment the backlog, and get rid of the add_one argument to ipa_endpoint_replenish(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>