path: root/crypto

2025-03-21 crypto: acomp - Add support for folios (Herbert Xu)

For many users, it's easier to supply a folio rather than an SG list since they already have them. Add support for folios to the acomp interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

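A minimal sketch of what folio-based usage could look like; the setter names and their (req, folio, offset, length) argument order are assumed to mirror the existing SG-based acomp_request_set_params():

    /* Sketch only: acomp_request_set_{src,dst}_folio() are assumed to
     * mirror the SG setters; async completion handling is omitted. */
    static int compress_folio(struct crypto_acomp *tfm,
                              struct folio *src, unsigned int slen,
                              struct folio *dst, unsigned int dlen)
    {
            struct acomp_req *req;
            int err;

            req = acomp_request_alloc(tfm);
            if (!req)
                    return -ENOMEM;

            acomp_request_set_src_folio(req, src, 0, slen);
            acomp_request_set_dst_folio(req, dst, 0, dlen);
            err = crypto_acomp_compress(req);

            acomp_request_free(req);
            return err;
    }
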
2025-03-21 crypto: acomp - Add async nondma fallback (Herbert Xu)

Add support for passing non-DMA virtual addresses to async drivers by passing them along to the fallback software algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-21 crypto: acomp - Add ACOMP_REQUEST_ALLOC and acomp_request_alloc_extra (Herbert Xu)

Add ACOMP_REQUEST_ALLOC, which is a wrapper around acomp_request_alloc that falls back to a synchronous stack request if the allocation fails. Also add ACOMP_REQUEST_ON_STACK, which stores the request on the stack only. The request should be freed with acomp_request_free. Finally, add acomp_request_alloc_extra, which gives the user extra memory to use in conjunction with the request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

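A sketch of the on-stack variant for paths that must not fail on allocation; the macro's (name, tfm) argument order is assumed to follow the other *_REQUEST_ON_STACK helpers:

    /* Sketch: assumed to work like SYNC_SKCIPHER_REQUEST_ON_STACK. */
    {
            ACOMP_REQUEST_ON_STACK(req, tfm);
            int err;

            acomp_request_set_params(req, src_sg, dst_sg, slen, dlen);
            err = crypto_acomp_compress(req);
            acomp_request_free(req);   /* safe on stack requests too */
    }
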
2025-03-21 crypto: scomp - Add chaining and virtual address support (Herbert Xu)

Add chaining and virtual address support to all scomp algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-21 crypto: scomp - Remove support for some non-trivial SG lists (Herbert Xu)

As the only user of acomp/scomp uses a trivial single-page SG list, remove support for everything else in preparation for the addition of virtual address support. However, keep support for non-trivial source SG lists, as that user is currently jumping through hoops in order to linearise the source data. Limit the source SG linearisation buffer to a single page, as that user never goes over that. The only other potential user (IPComp) is also unlikely to exceed that, and it can easily do its own linearisation if necessary. Also keep the destination SG linearisation for IPComp.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-21 crypto: hash - Use nth_page instead of doing it by hand (Herbert Xu)

Use nth_page instead of adding n to the page pointer.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

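The difference in a nutshell (illustrative):

    /* Plain pointer arithmetic assumes a virtually contiguous struct
     * page array, which not all memory models guarantee: */
    page = walk_page + (offset >> PAGE_SHIFT);
    /* nth_page() is correct regardless of the memory model: */
    page = nth_page(walk_page, offset >> PAGE_SHIFT);
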
2025-03-21 crypto: hash - Fix test underflow in shash_ahash_digest (Herbert Xu)

The test on PAGE_SIZE - offset in shash_ahash_digest can underflow, leading to execution of the fast path even if the data cannot be mapped into a single page. Fix this by splitting the test into four cases:

1) nbytes > sg->length: more than one SG entry, slow path.
2) !IS_ENABLED(CONFIG_HIGHMEM): fast path.
3) nbytes > (unsigned int)PAGE_SIZE - offset: two highmem pages, slow path.
4) Highmem fast path.

Fixes: 5f7082ed4f48 ("crypto: hash - Export shash through hash")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

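An illustrative sketch of the split test (not the literal patch; slow_path()/fast_path() stand in for the two branches):

    if (nbytes > sg->length)
            return slow_path();   /* 1: spans multiple SG entries */
    if (!IS_ENABLED(CONFIG_HIGHMEM))
            return fast_path();   /* 2: direct map, no kmap limit */
    if (nbytes > (unsigned int)PAGE_SIZE - offset)
            return slow_path();   /* 3: crosses two highmem pages */
    return fast_path();           /* 4: highmem, fits in one page */
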
2025-03-21 crypto: krb5 - Use SG miter instead of doing it by hand (Herbert Xu)

The function crypto_shash_update_sg iterates through an SG by hand. It fails to handle corner cases such as SG entries longer than a page. Fix this by using the SG iterator.

Fixes: 348f5669d1f6 ("crypto/krb5: Implement the Kerberos5 rfc3961 get_mic and verify_mic")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

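For reference, the SG mapping iterator pattern looks roughly like this (an illustrative use against a shash desc):

    struct sg_mapping_iter miter;
    int err = 0;

    sg_miter_start(&miter, sg, sg_nents(sg), SG_MITER_FROM_SG);
    while (sg_miter_next(&miter)) {
            /* miter.addr/miter.length describe one mapped chunk */
            err = crypto_shash_update(desc, miter.addr, miter.length);
            if (err)
                    break;
    }
    sg_miter_stop(&miter);
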
2025-03-21 crypto: scatterwalk - simplify map and unmap calling convention (Eric Biggers)

Now that the address returned by scatterwalk_map() is always being stored into the same struct scatter_walk that is passed in, make scatterwalk_map() do so itself and return void. Similarly, now that scatterwalk_unmap() is always being passed the address field within a struct scatter_walk, make scatterwalk_unmap() take a pointer to struct scatter_walk instead of the address directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

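The resulting convention, sketched (the walk's address field is the one described in the "Change scatterwalk_next calling convention" entry below):

    scatterwalk_map(&walk);           /* stores the mapping in the walk */
    crypto_xor(buf, walk.addr, n);    /* use the stored address */
    scatterwalk_unmap(&walk);         /* takes the walk, not the address */
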
2025-03-20 crypto,fs: Separate out hkdf_extract() and hkdf_expand() (Hannes Reinecke)

Separate out the HKDF functions into a separate module to make them available to other callers. And add a testsuite to the module with test vectors from RFC 5869 (and additional vectors for SHA384 and SHA512) to ensure the integrity of the algorithm.

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Acked-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Keith Busch <kbusch@kernel.org>

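A sketch of the split extract/expand flow from RFC 5869; it assumes the HMAC shash is keyed with the PRK between the two steps and that the helpers take the buffer/length pairs in the order shown:

    u8 prk[SHA256_DIGEST_SIZE];
    int err;

    /* PRK = HMAC-Hash(salt, IKM) */
    err = hkdf_extract(hmac_tfm, ikm, ikmlen, salt, saltlen, prk);
    if (err)
            return err;
    /* key the HMAC with the PRK, then expand to the output length */
    err = crypto_shash_setkey(hmac_tfm, prk, sizeof(prk));
    if (err)
            return err;
    return hkdf_expand(hmac_tfm, info, infolen, okm, okmlen);
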
2025-03-15 crypto: testmgr - Remove NULL dst acomp tests (Herbert Xu)

In preparation for the partial removal of NULL dst acomp support, remove the tests for them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15 crypto: acomp - Add request chaining and virtual addresses (Herbert Xu)

This adds request chaining and virtual address support to the acomp interface. It is identical to the ahash interface, except that a new flag CRYPTO_ACOMP_REQ_NONDMA has been added to indicate that the virtual addresses are not suitable for DMA. This is because all existing and potential acomp users can provide memory that is suitable for DMA so there is no need for a fall-back copy path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

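Loosely sketched; only the CRYPTO_ACOMP_REQ_NONDMA flag is named above, so the setter helpers here are assumptions:

    /* Assumed helpers: a virtual-address setter that raises
     * CRYPTO_ACOMP_REQ_NONDMA so async drivers use the SW fallback. */
    acomp_request_set_src_nondma(req, vbuf, vlen);   /* e.g. vmalloc'd */
    acomp_request_set_dst_dma(req, dbuf, dlen);      /* DMA-capable */
    err = crypto_acomp_compress(req);
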
2025-03-15 crypto: scomp - Disable BH when taking per-cpu spin lock (Herbert Xu)

Disable BH when taking per-cpu spin locks. This isn't an issue right now because zswap, the only user, calls scomp from process context. However, if scomp is called from softirq context the spin lock may deadlock.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

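The pattern in question (struct and field names assumed):

    /* A softirq arriving on this CPU could otherwise spin forever on a
     * lock its own CPU already holds. */
    struct scomp_scratch *scratch = raw_cpu_ptr(&scomp_scratch);

    spin_lock_bh(&scratch->lock);     /* was spin_lock() */
    /* ... use the per-cpu scratch buffers ... */
    spin_unlock_bh(&scratch->lock);
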
2025-03-15 crypto: acomp - Move stream management into scomp layer (Herbert Xu)

Rather than allocating the stream memory in the request object, move it into a per-cpu buffer managed by scomp. This relieves the user of having to manage large request objects and set up their own per-cpu buffers to do so.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15 crypto: scomp - Remove tfm argument from alloc/free_ctx (Herbert Xu)

The tfm argument is unused, and meaningless in any case since the stream object is identical across all transforms of a given algorithm. Remove it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15 crypto: api - Add cra_type->destroy hook (Herbert Xu)

Add a cra_type->destroy hook so that resources can be freed after the last user of a registered algorithm is gone.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15 crypto: skcipher - Make skcipher_walk src.virt.addr const (Herbert Xu)

Mark the src.virt.addr field in struct skcipher_walk as a pointer to const data. This guarantees that the user won't modify the data through it; modifications must go through dst.virt.addr so that flushing is done when necessary.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

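A typical walk loop under the new rule (my_cipher_crypt() is a hypothetical cipher helper):

    err = skcipher_walk_virt(&walk, req, false);
    while (walk.nbytes) {
            const u8 *src = walk.src.virt.addr;   /* read-only now */
            u8 *dst = walk.dst.virt.addr;         /* writes go here */

            my_cipher_crypt(ctx, dst, src, walk.nbytes);
            err = skcipher_walk_done(&walk, 0);   /* 0 bytes left over */
    }
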
2025-03-15 crypto: skcipher - Eliminate duplicate virt.addr field (Herbert Xu)

Reuse the addr field from struct scatter_walk for skcipher_walk. Keep the existing virt.addr fields but make them const for the user to access the mapped address.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15 crypto: scatterwalk - Add memcpy_sglist (Herbert Xu)

Add memcpy_sglist, which copies one SG list to another.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15 crypto: scatterwalk - Change scatterwalk_next calling convention (Herbert Xu)

Rather than returning the address and storing the length into an argument pointer, add an address field to the walk struct and use that to store the address. The length is returned directly. Change the done functions to use this stored address instead of getting it from the caller. Split the address into two using a union. The user should only access the const version so that it is never changed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

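The new iteration shape, sketched (consume() is a hypothetical user):

    while (total) {
            unsigned int len = scatterwalk_next(&walk, total);

            consume(walk.addr, len);            /* const mapped address */
            scatterwalk_done_src(&walk, len);   /* unmap and advance */
            total -= len;
    }
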
2025-03-15 async_xor: Remove unused 'async_xor_val' (Dr. David Alan Gilbert)

async_xor_val has been unused since commit a7c224a820c3 ("md/raid5: convert to new xor compution interface"). Remove it.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-08 crypto: skcipher - fix mismatch between mapping and unmapping order (Eric Biggers)

Local kunmaps have to be unmapped in the opposite order from which they were mapped. My recent change flipped the unmap order in the SKCIPHER_WALK_DIFF case. Adjust the mapping side to match. This fixes a WARN_ON_ONCE that was triggered when running the crypto self-tests on a 32-bit kernel with CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y.

Fixes: 95dbd711b1d8 ("crypto: skcipher - use the new scatterwalk functions")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

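The constraint in miniature:

    u8 *a = kmap_local_page(page_a);
    u8 *b = kmap_local_page(page_b);
    /* ... */
    kunmap_local(b);    /* local kmaps nest: unmap innermost first */
    kunmap_local(a);
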
2025-03-08 crypto: Kconfig - Select LIB generic option (Herbert Xu)

Select the generic LIB options if the Crypto API algorithm is enabled. Otherwise this may lead to a build failure, as the Crypto API algorithm always uses the generic implementation.

Fixes: 17ec3e71ba79 ("crypto: lib/Kconfig - Hide arch options from user")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202503022113.79uEtUuy-lkp@intel.com/
Closes: https://lore.kernel.org/oe-kbuild-all/202503022115.9OOyDR5A-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-08 crypto: acomp - Remove acomp request flags (Herbert Xu)

The acomp request flags field duplicates the base request flags and is confusing. Remove it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-08 crypto: lzo - Fix compression buffer overrun (Herbert Xu)

Unlike the decompression code, the compression code in LZO never checked for output overruns. It instead assumes that the caller always provides enough buffer space, disregarding the buffer length provided by the caller. Add a safe compression interface that checks for the end of buffer before each write. Use the safe interface in crypto/lzo.

Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

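What the safe interface buys the caller, sketched (the name follows the existing lzo1x_decompress_safe(); treat the exact prototype as assumed):

    size_t dlen = dst_size;   /* real space available, not an estimate */
    int err;

    err = lzo1x_1_compress_safe(src, slen, dst, &dlen, wrkmem);
    if (err != LZO_E_OK)      /* fails instead of overrunning */
            return -EINVAL;
    /* dlen now holds the compressed length */
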
2025-03-08 crypto: api - Move struct crypto_type into internal.h (Herbert Xu)

Move the definition of struct crypto_type into internal.h, as it is only used by API implementors and not algorithm implementors.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-02 crypto/krb5: Implement crypto self-testing (David Howells)

Implement self-testing infrastructure to test the pseudo-random function, key derivation, encryption and checksumming. Add the testing data from rfc8009 to test AES + HMAC-SHA2. Add the testing data from rfc6803 to test Camellia. Note some encryption test vectors here are incomplete, lacking the key usage number needed to derive Ke and Ki, and there are errata for this: https://www.rfc-editor.org/errata_search.php?rfc=6803

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Implement the Camellia enctypes from rfc6803 (David Howells)

Implement the camellia128-cts-cmac and camellia256-cts-cmac enctypes from rfc6803. Note that the test vectors in rfc6803 for encryption are incomplete, lacking the key usage number needed to derive Ke and Ki, and there are errata for this: https://www.rfc-editor.org/errata_search.php?rfc=6803

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Implement the AES enctypes from rfc8009 (David Howells)

Implement the aes128-cts-hmac-sha256-128 and aes256-cts-hmac-sha384-192 enctypes from rfc8009, overriding the rfc3961 kerberos 5 simplified crypto scheme.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Implement the AES enctypes from rfc3962 (David Howells)

Implement the aes128-cts-hmac-sha1-96 and aes256-cts-hmac-sha1-96 enctypes from rfc3962, using the rfc3961 kerberos 5 simplified crypto scheme.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Implement the Kerberos5 rfc3961 get_mic and verify_mic (David Howells)

Add functions that sign and verify a message according to rfc3961 sec 5.4: in the sign phase, use Kc to generate a checksum and insert it into the MIC field in the skbuff; in the verify phase, checksum the data and compare it to the MIC.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Implement the Kerberos5 rfc3961 encrypt and decrypt functions (David Howells)

Add functions that encrypt and decrypt a message according to rfc3961 sec 5.3: in the encryption phase, use Ki to checksum the data to be secured and Ke to encrypt it; in the decryption phase, decrypt with Ke and verify the checksum with Ki.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Provide RFC3961 setkey packaging functions (David Howells)

Provide functions to derive keys according to RFC3961 (or load the derived keys for the selftester where only derived keys are available) and to package them up appropriately for passing to a krb5enc AEAD setkey or a hash setkey function.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Implement the Kerberos5 rfc3961 key derivation (David Howells)

Implement the simplified crypto profile for Kerberos 5 rfc3961, with the pseudo-random function, PRF(), from section 5.3 and the key derivation function, DK(), from section 5.1.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Provide infrastructure and key derivation (David Howells)

Provide key derivation interface functions and a helper to implement the PRF+ function from rfc4402.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Add an API to perform requests (David Howells)

Add an API by which users of the krb5 crypto library can perform crypto requests, such as encrypt, decrypt, get_mic and verify_mic. These functions take the previously prepared crypto objects to work on.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Add an API to alloc and prepare a crypto object (David Howells)

Add an API by which users of the krb5 crypto library can get an allocated and keyed crypto object. For encryption-mode operation, an AEAD object is returned; for checksum-mode operation, a synchronous hash object is returned.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

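Roughly how the pieces fit together; the function names appear in this series, but the exact parameter lists below are assumptions:

    const struct krb5_enctype *krb5;
    struct crypto_aead *aead;

    /* look up the encoding type, then get a keyed AEAD for it */
    krb5 = crypto_krb5_find_enctype(KRB5_ENCTYPE_AES256_CTS_HMAC_SHA384_192);
    if (!krb5)
            return -ENOPKG;
    aead = crypto_krb5_prepare_encryption(krb5, &transport_key, usage,
                                          GFP_KERNEL);
    if (IS_ERR(aead))
            return PTR_ERR(aead);
    /* ... pass 'aead' to the request-performing API ... */
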
2025-03-02 crypto/krb5: Add an API to query the layout of the crypto section (David Howells)

Provide some functions to allow the caller to find out about the layout of the crypto section:

(1) Calculate, for a given size of data, how big a buffer will be required to hold it and where the data will be within it.

(2) Calculate, for an amount of buffer, what's the maximum size of data that will fit therein, and where it will start.

(3) Determine where the data will be in a received message.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

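A sketch of sizing a transmit buffer with helper (1) above; the helper and mode names here are assumptions:

    size_t offset, buf_len;

    buf_len = crypto_krb5_how_much_buffer(krb5, KRB5_ENCRYPT_MODE,
                                          data_len, &offset);
    /* allocate buf_len bytes and place the payload at 'offset' */
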
2025-03-02 crypto/krb5: Implement Kerberos crypto core (David Howells)

Provide core structures, an encoding-type registry and basic module and config bits for a generic Kerberos crypto library.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto/krb5: Test manager data (David Howells)

Add Kerberos crypto tests to the test manager database. This covers:

  camellia128-cts-cmac samples from RFC6803
  camellia256-cts-cmac samples from RFC6803
  aes128-cts-hmac-sha256-128 samples from RFC8009
  aes256-cts-hmac-sha384-192 samples from RFC8009

but not:

  aes128-cts-hmac-sha1-96
  aes256-cts-hmac-sha1-96

as the test samples in RFC3962 don't seem to be suitable.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

2025-03-02 crypto: Add 'krb5enc' hash and cipher AEAD algorithm (David Howells)

Add an AEAD template that does hash-then-cipher (unlike authenc, which does cipher-then-hash). This is required for a number of Kerberos 5 encoding types.

[!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of ciphers, one non-CTS and one CTS, using the former to do all the aligned blocks and the latter to do the last two blocks if they aren't also aligned. It may be necessary to do this here too for performance reasons - but there are considerations both ways:

(1) Firstly, there is an optimised assembly version of cts(cbc(aes)) on x86_64 that should be used instead of having two ciphers.

(2) Secondly, none of the hardware offload drivers seem to offer CTS support (Intel QAT does not, for instance).

However, I don't know if it's possible to query the crypto API to find out whether there's an optimised CTS algorithm available.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org

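Instantiation, sketched; the instance string follows the usual template(inner,inner) convention, and this particular pairing is assumed to match the rfc6803 camellia enctypes:

    struct crypto_aead *aead;

    aead = crypto_alloc_aead("krb5enc(cmac(camellia),cts(cbc(camellia)))",
                             0, 0);
    if (IS_ERR(aead))
            return PTR_ERR(aead);
    /* hash-then-encrypt: setkey receives the packaged Ki/Ke pair from
     * the RFC3961 setkey packaging helpers */
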
2025-03-02 crypto: lib/Kconfig - Hide arch options from user (Herbert Xu)

The ARCH_MAY_HAVE patch missed arm64, mips and s390. But it may also lead to arch options being enabled but ineffective because of modular/built-in conflicts. As wireguard, the primary user of all these options, is selecting the arch options anyway, make the same selections at the lib/crypto option level and hide the arch options from the user. Instead of selecting them centrally from lib/crypto, simply set the default of each arch option, as suggested by Eric Biggers. Change the Crypto API generic algorithms to select the top-level lib/crypto options instead of the generic one, as otherwise there is no way to enable the arch options (Eric Biggers). Introduce a set of INTERNAL options to work around dependency cycles on the CONFIG_CRYPTO symbol.

Fixes: 1047e21aecdf ("crypto: lib/Kconfig - Fix lib built-in failure when arch is modular")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Arnd Bergmann <arnd@kernel.org>
Closes: https://lore.kernel.org/oe-kbuild-all/202502232152.JC84YDLp-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-02 crypto: skcipher - Use restrict rather than hand-rolling accesses (Herbert Xu)

Rather than accessing 'alg' directly to avoid the aliasing issue which leads to unnecessary reloads, use the __restrict keyword to explicitly tell the compiler that there is no aliasing. This generates equivalent if not superior code on x86 with gcc 12. Note that in skcipher_walk_virt the alg assignment is moved after might_sleep_if because that function is a compiler barrier and forces a reload.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

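The idea in miniature (hypothetical structures):

    /* Without __restrict the compiler must assume the store through
     * 'walk' might alias 'alg' and reload alg->chunksize afterwards. */
    static unsigned int advance(struct walk_state *__restrict walk,
                                const struct alg_props *__restrict alg)
    {
            walk->offset += alg->chunksize;   /* store */
            return alg->chunksize;            /* no reload required */
    }
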
2025-03-02 crypto: scatterwalk - don't split at page boundaries when !HIGHMEM (Eric Biggers)

When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not actually map anything, and the address it returns is just the address from the kernel's direct map, where each sg entry's data is virtually contiguous. To improve performance, stop unnecessarily clamping data segments to page boundaries in this case.

For now, still limit segments to PAGE_SIZE. This is needed to prevent preemption from being disabled for too long when SIMD is used, and to support the alignmask case which still uses a page-sized bounce buffer.

Even so, this change still helps a lot in cases where messages cross a page boundary. For example, testing IPsec with AES-GCM on x86_64, the messages are 1424 bytes, which is less than PAGE_SIZE, but on the Rx side over a third cross a page boundary. These ended up being processed in three parts, with the middle part going through skcipher_next_slow, which uses a 16-byte bounce buffer. That was causing a significant amount of overhead which unnecessarily reduced the performance benefit of the new x86_64 AES-GCM assembly code. This change solves the problem; all these messages now get passed to the assembly code in one part.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-02 crypto: scatterwalk - remove obsolete functions (Eric Biggers)

Remove various functions that are no longer used.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-02 crypto: skcipher - use the new scatterwalk functions (Eric Biggers)

Convert skcipher_walk to use the new scatterwalk functions. This includes a few changes to exactly where the different parts of the iteration happen. For example, the dcache flush that previously happened in scatterwalk_done() now happens in scatterwalk_dst_done() or in memcpy_to_scatterwalk(). Advancing to the next sg entry now happens just-in-time in scatterwalk_clamp() instead of in scatterwalk_done().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-02 crypto: aegis - use the new scatterwalk functions (Eric Biggers)

Use scatterwalk_next(), which consolidates scatterwalk_clamp() and scatterwalk_map(), and use scatterwalk_done_src(), which consolidates scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-02 crypto: skcipher - use scatterwalk_start_at_pos() (Eric Biggers)

In skcipher_walk_aead_common(), use scatterwalk_start_at_pos() instead of a sequence of scatterwalk_start(), scatterwalk_copychunks(..., 2), and scatterwalk_done(). This is simpler and faster.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-02 crypto: scatterwalk - add new functions for copying data (Eric Biggers)

Add memcpy_from_sglist() and memcpy_to_sglist(), which are more readable versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1 respectively. They follow the same argument order as memcpy_from_page() and memcpy_to_page() from <linux/highmem.h>. Note that in the case of memcpy_from_sglist(), this also happens to be the same argument order that scatterwalk_map_and_copy() uses.

The new code is also faster, mainly because it builds the scatter_walk directly without creating a temporary scatterlist. E.g., a 20% performance improvement is seen for copying the AES-GCM auth tag.

Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist() and memcpy_to_sglist(). Callers of scatterwalk_map_and_copy() should be updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but there are a lot of them so they aren't all being updated right away.

Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk(), which are similar but operate on a scatter_walk instead of a scatterlist. These will replace scatterwalk_copychunks() with the 'out' argument 0 and 1 respectively. Their behavior differs slightly from scatterwalk_copychunks() in that they automatically take care of flushing the dcache when needed, making them easier to use. scatterwalk_copychunks() itself is left unchanged for now. It will be removed after its callers are updated to use other functions instead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

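The new helpers at a glance; per the description, the argument order follows memcpy_from_page()/memcpy_to_page():

    /* scatterlist -> linear buffer */
    memcpy_from_sglist(buf, sg, offset, nbytes);
    /* linear buffer -> scatterlist */
    memcpy_to_sglist(sg, offset, buf, nbytes);
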
2025-03-02 crypto: scatterwalk - add new functions for skipping data (Eric Biggers)

Add scatterwalk_skip() to skip the given number of bytes in a scatter_walk. Previously, support for skipping was provided through scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was confusing and less efficient.

Also add scatterwalk_start_at_pos(), which starts a scatter_walk at the given position, equivalent to scatterwalk_start() + scatterwalk_skip(). This addresses another common need in a more streamlined way. Later patches will convert various users to use these functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

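The equivalence, sketched:

    /* start 'pos' bytes into the SG list ... */
    scatterwalk_start_at_pos(&walk, sg, pos);
    /* ... which is the same as: */
    scatterwalk_start(&walk, sg);
    scatterwalk_skip(&walk, pos);
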