path: root/include/crypto
Age  Commit message  Author
2025-04-23  crypto: x86/sha1 - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23  crypto: md5-generic - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23  crypto: riscv/ghash - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. As this was the last user relying on crypto/ghash.h for gf128mul.h, remove the unnecessary inclusion of gf128mul.h from crypto/ghash.h. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23  crypto: ghash-generic - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23  crypto: arm/blake2b - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Also remove the unnecessary SIMD fallback path. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23  crypto: blake2b-generic - Use API partial block handling  (Herbert Xu)
Use the Crypto API partial block handling. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23  crypto: shash - Handle partial blocks in API  (Herbert Xu)
Provide an option to handle the partial blocks in the shash API. Almost every hash algorithm has a block size and is only able to hash partial blocks on finalisation. Rather than duplicating the partial block handling many times, add this functionality to the shash API. It is optional (e.g., hmac never needs it, as it relies on the partial block handling of the underlying hash), and to enable it set the bit CRYPTO_AHASH_ALG_BLOCK_ONLY. The export format is always that of the underlying hash export, plus the partial block buffer, followed by a single byte for the partial block length. Set the bit CRYPTO_AHASH_ALG_FINAL_NONZERO to withhold an extra byte in the partial block. This will come in handy when this is extended to ahash, where hardware often can't deal with a zero-length final. It will also be used for algorithms requiring an extra block for finalisation (e.g., cmac). As an optimisation, set the bit CRYPTO_AHASH_ALG_FINUP_MAX if the algorithm wishes to get as much data as possible instead of just the last partial block. The descriptor will be zeroed after finalisation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
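For illustration, a minimal sketch of that export layout, assuming a hypothetical hash with a 32-byte core export and a 64-byte block (struct and size names are made up; the real sizes come from the underlying algorithm):

    #include <linux/types.h>

    #define EXAMPLE_CORE_EXPORT_SIZE 32     /* export size of the underlying hash */
    #define EXAMPLE_BLOCK_SIZE       64     /* block size of the underlying hash  */

    /* Export blob of a CRYPTO_AHASH_ALG_BLOCK_ONLY hash as described above. */
    struct example_block_only_export {
            u8 core[EXAMPLE_CORE_EXPORT_SIZE];      /* underlying hash export state */
            u8 buf[EXAMPLE_BLOCK_SIZE];             /* buffered partial block       */
            u8 buflen;                              /* bytes currently buffered     */
    };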
2025-04-23  crypto: engine - Realign struct crypto_engine to save 8 bytes  (Thorsten Blum)
Realign struct crypto_engine to reduce its size by 8 bytes. Total size is now 192 bytes, allowing it to fit within 3 cachelines instead of 4.

pahole output before:

    /* size: 200, cachelines: 4, members: 17 */
    /* sum members: 183, holes: 3, sum holes: 17 */
    /* paddings: 1, sum paddings: 4 */
    /* last cacheline: 8 bytes */

and after:

    /* size: 192, cachelines: 3, members: 17 */
    /* sum members: 183, holes: 2, sum holes: 9 */
    /* paddings: 1, sum paddings: 4 */

No functional changes intended.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
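As a generic illustration of the technique (not struct crypto_engine itself), grouping members by alignment removes the padding holes that pahole reports:

    #include <stdbool.h>
    #include <stdint.h>

    /* On a typical 64-bit ABI the first layout needs a 7-byte hole after
     * 'busy' and 4 bytes of tail padding; the second needs only 3 bytes
     * of tail padding. */
    struct before_realign {
            bool busy;              /* 1 byte + 7-byte hole  */
            void *priv;             /* 8 bytes               */
            uint32_t flags;         /* 4 bytes + 4-byte tail */
    };                              /* typically 24 bytes    */

    struct after_realign {
            void *priv;             /* 8 bytes               */
            uint32_t flags;         /* 4 bytes               */
            bool busy;              /* 1 byte + 3-byte tail  */
    };                              /* typically 16 bytes    */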
2025-04-17  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Cross-merge networking fixes after downstream PR (net-6.15-rc3). No conflicts.

Adjacent changes:

    tools/net/ynl/pyynl/ynl_gen_c.py
      4d07bbf2d456 ("tools: ynl-gen: don't declare loop iterator in place")
      7e8ba0c7de2b ("tools: ynl: don't use genlmsghdr in classic netlink")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-17  crypto: deflate - Make the acomp walk atomic  (Herbert Xu)
Add an atomic flag to the acomp walk and use that in deflate. Due to the use of a per-cpu context, it is impossible to sleep during the walk in deflate. Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202504151654.4c3b6393-lkp@intel.com Fixes: 08cabc7d3c86 ("crypto: deflate - Convert to acomp") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: poly1305 - remove rset and sset fields of poly1305_desc_ctx  (Eric Biggers)
These fields are no longer needed, so remove them. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: poly1305 - centralize the shash wrappers for arch code  (Eric Biggers)
Following the example of the crc32, crc32c, and chacha code, make the crypto subsystem register both generic and architecture-optimized poly1305 shash algorithms, both implemented on top of the appropriate library functions. This eliminates the need for every architecture to implement the same shash glue code. Note that the poly1305 shash requires that the key be prepended to the data, which differs from the library functions where the key is simply a parameter to poly1305_init(). Previously this was handled at a fairly low level, polluting the library code with shash-specific code. Reorganize things so that the shash code handles this quirk itself. Also, to register the architecture-optimized shashes only when architecture-optimized code is actually being used, add a function poly1305_is_arch_optimized() and make each arch implement it. Change each architecture's Poly1305 module_init function to arch_initcall so that the CPU feature detection is guaranteed to run before poly1305_is_arch_optimized() gets called by crypto/poly1305.c. (In cases where poly1305_is_arch_optimized() just returns true unconditionally, using arch_initcall is not strictly needed, but it's still good to be consistent across architectures.) Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
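For reference, a minimal sketch of the library-side calling convention mentioned above, where the key is simply a parameter to poly1305_init(); the wrapper function name is illustrative and the signatures are assumed to match <crypto/poly1305.h> at the time:

    #include <crypto/poly1305.h>
    #include <linux/types.h>

    /* Library usage: the key is passed to init, unlike the shash, which
     * expects the 32-byte key prepended to the data it hashes. */
    static void example_poly1305_mac(const u8 key[POLY1305_KEY_SIZE],
                                     const u8 *data, unsigned int len,
                                     u8 digest[POLY1305_DIGEST_SIZE])
    {
            struct poly1305_desc_ctx desc;

            poly1305_init(&desc, key);
            poly1305_update(&desc, data, len);
            poly1305_final(&desc, digest);
    }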
2025-04-16  crypto: sm3-base - Use sm3_init  (Herbert Xu)
Remove the duplicate init code and simply call sm3_init. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: lib/sm3 - Export generic block function  (Herbert Xu)
Export the generic block function so that it can be used by the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: hash - Update HASH_MAX_DESCSIZE comment  (Herbert Xu)
The biggest context is not sha3_generic (356), but sha-s390 (360). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: hash - Add HASH_REQUEST_ON_STACK  (Herbert Xu)
Allow any ahash to be used with a stack request, with optional dynamic allocation when async is needed. The intended usage is:

    HASH_REQUEST_ON_STACK(req, tfm);

    ...

    err = crypto_ahash_digest(req);

    /* The request cannot complete synchronously. */
    if (err == -EAGAIN) {
            /* This will not fail. */
            req = HASH_REQUEST_CLONE(req, gfp);

            /* Redo operation. */
            err = crypto_ahash_digest(req);
    }

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: shash - Remove dynamic descsize  (Herbert Xu)
As all users of the dynamic descsize have been converted to use a static one instead, remove support for dynamic descsize. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: skcipher - Realign struct skcipher_walk to save 8 bytes  (Thorsten Blum)
Reduce skcipher_walk's struct size by 8 bytes by realigning its members.

pahole output before:

    /* size: 120, cachelines: 2, members: 13 */
    /* sum members: 108, holes: 2, sum holes: 8 */
    /* padding: 4 */
    /* last cacheline: 56 bytes */

and after:

    /* size: 112, cachelines: 2, members: 13 */
    /* padding: 4 */
    /* last cacheline: 48 bytes */

No functional changes intended.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: simd - Include asm/simd.h in internal/simd.h  (Herbert Xu)
Now that the asm/simd.h files have been made safe against double inclusion, include asm/simd.h directly in internal/simd.h. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: ecdsa - Fix NIST P521 key size reported by KEYCTL_PKEY_QUERY  (Lukas Wunner)
When user space issues a KEYCTL_PKEY_QUERY system call for a NIST P521 key, the key_size is incorrectly reported as 528 bits instead of 521. That's because the key size obtained through crypto_sig_keysize() is in bytes and software_key_query() multiplies by 8 to yield the size in bits. The underlying assumption is that the key size is always a multiple of 8. With the recent addition of NIST P521, that's no longer the case. Fix by returning the key_size in bits from crypto_sig_keysize() and adjusting the calculations in software_key_query(). The ->key_size() callbacks of sig_alg algorithms now return the size in bits, whereas the ->digest_size() and ->max_size() callbacks return the size in bytes. This matches with the units in struct keyctl_pkey_query. Fixes: a7d45ba77d3d ("crypto: ecdsa - Register NIST P521 and extend test suite") Signed-off-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com> Reviewed-by: Ignat Korchagin <ignat@cloudflare.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
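For concreteness, the arithmetic behind the mismatch (illustrative snippet, not kernel code):

    /* NIST P-521: a 521-bit key occupies DIV_ROUND_UP(521, 8) = 66 bytes. */
    static unsigned int bits_to_bytes(unsigned int nbits)
    {
            return (nbits + 7) / 8;
    }

    /* old report: bits_to_bytes(521) * 8 == 528 bits (rounded up, wrong)  */
    /* new report: 521 bits, taken directly from crypto_sig_keysize()      */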
2025-04-16  crypto: ahash - Use cra_reqsize  (Herbert Xu)
Use the common reqsize field and remove reqsize from ahash_alg. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: acomp - Remove reqsize field  (Herbert Xu)
Remove the type-specific reqsize field in favour of the common one. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: ctr - Remove unnecessary header inclusions  (Herbert Xu)
Now that the broken drivers have been fixed, remove the unnecessary inclusions from crypto/ctr.h. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: acomp - Simplify folio handling  (Herbert Xu)
Rather than storing the folio as is and handling it later, convert it to a scatterlist right away. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: acomp - Remove ACOMP_REQUEST_ALLOC  (Herbert Xu)
Remove ACOMP_REQUEST_ALLOC in favour of ACOMP_REQUEST_ON_STACK with ACOMP_REQUEST_CLONE. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: acomp - Add ACOMP_REQUEST_CLONE  (Herbert Xu)
Add a new helper ACOMP_REQUEST_CLONE that will transform a stack request into a dynamically allocated one if possible, and otherwise switch it over to the synchronous fallback transform. The intended usage is:

    ACOMP_REQUEST_ON_STACK(req, tfm);

    ...

    err = crypto_acomp_compress(req);

    /* The request cannot complete synchronously. */
    if (err == -EAGAIN) {
            /* This will not fail. */
            req = ACOMP_REQUEST_CLONE(req, gfp);

            /* Redo operation. */
            err = crypto_acomp_compress(req);
    }

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: acomp - Add ACOMP_FBREQ_ON_STACK  (Herbert Xu)
Add a helper to create an on-stack fallback request from a given request. Use this helper in acomp_do_nondma. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: acomp - Use request flag helpers and add acomp_request_flags  (Herbert Xu)
Use the newly added request flag helpers to manage the request flags. Also add acomp_request_flags, which lets bottom-level users access the request flags without the bits private to the acomp API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: api - Add helpers to manage request flags  (Herbert Xu)
Add helpers so that the ON_STACK request flag management is not duplicated all over the place. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: ahash - Remove request chaining  (Herbert Xu)
Request chaining requires the user to do too much bookkeeping. Remove it from ahash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  crypto: acomp - Remove request chaining  (Herbert Xu)
Request chaining requires the user to do too much bookkeeping. Remove it from acomp. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-14  rxrpc: Add the security index for yfs-rxgk  (David Howells)
Add the security index and abort codes for the YFS variant of rxgk. Signed-off-by: David Howells <dhowells@redhat.com> Link: https://patch.msgid.link/20250411095303.2316168-6-dhowells@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-12  Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Herbert Xu)
Merge crypto tree to pick up scompress and ahash fixes. The scompress fix becomes mostly unnecessary as the bugs no longer exist with the new acompress code. However, keep the NULL assignment in crypto_acomp_free_streams so that if the user decides to call crypto_acomp_alloc_streams again it will work.
2025-04-12  crypto: ahash - Disable request chaining  (Herbert Xu)
Disable hash request chaining in case a driver that copies an ahash_request object by hand accidentally triggers chaining. Reported-by: Manorit Chawdhry <m-chawdhry@ti.com> Fixes: f2ffe5a9183d ("crypto: hash - Add request chaining API") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Manorit Chawdhry <m-chawdhry@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07  crypto: chacha - remove <crypto/internal/chacha.h>  (Eric Biggers)
<crypto/internal/chacha.h> is now included only by crypto/chacha.c, so fold it into there. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07  crypto: chacha - centralize the skcipher wrappers for arch code  (Eric Biggers)
Following the example of the crc32 and crc32c code, make the crypto subsystem register both generic and architecture-optimized chacha20, xchacha20, and xchacha12 skcipher algorithms, all implemented on top of the appropriate library functions. This eliminates the need for every architecture to implement the same skcipher glue code. To register the architecture-optimized skciphers only when architecture-optimized code is actually being used, add a function chacha_is_arch_optimized() and make each arch implement it. Change each architecture's ChaCha module_init function to arch_initcall so that the CPU feature detection is guaranteed to run before chacha_is_arch_optimized() gets called by crypto/chacha.c. In the case of s390, remove the CPU feature based module autoloading, which is no longer needed since the module just gets pulled in via function linkage. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
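As a hypothetical sketch of what an architecture's chacha_is_arch_optimized() might look like (the static key name is made up; no particular arch is quoted):

    #include <linux/jump_label.h>
    #include <linux/module.h>
    #include <linux/types.h>

    /* Enabled from the arch's arch_initcall once CPU feature detection has
     * confirmed the optimized code path will actually be used. */
    static DEFINE_STATIC_KEY_FALSE(have_chacha_simd);

    bool chacha_is_arch_optimized(void)
    {
            return static_key_enabled(&have_chacha_simd);
    }
    EXPORT_SYMBOL(chacha_is_arch_optimized);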
2025-04-07  crypto: ctr - remove unused crypto_ctr_encrypt_walk()  (Ard Biesheuvel)
crypto_ctr_encrypt_walk() is no longer used so remove it. Note that some existing drivers currently rely on the transitive includes of some other crypto headers so retain those for the time being. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07  crypto: hash - Do not use shash in hard IRQs  (Herbert Xu)
Update the documentation to be consistent with the fact that shash may not be used in hard IRQs. Reported-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07  crypto: acomp - Add acomp_walk  (Herbert Xu)
Add acomp_walk which is similar to skcipher_walk but tailored for acomp. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07  crypto: acomp - Move scomp stream allocation code into acomp  (Herbert Xu)
Move the dynamic stream allocation code into acomp and make it available as a helper for acomp algorithms. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07  crypto: scomp - Allocate per-cpu buffer on first use of each CPU  (Herbert Xu)
Per-cpu buffers can be wasteful when the number of CPUs is large, especially if the buffer itself is likely to never be used. Reduce such wastage by only allocating them on first use of a particular CPU. On start-up allocate a single buffer on the first possible CPU. For every other CPU a work struct will be scheduled on first use to allocate the buffer for that CPU. Until the allocation succeeds simply use the first CPU's buffer which is protected under a spin lock. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
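A simplified, hypothetical sketch of the lazy per-CPU allocation pattern described above (names and structure are illustrative, not the scomp code itself):

    #include <linux/cpumask.h>
    #include <linux/percpu.h>
    #include <linux/spinlock.h>
    #include <linux/vmalloc.h>
    #include <linux/workqueue.h>

    #define EXAMPLE_BUF_SIZE PAGE_SIZE

    struct lazy_buf {
            void *mem;                      /* NULL until allocated for this CPU */
            struct work_struct work;        /* allocates mem in process context  */
    };

    static DEFINE_PER_CPU(struct lazy_buf, lazy_bufs);
    static DEFINE_SPINLOCK(fallback_lock);  /* serialises use of the first CPU's buffer */

    static void lazy_buf_alloc(struct work_struct *work)
    {
            struct lazy_buf *buf = container_of(work, struct lazy_buf, work);

            /* If this fails, the CPU simply keeps borrowing the fallback buffer. */
            buf->mem = vmalloc(EXAMPLE_BUF_SIZE);
    }

    /* Assumes start-up code allocated the first possible CPU's buffer and ran
     * INIT_WORK(&buf->work, lazy_buf_alloc) for every other CPU. */
    static void *lazy_buf_get(void)
    {
            struct lazy_buf *buf = this_cpu_ptr(&lazy_bufs);

            if (likely(buf->mem))
                    return buf->mem;        /* fast path: already allocated */

            schedule_work(&buf->work);      /* allocate for this CPU later */

            /* Meanwhile borrow the first CPU's buffer; the caller drops
             * fallback_lock when it is done with the borrowed buffer. */
            spin_lock(&fallback_lock);
            return per_cpu_ptr(&lazy_bufs, cpumask_first(cpu_possible_mask))->mem;
    }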
2025-04-07  crypto: api - Move alg destroy work from instance to template  (Herbert Xu)
Commit 9ae4577bc077 ("crypto: api - Use work queue in crypto_destroy_instance") introduced a work struct to free an instance after the last user goes away. Move the delayed work from the instance into its template so that when the template is unregistered it can ensure that all its instances have been freed before returning. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-29  Merge tag 'v6.15-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
Pull crypto updates from Herbert Xu:

 "API:
   - Remove legacy compression interface
   - Improve scatterwalk API
   - Add request chaining to ahash and acomp
   - Add virtual address support to ahash and acomp
   - Add folio support to acomp
   - Remove NULL dst support from acomp

  Algorithms:
   - Library options are fully hidden (selected by kernel users only)
   - Add Kerberos5 algorithms
   - Add VAES-based ctr(aes) on x86
   - Ensure LZO respects output buffer length on compression
   - Remove obsolete SIMD fallback code path from arm/ghash-ce

  Drivers:
   - Add support for PCI device 0x1134 in ccp
   - Add support for rk3588's standalone TRNG in rockchip
   - Add Inside Secure SafeXcel EIP-93 crypto engine support in eip93
   - Fix bugs in tegra uncovered by multi-threaded self-test
   - Fix corner cases in hisilicon/sec2

  Others:
   - Add SG_MITER_LOCAL to sg miter
   - Convert ubifs, hibernate and xfrm_ipcomp from legacy API to acomp"

* tag 'v6.15-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (187 commits)
  crypto: testmgr - Add multibuffer acomp testing
  crypto: acomp - Fix synchronous acomp chaining fallback
  crypto: testmgr - Add multibuffer hash testing
  crypto: hash - Fix synchronous ahash chaining fallback
  crypto: arm/ghash-ce - Remove SIMD fallback code path
  crypto: essiv - Replace memcpy() + NUL-termination with strscpy()
  crypto: api - Call crypto_alg_put in crypto_unregister_alg
  crypto: scompress - Fix incorrect stream freeing
  crypto: lib/chacha - remove unused arch-specific init support
  crypto: remove obsolete 'comp' compression API
  crypto: compress_null - drop obsolete 'comp' implementation
  crypto: cavium/zip - drop obsolete 'comp' implementation
  crypto: zstd - drop obsolete 'comp' implementation
  crypto: lzo - drop obsolete 'comp' implementation
  crypto: lzo-rle - drop obsolete 'comp' implementation
  crypto: lz4hc - drop obsolete 'comp' implementation
  crypto: lz4 - drop obsolete 'comp' implementation
  crypto: deflate - drop obsolete 'comp' implementation
  crypto: 842 - drop obsolete 'comp' implementation
  crypto: nx - Migrate to scomp API
  ...
2025-03-21  crypto: lib/chacha - remove unused arch-specific init support  (Eric Biggers)
All implementations of chacha_init_arch() just call chacha_init_generic(), so it is pointless. Just delete it, and replace chacha_init() with what was previously chacha_init_generic(). Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21  crypto: acomp - Add support for folios  (Herbert Xu)
For many users, it's easier to supply a folio rather than an SG list since they already have them. Add support for folios to the acomp interface. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21  crypto: acomp - Add ACOMP_REQUEST_ALLOC and acomp_request_alloc_extra  (Herbert Xu)
Add ACOMP_REQUEST_ALLOC, which is a wrapper around acomp_request_alloc that falls back to a synchronous stack request if the allocation fails. Also add ACOMP_REQUEST_ON_STACK, which stores the request on the stack only. The request should be freed with acomp_request_free. Finally, add acomp_request_alloc_extra, which gives the user extra memory to use in conjunction with the request. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21  crypto: acomp - Remove dst_free  (Herbert Xu)
Remove the unused dst_free hook. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21  crypto: scomp - Remove support for some non-trivial SG lists  (Herbert Xu)
As the only user of acomp/scomp uses a trivial single-page SG list, remove support for everything else in preparation for the addition of virtual address support. However, keep support for non-trivial source SG lists, as that user is currently jumping through hoops in order to linearise the source data. Limit the source SG linearisation buffer to a single page, as that user never goes over that. The only other potential user (IPComp) is also unlikely to exceed that, and it can easily do its own linearisation if necessary. Also keep the destination SG linearisation for IPComp. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21  crypto: scatterwalk - Use nth_page instead of doing it by hand  (Herbert Xu)
Curiously, the Crypto API scatterwalk incremented pages by hand rather than using nth_page. Possibly because scatterwalk predates nth_page (the following commit is from the history tree):

    commit 3957f2b34960d85b63e814262a8be7d5ad91444d
    Author: James Morris <jmorris@intercode.com.au>
    Date:   Sun Feb 2 07:35:32 2003 -0800

        [CRYPTO]: in/out scatterlist support for ciphers.

Fix this by using nth_page.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
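For illustration, a small sketch of the difference (assumes a scatterlist entry that may span several pages; the helper name is made up):

    #include <linux/mm.h>
    #include <linux/scatterlist.h>

    /* Return the page containing byte 'offset' of a scatterlist entry. */
    static struct page *example_walk_page(struct scatterlist *sg,
                                          unsigned int offset)
    {
            unsigned int n = (sg->offset + offset) >> PAGE_SHIFT;

            /* By hand this would be sg_page(sg) + n, which assumes the struct
             * page array is virtually contiguous. nth_page() stays correct
             * even when it is not (e.g. SPARSEMEM without VMEMMAP). */
            return nth_page(sg_page(sg), n);
    }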
2025-03-21  crypto: scatterwalk - simplify map and unmap calling convention  (Eric Biggers)
Now that the address returned by scatterwalk_map() is always being stored into the same struct scatter_walk that is passed in, make scatterwalk_map() do so itself and return void. Similarly, now that scatterwalk_unmap() is always being passed the address field within a struct scatter_walk, make scatterwalk_unmap() take a pointer to struct scatter_walk instead of the address directly. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
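A short sketch of the new convention as described above; the walk field name (->addr) is paraphrased from the description rather than copied from the header:

    #include <crypto/scatterwalk.h>
    #include <linux/string.h>

    /* Copy n bytes out of the current walk position. */
    static void example_copy_out(struct scatter_walk *walk, void *buf,
                                 unsigned int n)
    {
            scatterwalk_map(walk);          /* was: addr = scatterwalk_map(walk); */
            memcpy(buf, walk->addr, n);     /* was: memcpy(buf, addr, n);         */
            scatterwalk_unmap(walk);        /* was: scatterwalk_unmap(addr);      */
    }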