path: root/include/crypto/internal

2025-06-27  Merge tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux  (Linus Torvalds)

Pull crypto library fix from Eric Biggers:

 "Fix a regression where the purgatory code sometimes fails to build"

* tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux:
  lib/crypto: sha256: Mark sha256_choose_blocks as __always_inline

2025-06-20  lib/crypto: sha256: Mark sha256_choose_blocks as __always_inline  (Arnd Bergmann)

When the compiler chooses to not inline sha256_choose_blocks() in the
purgatory code, it fails to link against the missing CPU-specific version:

  x86_64-linux-ld: arch/x86/purgatory/purgatory.ro: in function `sha256_choose_blocks.part.0':
  sha256.c:(.text+0x6a6): undefined reference to `irq_fpu_usable'
  sha256.c:(.text+0x6c7): undefined reference to `sha256_blocks_arch'
  sha256.c:(.text+0x6cc): undefined reference to `sha256_blocks_simd'

Mark this function as __always_inline to prevent this, same as
sha256_finup().

Fixes: 5b90a779bc54 ("crypto: lib/sha256 - Add helpers for block-based shash")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20250620191952.1867578-1-arnd@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

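For illustration, a hedged sketch of the function being fixed: the body
below is approximate, reassembled from the link errors above, not the
exact lib/crypto/sha256.c code. Forcing inlining guarantees the compiler
never emits an out-of-line `sha256_choose_blocks.part.0` copy that would
reference symbols purgatory does not link in:

  /* Approximate sketch: this dispatcher must collapse at compile time in
   * purgatory, where irq_fpu_usable() and the arch block functions are
   * unavailable. */
  static __always_inline void sha256_choose_blocks(u32 state[8],
          const u8 *data, size_t nblocks, bool force_generic)
  {
          if (force_generic)
                  sha256_blocks_generic(state, data, nblocks);
          else if (irq_fpu_usable())
                  sha256_blocks_simd(state, data, nblocks);
          else
                  sha256_blocks_arch(state, data, nblocks);
  }
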
2025-06-13  crypto: testmgr - reinstate kconfig control over full self-tests  (Eric Biggers)

Commit 698de822780f ("crypto: testmgr - make it easier to enable the full
set of tests") removed support for building kernels that run only the
"fast" set of crypto self-tests by default. This assumed that nearly
everyone actually wanted the full set of tests, *if* they had already
chosen to enable the tests at all.

Unfortunately, it turns out that both Debian and Fedora intentionally have
the crypto self-tests enabled in their production kernels. And for
production kernels we do need to keep the testing time down, which implies
just running the "fast" tests, not the full set of tests.

For Fedora, a reason for enabling the tests in production is that they are
being (mis)used to meet the FIPS 140-3 pre-operational testing
requirement.

However, the other reason for enabling the tests in production, which
applies to both distros, is that they provide some value in protecting
users from buggy drivers. Unfortunately, the crypto/ subsystem has many
buggy and untested drivers for off-CPU hardware accelerators on rare
platforms. These broken drivers get shipped to users, and there have been
multiple examples of the tests preventing these buggy drivers from being
used. So effectively, the tests are being relied on in production kernels.
I think this is kind of crazy (untested drivers should just not be enabled
at all), but that seems to be how things work currently.

Thus, reintroduce a kconfig option that controls the level of testing.
Call it CRYPTO_SELFTESTS_FULL instead of the original name
CRYPTO_MANAGER_EXTRA_TESTS, which was slightly misleading. Moreover, given
the "production kernel" use case, make CRYPTO_SELFTESTS depend on EXPERT
instead of DEBUG_KERNEL.

I also haven't reinstated all the #ifdefs in crypto/testmgr.c. Instead,
just rely on the compiler to optimize out unused code.

Fixes: 40b9969796bf ("crypto: testmgr - replace CRYPTO_MANAGER_DISABLE_TESTS with CRYPTO_SELFTESTS")
Fixes: 698de822780f ("crypto: testmgr - make it easier to enable the full set of tests")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

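A hedged sketch of the "no #ifdefs" approach described in the last
paragraph (illustrative only, not the actual crypto/testmgr.c code): a
constant-foldable predicate is enough for the compiler to discard the
slow-test code when the option is off.

  #include <linux/kconfig.h>

  /* With CRYPTO_SELFTESTS_FULL=n this folds to a compile-time constant,
   * so code guarded by it is eliminated as dead rather than hidden
   * behind #ifdef. */
  static inline bool noslowtests(void)
  {
          return !IS_ENABLED(CONFIG_CRYPTO_SELFTESTS_FULL);
  }
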
2025-05-23  Revert "crypto: testmgr - Add hash export format testing"  (Herbert Xu)

This reverts commit 18c438b228558e05ede7dccf947a6547516fc0c7.

The s390 hmac and sha3 algorithms are failing the test. Revert the change
until they have been fixed.

Reported-by: Ingo Franzki <ifranzki@linux.ibm.com>
Link: https://lore.kernel.org/all/623a7fcb-b4cb-48e6-9833-57ad2b32a252@linux.ibm.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-19  crypto: testmgr - Add hash export format testing  (Herbert Xu)

Ensure that the hash state can be exported to and imported from the
generic algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-19  crypto: hmac - Add ahash support  (Herbert Xu)

Add ahash support to hmac so that drivers that can't do hmac in hardware
do not have to implement duplicate copies of hmac.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-19  crypto: hash - Add export_core and import_core hooks  (Herbert Xu)

Add export_core and import_core hooks. These are intended to be used by
algorithms which are wrappers around block-only algorithms, but are not
themselves block-only, e.g., hmac.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-19  crypto: hash - Move core export and import into internal/hash.h  (Herbert Xu)

The core export and import functions are targeted at implementors, so
move them into internal/hash.h.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-12  crypto: testmgr - make it easier to enable the full set of tests  (Eric Biggers)

Currently the full set of crypto self-tests requires
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y. This is problematic in two ways.
First, developers regularly overlook this option. Second, the description
of the tests as "extra" sometimes gives the impression that it is not
required that all algorithms pass these tests.

Given that the main use case for the crypto self-tests is for developers,
make enabling CONFIG_CRYPTO_SELFTESTS=y just enable the full set of crypto
self-tests by default.

The slow tests can still be disabled by adding the command-line parameter
cryptomgr.noextratests=1, soon to be renamed to cryptomgr.noslowtests=1.
The only known use case for doing this is for people trying to use the
crypto self-tests to satisfy the FIPS 140-3 pre-operational self-testing
requirements when the kernel is being validated as a FIPS 140-3
cryptographic module.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-12  crypto: geniv - use memcpy_sglist() instead of null skcipher  (Eric Biggers)

For copying data between two scatterlists, just use memcpy_sglist()
instead of the so-called "null skcipher". This is much simpler.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

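A minimal sketch of the new pattern, assuming the memcpy_sglist() helper
from <crypto/scatterwalk.h> (introduced in the 2025-04-28 scatterwalk
entry below); the wrapper function name here is hypothetical:

  #include <crypto/scatterwalk.h>

  /* Copy len bytes between two scatterlists directly, instead of
   * round-tripping the data through a no-op ("null") skcipher. */
  static void geniv_copy_sg(struct scatterlist *dst,
                            struct scatterlist *src, unsigned int len)
  {
          memcpy_sglist(dst, src, len);
  }
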
2025-05-05  crypto: ahash - Add HASH_REQUEST_ZERO  (Herbert Xu)

Add a helper to zero hash stack requests that were never cloned off the
stack.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-05  crypto: lib/sha256 - Use generic block helper  (Herbert Xu)

Use the BLOCK_HASH_UPDATE_BLOCKS helper instead of duplicating partial
block handling. Also remove the unused lib/sha256 force-generic interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-05  crypto: lib/sha256 - Add helpers for block-based shash  (Herbert Xu)

Add an internal sha256_finup helper and move the finalisation code from
__sha256_final into it.

Also add sha256_choose_blocks and CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD so that
the Crypto API can use the SIMD block function unconditionally. The Crypto
API must not be used in hard IRQs and there is no reason to have a
fallback path for hardirqs.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-05  crypto: api - Rename CRYPTO_ALG_REQ_CHAIN to CRYPTO_ALG_REQ_VIRT  (Herbert Xu)

As chaining has been removed, all that remains of REQ_CHAIN is just
virtual address support. Rename it before the reintroduction of batching
creates confusion.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-05  crypto: sha256 - support arch-optimized lib and expose through shash  (Eric Biggers)

As has been done for various other algorithms, rework the design of the
SHA-256 library to support arch-optimized implementations, and make
crypto/sha256.c expose both generic and arch-optimized shash algorithms
that wrap the library functions.

This allows users of the SHA-256 library functions to take advantage of
the arch-optimized code, and this makes it much simpler to integrate
SHA-256 for each architecture.

Note that sha256_base.h is not used in the new design. It will be removed
once all the architecture-specific code has been updated.

Move the generic block function into its own module to avoid a circular
dependency from libsha256.ko => sha256-$ARCH.ko => libsha256.ko.

Signed-off-by: Eric Biggers <ebiggers@google.com>

Add export and import functions to maintain existing export format.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

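For context, a small usage sketch of the library interface being reworked
(the declarations live in <crypto/sha2.h>); callers like this now pick up
the arch-optimized block function transparently:

  #include <crypto/sha2.h>

  static void digest_example(const u8 *data, size_t len,
                             u8 out[SHA256_DIGEST_SIZE])
  {
          struct sha256_state sctx;

          sha256_init(&sctx);
          sha256_update(&sctx, data, len); /* may be called repeatedly */
          sha256_final(&sctx, out);        /* finalise and zero the state */
  }
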
2025-05-05  crypto: lib/poly1305 - Add block-only interface  (Herbert Xu)

Add a block-only interface for poly1305. Implement the generic code first.
Also use the generic partial block helper.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-05-05  crypto: lib/sha256 - Move partial block handling out  (Herbert Xu)

Extract the common partial block handling into a helper macro that can be
reused by other library code. Also delete the unused
sha256_base_do_finalize function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-28  crypto: scatterwalk - Move skcipher walk and use it for memcpy_sglist  (Herbert Xu)

Move the generic part of skcipher walk into scatterwalk, and use it to
implement memcpy_sglist. This makes memcpy_sglist do the right thing when
two distinct SG lists contain identical subsets (e.g., the AD part of
AEAD).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-28  crypto: api - Add crypto_stack_request_init and initialise flags fully  (Herbert Xu)

Add a helper to initialise crypto stack requests and use it for ahash and
acomp. Make sure that the flags field is initialised fully in the helper
to silence false-positive warnings from the compiler.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504250751.mdy28Ibr-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-28  crypto: api - Add crypto_request_clone and fb  (Herbert Xu)

Add a helper to clone crypto requests and eliminate code duplication. Use
kmemdup in the helper. Also add an fb field to crypto_tfm.

This also happens to fix the existing implementations which were buggy.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504230118.1CxUaUoX-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504230004.c7mrY0C6-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: arm/blake2b - Use API partial block handling  (Herbert Xu)

Use the Crypto API partial block handling. Also remove the unnecessary
SIMD fallback path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: blake2b-generic - Use API partial block handling  (Herbert Xu)

Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-23  crypto: shash - Handle partial blocks in API  (Herbert Xu)

Provide an option to handle the partial blocks in the shash API. Almost
every hash algorithm has a block size and is only able to hash partial
blocks on finalisation.

Rather than duplicating the partial block handling many times, add this
functionality to the shash API. It is optional (e.g., hmac would never
need this by relying on the partial block handling of the underlying
hash), and to enable it set the bit CRYPTO_AHASH_ALG_BLOCK_ONLY.

The export format is always that of the underlying hash export, plus the
partial block buffer, followed by a single byte for the partial block
length.

Set the bit CRYPTO_AHASH_ALG_FINAL_NONZERO to withhold an extra byte in
the partial block. This will come in handy when this is extended to ahash
where hardware often can't deal with a zero-length final. It will also be
used for algorithms requiring an extra block for finalisation (e.g.,
cmac).

As an optimisation, set the bit CRYPTO_AHASH_ALG_FINUP_MAX if the
algorithm wishes to get as much data as possible instead of just the last
partial block.

The descriptor will be zeroed after finalisation.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

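A hedged sketch of how an algorithm might opt in (the field placement and
values are assumed from the flag names above, not taken verbatim from any
driver):

  /* Illustrative only: a block-only shash declaration. */
  static struct shash_alg example_alg = {
          .digestsize = 32,
          /* ... init/update/finup/export/import hooks ... */
          .base = {
                  .cra_name      = "example-hash",
                  .cra_blocksize = 64,
                  /* Let the shash API buffer partial blocks, and pass
                   * finup as much data as possible rather than only the
                   * last partial block. */
                  .cra_flags     = CRYPTO_AHASH_ALG_BLOCK_ONLY |
                                   CRYPTO_AHASH_ALG_FINUP_MAX,
          },
  };
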
2025-04-23  crypto: engine - Realign struct crypto_engine to save 8 bytes  (Thorsten Blum)

Realign struct crypto_engine to reduce its size by 8 bytes. Total size is
now 192 bytes, allowing it to fit within 3 cachelines instead of 4.

pahole output before:

  /* size: 200, cachelines: 4, members: 17 */
  /* sum members: 183, holes: 3, sum holes: 17 */
  /* paddings: 1, sum paddings: 4 */
  /* last cacheline: 8 bytes */

and after:

  /* size: 192, cachelines: 3, members: 17 */
  /* sum members: 183, holes: 2, sum holes: 9 */
  /* paddings: 1, sum paddings: 4 */

No functional changes intended.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

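The technique, shown on a hypothetical struct rather than crypto_engine
itself: grouping members by alignment removes padding holes.

  /* Before: on 64-bit, 4-byte holes after 'a' and 'b' pad this struct
   * to 32 bytes. */
  struct before { int a; void *p; int b; void *q; };

  /* After: pointers first, then the two ints pack together: 24 bytes. */
  struct after { void *p; void *q; int a; int b; };
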
2025-04-17  crypto: deflate - Make the acomp walk atomic  (Herbert Xu)

Add an atomic flag to the acomp walk and use that in deflate. Due to the
use of a per-cpu context, it is impossible to sleep during the walk in
deflate.

Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202504151654.4c3b6393-lkp@intel.com
Fixes: 08cabc7d3c86 ("crypto: deflate - Convert to acomp")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: hash - Add HASH_REQUEST_ON_STACK  (Herbert Xu)

Allow any ahash to be used with a stack request, with optional dynamic
allocation when async is needed. The intended usage is:

  HASH_REQUEST_ON_STACK(req, tfm);
  ...
  err = crypto_ahash_digest(req);

  /* The request cannot complete synchronously. */
  if (err == -EAGAIN) {
          /* This will not fail. */
          req = HASH_REQUEST_CLONE(req, gfp);

          /* Redo operation. */
          err = crypto_ahash_digest(req);
  }

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: skcipher - Realign struct skcipher_walk to save 8 bytes  (Thorsten Blum)

Reduce skcipher_walk's struct size by 8 bytes by realigning its members.

pahole output before:

  /* size: 120, cachelines: 2, members: 13 */
  /* sum members: 108, holes: 2, sum holes: 8 */
  /* padding: 4 */
  /* last cacheline: 56 bytes */

and after:

  /* size: 112, cachelines: 2, members: 13 */
  /* padding: 4 */
  /* last cacheline: 48 bytes */

No functional changes intended.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: simd - Include asm/simd.h in internal/simd.h  (Herbert Xu)

Now that the asm/simd.h files have been made safe against double
inclusion, include it directly in internal/simd.h.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: acomp - Remove reqsize field  (Herbert Xu)

Remove the type-specific reqsize field in favour of the common one.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: acomp - Simplify folio handling  (Herbert Xu)

Rather than storing the folio as is and handling it later, convert it to a
scatterlist right away.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: acomp - Add ACOMP_REQUEST_CLONE  (Herbert Xu)

Add a new helper ACOMP_REQUEST_CLONE that will transform a stack request
into a dynamically allocated one if possible, and otherwise switch it over
to the synchronous fallback transform. The intended usage is:

  ACOMP_REQUEST_ON_STACK(req, tfm);
  ...
  err = crypto_acomp_compress(req);

  /* The request cannot complete synchronously. */
  if (err == -EAGAIN) {
          /* This will not fail. */
          req = ACOMP_REQUEST_CLONE(req, gfp);

          /* Redo operation. */
          err = crypto_acomp_compress(req);
  }

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: acomp - Add ACOMP_FBREQ_ON_STACK  (Herbert Xu)

Add a helper to create an on-stack fallback request from a given request.
Use this helper in acomp_do_nondma.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: acomp - Use request flag helpers and add acomp_request_flags  (Herbert Xu)

Use the newly added request flag helpers to manage the request flags. Also
add acomp_request_flags, which lets bottom-level users access the request
flags without the bits private to the acomp API.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: ahash - Remove request chaining  (Herbert Xu)

Request chaining requires the user to do too much bookkeeping. Remove it
from ahash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-16  crypto: acomp - Remove request chaining  (Herbert Xu)

Request chaining requires the user to do too much bookkeeping. Remove it
from acomp.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-12  Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Herbert Xu)

Merge crypto tree to pick up scompress and ahash fixes. The scompress fix
becomes mostly unnecessary as the bugs no longer exist with the new
acompress code. However, keep the NULL assignment in
crypto_acomp_free_streams so that if the user decides to call
crypto_acomp_alloc_streams again it will work.

2025-04-12  crypto: ahash - Disable request chaining  (Herbert Xu)

Disable hash request chaining in case a driver that copies an
ahash_request object by hand accidentally triggers chaining.

Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Fixes: f2ffe5a9183d ("crypto: hash - Add request chaining API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Manorit Chawdhry <m-chawdhry@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-07  crypto: chacha - remove <crypto/internal/chacha.h>  (Eric Biggers)

<crypto/internal/chacha.h> is now included only by crypto/chacha.c, so
fold it into there.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-07  crypto: acomp - Add acomp_walk  (Herbert Xu)

Add acomp_walk, which is similar to skcipher_walk but tailored for acomp.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-07  crypto: acomp - Move scomp stream allocation code into acomp  (Herbert Xu)

Move the dynamic stream allocation code into acomp and make it available
as a helper for acomp algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-04-07  crypto: scomp - Allocate per-cpu buffer on first use of each CPU  (Herbert Xu)

Per-cpu buffers can be wasteful when the number of CPUs is large,
especially if the buffer itself is likely to never be used. Reduce such
wastage by only allocating them on first use of a particular CPU.

On start-up, allocate a single buffer on the first possible CPU. For every
other CPU, a work struct will be scheduled on first use to allocate the
buffer for that CPU. Until the allocation succeeds, simply use the first
CPU's buffer, which is protected by a spin lock.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

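A hedged sketch of that scheme (all names and details here are
hypothetical, not the actual scomp code):

  #include <linux/percpu.h>
  #include <linux/spinlock.h>
  #include <linux/workqueue.h>

  static DEFINE_PER_CPU(void *, stream_buf); /* NULL until first use */
  static DEFINE_SPINLOCK(cpu0_buf_lock);     /* guards CPU 0's buffer */
  static struct work_struct alloc_work;      /* allocates the caller's buffer */

  /* Return a buffer usable on the current CPU. Sets *borrowed if the
   * caller got CPU 0's buffer and must unlock cpu0_buf_lock when done. */
  static void *get_stream_buf(bool *borrowed)
  {
          void *buf = *this_cpu_ptr(&stream_buf);

          *borrowed = false;
          if (buf)
                  return buf;            /* fast path: already allocated */

          schedule_work(&alloc_work);    /* allocate ours in the background */
          spin_lock(&cpu0_buf_lock);     /* meanwhile, borrow CPU 0's */
          *borrowed = true;
          return per_cpu(stream_buf, 0);
  }
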
2025-03-21  crypto: acomp - Add support for folios  (Herbert Xu)

For many users, it's easier to supply a folio rather than an SG list since
they already have them. Add support for folios to the acomp interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-21  crypto: acomp - Add ACOMP_REQUEST_ALLOC and acomp_request_alloc_extra  (Herbert Xu)

Add ACOMP_REQUEST_ALLOC, which is a wrapper around acomp_request_alloc
that falls back to a synchronous stack request if the allocation fails.
Also add ACOMP_REQUEST_ON_STACK, which stores the request on the stack
only. The request should be freed with acomp_request_free.

Finally, add acomp_request_alloc_extra, which gives the user extra memory
to use in conjunction with the request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-21  crypto: acomp - Remove dst_free  (Herbert Xu)

Remove the unused dst_free hook.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-21  crypto: scomp - Remove support for some non-trivial SG lists  (Herbert Xu)

As the only user of acomp/scomp uses a trivial single-page SG list, remove
support for everything else in preparation for the addition of virtual
address support.

However, keep support for non-trivial source SG lists, as that user is
currently jumping through hoops in order to linearise the source data.
Limit the source SG linearisation buffer to a single page, as that user
never goes over that. The only other potential user is also unlikely to
exceed that (IPComp), and it can easily do its own linearisation if
necessary.

Also keep the destination SG linearisation for IPComp.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15  crypto: acomp - Add request chaining and virtual addresses  (Herbert Xu)

This adds request chaining and virtual address support to the acomp
interface.

It is identical to the ahash interface, except that a new flag
CRYPTO_ACOMP_REQ_NONDMA has been added to indicate that the virtual
addresses are not suitable for DMA. This is because all existing and
potential acomp users can provide memory that is suitable for DMA, so
there is no need for a fall-back copy path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15  crypto: acomp - Move stream management into scomp layer  (Herbert Xu)

Rather than allocating the stream memory in the request object, move it
into a per-cpu buffer managed by scomp. This relieves the user of having
to manage large request objects and set up their own per-cpu buffers in
order to do so.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15  crypto: scomp - Remove tfm argument from alloc/free_ctx  (Herbert Xu)

The tfm argument is completely unused and meaningless, as the same stream
object is identical over all transforms of a given algorithm. Remove it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2025-03-15  crypto: skcipher - Make skcipher_walk src.virt.addr const  (Herbert Xu)

Mark the src.virt.addr field in struct skcipher_walk as a pointer to const
data. This guarantees that the user won't modify the data, which should be
done through dst.virt.addr to ensure that flushing is done when necessary.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

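An illustrative sketch under the new constness (not from any specific
driver; the function and variable names are hypothetical): reads come from
src.virt.addr, and all writes go through dst.virt.addr so the walk code
can flush the destination.

  #include <crypto/algapi.h>
  #include <crypto/internal/skcipher.h>

  static void xor_step(struct skcipher_walk *walk, const u8 *keystream,
                       unsigned int nbytes)
  {
          const u8 *src = walk->src.virt.addr; /* read-only after this change */
          u8 *dst = walk->dst.virt.addr;       /* all writes land here */

          crypto_xor_cpy(dst, src, keystream, nbytes); /* dst = src ^ keystream */
  }
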
2025-03-15  crypto: skcipher - Eliminate duplicate virt.addr field  (Herbert Xu)

Reuse the addr field from struct scatter_walk for skcipher_walk. Keep the
existing virt.addr fields, but make them const for the user to access the
mapped address.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>