path: root/crypto
2025-01-19  crypto: asymmetric_keys - Remove unused key_being_used_for[]  (Dr. David Alan Gilbert)
key_being_used_for[] is an unused array of textual names for the elements of the enum key_being_used_for. It was added in 2015 by commit 99db44350672 ("PKCS#7: Appropriately restrict authenticated attributes and content type"). Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-14  crypto: skcipher - call cond_resched() directly  (Eric Biggers)
In skcipher_walk_done(), instead of calling crypto_yield() which requires a translation between flags, just call cond_resched() directly. This has the same effect. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
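A minimal sketch of what this amounts to (crypto_yield()'s flag check shown from memory; the surrounding walk code is assumed):

    /* crypto_yield() only translated a request flag into cond_resched(): */
    static inline void crypto_yield(u32 flags)
    {
            if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
                    cond_resched();
    }

    /* skcipher_walk_done() already tracks sleepability in walk->flags, so
     * the flag translation can be dropped (sketch): */
    if (walk->flags & SKCIPHER_WALK_SLEEP)
            cond_resched();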
2025-01-14  crypto: skcipher - optimize initializing skcipher_walk fields  (Eric Biggers)
The helper functions like crypto_skcipher_blocksize() take in a pointer to a tfm object, but they actually return properties of the algorithm. As the Linux kernel is compiled with -fno-strict-aliasing, the compiler has to assume that the writes to struct skcipher_walk could clobber the tfm's pointer to its algorithm. Thus it gets repeatedly reloaded in the generated code. Therefore, replace the use of these helper functions with straightforward accesses to the struct fields. Note that while *users* of the skcipher and aead APIs are supposed to use the helper functions, this particular code is part of the API *implementation* in crypto/skcipher.c, which already accesses the algorithm struct directly in many cases. So there is no reason to prefer the helper functions here. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
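A sketch of the transformation (crypto_skcipher_alg() and the cra_blocksize field are real API names; the exact statements in skcipher.c are assumptions):

    struct skcipher_alg *alg = crypto_skcipher_alg(tfm);

    /* Before: the helper rereads the tfm's algorithm pointer on every call,
     * since writes to the walk may alias it under -fno-strict-aliasing: */
    bsize = crypto_skcipher_blocksize(tfm);

    /* After: read the algorithm property through one cached pointer: */
    bsize = alg->base.cra_blocksize;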
2025-01-14  crypto: skcipher - clean up initialization of skcipher_walk::flags  (Eric Biggers)
- Initialize SKCIPHER_WALK_SLEEP in a consistent way, and check for atomic=true at the same time as CRYPTO_TFM_REQ_MAY_SLEEP. Technically atomic=true only needs to apply after the first step, but it is very rarely used, so optimize for the common case by checking 'atomic' alongside CRYPTO_TFM_REQ_MAY_SLEEP; this is more efficient.
- Initialize flags other than SKCIPHER_WALK_SLEEP to 0 rather than preserving them. No caller actually initializes the flags, which makes it impossible to use their original values for anything. Indeed, that does not happen and all meaningful flags get overridden anyway. It may have been thought that just clearing one flag would be faster than clearing all flags, but that's not the case, as the former is a read-write operation whereas the latter is just a write.
- Move the explicit clearing of SKCIPHER_WALK_SLOW, SKCIPHER_WALK_COPY, and SKCIPHER_WALK_DIFF into skcipher_walk_done(), since it is now only needed on non-first steps.
Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-14  crypto: skcipher - fold skcipher_walk_skcipher() into skcipher_walk_virt()  (Eric Biggers)
Fold skcipher_walk_skcipher() into skcipher_walk_virt() which is its only remaining caller. No change in behavior. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-14  crypto: skcipher - remove redundant check for SKCIPHER_WALK_SLOW  (Eric Biggers)
In skcipher_walk_done(), remove the check for SKCIPHER_WALK_SLOW because it is always true. All other flags (and lack thereof) were checked earlier in the function, leaving SKCIPHER_WALK_SLOW as the only remaining possibility. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-14  crypto: skcipher - remove redundant clamping to page size  (Eric Biggers)
In the case where skcipher_walk_next() allocates a bounce page, that page by definition has size PAGE_SIZE. The number of bytes to copy, 'n', is guaranteed to fit in it, since earlier in the function it was clamped to be at most a page. Therefore, remove the unnecessary logic that tried to clamp 'n' again to fit in the bounce page. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-14  crypto: skcipher - remove unnecessary page alignment of bounce buffer  (Eric Biggers)
In the slow path of skcipher_walk where it uses a slab bounce buffer for the data and/or IV, do not bother to avoid crossing a page boundary in the part(s) of this buffer that are used, and do not bother to allocate extra space in the buffer for that purpose. The buffer is accessed only by virtual address, so pages are irrelevant for it. This logic may have been present due to the physical address support in skcipher_walk, but that has now been removed. Or it may have been present to be consistent with the fast path that currently does not hand back addresses that span pages, but that behavior is a side effect of the pages being "mapped" one by one and is not actually a requirement. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-14  crypto: skcipher - document skcipher_walk_done() and rename some vars  (Eric Biggers)
skcipher_walk_done() has an unusual calling convention, and some of its local variables have unclear names. Document it and rename variables to make it a bit clearer what is going on. No change in behavior. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-14  crypto: proc - Use str_yes_no() and str_no_yes() helpers  (Thorsten Blum)
Remove hard-coded strings by using the str_yes_no() and str_no_yes() helpers. Remove unnecessary curly braces. Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
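The helpers live in <linux/string_choices.h>; a sketch of the kind of substitution involved (the exact crypto/proc.c line is an assumption):

    #include <linux/string_choices.h>

    /* Before: */
    seq_printf(m, "async : %s\n",
               (alg->cra_flags & CRYPTO_ALG_ASYNC) ? "yes" : "no");

    /* After: */
    seq_printf(m, "async : %s\n",
               str_yes_no(alg->cra_flags & CRYPTO_ALG_ASYNC));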
2025-01-08  treewide: Introduce kthread_run_worker[_on_cpu]()  (Frederic Weisbecker)
kthread_create() creates a kthread without running it yet. kthread_run() creates a kthread and runs it. On the other hand, kthread_create_worker() creates a kthread worker and runs it. This difference in behaviours is confusing. Also there is no way to create a kthread worker and affine it using kthread_bind_mask() or kthread_affine_preferred() before starting it. Consolidate the behaviours and introduce kthread_run_worker[_on_cpu]() that behaves just like kthread_run(). kthread_create_worker[_on_cpu]() will now only create a kthread worker without starting it. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
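A usage sketch of the consolidated API, per the description above (the worker name and mask are illustrative, and starting the worker via wake_up_process() is an assumption about the split semantics):

    struct kthread_worker *w;

    /* Create first, affine, then start: */
    w = kthread_create_worker(0, "my-worker");      /* created, not started */
    if (!IS_ERR(w)) {
            kthread_bind_mask(w->task, cpu_online_mask);
            wake_up_process(w->task);               /* start it */
    }

    /* Or create and start in one call, like kthread_run(): */
    w = kthread_run_worker(0, "my-worker");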
2025-01-04  crypto: ahash - make hash walk functions private to ahash.c  (Eric Biggers)
Due to the removal of the Niagara2 SPU driver, crypto_hash_walk_first(), crypto_hash_walk_done(), crypto_hash_walk_last(), and struct crypto_hash_walk are now only used in crypto/ahash.c. Therefore, make them all private to crypto/ahash.c. I.e. un-export the two functions that were exported, make the functions static, and move the struct definition to the .c file. As part of this, move the functions to earlier in the file to avoid needing to add forward declarations. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-04  crypto: keywrap - remove unused keywrap algorithm  (Eric Biggers)
The keywrap (kw) algorithm has no in-tree user. It has never had an in-tree user, and the patch that added it provided no justification for its inclusion. Even use of it via AF_ALG is impossible, as it uses a weird calling convention where part of the ciphertext is returned via the IV buffer, which is not returned to userspace in AF_ALG. It's also unclear whether any new code in the kernel that does key wrapping would actually use this algorithm. It is controversial in the cryptographic community due to having no clearly stated security goal, no security proof, poor performance, and only a 64-bit auth tag. Later work (https://eprint.iacr.org/2006/221) suggested that the goal is deterministic authenticated encryption. But there are now more modern algorithms for this, and this is not the same as key wrapping, for which a regular AEAD such as AES-GCM usually can be (and is) used instead. Therefore, remove this unused code. There were several special cases for this algorithm in the self-tests, due to its weird calling convention. Remove those too. Cc: Stephan Mueller <smueller@chronox.de> Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-04  crypto: vmac - remove unused VMAC algorithm  (Eric Biggers)
Remove the vmac64 template, as it has no known users. It also continues to have longstanding bugs such as alignment violations (see https://lore.kernel.org/r/20241226134847.6690-1-evepolonium@gmail.com/). This code was added in 2009 by commit f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support"). Based on the mention of intel_txt support in the commit title, it seems it was added as a prerequisite for the contemporaneous patch "intel_txt: add s3 userspace memory integrity verification" (https://lore.kernel.org/r/4ABF2B50.6070106@intel.com/). In the design proposed by that patch, when an Intel Trusted Execution Technology (TXT) enabled system resumed from suspend, the "tboot" trusted executable launched the Linux kernel without verifying userspace memory, and then the Linux kernel used VMAC to verify userspace memory. However, that patch was never merged, as reviewers had objected to the design. It was later reworked into commit 4bd96a7a8185 ("x86, tboot: Add support for S3 memory integrity protection") which made tboot verify the memory instead. Thus the VMAC support in Linux was never used. No in-tree user has appeared since then, other than potentially the usual components that allow specifying arbitrary hash algorithms by name, namely AF_ALG and dm-integrity. However there are no indications that VMAC is being used with these components. Debian Code Search and web searches for "vmac64" (the actual algorithm name) do not return any results other than the kernel itself, suggesting that it does not appear in any other code or documentation. Explicitly grepping the source code of the usual suspects (libell, iwd, cryptsetup) finds no matches either. Before 2018, the vmac code was also completely broken due to using a hardcoded nonce and the wrong endianness for the MAC. It was then fixed by commit ed331adab35b ("crypto: vmac - add nonced version with big endian digest") and commit 0917b873127c ("crypto: vmac - remove insecure version with hardcoded nonce"). These were intentionally breaking changes that changed all the computed MAC values as well as the algorithm name ("vmac" to "vmac64"). No complaints were ever received about these breaking changes, strongly suggesting the absence of users. The reason I had put some effort into fixing this code in 2018 is because it was used by an out-of-tree driver. But if it is still needed in that particular out-of-tree driver, the code can be carried in that driver instead. There is no need to carry it upstream. Cc: Atharva Tiwari <evepolonium@gmail.com> Cc: Shane Wang <shane.wang@intel.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-01-04  crypto: fips - Use str_enabled_disabled() helper in fips_enable()  (Thorsten Blum)
Remove hard-coded strings by using the str_enabled_disabled() helper function. Use pr_info() instead of printk(KERN_INFO) to silence a checkpatch warning. Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
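Roughly what the cleaned-up setup handler would look like (a sketch following the existing fips_enable() shape; treat details as approximate):

    #include <linux/string_choices.h>

    static int fips_enable(char *str)
    {
            fips_enabled = !!simple_strtol(str, NULL, 0);
            pr_info("fips mode: %s\n", str_enabled_disabled(fips_enabled));
            return 1;
    }
    __setup("fips=", fips_enable);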
2024-12-14  crypto: keywrap - remove assignment of 0 to cra_alignmask  (Eric Biggers)
Since this code is zero-initializing the algorithm struct, the assignment of 0 to cra_alignmask is redundant. Remove it to reduce the number of matches that are found when grepping for cra_alignmask. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: aegis - remove assignments of 0 to cra_alignmask  (Eric Biggers)
Struct fields are zero by default, so these lines of code have no effect. Remove them to reduce the number of matches that are found when grepping for cra_alignmask. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: seed - stop using cra_alignmask  (Eric Biggers)
Instead of specifying a nonzero alignmask, use the unaligned access helpers. This eliminates unnecessary alignment operations on most CPUs, which can handle unaligned accesses efficiently, and brings us a step closer to eventually removing support for the alignmask field. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
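The same conversion pattern applies to the similar entries below (khazad, tea, aria, anubis). A sketch, assuming a big-endian word load/store as in seed (the unaligned helpers are real; their header has moved between kernel versions, <linux/unaligned.h> in recent trees):

    #include <linux/unaligned.h>

    u32 x;

    /* Before: 'in' was guaranteed aligned via cra_alignmask: */
    x = be32_to_cpu(*(const __be32 *)in);

    /* After: no alignment requirement; this compiles to a plain load on
     * CPUs with efficient unaligned access: */
    x = get_unaligned_be32(in);
    put_unaligned_be32(x, out);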
2024-12-14  crypto: khazad - stop using cra_alignmask  (Eric Biggers)
Instead of specifying a nonzero alignmask, use the unaligned access helpers. This eliminates unnecessary alignment operations on most CPUs, which can handle unaligned accesses efficiently, and brings us a step closer to eventually removing support for the alignmask field. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: tea - stop using cra_alignmask  (Eric Biggers)
Instead of specifying a nonzero alignmask, use the unaligned access helpers. This eliminates unnecessary alignment operations on most CPUs, which can handle unaligned accesses efficiently, and brings us a step closer to eventually removing support for the alignmask field. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: aria - stop using cra_alignmask  (Eric Biggers)
Instead of specifying a nonzero alignmask, use the unaligned access helpers. This eliminates unnecessary alignment operations on most CPUs, which can handle unaligned accesses efficiently, and brings us a step closer to eventually removing support for the alignmask field. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: anubis - stop using cra_alignmask  (Eric Biggers)
Instead of specifying a nonzero alignmask, use the unaligned access helpers. This eliminates unnecessary alignment operations on most CPUs, which can handle unaligned accesses efficiently, and brings us a step closer to eventually removing support for the alignmask field. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-14  crypto: skcipher - remove support for physical address walks  (Eric Biggers)
Since the physical address support in skcipher_walk is not used anymore, remove all the code associated with it. This includes:
- The skcipher_walk_async() and skcipher_walk_complete() functions;
- The SKCIPHER_WALK_PHYS flag and everything conditional on it;
- The buffers, phys, and virt.page fields in struct skcipher_walk;
- struct skcipher_walk_buffer.
As a result, skcipher_walk now just supports virtual addresses. Physical address support in skcipher_walk is unneeded because drivers that need physical addresses just use the scatterlists directly.
Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: sig - Set maskset to CRYPTO_ALG_TYPE_MASK  (Herbert Xu)
As sig is now a standalone type, it no longer needs to have a wide mask that includes akcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: api - Call crypto_schedule_test outside of mutex  (Herbert Xu)
There is no need to hold the crypto mutex when scheduling a self-test. In fact, prior to the patch introducing asynchronous testing, this was done outside of the locked area. Move the crypto_schedule_test call back out of the locked area. Also move crypto_remove_final to the else branch under the schedule-test call, as the list of algorithms to be removed is non-empty only when the test larval is NULL (i.e., testing is disabled). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
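The shape of the change, sketched (the lock in question is the crypto_alg_sem rwsem, referred to loosely as the crypto mutex; control flow simplified):

    down_write(&crypto_alg_sem);
    /* ... look up or register the test larval ... */
    up_write(&crypto_alg_sem);

    if (larval)
            crypto_schedule_test(larval);   /* now outside the lock */
    else
            crypto_remove_final(&list);     /* list is non-empty only when
                                               testing is disabled */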
2024-12-10  crypto: api - Fix boot-up self-test race  (Herbert Xu)
During the boot process self-tests are postponed so that all algorithms are registered when the test starts. In the event that algorithms are still being registered during these tests, which can occur either because the algorithm is registered at late_initcall, or because a self-test itself triggers the creation of an instance, some self-tests may never start at all. Fix this by setting the flag at the start of crypto_start_tests. Note that this race is theoretical and has never been observed in practice. Fixes: adad556efcdd ("crypto: api - Fix built-in testing dependency failures") Signed-off-by: Herbert Xu <herbert.xu@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-12-10  crypto: rsassa-pkcs1 - Copy source data for SG list  (Herbert Xu)
As virtual addresses in general may not be suitable for DMA, always perform a copy before using them in an SG list. Fixes: 1e562deacecc ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend") Reported-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
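The general pattern behind the fix (illustrative; kmemdup(), sg_init_one(), and kfree_sensitive() are standard kernel APIs, while the variable names are not taken from the driver):

    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    /* The caller's buffer may live on the stack or in vmalloc space,
     * neither of which is safe to DMA-map, so copy it first: */
    struct scatterlist sg;
    void *buf = kmemdup(src, len, GFP_KERNEL);

    if (!buf)
            return -ENOMEM;
    sg_init_one(&sg, buf, len);
    /* ... use sg, then kfree_sensitive(buf) when done ... */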
2024-12-02  module: Convert symbol namespace to string literal  (Peter Zijlstra)
Clean up the existing export namespace code along the same lines of commit 33def8498fdd ("treewide: Convert macro and uses of __section(foo) to __section("foo")") and for the same reason, it is not desired for the namespace argument to be a macro expansion itself.

Scripted using:

    git grep -l -e MODULE_IMPORT_NS -e EXPORT_SYMBOL_NS | while read file; do
        awk -i inplace '
        /^#define EXPORT_SYMBOL_NS/ {
            gsub(/__stringify\(ns\)/, "ns");
            print;
            next;
        }
        /^#define MODULE_IMPORT_NS/ {
            gsub(/__stringify\(ns\)/, "ns");
            print;
            next;
        }
        /MODULE_IMPORT_NS/ {
            $0 = gensub(/MODULE_IMPORT_NS\(([^)]*)\)/, "MODULE_IMPORT_NS(\"\\1\")", "g");
        }
        /EXPORT_SYMBOL_NS/ {
            if ($0 ~ /(EXPORT_SYMBOL_NS[^(]*)\(([^,]+),/) {
                if ($0 !~ /(EXPORT_SYMBOL_NS[^(]*)\(([^,]+), ([^)]+)\)/ &&
                    $0 !~ /(EXPORT_SYMBOL_NS[^(]*)\(\)/ &&
                    $0 !~ /^my/) {
                    getline line;
                    gsub(/[[:space:]]*\\$/, "");
                    gsub(/[[:space:]]/, "", line);
                    $0 = $0 " " line;
                }
                $0 = gensub(/(EXPORT_SYMBOL_NS[^(]*)\(([^,]+), ([^)]+)\)/, "\\1(\\2, \"\\3\")", "g");
            }
        }
        { print }' $file;
    done

Requested-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://mail.google.com/mail/u/2/#inbox/FMfcgzQXKWgMmjdFwwdsfgxzKpVHWPlc
Acked-by: Greg KH <gregkh@linuxfoundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
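The net effect on a call site, in crypto subsystem terms (CRYPTO_INTERNAL is a real namespace used by crypto; the exported symbol name here is illustrative):

    /* Before: the namespace is a bare token, __stringify()'d by the macro: */
    MODULE_IMPORT_NS(CRYPTO_INTERNAL);
    EXPORT_SYMBOL_NS_GPL(my_symbol, CRYPTO_INTERNAL);

    /* After: the namespace is an explicit string literal: */
    MODULE_IMPORT_NS("CRYPTO_INTERNAL");
    EXPORT_SYMBOL_NS_GPL(my_symbol, "CRYPTO_INTERNAL");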
2024-12-01  crypto: crct10dif - expose arch-optimized lib function  (Eric Biggers)
Now that crc_t10dif_update() may be directly optimized for each architecture, make the shash driver for crct10dif register a crct10dif-$arch algorithm that uses it, instead of only crct10dif-generic which uses crc_t10dif_generic(). The result is that architecture-optimized crct10dif will remain available through the shash API once the architectures implement crc_t10dif_arch() instead of the shash API. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Link: https://lore.kernel.org/r/20241202012056.209768-4-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
2024-12-01  lib/crc-t10dif: stop wrapping the crypto API  (Eric Biggers)
In preparation for making the CRC-T10DIF library directly optimized for each architecture, like what has been done for CRC32, get rid of the weird layering where crc_t10dif_update() calls into the crypto API. Instead, move crc_t10dif_generic() into the crc-t10dif library module, and make crc_t10dif_update() just call crc_t10dif_generic(). Acceleration will be reintroduced via crc_t10dif_arch() in the following patches. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Link: https://lore.kernel.org/r/20241202012056.209768-2-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
2024-12-01  crypto: crc32 - don't unnecessarily register arch algorithms  (Eric Biggers)
Instead of registering the crc32-$arch and crc32c-$arch algorithms if the arch-specific code was built, only register them when that code was built *and* is not falling back to the base implementation at runtime. This avoids confusing users like btrfs which checks the shash driver name to determine whether it is crc32c-generic. (It would also make sense to change btrfs to test the crc32_optimization flags itself, so that it doesn't have to use the weird hack of parsing the driver name. This change still makes sense either way though.) Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20241202010844.144356-5-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
2024-12-01  lib/crc32: improve support for arch-specific overrides  (Eric Biggers)
Currently the CRC32 library functions are defined as weak symbols, and the arm64 and riscv architectures override them. This method of arch-specific overrides has the limitation that it only works when both the base and arch code is built-in. Also, it makes the arch-specific code be silently not used if it is accidentally built with lib-y instead of obj-y; unfortunately the RISC-V code does this. This commit reorganizes the code to have explicit *_arch() functions that are called when they are enabled, similar to how some of the crypto library code works (e.g. chacha_crypt() calls chacha_crypt_arch()). Make the existing kconfig choice for the CRC32 implementation also control whether the arch-optimized implementation (if one is available) is enabled or not. Make it enabled by default if CRC32 is also enabled. The result is that arch-optimized CRC32 library functions will be included automatically when appropriate, but it is now possible to disable them. They can also now be built as a loadable module if the CRC32 library functions happen to be used only by loadable modules, in which case the arch and base CRC32 modules will be automatically loaded via direct symbol dependency when appropriate. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20241202010844.144356-3-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
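The dispatch pattern this describes, sketched for the little-endian variant (function and kconfig names follow the commit text; treat the exact code as approximate):

    static inline u32 crc32_le(u32 crc, const u8 *p, size_t len)
    {
            if (IS_ENABLED(CONFIG_CRC32_ARCH))
                    return crc32_le_arch(crc, p, len);
            return crc32_le_base(crc, p, len);
    }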
2024-12-01  lib/crc32: drop leading underscores from __crc32c_le_base  (Eric Biggers)
Remove the leading underscores from __crc32c_le_base(). This is in preparation for adding crc32c_le_arch() and eventually renaming __crc32c_le() to crc32c_le(). Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20241202010844.144356-2-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
2024-11-19  Merge tag 'random-6.13-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random  (Linus Torvalds)
Pull random number generator updates from Jason Donenfeld: "This contains a single series from Uros to replace uses of <linux/random.h> with prandom.h or other more specific headers as needed, in order to avoid a circular header issue. Uros' goal is to be able to use percpu.h from prandom.h, which will then allow him to define __percpu in percpu.h rather than in compiler_types.h"

* tag 'random-6.13-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
  prandom: Include <linux/percpu.h> in <linux/prandom.h>
  random: Do not include <linux/prandom.h> in <linux/random.h>
  netem: Include <linux/prandom.h> in sch_netem.c
  lib/test_scanf: Include <linux/prandom.h> instead of <linux/random.h>
  lib/test_parman: Include <linux/prandom.h> instead of <linux/random.h>
  bpf/tests: Include <linux/prandom.h> instead of <linux/random.h>
  lib/rbtree-test: Include <linux/prandom.h> instead of <linux/random.h>
  random32: Include <linux/prandom.h> instead of <linux/random.h>
  kunit: string-stream-test: Include <linux/prandom.h>
  lib/interval_tree_test.c: Include <linux/prandom.h> instead of <linux/random.h>
  bpf: Include <linux/prandom.h> instead of <linux/random.h>
  scsi: libfcoe: Include <linux/prandom.h> instead of <linux/random.h>
  fscrypt: Include <linux/once.h> in fs/crypto/keyring.c
  mtd: tests: Include <linux/prandom.h> instead of <linux/random.h>
  media: vivid: Include <linux/prandom.h> in vivid-vid-cap.c
  drm/lib: Include <linux/prandom.h> instead of <linux/random.h>
  drm/i915/selftests: Include <linux/prandom.h> instead of <linux/random.h>
  crypto: testmgr: Include <linux/prandom.h> instead of <linux/random.h>
  x86/kaslr: Include <linux/prandom.h> instead of <linux/random.h>
2024-11-19  Merge tag 'v6.13-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
Pull crypto updates from Herbert Xu:

 "API:
   - Add sig driver API
   - Remove signing/verification from akcipher API
   - Move crypto_simd_disabled_for_test to lib/crypto
   - Add WARN_ON for return values from driver that indicates memory corruption

  Algorithms:
   - Provide crc32-arch and crc32c-arch through Crypto API
   - Optimise crc32c code size on x86
   - Optimise crct10dif on arm/arm64
   - Optimise p10-aes-gcm on powerpc
   - Optimise aegis128 on x86
   - Output full sample from test interface in jitter RNG
   - Retry without padata when it fails in pcrypt

  Drivers:
   - Add support for Airoha EN7581 TRNG
   - Add support for STM32MP25x platforms in stm32
   - Enable iproc-r200 RNG driver on BCMBCA
   - Add Broadcom BCM74110 RNG driver"

* tag 'v6.13-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (112 commits)
  crypto: marvell/cesa - fix uninit value for struct mv_cesa_op_ctx
  crypto: cavium - Fix an error handling path in cpt_ucode_load_fw()
  crypto: aesni - Move back to module_init
  crypto: lib/mpi - Export mpi_set_bit
  crypto: aes-gcm-p10 - Use the correct bit to test for P10
  hwrng: amd - remove reference to removed PPC_MAPLE config
  crypto: arm/crct10dif - Implement plain NEON variant
  crypto: arm/crct10dif - Macroify PMULL asm code
  crypto: arm/crct10dif - Use existing mov_l macro instead of __adrl
  crypto: arm64/crct10dif - Remove remaining 64x64 PMULL fallback code
  crypto: arm64/crct10dif - Use faster 16x64 bit polynomial multiply
  crypto: arm64/crct10dif - Remove obsolete chunking logic
  crypto: bcm - add error check in the ahash_hmac_init function
  crypto: caam - add error check to caam_rsa_set_priv_key_form
  hwrng: bcm74110 - Add Broadcom BCM74110 RNG driver
  dt-bindings: rng: add binding for BCM74110 RNG
  padata: Clean up in padata_do_multithreaded()
  crypto: inside-secure - Fix the return value of safexcel_xcbcmac_cra_init()
  crypto: qat - Fix missing destroy_workqueue in adf_init_aer()
  crypto: rsassa-pkcs1 - Reinstate support for legacy protocols
  ...
2024-11-10  crypto: rsassa-pkcs1 - Reinstate support for legacy protocols  (Lukas Wunner)
Commit 1e562deacecc ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend") enforced that rsassa-pkcs1 sign/verify operations specify a hash algorithm. That is necessary because per RFC 8017 sec 8.2, a hash algorithm identifier must be prepended to the hash before generating or verifying the signature ("Full Hash Prefix"). However the commit went too far in that it changed user space behavior: KEYCTL_PKEY_QUERY system calls now return -EINVAL unless they specify a hash algorithm. Intel Wireless Daemon (iwd) is one application issuing such system calls (for EAP-TLS). Closer analysis of the Embedded Linux Library (ell) used by iwd reveals that the problem runs even deeper: When iwd uses TLS 1.1 or earlier, it not only queries for keys, but performs sign/verify operations without specifying a hash algorithm. These legacy TLS versions concatenate an MD5 to a SHA-1 hash and omit the Full Hash Prefix: https://git.kernel.org/pub/scm/libs/ell/ell.git/tree/ell/tls-suites.c#n97 TLS 1.1 was deprecated in 2021 by RFC 8996, but removal of support was inadvertent in this case. It probably should be coordinated with iwd maintainers first. So reinstate support for such legacy protocols by defaulting to hash algorithm "none" which uses an empty Full Hash Prefix. If it is later on decided to remove TLS 1.1 support but still allow KEYCTL_PKEY_QUERY without a hash algorithm, that can be achieved by reverting the present commit and replacing it with the following patch: https://lore.kernel.org/r/ZxalYZwH5UiGX5uj@wunner.de/ It's worth noting that Python's cryptography library gained support for such legacy use cases very recently, so they do seem to still be a thing. The Python developers identified IKE version 1 as another protocol omitting the Full Hash Prefix: https://github.com/pyca/cryptography/issues/10226 https://github.com/pyca/cryptography/issues/5495 The author of those issues, Zoltan Kelemen, spent considerable effort searching for test vectors but only found one in a 2019 blog post by Kevin Jones. Add it to testmgr.h to verify correctness of this feature. Examination of wpa_supplicant as well as various IKE daemons (libreswan, strongswan, isakmpd, raccoon) has determined that none of them seems to use the kernel's Key Retention Service, so iwd is the only affected user space application known so far. Fixes: 1e562deacecc ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend") Reported-by: Klara Modin <klarasmodin@gmail.com> Tested-by: Klara Modin <klarasmodin@gmail.com> Closes: https://lore.kernel.org/r/2ed09a22-86c0-4cf0-8bda-ef804ccb3413@gmail.com/ Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-02  crypto: asymmetric_keys - Remove unused functions  (Dr. David Alan Gilbert)
encrypt_blob(), decrypt_blob() and create_signature() were among the functions added in 2018 by commit 5a30771832aa ("KEYS: Provide missing asymmetric key subops for new key type ops [ver #2]"); however, they have never been used. Remove them. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: api - move crypto_simd_disabled_for_test to lib  (Eric Biggers)
Move crypto_simd_disabled_for_test to lib/ so that crypto_simd_usable() can be used by library code. This was discussed previously (https://lore.kernel.org/linux-crypto/20220716062920.210381-4-ebiggers@kernel.org/) but was not done because there was no use case yet. However, this is now needed for the arm64 CRC32 library code.

Tested with:

    export ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
    echo CONFIG_CRC32=y > .config
    echo CONFIG_MODULES=y >> .config
    echo CONFIG_CRYPTO=m >> .config
    echo CONFIG_DEBUG_KERNEL=y >> .config
    echo CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=n >> .config
    echo CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y >> .config
    make olddefconfig
    make -j$(nproc)

Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
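For context, a rough sketch of how the moved symbol is consumed (the per-CPU detail is an assumption; see include/crypto/internal/simd.h for the real definition):

    /* The extra self-tests flip this to force non-SIMD fallback paths: */
    DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test);

    static inline bool crypto_simd_usable(void)
    {
            return may_use_simd() &&
                   !this_cpu_read(crypto_simd_disabled_for_test);
    }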
2024-10-28  crypto: crc32c - Provide crc32c-arch driver for accelerated library code  (Ard Biesheuvel)
crc32c-generic is currently backed by the architecture's CRC-32c library code, which may offer a variety of implementations depending on the capabilities of the platform. These are not covered by the crypto subsystem's fuzz testing capabilities because crc32c-generic is the reference driver that the fuzzing logic uses as a source of truth. Fix this by providing a crc32c-arch implementation which is based on the arch library code if available, and modify crc32c-generic so it is always based on the generic C implementation. If the arch has no CRC-32c library code, this change does nothing. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: crc32 - Provide crc32-arch driver for accelerated library code  (Ard Biesheuvel)
crc32-generic is currently backed by the architecture's CRC-32 library code, which may offer a variety of implementations depending on the capabilities of the platform. These are not covered by the crypto subsystem's fuzz testing capabilities because crc32-generic is the reference driver that the fuzzing logic uses as a source of truth. Fix this by providing a crc32-arch implementation which is based on the arch library code if available, and modify crc32-generic so it is always based on the generic C implementation. If the arch has no CRC-32 library code, this change does nothing. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: drbg - Use str_true_false() and str_enabled_disabled() helpers  (Thorsten Blum)
Remove hard-coded strings by using the helper functions str_true_false() and str_enabled_disabled(). Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-28  crypto: pcrypt - Call crypto layer directly when padata_do_parallel() returns -EBUSY  (Yi Yang)
Since commit 8f4f68e788c3 ("crypto: pcrypt - Fix hungtask for PADATA_RESET"), the pcrypt encryption and decryption operations return -EAGAIN when a CPU goes online or offline. In alg_test(), a WARN is generated when pcrypt_aead_decrypt() or pcrypt_aead_encrypt() returns -EAGAIN; an unnecessary panic then occurs when panic_on_warn is set to 1. Fix this issue by calling the crypto layer directly, without parallelization, in that case. Fixes: 8f4f68e788c3 ("crypto: pcrypt - Fix hungtask for PADATA_RESET") Signed-off-by: Yi Yang <yiyang13@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
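The shape of the fix (simplified sketch; psenc, cb_cpu, and creq are per-context names in the pcrypt driver, reproduced here approximately rather than verbatim):

    err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
    if (err == -EBUSY) {
            /* A CPU went online/offline: skip parallelization and call
             * the underlying aead directly. */
            return crypto_aead_encrypt(creq);
    }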
2024-10-28  crypto: ecdsa - Update Kconfig help text for NIST P521  (Lukas Wunner)
Commit a7d45ba77d3d ("crypto: ecdsa - Register NIST P521 and extend test suite") added support for ECDSA signature verification using NIST P521, but forgot to amend the Kconfig help text. Fix it. Fixes: a7d45ba77d3d ("crypto: ecdsa - Register NIST P521 and extend test suite") Signed-off-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-26  crypto: sig - Fix oops on KEYCTL_PKEY_QUERY for RSA keys  (Lukas Wunner)
Commit a2471684dae2 ("crypto: ecdsa - Move X9.62 signature size calculation into template") introduced ->max_size() and ->digest_size() callbacks to struct sig_alg. They return an algorithm's maximum signature size and digest size, respectively. For algorithms which lack these callbacks, crypto_register_sig() was amended to use the ->key_size() callback instead. However the commit neglected to also amend sig_register_instance(). As a result, the ->max_size() and ->digest_size() callbacks remain NULL pointers if instances do not define them. A KEYCTL_PKEY_QUERY system call results in an oops for such instances:

    BUG: kernel NULL pointer dereference, address: 0000000000000000
    Call Trace:
     software_key_query+0x169/0x370
     query_asymmetric_key+0x67/0x90
     keyctl_pkey_query+0x86/0x120
     __do_sys_keyctl+0x428/0x480
     do_syscall_64+0x4b/0x110

The only instances affected by this are "pkcs1(rsa, ...)". Fix by moving the callback checks from crypto_register_sig() to sig_prepare_alg(), which is also invoked by sig_register_instance(). Change the return type of sig_prepare_alg() from void to int to be able to return errors. This matches other algorithm types, see e.g. aead_prepare_alg() or ahash_prepare_alg(). Fixes: a2471684dae2 ("crypto: ecdsa - Move X9.62 signature size calculation into template") Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
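A sketch of the fix as described: default the callbacks in sig_prepare_alg(), which both registration paths invoke, and let it return errors (key_size, max_size, and digest_size are real struct sig_alg members; the body is approximate):

    static int sig_prepare_alg(struct sig_alg *alg)
    {
            if (!alg->key_size)
                    return -EINVAL;
            if (!alg->max_size)
                    alg->max_size = alg->key_size;
            if (!alg->digest_size)
                    alg->digest_size = alg->key_size;
            /* ... existing base algorithm setup ... */
            return 0;
    }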
2024-10-19  crypto: jitter - output full sample from test interface  (Joachim Vandersmissen)
The Jitter RNG time delta is computed based on the difference of two high-resolution, 64-bit time stamps. However, the test interface added in commit 69f1c387ba only outputs the lower 32 bits of those time stamps. To ensure all information is available during the evaluation process of the Jitter RNG, output the full 64-bit time stamps. Any clients collecting data from the test interface will need to be updated to take this change into account. Additionally, the size of the temporary buffer that holds the data for user space has been clarified. Previously, this buffer was JENT_TEST_RINGBUFFER_SIZE (= 1000) bytes in size; however, that value represents the number of samples held in the kernel space ring buffer, with each sample taking 8 (previously 4) bytes. Rather than increasing the size to allow for all 1000 samples to be output, we keep it at 1000 bytes, but clarify that this means at most 125 64-bit samples will be output every time this interface is called. Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Joachim Vandersmissen <git@jvdsn.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-16  Merge tag 'v6.12-p3' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
Pull crypto fixes from Herbert Xu:
 - Remove bogus testmgr ENOENT error messages
 - Ensure algorithm is still alive before marking it as tested
 - Disable buggy hash algorithms in marvell/cesa

* tag 'v6.12-p3' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: marvell/cesa - Disable hash algorithms
  crypto: testmgr - Hide ENOENT errors better
  crypto: api - Fix liveliness check in crypto_alg_tested
2024-10-10  crypto: testmgr - Hide ENOENT errors better  (Herbert Xu)
The previous patch removed the ENOENT warning at the point of allocation, but the overall self-test warning is still there. Fix all of them by returning zero as the test result. This is safe because if the algorithm has gone away, then it cannot be marked as tested. Fixes: 4eded6d14f5b ("crypto: testmgr - Hide ENOENT errors") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-10  crypto: api - Fix liveliness check in crypto_alg_tested  (Herbert Xu)
As algorithm testing is carried out without holding the main crypto lock, it is always possible for the algorithm to go away during the test. So before crypto_alg_tested updates the status of the tested alg, it checks whether it's still on the list of all algorithms. This is inaccurate because it may be off the main list but still on the list of algorithms to be removed. Updating the algorithm status is safe per se as the larval still holds a reference to it. However, killing spawns of other algorithms that are of lower priority is clearly a deficiency as it adds unnecessary churn. Fix the test by checking whether the algorithm is dead. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
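What the corrected check amounts to, sketched (CRYPTO_ALG_DEAD is the existing flag; the surrounding control flow is assumed):

    /* The larval holds a reference, so 'alg' cannot be freed out from
     * under us; but it may have been unregistered, which marks it dead: */
    if (alg->cra_flags & CRYPTO_ALG_DEAD)
            goto complete;  /* skip status update and spawn killing */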
2024-10-05  crypto: ecrdsa - Fix signature size calculation  (Lukas Wunner)
software_key_query() returns the curve size as the maximum signature size for ecrdsa. However, it should return twice as much, since a signature consists of two curve-sized integers, r and s. It's only the maximum signature size that is off: the maximum digest size is likewise set to the curve size, but that's correct, as it matches the checks in ecrdsa_set_pub_key() and ecrdsa_verify(). Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-10-05  crypto: ecdsa - Support P1363 signature decoding  (Lukas Wunner)
Alternatively to the X9.62 encoding of ecdsa signatures, which uses ASN.1 and is already supported by the kernel, there's another common encoding called P1363. It stores r and s as the concatenation of two big endian, unsigned integers. The name originates from IEEE P1363. Add a P1363 template in support of the forthcoming SPDM library (Security Protocol and Data Model) for PCI device authentication. P1363 is prescribed by SPDM 1.2.1 margin no 44: "For ECDSA signatures, excluding SM2, in SPDM, the signature shall be the concatenation of r and s. The size of r shall be the size of the selected curve. Likewise, the size of s shall be the size of the selected curve. See BaseAsymAlgo in NEGOTIATE_ALGORITHMS for the size of r and s. The byte order for r and s shall be in big endian order. When placing ECDSA signatures into an SPDM signature field, r shall come first followed by s." Link: https://www.dmtf.org/sites/default/files/standards/documents/DSP0274_1.2.1.pdf Signed-off-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
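A self-contained sketch of P1363 decoding per the SPDM text quoted above (function and parameter names are illustrative; for NIST P-256, csz = 32 and the signature is 64 bytes):

    static int p1363_decode(const u8 *sig, unsigned int sig_len,
                            u8 *r, u8 *s, unsigned int csz)
    {
            if (sig_len != 2 * csz)
                    return -EINVAL;
            memcpy(r, sig, csz);            /* r comes first, big endian */
            memcpy(s, sig + csz, csz);      /* followed by s, big endian */
            return 0;
    }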