Provide an option to handle partial blocks in the shash API.
Almost every hash algorithm has a block size and is only able
to hash partial blocks on finalisation.
Rather than duplicating the partial block handling many times,
add this functionality to the shash API.
It is optional (e.g., hmac will never need it, as it can rely on
the partial block handling of the underlying hash); to enable it,
set the bit CRYPTO_AHASH_ALG_BLOCK_ONLY.
The export format is always that of the underlying hash export,
plus the partial block buffer, followed by a single byte for the
partial block length.
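For illustration, the export blob can be pictured as follows (a
sketch only; the field sizes stand in for the underlying
algorithm's actual values and are not real kernel constants):

    struct shash_export_blob {            /* illustrative only */
            u8 core[CORE_EXPORT_SIZE];    /* underlying hash's export */
            u8 partial[BLOCK_SIZE];       /* buffered partial block */
            u8 partial_len;               /* bytes currently buffered */
    };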
Set the bit CRYPTO_AHASH_ALG_FINAL_NONZERO to withhold an extra
byte in the partial block. This will come in handy when this
is extended to ahash where hardware often can't deal with a
zero-length final.
It will also be used for algorithms requiring an extra block for
finalisation (e.g., cmac).
As an optimisation, set the bit CRYPTO_AHASH_ALG_FINUP_MAX if
the algorithm wishes to get as much data as possible instead of
just the last partial block.
The descriptor will be zeroed after finalisation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge crypto tree to pick up scompress off-by-one patch. The
merge resolution is non-trivial as the dst handling code has been
moved in front of the src.
Fix off-by-one bug in the last page calculation for src and dst.
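The arithmetic in question has the following shape (a sketch, with
off being the offset into the first page and len the region length):

    last = (off + len - 1) / PAGE_SIZE;   /* correct */
    last = (off + len) / PAGE_SIZE;       /* off by one whenever the
                                           * region ends exactly on a
                                           * page boundary */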
Reported-by: Nhat Pham <nphamcs@gmail.com>
Fixes: 2d3553ecb4 ("crypto: scomp - Remove support for some non-trivial SG lists")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The return statements were missing, which caused REQ_CHAIN algorithms
to execute twice for every request.
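The shape of the bug was roughly the following (names are
placeholders, not the actual functions):

    if (acomp_request_chained(req)) {
            err = acomp_do_chained(req);
            return err;   /* this return was missing, so control
                           * fell through and issued the request
                           * a second time */
    }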
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 64929fe8c0 ("crypto: acomp - Remove request chaining")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The recent code changes in this function triggered a false-positive
maybe-uninitialized warning in software_key_query. Rearrange the
code by moving the sig/tfm variables into the if clause where they
are actually used.
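The pattern, sketched with placeholder names, is simply:

    if (issig) {
            /* Declared in the branch that uses them, so the
             * compiler can see they are initialized on every
             * path on which they are read. */
            struct crypto_sig *sig;
            ...
    }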
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Following the example of the crc32, crc32c, and chacha code, make the
crypto subsystem register both generic and architecture-optimized
poly1305 shash algorithms, both implemented on top of the appropriate
library functions. This eliminates the need for every architecture to
implement the same shash glue code.
Note that the poly1305 shash requires that the key be prepended to the
data, which differs from the library functions where the key is simply a
parameter to poly1305_init(). Previously this was handled at a fairly
low level, polluting the library code with shash-specific code.
Reorganize things so that the shash code handles this quirk itself.
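To make the quirk concrete: with the library API the key is an
explicit parameter,

    poly1305_init(&desc, key);
    poly1305_update(&desc, data, len);
    poly1305_final(&desc, digest);

whereas the shash consumes the first POLY1305_KEY_SIZE bytes of the
data stream as the key before hashing the message proper.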
Also, to register the architecture-optimized shashes only when
architecture-optimized code is actually being used, add a function
poly1305_is_arch_optimized() and make each arch implement it. Change
each architecture's Poly1305 module_init function to arch_initcall so
that the CPU feature detection is guaranteed to run before
poly1305_is_arch_optimized() gets called by crypto/poly1305.c. (In
cases where poly1305_is_arch_optimized() just returns true
unconditionally, using arch_initcall is not strictly needed, but it's
still good to be consistent across architectures.)
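A minimal sketch of what an architecture now provides (the static
key name and init function name are illustrative, not taken from
any particular arch):

    bool poly1305_is_arch_optimized(void)
    {
            return static_key_enabled(&have_simd);
    }
    EXPORT_SYMBOL(poly1305_is_arch_optimized);

    arch_initcall(poly1305_mod_init);   /* was module_init() */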
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The block size of a hash algorithm is meant to be the number of
bytes its block function can handle. For cbcmac that should be
the block size of the underlying block cipher instead of one.
Set the block size of all cbcmac implementations accordingly.
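Concretely, for cbcmac(aes) this amounts to something along these
lines at instance setup (illustrative):

    inst->alg.base.cra_blocksize = AES_BLOCK_SIZE;   /* previously 1 */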
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Allow any ahash to be used with a stack request, with optional
dynamic allocation when async is needed. The intended usage is:
    HASH_REQUEST_ON_STACK(req, tfm);
    ...
    err = crypto_ahash_digest(req);
    /* The request cannot complete synchronously. */
    if (err == -EAGAIN) {
            /* This will not fail. */
            req = HASH_REQUEST_CLONE(req, gfp);
            /* Redo operation. */
            err = crypto_ahash_digest(req);
    }
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As all users of the dynamic descsize have been converted to use
a static one instead, remove support for dynamic descsize.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than setting descsize in init_tfm, make it an algorithm
attribute and set it during instance construction.
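That is, instead of calling a setter from init_tfm, an instance now
does roughly the following (struct my_desc_ctx is a placeholder):

    inst->alg.descsize = sizeof(struct my_desc_ctx);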
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the bit CRYPTO_ALG_DUP_FIRST is set, an algorithm will be
duplicated by kmemdup before registration. This is intended for
hardware-based algorithms that may be unplugged at will.
Do not use this if the algorithm data structure is embedded in a
bigger data structure. Perform the duplication in the driver
instead.
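A driver whose algorithm object is a standalone allocation would
opt in roughly like this (sketch):

    alg->cra_flags |= CRYPTO_ALG_DUP_FIRST;
    err = crypto_register_alg(alg);   /* core kmemdups alg first */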
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The current algorithm unregistration mechanism originated from
software crypto. The code relies on module reference counts to
stop in-use algorithms from being unregistered. Therefore if
the unregistration function is reached, it is assumed that the
module reference count has hit zero and thus the algorithm reference
count should be exactly 1.
This is completely broken for hardware devices, which can be
unplugged at random.
Fix this by allowing algorithms to be destroyed later if a destroy
callback is provided.
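A hardware driver would then supply something along these lines
(the hook name and structures are illustrative, not the exact API):

    static void my_hw_destroy(struct crypto_alg *alg)
    {
            struct my_hw_alg *p = container_of(alg, struct my_hw_alg,
                                               base);

            /* Safe: runs only once the last reference is gone. */
            kfree(p);
    }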
Reported-by: Sean Anderson <sean.anderson@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the destination buffer has a fixed length, strscpy() automatically
determines its size using sizeof() when the size argument is omitted.
This makes the explicit size argument unnecessary, so remove it.
No functional changes intended.
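In other words (illustrative):

    char buf[32];

    strscpy(buf, src, sizeof(buf));   /* before */
    strscpy(buf, src);                /* after: size inferred from buf */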
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When user space issues a KEYCTL_PKEY_QUERY system call for a NIST P521
key, the key_size is incorrectly reported as 528 bits instead of 521.
That's because the key size obtained through crypto_sig_keysize() is in
bytes and software_key_query() multiplies by 8 to yield the size in bits.
The underlying assumption is that the key size is always a multiple of 8.
With the recent addition of NIST P521, that's no longer the case.
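Worked through for P521:

    bytes = DIV_ROUND_UP(521, 8) = 66
    bits  = 66 * 8               = 528   /* reported */

while the true key size is 521 bits.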
Fix by returning the key_size in bits from crypto_sig_keysize() and
adjusting the calculations in software_key_query().
The ->key_size() callbacks of sig_alg algorithms now return the size in
bits, whereas the ->digest_size() and ->max_size() callbacks return the
size in bytes. This matches with the units in struct keyctl_pkey_query.
Fixes: a7d45ba77d ("crypto: ecdsa - Register NIST P521 and extend test suite")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
KEYCTL_PKEY_QUERY system calls for ecdsa keys return the key size as
max_enc_size and max_dec_size, even though such keys cannot be used for
encryption/decryption. They're exclusively for signature generation or
verification.
Only rsa keys with pkcs1 encoding can also be used for encryption or
decryption.
Return 0 instead for ecdsa keys (as well as ecrdsa keys).
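In the query path this amounts to (sketch):

    info->max_enc_size = 0;   /* no encryption with ecdsa/ecrdsa keys */
    info->max_dec_size = 0;   /* no decryption either */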
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than storing the folio as is and handling it later, convert
it to a scatterlist right away.
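A single-entry conversion of this kind looks roughly like the
following (sketch; len and offset come from the request):

    sg_init_table(sg, 1);
    sg_set_page(sg, folio_page(folio, 0), len, offset);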
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a new helper ACOMP_REQUEST_CLONE that will transform a stack
request into a dynamically allocated one if possible, and otherwise
switch it over to the synchronous fallback transform. The intended
usage is:
    ACOMP_REQUEST_ON_STACK(req, tfm);
    ...
    err = crypto_acomp_compress(req);
    /* The request cannot complete synchronously. */
    if (err == -EAGAIN) {
            /* This will not fail. */
            req = ACOMP_REQUEST_CLONE(req, gfp);
            /* Redo operation. */
            err = crypto_acomp_compress(req);
    }
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a helper to create an on-stack fallback request from a given
request. Use this helper in acomp_do_nondma.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use kzalloc() to zero out the one-element array instead of using
kmalloc() followed by a manual NUL-termination.
No functional changes intended.
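That is (illustrative):

    buf = kmalloc(1, GFP_KERNEL);   /* before ... */
    if (buf)
            buf[0] = '\0';          /* ... plus manual NUL */

    buf = kzalloc(1, GFP_KERNEL);   /* after: already zeroed */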
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 99585c2192.
Remove the acomp multibuffer tests so that the interface can be
redesigned.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge crypto tree to pick up scompress and ahash fixes. The
scompress fix becomes mostly unnecessary as the bugs no longer
exist with the new acompress code. However, keep the NULL assignment
in crypto_acomp_free_streams so that if the user decides to call
crypto_acomp_alloc_streams again it will work.
Disable hash request chaining in case a driver that copies an
ahash_request object by hand accidentally triggers chaining.
Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Fixes: f2ffe5a918 ("crypto: hash - Add request chaining API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Manorit Chawdhry <m-chawdhry@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to use scomp_free_streams to free the partially allocated
streams in the allocation error path, move the alg->stream assignment
to the beginning. Also check for error pointers in scomp_free_streams
before freeing the ctx.
Finally, set alg->stream to NULL so that subsequent attempts to
allocate the streams are not broken.
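The fixed free path then has roughly this shape (the per-stream
field names are placeholders):

    for_each_possible_cpu(cpu) {
            struct crypto_acomp_stream *ps =
                    per_cpu_ptr(alg->stream, cpu);

            if (IS_ERR_OR_NULL(ps->ctx))
                    continue;       /* allocation failed part-way */
            alg->free_ctx(ps->ctx);
    }
    alg->stream = NULL;             /* allow re-allocation later */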
Fixes: 3d72ad46a2 ("crypto: acomp - Move stream management into scomp layer")
Reported-by: syzkaller <syzkaller@googlegroups.com>
Co-developed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Co-developed-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge crypto tree to pick up scompress and caam fixes. The scompress
fix has a non-trivial resolution as the code in question has moved
over to acompress.
As the scomp streams are freed when an algorithm is unregistered,
it is possible that the algorithm has never been used at all (e.g.,
an algorithm that does not have a self-test). So test whether the
streams exist before freeing them.
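The guard is a one-liner (sketch):

    if (!alg->stream)   /* never allocated, e.g. no self-test ran */
            return;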
Reported-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Fixes: 3d72ad46a2 ("crypto: acomp - Move stream management into scomp layer")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
<crypto/internal/chacha.h> is now included only by crypto/chacha.c, so
fold it into there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Following the example of the crc32 and crc32c code, make the crypto
subsystem register both generic and architecture-optimized chacha20,
xchacha20, and xchacha12 skcipher algorithms, all implemented on top of
the appropriate library functions. This eliminates the need for every
architecture to implement the same skcipher glue code.
To register the architecture-optimized skciphers only when
architecture-optimized code is actually being used, add a function
chacha_is_arch_optimized() and make each arch implement it. Change each
architecture's ChaCha module_init function to arch_initcall so that the
CPU feature detection is guaranteed to run before
chacha_is_arch_optimized() gets called by crypto/chacha.c. In the case
of s390, remove the CPU feature based module autoloading, which is no
longer needed since the module just gets pulled in via function linkage.
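Sketched, the registration logic in crypto/chacha.c now looks
something like this (the array names are placeholders):

    static int __init crypto_chacha_mod_init(void)
    {
            int err;

            err = crypto_register_skciphers(algs, ARRAY_SIZE(algs));
            if (err)
                    return err;

            if (chacha_is_arch_optimized()) {
                    err = crypto_register_skciphers(arch_algs,
                                                    ARRAY_SIZE(arch_algs));
                    if (err)
                            crypto_unregister_skciphers(algs,
                                                        ARRAY_SIZE(algs));
            }
            return err;
    }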
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>