Compare commits

...

18 Commits

Author SHA1 Message Date
Thorsten Blum
be0240f657 crypto: qce - use memcpy_and_pad in qce_aead_setkey
Replace memset() followed by memcpy() with memcpy_and_pad() to simplify
the code and to write to ->auth_key only once.
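The replaced pattern can be sketched in plain C; the helper below is a userspace model of `memcpy_and_pad()` with the kernel signature, not the kernel's actual implementation:

```c
#include <assert.h>
#include <string.h>

/* Userspace model of the kernel helper: copy count bytes of src into
 * dest, then fill the remaining dest_len - count bytes with pad. */
static void memcpy_and_pad(void *dest, size_t dest_len,
			   const void *src, size_t count, int pad)
{
	if (count < dest_len) {
		memcpy(dest, src, count);
		memset((char *)dest + count, pad, dest_len - count);
	} else {
		memcpy(dest, src, dest_len);
	}
}
```

The call site shrinks from `memset(auth_key, 0, sizeof(auth_key)); memcpy(auth_key, key, keylen);` to a single `memcpy_and_pad(auth_key, sizeof(auth_key), key, keylen, 0);`, which also avoids writing the copied region twice.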

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:44 +09:00
Mieczyslaw Nalewaj
fdacdc8cf8 crypto: inside-secure/eip93 - add missing address terminator character
Add the missing '>' characters to the end of the email addresses.

Signed-off-by: Mieczyslaw Nalewaj <namiltd@yahoo.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:44 +09:00
Mieczyslaw Nalewaj
9503ab5a1d crypto: inside-secure/eip93 - correct ecb(des-eip93) typo
Correct the typo in the name "ecb(des-eip93)".

Signed-off-by: Mieczyslaw Nalewaj <namiltd@yahoo.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:44 +09:00
Wenkai Lin
67b53a660e crypto: hisilicon/sec2 - prevent req used-after-free for sec
During packet transmission, if the system is under heavy load,
the hardware might complete processing the packet and free the
request memory (req) before the transmission function finishes.
If the software subsequently accesses this req, a use-after-free
error will occur. The qp_ctx memory exists throughout the packet
sending process, so replace the req with the qp_ctx.
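The fix pattern can be illustrated with a hedged userspace sketch (all names below are hypothetical stand-ins, not the driver's real API): capture the long-lived qp_ctx pointer before handing the request to the hardware, and never dereference req afterwards.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the driver's structures. */
struct qp_ctx {
	long send_cnt;		/* lives as long as the queue pair */
};

struct sec_req {
	struct qp_ctx *qp_ctx;
};

/* Models the hardware completing immediately and freeing the request,
 * which is what can happen under heavy load. */
static void hw_submit(struct sec_req *req)
{
	free(req);
}

static void qp_send_message(struct sec_req *req)
{
	/* Capture long-lived state BEFORE submission ... */
	struct qp_ctx *qp_ctx = req->qp_ctx;

	hw_submit(req);		/* req may already be freed from here on */

	/* ... so post-submission bookkeeping never touches req. */
	qp_ctx->send_cnt++;
}
```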

Fixes: f0ae287c50 ("crypto: hisilicon/sec2 - implement full backlog mode for sec")
Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:44 +09:00
Eric Biggers
07fa25957a crypto: cryptd - Remove unused functions
Many functions in cryptd.c no longer have any caller.  Remove them.

Also remove several associated structs and includes.  Finally, inline
cryptd_shash_desc() into its only caller, allowing it to be removed too.

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:44 +09:00
Aleksander Jan Bajkowski
5c8009f3c1 crypto: inside-secure/eip93 - make it selectable for ECONET
Econet SoCs feature an integrated EIP93 in revision 3.0p1. It is identical
to the one used by the Airoha AN7581 and the MediaTek MT7621. Ahmed reports
that the EN7528 passes testmgr's self-tests. This driver should also work
on other little endian Econet SoCs.

CC: Ahmed Naseef <naseefkm@gmail.com>
Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl>
Reviewed-by: Antoine Tenart <atenart@kernel.org>
Tested-by: Ahmed Naseef <naseefkm@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:44 +09:00
T Pratham
a09c5e0649 crypto: ti - Add support for AES-CCM in DTHEv2 driver
AES-CCM is an AEAD algorithm supporting both encryption and
authentication of data. This patch introduces support for AES-CCM AEAD
algorithm in the DTHEv2 driver.

Signed-off-by: T Pratham <t-pratham@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:44 +09:00
T Pratham
37b902c603 crypto: ti - Add support for AES-GCM in DTHEv2 driver
AES-GCM is an AEAD algorithm supporting both encryption and
authentication of data. This patch introduces support for AES-GCM as the
first AEAD algorithm supported by the DTHEv2 driver.

Signed-off-by: T Pratham <t-pratham@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:43 +09:00
Thorsten Blum
92c0a9bbcd crypto: stm32 - use list_first_entry_or_null to simplify cryp_find_dev
Use list_first_entry_or_null() to simplify stm32_cryp_find_dev() and
remove the now-unused local variable 'struct stm32_cryp *tmp'.
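The idiom can be modeled in userspace C; the list macros below are simplified re-implementations of the `<linux/list.h>` helpers, for illustration only:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified userspace versions of the kernel list helpers. */
struct list_head {
	struct list_head *next, *prev;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* NULL for an empty list, otherwise the first entry. */
#define list_first_entry_or_null(head, type, member) \
	((head)->next != (head) ? \
		container_of((head)->next, type, member) : NULL)

struct stm32_cryp {
	int id;
	struct list_head list;
};

/* Before: loop over the list with a 'tmp' variable just to grab the
 * first entry. After: one call, no temporary needed. */
static struct stm32_cryp *find_first(struct list_head *dev_list)
{
	return list_first_entry_or_null(dev_list, struct stm32_cryp, list);
}
```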

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Kees Cook <kees@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:43 +09:00
Thorsten Blum
1a9670df56 crypto: stm32 - use list_first_entry_or_null to simplify hash_find_dev
Use list_first_entry_or_null() to simplify stm32_hash_find_dev() and
remove the now-unused local variable 'struct stm32_hash_dev *tmp'.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:43 +09:00
Aleksander Jan Bajkowski
d0c0a414cc crypto: testmgr - Add test vectors for authenc(hmac(md5),rfc3686(ctr(aes)))
Test vectors were generated starting from the existing RFC3686(CTR(AES))
test vectors and adding an HMAC(MD5) computed with the software
implementation. The results were then double-checked on a MediaTek
MT7986 (safexcel) platform, which passes the self-tests.

Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:43 +09:00
Suman Kumar Chakraborty
6ac142bf26 crypto: qat - add anti-rollback support for GEN6 devices
Anti-Rollback (ARB) is a QAT GEN6 hardware feature that prevents loading
firmware with a Security Version Number (SVN) lower than an authorized
minimum. This protects against downgrade attacks by ensuring that only
firmware at or above a committed SVN can run on the acceleration device.

During firmware loading, the driver checks the SVN validation status via
a hardware CSR. If the check reports a failure, firmware authentication
is aborted. If it reports a retry status, the driver reissues the
authentication command up to a maximum number of retries.

Extend the firmware admin interface with two new messages,
ICP_QAT_FW_SVN_READ and ICP_QAT_FW_SVN_COMMIT, to query and commit the
SVN, respectively. Integrate the SVN check into the firmware
authentication path in qat_uclo.c so the driver can react to
anti-rollback status during device bring-up.

Expose SVN information to userspace via a new sysfs attribute group,
qat_svn, under the PCI device directory. The group provides read-only
attributes for the active, enforced minimum, and permanent minimum SVN
values, as well as a write-only commit attribute that allows a system
administrator to commit the currently active SVN as the new authorized
minimum.

This is based on earlier work by Ciunas Bennett.

Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:43 +09:00
Thorsten Blum
177730a273 crypto: caam - guard HMAC key hex dumps in hash_digest_key
Use print_hex_dump_devel() for dumping sensitive HMAC key bytes in
hash_digest_key() to avoid leaking secrets at runtime when
CONFIG_DYNAMIC_DEBUG is enabled.

Fixes: 045e36780f ("crypto: caam - ahash hmac support")
Fixes: 3f16f6c9d6 ("crypto: caam/qi2 - add support for ahash algorithms")
Cc: stable@vger.kernel.org
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:43 +09:00
Thorsten Blum
d134feeb5d printk: add print_hex_dump_devel()
Add print_hex_dump_devel() as the hex dump equivalent of pr_devel(),
which emits output only when DEBUG is enabled, but keeps call sites
compiled otherwise.
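A plausible shape for the macro, by analogy with pr_devel() (an assumption for illustration; `print_hex_dump()` below is a counting stub, not the real kernel function): when DEBUG is off, the call is compiled but dead, so the arguments stay type-checked without emitting output.

```c
#include <assert.h>

static int dump_calls;

/* Counting stub standing in for the kernel's print_hex_dump(). */
static void print_hex_dump(const char *level, const char *prefix_str,
			   int prefix_type, int rowsize, int groupsize,
			   const void *buf, int len, int ascii)
{
	(void)level; (void)prefix_str; (void)prefix_type;
	(void)rowsize; (void)groupsize; (void)buf; (void)len; (void)ascii;
	dump_calls++;
}

/* Hex dump equivalent of pr_devel(): active only when DEBUG is
 * defined, but the call site always compiles (`if (0)` keeps the
 * arguments type-checked yet dead). */
#ifdef DEBUG
#define print_hex_dump_devel(...) \
	print_hex_dump("KERN_DEBUG", __VA_ARGS__)
#else
#define print_hex_dump_devel(...) \
	do { if (0) print_hex_dump("KERN_DEBUG", __VA_ARGS__); } while (0)
#endif
```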

Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:43 +09:00
Gustavo A. R. Silva
b0bfa49c03 crypto: nx - Fix packed layout in struct nx842_crypto_header
struct nx842_crypto_header is declared with the __packed attribute,
however the fields grouped with struct_group_tagged() were not packed.
This caused the grouped header portion of the structure to lose the
packed layout guarantees of the containing structure.

Fix this by replacing struct_group_tagged() with __struct_group(...,
..., __packed, ...) so the grouped fields are packed and the intended
packed layout of the structure is restored.

Before changes:
struct nx842_crypto_header {
	union {
		struct {
			__be16     magic;                /*     0     2 */
			__be16     ignore;               /*     2     2 */
			u8         groups;               /*     4     1 */
		};                                       /*     0     6 */
		struct nx842_crypto_header_hdr hdr;      /*     0     6 */
	};                                               /*     0     6 */
	struct nx842_crypto_header_group group[];        /*     6     0 */

	/* size: 6, cachelines: 1, members: 2 */
	/* last cacheline: 6 bytes */
} __attribute__((__packed__));

After changes:
struct nx842_crypto_header {
	union {
		struct {
			__be16     magic;                /*     0     2 */
			__be16     ignore;               /*     2     2 */
			u8         groups;               /*     4     1 */
		} __attribute__((__packed__));           /*     0     5 */
		struct nx842_crypto_header_hdr hdr;      /*     0     5 */
	};                                               /*     0     5 */
	struct nx842_crypto_header_group group[];        /*     5     0 */

	/* size: 5, cachelines: 1, members: 2 */
	/* last cacheline: 5 bytes */
} __attribute__((__packed__));

Fixes: 1e6b251ce1 ("crypto: nx - Avoid -Wflex-array-member-not-at-end warning")
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:43 +09:00
Thorsten Blum
928c5e894c crypto: nx - annotate struct nx842_crypto_header with __counted_by
Add the __counted_by() compiler attribute to the flexible array member
'group' to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
CONFIG_FORTIFY_SOURCE.
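An illustrative sketch with simplified, hypothetical struct names (assumption: the fallback macro mirrors what the kernel does on compilers without the attribute):

```c
#include <assert.h>
#include <stdlib.h>

/* Use the attribute where available (GCC >= 15, Clang >= 18),
 * otherwise expand to nothing. */
#if defined(__has_attribute)
# if __has_attribute(counted_by)
#  define __counted_by(member) __attribute__((counted_by(member)))
# endif
#endif
#ifndef __counted_by
# define __counted_by(member)
#endif

struct group_entry {
	unsigned int len;
};

/* The annotation tells UBSAN_BOUNDS/FORTIFY_SOURCE that 'group' holds
 * exactly 'groups' elements, enabling runtime bounds checks. */
struct crypto_header {
	unsigned char groups;
	struct group_entry group[] __counted_by(groups);
};
```

Note that with `__counted_by()`, the counter field must be assigned before the flexible array is accessed.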

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:25 +09:00
Thorsten Blum
914b0c68d4 crypto: marvell/cesa - use memcpy_and_pad in mv_cesa_ahash_export
Replace memset() followed by memcpy() with memcpy_and_pad() to simplify
the code and to write to 'cache' only once.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:25 +09:00
Thorsten Blum
f30579bbae crypto: s5p-sss - use unregister_{ahashes,skciphers} in probe/remove
Replace multiple for loops with calls to crypto_unregister_ahashes() and
crypto_unregister_skciphers().

If crypto_register_skcipher() fails in s5p_aes_probe(), log the error
directly instead of checking 'i < ARRAY_SIZE(algs)' later.  Also drop
now-unused local index variables.  No functional changes.
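The shape of the change can be modeled with stubs (hypothetical stand-ins; in the kernel, `crypto_unregister_skciphers()` is likewise a loop over the single-algorithm call):

```c
#include <assert.h>

struct skcipher_alg {
	const char *name;
};

static int unregistered;

/* Stub for the single-algorithm unregister call. */
static void crypto_unregister_skcipher(struct skcipher_alg *alg)
{
	(void)alg;
	unregistered++;
}

/* Bulk helper the driver now calls instead of an open-coded loop. */
static void crypto_unregister_skciphers(struct skcipher_alg *algs, int count)
{
	for (int i = count - 1; i >= 0; i--)
		crypto_unregister_skcipher(&algs[i]);
}
```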

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>
Reviewed-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-27 18:52:25 +09:00
45 changed files with 1510 additions and 217 deletions

View File

@@ -0,0 +1,114 @@
What:		/sys/bus/pci/devices/<BDF>/qat_svn/
Date:		June 2026
KernelVersion:	7.1
Contact:	qat-linux@intel.com
Description:	Directory containing Security Version Number (SVN) attributes
		for the Anti-Rollback (ARB) feature. The ARB feature prevents
		downloading older firmware versions to the acceleration device.

What:		/sys/bus/pci/devices/<BDF>/qat_svn/enforced_min
Date:		June 2026
KernelVersion:	7.1
Contact:	qat-linux@intel.com
Description:
		(RO) Reports the minimum allowed firmware SVN.
		Returns an integer greater than zero. Firmware with SVN lower
		than this value is rejected.

		A write to qat_svn/commit will update this value. The update
		is not persistent across reboot; on reboot, this value is
		reset from qat_svn/permanent_min.

		Example usage::

			# cat /sys/bus/pci/devices/<BDF>/qat_svn/enforced_min
			2

		This attribute is available only on devices that support
		Anti-Rollback.

What:		/sys/bus/pci/devices/<BDF>/qat_svn/permanent_min
Date:		June 2026
KernelVersion:	7.1
Contact:	qat-linux@intel.com
Description:
		(RO) Reports the persistent minimum SVN used to initialize
		qat_svn/enforced_min on each reboot.
		Returns an integer greater than zero. A write to qat_svn/commit
		may update this value, depending on platform/BIOS settings.

		Example usage::

			# cat /sys/bus/pci/devices/<BDF>/qat_svn/permanent_min
			3

		This attribute is available only on devices that support
		Anti-Rollback.

What:		/sys/bus/pci/devices/<BDF>/qat_svn/active
Date:		June 2026
KernelVersion:	7.1
Contact:	qat-linux@intel.com
Description:
		(RO) Reports the SVN of the currently active firmware image.
		Returns an integer greater than zero.

		Example usage::

			# cat /sys/bus/pci/devices/<BDF>/qat_svn/active
			2

		This attribute is available only on devices that support
		Anti-Rollback.

What:		/sys/bus/pci/devices/<BDF>/qat_svn/commit
Date:		June 2026
KernelVersion:	7.1
Contact:	qat-linux@intel.com
Description:
		(WO) Commits the currently active SVN as the minimum allowed
		SVN. Writing 1 sets qat_svn/enforced_min to the value of
		qat_svn/active, preventing future firmware loads with lower
		SVN.
		Depending on platform/BIOS settings, a commit may also update
		qat_svn/permanent_min.
		Note that on reboot, qat_svn/enforced_min reverts to
		qat_svn/permanent_min.

		It is advisable to use this attribute with caution, only when
		it is necessary to set a new minimum SVN for the firmware.
		Before committing the SVN update, it is crucial to check the
		current values of qat_svn/active, qat_svn/enforced_min and
		qat_svn/permanent_min. This verification helps ensure that the
		commit operation aligns with the intended outcome.

		While writing to the file, any value other than '1' will result
		in an error and have no effect.

		Example usage::

			## Read current values
			# cat /sys/bus/pci/devices/<BDF>/qat_svn/enforced_min
			2
			# cat /sys/bus/pci/devices/<BDF>/qat_svn/permanent_min
			2
			# cat /sys/bus/pci/devices/<BDF>/qat_svn/active
			3

			## Commit active SVN
			# echo 1 > /sys/bus/pci/devices/<BDF>/qat_svn/commit

			## Read updated values
			# cat /sys/bus/pci/devices/<BDF>/qat_svn/enforced_min
			3
			# cat /sys/bus/pci/devices/<BDF>/qat_svn/permanent_min
			3

		This attribute is available only on devices that support
		Anti-Rollback.

View File

@@ -646,7 +646,8 @@ static int cryptd_hash_import(struct ahash_request *req, const void *in)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-struct shash_desc *desc = cryptd_shash_desc(req);
+struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+struct shash_desc *desc = &rctx->desc;
desc->tfm = ctx->child;
@@ -952,115 +953,6 @@ static struct crypto_template cryptd_tmpl = {
.module = THIS_MODULE,
};
struct cryptd_skcipher *cryptd_alloc_skcipher(const char *alg_name,
u32 type, u32 mask)
{
char cryptd_alg_name[CRYPTO_MAX_ALG_NAME];
struct cryptd_skcipher_ctx *ctx;
struct crypto_skcipher *tfm;
if (snprintf(cryptd_alg_name, CRYPTO_MAX_ALG_NAME,
"cryptd(%s)", alg_name) >= CRYPTO_MAX_ALG_NAME)
return ERR_PTR(-EINVAL);
tfm = crypto_alloc_skcipher(cryptd_alg_name, type, mask);
if (IS_ERR(tfm))
return ERR_CAST(tfm);
if (tfm->base.__crt_alg->cra_module != THIS_MODULE) {
crypto_free_skcipher(tfm);
return ERR_PTR(-EINVAL);
}
ctx = crypto_skcipher_ctx(tfm);
refcount_set(&ctx->refcnt, 1);
return container_of(tfm, struct cryptd_skcipher, base);
}
EXPORT_SYMBOL_GPL(cryptd_alloc_skcipher);
struct crypto_skcipher *cryptd_skcipher_child(struct cryptd_skcipher *tfm)
{
struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(&tfm->base);
return ctx->child;
}
EXPORT_SYMBOL_GPL(cryptd_skcipher_child);
bool cryptd_skcipher_queued(struct cryptd_skcipher *tfm)
{
struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(&tfm->base);
return refcount_read(&ctx->refcnt) - 1;
}
EXPORT_SYMBOL_GPL(cryptd_skcipher_queued);
void cryptd_free_skcipher(struct cryptd_skcipher *tfm)
{
struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(&tfm->base);
if (refcount_dec_and_test(&ctx->refcnt))
crypto_free_skcipher(&tfm->base);
}
EXPORT_SYMBOL_GPL(cryptd_free_skcipher);
struct cryptd_ahash *cryptd_alloc_ahash(const char *alg_name,
u32 type, u32 mask)
{
char cryptd_alg_name[CRYPTO_MAX_ALG_NAME];
struct cryptd_hash_ctx *ctx;
struct crypto_ahash *tfm;
if (snprintf(cryptd_alg_name, CRYPTO_MAX_ALG_NAME,
"cryptd(%s)", alg_name) >= CRYPTO_MAX_ALG_NAME)
return ERR_PTR(-EINVAL);
tfm = crypto_alloc_ahash(cryptd_alg_name, type, mask);
if (IS_ERR(tfm))
return ERR_CAST(tfm);
if (tfm->base.__crt_alg->cra_module != THIS_MODULE) {
crypto_free_ahash(tfm);
return ERR_PTR(-EINVAL);
}
ctx = crypto_ahash_ctx(tfm);
refcount_set(&ctx->refcnt, 1);
return __cryptd_ahash_cast(tfm);
}
EXPORT_SYMBOL_GPL(cryptd_alloc_ahash);
struct crypto_shash *cryptd_ahash_child(struct cryptd_ahash *tfm)
{
struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(&tfm->base);
return ctx->child;
}
EXPORT_SYMBOL_GPL(cryptd_ahash_child);
struct shash_desc *cryptd_shash_desc(struct ahash_request *req)
{
struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
return &rctx->desc;
}
EXPORT_SYMBOL_GPL(cryptd_shash_desc);
bool cryptd_ahash_queued(struct cryptd_ahash *tfm)
{
struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(&tfm->base);
return refcount_read(&ctx->refcnt) - 1;
}
EXPORT_SYMBOL_GPL(cryptd_ahash_queued);
void cryptd_free_ahash(struct cryptd_ahash *tfm)
{
struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(&tfm->base);
if (refcount_dec_and_test(&ctx->refcnt))
crypto_free_ahash(&tfm->base);
}
EXPORT_SYMBOL_GPL(cryptd_free_ahash);
struct cryptd_aead *cryptd_alloc_aead(const char *alg_name,
u32 type, u32 mask)
{

View File

@@ -4100,6 +4100,13 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.aead = __VECS(hmac_md5_ecb_cipher_null_tv_template)
}
}, {
.alg = "authenc(hmac(md5),rfc3686(ctr(aes)))",
.generic_driver = "authenc(hmac-md5-lib,rfc3686(ctr(aes-lib)))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_md5_aes_ctr_rfc3686_tv_temp)
}
}, {
.alg = "authenc(hmac(sha1),cbc(aes))",
.generic_driver = "authenc(hmac-sha1-lib,cbc(aes-lib))",

View File

@@ -17752,6 +17752,213 @@ static const struct aead_testvec hmac_sha512_des_cbc_tv_temp[] = {
},
};
static const struct aead_testvec hmac_md5_aes_ctr_rfc3686_tv_temp[] = {
{ /* RFC 3686 Case 1 */
#ifdef __LITTLE_ENDIAN
.key = "\x08\x00" /* rta length */
"\x01\x00" /* rta type */
#else
.key = "\x00\x08" /* rta length */
"\x00\x01" /* rta type */
#endif
"\x00\x00\x00\x14" /* enc key length */
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\xae\x68\x52\xf8\x12\x10\x67\xcc"
"\x4b\xf7\xa5\x76\x55\x77\xf3\x9e"
"\x00\x00\x00\x30",
.klen = 8 + 16 + 20,
.iv = "\x00\x00\x00\x00\x00\x00\x00\x00",
.assoc = "\x00\x00\x00\x00\x00\x00\x00\x00",
.alen = 8,
.ptext = "Single block msg",
.plen = 16,
.ctext = "\xe4\x09\x5d\x4f\xb7\xa7\xb3\x79"
"\x2d\x61\x75\xa3\x26\x13\x11\xb8"
"\xdd\x5f\xea\x13\x2a\xf2\xb0\xf1"
"\x91\x79\x46\x40\x62\x6c\x87\x5b",
.clen = 16 + 16,
}, { /* RFC 3686 Case 2 */
#ifdef __LITTLE_ENDIAN
.key = "\x08\x00" /* rta length */
"\x01\x00" /* rta type */
#else
.key = "\x00\x08" /* rta length */
"\x00\x01" /* rta type */
#endif
"\x00\x00\x00\x14" /* enc key length */
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x7e\x24\x06\x78\x17\xfa\xe0\xd7"
"\x43\xd6\xce\x1f\x32\x53\x91\x63"
"\x00\x6c\xb6\xdb",
.klen = 8 + 16 + 20,
.iv = "\xc0\x54\x3b\x59\xda\x48\xd9\x0b",
.assoc = "\xc0\x54\x3b\x59\xda\x48\xd9\x0b",
.alen = 8,
.ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
.plen = 32,
.ctext = "\x51\x04\xa1\x06\x16\x8a\x72\xd9"
"\x79\x0d\x41\xee\x8e\xda\xd3\x88"
"\xeb\x2e\x1e\xfc\x46\xda\x57\xc8"
"\xfc\xe6\x30\xdf\x91\x41\xbe\x28"
"\x03\x39\x23\xcd\x22\x5f\x1b\x8b"
"\x93\x70\xbc\x45\xf3\xba\xde\x2e",
.clen = 32 + 16,
}, { /* RFC 3686 Case 3 */
#ifdef __LITTLE_ENDIAN
.key = "\x08\x00" /* rta length */
"\x01\x00" /* rta type */
#else
.key = "\x00\x08" /* rta length */
"\x00\x01" /* rta type */
#endif
"\x00\x00\x00\x14" /* enc key length */
"\x11\x22\x33\x44\x55\x66\x77\x88"
"\x99\xaa\xbb\xcc\xdd\xee\xff\x11"
"\x76\x91\xbe\x03\x5e\x50\x20\xa8"
"\xac\x6e\x61\x85\x29\xf9\xa0\xdc"
"\x00\xe0\x01\x7b",
.klen = 8 + 16 + 20,
.iv = "\x27\x77\x7f\x3f\x4a\x17\x86\xf0",
.assoc = "\x27\x77\x7f\x3f\x4a\x17\x86\xf0",
.alen = 8,
.ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
"\x20\x21\x22\x23",
.plen = 36,
.ctext = "\xc1\xcf\x48\xa8\x9f\x2f\xfd\xd9"
"\xcf\x46\x52\xe9\xef\xdb\x72\xd7"
"\x45\x40\xa4\x2b\xde\x6d\x78\x36"
"\xd5\x9a\x5c\xea\xae\xf3\x10\x53"
"\x25\xb2\x07\x2f"
"\xb4\x40\x0c\x7b\x4c\x55\x8a\x4b"
"\x04\xf7\x48\x9e\x0f\x9a\xae\x73",
.clen = 36 + 16,
}, { /* RFC 3686 Case 4 */
#ifdef __LITTLE_ENDIAN
.key = "\x08\x00" /* rta length */
"\x01\x00" /* rta type */
#else
.key = "\x00\x08" /* rta length */
"\x00\x01" /* rta type */
#endif
"\x00\x00\x00\x1c" /* enc key length */
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x16\xaf\x5b\x14\x5f\xc9\xf5\x79"
"\xc1\x75\xf9\x3e\x3b\xfb\x0e\xed"
"\x86\x3d\x06\xcc\xfd\xb7\x85\x15"
"\x00\x00\x00\x48",
.klen = 8 + 16 + 28,
.iv = "\x36\x73\x3c\x14\x7d\x6d\x93\xcb",
.assoc = "\x36\x73\x3c\x14\x7d\x6d\x93\xcb",
.alen = 8,
.ptext = "Single block msg",
.plen = 16,
.ctext = "\x4b\x55\x38\x4f\xe2\x59\xc9\xc8"
"\x4e\x79\x35\xa0\x03\xcb\xe9\x28"
"\xc4\x5d\xa1\x16\x6c\x2d\xa5\x43"
"\x60\x7b\x58\x98\x11\x9b\x50\x06",
.clen = 16 + 16,
}, { /* RFC 3686 Case 5 */
#ifdef __LITTLE_ENDIAN
.key = "\x08\x00" /* rta length */
"\x01\x00" /* rta type */
#else
.key = "\x00\x08" /* rta length */
"\x00\x01" /* rta type */
#endif
"\x00\x00\x00\x1c" /* enc key length */
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\x7c\x5c\xb2\x40\x1b\x3d\xc3\x3c"
"\x19\xe7\x34\x08\x19\xe0\xf6\x9c"
"\x67\x8c\x3d\xb8\xe6\xf6\xa9\x1a"
"\x00\x96\xb0\x3b",
.klen = 8 + 16 + 28,
.iv = "\x02\x0c\x6e\xad\xc2\xcb\x50\x0d",
.assoc = "\x02\x0c\x6e\xad\xc2\xcb\x50\x0d",
.alen = 8,
.ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
.plen = 32,
.ctext = "\x45\x32\x43\xfc\x60\x9b\x23\x32"
"\x7e\xdf\xaa\xfa\x71\x31\xcd\x9f"
"\x84\x90\x70\x1c\x5a\xd4\xa7\x9c"
"\xfc\x1f\xe0\xff\x42\xf4\xfb\x00"
"\xc5\xec\x47\x33\xae\x05\x28\x49"
"\xd5\x2b\x08\xad\x10\x98\x24\x01",
.clen = 32 + 16,
}, { /* RFC 3686 Case 7 */
#ifdef __LITTLE_ENDIAN
.key = "\x08\x00" /* rta length */
"\x01\x00" /* rta type */
#else
.key = "\x00\x08" /* rta length */
"\x00\x01" /* rta type */
#endif
"\x00\x00\x00\x24" /* enc key length */
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x00\x00\x00\x00\x00\x00\x00\x00"
"\x77\x6b\xef\xf2\x85\x1d\xb0\x6f"
"\x4c\x8a\x05\x42\xc8\x69\x6f\x6c"
"\x6a\x81\xaf\x1e\xec\x96\xb4\xd3"
"\x7f\xc1\xd6\x89\xe6\xc1\xc1\x04"
"\x00\x00\x00\x60",
.klen = 8 + 16 + 36,
.iv = "\xdb\x56\x72\xc9\x7a\xa8\xf0\xb2",
.assoc = "\xdb\x56\x72\xc9\x7a\xa8\xf0\xb2",
.alen = 8,
.ptext = "Single block msg",
.plen = 16,
.ctext = "\x14\x5a\xd0\x1d\xbf\x82\x4e\xc7"
"\x56\x08\x63\xdc\x71\xe3\xe0\xc0"
"\xc6\x26\xb2\x27\x0d\x21\xd4\x40"
"\x6c\x4f\x53\xea\x19\x75\xda\x8e",
.clen = 16 + 16,
}, { /* RFC 3686 Case 8 */
#ifdef __LITTLE_ENDIAN
.key = "\x08\x00" /* rta length */
"\x01\x00" /* rta type */
#else
.key = "\x00\x08" /* rta length */
"\x00\x01" /* rta type */
#endif
"\x00\x00\x00\x24" /* enc key length */
"\x20\x21\x22\x23\x24\x25\x26\x27"
"\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
"\xf6\xd6\x6d\x6b\xd5\x2d\x59\xbb"
"\x07\x96\x36\x58\x79\xef\xf8\x86"
"\xc6\x6d\xd5\x1a\x5b\x6a\x99\x74"
"\x4b\x50\x59\x0c\x87\xa2\x38\x84"
"\x00\xfa\xac\x24",
.klen = 8 + 16 + 36,
.iv = "\xc1\x58\x5e\xf1\x5a\x43\xd8\x75",
.assoc = "\xc1\x58\x5e\xf1\x5a\x43\xd8\x75",
.alen = 8,
.ptext = "\x00\x01\x02\x03\x04\x05\x06\x07"
"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
"\x10\x11\x12\x13\x14\x15\x16\x17"
"\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
.plen = 32,
.ctext = "\xf0\x5e\x23\x1b\x38\x94\x61\x2c"
"\x49\xee\x00\x0b\x80\x4e\xb2\xa9"
"\xb8\x30\x6b\x50\x8f\x83\x9d\x6a"
"\x55\x30\x83\x1d\x93\x44\xaf\x1c"
"\x8c\x4d\x2a\x8d\x23\x47\x59\x6f"
"\x1e\x74\x62\x39\xed\x14\x50\x6c",
.clen = 32 + 16,
},
};
static const struct aead_testvec hmac_md5_des3_ede_cbc_tv_temp[] = {
{ /*Generated with cryptopp*/
#ifdef __LITTLE_ENDIAN

View File

@@ -3270,7 +3270,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
dpaa2_fl_set_addr(out_fle, key_dma);
dpaa2_fl_set_len(out_fle, digestsize);
-print_hex_dump_debug("key_in@" __stringify(__LINE__)": ",
+print_hex_dump_devel("key_in@" __stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, key, *keylen, 1);
print_hex_dump_debug("shdesc@" __stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
@@ -3290,7 +3290,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
/* in progress */
wait_for_completion(&result.completion);
ret = result.err;
-print_hex_dump_debug("digested key@" __stringify(__LINE__)": ",
+print_hex_dump_devel("digested key@" __stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, key,
digestsize, 1);
}

View File

@@ -393,7 +393,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
append_seq_store(desc, digestsize, LDST_CLASS_2_CCB |
LDST_SRCDST_BYTE_CONTEXT);
-print_hex_dump_debug("key_in@"__stringify(__LINE__)": ",
+print_hex_dump_devel("key_in@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, key, *keylen, 1);
print_hex_dump_debug("jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
@@ -408,7 +408,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
wait_for_completion(&result.completion);
ret = result.err;
-print_hex_dump_debug("digested key@"__stringify(__LINE__)": ",
+print_hex_dump_devel("digested key@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, key,
digestsize, 1);
}

View File

@@ -230,7 +230,7 @@ static int qp_send_message(struct sec_req *req)
spin_unlock_bh(&qp_ctx->req_lock);
-atomic64_inc(&req->ctx->sec->debug.dfx.send_cnt);
+atomic64_inc(&qp_ctx->ctx->sec->debug.dfx.send_cnt);
return -EINPROGRESS;
}

View File

@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
config CRYPTO_DEV_EIP93
tristate "Support for EIP93 crypto HW accelerators"
-depends on SOC_MT7621 || ARCH_AIROHA || COMPILE_TEST
+depends on SOC_MT7621 || ARCH_AIROHA || ECONET || COMPILE_TEST
select CRYPTO_LIB_AES
select CRYPTO_LIB_DES
select CRYPTO_SKCIPHER

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#include <crypto/aead.h>

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#ifndef _EIP93_AEAD_H_
#define _EIP93_AEAD_H_

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#ifndef _EIP93_AES_H_
#define _EIP93_AES_H_

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#include <crypto/aes.h>
@@ -320,7 +320,7 @@ struct eip93_alg_template eip93_alg_ecb_des = {
.ivsize = 0,
.base = {
.cra_name = "ecb(des)",
-.cra_driver_name = "ebc(des-eip93)",
+.cra_driver_name = "ecb(des-eip93)",
.cra_priority = EIP93_CRA_PRIORITY,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#ifndef _EIP93_CIPHER_H_
#define _EIP93_CIPHER_H_

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#include <crypto/aes.h>

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#ifndef _EIP93_COMMON_H_

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#ifndef _EIP93_DES_H_
#define _EIP93_DES_H_

View File

@@ -2,7 +2,7 @@
/*
* Copyright (C) 2024
*
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#include <crypto/sha1.h>

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#ifndef _EIP93_HASH_H_
#define _EIP93_HASH_H_

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#include <linux/atomic.h>

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#ifndef _EIP93_MAIN_H_
#define _EIP93_MAIN_H_

View File

@@ -3,7 +3,7 @@
* Copyright (C) 2019 - 2021
*
* Richard van Schagen <vschagen@icloud.com>
- * Christian Marangi <ansuelsmth@gmail.com
+ * Christian Marangi <ansuelsmth@gmail.com>
*/
#ifndef REG_EIP93_H
#define REG_EIP93_H

View File

@@ -462,6 +462,21 @@ static int reset_ring_pair(void __iomem *csr, u32 bank_number)
return 0;
}
static bool adf_anti_rb_enabled(struct adf_accel_dev *accel_dev)
{
struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
return !!(hw_data->fuses[0] & ADF_GEN6_ANTI_RB_FUSE_BIT);
}
static void adf_gen6_init_anti_rb(struct adf_anti_rb_hw_data *anti_rb_data)
{
anti_rb_data->anti_rb_enabled = adf_anti_rb_enabled;
anti_rb_data->svncheck_offset = ADF_GEN6_SVNCHECK_CSR_MSG;
anti_rb_data->svncheck_retry = 0;
anti_rb_data->sysfs_added = false;
}
static int ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number)
{
struct adf_hw_device_data *hw_data = accel_dev->hw_device;
@@ -1024,6 +1039,7 @@ void adf_init_hw_data_6xxx(struct adf_hw_device_data *hw_data)
adf_gen6_init_ras_ops(&hw_data->ras_ops);
adf_gen6_init_tl_data(&hw_data->tl_data);
adf_gen6_init_rl_data(&hw_data->rl_data);
adf_gen6_init_anti_rb(&hw_data->anti_rb_data);
}
void adf_clean_hw_data_6xxx(struct adf_hw_device_data *hw_data)

View File

@@ -53,6 +53,12 @@
#define ADF_GEN6_ADMINMSGLR_OFFSET 0x500578
#define ADF_GEN6_MAILBOX_BASE_OFFSET 0x600970
/* Anti-rollback */
#define ADF_GEN6_SVNCHECK_CSR_MSG 0x640004
/* Fuse bits */
#define ADF_GEN6_ANTI_RB_FUSE_BIT BIT(24)
/*
* Watchdog timers
* Timeout is in cycles. Clock speed may vary across products but this

View File

@@ -4,6 +4,7 @@ ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"CRYPTO_QAT"'
intel_qat-y := adf_accel_engine.o \
adf_admin.o \
adf_aer.o \
adf_anti_rb.o \
adf_bank_state.o \
adf_cfg.o \
adf_cfg_services.o \
@@ -29,6 +30,7 @@ intel_qat-y := adf_accel_engine.o \
adf_rl_admin.o \
adf_rl.o \
adf_sysfs.o \
adf_sysfs_anti_rb.o \
adf_sysfs_ras_counters.o \
adf_sysfs_rl.o \
adf_timer.o \

View File

@@ -11,6 +11,7 @@
#include <linux/types.h>
#include <linux/qat/qat_mig_dev.h>
#include <linux/wordpart.h>
#include "adf_anti_rb.h"
#include "adf_cfg_common.h"
#include "adf_dc.h"
#include "adf_rl.h"
@@ -328,6 +329,7 @@ struct adf_hw_device_data {
struct adf_dev_err_mask dev_err_mask;
struct adf_rl_hw_data rl_data;
struct adf_tl_hw_data tl_data;
struct adf_anti_rb_hw_data anti_rb_data;
struct qat_migdev_ops vfmig_ops;
const char *fw_name;
const char *fw_mmp_name;

View File

@@ -6,8 +6,10 @@
#include <linux/iopoll.h>
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/delay.h>
#include "adf_accel_devices.h"
#include "adf_admin.h"
#include "adf_anti_rb.h"
#include "adf_common_drv.h"
#include "adf_cfg.h"
#include "adf_heartbeat.h"
@@ -19,6 +21,7 @@
#define ADF_ADMIN_POLL_DELAY_US 20
#define ADF_ADMIN_POLL_TIMEOUT_US (5 * USEC_PER_SEC)
#define ADF_ONE_AE 1
#define ADF_ADMIN_RETRY_MAX 60
static const u8 const_tab[1024] __aligned(1024) = {
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
@@ -536,6 +539,73 @@ int adf_send_admin_tl_stop(struct adf_accel_dev *accel_dev)
return adf_send_admin(accel_dev, &req, &resp, ae_mask);
}
static int adf_send_admin_retry(struct adf_accel_dev *accel_dev, u8 cmd_id,
struct icp_qat_fw_init_admin_resp *resp,
unsigned int sleep_ms)
{
u32 admin_ae_mask = GET_HW_DATA(accel_dev)->admin_ae_mask;
struct icp_qat_fw_init_admin_req req = { };
unsigned int retries = ADF_ADMIN_RETRY_MAX;
int ret;
req.cmd_id = cmd_id;
do {
ret = adf_send_admin(accel_dev, &req, resp, admin_ae_mask);
if (!ret)
return 0;
if (resp->status != ICP_QAT_FW_INIT_RESP_STATUS_RETRY)
return ret;
msleep(sleep_ms);
} while (--retries);
return -ETIMEDOUT;
}
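The adf_send_admin_retry() helper above is a bounded retry loop: resend the command while the firmware reports RETRY, and give up with -ETIMEDOUT once the attempt budget is exhausted. A minimal userspace sketch of the same pattern — all names below are invented for illustration, not the driver's API:

```c
#include <errno.h>
#include <stddef.h>

#define RETRY_MAX 60

enum resp_status { STATUS_SUCCESS = 0, STATUS_FAIL = 1, STATUS_RETRY = 2 };

/* send models adf_send_admin(): 0 on success, negative on error,
 * and it fills *status with the firmware response status.
 */
typedef int (*send_fn)(void *ctx, int *status);

static int send_with_retry(send_fn send, void *ctx)
{
	unsigned int retries = RETRY_MAX;
	int status, ret;

	do {
		ret = send(ctx, &status);
		if (!ret)
			return 0;
		if (status != STATUS_RETRY)
			return ret;	/* hard failure: do not retry */
		/* a real driver would msleep() between attempts */
	} while (--retries);

	return -ETIMEDOUT;	/* firmware kept asking to retry */
}

/* test helper: reports STATUS_RETRY until *countdown reaches zero */
static int flaky_send(void *ctx, int *status)
{
	int *countdown = ctx;

	if ((*countdown)-- > 0) {
		*status = STATUS_RETRY;
		return -1;
	}
	*status = STATUS_SUCCESS;
	return 0;
}
```

Note how a hard failure (any status other than RETRY) exits immediately, so only the explicitly retryable case consumes the budget.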
static int adf_send_admin_svn(struct adf_accel_dev *accel_dev, u8 cmd_id,
struct icp_qat_fw_init_admin_resp *resp)
{
return adf_send_admin_retry(accel_dev, cmd_id, resp, ADF_SVN_RETRY_MS);
}
int adf_send_admin_arb_query(struct adf_accel_dev *accel_dev, int cmd, u8 *svn)
{
struct icp_qat_fw_init_admin_resp resp = { };
int ret;
ret = adf_send_admin_svn(accel_dev, ICP_QAT_FW_SVN_READ, &resp);
if (ret)
return ret;
switch (cmd) {
case ARB_ENFORCED_MIN_SVN:
*svn = resp.enforced_min_svn;
break;
case ARB_PERMANENT_MIN_SVN:
*svn = resp.permanent_min_svn;
break;
case ARB_ACTIVE_SVN:
*svn = resp.active_svn;
break;
default:
*svn = 0;
dev_err(&GET_DEV(accel_dev),
"Unknown secure version number request\n");
ret = -EINVAL;
}
return ret;
}
int adf_send_admin_arb_commit(struct adf_accel_dev *accel_dev)
{
struct icp_qat_fw_init_admin_resp resp = { };
return adf_send_admin_svn(accel_dev, ICP_QAT_FW_SVN_COMMIT, &resp);
}
int adf_init_admin_comms(struct adf_accel_dev *accel_dev)
{
struct adf_admin_comms *admin;

View File

@@ -27,5 +27,7 @@ int adf_send_admin_tl_start(struct adf_accel_dev *accel_dev,
dma_addr_t tl_dma_addr, size_t layout_sz, u8 *rp_indexes,
struct icp_qat_fw_init_admin_slice_cnt *slice_count);
int adf_send_admin_tl_stop(struct adf_accel_dev *accel_dev);
int adf_send_admin_arb_query(struct adf_accel_dev *accel_dev, int cmd, u8 *svn);
int adf_send_admin_arb_commit(struct adf_accel_dev *accel_dev);
#endif

View File

@@ -0,0 +1,66 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2026 Intel Corporation */
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/kstrtox.h>
#include "adf_accel_devices.h"
#include "adf_admin.h"
#include "adf_anti_rb.h"
#include "adf_common_drv.h"
#include "icp_qat_fw_init_admin.h"
#define ADF_SVN_RETRY_MAX 60
int adf_anti_rb_commit(struct adf_accel_dev *accel_dev)
{
return adf_send_admin_arb_commit(accel_dev);
}
int adf_anti_rb_query(struct adf_accel_dev *accel_dev, enum anti_rb cmd, u8 *svn)
{
return adf_send_admin_arb_query(accel_dev, cmd, svn);
}
int adf_anti_rb_check(struct pci_dev *pdev)
{
struct adf_anti_rb_hw_data *anti_rb;
u32 svncheck_sts, cfc_svncheck_sts;
struct adf_accel_dev *accel_dev;
void __iomem *pmisc_addr;
accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
if (!accel_dev)
return -EINVAL;
anti_rb = GET_ANTI_RB_DATA(accel_dev);
if (!anti_rb->anti_rb_enabled || !anti_rb->anti_rb_enabled(accel_dev))
return 0;
pmisc_addr = adf_get_pmisc_base(accel_dev);
cfc_svncheck_sts = ADF_CSR_RD(pmisc_addr, anti_rb->svncheck_offset);
svncheck_sts = FIELD_GET(ADF_SVN_STS_MASK, cfc_svncheck_sts);
switch (svncheck_sts) {
case ADF_SVN_NO_STS:
return 0;
case ADF_SVN_PASS_STS:
anti_rb->svncheck_retry = 0;
return 0;
case ADF_SVN_FAIL_STS:
dev_err(&GET_DEV(accel_dev), "Security Version Number failure\n");
return -EIO;
case ADF_SVN_RETRY_STS:
if (anti_rb->svncheck_retry++ >= ADF_SVN_RETRY_MAX) {
anti_rb->svncheck_retry = 0;
return -ETIMEDOUT;
}
msleep(ADF_SVN_RETRY_MS);
return -EAGAIN;
default:
dev_err(&GET_DEV(accel_dev), "Invalid SVN check status\n");
return -EINVAL;
}
}
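adf_anti_rb_check() extracts the status byte from the raw CSR value with FIELD_GET(ADF_SVN_STS_MASK, ...). A userspace re-creation of the kernel's GENMASK()/FIELD_GET() helpers (the real ones live in <linux/bits.h> and <linux/bitfield.h>; this sketch assumes a 32-bit unsigned int) shows the arithmetic:

```c
#include <stdint.h>

/* GENMASK(h, l): contiguous bitmask from bit l to bit h, inclusive. */
#define GENMASK(h, l) (((~0u) >> (31 - (h))) & ((~0u) << (l)))

/* FIELD_GET: mask the value, then divide by the mask's lowest set bit,
 * which is equivalent to shifting the field down to bit 0.
 */
#define FIELD_GET(mask, val) (((val) & (mask)) / ((mask) & -(mask)))

#define SVN_STS_MASK GENMASK(7, 0)

/* Decode the SVN status byte from a raw CSR value. */
static uint32_t svn_status(uint32_t csr)
{
	return FIELD_GET(SVN_STS_MASK, csr);
}
```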

View File

@@ -0,0 +1,37 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2026 Intel Corporation */
#ifndef ADF_ANTI_RB_H_
#define ADF_ANTI_RB_H_
#include <linux/types.h>
#define GET_ANTI_RB_DATA(accel_dev) (&(accel_dev)->hw_device->anti_rb_data)
#define ADF_SVN_NO_STS 0x00
#define ADF_SVN_PASS_STS 0x01
#define ADF_SVN_RETRY_STS 0x02
#define ADF_SVN_FAIL_STS 0x03
#define ADF_SVN_RETRY_MS 250
#define ADF_SVN_STS_MASK GENMASK(7, 0)
enum anti_rb {
ARB_ENFORCED_MIN_SVN,
ARB_PERMANENT_MIN_SVN,
ARB_ACTIVE_SVN,
};
struct adf_accel_dev;
struct pci_dev;
struct adf_anti_rb_hw_data {
bool (*anti_rb_enabled)(struct adf_accel_dev *accel_dev);
u32 svncheck_offset;
u32 svncheck_retry;
bool sysfs_added;
};
int adf_anti_rb_commit(struct adf_accel_dev *accel_dev);
int adf_anti_rb_query(struct adf_accel_dev *accel_dev, enum anti_rb cmd, u8 *svn);
int adf_anti_rb_check(struct pci_dev *pdev);
#endif /* ADF_ANTI_RB_H_ */

View File

@@ -10,6 +10,7 @@
#include "adf_dbgfs.h"
#include "adf_heartbeat.h"
#include "adf_rl.h"
#include "adf_sysfs_anti_rb.h"
#include "adf_sysfs_ras_counters.h"
#include "adf_telemetry.h"
@@ -263,6 +264,7 @@ static int adf_dev_start(struct adf_accel_dev *accel_dev)
adf_dbgfs_add(accel_dev);
adf_sysfs_start_ras(accel_dev);
adf_sysfs_start_arb(accel_dev);
return 0;
}
@@ -292,6 +294,7 @@ static void adf_dev_stop(struct adf_accel_dev *accel_dev)
adf_rl_stop(accel_dev);
adf_dbgfs_rm(accel_dev);
adf_sysfs_stop_ras(accel_dev);
adf_sysfs_stop_arb(accel_dev);
clear_bit(ADF_STATUS_STARTING, &accel_dev->status);
clear_bit(ADF_STATUS_STARTED, &accel_dev->status);

View File

@@ -0,0 +1,133 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2026 Intel Corporation */
#include <linux/sysfs.h>
#include <linux/types.h>
#include "adf_anti_rb.h"
#include "adf_common_drv.h"
#include "adf_sysfs_anti_rb.h"
static ssize_t enforced_min_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct adf_accel_dev *accel_dev;
int err;
u8 svn;
accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
if (!accel_dev)
return -EINVAL;
err = adf_anti_rb_query(accel_dev, ARB_ENFORCED_MIN_SVN, &svn);
if (err)
return err;
return sysfs_emit(buf, "%u\n", svn);
}
static DEVICE_ATTR_RO(enforced_min);
static ssize_t active_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct adf_accel_dev *accel_dev;
int err;
u8 svn;
accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
if (!accel_dev)
return -EINVAL;
err = adf_anti_rb_query(accel_dev, ARB_ACTIVE_SVN, &svn);
if (err)
return err;
return sysfs_emit(buf, "%u\n", svn);
}
static DEVICE_ATTR_RO(active);
static ssize_t permanent_min_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct adf_accel_dev *accel_dev;
int err;
u8 svn;
accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
if (!accel_dev)
return -EINVAL;
err = adf_anti_rb_query(accel_dev, ARB_PERMANENT_MIN_SVN, &svn);
if (err)
return err;
return sysfs_emit(buf, "%u\n", svn);
}
static DEVICE_ATTR_RO(permanent_min);
static ssize_t commit_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct adf_accel_dev *accel_dev;
bool val;
int err;
accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
if (!accel_dev)
return -EINVAL;
err = kstrtobool(buf, &val);
if (err)
return err;
if (!val)
return -EINVAL;
err = adf_anti_rb_commit(accel_dev);
if (err)
return err;
return count;
}
static DEVICE_ATTR_WO(commit);
static struct attribute *qat_svn_attrs[] = {
&dev_attr_commit.attr,
&dev_attr_active.attr,
&dev_attr_enforced_min.attr,
&dev_attr_permanent_min.attr,
NULL
};
static const struct attribute_group qat_svn_group = {
.attrs = qat_svn_attrs,
.name = "qat_svn",
};
void adf_sysfs_start_arb(struct adf_accel_dev *accel_dev)
{
struct adf_anti_rb_hw_data *anti_rb = GET_ANTI_RB_DATA(accel_dev);
if (!anti_rb->anti_rb_enabled || !anti_rb->anti_rb_enabled(accel_dev))
return;
if (device_add_group(&GET_DEV(accel_dev), &qat_svn_group)) {
dev_warn(&GET_DEV(accel_dev),
"Failed to create qat_svn attribute group\n");
return;
}
anti_rb->sysfs_added = true;
}
void adf_sysfs_stop_arb(struct adf_accel_dev *accel_dev)
{
struct adf_anti_rb_hw_data *anti_rb = GET_ANTI_RB_DATA(accel_dev);
if (!anti_rb->sysfs_added)
return;
device_remove_group(&GET_DEV(accel_dev), &qat_svn_group);
anti_rb->sysfs_added = false;
anti_rb->svncheck_retry = 0;
}
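commit_store() only triggers the commit on a truthy write and rejects "0"/"n" with -EINVAL. A rough userspace model of that input handling — parse_bool() approximates kstrtobool()'s common cases; the names here are invented:

```c
#include <errno.h>

/* Approximation of kstrtobool(): first character decides. */
static int parse_bool(const char *s, int *res)
{
	switch (s[0]) {
	case '1': case 'y': case 'Y': case 't': case 'T':
		*res = 1;
		return 0;
	case '0': case 'n': case 'N': case 'f': case 'F':
		*res = 0;
		return 0;
	default:
		return -EINVAL;
	}
}

/* Model of commit_store(): only a truthy value fires the commit;
 * a valid-but-false value is still rejected with -EINVAL.
 */
static int commit_store_model(const char *buf, int (*commit)(void))
{
	int val, err;

	err = parse_bool(buf, &val);
	if (err)
		return err;
	if (!val)
		return -EINVAL;
	return commit();
}

static int dummy_commit(void)
{
	return 0;
}
```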

View File

@@ -0,0 +1,11 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2026 Intel Corporation */
#ifndef ADF_SYSFS_ANTI_RB_H_
#define ADF_SYSFS_ANTI_RB_H_
struct adf_accel_dev;
void adf_sysfs_start_arb(struct adf_accel_dev *accel_dev);
void adf_sysfs_stop_arb(struct adf_accel_dev *accel_dev);
#endif /* ADF_SYSFS_ANTI_RB_H_ */

View File

@@ -31,11 +31,15 @@ enum icp_qat_fw_init_admin_cmd_id {
ICP_QAT_FW_RL_REMOVE = 136,
ICP_QAT_FW_TL_START = 137,
ICP_QAT_FW_TL_STOP = 138,
ICP_QAT_FW_SVN_READ = 146,
ICP_QAT_FW_SVN_COMMIT = 147,
};
enum icp_qat_fw_init_admin_resp_status {
ICP_QAT_FW_INIT_RESP_STATUS_SUCCESS = 0,
-ICP_QAT_FW_INIT_RESP_STATUS_FAIL
+ICP_QAT_FW_INIT_RESP_STATUS_FAIL = 1,
+ICP_QAT_FW_INIT_RESP_STATUS_RETRY = 2,
+ICP_QAT_FW_INIT_RESP_STATUS_UNSUPPORTED = 4,
};
struct icp_qat_fw_init_admin_tl_rp_indexes {
@@ -159,6 +163,15 @@ struct icp_qat_fw_init_admin_resp {
};
struct icp_qat_fw_init_admin_slice_cnt slices;
__u16 fw_capabilities;
struct {
__u8 enforced_min_svn;
__u8 permanent_min_svn;
__u8 active_svn;
__u8 resrvd9;
__u16 svn_status;
__u16 resrvd10;
__u64 resrvd11;
};
};
} __packed;

View File

@@ -12,6 +12,7 @@
#include <linux/pci_ids.h>
#include <linux/wordpart.h>
#include "adf_accel_devices.h"
#include "adf_anti_rb.h"
#include "adf_common_drv.h"
#include "icp_qat_uclo.h"
#include "icp_qat_hal.h"
@@ -1230,10 +1231,11 @@ static int qat_uclo_map_suof(struct icp_qat_fw_loader_handle *handle,
static int qat_uclo_auth_fw(struct icp_qat_fw_loader_handle *handle,
struct icp_qat_fw_auth_desc *desc)
{
-u32 fcu_sts, retry = 0;
+unsigned int retries = FW_AUTH_MAX_RETRY;
u32 fcu_ctl_csr, fcu_sts_csr;
u32 fcu_dram_hi_csr, fcu_dram_lo_csr;
u64 bus_addr;
+u32 fcu_sts;
bus_addr = ADD_ADDR(desc->css_hdr_high, desc->css_hdr_low)
- sizeof(struct icp_qat_auth_chunk);
@@ -1248,17 +1250,32 @@ static int qat_uclo_auth_fw(struct icp_qat_fw_loader_handle *handle,
SET_CAP_CSR(handle, fcu_ctl_csr, FCU_CTRL_CMD_AUTH);
do {
+int arb_ret;
msleep(FW_AUTH_WAIT_PERIOD);
fcu_sts = GET_CAP_CSR(handle, fcu_sts_csr);
+arb_ret = adf_anti_rb_check(handle->pci_dev);
+if (arb_ret == -EAGAIN) {
+if ((fcu_sts & FCU_AUTH_STS_MASK) == FCU_STS_VERI_FAIL) {
+SET_CAP_CSR(handle, fcu_ctl_csr, FCU_CTRL_CMD_AUTH);
+continue;
+}
+} else if (arb_ret) {
+goto auth_fail;
+}
if ((fcu_sts & FCU_AUTH_STS_MASK) == FCU_STS_VERI_FAIL)
goto auth_fail;
-if (((fcu_sts >> FCU_STS_AUTHFWLD_POS) & 0x1))
+if ((fcu_sts & FCU_AUTH_STS_MASK) == FCU_STS_VERI_DONE)
return 0;
-} while (retry++ < FW_AUTH_MAX_RETRY);
+} while (--retries);
auth_fail:
-pr_err("authentication error (FCU_STATUS = 0x%x),retry = %d\n",
-fcu_sts & FCU_AUTH_STS_MASK, retry);
+pr_err("authentication error (FCU_STATUS = 0x%x)\n", fcu_sts & FCU_AUTH_STS_MASK);
return -EINVAL;
}

View File

@@ -847,8 +847,7 @@ static int mv_cesa_ahash_export(struct ahash_request *req, void *hash,
*len = creq->len;
memcpy(hash, creq->state, digsize);
-memset(cache, 0, blocksize);
-memcpy(cache, creq->cache, creq->cache_ptr);
+memcpy_and_pad(cache, blocksize, creq->cache, creq->cache_ptr, 0);
return 0;
}
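The mv_cesa change above (like the qce_aead_setkey commit in this series) folds a memset()+memcpy() pair into one memcpy_and_pad() call. A userspace sketch of what the kernel helper from <linux/string.h> does — copy count bytes, then fill the remainder of the destination with the pad byte:

```c
#include <string.h>
#include <stddef.h>

/* Sketch of the kernel's memcpy_and_pad(): writes every byte of
 * dest exactly once, either from src or with the pad value.
 */
static void memcpy_and_pad(void *dest, size_t dest_len,
			   const void *src, size_t count, int pad)
{
	if (dest_len > count) {
		memcpy(dest, src, count);
		memset((char *)dest + count, pad, dest_len - count);
	} else {
		memcpy(dest, src, dest_len);
	}
}
```

Besides being shorter, this writes the destination only once instead of zeroing it and then overwriting a prefix.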

View File

@@ -159,15 +159,15 @@ struct nx842_crypto_header_group {
struct nx842_crypto_header {
/* New members MUST be added within the struct_group() macro below. */
-struct_group_tagged(nx842_crypto_header_hdr, hdr,
+__struct_group(nx842_crypto_header_hdr, hdr, __packed,
__be16 magic; /* NX842_CRYPTO_MAGIC */
__be16 ignore; /* decompressed end bytes to ignore */
u8 groups; /* total groups in this header */
);
-struct nx842_crypto_header_group group[];
+struct nx842_crypto_header_group group[] __counted_by(groups);
} __packed;
static_assert(offsetof(struct nx842_crypto_header, group) == sizeof(struct nx842_crypto_header_hdr),
-"struct member likely outside of struct_group_tagged()");
+"struct member likely outside of __struct_group()");
#define NX842_CRYPTO_GROUP_MAX (0x20)
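The nx842 header ends in a flexible array member whose element count lives in the `groups` field; the __counted_by(groups) annotation tells the compiler (and runtime bounds checkers) exactly that. A simplified userspace model of the layout and its allocation sizing — the struct names below are illustrative stand-ins, not the nx842 definitions:

```c
#include <stdlib.h>
#include <stdint.h>

struct group_entry {
	uint32_t compressed_length;
	uint32_t uncompressed_length;
};

struct header {
	uint16_t magic;
	uint16_t ignore;
	uint8_t groups;			/* number of entries in group[] */
	struct group_entry group[];	/* flexible array member */
};

/* Allocate a header with n trailing entries, struct_size()-style:
 * size of the fixed part plus n array elements.
 */
static struct header *header_alloc(uint8_t n)
{
	struct header *h = calloc(1, sizeof(*h) + n * sizeof(h->group[0]));

	if (h)
		h->groups = n;
	return h;
}
```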

View File

@@ -637,8 +637,8 @@ static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int
memcpy(ctx->enc_key, authenc_keys.enckey, authenc_keys.enckeylen);
-memset(ctx->auth_key, 0, sizeof(ctx->auth_key));
-memcpy(ctx->auth_key, authenc_keys.authkey, authenc_keys.authkeylen);
+memcpy_and_pad(ctx->auth_key, sizeof(ctx->auth_key),
+authenc_keys.authkey, authenc_keys.authkeylen, 0);
return crypto_aead_setkey(ctx->fallback, key, keylen);
}

View File

@@ -2131,7 +2131,7 @@ static struct skcipher_alg algs[] = {
static int s5p_aes_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
-int i, j, err;
+int i, err;
const struct samsung_aes_variant *variant;
struct s5p_aes_dev *pdata;
struct resource *res;
@@ -2237,8 +2237,11 @@ static int s5p_aes_probe(struct platform_device *pdev)
for (i = 0; i < ARRAY_SIZE(algs); i++) {
err = crypto_register_skcipher(&algs[i]);
-if (err)
+if (err) {
+dev_err(dev, "can't register '%s': %d\n",
+algs[i].base.cra_name, err);
goto err_algs;
+}
}
if (pdata->use_hash) {
@@ -2265,20 +2268,12 @@ static int s5p_aes_probe(struct platform_device *pdev)
return 0;
err_hash:
-for (j = hash_i - 1; j >= 0; j--)
-crypto_unregister_ahash(&algs_sha1_md5_sha256[j]);
+crypto_unregister_ahashes(algs_sha1_md5_sha256, hash_i);
tasklet_kill(&pdata->hash_tasklet);
res->end -= 0x300;
err_algs:
-if (i < ARRAY_SIZE(algs))
-dev_err(dev, "can't register '%s': %d\n", algs[i].base.cra_name,
-err);
-for (j = 0; j < i; j++)
-crypto_unregister_skcipher(&algs[j]);
+crypto_unregister_skciphers(algs, i);
tasklet_kill(&pdata->tasklet);
err_irq:
@@ -2294,15 +2289,13 @@ err_clk:
static void s5p_aes_remove(struct platform_device *pdev)
{
struct s5p_aes_dev *pdata = platform_get_drvdata(pdev);
-int i;
-for (i = 0; i < ARRAY_SIZE(algs); i++)
-crypto_unregister_skcipher(&algs[i]);
+crypto_unregister_skciphers(algs, ARRAY_SIZE(algs));
tasklet_kill(&pdata->tasklet);
if (pdata->use_hash) {
-for (i = ARRAY_SIZE(algs_sha1_md5_sha256) - 1; i >= 0; i--)
-crypto_unregister_ahash(&algs_sha1_md5_sha256[i]);
+crypto_unregister_ahashes(algs_sha1_md5_sha256,
+ARRAY_SIZE(algs_sha1_md5_sha256));
pdata->res->end -= 0x300;
tasklet_kill(&pdata->hash_tasklet);

View File

@@ -361,19 +361,13 @@ static int stm32_cryp_it_start(struct stm32_cryp *cryp);
static struct stm32_cryp *stm32_cryp_find_dev(struct stm32_cryp_ctx *ctx)
{
-struct stm32_cryp *tmp, *cryp = NULL;
+struct stm32_cryp *cryp;
spin_lock_bh(&cryp_list.lock);
-if (!ctx->cryp) {
-list_for_each_entry(tmp, &cryp_list.dev_list, list) {
-cryp = tmp;
-break;
-}
-ctx->cryp = cryp;
-} else {
-cryp = ctx->cryp;
-}
+if (!ctx->cryp)
+ctx->cryp = list_first_entry_or_null(&cryp_list.dev_list,
+struct stm32_cryp, list);
+cryp = ctx->cryp;
spin_unlock_bh(&cryp_list.lock);
return cryp;
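The stm32 refactors above replace an open-coded "take the first list entry" loop with list_first_entry_or_null(). A minimal userspace re-creation of the intrusive-list machinery involved (the kernel versions are in <linux/list.h>; this sketch only covers what the refactor uses):

```c
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

/* Recover the containing struct from a pointer to its list member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* First entry on the list, or NULL when the list is empty. */
#define list_first_entry_or_null(head, type, member) \
	((head)->next != (head) ? \
		container_of((head)->next, type, member) : NULL)

static void list_add_tail(struct list_head *node, struct list_head *head)
{
	node->prev = head->prev;
	node->next = head;
	head->prev->next = node;
	head->prev = node;
}

/* illustrative device struct with an embedded list member */
struct dev { int id; struct list_head list; };
```

The helper collapses the loop-and-break idiom into one expression and handles the empty-list case without a sentinel NULL initialization.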

View File

@@ -792,19 +792,13 @@ static int stm32_hash_dma_send(struct stm32_hash_dev *hdev)
static struct stm32_hash_dev *stm32_hash_find_dev(struct stm32_hash_ctx *ctx)
{
-struct stm32_hash_dev *hdev = NULL, *tmp;
+struct stm32_hash_dev *hdev;
spin_lock_bh(&stm32_hash.lock);
-if (!ctx->hdev) {
-list_for_each_entry(tmp, &stm32_hash.dev_list, list) {
-hdev = tmp;
-break;
-}
-ctx->hdev = hdev;
-} else {
-hdev = ctx->hdev;
-}
+if (!ctx->hdev)
+ctx->hdev = list_first_entry_or_null(&stm32_hash.dev_list,
+struct stm32_hash_dev, list);
+hdev = ctx->hdev;
spin_unlock_bh(&stm32_hash.lock);
return hdev;

View File

@@ -8,6 +8,9 @@ config CRYPTO_DEV_TI_DTHEV2
select CRYPTO_CBC
select CRYPTO_CTR
select CRYPTO_XTS
select CRYPTO_GCM
select CRYPTO_CCM
select SG_SPLIT
help
This enables support for the TI DTHE V2 hw cryptography engine
which can be found on TI K3 SOCs. Selecting this enables use

View File

@@ -10,15 +10,18 @@
#include <crypto/aes.h>
#include <crypto/algapi.h>
#include <crypto/engine.h>
#include <crypto/gcm.h>
#include <crypto/internal/aead.h>
#include <crypto/internal/skcipher.h>
#include "dthev2-common.h"
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/scatterlist.h>
/* Registers */
@@ -53,6 +56,7 @@
#define DTHE_P_AES_C_LENGTH_1 0x0058
#define DTHE_P_AES_AUTH_LENGTH 0x005C
#define DTHE_P_AES_DATA_IN_OUT 0x0060
#define DTHE_P_AES_TAG_OUT 0x0070
#define DTHE_P_AES_SYSCONFIG 0x0084
#define DTHE_P_AES_IRQSTATUS 0x008C
@@ -65,6 +69,8 @@ enum aes_ctrl_mode_masks {
AES_CTRL_CBC_MASK = BIT(5),
AES_CTRL_CTR_MASK = BIT(6),
AES_CTRL_XTS_MASK = BIT(12) | BIT(11),
AES_CTRL_GCM_MASK = BIT(17) | BIT(16) | BIT(6),
AES_CTRL_CCM_MASK = BIT(18) | BIT(6),
};
#define DTHE_AES_CTRL_MODE_CLEAR_MASK ~GENMASK(28, 5)
@@ -77,6 +83,11 @@ enum aes_ctrl_mode_masks {
#define DTHE_AES_CTRL_CTR_WIDTH_128B (BIT(7) | BIT(8))
#define DTHE_AES_CCM_L_FROM_IV_MASK GENMASK(2, 0)
#define DTHE_AES_CCM_M_BITS GENMASK(2, 0)
#define DTHE_AES_CTRL_CCM_L_FIELD_MASK GENMASK(21, 19)
#define DTHE_AES_CTRL_CCM_M_FIELD_MASK GENMASK(24, 22)
#define DTHE_AES_CTRL_SAVE_CTX_SET BIT(29)
#define DTHE_AES_CTRL_OUTPUT_READY BIT_MASK(0)
@@ -91,6 +102,10 @@ enum aes_ctrl_mode_masks {
#define AES_IV_SIZE AES_BLOCK_SIZE
#define AES_BLOCK_WORDS (AES_BLOCK_SIZE / sizeof(u32))
#define AES_IV_WORDS AES_BLOCK_WORDS
#define DTHE_AES_GCM_AAD_MAXLEN (BIT_ULL(32) - 1)
#define DTHE_AES_CCM_AAD_MAXLEN (BIT(16) - BIT(8))
#define DTHE_AES_CCM_CRYPT_MAXLEN (BIT_ULL(61) - 1)
#define POLL_TIMEOUT_INTERVAL HZ
static int dthe_cipher_init_tfm(struct crypto_skcipher *tfm)
{
@@ -266,6 +281,16 @@ static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx *ctx,
case DTHE_AES_XTS:
ctrl_val |= AES_CTRL_XTS_MASK;
break;
case DTHE_AES_GCM:
ctrl_val |= AES_CTRL_GCM_MASK;
break;
case DTHE_AES_CCM:
ctrl_val |= AES_CTRL_CCM_MASK;
ctrl_val |= FIELD_PREP(DTHE_AES_CTRL_CCM_L_FIELD_MASK,
(iv_in[0] & DTHE_AES_CCM_L_FROM_IV_MASK));
ctrl_val |= FIELD_PREP(DTHE_AES_CTRL_CCM_M_FIELD_MASK,
((ctx->authsize - 2) >> 1) & DTHE_AES_CCM_M_BITS);
break;
}
if (iv_in) {
@@ -542,6 +567,642 @@ static int dthe_aes_decrypt(struct skcipher_request *req)
return dthe_aes_crypt(req);
}
static int dthe_aead_init_tfm(struct crypto_aead *tfm)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(tfm);
struct dthe_data *dev_data = dthe_get_dev(ctx);
ctx->dev_data = dev_data;
const char *alg_name = crypto_tfm_alg_name(crypto_aead_tfm(tfm));
ctx->aead_fb = crypto_alloc_sync_aead(alg_name, 0,
CRYPTO_ALG_NEED_FALLBACK);
if (IS_ERR(ctx->aead_fb)) {
dev_err(dev_data->dev, "fallback driver %s couldn't be loaded\n",
alg_name);
return PTR_ERR(ctx->aead_fb);
}
return 0;
}
static void dthe_aead_exit_tfm(struct crypto_aead *tfm)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(tfm);
crypto_free_sync_aead(ctx->aead_fb);
}
/**
* dthe_aead_prep_aad - Prepare AAD scatterlist from input request
* @sg: Input scatterlist containing AAD
* @assoclen: Length of AAD
* @pad_buf: Buffer to hold AAD padding if needed
*
* Description:
* Creates a scatterlist containing only the AAD portion with padding
* to align to AES_BLOCK_SIZE. This simplifies DMA handling by allowing
* AAD to be sent separately via TX-only DMA.
*
* Return:
* Pointer to the AAD scatterlist, or ERR_PTR(error) on failure.
* The calling function needs to free the returned scatterlist when done.
**/
static struct scatterlist *dthe_aead_prep_aad(struct scatterlist *sg,
unsigned int assoclen,
u8 *pad_buf)
{
struct scatterlist *aad_sg;
struct scatterlist *to_sg;
int aad_nents;
if (assoclen == 0)
return NULL;
aad_nents = sg_nents_for_len(sg, assoclen);
if (assoclen % AES_BLOCK_SIZE)
aad_nents++;
aad_sg = kmalloc_array(aad_nents, sizeof(struct scatterlist), GFP_ATOMIC);
if (!aad_sg)
return ERR_PTR(-ENOMEM);
sg_init_table(aad_sg, aad_nents);
to_sg = dthe_copy_sg(aad_sg, sg, assoclen);
if (assoclen % AES_BLOCK_SIZE) {
unsigned int pad_len = AES_BLOCK_SIZE - (assoclen % AES_BLOCK_SIZE);
memset(pad_buf, 0, pad_len);
sg_set_buf(to_sg, pad_buf, pad_len);
}
return aad_sg;
}
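dthe_aead_prep_aad() (and dthe_aead_prep_crypt() below it) pad lengths up to AES_BLOCK_SIZE, adding a pad entry only when the length is not already block-aligned. The arithmetic in isolation, as a small sketch:

```c
#include <stddef.h>

#define AES_BLOCK_SIZE 16

/* Bytes of padding needed to reach the next AES block boundary;
 * zero when len is already a multiple of the block size.
 */
static size_t pad_len(size_t len)
{
	return len % AES_BLOCK_SIZE ?
		AES_BLOCK_SIZE - (len % AES_BLOCK_SIZE) : 0;
}

/* Total length after padding, i.e. len rounded up to a block. */
static size_t padded_len(size_t len)
{
	return len + pad_len(len);
}
```

This also explains the `nents + 1` bookkeeping in the prep helpers: the pad, when present, occupies one extra scatterlist entry.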
/**
* dthe_aead_prep_crypt - Prepare crypt scatterlist from req->src/req->dst
* @sg: Input req->src/req->dst scatterlist
* @assoclen: Length of AAD (to skip)
* @cryptlen: Length of ciphertext/plaintext (minus the size of TAG in decryption)
* @pad_buf: Zeroed buffer to hold crypt padding if needed
*
* Description:
* Creates a scatterlist containing only the ciphertext/plaintext portion
* (skipping AAD) with padding to align to AES_BLOCK_SIZE.
*
* Return:
* Pointer to the ciphertext scatterlist, or ERR_PTR(error) on failure.
* The calling function needs to free the returned scatterlist when done.
**/
static struct scatterlist *dthe_aead_prep_crypt(struct scatterlist *sg,
unsigned int assoclen,
unsigned int cryptlen,
u8 *pad_buf)
{
struct scatterlist *out_sg[1];
struct scatterlist *crypt_sg;
struct scatterlist *to_sg;
size_t split_sizes[1] = {cryptlen};
int out_mapped_nents[1];
int crypt_nents;
int err;
if (cryptlen == 0)
return NULL;
/* Skip AAD, extract ciphertext portion */
err = sg_split(sg, 0, assoclen, 1, split_sizes, out_sg, out_mapped_nents, GFP_ATOMIC);
if (err)
goto dthe_aead_prep_crypt_split_err;
crypt_nents = sg_nents_for_len(out_sg[0], cryptlen);
if (cryptlen % AES_BLOCK_SIZE)
crypt_nents++;
crypt_sg = kmalloc_array(crypt_nents, sizeof(struct scatterlist), GFP_ATOMIC);
if (!crypt_sg) {
err = -ENOMEM;
goto dthe_aead_prep_crypt_mem_err;
}
sg_init_table(crypt_sg, crypt_nents);
to_sg = dthe_copy_sg(crypt_sg, out_sg[0], cryptlen);
if (cryptlen % AES_BLOCK_SIZE) {
unsigned int pad_len = AES_BLOCK_SIZE - (cryptlen % AES_BLOCK_SIZE);
sg_set_buf(to_sg, pad_buf, pad_len);
}
dthe_aead_prep_crypt_mem_err:
kfree(out_sg[0]);
dthe_aead_prep_crypt_split_err:
if (err)
return ERR_PTR(err);
return crypt_sg;
}
static int dthe_aead_read_tag(struct dthe_tfm_ctx *ctx, u32 *tag)
{
struct dthe_data *dev_data = dthe_get_dev(ctx);
void __iomem *aes_base_reg = dev_data->regs + DTHE_P_AES_BASE;
u32 val;
int ret;
ret = readl_relaxed_poll_timeout(aes_base_reg + DTHE_P_AES_CTRL, val,
(val & DTHE_AES_CTRL_SAVED_CTX_READY),
0, POLL_TIMEOUT_INTERVAL);
if (ret)
return ret;
for (int i = 0; i < AES_BLOCK_WORDS; ++i)
tag[i] = readl_relaxed(aes_base_reg +
DTHE_P_AES_TAG_OUT +
DTHE_REG_SIZE * i);
return 0;
}
static int dthe_aead_enc_get_tag(struct aead_request *req)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
u32 tag[AES_BLOCK_WORDS];
int nents;
int ret;
ret = dthe_aead_read_tag(ctx, tag);
if (ret)
return ret;
nents = sg_nents_for_len(req->dst, req->cryptlen + req->assoclen + ctx->authsize);
sg_pcopy_from_buffer(req->dst, nents, tag, ctx->authsize,
req->assoclen + req->cryptlen);
return 0;
}
static int dthe_aead_dec_verify_tag(struct aead_request *req)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
u32 tag_out[AES_BLOCK_WORDS];
u32 tag_in[AES_BLOCK_WORDS];
int nents;
int ret;
ret = dthe_aead_read_tag(ctx, tag_out);
if (ret)
return ret;
nents = sg_nents_for_len(req->src, req->assoclen + req->cryptlen);
sg_pcopy_to_buffer(req->src, nents, tag_in, ctx->authsize,
req->assoclen + req->cryptlen - ctx->authsize);
if (crypto_memneq(tag_in, tag_out, ctx->authsize))
return -EBADMSG;
else
return 0;
}
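dthe_aead_dec_verify_tag() compares the tags with crypto_memneq() rather than memcmp(), so the comparison time does not reveal how many leading tag bytes matched. A sketch in the spirit of the kernel helper (not its exact implementation): accumulate the XOR of every byte pair instead of returning at the first mismatch.

```c
#include <stddef.h>

/* Constant-time inequality test: same number of operations whether
 * the buffers differ at byte 0 or not at all.
 */
static int memneq(const void *a, const void *b, size_t len)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;
	size_t i;

	for (i = 0; i < len; i++)
		diff |= pa[i] ^ pb[i];

	return diff != 0;	/* nonzero when the buffers differ */
}
```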
static int dthe_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(tfm);
if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && keylen != AES_KEYSIZE_256)
return -EINVAL;
crypto_sync_aead_clear_flags(ctx->aead_fb, CRYPTO_TFM_REQ_MASK);
crypto_sync_aead_set_flags(ctx->aead_fb,
crypto_aead_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
return crypto_sync_aead_setkey(ctx->aead_fb, key, keylen);
}
static int dthe_gcm_aes_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(tfm);
int ret;
ret = dthe_aead_setkey(tfm, key, keylen);
if (ret)
return ret;
ctx->aes_mode = DTHE_AES_GCM;
ctx->keylen = keylen;
memcpy(ctx->key, key, keylen);
return ret;
}
static int dthe_ccm_aes_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(tfm);
int ret;
ret = dthe_aead_setkey(tfm, key, keylen);
if (ret)
return ret;
ctx->aes_mode = DTHE_AES_CCM;
ctx->keylen = keylen;
memcpy(ctx->key, key, keylen);
return ret;
}
static int dthe_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(tfm);
/* Invalid auth size will be handled by crypto_aead_setauthsize() */
ctx->authsize = authsize;
return crypto_sync_aead_setauthsize(ctx->aead_fb, authsize);
}
static int dthe_aead_do_fallback(struct aead_request *req)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
struct dthe_aes_req_ctx *rctx = aead_request_ctx(req);
SYNC_AEAD_REQUEST_ON_STACK(subreq, ctx->aead_fb);
aead_request_set_callback(subreq, req->base.flags,
req->base.complete, req->base.data);
aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, req->iv);
aead_request_set_ad(subreq, req->assoclen);
return rctx->enc ? crypto_aead_encrypt(subreq) :
crypto_aead_decrypt(subreq);
}
static void dthe_aead_dma_in_callback(void *data)
{
struct aead_request *req = (struct aead_request *)data;
struct dthe_aes_req_ctx *rctx = aead_request_ctx(req);
complete(&rctx->aes_compl);
}
static int dthe_aead_run(struct crypto_engine *engine, void *areq)
{
struct aead_request *req = container_of(areq, struct aead_request, base);
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
struct dthe_aes_req_ctx *rctx = aead_request_ctx(req);
struct dthe_data *dev_data = dthe_get_dev(ctx);
unsigned int cryptlen = req->cryptlen;
unsigned int assoclen = req->assoclen;
unsigned int authsize = ctx->authsize;
unsigned int unpadded_cryptlen;
struct scatterlist *src = NULL;
struct scatterlist *dst = NULL;
struct scatterlist *aad_sg = NULL;
u32 iv_in[AES_IV_WORDS];
int aad_nents = 0;
int src_nents = 0;
int dst_nents = 0;
int aad_mapped_nents = 0;
int src_mapped_nents = 0;
int dst_mapped_nents = 0;
u8 *src_assoc_padbuf = rctx->padding;
u8 *src_crypt_padbuf = rctx->padding + AES_BLOCK_SIZE;
u8 *dst_crypt_padbuf = rctx->padding + AES_BLOCK_SIZE;
bool diff_dst;
enum dma_data_direction aad_dir, src_dir, dst_dir;
struct device *tx_dev, *rx_dev;
struct dma_async_tx_descriptor *desc_in, *desc_out, *desc_aad_out;
int ret;
int err;
void __iomem *aes_base_reg = dev_data->regs + DTHE_P_AES_BASE;
u32 aes_irqenable_val = readl_relaxed(aes_base_reg + DTHE_P_AES_IRQENABLE);
u32 aes_sysconfig_val = readl_relaxed(aes_base_reg + DTHE_P_AES_SYSCONFIG);
aes_sysconfig_val |= DTHE_AES_SYSCONFIG_DMA_DATA_IN_OUT_EN;
writel_relaxed(aes_sysconfig_val, aes_base_reg + DTHE_P_AES_SYSCONFIG);
aes_irqenable_val |= DTHE_AES_IRQENABLE_EN_ALL;
writel_relaxed(aes_irqenable_val, aes_base_reg + DTHE_P_AES_IRQENABLE);
/* In decryption, the last authsize bytes are the TAG */
if (!rctx->enc)
cryptlen -= authsize;
unpadded_cryptlen = cryptlen;
memset(src_assoc_padbuf, 0, AES_BLOCK_SIZE);
memset(src_crypt_padbuf, 0, AES_BLOCK_SIZE);
memset(dst_crypt_padbuf, 0, AES_BLOCK_SIZE);
tx_dev = dmaengine_get_dma_device(dev_data->dma_aes_tx);
rx_dev = dmaengine_get_dma_device(dev_data->dma_aes_rx);
if (req->src == req->dst) {
diff_dst = false;
src_dir = DMA_BIDIRECTIONAL;
dst_dir = DMA_BIDIRECTIONAL;
} else {
diff_dst = true;
src_dir = DMA_TO_DEVICE;
dst_dir = DMA_FROM_DEVICE;
}
aad_dir = DMA_TO_DEVICE;
/* Prep AAD scatterlist (always from req->src) */
aad_sg = dthe_aead_prep_aad(req->src, req->assoclen, src_assoc_padbuf);
if (IS_ERR(aad_sg)) {
ret = PTR_ERR(aad_sg);
goto aead_prep_aad_err;
}
/* Prep ciphertext src scatterlist */
src = dthe_aead_prep_crypt(req->src, req->assoclen, cryptlen, src_crypt_padbuf);
if (IS_ERR(src)) {
ret = PTR_ERR(src);
goto aead_prep_src_err;
}
/* Prep ciphertext dst scatterlist (only if separate dst) */
if (diff_dst) {
dst = dthe_aead_prep_crypt(req->dst, req->assoclen, unpadded_cryptlen,
dst_crypt_padbuf);
if (IS_ERR(dst)) {
ret = PTR_ERR(dst);
goto aead_prep_dst_err;
}
} else {
dst = src;
}
/* Calculate padded lengths for nents calculations */
if (req->assoclen % AES_BLOCK_SIZE)
assoclen += AES_BLOCK_SIZE - (req->assoclen % AES_BLOCK_SIZE);
if (cryptlen % AES_BLOCK_SIZE)
cryptlen += AES_BLOCK_SIZE - (cryptlen % AES_BLOCK_SIZE);
if (assoclen != 0) {
/* Map AAD for TX only */
aad_nents = sg_nents_for_len(aad_sg, assoclen);
aad_mapped_nents = dma_map_sg(tx_dev, aad_sg, aad_nents, aad_dir);
if (aad_mapped_nents == 0) {
dev_err(dev_data->dev, "Failed to map AAD for TX\n");
ret = -EINVAL;
goto aead_dma_map_aad_err;
}
/* Prepare DMA descriptors for AAD TX */
desc_aad_out = dmaengine_prep_slave_sg(dev_data->dma_aes_tx, aad_sg,
aad_mapped_nents, DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!desc_aad_out) {
dev_err(dev_data->dev, "AAD TX prep_slave_sg() failed\n");
ret = -EINVAL;
goto aead_dma_prep_aad_err;
}
}
if (cryptlen != 0) {
/* Map ciphertext src for TX (BIDIRECTIONAL if in-place) */
src_nents = sg_nents_for_len(src, cryptlen);
src_mapped_nents = dma_map_sg(tx_dev, src, src_nents, src_dir);
if (src_mapped_nents == 0) {
dev_err(dev_data->dev, "Failed to map ciphertext src for TX\n");
ret = -EINVAL;
goto aead_dma_prep_aad_err;
}
/* Prepare DMA descriptors for ciphertext TX */
desc_out = dmaengine_prep_slave_sg(dev_data->dma_aes_tx, src,
src_mapped_nents, DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!desc_out) {
dev_err(dev_data->dev, "Ciphertext TX prep_slave_sg() failed\n");
ret = -EINVAL;
goto aead_dma_prep_src_err;
}
/* Map ciphertext dst for RX (only if separate dst) */
if (diff_dst) {
dst_nents = sg_nents_for_len(dst, cryptlen);
dst_mapped_nents = dma_map_sg(rx_dev, dst, dst_nents, dst_dir);
if (dst_mapped_nents == 0) {
dev_err(dev_data->dev, "Failed to map ciphertext dst for RX\n");
ret = -EINVAL;
goto aead_dma_prep_src_err;
}
} else {
dst_nents = src_nents;
dst_mapped_nents = src_mapped_nents;
}
/* Prepare DMA descriptor for ciphertext RX */
desc_in = dmaengine_prep_slave_sg(dev_data->dma_aes_rx, dst,
dst_mapped_nents, DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!desc_in) {
dev_err(dev_data->dev, "Ciphertext RX prep_slave_sg() failed\n");
ret = -EINVAL;
goto aead_dma_prep_dst_err;
}
desc_in->callback = dthe_aead_dma_in_callback;
desc_in->callback_param = req;
} else if (assoclen != 0) {
/* AAD-only operation */
desc_aad_out->callback = dthe_aead_dma_in_callback;
desc_aad_out->callback_param = req;
}
init_completion(&rctx->aes_compl);
/*
 * HACK: There is an unknown hw issue where, if the previous operation had alen == 0 and
 * plen != 0, the current operation's tag calculation is incorrect when plen == 0 and
 * alen != 0. Work around it by resetting the context: write a 1 to the C_LENGTH_0 and
 * AUTH_LENGTH registers.
 */
if (cryptlen == 0) {
writel_relaxed(1, aes_base_reg + DTHE_P_AES_C_LENGTH_0);
writel_relaxed(1, aes_base_reg + DTHE_P_AES_AUTH_LENGTH);
}
if (ctx->aes_mode == DTHE_AES_GCM) {
if (req->iv) {
memcpy(iv_in, req->iv, GCM_AES_IV_SIZE);
} else {
iv_in[0] = 0;
iv_in[1] = 0;
iv_in[2] = 0;
}
iv_in[3] = 0x01000000;
} else {
memcpy(iv_in, req->iv, AES_IV_SIZE);
}
/* Clear key2 to reset previous GHASH intermediate data */
for (int i = 0; i < AES_KEYSIZE_256 / sizeof(u32); ++i)
writel_relaxed(0, aes_base_reg + DTHE_P_AES_KEY2_6 + DTHE_REG_SIZE * i);
dthe_aes_set_ctrl_key(ctx, rctx, iv_in);
writel_relaxed(lower_32_bits(unpadded_cryptlen), aes_base_reg + DTHE_P_AES_C_LENGTH_0);
writel_relaxed(upper_32_bits(unpadded_cryptlen), aes_base_reg + DTHE_P_AES_C_LENGTH_1);
writel_relaxed(req->assoclen, aes_base_reg + DTHE_P_AES_AUTH_LENGTH);
/* Submit DMA descriptors: AAD TX, ciphertext TX, ciphertext RX */
if (assoclen != 0)
dmaengine_submit(desc_aad_out);
if (cryptlen != 0) {
dmaengine_submit(desc_out);
dmaengine_submit(desc_in);
}
if (cryptlen != 0)
dma_async_issue_pending(dev_data->dma_aes_rx);
dma_async_issue_pending(dev_data->dma_aes_tx);
/* Wait with a timeout so finalization still happens if the DMA callback never fires */
ret = wait_for_completion_timeout(&rctx->aes_compl, msecs_to_jiffies(DTHE_DMA_TIMEOUT_MS));
if (!ret) {
ret = -ETIMEDOUT;
if (cryptlen != 0)
dmaengine_terminate_sync(dev_data->dma_aes_rx);
dmaengine_terminate_sync(dev_data->dma_aes_tx);
for (int i = 0; i < AES_BLOCK_WORDS; ++i)
readl_relaxed(aes_base_reg + DTHE_P_AES_DATA_IN_OUT + DTHE_REG_SIZE * i);
} else {
ret = 0;
}
if (cryptlen != 0)
dma_sync_sg_for_cpu(rx_dev, dst, dst_nents, dst_dir);
if (rctx->enc)
err = dthe_aead_enc_get_tag(req);
else
err = dthe_aead_dec_verify_tag(req);
ret = (ret) ? ret : err;
aead_dma_prep_dst_err:
if (diff_dst && cryptlen != 0)
dma_unmap_sg(rx_dev, dst, dst_nents, dst_dir);
aead_dma_prep_src_err:
if (cryptlen != 0)
dma_unmap_sg(tx_dev, src, src_nents, src_dir);
aead_dma_prep_aad_err:
if (assoclen != 0)
dma_unmap_sg(tx_dev, aad_sg, aad_nents, aad_dir);
aead_dma_map_aad_err:
if (diff_dst && cryptlen != 0)
kfree(dst);
aead_prep_dst_err:
if (cryptlen != 0)
kfree(src);
aead_prep_src_err:
if (assoclen != 0)
kfree(aad_sg);
aead_prep_aad_err:
memzero_explicit(rctx->padding, 2 * AES_BLOCK_SIZE);
if (ret)
ret = dthe_aead_do_fallback(req);
local_bh_disable();
crypto_finalize_aead_request(engine, req, ret);
local_bh_enable();
return 0;
}
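The `iv_in[3] = 0x01000000` write above builds GCM's initial counter block J0 for a 96-bit IV (J0 = IV || 0^31 || 1, per the GCM spec); the counter word appears byte-swapped because the register is written as a little-endian u32. A minimal userspace sketch of that construction, assuming a little-endian host and the hypothetical helper `gcm_build_j0()` (not a driver API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define GCM_AES_IV_SIZE 12

/* Hypothetical stand-in for the J0 setup in dthe_aead_run(). */
static void gcm_build_j0(uint32_t iv_in[4], const uint8_t *iv)
{
	if (iv)
		memcpy(iv_in, iv, GCM_AES_IV_SIZE);
	else
		iv_in[0] = iv_in[1] = iv_in[2] = 0;

	/* Counter starts at 1 (big-endian); on a little-endian host the
	 * u32 value 0x01000000 has the byte layout 00 00 00 01. */
	iv_in[3] = 0x01000000;
}
```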
static int dthe_aead_crypt(struct aead_request *req)
{
struct dthe_tfm_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
struct dthe_aes_req_ctx *rctx = aead_request_ctx(req);
struct dthe_data *dev_data = dthe_get_dev(ctx);
struct crypto_engine *engine;
unsigned int cryptlen = req->cryptlen;
bool is_zero_ctr = true;
/* In decryption, last authsize bytes are the TAG */
if (!rctx->enc)
cryptlen -= ctx->authsize;
if (ctx->aes_mode == DTHE_AES_CCM) {
/*
* For CCM Mode, the 128-bit IV contains the following:
* | 0 .. 2 | 3 .. 7 | 8 .. (127-8*L) | (128-8*L) .. 127 |
* | L-1 | Zero | Nonce | Counter |
* L needs to be between 2-8 (inclusive), i.e. 1 <= (L-1) <= 7
* and the next 5 bits need to be zeroes. Else return -EINVAL
*/
u8 *iv = req->iv;
u8 L = iv[0];
/* variable L stores L-1 here */
if (L < 1 || L > 7)
return -EINVAL;
/*
* DTHEv2 HW can only work with zero initial counter in CCM mode.
* Check if the initial counter value is zero or not
*/
for (int i = 0; i < L + 1; ++i) {
if (iv[AES_IV_SIZE - 1 - i] != 0) {
is_zero_ctr = false;
break;
}
}
}
/*
* Need to fallback to software in the following cases due to HW restrictions:
* - Both AAD and plaintext/ciphertext are zero length
* - For AES-GCM, AAD length is more than 2^32 - 1 bytes
* - For AES-CCM, AAD length is more than 2^16 - 2^8 bytes
* - For AES-CCM, plaintext/ciphertext length is more than 2^61 - 1 bytes
* - For AES-CCM, AAD length is non-zero but plaintext/ciphertext length is zero
* - For AES-CCM, the initial counter (last L+1 bytes of IV) is not all zeroes
*
* PS: req->cryptlen is currently of unsigned int type, which makes the second and
* fourth cases above tautologically false. If req->cryptlen is ever widened to a
* 64-bit type, checks for those cases would also need to be added below.
*/
if ((req->assoclen == 0 && cryptlen == 0) ||
(ctx->aes_mode == DTHE_AES_CCM && req->assoclen > DTHE_AES_CCM_AAD_MAXLEN) ||
(ctx->aes_mode == DTHE_AES_CCM && cryptlen == 0) ||
(ctx->aes_mode == DTHE_AES_CCM && !is_zero_ctr))
return dthe_aead_do_fallback(req);
engine = dev_data->engine;
return crypto_transfer_aead_request_to_engine(engine, req);
}
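The CCM branch of dthe_aead_crypt() above validates iv[0] (which holds L-1) and then scans the trailing counter bytes, since the DTHEv2 hardware only supports a zero initial counter. A standalone userspace sketch of that check, using the hypothetical helper `ccm_iv_check()` (the driver performs this inline):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define AES_IV_SIZE 16

/* iv[0] holds L-1 (must be 1..7); the last L bytes of the 16-byte IV are
 * the initial counter, which must be all zero for the hardware path. */
static int ccm_iv_check(const uint8_t *iv, bool *zero_ctr)
{
	uint8_t l_minus_1 = iv[0];

	if (l_minus_1 < 1 || l_minus_1 > 7)
		return -1;	/* -EINVAL in the driver */

	*zero_ctr = true;
	for (int i = 0; i < l_minus_1 + 1; ++i) {
		if (iv[AES_IV_SIZE - 1 - i] != 0) {
			*zero_ctr = false;
			break;
		}
	}
	return 0;
}
```

A non-zero counter does not make the request invalid; as in the driver, it just forces the software fallback.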
static int dthe_aead_encrypt(struct aead_request *req)
{
struct dthe_aes_req_ctx *rctx = aead_request_ctx(req);
rctx->enc = 1;
return dthe_aead_crypt(req);
}
static int dthe_aead_decrypt(struct aead_request *req)
{
struct dthe_aes_req_ctx *rctx = aead_request_ctx(req);
rctx->enc = 0;
return dthe_aead_crypt(req);
}
static struct skcipher_engine_alg cipher_algs[] = {
{
.base.init = dthe_cipher_init_tfm,
@@ -640,12 +1301,75 @@ static struct skcipher_engine_alg cipher_algs[] = {
}, /* XTS AES */
};
static struct aead_engine_alg aead_algs[] = {
{
.base.init = dthe_aead_init_tfm,
.base.exit = dthe_aead_exit_tfm,
.base.setkey = dthe_gcm_aes_setkey,
.base.setauthsize = dthe_aead_setauthsize,
.base.maxauthsize = AES_BLOCK_SIZE,
.base.encrypt = dthe_aead_encrypt,
.base.decrypt = dthe_aead_decrypt,
.base.chunksize = AES_BLOCK_SIZE,
.base.ivsize = GCM_AES_IV_SIZE,
.base.base = {
.cra_name = "gcm(aes)",
.cra_driver_name = "gcm-aes-dthev2",
.cra_priority = 299,
.cra_flags = CRYPTO_ALG_TYPE_AEAD |
CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct dthe_tfm_ctx),
.cra_reqsize = sizeof(struct dthe_aes_req_ctx),
.cra_module = THIS_MODULE,
},
.op.do_one_request = dthe_aead_run,
}, /* GCM AES */
{
.base.init = dthe_aead_init_tfm,
.base.exit = dthe_aead_exit_tfm,
.base.setkey = dthe_ccm_aes_setkey,
.base.setauthsize = dthe_aead_setauthsize,
.base.maxauthsize = AES_BLOCK_SIZE,
.base.encrypt = dthe_aead_encrypt,
.base.decrypt = dthe_aead_decrypt,
.base.chunksize = AES_BLOCK_SIZE,
.base.ivsize = AES_IV_SIZE,
.base.base = {
.cra_name = "ccm(aes)",
.cra_driver_name = "ccm-aes-dthev2",
.cra_priority = 299,
.cra_flags = CRYPTO_ALG_TYPE_AEAD |
CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct dthe_tfm_ctx),
.cra_reqsize = sizeof(struct dthe_aes_req_ctx),
.cra_module = THIS_MODULE,
},
.op.do_one_request = dthe_aead_run,
}, /* CCM AES */
};
int dthe_register_aes_algs(void)
{
int ret;
ret = crypto_engine_register_skciphers(cipher_algs, ARRAY_SIZE(cipher_algs));
if (ret)
return ret;
ret = crypto_engine_register_aeads(aead_algs, ARRAY_SIZE(aead_algs));
if (ret)
crypto_engine_unregister_skciphers(cipher_algs, ARRAY_SIZE(cipher_algs));
return ret;
}
void dthe_unregister_aes_algs(void)
{
crypto_engine_unregister_skciphers(cipher_algs, ARRAY_SIZE(cipher_algs));
crypto_engine_unregister_aeads(aead_algs, ARRAY_SIZE(aead_algs));
}
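dthe_register_aes_algs() above follows the usual register-with-rollback pattern: register the first algorithm set, attempt the second, and unwind the first if the second fails. A toy userspace sketch of the same ordering, with `register_set()`/`unregister_set()` as hypothetical stand-ins for the crypto_engine calls:

```c
#include <assert.h>

static int reg_count;

static int register_set(int fail)
{
	if (fail)
		return -1;
	reg_count++;
	return 0;
}

static void unregister_set(void)
{
	reg_count--;
}

/* Mirrors dthe_register_aes_algs(): roll back the first set if the
 * second registration fails, so a failed probe leaves nothing behind. */
static int register_all(int fail_second)
{
	int ret;

	ret = register_set(0);		/* skciphers */
	if (ret)
		return ret;
	ret = register_set(fail_second); /* aeads */
	if (ret)
		unregister_set();	/* roll back skciphers */
	return ret;
}
```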


@@ -38,6 +38,8 @@ enum dthe_aes_mode {
DTHE_AES_CBC,
DTHE_AES_CTR,
DTHE_AES_XTS,
DTHE_AES_GCM,
DTHE_AES_CCM,
};
/* Driver specific struct definitions */
@@ -78,16 +80,22 @@ struct dthe_list {
* struct dthe_tfm_ctx - Transform ctx struct containing ctx for all sub-components of DTHE V2
* @dev_data: Device data struct pointer
* @keylen: AES key length
* @authsize: Authentication size for modes with authentication
* @key: AES key
* @aes_mode: AES mode
* @aead_fb: Fallback crypto aead handle
* @skcipher_fb: Fallback crypto skcipher handle for AES-XTS mode
*/
struct dthe_tfm_ctx {
struct dthe_data *dev_data;
unsigned int keylen;
unsigned int authsize;
u32 key[DTHE_MAX_KEYSIZE / sizeof(u32)];
enum dthe_aes_mode aes_mode;
union {
struct crypto_sync_aead *aead_fb;
struct crypto_sync_skcipher *skcipher_fb;
};
};
/**
@@ -98,7 +106,7 @@ struct dthe_tfm_ctx {
*/
struct dthe_aes_req_ctx {
int enc;
u8 padding[2 * AES_BLOCK_SIZE];
struct completion aes_compl;
};


@@ -16,39 +16,6 @@
#include <linux/types.h>
#include <crypto/aead.h>
#include <crypto/hash.h>
#include <crypto/skcipher.h>
struct cryptd_skcipher {
struct crypto_skcipher base;
};
/* alg_name should be algorithm to be cryptd-ed */
struct cryptd_skcipher *cryptd_alloc_skcipher(const char *alg_name,
u32 type, u32 mask);
struct crypto_skcipher *cryptd_skcipher_child(struct cryptd_skcipher *tfm);
/* Must be called without moving CPUs. */
bool cryptd_skcipher_queued(struct cryptd_skcipher *tfm);
void cryptd_free_skcipher(struct cryptd_skcipher *tfm);
struct cryptd_ahash {
struct crypto_ahash base;
};
static inline struct cryptd_ahash *__cryptd_ahash_cast(
struct crypto_ahash *tfm)
{
return (struct cryptd_ahash *)tfm;
}
/* alg_name should be algorithm to be cryptd-ed */
struct cryptd_ahash *cryptd_alloc_ahash(const char *alg_name,
u32 type, u32 mask);
struct crypto_shash *cryptd_ahash_child(struct cryptd_ahash *tfm);
struct shash_desc *cryptd_shash_desc(struct ahash_request *req);
/* Must be called without moving CPUs. */
bool cryptd_ahash_queued(struct cryptd_ahash *tfm);
void cryptd_free_ahash(struct cryptd_ahash *tfm);
struct cryptd_aead {
struct crypto_aead base;


@@ -801,6 +801,19 @@ static inline void print_hex_dump_debug(const char *prefix_str, int prefix_type,
}
#endif
#if defined(DEBUG)
#define print_hex_dump_devel(prefix_str, prefix_type, rowsize, \
groupsize, buf, len, ascii) \
print_hex_dump(KERN_DEBUG, prefix_str, prefix_type, rowsize, \
groupsize, buf, len, ascii)
#else
static inline void print_hex_dump_devel(const char *prefix_str, int prefix_type,
int rowsize, int groupsize,
const void *buf, size_t len, bool ascii)
{
}
#endif
/**
* print_hex_dump_bytes - shorthand form of print_hex_dump() with default params
* @prefix_str: string to prefix each line with;